\begin{document} \title[Limit as $s\to 0^+$ of fractional Orlicz-Sobolev spaces]{On the limit as $s\to 0^+$ of fractional Orlicz-Sobolev spaces}
\author {Angela Alberico, Andrea Cianchi, Lubo\v s Pick and Lenka Slav\'ikov\'a}
\address{Angela Alberico, Istituto per le Applicazioni del Calcolo ``M. Picone''\\ Consiglio Nazionale delle Ricerche \\ Via Pietro Castellino 111\\ 80131 Napoli\\ Italy} \email{[email protected]}
\address{Andrea Cianchi, Dipartimento di Matematica e Informatica \lq\lq U. Dini''\\ Universit\`a di Firenze\\ Viale Morgagni 67/a\\ 50134 Firenze\\ Italy} \email{[email protected]}
\address{Lubo\v s Pick, Department of Mathematical Analysis\\ Faculty of Mathematics and Physics\\ Charles University\\ Sokolovsk\'a~83\\ 186~75 Praha~8\\ Czech Republic} \email{[email protected]}
\address{Lenka Slav\'ikov\'a, Mathematical Institute, University of Bonn, Endenicher Allee 60, 53115 Bonn, Germany} \email{[email protected]}
\urladdr{} \subjclass[2000]{46E35, 46E30} \keywords{Fractional Orlicz--Sobolev space; limits of smoothness parameters}
\begin{abstract} An extended version of the Maz'ya-Shaposhnikova theorem on the limit as $s\to 0^+$ of the Gagliardo-Slobodeckij fractional seminorm is established in the Orlicz space setting. Our result holds in fractional Orlicz-Sobolev spaces associated with Young functions satisfying the $\Delta_2$-condition, and, as shown by counterexamples, it may fail if this condition is dropped. \end{abstract}
\maketitle
\section{Introduction and main results}
Pivotal instances of spaces of functions endowed with a non-integer order of smoothness are the Besov spaces, defined in terms of norms of differences, the Triebel--Lizorkin spaces, whose notion relies upon the Fourier transform, the Bessel potential spaces, based on representation formulas via potential operators, and the Gagliardo-Slobodeckij spaces, defined in terms of fractional difference quotients. Relations among these families of spaces are known -- see e.g. \cite[Remark~2.1.1]{ST} for a survey of results in this regard. It is also well known that, with the exception of the Bessel potential spaces, they do not agree, in general, with the classical integer-order Sobolev spaces when the order of smoothness is formally set to an integer.
\par In particular, this drawback affects the Gagliardo-Slobodeckij spaces $W^{s,p}(\mathbb R\sp n)$, which are defined, for $n \in \mathbb N$, $s \in (0,1)$ and $p \in [1, \infty)$, via a seminorm depending on an integral over $\mathbb R\sp n \times \mathbb R\sp n$ of an $s$-th-order difference quotient. However, some twenty years ago it was discovered that a suitably normalized Gagliardo-Slobodeckij seminorm in $W^{s,p}(\mathbb R\sp n)$ recovers, in the limit as $s \to 1^-$ or $s\to 0^+$, its integer-order counterpart.
\par The result was first established at the endpoint $1^-$ by Bourgain, Brezis and Mironescu in \cite{BBM1, BBM2}. In those papers it is shown that the seminorm in $W^{s,p}(\mathbb R\sp n)$ of a function $u$, times $(1-s)^{1/p}$, approaches the $L^p$ norm of $\nabla u$ as $s \to 1^-$ (up to a multiplicative constant depending only on $n$).
\par The problem concerning the opposite endpoint $0^+$ was solved by Maz'ya and Shaposhnikova. In \cite{MS} they proved that
\begin{equation}\label{feb1} \lim_{s\to 0^+} s \int_{\mathbb R\sp n} \int_{\mathbb R\sp n}\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)^p\frac{dx\,dy}{|x-y|^n} = \frac{2\, \omega_n}{np} \int_{\mathbb R\sp n} |u(x)|^p\; dx \end{equation} for every function $u$ decaying to $0$ near infinity and making the double integral finite for some $s\in (0,1)$. Here, $\omega_n$ denotes the Lebesgue measure of the unit ball in $\mathbb R\sp n$.
\par The present paper deals with a version of property \eqref{feb1} in the broader framework of fractional Orlicz-Sobolev spaces. These spaces extend the spaces $W^{s,p}(\mathbb R\sp n)$ in that the role of the power function $t^p$ is played by a more general Young function $A: [0, \infty) \to [0, \infty)$, namely a convex function vanishing at $0$. Specifically, we address the problem of the existence of \begin{equation}\label{feb2} \lim_{s\to 0^+} s\int_{\mathbb R^n} \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}}\,, \end{equation} and of its value in the affirmative case.
The ambient space for $u$ is $\bigcup _{s\in (0,1)}V^{s,A}_d(\mathbb R\sp n)$, where $V^{s,A}_d(\mathbb R\sp n)$ denotes the space of those measurable functions $u$ in $\mathbb R\sp n$ which render the double integral in \eqref{feb2} finite, and decay to $0$ near infinity, in the sense that $$|\{x \in \mathbb R\sp n: |u(x)|>t\}| <\infty \qquad \text{for every $t>0$.}$$ Here, $|E|$ stands for the Lebesgue measure of a set $E\subset \mathbb R\sp n$.
\par A partial result in this connection is contained in the recent contribution \cite{CMSV}, where bounds for the $\liminf_{s\to 0^+}$ and $\operatornamewithlimits{lim\,sup}_{s\to 0^+}$ of the expression under the limit in \eqref{feb2} are given for Young functions $A$ satisfying the $\Delta_2$-condition. Recall that this condition amounts to requiring that there exists a constant $c$ such that \begin{equation}\label{delta2} A(2t) \leq c A(t) \qquad \text{for $t \geq 0$.} \end{equation}
\par Our results provide a full answer to the relevant problem. We prove that, under the $\Delta_2$-condition on $A$, the limit in \eqref{feb2} does exist, and equals the integral of a function of $|u|$ over $\mathbb R\sp n$. Moreover, we show that the result can fail if the $\Delta_2$-condition is dropped. Interestingly, the function of $|u|$ appearing in the integral obtained in the limit is not $A$, but rather the Young function $\overline A$ associated with $A$ by the formula \begin{equation}\label{Abar} \overline A(t) = \int _0^t\frac {A(\tau)}\tau \, d\tau \qquad \text{for $t\geq0$\,.} \end{equation} Notice that $A$ and $\overline A$ are equivalent as Young functions, since $A(t/2) \leq \overline A(t) \leq A(t)$ for $t \geq 0$, owing to the monotonicity of $A(t)$ and $A(t)/t$.
\begin{theorem}\label{T:lim0} Let $n \in \mathbb N$ and let $A$ be a Young function satisfying the $\Delta_2$-condition. Assume that $u \in \bigcup_{s\in (0,1)} V^{s,A}_d(\mathbb R\sp n)$. Then \begin{equation}\label{jan1} \lim_{s\to 0^+} s\int_{\mathbb R^n} \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} = \frac{2\, \omega_n}n \int_{\mathbb R\sp n} \overline A(|u(x)|)\; dx\,. \end{equation} \end{theorem}
Plainly, equation \eqref{jan1} recovers \eqref{feb1} when $A(t)=t^p$ for some $p \geq 1$, inasmuch as $\overline A(t)= t^p/p$ in this case.
\par The indispensability of the $\Delta_2$-condition for the function $A$ is demonstrated via the next result.
\begin{theorem}\label{counterex} Let $n \in \mathbb N$. There exist Young functions $A$, which do not satisfy the $\Delta_2$-condition, and corresponding functions $u:\mathbb R\sp n \to\mathbb R$ such that $u \in V^{s,A}_d(\mathbb R\sp n)$ for every $s\in(0,1)$, \begin{equation}\label{A2} \int_{\mathbb R\sp n} \overline A(|u(x)|)\;dx \leq \int_{\mathbb R\sp n} A(|u(x)|)\;dx <\infty\,, \end{equation} but \begin{equation}\label{A1} \lim_{s\to 0^+}\,s\int_{\mathbb R\sp n} \int_{\mathbb R\sp n} A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\right)\; \frac{dx\, dy}{|x-y|^n} =\infty\,. \end{equation} \end{theorem}
Incidentally, let us mention that an analogue of the Bourgain-Brezis-Mironescu theorem on the limit as $s\to 1^-$ for fractional Orlicz-Sobolev spaces built upon Young functions satisfying the $\Delta_2$-condition can be found in \cite{BonderSalort}. Such a condition is removed in a version of this result offered in \cite{ACPS_limit1}.
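As a simple illustration of Theorem \ref{T:lim0} beyond power functions, consider, for instance, the Young function $A(t)=t^p+t^q$, with exponents $1\le p\le q<\infty$. This function satisfies the $\Delta_2$-condition, since $A(2t)\le 2^q A(t)$ for $t\ge0$, and $\overline A(t)=\frac{t^p}{p}+\frac{t^q}{q}$. Hence, equation \eqref{jan1} takes the form
\begin{equation*}
\lim_{s\to 0^+} s\int_{\mathbb R\sp n} \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} = \frac{2\, \omega_n}{n} \int_{\mathbb R\sp n} \left(\frac{|u(x)|^p}{p}+\frac{|u(x)|^q}{q}\right) dx
\end{equation*}
for every $u \in \bigcup_{s\in (0,1)} V^{s,A}_d(\mathbb R\sp n)$.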
Further properties and applications of fractional Orlicz-Sobolev spaces are the subject of \cite{ACPS_frac, BO, NBS, Sa}.
\section{Proof of Theorem \ref{T:lim0}}
Our approach to Theorem \ref{T:lim0} is related to that of \cite{MS}, yet calls into play specific Orlicz space results and techniques. In particular, it makes critical use of a Hardy type inequality for functions in $V^{s,A}_d(\mathbb R\sp n)$, with $s\in (0,1)$, recently established in \cite[Theorem 5.1]{ACPS_frac}. This inequality can be stated as follows.
\\ Given a Young function $A$, denote by $a: [0, \infty) \to [0, \infty)$ the left-continuous non-decreasing function such that \begin{equation*} A(t)=\int_0^{t}a(\tau) d\tau \qquad\text{for $t\geq 0$\,.} \end{equation*} Assume that \begin{equation}\label{E:0'} \int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{s}{n-s}}\;dt = \infty \end{equation} and \begin{equation}\label{E:0''} \int_{0}\left(\frac{t}{A(t)}\right)^{\frac{s}{n-s}}\;dt < \infty\,. \end{equation} Call $B$ the Young function defined by \begin{equation*} B(t)=\int_0^{t}b(\tau) d\tau \qquad\text{for $t\geq 0$,} \end{equation*} where the (generalized) left-continuous inverse of the function $b$ obeys \begin{equation}\label{E:2} b^{\,-1}(r) = \left(\int_{a^{-1}(r)}^{\infty} \left(\int_0^t\left(\frac{1}{a(\varrho)}\right)^{\frac{s}{n-s}}\,d\varrho\right)^{-\frac{n}{s}}\frac{dt}{a(t)^{\frac{n}{n-s}}} \right)^{\frac{s}{s-n}} \qquad\text{for $r\ge0$}\,.
\end{equation} Then, there exists a constant $C=C(n, s)$ such that \begin{equation}\label{jan5} \int_{\mathbb R^n}{B}\left(\frac{|u(x)|}{|x|^s}\right)\;dx \le (1-s) \int_{\mathbb R^n} \int_{\mathbb R^n} A\left(C\frac{|u(x)-u(y)|}{|x-y|^s}\right)\,\frac{dx\, dy}{|x-y|^n} \end{equation} for every function $u \in V_d^{s,A} (\mathbb R\sp n)$. Moreover, the constant $C$ is uniformly bounded in $s$ if $s$ is bounded away from $1$.
\begin{proof}[Proof of Theorem \ref{T:lim0}] Inasmuch as $A$ satisfies the $\Delta _2$-condition, its upper Matuszewska-Orlicz index $I(A)$, defined as \begin{equation}\label{index}I(A) = \lim_{\lambda \to \infty} \frac{\log \Big(\sup _{t>0}\frac{A(\lambda t)}{A(t)}\Big)}{\log \lambda}\,, \end{equation} is finite (for instance, $I(A)=p$ if $A(t)=t^p$ for some $p\ge1$). A standard (and easily verified) consequence of this fact is that there exists a positive constant $C=C(A)$ such that \begin{equation}\label{E:31} A(\lambda t)\le C \lambda^{I(A)+1}A(t)\qquad\hbox{for $t\ge0$ and $\lambda\ge1$.} \end{equation} Consequently, there exists $s_0\in (0, 1)$ such that conditions \eqref{E:0'} and \eqref{E:0''} are fulfilled if $s\in (0, s_0)$. Hence, inequality \eqref{jan5} holds for $s\in (0, s_0)$. \\ On the other hand, $I(A) < \frac ns$ provided that $s<\frac{n}{I(A)}$. Hence, \cite[Propositions 5.1 and 5.2]{cianchi_Ibero} ensure that the function $B$ is equivalent to $A$ if $s<\frac{n}{I(A)}$. Namely, there exist constants $c_2>c_1>0$ such that $A(c_1t) \leq B (t) \leq A(c_2 t)$ for $t \geq 0$. Set $s_1 = \min \{s_0, \frac{n}{I(A)}\}$. As a consequence of inequality \eqref{jan5}, of the equivalence of $A$ and $B$, of the $\Delta_2$-condition for $A$, and of the inequality $\overline{A}\leq A$, if $u \in V_d^{s,A} (\mathbb R\sp n)$ for some $s \in (0, s_1)$, then \begin{equation}\label{jan6} \int_{\mathbb R^n}{\overline A}\left(\frac{ |u(x)|}{\lambda|x|^s}\right)\;dx \leq \int_{\mathbb R^n}{A}\left(\frac{ |u(x)|}{\lambda|x|^s}\right)\;dx < \infty \end{equation} for every $\lambda >0$.\\ We begin by establishing a lower bound for the $\liminf _{s\to 0^+}$ of the expression on the left-hand side of equation \eqref{jan1}. One has that \begin{align}\label{E:2'} \int_{\mathbb R^n} \int_{\{|x-y|>2|x|\}} A\left(\frac{|u(x)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} & = \frac{\omega_n}{n}\int_{\mathbb R\sp n} \int_{2|x|}^{\infty}A\left(\frac{ |u(x)|}{r^{s}}\right)\frac{dr}{r}\;dx = \frac{\omega_n}{ns} \int_{\mathbb R\sp n} \overline{A}\left(\frac{ |u(x)|}{2^s|x|^{s}}\right)\;dx\,. \end{align} Fix $\varepsilon>0$. Owing to the convexity of $A$, \begin{align}\label{E:3} \int_{\mathbb R^n}\int_{\{|x-y|>2|x|\}} & A\left(\frac{|u(x)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \le \frac{1}{1+\varepsilon}\int_{\mathbb R\sp n} \int_{\{|x-y|>2|x|\}}A\left((1+\varepsilon)\frac{ |u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \quad + \frac{\varepsilon}{1+\varepsilon} \int_{\mathbb R\sp n} \int_{\{|x-y|>2|x|\}}A\left(\frac{1+\varepsilon}{\varepsilon}\frac{ |u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} = I_1 + I_2. \end{align} Consider the integral $I_2$. If $|x-y| > 2|x|$, then $\frac{2}{3}|y|\le|x-y|\le2|y|$.
Therefore, \begin{align}\label{E:4} I_2 & \le \frac{\varepsilon}{1+\varepsilon} \int_{\mathbb R\sp n}\int_{\{|x-y|>2|x|\}}A\left(\frac{1+\varepsilon}{\varepsilon} \left(\frac{3}{2}\right)^{s}\frac{|u(y)|}{|y|^{s}}\right)\left(\frac{3}{2}\right)^{n}\frac{dy}{|y|^{n}}\;dx \\ \nonumber & = \frac{\varepsilon}{1+\varepsilon}\left(\frac{3}{2}\right)^{n} \int_{\mathbb R\sp n}\frac{1}{|y|^{n}} A\left(\frac{1+\varepsilon}{\varepsilon}\left(\frac{3}{2}\right)^{s}\frac{|u(y)|}{|y|^{s}}\right) \left(\int_{|x-y|\ge2|x|}\;dx\right)\;dy \\ & \le \frac{\varepsilon \omega_n}{1+\varepsilon }\int_{\mathbb R\sp n}A\left(\frac{1+\varepsilon}{\varepsilon} \left(\frac{3}{2}\right)^{s}\frac{|u(y)|}{|y|^{s}}\right)\;dy\,. \nonumber \end{align} Note that the last inequality holds since, for each fixed $y$, the inequality $|x-y|>2|x|$ is equivalent to $|x+\frac{y}{3}|<\frac{2}{3}|y|$, so that the set $\{|x-y|>2|x|\}$ agrees with the ball centered at $-\frac{1}{3}y$, with radius $\frac{2}{3}|y|$. \\ In order to estimate the integral $I_1$, observe that $$\int_{\{|x-y|>2|x|\}}A\left((1+\varepsilon)\frac{ |u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} = \int_{\{|x-y|>2|y|\}}A\left((1+\varepsilon)\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}}\,.$$ Furthermore, $\{|x-y|>2|x|\} \cap \{|x-y|>2|y| \}= \emptyset$, since if there existed $x,y$ such that $|x-y|>2|x|$ and $|x-y|>2|y|$, then $|x-y| \le |x|+|y| < \frac{|x-y|}{2}+\frac{|x-y|}{2}=|x-y|$, a contradiction. Thus, \begin{align}\label{E:I-1} I_1 \le \frac{1}{2(1+\varepsilon)}\int_{\mathbb R\sp n} \int_{\mathbb R\sp n}A\left((1+\varepsilon)\frac{ |u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}}. \end{align} Consequently, \begin{align}\label{E:page-10} \frac{s}{1+\varepsilon} & \int_{\mathbb R\sp n} \int_{\mathbb R\sp n}A\left((1+\varepsilon)\frac{ |u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \ge 2sI_1 \ge \frac{2\,\omega_n}{n} \int_{\mathbb R\sp n}\overline{A}\left(\frac{|u(x)|}{2^s|x|^s}\right)\;dx - 2sI_2 \\ \nonumber & \ge \frac{2\,\omega_n}{n} \int_{\mathbb R\sp n}\overline{A}\left(\frac{|u(x)|}{2^s|x|^s}\right)\;dx - \frac{2s\varepsilon \omega_n}{1+\varepsilon }\int_{\mathbb R\sp n} A\left(\frac{1+\varepsilon}{\varepsilon} \left(\frac{3}{2}\right)^{s}\frac{|u(y)|}{|y|^s}\right)\;dy\,, \end{align} where the first inequality follows from \eqref{E:I-1}, the second one is due to \eqref{E:3} and \eqref{E:2'}, and the third one to \eqref{E:4}.
Since $A(t)\le \overline{A}(2t)$ for $t\ge0$, inequality \eqref{E:31} implies that \begin{equation}\label{E:33} A(\lambda t)\le C\lambda^{I(A)+1}\,\overline{A}(2t)\qquad\text{if $t\ge0$ and $\lambda\ge1$\,.} \end{equation} From inequalities \eqref{E:page-10} and \eqref{E:33} one deduces that \begin{align}\label{jan7} & \frac{s}{1+\varepsilon}\int_{\mathbb R^n}\int_{\mathbb R^n}A\left((1+\varepsilon)\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \ge \frac{2\, \omega_n}{n}\int_{\mathbb R^n}\overline{A}\left(\frac{|u(x)|}{2^s|x|^{s}}\right)\;dx - \frac{2Cs\varepsilon \omega_n}{1+\varepsilon} \left(\frac{2(1+\varepsilon)}{\varepsilon}3^s\right)^{I(A)+1} \int_{\mathbb R^n}\overline A\left(\frac{|u(y)|}{2^s|y|^s}\right)\;dy \\ \nonumber & = \frac{2 \,\omega_n}{n} \left[1 - \frac{Cns \varepsilon }{1+\varepsilon} \left(\frac{2(1+\varepsilon)}{\varepsilon}3^s\right)^{I(A)+1} \right]\int_{\mathbb R^n}\overline{A}\left(\frac{|u(x)|}{2^s|x|^{s}}\right)\;dx\,. \end{align} Thus, there exists $s _2 = s_2(A, n, \varepsilon) \in (0, s _1)$ such that, if $s \in (0, s_2)$, then \begin{equation}\label{jan8} \frac{s}{1+\varepsilon}\int_{\mathbb R^n}\int_{\mathbb R^n}A\left((1+\varepsilon)\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \geq \frac{2\, \omega_n}{n}(1-\varepsilon)\int_{\mathbb R^n}\overline{A}\left(\frac{|u(x)|}{2^s|x|^{s}}\right)\;dx\,. \end{equation} On replacing $u$ by $u/(1+\varepsilon)$ in inequality \eqref{jan8}, one can infer, via Fatou's lemma, that \begin{equation}\label{jan9} \liminf_{s\to0^+} s\int_{\mathbb R^n} \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \geq \frac{2\, \omega_n}{n}(1-\varepsilon^{2})\int_{\mathbb R^n}\overline{A}\left(\frac{|u(x)|}{1+\varepsilon}\right)\;dx\,. \end{equation} By the arbitrariness of $\varepsilon$, \begin{align}\label{E:6} \liminf_{s\to0^+} s & \int_{\mathbb R\sp n} \int_{\mathbb R\sp n}A\left(\frac{ |u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \ge \frac{2\, \omega_n}{n}\int_{\mathbb R\sp n}\overline{A}\left(|u(x)|\right)\; dx\,. \end{align} In particular, inequality \eqref{E:6} implies that, if the integral on the right-hand side diverges, then equation \eqref{jan1} certainly holds. Thus, in what follows, we may assume that \begin{equation}\label{jan100} \int_{\mathbb R\sp n}\overline{A}\left(|u(x)|\right)\; dx < \infty\, . \end{equation} Next, we provide an upper bound for the $\operatornamewithlimits{lim\,sup} _{s\to 0^+}$ of the expression on the left-hand side of equation \eqref{jan1}.
One has that \begin{align}\label{E:8} & s \int_{\mathbb R^n} \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & = s\int_{\mathbb R^n} \int_{\{|y|\ge|x|\}} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} + s\int_{\mathbb R^n} \int_{\{|y|<|x|\}} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & = 2s\int_{\mathbb R^n} \int_{\{|y|\ge|x|\}} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & = 2s \int_{\mathbb R^n} \int_{\{|y|\geq 2|x|\}} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} + 2s \int_{\mathbb R^n} \int_{\{|x|\le|y|< 2|x|\}} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \le \frac{2s}{1+\varepsilon}\int_{\mathbb R^n} \int_{\{|y|\ge2|x|\}} A\left((1+\varepsilon)\frac{|u(x)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} + \frac{2s\, \varepsilon}{1+\varepsilon}\int_{\mathbb R^n} \int_{\{|y|\ge2|x|\}} A\left(\frac{1+\varepsilon}{\varepsilon} \frac{|u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \quad + 2s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|\}} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & = J_1+J_2+J_3\,, \end{align} where the inequality holds since $A$ is convex. Let us estimate $J_1$ first. To this purpose, notice that $\{|y|\ge2|x|\} \subset \{|x-y|\ge |x|\}$, since $|x-y| \ge |y|-|x| \ge 2|x|-|x| = |x|$. Thus, \begin{align}\label{E:page-2} \int_{\{|y|\ge2|x|\}} A\left((1+\varepsilon)\frac{|u(x)|}{|x-y|^s}\right)\frac{dy}{|x-y|^{n}} & \le \int_{\{|x-y|\ge |x|\}} A\left((1+\varepsilon)\frac{|u(x)|}{|x-y|^s}\right)\frac{dy}{|x-y|^{n}} \\ \nonumber & = \frac{\omega_n}{n}\int_{|x|}^{\infty}A\left((1+\varepsilon)\frac{|u(x)|}{r^s}\right)\frac{dr}{r}\, \end{align} for every $x\in\mathbb R^{n}$. A change of variables tells us that \begin{equation}\label{feb40} \int_{t}^{\infty}A\left(\frac{(1+\varepsilon)\varrho}{r^s}\right)\frac{dr}{r} = \frac{1}{s}\int_{0}^{\frac{(1+\varepsilon)\varrho}{t^s}}\frac{A(\tau)}{\tau}\,d\tau = \frac{1}{s}\overline{A}\left(\frac{(1+\varepsilon)\varrho}{t^s}\right)\qquad \text{for $t, \varrho \ge0$\,.} \end{equation} Thanks to equations \eqref{E:page-2} and \eqref{feb40}, \begin{equation}\label{feb12} J_1 \le \frac{2\,\omega_n}{n(1+\varepsilon)}\int_{\mathbb R^{n}}\overline{A}\left((1+\varepsilon)\frac{|u(x)|}{|x|^s}\right)\;dx\,. \end{equation} As far as the term $J_2$ is concerned, observe that, if $|y|\ge2|x|$, then $|x-y|\ge \frac 12|y|$. Therefore, an application of Fubini's theorem tells us that \begin{align}\label{E:J-2} J_2 & \le \frac{2^{n+1}s \varepsilon }{1+\varepsilon}\int_{\mathbb R^n}A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s} \frac{|u(y)|}{|y|^s}\right)\left(\int_{\{|x|\leq\frac{|y|}{2}\}}\;dx\right)\frac{dy}{|y|^n} = \frac{2\omega_n s \varepsilon }{1+\varepsilon}\int_{\mathbb R^n}A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s} \frac{|u(y)|}{|y|^s}\right)\;dy.
\end{align} In order to provide an upper bound for $J_3$, note that, given $r>3$, \begin{align}\label{E:J-3} J_3 & = 2s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|<r\}}A\left( \frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \quad + 2s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|\ge r\}}A\left( \frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & = J_{31}+J_{32}. \end{align} Since we are assuming that $u\in\bigcup_{s\in(0,1)}V_d^{s,A}(\mathbb R^{n})$, there exists $s_3\in(0,1)$ such that $u\in V_d^{s_3,A}(\mathbb R^{n})$. Let $s \in (0, s_3)$. Then \begin{equation}\label{feb11} J_{31} \le 2s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|<r\}}A\left( \frac{|u(x)-u(y)|}{|x-y|^{ s_3}}r^{s_3-s}\right)\frac{dx\,dy}{|x-y|^{n}}, \end{equation} since $s_3-s>0$ and $u\in V^{s_3,A}_d(\mathbb R^{n})$. If $|x|\le|y|<2|x|$ and $|x-y|\ge r$, then \begin{equation*} 3|x|=2|x|+|x|\ge|y|+|x|\ge|x-y|\ge r, \end{equation*} whence $|x|\ge \frac r3$, and $|x|\le|y|<2|x|\le2|y|$. Consequently, \begin{equation*} 3|y|=2|y|+|y|\ge2|x|+|y| \ge|x-y|\ge r, \end{equation*} and hence $|y|\ge\frac r3$ as well. Therefore, owing to the convexity of the function $A$, \begin{align}\label{E:J-32} J_{32}& \le s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|\ge r\}}A\left( \frac{2|u(x)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} + s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|\ge r\}}A\left( \frac{2|u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \le s\int_{\{|x|\geq\frac{r}{3}\}} \left(\int_{\{|x-y|\ge r\}}A\left( \frac{2|u(x)|}{|x-y|^s}\right)\frac{dy}{|x-y|^{n}}\right)\;dx + s\int_{\{|y|\geq\frac{r}{3}\}} \left(\int_{\{|x-y|\ge r\}}A\left( \frac{2|u(y)|}{|x-y|^s}\right)\frac{dx}{|x-y|^{n}}\right)\;dy \\ \nonumber & = 2s\int_{\{|x|\geq\frac{r}{3}\}} \left(\int_{\{|x-y|\ge r\}}A\left( \frac{2|u(x)|}{|x-y|^s}\right)\frac{dy}{|x-y|^{n}}\right)\;dx = \frac{2s\omega_n}{n}\int_{\{|x|\geq\frac{r}{3}\}} \left(\int_{r}^{\infty}A\left( \frac{2|u(x)|}{\varrho ^s}\right)\frac{d \varrho}{\varrho}\right)\;dx \\ \nonumber & = \frac{2\omega_n}{n}\int_{\{|x|\geq\frac{r}{3}\}} \overline{A}\left( \frac{2|u(x)|}{r^s}\right)\;dx. \end{align} Since we are assuming that $r>3$, the latter chain implies that \begin{equation*} J_{32} \le \frac{2 \omega_n}{n}\int_{\{|x|\geq\frac{r}{3}\}} \overline{A}\left( 2|u(x)|\right)\;dx \qquad\text{for every $s\in(0,1)$.} \end{equation*} Consequently, if $r$ is large enough, then \begin{equation}\label{feb10} J_{32}<\varepsilon \end{equation} for every $s\in(0,1)$. Combining equations \eqref{E:8}, \eqref{feb12}--\eqref{feb11} and \eqref{feb10} implies that, for every $s\in (0,1)$, \begin{align}\label{E:88} s\int_{\mathbb R^n} & \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \le \frac{2\omega_n}{n(1+\varepsilon)}\int_{\mathbb R^{n}}\overline{A}\left((1+\varepsilon)\frac{|u(x)|}{|x|^s}\right)\;dx +\frac{2\omega_n s \varepsilon} {1+\varepsilon}\int_{\mathbb R^n}A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s} \frac{|u(y)|}{|y|^s}\right)\;dy \\ \nonumber & \quad + 2s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|<r\}}A\left( \frac{|u(x)-u(y)|}{|x-y|^{s_3}}r^{s_3-s}\right)\frac{dx\,dy}{|x-y|^{n}} + \varepsilon. \end{align} Passage to the limit as $s\to 0^+$ in inequality \eqref{E:88} can be performed as follows.
If $|y|\le 2$, then the function $(0,1) \ni s\mapsto A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s}\frac{|u(y)|}{|y|^s}\right)$ is non-decreasing. Thus, \begin{equation*} A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s}\frac{|u(y)|}{|y|^s}\right) \le A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s_3}\frac{|u(y)|}{|y|^{s_3}}\right) \qquad\text{for every $s\in(0,{s_3})$,} \end{equation*} and, since we are assuming that $u\in V^{s_3,A}_d(\mathbb R\sp n)$, we have that $$\int_{\mathbb R\sp n} A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s_3}\frac{|u(y)|}{|y|^{s_3}}\right)\; dy< \infty, $$ owing to \eqref{jan6}. Inasmuch as \begin{equation*} \lim_{s\to0^+} A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s}\frac{|u(y)|}{|y|^s}\right) = A\left(\frac{1+\varepsilon}{\varepsilon} |u(y)|\right) \qquad \text{for $y\neq 0$,} \end{equation*} the dominated convergence theorem ensures that \begin{equation}\label{jan101} \lim_{s\to0^+} \int_{\{|y|\le 2\}}A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s}\frac{|u(y)|}{|y|^s}\right)\;dy = \int_{\{|y|\le 2\}} A\left(\frac{1+\varepsilon}{\varepsilon} |u(y)|\right)\;dy<\infty. \end{equation} On the other hand, if $|y|> 2$, then the function $(0,1) \ni s\mapsto A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s}\frac{|u(y)|}{|y|^s}\right)$ is non-increasing. Consequently, the monotone convergence theorem yields \begin{equation}\label{jan102} \lim_{s\to0^+} \int_{\{|y|> 2\}}A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s}\frac{|u(y)|}{|y|^s}\right)\;dy = \int_{\{|y|> 2\}}A\left(\frac{1+\varepsilon}{\varepsilon} |u(y)|\right)\;dy<\infty, \end{equation} where the finiteness of the last integral is due to inequality \eqref{E:33} and assumption \eqref{jan100}. Equations \eqref{jan101} and \eqref{jan102} imply that \begin{equation}\label{E:second-term} \lim_{s\to0^+}\frac{2\omega_n s\varepsilon }{1+\varepsilon}\int_{\mathbb R\sp n}A\left(\frac{1+\varepsilon}{\varepsilon} 2^{s} \frac{|u(y)|}{|y|^s}\right)\;dy =0. \end{equation} An argument analogous to that of the proofs of equations \eqref{jan101} and \eqref{jan102} yields \begin{equation}\label{feb41} \lim_{s\to0^+} \int_{\mathbb R^{n}}\overline{A}\left((1+\varepsilon)\frac{|u(x)|}{|x|^s}\right)\;dx = \int_{\mathbb R\sp n} \overline{A}\left( (1+\varepsilon)|u(x)|\right)\;dx. \end{equation} Next, for every $s\in(0,{s_3})$, \begin{align*} \int_{\mathbb R^n} &\int_{\{|x|\le|y|<2|x|,|x-y|<r\}}A\left(\frac{|u(x)-u(y)|}{|x-y|^{s_3}}r^{s_3-s}\right)\frac{dx\,dy}{|x-y|^{n}} \\ &\le \int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|<r\}}A\left(\frac{|u(x)-u(y)|}{|x-y|^{s_3}}r^{s_3}\right)\frac{dx\,dy}{|x-y|^{n}}< \infty. \end{align*} Observe that the convergence of the last integral is due to the fact that $u\in V^{s_3,A}_d(\mathbb R\sp n)$ and $A$ satisfies the $\Delta_2$-condition. Therefore, \begin{equation}\label{E:third-term} \lim_{s\to0^+}2s\int_{\mathbb R^n} \int_{\{|x|\le|y|<2|x|,|x-y|<r\}}A\left( \frac{|u(x)-u(y)|}{|x-y|^{s_3}}r^{{s_3}-s}\right)\frac{dx\,dy}{|x-y|^{n}} =0. \end{equation} Thanks to equations \eqref{E:88}, \eqref{E:second-term}, \eqref{feb41} and \eqref{E:third-term},
\begin{equation*} \operatornamewithlimits{lim\,sup}_{s\to0^+}s\int_{\mathbb R^n} \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \le \frac{2\omega_n}{n(1+\varepsilon)}\int_{\mathbb R\sp n} \overline{A}\left( (1+\varepsilon)|u(x)|\right)\,dx + \varepsilon. \end{equation*} Hence, owing to the arbitrariness of $\varepsilon$, \begin{equation}\label{jan10} \operatornamewithlimits{lim\,sup}_{s\to0^+}s\int_{\mathbb R^n} \int_{\mathbb R\sp n} A\left(\frac{|u(x)-u(y)|}{|x-y|^s}\right)\frac{dx\,dy}{|x-y|^{n}} \le \frac{2\omega_n}{n}\int_{\mathbb R\sp n} \overline{A}\left( |u(x)|\right)\,dx. \end{equation} Coupling equations \eqref{E:6} and \eqref{jan10} yields \eqref{jan1}. \end{proof}
\section{Proof of Theorem \ref{counterex}}
Functions $A$ and $u$ as in the statement of Theorem \ref{counterex} are explicitly exhibited in our proof.
\begin{proof}[Proof of Theorem \ref{counterex}] Let $\gamma>1$ and let $A$ be any finite-valued Young function such that \begin{equation*} A(t)= e^{-\frac1{t^{\gamma}}} \quad \hbox{for $t \in (0, \tfrac 1{2e})$\,.} \end{equation*} Note that functions $A$ enjoying this property do exist, since $\lim_{t\to0^+}e^{-\frac1{t^{\gamma}}}=0$ and the function $e^{-\frac1{t^{\gamma}}}$ is convex on the interval $\big(0,\big(\frac{\gamma}{\gamma+1}\big)^{\frac{1}{\gamma}}\big)$. Observe also that any such function $A$ fails the $\Delta_2$-condition, inasmuch as, for $t\in(0,\tfrac1{4e})$, \begin{equation*} \frac{A(2t)}{A(t)} = e^{(1-2^{-\gamma})\frac1{t^{\gamma}}}, \end{equation*} which tends to $\infty$ as $t\to0^+$. The fact that $A$ is a Young function ensures that, for every $t_0 >0$, \begin{equation}\label{feb20} A(t) \leq t \tfrac{A(t_0)}{t_0} \quad \text{for $t \in [0, t_0]$.} \end{equation} Also, one can verify that, for each $s \in (0,1)$, there exists $\overline{t}=\overline{t}(s,n) \in (0, \tfrac 1{2e})$ such that the function \begin{equation}\label{feb22} (0, \overline t) \ni t \mapsto \frac{A(t^{1-s})}{t^{\gamma}}\quad \text{is increasing.} \end{equation} Let $v\colon\mathbb R^n\to \mathbb R$ be the function defined as \begin{equation}\label{A7} v(x) = \begin{cases} \displaystyle\frac {x_1}{|x| \log^{\frac{1}{\gamma}} {(\kappa+|x|)} } &\text{if $|x|\geq 1$} \\ \\ \displaystyle \frac {x_1}{\log^{\frac{1}{\gamma}} {(\kappa+1)} } &\text{if $|x| < 1$,} \end{cases} \end{equation} where $x=(x_1, \dots , x_n)$ and $\kappa>1$ is a sufficiently large constant, to be chosen later in such a way that the arguments of the function $A$ in the various expressions depending on $v$ below belong to the interval $(0, \tfrac 1{2e})$ or $(0, \overline t)$. \\ Notice that the function $v$ is Lipschitz continuous in $\mathbb R\sp n$ and continuously differentiable in $\{|x|>1\}$, and \begin{equation}\label{A50} |\nabla v (x)| \le \frac{\kappa}{|x| \log^{\frac{1}{\gamma}}{(\kappa+ |x|)}} \qquad \hbox{if}\; |x|>1\,. \end{equation} Moreover, if $x,y \in \mathbb R\sp n$ are such that $|(1-\tau)x+\tau y|>1$ for $\tau\in[0,1]$, then there exists $\tau_0 \in [0,1]$ satisfying \begin{equation}\label{A51} |v(x)-v(y)| \le \frac{3|x-y|}{|(1-\tau_0)x+\tau_0 y|\log^{\frac{1}{\gamma}}{(\kappa + |(1-\tau_0)x+\tau_0 y|)} }\,. \end{equation} Given $\lambda>1$, choose $\kappa$ so large that $\frac 1 {\lambda \log^{\frac{1}{\gamma}}{(\kappa+1)}}<\frac 1{2e}$.
Therefore, there exists a constant $C$ such that \begin{align}\label{A52} &\int_{\mathbb R\sp n} A\left( \frac{|v(x)|}{\lambda}\right)\; dx \leq C + \int_{|x|\ge1} \frac {dx}{ (\kappa+|x|)^{\lambda^{\gamma}}} <\infty\,.\nonumber \end{align} Now, we claim that \begin{equation}\label{A53} \int_{\mathbb R^n} \int_{\mathbb R^n} A\left(\frac{|v(x) - v(y)|}{\lambda |x-y|^s}\right) \; \frac{dx\,dy}{|x-y|^{n}} < \infty \end{equation} for every $s\in (0,1)$ and $\lambda \geq 1$. Fix any $s\in (0,1)$. To verify equation \eqref{A53}, observe that \begin{align}\label{A54} &\int_{\mathbb R^n} \int_{\mathbb R^n} A\left(\frac{|v(x) - v(y)|}{|x-y|^s}\right) \; \frac{dx\,dy}{|x-y|^{n}} = \int \int_{\{|x|\leq 1,|y|\leq 1\}}A\left(\frac{|v(x) - v(y)|}{|x-y|^s}\right) \; \frac{dx\,dy}{|x-y|^{n}} \\ & \quad + 2 \int \int_{\{|x|<1,|y|> 1\}}A\left(\frac{|v(x) - v(y)|}{|x-y|^s}\right) \; \frac{dx\,dy}{|x-y|^{n}} + \int \int_{\{|x|> 1,|y|> 1\}}A\left(\frac{|v(x) - v(y)|}{|x-y|^s}\right) \; \frac{dx\,dy}{|x-y|^{n}}\nonumber \\ & =J_1+ J_2 + J_3\,. \nonumber \end{align} Owing to the Lipschitz continuity of $v$ and to property \eqref{feb20}, if $E\subset \mathbb R^{n}\times\mathbb R^{n}$ is a bounded set, then there exist positive constants $C$ and $C'$ such that \begin{align}\label{E:12} &\int \int_{E} A\left(\frac{|v(x) - v(y)|}{|x-y|^s}\right) \; \frac{dx\,dy}{|x-y|^{n}} \le \int \int_{E}A\left(C|x-y|^{1-s}\right) \; \frac{dx\,dy}{|x-y|^{n}} \le C' \int \int_{E}\frac{dx\,dy}{|x-y|^{n-1+s}}<\infty. \end{align} Hence, \begin{equation}\label{E:13} J_1<\infty. \end{equation} Next, let us split $J_2$ as \begin{align}\label{A14} J_2 &=\int\int_{\{|x|\leq 1,|y|>1, |x-y|>2\}} A \left( \frac{|v(x) - v(y)|}{|x-y|^s}\right)\; \frac{dx\, dy}{|x-y|^n} \\ & \quad + \int\int_{\{|x|\leq 1, |y|>1, |x-y|\leq 2\}} A \left( \frac{|v(x) - v(y)|}{|x-y|^s}\right)\; \frac{dx\, dy}{|x-y|^n} = J_{21} + J_{22}\,.\nonumber \end{align} Consider $J_{21}$ first. If $|x|\leq 1$ and $|x-y|>2$, then $$|x|+|y| \leq |x| + |y-x| +|x| = 2|x| + |y-x| \leq 2 + |y-x| \leq |x-y|+|y-x| =2|x-y|\,.$$ Thus, $$ |x-y|\leq |x|+|y|\leq 2|x-y|\,.$$ Hence, there exist positive constants $C,C',C''$ such that \begin{align}\label{A17} J_{21}&\leq \int\int_{\{|x|\leq 1,|y|>1, |x-y|>2\}} A \left( C \frac{|v(x)|+ |v(y)|}{(|x| +|y|)^s}\right)\; \frac{dx\, dy}{(|x|+|y|)^{n}} \\ &\leq \int\int_{\{|x|\leq 1,|y|>1\}} A \left( 2 C \frac{|v(x)|}{ |y|^s}\right)\; \frac{dx\, dy}{|y|^{n}}+ \int\int_{\{|x|\leq 1,|y|>1\}} A \left( 2 C \frac{|v(y)|}{ |y|^s}\right)\; \frac{dx\, dy}{|y|^{n}}\nonumber \\ &\leq 2 \int\int_{\{|x|\leq 1,|y|>1\}} A\left( \frac {2C}{\log^{\frac{1}{\gamma}}( \kappa +1 )\,|y|^s}\right)\; \frac{dx\, dy}{|y|^{n}} = C' \int_1^\infty A\left( \frac{2C}{\log^{\frac{1}{\gamma}}( \kappa +1 )\,r^s} \right) \; \frac{dr}{r} \nonumber \\ &= C'\int_1^\infty e^{-C''r^{\gamma s}} \; \frac{dr}{r} <\infty, \nonumber \end{align} where the last equality holds provided that the constant $\kappa$ is so large that $\frac{2C}{\log^{\frac{1}{\gamma}}( \kappa+1 )} < \frac 1{2e}$. \\ As for $J_{22}$, notice that, if $|x|\leq 1$ and $|x-y|\leq 2$, then $|y|\leq |y-x| +|x| \leq 3$. Thus, by property~\eqref{E:12}, one has that \begin{align}\label{A19} J_{22} < \infty.
\end{align} Finally, let us focus on the term $J_3$, which can be split as \begin{align}\label{A20} J_3 &= \int\int_{\{|x|>1, |y|>1, |x-y|\geq \frac{ |x|+|y|}{2}\}} A\left( \frac{|v(x) - v(y)|}{|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} \\ & \quad + \int\int_{\{|x|>1, |y|>1, |x-y|< \frac{ |x|+|y|}{2}\}} A\left( \frac{|v(x) - v(y)|}{|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} = J_{31}+ J_{32}\,.\nonumber \end{align} Consider $J_{32}$. If \begin{equation} \label{feb30} |x-y|< \frac{|x|+|y|}{2}, \end{equation} then $|x|\leq |x-y| +|y| \leq \frac {|x|}2 + \frac{|y|}2 +|y|\,,$ whence $|x|\leq 3|y|$. Similarly, $|y|\leq 3|x|$. Thus, $\frac {|y|}3 \leq |x| \leq 3|y|$, and \begin{equation}\label{A22} |x| \geq \frac{|x| +|y|}6, \quad |y|\ge\frac{|x| +|y|}6. \end{equation} Moreover, if $x$ and $y$ fulfill inequality \eqref{feb30}, then there exists an absolute constant $\beta>0$ such that \begin{equation}\label{E:19} |(1-\tau)x+\tau y|\ge\beta(|x|+|y|)\quad \text{for $\tau\in[0,1]$.} \end{equation} Indeed, squaring both sides of inequality \eqref{feb30} shows that it is equivalent to \begin{align*} 8x\cdot y > 2(|x|^2+|y|^2)+(|x|-|y|)^2. \end{align*} Hence, $x\cdot y > \frac 14 (|x|^2+|y|^2)$ and, by inequality \eqref{A22}, there exists an absolute constant $C$ such that \begin{align}\label{E:20} |(1-\tau)x+\tau y|^{2} & = (1-\tau)^{2}|x|^{2}+2\tau(1-\tau)x\cdot y+\tau^2|y|^2 \\ \nonumber & \ge (1-\tau)^{2}|x|^{2}+\tau(1-\tau)\frac{|x|^2+|y|^2}{2}+\tau^2|y|^2 \\ \nonumber & \ge C\min\left\{|x|^2,|y|^2\right\} \ge C \left(\frac{|x|+|y|}{6}\right)^{2} \quad \text{for $\tau \in[0,1]$.} \end{align} Inequality~\eqref{E:19} is thus established. Let us split $J_{32}$ as \begin{align}\label{E:22} J_{32} & = \int\int_{\{|x|>1, |y|>1, |x-y|< \frac{ |x|+|y|}{2}, \sqrt{|x|^2+|y|^2}<\frac{1}{\beta}\}} A\left( \frac{|v(x) - v(y)|}{|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} \\ \nonumber & \quad + \int\int_{\{|x|>1, |y|>1, |x-y|< \frac{ |x|+|y|}{2}, \sqrt{|x|^2+|y|^2}\ge\frac{1}{\beta}\}}A\left( \frac{|v(x) - v(y)|}{|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} \\ \nonumber & = J_{321}+J_{322}. \end{align} By property~\eqref{E:12}, \begin{equation}\label{E:23} J_{321}<\infty. \end{equation} As for $J_{322}$, note that if $x,y$ are such that $\sqrt{|x|^2+|y|^2}\ge\frac{1}{\beta}$, then \begin{equation}\label{E:21} 1\leq\beta\sqrt{|x|^2+|y|^2}\le \beta(|x|+|y|).
\end{equation} If $\kappa$ is sufficiently large, then the following chain holds for a suitable constant $C$: \begin{align}\label{E:24} J_{322} & \le \int\int_{\{|x|>1, |y|>1, |x-y|< \frac{ |x|+|y|}{2}, \sqrt{|x|^2+|y|^2}\ge\frac{1}{\beta}\}} A\left(\frac{3|x-y|^{1-s}} {\log^{\frac{1}{\gamma}}\left(\kappa+\beta\left(|x|+|y|\right)\right) \beta\left(|x|+|y|\right)} \right) \; \frac{dx\, dy} {|x-y|^n} \\ \nonumber & \le \int\int_{\{|x|>1, |y|>1, |x-y|< \frac{ |x|+|y|}{2}, \sqrt{|x|^2+|y|^2}\ge\frac{1}{\beta}\}} A\left(\frac{3} {\log^{\frac{1}{\gamma}}\left(\kappa+\beta\left(|x|+|y|\right)\right) \beta\left(|x|+|y|\right)^{s}} \right) \; \frac{dx\, dy} {\left(|x|+|y|\right)^n} \\ \nonumber & = \int\int_{\{|x|>1, |y|>1, |x-y|< \frac{ |x|+|y|}{2}, \sqrt{|x|^2+|y|^2}\ge\frac{1}{\beta}\}} e^{-C\log\left(\kappa+\beta\left(|x|+|y|\right)\right)\left(|x|+|y|\right)^{\gamma s}} \; \frac{dx\, dy} {\left(|x|+|y|\right)^n} \\ \nonumber & \le \int\int_{\{|x|>1, |y|>1\}} \kappa^{-C\left(|x|+|y|\right)^{\gamma s}} \; \frac{dx\, dy} {\left(|x|+|y|\right)^n} <\infty, \end{align} where the first inequality holds owing to ~\eqref{E:19}, \eqref{E:21}, \eqref{A51}, and the second one by property~\eqref{feb22}. Equations~\eqref{E:22}--\eqref{E:24} ensure that \begin{equation}\label{E:25} J_{32}<\infty. \end{equation} It remains to estimate $J_{31}$. The following chain holds, provided that $\kappa$ is sufficiently large: \begin{align}\label{A56} J_{31} & \le \int\int_{\{|x|>1, |y|>1, |x-y|\geq \frac{ |x|+|y|}{2}\}} A\left( \frac{2|v(x)|}{|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} \\ \nonumber & \quad + \int\int_{\{|x|>1, |y|>1, |x-y|\geq \frac{ |x|+|y|}{2}\}} A\left( \frac{2|v(y)|}{|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} \\ \nonumber & = 2 \int\int_{\{|x|>1, |y|>1, |x-y|\geq \frac{ |x|+|y|}{2}\}} A\left( \frac{2|v(x)|}{|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} \\ \nonumber & \le 2 \int\int_{\{|x|>1, |y|>1, |x-y|\geq \frac{ |x|+|y|}{2}\}} A\left( \frac{2}{\log^{\frac{1}{\gamma}}\left(\kappa+1\right)|x-y|^s} \right) \; \frac{dx\, dy}{|x-y|^n} \\ \nonumber & \le 2^{n+1} \int\int_{\{|x|>1, |y|>1, |x-y|\geq \frac{ |x|+|y|}{2}\}} A\left( \frac{2^{s+1}}{\log^{\frac{1}{\gamma}}\left(\kappa+1\right)\left(|x|+|y|\right)^s} \right) \; \frac{dx\, dy}{\left(|x|+|y|\right)^n} \\ \nonumber & \le 2^{n+1} \int\int_{\{|x|>1, |y|>1\}} e^{-\frac{\log(\kappa+1)}{2^{(s+1)\gamma}}\left(|x|+|y|\right)^{\gamma s}} \; \frac{dx\, dy}{\left(|x|+|y|\right)^n} \\ \nonumber & = 2^{n+1} \int\int_{\{|x|>1, |y|>1\}} (\kappa+1)^{-\left(\frac{\left(|x|+|y|\right)^{s}}{2^{(s+1)}}\right)^{\gamma}} \; \frac{dx\, dy}{\left(|x|+|y|\right)^n}<\infty. \end{align} Property~\eqref{A53} follows from~\eqref{A54}, \eqref{E:13}, \eqref{A14}, \eqref{A17}, \eqref{A19}, \eqref{A20}, \eqref{E:25} and~\eqref{A56}. \\ We conclude by proving that, if $\lambda \in (1,2)$, then \begin{equation}\label{E:27bis} \lim_{s\to0^+}s\int_{\mathbb R^{n}} \int_{\mathbb R^{n}} A\left(\frac{|v(x)- v(y)|}{\lambda |x-y|^s}\right)\; \frac{dx\,dy}{|x-y|^{n}} = \infty.
\end{equation} To this purpose, note that, given $\sigma\in(0,1)$, \begin{align}\label{E:28bis} &\int_{\mathbb R^{n}} \int_{\mathbb R^{n}} A\left(\frac{|v(x)- v(y)|}{\lambda |x-y|^s}\right)\; \frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \ge \int\int_{\{|x|>1,x_1>\sigma|x|, |y|>1, y_1<-\sigma|y|\}} A\left(\frac{|v(x)- v(y)|}{\lambda |x-y|^s}\right)\; \frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & = \int\int_{\{|x|>1,x_1>\sigma|x|, |y|>1, y_1<-\sigma|y|\}} A\left(\frac{1}{\lambda}\left(\frac{x_1}{|x|\log^{\frac{1}{\gamma}}\left(\kappa+|x|\right)} - \frac{y_1}{|y| \log^{\frac{1}{\gamma}}\left(\kappa+|y|\right) }\right)\frac{1}{|x-y|^s}\right)\; \frac{dx\,dy}{|x-y|^{n}} \\ \nonumber & \ge \int\int_{\{|x|>1,x_1>\sigma|x|, |y|>1, y_1<-\sigma|y|\}} A\left(\frac{2\sigma}{\lambda\log^{\frac{1}{\gamma}}\left(\kappa+|x| +|y|\right)}\frac{1}{\left(|x|+|y|\right)^{s}}\right)\; \frac{dx\,dy}{ (|x|+|y|)^n} \\ \nonumber & = C_{\sigma,n}\int_{1}^{\infty}\int_{1}^{\infty} A\left(\frac{2\sigma}{\lambda \log^{\frac{1}{\gamma}}\left(\kappa+\varrho+r\right)}\frac{1}{\left(\varrho+r\right)^{s}}\right)\; \frac{\varrho^{n-1}r^{n-1}}{\left(\varrho+r\right)^{n}}d\varrho\,dr \end{align} for some positive constant $C_{\sigma,n}$ depending on $\sigma$ and $n$. Note that the last equality follows by the use of polar coordinates in the integrals with respect to $x$ and to $y$, owing to the fact that the integrand only depends on $|x|$ and $|y|$, and that each of the sets $\{|x|>1,x_1>\sigma |x|\}$ and $\{|y|>1,y_1<-\sigma |y|\}$ is the intersection of the exterior of a ball centered at $0$ with a cone whose vertex is also $0$. Via the change of variables $\xi=\varrho+r$, $\eta=\varrho-r$, we obtain that \begin{align}\label{E:29bis} \int_{1}^{\infty}&\int_{1}^{\infty} A\left(\frac{2\sigma}{\lambda \log^{\frac{1}{\gamma}}\left(\kappa+\varrho+r\right)}\frac{1}{\left(\varrho+r\right)^{s}}\right)\; \frac{\varrho^{n-1}r^{n-1}}{\left(\varrho+r\right)^{n}}d\varrho\,dr \\ \nonumber & = \frac{1}{2} \int_{2}^{\infty}\int_{2-\xi}^{-2 +\xi} A\left(\frac{2\sigma}{\lambda \log^{\frac{1}{\gamma}}\left(\kappa+\xi\right) }\frac{1}{\xi^{s}}\right)\; \frac{\left(\xi^{2}-\eta^{2}\right)^{n-1}}{4^{n-1}\xi^{n}}d\eta\,d\xi. \end{align} Given $\alpha \in (0, 2)$, if $\xi>\frac{4}{2-\alpha}$ and $2-\xi \le \eta \le \xi-2$, then $\xi^2-\eta^2 \ge \xi^2-(\xi-2)^2 = 4 \xi-4>\alpha\xi$.
Therefore, on choosing $\kappa$ large enough, one has that \begin{align}\label{E:31bis} \int_{2}^{\infty}&\int_{2-\xi}^{-2 +\xi} A\left(\frac{2\sigma}{\lambda\log^{\frac{1}{\gamma}}\left(\kappa+\xi\right)}\frac{1}{\xi^{s}}\right)\; \frac{\left(\xi^{2}-\eta^{2}\right)^{n-1}}{\xi^{n}}d\eta\,d\xi \\ \nonumber & \ge \int_{\frac{4}{2-\alpha}}^{\infty}\int_{2-\xi}^{-2+\xi} A\left(\frac{2\sigma}{\lambda\log^{\frac{1}{\gamma}}\left(\kappa+\xi\right)}\frac{1}{\xi^{s}}\right)\; \frac{\left(\alpha\xi\right)^{n-1}}{\xi^{n}}d\eta\,d\xi >\alpha^{n} \int_{\frac{4}{2-\alpha}}^{\infty} A\left(\frac{2\sigma}{\lambda\log^{\frac{1}{\gamma}}\left(\kappa+\xi\right)}\frac{1}{\xi^{s}}\right)\,d\xi \\ \nonumber & = \alpha^{n} \int_{\frac{4}{2-\alpha}}^{\infty} e^{-\left(\frac{\lambda}{2\sigma}\right)^{\gamma}\log(\kappa+\xi)\xi^{\gamma s}}\,d\xi = \alpha^{n} \int_{\frac{4}{2-\alpha}}^{\infty} \frac{d\xi}{(\kappa+\xi)^{\left(\frac{\lambda}{2\sigma}\right)^{\gamma}\xi^{\gamma s}}} \\ \nonumber & = \frac{ \alpha^{n}}{s} \int_{\left(\frac{4}{2-\alpha}\right)^{s}}^{\infty} \frac{t^{\frac{1}{s}}}{(\kappa+t^{\frac{1}{s}})^{\left(\frac{\lambda}{2\sigma}\right)^{\gamma}t^{\gamma}}}\frac{dt}{t}. \end{align} Now, fix any $\sigma \in (\frac{\lambda}{2}, 1)$. Then $\left(\frac{\lambda}{2\sigma}\right)^{\gamma}<1$. Also, $\left(\frac{\lambda}{2\sigma}\right)^{\gamma}t^{\gamma}<1$ if $t<\frac{2\sigma}{\lambda}$. Thus, \begin{equation}\label{E:32bis} \frac{\chi_{((\frac{4}{2-\alpha})^{s},\frac{2\sigma}{\lambda})}(t) t^{\frac{1}{s}}}{(\kappa+t^{\frac{1}{s}})^{\left(\frac{\lambda}{2\sigma}\right)^{\gamma}t^{\gamma}}} \nearrow \infty \quad \text{as} \quad s\searrow 0^+ \quad \text{for $t \in (1, \tfrac{2\sigma}{\lambda})$}. \end{equation} Equation \eqref{E:27bis} follows from~\eqref{E:28bis}, \eqref{E:29bis}, \eqref{E:31bis} and \eqref{E:32bis}, via the monotone convergence theorem for integrals. Altogether, we have shown that the conclusions of the theorem hold with $u=\frac{v}{\lambda}$ for any $\lambda\in(1,2)$. \end{proof}
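Let us point out explicitly that the pair $(A, u)$ exhibited above violates equation \eqref{jan1}. Indeed, by \eqref{A2}, the right-hand side of \eqref{jan1} is finite, namely
\begin{equation*}
\frac{2\, \omega_n}{n} \int_{\mathbb R\sp n} \overline A(|u(x)|)\; dx < \infty\,,
\end{equation*}
whereas, by \eqref{A1}, its left-hand side equals $\infty$.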
\\ We claim that, instead, \begin{equation}\label{A11} \Omegaperatornamewithlimits{lim\,sup}_{s\to 0^+}s\int_\mathbb R \int_\mathbb R A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\mathbb Right)\; \frac{dx\, dy}{|x-y|} <\infty \end{equation} To verify \eqref{A11}, observe that \begin{align}\label{A12} &\int_\mathbb R \int_\mathbb R A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\mathbb Right)\; \frac{dx\, dy}{|x-y|} = \int\int_{|x|\leq 1, |y|\leq 1} A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\mathbb Right)\; \frac{dx\, dy}{|x-y|} \\ &+2 \int\int_{|x|< 1,|y|> 1} A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\mathbb Right)\; \frac{dx\, dy}{|x-y|} + \int\int_{|x|>1, |y|> 1} A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\mathbb Right)\; \frac{dx\, dy}{|x-y|} \nonumber \\ & = J_1 +J_2+ J_3\,.\nonumber \end{align} Owing to \eqref{A9}, there exist $c', c^{''} >0$ such that \begin{align}\label{A13} J_1 &\leq \int\int_{|x|\leq 1, |y|\leq 1} A \left(c' |x-y|^{1-s}\mathbb Right) \; \frac{dx\, dy}{|x-y|} \leq \int\int_{|x|\leq 1,|y|\leq 1} c^{''} \frac{|x-y|^{1-s}}{|x-y|} \; dx\, dy \\ & = c^{''} \int\int_{|x|\leq 1,|y|\leq 1} \frac{dx\, dy}{|x-y|^s} \leq c^{'''}\,,\nonumber \end{align} where $c^{'''}$ is independent of $s\in (0, \frac 12)$. \\ As far as $J_2$ is concerned, we have that \begin{align}\label{A14} J_2 &=\int\int_{|x|\leq 1,|y|>1, |x-y|>2} A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\mathbb Right)\; \frac{dx\, dy}{|x-y|} \\ &+ \int\int_{|x|\leq 1, |y|>1, |x-y|\leq 2} A \left( \frac{|u(x) - u(y)|}{|x-y|^s}\mathbb Right)\; \frac{dx\, dy}{|x-y|} = J_{2,1} + J_{2,2}\nonumber \end{align} Consider $J_{2,1}$. If $|x|\leq 1$ and $|x-y|>2$, then \begin{equation}\label{A15} |x|+|y| \leq |x| + |y-x| +|x| = 2|x| + |y-x| \leq 2 + |y-x| \leq |x-y|+|y-x| =2|x-y|\,. \end{equation} Thus, \begin{equation}\label{A16} |x-y|\leq |x|+|y|\leq 2|x-y|\,. \end{equation} Hence, there exists $c^{'}>0$ such that \begin{align}\label{A17} J_{2,1}&\leq \int\int_{|x|\leq 1,|y|>1, |x-y|>2} A \left( c^{'} \frac{|u(x)|+ |u(y)|}{(|x| +|y|)^s}\mathbb Right)\; \frac{dx\, dy}{|x|+|y|} \\ &\leq \int\int_{|x|\leq 1,|y|>1} A \left( 2 c^{'} \frac{|u(x)|}{ |y|^s}\mathbb Right)\; \frac{dy}{|y|}+ \int\int_{|x|\leq 1,|y|>1} A \left( 2 c^{'} \frac{|u(y)|}{ |y|^s}\mathbb Right)\; \frac{dy}{|y|}\nonumber \\ &\leq 2 \int\int_{|x|\leq 1,|y|>1} A\left( \frac {2c^{'}}{\log c}\frac1{|y|^s}\mathbb Right)\; \frac{dy}{|y|} = 8 \int_1^\infty A\left( \frac{2c^{'}}{\log c} r^{-s} \mathbb Right) \; \frac{dr}{r} \nonumber \\ &= 8\int_1^\infty c^{-kr^s} \; \frac{dr}{r}= \frac 85 \int_1^\infty c^{-k\tau} \; \frac{d\tau}{\tau}\,,\nonumber \end{align} where the fifth equality holds if the constant $c$ is so large that $\frac{2c^{'}}{\log c} < \frac 14$, where $c$ is the same as in \eqref{A7}. Here, $k=\frac{\log c}{2c^{'}}$. \\Consider $J_{2,2}$. If $|x|\leq 1$ and $|x-y|\leq 2$, then \begin{equation}\label{A18} |y|\leq |y-x| +|x| \leq 3\,. \end{equation} Thus, since $u$ is Lipschitz continuous, \begin{align}\label{A19} J_{2,2}&\leq \int\int_{|x|\leq 1, |y|\leq 3, |x-y|\leq 2} A\left( \frac{|u(x) - u(y)|}{|x-y|^s} \mathbb Right) \; \frac{dx\, dy}{|x-y|} \leq \int\int_{|x|\leq 1, |y|\leq 3} A\left( c \,|x-y|^{1-s} \mathbb Right) \; \frac{dx\, dy}{|x-y|} \\ & \leq \int\int_{|x|\leq 1, |y|\leq 3} c^{'} \,|x-y|^{1-s} \; \frac{dx\, dy}{|x-y|} = \int\int_{|x|\leq 1, |y|\leq 3} c^{'} \; \frac{dx\, dy}{|x-y|^s}\leq c^{''}\,, \nonumber \end{align} where $c^{''}$ is independent of $s\in (0, \frac 12)$. \\ Finally, let us focus on $J_3$. 
We have that \begin{align}\label{A20} J_3 &= \int\int_{|x|>1, |y|>1, |x-y|\geq \frac{ |x|+|y|}{2}} A\left( \frac{|u(x) - u(y)|}{|x-y|^s} \mathbb Right) \; \frac{dx\, dy}{|x-y|} \\ & + \int\int_{|x|>1, |y|>1, |x-y|< \frac{ |x|+|y|}{2}} A\left( \frac{|u(x) - u(y)|}{|x-y|^s} \mathbb Right) \; \frac{dx\, dy}{|x-y|} = J_{3,1}+ J_{3,2}\,.\nonumber \end{align} Consider $J_{3,1}$. If $|x-y|< \frac{|x|+|y|}{2}$, then $|x|\leq |x-y| +|y| \leq \frac {|x|}2 + \frac{|y|}2 +|y|\,,$ whence $|x|\leq 3|y|$. Similarly, $|y|\leq 3|x|$. Thus, \begin{equation}\label{A21} \frac {|y|}3 \leq |x| \leq 3 |y|\,, \end{equation} whence \begin{equation}\label{A22} |x|= \frac{|x|}2 + \frac{|x|}2 \geq \frac{|x| +|y|}2 \,. \end{equation} Also, since $|x|>1, |y|>1$ and $|x-y|<\frac{|x| +|y|}2$, then either $x,y>0$ or $x,y<0$. Assume, for instance, that $0<x<y$. By \eqref{A9}, there exists $z\in (x, y)$ such that \begin{align}\label{A23} |u(x)-u(y)|&\leq \frac {|x-y|}{\log^2{(c+|z|)} (c+|z|)}\leq \frac {|x-y|}{\log^2{(c+|x|)} (c+|x|)} \leq \frac {|x-y|}{\log^2{\left(c+\frac{|x|+|y|}6\mathbb Right)} \left(c+\frac{|x|+|y|}6\mathbb Right)} \,, \end{align} where the last inequality holds owing to \eqref{A22}. \\ Thus, by \eqref{A23}, \fi \section*{Compliance with Ethical Standards}\label{conflicts} \subsection*{Funding} This research was partly funded by: \begin{enumerate} \item Research Project 201758MTR2 of the Italian Ministry of Education, University and Research (MIUR) Prin 2017 ``Direct and inverse problems for partial differential equations: theoretical aspects and applications''; \item GNAMPA of the Italian INdAM -- National Institute of High Mathematics (grant number not available); \item Grant P201-18-00580S of the Czech Science Foundation; \item Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - GZ 2047/1, Projekt-ID 390685813. \end{enumerate} \subsection*{Conflict of Interest} The authors declare that they have no conflict of interest. {\color{black} \end{document}
\begin{document} \maketitle \begin{abstract}In this paper, we are interested in a general type of nonlocal energy, defined on a ball $B_R\subset \ensuremath{\mathbb{R}}n$ for some $R>0$ as \[ \ensuremath{\mathcal{E}} (u, B_R)= \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} F( u(x)-u(y),x-y)\, dx \, dy+\int_{B_R} W(u)\, dx.\] We prove that in $\ensuremath{\mathbb{R}}^2$, under suitable assumptions on the functions $F$ and $W$, bounded continuous global energy minimizers are one-dimensional. This proves a De Giorgi conjecture for minimizers in dimension two, for a general type of nonlocal energy. \end{abstract} \section{Introduction} In this paper we deal with a general type of nonlocal energy. Let \[ F\colon \ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}n \setminus \{0\} \to [0,+\infty), \qquad W: [-1,1] \to [0,+\infty) \] be two functions and let $R>0$. Denoting as usual \[ B_R = \big\{ x\in \ensuremath{\mathbb{R}}n \; \big| \; |x|<R\big \}, \qquad \ensuremath{\mathcal{C}} B_R = \ensuremath{\mathbb{R}}n \setminus B_R,\] we consider for any $|u|\leq 1,$ \eqlab{\label{energy} \ensuremath{\mathcal{E}}(u,B_R):= \ensuremath{\mathcal{K}}_R(u) +\int_{B_R} W(u)\, dx, } with \eqlab{\label{kinn} \ensuremath{\mathcal{K}}_R(u):= \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} F( u(x)-u(y),x-y)\, dx \, dy . } Under suitable assumptions on $F$ and $W,$ we prove that for $n=2$ continuous functions $u \colon \ensuremath{\mathbb{R}}^n\to [-1,1]$, minimizing the energy $\ensuremath{\mathcal{E}}(\cdot,B_R)$ for any $R>0$, are one-dimensional. We say that $u$ is one-dimensional if every level set of $u$ is a hyperplane, or in other terms, if there exists $u_0 \colon \ensuremath{\mathbb{R}} \to [-1,1] $ such that \[ u(x)= u_0(x\cdot \omega) \qquad \mbox{ for some } \; \; \omega \in \partial B_1.\] This type of energy naturally arises in a phase transition problem, which leads to the well-known stationary Allen-Cahn equation \eqlab{ \label{acclass} (-\Delta) u= u-u^3 \quad \mbox{ in } \, \ensuremath{\mathbb{R}}n. } The Italian mathematician Ennio De Giorgi conjectured in 1978 that any smooth, bounded solution of this equation which is monotone in one direction is one-dimensional, at least if $n\leq 8$. The interested reader can check \cite{SavinSurvey} for a very nice survey on phase transitions, minimal surfaces, the Bernstein problem, since the connection between these problems is the reason why the dimension eight comes into play. For a further very nice reference, see \cite{CintiSurvey}. \\ This De Giorgi conjecture has received much attention in the last decades, and has been completely settled for $n\leq 3$, see~\cite{AC00, BCN97,GGUI}. The case $4\leq n \leq 8$ with the additional assumption that \begin{equation} \label{limdgs} \lim_{x_n\to \pm \infty} u(x',x_n)=\pm 1, \quad \mbox{for any} \quad x'\in \ensuremath{\mathbb{R}}^{n-1} \end{equation} was proved in \cite{flatty}. On the other hand, an example showing that the De Giorgi conjecture does not hold in higher dimensions (i.e. for $n\geq 9$) can be found in \cite{PKW08}. A model that accounts for long range interactions is given by the nonlocal, fractional counterpart of the Allen-Cahn equation \eqlab{ \label{brr} (-\Delta)^s u= u-u^3 \quad \mbox{ in } \, \ensuremath{\mathbb{R}}n, } with $s\in (0,1)$. 
The operator $(-\Delta)^s$ denotes the fractional Laplacian defined as \bgs{\label{frlap} (-\Delta)^s u (x)=C_{n,s}\int_{\ensuremath{\mathbb{R}}n} \frac{2u(x)- u(x+y)-u(x-y)}{|y|^{n+2s}} \, dy, \quad \mbox { with } \; C_{n,s}>0. } An analogue of the De Giorgi conjecture for any smooth, bounded, monotone solution of the fractional Allen-Cahn equation has first been proved for $n=2, s={1}/{2}$ is \cite{CM05}. In the case $n=2$, for any $s \in (0,1)$, the result is proved in \cite{CS15,SV09}. When $n=3$, the papers \cite{CC10, CC14} contain the proof for $s\in \big[{1}/{2},1\big)$, \cite{dipierro2017three,improvement} for $s\in \big(0,{1}/{2}\big)$, and \cite{monoton} for a general $s\in (0,1)$. For $n=4$ and $s={1}/2$ a proof of the conjecture is given in \cite{figalli}. On the other hand, for $4\leq n\leq 8$ and $s \in\big[{1}/{2},1\big)$ the conjecture is proved with the additional assumption \eqref{limdgs} in \cite{savin12,rigidity}. For $s$ in this range, only a counterexample for $n=9$ is missing to complete the picture. One way to tackle the De Giorgi conjecture is to study global minimizers of the Ginzburg-Landau energy functional and understand whether they are one-dimensional. For the Allen-Cahn equation \eqref{acclass}, the related energy in some ball $B_R\subset\ensuremath{\mathbb{R}}n$ is given by \bgs{ \label{classen} \ensuremath{\mathcal{E}}(u,B_R) = \int_{B_R} \frac{1}2|\nabla u|^2 + W(u) \, dx, } with $W$ being the double-well potential \bgs{\label{DF-WELL} W(u)=\displaystyle \frac{(u^2-1)^2}{4}. } Actually, the potential term $W$ can denote any function with a double-well structure, that is \eqlab{ \label{www1} &W \colon[-1,1] \to [0,+\infty), \quad W\in C^2([-1,1]), \quad W>0 \; \; \mbox{ in }\; (-1,1) , \\ &W(\pm 1) = W'(\pm 1) =0, \quad W''(\pm 1) >0. } A local minimizer $u$ of the energy $\ensuremath{\mathcal{E}}(\cdot, B_R)$ is such that $ \ensuremath{\mathcal{E}}(u, B_R) \leq \ensuremath{\mathcal{E}}(v,B_R)$ for any $v=u$ on $\partial B_R. $ A global minimizer is a local minimizer for any $R>0$. It turns out that global minimizers of the Ginzburg-Landau energy (with $W$ as in \eqref{www1}) are one-dimensional for $n\leq 7$, see \cite{flatty} or \cite[Theorem 10.1]{SavinSurvey}. In fact, Savin proves the conjecture for global minimizers, and uses the additional assumption \eqref{limdgs} to go from global minimizers to solutions. The nonlocal energy related to problem \eqref{brr} is \eqlab{ \label{frlapvv} \ensuremath{\mathcal{E}}^s(u,B_R) = {\frac{1}2} \iint_{\ensuremath{\mathbb{R}}^{2n} \setminus (\ensuremath{\mathcal{C}} B_R)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\, dx \, dy + \int_{B_R}W(u) \, dx , } with $W$ satisfying \eqref{www1}. Here, a local minimizer $u$ of the energy $\ensuremath{\mathcal{E}}^s(\cdot, B_R)$ is such that \[ \ensuremath{\mathcal{E}}^s(u, B_R) \leq \ensuremath{\mathcal{E}}^s(v,B_R) \qquad \mbox{ for any }\;\; v=u \quad \mbox{ in } \; \; \ensuremath{\mathcal{C}} B_R, \] and a global minimizer is a local minimizer in any ball. That nonlocal minimizers are one-dimensional is proved for $n\leq 7$ and $s\in [1/2,1)$ in \cite[Theorems 1.2]{savin12,rigidity}, and the conjecture for solutions is settled (as in the classical case) by using the additional assumption \eqref{limdgs}. In other references \cite{improvement,figalli,monoton}, the authors prove the conjecture with different techniques (for critical points of the energy, or for stable solutions). 
In \cite{CC10, CC14,CS15,CM05,dipierro2017three,SV09}, again various techniques are employed, but all rely on the use of the harmonic extension for the fractional Laplacian. However, there is not an ``extension procedure'' for general nonlocal operators, hence such methods are specific to the fractional Laplacian case. On the other hand, in \cite[Theorem 4.2.1]{nonlocal}, the present author with Valdinoci carry out the proof of the conjecture for minimizers for $n=2$ in the nonlocal setting, thus without the harmonic extension. This allows to develop the technique therein introduced, and take the nonlocal energy in \eqref{frlapvv} to a much more general form. We thus prove the conjecture for global minimizers for $n=2$, for the general nonlocal energy given in \eqref{energy}. As a matter of fact, the results here introduced find as an immediate application the study of the energy related to \eqref{frlapvv}. Furthermore, the result applies also to more engaging equations, involving for instance the fractional $p$-Laplacian, or the mean curvature equation (as we see in Section \ref{examples}). {We mention for the Allen-Cahn equation with general kernels the papers \cite{cozzpass} and \cite{ros2015entire}. While reviewing this paper, we learned about the result reached in \cite{ros2015entire}. There, the one-dimensional property of stable solutions is proved in $\ensuremath{\mathbb{R}}^2$ for the operator \[\mathcal L u(x)= P.V. \int_{\ensuremath{\mathbb{R}}n} (u(x)-u(x+y)) K(y) \, dy \] under some assumptions on the kernel $K$, and by using a Liouville theorem approach.} We organize the rest of the paper as follows. Section \ref{fff} contains the main result and the assumptions on the function $F$. {In Section \ref{missing} we deal with the existence of minimizers of the nonlocal general energy \eqref{energy} in an suitable functional setting. We discuss also some form of a strong comparison principle (i.e. if two ordered minimizers coincide on a small ball, then they coincide in the whole space)}. In Section \ref{prelim} we introduce some energy estimates, which will contribute to the proof of the main result (Theorem \ref{Theorem}) in Section \ref{thmm}. We give two examples of functions $F$ that satisfy our assumptions in Section \ref{examples}. {As a matter of fact, in this last section we obtain that continuous bounded minimizers of the energy related to the fractional $p$-Laplacian and to the fractional mean curvature are one-dimensional in $\ensuremath{\mathbb{R}}^2$. In other words, we prove the De Giorgi conjecture for minimizers in dimension two (also) for the fractional $p$-Laplacian and the mean curvature.} \section{Main result and assumptions on $F$}\label{fff} We fix some $s\in (0,1)$ and $p\in [1,+\infty)$. We consider $F:\ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{R}}n\setminus \{0\} \to [0,+\infty)$ and denote by $t\in \ensuremath{\mathbb{R}}$, $x=(x_1,\dots,x_n)\in \ensuremath{\mathbb{R}}n\setminus\{0\}$ its variables. \begin{assump*}\label{fuffa} In this paper, $F$ satisfies the following on its domain of definition. 
\begin{itemize} \item symmetry \eqlab{ \label{sym} F (t,x) =F(-t,x)=F(-t,-x) , } \item monotonicity in $t$ \eqlab{\label{mon} F(t_1, x)\leq F(t_2,x) \;\, \mbox{ for any } \; |t_1|\leq| t_2|, } \item monotonicity in $x$ \eqlab{ \label{mon2} F(t, x_1)\leq F(t,x_2)\;\, \mbox{ for any } \; |x_1|\geq | x_2|, } \item {scaling in $x$ \eqlab{ \label{homo} F\left(t,\alpha x\right)\leq \alpha^{-n-sp-1} F(t,x)\; \, \mbox{ for any } \alpha \in (0,1], }} \item integrability: there exist $c_*,c^*>0$ such that { \eqlab{ \label{integr} &c_* \left(\frac{|t|^p}{|x|^{n+sp} } - \frac{1}{|x|^{n+sp-p} }\right) \leq F(t,x) \leq c^* \frac{|t|^p}{|x|^{n+sp}}, } } \item smoothness in $x$ \eqlab{ \label{C2} &F(t, \cdot)\in C^2\left(\ensuremath{\mathbb{R}}n \setminus \{0\}\right) , } \item growth of the partial derivative in $x$: there exists $c_1>0$ such that \eqlab{ \label{partial1} \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \partial _{x_i} F (t,x)\ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \leq c_1 \frac{F(t,x)}{|x|} \;\,\, \mbox{ for any } i=1,2,\dots,n, } \item growth of the second order partial derivative in $x$: there exists $c_2>0$ such that \eqlab{\label{partial2} \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \partial^2_{x_i} F(t,x)\ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \leq c_2 \frac{F(t,x)}{|x|^2} \;\, \mbox{ for any } i=1,2,\dots,n, } \item smoothness in $t$ \eqlab{ \label{C2t} F(\cdot,x)\in C^1\left(\ensuremath{\mathbb{R}} \right) \; \mbox{ for a.e. } x \in \ensuremath{\mathbb{R}}n, } \item {growth of the derivative in $t$: there exists $c_3>0$ such that \eqlab{ \label{1derivt} \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \partial _{t} F (t,x)\ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \leq c_3 \frac{|t|^{p-1}}{|x|^{n+sp}}, }} \item {strict monotonicity of $\partial_t F(t,x)$: \eqlab{ \label{uconvex} \partial_t F(T,x) > \partial_t F(\tau,x), \quad \; &\mbox{ whenever }T> \tau, \mbox{ for any } x \in \ensuremath{\mathbb{R}}n\setminus \{0\}. }} \end{itemize} \end{assump*} \noindent Let \eqlab{ \label{www} &W \colon[-1,1] \to [0,+\infty), \quad W\in C^1([-1,1]), \quad W(\pm 1) = W'(\pm 1) =0. } When $W$ satisfies \eqref{www}, we say that $u\colon\ensuremath{\mathbb{R}}n \to [-1,1]$ is a minimizer for $\ensuremath{\mathcal{E}}(\cdot,B_R)$ given in \eqref{energy} if $\ensuremath{\mathcal{E}}(u, B_R)<\infty$ and if it minimizes $\ensuremath{\mathcal{E}}(\cdot,B_R)$ among all admissible competitors, i.e. \[ \ensuremath{\mathcal{E}}(u,B_R) \leq \ensuremath{\mathcal{E}}(v,B_R) \qquad \mbox{ for any }\; v \; \mbox{ such that } \quad |v|\leq 1 \quad \mbox{ and } \quad v=u \quad \mbox{ in } \; \; \ensuremath{\mathcal{C}} B_R. \] We say that $u\colon\ensuremath{\mathbb{R}}n \to [-1,1]$ is a global minimizer for $\ensuremath{\mathcal{E}}$ if it is a minimizer for $\ensuremath{\mathcal{E}}(\cdot, B_R)$ for any $R>0$. The main result of the paper is the following. \begin{thm}\label{Theorem} Let $u\colon \ensuremath{\mathbb{R}}n \to [-1,1]$ be a continuous global minimizer of the energy \eqref{energy}. Then under the assumptions \eqref{sym} to \eqref{www}, $u$ is one-dimensional. \end{thm} Notice that the assumptions on $F$ give a generalization of the energy in \eqref{frlapvv}, related to the fractional Laplacian. 
{ Moreover, we prove in the last Section \ref{examples}, they are all natural conditions when we consider a nonlocal energy like the one related to the fractional $p$-Laplacian or the fractional mean curvature operator.} \section{{Existence and comparison of minimizers} } \label{missing} Let $p\in[1,+\infty), s\in (0,1), R>0$ and let $ u\colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}}$ be a measurable function. We consider $F\colon \ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}n\setminus \{0\}\to [0,+\infty)$ to be such that it satisfies at least \eqref{sym} and \eqref{integr} (other assumptions will be mentioned, when needed). Furthermore, let \eqlab{ \label{wloc} W \colon \ensuremath{\mathbb{R}} \to [0,+\infty), \qquad W\in L^\infty(\ensuremath{\mathbb{R}})\cap C^1(\ensuremath{\mathbb{R}}) } and let $\ensuremath{\mathcal{E}}(\cdot, B_R)$ be defined by the formula \eqref{energy}. \\We begin by describing the functional framework for the existence (for further reference, check \cite{hitch}). Let \[ W^{s,p}(\Omega) := \big\{ u\in L^p(\Omega) \; \big| \; [u]_{W^{s,p}(\Omega)} <\infty \big\} \] where \[ [u]_{W^{s,p}(\Omega) }= \left(\int_\Omega \int_\Omega \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp} }\, dx \, dy\right)^{\frac1p} \] is the Gagliardo semi-norm. Also, we denote \[ \|u\|_{W^{s,p}(\Omega)} :=\left(\|u\|^p_{L^p(\Omega)} +[u]^p_{W^{s,p}(\Omega) }\right)^{\frac1p}.\] We define \bgs{ \mathcal X_R:= \big\{ \varphi \colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}} \; \big| \; \varphi \in L^\infty(\ensuremath{\mathbb{R}}n) \cap W^{s,p}(B_{2R}) \big\} } and denote \[ [u]_{R,\varphi}:=\left(\int_{B_R} \left( \int_{B_{2R}\setminus B_R} \frac{|u(x)-\varphi(y)|^p}{|x-y|^{n+sp} }\, dy\right) \, dx \right)^{\frac1p}.\] For $\varphi\in \ensuremath{\mathcal{X}}_R$, let \eqlab{\label{thenorm} \mathcal {W}^{s,p}_{R,\varphi}:= \big\{u \colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}} \;\big| \; u \in W^{s,p}(B_R), \,[u]_{R,\varphi}<\infty \mbox{ and }u=\varphi \mbox{ on } \ensuremath{\mathcal{C}} B_R \big\}. } For $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$, when we say that $u$ is a minimizer for $\ensuremath{\mathcal{E}}(\cdot, B_R)$ it is implied that $u$ is a minimizer with respect to the fixed exterior data $\varphi$. For the sake of precision, we recall that a measurable function $u\colon\ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}}$ is a minimizer for $\ensuremath{\mathcal{E}}$ in $B_R$ if $\ensuremath{\mathcal{E}}(u, B_R)<\infty$ and \[ \ensuremath{\mathcal{E}}(u,B_R) \leq \ensuremath{\mathcal{E}}(v,B_R) \qquad \mbox{ for any }\;\; v=u \quad \mbox{ in } \; \; \ensuremath{\mathcal{C}} B_R. \] For any two sets $A, B\subset \ensuremath{\mathbb{R}}n$, we define \[ u(A,B):= \int_{ A}\int_B F( u(x)-u(y),x-y)\, dx \, dy, \] and recall from \eqref{kinn} that \[ \ensuremath{\mathcal{K}}_R(u)= u(B_R,B_R)+ u(B_R,\ensuremath{\mathcal{C}} B_R)+ u(\ensuremath{\mathcal{C}} B_R,B_R) .\] We have the next useful result. \begin{prop}\label{finiteenergy} If $\varphi\in \ensuremath{\mathcal{X}}_R$ and $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$, then there exists a positive constant $C$ depending on $n,s,p,R, \|W\|_{L^\infty(\ensuremath{\mathbb{R}})}, \|\varphi\|_{L^\infty(\ensuremath{\mathbb{R}}n)} $ such that \[ \ensuremath{\mathcal{E}}(u,B_R)\leq C \left( \|u\|^p_{W^{s,p}(B_{R})} +[u]^p_{R,\varphi}+ 1 \right).\] Moreover, it holds that \eqlab{\label{krusym} \ensuremath{\mathcal{K}}_R(u) = u(B_R,B_R) + 2u(B_R,\ensuremath{\mathcal{C}} B_R). 
} \end{prop} \begin{proof} By the right hand side of \eqref{integr} we have that \bgs{\label{fuck1} & u(B_R,B_R) + 2u(B_R,B_{2R} \setminus B_R)\\ \leq &\; c^* \int_{B_R}\int_{B_R} \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}} \, dx \, dy + 2 c^*\int_{B_R}\int_{B_{2R} \setminus B_R} \frac{|u(x)-\varphi(y)|^p}{|x-y|^{n+sp}} \, dx \, dy \\ \leq &\; c^*\left( [u]^p_{W^{s,p}(B_{R})} +2[u]^p_{R,\varphi}\right). } When $x\in B_R, y\in \ensuremath{\mathcal{C}} B_{2R}$, we have that $|x-y|\geq |y|/2$, hence \bgs{\label{fuck2} u(B_R,\ensuremath{\mathcal{C}} B_{2R}) \leq &\; c^* \int_{B_R} \int_{\ensuremath{\mathcal{C}} B_{2R}} \frac{|u(x)-\varphi(y)|^p}{|x-y|^{n+sp}} \, dx \, dy \\ \leq &\; 2^{p-1} c^* \left( \int_{B_R} |u(x)|^p \int_{\ensuremath{\mathcal{C}} B_{2R} } \frac{ dx \, dy}{|x-y|^{n+sp}} + \int_{B_R} \int_{\ensuremath{\mathcal{C}} B_{2R} } \frac{ |\varphi(y) |^p}{|x-y|^{n+sp}}\,dx \, dy\right) \\ \leq &\; 2^{p-1+n+sp}c^* \left( \|u\|^p_{L^p(B_R)} + \|\varphi\|^p_{L^{\infty}(\ensuremath{\mathcal{C}} B_{2R})}|B_R|\right) \int_{\ensuremath{\mathcal{C}} B_{2R}} |y|^{-n-sp}\, dy \\ \leq &\; C_{n,s,p,R}\left( \|u\|^p_{L^p(B_R)} + \|\varphi\|^p_{L^{\infty}(\ensuremath{\mathcal{C}} B_{2R})}\right). } Therefore we obtain \eqlab{ \label{krrr} \ensuremath{\mathcal{K}}_R(u)\leq C_{n,s,p,R}\left( \|u\|^p_{W^{s,p}(B_R)} +[u]^p_{R,\varphi}+ \|\varphi\|^p_{L^{\infty}(\ensuremath{\mathcal{C}} B_{2R})}\right). } It is enough then to notice that \[ \int_{B_R} W(u) \, dx \leq C_{n,R}\|W\|_{L^\infty(\ensuremath{\mathbb{R}})} \] to conclude the first statement of the Proposition. \\ On the other hand, changing variables, using Fubini to change the order of integration and applying \eqref{sym}, we obtain \bgs{ &u(\ensuremath{\mathcal{C}} B_R, B_R) \\ =&\; \int_{\ensuremath{\mathcal{C}} B_R} \left( \int_{B_R} F(u(x)-u(y),x-y) \, dy \right)\, dx = \int_{\ensuremath{\mathcal{C}} B_R} \left( \int_{B_R} F(u(y)-u(x), y-x) \, dx\right) \, dy \\ =&\; \int_{B_R} \left(\int_{\ensuremath{\mathcal{C}} B_R} F(u(x)-u(y), x-y) \, dy \right)\, dx = u(B_R,\ensuremath{\mathcal{C}} B_R), } from which \eqref{krusym} immediately follows. \end{proof} We give in the next proposition some a priori properties of the minimizers of the energy. \begin{prop}\label{apriori} If $\varphi \in \ensuremath{\mathcal{X}}_R$ and $u$ is a minimizer of $\ensuremath{\mathcal{E}}(\cdot, B_R)$ with $u=\varphi$ in $\ensuremath{\mathcal{C}} B_R$, then \begin{enumerate} \item there exists $C=C_{n,s,p,R}>0$ such that \bgs{\label{rmk2} \ensuremath{\mathcal{E}}(u,B_R)\leq C\left(\|\varphi\|^p_{W^{s,p}(B_{2R})} +\|\varphi\|^p_{L^{\infty}(\ensuremath{\mathcal{C}} B_{2R})} + \|W\|_{L^\infty(\ensuremath{\mathbb{R}}) } \right), } \item $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$ and furthermore there exists $c=c_{n,s,p,R}>0$ such that \eqlab{\label{bondonlp} \|u\|_{L^p(B_R) } \leq c (1+[u]^p_{W^{s,p}(B_R)}). } \end{enumerate} \end{prop} \begin{proof} We can use $\varphi$ as a competitor for $u$. Using \eqref{krrr} for $\varphi$ (notice that $\varphi\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$), we obtain \bgs{ \label{rm45} \ensuremath{\mathcal{K}}_R(\varphi) \leq &\;C_{n,s,p,R}\left(\|\varphi\|^p_{W^{s,p}(B_{2R})} + \|\varphi\|^p_{L^\infty(\ensuremath{\mathcal{C}} B_{2R})}\right). 
} Given the minimality of $u$, we get that \bgs{ \ensuremath{\mathcal{E}}(u,B_R)\leq \ensuremath{\mathcal{E}}(\varphi,B_R)\leq C_{n,s,p,R}\left(\|\varphi\|^p_{W^{s,p}(B_{2R})} +\|\varphi\|^p_{L^{\infty}(\ensuremath{\mathcal{C}} B_{2R})} + \|W\|_{L^{\infty}(\ensuremath{\mathbb{R}}) } \right) . } This proves point (1) of the proposition. By a change of variables, we obtain the bound \eqlab{\label{buc} \int_{B_R} \int_{B_{2R}} \frac{1}{|x-y|^{n+sp-p} }dx \, dy\leq |B_R| \int_{B_{3R}} \frac{1}{|z|^{n+sp-p} }dz =C(n,s,p,R). } According to the left hand side of \eqref{integr}, we have \bgs{ \label{nnnu1} u(B_R,B_R) \geq &\,{c_*} \left( \int_{B_R} \int_{B_R} \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp} }dx \, dy - \int_{B_R} \int_{B_R} \frac{1}{|x-y|^{n+sp-p} }dx \, dy \right) \\ =&\,{c_*} \left( [u]^p_{W^{s,p}(B_R)} - C_{n,s,p,R}\right) .} In the same way, we get that \[ u(B_R,B_{2R}\setminus B_R) \geq c_*\left( [u]^p_{R,\varphi} - C_{n,s,p,R}\right).\] Since $\ensuremath{\mathcal{E}}(u,B_R)$ is bounded, it holds that \eqlab{ \label{this} [ u ]^p_{ W^{s,p}(B_R)} + [u]^p_{R,\varphi}\leq \ensuremath{\mathcal{E}}(u,B_R)+ C_{n,s,p,R} <C({n,s,p,R,\|\varphi\|_{L^\infty(\ensuremath{\mathbb{R}}n)}, \|W\|_{L^\infty(\ensuremath{\mathbb{R}})}}). } Using Proposition \ref{pony} we have that \bgs{ \|u\|^p_{L^p(B_R)} \leq &\; C_{n,p,s,R} \left( [u]_{R,\varphi}^p + \|\varphi\|^p_{L^p(B_{2R}\setminus B_R)} \right). } This implies that $u\in L^p(B_R)$, hence by \eqref{this}, we get that $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$. The bound on the $L^p$ norm \eqref{bondonlp} follows from Proposition \ref{poincy}. \end{proof} \begin{oss} Let us note that there are some cases in which the request \[ [u]_{R,\varphi}<\infty\] can be avoided. For $sp<1$, we can take \bgs{ \mathcal X_R= \big\{ \varphi \colon \ensuremath{\mathcal{C}} B_R \to \ensuremath{\mathbb{R}} \; \big| \; \varphi \in L^\infty(\ensuremath{\mathcal{C}} B_R) \big\}. } In this case, we define \bgs{ \ensuremath{\mathcal W^{s,p}_{R,\varphi} }:= \big\{u \colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}} \; \big| \; u_{\,\big|{B_R}} \in W^{s,p}(B_R) \mbox{ and }u=\varphi \mbox{ on } \ensuremath{\mathcal{C}} B_R \big\}. } Indeed, for $sp<1$, one can use the fractional Hardy inequality, thanks to \cite[Theorem D.1.4, Corollary D.1.5]{tesiluca} and get that \bgs{ \int_{B_R}\int_{B_{2R}\setminus B_R} \frac{|u(x)|^p}{|x-y|^{n+sp}}\, dx \, dy\leq &\, \int_{B_R} \int_{\ensuremath{\mathcal{C}} B_{d_R(x)}(x)} \frac{|u(x)|^p}{|x-y|^{n+sp}}\, dx \, dy \\ \leq &\, \int_{B_R} \frac{|u(x)|^p}{d_R(x)^{sp}}\, dx \leq C(n,s,p,R) \|u\|^p_{W^{s,p}(B_R)}, } where $d_R(x)=dist(x,\partial B_R)$. \\Just as a remark, the fractional Hardy inequality holds also for $sp>1$, see \cite[Theorem 1.1, (17)]{DydaHardy} for any $u\in C_c(B_R)$). Nevertheless, in this case one looks for minimizers in $W^{s,p}_0(B_R)$, a space which is too restrictive for our purposes. \noindent Furthermore (check \cite[Lemma 4.5.10]{tesiluca}, or the forthcoming paper \cite{cl}) \[ \int_{B_R}\int_{B_{2R}\setminus B_R} \frac{|\varphi(y)|^p}{|x-y|^{n+sp}}\, dx \, dy\leq \, \|\varphi\|_{L^\infty(\ensuremath{\mathcal{C}} B_R)} \mbox{Per}_{sp}(B_R) <\infty. \] This follows since the $sp$-perimeter is finite for sets with Lipschitz boundary (see \cite{nms}). Then \bgs{ {[}u{]}^{p}_{R,\varphi}=&\; \int_{B_R}\int_{B_{2R}\setminus B_R} \frac{|u(x)-\varphi(y)|^p}{|x-y|^{n+sp}} \,dx\,dy \leq C(n,s,p,R)\left(|\varphi\|_{L^\infty(\ensuremath{\mathcal{C}} B_R)} + \|u\|^p_{W^{s,p}(B_R)}\right). 
} We also notice that, in order to obtain the estimates in Proposition \ref{apriori}, one can consider \sys[\tilde \varphi=]{&\varphi, \quad \mbox{ in } \ensuremath{\mathcal{C}} B_R\\ &0, \quad \mbox{ in }B_R .} \end{oss} We prove now the existence of minimizers of the energy. \begin{thm} [Existence] \label{existence} Let $F$ be lower semi-continuous in the first variable, and such that it satisfies \eqref{sym}, \eqref{integr}, and let $W$ be such that it satisfies \eqref{wloc}. If $\varphi \in \ensuremath{\mathcal{X}}_R$, there exists a minimizer $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$ of $\ensuremath{\mathcal{E}}(\cdot, B_R)$. \end{thm} \begin{proof} Since $F, W \geq 0$, we have that $\ensuremath{\mathcal{E}}(v,B_R)\geq 0$ for any $v\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$. Then there exists $\{u_k\}\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$ a minimizing sequence, i.e. \[ \liminf_{k\to\infty} \ensuremath{\mathcal{E}}(u_k,B_R) = \inf \big\{ \ensuremath{\mathcal{E}}(v,B_R) \;\big| \; v\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }\big\}. \] There is $\bar k >0$ such that for all $k\geq \bar k$ there exists $M>0$ such that \[ \ensuremath{\mathcal{E}}(u_k,B_R) \leq M, \] so in particular by \eqref{this} we have that \[ [u_k]_{W^{s,p}(B_R)} \leq C_1, \qquad [u_k]_{R,\varphi}\leq C_2, \] with $C_1,C_2>0$ depending on $n,s,p,R,M$. Also, by \eqref{bondonlp}, we have that \[ \|u_k\|_{L^p(B_R)}<C(n,s,p,R) \left( [u]_{W^{s,p}(B_R)}+1\right), \] therefore for all $k\geq \bar k$ there is $\tilde M>0$ such that \[ \| u_k\|_{W^{s,p}(B_R)}<\tilde M. \] By compactness (see e.g. Theorem 7.1 in \cite{hitch}), there exists a subsequence, which we still call $\{u_k\}$, such that \bgs{ \|u_k-u\|_{L^p(B_R)}\longrightarrow 0, \quad \mbox{and } \quad u_k \longrightarrow u \; \; \mbox{ a.e. in } \ensuremath{\mathbb{R}}n } for some $u\in W^{s,p}(B_R)$. Also, $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$, by Fatou and the uniform bound on $[u_k]_{R,\varphi}$. Using Fatou's Theorem, the lower semi-continuity of $F$ in the first variable and \eqref{wloc} we have that \bgs{ \inf \big\{ &\ensuremath{\mathcal{E}}(v,B_R) \,\big| \, v\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }\big\} \\ =&\;\liminf_{k\to \infty} \bigg( \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus(\ensuremath{\mathcal{C}} B_R)^2} F(u_k(x)-u_k(y),x-y)\, dx \, dy + \int_{B_R} W(u_k)\, dx\bigg) \\ \geq& \; \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus(\ensuremath{\mathcal{C}} B_R)^2}\liminf_{k\to \infty}F(u_k(x)-u_k(y),x-y)\, dx \, dy + \int_{B_R} W(u)\, dx \\ \geq & \; \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2}F(u(x)-u(y),x-y)\, dx \, dy + \int_{B_R} W(u)\, dx \\ =& \;\ensuremath{\mathcal{E}}(u,B_R). } Hence $u$ is a minimizer and this concludes the proof of the theorem. \end{proof} We make now an observation on the Euler-Lagrange equation related to the energy $\ensuremath{\mathcal{E}}$. \begin{prop}\label{eulerlagr} Let $F$ satisfy \eqref{sym}, \eqref{integr}, \eqref{C2t}, \eqref{1derivt} and $W$ satisfy \eqref{wloc}. 
If $\varphi \in \ensuremath{\mathcal{X}}_R $ and $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$, then \bgs{\label{eq0} \frac{d}{d\ensuremath{\varepsilon}} &\ensuremath{\mathcal{E}}(u+\ensuremath{\varepsilon}\phi,B_R) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig|_{\ensuremath{\varepsilon}=0}\\ =&\; \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} \partial_t F(u(x)-u(y),x-y) (\phi(x)-\phi(y)) \, dx \, dy + \int_{B_R} W'(u(x))\phi(x)\,dx } for any $\phi \in C^\infty_c(B_R)$. \end{prop} \begin{proof} We give a sketch of the proof. First of all, notice that if $\phi\in C_c^\infty(B_R)$, for any $\ensuremath{\varepsilon}>0$ we have that $u+\ensuremath{\varepsilon} \phi \in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$. By Proposition \ref{finiteenergy} it follows that both $\ensuremath{\mathcal{E}}(u,B_R)$ and $\ensuremath{\mathcal{E}}(u+\ensuremath{\varepsilon} \phi,B_R)$ are finite. \\ Since $F(\cdot, x)\in C^1(\ensuremath{\mathbb{R}})$ by the mean value theorem there is $\tau_\ensuremath{\varepsilon}(x,y)$ satisfying $|\tau_\ensuremath{\varepsilon}(x,y)|\leq \ensuremath{\varepsilon}$ such that \bgs{ &\;\frac{ F\left(u(x)-u(y)+\ensuremath{\varepsilon}(\phi(x)-\phi(y)),x-y\right)-F\left(u(x)-u(y),x-y\right)}\ensuremath{\varepsilon} \\ =&\; \partial_t F\left(u(x)-u(y)+ \tau_\ensuremath{\varepsilon} (\phi(x)-\phi(y)),x-y\right) \left(\phi(x)-\phi(y)\right). } The assumption \eqref{1derivt} and the H{\"o}lder inequality lead to \bgs{ \big| \partial_t F\left(u(x)-u(y)+ \tau_\ensuremath{\varepsilon} (\phi(x)-\phi(y)),x-y\right) \left(\phi(x)-\phi(y)\right)\big| \leq F(x,y), } for some $F \in L^1(\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_{R})^2)$. It is enough to use the Dominated Convergence Theorem to conclude the proposition. \end{proof} Furthermore, we prove some form of a strong comparison principle for minimizers. \begin{thm}\label{maxp} Let $F$ satisfy \eqref{sym}, \eqref{integr}, \eqref{C2t}, \eqref{1derivt} and \eqref{uconvex} and let $W$ satisfy \eqref{wloc}. If $\varphi_1, \varphi_2 \in \ensuremath{\mathcal{X}}_R$ and $u_1 \in \mathcal W^{s,p}_{R,\varphi_1}$, $u_2 \in \mathcal W^{s,p}_{R,\varphi_2}$ are two minimizers of $\ensuremath{\mathcal{E}}(\cdot, B_R)$, such that \bgs{ & u_1,u_2 \in L^\infty(B_R), \\ &u_1\geq u_2\quad \mbox{ in } \ensuremath{\mathbb{R}}^n \\ & u_1=u_2 \quad \mbox{ in } B_\delta(\overline x) \subset \subset B_R } for some $\delta>0, \, \overline x\in B_R$, then $u_1=u_2$ almost everywhere in $\ensuremath{\mathbb{R}}n$. \end{thm} \begin{proof} According to Proposition \ref{eulerlagr} we have that \bgs{ \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} &\ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig( \partial_t F(u_2(x)-u_2(y),x-y) - \partial_t F(u_1(x)-u_1(y),x-y) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig)(\phi(x)-\phi(y)) \, dx \, dy \\ &+\int_{B_R} \left(W'(u_2(x))-W'(u_1(x))\right) \phi(x) \, dx=0 } for any $\phi \in C^\infty_c(B_R)$. In particular this equality holds for any \[ \phi\in C^\infty_c(B_\frac{\delta}2(\bar x)), \qquad \phi \geq 0. \] Since $\phi(x)=0$ on $\ensuremath{\mathcal{C}} B_\frac{\delta}2(\bar x)$ and $u_1(x)=u_2(x)$ in $B_\delta(\bar x)$, contributions come only from interactions between $B_\frac{\delta}2(\bar x)$ and $\ensuremath{\mathcal{C}} B_\delta(\bar x)$. 
So, using also \eqref{sym}, we are left with \bgs{\label{brru} & 0= \int_{B_\frac{\delta}2(\bar x) } \left( \int_{ \ensuremath{\mathcal{C}} B_\delta(\bar x) } \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig( \partial_t F(u_1(x)-u_2(y),x-y) - \partial_t F(u_1(x)-u_1(y),x-y) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig)\, dy \right) \phi(x) \, dx \\ & + \int_{ \ensuremath{\mathcal{C}} B_\delta(\bar x) } \left( \int_{B_\frac{\delta}2(\bar x) } \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig( \partial_t F(u_2(x)-u_1(y),x-y)- \partial_t F(u_1(x)-u_1(y),x-y) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig) (-\phi(y)) \, dy \right)\, dx \\ &\;=\, 2 \int_{B_\frac{\delta}2(\bar x) } \left( \int_{ \ensuremath{\mathcal{C}} B_\delta(\bar x) } \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig( \partial_t F(u_1(x)-u_2(y),x-y) - \partial_t F(u_1(x)-u_1(y),x-y) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig)\, dy \right) \phi(x) \, dx . } Let \[ A_{\delta}:=\big\{y\in \ensuremath{\mathcal{C}} B_{\delta} (\bar x)\; \big| \; u_1(y)> u_2(y)\big\} \] and we argue by contradiction, supposing that \eqlab{ \label{ad} |A_\delta|\neq 0.} When $y \in \ensuremath{\mathcal{C}} A_\delta$, by hypothesis $u_1(y)=u_2(y)$, hence \bgs{\label{brru} 0=\int_{B_\frac{\delta}2(\bar x) } \left( \int_{ A_\delta } \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig( \partial_t F(u_1(x)-u_2(y),x-y) - \partial_t F(u_1(x)-u_1(y),x-y) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig)\, dy \right) \phi(x) \, dx. } Denoting for $x \in B_\frac{\delta}2(\bar x),y \in A_\delta$, \[ h(x,y):= \partial_t F(u_1(x)-u_2(y),x-y) - \partial_t F(u_1(x)-u_1(y),x-y) ,\] by \eqref{uconvex} we have that on $A_\delta$ \eqlab{\label{brruf} h(x,y) >0. } Defining $g:B_\frac{\delta}2(\bar x)\to \ensuremath{\mathbb{R}}_+$ as \[ g(x):= \int_{A_\delta} h(x,y)\, dy\] we get that for any $\phi\in C^\infty_c(B_\frac{\delta}2(\bar x),[0,+\infty))$ \[ 0= \int_{B_\frac{\delta} 2(\bar x)} g(x) \phi(x) \, dx. \] It follows that \[ g(x)=0 \quad \mbox{ for almost any } x\in B_\frac{\delta}2 (\bar x),\] which by \eqref{ad} and \eqref{brruf} gives a contradiction. It follows that that $|A_\delta|=0$, hence $u_1 =u_2$ almost anywhere in $\ensuremath{\mathcal{C}} B_{\delta}(\bar x) $.\\ We conclude by noticing that, by \eqref{1derivt}, $g$ is well defined. Indeed \bgs{ &\; \int_{A_\delta} \left| \partial_t F(u_1(x)-u_2(y),x-y)\right| \, dy \leq c_3 \int_{A_\delta} \frac{|u_1(x)-u_2(y)|^{p-1} }{|x-y|^{n+sp}}\, dy \\ \leq &\; 2^{p-2}c_3 \bigg( \|u_1\|^{p-1}_{L^\infty(B_R)} \int_{\ensuremath{\mathcal{C}} B_\delta(\bar x)} |x-y|^{-n-sp}\, dy +\|u_2\|^{p-1}_{L^\infty(B_R)} \int_{B_R \setminus B_\delta(\bar x)}|x-y|^{-n-sp}\, dy \\ &\; + \|\varphi_2\|^{p-1}_{L^\infty(\ensuremath{\mathcal{C}} B_R)} \int_{\ensuremath{\mathcal{C}} B_R} |x-y|^{-n-sp}\, dy\bigg). } We have that $|y-x|\geq |y-\bar x|-|x-\bar x|\geq |y-\bar x|/2$, hence \[ \int_{A_\delta} \left| \partial_t F(u_1(x)-u_2(y),x-y)\right| \, dy\leq C_{n,s,p,\delta} \left( \|u_1\|^{p-1}_{L^\infty(B_R)}+\|u_2\|^{p-1}_{L^\infty(B_R)}+\|\varphi_2\|^{p-1}_{L^\infty(\ensuremath{\mathcal{C}} B_R)} \right), \] and this concludes the proof. \end{proof} \section{Preliminary energy estimates}\label{prelim} The preliminary results in this Section hold in any dimension, however the main result works with our techniques only in dimension two. In fact, this depends on a Taylor expansion of order two, that we do in the next Lemma. 
\begin{lem}\label{first} Let $F$ satisfy \eqref{sym} to \eqref{partial2}, $W$ satisfy \eqref{wloc} and let $ \varphi \in C_c^{\infty}(B_1)$. Also, for any $R>1$ and $y\in \ensuremath{\mathbb{R}}n$, let \[ \Psi_{R,\pm}(y):= y\pm \varphi\left(\frac{y}R\right) e_1 \quad \mbox{ and } \quad u_{R,\pm}(x)= u(\Psi^{-1}_{R,\pm}(x)).\] Then for large $R$ the maps $ \Psi_{R,\pm}$ are diffeomorphisms on $\ensuremath{\mathbb{R}}n$ and \[ \ensuremath{\mathcal{E}}(u_{R,+},B_R) +\ensuremath{\mathcal{E}}(u_{R,-},B_R) -2\ensuremath{\mathcal{E}}(u,B_R) \leq \frac{C}{R^2} \ensuremath{\mathcal{E}}(u,B_R) .\] \end{lem} \begin{proof} From here on, we denote for simplicity \[ u=u(y), \; \bar u= u(\bar y), \; \varphi= \varphi\left( \frac{y}R \right), \; \bar \varphi= \varphi\left(\frac{\bar y}R\right).\] Notice that \eqlab{\label{normaphi} |\varphi - \bar \varphi | \leq \frac{ \|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)}}R |y-\bar y|} and that for any $\delta\in[-1,1]$ \eqlab{ \label{stimamod} | y-\bar y+ \delta e_1(\varphi-\bar \varphi) | \geq \left(1-\frac{2\|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)}}{R}\right)^{\frac12} |y-\bar y|. } Indeed \[ | y-\bar y+ \delta e_1 (\varphi-\bar \varphi) | ^2 = |y-\bar y|^2 + \delta^2 (\varphi -\bar \varphi)^2 + 2 \delta(y_1-\bar y_1)(\varphi-\bar \varphi) \geq |y-\bar y|^2 - 2| \delta| |y_1 -\bar y_1| |\varphi-\bar \varphi|.\] Using \eqref{normaphi} we have that \[ | \delta| |y_1 -\bar y_1| |\varphi-\bar \varphi| \leq \frac{\|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)} }{R} |y -\bar y|^2, \] hence \eqref{stimamod} is proved. Now, checking Lemma 4.3 in \cite{nonlocal}, one sees that $\Psi_{R,\pm}$ are diffeomorphisms for large $R$, and that the change of variables \bgs{\label{change1} x:= \Psi_{R,\pm}(y) , \qquad \bar x= \Psi_{R,\pm}(\bar y) } gives \bgs{\label{dx1} dx= 1\pm \frac{1}R\partial_{x_1}\varphi \left(\frac{y}R\right) + \mathcal{O}\left(\frac{1}{R^2}\right)\, dy } and \eqlab{ \label{dxdy} dx\,d\bar x= 1\pm \frac{1}R\partial_{x_1}\varphi \pm \frac{1}R \partial_{x_1} \bar \varphi + \mathcal{O}\left(\frac{1}{R^2}\right)\, dy\, d\bar y. } With this change of variables, we have that \bgs{ \label{FRR} F(u_{R,\pm}(x)-u_{R,\pm}(\bar x), x-\bar x)&= F (u(\Psi^{-1}_{R,\pm}(x))-u(\Psi^{-1}_{R,\pm}(\bar x)), x-\bar x) \\ &=F\left(u(y)-u(\bar y), y-\bar y +e_1 \left( \pm \varphi\mp \bar \varphi\right) \right). } Notice that $\Psi_{R,\pm}^{-1}(B_R)= B_R$ and $\Psi_{R,\pm}^{-1}(\ensuremath{\mathcal{C}} B_R)= \ensuremath{\mathcal{C}} B_R$. Changing variables we have that \eqlab{\label{psibr} &\iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} F(u_{R,\pm}(x)-u_{R,\pm}(\bar x), x-\bar x) \, dx \, d\bar x \\ = & \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} F(u-\bar u, y-\bar y \pm e_1(\varphi-\bar \varphi)) \left(1\pm \frac{1}R\partial_{x_1}\varphi \pm \frac{1}R \partial_{x_1} \bar \varphi + \mathcal{O}\left(\frac{1}{R^2}\right)\right)\, dy\, d\bar y. } Thanks to \eqref{stimamod}, \eqref{mon2} and \eqref{homo}, for any $R$ large enough and any $\delta \in [-1,1]$ we have the estimate \eqlab{ \label{primader} F(u-\bar u, y-\bar y \pm \delta e_1(\varphi-\bar \varphi)) \leq &\, F\left(u-\bar u,\left(1-\frac{2\|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)} }{R}\right)^{\frac12} (y-\bar y)\right) \\ \leq &\, \left(1-\frac{2\|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)} }{R}\right)^{\frac{-n-sp-1}{2} } F(u-\bar u, y -\bar y). 
} We define the function \[ g\colon\ensuremath{\mathbb{R}}\to \ensuremath{\mathbb{R}}_+, \quad g(h):=F(u-\bar u, y-\bar y + h e_1 (\varphi-\bar \varphi) ) \] and we have that \eqlab{\label{gzero} g(0)= F(u-\bar u, y-\bar y) , \quad g(\pm 1)=F(u-\bar u, y-\bar y \pm e_1(\varphi- \bar \varphi) ). } Also, we take the derivatives \bgs{ & g'(h)= \partial_{x_1} F ( u-\bar u, y-\bar y+he_1 ( \varphi-\bar \varphi ) ) (\varphi-\bar \varphi ) , \\ & g''(h)= \partial^2_{x_1} F ( u-\bar u, y-\bar y+h e_1( \varphi-\bar \varphi ) ) (\varphi-\bar \varphi )^2. } Using \eqref{partial1}, \eqref{normaphi}, \eqref{stimamod} and \eqref{primader} we obtain \bgs{ |g'(h)|\leq &\, c_1 |F ( u-\bar u, y-\bar y+he_1 ( \varphi-\bar \varphi ) )| \frac{|\varphi -\bar \varphi| }{|y-\bar y+he_1 ( \varphi-\bar \varphi )|} \\ \leq &\; c_1 \left(1-\frac{2\|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)} }{R}\right)^{\frac{-n-sp-2}{2}} \frac{\|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)} } R F(u-\bar u, y-\bar y) , } hence \eqlab{ \label{gprimo} |g'(h)|\leq g(0)\mathcal O \left(\frac{1}{R}\right). } In the same way, using \eqref{partial2} we get that \eqlab{ \label{gsec} |g''(h)|\leq g(0)\mathcal O \left(\frac1{R^2}\right). } By \eqref{C2} since $g\in C^2(\ensuremath{\mathbb{R}})$ with a Taylor expansion we have \[ g(h)=g(0)+g'(\delta)h \] for some $\delta = \,\delta(h) \in (0,h)$, hence \bgs{\label{g11} g(1)= g(0)+g'(\delta_+) ,\qquad g(-1)= g(0)-g'(\delta_-), \qquad \textcolor{black}{\mbox{ for some }}\delta_+\in(0,1), \delta_- \in (-1,0).} Moreover, there exists $\tilde \delta \in (\delta_-,\delta_+)$ such that \eqlab{ \label{secondg} g'(\delta_+) - g'(\delta_-) = g''(\tilde \delta) (\delta_+-\delta_-). } So with this Taylor expansions and formula \eqref{psibr} we obtain \eqlab{ \label{krru11} \ensuremath{\mathcal{K}}_R(u_{R,+}) + \ensuremath{\mathcal{K}}_R(u_{R,-}) = & \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} g(1) \left(1+ \frac{1}R\partial_{x_1}\varphi + \frac{1}R \partial_{x_1} \bar \varphi + \mathcal{O}\left(\frac{1}{R^2}\right)\right) \\ &\quad +g(-1) \left(1- \frac{1}R\partial_{x_1}\varphi - \frac{1}R \partial_{x_1} \bar \varphi + \mathcal{O}\left(\frac{1}{R^2}\right)\right)\, dy\, d\bar y \\ = &\; \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} g(0) \left(2+\mathcal O\left(\frac{1}{R^2}\right) \right) \, dy\,d\bar y \\ & + \; \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} \left(g'(\delta_+) -g'(\delta_-) \right) \left(1+\mathcal O\left(\frac{1}{R^2}\right) \right) \, dy\,d\bar y \\ &+\;\iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} \frac{1}R \left( g'(\delta_+) + g'(\delta_-)\right) (\partial_{x_1} \varphi + \partial_{x_1} \bar \varphi) \, dy\,d\bar y \\ = &\; \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2}g(0) \left(2+\mathcal O\left(\frac{1}{R^2}\right) \right)+T_1(y,\bar y)+T_2(y,\bar y) \, dy \, d\bar y.} In order to have an estimate on $T_1$, we use \eqref{secondg} and get that \bgs{T_1(y,\bar y) \leq & \; \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| g'(\delta_+) -g(\delta_-) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \left(1+\mathcal O\left(\frac{1}{R^2}\right) \right) \, dy\,d\bar y \\ \leq&\; \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| g''(\tilde \delta) \ensuremath{\mathcal W^{s,p}_{R,\varphi} }ig| \left(2+\mathcal O\left(\frac{1}{R^2}\right) \right) \, dy\,d\bar y ,} where we have used that $\delta_+-\delta_-\leq2$. 
By \eqref{gsec} we obtain \[T_1(y,\bar y)\leq g(0)\mathcal O\left(\frac{1}{R^2}\right).\] On the other hand \bgs{ T_2(y,\bar y) \leq\frac{2\|\varphi\|_{C^1(\ensuremath{\mathbb{R}}n)} } R \left(|g'(\delta_+)|+|g'(\delta_-)|\right) , } which by \eqref{gprimo} leads to \[ T_2(y,\bar y) \leq g(0)\mathcal O\left( \frac{1}{R^2} \right).\] Therefore in \eqref{krru11} we have that \bgs{& \ensuremath{\mathcal{K}}_R(u_{R,+}) + \ensuremath{\mathcal{K}}_R(u_{R,-}) \leq \iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} g(0) \left(2+\mathcal O\left(\frac{1}{R^2}\right) \right) \, dy\,d\bar y = \ensuremath{\mathcal{K}}_R(u) \left(2+\mathcal O\left(\frac{1}{R^2}\right) \right).} For the potential energy, the computation easily follows. It suffices to apply the change of variables \eqref{change1} and to recall that $\Psi_{R,\pm}^{-1}(B_R)= B_R$. We get \bgs{ \int_{B_R} W(u_{R,+}(x)) \, dx &+ \int_{B_R}W(u_{R,-}(x) )\, dx\\ = &\;\int_{B_R}W(u (\Psi^{-1}_{R,+}(x)))\, dx +\int_{B_R}W(u (\Psi^{-1}_{R,-}(x))) \, dx\\ = &\; \int_{B_R}W(u (y)) \bigg(1+ \frac{1}R\partial_{x_1}\varphi \left(\frac{y}R\right)+ \mathcal{O}\left(\frac{1}{R^2}\right)\bigg)\, dy \\ &\;+ \int_{B_R}W(u (y)) \bigg(1- \frac{1}R\partial_{x_1}\varphi \left(\frac{y}R\right) + \mathcal{O}\left(\frac{1}{R^2}\right)\bigg)\, dy\\ =&\; \bigg(2+\mathcal{O}\left(\frac{1}{R^2}\right)\bigg) \int_{B_R}W(u (y)) \, dy.} This concludes the proof of Lemma \ref{first}. \end{proof} We give now the following uniform bound on large balls of the energy of the minimizers. This result is an adaptation of Theorem 1.3 in \cite{densityEs} and it works in any dimension. \begin{thm}\label{thmunif} Let $F$ satisfy \eqref{sym}, \eqref{mon} and \eqref{integr} and $W$ satisfy \eqref{www}. If $u$ is a minimizer in $B_{R+2}$ for a large $R$, such that $|u|\leq 1$, then \[ \ensuremath{\mathcal{E}}(u,B_R)\leq \begin{cases} CR^{n-1} \quad &\mbox{if} \quad s\in \big(\frac{1}{p},1\big),\\ CR^{n-1}\log R \quad &\mbox{if} \quad s=\frac{1}{p},\\ CR^{n-sp} \quad &\mbox{if} \quad s\in \big(0,\frac{1}{p}\big), \end{cases} \] for some positive constant $C$ depending on $n, s$ and $W$. \end{thm} This type of energy estimates for the fractional Laplacian are proved in \cite[Theorem 1.3]{densityEs} (see also \cite[Theorem 4.1.2]{nonlocal}). We give a sketch of the proof following \cite{densityEs}, pointing out which assumptions on $F$ make the proof work in our case. \begin{proof}[Proof of Theorem \ref{thmunif}] As a first step, one introduces the auxiliary functions \bgs{\psi(x):=-1+2 \min&\big\{ (|x|-R-1)_+,1\big\}, \quad v(x):=\min \big\{u(x), \psi(x)\big\}, \\& d(x):= \max\big\{ (R+1-|x|),1 \big\}.} Notice that $|\psi|, |v| \leq 1$. For $|x-y|\leq d(x)$ we have that \begin{equation}\label{acpsi1} |\psi(x)-\psi(y)|\leq \frac{2 |x-y|}{d(x)}.\end{equation} Moreover, one obtains the estimate \begin{equation} \label{acdx1} \int_{B_{R+2}} d(x)^{-sp}\, dx \leq \begin{cases} CR^{n-1} \quad & \mbox{if} \quad s\in \big(\frac{1}{p},1\big),\\ CR^{n-1}\log R \quad &\mbox{if} \quad s=\frac{1}{p},\\ CR^{n-sp} \quad & \mbox{if} \quad s\in \big(0,\frac{1}{p}\big) . 
\end{cases} \end{equation} Also, by \eqref{integr} and \eqref{acpsi1} we get that \bgs{ &\int_{\ensuremath{\mathbb{R}}n} F(\psi(x)-\psi(y),x-y) \, dy \leq c^*\int_{\ensuremath{\mathbb{R}}n} \frac{|\psi(x)-\psi(y)|^p}{|x-y|^{n+sp}}\,dy \\ \leq &\; c^*\int_{|x-y|\leq d(x) } \frac{|\psi(x)-\psi(y)|^p}{|x-y|^{n+sp}} \,dy +c^*\int_{|x-y|\geq d(x) } \frac{|\psi(x)-\psi(y)|^p}{|x-y|^{n+sp}}\,dy \\ \leq &\;c^*\, d(x)^{-p} \int_{|x-y|\leq d(x)} |x-y|^{p-n-sp} \, dy + c^*\,\int_{|x-y|\geq d(x)} |x-y|^{-n-sp }\, dy \leq c d(x)^{-sp}.} It follows that \bgs{ \ensuremath{\mathcal{E}}(\psi, B_{R+2}) \leq&\; \int_{B_{R+2} } \left(\int_{\ensuremath{\mathbb{R}}n} F(\psi(x)-\psi(y),x-y) \, dy\right)\, dx + \int_{B_{R+2}} W(\psi)\, dx\\ \leq &\;c \int_{B_{R+2}} d(x)^{-sp} \, dx+ \int_{B_{R+2}} W(\psi)\, dx.} Moreover $W(-1)=0$ and $\psi=-1$ on $B_{R+1}$, so \[ \int_{B_{R+2} } W(\psi)\, dx = \int_{B_{R+2}\setminus B_{R+1} } W(\psi)\, dx\leq C R^{n-1}.\] With this, we obtain the bound \begin{equation} \label{psir2} \ensuremath{\mathcal{E}}(\psi, B_{R+2}) \leq \begin{cases} CR^{n-1} \quad & \mbox{if} \quad s\in \big(\frac{1}{p},1\big),\\ CR^{n-1}\log R \quad &\mbox{if} \quad s=\frac{1}{p},\\ CR^{n-sp} \quad & \mbox{if} \quad s\in \big(0,\frac{1}{p}\big) , \end{cases} \end{equation} where $C=C(n,s,p)>0$. \\ Letting \[ A:=\{v=\psi\},\] we notice that $B_{R+1}\subseteq A\subseteq B_{R+2}$ and that for $x\in A, y\in \ensuremath{\mathcal{C}} A$ \[ |v(x)-v(y)|\leq \max\big\{|u(x)-u(y)|, |\psi(x)-\psi(y)|\big\}.\] Then by \eqref{mon} we have that \[ F(v(x)-v(y),x-y) \leq F(u(x)-u(y),x-y)+ F(\psi(x)-\psi(y),x-y) .\] Integrating on $A\times \ensuremath{\mathcal{C}} A$ we get that \[v(A,\ensuremath{\mathcal{C}} A)\leq u(A,\ensuremath{\mathcal{C}} A)+ \psi (A,\ensuremath{\mathcal{C}} A).\] We recall that $u$ is a minimizer in $B_{R+2}$, and $u=v$ outside of $B_{R+2}$ (and outside of $A$), so \bgs{ 0\leq &\; \ensuremath{\mathcal{E}}(v,B_{R+2})-\ensuremath{\mathcal{E}}(u,B_{R+2}) =\ensuremath{\mathcal{E}}(v,A)-\ensuremath{\mathcal{E}}(u,A) .} Since $v=\psi$ on $A$, it follows that \bgs{ u(A,A) + \int_A W(u)\, dx \leq \ensuremath{\mathcal{E}}(\psi,A)} and, given that $B_{R+1}\subseteq A\subseteq B_{R+2}$, \bgs{u(B_{R+1},B_{R+1}) + \int_{B_{R+1}} W(u)\, dx \leq \ensuremath{\mathcal{E}}(\psi,B_{R+2}).} Also, one has that \bgs{ u(B_R,\ensuremath{\mathcal{C}} B_{R+1}) \leq &\;C \int_{B_{R+2} } d(x)^{-sp} \, dx \leq \begin{cases} CR^{n-1} \quad & \mbox{if} \quad s\in \big(\frac{1}{p},1\big),\\ CR^{n-1}\log R \quad &\mbox{if} \quad s=\frac{1}{p},\\ CR^{n-sp} \quad & \mbox{if} \quad s\in \big(0,\frac{1}{p}\big). \end{cases}} Using this together with the estimate \eqref{psir2}, we obtain the claim of Theorem \ref{thmunif}. \end{proof} We have the following very useful lemma. \begin{lem}\label{maxmin} Let $F$ be convex in the first variable and such that it satisfies \eqref{sym}, and let $W$ be such that it satisfies \eqref{wloc}. Let $\Omega$ be a measurable set and $u,v\colon \ensuremath{\mathbb{R}}^n \to \ensuremath{\mathbb{R}}$ be two measurable functions. Let \[ m:=\min\{u,v\}, \qquad M:=\max\{u,v\},\] then \[ \ensuremath{\mathcal{E}}(m,\Omega) +\ensuremath{\mathcal{E}}(M,\Omega) \leq \ensuremath{\mathcal{E}}(u,\Omega)+\ensuremath{\mathcal{E}}(v,\Omega).\] \end{lem} We omit the proof of this known result, a complete proof is given e.g. in \cite[Lemma 4.5.15]{tesiluca} (and the forthcoming paper \cite{cl}), while a general abstract version in \cite{sunra}. 
\begin{prop}\label{blah} \noindent If $W$ satisfies \eqref{www}, let \sys[ \tilde W:=]{ &W, & \mbox{ in } \; &[-1,1]\\ &0, & \mbox{ in } \; & \ensuremath{\mathbb{R}} \setminus [-1,1] ,} \[ \ensuremath{\mathcal{E}} (u,B_R) := \ensuremath{\mathcal{K}}_R(u) + \int_{B_R} W (u) \, dx, \qquad\mbox{ and } \qquad \tilde \ensuremath{\mathcal{E}} (u,B_R) := \ensuremath{\mathcal{K}}_R(u) + \int_{B_R} \tilde W (u) \, dx. \] For any measurable function $\tilde u\colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}}$, denote also \[ u := \max\left\{ \min\left\{ \tilde u,1\right\}, -1\right\}. \] a) It holds that $\ensuremath{\mathcal{E}}(u,B_R) = \tilde \ensuremath{\mathcal{E}} (u, B_R) \leq \tilde \ensuremath{\mathcal{E}} (\tilde u, B_R)$. \noindent Furthermore, if $\varphi \in \ensuremath{\mathcal{X}}_R$ is such that $|\varphi|\leq 1$ and b) if $u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$ is a minimizer for $\ensuremath{\mathcal{E}}(\cdot, B_R)$, then $u$ is a minimizer for $\tilde \ensuremath{\mathcal{E}} (\cdot, B_R)$; c) if $\tilde u\in \ensuremath{\mathcal W^{s,p}_{R,\varphi} }$ is a minimizer for $\tilde \ensuremath{\mathcal{E}}(\cdot, B_R)$, then $u$ is a minimizer for $\ensuremath{\mathcal{E}}(\cdot, B_R)$. \end{prop} \begin{proof} Notice at first that $\tilde W$ satisfies \eqref{wloc}. Then by Lemma \ref{maxmin} and using the notations therein we have that \[ \max \big\{ \tilde \ensuremath{\mathcal{E}}(m, B_R), \tilde \ensuremath{\mathcal{E}}(M,B_R)\big \} \leq \tilde \ensuremath{\mathcal{E}}(u,B_R) + \tilde \ensuremath{\mathcal{E}}(v,B_R).\] given that $F,W\geq0$ (hence $\tilde \ensuremath{\mathcal{E}}(w,B_R) \geq 0$ for any measurable function $w$). Moreover, since $W(\pm 1)=0$ we get that \[ \tilde \ensuremath{\mathcal{E}}(\pm 1, B_R)=0. \] Therefore denoting \bgs{ & \bar u =\min \{ \tilde u,1 \}, \qquad } we obtain \[ \tilde \ensuremath{\mathcal{E}}( u , B_R ) \leq \tilde \ensuremath{\mathcal{E}}( \bar u ,B_R) \leq\tilde \ensuremath{\mathcal{E}} (\tilde u, B_R), \] which concludes a).\\ For b), let $\tilde w\colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}}$ be any competitor for $u$, hence such that $\tilde w=\varphi$ on $\ensuremath{\mathcal{C}} B_R$. Denoting \[ w := \max\left\{ \min\left\{ \tilde w,1\right\}, -1\right\}, \] by the minimality of $u$ and according to a) we have that \[ \tilde \ensuremath{\mathcal{E}} (u, B_R) =\ensuremath{\mathcal{E}} (u,B_R) \leq \ensuremath{\mathcal{E}}(w, B_R) = \tilde \ensuremath{\mathcal{E}}(w, B_R) \leq \tilde \ensuremath{\mathcal{E}} (\tilde w, B_R). \] On the other hand, let $w\colon \ensuremath{\mathbb{R}}n \to [-1,1]$ be any admissible competitor for $u$, thus such $w=\varphi$ on $\ensuremath{\mathcal{C}} B_R$. From the minimality of $\tilde u$ and according to a) we have that \[ \ensuremath{\mathcal{E}}(u,B_R) =\tilde \ensuremath{\mathcal{E}} ( u, B_R) \leq \tilde \ensuremath{\mathcal{E}} (\tilde u, B_R) \leq \tilde \ensuremath{\mathcal{E}} (w, B_R) = \ensuremath{\mathcal{E}} ( w, B_R), \] thus c) is proved. \end{proof} \section{Proof of the main result}\label{thmm} The proof of the main result follows the step of \cite [Theorem 4.2.1]{nonlocal}. We underline the main ideas from \cite{nonlocal}, and focus on the new computations needed for the type of energy here introduced. \begin{proof}[Proof of Theorem \ref{Theorem}] We organize the proof in four steps. \noindent\textbf{Step 1. A geometrical consideration} \\In order to prove that $u$ is one dimensional, one has to prove that the level sets of $u$ are hyperplanes. 
It is thus enough to prove that $u$ is monotone in any direction. Using this, one has that the level sets are both convex and concave, thus flat. \noindent \textbf{Step 2. Energy estimates} \\ Let $R>8$ and $\varphi \in C_c^{\infty}(B_1)$ such that $\varphi=1$ in $B_{1/2}$, and let $e=(1,0)$. Witgh the notations of Lemma \ref{first} we notice that \begin{align} &u_{R,\pm} (y)= u(y) \; &\text{for} \; &y \in \ensuremath{\mathcal{C}} B_R \label{urpiur1}\\ &u_{R,+} (y)= u(y-e) \; &\text{for} \; &y \in B_{R/2}\label{urpiur2} , \end{align} and \[ |u_{R,\pm}|\leq 1.\] We use the notations of Proposition \ref{blah} for $\tilde W, \tilde \ensuremath{\mathcal{E}}$. Since $u$ is a minimizer for $\ensuremath{\mathcal{E}}$ in $B_R$ and $u_{R,-}$ is a competitor, thanks to Lemma \ref{first} (applied to $\tilde \ensuremath{\mathcal{E}}$) and to Proposition \ref{blah} a), we have that \[ \ensuremath{\mathcal{E}}(u_{R,+},B_R) -\ensuremath{\mathcal{E}}(u,B_R) \leq \ensuremath{\mathcal{E}}(u_{R,+},B_R) +\ensuremath{\mathcal{E}}(u_{R,-},B_R)- 2 \ensuremath{\mathcal{E}}(u,B_R)\leq \frac{C}{R^2} \ensuremath{\mathcal{E}}(u,B_R).\] From Theorem \ref{thmunif} applied for $n=2$ it follows that \eqlab{\label{blaa2} \lim_{R\to \infty} \left(\ensuremath{\mathcal{E}}(u_{R,+},B_R)- \ensuremath{\mathcal{E}}(u,B_R) \right)=0.} {We remark that this is the crucial point where we require $n=2$. } \noindent\textbf{Step 3. Monotonicity} \\ Suppose by contradiction that $u$ is not monotone in any direction. So, denoting $e=(1,0)$, up to translation and dilation, we suppose that \[ u(0) >u(e), \qquad \mbox{ and } \quad u(0)>u(-e).\] For $R$ large enough, we denote \[ v_R(x):=\min\{ u(x), u_{R,+}(x)\}, \qquad w_R(x):=\max\{ u(x), u_{R,+}(x)\} \] and remark that $v_R, w_R$ are continuous and that $|v_R|,|w_R|\leq 1$. So $v_R=w_R=u$ on $\ensuremath{\mathcal{C}} B_R$ and since $u$ is a minimizer \[ \ensuremath{\mathcal{E}}(w_R,B_R)\geq \ensuremath{\mathcal{E}}(u,B_R).\] Moreover, by Lemma \ref{maxmin} (applied to $\tilde \ensuremath{\mathcal{E}}$) and Proposition \ref{blah} a), we have that \[ \ensuremath{\mathcal{E}}(v_R,B_R) + \ensuremath{\mathcal{E}}(w_R,B_R)\leq \ensuremath{\mathcal{E}}(u,B_R)+\ensuremath{\mathcal{E}}(u_{R,+},B_R),\] therefore \eqlab{\label{blaa1} \ensuremath{\mathcal{E}}(v_R,B_R)\leq \ensuremath{\mathcal{E}}(u_{R,+},B_R).} Since $u(0)= u_{R,+}(-e)$ and $u(-e)=u_{R,+}(0)$, {using the continuity of the two functions $u$ and $u_{R,+}$}, we obtain that \bgs{ &v_R<u \; \mbox{ in a neighborhood of } 0\\ & v_R=u \mbox{ in a neighborhood of } -e.} This implies that $v_R$ is not identically nor $u$, nor $u_{R,+}$. We remark now that $v_R$ is not a minimizer of $ \ensuremath{\mathcal{E}}(\cdot, B_2)$. Indeed, $u$ is a global minimizer, hence $u,v_R\in W^{s,p}(B_R)$ for any $R$, so $u,v_R\in \ensuremath{\mathcal{X}}_2$. Moreover, \bgs{ & |u|,|v_R|\leq 1 &\mbox{ in } \; & \ensuremath{\mathbb{R}}^2 ,\\ & v_R \leq u &\mbox{ in } \; & \ensuremath{\mathbb{R}}^2, \\ & v_R= u & \mbox{ in } \; & B_\delta(-e) \; \; \mbox{ for some } \delta>0. } If also $v_R$ is a minimizer for $\ensuremath{\mathcal{E}}(\cdot, B_2)$, then from Proposition \ref{blah} b), $u,v_R$ are minimizers for $\tilde \ensuremath{\mathcal{E}} (\cdot, B_2)$. Theorem \ref{maxp} (applied to $\tilde \ensuremath{\mathcal{E}}$) implies that $v_R=u$ on $B_2$, which gives a contradiction. 
According to Theorem \ref{existence} and to Proposition \ref{blah} c), there exists a minimizer $v_R^*$ of $ \ensuremath{\mathcal{E}}(\cdot, B_2)$ such that $v_R^*=v_R$ on $\ensuremath{\mathcal{C}} B_2$ and $|v_R^*|\leq 1$. Let \[ \delta_R:=\ensuremath{\mathcal{E}}(v_R,B_2)- \ensuremath{\mathcal{E}}(v_R^*,B_2)\geq 0.\] We prove that there exists a universal constant $c>0$ such that $\lim_{R\to \infty} \delta_R\geq c$. For this, we define as in Theorem 4.2.1 in \cite{nonlocal} \bgs{&\; \tilde u(x)=u(x-e), \; \; m(x)=\min\{u(x),\tilde u(x)\}, \qquad |\tilde u|, |m| \leq 1, } and observe that $m$ is not a minimizer for $ \ensuremath{\mathcal{E}}(\cdot, B_2)$. Indeed, $u,m\in W^{s,p}(B_R)$ for any $R$, so $u,m\in \ensuremath{\mathcal{X}}_2$. Moreover, \bgs{ & |u|,|m|\leq 1 &\mbox{ in } \; & \ensuremath{\mathbb{R}}^2 ,\\ & m \leq u &\mbox{ in } \; & \ensuremath{\mathbb{R}}^2, } and \bgs{ &m<u \; \mbox{ in a neighborhood of } 0\\ & m=u \mbox{ in a neighborhood of } e. } Using also Proposition \ref{blah} b), if $m$ were a minimizer for $\ensuremath{\mathcal{E}}(\cdot, B_2)$, we would obtain a contradiction by the comparison principle in Theorem \ref{maxp}. According to Theorem \ref{existence} and Proposition \ref{blah} c), there exists a minimizer $z$ of $ \ensuremath{\mathcal{E}}(\cdot, B_2)$ such that $m=z$ in $\ensuremath{\mathcal{C}} B_2$ and $|z|\leq 1$. Furthermore, since $m$ is not a minimizer, the quantity \bgs{c:= \ensuremath{\mathcal{E}}(m,B_2) - \ensuremath{\mathcal{E}}(z,B_2)>0} is a positive constant independent of $R$ (notice that neither $m$ nor $z$ depends on $R$). Let \bgs{ &\; z_R(x):= \psi(x) z(x) +(1-\psi(x)) v_R(x),} with $\psi \in C^{\infty}_c(\ensuremath{\mathbb{R}}n, [0,1])$ a cut-off function such that \sys[\psi(x)=]{& 1, && x\in B_{R/4}\\ &0, && x\in \ensuremath{\mathcal{C}} B_{R/2},} and notice that $|z_R|\leq 1$. It holds that \bgs{ & m= v_R, \; \; z=z_R&& \mbox{ in } B_2\\ & m= v_R= z=z_R=v_R^*&& \mbox{ in } B_{\frac{R}2}\setminus B_2\\ & v_R=z_R=v_R^*, \;\; m=z && \mbox{ in } B_R\setminus B_{\frac{R}2}\\ & m= z, \; \; u=v_R= v_R^*=z_R&& \mbox{ in } \ensuremath{\mathcal{C}} B_{R}.} With this in mind, we get that \eqlab{\label{ghf} & c= \ensuremath{\mathcal{E}}(m,B_2) -\tilde \ensuremath{\mathcal{E}}(z,B_2) \\ =&\; \ensuremath{\mathcal{E}}(m,B_2)-\ensuremath{\mathcal{E}}(v_R,B_2)+\delta_R + \tilde \ensuremath{\mathcal{E}}(v^*_R,B_2) -\tilde \ensuremath{\mathcal{E}}(z_R,B_2)+\tilde \ensuremath{\mathcal{E}}(z_R,B_2)-\tilde \ensuremath{\mathcal{E}}(z,B_2)\\ \leq &\; \ensuremath{\mathcal{E}}(m,B_2)-\ensuremath{\mathcal{E}}(v_R,B_2)+ \ensuremath{\mathcal{E}}(z_R,B_2)- \ensuremath{\mathcal{E}}(z,B_2)+\delta_R , } since $z_R$ is a competitor for $v_R^*$ in $B_2$. Now, for $x\in B_2, y\in \ensuremath{\mathcal{C}} B_{R/2}$ we have $|x-y|\geq |y|/2$, hence \bgs{ &\ensuremath{\mathcal{E}}(m,B_2)-\ensuremath{\mathcal{E}}(v_R,B_2) \\ = &\; 2\int_{B_2}\left( \int_{\ensuremath{\mathcal{C}} B_{\frac{R}2} } F(m(x)-m(y),x-y) -F(m(x)-v_R(y), x-y) dy \right) dx \\ \leq &\; 4 \int_{B_2} \int_{\ensuremath{\mathcal{C}} B_{\frac{R}2} } F\left(2,\frac{y}2 \right) dy \, dx <C R^{-sp} ,} by \eqref{integr} and \eqref{mon2}. The same bound holds for $ \ensuremath{\mathcal{E}}(z_R,B_2)- \ensuremath{\mathcal{E}}(z,B_2) $. It follows from \eqref{ghf} that \bgs{ \label{nnn}& c \leq C R^{-sp} +\delta_R. 
} Sending $R\to \infty$ we get that \[ \lim_{R\to \infty} \delta_R \geq c>0.\] Finally, by minimality, \bgs{ \ensuremath{\mathcal{E}}(u,B_R) \leq \ensuremath{\mathcal{E}}(v_R^*,B_R)=\ensuremath{\mathcal{E}}(v_R,B_R) -\delta_R \leq \ensuremath{\mathcal{E}}(u_{R,+},B_R) -\delta_R,} according to \eqref{blaa1}. Sending $R\to \infty$ we obtain \[ c\leq \lim_{R\to \infty} \left(\ensuremath{\mathcal{E}}(u_{R,+},B_R) - \ensuremath{\mathcal{E}}(u,B_R)\right) ,\] which contradicts \eqref{blaa2}. This concludes the proof of Theorem \ref{Theorem}. \end{proof} \section{Some examples}\label{examples} In this section we give some examples of problems to which our results can be applied. We present two examples of functions $F$ that satisfy the requirements \eqref{sym} to \eqref{uconvex}. As we will see, the setting considered is general enough to cover the energies related to the fractional Laplacian, the fractional $p$-Laplacian and the nonlocal mean curvature. Throughout this section, $W$ is as in \eqref{www}. \begin{example} We consider \eqlab{\label{plapl} F(t,x)=\frac{1}p \frac{|t|^p}{|x|^{n+sp}}} for $p>1$ and $s\in (0,1)$. The nonlocal energy that we study is \eqlab{\label{plalplfrrr} \ensuremath{\mathcal{E}}(u,B_R) =\frac{1}p\iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} \frac{ |u(x)-u(y)|^p}{|x-y|^{n+sp}}\, dx \, dy + \int_{B_R} W(u)\, dx. } We note that the associated equation is given by \[ (-\Delta)^s_p u +W'(u)=0,\] with $(-\Delta)^s_p$ being the fractional $p$-Laplacian, defined as \[(-\Delta)^s_p u (x)= P.V. \int_{\ensuremath{\mathbb{R}}n} \frac{ |u(x)-u(y)|^{p-2} (u(x)-u(y))}{|x-y|^{n+sp}} \, dy .\] Notice that for $p=2$, we obtain the fractional Laplacian. The interested reader can see \cite{nonlocal,hitch,silvestre} and references therein for the fractional Laplacian, and \cite{nhk,plapl,Korvenpa,korvenpaaobstacle} and references therein for the fractional $p$-Laplacian and more general fractional operators. It is not hard to verify that $F$ given in \eqref{plapl} satisfies \eqref{sym} to \eqref{C2} and \eqref{C2t} to \eqref{uconvex}. Also, \eqref{partial1} and \eqref{partial2} follow after simple computations and hold for \[ c_1= n+sp, \quad c_2=(n+sp)(n+sp+1).\] Indeed, it suffices to differentiate $|x|^{-n-sp}$ once and twice with respect to $x_1$ and use that $|x_1|\leq |x|$. It follows that for $n=2$ bounded global minimizers of \eqref{plalplfrrr} are one-dimensional; more precisely, we have the following. \begin{cor} Let $u\colon \ensuremath{\mathbb{R}}n \to [-1,1]$ be a continuous global minimizer of the energy \eqref{plalplfrrr}, with $W$ satisfying \eqref{www}. Then $u$ is one-dimensional. \end{cor} \noindent We remark that \cite[Corollary 3.3]{cozzidg} shows that a global minimizer of \eqref{plalplfrrr} is continuous. \end{example} \begin{example} We consider the function $F$ related to the fractional mean curvature equation. Nonlocal minimal surfaces were introduced in \cite{nms} as boundaries of sets that minimize a nonlocal functional, the fractional perimeter. The first variation of the fractional perimeter is the nonlocal mean curvature, defined as a weighted average of the characteristic function with respect to a singular kernel (the interested reader can check \cite{abat,bucval,dipiersurv} and the references therein). For smooth hypersurfaces that are globally graphs, i.e. 
taking $\partial E$ as a graph in the $e_n$ direction defined by a smooth function $u$, the nonlocal mean curvature is given by \[ \mathcal{I}_s[E](x,x_n) =2 P.V.\int_{\ensuremath{\mathbb{R}}^{n}} \frac{dy}{|x-y|^{n+s}} \int_0^{\frac{u(x)-u(y)}{|x-y|}} \frac{d\rho}{(1+\rho^2)^{\frac{n+s+1}2}} \] for $s\in (0,1)$ and $(x,x_n)\in \partial E$, with $u(x)=x_n$ and taking $\nabla u(x)=0$. See e.g. \cite{caffy} or the Appendix in \cite{bucval} for the proof of this formula. Also, the interested reader may see \cite[Chapter 4]{tesiluca} (and the forthcoming paper \cite{cl}) for the mean curvature (and related nonparametric minimal surfaces). \\ We use the notations \[ g(\rho) = \frac{1}{(1+\rho^2)^{\frac{n+s+1}2}}, \qquad G(\tau) =\int_0^{\tau} g(\rho) \, d\rho, \qquad \mathcal G(t) =\int_0^t G(\tau) \, d\tau.\] Notice that \[ G'(\tau) =g(\tau), \qquad \mathcal G'(t) =G(t) .\] With this, the nonlocal energy to study is \eqlab{ \label{ennms} \ensuremath{\mathcal{E}}(u,B_R) =\iint_{\ensuremath{\mathbb{R}}^{2n}\setminus (\ensuremath{\mathcal{C}} B_R)^2} \frac{dx \, dy}{|x-y|^{n+s-1} } \mathcal G\left(\frac{u(x)-u(y)}{|x-y|}\right)+ \int_{B_R} W(u)\, dx, } and the associated equation is \[ P.V. \int_{\ensuremath{\mathbb{R}}n} \frac{dy}{|x-y|^{n+s} } G\left( \frac{u(x)-u(y)}{|x-y|} \right) + W'(u)=0, \] see also Theorem 1.10 in \cite{dipierro2017decay} for other applications. So, we let \eqlab{ \label{nms} F(t,x)= \frac{1}{|x|^{n+s-1} } \mathcal G\left(\frac{t}{|x|}\right) } and prove that the requirements \eqref{sym} to \eqref{uconvex} hold for $s\in (0,1)$ and $p=1$.\\ \noindent We notice that $g,\mathcal G$ are even, $G$ is odd and \eqlab{ \label{elem} 0< g(\rho) \leq 1, \qquad |G(t)| < \int_{-\infty}^{\infty} g(\rho)\, d \rho < C, \qquad 0\leq \mathcal G(t) \leq C| t|. } Also, the following chain of inequalities holds \eqlab{\label{simpleineq} a^2g(a)\leq aG(a) \leq 2 \mathcal G(a), \quad \mbox{ for any } a\geq 0.} Indeed, since $g$ is decreasing we have that \[ G(a) = \int_0^a g(\rho) \, d \rho \geq \int_0^a g(a) \, d \rho= a g(a).\] Also, denoting $f(a)=aG(a)-2\mathcal G(a)$ we have that \[ f'(a)= a g(a) - G(a) \leq 0.\] So $f$ is decreasing and $f(a)\leq f(0)=0$ for $a\geq 0$. This proves the inequalities in \eqref{simpleineq}. \\ It is easy to check that the assumptions \eqref{sym}, \eqref{mon}, \eqref{mon2} and \eqref{C2} hold for $F$ as in \eqref{nms}. Also, for any $\beta>1$, since $g(\beta\rho)\leq g(\rho)$, we have that \[ \mathcal G (\beta t)= \beta^2 \int_{0}^t d\tau \int_0^\tau g(\beta \rho)\, d\rho \leq \beta^2 \mathcal G ( t).\] Thus, for any $\alpha \in (0,1)$, \[ F\left(t,\alpha x \right)\leq \alpha^{-n-s-1} \frac{1}{|x|^{n+s-1}} \mathcal G \left(\frac{t}{|x|}\right),\] and we get \eqref{homo}. Using \eqref{elem} we obtain \[ F(t,x) = \frac{1}{|x|^{n+s-1}} \mathcal G\left(\frac{t}{|x|} \right) \leq c \frac{|t| }{|x|^{n+s}},\] which is the right hand side of \eqref{integr} for $p=1$. The left hand side follows using the bound in \cite[Lemma 4.2.1]{tesiluca} (and the forthcoming paper \cite{cl}), that is \[ \mathcal G (\tau) \geq c_* (|\tau|-1). \] By computing the derivative of $F$ with respect to $x_1$, we get that \[ |\partial_{x_1} F(t,x)| \leq (n+s-1) \frac{F(t,x)}{|x|} + \frac{1}{|x|^{n+s} } \bigg| \frac{t}{|x|} G\left(\frac{t}{|x|}\right)\bigg|\leq c_1 \frac{F(t,x)}{|x|} \] thanks to \eqref{simpleineq}. 
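For the reader's convenience, we point out one admissible choice of the constant in this example: since $\big|\tau\, G(\tau)\big| \leq 2\, \mathcal G(\tau)$ for every $\tau$ (by \eqref{simpleineq} and the parity of $G$ and $\mathcal G$), the second term above is bounded by $2F(t,x)/|x|$, so one may take, for instance, \[ c_1 = n+s+1 .\]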
Moreover, \[ |\partial^2_{x_1} F(t,x)| \leq C_1 \frac{F(t,x)}{|x|^2} + \frac{1}{|x|^{n+s+1} } \bigg| \frac{t}{|x|} G\left(\frac{t}{|x|}\right)\bigg| + \frac{1}{|x|^{n+s+1} } \frac{t^2}{|x|^2} g\left(\frac{t}{|x|} \right) \leq c_2 \frac{F(t,x)}{|x|^2} \] again by using \eqref{simpleineq}. So \eqref{partial1} and \eqref{partial2} are satisfied. We see also that \[\left| \partial_t F(t,x)\right| \leq \frac{1}{|x|^{n+s}} \left| G\left(\frac{t}{|x|}\right)\right| \leq \frac{C}{|x|^{n+s}},\] where \eqref{elem} was used. Assumptions \eqref{C2t} and \eqref{1derivt} follow. Moreover, it is obvious from the definition of $\mathcal G$ that \[ \partial_t^2 F(t,x)= \frac{1}{|x|^{n+s+1}} g\left(\frac{t}{|x|}\right) >0.\] From this \eqref{uconvex} is straightforward. Theorem \ref{Theorem} then says that in $\ensuremath{\mathbb{R}}^2$, global minimizers of the energy \eqref{ennms} are one-dimensional. To our knowledge, this is a new result in the literature. The precise result goes as follows. \begin{cor} Let $u\colon \ensuremath{\mathbb{R}}n \to [-1,1]$ be a continuous global minimizer of the energy \eqref{ennms} with $W$ satisfying \eqref{www}. Then $u$ is one-dimensional. \end{cor} We remark that, to the best of our knowledge, the continuity of minimizers of the energy \eqref{ennms} is not known. Even in the classical case the problem is quite delicate; the interested reader can consult e.g. \cite{bombi,cafgar,trudy,wang}. \end{example} \begin{appendix} \section{Some known results} \begin{prop}\label{pony} Let $\Omega\subset \mathcal O \subset \ensuremath{\mathbb{R}}n$ be bounded, open sets such that $|\mathcal O \setminus \Omega|>0$ and let $u\colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}}$ be a measurable function. Then \bgs{ \|u\|^p_{L^p(\Omega)} \leq &\; \frac{2^{p-1}}{|\mathcal O \setminus \Omega|} \left( d_{\mathcal O}^{n+sp} \int_{\Omega} \int_{\mathcal O \setminus \Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}} \,dx \, dy+ |\Omega|\, \|u\|^p_{L^p(\mathcal O \setminus \Omega)} \right), } with $d_{\mathcal O}=\mbox{diam}(\mathcal O)$. \end{prop} \begin{proof}We have that \bgs{ |u(x)|^p = &\; |u(x)-u(y)+u(y)|^p \\ = &\; \frac{1}{|\mathcal O \setminus \Omega|} \int_{\mathcal O \setminus \Omega} |u(x)-u(y)+u(y)|^p \, dy \\ \leq&\; \frac{2^{p-1}}{|\mathcal O \setminus \Omega|} \left( \int_{\mathcal O \setminus \Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}} |x-y|^{n+sp} + |u(y)|^p\, dy\right) \\ \leq &\; \frac{2^{p-1}}{|\mathcal O \setminus \Omega|} \left( d_{\mathcal O}^{n+sp}\int_{\mathcal O \setminus \Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}} \, dy +\int_{\mathcal O \setminus \Omega} |u(y)|^p\, dy\right). } The conclusion follows by integrating over $\Omega$. \end{proof} We recall also a fractional Poincar\'e inequality (see \cite[Proposition 2.1]{poing} for the proof). \begin{prop}[A fractional Poincar\'e inequality] \label{poincy} Let $\Omega\subset \ensuremath{\mathbb{R}}n$ be a bounded, open set and let $u\colon \ensuremath{\mathbb{R}}n \to \ensuremath{\mathbb{R}}$ be in $L^1(\Omega)$. Then \bgs{ \|u-u_{\Omega}\|_{L^p(\Omega)} \leq \left(\frac{d_{\Omega}^{{n+sp}}}{|\Omega|}\right)^{\frac{1}p} [u]_{W^{s,p}(\Omega)}, } where \[ u_\Omega=\frac{1}{|\Omega|} \int_{\Omega} u(x) \, dx \qquad \mbox{ and } \qquad d_{\Omega}=\mbox{diam}(\Omega).\] \end{prop} \end{appendix} \end{document}
\begin{document} \title{On the Morita Frobenius numbers \ of blocks of finite reductive groups} \begin{quote} ABSTRACT. We show that the Morita Frobenius number of the blocks of the alternating groups, the finite groups of Lie type in describing characteristic, and the Ree and Suzuki groups is 1. We also show that the Morita Frobenius number of almost all of the unipotent blocks of the finite groups of Lie type in non-defining characteristic is 1, and that in the remaining cases it is at most 2. \end{quote} \section{INTRODUCTION} Let $\ell$ be a prime number, let $k = \overline{\mathbb{F}}_\ell$ be an algebraic closure of the field of $\ell$ elements and let $A$ be a finite dimensional $k$-algebra. For $a \in \mathbb{N}$, the \textit{$a$-th Frobenius twist of $A$}, denoted by $A^{(\ell^a)}$, is a $k$-algebra with the same underlying ring structure as $A$, endowed with a new action of the scalars of $k$ given by $\lambda.x = \lambda^{\frac{1}{\ell^a}}x$ for all $\lambda \in k$, $x \in A$. Two finite dimensional algebras $A$ and $B$ are \textit{Morita equivalent} if mod$(A)$ and mod$(B)$ are equivalent $k$-linear categories. By definition, $A$ and $A^{(\ell^a)}$ are isomorphic as rings, however, they need not even be Morita equivalent as $k$-algebras. The \textit{Morita Frobenius number} of a $k$-algebra $A$, denoted by $mf(A)$, is the least integer $a$ such that $A$ is Morita equivalent to $A^{(\ell^a)}$. The concept of Morita Frobenius numbers was introduced by Kessar in \cite{K2} in the context of Donovan's Conjecture in block theory. Donovan's Conjecture implies that Morita Frobenius numbers of $\ell$-blocks of finite groups are bounded by a function which depends only on the size of the defect groups of the block. Little is known about the values of Morita Frobenius numbers in general, but it is known that a block of a group algebra can have Morita Frobenius number greater than 1 \cite{B/K}. In this paper we calculate the Morita Frobenius numbers of a large class of blocks of finite reductive groups. We have used GAP \cite{GAP4} to check that the Morita Frobenius number of blocks of simple sporadic groups and their covers is 1. See Sections~\ref{sec:grouptheory} to~\ref{sec:unip} for an explanation of the notation in the following theorem. \begin{theorem} \label{thrm:maintheorem} Let $b$ be an $\ell$-block of a quasi-simple finite group $G$. Let $\closure{G} = G/ Z(G)$. Suppose that one of the following holds. \begin{enumerate}[(a)] \item $\closure{G}$ is an alternating group \item $\closure{G}$ is a finite group of Lie type in defining characteristic \item $\closure{G}$ is a finite group of Lie type in non-defining characteristic, $b$ dominates a unipotent block of $\closure{G}$, and $b$ is not one of the following blocks of $E_8$ \begin{itemize} \item $b = b_{E_8}(\phi_1^2.E_6(q), {E_6}[\theta^i]) $ $ (i=1,2) $ with $\ell=2$ and $q \equiv 1$ modulo 4 \item $b = b_{E_8}({\phi_2^2}.{^2E}_6(q), {^2E_6}[\theta^i])$ $(i=1,2) $ with $\ell \equiv 2$ mod $3$ and $q \equiv 2$ modulo $\ell$ \end{itemize} \end{enumerate} Then $mf(b) = 1$. In the excluded cases of part (c), $mf(b)\leq 2$. \end{theorem} We start with some general results on the Morita Frobenius numbers of blocks in Section~\ref{sec:grouptheory}. Section~\ref{sec:AltGps} deals with the case of the alternating groups, and Section~\ref{sec:definingchar} deals with finite groups of Lie type in defining characteristic. 
In Section~\ref{sec:unip} we first present key results from $e$-Harish Chandra theory and unipotent block theory, followed by the results for finite groups of Lie type in non-defining characteristic. Finally, Section~\ref{sec:mainproof} contains the proof of Theorem~\ref{thrm:maintheorem}. \section{GENERAL RESULTS ON MORITA FROBENIUS NUMBERS OF BLOCKS} \label{sec:grouptheory} Throughout, $\ell$ is a prime number, $k$ is an algebraically closed field of characteristic $\ell$, and $G$ is a finite group. \subsection{Results on $k$-algebras.} \label{subsec:kalgs} Let $A$ and $B$ be finite dimensional $k$-algebras and let $A_0$ and $B_0$ be basic algebras of $A$ and $B$ respectively. We define the \textit{Frobenius number} of $A$ to be the least integer $a$ such that $A \cong A^{(\ell^a)}$ as $k$-algebras, and denote it by $frob(A)$. Recall that $A$ and $B$ are Morita equivalent if and only if $A_0 \cong B_0$ as $k$-algebras, and note that $A_0^{(\ell)}$ is a basic algebra of $A^{(\ell)}$. Therefore, $1 \leq mf(A_0) \leq frob(A_0) = mf(A) \leq frob(A)$ for any basic algebra $A_0$ of $A$. Recall that $A$ \textit{has an $\mathbb{F}_{\ell}$-form} if there is a $k$-vector space basis of $A$ such that all structure constants lie in $\mathbb{F}_{\ell}$. By \cite[Lemma 2.1]{K}, $A$ has an $\mathbb{F}_{\ell}$-form if and only if $A \cong A^{(\ell)}$ as $k$-algebras -- that is, if and only if $frob(A) = 1$. \subsection{Results from Block Theory.} \label{subsec:blocktheory} Let $b$ be a block of $kG$. By this we mean that $b$ is a primitive idempotent in $Z(kG)$. We denote the Morita Frobenius and Frobenius numbers of $kGb$ by $mf(b)$ and $frob(b)$, respectively. Let $\sigma: k \rightarrow k$ be the Frobenius automorphism given by $\lambda \mapsto \lambda^\ell$ for all $\lambda \in k$. We also denote by $\sigma: kG \rightarrow kG$ the induced \textit{Galois conjugation} map on $kG$, defined by \[ \sigma \left(\sum_{g \in G} \alpha_g g \right) = \sum_{g \in G} \alpha_g^{\ell}g \] for all $\sum_{g \in G} \alpha_g g \in kG$. Although not an isomorphism of $k$-algebras, Galois conjugation is a ring isomorphism, so it permutes the blocks of $kG$. We call $\sigma(b)$ (or $kG\sigma(b)$) the \textit{Galois conjugate} of $b$ (resp. $kGb$), and we say that two blocks $b$ and $c$ of $kG$ are \textit{Galois conjugate} if $b = \sigma^n(c)$ for some positive integer $n$. \begin{lemma}[Benson and Kessar {\cite{B/K}}] \label{lem:galconj} There is a $k$-algebra isomorphism $kGb^{(\ell)} \cong kG\sigma(b)$ between the first Frobenius twist of $kGb$ and the Galois conjugate of $kGb$. \end{lemma} We fix an \textit{$\ell$-modular system} $(K, \mathcal{O}, k)$ with $K$ a field of characteristic 0 containing a primitive $|G|$-th root of unity, $\nu: K \rightarrow \mathbb{Z} \cup \{ \infty \}$ a complete discrete valuation on $K$, $\mathcal{O}$ the valuation ring of $\nu$ with maximal ideal $\mathfrak{m}$, and $k$ the residue field $\mathcal{O}/\mathfrak{m}$. The canonical quotient map $\mathcal{O}G \rightarrow kG$ induces a bijection between the set of blocks of $\mathcal{O}G$ and the set of blocks of $kG$. If $b$ is a block of $kG$, we denote the corresponding block of $\mathcal{O}G$ by $\tilde{b}$. Blocks $\tilde{b}$ and $\tilde{c}$ of $\mathcal{O}G$ are said to be \textit{Galois conjugate} if $b$ and $c$ are Galois conjugate. Let Irr$_K(G)$ denote the set of $K$-valued irreducible characters of $G$ and let $e_{\chi}$ be the central idempotent of $KG$ corresponding to $\chi \in $ Irr$_K(G)$. 
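For later use, we record the standard explicit formula for these idempotents, \[ e_{\chi} = \frac{\chi(1)}{|G|} \sum_{g \in G} \chi(g^{-1})\, g, \] which is used in the proof of Lemma~\ref{lem:sigmahat} below.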
Let Irr$_K(b) = \{ \chi \in \mbox{ Irr}_K(G) ~|~ \tilde{b}e_{\chi} = e_{\chi} \}$ denote the set of irreducible characters belonging to the block $b$. We fix an automorphism $\hat\sigma : K \rightarrow K$ such that $\hat\sigma(\zeta) = \zeta^{\ell}$ for any $\ell'$-root of unity $\zeta$ in $K$. Then $\hat\sigma$ induces an action on $KG$ via \[\hat\sigma\left(\sum_{g \in G} \alpha_g g\right) = \sum_{g \in G} \hat\sigma(\alpha_g) g\] for all $\sum_{g \in G} \alpha_g g \in KG$, and an action on Irr$_K(G)$ via \[^{\hat\sigma}\chi(g) = \hat\sigma(\chi(g))\] for all $\chi \in $ Irr$_K(G)$ and all $g \in G$. Note that although $\hat\sigma$ may not preserve $\mathcal{O}$, it induces an action on the set of blocks compatible with the action of $\sigma$ on the blocks of $kG$. More precisely, we have the following. \begin{lemma} \label{lem:sigmahat} Let $b$ be a block of $kG$. Then \begin{enumerate}[(a)] \item$\hat{\sigma}(\tilde{b}) = \widetilde{\sigma(b)}$, and \item Irr$_K(\sigma(b)) = \{ ^{\hat\sigma}{\chi} ~|~ \chi \in \mbox{ Irr}_K(b)\}$. \end{enumerate} \end{lemma} \begin{proof} For part (a), see Kessar {\cite[Lemma 3.1]{K3}}. For part (b), we first note the following. \begin{align*} \hat\sigma (e_{\chi}) & = \hspace{1ex} \hat\sigma \left(\frac{ \chi(1) } {|G|} \sum_{g \in G} \chi(g^{-1}) g \right)\\ & = \hspace{1ex} \frac{\chi(1)}{|G|} \sum_{g \in G} \hat\sigma \left(\chi(g^{-1})\right) g\\ & = \hspace{1ex} \frac{^{\hat\sigma}\chi(1)}{|G|} \sum_{g \in G} {^{\hat\sigma}\chi}(g^{-1}) g\\ & = \hspace{1ex} e_{^{\hat\sigma}\chi} \end{align*} Suppose that $\chi \in $ Irr$_K(b)$. Then \[\widetilde{\sigma(b)} e_{^{\hat\sigma}\chi} = \hat\sigma(\tilde{b}) \hat\sigma(e_{\chi}) = \hat\sigma(\tilde{b} e_{\chi} ) = \hat\sigma(e_{\chi}) = e_{^{\hat\sigma}\chi},\] so $^{\hat\sigma}{\chi} \in $ Irr$_K(\sigma(b))$, showing that $ \{ ^{\hat\sigma}{\chi} ~|~ \chi \in \mbox{ Irr}_K(b)\} \subseteq $ Irr$_K(\sigma(b))$. On the other hand, for any $\psi \in $ Irr$_K(\sigma(b))$, since $\hat{\sigma}$ is an automorphism of $K$ we can define a character $\chi \in $ Irr$_K(G)$ by $\chi(g) = {\hat{\sigma}}^{-1}\left( \psi(g)\right)$ for all $g \in G$, so $^{\hat\sigma}{\chi} = \psi$. Since $\psi \in $ Irr$_K(\sigma(b))$, $\widetilde{\sigma(b)} e_{\psi} = e_{\psi}$, so \[\hat{\sigma}\left(\tilde{b}e_{\chi}\right) = \hat{\sigma} (\tilde{b}) \hat{\sigma} ( e_{\chi} ) = \widetilde{\sigma(b)} e_{^{\hat{\sigma}}\chi} = \widetilde{\sigma(b)} e_{\psi} = e_{\psi} = e_{^{\hat{\sigma}}\chi} = {\hat{\sigma}}(e_{\chi}).\] Therefore $\tilde{b}e_{\chi} = e_{\chi}$ so $\chi \in $ Irr$_K(b)$, hence Irr$_K(\sigma(b)) \subseteq \{ ^{\hat\sigma}{\chi} ~|~ \chi \in \mbox{ Irr}_K(b)\}$ and the result follows. \end{proof} \begin{proposition} \label{lem:mainlemma} Let $b$ be a block of $kG$. Suppose that one of the following holds. \begin{itemize} \item[(a)] $b \in \mathbb{Q}G$ \item[(b)] There exist $\chi_1, \dots, \chi_r \in $ Irr$_K(b)$ for some $r \geq 1$ such that $\left(\chi_1 + \dots + \chi_r\right) (g) \in \mathbb{Q}$ for all $g \in G$ \item[(c)] There exists $\chi \in $ Irr$_K(b)$ such that $\chi(1)_{\ell} = |G|_{\ell}$ \item[(d)] The defect groups of $b$ are cyclic or dihedral \end{itemize} Then $mf(b) = 1$. \end{proposition} \begin{proof} If $b \in \mathbb{Q}G$ then $\sigma(b) = b$ since $\mathbb{Q}$ is stabilized by $\hat\sigma$. Therefore $kGb^{(\ell)} \cong kGb$ as $k$-algebras by Lemma~\ref{lem:galconj}, so $frob(b) = 1$ and therefore $mf(b) = 1$. 
Suppose that there exist $\chi_1, \dots , \chi_r \in $ Irr$_K(b)$ for some $r \geq 1$ such that $\left( \chi_1 + \dots + \chi_r \right) (g) \in \mathbb{Q}$ for all $g \in G$. Then $\left( {^{\hat\sigma}{\chi_1}} + \dots + {^{\hat\sigma}{\chi_r}} \right)(g) = \hat\sigma \left( \chi_1 + \dots + \chi_r \right)(g) = \left( \chi_1 + \dots + \chi_r \right) (g)$ for all $g \in G$. It follows that $\{ {^{\hat\sigma}{\chi_1}} , \dots , {^{\hat\sigma}{\chi_r}}\}$ and $\{\chi_1 , \dots , \chi_r\}$ are equal as sets of irreducible characters, so $\sigma(b) = b$ by Lemma~\ref{lem:sigmahat} (b). Therefore $mf(b) = 1$ following the same argument as in part (a). By \cite[Theorem 6.1.1]{Block}, if there exists a $\chi \in $ Irr$_K(b)$ such that $\chi(1)_{\ell} = |G|_{\ell}$, then $kGb$ is a matrix algebra. Therefore $kGb$ has an $\mathbb{F}_{\ell}$-form for any $\ell$, so $mf(b) = 1$, showing part (c). If $b$ has cyclic defect then its basic algebras are Brauer tree algebras, so they are defined over $\mathbb{F}_{\ell}$. If $b$ has dihedral defect then its basic algebras are defined over $\mathbb{F}_2$ \cite{Erd}. Thus if $b$ has cyclic or dihedral defect then the Frobenius number of any basic algebra of $kGb$ is 1, so $mf(b) = 1$. \end{proof} \begin{lemma} \label{lem:groupaut} Let $b$ be a block of $kG$. Suppose that there exists a group automorphism $\varphi \in $Aut~$(G)$ such that for the induced $k$-algebra isomorphism $\varphi: kG \rightarrow kG$, $\varphi(b) = \sigma(b)$. Then $mf(b) = 1$. \end{lemma} \begin{proof} Since $\varphi|_{kGb}: kGb \rightarrow kG\sigma(b)$ is a $k$-algebra isomorphism, $kGb~\cong~kG\sigma(b)$. Therefore $kGb~\cong~kGb^{(\ell)}$ as $k$-algebras by Lemma~\ref{lem:galconj}, so $frob(b) = 1$, whence $mf(b) = 1$. \end{proof} \begin{lemma} \label{lem:twistedalg} Let $G$ be a finite group such that $H^2(G, k^{\times}) \cong C_2$ and let $\gamma \in H^2(G, k^{\times})$. Then $mf(k_{\gamma}G)=1$. \end{lemma} \begin{proof} Define a map $\sigma:~H^2(G, k^{\times}) \rightarrow H^2(G, k^{\times})$ as follows. Let $\gamma \in H^2(G, k^{\times})$ and let $\tilde{\gamma}$ be a 2-cocycle representing $\gamma $. Then $\sigma(\gamma)$ is defined to be the class in $H^2(G, k^{\times})$ represented by the 2-cocycle given by \[ (g, h) \mapsto \sigma(\tilde{\gamma}(g,h)),\] for all $g, h \in G$. It is easy to check that $\sigma$ is a well-defined group homomorphism on $H^2(G, k^{\times}) $. If $\gamma$ is non-trivial then so is $\sigma(\gamma)$, so since $H^2(G, k^{\times}) \cong C_2$, $k_{\gamma}G \cong k_{\sigma(\gamma)}G$ as $k$-algebras. Recall that $k_{\gamma}G^{(\ell)} \cong k_{\gamma}G$ as rings but not necessarily as $k$-algebras, and that scalar multiplication in $k_{\gamma}G^{(\ell)}$ is given by $\lambda.x = \lambda^{\frac{1}{\ell}}x$ for all $\lambda \in k, x \in k_{\gamma}G$. Let $\varphi: k_{\sigma(\gamma)}G \rightarrow k_{\gamma}G^{(\ell)}$ be the map defined by \[ \varphi \left( \sum_{g \in G} \alpha_g g\right) = \sum_{g \in G} \alpha_g^{\frac{1}{\ell}} g\] for all $ \sum_{g \in G} \alpha_g g \in k_{\sigma(\gamma)}G$. This is a ring isomorphism, and \[\varphi \left(\lambda \sum_{g \in G} \alpha_g g\right) = \sum_{g \in G} \left( \lambda \alpha_g \right)^{\frac{1}{\ell}} g = \lambda^{\frac{1}{\ell}} \sum_{g \in G} \alpha_g^{\frac{1}{\ell}} g = \lambda . \varphi \left( \sum_{g \in G} \alpha_g g \right)\] for all $\lambda \in k$ and $ \sum_{g \in G} \alpha_g g \in k_{\sigma(\gamma)}G$, so $\varphi$ is in fact an isomorphism of $k$-algebras. 
Therefore $k_{\gamma}G\cong~ k_{\sigma(\gamma)}G \cong k_{\gamma}G^{(\ell)}$ as $k$-algebras, so $frob(k_{\gamma}G)=1$, hence $mf(k_{\gamma}G) = 1$. \end{proof} \subsection{Dominating blocks.} \label{subsec:domblocks} Let $G$ be a finite group with normal subgroup $Z$. Let $\closure{G} = G/Z$ and let $\mu: G \rightarrow \closure{G}$ be the natural quotient map. Denote also by $\mu: kG \rightarrow k\closure{G}$ the induced $k$-algebra homomorphism given by \[ \mu \left(\sum_{g \in G} \alpha_g g \right) = \sum_{g \in G} \alpha_g \mu(g) \] for all $\sum_{g \in G} \alpha_g g \in kG$. If $b$ is a block of $kG$, then $\mu(b) = \closure{b}_1 + \dots \closure{b}_r$ for some $r \geq 0$, where $\closure{b}_i$ are block idempotents of $k\closure{G}$. Recall that if $r \neq 0$, then $b$ is said to \textit{dominate} the blocks $\closure{b}_i$ of $k\closure{G}$, for $1 \leq i \leq r$, and each block $\closure{b}$ of $k\closure{G}$ is dominated by a unique block of $kG$. By identifying $\chi \in $ Irr$_K(\closure{G})$ with $\chi \circ \mu \in $ Irr$_K(G)$, we can consider Irr$_K(\closure{G})$ as a subset of Irr$_K(G)$. See \cite[Ch. 5, Section 8.2]{N/T} for more details. \begin{lemma} \label{lem:domblocks} Let $b$ be a block of $kG$. \begin{itemize} \item[(a)] $b$ dominates some block of $k\closure{G}$ if and only if $b$ covers the principal block of $kZ$ \item[(b)] $b$ dominates a block $\overline{b}$ of $k\closure{G}$ if and only if $\sigma(b)$ dominates $\sigma(\closure{b})$ \item[(c)] If $Z \leq Z(G)$ and $b$ dominates some block of $k\closure{G}$, then $b$ dominates a unique block of $k\closure{G}$ \item[(d)] If $Z$ is an $\ell'$-group (not necessarily central) and $b$ dominates some block of $k\closure{G}$, then $b$ dominates a unique block $\closure{b}$ of $k\closure{G}$ and $kGb \cong k{\closure{G}}{\closure{b}}$ as $k$-algebras \end{itemize} \end{lemma} \begin{proof} Part (a) follows directly from \cite[Ch. 5 Lemma 8.6 (i)]{N/T}. For part (b), note that by \cite[Ch. 5, Lemma 8.6 (ii)]{N/T}, $b$ dominates $\closure{b}$ if and only if Irr$_K(\overline{b}) \subseteq$ Irr$_K(b)$, where we identify characters in Irr$_K(\closure{G})$ with characters in Irr$_K(G)$ as discussed above. Irr$_K(\overline{b}) \subseteq$ Irr$_K(b)$ if and only if we have the following. \[ \mbox{Irr}_K\left(\sigma\left(\closure{b}\right)\right) = \{ {^{\hat{\sigma}}\chi}~|~ \chi \in \mbox{Irr}_K\left(\closure{b}\right)\} \subseteq \{ {^{\hat{\sigma}}\chi}~ |~\chi \in \mbox{Irr}_K(b)\} = \mbox{ Irr}_K(\sigma(b)) \] Therefore $b$ dominates $\closure{b}$ if and only if $\sigma(b)$ dominates $\sigma\left(\closure{b}\right)$. Part (c) follows from \cite[Ch. 5 Theorem 8.11]{N/T}. Finally for part (d), suppose that $Z$ is an $\ell'$-subgroup of $G$ and that $b$ dominates a block $\closure{b}$ of $k\closure{G}$. Then by \cite[Ch. 5, Theorem 8.8]{N/T}, $\closure{b}$ is the unique block of $k\closure{G}$ dominated by $b$, and Irr$_K(b)$ = Irr$_K\left(\closure{b}\right)$. Therefore $\mu(b) = \closure{b}$, so $\mu:kG \rightarrow k\closure{G}$ restricts to another surjection $\closure{\mu}: kGb \rightarrow k\closure{G}\closure{b}$ given by \[ \closure{\mu} \left( \left(\sum_{g \in G} \alpha_g g \right) b \right) = \left( \sum_{g \in G} \alpha_g \mu(g) \right) \closure{b} \] for all $\sum_{g \in G} \alpha_g g \in kG$. It only remains to show that this is an injection. 
Since Irr$_K(b)$ = Irr$_K\left(\closure{b}\right)$ and rank$_{k}(kGb) = $ dim$_K(KGb)$, \[ \mbox{rank}_{k}(kGb) = \sum_{\chi \in \mbox{\footnotesize{Irr}}_K(b)} \chi(1)^2 = \sum_{\chi \in \mbox{\footnotesize{Irr}}_K\left(\closure{b}\right)} \chi(1)^2 = \mbox{ rank}_{k}(k\closure{G}\closure{b}), \] so $\closure{\mu}: kGb \rightarrow k\closure{G}\closure{b}$ is a $k$-algebra isomorphism as required for part (d). \end{proof} \section{THE ALTERNATING GROUPS} \label{sec:AltGps} \begin{theorem} \label{thrm:alt} Let $G$ be $S_n$, $A_n$, or a double cover of $S_n$ or $A_n$, and let $b$ be a block of $kG$. Then $mf(b) = 1$. \end{theorem} \begin{proof} The irreducible characters of $S_n$ are rational valued, so the result follows immediately for blocks of $kS_n$ by Proposition~\ref{lem:mainlemma} (b). The irreducible characters of $A_n$ arise as restrictions of irreducible characters of $S_n$, which are parametrized by the partitions $\lambda$ of $n$. Suppose $b$ is the block of $kS_n$ containing the irreducible character $\chi_{\lambda}$ associated with a partition $\lambda$. By \cite[Lemma 12.1]{O2}, if $\lambda$ is not symmetric then $\chi_{\lambda}|_{A_n}$ is an irreducible character of $A_n$, so $\chi_{\lambda}|_{A_n}$ is a rational valued character of $A_n$. If $\lambda$ is symmetric then $\chi_{\lambda}|_{A_n} = \chi_{\lambda}^1 + \chi_{\lambda}^2$ is the sum of two irreducible conjugate characters of $A_n$, and these may not be rational valued. By \cite[Proposition 12.2]{O2}, if $b$ has non-trivial defect, then $\chi_{\lambda}^1$ and $\chi_{\lambda}^2$ appear in the same block of $kA_n$, and we note that their sum is rational valued. If $b$ has trivial defect then $\chi_{\lambda}^1$ and $\chi_{\lambda}^2$ are in separate blocks of $kA_n$ \cite[Theorem 6.1.46]{J/K}, each of defect zero. Therefore, any block of $kA_n$ satisfies the hypothesis of at least one of parts (a), (b) and (d) of Proposition~\ref{lem:mainlemma}, and therefore, the Morita Frobenius number of all blocks of $kA_n$ is 1. Let $\widetilde{S}_n$ denote a double cover of the symmetric group. When $\ell$ is odd, $S_n$ is a quotient of $\widetilde{S}_n$ by a central $\ell'$-subgroup, so by \cite[Ch. 5 Theorem 8.8]{N/T} $k\widetilde{S}_n$ has two types of blocks -- blocks which dominate unique blocks of $kS_n$, and blocks which do not dominate any block of $kS_n$. First, suppose $c$ is a block of $k\widetilde{S}_n$ which dominates a block $b$ of $kS_n$. Then $k\widetilde{S}_nc \cong kS_nb$ as $k$-algebras by Lemma~\ref{lem:domblocks} (d), so $mf(c) = mf(b) = 1$. Now suppose $c$ is a block of $k\widetilde{S}_n$ which does not dominate a block of $kS_n$. Then $c$ contains only spin characters and these are parametrized by the strict partitions of $n$ -- partitions of $n$ which have no repeated parts. The \emph{parity} of a partition is \begin{equation*} \epsilon(\lambda)=\begin{cases} 0 & \text{if ($n$ minus the number of parts in $\lambda$) is even},\\ 1 & \text{otherwise}. \end{cases} \end{equation*} If $\epsilon(\lambda)= 0$ then $\lambda$ has one associated spin character, $\chi_{\lambda}$. Then $\chi_{\lambda}(g) \neq 0$ only if $g$ has cycle type with all odd parts, and the character values can be calculated using an analogue of the Murnaghan Nakayama formula \cite{Mo}. In particular, $\chi_{\lambda}(g) \in \mathbb{Q}$ for all $g \in G$. If $\epsilon(\lambda)= 1$ then $\lambda$ has two associated spin characters, $\chi_{\lambda}$ and its \emph{associate} $\chi_{\lambda}^{a}$, and there are two possibilities to consider. 
Firstly, if $\lambda$ is equal to its $\ell$-bar core (see \cite[Definition 5]{Ca}) then $\chi_{\lambda}$ and $\chi_{\lambda}^{a}$ lie in $\ell$-blocks of defect zero. Secondly, if $\lambda$ is not equal to its $\ell$-bar core, then $\chi_{\lambda}$ and $\chi_{\lambda}^{a}$ appear in the same block and $\chi_{\lambda}(g) = - \chi_{\lambda}^{a}(g)$ for all $g \in \widetilde{S}_n$ \cite[Theorems A and B]{Ca}. Therefore $\left( \chi_{\lambda} + \chi_{\lambda}^{a} \right) (g) \in \mathbb{Q}$ for all $g \in G$. Thus the result follows for all blocks $c$ of $k\widetilde{S}_n$ when $\ell$ is odd by Proposition~\ref{lem:mainlemma} (a), (b) and (d). When $\ell=2$, the 2-blocks of $k\widetilde{S}_n$ are in one-to-one correspondence with the 2-blocks of $kS_n$ \cite[Ch. 5, Theorem 8.11]{N/T}, so each block of $k\widetilde{S}_n$ contains at least one rational valued character of $S_n$. The result therefore follows for all 2-blocks of $k\widetilde{S}_n$ by Proposition~\ref{lem:mainlemma} (b). Finally, let $\widetilde{A}_n$ denote a double cover of $A_n$. Suppose $d$ is a block of $k\widetilde{A}_n$ covered by a block $c$ of $k\widetilde{S}_n$. If $c$ has non-trivial defect, then by \cite[Proposition 3.16 (i)]{K2}, $d = c$ so $k\widetilde{A}_n d = k\widetilde{A}_n c$. By the arguments for $k\widetilde{S}_n$ above, $k\widetilde{S}_n c$ satisfies at least one of the hypotheses of parts (a), (b) and (d) of Proposition~\ref{lem:mainlemma} and therefore so does $k\widetilde{A}_n c$. It follows that $mf(k\widetilde{A}_n d) = mf(k\widetilde{A}_n c) = 1$. Now suppose that $c$ has trivial defect. Then $d$ also has trivial defect so $mf(d) = 1$ by Proposition~\ref{lem:mainlemma} (d). \end{proof} \section{FINITE GROUPS OF LIE TYPE IN DEFINING CHARACTERISTIC} \label{sec:definingchar} \begin{theorem} \label{lem:autgalconj} Let $\textbf{G}$ be a simple, simply-connected algebraic group defined over an algebraic closure of the field of $\ell$ elements. Let $q$ be a power of $\ell$ and let $F: \textbf{G} \rightarrow \textbf{G}$ be a Steinberg morphism with respect to an $\mathbb{F}_q$-structure with finite group of fixed points, $\textbf{G}^F$. Let $b$ be a block of of $k\textbf{G}^F$ with Galois conjugate $\sigma(b)$. Then there exists a group automorphism $\varphi:\textbf{G}^F \rightarrow \textbf{G}^F$ such that for the induced $k$-algebra isomorphism $\varphi: k\textbf{G}^F \rightarrow k\textbf{G}^F$, $\varphi(b) = \sigma(b)$. \end{theorem} \begin{proof} By \cite[Theorems 8.3, 8.5]{H}, since $\textbf{G}$ is simply-connected and $\ell$ divides $q$, $k\textbf{G}^F$ has $|Z(\textbf{G}^F)| + 1$ blocks; one of trivial defect which contains the Steinberg character, and $|Z(\textbf{G}^F)|$ of full defect. Note that these results also hold for the Suzuki and Ree groups. First suppose that $Z(\textbf{G}^F) \leq C_2$. Then $k\textbf{G}^F$ has at most three blocks. One of these blocks contains the trivial character and another contains the Steinberg character, so by the proof of Proposition~\ref{lem:mainlemma} (b), all blocks $b$ of $k\textbf{G}^F$ are stabilized by Galois conjugation. We can therefore let $\varphi: \textbf{G}^F \rightarrow \textbf{G}^F$ be the identity map. Now suppose that $Z(\textbf{G}^F) \cong C_m$ for some $m>2$ coprime to $\ell$. Let $Z(\textbf{G}^F) = \langle g \rangle$. 
Then $Z(\textbf{G}^F)$ has $m$ irreducible characters $\chi_i : Z(\textbf{G}^F) \rightarrow K$, and to each character there is an associated central primitive idempotent $e_i$ of $KZ(\textbf{G}^F)$, \[ e_i = \frac{1}{m} \sum_{0 \leq a \leq m-1} \chi_i(g^{a})g^{-a}\] for $0 \leq i \leq m-1$. Since $m$ is coprime to $\ell$ it is invertible in $\mathcal{O}$, so $e_i \in \mathcal{O}\textbf{G}^F$. Let $\bar{e}_i$ be the image of $e_i$ in $k\textbf{G}^F$ under the canonical quotient mapping $\mathcal{O}\textbf{G}^F \rightarrow k\textbf{G}^F$, \[ \bar{e}_i = \frac{1}{m} \sum_{0 \leq a \leq m-1} \overline{\chi_i(g^{a})} g^{-a}.\] Then $\bar{e}_i$ is a block of $kZ(\textbf{G}^F)$ and is a central, but not necessarily primitive, idempotent of $k\textbf{G}^F$. Since $k\textbf{G}^F$ has $m+1$ blocks, there are exactly $m+1$ primitive central idempotents in $k\textbf{G}^F$. Clearly, the blocks of $kZ(\textbf{G}^F)$ are $\textbf{G}^F$-stable. Therefore, precisely one $\bar{e}_i$ is imprimitive in $k\textbf{G}^F$. Since the trivial and Steinberg characters of $\textbf{G}^F$ both restrict to the trivial character on $Z(\textbf{G}^F)$, it follows that the principal block of $kZ(\textbf{G}^F)$ is imprimitive in $k\textbf{G}^F$ and splits into the principal and Steinberg blocks of $k\textbf{G}^F$. Galois conjugation stabilizes the principal and Steinberg blocks, as discussed above, so it only remains to consider the $m-1$ blocks of $k\textbf{G}^F$ with block idempotent $\bar{e}_i$. Galois conjugation acts on $\bar{e}_i$ by \[ \sigma(\bar{e}_i) = \frac{1}{m} \sum_{0 \leq a \leq m-1} \overline{\chi_i(g^{a})}^{\ell} g^{-a}.\] The action of $\sigma$ is trivial if $\ell \equiv 1$ mod m, so from now on we assume that $\ell \nequiv 1$ mod $m$. Let $F_{\ell}:\textbf{G} \rightarrow \textbf{G}$ be the $\mathbb{F}_{\ell}$-split Steinberg endomorphism given in \cite[Example 22.6]{M/T} which commutes with the Steinberg morphism $F$. Suppose $g \in \textbf{G}^F$. Then $F(F_{\ell}(g)) = F_{\ell}(F(g)) = F_{\ell}(g)$, so $F_{\ell}(g) \in \textbf{G}^F$ for all $g \in \textbf{G}^F$. Since $F_{\ell}$ is injective, it follows that $F_{\ell}(\textbf{G}^F) = \textbf{G}^F$. Therefore, $F_{\ell}$ is an automorphism of $\textbf{G}^F$ and so it restricts to an automorphism of $Z(\textbf{G}^F)$. We claim that $F_{\ell}(z) = z^{\ell}$ for all $z \in Z(\textbf{G}^F)$. Suppose that $\textbf{G}^F = SL_n(q)$ or $SU_n(q)$. Then if $(\alpha_{ij}) \in Z(\textbf{G}^F$), $F_{\ell}(\alpha_{ij}) = (\alpha_{ij}^{\ell}) = (\alpha_{ij})^{\ell}$. Next, suppose that $\textbf{G}^F =$ Spin$_{2n}^{+}(q)$ with $n \geq 5$ odd and $4|q-1$, so $Z(\textbf{G}^F) \cong C_4$. Since we are assuming that $\ell \nequiv$ 1 mod $m$, this only occurs if $\ell \equiv 3$ mod 4. Suppose that $F_{\ell}|_{Z(\textbf{G}^F)}$ is the trivial automorphism of $C_4$. Then $Z(\textbf{G}^F)$ is central in the fixed points of $F_{\ell}$, Spin$_{2n}^+({\ell})$. But $Z($Spin$_{2n}^+({\ell})) \cong C_2$, so this is impossible. Therefore $F_{\ell}|_{Z(\textbf{G}^F)}$ is the non-trivial automorphism of $C_4$, so $F_{\ell}(z) = z^3 = z^{\ell}$ for every $z \in Z(\textbf{G}^F) $. Now, suppose that $\textbf{G}^F =E_6(q)$ or $^2E_6(q)$ and $Z(\textbf{G}^F) \cong C_3$, so $\ell \equiv 2$ mod $3$. If $F_{\ell}|_{Z(\textbf{G}^F)}$ is the trivial automorphism of $C_3$ then $Z(\textbf{G}^F)$ is central in the fixed points of $F_{\ell}$, $E_6(\ell)$. However, $Z(E_6(\ell))$ is trivial when $\ell \equiv 2$ mod $3$, so again we get a contradiction. 
Therefore $F_{\ell}|_{Z(\textbf{G}^F)}$ is the non-trivial automorphism of $C_3$, so $F_{\ell}(z) = z^2 = z^{\ell}$ for every $z \in Z(\textbf{G}^F) $. This shows the claim for all $\textbf{G}^F$ such that $Z(\textbf{G}^F) \cong C_m$ when $\ell \nequiv 1$ mod $m$. The automorphism $F_{\ell}$ therefore induces an action on $\bar{e}_i$ as follows. \begin{align*} F_{\ell}(\bar{e}_i) & = \hspace{1ex} \frac{1}{m} \sum_{0 \leq a \leq m-1} \overline{\chi_i(g^a)} F_{\ell}(g^{-a}) \\ & = \hspace{1ex} \frac{1}{m} \sum_{0 \leq a \leq m-1} \overline{\chi_i(g^a)} g^{-\ell a} \end{align*} Let $\varphi = F_{\ell} ^{\phi(m)-1}$, where $\phi$ is the Euler totient function, and let $\omega$ be a primitive $m$-th root of unity such that $\chi_i(g^a) = \omega^{ia}$ for $1 \leq a \leq m$. Then \begin{align*} \varphi(\bar{e}_i) & = \hspace{1ex} \frac{1}{m} \sum_{0 \leq a \leq m-1} (\overline{\omega^{ia}}) g^{-\ell^{\phi(m)-1}a} \\ & = \hspace{1ex} \frac{1}{m} \sum_{0 \leq a' \leq m-1} (\overline{\omega^{i \ell a'}}) g^{-a'}, \end{align*} letting $a' =\ell^{\phi(m)-1}a$ so that $ \ell a' =\ell^{\phi(m)}a \equiv a $ mod $m$. Therefore \[ \varphi(\bar{e}_i) =\frac{1}{m} \sum_{0 \leq a' \leq m-1} \overline{\chi_i(g^{a'})}^{\ell} g^{-a'} = \sigma(\bar{e}_i).\] This shows the result for all $\textbf{G}^F$ such that $Z(\textbf{G}^F) \cong C_m$, $m > 2$ and $m$ is coprime to $\ell$. Finally, suppose that $\textbf{G}^F = $ Spin$_{2n}^+ (q)$, with $n \geq 4$ even and $\ell$ odd, so $Z(\textbf{G}^F) \cong C_2 \times C_2$. The irreducible characters of $C_2 \times C_2$ are rational valued, so the associated central primitive idempotents of $kZ(\textbf{G}^F)$ are stabilized by Galois conjugation. It follows that the central primitive idempotents of $k\textbf{G}^F$ are also stabilized by Galois conjugation, so again, we can let $\varphi$ be the identity map. \end{proof} \begin{corollary} \label{corr:defining} Let $k\textbf{G}^F$ be as in Theorem~\ref{lem:autgalconj}. Then, \begin{enumerate}[(a)] \item For any block $b$ of $k\textbf{G}^F$, $mf(b) = 1$, and \item If $Z$ is a non-trivial central subgroup of $\textbf{G}^F$ and $\overline{b}$ is a block of $k(\textbf{G}^F/Z)$, then $mf\left(\overline{b}\right) = 1$. \end{enumerate} \end{corollary} \begin{proof} Part (a) follows from Theorem~\ref{lem:autgalconj} and Lemma~\ref{lem:groupaut}. For part (b), suppose that $\overline{b}$ is a block of $k(\textbf{G}^F/Z)$ dominated by a block $b$ of $k\textbf{G}^F$. Then by part (a), $mf(b) = 1$. As we are in defining characteristic, $Z(\textbf{G}^F)$ is an $\ell'$-group, so it follows from Lemma~\ref{lem:domblocks} (d) that $k\textbf{G}^Fb \cong k(\textbf{G}^F/Z)\overline{b}$ as $k$-algebras. Therefore $mf\left(\overline{b}\right)= mf(b) = 1$. \end{proof} \section{UNIPOTENT BLOCKS OF FINITE GROUPS OF LIE TYPE IN NON-DEFINING CHARACTERISTIC} \label{sec:unip} In Section 5 we continue to assume that $k = \overline{\mathbb{F}}_{\ell}$, an algebraic closure of the field of $\ell$ elements. Let $p$ be a prime different from $\ell$, and let $\textbf{G}$ be a simple simply-connected algebraic group defined over an algebraic closure of the field of $p$ elements. Fix $q$, a power of $p$, and let $F: \textbf{G} \rightarrow \textbf{G}$ be the Frobenius morphism with respect to an $\mathbb{F}_q$-structure. Let $\textbf{G}^F$ be the fixed points of $\textbf{G}$ under $F$ -- a finite group of Lie type in non-defining characteristic. First we recall some standard notions from $e$-Harish Chandra theory. See \cite{D/M} and \cite{B/M/M} for more details. 
\subsection{e-Harish Chandra Theory and Unipotent Blocks.} \label{subsec:dHarChandTheory} We denote by $P_{\left(\textbf{G}, F\right)}(x)$ the polynomial order of $\textbf{G}^F$; i.e. $P_{\left(\textbf{G}, F\right)}(x)$ is the unique polynomial such that $P_{(\textbf{G}, F)}(q^m) = \left|\textbf{G}^{F^m}\right|$ for infinitely many $m \in \mathbb{N}$. An $F$-stable torus $\textbf{T}$ is called an \textit{$e$-torus} if $P_{(\textbf{T},F)}(x)$ is a power of the $e$-th cyclotomic polynomial, $\Phi_e$, where $e$ is some natural number. An \textit{$e$-split Levi subgroup} $\textbf{L}$ of $\textbf{G}$ is the centralizer in $\textbf{G}$ of some $e$-torus of $\textbf{G}$. Recall that for an $F$-stable Levi subgroup $\textbf{L}$ in an $F$-stable parabolic $\textbf{P}$ of $\textbf{G}$, there exist linear maps \textit{Deligne Lusztig induction} and \textit{restriction} given by \[ R_{\textbf{L} \subset \textbf{P}}^{\textbf{G}}: \mathbb{Z} \mbox{Irr}_K(\textbf{L}^F) \rightarrow \mathbb{Z} \mbox{Irr}_K(\mathbf{G}^F)\] \[ ^*R_{\textbf{L} \subset \textbf{P}}^{\textbf{G}}: \mathbb{Z} \mbox{Irr}_K(\textbf{G}^F) \rightarrow \mathbb{Z} \mbox{Irr}_K(\mathbf{L}^F).\] An irreducible character $\chi$ of $\textbf{G}^F$ is called \textit{unipotent} if there exists an $F$-stable maximal torus $\textbf{T}$ such that $\chi$ is a constituent of $R_{\textbf{T}}^{\textbf{G}}(1)$. The set of unipotent characters of $\textbf{G}^F$ is denoted by $\mathcal{E}(\textbf{G}^F, 1)$. Although it is not known in general whether $R_{\textbf{L} \subset \textbf{P}}^{\textbf{G}}$ and $^*R_{\textbf{L} \subset \textbf{P}}^{\textbf{G}}$ are independent of the choice of $\textbf{P}$, they are known to be independent for unipotent characters \cite{B/Mi}. We will therefore drop the reference to $\textbf{P}$ and denote Deligne Lusztig induction and restriction by $R_\mathbf{L}^\textbf{G}$ and $^*R_\mathbf{L}^\textbf{G}$ respectively. An $\ell$-block of $\textbf{G}^F$ is \textit{unipotent} if it contains a unipotent character. An irreducible character $\chi$ of $\textbf{G}^F$ is \textit{$e$-cuspidal} if $ ^*R_{\textbf{L}}^{\textbf{G}}(\chi) = 0$ for all proper $e$-split Levi subgroups $\textbf{L}$ of $\textbf{G}$. Note that \textit{cuspidal} is widely used instead of 1-cuspidal. A pair $(\textbf{L}, \lambda)$ is called \textit{unipotent $e$-split} if $\textbf{L}$ is an $e$-split Levi of $\textbf{G}$ and $\lambda$ is a unipotent character of $\textbf{L}^F$. If $\lambda$ is also $e$-cuspidal, then $(\textbf{L}, \lambda)$ is called a \textit{unipotent $e$-cuspidal pair}. The \textit{$e$-Harish Chandra series} above a unipotent $e$-cuspidal pair $(\mathbf{L},\lambda)$ is the set of unipotent characters \[ \mbox{Irr}_K (\textbf{G}^F, (\textbf{L}, \lambda)) = \{\gamma \in \mathcal{E}(\textbf{G}^F, 1) : \gamma \mbox{ is an irreducible constituent of } R_\mathbf{L}^\textbf{G}(\lambda)\}.\] The set of irreducible unipotent characters of $\textbf{G}^F$ is partitioned by the $e$-Harish Chandra series of $\textbf{G}^F$-conjugacy classes of unipotent $e$-cuspidal pairs \cite[Theorem 7.5 (a)]{B/M}: \[ \mathcal{E}(\textbf{G}^F, 1) = \dot{\bigcup} \mbox{ Irr}_K (\textbf{G}^F, (\textbf{L}, \lambda)),\] where the $(\textbf{L}, \lambda)$ run over a system of representatives of $\textbf{G}^F$-conjugacy classes of unipotent $e$-cuspidal pairs of $\textbf{G}$. A unipotent $e$-cuspidal pair $(\textbf{L}, \lambda)$ is said to have \textit{$\ell$-central defect} if \\${\lambda(1)_{\ell}|Z(\textbf{L})^F|_{\ell}=|\textbf{L}^F|_{\ell}}$. 
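To illustrate these definitions in the simplest situation, suppose for a moment that $\textbf{G}$ is split over $\mathbb{F}_q$ and let $\textbf{T}_0$ be a split $F$-stable maximal torus, so that $\textbf{T}_0^{F^m} \cong (\mathbb{F}_{q^m}^{\times})^{r}$ with $r = \mbox{rk}(\textbf{G})$. Then \[ P_{(\textbf{T}_0, F)}(x) = (x-1)^{r} = \Phi_1(x)^{r}, \] so $\textbf{T}_0$ is a $1$-torus; moreover, since maximal tori of connected reductive groups are self-centralizing, $\textbf{T}_0 = C_{\textbf{G}}(\textbf{T}_0)$ is an example of a $1$-split Levi subgroup of $\textbf{G}$.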
If $\ell$ is odd, good for $\textbf{G}$ (see \cite[Section 1.1]{C/E}), and $\ell \neq 3$ if $^3D_4$ is involved in $\textbf{G}$, then all unipotent $e$-cuspidal pairs of $\textbf{G}$ are of $\ell$-central defect \cite[Proposition 4.3]{C/E}. We define $e_{\ell}(q)$ to be the order of $q$ modulo $\ell$ if $\ell > 2$, and the order of $q$ modulo 4 if $\ell =2$. \pagebreak \begin{theorem} \label{thrm:key} Let $e = e_{\ell}(q)$. \begin{enumerate}[(a)] \item Let $(\textbf{L}, \lambda)$ be a unipotent $e$-cuspidal pair of $\textbf{G}$. Then all irreducible constituents of $R_{\mathbf{L}}^{\mathbf{G}}(\lambda)$ lie in the same $\ell$-block, $b_{\textbf{G}^F}(\textbf{L}, \lambda)$, of $\textbf{G}^F$. \item There exists a surjection \begin{align*} \left\{ \begin{array}{c} \textbf{G}^F \mbox{-conjugacy classes of} \\ \mbox{unipotent }e \mbox{-cuspidal } \\ \mbox{pairs of } \textbf{G} \end{array} \right\} & \twoheadrightarrow \left\{ \begin{array}{c} Unipotent \\ \ell \mbox{-blocks of }\textbf{G}^F \end{array} \right\}\\ (\textbf{L}, \lambda) & \mapsto b_{\textbf{G}^F}(\textbf{L}, \lambda) \end{align*} where $(\textbf{L}, \lambda)$ is a representative of a $\textbf{G}^F$-conjugacy class of unipotent $e$-cuspidal pairs of $\textbf{G}$ and $b_{\textbf{G}^F}(\textbf{L}, \lambda)$ is the $\ell$-block of $\textbf{G}^F$ containing all irreducible components of $R_{\mathbf{L}}^{\mathbf{G}}(\lambda)$. \item The surjection in (b) restricts to a bijection if we only consider unipotent $e$-cuspidal pairs of central $\ell$-defect. \begin{align*} \left\{ \begin{array}{c} \textbf{G}^F \mbox{-conjugacy classes of}\\ \mbox{unipotent }e \mbox{-cuspidal pairs of } \textbf{G}\\ \mbox{of } \ell \mbox{-central defect} \end{array} \right\} & \leftrightarrow \left\{ \begin{array}{c} Unipotent \\ \ell \mbox{-blocks of }\textbf{G}^F \end{array} \right\}\\ (\textbf{L}, \lambda) & \mapsto b_{\textbf{G}^F}(\textbf{L}, \lambda) \end{align*} In particular, when $\ell$ is odd, good for $\textbf{G}$ and $\ell \neq 3$ if $^3D_4$ is involved in $\textbf{G}$, then the surjection from part (b) is itself a bijection. \item If $\ell$ is odd or $\textbf{G}$ is of exceptional type, then the $\ell$-block $b_{\textbf{G}^F}(\textbf{L}, \lambda)$ has a defect group $P$ such that $Z(\textbf{L})^F_{\ell} \trianglelefteq P$ and $P / Z(\textbf{L})^F_{\ell}$ is isomorphic to a Sylow $\ell$-subgroup of $W_{\textbf{G}^F}(\textbf{L}, \lambda)$. \end{enumerate} \end{theorem} \begin{proof} Parts (a), (b) and (c) were proved by Enguehard \cite[Theorem A]{E}. It therefore only remains to show part (d). By the proof of \cite[Theorem 7.12]{K/M}, for an $\ell$-block $b=b_{\textbf{G}^F}(\textbf{L}, \lambda)$, we have the following inclusion of Brauer pairs of $\textbf{G}^F$ \[ \left(\{1\}, b \right) \trianglelefteq \left(Z(\textbf{L})^F_{\ell}, b_{\textbf{L}^F}(\lambda)\right) \trianglelefteq \left(P, e_{b}\right),\] where $b_{\textbf{L}^F}(\lambda)$ the block of $k\textbf{L}^F$ containing $\lambda$, $e_{b}$ is a block of $C_{\textbf{G}^F}(P)$, $ \left(Z(\textbf{L})^F_{\ell}, b_{\textbf{L}^F}(\lambda)\right)$ is self-centralizing and $\left(P, e_{b}\right)$ is maximal. 
By \cite[Lemma 2.1]{K/M}, $ P / \left(P \cap Z(\textbf{L})^F_{\ell} \right) = P / Z(\textbf{L})^F_{\ell} $ is isomorphic to a Sylow $\ell$-subgroup of \[N_{\textbf{G}^F}\left(Z(\textbf{L})^F_{\ell}, b_{\textbf{L}^F}(\lambda)\right) \big/ C_{\textbf{G}^F}\left(Z(\textbf{L})^F_{\ell}\right).\] Since $ C_{\textbf{G}^F}\left(Z(\textbf{L})^F_{\ell}\right) = \textbf{L}^F$ (see the proof of \cite[Theorem 7.12]{K/M2}), $P / Z(\textbf{L})^F_{\ell}$ is therefore isomorphic to a Sylow $\ell$-subgroup of \[N_{\textbf{G}^F}\left(Z(\textbf{L})^F_{\ell}, b_{\textbf{L}^F}(\lambda)\right) \big/ \textbf{L}^F = W_{\textbf{G}^F}(\textbf{L}, \lambda),\] as required. \end{proof} \begin{lemma} \label{lem:lambdarat} Let $(\textbf{L}, \lambda)$ be a unipotent $e$-cuspidal pair of $\textbf{G}$ and suppose that $\lambda$ is rational valued. Then $mf\left(b_{\textbf{G}^F}(\textbf{L}, \lambda)\right)~=~1$. \end{lemma} \begin{proof} Let $b = b_{\textbf{G}^F}(\textbf{L}, \lambda) $ and assume that $\lambda$ is rational valued so $^{\hat{\sigma}}{\lambda} = \lambda$ (see Section~\ref{subsec:blocktheory}). By the Deligne Lusztig induction character formula \cite[Proposition 12.2]{D/M}, $^{\hat{\sigma}}\left( R_{\textbf{L}}^{\textbf{G}}(\lambda) \right) = R_{\textbf{L}}^{\textbf{G}}\left({^{\hat{\sigma}}{\lambda}}\right)$. Suppose $\chi \in R_{\textbf{L}}^{\textbf{G}}(\lambda) \subseteq$~Irr$_K(b)$. Then ${^{\hat{\sigma}}{\chi}} \in R_{\textbf{L}}^{\textbf{G}}({^{\hat{\sigma}}{\lambda}}) = R_{\textbf{L}}^{\textbf{G}}({\lambda})$, so ${^{\hat{\sigma}}{\chi}} \in $ Irr$_K(b)$. By Lemma~\ref{lem:sigmahat}~(b), Irr$_K(\sigma(b)) = \{ ^{\hat\sigma}{\chi} ~|~ \chi \in \mbox{ Irr}_K(b)\}$, so it follows that $\sigma(b) = b$. Therefore $k\textbf{G}^Fb \cong k\textbf{G}^Fb ^{(\ell)}$ as $k$-algebras by Lemma~\ref{lem:galconj}, so $frob(b) = 1$, hence $mf(b) = 1$. \end{proof} \begin{lemma} \label{lem:cuspchars} Let $b$ be a block of $k\textbf{G}^F$ containing an $e$-cuspidal unipotent character $\lambda$ of central $\ell$-defect. Suppose that $Z\left([\textbf{G}, \textbf{G}]^F\right)$ is an $\ell'$-group. Then all characters in Irr$_K(b)$ are $e$-cuspidal. \end{lemma} \begin{proof} Let $\lambda_0 = \lambda|_{[\textbf{G}, \textbf{G}]^F}$. Because $\lambda$ is unipotent, results of Lusztig show that $\lambda_0$ is irreducible (see for example \cite[Proposition 3]{C/E2}). Since $\lambda$ is of central $\ell$-defect, $\left|\textbf{G}^F\right|_{\ell}= \lambda(1)_{\ell} \left|Z(\textbf{G}^F)\right|_{\ell} = \lambda_0(1)_{\ell} \left|Z(\textbf{G}^F)\right|_{\ell}$. As we are assuming that $Z\left([\textbf{G}, \textbf{G}]^F\right)$ is an $\ell'$-group, it follows from $\left|\textbf{G}^F\right| = \left| Z^{\circ}(\textbf{G})^F\right| \left|[\textbf{G}, \textbf{G}]^F\right|$ that $\lambda_0(1)_{\ell} = \left|[\textbf{G}, \textbf{G}]^F\right|_{\ell}$. Therefore $\lambda_0$ is in a block $\bar{b}$ of $[\textbf{G}, \textbf{G}]^F$ of defect 0. Let $\theta \in $ Irr$_K(b)$. Since $b$ covers $\bar{b}$ and $\lambda_0$ is the only character in $\bar{b}$, $\theta$ covers $\lambda_0$. By \cite[Corollary 11.7]{C/R}, therefore $\theta = \omega \lambda$ for a uniquely determined character $\omega$ of $\textbf{G}^F / [\textbf{G}, \textbf{G}]^F$. Since $ [\textbf{G}^F, \textbf{G}^F] \subseteq [\textbf{G}, \textbf{G}]^F$, $\textbf{G}^F / [\textbf{G}, \textbf{G}]^F$ is abelian, so $\omega$ is a linear character. 
As $\lambda$ is $e$-cuspidal, $\langle \lambda , R_{\textbf{M}}^{\textbf{L}} (\tau) \rangle~=~0$ for any proper $e$-split Levi subgroup $\textbf{M}$ of $\textbf{L}$ and for all $\tau \in $ Irr$_K\textbf{M}^F$. Because $\omega$ is linear, it follows that $\langle \omega \lambda , \omega R_{\textbf{M}}^{\textbf{L}} (\tau) \rangle = \langle \theta , R_{\textbf{M}}^{\textbf{L}} (\omega \tau) \rangle = 0$ for all $\tau \in $ Irr$_K\textbf{M}^F$. Let $\tilde{\tau} = \omega \tau$. Then $\tilde{\tau}$ runs over Irr$_K\textbf{M}^F$ as $\tau$ does, so $\langle \theta , R_{\textbf{M}}^{\textbf{L}} (\tilde{\tau}) \rangle = 0$ for all $\tilde{\tau} \in $ Irr$_K\textbf{M}^F$. Therefore $\theta$ is $e$-cuspidal, as required. \end{proof} \subsection{A result of Puig.} Theorem~\ref{thrm:Puig} shows that under certain conditions, Puig's result \cite[Theorem 5.5]{P} can be applied to a block $b = b_{\textbf{G}^F}(\textbf{L}, \lambda)$ to show that $\mathcal{O}\textbf{G}^Fb$ is Morita equivalent to a specific block of $\mathcal{O}N_{\textbf{G}^F}(\textbf{L}, \lambda)$. This result will be used later to calculate the Morita Frobenius number of some unipotent blocks of $E_8(q)$. First we recall the following. Suppose $M$ is a finite group with a normal $\ell'$-subgroup $U$, and suppose that $L \cong M/U$. Let $\mu: M \rightarrow L$ be the quotient map. If $d$ is the principal block of $\mathcal{O}U$, then Fong Reduction \cite[Theorem 6.8.7]{Block} yields the following inverse $\mathcal{O}$-algebra isomorphisms, \begin{align*} \mathcal{O}L~& \tilde{\longrightarrow}~\mathcal{O}Md \\ x~& \longmapsto ~xd \\ \mu(y)~& \longmapsfrom~y, \end{align*} for all $x \in \mathcal{O}L$, $y \in \mathcal{O}Md$. \begin{theorem} \label{thrm:Puig} Suppose $\ell \geq 5$ and $\ell| q-1$. Let $(\textbf{L}, \lambda)$ be a proper unipotent $1$-cuspidal pair of $\textbf{G}$ of central $\ell$-defect. Let $b = b_{\textbf{G}^F}(\textbf{L}, \lambda)$ and suppose that $P=Z(\textbf{L})^F_{\ell}$ is a defect group of $b$. Let $f = b_{\textbf{L}^F}(\lambda)$ be the block of $\mathcal{O}\textbf{L}^F$ containing $\lambda$. Then $f$ is a block of $\mathcal{O}N_{\textbf{G}^F}(\textbf{L}, \lambda)$ with defect group $P$ such that $\mathcal{O}\textbf{G}^Fb$ and $\mathcal{O}N_{\textbf{G}^F}(\textbf{L}, \lambda)f$ are Morita equivalent. \end{theorem} \begin{proof} Since $\textbf{L}$ is a 1-split Levi subgroup, $\textbf{L}$ is contained in an $F$-stable parabolic subgroup of $\textbf{G}$, $\textbf{M}$, say. Let $\textbf{U}$ be the unipotent radical of $\textbf{M}$, so $\textbf{M} = \textbf{U} \rtimes \textbf{L}$. Set $\textbf{M}^F = \textbf{U}^F \rtimes \textbf{L}^F$. Then $\textbf{M}^F / \textbf{U}^F \cong \textbf{L}^F$. Let $\mu : \textbf{M}^F \rightarrow \textbf{L}^F$ be the quotient map. Let $N = N_{\textbf{G}^F}(\textbf{L}, \lambda)$ and let $c$ be the block of $k\textbf{M}^F$ that dominates $f$. We show that the hypotheses of \cite[Theorem 5.5]{P} are satisfied by $\textbf{M}^F$, $N$, $\textbf{L}^F$, $c$ and $f$. Because $\textbf{U}^F$ is an $\ell'$-group, $c$ dominates a unique block of $\mathcal{O}\textbf{L}^F$ by \cite[Ch. 5 Theorem 8.8]{N/T}, so $\mu(c) = f$. Let $d$ be the principal block of $\mathcal{O}\textbf{U}^F$. Then it follows from the isomorphisms due to Fong Reduction mentioned above, that $c = fd$. Since $d$ is central in $\mathcal{O}M$, therefore $cf = c$. Since $\lambda$ is a 1-cuspidal unipotent character in $f$ with central $\ell$-defect, Lemma~\ref{lem:cuspchars} shows that all the characters in $f$ are 1-cuspidal. 
It then follows by arguments given in \cite[5.3]{P} that $c(\mathcal{O}\textbf{G}^F)c = c(\mathcal{O}N)c$. Next, since $N_{\textbf{G}^F}(\textbf{L}, \lambda) \subseteq N_{\textbf{G}^F}(\textbf{L}^F, \lambda)$, $N$ normalizes $\textbf{L}^F$ and therefore $f$. By the proof of \cite[Corollary 1.18]{D/M}, $N_{\textbf{G}}(\textbf{L}) \cap \textbf{U} = \{1\}$. Therefore $N_{\textbf{M}^F}(\textbf{L}, \lambda) \cap \textbf{U}^F = \{1\}$, so $N_{\textbf{M}^F}(\textbf{L}, \lambda) \subseteq \textbf{L}^F $ and thus $\textbf{L}^F = N_{\textbf{M}^F}(\textbf{L}, \lambda) = N \cap \textbf{M}^F$. By \cite[Proposition 2.2 (ii)]{C/E}, since $\ell \geq 5$ and $\textbf{L}$ is a proper Levi subgroup of $\textbf{G}$, $\textbf{L}^F = C_{\textbf{G}^F}\left(Z(\textbf{L})^F_{\ell}\right) = C_{\textbf{G}^F}(P)$. Therefore $f$ is a block of $\mathcal{O}C_{\textbf{G}^F}(P)$, so $Br_P(f) = f$. It follows that $Br_P(c) = Br_P(df) = \frac{1}{| \textbf{U}^F |}Br_P(f) \neq 0$, so all hypotheses of \cite[Theorem 5.5]{P} are satisfied. Recall that we have the following inclusion of Brauer pairs $(1, b) \subseteq (P, f)$ from the proof of Theorem~\ref{thrm:key} (d). Therefore $Br_P(b)f = f$. Since $N/\textbf{L}^F$, the relative Weyl group of $(\textbf{L}^F, \lambda)$ in $\textbf{G}^F$, is an $\ell'$-group, \cite[5.5.4]{P} implies that $f$ is a block of $\mathcal{O}N_{\textbf{G}^F}(\textbf{L}, \lambda)$ with defect group $P$, and that $\mathcal{O}Nf$ and $\mathcal{O}\textbf{G}^Fb$ are source algebra equivalent, and hence Morita equivalent. \end{proof} \subsection{Unipotent blocks of finite groups of Lie type in non-defining characteristic.} \label{subsec:proof} \begin{theorem} \label{thrm:MainTheorem} Let $\textbf{G}$ be a simple, simply-connected algebraic group defined over an algebraic closure of the field of $p$ elements. Let $q$ be a power of $p$ and let $F: \textbf{G} \rightarrow \textbf{G}$ be the Frobenius morphism with respect to an $\mathbb{F}_q$-structure. Let $k$ be a field of positive characteristic $\ell \neq p$ and let $e = e_{\ell}(q)$. Let $b$ be a unipotent $\ell$-block of $\textbf{G}^F$. Then \begin{enumerate} [(a)] \item $mf(b) \leq 2$ and \item $mf(b) = 1$, except possibly when $b = b_{\textbf{G}^F}(\textbf{L}, \lambda)$ in one of the following situations. \begin{itemize} \item $\textbf{G} = E_8$, $\textbf{L} = \phi_1^2.E_6$, $\lambda = E_6[\theta^i]$ $ (i=1,2)$, with $\ell=2$ and $e=1$ \item $\textbf{G} = E_8$, $\textbf{L} = {\phi_2^2}.{^2E}_6$, $\lambda = {^2E_6}[\theta^i]$ $ (i=1,2)$, with $\ell \equiv 2$ mod 3 and $e=2$ \end{itemize} \end{enumerate} \end{theorem} \begin{proof} Let $b = b_{\textbf{G}^F}(\textbf{L}, \lambda)$ be the block of $\textbf{G}^F$ containing all irreducible constituents of $R_{\mathbf{L}}^{\mathbf{G}}(\lambda)$, where $(\textbf{L}, \lambda)$ is a unipotent $e$-cuspidal pair of $\textbf{G}$ of central $\ell$-defect, as discussed in Theorem~\ref{thrm:key}. By \cite[Proposition 5.6 and Table 1]{G}, the unipotent characters of classical finite groups of Lie type (including $^3D_4(q)$) are rational valued, so by Proposition~\ref{lem:mainlemma} (b) and Lemma~\ref{lem:lambdarat} we need only consider the cases where $\textbf{G}$ is of exceptional type, $\textbf{L}$ contains some component of exceptional type, and $\lambda$ is not rational valued. These $e$-cuspidal pairs can be identified using \cite[Appendix: Table 1]{B/M/M}, \cite{E} and \cite[Chapter 13]{C} and are listed in the following table. We have used the notation of \cite[Chapter 13]{C} for the character labels. 
\begin{longtable}{c|c|c|c} $\textbf{G}$ & $e$ & $\left(\textbf{L}, \lambda\right)$ & Is of $\ell$-central defect for \\ \hline \hline $G_2$ & $1, 2$ & $\left(G_2, G_2[\theta^i]\right)$ & $\ell \neq 3$ \\ \hline $F_4$ & $1, 2$ & $\left(F_4, F_4[\theta^i]\right)$ & $\ell \neq 3$\\ $F_4$ & $1, 2$ & $\left(F_4, F_4[\pm i]\right)^*$ & $\ell \neq 2$\\ \hline $E_6$ & $1, 2$ & $\left(E_6, E_6[\theta^i]\right)$& $\ell \neq 3$ \\ \hline ${^2E}_6$ & $1, 2$ & $\left({^2E}_6, {^2E}_6[\theta^i] \right)$ & $\ell \neq 3$ \\ \hline $E_7$ & $1$ & $\left(E_7, E_7[\pm \xi] \right)^{\dagger}$ & $\ell \neq 2$\\ $E_7$ & $2$ & $\left(E_7, \phi_{512,11}\right), \left(E_7, \phi_{512,12}\right)$ & $\ell \neq 2$ \\ $E_7$ & $1$ & $\left(E_6, E_6[\theta^i]\right)$& $\ell \neq 3$ \\ $E_7$ & $2$ & $\left(^2E_6, ^2E_6[\theta^i]\right)$& $\ell \neq 3$ \\ \hline $E_8$ & $1, 4$ & $\left(E_8, E_8[\pm \theta^i] \right)$& $\ell \neq 2, 3$ \\ $E_8$ & $1,2$ & $\left(E_8, E_8[\pm i] \right)$ & $\ell \neq 2$ \\ $E_8$ & $1,2,4$ & $\left(E_8, E_8[\zeta^j] \right)$ & $\ell \neq 5$ \\ $E_8$ & $2, 4$ & $\left(E_8, E_6[\theta^i], \phi_{2,1} \right), \left(E_8, E_6[\theta^i], \phi_{2,2} \right)$ & $\ell \neq 5$ \\ $E_8$ & $4$ & \makecell*{ $\left(E_8, E_6[\theta^i], \phi_{1,0} \right), \left(E_8, E_6[\theta^i], \phi_{1,6} \right),$\\ $\left(E_8, E_6[\theta^i], \phi_{1,3'} \right), \left(E_8, E_6[\theta^i], \phi_{1,3''} \right),$ \\ $ \left(E_8, \phi_{4096, 11} \right), \left(E_8, \phi_{4096, 26} \right),$ \\ $ \left(E_8, \phi_{4096, 12} \right), \left(E_8, \phi_{4096, 27} \right),$\\ $\left(E_8, E_7[\pm \xi, 1] \right), \left(E_8, E_7[\pm \xi, \varepsilon] \right)$ }& every $\ell$ \\ $E_8$ & $1$ & $\left(E_7, E_7[\pm \xi]\right)$& $\ell \neq 2$ \\ $E_8$ & $2$ & $\left(E_7, \phi_{512,11}\right), \left(E_7, \phi_{512,12}\right)^{\ddagger}$ & $\ell \neq 2$ \\ $E_8$ & $1$ & $\left(E_6, E_6[\theta^i]\right)$& $\ell \neq 3$ \\ $E_8$ & $2$ & $\left({^2E}_6, {^2E}_6[\theta^i] \right)$ & $\ell \neq 3$\\ \hline \multicolumn{4}{c}{\makecell*{$\theta := $ exp$(2\pi i / 3)$, $\zeta := $ exp$(2\pi i / 5)$ $\xi := \sqrt{-q}$}} \\ \multicolumn{4}{c}{\makecell*{\small{*\cite{E} omits this pair for $\ell = 3$, $e=2$} \hspace{2ex} \small{$\dagger$\cite{E} writes $E_7[\pm \zeta]$ instead of $E_7[\pm \xi]$ for $\ell = 2, e = 1$} \\ \small{$\ddagger$\cite{E} writes $E_7[\pm \xi]$ instead of $\phi_{512,11}, \phi_{512,12}$ for $\ell = 5, e = 2$} }} \end{longtable} First suppose that $\ell$ is good for $\textbf{G}$. Then by inspection, the Sylow $\ell$-subgroups of $W_{\textbf{G}^F}(\textbf{L}, \lambda)$ are trivial so by Theorem~\ref{thrm:key} (d), the defect groups of $b$ are isomorphic to a Sylow $\ell$-subgroup of $Z(\textbf{L})^F$. If $\textbf{L} = \textbf{G}$, then the Sylow $\ell$-subgroups of $Z(\textbf{L}^F)$ are trivial by inspection of \cite[Table 24.2]{M/T}. By \cite[Proposition 3.6.8]{C}, since $\textbf{L}$ is connected reductive, $Z(\textbf{L})^F = Z(\textbf{L}^F)$, therefore $b$ has trivial defect and $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (d). If $\textbf{L}$ and $\textbf{G}$ are such that rk$(\textbf{G}) = $ rk$([\textbf{L},\textbf{L}]) + 1$, then dim($Z^{\circ}\left(\textbf{L})^F\right) = 1$. The Sylow $\ell$-subgroups of $Z^{\circ}(\textbf{L})^F$ are therefore isomorphic to subgroups of the multiplicative group $\textbf{G}_m$, so they are cyclic. 
By \cite[Proposition 2.2 (i)]{C/E}, since $\ell$ is good for $\textbf{G}$, $Z(\textbf{L})^F_{\ell} = Z^{\circ}(\textbf{L})^F_{\ell}$, so $b$ has cyclic defect and $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (d). Now suppose that $\ell$ is bad for $\textbf{G}$, that $\textbf{L} = \textbf{G}$, and that $e=1$. By inspection of the character degrees given in \cite[Chapter 13]{C}, we see that cuspidal characters $\lambda$ of $\textbf{G}^F$ satisfy $\lambda(1)_{\ell}= |\textbf{G}^F|_{\ell}$, so $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (c). The remaining $\ell$-blocks will be handled on a case-by-case basis. First, suppose that $\textbf{G} = E_8$, $\textbf{L} = \phi_1^2.E_6$ and $\lambda = E_6[\theta^i]$ $(i= 1,2)$ with $\ell \geq 5$ and $e=1$. Then by Theorem~\ref{thrm:Puig}, $k\textbf{G}^Fb$ is Morita equivalent to $kNf$ where $N = N_{\textbf{G}^F} \left( \textbf{L}, \lambda \right)$ and $f = b_{\textbf{L}^F}(\lambda)$ is the block of $k\textbf{L}^F$ containing $\lambda$. Suppose that $P$ is a defect group of $k\textbf{L}^Ff$. Then since $\ell$ is odd and $W_{\textbf{G}^F}(\textbf{L}, \lambda) \cong D_{12}$ is an $\ell'$-group, $P$ is isomorphic to a Sylow $\ell$-subgroup of $Z(\textbf{L})^F$ by Theorem~\ref{thrm:key} (d). Since $N$ normalizes $\textbf{L}$, $P \unlhd N$ so $kNf$ has normal defect. Then by \cite[Theorem 45.12]{Thevenaz}, $kNf $ is Morita equivalent to a twisted algebra $k_{\alpha}(P \rtimes D_{12})$, where $\alpha \in H^2(D_{12}, k^{\times})$. Since $H^2(D_{12}, k^{\times}) \cong C_2$, it follows from the proof of Lemma~\ref{lem:twistedalg} that $mf\left( k_{\alpha} (P \rtimes D_{12})\right)=1$. Hence $mf(b)=1$. Suppose now that $\textbf{G} = E_8$, $\textbf{L} = \phi_1.E_7$, $\lambda = \phi_{512,11}$ or $\phi_{512, 12}$, $\ell=5$ and $e=1$. The relative Weyl group $W_{\textbf{G}^F}\left(\textbf{L}, \lambda \right) \cong S_2$ has no non-trivial Sylow $\ell$-subgroups, so by Theorem~\ref{thrm:key} (d) the defect groups of $b$ are isomorphic to a Sylow $\ell$-subgroup of $Z(\textbf{L})^F$. Note that rk$(\textbf{G}) = $ rk$([\textbf{L},\textbf{L}]) + 1$, so dim($Z^{\circ}(\textbf{L})^F) = 1$ and the Sylow $\ell$-subgroups of $Z^{\circ}(\textbf{L})^F$ are cyclic, as above. Again, using \cite[Proposition 2.2]{C/E}, $Z(\textbf{L})^F_{\ell} = Z^{\circ}(\textbf{L})^F_{\ell}$, so $b$ has cyclic defect and $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (d). Suppose that $\textbf{G} = E_7$, $\textbf{L} = \phi_1.E_6$, $\lambda = E_6[\theta^i]$, $ (i=1,2)$, with $\ell=2$ and $e=1$. Then $b$ has dihedral defect by \cite[page 357]{E}. Therefore by Proposition~\ref{lem:mainlemma} (d), $mf(b) = 1$. Finally, suppose that we are in one of the following cases: $\textbf{G} = E_8$, $\textbf{L} = \phi_1^2.E_6$, $\lambda = E_6[\theta^i]$, $ (i=1,2)$, with $\ell=2$ and $e=1$; or $\textbf{G} = E_8$, $\textbf{L} = {\phi_2^2}.{^2E}_6$, $\lambda = {^2E_6}[\theta^i]$, $ (i=1,2)$, with $\ell \neq 3$ and $e=2$. From \cite{G} we know that the character field of $\lambda$ is $\mathbb{Q}(\theta)$ where $\theta = $ exp$(\frac{2 \pi i}{3})$. Since $\ell \neq 3$, $\theta$ is an $\ell'$-root of unity so $\hat\sigma(\theta) =\theta^{\ell}$ (see Section~\ref{subsec:blocktheory}). If $\ell \equiv 1$ mod 3, then $\hat\sigma(\theta) = \theta$ so ${^{\hat\sigma}}{\lambda} = \lambda$. Therefore by the arguments of Lemma~\ref{lem:lambdarat}, $mf(b) = 1$. If $\ell \equiv 2$ mod 3, however, then $\hat\sigma(\theta) = \theta^2 \neq \theta$ so we cannot conclude that $mf(b)=1$. 
Because $\hat\sigma^2(\theta) = \theta^4 = \theta$, however, it follows that ${^{\hat\sigma^2}}{\lambda} = \lambda$, so $mf(b)$ is at most 2. \end{proof} \begin{corollary} \label{cor:maincor} Let $\textbf{G}$, $F$ and $k$ be as in Theorem~\ref{thrm:MainTheorem} and suppose that $\textbf{G}^F$ has non-trivial centre. Let $Z$ be a central subgroup of $\textbf{G}^F$ and suppose that $\overline{b}$ is a block of $k(\textbf{G}^F/Z)$ dominated by a unipotent block $b$ of $k\textbf{G}^F$. Then $mf(\overline{b})= 1$. \end{corollary} \begin{proof} The assumption that $\textbf{G}^F$ has non-trivial centre means that we do not consider the case where $\textbf{G}^F= E_8(q)$. Thus, for any unipotent block $b$ of $k\textbf{G}^F$, it follows from the proof of Theorem~\ref{thrm:MainTheorem} that either $\sigma(b) = b$, or $b$ has either trivial, cyclic or dihedral defect. First suppose that $\overline{b}$ is dominated by a unipotent block $b$ of $k\textbf{G}^F$ such that $\sigma(b) = b$. Then by Lemma~\ref{lem:domblocks} (b), $\sigma\left(\overline{b}\right)$ is also dominated by $b$. Since $Z$ is central, it then follows from part (c) of Lemma~\ref{lem:domblocks} that $\sigma\left(\overline{b}\right) = \overline{b}$. Therefore $k\left(\textbf{G}^F/Z\right)\overline{b} \cong k\left(\textbf{G}^F/Z\right)\overline{b}^{(\ell)}$ as $k$-algebras by Lemma~\ref{lem:galconj}, so $frob\left(\overline{b}\right) = 1$, hence $mf\left(\overline{b}\right) = 1$. Now suppose that $\overline{b}$ is dominated by a unipotent block $b$ of $k\textbf{G}^F$ which has either trivial, cyclic or dihedral defect. Then by \cite[Ch.5 Theorem 8.7 (ii)]{N/T}, the defect groups of $\overline{b}$ are also either trivial, cyclic or dihedral. Therefore $mf\left(\overline{b}\right) = 1$ by Proposition~\ref{lem:mainlemma} (d). \end{proof} \begin{theorem} \label{thrm:suzree} Let $G$ be a Suzuki or Ree group in non-defining characteristic. Let $b$ be a block of $kG$, and if $G$ is the large Ree group, assume that $b$ is unipotent. Then $mf(b) = 1$. \end{theorem} \begin{proof} First let $G$ be the Suzuki group, $^2B_2(q^2)$ ($q=2^{2m+1}$), and let $b$ be an $\ell$-block of $G$ with $\ell \neq 2$. The subgroups of $G$ of odd order are cyclic \cite[Theorem 9]{S}, so $b$ has cyclic defect and therefore $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (d). Next let $G$ be the small Ree group, $^2G_2(q^2)$ ($q=3^{2m+1}$), and let $b$ be a 2-block of $G$. The Sylow 2-subgroups of $G$ are elementary abelian of order 8 and \cite[I. 8]{W} shows that the only 2-block of $G$ of full defect is the principal block, which contains the rational valued trivial character. If $b$ is not the principal block, then the defect groups of $b$ are proper subgroups of an elementary abelian group of order 8, so $b$ has either dihedral or cyclic defect. Therefore $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (b) and (d). Now let $G$ be the small Ree group and $\ell \geq 5$, and let $b$ be an $\ell$-block of $G$. The order of $G$ is $|G| = q^6\phi_1\phi_2\phi_4\phi_{12}$ with $q = 3^{2m+1}$ for some $m$. Since $\ell$ divides at most one of the $\phi_i$ with $i \in \{1, 2, 4, 12\}$, by \cite[Corollary 3.13 (2)]{A/B} the Sylow $\ell$-subgroups of $G$ are cyclic. Therefore $b$ has cyclic defect and $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (d). Finally, let $G$ be the large Ree group, $^2F_4(q^2)$ ($q=2^{2m+1}$), and let $b$ be a unipotent $\ell$-block of $G$ with $\ell \neq 2$. By \cite{M}, there are two cases to consider. In the first case we suppose that $\ell \nmid (q^2 - 1)$. 
Then $b$ is either the principal block of $G$, or $b$ has trivial defect and therefore $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (b) and (d). In the second case, suppose that $\ell \mid (q^2 - 1)$. Then $b$ contains one of the following sets of characters (notation as per \cite[Appendix D]{H1}): $\{ \chi_1, \chi_2, \chi_3, \chi_4, \chi_9, \chi_{10}, \chi_{11}\}$, $\{ \chi_5, \chi_7\}$ or $\{ \chi_6, \chi_8\}$. In the first case $b$ is the principal block, and in the second and third cases $b$ has cyclic defect \cite[Appendix D]{H1}, so $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (b) and (d). \end{proof} \section{EXCEPTIONAL COVERING GROUPS} \label{sec:proof} \begin{lemma} \label{lem:lgroupext} Let $G$ be a finite group. Suppose that there exists a finite group $\hat{G}$ such that $G \unlhd \hat{G}$ and such that for all blocks $B$ of $k\hat{G}$, either $B$ has cyclic defect or $\sigma(B) = B$. Then $mf(b) = 1$ for all blocks $b$ of $kG$. \end{lemma} \begin{proof} First suppose that $b$ is covered by some block $B$ of $k\hat{G}$ which has cyclic defect. Then the defect groups of $b$ are also cyclic, therefore $mf(b) = 1$ by Proposition~\ref{lem:mainlemma} (d). Now suppose that $b$ is covered by a block $B$ of $k\hat{G}$ such that $\sigma(B) = B$. Recall that $B$ covers $b$ if and only if $Bb \neq 0$, and note that this holds if and only if $\sigma(B)\sigma(b) \neq 0$. Therefore, since $\sigma(B) = B$, $\sigma(b)$ is also covered by $B$. Hence $b$ and $\sigma(b)$ are in the same $\hat{G}$-orbit, so there is a group automorphism of $G$ whose induced $k$-algebra automorphism of $kG$ sends $b$ to $\sigma(b)$. Therefore $mf(b) = 1$ by Lemma~\ref{lem:groupaut}. \end{proof} \begin{lemma} \label{lem:except} Let $\textbf{G}$ be a simple, simply-connected algebraic group defined over an algebraic closure of the field of $p$ elements. Let $q$ be a power of $p$ and let $F: \textbf{G} \rightarrow \textbf{G}$ be a Steinberg morphism with respect to an $\mathbb{F}_q$-structure with finite group of fixed points, $\textbf{G}^F$. Let $G$ be an exceptional cover of $\textbf{G}^F$ and let $b$ be a block of $kG$. Then $mf(b) = 1$. \end{lemma} \begin{proof} If $G$ is not one of $3^2.PSU_4(3)$, $3.O_7(3)$, $3.A_6$, $6.A_6$, $3.G_2(3)$, $3.A_7$ with $\ell = 2$, or $4^2.PSL_3(4)$ with $\ell=3$, then using GAP \cite{GAP4} and Proposition~\ref{lem:mainlemma}, it can be shown that every block $b$ of $kG$ satisfies at least one of the following three properties: it is the principal block, it has cyclic defect, or it contains a rational valued character; therefore $mf(b) = 1$. If $\ell = 2$ and $G$ is one of $3^2.PSU_4(3)$, $3.O_7(3)$, $3.A_6$, $6.A_6$, $3.G_2(3)$ or $3.A_7$, then there are some blocks of $kG$ for which none of these three properties hold. For these groups, however, there exist finite groups $\hat{G}$ such that $G \unlhd \hat{G}$ and such that for every block $B$ of $k\hat{G}$, either $B$ has cyclic defect or $\sigma(B) = B$. Therefore by Lemma~\ref{lem:lgroupext}, $mf(b) = 1$ for all blocks $b$ of $kG$. Finally, suppose $G = 4^2.PSL_3(4)$ and $\ell = 3$. Then there are blocks of $kG$ which do not satisfy any of the three properties above, and there is also no suitable $\hat{G}$ which would allow us to apply Lemma~\ref{lem:lgroupext}. Let $G'= PSL_3(4)$ and $Z = C_4 \times C_4$ so $G = Z.G'$. First suppose that $b$ dominates a block $b'$ of $kG'$. 
Using GAP \cite{GAP4}, as before we can verify that all blocks of $kG'$ satisfy at least one of the three properties above, so $mf(b')=1$. Since $Z$ is an $\ell'$-group, $b'$ is the unique block dominated by $b$, so by Lemma~\ref{lem:domblocks} (d), $kGb \cong kG'b'$ as $k$-algebras. Therefore $mf(b) = 1$. Now suppose that $b$ does not dominate any block of $kG'$. Then by Lemma~\ref{lem:domblocks} (a), $b$ covers a non-principal block of $kZ$. Since $Z$ is an abelian $\ell'$-group, $kZ$ has one linear character in each block. Suppose $b$ covers a block of $kZ$ containing a non-trivial character $\mu$, and let $Z_{\mu} = \ker \mu$. Then $b$ dominates a unique block $\overline{b}$ of $k(G/Z_{\mu})$, by Lemma~\ref{lem:domblocks} (d), and $mf(b) = mf(\overline{b})$. If $G/Z_{\mu} \cong 2.PSL_3(4)$ then we can once again use GAP \cite{GAP4} to show that all blocks of $k(G/Z_{\mu})$ satisfy one of the three properties above, so $mf(\overline{b}) = 1$. If $G/Z_{\mu} \cong 4_1.PSL_3(4)$ or $4_2.PSL_3(4)$, then there are blocks of $k(G/Z_{\mu})$ which do not satisfy any of the three properties. However, in these two cases there exist outer automorphisms of $G/Z_{\mu}$ of order two such that for every block $B$ of $k\left((G/Z_{\mu}).2\right)$, either $B$ has cyclic defect, or $\sigma(B) = B$. Therefore $mf(\overline{b}) = 1$ by Proposition~\ref{lem:mainlemma} and Lemma~\ref{lem:lgroupext}, as required. \end{proof} \section{PROOF OF THEOREM~\ref{thrm:maintheorem}} \label{sec:mainproof} \begin{proof} Part (a) is shown in Theorem~\ref{thrm:alt}. The result follows for exceptional covering groups of finite groups of Lie type by Lemma~\ref{lem:except}. The remainder of part (b) follows from Corollary~\ref{corr:defining}. Part (c) is shown for the Suzuki and Ree groups in Theorem~\ref{thrm:suzree}, and for all remaining cases in Corollary~\ref{cor:maincor}. \end{proof} \end{document}
\begin{document} \title{Free groups generated by two parabolic maps} \begin{abstract} In this paper we consider a group generated by two unipotent parabolic elements of ${\rm SU}(2,1)$ with distinct fixed points. We give several conditions that guarantee the group is discrete and free. We also give a result on the diameter of a finite ${\mathbb R}$-circle in the Heisenberg group. \end{abstract} \section{Introduction} The study of free, discrete groups has a long history dating back to Schottky and Klein in the nineteenth century. We will be particularly interested in groups generated by two unipotent parabolic maps in ${\rm SU}(2,1)$ and their action on complex hyperbolic space and its boundary. The conditions we give could be thought of as complex hyperbolic analogues of the results proved by Lyndon and Ullman \cite{lyu} and by Ignatov \cite{ig78} giving conditions under which two parabolic elements of ${\rm PSL}(2,{\mathbb C})$ generate a free Kleinian group. Our work is very closely related to the well-known Riley slice of Schottky space, where Riley considered the space of conjugacy classes of subgroups of ${\rm PSL}(2,{\mathbb C})$ generated by two non-commuting parabolic maps; see \cite{ks}. In \cite{PW} Parker and Will considered a related problem, namely they also studied groups with two unipotent generators, but they made the additional assumption that the product of these maps is also unipotent. We will comment on the relationship between our results and those in \cite{PW} below. The main theme of the paper concerns groups generated by two Heisenberg translations with distinct fixed points. We normalise so that the fixed points are $\infty$ and $o$, the origin in the Heisenberg group. Specifically, we consider the group generated by \begin{equation}\label{eq-A-B} A=\left(\begin{matrix} 1 & -\sqrt{2}s_1e^{-i\theta_1} & -s_1^2+it_1 \\ 0 & 1 & \sqrt{2}s_1e^{i\theta_1} \\ 0 & 0 & 1 \end{matrix}\right), \quad B=\left(\begin{matrix} 1 & 0 & 0 \\ \sqrt{2}s_2e^{i\theta_2} & 1 & 0 \\ - s_2^2+it_2 & -\sqrt{2}s_2e^{-i\theta_2} & 1 \end{matrix}\right). \end{equation} From this, it might appear that the space of such pairs of transformations has dimension six, and is parameterised by $s_j,\,t_j,\,\theta_j$ for $j=1,\,2$. In fact, it has dimension four. There is a further normalisation we can do using the stabiliser of the pair $\{o,\infty\}$. This depends on $(k,\psi)\in{\mathbb R}_+\times [0,2\pi)$, which acts as follows: \begin{equation}\label{eq-action-stabiliser} (s_1,t_1,\theta_1;s_2,t_2,\theta_2)\longmapsto (s_1k, t_1k^2, \theta_1+\psi; s_2/k, t_2/k^2,\theta_2+\psi). \end{equation} In order to show the symmetry in the parameters, we choose not to make this normalisation in the statement of the results. But in some of the proofs we use it to simplify calculations, for example by choosing $\theta_2=0$. We want to find conditions on $s_j$, $t_j$, $\theta_j$ that ensure $\langle A,B\rangle$ is discrete and freely generated by $A$ and $B$. To do so, we will use Klein's combination theorem (sometimes called the ping-pong theorem) or variants of it on the boundary of complex hyperbolic space. This is given in the following proposition. \begin{proposition}{\label{KCT}} Let $A$ and $B$ be the Heisenberg translations fixing $\infty$ and $o$ respectively, given by \eqref{eq-A-B}. 
If the fundamental domains $D_A\subset \partial{\bf H}^2_{\mathbb C}$ for $\langle A\rangle$ and $D_B\subset \partial{\bf H}^2_{\mathbb C}$ for $\langle B\rangle$ satisfy $D_A^\circ\cap D_B^\circ\neq \emptyset$ and $\overline{D}_A\cup\overline{D}_B=\partial{\bf H}^2_{\mathbb C}$, then $\langle A,B \rangle$ is free and discrete. \end{proposition} Our results are also related to proofs of discreteness of complex hyperbolic isometry groups using other variations on Klein's combination theorem. For example, see Goldman and Parker \cite{gp1}, Wyss-Gallifent \cite{wg}, Monaghan, Parker and Pratoussevitch \cite{mpa} or Jiang and Xie \cite{xyj}. Our main theorem is: \begin{theorem}\label{thm-main0} Let $(s_1e^{i\theta_1},t_1)$ and $(s_2e^{i\theta_2},t_2)$ be non-trivial elements of the Heisenberg group. Here, $\theta_1$ and $\theta_2$ are only defined when $s_1\neq 0$ and $s_2\neq 0$. Let $A$ and $B$ given by \eqref{eq-A-B} be the associated Heisenberg translations fixing $\infty$ and $o$ respectively. Replacing one of these by its inverse if necessary, we suppose $-\pi/2\le (\theta_1-\theta_2)\le \pi/2$. If one of the following four conditions is satisfied then $\langle A,\,B\rangle$ is discrete and freely generated by $A$ and $B$. The conditions are \begin{enumerate} \item[(1)] \begin{eqnarray*} |s_1^2+it_1|^{1/2}|s_2^2+it_2|^{1/2} & \ge & 2^{1/2}\left(\left(1-\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}+\left(1+\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}\right)^{3/4}\\ && \quad \times \left(\left(1-\frac{t_1}{|s_1^2+it_1|}\right)^{1/3}+\left(1+\frac{t_1}{|s_1^2+it_1|}\right)^{1/3}\right)^{3/4}, \end{eqnarray*} \item[(2)] if $s_1\neq0$ then $\displaystyle s_1\,|s_2^2+it_2|^{1/2} \ge \begin{cases} \dfrac{2\,{s_2}^3}{|s_2^2+it_2|^{3/2}}\,\cos(\theta_1-\theta_2)+2 & \hbox{if } s_2\neq0, \\ 2 & \hbox{if } s_2=0. \end{cases}$ \item[(3)] if $s_2\neq 0$ then $\displaystyle |s_1^2+it_1|^{1/2}\,s_2 \ge \begin{cases} \dfrac{2\,{s_1}^3}{|s_1^2+it_1|^{3/2}}\,\cos(\theta_1-\theta_2)+2 & \hbox{if } s_1\neq0, \\ 2 & \hbox{if } s_1=0. \end{cases}$ \item[(4)] if both $s_1,\,s_2\neq 0$ then $s_1s_2 \ge 4\cos^3\bigl((\theta_1-\theta_2)/3\bigr)$. \end{enumerate} \end{theorem} Observe that the expressions above are all invariant under the action of maps that fix both fixed points. Specifically, using the action of $(k,\psi)\in{\mathbb R}_+\times [0,2\pi)$ from \eqref{eq-action-stabiliser}, the left hand side in each case is a product of two terms, one scaling by $k$ and the other by $1/k$. Similarly, each term on the right hand side involving $(s_j,t_j)$ does not change when we scale by $k$, and the only place $\theta_1$ and $\theta_2$ arise is via $(\theta_1-\theta_2)$, which does not change when we add $\psi$ to both angles. Note that each item of the theorem is proved using a different technique. The first item of the theorem follows by considering fundamental domains bounded by Cygan spheres (special cases of bisectors), whereas the last item follows by considering fundamental domains bounded by two fans. The middle two items use a mixture of Cygan spheres and fans. Substituting $s_1=s_2=0$ in Theorem~\ref{thm-main0} (1) we obtain the following corollary, which is well known (for example, it is implicit in Section 3 of \cite{par1} and it is written down explicitly in Theorem 1.1 of Xie, Wang and Jiang \cite{xwy}). 
\begin{corollary}\label{thm-main2} Let $(0,t_1)$ and $(0,t_2)$ be elements of the Heisenberg group. Let $A$ and $B$ given by \eqref{eq-A-B} be the associated (vertical) Heisenberg translations fixing $\infty$ and $o$ respectively. If $|t_1|^{1/2}\,|t_2|^{1/2} \ge 2$ then $\langle A,\,B\rangle$ is discrete and freely generated by $A$ and $B$. \end{corollary} Note that the right hand sides of parts (1), (2) and (3) of Theorem~\ref{thm-main0} involve $(s_1e^{i\theta_1},t_1)$ and $(s_2e^{i\theta_2},t_2)$. By estimating these right hand sides, we obtain the following weakening of Theorem~\ref{thm-main0}: \begin{theorem}\label{thm-main1} Let $(s_1e^{i\theta_1},t_1)$ and $(s_2e^{i\theta_2},t_2)$ be as in Theorem~\ref{thm-main0} and let $A$ and $B$ be given by \eqref{eq-A-B}. If one of the following three conditions is satisfied then $\langle A,\,B\rangle$ is discrete and freely generated by $A$ and $B$. The conditions are \begin{enumerate} \item[(1')] $\quad \bigl|s_1^2+it_1\bigr|^{1/2}\,\bigl|s_2^2+it_2\bigr|^{1/2}\ge 4$; \item[(2')] if $s_1\neq 0$ then $\quad s_1\,\bigl|s_2^2+it_2\bigr|^{1/2}\ge 4\cos^2\bigl((\theta_1-\theta_2)/2\bigr)$; \item[(3')] if $s_2\neq 0$ then $\quad \bigl|s_1^2+it_1\bigr|^{1/2}\, s_2 \ge 4\cos^2\bigl((\theta_1-\theta_2)/2\bigr)$. \end{enumerate} \end{theorem} \begin{proof} First note that for $-1\le x\le 1$ we have $$ \bigl((1-x)^{1/3}+(1+x)^{1/3}\bigr)^{3/4}\le 2^{3/4} $$ with equality if and only if $x=0$. Therefore, (1') follows from (1). Secondly, $$ \frac{2s_2^3}{|s_2^2+it_2|^{3/2}}\,\cos(\theta_1-\theta_2)+2 \le 2\cos(\theta_1-\theta_2)+2 = 4\cos^2\bigl((\theta_1-\theta_2)/2\bigr). $$ Thus (2') follows from (2) and similarly (3') follows from (3). \end{proof} The following lemma shows that part (4) of Theorem \ref{thm-main0} follows from the other parts. Nevertheless, we will still include a direct geometrical proof of this in Section~\ref{sec-fd-fans}. \begin{lemma} If the condition of Theorem \ref{thm-main0}(4) holds then the conditions of Theorem \ref{thm-main0}(2) and (3) hold. \end{lemma} \begin{proof} First we claim that if $-\pi/2\le \theta_1-\theta_2 \le \pi/2$ then $$ 4\cos^3\bigl((\theta_1-\theta_2)/3\bigr) \ge 4\cos^2\bigl((\theta_1-\theta_2)/2\bigr) $$ with equality if and only if $\theta_1=\theta_2$. To see this, we define $\phi=(\theta_1-\theta_2)/3\in[-\pi/6,\pi/6]$, and write the right hand side in terms of $\phi$. $$ 4\cos^2(3\phi/2) = 2\cos(3\phi)+2 = 8\cos^3(\phi)-6\cos(\phi)+2. $$ Therefore \begin{eqnarray*} 4\cos^3\bigl(\phi\bigr) - 4\cos^2\bigl(3\phi/2\bigr) & = & -4\cos^3(\phi)+6\cos(\phi)-2 \\ & = & 2\bigl(1-\cos(\phi)\bigr)\bigl(2\cos^2(\phi)+2\cos(\phi)-1\bigr). \end{eqnarray*} The quadratic term is positive when $\cos(\phi)\ge \sqrt{3}/2$, which holds for all $\phi\in[-\pi/6,\pi/6]$, and so this expression is non-negative with equality if and only if $\phi=0$, which proves the claim. Therefore, if Theorem \ref{thm-main0}(4) holds, we have \begin{eqnarray*} s_1\,|s_2^2+it_2|^{1/2} & \ge & s_1s_2 \\ & \ge & 4\cos^3\bigl((\theta_1-\theta_2)/3\bigr) \\ & \ge & 4\cos^2\bigl((\theta_1-\theta_2)/2\bigr) \\ & = & 2\cos(\theta_1-\theta_2)+2 \\ & \ge & \frac{2s_2^3}{|s_2^2+it_2|^{3/2}}\cos(\theta_1-\theta_2)+2. \end{eqnarray*} Hence Theorem \ref{thm-main0}(2) holds. A similar argument shows that if Theorem \ref{thm-main0}(4) holds then Theorem \ref{thm-main0}(3) holds too. 
\end{proof} In order to prove Theorem~\ref{thm-main0} (1) we will use Lemma~\ref{lem-diam-Rcircle} below, which gives the maximum Cygan distance between a point on a finite ${\mathbb R}$-circle and any other point on the same ${\mathbb R}$-circle. We believe this will be of independent interest. An ${\mathbb R}$-circle $R$ is the boundary of a totally geodesic Lagrangian subspace of ${\bf H}^2_{\mathbb C}$ and is the fixed point set of an anti-holomorphic involution $\iota_R$ in the isometry group of ${\bf H}^2_{\mathbb C}$. A ${\mathbb C}$-circle $C$ is the boundary of a totally geodesic complex line of ${\bf H}^2_{\mathbb C}$. Thinking of $\partial{\bf H}^2_{\mathbb C}$ as the one point compactification of the Heisenberg group, an ${\mathbb R}$-circle $R$ is called finite if it does not contain the point $\infty$. Finite ${\mathbb R}$-circles are non-planar space curves with interesting geometric properties; see Goldman \cite{gol}. In particular, each finite ${\mathbb R}$-circle $R$ is a meridian of a Cygan sphere. This Cygan sphere is preserved as a set by $\iota_R$ and its centre is $\iota_R(\infty)$. In particular, every point on $R$ is the same Cygan distance from $\iota_R(\infty)$, and we call this common distance $r$ the radius of $R$. Given a point $p$ of $R$ we want to find a point $q$ in $R$ that maximises the Cygan distance from $p$ among all points of $R$. We call this distance the diameter $d$ of $R$ at the point $p$. It is clear from the triangle inequality that $d\le 2r$. However, the Cygan metric is not a geodesic metric, and so for most points $p$ the diameter $d$ is strictly less than $2r$. In order to write points on $R$ in an invariant way, we use the Cartan angular invariant ${\mathbb A}$. The lemma below gives a precise formula for the diameter of $R$ at $p$. \begin{lemma}\label{lem-diam-Rcircle} Let $R$ be a finite ${\mathbb R}$-circle fixed by the anti-holomorphic involution $\iota_R$. Let $r$ be the Cygan distance from $\iota_R(\infty)$ to any point of $R$. For $\alpha\in[0,\pi/2]$, let $p_\alpha$ be a point on $R$ with ${\mathbb A}(p_\alpha,\iota_R(\infty),\infty)=2\alpha-\pi/2$. Then the maximum Cygan distance from $p_\alpha$ to any other point of $R$ is given by $$ d_\alpha(R)=2^{1/2}r\bigl(\cos^{2/3}(\alpha)+\sin^{2/3}(\alpha)\bigr)^{3/4}. $$ \end{lemma} Observe that $d_\alpha(R)\le 2r$, which is attained if and only if $\alpha=\pi/4$, that is whenever the points $p_\alpha$, $\iota_R(\infty)$, $\infty$ lie on a common $\mathbb R$-circle. Also, $d_\alpha(R)\ge \sqrt{2}r$, which is attained if and only if $\alpha=0$ or $\alpha=\pi/2$, that is whenever the points $p_\alpha$, $\iota_R(\infty)$, $\infty$ lie on a common $\mathbb C$-circle. \section{Background} All material in this section is standard; see \cite{gol} for example unless otherwise indicated. \subsection{Complex hyperbolic space} Let $\mathbb C^{2,1}$ be the $3$-dimensional complex vector space equipped with the Hermitian form of signature $(2,1)$ given by $$ \langle{\bf z},{\bf w}\rangle={\bf w}^{\ast}H{\bf z}=\bar w_3 z_1+\bar w_2 z_2+\bar w_1 z_3, $$ where ${\bf z},\,{\bf w}$ are column vectors in $\mathbb C^3$ and the matrix of the Hermitian form is given by \begin{center} $H =\left[ \begin{array}{ccc} 0 & 0 & 1\\ 0 & 1 & 0 \\ 1 & 0 & 0\\ \end{array}\right].$ \end{center} If ${\bf z} \in \mathbb C^{2,1}$ then $\langle{\bf z},{\bf z}\rangle$ is real. 
Thus we may consider the following subsets of $\mathbb C^{2,1}\setminus \{{\bf 0}\}:$ \begin{eqnarray*} V_+ & = & \{{\bf z}\in\mathbb C^{2,1}:\langle{\bf z},{\bf z} \rangle>0\}, \\ V_{-} & = & \{{\bf z}\in\mathbb C^{2,1}:\langle{\bf z},{\bf z} \rangle<0\},\\ V_{0} & = & \{{\bf z} \in\mathbb C^{2,1}\setminus \{{\bf 0}\}:\langle{\bf z},{\bf z} \rangle=0\}. \end{eqnarray*} A vector ${\bf z}$ in $\mathbb C^{2,1}$ is called positive, negative or null depending on whether ${\bf z}$ belongs to $V_+$, $V_-$ or $V_0$ respectively. Let $\mathbb P:\mathbb C^{2,1}\setminus\{{\bf 0}\}\longrightarrow \mathbb C {\mathbb P}^{2}$ be the projection map onto the complex projective space. The complex hyperbolic space is defined to be $\c^2=\mathbb P (V_{-})$. The ideal boundary of complex hyperbolic space is $\partial \c^2=\mathbb P (V_{0})$. Let ${\rm U}(2,1)$ be the unitary group of the above Hermitian form. The biholomorphic isometry group of $\c^2$ is the projective unitary group ${\rm PU}(2,1)$. In addition, the map ${\bf z}\longmapsto \bar{\bf z}$ that sends each entry of ${\bf z}$ to its complex conjugate is an anti-holomorphic isometry of $\c^2$. Any other anti-holomorphic isometry may be written as the projectivisation of the composition of this map and an element of ${\rm U}(2,1)$. We define the Siegel domain model of complex hyperbolic space by taking the section defined by $z_3=1$ for the given Hermitian form. In other words, if $(z_1,z_2)$ is in $\mathbb C^2$, we define its standard lift to be ${\bf z}=(z_1,z_2,1)^t$ in $\mathbb C^{2,1}$. The Siegel domain is the subset of $\mathbb C^2$ consisting of points whose standard lift lies in $V_-$. Specifically, the Siegel domain is: $$ \c^2 =\{(z_1,z_2)\in \mathbb C^2 : \ 2\Re(z_1)+|z_2|^2<0\}. $$ Now let ${\bf q}_\infty=(1,0,0)^t$ be the column vector in $\mathbb C^{2,1}$. It is easy to see that ${\bf q}_\infty\in V_0$. We define $\mathbb P({\bf q}_\infty)=\infty$, which lies in $\partial \c^2$. Any point in $\partial\c^2-\{\infty\}$ has a standard lift in $V_0$. That is, $$ \partial \c^2-\{\infty\}=\mathbb P(V_{0}-\{{\bf q}_\infty\})=\{(z_1,z_2)\in \mathbb C^2 :2\Re(z_1)+|z_2|^2=0\}. $$ The origin is the point $o\in \partial{\bf H}^2_{\mathbb C}={\mathbb P}(V_0)$ with $(z_1,z_2)=(0,0)$. Suppose $z_1,\,z_2,\,z_3$ are distinct points in $\partial{\bf H}^2_{\mathbb C}$ with standard lifts ${\bf z}_1$, ${\bf z}_2$, ${\bf z}_3$ respectively. We define their Cartan angular invariant ${\mathbb A}(z_1,z_2,z_3)$ to be $$ {\mathbb A}(z_1,z_2,z_3) =\arg\bigl(-\langle{\bf z}_1,{\bf z}_2\rangle\langle{\bf z}_2,{\bf z}_3\rangle\langle{\bf z}_3,{\bf z}_1\rangle\bigr). $$ We know that ${\mathbb A}(z_1,z_2,z_3)=\pm \frac{\pi}{2}$ (resp. ${\mathbb A}(z_1,z_2,z_3)=0$) if and only if $z_1,z_2,z_3$ lie on the same $\mathbb C$-circle (resp. $\mathbb R$-circle). \subsection{The Heisenberg group} The set $\partial \c^2-\{\infty\}$ naturally carries the structure of the Heisenberg group ${\mathfrak N}$. Thus, the boundary of complex hyperbolic space is the one-point compactification of the Heisenberg group, which should be thought of as a generalisation of the well-known fact that the boundary of the upper half-space model of ${\bf H}^3_{\mathbb R}$ is the one-point compactification of $\mathbb C$. 
We recall that the Heisenberg group ${\mathfrak N}$ is $\mathbb C\times\mathbb R$ with the group law $$ (\zeta_1,v_1)\cdot(\zeta_2,v_2)=\bigl(\zeta_1+\zeta_2,v_1+v_2+2\Im(\zeta_1\bar\zeta_2)\bigr). $$ The identity element in the Heisenberg group is $(0,0)$, which we denote by $o$. The map $\Pi_V:{\mathfrak N}\longrightarrow {\mathbb C}$ given by $\Pi_V:(\zeta,v)\longmapsto \zeta$ is called vertical projection. It is a homomorphism. Given $z=(\zeta,v)$ in the Heisenberg group, we define its standard lift to be $$ {\bf z} = \begin{pmatrix} -|\zeta|^2 + iv \\ \sqrt{2}\zeta \\ 1 \end{pmatrix}\in V_0. $$ Given $(\tau,t)$ in the Heisenberg group, there is a unique, upper triangular unipotent element of ${\rm U}(2,1)$ taking the standard lift of $o=(0,0)$ to the standard lift of $(\tau,t)$, which is given by $$ T_{(\tau,t)}=\begin{pmatrix} 1 & -\sqrt{2} \bar{\tau} & -|\tau|^2+it \\ 0 & 1 & \sqrt{2}\tau \\ 0 & 0 & 1 \end{pmatrix}. $$ The map $(\tau,t)\longmapsto T_{(\tau,t)}$ is a group homomorphism from ${\mathfrak N}$ to ${\rm U}(2,1)$. Thus, applying $T_{(\tau,t)}$ to the standard lift of points in ${\mathfrak N}$ is equivalent to ${\mathfrak N}$ acting on itself by left translation; that is, the map $(\zeta,v)\longmapsto (\tau,t)\cdot(\zeta,v)=\bigl(\tau+\zeta,t+v+2\Im(\tau\bar\zeta)\bigr)$. We define the Cygan metric on the Heisenberg group by $$ \rho_0\big((\zeta_1,v_1), (\zeta_2,v_2)\big)= \big||\zeta_1-\zeta_2|^2-iv_1+iv_2-2i\Im(\zeta_1\bar\zeta_2) \big|^{\frac{1}{2}}. $$ There is an easy way to compute the Cygan distance. If $z_1,\,z_2$ are two points in ${\mathfrak N}$ with standard lifts ${\bf z}_1,\,{\bf z}_2\in V_0$ respectively then $$ \rho_0(z_1,z_2)=\bigl|\langle {\bf z}_1,{\bf z}_2\rangle\bigr|^{1/2}. $$ We define the Cygan sphere $S_{(r,z_0)}$ of radius $r$ and centre $z_0 \in \partial \c^2$ by $$ S_{(r,z_0)}= \Bigl\{z \in \partial \c^2: \rho_0(z,z_0)=r\Bigr\}.$$ \subsection{Special subsets of complex hyperbolic space and its boundary} There are two types of totally geodesic subspaces of $\c^2$ with real dimension 2. The first is the intersection with $\c^2$ of a complex line (copy of ${\mathbb C}{\mathbb P}^1$ inside ${\mathbb C}{\mathbb P}^2$). These are fixed by involutions that are holomorphic isometries of $\c^2$. Their intersections with $\partial\c^2$ are called chains or $\mathbb C$-circles. The second type of totally geodesic subspace is the intersection of $\c^2$ with Lagrangian planes (copies of ${\mathbb R}{\mathbb P}^2$ inside ${\mathbb C}{\mathbb P}^2$). These are fixed by involutions that are anti-holomorphic isometries of $\c^2$. Their intersections with $\partial\c^2$ are called ${\mathbb R}$-circles. A ${\mathbb C}$-circle or an ${\mathbb R}$-circle is called infinite if it passes through $\infty$ and is called finite otherwise. There are no totally geodesic real hypersurfaces in $\c^2$ and so it is necessary to make a choice for the hypersurfaces containing the sides of a fundamental domain. We make two choices in this paper. The first are Cygan spheres, which are particular cases of boundaries of bisectors, and the second are fans. Both are foliated by complex lines and Lagrangian planes, and both are mapped to themselves by the involution fixing each complex line in this foliation and by the involution fixing each Lagrangian plane in the foliation. In what follows, we will discuss the boundary of these hypersurfaces in $\partial\c^2={\mathfrak N}\cup\{\infty\}$. 
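Before doing so, we record a short verification, included for the reader's convenience, of the identity $\rho_0(z_1,z_2)=\bigl|\langle {\bf z}_1,{\bf z}_2\rangle\bigr|^{1/2}$ recalled above, since it is used repeatedly below. For $z_j=(\zeta_j,v_j)\in{\mathfrak N}$ with standard lifts ${\bf z}_j$ we have
$$
\langle {\bf z}_1,{\bf z}_2\rangle
=(-|\zeta_1|^2+iv_1)+2\zeta_1\bar\zeta_2+(-|\zeta_2|^2-iv_2)
=-|\zeta_1-\zeta_2|^2+i(v_1-v_2)+2i\Im(\zeta_1\bar\zeta_2),
$$
and the modulus of the right hand side is exactly $\rho_0\bigl((\zeta_1,v_1),(\zeta_2,v_2)\bigr)^2$.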
\subsection{Isometric spheres} In \cite{gol}, Goldman extended the definition of isometric spheres in real hyperbolic geometry to the geometry of complex hyperbolic space. These are spheres in the Cygan metric and are a particular type of bisector. Let $P$ be any element of ${\rm PU}(2,1)$ which does not fix $\infty$. Then the isometric sphere corresponding to $P$ is given by $$ I_P= \Bigl\{z \in \partial \c^2: \bigl|\langle {\bf z}, q_\infty \rangle \bigr|= \bigl|\langle {\bf z}, P^{-1}(q_\infty)\rangle\bigr|\Bigr\}. $$ If $P$ is an element of ${\rm PU}(2,1)$ then $P^{-1}$ has the following form: \begin{equation}\label{eq P} P=\left(\begin{matrix} a & b & c \\ d & e &f \\ g & h & j \end{matrix}\right), \qquad P^{-1}=\left(\begin{matrix} \bar j & \bar f & \bar c \\ \bar h & \bar e &\bar b \\ \bar g & \bar d & \bar a \end{matrix}\right). \end{equation} Such a map does not fix $\infty$ if and only if $g\neq 0$. The isometric sphere of $P$, denoted by $I_P$, is the sphere in the Cygan metric with centre $P^{-1}(\infty)$ and radius $r_P=1/\sqrt{|g|}$. Similarly, the isometric sphere $I_{P^{-1}}$ is the Cygan sphere with centre $P(\infty)$ and radius $r_{P^{-1}}=r_P=1/\sqrt{|g|}$. In Heisenberg coordinates, the centres of these spheres are $$ P^{-1}(\infty)= \left(\frac{\bar h}{\sqrt 2{\bar g}},-\Im(\frac{j}{g}) \right), \qquad P(\infty)= \left(\frac{d}{\sqrt 2{g}},\Im(\frac{a}{g}) \right). $$ We will need the following lemma. \begin{lemma}(Proposition $2.4$ of \cite{kam1})\label{Cygan Sphere} Let $P$ be any element of ${\rm PU}(2,1)$ such that $P(\infty) \neq \infty$. Then there exists $r_P>0$ such that for all $z \in \partial {{\bf H}^2_{\mathbb C}}\setminus\left\{\infty, P^{-1}(\infty)\right\}$ we have $$\rho_0\big(P(z),P(\infty)\big)=\frac{r_P^2}{\rho_0\big(z,P^{-1}(\infty)\big)}. $$ \end{lemma} Note that $P$ maps $I_P$ to $I_{P^{-1}}$ and maps the component of $\overline {{\bf H}^2_{\mathbb C}} \setminus I_P$ containing $\infty$ to the component of $\overline {{\bf H}^2_{\mathbb C}} \setminus I_{P^{-1}}$ not containing $\infty$. We use an involution $\iota$ swapping $o$ and $\infty$. It is defined as \begin{equation}\label{eq-iota} \iota=\left(\begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix}\right). \end{equation} For $(\zeta,v)\neq (0,0)$ the involution $\iota$ is given in Heisenberg coordinates as $$ \iota(\zeta,v)=\left(\frac{\zeta}{-|\zeta|^2+iv},\,\frac{-v}{\bigl||\zeta|^2+iv\bigr|^2}\right). $$ As a consequence of Lemma \ref{Cygan Sphere}, the involution $\iota$ maps the Cygan sphere with centre $o$ and radius $r$ to the Cygan sphere with centre $o$ and radius $1/r$. Also, conjugating $A$ by $\iota$ results in a matrix of the same form as $B$ but where the indices of $s_j$, $t_j$, $\theta_j$ are all $1$. Likewise, conjugating $B$ by $\iota$ results in a matrix of the same form as $A$ but where the indices of $s_j$, $t_j$, $\theta_j$ are all 2. We can give geographical coordinates on isometric spheres. For convenience when proving Lemma~\ref{lem-diam-Rcircle} we modify the more usual coordinates by letting $\alpha$ vary in $[0,\pi/2]$. Specifically, we parametrise points on the Cygan sphere with centre $o$ and radius $r>0$ by $s_{\alpha,\beta}$ where $(\alpha,\beta)\in[0,\pi/2]\times({\mathbb R}/2\pi{\mathbb Z})$, given by $$ s_{\alpha,\beta}=\Bigl(r\sqrt{\sin(2\alpha)}e^{i\alpha+i\beta},r^2\cos(2\alpha)\Bigr). 
$$ The point $s_{\alpha,\beta}$ has standard lift $$ {\bf s}_{\alpha,\beta}=\left(\begin{matrix} r^2ie^{2i\alpha} \\ r\sqrt{2\sin(2\alpha)}e^{i\alpha+i\beta} \\ 1 \end{matrix}\right). $$ Fixing $\alpha=\alpha_0$ gives a ${\mathbb C}$-circle. The union of the arcs where $\beta=\beta_0$ and $\beta=\beta_0+\pi$ gives an ${\mathbb R}$-circle. The involutions fixing these ${\mathbb C}$-circles and ${\mathbb R}$-circles all map the sphere to itself. For each $\beta_0$ this involution preserves the Cygan distance between points of the sphere, but the only value of $\alpha_0$ where this is true is $\alpha_0=\pi/4$, corresponding to the equator. \subsection{Fans} Fans are another class of surfaces in the Heisenberg group. They were introduced by Goldman and Parker in \cite{gp2}. An infinite fan is a Euclidean plane in ${\mathfrak N}$ whose image under vertical projection $\Pi_V$ is an affine line in ${\mathbb C}$. Infinite fans are foliated by infinite ${\mathbb C}$-circles and infinite ${\mathbb R}$-circles. Given $ke^{i\phi}\in{\mathbb C}$, let $F^{(\infty)}_{ke^{i\phi}}$ be the fan whose image under vertical projection is the line given by the equation $x\cos(\phi)+y\sin(\phi)=k$, where $z=x+iy$. We can write the points of this fan as $$ F^{(\infty)}_{ke^{i\phi}}=\Bigl\{ f_{a,b}=\bigl((k+ia)e^{i\phi},\,b-2ka\bigr)\ :\ (a,b)\in{\mathbb R}^2\Bigr\}. $$ The standard lift of $f_{a,b}$ is $$ {\bf f}_{a,b}=\left(\begin{matrix} -a^2-k^2+ib-2ika \\ \sqrt{2}(k+ia)e^{i\phi} \\ 1 \end{matrix}\right). $$ We can give this fan coordinates that resemble geographical coordinates. Fixing $a=a_0$ gives an infinite ${\mathbb C}$-circle and fixing $b=b_0$ gives an infinite ${\mathbb R}$-circle. The involutions fixing these ${\mathbb C}$-circles and ${\mathbb R}$-circles are all Cygan isometries. We are also interested in fans that are the image of this one under the involution $\iota$. That is, $$ F^{(o)}_{ke^{i\phi}}=\left\{ \left(\frac{-(k+ia)(a^2+k^2+ib-2ika)e^{i\phi}}{(a^2+k^2)^2+(b-2ka)^2},\, \frac{-b+2ka}{(a^2+k^2)^2+(b-2ka)^2}\right)\ :\ (a,b)\in{\mathbb R}^2\right\}. $$ \subsection{A discreteness criterion}\label{sec-discreteness} In order to show the group $\langle A,B\rangle$ is discrete and free we will consider its action on $\partial{\bf H}^2_{\mathbb C}={\mathfrak N}\cup\{\infty\}$ and we will use the Klein combination theorem, Proposition~\ref{KCT}. The construction is the following. We will consider four topological spheres in ${\mathfrak N}\cup\{\infty\}$ called $S_A^+,\,S_A^-,\,S_B^+,\,S_B^-$. The complement of each of these spheres has two (open) components, which we call the interior and the exterior. We assume that: \begin{enumerate} \item the interiors of $S_A^+,\,S_A^-,\,S_B^+,\,S_B^-$ are disjoint; \item $A$ sends the exterior of $S_A^-$ onto the interior of $S_A^+$, and hence $A^{-1}$ sends the exterior of $S_A^+$ onto the interior of $S_A^-$; \item $B$ sends the exterior of $S_B^-$ onto the interior of $S_B^+$, and hence $B^{-1}$ sends the exterior of $S_B^+$ onto the interior of $S_B^-$. \end{enumerate} The intersection of the exteriors, which we call $D$, is then a fundamental domain for $\langle A,B\rangle$. It is easy to see that if $W$ is a reduced word in $A^{\pm 1}$ and $B^{\pm 1}$ (that is, all consecutive occurrences of $A^{\pm 1}A^{\mp 1}$ and $B^{\pm 1}B^{\mp 1}$ have been cancelled) then $W$ sends $D$ into the interior of one of $S_A^+,\,S_A^-,\,S_B^+,\,S_B^-$ corresponding to the last generator to be applied. 
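For instance (a sketch, under hypotheses (1)--(3) above), take $z\in D$ and $W=AB^{-1}$. Writing $\mathrm{int}$ and $\mathrm{ext}$ for the interior and exterior of a sphere, we have
$$
z\in D\subset \mathrm{ext}(S_B^+)
\ \stackrel{B^{-1}}{\longmapsto}\ \mathrm{int}(S_B^-)
\ \subset\ \overline{\mathrm{ext}(S_A^-)}
\ \stackrel{A}{\longmapsto}\ \overline{\mathrm{int}(S_A^+)},
$$
where the first arrow uses (3), the middle inclusion uses the disjointness in (1), and the second arrow uses (2). Thus $W(z)$ lies in the closure of the interior of $S_A^+$, the sphere labelled by the last generator applied, namely $A$. Iterating this step over the letters of any reduced word gives the claim; in particular the image of the interior of $D$ under a non-trivial reduced word is disjoint from the interior of $D$, which is what yields the freeness and discreteness asserted in Proposition~\ref{KCT}.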
In our constructions, the spheres will either be Cygan spheres or fans. Many different versions of this result have been used for complex hyperbolic isometries; see, for example, Proposition~6.3 of Parker \cite{par2} or the notion of a group being compressing, due to Wyss-Gallifent in \cite{wg} and used by Monaghan, Parker and Pratoussevitch in \cite{mpa}. We can consider how our construction relates to that of Parker and Will \cite{PW}. Suppose that the interiors of $S_A^+,\,S_A^-,\,S_B^+,\,S_B^-$ are disjoint, but that there are points $q_+=S_A^+\cap S_B^-$ and $q_-=S_A^-\cap S_B^+$. The existence of such points implies that we have equality in the relevant expression of Theorem~\ref{thm-main0}. If furthermore $B(q_+)=q_-$ and $A(q_-)=q_+$ then $q_+$ is a fixed point of $AB$ and $q_-$ is a fixed point of $BA$. The existence of points with these properties is necessary if $AB$ is parabolic. \section{The Cygan diameter of a finite ${\mathbb R}$-circle} In this section we prove Lemma~\ref{lem-diam-Rcircle}. Let $R$ be any finite ${\mathbb R}$-circle and let $\iota_R$ be the anti-holomorphic involution fixing $R$. Let $r$ be the radius of $R$, that is, $r$ is the Cygan distance from $\iota_R(\infty)$ to any point of $R$. Applying a Cygan isometry (Heisenberg translation and rotation) if necessary, we may assume that $R$ has the following form $$ R=\Bigl\{p_{\alpha,\epsilon}=\bigl(\epsilon r\sqrt{\sin(2\alpha)}e^{i\alpha},r^2\cos(2\alpha)\bigr)\ :\ \alpha\in[0,\pi/2],\ \epsilon=\pm1\Bigr\}. $$ Perhaps the easiest way to see that this collection of points comprises an ${\mathbb R}$-circle is to consider the following map $\iota_R$: $$ \iota_R:\left(\begin{matrix} z_1 \\ z_2 \\ z_3 \end{matrix}\right) \longmapsto \left(\begin{matrix} \bar{z}_3r^2 \\ -i\bar{z}_2 \\ \bar{z}_1/r^2 \end{matrix}\right). $$ It is easy to check that $\langle \iota_R{\bf z},\iota_R{\bf w}\rangle=\overline{\langle{\bf z},{\bf w}\rangle}$ for any ${\bf z}$, ${\bf w}$ in $\mathbb C^{2,1}$, and so $\iota_R$ is a complex hyperbolic isometry. Moreover, it is easy to check that $\iota_R^2$ is the identity. Hence, by construction, the subset of $V_0$ projectively fixed by $\iota_R$ is an ${\mathbb R}$-circle. For $\alpha\in[0,\pi/2]$ and $\epsilon=\pm1$ consider $$ {\bf p}_{\alpha,\epsilon}=\left(\begin{matrix} r^2ie^{2i\alpha} \\ \epsilon r\sqrt{2\sin(2\alpha)}e^{i\alpha} \\ 1 \end{matrix}\right) \in V_0. $$ Observe that $\iota_R:{\bf p}_{\alpha,\epsilon}\longmapsto (-ie^{-2i\alpha}){\bf p}_{\alpha,\epsilon}$ and so ${\bf p}_{\alpha,\epsilon}$ is projectively fixed by $\iota_R$. We can express $p_{\alpha,\epsilon}={\mathbb P}{\bf p}_{\alpha,\epsilon}$ in Heisenberg coordinates as: $$ p_{\alpha,\epsilon}=\Bigl(\epsilon r\sqrt{\sin(2\alpha)}e^{i\alpha},r^2\cos(2\alpha)\Bigr). $$ This gives the result. It is easy to check that ${\mathbb A}(p_{\alpha,\epsilon},o,\infty)=2\alpha-\pi/2$. Thus, there are two points $p_\alpha$ satisfying the condition ${\mathbb A}(p_{\alpha},\iota_R(\infty), \infty)=2\alpha-\pi/2$, namely $p_{\alpha,+1}$ and $p_{\alpha,-1}$. 
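For completeness, here is the short computation behind this value of the Cartan invariant. With ${\bf o}=(0,0,1)^t$ and ${\bf q}_\infty=(1,0,0)^t$ we have $\langle{\bf p}_{\alpha,\epsilon},{\bf o}\rangle=ir^2e^{2i\alpha}$, $\langle{\bf o},{\bf q}_\infty\rangle=1$ and $\langle{\bf q}_\infty,{\bf p}_{\alpha,\epsilon}\rangle=1$, so that
$$
{\mathbb A}(p_{\alpha,\epsilon},o,\infty)
=\arg\bigl(-\langle{\bf p}_{\alpha,\epsilon},{\bf o}\rangle\langle{\bf o},{\bf q}_\infty\rangle\langle{\bf q}_\infty,{\bf p}_{\alpha,\epsilon}\rangle\bigr)
=\arg\bigl(-ir^2e^{2i\alpha}\bigr)=2\alpha-\frac{\pi}{2},
$$
independently of $\epsilon$.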
Given $\theta\in[0,\pi/2]$ and $\eta=\pm 1$, the Cygan distance from $p_{\alpha,\epsilon}$ to $p_{\theta,\eta}$ is given by \begin{eqnarray*} \rho_0({p_{\alpha,\epsilon}, p_{\theta,\eta}}) & = & \Bigl| ir^2e^{2i\theta}-ir^2e^{-2i\alpha} +2\eta\epsilon r^2\sqrt{\sin(2\alpha)\sin(2\theta)}e^{i\theta-i\alpha}\Bigr|^{1/2} \\ & = & \Bigl| -2r^2\sin(\theta+\alpha)e^{i\theta-i\alpha} +2\eta\epsilon r^2\sqrt{\sin(2\alpha)\sin(2\theta)}e^{i\theta-i\alpha}\Bigr|^{1/2} \\ & = & 2^{1/2}r\Bigl( \sin(\alpha+\theta)-\eta\epsilon\sqrt{\sin(2\alpha)\sin(2\theta)}\Bigr)^{1/2} \\ & = & 2^{1/2}r\Bigl( \sin(\alpha)\cos(\theta)+\cos(\alpha)\sin(\theta)-\eta\epsilon 2\sqrt{\sin(\alpha)\cos(\alpha)\sin(\theta)\cos(\theta)}\Bigr)^{1/2} \\ & = & 2^{1/2}r\bigl|\sin^{1/2}(\alpha)\cos^{1/2}(\theta)-\eta\epsilon \cos^{1/2}(\alpha)\sin^{1/2}(\theta)\bigr|. \end{eqnarray*} We need to maximise this quantity. Taking positive square roots of all the trigonometric functions, we see that this maximum arises when $\eta=-\epsilon$. We now use calculus, in Lemma~\ref{lem-trig2} below, to find the resulting maximum. Putting all this together gives a proof of Lemma~\ref{lem-diam-Rcircle}. \begin{lemma}\label{lem-trig2} Given $\alpha\in[0,\pi/2]$ define $f_\alpha:[0,\pi/2]\longrightarrow {\mathbb R}$ by $$ f_\alpha(\theta)=\sin^{1/2}(\alpha)\cos^{1/2}(\theta)+\cos^{1/2}(\alpha)\sin^{1/2}(\theta). $$ Then $$ f_\alpha(\theta)\le \bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{3/4}. $$ \end{lemma} \begin{proof} First observe \begin{eqnarray*} f_\alpha(0) & = & \sin^{1/2}(\alpha)\le \bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{3/4}, \\ f_\alpha(\pi/2) & = & \cos^{1/2}(\alpha)\le \bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{3/4}. \end{eqnarray*} In the first case, there is equality if and only if $\alpha=\pi/2$ and in the second case if and only if $\alpha=0$. Now suppose $\theta\in(0,\pi/2)$. Differentiating with respect to $\theta$ we have $$ f'_\alpha(\theta)=\frac{-\sin(\theta)\sin^{1/2}(\alpha)}{2\cos^{1/2}(\theta)} +\frac{\cos(\theta)\cos^{1/2}(\alpha)}{2\sin^{1/2}(\theta)}. $$ Therefore, if $\theta_0$ is a value of $\theta$ for which $f_\alpha'(\theta_0)=0$ we have $$ \sin^{3/2}(\theta_0)\sin^{1/2}(\alpha)=\cos^{3/2}(\theta_0)\cos^{1/2}(\alpha). $$ In other words, $\cos^{1/2}(\theta_0)=k\sin^{1/6}(\alpha)$ and $\sin^{1/2}(\theta_0)=k\cos^{1/6}(\alpha)$ for some constant $k$, depending on $\alpha$. Using $1=\cos^2(\theta_0)+\sin^2(\theta_0)$ we have: $$ 1=k^4\bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr). $$ Hence: \begin{eqnarray*} \cos^{1/2}(\theta_0) & = & \frac{\sin^{1/6}(\alpha)}{\bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{1/4}}, \\ \sin^{1/2}(\theta_0) & = & \frac{\cos^{1/6}(\alpha)}{\bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{1/4}}. \end{eqnarray*} Therefore \begin{eqnarray*} f_\alpha(\theta_0) & = & \sin^{1/2}(\alpha)\cos^{1/2}(\theta_0)+\cos^{1/2}(\alpha)\sin^{1/2}(\theta_0) \\ & = & \frac{\sin^{2/3}(\alpha)}{\bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{1/4}} +\frac{\cos^{2/3}(\alpha)}{\bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{1/4}} \\ & = & \bigl(\sin^{2/3}(\alpha)+\cos^{2/3}(\alpha)\bigr)^{3/4}. \end{eqnarray*} \end{proof} \section{Fundamental domain bounded by Cygan spheres} \label{sec-fd-bis} First consider $B$ as given in equation \eqref{eq-A-B}. The isometric sphere of $B$ is a Cygan sphere of radius $r_B=1/|s_2^2+it_2|^{1/2}$ with centre $B^{-1}(\infty)$. Likewise, the isometric sphere of $B^{-1}$ is a Cygan sphere of the same radius with centre $B(\infty)$. 
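Indeed, as a quick check: the bottom left entry of $B$ in \eqref{eq-A-B} is $g=-s_2^2+it_2$, so the general formula $r_P=1/\sqrt{|g|}$ recalled in the subsection on isometric spheres gives
$$
r_B=r_{B^{-1}}=\frac{1}{\sqrt{\,|{-s_2^2+it_2}|\,}}=\frac{1}{|s_2^2+it_2|^{1/2}},
$$
while the centres $B^{\pm1}(\infty)$ are obtained from the corresponding expressions for $P^{\pm 1}(\infty)$ given there; they are written out explicitly in the proof of Proposition~\ref{prop-iso-diam} below.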
By construction $o$, the fixed point of $B$, lies on both these spheres and they are tangent at this point. In this section, we use the result on the diameters of a finite ${\mathbb R}$-circle to find the smallest Cygan sphere centred at $o$ containing both these isometric spheres. Now consider $\iota A\iota$ and do a similar thing. The isometric sphere of $\iota A\iota$ is a Cygan sphere of radius $r_A=1/|s_1^2+it_1|^{1/2}$ with centre $\iota A^{-1}\iota(\infty)=\iota A^{-1}(o)$ and the isometric sphere of $\iota A^{-1}\iota$ is a Cygan sphere of the same radius with centre $\iota A(o)$. Again, we want to find the smallest Cygan sphere centred at $o$ containing both these isometric spheres. Now apply $\iota$ to find the largest Cygan sphere with the images of these isometric spheres in its exterior. If this sphere has large enough radius, these two spheres will be disjoint from the isometric spheres of $B$ and $B^{-1}$. Thus the interiors of these four spheres will be disjoint, and the result will follow as in Section~\ref{sec-discreteness}. \subsection{A Cygan ball containing the isometric spheres of $B$ and $B^{-1}$} \begin{proposition}\label{prop-iso-diam} The isometric spheres of $B$ and $B^{-1}$ are contained in the Cygan ball ${\mathcal B}$ with centre $o$ and radius $d_B$ where $$ d_B=\frac{2^{1/4}}{|s_2^2+it_2|^{1/2}} \left(\left(1-\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}+\left(1+\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}\right)^{3/4}. $$ Furthermore, let $p_+$ and $p_-$ be the points of these two isometric spheres on the boundary of ${\mathcal B}$. Then $B$ sends $p_+$ to $p_-$. \end{proposition} \begin{proof} We want to find the largest Cygan distance from $o$ to another point on the isometric sphere of $B$, respectively $B^{-1}$. Since Cygan spheres are strictly convex, the point realising this maximum, which we call $p_+$, respectively $p_-$, must be unique. Let $R_+$, respectively $R_-$, be the meridian of the isometric sphere of $B$, respectively $B^{-1}$, passing through $o$. We claim that $p_+$ lies on $R_+$. Observe that there is an anti-holomorphic involution $\iota_{R_+}$ whose fixed point set is $R_+$ and which maps the isometric sphere of $B$ to itself isometrically. Thus if $p_+$ does not lie on $R_+$ then $\iota_{R_+}(p_+)$ is a point of the isometric sphere of $B$ different from $p_+$ and the same distance from $o$. This contradicts the uniqueness of $p_+$. Thus $p_+$ lies on $R_+$ and similarly $p_-$ lies on $R_-$. This means we can use Lemma~\ref{lem-diam-Rcircle} to find the distances between $o$ and $p_\pm$. The Cygan isometric sphere of $B$ has radius $r_B=1/|s_2^2+it_2|^{1/2}$ and centre $$ B^{-1}(\infty)=\left( \frac{s_2 e^{i\theta_2}}{s_2^2+it_2},\ \frac{t_2}{|s_2^2+it_2|^2}\right). $$ Similarly, the Cygan isometric sphere of $B^{-1}$ has radius $r_{B^{-1}}=r_B =1/|s_2^2+it_2|^{1/2}$ and centre $$ B(\infty)=\left( \frac{s_2 e^{i\theta_2}}{-s_2^2+it_2},\ \frac{-t_2}{|s_2^2+it_2|^2}\right). $$ By construction, $o$ lies on both these spheres. It is easy to see that $$ {\mathbb A}(o,B^{-1}(\infty),\infty)=\arg(s_2^2+it_2),\quad {\mathbb A}(o,B(\infty),\infty)=\arg(s_2^2-it_2). $$ Define $\alpha$ by $2\alpha-\pi/2={\mathbb A}(o,B^{-1}(\infty),\infty)$. Then ${\mathbb A}(o,B(\infty),\infty)=-2\alpha+\pi/2$. This means that $$ \cos(2\alpha)=\frac{t_2}{|s_2^2+it_2|}. $$ Hence \begin{eqnarray*} \sin^2(\alpha) & = & \frac{1-\cos(2\alpha)}{2} = \frac{1}{2}\left(1-\frac{t_2}{|s_2^2+it_2|}\right), \\ \cos^2(\alpha) & = & \frac{1+\cos(2\alpha)}{2} = \frac{1}{2}\left(1+\frac{t_2}{|s_2^2+it_2|}\right). 
\end{eqnarray*} Therefore the isometric sphere of $B$ is contained in the ball with centre $o$ and radius \begin{eqnarray*} d_B & = & 2^{1/2}r_B\,\bigl(\cos^{2/3}(\alpha)+\sin^{2/3}(\alpha)\bigr)^{3/4} \\ & = & \frac{2^{1/4}}{|s_2^2+it_2|^{1/2}} \left(\left(1-\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}+\left(1+\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}\right)^{3/4}. \end{eqnarray*} The same is true for the isometric sphere of $B^{-1}$. This completes the proof of the first part. To prove the second part, we observe that there is an infinite ${\mathbb R}$-circle $R_0$ passing through $o$ so that the inversion $\iota_{R_0}$ fixing $R_0$ interchanges $B^{-1}(\infty)$ and $B(\infty)$. Hence this inversion also interchanges the isometric spheres of $B$ and $B^{-1}$. Since it fixes the origin, it must also interchange $p_+$ and $p_-$. Assume that $s_2\neq 0$. Then $R_+$ and $R_-$ are the unique meridians of these two isometric spheres passing through $o$. Hence $B=\iota_{R_0}\iota_{R_+}=\iota_{R_-}\iota_{R_0}$. Thus by construction $$ B(p_+)=\iota_{R_0}\iota_{R_+}(p_+)=\iota_{R_0}(p_+)=p_-. $$ Finally, when $s_2=0$, assume without loss of generality that $t_2>0$ (otherwise replace $B$ with $B^{-1}$). In this case, $p_+$, $o$ are the north and south poles of the isometric sphere of $B$ and $p_-$, $o$ are the south and north poles of the isometric sphere of $B^{-1}$. It is clear that $B$ sends $p_+$ to $p_-$. \end{proof} \subsection{Proof of Theorem \ref{thm-main0}$(1)$} To obtain condition $(1)$ of Theorem \ref{thm-main0}, we do a similar thing for $A$ using the Cygan metric in which $o$ plays the role of the point at infinity. We use the involution $\iota$ given in \eqref{eq-iota}. Recall that $\iota$ maps the Cygan sphere centred at $o$ of radius $d$ to the Cygan sphere centred at $o$ with radius $1/d$. \begin{proof} An argument similar to that of Proposition~\ref{prop-iso-diam} shows that the isometric spheres of $\iota A\iota$ and $\iota A^{-1}\iota$ are contained in the Cygan ball with centre $o$ and radius $d_A$ where $$ d_A=\frac{2^{1/4}}{|s_1^2+it_1|^{1/2}} \left(\left(1-\frac{t_1}{|s_1^2+it_1|}\right)^{1/3}+\left(1+\frac{t_1}{|s_1^2+it_1|}\right)^{1/3}\right)^{3/4}. $$ Therefore, the image under $\iota$ of these spheres is contained in the exterior of a Cygan sphere with centre $o$ and radius $1/d_A$. Hence, if $d_B\le 1/d_A$ then we can use the Klein combination theorem, Proposition~\ref{KCT}, to conclude that $A$ and $B$ freely generate $\langle A,\,B\rangle$. The condition $d_B\le 1/d_A$ is equivalent to \begin{eqnarray*} 1\ \ge\ d_Ad_B & = & \frac{2^{1/2}}{|s_2^2+it_2|^{1/2}|s_1^2+it_1|^{1/2}} \left(\left(1-\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}+\left(1+\frac{t_2}{|s_2^2+it_2|}\right)^{1/3}\right)^{3/4}\\ && \quad \times \left(\left(1-\frac{t_1}{|s_1^2+it_1|}\right)^{1/3}+\left(1+\frac{t_1}{|s_1^2+it_1|}\right)^{1/3}\right)^{3/4}. \end{eqnarray*} Multiplying through by $|s_1^2+it_1|^{1/2}|s_2^2+it_2|^{1/2}$ we obtain condition (1) of Theorem~\ref{thm-main0}. \end{proof} Note that if we simply wanted to obtain condition (1') of Theorem~\ref{thm-main1}, we could use the triangle inequality to say that any two points on a Cygan sphere of radius $r$ are a distance at most $2r$ apart. Therefore all points on the isometric spheres of $B$ and $B^{-1}$ lie within a Cygan distance $2/|s_2^2+it_2|^{1/2}$ of $o$. Similarly, all points on the isometric spheres of $\iota A\iota$ and $\iota A^{-1}\iota$ lie within a Cygan distance $2/|s_1^2+it_1|^{1/2}$ of $o$. Arguing as above, but with these weaker bounds, we obtain the condition 
$$ 1 \ge \frac{2}{|s_1^2+it_1|^{1/2}}\,\frac{2}{|s_2^2+it_2|^{1/2}}. $$ Multiplying through by $|s_1^2+it_1|^{1/2}|s_2^2+it_2|^{1/2}$ we obtain condition (1') of Theorem~\ref{thm-main1}. \subsection{Criteria for equality in Theorem \ref{thm-main0}$(1)$} We now briefly discuss what happens when we have equality in the criterion of Theorem \ref{thm-main0}$(1)$. By construction, there are points $p_+$ and $p_-=B(p_+)$ on the isometric spheres of $B$ and $B^{-1}$ so that both $p_+$ and $p_-$ are a distance $d_B$ from $o$. Similarly, there are points $q_+$ and $q_-$ on the $\iota$ images of the isometric spheres of $\iota A\iota$ and $\iota A^{-1}\iota$ so that $A$ sends $q_+$ to $q_-=A(q_+)$ and $q_+$ and $q_-$ are a distance $1/d_A=d_B$ from $o$. If we have $q_-=p_+$ and $q_+=p_-$ then $p_+$ is a fixed point of $AB$ and $p_-$ is a fixed point of $BA$. We claim this only happens when either (a) $s_1=s_2=0$ and $t_1t_2=4$ or (b) $t_1=t_2=0$, $\theta_1=\theta_2$ and $s_1s_2=4$. In the first case $AB$ is screw parabolic with angle $\pi$ and in the second case $AB$ is unipotent. Write $-s_2^2+it_2=ir_2^2e^{2i\alpha_2}$. Define $\phi_2$ by $$ \cos(\phi_2)=\frac{\sin^{1/3}(\alpha_2)}{\bigl(\sin^{2/3}(\alpha_2)+\cos^{2/3}(\alpha_2)\bigr)^{1/2}},\quad \sin(\phi_2)=\frac{\cos^{1/3}(\alpha_2)}{\bigl(\sin^{2/3}(\alpha_2)+\cos^{2/3}(\alpha_2)\bigr)^{1/2}} $$ so that $d_B$ may be rewritten as $$ d_B=r_2^{-1}\bigl(\sqrt{2\cos(\phi_2)\sin(\alpha_2)}+\sqrt{2\sin(\phi_2)\cos(\alpha_2)}\bigr). $$ Let $T_+$, respectively $T_-$, be the Heisenberg translation taking $o$ to $B^{-1}(\infty)$, respectively $B(\infty)$. Then $$ T_+^{-1}(o)=\left(\begin{matrix} -ir_2^{-2}e^{-2i\alpha_2} \\ i\sqrt{2\sin(2\alpha_2)}r_2^{-1}e^{2i\alpha_2+i\theta_2} \\ 1 \end{matrix}\right), \quad T_-^{-1}(o)=\left(\begin{matrix} ir_2^{-2}e^{2i\alpha_2} \\ i\sqrt{2\sin(2\alpha_2)}r_2^{-1}e^{-2i\alpha_2+i\theta_2} \\ 1 \end{matrix}\right). $$ The points on the ${\mathbb R}$-circle furthest from these two points are $$ \left(\begin{matrix} -ir_2^{-2}e^{-2i\phi_2} \\ -i\sqrt{2\sin(2\phi_2)}r_2^{-1}e^{-i\phi_2+3i\alpha_2+i\theta_2} \\ 1 \end{matrix}\right), \quad \left(\begin{matrix} ir_2^{-2}e^{2i\phi_2} \\ i\sqrt{2\sin(2\phi_2)}r_2^{-1}e^{i\phi_2-3i\alpha_2+i\theta_2} \\ 1 \end{matrix}\right). $$ The images of these two points under $T_+$ and $T_-$ are $p_+$ and $p_-$. As vectors in standard form these are: $$ {\bf p}_+=\left(\begin{matrix} -d_B^2 e^{-i\phi_2+i\alpha_2} \\ d_B\sqrt{2\cos(\phi_2-\alpha_2)}e^{-2i\phi_2+2i\alpha_2+i\theta_2} \\ 1 \end{matrix}\right), \quad {\bf p}_-=\left(\begin{matrix} -d_B^2 e^{i\phi_2-i\alpha_2} \\ -d_B\sqrt{2\cos(\phi_2-\alpha_2)}e^{2i\phi_2-2i\alpha_2+i\theta_2} \\ 1 \end{matrix}\right). $$ A similar calculation shows that $$ B:{\bf p}_+ \longmapsto e^{-2i\phi_2+2i\alpha_2}{\bf p}_-. $$ Thus, $B$ sends the vector ${\bf p}_+$ to the vector ${\bf p}_-$ with the multiplier $e^{-2i\phi_2+2i\alpha_2}$, which is in general non-trivial.
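As a brief consistency check (an observation added here, not needed in the argument that follows), comparing real and imaginary parts of $-s_2^2+it_2=ir_2^2e^{2i\alpha_2}$ gives $s_2^2=r_2^2\sin(2\alpha_2)$ and $t_2=r_2^2\cos(2\alpha_2)$, and the definition of $\phi_2$ shows that, for $\alpha_2\in(0,\pi/2)$,
$$ \phi_2=\alpha_2 \iff \cot(\alpha_2)=\tan^{1/3}(\alpha_2) \iff \tan(\alpha_2)=1 \iff \alpha_2=\tfrac{\pi}{4} \iff t_2=0. $$
Hence the multiplier $e^{-2i\phi_2+2i\alpha_2}$ is trivial exactly when $t_2=0$, while for $s_2=0$ we have $\alpha_2\in\{0,\pi/2\}$ and the multiplier equals $-1$. This is consistent with the equality cases (a) and (b) claimed above.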
Applying the involution $\iota$ and changing all the indices from $2$ to $1$ gives $$ \iota{\bf q}_+=\left(\begin{matrix} -d_A^2 e^{-i\phi_1+i\alpha_1} \\ d_A\sqrt{2\cos(\phi_1-\alpha_1)}e^{-2i\phi_1+2i\alpha_1+i\theta_1} \\ 1 \end{matrix}\right), \quad \iota{\bf q}_-=\left(\begin{matrix} -d_A^2 e^{i\phi_1-i\alpha_1} \\ -d_A\sqrt{2\cos(\phi_1-\alpha_1)}e^{2i\phi_1-2i\alpha_1+i\theta_1} \\ 1 \end{matrix}\right), $$ where \begin{eqnarray*} \cos(\phi_1) & = & \frac{\sin^{1/3}(\alpha_1)}{\bigl(\sin^{2/3}(\alpha_1)+\cos^{2/3}(\alpha_1)\bigr)^{1/2}},\quad \sin(\phi_1) \ =\ \frac{\cos^{1/3}(\alpha_1)}{\bigl(\sin^{2/3}(\alpha_1)+\cos^{2/3}(\alpha_1)\bigr)^{1/2}}, \\ d_A & = & r_1^{-1}\bigl(\sqrt{2\cos(\phi_1)\sin(\alpha_1)}+\sqrt{2\sin(\phi_1)\cos(\alpha_1)}\bigr). \end{eqnarray*} Applying $\iota$ we see that \begin{eqnarray*} {\bf q}_+ & = & \left(\begin{matrix} 1 \\ d_A\sqrt{2\cos(\phi_1-\alpha_1)}e^{-2i\phi_1+2i\alpha_1+i\theta_1} \\ -d_A^2 e^{-i\phi_1+i\alpha_1} \end{matrix}\right) \sim \left(\begin{matrix} -d_A^{-2} e^{i\phi_1-i\alpha_1} \\ -d_A^{-1} \sqrt{2\cos(\phi_1-\alpha_1)}e^{-i\phi_1+i\alpha_1+i\theta_1} \\ 1 \end{matrix}\right), \\ \\ {\bf q}_- & = &\left(\begin{matrix} 1 \\ -d_A\sqrt{2\cos(\phi_1-\alpha_1)}e^{2i\phi_1-2i\alpha_1+i\theta_1} \\ -d_A^2 e^{i\phi_1-i\alpha_1} \end{matrix}\right) \sim \left(\begin{matrix} -d_A^{-2} e^{-i\phi_1+i\alpha_1} \\ d_A^{-1} \sqrt{2\cos(\phi_1-\alpha_1)}e^{i\phi_1-i\alpha_1+i\theta_1} \\ 1 \end{matrix}\right). \end{eqnarray*} Thus we have $$ A:{\bf q}_+ \longmapsto {\bf q}_-. $$ Hence, $A$ sends the vector ${\bf q}_+$ to the vector ${\bf q}_-$ with multiplier $1$. Now, ${\bf p}_+ = {\bf q}_-$ and ${\bf p}_- = {\bf q}_+$ if and only if either (a) $s_1=s_2=0$ and $t_1t_2=4$ or (b) $t_1=t_2=0$, $\theta_1=\theta_2$ and $s_1s_2=4$. In the first case $AB$ is screw parabolic with angle $\pi$ and has fixed point $p_+$. This is because the non-trivial multiplier is an eigenvalue of $AB$ associated to its fixed point. In the second case $AB$ is a unipotent element. \section{Domain bounded by fans}\label{sec-fd-fans} Consider the infinite fan $F^{(\infty)}_{ke^{i\theta_1}}$ where $\theta_1$ is the angle associated to the Heisenberg translation $A$ given by \eqref{eq-A-B} and $k$ is any real number. We claim that $A$ sends $F_{ke^{i\theta_1}}^{(\infty)}$ to $F_{(k+s_1)e^{i\theta_1}}^{(\infty)}$. This is most easily seen by considering the standard lifts of points on the fans. \begin{eqnarray*} \lefteqn{ \left(\begin{matrix} 1 & -\sqrt{2}s_1e^{-i\theta_1} & -s_1^2+it_1 \\ 0 & 1 & \sqrt{2}s_1e^{i\theta_1} \\ 0 & 0 & 1 \end{matrix}\right) \left(\begin{matrix} -k^2-a^2+ib-2ika \\ \sqrt{2}(k+ia)e^{i\theta_1} \\ 1 \end{matrix}\right) }\\ & = & \left(\begin{matrix} -(k+s_1)^2-a^2+ib-2i(k+s_1)a+it_1 \\ \sqrt{2}(k+s_1+ia)e^{i\theta_1} \\ 1 \end{matrix}\right). \end{eqnarray*} The images of $F^{(\infty)}_{ke^{i\theta_1}}$ and $F^{(\infty)}_{(k+s_1)e^{i\theta_1}}$ under vertical projection $\Pi_V$ are the lines $$ x\cos(\theta_1)+y\sin(\theta_1)=k,\quad x\cos(\theta_1)+y\sin(\theta_1)=k+s_1. $$ The slab bounded by the fans $F^{(\infty)}_{ke^{i\theta_1}}$ and $F^{(\infty)}_{(k+s_1)e^{i\theta_1}}$ is a fundamental region for $\langle A\rangle$. The image of the slab under vertical projection $\Pi_V$ is the strip $$ S_A(k)=\left\{(x,y)\ :\ k\le x\cos(\theta_1)+y\sin(\theta_1) \le k+s_1 \right\}.
$$ \begin{figure} \caption{The vertical projection of the isometric spheres of $B$ and $B^{-1}$.} \label{fig:1} \end{figure} Therefore, if we can find a fundamental domain $D_B$ for $\langle B\rangle$ containing the complement of this slab then we can apply the Klein combination theorem to conclude that $\langle A,B\rangle$ is freely generated by $A$ and $B$. In particular, this is true if the vertical projection of $\partial D_B$ is contained in $S_A$ and the vertical projection of $D_B$ contains the complement of $S_A$. To see this, consider a point in the complement of $S_A$. By construction, its pre-image under vertical projection contains at least one point of $D_B$ and no points in its boundary. Since it is connected and path connected (the fibres of vertical projection are copies of the real line), all points in this preimage are contained in $D_B$, as required. In what follows, we give two different fundamental domains for $\langle B\rangle$. \subsection{Proof of Theorem \ref{thm-main0} parts (2) and (3)} \begin{proposition}\label{prop-fan-bisector} Let $(s_1e^{i\theta_1},t_1)$ and $(s_2e^{i\theta_2},t_2)$ be elements of the Heisenberg group with $s_1$ and $s_2$ both non-zero. Replacing one of them by its inverse if necessary, we suppose $-\pi/2\le (\theta_1-\theta_2)\le \pi/2$. Let $A$ and $B$, given by \eqref{eq-A-B}, be the associated Heisenberg translations fixing $\infty$ and $o$ respectively. Suppose that $$ s_1\,|s_2^2+it_2|^{1/2} \ge \frac{2\,s_2}{|s_2^2+it_2|^{1/2}}\,\cos(\theta_1-\theta_2)+2. $$ Then the vertical projections of the isometric spheres of $B$ and $B^{-1}$ are contained in the strip $S_A$. \end{proposition} \begin{proof} (Proposition~\ref{prop-fan-bisector}). The vertical projection of the isometric sphere of $B$ is a circle with radius $r_2=1/|s_2^2+it_2|^{1/2}$ and centre $$ \frac{s_2e^{i\theta_2}}{s_2^2+it_2}=\frac{s_2e^{i\theta_2}(s_2^2-it_2)}{|s_2^2+it_2|^2}. $$ Similarly, the vertical projection of the isometric sphere of $B^{-1}$ is a circle with radius $r_2=1/|s_2^2-it_2|^{1/2}$ and centre $$ \frac{s_2e^{i\theta_2}}{-s_2^2+it_2}=\frac{-s_2e^{i\theta_2}(s_2^2+it_2)}{|s_2^2+it_2|^2}. $$ Now, for a point $(x,y)$ on the vertical projection of the isometric sphere of $B$ we have \begin{eqnarray*} x & = & \frac{s_2(s_2^2\cos(\theta_2)+t_2\sin(\theta_2))}{|s_2^2+it_2|^2}+\frac{\cos(\phi)}{|s_2^2+it_2|^{1/2}},\\ y & = & \frac{s_2(s_2^2\sin(\theta_2)-t_2\cos(\theta_2))}{|s_2^2+it_2|^2}+\frac{\sin(\phi)}{|s_2^2+it_2|^{1/2}}. \end{eqnarray*} Therefore $$ x\cos(\theta_1)+y\sin(\theta_1) = \frac{s_2^3\cos(\theta_1-\theta_2)-s_2t_2\sin(\theta_1-\theta_2)}{|s_2^2+it_2|^2}+\frac{\cos(\phi-\theta_1)}{|s_2^2+it_2|^{1/2}}. $$ Similarly, for a point on the vertical projection of the isometric sphere of $B^{-1}$ we have $$ x\cos(\theta_1)+y\sin(\theta_1) = \frac{-s_2^3\cos(\theta_1-\theta_2)-s_2t_2\sin(\theta_1-\theta_2)}{|s_2^2+it_2|^2}+\frac{\cos(\phi-\theta_1)}{|s_2^2+it_2|^{1/2}}. $$ As $\cos(\theta_1-\theta_2)\ge 0$, we can see that for all points $(x,y)$ in the vertical projections of the isometric spheres of $B$ and $B^{-1}$ we have \begin{eqnarray*} \lefteqn{ \frac{-s_2^3\cos(\theta_1-\theta_2)}{|s_2^2+it_2|^2}-\frac{1}{|s_2^2+it_2|^{1/2}} -\frac{s_2t_2\sin(\theta_1-\theta_2)}{|s_2^2+it_2|^2} } \\ & \le & x\cos(\theta_1)+y\sin(\theta_1) \\ & \le & \frac{s_2^3\cos(\theta_1-\theta_2)}{|s_2^2+it_2|^2} +\frac{1}{|s_2^2+it_2|^{1/2}} -\frac{s_2t_2\sin(\theta_1-\theta_2)}{|s_2^2+it_2|^2}.
\end{eqnarray*} Hence these vertical projections lie in a strip of the form $S_A(k)$ provided $$ \frac{2s_2^3\cos(\theta_1-\theta_2)}{|s_2^2+it_2|^2} +\frac{2}{|s_2^2+it_2|^{1/2}}\le s_1. $$ Multiplying through by $|s_2^2+it_2|^{1/2}$, and using $s_2^2\le |s_2^2+it_2|$ together with $\cos(\theta_1-\theta_2)\ge 0$, we see that this holds whenever $$ s_1\,|s_2^2+it_2|^{1/2} \ge \frac{2\,s_2}{|s_2^2+it_2|^{1/2}}\,\cos(\theta_1-\theta_2)+2. $$ \end{proof} The case of Theorem~\ref{thm-main0} arising from condition (2) follows immediately from the above argument. Swapping the roles of $A$ and $B$ also gives the case arising from condition (3) of Theorem~\ref{thm-main0}. \begin{proposition}\label{prop-fan-bisector-2} Let $(s_1e^{i\theta_1},t_1)$ and $(0,t_2)$ be elements of the Heisenberg group. Let $A$ and $B$, given by \eqref{eq-A-B}, be the associated Heisenberg translations fixing $\infty$ and $o$ respectively with $s_2=0$. Suppose that $$ s_1\,|t_2|^{1/2} \ge 2. $$ Then the vertical projections of the isometric spheres of $B$ and $B^{-1}$ are contained in the strip $S_A$. \end{proposition} \begin{proof} This follows from the proof of Proposition~\ref{prop-fan-bisector} but is much simpler. In this case, the vertical projections of the isometric spheres of both $B$ and $B^{-1}$ are centred at the origin. If $(x,y)$ is a point on the vertical projection of either of these two isometric spheres, then it is given by $$ x=\frac{\cos(\phi)}{|t_2|^{1/2}},\quad y=\frac{\sin(\phi)}{|t_2|^{1/2}} $$ for some angle $\phi$. Hence $$ \frac{-1}{|t_2|^{1/2}}\le x\cos(\theta_1)+y\sin(\theta_1)=\frac{\cos(\phi-\theta_1)}{|t_2|^{1/2}}\le \frac{1}{|t_2|^{1/2}}. $$ For these two projections to lie in a strip of the form $S_A(k)$ we must have $2/|t_2|^{1/2}\le s_1$. This gives the result. \end{proof} \subsection{Criteria for equality in Theorem \ref{thm-main0}(2)} When we have equality in Proposition~\ref{prop-fan-bisector}, the bisectors for $B$ and the fans for $A$ are tangent at the two points \begin{eqnarray*} {\bf q}_+ & = & \left(\begin{matrix} -1/|s_2^2+it_2|-2s_2(s_2^2+it_2)e^{i\theta_1-i\theta_2}/|s_2^2+it_2|^{5/2}-(s_2^2-it_2)/|s_2^2+it_2|^2 \\ \sqrt{2}s_2(s_2^2-it_2)e^{i\theta_2}/|s_2^2+it_2|^2+\sqrt{2}e^{i\theta_1}/|s_2^2+it_2|^{1/2} \\ 1 \end{matrix}\right), \\ {\bf q}_- & = & \left(\begin{matrix} -1/|s_2^2+it_2|-2s_2(s_2^2-it_2)e^{i\theta_1-i\theta_2}/|s_2^2+it_2|^{5/2}-(s_2^2+it_2)/|s_2^2+it_2|^2 \\ -\sqrt{2}s_2(s_2^2+it_2)e^{i\theta_2}/|s_2^2+it_2|^2-\sqrt{2}e^{i\theta_1}/|s_2^2+it_2|^{1/2} \\ 1 \end{matrix}\right). \end{eqnarray*} We can easily verify that ${\bf q}_+$ (resp. ${\bf q}_-$) lies on the isometric sphere of $B$ (resp. $B^{-1}$). A calculation shows that \begin{eqnarray*} B{\bf q}_+ & = & \left(\begin{matrix} -1/|s_2^2+it_2|-2s_2(s_2^2+it_2)e^{i\theta_1-i\theta_2}/|s_2^2+it_2|^{5/2}-(s_2^2-it_2)/|s_2^2+it_2|^2 \\ \sqrt{2}s_2e^{i\theta_2}/|s_2^2+it_2|+\sqrt{2}(s_2^2+it_2)^2e^{i\theta_1}/|s_2^2+it_2|^{5/2} \\ (s_2^2-it_2)/|s_2^2+it_2| \end{matrix}\right) \\ & = & \frac{s_2^2-it_2}{|s_2^2+it_2|} \left(\begin{matrix} -1/|s_2^2+it_2|-2s_2(s_2^2+it_2)^2e^{i\theta_1-i\theta_2}/|s_2^2+it_2|^{7/2}-(s_2^2+it_2)/|s_2^2+it_2|^2 \\ -\sqrt{2}s_2(s_2^2+it_2)e^{i\theta_2}/|s_2^2+it_2|^2-\sqrt{2}(s_2^2+it_2)^3e^{i\theta_1}/|s_2^2+it_2|^{7/2} \\ 1 \end{matrix}\right). \end{eqnarray*} This shows that $B$ projectively maps ${\bf q}_+$ to ${\bf q}_-$ if and only if $(s_2^2+it_2)^3=|s_2^2+it_2|^3$, which is equivalent to $t_2=0$ as $s_2^2\ge 0$.
Similarly, $A{\bf q}_-={\bf q}_+$ if and only if both \begin{eqnarray*} (s_1^2-it_1)|s_2^2+it_2|^{1/2} & = & 2s_1+\frac{2s_1s_2(s_2^2+it_2)e^{i\theta_2-i\theta_1}}{|s_2^2+it_2|^{3/2}} +\frac{4s_2it_2e^{i\theta_1-i\theta_2}}{|s_2^2+it_2|^2}-\frac{2it_2}{|s_2^2+it_2|^{3/2}}, \\ s_1|s_2^2+it_2|^{1/2} &= & 2+\frac{2s_2^3e^{i\theta_2-i\theta_1}}{|s_2^2+it_2|^{3/2}}. \end{eqnarray*} Thus, ${\bf q}_-$ is mapped to ${\bf q}_+$ by $A$ only when $t_1=t_2=0$, $\theta_1=\theta_2$ and $s_1s_2=4$. \subsection{Proof of Theorem \ref{thm-main0}(4)} We now construct a fundamental domain for $\langle B\rangle$ bounded by two fans. To do this, we apply the involution $\iota$ given by \eqref{eq-iota}. Fix $k\in{\mathbb R}-\{0\}$. Consider the fan with vertex the origin $o$ given by $$ F_k^{(o)}=\left\{\left(\begin{matrix} 1 \\ \sqrt{2}(k+ia) \\ -k^2-a^2+ib-2ika \end{matrix}\right)\ :\ (a,b)\in{\mathbb R}^2\right\}. $$ Putting this into Heisenberg coordinates $(x+iy,v)$ we find $$ x=\frac{-k(k^2+a^2)+a(b-2ka)}{(k^2+a^2)^2+(b-2ka)^2},\quad y=\frac{-a(k^2+a^2)-k(b-2ka)}{(k^2+a^2)^2+(b-2ka)^2},\quad v=\frac{-b+2ka}{(k^2+a^2)^2+(b-2ka)^2}. $$ \begin{lemma}\label{lem-project-fan-o} Suppose that $k>0$. The images of $F_k^{(o)}$ and $F_{-k}^{(o)}$ under vertical projection are, respectively, the sets given in polar coordinates $(r,\theta)$ by $$ r\le \frac{1-\cos(\theta)}{2k}\quad\hbox{ and }\quad r\le\frac{1+\cos(\theta)}{2k}. $$ In Cartesian coordinates, these are $$ y^2-4kx(x^2+y^2)-4k^2(x^2+y^2)^2\ge 0\quad \hbox{ and }\quad y^2+4kx(x^2+y^2)-4k^2(x^2+y^2)^2\ge 0. $$ These are the interiors of two cardioids. \end{lemma} \begin{proof} We want to determine the region in the $(x,y)$ plane which is the image of $F_k^{(o)}$ under vertical projection. To do this, we consider local coordinates $(a,b)$ on $F_k^{(o)}$ and we seek points where the map $(a,b)\longmapsto (x,y)$ is not a local diffeomorphism. In other words, we need to find where the Jacobian $J$ of this map vanishes. A calculation shows this Jacobian is $$ J=\frac{-a(k^2+a^2)+k(b-2ka)}{((k^2+a^2)^2+(b-2ka)^2)^2}. $$ Therefore, the Jacobian vanishes precisely when $$ b=2ka+\frac{a}{k}(k^2+a^2). $$ We can parameterise this curve using $a$ as: $$ x=\frac{-k(k^2-a^2)}{(k^2+a^2)^2},\quad y=\frac{-2k^2a}{(k^2+a^2)^2},\quad v=\frac{-ka}{(k^2+a^2)^2}. $$ Writing it in polar coordinates $x+iy=re^{i\theta}$ and eliminating $a$ gives $$ r^2=x^2+y^2=\frac{k^2}{(k^2+a^2)^2},\quad x=r\cos(\theta)=\frac{k}{(k^2+a^2)}\left(1-\frac{2k^2}{(k^2+a^2)}\right). $$ Using $k>0$ we see that $r=k/(k^2+a^2)$ and $2kr=1-\cos(\theta)$. Similarly, in terms of $x$ and $y$ the region is given by $$ y^2-4kx(x^2+y^2)-4k^2(x^2+y^2)^2 =\left(\frac{a(k^2+a^2)-k(b-2ka)}{(k^2+a^2)^2+(b-2ka)^2}\right)^2\ge 0. $$ A similar argument holds for $F_{-k}^{(o)}$, replacing $k$ with $-k$ throughout, except that here we have $$ x=r\cos(\theta)=\frac{k}{k^2+a^2}\left(\frac{2k^2}{k^2+a^2}-1\right) $$ and so $2kr=1+\cos(\theta)$. \end{proof} \begin{figure} \caption{Cardioids which are the vertical projection of finite fans bounding a fundamental domain for $\langle B\rangle$ and strip which is the vertical projection of a fundamental domain for $\langle A\rangle$ bounded by two infinite fans.
We need the cardioids to both lie in the strip.} \label{fig-2} \end{figure} For $k>0$ let $C_k$ and $C_{-k}$ be the cardioids given in polar coordinates $(r,\theta)$ and Cartesian coordinates $(x,y)$ by \begin{eqnarray*} C_k & = & \left\{(r,\theta)\ : \ r=\frac{1-\cos(\theta)}{2k}\right\} =\left\{(x,y)\ :\ y^2-4kx(x^2+y^2)-4k^2(x^2+y^2)^2=0\right\}, \\ C_{-k} & = & \left\{(r,\theta)\ : \ r=\frac{1+\cos(\theta)}{2k}\right\} =\left\{(x,y)\ :\ y^2+4kx(x^2+y^2)-4k^2(x^2+y^2)^2=0\right\}. \end{eqnarray*} \begin{lemma}\label{lem-cardiod-in-strip} For each $k>0$ and each $\phi$ with $-\pi/2\le \phi\le \pi/2$, the cardioids $C_k$ and $C_{-k}$ are both contained in the strip $S_{k,\phi}$ given by $$ S_{k,\phi}=\left\{(x,y)\ :\ \Bigl|x\cos(\phi)+y\sin(\phi)\Bigr| \le \frac{\cos^3(\phi/3)}{k}\right\}. $$ \end{lemma} \begin{proof} (See Figure~\ref{fig-2}.) The cardioid $C_k$ is singular at the origin $r=0$. Using the polar parametrisation of $C_k$ we can parametrise its non-singular points by $\theta\in(0,2\pi)$. A point on $C_k$ is given in this parametrisation by $$ \bigl(x(\theta),\,y(\theta)\bigr) = \left(\frac{(1-\cos(\theta))\cos(\theta)}{2k},\,\frac{(1-\cos(\theta))\sin(\theta)}{2k}\right). $$ Consider $$ f_\phi(\theta)=k\bigl(x(\theta)\cos(\phi)+y(\theta)\sin(\phi)\bigr)=\frac{(1-\cos(\theta))\cos(\theta-\phi)}{2}. $$ Differentiating with respect to $\theta$ we have \begin{eqnarray*} f'_\phi(\theta) & = & \frac{-(1-\cos(\theta))\sin(\theta-\phi)+\sin(\theta)\cos(\theta-\phi)}{2} \\ & = & \sin(\theta/2)\cos(3\theta/2-\phi). \end{eqnarray*} Since $0<\theta/2<\pi$ we see $\sin(\theta/2)\neq 0$ and so the maximum and minimum values of $f_\phi(\theta)$ occur when $3\theta/2-\phi=\pi/2+n\pi$ where $n$ is an integer. That is, $\theta=2\phi/3+\pi/3+2n\pi/3$. We have $$ f_\phi(2\phi/3+\pi/3+2n\pi/3)=\cos^2(\phi/3+(n-1)\pi/3)\cos(\phi/3-(2n+1)\pi/3). $$ Since $-\pi/2\le\phi\le\pi/2$ we see that the maximum value of $|f_\phi(\theta)|$ occurs when $n=1$, that is when $\theta=2\phi/3+\pi$. The maximum value is $$ |f_{\phi}(2\phi/3+\pi)|=\cos^3(\phi/3). $$ Similarly for $C_{-k}$. Here we need to maximise the absolute value of $$ g_\phi(\theta)=\frac{(1+\cos(\theta))\cos(\theta-\phi)}{2}. $$ This occurs when $\theta=2\phi/3$. The maximum value is $$ |g_{\phi}(2\phi/3)|=\cos^3(\phi/3). $$ \end{proof} If we have equality in Lemma~\ref{lem-cardiod-in-strip}, we see that the points on the fans $F_k^{(o)}$ and $F_{-k}^{(o)}$ that project to points on the boundary of the strip are, respectively: \begin{equation}\label{eq-extreme} \left(\begin{matrix} -\cos^3(\phi/3)e^{i\phi/3}/k^2 \\ -\sqrt{2}\cos^2(\phi/3)e^{2i\phi/3}/k \\ 1 \end{matrix}\right), \quad \left(\begin{matrix} -\cos^3(\phi/3)e^{i\phi/3}/k^2 \\ \sqrt{2}\cos^2(\phi/3)e^{2i\phi/3}/k \\ 1 \end{matrix}\right). \end{equation} \begin{proof}(Theorem~\ref{thm-main0}(4)) We now prove condition (4) of Theorem~\ref{thm-main0}. Conjugating $A$ and $B$ by a Heisenberg rotation fixing $o$ and $\infty$, if necessary, we assume that $\theta_2=0$.
It is clear that (independent of $t_1$) the map $A$ sends $F_{{-\frac{s_1}{2}}e^{i\theta_1}}^{(\infty)}$ to $F_{{\frac{s_1}{2}}e^{i\theta_1}}^{(\infty)}$: $$ \left(\begin{matrix} 1 & -\sqrt{2}s_1e^{-i\theta_1} & -s_1^2+it_1 \\ 0 & 1 & \sqrt{2}s_1e^{i\theta_1} \\ 0 & 0 & 1 \end{matrix}\right) \left(\begin{matrix} -\frac{s_1^2}{4}-a^2+ib+is_1a \\ \sqrt{2}(-\frac{s_1}{2}+ia)e^{i\theta_1} \\ 1 \end{matrix}\right) =\left(\begin{matrix} -\frac{s_1^2}{4}-a^2+ib-ias_1+it_1 \\ \sqrt{2}(\frac{s_1}{2}+ia)e^{i\theta_1} \\ 1 \end{matrix}\right) $$ Therefore the slab bounded by these two fans is a fundamental region for $\langle A\rangle$. This slab is $$ D_A=\left\{(x+iy,v)\in {\mathfrak N}\ : \ \bigl| x\cos(\theta_1)+y\sin(\theta_1)\bigr|\le \frac{s_1}{2} \right\}\cup\{\infty\}. $$ The image of $D_A$ under vertical projection is the strip $$ S_A=\left\{(x,y)\ :\ \bigl| x\cos(\theta_1)+y\sin(\theta_1)\bigr|\le \frac{s_1}{2} \right\}. $$ Similarly, (independent of $t_2$) the map $B$ sends the fan $F_{-\frac{s_2}{2}}^{(o)}$ to the fan $F_{\frac{s_2}{2}}^{(o)}$: $$ \left(\begin{matrix} 1 & 0 & 0 \\ \sqrt{2}s_2 & 1 & 0 \\ -s_2^2+it_2 & -\sqrt{2}s_2 & 1 \end{matrix}\right) \left(\begin{matrix} 1 \\ \sqrt{2}(-\frac{s_2}{2}+ia) \\ -\frac{s_2^2}{4}-a^2+ib+is_2a \end{matrix}\right) =\left(\begin{matrix} 1 \\ \sqrt{2}(\frac{s_2}{2}+ia) \\ -\frac{s_2^2}{4}-a^2+ib -ias_2+it_2\end{matrix}\right). $$ That is, $\iota B\iota$ sends the fan $F_{-\frac{s_2}{2}}^{(\infty)}$ to the fan $F_{\frac{s_2}{2}}^{(\infty)}$. So the slab $D_{\iota B\iota}$ bounded by these two fans is a fundamental domain for $\langle \iota B\iota\rangle$, where $$ D_{\iota B\iota}=\left\{(x+iy,v)\in{\mathfrak N}\ :\ \bigl| x\bigr|\le \frac{s_2}{2} \right\}\cup\{\infty\}. $$ Thus the image of $D_{\iota B\iota}$ under $\iota$ is a fundamental domain for $\langle B\rangle$. This consists of all points in the exterior of the fans $F_{-\frac{s_2}{2}}^{(o)}$ and $F_{\frac{s_2}{2}}^{(o)}$. In order to show that $\langle A,B\rangle$ is free it is sufficient to show that the fans $F_{\pm \frac{s_2}{2}}^{(o)}$ are contained in the slab $D_A$ between $F_{{-\frac{s_1}{2}}e^{i\theta_1}}^{(\infty)}$ and $F_{{\frac{s_1}{2}}e^{i\theta_1}}^{(\infty)}$. It suffices to show that the vertical projections of $F_{\pm \frac{s_2}{2}}^{(o)}$ are contained in the strip $S_A$. The vertical projections of these fans are given in polar coordinates $(r,\theta)$ by $$ r\le \frac{1-\cos(\theta)}{s_2},\quad r\le \frac{1+\cos(\theta)}{s_2}. $$ Using Lemma~\ref{lem-cardiod-in-strip} with $\phi=\theta_1$ and $k=s_2/2$ (see Figure~\ref{fig-2}) we see that this is true provided $$ \frac{\cos^3(\frac{\theta_1}{3})}{s_2/2}\le \frac{s_1}{2}. $$ The result follows. \end{proof} Plugging in $k=s_2/2$ and $\phi=\theta_1$ into the formulae for the two extreme points \eqref{eq-extreme}, we see that $B$ maps one to the other if and only if $\theta_1=0$ and $t_2=0$. Similarly, $A$ maps one to the other if and only if $s_1s_2=4$, $\theta_1=0$ and $t_1=0$. \end{document}
\begin{document} \begin{abstract} In this work we construct a sequence of Riemannian metrics on the three-sphere with scalar curvature greater than or equal to $6$ and arbitrarily large widths. Our procedure is based on the connected sum construction of positive scalar curvature metrics due to Gromov and Lawson. We develop analogies between the areas of boundaries of special open subsets in our three-manifolds and $2$-colorings of associated full binary trees. Then, via combinatorial arguments and using the relative isoperimetric inequality, we argue that the widths converge to infinity. \end{abstract} \maketitle \setcounter{tocdepth}{1} \section{Introduction}\label{introduction} Since the proof of the positive mass conjecture in general relativity by Schoen and Yau \cite{S-Y}, and Witten \cite{Witten}, rigidity phenomena involving the scalar curvature have fascinated geometers. These results play an important role in modern differential geometry and there is a vast literature about them; see \cite{Ambrozio, Bray, BBEN, BBN, B-M, BMN, C-G, Eichmair, M-N, Miao, M-M, Min-Oo, Moraru, SY}. Many of these works concern rigidity phenomena involving the scalar curvature and the area of minimal surfaces of some kind in three-manifolds. The width of a Riemannian three-manifold $(M^3, g)$ is a very interesting geometrical invariant which is closely related to the production of unstable, closed, embedded minimal surfaces; see \cite{C-D} or \cite{Pitts}. It can be defined in different ways depending on the setting, but it is always intended to capture the lowest value $W$ for which it is possible to sweep $M$ out using surfaces of $g$-area at most $W$. Let us be more precise about this object. Let $g$ be a Riemannian metric on the three-sphere. A \textit{sweepout} of $(S^3, g)$ is a one-parameter family $\{\Sigma_t\}$, $t\in [0,1]$, of smooth $2$-spheres of finite area which are boundaries of open subsets, $\Sigma_t = \partial \Omega_t$, vary smoothly, and degenerate to points at times zero and one, with $\Omega_0 = \varnothing$ and $\Omega_1 = S^3$. The simplest way to sweep out the three-sphere is using the level sets of any coordinate function $x_i : S^3\subset \mathbb R^4 \rightarrow \mathbb R$. Let $\Lambda$ be a set of sweepouts of $(S^3, g)$. It is said to be saturated if, given a map $\phi \in C^{\infty}([0,1]\times S^3, S^3)$ such that the maps $\phi(t, \cdot)$ are diffeomorphisms of $S^3$, all of which are isotopic to the identity, and a sweepout $\{\Sigma_t\} \in \Lambda$, we have $\{\phi(t,\Sigma_t)\} \in \Lambda$. The \textit{width} of the Riemannian metric $g$ on $S^3$ with respect to the saturated set of sweepouts $\Lambda$ is defined as the following min-max invariant: \begin{equation*} W(S^3, g) = \inf_{\{\Sigma_t\} \in \Lambda} \ \ \sup_{t\in [0,1]} Area_g(\Sigma_t), \end{equation*} where $Area_g(\Sigma_t)$ denotes the surface area of the slice $\Sigma_t$ with respect to $g$. Marques and Neves \cite{M-N} proved that the width of a metric $g$ of positive Ricci curvature in $S^3$, with scalar curvature $R\geq 6$, satisfies the upper bound $W(S^3, g) \leq 4\pi$, and that there exists an embedded minimal sphere $\Sigma$ of index one and surface area $Area_g(\Sigma) = W(S^3, g)$. They also proved that, in the case of equality $W(S^3, g) = 4\pi$, the metric $g$ has constant sectional curvature one. The main purpose of the present work is to prove that this is no longer true without the assumption on the Ricci curvature.
More precisely, we have: \subsection*{Theorem A}\label{thm.A} \textit{For any $m>0$ there exists a Riemannian metric $g$ on $S^3$, with scalar curvature $R\geq 6$ and width $W(S^3, g) \geq m$. } \subsubsection*{Remark} It is interesting to stress that the Riemann curvature tensors of the examples that we construct are uniformly bounded. \begin{figure} \caption{The metric on $S^3$ associated with the full binary tree with $8$ leaves.} \label{figure1} \end{figure} In order to prove Theorem A, we construct a special sequence of metrics on the three-sphere using the connected sum procedure of Gromov and Lawson \cite{G-L}. More precisely, given a full binary tree, we associate a spherical region with each node. These regions are subsets of the round three-sphere obtained by removing either one, two or three identical geodesic balls, depending only on the vertex degree of the node. Regions corresponding to neighboring nodes are glued together using a copy of a fixed tube, which is obtained by Gromov-Lawson's method. Figure \ref{figure1} provides a rough depiction of the metric associated with the full binary tree with $8$ leaves. The lower bounds that we obtain for the widths rely on a combinatorial argument and on the relative isoperimetric inequality. The first key step of our argument is the choice of a special slice $\Sigma_{t_0} = \partial \Omega_{t_0}$, for any fixed sweepout $\{\Sigma_t\}$ of $S^3$ with the metrics that we consider. Then, we induce a $2$-coloring on the nodes of the associated binary tree, where the color of each node depends on the volume of $\Omega_{t_0}$ in the corresponding spherical region. A $2$-coloring of a tree is an assignment of one color, black or white, to each node. Finally, a joint application of a combinatorial tool about $2$-colorings of full binary trees and the relative isoperimetric inequality on some compact three-manifolds with boundary gives us lower bounds on the width. Liokumovich also used combinatorial arguments to construct Riemannian metrics of large widths on surfaces of small diameter; see \cite{Liokumovich}. The foundational idea of the Almgren-Pitts min-max theory for the area functional is to achieve the width as the area of a closed minimal surface, possibly disconnected and with multiplicities; see \cite{Pitts} or \cite{C-D} for further details. In our examples of metrics on $S^3$, it is expected that this min-max minimal surface will have multiple components, some of them stable with area strictly less than $4\pi$. These surfaces correspond to the spherical slices of minimum area in the tubes of our construction. Brendle, Marques and Neves \cite{BMN} constructed non-spherical metrics with scalar curvature $R\geq 6$ on the hemisphere, which coincide with the standard round metric in a neighborhood of the boundary sphere. These metrics are counterexamples to the Min-Oo conjecture. Our result also gives a setting in which a scalar curvature rigidity result does not hold. The standard argument to provide a positive lower bound for the width of a Riemannian metric is to consider the supremum of the isoperimetric profile. Given a Riemannian metric $g$ on the three-sphere, its isoperimetric profile is the function $\mathcal I : [0, vol_g(S^3)] \rightarrow \mathbb R$ defined by \begin{equation*} \mathcal I (v) = \inf \{Area_g(\partial \Omega) : \Omega \subset S^3 \text{ and } vol_g(\Omega)=v\}.
\end{equation*} In particular, if $\{\Sigma_t\}$ is a sweepout of $(S^3, g)$ and the associated open subsets are $\Omega_t\subset S^3$, then the volumes $vol_g(\Omega_t)$ assume all the values between zero and $vol_g(S^3)$. Then, we have \begin{equation*} \sup\{\mathcal I(v) : v \in [0, vol_g(S^3)]\} \leq W(S^3, g). \end{equation*} If $g$ has positive Ricci curvature and scalar curvature $R \geq 6$, the $4\pi$ upper bound on the supremum of the left-hand side of the above expression was previously obtained by Eichmair \cite{Eichmair}. We observe that the suprema of the isoperimetric profiles of the examples that we use also form an unbounded sequence. This claim is also proved via a combinatorial result and provides us with a second proof of the content of Theorem A. \subsection*{Acknowledgments} The results contained in this paper are based partially on the author's Ph.D. thesis under the guidance of Professor Fernando Cod\'a Marques. This work was done while I was visiting him at Princeton University. It is a pleasure to express my gratitude for his support. \subsection*{Organization} The content of this paper is organized as follows: In Section \ref{comb-section}, we develop the combinatorial tools that we use to estimate the geometric objects. In Section \ref{GL-section}, we briefly discuss the connected sum procedure due to Gromov and Lawson, and introduce our examples. In Section \ref{width-section}, we prove Theorem A. In Section \ref{isop-section}, we discuss the isoperimetric profiles of the constructed metrics. \section{Combinatorial Results}\label{comb-section} In this section we state and prove our combinatorial results. For each positive integer $m \in \mathbb{N}$, we use $T_m$ to denote the full binary tree for which all the $2^m$ leaves, the nodes of vertex degree one, have depth $m$. Recall that the vertex degree of a node is the number of edges incident to it. Observe that $T_m$ has $2^{m+1}-1$ nodes, with $2^k$ nodes on the $k$-th level of depth. We consider $2$-colorings of the nodes of $T_m$. A $2$-coloring of $T_m$ is an assignment of one color, black or white, to each node. Given a $2$-coloring, an edge is said to be dichromatic if it connects nodes with different colors. We are interested in $2$-colorings which minimize the number of dichromatic edges for a fixed number of black nodes. \begin{figure} \caption{A $2$-coloring of $T_2$ with three black nodes and one dichromatic edge.} \end{figure} \subsection{Definition} Let $m, d \in \mathbb N$ be positive integers. We define $B_m(d)$ to be the set of values $b$, with $1\leq b \leq 2^{m+1}-1$, for which there exists a $2$-coloring of $T_m$ with $d$ dichromatic edges and $b$ black nodes exactly. The first statement that we prove in this section is the following upper bound on the size of the sets $B_m(d)$: \subsection{Lemma}\label{coloring.nodes} \textit{For every $m, d \in \mathbb N$, it follows that $\# B_m(d) \leq 2^{d} m^{d}$. } \begin{proof} Consider a $2$-coloring $\mathcal C$ of the nodes of $T_m$ which has $d$ dichromatic edges exactly. Let $k$ be the highest level of depth of $T_m$ for which one of those $d$ edges joins a node on the $k$-th level to a node on the $(k+1)$-th level of depth. Choose one of these deeper dichromatic edges and observe that it determines a monochromatic component of $\mathcal C$ which is a copy of $T_{m-(k+1)}$, for some $0 \leq k \leq m-1$.
Changing the color of the nodes of this $T_{m-(k+1)}$ yields a $2$-coloring $\mathcal C^{\prime}$ with $d-1$ dichromatic edges and $b$ black nodes, for some $b \in B_m(d-1)$. Since we changed the colors of the nodes in the copy of $T_{m-(k+1)}$ only and they all have the same color on $\mathcal C$, the number of black nodes of $\mathcal C$ is either $b + (2^{m-k} -1)$ or $b - (2^{m-k} - 1)$. Therefore, $\# B_m(d) \leq 2m\cdot \# B_m(d-1)$ and the statement follows inductively and from the fact that $B_m(0) = \{2^{m+1}-1\}$. \end{proof} \subsection{Definition} For $m \in \mathbb N$ and $1\leq b \leq 2^{m+1}-1$, let $d_m^{\prime}(b)$ denote the minimum integer $d \in \mathbb N$ for which $b \in B_m(d)$. In other words, any $2$-coloring of $T_m$ with $b$ black nodes exactly has at least $d_m^{\prime}(b)$ dichromatic edges. As a consequence of the above estimate we prove a qualitative result that guarantees that there is no uniform bound on the values $d_m^{\prime}(b)$. \subsection{Proposition}\label{coloring.nodes.2} \textit{There exist integers $b(m)$, with $1\leq b(m) \leq 2^{m+1}-1$ and such that $\{d_m^{\prime}(b(m))\}_{m\in \mathbb N}$ is an unbounded sequence. } \begin{proof} Suppose, by contradiction, that there exists $D \in \mathbb N$ such that $d_m^{\prime}(b) \leq D$, for all $m\in \mathbb N$ and $1\leq b \leq 2^{m+1}-1$. Then, each such $b$ belongs to $B_m(d)$, for some $d \leq D$. By Lemma \ref{coloring.nodes}, we have \begin{equation*} \# \bigg( \bigcup_{d \leq D} B_m(d) \bigg) \leq \sum_{d \leq D} 2^{d} m^{d}. \end{equation*} But, for fixed $m\in \mathbb N$, there are $2^{m+1}-1$ possible values for $b$, and this exponential quantity cannot be bounded by the right-hand side of the above expression, which is polynomial in $m$. \end{proof} Since the vertex degrees of the nodes of the considered trees do not exceed $3$, we can use Proposition \ref{coloring.nodes.2} to prove the following: \subsection{Corollary}\label{disjoint-dich-edges} \textit{Given $k \in \mathbb N$, there exist $m \in \mathbb N$ and $b(m) < 2^{m+1}-1$, such that any $2$-coloring of $T_m$ with $b(m)$ black nodes exactly has at least $k$ pairs of neighboring nodes with different colors. Moreover, for $1\leq b \leq 2^{m+1}-1$, any $2$-coloring of $T_m$ with $b$ black nodes exactly has at least $(k-|b-b(m)|)/5$ pairwise disjoint pairs of neighboring nodes with different colors. } \begin{proof} Indeed, let $m$ and $b(m)$ be such that $d_m^{\prime}(b(m))\geq k$. This choice is allowed by Proposition \ref{coloring.nodes.2}. Then, any $2$-coloring of $T_m$ with $b(m)$ black nodes exactly has at least $k$ pairs of neighboring nodes with different colors. For $1\leq b \leq 2^{m+1}-1$, we can estimate $d_m^{\prime}(b)$ using the formula: \begin{equation}\label{eq-A} |d_m^{\prime}(t) - d_m^{\prime}(s)| \leq |t-s|. \end{equation} To prove this relation, we only need to verify that $|d_m^{\prime}(t) - d_m^{\prime}(t+1)| \leq 1$. Consider a $2$-coloring of $T_m$ with $t+1$ black nodes and $d_m^{\prime}(t+1)$ dichromatic edges exactly. Let $N$ be a black node of this coloring with the property that no other black node lives in a level deeper than its level of depth. Changing the color of $N$ to white we obtain a $2$-coloring with exactly $t$ black nodes and at most $d_m^{\prime}(t+1)+1$ dichromatic edges. This implies that $d_m^{\prime}(t)\leq d_m^{\prime}(t+1)+1$. Analogously, we obtain $d_m^{\prime}(t+1)\leq d_m^{\prime}(t)+1$, and we are done with the proof of expression (\ref{eq-A}).
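For instance (a small illustration added for concreteness, not needed in what follows), in $T_2$ one checks directly that
\begin{equation*}
d_2^{\prime}(1)=d_2^{\prime}(3)=d_2^{\prime}(4)=d_2^{\prime}(6)=1,\qquad d_2^{\prime}(2)=d_2^{\prime}(5)=2,\qquad d_2^{\prime}(7)=0,
\end{equation*}
since a coloring with exactly one dichromatic edge splits $T_2$ along a single edge into monochromatic pieces of sizes $\{1,6\}$ or $\{3,4\}$; these values are in agreement with (\ref{eq-A}).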
The choice of $m$ and $b(m)$, together with equation (\ref{eq-A}), gives us that $d_m^{\prime}(b) \geq k-|b-b(m)|$. Observe that each pair of neighboring nodes shares a node with at most four other pairs of neighboring nodes. This allows us to conclude that any $2$-coloring of $T_m$ with $b$ black nodes exactly has at least $(k-|b-b(m)|)/5$ pairwise disjoint pairs of neighboring nodes with different colors. This concludes the proof of the corollary. \end{proof} Corollary \ref{disjoint-dich-edges} is key in the proof of the lower bound that we provide for the supremum of the isoperimetric profiles of the Riemannian metrics that we construct in the next section. This is done in Section \ref{isop-section}. The rest of this section is devoted to the discussion of an interesting quantitative statement related to the previous results. It can also be applied to provide estimates for the widths of our examples. \subsection{Definition} We define the \textit{dichromatic value of $t$ leaves in $T_m$} as the least number of dichromatic edges of a $2$-coloring of $T_m$ with $t$ black leaves exactly. We denote this number by $d_m(t)$. The following statement about dichromatic values is the analogue of formula (\ref{eq-A}) for a fixed number of black leaves. We omit its proof here. \subsection{Lemma}\label{di-value} \textit{ $|d_m(t) - d_m(s)| \leq |s-t|$. } \subsection{Theorem}\label{coloring.leaves} \textit{ For each integer $m > 1$, let \[ a(m) = \left\{ \begin{array}{l l} 1 + 2 + 2^3 + \ldots + 2^{m-2}, & \quad \text{if } m \text{ is odd}\\ 1 + 2^2 + 2^4 + \ldots + 2^{m-2}, & \quad \text{if } m \text{ is even}. \end{array} \right.\] Then, $d_m(a(m)) \geq \left\lceil \frac{m}{2} \right\rceil$. } \begin{proof} The proof is by induction. Define $a(1) = 1$. The initial cases, $m=1$ and $2$, are very simple. Suppose that the statement is true for $m-1$ and $m-2$. Let $\mathcal C$ be a $2$-coloring of $T_m$ with $a=a(m)$ black leaves exactly. Let us count the number of different types of nodes in the $(m-1)$-th level of depth of $T_m$. Use $\alpha$ to denote the number of nodes which have two neighbor leaves of different colors on $\mathcal C$ and, similarly, let $\beta$ be the number of nodes which have two black neighbor leaves. Then, we can write the number of black leaves as $a(m) = \alpha +2\cdot \beta$. In particular, this expression implies that $\alpha$ is an odd number and \begin{equation}\label{eq1} \beta = \frac{1-\alpha}{2} + \frac{a(m)-1}{2} = \frac{1-\alpha}{2} + a(m-1) - m^{\prime}, \end{equation} where $m^{\prime} = 1$ if $m$ is even and $m^{\prime} = 0$ if $m$ is odd. Indeed, it is easily seen that $a(m)-1=2\cdot (a(m-1) - m^{\prime})$, for every $m\geq 3$. Next, we induce a $2$-coloring $\mathcal C ^{\prime}$ on the nodes of $T_{m-1}$. Let $\mathcal T_{m-2} \subset \mathcal T_{m-1} \subset T_m$ be such that $T_m$ minus $\mathcal T_{m-1}$ is the set of leaves of $T_m$, and $\mathcal T_{m-1}$ minus $\mathcal T_{m-2}$ is the set of leaves of $\mathcal T_{m-1}$. We begin to define $\mathcal C^{\prime}$ on $T_{m-1}$, identified with $\mathcal T_{m-1}$, by requiring $\mathcal C^{\prime}$ to be equal to $\mathcal C$ on $\mathcal T_{m-2}$ and on the $\alpha$ leaves of $\mathcal T_{m-1}$ which have neighbor leaves of different colors on $\mathcal C$. The $\beta$ leaves of $\mathcal T_{m-1}$ which have two black neighbor leaves of $\mathcal C$ on $T_m$ are colored black on $\mathcal C^{\prime}$.
The remaining leaves of $\mathcal T_{m-1}$ have two white neighbor leaves of $\mathcal C$ on $T_m$ and receive the color white on $\mathcal C^{\prime}$. The important properties of $\mathcal C^{\prime}$ are: \begin{enumerate} \item[(i)] The number of black leaves of $\mathcal C^{\prime}$ on $\mathcal T_{m-1}$ is greater than or equal to $\beta$ and at most $\alpha + \beta$; \item[(ii)] The number of dichromatic edges of $\mathcal C$ is at least $\alpha$ plus the number of dichromatic edges of $\mathcal C^{\prime}$. \end{enumerate} The first of these properties follows directly from the construction. To prove the second, we begin by observing that the dichromatic edges of $\mathcal C^{\prime}$ that are in $\mathcal T_{m-2}$ are, automatically, dichromatic edges of $\mathcal C$ on $T_m$. Then, we analyze cases to deal with the dichromatic edges of $\mathcal C^{\prime}$ that use leaves of $\mathcal T_{m-1}$. We omit this simple analysis. Let us use $t$ to denote the number of black leaves of $\mathcal C^{\prime}$ on $\mathcal T_{m-1}$. From (\ref{eq1}) and property (i) above, we conclude that \begin{equation}\label{eq2} \frac{1-\alpha}{2} - m^{\prime} \leq t - a(m-1) \leq \frac{1+\alpha}{2} - m^{\prime}. \end{equation} By Lemma \ref{di-value} and the induction hypothesis, we have \begin{equation}\label{eq-3} d_{m-1}(t) \geq d_{m-1}(a(m-1)) - \frac{1+\alpha}{2} \geq \left\lceil \frac{m-1}{2} \right\rceil - \frac{1+\alpha}{2}. \end{equation} By definition, $\mathcal C^{\prime}$ has at least $d_{m-1}(t)$ dichromatic edges. This implies, together with property (ii) and equation (\ref{eq-3}), that \begin{equation}\label{eq-4} \#\{\text{dichromatic edges of } \mathcal C \} \geq \left\lceil \frac{m-1}{2} \right\rceil + \frac{\alpha-1}{2}. \end{equation} Since $\alpha$ is odd, if $m$ is even we have that the number of dichromatic edges of $\mathcal C$ is at least $\left\lceil (m-1)/2 \right\rceil = \left\lceil m/2 \right\rceil$ and we are done. Otherwise, $m$ is odd and equation (\ref{eq-4}) gives \begin{equation*} \#\{\text{dichromatic edges of } \mathcal C \} \geq \left\lceil \frac{m}{2} \right\rceil - 1 + \frac{\alpha-1}{2} = \left\lceil \frac{m}{2} \right\rceil + \frac{\alpha-3}{2}. \end{equation*} If $\alpha \geq 3$, the induction process ends and we are done. From now on, we suppose that $m$ is odd and $\alpha = 1$. The argument is similar to the previous one, but one level higher on $T_m$. Recall that $\alpha = 1$ means that there is a unique node of $T_m$ which is a neighbor of leaves with different colors. Observe that each node on the $(m-2)$-th level of $T_m$, a leaf of $\mathcal T_{m-2}$, is associated with four leaves of $T_m$, namely the leaves at edge-distance two from it. Exactly one node on this level is associated with an odd number ($1$ or $3$) of black leaves of $\mathcal C$, because $\alpha = 1$. We denote this node and this odd number by $N$ and $\varphi$, respectively. Let us use $\theta$ and $\gamma$ to denote the number of nodes on the $(m-2)$-th level of $T_m$ which are associated, respectively, with two and four black leaves of $\mathcal C$. Then, we can write the number of black leaves of $\mathcal C$ as $a(m) = \varphi + 2 \theta + 4 \gamma$. In particular, this expression implies that $\varphi + 2\theta \equiv 3$ mod $4$ and \begin{equation*} \gamma = - \frac{1 + \varphi + 2 \theta}{4} + a(m-2). \end{equation*} Next, we induce a $2$-coloring $\mathcal C ^{\prime\prime}$ on the nodes of $T_{m-2}$.
We follow the same $\mathcal T_{m-2} \subset T_m$ notation that we introduced before, and we also need the analogous $\mathcal T_{m-3} \subset \mathcal T_{m-2}$. We choose $\mathcal C^{\prime\prime}$ to coincide with $\mathcal C$ on the nodes of $\mathcal T_{m-3}$ and on the $\theta$ leaves of $\mathcal T_{m-2}$ which are associated with two black leaves of $\mathcal C$. The $\gamma$ leaves of $\mathcal T_{m-2}$ which are associated with four black leaves of $\mathcal C$ are colored black. The node $N$ is colored white if $\varphi = 1$ and black if $\varphi = 3$. The leaves of $\mathcal T_{m-2}$ which remain uncolored receive the color white. As in the previous step, the important properties of $\mathcal C^{\prime\prime}$ are: \begin{enumerate} \item[(I)] The number of black leaves of $\mathcal C^{\prime\prime}$ is between the values $(\varphi -1)/2 + \gamma$ and $(\varphi -1)/2 + \gamma + \theta$; \item[(II)] The number of dichromatic edges of $\mathcal C$ is at least $1+ \theta$ plus the number of dichromatic edges of $\mathcal C^{\prime\prime}$. \end{enumerate} By the same reasoning that we used to obtain equation (\ref{eq-4}), we have \begin{equation*} \#\{\text{dichromatic edges of } \mathcal C \} \geq \left\lceil \frac{m-2}{2} \right\rceil + \frac{1+ \varphi + 2\theta}{4}. \end{equation*} Using that $m$ is odd, $\varphi = 1$ or $3$ and $\varphi + 2\theta \equiv 3$ mod $4$, we conclude that the right-hand side of the above expression is greater than or equal to $\left\lceil m/2 \right\rceil$. This finishes the induction step. \end{proof} An easy consequence of Theorem \ref{coloring.leaves} is the following: \subsection{Corollary}\label{disjoint-dich-edges2} \textit{Given $m \in \mathbb N$ there exists $a(m) \in \mathbb N$, $1\leq a(m) \leq 2^m$, such that any $2$-coloring of $T_m$ with $a(m)$ black leaves exactly has at least $(\left\lceil m/2\right\rceil)/5$ pairwise disjoint pairs of neighboring nodes with different colors. } \section{Constructing the examples}\label{GL-section} In this section, we introduce our examples. We begin with a brief discussion of the Gromov-Lawson metrics of positive scalar curvature. Then, for each full binary tree we construct an associated metric of scalar curvature greater than or equal to $6$ on the three-sphere. \subsection{Gromov-Lawson metrics} Gromov and Lawson developed a method that is well suited to performing connected sums of manifolds with positive scalar curvature; see \cite{G-L}. They proved the following statement: Let $(M^n, g)$ be a Riemannian manifold of positive scalar curvature. Given $p \in M$, $\{e_1,\ldots, e_n\} \subset T_p M$ an orthonormal basis, and $r_0>0$, it is possible to define a positive scalar curvature metric $g^{\prime}$ on the punctured geodesic ball $B(p,r_0)-\{p\}$ that coincides with $g$ near the boundary $\partial B(p,r_0)$, and such that $(B(p,r_1)-\{p\}, g^{\prime})$ is isometric to a half-cylinder for some $r_1>0$. \subsubsection{Remark:}\label{size of tube} Since $(B(p,r_1)-\{p\}, g^{\prime})$ is isometric to a half-cylinder, there exists $0 < r <r_1$ so that the $g^{\prime}$ volume of $B(p,r_0) - B(p,r)$ is bigger than one half of the $g$ volume of the removed geodesic ball $B(p,r_0)$. \subsection{Fundamental blocks}\label{blocks} In order to construct our examples, we use the above metrics to build three types of fundamental blocks. The first type is obtained from the above construction using the standard round metric on $M = S^3$, any $p \in S^3$, any orthonormal basis, and $r_0 = 1$.
We use $S_1 = (S^3 - B(p, r), g^{\prime})$ to denote this block, where $0< r < r_1 < r_0 = 1$ is chosen as in remark \ref{size of tube}. Observe that $S_1$ is a manifold with boundary, has positive scalar curvature, and has a product metric near the boundary two-sphere $\partial S_1$. To obtain the second fundamental block, $S_2$, we perform the same steps with $M = S_1$, choosing the new removed geodesic ball to be antipodally symmetric to $B(p,1)$. Finally, the third block, $S_3$, is obtained from $S^3$ after three applications of the Gromov-Lawson procedure, by removing three disjoint geodesic balls $B(p_i,1)$, $i= 1, 2$ and $3$. Summarizing, the fundamental blocks $S_1, S_2$ and $S_3$ are obtained from the standard three-sphere by removing one, two or three geodesic balls, respectively, and attaching a copy of a fixed piece for each removed ball. Each fundamental block has a product metric near its boundary spheres. Up to a re-scaling, we may assume that they have scalar curvature greater than or equal to $6$. Also, by remark \ref{size of tube}, we can suppose that the attached piece has volume bigger than one half of the volume of the removed balls. \subsection{A metric on $S^3$ associated with $T_m$} For each full binary tree $T_m$, we use the fundamental blocks to construct an associated metric on $S^3$. In our examples, each node of $T_m$ will be associated with a fundamental block. Following the notation of subsection \ref{blocks}, to a node of degree $k$ we associate a fundamental block of type $S_k$. We connect the blocks which correspond to neighboring nodes of $T_m$ by identifying one boundary sphere of the first with a boundary sphere of the second, with reversed orientations. After performing all identifications, we obtain a metric on $S^3$, which is denoted by $g_m$ and has scalar curvature $R\geq 6$. The metric $g_m$ decomposes $S^3$ into $2^{m+1}-1$ disjoint closed regions, which are isometric to the standard three-sphere with either one, two or three identical disjoint geodesic balls removed, and $2^{m+1}-2$ connecting tubes. Moreover, by construction, the tubes are isometric to each other and their volume is greater than the volume of one of the removed geodesic balls. \section{Lower bounds on the width}\label{width-section} In the introduction, we briefly defined the notions of sweepouts and width of Riemannian metrics on $S^3$. We begin this section by recalling what these interesting geometrical objects are. Then, we prove that the widths of the metrics $g_m$ constructed in Section \ref{GL-section} converge to infinity. We use $I = [0,1]$ to denote the closed unit interval on the real line. The $2$-dimensional Hausdorff measure on $S^3$ induced by a Riemannian metric $g$ is denoted by $\mathcal H^2$. Let us recall the definition of sweepouts. A \textit{sweepout} of $(S^3,g)$ is a family $\{\Sigma_t\}_{t\in I}$ of smooth $2$-spheres, which are boundaries of open sets $\Sigma_t = \partial \Omega_t$ such that: \begin{enumerate} \item $\Sigma_t$ varies smoothly in $(0,1)$; \item $\Omega_0 = \varnothing$ and $\Omega_1 = S^3$; \item $\Sigma_t$ converges to $\Sigma_{\tau}$, in the Hausdorff topology, as $t\rightarrow \tau$; \item $\mathcal H^2(\Sigma_t)$ is a continuous function of $t \in I$. \end{enumerate} Let $\Lambda$ be a set of sweepouts of $(S^3, g)$.
It is said to be saturated if, given a map $\phi \in C^{\infty}(I\times S^3, S^3)$ such that the maps $\phi(t, \cdot)$ are diffeomorphisms of $S^3$, all of which are isotopic to the identity, and a sweepout $\{\Sigma_t\}_{t\in I} \in \Lambda$, we have $\{\phi(t,\Sigma_t)\}_{t\in I} \in \Lambda$. The \textit{width of $(S^3,g)$ associated with $\Lambda$} is the following min-max invariant: \begin{equation*} W(S^3, g, \Lambda) = \inf_{\{\Sigma_t\} \in \Lambda} \ \ \max_{t\in [0,1]} \mathcal H^2(\Sigma_t). \end{equation*} From now on, we fix a saturated set of sweepouts $\Lambda$. The main result of this work is the following: \subsection{Theorem}\label{thm1} \textit{The sequence $\{g_m\}_{m\in \mathbb N}$ of Riemannian metrics on $S^3$ that we constructed satisfies: \begin{equation*} \lim_{m\rightarrow \infty} W(S^3, g_m, \Lambda) = + \infty. \end{equation*} } \begin{proof} Recall that $g_m$ is a metric on $S^3$ which is related to the full binary tree $T_m$. Also, there are $2^{m+1}-1$ disjoint closed subsets of $(S^3, g_m)$ isometric to the standard round three-sphere with either one, two or three identical disjoint balls removed. These spherical regions are associated with nodes of $T_m$, and two of them are glued to each other if, and only if, their corresponding nodes are neighbors in $T_m$. It is also important to recall that there is a tube connecting such neighboring regions, all of which are isometric to each other. For convenience, let us denote these spherical regions by: \begin{itemize} \item $L_i$ if it corresponds to a leaf of $T_m$; \item $B$ if it corresponds to the node of degree $2$; \item $A_j$ if it corresponds to a node of degree $3$. \end{itemize} After connecting any two neighboring spherical regions, we obtain a region $\mathcal A$ which is isometric to one of the domains depicted in Figure \ref{figure2}. \begin{figure} \caption{The possible regions that we obtain after gluing together two neighboring spherical regions.} \label{figure2} \end{figure} In the figure, the first region was obtained by gluing a spherical region of type $L_i$ to its only $A_j$ neighbor. The others are obtained by gluing either two $A_j$ regions or the $B$ region to one of its two $A_j$ neighbors. Choose $\alpha> 0$ such that $2\alpha$ is strictly less than the volume of one $A_j$. Then, $0< \alpha < vol(\mathcal A) -\alpha < vol(\mathcal A)$, for any of the possible $\mathcal A$'s. By the relative isoperimetric inequality, there exists $C>0$ such that, for any open subset $\Omega \subset \mathcal A$ of finite perimeter with $\alpha \leq vol(\Omega) \leq vol(\mathcal A) -\alpha$, we have \begin{equation}\label{isop-inequality} \mathcal H^2(\partial \Omega \cap int(\mathcal A)) \geq C. \end{equation} Moreover, since there are only three possible isometry types of $\mathcal A$, we can suppose that this constant does not depend on the type of $\mathcal A$. Let $\{\Sigma_t\}_{t\in I}$ be a sweepout of $(S^3, g_m)$. Consider the associated open sweepout $\{\Omega_t\}$, for which $\Sigma_t = \partial \Omega_t$. Let $a(m)$ be the integer provided by Corollary \ref{disjoint-dich-edges2}. Choose the least $t_0 \in I$ for which we have $vol(\Omega_{t_0} \cap L_i) \geq \alpha$, for at least $a(m)$ values of $i\in \{1, 2, 3, \ldots, 2^m\}$. Observe that at most $a(m)-1$ of the $L_i$'s can satisfy $vol(\Omega_{t_0}\cap L_i) > \alpha$; otherwise, by continuity, the same inequalities would hold for slightly smaller values of $t$, contradicting the minimality of $t_0$. Up to a reordering of their indices, suppose that $vol(\Omega_{t_0} \cap L_i) \geq \alpha$, for $i = 1, 2, \ldots, a(m)$, and $vol(\Omega_{t_0} \cap L_i) \leq \alpha$, otherwise.
On $T_m$, consider the $2$-coloring defined in the following way: the leaves associated with $L_1, \ldots, L_{a(m)}$ are colored black, the other leaves are colored white, and the nodes which are not leaves are colored black if, and only if, the volume of $\Omega_{t_0}$ inside the corresponding spherical region is greater than or equal to $\alpha$. This $2$-coloring of $T_m$ has exactly $a(m)$ black leaves. By Corollary \ref{disjoint-dich-edges2}, the constructed coloring has at least $m/10$ pairwise disjoint pairs of neighboring nodes with different colors. Observe that each such pair gives one $\mathcal A$ type region for which we have \begin{equation}\label{volume of omega} \alpha \leq vol(\Omega_{t_0}\cap \mathcal A) \leq vol(\mathcal A) -\alpha. \end{equation} This follows because we chose $\alpha$ in such a way that the volumes of our spherical regions are greater than $2\alpha$. By equations (\ref{isop-inequality}) and (\ref{volume of omega}) we conclude that $\mathcal H^2(\partial \Omega_{t_0}\cap int(\mathcal A)) \geq C$. Since this holds for $m/10$ pairwise disjoint $\mathcal A$ type regions, we have $\mathcal H^2(\Sigma_{t_0}) \geq C\cdot m/10$. Since the sweepout $\{\Sigma_t\}$ was arbitrary, this gives $W(S^3, g_m, \Lambda) \geq C\cdot m/10$, and this concludes our argument. \end{proof} \section{On the isoperimetric profiles of the metrics $g_m$} \label{isop-section} In this section, we discuss the fact that the isoperimetric profiles $\mathcal I_{m}$ of the metrics $g_m$ are not uniformly bounded. This part also relies on a combinatorial argument; here we use Corollary \ref{disjoint-dich-edges}. The idea is to decompose $S^3$ into $2^{m+1}-1$ pieces of essentially identical $g_m$ volumes, each of which is the union of one spherical region ($L_i$, $B$ or $A_j$) with a portion of its neighboring tubes. This decomposition of $S^3$ into balanced pieces is only possible because we chose the tube to be large enough to have $g_m$ volume greater than the spherical volume $\mu$ of the removed geodesic balls. This allows us to decompose $S^3$ into $2^{m+1}-2$ pieces with volume $vol_{g_0}(S^3) + \tau - 2\mu$ and one piece with volume $vol_{g_0}(S^3)$, and with the other desired properties, where $g_0$ is the standard round metric on $S^3$ and $\tau$ is the volume of the gluing tube. The piece with volume $vol_{g_0}(S^3)$ is the one related to the only node of degree $2$ in $T_m$. For each pair of neighboring spherical regions, the boundary of the associated balanced regions has exactly one component in the connecting tube. This component is a spherical slice which splits the volume $\tau$ of the tube as $\tau - \mu$ plus $\mu$, as depicted in Figure \ref{figure4}. \begin{figure} \caption{The balanced regions.} \label{figure4} \end{figure} There are three types of balanced regions, depending on the number of boundary components, all of which we denote by $\mathcal M$. In this part, we use the relative isoperimetric inequality in its full generality: there exists $C>0$ so that \begin{equation}\label{full.isop.ineq} \min \{\mathcal H^3(\Omega), \mathcal H^3(\mathcal M - \Omega)\}^{2/3} \leq C\cdot \mathcal H^2(\partial \Omega \cap int(\mathcal M)), \end{equation} for every $\Omega \subset \mathcal M$. Since there are only three types of $\mathcal M$ regions, we may suppose that the constant $C>0$ associated with $\mathcal M$ does not depend on its type. Suppose, by contradiction, that there exists $L>0$ such that $\mathcal I_m(v) \leq L$, for every $m \in \mathbb N$ and $v \in [0,vol_{g_m}(S^3)]$. For any $k \in \mathbb N$, let $m, b(m) \in \mathbb N$ be the integers provided by Corollary \ref{disjoint-dich-edges}.
Take $v(m) = b(m)\cdot (vol_{g_0}(S^3) + \tau - 2\mu)$ and let $\Omega \subset S^3$ be a subset with $vol_{g_m}(\Omega) = v(m)$ and such that $\mathcal H^2(\partial \Omega) \leq L$. Observe that \begin{equation*} \sum_{\mathcal M} \mathcal H^2(\partial \Omega \cap int(\mathcal M)) \leq \mathcal H^2(\partial \Omega) \leq L. \end{equation*} Using the relative isoperimetric inequality (\ref{full.isop.ineq}), we obtain: \begin{equation*} \sum_{\mathcal M} \min \{\mathcal H^3(\Omega\cap \mathcal M), \mathcal H^3(\mathcal M - \Omega)\}^{2/3} \leq C\cdot L. \end{equation*} Since $\sum_i a_i \leq \big(\sum_i a_i^{2/3}\big)^{3/2}$ for any nonnegative numbers $a_i$, this implies that \begin{equation}\label{eq-A1} \sum_{\mathcal M} \min \{\mathcal H^3(\Omega\cap \mathcal M), \mathcal H^3(\mathcal M - \Omega)\}\leq C_1, \end{equation} where $C_1 = (C\cdot L)^{3/2}$. Let $\mathcal M_1$ be the set of the $\mathcal M$-type regions for which $2\cdot \mathcal H^3(\Omega \cap \mathcal M) < \mathcal H^3 (\mathcal M)$, and let $\mathcal M_2$ be the set of those satisfying the opposite inequality. Equation (\ref{eq-A1}) implies \begin{equation} \sum_{\mathcal M \in \mathcal M_1} \mathcal H^3(\Omega\cap \mathcal M) + \sum_{\mathcal M \in \mathcal M_2} \mathcal H^3(\mathcal M - \Omega) \leq C_1. \end{equation} Using that $\mathcal H^3(\mathcal M - \Omega) = \mathcal H^3(\mathcal M) - \mathcal H^3(\Omega \cap \mathcal M)$ and the fact that \begin{equation} \sum_{\mathcal M \in \mathcal M_1} \mathcal H^3(\Omega\cap \mathcal M) + \sum_{\mathcal M \in \mathcal M_2} \mathcal H^3(\Omega\cap \mathcal M) = vol_{g_m}(\Omega) = v(m), \end{equation} we easily conclude that \begin{equation}\label{eq-A2} \bigg| v(m) - \sum_{\mathcal M \in \mathcal M_2} \mathcal H^3 (\mathcal M)\bigg| \leq C_1. \end{equation} By the choice of $v(m)$, equation (\ref{eq-A2}) implies that $|\# \mathcal M_2 - b(m)| \leq C_2$, where $C_2 = (C_1 + |\tau - 2 \mu|)/(vol_{g_0}(S^3) + \tau - 2\mu)$ is a uniform constant. Consider the $2$-coloring $\mathcal C$ of $T_m$ whose black nodes are those associated with the balanced regions in $\mathcal M_2$. Then, $\mathcal C$ has exactly $\# \mathcal M_2$ black nodes. By Corollary \ref{disjoint-dich-edges}, there are at least $(k - |\# \mathcal M_2 - b(m)|)/5$ pairwise disjoint pairs of neighboring nodes with different colors. By a reasoning similar to the one that we used at the end of Section \ref{width-section}, we have that \begin{equation} \mathcal H^2(\partial \Omega) \geq C_3 \cdot (k - |\# \mathcal M_2 - b(m)|)/5, \end{equation} for some $C_3>0$ which does not depend on $m$. Recalling that $\mathcal H^2(\partial \Omega) \leq L$ and $|\# \mathcal M_2 - b(m)| \leq C_2$, the above expression provides a uniform upper bound on $k$, which was arbitrary. This is a contradiction and we are done. \end{document}
\begin{document} \begin{center} {\bf{\LARGE{ Bellman Residual Orthogonalization \\ for Offline Reinforcement Learning}}} \vspace*{0.5in} \begin{tabular}{lcl} Andrea Zanette$^\star$ && Martin J. Wainwright$^{\star,\dagger}$ \\ \texttt{[email protected]} && \texttt{[email protected]} \end{tabular} \vspace*{0.10in} \begin{tabular}{c} Department of Electrical Engineering and Computer Sciences$^{\star}$ \\ Department of Statistics$^\dagger$ \\ UC Berkeley, Berkeley, CA \\ \\ Department of Electrical Engineering and Computer Sciences$^{\dagger}$ \\ Department of Mathematics$^\dagger$ \\ Massachusetts Institute of Technology, Cambridge, MA \end{tabular} \vspace*{0.5in} \begin{abstract} We propose and analyze a reinforcement learning principle that approximates the Bellman equations by enforcing their validity only along a user-defined space of test functions. Focusing on applications to model-free offline RL with function approximation, we exploit this principle to derive confidence intervals for off-policy evaluation, as well as to optimize over policies within a prescribed policy class. We prove an oracle inequality on our policy optimization procedure in terms of a trade-off between the value and uncertainty of an arbitrary comparator policy. Different choices of test function spaces allow us to tackle different problems within a common framework. We characterize the loss of efficiency in moving from on-policy to off-policy data using our procedures, and establish connections to concentrability coefficients studied in past work. We examine in depth the implementation of our methods with linear function approximation, and provide theoretical guarantees with polynomial-time implementations even when Bellman closure does not hold. \end{abstract} \end{center} \section{Introduction} Markov decision processes (MDPs) provide a general framework for optimal decision-making in sequential settings (e.g.,~\cite{puterman1994markov,Bertsekas_dyn1,Bertsekas_dyn2}). Reinforcement learning refers to a general class of procedures for estimating near-optimal policies based on data from an unknown MDP (e.g.,~\cite{bertsekas1996neuro,sutton2018reinforcement}). Different classes of problems can be distinguished depending on our access to the data-generating mechanism. Many modern applications of RL involve learning based on a pre-collected or offline dataset. Moreover, the state-action spaces are often sufficiently complex that it becomes necessary to implement function approximation. In this paper, we focus on model-free offline reinforcement learning (RL) with function approximation, where prior knowledge about the MDP is encoded via the value function. In this setting, we consider two fundamental problems: (1) offline policy evaluation---namely, the task of accurately predicting the value of a target policy; and (2) offline policy optimization, which is the task of finding a high-performance policy. There are various broad classes of approaches to off-policy evaluation, including importance sampling~\cite{precup2000eligibility,thomas2016data,jiang2016doubly,liu2018breaking}, as well as regression-based methods~\cite{lagoudakis2003least,munos2008finite,chen2019information}. Many methods for offline policy optimization build on these techniques, with a recent line of papers incorporating pessimism~\cite{jin2021pessimism,xie2021bellman,zanette2021provable}. We provide a more detailed summary of the literature in~\cref{sec:Literature}.
In contrast, this work investigates a model-free principle---distinct from importance sampling and regression-based methods---for learning from an offline dataset. It belongs to the class of weight learning algorithms, which leverage an auxiliary function class to encode either the marginalized importance weights of the target policy~\cite{liu2018breaking,xie2020Q} or estimates of the Bellman errors~\cite{antos2008learning,chen2019information,xie2020Q}. Some work has considered kernel classes~\cite{feng2020accountable} or other weight classes to construct off-policy estimators~\cite{uehara2020minimax} as well as confidence intervals at the population level~\cite{jiang2020minimax}. However, these works do not examine in depth the statistical aspects of the problem, nor elaborate upon the design of the weight function classes.\footnote{For instance, the paper~\cite{feng2020accountable} only shows validity of their intervals, not a performance bound; on the other hand, the paper~\cite{jiang2020minimax} gives analyses at the population level, and so does not address the alignment of weight functions with respect to the dataset in the construction of the empirical estimator, which we do via self-normalization and regularization. This precludes obtaining the same type of guarantees that we present here.} The last two considerations are essential to obtaining data-dependent procedures accompanied by rigorous guarantees, and to providing guidance on the choice of weight class, which are key contributions of this paper. For space reasons, we motivate our approach in the idealized case where the Bellman operator is known in~\cref{sec:appBRO}, and compare with the weight learning literature at the population level in~\cref{sec:WeightLearning}. Let us summarize our main contributions in the following three paragraphs. \paragraph{Conceptual contributions} Our paper makes two novel contributions of a conceptual nature: \begin{enumerate}[leftmargin=*] \item We propose a method, based on \emph{approximate empirical orthogonalization} of the Bellman residual along test functions, to construct confidence intervals and to perform policy optimization. \item We propose a sample-based approximation of this principle, based on \emph{self-normalization} and \emph{regularization}, and obtain general guarantees for parametric as well as non-parametric problems. \end{enumerate} The construction of the estimator, its statistical analysis, and the concrete consequences (described in the next paragraph) are the major distinctions with respect to past work on weight learning methods~\cite{uehara2020minimax,jiang2020minimax}. Our analysis highlights the statistical trade-offs in the choice of the test functions. (See~\cref{sec:WeightLearning} for comparison with past work at the population level.) \paragraph{Domain-specific results} In order to illustrate the broad effectiveness and applicability of our general method and analysis, we consider several domains of interest. We show how to recover various results from past work---and to obtain novel ones---by making appropriate choices of the test functions and invoking our main result. Among these consequences, we discuss the following: \begin{enumerate}[leftmargin=*] \item When marginalized importance weights are available, they can be used as a test class. In this case we recover results similar to those of the paper~\cite{xie2020Q}; however, here we only require concentrability with respect to a comparator policy instead of over all policies in the class.
\item When some knowledge of the Bellman error class is available, it can be used as a test class. Similar results have appeared previously either with stronger concentrability~\cite{chen2019information} or in the special case of Bellman closure~\cite{xie2021bellman}. \item We provide a test class that projects the Bellman residual along the error space of the $\Q$ class. The resulting procedure is an extension of the LSTD algorithm \cite{bradtke1996linear} to non-linear spaces, which makes it a natural approach if no domain-specific knowledge is available. A related result is the lower bound by \cite{foster2021offline}, which proves that without Bellman closure learning is hard even with small density ratios. In contrast, our work shows that learning is still possible even with large density ratios. \item Finally, our procedure inherits some form of ``multiple robustness''. For example, the two test classes corresponding to Bellman completeness and marginalized importance weights can be used together, and guarantees will be obtained if \emph{either} Bellman completeness holds or the importance weights are correct. We examine this issue in \cref{sec:MultipleRobustness}. \end{enumerate} \paragraph{Linear setting} We examine in depth an application to the linear setting, where we propose the first \emph{computationally tractable} policy optimization procedure \emph{without assuming Bellman completeness}. The closest result here is given in the paper~\cite{zanette2021provable}, which holds under Bellman closure. Our procedure can be thought of as making use of LSTD-type estimates to establish confidence intervals for the projected Bellman equations, and then using an iterative scheme for policy improvement. \section{Background and set-up} \label{sec:background} We begin with some notation used throughout the paper. For a given probability distribution $\Distribution$ over a space $\mathcal{X}$, we define the $L^2(\Distribution)$-inner product and semi-norm as $\innerprodweighted{f_1}{f_2}{\Distribution} = \ensuremath{\mathbb{E}}ecti{[f_1f_2]}{\Distribution}, $ and $ \norm{f_1}{\Distribution} = \sqrt{\innerprodweighted{f_1}{f_1}{\Distribution}}{}$. The constant function that returns one for every input is denoted by $\1$. We frequently use notation such as $c, c', \ensuremath{\tilde{c}}, c_1,c_2$ etc. to denote constants that can take on different values in different sections of the paper. \subsection{Markov decision processes and Bellman errors} We focus on infinite-horizon discounted Markov decision processes~\cite{puterman1994markov,bertsekas1996neuro,sutton2018reinforcement} with discount factor $\discount \in [0,1)$, state space $\StateSpace$, and an action set $ASpace$. For each state-action pair $\psa$, there is a reward distribution $R\psa$ supported in $[0,1]$ with mean $\reward\psa$, and a transition kernel $\Pro(\cdot \mid \state, \action)$. A (stationary) stochastic policy $\policyicy$ maps each state to a distribution over actions.
For a given policy, its $Q$-function is the discounted sum of future rewards obtained by starting from the pair $\psa$ and then following the policy $\policyicy$ in all future time steps: $Q^{\pi}\psa = \reward\psa + \sum_{\hstep = 1}^{\infty} \discount^{\hstep} \ensuremath{\mathbb{E}} [ \reward({\SState}_\hstep, {\AAction}_\hstep) \mid (\SState_0, \AAction_0) = \psa], $ where the expectation is taken over trajectories with $ \SState_{\hstep+1} \sim \Pro(\cdot \mid \SState_\hstep, \AAction_\hstep) \quad \mbox{for $\hstep = 0, 1, 2, \ldots$,} \quad \mbox{and} \quad \AAction_{\hstep} \sim \policy(\cdot \mid \SState_\hstep) \quad \mbox{for $\hstep = 1, 2, \ldots$.} $ We also use $\Qpi{\policyicy}(\state, \policyicy) = \ensuremath{\mathbb{E}}ecti{\Qpi{\policyicy}(\state, \AAction)}{\AAction \sim \policyicy(\cdot \mid \state)}$ and define the \emph{Bellman evaluation operator} as $ (\BellmanEvaluation{\policyicy}Q)\psa = \reward\psa + \ensuremath{\mathbb{E}}ecti{Q(\SState^+ ,\policyicy)} {\SState^+ \sim \Pro(\cdot \mid \state, \action)}. $ The value function satisfies $ \Vpi{\policyicy}(\state) = \Qpi{\policyicy}(\state,\policyicy). $ In our analysis, we assume that policies have action-value functions that satisfy the uniform bound $\sup_{\psa} \abs{\Qpi{\policyicy}\psa} \leq 1$. We are also interested in approximating optimal policies, whose value and action-value functions are defined as $\Vstar(\state) = V^{\pistar}(\state) = \sup_{\policyicy} \Vpi{\policyicy}(\state)$ and $\Qstar\psa = Q^{\pistar}\psa = \sup_{\policyicy} Q^{\policyicy}\psa$. We assume that the starting state $\SState_0$ is drawn according to $\nu_{\text{start}}$ and study $ \Vpi{\policyicy} = \ensuremath{\mathbb{E}}_{\SState_0 \sim \nu_{\text{start}}}[\Vpi{\policyicy}(\SState_0)] $. We define the \emph{discounted occupancy measure} associated with a policy $\policyicy$ as the distribution over the state-action space $ \DistributionOfPolicy{\policyicy}\psa = (1 - \discount ) \sum_{\hstep=0}^\infty \discount^\hstep \Pro_\hstep[ (\SState_\hstep,\AAction_\hstep) = \psa ]. $ We adopt the shorthand $\ensuremath{\mathbb{E}}_\policyicy$ for expectations over $\DistributionOfPolicy{\policyicy}$. For any functions $f, g: \StateSpace \times ASpace \rightarrow \R$, we make frequent use of the shorthands $ \ensuremath{\mathbb{E}}_\policyicy[f] \defeq \ensuremath{\mathbb{E}}_{(\SState, \AAction) \sim \DistributionOfPolicy{\policyicy}}[f(\SState, \AAction)], \quad \mbox{and} \quad \inprod{f}{g}_\policyicy \defeq \ensuremath{\mathbb{E}}_{(\SState, \AAction) \sim \DistributionOfPolicy{\policyicy}} \big[f(\SState, \AAction) \, g(\SState, \AAction) \big]. $ Note moreover that we have $\inprod{\1}{f}_\policyicy = \ensuremath{\mathbb{E}}_\policyicy[f]$, where $\1$ denotes the constant function equal to one. For a given $Q$-function and policy $\policyicy$, let us define the \emph{temporal difference error} (or TD error) associated with the sample $\sarsizNp{} = \sars{}$ and the \emph{Bellman error} at $\psa$: \begin{align*} (\TDError{Q}{\policyicy}{})\sarsiz{} \defeq Q\psa - \reward - \gamma Q(\successorstate,\policyicy), \qquad (\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa \defeq Q\psa - \reward\psa - \gamma \E_{\successorstate\sim\Pro\psa}Q(\successorstate,\policyicy). \end{align*} The TD error is a random variable, a function of the sample $\sarsizNp{}$, while the Bellman error is its conditional expectation with respect to the immediate reward and successor state at $\psa$.
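To make these two error notions concrete, here is a minimal NumPy sketch of how one might compute them in a toy tabular setting; the array-based representation and all function names below are hypothetical illustrations, not part of the paper's framework.
\begin{verbatim}
import numpy as np

# Hypothetical tabular encoding: Q and pi are (S, A) arrays (rows of pi are
# action distributions), P is an (S, A, S) transition kernel, r an (S, A)
# mean-reward array, and gamma is the discount factor.

def td_error(Q, pi, s, a, r_obs, s_next, gamma):
    """TD error of Q at one observed transition (s, a, r_obs, s_next)."""
    q_next = pi[s_next] @ Q[s_next]        # Q(s', pi) = E_{a'~pi(.|s')} Q(s', a')
    return Q[s, a] - r_obs - gamma * q_next

def bellman_error(Q, pi, P, r, s, a, gamma):
    """Bellman error of Q at (s, a): the conditional expectation of the TD error."""
    q_next = np.einsum('k,ka,ka->', P[s, a], pi, Q)   # E_{s'~P(.|s,a)} Q(s', pi)
    return Q[s, a] - r[s, a] - gamma * q_next
\end{verbatim}
Averaging the TD error over independent samples at a fixed pair recovers the Bellman error, which is the property exploited by the empirical construction in the next section.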
Many of our bounds involve the quantity $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \ensuremath{\mathbb{E}}ecti{\big[\ensuremath{\mathcal{B}}or{Q}{\policyicy}{} \PSA \big]} {\PSA \sim \DistributionOfPolicy{\policyicy}}.$ Finally, we introduce the data generation mechanism. A more general sampling model is described in \cref{sec:GeneralGuarantees}. \begin{assumption}[I.i.d. dataset] \label{asm:IIDDataset} An i.i.d. dataset is a collection $\Dataset = \{ \sarsi{i} \}_{i=1}^n$ such that for each $i = 1, \ldots, n$ we have $(\state_i, \action_i, o_i) \sim \mu$ and conditioned on $(\state_i,\action_i,o_i )$, we observe a noisy reward $\reward_i = \reward(\state_i,\action_i) + \eta_i$ with $\E[\eta_i \mid \mathcal F_{i} ] = 0, \; |\reward_i| \leq 1$ and the next state $\successorstate_i\sim \Pro(\state_i,\action_i)$. \end{assumption} \subsection{Function Spaces and Weak Representation} Our methods involve three different types of function spaces, corresponding to policies, action-value functions, and test functions. A test function $\TestFunction{}$ is a mapping $(\state,\action,\identifier) \mapsto \TestFunction{}(\state,\action,\identifier)$ such that \mbox{$\sup_{(\state,\action,\identifier)}\abs{\TestFunction{}(\state,\action,\identifier)} \leq 1$}, where $o$ is an optional identifier containing side information. Our methodology involves the following three function classes: \begin{carlist} \item a \emph{policy class} $\PolicyClass$ that contains all policies $\policyicy$ of interest (for evaluation or optimization); \item for each $\policyicy$, the \emph{predictor class} $\Qclass{\policyicy}$ of action-value functions $Q$ that we permit; and \item for each $\policyicy$, the \emph{test function class} $\TestFunctionClass{\policyicy}$ that we use to enforce the Bellman residual constraints. \end{carlist} We use the shorthands $\Qclass{} = \cup_{\policyicy \in \PolicyClass}\Qclass{\policyicy}$ and \mbox{$\TestFunctionClass{} = \cup_{\policyicy \in \PolicyClass} \TestFunctionClass{\policyicy}$}. We assume \emph{weak realizability}: \begin{assumption}[Weak Realizability] \label{asm:WeakRealizability} For a given policy $\policyicy$, the predictor class $\Qclass{\policyicy}$ is weakly realizable with respect to the test space $\TestFunctionClass{\policyicy}$ and the measure $\mu$ if there exists a predictor $\QpiWeak{\policyicy} \in \Qclass{\policyicy}$ such that \begin{align} \label{EqnWeakRealizable} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} {\mu} = 0 \; \text{for all } \TestFunction{} \in \TestFunctionClass{\policyicy} \qquad \text{and} \qquad \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0. \end{align} \end{assumption} The first condition requires the predictor to satisfy the Bellman equations \emph{on average}. 
The second condition amounts to requiring that the predictor returns the value of $\policyicy$ at the start distribution: using \cref{lem:Simulation} stated in the sequel, we have \begin{align*} \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} = \ensuremath{\mathbb{E}}ecti{[\QpiWeak{\policyicy}-\Qpi{\policyicy}](S ,\policyicy)}{S \sim \nu_{\text{start}}} = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \frac{1}{1-\discount} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0. \end{align*} This weak notion should be contrasted with \emph{strong realizability}, which requires a function $\Qpi{\policyicy} \in \Qclass{\policyicy}$ that satisfies the Bellman equation in all state-action pairs. A stronger assumption that we sometimes use is Bellman closure, which requires that $ \BellmanEvaluation{\policyicy}(Q) \in \Qclass{\policyicy} \; \mbox{for all $Q \in \Qclass{\policyicy}$.} $ The corresponding `weak' version is given in \cref{sec:appWeakClosure}. \section{Policy Estimates via the Weak Bellman Equations} \label{sec:Algorithms} In this section, we introduce our high-level approach, first at the population level and then in terms of regularized/normalized sample-based approximations. \subsection{Weak Bellman equations, empirical approximations and confidence intervals} We begin by noting that the predictor $\Qpi{\policyicy}$ satisfies the Bellman equations everywhere in the state-action space, i.e., $\ensuremath{\mathcal{B}}or{\Qpi{\policyicy}}{\policyicy}{} = 0$. However, if our dataset is ``small'' relative to the complexity of functions on the state-action space, then it is unrealistic to enforce such a stringent condition. Instead, the idea is to control the Bellman error in a weighted-average sense, where the weights are given by a set of \emph{test functions}. At the idealized population level (corresponding to an infinite sample size), we consider predictors that satisfy the conditions \begin{align} \label{eqn:PoP} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0 \qquad \text{for all} \; \TestFunction{} \in \TestFunctionClass{\policyicy}, \end{align} where $\TestFunctionClass{\policyicy}$ is a user-defined set of test functions. The two main challenges here are how to use data to enforce an approximate version of such constraints (along with rigorous data-dependent guarantees), and how to design the test function space. We begin with the former challenge. \paragraph{Construction of the empirical set} Given a dataset $\Dataset = \{(\state_i, \action_i, \reward_i, \successorstate_i, o_i) \}_{i=1}^n$, we can approximate the Bellman errors by a linear combination of the temporal difference errors: \begin{align} \label{eqn:WeightedTemporaResidual} \int \TestFunction{}\psa \underbrace{[Q\psa - (\BellmanEvaluation{\policyicy} Q)\psa]}_{= \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}\psa } d\mu \approx \frac{1}{n} \sum_{i=1}^n \TestFunction{}(\state_i, \action_i) \underbrace{ [Q(\state_i, \action_i) - \reward_i - \discount Q(\successorstate_i,\policyicy) ]}_{= \TDError{Q}{\policyicy}{}\sarsi{i} }. \end{align} Note that the approximation~\eqref{eqn:WeightedTemporaResidual} corresponds to a weighted linear combination of temporal differences.
Written more compactly in inner product notation, equation~\eqref{eqn:WeightedTemporaResidual} reads $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \approx \innerprodweighted{\TestFunction{}}{\TDError{Q}{\policyicy}{}}{n}$, where $ \innerprodweighted{f}{g}{n} = \frac{1}{n}\sum_{\sarsi{} \in \Dataset} (fg)\sarsi{}. $ In general, the action-value function $\Qpi{\policyicy}$ does not satisfy $\inprod{\TestFunction{}}{ \TDError{\Qpi{\policyicy}}{\policyicy}{}}_n = 0$ because the empirical approximation~\eqref{eqn:WeightedTemporaResidual} involves sampling error. For these reasons, in order to (approximately) identify $\Qpi{\policyicy}$, we impose only inequalities. Given a class of test functions $\TestFunctionClass{\policyicy}$, a radius parameter $\ensuremath{\rho} \geq 0$ and a regularization parameter $\ensuremath{\lambda} \geq 0$, we define the set \begin{align} \label{eqn:EmpiricalBellmanGalerkinEquations} \SuperEmpiricalFeasible \defeq \left \{ Q \in \Qclass{\policyicy} \quad \text{such that} \quad \frac{ \abs{\innerprodweighted{\TestFunction{}}{\TDError{Q}{\pi}{}}{n}} } { \TestNormaRegularizerEmp{} } \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$}\right \}. \end{align} When the choices of $(\ensuremath{\rho}, \ensuremath{\lambda})$ are clear from the context, we adopt the shorthand $\EmpiricalFeasibleSet{\policyicy}( \TestFunctionClass{\policyicy})$, or $\EmpiricalFeasibleSet{\policyicy}$ when the function class $\TestFunctionClass{\policyicy}$ is also clear. If $\TestFunctionClass{\policyicy}$ and $\Qclass{\policyicy}$ have finite cardinality, $\ensuremath{\rho} \approx \ln \big( |\TestFunctionClass{\policyicy}|\, |\Qclass{\policyicy}| \big) + \ln (1/\FailureProbability)$, where $\FailureProbability$ is a prescribed failure probability. Our definition of the empirical constraint set~\eqref{eqn:EmpiricalBellmanGalerkinEquations} has two key components: first, the division by $\TestNormaRegularizerEmp{}$ corresponds to a form of \emph{self-normalization}, whereas the addition of $\lambda$ corresponds to a form of \emph{regularization}. Self-normalization is needed so that the constraints remain suitably scale-invariant. More importantly---in conjunction with the regularization---it ensures that test functions that have poor coverage under the dataset do not have major effects on the solution. In particular, the empirical norm $\EmpNorm{f}^2$ in the self-normalization measures how well the given test function is covered by the dataset. Any test function with poor coverage (i.e., $\EmpNorm{f}^2 \approx 0$) will not yield useful information, and the regularization counteracts its influence. In our guarantees, the choices of $\lambda$ and $\rho$ are critical; as shown in our theory, we typically have $\lambda = \rho/n$, where $\rho$ scales with the metric entropy of the predictor, test and policy spaces. Disregarding $\rho$, the right-hand side of the constraint decays as $1/\sqrt{n}$, so that the constraints are enforced more tightly as the sample size increases.
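The following minimal NumPy sketch illustrates the resulting membership test for the empirical feasible set~\eqref{eqn:EmpiricalBellmanGalerkinEquations}. The interface is purely illustrative, and we assume (consistently with the discussion above, though the precise normalizer is defined by a macro we do not expand here) that the denominator takes the form $\sqrt{\EmpNorm{f}^2 + \ensuremath{\lambda}}$.
\begin{verbatim}
import numpy as np

def in_empirical_feasible_set(td_errors, test_values, rho, lam):
    """Self-normalized, regularized constraint check for a candidate Q.

    td_errors:   (n,) array, TD errors of Q on the n samples of the dataset
    test_values: (m, n) array, values f(s_i, a_i) of the m test functions
    rho, lam:    radius and regularization parameters
    """
    n = td_errors.shape[0]
    inner = test_values @ td_errors / n            # <f, delta_Q>_n for each f
    norms_sq = np.mean(test_values ** 2, axis=1)   # ||f||_n^2 for each f
    lhs = np.abs(inner) / np.sqrt(norms_sq + lam)  # self-normalization + regularization
    return bool(np.all(lhs <= np.sqrt(rho / n)))
\end{verbatim}
A test function with $\EmpNorm{f}^2 \approx 0$ contributes a denominator of roughly $\sqrt{\ensuremath{\lambda}}$, so its constraint is effectively inactive, which is exactly the behavior described above.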
\paragraph{Confidence bounds and policy optimization} First, for any fixed policy $\policyicy$, we can use the feasibility set~\eqref{eqn:EmpiricalBellmanGalerkinEquations} to compute the lower and upper estimates \begin{align} \label{eqn:ConfidenceIntervalEmpirical} \VminEmp{\policyicy} \defeq \min_{Q \in \SuperEmpiricalFeasible} \ensuremath{\mathbb{E}}ecti{\big[Q(S ,\policyicy)\big]}{S \sim \nu_{\text{start}}}, \; \text{and} \quad \VmaxEmp{\policyicy} \defeq \max_{Q \in \SuperEmpiricalFeasible} \ensuremath{\mathbb{E}}ecti{\big[Q(S ,\policyicy)\big]}{S \sim \nu_{\text{start}}}, \end{align} corresponding to estimates of the minimum and maximum value that the policy $\policyicy$ can take at the initial distribution. The family of lower estimates can be used to perform policy optimization over the class $\PolicyClass$, in particular by solving the \emph{max-min} problem \begin{align} \label{eqn:MaxMinEmpirical} \max_{\policy \in \PolicyClass} \Big [\min_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \Big], \qquad \text{or equivalently} \qquad \max_{\policy \in \PolicyClass}\VminEmp{\policyicy}. \end{align} \paragraph{Form of guarantees} Let us now specify and discuss the types of guarantees that we establish for our estimators~\eqref{eqn:ConfidenceIntervalEmpirical} and~\eqref{eqn:MaxMinEmpirical}. All of our theoretical guarantees involve a $\mu$-based counterpart $\PopulationFeasibleSet{\policyicy}$ of the data-dependent set $\EmpiricalFeasibleSet{\policyicy}$. More precisely, we define the population set \begin{align} \label{eqn:ApproximateBellmanGalerkingEquations} \SuperPopulationFeasible \defeq \biggl\{ Q \in \Qclass{\policyicy} \quad \text{such that} \quad \frac{ \abs{\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\pi}{}}{\mu}} } { \TestNormaRegularizerPop{} } \leq \sqrt{\frac{4 \ensuremath{\rho}}{n}} \qquad \mbox{for all $\TestFunction{} \in \TestFunctionClass{}$} \biggr\}, \end{align} where $\inprod{f}{g}_\mu \defeq \int f \psa g \psa d \mu$ is the inner product induced by a distribution\footnote{See Section~\ref{SecDataGen} for a precise definition of the relevant $\mu$ for a fairly general sampling model.} $\mu$ over $\psa$. As before, we use the shorthand notation $\PopulationFeasibleSet{\policyicy}$ when the underlying arguments are clear from context. Moreover, in the sequel, we generally ignore the constant $4$ in the definition~\eqref{eqn:ApproximateBellmanGalerkingEquations} by assuming that $\ensuremath{\rho}$ is rescaled appropriately---e.g., that we use a factor of $\frac{1}{4}$ in defining the empirical set. It should be noted that in contrast to the set $\EmpiricalFeasibleSet{\policyicy}$, the set $\PopulationFeasibleSet{\policyicy}$ is \emph{non-random} and is defined in terms of the distribution $\mu$ and the input space $\InputFunctionalSpace$. It relaxes the orthogonality constraints in the weak Bellman formulation~\eqref{eqn:PoP}. Our guarantees for off-policy confidence intervals take the following form: \begin{subequations} \label{EqnPolEval} \begin{align} \label{EqnCoverage} \mbox{\underline{Coverage guarantee:}} & \qquad \big[ \VminEmp{\policyicy}, \VmaxEmp{\policyicy} \big] \ni \Vpi{\policyicy}.
\\ \label{EqnWidthBound} \mbox{\underline{Width bound:}} & \qquad \max \Big \{ |\VminEmp{\policyicy} - \Vpi{\policyicy} |, \; |\VmaxEmp{\policyicy} - \Vpi{\policyicy} | \Big \} \leq \frac{1}{1-\discount} \max_{Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{\policyicy})} |\PolComplexGen{\policyicy}|. \end{align} \end{subequations} Turning to policy optimization, let $\widetilde \pi$ be a solution to the max-min criterion~\eqref{eqn:MaxMinEmpirical}. Then we prove a result of the following type: \begin{align} \label{EqnOracle} \mbox{\underline{Oracle inequality:}} \qquad \Vpi{\widetilde \pi} \geq \max_{\policyicy \in \PolicyClass} \Big \{ \underbrace{ \vphantom{ \frac{1}{1 - \discount} \max_{Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})} |\PolComplexGen{\policyicy}| } \Vpi{\policyicy}}_{\mbox{\tiny{Value}}} - \underbrace{\frac{1}{1 - \discount} \max_{Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})} |\PolComplexGen{\policyicy}|}_{\mbox{\tiny{Evaluation uncertainty}}} \Big \}. \end{align} Note that this result guarantees that the estimator competes against an oracle that can search over all policies, and select one based on the optimal trade-off between its value and evaluation uncertainty. \subsection{High-probability guarantees} In this section, we present some high-probability guarantees. To facilitate understanding under space constraints, we state here results under two simplifying assumptions: (a) the dataset originates from a fixed distribution, and (b) the classes $\PolicyClass{}, \TestFunctionClass{}$ and $\Qclass{}$ have finite cardinality. We emphasize that~\cref{sec:GeneralGuarantees} provides a far more general version of this result, with an extremely flexible sampling model, and involving metric entropies of parametric or non-parametric function classes. \begin{theorem}[Guarantees for finite classes] \label{thm:NewPolicyEvaluationFiniteClasses} Consider a triple $\InputFunctionalSpace$ that is weakly Bellman realizable (\cref{asm:WeakRealizability}); an i.i.d. dataset (\Cref{asm:IIDDataset}); and the choices $\ensuremath{\rho} = c \big \{ \log (|\TestFunctionClass{}| |\PolicyClass | |\Qclass{}|) + \log (1/\FailureProbability) \big \}$ and $\ensuremath{\lambda} = c' \ensuremath{\rho}/n$ for some constants $c, c'$. Then, with probability at least $1 - \delta$: \begin{carlist} \item \underline{Policy evaluation:} For any $\pi \in \PolicyClass$, the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ specify a confidence interval satisfying the coverage~\eqref{EqnCoverage} and width bounds~\eqref{EqnWidthBound}. \item \underline{Policy optimization:} Any max-min policy $\widetilde \pi$ solving~\eqref{eqn:MaxMinEmpirical} satisfies the oracle inequality~\eqref{EqnOracle}. \end{carlist} \end{theorem} \section{Concentrability Coefficients and Test Spaces} \label{sec:Applications} In this section, we develop some connections to concentrability coefficients that have been used in past work, and discuss various choices of the test class. Like the predictor class $\Qclass{\policyicy}$, the test class $\TestFunctionClass{\policyicy}$ encodes domain knowledge, and thus its choice is delicate. Different from the predictor class, the test class does not require a `realizability' condition.
As a general principle, the test functions should be chosen to be as correlated as possible with the Bellman residual, so as to enable rapid progress towards the solution; at the same time, they should be sufficiently ``aligned'' with the dataset, meaning that $\norm{\TestFunction{}}{\mu}$ or its empirical counterpart $\norm{\TestFunction{}}{n}$ should be large. Given a test class, each additional test function posits a new constraint which helps identify the correct predictor; at the same time, it increases the metric entropy (parameter $\ensuremath{\rho}$), which makes each individual constraint looser. In summary, there are trade-offs to be made in the selection of the test class $\TestFunctionClass{}$, much as for $\Qclass{}$. In order to assess the statistical cost that we pay for off-policy data, it is natural to define the \emph{off-policy cost coefficient} (OPC) as \begin{align} \label{EqnConcSimple} K^\policy(\PopulationFeasibleSet{\policyicy}, \ensuremath{\rho}, \ensuremath{\lambda}) & \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} = \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{} }{\policyicy} ^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}}. \end{align} With this notation, our off-policy width bound~\eqref{EqnWidthBound} can be re-expressed as \begin{subequations} \begin{align} \label{eqn:ConcreteCI} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}}, \end{align} while the oracle inequality~\eqref{EqnOracle} for policy optimization can be re-expressed in the form \begin{align} \label{EqnConcSimpleBound} \Vpi{\widetilde \pi} \geq \max_{\policyicy \in \PolicyClass} \Big\{ \Vpi{\policyicy} - \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}} \Big \}. \end{align} \end{subequations} Since $\ensuremath{\lambda} \sim \ensuremath{\rho}/n$, the factor $\sqrt{1 + \ensuremath{\lambda}}$ can be bounded by a constant in the typical case $n \geq \ensuremath{\rho}$. We now offer concrete examples of the OPC, while deferring further examples to \cref{sec:appConc}. \subsection{Likelihood ratios} Our broader goal is to obtain small Bellman error along the distribution induced by $\policyicy$. Assume that one constructs a test function class $\TestFunctionClass{\policyicy}$ of possible likelihood ratios. \begin{proposition}[Likelihood ratio bounds] \label{prop:LikeRatio} Assume that for some constant $\scaling{\policyicy}$, the test function defined as $\TestFunction{}^*\psa = \frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa}$ belongs to $\TestFunctionClass{\policyicy}$ and satisfies $\norm{\TestFunction{}^*}{\infty} \leq 1$. Then the {OPC } coefficient satisfies \begin{align} \label{EqnLikeRatioBound} \ConcentrabilityGeneric{\policyicy} \stackrel{(i)}{\leq} \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}} \; \stackrel{(ii)}{\leq} \frac{\scaling{\policyicy} \big(1 + \scaling{\policyicy} \ensuremath{\lambda} \big)}{1 + \ensuremath{\lambda}}. \end{align} \end{proposition} Here $\scaling{\policyicy}$ is a scaling parameter that ensures $\norm{\TestFunction{}^*}{\infty} \leq 1$.
Concretely, one can take $\scaling{\policyicy} = \sup_{(\state,\action)} \frac{\DistributionOfPolicy{\policyicy}(\state,\action)}{\mu(\state,\action)}$. The proof is in \cref{sec:LikeRatio}. Since $\ensuremath{\lambda} = \ensuremath{\lambda}_n \rightarrow 0$ as $n$ increases, the {OPC } coefficient is bounded by a multiple of the expected ratio $\ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big]}{\policyicy}{}$. Up to an additive offset, this expectation is equivalent to the $\chi^2$-divergence between the policy-induced occupancy measure $\DistributionOfPolicy{\policyicy}$ and the data-generating distribution $\mu$. The concentrability coefficient can be plugged back into \cref{eqn:ConcreteCI,EqnConcSimpleBound} to obtain a concrete policy optimization bound. In this case, we recover a result similar to \cite{xie2020Q}, but with a much milder concentrability coefficient that involves only the chosen comparator policy. \subsection{The error test space} \label{sec:ErrorTestSpace} We now turn to the discussion of a choice for the test space that extends the LSTD algorithm to non-linear spaces. A simplification to the linear setting is presented later in \cref{sec:Linear}. As is well known, the LSTD algorithm \cite{bradtke1996linear} can be seen as minimizing the Bellman error projected onto the linear prediction space $Q$. Define the transition operator $ (\TransitionOperator{\policyicy}Q)\psa = \ensuremath{\mathbb{E}}ecti{Q(\successorstate,\policyicy ) } {\successorstate \sim \Pro\psa} $, and the prediction error $QErr = Q - \QpiWeak{\policyicy}$, where $\QpiWeak{\policyicy}$ is a $Q$-function from the definition of weak realizability. The Bellman error can be re-written as $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = (\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr $. When realizability holds, in the linear setting and at the population level, the LSTD solution seeks to satisfy the projected Bellman equations \begin{align} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} = 0, \quad \text{for all $\TestFunction{} \in \QclassErrCentered{\policyicy}$}. \end{align} In the linear case, $\QclassErrCentered{\policyicy} $ is the class of linear functions $\Qclass{\policyicy}$ used as predictors; when $\Qclass{\policyicy}$ is non-linear, we can extend the LSTD method by using the (nonlinear) error test space $\TestFunctionClass{\policyicy} = \QclassErrCentered{\policyicy} = \{Q - \QpiWeak{\policyicy} \mid Q \in \Qclass{\policyicy} \}$. Since $\QclassErrCentered{\policyicy}$ is unknown (as it depends on the weak solution $\QpiWeak{\policyicy}$), we choose instead the larger class \begin{align*} \QclassErr{\policyicy} = \{ Q - Q' \mid Q,Q' \in \Qclass{\policyicy} \}, \end{align*} which contains $\QclassErrCentered{\policyicy}$. The resulting approach can be seen as performing a projection of the Bellman error $\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$ onto the error space $\QclassErrCentered{\policyicy}$, much like LSTD does in the linear setting. However, different from LSTD, our procedure returns confidence intervals as opposed to a point estimator.
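As a small illustration (hypothetical code, with predictors represented simply as arrays of values), the enlarged error class $\QclassErr{\policyicy}$ can be formed directly from a finite predictor class; a rescaling may be needed if one insists on the sup-norm bound required of test functions.
\begin{verbatim}
import numpy as np

def error_test_class(Q_class):
    """Build the difference class {Q - Q' : Q, Q' in Q_class} from a finite
    predictor class, each predictor stored as an array of Q-values."""
    return [Q - Qp for Q in Q_class for Qp in Q_class]

# Toy example: two tabular predictors on a 3-state, 2-action space.
Q_class = [np.zeros((3, 2)), 0.5 * np.ones((3, 2))]
tests = error_test_class(Q_class)   # 4 differences, including the zero function
\end{verbatim}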
This choice of the test space is related to the Bubnov-Galerkin method~\cite{repin2017one} for linear spaces; it selects the test space $\TestFunctionClass{\policyicy}$ to be identical to the trial space $\QclassErrCentered{\policyicy}$ that contains all possible solution errors. \begin{lemma}[OPC coefficient from prediction error] \label{lem:PredictionError} For any test function class $\TestFunctionClass{\policyicy} \supseteq \QclassErr{\policyicy}$, we have \begin{align} \label{EqnPredErrorBound} K^\policy & \leq \max_{Q \in \Qclass{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2 } { \innerprodweighted{QErr} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu}^2 } \big \} = \max_{QErr \in \QclassErrCentered{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\mu}^2 } \big \}. \end{align} \end{lemma} The above coefficient measures the ratio between the Bellman error along the distribution of the target policy $\policyicy$ and that projected onto the error space $\QclassErrCentered{\policyicy}$ defined by $\Qclass{\policyicy}$. It is a concentrability coefficient that \emph{always} applies, as the choice of the test space does not require domain knowledge. See~\cref{sec:PredictionError} for the proof, and \cref{sec:appBubnov} for further comments and insights, as well as a simplification in the special case of Bellman closure. \subsection{The Bellman test space} \label{sec:DomainKnowledge} In the prior section, we controlled the projected Bellman error. Another longstanding approach in reinforcement learning is to control the Bellman error itself, for example by minimizing the squared Bellman residual. In general, this cannot be done if only an offline dataset is available due to the well-known \emph{double sampling} issue. However, in some cases we can use a helper class to try to capture the Bellman error. Such a class needs to be a superset of the class of \emph{Bellman test functions} given by \begin{align} \label{EqnBellmanTest} \TestFunctionClassBubnov{\policyicy} & \defeq \{ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} \mid Q \in \Qclass{\policyicy} \}. \end{align} Any test class that contains the above allows us to control the Bellman residual, as we show next.
\begin{lemma}[Bellman Test Functions] \label{lem:BellmanTestFunctions} For any test function class $\TestFunctionClass{\policyicy}$ that contains $\TestFunctionClassBubnov{\policyicy}$, we have \begin{subequations} \begin{align} \label{EqnBellBound} \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \leq c_1 \sqrt{\frac{\ensuremath{\rho}}{n}} \qquad \mbox{for any $Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{\policyicy})$.} \end{align} Moreover, the {off-policy cost coefficient} is upper bounded as \begin{align} \label{EqnBellBoundConc} K^\policy & \stackrel{(i)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(ii)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(iii)}{\leq} c_1 \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa} {\mu\psa}. \end{align} \end{subequations} \end{lemma} \noindent See~\cref{sec:BellmanTestFunctions} for the proof of this claim. Consequently, whenever the test class includes the Bellman test functions, the {off-policy cost coefficient} is at most the ratio between the squared Bellman residuals along the target distribution and the data-generating distribution. If Bellman closure holds, then the prediction error space $\QclassErr{\policyicy}$ introduced in \cref{sec:ErrorTestSpace} contains the Bellman test functions: for $Q \in \Qclass{\policyicy}$, we can write $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q \in \QclassErr{\policyicy} $. This fact allows us to recover a result in the recent paper~\cite{xie2021bellman} in the special case of Bellman closure, although the approach presented here is more general. \subsection{Combining test spaces} \label{sec:MultipleRobustness} Often, it is natural to construct a test space that is a union of several simpler classes. A simple but valuable observation is that the resulting procedure inherits the best of the OPC coefficients. Suppose that we are given a collection $\{ \TestFunctionClass{\policyicy}_m \}_{m=1}^M$ of $M$ different test function classes, and define the union $\TestFunctionClass{\policyicy} = \bigcup_{m=1}^M \TestFunctionClass{\policyicy}_m$. For each $m = 1, \ldots, M$, let $K^\policy_m$ be the OPC coefficient defined by the function class $\TestFunctionClass{\policyicy}_m$ and radius $\ensuremath{\rho}$, and let $K^\policy(\TestFunctionClass{})$ be the OPC coefficient associated with the full class. Then we have the following guarantee: \begin{lemma}[Multiple test classes] \label{prop:MultipleRobustness} $ \label{EqnMinBound} K^\policy(\TestFunctionClass{}) \leq \min_{m = 1, \ldots, M} K^\policy_m. $ \end{lemma} \noindent This guarantee is a straightforward consequence of our construction of the feasibility sets: in particular, we have $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}) = \cap_{m =1}^M \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}_m)$, and consequently, by the variational definition of the {off-policy cost coefficient} $K^\policy(\TestFunctionClass{})$ as an optimization over $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})$, the bound~\eqref{EqnMinBound} follows. In words, when multiple test spaces are combined, our algorithms inherit the best (smallest) OPC coefficient over all individual test spaces.
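The argument behind this lemma can be phrased operationally as follows (a trivial sketch in hypothetical Python): a predictor is feasible for the union of test classes exactly when it is feasible for every individual class, so the population feasible set for the union is the intersection of the individual ones.
\begin{verbatim}
def feasible(Q, test_class, satisfies):
    """Feasibility of Q w.r.t. a test class: every constraint must hold."""
    return all(satisfies(Q, f) for f in test_class)

def feasible_for_union(Q, test_classes, satisfies):
    union = [f for tc in test_classes for f in tc]
    # Feasibility for the union coincides with joint feasibility for each class.
    assert feasible(Q, union, satisfies) == all(
        feasible(Q, tc, satisfies) for tc in test_classes)
    return feasible(Q, union, satisfies)
\end{verbatim}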
While this behavior is attractive, one must note that there is a statistical cost to using a union of test spaces: the choice of $\ensuremath{\rho}$ scales as a function of $\TestFunctionClass{}$ via its metric entropy. This increase in $\ensuremath{\rho}$ must be balanced with the benefits of using multiple test spaces.\footnote{For space reasons, we defer to \cref{sec:IS2BC} an application in which we construct a test function space as a union of subclasses, and thereby obtain a method that automatically leverages Bellman closure when it holds, falls back to importance sampling if closure fails, and falls back to a worst-case bound in general.} \section{Linear Setting} \label{sec:Linear} In this section, we turn to a detailed analysis of our estimators using function classes that are linear in a feature map. Let $\phi: \StateSpace \times ASpace \rightarrow \R^\dim$ be a given feature map, and consider linear expansions $g_{\CriticPar{}} \psa \defeq \inprod{\CriticPar{}}{\phi \psa} \; = \; \sum_{j=1}^d \CriticPar{j} \phi_j \psa$. The class of \emph{linear functions} takes the form \begin{align} \label{EqnLinClass} \ensuremath{\mathcal{L}} & \defeq \{ \psa \mapsto g_{\CriticPar{}} \psa \mid \CriticPar{} \in \R^{\dim}, \; \norm{\CriticPar{}}{2} \leq 1 \}. \end{align} Throughout our analysis, we assume that $\norm{\phi(\state, \action)}{2} \leq 1$ for all state-action pairs. Following the approach in \cref{sec:ErrorTestSpace}, which is based on the LSTD method, we should choose the test function class $\TestFunctionClass{\policyicy} = \LinSpace$, as in the linear case the prediction error is linear. In order to obtain a computationally efficient implementation, we need to use a test class that is a ``simpler'' subset of $\LinSpace$. In particular, for linear functions, it is not hard to show that the estimates $\VminEmp{\policyicy}$ and $\VmaxEmp{\policyicy}$ from equation~\eqref{eqn:ConfidenceIntervalEmpirical} can be computed by solving a quadratic program, with two linear constraints for each test function. (See~\cref{sec:LinearConfidenceIntervals} for the details.) Consequently, the computational complexity scales linearly with the number of test functions. Thus, if we restrict ourselves to a finite test class contained within $\LinSpace$, we will obtain a computationally efficient approach. \subsection{A computationally friendly test class and OPC coefficients} Define the empirical covariance matrix $\widehat \Sigma = \frac{1}{n} \sum_{i=1}^{n} \phiEmpirical{i} \phiEmpirical{i}^T$ \mbox{where $\phiEmpirical{i} \defeq \phi(\state_i,\action_i)$.} Let $\{\EigenVectorEmpirical{j} \}_{j=1}^\dim$ be the eigenvectors of the empirical covariance matrix $\widehat \Sigma$, and suppose that they are normalized to have unit $\ell_2$-norm. We use these normalized eigenvectors to define the finite test class \begin{align} \label{eqn:LinTestFunction} \TestFunctionClassRandom{\policyicy} \defeq \{ f_j, j = 1, \ldots, \dim \} \quad \mbox{where $f_j \psa \defeq \inprod{\EigenVectorEmpirical{j}}{\phi \psa}$.} \end{align} A few observations are in order: \begin{carlist} \item This test class has only $d$ functions, so that our QP implementation has $2 \dim$ constraints, and can be solved in polynomial time. (Again, see \cref{sec:LinearConfidenceIntervals} for details.) \item Since $\TestFunctionClassRandom{\policyicy}$ is a subset of $\LinSpace$, the choice of radius $\ensuremath{\rho} = c( \dim \log n + \log (1/\delta))$ is valid for some constant $c$.
\end{carlist} \paragraph{Concentrability} When weak Bellman closure does not hold, our analysis needs to take into account how errors propagate via the dynamics. In particular, we define the \emph{next-state feature extractor} $\phiBootstrap{\policyicy} \psa \defeq \ensuremath{\mathbb{E}}ecti{\phi(\successorstate,\policyicy)}{\successorstate \sim \Pro\psa}$, along with the population covariance matrix $\Sigma \defeq \E_\mu \big[\phi \psa \phi^\top \psa \big]$, and its $\ensuremath{\lambda}$-regularized version $\SigmaReg \defeq \Sigma + \TestFunctionReg \Identity$. We also define the matrices \begin{align*} \CovarianceBootstrap{\policyicy} \defeq \ensuremath{\mathbb{E}}ecti{[\phi(\phiBootstrap{\policyicy})^\top]}{ \mu}, \quad \CovarianceWithBootstrapReg{\policyicy} \defeq (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy} )^\top (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}). \end{align*} The matrix $\CovarianceBootstrap{\policyicy}$ is the cross-covariance between successive states, whereas the matrix $\CovarianceWithBootstrapReg{\policyicy}$ is a suitably renormalized and symmetrized version of the matrix $\Sigma^{\frac{1}{2}} - \discount \Sigma^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}$, which arises naturally from the policy evaluation equation. We refer to quantities that contain evaluations at the next state (e.g., $\phiBootstrap{\policyicy}$) as bootstrapping terms, and now bound the OPC coefficient in the presence of such terms: \begin{proposition}[OPC bounds with bootstrapping] \label{prop:LinearConcentrability} Under weak realizability, we have \begin{align} \label{EqnOPCBootstrap} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\policyicy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \qquad \mbox{with probability at least $1-\FailureProbability$.} \end{align} \end{proposition} \noindent See~\cref{sec:LinearConcentrability} for the proof. The bound~\eqref{EqnOPCBootstrap} takes a familiar form, as it involves the same matrices used to define the LSTD solution. This is expected, as our approach here is essentially equivalent to the LSTD method; the difference is that LSTD gives only a point estimate, as opposed to the confidence intervals that we present here. Both, however, are derived from the same principle, namely the Bellman equations projected along the predictor (error) space. The bound quantifies how the feature extractor $\phi$ together with the bootstrapping term $\phiBootstrap{\policyicy}$, averaged along the target policy $\policyicy$, interacts with the covariance matrix with bootstrapping $\CovarianceWithBootstrapReg{\policyicy}$. It is an approximation to the OPC coefficient bound derived in~\cref{lem:PredictionError}. The bootstrapping terms capture the temporal difference correlations that can arise in reinforcement learning when strong assumptions like Bellman closure do not hold. As a consequence, such an OPC coefficient being small is a \emph{sufficient} condition for reliable off-policy prediction.
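As a concrete illustration of the computationally friendly test class~\eqref{eqn:LinTestFunction}, here is a short NumPy sketch (hypothetical code, not the authors' implementation) that forms the $\dim$ eigenvector test functions from the empirical covariance matrix of the observed features.
\begin{verbatim}
import numpy as np

def eigenvector_test_class(Phi):
    """Phi: (n, d) array whose i-th row is the feature vector phi(s_i, a_i).

    Returns a (d, d) array V whose j-th row v_j defines the test function
    f_j(s, a) = <v_j, phi(s, a)>, where the v_j are unit-norm eigenvectors
    of the empirical covariance matrix (1/n) * sum_i phi_i phi_i^T.
    """
    n = Phi.shape[0]
    Sigma_hat = Phi.T @ Phi / n
    _, eigvecs = np.linalg.eigh(Sigma_hat)   # orthonormal eigenvectors (columns)
    return eigvecs.T

def evaluate_tests(V, phi_sa):
    """Values f_j(s, a) of all d test functions at a single feature vector."""
    return V @ phi_sa
\end{verbatim}
The OPC bound~\eqref{EqnOPCBootstrap} above is stated for exactly this test class.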
This bound on the OPC coefficient always applies, and it reduces to the simpler one~\eqref{EqnOPCClosure} when weak Bellman closure holds, with no need to inform the algorithm of the simplified setting; see \cref{sec:LinearConcentrabilityBellmanClosure} for the proof. \begin{proposition}[OPC bounds under weak Bellman Closure] \label{prop:LinearConcentrabilityBellmanClosure} Under Bellman closure, we have \begin{align} \label{EqnOPCClosure} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{ \phi}{\policyicy}}{\SigmaReg^{-1}}^2 \qquad \mbox{with probability at least $1 - \delta$.} \end{align} \end{proposition} \subsection{Actor-critic scheme for policy optimization} \label{sec:LinearApproximateOptimization} Having described a practical procedure to compute $\VminEmp{\policyicy}$, we now turn to the computation of the max-min estimator for policy optimization. We define the \emph{soft-max policy class} \begin{align} \label{eqn:SoftMax} \PolicyClass_{\text{lin}} \defeq \Big\{ \psa \mapsto \frac{e^{\innerprod{\phi\psa}{\ActorPar{}}}}{\sum_{\successoraction \in ASpace} e^{\innerprod{\phi(\state,\successoraction)}{\ActorPar{}}}} \mid \norm{\ActorPar{}}{2} \leq \nIter, \; \ActorPar{} \in \R^{\dim} \Big\}. \end{align} In order to compute the max-min solution~\eqref{eqn:MaxMinEmpirical} over this policy class, we implement an actor-critic method, in which the actor performs a variant of mirror descent.\footnote{Strictly speaking, it is mirror ascent, but we use the conventional terminology.} \begin{carlist} \item At each iteration $t = 1, \ldots, \nIter$, the policy $\ActorPolicy{t} \in \PolicyClass_{\text{lin}}$ can be identified with a parameter $\ActorPar{t} \in \R^\dim$. The sequence is initialized with $\ActorPar{1} = 0$. \item Using the finite test function class~\eqref{eqn:LinTestFunction} based on normalized eigenvectors, the pessimistic value estimate $\VminEmp{\ActorPolicy{t}}$ is computed by solving a quadratic program, as previously described. This computation returns the weight vector $\CriticPar{t}$ of the associated optimal action-value function. \item Using the action-value vector $\CriticPar{t}$, we update the actor's parameter as \begin{align} \label{eqn:LinearActorUpdate} \ActorPar{t+1} = \ActorPar{t} + \eta \CriticPar{t} \qquad \mbox{where $\eta = \sqrt{\frac{\log\abs{ASpace}}{2T}}$ is a stepsize parameter. } \end{align} \end{carlist} We now state a guarantee on the behavior of this procedure, based on two OPC coefficients: \begin{align} \label{EqnNewConc} \ConcentrabilityGenericSub{\widetilde \policy}{1} = \dim \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}}^2, \quad \mbox{and} \quad \ConcentrabilityGenericSub{\widetilde \policy}{2} = \dim \; \sup_{\policyicy \in \PolicyClass} \Big \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \Big \}. \end{align} Moreover, in making the following assertion, we assume that every weak solution $\QpiWeak{\policyicy}$ can be evaluated against the distribution of a comparator policy $\widetilde \policy \in \PolicyClass$, i.e., $\innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\widetilde \policy} = 0$ for all $\policyicy \in \PolicyClass $. (This assumption is still weaker than strong realizability.)
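Before stating the guarantee, the following sketch (hypothetical Python; the routine \texttt{pessimistic\_critic} stands in for the quadratic program that computes $\VminEmp{\ActorPolicy{t}}$ and its optimal weight vector, which we do not implement here) summarizes the actor iteration just described.
\begin{verbatim}
import numpy as np

def softmax_policy(theta, Phi_s):
    """Soft-max policy at one state; Phi_s is the (A, d) matrix of phi(s, a)."""
    logits = Phi_s @ theta
    logits -= logits.max()                 # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def actor_critic(pessimistic_critic, d, num_actions, T):
    """Mirror-ascent actor loop; the critic returns the weight vector w_t of
    the pessimistic action-value estimate for the current actor parameter."""
    eta = np.sqrt(np.log(num_actions) / (2.0 * T))   # stepsize from the text
    theta = np.zeros(d)
    iterates = []
    for _ in range(T):
        iterates.append(theta.copy())
        w = pessimistic_critic(theta)      # hypothetical critic oracle (the QP)
        theta = theta + eta * w            # actor update, cf. (eqn:LinearActorUpdate)
    return iterates
\end{verbatim}
The returned iterates correspond to the policy sequence $\{\ActorPolicy{t}\}_{t=1}^T$ appearing in the guarantee below.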
\begin{theorem}[Approximate Guarantees for Linear Soft-Max Optimization] \label{thm:LinearApproximation} Under the above conditions, running the procedure for $T$ rounds returns a policy sequence $\{\ActorPolicy{t}\}_{t=1}^T$ such that, for any comparator policy $\widetilde \policy\in \PolicyClass$, \begin{align} \label{EqnMirrorBound} \frac{1}{T} \sumiter \big \{ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \big \} & \leq \frac{c_1}{1-\discount} \biggl \{ \underbrace{\sqrt{\frac{\log \abs{ASpace}}{T}} \vphantom{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot}\frac{\dim \log(nT) + \log \frac{n}{\FailureProbability} }{n}}} }_{\text{Optimization error}} + \underbrace{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} \frac{\dim \log(nT) + \log \big(\frac{n }{\FailureProbability}\big) }{n}}}_{\text{Statistical error}} \biggr \}, \end{align} with probability at least $1 - \FailureProbability$. This bound always holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{2}$,} and moreover, it holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{1}$} when weak Bellman closure is in force. \end{theorem} \noindent See~\cref{sec:LinearApproximation} for the proof. Whenever Bellman closure holds, the result automatically inherits the more favorable concentrability coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{1}$, as originally derived in \cref{prop:LinearConcentrabilityBellmanClosure}. The resulting bound is only $\sqrt{\dim}$ worse than the lower bound recently established in the paper~\cite{zanette2021provable}. However, the method proposed here is robust, in that it provides guarantees even when Bellman closure does not hold. In this case, we have a guarantee in terms of the OPC coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{2}$. Note that it is a uniform version of the one derived previously in~\cref{prop:LinearConcentrability}, in that there is an additional supremum over the policy class. This supremum arises due to the use of a gradient-based method, which implicitly searches over policies in the bootstrapping terms; see \cref{sec:LinearDiscussion} for a more detailed discussion of this issue. \section*{Acknowledgment} AZ was partially supported by NSF-FODSI grant 2023505. In addition, this work was partially supported by NSF-DMS grant 2015454, NSF-IIS grant 1909365, as well as Office of Naval Research grant DOD-ONR-N00014-18-1-2640 to MJW. The authors are grateful to Nan Jiang and Alekh Agarwal for pointing out further connections with the existing literature, as well as to the reviewers for pointing out clarity issues. \tableofcontents \section{Additional Discussion and Results} \input{3p1p1-BRO} \subsection{Comparison with Weight Learning Methods} \label{sec:WeightLearning} The work closest to ours is \cite{jiang2020minimax}. They also use an auxiliary weight function class, which is comparable to our test class. However, the two classes are used in different ways; we compare them in this section at the population level.\footnote{ The empirical estimator in \cite{jiang2020minimax} does not take into account the `alignment' of each weight function with respect to the dataset, which we do through self-normalization and regularization in the construction of the empirical estimator.
This precludes obtaining the same type of strong finite time guarantees that we are able to derive here.} Let us assume that weak realizability holds and that $\TestFunctionClass{}$ is symmetric, i.e., if $\TestFunction{} \in \TestFunctionClass{}$ then $-\TestFunction{} \in \TestFunctionClass{}$ as well. At the population level, our program seeks to solve \begin{align} \label{eqn:PopProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0, \end{align} which, for any $w \in \TestFunctionClass{}$, is equivalent to \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0. \end{align*} Removing the constraints leads to the upper bound \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} Since this is a valid upper bound for any $w \in \TestFunctionClass{}$, minimizing over $w$ must still yield an upper bound, which reads \begin{align*} \inf_{w \in \TestFunctionClass{}}\sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} This is the population program for ``weight learning'', as described in \cite{jiang2020minimax}. It follows that Bellman residual orthogonalization always produces tighter confidence intervals than ``weight learning'' at the population level. Another interesting comparison is with ``value learning'', also described in \cite{jiang2020minimax}. In this case, assuming a symmetric $\TestFunctionClass{}$, we can equivalently express the population program \eqref{eqn:PopProgComparison} using a Lagrange multiplier as follows: \begin{align} \label{eqn:LagProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \sup_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}} \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align} Rearranging, we obtain \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \inf_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}}& \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} The ``value learning'' program proposed in \cite{jiang2020minimax} has a formulation similar to ours, but differs in two key aspects. The first---and most important---is that \cite{jiang2020minimax} ignores the Lagrange multiplier; this means that ``value learning'' is no longer associated with a constrained program. While the Lagrange multiplier could be ``incorporated'' into the test class $\TestFunctionClass{}$, doing so would cause the entropy of $\TestFunctionClass{}$ to be unbounded.
Another point of difference is that ``value learning'' uses this expression with $\lambda = 1$ to derive the confidence interval \emph{lower bound}, while we use it to construct the confidence interval \emph{upper bound}. While this may seem like a contradiction, we notice that the expression is derived under different assumptions: we assume weak realizability of $Q$, while \cite{jiang2020minimax} assumes realizability of the density ratios between $\mu$ and the discounted occupancy measure of $\policyicy$. \subsection{Additional Literature} \label{sec:Literature} Here we summarize some additional literature. The efficiency of off-policy tabular RL has been investigated in the papers~\cite{yin2020near,yin2020asymptotically,yin2021towards}. For empirical studies on offline RL, see the papers~\cite{laroche2019safe,jaques2019way,wu2019behavior,agarwal2020optimistic,wang2020critic,siegel2020keep,nair2020accelerating,yang2021pessimistic,kumar2021should,buckman2020importance,kumar2019stabilizing,kidambi2020morel,yu2020mopo}. Some of the classical RL algorithms are presented in the papers~\cite{munos2003error,munos2005error,antos2007fitted,antos2008learning,farahmand2010error,farahmand2016regularized}. For a more modern analysis, see~\cite{chen2019information}. These works generally make additional assumptions on top of realizability. Alternatively, one can use importance sampling \cite{precup2000eligibility,thomas2016data,jiang2016doubly,farajtabar2018more}. A more recent idea is to work directly with the occupancy distributions themselves~\cite{liu2018breaking,nachum2019algaedice,xie2019towards,zhang2020gendice,zhang2020gradientdice,yang2020off,kallus2019efficiently}. Offline policy optimization with pessimism has been studied in the papers~\cite{liu2020provably,rashidinejad2021bridging,jin2021pessimism,xie2021bellman,zanette2021provable,yinnear,uehara2021pessimistic}. There exists a fairly extensive literature on lower bounds with linear representations, including the two papers~\cite{zanette2020exponential,wang2020statistical} that concurrently derived the first exponential lower bounds for the offline setting; moreover, \cite{foster2021offline} proves that realizability and coverage alone are insufficient. In the context of off-policy optimization, several works have investigated methods that assume only realizability of the optimal policy~\cite{xie2020batch,xie2020Q}. Related work includes the papers~\cite{duan2020minimax,duan2021risk,jiang2020minimax,uehara2020minimax,tang2019doubly,nachum2020reinforcement,voloshin2021minimax,hao2021bootstrapping,zhang2022efficient,uehara2021finite,chen2022well,lee2021model}. Among concurrent works, we note \cite{zhan2022offline}. \subsection{Definition of Weak Bellman Closure} \label{sec:appWeakClosure} \begin{definition}[Weak Bellman Closure] The Bellman operator $\BellmanEvaluation{\policyicy}$ is \emph{weakly closed} with respect to the triple $\big( \Qclass{\policyicy}, \TestFunctionClass{\policyicy}, \mu \big)$ if for any $Q \in \Qclass{\policyicy}$, there exists a predictor $\QpiProj{\policyicy}{Q} \in \Qclass{\policyicy}$ such that \begin{align} \innerprodweighted{\TestFunction{}}{\QpiProj{\policyicy}{Q}}{\mu} = \innerprodweighted{\TestFunction{}} {\BellmanEvaluation{\policyicy}(Q)} {\mu} \qquad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$.} \end{align} \end{definition} \subsection{Additional results on the concentrability coefficients} \label{sec:appConc} \subsubsection{Testing with the identity function} Suppose that the identity function $\1$ belongs to the test class.
Doing so amounts to requiring that the Bellman error is controlled in an average sense over all the data. When this choice is made, we can derive some generic upper bounds on $K^\policy$, which we state and prove here: \begin{lemma} If $\1 \in \TestFunctionClass{\policyicy}$, then we have the upper bounds \begin{align} \label{eqn:InEq} K^\policy & \stackrel{(i)}{\leq} \frac{\max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}|^2}{ \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2} \; \stackrel{(ii)}{\leq} K^\policy_* \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{|\PolMuComplex|^2}. \end{align} \end{lemma} \begin{proof} Since $\1 \in \TestFunctionClass{}$, the definition of $\PopulationFeasibleSet{\policyicy}$ implies that \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2 & \leq \big( \munorm{\1}^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n} = \big( 1 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}. \end{align*} The upper bound (i) then follows from the definition of $K^\policy$. The upper bound (ii) follows because a ratio of maxima is at most the maximum of the ratios: evaluating the numerator at its maximizer $Q^\star$ and lower bounding the maximum in the denominator by its value at $Q^\star$ shows that the middle term in~\eqref{eqn:InEq} is at most $K^\policy_*$. \end{proof} Note that large values of $K^\policy_*$ (defined in \cref{eqn:InEq}) can arise when there exist $Q$-functions in the set $\PopulationFeasibleSet{\policyicy}$ that have low average Bellman error under the data-generating distribution $\mu$, but relatively large values under $\policyicy$. Of course, the likelihood of such unfavorable choices of $Q$ is reduced when we use a larger test function class, which then reduces the size of $\PopulationFeasibleSet{\policyicy}$. However, we pay a price in choosing a larger test function class, since the choice~\eqref{EqnRadChoice} of the radius $\ensuremath{\rho}$ needed for \cref{thm:NewPolicyEvaluation} depends on its complexity. \subsubsection{Mixture distributions} Now suppose that the dataset consists of a collection of trajectories collected by different protocols. More precisely, for each $j = 1, \ldots, \nConstraints$, let $\Dpi{j}$ be a particular protocol for generating a trajectory. Suppose that we generate data by first sampling a random index $J \in [\nConstraints]$ according to a probability distribution $\{\Frequencies{j} \}_{j=1}^{\nConstraints}$, and conditioned on $J = j$, we sample $(\state,\action,\identifier)$ according to $\Dpi{j}$. The resulting data follows a mixture distribution, where we set $o = j$ to tag the protocol used to generate the data. To be clear, for each sample $i = 1, \ldots, n$, we sample $J$ as described, and then draw a single sample $(\state,\action,\identifier) \sim \Dpi{j}$. Following the intuition given in the previous section, it is natural to include test functions that code for the protocol---that is, the binary-indicator functions \begin{align} f_j (\state,\action,\identifier) & = \begin{cases} 1 & \mbox{if $o=j$} \\ 0 & \mbox{otherwise.} \end{cases} \end{align} These test functions, when included in the weak formulation, enforce the Bellman evaluation equations for the policy $\policyicy \in \PolicyClass$ under consideration along the distribution induced by each data-generating policy $\Dpi{j}$. \begin{lemma}[Mixture Policy Concentrability] \label{lem:MixturePolicyConcentrability} Suppose that $\mu$ is an $\nConstraints$-component mixture, and that the indicator functions $\{f_j \}_{j=1}^\nConstraints$ are included in the test class.
Then we have the upper bounds \begin{align} \label{EqnMixturePolicyUpper} K^\policy & \stackrel{(i)}{\leq} \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \frac{\max \limits_{Q\in \PopulationFeasibleSet{\policyicy}} [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} {\max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \; \stackrel{(ii)}{\leq} \; \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \max_{Q\in \PopulationFeasibleSet{\policyicy}} \left \{ \frac{ [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} { \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \right \}. \end{align} \end{lemma} \begin{proof} From the definition of $K^\policy$, it suffices to show that \begin{align*} \max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \; \big(1 + \nConstraints \ensuremath{\lambda} \big). \end{align*} A direct calculation yields $\innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \ensuremath{\mathbb{E}}ecti{\Indicator \{o = j \} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}$. Moreover, since each $\TestFunction{j}$ belongs to the test class by assumption, we have the upper bound $\Big|\Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \; \sqrt{ \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda}}$. Squaring each term and summing over the constraints yields \begin{align*} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \sum_{j=1}^\nConstraints \big( \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda} \big) = \frac{\ensuremath{\rho}}{n} \big(1 + \nConstraints \ensuremath{\lambda} \big), \end{align*} where the final equality follows since $\sum_{j=1}^{\nConstraints} \munorm{\TestFunction{j}}^2 = 1$. \end{proof} As shown by the upper bound, the off-policy coefficient $K^\policy$ provides a measure of how the squared-averaged Bellman errors along the policies $\{ \Dpi{j} \}_{j=1}^\nConstraints$, weighted by their probabilities $\{ \Frequencies{j} \}_{j=1}^{\nConstraints}$, transfer to the evaluation policy $\policyicy$. Note that, since the regularization parameter $\ensuremath{\lambda}$ decays as a function of the sample size---e.g., as $1/n$ in \cref{thm:NewPolicyEvaluation}---the factor $(1 + \nConstraints \TestFunctionReg)/(1 + \TestFunctionReg)$ approaches one as $n$ increases (for a fixed number $\nConstraints$ of mixture components). \subsubsection{Bellman Rank for off-policy evaluation} In this section, we show how more refined bounds can be obtained when---in addition to a mixture condition---additional structure is imposed on the problem.
In particular, we consider a notion similar to that of Bellman rank~\cite{jiang17contextual}, but suitably adapted\footnote{ The original definition essentially takes $\widetilde {\PolicyClass}$ as the set of all greedy policies with respect to $\widetilde {\Qclass{}}$. Since a dataset need not originate from greedy policies, the definition of Bellman rank is adapted in a natural way.} to the off-policy setting. Given a policy class $\widetilde {\PolicyClass}$ and a predictor class $\widetilde {\Qclass{}}$, we say that the pair has Bellman rank $\dim$ if there exist two maps $\BRleft{} : \widetilde {\PolicyClass} \rightarrow \R^\dim$ and $\BRright{}: \widetilde {\Qclass{}} \rightarrow \R^\dim$ such that \begin{align} \label{eqn:BellmanRank} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \smlinprod{\BRleft{\policyicy}}{\BRright{Q}}_{\R^d}, \qquad \text{for all } \; \policyicy \in \widetilde {\PolicyClass} \; \text{and} \; Q \in \widetilde {\Qclass{}}. \end{align} In words, the average Bellman error of any predictor $Q$ along any given policy $\policyicy$ can be expressed as the Euclidean inner product between two $\dim$-dimensional vectors, one for the policy and one for the predictor. As in the previous section, we assume that the data is generated by a mixture of $\nConstraints$ different distributions (or equivalently policies) $\{\Dpi{j} \}_{j=1}^\nConstraints$. In the off-policy setting, we require that the policy class $\widetilde {\PolicyClass}$ contains all of these policies as well as the target policy---viz. $\{\Dpi{j} \} \cup \{ \policyicy \} \subseteq \widetilde {\PolicyClass}$. Moreover, the predictor class $\widetilde {\Qclass{}}$ should contain the predictor class for the target policy, i.e., $\Qclass{\policyicy} \subseteq \widetilde {\Qclass{}}$. We also assume weak realizability for this discussion. Our result depends on a positive semidefinite matrix determined by the mixture weights $\{\Frequencies{j} \}_{j=1}^\nConstraints$ along with the embeddings $\{\BRleft{\Dpi{j}}\}_{j=1}^\nConstraints$ of the associated policies that generated the data. In particular, we define \begin{align*} \BRCovariance{} = \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top. \end{align*} Assuming that this matrix is positive definite,\footnote{If not, one can prove a result for a suitably regularized version.} we define the norm $\|u\|_{\BRCovariance{}^{-1}} = \sqrt{u^\top (\BRCovariance{})^{-1} u}$. With this notation, we have the following bound. \begin{lemma}[Concentrability with Bellman Rank] \label{lem:ConcentrabilityBellmanRank} For a mixture data-generation process and under the Bellman rank condition~\eqref{eqn:BellmanRank}, we have the upper bound \begin{align} K^\policy & \leq \; \frac{1 + \nConstraints \TestFunctionReg}{1 + \TestFunctionReg} \; \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2. \end{align} \end{lemma} \begin{proof} Our proof exploits the upper bound (ii) from the claim~\eqref{EqnMixturePolicyUpper} in~\cref{lem:MixturePolicyConcentrability}. We first rewrite the ratio appearing in this upper bound.
Weak realizability coupled with the Bellman rank condition~\eqref{eqn:BellmanRank} implies that there exists some $\QpiWeak{\policyicy}$ such that \begin{align*} 0 & = \innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\Dpi{j}} = \Frequencies{j} \inprod{\BRleft{\Dpi{j}}}{\BRright{\QpiWeak{\policyicy}}}, \qquad \mbox{for all $j = 1, \ldots, \nConstraints$, and} \\ 0 & = \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{\BRright{\QpiWeak{\policyicy}}}. \end{align*} Therefore, we have the identities $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} = \inprod{\BRleft{\Dpi{j}}}{ (\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$ for all $j = 1, \ldots, \nConstraints$, as well as $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{(\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$. Introducing the shorthand $\BRDelta{Q} = \BRright{Q} - \BRright{\QpiWeak{\policyicy}}$, we can bound the ratio as follows \begin{align*} \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{\BRDelta{Q}})^2 }{ \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 (\inprod{\BRleft{\Dpi{j}}}{ \BRDelta{Q}})^2 } \Big \}& = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{ \BRDelta{Q}})^2 }{\BRDelta{Q}^\top \Big( \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top \Big) \BRDelta{Q} } \Big \} \\ & = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ ( \smlinprod{\BRleft{\policyicy}}{ \BRCovariance{}^{-\frac{1}{2}} \BRDeltaCoord{Q}})^2 }{ \|\BRDeltaCoord{Q}\|_2^2} \Big \} \qquad \mbox{where $\BRDeltaCoord{Q} = \BRCovariance{}^{\frac{1}{2}}\BRDelta{Q}$}\\ & \leq \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2, \end{align*} where the final step follows from the Cauchy--Schwarz inequality. \end{proof} Thus, when performing off-policy evaluation with a mixture distribution under the Bellman rank condition, the coefficient $K^\policy$ is bounded by the alignment between the target policy $\policyicy$ and the data-generating distribution $\mu$, as measured in the embedded space guaranteed by the Bellman rank condition. The structure of this upper bound is similar to a result that we derive in the sequel for linear approximation under Bellman closure (see~\cref{prop:LinearConcentrabilityBellmanClosure}). \subsection{Further comments on the prediction error test space} \label{sec:appBubnov} A few comments on the bound in \cref{lem:PredictionError}: as in our previous results, the pre-factor $\frac{\norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }$ serves as a normalization factor. Disregarding this leading term, the second ratio measures how the prediction error $QErr = Q - \QpiWeak{\policyicy}$ along $\mu$ transfers to $\policyicy$ via the operator $\IdentityOperator - \discount\TransitionOperator{\policyicy}$. This interaction is complex, since it includes the \emph{bootstrapping term} $-\discount\TransitionOperator{\policyicy}$.
(Notably, such a term is not present for standard prediction or bandit problems, in which case $\discount = 0$.) This term reflects the dynamics intrinsic to reinforcement learning, and plays a key role in proving ``hard'' lower bounds for offline RL (e.g., see the work~\cite{zanette2020exponential}). Observe that the bound in \cref{lem:PredictionError} requires only weak realizability, and thus it always applies. This fact is significant in light of a recent lower bound~\cite{foster2021offline}, showing that without Bellman closure, off-policy learning is challenging even under strong concentrability assumptions (such as bounds on density ratios). \cref{lem:PredictionError} gives a sufficient condition without Bellman closure, but with a different measure that accounts for bootstrapping. \\ \noindent If, in fact, (weak) Bellman closure holds, then~\cref{lem:PredictionError} takes the following simplified form: \begin{lemma}[OPC coefficient under Bellman closure] \label{lem:PredictionErrorBellmanClosure} If $\QclassErr{\policyicy} \subseteq \TestFunctionClass{\policyicy}$ and weak Bellman closure holds, then \begin{align*} K^\policy \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \, \cdot \, \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm {QErrNC} {\policyicy}^2 } { \norm{QErrNC} {\mu}^2 } \Big \}. \end{align*} \end{lemma} \noindent See \cref{sec:PredictionErrorBellmanClosure} for the proof. \\ In this case, the concentrability measures the increase in the discrepancy $Q - Q'$ between feasible predictors when moving from the dataset distribution $\mu$ to the distribution of the target policy $\policyicy$. In \cref{sec:DomainKnowledge}, we give another bound under weak Bellman closure, and thereby recover a recent result due to Xie et al.~\cite{xie2021bellman}. Finally, in~\cref{sec:Linear}, we provide some applications of this concentrability factor to the linear setting. \subsection{From Importance Sampling to Bellman Closure} \label{sec:IS2BC} Let us show an application of \cref{prop:MultipleRobustness} to an example with just two test spaces. Suppose that we suspect that Bellman closure holds, but rather than committing to such an assumption, we wish to fall back to an importance sampling estimator if Bellman closure does not hold. In order to streamline the presentation of the idea, let us introduce the following setup. Let $\policyicyBeh{}$ be a behavioral policy that generates the dataset, i.e., such that each state-action pair $\psa$ in the dataset is sampled from its discounted state-action distribution $\DistributionOfPolicy{\policyicyBeh{}}$. Next, let the identifier $o$ contain the trajectory from $\nu_{\text{start}}$ up to the state-action pair $\psa$ recorded in the dataset. That is, each tuple $\sarsi{}$ in the dataset $\Dataset$ is such that $\psa \sim \DistributionOfPolicy{\policyicyBeh{}}$ and $o$ contains the trajectory up to $\psa$. We now define the test spaces. The first one is denoted by $\TestFunctionISClass{\policyicy}$ and leverages importance sampling.
It contains a single test function defined as the importance sampling estimator \begin{align} \label{eqn:ImportanceSampling} \TestFunctionISClass{\policyicy} = \{ \TestFunction{\policyicy}\}, \qquad \text{where} \; \TestFunction{\policyicy}(\state,\action,\identifier) = \frac{1}{\scaling{\policyicy}} \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) }. \end{align} The above product is over the random trajectory contained in the identifier $o$. The normalization factor $\scaling{\policyicy} \in \R$ is connected to the maximum range of the importance sampling estimator, and ensures that $\sup_{(\state,\action,\identifier)} \TestFunction{\policyicy}(\state,\action,\identifier) \leq 1$. The second test space is the prediction error test space $\QclassErr{\policyicy}$ defined in \cref{sec:ErrorTestSpace}. With this choice, let us define three concentrability coefficients: $\ConcentrabilityGenericSub{\policyicy}{1}$ arises from importance sampling, $\ConcentrabilityGenericSub{\policyicy}{2}$ from the prediction error test space when just weak realizability holds, and $\ConcentrabilityGenericSub{\policyicy}{3}$ from the prediction error test space when (weak) Bellman closure holds. They are bounded as \begin{align*} \ConcentrabilityGenericSub{\policyicy}{1} \leq \sqrt{ \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg} } \qquad \ConcentrabilityGenericSub{\policyicy}{2} \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \frac{ \innerprodweighted{\1} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\mu}^2 } \times \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }, \qquad \ConcentrabilityGenericSub{\policyicy}{3} \leq c_1 \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2}. \end{align*} \begin{lemma}[From Importance Sampling to Bellman Closure] \label{lem:IS} The choice $\TestFunctionClass{\policyicy} = \TestFunctionISClass{\policyicy} \cup \QclassErr{\policyicy} \; \text{for all } \policyicy\in\PolicyClass$ ensures that with probability at least $1-\FailureProbability$, the oracle inequality~\eqref{EqnOracle} holds with $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2}, \ConcentrabilityGenericSub{\policyicy}{3} \} $ if weak Bellman closure holds and $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2} \} $ otherwise. \end{lemma} \begin{proof} Let us calculate the {off-policy cost coefficient} associated with $\TestFunctionISClass{\policyicy}$.
The unbiasedness of the importance sampling estimator gives us the following population constraint (here $\mu = \DistributionOfPolicy{\policyicyBeh{}}$) \begin{align*} \abs{ \innerprodweighted{\TestFunction{\policyicy}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \abs{ \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \frac{1}{\scaling{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } = \frac{1}{\scaling{\policyicy}} \abs{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } \leq \frac{\ConfidenceInterval{\FailureProbability}}{\sqrt{n}} \sqrt{\norm{\TestFunction{\policyicy}}{\mu}^2 + \TestFunctionReg}. \end{align*} The norm of the test function reads (notice that $\mu$ generates $(\state,\action,\identifier)$ here) \begin{align*} \norm{\TestFunction{\policyicy}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}^2}{\mu} = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \Bigg( \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } \Bigg)^2 }{\mu} = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } }{\policyicy} \leq \frac{1}{\scaling{\policyicy}}. \end{align*} Together with the prior display, we obtain \begin{align*} \frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} {\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{\mu}^2 + \TestFunctionReg)} \leq \frac{\ensuremath{\rho}}{n}. \end{align*} The resulting concentrability coefficient is therefore \begin{align*} \ConcentrabilityGeneric{\policyicy} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{n}{\ensuremath{\rho}} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{\mu}^2 + \TestFunctionReg)} {\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} \leq \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg}. \end{align*} Chaining the above result with \cref{lem:BellmanTestFunctions,lem:PredictionError}, using \cref{prop:MultipleRobustness} and plugging back into \cref{thm:NewPolicyEvaluation} yields the claim. \end{proof} \subsection{Implementation for Off-Policy Predictions} \label{sec:LinearConfidenceIntervals} In this section, we describe a computationally efficient way in which to compute the upper/lower estimates~\eqref{eqn:ConfidenceIntervalEmpirical}. Given a finite set of $n_{\TestFunctionClass{}}$ test functions, it involves solving a quadratic program with $2 n_{\TestFunctionClass{}} + 1$ constraints. Let us first work out a concise description of the constraints defining membership in $\EmpiricalFeasibleSet{\policyicy}$. Introduce the shorthand $\nLinEffSq{\TestFunction{}} \defeq \norm{\TestFunction{}}{n}^2 + \TestFunctionReg$.
We then define the empirical average feature vector $\phiEmpiricalExpecti{\TestFunction{}}$, the empirical average reward $\LinEmpiricalReward{\TestFunction{}}$, and the average next-state feature vector $\phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}$ as \begin{align*} \quad \phiEmpiricalExpecti{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa \phi\psa, \qquad \LinEmpiricalReward{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa\reward, \\ \qquad \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa \phi(\successorstate,\policyicy). \end{align*} In terms of this notation, each empirical constraint defining $\EmpiricalFeasibleSet{\policyicy}$ can be written in the more compact form \begin{align*} \frac{\abs{\innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}}{n}} } {\nLinEff{\TestFunction{}}} & = \Big | \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}}. \end{align*} Then the set of empirical constraints can be written as a set of constraints linear in the critic parameter $\CriticPar{}$, coupled with the assumed regularity bound on $\CriticPar{}$: \begin{align} \label{eqn:LinearConstraints} \EmpiricalFeasibleSet{\policyicy} = \Big\{ \CriticPar{} \in \R^d \mid \norm{\CriticPar{}}{2} \leq 1, \quad \mbox{and} \quad - \sqrt{\frac{\ensuremath{\rho}}{n}} \leq \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \Big\}. \end{align} Thus, the estimates $\VminEmp{\policy}$ (respectively $\VmaxEmp{\policy}$) can be computed by minimizing (respectively maximizing) the linear objective function $\CriticPar{} \mapsto \inprod{[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathbb{E}}ecti{\phi(\state,\action)} {\action \sim \policyicy}}{\state \sim \nu_{\text{start}}}]}{\CriticPar{}}$ subject to the $2 n_{\TestFunctionClass{}} + 1$ constraints in equation~\eqref{eqn:LinearConstraints}. Therefore, the estimates can be computed in polynomial time for any test function class whose cardinality grows polynomially in the problem parameters. \subsection{Discussion of Linear Approximate Optimization} \label{sec:LinearDiscussion} Here we discuss the presence of the supremum over policies in the coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{2}$ from equation~\eqref{EqnNewConc}. In particular, it arises because our actor-critic method iteratively approximates the maximum in the max-min estimate~\eqref{eqn:MaxMinEmpirical} using a gradient-based scheme. The ability of a gradient-based method to make progress is related to the estimation accuracy of the gradient, which here is the $Q$-estimate of the actor's current policy $\ActorPolicy{t}$; more specifically, the gradient is the $Q$-function parameter $\CriticPar{t}$. In the general case, the estimation error of the gradient $\CriticPar{t}$ depends on the policy under consideration through the matrix $\CovarianceWithBootstrapReg{\ActorPolicy{t}}$, while it is independent of the policy in the special case of Bellman closure (as it depends only on $\Sigma$).
Since the actor's policies are random, this leads to the introduction of a $\sup_{\policyicy\in\PolicyClass}$ in the general bound. Notice that the method still competes with the best comparator $\widetilde \policy$ by measuring the errors along the distribution of the comparator (through the operator $\ensuremath{\mathbb{E}}ecti{}{\widetilde \policy}$). To be clear, the $\sup_{\policyicy\in\PolicyClass}$ may not arise with approximate solution methods that do not rely solely on the gradient to make progress (such as second-order methods); we leave this for future research. Reassuringly, when Bellman closure holds, the approximate solution method recovers the standard guarantees established in the paper~\cite{zanette2021provable}. \section{General Guarantees} \label{sec:GeneralGuarantees} \subsection{A deterministic guarantee} We begin our analysis by stating a deterministic set of sufficient conditions for our estimators to satisfy the guarantees~\eqref{EqnPolEval} and~\eqref{EqnOracle}. This formulation is useful, because it reveals the structural conditions that underlie the success of our estimators, and in particular the connection to weak realizability. In Section~\ref{SecHighProb}, we exploit this deterministic result to show that, under a fairly general sampling model, our estimators enjoy these guarantees with high probability. In the previous section, we introduced the population-level set $\PopulationFeasibleSet{\policyicy}$ that arises in the statement of our guarantees. Also central in our analysis is the infinite-data limit of this set. More specifically, for any fixed $(\ensuremath{\rho}, \ensuremath{\lambda})$, if we take the limit $n \rightarrow \infty$, then $\PopulationFeasibleSet{\policyicy}$ reduces to the set of all solutions to the weak formulation~\eqref{eqn:WeakFormulation}---that is, \begin{align} \PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) = \{ Q \in \Qclass{\policyicy} \mid \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\pi}{}} {\mu} = 0 \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \}. \end{align} As before, we omit the dependence on the test function class $\TestFunctionClass{\policyicy}$ when it is clear from context. By construction, we have the inclusion $\PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \SuperPopulationFeasible$ for any non-negative pair $(\ensuremath{\rho}, \ensuremath{\lambda})$. Our first set of guarantees holds when the random set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the \emph{sandwich relation} \begin{align} \label{EqnSandwich} \PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \EmpiricalFeasibleSet{\policyicy}(\ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}) \subseteq \PopulationFeasibleSet{\policyicy}(4 \ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}). \end{align} To provide intuition as to why this sandwich condition is natural, observe that it has two important implications: \begin{enumerate} \item[(a)] Recalling the definition of weak realizability~\eqref{EqnWeakRealizable}, the weak solution $\QpiWeak{\policyicy}$ belongs to the empirical constraint set $\EmpiricalFeasibleSet{\policyicy}$ for any choice of test function space. This important property follows because $\QpiWeak{\policyicy}$ must satisfy the constraints~\eqref{eqn:WeakFormulation}, and thus it belongs to $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$.
\item[(b)] All solutions in $\EmpiricalFeasibleSet{\policyicy}$ also belong to $\PopulationFeasibleSet{\policyicy}$, which means that they approximately satisfy the weak Bellman equations in a way quantified by $\PopulationFeasibleSet{\policyicy}$. \end{enumerate} By leveraging these facts in the appropriate way, we can establish the following guarantee: \\ \begin{proposition} \label{prop:Deterministic} The following two statements hold. \begin{enumerate} \item[(a)] \underline{Policy evaluation:} If the set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the sandwich relation~\eqref{EqnSandwich}, then the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ satisfy the width bound~\eqref{EqnWidthBound}. If, in addition, weak Bellman realizability for $\policyicy$ is assumed, then the coverage condition~\eqref{EqnCoverage} holds. \item[(b)] \underline{Policy optimization:} If the sandwich relation~\eqref{EqnSandwich} and weak Bellman realizability hold for all $\policyicy \in \PolicyClass$, then any max-min optimal policy $\widetilde \pi$ from~\eqref{eqn:MaxMinEmpirical} satisfies the oracle inequality~\eqref{EqnOracle}. \end{enumerate} \end{proposition} \noindent See Section~\ref{SecProofPropDeterministic} for the proof of this claim. \\ In summary, \Cref{prop:Deterministic} ensures that, when weak realizability is in force, the sandwich relation~\eqref{EqnSandwich} is a sufficient condition for both the policy evaluation~\eqref{EqnPolEval} and optimization~\eqref{EqnOracle} guarantees to hold. Accordingly, the next phase of our analysis focuses on deriving sufficient conditions for the sandwich relation to hold with high probability. \subsection{Some high-probability guarantees} \label{SecHighProb} As stated, \Cref{prop:Deterministic} is a ``meta-result'', in that it applies to any choice of set $\EmpiricalFeasibleSet{\policyicy} \equiv \SuperEmpiricalFeasible$ for which the sandwich relation~\eqref{EqnSandwich} holds. In order to obtain a more concrete guarantee, we need to impose assumptions on the way in which the dataset was generated, and to specify concrete choices of $(\ensuremath{\rho}, \ensuremath{\lambda})$ that suffice to ensure that the associated sandwich relation~\eqref{EqnSandwich} holds with high probability. These tasks are the focus of this section. \subsubsection{A model for data generation} \label{SecDataGen} Let us begin by describing a fairly general model for data generation. Any sample takes the form $\sarsizNp{} \defeq \sarsi{}$, where the five components are defined as follows: \begin{carlist} \item the pair $(\state, \action)$ specifies the current state and action. \item the random variable $\reward$ is a noisy observation of the mean reward. \item the random state $\successorstate$ is the next-state sample, drawn according to the transition $\Pro\psa$. \item the variable $o$ is an optional identifier. \end{carlist} \noindent As one example of the use of an identifier variable, if samples might be generated by one of two possible policies---say $\policyicy_1$ and $\policyicy_2$---the identifier can take values in the set $\{1, 2 \}$ to indicate which policy was used for a particular sample. \\ Overall, we observe a dataset $\Dataset = \{\sarsizNp{i} \}_{i=1}^n$ of $n$ such quintuples. In the simplest possible setting, each triple $(\state,\action,\identifier)$ is drawn i.i.d. from some fixed distribution $\mu$, and the noisy reward $\reward_i$ is an unbiased estimate of the mean reward function $\Reward(\state_i, \action_i)$. In this case, our dataset consists of $n$ i.i.d.
quintuples. More generally, we would like to accommodate richer sampling models in which the sample $z_i = (\state_i, \action_i, o_i, \reward_i, \successorstate_i)$ at a given time $i$ is allowed to depend on past samples. In order to specify such dependence in a precise way, define the nested sequence of sigma-fields \begin{align} \mathcal F_1 \defeq \{\emptyset, \Omega\} \; \mbox{(the trivial $\sigma$-field)}, \quad \mbox{and} \quad \mathcal F_i \defeq \sigma \Big(\{\sarsizNp{j}\}_{j=1}^{i-1} \Big) \qquad \mbox{for $i = 2, \ldots, n$.} \end{align} In terms of this filtration, we make the following definition: \begin{assumption}[Adapted dataset] \label{asm:Dataset} An adapted dataset is a collection $\Dataset = \{ \sarsizNp{i} \}_{i=1}^n$ such that for each $i = 1, \ldots, n$: \begin{carlist} \item There is a conditional distribution $\mu_i$ such that $(\state_i, \action_i, o_i) \sim \mu_i (\cdot \mid \mathcal F_i)$. \item Conditioned on $(\state_i,\action_i,o_i)$, we observe a noisy reward $\reward_i = \Reward(\state_i,\action_i) + \eta_i$ with $\E[\eta_i \mid \mathcal F_{i} ] = 0$, and $|\reward_i| \leq 1$. \item Conditioned on $(\state_i,\action_i,o_i)$, the next state $\successorstate_i$ is generated according to $\Pro(\state_i,\action_i)$. \end{carlist} \end{assumption} Under this assumption, we can define the (possibly random) reference measure \begin{align} \mu(\state, \action, o) & \defeq \frac{1}{n} \sum_{i=1}^n \mu_i \big(\state, \action, o \mid \mathcal F_i \big). \end{align} In words, it corresponds to the distribution induced by first drawing a time index $i \in \{1, \ldots, n \}$ uniformly at random, and then sampling a triple $(\state,\action,\identifier)$ from the conditional distribution $\mu_i \big(\cdot \mid \mathcal F_i \big)$. \subsubsection{A general guarantee} \label{SecGeneral} Recall that there are three function classes that underlie our method: the test function class $\TestFunctionClass{}$, the policy class $\PolicyClass$, and the $Q$-function class $\Qclass{}$. In this section, we state a general guarantee (\cref{thm:NewPolicyEvaluation}) that involves the metric entropies of these sets. In Section~\ref{SecCorollaries}, we provide corollaries of this guarantee for specific function classes. In more detail, we equip the test function class and the $Q$-function class with the usual sup-norm \begin{align*} \|f - \tilde{f}\|_\infty \defeq \sup_{(\state,\action,\identifier)} |f(\state,\action,\identifier) - \tilde{f} (\state,\action,\identifier)|, \quad \mbox{and} \quad \|Q - \tilde{Q}\|_\infty \defeq \sup_{\psa} |Q \psa - \tilde{Q} \psa|, \end{align*} and the policy class with the sup-TV norm \begin{align*} \|\pi - \pitilde\|_{\infty, 1} & \defeq \sup_{\state} \|\pi(\cdot \mid \state) - \pitilde(\cdot \mid \state) \|_1 = \sup_{\state} \sum_{\action} |\pi(\action \mid \state) - \pitilde(\action \mid \state)|. \end{align*} For a given $\epsilon > 0$, we let $\CoveringNumber{\TestFunctionClass{}}{\epsilon}$, $\CoveringNumber{\Qclass{}}{\epsilon}$, and $\CoveringNumber{\PolicyClass}{\epsilon}$ denote the $\epsilon$-covering numbers of each of these function classes in the given norms.
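Before turning to the choice of the radius, we pause to illustrate the sampling model with a minimal Python sketch. The toy MDP, the history-dependent behavior rule, and the reward model below are illustrative assumptions; only the format of the quintuples and the adapted nature of the sampling mirror \cref{asm:Dataset}.
\begin{verbatim}
# Minimal sketch of an adapted dataset of quintuples (s, a, o, r, s').
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, n = 4, 2, 200
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition kernel
R = rng.uniform(0.0, 0.5, size=(n_states, n_actions))             # mean rewards

dataset, state = [], 0
for i in range(n):
    # mu_i(. | F_i): the sampling distribution may depend on past samples,
    # e.g. favor actions that accumulated higher rewards so far.
    scores = np.array([1.0 + sum(r for (s, a, o, r, s2) in dataset if a == b)
                       for b in range(n_actions)])
    action = int(rng.choice(n_actions, p=scores / scores.sum()))
    o = i % 2                                         # optional identifier tag
    reward = float(np.clip(R[state, action] + 0.1 * rng.normal(), 0.0, 1.0))
    next_state = int(rng.choice(n_states, p=P[state, action]))
    dataset.append((state, action, o, reward, next_state))
    state = next_state                                # the trajectory continues
print(dataset[:3])
\end{verbatim}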
Given these covering numbers, a tolerance parameter $\delta \in (0,1)$ and the shorthand $\ensuremath{\phi}(t) = \max \{t, \sqrt{t} \}$, define the radius function \begin{subequations} \label{EqnTheoremChoice} \begin{align} \label{EqnDefnR} \ensuremath{\rho}(\epsilon, \delta) & \defeq n \Big\{ \int_{\epsilon^2}^\epsilon \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + \frac{\log N_\epsilon(\Qclass{})}{n} + \frac{ \log N_\epsilon(\Pi)}{n} + \frac{\log(n/\delta)}{n} \Big\}. \end{align} In our theorem, we implement the estimator using a radius $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$, where $\epsilon > 0$ is any parameter that satisfies the bound \begin{align} \label{EqnRadChoice} \epsilon^2 & \stackrel{(i)}{\leq} \bar{c} \: \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}, \quad \mbox{and} \quad \ensuremath{\lambda} \stackrel{(ii)}{=} 4 \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}. \end{align} \end{subequations} Here $\bar{c} > 0$ is a suitably chosen universal constant (whose value is determined in the proof), and we adopt the shorthand $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ in our statement below. \renewcommand{\descriptionlabel}[1]{ \hspace{\labelsep}\normalfont\underline{#1} } \begin{theorem}[High-probability guarantees] \label{thm:NewPolicyEvaluation} Consider the estimates implemented using a triple $\InputFunctionalSpace$ that is weakly Bellman realizable (\cref{asm:WeakRealizability}), an adapted dataset (\Cref{asm:Dataset}), and the choices~\eqref{EqnTheoremChoice} of $(\epsilon, \ensuremath{\rho}, \ensuremath{\lambda})$. Then with probability at least $1 - \delta$: \begin{description} \item[Policy evaluation:] For any $\pi \in \PolicyClass$, the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ specify a confidence interval satisfying the coverage condition~\eqref{EqnCoverage} and the width bound~\eqref{EqnWidthBound}. \item[Policy optimization:] Any max-min policy~\eqref{eqn:MaxMinEmpirical} $\widetilde \pi$ satisfies the oracle inequality~\eqref{EqnOracle}. \end{description} \end{theorem} \noindent See \cref{SecProofNewPolicyEvaluation} for the proof of the claim. \\ \paragraph{Choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$:} Let us provide a few comments about the choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$ from equations~\eqref{EqnDefnR} and~\eqref{EqnRadChoice}. The quality of our bounds depends on the size of the constraint set $\PopulationFeasibleSet{\policyicy}$, which is controlled by the constraint level $\sqrt{\frac{\ensuremath{\rho}}{n}}$. Consequently, our results are tightest when $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ is as small as possible. Note that $\ensuremath{\rho}$ is a decreasing function of $\epsilon$, so that in order to minimize it, we would like to choose $\epsilon$ as large as possible subject to the constraint~\eqref{EqnRadChoice}(i). Ignoring the entropy integral term in equation~\eqref{EqnDefnR} for the moment---see below for some comments on it---these considerations lead to \begin{align} n \epsilon^2 \asymp \log N_\epsilon(\TestFunctionClass{}) + \log N_\epsilon(\Qclass{}) + \log N_\epsilon(\Pi). \end{align} This type of relation for the choice of $\epsilon$ is well known in non-parametric statistics (e.g., see Chapters 13--15 in the book~\cite{wainwright2019high} and references therein).
Moreover, setting $\ensuremath{\lambda} \asymp \epsilon^2$ as in equation~\eqref{EqnRadChoice}(ii) is often the correct scale of regularization. \paragraph{Key technical steps in the proof:} It is worthwhile making a few comments about the structure of the proof so as to clarify the connections to \Cref{prop:Deterministic} along with the weak formulation that underlies our methods. Recall that \Cref{prop:Deterministic} requires the empirical $\EmpiricalFeasibleSet{\policyicy}$ and population sets $\PopulationFeasibleSet{\policyicy}$ to satisfy the sandwich relation~\eqref{EqnSandwich}. In order to prove that this condition holds with high probability, we need to establish uniform control over the family of random variables \begin{align} \label{EqnKeyFamily} \frac{ \big| \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big|}{\TestNormaRegularizerEmp{}}, \qquad \mbox{as indexed by the triple $(f, Q, \policyicy)$.} \end{align} Note that the differences in the numerator of these variables correspond to moving from the empirical constraints on $Q$-functions that are enforced using the TD errors, to the population constraints that involve the Bellman error function. Uniform control of the family~\eqref{EqnKeyFamily}, along with uniform control of the differences $\|f\|_n - \|f\|_\mu$ over $f$, allows us to relate the empirical and population sets, since the associated constraints are obtained by shifting from the empirical inner products $\inprod{\cdot}{\cdot}_n$ to the reference inner products $\inprod{\cdot}{\cdot}_\mu$. A simple discretization argument allows us to control the differences uniformly in $(Q, \policyicy)$, as reflected by the metric entropies appearing in our definition~\eqref{EqnTheoremChoice}. Deriving uniform bounds over test functions $f$---due to the self-normalizing nature of the constraints---requires a more delicate argument. More precisely, in order to obtain optimal results for non-parametric problems (see Corollary~\ref{cor:alpha} to follow), we need to localize the empirical process at a scale $\epsilon$, and derive bounds on the localized increments. This portion of the argument leads to the entropy integral---which is localized to the interval $[\epsilon^2, \epsilon]$---in our definition~\eqref{EqnDefnR} of the radius function. \paragraph{Intuition from the on-policy setting:} In order to gain intuition for the statistical meaning of the guarantees in~\cref{thm:NewPolicyEvaluation}, it is worthwhile understanding the implications in a rather special case---namely, the simpler on-policy setting, where the discounted occupation measure induced by the target policy $\policyicy$ coincides with the dataset distribution $\mu$. Let us consider the case in which the identity function $\1$ belongs to the test class $\TestFunctionClass{\policyicy}$. Under these conditions, we can write \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}| & \stackrel{(i)}{=} \max_{Q \in \PopulationFeasibleSet{\policyicy}}| \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}| \; \stackrel{(ii)}{\leq} \sqrt{1 + \ensuremath{\lambda}} \; \sqrt{\frac{\ensuremath{\rho}}{n}}, \end{align*} where equality (i) follows from the on-policy assumption, and step (ii) follows from the definition of the set $\PopulationFeasibleSet{\policyicy}$, along with the condition that $\1 \in \TestFunctionClass{\policyicy}$.
Consequently, in the on-policy setting, the width bound~\eqref{EqnWidthBound} ensures that \begin{align} \label{EqnOnPolicy} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{\frac{ \ensuremath{\rho}}{n}}. \end{align} In this simple case, we see that the confidence interval scales as $\sqrt{\ensuremath{\rho}/n}$, where the quantity $\ensuremath{\rho}$ is related to the metric entropy via equation~\eqref{EqnDefnR}. In the more general off-policy setting, the bound involves this term, along with additional terms that reflect the cost of off-policy data. We discuss these issues in more detail in \cref{sec:Applications}. Before doing so, however, it is useful to derive some specific corollaries that show the form of $\ensuremath{\rho}$ under particular assumptions on the underlying function classes, which we now do. \subsubsection{Some corollaries} \label{SecCorollaries} Theorem~\ref{thm:NewPolicyEvaluation} applies generally to triples of function classes $\InputFunctionalSpace$, and the statistical error $\sqrt{\frac{\ensuremath{\rho}(\epsilon, \delta)}{n}}$ depends on the metric entropies of these function classes via the definition~\eqref{EqnDefnR} of $\ensuremath{\rho}(\epsilon, \delta)$, and the choices~\eqref{EqnRadChoice}. As shown in this section, if we make particular assumptions about the metric entropies, then we can derive more concrete guarantees. \paragraph{Parametric and finite VC classes:} One form of metric entropy, typical for a relatively simple function class $\ensuremath{\mathcal{G}}$ (such as those with finite VC dimension), scales as \begin{align} \label{EqnPolyMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp d \; \log \big(\frac{1}{\epsilon} \big), \end{align} for some dimensionality parameter $d$. For instance, bounds of this type hold for linear function classes with $d$ parameters, and for finite VC classes (with $d$ proportional to the VC dimension); see Chapter 5 of the book~\cite{wainwright2019high} for more details. \begin{corollary} \label{cor:poly} Suppose each class of the triple $\InputFunctionalSpace$ has metric entropy that is at most polynomial~\eqref{EqnPolyMetric} of order $d$. Then for a sample size $n \geq 2 d$, the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = d/n$ and \begin{align} \label{EqnPolyRchoice} \tilde{\ensuremath{\rho}}\big( \sqrt{\frac{d}{n}}, \delta \big) & \defeq c \; \Big \{ d \; \log \big( \frac{n}{d} \big) + \log \big(\frac{n}{\delta} \big) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \begin{proof} Our strategy is to upper bound the radius $\ensuremath{\rho}$ from equation~\eqref{EqnDefnR}, and then show that this upper bound $\tilde{\ensuremath{\rho}}$ satisfies the conditions~\eqref{EqnRadChoice} for the specified choice of $\epsilon^2$. We first control the entropy integrals involving $\TestFunctionClass{}$. We have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \sqrt{\frac{d}{n}} \; \int_{0}^{\epsilon} \sqrt{\log(1/u)} du \; = \; \epsilon \sqrt{\frac{d}{n}} \; \int_0^1 \sqrt{\log(1/(\epsilon t))} dt \; \leq \; c \epsilon \log(1/\epsilon) \sqrt{\frac{d}{n}}. \end{align*} Similarly, we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \epsilon \frac{d}{n} \Big \{ \int_{\epsilon}^{1} \log(1/t) dt + \log(1/\epsilon) \Big \} \; \leq \; c \, \epsilon \log(1/\epsilon) \frac{d}{n}.
\end{align*} Finally, for terms not involving entropy integrals, we have \begin{align*} \max \Big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \Big\} & \leq c \frac{d}{n} \log(1/\epsilon). \end{align*} Setting $\epsilon^2 = d/n$, we see that the required conditions~\eqref{EqnRadChoice} hold with the specified choice~\eqref{EqnPolyRchoice} of $\tilde{\ensuremath{\rho}}$. \end{proof} \paragraph{Richer function classes:} In the preceding paragraph, the metric entropy scaled logarithmically in the inverse precision $1/\epsilon$. For other (richer) function classes, the metric entropy exhibits a polynomial scaling in the inverse precision, with an exponent $\alpha > 0$ that controls the complexity. More precisely, we consider classes of the form \begin{align} \label{EqnAlphaMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp \Big(\frac{1}{\epsilon} \Big)^\alpha. \end{align} For example, the class of Lipschitz functions in dimension $d$ has this type of metric entropy with $\alpha = d$. More generally, for Sobolev spaces of functions that have $s$ derivatives (and the $s^{th}$-derivative is Lipschitz), we encounter metric entropies of this type with $\alpha = d/s$. See Chapter 5 of the book~\cite{wainwright2019high} for further background. \begin{corollary} \label{cor:alpha} Suppose that each function class $\InputFunctionalSpace$ has metric entropy with at most \mbox{$\alpha$-scaling~\eqref{EqnAlphaMetric}} for some $\alpha \in (0,2)$. Then the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = (1/n)^{\frac{2}{2 + \alpha}}$, and \begin{align} \label{EqnAlphaRchoice} \tilde{\ensuremath{\rho}}\big((1/n)^{\frac{1}{2 + \alpha}}, \delta \big) & = c \; \Big \{ n^{\frac{\alpha}{2 + \alpha}} + \log(n/\delta) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \noindent We note that for standard regression problems over classes with $\alpha$-metric entropy, the rate $(1/n)^{\frac{2}{2 + \alpha}}$ is well known to be minimax optimal (e.g., see Chapter 15 in the book~\cite{wainwright2019high}, as well as references therein). \begin{proof} We start by controlling the terms involving entropy integrals. In particular, we have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \frac{c}{\sqrt{n}} u^{1 - \frac{\alpha}{2}} \Big |_{0}^\epsilon \; = \; \frac{c}{\sqrt{n}} \epsilon^{1 - \frac{\alpha}{2}}. \end{align*} Requiring that this term is of order $\epsilon^2$ amounts to enforcing that $\epsilon^{1 + \frac{\alpha}{2}} \asymp (1/\sqrt{n})$, or equivalently that $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$. If $\alpha \in (0,1]$, then the second entropy integral converges and is of lower order. Otherwise, if $\alpha \in (1,2)$, then we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \frac{c}{n} \int_{\epsilon^2}^\epsilon (1/u)^{\alpha} du \leq \frac{c}{n} (\epsilon^2)^{1 - \alpha}. \end{align*} Hence the requirement that this term is bounded by $\epsilon^2$ is equivalent to $\epsilon^{2 \alpha} \succsim (1/n)$, or $\epsilon^2 \succsim (1/n)^{1/\alpha}$. When $\alpha \in (1,2)$, we have $\frac{1}{\alpha} > \frac{2}{2 + \alpha}$, so that this condition is milder than our first condition.
Finally, we have $ \max \big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \big\} \leq \frac{c}{n} \big(1/\epsilon)^{\alpha}$, and requiring that this term scales as $\epsilon^2$ amounts to requiring that $\epsilon^{2 +\alpha} \asymp (1/n)$, or equivalently $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$, as before. \end{proof} \section{Main Proofs} \label{sec:AnalysisProofs} This section is devoted to the proofs of our guarantees for general function classes---namely, \cref{prop:Deterministic} that holds in a deterministic manner, and \cref{thm:NewPolicyEvaluation} that gives high probability bounds under a particular sampling model. \subsection{Proof of \cref{prop:Deterministic}} \label{SecProofPropDeterministic} Our proof makes use of an elementary simulation lemma, which we state here: \begin{lemma}[Simulation lemma] \label{lem:Simulation} For any policy $\policyicy$ and function $Q$, we have \begin{align} \Estart{Q-\Qpi{\policyicy}}{,\policyicy} & = \frac{\PolComplexGen{\policyicy}}{1-\discount} \end{align} \end{lemma} \noindent See \cref{SecProofLemSimulation} for the proof of this claim. \subsubsection{Proof of policy evaluation claims} First of all, we have the elementary bounds \begin{align*} \abs{\VminEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \min_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }, \quad \mbox{and} \\ \abs{\VmaxEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }. \end{align*} Consequently, in order to prove the bound~\eqref{EqnWidthBound} it suffices to upper bound the right-hand side common in the two above displays. Since $\EmpiricalFeasibleSet{\policyicy} \subseteq \PopulationFeasibleSet{\policyicy}$, we have the upper bound \begin{align*} \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } & \leq \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \\ & = \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{\ensuremath{\mathbb{E}}ecti{[Q(S ,\policyicy) - \Qpi{\policyicy}(S ,\policyicy)]} {S \sim \nu_{\text{start}}}} \\ & \overset{\text{(i)}}{=} \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policyicy}} \frac{\PolComplexGen{\policyicy}}{1-\discount} \end{align*} where step (i) follows from \cref{lem:Simulation}. Combined with the earlier displays, this completes the proof of the bound~\eqref{EqnWidthBound}. We now show the inclusion $[ \VminEmp{\policyicy}, \VmaxEmp{\policyicy}] \ni \Vpi{\policyicy}$ when weak realizability holds. By definition of weak realizability, there exists some $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy}$. 
In conjunction with our sandwich assumption, we are guaranteed that $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$, and consequently \begin{align*} \VminEmp{\policyicy} & = \min_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \min_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}, \quad \mbox{and} \\ \VmaxEmp{\policyicy} & = \max_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \max_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}. \end{align*} \subsubsection{Proof of policy optimization claims} We now prove the oracle inequality~\eqref{EqnOracle} on the value $\Vpi{\widetilde \pi}$ of a policy $\widetilde \pi$ that optimizes the max-min criterion. Fix an arbitrary comparator policy $\policycomp$. Starting with the inclusion $[\VminEmp{\widetilde \pi}, \VmaxEmp{\widetilde \pi}] \ni \Vpi{\widetilde \pi}$, we have \begin{align*} \Vpi{\widetilde \pi} \stackrel{(i)}{\geq} \VminEmp{\widetilde \pi} \stackrel{(ii)}{\geq} \VminEmp{\policycomp} \; = \; \Vpi{\policycomp} - \Big(\Vpi{\policycomp} - \VminEmp{\policycomp} \Big) \stackrel{(iii)}{\geq} \Vpi{\policycomp} - \frac{1}{1-\discount} \max_{Q \in \PopulationFeasibleSet{\policycomp}} \frac{|\PolComplexGen{\policycomp}|}{1-\discount}, \end{align*} where step (i) follows from the stated inclusion at the start of the argument; step (ii) follows since $\widetilde \pi$ solves the max-min program; and step (iii) follows from the bound $|\Vpi{\policycomp} - \VminEmp{\policycomp}| \leq \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policycomp}} \frac{|\PolComplexGen{\policycomp}|}{1-\discount}$, as proved in the preceding section. This lower bound holds uniformly for all comparators $\policycomp$, from which the stated claim follows. \subsection{Proof of \cref{lem:Simulation}} \label{SecProofLemSimulation} For each $\timestep = 0, 1, 2, \ldots$, let $\ensuremath{\mathbb{E}}ecti{}{\timestep}$ be the expectation over the state-action pair at timestep $\timestep$ upon starting from $\nu_{\text{start}}$, so that we have $\Estart{Q - \Qpi{\policyicy} }{,\policyicy} = \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0}$ by definition.
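As a purely illustrative aside (not part of the formal argument), the identity in \cref{lem:Simulation} can be checked numerically on a small random tabular MDP by solving the Bellman evaluation equation in closed form; all variable names below are our own choices for this sketch and are not the paper's notation.
\begin{verbatim}
# Illustrative check of the simulation lemma on a random tabular MDP.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9

P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)  # transitions P(s'|s,a)
R = rng.random((nS, nA)).reshape(-1)                             # rewards r(s,a)
pi = rng.random((nS, nA)); pi /= pi.sum(axis=1, keepdims=True)   # policy pi(a|s)
nu = rng.random(nS); nu /= nu.sum()                              # start distribution

# State-action transition matrix under pi: row (s,a), column (s',a')
P_pi = np.einsum('san,nb->sanb', P, pi).reshape(nS * nA, nS * nA)

Q_pi = np.linalg.solve(np.eye(nS * nA) - gamma * P_pi, R)        # exact Q^pi
Q = rng.random(nS * nA)                                          # arbitrary Q function
bellman_err = Q - (R + gamma * P_pi @ Q)                         # Q - T^pi(Q)

mu0 = (nu[:, None] * pi).reshape(-1)                             # law of (S_0, A_0)
d_pi = (1 - gamma) * np.linalg.solve(np.eye(nS * nA) - gamma * P_pi.T, mu0)

lhs = mu0 @ (Q - Q_pi)                     # E_{nu, pi}[Q - Q^pi]
rhs = (d_pi @ bellman_err) / (1 - gamma)   # occupancy-weighted Bellman error / (1-gamma)
assert np.isclose(lhs, rhs)
\end{verbatim}
With this sanity check in hand, we turn to the formal proof by induction.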
We claim that \begin{align} \label{EqnInduction} \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0} & = \sum_{\tau=1}^{\timestep} \discount^{\tau-1} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\tau-1} + \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy} ]}{\timestep} \qquad \mbox{for all $\timestep = 1, 2, \ldots$.} \end{align} For the base case $\timestep = 1$, we have \begin{align} \label{EqnBase} \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{0} = \ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q]}{0} + \ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} & = \ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{0} + \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}, \end{align} where we have used the definition of the Bellman evaluation operator to assert that $\ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} = \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}$. Since $Q - \BellmanEvaluation{\policyicy}Q = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$, the equality~\eqref{EqnBase} is equivalent to the claim~\eqref{EqnInduction} with $\timestep = 1$. Turning to the induction step, we now assume that the claim~\eqref{EqnInduction} holds for some $\timestep \geq 1$, and show that it holds at step $\timestep + 1$. By a similar argument, we can write \begin{align*} \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{\timestep} = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q + \BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{\timestep} & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1} \\ & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1}. \end{align*} By the induction hypothesis, equality~\eqref{EqnInduction} holds for $\timestep$, and substituting the above equality shows that it also holds at time $\timestep + 1$. Since the equivalence~\eqref{EqnInduction} holds for all $\timestep$, we can take the limit as $\timestep \rightarrow \infty$, and doing so yields the claim. \subsection{Proof of \cref{thm:NewPolicyEvaluation}} \label{SecProofNewPolicyEvaluation} The proof relies on proving a high probability bound to \cref{EqnSandwich} and then invoking \cref{prop:Deterministic} to conclude. In the statement of the theorem, we require choosing $\epsilon > 0$ to satisfy the upper bound $\epsilon^2 \precsim \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}$, and then provide an upper bound in terms of $\sqrt{\ensuremath{\rho}(\epsilon,\delta)/n}$. It is equivalent to instead choose $\epsilon$ to satisfy the lower bound $\epsilon^2 \succsim \frac{\ensuremath{\rho}(\epsilon,\delta)}{n}$, and then provide upper bounds proportional to $\epsilon$. For the purposes of the proof, the latter formulation turns out to be more convenient and we pursue it here. \\ To streamline notation, let us introduce the shorthand $\inprod{f}{\Diff{\policyicy}(Q)} \defeq \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu$. 
For each pair $(Q, \policyicy)$, we then define the random variable \begin{align*} \ZvarShort \defeq \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\Diff{\policyicy}(Q)}{}\big|}{\TestNormaRegularizerEmp{}}. \end{align*} Central to our proof of the theorem is a uniform bound on this random variable, one that holds for all pairs $(Q, \policyicy)$. In particular, our strategy is to exhibit some $\epsilon > 0$ for which, upon setting $\ensuremath{\lambda} = 4 \epsilon^2$, we have the guarantees \begin{subequations} \begin{align} \label{EqnSandwichZvar} \frac{1}{4} \leq \frac{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}{ \sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \leq 2 \qquad & \mbox{uniformly for all $f \in \TestFunctionClass{}$, and} \\ \label{EqnUniformZvar} \ZvarShort \leq \epsilon \quad & \mbox{uniformly for all $(Q, \policyicy)$,} \end{align} \end{subequations} both with probability at least $1 - \delta$. In particular, consistent with the theorem statement, we show that this claim holds if we choose $\epsilon > 0$ to satisfy the inequality \begin{align} \label{EqnOriginalChoice} \epsilon^2 & \geq \bar{c} \frac{\ensuremath{\rho}(\epsilon, \delta)}{n} \; \end{align} where $\bar{c} > 0$ is a sufficiently large (but universal) constant. Supposing that the bounds~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar} hold, let us now establish the set inclusions claimed in the theorem. \paragraph{Inclusion $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}(\epsilon)$:} Define the random variable $\ensuremath{M}_n(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ |\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}|}{\TestNormaRegularizerEmp{}}$, and observe that $Q \in \PopulationFeasibleSetInfty{\policyicy}$ implies that $\ensuremath{M}_n(Q, \policyicy) = 0$. With this definition, we have \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \stackrel{(i)}{\leq} \ensuremath{M}_n(Q, \policyicy) + \ZvarShort \; \stackrel{(ii)}{\leq} \epsilon \end{align*} where step (i) follows from the triangle inequality; and step (ii) follows since $\ensuremath{M}_n(Q, \policyicy) = 0$, and $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}. 
\paragraph{Inclusion $\EmpiricalFeasibleSet{\policyicy}(\epsilon) \subseteq \PopulationFeasibleSet{\policyicy}(4 \epsilon)$:} By the definition of $\PopulationFeasibleSet{\policyicy}(4 \epsilon)$, we need to show that \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}\big|}{\TestNormaRegularizerPop{}} \leq 4 \epsilon \qquad \mbox{for any $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Now we have \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \stackrel{(i)}{\leq} 2 \ensuremath{M}_n(Q, \policyicy) \stackrel{(ii)}{\leq} 2 \left \{ \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} + \ZvarShort \right \} \; \stackrel{(iii)}{\leq} 2 \big \{ \epsilon + \epsilon \big\} \; = \; 4 \epsilon, \end{align*} where step (i) follows from the sandwich relation~\eqref{EqnSandwichZvar}; step (ii) follows from the triangle inequality and the definition of $\ZvarShort$; and step (iii) follows since $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}, and \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \leq \epsilon, \qquad \mbox{using the inclusion $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Consequently, the remainder of our proof is devoted to establishing the claims~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar}. In doing so, we make repeated use of some Bernstein bounds, stated in terms of the shorthand $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$. \begin{lemma} \label{LemBernsteinBound} There is a universal constant $c$ such that each of the following statements holds with probability at least $1 - \delta$. For any $f$, we have \begin{subequations} \begin{align} \label{EqnBernsteinFsquare} \Big| \|f\|_n^2 - \|f\|_\mu^2 \Big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \Big \}, \end{align} and for any $(Q, \policyicy)$ and any function $f$, we have \begin{align} \label{EqnBernsteinBound} \big|\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \|f\|_\infty \ensuremath{\Psi_\numobs}(\delta) \Big \}. \end{align} \end{subequations} \end{lemma} \noindent These bounds follow by identifying a martingale difference sequence, and applying a form of Bernstein's inequality tailored to the martingale setting. See Section~\ref{SecProofLemBernsteinBound} for the details. \subsection{Proof of the sandwich relation~\eqref{EqnSandwichZvar}} We claim that (modulo the choice of constants) it suffices to show that \begin{align} \label{EqnCleanSandwich} \Big| \|f\|_n - \|f\|_\mu \Big| & \leq \epsilon \qquad \mbox{uniformly for all $f \in \TestFunctionClass{}$.} \end{align}
Indeed, when this bound holds, we have \begin{align*} \|f\|_n + 2 \epsilon \leq \|f\|_\mu + 3 \epsilon \leq \frac{3}{2} \big\{ \|f\|_\mu + 2\epsilon \big\}, \quad \mbox{and} \quad \|f\|_n + 2 \epsilon \geq \|f\|_\mu + \epsilon \geq \frac{1}{2} \big \{ \|f\|_\mu + 2 \epsilon \big\}, \end{align*} so that $\frac{\|f\|_\mu + 2 \epsilon}{\|f \|_n + 2 \epsilon} \in \big[ \frac{1}{2}, \frac{3}{2} \big]$. To relate this statement to the claimed sandwich, observe the inclusion $\frac{\|f\| + 2 \epsilon}{\sqrt{\|f\|^2 + 4 \epsilon^2}} \in [1, \sqrt{2}]$, where $\|f\|$ can be either $\|f\|_n$ or $\|f\|_\mu$. Combining this fact with our previous bound, we see that $\frac{\sqrt{\|f\|_n^2 + 4 \epsilon^2}}{\sqrt{\|f\|_\mu^2 + 4 \epsilon^2}} \in \Big[ \frac{1}{\sqrt{2}} \frac{1}{2}, \frac{3 \sqrt{2}}{2} \Big] \subset \big[\frac{1}{4}, 3 \big]$, as claimed. \\ The remainder of our analysis is focused on proving the bound~\eqref{EqnCleanSandwich}. Defining the random variable $Y_n(\ensuremath{f}) = \big| \|f\|_n - \|f\|_\mu \big|$, we need to establish a high probability bound on $\sup_{f \in \TestFunctionClass{}} Y_n(\ensuremath{f})$. Let $\{f^1, \ldots, f^N \}$ be an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm. For any $f \in \TestFunctionClass{}$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$, whence \begin{align*} Y_n(f) \leq Y_n(f^j) + \big | Y_n(f^j) - Y_n(f) \big| & \stackrel{(i)}{\leq} Y_n(f^j) + \big| \|f^j\|_n - \|f\|_n \big| + \big| \|f^j\|_\mu - \|f\|_\mu \big| \\ & \stackrel{(ii)}{\leq} Y_n(f^j) + \|f^j - f\|_n + \|f^j - f\|_\mu \\ & \stackrel{(iii)}{\leq} Y_n(f^j) + 2 \epsilon, \end{align*} where steps (i) and (ii) follow from the triangle inequality; and step (iii) follows from the inequality $\max \{ \|f^j - f\|_n, \|f^j - f\|_\mu \} \leq \|f^j - f\|_\infty \leq \epsilon$. Thus, we have reduced the problem to bounding a finite maximum. Note that if $\max \{ \|f^j\|_n, \|f^j\|_\mu \} \leq \epsilon$, then we have $Y_n(f^j) \leq 2 \epsilon$ by the triangle inequality. Otherwise, we may assume that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$. With probability at least $1 - \delta$, we have \begin{align*} \Big| \|f^j\|_n - \|f^j\|_\mu \Big| = \frac{ \Big| \|f^j\|_n^2 - \|f^j\|_\mu^2 \Big|}{\|f^j\|_n + \|f^j\|_\mu} & \stackrel{(i)}{\leq} \frac{ c \big \{ \|f^j\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \big \}}{\|f^j\|_\mu + \|f^j\|_n} \\ & \stackrel{(ii)}{\leq} c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \frac{\ensuremath{\Psi_\numobs}(\delta)}{\epsilon} \Big \}, \end{align*} where step (i) follows from the Bernstein bound~\eqref{EqnBernsteinFsquare} from Lemma~\ref{LemBernsteinBound}, and step (ii) uses the fact that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$. Taking a union bound over all $N$ elements in the cover and replacing $\delta$ with $\delta/N$, we have \begin{align*} \max_{j \in [N]} Y_n(f^j) & \leq c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \frac{\ensuremath{\Psi_\numobs}(\delta/N)}{\epsilon} \Big \} \end{align*} with probability at least $1 - \delta$. Recalling that $N = N_\epsilon(\TestFunctionClass{})$, our choice~\eqref{EqnOriginalChoice} of $\epsilon$ ensures that $\sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} \leq c \; \epsilon$ for some universal constant $c$. Putting together the pieces (and increasing the constant $\bar{c}$ in the choice~\eqref{EqnOriginalChoice} of $\epsilon$ as needed) yields the claim.
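As a quick numerical sanity check of the elementary implication used above (illustrative only, and assuming nothing beyond $\big| \|f\|_n - \|f\|_\mu \big| \leq \epsilon$ with both norms nonnegative), the following sketch samples many admissible pairs and confirms that the ratio stays within the stated range; the variable names are our own.
\begin{verbatim}
# Sanity check: if a, b >= 0 and |a - b| <= eps, then
# sqrt(a^2 + 4 eps^2) / sqrt(b^2 + 4 eps^2) lies in [1/4, 3].
import numpy as np

rng = np.random.default_rng(1)
eps = 0.1
b = rng.uniform(0.0, 5.0, size=100_000)                          # plays the role of ||f||_mu
a = np.clip(b + rng.uniform(-eps, eps, size=b.size), 0.0, None)  # plays the role of ||f||_n
ratio = np.sqrt(a**2 + 4 * eps**2) / np.sqrt(b**2 + 4 * eps**2)
assert 0.25 <= ratio.min() and ratio.max() <= 3.0
print(ratio.min(), ratio.max())
\end{verbatim}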
\subsection{Proof of the uniform upper bound~\eqref{EqnUniformZvar}} \label{SecUniProof} We need to establish an upper bound on $\ZvarShort$ that holds uniformly for all $(Q, \policyicy)$. Our first step is to prove a high probability bound for a fixed pair. We then apply a standard discretization argument to make it uniform in the pair. Note that we can write $\ZvarShort = \sup_{f \in \TestFunctionClass{}} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}$, where we have defined $V_n(f) \defeq |\inprod{f}{\Diff{\policyicy}(Q)}|$. Our first lemma provides a uniform bound on the latter random variables: \begin{lemma} \label{LemVbound} Suppose that $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$. Then we have \begin{align} \label{EqnVbound} V_n(f) & \leq c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \} \qquad \mbox{for all $f \in \TestFunctionClass{}$} \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent See \cref{SecProofLemVbound} for the proof of this claim. \\ We claim that the bound~\eqref{EqnVbound} implies that, for any fixed pair $(Q, \policyicy)$, we have \begin{align*} Z_n(Q, \policyicy) \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align*} Indeed, when Lemma~\ref{LemVbound} holds, for any $f \in \TestFunctionClass{}$, we can write \begin{align*} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} = \frac{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \; \frac{V_n(f)}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \stackrel{(i)}{\leq} \; 3 \; \frac{c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \}}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \; \stackrel{(ii)}{\leq} \; c' \epsilon, \end{align*} where step (i) uses the sandwich relation~\eqref{EqnSandwichZvar}, along with the bound~\eqref{EqnVbound}; and step (ii) follows given the choice $\ensuremath{\lambda} = 4 \epsilon^2$. We have thus proved that for any fixed $(Q, \policyicy)$ and $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$, we have \begin{align} \label{EqnFixZvarBound} \ZvarShort & \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align} Our next step is to upgrade this bound to one that is uniform over all pairs $(Q, \policyicy)$. We do so via a discretization argument: let $\{Q^j\}_{j=1}^J$ and $\{\policyicy^k\}_{k=1}^K$ be $\epsilon$-coverings of $\Qclass{}$ and $\PolicyClass$, respectively. \begin{lemma} \label{LemDiscretization} We have the upper bound \begin{align} \sup_{Q, \policyicy} \ZvarShort & \leq \max_{(j,k) \in [J] \times [K]} \Zvar{Q^j}{\policyicy^k} + 4 \epsilon. \end{align} \end{lemma} \noindent See Section~\ref{SecProofLemDiscretization} for the proof of this claim. If we replace $\delta$ with $\delta/(J K)$, then we are guaranteed that the bound~\eqref{EqnFixZvarBound} holds uniformly over the family $\{ Q^j \}_{j=1}^J \times \{ \policyicy^k \}_{k=1}^K$. Recalling that $J = N_\epsilon(\Qclass{})$ and $K = N_\epsilon(\Pi)$, we conclude that for any $\epsilon$ satisfying the inequality~\eqref{EqnOriginalChoice}, we have $\sup_{Q, \policyicy} \ZvarShort \leq \ensuremath{\tilde{c}} \epsilon$ with probability at least $1 - \delta$. (Note that by suitably scaling up $\epsilon$ via the choice of constant $\bar{c}$ in the bound~\eqref{EqnOriginalChoice}, we can arrange for $\ensuremath{\tilde{c}} = 1$, as in the stated claim.)
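For intuition about the quantity $\ZvarShort$ controlled in this subsection, the following sketch computes its empirical analogue for a finite test-function class represented by its evaluations on the sample; the finite class, the toy data, and all names are our own simplification and are not part of the formal development, where $\TestFunctionClass{}$ need not be finite.
\begin{verbatim}
# Illustrative computation of the empirically regularized constraint value
#     sup_{f in F} |(1/n) sum_i f(x_i) * td_i| / sqrt(||f||_n^2 + lam)
# for a finite test-function class given by its evaluations on the sample.
import numpy as np

def constraint_value(F_evals, td_errors, lam):
    """F_evals: (m, n) array of f(x_i); td_errors: (n,) TD errors; lam: regularizer."""
    n = td_errors.shape[0]
    inner = F_evals @ td_errors / n                  # <f, delta>_n for each f
    norms_sq = (F_evals ** 2).mean(axis=1)           # ||f||_n^2 for each f
    return float(np.max(np.abs(inner) / np.sqrt(norms_sq + lam)))

rng = np.random.default_rng(2)
n, m, eps = 200, 10, 0.05
F_evals = rng.normal(size=(m, n))
td = rng.normal(scale=0.1, size=n)
print(constraint_value(F_evals, td, lam=4 * eps**2))  # lambda = 4 * eps^2, as in the text
\end{verbatim}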
\subsection{Proofs of supporting lemmas} In this section, we collect together the proofs of~\cref{LemVbound,LemDiscretization}, which were stated and used in~\Cref{SecUniProof}. \subsubsection{Proof of~\cref{LemVbound}} \label{SecProofLemVbound} We first localize the problem to the class $\ensuremath{\mathcal{F}}(\epsilon) = \{f \in \TestFunctionClass{} \mid \|f\|_\mu \leq \epsilon \}$. In particular, if there exists some $\tilde{f} \in \TestFunctionClass{}$ that violates~\eqref{EqnVbound}, then the rescaled function $f = \epsilon \tilde{f}/\|\tilde{f}\|_\mu$ belongs to $\ensuremath{\mathcal{F}}(\epsilon)$, and satisfies $V_n(f) \geq c \epsilon^2$. Consequently, it suffices to show that $V_n(f) \leq c \epsilon^2$ for all $f \in \ensuremath{\mathcal{F}}(\epsilon)$. Choose an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm with $N = N_\epsilon(\TestFunctionClass{})$ elements. Using this cover, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$. Thus, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can write \begin{align} \label{EqnOriginalInequality} V_n(f) \leq V_n(f^j) + V_n(f - f^j) \; \; \leq \underbrace{V_n(f^j)}_{T_1} + \underbrace{\sup_{g \in \ensuremath{\mathcal{G}}(\epsilon)} V_n(g)}_{T_2}, \end{align} where $\ensuremath{\mathcal{G}}(\epsilon) \defeq \{ f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{}, \|f_1 - f_2 \|_\infty \leq \epsilon \}$. We bound each of these two terms in turn. In particular, we show that each of $T_1$ and $T_2$ is upper bounded by $c \epsilon^2$ with high probability. \paragraph{Bounding $T_1$:} From the Bernstein bound~\eqref{EqnBernsteinBound}, we have \begin{align*} V_n(f^k) & \leq c \big \{ \|f^k\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \|f^k\|_\infty \ensuremath{\Psi_\numobs}(\delta/N) \big \} \qquad \mbox{for all $k \in [N]$} \end{align*} with probability at least $1 - \delta$. Now for the particular $f^j$ chosen to approximate $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we have \begin{align*} \|f^j\|_\mu & \leq \|f^j - f\|_\mu + \|f\|_\mu \leq 2 \epsilon, \end{align*} where the inequality follows since $\|f^j - f\|_\mu \leq \|f^j - f\|_\infty \leq \epsilon$, and $\|f\|_\mu \leq \epsilon$. Consequently, we conclude that \begin{align*} T_1 & \leq c \Big \{ 2 \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \ensuremath{\Psi_\numobs}(\delta/N) \Big \} \; \leq \; c' \epsilon^2 \qquad \mbox{with probability at least $1 - \delta$.} \end{align*} Here the final inequality follows from our choice of $\epsilon$. \paragraph{Bounding $T_2$:} Define $\ensuremath{\mathcal{G}} \defeq \{f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{} \}$. We need to bound a supremum of the process $\{ V_n(g), g \in \ensuremath{\mathcal{G}} \}$ over the subset $\ensuremath{\mathcal{G}}(\epsilon)$. From the Bernstein bound~\eqref{EqnBernsteinBound}, the increments $V_n(g_1) - V_n(g_2)$ of this process are sub-Gaussian with parameter $\|g_1 - g_2\|_\mu \leq \|g_1 - g_2\|_\infty$, and sub-exponential with parameter $\|g_1 - g_2\|_\infty$. Therefore, we can apply a chaining argument that uses the metric entropy $\log N_t(\ensuremath{\mathcal{G}})$ in the supremum norm. Moreover, we can terminate the chaining at $2 \epsilon$, because we are taking the supremum over the subset $\ensuremath{\mathcal{G}}(\epsilon)$, and it has sup-norm diameter at most $2 \epsilon$.
Moreover, the chaining can be truncated from below at scale $2 \epsilon^2$, since our goal is to prove an upper bound of this order. Then, by using high probability bounds for the suprema of empirical processes (e.g., Theorem 5.36 in the book~\cite{wainwright2019high}), we have \begin{align*} T_2 \leq c_1 \; \int_{2 \epsilon^2}^{2 \epsilon} \ensuremath{\phi} \big( \frac{\log N_t(\ensuremath{\mathcal{G}})}{n} \big) dt + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} + 2 \epsilon^2 \end{align*} with probability at least $1 - \delta$. (Here the reader should recall our shorthand $\ensuremath{\phi}(s) = \max \{s, \sqrt{s} \}$.) Since $\ensuremath{\mathcal{G}}$ consists of differences from $\TestFunctionClass{}$, we have the upper bound $\log N_t(\ensuremath{\mathcal{G}}) \leq 2 \log N_{t/2}(\TestFunctionClass{})$, and hence (after making the change of variable $u = t/2$ in the integrals) \begin{align*} T_2 \leq c'_1 \int_{\epsilon^2}^{ \epsilon} \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} \; \leq \; \ensuremath{\tilde{c}} \epsilon^2, \end{align*} where the last inequality follows from our choice of $\epsilon$. \subsubsection{Proof of~\cref{LemDiscretization}} \label{SecProofLemDiscretization} By our choice of the $\epsilon$-covers, for any $(Q, \policyicy)$, there is a pair $(Q^j, \policyicy^k)$ such that \begin{align*} \|Q^j - Q\|_\infty \leq \epsilon, \quad \mbox{and} \quad \|\policyicy^k - \policyicy\|_{\infty,1} = \sup_{\state} \|\policyicy^k(\cdot \mid \state) - \policyicy(\cdot \mid \state) \|_1 \leq \epsilon. \end{align*} Using this pair, an application of the triangle inequality yields \begin{align*} \big| \Zvar{Q}{\policyicy} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \underbrace{\big| \Zvar{Q}{\policyicy} - \Zvar{Q}{\policyicy^k} \big|}_{T_1} + \underbrace{\big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big|}_{T_2}. \end{align*} We bound each of these terms in turn, in particular proving that $T_1 + T_2 \leq 24 \epsilon$. Putting together the pieces yields the bound stated in the lemma. \paragraph{Bounding $T_2$:} From the definition of $\ensuremath{Z_\numobs}$, we have \begin{align*} T_2 = \big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \sup_{f \in \TestFunctionClass{}} \frac{\big| \smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}\big|}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}. \end{align*} Now another application of the triangle inequality yields \begin{align*} |\smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}| & \leq |\smlinprod{f}{\TDError{(Q- Q^j)}{\policyicy^k}{}}_n| + |\smlinprod{f}{\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)}_\mu| \\ & \leq \|f\|_n \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_n + \|f\|_\mu \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\mu \\ & \leq \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty + \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty \Big \}, \end{align*} where the second inequality follows from the Cauchy--Schwarz inequality. Now in terms of the shorthand \mbox{$\Delta \defeq Q - Q^j$,} we have \begin{subequations} \begin{align} \label{EqnCoffeeBell} \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty & = \sup_{\psa} \Big| \Delta \psa - \discount \E_{\successorstate\sim\Pro\psa} \big[ \Delta(\successorstate,\policyicy^k) \big] \Big| \leq 2 \|\Delta\|_\infty \leq 2 \epsilon.
\end{align} An entirely analogous argument yields \begin{align} \label{EqnCoffeeTD} \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty \leq 2 \epsilon. \end{align} \end{subequations} Conditioned on the sandwich relation~\eqref{EqnSandwichZvar}, we have $\sup_{f \in \TestFunctionClass{}} \frac{\max \{ \|f\|_n, \|f\|_\mu \}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \leq 4$. Combining this bound with inequalities~\eqref{EqnCoffeeBell} and~\eqref{EqnCoffeeTD}, we have shown that $T_2 \leq 4 \big \{2 \epsilon + 2 \epsilon \big\} = 16 \epsilon$. \paragraph{Bounding $T_1$:} In this case, a similar argument yields \begin{align*} |\smlinprod{f}{(\Diff{\policyicy} - \Diff{\policyicy^k})(Q)}| & \leq \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n + \|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu \Big\}. \end{align*} Now we have \begin{align*} \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n & \leq \max_{i = 1, \ldots, n} \Big| \sum_{\action'} \big(\policyicy(\action' \mid \state_i) - \policyicy^k(\action' \mid \state_i) \big) Q(\state^+_i, \action') \Big| \\ & \leq \max_{\state} \sum_{\action'} |\policyicy(\action' \mid \state) - \policyicy^k(\action' \mid \state)| \; \|Q\|_\infty \\ & \leq \epsilon. \end{align*} A similar argument yields that $\|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu \leq \epsilon$, and arguing as before, we conclude that $T_1 \leq 4 \{\epsilon + \epsilon \} = 8 \epsilon$. \subsubsection{Proof of Lemma~\ref{LemBernsteinBound}} \label{SecProofLemBernsteinBound} Our proof of this claim makes use of the following known Bernstein bound for martingale differences (cf. Theorem 1 in the paper~\cite{beygelzimer2011contextual}). Recall the shorthand notation $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$. \begin{lemma}[Bernstein's Inequality for Martingales] \label{lem:Bernstein} Let $\{X_t\}_{t \geq 1}$ be a martingale difference sequence with respect to the filtration $\{ \mathcal F_t \}_{t \geq 1}$. Suppose that $|X_t| \leq 1$ almost surely, and let $\E_t$ denote expectation conditional on $\mathcal F_t$. Then for all $\delta \in (0,1)$, we have \begin{align} \Big| \frac{1}{n} \sum_{t=1}^n X_t \Big| & \leq 2 \Big[ \Big(\frac{1}{n} \sum_{t=1}^n \E_t X^2_t \Big) \ensuremath{\Psi_\numobs}(2 \delta) \Big]^{1/2} + 2 \ensuremath{\Psi_\numobs}(2 \delta) \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent With this result in place, we divide our proof into two parts, corresponding to the two claims~\eqref{EqnBernsteinBound} and~\eqref{EqnBernsteinFsquare} stated in~\cref{LemBernsteinBound}. \paragraph{Proof of the bound~\eqref{EqnBernsteinBound}:} Recall that at step $i$, the triple $(\state,\action,\identifier)$ is drawn according to a conditional distribution $\mu_i(\cdot \mid \mathcal F_i)$. Similarly, we let $d_i$ denote the distribution of $\sarsi{}$ conditioned on the filtration $\mathcal F_i$. Note that $\mu_i$ is obtained from $d_i$ by marginalizing out the pair $(\reward, \successorstate)$. Moreover, by the tower property of expectation, the Bellman error is the conditional expectation of the TD error.
Using these facts, we have the equivalence \begin{align*} \innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}} {d_{i}} & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}} {d_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) \ensuremath{\mathbb{E}}ecti{[\TDErrorDefCompact{Q}{\policyicy}{}]} {\substack{\reward \sim R\psa, \successorstate\sim\Pro\psa}}\big\}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) [\ensuremath{\mathcal{B}}orDefCompact{Q}{\policyicy}{}]\big\}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu_{i}}. \end{align*} As a consequence, we can write $\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu = \frac{1}{n} \sum_{i=1}^n \ensuremath{W}_i$, where \begin{align*} \ensuremath{W}_i & \defeq \TestFunctionDefCompact{}{i} [\TDErrorDefCompact{Q}{\policyicy}{i}] - \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}}{d_{i}} \end{align*} defines a martingale difference sequence (MDS). Thus, we can prove the claim by applying a Bernstein martingale inequality. Since $\|r\|_\infty \leq 1$ and $\|Q\|_\infty \leq 1$ by assumption, we have $\|\ensuremath{W}_i\|_\infty \leq 3 \|f\|_\infty$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{d_i} [\ensuremath{W}_i^2] & \leq 9 \; \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state_{i},\action_{i}, o_{i})] \; = \; 9 \|f\|_\mu^2. \end{align*} Consequently, the claimed bound~\eqref{EqnBernsteinBound} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}. \paragraph{Proof of the bound~\eqref{EqnBernsteinFsquare}:} In this case, we have the additive decomposition \begin{align*} \|\ensuremath{f}\|_n^2 - \|\ensuremath{f}\|_\mu^2 & = \frac{1}{n} \sum_{i=1}^n \Big \{ \underbrace{\ensuremath{f}^2(\state_{i},\action_{i},o_{i}) - \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state, \action, o)]}_{\ensuremath{\martone'}_i} \Big\}, \end{align*} where $\{\ensuremath{\martone'}_i\}_{i=1}^n$ again defines a martingale difference sequence. Note that $\|\ensuremath{\martone'}_i\|_\infty \leq 2 \|\ensuremath{f}\|_\infty^2 \leq 2$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} [(\ensuremath{\martone'}_i)^2] & \stackrel{(i)}{\leq} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[ \ensuremath{f}^4(S, A, \ensuremath{O}) \big] \; \leq \; \|\ensuremath{f}\|_\infty^2 \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[\ensuremath{f}^2(S, A, \ensuremath{O}) \big] \; \stackrel{(ii)}{\leq} \; \|\ensuremath{f}\|_\mu^2, \end{align*} where step (i) uses the fact that the variance of $\ensuremath{f}^2$ is at most the fourth moment, and step (ii) uses the bound $\|\ensuremath{f}\|_\infty \leq 1$. Consequently, the claimed bound~\eqref{EqnBernsteinFsquare} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}. \section{Proofs for \texorpdfstring{\cref{sec:Applications}}{} and \cref{sec:appConc}} In this section, we collect together the proofs of results stated without proof in Section~\ref{sec:Applications} and \cref{sec:appConc}.
\subsection{Proof of \cref{prop:LikeRatio}} \label{sec:LikeRatio} \begin{proof} Since $\TestFunction{}^* \in \TestFunctionClass{\policyicy}$, we are guaranteed that the corresponding constraint must hold. It reads as \begin{align*} |\ensuremath{\mathbb{E}}ecti{\frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2 = \frac{1}{\scaling{\policyicy}^2} | \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} |^2 & \overset{(iii)}{\leq} \big( \frac{1}{\scaling{\policyicy}^2} \munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}. \end{align*} where step (iii) follows from the definition of population constraint. Re-arranging yields the upper bound \begin{align*} \frac{|\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} & \leq \frac{\big(\munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \scalingsq{\policyicy}\ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} \; = \; \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}}, \end{align*} where the final step uses the fact that \begin{align*} \norm{\frac{\DistributionOfPolicy{\policyicy}}{\mu}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}^2\PSA}{\mu^2\PSA} }{\mu} = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA} }{\policyicy} \end{align*} Thus, we have established the bound (i) in our claim~\eqref{EqnLikeRatioBound}. The upper bound (ii) follows immediately since $\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} }{\policyicy} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} \leq \scaling{\policyicy}$. \end{proof} \subsection{Proof of \texorpdfstring{\cref{lem:PredictionError}}{}} \label{sec:PredictionError} Some simple algebra yields \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = [Q - \BellmanEvaluation{\policyicy}Q] - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy}\QpiWeak{\policyicy}] = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) (Q - \QpiWeak{\policyicy}) = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) QErr. \end{align*} Taking expectations under $\policyicy$ and recalling that $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0$ for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$ yields \begin{align*} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} = \innerprodweighted{\TestFunction{}} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}. \end{align*} Notice that for any $Q \in \Qclass{\policyicy}$ there exists a test function $QErr = Q - \QpiWeak{\policyicy} \in \QclassErr{\policyicy}$, and the associated population constraint reads \begin{align*} \frac{ \big| \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu} \big| } { \sqrt{ \norm{QErr}{\mu}^2 + \TestFunctionReg } } & \leq \sqrt{\frac{\ensuremath{\rho}}{n}}. 
\end{align*} Consequently, the {off-policy cost coefficient} can be upper bounded as \begin{align*} K^\policy & \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \; \leq \; \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \: \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu}^2 } \Big \}, \end{align*} as claimed in the bound~\eqref{EqnPredErrorBound}. \subsection{Proof of \texorpdfstring{\cref{lem:PredictionErrorBellmanClosure}}{}} \label{sec:PredictionErrorBellmanClosure} If weak Bellman closure holds, then we can write \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = Q - \QpiProj{\policyicy}{Q} \in \QclassErr{\policyicy}. \end{align*} For any $Q \in \Qclass{\policyicy}$, the function $QErrNC = Q - \QpiProj{\policyicy}{Q}$ belongs to $\QclassErr{\policyicy}$, and the associated population constraint reads $\frac{ |\innerprodweighted{QErrNC} {QErrNC} {\mu} |} { \sqrt{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg }} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$. Consequently, the {off-policy cost coefficient} is upper bounded as \begin{align*} K^\policy & \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \; \leq \; \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \}, \end{align*} where the final inequality follows from the fact that $\norm{QErrNC}{\mu} \leq 1$. \subsection{Proof of \texorpdfstring{\cref{lem:BellmanTestFunctions}}{}} \label{sec:BellmanTestFunctions} We prove the two claims separately. \paragraph{Proof of the bound~\eqref{EqnBellBound}:} If the test function class includes $\TestFunctionClassBubnov{\policyicy}$, then any feasible $Q$ must satisfy the population constraints \begin{align*} \frac{\innerprodweighted{ \ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}} {\sqrt{\norm{\ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}}{\mu}^2 + \TestFunctionReg}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}, \qquad \mbox{for all $Q' \in \Qclass{\policyicy}$.} \end{align*} Setting $Q' = Q$ yields $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 + \TestFunctionReg } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$. If $\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 \leq \TestFunctionReg$, then the claim holds directly, given our choice $\TestFunctionReg = c \frac{\ensuremath{\rho}}{n}$ for some constant $c$.
Otherwise, the constraint can be weakened to $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ 2\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$, which yields the bound~\eqref{EqnBellBound}. \paragraph{Proof of the bound~\eqref{EqnBellBoundConc}:} We now prove the sequence of inequalities stated in equation~\eqref{EqnBellBoundConc}. Inequality (i) follows directly from the definition of $K^\policy$ and the bound~\eqref{EqnBellBound}. Turning to inequality (ii), an application of Jensen's inequality yields \begin{align*} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 = [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2 \leq \ensuremath{\mathbb{E}}ecti{[\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}]^2}{\policyicy} = \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2. \end{align*} Finally, inequality (iii) follows by observing that \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \frac{\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\policyicy}}{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{ \Big[ \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa} \, [(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2 \Big]}{\mu} }{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa}. \end{align*} \section{Proofs for the Linear Setting} We now prove the results stated in~\cref{sec:Linear}. Throughout this section, the reader should recall that $Q$ takes the linear form $Q \psa = \inprod{\CriticPar{}}{\phi \psa}$, so that the bulk of our arguments operate directly on the weight vector $\CriticPar{} \in \R^\dim$. Given the linear structure, the population and empirical covariance matrices of the feature vectors play a central role. We make use of the following known result (cf. Lemma 1 in the paper~\cite{zhang2021optimal}) that relates these objects: \begin{lemma}[Covariance Concentration] \label{lem:CovarianceConcentration} There are universal constants $(c_1, c_2, c_3, c_4)$ such that for any $\delta \in (0, 1)$, we have \begin{align} c_1 \SigmaExplicit \preceq \frac{1}{n}\widehat \SigmaExplicit + \frac{c_2}{n} \log \frac{n \dim}{\FailureProbability}\Identity \preceq c_3 \SigmaExplicit + \frac{c_4}{n} \log \frac{n \dim}{\FailureProbability}\Identity, \end{align} with probability at least $1 - \FailureProbability$.
\end{lemma} \subsection{Proof of \texorpdfstring{\cref{prop:LinearConcentrability}}{}} \label{sec:LinearConcentrability} Under weak realizability, we have \begin{align} \innerprodweighted {\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} {\mu} = 0 \qquad \mbox{for all $j = 1, \ldots, \dim$.} \end{align} Thus, at $\psa$ the Bellman error difference reads \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}\psa - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}\psa & = [Q - \BellmanEvaluation{\policyicy}Q]\psa - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy} \QpiWeak{\policyicy} ]\psa \\ & = [Q - \QpiWeak{\policyicy} ]\psa - \discount \ensuremath{\mathbb{E}}ecti{[Q - \QpiWeak{\policyicy}](\successorstate,\policyicy)}{\successorstate \sim \Pro\psa } \\ \numberthis{\label{eqn:LinearBellmanError}} & = \inprod{\CriticPar{} - \CriticParBest{\policyicy}}{ \phi\psa - \discount \phiBootstrap{\policyicy}\psa} \end{align*} To proceed we need the following auxiliary result: \begin{lemma}[Linear Parameter Constraints] \label{lem:RelaxedLinearConstraints} With probability at least $1-\FailureProbability$, there exists a universal constant $c_1 > 0$ such that if $Q \in \PopulationFeasibleSet{\policyicy}$ then $ \norm{\CriticPar{} - \CriticParBest{\policyicy}}{\CovarianceWithBootstrapReg{\policyicy}}^2 \leq c_1 \frac{\dim \ensuremath{\rho}}{n}$. \end{lemma} \noindent See \cref{sec:RelaxedLinearConstraints} for the proof. \newcommand{\IntermediateSmall}[4]{Using this lemma, we can bound the OPC coefficient as follows \begin{align*} K^\policy \overset{(i)}{\leq} \frac{n}{\ensuremath{\rho}} \; \max_{Q \in \PopulationFeasibleSet{\policyicy}} \innerprodweighted{\1} { \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} #4} {\policyicy}^2 & \overset{(ii)}{\leq} \frac{n}{\ensuremath{\rho}} \; [\ensuremath{\mathbb{E}}ecti{(#1)^\top}{\policyicy} (#2)]^2 \\ & \overset{(iii)}{\leq} \frac{n}{\ensuremath{\rho}} \: \norm{\ensuremath{\mathbb{E}}ecti{ #1 }{\policyicy}}{(#3)^{-1}}^2 \norm{#2}{#3}^2 \\ & \leq c_1 \dim \norm{\ensuremath{\mathbb{E}}ecti{ #1}{\policyicy}}{(#3)^{-1}}^2. \end{align*} Here step $(i)$ follows from the definition of off-policy cost coefficient, $(ii)$ leverages the linear structure and $(iii)$ is Cauchy-Schwartz. } \IntermediateSmall {\phi - \discount \phiBootstrap{\policyicy}} {\CriticPar{} - \CriticParBest{\policyicy}} {\CovarianceWithBootstrapReg{\policyicy}} {-\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} \subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraints}}{}} \label{sec:RelaxedLinearConstraints} \intermediate{(\phi - \discount\phiBootstrap{\policyicy})^\top} {(\SigmaReg - \discount \CovarianceBootstrap{\policyicy})} {(\CriticPar{} - \CriticParBest{\policyicy})} {\CovarianceWithBootstrapReg{\policyicy}} {(\Sigma - \discount \CovarianceBootstrap{\policyicy})} {\cref{eqn:LinearBellmanError}} \subsection{Proof of \cref{prop:LinearConcentrabilityBellmanClosure}} \label{sec:LinearConcentrabilityBellmanClosure} Under weak Bellman closure, we have \begin{align} \numberthis{\label{eqn:LinearBellmanErrorWithClosure}} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = \phi^\top(\CriticPar{} - \CriticParProjection{\policyicy}). \end{align} With a slight abuse of notation, let $\QpiProj{\policyicy}{\CriticPar{}}$ denote the weight vector that defines the action-value function $\QpiProj{\policyicy}{Q}$. 
We introduce the following auxiliary lemma: \begin{lemma}[Linear Parameter Constraints with Bellman Closure] \label{lem:RelaxedLinearConstraintsBellmanClosure} With probability at least $1-\FailureProbability$, if $Q \in \PopulationFeasibleSet{\policyicy}$, then $ \norm{\CriticPar{} - \CriticParProjection{\policyicy}}{\SigmaReg }^2 \leq c_1\frac{\dim \ensuremath{\rho}}{n}$ for a universal constant $c_1 > 0$. \end{lemma} See~\cref{sec:RelaxedLinearConstraintsBellmanClosure} for the proof. \IntermediateSmall {\phi} {\CriticPar{} - \CriticParProjection{\policyicy}} {\SigmaReg} {} \subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraintsBellmanClosure}}{}} \label{sec:RelaxedLinearConstraintsBellmanClosure} \intermediate{\phi^\top} {\SigmaReg} {(\CriticPar{} - \CriticParProjection{\policyicy})} {\SigmaReg} {\Sigma} {\cref{eqn:LinearBellmanErrorWithClosure}} \section{Proof of \texorpdfstring{\cref{thm:LinearApproximation}}{}} \label{sec:LinearApproximation} In this section, we prove the guarantee on our actor-critic procedure stated in \cref{thm:LinearApproximation}. \hidecom{ The proof consists of four main steps. First, we show that the critic linear $\QminEmp{\policyicy}$ function can be interpreted as the \emph{exact} value function on an MDP with a perturbed reward function. Second, we show that the global update rule in \cref{eqn:LinearActorUpdate} is equivalent to an instantiation of Mirror descent where the gradient is the $Q$ of the adversarial MDP. Third, we analyze the progress of Mirror descent in finding a good solution on the sequence of adversarial MDP identified by the critic. Finally we put everything together to derive a performance bound. } \subsection{Adversarial MDPs} \label{sec:AdversarialMDP} We now introduce the sequence of adversarial MDPs $\{\mathcal{M}adv{t}\}_{t=1}^T$ used in the analysis. Each MDP $\mathcal{M}adv{t}$ is defined by the same state-action space and transition law as the original MDP $\mathcal{M}$, but with the reward function $\Reward$ perturbed by $RAdv{t}$---that is, \begin{align} \label{eqn:AdversarialMDP} \mathcal{M}adv{t} \defeq \langle \StateSpace, ASpace, R + RAdv{t}, \Pro, \discount \rangle. \end{align} For an arbitrary policy $\policyicy$, we denote by $\QpiAdv{t}{\policyicy}$ and $\ApiAdv{t}{\policyicy}$ the action value function and the advantage function on $\mathcal{M}adv{t}$; the value of $\policyicy$ from the starting distribution $\nu_{\text{start}}$ is denoted by $\VpiAdv{t}{\policyicy}$. We immediately have the following expression for the value function, which follows because the dynamics of $\mathcal{M}adv{t}$ and $\mathcal{M}$ are identical and the reward function of $\mathcal{M}adv{t}$ equals that of $\mathcal{M}$ plus $RAdv{t}$: \begin{align} \label{eqn:VonAdv} \VpiAdv{t}{\policyicy} \defeq \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{ \Big[ R + RAdv{t} \Big]}{\policyicy}. \end{align} Consider the action value function $\QminEmp{\ActorPolicy{t}}$ returned by the critic, and let the reward perturbation $RAdv{t} = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$ be the Bellman error of the critic value function $\QminEmp{\ActorPolicy{t}}$. The special property of $\mathcal{M}adv{t}$ is that the action value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$ equals the critic lower estimate $\QminEmp{\ActorPolicy{t}}$.
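Before stating this property formally, we note that it can be verified directly on a toy tabular example (a purely illustrative sketch, with all variable names our own): adding the Bellman error of any function $Q$ as a reward perturbation makes $Q$ the exact action-value function of the perturbed MDP.
\begin{verbatim}
# Illustrative check of the adversarial-MDP construction on a random tabular MDP:
# with reward perturbation Delta_R = Q - T^pi(Q), the chosen Q solves the Bellman
# evaluation equations of the perturbed MDP exactly.
import numpy as np

rng = np.random.default_rng(3)
nS, nA, gamma = 4, 2, 0.9
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((nS, nA)).reshape(-1)
pi = rng.random((nS, nA)); pi /= pi.sum(axis=1, keepdims=True)
P_pi = np.einsum('san,nb->sanb', P, pi).reshape(nS * nA, nS * nA)

Q = rng.random(nS * nA)                      # e.g., the critic's lower estimate
delta_R = Q - (R + gamma * P_pi @ Q)         # Bellman error, used as the perturbation

# Action-value function of pi on the MDP with reward R + delta_R
Q_adv = np.linalg.solve(np.eye(nS * nA) - gamma * P_pi, R + delta_R)
assert np.allclose(Q_adv, Q)
\end{verbatim}
The next lemma records this fact.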
\begin{lemma}[Adversarial MDP Equivalence] \label{lem:QfncOnAdversarialMDP} Given the perturbed MDP $\mathcal{M}adv{t}$ from equation~\eqref{eqn:AdversarialMDP} with $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$, we have the equivalence \begin{align*} \QpiAdv{t}{\ActorPolicy{t}} = \QminEmp{\ActorPolicy{t}}. \end{align*} \end{lemma} \begin{proof} We need to check that $\QminEmp{\ActorPolicy{t}}$ solves the Bellman evaluation equations for the adversarial MDP, ensuring that $\QminEmp{\ActorPolicy{t}}$ is the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Let $\BellmanEvaluation{\ActorPolicy{t}}_t$ be the Bellman evaluation operator on $\mathcal{M}adv{t}$ for policy $\ActorPolicy{t}$. We have \begin{align*} \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}_t(\QminEmp{\ActorPolicy{t}}) = \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}(\QminEmp{\ActorPolicy{t}}) - RAdv{t} & = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} - \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} = 0. \end{align*} Thus, the function $\QminEmp{\ActorPolicy{t}}$ is the action value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$, and it is by definition denoted by $\QpiAdv{t}{\ActorPolicy{t}}$. \end{proof} This lemma shows that the action-value function $\QminEmp{\ActorPolicy{t}}$ computed by the critic is equivalent to the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Thus, we can interpret the critic as computing a model-based pessimistic estimate of the value of $\ActorPolicy{t}$; this view is useful in the rest of the analysis. \subsection{Equivalence of Updates} The second step is to establish the equivalence of the update rule~\eqref{eqn:LinearActorUpdate}, written equivalently as the update~\eqref{eqn:GlobalRule}, with the exponentiated gradient update rule~\eqref{eqn:LocalRule}. \begin{lemma}[Equivalence of Updates] \label{lem:UpdateEquivalence} For linear $Q$-functions of the form $\QpiAdv{t}{} \psa = \inprod{\CriticPar{t}}{\phi \psa}$, the parameter update \begin{subequations} \begin{align} \label{eqn:GlobalRule} \ActorPolicy{t+1} \pas & \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta \CriticPar{t})), \\ \intertext{is equivalent to the policy update} \label{eqn:LocalRule} \ActorPolicy{t+1}\pas & \propto \ActorPolicy{t}\pas \exp(\eta\QpiAdv{t}{} \psa), \qquad \ActorPolicy{1}\pas = \frac{1}{\card{ASpace_{\state}}}. \end{align} \end{subequations} \end{lemma} \begin{proof} We prove this claim via induction on $t$. The base case ($t = 1$) holds by a direct calculation. For the inductive step, assume that both rules maintain the same policy $\ActorPolicy{t} \propto \exp(\phi\psa^\top\ActorPar{t})$ at iteration $t$; we show that the policies remain the same at iteration $t+1$. At any $\psa$, we have \begin{align*} \ActorPolicy{t+1}(\action \mid \state) \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta\CriticPar{t})) & \propto \exp(\phi\psa^\top \ActorPar{t}) \exp(\eta\phi\psa^\top \CriticPar{t}) \\ & \propto \ActorPolicy{t}(\action \mid \state) \exp(\eta\QpiAdv{t}{} \psa). \end{align*} \end{proof} Recall that $\ActorPar{t}$ is the parameter associated to $\ActorPolicy{t}$ and that $\CriticPar{t}$ is the parameter associated to $\QminEmp{\ActorPolicy{t}}$.
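To make the equivalence in \cref{lem:UpdateEquivalence} concrete, the following sketch (illustrative only; the feature map, dimensions, and step size are arbitrary choices of ours) runs the global parameter update and the local exponentiated-gradient update side by side and checks that they produce identical policies at every round.
\begin{verbatim}
# Illustrative check that the global softmax-parameter update and the local
# exponentiated-gradient update generate the same policies for linear Q-functions.
import numpy as np

rng = np.random.default_rng(4)
nS, nA, d, eta, T = 3, 4, 5, 0.1, 6
phi = rng.normal(size=(nS, nA, d))            # feature map phi(s, a)

def softmax(logits):
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

omega = np.zeros(d)                           # actor parameter; pi_1 is uniform
pi_local = np.full((nS, nA), 1.0 / nA)        # policy maintained by the local rule
for t in range(T):
    theta = rng.normal(size=d)                # critic parameter at round t
    Q = phi @ theta                           # Q_t(s, a) = <theta_t, phi(s, a)>
    omega = omega + eta * theta               # global rule: update the parameter
    pi_global = softmax(phi @ omega)
    pi_local = pi_local * np.exp(eta * Q)     # local rule: exponentiated gradient
    pi_local /= pi_local.sum(axis=1, keepdims=True)
    assert np.allclose(pi_global, pi_local)
\end{verbatim}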
Using \cref{lem:UpdateEquivalence} together with \cref{lem:QfncOnAdversarialMDP}, we find that the actor policy $\ActorPolicy{t}$, through its parameter $\ActorPar{t}$, implements the mirror descent update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QminEmp{\ActorPolicy{t}} = \QpiAdv{t}{\ActorPolicy{t}}$ and $\ActorPolicy{1}(\action \mid \state) = 1/\abs{ASpace_\state}, \; \forall \psa$. In words, the actor is using mirror descent to find the best policy on the sequence of adversarial MDPs $\{\mathcal{M}adv{t} \}$ implicitly identified by the critic. \subsection{Mirror Descent on Adversarial MDPs} Our third step is to analyze the behavior of mirror descent on the MDP sequence $\{\mathcal{M}adv{t}\}_{t=1}^T$, and then translate these guarantees back to the original MDP $\mathcal{M}$. The following result provides a bound on the average of the value functions $\{\Vpi{\ActorPolicy{t}} \}_{t=1}^T$ induced by the actor's policy sequence. This bound involves a form of optimization error\footnote{Technically, this error should depend on $\card{ASpace_\state}$, if we were to allow the action spaces to have varying cardinality, but we elide this distinction here.} given by \begin{align*} \MirrorRegret{T} & = 2 \, \sqrt{ \frac{2 \log |ASpace|}{T}}, \end{align*} as is standard in mirror descent schemes. It also involves the \emph{perturbed rewards} given by $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QpiAdv{t}{\ActorPolicy{t}}} {\ActorPolicy{t}}{}$. \begin{lemma}[Mirror Descent on Adversarial MDPs] \label{prop:MirrorDescentAdversarialRewards} For any positive integer $T$, applying the update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QpiAdv{t}{\ActorPolicy{t}}$ for $T$ rounds yields a sequence such that \begin{align} \label{eqn:MirrorDescentAdversarialRewards} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align} valid for any comparator policy $\widetilde \policy$. \end{lemma} \noindent See \cref{sec:MirrorDescentAdversarialRewards} for the proof. \\ To be clear, the comparator policy $\widetilde \policy$ need not belong to the soft-max policy class. Apart from the optimization error term, our bound~\eqref{eqn:MirrorDescentAdversarialRewards} involves the behavior of the perturbed rewards $RAdv{t}$ under the comparator $\widetilde \policy$ and under $\ActorPolicy{t}$, respectively. These correction terms arise because the actor performs the policy update using the action-value function $\QpiAdv{t}{\ActorPolicy{t}}$ on the perturbed MDPs instead of the real underlying MDP. \subsection{Pessimism: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}$}{}} The fourth step of the proof is to leverage the pessimistic estimates returned by the critic to simplify equation~\eqref{eqn:MirrorDescentAdversarialRewards}. Using \cref{lem:Simulation} and the definition of the adversarial reward $RAdv{t}$, we can write \begin{align*} \VminEmp{\ActorPolicy{t}} - \Vpi{\ActorPolicy{t}} = \frac{1}{1-\discount} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} & = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}.
\end{align*} Since weak realizability holds, \cref{thm:NewPolicyEvaluation} guarantees that $\VminEmp{\policyicy} \leq \Vpi{\policyicy}$ uniformly for all $\policyicy \in \PolicyClass$ with probability at least $1-\FailureProbability$. Coupled with the prior display, we find that \begin{align} \label{eqn:PessimisticAdversarialReward} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \leq 0. \end{align} Using the above display, the bound in \cref{eqn:MirrorDescentAdversarialRewards} can be further upper bounded and simplified. \subsection{Concentrability: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$}{}} The term $\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$ can be interpreted as an approximate concentrability factor for the approximate algorithm that we are investigating. \paragraph{Bound under only weak realizability:} \cref{lem:RelaxedLinearConstraints} gives, with probability at least $1-\FailureProbability$, that any surviving $Q$ in $\PopulationFeasibleSet{\ActorPolicy{t}}$ must satisfy $ \norm{ \CriticPar{} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}}^2 \lesssim \frac{\dim \ensuremath{\rho}}{n} $, where $\CriticParBest{\ActorPolicy{t}}$ is the parameter associated with the weak solution $\QpiWeak{\ActorPolicy{t}}$. This bound must in particular apply to the parameter $ \CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$ identified by the critic.\footnote{We abuse the notation and write $\CriticPar{} \in \EmpiricalFeasibleSet{\policyicy}$ in place of $Q \in \EmpiricalFeasibleSet{\policyicy}$.} We are now ready to bound the remaining adversarial reward along the distribution of the comparator $\widetilde \policy$. \begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} & = \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} \\ & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ (\phi - \discount \phiBootstrap{\ActorPolicy{t}})^\top (\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\ActorPolicy{t}}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\ActorPolicy{t}})^{-1}} \norm{\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}} \\ & \leq c \; \sqrt{\frac{\dim \ensuremath{\rho}}{n}} \; \sup_{\policyicy \in \PolicyClass} \left \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\policyicy})^{-1}} \right \}. \numberthis{\label{eqn:ApproximateLinearConcentrability}} \end{align*} Step (i) follows from the expression~\eqref{eqn:LinearBellmanError} for the weak Bellman error, along with the definition of the weak solution $\QpiWeak{\ActorPolicy{t}}$. \paragraph{Bound under weak Bellman closure:} When Bellman closure holds, we proceed analogously. The bound in \cref{lem:RelaxedLinearConstraintsBellmanClosure} ensures, with probability at least $1-\FailureProbability$, that $ \norm{\CriticPar{} - \CriticParProjection{\ActorPolicy{t}}}{\SigmaReg}^2 \leq c \; \frac{\dim \ensuremath{\rho}}{n} $ for all $\CriticPar{} \in \PopulationFeasibleSet{\ActorPolicy{t}}$; as before, this relation must apply to the parameter chosen by the critic $\CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$.
The bound on the adversarial reward along the distribution of the comparator $\widetilde \policy$ now reads \begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} \: = \: \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ \phi^\top (\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \norm{\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}}{\SigmaReg} \\ & \leq c \; \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \sqrt{\frac{\dim \ensuremath{\rho}}{n}}. \numberthis{\label{eqn:ApproximateLinearConcentrabilityBellmanClosure}} \end{align*} Here step (i) follows from the expression~\eqref{eqn:LinearBellmanErrorWithClosure} for the Bellman error under weak closure. \subsection{Proof of \cref{prop:MirrorDescentAdversarialRewards}} \label{sec:MirrorDescentAdversarialRewards} We now prove our guarantee for a mirror descent procedure on the sequence of adversarial MDPs. Our analysis makes use of a standard result on online mirror descent for linear functions (e.g., see Section 5.4.2 of Hazan~\cite{hazan2021introduction}), which we state here for reference. Given a finite cardinality set $\xSpace$, a function $f: \xSpace \rightarrow \R$, and a distribution $\ensuremath{\nu}$ over $\xSpace$, we define $f(\ensuremath{\nu}) \defeq \sum_{x \in \xSpace} \ensuremath{\nu}(x) f(x)$. The following result gives a guarantee that holds uniformly for any sequence of functions $\{f_t\}_{t=1}^T$, thereby allowing for the possibility of adversarial behavior. \begin{proposition}[Adversarial Guarantees for Mirror Descent] \label{prop:MirrorDescent} Suppose that we initialize with the uniform distribution $\ensuremath{\nu}_{1}(\xvar) = \frac{1}{\abs{\xSpace}}$ for all $\xvar \in \xSpace$, and then perform $T$ rounds of the update \begin{align} \label{eqn:ExponentiatedGradient} \ensuremath{\nu}_{t+1}(\xvar ) \propto \ensuremath{\nu}_{t}(\xvar) \exp(\eta \fnc_{t}(\xvar)), \quad \mbox{for all $\xvar \in \xSpace$,} \end{align} using $\eta = \sqrt{\frac{\log \abs{\xSpace}}{2T}}$. If $\norm{\fnc_{t}}{\infty} \leq 1$ for all $t \in [T]$ then we have the bound \begin{align} \label{EqnMirrorBoundStatewise} \frac{1}{T} \sum_{t=1}^T \Big[ \fnc_{t}(\widetilde{\ensuremath{\nu}}) - \fnc_{t}(\ensuremath{\nu}_{t})\Big] \leq \MirrorRegret{T} \defeq 2\sqrt{\frac{2\log \abs{\xSpace}}{T}}. \end{align} where $\widetilde{\ensuremath{\nu}}$ is any comparator distribution over $\xSpace$. \end{proposition} We now use this result to prove our claim. So as to streamline the presentation, it is convenient to introduce the advantage function corresponding to $\ActorPolicy{t}$. It is a function of the state-action pair $\psa$ given by \begin{align*} \ApiAdv{t}{\ActorPolicy{t}}\psa \defeq \QpiAdv{t}{\ActorPolicy{t}}\psa - \ensuremath{\mathbb{E}}ecti{\QpiAdv{t}{\ActorPolicy{t}}(\state,\successoraction)}{\successoraction \sim \ActorPolicy{t}(\cdot \mid \state)}. \end{align*} In the sequel, we omit dependence on $\psa$ when referring to this function, consistent with the rest of the paper. From our earlier observation~\eqref{eqn:VonAdv}, recall that the reward function of the perturbed MDP $\mathcal{M}adv{t}$ corresponds to that of $\mathcal{M}$ plus the perturbation $RAdv{t}$. 
Combining this fact with a standard simulation lemma (e.g., \cite{kakade2003sample}) applied to $\mathcal{M}adv{t}$, we find that \begin{subequations} \begin{align} \label{EqnInitialBound} \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} & = \VpiAdv{t}{\widetilde \policy} - \VpiAdv{t}{\ActorPolicy{t}} + \frac{1}{1-\discount} \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \; = \; \frac{1}{1-\discount} \Big[ \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big]. \end{align} Now, for any given state $\state$, we introduce the linear objective function \begin{align*} \fnc_t(\ensuremath{\nu}) & \defeq \ensuremath{\mathbb{E}}_{\action \sim \ensuremath{\nu}} \QpiAdv{t}{\ActorPolicy{t}}(\state, \action) \; = \; \sum_{\action \in ASpace} \ensuremath{\nu}(\action) \QpiAdv{t}{\ActorPolicy{t}}(\state, \action), \end{align*} where $\ensuremath{\nu}$ is a distribution over the action space. With this choice, we have the equivalence \begin{align*} \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa & = \fnc_t(\widetilde \policy(\cdot \mid \state)) - \fnc_t\big(\ActorPolicy{t}(\cdot \mid \state) \big), \end{align*} where the reader should recall that we have fixed an arbitrary state $\state$. Consequently, applying the bound~\eqref{EqnMirrorBoundStatewise} with $\xSpace = ASpace$ and these choices of linear functions, we conclude that \begin{align} \label{EqnMyMirror} \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa \leq \MirrorRegret{T}. \end{align} \end{subequations} This bound holds for any state, and also for any average over the states. We now combine the pieces to conclude. By computing the average of the bound~\eqref{EqnInitialBound} over all $T$ iterations, we find that \begin{align*} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] & \leq \frac{1}{1-\discount} \left \{ \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \} \\ & \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align*} where the final inequality follows from the bound~\eqref{EqnMirrorBoundStatewise}, applied for each $\state$. We have thus established the claim. \section{Concentrability Coefficients and Test Spaces} \label{sec:Applications} In this section, we develop some connections to concentrability coefficients that have been used in past work, and discuss various choices of the test class. Like the predictor class $\Qclass{\policyicy}$, the test class $\TestFunctionClass{\policyicy}$ encodes domain knowledge, and thus its choice is delicate. Different from the predictor class, the test class does not require a `realizability' condition.
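Before turning to specific choices of the test class, it may help to see in schematic form how test functions translate into constraints: each test function contributes one normalized empirical correlation constraint on the temporal-difference errors of a candidate $Q$-function, of the kind analyzed in \cref{SecProofNewPolicyEvaluation}. The following Python sketch is purely illustrative (a toy tabular dataset, indicator test functions, and hypothetical threshold values), and it is not the implementation used in the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
gamma, lam, eps = 0.9, 0.05, 0.3
# Toy off-policy dataset of transitions (s, a, r, s') over 3 states and 2 actions.
data = [(rng.integers(3), rng.integers(2), rng.normal(), rng.integers(3)) for _ in range(200)]
pi = np.full((3, 2), 0.5)                    # target policy: uniform over the two actions

def td_errors(Q):
    # delta_i(Q) = Q(s_i, a_i) - r_i - gamma * E_{a' ~ pi(. | s'_i)} Q(s'_i, a')
    return np.array([Q[s, a] - r - gamma * pi[sp] @ Q[sp] for (s, a, r, sp) in data])

def in_feasible_set(Q, test_functions):
    delta = td_errors(Q)
    for f in test_functions:
        fvals = np.array([f(s, a) for (s, a, _, _) in data])
        corr = np.mean(fvals * delta)        # empirical correlation <f, delta(Q)>_n
        if abs(corr) > eps * np.sqrt(np.mean(fvals ** 2) + lam):
            return False                     # this test function rules the candidate out
    return True

# Each test function contributes one constraint; here: indicators of state-action pairs.
tests = [lambda s, a, i=i, j=j: float(s == i and a == j) for i in range(3) for j in range(2)]
print("zero Q-function survives the constraints:", in_feasible_set(np.zeros((3, 2)), tests))
\end{verbatim}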
As a general principle, the test functions should be chosen as orthogonal as possible with respect to the Bellman residual, so as to enable rapid progress towards the solution; at the same time, they should be sufficiently ``aligned'' with the dataset, meaning that $\norm{\TestFunction{}}{\mu}$ or its empirical counterpart $\norm{\TestFunction{}}{n}$ should be large. Given a test class, each additional test function posits a new constraint which helps identify the correct predictor; at the same time, it increases the metric entropy (parameter $\ensuremath{\rho}$), which makes each individual constraint looser. In summary, there are trade-offs to be made in the selection of the test class $\TestFunctionClass{}$, much like for $\Qclass{}$. In order to assess the statistical cost that we pay for off-policy data, it is natural to define the \emph{off-policy cost coefficient} (OPC) as \begin{align} \label{EqnConcSimple} K^\policy(\PopulationFeasibleSet{\policyicy}, \ensuremath{\rho}, \ensuremath{\lambda}) & \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} = \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{} }{\policyicy} ^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}}. \end{align} With this notation, our off-policy width bound~\eqref{EqnWidthBound} can be re-expressed as \begin{subequations} \begin{align} \label{eqn:ConcreteCI} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}}, \end{align} while the oracle inequality~\eqref{EqnOracle} for policy optimization can be re-expressed in the form \begin{align} \label{EqnConcSimpleBound} \Vpi{\widetilde \pi} \geq \max_{\policyicy \in \PolicyClass} \Big\{ \Vpi{\policyicy} - \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}} \Big \}. \end{align} \end{subequations} Since $\ensuremath{\lambda} \sim \ensuremath{\rho}/n$, the factor $\sqrt{1 + \ensuremath{\lambda}}$ can be bounded by a constant in the typical case $n \geq \ensuremath{\rho}$. We now offer concrete examples of the {OPC } coefficient, while deferring further examples to \cref{sec:appConc}. \subsection{Likelihood ratios} Our broader goal is to obtain small Bellman error along the distribution induced by $\policyicy$. Assume that one constructs a test function class $\TestFunctionClass{\policyicy}$ of possible likelihood ratios. \begin{proposition}[Likelihood ratio bounds] \label{prop:LikeRatio} Assume that for some constant $\scaling{\policyicy}$, the test function defined as $\TestFunction{}^*\psa = \frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa}$ belongs to $\TestFunctionClass{\policyicy}$ and satisfies $\norm{\TestFunction{}^*}{\infty} \leq 1$. Then the {OPC } coefficient satisfies \begin{align} \label{EqnLikeRatioBound} \ConcentrabilityGeneric{\policyicy} \stackrel{(i)}{\leq} \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}} \; \stackrel{(ii)}{\leq} \frac{\scaling{\policyicy} \big(1 + \scaling{\policyicy} \ensuremath{\lambda} \big)}{1 + \ensuremath{\lambda}}. \end{align} \end{proposition} Here $\scaling{\policyicy}$ is a scaling parameter that ensures $\norm{\TestFunction{}^*}{\infty} \leq 1$.
Concretely, one can take $\scaling{\policyicy} = \sup_{(\state,\action)} \frac{\DistributionOfPolicy{\policyicy}(\state,\action)}{\mu(\state,\action)}$. The proof is in \cref{sec:LikeRatio}. Since $\ensuremath{\lambda} = \ensuremath{\lambda}_n \rightarrow 0$ as $n$ increases, the {OPC } coefficient is bounded by a multiple of the expected ratio $\ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big]}{\policyicy}{}$. Up to an additive offset, this expectation is equal to the $\chi^2$-divergence between the policy-induced occupation measure $\DistributionOfPolicy{\policyicy}$ and the data-generating distribution $\mu$. The concentrability coefficient can be plugged back into \cref{eqn:ConcreteCI,EqnConcSimpleBound} to obtain a concrete policy optimization bound. In this case, we recover a result similar to \cite{xie2020Q}, but with a much milder concentrability coefficient that involves only the chosen comparator policy. \subsection{The error test space} \label{sec:ErrorTestSpace} We now turn to the discussion of a choice for the test space that extends the LSTD algorithm to non-linear spaces. A simplification to the linear setting is presented later in \cref{sec:Linear}. As is well known, the LSTD algorithm \cite{bradtke1996linear} can be seen as minimizing the Bellman error projected onto the linear prediction space $\Qclass{\policyicy}$. Define the transition operator $ (\TransitionOperator{\policyicy}Q)\psa = \ensuremath{\mathbb{E}}ecti{Q(\successorstate,\policyicy ) } {\successorstate \sim \Pro\psa} $, and the prediction error $QErr = Q - \QpiWeak{\policyicy}$, where $\QpiWeak{\policyicy}$ is a $Q$-function from the definition of weak realizability. The Bellman error can be re-written as $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = (\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr $. When realizability holds, in the linear setting and at the population level, the LSTD solution seeks to satisfy the projected Bellman equations \begin{align} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} = 0, \quad \text{for all $\TestFunction{} \in \QclassErrCentered{\policyicy}$}. \end{align} In the linear case, $\QclassErrCentered{\policyicy} $ is the class of linear functions $\Qclass{\policyicy}$ used as predictors; when $\Qclass{\policyicy}$ is non-linear, we can extend the LSTD method by using the (nonlinear) error test space $\TestFunctionClass{\policyicy} = \QclassErrCentered{\policyicy} = \{Q - \QpiWeak{\policyicy} \mid Q \in \Qclass{\policyicy} \}$. Since $\QclassErrCentered{\policyicy}$ is unknown (as it depends on the weak solution $\QpiWeak{\policyicy}$), we choose instead the larger class \begin{align*} \QclassErr{\policyicy} = \{ Q - Q' \mid Q,Q' \in \Qclass{\policyicy} \}, \end{align*} which contains $\QclassErrCentered{\policyicy}$. The resulting approach can be seen as performing a projection of the Bellman error $\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$ onto the error space $\QclassErrCentered{\policyicy}$, much like LSTD does in the linear setting. However, different from LSTD, our procedure returns confidence intervals as opposed to a point estimator.
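As a sanity check on this construction, the following sketch solves the empirical version of the projected Bellman equations for a toy linear model, using the coordinate features themselves as test functions; this is exactly the LSTD linear system. The synthetic data-generating process and all variable names are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, d, gamma = 5000, 3, 0.9

# Synthetic transitions with features phi(s_i, a_i) and next-state features phi'(s'_i, pi).
phi = rng.normal(size=(n, d))
phi_next = 0.5 * rng.normal(size=(n, d))
theta_star = np.array([1.0, -2.0, 0.5])
r = (phi - gamma * phi_next) @ theta_star + 0.01 * rng.normal(size=n)  # realizable rewards

# LSTD: impose <phi_j, TD error>_n = 0 for every coordinate test function phi_j.
A = phi.T @ (phi - gamma * phi_next) / n
b = phi.T @ r / n
theta_lstd = np.linalg.solve(A, b)

td = phi @ theta_lstd - r - gamma * (phi_next @ theta_lstd)
print("largest violated test constraint:", np.abs(phi.T @ td / n).max())  # ~1e-16
print("parameter error:", np.linalg.norm(theta_lstd - theta_star))        # small
\end{verbatim}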
This choice of the test space is related to the Bubnov-Galerkin method~\cite{repin2017one} for linear spaces; it selects the test space $\TestFunctionClass{\policyicy}$ to be identical to the trial space $\QclassErrCentered{\policyicy}$ that contains all possible solution errors. \begin{lemma}[OPC coefficient from prediction error] \label{lem:PredictionError} For any test function class $\TestFunctionClass{\policyicy} \supseteq \QclassErr{\policyicy}$, we have \begin{align} \label{EqnPredErrorBound} K^\policy & \leq \max_{Q \in \Qclass{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2 } { \innerprodweighted{QErr} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu}^2 } \big \} = \max_{QErr \in \QclassErrCentered{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\mu}^2 } \big \}. \end{align} \end{lemma} The above coefficient measures the ratio between the Bellman error along the distribution of the target policy $\policyicy$ and that projected onto the error space $\QclassErrCentered{\policyicy}$ defined by $\Qclass{\policyicy}$. It is a concentrability coefficient that \emph{always} applies, as the choice of the test space does not require domain knowledge. See~\cref{sec:PredictionError} for the proof, and \cref{sec:appBubnov} for further comments and insights, as well as a simplification in the special case of Bellman closure. \subsection{The Bellman test space} \label{sec:DomainKnowledge} In the prior section, we controlled the projected Bellman error. Another longstanding approach in reinforcement learning is to control the Bellman error itself, for example by minimizing the squared Bellman residual. In general, this cannot be done if only an offline dataset is available due to the well-known \emph{double sampling} issue. However, in some cases we can use a helper class to try to capture the Bellman error. Such a class needs to be a superset of the class of \emph{Bellman test functions} given by \begin{align} \label{EqnBellmanTest} \TestFunctionClassBubnov{\policyicy} & \defeq \{ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} \mid Q \in \Qclass{\policyicy} \}. \end{align} Any test class that contains the above allows us to control the Bellman residual, as we show next.
\begin{lemma}[Bellman Test Functions] \label{lem:BellmanTestFunctions} For any test function class $ \TestFunctionClass{\policyicy}$ that contains $\TestFunctionClassBubnov{\policyicy}$, we have \begin{subequations} \begin{align} \label{EqnBellBound} \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \leq c_1 \sqrt{\frac{\ensuremath{\rho}}{n}} \qquad \mbox{for any $Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{\policyicy})$.} \end{align} Moreover, the {off-policy cost coefficient} is upper bounded as \begin{align} \label{EqnBellBoundConc} K^\policy & \stackrel{(i)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(ii)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(iii)}{\leq} c_1 \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa} {\mu\psa}. \end{align} \end{subequations} \end{lemma} \noindent See~\cref{sec:BellmanTestFunctions} for the proof of this claim. Consequently, whenever the test class includes the Bellman test functions, the {off-policy cost coefficient} is at most the worst-case ratio between the squared Bellman residuals along the target distribution and the data-generating distribution. If Bellman closure holds, then the prediction error space $\QclassErr{\policyicy}$ introduced in \cref{sec:ErrorTestSpace} contains the Bellman test functions: for $Q \in \Qclass{\policyicy}$, we can write $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q \in \QclassErr{\policyicy} $. This fact allows us to recover a result in the recent paper~\cite{xie2021bellman} in the special case of Bellman closure, although the approach presented here is more general. \subsection{Combining test spaces} \label{sec:MultipleRobustness} Often, it is natural to construct a test space that is a union of several simpler classes. A simple but valuable observation is that the resulting procedure inherits the best of the OPC coefficients. Suppose that we are given a collection $\{ \TestFunctionClass{\policyicy}_m \}_{m=1}^M$ of $M$ different test function classes, and define the union $\TestFunctionClass{\policyicy} = \bigcup_{m=1}^M \TestFunctionClass{\policyicy}_m$. For each $m = 1, \ldots, M$, let $K^\policy_m$ be the OPC coefficient defined by the function class $\TestFunctionClass{\policyicy}_m$ and radius $\ensuremath{\rho}$, and let $K^\policy(\TestFunctionClass{})$ be the OPC coefficient associated with the full class. Then we have the following guarantee: \begin{lemma}[Multiple test classes] \label{prop:MultipleRobustness} We have the bound \begin{align} \label{EqnMinBound} K^\policy(\TestFunctionClass{}) \leq \min_{m = 1, \ldots, M} K^\policy_m. \end{align} \end{lemma} \noindent This guarantee is a straightforward consequence of our construction of the feasibility sets: in particular, we have $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}) = \cap_{m =1}^M \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}_m)$, and consequently, by the variational definition of the {off-policy cost coefficient} $K^\policy(\TestFunctionClass{})$ as a maximization over $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})$, the bound~\eqref{EqnMinBound} follows. In words, when multiple test spaces are combined, our algorithms inherit the best (smallest) OPC coefficient over all individual test spaces.
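The following stylized Python snippet illustrates the mechanism behind this guarantee on a one-parameter family of candidates whose TD error is identically equal to the parameter: the feasible set for the union of two test classes is the intersection of the individual feasible sets, so its width is no larger than the smaller of the two. The construction and the threshold values are illustrative assumptions, not the paper's algorithm.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 500
candidates = np.linspace(-1.0, 1.0, 2001)   # candidate c has TD error identically equal to c
f1 = np.ones(n)                             # test function aligned with a constant error
f2 = np.sign(rng.normal(size=n))            # test function nearly orthogonal to a constant error
eps, lam = 0.05, 0.01

def width(tests):
    ok = [c for c in candidates
          if all(abs(np.mean(f * c)) <= eps * np.sqrt(np.mean(f ** 2) + lam) for f in tests)]
    return max(ok) - min(ok)                # width of the surviving (feasible) interval

print("class 1 width:", width([f1]))
print("class 2 width:", width([f2]))
print("union   width:", width([f1, f2]))    # no larger than the smaller of the two
\end{verbatim}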
While this behavior is attractive, one must note that there is a statistical cost to using a union of test spaces: the choice of $\ensuremath{\rho}$ scales as a function of $\TestFunctionClass{}$ via its metric entropy. This increase in $\ensuremath{\rho}$ must be balanced with the benefits of using multiple test spaces.\footnote{For space reasons, we defer to \cref{sec:IS2BC} an application in which we construct a test function space as a union of subclasses, and thereby obtain a method that automatically leverages Bellman closure when it holds, falls back to importance sampling if closure fails, and falls back to a worst-case bound in general.} \section{Linear Setting} \label{sec:Linear} In this section, we turn to a detailed analysis of our estimators using function classes that are linear in a feature map. Let $\phi: \StateSpace \times ASpace \rightarrow \R^\dim$ be a given feature map, and consider linear expansions $g_{\CriticPar{}} \psa \defeq \inprod{\CriticPar{}}{\phi \psa} \; = \; \sum_{j=1}^d \CriticPar{j} \phi_j \psa$. The class of \emph{linear functions} takes the form \begin{align} \label{EqnLinClass} \ensuremath{\mathcal{L}} & \defeq \{ \psa \mapsto g_{\CriticPar{}} \psa \mid \CriticPar{} \in \R^{\dim}, \; \norm{\CriticPar{}}{2} \leq 1 \}. \end{align} Throughout our analysis, we assume that $\norm{\phi(\state, \action)}{2} \leq 1$ for all state-action pairs. Following the approach in \cref{sec:ErrorTestSpace}, which is based on the LSTD method, we should choose the test function class $\TestFunctionClass{\policyicy} = \LinSpace$, as in the linear case the prediction error is linear. In order to obtain a computationally efficient implementation, we need to use a test class that is a ``simpler'' subset of $\LinSpace$. In particular, for linear functions, it is not hard to show that the estimates $\VminEmp{\policyicy}$ and $\VmaxEmp{\policyicy}$ from equation~\eqref{eqn:ConfidenceIntervalEmpirical} can be computed by solving a quadratic program, with two linear constraints for each test function. (See~\cref{sec:LinearConfidenceIntervals} for the details.) Consequently, the computational complexity scales linearly with the number of test functions. Thus, if we restrict ourselves to a finite test class contained within $\LinSpace$, we will obtain a computationally efficient approach. \subsection{A computationally friendly test class and OPC coefficients} Define the empirical covariance matrix $\widehat \Sigma = \frac{1}{n} \sum_{i=1}^{n} \phiEmpirical{i} \phiEmpirical{i}^T$ \mbox{where $\phiEmpirical{i} \defeq \phi(\state_i,\action_i)$.} Let $\{\EigenVectorEmpirical{j} \}_{j=1}^\dim$ be the eigenvectors of empirical covariance matrix $\widehat \Sigma$, and suppose that they are normalized to have unit $\ell_2$-norm. We use these normalized eigenvectors to define the finite test class \begin{align} \label{eqn:LinTestFunction} \TestFunctionClassRandom{\policyicy} \defeq \{ f_j, j = 1, \ldots, \dim \} \quad \mbox{where $f_j \psa \defeq \inprod{\EigenVectorEmpirical{j}}{\phi \psa}$} \end{align} A few observations are in order: \begin{carlist} \item This test class has only $d$ functions, so that our QP implementation has $2 \dim$ constraints, and can be solved in polynomial time. (Again, see \cref{sec:LinearConfidenceIntervals} for details.) \item Since $\TestFunctionClassRandom{\policyicy}$ is a subset of $\LinSpace$ the choice of radius $\ensuremath{\rho} = c( \frac{\dim}{n} + \log 1/\delta)$ is valid for some constant $c$. 
\end{carlist} \newcommand{\ConcSimple(\TestFunctionClassRandom{\policy})}{K^\policy(\TestFunctionClassRandom{\policyicy})} \paragraph{Concentrability} When weak Bellman closure does not hold, then our analysis needs to take into account how errors propagate via the dynamics. In particular, we define the \emph{next-state feature extractor} $\phiBootstrap{\policyicy} \psa \defeq \ensuremath{\mathbb{E}}ecti{\phi(\successorstate,\policyicy)}{\successorstate \sim \Pro\psa}$, along with the population covariance matrix $\Sigma \defeq \E_\mu \big[\phi \psa \phi^\top \psa \big]$, and its $\ensuremath{\lambda}$-regularized version $\SigmaReg \defeq \Sigma + \TestFunctionReg \Identity$. We also define the matrices \begin{align*} \CovarianceBootstrap{\policyicy} \defeq \ensuremath{\mathbb{E}}ecti{[\phi(\phiBootstrap{\policyicy})^\top]}{ \mu}, \quad \CovarianceWithBootstrapReg{\policyicy} \defeq (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy} )^\top (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}). \end{align*} The matrix $\CovarianceBootstrap{\policyicy}$ is the cross-covariance between successive states, whereas the matrix $\CovarianceWithBootstrapReg{\policyicy}$ is a suitably renormalized and symmetrized version of the matrix $\Sigma^{\frac{1}{2}} - \discount \Sigma^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}$, which arises naturally from the policy evaluation equation. We refer to quantities that contain evaluations at the next-state (e.g., $\phiBootstrap{\policyicy}$) as bootstrapping terms, and now bound the OPC coefficient in the presence of such terms: \begin{proposition}[OPC bounds with bootstrapping] \label{prop:LinearConcentrability} Under weak realizability, we have \begin{align} \label{EqnOPCBootstrap} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\policyicy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \qquad \mbox{with probability at least $1-\FailureProbability$.} \end{align} \end{proposition} \noindent See~\cref{sec:LinearConcentrability} for the proof. The bound~\eqref{EqnOPCBootstrap} takes a familiar form, as it involves the same matrices used to define the LSTD solution. This is expected, as our approach here is essentially equivalent to the LSTD method; the difference is that LSTD only gives a point estimate as opposed to the confidence intervals that we present here; however, they are both derived from the same principle, namely from the Bellman equations projected along the predictor (error) space. The bound quantifies how the feature extractor $\phi$ together with the bootstrapping term $\phiBootstrap{\policyicy}$, averaged along the target policy $\policyicy$, interact with the covariance matrix with bootstrapping $\CovarianceWithBootstrapReg{\policyicy}$. It is an approximation to the OPC coefficient bound derived in~\cref{lem:PredictionError}. The bootstrapping terms capture the temporal difference correlations that can arise in reinforcement learning when strong assumptions like Bellman closure do not hold. As a consequence, such an OPC coefficient being small is a \emph{sufficient} condition for reliable off-policy prediction. 
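For concreteness, here is a minimal sketch of how the matrices $\SigmaReg$, $\CovarianceBootstrap{\policyicy}$, and $\CovarianceWithBootstrapReg{\policyicy}$, together with the quantity appearing on the right-hand side of the bound~\eqref{EqnOPCBootstrap}, might be assembled from samples. The synthetic features, the choice $\ensuremath{\lambda} \sim \dim/n$, and the use of an empirical average in place of the expectation of $\phi - \discount \phiBootstrap{\policyicy}$ under the target policy are all illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, d, gamma = 2000, 4, 0.9
lam = 4.0 * d / n

# Sampled features phi(s_i, a_i) and next-state features phi'_pi(s_i, a_i) for a fixed policy.
phi = rng.normal(size=(n, d)) / np.sqrt(d)
phi_next = 0.8 * phi + 0.2 * rng.normal(size=(n, d)) / np.sqrt(d)

Sigma_lam = phi.T @ phi / n + lam * np.eye(d)            # regularized covariance
Sigma_boot = phi.T @ phi_next / n                        # cross-covariance E[phi (phi'_pi)^T]

# Symmetric square root and inverse square root of Sigma_lam via an eigendecomposition.
evals, evecs = np.linalg.eigh(Sigma_lam)
S_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
S_inv_half = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

M = S_half - gamma * S_inv_half @ Sigma_boot
Lambda = M.T @ M                                         # covariance matrix with bootstrapping

# Feature direction of the target policy; here a plain empirical average as a stand-in.
target_dir = (phi - gamma * phi_next).mean(axis=0)
print("bootstrapped OPC bound:", d * target_dir @ np.linalg.solve(Lambda, target_dir))
\end{verbatim}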
This bound on the OPC coefficient always applies, and it reduces to the simpler one~\eqref{EqnOPCClosure} when weak Bellman closure holds, with no need to inform the algorithm of the simplified setting; see \cref{sec:LinearConcentrabilityBellmanClosure} for the proof. \begin{proposition}[OPC bounds under weak Bellman closure] \label{prop:LinearConcentrabilityBellmanClosure} Under Bellman closure, we have \begin{align} \label{EqnOPCClosure} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{ \phi}{\policyicy}}{\SigmaReg^{-1}}^2 \qquad \mbox{with probability at least $1 - \delta$.} \end{align} \end{proposition} \subsection{Actor-critic scheme for policy optimization} \label{sec:LinearApproximateOptimization} Having described a practical procedure to compute $\VminEmp{\policyicy}$, we now turn to the computation of the max-min estimator for policy optimization. We define the \emph{soft-max policy class} \begin{align} \label{eqn:SoftMax} \PolicyClass_{\text{lin}} \defeq \Big\{ \psa \mapsto \frac{e^{\innerprod{\phi\psa}{\ActorPar{}}}}{\sum_{\successoraction \in ASpace} e^{\innerprod{\phi(\state,\successoraction)}{\ActorPar{}}}} \mid \norm{\ActorPar{}}{2} \leq \nIter, \; \ActorPar{} \in \R^{\dim} \Big\}. \end{align} In order to compute the max-min solution~\eqref{eqn:MaxMinEmpirical} over this policy class, we implement an actor-critic method, in which the actor performs a variant of mirror descent.\footnote{Strictly speaking, it is mirror ascent, but we use the conventional terminology.} \begin{carlist} \item At each iteration $t = 1, \ldots, \nIter$, the policy $\ActorPolicy{t} \in \PolicyClass_{\text{lin}}$ can be identified with a parameter $\ActorPar{t} \in \R^\dim$. The sequence is initialized with $\ActorPar{1} = 0$. \item Using the finite test function class~\eqref{eqn:LinTestFunction} based on normalized eigenvectors, the pessimistic value estimate $\VminEmp{\ActorPolicy{t}}$ is computed by solving a quadratic program, as previously described. This computation returns the weight vector $\CriticPar{t}$ of the associated optimal action-value function. \item Using the action-value vector $\CriticPar{t}$, we update the actor's parameter as \begin{align} \label{eqn:LinearActorUpdate} \ActorPar{t+1} = \ActorPar{t} + \eta \CriticPar{t} \qquad \mbox{where $\eta = \sqrt{\frac{\log\abs{ASpace}}{2T}}$ is a stepsize parameter. } \end{align} \end{carlist} We now state a guarantee on the behavior of this procedure, based on two OPC coefficients: \begin{align} \label{EqnNewConc} \ConcentrabilityGenericSub{\widetilde \policy}{1} = \dim \; \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}}^2, \quad \mbox{and} \quad \ConcentrabilityGenericSub{\widetilde \policy}{2} = \dim \; \sup_{\policyicy \in \PolicyClass} \Big \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \Big \}. \end{align} Moreover, in making the following assertion, we assume that every weak solution $\QpiWeak{\policyicy}$ can be evaluated against the distribution of a comparator policy $\widetilde \policy \in \PolicyClass$, i.e., $\innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\widetilde \policy} = 0$ for all $\policyicy \in \PolicyClass $. (This assumption is still weaker than strong realizability.)
\begin{theorem}[Approximate Guarantees for Linear Soft-Max Optimization] \label{thm:LinearApproximation} Under the above conditions, running the procedure for $T$ rounds returns a policy sequence $\{\ActorPolicy{t}\}_{t=1}^T$ such that, for any comparator policy $\widetilde \policy\in \PolicyClass$, \begin{align} \label{EqnMirrorBound} \frac{1}{T} \sumiter \big \{ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \big \} & \leq \frac{c_1}{1-\discount} \biggl \{ \underbrace{\sqrt{\frac{\log \abs{ASpace}}{T}} \vphantom{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot}\frac{\dim \log(nT) + \log \frac{n}{\FailureProbability} }{n}}} }_{\text{Optimization error}} + \underbrace{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} \frac{\dim \log(nT) + \log \big(\frac{n }{\FailureProbability}\big) }{n}}}_{\text{Statistical error}} \biggr \}, \end{align} with probability at least $1 - \FailureProbability$. This bound always holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{2}$,} and moreover, it holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{1}$} when weak Bellman closure is in force. \end{theorem} \noindent See~\cref{sec:LinearApproximation} for the proof. Whenever Bellman closure holds, the result automatically inherits the more favorable concentrability coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{1}$, as originally derived in \cref{prop:LinearConcentrabilityBellmanClosure}. The resulting bound is only a factor of $\sqrt{\dim}$ worse than the lower bound recently established in the paper~\cite{zanette2021provable}. However, the method proposed here is robust, in that it provides guarantees even when Bellman closure does not hold. In this case, we have a guarantee in terms of the OPC coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{2}$. Note that it is a uniform version of the one derived previously in~\cref{prop:LinearConcentrability}, in that there is an additional supremum over the policy class. This supremum arises due to the use of a gradient-based method, which implicitly searches over policies in the bootstrapping terms; see \cref{sec:LinearDiscussion} for a more detailed discussion of this issue. \section*{Acknowledgment} AZ was partially supported by NSF-FODSI grant 2023505. In addition, this work was partially supported by NSF-DMS grant 2015454, NSF-IIS grant 1909365, as well as Office of Naval Research grant DOD-ONR-N00014-18-1-2640 to MJW. The authors are grateful to Nan Jiang and Alekh Agarwal for pointing out further connections with the existing literature, as well as to the reviewers for pointing out clarity issues. \section{Main Proofs} \label{sec:AnalysisProofs} This section is devoted to the proofs of our guarantees for general function classes---namely, \cref{prop:Deterministic}, which holds deterministically, and \cref{thm:NewPolicyEvaluation}, which gives high-probability bounds under a particular sampling model. \subsection{Proof of \cref{prop:Deterministic}} \label{SecProofPropDeterministic} Our proof makes use of an elementary simulation lemma, which we state here: \begin{lemma}[Simulation lemma] \label{lem:Simulation} For any policy $\policyicy$ and function $Q$, we have \begin{align} \Estart{Q-\Qpi{\policyicy}}{,\policyicy} & = \frac{\PolComplexGen{\policyicy}}{1-\discount}. \end{align} \end{lemma} \noindent See \cref{SecProofLemSimulation} for the proof of this claim.
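The identity in \cref{lem:Simulation} is easy to verify numerically. The sketch below builds a small random tabular MDP, computes $\Qpi{\policyicy}$ exactly, and checks that the value gap at the start distribution equals the Bellman error of $Q$ averaged over the normalized discounted occupancy measure of $\policyicy$, divided by $1-\discount$; this occupancy-weighted average is how we read the quantity $\PolComplexGen{\policyicy}$, in line with equation~\eqref{EqnConcSimple}. The construction is purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
S, A, gamma = 6, 3, 0.9

P = rng.dirichlet(np.ones(S), size=(S, A))        # transition kernel P[s, a, s']
R = rng.uniform(size=(S, A))                      # reward function
pi = rng.dirichlet(np.ones(A), size=S)            # policy pi[s, a]
nu = rng.dirichlet(np.ones(S))                    # start distribution

P_pi = np.einsum('sa,sap->sp', pi, P)             # state transition kernel under pi
r_pi = (pi * R).sum(axis=1)
V_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
Q_pi = R + gamma * P @ V_pi                       # exact action-value function of pi

Q = rng.normal(size=(S, A))                       # an arbitrary candidate Q-function
bell = Q - R - gamma * np.einsum('sap,pb,pb->sa', P, pi, Q)   # Bellman error of Q

# Normalized discounted occupancy induced by nu, and both sides of the simulation lemma.
d_state = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, nu)
d_sa = d_state[:, None] * pi
lhs = nu @ ((Q - Q_pi) * pi).sum(axis=1)
rhs = (d_sa * bell).sum() / (1 - gamma)
print(abs(lhs - rhs))                             # ~1e-15: the simulation lemma holds
\end{verbatim}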
\subsubsection{Proof of policy evaluation claims} First of all, we have the elementary bounds \begin{align*} \abs{\VminEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \min_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }, \quad \mbox{and} \\ \abs{\VmaxEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }. \end{align*} Consequently, in order to prove the bound~\eqref{EqnWidthBound}, it suffices to upper bound the common right-hand side of the two displays above. Since $\EmpiricalFeasibleSet{\policyicy} \subseteq \PopulationFeasibleSet{\policyicy}$, we have the upper bound \begin{align*} \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } & \leq \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \\ & = \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{\ensuremath{\mathbb{E}}ecti{[Q(S ,\policyicy) - \Qpi{\policyicy}(S ,\policyicy)]} {S \sim \nu_{\text{start}}}} \\ & \overset{\text{(i)}}{=} \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{\PolComplexGen{\policyicy}}, \end{align*} where step (i) follows from \cref{lem:Simulation}. Combined with the earlier displays, this completes the proof of the bound~\eqref{EqnWidthBound}. We now show the inclusion $[ \VminEmp{\policyicy}, \VmaxEmp{\policyicy}] \ni \Vpi{\policyicy}$ when weak realizability holds. By definition of weak realizability, there exists some $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy}$. In conjunction with our sandwich assumption, we are guaranteed that $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$, and consequently \begin{align*} \VminEmp{\policyicy} & = \min_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \min_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}, \quad \mbox{and} \\ \VmaxEmp{\policyicy} & = \max_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \max_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}. \end{align*} \subsubsection{Proof of policy optimization claims} We now prove the oracle inequality~\eqref{EqnOracle} on the value $\Vpi{\widetilde \pi}$ of a policy $\widetilde \pi$ that optimizes the max-min criterion. Fix an arbitrary comparator policy $\policycomp$.
Starting with the inclusion $[\VminEmp{\widetilde \pi}, \VmaxEmp{\widetilde \pi}] \ni \Vpi{\widetilde \pi}$, we have \begin{align*} \Vpi{\widetilde \pi} \stackrel{(i)}{\geq} \VminEmp{\widetilde \pi} \stackrel{(ii)}{\geq} \VminEmp{\policycomp} \; = \; \Vpi{\policycomp} - \Big(\Vpi{\policycomp} - \VminEmp{\policycomp} \Big) \stackrel{(iii)}{\geq} \Vpi{\policycomp} - \frac{1}{1-\discount} \max_{Q \in \PopulationFeasibleSet{\policycomp}} \abs{\PolComplexGen{\policycomp}}, \end{align*} where step (i) follows from the stated inclusion at the start of the argument; step (ii) follows since $\widetilde \pi$ solves the max-min program; and step (iii) follows from the bound $|\Vpi{\policycomp} - \VminEmp{\policycomp}| \leq \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policycomp}} \abs{\PolComplexGen{\policycomp}}$, as proved in the preceding section. This lower bound holds uniformly for all comparators $\policycomp$, from which the stated claim follows. \subsection{Proof of \cref{lem:Simulation}} \label{SecProofLemSimulation} For each $\timestep = 0, 1, 2, \ldots$, let $\ensuremath{\mathbb{E}}ecti{}{\timestep}$ be the expectation over the state-action pair at timestep $\timestep$ upon starting from $\nu_{\text{start}}$, so that we have $\Estart{Q - \Qpi{\policyicy} }{,\policyicy} = \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0}$ by definition. We claim that \begin{align} \label{EqnInduction} \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0} & = \sum_{\tau=1}^{\timestep} \discount^{\tau-1} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\tau-1} + \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy} ]}{\timestep} \qquad \mbox{for all $\timestep = 1, 2, \ldots$.} \end{align} For the base case $\timestep = 1$, we have \begin{align} \label{EqnBase} \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{0} = \ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q]}{0} + \ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} & = \ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{0} + \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}, \end{align} where we have used the definition of the Bellman evaluation operator to assert that $\ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} = \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}$. Since $Q - \BellmanEvaluation{\policyicy}Q = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$, the equality~\eqref{EqnBase} is equivalent to the claim~\eqref{EqnInduction} with $\timestep = 1$. Turning to the induction step, we now assume that the claim~\eqref{EqnInduction} holds for some $\timestep \geq 1$, and show that it holds at step $\timestep + 1$.
By a similar argument, we can write \begin{align*} \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{\timestep} = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q + \BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{\timestep} & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1} \\ & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1}. \end{align*} By the induction hypothesis, equality~\eqref{EqnInduction} holds for $\timestep$, and substituting the above equality shows that it also holds at time $\timestep + 1$. Since the equivalence~\eqref{EqnInduction} holds for all $\timestep$, we can take the limit as $\timestep \rightarrow \infty$, and doing so yields the claim. \subsection{Proof of \cref{thm:NewPolicyEvaluation}} \label{SecProofNewPolicyEvaluation} The proof relies on proving a high probability bound to \cref{EqnSandwich} and then invoking \cref{prop:Deterministic} to conclude. In the statement of the theorem, we require choosing $\epsilon > 0$ to satisfy the upper bound $\epsilon^2 \precsim \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}$, and then provide an upper bound in terms of $\sqrt{\ensuremath{\rho}(\epsilon,\delta)/n}$. It is equivalent to instead choose $\epsilon$ to satisfy the lower bound $\epsilon^2 \succsim \frac{\ensuremath{\rho}(\epsilon,\delta)}{n}$, and then provide upper bounds proportional to $\epsilon$. For the purposes of the proof, the latter formulation turns out to be more convenient and we pursue it here. \\ To streamline notation, let us introduce the shorthand $\inprod{f}{\Diff{\policyicy}(Q)} \defeq \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu$. For each pair $(Q, \policyicy)$, we then define the random variable \begin{align*} \ZvarShort \defeq \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\Diff{\policyicy}(Q)}{}\big|}{\TestNormaRegularizerEmp{}}. \end{align*} Central to our proof of the theorem is a uniform bound on this random variable, one that holds for all pairs $(Q, \policyicy)$. In particular, our strategy is to exhibit some $\epsilon > 0$ for which, upon setting $\ensuremath{\lambda} = 4 \epsilon^2$, we have the guarantees \begin{subequations} \begin{align} \label{EqnSandwichZvar} \frac{1}{4} \leq \frac{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}{ \sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \leq 2 \qquad & \mbox{uniformly for all $f \in \TestFunctionClass{}$, and} \\ \label{EqnUniformZvar} \ZvarShort \leq \epsilon \quad & \mbox{uniformly for all $(Q, \policyicy)$,} \end{align} \end{subequations} both with probability at least $1 - \delta$. In particular, consistent with the theorem statement, we show that this claim holds if we choose $\epsilon > 0$ to satisfy the inequality \begin{align} \label{EqnOriginalChoice} \epsilon^2 & \geq \bar{c} \frac{\ensuremath{\rho}(\epsilon, \delta)}{n} \; \end{align} where $\bar{c} > 0$ is a sufficiently large (but universal) constant. Supposing that the bounds~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar} hold, let us now establish the set inclusions claimed in the theorem. 
\paragraph{Inclusion $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}(\epsilon)$:} Define the random variable $\ensuremath{M}_n(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ |\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}|}{\TestNormaRegularizerEmp{}}$, and observe that $Q \in \PopulationFeasibleSetInfty{\policyicy}$ implies that $\ensuremath{M}_n(Q, \policyicy) = 0$. With this definition, we have \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \stackrel{(i)}{\leq} \ensuremath{M}_n(Q, \policyicy) + \ZvarShort \; \stackrel{(ii)}{\leq} \epsilon, \end{align*} where step (i) follows from the triangle inequality; and step (ii) follows since $\ensuremath{M}_n(Q, \policyicy) = 0$, and $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}. \paragraph{Inclusion $\EmpiricalFeasibleSet{\policyicy}(\epsilon) \subseteq \PopulationFeasibleSet{\policyicy}(4 \epsilon)$:} By the definition of $\PopulationFeasibleSet{\policyicy}(4 \epsilon)$, we need to show that \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}\big|}{\TestNormaRegularizerPop{}} \leq 4 \epsilon \qquad \mbox{for any $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Now we have \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \stackrel{(i)}{\leq} 2 \ensuremath{M}_n(Q, \policyicy) \stackrel{(ii)}{\leq} 2 \left \{ \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} + \ZvarShort \right \} \; \stackrel{(iii)}{\leq} 2 \big \{ \epsilon + \epsilon \big\} \; = \; 4 \epsilon, \end{align*} where step (i) follows from the sandwich relation~\eqref{EqnSandwichZvar}; step (ii) follows from the triangle inequality and the definition of $\ZvarShort$; and step (iii) follows since $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}, and \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \leq \epsilon, \qquad \mbox{using the inclusion $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Consequently, the remainder of our proof is devoted to establishing the claims~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar}. In doing so, we make repeated use of some Bernstein bounds, stated in terms of the shorthand $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$. \begin{lemma} \label{LemBernsteinBound} There is a universal constant $c$ such that each of the following statements holds with probability at least $1 - \delta$.
For any $f$, we have \begin{subequations} \begin{align} \label{EqnBernsteinFsquare} \Big| \|f\|_n^2 - \|f\|_\mu^2 \Big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \Big \}, \end{align} and for any $(Q, \policyicy)$ and any function $f$, we have \begin{align} \label{EqnBernsteinBound} \big|\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \|f\|_\infty \ensuremath{\Psi_\numobs}(\delta) \Big \}. \end{align} \end{subequations} \end{lemma} \noindent These bounds follow by identifying a martingale difference sequence, and applying a form of Bernstein's inequality tailored to the martingale setting. See Section~\ref{SecProofLemBernsteinBound} for the details. \subsection{Proof of the sandwich relation~\eqref{EqnSandwichZvar}} We claim that (modulo the choice of constants) it suffices to show that \begin{align} \label{EqnCleanSandwich} \Big| \|f\|_n - \|f\|_\mu \Big| & \leq \epsilon \qquad \mbox{uniformly for all $f \in \TestFunctionClass{}$.} \end{align} Indeed, when this bound holds, we have \begin{align*} \|f\|_n + 2 \epsilon \leq \|f\|_\mu + 3 \epsilon \leq \frac{3}{2} \big\{ \|f\|_\mu + 2\epsilon \big\}, \quad \mbox{and} \quad \|f\|_n + 2 \epsilon \geq \|f\|_\mu + \epsilon \geq \frac{1}{2} \big \{ \|f\|_\mu + 2 \epsilon \big\}, \end{align*} so that $\frac{\|f\|_n + 2 \epsilon}{\|f \|_\mu + 2 \epsilon} \in \big[ \frac{1}{2}, \frac{3}{2} \big]$. To relate this statement to the claimed sandwich, observe the inclusion $\frac{\|f\| + 2 \epsilon}{\sqrt{\|f\|^2 + 4 \epsilon^2}} \in [1, \sqrt{2}]$, where $\|f\|$ can be either $\|f\|_n$ or $\|f\|_\mu$. Combining this fact with our previous bound, we see that $\frac{\sqrt{\|f\|_n^2 + 4 \epsilon^2}}{\sqrt{\|f\|_\mu^2 + 4 \epsilon^2}} \in \Big[ \frac{1}{\sqrt{2}} \frac{1}{2}, \frac{3 \sqrt{2}}{2} \Big] \subset \big[\frac{1}{4}, 3 \big]$, as claimed. \\ The remainder of our analysis is focused on proving the bound~\eqref{EqnCleanSandwich}. Defining the random variable $Y_n(\ensuremath{f}) = \big| \|f\|_n - \|f\|_\mu \big|$, we need to establish a high-probability bound on $\sup_{f \in \TestFunctionClass{}} Y_n(\ensuremath{f})$. Let $\{f^1, \ldots, f^N \}$ be an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm. For any $f \in \TestFunctionClass{}$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$, whence \begin{align*} Y_n(f) \leq Y_n(f^j) + \big | Y_n(f^j) - Y_n(f) \big| & \stackrel{(i)}{\leq} Y_n(f^j) + \big| \|f^j\|_n - \|f\|_n \big| + \big| \|f^j\|_\mu - \|f\|_\mu \big| \\ & \stackrel{(ii)}{\leq} Y_n(f^j) + \|f^j - f\|_n + \|f^j - f\|_\mu \\ & \stackrel{(iii)}{\leq} Y_n(f^j) + 2 \epsilon, \end{align*} where steps (i) and (ii) follow from the triangle inequality; and step (iii) follows from the inequality $\max \{ \|f^j - f\|_n, \|f^j - f\|_\mu \} \leq \|f^j - f\|_\infty \leq \epsilon$. Thus, we have reduced the problem to bounding a finite maximum. Note that if $\max \{ \|f^j\|_n, \|f^j\|_\mu \} \leq \epsilon$, then we have $Y_n(f^j) \leq 2 \epsilon$ by the triangle inequality. Otherwise, we may assume that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$.
With probability at least $1 - \delta$, we have \begin{align*} \Big| \|f^j\|_n - \|f^j\|_\mu \Big| = \frac{ \Big| \|f^j\|_n^2 - \|f^j\|_\mu^2 \Big|}{\|f^j\|_n + \|f^j\|_\mu} & \stackrel{(i)}{\leq} \frac{ c \big \{ \|f^j\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \big \}}{\|f^j\|_\mu + \|f^j\|_n} \\ & \stackrel{(ii)}{\leq} c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \frac{\ensuremath{\Psi_\numobs}(\delta)}{\epsilon} \Big \}, \end{align*} where step (i) follows from the Bernstein bound~\eqref{EqnBernsteinFsquare} from Lemma~\ref{LemBernsteinBound}, and step (ii) uses the fact that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$. Taking a union bound over all $N$ elements in the cover and replacing $\delta$ with $\delta/N$, we have \begin{align*} \max_{j \in [N]} Y_n(f^j) & \leq c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \frac{\ensuremath{\Psi_\numobs}(\delta/N)}{\epsilon} \Big \} \end{align*} with probability at least $1 - \delta$. Recalling that $N = N_\epsilon(\TestFunctionClass{})$, our choice~\eqref{EqnOriginalChoice} of $\epsilon$ ensures that $\sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} \leq c \; \epsilon$ for some universal constant $c$. Putting together the pieces (and increasing the constant $\bar{c}$ in the choice~\eqref{EqnOriginalChoice} of $\epsilon$ as needed) yields the claim. \subsection{Proof of the uniform upper bound~\eqref{EqnUniformZvar}} \label{SecUniProof} We need to establish an upper bound on $\ZvarShort$ that holds uniformly for all $(Q, \policyicy)$. Our first step is to prove a high-probability bound for a fixed pair. We then apply a standard discretization argument to make it uniform in the pair. Note that we can write $\ZvarShort = \sup_{f \in \TestFunctionClass{}} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}$, where we have defined $V_n(f) \defeq |\inprod{f}{\Diff{\policyicy}(Q)}|$. Our first lemma provides a uniform bound on the latter random variables: \begin{lemma} \label{LemVbound} Suppose that $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$. Then we have \begin{align} \label{EqnVbound} V_n(f) & \leq c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \} \qquad \mbox{for all $f \in \TestFunctionClass{}$} \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent See \cref{SecProofLemVbound} for the proof of this claim. \\ We claim that the bound~\eqref{EqnVbound} implies that, for any fixed pair $(Q, \policyicy)$, we have \begin{align*} Z_n(Q, \policyicy) \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align*} Indeed, when Lemma~\ref{LemVbound} holds, for any $f \in \TestFunctionClass{}$, we can write \begin{align*} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} = \frac{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \; \frac{V_n(f)}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \stackrel{(i)}{\leq} \; 3 \; \frac{c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \}}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \; \stackrel{(ii)}{\leq} \; c' \epsilon, \end{align*} where step (i) uses the sandwich relation~\eqref{EqnSandwichZvar}, along with the bound~\eqref{EqnVbound}; and step (ii) follows given the choice $\ensuremath{\lambda} = 4 \epsilon^2$.
We have thus proved that, for any fixed $(Q, \policyicy)$ and any $\epsilon$ such that $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$, we have \begin{align} \label{EqnFixZvarBound} \ZvarShort & \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align} Our next step is to upgrade this bound to one that is uniform over all pairs $(Q, \policyicy)$. We do so via a discretization argument: let $\{Q^j\}_{j=1}^J$ and $\{\pi^k\}_{k=1}^K$ be $\epsilon$-coverings of $\Qclass{}$ and $\PolicyClass$, respectively. \begin{lemma} \label{LemDiscretization} We have the upper bound \begin{align} \sup_{Q, \policyicy} \ZvarShort & \leq \max_{(j,k) \in [J] \times [K]} \Zvar{Q^j}{\policyicy^k} + 4 \epsilon. \end{align} \end{lemma} \noindent See Section~\ref{SecProofLemDiscretization} for the proof of this claim. If we replace $\delta$ with $\delta/(J K)$, then we are guaranteed that the bound~\eqref{EqnFixZvarBound} holds uniformly over the family $\{ Q^j \}_{j=1}^J \times \{ \policyicy^k \}_{k=1}^K$. Recalling that $J = N_\epsilon(\Qclass{})$ and $K = N_\epsilon(\Pi)$, we conclude that for any $\epsilon$ satisfying the inequality~\eqref{EqnOriginalChoice}, we have $\sup_{Q, \policyicy} \ZvarShort \leq \ensuremath{\tilde{c}} \epsilon$ with probability at least $1 - \delta$. (Note that by suitably scaling up $\epsilon$ via the choice of constant $\bar{c}$ in the bound~\eqref{EqnOriginalChoice}, we can arrange for $\ensuremath{\tilde{c}} = 1$, as in the stated claim.) \subsection{Proofs of supporting lemmas} In this section, we collect the proofs of~\cref{LemVbound,LemDiscretization}, which were stated and used in~\Cref{SecUniProof}. \subsubsection{Proof of~\cref{LemVbound}} \label{SecProofLemVbound} We first localize the problem to the class $\ensuremath{\mathcal{F}}(\epsilon) = \{f \in \TestFunctionClass{} \mid \|f\|_\mu \leq \epsilon \}$. In particular, if there exists some $\tilde{f} \in \TestFunctionClass{}$ that violates~\eqref{EqnVbound}, then the rescaled function $f = \epsilon \tilde{f}/\|\tilde{f}\|_\mu$ belongs to $\ensuremath{\mathcal{F}}(\epsilon)$, and satisfies $V_n(f) \geq c \epsilon^2$. Consequently, it suffices to show that $V_n(f) \leq c \epsilon^2$ for all $f \in \ensuremath{\mathcal{F}}(\epsilon)$. Choose an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm with $N = N_\epsilon(\TestFunctionClass{})$ elements. Using this cover, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$. Thus, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can write \begin{align} \label{EqnOriginalInequality} V_n(f) \leq V_n(f^j) + V_n(f - f^j) \; \; \leq \underbrace{V_n(f^j)}_{T_1} + \underbrace{\sup_{g \in \ensuremath{\mathcal{G}}(\epsilon)} V_n(g)}_{T_2}, \end{align} where $\ensuremath{\mathcal{G}}(\epsilon) \defeq \{ f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{}, \|f_1 - f_2 \|_\infty \leq \epsilon \}$. We bound each of these two terms in turn. In particular, we show that each of $T_1$ and $T_2$ is upper bounded by $c \epsilon^2$ with high probability. \paragraph{Bounding $T_1$:} From the Bernstein bound~\eqref{EqnBernsteinBound}, we have \begin{align*} V_n(f^k) & \leq c \big \{ \|f^k\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \|f^k\|_\infty \ensuremath{\Psi_\numobs}(\delta/N) \big \} \qquad \mbox{for all $k \in [N]$} \end{align*} with probability at least $1 - \delta$.
Now for the particular $f^j$ chosen to approximate $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we have \begin{align*} \|f^j\|_\mu & \leq \|f^j - f\|_\mu + \|f\|_\mu \leq 2 \epsilon, \end{align*} where the inequality follows since $\|f^j - f\|_\mu \leq \|f^j - f\|_\infty \leq \epsilon$, and $\|f\|_\mu \leq \epsilon$. Consequently, we conclude that \begin{align*} T_1 & \leq c \Big \{ 2 \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \ensuremath{\Psi_\numobs}(\delta/N) \Big \} \; \leq \; c' \epsilon^2 \qquad \mbox{with probability at least $1 - \delta$.} \end{align*} where the final inequality follows from our choice of $\epsilon$. \paragraph{Bounding $T_2$:} Define $\ensuremath{\mathcal{G}} \defeq \{f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{} \}$. We need to bound a supremum of the process $\{ V_n(g), g \in \ensuremath{\mathcal{G}} \}$ over the subset $\ensuremath{\mathcal{G}}(\epsilon)$. From the Bernstein bound~\eqref{EqnBernsteinBound}, the increments $V_n(g_1) - V_n(g_2)$ of this process are sub-Gaussian with parameter $\|g_1 - g_2\|_\mu \leq \|g_1 - g_2\|_\infty$, and sub-exponential with parameter $\|g_1 - g_2\|_\infty$. Therefore, we can apply a chaining argument that uses the metric entropy $\log N_t(\ensuremath{\mathcal{G}})$ in the supremum norm. Moreover, we can terminate the chaining at $2 \epsilon$, because we are taking the supremum over the subset $\ensuremath{\mathcal{G}}(\epsilon)$, and it has sup-norm diameter at most $2 \epsilon$. Moreover, the lower interval of the chain can terminate at $2 \epsilon^2$, since our goal is to prove an upper bound of this order. Then, by using high probability bounds for the suprema of empirical processes (e.g., Theorem 5.36 in the book~\cite{wainwright2019high}), we have \begin{align*} T_2 \leq c_1 \; \int_{2 \epsilon^2}^{2 \epsilon} \ensuremath{\phi} \big( \frac{\log N_t(\ensuremath{\mathcal{G}})}{n} \big) dt + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} + 2 \epsilon^2 \end{align*} with probability at least $1 - \delta$. (Here the reader should recall our shorthand $\ensuremath{\phi}(s) = \max \{s, \sqrt{s} \}$.) Since $\ensuremath{\mathcal{G}}$ consists of differences from $\TestFunctionClass{}$, we have the upper bound $\log N_t(\ensuremath{\mathcal{G}}) \leq 2 \log N_{t/2}(\TestFunctionClass{})$, and hence (after making the change of variable $u = t/2$ in the integrals) \begin{align*} T_2 \leq c'_1 \int_{\epsilon^2}^{ \epsilon} \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} \; \leq \; \ensuremath{\tilde{c}} \epsilon^2, \end{align*} where the last inequality follows from our choice of $\epsilon$. \subsubsection{Proof of ~\cref{LemDiscretization}} \label{SecProofLemDiscretization} By our choice of the $\epsilon$-covers, for any $(Q, \policyicy)$, there is a pair $(Q^j, \policyicy^k)$ such that \begin{align*} \|Q^j - Q\|_\infty \leq \epsilon, \quad \mbox{and} \quad \|\policyicy^k - \policyicy\|_{\infty,1} = \sup_{\state} \|\policyicy^k(\cdot \mid \state) - \policyicy(\cdot \mid \state) \|_1 \leq \epsilon. 
\end{align*} Using this pair, an application of the triangle inequality yields \begin{align*} \big| \Zvar{Q}{\policyicy} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \underbrace{\big| \Zvar{Q}{\policyicy} - \Zvar{Q}{\policyicy^k} \big|}_{T_1} + \underbrace{\big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big|}_{T_2} \end{align*} We bound each of these terms in turn, in particular proving that $T_1 + T_2 \leq 24 \epsilon$. Putting together the pieces yields the bound stated in the lemma. \paragraph{Bounding $T_2$:} From the definition of $\ensuremath{Z_\numobs}$, we have \begin{align*} T_2 = \big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \sup_{f \in \TestFunctionClass{}} \frac{\big| \smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}|}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}. \end{align*} Now another application of the triangle inequality yields \begin{align*} |\smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}| & \leq |\smlinprod{f}{\TDError{(Q- Q^j)}{\policyicy^k}{}}_n| + ||\smlinprod{f}{\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)}|_\mu \\ & \leq \|f\|_n \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_n + \|f\|_\mu \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\mu \\ & \leq \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty + \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty \Big \} \end{align*} where step (i) follows from the Cauchy--Schwarz inequality. Now in terms of the shorthand \mbox{$\Delta \defeq Q - Q^j$,} we have \begin{subequations} \begin{align} \label{EqnCoffeeBell} \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty & = \sup_{\psa} \Big| \Delta \psa - \discount \E_{\successorstate\sim\Pro\psa} \big[ \Delta(\successorstate,\policyicy) \big] \Big| \leq 2 \|\Delta\|_\infty \leq 2 \epsilon. \end{align} An entirely analogous argument yields \begin{align} \label{EqnCoffeeTD} \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty \leq 2 \epsilon \end{align} \end{subequations} Conditioned on the sandwich relation~\eqref{EqnSandwichZvar}, we have $\sup_{f \in \TestFunctionClass{}} \frac{\max \{ \|f\|_n, \|f\|_\mu \}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \leq 4$. Combining this bound with inequalities~\eqref{EqnCoffeeBell} and~\eqref{EqnCoffeeTD}, we have shown that $T_2 \leq 4 \big \{2 \epsilon + 2 \epsilon \} = 16 \epsilon$. \paragraph{Bounding $T_1$:} In this case, a similar argument yields \begin{align*} |\smlinprod{f}{(\Diff{\policyicy} - \Diff{\policyicy^k})(Q)}| & \leq \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n + \|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu \}. \end{align*} Now we have \begin{align*} \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n & \leq \max_{i = 1, \ldots, n} \Big| \sum_{\action'} \big(\policyicy(\action' \mid \state_i) - \policyicy^k(\action' \mid \state_i) \big) Q(\state^+_i, \action') \Big| \\ & \leq \max_{\state} \sum_{\action'} |\policyicy(\action' \mid \state) - \policyicy^k(\action \mid \state)| \; \|Q\|_\infty \\ & \leq \epsilon. \end{align*} A similar argument yields that $\|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu | \leq \epsilon$, and arguing as before, we conclude that $T_1 \leq 4 \{\epsilon + \epsilon \} = 8 \epsilon$. \subsubsection{Proof of Lemma~\ref{LemBernsteinBound}} \label{SecProofLemBernsteinBound} Our proof of this claim makes use of the following known Bernstein bound for martingale differences (cf. 
Theorem 1 in the paper~\cite{beygelzimer2011contextual}). Recall the shorthand notation $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$. \begin{lemma}[Bernstein's Inequality for Martingales] \label{lem:Bernstein} Let $\{X_t\}_{t \geq 1}$ be a martingale difference sequence with respect to the filtration $\{ \mathcal F_t \}_{t \geq 1}$. Suppose that $|X_t| \leq 1$ almost surely, and let $\E_t$ denote expectation conditional on $\mathcal F_t$. Then for all $\delta \in (0,1)$, we have \begin{align} \Big| \frac{1}{n} \sum_{t=1}^n X_t \Big| & \leq 2 \Big[ \Big(\frac{1}{n} \sum_{t=1}^n \E_t X^2_t \Big) \ensuremath{\Psi_\numobs}(2 \delta) \Big]^{1/2} + 2 \ensuremath{\Psi_\numobs}(2 \delta) \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent With this result in place, we divide our proof into two parts, corresponding to the two claims~\eqref{EqnBernsteinBound} and~\eqref{EqnBernsteinFsquare} stated in~\cref{LemBernsteinBound}. \paragraph{Proof of the bound~\eqref{EqnBernsteinBound}:} Recall that at step $i$, the triple $(\state,\action,\identifier)$ is drawn according to a conditional distribution $\mu_i(\cdot \mid \mathcal F_i)$. Similarly, we let $d_i$ denote the distribution of $\sarsi{}$ conditioned on the filtration $\mathcal F_i$. Note that $\mu_i$ is obtained from $d_i$ by marginalizing out the pair $(\reward, \successorstate)$. Moreover, by the tower property of expectation, the Bellman error is equivalent to the average TD error. Using these facts, we have the equivalence \begin{align*} \innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}} {d_{i}} & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}} {d_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) \ensuremath{\mathbb{E}}ecti{[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}} {\substack{\reward \sim R\psa, \successorstate\sim\Pro\psa}}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) [\ensuremath{\mathcal{B}}orDefCompact{Q}{\policyicy}{}]\big\}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu_{i}}. \end{align*} As a consequence, we can write $\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu = \frac{1}{n} \sum_{i=1}^n \ensuremath{W}_i$ where \begin{align*} \ensuremath{W}_i & \defeq \TestFunctionDefCompact{}{i} [\TDErrorDefCompact{Q}{\policyicy}{i}] - \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}{[\TDErrorDefCompact{Q}{\policyicy}{}}]\big\}}{d_{i}} \end{align*} defines a martingale difference sequence (MDS). Thus, we can prove the claim by applying a Bernstein martingale inequality. Since $\|r\|_\infty \leq 1$ and $\|Q\|_\infty \leq 1$ by assumption, we have $\|\ensuremath{W}_i\|_\infty \leq 3 \|f\|_\infty$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{d_i} [\ensuremath{W}_i^2] & \leq 9 \; \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state_{i},\action_{i}, o_{i})] \; = \; 9 \|f\|_\mu^2. \end{align*} Consequently, the claimed bound~\eqref{EqnBernsteinBound} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}. 
\paragraph{Proof of the bound~\eqref{EqnBernsteinFsquare}:} In this case, we have the additive decomposition \begin{align*} \|\ensuremath{f}c\|_n^2 - \|\ensuremath{f}\|_\mu^2 & = \frac{1}{n} \sum_{i=1}^n \Big \{ \underbrace{\ensuremath{f}c^2(\state_{i},\action_{i},o_{i}) - \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state, \action, o)]}_{\ensuremath{\martone'}_i} \Big\}, \end{align*} where $\{\ensuremath{\martone'}_i\}_{i=1}^n$ again defines a martingale difference sequence. Note that $\|\ensuremath{\martone'}_i\|_\infty \leq 2 \|\ensuremath{f}c\|_\infty^2 \leq 2$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} [(\ensuremath{\martone'}_i)^2] & \stackrel{(i)}{\leq} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[ \ensuremath{f}c^4(S, A, \ensuremath{O}) \big] \; \leq \; \|\ensuremath{f}c\|_\infty^2 \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[\ensuremath{f}c^2(S, A, \ensuremath{O}) \big] \; \stackrel{(ii)}{\leq} \; \|\ensuremath{f}c\|_\mu^2, \end{align*} where step (i) uses the fact that the variance of $\ensuremath{f}c^2$ is at most the fourth moment, and step (ii) uses the bound $\|\ensuremath{f}c\|_\infty \leq 1$. Consequently, the claimed bound~\eqref{EqnBernsteinFsquare} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}. \tableofcontents \section{Additional Discussion and Results} \input{3p1p1-BRO} \subsection{Comparison with Weight Learning Methods} \label{sec:WeightLearning} The work closest to ours is \cite{jiang2020minimax}. They also use an auxiliary weight function class, which is comparable to our test class. However, the test class is used in different ways; we compare them in this section at the population level.\footnote{ The empirical estimator in \cite{jiang2020minimax} does not take into account the `alignment' of each weight function with respect to the dataset, which we do through self-normalization and regularization in the construction of the empirical estimator. This precludes obtaining the same type of strong finite time guarantees that we are able to derive here.} Let us assume that weak realizability holds and that $\TestFunctionClass{}$ is symmetric, i.e., if $\TestFunction{} \in \TestFunctionClass{}$ then $-\TestFunction{} \in \TestFunctionClass{}$ as well. At the population level, our program seeks to solve \begin{align} \label{eqn:PopProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0, \end{align} \defw{w} which is equivalent for any $w \in \TestFunctionClass{}$ to \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0. \end{align*} Removing the constraints leads to the upper bound \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. 
\end{align*} Since this is a valid upper bound for any $w{} \in \TestFunctionClass{}$, minimizing over $w$ must still yield an upper bound, which reads \begin{align*} \inf_{w \in \TestFunctionClass{}}\sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} This is the population program for ``weight learning'', as described in \cite{jiang2020minimax}. It follows that Bellman residual orthogonalization always produces tighter confidence intervals than ``weight learning'' at the population level. Another interesting comparison is with ``value learning'', also described in \cite{jiang2020minimax}. In this case, assuming symmetric $\TestFunctionClass{}$, we can equivalently express the population program \eqref{eqn:PopProgComparison} using a Lagrange multiplier as follows \begin{align} \label{eqn:LagProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \sup_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}} \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align} Rearranging we obtain \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \inf_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}}& \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} The ``value learning'' program proposed in \cite{jiang2020minimax} has a similar formulation to ours but differs in two key aspects. The first---and most important---is that \cite{jiang2020minimax} ignores the Lagrange multiplier; this means ``value learning'' is not longer associated to a constrained program. While the Lagrange multiplier could be ``incorporated'' into the test class $\TestFunctionClass{}$, doing so would cause the entropy of $\TestFunctionClass{}$ to be unbounded. Another point of difference is that ``value learning'' uses such expression with $\lambda = 1$ to derive the confidence interval \emph{lower bound}, while we use it to construct the confidence interval \emph{upper bound}. While this may seem like a contradiction, we notice that the expression is derived using different assumptions: we assume weak realizability of $Q$, while \cite{jiang2020minimax} assumes realizability of the density ratios between $\mu$ and the discounted occupancy measure $\policyicy$. \subsection{Additional Literature} \label{sec:Literature} Here we summarize some additional literature. The efficiency of off-policy tabular RL has been investigated in the papers~\cite{yin2020near,yin2020asymptotically,yin2021towards}. For empirical studies on offline RL, see the papers~\cite{laroche2019safe,jaques2019way,wu2019behavior,agarwal2020optimistic,wang2020critic,siegel2020keep,nair2020accelerating,yang2021pessimistic,kumar2021should,buckman2020importance,kumar2019stabilizing,kidambi2020morel,yu2020mopo}. Some of the classical RL algorithm are presented in the papers~\cite{munos2003error,munos2005error,antos2007fitted,antos2008learning,farahmand2010error,farahmand2016regularized}. For a more modern analysis, see~\cite{chen2019information}. These works generally make additionally assumptions on top of realizability. 
Alternatively, one can use importance sampling \cite{precup2000eligibility,thomas2016data,jiang2016doubly,farajtabar2018more}. A more recent idea is to look at the distributions themselves~\cite{liu2018breaking,nachum2019algaedice,xie2019towards,zhang2020gendice,zhang2020gradientdice,yang2020off,kallus2019efficiently}. Offline policy optimization with pessimism has been studied in the papers~\cite{liu2020provably,rashidinejad2021bridging,jin2021pessimism,xie2021bellman,zanette2021provable,yinnear,uehara2021pessimistic}. There exists a fairly extensive literature on lower bounds with linear representations, including the two papers~\cite{zanette2020exponential,wang2020statistical} that concurrently derived the first exponential lower bounds for the offline setting, and \cite{foster2021offline} proves that realizability and coverage alone are insufficient. In the context of off-policy optimization several works have investigated methods that assume only realizability of the optimal policy~\cite{xie2020batch,xie2020Q}. Related work includes the papers~\cite{duan2020minimax,duan2021risk,jiang2020minimax,uehara2020minimax,tang2019doubly,nachum2020reinforcement,voloshin2021minimax,hao2021bootstrapping,zhang2022efficient,uehara2021finite,chen2022well,lee2021model}. Among concurrent works, we note \cite{zhan2022offline}. \subsection{Definition of Weak Bellman Closure} \label{sec:appWeakClosure} \begin{definition}[Weak Bellman Closure] The Bellman operator $\BellmanEvaluation{\policyicy}$ is \emph{weakly closed} with respect to the triple $\big( \Qclass{\policyicy}, \TestFunctionClass{\policyicy}, \mu \big)$ if for any $Q \in \Qclass{\policyicy}$, there exists a predictor $\QpiProj{\policyicy}{Q} \in \Qclass{\policyicy}$ such that \begin{align} \innerprodweighted{\TestFunction{}}{\QpiProj{\policyicy}{Q}}{\mu} = \innerprodweighted{\TestFunction{}} {\BellmanEvaluation{\policyicy}(Q)} {\mu}. \end{align} \end{definition} \subsection{Additional results on the concentrability coefficients} \label{sec:appConc} \subsubsection{Testing with the identity function} Suppose that the identity function $\1$ belongs to the test class. Doing so amounts to requiring that the Bellman error is controlled in an average sense over all the data. When this choice is made, we can derive some generic upper bounds on $K^\policy$, which we state and prove here: \begin{lemma} If $\1 \in \TestFunctionClass{\policyicy}$, then we have the upper bounds \begin{align} \label{eqn:InEq} K^\policy & \stackrel{(i)}{\leq} \frac{\max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}|^2}{ \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2} \; \stackrel{(ii)}{\leq} K^\policy_* \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{|\PolMuComplex|^2}. \end{align} \end{lemma} \begin{proof} Since $\1 \in \TestFunctionClass{}$, the definition of $\PopulationFeasibleSet{\policyicy}$ implies that \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2 & \leq \big( \munorm{\1}^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n} = \big( 1 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}. \end{align*} The upper bound (i) then follows from the definition of $K^\policy$. The upper bound (ii) follows since the right hand side is the maximum ratio. 
\end{proof} Note that large values of $K^\policy_*$ (defined in \cref{eqn:InEq}) can arise when there exist $Q$-functions in the set $\PopulationFeasibleSet{\policyicy}$ that have low average Bellman error under the data-generating distribution $\mu$, but relatively large values under $\policyicy$. Of course, the likelihood of such unfavorable choices of $Q$ is reduced when we use a larger test function class, which then reduces the size of $\PopulationFeasibleSet{\policyicy}$. However, we pay a price in choosing a larger test function class, since the choice~\eqref{EqnRadChoice} of the radius $\ensuremath{\rho}$ needed for \cref{thm:NewPolicyEvaluation} depends on its complexity. \subsubsection{Mixture distributions} Now suppose that the dataset consists of a collection of trajectories collected by different protocols. More precisely, for each $j = 1, \ldots, \nConstraints$, let $\Dpi{j}$ be a particular protocol for generating a trajectory. Suppose that we generate data by first sampling a random index $J \in [\nConstraints]$ according to a probability distribution $\{\Frequencies{j} \}_{j=1}^{\nConstraints}$, and conditioned $J = j$, we sample $(\state,\action,\identifier)$ according to $\Dpi{j}$. The resulting data follows a mixture distribution, where we set $o = j$ to tag the protocol used to generate the data. To be clear, for each sample $i = 1, \ldots, n$, we sample $J$ as described, and then draw a single sample $(\state,\action,\identifier) \sim \Dpi{j}$ . Following the intuition given in the previous section, it is natural to include test functions that code for the protocol---that is, the binary-indicator functions \begin{align} f_j (\state,\action,\identifier) & = \begin{cases} 1 & \mbox{if $o=j$} \\ 0 & \mbox{otherwise.} \end{cases} \end{align} This test function, when included in the weak formulation, enforces the Bellman evaluation equations for the policy $\policyicy \in \PolicyClass$ under consideration along the distribution induced by each data-generating policy $\Dpi{j}$. \begin{lemma}[Mixture Policy Concentrability] \label{lem:MixturePolicyConcentrability} Suppose that $\mu$ is an $\nConstraints$-component mixture, and that the indicator functions $\{f_j \}_{j=1}^\nConstraints$ are included in the test class. Then we have the upper bounds \begin{align} \label{EqnMixturePolicyUpper} K^\policy & \stackrel{(i)}{\leq} \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \frac{\max \limits_{Q\in \PopulationFeasibleSet{\policyicy}} [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} {\max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \; \stackrel{(ii)}{\leq} \; \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \max_{Q\in \PopulationFeasibleSet{\policyicy}} \left \{ \frac{ [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} { \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \right \}. 
\end{align} \end{lemma} \begin{proof} From the definition of $K^\policy$, it suffices to show that \begin{align*} \max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \; \big(1 + \nConstraints \ensuremath{\lambda} \big). \end{align*} A direct calculation yields $\innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \ensuremath{\mathbb{E}}ecti{\Indicator \{o = j \} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}$. Moreover, since each $\TestFunction{j}$ belongs to the test class by assumption, we have the upper bound $\Big|\Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \; \sqrt{ \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda}}$. Squaring each term and summing over the constraints yields \begin{align*} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \sum_{j=1}^\nConstraints \big( \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda} \big) = \frac{\ensuremath{\rho}}{n} \big(1 + \nConstraints \ensuremath{\lambda} \big), \end{align*} where the final equality follows since $\sum_{j=1}^{\nConstraints} \munorm{\TestFunction{j}}^2 = 1$. \end{proof} As shown by the upper bound, the off-policy coefficient $K^\policy$ provides a measure of how the squared-averaged Bellman errors along the policies $\{ \Dpi{j} \}_{j=1}^\nConstraints$, weighted by their probabilities $\{ \Frequencies{j} \}_{j=1}^{\nConstraints}$, transfers to the evaluation policy $\policyicy$. Note that the regularization parameter $\ensuremath{\lambda}$ decays as a function of the sample size---e.g., as $1/n$ in \cref{thm:NewPolicyEvaluation}---the factor $(1 + \nConstraints \TestFunctionReg)/(1 + \TestFunctionReg)$ approaches one as $n$ increases (for a fixed number $\nConstraints$ of mixture components). \subsubsection{Bellman Rank for off-policy evaluation} In this section, we show how more refined bounds can be obtained when---in addition to a mixture condition---additional structure is imposed on the problem. In particular, we consider a notion similar to that of Bellman rank~\cite{jiang17contextual}, but suitably adapted\footnote{ The original definition essentially takes $\widetilde {\PolicyClass}$ as the set of all greedy policies with respect to $\widetilde {\Qclass{}}$. Since a dataset need not originate from greedy policies, the definition of Bellman rank is adapted in a natural way.} to the off-policy setting. Given a policy class $\widetilde {\PolicyClass}$ and a predictor class $\widetilde {\Qclass{}}$, we say that it has Bellman rank is $\dim$ if there exist two maps $\BRleft{} : \widetilde {\PolicyClass} \rightarrow \R^\dim$ and $\BRright{}: \widetilde {\Qclass{}} \rightarrow \R^\dim$ such that \begin{align} \label{eqn:BellmanRank} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \smlinprod{\BRleft{\policyicy}}{\BRright{Q}}_{\R^d}, \qquad \text{for all } \; \policyicy \in \widetilde {\PolicyClass} \; \text{and} \; Q \in \widetilde {\Qclass{}}. 
\end{align} In words, the average Bellman error of any predictor $Q$ along any given policy $\policyicy$ can be expressed as the Euclidean inner product between two $\dim$-dimensional vectors, one for the policy and one for the predictor. As in the previous section, we assume that the data is generated by a mixture of $\nConstraints$ different distributions (or equivalently policies) $\{\Dpi{j} \}_{j=1}^\nConstraints$. In the off-policy setting, we require that the policy class $\widetilde {\PolicyClass}$ contains all of these policies as well as the target policy---viz. $\{\Dpi{j} \} \cup \{ \policyicy \} \subseteq \widetilde {\PolicyClass}$. Moreover, the predictor class $\widetilde {\Qclass{}}$ should contain the predictor class for the target policy, i.e., $\Qclass{\policyicy} \subseteq \widetilde {\Qclass{}}$. We also assume weak realizability for this discussion. Our result depends on a positive semidefinite matrix determined by the mixture weights $\{\Frequencies{j} \}_{j=1}^\nConstraints$ along with the embeddings $\{\BRleft{\Dpi{j}}\}_{j=1}^\nConstraints$ of the associated policies that generated the data. In particular, we define \begin{align*} \BRCovariance{} = \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top. \end{align*} Assuming that this is matrix is positive definite,\footnote{If not, one can prove a result for a suitably regularized version.} we define the norm $\|u\|_{\BRCovariance{}^{-1}} = \sqrt{u^T (\BRCovariance{})^{-1} u}$. With this notation, we have the following bound. \begin{lemma}[Concentrability with Bellman Rank] \label{lem:ConcentrabilityBellmanRank} For a mixture data-generation process and under the Bellman rank condition~\eqref{eqn:BellmanRank}, we have the upper bound \begin{align} K^\policy & \leq \; \frac{1 + \nConstraints \TestFunctionReg}{1 + \TestFunctionReg} \; \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2, \end{align} \end{lemma} \begin{proof} Our proof exploits the upper bound (ii) from the claim~\eqref{EqnMixturePolicyUpper} in~\cref{lem:MixturePolicyConcentrability}. We first evaluate and redefine the ratio in this upper bound. Weak realizability coupled with the Bellman rank condition~\eqref{eqn:BellmanRank} implies that there exists some $\QpiWeak{\policyicy}$ such that \begin{align*} 0 & = \innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\Dpi{j}} = \Frequencies{j} \inprod{\BRleft{\Dpi{j}}}{\BRright{\QpiWeak{\policyicy}}}, \qquad \mbox{for all $j = 1, \ldots, \nConstraints$, and} \\ 0 & = \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{\BRright{\QpiWeak{\policyicy}}}. \end{align*} Therefore, we have the equivalences $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} = \inprod{\BRleft{\Dpi{j}}}{ (\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$ for all $j = 1, \ldots, \nConstraints$, as well as $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{(\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$. 
Introducing the shorthand $\BRDelta{Q} = \BRright{Q} - \BRright{\QpiWeak{\policyicy}}$, we can bound the ratio as follows \begin{align*} \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{\BRDelta{Q}})^2 }{ \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 (\inprod{\BRleft{\Dpi{j}}}{ \BRDelta{Q}})^2 } \Big \}& = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{ \BRDelta{Q}})^2 }{\BRDelta{Q}^\top \Big( \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top \Big) \BRDelta{Q} } \Big \} \\ & = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ ( \smlinprod{\BRleft{\policyicy}}{ \BRCovariance{}^{-\frac{1}{2}} \BRDeltaCoord{Q}})^2 }{ \|\BRDeltaCoord{Q}\|_2^2} \Big \} \qquad \mbox{where $\BRDeltaCoord{Q} = \BRCovariance{}^{\frac{1}{2}}\BRDelta{Q}$}\\ & \leq \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2, \end{align*} where the final step follows from the Cauchy--Schwarz inequality. \end{proof} Thus, when performing off-policy evaluation with a mixture distribution under the Bellman rank condition, the coefficient $K^\policy$ is bounded by the alignment between the target policy $\policyicy$ and the data-generating distribution $\mu$, as measured in the the embedded space guaranteed by the Bellman rank condition. The structure of this upper bound is similar to a result that we derive in the sequel for linear approximation under Bellman closure (see~\cref{prop:LinearConcentrabilityBellmanClosure}). \subsection{Further comments on the prediction error test space} \label{sec:appBubnov} A few comments on the bound in \cref{lem:PredictionError}: as in our previous results, the pre-factor $\frac{\norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }$ serves as a normalization factor. Disregarding this leading term, the second ratio measures how the prediction error $QErr = Q - \QpiWeak{\policyicy} $ along $\mu$ transfers to $\policyicy$, as measured via the operator $\IdentityOperator - \discount\TransitionOperator{\policyicy}$. This interaction is complex, since it includes the \emph{bootstrapping term} $-\discount\TransitionOperator{\policyicy}$. (Notably, such a term is not present for standard prediction or bandit problems, in which case $\discount = 0$.) This term reflects the dynamics intrinsic to reinforcement learning, and plays a key role in proving ``hard'' lower bounds for offline RL (e.g., see the work~\cite{zanette2020exponential}). Observe that the bound in \cref{lem:PredictionError} requires only weak realizability, and thus it always applies. This fact is significant in light of a recent lower bound~\cite{foster2021offline}, showing that without Bellman closure, off-policy learning is challenging even under strong concentrability assumption (such as bounds on density ratios). \cref{lem:PredictionError} gives a sufficient condition without Bellman closure, but with a different measure that accounts for bootstrapping. 
\\ \noindent If, in fact, (weak) Bellman closure holds, then~\cref{lem:PredictionError} takes the following simplified form: \begin{lemma}[OPC coefficient under Bellman closure] \label{lem:PredictionErrorBellmanClosure} If $\QclassErr{\policyicy} \subseteq \TestFunctionClass{\policyicy}$ and weak Bellman closure holds, then \begin{align*} K^\policy \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \, \cdot \, \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm {QErrNC} {\policyicy}^2 } { \norm{QErrNC} {\mu}^2 } \Big \}. \end{align*} \end{lemma} \noindent See \cref{sec:PredictionErrorBellmanClosure} for the proof. \\ In such case, the concentrability measures the increase in the discrepancy $Q - Q'$ of the feasible predictors when moving from the dataset distribution $\mu$ to the distribution of the target policy $\policyicy$. In \cref{sec:DomainKnowledge}, we give another bound under weak Bellman closure, and thereby recover a recent result due to Xie et al.~\cite{xie2021bellman}. Finally, in~\cref{sec:Linear}, we provide some applications of this concentrability factor to the linear setting. \subsection{From Importance Sampling to Bellman Closure} \label{sec:IS2BC} Let us show an application of \cref{prop:MultipleRobustness} on an example with just two test spaces. Suppose that we suspect that Bellman closure holds, but rather than committing to such assumption, we wish to fall back to an importance sampling estimator if Bellman closure does not hold. In order to streamline the presentation of the idea, let us introduce the following setup. Let $\policyicyBeh{}$ be a behavioral policy that generates the dataset, i.e., such that each state-action $\psa$ in the dataset is sampled from its discounted state distribution $\DistributionOfPolicy{\policyicyBeh{}}$. Next, let the identifier $o$ contain the trajectory from $\nu_{\text{start}}$ up to the state-action pair $\psa$ recorded in the dataset. That is, each tuple $\sarsi{}$ in the dataset $\Dataset$ is such that $\psa \sim \DistributionOfPolicy{\policyicyBeh{}}$ and $o$ contains the trajectory up to $\psa$. We now define the test spaces. The first one is denoted with $\TestFunctionISClass{\policyicy}$ and leverages importance sampling. It contains a single test function defined as the importance sampling estimator \begin{align} \label{eqn:ImportanceSampling} \TestFunctionISClass{\policyicy} = \{ \TestFunction{\policyicy}\}, \qquad \text{where} \; \TestFunction{\policyicy}(\state,\action,\identifier) = \frac{1}{\scaling{\policyicy}} \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) }. \end{align} The above product is over the random trajectory contained in the identifier $o$. The normalization factor $\scaling{\policyicy} \in \R$ is connected to the maximum range of the importance sampling estimator, and ensures that $\sup_{(\state,\action,\identifier)} \TestFunction{\policyicy}(\state,\action,\identifier) \leq 1$. The second test space is the prediction error test space $\QclassErr{\policyicy}$ defined in \cref{sec:ErrorTestSpace}. With this choice, let us define three concentrability coefficients. 
$\ConcentrabilityGenericSub{\policyicy}{1}$ arises from importance sampling, $\ConcentrabilityGenericSub{\policyicy}{2}$ from the prediction error test space when Bellman closure holds and $\ConcentrabilityGenericSub{\policyicy}{3}$ from the prediction error test space when just weak realizability holds. They are defined as \begin{align*} \ConcentrabilityGenericSub{\policyicy}{1} \leq \sqrt{ \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg} } \qquad \ConcentrabilityGenericSub{\policyicy}{2} \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu}^2 } \times \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }, \qquad \ConcentrabilityGenericSub{\policyicy}{3} \leq c_1 \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2}. \end{align*} \begin{lemma}[From Importance Sampling to Bellman Closure] \label{lem:IS} The choice $\TestFunctionClass{\policyicy} = \TestFunctionISClass{\policyicy} \cup \QclassErr{\policyicy} \; \text{for all } \policyicy\in\PolicyClass$ ensures that with probability at least $1-\FailureProbability$, the oracle inequality~\eqref{EqnOracle} holds with $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2}, \ConcentrabilityGenericSub{\policyicy}{3} \} $ if weak Bellman closure holds and $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2} \} $ otherwise. \end{lemma} \begin{proof} Let us calculate the {off-policy cost coefficient} associated with $\TestFunctionISClass{\policyicy}$. The unbiasedness of the importance sampling estimator gives us the following population constraint (here $\mu = \DistributionOfPolicy{\policyicyBeh{}}$) \begin{align*} \abs{ \innerprodweighted{\TestFunction{\policyicy}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \abs{ \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \frac{1}{\scaling{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } = \frac{1}{\scaling{\policyicy}} \abs{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } \leq \frac{\ConfidenceInterval{\FailureProbability}}{\sqrt{n}} \sqrt{\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg} \end{align*} The norm of the test function reads (notice that $\mu$ generates $(\state,\action,\identifier)$ here) \begin{align*} \norm{\TestFunction{\policyicy}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}^2}{\mu} = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \Bigg[ \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } }{\mu}\Bigg]^2 = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \Bigg[ \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } }{\policyicy}\Bigg] \leq \frac{1}{\scaling{\policyicy}}. 
\\ \end{align*} Together with the prior display, we obtain \begin{align*} \frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} {\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg)} \leq \frac{\ensuremath{\rho}}{n}. \end{align*} The resulting concentrability coefficient is therefore \begin{align*} \ConcentrabilityGeneric{\policyicy} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{n}{\ensuremath{\rho}} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg)} {\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} \leq \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg}. \end{align*} Chaining the above result with \cref{lem:BellmanTestFunctions,lem:PredictionError}, using \cref{prop:MultipleRobustness} and plugging back into \cref{thm:NewPolicyEvaluation} yields the thesis. \end{proof} \subsection{Implementation for Off-Policy Predictions} \label{sec:LinearConfidenceIntervals} In this section, we describe a computationally efficient way in which to compute the upper/lower estimates~\eqref{eqn:ConfidenceIntervalEmpirical}. Given a finite set of $n_{\TestFunctionClass{}}$ test functions, it involves solving a quadratic program with $2 n_{\TestFunctionClass{}} + 1$ constraints. Let us first work out a concise description of the constraints defining membership in $\EmpiricalFeasibleSet{\policyicy}$. Introduce the shorthand $\nLinEffSq{\TestFunction{}} \defeq \norm{\TestFunction{j}}{n}^2 + \TestFunctionReg$. We then define the empirical average feature vector $\phiEmpiricalExpecti{\TestFunction{}}$, the empirical average reward $\LinEmpiricalReward{\TestFunction{}}$, and the average next-state feature vector $\phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}$ as \begin{align*} \quad \phiEmpiricalExpecti{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\psa{} \in \Dataset}rs \TestFunction{}\psa \phi\psa, \qquad \LinEmpiricalReward{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\psa{} \in \Dataset}rs \TestFunction{}\psa\reward, \\ \qquad \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\psa{} \in \Dataset}rs \TestFunction{}\psa \phi(\successorstate,\policyicy). \end{align*} In terms of this notation, each empirical constraint defining $\EmpiricalFeasibleSet{\policyicy}$ can be written in the more compact form \begin{align*} \frac{\abs{\innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}}{n}} } {\nLinEff{\TestFunction{}}} & = \Big | \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}}. 
\end{align*} Then the set of empirical constraints can be written as a set of constraints linear in the critic parameter $\CriticPar{}$ coupled with the assumed regularity bound on $\CriticPar{}$ \begin{align} \label{eqn:LinearConstraints} \EmpiricalFeasibleSet{\policyicy} = \Big\{ \CriticPar{} \in \R^d \mid \norm{\CriticPar{}}{2} \leq 1, \quad \mbox{and} \quad - \sqrt{\frac{\ensuremath{\rho}}{n}} \leq \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \Big\}. \end{align} Thus, the estimates $\VminEmp{\policy}$ (respectively $\VmaxEmp{\policy}$) acan be computed by minimizing (respectively maximizing) the linear objective function $w \mapsto \inprod{[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathbb{E}}ecti{\phi(\state,\action)} {\action \sim \policyicy}}{\state \sim \nu_{\text{start}}}]}{\CriticPar{}}$ subject to the $2 n_{\TestFunctionClass{}} + 1$ constraints in equation~\eqref{eqn:LinearConstraints}. Therefore, the estimates can be computed in polynomial time for any test function with a cardinality that grows polynomially in the problem parameters. \subsection{Discussion of Linear Approximate Optimization} \label{sec:LinearDiscussion} Here we discuss the presence of the supremum over policies in the coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{1}$ from equation~\eqref{EqnNewConc}. In particular, it arises because our actor-critic method iteratively approximates the maximum in the max-min estimate~\eqref{eqn:MaxMinEmpirical} using a gradient-based scheme. The ability of a gradient-based method to make progress is related to the estimation accuracy of the gradient, which is the $Q$ estimates of the actor's current policy $\ActorPolicy{t}$; more specifically, the gradient is the $Q$ function parameter $\CriticPar{t}$. In the general case, the estimation error of the gradient $\CriticPar{t}$ depends on the policy under consideration through the matrix $\CovarianceWithBootstrapReg{\ActorPolicy{t}}$, while it is independent in the special case of Bellman closure (as it depends on just $\Sigma$). As the actor's policies are random, this yields the introduction of a $\sup_{\policyicy\in\PolicyClass}$ in the general bound. Notice the method still competes with the best comparator $\widetilde \policy$ by measuring the errors along the distribution of the comparator (through the operator $\ensuremath{\mathbb{E}}ecti{}{\widetilde \policy}$). To be clear, $\sup_{\policyicy\in\PolicyClass}$ may not arise with approximate solution methods that do not rely only on the gradient to make progress (such as second-order methods); we leave this for future research. Reassuringly, when Bellman closure, the approximate solution method recovers the standard guarantees established in the paper~\cite{zanette2021provable}. \section{General Guarantees} \label{sec:GeneralGuarantees} \subsection{A deterministic guarantee} We begin our analysis stating a deterministic set of sufficient conditions for our estimators to satisfy the guarantees~\eqref{EqnPolEval} and~\eqref{EqnOracle}. This formulation is useful, because it reveals the structural conditions that underlie success of our estimators, and in particular the connection to weak realizability. 
In Section~\ref{SecHighProb}, we exploit this deterministic result to show that, under a fairly general sampling model, our estimators enjoy these guarantees with high probability. In the previous section, we introduced the population level set $\PopulationFeasibleSet{\policyicy}$ that arises in the statement of our guarantees. Also central in our analysis is the infinite data limit of this set. More specifically, for any fixed $(\ensuremath{\rho}, \ensuremath{\lambda})$, if we take the limit $n \rightarrow \infty$, then $\PopulationFeasibleSet{\policyicy}$ reduces to the set of all solutions to the weak formulation~\eqref{eqn:WeakFormulation}---that is \begin{align} \PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) = \{ Q \in \Qclass{\policyicy} \mid \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\pi}{}} {\mu} = 0 \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \}. \end{align} As before, we omit the dependence on the test function class $\TestFunctionClass{\policyicy}$ when it is clear from context. By construction, we have the inclusion $\PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \SuperPopulationFeasible$ for any non-negative pair $(\ensuremath{\rho}, \ensuremath{\lambda})$. Our first set of guarantees hold when the random set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the \emph{sandwich relation} \begin{align} \label{EqnSandwich} \PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \EmpiricalFeasibleSet{\policyicy}(\ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}) \subseteq \PopulationFeasibleSet{\policyicy}(4 \ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}) \end{align} To provide intuition as to why this sandwich condition is natural, observe that it has two important implications: \begin{enumerate} \item[(a)] Recalling the definition of weak realizability~\eqref{EqnWeakRealizable}, the weak solution $\QpiWeak{\policyicy}$ belongs to the empirical constraint set $\EmpiricalFeasibleSet{\policyicy}$ for any choice of test function space. This important property follows because $\QpiWeak{\policyicy}$ must satisfy the constraints~\eqref{eqn:WeakFormulation}, and thus it belongs to $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$. \item[(b)] All solutions in $\EmpiricalFeasibleSet{\policyicy}$ also belong to $\PopulationFeasibleSet{\policyicy}$, which means they approximately satisfy the weak Bellman equations in a way quantified by $\PopulationFeasibleSet{\policyicy}$. \end{enumerate} By leveraging these facts in the appropriate way, we can establish the following guarantee: \\ \begin{proposition} \label{prop:Deterministic} The following two statements hold. \begin{enumerate} \item[(a)] \underline{Policy evaluation:} If the set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the sandwich relation~\eqref{EqnSandwich}, then the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ satisfy the width bound~\eqref{EqnWidthBound}. If, in addition, weak Bellman realizability for $\policyicy$ is assumed, then the coverage~\eqref{EqnCoverage} condition holds. \item[(b)] \underline{Policy optimization:} If the sandwich relation~\eqref{EqnSandwich} and weak Bellman realizability hold for all $\policyicy \in \PolicyClass$, then any max-min~\eqref{eqn:MaxMinEmpirical} optimal policy $\widetilde \pi$ satisfies the oracle inequality~\eqref{EqnOracle}. 
\end{enumerate} \end{proposition} \noindent See Section~\ref{SecProofPropDeterministic} for the proof of this claim. \\ In summary, \Cref{prop:Deterministic} ensures that when weak realizability is in force, then the sandwich relation~\eqref{EqnSandwich} is a sufficient condition for both the policy evaluation~\eqref{EqnPolEval} and optimization~\eqref{EqnOracle} guarantees to hold. Accordingly, the next phase of our analysis focuses on deriving sufficient conditions for the sandwich relation to hold with high probabability. \subsection{Some high-probability guarantees} \label{SecHighProb} As stated, \Cref{prop:Deterministic} is a ``meta-result'', in that it applies to any choice of set $\EmpiricalFeasibleSet{\policyicy} \equiv \SuperEmpiricalFeasible$ for which the sandwich relation~\eqref{EqnSandwich} holds. In order to obtain a more concrete guarantee, we need to impose assumptions on the way in which the dataset was generated, and concrete choices of $(\ensuremath{\rho}, \ensuremath{\lambda})$ that suffice to ensure that the associated sandwich relation~\eqref{EqnSandwich} holds with high probability. These tasks are the focus of this section. \subsubsection{A model for data generation} \label{SecDataGen} Let us begin by describing a fairly general model for data-generation. Any sample takes the form $\sarsizNp{} \defeq \sarsi{}$, where the five components are defined as follows: \begin{carlist} \item the pair $(\state, \action)$ index the current state and action. \item the random variable $\reward$ is a noisy observation of the mean reward. \item the random state $\successorstate$ is the next-state sample, drawn according to the transition $\Pro\psa$. \item the variable $o$ is an optional identifier. \end{carlist} \noindent As one example of the use of an identifier variable, if samples might be generated by one of two possible policies---say $\policyicy_1$ and $\policyicy_2$---the identifier can take values in the set $\{1, 2 \}$ to indicate which policy was used for a particular sample. \\ Overall, we observe a dataset $\Dataset = \{\sarsizNp{i} \}_{i=1}^n$ of $n$ such quintuples. In the simplest of possible settings, each triple $(\state,\action,\identifier)$ is drawn i.i.d. from some fixed distribution $\mu$, and the noisy reward $\reward_i$ is an unbiased estimate of the mean reward function $\Reward(\state_i, \action_i)$. In this case, our dataset consists of $n$ i.i.d. quintuples. More generally, we would like to accommodate richer sampling models in which the sample $z_i = (\state_i, \action_i, o_i, \reward_i, \successorstate_i)$ at a given time $i$ is allowed to depend on past samples. In order to specify such dependence in a precise way, define the nested sequence of sigma-fields \begin{align} \mathcal F_1 = \emptyset, \quad \mbox{and} \quad \mathcal F_i \defeq \sigma \Big(\{\sarsizNp{j}\}_{j=1}^{i-1} \Big) \qquad \mbox{for $i = 2, \ldots, n$.} \end{align} In terms of this filtration, we make the following definition: \begin{assumption}[Adapted dataset] \label{asm:Dataset} An adapted dataset is a collection $\Dataset = \{ \sarsizNp{i} \}_{i=1}^n$ such that for each $i = 1, \ldots, n$: \begin{carlist} \item There is a conditional distribution $\mu_i$ such that $(\state_i, \action_i, o_i) \sim \mu_i (\cdot \mid \mathcal F_i)$. \item Conditioned on $(\state_i,\action_i,o_i )$, we observe a noisy reward $\reward_i = \reward(\state_i,\action_i) + \eta_i$ with $\E[\eta_i \mid \mathcal F_{i} ] = 0$, and $|\reward_i| \leq 1$. 
\item Conditioned on $(\state_i,\action_i,o_i )$, the next state $\successorstate_i$ is generated according to $\Pro(\state_i,\action_i)$. \end{carlist} \end{assumption} Under this assumption, we can define the (possibly) random reference measure \begin{align} \mu(\state, \action, o) & \defeq \frac{1}{n} \sum_{i=1}^n \mu_i \big(\state, \action, o \mid \mathcal F_i \big). \end{align} In words, it corresponds to the distribution induced by first drawing a time index $i \in \{1, \ldots, n \}$ uniformly at random, and then sampling a triple $(\state,\action,\identifier)$ from the conditional distribution $\mu_i \big(\cdot \mid \mathcal F_i \big)$. \subsubsection{A general guarantee} \newcommand{\ensuremath{\phi}}{\ensuremath{\phi}} \label{SecGeneral} Recall that there are three function classes that underlie our method: the test function class $\TestFunctionClass{}$, the policy class $\PolicyClass$, and the $Q$-function class $\Qclass{}$. In this section, we state a general guarantee (\cref{thm:NewPolicyEvaluation}) that involves the metric entropies of these sets. In Section~\ref{SecCorollaries}, we provide corollaries of this guarantee for specific function classes. In more detail, we equip the test function class and the $Q$-function class with the usual sup-norm \begin{align*} \|f - \tilde{f}\|_\infty \defeq \sup_{(\state,\action,\identifier)} |f(\state,\action,\identifier) - \tilde{f} (\state,\action,\identifier)|, \quad \mbox{and} \quad \|Q - \tilde{Q}\|_\infty \defeq \sup_{\psa} |Q \psa - \tilde{Q} \psa|, \end{align*} and the policy class with the sup-TV norm \begin{align*} \|\pi - \pitilde\|_{\infty, 1} & \defeq \sup_{\state} \|\pi(\cdot \mid \state) - \pitilde(\cdot \mid \state) \|_1 = \sup_{\state} \sum_{\action} |\pi(\action \mid \state) - \pitilde(\action \mid \state)|. \end{align*} For a given $\epsilon > 0$, we let $\CoveringNumber{\TestFunctionClass{}}{\epsilon}$, $\CoveringNumber{\Qclass{}}{\epsilon}$, and $\CoveringNumber{\PolicyClass}{\epsilon}$ denote the $\epsilon$-covering numbers of each of these function classes in the given norms. Given these covering numbers, a tolerance parameter $\delta \in (0,1)$ and the shorthand $\ensuremath{\phi}(t) = \max \{t, \sqrt{t} \}$, define the radius function \begin{subequations} \label{EqnTheoremChoice} \begin{align} \label{EqnDefnR} \ensuremath{\rho}(\epsilon, \delta) & \defeq n \Big\{ \int_{\epsilon^2}^\epsilon \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + \frac{\log N_\epsilon(\Qclass{})}{n} + \frac{ \log N_\epsilon(\Pi)}{n} + \frac{\log(n/\delta)}{n} \Big\}. \end{align} In our theorem, we implement the estimator using a radius $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$, where $\epsilon > 0$ is any parameter that satisfies the bound \begin{align} \label{EqnRadChoice} \epsilon^2 & \stackrel{(i)}{\leq} \bar{c} \: \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}, \quad \mbox{and} \quad \ensuremath{\lambda} \stackrel{(i)}{=} 4 \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}. \end{align} \end{subequations} Here $\bar{c} > 0$ is a suitably chosen but universal constant (whose value is determined in the proof), and we adopt the shorthand $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ in our statement below. 
\renewcommand{\descriptionlabel}[1]{ \hspace{\labelsep}\normalfont\underline{#1} } \begin{theorem}[High-probability guarantees] \label{thm:NewPolicyEvaluation} Consider the estimates implemented using a triple $\InputFunctionalSpace$ that is weakly Bellman realizable (\cref{asm:WeakRealizability}), an adapted dataset (\Cref{asm:Dataset}), and the choices~\eqref{EqnTheoremChoice} for $(\epsilon, \ensuremath{\rho}, \ensuremath{\lambda})$. Then with probability at least $1 - \delta$: \begin{description} \item[Policy evaluation:] For any $\pi \in \PolicyClass$, the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ specify a confidence interval satisfying the coverage~\eqref{EqnCoverage} and width bounds~\eqref{EqnWidthBound}. \item[Policy optimization:] Any max-min policy~\eqref{eqn:MaxMinEmpirical} $\widetilde \pi$ satisfies the oracle inequality~\eqref{EqnOracle}. \end{description} \end{theorem} \noindent See \cref{SecProofNewPolicyEvaluation} for the proof of the claim. \\
\paragraph{Choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$:} Let us provide a few comments about the choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$ from equations~\eqref{EqnDefnR} and~\eqref{EqnRadChoice}. The quality of our bounds depends on the size of the constraint set $\PopulationFeasibleSet{\policyicy}$, which is controlled by the constraint level $\sqrt{\frac{\ensuremath{\rho}}{n}}$. Consequently, our results are tightest when $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ is as small as possible. Note that $\ensuremath{\rho}$ is a decreasing function of $\epsilon$, so that in order to minimize it, we would like to choose $\epsilon$ as large as possible subject to the constraint~\eqref{EqnRadChoice}(i). Ignoring the entropy integral term in equation~\eqref{EqnDefnR} for the moment---see below for some comments on it---these considerations lead to \begin{align} n \epsilon^2 \asymp \log N_\epsilon(\TestFunctionClass{}) + \log N_\epsilon(\Qclass{}) + \log N_\epsilon(\Pi). \end{align} This type of relation for the choice of $\epsilon$ is well known in non-parametric statistics (e.g., see Chapters 13--15 in the book~\cite{wainwright2019high} and references therein). Moreover, setting $\ensuremath{\lambda} \asymp \epsilon^2$ as in equation~\eqref{EqnRadChoice}(ii) is often the correct scale of regularization.
\paragraph{Key technical steps in proof:} It is worthwhile making a few comments about the structure of the proof so as to clarify the connections to \Cref{prop:Deterministic} and to the weak formulation that underlies our methods. Recall that \Cref{prop:Deterministic} requires the empirical set $\EmpiricalFeasibleSet{\policyicy}$ and the population set $\PopulationFeasibleSet{\policyicy}$ to satisfy the sandwich relation~\eqref{EqnSandwich}. In order to prove that this condition holds with high probability, we need to establish uniform control over the family of random variables \begin{align} \label{EqnKeyFamily} \frac{ \big| \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big|}{\TestNormaRegularizerEmp{}}, \qquad \mbox{as indexed by the triple $(f, Q, \policyicy)$.} \end{align} Note that the differences in the numerator of these variables correspond to moving from the empirical constraints on $Q$-functions that are enforced using the TD errors, to the population constraints that involve the Bellman error function.
Uniform control of the family~\eqref{EqnKeyFamily}, along with uniform control of the differences $\|f\|_n - \|f\|_\mu$ over $f$, allows us to relate the empirical and population sets, since the associated constraints are obtained by shifting from the empirical inner products $\inprod{\cdot}{\cdot}_n$ to the reference inner products $\inprod{\cdot}{\cdot}_\mu$. A simple discretization argument allows us to control the differences uniformly in $(Q, \policyicy)$, as reflected by the metric entropies appearing in our definition~\eqref{EqnTheoremChoice}. Deriving uniform bounds over test functions $f$---due to the self-normalizing nature of the constraints---requires a more delicate argument. More precisely, in order to obtain optimal results for non-parametric problems (see Corollary~\ref{cor:alpha} to follow), we need to localize the empirical process at a scale $\epsilon$, and derive bounds on the localized increments. This portion of the argument leads to the entropy integral---which is localized to the interval $[\epsilon^2, \epsilon]$---in our definition~\eqref{EqnDefnR} of the radius function.
\paragraph{Intuition from the on-policy setting:} In order to gain intuition for the statistical meaning of the guarantees in~\cref{thm:NewPolicyEvaluation}, it is worthwhile understanding the implications in a rather special case---namely, the simpler on-policy setting, where the discounted occupation measure induced by the target policy $\policyicy$ coincides with the dataset distribution $\mu$. Let us consider the case in which the constant function $\1$ belongs to the test class $\TestFunctionClass{\policyicy}$. Under these conditions, we can write \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}| & \stackrel{(i)}{=} \max_{Q \in \PopulationFeasibleSet{\policyicy}}| \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}| \; \stackrel{(ii)}{\leq} \sqrt{1 + \ensuremath{\lambda}} \; \sqrt{\frac{\ensuremath{\rho}}{n}}, \end{align*} where equality (i) follows from the on-policy assumption, and step (ii) follows from the definition of the set $\PopulationFeasibleSet{\policyicy}$, along with the condition that $\1 \in \TestFunctionClass{\policyicy}$. Consequently, in the on-policy setting, the width bound~\eqref{EqnWidthBound} ensures that \begin{align} \label{EqnOnPolicy} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{\frac{ \ensuremath{\rho}}{n}}. \end{align} In this simple case, we see that the confidence interval scales as $\sqrt{\ensuremath{\rho}/n}$, where the quantity $\ensuremath{\rho}$ is related to the metric entropy via equation~\eqref{EqnRadChoice}. In the more general off-policy setting, the bound involves this term, along with additional terms that reflect the cost of off-policy data. We discuss these issues in more detail in \cref{sec:Applications}. Before doing so, however, it is useful to derive some specific corollaries that show the form of $\ensuremath{\rho}$ under particular assumptions on the underlying function classes, which we now do.
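As a quick numerical illustration of the scaling in the width bound~\eqref{EqnOnPolicy}, the short Python sketch below plugs a parametric-type radius of the form $\ensuremath{\rho} \asymp d \log(n/d) + \log(n/\delta)$ (the form derived in the corollaries below) into the bound; the dimension, discount factor, and constants are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def onpolicy_width(n, d, gamma, delta, c=1.0):
    # Width bound 2 * sqrt(1 + lambda) / (1 - gamma) * sqrt(rho / n),
    # with a hypothetical parametric radius rho = c * (d log(n/d) + log(n/delta))
    # and regularization lambda = 4 * rho / n.
    rho = c * (d * np.log(n / d) + np.log(n / delta))
    lam = 4 * rho / n
    return 2 * np.sqrt(1 + lam) / (1 - gamma) * np.sqrt(rho / n)

for n in [1_000, 10_000, 100_000]:
    print(n, onpolicy_width(n, d=10, gamma=0.9, delta=0.05))
\end{verbatim}
The printed widths shrink at the $\sqrt{(d \log n)/n}$ rate dictated by the formula, up to the $1/(1-\discount)$ horizon factor.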
\subsubsection{Some corollaries} \label{SecCorollaries} Theorem~\ref{thm:NewPolicyEvaluation} applies generally to triples of function classes $\InputFunctionalSpace$, and the statistical error $\sqrt{\frac{\ensuremath{\rho}(\epsilon, \delta)}{n}}$ depends on the metric entropies of these function classes via the definition~\eqref{EqnDefnR} of $\ensuremath{\rho}(\epsilon, \delta)$, and the choices~\eqref{EqnRadChoice}. As shown in this section, if we make particular assumptions about the metric entropies, then we can derive more concrete guarantees.
\paragraph{Parametric and finite VC classes:} One form of metric entropy, typical of relatively simple function classes $\ensuremath{\mathcal{G}}$ (such as those with finite VC dimension), scales as \begin{align} \label{EqnPolyMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp d \; \log \big(\frac{1}{\epsilon} \big), \end{align} for some dimensionality parameter $d$. For instance, bounds of this type hold for linear function classes with $d$ parameters, and for finite VC classes (with $d$ proportional to the VC dimension); see Chapter 5 of the book~\cite{wainwright2019high} for more details. \begin{corollary} \label{cor:poly} Suppose each class of the triple $\InputFunctionalSpace$ has metric entropy growing at most as in the parametric scaling~\eqref{EqnPolyMetric} with parameter $d$. Then for a sample size $n \geq 2 d$, the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = d/n$ and \begin{align} \label{EqnPolyRchoice} \tilde{\ensuremath{\rho}}\big( \sqrt{\frac{d}{n}}, \delta \big) & \defeq c \; \Big \{ d \; \log \big( \frac{n}{d} \big) + \log \big(\frac{n}{\delta} \big) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \begin{proof} Our strategy is to upper bound the radius $\ensuremath{\rho}$ from equation~\eqref{EqnDefnR}, and then show that this upper bound $\tilde{\ensuremath{\rho}}$ satisfies the conditions~\eqref{EqnRadChoice} for the specified choice of $\epsilon^2$. We first control the entropy integrals involving $\TestFunctionClass{}$. We have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \sqrt{\frac{d}{n}} \; \int_{0}^{\epsilon} \sqrt{\log(1/u)} du \; = \; \epsilon \sqrt{\frac{d}{n}} \; \int_0^1 \sqrt{\log(1/(\epsilon t))} dt \; \leq \; c \epsilon \log(1/\epsilon) \sqrt{\frac{d}{n}}. \end{align*} Similarly, we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \epsilon \frac{d}{n} \Big \{ \int_{\epsilon}^{1} \log(1/t) dt + \log(1/\epsilon) \Big \} \; \leq \; c \, \epsilon \log(1/\epsilon) \frac{d}{n}. \end{align*} Finally, for terms not involving entropy integrals, we have \begin{align*} \max \Big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \Big\} & \leq c \frac{d}{n} \log(1/\epsilon). \end{align*} Setting $\epsilon^2 = d/n$, we see that the required conditions~\eqref{EqnRadChoice} hold with the specified choice~\eqref{EqnPolyRchoice} of $\tilde{\ensuremath{\rho}}$. \end{proof}
\paragraph{Richer function classes:} In the previous paragraph, the metric entropy scaled logarithmically in the inverse precision $1/\epsilon$. For other (richer) function classes, the metric entropy exhibits a polynomial scaling in the inverse precision, with an exponent $\alpha > 0$ that controls the complexity. More precisely, we consider classes of the form \begin{align} \label{EqnAlphaMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp \Big(\frac{1}{\epsilon} \Big)^\alpha.
\end{align} For example, the class of Lipschitz functions in dimension $d$ has this type of metric entropy with $\alpha = d$. More generally, for Sobolev spaces of functions that have $s$ derivatives (and the $s^{th}$-derivative is Lipschitz), we encounter metric entropies of this type with $\alpha = d/s$. See Chapter 5 of the book~\cite{wainwright2019high} for further background. \begin{corollary} \label{cor:alpha} Suppose that each function class in the triple $\InputFunctionalSpace$ has metric entropy with at most \mbox{$\alpha$-scaling~\eqref{EqnAlphaMetric}} for some $\alpha \in (0,2)$. Then the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = (1/n)^{\frac{2}{2 + \alpha}}$, and \begin{align} \label{EqnAlphaRchoice} \tilde{\ensuremath{\rho}}\big((1/n)^{\frac{1}{2 + \alpha}}, \delta \big) & = c \; \Big \{ n^{\frac{\alpha}{2 + \alpha}} + \log(n/\delta) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \noindent We note that for standard regression problems over classes with $\alpha$-metric entropy, the rate $(1/n)^{\frac{2}{2 + \alpha}}$ is well-known to be minimax optimal (e.g., see Chapter 15 in the book~\cite{wainwright2019high}, as well as references therein). \begin{proof} We start by controlling the terms involving entropy integrals. In particular, we have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \frac{c}{\sqrt{n}} u^{1 - \frac{\alpha}{2}} \Big |_{0}^\epsilon \; = \; \frac{c}{\sqrt{n}} \epsilon^{1 - \frac{\alpha}{2}}. \end{align*} Requiring that this term is of order $\epsilon^2$ amounts to enforcing that $\epsilon^{1 + \frac{\alpha}{2}} \asymp (1/\sqrt{n})$, or equivalently that $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$. If $\alpha \in (0,1]$, then the second entropy integral converges and is of lower order. Otherwise, if $\alpha \in (1,2)$, then we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \frac{c}{n} \int_{\epsilon^2}^\epsilon (1/u)^{\alpha} du \leq \frac{c}{n} (\epsilon^2)^{1 - \alpha}. \end{align*} Hence the requirement that this term is bounded by $\epsilon^2$ is equivalent to $\epsilon^{2 \alpha} \succsim (1/n)$, or $\epsilon^2 \succsim (1/n)^{1/\alpha}$. When $\alpha \in (1,2)$, we have $\frac{1}{\alpha} > \frac{2}{2 + \alpha}$, so that this condition is milder than our first condition. Finally, we have $ \max \big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \big\} \leq \frac{c}{n} \big(1/\epsilon \big)^{\alpha}$, and requiring that this term scales as $\epsilon^2$ amounts to requiring that $\epsilon^{2 +\alpha} \asymp (1/n)$, or equivalently $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$, as before. \end{proof}
\section{Proofs for \texorpdfstring{\cref{sec:Applications}}{} and \cref{sec:appConc}} In this section, we collect the proofs of results stated without proof in Section~\ref{sec:Applications} and \cref{sec:appConc}.
\subsection{Proof of \cref{prop:LikeRatio}} \label{sec:LikeRatio} \begin{proof} Since $\TestFunction{}^* \in \TestFunctionClass{\policyicy}$, we are guaranteed that the corresponding constraint must hold.
It reads as \begin{align*} |\ensuremath{\mathbb{E}}ecti{\frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2 = \frac{1}{\scaling{\policyicy}^2} | \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} |^2 & \overset{(iii)}{\leq} \big( \frac{1}{\scaling{\policyicy}^2} \munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}, \end{align*} where step (iii) follows from the definition of the population constraint. Re-arranging yields the upper bound \begin{align*} \frac{|\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} & \leq \frac{\big(\munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \scalingsq{\policyicy}\ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} \; = \; \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}}, \end{align*} where the final step uses the fact that \begin{align*} \norm{\frac{\DistributionOfPolicy{\policyicy}}{\mu}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}^2\PSA}{\mu^2\PSA} }{\mu} = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA} }{\policyicy}. \end{align*} Thus, we have established the bound (i) in our claim~\eqref{EqnLikeRatioBound}. The upper bound (ii) follows immediately since $\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} }{\policyicy} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} \leq \scaling{\policyicy}$. \end{proof}
\subsection{Proof of \texorpdfstring{\cref{lem:PredictionError}}{}} \label{sec:PredictionError} Some simple algebra yields \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = [Q - \BellmanEvaluation{\policyicy}Q] - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy}\QpiWeak{\policyicy}] = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) (Q - \QpiWeak{\policyicy}) = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) QErr. \end{align*} Taking inner products with test functions under $\policyicy$ and recalling that $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0$ for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$ yields \begin{align*} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} = \innerprodweighted{\TestFunction{}} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}. \end{align*} Notice that for any $Q \in \Qclass{\policyicy}$, the function $QErr = Q - \QpiWeak{\policyicy}$ belongs to $\QclassErr{\policyicy}$, and the associated population constraint reads \begin{align*} \frac{ \big| \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu} \big| } { \sqrt{ \norm{QErr}{\mu}^2 + \TestFunctionReg } } & \leq \sqrt{\frac{\ensuremath{\rho}}{n}}.
\end{align*} Consequently, the {off-policy cost coefficient} can be upper bounded as \begin{align*} K^\policy & \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \; \leq \; \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \: \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu}^2 } \Big \}, \end{align*} as claimed in the bound~\eqref{EqnPredErrorBound}.
\subsection{Proof of \texorpdfstring{\cref{lem:PredictionErrorBellmanClosure}}{}} \label{sec:PredictionErrorBellmanClosure} If weak Bellman closure holds, then we can write \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = Q - \QpiProj{\policyicy}{Q} \in \QclassErr{\policyicy}. \end{align*} For any $Q \in \Qclass{\policyicy}$, the function $QErrNC = Q - \QpiProj{\policyicy}{Q}$ belongs to $\QclassErr{\policyicy}$, and the associated population constraint reads $\frac{ |\innerprodweighted{QErrNC} {QErrNC} {\mu} |} { \sqrt{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg }} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$. Consequently, the {off-policy cost coefficient} is upper bounded as \begin{align*} K^\policy & \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \; \leq \; \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \}, \end{align*} where the final inequality follows from the fact that $\norm{QErrNC}{\mu} \leq 1$.
\subsection{Proof of \texorpdfstring{\cref{lem:BellmanTestFunctions}}{}} \label{sec:BellmanTestFunctions} We split our proof into two separate claims.
\paragraph{Proof of the bound~\eqref{EqnBellBound}:} When the test function class includes $\TestFunctionClassBubnov{\policyicy}$, then any feasible $Q$ must satisfy the population constraints \begin{align*} \frac{\innerprodweighted{ \ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}} {\sqrt{\norm{\ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}}{\mu}^2 + \TestFunctionReg}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}, \qquad \mbox{for all $Q' \in \Qpi{\policyicy}$.} \end{align*} Setting $Q' = Q$ yields $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 + \TestFunctionReg } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$. If $\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 \leq \TestFunctionReg$, then the claim holds, given our choice $\TestFunctionReg = c \frac{\ensuremath{\rho}}{n}$ for some constant $c$.
Otherwise, the constraint can be weakened to $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ 2\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$, which yields the bound~\eqref{EqnBellBound}.
\paragraph{Proof of the bound~\eqref{EqnBellBoundConc}:} We now prove the sequence of inequalities stated in equation~\eqref{EqnBellBoundConc}. Inequality (i) follows directly from the definition of $K^\policy$ and the bound~\eqref{EqnBellBound} established above. Turning to inequality (ii), an application of Jensen's inequality yields \begin{align*} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 = [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2 \leq \ensuremath{\mathbb{E}}ecti{[\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}]^2}{\policyicy} = \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2. \end{align*} Finally, inequality (iii) follows by observing that \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \frac{\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\policyicy}}{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{ \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa} \, [(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2 }{\mu} }{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa}. \end{align*}
\section{Proofs for the Linear Setting} We now prove the results stated in~\cref{sec:Linear}. Throughout this section, the reader should recall that $Q$ takes the linear form $Q \psa = \inprod{\CriticPar{}}{\phi \psa}$, so that the bulk of our arguments operate directly on the weight vector $\CriticPar{} \in \R^\dim$. Given the linear structure, the population and empirical covariance matrices of the feature vectors play a central role. We make use of the following known result (cf. Lemma 1 in the paper~\cite{zhang2021optimal}) that relates these objects: \begin{lemma}[Covariance Concentration] \label{lem:CovarianceConcentration} There are universal constants $(c_1, c_2, c_3, c_4)$ such that for any $\delta \in (0, 1)$, we have \begin{align} c_1 \SigmaExplicit \preceq \frac{1}{n}\widehat \SigmaExplicit + \frac{c_2}{n} \log \frac{n \dim}{\FailureProbability}\Identity \preceq c_3 \SigmaExplicit + \frac{c_4}{n} \log \frac{n \dim}{\FailureProbability}\Identity, \end{align} with probability at least $1 - \FailureProbability$.
\end{lemma}
\subsection{Proof of \texorpdfstring{\cref{prop:LinearConcentrability}}{}} \label{sec:LinearConcentrability} Under weak realizability, we have \begin{align} \innerprodweighted {\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} {\mu} = 0 \qquad \mbox{for all $j = 1, \ldots, \dim$.} \end{align} Thus, at $\psa$ the Bellman error difference reads \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}\psa - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}\psa & = [Q - \BellmanEvaluation{\policyicy}Q]\psa - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy} \QpiWeak{\policyicy} ]\psa \\ & = [Q - \QpiWeak{\policyicy} ]\psa - \discount \ensuremath{\mathbb{E}}ecti{[Q - \QpiWeak{\policyicy}](\successorstate,\policyicy)}{\successorstate \sim \Pro\psa } \\ \numberthis{\label{eqn:LinearBellmanError}} & = \inprod{\CriticPar{} - \CriticParBest{\policyicy}}{ \phi\psa - \discount \phiBootstrap{\policyicy}\psa}. \end{align*} To proceed, we need the following auxiliary result: \begin{lemma}[Linear Parameter Constraints] \label{lem:RelaxedLinearConstraints} With probability at least $1-\FailureProbability$, there exists a universal constant $c_1 > 0$ such that if $Q \in \PopulationFeasibleSet{\policyicy}$ then $ \norm{\CriticPar{} - \CriticParBest{\policyicy}}{\CovarianceWithBootstrapReg{\policyicy}}^2 \leq c_1 \frac{\dim \ensuremath{\rho}}{n}$. \end{lemma} \noindent See \cref{sec:RelaxedLinearConstraints} for the proof. \newcommand{\IntermediateSmall}[4]{Using this lemma, we can bound the OPC coefficient as follows: \begin{align*} K^\policy \overset{(i)}{\leq} \frac{n}{\ensuremath{\rho}} \; \max_{Q \in \PopulationFeasibleSet{\policyicy}} \innerprodweighted{\1} { \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} #4} {\policyicy}^2 & \overset{(ii)}{\leq} \frac{n}{\ensuremath{\rho}} \; [\ensuremath{\mathbb{E}}ecti{(#1)^\top}{\policyicy} (#2)]^2 \\ & \overset{(iii)}{\leq} \frac{n}{\ensuremath{\rho}} \: \norm{\ensuremath{\mathbb{E}}ecti{ #1 }{\policyicy}}{(#3)^{-1}}^2 \norm{#2}{#3}^2 \\ & \leq c_1 \dim \norm{\ensuremath{\mathbb{E}}ecti{ #1}{\policyicy}}{(#3)^{-1}}^2. \end{align*} Here step $(i)$ follows from the definition of the off-policy cost coefficient, step $(ii)$ leverages the linear structure, and step $(iii)$ follows from the Cauchy--Schwarz inequality. } \IntermediateSmall {\phi - \discount \phiBootstrap{\policyicy}} {\CriticPar{} - \CriticParBest{\policyicy}} {\CovarianceWithBootstrapReg{\policyicy}} {-\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}
\subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraints}}{}} \label{sec:RelaxedLinearConstraints} \intermediate{(\phi - \discount\phiBootstrap{\policyicy})^\top} {(\SigmaReg - \discount \CovarianceBootstrap{\policyicy})} {(\CriticPar{} - \CriticParBest{\policyicy})} {\CovarianceWithBootstrapReg{\policyicy}} {(\Sigma - \discount \CovarianceBootstrap{\policyicy})} {\cref{eqn:LinearBellmanError}}
\subsection{Proof of \cref{prop:LinearConcentrabilityBellmanClosure}} \label{sec:LinearConcentrabilityBellmanClosure} Under weak Bellman closure, we have \begin{align} \numberthis{\label{eqn:LinearBellmanErrorWithClosure}} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = \phi^\top(\CriticPar{} - \CriticParProjection{\policyicy}). \end{align} With a slight abuse of notation, let $\QpiProj{\policyicy}{\CriticPar{}}$ denote the weight vector that defines the action-value function $\QpiProj{\policyicy}{Q}$.
We introduce the following auxiliary lemma: \begin{lemma}[Linear Parameter Constraints with Bellman Closure] \label{lem:RelaxedLinearConstraintsBellmanClosure} With probability at least $1-\FailureProbability$, if $Q \in \PopulationFeasibleSet{\policyicy}$ then $ \norm{\CriticPar{} - \CriticParProjection{\policyicy}}{\SigmaReg }^2 \leq c_1\frac{\dim \ensuremath{\rho}}{n}$. \end{lemma} See~\cref{sec:RelaxedLinearConstraintsBellmanClosure} for the proof. \IntermediateSmall {\phi} {\CriticPar{} - \CriticParProjection{\policyicy}} {\SigmaReg} {}
\subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraintsBellmanClosure}}{}} \label{sec:RelaxedLinearConstraintsBellmanClosure} \intermediate{\phi^\top} {\SigmaReg} {(\CriticPar{} - \CriticParProjection{\policyicy})} {\SigmaReg} {\Sigma} {\cref{eqn:LinearBellmanErrorWithClosure}}
\section{Proof of \texorpdfstring{\cref{thm:LinearApproximation}}{}} \label{sec:LinearApproximation} In this section, we prove the guarantee on our actor-critic procedure stated in \cref{thm:LinearApproximation}. \hidecom{ The proof consists of four main steps. First, we show that the critic linear $\QminEmp{\policyicy}$ function can be interpreted as the \emph{exact} value function on an MDP with a perturbed reward function. Second, we show that the global update rule in \cref{eqn:LinearActorUpdate} is equivalent to an instantiation of Mirror descent where the gradient is the $Q$ of the adversarial MDP. Third, we analyze the progress of Mirror descent in finding a good solution on the sequence of adversarial MDP identified by the critic. Finally we put everything together to derive a performance bound. }
\subsection{Adversarial MDPs} \label{sec:AdversarialMDP} We now introduce the sequence of adversarial MDPs $\{\mathcal{M}adv{t}\}_{t=1}^T$ used in the analysis. Each MDP $\mathcal{M}adv{t}$ is defined by the same state-action space and transition law as the original MDP $\mathcal{M}$, but with the reward function $\Reward$ perturbed by $RAdv{t}$---that is, \begin{align} \label{eqn:AdversarialMDP} \mathcal{M}adv{t} \defeq \langle \StateSpace,ASpace, R + RAdv{t}, \Pro, \discount \rangle. \end{align} For an arbitrary policy $\policyicy$, we denote by $\QpiAdv{t}{\policyicy}$ and $\ApiAdv{t}{\policyicy}$ the action-value function and the advantage function on $\mathcal{M}adv{t}$; the value of $\policyicy$ from the starting distribution $\nu_{\text{start}}$ is denoted by $\VpiAdv{t}{\policyicy}$. We immediately have the following expression for the value function, which follows because the dynamics of $\mathcal{M}adv{t}$ and $\mathcal{M}$ are identical and the reward function of $\mathcal{M}adv{t}$ equals that of $\mathcal{M}$ plus $RAdv{t}$: \begin{align} \label{eqn:VonAdv} \VpiAdv{t}{\policyicy} \defeq \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{ \Big[ R + RAdv{t} \Big]}{\policyicy}. \end{align} Consider the action-value function $\QminEmp{\ActorPolicy{t}}$ returned by the critic, and let the reward perturbation $RAdv{t} = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$ be the Bellman error of the critic value function $\QminEmp{\ActorPolicy{t}}$. The special property of $\mathcal{M}adv{t}$ is that the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$ equals the critic lower estimate $\QminEmp{\ActorPolicy{t}}$.
\begin{lemma}[Adversarial MDP Equivalence] \label{lem:QfncOnAdversarialMDP} Given the perturbed MDP $\mathcal{M}adv{t}$ from equation~\eqref{eqn:AdversarialMDP} with $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$, we have the equivalence \begin{align*} \QpiAdv{t}{\ActorPolicy{t}} = \QminEmp{\ActorPolicy{t}}. \end{align*} \end{lemma} \begin{proof} We need to check that $\QminEmp{\ActorPolicy{t}}$ solves the Bellman evaluation equations for the adversarial MDP, ensuring that $\QminEmp{\ActorPolicy{t}}$ is the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Let $\BellmanEvaluation{\ActorPolicy{t}}_t$ be the Bellman evaluation operator on $\mathcal{M}adv{t}$ for policy $\ActorPolicy{t}$. We have \begin{align*} \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}_t(\QminEmp{\ActorPolicy{t}}) = \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}(\QminEmp{\ActorPolicy{t}}) - RAdv{t} & = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} - \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} = 0. \end{align*} Thus, the function $\QminEmp{\ActorPolicy{t}}$ is the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$, and it is by definition denoted by $\QpiAdv{t}{\ActorPolicy{t}}$. \end{proof} This lemma shows that the action-value function $\QminEmp{\ActorPolicy{t}}$ computed by the critic is equivalent to the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Thus, we can interpret the critic as producing a model-based pessimistic estimate of $\ActorPolicy{t}$; this view is useful in the rest of the analysis.
\subsection{Equivalence of Updates} The second step is to establish the equivalence of the update rule~\eqref{eqn:LinearActorUpdate}, written equivalently as the update~\eqref{eqn:GlobalRule}, with the exponentiated gradient update rule~\eqref{eqn:LocalRule}. \begin{lemma}[Equivalence of Updates] \label{lem:UpdateEquivalence} For linear $Q$-functions of the form $\QpiAdv{t}{} \psa = \inprod{\CriticPar{t}}{\phi \psa}$, the parameter update \begin{subequations} \begin{align} \label{eqn:GlobalRule} \ActorPolicy{t+1} \pas & \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta \CriticPar{t})), \qquad \intertext{is equivalent to the policy update} \label{eqn:LocalRule} \ActorPolicy{t+1}\pas & \propto \ActorPolicy{t}\pas \exp(\eta\QpiAdv{t}{} \psa), \qquad \ActorPolicy{1}\pas = \frac{1}{\card{ASpace_{\state}}}. \end{align} \end{subequations} \end{lemma} \begin{proof} We prove this claim via induction on $t$. The base case ($t = 1$) holds by a direct calculation. Now let us show that the two update rules update $\ActorPolicy{t}$ in the same way. As an inductive step, assume that both rules maintain the same policy $\ActorPolicy{t} \propto \exp(\phi\psa^\top\ActorPar{t})$ at iteration $t$; we will show the policies are still the same at iteration $t+1$. At any $\psa$, we have \begin{align*} \ActorPolicy{t+1}(\action \mid \state) \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta\CriticPar{t})) & \propto \exp(\phi\psa^\top \ActorPar{t}) \exp(\eta\phi\psa^\top \CriticPar{t}) \\ & \propto \ActorPolicy{t}(\action \mid \state) \exp(\eta\QpiAdv{t}{} \psa). \end{align*} \end{proof} Recall that $\ActorPar{t}$ is the parameter associated with $\ActorPolicy{t}$ and that $\CriticPar{t}$ is the parameter associated with $\QminEmp{\ActorPolicy{t}}$.
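The equivalence in \cref{lem:UpdateEquivalence} is also easy to verify numerically. The following Python sketch is a purely illustrative check: it fixes a single state, draws random features and a random critic parameter at each round (all of which are assumptions made here for illustration rather than part of our procedure), runs both update rules in parallel, and confirms that they produce the same policy at every round.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dim, num_actions, T, eta = 4, 5, 10, 0.1
Phi = rng.normal(size=(num_actions, dim))    # feature vectors phi(s, a) at a fixed state s

def softmax(v):
    v = v - v.max()
    return np.exp(v) / np.exp(v).sum()

theta = np.zeros(dim)                         # actor parameter (global rule)
pi_mult = np.ones(num_actions) / num_actions  # uniform initialization (local rule)
for t in range(T):
    w = rng.normal(size=dim)                  # critic parameter w_t, so Q_t(s, a) = <w_t, phi(s, a)>
    theta = theta + eta * w                   # global rule: theta_{t+1} = theta_t + eta * w_t
    pi_mult = pi_mult * np.exp(eta * (Phi @ w))
    pi_mult = pi_mult / pi_mult.sum()         # local rule: pi_{t+1} proportional to pi_t * exp(eta * Q_t)
    assert np.allclose(softmax(Phi @ theta), pi_mult)
print("update rules agree at every round")
\end{verbatim}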
Using \cref{lem:UpdateEquivalence} together with \cref{lem:QfncOnAdversarialMDP}, we obtain that the actor policy $\ActorPolicy{t}$, through its parameter $\ActorPar{t}$, satisfies the mirror descent update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QminEmp{\ActorPolicy{t}} = \QpiAdv{t}{\ActorPolicy{t}}$ and $\ActorPolicy{1}(\action \mid \state) = 1/\abs{ASpace_\state}, \; \forall \psa$. In words, the actor is using mirror descent to find the best policy on the sequence of adversarial MDPs $\{\mathcal{M}adv{t} \}$ implicitly identified by the critic.
\subsection{Mirror Descent on Adversarial MDPs} Our third step is to analyze the behavior of mirror descent on the MDP sequence $\{\mathcal{M}adv{t}\}_{t=1}^T$, and then to translate these guarantees back to the original MDP $\mathcal{M}$. The following result provides a bound on the average of the value functions $\{\Vpi{\ActorPolicy{t}} \}_{t=1}^T$ induced by the actor's policy sequence. This bound involves a form of optimization error\footnote{Technically, this error should depend on $\card{ASpace_\state}$, if we were to allow the action spaces to have varying cardinality, but we elide this distinction here.} given by \begin{align*} \MirrorRegret{T} & = 2 \, \sqrt{ \frac{2 \log |ASpace|}{T}}, \end{align*} as is standard in mirror descent schemes. It also involves the \emph{perturbed rewards} given by $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QpiAdv{t}{\ActorPolicy{t}}} {\ActorPolicy{t}}{}$. \begin{lemma}[Mirror Descent on Adversarial MDPs] \label{prop:MirrorDescentAdversarialRewards} For any positive integer $T$, applying the update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QpiAdv{t}{\ActorPolicy{t}}$ for $T$ rounds yields a sequence such that \begin{align} \label{eqn:MirrorDescentAdversarialRewards} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align} valid for any comparator policy $\widetilde \policy$. \end{lemma} \noindent See \cref{sec:MirrorDescentAdversarialRewards} for the proof. \\ To be clear, the comparator policy $\widetilde \policy$ need not belong to the soft-max policy class. Apart from the optimization error term, our bound~\eqref{eqn:MirrorDescentAdversarialRewards} involves the behavior of the perturbed rewards $RAdv{t}$ along the comparator $\widetilde \policy$ and along $\ActorPolicy{t}$, respectively. These correction terms arise because the actor performs the policy update using the action-value function $\QpiAdv{t}{\ActorPolicy{t}}$ on the perturbed MDPs instead of the real underlying MDP.
\subsection{Pessimism: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}$}{}} The fourth step of the proof is to leverage the pessimistic estimates returned by the critic to simplify equation~\eqref{eqn:MirrorDescentAdversarialRewards}. Using \cref{lem:Simulation} and the definition of the adversarial reward $RAdv{t}$, we can write \begin{align*} \VminEmp{\ActorPolicy{t}} - \Vpi{\ActorPolicy{t}} = \frac{1}{1-\discount} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} & = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}.
\end{align*} Since weak realizability holds, \cref{thm:NewPolicyEvaluation} guarantees that $\VminEmp{\policyicy} \leq \Vpi{\policyicy}$ uniformly for all $\policyicy \in \PolicyClass$ with probability at least $1-\FailureProbability$. Coupled with the prior display, we find that \begin{align} \label{eqn:PessimisticAdversarialReward} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \leq 0. \end{align} Using the above display, the bound in \cref{eqn:MirrorDescentAdversarialRewards} can be further simplified.
\subsection{Concentrability: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$}{}} The term $\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$ can be interpreted as an approximate concentrability factor for the algorithm that we are investigating.
\paragraph{Bound under only weak realizability:} \cref{lem:RelaxedLinearConstraints} guarantees that, with probability at least $1-\FailureProbability$, any surviving $Q$ in $\PopulationFeasibleSet{\ActorPolicy{t}}$ must satisfy $ \norm{ \CriticPar{} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}}^2 \lesssim \frac{\dim \ensuremath{\rho}}{n} $, where $\CriticParBest{\ActorPolicy{t}}$ is the parameter associated with the weak solution $\QpiWeak{\ActorPolicy{t}}$. This bound must in particular apply to the parameter $ \CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$ identified by the critic.\footnote{We abuse notation and write $\CriticPar{} \in \EmpiricalFeasibleSet{\policyicy}$ in place of $Q \in \EmpiricalFeasibleSet{\policyicy}$.} We are now ready to bound the remaining adversarial reward along the distribution of the comparator $\widetilde \policy$. \begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} & = \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} \\ & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ (\phi - \discount \phiBootstrap{\ActorPolicy{t}})^\top (\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\ActorPolicy{t}}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\ActorPolicy{t}})^{-1}} \norm{\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}} \\ & \leq c \; \sqrt{\frac{\dim \ensuremath{\rho}}{n}} \; \sup_{\policyicy \in \PolicyClass} \left \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\policyicy})^{-1}} \right \}. \numberthis{\label{eqn:ApproximateLinearConcentrability}} \end{align*} Step (i) follows from the expression~\eqref{eqn:LinearBellmanError} for the weak Bellman error, along with the definition of the weak solution $\QpiWeak{\ActorPolicy{t}}$.
\paragraph{Bound under weak Bellman closure:} When weak Bellman closure holds, we proceed analogously. The bound in \cref{lem:RelaxedLinearConstraintsBellmanClosure} ensures with probability at least $1-\FailureProbability$ that $ \norm{\CriticPar{} - \CriticParProjection{\ActorPolicy{t}}}{\SigmaReg}^2 \leq c \; \frac{\dim \ensuremath{\rho}}{n} $ for all $\CriticPar{} \in \PopulationFeasibleSet{\ActorPolicy{t}}$; as before, this relation must apply to the parameter chosen by the critic $\CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$.
The bound on the adversarial reward along the distribution of the comparator $\widetilde \policy$ now reads \begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} \: = \: \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ \phi^\top (\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \norm{\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}}{\SigmaReg} \\ & \leq c \; \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \sqrt{\frac{\dim \ensuremath{\rho}}{n}}. \numberthis{\label{eqn:ApproximateLinearConcentrabilityBellmanClosure}} \end{align*} Here step (i) follows from the expression~\eqref{eqn:LinearBellmanErrorWithClosure} for the Bellman error under weak closure.
\subsection{Proof of \cref{prop:MirrorDescentAdversarialRewards}} \label{sec:MirrorDescentAdversarialRewards} We now prove our guarantee for a mirror descent procedure on the sequence of adversarial MDPs. Our analysis makes use of a standard result on online mirror descent for linear functions (e.g., see Section 5.4.2 of Hazan~\cite{hazan2021introduction}), which we state here for reference. Given a set $\xSpace$ of finite cardinality, a function $f: \xSpace \rightarrow \R$, and a distribution $\ensuremath{\nu}$ over $\xSpace$, we define $f(\ensuremath{\nu}) \defeq \sum_{x \in \xSpace} \ensuremath{\nu}(x) f(x)$. The following result gives a guarantee that holds uniformly for any sequence of functions $\{f_t\}_{t=1}^T$, thereby allowing for the possibility of adversarial behavior. \begin{proposition}[Adversarial Guarantees for Mirror Descent] \label{prop:MirrorDescent} Suppose that we initialize with the uniform distribution $\ensuremath{\nu}_{1}(\xvar) = \frac{1}{\abs{\xSpace}}$ for all $\xvar \in \xSpace$, and then perform $T$ rounds of the update \begin{align} \label{eqn:ExponentiatedGradient} \ensuremath{\nu}_{t+1}(\xvar ) \propto \ensuremath{\nu}_{t}(\xvar) \exp(\eta \fnc_{t}(\xvar)), \quad \mbox{for all $\xvar \in \xSpace$,} \end{align} using $\eta = \sqrt{\frac{\log \abs{\xSpace}}{2T}}$. If $\norm{\fnc_{t}}{\infty} \leq 1$ for all $t \in [T]$, then we have the bound \begin{align} \label{EqnMirrorBoundStatewise} \frac{1}{T} \sum_{t=1}^T \Big[ \fnc_{t}(\widetilde{\ensuremath{\nu}}) - \fnc_{t}(\ensuremath{\nu}_{t})\Big] \leq \MirrorRegret{T} \defeq 2\sqrt{\frac{2\log \abs{\xSpace}}{T}}, \end{align} where $\widetilde{\ensuremath{\nu}}$ is any comparator distribution over $\xSpace$. \end{proposition} We now use this result to prove our claim. So as to streamline the presentation, it is convenient to introduce the advantage function corresponding to $\ActorPolicy{t}$. It is a function of the state-action pair $\psa$ given by \begin{align*} \ApiAdv{t}{\ActorPolicy{t}}\psa \defeq \QpiAdv{t}{\ActorPolicy{t}}\psa - \ensuremath{\mathbb{E}}ecti{\QpiAdv{t}{\ActorPolicy{t}}(\state,\successoraction)}{\successoraction \sim \ActorPolicy{t}(\cdot \mid \state)}. \end{align*} In the sequel, we omit dependence on $\psa$ when referring to this function, consistent with the rest of the paper. From our earlier observation~\eqref{eqn:VonAdv}, recall that the reward function of the perturbed MDP $\mathcal{M}adv{t}$ corresponds to that of $\mathcal{M}$ plus the perturbation $RAdv{t}$.
Combining this fact with a standard simulation lemma (e.g., \cite{kakade2003sample}) applied to $\mathcal{M}adv{t}$, we find that \begin{subequations} \begin{align} \label{EqnInitialBound} \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} & = \VpiAdv{t}{\widetilde \policy} - \VpiAdv{t}{\ActorPolicy{t}} + \frac{1}{1-\discount} \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \; = \; \frac{1}{1-\discount} \Big[ \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big]. \end{align} Now for any given state $\state$, we introduce the linear objective function \begin{align*} \fnc_t(\ensuremath{\nu}) & \defeq \ensuremath{\mathbb{E}}_{\action \sim \ensuremath{\nu}} \QpiAdv{t}{\ActorPolicy{t}}(\state, \action) \; = \; \sum_{\action \in ASpace} \ensuremath{\nu}(\action) \QpiAdv{t}{\ActorPolicy{t}}(\state, \action), \end{align*} where $\ensuremath{\nu}$ is a distribution over the action space. With this choice, we have the equivalence \begin{align*} \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa & = \fnc_t(\widetilde \policy(\cdot \mid \state)) - \fnc_t\big(\ActorPolicy{t}(\cdot \mid \state) \big), \end{align*} where the reader should recall that we have fixed an arbitrary state $\state$. Consequently, applying the bound~\eqref{EqnMirrorBoundStatewise} with $\xSpace = ASpace$ and these choices of linear functions, we conclude that \begin{align} \label{EqnMyMirror} \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa \leq \MirrorRegret{T}. \end{align} \end{subequations} This bound holds for any state, and also for any average over the states. We now combine the pieces to conclude. By computing the average of the bound~\eqref{EqnInitialBound} over all $T$ iterations, we find that \begin{align*} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] & \leq \frac{1}{1-\discount} \left \{ \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \} \\ & \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align*} where the final inequality follows from the bound~\eqref{EqnMirrorBoundStatewise}, applied for each $\state$. We have thus established the claim. \end{document}
\begin{document} \maketitle \begin{abstract} The present work introduces new perspectives in order to extend finite group actions from surfaces to 3-manifolds. We consider the Schur multiplier associated to a finite group $G$ in terms of principal $G$-bordisms in dimension two, called $G$-cobordisms. We are interested in the question of when a free action of a finite group on a closed oriented surface extends to a non-necessarily free action on a 3-manifold. We show that the answer to this question is affirmative for abelian, dihedral, symmetric and alternating groups. As an application of our methods, we show that every non-necessarily free action of abelian groups (under certain conditions) and dihedral groups on a closed oriented surface extends to a $3$-dimensional handlebody. \end{abstract}
\section*{Introduction} \label{intro} Let $\Omega_n^{SO}(G)$ be the free $G$-bordism group in dimension $n$ from Conner-Floyd \cite{CF} and denote by $\Omega_{n+1}^{SO,\partial free }(G)$ the $G$-bordism group of $(n+1)$-dimensional manifolds with a non-necessarily free $G$-action which restricts to a free action over the boundary. We are interested in knowing what the image of the following map is \begin{equation}\label{e345} \Omega_{n+1}^{SO,\partial free }(G)\longrightarrow\Omega_n^{SO}(G)\,. \end{equation} For $n=2$, the group $\Omega_2^{SO}(G)$ has been studied extensively under the name of the Schur multiplier \cite{SM} (denoted by $\mathcal{M}(G)$). When this group vanishes, the map \eqref{e345} is surjective; this is the case for cyclic groups and groups of deficiency zero, see \cite{SM}. For free actions of abelian groups and dihedral groups, the extension was given by Reni-Zimmermann \cite{RZ} and Hidalgo \cite{Hida}. Obstructions to the surjectivity of the map \eqref{e345} are constructed by Samperton \cite{ES}, considering the quotient by the homology classes represented by tori. Our approach considers the elements of the group $\Omega_2^{SO}(G)$ represented by what we call $G$-cobordisms in dimension two. These are diffeomorphism classes of principal $G$-bundles over (closed) surfaces \cite{AC}. We say that a $G$-cobordism is {\it extendable} if it has a representative given by a principal $G$-bundle over a surface $S$, which is the boundary of a $3$-dimensional manifold $M$ with an action of $G$. For $G$ a finite abelian group, we show that every $G$-cobordism over a closed surface is extendable, see Theorem \ref{extAbe}. For this, we decompose any $G$-cobordism into small pieces given by $G$-cobordisms over a closed surface of genus one, which are extendable, as we will see in Proposition \ref{prC1}. For the dihedral group $D_{2n}$, we focus on the case $n=2k$ since the Schur multiplier $\mathcal{M}(D_{2n})$ vanishes for $n=2k+1$. Similar to the abelian case, we decompose every $D_{2n}$-cobordism into a finite product of the generator with base space of genus one, which is induced by a reflection and the rotation by 180 degrees, see Corollary \ref{cordie}. For the symmetric group $S_n$, the Schur multiplier is non-trivial for $n> 3$ and in that case it is equal to $\mathbb{Z}_2$. In Proposition \ref{propsym}, we prove that there is a generator with base space of genus one, which is induced by any two disjoint transpositions. A similar argument works for the alternating group $A_n$, where for $n=6,7$, we use the Sylow theory of the Schur multiplier that is shown in Proposition \ref{Sylow}. In summary, we have the following result.
\begin{thm}\label{thm1} For $G$ a finite abelian group or $G\in \{D_{2n},S_n,A_n\}$, every $G$-cobordism over a closed oriented surface is extendable. \end{thm} We have the following applications for extending non-necessarily free actions over surfaces to 3-dimensional handlebodies: \begin{itemize} \item[i)] In Theorem \ref{teore1}, the actions of abelian groups have two types of fixed points, namely those induced by hyperelliptic involutions and by pairs of ramification points with complementary monodromies (signature $>2$). An unfolding process is performed by first taking the quotient by the hyperelliptic involutions; after some modifications, we reduce the problem to the extension of free actions. \item[ii)] In Theorem \ref{teore2}, the actions of dihedral groups reduce to a finite product of a specific generator. We extend the action for this generator and for the surfaces realized by the products. \end{itemize} These results were proven before, by different methods, by Reni-Zimmermann \cite{RZ} and Hidalgo \cite{Hida}. This article is organized as follows. In Section \ref{sec0}, we review the concept of $G$-cobordism and the Schur multiplier, as well as the relations between them. In Section \ref{disi}, we give explicit generators for the Schur multiplier of the dihedral, the symmetric and the alternating groups. Finally, in Section \ref{nonfree} we construct the extensions of the free actions on closed oriented surfaces for the dihedral, symmetric and alternating groups. Additionally, for non-necessarily free actions on closed oriented surfaces of abelian groups (under certain conditions) and dihedral groups, we construct the extensions given by 3-dimensional handlebodies. {\bf Acknowledgements:} The first author thanks the Academia Mexicana de Ciencias for the opportunity of participating in the Scientific Research Summer of 2020. The second author is supported by c\'atedras CONACYT and Proyecto CONACYT ciencias b\'asicas 2016, No. 284621. We would like to thank Bernardo Uribe and Eric Samperton for helpful conversations.
\section{Preliminaries} \label{sec0} In this section we review in detail the definitions and properties of the theory of $G$-cobordisms introduced in \cite{AC,car}. In addition, we discuss some important facts about the Schur multiplier.
\subsection{$G$-cobordisms} \label{sec1} Throughout the article, $G$ denotes a finite group and $1\in G$ the neutral element. Also, we consider right actions of the group $G$, and all the surfaces are oriented. \begin{defn} Let $\Sigma$ and $\Sigma'$ be $d$-dimensional closed, oriented smooth ma\-nifolds. A cobordism between $\Sigma$ and $\Sigma'$ is a $(d+1)$-dimensional oriented smooth manifold $M$, with boundary diffeomorphic to $\Sigma\sqcup-\Sigma'$, where $-\Sigma'$ is $\Sigma'$ with the reverse orientation. Two cobordisms $M$ and $M'$ are equivalent if there exists a diffeomorphism $\phi:M\longrightarrow M'$ such that we have the commutative diagram \begin{equation} \xymatrix{&M\ar[dd]^\phi&\\\Sigma\ar[ru]\ar[rd]&&\Sigma'\ar[lu]\ar[ld]\\&M'&\,.} \end{equation} \end{defn} \begin{defn}A {\it principal $G$-bundle} over a topological space $X$ consists of a fiber bundle $\pi:E\rightarrow X$ where the group $G$ acts freely and transitively over each fiber.
\end{defn} \begin{example}In dimension one, for every $g\in G$, we construct the principal $G$-bundle $P_g\rightarrow S^1$ obtained by attaching the ends of $[0,1]\times G$ via multiplication by $g$, i.e., $(0,h)$ is identified with $(1,gh)$ for every $h\in G$. This construction $P_g=[0,1]\times G/\sim_g$ projects to the circle by restriction to the first coordinate, and the action $P_g\times G\rightarrow P_g$ is defined by right multiplication on the second coordinate. Any principal $G$-bundle over the circle is isomorphic to some $P_g$, and $P_g$ is isomorphic to $P_h$ if and only if $h$ is conjugate to $g$. \end{example} Throughout the paper, we refer to the element $g\in G$ as the {\it monodromy} associated to the corresponding principal $G$-bundle $P_g$. In the case of the neutral element of the group $G$, we say that the monodromy is trivial. \begin{defn} Let $\xi:P\rightarrow \Sigma$ and $\xi':P'\rightarrow \Sigma'$ be principal $G$-bundles. A {\it $G$-cobordism} between $\xi$ and $\xi'$ is a principal $G$-bundle $\epsilon:Q\rightarrow M$, with diffeomorphisms for the boundaries $\partial M\cong \Sigma\sqcup-\Sigma'$ and $\partial Q\cong P\sqcup -P'$, which are compatible with the projections and the restriction of the action. Two $G$-cobordisms $\epsilon:Q\rightarrow M$ and ${\epsilon}':Q'\rightarrow M'$ define the same class if $M$ and ${M}'$ are equivalent as cobordisms by a diffeomorphism $\phi:M\rightarrow {M}'$, $Q$ and $Q'$ are equivalent as cobordisms by a $G$-equivariant diffeomorphism $\psi:Q\longrightarrow Q'$, and in addition, we have the commutative diagram \begin{equation} \xymatrix{Q\ar[r]^\psi\ar[d]_\epsilon&Q'\ar[d]^{\epsilon'}\\M\ar[r]_\phi & M'\,.} \end{equation} \end{defn} \begin{example} \begin{enumerate} \item A $G$-cobordism from $P_g$ to $P_h$ ($g,h\in G$) with base space the cylinder is given by an element $k\in G$ such that $h=kgk^{-1}$. \item A $G$-cobordism with entry the disjoint union $P_g\sqcup P_h$ and exit $P_{gh}$, with base space the pair of pants, is a $G$-deformation retract\footnote{By a $G$-deformation retract we mean that the homotopy is by means of principal $G$-bundles.} of a principal $G$-bundle over the wedge $S^1\vee S^1$. \item There is only one $G$-cobordism over the disk and every representative is a trivial bundle. \item Take as base space a two-dimensional handlebody of genus $n$ with one boundary circle. A $G$-cobordism depends on elements $g_i,k_i\in G$, for $1\leq i\leq n$, with monodromy for the boundary circle given by the product $\prod_{i=1}^n[k_i,g_i]$. \end{enumerate} \end{example} In Figure \ref{fig1}, we have pictures for the $G$-cobordisms over the cylinder, the pair of pants and the disc. In these pictures, our cobordisms are drawn from left to right. Also, every circle is labelled with the corresponding monodromy, and inside every cylinder we write the group element with which we conjugate. \begin{figure} \caption{$G$-cobordism over the cylinder, the pair of pants and the disc.} \label{fig1} \end{figure} On the left side of Figure \ref{fig4}, we show a $G$-cobordism over a genus one handlebody. On the right side, we represent an equivalent way to view this $G$-cobordism.
\begin{figure} \caption{Two equivalent $G$-cobordisms over a handle of genus one.} \label{fig4} \end{figure} If a $G$-cobordism over a closed connected surface is cut along a simple closed se\-pa\-rating curve\footnote{A simple closed curve in a surface is separating if the cut surface is not connected.}, the monodromy of the resulting curve lies inside the commutator group, as shown in the following proposition. \begin{prop}\label{pro1} For a $G$-cobordism over a closed connected surface $S$, the monodromy of every embedded simple closed separating curve in $S$ lies in the commutator group $[G,G]$. \end{prop} \begin{defn} A $G$-cobordism of dimension two, over a closed surface, is {\it extendable} if for some representative principal $G$-bundle $P\rightarrow S$, with the action $\alpha:P\times G\rightarrow P$, there exists a $3$-dimensional manifold $M$ with boundary $\partial M=P$, with an action of $G$ of the form $\overline{\alpha}:M\times G\rightarrow M$, which extends $\alpha$, i.e., we have the commutative diagram \begin{equation} \xymatrix{P\times G\ar@{_(->}[d]\ar[r]^\alpha & P\ar@{_(->}[d] \\ M\times G\ar[r]_{\overline{\alpha}} & M\,.} \end{equation} \end{defn} \begin{prop}\label{prC1} Any $G$-cobordism over a closed surface of genus one is extendable. \end{prop} \begin{proof}Consider a principal $G$-bundle representing the given $G$-cobordism over the closed surface of genus one. It is enough to prove the case in which the total space of the bundle is a connected space. Moreover, the action of $G$ over the total space can be modified by an isotopy, resulting in an action which depends completely on a pair of monodromies $(g,k)$ associated to two curves in the torus that intersect once. Denote by $P_g$ and $P_k$ the two principal $G$-bundles associated to these two curves. It follows that the action of $G$ over the total space is given by the product of the total spaces $P_g$ and $P_k$. Because of the assumptions, at least one of $P_g$ or $P_k$ is a connected space; let us assume that it is $P_g$. The extension of the action of $G$ is through the 3-dimensional handlebody constructed as follows. First, consider the disc $D$ as the union $(S^1\times (0,1])\cup \{0\}$, where $0$ is the center. For each circle $S^1\times \{r\}$, with $r\in (0,1]$, we take as monodromy the element $g$ so that the principal $G$-bundle is $P_g$, and we take the center $0$ as a fixed point. Thus, over the disc we have the rotation by $2\pi/|g|$, with $|g|$ the order of $g\in G$. Taking the product of this disc with the induced principal $G$-bundle $P_k$, we obtain the extension which makes the $G$-cobordism extendable. \end{proof} \begin{rem} We want to emphasize why the construction given in the previous proposition does not work for closed surfaces with genus $>1$. The reason is that the set of fixed points should be a smooth submanifold, which we cannot ensure for genus $>1$, since we have points where three lines meet. \end{rem} Now, we apply the previous results to abelian groups. \begin{thm}\label{extAbe} For $G$ a finite abelian group, any $G$-cobordism is extendable. \end{thm} \begin{proof} Consider a $G$-cobordism over a connected closed surface. By Proposition \ref{pro1}, we can write this $G$-cobordism as a connected sum of $G$-cobordisms with base space of genus one. This connected sum is performed on the total space along trivial bundles over a circle.
Since the connected sum is in the same bordism class as the disjoint union, we are decomposing this $G$-cobordism as a disjoint union of $G$-cobordisms with base space a closed surface of genus one. Because of Proposition \ref{prC1}, any $G$-cobordism over a closed surface of genus one is extendable, so the theorem follows. \end{proof} \subsection{The Schur multiplier} \label{secschur} The study of this theory began in 1904 by Isaai Schur in order to study the projective representations of groups. Nowadays, the Schur multiplier represents three different isomorphic groups given by the second free bordism group $\Omega_2^{SO}(G)$, the second homology group $H_2(G,\mathbb{Z})$ and the second cohomology group $H^2(G,\CC^*)$. \begin{defn} Let $\langle G, G\rangle$ be the free group on all pairs $\langle x, y\rangle$, with $x,y\in G$. There is a natural homomorphism of $\langle G, G\rangle$ onto the commutator group $[G, G]$, which sends $\langle x, y\rangle$ into $[x, y]$. Consider the kernel $Z(G)$ of this homomorphism and the normal subgroup $B(G)$ of $\langle G, G\rangle$ generated by the relations \begin{eqnarray}\label{four1} &\left<x,x\right>&\sim 1\,,\\\label{four2} &\left<x,y\right>&\sim \left<y,x\right>^{-1}\,,\\\label{four3} &\left<xy,z\right>&\sim \left<y,z\right>^x\left<x,z\right>\,,\\\label{four4} &\left<y,z\right>^x&\sim \left<x,[y,z]\right>\left<y,z\right>\,, \end{eqnarray} where $x,y,z\in G$ and $\langle y,z\rangle^x=\langle y^x,z^x\rangle = \langle xyx^{-1}, xzx^{-1}\rangle$. The Schur multiplier is defined as the quotient group \begin{equation} \mathcal{M}(G):=\frac{Z(G)}{B(G)}\,. \end{equation} \end{defn} Miller \cite{Mil} shows that the quotient $Z(G)/B(G)$ is canonically isomorphic to the Hopf's integral formula \begin{equation}\label{hopf} \mathcal{M}(G)\cong\frac{R\cap [F,F]}{[F,R]}\,, \end{equation} where $G=\langle \,F\,|\,R\,\rangle$. Moreover, in \cite{Mil} there are some consequent relations, which we enumerate in the following theorem. \begin{thm}[\cite{Mil}] The following relations can be deduced from \eqref{four1}-\eqref{four4}: \begin{eqnarray}\label{four5} &\left<x,yz\right>&\sim \left<x,y\right>\left<x,z\right>^y\,,\\\label{four6} &\left<x,y\right>^{\left<a,b\right>}&\sim \left<x,y\right>^{[a,b]}\,,\\\label{four7} & \left[\left<x,y\right>,\left<a,b\right>\right]&\sim \left<[x,y],[a,b]\right>\,,\\\label{four8} & \left<b,b'\right>\left<a_0,b_0\right> &\sim \left<[b,b'],a_0\right>\left<a_0,[b,b']b_0\right>\left<b,b'\right>\,,\\\label{four9} & \left<b,b'\right>\left<a_0,b_0\right> &\sim \left<[b,b']b_0,a_0\right>\left<a_0,[b,b']\right>\left<b,b'\right>\,,\\\label{four10} & \left<b,b'\right>\left<a,a'\right> &\sim \left<[b,b'],[a,a']\right>\left<a,a'\right>\left<b,b'\right>\,,\\\label{four11} &\left<x^n,x^s\right> &\sim 1\hspace{1cm}n=0,\pm1,\cdots;s=0,\pm1,\cdots\,, \end{eqnarray}for $x,y,z,a,b,a',b',a_0,b_0\in G$. \end{thm} The connection with bordism relates the elements of $Z(G)$ by means of the assignment \begin{equation}\label{corrbor} \langle x_1,y_1 \rangle \langle x_2,y_2\rangle \cdots\langle x_n,y_n \rangle \longmapsto (y_n,x_n)(y_{n-1},x_{n-1})\cdots (y_1,x_1)\,, \end{equation} where the sequence in the right defines the generating monodromies for a $G$-cobordism over a closed surface of genus $n$ as in Figure \ref{fig3}. 
\begin{figure} \caption{The $G$-cobordism associated to the sequence $(y_n,x_n)(y_{n-1} \label{fig3} \end{figure} Indeed, the previous four relations \eqref{four1}, \eqref{four2}, \eqref{four3} and \eqref{four4}, are interpreted in bordism as follows: \begin{itemize} \item[(i)] For \eqref{four1} and \eqref{four2}, we consider the $G$-cobordism defined by the pairs $(x,x)$ and $(x,y)(y,x)$, respectively. We represent these $G$-cobordisms in the left side of Figure \ref{fig80}, respectively. \begin{figure} \caption{The $G$-cobordisms associated to the pairs $(x,x)$ and $(x,y)(y,x)$.} \label{fig80} \end{figure} In the top of Figure \ref{fig80}, we apply the Dehn twist diffeomorphism and obtain that the conjugation becomes the neutral element $1\in G$. In the bottom of Figure \ref{fig80}, we cut along a trivial monodromy to reduce the genus by one. Notice that the $G$-cobordisms in the left side of Figure \ref{fig80} are null bordant since we can cut along a trivial monodromy eliminating the hole of the handle. \item[(ii)] For \eqref{four3} and \eqref{four4}, we obtain a $G$-cobordism, over a handlebody of genus two, where we can find a curve with trivial monodromy which reduces the genus by one. In Figure \ref{fig8}, we represent these identifications, respectively. \begin{figure} \caption{Reduction of genus through the cutting along a trivial monodromy.} \label{fig8} \end{figure} \end{itemize} Now, we focus on the Sylow theory of the Schur multiplier. We use Definition \ref{def1} and Theorem \ref{kar}, in order to show Proposition \ref{Sylow}. \begin{defn}\label{def1} For a subgroup $H\subset G$, there are the following induces maps: \begin{itemize} \item[i)] the restriction map, denoted by $\operatorname{res}:\mathcal{M}(G)\rightarrow \mathcal{M}(H)$, which associates to a $G$-cobordism over a closed surface, the restriction of the action to the subgroup $H$. \item[ii)] the corestriction map, denoted by $\operatorname{cor}:\mathcal{M}(H)\rightarrow \mathcal{M}(G)$, which starts with a principal $H$-bundle $P\rightarrow S$ and associates the Borel construction $P\times_H G$ produced by the quotient of the product $P\times G$ with the action of $H$ of the form $(x,g)h=(xh,h^{-1}g)$. The group $G$ has a free action over the the Borel construction by $[x,g]\hat{g}=[x,g\hat{g}]$. \end{itemize} \end{defn} In general, these maps extend to non-necessarily free actions, in particular, for the $G$-bordism groups $\Omega_{3}^{SO,\partial free}(G)$ of $3$-dimensional manifolds with a non necessarily free $G$-action which restricts to a free action over the boundary. \begin{thm}[\cite{SM}]\label{kar} Let $P$ be a Sylow $p$-subgroup of $G$ and let $\mathcal{M}(G)$ the $p$-component of the Schur multiplier $\mathcal{M}(G)$. Then the restriction map $\operatorname{res}:\mathcal{M}(G)\rightarrow \mathcal{M}(P)$ induces an injective homomorphism $\mathcal{M}(G)_p\rightarrow \mathcal{M}(P)$, and the corestriction map $\operatorname{cor}:\mathcal{M}(P)\rightarrow \mathcal{M}(G)$ induces a surjective homomorphism $\mathcal{M}(P)\rightarrow \mathcal{M}(G)_p$. \end{thm} \begin{prop}\label{Sylow} For a finite group $G$, and $\operatorname{Syl}(G)$ the set of isomorphism classes of Sylow subgroups of $G$, if any element $Q$ in $\operatorname{Syl}(G)$ satisfies that any $Q$-cobordism is extendable, then any $G$-cobordism is extendable. 
\end{prop} \begin{proof} For $n=|G|$ and $n=p^km$ with $p\not|m$, consider an element $f\in\mathcal{M}(G)_p$, hence the composition \begin{equation} F:=\operatorname{cor}\circ\operatorname{res}:\mathcal{M}(G)_p\longrightarrow\mathcal{M}(G)_p\,,\end{equation} is given by $f\longrightarrowt f^m$, which is an automorphism of $\mathcal{M}(G)_p$. By the assumptions, the restriction $\operatorname{res}(f)$ is extendable by a $3$-manifold $M$ with an action of $Q$, therefore, applying the corestriction we obtain that $f^m$ is extendable by the $3$-manifold $\operatorname{cor}(M)$. Similarly, we can start with $F^{-1}(f)$ and we get that $f$ is extendable and the proposition follows. \end{proof} \section{Generators for the Schur multiplier} \label{disi} In this section we give explicit generators for the Schur multiplier of the dihedral, the symmetric and the alternating groups. \subsection{Dihedral group} For $n\geq 3$, the dihedral group is the group of symmetries of the $n$-regular polygon (with $D_2=\mathbb{Z}_2$, $D_{4}=\mathbb{Z}_2\times \mathbb{Z}_2$), and presentation \begin{equation}\label{diedral} D_{2n}=\langle a,b:a^2=1,b^2=1,(ab)^n=1\rangle\,,\end{equation} where $c:=ab$ is the rotation of $2\pi/n$. The Schur multiplier has the form \begin{equation} \mathcal{M}(D_{2n})=\left\{ \begin{array}{cl} 0 & n=2k+1\,, \\ \mathbb{Z}_2 & n=2k\,. \end{array} \right. \end{equation} In order to find a generator we show the following. \begin{prop}\label{pr1}We obtain the following identifications: \begin{itemize} \item[(i)] $\langle c^i,c^j\rangle \sim 1$, \item[(ii)] $\left<c^{i},ac^j\right>\sim \langle c,a\rangle^i$, \item[(iii)] $\left<ac^{i},c^j\right>\sim \langle c,a\rangle^{-j}$, and \item[(iv)] $\left<ac^i,ac^j\right>\sim \langle c,a\rangle^{j-i}$. \end{itemize} \end{prop} \begin{proof} The relation (i) follows by \eqref{four11}. The use of \eqref{four3}, \eqref{four11} and \eqref{four5} implies \begin{align} &\left<c^i,ac^j\right>\sim\left<c^{i},a\right>\left<c^{i},c^{j}\right>^{a}\sim\left<c^{i},a\right>\,,\\ &\left<c^i,ac^j\right>=\left<cc^{i-1},ac^j\right>\sim\left<c^{i-1},ac^j\right>^c\left<c,ac^j\right>\sim\left<c^{i-1},a\right>\left<c,a\right>\,. \end{align} Therefore, we obtain the relation \begin{equation} \left<c^{i},a\right>\sim\underbrace{\left<c,a\right>\left<c,a\right>\cdots\left<c,a\right>}_i\,, \end{equation} which implies (ii). By \eqref{four2} we obtain (iii) as follows \begin{equation}\left<ac^{i},c^j\right>\sim \left<c^j,ac^{i}\right>^{-1}\sim \left<c,a\right>^{-j}\,.\end{equation} Finally, we use \eqref{four5} and \eqref{four1}, \begin{equation}\left<ac^{i},ac^j\right>\sim \left<c^i,ac^{j}\right>^{a} \left<a,ac^j\right>\sim\left<c^{-1},a\right>^i \left<a,c^j\right>^a\,,\end{equation} and by \eqref{four2} we obtain (iv). \end{proof} \begin{cor}\label{cordie} For $n=2k$, the generator of the group $\mathcal{M}(D_{2n})$ is represented by the element $\langle c^k,a\rangle$. \end{cor} \subsection{Symmetric group} The symmetric group $S_n$ is composed of permutation of the set $[n]=\{1,\cdots, n\}$. This group is generated by the transpositions $(ij)$ with $i,j\in[n]$. The Schur multiplier is given as follows \begin{equation} \mathcal{M}(S_{n})=\left\{ \begin{array}{cl} 0 & n\leq3\,, \\ \mathbb{Z}_2 & n\geq4\,. \end{array} \right. \end{equation} \begin{lem}\label{pipiripau} Let $k\in[n]$, and $\langle \sigma_1,\tau_1\rangle\cdots\langle \sigma_r,\tau_r\rangle$ be a sequence with $\sigma_i,\tau_i,\in S_n$, for $i\in\{1,\cdots, r\}$. 
There exist a positive number $0\leq s\leq r$ and the following elements: \begin{itemize} \item[(i)] $a_i,b_i\in S_n$, with $0\leq i\leq s$, such that all $a_i,b_i$ fix $k$, and \item[(ii)] $c_j,d_j\in S_n$, with $0\leq j\leq r-s$, such that for each $j$, at least one of $c_j,d_j$ does not fix $k$, \end{itemize} with the relation \begin{equation} \langle \sigma_1,\tau_1\rangle\cdots\langle \sigma_r,\tau_r\rangle\sim \langle a_1,b_1\rangle\cdots\langle a_s,b_s\rangle\langle c_1,d_1\rangle\cdots\langle c_{r-s},c_{r-s}\rangle\,. \end{equation} Moreover, $s$ is the amount of pairs $\langle\sigma_i,\tau_i\rangle$ such that both $\sigma_i$ and $\tau_i$ fix $k$. \end{lem} \begin{proof} It suffices to note that for pairs $\langle a,b\rangle$ and $\langle x,y\rangle$, with $a,b,x,y\in S_n$, such that $a$ and $b$ fix $k$ there is the relation \begin{align*} \left<x,y\right>\left<a,b\right>&\sim\left<a,b\right>\left<b,a\right>\left<x,y\right>\left<a,b\right>\\ &\sim\left<a,b\right>\left<x^{[b,a]},y^{[b,a]}\right>\,, \end{align*} where we have used \eqref{four6}. An iterative application of this process, allows us to put all terms fixing $k$ to the left in the sequence. \end{proof} \begin{prop}\label{propsym} Assume $n\geq 4$, and take elements $\sigma_i,\tau_i,\sigma'_j,\tau'_j\in S_n$, for $1\leq i\leq r$ and $1\leq j\leq s$, with the same commutator, i.e., \begin{equation} [\sigma_1,\tau_1]\cdots[\sigma_r,\tau_r]=[ \sigma'_1,\tau'_1]\cdots[\sigma'_s,\tau'_s]\,. \end{equation} Therefore, for the element $u:=\langle(1,2),(3,4)\rangle$, there is the relation \begin{equation} \langle \sigma_1,\tau_1\rangle\cdots\langle \sigma_r,\tau_r\rangle\sim u^k\langle \sigma'_1,\tau'_1\rangle\cdots\langle \sigma'_s,\tau'_s\rangle\,, \end{equation} with $k\in\lbrace0,1\rbrace$. \end{prop} \begin{proof} First, we observe that from \eqref{four3} and \eqref{four5}, we can assume that all the elements $\sigma_i,\tau_i,\sigma'_j,\tau'_j\in S_n$ are transpositions. By \eqref{four4}, every pair $\langle\sigma,\tau\rangle$ with $\sigma$ and $\tau$ disjoint transpositions is in the same class as the pair $u:=\left<(1,2),(3,4)\right>$. Therefore, we can assume that the pairs are of the form $\left<(i,j),(j,k)\right>$, with $i,j$ and $k$ different numbers. By exhaustion, the proposition follows for the symmetric group $S_n$, with $4\leq n \leq 6$. We proceed by induction for $n\geq7$ and we suppose that for $k< n$, the generator of the Schur multiplier $\mathcal{M}(S_n)$ is given by the element $u:=\left<(1,2),(3,4)\right>$. Set by $m$ the maximum of $r$ and $s$, for the sequences $\langle\sigma_1,\tau_1\rangle\cdots\langle \sigma_r,\tau_r\rangle$ and $\langle \sigma'_1,\tau'_1\rangle\cdots\langle \sigma'_s,\tau'_s\rangle$. For $m=1$, the proposition follows from the triviality of $\mathcal{M}(S_3)$. Suppose that our proposition follows for sequences with length $l<m$. We consider the sequence \begin{equation}\label{week} \langle\sigma_1,\tau_1\rangle\cdots\langle \sigma_r,\tau_r\rangle \left(\langle \sigma'_1,\tau'_1\rangle\cdots\langle \sigma'_s,\tau'_s\rangle\right)^{-1} \sim\langle\sigma_1,\tau_1\rangle\cdots\langle \sigma_r,\tau_r\rangle\langle \tau'_s,\sigma'_s\rangle\cdots\langle \tau'_1,\sigma'_1\rangle\,, \end{equation} which has trivial commutator and length given by $M:=r+s\leq 2m$. Let $x\in\lbrace1,\cdots,n\rbrace$ be the number that is fixed by the most terms of the sequence \eqref{week}. Given that the sequences have non trivial terms, each term permutes $3$ different numbers in $\{1,2,\cdots,n\}$. 
Therefore, the number $x$ is not fixed by at most $\frac{3(r+s)}{n}$ terms. Given that $\frac{3(r+s)}{n}\leq\frac{3(2m)}{7}<m$, hence $x$ is not fixed by at most $m-1$ terms. By Lemma \ref{pipiripau}, we can find an equivalent sequence for \eqref{week}, with the following form \begin{equation}\underbrace{\left<\alpha_1,\beta_1\right>\cdots\left<\alpha_t,\beta_t\right>}_{\text{fix number x}}\underbrace{\left<\alpha_{t+1},\beta_{t+1}\right>\cdots\left<\alpha_M,\beta_M\right>}_{\text{do not fix number x}}\,,\end{equation} where: \begin{itemize} \item[i)] $M-t< m$; \item[ii)] the $\alpha_i,\beta_i\in S_n$, with $0\leq i\leq t$, fix $x$; and \item[iii)] the $\alpha_j,\beta_j\in S_n$, with $t+1\leq j\leq M$, at least one does not fix $x$. \end{itemize} Moreover, by the proof of Lemma \ref{pipiripau}, the elements $\alpha_i,\beta_i,\alpha'_j,\beta'_j\in S_n$ are again transpositions. Now we consider the sequences \begin{equation} A:=\left<\alpha_1,\beta_1\right>\cdots\left<\alpha_t,\beta_t\right> \end{equation} and \begin{equation} B:=\left(\left<\alpha_{t+1},\beta_{t+1}\right>\cdots\left<\alpha_M,\beta_M\right>\right)^{-1}= \left<\beta_M,\alpha_M\right>\cdots\left<\beta_{t+1},\alpha_{t+1}\right>\,, \end{equation} where both sequences have the same commutator. Furthermore, the sequence $A$ has pairs composed by elements in $S_{n-1}$ because they fix $x$. By our induction hypothesis, for $n$, we conclude that the Schur multiplier $\mathcal{M}(S_{n-1})$ is generated by $u=\langle(1,2),(3,4)\rangle$. Therefore, $A\sim u^iC$ for $i\in\{0,1\}$ and $C$ is a sequence of pairs with elements in $S_{n-1}$. We can take $C$ to be of length $< m$, as it is has the same commutator as the chain $B$ of length $<m$. By the other induction hypothesis, for $m$, since $B$ and $C$ have length less than $m$, then there is $j\in \{0,1\}$ such that $B\sim u^j C$. This shows that the product of our initial sequences in \eqref{week} is equivalent to $u^{i-j}$ and the proof of the proposition follows. \end{proof} \begin{cor}\label{coraz2} For $n\geq4$, the generator of the group $\mathcal{M}(S_{n})$ is represented by the element $u:=\langle (1,2),(3,4)\rangle$. \end{cor} \subsection{Alternating group} The alternating group $A_n$ is the normal subgroup of $S_n$ with index $2$. The Schur multiplier has the form \begin{equation} \mathcal{M}(A_{n})=\left\{ \begin{array}{cl} 0 & n\leq 3\,, \\ \mathbb{Z}_2 & n=4,5, \\ \mathbb{Z}_6 & n=6,7, \\ \mathbb{Z}_2 & n\geq8\,. \end{array} \right. \end{equation} \begin{prop} For $n\geq4$, the element $\langle(1,2)(3,4),(1,3)(2,4)\rangle$ is nontrivial in $\mathcal{M}(A_{n})$. 
\end{prop} \begin{proof} Because of the relations \eqref{four3} and \eqref{four5} in $\mathcal{M}(S_n)$, we have the following \begin{align*} \langle(1,2)(3,4),(1,3)(2,4)\rangle&\sim\langle(3,4),(2,3)(1,4)\rangle \langle(1,2),(1,3)(2,4)\rangle\\ &\sim\langle(3,4),(2,3)\rangle\langle(2,4),(1,4)\rangle \langle(1,2),(1,3)\rangle \langle(2,3),(2,4)\rangle \end{align*} We also have from \eqref{four4}, \eqref{four5} and $\langle(2,4),(1,4)\rangle=\langle(2,3),(1,3)\rangle^{(3,4)}$ that \begin{align*} \langle(2,4),(1,4)\rangle&\sim \langle(3,4),[(2,3),(1,3)]\rangle\langle(2,3),(1,3)\rangle=\langle(3,4),(1,2)(1,3)\rangle\langle(2,3),(1,3)\rangle\\ &\sim\langle(3,4),(1,2)\rangle\langle(3,4),(2,3)\rangle\langle(2,3),(1,3)\rangle=u\langle(3,4),(2,3)\rangle\langle(2,3),(1,3)\rangle \end{align*} As $[(3,4),(2,3)]=[(2,3),(2,4)]$, $[(2,3),(1,3)]=[(1,3),(1,2)]$ and $\mathcal{M}(S_3)=0$, we have that $\langle(3,4),(2,3)\rangle\sim\langle(2,3),(2,4)\rangle$ and $\langle(2,3),(1,3)\rangle\sim\langle(1,3),(1,2)\rangle$. Therefore, \begin{align*} \langle(1,2)(3,4),(1,3)(2,4)\rangle&\sim\langle(2,3),(2,4)\rangle\langle(2,4),(1,4)\rangle \langle(1,2),(1,3)\rangle \langle(2,3),(2,4)\rangle\\ &\sim u\langle(2,3),(2,4)\rangle^2\langle(1,3),(1,2)\rangle \langle(1,2),(1,3)\rangle \langle(2,3),(2,4)\rangle\\ &\sim u\langle(2,3),(2,4)\rangle^3 \sim u = \langle(1,2),(3,4)\rangle\,, \end{align*} where $\langle(2,3),(2,4)\rangle^3$ vanishes since $[(2,3),(2,4)]^3=1$ and $\mathcal{M}(S_3)=0$. As a consequence, the element $\langle(1,2)(3,4),(1,3)(2,4)\rangle$ is nontrivial in $\mathcal{M}(S_n)$, and hence it is also nontrivial in $\mathcal{M}(A_n)$ for $n\geq 4$. \end{proof} \begin{cor}\label{ccor} For $n\geq4$ and $n\not\in\lbrace6,7\rbrace$, the generator of the group $\mathcal{M}(A_{n})$ is represented by the element $\langle(1,2)(3,4),(1,3)(2,4)\rangle$. \end{cor} \section{Extending group actions on surfaces} \label{nonfree} This section contains the main applications of this work. We start with free actions of abelian, dihedral, symmetric and alternating groups and then, we show that these actions extend to actions on 3-manifolds. Finally, we see the case of non-necessarily free actions for abelian and dihedral groups. \subsection{Free actions on surfaces} \begin{defn} Consider a compact oriented surface $S$ with a (free) group action \begin{equation} \alpha:S\times G\longrightarrow S\,.\end{equation} We say that the action is {\it extendable} if there exists a $3$-manifold $M$ with boundary $\partial M=S$, with an action of $G$ of the form $\overline{\alpha}:M\times G\longrightarrow M$, which extends $\alpha$, i.e., we have the commutative diagram \begin{equation} \xymatrix{S\times G\ar@{_(->}[d]\ar[r]^\alpha & S\ar@{_(->}[d] \\ M\times G\ar[r]_{\overline{\alpha}} & M\,.} \end{equation} \end{defn} \begin{proof}[Proof of Theorem \ref{thm1}] By Theorem \ref{extAbe} we know that any free action of a finite abelian group is extendable. For dihedral groups $D_{2n}$, we have two cases to consider. One is for $n=2k+1$, but since $\mathcal{M}(D_{4k+2})=0$, then any free action is extendable. The other is for $n=2k$, where by Corollary \ref{cordie}, the generator of the Schur multiplier is represented by a $G$-cobordism over a closed surface of genus one, therefore, by Proposition \ref{prC1} these free actions are extendable. Now consider free actions of the symmetric groups $S_n$, since $\mathcal{M}(S_n)=0$ for $n\leq 3$, it remains to prove the extension for $n\geq 4$. 
Similar as for dihedral groups, by Corollary \ref{coraz2} these free actions are extendable. For the alternating groups $A_n$ the free actions are extendable for $n\leq 3$. Again, for $n\geq 4$ and $n\neq 6,7$, by Corollary \ref{ccor} these actions are extendable. In the case of free actions of $A_n$ for $n=6,7$, we notice that the Sylow subgroups of $A_6$ and $A_7$ have the following isomorphic types $\{D_8,\mathbb{Z}_3\times \mathbb{Z}_3\times \mathbb{Z}_3,\mathbb{Z}_5,\mathbb{Z}_7 \}$ and because of Proposition \ref{Sylow}, we obtain that these free actions are extendable. \end{proof} \subsection{Non-necessarily free action on surfaces} Now we consider non-necessarily free actions of finite abelian groups and dihedral groups. The extension of these actions was already given by Reni-Zimmermann \cite{RZ} with $3$-dimensional methods and by Hidalgo \cite{Hida} with $2$-dimensional methods. \begin{thm}\label{teore1} Let $G$ be a finite abelian group with an action on a closed oriented surface where the fixed points are of two types: \begin{itemize} \item[(i)]fixed points produced by hyperelliptic involutions, see Figure \ref{invo}, \begin{figure} \caption{Hyperelliptic involutions} \label{invo} \end{figure} and \item[(ii)] ramification points with complementary monodromies (signature $>2$). \end{itemize} Then the action is extendable by a $3$-dimensional handlebody. \end{thm} \begin{proof} The extension is performed in some steps. First, we consider the quotient of the surface by the hyperelliptic involutions in some order and smooth the corners with the aim to obtain a smooth closed oriented surface. The hyperelliptic involutions act in the set of ramification points with signature $>2$ and in the quotient we still have ramifications points grouped into pairs. Then we connect the complementary monodromies by cylinders in order to have a free action over a closed oriented surface. From Theorem \ref{extAbe}, we have that this free action is extendable and by the proof of Proposition \ref{pr1}, this extension is by means of a 3-dimensional handlebody. Now we disconnect the complementary monodromies by cutting in each of the cylinders that we glued. Each cylinder has only one fixed point by the proof of Proposition \ref{prC1}. Finally, we extend the action to the original surface by an unfolding process using the hyperelliptic involutions in the reverse order in which we constructed the initial quotient surface. \end{proof} \begin{thm}\label{teore2} Every action of a dihedral group $D_{2n}$ over a closed orientable surface is extendable by a $3$-dimensional handlebody. \end{thm} \begin{proof} By Proposition \ref{pr1}, the extension problem reduces to a finite product of the same $D_{2n}$-cobordism induced by the pair $\langle c,a\rangle\sim \langle ab,a\rangle\sim \langle b,a\rangle^a$. Thus, it is enough to solve the extension problem for the pair $\langle b,a\rangle$ and for the $D_{2n}$-cobordism over the pair of pants with entries in $[D_{2n},D_{2n}]=\langle c^2\rangle$. The last reduces to the extension of $G$-cobordisms over pair of pants where $G=[D_{2n},D_{2n}]$, which follows because the group is cyclic. For the pair $\langle b,a\rangle$ we construct a representative $D_{2n}$-cobordism and there are two cases to consider: \begin{itemize} \item[(i)]For $n=2k$, we consider the disjoint union of two spheres, where each one is the gluing of two $n$-gons by the boundary. 
Denote by $T$ the operation of switching from one sphere to the other and by $S$ the operation of switching from one $n$-gon to the other in the same sphere. The action of $a$ lifts to the composition $Sa=aS$ and the action of $b$ lifts to the composition $Tb=bT$. We obtain $n=2k$ fixed points over each sphere plus the north a south poles. Then for each fixed point (except the north and south pole), we remove a small disc around it and we connect the holes for opposite fixed points with a cylinder. In a similar way as for abelian groups, this action extends with the north and south pole as unique ramification points. For $k=2$, in Figure \ref{foursq}, we draw an illustrator of this construction. \begin{figure} \caption{Representative for $\langle b,a\rangle$ with $n=2k$ ($k=1$).} \label{foursq} \end{figure} \item[(ii)] For $n=2k+1$, we consider one sphere as the gluing of two $n$-gons by the boundary. Denote by $S$ the operation of switching from one $n$-gon to the other. Thus the action of $a$ lifts to the composition $Sa=aS$ and the action of $b$ lifts to the composition $Sb=bS$. We obtain $2n$ fixed points plus the north a south poles. Then we perform the same procedure to construct the extension of the action, as in the even case. \end{itemize} \end{proof} \end{document}
\begin{document} \title{Livsic-type determinantal representations and Hyperbolicity} \author{E. Shamovich} \address{Department of Mathematics, Ben-Gurion University of the Negev} \email{[email protected]} \thanks{The research of E. S. was partially carried out during the visits to the of Mathematics and Statistics of the University of Konstanz, supported by the EDEN Erasmus Mundus program (30.12.2013 - 30.6.2014) and to the MFO, supported by the Leibnitz graduate student program (6.4.2014-12.4.2014). The research of E. S. was also supported by the Negev fellowship of the Kreitman school of the Ben Gurion University of the Negev.} \author{V. Vinnikov} \address{Department of Mathematics, Ben-Gurion University of the Negev} \email{[email protected]} \thanks{Both authors were partially supported by US--Israel BSF grant 2010432.} \begin{abstract} Hyperbolic homogeneous polynomials with real coefficients, i.e., hyperbolic real projective hypersurfaces, and their determinantal representations, play a key role in the emerging field of convex algebraic geometry. In this paper we consider a natural notion of hyperbolicity for a real subvariety $X \subset \mathbb{P}^d$ of an arbitrary codimension $\ell$ with respect to a real $\ell - 1$-dimensional linear subspace $V \subset \mathbb{P}^d$ and study its basic properties. We also consider a special kind of determinantal representations that we call Livsic-type and a nice subclass of these that we call very reasonable{}. Much like in the case of hypersurfaces ($\ell=1$), the existence of a definite Hermitian very reasonable{} Livsic-type determinantal representation implies hyperbolicity. We show that every curve admits a very reasonable{} Livsic-type determinantal representation. Our basic tools are Cauchy kernels for line bundles and the notion of the Bezoutian for two meromorphic functions on a compact Riemann surface that we introduce. We then proceed to show that every real curve in $\mathbb{P}^d$ hyperbolic with respect to some real $d-2$-dimensional linear subspace admits a definite Hermitian, or even real symmetric, very reasonable{} Livsic-type determinantal representation. \end{abstract} \maketitle \section{Introduction} The study of hyperbolic polynomials originated with the theory of partial differential equations. A linear partial differential equation with constant coefficients is called hyperbolic if there exists $a \in \mathbb{P}^d({\mathbb R})$ such that the symbol $p$, considered as a homogeneous polynomial, satisfies $p(a) \neq 0$ and $p(a + tx) = 0$ only if $t \in {\mathbb R}$ for every $x \in \mathbb{P}^d({\mathbb R})$. This led G\"{a}rding \cite{Ga51,Ga59} and Lax \cite{Lax58} to consider such polynomials and the hypersurfaces $X({\mathbb R})=\left\{x \in \mathbb{P}^d({\mathbb R}) \colon p(x)=0\right\}$ they define. In particular, G\"{a}rding proved in \cite{Ga59} that if $p$ is hyperbolic with respect to $a$ as above then the connected component $C$ of $a$ in $\mathbb{P}^d({\mathbb R}) \setminus X({\mathbb R})$ is convex and $p$ is hyperbolic with respect to any $a'$ in $C$ (in the case when $X$ is irreducible or $X({\mathbb R})$ is smooth, $C$ simply consists of all $a' \in \mathbb{P}^d({\mathbb R})$ such that $p$ is hyperbolic with respect to $a'$). More precisely, the cone over the set $C$ in ${\mathbb R}^{d + 1}$ has two connected components, each one a convex cone. During the last two decades these hyperbolicity cones came to play an important role in optimization and related fields \cite{Gu97,BGLS01,Re06}. 
Among other applications, hyperbolic polynomials played a key role in the recent proof by Marcus, Spielman and Srivastava of the Kadison--Singer conjecture in operator algebras \cite{MSS14}. A simple way to manufacture hyperbolic polynomials is to consider Hermitian matrices $A_0,\ldots A_d$ such that $A_0 > 0$, and set $p(x_0,\ldots,x_d) = \det \left(\sum_{j=0}^d x_j A_j \right)$. Then since $A_0 > 0$, we see easily (using the fact the eigenvalues of a Hermitian matrix are real) that $p$ is hyperbolic with respect to $(1:0:\ldots :0)$. Furthermore, the connected component of $(1,0,\ldots,0)$ in $\{x \in {\mathbb R}^{d+1} \colon p(x) \neq 0\}$ is given by the linear matrix inequality $\sum_{j=0}^d x_j A_j > 0$, i.e., the hyperbolicity cone is a spectrahedral cone \cite{RG95} which is the feasible set of a semidefinite program, see \cite{NN94,VB96,Nem06} as well as the recent survey volume \cite{CA G}. In this case we say that $p$ admits a definite Hermitian determinantal representation. Using the correspondence between determinantal representations and kernel line bundles \cite{Vin89} that goes in its essence back to Dixon \cite{D1900}, and a detailed analysis of the real structure of the corresponding Jacobian variety, it was shown by the second author in \cite{Vin93} that for a smooth real hyperbolic curve in $\mathbb{P}^2$, definite determinantal representations are parametrized by points on a certain distinguished real torus in the Jacobian. In particular, every smooth real hyperbolic curve in $\mathbb{P}^2$ admits a definite determinantal representation, a fact established previously by Dubrovin \cite{Dub83}. A technique using the Cauchy kernels for vector bundles was developed in \cite{BV-ZPF} (following \cite{BV-ZPT}) to provide a construction of determinantal representations for any plane algebraic curve. This technique was later used by Helton and the second author in \cite{HelVin07} to prove that every real hyperbolic plane curve admits a definite Hermitian and even a real symmetric determinantal representation, settling a conjecture of Lax \cite{Lax58}. (The result in \cite{HelVin07} is in the nonhomogeneous setting of real zero polynomials --- the explicit translation to the homogeneous setting of hyperbolic polynomials and the connection to the Lax conjecture were worked out in \cite{LPR05}.) If we consider hypersurfaces in $\mathbb{P}^d$ for $d > 2$, we immediately see by a count of parameters argument \cite{Dic21} or by a Bertini theorem argument as in \cite{Bea00} that a generic hypersurface does not admit a determinantal representation (except for quadrics and cubics in $\mathbb{P}^3$). Determinantal representations of possibly singular and multiple hypersurfaces in $\mathbb{P}^d$ were considered in details by Kerner and the second author in \cite{KerVin12} to which we also refer for further references. It was proved by Branden in \cite{Bra11} that even if we allow multiplicity structure not every real hyperbolic hypersurface will admit a definite determinantal representation. We refer to \cite{Vppf} for an up-to-date survey on definite determinantal representations of real hyperbolic hypersurfaces and linear matrix inequality representations of the corresponding hyperbolicity cones; see also \cite{Kum} for a recent progress. In this paper we proceed in a different direction: we consider determinantal representations and hyperbolicity for subvarieties $X \subset \mathbb{P}^d$ ($d \geq 2$) of an arbitrary codimension $\ell \geq 1$, both in general and in the case of curves. 
In Section \ref{sec:det_rep} we define a special kind of determinantal representations that we call Livsic-type determinantal representations that generalize both linear determinantal representations of hypersurfaces and the determinantal representations of curves considered in \cite{LKMV} in the context of multivariable operator theory and multidimensional systems (vessels). We then show that a specific subclass of Livsic-type determinantal representations, that we call very reasonable{}, has especially nice properties. In particular, if $X$ admits a very reasonable{} Livsic-type determinantal representation, then the associated hypersurface $Y$ in the Grassmanian $\mathbb{G}(\ell - 1,d)$ of $\ell-1$-dimensional linear subspaces of $\mathbb{P}^d$ (that consists of linear subspaces that intersect $X$) admits a linear determinantal representation. In Section \ref{sec:hyper} we define the notion of hyperbolicity for subvarieties of $\mathbb{P}^d$ of an arbitrary codimension: we call a real subvariety $X$ hyperbolic with respect to a real linear subspace $V \subset \mathbb{P}^d$ of dimension $\ell-1$ if $X \cap V = \emptyset$ and for every real linear subspace $U \subset \mathbb{P}^d$ of dimension $\ell$ containing $V$, $X \cap U$ consists of only real points. Equivalently, every real $1$-dimensional Schubert cycle through $V$ in the Grassmanian intersects the associated hypersurface $Y$ in real points only. We show that the connected component $C(V)$ of $V$ in $\mathbb{G}(\ell - 1,d)({\mathbb R}) \setminus Y({\mathbb R})$ has a natural convexity property that we call slice-convexity, and that $X$ is hyperbolic with respect to any $V' \in C(V)$. It is an open question whether $C(V)$ (more precisely any of the two connected components of the cone over it in the Pl\"{u}cker embedding) has a different property of being extendably convex in the sense of Buseman \cite{Bus61} (an intersection of a convex set in the ambient space with the image of the Grassmanian), or whether, in the case when $X$ is irreducible or when $X({\mathbb R})$ is smooth, $C(V)$ coincides with the set of all $\ell-1$-dimensional real linear subspaces $V'$ so that $X$ is hyperbolic with respect to $V'$. We also demonstrate that if $X$ admits a very reasonable Livsic-type determinantal representations that is definite Hermitian, then $X$ is hyperbolic. Sections \ref{sec:bezoutian}--\ref{sec:hyper_curve} are dedicated to Livsic-type determinantal representations and hyperbolicity for curves in $\mathbb{P}^d$. While our methods are a generalization of the methods used in \cite{BV-ZPF} and \cite{HelVin07}, it is both more natural and more convenient to set them in the framework of Bezoutians on a compact Riemann surface. In Section \ref{sec:bezoutian} we introduce the notion of a Bezoutian of two meromorphic functions with simple poles on a compact Riemann surface; this notion originated in the study of Hankel-type realizations for meromorphic bundle maps on a compact Riemann surface as transfer functions of overdetermined 2D systems (vessels) \cite{bvh}, and seems to be appropriate for studying localization of zeroes just as in the classical (genus zero) case. Similar notions of resultants of meromorphic functions on a Riemann surface were considered by Gustafsson and Tkachev in \cite{GuTk09} and \cite{GuTk11}. 
We limit ourselves to proving several basic properties of the Bezoutian that are essential for our purposes here, and postpone a more general development of the theory and applications (as well as clarifying the relation to the work of Shapiro and the second author \cite{AS,ASV1,ASV2}) to a future publication. In Section \ref{sec:div_func} we consider Bezoutians on compact real Riemann surfaces (a Riemann surface equipped with an antiholomorphic involution $\tau$ or equivalently the desingularization of a real algebraic curve) and in particular on those of dividing type. We show how the Bezoutian relates to dividing functions, i.e., real meromorphic functions that map a half of the compact real Riemann surfaces of dividing type onto the upper half plane and that are closely related to the hyperbolicity of the Riemann surface birationally embedded as an algebraic curve in a projective space. In Section \ref{sec:det_rep_curve} we use the Bezoutians to show that every curve $X \subset \mathbb{P}^d$ admits a very reasonable{} Livsic-type determinantal representations, generalizing the construction of \cite{BV-ZPF} in the case $d=2$ and (essentially) the construction of Kravitsky \cite{Kra} (see also \cite{LKMV}) in the case of rational curves (genus zero). Finally, in Section \ref{sec:hyper_curve} we extend the results of \cite{HelVin07} in the case $d=2$: we show that every curve $X$ in $\mathbb{P}^d$ hyperbolic with respect to some $d-2$-dimensional real linear subspace $V \subset \mathbb{P}^d$ admits a definite Hermitian and even real symmetric very reasonable{} Livsic-type determinantal representation. Furthermore, when $X$ is irreducible, the set of all $V' \in \mathbb{G}(\ell - 1,d)({\mathbb R})$ such that $X$ is hyperbolic with respect to $V'$ is given by a linear matrix inequality (in the coordinates of the Pl\" ucker embedding). Our terminology is quite standard. All our varieties are over the field ${\mathbb C}$ of complex numbers, are reduced unless explicitly stated otherwise, and we identify the variety with the set of its (closed) points over ${\mathbb C}$. We say that $X \subset \mathbb{P}^d$ is a real subvariety if $X$ is defined over the reals (i.e., by homogeneous polynomial equations with real coefficients); we then denote by $X({\mathbb R})$ the set of points of $X$ that are rational over ${\mathbb R}$ (i.e., have real coordinates). When we consider the dimension or the codimension of $X$ we assume that $X$ has pure dimension (i.e., all the irreducible components of $X$ have the same dimension) unless the converse is explicitly specified. We denote by $\mathbb{G}(m,d)$ the Grassmanian of $m$-dimensional linear subspaces in the $d$-dimensional projective space $\mathbb{P}^d$. We will assume that ${\mathbb C}^{d + 1}$ is equipped with the standard scalar product. For $V$ a subspace in ${\mathbb C}^{d 1}$ we will write $V^{\perp}$ for the orthogonal complement of $V$; note that if a subspace is real then so is its orthogonal complement. For most of our purposes $V^{\perp}$ could have been replaced by any complementary subspace, but the use of the orthogonal complement will streamline some proofs and simplify notations. We will also use the standard scalar product to identify ${\mathbb C}^{d 1}$ with its dual, a fact that we will use later both implicitly and explicitly. \section{Livsic-type Determinantal Representations} \label{sec:det_rep} In his work M. S. 
Livsic and his collaborators considered plane algebraic curves obtained from matrices $\gamma_{01}, \gamma_{02}, \gamma_{12} \in M_n({\mathbb C})$ by: \begin{equation*} \det \left( \mu_2 \gamma_{01} - \mu_1 \gamma_{02} + \mu_0 \gamma_{12} \right). \end{equation*} Now consider the tensor in $\wedge^2 {\mathbb C}^3 \otimes M_n({\mathbb C})$ given by $\gamma = \gamma_{01} (e_0 \wedge e_1) + \gamma_{02} (e_0 \wedge e_2) + \gamma_{12} (e_1 \wedge e_2)$, where $e_0$, $e_1$ and $e_2$ form a basis of ${\mathbb C}^3$. For every point $\mu = \mu_0 e_0 + \mu_1 e_1 + \mu_2 e_2 \in {\mathbb C}^3$ one has that: \begin{equation*} \gamma \wedge \mu = \left( \mu_2 \gamma_{01} - \mu_1 \gamma_{02} + \mu_0 \gamma_{12} \right) e_0 \wedge e_1 \wedge e_2. \end{equation*} Fixing an orientation on ${\mathbb C}^3$, we can identify $\gamma \wedge \mu$ with a matrix in $M_n({\mathbb C})$. Note that the determinant of $\gamma \wedge \mu$ is zero if and only if there exists a vector $0 \neq v \in {\mathbb C}^n$, such that $(\gamma \wedge \mu) v = 0$. Furthermore it is invariant under the action of ${\mathbb C}^{\times}$ on ${\mathbb C}^3$ and hence we can identify the curve with the following set of points: \begin{equation*} D(\gamma) = \left\{ \mu \in \mathbb{P}^2 \mid \exists \, v \in {\mathbb C}^n \setminus {0} \,, (\gamma \wedge \mu)v = 0. \right\}. \end{equation*} We will say that a projective plane curve, $X$, admits a Livsic-type determinantal representation if there exists $\gamma \in \wedge^2 {\mathbb C}^3 \otimes M_n({\mathbb C})$, such that $X = D(\gamma)$. It has been shown by the second author that every projective plane curve admits a Livsic-type determinantal representation (cf. \cite{Vin89, Vin93, LKMV}). Each element $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ can be thought of as a linear map $\gamma \colon {\mathbb C}^n \to \wedge^{k+1} {\mathbb C}^{d+1} \otimes {\mathbb C}^n$. Fix $e_0,\ldots,e_d$, a basis of ${\mathbb C}^{d+1}$. For $I \subset \{0,\ldots,d\}$ we will write $e_I = e_{i_1} \wedge \ldots \wedge e_{i_r}$, where $I = \{i_1,\ldots,i_r\}$ and $i_1 < i_2 < \ldots < i_r$. Then: \begin{equation*} \gamma = \sum_{I \subset \{0,\ldots,d\}, |I| = k + 1} \gamma_I e_I. \end{equation*} Thus for $u \in {\mathbb C}^n$ we get $\gamma u = \sum_{I \subset \{0,\ldots,d\}, |I| = k + 1} \gamma_I u \otimes e_I$. Now write $\mu = \sum_{j=0}^d \mu_j e_j$ and for every $J \subset \{0,\ldots,d\}$, $|J| = k+2$ set: \begin{equation*} (\gamma \wedge \mu)_J = \sum_{j \in J} (-1)^{\sigma(J,j)} \mu_j \gamma_{J \setminus \{j\}}. \end{equation*} Here $(-1)^{\sigma(J,j)}$ is the sign of the permutation required to obtain the form described above, i.e., $\sigma(J,j) = |\{j^{\prime} \in J \mid j^{\prime} > j \}|$. Conclude that: \begin{equation} \label{eq:gamma_mu_coord} \gamma \wedge \mu = \sum_{J \subset \{0,\ldots,d\}, |J| = k + 2} (\gamma \wedge \mu)_J e_J. \end{equation} Next take $V \subset \mathbb{P}^d$ a plane of dimension $d-k-1$ spanned by $v_0,\ldots,v_{d-k-1}$. Clearly $\gamma \wedge v_0 \wedge \ldots \wedge v_{d-k-1} \in \wedge^{d+1} {\mathbb C}^{d+1} \otimes M_N({\mathbb C})$. We fix an orientation and identify the later space with ${\mathbb C}$ and thus $\gamma \wedge v_0 \wedge \ldots \wedge v_{d-k-1}$ with a matrix. With respect to the fixed basis we have that: \[ v_0 \wedge \ldots \wedge v_{d-k-1} = \sum_{J \subset \{0,\ldots,d\}, |J| = d-k} p(V)_J e_J. \] Here $p(V)_J$ are the coordinates of the vector $v_0 \wedge \ldots \wedge v_{d-k-1}$ with respect to our basis. 
Hence, using our identification, we can write: \begin{equation} \label{eq:gamma_v_coord} \gamma \wedge v_0 \wedge \ldots \wedge v_{d-k-1} = \sum_{I \subset \{0,\ldots,d\}, |I| = k + 1} (-1)^{\sigma(I)} p(V)_{I^c} \gamma_I. \end{equation} Here $I^c = \{0,\ldots,d\} \setminus I$ and $(-1)^{\sigma(I)} e_0 \wedge \ldots \wedge e_d = e_I \wedge e_J$. Now we can generalize the definition for curves. \begin{definition} Given a tensor $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$, we define the following set: \begin{equation*} D(\gamma) = \left\{ \mu \in \mathbb{P}^d \mid \exists \, v \in {\mathbb C}^n \setminus {0} \,, (\gamma \wedge \mu)v = 0. \right\}. \end{equation*} Here we consider $\gamma \wedge \mu$ as a mapping from ${\mathbb C}^n$ to $\wedge^{k+2} {\mathbb C}^{d+1} \otimes {\mathbb C}^n$. We will say that $\gamma$ is non-degenerate if there exist $v_0,\ldots,v_{d-k-1} \in {\mathbb C}^{d+1}$ linearly independent, such that $\gamma \wedge v_0 \wedge \ldots \wedge v_{d-k-1}$ is invertible, considered as a matrix in $M_n({\mathbb C})$. \end{definition} Note that non-degeneracy depends only on the $d-k-1$-plane in $\mathbb{P}^d$, spanned by the vectors $v_0,\ldots,v_{d-k-1}$. Let $V \subset \mathbb{P}^d$ be this plane, then we denote $\gamma(V) = \gamma \wedge v_0 \wedge \ldots \wedge v_{d-k-1}$. Whenever necessary we will identify $\gamma(V)$ with a matrix in $M_n({\mathbb C})$ via an orientation as in \eqref{eq:gamma_v_coord}. Using \eqref{eq:gamma_mu_coord} we have: \[ D(\gamma) = \left\{ \mu \in \mathbb{P}^d \mid \cap_{J \subset \{0,\ldots,d\}, |J| = k + 2} \ker (\gamma \wedge \mu)_J \neq \{0\} \right\}. \] \begin{rem} Note that $D(\gamma)$ is cut out by the ideal generated by the maximal minors of $\gamma \wedge \mu$, considered as a matrix of linear forms in the entries of $\mu$. Alternatively, one can consider it as generated by polynomials of the following form: \[ \det \left(\sum_{J \subset \{0,\ldots,d\}, |J| = k + 2} m_J (\gamma \wedge \mu)_J\right).\] Here $m_J \in M_n({\mathbb C})$ are arbitrary matrices (cf. \cite[Prop.\ 8.2.1]{LKMV} for the case when $k=1$, the proof of the general case is identical). However, $D(\gamma)$ with this closed subscheme structure will generally be non-reduced and might even have embedded components. We can thus conclude that $D(\gamma)$ is closed subset of $\mathbb{P}^d$. \end{rem} \begin{lem} \label{lem:inclusion} Fix some tensor $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ and a $d-k-1$-plane $V \subset \mathbb{P}^d$. Then the intersection of $V$ and $D(\gamma)$ is non-empty implies that: \begin{equation*} \det \gamma(V) = 0. \end{equation*} \end{lem} \begin{proof} Every point on $V$ is of the form $t_0 v_0 + \ldots t_{d-k-1} v_{d-k-1}$, for some basis of $V$. If such a point is on $D(\gamma)$, there exists some non-zero $u \in {\mathbb C}^n$, such that: \begin{equation*} \sum_{j=0}^{d-k-1} t_j (\gamma \wedge v_j) u = 0. \end{equation*} Now since some $t_j \neq 0$, taking the exterior product with $v_0 \wedge \ldots \wedge \widehat{v}_j \wedge \ldots \wedge v_l$ we get that: \begin{equation*} \left(\gamma \wedge v_0 \wedge \ldots \wedge v_{d-k-1} \right) u = 0. \end{equation*} \end{proof} We identify $\mathbb{G}(d-k-1,d)$ with its image in $\mathbb{P}(\wedge^{d-k} {\mathbb C}^{d+1})$ via the Pl\"{u}cker embedding. Recall that the Pl\"{u}cker embedding is the map sending a subspace $V \subset {\mathbb C}^{d+1}$ of dimension $\ell$ to the line $\wedge^\ell V \subset \wedge^{\ell} {\mathbb C}^{d+1}$. 
Thus we get an embedding of the Grassmannian into $\mathbb{P}(\wedge^{\ell} {\mathbb C}^{d+1})$. We denote by $v_0 \wedge \ldots \wedge v_{d-k-1}$ the Pl\"{u}cker coordinates of a $d-k-1$-plane $V$ in $\mathbb{P}^d$. In this setting $\gamma(V)$ defines a matrix of linear forms on the Grassmannian. Note that the $p(V)_J$ in \eqref{eq:gamma_v_coord} are precisely the Pl\"{u}cker coordinates with respect to the basis $e_0,\ldots,e_d$. \begin{cor} For a non-degenerate $\gamma$, we have that $\dim D(\gamma) \leq k$. Therefore, for a generic choice of $d-k-1$-plane $V$ we have that $\gamma(V)$ is invertible. \end{cor} \begin{proof} Note that $\det \gamma(V)$ is a section of a line bundle on $\mathbb{G}(d-k-1,d)$. Since $\gamma$ is non-degenerate, this section does not vanish identically. Conclude that the zeroes are a hypersurface. \end{proof} Let $S = {\mathbb C}[x_0,\ldots,x_d]$ with the natural grading, then $\gamma \wedge \mu$, considered as a matrix of linear forms in the entries of $\mu$, is a map between the graded modules: \begin{equation*} \gamma \wedge \mu \colon S(-1)^n \to S^{n {d+1 \choose k+2}}. \end{equation*} \begin{prop} \label{prop:deg_locus} The set $D(\gamma)$ is the degeneration locus of a vector bundle map on $\mathbb{P}^d$. This, in particular, is another way to see that $D(\gamma)$ is closed. \end{prop} \begin{proof} Just apply module to sheaf correspondence for $\operatorname{Proj}$ to the above map, to get: \begin{equation*} \gamma \wedge \mu \colon {\mathcal O}(-1)^n \to {\mathcal O}^{n {d+1 \choose k+1}}. \end{equation*} The points that belong to $D(\gamma)$ are precisely those points, where the map is not injective on the stalk. Thus $D(\gamma)$ is the degeneration locus of this map. \end{proof} Note that the definition is independent of the choice of the coordinates since given any $g \in \operatorname{GL}_{d+1}({\mathbb C})$ we have that $\mu \in D(\gamma)$ if and only if $g \mu \in g D(\gamma)$, since the map defined by $\gamma$ changes by a multiplication by an invertible scalar matrix on the left. \begin{rem} Following the Beilinson-Gelfand-Gelfand construction one can identify $\wedge^{i-j} {\mathbb C}^{d+1} \cong \operatorname{Hom}(\Omega^i(i),\Omega^j(j))$, for $0 \leq j \leq i \leq 0$. We think of $\Omega^i(i)$ as embedded in $\wedge^i {\mathbb C}^{d+1} \otimes {\mathcal O}$, where ${\mathcal O}$ is the sheaf of regular functions on $\mathbb{P}^d$. Hence in particular $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ defines uniquely a map from $\Omega^d(d)^n \cong {\mathcal O}(-1)^n$ to $\Omega^{d-k-1}(d-k-1)^n$. This map however is not the same map as defined above unless $k = d-1$. There is however a way to change the signs in $\gamma$ to obtain one from the other. \end{rem} In general the set $D(\gamma)$ will be empty, unless $d = k + 1$. In order to emphasize a special case when the upper bound is achieved, we make the following definition: \begin{definition} A non-degenerate tensor $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ will be called reasonable{} if $\dim D(\gamma) = k$. \end{definition} Recall that an irreducible subvariety $X$ of dimension $k$ of $\mathbb{P}^d$ defines a class in the $k$-th Chow group of $\mathbb{P}^d$. It is well known that the $k$-th Chow group of $\mathbb{P}^d$ is isomorphic to ${\mathbb Z}$ and is generated by the class of a $k$-plane. Therefore $[X] = n [L_k]$ and we call $n$ the degree of $X$. 
For a pure-dimensional reducible variety, we represent it as a formal sum of its components, therefore its degree is the sum of the degrees of its components. It is useful to keep track of the dimension of the kernel of the map $\gamma \wedge \mu$, hence we make the following definition: \begin{definition} We define the cycle associated to a non-degenerate $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ in $Z_*(\mathbb{P}^d)$ by: \begin{equation*} Z(\gamma) = \sum_{j=1}^r n_j [D_j]. \end{equation*} Here we denote by $D_j$ the irreducible components of $D(\gamma)$. The numbers $n_j$ are obtained by taking the exact sequence: \begin{equation*} 0 \to {\mathcal O}(-1)^n \to {\mathcal O}^{n {d+1 \choose k+1}} \to {\mathcal C} \to 0 \end{equation*} and pulling it back to $D_j$. Since $D_j$ is in the degeneracy locus, we get the exact sequence: \begin{equation*} 0 \to {\mathcal K} \to {\mathcal O}_{D_j}(-1)^n \to {\mathcal O}_{D_j}^{n {d+1 \choose k+1}} \to {\mathcal C}_{D_j} \to 0. \end{equation*} We call ${\mathcal K}$ the kernel sheaf associated to the tensor $\gamma$. The kernel sheaf is a coherent sheaf on $D_j$ and we take $n_j$ to be the dimension of the generic fiber of ${\mathcal K}$. We define the degree of $\gamma$ to be: \begin{equation*} \deg(\gamma) = \int_{\mathbb{P}^d} [Z(\gamma)][L_{d-k}]. \end{equation*} Here $[L_{d-k}]$ is the rational equivalence class of the $d-k$-plane in $\mathbb{P}^d$. \end{definition} \begin{rem} Let $D(\gamma) = D_1 \cup \ldots \cup D_r \cup D^{\prime}$, where $\dim D_j = k$ for each $j$ and they are irreducible and $\dim D^{\prime} < k$. Then a generic $d-k$-dimensional plane $U$ intersects each $D_j$ at $\deg D_j$ distinct points and does not intersect $D^{\prime}$. Let $U_j \subset D_j$ be the open set on which the dimension of the fiber of the kernel sheaf is $n_j$. Since $D_j \setminus U_j$ is a closed subvariety, its dimension is at most $k-1$, hence using the incidence correspondence described below, it is easy to see that a generic $U$ intersects each $D_j$ at points of $U_j$. Hence $\deg \gamma$ is the sum of the dimensions of the fibers of the kernel sheaf at points of intersection with a generic $d-k$-plane. Furthermore, note that for any $d-k$-plane that intersects each $D_j$ at $\deg D_j$ distinct points, the sum of the dimensions of the kernel sheaf fibers at those points is always greater or equal to $\deg \gamma$, since the dimension of the fibers of a coherent sheaf is upper semi-continuous. \end{rem} \begin{definition} \label{def:det_rep} Let $X \subset \mathbb{P}^d$ be a subvariety of dimension $k$. We say that $X$ admits a Livsic-type determinantal representation if $X = D(\gamma)$ for some non-degenerate tensor $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$, for some integer $n$. If for some (and hence for every) basis $e_0,\ldots,e_d$ the matrices $\gamma_I$ are symmetric we will say that $X$ admits a symmetric Livsic type determinantal representation. If for some (and hence for every) real basis $e_0,\ldots,e_d$ for ${\mathbb C}^{d+1}$, we have that $\gamma = \sum_{I \subset \{0,\ldots,d\},|I|=k+1} \gamma_I e_I$ with every $\gamma_I$ Hermitian or real symmetric, we will say that $X$ admits a Hermitian or real symmetric Livsic-type determinantal representation, respectively. 
\end{definition} Recall from \cite{Har95} that for every integer $\ell$ we have the incidence correspondence: \begin{equation*} \Sigma = \left\{ (x, V) \mid x \in V \right\} \subset \mathbb{P}^d \times \mathbb{G}(\ell,d). \end{equation*} We get a diagram by restricting the projection maps to $\Sigma$: \begin{equation*} \xymatrix{\Sigma \ar[r]^{p_1} \ar[d]_{p_2} & \mathbb{P}^d \\ \mathbb{G}(\ell,d) & }. \end{equation*} Both $p_1$ and $p_2$ are proper and smooth, hence in particular for every closed $X \subset \mathbb{P}^d$ we have that $p_2(p_1^{-1}(X))$ is closed in $\mathbb{G}(\ell,d)$. The fiber of $p_1$ over a point $\mu \in \mathbb{P}^d$ is isomorphic to $\mathbb{G}(\ell-1,d-1)$. The fiber of $p_2$ over $V \in \mathbb{G}(\ell,d)$ is isomorphic to $V$ itself. Recall that the dimension of $\mathbb{G}(\ell,d)$ is $g_{\ell} = \ell(d-\ell)$. Given an irreducible subvariety $X \subset \mathbb{P}^d$ of dimension $k$ and degree $n$, we know that a generic $d-k-1$-plane does not intersect $X$. Let $\ell = d-k-1$ and $Y = p_2(p_1^{-1}(X))) \subset \mathbb{G}(d-k-1,d)$. Furthermore, since generically a $d-k-1$-plane in $\mathbb{P}^d$ that intersects $X$ does so at a single point, we get that $p_2$ is birational on an open dense subset of $p_1^{-1}(X)$. Since the map $p_1$ is smooth it is in particular flat and of relative dimension $g_{d-k-1} - k -1$. Hence we get a map: \begin{equation*} p_1^* \colon A_k(\mathbb{P}^d) \to A_{g_{d-k-1}-1}(\Sigma). \end{equation*} Since $p_2$ is birational on $Y$, $Y$ is a hypersurface in $\mathbb{G}(d-k-1,d)$. Furthermore, since $[X] = n[L]$, where $L$ is a $k$-plane in $\mathbb{P}^d$, we get that: \begin{equation*} [Y] = p_{2*} p_1^*([X]) = n p_{2*} p_1^*(L) = n \sigma_1. \end{equation*} Here $\sigma_1$ is the first Chern class of the universal quotient bundle on the Grassmannian (one can say that $\sigma_1$ is dual to the rational equivalence class of the intersection of the Grassmannian with a hyperplane in the ambient space of the Pl\"{u}cker embedding). Furthermore, $\sigma_1$ generates $A^1(\mathbb{G}(d-k-1,d)) \cong \operatorname{Pic}(\mathbb{G}(d-k-1,d))$ (see \cite[Ch.14.6-7]{Ful98}). Since the Grassmannian is non-singular we know that $A^1(\mathbb{G}(d-k-1,d)) \cong A_{g_{d-k-1}-1}(\mathbb{G}(d-k-1,d))$. Hence the degree of $Y$ equals the degree of $X$. We summarize this discussion in the following well known lemma (see for example \cite{ChvdW37} and \cite[Prop.\ 2.2]{GKZ08}): \begin{lem} \label{lem:chow-degree} The hypersurface $Y \subset \mathbb{G}(d-k-1,d)$ corresponding to an irreducible subvariety $X \subset \mathbb{P}^d$ of dimension $k$ under the incidence correspondence is of the same degree as $X$. \end{lem} For $u \in {\mathbb C}^{d+1}$ linearly independent from $V$ we denote $\gamma(V,i,u) = \gamma \wedge v_0 \wedge \ldots v_{i-1} \wedge u \wedge v_{i+1} \ldots \wedge v_{d-k-1}$. The following Lemma is a generalization of \cite[Eq.\ 2.24-25]{BV-ZPT}. \begin{lem} \label{lem:sol-space} Let $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ be non-degenerate. Let $V \subset \mathbb{P}^d$ be a $d-k-1$-plane, such that $\gamma(V)$ is invertible. Let $u \in {\mathbb C}^{d+1}$ be linearly independent from $V$. Then the intersection points of $U$, the $d-k$-plane spanned by $V$ and $u$, with $D(\gamma)$ are in one-to-one correspondence with a subset of the joint eigenvalues of the matrices $\gamma(V)^{-1} \gamma(V,u,i)$, for $i= 0,\ldots, d-k-1$. 
Furthermore, the fibers of the kernel sheaf at these points are contained in the corresponding joint eigenspaces and thus are linearly independent as subspaces of ${\mathbb C}^n$. \end{lem} \begin{proof} By Lemma \ref{lem:inclusion}, $V$ does not intersect $D(\gamma)$. However $U$ intersects $D(\gamma)$ in a finite number of points unless $\dim D(\gamma) < k$. Every point in $U \cap D(\gamma)$ is of the form $u + \sum_{j=0}^{d-k-1} t_j v_j$. According to the definition of $D(\gamma)$, there is a vector $w \in {\mathbb C}^n$, such that: \begin{equation*} \gamma \wedge \left( u + \sum_{j=0}^{d-k-1} t_j v_j \right) w = 0. \end{equation*} Taking the exterior product with $v_0 \wedge \ldots \widehat{v}_i \ldots \wedge v_{d-k-1}$, for some $0 \leq i \leq d-k-1$, we get: \begin{equation*} \left( \gamma(V,u,i) - t_i \gamma(V) \right) w = 0. \end{equation*} Hence the stalk of the kernel sheaf at each point in the intersection is a subspace of the joint eigenspace of $\gamma(V)^{-1} \gamma(V,u,i)$. We conclude that for distinct points the stalks are linearly independent as subspaces of ${\mathbb C}^n$. \end{proof} \begin{cor} \label{cor:degree-bound} Assume that $\gamma$ is non-degenerate then $\deg(\gamma) \leq n$. \end{cor} \begin{proof} The degree of $\gamma$ is independent of irreducible components of $D(\gamma)$ that are of dimension less than $k$. Hence we may assume that $D(\gamma)$ is of pure-dimension $k$. Let $V$ be such that the $\gamma(V)$ is invertible. For every generic $d-k$-plane through $V$ we have that each irreducible component, $D_j$, is intersected at $\deg(D_j)$ distinct points. Now the dimension of a generic fiber is $n_j$. Applying Lemma \ref{lem:sol-space} we get that the sum of the spaces is direct. Therefore the dimension of the space is $\sum_j n_j \deg(D_j) = \deg(\gamma)$. Since this is a subspace of ${\mathbb C}^n$ we get that $\deg(\gamma) \leq n$. \end{proof} \begin{rem} Note that if $\gamma$ is not reasonable, then $\deg(\gamma) = 0$. \end{rem} \begin{definition} Given a tensor $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$, we say that $\gamma$ is very reasonable{} if $\deg(\gamma) = n$. \end{definition} \begin{prop} \label{prop:pure_dim} If a tensor $\gamma$ is very reasonable{}, then $D(\gamma)$ is of pure dimension $k$. \end{prop} \begin{proof} By definition $\deg(\gamma) = \int_X [Z(\gamma)][L_{d-k}] = n$. Now if $D(\gamma) = D_1 \cup \ldots \cup D_r$ is the decomposition into irreducible components then $Z(\gamma) = \sum_{j=1}^r n_j D_j$. It suffices to show that if $D(\gamma)$ has an irreducible component $D_{j_0}$ of dimension less than $k$, then $\deg(\gamma) < n$. Fix a point $\mu_0 \in D_{j_0}$ that is not on any other component of $D(\gamma)$. Every $d-k$-plane through $\mu_0$ will be spanned by $\mu_0$ and some $d-k-1$-dimensional plane $V$. Since generically $\gamma(V)$ is invertible, we know that for a generic $d-k$-plane through $\mu_0$ the kernel spaces on the components of dimension $k$ can not span all of ${\mathbb C}^n$. Since the sum of their dimensions is greater or equal to $\deg \gamma$, we conclude that $\deg \gamma < n$. \end{proof} Recall that $\gamma(V)$ is a matrix of linear forms on the Grassmannian $\mathbb{G}(d-k-1,d)$. One can consider $\gamma(V)$ as a map of vector bundles ${\mathcal O}_{\mathbb{G}(d-k-1,d)}(-1)^n \to {\mathcal O}_{\mathbb{G}(d-k-1,d)}^n$. 
Let $W \in Z_*(\mathbb{G}(d-k-1,d))$ be the cycle of zeroes of the section $\det \gamma(V)$ of ${\mathcal O}_{\mathbb{G}(d-k-1,d)}(n)$, i.e., $W = \sum_{j=1}^r n_j W_j$, where each $W_j$ is a hypersurface and the $n_j$ are the orders of vanishing of $\det \gamma(V)$ along $W_j$. Let us denote by $|W|$ the support of $W$, namely $|W| = \cup_{j=1}^r W_j$. Using \eqref{eq:gamma_v_coord} we can write $\det(\gamma(V))$ as a degree $n$ homogeneous polynomial in the coordinate ring of the Grassmannian in its Pl\"{u}cker embedding. Factoring this polynomial into irreducible polynomials, each $W_j$ corresponds to an irreducible factor and $n_j$ to the multiplicity with which it appears in $\det(\gamma(V))$. Let us recall the definition of a $1$-dimensional Schubert cycle on the Grassmannian. Fix some complete flag $0 \subset U_1 \subset U_2 \subset \ldots \subset U_{d+1} = {\mathbb C}^{d+1}$. The Schubert cycle $L$ is given by: \begin{equation} \label{eq:schubert} L = \left\{ V \in \mathbb{G}(\ell,d) \mid U_{\ell} \subset V \subset U_{\ell+2}\right\}. \end{equation} The next lemma is the analogue of Lemma \ref{lem:sol-space} for degeneracy loci on the Grassmannian. \begin{lem} \label{lem:hypersurface-components} Let $W$ and $|W|$ be the degeneracy locus of $\xymatrix{{\mathcal O}(-1)^n \ar[r]^T & {\mathcal O}^n}$ and its support. Denote by $L \subset \mathbb{G}(\ell,d)$ a $1$-dimensional Schubert cycle associated to some flag. Then the kernel spaces of $T$ at each of the points in $L \cap |W|$ are linearly independent and generically the intersection of $L$ and $W$ has $n$ points counting multiplicities. \end{lem} \begin{proof} The class of the $1$-dimensional Schubert cycle $[L]$ generates $A^{g_{\ell}-1}(\mathbb{G}(\ell,d))$, hence $\int_{\mathbb{G}(\ell,d)}[W][L] = n$, so generically the intersection has $n$ points counting multiplicities. Now let $v_1,\ldots,v_{\ell+2}$ be a basis of $U_{\ell+2}$ such that the first $\ell$ vectors are a basis for $U_{\ell}$. If $U_{\ell} \subset V \subset U_{\ell+2}$, then the Pl\"{u}cker coordinates of $V$ in $\mathbb{G}(\ell,d)$ are $v_1 \wedge \ldots \wedge v_{\ell} \wedge (t v_{\ell+1} + s v_{\ell+2})$, where $[t:s] \in \mathbb{P}^1$. We may assume that $T(v_1 \wedge \ldots \wedge v_{\ell+1})$ is invertible. Hence passing to the open subset where $s = 1$, we see that: \begin{equation*} \det(T(V)) = 0 \iff \det \left(t I + T(v_1 \wedge \ldots \wedge v_{\ell+1})^{-1} T(v_1 \wedge \ldots \wedge v_{\ell} \wedge v_{\ell+2})\right) = 0. \end{equation*} The multiplicity of each intersection point is the order of vanishing of this determinant on $L$. The kernels are eigenspaces of a single matrix associated to distinct eigenvalues and hence are linearly independent. \end{proof} Now we can give a description of very reasonable{} tensors both geometrically and algebraically: \begin{thm} \label{thm:char1} Let $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ be non-degenerate. Let $Y = p_2(p_1^{-1}(D(\gamma)))$, let $\alpha = p_{2*}(p_1^*(Z(\gamma))) \in Z_*(\mathbb{G}(d-k-1,d))$, and let $W$ and $|W|$ be the degeneracy locus of $\gamma(V)$ and its support. Then the following conditions are equivalent: \begin{itemize} \item[(a)] The tensor $\gamma$ is very reasonable{}; \item[(b)] The variety $Y$ is a hypersurface and furthermore $\alpha = W$ and $Y = |W|$; \end{itemize} \end{thm} \begin{proof} $(a) \Rightarrow (b)$: By \cite[Ex.\ 11.18]{Har95} if $X \subset \mathbb{P}^d$ is irreducible, so is $p_2(p_1^{-1}(X))$.
So if $D(\gamma) = D_1 \cup \ldots \cup D_r$ is the decomposition into irreducible components and $Y_j = p_2(p_1^{-1}(D_j))$, then $Y = Y_1 \cup \ldots \cup Y_r$ is the decomposition of $Y$ into irreducible components. By Lemma \ref{lem:inclusion} $Y \subset |W|$ and by Proposition \ref{prop:pure_dim} they are of the same dimension. Hence we can conclude that the irreducible components of $Y$ are a subset of the irreducible components of $|W|$. Now $W$ is the degeneracy locus of a map of vector bundles. Take a line as in Lemma \ref{lem:hypersurface-components}; its intersection with $W$ will yield a set of linearly independent kernel subspaces of ${\mathbb C}^n$. Note that for a point $\mu \in D(\gamma)$, if $(\gamma\wedge \mu) u = 0$, then $\gamma(V) u = 0$ for every $d-k-1$-plane $V$ through $\mu$. Hence $\dim \ker \gamma(V) \geq n_j$, for every $V \in Y_j$. However, $\sum_j n_j \deg Y_j = n$, and thus $|W|$ has no other components; furthermore, $W = \sum_{j=1}^r n_j Y_j$. $(b) \Rightarrow (a)$: This is immediate since the degree of $W$ is $n$ and $\deg \gamma = \deg \alpha$. \end{proof} The following corollary is immediate from the proof. \begin{cor} \label{cor:vr_kernel_fibers} Assume $\gamma$ is very reasonable{}. Let ${\mathcal K}$ be the kernel sheaf of $\gamma$ on $D(\gamma)$ and let ${\mathcal K}^{\prime}$ be the kernel sheaf on $|W|$. Then the fibers of $p_{2*}p_1^{*} {\mathcal K}$ and ${\mathcal K}^{\prime}$ agree generically. \end{cor} \begin{cor} \label{cor:vr_necessary} Assume that $\gamma$ is very reasonable{}. Let $V$ be a $d-k-1$-plane that does not intersect $D(\gamma)$ and let $U$ be a $d-k$-plane through $V$ that intersects $D(\gamma)$ transversely. Then for every $u\in U$ linearly independent from $V$ the matrices $A_j = \gamma(V)^{-1} \gamma(V,u,j)$, for $j=0,\ldots,d-k-1$, commute and are semi-simple. \end{cor} \begin{proof} By Theorem \ref{thm:char1} we have that $\gamma(V)$ is invertible. Furthermore, by Lemma \ref{lem:sol-space} we know that for each $\mu \in U \cap D(\gamma)$, the fiber of the kernel sheaf $E_{\mu}$ is a subset of a joint eigenspace of the $A_j$. Again by Theorem \ref{thm:char1} we know that the $E_{\mu}$ span ${\mathbb C}^n$. We conclude that the $A_j$ commute and are semi-simple. \end{proof} To get a sufficient condition we will consider a non-degenerate tensor $\gamma$ and a $d-k-1$-plane $V$, such that $\gamma(V)$ is invertible. Let us assume that $V$ is spanned by $e_{k+1},\ldots,e_d$ and complete these vectors to a basis of ${\mathbb C}^{d+1}$. Recall that a point $\mu = \sum_{j=0}^d z_j e_j$ lies in $D(\gamma)$ if and only if there exists a non-zero vector $w \in {\mathbb C}^n$, such that for every $J \subset \{0,\ldots,d\}$ of cardinality $k+2$ we have: \[ \sum_{j \in J} (-1)^{\sigma(J,j)} z_j \gamma_{J \setminus \{j\}} w = 0. \] Note that $\gamma(V)$ is precisely $\gamma_{I_0}$, where $I_0 = \{0,\ldots,k\}$. Hence we get the following equation for every $\ell = k+1,\ldots,d$: \[ z_{\ell} w = \sum_{j=0}^k (-1)^{\sigma_j} z_j \gamma_{I_0}^{-1} \gamma_{I_0 \setminus \{j\} \cup \{\ell\}} w. \] In fact, if $\gamma$ is very reasonable{} this is another way to obtain the result of Corollary \ref{cor:vr_necessary}. Now let $I \subset \{0,\ldots,d\}$ be of cardinality $k+1$, such that $|I \cap I_0| \leq k-1$, and let $p \in I_0 \setminus (I \cap I_0)$.
Then we can take $J = I \cup \{p\}$ and get the equation: \[ \sum_{j \in J \cap I_0} (-1)^{\sigma(J,j)} z_j \gamma_{J \setminus \{j\}} w + \sum_{j \in J \setminus (J \cap I_0)} \sum_{\ell = 0}^k (-1)^{\sigma(J,j) + \sigma_{\ell}} z_{\ell} \gamma_{J \setminus \{j\}} \gamma_{I_0}^{-1} \gamma_{I_0 \setminus \{\ell\} \cup \{j\}} w = 0. \] The coefficient of $z_p$ is: \[ \left( \gamma_I + \sum_{j \in J \setminus (J \cap I_0)} (-1)^{\sigma(J,j) + \sigma_{\ell} + \sigma(J,p)} \gamma_{J \setminus \{j\}} \gamma_{I_0}^{-1} \gamma_{I_0 \setminus \{p\} \cup \{j\}} \right) w. \] Note that for every $j$ in the sum above we have that $|(J \setminus \{j\}) \cap I_0| = |I \cap I_0| + 1$. So we can express them as well using the same formula. Furthermore, if $\gamma$ is very reasonable{}, then the variables $z_0,\ldots,z_k$ are free and for every choice of those variables we have a basis for ${\mathbb C}^n$ formed by the joint eigenvectors of the corresponding pencils. Hence, if we take $z_p$ non-zero and others $0$, we'll get that: \[ \gamma_I = \sum_{j \in J \setminus (J \cap I_0)} (-1)^{\sigma(J,j) + \sigma_{\ell} + \sigma(J,p)} \gamma_{J \setminus \{j\}} \gamma_{I_0}^{-1} \gamma_{I_0 \setminus \{p\} \cup \{j\}}. \] It is not difficult to check using induction and the commutation conditions described in Corollary \ref{cor:vr_necessary} that in fact this formula is independent of the choice of $p$. On the other hand it is immediate that if the commutation conditions hold, the matrices described in Corollary \ref{cor:vr_necessary} are semi-simple and the above equations are satisfied, then $\gamma$ is very reasonable{}. \section{Hyperbolicity and the Grassmannian} \label{sec:hyper} Recall that in the classical case a real hypersurface $X \subset \mathbb{P}^d$ is called hyperbolic with respect to a real point $a \in \mathbb{P}^d$ if for every real line $L$ that passes through $a$, we have that $X \cap L \subset X({\mathbb R})$. We will generalize this definition to the case when $\operatorname{codim} X > 1$ as follows: \begin{definition} Let $X \subset \mathbb{P}^d$ be a real subvariety of codimension $\ell$. We'll say that $X$ is hyperbolic with respect to a real linear $\ell -1$-dimensional subspace $V \subset \mathbb{P}^d$, if $V \cap X = \emptyset$ and for every $\ell$-dimensional subspace, $U$, that contains $V$ we have that $X \cap U \subset X({\mathbb R})$. \end{definition} \begin{prop} \label{prop:dividing_general} Assume $X \subset \mathbb{P}^d$ is a real subvariety of dimension $k$ and $V$ a real $d-k-1$-plane, that does not intersect $X$. Then $X$ is hyperbolic with respect to $V$ if and only if the projection $f$ from $V$ onto $V^{\perp} \cong \mathbb{P}^k$ restricted to $X$ has the following property: $(\star)$ $f(x) \in \mathbb{P}^k({\mathbb R})$ if and only if $x \in X({\mathbb R})$. \end{prop} \begin{proof} Let $V^{\perp}$ be the real $k$-plane associated to the orthogonal complement of $V$. It is immediate that every $d-k$-plane through $V$ intersects $V^{\perp}$ at a single point. Furthermore, if the $d-k$-plane is real then so is its point of intersection with $V^{\perp}$. Consider now the projection of $X$ onto $V^{\perp}$ from $V$, namely for each point $x \in X$, we consider the $d-k$-plane $U_x$ spanned by $x$ and $V$ and map $x$ to the point of intersection of $U_x$ and $V^{\perp}$. Clearly, if $x \in X({\mathbb R})$, then $f(x) \in \mathbb{P}^k({\mathbb R})$, since $U_x$ is real in that case. 
Next note that for every point $y \in V^{\perp}$ the fiber over $y$ is precisely the set of points of intersection of the $d-k$-plane $U_y$ spanned by $V$ and $y$ with $X$; hence the map $f$ has property $(\star)$ if and only if $X$ is hyperbolic with respect to $V$. \end{proof} Another way to connect the notion of hyperbolicity introduced here and the classical one is similar to the above construction. \begin{prop} \label{prop:projection} Assume $X \subset \mathbb{P}^d$ is a real subvariety of dimension $k$, and $V$ a real $d-k-1$-plane that does not intersect $X$. Take $V_0 \subset V$ of codimension $m$ in $V$ and project $\mathbb{P}^d$ onto $V_0^{\perp} \cong \mathbb{P}^{k+m}$ as above. Denote the projection by $\pi_{V_0}$. Note that $\pi_{V_0}(V)$ is an $m-1$-plane and $\pi_{V_0}(X)$ is a subvariety of codimension $m$. Then $X$ is hyperbolic with respect to $V$ if and only if $\pi_{V_0}(X)$ is hyperbolic with respect to $\pi_{V_0}(V)$ for every $V_0 \subset V$ of codimension $m$. \end{prop} \begin{proof} The proof is the same as above. \end{proof} \begin{cor} \label{cor:classic} Let $X, V \subset \mathbb{P}^d$ be as in Proposition \ref{prop:projection} and let $V_0$ be of codimension $1$ in $V$. Denote by $\pi$ the projection onto $V_0^{\perp}$; then $\pi(X)$ is a real hypersurface hyperbolic with respect to the point $\pi(V)$. \end{cor} Let us from now on write $X$ for a real subvariety of $\mathbb{P}^d$ of pure dimension $k$. Set $\ell = d - k$ and let $Y$ be, as in the previous section, the real hypersurface in $\mathbb{G}(\ell -1,d)$ that corresponds to $X$ via the incidence correspondence. The following proposition is immediate from the definitions: \begin{prop} \label{prop:hyperbolic-equiv} The subvariety $X$ is hyperbolic with respect to $V$ if and only if for every $1$-dimensional real Schubert cycle (as defined in \eqref{eq:schubert}), $L$, through $V$ in $\mathbb{G}(\ell - 1,d)$ we have that $Y \cap L \subset Y({\mathbb R})$. In this case we will say that $Y$ is hyperbolic with respect to $V$. \end{prop} \begin{proof} To see this note that every real $1$-dimensional Schubert cycle through $V$ is defined by a real subspace $V_0 \subset V$ of codimension $1$ and a real $d-k$-plane $U$ containing $V$. The points of intersection of $L$ with $Y$ are precisely the $d-k-1$-planes $V^{\prime}$ such that $V_0 \subset V^{\prime} \subset U$ and $V^{\prime} \cap X \neq \emptyset$. Since $V_0$ does not intersect $X$, each such $V^{\prime}$ is spanned by $V_0$ and a point of $U \cap X$. Hence all of the $V^{\prime}$ are real if and only if all of the points of $U \cap X$ are. \end{proof} Consider $\mathbb{G}(\ell-1,d) \subset \mathbb{P}^N$ with Pl\"{u}cker embedding. Denote by $\mathbb{G}^{\dagger} \subset {\mathbb R}^{N+1}$ one of the connected components of the cone over $\mathbb{G}(\ell-1,d)({\mathbb R})$. For every $\ell-2$-plane $V_0 \subset \mathbb{P}^d$ we can define a subset of $\mathbb{G}(\ell-1,d)$: \[ P_{V_0} = \left\{ V^{\prime} \in \mathbb{G}(\ell-1,d)({\mathbb R}) \mid V_0 \subset V^{\prime} \right\}. \] Note that $P_{V_0} \cong \mathbb{P}^{d - \ell +1}({\mathbb R})$ and each point $V^{\prime} \in P_{V_0}$ can be identified uniquely with a point of $V_0^{\perp}$ via the projection from $V_0$. In fact $P_{V_0}$ is a Schubert cycle of the form $\Omega(U_0,\ldots,U_{d-k-1})$, where we fix a basis $v_0,\ldots,v_{d-k-2}$ for $V_0$ and set $U_j = \operatorname{Span}\{v_0,\ldots,v_j\}$, for $j=0,\ldots,d-k-2$, and $U_{d-k-1}= \mathbb{P}^d$.
This means that its cycle class is $(0,1,\ldots,d-k-2,d)$ and the dual cohomology class is $(k+1,\ldots,k+1,0)$. \begin{definition} \label{def:sliceconvex} Let $E \subset \mathbb{G}(\ell-1,d)({\mathbb R})$. If for a point $V \in E$ and every $V_0 \subset V$ of codimension $1$, we have that the piece of the cone over $E \cap P_{V_0}$ in $\mathbb{G}^{\dagger}$ is convex, then we will say that $E$ is slice-convex with respect to $V$. If $E$ is slice-convex with respect to every $V \in E$, then we will simply say that $E$ is slice convex. \end{definition} \begin{rem} Consider $E \cap P_{V_0}$ as a subset of $\mathbb{P}^{d-\ell+1}$ and look at the cone over it in ${\mathbb R}^{d-\ell+2}$. This cone is a union of a pointed convex cone and its negative if and only if the condition of Definition \ref{def:sliceconvex} holds. \end{rem} Let $X \subset \mathbb{P}^d$ be a subvariety of codimension $\ell$ and let $Y \subset \mathbb{G}(\ell-1,d)$ be its associated hypersurface. Fix an $\ell-2$-plane $V_0 \subset \mathbb{P}^d$ that does not intersect $X$ and denote by $\pi_{V_0}$ the projection from $V_0$ onto $V_0^{\perp}$. Note that every point $V \in P_{V_0} \cap Y$ is an $\ell-1$-plane that intersects $X$ and is spanned by $V_0$ and one of the points in the intersection. On the other hand, $\pi_{V_0}(V)$ is the point of $V_0^{\perp}$ given by $V \cap V_0^{\perp}$. Since $V$ intersects $X$ we have that $\pi_{V_0}(V) \in \pi_{V_0}(X)$. The converse is also true by the definition of the projection. Thus we can identify $P_{V_0} \cap Y$ with $\pi_{V_0}(X)$. Note that this discussion ties together Propositions \ref{prop:dividing_general} and \ref{prop:hyperbolic-equiv}. \begin{lem} \label{lem:sliceconvex_V} Let $X \subset \mathbb{P}^d$ be a real variety of dimension $k$ hyperbolic with respect to some real $d-k-1$-plane $V$. Let $Y \subset \mathbb{G}(d-k-1,d)$ be the associated hypersurface and let $C(V)$ be the connected component of $V$ in $\mathbb{G}(\ell-1,d)({\mathbb R}) \setminus Y({\mathbb R})$. Then $C(V)$ is slice-convex with respect to $V$ and furthermore $X$ is hyperbolic with respect to every $V^{\prime} \in C(V) \cap P_{V_0}$ for every $V_0 \subset V$ of codimension $1$. \end{lem} \begin{proof} Let $V_0 \subset V$ be a real subspace of codimension $1$ in $V$. The projection $\pi_0$ from $V_0$ maps $X$ to a hypersurface hyperbolic with respect to the point $\pi_0(V)$. For every other real $d-k-1$-plane $V^{\prime}$, such that $V \cap V^{\prime} = V_0$, $X$ is hyperbolic with respect to $V^{\prime}$ if and only if $\pi_0(X)$ is hyperbolic with respect to $\pi_0(V^{\prime})$. By \cite{Ga59} we know that the cone over the hyperbolicity set of $\pi_0(X)$ consists of two convex cones. \end{proof} For the proof of the following theorem we will fix a metric $d$ on $\mathbb{G}(\ell-1,d)({\mathbb R})$ that induces the classical topology on it. There are several ways to do this; for example, we can embed $\mathbb{G}(\ell-1,d)({\mathbb R})$ in $M_{d+1}({\mathbb R})$ by sending a space to the orthogonal projection onto it with respect to the standard scalar product on ${\mathbb R}^{d+1}$. Then the distance between two spaces is the norm of the difference of the associated projections. The important feature of the classical topology on the Grassmannian, and hence of the metric, is that an open neighborhood of a space $V$ spanned by $v_0,\ldots,v_{\ell-1}$ consists of spaces $V^{\prime}$, spanned by $v^{\prime}_0,\ldots,v^{\prime}_{\ell-1}$, such that each $v^{\prime}_j$ is in a neighborhood of the respective $v_j$.
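As a side remark, the projection metric just described is straightforward to evaluate in coordinates. The following minimal sketch is an illustration only, assuming real spanning matrices of full column rank; the function names are ad hoc and not taken from any source.
\begin{verbatim}
import numpy as np

def projection(B):
    # orthogonal projection onto the column span of B
    # (the columns of B are assumed to be linearly independent)
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

def grassmann_dist(B1, B2):
    # distance between the two spans: operator norm of the difference
    # of the associated orthogonal projections
    return np.linalg.norm(projection(B1) - projection(B2), 2)

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 2))              # a 2-dimensional subspace of R^4
Bp = B + 1e-3 * rng.standard_normal((4, 2))  # a small perturbation of its basis
print(grassmann_dist(B, Bp))                 # small, of the order of the perturbation
print(grassmann_dist(B, rng.standard_normal((4, 2))))  # O(1) for an unrelated subspace
\end{verbatim}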
We will formulate this more precisely in the following lemma: \begin{lem} \label{lem:metric} Let $V, V^{\prime} \in \mathbb{G}(\ell-1,d)({\mathbb R})$ if $D(V,V^{\prime}) < \epsilon$, then we can choose orthonormal bases $v_0,\ldots,v_{\ell-1}$ and $v_0^{\prime},\ldots,v_{\ell-1}^{\prime}$ for $V$ and $V^{\prime}$, respectively, such that $\|v_j - v_j^{\prime}\| < \sqrt{2} \epsilon$, for every $j = 0, \ldots, \ell -1$. Conversely take $V, V^{\prime} \in \mathbb{G}(\ell - 1,d)({\mathbb R})$, such that $V \cap V^{\prime}$ has an orthonormal basis $v_0,\ldots,v_r$ ($r = -1$ if the intersection is trivial) and complete it to orthonormal bases $v_0,\ldots,v_{\ell-1}$ and $v_0,\ldots,v_r,v_r^{\prime},\ldots,v_{\ell-1}^{\prime}$ for $V$ and $V^{\prime}$, respectively, then if $\| v_j - v_j^{\prime}\| < \delta$ for $j = r,\ldots,\ell-1$, then $d(V,V^{\prime}) < 2 (\ell - r - 1) \delta$. \end{lem} \begin{proof} For the first part fix orthonormal bases $v_0,\ldots,v_{\ell-1}$ and $v_0^{\prime},\ldots,v_{\ell-1}^{\prime}$ for $V$ and $V^{\prime}$, respectively. Note that every $v^{\prime} \in V^{\prime}$ we can write uniquely as $v^{\prime} = v + u$, where $V \in V$ and $u \in V^{\perp}$. Take a unit vector $v^{\prime} \in V^{\prime}$, then by assumption $\|u\| = \|P v^{\prime} - P^{\prime} v^{\prime}\| < \epsilon$, where $P$ and $P^{\prime}$ are orthogonal projections onto $V$ and $V^{\prime}$,respectively. Now write $v_0^{\prime} = \sum_j \alpha_j v_j + u$ and let us assume that $\alpha_0 > 0$ (otherwise replace $v_0$ with $-v_0$). Then: \[ \langle v_0^{\prime} - v_0 ,v_0^{\prime} - v_0 \rangle = 2 ( 1 - \alpha_0 ). \] On the other hand $\|u\| < \epsilon$. Now using the fact that $v_0$ is normal we get: \[ 1 = \|v_0\|^2 = \sum_j \alpha_j^2 + \|u\|^2 < \alpha_0^2 + \epsilon^2. \] Hence we get that $ 1 - \alpha_0^2 < \epsilon^2$. Next note that $1 - \alpha_0^2 \geq 1 - \alpha_0$, since $0 \leq \alpha_0 \leq 1$. Thus $1 - \alpha_0 < \epsilon^2$ and $\|v_0^{\prime} - v_0\| < \sqrt{2} \epsilon$. Similarly for every other index. For the second part take any unit vector $u$ and write $P u = \sum_{j=0}^{\ell-1} \langle u, v_j \rangle v_j$ and similarly $P^{\prime} u = \sum_{j=0}^{r} \langle u , v_j \rangle v_j + \sum_{j=r+1}^{\ell-1} \langle u, v_j^{\prime} \rangle v_j^{\prime}$. Write $\alpha_j = \langle u,v_j \rangle$ and $\alpha^{\prime} = \langle u, v_j^{\prime} \rangle$. Then we have: \[ \| Pu - P^{\prime} u\| \leq \sum_{j=r+1}^{\ell-1} \|\alpha_j v_j - \alpha_j^{\prime} v_j^{\prime} \|. \] Next note that using Cauchy-Schwartz and the fact that $u$ is a unit vector we get that $|\alpha_j - \alpha_j^{\prime}| < \delta$. Therefore: \[ \|\alpha_j v_j - \alpha_j^{\prime} v_j^{\prime} \| = \| (\alpha - \alpha^{\prime}) v_j + \alpha^{\prime} (v_j - v_j^{\prime}) \| < \delta + |\alpha^{\prime}| \delta < 2 \delta. \] Hence $\| P - P^{\prime}\| < 2 (\ell - r -1) \delta$. \end{proof} For simplicity, if $X$ is hyperbolic with respect to $V$, we will say that $V$ witnesses the hyperbolicity of $X$ or shortly that $V$ is a witness. \begin{thm} \label{thm:connected_component} Assume that $X$ is hyperbolic with respect to $V$, then $X$ is hyperbolic with respect to every $V^{\prime} \in C(V)$. \end{thm} \begin{proof} We will prove the claim in two steps, first we'll show that the set of all witnesses is open. Then we will use a metric argument to show that in fact every $V^{\prime} \in C(V)$ is a witness. 
For the first argument take a ball with radius $\epsilon > 0$ around $V$ that is contained in $C(V)$ and take a point $V^{\prime} \in C(V)$ such that $d(V,V^{\prime}) < \epsilon/(2 \sqrt{2}\,\ell)$. Fix orthonormal bases $v_0,\ldots, v_{\ell-1}$ and $v^{\prime}_0,\ldots,v^{\prime}_{\ell-1}$ for $V$ and $V^{\prime}$, respectively. By Lemma \ref{lem:metric} we know that $\|v_j - v_j^{\prime}\| < \epsilon/(2 \ell)$, for $j=0,\ldots,\ell-1$. Now let $V_0$ be the space spanned by $v_1,\ldots,v_{\ell-1}$ and $W_0$ the space spanned by $V_0$ and $v^{\prime}_0$. Applying Lemma \ref{lem:metric} again we get that $d(V,W_0) < \epsilon/\ell$, and clearly $W_0 \in P_{V_0} \cap C(V)$; hence, by Lemma \ref{lem:sliceconvex_V}, $W_0$ is a witness. Now we proceed inductively, each time replacing a single basis vector; the distance between any two consecutive spaces is less than $\epsilon/\ell$. Therefore by the triangle inequality they are all contained in the ball with radius $\epsilon$ around $V$. This shows that the set of witnesses contains the ball with radius $\epsilon/(2 \sqrt{2}\,\ell)$ around $V$ and thus it is open. Now take any $V^{\prime} \in C(V)$; since $C(V)$ is an open connected subset of the Grassmannian, it is path connected, so we can connect $V^{\prime}$ to $V$ by a simple path $p \colon [0,1] \to \mathbb{G}(\ell-1,d)({\mathbb R})$, with $p(0) = V$ and $p(1) = V^{\prime}$, that is contained in $C(V)$. Let $\epsilon > 0$ be the distance from the path to the associated hypersurface $Y({\mathbb R})$; it is positive since both sets are compact and disjoint. Since the path $p$ is continuous on a compact set it is uniformly continuous; hence there exists $\delta > 0$ such that if $|t -s| < \delta$ then $d(p(t),p(s)) < \epsilon/(2\sqrt{2}\,\ell)$. By the first part $p([0,\delta))$ consists of witnesses; now just cover $[0,1]$ by segments of length $\delta$ and apply the first part repeatedly to see that $p(1) = V^{\prime}$ is a witness. \end{proof} \begin{cor} \label{cor:sliceconvex} The set $C(V)$ is slice convex. \end{cor} \begin{proof} Apply Lemma \ref{lem:sliceconvex_V}, together with Theorem \ref{thm:connected_component}, to every $V^{\prime} \in C(V)$. \end{proof} There is a connection between hyperbolicity and determinantal representations encoded in the following proposition. \begin{prop} \label{prop:definite-hyper} Assume $X$ admits a very reasonable{} Hermitian Livsic-type determinantal representation, $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$. Assume, furthermore, that for some real $d-k-1$-plane $V$ we have that $\gamma(V)$ is positive definite. Then $X$ is hyperbolic with respect to $V$. \end{prop} \begin{proof} Let $U$ be a real $\ell$-plane containing $V$. Fix a basis $v_0,\ldots,v_{\ell-1}$ for $V$ and add a vector $u$ to complete it to a basis of $U$. Since $\gamma(V)$ is positive definite, it is invertible and therefore by Lemma \ref{lem:inclusion} we know that $V \cap X = \emptyset$. Let $u + \sum_{j=0}^{\ell-1} t_j v_j$ be a point of intersection of $U$ and $X$. By definition we have a $w \in {\mathbb C}^n$, such that: \[ \left(\gamma \wedge \left( u + \sum_{j=0}^{\ell-1} t_j v_j \right) \right) w = 0. \] Recall that by Lemma \ref{lem:sol-space} each $t_i$ is an eigenvalue of $\gamma(V)^{-1} \gamma(V,u,i)$. Note that $\gamma(V,u,i)$ is Hermitian, since the representation is Hermitian and thus $\gamma(V,u,i)$ is a linear combination with real coefficients of Hermitian matrices. Since $\gamma(V)$ is positive definite, the matrix $\gamma(V)^{-1} \gamma(V,u,i)$ is similar to the Hermitian matrix $\gamma(V)^{-1/2} \gamma(V,u,i) \gamma(V)^{-1/2}$, and thus all its eigenvalues are real. Hence every point of $U \cap X$ is real, that is, $X$ is hyperbolic with respect to $V$. \end{proof} Hyperbolicity of hypersurfaces has been studied extensively since the notion was introduced.
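The mechanism in the proof of Proposition \ref{prop:definite-hyper} can also be checked numerically. The sketch below is illustrative only: the Hermitian pencil, the restriction to the plane-curve case $d=2$, $k=1$, and the sign conventions are assumptions made for the example, and the matrix playing the role of $\gamma(V)$ is simply taken to be positive definite.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 4

def herm(A):
    return (A + A.conj().T) / 2

G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H0 = G @ G.conj().T + np.eye(n)   # positive definite; plays the role of gamma(V)
H1 = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
H2 = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

def pencil(x):
    # Hermitian pencil of the plane curve det(pencil(x)) = 0
    return x[0] * H0 + x[1] * H1 + x[2] * H2

v = np.array([1.0, 0.0, 0.0])     # pencil(v) = H0 is positive definite
for _ in range(5):
    u = rng.standard_normal(3)    # a random real direction, giving a real line through v
    # parameters t of the intersection points of the line {u + t v} with the curve
    ts = np.linalg.eigvals(-np.linalg.solve(pencil(v), pencil(u)))
    print(np.max(np.abs(ts.imag)))   # ~ 0: every intersection point is real
\end{verbatim}
Replacing $H_0$ by an indefinite Hermitian matrix typically produces non-real intersection parameters, so the definiteness assumption is essential in this argument.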
A few analogous questions arise in our setting. Consider the Grassmannian embedded in $\mathbb{P}^N$ via the Pl\"{u}cker embedding. This embedding is projectively normal and since $\operatorname{Pic}(\mathbb{G}(\ell-1,d)) \cong {\mathbb Z}$ we obtain that every hypersurface in the Grassmannian is obtained via an intersection with a hypersurface in $\mathbb{P}^N$. In a series of works H. Buseman discussed various notions of convexity of a subset of the cone over Grassmannian (in the Pl\"{u}cker embedding), e.g. \cite{Bus61}. In particular he calls extendably convex those sets in the cone that are intersections with convex sets in the ambient space. \begin{quest} Let $X$ be an irreducible real variety in $\mathbb{P}^d$ with $\operatorname{codim} X = \ell > 1$ and let $Y \subset \mathbb{G}(\ell-1,d)$ be the hypersurface associated to $X$ via the incidence correspondence. Is it true that the cone over $C(V)$ intersection $\mathbb{G}^{\dagger}$ is an extendably convex set in ${\mathbb R}^{N+1}$? Furthermore is it true that $C(V)$ coincides with the set of all witnesses to the hyperbolicity of $X$? \end{quest} In Section \ref{sec:hyper_curve} we will show that in the case of curves the cone over the set of all witnesses intersection $\mathbb{G}^{\dagger}$ is extendably convex. However, we do not know yet whether this set coincides with $C(V)$ even in that case. \begin{quest}[Generalized Lax Conjecture, cf. \cite{Vppf}] Assume we have a real variety $X \subset \mathbb{P}^d$ of dimension $k$ hyperbolic with respect to a real $d-k-1$-plane $V$, does there exist a real hyperbolic $X^{\prime} \subset \mathbb{P}^d$, such that $X \cup X^{\prime}$ admits a Livsic-type Hermitian determinantal representation $\gamma \in \wedge^{k+1} {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$, such that $\deg X \cup X^{\prime} = n$ and $\gamma(U)$ is definite for a real $d-k-1$-plane $U$ if and only if $U \in C(V)$? \end{quest} In Section \ref{sec:hyper_curve} we will obtain such a (multi)linear matrix inequality representation in the case where $X$ is an irreducible curve, without any auxiliary variety $X^{\prime}$, for the set of all witnesses instead of $C(V)$. \section{Bezoutians of Meromorphic Functions on a Riemann Surface} \label{sec:bezoutian} Let $X$ be a compact Riemann surface of genus $g$. Fix a canonical basis for the homology of $X$, $A_1,\ldots,A_g,B_1,\ldots,B_g$ and fix a normalized basis for holomorphic differentials, $\omega_1,\ldots,\omega_g$. Normalization means that $\int_{A_j} \omega_i = \delta_{ij}$. Set $\Omega$, the $B$-period matrix, given by columns of the form $\begin{pmatrix} \int_{B_j} \omega_1 & \cdots & \int_{B_j} \omega_g \end{pmatrix}^{T}$. Then $J(X) = {\mathbb C}^g/\left({\mathbb Z}^g + \Omega Z^g\right)$ is the Jacobian variety of $X$. Fix a point $p_0 \in X$ and set $\varphi \colon X \to J(X)$ the Abel-Jacobi map, given by: $$ \varphi(p) = \begin{pmatrix} \int_{p_0}^p \omega_1, \cdots, \int_{p_0}^p \omega_g \end{pmatrix}. $$ Extend $\varphi$ linearly to all divisors on $X$. Thus by writing $\varphi({\mathcal L})$ for a line bundle ${\mathcal L}$ on $X$, we mean the image of the corresponding divisor. Fix a line bundle of half-order differentials $\Delta$ on $X$, such that $\varphi(\Delta)= -\kappa$, the Riemann constant. Additionally fix a flat line bundle $\chi$ on $X$, such that $h^0(\chi \otimes \Delta) = 0$. 
Since $\chi$ is flat, the sections of $\chi$ lift to functions on $\tilde{X}$, the universal cover of $X$, that satisfy for every $T \in \pi_1(X)$ and every $\tilde{p} \in \tilde{X}$: \[ f( T \tilde{p}) = a_{\chi}(T) f(\tilde{p}). \] Here $a_{\chi}$ is the constant factor of automorphy associated to $\chi$. In fact a choice of a trivialization of $\chi$ in a neighborhood of a point $p$ is equivalent to a choice of a lift $\tilde{p} \in \tilde{X}$. We can also lift $\varphi$ to a map from $\tilde{X}$ to ${\mathbb C}^g$. Since every $\tilde{p}$ is represented by a point $p \in X$ and a path $c$ connecting $p_0$ to $p$, we set: \[ \varphi(\tilde{p}) = \begin{pmatrix} \int_c \tilde{\omega}_1, \cdots, \int_c \tilde{\omega}_g \end{pmatrix}. \] The differential $\tilde{\omega}_j$ is the pullback of $\omega_j$ to $\tilde{X}$ via the covering map. Let us write $\theta(z)$ for the theta function associated to the lattice ${\mathbb Z}^g + \Omega {\mathbb Z}^g$, where $\Omega$ is the period matrix of $X$, namely: \[ \theta(z) = \sum_{m \in {\mathbb Z}^g} e^{\pi i \langle \Omega m, m \rangle + 2 \pi i \langle z, m \rangle}. \] We will also need the theta function with characteristic, so for $a,b \in {\mathbb R}^g$ we define: \[ \theta{a \brack b}(z) = \sum_{m \in {\mathbb Z}^g} e^{\pi i \langle \Omega( m + a), m+a \rangle} e^{2 \pi i \langle z + b, m + a \rangle}. \] Recall from \cite{BV-ZPF} that there exists a Cauchy kernel $K(\chi,p,q)$, a meromorphic map of line bundles on $X\times X$ with only a simple pole along the diagonal with residue $1$, given by: \begin{equation} K(\chi,p,q) = \frac{\theta{a \brack b}(\varphi(q) - \varphi(p))}{\theta{a \brack b}(0) E_{\Delta}(q,p)}. \end{equation} Here $\varphi(\chi) = b + \Omega a$ and $E_{\Delta}(\cdot,\cdot)$ is the prime form on $X \times X$, with respect to $\Delta$. Pulling back $K(\chi,\cdot,\cdot)$ to $\tilde{X}$, we get a section of the pullback of $\Delta$ satisfying: \[ \frac{K(\chi,T \tilde{p}, R \tilde{q})}{\sqrt{dt}(T \tilde{p}) \sqrt{ds}(R \tilde{q})} = a_{\chi}(T) \frac{K(\chi,\tilde{p},\tilde{q})}{\sqrt{dt}(\tilde{p}) \sqrt{ds}(\tilde{q})} a_{\chi}(R)^{-1}. \] See \cite{AV} for details. Here $t$ and $s$ are local coordinates on $X$ centered at $p$ and $q$, respectively, and $T,R \in \pi_1(X)$. The pullback is holomorphic at $(\tilde{p},\tilde{q})$, as long as $p \neq q$. Let $f$ and $g$ be two meromorphic functions with simple poles. We define a meromorphic section of $\operatorname{Hom}(\pi_2^* \chi, \pi_1^* \chi \otimes \pi_1^* \Delta \otimes \pi_2^* \Delta)$ on $X \times X$: $$ b_{\chi}(f,g)(p,q) = \left( f(p) g(q) - f(q) g(p) \right) K(\chi, p, q). $$ Assume that $p$ is not a pole of either $f$ or $g$ and fix a local coordinate $t$ centered at $p$. Now if $q$ tends to $p$ we get: \begin{multline*} f(p) (g(p) + g^{\prime}(p) t + \ldots) - (f(p) + f^{\prime}(p) t + \ldots) g(p) = \\ (f(p) g^{\prime}(p) - f^{\prime}(p) g(p)) t + \ldots \end{multline*} Since the residue of $K(\chi,\cdot,\cdot)$ along the diagonal is $1$ we get that: $$ b_{\chi}(f,g)(p,q) \to f(p) g^{\prime}(p) - f^{\prime}(p) g(p). $$ Note that this is independent of the choice of the lifts of $p$ and $q$, since when $\tilde{q}$ tends to $\tilde{p}$ the factors of automorphy cancel out in the limit. Now observe that since $K(\chi,\cdot,\cdot)$ is holomorphic off the diagonal, we get that $b_{\chi}(f,g) = 0$ if and only if $f/g$ is constant.
Indeed if we fix $p$ that is neither a pole nor a zero for either $f$ or $g$, we get that for every $q$ in an open set in $X$ we have the equality: $$ \frac{f(p)}{g(p)} = \frac{f(q)}{g(q)}. $$ Note that $b_{\chi}$ is alternating and linear as a function of $f$ and $g$. Hence we have that: $$ b_{\chi}(\alpha_1 f + \beta_1 g, \alpha_2 f + \beta_2 g) = (\alpha_1 \beta_2 - \alpha_2 \beta_1) b_{\chi}(f,g). $$ Given a set of points $S = \{p_1, \ldots,p_m\} \subset X$, we define an effective reduced divisor $D = \sum_{j=1}^m p_j$. Recall that ${\mathcal L}(D)$ is the space of all meromorphic functions $f$ on $X$ satisfying $(f) + D \geq 0$. In other words that is to say that $f$ has at most simple poles on $S$ and is holomorphic on $X \setminus S$. Furthermore, for every point $p_j$ we fix a lift $\tilde{p}_j$ to $\tilde{X}$ and fix a local coordinate $t_j$ centered at $p_j$ and a corresponding local holomorphic frame $\sqrt{dt_j}$ of $\Delta$. Then we define for every point $p \in X \setminus S$ two sections of $\left(\chi \otimes \Delta\right)^{\oplus m}$: \begin{equation*} {\mathbf u}cl(p) = \begin{pmatrix} \frac{K(\chi,p,\tilde{p}_1)}{\sqrt{dt_1}(\tilde{p}_1)} & \cdots & \frac{K(\chi, p, \tilde{p}_m)}{\sqrt{dt_m}(\tilde{p}_m)} \end{pmatrix} \mbox{ and } {\mathbf u}c(p) = \begin{pmatrix} K(\chi,\tilde{p}_1,p)/\sqrt{dt_1}(\tilde{p}_1) \\ \vdots \\ K(\chi,\tilde{p}_m,p)/\sqrt{dt_m}(\tilde{p}_m) \end{pmatrix}. \end{equation*} Note that changing the lift $\tilde{p}_j$ will result in the multiplication of ${\mathbf u}c(p)$ and ${\mathbf u}cl(p)$ by the a diagonal matrix of the constant factors of automorphy. Changing the coordinates will result in multiplication by a diagonal matrix of transition function for $\Delta$ at $p$. \begin{prop} \label{prop:bezoutians_formula} Set $D = \sum_{j=1}^m p_j$ be an effective reduced divisor and let $f,g \in {\mathcal L}(D)$. Then there exists a matrix $B_{\chi,D}(f,g) \in M_m({\mathbb C})$, such that for $p \neq q$: \begin{equation} \label{eq:bezoutian_fundamental} b_{\chi}(f,g)(p,q) = {\mathbf u}cl(p) B_{\chi,D}(f,g) {\mathbf u}c(q) = \sum_{i,j=1}^m b_{ij} \frac{K(\chi,p,\tilde{p}_i)K(\chi,\tilde{p}_j,q)}{\sqrt{dt_j}(\tilde{p}_j)\sqrt{dt_i}(\tilde{p}_i)}. \end{equation} Whereas when $p = q$ is not a pole of either $f$ or $g$, we fix a coordinate $t$ centered at $p$ and get the limit version: \begin{equation} \label{eq:bezoutian_fundamental_lim} {\mathbf u}cl(p) B_{\chi,D}(f,g) {\mathbf u}c(p) = f(p) g^{\prime}(p) - f^{\prime}(p) g(p). \end{equation} Here for every $j$, the $t_j$ are local coordinates centered at $p_j$. The equality is to be understood literally if neither $p$ or $q$ are poles of $f$ or $g$ and as a limit in case at least one of them is a pole. \end{prop} \begin{proof} Let us fix a point $q$ not in $D$. Let $t$ be a local coordinate centered at $q$. The map $\rho \colon X \to X \times X$, defined by $p \mapsto (p,q)$, satisfies $\pi_1 \circ \rho = 1_X$ and $\pi_2 \circ \rho(p) = q$. Hence $\rho^* \pi_1^* {\mathcal F} = {\mathcal F}$ and $\rho^* \pi_2^* {\mathcal F} = {\mathcal F}_q$, for every sheaf ${\mathcal F}$ on $X$. Hence if we divide out by $1/\sqrt{dt}(q)$, we'll get that both sides of \eqref{eq:bezoutian_fundamental} are meromorphic sections of $\chi \otimes \Delta$. Since this line bundle admits no holomorphic sections, except for $0$, it suffices to show that both sections have the same poles and identical principal parts at these poles. 
Clearly the poles of the sections thus obtained are precisely the poles of $f$ and $g$ on the left-hand side and $D$ on the right hand side. If $p_i$ is not a pole of either $f$ or $g$, we'll set $b_{ij} = b_{ji} = 0$, for every $j$. Therefore we may assume that $D$ consists precisely of poles of either $f$ or $g$. Write the Laurent expansion of $f$ and $g$, with respect to $t_j$: $$ f(t_j) = \frac{a_j}{t_j} + b_j + \ldots $$ $$ g(t_j) = \frac{c_j}{t_j} + d_j + \ldots $$ Now we set: $$ b_{ij} = \left\{\begin{array}{cc} (a_i c_j - a_j c_i)\frac{K(\chi,\tilde{p}_i,\tilde{p}_j)}{\sqrt{dt_i}(\tilde{p}_i)\sqrt{dt_j}(\tilde{p}_j)}, & i \neq j \\a_i d_i - b_i c_i , & i=j \end{array} \right. $$ For a fixed $i_0$, we have that the left hand side of \eqref{eq:bezoutian_fundamental} (multiplied by $1/\sqrt{dt}(q)$) has a simple pole with residue $(a_{i_0} g(q) - c_{i_0} f(q))K(\chi,p_{i_0},q)$. The right hand side has also a simple pole with residue $\sum_{j=1}^m b_{i_0 j} K(\chi,p_j,q)$. Both expressions can be considered as maps from $\chi$ to $\Delta$ or in other words, sections of $\chi^{\vee} \otimes \Delta$, the Serre dual of $\chi \otimes \Delta$. By Riemann-Roch we get that $h^0(\chi^{\vee} \otimes \Delta) = 0,$ as well. Hence we can apply similar considerations to those sections. We note that the poles are precisely $S$ and computing the residues we obtain the equality with $b_{i_0 j}$ defined above. To get \eqref{eq:bezoutian_fundamental_lim} we fix a lift $\tilde{p}$ of $p$ and pass to the limit. Next we note that since changing $\tilde{p}$ to $T \tilde{p}$ will result in cancellation, the equality is independent of the choice of the lift. \end{proof} \begin{rem} In particular note that due to cancellation of the constant factors of automorphy appearing when we change the choice of $\tilde{p}_j$, we get that the formula is independent of the choice of those lifts. \end{rem} This leads us to the following definition: \begin{definition} We define the Bezoutian of the functions $f$ and $g$ with respect to the divisor $D$ as the matrix $B_{\chi,D}(f,g)$. \end{definition} One can see immediately from the proof that is $D \leq D^{\prime}$ are two effective reduced divisors on $X$ and $f,g \in {\mathcal L}(D)$, then $B_{\chi,D}(f,g)$ is a submatrix of $B_{\chi,D^{\prime}}$ and furthermore $B_{\chi,D^{\prime}}(f,g)$ is obtained by padding $B_{\chi,D}(f,g)$ with zeroes to the required size. There are a few choices made in the construction of the Bezoutian matrix. Changing the lifts of the $p_j$ will result in the conjugation of $B_{\chi,D}(f,g)$ by diagonal unitary matrices of the constant factors of automorphy of $\chi$. Similarly changes in coordinates result in conjugation by the respective matrices of transition functions. The following corollary is immediate from the definition of the Bezoutian: \begin{cor} Let $D = \sum_{j=1}^m p_j$ be an effective reduced divisor on $X$. Then the Bezoutian defines a linear map \[ B_{\chi,D} \colon \wedge^2 {\mathcal L}(D) \to M_m({\mathbb C}). \] \end{cor} \begin{prop} \label{prop:bezotian_symmetric} If $\chi \otimes \chi \cong {\mathcal O}$, i.e., $\varphi(\chi)$ is a half-period, then $B_{\chi,D}(f,g)$ is a complex symmetric matrix. \end{prop} \begin{proof} If $\varphi(\chi)$ is a half-period and it is off the theta divisor, it must be an even characteristic. Hence the resulting theta function is even. Since the prime form is anti-symmetric, we get that $K(\chi,p,q) = -K(\chi,q,p)$ in this case. 
Therefore $b_{\chi}(f,g)(p,q) = b_{\chi}(f,g)(q,p)$ and thus the resulting Bezoutian is symmetric. \end{proof} The following proposition shows that the pullbacks of ${\mathbf u}c$ and ${\mathbf u}cl$ to $\tilde{X}$ as vector valued functions have certain duality and independence properties. \begin{prop} \label{prop:u_independence} Let $D= \sum_{j=1}^m p_j$ be an effective and reduced divisor on $X$. Let $g \in {\mathcal L}(D)$ of degree $r$ and take an unramified fiber $g^{-1}(z) = \{q_1,\ldots,q_r\}$, for some $z \in {\mathbb C}$. For each $j=1,\ldots,r$, fix a lift $\tilde{q}_j \in \tilde{X}$. Then the vectors ${\mathbf u}c(\tilde{q}_j)$ are linearly independent and the same is true for ${\mathbf u}cl(\tilde{q}_j)$. Set $W = \operatorname{Span} \{ {\mathbf u}c(\tilde{q}_1),\ldots, {\mathbf u}c(\tilde{q}_r)\}$ and $W_{\ell} = \operatorname{Span} \{ {\mathbf u}cl(\tilde{q}_1),\ldots, {\mathbf u}cl(\tilde{q}_r)\}$. For every $f \in {\mathcal L}(D)$ that does not vanish at $q_1,\ldots,q_r$, the matrix $B_{\chi,D}(f,g)$ defines a non-degenerate pairing between the subspaces $W$ and $W_{\ell}$. For every $v \in W$ and $v_{\ell} \in W_{\ell}$ set $[v,v_{\ell}] = v_{\ell} B_{\chi,D}(f,g) v$. Then the ${\mathbf u}c(\tilde{q}_j)$ and ${\mathbf u}cl(\tilde{q}_j)$ are dual with respect to that pairing. \end{prop} \begin{proof} Without loss of generality we may assume that $z = 0$, otherwise replace $g$ with $g-z$ and note that both ${\mathbf u}c$ and ${\mathbf u}cl$ are independent of $g$. Take some $f \in {\mathcal L}(D)$ such that $f$ does not vanish on $q_1,\ldots,q_r$. Then for every $i \neq j$ we have that: $$ {\mathbf u}cl(\tilde{q}_i) B_{\chi,D}(f,g) {\mathbf u}c(\tilde{q}_j) = 0. $$ On the other hand we have that since those are simple zeroes of $g$, we get: $$ {\mathbf u}cl(\tilde{q}_i) B_{\chi,D} {\mathbf u}c(\tilde{q}_i) = - f(q_i) g^{\prime}(q_i) \neq 0. $$ Assume that there exist constants $\alpha_1,\ldots,\alpha_r \in {\mathbb C}$, such that the linear combination $\sum_{j=1}^r \alpha_j {\mathbf u}c(\tilde{q}_j) = 0$. Then premultiplying by ${\mathbf u}c(\tilde{q}_i)$, we get that $\alpha_i = 0$. Conclude that the vectors are linearly independent. Similarly for ${\mathbf u}cl$. \end{proof} Note that the result is independent of the choices in the construction of ${\mathbf u}c$ and ${\mathbf u}cl$, since the difference is multiplication by an invertible matrix. \begin{cor} \label{cor:u_independence_divisor} Let $D = \sum_{j=1}^m p_j$ be an effective reduced divisor on $X$ and assume that $D$ is precisely the divisor of poles of a meromorphic function $g$. Let $z \in {\mathbb C}$, be such that $g$ is unramified over $z$ and set $g^{-1}(z) = \{q_1,\ldots,q_m\}$. For every $j=1,\ldots,m$, fix a lift $\tilde{q}_j \in \tilde{X}$. Then ${\mathbf u}c(\tilde{q}_j)$ and ${\mathbf u}cl(\tilde{q}_j)$ span ${\mathbb C}^m$ and are dual bases with respect to $B_{\chi,D}(1,g)$. \end{cor} \begin{cor} \label{cor:common_zero} Let $D$ be as in Corollary \ref{cor:u_independence_divisor} and $f,g \in {\mathcal L}(D)$. Then if $f$ and $g$ have a common zero at $p$, then $B_{\chi,D}(f,g){\mathbf u}c(\tilde{p}) = 0 $. Independently of the choice of the lift $\tilde{p}$. \end{cor} \begin{proof} Note that by choosing an appropriate constant $c$, the set $S$ is the divisor of poles of the function $f + c g$. By assumption, for every $q \in X$, we have that: \[ b_{\chi}(f,g)(p,q) = 0. 
\] Therefore, by Proposition \ref{prop:bezoutians_formula} we conclude that: \[ {\mathbf u}cl(\tilde{q}) B_{\chi,D}(f,g) {\mathbf u}c(\tilde{p}) = 0. \] Now by the assumption on $D$ and Corollary \ref{cor:u_independence_divisor} we conclude that there exist $q_1,\ldots,q_m$, such that ${\mathbf u}cl(\tilde{q}_j)$ are a basis for ${\mathbb C}^m$ dual to ${\mathbf u}c(\tilde{q}_j)$, hence $B_{\chi,D}(f,g) {\mathbf u}c(\tilde{p}) = 0$. Note that if we replace $\tilde{p}$ by $T \tilde{p}$ for some $T \in \pi_1(X)$, then ${\mathbf u}c(T \tilde{p}) = a_{\chi}(T) {\mathbf u}c(p)$ and hence $B_{\chi,D}(f,g){\mathbf u}c(\tilde{T p}) = 0$ as well. \end{proof} \begin{cor} \label{cor:det_vanish} Let $D$ be as in Corollary \ref{cor:u_independence_divisor} and $f,g \in {\mathcal L}(D)$. Then for every $p \in X \setminus S$, we have that: $$ \left( g(p) B_{\chi,D}(1,f) - f(p) B_{\chi,D}(1,g) + B_{\chi,D}(f,g) \right) {\mathbf u}c(\tilde{p}) = 0. $$ This equality is independent of the choice of $\tilde{p}$. \end{cor} \begin{proof} Note that by the anti-symmetry of the Bezoutian we have that: $$ B_{\chi,D}( f - f(p), g - g(p) ) = g(p) B_{\chi,D}(1,f) - f(p) B_{\chi,D}(1,g) + B_{\chi,D}(f,g). $$ Now just apply Corollary \ref{cor:common_zero}. \end{proof} \section{Real Riemann Surfaces and Dividing Functions} \label{sec:div_func} In this section we'll keep the notations from the previous section and assume that $X$ is equipped with an anti-holomorphic involution $\tau$. Set $X({\mathbb R})$ the set of fixed points of $\tau$. Recall that $X$ is called dividing if $X \setminus X({\mathbb R})$ has two connected components. \begin{definition} We say that a meromorphic function $f$ is dividing, if $f(p) \in \mathbb{P}^1({\mathbb R})$ if and only if $p \in X({\mathbb R})$. \end{definition}. Note that if $X$ admits such a function then clearly $X$ is dividing and the two components of $X \setminus X({\mathbb R})$ are given by $X_{+} = \{ p \in X \mid \operatorname{Im} f(p) > 0\}$ and $X_{-} = \{ p \in X \mid \operatorname{Im} f(p) < 0\}$. The converse is also true, see \cite{Ahl47} and \cite[Sec.\ 4]{Ahl50}. Let us call the orientation induced on $X({\mathbb R})$ from $X_{+}$, positive. If $p \in X({\mathbb R})$ and $t$ is a real coordinate centered at $p$ then the Laurent expansion of $f$ with respect to $t$ will have real coefficients. Furthermore, if we consider the function $f \circ t^{-1}$ as a meromorphic function on a disc, then it takes points with positive (resp. negative) imaginary parts to points with positive (resp. negative) imaginary parts as well. The following proposition is in fact contained in \cite{Ahl47} and \cite{Ahl50}, we will recall the proof for the sake of completeness. \begin{prop} \label{prop:dividing_func} Let $f$ be a dividing function on $X$, then it has only simple poles and zeroes and its residues at the poles, with respect to a real local coordinate with positive orientation, are negative. Conversely, if $X$ is dividing and $f$ is a real meromorphic function on $X$ with simple real poles and negative residues with respect to positive real local coordinate, then $f$ is dividing. \end{prop} \begin{proof} Let $p$ be a zero of $f$, then $p \in X({\mathbb R})$. Let $t$ be a real local coordinate centered at $p$. We note that if we have a zero of higher order that $f(t) = a t^k + \ldots$, hence if $t$ is small enough it can not preserve the part of the disc with positive imaginary part, unless $k=1$. Since if $f$ is dividing then so is $-1/f$, hence a similar conclusion applies to poles. 
In order to prove the second part of the claim we fix a real positively oriented local coordinate $t$ at a pole, then: \[ \lim_{t \to -} t f(t) = a. \] Since the limit exists in particular the limit exists when we approach $0$ along the positive imaginary axis. Then the imaginary part of $f(t)$ is also positive by assumption and hence the real part of $t f(t)$ is always negative, and hence so is the limit. Conversely, assume that $f$ is a real meromorphic function on $X$ with simple real poles and negative residues with respect to positive real local coordinate. Then $\operatorname{Im}(f)$ is a harmonic function on $X_{+}$ and it vanishes at every point of the boundary, except for the poles. The above limit argument shows that in fact for a positively oriented coordinate the imaginary part of $f$ is positive on $X_{+}$ near the poles. Therefore, from the minimum and maximum principle for harmonic functions it follows that $\operatorname{Im}(f) > 0$ on $X_{+}$. \end{proof} \begin{rem} Another way to see that the residues are negative is as follows. Let $p$ be a simple pole of $f$ and $t$ again a real positive local coordinate centered at $p$. Write the Laurent expansion of $f$ with respect to $t$: $f(t) = a/t + b + \ldots$. We have that if $\operatorname{Im} t > 0$, then the sign of $\operatorname{Im} (a/t)$ is the opposite of the sign of $a$, so for $t$ of very small modulus we conclude that $a$ has to be negative. \end{rem} Let us assume from now on that $X$ is dividing and $X({\mathbb R})$ has $k$ components, $X_0,\ldots,X_{k-1}$. We can pullback $\tau$ to an anti-holomorphic involution on $\tilde{X}$, we'll denote the pullback by $\tau$ as well. We recall the construction of a special symmetric basis for the homology of $X$ from \cite{Vin93}. We take a point $s_i$ on $X_i$ and for each $i=1,\ldots,k-1$ we take a path $C_i$ connecting $s_0$ to $s_i$ and containing no other real points. Then we set $A_{g+1-k+i} \sim X_i$ and $B_{g+1-k+i} \sim \pm (C_i - C_i^{\tau})$. Here $\sim$ stands for integral homology we choose the sign in $B_{g+1-k+i}$ so that $\langle A_{g+1-k+i},B_{g+1-k+i} \rangle = 1$, where the pairing is the intersection pairing. Then we complete this to a symmetric homology basis on $X$. We fix a corresponding basis of holomorphic differentials $\omega_1,\ldots,\omega_g$. Then, as in \cite{Vin93}, we have that $\overline{\tau^* \omega_j} = \omega_j$. Recall from \cite[Ch.\ 3]{Vin93} that the Jacobian variety of $X$ has several real sub-tori, associated each to a different choice of signs $(v_0,\ldots,v_{g-r})$, where $r = g + 1 - k$, defined by: \begin{multline*} T_v = \{ \zeta \in J(X) \mid \zeta = \frac{v_1}{2} e_{r+1} + \ldots + \frac{v_{g-r}}{2} e_g + a_1 (\Omega_1 - \frac{e_2}{2}) + a_2 (\Omega_2 - \frac{e_1}{2}) + \ldots + \\ a_{r-1} (\Omega_{r-1} - \frac{e_r}{2}) + a_r (\Omega_r - \frac{e_{r-1}}{2}) + a_{r+1} \Omega_{r+1} + \ldots + a_g \Omega_g\}. \end{multline*} Here the $e_j$ and $\Omega_j$ are columns of the identity matrix and $\Omega$, respectively. Write $e_1,\ldots,e_g$ for the standard basis of ${\mathbb Z}^g$. Let us fix $\chi \in {\mathbb T}_{v}$, then by our assumption and \cite[Eq,\ 3.12]{Vin93}, we have that: $$ \varphi(\chi) + \varphi(\chi^{\tau}) = \varphi(\chi) + \overline{\varphi(\chi)} = v_1 e_{r+1} + \ldots + v_{g-r} e_g. $$ Using this fact we obtain the following lemma about the behavior of $K(\chi,\cdot,\cdot)$. 
\begin{lem} \label{lem:K_bar} For every two distinct points $p,q \in X$, we have that: $$ \overline{K(\chi,p,q)} = - K(\chi,q^{\tau},p^{\tau}). $$ \end{lem} \begin{proof} Recall that we have the following identity for theta functions: $$ \theta{a \brack b}(z) = e^{2 \pi i \langle z + b +\frac{1}{2} \Omega a, a \rangle} \theta(z + b +\Omega a). $$ Let us write: $$ G(z) = \frac{\theta{a \brack b}(z)}{\theta{a \brack b}(0)} = e^{2 \pi i \langle z, a \rangle} \frac{\theta(z + b + \Omega a)}{\theta(b + \Omega a)}. $$ Then, by \cite[Prop.\ 2.3]{Vin93}, we have that, for real $a$ and $b$, such that $b + \Omega a \in {\mathbb T}_v$: $$ \overline{G(z)} = e^{- 2 \pi i \langle \bar{z}, a \rangle} \frac{\theta(\bar{z} - b - \Omega a + v_1 e_{r+1} + \ldots + v_{g-r} e_g)}{\theta(-b - \Omega a + v_1 e_{r+1} + \ldots + v_{g-r} e_g)} = G(- \bar{z}). $$ Now note that we have: $$ K(\chi,p,q) = \frac{G(\varphi(q) - \varphi(p))}{E_{\Delta}(q,p)}. $$ Hence, applying the above equality and \cite[Eq.\ 2.12]{Vin93}, we get: $$ \overline{K(\chi,p,q)} = \frac{G(\varphi(p^{\tau}) - \varphi(q^{\tau}))}{\overline{E_{\Delta}(q,p)}}. $$ By the fact that the prime form is anti-symmetric, we get that to prove the result we need only show that: $$ \overline{E(p,q)} = E(p^{\tau},q^{\tau}). $$ Now by \cite[Eq.\ 19]{Fay73} we have that: $$ E(p,q) = \frac{\theta[\varphi(\Delta)](q-p)}{h(p) h(q)}. $$ Here $h$ is a holomorphic section of $\Delta$, satisfying $h^2(p) = \sum_{j=1}^g \dfrac{\partial \theta[\varphi(\Delta)]}{\partial z_j}(0) \omega_j(p)$. By \cite[Prop.\ 6.11]{Fay73} we have that there exists an open cover of $X$, trivializing $\Delta$, such that $h$ is real and positively oriented. Now applying again \cite[Prop.\ 2.3]{Vin93} we get that $\overline{h^2(p)} = c h^2(p^{\tau})$. Therefore we get the desired result. See also \cite[Cor.\ 6.12]{Fay73}. \end{proof} The following fact was essentially proved in the proof of \cite[Thm.\ 2.1]{AlpVin02}; we recall the proof to make the exposition more self-contained. \begin{cor} \label{cor:u_adjointness} Let $D = \sum_{j=1}^m p_j$ be an effective reduced divisor on $X$, such that $p_j \in X({\mathbb R})$ for every $j=1,\ldots,m$. Then if $\chi \in {\mathbb T}_v$, we have that: ${\mathbf u}c(p)^* = - J {\mathbf u}cl(p^{\tau})$, where $J$ is a signature matrix that depends on $v$. In particular if $v = 0$, then $J = I$, the identity matrix. \end{cor} \begin{proof} Note that ${\mathbf u}c(p)^* = \begin{pmatrix} \frac{\overline{K(\chi,\tilde{p}_1,p)}}{\sqrt{dt_1}(\tilde{p}_1)} & \cdots & \frac{\overline{K(\chi,\tilde{p}_m,p)}}{\sqrt{dt_m}(\tilde{p}_m)} \end{pmatrix}$. Now applying Lemma \ref{lem:K_bar} we get that: \[ \frac{\overline{K(\chi,\tilde{p}_j,p)}}{\sqrt{dt_j}(\tilde{p}_j)} = - \frac{K(\chi,p^{\tau},\tilde{p}_j^{\tau})}{\sqrt{dt_j}(\tilde{p}_j)} = - \frac{K(\chi,p^{\tau},T_j \tilde{p}_j)}{\sqrt{dt_j}(\tilde{p}_j)} = - a_{\chi}(T_j) \frac{K(\chi,p^{\tau},\tilde{p}_j)}{\sqrt{dt_j}(\tilde{p}_j)}. \] Here $T_j \in \pi_1(X)$ is the element that maps $\tilde{p}_j$ to $\tilde{p}^{\tau}_j$. Assume that $p_j \in X_s$, where $X_s$ is one of the components of $X({\mathbb R})$. Then by the definition of the symmetric basis in $H_1(X,{\mathbb Z})$ we have that $T_j \sim 0$ if $s=0$, and $T_j \sim B_{g+1-k+s}$ if $s = 1,\ldots,k-1$. Since $a_{\chi}$ is a unitary character it factors through $H_1(X,{\mathbb Z})$. So either $a_{\chi}(T_j) = 1$, if $s=0$, or $a_{\chi}(T_j) = e^{2 \pi i b_{g+1-k+s}}$. Now by \cite[Eq.\ 3.9]{Vin93} we have that $b_{g+1-k+s} = v_{s}/2$ and we are done.
\end{proof} \begin{prop} \label{prop:bezoutian_hermitian} Assume that $\chi \in {\mathbb T}_v$ and let $D = \sum_{j=1}^m p_j$ be an effective reduced divisor with all $p_j \in X({\mathbb R})$. Let $f,g \in {\mathcal L}(D)$ be real. Then under our assumptions, $B_{\chi,D}(f,g)$ is $J$-Hermitian, where $J$ is a signature matrix that depends on $v$ obtained above. \end{prop} \begin{proof} Fix two distinct points $p,q \in X({\mathbb R})$ not on $D$, then by Proposition \ref{lem:K_bar} we have that: $$ \overline{b_{\chi}(f,g)(p,q)} = b_{\chi}(f,g)(q,p). $$ Now applying Proposition \ref{prop:bezoutians_formula} and Lemma \ref{lem:K_bar}, we get that: $$ \sum_{i,j =1}^m \overline{b_{ij}} \frac{\overline{K(\chi,p,\tilde{p}_i)} \overline{K(\chi,\tilde{p}_j,q)}}{\overline{\sqrt{dt_i}(\tilde{p}_i)\sqrt{dt_j}(\tilde{p}_j)}} = \sum_{i,j =1}^m \overline{b_{ij}} a_{\chi}(T_i) a_{\chi}(T_j)\frac{K(\chi,\tilde{p}_i,p) K(\chi,q,\tilde{p}_j)}{\sqrt{dt_i}(\tilde{p}_i)\sqrt{dt_j}(\tilde{p}_j)}. $$ Hence comparing coefficients we get that, $b_{ij} = a_{\chi}(T_i) a_{\chi}(T_j) \overline{b_{ji}}$. Conclude that: \[ B_{\chi,D}(f,g) = J B_{\chi,D}(f,g)^* J. \] \end{proof} Assume that $\chi \in {\mathbb T}_0$ and let $D$ be a divisor as in Proposition \ref{prop:bezoutian_hermitian} above. Let us assume that we have two functions $f,g \in {\mathcal L}(D)$ are real and $f/g$ is dividing. Let $(f)_{\infty}$ and $(g)_{\infty}$ be the divisors of poles of $f$ and $g$, respectively. Let us assume that $D = (f)_{\infty} \vee (g)_{\infty}$ the supremum of divisors of $f$ and $g$. We can replace both $f$ and $g$ by real linear combinations so that $D= (f)_{\infty} = (g)_{\infty}$. If we have a matrix $\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) \in \operatorname{SL}_2({\mathbb R})$, then $h = \frac{ a f + b g}{c f + d g} = \frac{ a + b (f/g)}{c + d (f/g)}$. Hence for every $p \in X$ not a zero of $g$ if $h(p) \in {\mathbb R}$, then $(f/g)(p)$ in ${\mathbb R}$ and hence $p \in X({\mathbb R})$, so $h$ is dividing as well. The poles of $f/g$ are thus at real zeroes of $g$. Now if $p$ is a complex zero of $g$ then $f$ also has a zero at $p$ and thus $B_{\chi,D}(f,g) {\mathbf u}c(\tilde{p}) = 0$ by Corollary \ref{cor:common_zero}. If $p$ is a real zero of $g$, then either it is also a zero of $f$ or it is a pole of $f/g$. In the first case we apply Corollary \ref{cor:common_zero} again to get that $B_{\chi,D}(f,g) {\mathbf u}c(\tilde{p}) = 0$ as well. In the second case we fix a real positive coordinate $t$ centered at $p$ and applying Proposition \ref{prop:bezoutians_formula} we get: \[ {\mathbf u}cl(\tilde{p}) B_{\chi,D}(f,g) {\mathbf u}c(\tilde{p}) = f(p) g^{\prime}(p). \] Note that in this case the zero of $g$ has to be simple, since every pole of $f/g$ is simple. Using Corollary \ref{cor:u_adjointness} and the fact that $\chi \in {\mathbb T}_0$ we conclude that: \[ \langle B_{\chi,D}(f,g) {\mathbf u}c(\tilde{p}), {\mathbf u}c(\tilde{p}) \rangle = - f(p) g^{\prime}(p). \] Note that the residue of $f/g$ at $p$ is $f(p)/g^{\prime}(p) < 0$ and deduce that $- f(p) g^{\prime}(p) > 0$. This leads us to the following proposition. \begin{prop} \label{prop:bezout_definite} Assume that $\chi \in {\mathbb T}_0$ and let $D$ be a divisor as in Proposition \ref{prop:bezoutian_hermitian} above and assume that $f,g \in {\mathcal L}(D)$ are real and that $f/g$ is dividing. If $B_{\chi,D}(f,g)$ is invertible then $B_{\chi,D}(f,g) \geq 0$. 
\end{prop} \begin{proof} We first reduce to the case that $D = (f)_{\infty} = (g)_{\infty}$. We know that $D \geq (f)_{\infty} \vee (g)_{\infty}$ and for every point $p_j$ that is neither a pole of $f$ nor $g$, the $j$-th column and row of $B_{\chi,D}(f,g)$ are zero and this contradicts our assumption. So as in the preceding discussion we can assume that $(f)_{\infty} = (g)_{\infty} = D$. Since $B_{\chi,D}(f,g)$ is invertible, we get that all the zeroes of $g$ are simple and distinct from the zeroes of $f$. Let $q_1,\ldots,q_m$ be the zeroes of $g$ and fix a lift $\tilde{q}_j \in \tilde{X}$. By Corollary \ref{cor:u_independence_divisor} we know that ${\mathbf u}c(\tilde{q_j})$ are linearly independent and by the discussion above they are orthogonal with respect to the bilinear form defined by $B_{\chi,D}(f,g)$. Furthermore the discussion above combined with \cite[Prop.\ 2.2.3]{GLR} gives us that $B_{\chi,D}(f,g)$ is positive definite. \end{proof} \begin{rem} In fact the assumption of $B_{\chi,D}(f,g)$ being invertible can be relaxed, if we assume that $g$ has simple zeroes and $(f/g)$ is dividing, it will still follow that $B_{\chi,D}(f,g) \geq 0$. Indeed the assumptions imply that we are allowing $f$ and $g$ to have common zeroes. The vectors ${\mathbf u}c(\tilde{q}_j)$ are still linearly independent, however some them may be isotropic vectors of $B_{\chi,D}(f,g)$. Looking at those vectors that are not isotropic, we can still deduce that $B_{\chi,D}(f,g) \geq 0$. \end{rem} \section{Livsic-type Determinantal Representations of Curves} \label{sec:det_rep_curve} We shall first fix some notations to be used constantly from now on. Let $C \hookrightarrow \mathbb{P}^d$ be a projective curve of degree $n$ not contained in any hypersurface. Let $X$ be the normalizing Riemann surface of $C$. Let $\iota \colon X \to \mathbb{P}^d$ be the composition of the normalization map with the embedding of $C$. Let us assume that $C$ intersects the hyperplane at infinity at $n$ distinct non-singular points. Otherwise we apply a linear transformation to achieve it. Let ${\mathcal L} = \iota^* {\mathcal O}(1)$ be a line bundle on $X$. Then we have global sections $\mu_0,\ldots,\mu_d \in H^0(X,{\mathcal L})$, such that $\iota(p) = (\mu_0(p):\cdots : \mu_d(p))$. We denote $\lambda_j = \mu_j/\mu_0$, for $j=1,\ldots,d$ and set $\lambda_0 = 1$. Again applying a linear transformation if necessary we may assume that $\mu_1$ and $\mu_0$, have no common zeroes. Fix a flat unitary line bundle $\chi$ on $X$ and a line bundle of half-order differentials, $\Delta$. We define a tensor $\gamma \in \wedge^2 {\mathbb C}^{d+1} \otimes M_n({\mathbb C})$ by setting $\gamma_{ij} = B_{\chi,D}(\lambda_i,\lambda_j)$, where $D$ is the divisor of zeroes of $\mu_0$. Note that by assumption the zeroes of $\mu_0$ are simple and hence for every $j=0,\ldots,d$ we have that $\lambda_j {\mathcal L}(D)$. Furthermore the divisor $D$ is the divisor of poles of $\lambda_1$. In particular if $e_0,\ldots,e_d$ are the standard basis of ${\mathbb C}^{d+1}$, then $\gamma = \sum_{0 \leq i < j \leq d} \gamma_{ij} \otimes e_i \wedge e_j$. Let $V \subset \mathbb{P}^d$ be a linear subspace of dimension $d-2$. Writing out $\gamma(V)$, we get: $$ \gamma(V) = \sum_{0 \leq i < j \leq d} (a_{i0} a_{j1} - a_{j0} a_{i1}) \gamma_{ij}. $$ By the properties of the Bezoutian, we get that: $$ \gamma(V) = \sum_{0 \leq i < j \leq d} B( a_{i0} \lambda_i + a_{j0} \lambda_j, a_{i1} \lambda_i + a_{j1} \lambda_j). 
$$ Now, rearranging the terms and using bilinearity together with the fact that $B(f,f) = 0$ for every meromorphic function $f$, one gets that: $$ \gamma(V) = B(\sum_{i=0}^d a_{i0} \lambda_i, \sum_{j=0}^d a_{j1} \lambda_j). $$ So we get the following: \begin{lem} \label{lem:bezoutian} Let $C$, $X$ and $V$ be as above. Then there exist linear combinations of the $\lambda_j$, namely $\kappa_0 = \sum_{i=0}^d a_{i0} \lambda_i$ and $\kappa_1 =\sum_{j=0}^d a_{j1} \lambda_j$, such that: $$ \gamma(V) = B(\kappa_0,\kappa_1). $$ \end{lem} The main result of this section is the following theorem: \begin{thm} \label{thm:vr_rep_curves} The curve $C$ admits a very reasonable{} Livsic-type determinantal representation $\gamma$. \end{thm} \begin{proof} By Corollary \ref{cor:det_vanish}, for every $1 \leq i < j \leq d$ and every affine point $p$ of $C$ we have that: $$ \left( \lambda_i(p) \gamma_{0j} - \lambda_j(p) \gamma_{0i} + \gamma_{ij} \right) {\mathbf u}c(p) = 0. $$ For $0 \leq i < j < k \leq d$, let us write $L_{ijk} = \lambda_k \gamma_{ij} - \lambda_j \gamma_{ik} + \lambda_i \gamma_{jk}$. Then one checks that for every $1 \leq i < j < k \leq d$ we have that: $$ L_{ijk} = \lambda_k L_{0ij} - \lambda_j L_{0ik} + \lambda_i L_{0jk}. $$ Combining the last two displays, the vector ${\mathbf u}c(p)$ is annihilated by all of the $L_{ijk}(p)$, and hence for every $p \in X$ that is not a pole of the $\lambda_j$ we have that $\iota(p) \in D(\gamma)$. Hence $C \subset D(\gamma)$, since $C$ is the Zariski closure of its affine part and $D(\gamma)$ is closed. Now, by Corollary \ref{cor:u_independence_divisor} we have that for a generic hyperplane of the form $\mu_1 = z$, intersecting $C$ in $n$ distinct affine points $q_1,\ldots,q_n$, the vectors ${\mathbf u}c(q_j)$ are a basis for ${\mathbb C}^n$. Hence, generically, the kernel of $\gamma$ is one-dimensional. Now $\deg \gamma \leq n$; on the other hand, since $C \subset D(\gamma)$, we have that $\deg \gamma \geq n$. We conclude that $\deg \gamma = n$ and therefore $\gamma$ is very reasonable{}. Furthermore, this implies that $D(\gamma)$ is of pure dimension $1$ and of degree $n$. We conclude that $D(\gamma) = C$ as sets. \end{proof} \begin{rem} Note that if we pull back ${\mathcal K}$, the kernel sheaf of the determinantal representation, to the normalization and mod out torsion, we get $\chi \otimes \Delta$ (up to a twist). \end{rem} We will finish this section with some examples of the construction in the case of genus $0$ curves. \begin{ex} Using the methods of \cite[Ch.\ 9]{LKMV} we get that the following matrices are a realization of the twisted cubic curve: \begin{align*} & \gamma_{01} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{02} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{03} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0\end{pmatrix}, \\ & \gamma_{12} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{13} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0\end{pmatrix} \,,\, \gamma_{23} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\end{align*} \end{ex} \begin{ex} Similarly one obtain for a cuspidal plane monomial quintic the Livsic-type determinantal representation: \begin{align*} & \gamma_{01} = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{02} = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{03} = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}, \\ & \gamma_{12} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{13} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & -1 & 0 \end{pmatrix} \,,\, \gamma_{23} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 \end{pmatrix}. \end{align*} However the scheme has an embedded point at the singularity. The affine primary decomposition is given by: \begin{equation*} I = (y^2-xz, x^2 y - z^2, x^3 - yz) \cap (z, y^3, x y^2, x^3 y, x^4). \end{equation*} The last ideal is $(x,y,z)$-primary and hence $(x,y,z)$ is an embedded prime. \end{ex} \begin{ex} The following is an example of a smooth, but not projectively normal rational curve in $\mathbb{P}^3$ obtained via the map $(1,t,t^2,t^3)$. \begin{align*} & \gamma_{01} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{02} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{03} = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}, \\ & \gamma_{12} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \,,\, \gamma_{13} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \,,\, \gamma_{23} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \end{align*} \end{ex} \section{Hyperbolic Curves in $\mathbb{P}^d$} \label{sec:hyper_curve} From now let us assume that $C$ is a real curve, then the involution on $\mathbb{P}^d$ obtained from complex conjugation of coordinates, induces an anti-holomorphic involution on $X$. Note that $\dim H^0(X,{\mathcal L}) \geq d+1$, in particular if $W \subset H^0(X,{\mathcal L})$ is the subspace spanned by the real sections $\mu_0,\ldots,\mu_d$, then in fact $\iota$ is a map from $X$ to $\mathbb{P} W^*$. We identify $\mathbb{P} W^*$ with $\mathbb{P} W$, by setting the basis $\mu_0,\ldots,\mu_d$ to be orthonormal. Furthermore a section $\nu \in W$ is real if and only if it is a linear combination of the $\mu_j$ with real coefficients. Let us assume that there exists a real linear subspace $V \subset \mathbb{P} W$ of dimension $d-2$, such that $C$ is hyperbolic with respect to $V$, then we have that: \begin{lem} \label{lem:hyper_dividing} There exist real $\nu_0, \nu_1 \in H^0(\tilde{X},{\mathcal L})$, such that the meromorphic function $\lambda$, on $X$, defined by $\lambda = \nu_1/\nu_0$ is dividing. In particular $X$ is dividing. \end{lem} \begin{proof} Consider $V \subset H^0(X,{\mathcal L})$ and assume at first that $V$ is spanned by $\mu_2,\ldots,\mu_d$. 
Then every real hyperplane containing $V$ is spanned by $s \mu_0 + t \mu_1$ and $V$, where $s,t \in {\mathbb R}$ are not both zero. Set $\lambda = \mu_1/\mu_0$. Clearly, if $p \in X({\mathbb R})$, then $\lambda(p)$ is real. On the other hand, if $\lambda(p)$ is real, then either $\lambda(p) = \alpha \in {\mathbb R}$ or $\lambda(p) = \infty$; take the hyperplane $H$ spanned by $\mu_0 + \alpha \mu_1$ and $V$ in the first case, and by $\mu_1$ and $V$ in the second case. Observe that by hyperbolicity $H \cap C \subset C({\mathbb R})$. Now $\iota(p) = (\mu_0(p),\ldots,\mu_d(p))$; in particular, $\iota(p) \in H \cap C$, hence $\iota(p)$ is real, and therefore $p \in X({\mathbb R})$. Now suppose that $V$ is spanned by real sections $\nu_2,\ldots,\nu_d$. We can complete this set to a real basis of $H^0(X,{\mathcal L})$ by adding two more sections $\nu_0^{\prime}$ and $\nu_1^{\prime}$. Since hyperbolicity is invariant under real coordinate changes, we see that we get the required $\nu_0$ and $\nu_1$ by pulling back $\mu_0$ and $\mu_1$. \end{proof} This discussion leads us to the main result of this section. \begin{thm} \label{thm:hyp-liv-det-rep} The curve $C$ admits very reasonable{} Hermitian Livsic-type determinantal representations $\gamma$, parametrized by flat unitary line bundles in ${\mathbb T}_0$, a real subtorus of the Jacobian variety of the desingularizing Riemann surface of $C$, such that for every real $(d-2)$-dimensional linear subspace $U \subset \mathbb{P}^d$, we have that $\gamma(U)$ is definite if and only if $C$ is hyperbolic with respect to $U$. In particular, if the line bundle is two-torsion, then the resulting determinantal representation is real symmetric. \end{thm} \begin{proof} Fix a flat unitary line bundle $\chi$ in ${\mathbb T}_0$; by \cite[Cor.\ 4.3]{Vin93}, $h^0(\chi \otimes \Delta) = 0$. Then, by Proposition \ref{prop:bezoutian_hermitian}, each $\gamma_{ij}$ is Hermitian, since the $\lambda_j$ are real functions with real poles. It suffices to prove that if $C$ is hyperbolic with respect to $U$ then $\gamma(U)$ is definite, since the converse has already been proved. By Lemma \ref{lem:bezoutian}, for every such $U$ there exist two real linear combinations $\kappa_0$ and $\kappa_1$ of the $\lambda_j$ such that $\gamma(U) = B_{\chi,D}(\kappa_0,\kappa_1)$, and by Lemma \ref{lem:hyper_dividing} the function $\lambda = \kappa_1/\kappa_0$ is dividing. Since $\gamma$ is very reasonable{}, the assumption $U \cap C = \emptyset$ implies that $\gamma(U)$ is invertible. Hence we can apply Proposition \ref{prop:bezout_definite} to get the result. To get a real symmetric representation, apply Proposition \ref{prop:bezotian_symmetric}. \end{proof} If $C$ admits a very reasonable{} Hermitian Livsic-type determinantal representation $\gamma$, then so does the hypersurface $Y$ corresponding to it via the incidence correspondence. Now we can lift the determinantal representation to some hypersurface $Y^{\prime}(\gamma) \subset \mathbb{P}^N$. If $C$ is hyperbolic with respect to some real $(d-2)$-dimensional linear subspace $V \subset \mathbb{P}^d$, then by the above theorem $\gamma(V)$ is definite; we conclude that $Y^{\prime}(\gamma)$ is hyperbolic with respect to $V$. On the other hand, if $Y^{\prime}(\gamma)$ is hyperbolic with respect to $V$ then so is $Y$, and thus by Proposition \ref{prop:hyperbolic-equiv} $C$ is hyperbolic with respect to $V$; applying the theorem again we see that $\gamma(V)$ is definite.
Therefore we obtain: \begin{cor} Set ${\mathcal H}(Y) = \left\{ V \in \mathbb{G}(d-2,d) \mid Y \mbox{ is hyperbolic w.r.t. } V \right\}$ and, similarly, ${\mathcal H}(Y^{\prime}) = \left\{ U \in \mathbb{P}^N \mid Y^{\prime} \mbox{ is hyperbolic w.r.t. } U \right\}$. Then: $$ {\mathcal H}(Y) = \mathbb{G}(d-2,d) \cap {\mathcal H}(Y^{\prime}). $$ In particular, the cone over ${\mathcal H}(Y)$ is a disjoint union of two extendably convex connected components in the sense of Busemann. \end{cor} \nocite{M2} \end{document}
\begin{document} \title{Active Readout Error Mitigation} \author{Rebecca Hicks$^{\#}$} \email{[email protected]} \affiliation{Physics Department, University of California, Berkeley, Berkeley, CA 94720, USA} \author{Bryce Kobrin$^{\#}$} \email{[email protected]} \affiliation{Physics Department, University of California, Berkeley, Berkeley, CA 94720, USA} \affiliation{Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA} \author{Christian W. Bauer} \email{[email protected]} \affiliation{Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA} \author{Benjamin Nachman} \email{[email protected]} \affiliation{Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA} \begin{abstract} Mitigating errors is a significant challenge for near term quantum computers. One of the most important sources of errors is related to the readout of the quantum state into a classical bit stream. A variety of techniques have been proposed to mitigate these errors with post-hoc corrections. We propose a complementary scheme to actively reduce readout errors on a shot-by-shot basis by encoding single qubits, immediately prior to readout, into multi-qubit states. The computational resources of our technique are independent of the circuit depth and are compatible with the error rates and connectivity of many current devices. We analyze the potential of our approach using two types of error-correcting codes and, as a proof of principle, demonstrate an 80\% improvement in readout error on the IBMQ Mumbai quantum computer. \end{abstract} \date{\today} \maketitle \def\thefootnote{\#}\footnotetext{These authors contributed equally.} \section{Introduction} Quantum computers are promising tools to solve many computationally intractable scientific and industrial problems. However, existing noisy intermediate-scale quantum (NISQ) computers~\cite{Preskill2018quantumcomputingin} suffer from significant errors that must be mitigated before obtaining useful results. These errors can be categorized as initialization, state preparation, or readout errors. Initialization errors, which often arise from thermal noise, lead to residual entropy in the starting state of the system. State preparation errors result from mis-calibrated quantum gates and unintended couplings with the environment. The focus of this paper is readout errors, which result from decoherence during measurement and from the overlapping support between the measured physical quantities that correspond to the $\ket{0}$ and $\ket{1}$ states. Readout errors can be significant and even dominate the error budget for relatively shallow quantum circuits. A number of strategies have recently been proposed for mitigating readout errors which rely on classical post-processing techniques. A particularly common scheme involves measuring a response matrix $R_{ij}=\Pr(\text{measure $i$} | \text{true state is $j$})$ and performing regularized matrix inversion on the output distribution~\cite{geller_efficient_2020,song_10-qubit_2017,gong_genuine_2019,wei_verifying_2020,hamilton2020scalable}. However, this scheme requires characterizing, at worst, an exponentially large matrix, and inverting the response matrix may suffer from numerical instabilities \cite{geller_efficient_2020}.
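For concreteness, a minimal \texttt{numpy} sketch of such a passive correction step is shown below; the calibration-based construction of $R$, the pseudo-inverse regularization, and all function names are illustrative choices of ours rather than the specific procedures of the works cited above.
\begin{verbatim}
import numpy as np

def response_matrix(calib_counts):
    # calib_counts[j][i] = number of times state i was measured when basis
    # state j was prepared; returns R with R[i, j] = Pr(measure i | true j).
    R = np.asarray(calib_counts, dtype=float).T
    return R / R.sum(axis=0, keepdims=True)

def passive_correction(measured_freqs, R):
    # Estimate t_hat = R^{-1} m via a pseudo-inverse, then project back onto
    # the probability simplex for numerical safety.
    t_hat = np.linalg.pinv(R) @ np.asarray(measured_freqs, dtype=float)
    t_hat = np.clip(t_hat, 0.0, None)
    return t_hat / t_hat.sum()

# Toy single-qubit example with a 5% symmetric readout error:
R = response_matrix([[950, 50],    # prepared |0>: counts of measuring 0 and 1
                     [50, 950]])   # prepared |1>
print(passive_correction([0.70, 0.30], R))   # approximately [0.722 0.278]
\end{verbatim}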
More fundamentally, such passive methods can only reduce the bias from readout errors on average, rather than correcting individual errors on a shot-by-shot basis ~\cite{1910.01969,1904.11935,1907.08518,arute2020quantum,10.1145/3352460.3358265}. Thus, they cannot be applied to tasks where obtaining individual output states---not expectation values---is desired, including random circuit sampling \cite{arute2019quantum,wu2021strong}, prime factorization \cite{shor1999polynomial}, and high energy physics simulations \cite{nachman2021quantum}. \begin{figure} \caption{A schematic diagram of the active readout error mitigation protocol for a single qubit and a quantum circuit represented by the unitary operation $U$. (Top) In passive readout error mitigation, the qubit is read out directly and classical post-processing (box with $R^{-1} \label{fig:schematic} \end{figure} A more powerful strategy for correcting individual errors, during but not limited to readout, is based on the celebrated framework of quantum error correction (QEC). The core idea behind QEC is to embed the logical state into non-local degrees of freedom, such that local errors that affect only a few degrees of freedom can be isolated and corrected \cite{Gottesman09anintroduction,terhal2015quantum,Nielsen:2011:QCQ:1972505}. To utilize this protection for quantum processing, operations on the logical state must also be encoded and, crucially, be able to withstand a low rate of physical errors; this is the hallmark of \emph{fault-tolerant} quantum computation. While individual aspects of error correction / fault tolerance have been demonstrated on small-scale experiments \cite{errorcorrecting,PhysRevA.97.052313,Barends2014,Kelly2015,Linke:2017, Takita:2017, Roffe:2018, Vuillot:2018, Willsch:2018,Harper:2019,chen2021exponential}, performing full fault-tolerant computation imposes stringent technical requirements which are infeasible for near-term devices \cite{campbell2017roads,gidney2021factor}. Moreover, unless all components of a computation are implemented fault-tolerantly, the encoding is likely to have little effect on or even worsen the computational outcome compared to a direct, unencoded implementation. Finding loopholes to this ``all-or-nothing'' outlook remains an essential task for bridging the gap towards full fault-tolerant computation. In this paper, we propose and analyze an active error mitigation technique that targets readout errors only. The essence of our approach is to encode each qubit immediately before readout into a multi-qubit logical state, as exemplified in Fig.~\ref{fig:schematic}. A general feature of this strategy is a tradeoff between suppressing readout noise and introducing errors during the encoding circuit. Our strategy is thus most effective on devices whose intrinsic readout errors dominate over entangling gate errors, a situation that arises among many existing quantum computers (Fig.~\ref{fig:igmqrates}). Moreover, the resource requirement of the encoding in terms of ancilla qubits and gate overhead is independent of circuit depth, making it well suited for near-term applications. To this end, we implement our protocol on an IBMQ quantum computer and demonstrate almost an order of magnitude reduction in readout errors. The rest of the paper is organized as follows. In Section~\ref{sec:method}, we introduce the active readout error mitigation protocol after briefly reviewing existing forms of readout error mitigation. 
In Sections~\ref{sec:results} and \ref{sec:sim}, we present analytical results and numerical simulations based on simple error models. In Section \ref{eq:ibmq}, we demonstrate the experimental performance of our strategy on a superconducting quantum computer. In Section~\ref{sec:conclusions}, we conclude with a brief summary and outlook on error mitigation strategies. \begin{figure} \caption{A scatter plot of the \textsc{cnot} \label{fig:igmqrates} \end{figure} \section{Method} \label{sec:method} Consider a quantum system which is characterized in the computational basis by the frequencies $t_i=\Pr(\text{true state is $i$})= \textrm{Tr}\left[ \rho \dyad i \right ]$, where $i \in \{0,1,\ldots,2^{n_\text{qubits}}-1\}$. The goal of quantum readout is to accurately and efficiently estimate $t_i$ using a series of measurements on identically prepared quantum systems. This task becomes non-trivial in the presence of readout errors, since these lead to biases in the measured frequencies $m_i=\Pr(\text{measure $i$})$ compared to the true frequencies $t_i$. A widely used strategy to correct these biases is to characterize the response matrix $R$ and estimate the true state frequencies as $\hat{t}=R^{-1}m$. As an illustrative example, consider a single qubit quantum state with $t_0 = 1-t_1 = p$ and suppose that it undergoes symmetric readout noise with error probability $q$, i.e.~the response matrix is given by $R_{10}=R_{01}=q$ and $R_{00}=R_{11}=1-q$. If a series of measurements is performed, the expected frequency in the $0$ state is $\mathbb{E}[m_0] = \lambda$ and its variance is $\text{Var}[m_0] = \lambda(1-\lambda)$ for $\lambda=p+q(1-2p)$. Performing matrix inversion allows one to correct the average bias. In particular, applying \begin{align} \label{eq:passive_corr} \hat{t}=\begin{pmatrix}1-q & q \cr q & 1-q\end{pmatrix}^{-1}\begin{pmatrix} m_0\cr m_1\end{pmatrix}\, \end{align} leads to the correct expected value of $\hat{t}$, i.e.~$\mathbb{E}[\hat{t}]=(p,1-p)$. However, this passive approach towards error mitigation is limited in two fundamental ways. First, the \emph{variance} of $\hat t$ typically remains larger than that of the noise-free case; indeed, in our simple example, the variance increases by an $\mathcal{O}(q)$ factor: \begin{align} \begin{split} \label{eq:variance} \textrm{Var}[\hat t_0] &= \textrm{Var}[m_0] - 2q \textrm{Cov}[m_0,m_1] + \mathcal{O}(q^2) \\ &= \lambda(1-\lambda)(1+2q) + \mathcal{O}(q^2) \\ &= \textrm{Var}[t_0]+q\left[1-2p(1-p)\right] + \mathcal{O}(q^2), \end{split} \end{align} where we have used that $\textrm{Cov}[m_0,m_1] = -m_0 m_1$ for a multinomial distribution. Correspondingly, the number of measurements required to estimate $\hat t_i$ to a desired precision, which is proportional to $\text{Var}[\hat t_i]$, also increases with the readout error rate. Second, and more significantly, we have assumed that $m_i$ can be estimated using a large number of measurements relative to the size of the vector space. This is generally the case when computing the expectations of quantum observables, where the relevant vector space is that of the observable and not the full Hilbert space. But there are many problems of interest where the goal is to sample a relatively sparse set of output states, rather than to estimate statistical properties of the output distributions; for such problems, passive correction methods are fundamentally inapplicable. \begin{figure} \caption{An illustration of the repetition (3,1) code, which enables both error detection and error correction.
} \label{fig:schematic2} \end{figure} Active readout error mitigation overcomes both of these limitations by effectively reducing the readout error $q$ on a shot-by-shot basis. The main idea is to take a circuit in which the physical qubits serve directly as the logical qubits and to encode each qubit, immediately before it is measured, into a larger multi-qubit state. This encoding is analogous to conventional strategies for quantum error correction, but with two important distinctions. First, in the case of readout error mitigation, one is only concerned with bit-flip errors in the computational basis since phase-flip errors do not affect the measured distribution. Second, the encoding is performed \emph{after} state preparation rather than at the beginning of the circuit. These simplifications allow us to circumvent the significant space and gate overhead typically associated with full quantum error correction. The simplest version of active readout error mitigation is based on the two-qubit repetition code. As depicted in Fig.~\ref{fig:schematic}, each qubit is entangled with a unique partner qubit using a single \textsc{cnot}~gate (though other fully entangling gates could also be employed). Without errors, the measured outcomes are either $00$ or $11$, whereas with a single bit-flip error, the measured outcomes become $01$ or $10$; single-qubit readout errors can thus be detected but not corrected. A natural extension of this encoding is to introduce a second ancilla qubit and entangle it with the original qubit (Fig.~\ref{fig:schematic2}). This forms a three-qubit repetition code and allows for the correction of single-qubit readout errors --- i.e.~by taking the majority vote among the three qubits --- or the detection of two-qubit readout errors. We henceforth refer to these encodings as the (2,1) and (3,1) codes, where the notation $(n,k)$ indicates that $n$ physical qubits are required to encode $k$ logical qubits. By design, these encodings offer substantial protection against readout errors; nevertheless, they remain susceptible to certain gate errors that occur during the encoding circuit. For example, the single \textsc{cnot}~gate in the two-qubit encoding may lead to a correlated bit-flip error on \emph{both} of the qubits, resulting in a spurious measurement outcome despite error detection. In general, if the average two-qubit error rate is $\epsilon$, one expects an effective readout error rate of $q_\textrm{eff} \approx \alpha \epsilon$, where $\alpha$ is an order-one constant that depends on the specific protocol (e.g.~two-qubit vs.~three-qubit, and error detection vs.~correction), as well as on the error model for the entangling gates. We confirm this scaling for a symmetric depolarizing noise model via analytical results in Sec.~\ref{sec:results} and numerical simulations in Sec.~\ref{sec:sim}. Active readout error mitigation is thus beneficial whenever the two-qubit error rate is lower than the intrinsic readout error rate $q$. In practice, this condition is met by many existing quantum devices, as depicted in Fig.~\ref{fig:igmqrates} for Google Sycamore and a variety of IBMQ quantum computers. Generalizing our strategy, one may consider encoding circuits for implementing arbitrary \emph{classical} error correction codes. To do so, one would add ancilla qubits and entangle them with the original qubits to generate a classical code in the computational basis, i.e.~each bitstring on the original qubits is mapped to an encoded bitstring on the full set of qubits.
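To illustrate the classical decoding step for these repetition codes, the following plain-Python sketch (helper names and the toy shots are our own) converts raw measured bitstrings into logical outcomes, either by discarding inconsistent shots or by majority vote.
\begin{verbatim}
from collections import Counter

def decode_detection(shots):
    # (2,1) or (3,1) code used for error *detection*: keep a shot only if all
    # physical bits agree, and record the common value as the logical outcome.
    kept = [bits[0] for bits in shots if len(set(bits)) == 1]
    return Counter(kept), len(kept) / len(shots)   # logical counts, kept fraction

def decode_correction(shots):
    # (3,1) code used for error *correction*: majority vote, nothing discarded.
    return Counter(int(sum(bits) > len(bits) / 2) for bits in shots)

# Four measured (3,1)-encoded shots, one of which contains a readout error:
shots = [(0, 0, 0), (1, 1, 1), (0, 1, 0), (1, 1, 1)]
print(decode_detection(shots))    # detection discards the damaged shot
print(decode_correction(shots))   # correction repairs it by majority vote
\end{verbatim}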
To understand the tradeoffs of using increasingly complex codes, we compare two families of error encoding schemes: the aforementioned repetition codes and two versions of the Hamming code---the (7,4) and (8,4) codes---illustrated in Fig.~\ref{fig:schematic3}. We test the performance of these codes via numerical simulations in Sec.~\ref{sec:sim} and summarize their key differences in Table \ref{tab:Tab1}. In particular, we find that both types of codes offer comparable levels of error mitigation, and the more important factor for determining the effective error rate is whether error detection or error correction is performed. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Encoding} & \multirow{2}{*}{$(n,k)$ } & \multicolumn{1}{c|}{Det.~or} & \multicolumn{2}{c|}{Eff.~error rate} & \multicolumn{1}{c}{Discarded} \\ & & cor.? & $q$-dependence & $\alpha$ & measurements? \\ \hline \makecell{Repetition\\code} & \makecell{(2,1)\\(3,1)\\(3,1)} & \makecell{det.\\det.\\cor.} & \makecell{$\sim q^2$\\$\sim q^3$\\$\sim q^2$} & \makecell{$1/4$\\$1/4$\\$3/4$} & \makecell{Yes\\Yes\\No} \\ \hline \makecell{Hamming\\code} & \makecell{(7,4)\\(7,4)\\(8,4)\\(8,4)} & \makecell{det.\\cor.\\det.\\hybrid} & \makecell{$\sim q^3$\\$\sim q^2$\\$\sim q^4$\\$\sim q^3$} & \makecell{$1/4$\\$7/8$\\$1/4$\\$1/4$} & \makecell{Yes\\No\\Yes\\Yes} \\ \hline \hline \end{tabular} \caption{Summary of the encoding schemes presented in this work. The notation $(n,k)$ indicates that $n$ physical qubits are required to encode $k$ logical qubits. The effective error rate $Q_{\textrm{eff}}$ [defined in Eq.~\eqref{eq:Q_eff}] scales non-linearly with the nominal readout error rate $q$, and is linearly proportional to the \textsc{cnot}~error rate $\epsilon$ with a susceptibility $\alpha$ given by $Q_{\textrm{eff}} / k \approx \alpha \epsilon$. } \label{tab:Tab1} \end{table} \begin{figure} \caption{Illustration of the encoding for the Hamming (8,4) code. The first four qubits (indexed q3, q5, q6, q7) contain logical state information while the remaining qubits are the parity bits. For the Hamming (7,4) code we omit the last parity bit (q0) and all gates connected to it, i.e.~those within the black dashed box. } \label{fig:schematic3} \end{figure} \section{Analytical Results} \label{sec:results} \subsection{Two-qubit repetition code} \label{sec:two-qubit} We begin by analyzing the simplest realization of active readout error mitigation: protecting a single qubit via the two-qubit encoding depicted in Fig.~\ref{fig:schematic}. Since each qubit is encoded separately, this represents an excellent model of a full quantum circuit, ignoring cross talk. Suppose that after applying $U$, the quantum state (for the first qubit) is \begin{align} \label{eq:psi} \rho = \dyad{\psi}, \quad \ket \psi = \sqrt{p} \ket 0 + \sqrt{1-p} \ket 1\,, \end{align} where, without loss of generality, we have chosen the phase to be real and positive. Moreover, as in the previous section, suppose that readout errors are described by a symmetric bit-flip channel with $\Pr(1 \to 0)=q$ and $\Pr(0 \to 1)=q$ for all qubits. Finally, we model the dominant error channel affecting the \textsc{cnot}~gate as a two-qubit depolarizing noise channel: \begin{align} \label{eq:cnot} \rho\mapsto (1-\epsilon)\textsc{cnot}\,\rho\,\textsc{cnot} + \frac{\epsilon}{4}I_4\,, \end{align} where $I_4$ is the identity matrix. Note that this last assumption, while widely used in theoretical analysis, is not essential to our qualitative results \cite{error_model}. 
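Before deriving the asymptotic expressions, the following self-contained \texttt{numpy} sketch (our own construction, not the code used for the figures below) evaluates the (2,1) encoding exactly under this toy error model and numerically recovers the effective error rate obtained in the next few paragraphs.
\begin{verbatim}
import numpy as np

def two_qubit_encoded_distribution(p, q, eps):
    # Exact output distribution of the (2,1) encoding under the toy model:
    # a CNOT, the depolarizing channel of Eq. (eq:cnot) with strength eps,
    # then independent symmetric bit-flip readout errors of probability q.
    psi = np.array([np.sqrt(p), np.sqrt(1 - p)])             # sqrt(p)|0> + sqrt(1-p)|1>
    rho = np.kron(np.outer(psi, psi), np.diag([1.0, 0.0]))   # data qubit (x) ancilla |0>
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
    rho = (1 - eps) * cnot @ rho @ cnot + eps / 4 * np.eye(4)
    r1 = np.array([[1 - q, q], [q, 1 - q]])                  # single-qubit confusion matrix
    return np.kron(r1, r1) @ np.real(np.diag(rho))           # measured frequencies m_i

p, q, eps = 0.3, 0.05, 0.01
m = two_qubit_encoded_distribution(p, q, eps)
m_bar0 = m[0] / (m[0] + m[3])                                # post-select on 00 and 11
q_eff = (m_bar0 - p) / (1 - 2 * p)
print(q_eff, eps / 4 + q**2)   # ~0.0055 vs 0.0050: they agree up to O(q*eps) terms
\end{verbatim}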
Based on this setup, we can compare the rate of readout errors with and without the encoding. In the nominal case with no encoding, the measurement frequencies are given by \begin{align} \label{eq:nominal} \begin{split} m_0 &= p + (1-2p) q \\ m_1 &= (1-p) - (1-2p)q \end{split} \end{align} where $m_i = \textrm{Tr}\left[ \rho \dyad i \right ]$. For the encoded circuit (bottom of Fig.~\ref{fig:schematic}), we instead have four output states with frequencies \begin{align} \begin{split} m_{00} &\approx p (1-2q-\epsilon)+q^2+ \frac \epsilon 4\\ m_{01} &\approx q - q^2 + \frac \epsilon 4 \\ m_{10} &\approx q - q^2 + \frac \epsilon 4 \\ m_{11} &\approx (1-p) (1-2q-\epsilon)+q^2+ \frac \epsilon 4 , \end{split} \end{align} where we have neglected $\mathcal{O}(\epsilon q)$ terms and higher. In post-selection, we keep only the two symmetric states, $\ket{\bar 0} = \ket{00}$ and $\ket{\bar 1} = \ket{11}$, whose normalized frequencies are \begin{align} \begin{split} \label{eq:two_qubit_Q} m_{\bar 0} &= \frac{m_{00}}{m_{00}+m_{11}} \approx p+\left(\frac{\epsilon}4+q^2\right) (1-2p) \\ m_{\bar 1} &= \frac{m_{11}}{m_{00}+m_{11}} \approx (1-p)-\left(\frac{\epsilon}4+q^2\right) (1-2p). \end{split} \end{align} Comparing to the nominal case (Eq.~\ref{eq:nominal}), the final distribution is characterized by an effective error rate, $q_\textrm{eff} \approx \epsilon/4+q^2$, which is independent of $p$. Thus, for $q^2 \ll \epsilon$, the encoding effectively lowers the readout error rate by a factor of $\sim 4 q/\epsilon$. Physically, this leading order dependence on $\epsilon$ may be understood through a simple error analysis. From a stochastic perspective, the depolarizing channel corresponds to introducing a random two-qubit Pauli error, each with probability $\epsilon/16$. There are 4 errors which could lead to a undetected measurement error, i.e.~\textsc{XX, XY, YX}, and \textsc{YY}; thus, the total effective error rate is $\epsilon/4$. Additional mechanisms that result in an undetected error correspond to ($i$) two independent measurement errors or ($ii$) a \textsc{cnot}\ error followed by a measurement error. However, these are subleading with probability $\mathcal{O}(q^2)$ and $\mathcal{O}(q\epsilon)$, respectively. A few additional remarks are in order. First, as discussed in the previous section, it is important to consider not only the average bias in the measurment outcome but also the number of repetitions required to achieve a desired precision. For the unencoded circuit, the required repetitions is directly proportional to the variance of the measurement outcome distribution (Eq.~\ref{eq:variance}). However, in the two-qubit encoding, a fraction of the repetitions are discarded (i.e.~those in the $01$ and $10$ states), thereby contributing to the total repetitions but not to the output distribution. Thus, there is a competition between the improved bias offered by the encoding and the repetition overhead of the discarded states; whether or not this competition favors the encoding depends quantitatively on the desired observable, error model, and initial state. Second, it is instructive to consider the case of non-symmetric readout errors. If we directly generalize our scheme to a non-symmetric error channel, $\Pr(1 \to 0) = q(1+\kappa)$ and $\Pr(0 \to 1) = q(1-\kappa)$, we find that the normalized probabilities, $m_{\bar 0}$ and $m_{\bar 1}$, exhibit a systematic bias compared to the true probabilities at linear order in $q|\kappa|$. 
Notably, this bias does not result from measurement errors that flip between $\ket {00} \leftrightarrow \ket{11}$, but rather from the fact that one of these states may be more likely to decay into the non-symmetric sector and therefore be discarded during post-selection \cite{asymm_noise}. An \emph{ad hoc} solution to eliminate this bias is to perform error detection as before but, in a random half of the repetitions, to apply an extra \textsc{X} gate immediately before the measurement. In doing so, the resulting distribution is \emph{rebalanced}, or rendered symmetric with respect to the initial state \cite{Hicks_2021}. The system can then be modeled as having an effective measurement error rate, $\Pr(1 \to 0) = \Pr(0 \to 1) = 1/2[\Pr(1 \to 0) + \Pr(0 \to 1)] = q$, regardless of the asymmetry parameter $\kappa$. Of course, this scheme assumes that the X gate itself introduces negligible errors compared to readout errors or two-qubit errors, which is a reasonable assumption for many experimental platforms. As we discuss next, an alternative approach to mitigate the effects of asymmetric noise---as well as eliminate the discarded measurements---is to perform error \emph{correction} on the encoded qubit. \subsection{Three-qubit repetition code} \label{sec:three-qubit} By adding a third qubit to the encoded circuit, as shown in Fig.~\ref{fig:schematic2}, one can realize the repetition (3,1) code whose output states without errors are $000$ and $111$. With this code, two bit-flip errors can be detected, while single bit-flip errors can be corrected by taking the ``majority vote'' among the three qubits. To understand the tradeoffs between error detection and error correction, we analyze the three-qubit encoding circuit under the previous error model, i.e.~symmetric readout errors ($\Pr(1 \to 0) = \Pr(0 \to 1) = q$) and two-qubit depolarizing errors (at rate $\epsilon$) following each \textsc{cnot}~gate. We begin by computing the final measurement distribution up to order $\mathcal{O}(q\epsilon)$ and $\mathcal{O}(q^3)$: \begin{align} \begin{split} m_{000} &\approx \left(1-3q+3q^2-\frac {7\epsilon} 4 \right)p+ \frac \epsilon 4 \\ m_{001} &\approx \frac \epsilon 4 p + pq+(1-3p)q^2 \\ m_{010} &\approx \frac \epsilon 4 (2-p) + pq +(1-3p)q^2\\ m_{011} &\approx \frac \epsilon 4 (1-p) + (1-p)q-(2-3p)q^2 \\ m_{100} &\approx \frac \epsilon 4 p + pq+(1-3p)q^2 \\ m_{101} &\approx \frac \epsilon 4 (1+p) + (1-p)q -(2-3p)q^2\\ m_{110} &\approx \frac \epsilon 4 (1-p) + (1-p)q -(2-3p)q^2\\ m_{111} &\approx \left(1-3q+3q^2-\frac {7\epsilon} 4 \right)(1-p)+ \frac \epsilon 4 \end{split} \end{align} For error detection, we proceed in analogy with the two-qubit code by computing the normalized frequencies in the logical code subspace: \begin{align} \begin{split} \label{eq:three_qubit_Q} m^{\textrm{det}}_{\bar 0} &= \frac{m_{000}}{m_{000}+m_{111}} \approx p+\frac{\epsilon}4(1-2p)\\ m^{\textrm{det}}_{\bar 1} &= \frac{m_{111}}{m_{000}+m_{111}} \approx (1-p)-\frac \epsilon 4 (1-2p). \end{split} \end{align} The effective error rate is thus $q^{\textrm{det}}_\textrm{eff} = \epsilon/4 + \mathcal{O}(q^3)$ and exhibits the same leading order dependence on $\epsilon$ as the two-qubit code. This implies that the three-qubit encoding is subject to failure under the same number of individual gate errors, as we confirm by direct examination of the encoding circuit (see Appendix \ref{app:gate}). In contrast, the $q^2$ terms have vanished since three independent readout errors are required to produce an undetectable logical error.
We next simulate error correction by assigning the subspace of states with two or more 0's as $\ket {\bar 0}$, and the remaining states as $\ket {\bar 1}$. This results in a final output distribution \begin{align} \label{eq:3-qubit correction} \begin{split} m^{\textrm{cor}}_{\bar 0} &= m_{000}+m_{001}+m_{010}+m_{100} \\ &\approx p+3\left(\frac{\epsilon}4 +q^2\right) (1-2p) \\ m^{\textrm{cor}}_{\bar 1} &= m_{111}+m_{110}+m_{101}+m_{011} \\ &\approx (1-p)-3\left(\frac{\epsilon}4 +q^2\right)(1-2p), \end{split} \end{align} characterized by an effective error rate, $q^{\textrm{cor}}_\textrm{eff} \approx 3(\epsilon/4+q^2)$. Notably, the dependence on $\epsilon$ is enhanced by a factor of 3 compared to error detection with either the 2-qubit or 3-qubit code. This difference arises from the increased number of gate errors that can lead to a falsely \emph{corrected} output state. For example, a single bit-flip on the first qubit after the first \textsc{cnot}~gate can \emph{propagate} across the second \textsc{cnot}~gate into a two-qubit error, thereby corrupting the measurement outcome (see Appendix \ref{app:gate}). Despite the less favorable effective error rate, we emphasize that error correction offers two main advantages compared to error detection, owing to the fact that none of the measurements are discarded during post-processing. First, the total number of events kept is larger, leading to reduced statistical uncertainties (see Appendix \ref{app:var}). More importantly, contrary to error detection schemes, error correction is intrinsically robust against asymmetries in the readout error channels. For example, if the first qubit were subject to an asymmetric readout channel, $\Pr(1 \to 0) = q(1+\kappa)$ and $\Pr(0 \to 1) = q(1-\kappa)$ with $\kappa > 0$, then population would be transferred from $\ket {000} \rightarrow \ket{100}$ more frequently than from $\ket {111} \rightarrow \ket{011}$, leading to a bias in error detection schemes. But neither of these processes affects the sums, $m_{\bar 0}$ and $m_{\bar 1}$, at leading order in $q$, implying that this bias is eliminated in error correction. \begin{figure*} \caption{ Effective readout error, $q_\textrm{eff} \label{fig:avg_error} \end{figure*} \section{Simulations} \label{sec:sim} \subsection{Repetition codes} To demonstrate active error mitigation with realistic error rates, we perform numerical simulations on the encoded circuits shown in Fig.~\ref{fig:schematic} and Fig.~\ref{fig:schematic2}. As before, we consider two types of errors: ($i$) two-qubit depolarizing errors with rate $\epsilon$ applied after each \textsc{cnot}~gate (Eq.~\ref{eq:cnot}), and ($ii$) symmetric readout errors on each qubit with rate $q$. We implement the noisy circuits using the software package \texttt{Cirq}~\cite{cirq} and measure the outcome distribution directly as $m_i = \textrm{Tr}\left[ \rho \dyad i \right ]$. For the two-qubit code, we perform error \emph{detection} by discarding the probability of measuring the qubits in the odd parity subspace. For the three-qubit code, we perform error \emph{correction} on the outcome distribution via majority vote (see Eq.~\ref{eq:3-qubit correction}).
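A minimal sketch of such a simulation for the (3,1) code with error correction is shown below; it is written against the public \texttt{Cirq} API as we understand it, and the channel placement, qubit ordering, and conversion between depolarizing conventions are our own reading of the setup above rather than the exact script used for the figures.
\begin{verbatim}
import numpy as np
import cirq

def three_qubit_corrected_probs(p, q, eps):
    # (3,1) encoding under the error model above: a two-qubit depolarizing
    # channel after each CNOT and a symmetric bit-flip channel before readout.
    # Cirq's depolarize excludes the identity term, so 15*eps/16 reproduces
    # the (1-eps)*rho + eps*I/4 convention of Eq. (eq:cnot).
    q0, q1, q2 = cirq.LineQubit.range(3)
    theta = 2 * np.arccos(np.sqrt(p))        # prepares sqrt(p)|0> + sqrt(1-p)|1>
    circuit = cirq.Circuit(
        cirq.ry(theta).on(q0),
        cirq.CNOT(q0, q1), cirq.depolarize(15 * eps / 16, n_qubits=2).on(q0, q1),
        cirq.CNOT(q0, q2), cirq.depolarize(15 * eps / 16, n_qubits=2).on(q0, q2),
        cirq.bit_flip(q).on(q0), cirq.bit_flip(q).on(q1), cirq.bit_flip(q).on(q2),
    )
    rho = cirq.DensityMatrixSimulator().simulate(circuit).final_density_matrix
    probs = np.real(np.diag(rho))            # outcome distribution m_i
    m_bar0 = sum(probs[i] for i in range(8) if bin(i).count("1") <= 1)
    return m_bar0, 1.0 - m_bar0              # majority-vote logical frequencies

m_bar0, _ = three_qubit_corrected_probs(p=0.3, q=0.05, eps=0.01)
print((m_bar0 - 0.3) / (1 - 2 * 0.3))        # ~0.015, i.e. about 3*(eps/4 + q**2)
\end{verbatim}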
Subsequently, we characterize the effective error rate $q_\textrm{eff}$ using \begin{align} \label{eq:q_eff} \begin{split} m_{\bar 0} &= p + (1-2p) q_\textrm{eff} \\ m_{\bar 1} &= (1-p) - (1-2p)q_\textrm{eff} \end{split} \end{align} where $m_{\bar 0/ \bar 1}$ are the error-mitigated probabilities, and $p$ is the initial state parameter defined in Eq.~\ref{eq:psi}. In Fig.~\ref{fig:avg_error}, we depict the effective error rate for the two encodings as a function of $\epsilon$ and $q$. In both cases, for $\epsilon \gg q^2$, $q_\textrm{eff}$ scales linearly with $\epsilon$ and has a negligible dependence on $q$. This is consistent with our leading order analysis and indicates that higher order effects arise primarily from two-qubit readout errors with probability $\mathcal{O}(q^2)$. For a quantitative comparison with the unencoded readout error rate, we next plot the ratio $q_{\textrm{eff}}/q$ in Fig.~\ref{fig:avg_error}(b). For low error rates, this ratio crosses unity at $\epsilon=4q$ for error detection and $\epsilon=4q/3$ for error correction, in agreement with our predictions; small deviations from these trends become noticeable as $\epsilon, q \rightarrow 1$. Note that although the simulations were performed with a specific value of $p$, the effective error rate $q_\textrm{eff}$ is in fact independent of the initial state. This is a consequence of using \emph{symmetric} noise channels for the readout and gate error and would change for a state-dependent noise model, e.g.~for readout errors where $\Pr(1 \to 0) \neq \Pr(0 \to 1)$. \subsection{Comparison with the Hamming code} We now compare the performance of the repetition codes to a more complex encoding circuit based on the Hamming (7,4) code, which encodes 4 logical qubits into 7 physical qubits (Fig.~\ref{fig:schematic3}). The Hamming code is thus a more space-efficient encoding for 4 logical qubits than either repetition code; in fact, even larger Hamming codes reduce the ratio of physical to logical qubits further, quickly approaching an optimal ratio of one (see Appendix \ref{app:Hamming}). Note, however, that implementing a Hamming code generally requires each logical qubit to be entangled with several ancillary qubits, which requires \textsc{Swap} gates on systems with limited connectivity. Like the three-qubit repetition code, the 7-qubit Hamming code enables the correction of single-qubit errors and the detection of two-qubit errors. This similarity arises because both codes have a \emph{code distance} of three, which refers to the minimum number of local bit-flips (or Hamming distance) that connects two logical states. Indeed, it is well known that a classical code with code distance $d$ enables the detection of up to $d-1$ independent errors and the correction of up to $\lfloor (d-1)/2 \rfloor$ errors. A circuit for implementing the Hamming (7,4) code is shown in Fig.~\ref{fig:schematic3}. The first four qubits contain the logical state, while the remaining three qubits are ancilla qubits required for the encoding. Without errors, the state of each ancilla qubit is equal to the parity of a certain set of logical qubits. In contrast, if a single bit-flip error occurs (on either the logical bits or the ancilla bits), one or more of these conditions will be violated.
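A plain-Python sketch of this check-and-correct step is given below; it uses the parity-check matrix written out in Eq.~\eqref{eq:parity check} immediately after this block, and the qubit ordering and helper names are our own choices.
\begin{verbatim}
import numpy as np

# Columns of H (Eq. (eq:parity check)) correspond to the physical qubits in
# Fig. 3, i.e. to the bit indices [3, 5, 6, 7, 1, 2, 4].
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
BIT_INDEX = [3, 5, 6, 7, 1, 2, 4]

def decode_hamming74(bits, correct=True):
    # Returns (logical_bits, accepted). A nonzero syndrome is read as the index
    # of a single flipped bit and repaired when correct=True, or the shot is
    # discarded when correct=False; two or more flips would be mis-corrected.
    bits = np.array(bits) % 2
    syndrome = H @ bits % 2
    l = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])
    if l != 0:
        if not correct:
            return None, False
        bits[BIT_INDEX.index(l)] ^= 1          # flip the suspected bit back
    return bits[:4].tolist(), True             # first four entries are logical

# The codeword for logical 1011 is [1,0,1,1,0,1,0]; flipping its third entry
# mimics a readout error, which the decoder identifies and repairs:
print(decode_hamming74([1, 0, 0, 1, 0, 1, 0]))   # ([1, 0, 1, 1], True)
\end{verbatim}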
A convenient way to check for errors is by multiplying the output state $s$ by the parity check matrix: \begin{align} \label{eq:parity check} H = \begin{pmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \\ \end{pmatrix}_{[7,4]}, \end{align} where the ordering matches the physical layout of the first seven qubits in Fig.~\ref{fig:schematic3}. Each element for which the bitwise product $Hs$ is equal to 1 (modulo 2) indicates that an error has occurred on either the parity bit or the associated set of logical qubits. Based on these checks, one can uniquely determine the location of a \emph{single} bit-flip error or detect the occurrence of up to two bit-flip errors (see Appendix \ref{app:Hamming}). \begin{figure} \caption{Total readout error rate, $Q_{\textrm{eff} \label{fig:err_scaling} \end{figure} To compare the performance of the Hamming (7,4) code to the repetition codes, we perform noisy simulations based on each encoding and apply either error correction or error detection to the output distribution. We estimate the total error rate after error mitigation by computing \begin{equation}\label{eq:Q_eff} Q_{\textrm{eff}} = 1- \frac{\sum_i R_{ii}}{2^k}, \end{equation} where $R_{ii}$ are the diagonal elements of the response matrix, i.e.~the probabilities of measuring the correct state. This generalizes our previous error rate $q_\textrm{eff}$ (Eq.~\ref{eq:q_eff}); indeed, $Q_{\textrm{eff}} = q_\textrm{eff}$ in the case of a single qubit. We begin by simulating the encoded circuits with readout errors only, i.e.~taking $\epsilon = 0$. As shown in Fig.~\ref{fig:err_scaling}(a), the total error rate for both the Hamming (7,4) code and the repetition (3,1) code scales as $\sim q^2$ when using error correction and $\sim q^3$ when using error detection. These trends follow directly from having a code distance of $d=3$, which implies that 2 or 3 independent bit-flip errors are required for a logical error in the two respective cases. Similarly, the total error rate for the two-qubit repetition code scales as $\sim q^2$, since its code distance is $d=2$. We next isolate the effect of gate errors by plotting the error rate as a function of $\epsilon$ [Fig. \ref{fig:err_scaling}(b)]. In contrast to the dependence on readout errors, all encoding schemes lead to the same general linear scaling, i.e.~$Q_{\textrm{eff}} \approx 4 \alpha \epsilon$ (we include a factor of 4 for the number of logical qubits). This indicates that a \emph{single} gate error during the encoding can propagate into logical errors on the measurement outcomes, as we had shown previously for the repetition codes. More specifically, for all cases of error detection, we observe the same prefactor $\alpha = 1/4$; whereas, for error correction, $\alpha$ is an order-one factor larger. In fact, we previously derived $\alpha = 3/4$ for a single qubit encoded with the (3,1) code (see Eq.~\eqref{eq:3-qubit correction}). Similarly, we can determine $\alpha = 7/8$ for the Hamming (7,4) code by counting the relevant error channels (see Appendix \ref{app:gate}). \begin{figure} \caption{Comparison of the effective error rate $Q_{\textrm{eff} \label{fig:code-comparison} \end{figure} We now probe the effects of both readout and gate errors through two direct comparisons. First, we compare the 7-qubit encoding circuit using both error detection and error correction [Fig.~\ref{fig:code-comparison}(a)].
For the most part, their total error rates differ by a factor of $\sim 3$ due to their respective scalings with $\epsilon$; this ratio increases further when $\epsilon \ll q$, owing to the favorable scaling with $q$ offered by error detection. Second, we compare error detection using the 7-qubit encoding and the 2-qubit encoding [Fig.~\ref{fig:code-comparison}(a)]. Analogously to the previous case, the error rates are nearly equivalent for the two encodings except when $\epsilon \ll q$, in which case the scaling with $q$ becomes relevant. Finally, in Appendix \ref{app:8-hamming}, we analyze an extended (8,4) version of the Hamming code which includes an extra ancilla qubit compared to the (7,4) code and has an increased code distance of $d=4$. Interestingly, there are two natural circuits that lead to the same classical encoding and differ in the number of entangling gates. While one would naively expect the circuit with the fewest gates to have the lowest error rate, we find the opposite to be true. This highlights the importance of designing encoding circuits that are robust not only to readout errors, but also to gate errors that occur during the encoding circuit. \section{Experimental demonstration} \label{eq:ibmq} This section demonstrates the experimental performance of the repetition code on the IBMQ Mumbai quantum computer. This computer has 27 qubits total, arranged in the pattern depicted in Fig.~\ref{fig:ibmqmanhattan}. To demonstrate the active readout error correction protocol, we construct a 5-qubit sub-computer consisting of the five filled black circles in Fig.~\ref{fig:ibmqmanhattan} (corresponding to qubits 12--16 in the computer's labeling scheme). Due to the adjacency map of connected qubits, we are unable to encode all qubits without adding extra \textsc{Swap} gates. Instead, the first (top right filled black circle in Fig.~\ref{fig:ibmqmanhattan}), second, fourth and fifth (counter-clockwise from the first) are encoded with the (3,1) repetition code to improve the readout errors. \begin{figure} \caption{A diagram of the IBMQ Mumbai computer layout, where circles represent qubits and links represent qubits that are connected. The qubits used for the measurement presented in Fig.~\ref{fig:igmqresults} \label{fig:ibmqmanhattan} \end{figure} In Fig.~\ref{fig:effectiverate}, the effective readout error rates for the four qubits are compared under three different scenarios. First, we measure the readout error rate with the encoding circuit but \emph{without} performing error mitigation (i.e.~we discard the measurements of the ancilla bits). As expected, this leads to an increase in the readout error rate relative to that of the nominal circuit. Indeed, the observed increase of $\sim 1\%$ is consistent with the independently measured rate of depolarizing errors (Fig.~\ref{fig:ibmqmanhattan}). Second, we measure the effective error rate after performing either error detection or correction. With either scheme, we observe a substantial improvement in the error rate, e.g.~dropping by a factor of five in the first two qubits compared to the unencoded qubits. This indicates that the suppression of readout errors due to the encoding outweighs the errors introduced by the entangling gates and is consistent with the relatively large readout error rates for these qubits (Fig.~\ref{fig:ibmqmanhattan}). A global picture of the subcomputer performance is illustrated in Fig.~\ref{fig:igmqresults}.
Even though only four of the five qubits are encoded, the probability for a prepared state to be correctly measured increases from $\sim$75\% to more than 90\% on average. On other quantum computers, the number and location of qubits that may be encoded for readout mitigation will be determined by the device connectivity. This is a relatively minor concern for problems where only a small number of qubits are read out at a time (e.g.~when measuring few-body correlation functions), and thus only a few additional qubits are required for the readout encoding. These ancilla qubits may be spare nearby qubits on the device or, for greater space efficiency, they may be qubits that are initially involved in the computation and repurposed directly before readout, i.e.~by resetting to the ground state. A more challenging situation occurs when measuring \emph{all} the qubits simultaneously is required, e.g.~as in random circuit sampling. Indeed, with current two-dimensional grid architectures, applying our scheme to all qubits is only well suited to computational tasks performed on a one-dimensional (or quasi-one-dimensional) subset of qubits, such that a neighboring row (or few rows) of qubits may be used for the readout encoding. Looking forward, we envision that improvements in hardware connectivity will overcome this limitation. For example, one may consider designing a two-dimensional device with two sublattices of qubits: one sublattice containing the computational qubits and the other sublattice containing ancilla qubits. The computational qubits would be connected to each other as in current devices but also to a single ancilla qubit in the opposite sublattice for implementing the (2,1) code. Notably, as the ancilla qubits would only be involved in the readout operations, they would not affect the implementation or performance of the main computational task (assuming negligible cross-talk). \begin{figure} \caption{The effective readout error for the first (top right filled black circle in Fig.~\ref{fig:ibmqmanhattan} \label{fig:effectiverate} \end{figure} \begin{figure*} \caption{The diagonal elements of the response matrix measured on IBMQ Manhattan for an effective five logical qubit setup (see Fig.~\ref{fig:ibmqmanhattan} \label{fig:igmqresults} \end{figure*} \section{Conclusions} In this work, we proposed a scheme for active readout error mitigation based on encoding the output state of a quantum circuit via a classical error correcting code. We showed that this approach generally provides significant readout improvement on devices whose bare readout error rate is comparable to or larger than the error rate of entangling gates. More specifically, we introduced two forms of encoding (the repetition code and the Hamming code) and analyzed the tradeoffs between error detection and error correction. Which scheme gives the optimal results depends on the criteria that are most important to satisfy. Error correction schemes are favorable if a large asymmetry in the readout error exists and it cannot be mitigated through rebalancing techniques. On the other hand, error detection schemes have a smaller overall effective readout error. Among the error detection schemes, one should choose a specific encoding depending on the relative size of readout and \textsc{cnot}\ gate errors, as well as on how many ancillary qubits are available. If readout errors dominate over gate errors, then codes with a code distance of 3, namely the (7,4) Hamming code and the (3,1) repetition code, are preferable.
On the other hand, if gate errors dominate, then all detection codes are more or less equal. Finally, the (7,4) Hamming code requires the least amount of ancillary qubits for a given number of physical qubits (however the encoding circuit involves non-local entangling gates). With any of these implementations, active readout mitigation offers a few general advantages compared to passive error mitigation techniques. First, active error mitigation does not require characterizing a device's noise parameters (e.g.~response matrix), which can involve an exponential overhead and be subject to temporal drifts; nor does it suffer from pathologies that can occur when applying post-processing (e.g.~matrix inversion) on specific error models. Second, by removing errors on a shot-by-shot basis, our approach is more applicable to tasks where sampling individual states is desired and, for general tasks, can be more effective at reducing not only the average measurement bias but also its variance. Crucially, active and passive strategies can also be combined to maximize the readout fidelity; i.e.~by actively encoding the readout qubits and implementing error detection / correction, followed by post-processing the effective output distribution using passive correction techniques. Lastly, in contrast to active \emph{gate} error correction schemes, active readout error correction is fully compatible with near-term quantum hardware. Indeed, the protocol we have presented in this paper only requires local encoding whereby additional qubits need only have a small number of connections with entangling gates. Active readout error mitigation thus provides a practical intermediate step on the long path towards realizing full quantum error correction. \emph{Note added:} After this work had been completed, we became aware of a recent study \cite{gunther2021improving} by G\"unther et.~al.~which also introduces active readout correction. Aside from the core concept, the two papers are complementary in their analysis and proposed implementations. \label{sec:conclusions} \section*{Code and Data} The code and data for this paper can be found at \url{https://github.com/LBNL-HEP-QIS/activereadouterrors}. \begin{acknowledgments} We would like to thank Marat Freytsis, Maurice Garcia-Sciveres, Maxwell Block, Francisco Machado and Jarrod McClean for useful discussions and feedback on the manuscript. We thank Vince Pascuzzi for his IBMQ device data script. This work is supported by the U.S. Department of Energy, Office of Science under contract DE-AC02-05CH11231. In particular, support comes from Quantum Information Science Enabled Discovery (QuantISED) for High Energy Physics (KA2401032) and the Office of Advanced Scientific Computing Research (ASCR) through the Accelerated Research for Quantum Computing Program. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. \end{acknowledgments} \appendix \section{Readout variance with error mitigation}\label{app:var} While we have mostly focused on the bias of quantum observables in the presence of readout errors, it is also important to consider the change in variance under active error mitigation, as this determines the number of measurements required to estimate an observable to a desired degree of precision. To this end, let us revisit the single qubit example presented in Section \ref{sec:two-qubit}. 
We recall that, without any error mitigation, the distribution of measurements characterized by the expected frequencies $\mathbb{E}[m_0] = 1- \mathbb{E}[m_1] = \lambda$ for $\lambda=p+q(1-2p)$ and the variance $\textrm{Var}[m_0] = \textrm{Var}[m_1] = \lambda(1-\lambda)$. In practice, we care about the estimator of the frequencies after $N$ measurements, e.g.~$\theta_0 = N_0 / N$ where $N_0$ is the number of measurements in the 0 state. The variance of the estimator is given by \begin{align} \begin{split} \label{var_theta0} \textrm{Var}[\theta_0] &= \textrm{Var}[m_0]/N \\ &= \frac 1 N \lambda(1-\lambda) \\ &= \frac 1 N \left [p(1-p)+q(1-2p)^2\right] + \mathcal{O}(q^2), \end{split} \end{align} which increases as a function of the error rate. Let us now compare to case of active readout error \emph{detection}. As we found in the main text, the bias in the distribution of the two logical states, $\bar 0$ and $\bar 1$, is renormalized by the effective error rate: $\mathbb{E}[m_{\bar 0}] = 1- \mathbb{E}[m_{\bar 1}] = \lambda_{\textrm{eff}}$ with $\lambda_\textrm{eff} = p+q_\textrm{eff}(1-2p) + \mathcal{O}(q^2)$. However, the estimator of these frequencies depends on two competing factors: the change in bias and the total number of measurements that are kept during post-selection. Specifically, we have $\theta_{\bar 0} = N_{00} / (N_{00}+N_{11}) = N_{00}/(N(1-2q))$. This leads to a variance \begin{align} \begin{split} \label{var_theta00} \textrm{Var}[\theta_{\bar 0}] &\approx \frac 1 N \lambda_\textrm{eff}(1-\lambda_\textrm{eff})(1+2q) \\ &\approx \frac 1 N \left [p(1-p)+q_\textrm{eff}(1-2p)^2 + 2qp(1-p) \right], \end{split} \end{align} where we have dropped $\mathcal{O}(q^2,q_\textrm{eff}^2)$ terms. Comparing with Eq.~\eqref{var_theta0}, we find that whether or not error detection improves the estimator variance depends on both the state parameter $p$ and the ratio $q_\textrm{eff} / q$. In contrast, when error \emph{correction} is performed, no measurements are discarded. Hence, the variance of the estimator for error correction depends only on $q_\textrm{eff}$: \begin{align} \begin{split} \label{var_cor} \textrm{Var}[\theta_{\bar 0}] &= \frac 1 N \lambda_\textrm{eff}(1-\lambda_\textrm{eff}) \\ &= \frac 1 N \left [p(1-p)+q_\textrm{eff}(1-2p)^2\right] + \mathcal{O}(q_\textrm{eff}^2). \end{split} \end{align} This is strictly lower than unmitigated variance [Eq.~\eqref{var_theta0}] whenever $q_\textrm{eff} < q$ and, for certain parameters, can be lower than variance with error detection [Eq.~\eqref{var_theta00}]. Finally, we note that the change in variance with or without error mitigation is generally small compared to intrinsic quantum uncertainty, i.e.~$p(1-p)$. For this reason, the more significant benefits offered by active error mitigation may be its application to problems where passive readout mitigation methods are unsuitable, e.g.~when a response matrix cannot be accurately determined or the output of individual shots is required. \section{Review of Hamming codes}\label{app:Hamming} In this appendix, we briefly describe the family of classical error correction codes known as Hamming codes. The 7-qubit code considered in this work is a member of this family; in fact, the three-qubit repetition code is a member as well (though, for clarity, we have chosen not to use this terminology). Each Hamming code has a code distance of $d=3$, implying that a single bit-flip error can be corrected and up to two bit-flip errors can be detected. 
Specifically, for any integer $r \ge 2$, a Hamming code can be constructed to encode $2^r-r-1$ logical bits into $2^r - 1$ physical bits ($r=2$ is the 3-qubit repetition code; $r=3$ is the 7-qubit code). Thus, as $r$ increases, the ratio of physical to logical bits quickly approaches one. The general procedure for constructing a Hamming code is as follows. One begins by enumerating the $2^r - 1$ physical bits with indices $l = 1,2,\ldots,2^r - 1$ and assigning the bits with index $2^i$ for $i = 0, 1,\ldots,r-1$ as parity bits and the rest as logical bits. The role of each parity bit is to store the parity of a particular set of logical bits. In particular, for the $i$th parity bit, this set includes all logical bits whose index $l$ as a binary string satisfies $l_i = 1$ (i.e.~the $i$th bit of $l$ is equal to 1). For example, in the (7,4) code, the $l=1$ parity bit stores the parity of the logical bits with indices 3, 5, and 7. To utilize the encoding, one checks whether the state of each parity bit matches the measured parity of the corresponding set of logical bits; this is accomplished by multiplying the measurement outcome by the parity check matrix, as shown in Eq.~\eqref{eq:parity check} for the (7,4) code. If only a single bit-flip error has occurred, the index of the error is given by the binary string whose bits are set to 1 for each parity check that is violated. For example, if the $l=1$ and $l=2$ parity checks are violated, it indicates that an error occurred on the logical bit with index $l=3$; while if only the $l=1$ parity check is violated, it indicates that this parity bit itself had an error. However, if more than one bit-flip error occurs, the errors can no longer be correctly identified (though they can still be detected for up to two bit-flip errors); indeed, attempting to perform error correction would lead to a spurious final state. Finally, we note that, for each Hamming code, one can form an extended Hamming code by adding a single extra parity bit, denoted with the label $l=0$, which stores the parity of \emph{all} the physical bits. The effect of this extra parity bit is to increase the code distance from $d=3$ to $d=4$. We provide a detailed analysis of the Hamming (8,4) code in Appendix \ref{app:8-hamming}. \section{Analysis of gate errors}\label{app:gate} A key feature of active readout error correction is that the effective readout error rate remains linearly susceptible to gate errors during the encoding circuit, i.e.~$Q_{\textrm{eff}} / k \approx \alpha \epsilon$, when $\epsilon, q \ll 1$. This is because single gate errors can lead to correlated bit-flip errors in the final measured outcomes. As we discussed in Section~\ref{sec:results}, the prefactor $\alpha$ can be derived by summing all error channels that lead to a logical error, weighted by their individual likelihood (e.g.~$\epsilon/16$ for a two-qubit symmetric depolarizing channel). Note that the definition of a logical error depends both on the encoding and also on whether one is performing error detection or correction, as in the latter case it suffices to falsely \emph{identify} an error. \begin{figure} \caption{Schematic of the single undetectable error channel for (a) the (2,1) code, and (b) the (3,1) code. The red crosses indicate the location of a bit-flip error (i.e.~a Pauli $X$- or $Y$-type error).} \label{fig:rem_error_det} \end{figure} \begin{figure} \caption{Example of an uncorrectable error for the (3,1) code.
The error occurs on a single site (top), but propagates to a second site (bottom) due to the second \textsc{cnot}~gate.} \label{fig:rem_error_cor} \end{figure} \begin{figure} \caption{Example of (a) an undetectable and (b) an uncorrectable error for the (7,4) code.} \label{fig:hamm_error} \end{figure} We previously identified the relevant error channels for the (2,1) and (3,1) codes. For the (2,1) code, they consist of correlated errors that flip both of the physical bits, i.e.~Pauli errors of the form XX, XY, YX, or YY [Fig.~\ref{fig:rem_error_det}]. The (3,1) code with error detection is directly analogous: logical errors require all of the physical bits to be flipped and are caused only by correlated errors following the first \textsc{cnot}~gate. However, for the (3,1) code with error correction, logical errors require only two out of the three qubits to be flipped; this can occur via a two-qubit error on the second \textsc{cnot}~gate or a single-qubit error after the first \textsc{cnot}~gate (i.e.~an $X$ or $Y$ error) which propagates into a two-qubit error, as depicted in Fig.~\ref{fig:rem_error_cor}. Adding together these channels, we determine the values for $\alpha$ presented in Table \ref{tab:Tab1} and confirmed numerically in Fig.~\ref{fig:err_scaling}. We now extend this logic to analyze the Hamming (7,4) code. In order to have a fully \emph{undetectable} logical error, all parity checks must remain unviolated, requiring that a bit-flip error occurs on one of the logical bits and \emph{all} of the parity bits associated with it. As with the repetition codes, this situation can only arise through a correlated two-qubit error after the first \textsc{cnot}~gate acting on one of the logical bits [Fig.~\ref{fig:hamm_error}(a)]. Indeed, such an error is equivalent to having a bit flip on the logical qubit itself \emph{before} the encoding circuit, making it clearly undetectable. On the other hand, a falsely \emph{identified} logical error requires only that a spurious combination of parity checks be violated. This can occur in two possible ways. First, a two-qubit bit-flip error can occur after \emph{any} of the \textsc{cnot}~gates (including the undetectable error mentioned above). Second, a single-qubit bit-flip error can occur after one \textsc{cnot}~gate and propagate into a multi-qubit error owing to a later \textsc{cnot}~gate [Fig.~\ref{fig:hamm_error}(b)]. For this, the single-qubit error must occur on the control bit of a subsequent \textsc{cnot}~gate. This understanding allows us to directly count the number of relevant error channels for the (7,4) code. Referring to Fig.~\ref{fig:schematic3}, there are a total of 9 \textsc{cnot}~gates which could directly lead to a two-qubit error, while there are 4 gates for which a single-qubit error could later propagate. We multiply these by the number of distinct Pauli errors per gate (4 two-qubit errors and 2 single-qubit errors), and weight them by their individual likelihood of occurring. This yields a total error rate of $Q_{\textrm{eff}} = \frac{\epsilon}{16} \left(4\cdot 9+2\cdot 4\right) = \frac{11}{4}\,\epsilon$, which agrees precisely with the numerical results presented in Fig.~\ref{fig:err_scaling}. We can further generalize this counting argument to predict the total error rate for an arbitrary Hamming code, denoted a $(2^r-1,2^r-r-1)$ code. To do so, we note that the layout of \textsc{cnot}~gates is given by the binary representation of each logical bit.
Thus, the number of \textsc{cnot}~gates connected to a logical bit is equal to the number of 1s in its binary string, and the number of gates that could lead to a propagating error is one fewer (i.e.~it excludes only the last gate). Summing over the logical bits (a logical bit whose index has $m$ ones contributes $m$ gates), the total number of \textsc{cnot}~gates is \begin{align*} \sum_{m=2}^r {r \choose m}\,m &= r\, 2^{r-1} - r, \end{align*} and the total number of propagating-error locations is \begin{align*} \sum_{m=2}^r {r \choose m}(m-1) &= r\, 2^{r-1} - r - \left(2^r - 1 - r\right) \\ &= (r-2)\, 2^{r-1} + 1. \end{align*} This yields a total error rate \begin{equation*} Q_{\textrm{eff}} = \frac \epsilon {16}\left [4 \left(r\, 2^{r-1} - r\right) + 2 \left((r-2)\, 2^{r-1} + 1\right) \right], \end{equation*} which, for $r \gg 1$, we can approximate as \begin{equation*} Q_{\textrm{eff}} \approx \frac{\epsilon} {16} \left(4\,r\,2^{r-1}+2\,r\,2^{r-1} \right). \end{equation*} Recalling that the number of logical qubits is $k\approx 2^r$, we can express the error rate as $Q_{\textrm{eff}} /k \approx \alpha \epsilon$ with $\alpha \approx 3r/16 \approx (3/16) \log_2 n$. Incidentally, we could also have arrived at this result by realizing that the number of \textsc{cnot}~gates per logical qubit is sharply peaked at $r/2$, corresponding to the typical number of 1s in a random binary string. We conclude that, when performing error correction, the error rate per qubit increases logarithmically with the total number of qubits. This is in contrast to the \emph{constant} error rate per qubit, i.e.~$\alpha = 1/4$, when performing error detection. While the exact error rates depend on our model of 2-qubit depolarizing noise, we expect these trends to hold for arbitrary noise models. \section{Hamming (8,4) code}\label{app:8-hamming} \begin{figure} \caption{Total readout error rate, $Q_{\textrm{eff}}$, for the Hamming (8,4) code.} \label{fig:err_scaling2} \end{figure} As mentioned in Appendix \ref{app:Hamming}, an extended version of each Hamming code may be obtained by including an additional parity bit that stores the parity of all the physical bits. This increases the code distance from $d=3$ to $d=4$, enabling the detection of up to three bit-flip errors and the correction of only a single bit-flip error, as in the non-extended version. Notably, this implies that two-qubit errors can be \emph{distinguished} from single-qubit errors; in contrast, in the non-extended version, two-qubit errors can always be misidentified as a single-qubit error. Thus, a common usage for the code is to perform a hybrid between error correction and error detection, i.e.~one corrects ostensibly single-qubit errors and discards multi-qubit errors. We consider the implementation of this code using the encoding circuit shown in Fig.~\ref{fig:schematic3}. The corresponding parity check matrix is given by: \begin{equation} H = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ \end{pmatrix}_{[8,4]} . \end{equation} If the first parity check is violated, it suggests that a single bit-flip error has occurred, and the remaining parity checks can locate the error in the same way as in the non-extended version (of course, the same scenario could also arise from three bit-flip errors). On the other hand, if the first parity check succeeds but one of the other checks fails, it indicates that multiple bit-flip errors have occurred; because their locations cannot be determined, the measurement is discarded.
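To make this decision rule concrete, the following is a minimal Python sketch of the hybrid correct/discard strategy using the parity check matrix $H_{[8,4]}$ given above. It assumes an ideal syndrome computation on the measured bit string; the function and variable names are ours and are not part of any library or of our simulation code.
\begin{verbatim}
import numpy as np

# Parity-check matrix of the extended Hamming (8,4) code, as given above.
H = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 0, 1, 0],
])

def hybrid_decode(word):
    """Correct an apparent single bit flip; discard apparent multi-bit errors."""
    word = np.array(word)
    s = H @ word % 2
    if not s.any():
        return word                  # all checks pass: accept the outcome as-is
    if s[0] == 1:                    # odd overall parity: consistent with one flip
        # the remaining checks single out the column of H matching the syndrome
        j = int(np.argmax((H[1:] == s[1:, None]).all(axis=0)))
        word[j] ^= 1
        return word
    return None                      # even overall parity but checks fail: discard

w = np.zeros(8, dtype=int)           # the all-zeros string is a valid codeword
w[2] ^= 1
print(hybrid_decode(w))              # single flip -> corrected back to all zeros
w[5] ^= 1
print(hybrid_decode(w))              # two flips -> None (measurement discarded)
\end{verbatim}
In this sketch a single flip on any physical bit (including the extra parity bit) is corrected, while any pair of flips produces an even overall parity with a non-trivial syndrome and is discarded, mirroring the hybrid strategy analyzed below.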
In Fig.~\ref{fig:err_scaling2}, we test the performance of this hybrid approach via noisy simulations with the same noise channels as in the main text. We first verify that the total error rate scales as $Q_{\textrm{eff}} \sim q^3$ with respect to readout noise, as expected from the fact that three bit-flip errors can be misidentified as a single bit-flip error. More surprisingly, the linear susceptibility with respect to gate errors is $Q_{\textrm{eff}} /k = \alpha \epsilon$ with $\alpha = 1/4$, the same as other codes performing pure error detection. While this suggests that the hybrid approach is more robust to gate errors, a closer inspection of the circuit reveals that the gate errors that plagued the (7,4) code with error correction are not \emph{correctable} by the (8,4) code either; rather, they contribute to the number of discarded measurements. More specifically, such errors propagate during the last round of \textsc{cnot}~gates (shown in the box of Fig.~\ref{fig:schematic3}), resulting in at least two bit-flip errors on the measurement outcome, which cannot be corrected. Indeed, we verify that the number of discarded measurements for the (8,4) code is equivalent to that of the (7,4) code with error detection when only gate errors are included. Thus, utilizing the (8,4) code with the hybrid error correction / detection strategy is practically equivalent to utilizing the (7,4) code with error detection in terms of handling gate errors (as well as readout errors). Finally, we note that a simpler circuit can, in fact, be utilized to implement the (8,4) code. The reason is that the parity bits redundantly store the parity of certain combinations of logical bits, so one can obtain the parity of all the physical bits by interacting with only a partial set of the bits. The reduced circuit is shown in Fig.~\ref{fig:8hamming_reduced} and consists of only 3 \textsc{cnot}~gates acting on the last parity bit rather than the original 7 gates. By performing noisy simulations, we verify that this reduced circuit has the same scaling with respect to readout errors as the full circuit. However, the simplified circuit exhibits a \emph{worse} performance with respect to gate errors; in particular, we find $\alpha = 3/4$. The explanation for this counter-intuitive result is that, in the simplified circuit, many of the gate errors that occur in the non-extended part of the circuit do not propagate during the final round of \textsc{cnot}~gates. As a result, many such errors are misidentified as single bit-flip errors and lead to a spurious final state, similar to the (7,4) code performing error correction. In contrast, for the full circuit shown in Fig.~\ref{fig:schematic3}, these same errors always propagate and are flagged as multi-qubit errors which are discarded. \begin{figure} \caption{Reduced circuit for encoding the Hamming (8,4) code.} \label{fig:8hamming_reduced} \end{figure} \end{document}
\begin{document} \title{Exploring Rawlsian Fairness for \\ K-Means Clustering} \author{Stanley Simoes \hspace{0.2in} Deepak P \hspace{0.2in} Muiris MacCarthaigh} \authorrunning{S. Simoes et al.} \institute{Queen's University Belfast, UK\\ \email{[email protected]} \hspace{0.1in} \email{[email protected]} \hspace{0.1in} \email{[email protected]}} \maketitle \begin{abstract} We conduct an exploratory study that looks at incorporating John Rawls' ideas on fairness into existing unsupervised machine learning algorithms. Our focus is on the task of \emph{clustering}, specifically the \emph{k-means clustering} algorithm. To the best of our knowledge, this is the first work that uses Rawlsian ideas in clustering. Towards this, we attempt to develop a \emph{postprocessing} technique \emph{i.e.},\xspace one that operates on the cluster assignment generated by the standard k-means clustering algorithm. Our technique perturbs this assignment over a number of iterations to make it fairer according to Rawls' \emph{difference principle} while minimally affecting the overall utility. As the first step, we consider two simple perturbation operators -- $\mathbf{R_1}$ and $\mathbf{R_2}$ -- that reassign examples in a given cluster assignment to new clusters; $\mathbf{R_1}$ reassigning a single example to a new cluster, and $\mathbf{R_2}$ a pair of examples to new clusters. Our experiments on a sample of the Adult dataset demonstrate that both operators make meaningful perturbations in the cluster assignment towards incorporating Rawls' difference principle, with $\mathbf{R_2}$ being more efficient than $\mathbf{R_1}$ in terms of the number of iterations. However, we observe that there is still a need to design operators that make significantly better perturbations. Nevertheless, both operators provide good baselines for designing and comparing any future operator, and we hope our findings would aid future work in this direction. \keywords{fairness \and unsupervised machine learning \and clustering.} \end{abstract} \section{Introduction} While traditional machine learning (ML) algorithms aim to maximise utility through discrimination, they are now being increasingly regulated by law so as to prevent any unjustifiable discrimination in society, especially when these algorithms are deployed on a large scale and can heavily influence people's life chances. These laws provide interpretations of fairness (\emph{e.g.}:\xspace the \emph{four-fifths rule}\footnote{\url{https://www.law.cornell.edu/cfr/text/29/1607.4}} in the US and the \emph{Equality Act 2010}\footnote{\url{https://www.legislation.gov.uk/ukpga/2010/15/contents}} in the UK) with which existing algorithms are required to comply. The challenge lies in translating these legal interpretations of fairness, which are expressed in natural language, into mathematical formulations that can be incorporated in ML algorithms with minimal impact on the utility of these algorithms. Among the many ideas of justice and fairness put forth by political philosophers and social scientists over time, the ML research community has looked at ideas of fairness that are computationally easy to model, such as individual fairness \cite{dwork12fairness} and group fairness \cite{pedreshi08discrimination}. However, these ideas do not have enough support from the justice and fairness space.
We take a novel and different route in this paper: we look at the ideas of \emph{John Rawls}\footnote{\url{https://plato.stanford.edu/entries/rawls/}} -- an influential 20\textsuperscript{th} century moral and political philosopher in the liberal tradition -- and attempt to incorporate his ideas of fairness in unsupervised ML. Compared to other ideas, Rawlsian ideas are time-tested and offer a good mix of pragmatism and principledness, but incorporating them into ML algorithms is challenging. In general, ideas of fairness can be incorporated in existing ML algorithms in three ways -- \begin{enumerate*}[label=(\roman*)] \item \emph{preprocessing}: where the input is processed to ensure fairness before being fed to the ML algorithm, \item \emph{inprocessing}: where the ML algorithm is altered to incorporate the fairness criteria (\emph{e.g.}:\xspace by adding fairness constraints to the optimisation criterion), and \item \emph{postprocessing}: where the output of the ML algorithm is processed to make it fairer \end{enumerate*}. Preprocessing and postprocessing techniques can be used with any off-the-shelf ML algorithm with no modification, which is not the case with inprocessing techniques. Existing ML algorithms can thus be easily augmented to produce fairer outputs through preprocessing and postprocessing techniques. On the other hand, inprocessing techniques are known to have both better utility and better fairness than the other two. In this paper, we focus on \emph{clustering} -- an unsupervised ML task. We specifically look at the \emph{k-means clustering} algorithm \cite{hastie09the}, a well-known unsupervised ML algorithm that is used to partition a collection of examples into disjoint sets called \emph{clusters} such that similar examples are assigned to the same cluster. We look at the k-means clustering algorithm with an additional constraint of satisfying Rawls' \emph{difference principle} \cite{rawls01justice}. In other words, we allow the overall utility of the obtained clusters to be sub-optimal as long as the least-advantaged sensitive group has the greatest utility. We work towards developing a \emph{postprocessing} technique that perturbs the output of the standard k-means clustering algorithm to incorporate the difference principle. While there has been research on the intersection of fairness and clustering, to the best of our knowledge, this is the first work to attempt to incorporate Rawlsian ideas in clustering. Although this work is exploratory, we hope that this report of our experiences would aid future work in this direction. \section{The Difference Principle and Rawlsian Point} In his book \emph{Justice as Fairness: A Restatement}, John Rawls states the \emph{difference principle} as \begin{displayquote}[\cite{rawls01justice}] \textquote{\emph{Social and economic inequalities are to satisfy two conditions: first, they are to be attached to offices and positions open to all under conditions of fair equality of opportunity; and second, they are to be to the greatest benefit of the least-advantaged members of society.}} \end{displayquote} We focus on the second condition: \textquote{\emph{they [social and economic inequalities] are to be to the greatest benefit of the least-advantaged members of society}}.
For the experiments outlined in this paper, we interpret the above statement as follows: the members of a society belong to one of two sensitive social groups (\emph{e.g.}:\xspace Male or Female)\footnote{We use binary genders for the sake of illustration only.}, one being the more advantaged group and the other the less advantaged group. \subsection{Terminology} In the context of clustering, we refer to the \emph{Rawlsian point} as the cluster assignment where the difference principle is satisfied \emph{i.e.},\xspace the utility to the least-advantaged members of society is the greatest, and we refer to such a cluster assignment as the \emph{Rawlsian k-means clusters}. In contrast, the classical k-means clustering algorithm by design returns a cluster assignment where the sum of individual utilities is (approximately) maximised; we refer to this point as the \emph{utilitarian point}, and the corresponding cluster assignment as the \emph{utilitarian k-means clusters}. This paper looks at binary sensitive attributes \emph{i.e.},\xspace we assume that all examples in the dataset belong to one of two sensitive groups. We refer to the sensitive group with the lower utility as the less advantaged group (LAG) and the one with the higher utility as the more advantaged group (MAG). Thus, the LAG and MAG are defined on the sensitive attribute. Note that the sensitive attribute is \emph{not} used in the k-means clustering algorithm. \section{Problem statement\label{section:problem-statement}} We attempt to develop a postprocessing technique that operates on the output of the k-means clustering algorithm, returning a cluster assignment that corresponds to the Rawlsian point \emph{i.e.},\xspace the Rawlsian k-means clusters. This requires that the Rawlsian point does indeed exist. This leads us to the following questions: \begin{enumerate} \item Does there exist a cluster assignment that corresponds to (or is an approximation of) the Rawlsian point? (Section \ref{section:existence}) \item Can we reach this point starting from the utilitarian point \emph{i.e.},\xspace the point of highest utility achieved by the classical k-means clustering algorithm? (Section \ref{section:algo}) \end{enumerate} \subsection{Notation} Let $\mathcal{X} = [ \ldots, x, \ldots ]$ be a collection of examples defined over a set of non-sensitive attributes $\mathcal{N}$ and a single binary sensitive attribute $S = \lbrace 0, 1 \rbrace$. For any example $x$, let $x.n$ denote its values for the non-sensitive attributes and $x.s$ denote its value for the sensitive attribute. Also, let $x.n \in [0, 1]^{|\mathcal{N}|}$ and $x.s \in \lbrace 0, 1 \rbrace$. Note that the non-sensitive attributes $\mathcal{N}$ are the only attributes used for clustering; the clustering algorithm does not use the sensitive attribute $S$. The k-means clustering algorithm assigns a label to each example based on the example's distance from the cluster centroids, thus yielding a cluster assignment $\mathcal{C} = \lbrace \ldots, C, \ldots \rbrace$ with $|\mathcal{C}| = k$, where each cluster $C$ is a set of examples having the same label and $k$ is the number of clusters to be generated. We additionally define the \emph{utility} $u(x)$ of an example $x$ as \begin{equation} u(x) = \delta - d(x.n, C) \end{equation} where the constant $\delta$ is the maximum possible distance between two examples, and $d(x.n, C)$ is the distance of example $x$ from the nearest cluster centroid.
Further, the utility of a sensitive group $\alpha$ is computed as \begin{equation} U(\alpha) = \frac{1}{| \mathcal{X}_\alpha |} \sum_{x \in \mathcal{X}_\alpha} u(x) \end{equation} where $\mathcal{X}_\alpha$ denotes the set of examples in $\mathcal{X}$ belonging to the sensitive group $\alpha$ \emph{i.e.},\xspace $\mathcal{X}_\alpha = \{x | x \in \mathcal{X}, x.s=\alpha \}$. The overall utility of a cluster assignment is the average utility of all examples in the dataset. In the next section we describe the Adult dataset which is central to this exploratory study, and in the subsequent sections we attempt to address the questions outlined above. \section{Dataset} We use the publicly available Adult dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/adult}} from the UCI repository \cite{dua17uci} in our experiments. The Adult dataset has been heavily used in the fairness literature. It consists of 15 attributes and 30718 examples with no missing values. We use 8 non-sensitive attributes, 1 sensitive attribute, and the predictor. Table \ref{tab:adult-attributes} lists the attributes from this dataset that are used for preprocessing, clustering, and evaluation in our experiments. \begin{table} \centering \begin{tabular}{>{\bfseries}ll>{\ttfamily}l} \hline & \textsc{type} & \textsc{name} \\ \hline Sensitive & categorical & sex \\ \hline Non-sensitive & continuous & age \\ & & education-num \\ & & capital-gain \\ & & capital-loss \\ & & hours-per-week \\ \cline{2-3} & categorical & workclass \\ & & education \\ & & occupation \\ \hline Predictor & categorical & annual-income \\ \hline \end{tabular} \caption{Attributes from the Adult dataset used in our experiments. Only the non-sensitive attributes are used in the clustering algorithm. The sensitive attribute is used only for computing the utilities of the sensitive groups. The predictor attribute is used only when preprocessing the dataset. \label{tab:adult-attributes}} \end{table} \subsubsection*{Preprocessing} We follow the steps in previous work \cite{abraham20fairness} for preprocessing the dataset. The non-sensitive attributes (used for clustering) are preprocessed as follows: \begin{itemize} \item continuous: scaled and translated to the range $[0, 1]$. \item categorical: one-hot encoded. To ensure that the maximum squared distance between any two examples for this attribute is 1, the value for the `hot' position is set to $\frac{1}{\sqrt{2}}$. \end{itemize} Consequently, the maximum distance between any two values of a single non-sensitive attribute is 1. As we use 8 non-sensitive attributes, the maximum possible distance $\delta$ between any two examples is 8. We then undersample for parity across the predictor attribute. The resulting dataset contains 42 non-sensitive attributes and 1 sensitive attribute. Finally, we sample 500 examples from each predictor class for a total of 1000 examples. Table \ref{tab:adult-sensitive} shows the distribution of sensitive groups in the sampled dataset. \begin{table} \centering \begin{tabular}{lrr} \hline sex & \# of examples & \% of examples \\ \hline Female & 267 & 26.7\% \\ Male & 733 & 73.3\% \\ \hline \end{tabular} \caption{Distribution of sensitive groups (Female, Male) in the sampled dataset (1000 examples). \label{tab:adult-sensitive}} \end{table} \section{Existence of Rawlsian k-means clusters\label{section:existence}} Before devising a technique for obtaining the Rawlsian k-means clusters, we need to determine whether such a cluster assignment indeed exists.
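Each candidate cluster assignment in the experiments below is scored by the group utilities defined above. The following is a minimal sketch of that computation, assuming scikit-learn's \texttt{KMeans}, $\delta = 8$ as per the preprocessing, $k=5$ as in our experiments, and Euclidean distance to the nearest centroid; the function and variable names are ours.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

DELTA = 8  # maximum possible distance between two examples (8 non-sensitive attributes)

def group_utilities(X_nonsensitive, sensitive, k=5, seed=0):
    """Fit k-means and return the per-group and overall utilities."""
    km = KMeans(n_clusters=k, random_state=seed).fit(X_nonsensitive)
    d = np.min(km.transform(X_nonsensitive), axis=1)   # distance to nearest centroid
    u = DELTA - d                                       # per-example utility u(x)
    U = {g: u[sensitive == g].mean() for g in np.unique(sensitive)}
    return U, u.mean()
\end{verbatim}
Repeated calls with different values of \texttt{seed} correspond to k-means runs with different initial centroids, which is how the points in the utility space below are generated.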
Since it is impractical to enumerate and evaluate the utilities of all possible cluster assignments, we instead perform several runs of the k-means algorithm on our dataset with different initial centroids to find an approximation. Note that either of the sensitive groups may be less (or more) advantaged; in the case of the Adult dataset, we only consider those cluster assignments where the minority group (\emph{i.e.},\xspace Female) is the less advantaged group. Figure \ref{fig:existence} shows the utilities of these cluster assignments, with the utility of the more advantaged group (MAG) on the $x$-axis and the less advantaged group (LAG) on the $y$-axis. \begin{figure} \caption{Points generated by the 5000 runs} \caption{Zooming in on the utilitarian point and (approximate) Rawlsian point} \caption{Scatter plot of points in the MAG-LAG utility space generated by 5000 runs of the k-means clustering algorithm with different initial centroids, $k=5$, on our Adult dataset, and majority (Male) as the more advantaged group (MAG) and minority (Female) as the less advantaged group (LAG). Each point in this space corresponds to the cluster assignment obtained from a single run. The hue of a point indicates the overall utility, with darker being better. Points on the 45° red dashed line are such that the utilities for MAG and LAG are equal.} \label{fig:existence} \end{figure} It can be seen in Figure \ref{fig:existence} that there is indeed a point (shown as an olive plus point) corresponding to a k-means clustering that has a better utility for the less advantaged group than the utilitarian point (shown as a blue plus point). The olive plus point is thus an approximation of the Rawlsian point. Note that this point may not be the actual Rawlsian point; we use it as an approximation of the Rawlsian point in the experiment discussed in Section \ref{section:algo} as evaluating all possible cluster assignments is not possible. \section{Reaching the approximate Rawlsian point\label{section:algo}} We saw in the previous section that there is no mechanism to generate points in the MAG-LAG utility space other than through the k-means algorithm itself. We thus performed several runs of the k-means algorithm to see whether doing so can reveal spaces where the Rawlsian point is likely to be and whether we can navigate to these spaces. The histograms to the top and right of the scatter plots in Figure \ref{fig:existence} indicate that the generated points are concentrated near the utilitarian point (the blue plus point). This suggests that a single k-means run would likely generate a cluster assignment that corresponds to a point close to the utilitarian point. Thus, we select the utilitarian cluster assignment \emph{i.e.},\xspace the one with the highest overall utility (shown as a blue plus point in Figure \ref{fig:existence}) as the starting point for our postprocessing technique. We now outline our postprocessing technique for arriving at the approximate Rawlsian point starting from the utilitarian point. Our goal is to find a \emph{reassignment} of examples to new clusters so that we reach the approximate Rawlsian point (shown as an olive plus point). To do so, we apply a series of reassignment operations to the utilitarian cluster assignment such that we gradually move above the horizontal blue line in the MAG-LAG utility space toward the Rawlsian point. The general outline of our technique is shown in Algorithm \ref{algo:traverse}.
\begin{algorithm}[ht] \caption{Traverse \label{algo:traverse}} \begin{algorithmic}[1] \Require $\mathcal{X}$, $\mathcal{C}$ \Ensure $\mathcal{C}$ \Repeat \State $O \gets$ GenerateOperations$(\mathcal{X}, \mathcal{C})$ \Comment{Generate operations (Section \ref{section:operators})} \State $o \gets$ select the best operation in $O$ \Comment{(Section \ref{section:select-apply})} \If{$o$ is not $\phi$} \State $\mathcal{C} \gets$ apply the selected operation $o$ \EndIf \Until{$o$ is $\phi$} \end{algorithmic} \end{algorithm} We discuss the GenerateOperations algorithm in Section \ref{section:operators}, and how the best operation is selected and applied in Section \ref{section:select-apply}. \subsection{Selecting and applying the best reassignment operation\label{section:select-apply}} We select the best reassignment operation among those generated according to the following order of preference: \begin{enumerate} \item Among the operations that generate a point in the north-east of the current point in the MAG-LAG utility space, select the operation that corresponds to the point with the highest overall utility. \item If no such operation exists, among the operations that generate a point in the \emph{skyline} of points in the north-west of the current point, select the operation that corresponds to the point with the highest overall utility. \item If no such operation exists, select the null operation $\phi$, which indicates that no reassignment is done and hence the cluster assignment is unchanged. \end{enumerate} The \emph{skyline} $S$ of a set of points $Q$ is defined as those points in the MAG-LAG utility space where $\forall q \in Q$, $\exists s \in S$ such that either \begin{enumerate*}[label=(\roman*)] \item $q = s$ (\emph{i.e.},\xspace $q$ is in the skyline), or \item $u_\mathit{LAG}(q) < u_\mathit{LAG}(s)$ and $u_\mathit{MAG}(q) < u_\mathit{MAG}(s)$ (\emph{i.e.},\xspace $q$ is worse than $s$ for both LAG and MAG), where $u_\alpha(x)$ is the utility of sensitive group $\alpha$ for the point $x$ \end{enumerate*}. Moving through the skyline ensures that we select the operation with the least drop in overall utility and maximum gain in LAG utility. Applying the selected reassignment operation is straightforward; we change the current cluster assignment (\emph{i.e.},\xspace the current labels of the examples) as specified by the selected operation, and use the new cluster assignment as the starting point for the next iteration. \section{Reassignment operators\label{section:operators}} Our goal is to construct an operator that reassigns a number of examples in the current cluster assignment to new clusters, thus generating a new cluster assignment -- the Rawlsian cluster assignment -- having a higher LAG utility while minimally affecting the overall utility. By applying a series of instantiations of this operator to the utilitarian cluster assignment, we hope to reach the approximate Rawlsian point. We explore two simple operators, $\mathbf{R_1}$ and $\mathbf{R_2}$, which are now detailed. \subsection{Reassignment operator $\mathbf{R_1}$\label{section:r1}} The reassignment operator $\mathbf{R_1}$ operating on a tuple $(x, C')$ takes a single example $x$ from the current cluster assignment $\mathcal{C}$ and reassigns it to a different cluster $C'$ (\emph{i.e.},\xspace a different label), thus yielding a new cluster assignment. The number of possible operations\footnote{An \emph{operation} is an instantiated operator.} generated for $\mathbf{R_1}$ is thus $n \times (k-1)$.
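The operator itself is straightforward to implement; a minimal sketch (our own function name, assuming the cluster assignment is stored as an integer label array) is:
\begin{verbatim}
import numpy as np

def r1(labels, x_idx, new_cluster):
    """R1: reassign example x_idx to new_cluster, leaving everything else unchanged."""
    new_labels = np.array(labels, copy=True)
    assert new_labels[x_idx] != new_cluster, "R1 must move the example to a different cluster"
    new_labels[x_idx] = new_cluster
    return new_labels
\end{verbatim}
The non-trivial part is deciding which of the $n \times (k-1)$ candidate operations to generate and keep, which is handled next.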
Algorithm \ref{algo:generate-r1} outlines the GenerateOperationsR1 algorithm, which is the $\mathbf{R_1}$ variant of the GenerateOperations algorithm. \begin{algorithm}[ht] \caption{GenerateOperationsR1\label{algo:generate-r1}} \begin{algorithmic}[1] \Require $\mathcal{X}$, $\mathcal{C}$ \Ensure $O$ \State Initialise $O$ to an empty set \For{each example-cluster tuple $(x, C') \in \mathcal{X} \times \mathcal{C}$} \State Apply $\mathbf{R_1}(x, C')$ to get a new cluster assignment $\mathcal{C}'$ \State Obtain the corresponding point in the MAG-LAG utility space \If{the point has a higher LAG utility than the current point} \State Add $(x, C')$ to $O$ \EndIf \State Discard $\mathcal{C}'$ \EndFor \end{algorithmic} \end{algorithm} Figure \ref{fig:traverse-r1} shows the trajectory of points in the MAG-LAG utility space generated by employing GenerateOperationsR1 in Algorithm \ref{algo:traverse}. We see that the algorithm takes small steps (each step corresponds to the application of one $\mathbf{R_1}$ operation) in the correct direction towards the approximate Rawlsian point, but ends up with the LAG and MAG utilities being equal. Notably, the approximate Rawlsian point (olive plus point in the figure) has a better LAG (Female) utility than all generated points. There is thus a need for improvement. Another drawback of $\mathbf{R_1}$ is that its repeated application may be inefficient because the amount of movement towards the approximate Rawlsian point in each iteration is negligible. Additionally, a reassignment, once applied, is never undone. \begin{figure} \caption{Trajectory of points (in orange) generated in the MAG-LAG utility space by Traverse (Algorithm \ref{algo:traverse}) with operator $\mathbf{R_1}$.} \label{fig:traverse-r1} \end{figure} \subsection{Pair reassignment operator $\mathbf{R_2}$} To overcome the inefficiency limitation of $\mathbf{R_1}$, we instead select pairs of examples to be reassigned. We define the \emph{pair reassignment} operator as follows: The pair reassignment operator $\mathbf{R_2}$ operating on a pair of tuples $(x_1, C'_1), (x_2, C'_2)$ takes a pair of distinct examples $x_1$ and $x_2$ ($x_1 \ne x_2$) from the current cluster assignment $\mathcal{C}$ and reassigns them to new clusters $C'_1$ and $C'_2$, thus yielding a new cluster assignment. One $\mathbf{R_2}$ operation is thus equivalent to two $\mathbf{R_1}$ operations on two distinct examples $x_1$ and $x_2$. Two issues arise when using $\mathbf{R_2}$ for navigating the MAG-LAG utility space: \begin{enumerate} \item The number of possible operations using $\mathbf{R_2}$ is ${n \choose 2} (k-1)^2$, where $n$ is the number of examples and $k$ is the number of clusters. In our experiment, this evaluates to nearly 8 million possible operations; it is impractical to calculate the utilities of all such operations. In contrast, the number of possible operations using $\mathbf{R_1}$ is $n \times (k-1)$, which is significantly smaller. \item On inspection of a sample of these $\mathbf{R_2}$ operations, we found that a large chunk (around 62\%) generates points with lower LAG utilities than the current point \emph{i.e.},\xspace they move away from the Rawlsian point, and hence are not useful. \end{enumerate} Thus, there is a need to intelligently generate $\mathbf{R_2}$ operations whose corresponding points have higher LAG utilities than the current point. Algorithm \ref{algo:generate-r2} outlines the GenerateOperationsR2 algorithm, which is the $\mathbf{R_2}$ variant of the GenerateOperations algorithm.
Instead of generating all possible operations, we use a heuristic for pruning the set of example-cluster tuples (Lines 1 to 4 in Algorithm \ref{algo:generate-r2}). This heuristic is based on the assumption that if $\mathbf{R_1}$ does not generate a point with higher LAG utility for some tuple $(x, C')$, then no $\mathbf{R_2}$ operation instantiated with $(x, C')$ -- \emph{i.e.},\xspace neither $\mathbf{R_2}(x, C', \cdot, \cdot)$ nor $\mathbf{R_2}(\cdot, \cdot, x, C')$ -- will generate a point with higher LAG utility than the current point. Next, the remaining tuples are separately ranked on LAG utility and overall utility, and only the top reassignments (1\% to 5\%) from the two rankings are retained. This yields a set of good example-cluster tuples $T_\mathit{good} \subset \mathcal{X} \times \mathcal{C}$ that can later be used to instantiate $\mathbf{R_2}$. The rest of the algorithm is similar to Algorithm \ref{algo:generate-r1}. \begin{algorithm}[ht] \caption{GenerateOperationsR2\label{algo:generate-r2}} \begin{algorithmic}[1] \Require $\mathcal{X}$, $\mathcal{C}$ \Ensure $O$ \State $T \gets$ GenerateOperationsR1$(\mathcal{X}, \mathcal{C})$ \State $T_\mathit{LAG} \gets$ top $p$\% of $T$ ranked on LAG utility \State $T_\mathit{overall} \gets$ top $q$\% of $T$ ranked on overall utility \State $T_\mathit{good} \gets T_\mathit{LAG} \cup T_\mathit{overall}$ \State Initialise $O$ to an empty set \For{each pair of example-cluster tuples $((x_1, C'_1), (x_2, C'_2)) \in T_\mathit{good} \times T_\mathit{good}$} \State Apply $\mathbf{R_2}(x_1, C'_1, x_2, C'_2)$ to get a new cluster assignment $\mathcal{C}'$ \State Obtain the corresponding point in the MAG-LAG utility space \If{the point has a higher LAG utility than the current point} \State Add $(x_1, C'_1, x_2, C'_2)$ to $O$ \EndIf \State Discard $\mathcal{C}'$ \EndFor \end{algorithmic} \end{algorithm} Figure \ref{fig:traverse-r1vr2} compares the trajectories of points in the MAG-LAG utility space generated by employing GenerateOperationsR1 and GenerateOperationsR2 in Algorithm \ref{algo:traverse}. We can see that while $\mathbf{R_2}$ improves upon the inefficiency limitation of $\mathbf{R_1}$ (requiring 152 iterations as compared to 261 iterations by $\mathbf{R_1}$), it follows a nearly identical trajectory to $\mathbf{R_1}$ and hence still suffers from its other limitations \emph{i.e.},\xspace it ends up with the LAG and MAG utilities being equal, and an operation, once applied, is never undone. \begin{figure} \caption{Trajectories of points generated in the MAG-LAG utility space by Traverse (Algorithm \ref{algo:traverse}) with operators $\mathbf{R_1}$ and $\mathbf{R_2}$.} \label{fig:traverse-r1vr2} \end{figure} \section{Related Work} While there has been significant research on fairness in machine learning in recent years, we look at those works that are relevant to this paper \emph{i.e.},\xspace \begin{enumerate*}[label=(\roman*)] \item Rawlsian ideas of fairness in machine learning, and \item fair algorithms for clustering \end{enumerate*}. \textbf{Rawlsian ideas of fairness in ML:} The few existing works that explore Rawls' ideas of fairness in ML are in the supervised setting. The difficulty of adapting Rawlsian principles into algorithms is apparent from these works. For example, Shah et al \cite{shah21rawlsian} propose a classifier that minimises the error rate of the worst-off sensitive group; they call this a Rawls classifier. Hashimoto et al \cite{hashimoto18fairness} employ Rawlsian ideas to mitigate the amplification of representation disparity in empirical risk minimization.
We have not come across any work that explores Rawls' ideas in the unsupervised setting. \textbf{Fair algorithms for clustering:} These can be broadly categorised based on the notion of fairness \emph{i.e.},\xspace \begin{enumerate*}[label=(\roman*)] \item individual fairness \cite{jung19a,kleindessner20a,mahabadi20individual}, and \item group fairness \cite{chierichetti17fair,davidson20making,kleindessner19fair} \end{enumerate*}. Further, these algorithms differ in how the fairness criterion is enforced (\emph{i.e.},\xspace preprocessing, inprocessing, or postprocessing). For example, Chierichetti et al \cite{chierichetti17fair} propose a preprocessing technique that makes the output of a subsequent standard clustering algorithm fair. Kleindessner et al \cite{kleindessner19fair} incorporate the fairness constraint within the clustering algorithm. Davidson and Ravi \cite{davidson20making} look at postprocessing clusters for fairness; they do so by presenting it as a Minimum Cluster Modification for Group Fairness (MCMF) optimisation problem, which is formulated as an integer linear program (ILP). Other notions of fairness such as representativity fairness \cite{p20representativity} and proportionality fairness \cite{chen19proportionally} have also been proposed for clustering. \section{Conclusion} We proposed a postprocessing framework for making the clusters generated by the standard k-means clustering algorithm satisfy Rawls' difference principle while minimally affecting the overall utility. Within this framework, we explored two simple operators that perturb a given cluster assignment by reassigning examples to new clusters; the first operator $\mathbf{R_1}$ reassigns a single example to a new cluster at a time, and the second operator $\mathbf{R_2}$ reassigns two examples to new clusters at a time. We observed that while $\mathbf{R_2}$ improves upon the efficiency limitation of $\mathbf{R_1}$, there is still considerable scope for improvement with regard to arriving at the Rawlsian point. There is a need to design an operator that reassigns a larger number of examples to other clusters, which will consequently open up new avenues for exploration in the search space. Nevertheless, we expect these operators to act as good baselines for any future operator. \section*{Acknowledgment} This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 945231; and the Department of the Economy in Northern Ireland. \end{document}
\begin{document} \title{Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction} \iftrue \author{\IEEEauthorblockN{Mohamed Suliman} \IEEEauthorblockA{\textit{Trinity College Dublin} \\ Dublin, Ireland} \and \IEEEauthorblockN{Douglas Leith} \IEEEauthorblockA{\textit{Trinity College Dublin} \\ Dublin, Ireland} } \fi \maketitle \begin{abstract} In this paper we present new attacks against federated learning when used to train natural language text models. We illustrate the effectiveness of the attacks against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app that has been an early adopter of federated learning for production use. We demonstrate that the words a user types on their mobile handset, e.g. when sending text messages, can be recovered with high accuracy under a wide range of conditions and that counter-measures such as the use of mini-batches and the addition of local noise are ineffective. We also show that the word order (and so the actual sentences typed) can be reconstructed with high fidelity. This raises obvious privacy concerns, particularly since GBoard is in production use. \end{abstract} \section{Introduction} Federated Learning (FL) is a class of distributed algorithms for the training of machine learning models such as neural networks. A primary aim of FL when it was first introduced was to enhance user privacy, namely by keeping sensitive data stored locally and avoiding uploading it to a central server~\cite{mcmahan2017communication}. The basic idea is that users train a local version of the model with their own data, and share only the resulting model parameters with a central coordinating server. This server then combines the models of all the participants, transmits the aggregate back to them, and this cycle (i.e.\ a single FL `round') repeats until the model is judged to have converged. A notable real-world deployment of FL is within Google's Gboard, a widely used Android keyboard application that comes pre-installed on many mobile handsets and which has $>5$ billion downloads~\cite{gboardplay22}. Within GBoard, FL is used to train the Next Word Prediction (NWP) model that provides the suggested next words that appear above the keyboard while typing~\cite{hard2018federated}. In this paper we show that FL is not private for Next Word Prediction. We present an attack that reconstructs the original training data, i.e. the text typed by a user, from the FL parameter updates with a high degree of fidelity. Both the FedSGD and FederatedAveraging variants of FL are susceptible to this attack. In fairness, Google have been aware of the possibility of information leakage from FL updates since the earliest days of FL, e.g. see footnote 1 in~\cite{mcmahan2017communication}. Our results demonstrate that not only does information leakage indeed happen for real-world models deployed in production and in widespread use, but that the amount of information leaked is enough to allow the local training data to be fully reconstructed. We also show that adding Gaussian noise to the transmitted updates, which has been proposed to ensure local Differential Privacy (DP), provides little defence unless the noise levels used are so large that the utility of the model becomes substantially degraded. That is, DP is not an effective countermeasure to our attack\footnote{DP aims to protect the aggregate training data/model against query-based attacks, whereas our attack targets the individual updates.
Nevertheless, we note that DP is sometimes suggested as a potential defence against the type of attack carried out here.}. We also show that use of mini-batches of up to 256 sentences provides little protection. Other defences, such as Secure Aggregation (a form of multi-party computation, MPC), Homomorphic Encryption (HE), and Trusted Execution Environments (TEEs), are currently either impractical or require the client to trust that the server is honest\footnote{Google's Secure Aggregation approach~\cite{bonawitz2016practical} is a prominent example of an approach requiring trust in the server, or more specifically in the PKI infrastructure which in practice is operated by the same organisation that runs the FL server since it involves authentication/verification of clients. We note also that Secure Aggregation is not currently deployed in the GBoard app despite being proposed 6 years ago.} in which case use of FL is redundant. Previous studies of reconstruction attacks against FL have mainly focussed on image reconstruction, rather than text as considered here. Unfortunately, we find that the methods developed for image reconstruction, which are based on using gradient descent to minimise the distance between the observed model data and the model data corresponding to a synthetic input, do not readily transfer over to text data. This is perhaps unsurprising since the inherently discrete nature of text makes the cost surface highly non-smooth and so gradient-based optimisation is difficult to apply successfully. In this paper we therefore propose new reconstruction approaches that are specialised to text data. It is important to note that the transmission of user data to a remote server is not inherently a breach of privacy. The risk of a privacy breach is related to the nature of the data being sent, as well as whether its owner can be readily identified. For example, sending device models, version numbers, and locale/region information is not an immediate concern but it seems clear that the sentences entered by users, e.g. when typing messages, writing notes and emails, web browsing and performing searches, may well be private. Indeed, it is not only the sentences typed which can be sensitive but also the set of words used (i.e. even without knowing the word ordering) since this can be used for targeting surveillance via keyword blacklists \cite{guardiannsasms}. In addition, most Google telemetry is tagged with an Android ID. Via other data collected by Google Play Services, the Android ID is linked to (i) the handset hardware serial number, (ii) the SIM IMEI (which uniquely identifies the SIM slot) and (iii) the user's Google account \cite{infocomgaen21,securecom21}. When creating a Google account, users are encouraged to supply a phone number and for many people this will be their own phone number. Use of Google services such as buying a paid app on the Google Play store or using Google Pay further links a person's Google account to their credit card/bank details. A user's Google account, and so the Android ID, can therefore commonly be expected to be linked to the person's real identity. \section{Preliminaries} \subsection{Federated Learning Client Update} Algorithm \ref{alg:fedlearning} gives the procedure followed by FL participants to generate a model update. The number of local epochs $E$, mini-batch size $B$, and local client learning rate $\eta$ can be changed depending on the FL application.
When $E = 1$ and the mini-batch size is equal to the size of the training dataset, it is called FedSGD; any other configuration corresponds to FedAveraging, where multiple gradient descent steps occur on the client. \begin{algorithm} \caption{Federated Learning Client Update}\label{alg:fedlearning} \KwInput{$\theta_0$: The parameters of the global model, training loss function $\ell$.} \KwOutput{$\theta_1$: The model parameters after training on the client's data.} \KwProc{clientUpdate}{ $\theta_1 \gets \theta_0$\; $\mathcal{B} \gets (\mbox{split dataset into batches of size } B)$\; $\mbox{\textbf{for} local epoch \textit{i} from 1 to $E$ \textbf{do}}$\; $\ \ \mbox{\textbf{for} batch \textit{b} $\in \mathcal{B}$ \textbf{do}}$\; $\ \ \ \ \theta_1 \gets \theta_1 - \eta\nabla \ell(\theta_1;b)$\; \Return $\theta_1$\; } \end{algorithm} \subsection{Threat Model} The threat model is that of an honest-but-curious adversary that has access to (i) the FL model architecture, (ii) the current global FL model parameters $\theta_0$, and (iii) the FL model parameters, $\theta_1$, after updating locally using the training data of an individual user. The FL server has, for example, access to all of these and so this threat model captures the situation where there is an honest-but-curious FL server. We do not consider membership attacks against the global model, although knowledge of $\theta_0$ allows such attacks, since they have already received much attention in the literature. Instead we focus on local reconstruction attacks, i.e.\ attacks that aim to reconstruct the local training data of a user from knowledge of $\theta_0$ and $\theta_1$. In the GBoard Next Word Prediction task the local training data is the text typed by the user while using apps on their mobile handset, e.g.\ while sending text messages. \subsection{GBoard NWP Model} Figure \ref{fig:gboard_lstm} shows the LSTM recursive neural net (RNN) architecture used by Gboard for NWP. This model was extracted from the app. \begin{figure} \caption{The LSTM RNN architecture used by Gboard for next word prediction.} \label{fig:gboard_lstm} \end{figure} The Gboard LSTM RNN is a word level language model, predicting the probability of the next word given what the user has already typed into the keyboard. Input words are first mapped to an entry in a dictionary with a vocabulary of \(V = 9502\) words, with a special \texttt{<UNK>} entry used for words that are not in the dictionary, and \texttt{<S>} to indicate the start of a sentence. The index of the dictionary entry is then mapped to a dense vector of size \(D=96\) using a lookup table (the dictionary entry is one-hot encoded and then multiplied by an \(\mathbb{R}^{D \times V}\) weighting matrix \(W^T\)) and applied as input to an LSTM layer with 670 units, i.e.\ the state \(C_t\) is a vector of size 670. The LSTM layer uses a CIFG architecture without peephole connections, illustrated schematically in Figure \ref{fig:gboard_lstm}. The LSTM state \(C_t\) is linearly projected down to an output vector \(h_t\) of size \(D\), which is mapped to a raw logit vector \(z_t\) of size \(V\) via a weighting matrix \(W\) and bias \(b\). This extra linear projection is not part of the orthodox CIFG cell structure that was introduced in \cite{cifg}, and is included to accommodate the model's tied input and output embedding matrices \cite{press2017using}.
A softmax output layer finally maps this to a \([0,1]^{V}\) vector \(\hat{y}_t\) of probabilities, the \(i\)'th element being the estimated probability that the next word is the \(i\)'th dictionary entry. \section{Reconstruction Attack} \subsection{Word Recovery} \label{sec:wr} In next word prediction the input to the RNN is echoed in its output. That is, the output of the RNN aims to match the sequence of words typed by the user, albeit with a shift one word ahead. The sign of the output loss gradient directly reveals information about the words typed by the user, which can then be recovered easily by inspection. This key observation, first made in \cite{zhao2020idlg}, is the basis of our word recovery attack. After the user has typed \(t\) words, the output of the LSTM model at timestep \(t\) is the next word prediction vector \(\hat{y}_{t}\), \begin{align*} \hat{y}_{t,i} = \frac{e^{z_{t,i}}}{\sum_{j=1}^Ve^{z_{t,j}}},\ i=1,\dots, V \end{align*} with raw logit vector \(z_t=Wh_t+b\), where \(h_t\) is the output of the LSTM layer. The cross-entropy loss function for text consisting of \(T\) words is \(J_{1:T}(\theta)= \sum_{t=1}^T J_t(\theta)\) where \begin{align*} J_t(\theta)= -\log \frac{e^{z_{t,i_t^*}(\theta)}}{\sum_{j=1}^Ve^{z_{t,j}(\theta)}}, \end{align*} and \(i^*_t\) is the dictionary index of the \(t\)'th word entered by the user and \(\theta\) is the vector of neural net parameters (including the elements of \(W\) and \(b\)). Differentiating with respect to the output bias parameters \(b\), we have that \begin{align*} \frac{\partial J_{1:T}}{\partial b_{k}} = \sum_{t=1}^T \sum_{i=1}^V \frac{\partial J_t}{\partial z_{t,i}} \frac{\partial z_{t,i}}{\partial b_k} \end{align*} where \begin{align*} \frac{\partial J_t}{\partial z_{t,i_t^*}}=\frac{e^{z_{t,i_t^*}}}{\sum_{j=1}^Ve^{z_{t,j}}}-1<0, \\ \quad \frac{\partial J_t}{\partial z_{t,i}}=\frac{e^{z_{t,i}}}{\sum_{j=1}^Ve^{z_{t,j}}}>0,\ i\ne i_t^* \end{align*} and \begin{align*} \frac{\partial z_{t,i}}{\partial b_k} = \begin{cases} 1 & k=i\\ 0 & \text{otherwise} \end{cases} \end{align*} That is, \begin{align*} \frac{\partial J_{1:T}}{\partial b_{k}} = \sum_{t=1}^T \frac{\partial J_t}{\partial z_{t,k}} \end{align*} It follows that for words \(k\) which do not appear in the text \(\frac{\partial J_{1:T}}{\partial b_{k}} >0\). Also, assuming that the neural net has been trained to have reasonable performance, \(e^{z_{t,k}}\) will tend to be small for words \(k\) that do not appear next and large for words which do. Therefore for words \(i^*\) that appear in the text we expect that \(\frac{\partial J_{1:T}}{\partial b_{i^*}} <0\). The above analysis focuses mainly on the bias parameters of the final fully connected layer; however, similar methods can be applied to the \(W\) parameters. The key aspect that makes this attack easy is that the outputs echo the inputs, unlike, for example, the task of object detection in images. In that case, the output is just the object label. This observation is intuitive from a loss function minimisation perspective. Typically the estimated probability \(\hat{y}_{t,i_t^*}\) for an input word will be less than 1. Increasing \(\hat{y}_{t,i_t^*}\) will therefore decrease the loss function, i.e.\ the gradient is negative. Conversely, the estimated probability \(\hat{y}_{t,i}\) for a word that does not appear in the input will be small but greater than 0. Decreasing \(\hat{y}_{t,i}\) will therefore decrease the loss function, i.e.\ the gradient is positive.
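This sign structure is easy to verify numerically. The following minimal sketch (a toy softmax output layer of our own construction, not the GBoard model) computes $\partial J_{1:T}/\partial b$ for a weakly confident model, the regime in which the argument above applies, and checks that the negative entries are exactly the words appearing in the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
V, D, T = 50, 8, 6                        # toy vocabulary, embedding size, text length
W = rng.normal(scale=0.1, size=(V, D))    # small weights -> near-uniform predictions
b = np.zeros(V)
h = rng.normal(size=(T, D))               # stand-ins for the LSTM outputs h_t
words = rng.choice(V, size=T, replace=False)  # dictionary indices of the "typed" words

grad_b = np.zeros(V)
for t in range(T):
    z = W @ h[t] + b                      # raw logits z_t
    y = np.exp(z - z.max()); y /= y.sum() # softmax \hat{y}_t
    y[words[t]] -= 1.0                    # dJ_t/dz_t for cross-entropy with target word
    grad_b += y                           # dz_t/db is the identity, so contributions add

assert set(np.where(grad_b < 0)[0]) == set(words)  # negative entries <-> typed words
\end{verbatim}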
\emph{Example:} To execute this attack in practice, simply subtract the final layer parameters of the current global model $\theta_0$ from those of the resulting model trained on the client's local data, $\theta_1$, as shown in Algorithm \ref{alg:word-extraction}. The indices of the negative values reveal the typed words. Suppose the client's local data consists of just the one sentence ``learning online is not so private''. We then train model $\theta_0$ on this sentence for 1 epoch, with a mini-batch size of 1 and an SGD learning rate of 0.001 (FedSGD), and report the values at the negative indices in Table \ref{tab:vals}.
\begin{table} \centering \begin{tabular}{ |c|c|c| } \hline word & $i$ & $(\theta_1 - \theta_0)_i $ \\ \hline learning & 7437 & -0.0009951561 \\ online & 4904 & -0.0009941629 \\ is & 209 & -0.000997875 \\ not & 1808 & -0.0009941144 \\ so & 26 & -0.0009965639 \\ private & 6314 & -0.0009951561 \\ \hline \end{tabular} \caption{Values of the final layer parameter difference at the indices of the typed words. Produced after training the model on the sentence ``learning online is not so private'', $E = 1, B = 1, \eta = 0.001$. These are the only indices where negative values occur.\label{tab:vals}} \end{table}
\begin{algorithm} \caption{Word Recovery}\label{alg:word-extraction} \KwInput{$\theta_0$: The global model's final layer parameters, $\theta_1$: The final layer parameters of a model update} \KwOutput{User typed tokens $w$} \KwProc{recoverWords}{ $d \gets \theta_1 - \theta_0$\; $w \gets \{ i \ | \ d_i < 0 \}$\; \Return $w$\; } \end{algorithm}
\subsection{Reconstructing Sentences} \label{sec:rs} The attack described previously retrieves the words typed, but gives no indication of the order in which they occurred. To reveal this order, we ask the updated model itself to generate sentences for us\footnote{It is perhaps worth noting that we studied a variety of reconstruction attacks, e.g., using Monte Carlo Tree Search to perform a smart search over all word sequences, but found the attack method described here to be simple, efficient and highly effective}. The basic idea is that after running multiple rounds of gradient descent on the local training data, the local model is ``tuned'' to the local data in the sense that when it is presented with the first words of a sentence from the local training data, the model's next word prediction will tend to match the training data, and so we can bootstrap reconstruction of the full training data text. In more detail, the $t$'th input word is represented by a vector $x_t\in\{0,1\}^V$, with all elements zero apart from the element corresponding to the index of the word in the dictionary. The output $y_{t+1}\in[0,1]^V$ from the model after seeing input words $x_0,\dots,x_t$ is a probability distribution over the dictionary. We begin by selecting $x_0$ equal to the start of sentence token \texttt{<S>} and $x_1$ equal to the first word from our set of reconstructed words, then ask the model to generate $y_2=Pr(x_{2} | x_0,x_1; \theta_1)$. We set all elements of $y_2$ that are not in the set of reconstructed words to zero, since we know that these were not part of the local training data, renormalise $y_2$ so that its elements sum to one, and then select the most likely next word as $x_2$. We now repeat this process for $y_3=Pr(x_{3} | x_0,x_1,x_2; \theta_1)$, and so on, until a complete sentence has been generated. We then take the second word from our set of reconstructed words as $x_1$ and repeat to generate a second sentence, and so on.
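The generation loop just described can be summarised by the following sketch. It is schematic: \texttt{predict\_next} stands for a call into the updated model $\theta_1$ that returns the next-word distribution for a given prefix, the end-of-sentence handling is an assumption on our part, and the identifiers are illustrative rather than taken from any released code.
\begin{lstlisting}[language=Python,frame=single,caption={Sketch of the sentence reconstruction loop (illustrative only)},label=lst:sentencesketch]
import numpy as np

def reconstruct_sentence(first_word, recovered, predict_next, sos_id, eos_id, max_len=10):
    """Greedy reconstruction seeded with one recovered word.

    predict_next(prefix) -> np.ndarray of size V with next-word probabilities
    recovered            -> set of dictionary indices from the word recovery attack
    """
    prefix = [sos_id, first_word]
    allowed = set(recovered) | {eos_id}
    while len(prefix) < max_len:
        probs = predict_next(prefix).copy()
        mask = np.zeros_like(probs)
        mask[list(allowed)] = 1.0
        probs *= mask                 # words we know were not typed get probability zero
        if probs.sum() == 0:
            break
        probs /= probs.sum()          # renormalise over the recovered words
        nxt = int(np.argmax(probs))   # most likely next word
        if nxt == eos_id:
            break
        prefix.append(nxt)
    return prefix[1:]                 # drop the <S> token

# One candidate sentence is generated per recovered word, e.g.
# candidates = [reconstruct_sentence(w, recovered, predict_next, SOS, EOS) for w in recovered]
\end{lstlisting}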
This method generates as many sentences as there are extracted words, which is far more sentences than were originally in the client's training dataset. In order to filter out the spurious sentences, we rank each generated sentence by its change in perplexity from the initial global model $\theta_0$ to the new update $\theta_1$. The log-perplexity of a sequence $x_0,...,x_t$ is defined as
\begin{equation*} PP_\theta(x_0,...,x_t) = \sum_{i=1}^{t}(- \log Pr(x_i | x_0,...,x_{i-1}; \theta)), \end{equation*}
and quantifies how `surprised' the model is by the sequence. Those sentences that have a high perplexity under $\theta_0$ but a comparatively lower one under $\theta_1$ reveal themselves as having been part of the dataset used to train $\theta_1$. Each generated sentence is therefore scored by its relative change in perplexity:
\begin{equation*} Score(x_0,...,x_t) = \frac{PP_{\theta_0}(x_0,...,x_t) - PP_{\theta_1}(x_0,...,x_t)}{PP_{\theta_0}(x_0,...,x_t)}. \end{equation*}
By selecting the top-$n$ ranked sentences, we select those most likely to have been present in the training dataset.
\begin{figure*} \caption{\small Word recovery performance over time with full batch and mini-batch training. The upper bound on the F1 score for the different datasets is related to how many words in the training set are also in the model dictionary.} \label{fig:wr_nonoise} \end{figure*}
\section{Performance Of Attacks Against Vanilla Federated Learning} \label{sec:results_vanilla} \subsection{Experimental Setup} We make use of the LSTM RNN extracted from Gboard as the basis of our experiments. The values of its extracted parameters are used as the initial ``global'' model $\theta_0$, the starting point of the updates we generate. There are several variables that go into producing an update: the number of sentences in the dataset, $n_k$, the number of epochs $E$, the batch size $B$, and the local learning rate $\eta$. Note that when $E = 1$ and $B = n_k$, this corresponds to a FedSGD update, and that any other configuration corresponds to a FedAveraging update. Unless explicitly mentioned otherwise, we keep the client learning rate $\eta = 0.001$ constant for all our experiments. All sample datasets used consist of 4-word-long sentences, mirroring the average length of sentences that the Gboard model was trained with \cite{hard2018federated}. To evaluate the effectiveness of our attack, the sample datasets we use are taken from a corpus of American English text messages \cite{oday2013text}, which includes short sentences similar to those on which the Gboard LSTM extracted from the mobile application was trained. We perform our two attacks on datasets consisting of $n_k = 16, 32, 64, 128,$ and $256$ sentences. Converting a sentence into a sequence of training samples and labels $(\textbf{x}, y)$ is done as follows: \begin{itemize} \item The start-of-sentence token \texttt{<S>} is prepended to the beginning of the sentence, and each word is then converted to its corresponding word embedding. This gives a sequence $x_0,x_1,\dots,x_T$ of tokens where $x_0$ is the \texttt{<S>} token.
\item A sentence of length $T$ becomes $T$ training points $((x_0,x_1),y_2)$, $((x_0,x_1,x_2),y_3), \dots, ((x_0,x_1,\dots,x_{T-1}),y_T)$ where label $y_t\in [0,1]^V$ is a probability distribution over the dictionary entries, with all elements zero apart from the element corresponding to the dictionary index of $x_t$. \end{itemize}
Following \cite{hard2018federated} we use categorical cross entropy loss over the output and target labels. After creating the training samples and labels from a dataset of $n_k$ sentences, we train model $\theta_0$ on this training data for a specified number of epochs $E$ with a mini-batch size of $B$, according to Algorithm \ref{alg:fedlearning}, to produce the local update $\theta_1$. We then subtract the final layer parameters of the two models to recover the words, iteratively sample $\theta_1$ according to the methodology described in Section \ref{sec:rs} to reconstruct the sentences, and take the top-$n_k$ ranked sentences by their perplexity score.
\subsection{Metrics} To evaluate performance, we use the F1 score, which balances the precision and recall of word recovery with our attack. We also use a modified version of the Levenshtein ratio, i.e. the normalised Levenshtein distance \cite{marzal1993computation} (the minimum number of word level edits needed to make one string match another), to evaluate our sentence reconstruction attack. Ranging from 0 to 100, the larger the Levenshtein ratio, the closer the match between our reconstructed and the ground truth sentence.
\subsection{Measurements} \subsubsection{Word Recovery Performance} Figure \ref{fig:wr_nonoise} shows the measured performance of our word recovery attack for both the FedSGD and FedAveraging variants of FL, for mini-batch/full batch training, and as the dataset size and training time are varied. It can be seen that none of these variables has much of an effect on the F1 score achieved by our attack, which remains high across a wide range of conditions. Note that the maximum value of the F1 score is not one for this data but instead a smaller value related to how many of the words in the dataset are also present in the model's vocabulary. Some words, e.g. unique nouns, slang, etc., do not exist in the model's 9502 word dictionary, and our word recovery attack can only extract the \texttt{<UNK>} token in their place, limiting how many words we can actually recover.
\begin{figure*} \caption{$E = 50$ epochs} \label{fig:sre50} \caption{$E = 100$ epochs} \label{fig:sre100} \caption{$E = 1000$ epochs} \label{fig:sre1000} \caption{\small Sentence reconstruction performance. Each point corresponds to a different dataset, colour coded by its size. The y-axis gives the average Levenshtein ratio of the reconstructed sentences. The x-axis is the F1 score between the tokens used in the reconstructed sentences and the ground truth. The closer a point is to the top-right corner, the closer the reconstruction is to perfect.} \label{fig:fedavg_sr_nonoise} \end{figure*}
\subsubsection{Sentence Reconstruction Performance} Figure \ref{fig:fedavg_sr_nonoise} shows the measured performance of our sentence reconstruction attack. Figures \ref{fig:sre50}, \ref{fig:sre100}, and \ref{fig:sre1000} show that as the model is trained for more epochs (50, 100, and 1000 respectively) the quality of the reconstructed sentences improves. This is intuitive, as the models trained for longer are more overfit to the data, and so the iterative sampling approach is more likely to return the correct next word given a conditioning prefix.
However, longer training times are not necessary to accurately reconstruct sentences. It can be seen from Figure \ref{fig:fedsgd_sr_nonoise}(b) that even in the FedSGD setting, where the number of epochs $E = 1$, we can sometimes still get high quality sentence reconstructions by modifying the model parameters $\theta_1$ to be $\theta_1 + s(\theta_1 - \theta_0)$, with $s$ being a scaling factor. Since $\theta_1 = \theta_0 -\eta \nabla \ell(\theta_0;b)$ with FedSGD, $$\theta_1 + s(\theta_1 - \theta_0) = \theta_0 -\eta(1+s) \nabla \ell(\theta_0;b),$$ from which it can be seen that the scaling factor $s$ effectively increases the gradient descent step size.
\begin{figure*} \caption{\small FedSGD sentence reconstruction performance without (a) and with (b) scaling.} \label{fig:fedsgd_sr_nonoise} \end{figure*}
\section{Existing Attacks And Their Defences} \subsection{Image Data Reconstruction} Information leakage from the gradients of neural networks used for object detection in images appears to have been initially investigated in~\cite{zhu2019dlg}, which proposed the Deep Leakage from Gradients (DLG) algorithm. An image is input to a neural net and the output is a label specifying an object detected in the image. In DLG a synthetic input is applied to the neural net and the gradients of the model parameters are calculated. These gradients are then compared to the observed model gradients sent to the FL server, and gradient descent is used to update the synthetic input so as to minimise the difference between its model gradients and the observed model gradients. This work was subsequently extended by~\cite{zhao2020idlg,geiping_inverting_2020-1,wang2020sapag,zhu2020rgap,yin2021see,jin2021catastrophic} to improve the stability and performance of the original DLG algorithm, as well as the fidelity of the images it generates. In~\cite{yin2021see,jin2021catastrophic}, changes to the optimisation terms allowed for successful data reconstruction at batch sizes of up to 48 and 100 respectively. Analytical techniques of data extraction~\cite{boenisch2021curious,pan2020theory} benefit from being less computationally costly than optimisation-based methods. Additionally, these analytical attacks extract the exact ground truth data, whereas DLG and its variants often settle for image reconstructions that include artefacts.
\subsection{Text Data Reconstruction} Most work on reconstruction attacks has focussed on images and there is relatively little work on text reconstruction. A single text reconstruction example is presented in~\cite{zhu2019dlg}, with no performance results. Probably the closest work to the present paper is~\cite{deng2021tag}, which applies a variant of DLG to text reconstruction from gradients of transformer-based~\cite{vaswani2017attention} models (variants of BERT). As already noted, DLG tends to perform poorly with text data, and the word recovery rate achieved in~\cite{deng2021tag} is generally no more than 50\%. In our attack context, DLG can recover words but at much smaller scales than we have demonstrated, and takes longer to find these words. There is also no guarantee that DLG can recover words and place them in the correct order. In \cite{zhu2019dlg}, it is noted that the algorithm requires multiple restarts before successful reconstruction. Additionally, DLG operates by matching the single gradient of a batch of training data, and therefore only works in the FedSGD setting, where $E = 1$.
We show the results of DLG in Listing \ref{lst:sampledlg} on the gradients of $B = 1$ and $B = 2$ four-word sentences. In the first example, it took DLG 1000 iterations to produce ``\texttt{<S> how are venue}'', and 1500 iterations to produce ``\texttt{<S> how are sure cow}, \texttt{<S> haha where are Tell van}''. These reconstructions include some of the original words, but recovery is not as precise as our attack, and takes orders of magnitude longer to carry out.
\begin{lstlisting}[frame=single,caption=Original and reconstructed sentences by DLG,label=lst:sampledlg]
<S> how are you
<S> how are venue
<S> how are you doing
<S> how are sure cow
<S> where are you going
<S> haha where are Tell van
\end{lstlisting}
Work has also been carried out on membership attacks against text data models such as GPT2, i.e. given a trained model the attack seeks to infer one or more training data points; see for example~\cite{carlini2019secretsharer,carlini2021extracting}. But, as already noted, such attacks are not the focus of the present paper.
\subsection{Proposed Defences} Several defences have been proposed to prevent data leakage in FL. Abadi et al. \cite{abadi2016deep} proposed Differentially Private Stochastic Gradient Descent (DP-SGD), which clips stochastic gradient descent updates and adds Gaussian noise at each iteration. This aims to defend against membership attacks against neural networks, rather than the reconstruction attacks that we consider here. In~\cite{brendan2018learning} it was applied to train a next word prediction RNN motivated by mobile keyboard applications, again with a focus on membership attacks. Recently, the same team at Google proposed DP-FTRL~\cite{kairouz2021practical}, which avoids the sampling step in DP-SGD. Secure Aggregation is a multi-party protocol proposed in 2016 by~\cite{bonawitz2016practical} as a defence against data leakage from the data uploaded by clients to an FL server. In this setting the central server only has access to the sum of the client updates, and not to the individual updates. However, this approach still requires clients to trust that the PKI infrastructure is honest, since a dishonest PKI infrastructure allows the server to perform a Sybil attack (see Section 6.2 in~\cite{bonawitz2016practical}) to reveal the data sent by an individual client. When both the FL server and the PKI infrastructure are operated by Google, Secure Aggregation requires users to trust Google servers to be honest, and so, from an attack capability point of view, offers no security benefit. Recent work by Pasquini et al. \cite{pasquini2021eluding} has also shown that by distributing different models to each client a dishonest server can recover individual model updates. As a mitigation they propose adding local noise to client updates to obtain a form of local differential privacy. We note that despite the early deployment of FL in production systems such as GBoard, to the best of our knowledge there does not exist a real-world deployment of secure aggregation. This is also true for homomorphically encrypted FL, and FL using Trusted Execution Environments (TEEs).
\section{Performance Of Our Attacks Against Federated Learning with Local DP} Typically, when differential privacy is used with FL, noise is added by the server to the aggregate update from multiple clients, i.e. no noise is added to the update before it leaves a device. This corresponds to the situation considered in Section \ref{sec:results_vanilla}.
In this section we evaluate how local differential privacy, that is, noise added either during local training (DPSGD) or to the final model parameters $\theta_1$ before transmission to the coordinating FL server, affects the performance of both our word recovery and sentence reconstruction attacks.
\begin{algorithm} \caption{Local DPSGD}\label{alg:localdpsgd} \KwProc{clientUpdateDPSGD}{ $\theta_1 \gets \theta_0$\; $\mathcal{B} \gets (\mbox{split dataset into batches of size } B)$\; $\mbox{\textbf{for} local epoch \textit{i} from 1 to $E$ \textbf{do}}$\; $\ \ \mbox{\textbf{for} batch \textit{b} $\in \mathcal{B}$ \textbf{do}}$\; $\ \ \ \ \theta_1 \gets \theta_1 - \eta\nabla \ell(\theta_1;b) + \eta\mathcal{N}(0, \sigma)$\; \Return $\theta_1$\; } \end{algorithm}
Algorithm \ref{alg:localdpsgd} outlines the procedure for DPSGD-like local training, where Gaussian noise of mean $0$ and standard deviation $\sigma$ is added along with each gradient update. Algorithm \ref{alg:singlenoise} details the typical FL client update procedure but adds Gaussian noise to the final model $\theta_1$ before it is returned to the server. In our experiments, everything else is kept the same as described in Section \ref{sec:results_vanilla}.
\begin{algorithm} \caption{Local Single Noise Addition}\label{alg:singlenoise} \KwProc{clientUpdateSingleNoise}{ $\theta_1 \gets \theta_0$\; $\mathcal{B} \gets (\mbox{split dataset into batches of size } B)$\; $\mbox{\textbf{for} local epoch \textit{i} from 1 to $E$ \textbf{do}}$\; $\ \ \mbox{\textbf{for} batch \textit{b} $\in \mathcal{B}$ \textbf{do}}$\; $\ \ \ \ \theta_1 \gets \theta_1 - \eta\nabla \ell(\theta_1;b)$\; \Return $\theta_1 + \mathcal{N}(0, \sigma)$\; } \end{algorithm}
\begin{figure*} \caption{\small Word recovery behaviour when Gaussian noise is added to local FL updates: (a) vanilla word recovery performance, (b) disparity of magnitudes between those words that were present in the dataset and those `noisily' flipped negative, (c) word recovery performance when filtering is used.} \label{fig:removing-noisy} \end{figure*}
\begin{figure*} \caption{\small Word recovery results for two local DP methods.} \label{fig:noiseresults} \end{figure*}
\subsection{Word Recovery Performance} Figure \ref{fig:nocutoff} shows the performance of our word recovery attack against DPSGD-like local training for different levels of $\sigma$. For noise levels of $\sigma = 0.001$ or greater, it can be seen that the F1 score drops significantly. What is happening is that the added noise introduces more negative values in the difference of the final layer parameters, and so results in our attack extracting more words than actually occurred in the dataset, destroying its precision. However, one can eliminate most of these ``noisily'' added words by simple inspection. Figure \ref{fig:mags1} graphs the sorted magnitudes of the negative values in the difference between the final layer parameters of $\theta_1$ and $\theta_0$, after DPSGD-like training with $B = 32, n_k = 256, \mbox{ and } \sigma = 0.001$. We can see that the more epochs the model is trained for, the more words are extracted via our attack. Of the around 600 words extracted after 1000 epochs of training, only about 300 were actually present in the dataset. On this graph, those words with higher magnitudes correspond to the ground truth words. We can therefore simply discard any extracted words whose magnitude falls below a specified threshold.
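A noise-robust variant of Algorithm \ref{alg:word-extraction} along these lines is sketched below; the synthetic numbers and the particular threshold value are illustrative assumptions rather than values prescribed by our experiments.
\begin{lstlisting}[language=Python,frame=single,caption={Word recovery with a magnitude cutoff (illustrative only)},label=lst:cutoffsketch]
import numpy as np

def recover_words_with_cutoff(theta0_bias, theta1_bias, threshold):
    """Keep only negative final-layer differences whose magnitude exceeds the threshold."""
    d = np.asarray(theta1_bias) - np.asarray(theta0_bias)
    negatives = np.where(d < 0)[0]
    return [int(i) for i in negatives if abs(d[i]) > threshold]

# Synthetic example: true updates are ~1e-3 in magnitude, while small Gaussian
# noise flips the sign of many other entries by a much smaller amount.
rng = np.random.default_rng(1)
theta0 = rng.normal(size=9502)
theta1 = theta0 + rng.normal(scale=1e-5, size=9502)
typed = [7437, 4904, 209, 1808, 26, 6314]
theta1[typed] -= 1e-3
print(sorted(recover_words_with_cutoff(theta0, theta1, 5e-4)) == sorted(typed))  # True
\end{lstlisting}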
This drastically improves the performance of the attack; see Figure \ref{fig:cutoff}. It can be seen that even for $\sigma = 0.1$ we now get word recovery results close to those obtained when no noise is added at all. Figure \ref{fig:fedavgsinglenoise} shows the performance of our word recovery attack when noise is added to the local model parameters $\theta_1$ in the FedAveraging setting, with $n_k = 256 \mbox{ and } B = 32$. When $\sigma \ge 0.01$, it can be seen that the performance drops drastically\footnote{Note that in DPSGD the added noise is multiplied by the learning rate $\eta$, and so this factor needs to be taken into account when comparing the $\sigma$ values used in DPSGD above and with single noise addition. This means added noise with standard deviation $\sigma$ for DPSGD corresponds roughly to a standard deviation of $\eta \sqrt{EB}\sigma$ with single noise addition. For $\eta=0.001$, $E=1000$, $B=32$, $\sigma=0.1$ the corresponding single noise addition standard deviation is 0.018.}, even when we use the magnitude threshold trick described previously. With FedSGD (Figures \ref{fig:fedsgddpsgd} and \ref{fig:fedsgdsinglenoise}), we see that with DPSGD-like training these levels of noise are manageable, but for single noise addition only a limited number of words can be recovered before they are lost in the comparatively large amount of added noise. For comparison, in the FL literature on differential privacy the addition of Gaussian noise with standard deviation no more than around 0.001 (and often much less) is typically considered, and it is only added after the update has been transmitted to the coordinating FL server.
\subsection{Sentence Reconstruction Performance} Figure \ref{fig:sr_localdp} shows the measured sentence reconstruction performance with both DPSGD-like training (Figures \ref{fig:levendpsgd0_01} and \ref{fig:levendpsgd0_1}) and when noise is added to the final parameters of the model (Figures \ref{fig:sn0_01} and \ref{fig:sn0_1}). By removing the noisily added words and running our sentence reconstruction attack, we obtain results close to those had we not added any noise, for $\sigma$ up to $0.1$. For the single noise addition method, as these levels of noise are not calibrated, $\sigma = 0.1$ is enough to destroy the quality of the reconstructions; however, these levels of noise also destroy any model utility.
\begin{figure*} \caption{\small Sentence reconstruction performance with local DP: DPSGD-like training ($\sigma = 0.01$ and $\sigma = 0.1$) and single noise addition ($\sigma = 0.01$ and $\sigma = 0.1$).} \label{fig:levendpsgd0_01} \label{fig:levendpsgd0_1} \label{fig:sn0_01} \label{fig:sn0_1} \label{fig:sr_localdp} \end{figure*}
\section{Additional Material} The code for all of the attacks described here, the LSTM model and the datasets used are publicly available on GitHub \href{https://github.com/namilus/nwp-fedlearning}{\texttt{here}}.
\section{Summary And Conclusions} In this paper we introduce two local reconstruction attacks against federated learning when used to train natural language text models. We find that previously proposed attacks (DLG and its variants) targeting image data are ineffective for text data, and so new methods of attack tailored to text data are necessary. Our attacks are simple to carry out, efficient, and highly effective. We illustrate their effectiveness against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app (with $>5$ billion downloads) that has been an early adopter of federated learning for production use. We demonstrate that the words a user types on their mobile handset, e.g.
when sending text messages, can be recovered with high accuracy under a wide range of conditions, and that counter-measures such as the use of mini-batches and the addition of local noise are ineffective. We also show that the word order (and so the actual sentences typed) can be reconstructed with high fidelity. This raises obvious privacy concerns, particularly since GBoard is in production use. Secure multi-party computation methods such as Secure Aggregation, and also methods such as Homomorphic Encryption and Trusted Execution Environments, are potential defences that can improve privacy, but these can be difficult to implement in practice. Secure Aggregation requires users to trust that the server is honest, despite the fact that FL aims to avoid the need for such trust. Homomorphic Encryption implementations that are sufficiently efficient to allow large-scale production use are currently lacking. On a more positive note, the privacy situation may not be quite as bad as it seems given these reconstruction attacks. Firstly, it is not the raw sentences typed by a user that are reconstructed in our attacks but rather the sentences after they have been mapped to tokens in a text model dictionary. Words which are not in the dictionary are mapped to a special \texttt{<UNK>} token. This means that the reconstructed text is effectively redacted, with words not in the dictionary having been masked out. This suggests that a fruitful direction for future privacy research on FL for natural language models may well lie in taking a closer look at the specification of the dictionary used. Secondly, we also note that changing from a word-based text model to a character-based one would likely make our attacks much harder to perform. \end{document}
\begin{document} \preprint{malonlett} \title{Entanglement between an electron and a nuclear spin $\mathbf{\frac{1}{2}}$} \author{M.~Mehring} \author{J.~Mende} \author{W.~Scherer} \affiliation{2.~Physikalisches~Institut, Universit\"at~Stuttgart,\\ Pfaffenwaldring~57, 70550 Stuttgart, Germany} \date{\today} \begin{abstract} We report on the preparation and detection of entangled states between an electron spin 1/2 and a nuclear spin 1/2 in a molecular single crystal. These were created by applying pulses at ESR (9.5 GHz) and NMR (28 MHz) frequencies. Entanglement was detected by using a special entanglement detector sequence based on a unitary back transformation including phase rotation. \end{abstract} \pacs{03.67.-a, 03.65.Ud, 33.35.+r, 76.30.-v} \maketitle The entanglement between two spins 1/2 is at the heart of quantum mechanics. Ever since a so-called ``paradox'' was formulated by Einstein, Podolsky and Rosen (EPR) \cite{einstein:35}, referring to local measurements performed on the individual spins of a delocalized entangled pair, the properties of entanglement and its consequences for quantum physics have been discussed in great detail \cite{greenberger:89}. In the context of quantum information processing (QIP) entanglement has been considered as a resource for quantum parallelism (speedup of quantum computing) \cite{deutsch:85,deutsch:92,grover:97} and quantum cryptography \cite{bennet:84,bennet:92}. A number of these quantum algorithms have been demonstrated in NMR (nuclear magnetic resonance) quantum computing \cite{cory97, gershenfeld:97, knill:98}. In this contribution we report on the experimental preparation and observation of the entangled states of an electron spin $S = 1/2$ and a nuclear spin $I = 1/2$ in a crystalline solid. The spins considered here are a proton and a radical (unpaired electron spin) produced by x-ray irradiation of a malonic acid single crystal \cite{mcconnell:60}. This leads to the partial conversion of the CH$_2$ group of the malonic acid molecule to the radical $^{\bullet}$CH, where the dot marks the electron spin. In a strong magnetic field the following four Zeeman product states $|m_Sm_I\rangle = ~|\uparrow\uparrow\rangle,~|\uparrow\downarrow\rangle,~|\downarrow\uparrow\rangle, ~|\downarrow\downarrow\rangle$ exist, where the arrows label the $\pm 1/2$ states of the electron and the nuclear spin. Equivalently we will use a qubit labelling as $|m_Sm_I\rangle = |00\rangle,~|01\rangle,~|10\rangle,~|11\rangle$. The energy level diagram corresponding to the electron-proton spin system is shown in fig. \ref{4niveau}, where we have also indicated the possible ESR ($\Delta m_S =\pm1$) and NMR transitions ($\Delta m_I =\pm1$) of the individual spins by solid arrows. What we are aiming at are states of the type \begin{equation}\label{Bell} \Psi^\pm = \frac{1}{\sqrt 2}\left(\mid \uparrow\downarrow\rangle\pm \mid \downarrow\uparrow\rangle\right) \mbox{ and } \Phi^\pm = \frac{1}{\sqrt 2}\left(\mid \uparrow\uparrow\rangle\pm \mid \downarrow\downarrow\rangle\right). \end{equation} These represent the four maximally entangled states of a two qubit system, also called the Bell states of two spins 1/2. They correspond to a superposition of the states in fig.~\ref{4niveau} connected by dashed arrows. \begin{figure} \caption{Schematic diagram of the four energy levels of a two spin system with $S=1/2$ and $I=1/2$. The solid arrows denote allowed transitions. The dotted arrows indicate forbidden transitions, corresponding to entangled states.
The phase dependence of the quantum states under $z$-rotations is also indicated (see text).\label{4niveau}} \end{figure} Electron spin resonance (ESR) was performed at X-band (9.49 GHz) at $T = 40$~K. The low temperature was chosen only for reasons of signal-to-noise ratio. The two well resolved ESR lines due to the $^\bullet$CH proton depend on the orientation of the single crystal and were observed at magnetic fields of 338.2~mT and 339.2~mT, with a linewidth of 0.5~mT for the ESR and about 1~MHz for the ENDOR (electron nuclear double resonance) line. This orientation corresponds to a principal axis of the hyperfine tensor. There are two different proton NMR transitions. We applied pulsed ENDOR techniques to one of them at the frequency of 28.05~MHz. In the high temperature approximation we express the Boltzmann spin density matrix as $\hat{\rho}_\mathrm{B} = (1-K_\mathrm{B})\frac{1}{4}\hat{1}+K_\mathrm{B}\cdot\hat{\rho}_\mathrm{P}$ with $K_\mathrm{B} =\mu_\mathrm{B}B_0/k_\mathrm{B}T$ (for $g = 2$), where the pseudo Boltzmann density matrix is defined as $\hat{\rho}_\mathrm{P}=(\frac{1}{4}\hat{1}-\frac{1}{2}\hat{S}_z)$, which corresponds to an equal population of the states $|\downarrow\uparrow\rangle$ and $|\downarrow\downarrow\rangle$ ($p_3=p_4=1/2$) and equivalently $|\uparrow\downarrow\rangle$ and $|\uparrow\uparrow\rangle$ ($p_1=p_2=0$), with $\mathrm{tr}\{\hat{\rho}_\mathrm{P}\}=1$. We used here the assumption that the Larmor frequency of the nuclear spin $\omega_{0I}$ is much smaller than the Larmor frequency of the electron spin $\omega_{0S}$. Note that the pseudo-pure density matrix $\hat{\rho}_{00}$ can be expressed as $\hat{\rho}_{00} = \frac{1}{4}\hat{1}+\frac{1}{2}\hat{S}_z+\frac{1}{2}\hat{I}_z+\hat{S}_z\hat{I}_z=|00\rangle\langle 00|$, corresponding to the pure state $\psi = |00\rangle$. In what follows we will prepare density matrices corresponding to the Bell states according to eqn. (\ref{Bell}). Since we will apply selective transitions we need to consider only a three level subsystem for each of the Bell states. In order to prepare all four states $\Psi^\pm$ and $\Phi^\pm$ we only need to apply transition selective excitations to any of the three level subsystems 1,2,3; 1,2,4; 1,3,4 or 2,3,4. Here we restrict ourselves to the sublevels 1,2,4 for creating the $\Phi^\pm$ states and 1,2,3 to create the $\Psi^\pm$ states. As an example we treat the $\Psi^-$ state in detail. It corresponds to the well known Einstein Podolsky Rosen (EPR) state \cite{einstein:35}. The preparation of $\Psi^-$ proceeds by first applying a selective $\pi$-pulse to the $1\leftrightarrow3$ transition to create the following pseudo pure populations of the 1,2,3 three level subsystem: $p_1= 1/2$, $p_2= 0$, $p_3= 0$. The corresponding pseudo pure density matrix of the three-level subsystem (1,2,3) represents the pseudo pure state $\mid\uparrow\uparrow\rangle$. The creation of the $\Psi^{\pm}$ states can now be achieved by the pulse sequence $P^{12}_I(\mp\pi/2)$ followed by $P^{13}_S(-\pi)$, where the plus sign in $P^{12}_I(\mp\pi/2)$ creates the $\Psi^-$ state. Here we use the abbreviation $P^{jk}_{S,I}(\beta)$ for a selective pulse at the transition $j\leftrightarrow k$ with rotation angle $\beta$. The label $S$ refers to an electron spin transition, whereas the label $I$ stands for a nuclear spin transition.
The corresponding unitary transformation results in \begin{equation}\label{psi} |\uparrow\uparrow\rangle \stackrel{P^{12}_I(\pi/2)}{\longrightarrow}\frac{1}{\sqrt{2}}(|\uparrow\uparrow\rangle +|\uparrow\downarrow\rangle) \stackrel{P^{13}_S(-\pi)}{\longrightarrow}\frac{1}{\sqrt{2}}(|\uparrow\downarrow\rangle -|\downarrow\uparrow\rangle). \end{equation} In order to create $\Psi^+$ the plus sign in $P^{13}_S(\pm\pi)$ must be chosen. For completeness we note that the $\Phi^{\pm}$ states can be created by following the same line of reasoning when starting from the sublevels 1,2,4 with a preparatory $\pi$-pulse at the $2\leftrightarrow4$ transition. Next the pulse sequence $P^{12}_I(\pm\pi/2)$ followed by $P^{24}_S(-\pi)$ is applied, creating the $\Phi^{\pm}$ state up to an overall minus sign. Other scenarios using the other sublevels are possible and will be presented in a more extensive publication. In order to prove that the $\Psi^-$ state has indeed been created we apply a density matrix tomography which is based on the phase dependence of the entangled state as already sketched in fig.~\ref{4niveau}. The phase factors noted there represent the phase dependence of the corresponding states under rotation about the quantization axis (z-axis). This corresponds to the unitary transformations $\hat{U}_{S_z}=\exp(-i\phi_1\hat{S}_z)$ and $\hat{U}_{I_z}=\exp(-i\phi_2\hat{I}_z)$, leading to $\hat{U}_{S_z}\hat{U}_{I_z}|m_Sm_I\rangle = \exp(-i(\phi_1m_S+\phi_2m_I))|m_Sm_I\rangle$. A single ESR transition ($\Delta m_S = \pm 1$) will have a phase dependence $\phi_1$ under z-axis rotation, whereas a single NMR transition ($\Delta m_I = \pm 1$) will have a phase dependence $\phi_2$. Each of the entangled states $\Psi^\pm$ and $\Phi^\pm$ is characterized by a linear combination $\phi_1 \pm \phi_2$ of both phases. This is another manifestation of the fact that these states are global states and no local measurement on the single qubits reveals any information about the entangled state. Since the entangled state is not directly observable we need to transform it to an observable state. Our entangled state detector therefore corresponds to a unitary back transformation comprised of, e.g., $P^{13}_S(-\pi)$ followed by $P^{12}_I(-\pi/2)$ applied to the state $\Psi^-$. In order to distinguish entangled states from other superposition states we encode their character in the phase dependence under z-rotation as discussed before. Therefore we apply phase shifted pulses of the type $P^{13}_S(-\pi,~\phi_1)$ and $P^{12}_I(-\pi/2,~\phi_2)$, which corresponds to a rotation about the z-axis by the angles $\phi_1$ and $\phi_2$. The complete $\Psi^{\pm}$ detector sequence now reads \begin{equation}\label{UdetPsi} \hat{U}^\Psi_\mathrm{d}(\phi_1,~\phi_2)= P^{12}_I(-\pi/2,~\phi_2)\,P^{13}_S(-\pi,~\phi_1). \end{equation} Since our observable is the ESR transition intensity, detected via an electron spin echo, the entangled state tomography corresponds to the evaluation of the following signal strength for $\Psi^-$ \begin{equation}\label{entangdetPsi} S^\Psi_\mathrm{d}(\phi_1,\phi_2) = \mathrm{tr}\left\{2\hat{S}^{13}_z\,\hat{U}^\Psi_\mathrm{d}\,|\Psi^-\rangle\langle\Psi^-|\,\hat{U}^{\Psi\dag}_\mathrm{d}\right\} \end{equation} where $\hat{S}^{13}_z$ is the fictitious spin 1/2 operator of the $1\leftrightarrow 3$ transition. With the current definitions we obtain the expression \begin{equation}\label{detPsi} S^\Psi_\mathrm{d}(\phi_1,\phi_2) = \frac{1}{2}\big(1-\cos(\phi_1-\phi_2)\big).
\end{equation} Alternatively we have used for the detection of the $\Phi^{\pm}$ states the sequence \begin{equation}\label{UdetPhi} \hat{U}^\Phi_\mathrm{d}(\phi_1,~\phi_2)= P^{12}_I(-\pi/2,~\phi_2)\,P^{24}_S(-\pi,~\phi_1), \end{equation} leading to a detector signal for $\Phi^+$ \begin{equation}\label{entangdetPhi} S^\Phi_\mathrm{d}(\phi_1,\phi_2) = \mathrm{tr}\left\{2\hat{S}^{24}_z\,\hat{U}^\Phi_\mathrm{d}\,|\Phi^+\rangle\langle\Phi^+|\,\hat{U}^{\Phi\dag}_\mathrm{d}\right\} \end{equation} with \begin{equation}\label{detPhi} S^\Phi_\mathrm{d}(\phi_1,\phi_2) = \frac{1}{2}\big(1-\cos(\phi_1+\phi_2)\big). \end{equation} A more detailed discussion of the phase dependence of the detector signal will be presented in a more extended publication. The phase shifts were implemented by incrementing the phase of the individual detection pulses in consecutive experiments according to $\phi_j(n) = \Delta\omega_j\,n\,\Delta t$ with $j=1,2$. The artificial phase frequencies $\Delta\omega_j= 2\pi\Delta\nu_j$ were arbitrarily chosen as $\nu_1=2.0$~MHz and $\nu_2=1.5$~MHz. Examples of different phase increments are shown in fig.~\ref{vszphase}. \begin{figure} \caption{Phase interferograms versus time $n~\Delta t$ for four different sets of experiments: (a) $\phi_1=0$, (b) $\phi_2=0$, (c) $\phi_1\neq 0,~\phi_2\neq 0$ for the entangled state $\Psi^-$ with phase dependence $|\phi_1-\phi_2|$ (see eqn. \ref{detPsi}), and (d) $\phi_1\neq 0,~\phi_2\neq 0$ for the entangled state $\Phi^+$ with phase dependence $\phi_1+\phi_2$ (see eqn. \ref{detPhi}).} \label{vszphase} \end{figure} Four different sets of phase variations were chosen to demonstrate the individual and combined phase frequencies. In fig.~\ref{vszphase} a, b we have set (a) $\nu_1=0$ or (b) $\nu_2=0$ in order to demonstrate the $\phi_2$ and $\phi_1$ dependence as a reference. The corresponding spectra (see fig.~\ref{vszphasef} a, b) are obtained after Fourier transformation. These would also be observed for non-entangled superposition states of either ESR ($\nu_1$) or NMR transitions ($\nu_2$). The characteristic signature of the entangled states shows up in the combined phase dependence (see eqns. \ref{detPsi} and \ref{detPhi}). This is demonstrated for the $\Psi^-$ state in fig.~\ref{vszphase} c, where the interferogram already shows the phase difference behavior, which is even more clearly evident from the spectral representation of fig.~\ref{vszphasef}c showing a line at the difference frequency $\nu_1-\nu_2$. In a similar way the $\Phi^{\pm}$ states were created; under the corresponding tomography sequence they show a phase dependence on $\phi_1+\phi_2$, which is evident from the interferogram (fig.~\ref{vszphase}d) and more clearly from the phase spectrum (fig.~\ref{vszphasef}d), leading to a spectral line at $\nu_1+\nu_2$. \begin{figure} \caption{Fourier transform of the phase interferograms shown in fig.~\ref{vszphase}.} \label{vszphasef} \end{figure} Here we have used an ensemble of electron-nuclear spin pairs with a mixed state density matrix. As in all NMR quantum computing experiments published so far, the corresponding entangled states would be better termed {\it pseudo entangled states} \cite{cory97, gershenfeld:97, knill:98}. However, we point out that the same pulse sequences could be applied to a pure state electron-nuclear spin pair in order to create the discussed entangled states. Also the same Bell state detection sequences proposed here would apply. In fact the same phase dependence would be observed for pure states. Moreover, the experiments presented here would reach the quantum limit (see Warren et al.
\cite{warren:97}) according to the PPT (positive partial transpose) criterion \cite{peres:96, horodecki:96} at $\tanh(\hbar\omega_S/2k_BT) = 1/\sqrt{2}$, which corresponds to a temperature $T_Q$ = 2.576~K for an ESR frequency of 95~GHz. Theoretically the scenario reported here might seem to be obvious; experimentally, however, this type of experiment is rather demanding because of the extreme difference between the ESR and NMR linewidths and the durations of the microwave and radio-frequency pulses. We have used the following typical values: $32$~ns for the ESR and $1.6~\mu$s for the NMR $\pi$-pulses. Entanglement is achieved with these in $832$~ns. However, because of the broad ESR and NMR lines a complete excitation of the transitions could not be obtained with these pulses. This leads to errors in the effective rotation angles of the corresponding pulses. As a consequence, the creation of the entangled states is incomplete, leaving residual separable states. These are expected to vary as $\phi_1$ and $\phi_2$. In order to estimate this effect we calculate the resulting phase dependence of the detector signal (electron spin echo) for the deviations $\delta_j$ of the $\pi/2$-pulses at ESR ($\delta_1$) and NMR ($\delta_2$) frequencies, which results in \begin{eqnarray}\label{entangdeterr} S_\mathrm{d}(\phi_1,\,\phi_2) &=& a_0 + a_1\cos\phi_1+a_2\cos\phi_2 \nonumber \\ &&+a_{12}\cos(\phi_1-\phi_2) \end{eqnarray} with $a_1=-\frac{1}{2}\delta_1(1-\delta_1)\delta^2_2$, $a_2=-\frac{1}{4}\delta_1\delta_2$ and $a_{12}=\frac{1}{4}(1-\delta_1^2)(1-\delta_2^2)$ up to fourth order. We note that the pulse errors introduce $\cos\phi_j$ ($j=1,2$) dependencies in addition to the expected $\cos(\phi_1-\phi_2)$ dependence of the entangled state. In order to reduce the $\cos\phi_j$ dependencies in the detector signal we have applied a phase cycling procedure where the two signals $S_d(\phi_1,\,\phi_2)$ and $S_d(\phi_1+\pi,\,\phi_2+\pi)$ were added. This eliminates the $\cos\phi_j$ dependencies but retains the $\cos(\phi_1-\phi_2)$ dependence of the entangled state. This procedure was applied to the data presented in figs.~\ref{vszphase} and \ref{vszphasef}. We acknowledge financial support by the German Bundesministerium f\"ur Bildung und Forschung (BMBF). \end{document}
\begin{document} \title{Hodge theory on generalized normal crossing varieties} \begin{abstract} We generalize some results in Hodge theory to generalized normal crossing varieties. \end{abstract} \section{Introduction} A normal crossing variety is defined to be a variety which is locally isomorphic to a normal crossing divisor in a smooth variety. Similarly we define a {\em generalized} normal crossing variety as a variety which is locally isomorphic to a direct product of normal crossing divisors in smooth varieties. A suitably defined generalized normal crossing {\em divisor} in a generalized normal crossing variety is again a generalized normal crossing variety. In this way a certain induction argument on the dimension works for generalized normal crossing varieties and divisors. For example a reducible divisor often appears and is transformed to a normal crossing divisor by resolution of singularities. Moreover one sometimes has to consider a reducible divisor on a reducible variety during the induction argument on the dimension as in \cite{invent}. A generalized normal crossing variety appears naturally in higher dimensional stable reduction. Indeed a fiber of a semistable family is a normal crossing variety, and a fiber of a generalized semistable family of Abramovich and Karu \cite{AK} is a generalized normal crossing variety. If each irreducible component of the intersections of some of the irreducible components of a generalized normal crossing variety is smooth, then it is called a generalized {\em simple} normal crossing (GSNC) variety. The purpose of this paper is to prove that some Hodge theoretic statements for smooth varieties or pairs can be generalized to generalized simple normal crossing varieties or pairs. It is similar to the extensions in \cite{reducible} to simple normal crossing varieties. Therefore the proofs are similar in many places, but we need additional care in the details. For example we have a cell complex instead of a simplicial complex in Proposition~\ref{sign}. We shall concentrate on the differences from \cite{reducible} in this paper. The results of this paper are as follows. We prove first that there exists a naturally defined cohomological mixed Hodge $\mathbf{Q}$-complex on a projective GSNC pair, a pair consisting of a GSNC variety and a GSNC divisor (Theorem~\ref{absolute case}). By the standard machinery due to Deligne (\cite{Deligne2}), we obtain degenerations of spectral sequences as corollaries. The point is that we have explicit descriptions of the graded pieces (Lemma~\ref{Gr1}) so that concrete calculations are possible. We generalize the above absolute case to the relative case where every closed stratum is assumed to be smooth over the base (Theorem~\ref{relative case}). Then by using the theory of canonical extensions, we prove that higher direct images of the structure sheaves are locally free (Theorem~\ref{canonical extension}). As an application, we prove a generalization of Koll\'ar's vanishing theorem (Theorem~\ref{injective}), thereby modifying the contents of \cite{invent}~\S 4 and correcting an error in the proof of \cite{invent}~Theorem~4.3. Fujino \cite{Fujino} found a simpler alternative proof of its main theorem, but our construction, which shows that the original proof still works, might also be useful. In \S 1, we give definitions concerning the GSNC varieties and GSNC pairs. We prove some lemmas on desingularizations and coverings. In \S 2, we construct a cohomological mixed Hodge $\mathbf{Q}$-complex on GSNC pairs.
The results in \S 2 are generalized to relative settings in \S 3. We extend the results to the case where there are degenerate fibers in \S 4 by using the theory of canonical extensions. We prove a vanishing theorem of Koll\'ar type for GSNC varieties in \S 5 following the original proof of \cite{Kollar1}. It is generalized to the $\mathbf{Q}$-divisor version by using the covering lemma, thereby finishing the modification of the argument in \cite{invent}. We work over the base field $\mathbf{C}$. \section{GSNC pairs} A reduced complex analytic space $X$ is said to be a {\em generalized simple normal crossing variety (GSNC variety)} if the following conditions are satisfied: \begin{enumerate} \item (local) At each point $x \in X$, there exists a complex analytic neighborhood which is isomorphic to a direct product of normal crossing varieties, i.e. varieties isomorphic to normal crossing divisors on smooth varieties. \item (global) Any irreducible component of the intersection of some of the irreducible components of $X$ is smooth. \end{enumerate} The first condition can be put as follows: there exists a complex analytic neighborhood $X_x$ at each point $x$ which is embedded into a smooth variety $V$ with a coordinate system such that $X_x$ is a complete intersection of divisors defined by monomials of coordinates. The {\em level} of $X$ at $x$ is the smallest number of such equations. For example, a fiber of a semistable family (\cite{AK}) satisfies the first condition. $X$ is a local complete intersection, hence Gorenstein and locally equidimensional. A {\em closed stratum} of $X$ is an irreducible component of the intersection of some of the irreducible components of $X$. A closed stratum is smooth by assumption. In the case where $X$ is connected, let $X^{[n]}$ denote, for an integer $n$, the disjoint union of all the closed strata of codimension $n$ in $X$. The combinatorics of the closed strata is described by the {\em dual graph}. It is a cell complex where a closed stratum of codimension $n$ corresponds to an $n$-cell and the inclusion of closed strata corresponds to the boundary relation in the opposite direction. Any cell is a direct product of simplices due to the local structure of $X$. \begin{Prop}\label{sign} There is a Mayer-Vietoris exact sequence \[ 0 \to \mathbf{Q}_X \to \mathbf{Q}_{X^{[0]}} \to \mathbf{Q}_{X^{[1]}} \to \dots \to \mathbf{Q}_{X^{[N]}} \to 0 \] where $N = \dim X$ and the arrows are alternating sums of restriction homomorphisms. \end{Prop} \begin{proof} We assign an arbitrarily fixed orientation to each cell of the dual graph. Then the corresponding chain complex has boundary maps given by alternating sums. Namely the sign of a boundary map for each pair of closed strata is positive or negative according to whether the orientations are compatible or not. The sign convention of the sequence of the constant sheaves is defined accordingly. Since each cell is contractible, the chain complex is locally exact, hence the sequence of sheaves is exact. \end{proof} A Cartier divisor $D$ on $X$ is said to be {\em permissible} if it does not contain any stratum. We denote by $D \vert_Y$ the induced Cartier divisor on a closed stratum $Y$. A {\em generalized simple normal crossing divisor (GSNC divisor)} $B$ is a permissible Cartier divisor such that $B \vert_Y$ is a reduced simple normal crossing divisor for each closed stratum $Y$. In this case $B$ is again a GSNC variety of codimension $1$ in $X$.
Namely a GSNC divisor on a GSNC variety of level $r$ is a GSNC variety of level $r+1$. Such an inductive structure may be an advantage. If $D$ is a permissible $\mathbf{Q}$-Cartier divisor whose support is a GSNC divisor, then the round up $\ulcorner D \urcorner$ is well defined. We have $\ulcorner D \urcorner \vert_Y = \ulcorner D \vert_Y \urcorner$ for any closed stratum $Y$. A {\em generalized simple normal crossing pair (GSNC pair)} $(X,B)$ consists of a GSNC variety $X$ and a GSNC divisor $B$ on it. For each closed stratum $Y$ of $X$, we denote $B_Y = B \vert_Y$. We denote by $B_{X^{[n]}}$ the union of all the simple normal crossing divisors $B_Y$ for irreducible components $Y$ of $X^{[n]}$. A {\em closed stratum} of the pair $(X,B)$ is an irreducible component of the intersection of some of the irreducible components of $X$ and $B$. A morphism $f: X' \to X$ between two GSNC varieties is said to be a {\em permissible birational morphism} if it induces a bijection between the sets of closed strata of $X$ and $X'$, and birational morphisms between closed strata. A smooth subvariety $C$ of $X$ is said to be a {\em permissible center} for a GSNC pair $(X,B)$ if the following conditions are satisfied: \begin{enumerate} \item $C$ does not contain any closed stratum, and the scheme theoretic intersections $C_Y$ of $C$ and the closed strata $Y$ are smooth. \item At each point $x \in C_Y$, there exists a coordinate system in a neighborhood of $x$ such that $B_Y$ and $C_Y$ are defined by ideals generated by monomials. \end{enumerate} A blowing up $f: X' \to X$ whose center is permissible with respect to a GSNC pair $(X,B)$ is called a {\em permissible blowing up}. It is a permissible birational morphism from another GSNC variety, and the union of the strict transform of $B$ and the exceptional divisor is again a GSNC divisor. The following is a corollary of a theorem of Hironaka \cite{Hironaka}. \begin{Lem}\label{resolution} Let $(X,B)$ be a GSNC pair. Consider one of the following situations: (a) Let $f: X \dashrightarrow S$ be a rational map to another variety whose domain of definition has non-empty intersection with each closed stratum of the pair $(X,B)$. (b) Let $Z$ be a closed subset which does not contain any closed stratum of the pair $(X,B)$. Then there exists a tower of permissible blowing-ups \[ g: (X_n,B_n) \to (X_{n-1},B_{n-1}) \to \dots \to (X_0,B_0)=(X,B) \] where $B_{i+1}$ is the union of the strict transform of $B_i$ and the exceptional divisor, satisfying the following, respectively to the situations (a) and (b): (a) There is a morphism $h: X_n \to S$ such that $h = f \circ g$. (b) The union $\bar B_n = B_n \cup g^{-1}(Z)$ is a GSNC divisor on $X_n$. \end{Lem} \begin{proof} (a) We reduce inductively the indeterminacy locus of the rational map $f$. For an irreducible component $Y$ of $X$, the restriction of $f$ to $Y$ is resolved by a tower of permissible blowing-ups of the pair $(Y,B_Y)$ by the theorem of Hironaka. Since the indeterminacy locus of $f$ does not contain any closed stratum, the centers of the blowing-ups do not contain any closed stratum either. Moreover this property is preserved in the process. Therefore the same centers determine a tower of permissible blowing-ups of $(X,B)$. If we do the same process for all the irreducible components of $X$, then we obtain the assertion. (b) is similar to (a). We resolve $Z$ at each irreducible component $Y$ of $X$, since $Z$ does not contain any closed stratum of the pair $(Y,B_Y)$. 
\end{proof} We generalize a covering lemma \cite{AB}: \begin{Lem}\label{covering} Let $(X,B)$ be a quasi-projective GSNC pair. Let $D_j$ ($j=1,\dots,r$) be permissible reduced Cartier divisors which satisfy the following conditions: \begin{enumerate} \item $D_j \subset B$. \item $D_j$ and $D_{j'}$ have no common irreducible components if $j \ne j'$. \item $D_j \cap Y$ are smooth for all $D_j$ and closed strata $Y$ of $X$. \end{enumerate} Let $d_j$ be rational numbers, and $D = \sum d_jD_j$. Then there exists a finite surjective morphism from another GSNC pair $\pi: (X',B') \to (X,B)$ such that $B' = \pi^{-1}(B)$ and $\pi^*D$ has integral coefficients. \end{Lem} \begin{proof} The proof is similar to \cite{AB}~Theorem~17, where the $D_j$ play the role of the irreducible components of $D$. \end{proof} \section{Hodge theory on a GSNC pair} We construct explicitly the mixed Hodge structures of Deligne \cite{Deligne2} for GSNC pairs. Let $(X,B)$ be a GSNC pair. We do not assume that $X$ is compact at the moment. We define a cohomological mixed Hodge complex on $X$ when $X$ is projective. We can extend the construction in \cite{reducible} to a GSNC pair, but we use here the de Rham complex of Du Bois. The underlying local system is the constant sheaf $\mathbf{Q}$ in our case, while it is different and more difficult in the former case. We define a {\em de Rham complex} $\bar{\Omega}_X^{\bullet}(\log B)$ by the following Mayer-Vietoris exact sequence: \[ \begin{split} &0 \to \bar{\Omega}_X^{\bullet}(\log B) \to \Omega^{\bullet}_{X^{[0]}}(\log B_{X^{[0]}}) \to \Omega^{\bullet}_{X^{[1]}}(\log B_{X^{[1]}}) \to \\ &\dots \to \Omega^{\bullet}_{X^{[N]}}(\log B_{X^{[N]}}) \to 0 \end{split} \] where $N = \dim X$ and the arrows are the alternating sums of the restriction homomorphisms, where the signs are defined as in Proposition~\ref{sign}. We have $\bar{\Omega}_X^0 \cong \mathcal{O}_X$, and \[ Ri_*\mathbf{C}_{X \setminus B} \cong \bar{\Omega}_X^{\bullet}(\log B) \] for the open immersion $i: X \setminus B \to X$. We define a {\em weight filtration} on the complex $\bar{\Omega}^{\bullet}_X(\log B)$ by \[ \begin{split} &0 \to W_q(\bar{\Omega}^{\bullet}_X(\log B)) \to W_q(\Omega^{\bullet}_{X^{[0]}}(\log B_{X^{[0]}})) \to W_{q+1}(\Omega^{\bullet}_{X^{[1]}}(\log B_{X^{[1]}})) \to \\ &\dots \to W_{q+N}(\Omega^{\bullet}_{X^{[N]}}(\log B_{X^{[N]}})) \to 0 \end{split} \] where the $W$'s from the second terms denote the filtration with respect to the order of log poles. For example, $W_q(\Omega^{\bullet}_{X^{[0]}}(\log B_{X^{[0]}}))$ is the subcomplex of $\Omega^{\bullet}_{X^{[0]}}(\log B_{X^{[0]}})$ consisting of logarithmic forms on $X^{[0]}$ whose log poles along $B_{X^{[0]}}$ have order at most $q$. Before we define a weight filtration on the $\mathbf{Q}$-level object, we recall the definition of a convolution of a complex of objects in a triangulated category \cite{GM}. Let \[ a_0 \to a_1 \to \dots \to a_{n-1} \to a_n \] be a complex of objects. If there exists a sequence of distinguished triangles \[ b_{t+1}[-1] \to b_t \to a_t \to b_{t+1} \\ \] for $0 \le t < n$ with an isomorphism $b_n \to a_n$, then $b_0$ is said to be a {\em convolution} of the complex. A convolution may not exist and may not be unique if it exists. We also need to define a {\em canonical filtration} of a complex. 
If $a_{\bullet}$ is a complex in an abelian category, then we define \[ \tau_{\le q}(a_{\bullet})_n = \begin{cases} a_n & \text{if } n < q \\ \text{Ker}(a_q \to a_{q+1}) & \text{if } n = q \\ 0 & \text{if } n > q \end{cases} \] so that $H^n(\tau_{\le q}(a_{\bullet})) \cong H^n(a_{\bullet})$ if $n \le q$ and $\cong 0$ otherwise. In the same way as \cite{reducible}, we can define a {\em weight filtration} $W_q(Ri_*\mathbf{Q}_{X \setminus B})$ as a convolution of the following complex of objects \[ \begin{split} &\tau_{\le q}(R(i_0)_*\mathbf{Q}_{X^{[0]} \setminus B_{X^{[0]}}}) \to \tau_{\le q+1}(R(i_1)_*\mathbf{Q}_{X^{[1]} \setminus B_{X^{[1]}}}) \to \\ &\dots \to \tau_{\le q+N}(R(i_N)_*\mathbf{Q}_{X^{[N]} \setminus B_{X^{[N]}}}) \end{split} \] where $\tau$ denotes the canonical filtration and $i_p: X^{[p]} \setminus B_{X^{[p]}} \to X^{[p]}$ are open immersions, such that there are isomorphisms \[ W_q(Ri_*\mathbf{Q}_{X \setminus B}) \otimes \mathbf{C} \cong W_q(\bar{\Omega}_X^{\bullet}(\log B)) \] for all $q$. We define the {\em Hodge filtration} as a stupid filtration: \[ F^p(\bar{\Omega}^{\bullet}_X(\log B)) = \bar{\Omega}^{\ge p}_X(\log B). \] Then we have the following, whose proof can be compared to \cite{reducible}~Lemma~3.2: \begin{Lem}\label{Gr1} \[ \begin{split} &\text{Gr}_q^W(Ri_*\mathbf{Q}_{X \setminus B}) \cong \bigoplus_{p \ge 0,\dim X^{[p]}-\dim Y=p+q} \mathbf{Q}_Y[-2p-q] \\ &\text{Gr}_q^W(\bar{\Omega}^{\bullet}_X(\log B)) \cong \bigoplus_{p \ge 0,\dim X^{[p]}-\dim Y=p+q} \Omega_Y^{\bullet}[-2p-q] \\ &F^r(\text{Gr}_q^W(\bar{\Omega}^{\bullet}_X(\log B))) \cong \bigoplus_{p \ge 0,\dim X^{[p]}-\dim Y=p+q} \Omega_Y^{\ge r-p-q}[-2p-q] \end{split} \] where the $p$ run over all non-negative integers and the $Y$ run over all the closed strata of the pair $(X^{[p]},B_{X^{[p]}})$ of codimension $p+q$. \end{Lem} \begin{proof} We have \[ \begin{split} &\text{Gr}_q^W(Ri_*\mathbf{Q}_{X \setminus B}) \cong \bigoplus_p R^{p+q}(i_p)_*\mathbf{Q}_{X^{[p]} \setminus B_{X^{[p]}}} [-2p-q] \\ &\cong \bigoplus_{p \ge 0,\dim X^{[p]}-\dim Y=p+q} \mathbf{Q}_Y[-2p-q] \\ &\text{Gr}_q^W(\bar{\Omega}^{\bullet}_X(\log B)) \cong \bigoplus_p \text{Gr}_{p+q}^W(\Omega^{\bullet}_{X^{[p]}} (\log B_{X^{[p]}}))[-p] \\ &\cong \bigoplus_{p \ge 0,\dim X^{[p]}-\dim Y=p+q} \Omega_Y^{\bullet}[-2p-q] \\ &F^r(\text{Gr}_q^W(\bar{\Omega}^{\bullet}_X(\log B))) \cong \bigoplus_p \text{Gr}_{p+q}^W(F^r(\Omega^{\bullet}_{X^{[p]}} (\log B_{X^{[p]}})))[-p] \\ &\cong \bigoplus_{p \ge 0,\dim X^{[p]}-\dim Y=p+q} \Omega_Y^{\ge r-p-q}[-2p-q] \end{split} \] \end{proof} As a corollary, we have the following, whose proof is similar to \cite{reducible}~Theorem 3.3: \begin{Thm}\label{absolute case} Let $(X,B)$ be a GSNC pair. Assume that $X$ is projective. Then \[ ((Ri_*\mathbf{Q}_{X \setminus B}, W), (\bar{\Omega}^{\bullet}_X(\log B), W, F)) \] is a cohomological mixed Hodge $\mathbf{Q}$-complex on $X$. \end{Thm} \begin{proof} If $Y$ is a closed stratum of the pair $(X^{[p]},B_{X^{[p]}})$ of codimension $p+q$, then \[ (\mathbf{Q}_Y,\Omega_Y^{\bullet},F(-p-q)) \] is a cohomological Hodge $\mathbf{Q}$-complex of weight $2(p+q)$, where $F(-p-q)^r=F^{r-p-q}$. Hence \[ (\mathbf{Q}_Y[-2p-q],\Omega_Y^{\bullet}[-2p-q],F(-p-q)[-2p-q]) \] is a cohomological Hodge $\mathbf{Q}$-complex of weight $2(p+q)-2p-q=q$.
\end{proof} \begin{Cor}\label{degeneration1} The weight spectral sequence \[ {}_WE_1^{p,q} = H^{p+q}(\text{Gr}_{-p}^W(Ri_*\mathbf{Q}_{X \setminus B})) \Rightarrow H^{p+q}(\mathbf{Q}_{X \setminus B}) \] degenerates at $E_2$, and the Hodge spectral sequence \[ {}_FE_1^{p,q} = H^q(\bar{\Omega}_X^p(\log B)) \Rightarrow H^{p+q}(\bar{\Omega}_X^{\bullet}(\log B)) \] degenerates at $E_1$. \end{Cor} \begin{proof} This is \cite{Deligne3}~8.1.9. \end{proof} When $B=0$, we have more degenerations: \begin{Cor}\label{degeneration10} The weight spectral sequence of the Hodge pieces \[ {}_WE_1^{p,q} = H^{p+q}(\text{Gr}_{-p}^W(\bar{\Omega}_X^r)) \Rightarrow H^{p+q}(\bar{\Omega}_X^r) \] degenerates at $E_2$ for all $r$. \end{Cor} \begin{proof} The differentials ${}_Wd_1^{p,q}$ of the weight spectral sequence in the previous corollary preserve the degree of the differentials. Therefore the $E_2$-degeneration of the third spectral sequence is a consequence of the first two degenerations. \end{proof} \section{Relativization} We can easily generalize the results in the previous section to the relative setting when all the closed strata are smooth over the base. We consider the following situation: $(X,B)$ is a pair of a GSNC variety and a GSNC divisor, $S$ is a smooth algebraic variety, which is not necessarily complete, and $f: X \to S$ is a projective surjective morphism. We assume that, for each closed stratum $Y$ of the pair $(X,B)$, the restriction $f \vert_Y: Y \to S$ is smooth. We define a {\em relative de Rham complex} $\bar{\Omega}_{X/S}^{\bullet}(\log B)$ by the following exact sequence \[ \begin{split} &0 \to \bar{\Omega}_{X/S}^{\bullet}(\log B) \to \bar{\Omega}_{X^{[0]}/S}^{\bullet}(\log B_{X^{[0]}}) \to \bar{\Omega}_{X^{[1]}/S}^{\bullet}(\log B_{X^{[1]}}) \to \\ &\dots \to \bar{\Omega}_{X^{[N]}/S}^{\bullet}(\log B_{X^{[N]}}) \to 0. \end{split} \] In particular we have \[ \bar{\Omega}^0_{X/S}(\log B) \cong \mathcal{O}_{X}. \] A {\em weight filtration} on the complex $\bar{\Omega}^{\bullet}_{X/S}(\log B)$ is defined by using the filtration with respect to the order of log poles on the closed strata as in the previous section: \[ \begin{split} &0 \to W_q(\bar{\Omega}_{X/S}^{\bullet}(\log B)) \to W_q(\bar{\Omega}_{X^{[0]}/S}^{\bullet}(\log B_{X^{[0]}})) \\ &\to W_{q+1}(\bar{\Omega}_{X^{[1]}/S}^{\bullet}(\log B_{X^{[1]}})) \to \\ &\dots \to W_{q+N}(\bar{\Omega}_{X^{[N]}/S}^{\bullet}(\log B_{X^{[N]}}) \to 0. \end{split} \] We define a {\em Hodge filtration} by \[ F^p(\bar{\Omega}^{\bullet}_X(\log B)) = \bar{\Omega}^{\ge p}_X(\log B). \] \begin{Lem} There is an isomorphism \[ Ri_*\mathbf{C}_{X \setminus B} \otimes f^{-1}\mathcal{O}_S \cong \bar{\Omega}_{X/S}^{\bullet}(\log B) \] such that \[ W_q(Ri_*\mathbf{C}_{X \setminus B}) \otimes f^{-1}\mathcal{O}_S \cong W_q(\bar{\Omega}^{\bullet}_{X/S}(\log B)). \] \end{Lem} We have again: \begin{Lem} \[ \begin{split} &\text{Gr}_q^W(\bar{\Omega}^{\bullet}_{X/S}(\log B)) \cong \bigoplus_{p \ge 0,dim X^{[p]}-\dim Y=p+q} \Omega_{Y/S}^{\bullet}[-2p-q] \\ &F^r(\text{Gr}_q^W(\bar{\Omega}^{\bullet}_{X/S}(\log B))) \cong \bigoplus_{p \ge 0,dim X^{[p]}-\dim Y=p+q} \Omega_{Y/S}^{\ge r-p-q} [-2p-q] \end{split} \] where the $p$ run over all non-negative integers and the $Y$ run over all the closed strata of the pair $(X^{[p]},B_{X^{[p]}})$ of codimension $p+q$. 
\end{Lem} The following theorem and corollaries are similar to \cite{reducible}~Theorem~4.1 and Corollary~4.2: \begin{Thm}\label{relative case} \[ ((R^n(f \circ i)_*\mathbf{Q}_{X \setminus B}, W(-n)), (R^nf_*\bar{\Omega}_{X/S}^{\bullet}(\log B), W(-n), F)) \] is a variation of rational mixed Hodge structures on $S$. \end{Thm} \begin{Cor}\label{degeneration2} The relative weight spectral sequence \[ {}_WE_1^{p,q} = R^{p+q}f_*\text{Gr}_{-p}^W(Ri_*\mathbf{Q}_{X \setminus B}) \Rightarrow R^{p+q}(f \circ i)_*\mathbf{Q}_{X \setminus B} \] degenerates at $E_2$, and the relative Hodge spectral sequence \[ {}_FE_1^{p,q} = R^qf_*\bar{\Omega}_{X/S}^p(\log B) \Rightarrow R^{p+q}f_*\bar{\Omega}_{X/S}^{\bullet}(\log B) \] degenerates at $E_1$. \end{Cor} If $B=0$, then we have again: \begin{Cor}\label{degeneration20} The weight spectral sequence of the Hodge pieces \[ {}_{W,r}E_1^{p,q} = R^{p+q}f_*\text{Gr}_{-p}^W(\bar{\Omega}_{X/S}^r) \Rightarrow R^{p+q}f_*(\bar{\Omega}_{X/S}^r) \] degenerates at $E_2$ for all $r$. \end{Cor} \section{Canonical extension} We prove local freeness theorem by using the theory of canonical extensions when the degeneration locus is a simple normal crossing divisor. Let $(S, B_S)$ be a pair of a smooth projective variety and a simple normal crossing divisor. We denote $S^o = S \setminus B_S$. Let $H_{\mathbf{Q}}$ be a local system on $S^o$ which underlies a variation of mixed Hodge $\mathbf{Q}$-structures. Assuming that all the local monodromies around the branches of $B_S$ are quasi-unipotent, we define the {\em lower canonical extension} $\tilde{\mathcal{H}}$ of $\mathcal{H} = H_{\mathbf{Q}} \otimes \mathcal{O}_{S^o}$ as follows. We take an arbitrary point $s \in B_S$ at the boundary. Let $N_i=\log T_i$ be the logarithm of the local monodromies $T_i$ around the branches of $B_S$ around $s$, and let $t_i$ be the local coordinates corresponding to the branches. Here we select a branch of the logarithmic function so that the eigenvalues of $N_i$ belong to the interval $2\pi \sqrt{-1}(-1,0]$. Then the lower canonical extension $\tilde{\mathcal{H}}$ is defined as a locally free sheaf on $S$ which is generated by local sections of the form $\tilde v= \text{exp}(-\sum_i N_i\log t_i/2\pi \sqrt{-1})(v)$ near $s$, where the $v$ are flat sections of $H_{\mathbf{Q}}$. We note that the monodromy actions on $v$ and the logarithmic functions are canceled and the $\tilde v$ are single-valued holomorphic sections of $\mathcal{H}$ outside the boundary divisors. By \cite{Schmid}, the Hodge filtration of $\mathcal{H}$ extends to a filtration by locally free subsheaves, which we denote again by $F$. Let $f: X \to S$ be a surjective morphism between smooth projective varieties which is smooth over $S^o$. We denote $X^o=f^{-1}(S^o)$ and $f^o = f \vert_{X^o}$. Let $H_{\mathbf{Q}}^q = R^qf^o_*\mathbf{Q}_{X^o}$ for an integer $q$, let $\mathcal{H}^q = H_{\mathbf{Q}}^q \otimes \mathcal{O}_{S^o}$, and let $\tilde{\mathcal{H}}^q$ be the canonical extension. Then by Koll\'ar \cite{Kollar2} and Nakayama \cite{Nakayama}, there is an isomorphism \[ R^qf_*\mathcal{O}_X \cong \text{Gr}_F^0(\tilde{\mathcal{H}}^q) \] which extends the natural isomorphism \[ R^qf^o_*\mathcal{O}_{X^o} \cong \text{Gr}_F^0(\mathcal{H}^q). \] The following theorem will be used in the next section: \begin{Thm}\label{canonical extension} Let $X$ be a projective GSNC variety, $(S,B_S)$ a pair of a smooth projective variety and a simple normal crossing divisor, and let $f: X \to S$ be a projective surjective morphism. 
Assume that, for each closed stratum $Y$ of $X$, the restriction $f \vert_Y: Y \to S$ is surjective and smooth over $S^o = S \setminus B_S$. Denote $X^o=f^{-1}(S^o)$ and $f^o = f \vert_{X^o}$. For integers $q$, let $H_{\mathbf{Q}}^q = R^qf^o_*\mathbf{Q}_{X^o}$ be the local system on $S^o$ which underlies a variation of mixed Hodge $\mathbf{Q}$-structures defined in the preceeding section. Let $\mathcal{H}^q = H_{\mathbf{Q}}^q \otimes \mathcal{O}_{S^o}$, and let $\tilde{\mathcal{H}}^q$ be their canonical extensions. Then the following hold: (1) The weight spectral sequence of a Hodge piece \[ {}_{W,0}E_1^{p,q} = R^{p+q}f_*\text{Gr}_{-p}^W(\mathcal{O}_X) \Rightarrow R^{p+q}f_*\mathcal{O}_X \] degenerates at $E_2$. (2) There are isomorphisms \[ R^qf_*\mathcal{O}_X \cong \text{Gr}_F^0(\tilde{\mathcal{H}}^q) \] for all integers $q$ which extend the natural isomorphisms \[ R^qf^o_*\mathcal{O}_{X^o} \cong \text{Gr}_F^0(\mathcal{H}^q) \] in Corollary~\ref{degeneration2}. (3) $R^qf_*\mathcal{O}_X$ are locally free sheaves. \end{Thm} \begin{proof} We extend the definition of the weight filtration on $\mathcal{O}_{X^o} = \text{Gr}_F^0(\bar{\Omega}_{X^o/S^o}^{\bullet})$ to $\mathcal{O}_X$ by an exact sequence \[ \begin{split} &0 \to W_q(\mathcal{O}_X) \to W_q(\mathcal{O}_{X^{[0]}}) \to W_{q+1}(\mathcal{O}_{X^{[1]}}) \to \\ &\dots \to W_{q+N}(\mathcal{O}_{X^{[N]}}) \to 0 \end{split} \] where $W_q(\mathcal{O}_{X^{[p]}})=\mathcal{O}_{X^{[p]}}$ for $q \ge 0$, and $=0$ otherwise, for any $p$. By the canonical extension, we extend the $E_2$-degeneration of the weight spectral sequence from $S^o$ to $S$ as in \cite{reducible}~Theorem~5.1. Then we reduce the assertion to the theorem of Koll\'ar and Nakayama. \end{proof} \section{Koll\'ar type vanishing} We shall generalize the vanishing theorem of Koll\'ar \cite{Kollar1} to GSNC varieties: \begin{Thm}\label{injective} Let $X$ be a projective GSNC variety, $S$ a normal projective variety, $f: X \to S$ a projective surjective morphism, and let $L$ be a permissible Cartier divisor on $X$ such that $\mathcal{O}_X(m_1L)$ is generated by global sections for a positive integer $m_1$. Assume that $\mathcal{O}_X(m_2L) \cong f^*\mathcal{O}_Z(L_S)$ for a positive integer $m_2$ and an ample Cartier divisor $L_S$ on $S$, and for each closed stratum $Y$ of $X$, the restricted morphism $f \vert_Y: Y \to S$ is surjective. Then the following hold: (1) Let $s \in H^0(X,\mathcal{O}_X(nL))$ be a non-zero section for some positive integer $n$ such that the corresponding Cartier divisor $D$ is permissible. Then the natural homomorphisms given by $s$ \[ H^p(X, \mathcal{O}_X(K_X+L)) \to H^p(X, \mathcal{O}_X(K_X+L+D)) \] are injective for all $p$. (2) $H^q(S, R^pf_*\mathcal{O}_X(K_X+L))=0$ for $p \ge 0$ and $q > 0$. (3) $R^pf_*\mathcal{O}_X(K_X)$ are torsion free for all $p$. \end{Thm} \begin{proof} We follow closely the proof of \cite{Kollar1}~Theorem~2.1 and 2.2. We use the same numbering of the steps. {\em Step 1}. We may assume, and will assume from now, that $\mathcal{O}_X(L)$ is generated by global sections. We achieve this reduction by constructing a Kummer covering and taking the Galois invariant part as in loc. cit. \vskip 1pc {\em Step 2}. We prove the dual statement of (1) in the case where $n=1$ and $D$ is generic: Let $s,s' \in H^0(X,L)$ be general members, and let $D,D'$ be the corresponding permissible Cartier divisors. Then the natural homomorphisms given by $s$ \[ H^p(X, \mathcal{O}_X(-D-D')) \to H^p(X, \mathcal{O}_X(-D')) \] are surjective for all $p$. 
We go into details in this step in order to show how to generalize the argument in loc. cit. to the GSNC case. Since $s$ and $s'$ are general, $D,D',D+D'$ and $D \cap D'$ are also GSNC varieties. We consider the following commutative diagram: \[ \begin{CD} H^{p-1}(\mathcal{O}_{D+D'}) @>>> H^p(\mathcal{O}_X(-D-D')) @>>> H^p(\mathcal{O}_X) @>d_p>> H^p(\mathcal{O}_{D+D'}) \\ @V{b'_{p-1}}VV @VVV @V=VV @V{b'_p}VV \\ H^{p-1}(\mathcal{O}_{D'}) @>>> H^p(\mathcal{O}_X(-D')) @>>> H^p(\mathcal{O}_X) @>{e'_p}>> H^p(\mathcal{O}_{D'}) \end{CD} \] In order to prove our assertion, we shall prove that (a) $b'_{p-1}$ is surjective, and (b) $\text{Ker}(d_p) = \text{Ker}(e'_p)$. (a) We consider the following Mayer-Vietoris exact sequence \[ \begin{CD} H^{p-1}(\mathbf{C}_{D \cap D'}) @>{\bar a_p}>> H^p(\mathbf{C}_{D+D'}) @>{\bar b_p+\bar b'_p}>> H^p(\mathbf{C}_D) \oplus H^p(\mathbf{C}_{D'}) @>{\bar c_p-\bar c'_p}>> H^p(\mathbf{C}_{D \cap D'}) \end{CD} \] whose $\text{Gr}^0_F$ is \[ \begin{CD} H^{p-1}(\mathcal{O}_{D \cap D'}) @>{a_p}>> H^p(\mathcal{O}_{D+D'}) @>{b_p+b'_p}>> H^p(\mathcal{O}_D) \oplus H^p(\mathcal{O}_{D'}) @>{c_p-c'_p}>> H^p(\mathcal{O}_{D \cap D'}). \end{CD} \] There is a path connecting $D$ and $D'$ in the linear system $\vert L \vert$ on $X$ which gives a diffeomorphism of pairs $(D,D \cap D') \to (D',D \cap D')$ fixing $D \cap D'$. Therefore we have an equality $\text{Im}(\bar c_p)=\text{Im}(\bar c'_p)$, which implies that $\bar b'_p$ is surjective. Hence so is $b'_p$. (b) It is sufficient to prove that $\text{Im}(d_p) \cap \text{Ker}(b'_p) = 0$. Using a path connecting $D$ and $D'$, we deduce that $\text{Ker}(e_p)=\text{Ker}(e'_p)$ for $e_p: H^p(\mathcal{O}_X) \to H^p(\mathcal{O}_D)$. Thus \[ \text{Im}(d_p) \cap \text{Ker}(b'_p) = \text{Im}(d_p) \cap \text{Ker}(b_p). \] Therefore it is sufficient to prove that $\text{Im}(a_p) \cap \text{Im}(d_p) = 0$. We shall prove that \[ \text{Im}(a_p) \cap \text{Im}(d_p) \cap W_q(H^p(\mathcal{O}_{D+D'}))= 0 \] by induction on $q$. Let \[ \begin{split} &a_{p,q}: \text{Gr}^W_q(H^{p-1}(\mathcal{O}_{D \cap D'})) \to \text{Gr}^W_q(H^p(\mathcal{O}_{D+D'})) \\ &d_{p,q}: \text{Gr}^W_q(H^p(\mathcal{O}_X)) \to \text{Gr}^W_q(H^p(\mathcal{O}_{D+D'})). \end{split} \] be the natural homomorphisms. Then it is sufficient to prove that \[ \text{Im}(a_{p,q}) \cap \text{Im}(d_{p,q}) = 0. \] For $A=X,D\cap D'$ or $D+D'$, we have the following spectral sequences \[ {}_{W,A}E_1^{r,s} = H^{r+s}(\text{Gr}_{-r}^W(\mathcal{O}_A)) =\bigoplus_{\dim A - \dim Y = r} H^s(\mathcal{O}_Y) \Rightarrow H^{r+s}(\mathcal{O}_A). \] Therefore the boundary homomorphism $d_{p,q}$ is induced from the sum of homomorphisms $H^s(\mathcal{O}_Y) \to H^s(\mathcal{O}_{Y'})$ such that $s = q$, $Y \subset X$, $Y' \subset D+D'$, $Y' \subset Y$ and \[ r = \dim X - \dim Y = \dim (D+D') - \dim Y' = p - q \] hence $Y' = Y \cap D$ or $Y' = Y \cap D'$. On the other hand, $a_{p,q}$ is induced from the sum of homomorphisms $H^s(\mathcal{O}_{Y''}) \to H^s(\mathcal{O}_{Y'})$ such that $s = q$, $Y'' \subset D \cap D'$, $Y' \subset D+D'$, $Y'' = Y'$ and \[ r = \dim (D \cap D') - \dim Y'' + 1 = \dim (D+D') - \dim Y' = p - q. \] Therefore there is no closed stratum $Y'$ of $D+D'$ which receives non-trivial images from both $Y$ and $Y''$, hence we have our assertion. \vskip 1pc {\em Step 3}. (1) in the case where $n=2^d-1$ for a positive integer $d$ and $D$ generic is an immediate corollary of Step 2. {\em Step 4}. Proof of (2) is the same as in loc. cit. \vskip 1pc {\em Step 5}. Proof of (3). 
This is a generalization of Step 2. We use the notation there. $D'$ is again generic but $D$ is special. More precisely, $D$ is special along a fiber $f^{-1}(s)$ over a point $s \in S$ and generic otherwise. Therefore the intersection $D \cap D'$ is still generic, hence a GSNC variety. Let $\mu: \tilde X \to X$ be the blowing-up along the center $D \cap D'$. $\tilde X$ is again a GSNC variety. We denote by $\tilde Y$ the closed stratum of $\tilde X$ above a closed closed stratum $Y$ of $X$. Let $g: \tilde X \to \mathbf{P}^1$ be the morphism induced from the linear system spanned by $D$ and $D'$. Let $U \subset \mathbf{P}^1$ be an open dense subset such that the restricted morphisms $g \vert_{\tilde Y}: \tilde Y \to \mathbf{P}^1$ are smooth over $U$ for all the closed strata $Y$. (a) In order to prove that $b'_p$ is surjective, we need to prove an inclusion $\text{Im}(c'_p) \subset \text{Im}(c_p)$. For this purpose, we shall prove \[ \text{Im}(c'_p) = \text{Im}(c_p \circ e_p). \] Since $c_p \circ e_p = c'_p \circ e'_p$, the right hand side is contained in the left hand side. The other direction is the essential part. We note that both $c'_p$ and $c_p \circ e_p$ are parts of homomorphisms of mixed Hodge structures. Let $\tilde X_U = g^{-1}(U)$ and $\tilde Y_U = \tilde Y \cap \tilde X_U$. Let $H^p_{\text{prim}}(\mathbf{C}_{\tilde Y \cap D'}) \subset H^p(\mathbf{C}_{\tilde Y \cap D'})$ be the subspace of the primitive cohomologies, and $R^pg_{*,\text{prim}}\mathbf{C}_{\tilde Y_U} \subset R^pg_*\mathbf{C}_{\tilde Y_U}$ the corresponding local subsystem. Then natural homomorphisms \[ \begin{split} &R^pg_*\mathbf{C}_{\tilde X_U} \to H^p(\mathbf{C}_{D \cap D'}) \times U \\ &R^pg_{*,\text{prim}}\mathbf{C}_{\tilde Y_U} \to H^p(\mathbf{C}_{Y \cap D \cap D'}) \times U \end{split} \] underlie respectively morphisms of variations of mixed and pure Hodge structures over $U$, where the targets are constant variations. By the semi-simplicity of the category of variations of pure Hodge structures of fixed weight (\cite{Deligne2}), we deduce that the local system $R^pg_{*,\text{prim}}\mathbf{C}_{\tilde Y_U}$ has a subsystem which is isomorphic to a constant local system with fiber $\text{Im}(\bar c'_{p,Y})$ for $\bar c'_{p,Y}: H^p_{\text{prim}}(\mathbf{C}_{Y \cap D'}) \to H^p(\mathbf{C}_{Y \cap D \cap D'})$. Then by the invariant cycle theorem (\cite{Deligne2}), we deduce \[ \begin{split} &\text{Im}(\bar c'_{p,Y}) \subset \text{Im}(H^0(U,R^pg_{*,\text{prim}}\mathbf{C}_{\tilde Y_U}) \to H^p(\mathbf{C}_{Y \cap D \cap D'})) \\ &\subset \text{Im}(H^p(\mathbf{C}_{\tilde Y}) \to H^p(\mathbf{C}_{Y \cap D \cap D'})) \end{split} \] where the second and third homomorphisms are derived from the restrictions to the fiber $D'$ of $g$. Since \[ H^p(\mathcal{O}_{Y \cap D'}) = \text{Gr}^0_F(H^p_{\text{prim}}(\mathbf{C}_{Y \cap D'})) \] we conclude \[ \text{Im}(c'_{p,Y}: H^p(\mathcal{O}_{Y \cap D'}) \to H^p(\mathcal{O}_{Y \cap D \cap D'})) \subset \text{Im}(H^p(\mathcal{O}_{\tilde Y}) \to H^p(\mathcal{O}_{Y \cap D \cap D'})). \] Since the combinatorics of the closed strata are the same for $\tilde X$ and $D \cap D'$, we have \[ \text{Im}(c'_p: H^p(\mathcal{O}_{D'}) \to H^p(\mathcal{O}_{D \cap D'})) \subset \text{Im}(H^p(\mathcal{O}_{\tilde X}) \to H^p(\mathcal{O}_{D \cap D'})).\] Since $H^p(\mathcal{O}_{\tilde X}) \cong H^p(\mathcal{O}_X)$, we have our assertion. (b) There is an obvious inclusion $\text{Ker}(d_p) \subset \text{Ker}(e'_p)$. 
We know already that \[ H^p(\mathcal{O}_X(-D-D')) \to H^p(\mathcal{O}_X(-D)) \] is surjective, because it is a statement which is independent of $D$. Thus we have $\text{Ker}(d_p) = \text{Ker}(e_p)$. Therefore it is sufficient to prove \[ \text{rank}(e_p) = \text{rank}(e'_p). \] As explained in the next Step 6, the sheaves $R^pg_*\mathcal{O}_{\tilde X}$ are locally free for all $p$. Therefore we have $h^p(\mathcal{O}_D) = h^p(\mathcal{O}_{D'})$, where we denote $h^p=\dim H^p$. We compare two exact sequences \[ \begin{split} &0 \to \mathcal{O}_X(-D) \to \mathcal{O}_X \to \mathcal{O}_D \to 0 \\ &0 \to \mathcal{O}_X(-D') \to \mathcal{O}_X \to \mathcal{O}_{D'} \to 0. \end{split} \] Since the corresponding $h^p$'s are equal, we deduce that the ranks of the corresponding homomorphisms of the long exact sequences are equal, and we are done. \vskip 1pc {\em Step 6}. This is our Theorem~\ref{canonical extension}. {\em Step 7}. Proof of (1) is the same as loc. cit. \end{proof} Finally we can easily generalize the vanishing theorem to the case of $\mathbf{Q}$-divisors by using the covering lemma (Lemma~\ref{covering}): \begin{Thm}\label{injective2} Let $X$ be a projective GSNC variety, $S$ a normal projective variety, $f: X \to S$ a projective surjective morphism, and let $L$ be a permissible $\mathbf{Q}$-Cartier divisor on $X$ such that $m_1L$ is a Cartier divisor and $\mathcal{O}_X(m_1L)$ is generated by global sections for a positive integer $m_1$. Assume that the support of $L$ is a GSNC divisor, $\mathcal{O}_X(m_2L) \cong f^*\mathcal{O}_Z(L_S)$ for a positive integer $m_2$ and an ample Cartier divisor $L_S$ on $S$, and for each closed stratum $Y$ of $X$, the restricted morphism $f \vert_Y: Y \to S$ is surjective. Then the following hold: (1) Let $s \in H^0(X,\mathcal{O}_X(nL))$ be a non-zero section for some positive integer $n$ such that the corresponding Cartier divisor $D$ is permissible. Then the natural homomorphisms given by $s$ \[ H^p(X, \mathcal{O}_X(K_X+\ulcorner L \urcorner)) \to H^p(X, \mathcal{O}_X(K_X+\ulcorner L \urcorner+D)) \] are injective for all $p$. (2) $H^q(S, R^pf_*\mathcal{O}_X(K_X+\ulcorner L \urcorner))=0$ for $p \ge 0$ and $q > 0$. \end{Thm} \begin{proof} We take a finite Kummer covering $\pi: X' \to X$ from another GSNC variety such that $\pi^*L$ becomes a Cartier divisor. Let $G$ be the Galois group. Then we have \[ (\pi_*\mathcal{O}_X(K_X+L))^G \cong \mathcal{O}_X(K_X+\ulcorner L \urcorner). \] Therefore our assertion is reduced to the case where $L$ is a Cartier divisor. \end{proof} Graduate School of Mathematical Sciences, University of Tokyo, Komaba, Meguro, Tokyo, 153-8914, Japan [email protected] \end{document}
\begin{document} \bstctlcite{IEEEexample:BSTcontrol} \title{Polyadic Quantum Classifier} \author{\IEEEauthorblockN{William Cappelletti, Rebecca Erbanni and Joaqu\'{i}n Keller } \IEEEauthorblockA{\textit{Entropica Labs}, Singapore \\ \{william, rebecca, joaquin\}@entropicalabs.com } } \maketitle \begin{abstract} We introduce here a supervised quantum machine learning algorithm for multi-class classification on NISQ architectures. A parametric quantum circuit is trained to output a specific bit string corresponding to the class of the input datapoint. We train and test it on an IBMq 5-qubit quantum computer and the algorithm shows good accuracy ---compared to a classical machine learning model--- for ternary classification of the Iris dataset and an extension of the XOR problem. Furthermore, we evaluate with simulations how the algorithm fares for a binary and a quaternary classification on resp. a known binary dataset and a synthetic dataset. \end{abstract} \begin{IEEEkeywords} quantum machine learning, variational quantum algorithm, NISQ architecture, classification algorithm, supervised learning \end{IEEEkeywords} \nocite{daskin2018iris,grant2018,schuld2017iris,farhiNeven2018,schuld2020circuitcentric,lloyd2020qml,latorre2020,romero2019variational,multiclass2005survey} \section*{Anotations example} \td{Annotations by Tommaso} \me{Annotations by Ewan} \kj{Annotations by Joaqu\'{i}n} \section*{\td{Major points and notes to keep in mind}} \td{I believe the message has to be shaped around certain key points: \begin{itemize} \item How does this work differ from previous works in supervised classification with quantum processors? \kj{Answering in full this question would need to do a complete state of the art of the domain. An alternative option is to do attribution with citations: each time you introduce something you say where you get the idea from. This avoid to have to say: these and these papers did the thing differently} \item What are the claims \kj{I assume that this means "what is original in this work". I think the abstract summarize this point pretty well. Even short summary: an algorithm, 3 experiments. }\\ \\ \kj{Discussion, main strengths of the algorithm compared to previous qml algorithms for supervised learning: \begin{enumerate} \item polyadic classification (vs binary) \item it is noise resilient \item it fares well compared to classical ML \item it need less resources (ie qubits, gates, shots) \item previous point could mean lower algorithmic complexity \end{enumerate} Point 1 can be claimed to some extent (you can always do an n-ary classification with many binary ones. So this goes to point 4). \\ Points 2, 3, 4 cannot be claimed in general, it can only be: in these experiments the algorithm was noise resilient, fared well compared to CML, and needed few resources. That's why ML community uses the same test datasets (Iris, MNIST, CIFAR, ImageNet, etc...), to be able to compare their results. And that's why we worked on the iris dataset. \\ Point 5 is difficult to claim. Usually ML community do not talk much about that. They say my algorithm worked well on this problem, it needed that many hours of compute time.\\ However, we need to be able to say something someday on this point. This is how a QML algorithm could prove Q advantage. That's why I am studying complexity theory of ML. Maybe there is no answer for this. 
Maybe it would only happen by an empirical proof: for this problem, this QML algorithm on this quantum computer fares better than any existing CML algorithm on the most powerful C computer. This means that if we run more and more complex problems using the full potential of the QPUs of the day, we might someday reach QML advantage (over CML). This is more a rationale for series A than something for a scientific paper. } \item What are NOT the claims \kj{This is an infinite space. We can say what we think is missing and will our future work\\ To do: \begin{itemize} \item try to solve problems with higher feature dimensions (using more qubits and more gates) \item design optimizers better at minimizing quantum loss functions \end{itemize} Explicit no claims: \begin{itemize} \item we do not claim quantum advantage \item we do not claim the algorithm work for something else than these datasets \end{itemize}} \item Work on the paper structure, now it is weak. Once the claims are laid out, we can build a consistent structure. \\ \kj{Ongoing work: the claims are an algorithm + 3 experiments, so this is structure of the paper} \item State from the beginning that the goal is to run this on a quantum computer\\ \kj{Agree, however this is not a the goal, the goal is to have a algorithm that could run on a NISQ device. It happened that for the iris it could run on an ibmq, , so we decided to do it} \item Have a detailed section where all details are explained in a rigorous fashion \item Detailed description of the quantum circuit \item Detailed description of the processors used, error rates, time it took to run the algorithm and related information\\ \kj{the 3 points above belong to the experiments part} \item Can we find a comparison with classical methods? What is the best way to run a fair comparison, against what methods, and what metrics of goodness do we need and use?\\ \kj{The standar metric (ML community) to compare the quality of classifier algorithms is accuracy. \\ ML community organize annual events where they compare their } \item Who are we talking to? The physics community? The machine learning community? The investors (i.e., does not matter who we talk to in scientific communities)? This choice motivates the structure and language of the manuscript. As it is now, it is not well fit for the general physics community.\\ \kj{we talk to the quantum computing algorithms community and more specifically to the quantum machine learning community. This is a multidisciplinary community that includes people from physics, TCS and Machine Learning. We need to avoid a language targeted to only one discipline. } \end{itemize}} \section{Introduction} Quantum machine learning (QML) has raised great expectations, it is thought\cite{nisq2018}\cite{tfquantum2020} to be one of the first possible applications of quantum computing to be able to run on NISQ\footnote{Noisy Intermediate-Scale Quantum} computers. Nonetheless, QML is still in its infancy. We can make a parallel with the dawn of machine learning in the 1950s when the emblematic perceptron \cite{perceptron1958, mazieres2018revanche} was introduced to solve binary classification problems. Today's research in QML has followed the same path and binary classification has been broadly studied. In 1969\cite{minsky1969} it was shown that the perceptron could not solve the simple XOR problem. 
In fact it can only classify linearly separable datasets, and it wasn't until 1986 that multilayer perceptrons with backpropagation \cite{backprop1986} addressed harder problems. For instance, the ternary classification of the Iris flower dataset\cite{fisher1936iris}, which is non-linearly separable, could not be solved with the perceptron approach, but is now a typical test case\cite[Ch. 2]{ESL} of machine learning. In this paper, we introduce a QML algorithm for multi-class classification and challenge it with the Iris flower dataset and an extension of the XOR problem with added Gaussian noise. The Iris flower dataset has three classes, two of which are not linearly separable. Binary QML classifications on this dataset have been addressed, on the linearly separable class against the rest, by \cite{daskin2018iris, grant2018} and, pairwise, on all classes by \cite{schuld2017iris}. Any n-ary classifier can be implemented with n binary classifiers\cite{multiclass2005survey}; however, to predict all classes at once, we take a direct approach, described in section \ref{section:output}. There was no guarantee it would be possible to train our classifier on an actual quantum computer. However, our simulations of the algorithm on the Iris dataset show that the quantum computing power needed, for both training and testing, can be found in today's hardware. And since models trained on simulators were tested on IBMq by \cite{schuld2017iris} and \cite{grant2018}, we were optimistic that the inherent noise of the hardware wouldn't be an impediment. Indeed, we ran our algorithm ---both training and testing--- on IBMq quantum hardware, for the ternary classification of the Iris dataset. Results are in section~\ref{ssec:iris-exp}. Similarly, in section~\ref{ssec:xor-exp}, we successfully trained a model for the Gaussian XOR problem on the same quantum system. Other experiments, with a simulator, on binary and quaternary classifications suggest that our approach is flexible enough to be applied to many problems. The outcomes of these experiments are in sections \ref{ssec:skin-exp} and \ref{ssec:synthetic}. As detailed in sections \ref{section:vqa} and \ref{section:input}, our algorithm is based on parametric quantum circuits. Our approach is mostly empirical, and the exact design of these circuits relies on experiments and on some guidelines presented in sections \ref{section:optimizing} and \ref{section:rules}. \section{A variational quantum algorithm} \label{section:vqa} Like many quantum machine learning algorithms\cite{daskin2018iris,grant2018,schuld2017iris,farhiNeven2018,schuld2020circuitcentric,lloyd2020qml,latorre2020,romero2019variational}, our procedure for classification of classical data fits in the general scheme of variational quantum algorithms\cite{peruzzo2014vqa}, where a parametric quantum computation $\mathfrak{F}_{\bm{\theta}}$ is applied to an input vector $\bm{x}$ to get the result $\hat{y}=\mathfrak{F}_{\bm{\theta}}(\bm{x})$. The variational algorithm per se consists of running $\mathfrak{F}_{\bm{\theta}}(\bm{x})$ for different values of $\bm{\theta}$ and $\bm{x}$ to find an optimal $\bm{\theta}^\star$ for which the results are satisfactory. Although some have envisioned hardware architectures with quantum random access memory \cite{qram2008} and other interesting features, our algorithm relies only on the simplest functionalities available in actual quantum hardware.
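As a rough illustration of this scheme, the outer search can be sketched as follows, where \texttt{F} stands for the classifier $\mathfrak{F}_{\bm{\theta}}$ and \texttt{propose} for whatever heuristic suggests new parameter vectors; both are placeholders, and the loss actually optimized in our experiments is the one described in section~\ref{section:learning}.
\begin{verbatim}
def variational_search(F, data, propose, trials=100):
    """Sketch of the generic variational scheme: evaluate the parametric
    classifier F(theta, x) for many candidate parameter vectors theta and
    keep the one that classifies the training data best."""
    best_theta, best_score = None, -1
    for _ in range(trials):
        theta = propose()                            # heuristic proposal
        predictions = [F(theta, x) for x, _ in data]
        score = sum(p == y for p, (_, y) in zip(predictions, data))
        if score > best_score:
            best_theta, best_score = theta, score
    return best_theta
\end{verbatim}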
\jk{This point on actual hardware needs more emphasis} \newcommand{{\mathtt{P}}}{{\mathtt{P}}} In most of today's quantum computers or QPUs\footnote{Quantum Processing Unit}, an elemental computation $\mathtt{Q}$ take as input $n$, a number of \emph{shots}, and ${\mathtt{P}}$, a quantum program, or circuit. A circuit ${\mathtt{P}}$ is a sequence of \emph{gates}, or elemental operations on qubits. In what is called a shot, the qubits of the quantum computer are all initialized at $\ket{0}$, the sequence ${\mathtt{P}}$ of gates is applied to the qubits, then they are all measured. A \emph{run} or \emph{circuit run} is a sequence of $n$ shots. Runs and shots are quantum computations of different granularity. The result $\mathtt{Q}\left(n,{\mathtt{P}}\right)$ of a circuit run is the sequence $\hat{R}$ of $n$ bit strings in $\{0,1\}^N$, corresponding to the measurement of the $N$ qubits of the circuit. We can note that, the computational time complexity of $\mathtt{Q}\left(n,{\mathtt{P}}\right)$ is $\mathcal{O}(n\times |{\mathtt{P}}|)$ where $|{\mathtt{P}}|$ is the number of gates in ${\mathtt{P}}$. \newcommand{\qp_{\!\bm{\omega},\bm{\theta}}}{{\mathtt{P}}_{\!\bm{\omega},\bm{\theta}}} \newcommand{{\bm{\omega}}}{{\bm{\omega}}} \section{How to Input Data} \label{section:input} To input data we resort to a method called variational, or parametric, encoding which was first proposed in \cite{romero2019variational}. The method consists of using a parametric circuit to encode the input vector $\bm{x}$ to parameters of the circuit. A parametric circuit is a circuit where some gates can take continuous angles as parameters and are $2\pi$-periodic regarding them. We define an encoding function $f$ to map each coordinate or \emph{feature}, of the input vector $\bm{x}$ to an angle in the interval $\left]-\pi,\pi\right[$, which gives us a vector of parameters $\bm{\omega}=f(\bm{x})$ to be used in a parametric circuit ${\mathtt{P}}_{\!\bm{\omega}}$. \begin{wrapfigure}{r}{0.45\columnwidth} \centering \qcscaled{0.9}{ &\lstick{\ket{0}}&\qw&\qw&\qw& \multigate{2}{\qp_{\!\bm{\omega},\bm{\theta}}}&\qw&\qw&\meter\\ &\cdots&&&& \nghost{\qp_{\!\bm{\omega},\bm{\theta}}}&&&\cdots&&& \\ &\lstick{\ket{0}}&\qw&\qw& \qw&\ghost{\qp_{\!\bm{\omega},\bm{\theta}}}&\qw&\qw&\meter } \caption{The input vector $\bm{x}$ is encoded as angles $\bm{\omega}=\!\!f\!\left(\bm{x}\right)$ of a parametric circuit ${\mathtt{P}}_{\bm{\!\omega},\bm{\theta}}$ } \end{wrapfigure} We note $\bm{X}$ the set of input vectors $\bm{x}$ from the learning dataset and define the vectors $\bm{\overline{X}}$ and $\sigma_{\bm{X}}$, as its element-wise mean and standard deviation. Similarly, we compute the element-wise standard score $\bm{z}_{\bm{X}}(\bm{x})=\frac{\bm{x}-\bm{\overline{X}}}{\sigma_{\bm{X}}}$, In a Gaussian distribution approximation, we define the quantile $q=\Phi^{-1}(1-\epsilon^{\frac{1}{d}}/2)$, where $d$ is the dimension of $\bm{X}$ and $\Phi^{-1}$ the quantile function\cite{wasserman2013all}. By definition, the points $\bm{x}$ such that $\exists i\; \left|\bm{z}_{\bm{X}}(\bm{x})\right|_i>q$ represent less than an $\epsilon$ fraction of $\bm{X}$. \jk{Emmanuel Viennet: it is a preprocessing routinely used in ML and should not have to be detailed here ? (== « standardization of the data »} We fix $\epsilon$ small and ignore such points. Thus, we define the encoding function as a simple linear rescale and shift of the input: $$ f(\bm{x}) = \left(1 - \frac{\alpha}{2}\right) \frac{\pi}{q} \:\bm{z}_{\bm{X}(\bm{x})} . 
$$ By definition, all angles of the encoded vector $\bm{\omega} = f(\bm{x})$ fall within the interval $\left]-\left(1 - \frac{\alpha}{2}\right) \pi,\left(1 - \frac{\alpha}{2}\right) \pi\right[$. This enforces an angular gap $\alpha \pi$ between the extreme values of the encoded dataset, where $\alpha$ is a parameter to be choosen. Using this encoding function $f$, we can now define our quantum classifier as $$\hat{y}=\mathfrak{F}_{\bm{\theta}}(\bm{x})= g\left(\;\mathtt{Q}\left(n,{\mathtt{P}}_{\!\bm{\omega},\bm{\theta}}\right)\;\right) , $$ where ${\mathtt{P}}_{\!\bm{\omega},\bm{\theta}}$ is a parametric quantum circuit, $\bm{\omega}=f(\bm{x})$ are the encoded input parameters and $\bm{\theta}$ the model parameters, i.e.\ those to optimize. The parameter $n$ is the number of shots, which here is not automatically learned but adjusted ``by hand"; and $g$ is the postprocessing function, a classical computation ---described in following sections. We can note that as $f$, the input encoding function, is not parametric, the dataset is encoded only once, prior to the learning process. \newcommand{\commentblock}[1]{} \commentblock{ However, to input data without loss of information, the inputing process into a quantum state needs to be injective. Here the quantum end $h_\psi({\bm{\omega}}) = U_{\bm{\omega}} \ket{\psi}$ where $U_{\bm{\omega}}$ is the unitary of ${\mathtt{P}}_{\bm{\omega}}$. If we make abstraction of noise in the operation of the quantum hardware, this condition is satisfied iff there is a function\footnote{The function $U^{-1}$ only needs to have a theoretical existence and do not need to be effectively calculable.} $U^{-1}$ that from an encoding quantum state $\psi_{\bm{\omega}}=U_{\bm{\omega}} \ket{0^N}$ can get back to the original encoded value ${\bm{\omega}}=U^{-1}(\psi_{\bm{\omega}})$. \td{There is an issue with the equality. It seems you are assuming that $U^{-1}$ acts as a map on $\psi_{\bm{\omega}}$, but $\psi_{\bm{\omega}}$ is a quantum state so $\gamma$ should be a quantum state too. This needs to be defined/abstracted better than it is shown now} When a parametric circuit $U_{\bm{\omega}}$ fulfill this condition, i.e.\ when from $\ket{0^N}$ it can encode all the parameters ${\bm{\omega}}$ without loss of information, we will say that this parametric circuit is \textit{invertible} regarding ${\bm{\omega}}$ or just \textit{invertible} when no confusion is possible. } \tdok{You are trying to say something important, but this definition is sloppy. If the encoding is unitary then the operation is always invertible. However here you are using $U$ to define a circuit (a very bad idea), which makes for great confusion. If you want to offer a more general description, then you need to describe the encoding procedure as a quantum channel or a CPTP map. In alternative, you need to expand the description of the quantum circuit in the previous section and clarify that the operation is unitary in theory, but it might not be in practice - and you have to be very careful when dealing with that.} \jk{very good comment, this thing about `invertible' is not useful in this paper, so I removed it. However, what needs to be invertible is not the unitary but this function $h_\psi({\bm{\omega}}) = U_{\bm{\omega}} \ket{\psi}$, invertible on ${\bm{\omega}}$, the initial state $\psi$ is the parameter and ${\bm{\omega}}$ and the variable: meaning if you know $\psi$ and the resulting state $U_{\bm{\omega}} \ket{\psi}$ you can get ${\bm{\omega}}$. 
Badly explained + useless $\implies$ removed } \section {How to Output Data} \label{section:output} The output of a basic quantum computation, the run of a circuit ${\mathtt{P}}_{\bm{\omega},\bm{\theta}}$, is a sequence $\mathtt{Q}\left(n,{\mathtt{P}}_{\bm{\omega},\bm{\theta}}\right)=\hat{R}$ of $n$ bit strings in $\{0,1\}^N$ where $N$ is the number of quantum bits and $n$ the number of shots. The outcome of each measurement is a bit string $s$ and from quantum mechanics we know that it follows an underlying probability distribution $P(s)$. We can estimate its probability by ${\hat{P}(s)=\hat{C}(s)/n}$, where $\hat{C}(s)$ is the number of occurrences of $s$ in $\hat{R}$. Here, recalling that $\bm{\omega}$ encodes for the input vector $\bm{x}$, we use this estimated probability $\hat{P}(s)$ to predict $\hat{y}$ the class of $\bm{x}$. In order to do so, we associate to each possible class $k$ a bit string $s_k$ in $\{0,1\}^N$ and we note $\hat{P}(s_k)$ as $\hat{P}_k$. The output of our algorithm, the predicted class is $$\hat{y}=\mathfrak{F}_{\bm{\theta}}(\bm{x}) =g\left(\:\mathtt{Q}\left(n,{\mathtt{P}}_{\bm{\omega},\bm{\theta}}\right)\: \right) =\argmax_{k}\hat{P}_k . $$ Or, saying it otherwise, the predicted class $\hat{y}$ is the class $k$ for which its associated bit string $s_k$ has the highest number of occurrences. Note that, even though $\hat{P}_k$ varies from one run to another, if the relative difference between the theoretical $P_k$ associated with each class is high enough, $\argmax_{k}$ will return a consistent value even with a low number of shots. This approach is by essence polyadic, the number of classes being bounded by the total number of possible bit strings. As \cite{latorre2020}, it does not rely on multiple binary classifications\cite{multiclass2005survey}. \section{Learning} \label{section:learning} The learning process for our classifier consists in finding a set of parameters $\bm{\theta}^\star$ for which the predictor ${g\left(\:\mathtt{Q}\left(n,U_{f(\bm{x}),\bm{\theta}^\star}\right)\: \right)}$ has high accuracy. To do so, for each sample $(\bm{x},y)$ of the learning dataset $\mathcal{T}$, we define an adequate loss function which correlates with the error of the predictor: $$\mathcal{L}_{\bm{\theta}}(\bm{x},y)=-\log {\frac{e^{\hat{P}_y}}{\sum_{k\in\mathit{K}}e^{\hat{P}_k}}} , $$ where $\hat{P}_y$ is the probability estimate of the bitstring associated with the real class $y$ of $\bm{x}$. The optimization process consists of iteratively exploring the space of parameters $\bm{\theta}$ with an heuristic. At each step $i$ we compute the loss function $\mathcal{L}_{\bm{\theta}^i}$ as the average over a random subset $\mathcal{B}_i$ of $\mathcal{T}$: $$\mathcal{L}_{\bm{\theta}^i} = \sum_{(\bm{x},y)\in\mathcal{B}_i\subset\mathcal{T}}\frac{\mathcal{L}_{\bm{\theta}^i}(\bm{x},y)}{\scriptstyle{|\mathcal{B}_i|}} . $$ The subset, or \emph{minibatch}, $\mathcal{B}_i$ could be a singleton, the full training set $\mathcal{T}$ or have any cardinality in between. The result of the training, the vector of parameters $\bm{\theta}^\star$, is the ${\bm{\theta}^i}$ that gives the lowest value for the loss function. 
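To make the post-processing $g$ and the loss concrete, the following sketch computes, from the list of bit strings returned by a single run, the estimates $\hat{P}_k$, the predicted class $\argmax_{k}\hat{P}_k$, and the loss $\mathcal{L}_{\bm{\theta}}(\bm{x},y)$; the function name and signature are illustrative only, and the minibatch loss is obtained by averaging the returned losses over $\mathcal{B}_i$.
\begin{verbatim}
from collections import Counter
from math import exp, log

def postprocess(shots, class_bitstrings, true_class=None):
    """shots: list of measured bit strings from one circuit run;
    class_bitstrings: dict mapping each class k to its bit string s_k.
    Returns the predicted class and, when true_class is given, the loss."""
    n = len(shots)
    counts = Counter(shots)
    p_hat = {k: counts[s] / n for k, s in class_bitstrings.items()}
    predicted = max(p_hat, key=p_hat.get)        # argmax_k of P_k
    loss = None
    if true_class is not None:
        z = sum(exp(p) for p in p_hat.values())
        loss = -log(exp(p_hat[true_class]) / z)  # softmax cross-entropy
    return predicted, loss
\end{verbatim}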
\newcommand{{\mathtt{\breve{P}}}}{{\mathtt{\breve{P}}}} \newcommand{\ket{0}&&}{\ket{0}&&} \newcommand{C\!z}{C\!z} \section{Optimizing Parametric Quantum Circuits} \label{section:optimizing} To specify our circuits we choose an universal set of quantum gates: $s_x^i$ ---the $\pi/2$ rotation about the Pauli-X axis, $Z^i_{\phi}$ ---a rotation of an arbitrary angle $\phi$ about the Pauli-Z axis, and $C\!z^i_j$ ---the controlled-$Z$ gate, where $i$ and $j$ are the qubits to which they apply. In a QPU, these elemental gates are implemented using pulses ---physical phenomena that modify the state of the qubits. A controlled-$Z$ gate is translated in a 2-qubit pulse, a $s_x$ gate needs a 1-qubit pulse and $Z$-rotations need no pulse\cite{zgates2016virtual} as they are implemented by modifying the subsequent pulses. Each QPU might have a slightly different set of elemental gates, by choice of the hardware manufacturer. Nonetheless, a circuit written using our set of elemental gates have a translation that preserves the number of 1-qubit pulses and 2-qubit pulses for most of the existing QPUs (see Appendix \ref{native}). A quantum program, or, circuit ${\mathtt{P}}$ specifies\cite{nielsen00} a unitary matrix, that we note here ${\mathtt{\breve{P}}}$. The unitary ${\mathtt{\breve{P}}}$ transforms a quantum state into another one. \newcommand{\mathcal{M}}{\mathcal{M}} Qubits in a given quantum state $\psi$ yield, when measured, a bit string $s$ with probability $P(s) = \left|\:\psi_s\:\right|^2$, where $\psi_s$ is the amplitude associated with $s$. We note $\mathcal{M}$ the measurement operator $\mathcal{M} : \psi \mapsto s$. From the definition of $\mathcal{M}$ it follows that $$ \exists \alpha\!\in\!\mathbb{R}\ \ U\!=e^{i\alpha}U' \quad\implies\quad \mathcal{M} U = \mathcal{M} U'. $$ If this holds we say that $U$ and $U'$ are \emph{equal up to a global phase} and note $U\equiv U'$. The purpose of a quantum computer is to implement the unitary ${\mathtt{\breve{P}}}$ and the final measure operation. However, due to engineering limitations, the hardware is imperfect and the actual quantum operations differs from those specified in ${\mathtt{P}}$. This difference or \emph{noise} grows with the number of pulses and the time needed to run the circuit. \begin{wrapfigure}{r}{0.45\columnwidth} \centering \qcscaled{1.05}{\ket{0}&&&\gate{\mbox{U}}&\ctrl{0}\qwx[1]&\qw&\qw&&&&\equiv&&&&&&\ket{0}&&&\gate{\mbox{U}}&\qw&\qw \\ \ket{0}&&&\qw&\ctrl{0}&\qw&\qw&&&&&&&&&&\ket{0}&&&\qw&\qw&\qw } \caption{\label{fig:nocz} A controlled-$Z$ after initialization has no effect, see \ref{nocz}.} \end{wrapfigure} Hence, to minimize the noise, if two circuits ${\mathtt{P}}$ and ${\mathtt{P}}'$ specify for equivalent unitaries ${\mathtt{\breve{P}}} \equiv {\mathtt{\breve{P}}}'$ or yield the same result $\mathcal{M}{\mathtt{\breve{P}}} = \mathcal{M}{\mathtt{\breve{P}}}'$ we will choose the circuit minimizing the number of pulses. We can now list some simple properties on the unitaries of our elemental gates that will help in optimizing our circuits. Since the unitary ${\mathpalette\wide@breve{AB}}$ of the concatenation $AB$ of quantum programs $A$ and $B$ is the composition of their unitaries $\breve{A}\breve{B}$ we will omit the $\breve{\, .\, }$ symbol when no confusion is possible. 
\newcommand{{\ket{0}^{\otimes N}}}{{\ket{0}^{\otimes N}}} \begin{enumerate}[label={(\arabic*)}] \item \label{zz}As $Z^i_\phi Z^i_\lambda \equiv Z^i_{\phi+\lambda}$, in a circuit, two consecutive Z-rotations on the same qubit can be replaced by just one. \item \label{zeroz} As $Z^i_\phi {\ket{0}^{\otimes N}} \equiv {\ket{0}^{\otimes N}}$, a Z-rotation immediately after qubit initialization can removed. \item \label{nocz} For any 1-qubit unitary $U^i$ applied to qubit $i$, as $C\!z^i_j U^i{\ket{0}^{\otimes N}} \equiv U^i{\ket{0}^{\otimes N}}$, we can remove a $C\!z$ applied to a qubit $j$ immediately after its initialization. See Fig.~\ref{fig:nocz}. \item \label{mz}As $\mathcal{M} Z_\phi U = \mathcal{M} U$, we can remove a Z-rotation applied to a qubit just before its measurement. \item Similarly, as $\mathcal{M} C\!z^i_j U = \mathcal{M} U$, we can remove any controlled-Z followed by no gate before measurement. \item \label{canon}As $C\!z^i_jC\!z^i_j$ is equal to the identity, two consecutive controlled-Z on the same qubits can be removed. \item \label{unitary}Any unitary $U^i$ on one qubit $i$ can be decomposed as $U^i\equiv Z^i_{\phi} s^i_x Z^i_{\alpha} s^i_x Z^i_{\lambda}$. \item \label{begin} Properties \ref{unitary}, \ref{zeroz}, \ref{canon} and \ref{zz} imply that any qubit should have at most $s_xZ_\phi s_x$ between initialization and its first $C\!z$ gate. \item \label{middle} Properties \ref{unitary}, \ref{canon} and \ref{zz} imply that any qubit should have at most a sequence $s_xZ_\phi s_xZ_\alpha$ between two $C\!z$ gates \item \label{end}Properties \ref{unitary}, \ref{mz} and \ref{zz} imply that any qubit should have at most $s_xZ_\phi s_xZ_\alpha$ after its last $C\!z$ gate and before measurement. \end{enumerate} \newcommand{\gate{\color{MidnightBlue}s_x}}{\gate{\color{MidnightBlue}s_x}} \newcommand{\zphi}[1]{\gate{Z_{\phi_{#1}}}} \newcommand{\zgate}[2]{\gate{\color{Mahogany}Z_{{#1}_{#2}}}} \newcommand{\zzgate}[1]{\zgate{\alpha}{#1}&\gate{\color{MidnightBlue}s_x}&\zgate{\phi}{#1}&\gate{\color{MidnightBlue}s_x}} \newcommand{\zphiphi}[2]{\zphi{#1}&\gate{\color{MidnightBlue}s_x}&\zphi{#2}&\gate{\color{MidnightBlue}s_x}} \begin{figure} \caption{\label{maxatomix} \label{maxatomix} \end{figure} Properties \ref{begin}, \ref{middle} and \ref{end} give a maximal sequence Any circuit with more gates than that can be optimized further. \section {Designing circuits} \label{section:rules} The constraints for optimality leave a high level of freedom on how to design parametric circuits: less elemental gates are still possible; some angles can be fixed as constants; some parameters will encode input features, while others will be parameters to optimize, and a choice has to be made on how to entangle qubits. Following intuitions and empirical evidences (some circuits perform better than others) we have identified a set of rules to design what we think are good parametric circuits for our algorithm. However as intuitions are often misguided and empirical evidences can be overthrown by new experiments, we hope and expect these rules to be challenged and/or expanded by further research work. \begin{wrapfigure}{r}{0.5\columnwidth} \centering \qcscaled{.75}{&\qw&\gate{\phi}&\qw&\qw} $\:\equiv\:$ \qcscaled{1.2}{&\qw&\gate{s_x}&\gate{Z_\phi}&\gate{s_x}&\qw&\qw} \caption{\label{fig:notation} Compact notation for $s_xZ_\phi s_x$.} \end{wrapfigure} Two main criteria enter in the design of parametric circuits intended to be used as classifiers. First, we want to minimize the number of gates. 
Second, we need enough parametric gates to encode the input vector and provide adequate learning capacity. This double constraint --minimize the number of gates and maximize the number of parametric gates-- should have led to keep all the $Z$-rotations as parametric gates. Yet, without fully understanding why, we noticed that a parametric $Z$-rotation right after a controlled-$Z$ seem not to add much capacity and tend to impair the learning phase. Hence, we opt for a rule enforcing exactly one parametric gate between entanglements, or more precisely, as shown in Fig. \ref{fig:succession}, a sequence $s_xZ_\phi s_x$ between any two consecutive controlled-$Z$ gates. \begin{figure} \caption{\label{fig:succession} \label{fig:succession} \end{figure} If we extend this rule for individual qubits to the whole set of qubits, we have a step where each qubit is rotated by some angle followed by a step where qubits are entangled by pairs. Things are slightly, but not fundamentally, different depending whether the number of qubits is odd or even. Although we don't have clear rules on how or why choose a given entangling pattern at each step, it is well understood~\cite{jozsa2003} that to unleash the power on quantum computing qubits need to be highly entangled. At each step a quantum computer only allows 2-qubit gates between a limited set of qubit pairs, which is the coupling map or qubit connectivity graph. \tdok{Well, existing quantum processors do, but a general quantum computer has no such limitation. Here you are trying to say that you wish to run this on a real device, but this has never been said in the manuscript.} \jk{Well the coupling map is not a limitation per se, if your coupling map is a clique, all 2-qubits combinations are possible, so no a general quantum computer do have a coupling map. And at the very beginning of the paper it is said that we do not design a algorithm for imaginary quantum computers but real ones.} Assuming that the qubit connectivity graph is connected, by choosing alternating entangling patterns (see Fig. \ref{fig:allcz}) it is possible in few steps to ensure that all qubits are entangled pairwise. In designing our circuits, we will choose the succession of entangling patterns in such a way it minimizes the number of steps to reach a state where all qubits are entangled. Similar rules to design circuits are proposed in \cite{cerezo2020barren}. \begin{figure} \caption{\label{fig:allcz} \label{fig:allcz} \end{figure} Once the structure of the parametric circuit is fixed, the next decision we face is which parameters are going to be input parameters $\bm{\omega}$ (see section \ref{section:input}) and which are going to be model parameters $\bm{\theta}$. Although the model parameters $\bm{\theta}$ have a similar role and are indistinguishable, each of the input parameters $\bm{\omega}$ corresponds to features that have a meaning, so their position in the circuit matters. This choice is today made experimentally on learning performance. Also we implement the idea of \emph{data re-uploading}, suggested first by \cite{latorre2020} and used in \cite{lloyd2020qml}, which consists in inputting $\bm{\omega}$ more than once. Data re-uploading proves to be useful to add learning capacity without adding more qubits. 
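To fix ideas, the layered layout suggested by these rules can be written down as a short illustrative sketch: every layer applies one block $s_xZ_\phi s_x$ per qubit and then a round of controlled-$Z$ gates taken from an alternating pattern, while the flat list \texttt{angles} interleaves encoded features $\bm{\omega}$ (possibly repeated for data re-uploading) and model parameters $\bm{\theta}$, the interleaving being the experimental choice discussed above. The function and the gate encoding below are illustrative only.
\begin{verbatim}
def build_circuit(num_qubits, angles, cz_pattern, layers):
    """Abstract gate list for the layered layout described above.
    angles: flat list mixing encoded features (omega, possibly repeated
    for data re-uploading) and trainable parameters (theta).
    cz_pattern(layer): pairs of qubits entangled at the given layer.
    A final rotation layer before measurement is omitted for brevity."""
    gates, k = [], 0
    for layer in range(layers):
        for q in range(num_qubits):
            gates += [("sx", q), ("rz", q, angles[k]), ("sx", q)]
            k += 1
        for i, j in cz_pattern(layer):           # alternating CZ pairs
            gates.append(("cz", i, j))
    return gates

# Example: alternating nearest-neighbour pairs on a line of 5 qubits
pattern = lambda layer: [(i, i + 1) for i in range(layer % 2, 4, 2)]
circuit = build_circuit(5, [0.1] * 15, pattern, 3)
\end{verbatim}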
\newcommand{\ket{1}&&}{\ket{1}&&} \newcommand{\ket{+}&&}{\ket{+}&&} \newcommand{\ket{-}&&}{\ket{-}&&} \newcommand{\dots&&}{\dots&&} \newcommand{\xygate}[1]{\measure{\!\scriptscriptstyle{#1}\!}} \label{sec:experiments} \input{texfiles/experiments/intro.tex} \input{texfiles/experiments/iris.tex} \input{texfiles/experiments/xor.tex} \input{texfiles/experiments/skin.tex} \input{texfiles/experiments/artificial.tex} \section{Future work and conclusion} We obtain satisfying quantum models for the challenging Iris flower dataset and the other polyadic problems, with good test scores compared with a classical model. Furthermore, we were able to train the model for the Iris dataset and the Gaussian XOR problem on actual hardware. This was possible because of the low number of shots needed and the noise resilience of the model. Moreover, even though different machines have different noise patterns, the model trained on one machine performed well on all others (see Fig.~\ref{fig:iris-ibmq-tables}). It is important to remind that the optimization heuristics used in this work have severe limitations. The loss function is quantum computed and thus random; hence, we can only approximate its gradient to some extent, at the expense of more shots. Usual gradient based methods as \bfgs\ perform poorly with this sort of randomness. In fact, we use them successfully on simulations, but cannot do the same on actual hardware. For QPUs we resort to the gradient-free \cobyla. However, it is well known\cite{powell2007view} that \cobyla's performance drastically degrades with the dimension of the parameter space. To improve the algorithm, we will need to explore optimization techniques adapted to quantum loss functions. One salient idea is to use circuit modifications to approximate the gradient, as proposed by \cite{crooks2019gradients}\cite{schuld2018gradients}. Another avenue worth exploring is to work in a different parameter space. Here, we optimize directly the angles $\bm{\theta}$ of the parametric quantum circuit; instead, we could train the model using an intermediate parameter vector $\bm{\rho}$ mapped to~$\bm{\theta}$. For instance, a non-linear mapping would reshape the optimization landscape. Also, this would allow constraints on~$\bm{\theta}$, in particular parameter sharing, as in convolutional neural networks\cite{goodfellow2016deeplearning}. However, when addressing higher dimensional data and increasing the number of qubits, the most challenging issue becomes circuit design. We cannot rely anymore on sheer guess-and-check and need to understand better why some circuits outperform others. As the computational capability of quantum hardware keeps increasing, we expect the empirical study of more complex quantum machine learning models to give new insights and lead to increasingly better algorithms. \appendices \section{Translating to hardware specific native gates} \label{native} Quantum hardware manufacturers implement different sets of native gates to specify the circuits to run on their machines. In section \ref{section:optimizing} we define our own set of gates to specify circuits: the 0-pulse $Z$-rotation and the 1-pulse $s_x$ one-qubit gates and the controlled-$Z$ two-qubit gate. The one-qubit gates are native in IBM\cite{qiskit}, Rigetti\cite{pyquil2016rigetti}, Honeywell\cite{honeywell2020} and Google\cite{cirq} quantum architectures and implemented with the same number of qubits. 
The controlled-$Z$ is native for Rigetti's QPU and for most of Google's ---with the notable exception of the Sycamore QPU. The tunable two-qubit gate\cite{supremacy} of the Sycamore processor has not been studied in this paper. Honeywell uses the two-qubit $Z\!z$ gate, and $C\!z_j^i$ translates into $Z^i_{-\pi/2}Z^j_{-\pi/2}{Z\!z}_j^i$, which preserves the number of pulses. IBM uses the controlled-not as its two-qubit gate, noted $C^i_j$. The controlled-$Z$ can be translated into $H_jC^i_jH_j$, where $H$ is the Hadamard gate. With the following translations
\begin{eqnarray*}
H s_x Z_\phi s_x H &=& s_x Z_{\pi-\phi} s_x \\
H s_x Z_\phi s_x &=& Z_\pi s_x Z_{\phi-\frac{\pi}{2}} s_x \\
s_x Z_\phi s_x H &=& s_x Z_{\phi-\frac{\pi}{2}} s_x Z_\pi
\end{eqnarray*}
the Hadamard gates disappear, thus preserving the number of pulses. \end{document}
\begin{document} \title{Semi-Adequate Closed Braids and Volume} \author{Adam Giambrone\\ Alma College\\ [email protected]} \date{} \maketitle \begin{abstract} In this paper, we show that the volumes for a family of A-adequate closed braids can be bounded above and below in terms of the twist number, the number of braid strings, and a quantity that can be read from the combinatorics of a given closed braid diagram. We also show that the volumes for many of these closed braids can be bounded in terms of a single stable coefficient of the colored Jones polynomial, thus showing that this collection of closed braids satisfies a Coarse Volume Conjecture. By expanding to a wider family of closed braids, we also obtain volume bounds in terms of the number of positive and negative twist regions in a given closed braid diagram. Furthermore, for a family of A-adequate closed $3$-braids, we show that the volumes can be bounded in terms of the parameter $s$ from the Schreier normal form of the $3$-braid. Finally we show that, for the same family of A-adequate closed 3-braids, the parameters $k$ and $s$ from the Schreier normal form can actually be read off of the original 3-braid word. \end{abstract} \section{Introduction} One of the current aims of knot theory is to strengthen the relationships among the hyperbolic volume of the link complement, the colored Jones polynomial, and data extracted from link diagrams. In a recent monograph, Futer, Kalfagianni, and Purcell (\cite{Guts}, or see \cite{Survey} for a survey of results) showed that, for sufficiently twisted negative braid closures and for certain Montesinos links, the volume of the link complement can be bounded above and below in terms of the twist number of an A-adequate link diagram. The results for Montesinos links were recently generalized by Purcell and Finlinson in \cite{Montesinos}. Similar results for alternating links were previously given by Lackenby in \cite{Lackenby}, with the lower bounds improved upon by Agol, Storm, and W. Thurston in \cite{AgolStorm} and the upper bounds improved upon by Agol and D. Thurston in the appendix of \cite{Lackenby} and more recently improved upon by Dasbach and Tsvietkova in \cite{Refined}. In a previous paper (\cite{Me}), the author showed that the volumes for a large family of A-adequate link complements can be bounded in terms of two diagrammatic quantities: the twist number and the number of certain alternating tangles in a given A-adequate diagram. In this paper, we show that the volumes for a family of A-adequate closed $n$-braids can be bounded above and below in terms of the twist number $t(D)$, the number $t^{+}(D)$ of positive twist regions, the number $t^{-}(D)$ of negative twist regions, the number $n$ of braid strings, and the number $m$ of special types of state circles (called \emph{non-essential wandering circles}) that arise from a given closed braid diagram. Let $v_{8}=3.6638\ldots$ denote the volume of a regular ideal octahedron and let $v_{3}=1.0149\ldots$ denote the volume of a regular ideal tetrahedron. The main results of this paper, where the words ``certain'' and ``more general'' will be made precise later, are stated below. 
\begin{theorem} \label{introthm} For $D(K)$ a certain A-adequate closed $n$-braid diagram, the complement of $K$ satisfies the volume bounds $$\frac{v_{8}}{2}\cdot(t(D)-2(n+m-2)) \leq v_{8}\cdot(t^{-}(D)-(n+m-2)) \leq \mathrm{vol}(S^{3}\backslash K) < 10v_{3}\cdot(t(D)-1).$$ For $D(K)$ a more general A-adequate closed $n$-braid diagram, the complement of $K$ satisfies the volume bounds $$v_{8}\cdot(t^{-}(D)-t^{+}(D)-(n+m-2)) \leq \mathrm{vol}(S^{3}\backslash K) < 10v_{3}\cdot(t(D)-1) = 10v_{3}\cdot(t^{-}(D)+t^{+}(D)-1).$$ \end{theorem} By restricting to a family of A-adequate closed $3$-braids, we show that the volumes can also be bounded in terms of the parameter $s$ from the Schreier normal form of the $3$-braid. It should be noted that the lower bound provided in this paper, which relies on the more recent machinery of \cite{Guts}, is often an improvement over the one given in \cite{Cusp}. \begin{theorem}\label{sthm} For $D(K)$ a certain A-adequate closed $3$-braid diagram, the complement of $K$ satisfies the volume bounds $$v_{8}\cdot(s-1) \leq \mathrm{vol}(S^{3}\backslash K) < 4v_{8}\cdot s.$$ \end{theorem} In addition to providing diagrammatic volume bounds, we also show that, for the same family of A-adequate closed 3-braids, the parameters $k$ and $s$ from the Schreier normal form can actually be read off of the original 3-braid diagram. The volumes for many families of link complements have also been expressed in terms of coefficients of the colored Jones polynomial (\cite{Volumish}, \cite{Refined}, \cite{Guts}, \cite{Filling}, \cite{Symmetric}, \cite{Cusp}, \cite{Stoimenow}). Denote the \emph{$j^{th}$ colored Jones polynomial} of a link $K$ by $$J_{K}^{j}(t)=\alpha_{j}t^{m_{j}}+\beta_{j}t^{m_{j}-1}+\cdots+\beta_{j}'t^{r_{j}+1}+\alpha_{j}'t^{r_{j}},$$ \noindent where $j \in \mathbb{N}$ and where the degree of each monomial summand decreases from left to right. We will show that, for fixed $n$, the volumes for many of the closed $n$-braids considered in this paper can be bounded in terms of the stable penultimate coefficient $\beta_{K}':=\beta_{j}'$ (where $j \geq 2$) of the colored Jones polynomial. A result of this nature shows that the given collection of closed $n$-braids satisfies a Coarse Volume Conjecture (\cite{Guts}, Section 10.4). This result, which is stated below, can be viewed as a corollary of Theorem~\ref{introthm}. \begin{corollary}\label{newcor} For $D(K)$ a certain A-adequate closed $n$-braid diagram, the complement of $K$ satisfies the volume bounds $$v_{8}\cdot(\left|\beta_{K}'\right|-1) \leq \mathrm{vol}(S^{3}\backslash K) < 20v_{3}\cdot\left(\left|\beta_{K}'\right|+n+m-\dfrac{7}{2}\right).$$ \end{corollary} \section*{Acknowledgments} I would like to thank Efstratia Kalfagianni for helpful conversations that led to this project. I would also like to thank the anonymous referee for helpful comments that led to an improvement in the quality of exposition in this paper. \section{Volume bounds for A-adequate closed braids} \subsection{Preliminaries} \begin{figure} \caption{A crossing neighborhood of a link diagram, along with its A-resolution and B-resolution.} \label{resolutions} \end{figure} Let $D(K) \subseteq S^2$ denote a diagram of a link $K \subseteq S^3$. To smooth a crossing of the link diagram $D(K)$, we may either \emph{A-resolve} or \emph{B-resolve} this crossing according to Figure~\ref{resolutions}. 
By A-resolving each crossing of $D(K)$ we form the \emph{all-A state} of $D(K)$, which is denoted by $H_{A}$ and consists of a disjoint collection of \emph{all-A circles} and a disjoint collection of dotted line segments, called \emph{A-segments}, that are used to record the locations of crossing resolutions. We will adopt the convention throughout this paper that any unlabeled segments are assumed to be A-segments. We call a link diagram $D(K)$ \emph{A-adequate} if $H_{A}$ does not contain any A-segments that join an all-A circle to itself, and we call a link $K$ \emph{A-adequate} if it has a diagram that is A-adequate. While we will focus exclusively on A-adequate links, our results can easily be extended to semi-adequate links by reflecting the link diagram $D(K)$ and obtaining the corresponding results for B-adequate links. From $H_{A}$ we may form the \emph{all-A graph}, denoted $\mathbb{G}_{A}$, by contracting the all-A circles to vertices and reinterpreting the A-segments as edges. From this graph we can form the \emph{reduced all-A graph}, denoted $\mathbb{G}_{A}'$, by replacing parallel edges with a single edge. For an example of a diagram $D(K)$, its all-A resolution $H_{A}$, its all-A graph $\mathbb{G}_{A}$, and its reduced all-A graph $\mathbb{G}_{A}'$, see Figure~\ref{figure8}. Let $v(G)$ and $e(G)$ denote the number of vertices and edges, respectively, in a graph $G$. Let $-\chi(G)=e(G)-v(G)$ denote the negative Euler characteristic of $G$. \begin{figure} \caption{A link diagram $D(K)$, its all-A resolution $H_{A}$, its all-A graph $\mathbb{G}_{A}$, and its reduced all-A graph $\mathbb{G}_{A}'$.} \label{figure8} \end{figure} \begin{figure} \caption{The long and short resolutions of a twist region of $D(K)$.} \label{longshort} \end{figure} \begin{definition} Define a \emph{twist region} of $D(K)$ to be a longest possible string of bigons in the projection graph of $D(K)$. Denote the number of twist regions in $D(K)$ by $t(D)$ and call $t(D)$ the \emph{twist number} of $D(K)$. Note that it is possible for a twist region to consist of a single crossing of $D(K)$. If a given twist region contains two or more crossings, then the A-resolution of a left-handed twist region will be called a \emph{long resolution} and the A-resolution of a right-handed twist region will be called a \emph{short resolution}. See Figure~\ref{longshort} for depictions of these resolutions. We will call a twist region \emph{long} if its A-resolution is long and \emph{short} if its A-resolution is short. \end{definition} \begin{definition} A link diagram $D(K)$ satisfies the \emph{two-edge loop condition (TELC)} if, whenever two all-A circles share a pair of A-segments, these segments correspond to crossings from the same short twist region of $D(K)$. (See the right side of Figure~\ref{longshort}.) \end{definition} Recall that $v_{8}=3.6638\ldots$ denotes the volume of a regular ideal octahedron and $v_{3}=1.0149\ldots$ denotes the volume of a regular ideal tetrahedron. By combining results from \cite{New}, \cite{Guts}, and \cite{Lackenby}, we get the following key result. \begin{theorem}[Corollary 1.4 of \cite{New}, Theorem from Appendix of \cite{Lackenby}] Let $D(K)$ be a connected, prime, A-adequate link diagram that satisfies the TELC and contains $t(D) \geq 2$ twist regions. Then $K$ is hyperbolic and \label{Cor} \begin{equation*} -v_{8}\cdot\chi(\mathbb{G}_{A}') \leq \mathrm{vol}(S^{3}\backslash K) < 10v_{3}\cdot(t(D)-1).
\end{equation*} \end{theorem} \subsection{Braids and A-adequacy} Recall that the \emph{n-braid group} has Artin presentation $$\displaystyle B_{n} = \left\langle \sigma_{1}, \cdots, \sigma_{n-1}\ |\ \sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i}\ \mathrm{and}\ \sigma_{i}\sigma_{i+1}\sigma_{i} = \sigma_{i+1}\sigma_{i}\sigma_{i+1}\right\rangle,$$ \noindent where the generators $\sigma_{i}$ are depicted in Figure~\ref{braidgen}, where $\left|i-j\right|\geq2$ and $1 \leq i < j \leq n-1$ in the first relation (sometimes called \emph{far commutativity}), and where $1\leq i\leq n-2$ in the second relation. As a special case, the \emph{3-braid group} has Artin presentation $$B_{3}=\left\langle \sigma_{1}, \sigma_{2}\ |\ \sigma_{1}\sigma_{2}\sigma_{1}=\sigma_{2}\sigma_{1}\sigma_{2}\right\rangle.$$ \begin{figure} \caption{The braid group generators $\sigma_{i}$.} \label{braidgen} \end{figure} \begin{definition} Call a subword $\gamma$ of a braid word $\beta \in B_{n}$ \emph{(cyclically) induced} if the letters of $\gamma$ are all (cyclically) adjacent and appear in the same order as in the full braid word $\beta$. As an example, given the braid word $$\beta=\sigma_{1}^{3}\sigma_{2}^{-3}\sigma_{1}^{2}\sigma_{3}^{-2}\sigma_{2}\sigma_{3}=\sigma_{1}^{3}\sigma_{2}^{-1}(\sigma_{2}^{-2}\sigma_{1}^{2}\sigma_{3}^{-1})\sigma_{3}^{-1}\sigma_{2}\sigma_{3},$$ \noindent the subword $\gamma=\sigma_{2}^{-2}\sigma_{1}^{2}\sigma_{3}^{-1}$ is an induced subword of $\beta$. \end{definition} Because conjugate braids have isotopic braid closures, we will usually work within conjugacy classes of braids. Furthermore, note that cyclic permutation is a special case of conjugation. \begin{definition} A braid $\beta=\sigma_{m_{1}}^{r_{1}}\cdots\sigma_{m_{l}}^{r_{l}} \in B_{n}$ is called \emph{cyclically reduced into syllables} if the following hold: \begin{enumerate} \item[(1)] $r_{i}\neq 0$ for all $i$. \item[(2)] there are no occurrences of induced subwords of the form $\sigma_{i}\sigma_{i}^{-1}$ or $\sigma_{i}^{-1}\sigma_{i}$ for any $i$ (looking up to cyclic permutation). \item[(3)] $m_{i}\neq m_{i+1}$ for all $i$ (modulo $l$). \end{enumerate} \end{definition} Recall that a braid $\beta=\sigma_{m_{1}}^{r_{1}}\cdots\sigma_{m_{l}}^{r_{l}} \in B_{n}$ is called \emph{positive} if all of the exponents $r_{i}$ are positive and \emph{negative} if all of the exponents $r_{i}$ are negative. We define a syllable $\sigma_{m_{i}}^{r_{i}}$ of the braid word to be \emph{positive} if $r_{i}>0$ and \emph{negative} if $r_{i}<0$. We now present Stoimenow's (\cite{Stoimenow}) classification of A-adequate closed 3-braid diagrams. \begin{proposition}[\cite{Stoimenow}, Lemma 6.1] Let $D(K)=\widehat{\beta}$ denote the closure of a 3-braid $$\beta=\sigma_{i}^{r_{1}}\sigma_{j}^{r_{2}}\cdots\sigma_{i}^{r_{2l-1}}\sigma_{j}^{r_{2l}} \in B_{3},$$ \noindent where $\left\{i,j\right\}=\left\{1,2\right\}$, where $l \geq 2$, and where $\beta$ has been cyclically reduced into syllables. Then $D(K)$ is A-adequate if and only if either \begin{enumerate}\label{adeq3braid} \item[(1)] $\beta$ is positive, or \item[(2)] $\beta$ does not contain $\sigma_{1}^{-1}\sigma_{2}^{-1}\sigma_{1}^{-1}=\sigma_{2}^{-1}\sigma_{1}^{-1}\sigma_{2}^{-1}$ as a cyclically induced subword and $\beta$ also has the property that all positive syllables in the braid word are cyclically neighbored on both sides by negative syllables.
\end{enumerate} \end{proposition} \subsection{A lemma for closed braids} The main goal of this section is to produce a family of closed $n$-braids that satisfies the hypotheses of Theorem~\ref{Cor}. We will work within this family of braids throughout the rest of this paper. We begin with necessary definitions. \begin{definition} Call two induced subwords $\gamma_{1}$ and $\gamma_{2}$ of a braid word $\beta \in B_{n}$ \emph{disjoint} if the subwords share no common letters when they are viewed as part of $\beta$. As an example, given the braid word $$\beta=\sigma_{1}^{3}\sigma_{2}^{-3}\sigma_{1}^{2}\sigma_{3}^{-2}\sigma_{2}\sigma_{3}=\sigma_{1}^{2}(\sigma_{1}\sigma_{2}^{-3}\sigma_{1})\sigma_{1}(\sigma_{3}^{-2}\sigma_{2})\sigma_{3},$$ \noindent the subwords $\gamma_{1}=\sigma_{1}\sigma_{2}^{-3}\sigma_{1}$ and $\gamma_{2}=\sigma_{3}^{-2}\sigma_{2}$ are disjoint induced subwords of $\beta$. \end{definition} \begin{definition} Call a subword $\gamma$ of a braid word $\beta \in B_{n}$ \emph{complete} if it contains, at some point, each of the generators $\sigma_{1}, \ldots, \sigma_{n-1}$ of $B_{n}$. It is important to note that the generators $\sigma_{1}, \ldots, \sigma_{n-1}$ need not occur in any particular order and that repetition of some or all of these generators is allowed. \end{definition} Because the majority of the braids considered in this paper will satisfy the same set of assumptions, we make the following definition. \begin{definition} Call a braid $\beta \in B_{n}$ \emph{nice} if $\beta$ is cyclically reduced into syllables and $\beta$ contains two disjoint induced complete subwords. \end{definition} Note that, in the case when $n=3$, the condition that the 3-braid $\beta=\sigma_{i}^{r_{1}}\sigma_{j}^{r_{2}}\cdots\sigma_{i}^{r_{2l-1}}\sigma_{j}^{r_{2l}} \in B_{3}$ is nice is equivalent to the condition that $\sigma_{1}$ and $\sigma_{2}$ each occur at least twice nontrivially in the braid word $\beta$ (which is equivalent to the condition that $l \geq 2$). We now define, in the following Main Lemma, the family of closed $n$-braids that we will consider for the remainder of this paper. \begin{lemma}[The Main Lemma] \label{setup} Let $D(K)=\widehat{\beta}$ denote the closure of a nice $n$-braid $\beta=\sigma_{m_{1}}^{r_{1}}\cdots\sigma_{m_{l}}^{r_{l}} \in B_{n}$. View $D(K)$ as lying in an annular region of the plane and assume that $\beta$ satisfies the conditions \begin{enumerate} \item[(1)] all negative exponents $r_{i}<0$ in $\beta$ satisfy the stronger requirement that $r_{i}\leq-3$; \item[(2a)] when $r_{i}>0$ and $m_{i}=1$, we have that $r_{i-1} \leq -3$, that $r_{i+1} \leq -3$, and that $m_{i-1}=m_{i+1}=2$; \item[(2b)] when $r_{i}>0$ and $2 \leq m_{i} \leq n-2$, we have that $r_{i-2} \leq -3$, that $r_{i-1} \leq -3$, that $r_{i+1} \leq -3$, that $r_{i+2} \leq -3$, and that $\{m_{i-2}, m_{i-1}\}=\{m_{i+1}, m_{i+2}\}=\{m_{i}-1, m_{i}+1\}$; and \item[(2c)] when $r_{i}>0$ and $m_{i}=n-1$, we have that $r_{i-1} \leq -3$, that $r_{i+1} \leq -3$, and that $m_{i-1}=m_{i+1}=n-2$. 
\end{enumerate} Then we may categorize the all-A circles of $H_{A}$ into the following types: \begin{itemize} \item \emph{small inner circles} that arise from negative exponents $r_{i}\leq-3$ in the braid word $\beta$ \item \emph{medium inner circles} that arise from cyclically isolated positive syllables in the braid word $\beta$ \item \emph{essential wandering circles} that are essential in the annulus and have wandering that arises from adjacent negative syllables in adjacent generators in the braid word $\beta$ \item \emph{non-essential wandering circles} that are non-essential (contractible) in the annulus and have wandering that arises from adjacent negative syllables in adjacent generators in the braid word $\beta$ \item \emph{nonwandering circles} that arise from the case when all syllables in the generator $\sigma_{1}$ are positive or the case when all syllables in the generator $\sigma_{n-1}$ are positive. \end{itemize} \noindent Furthermore, we have that $K$ is a hyperbolic link and that $D(K)$ is a connected, prime, A-adequate link diagram that satisfies the TELC and contains $t(D)~\geq~2(n-1)$ twist regions. \end{lemma} \begin{remark} \label{cond} Condition~(2a) of the above lemma requires that positive syllables in the generator $\sigma_{1}$ are cyclically neighbored on both sides by negative syllables in the generator $\sigma_{2}$. Similarly, Condition~(2c) of the above lemma requires that positive syllables in the generator $\sigma_{n-1}$ are cyclically neighbored on both sides by negative syllables in the generator $\sigma_{n-2}$. Condition~(2b) of the above lemma requires that positive syllables in the generator $\sigma_{i}$, where $2 \leq i \leq n-2$, are cyclically neighbored on both sides by a pair of negative syllables in the far commuting generators $\sigma_{i-1}$ and $\sigma_{i+1}$. \end{remark} Note that Condition~(2a), Condition~(2b), and Condition~(2c) are in a spirit similar to the condition that positive syllables are cyclically neighbored on both sides by negative syllables in Stoimenow's classification of A-adequate closed 3-braid diagrams (Proposition~\ref{adeq3braid}). \begin{figure} \caption{An example of an isolated positive syllable in the generator $\sigma_{1}$, together with the corresponding portions of the all-A state $H_{A}$.} \label{loop} \end{figure} We will prove Lemma~\ref{setup} through a series of sublemmas. \begin{sublemma} \label{sublemma1} Let $D(K)=\widehat{\beta}$ satisfy the assumptions of Lemma~\ref{setup}. Then, as described in the statement of Lemma~\ref{setup}, we may categorize the all-A circles of $H_{A}$ into small inner circles, medium inner circles, essential wandering circles, non-essential wandering circles, and nonwandering circles. \end{sublemma} \begin{proof} First, note that Condition (2a), Condition (2b), and Condition (2c) of Lemma~\ref{setup} imply that $\beta$ cannot be positive. Because positive syllables of $\beta$ are cyclically neighbored by negative syllables, we may decompose $\beta$ as $\beta=N_{1}$ (in the case that $\beta$ is a negative braid) or as $\beta=P_{1}N_{1} \cdots P_{t}N_{t}$, where $P_{i}$ denotes a positive syllable of $\beta$ and $N_{i}$ denotes a maximal length negative induced subword of $\beta$. \noindent \underline{Small Inner Circles}: Let $\sigma_{m_{i}}^{r_{i}}$, where $r_{i} \leq -3$, denote a negative syllable. Then, except for the additional $n-2$ surrounding vertical line segments, the A-resolution of this syllable will look like the left side of Figure~\ref{longshort}. In particular, having $r_{i} \leq -3$ implies the existence of at least two small inner circles.
\noindent \underline{Wandering Circles}: Let $\sigma_{m_{i}}^{r_{i}}\sigma_{m_{i+1}}^{r_{i+1}}=\sigma_{{m_i}}^{r_{i}+1}(\sigma_{m_{i}}^{-1}\sigma_{m_{i+1}}^{-1})\sigma_{m_{i+1}}^{r_{i+1}+1}$, where $r_{i} \leq -3$ and $r_{i+1} \leq -3$, denote a pair of adjacent negative syllables. \underline{Case 1}: Suppose $m_{i+1}=m_{i}-1$ or $m_{i+1}=m_{i}+1$. Then the pair of adjacent negative syllables involve adjacent generators of $B_{n}$. Consequently, the corresponding portion of $H_{A}$ will resemble (up to reflection and except for the additional surrounding vertical line segments) the right side of Figure~\ref{loop}. The key feature of this figure is the fact that we see a portion of a wandering circle, where the wandering behavior corresponds to the existence of the $\sigma_{m_{i}}^{-1}\sigma_{m_{i+1}}^{-1}$ induced subword. \underline{Case 2}: Suppose $\left|m_{i+1}-m_{i}\right| \geq 2$. In this case, the pair of adjacent negative syllables involve far commuting generators of $B_{n}$. Then, except for the additional $n-4$ surrounding vertical line segments, we get that the A-resolution will look like two copies of the left side of Figure~\ref{longshort}. Thus, we return to the case of small inner circles. \noindent \underline{Medium Inner Circles and Nonwandering Circles}: Let $\sigma_{m_{i}}^{r_{i}}$, where $r_{i}>0$, denote a positive syllable of $\beta$ that is cyclically neighbored on both sides by negative syllables as in Remark~\ref{cond}. Then the A-resolution of the induced subword will resemble either the left side (up to reflection and except for the additional surrounding vertical line segments) or the center (except for the additional surrounding vertical line segments) of Figure~\ref{loop}. In particular, the existence of an isolated positive syllable corresponds to the existence of one or two medium inner circles, one in the case of Condition~(2a) and Condition~(2c) and two in the case of Condition~(2b). Furthermore, a nonwandering circle will occur in $H_{A}$ precisely when all syllables in the generator $\sigma_{1}$ are positive (see the left side of Figure~\ref{loop}) or when all syllables in the generator $\sigma_{n-1}$ are positive (which would give a reflection of the left side of Figure~\ref{loop}). Note that we may classify the wandering circles in the annulus into essential and non-essential circles. Since the A-resolutions of all portions of the closure of $\beta=N_{1}$ and $\beta=P_{1}N_{1} \cdots P_{t}N_{t}$ have been considered locally and since gluing such portions together joins wandering and potential nonwandering circle portions together to form wandering and nonwandering circles, then we have the desired classification of all-A circles. \end{proof} \begin{sublemma} \label{sublemma2} Let $D(K)=\widehat{\beta}$ satisfy the assumptions of Lemma~\ref{setup}. Then $D(K)$ is connected, prime, and contains $t(D) \geq 2(n-1)$ twist regions. \end{sublemma} \begin{proof} Recall that $D(K)$ is connected if and only if the projection graph of $D(K)$ is path-connected. Since $\beta \in B_{n}$ contains a complete subword, then each generator of $B_{n}$ (each of which corresponds to a crossing of $D(K)$ between adjacent braid string portions) must occur at least once. This fact implies that the closed $n$-braid diagram $D(K)$ must be connected. Since $\beta$ contains two disjoint complete subwords, then it must be that each of the generators $\sigma_{1}, \ldots, \sigma_{n-1}$ of $B_{n}$ must occur at least twice in (two distinct syllables of) the braid word. 
Since the syllables of the cyclically reduced braid word $\beta$ correspond to the twist regions of $D(K)$, then we have that $t(D) \geq 2(n-1)$. Let $C$ be a simple closed curve that intersects $D(K)=\widehat{\beta}$ exactly twice (away from the crossings) and contains crossings on both sides. To show that $D(K)$ is prime, we need to show that such a curve $C$ cannot exist. Recall that $D(K)$ lies in an annular region of the plane. \noindent \underline{Case 1}: Suppose $C$ contains a point $p$ outside of this annular region. Since $C$ only intersects $D(K)$ twice and must start and end at $p$, then $C$ must intersect $D(K)$ twice in braid string position $1$ or twice in braid string position $n$. This cannot happen because the crossings corresponding to the occurrences of $\sigma_{1}$ and $\sigma_{n-1}$ prevent $C$ from being able to close up in a way that will contain crossings on both sides. \noindent \underline{Case 2}: Suppose $C$ contains a point $p$ between braid string positions $i$ and $i+1$ for some $1 \leq i \leq n-1$. See Figure~\ref{primeclose}. Since $C$ only intersects $D(K)$ twice and must start and end at $p$, then $C$ must intersect $D(K)$ twice in braid string position $i$ or twice in braid string position $i+1$. Suppose, as in Figure~\ref{primeclose}, that $C$ intersects $D(K)$ twice in braid string position $i$. The case for braid string position $i+1$ is very similar. If $i=1$, then $C$ contains a point outside of the annular region and we return to Case 1. Suppose $2 \leq i \leq n-1$. Since $\beta$ contains two disjoint induced complete subwords, then the generator $\sigma_{i-1}$ and the generator $\sigma_{i}$ must each occur at least twice, once above $p$ and once below $p$. These occurrences prevent $C$ from being able to close up in a way that will contain crossings on both sides. Thus, this case cannot occur. \end{proof} \begin{figure} \caption{A simple closed curve $C$ intersecting $D(K)$ twice and containing a point $p$ between braid string positions $i$ and $i+1$. Each box in the figure represents the eventual occurrence of the generator $\sigma_{i-1}$ or the generator $\sigma_{i}$.} \label{primeclose} \end{figure} \begin{sublemma} \label{sublemma3} Let $D(K)=\widehat{\beta}$ satisfy the assumptions of Lemma~\ref{setup}. Then $D(K)$ is A-adequate. \end{sublemma} \begin{proof} To prove that $D(K)$ is A-adequate, we need to show that no A-segment of the all-A state $H_{A}$ joins an all-A circle to itself. Note that positive syllables in $\beta$ (which include the short twist regions of $D(K)$) A-resolve to give horizontal A-segments and that negative syllables in $\beta$ (which include the long twist regions of $D(K)$) A-resolve to give vertical A-segments. Suppose, for a contradiction, that an A-segment joins an all-A circle to itself. \noindent \underline{Case 1}: Suppose the A-segment is a vertical segment. Condition (1) (that negative exponents are at least three in absolute value) implies that it is impossible for a vertical A-segment to join an all-A circle to itself. This is because all vertical A-segments either join distinct small inner circles or join a small inner circle to a medium inner circle or a wandering circle. (See the left side of Figure~\ref{longshort}.) \noindent \underline{Case 2}: Suppose the A-segment is a horizontal segment. By Remark~\ref{cond} and Figure~\ref{loop}, it can be seen that it is impossible for a horizontal A-segment to join an all-A circle to itself.
This is because all horizontal A-segments join a medium inner circle either to another medium inner circle, to a wandering circle, or to a nonwandering circle. \end{proof} \begin{sublemma} \label{sublemma4} Let $D(K)=\widehat{\beta}$ satisfy the assumptions of Lemma~\ref{setup}. Then $D(K)$ satisfies the TELC. \end{sublemma} \begin{proof} Let $C_{1}$ and $C_{2}$ be two distinct all-A circles that share a pair of distinct A-segments, call them $s_{1}$ and $s_{2}$. To prove that $D(K)$ satisfies the TELC, we need to show that these A-segments correspond to crossings from the same short twist region of $D(K)$. \noindent \underline{Case 1}: Suppose one of $s_{1}$ and $s_{2}$ is a horizontal segment (that must come from a cyclically isolated positive syllable of $\beta$). Then it is impossible for the other segment to be a vertical segment. This is because one of $C_{1}$ and $C_{2}$, say $C_{1}$, must be a medium inner circle. This circle is adjacent, via horizontal A-segments, to the second circle $C_{2}$, which will either be another medium inner circle, a wandering circle, or a nonwandering circle. By Condition~(1), the circle $C_{1}$ is also adjacent to small inner circles above and below. Consequently, the second segment cannot be vertical. Thus, if one of the segments $s_{1}$ and $s_{2}$ is horizontal, then the other segment must also be horizontal. Since the horizontal segments incident to the medium inner circle $C_{1}$ necessarily belong to the same resolution of a short twist region of $D(K)$, then the TELC is satisfied. (See the left side and center of Figure~\ref{loop}.) \noindent \underline{Case 2}: Suppose both $s_{1}$ and $s_{2}$ are vertical segments. Since all vertical A-segments either join distinct small inner circles or join a small inner circle to a medium inner circle or a wandering circle, then one of $C_{1}$ and $C_{2}$ must be a small inner circle. By Condition~(1), it is impossible for a small inner circle to share more than one A-segment with another all-A circle. (See the left side of Figure~\ref{longshort}.) Therefore, this case cannot occur. \end{proof} We now combine all of the necessary ingredients in order to prove the Main Lemma. \begin{proof}[Proof of Lemma~\ref{setup} (The Main Lemma)] By applying Sublemma~\ref{sublemma1}, Sublemma~\ref{sublemma2}, Sublemma~\ref{sublemma3}, and Sublemma~\ref{sublemma4}, we get every conclusion of Lemma~\ref{setup} except the hyperbolicity of $K$, which follows from Theorem~\ref{Cor}. \end{proof} \subsection{Diagrammatic volume bounds} In this section, we apply Lemma~\ref{setup} and Theorem~\ref{Cor} to produce diagrammatic volume bounds for the complements of the closed $n$-braids considered in this paper. We now begin with a lemma for the case of 3-braids. \begin{lemma} Let $D(K)=\widehat{\beta}$ denote the closure of a nice 3-braid $\beta \in B_{3}$. Furthermore, assume that the positive syllables of $\beta$ are cyclically neighbored on both sides by negative syllables. Then the all-A state $H_{A}$ of $D(K)$ satisfies precisely one of the following conditions: \label{mylemma2} \begin{enumerate} \item[(1)] $H_{A}$ contains exactly one nonwandering circle and no wandering circles. \item[(2)] $H_{A}$ contains exactly one wandering circle and no nonwandering circles. \end{enumerate} \end{lemma} \begin{proof} A 3-braid $\beta \in B_{3}$ is either alternating or nonalternating. \noindent \underline{Case 1}: Suppose $\beta$ is alternating.
Then one of the braid generators must always occur with positive exponents and the other generator must always occur with negative exponents. Hence, by the proof of Sublemma~\ref{sublemma1}, since a generator occurs with only positive exponents, then there will be a nonwandering circle. Since only one generator occurs with only positive exponents, then there is only one such nonwandering circle. Also, since adjacent negative syllables cannot occur in $\beta$, then wandering circles cannot occur in $H_{A}$. \noindent \underline{Case 2}: Suppose $\beta$ is nonalternating. Since positive syllables are cyclically neighbored on both sides by negative syllables, then the nonalternating behavior of $\beta$ must come from a pair of adjacent negative syllables. Hence, by the proof of Sublemma~\ref{sublemma1}, this implies the existence of a wandering circle in $H_{A}$ and prevents the existence of a nonwandering circle (since both generators occur once with negative exponent). Finally, since a wandering circle (which is always essential in the case that $n=3$) ``uses up'' a braid string from the braid closure and since a wandering circle must wander from braid string position 1 to braid string position 3 and back before closing up, then the existence of a second such wandering circle is impossible. \end{proof} \begin{definition} Let $D(K)=\widehat{\beta}$ be the closure of an $n$-braid $\beta \in B_{n}$. With the sign conventions for braid generators given in Figure~\ref{braidgen}, we call a twist region \emph{positive} if its crossings correspond to a positive syllable in the braid word $\beta$ and \emph{negative} if its crossings correspond to a negative syllable in the braid word $\beta$. Let $t^{+}(D)$ denote the number of positive twist regions in $D(K)$ and let $t^{-}(D)$ denote the number of negative twist regions in $D(K)$. \end{definition} The following theorem, which is the first main result of this paper, is a more precise version of Theorem~\ref{introthm} from the introduction. \begin{theorem} \label{mytheorem} Let $D(K)=\widehat{\beta}$, where $\beta \in B_{n}$ satisfies the assumptions of Lemma~\ref{setup}. Let $m$ denote the number of non-essential wandering circles in the all-A state $H_{A}$ of $D(K)$. In the case that $n=3$, we have that $$-\chi(\mathbb{G}_{A}')=t^{-}(D)-1 \geq \frac{1}{2}\cdot (t(D)-2),$$ \noindent which gives the volume bounds $$\frac{v_{8}}{2}\cdot(t(D)-2) \leq v_{8}\cdot(t^{-}(D)-1) \leq \mathrm{vol}(S^{3}\backslash K) < 10v_{3}\cdot(t(D)-1).$$ In the case that $n\geq4$ and that the only positive syllables of $\beta$ possibly occur in the generators $\sigma_{1}$ and $\sigma_{n-1}$, we have the volume bounds $$\frac{v_{8}}{2}\cdot(t(D)-2(n+m-2)) \leq v_{8}\cdot(t^{-}(D)-(n+m-2)) \leq \mathrm{vol}(S^{3}\backslash K) < 10v_{3}\cdot(t(D)-1).$$ In the case that $n\geq4$ and that positive syllables of $\beta$ occur in a generator $\sigma_{i}$ for some $2 \leq i \leq n-2$, we have the volume bounds $$v_{8}\cdot(t^{-}(D)-t^{+}(D)-(n+m-2)) \leq \mathrm{vol}(S^{3}\backslash K) < 10v_{3}\cdot(t(D)-1) = 10v_{3}\cdot(t^{-}(D)+t^{+}(D)-1).$$ \end{theorem} It is worth noting that the lower bounds on volume in terms of $t(D)$ given above are sharper than the general lower bounds given in the Main Theorem (Theorem 1.1) of \cite{Me} when $t(D)$ is large compared to the sum $n+m$. Specifically, the lower bounds are sharper when $t(D)>4$ for the case when $n=3$ (in which $m=0$ and $t(D) \geq 4=2(n-1)$) and when $t(D)>6(n+m)-16$ for the case when $n \geq 4$. 
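As an illustrative sample computation (this particular example is ours and is not drawn from the references), consider the negative braid $$\beta=\sigma_{1}^{-3}\sigma_{2}^{-3}\sigma_{1}^{-3}\sigma_{2}^{-3} \in B_{3}.$$ This braid is nice and cyclically reduced into syllables, it satisfies Condition (1) of Lemma~\ref{setup}, and Condition (2a), Condition (2b), and Condition (2c) hold vacuously since $\beta$ contains no positive syllables. Here $n=3$, $m=0$, $t(D)=t^{-}(D)=4$, and $t^{+}(D)=0$, so the case $n=3$ of Theorem~\ref{mytheorem} gives $-\chi(\mathbb{G}_{A}')=t^{-}(D)-1=3$ and the volume bounds $$3v_{8} \leq \mathrm{vol}(S^{3}\backslash K) < 30v_{3},$$ that is, approximately $10.99 \leq \mathrm{vol}(S^{3}\backslash K) < 30.45$.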
\begin{proof}[Proof of Theorem~\ref{mytheorem}] To begin, note that we may apply the many results of Lemma~\ref{setup}. Since $D(K)$ is connected, A-adequate, and satisfies the TELC, then Lemma~3.4 of \cite{Me} implies that $$-\chi(\mathbb{G}_{A}')=t(D)-\#\left\{\text{OCs}\right\},$$ where $\#\left\{\text{OCs}\right\}$ is the number of all-A circles in $H_{A}$, called \emph{other circles}, that are not small inner circles. Note that, by Sublemma~\ref{sublemma1}, the set of other circles consists of the medium inner circles, essential wandering circles, non-essential wandering circles, and nonwandering circles in $H_{A}$. By Remark~\ref{cond} and Figure~\ref{loop}, it can be seen that $$t^{+}(D) \leq \#\left\{\text{medium\ inner\ circles}\right\} \leq 2t^{+}(D),$$ which says that each positive twist region corresponds to either one or two medium inner circles. In particular, looking at the left side (up to reflection) of Figure~\ref{loop}, it can be seen that $$\#\left\{\text{medium\ inner\ circles}\right\} = t^{+}(D)$$ in the case that $n=3$ and in the case that $n \geq 4$ when the only positive syllables of $\beta$ possibly occur in the generators $\sigma_{1}$ and $\sigma_{n-1}$. Additionally, Condition (2a), Condition (2b), and Condition (2c) of Lemma~\ref{setup} imply that at least half of the twist regions in $D(K)$ must be negative twist regions. \noindent \underline{Case 1}: Suppose $n=3$. By Lemma~\ref{mylemma2}, we know that the total number of wandering circles and nonwandering circles is one. Therefore, using all of what was said above, we get \begin{eqnarray*}-\chi(\mathbb{G}_{A}') & = & t(D)-\#\left\{\text{OCs}\right\}\\ \ & = & t(D)-\#\left\{\text{medium\ inner\ circles}\right\}-\#\left\{\text{wandering\ circles}\right\}\\ \ & \ & \hspace{2in} -\#\left\{\text{nonwandering\ circles}\right\}\\ \ & = & t(D)-t^{+}(D)-1\\ \ & = & t^{-}(D)-1\\ \ & \geq & \frac{t(D)}{2}-1\\ \ & = & \frac{1}{2}\cdot (t(D)-2). \end{eqnarray*} \noindent \underline{Case 2}: Suppose $n\geq4$. Note that both essential wandering circles and nonwandering circles ``use up'' a braid string from the braid closure. Also, note that Lemma~\ref{setup} implies that there can be at most two nonwandering circles in the all-A state $H_{A}$. These two facts imply that \begin{equation}\label{EWC} \#\left\{\text{essential\ wandering\ circles}\right\}+\#\left\{\text{nonwandering\ circles}\right\} \leq n-2. \end{equation} \noindent \underline{Subcase 1}: Suppose that the only positive syllables of $\beta$ possibly occur in the generators $\sigma_{1}$ and $\sigma_{n-1}$, in which case we have that $\#\left\{\text{medium\ inner\ circles}\right\} = t^{+}(D)$. Then, by using Inequality~\ref{EWC} and what was said in the beginning of the proof, we get \begin{eqnarray*}-\chi(\mathbb{G}_{A}') & = & t(D)-\#\left\{\text{OCs}\right\}\\ \ & = & t(D)-\#\left\{\text{medium\ inner\ circles}\right\}-\#\left\{\text{non-essential\ wandering\ circles}\right\}\\ \ & \ & \hspace{.318in} -\#\left\{\text{essential\ wandering\ circles}\right\}-\#\left\{\text{nonwandering\ circles}\right\}\\ \ & \geq & t(D)-t^{+}(D)-m-(n-2)\\ \ & = & t^{-}(D)-(n+m-2)\\ \ & \geq & \frac{t(D)}{2}-(n+m-2)\\ \ & = & \frac{1}{2}\cdot (t(D)-2(n+m-2)). \end{eqnarray*} \noindent \underline{Subcase 2}: Suppose that positive syllables of $\beta$ occur in a generator $\sigma_{i}$ for some $2 \leq i \leq n-2$, in which case we have that $\#\left\{\text{medium\ inner\ circles}\right\} \leq 2t^{+}(D)$. 
Then, by using Inequality~\ref{EWC} and what was said in the beginning of the proof, we get \begin{eqnarray*}-\chi(\mathbb{G}_{A}') & = & t(D)-\#\left\{\text{OCs}\right\}\\ \ & = & t(D)-\#\left\{\text{medium\ inner\ circles}\right\}-\#\left\{\text{non-essential\ wandering\ circles}\right\}\\ \ & \ & \hspace{.318in} -\#\left\{\text{essential\ wandering\ circles}\right\}-\#\left\{\text{nonwandering\ circles}\right\}\\ \ & \geq & t(D)-2t^{+}(D)-m-(n-2)\\ \ & = & t^{-}(D)-t^{+}(D)-(n+m-2).\\ \end{eqnarray*} \noindent By applying the inequalities for $-\chi(\mathbb{G}_{A}')$ above to Theorem~\ref{Cor}, we get the desired volume bounds. \end{proof} \subsection{Volume bounds in terms of the colored Jones polynomial} We now continue our study of volume bounds for hyperbolic A-adequate closed braids by translating our diagrammatic volume bounds from earlier to volume bounds in terms of the stable penultimate coefficient $\beta_{K}'$ of the colored Jones polynomial. Recall from the introduction that we denote the \emph{$j^{th}$ colored Jones polynomial} of a link $K$ by $$J_{K}^{j}(t)=\alpha_{j}t^{m_{j}}+\beta_{j}t^{m_{j}-1}+\cdots+\beta_{j}'t^{r_{j}+1}+\alpha_{j}'t^{r_{j}},$$ \noindent where $j \in \mathbb{N}$ and where the degree of each monomial summand decreases from left to right. The following corollary of Theorem~\ref{mytheorem} is a more precise version of Corollary~\ref{newcor} from the introduction. \begin{corollary} \label{newestsetup} Let $D(K)=\widehat{\beta}$ denote the closure of a nice $n$-braid $\beta \in B_{n}$ that satisfies the assumptions of Lemma~\ref{setup}. In the case that $n=3$ and in the case that $n \geq 4$ when the only positive syllables of $\beta$ possibly occur in the generators $\sigma_{1}$ and $\sigma_{n-1}$, we have the volume bounds $$v_{8}\cdot(\left|\beta_{K}'\right|-1) \leq \mathrm{vol}(S^{3}\backslash K) < 20v_{3}\cdot\left(\left|\beta_{K}'\right|+n+m-\dfrac{7}{2}\right).$$ \end{corollary} \begin{proof} Since $D(K)$ is a connected, A-adequate link diagram, then Theorem 3.1 of \cite{HeadTail} implies that the absolute value $$\left|\beta_{K}'\right|:=\left|\beta_{j}'\right|=1-\chi(\mathbb{G}_{A}')$$ is independent of $j \geq 2$. By combining this result with Theorem~\ref{Cor} and Theorem~\ref{mytheorem}, we get the desired result. \end{proof} \section{Volume bounds for A-adequate closed 3-braids in terms of the Schreier normal form} \subsection{The Schreier normal form for 3-braids and volume} \label{normformsec} A useful development in the history of 3-braids was the solution to the Conjugacy Problem (\cite{Schreier}). During this time, an algorithm was developed that produces from an arbitrary 3-braid word $\beta$ a conjugate 3-braid word $\beta'$, called the \emph{Schreier normal form}, which is the unique representative of the conjugacy class of $\beta$. A version of this algorithm is presented below. \noindent \textbf{Schreier Normal Form Algorithm:} \begin{itemize} \item[(1)] Let $\beta \in B_{3}=\langle \sigma_{1}, \sigma_{2}\ \vert\ \sigma_{1}\sigma_{2}\sigma_{1}=\sigma_{2}\sigma_{1}\sigma_{2}\rangle$ be cyclically reduced into syllables. Introduce new variables $x=\left(\sigma_{1}\sigma_{2}\sigma_{1}\right)^{-1}$ and $y=\sigma_{1}\sigma_{2}$. Thus we have that $\sigma_{1}=y^2x$, $\sigma_{2}=xy^2$, $\sigma_{1}^{-1}=xy$, and $\sigma_{2}^{-1}=yx$. Possibly using cyclic permutation, rewrite $\beta$ as a cyclically reduced word (that is positive) in $x$ and $y$. 
\item[(2)] Introduce $C=x^{-2}=y^{3} \in Z(B_{3}),$ where $Z(B_{3})$ denotes the center of the 3-braid group $B_{3}$. By using these relations and the commutativity of $C$ as much as possible, group all powers of $C$ at the beginning of the braid word and reduce the exponents of $x$ and $y$ as much as possible. Rewrite $\beta$ as $\beta=C^{j}\eta$, where $j \in \mathbb{Z}$ and where $$\eta = \left\{ \begin{array}{ll} \left(xy\right)^{p_{1}}\left(xy^2\right)^{q_{1}}\cdots \left(xy\right)^{p_{s}}\left(xy^2\right)^{q_{s}} & \mathrm{for\ some}\ s, p_{i}, q_{i} \geq 1 \\ \left(xy\right)^{p} & \mathrm{for\ some}\ p \geq 1 \\ \left(xy^{2}\right)^{q} & \mathrm{for\ some}\ q \geq 1 \\ y & \ \\ y^2 & \ \\ x & \ \\ 1 & \ \\ \end{array} \right. $$ \item[(3)] Possibly using cyclic permutation and the commutativity of $C$, rewrite $\beta$ back in terms of $\sigma_{1}$ and $\sigma_{2}$ as $\beta'=C^{k}\eta'$, where $k \in \mathbb{Z}$ and where $$ \eta' = \left\{ \begin{array}{ll} \sigma_{1}^{-p_{1}}\sigma_{2}^{q_{1}}\cdots \sigma_{1}^{-p_{s}}\sigma_{2}^{q_{s}} & \mathrm{for\ some}\ s, p_{i}, q_{i} \geq 1 \\ \sigma_{1}^{p} & \mathrm{for\ some}\ p\in \mathbb{Z} \\ \sigma_{1}\sigma_{2} & \ \\ \sigma_{1}\sigma_{2}\sigma_{1} & \ \\ \sigma_{1}\sigma_{2}\sigma_{1}\sigma_{2} & \ \end{array} \right. $$ \end{itemize} \begin{definition} We call $\beta'=C^{k}\eta' \in B_{3}$ the \emph{Schreier normal form} of $\beta \in B_{3}$. The braid word $\beta'$ is the unique representative of the conjugacy class of $\beta$. Following \cite{Cusp}, we will call a braid $\beta$ \emph{generic} if it has Schreier normal form $$\beta'=C^{k}\sigma_{1}^{-p_{1}}\sigma_{2}^{q_{1}}\cdots \sigma_{1}^{-p_{s}}\sigma_{2}^{q_{s}}.$$ \end{definition} Using the Schreier normal form of a 3-braid, Futer, Kalfagianni, and Purcell (\cite{Cusp}) classified the hyperbolic 3-braid closures. Furthermore, given such a hyperbolic closed 3-braid, they gave two-sided bounds on the volume of the link complement, expressing the volume in terms of the parameter $s$ from the Schreier normal form of the 3-braid. By using the more recent machinery built by the same authors in \cite{Guts}, we aim to obtain a sharper lower bound on volume. To begin our study, we first recall two propositions from \cite{Cusp}. \begin{proposition}[\cite{Cusp}, Theorem 5.5] Let $D(K)=\widehat{\beta}$ denote the closure of a 3-braid $\beta \in B_{3}$. Then $K$ is hyperbolic if and only if \begin{itemize} \item[(1)] $\beta$ is generic, and \item[(2)] $\beta$ is not conjugate to $\sigma_{1}^p\sigma_{2}^q$ for any integers $p$ and $q$. \end{itemize} \label{hypgeneric} \end{proposition} \begin{proposition}[\cite{Cusp}, Theorem 5.6]\label{svolbound} Let $D(K)=\widehat{\beta}$ denote the closure of a 3-braid $\beta \in B_{3}$. Then, assuming that $K$ is hyperbolic, we have that $$4v_{3}\cdot s-276.6 < \mathrm{vol}(S^3 \backslash K) < 4v_{8}\cdot s.$$ \end{proposition} \subsection{Volume bounds in terms of the Schreier normal form} We begin this section by presenting a result that, for the family of closed 3-braids that satisfy Lemma~\ref{setup}, states that the parameters $k$ and $s$ from the Schreier normal form of the 3-braid can actually be read off of the original 3-braid word. We then use this result to obtain volume bounds in terms of the parameter $s$ from the Schreier normal form. \begin{theorem} \label{normformthm} Let $D(K)=\widehat{\beta}$ denote the closure of a nice $3$-braid $\beta \in B_{3}$ that satisfies the assumptions of Lemma~\ref{setup}. 
Then $\beta$ is generic and, furthermore, we are able to express the parameters $k$ and $s$ of the Schreier normal form $\beta'$ in terms of the original 3-braid $\beta$ as follows: \begin{enumerate} \item[(1)] $k=-\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{2}}\sigma_{1}^{n_{1}}\ \text{of\ negative\ syllables\ of}\ \beta,\ \text{where}\ n_{1}, n_{2} \leq -3 \right\}$. \item[(2)] $s=t^{-}(D)=\#\left\{\text{negative\ syllables\ in}\ \beta \right\}$. \end{enumerate} \end{theorem} Note that, when looking for the induced products $\sigma_{2}^{n_{2}}\sigma_{1}^{n_{1}}$ of negative syllables of $\beta$ to find $k$, we must look cyclically in the braid word. As a special case of the above theorem, notice that if $\beta=\sigma_{i}^{p_{1}}\sigma_{j}^{n_{1}}\cdots\sigma_{i}^{p_{l}}\sigma_{j}^{n_{l}}$ is an alternating 3-braid where $\{i,j\}=\{1,2\}$, then $k=0$ and $s=l$. Also note that the count $$\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{2}}\sigma_{1}^{n_{1}}\ \text{of\ negative\ syllables\ of}\ \beta,\ \text{where}\ n_{1}, n_{2} \leq -3 \right\}$$ is the same as the count $$\# \left\{\text{induced\ products}\ \sigma_{1}^{n_{1}'}\sigma_{2}^{n_{2}'}\ \text{of\ negative\ syllables\ of}\ \beta,\ \text{where}\ n_{1}', n_{2}' \leq -3 \right\}.$$ This is because, recalling from Lemma~\ref{setup} that wandering corresponds to the existence of adjacent negative syllables (in adjacent braid generators), the number of times a wandering circle wanders from braid string position 1 to braid string position 3 is the same as the number of times a wandering circle wanders from braid string position 3 to braid string position 1. \begin{remark} Since the 3-braids of Theorem~\ref{normformthm} (which satisfy the assumptions of Lemma~\ref{setup}) are generic, then Proposition~4.15 of \cite{Width} implies, with the assumption that $k \neq 0$, that $$\left|k\right|-1 \leq g_{T}(K) \leq \left|k\right|,$$ where $K$ is the hyperbolic link with closed 3-braid diagram $D(K)=\widehat{\beta}$ and where $g_{T}(K)$ denotes the Turaev genus of $K$. Thus, since conclusion (1) of Theorem~\ref{normformthm} gives that we can read the parameter $k$ from the original 3-braid $\beta$, then we can visually bound the Turaev genus of the braid closure. \end{remark} Deferring the proof of Theorem~\ref{normformthm} to the next section, we now relate the parameter $s$ from the Schreier normal form of a 3-braid from Lemma~\ref{setup} to the stable penultimate coefficient $\beta_{K}'$ of the colored Jones polynomial of the closure of this 3-braid. \begin{corollary}\label{scor} Let $D(K)=\widehat{\beta}$, where $\beta \in B_{3}$ satisfies the assumptions of Lemma~\ref{setup}. Then $$s=t^{-}(D)=\vert\beta_{K}'\vert.$$ Thus, $t^{-}(D)$ and $s$ are link invariants. \end{corollary} \begin{proof} Using the conclusions of Lemma~\ref{setup}, we can apply Theorem 3.1 of \cite{HeadTail}, Theorem~\ref{mytheorem}, and Theorem~\ref{normformthm} respectively to get that $\vert\beta_{K}'\vert = 1-\chi(\mathbb{G}_{A}') = 1+(t^{-}(D)-1) = t^{-}(D) = s$. Since the colored Jones polynomial and therefore its coefficients are link invariants, then we can conclude that $t^{-}(D)$ and $s$ are link invariants. \end{proof} By combining Corollary~\ref{newestsetup}, Corollary~\ref{scor}, and Proposition~\ref{svolbound}, we get the following result, which is a more precise version of Theorem~\ref{sthm} from the introduction. 
\begin{theorem}\label{newsvolbound} Let $D(K)=\widehat{\beta}$, where $\beta \in B_{3}$ satisfies the assumptions of Lemma~\ref{setup}. Then we have the volume bounds $$v_{8}\cdot(s-1) \leq \mathrm{vol}(S^{3}\backslash K) < 4v_{8}\cdot s.$$ \end{theorem} \begin{remark} Comparing the lower bound on volume of Theorem~\ref{newsvolbound} to that of Proposition~\ref{svolbound}, we get that $v_{8}\cdot (s-1) \geq 4v_{3}\cdot s-276.6$ is equivalent to the condition that $\displaystyle s \leq \frac{276.6-v_{8}}{4v_{3}-v_{8}} \approx 690$. Therefore, the lower bound found in Theorem~\ref{newsvolbound} is sharper than the lower bound provided by Proposition~\ref{svolbound} unless the parameter $s$ from the Schreier normal form is very large. \end{remark} \subsection{The proof of Theorem~\ref{normformthm}} We will now prove Theorem~\ref{normformthm} using a series of lemmas, beginning by first considering the simpler case that $\beta \in B_{3}$ is a negative braid. \begin{lemma} Let $D(K)=\widehat{\beta}$, where $\beta \in B_{3}$ is a negative braid that satisfies the assumptions of Lemma~\ref{setup}. Then $\beta$ is generic and, furthermore, we are able to express the parameters $k$ and $s$ of the Schreier normal form $\beta'$ in terms of the original 3-braid $\beta$ as follows: \begin{enumerate} \item[(1)] $k=-\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{2}}\sigma_{1}^{n_{1}}\ \text{of\ negative\ syllables\ of}\ \beta,\ \text{where}\ n_{1}, n_{2} \leq -3 \right\}$. \item[(2)] $s=t^{-}(D)=\#\left\{\text{negative\ syllables\ in}\ \beta \right\}$. \end{enumerate} \end{lemma} \begin{proof} By Lemma~\ref{setup}, we have that $K$ is hyperbolic. By Proposition~\ref{hypgeneric}, this implies that $\beta$ is generic. Since $\beta$ is a negative braid, then (possibly using cyclic permutation) we may write $\beta$ as $\beta=\sigma_{1}^{n_{1}}\sigma_{2}^{n_{2}}\cdots\sigma_{1}^{n_{2m-1}}\sigma_{2}^{n_{2m}}$, where $n_{i} \leq -3$ and the fact that $\beta$ is nice forces the condition that $m \geq 2$. 
Applying the Schreier Normal Form Algorithm, we get \begin{eqnarray*} \beta & = & \sigma_{1}^{n_{1}}\sigma_{2}^{n_{2}}\cdots\sigma_{1}^{n_{2m-1}}\sigma_{2}^{n_{2m}}\\ \ & = & (xy)^{-n_{1}}(yx)^{-n_{2}}\cdots(xy)^{-n_{2m-1}}(yx)^{-n_{2m}}\\ \ & = & (xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-1}(x^{2})y\cdots(x^{2})y(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}x\\ \ & = & (xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-1}(C^{-1})y\cdots(C^{-1})y(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}x\\ \ & \cong & (C^{-1})^{m-1}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-2}(xy^{2})\cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}x\\ \ & \cong & (C^{-1})^{m-1}x(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-2}(xy^{2})\cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}\\ \ & = & (C^{-1})^{m-1}(x^{2})y(xy)^{-n_{1}-2}(xy^{2})(xy)^{-n_{2}-2}(xy^{2})\cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}\\ \ & = & (C^{-1})^{m}y(xy)^{-n_{1}-2}(xy^{2})(xy)^{-n_{2}-2}(xy^{2})\cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}\\ \ & \cong & (C^{-1})^{m}(xy)^{-n_{1}-2}(xy^{2})(xy)^{-n_{2}-2}(xy^{2})\cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}y\\ \ & = & (C^{-1})^{m}(xy)^{-n_{1}-2}(xy^{2})(xy)^{-n_{2}-2}(xy^{2})\cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-2}(xy^{2})\\ \ & = & (C^{-1})^{m}\sigma_{1}^{n_{1}+2}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2}\cdots\sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+2}\sigma_{2}\\ \ & = & \beta', \end{eqnarray*} \noindent where $\cong$ denotes that cyclic permutation or the fact that $C \in Z(B_{3})$ has been used. Thus, we see that $$k=-m=-\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{i}}\sigma_{1}^{n_{i+1}}\ \text{of\ negative\ syllables\ of}\ \beta,\ \text{where}\ n_{i}, n_{i+1} \leq -3 \right\}$$ \noindent and $$s=2m=\#\left\{\text{negative\ syllables\ in}\ \beta \right\}.$$ \noindent Recall that, when looking for the induced products $\sigma_{2}^{n_{i}}\sigma_{1}^{n_{i+1}}$ of negative syllables of $\beta$ to find $k$, we must look cyclically in the braid word. \end{proof} We will now consider the more complicated case that $\beta \in B_{3}$ is not a negative braid. Since $\beta$ is not a negative braid, then (possibly using cyclic permutation) we may write $\beta$ as $\beta=P_{1}N_{1} \cdots P_{t}N_{t}$, where $P_{i}$ denotes a positive syllable of $\beta$ and where $N_{i}$ denotes a maximal length negative induced subword of $\beta$. Note that this decomposition arises as a result of Condition (2a), Condition (2b), and Condition (2c) of Lemma~\ref{setup} (the conditions that positive syllables are cyclically neighbored on both sides by negative syllables). Our strategy for the proof of this case is to apply the Schreier Normal Form Algorithm to the subwords $P_{i}N_{i}$, showing along the way that cyclic permutation is never needed in applying the algorithm, to show that Theorem~\ref{normformthm} is locally satisfied for the $P_{i}N_{i}$, and to finally show that juxtaposing the subwords $P_{i}N_{i}$ to form $\beta$ allows the local conclusions of Theorem~\ref{normformthm} for the $P_{i}N_{i}$ to combine to give the global conclusion of Theorem~\ref{normformthm} for $\beta$. We now list the types of induced subwords $P_{i}N_{i}$ below. For each subword, we consider the two possible subtypes. Let $p>0$ denote a positive exponent and let the $n_{i}\leq-3$ denote negative exponents. 
\begin{enumerate} \item[(1a)] $\sigma_{2}^{p}\sigma_{1}^{n_{1}}$ \item[(1b)] $\sigma_{1}^{p}\sigma_{2}^{n_{1}}$ \item[(2a)] $\sigma_{2}^{p}\sigma_{1}^{n_{1}}\sigma_{2}^{n_{2}}\cdots\sigma_{1}^{n_{2m-1}}\sigma_{2}^{n_{2m}}$, where $m \geq 1$ \item[(2b)] $\sigma_{1}^{p}\sigma_{2}^{n_{1}}\sigma_{1}^{n_{2}}\cdots\sigma_{2}^{n_{2m-1}}\sigma_{1}^{n_{2m}}$, where $m \geq 1$ \item[(3a)] $\sigma_{2}^{p}\sigma_{1}^{n_{1}}\sigma_{2}^{n_{2}}\cdots\sigma_{1}^{n_{2m-1}}\sigma_{2}^{n_{2m}}\sigma_{1}^{n_{2m+1}}$, where $m \geq 1$ \item[(3b)] $\sigma_{1}^{p}\sigma_{2}^{n_{1}}\sigma_{1}^{n_{2}}\cdots\sigma_{2}^{n_{2m-1}}\sigma_{1}^{n_{2m}}\sigma_{2}^{n_{2m+1}}$, where $m \geq 1$ \end{enumerate} \begin{lemma} \label{normlist} Given the list of induced subwords $P_{i}N_{i}$ above, applying the Schreier Normal Form Algorithm produces the following corresponding list of braid words. \begin{enumerate} \item[(1a)] $\sigma_{2}^{p}\sigma_{1}^{n_{1}}$ \item[(1b)] $\mathbf{y^{2}}\sigma_{2}^{p-1}\sigma_{1}^{n_{1}}\mathbf{x}$ \item[(2a)] $(C^{-1})^{m-1}\sigma_{2}^{p}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+1}\mathbf{x}$, where $m \geq 1$ \item[(2b)] $(C^{-1})^{m}\mathbf{y^{2}}\sigma_{2}^{p-1}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+1}$, where $m \geq 1$ \item[(3a)] $(C^{-1})^{m}\sigma_{2}^{p}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+2}\sigma_{2}\sigma_{1}^{n_{2m+1}+1}$, where $m \geq 1$ \item[(3b)] $(C^{-1})^{m}\mathbf{y^{2}}\sigma_{2}^{p-1}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+2}\sigma_{2}\sigma_{1}^{n_{2m+1}+1}\mathbf{x}$, where $m~\geq~1$ \end{enumerate} Also, when applying the Schreier Normal Form Algorithm to the subwords $P_{i}N_{i}$, cyclic permutation is avoided. Furthermore, letting $k_{i}$ denote the exponent of $C$ in the application of the Schreier Normal Form Algorithm to the subword $P_{i}N_{i}$, we have that a local version of Theorem~\ref{normformthm} holds for the $k_{i}$. \end{lemma} \begin{proof} Let $\cong$ denote the fact that $C \in Z(B_{3})$ has been used.
Applying the Schreier Normal Form Algorithm to type (1a), we get $$P_{i}N_{i} = \sigma_{2}^{p}\sigma_{1}^{n_{1}} = (xy^{2})^{p}(xy)^{-n_{1}} = \sigma_{2}^{p}\sigma_{1}^{n_{1}}.$$ \noindent Applying the Schreier Normal Form Algorithm to type (1b), we get $$P_{i}N_{i} = \sigma_{1}^{p}\sigma_{2}^{n_{1}} = (y^{2}x)^{p}(yx)^{-n_{1}} = y^{2}(xy^{2})^{p-1}(xy)^{-n_{1}}x = y^{2}\sigma_{2}^{p-1}\sigma_{1}^{n_{1}}x$$ \noindent Applying the Schreier Normal Form Algorithm to type (2a), we get \begin{eqnarray*} P_{i}N_{i} & = & \sigma_{2}^{p}\sigma_{1}^{n_{1}}\sigma_{2}^{n_{2}}\cdots\sigma_{1}^{n_{2m-1}}\sigma_{2}^{n_{2m}}\\ \ & = & (xy^{2})^{p}(xy)^{-n_{1}}(yx)^{-n_{2}} \cdots (xy)^{-n_{2m-1}}(yx)^{-n_{2m}}\\ \ & = & (xy^{2})^{p}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-1}(x^{2})y \cdots(x^{2})y(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}x\\ \ & = & (xy^{2})^{p}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-1}(C^{-1})y \cdots(C^{-1})y(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}x\\ \ & \cong & (C^{-1})^{m-1}(xy^{2})^{p}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-2}(xy^{2}) \cdots\\ \ & \ & \cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}x\\ \ & = & (C^{-1})^{m-1}\sigma_{2}^{p}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+1}x \end{eqnarray*} \noindent Applying the Schreier Normal Form Algorithm to type (2b), we get \begin{eqnarray*} P_{i}N_{i} & = & \sigma_{1}^{p}\sigma_{2}^{n_{1}}\sigma_{1}^{n_{2}}\cdots\sigma_{2}^{n_{2m-1}}\sigma_{1}^{n_{2m}}\\ \ & = & (y^{2}x)^{p}(yx)^{-n_{1}}(xy)^{-n_{2}} \cdots (yx)^{-n_{2m-1}}(xy)^{-n_{2m}}\\ \ & = & y^{2}(xy^{2})^{p-1}(xy)^{-n_{1}}(x^{2})y(xy)^{-n_{2}-2}(xy^{2}) \cdots(xy^{2})(xy)^{-n_{2m-1}-1}(x^{2})y(xy)^{-n_{2m}-1}\\ \ & = & y^{2}(xy^{2})^{p-1}(xy)^{-n_{1}}(C^{-1})y(xy)^{-n_{2}-2}(xy^{2}) \cdots(xy^{2})(xy)^{-n_{2m-1}-1}(C^{-1})y(xy)^{-n_{2m}-1}\\ \ & \cong & (C^{-1})^{m}y^{2}(xy^{2})^{p-1}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-2}(xy^{2}) \cdots\\ \ & \ & \cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}\\ \ & = & (C^{-1})^{m}y^{2}\sigma_{2}^{p-1}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+1} \end{eqnarray*} \noindent Applying the Schreier Normal Form Algorithm to type (3a), we get \begin{eqnarray*} P_{i}N_{i} & = & \sigma_{2}^{p}\sigma_{1}^{n_{1}}\sigma_{2}^{n_{2}}\cdots\sigma_{1}^{n_{2m-1}}\sigma_{2}^{n_{2m}}\sigma_{1}^{n_{2m+1}}\\ \ & = & (xy^{2})^{p}(xy)^{-n_{1}}(yx)^{-n_{2}} \cdots (xy)^{-n_{2m-1}}(yx)^{-n_{2m}}(xy)^{-n_{2m+1}}\\ \ & = & (xy^{2})^{p}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-1}(x^{2})y \cdots\\ \ & \ & \cdots(x^{2})y(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}(x^{2})y(xy)^{-n_{2m+1}-1}\\ \ & = & (xy^{2})^{p}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-1}(C^{-1})y \cdots\\ \ & \ & \cdots(C^{-1})y(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-1}(C^{-1})y(xy)^{-n_{2m+1}-1}\\ \ & \cong & (C^{-1})^{m}(xy^{2})^{p}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-2}(xy^{2}) \cdots\\ \ & \ & \cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-2}(xy^{2})(xy)^{-n_{2m+1}-1}\\ \ & = & (C^{-1})^{m}\sigma_{2}^{p}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+2}\sigma_{2}\sigma_{1}^{n_{2m+1}+1} \end{eqnarray*} \noindent Applying the Schreier Normal Form Algorithm to type (3b), we get \begin{eqnarray*} P_{i}N_{i} & = & \sigma_{1}^{p}\sigma_{2}^{n_{1}}\sigma_{1}^{n_{2}}\cdots\sigma_{2}^{n_{2m-1}}\sigma_{1}^{n_{2m}}\sigma_{2}^{n_{2m+1}}\\ \ & = & 
(y^{2}x)^{p}(yx)^{-n_{1}}(xy)^{-n_{2}} \cdots (yx)^{-n_{2m-1}}(xy)^{-n_{2m}}(yx)^{-n_{2m+1}}\\ \ & = & y^{2}(xy^{2})^{p-1}(xy)^{-n_{1}}(x^{2})y(xy)^{-n_{2}-2}(xy^{2}) \cdots\\ \ & \ & \cdots(xy^{2})(xy)^{-n_{2m-1}-1}(x^{2})y(xy)^{-n_{2m}-2}(xy^{2})(xy)^{-n_{2m+1}-1}x\\ \ & = & y^{2}(xy^{2})^{p-1}(xy)^{-n_{1}}(C^{-1})y(xy)^{-n_{2}-2}(xy^{2}) \cdots\\ \ & \ & \cdots(xy^{2})(xy)^{-n_{2m-1}-1}(C^{-1})y(xy)^{-n_{2m}-2}(xy^{2})(xy)^{-n_{2m+1}-1}x\\ \ & \cong & (C^{-1})^{m}y^{2}(xy^{2})^{p-1}(xy)^{-n_{1}-1}(xy^{2})(xy)^{-n_{2}-2}(xy^{2}) \cdots\\ \ & \ & \cdots(xy^{2})(xy)^{-n_{2m-1}-2}(xy^{2})(xy)^{-n_{2m}-2}(xy^{2})(xy)^{-n_{2m+1}-1}x\\ \ & = & (C^{-1})^{m}y^{2}\sigma_{2}^{p-1}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2} \cdots \sigma_{2}\sigma_{1}^{n_{2m-1}+2}\sigma_{2}\sigma_{1}^{n_{2m}+2}\sigma_{2}\sigma_{1}^{n_{2m+1}+1}x \end{eqnarray*} By inspecting the cases above, it can be seen that cyclic permutation is never used. Let $k_{i}$ denote the exponent of $C$ in the application of the Schreier Normal Form Algorithm to the subword $P_{i}N_{i}$. In each case above, it can be seen that $$k_{i}=-\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{j,i}}\sigma_{1}^{n_{j+1,i}}\ \text{of\ negative\ syllables\ of}\ P_{i}N_{i},\ \text{where}\ n_{j,i}, n_{j+1,i} \leq -3 \right\},$$ which is a local version of conclusion (1) from Theorem~\ref{normformthm}. \end{proof} The fact that cyclic permutation is never used in applying the Schreier Normal Form Algorithm to the subwords $P_{i}N_{i}$ is very important because the goal is to juxtapose the resulting braid words from Lemma~\ref{normlist} and claim that this gives, after some minor modifications, the normal form of the full braid word $\beta$. We are now ready to combine the above results to prove Theorem~\ref{normformthm} for the case that $\beta \in B_{3}$ is not a negative braid. The following lemma establishes that $\beta$ is generic and establishes the result for the parameter $k$ from the Schreier normal form (conclusion (1) from Theorem~\ref{normformthm}). \begin{lemma} \label{klemma} Let $D(K)=\widehat{\beta}$, where $\beta \in B_{3}$ is a nonnegative braid that satisfies the assumptions of Lemma~\ref{setup}. Then $\beta$ is generic and the parameter $k$ from the Schreier normal form $\beta'$ can be expressed in terms of the original 3-braid $\beta$ as $$k=-\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{2}}\sigma_{1}^{n_{1}}\ \text{of\ negative\ syllables\ of}\ \beta,\ \text{where}\ n_{1}, n_{2} \leq -3 \right\}.$$ \end{lemma} \begin{proof} By Lemma~\ref{setup}, we have that $K$ is hyperbolic. By Proposition~\ref{hypgeneric}, this implies that $\beta$ is generic. Let us now consider which types of induced subwords $P_{i+1}N_{i+1}$ can (cyclically) follow a given induced subword $P_{i}N_{i}$. Since $\beta$ is assumed to be cyclically reduced into syllables and since the induced subwords $P_{i}N_{i}$ contain unbroken syllables of $\beta$, then (looking at the list of induced subwords $P_{i}N_{i}$ preceding Lemma~\ref{normlist}) we get that \begin{itemize} \item the subwords $P_{i}N_{i}$ of type (1) and type (3) with subtype either (a) or (b) can be followed by subwords $P_{i+1}N_{i+1}$ of \textbf{the same} subtype (a) or (b), respectively. \item the subwords $P_{i}N_{i}$ of type (2) with subtype either (a) or (b) can be followed by subwords $P_{i+1}N_{i+1}$ of \textbf{different} subtype (b) or (a), respectively.
\end{itemize} It is important to note that the rules for juxtaposition given above also apply after the Schreier Normal Form Algorithm is applied to the list of induced subwords $P_{i}N_{i}$ preceding Lemma~\ref{normlist}. Abusing terminology slightly, call the braid words in the list given in Lemma~\ref{normlist} the \emph{normal forms of the $P_{i}N_{i}$}. Let $k_{i}$ denote the exponent of $C$ in the normal form of $P_{i}N_{i}$. Recall that $C \in Z(B_{3})$ commutes with the generators of $B_{3}$. Thus, when juxtaposing the normal form of $P_{i}N_{i}$ with the normal form of $P_{i+1}N_{i+1}$, we may move the factor $C^{k_{i+1}}$ out of the way, moving it from the beginning of the normal form of $P_{i+1}N_{i+1}$ to the beginning of the normal form of $P_{i}N_{i}$. This fact will be utilized below. Looking at the list of braid words given in Lemma~\ref{normlist}, notice that half of the braid words may potentially contain the variable $\mathbf{x}$ at the end of the word and half of the braid words may contain the expression $\mathbf{y^{2}}\sigma_{2}^{p-1}$ at the beginning of the word (immediately after the $C^{k_{i+1}}$ term that will be moved out of the way). We can now see that juxtaposing the normal form of $P_{i}N_{i}$ with that of $P_{i+1}N_{i+1}$ either \begin{enumerate} \item[(1)] involves neither $\mathbf{x}$ at the end of the normal form of $P_{i}N_{i}$ nor $\mathbf{y^{2}}\sigma_{2}^{p-1}$ at the beginning of the normal form of $P_{i+1}N_{i+1}$, or \item[(2)] involves (after moving $C^{k_{i+1}}$ out of the way) both an $\mathbf{x}$ at the end of the normal form of $P_{i}N_{i}$ and $\mathbf{y^{2}}\sigma_{2}^{p-1}$ at the beginning of the normal form of $P_{i+1}N_{i+1}$. \end{enumerate} \noindent Consequently, upon juxtaposing the normal forms of all of the induced subwords together, we see that all factors $\mathbf{x}$ and $\mathbf{y^{2}}\sigma_{2}^{p-1}$ in the list of normal forms combine to form $$\mathbf{xy^{2}}\sigma_{2}^{p-1}=\boldsymbol{\sigma_{2}}\sigma_{2}^{p-1}=\sigma_{2}^{p}.$$ Note, in particular, that juxtaposing the normal form of $P_{i}N_{i}$ with that of $P_{i+1}N_{i+1}$ does \textbf{not} create any new nontrivial powers of $C$. Therefore, since $C$ commutes with the generators of $B_{3}$, we may group all $C^{k_{i}}$ terms together at the beginning of the normal form braid word. Furthermore, by applying the conclusions of Lemma~\ref{normlist}, we have that $$k_{i}=-\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{j,i}}\sigma_{1}^{n_{j+1,i}}\ \text{of\ negative\ syllables\ of}\ P_{i}N_{i},\ \text{where}\ n_{j,i}, n_{j+1,i} \leq -3 \right\}.$$ We also have that juxtaposing $P_{i}N_{i}$ with $P_{i+1}N_{i+1}$ does \textbf{not} create any new induced products $\sigma_{2}^{n_{j}}\sigma_{1}^{n_{j+1}}$ of negative syllables. This is because juxtaposing $P_{i}N_{i}$ with $P_{i+1}N_{i+1}$ only joins a negative syllable with a positive syllable. With this information, we are now able to conclude that \begin{eqnarray*} k & = & \sum_{i=1}^{t} k_{i}\\ \ & = & \sum_{i=1}^{t} -\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{j,i}}\sigma_{1}^{n_{j+1,i}}\ \text{of\ negative\ syllables\ of}\ P_{i}N_{i},\ \text{where}\ n_{j,i}, n_{j+1,i} \leq -3 \right\}\\ \ & = & -\# \left\{\text{induced\ products}\ \sigma_{2}^{n_{j}}\sigma_{1}^{n_{j+1}}\ \text{of\ negative\ syllables\ of}\ \beta,\ \text{where}\ n_{j}, n_{j+1} \leq -3 \right\}. 
\end{eqnarray*} \end{proof} The following lemma proves conclusion (2) from Theorem~\ref{normformthm}, the result concerning the parameter $s$ from the Schreier normal form, for the case that $\beta \in B_{3}$ is not a negative braid. \begin{lemma} Let $D(K)=\widehat{\beta}$, where $\beta \in B_{3}$ is a nonnegative braid that satisfies the assumptions of Lemma~\ref{setup}. Then the parameter $s$ from the Schreier normal form $\beta'$ can be expressed in terms of the original 3-braid $\beta$ as $$s=t^{-}(D)=\#\left\{\text{negative\ syllables\ in}\ \beta \right\}.$$ \end{lemma} \begin{proof} We need to relate the global parameter $s$ from the Schreier normal form to local versions of the parameter $s$. Let $s_{i}$ denote the local version of the parameter $s$, which comes from the normal form of the subword $P_{i}N_{i}$ and will be more precisely defined below. As seen in the proof of Lemma~\ref{klemma} above, juxtaposing the normal forms of the subwords $P_{i}N_{i}$ to create a braid word groups together the $\mathbf{x}$ and $\mathbf{y^{2}}$ factors in the normal forms in such a way that they are absorbed into $\sigma_{2}^{p-1}$ to form $\sigma_{2}^{p}$. Also recall that we can collect together all individual powers of $C$ from each $P_{i}N_{i}$ normal form and use commutativity to form a single power of $C$ at the beginning of the normal form braid word. Thus, after juxtaposition of the normal forms of the $P_{i}N_{i}$, what results is a braid word that looks like $C^{k}W_{1} \cdots W_{t}$, where $k \in \mathbb{Z}$ and $W_{i}$ is an alternating word that is positive in $\sigma_{2}$, negative in $\sigma_{1}$, begins with a $\sigma_{2}$ syllable, and ends with a $\sigma_{1}$ syllable. To see this, recall the list of $P_{i}N_{i}$ normal forms given in Lemma~\ref{normlist}. Given an alternating subword $W_{i}=\sigma_{2}^{p_{1,i}}\sigma_{1}^{n_{1,i}}\cdots\sigma_{2}^{p_{q,i}}\sigma_{1}^{n_{q,i}}$ as described above, we define $s_{i}=q$. To provide an example, for the normal form $C^{-1}\sigma_{2}^{p}\sigma_{1}^{n_{1}+1}\sigma_{2}\sigma_{1}^{n_{2}+2}\sigma_{2}\sigma_{1}^{n_{3}+2}\sigma_{2}\sigma_{1}^{n_{4}+1}\mathbf{x}$ of Type (2a), we have that $s_{i}=4$. Consider the product $$W_{i}W_{i+1}=(\sigma_{2}^{p_{1,i}}[\sigma_{1}^{n_{1,i}}\cdots\sigma_{2}^{p_{q,i}}\sigma_{1}^{n_{q,i}})\cdot(\sigma_{2}^{p_{1,i+1}}]\sigma_{1}^{n_{1,i+1}}\cdots\sigma_{2}^{p_{r,i+1}}\sigma_{1}^{n_{r,i+1}}).$$ Note that the subword $[\sigma_{1}^{n_{1,i}}\cdots\sigma_{2}^{p_{q,i}}\sigma_{1}^{n_{q,i}}\cdot\sigma_{2}^{p_{1,i+1}}]$ looks like the alternating part of a generic braid word, the part of the normal form for which the parameter $s$ measures the length. From this perspective of (cyclically) borrowing the first syllable of the next subword, the local parameter $s_{i}$ makes sense as being the local version of the global parameter $s$. Recall that, since $\beta$ is cyclically reduced into syllables, then the number of negative twist regions in $D(K)=\widehat{\beta}$ corresponds to the number of negative syllables in $\beta$. Also, note that the decomposition of $\beta$ into induced subwords $P_{i}N_{i}$ (with unbroken syllables) gives that the number of negative syllables in $\beta$ is the sum of the numbers of negative syllables in the $P_{i}N_{i}$. Furthermore, returning to Lemma~\ref{normlist} and its proof, it can be seen that the number of negative syllables in the subword $P_{i}N_{i}$ is equal to the local parameter $s_{i}$ from the normal form of $P_{i}N_{i}$. 
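To make this correspondence concrete, consider again the type (2a) example just given (which corresponds to $m=2$); this is only a restatement of that example, with no new hypotheses. The original subword there is
$$P_{i}N_{i}=\sigma_{2}^{p}\sigma_{1}^{n_{1}}\sigma_{2}^{n_{2}}\sigma_{1}^{n_{3}}\sigma_{2}^{n_{4}},$$
which contains exactly four negative syllables, in agreement with the value $s_{i}=4$ read off from its normal form.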
Finally, since juxtaposing the normal forms of the $P_{i}N_{i}$ gives a braid word $C^{k}W_{1} \cdots W_{t}$ that (by cyclic permutation and the commutativity of $C$) is equivalent to the generic normal form of $\beta$, then the parameters $s_{i}$ from the $W_{i}$ sum to give the parameter $s$ from the Schreier normal form of $\beta$. With this information, we are now able to conclude that \begin{eqnarray*} t^{-}(D) & = & \#\left\{\text{negative\ syllables\ in\ }\beta\right\}\\ \ & = & \sum_{i=1}^{t} \#\left\{\text{negative\ syllables\ in\ }P_{i}N_{i}\right\}\\ \ & = & \sum_{i=1}^{t} s_{i}\\ \ & = & s. \end{eqnarray*} \end{proof} \end{document}
\begin{document} \setcounter{page}{1} \pagenumbering{arabic} \begin{center} \Large{\textsc{A Quick-and-Dirty Check\\ for a One-dimensional Active Subspace}}\\[5mm] Paul G.~Constantine\\ Colorado School of Mines, Golden, CO\\[5mm] \url{[email protected]}\\ \url{http://inside.mines.edu/~pconstan} \end{center} \begin{abstract} Most engineering models contain several parameters, and the map from input parameters to model output can be viewed as a multivariate function. An active subspace is a low-dimensional subspace of the space of inputs that explains the majority of variability in the function. Here we describe a quick check for a dominant one-dimensional active subspace based on computing a linear approximation of the function. The visualization tool presented here is closely related to \emph{regression graphics}~\cite{cook2009introduction,cook2009regression}, though we avoid the statistical interpretation of the model. This document will be part of a larger review paper on active subspace methods. \end{abstract} We can run a quick-and-dirty check to see if a given model has a dominant one-dimensional active subspace---i.e., a direction in the space of model inputs that explains the majority of the variability in the model. After we spell out the steps of this check, we'll describe what it's looking for. Here's what we need to get started: \begin{itemize} \item {\it A scalar quantity of interest}. The model should have a single output $f=f(\vx)$ that depends on the $m$ input parameters denoted $\vx=(x_1,\dots,x_m)$. For example, this might be the lift of an airfoil, the maximum power of a battery, or the average temperature over a surface. \item {\it A range for each input parameter}. Find a lower bound $x^l_i$ and an upper bound $x^u_i$ such that $x^l_i\leq x_i\leq x^u_i$ for each input parameter $x_i$. This may be a 10\% perturbation around some nominal value. \item {\it Enough time and computing power to run a few model evaluations}. To fit the linear model below, we need to evaluate the model $f$ roughly $3m$ to $4m$ times. \end{itemize} Given these, we run the following procedure. \noindent{\it A quick-and-dirty check for an active subspace} \begin{enumerate} \item Draw $N$ samples $\hat{\vx}_j$ uniformly from $[-1,1]^m$. \item Let $\vx_j=\frac{1}{2}\left[(\vx^u-\vx^l)\cdot\hat{\vx}_j+(\vx^u+\vx^l)\right]$, where $\vx^l$ is the vector of lower bounds and $\vx^u$ is the vector of upper bounds; the dot operation $(\cdot)$ is component-wise multiplication. \item Compute $f_j = f(\vx_j)$. \item Compute the coefficients of the linear regression model \[ f(\vx) \;\approx\; \hat{a}_0 + \hat{a}_1\hat{x}_1 + \cdots + \hat{a}_m\hat{x}_m \] with least-squares \[ \hat{\va}\;=\;\argmin{\vu}\; \|\hat{\mX}\vu - \vf\|_2^2, \] where \[ \hat{\mX} = \bmat{1 & \hat{\vx}_1 \\ \vdots & \vdots \\ 1 & \hat{\vx}_N},\quad \hat{\va} = \bmat{\hat{a}_0 \\ \vdots \\ \hat{a}_m},\quad \vf = \bmat{f_1\\ \vdots \\ f_N}. \] \item Let $\vw = \va'/\|\va'\|$ where $\va'=[\hat{a}_1,\dots,\hat{a}_m]^T$ is the vector of the last $m$ coefficients (i.e., the gradient) of the linear regression approximation. \item Produce a scatter plot of $\vw^T\hat{\vx}_j$ versus $f_j$. \end{enumerate} Here is a MATLAB implementation of this procedure, assuming there is a function \texttt{mymodel.m} that computes the model output $f$ given an input vector, an integer \texttt{m} equal to the number of input parameters, and vectors \texttt{xl} and \texttt{xu} containing the lower and upper bounds of the input parameters. 
\begin{verbatim}
N = 4*m;                            % number of model evaluations
Xhat = 2*rand(m,N) - 1;             % N samples drawn uniformly from [-1,1]^m
X = 0.5*(repmat(xu-xl,1,N).*Xhat + repmat(xu+xl,1,N));  % map samples to [xl,xu]
for j=1:N, f(j) = mymodel(X(:,j)); end                  % evaluate the model
ahat = [ones(N,1) Xhat'] \ f';      % least-squares coefficients of the linear model
w = ahat(2:m+1)/norm(ahat(2:m+1));  % normalized gradient of the linear model
plot(Xhat'*w,f,'o');                % scatter plot of w^T xhat_j versus f_j
\end{verbatim} To demonstrate, we apply this procedure to $f(\vx)=\exp(x_1+x_2)$ defined on the square $[-1,1]^2$. Figure \ref{fig:scatter} shows the scatter plot using $N=20$. Notice the apparent relationship between evaluations $\vw^T\vx_j$ and $f_j$. In fact, this particular $f$ varies entirely along a one-dimensional subspace defined by the vector $[1, 1]^T$. The scatter plot in Figure \ref{fig:scatter} is equivalent to sampling the three-dimensional surface plot---where the $z$ coordinate is $\exp(x_1+x_2)$---and rotating the plot such that the evaluations $f_j$ appear to depend on one variable instead of two. The rotated surface plot along with the $f_j$ is shown in Figure \ref{fig:surf}. This interpretation is valid more generally for functions of several variables. The check for an active subspace finds an angle from which to view the data set $\{\vx_j,f_j\}$ such that a trend emerges in the $f_j$ (if such a trend exists) as a function of a new variable $y=\vw^T\vx$. \begin{figure} \caption{The leftmost figure is the scatter plot produced by the quick check for the active subspace applied to the function $\exp(x_1+x_2)$. The rightmost figure is a rotated surface plot of the same function.} \label{fig:scatter} \label{fig:surf} \label{fig:exp} \end{figure} If, instead of the linear regression procedure, we choose the vector $\vw$ as the $i$th column of the $m\times m$ identity matrix, then we create a scatter plot of $f_j$ versus the $i$th component of $\vx_j$. Such scatter plots are described in Section 1.2.3 of \cite{saltelli2008global}. However, the vector produced by the regression is special. It is the normalized gradient of the linear regression approximation to $f(\vx)$. If a global, monotonic trend is present in $f$---and if the number of samples $N$ is sufficiently large---then the vector $\vw$ reveals the direction in the parameter space of this trend. Additionally, the components of $\vw$ measure the relative importance of the original parameters $\vx$. For the function $\exp(x_1+x_2)$, we expect the two components of $\vw$ to be roughly the same. This procedure is a subjective diagnostic test based on visualization. The viewer must use the scatter plot to judge if such a trend is present. A trend may be obvious, subtle, or absent. For complex functions, it is unlikely that one can know if such a trend exists without running the check. Fortunately, the check is pretty cheap! Once a trend is identified, there are several ways to exploit it. For example, if one wants to maximize or minimize $f(\vx)$, then the vector $\vw$ is a direction in the parameter space along which we expect $f$ will increase or decrease. In particular, when the domain of $\vx$ is an $m$-dimensional hypercube, then the vector $\vw$ can identify one of the $2^m$ corners where $f$ is likely to be the largest. If $m$ is large and $f$ is expensive, then this strategy is preferred to evaluating $f$ $2^m$ times. If one is attempting to invert $f$ to find parameters that match a given measurement, then this trend can help identify sets of parameters that map to the measurement. If one wishes to predict $f$ at some other values of $\vx$, then this trend can help build a coarse surrogate model on a low-dimensional subspace of the parameter space. The trend may also be used to approximate an integral of $f$ over the parameter space. 
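For concreteness, the $\exp(x_1+x_2)$ demonstration above can be reproduced with the short, self-contained script below. This is only an illustrative sketch: the inline definition of \texttt{mymodel}, the bounds, and the axis labels are choices made here for the example and are not part of the check itself.
\begin{verbatim}
% Illustrative driver for the check, applied to f(x) = exp(x1 + x2)
% on the square [-1,1]^2 with N = 20 samples (as in the demonstration).
mymodel = @(x) exp(x(1) + x(2));   % scalar quantity of interest
m  = 2;                            % number of input parameters
xl = [-1; -1];  xu = [1; 1];       % lower and upper bounds
N  = 20;

Xhat = 2*rand(m,N) - 1;                                  % samples in [-1,1]^m
X = 0.5*(repmat(xu-xl,1,N).*Xhat + repmat(xu+xl,1,N));   % map to [xl,xu]
f = zeros(1,N);
for j=1:N, f(j) = mymodel(X(:,j)); end

ahat = [ones(N,1) Xhat'] \ f';        % linear regression coefficients
w = ahat(2:m+1)/norm(ahat(2:m+1));    % normalized gradient of the fit
plot(Xhat'*w, f, 'o');                % scatter plot of w^T xhat_j versus f_j
xlabel('w^T x'); ylabel('f(x)');
\end{verbatim}
For this function the fitted direction should be close to $[1,1]^T/\sqrt{2}$, so the points in the scatter plot collapse onto a single curve.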
\noindent\textbf{Examples.} The check for the active subspace has worked surprisingly well for several complex systems. In particular, it has identified a direction in the parameter space that corresponds to a global monotonic trend in the output. Three representative systems and the corresponding plots are shown in Figures \ref{fig:blade}, \ref{fig:oneram}, and \ref{fig:hyshot} with a brief description in the captions. More details on these systems can be found in \cite{constantine2013active}, \cite{Lukaczyk2014}, and \cite{icossar2013}, respectively. We are actively seeking more examples. \begin{figure} \caption{We examined a model for heat transfer in a turbine blade given a parameterized model for the heat flux boundary condition representing unknown transition to turbulence. There are 250 parameters characterizing a Karhunen-Loeve model of the heat flux, and the quantity of interest is the average temperature over the trailing edge of the blade. The leftmost figure shows the domain and a representative temperature distribution. The rightmost figure plots 750 samples of the quantity of interest against the projected coordinate. The strong appearance of the linear relationship verifies the quality of the subspace approximation.} \label{fig:blade} \end{figure} \begin{figure} \caption{We applied the active subspace check to quantify the margins of safety in a multiphysics model of a hypersonic scramjet vehicle. There are seven parameters characterizing the inflow boundary conditions of a reacting, compressible channel flow. The quantity of interest is the pressure at the exit of the combustor. The left figure shows a representative compressible flow computation. The right figure shows the scatter plot of 14 samples of the exit pressure.} \label{fig:hyshot} \end{figure} \begin{figure} \caption{We applied the active subspace check to a design optimization problem in transonic wing design. There are 60 design variables, and the quantity of interest is the drag coefficient computed via computational fluid dynamics. The left figure shows a representative pressure field for a given design. The right figure shows 70 evaluations of the drag coefficient plotted against the reduced coordinate of the one-dimensional active subspace from the 60-dimensional design space. The nearly linear relationship implies that optimization of the drag coefficient is much easier than initially thought.} \label{fig:oneram} \end{figure} There are two common features of these complex models that may explain the perceived trends: \begin{enumerate} \item When a scientist informally describes the effect of a parameter on a model, it often sounds like the output is monotonic with respect to changes in the parameter; more of $x_3$ leads to less $f$. This may suggest a global monotonic trend in the model as a function of the parameter. \item A model is often given a nominal parameter value corresponding to a theoretical or measured physical case. Back-of-the-envelope estimates of the uncertainty in the input parameters often take the form of a 5-10\% perturbation about the nominal value. Despite the complexity of the system, this type of perturbation often results in little change in the model. More importantly, it results in a small rate of change. In such cases, the linear model represents the global trends well within the range of the input perturbations; a small numerical sketch of this point follows this list. \end{enumerate} 
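As a rough, hypothetical illustration of the second point, the snippet below perturbs the inputs of the demonstration function $\exp(x_1+x_2)$ by $\pm10\%$ about a nominal value and measures how well the fitted linear model reproduces the sampled outputs. The nominal value and the size of the perturbation are arbitrary choices for illustration and are not taken from any of the applications above.
\begin{verbatim}
% Hypothetical sketch for point (2): under a +/-10% perturbation about a
% nominal input, the nonlinear model exp(x1+x2) is captured well by a linear fit.
mymodel = @(x) exp(x(1) + x(2));
m = 2;  x0 = [1; 1];               % nominal input (arbitrary choice)
xl = 0.9*x0;  xu = 1.1*x0;         % 10% perturbation range
N = 4*m;

Xhat = 2*rand(m,N) - 1;
X = 0.5*(repmat(xu-xl,1,N).*Xhat + repmat(xu+xl,1,N));
f = zeros(1,N);
for j=1:N, f(j) = mymodel(X(:,j)); end

ahat = [ones(N,1) Xhat'] \ f';        % linear fit in the normalized variables
fhat = [ones(N,1) Xhat'] * ahat;      % linear-model predictions at the samples
max_rel_err = max(abs(fhat - f')./abs(f'))   % a few percent at most over this range
\end{verbatim}
Over this small range the linear model reproduces the sampled outputs to within a few percent, which is the sense in which the linear model represents the trend well in point (2).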
These explanations are based on experience with several models. Of course, there are many complex systems that do not behave this way. In such cases, the check for the active subspace may lead to nothing useful. However, given the low cost of the check and the potential insights into the model, we think it is well worth the effort. \bibliographystyle{siam} \bibliography{asm-primer} \end{document}
\begin{document} \title[RBSDEs when the obstacle is not RC in a general filtration]{Reflected BSDEs when the obstacle is not right-continuous in a general filtration} \author{Baadi Brahim} \author{Ouknine Youssef} \address{Ibn Tofa\"{\i}l University,\newline Department of mathematics, faculty of sciences,\newline BP 133, K\'{e}nitra, Morocco } \email{[email protected]} \address{Cadi Ayyad University,\newline Av. Abdelkrim Khattabi \newline 40000, Gu\'{e}liz Marrakesh, Morocco, and\newline Hassan II Academy of Sciences and Technology, Rabat, Morocco.} \email{[email protected]} \subjclass[2000]{60K35, 82B43.} \keywords{ Backward stochastic differential equation, Reflected backward stochastic differential equation, General filtration, Strong optional supermartingale, Mertens decomposition.} \begin{abstract} We prove existence and uniqueness of the solution to a reflected backward stochastic differential equation (RBSDE) with a lower obstacle which is assumed to be right upper-semicontinuous but not necessarily right-continuous, in a filtration that supports a Brownian motion $W$ and an independent Poisson random measure $\pi$. The result is established by using some tools from the general theory of processes, such as the Mertens decomposition of optional strong (but not necessarily right-continuous) supermartingales, some tools from optimal stopping theory, as well as an appropriate generalization of It\^{o}'s formula due to Gal'chouk and Lenglart. Two applications, to dynamic risk measures and to optimal stopping, are given. \end{abstract} \maketitle \section{Introduction} The notion of Backward Stochastic Differential Equations (BSDEs in short) was introduced by \citet{Bismut(1973), Bismut(1976)} in the case of a linear driver. The nonlinear case was developed by \citet{PP1990, PP1992}. BSDEs have found a number of applications in finance, such as the pricing and hedging of European options and recursive utilities (see for instance \citet{ELKarouiPeng1997}).\\ Reflected Backward Stochastic Differential Equations (RBSDEs in short) have been introduced by \citet{ELKaroui1997} and were useful, for example, in the study of American options. The difference between the two types of equations (BSDEs and RBSDEs) is that the second can be seen as a variant of the first in which the first component of the solution is constrained to remain greater than or equal to a given process called obstacle or barrier, and there is an additional nondecreasing predictable process which keeps the first component of the solution above the obstacle. The work of \citet{ELKaroui1997} considers the case of a Brownian filtration and a continuous obstacle. Since then, there have been several extensions of this work to the case of a discontinuous obstacle; see, for example, \citet{Hamadene2002}, \citet{HamadeneOknine2003, HamadeneOknine2011}, \citet{Essaky2008} and \citet{Crepeymatoussi}.\\ The difference between these extensions and the paper of \citet{Ouknine2015} lies in the right-continuity of the obstacle. In that work, the authors present a further extension of the theory of RBSDEs to the case where the obstacle is not necessarily right-continuous, in a Brownian filtration. In the present paper, we generalize the existence and uniqueness result for the RBSDE solution of \citet{Ouknine2015} to the case of a larger stochastic basis, i.e. 
in a filtration that supports a Brownian motion $W$ and an independent Poisson random measure $\pi$, we establish existence and uniqueness of solutions, in appropriate Banach spaces, to the following RBSDE: \begin{multline} Y_{\tau} = \xi_{T}+\int_{\tau}^{T}f(s,Y_{s},Z_{s},\psi_{s})ds-\int_{\tau}^{T}Z_{s}dW_{s}-\int_{\tau}^{T}\int_{\mathcal{U}}\psi_{s}(u)\widetilde{\pi}(du,ds) -\int_{\tau}^{T}dM_{s}\\ +A_{T}-A_{\tau}+C_{T-}-C_{\tau-}\,\,\,\ \text{for all}\,\,\,\ \tau\in\mathcal{T}_{0,T} \label{mon0} \end{multline} The solution is given by $(Y,Z,\psi,M,A,C)$, where $M$ is an orthogonal local martingale. We assume that the function $f$ is Lipschitz with respect to $y$, $z$ and $\psi$. To prove our results we use tools from the general theory of processes such as Mertens decomposition of strong optional (but not necessarily right-continuous) supermartingales (generalizing Doob-Meyer decomposition) and some tools from optimal stopping theory, as well as a generalization of It\^{o}'s formula to the case of strong optional (but not necessarily right-continuous) semimartingales due to \citet{Galchouk1981} and \citet{Lenglart1980}. We recover these natural differential equations by studying the limit of a system of reflected BSDEs $$\left\{ \begin{array}{ll} Y^{n}_{\tau}=\xi_{T}+\int_{\tau}^{T}f(s,Y^{n}_{s},Z^{n}_{s},\psi^{n}_{s})ds+K^{n}_{T}-K^{n}_{\tau} -\int_{\tau}^{T}Z^{n}_{s}dW_{s}-\int_{\tau}^{T}\int_{\mathcal{U}}\psi^{n}_{s}(u)\widetilde{\pi}(du,ds)\\ -\int_{\tau}^{T}dM^{n}_{s} \\ Y^{n}_{\tau} \geq \xi_{\tau}& \hbox{} \end{array} \right. $$ where $K^{n}_{t}=n\int_{0}^{t}(Y^{n}_{s}-\xi_{s})^{-}ds$. Essaky proved (in \citet{Essaky2008}), by a monotonic limit theorem, that $(Y^{n},Z^{n},K^{n},\psi^{n},M^{n})$ has, in some sense, a limit $(Y,Z,K,\psi,M)$ which satisfies a reflected BSDE with $\xi$ a c\`{a}dl\`{a}g barrier (see also \citet{Peng1999} for the case of a filtration generated only by a Brownian motion). It is well known that if $\xi$ is a c\`{a}dl\`{a}g barrier then $Y$ is also a c\`{a}dl\`{a}g process (Theorem 3.1 in \citet{Essaky2008} for a filtration generated by a Brownian motion and a Poisson point process, and Lemma 2.2 in \citet{Peng1999} for the Brownian filtration). But if the barrier $\xi$ is only optional, then the limit $Y$ of $Y^{n}$ is an $\mathcal{E}^{f}$-supermartingale, and hence $Y$ has left and right limits (see \citet{DellacherieMeyer1980}, Theorem 4 page 408). In this sense, we know that $(Y^{n},Z^{n},\psi^{n},M^{n})$ converges to $(Y,Z,\psi,M)$ and the limit $K$ of $K^{n}_{t}=n\int_{0}^{t}(Y^{n}_{s}-\xi_{s})^{-}ds$ is a l\`{a}dl\`{a}g process that can be written as $K=A+C_{-}$, where $A$ is an increasing c\`{a}dl\`{a}g predictable process satisfying $A_{0}=0$ and $E(A_{T})<\infty$, and $C$ is an increasing c\`{a}dl\`{a}g optional process with $E(C_{T})<\infty$. The paper is organized as follows: in the second section, we give the mathematical setting (preliminaries, definitions and properties). In subsection 2.1 we recall the change of variables formula for optional semimartingales which are not necessarily right continuous (Gal'chouk-Lenglart decomposition for strong optional semimartingales). In the third section, we define our RBSDE and we prove existence and uniqueness of the solution in a general filtration. In the last section, we give two applications of reflected BSDEs where the right-continuity of the obstacle is not necessarily used: applications to dynamic risk measures and to optimal stopping. \section{Preliminaries} Let $T>0$ be a fixed positive real number. 
Let us consider a filtered probability space $(\Omega, \mathcal{F},\mathbb{P},\mathbb{F}=\{\mathcal{F}_{t}, t\geq 0\})$. The filtration is assumed to be complete, right continuous and quasi-left continuous, which means that for every sequence ($\tau_n$) of $\mathbb{F}$-stopping times such that $\tau_n\nearrow \tau$ for some stopping time $\tau$ we have $\bigvee_{n\in\mathbb{Z}_{+}}\mathcal{F}_{\tau_n}=\mathcal{F}_{\tau}$. We assume that $(\Omega, \mathcal{F},\mathbb{P},\mathbb{F}=\{\mathcal{F}_{t}, t\geq 0\})$ supports a $k$-dimensional Brownian motion $W$ and a Poisson random measure $\pi$ with intensity $\mu(du)dt$ on the space $\mathcal{U}\subset\mathbb{R}^{m}\setminus\{0\}$. The measure $\mu$ is $\sigma$-finite on $\mathcal{U}$ such that \begin{equation} \int_{\mathcal{U}}(1\wedge|u|^{2})\mu(du)<+\infty \label{moneq2} \end{equation} The compensated Poisson random measure $\pi$: $\widetilde{\pi}(du,dt)=\pi(du,dt)-\mu(du)dt$ is a martingale w.r.t. the filtration $\mathbb{F}$.\\ In this paper for a given $T>0$, we denote: \begin{itemize} \item $\mathcal{T}_{t,T}$ is the set of all stopping times $\tau$ such that $\mathbb{P}(t\leq \tau \leq T)=1$. More generally, for a given stopping time $\nu$ in $\mathcal{T}_{0,T}$, we denote by $\mathcal{T}_{\nu,T}$ the set of all stopping times $\tau$ such that $\mathbb{P}(\nu \leq \tau \leq T)=1$. \item $\mathcal{P}$ is the predictable $\sigma$-field on $\Omega\times [0,T]$ and $$\widetilde{\mathcal{P}}=\mathcal{P}\otimes\mathcal{B}(\mathcal{U})$$ where $\mathcal{B}(\mathcal{U})$ is the Borelian $\sigma$-field on $\mathcal{U}$. \item $L^{2}(\mathcal{F}_{T})$ is the set of random variables which are $\mathcal{F}_{T}$-measurable and square-integrable. \item On $\widetilde{\Omega}=\Omega\times[0,T]\times \mathcal{U}$, a function that is $\widetilde{\mathcal{P}}$-measurable, is called predictable. \item $G_{loc}(\pi)$ is the set of $\widetilde{\mathcal{P}}$-measurable functions $\psi$ on $\widetilde{\Omega}$ such that for any $t\geq 0$ a.s. $$\int_{0}^{t}\int_{\mathcal{U}}(|\psi_{s}(u)|^{2}\wedge|\psi_{s}(u)|)\mu(du)<+\infty.$$ \item $\mathds{H}^{2,T}$ is the set of real-valued predictable processes $\phi$ such that $$\|\phi\|_{\mathds{H}^{2,T}}^{2}=E\Bigl[\int_{0}^{T}|\phi_{t}|^{2}dt\Bigr]<\infty.$$ \item $\mathcal{M}_{loc}$ is the set of c\`{a}dl\`{a}g local martingales orthogonal to $W$ and $\widetilde{\pi}$: if $M\in\mathcal{M}_{loc}$ then $$[M,W^{i}]_{t}=0, \,\,\,\ 1\leq i\leq k, \,\,\,\,\ [M,\widetilde{\pi}(A)]_{t}=0$$ for all $A\in\mathcal{B}(\mathcal{U})$. \item $\mathcal{M}$ is the subspace of $\mathcal{M}_{loc}$ of martingales. \end{itemize} As explained above, the filtration $\mathbb{F}$ supports the Brownian motion $W$ and the Poisson random measure $\pi$. 
We have the following lemma, which can be found in \citet{JacodShiryaev2003} (Chapter III, Lemma 4.24): \begin{lemma} $\label{lemme1}$ Every local martingale $N$ has a decomposition \begin{equation} N_{t}=\int_{0}^{t}Z_{s}dW_{s}+\int_{0}^{t}\int_{\mathcal{U}}\psi_{s}(u)\widetilde{\pi}(du,ds)+M_{t} \label{moneq3} \end{equation} where $M\in\mathcal{M}_{loc}$, $Z\in\mathds{H}^{2,T}$ and $\psi\in G_{loc}(\pi).$ \end{lemma} Now to define the solution of our reflected backward stochastic differential equation (RBSDE), let us introduce the following spaces: \begin{itemize} \item $\mathds{S}^{2,T}$ is the set of real-valued optional processes $\phi$ such that: $$\||\phi\||_{\mathds{S}^{2,T}}^{2}=E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\phi_{\tau}|^{2}\Bigr]<\infty.$$ \item $\mathbb{M}^{2}$ is the subspace of $\mathcal{M}$ of all martingales such that: $$\|M\|_{\mathbb{M}^{2}}^{2}=E([M,M]_{T})=E([M]_{T})<+\infty.$$ \item $\mathbb{L}^{2}_{\pi}(0,T)=\mathbb{L}^{2}_{\pi}(\Omega\times(0,T)\times\mathcal{U})$ is the set of all processes $\psi\in G_{loc}(\pi)$ such that: $$\|\psi\|^{2}_{\mathbb{L}^{2}_{\pi}}=E\Bigl[\int_{0}^{T}\int_{\mathcal{U}}|\psi_{s}(u)|^{2}\mu(du)ds\Bigr]<+\infty$$ \item $\mathbb{L}^{2}_{\mu}(0,T)=\mathbb{L}^{2}(\mathcal{U},\mu;\mathbb{R}^{d})$ is the set of all measurable functions $\psi:\mathcal{U}\longrightarrow \mathbb{R}^{d}$ such that: $$\|\psi\|_{\mathbb{L}^{2}_{\mu}}^{2}=\int_{\mathcal{U}}|\psi(u)|^{2}\mu(du)<+\infty.$$ \item $\mathcal{E}^{2}(0,T)=\mathds{S}^{2,T}\times\mathds{H}^{2,T}\times\mathbb{L}^{2}_{\pi}(0,T) \times\mathbb{M}^{2}\times\mathds{S}^{2,T}\times\mathds{S}^{2,T}.$ \end{itemize} The random variable $\xi$ is $\mathcal{F}_{T}$-measurable with values in $\mathbb{R}^{d}$ $(d\geq 1)$ and $f:\Omega\times[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d\times k}\times\mathbb{L}^{2}_{\mu}\longrightarrow\mathbb{R}^{d}$ is a random function measurable with respect to $Prog\times\mathcal{B}(\mathbb{R}^{d})\times\mathcal{B}(\mathbb{R}^{d\times k})\times\mathcal{B}(\mathbb{L}^{2}_{\mu})$ where $Prog$ denotes the $\sigma$-field of progressive subsets of $\Omega\times[0,T]$.\\ In the following we denote the spaces $\mathds{H}^{2,T}$ and $\mathds{S}^{2,T}$ by $\mathds{H}^{2}$ and $\mathds{S}^{2}$, as well as the norms $\parallel.\parallel_{\mathds{H}^{2,T}}$ and $\parallel\mid.\parallel\mid_{\mathds{S}^{2,T}}$ by $\parallel.\parallel_{\mathds{H}^{2}}$ and $\parallel\mid.\parallel\mid_{\mathds{S}^{2}}$. \begin{definition} A function $f$ is said to be a driver if: \begin{itemize} \item $f:\Omega\times [0,T]\times \mathbb{R}^{2}\times\mathbb{L}^{2}_{\mu}\longrightarrow \mathbb{R}$ \\ $(\omega,t,y,z,\psi)\longmapsto f(\omega,t,y,z,\psi)$ is $\mathcal{P}\otimes\mathcal{B}(\mathds{R}^{2})\otimes\mathcal{B}(\mathbb{L}^{2}_{\mu})$-measurable. \item $E[\int_{0}^{T}\mid f(t,0,0,0)\mid^{2}dt]<\infty$. \end{itemize} A driver $f$ is called a Lipschitz driver if moreover there exists a constant $K\geq0$ such that $\mathbb{P}\otimes dt$-a.s., for each $(y_1,z_1,\psi_1)$ and $(y_2,z_2,\psi_2)$ \begin{equation} \Bigl|f(\omega,t,y_1,z_1,\psi_1)-f(\omega,t,y_2,z_2,\psi_2)\Bigr|\leq K\Bigl(|y_1-y_2|+|z_1-z_2|+\|\psi_1-\psi_2\|_{\mathbb{L}^{2}_{\mu}}\Bigr) \label{moneq0} \end{equation} \end{definition} For a l\`{a}dl\`{a}g process $\phi$, we denote by $\phi_{t+}$ and $\phi_{t-}$ the right-hand and left-hand limits of $\phi$ at $t$. 
We denote by $\Delta_{+}\phi_{t}=\phi_{t+}-\phi_{t}$ the size of the right jump of $\phi$ at $t$, and by $\Delta\phi_{t}=\phi_{t}-\phi_{t-}$ the size of the left jump of $\phi$ at $t$. We give a useful property of the space $\mathds{S}^{2}$: \begin{proposition} The space $\mathds{S}^{2}$ endowed with the norm $\parallel\mid.\parallel\mid_{\mathds{S}^{2}}$ is a Banach space. \end{proposition} \begin{proof} The proof is given in \citet{Ouknine2015} (Proposition 2.1). \end{proof} The following proposition can be found in \citet{AshkanNikeghbali2006} (Theorem 3.2.). \begin{proposition} Let $(X_t)$ and $(Y_t)$ be two optional processes. If for every finite stopping time $\tau$ one has, $X_{\tau}=Y_{\tau}$, then the processes $(X_t)$ and $(Y_t)$ are indistinguishable. \end{proposition} Let $\beta>0$. We will also use the following notation: \\ For $\phi\in\mathds{H}^{2}$, $\|\phi\|_{\beta}^{2}:=E[\int_{0}^{T}e^{\beta s}\phi^{2}_{s}ds]$. We note that on the space $\mathds{H}^{2}$ the norms $\|.\|_{\beta}$ and $\|.\|_{\mathds{H}^{2}}$ are equivalent.\\ For $\phi\in\mathds{S}^{2}$, we define $\||\phi\||_{\beta}^{2}:=E[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}|\phi_{\tau}|^{2}]$. We note that $\||.|\|_{\beta}$ is a norm on $\mathds{S}^{2}$ equivalent to the norm $\||.|\|_{\mathds{S}^{2}}$.\\ For $\phi\in\mathbb{L}^{2}_{\pi}$, the defined norm $\|\phi\|_{\mathbb{L}^{2,\beta}_{\pi}}=\sqrt{E[\int_{0}^{T}e^{\beta s}\int_{\mathcal{U}}|\phi_{s}(u)|^{2}\mu(du)ds]}$ is equivalent to the norm $\|\phi\|_{\mathbb{L}^{2}_{\pi}}$ on $\mathbb{L}^{2}_{\pi}$.\\ For $M\in\mathbb{M}^{2}$, we have the equivalence between $\|M\|_{\mathbb{M}_{\beta}^{2}}=\sqrt{E[\int_{0}^{T}e^{\beta s}d[M]_{s}]}$ and $\|M\|_{\mathbb{M}^{2}}$ on $\mathbb{M}^{2}$. \subsection{Gal'chouk-Lenglart decomposition for strong optional semimartingales.} In this section, we recall the change of variables formula for optional semimartingales which are not necessarily cad. The result can be seen as a generalization of the classical It\^{o} formula and can be found in (\citet{Galchouk1981}, (Theorem 8.2)), (\citet{Lenglart1980},(Section 3, page 538)). We recall the result in our framework in which the underlying filtered probability space satisfies the usual conditions. \begin{theorem} \label{galchouklenglart} \textbf{(Gal'chouk-Lenglart)} Let $n\in\mathbb{Z}_{+}$. Let $X$ be an n-dimensional optional semimartingale, i.e. $X = (X_1,... ,X_n)$ is an n-dimensional optional process with decomposition $X_{t}^{k}=X_{0}^{k}+N_{t}^{k}+A_{t}^{k}+B_{t}^{k}$, for all $k\in\{1,...,n\}$ where $N_{t}^{k}$ is a (c\`{a}dl\`{a}g) local martingale, $A_{t}^{k}$ is a right-continuous process of finite variation such that $A_{0}=0$ and $B_{t}^{k}$ is a left-continuous process of finite variation which is purely discontinuous and such that $B_{0-}=0$. Let $F$ be a twice continuously differentiable function on $\mathbb{R}^{n}$. Then, almost surely, for all $t\geq0$, \begin{eqnarray*} F(X_{t}) &=& F(X_{0})+\sum_{k=1}^{n}\int_{]0,t]}D^{k}F(X_{s-})d(N^{k}+A^{k})_{s} \\ &+& \frac{1}{2}\sum_{k,l=1}^{n}\int_{]0,t]}D^{k}D^{l}F(X_{s-})d<N^{kc},N^{lc}>_{s}\\ &+& \sum_{0<s\leq t}\Bigl[F(X_{s})-F(X_{s-})-\sum_{k=1}^{n}D^{k}F(X_{s-})\Delta X^{k}_{s}\Bigr]\\ &+& \sum_{k=1}^{n}\int_{[0,t[}D^{k}F(X_{s})d(B^{k})_{s+} \\ &+& \sum_{0\leq s<t}\Bigl[F(X_{s+})-F(X_{s})-\sum_{k=1}^{n}D^{k}F(X_{s})\Delta_{+}X^{k}_{s}\Bigr] \end{eqnarray*} where $D^k$ denotes the differentiation operator with respect to the $k$-th coordinate, and $N^{kc}$ denotes the continuous part of $N^k$. 
\end{theorem} \begin{corollary} \label{corollaire1} Let $Y$ be a one-dimensional optional semimartingale with decomposition $Y_t=Y_0+N_t+A_t+B_t$, where $N$, $A$ and $B$ are as in the above theorem. Let $\beta >0$. Then, almost surely, for all $t$ in $[0,T]$, \begin{eqnarray*} e^{\beta t}Y_{t}^{2} &=& Y_{0}^{2}+\int_{0}^{t}\beta e^{\beta s}Y_{s}^{2}ds+2\int_{0}^{t}e^{\beta s}Y_{s-}d(A+N)_{s}\\ &+& \int_{0}^{t} e^{\beta s}d<N^{c},N^{c}>_{s}\\ &+& \sum_{0<s\leq t}e^{\beta s}(Y_{s}-Y_{s-})^{2}+2\int_{0}^{t}e^{\beta s}Y_{s}d(B)_{s+}+\sum_{0\leq s<t}e^{\beta s}(Y_{s+}-Y_{s})^{2} \end{eqnarray*} \end{corollary} \begin{proof} For the corollary demonstration, it suffices to apply the change of variables formula from Theorem$~\ref{galchouklenglart}$ with $n=2$, $F(x,y)=xy^{2}$, $X_{t}^{1}= e^{\beta t}$ and $X_{t}^{2}=Y_{t}$. Indeed, by applying Theorem$~\ref{galchouklenglart}$ and by noting that the local martingale part and the purely discontinuous part of $X^{1}$ are both equal to $0$, we obtain \begin{eqnarray*} e^{\beta t}Y_{t}^{2} &=& Y_{0}^{2}+\int_{0}^{t}\beta e^{\beta s}Y_{s}^{2}ds+2\int_{0}^{t}e^{\beta s}Y_{s-}d(A+N)_{s}\\ &+& \int_{0}^{t} e^{\beta s}d<N^{c},N^{c}>_{s}\\ &+& \sum_{0<s\leq t}e^{\beta s}(Y_{s}^{2}-Y_{s-}^{2}-2Y_{s-}(Y_{s}-Y_{s-}))+2\int_{0}^{t}e^{\beta s}Y_{s}d(B)_{s+}\\ &+& \sum_{0\leq s<t}e^{\beta s}(Y_{s+}^{2}-Y_{s}^{2}-2Y_{s}(Y_{s+}-Y_{s})) \end{eqnarray*} The desired expression follows as $(Y_{s}-Y_{s-})^{2}=Y_{s}^{2}-Y_{s-}^{2}-2Y_{s-}(Y_{s}-Y_{s-})$ and $(Y_{s+}-Y_{s})^{2}=Y_{s+}^{2}-Y_{s}^{2}-2Y_{s}(Y_{s+}-Y_{s})$. \end{proof} \section{RBSDEs whose obstacles are not c\`{a}dl\`{a}g in a general filtration.} Let $T>0$ be a fixed terminal time. Let $f$ be a driver. Let $\xi=(\xi_t)_{t\in[0,T]}$ be a left-limited process in $\mathds{S}^{2}$. We suppose moreover that the process $\xi$ is right upper-semicontinuous (r.u.s.c. for short). A process $\xi$ satisfying the previous properties will be called a barrier, or an obstacle. \begin{definition} \label{definition1} A process $(Y,Z,\psi,M,A,C)$ is said to be a solution to the reflected BSDE with parameters $(f,\xi)$, where $f$ is a driver and $\xi$ is an obstacle, if $(Y,Z,\psi,M,A,C) \in\mathcal{E}^{2}(0,T)$ and \begin{multline} Y_{\tau} = \xi_{T}+\int_{\tau}^{T}f(s,Y_{s},Z_{s},\psi_{s})ds-\int_{\tau}^{T}Z_{s}dW_{s}-\int_{\tau}^{T}\int_{\mathcal{U}}\psi_{s}(u)\widetilde{\pi}(du,ds) -\int_{\tau}^{T}dM_{s}\\ +A_{T}-A_{\tau}+C_{T-}-C_{\tau-} \,\,\,\ \text{for all}\,\,\,\ \tau\in\mathcal{T}_{0,T} \label{moneq4} \end{multline} \begin{equation} Y\geq \xi \,\,\ \text{(up to an evanescent set)} \,\,\ \text{a.s.} \label{moneq5} \end{equation} \begin{equation} M\in\mathcal{M}_{loc} \,\,\ \text{and} \,\,\ M_0=0. \label{martingale} \end{equation} In the above, the process $A$ is a nondecreasing right-continuous predictable process with $A_0=0$, $E(A_T)<\infty$ such that: \begin{equation} \int_{0}^{T}1_{ \{Y_{t}>\xi_{t}\}}dA_{t}^{c}=0 \,\,\ a.s.\,\,\ and\,\,\ (Y_{\tau-}- \xi_{\tau-})(A_{\tau}^{d}-A_{\tau-}^{d})=0 \,\,\ a.s. \,\,\ \forall (predictable) \tau\in\mathcal{T}_{0,T} \label{moneq6} \end{equation} And the process $C$ is a nondecreasing right-continuous adapted purely discontinuous process with $C_{0-}=0$, $E(C_T)<\infty$ such that: \begin{equation} (Y_{\tau}-\xi_{\tau})(C_{\tau}-C_{\tau-})=0 \,\,\ a.s.\,\,\ \forall \tau\in\mathcal{T}_{0,T} \label{moneq7} \end{equation} Here $A^{c}$ denotes the continuous part of the nondecreasing process $A$ and $A^{d}$ its discontinuous part. 
\end{definition} \begin{remark} We note that a process $(Y,Z,\psi,M,A,C) \in\mathcal{E}^{2}(0,T)$ satisfies equation $~\eqref{moneq4}$ in the above definition if and only if $\forall t\in[0,T]$, a.s. \begin{eqnarray*} Y_{t} = \xi_{T}+\int_{t}^{T}f(s,Y_{s},Z_{s},\psi_{s})ds-\int_{t}^{T}Z_{s}dW_{s}&-&\int_{t}^{T}\int_{\mathcal{U}}\psi_{s}(u)\widetilde{\pi}(du,ds) - \int_{t}^{T}dM_{s} \\ &+& A_{T}-A_{t}+C_{T-}-C_{t-} \end{eqnarray*} \end{remark} \begin{remark} If $(Y,Z,\psi,M,A,C) \in\mathcal{E}^{2}(0,T)$ satisfies the above definition, then the process $Y$ has left and right limits. Moreover, the process $(Y_{t}+\int_{0}^{t}f(s,Y_{s},Z_{s},\psi_{s})ds)_{t\in[0,T]}$ is a strong supermartingale. \end{remark} The proof of the existence and uniqueness of the reflected BSDE solution defined above is based on a useful result (the following lemma) for the case where $f$ depends only on $s$ and $\omega$ (i.e. $f(s,y,z,\psi)=f(s,\omega)$), on Corollary$~\ref{corollaire1}$ and on Lemma$~\ref{lemme1}$. For this purpose, we first prove a lemma which will be used in the sequel. \begin{lemma} \label{lemme2} Let $(Y^{1},Z^{1},\psi^{1},M^{1},A^{1},C^{1})\in\mathcal{E}^{2}(0,T)$ (resp. $(Y^{2},Z^{2},\psi^{2},M^{2},A^{2},C^{2})\in\mathcal{E}^{2}(0,T)$) be a solution to the RBSDE associated with driver $f^{1}(s,\omega)$ (resp. $f^{2}(s,\omega)$) and with obstacle $\xi$. There exists $c>0$ such that for all $\epsilon>0$, for all $\beta>\frac{1}{\epsilon^{2}}$ we have \begin{equation} \|Z^{1}-Z^{2}\|^{2}_{\beta}+\|M^{1}-M^{2}\|_{\mathbb{M}_{\beta}^{2}}^{2} +\|\psi^{1}-\psi^{2}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}\leq \epsilon^{2}\|f^{1}-f^{2}\|_{\beta}^{2} \label{unicite} \end{equation} and \begin{equation} \||Y^{1}-Y^{2}|\|_{\beta}^{2}\leq 4\epsilon^{2}(1+4c^{2})\|f^{1}-f^{2}\|_{\beta}^{2} \label{moneq8} \end{equation} \end{lemma} \begin{proof} Let $\beta >0$ and $\epsilon>0$ be such that $\beta\geq \frac{1}{\epsilon^{2}}$. We set $\widetilde{Y}:=Y^{1}-Y^{2}$, $\widetilde{Z}:=Z^{1}-Z^{2}$, $\widetilde{\psi}:=\psi^{1}-\psi^{2}$, $\widetilde{M}:=M^{1}-M^{2}$, $\widetilde{A}:=A^{1}-A^{2}$, $\widetilde{C}:=C^{1}-C^{2}$ and $\widetilde{f}(\omega,t):=f^{1}(\omega,t)-f^{2}(\omega,t)$. We note that $\widetilde{Y}_{T}:=\xi_{T}-\xi_{T}=0$. Moreover, \begin{multline} \widetilde{Y}_{\tau}=\int_{\tau}^{T}\widetilde{f}(s)ds-\int_{\tau}^{T}\widetilde{Z}_{s}dW_{s} -\int_{\tau}^{T}\int_{\mathcal{U}}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)-\widetilde{M}_{T}+\widetilde{M}_{\tau} +\widetilde{A}_{T}-\widetilde{A}_{\tau}\\+\widetilde{C}_{T-}-\widetilde{C}_{\tau-} \,\,\ a.s.\,\,\ \forall \tau\in\mathcal{T}_{0,T}, \label{moneq9} \end{multline} i.e. $$\widetilde{Y}_{\tau}=\widetilde{Y}_{0}-\int_{0}^{\tau}\widetilde{f}(s)ds+ \int_{0}^{\tau}\widetilde{Z}_{s}dW_{s} +\int_{0}^{\tau}\int_{\mathcal{U}}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)+\widetilde{M}_{\tau}-\widetilde{A}_{\tau}-\widetilde{C}_{\tau-}\,\,\ a.s.\,\,\ \forall \tau\in\mathcal{T}_{0,T},$$ since $\widetilde{M}_{0}=M_{0}^{1}-M_{0}^{2}=0$, $\widetilde{A}_{0}=A_{0}^{1}-A_{0}^{2}=0$ and $\widetilde{C}_{0-}=C_{0-}^{1}-C_{0-}^{2}=0$. 
Thus we see that $\widetilde{Y}$ is an optional (strong) semimartingale with decomposition $\widetilde{Y}_{t}=\widetilde{Y}_{0}+N_{t}+A_{t}+B_{t}$, where $N_{t}=\int_{0}^{t}\widetilde{Z}_{s}dW_{s}+\int_{0}^{t}\int_{\mathcal{U}}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)+\widetilde{M}_{t}$, $A_{t}=-\int_{0}^{t}\widetilde{f}(s)ds-\widetilde{A}_{t}$ and $B_{t}=-\widetilde{C}_{t-}$ (the notation is that of (\ref{galchouklenglart})), Applying Corollary$~\ref{corollaire1}$ to $\widetilde{Y}$ gives: almost surely, for all $t\in[0,T]$, \begin{eqnarray*} e^{\beta t}\widetilde{Y}_{t}^{2} &=& -\int_{0}^{t}\beta e^{\beta s}\widetilde{Y}_{s}^{2}ds+2\int_{0}^{t}e^{\beta s}\widetilde{Y}_{s-}d(A+N)_{s}\\ &-& \int_{0}^{t} e^{\beta s}d<N^{c},N^{c}>_{s} \\ &-& \sum_{0< s\leq t}e^{\beta s}(\widetilde{Y}_{s}-\widetilde{Y}_{s-})^{2}-\int_{0}^{t}2e^{\beta s}\widetilde{Y}_{s}d(B)_{s+}-\sum_{0\leq s<t}e^{\beta s}(\widetilde{Y}_{s+}-\widetilde{Y}_{s})^{2} \end{eqnarray*} Using the expressions of $N$, $A$ and $B$ and the fact that $\widetilde{Y}_{T}=0$, we get: almost surely, for all $t\in[0,T]$, \begin{eqnarray*} e^{\beta t}\widetilde{Y}_{t}^{2}+\int_{t}^{T} e^{\beta s}d<N^{c},N^{c}>_{s} &=& -\int_{t}^{T}\beta e^{\beta s}\widetilde{Y}_{s}^{2}ds+2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{f}(s)ds \\ &+& 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{A}- 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}\\ &-& 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\int_{\mathcal{U}}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)- 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{M}_{s} \\ &-& \sum_{t<s\leq T}e^{\beta s}(\widetilde{Y}_{s}-\widetilde{Y}_{s-})^{2}+\int_{t}^{T}2e^{\beta s}\widetilde{Y}_{s}d(\widetilde{C})_{s}\\ &-&\sum_{t\leq s<T}e^{\beta s}(\widetilde{Y}_{s+}-\widetilde{Y}_{s})^{2} \end{eqnarray*} Then \begin{eqnarray*} e^{\beta t}\widetilde{Y}_{t}^{2} + \int_{t}^{T} e^{\beta s}\widetilde{Z}^{2}_{s}ds &+& \int_{t}^{T} e^{\beta s}\int_{\mathcal{U}}|\widetilde{\psi}_{s}(u)|^{2}\mu(du)ds+\int_{t}^{T} e^{\beta s}d<\widetilde{M}^{c},\widetilde{M}^{c}>_{s}=\\ &-&\int_{t}^{T}\beta e^{\beta s}\widetilde{Y}_{s}^{2}ds+2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{f}(s)ds \\ &+& 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{A}- 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}\\ &-& 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\int_{\mathcal{U}}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)- 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{M}_{s} \\ &-& \sum_{t<s\leq T}e^{\beta s}(\widetilde{Y}_{s}-\widetilde{Y}_{s-})^{2}+\int_{t}^{T}2e^{\beta s}\widetilde{Y}_{s}d(\widetilde{C})_{s}\\ &-&\sum_{t\leq s<T}e^{\beta s}(\widetilde{Y}_{s+}-\widetilde{Y}_{s})^{2} \end{eqnarray*} It is clear that for all $t\in[0,T]$ $-\sum_{t<s\leq T}e^{\beta s}(\widetilde{Y}_{s}-\widetilde{Y}_{s-})^{2}-\sum_{t\leq s<T}e^{\beta s}(\widetilde{Y}_{s+}-\widetilde{Y}_{s})^{2} \leq 0$. By applying the inequality $2ab\leq (\frac{a}{\epsilon})^{2}+\epsilon^{2}b^{2}$, valid for all $(a,b)$ in $\mathbb{R}^{2}$, we get: a.e. 
for all $t\in[0,T]$ \begin{eqnarray*} -\int_{t}^{T}\beta e^{\beta s}\widetilde{Y}_{s}^{2}ds+2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{f}(s)ds &\leq & -\int_{t}^{T}\beta e^{\beta s}\widetilde{Y}_{s}^{2}ds +\frac{1}{\epsilon^{2}}\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}^{2}ds\\ &+&\epsilon^{2}\int_{t}^{T}e^{\beta s}\widetilde{f}(s)^{2}ds\\ &=& (\frac{1}{\epsilon^{2}}-\beta)\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}^{2}ds+\epsilon^{2}\int_{t}^{T}e^{\beta s}\widetilde{f}(s)^{2}ds \end{eqnarray*} As $\beta\geq\frac{1}{\epsilon^{2}}$, we have $(\frac{1}{\epsilon^{2}}-\beta)\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}^{2}ds\leq0$ for all $t\in[0,T]$ a.s.\\ Next, we have also that the term $\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s}d\widetilde{C}_{s}$ is non-positive. Indeed a.s. for all $t\in[0,T]$, $$\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s}d\widetilde{C}_{s} =\sum_{t\leq s<T}e^{\beta s}\widetilde{Y}_{s}\bigtriangleup\widetilde{C}_{s}$$ and a.s. for all $t\in[0,T]$ \begin{equation} \widetilde{Y}_{t}\bigtriangleup\widetilde{C}_{t}=(Y^{1}_{t}-Y^{2}_{t})\bigtriangleup C^{1}_{t}-(Y^{1}_{t}-Y^{2}_{t})\bigtriangleup C^{2}_{t} \label{moneq10} \end{equation} We use property $~\eqref{moneq7}$ of $C^1$ and the fact that $Y^2\geq \xi$ to obtain: a.s. for all $t\in[0,T]$ $$(Y^{1}_{t}-Y^{2}_{t})\bigtriangleup C^{1}_{t}=(Y^{1}_{t}-\xi_{t})\bigtriangleup C^{1}_{t}-(Y^{2}_{t}-\xi_{t})\bigtriangleup C^{1}_{t}=0-(Y^{2}_{t}-\xi_{t})\bigtriangleup C^{1}_{t}\leq0$$ Similarly, we obtain: a.s. for all $t\in[0,T]$, $$(Y^{1}_{t}-Y^{2}_{t})\bigtriangleup C^{2}_{t}=(Y^{1}_{t}-\xi_{t})\bigtriangleup C^{2}_{t}-(Y^{2}_{t}-\xi_{t})\bigtriangleup C^{2}_{t}=(Y^{1}_{t}-\xi_{t})\bigtriangleup C^{2}_{t}-0\geq0.$$ We also show that $\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{A}$ is non-positive by using property $~\eqref{moneq6}$ of the definition of the RBSDE and the fact that $Y^i\geq \xi$ for $i=1,2$ and that $A^i=A^{i,c}+A^{i,d}$ (see also \citet{QuenezSulem2014}). Then \begin{multline} e^{\beta t}\widetilde{Y}_{t}^{2}+\int_{t}^{T} e^{\beta s}\widetilde{Z}^{2}_{s}ds+\int_{t}^{T} e^{\beta s}d<\widetilde{M}^{c},\widetilde{M}^{c}>_{s} +\int_{t}^{T} e^{\beta s}\int_{\mathcal{U}}|\widetilde{\psi}_{s}(u)|^{2}\mu(du)ds\leq \\ \epsilon^{2}\int_{t}^{T}e^{\beta s}\widetilde{f}^{2}(s)ds-2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}\\ -2\int_{t}^{T}\int_{\mathcal{U}}e^{\beta s}\widetilde{Y}_{s-}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)- 2\int_{t}^{T}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{M}_{s} \,\,\ \forall \,\,\ a.s. \,\,\ t\in[0,T]. \label{moneq11} \end{multline} We now show that the term $\int_{0}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}$ has zero expectation. To this purpose, we show that $E[\sqrt{\int_{0}^{T}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{Z}^{2}_{s}ds}]<\infty$, in the same way that in the proof of Lemma 3.2 (A priori estimates) in \citet{Ouknine2015}. By using the left-continuity of a.e. trajectory of the process $(\widetilde{Y}_{s-})$, we have \begin{equation} (\widetilde{Y}_{s-})^{2}(\omega)\leq \sup_{t\in\mathbb{Q}}(\widetilde{Y}_{t-})^{2}(\omega) \,\,\ \text{for all} \,\,\ s\in(0,T],\,\,\ \text{for a.s.}\,\,\ \omega\in\Omega \label{moneq12} \end{equation} On the other hand, for all $t\in(0,T]$, a.s., $(\widetilde{Y}_{t-})^{2} \leq ess\sup_{\tau\in\mathcal{T}_{0,T}}(\widetilde{Y}_{\tau})^{2}$. Then \begin{equation} \sup_{t\in\mathbb{Q}}(\widetilde{Y}_{t-})^{2}\leq ess\sup_{\tau\in\mathcal{T}_{0,T}}(\widetilde{Y}_{\tau})^{2} \,\,\ a.s. 
\label{moneq13} \end{equation} According to $~\eqref{moneq12}$ and $~\eqref{moneq13}$ we obtain \begin{equation} \int_{0}^{T}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{Z}_{s}^{2}ds\leq \int_{0}^{T}e^{2\beta s}\sup_{t\in\mathbb{Q}}\widetilde{Y}_{t-}^{2}\widetilde{Z}_{s}^{2}ds \leq\int_{0}^{T}e^{2\beta s}ess\sup_{\tau\in\mathcal{T}_{0,T}}\widetilde{Y}_{\tau}^{2}\widetilde{Z}_{s}^{2}ds \label{moneq14} \end{equation} Using $~\eqref{moneq14}$, together with Cauchy-Schwarz inequality, gives $$E\Bigl[\sqrt{\int_{0}^{T}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{Z}_{s}^{2}ds}\Bigr] \leq E\Bigl[\sqrt{ess\sup_{\tau\in\mathcal{T}_{0,T}}\widetilde{Y}_{\tau}^{2}}\sqrt{\int_{0}^{T}e^{2\beta s}\widetilde{Z}_{s}^{2}ds}\Bigr]$$ Then \begin{equation} E\Bigl[\sqrt{\int_{0}^{T}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{Z}_{s}^{2}ds}\Bigr] \leq \||\widetilde{Y}\||_{\mathds{S}^{2}}\|\widetilde{Z}\|_{2\beta}. \label{moneq15} \end{equation} We conclude that $ E\Bigl[\sqrt{\int_{0}^{T}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{Z}_{s}^{2}ds}\Bigr]<\infty$, whence, we get $E[\int_{0}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}]=0$. Next we show that $E\Bigl[\int_{0}^{T}\int_{\mathcal{U}}e^{\beta s}\widetilde{Y}_{s-}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)\Bigr]=0$. For this purpose, we first prove that $E[\sqrt{\int_{0}^{T}\int_{\mathcal{U}}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds}]<\infty$. According to $~\eqref{moneq12}$ and $~\eqref{moneq13}$, we have \begin{multline} \int_{0}^{T}\int_{\mathcal{U}}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds\leq \int_{0}^{T}\int_{\mathcal{U}}e^{2\beta s}\sup_{t\in\mathbb{Q}}\widetilde{Y}_{t-}^{2}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds\\ \leq \int_{0}^{T}\int_{\mathcal{U}}e^{2\beta s}ess\sup_{\tau\in\mathcal{T}_{0,T}}\widetilde{Y}_{\tau}^{2}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds \label{moneq16} \end{multline} Using $~\eqref{moneq16}$ and Cauchy-Schwarz inequality, gives $$E\Bigl[\sqrt{\int_{0}^{T}\int_{\mathcal{U}}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds}\Bigr] \leq E\Bigl[\sqrt{ess\sup_{\tau\in\mathcal{T}_{0,T}}\widetilde{Y}_{\tau}^{2}}\sqrt{\int_{0}^{T}e^{2\beta s}\int_{\mathcal{U}}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds}\Bigr]$$ Thus \begin{equation} E\Bigl[\sqrt{\int_{0}^{T}\int_{\mathcal{U}}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds}\Bigr] \leq \||\widetilde{Y}\||_{\mathds{S}^{2}}\|\widetilde{\psi}\|_{\mathbb{L}^{2,2\beta}_{\pi}}<\infty \label{moneq17} \end{equation} Then $E\Bigl[\int_{0}^{T}\int_{\mathcal{U}}e^{\beta s}\widetilde{Y}_{s-}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)\Bigr]=0$. 
Finally the same result holds for the martingale $\int_{0}^{t}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{M}_{s}$, since: \begin{equation} E\Bigl[\sqrt{\int_{0}^{T} e^{2\beta s}\widetilde{Y}_{s-}^{2}d[\widetilde{M}]_{s}}\Bigr] \leq \||\widetilde{Y}\||_{\mathds{S}^{2}} \|\widetilde{M}\|_{\mathbb{M}_{2\beta}^{2}} <\infty \label{moneq18} \end{equation} By taking expectations on both sides of $~\eqref{moneq11}$ with $t=0$, we obtain: $$\widetilde{Y}^{2}_{0}+E[\int_{0}^{T} e^{\beta s}\widetilde{Z}^{2}_{s}ds]+E[\int_{0}^{T} e^{\beta s}d<\widetilde{M}>_{s}]+E[\int_{0}^{T} e^{\beta s}\int_{\mathcal{U}}|\widetilde{\psi}_{s}(u)|^{2}\mu(du)ds]\leq \epsilon^{2}\|\widetilde{f}(s)\|^{2}_{\beta}.$$ Hence, with the fact that $E[\int_{0}^{T} e^{\beta s}d<\widetilde{M}>_{s}]=E[\int_{0}^{T} e^{\beta s}d[\widetilde{M}]_{s}]$, we have \begin{equation} \|\widetilde{Z}\|^{2}_{\beta}+\|\widetilde{M}\|_{\mathbb{M}_{\beta}^{2}}^{2} +\|\widetilde{\psi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}\leq \epsilon^{2}\|\widetilde{f}(s)\|^{2}_{\beta} \label{moneq19} \end{equation} This therefore shows the first inequality of the lemma. From $~\eqref{moneq11}$ we also get, for all $\tau\in\mathcal{T}_{0,T}$ \begin{multline} e^{\beta\tau}\widetilde{Y}_{\tau}^{2}\leq \epsilon^{2}\int_{0}^{T}e^{\beta s}\widetilde{f}^{2}(s)ds-2\int_{\tau}^{T}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s} -2\int_{\tau}^{T}\int_{\mathcal{U}}e^{\beta s}\widetilde{Y}_{s-}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)\\ - 2\int_{\tau}^{T}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{M}_{s} \,\,\ a.s. \label{moneq20} \end{multline} By taking first the essential supremum over $\tau\in\mathcal{T}_{0,T}$, and then the expectation on both sides of the inequality $~\eqref{moneq20}$, we obtain: \begin{multline} E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}\widetilde{Y}_{\tau}^{2}\Bigr] \leq \epsilon^{2}\|\widetilde{f}(s)\|^{2}_{\beta}+2E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\int_{0}^{\tau}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}|\Bigr]\\ +2E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\int_{0}^{\tau}\int_{\mathcal{U}}e^{\beta s}\widetilde{Y}_{s-}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)|\Bigr]\\ +2E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\int_{0}^{\tau}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{M}_{s}|\Bigr]. \label{moneq21} \end{multline} By using the continuity of a.e. trajectory of the process $(\int_{0}^{t}e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s})_{t\in[0,T]}$ (\citet{Ouknine2015}, Prop.A.3 ) and Burkholder-Davis-Gundy inequalities (\citet{Protter2000} Theorem 48, page 193. Applied with $p=1$), we get \begin{multline} E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\int_{0}^{\tau} e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}|\Bigr]=E\Bigl[\sup_{t\in[0,T]}|\int_{0}^{t} e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}dW_{s}|\Bigr]\\ \leq cE\Bigl[\sqrt{\int_{0}^{T}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{Z}^{2}_{s}ds} \Bigr] \label{moneq22} \end{multline} where $c$ is a positive "universal" constant (which does not depend on the other parameters). The same reasoning as that used to obtain equation $~\eqref{moneq14}$ leads to \begin{equation} \sqrt{\int_{0}^{T}e^{2\beta s}\widetilde{Y}_{s-}^{2}\widetilde{Z}^{2}_{s}ds}\leq \sqrt{ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}\widetilde{Y}_{\tau}^{2}\int_{0}^{T}e^{\beta s}\widetilde{Z}^{2}_{s}ds} \,\,\ p.s. 
\label{moneq23} \end{equation} From the inequalities $~\eqref{moneq22}$, $~\eqref{moneq23}$ and $ab\leq\frac{1}{4}a^{2}+b^{2}$, we have \begin{equation} 2E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\int_{0}^{\tau} e^{\beta s}\widetilde{Y}_{s-}\widetilde{Z}_{s}ds|\Bigr]\leq\frac{1}{4}E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}\widetilde{Y}_{\tau}^{2}\Bigr]+4c^{2}E\Bigl[\int_{0}^{T}e^{\beta s}\widetilde{Z}^{2}_{s}ds\Bigr]. \label{moneq24} \end{equation} By the same arguments, we have \begin{multline} 2E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\int_{0}^{\tau}e^{\beta s}\widetilde{Y}_{s-}\widetilde{\psi}_{s}(u)\widetilde{\pi}(du,ds)|\Bigr]\leq 2cE\Bigl[\sqrt{|\int_{0}^{T}\int_{\mathcal{U}}e^{2\beta s}\widetilde{Y}^{2}_{s-}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds|}\Bigr]\\ \leq \frac{1}{4}E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}\widetilde{Y}_{\tau}^{2}\Bigr]+4c^{2}E\Bigl[\int_{0}^{T}\int_{\mathcal{U}}e^{\beta s}\widetilde{\psi}^{2}_{s}(u)\mu(du)ds\Bigr] \label{moneq25} \end{multline} And \begin{multline} 2E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\int_{0}^{\tau}e^{\beta s}\widetilde{Y}_{s-}d\widetilde{M}_{s}|\Bigr]\leq 2cE\Bigl[\sqrt{|\int_{0}^{T}e^{2\beta s}\widetilde{Y}^{2}_{s-}d[\widetilde{M}]_{s}|}\Bigr]\\ \leq \frac{1}{4}E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}\widetilde{Y}_{\tau}^{2}\Bigr]+4c^{2}E\Bigl[\int_{0}^{T}e^{\beta s}d[\widetilde{M}]_{s}\Bigr] \label{moneq26} \end{multline} where $c$ is a positive constant which does not depend on the other parameters. From $~\eqref{moneq22}$, $~\eqref{moneq24}$, $~\eqref{moneq25}$ and $~\eqref{moneq26}$, we get $$\frac{1}{4}\||\widetilde{Y}|\|_{\beta}^{2}\leq\epsilon^{2}\|\widetilde{f}(s)\|^{2}_{\beta} +4c^{2}\|\widetilde{Z}\|^{2}_{\beta} +4c^{2}\|\widetilde{M}\|_{\mathbb{M}_{\beta}^{2}}^{2} +4c^{2}\|\widetilde{\psi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}$$ This inequality, combined with $~\eqref{moneq19}$, gives $$\||\widetilde{Y}|\|_{\beta}^{2}\leq 4\epsilon^{2}(1+4c^{2})\|\widetilde{f}(s)\|^{2}_{\beta}$$ \end{proof} In the following lemma, we prove existence and uniqueness of the solution to the RBSDE from Definition$~\ref{definition1}$ in the case where the driver $f$ depends only on $s$ and $\omega$, i.e. $f(\omega,s,y,z,\psi):=f(\omega,s)$. \begin{lemma} \label{lemme3} Suppose that $f$ does not depend on y, z, $\psi$ that is $f(\omega,s,y,z,\psi):=f(\omega,s)$, where $f$ is a process in $\mathds{H}^{2}$. Let $\xi$ be an obstacle. Then, the RBSDE from Definition$~\ref{definition1}$ admits a unique solution $(Y,Z,\psi,M,A,C)\in\mathcal{E}^{2}(0,T)$, and for each $S\in\mathcal{T}_{0,T}$, we have \begin{equation} Y_{S}=ess\sup_{\tau\in\mathcal{T}_{S,T}}E\Bigl[\xi_{\tau}+\int_{S}^{\tau} f(t)dt|\mathcal{F}_{S} \Bigr] \,\,\,\ a.s. \label{moneq27} \end{equation} \end{lemma} \begin{proof} For all $S\in\mathcal{T}_{0,T}$, we define $\overline{Y}(S)$ by: \begin{equation} \overline{Y}(S)=ess\sup_{\tau\in\mathcal{T}_{S,T}}E\Bigl[\xi_{\tau}+\int_{S}^{\tau} f(t)dt|\mathcal{F}_{S} \Bigr] \,\,\,\ , \,\,\,\ \overline{Y}(T)=\xi_{T} \label{moneq28} \end{equation} And $\overline{\overline{Y}}(S)$ by: \begin{equation} \overline{\overline{Y}}(S)=\overline{Y}(S)+\int_{0}^{S}f(t)dt=ess\sup_{\tau\in\mathcal{T}_{S,T}}E\Bigl[\xi_{\tau}+\int_{0}^{\tau} f(t)dt|\mathcal{F}_{S} \Bigr] \label{moneq29} \end{equation} We note that the process $(\xi_{t}+\int_{0}^{t} f(s)ds)_{t\in[0,T]}$ is progressive. 
Therefore, the family $(\overline{\overline{Y}}(S))_{S\in\mathcal{T}_{0,T}}$ is a supermartingale family (see \citet{KobylanskiQuenez2012} Remark 1.2 with Prop.1.5), which, combined with remark $(b)$ in (\citet{DellacherieMeyer1980}, page 435), gives the existence of a strong optional supermartingale (which we denote again by $\overline{\overline{Y}}$) such that $\overline{\overline{Y}}_{S}=\overline{\overline{Y}}(S)$ a.s. for all $S\in\mathcal{T}_{0,T}$. Thus, we have $\overline{Y}(S)=\overline{\overline{Y}}(S)-\int_{0}^{S}f(t)dt=\overline{\overline{Y}}_{S}-\int_{0}^{S}f(t)dt$ a.s. for all $S\in\mathcal{T}_{0,T}$ (see \citet{DellacherieMeyer1980}). On the other hand, we know that almost all trajectories of the strong optional supermartingale $\overline{\overline{Y}}$ are l\`{a}dl\`{a}g. Thus, we get that the l\`{a}dl\`{a}g optional process $(\overline{Y}_{t})_{t\in[0,T]}=(\overline{\overline{Y}}_{t}-\int_{0}^{t}f(s)ds)_{t\in[0,T]}$ aggregates the family $(\overline{Y}(S))_{S\in\mathcal{T}_{0,T}}$. To prove Lemma $~\ref{lemme3}$, we show, as a first step, that $\overline{Y}\in\mathds{S}^{2}$ by giving an estimate of $\||\overline{Y}|\|_{\mathds{S}^{2}}^{2}$ in terms of $\||\xi|\|_{\mathds{S}^{2}}^{2}$ and $\|f\|_{\mathds{H}^{2}}^{2}$. In the second step, we exhibit processes $Z$, $\psi$, $M$, $A$ and $C$ such that $(\overline{Y},Z,\psi,M,A,C)$ is a solution to the RBSDE with parameters $(f,\xi)$. In the third step, we prove that $A\times C\in \mathds{S}^{2}\times\mathds{S}^{2}$ and we give an estimate of $\||A|\|_{\mathds{S}^{2}}^{2}$ and $\||C|\|_{\mathds{S}^{2}}^{2}$. In the fourth step, we show that $Z\in\mathds{H}^{2}$, $\psi\in\mathbb{L}_{\pi}^{2}$ and $M\in\mathbb{M}^{2}$, and finally we show the uniqueness of the solution.\\ \textbf{Step 1.} By using the definition of $\overline{Y}$ $~\eqref{moneq28}$, Jensen's inequality and the triangle inequality, we get $$|\overline{Y}_{S}|\leq ess\sup_{\tau\in\mathcal{T}_{S,T}}E\Bigl[|\xi_{\tau}|+|\int_{S}^{\tau} f(t)dt| |\mathcal{F}_{S} \Bigr]\leq E\Bigl[ess\sup_{\tau\in\mathcal{T}_{S,T}}|\xi_{\tau}|+\int_{0}^{T} |f(t)|dt |\mathcal{F}_{S} \Bigr]$$ Thus, we obtain \begin{equation} |\overline{Y}_{S}|\leq E\Bigl[X|\mathcal{F}_{S}\Bigr] \label{moneq30} \end{equation} with \begin{equation} X=\int_{0}^{T}|f(t)|dt+ess\sup_{\tau\in\mathcal{T}_{0,T}}|\xi_{\tau}| \label{moneq31} \end{equation} Applying the Cauchy-Schwarz inequality gives \begin{equation} E[X^2]\leq cT\|f\|_{\mathds{H}^{2}}^{2}+c\||\xi|\|_{\mathds{S}^{2}}^{2}<\infty, \label{moneq32} \end{equation} where $c$ is a positive constant. Now, inequality $~\eqref{moneq30}$ leads to $|\overline{Y}_{S}|^{2}\leq |E[X|\mathcal{F}_{S}]|^{2}$. By taking the essential supremum over $S\in\mathcal{T}_{0,T}$, we get $ess\sup_{S\in\mathcal{T}_{0,T}}|\overline{Y}_{S}|^{2}\leq ess\sup_{S\in\mathcal{T}_{0,T}}|E[X|\mathcal{F}_{S}]|^{2}$. By using Proposition A.3 in \citet{Ouknine2015}, we get $ess\sup_{S\in\mathcal{T}_{0,T}}|\overline{Y}_{S}|^{2}\leq \sup_{t\in[0,T]}|E[X|\mathcal{F}_{t}]|^{2}$. By using this inequality and Doob's martingale inequalities, we obtain \begin{equation} E\Bigl[ess\sup_{S\in\mathcal{T}_{0,T}}|\overline{Y}_{S}|^{2}\Bigr]\leq E\Bigl[\sup_{t\in[0,T]}|E[X|\mathcal{F}_{t}]|^{2}\Bigr] \leq cE[X^{2}] \label{moneq33} \end{equation} where $c$ is a positive constant that changes from line to line.
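For completeness, here is one way to obtain $~\eqref{moneq32}$ (a sketch; we use, as an assumption on the notation, that $\|f\|_{\mathds{H}^{2}}^{2}=E[\int_{0}^{T}|f(t)|^{2}dt]$ and $\||\xi|\|_{\mathds{S}^{2}}^{2}=E[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\xi_{\tau}|^{2}]$, as suggested by the computations above). By $(a+b)^{2}\leq 2a^{2}+2b^{2}$ and the Cauchy-Schwarz inequality, $$E[X^{2}]\leq 2E\Bigl[\Bigl(\int_{0}^{T}|f(t)|dt\Bigr)^{2}\Bigr]+2E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}|\xi_{\tau}|^{2}\Bigr]\leq 2T\,E\Bigl[\int_{0}^{T}|f(t)|^{2}dt\Bigr]+2\||\xi|\|_{\mathds{S}^{2}}^{2}=2T\|f\|_{\mathds{H}^{2}}^{2}+2\||\xi|\|_{\mathds{S}^{2}}^{2},$$ so one may take $c=2$ in $~\eqref{moneq32}$; only the finiteness of the right-hand side is used in what follows.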
Finally, combining inequalities $~\eqref{moneq32}$ and $~\eqref{moneq33}$ gives \begin{equation} E\Bigl[ess\sup_{S\in\mathcal{T}_{0,T}}|\overline{Y}_{S}|^{2}\Bigr]\leq cT\|f\|_{\mathds{H}^{2}}^{2}+c\||\xi|\|_{\mathds{S}^{2}}^{2}<\infty. \label{moneq34} \end{equation} Hence $\overline{Y}\in\mathds{S}^{2}$. \\ \textbf{Step 2.} Due to the previous step and to the assumption $f\in\mathds{H}^{2}$, the strong optional supermartingale $\overline{\overline{Y}}$ is of class $(D)$. Applying the Mertens decomposition (\citet{Ouknine2015}, Theorem A.1) and a result from optimal stopping theory (see more in \citet{ElKaroui1981}, Prop. 2.34, page 131, or \citet{KobylanskiQuenez2012}) gives the following: $$\overline{\overline{Y}}_{\tau}=N_{\tau}-A_{\tau}-C_{\tau-} \,\,\,\ \forall \tau\in\mathcal{T}_{0,T}$$ \begin{equation} \overline{Y}_{\tau}=-\int_{0}^{\tau}f(t)dt+N_{\tau}-A_{\tau}-C_{\tau-} \,\,\,\ a.s. \,\,\,\ \forall \tau\in\mathcal{T}_{0,T} \label{moneq35} \end{equation} where $N$ is a (c\`{a}dl\`{a}g) uniformly integrable martingale such that $N_0=0$, $A$ is a nondecreasing right-continuous predictable process such that $A_0=0$, $E(A_T)<\infty$ and satisfying $~\eqref{moneq6}$, and $C$ is a nondecreasing right-continuous adapted purely discontinuous process such that $C_{0-}=0$, $E(C_T)<\infty$ and satisfying $~\eqref{moneq7}$. By the martingale representation theorem (Lemma$~\ref{lemme1}$), there exist a unique predictable process $Z$, a unique process $\psi$ and a unique orthogonal (c\`{a}dl\`{a}g) local martingale $M$ such that $$N_{t}=\int_{0}^{t}Z_{s}dW_{s}+\int_{0}^{t}\int_{\mathcal{U}}\psi_{s}(u)\widetilde{\pi}(du,ds)+M_{t}.$$ Moreover, we have $\overline{Y}_{T}=\xi_{T}$ a.s. by definition of $\overline{Y}$. Combining this with equation $~\eqref{moneq35}$ gives equation $~\eqref{moneq4}$. Also by definition of $\overline{Y}$, we have $\overline{Y}_{S}\geq\xi_{S}$ a.s. for all $S\in\mathcal{T}_{0,T}$, which, along with Proposition A.4 in \citet{Ouknine2015} (or Theorem 3.2 in \citet{AshkanNikeghbali2006}), shows that $\overline{Y}$ satisfies inequality $~\eqref{moneq5}$. Finally, to conclude that the process $(\overline{Y},Z,\psi,M,A,C)$ is a solution to the RBSDE with parameters $(f,\xi)$, it remains to show that $Z\times\psi\times M\times A\times C\in\mathds{H}^{2}\times\mathbb{L}_{\pi}^{2}\times\mathbb{M}^{2}\times\mathds{S}^{2}\times\mathds{S}^{2}$.\\ \textbf{Step 3.} Let us show that $A\times C\in\mathds{S}^{2}\times\mathds{S}^{2}$.\\ Let us define the process $\overline{\overline{A}}_{t}=A_{t}+C_{t-}$, where the processes $A$ and $C$ are given by $~\eqref{moneq35}$. By arguments similar to those used in the proof of inequality $~\eqref{moneq30}$, we see that $|\overline{\overline{Y}}_{S}|\leq E[X|\mathcal{F}_{S}]$ with $$X=\int_{0}^{T}|f(t)|dt+ess\sup_{\tau\in\mathcal{T}_{S,T}}|\xi_{\tau}|.$$ Then, Corollary A.1 in \citet{Ouknine2015} ensures the existence of a constant $c>0$ such that $E[(\overline{\overline{A}}_{T})^{2}]\leq cE[X^{2}]$. By combining this inequality with inequality $~\eqref{moneq32}$, we obtain \begin{equation} E[(\overline{\overline{A}}_{T})^{2}]\leq cT\|f\|_{\mathds{H}^{2}}^{2}+c\||\xi|\|_{\mathds{S}^{2}}^{2} \label{moneq36} \end{equation} where we have again allowed the positive constant $c$ to vary from line to line. We conclude that $\overline{\overline{A}}_{T}\in L^{2}$.
Since $\overline{\overline{A}}$ is nondecreasing, we have $(\overline{\overline{A}}_{\tau})^{2}\leq (\overline{\overline{A}}_{T})^{2}$ for all $\tau\in\mathcal{T}_{0,T}$, and thus $$E\Bigl[ess\sup_{\tau\in\mathcal{T}_{0,T}}(\overline{\overline{A}}_{\tau})^{2}\Bigr]\leq E\Bigl[(\overline{\overline{A}}_{T})^{2}\Bigr],$$ i.e. $\overline{\overline{A}}\in \mathds{S}^{2}$, and hence $A\in \mathds{S}^{2}$ and $C\in \mathds{S}^{2}$. \\ \textbf{Step 4.} Let us now prove that $Z\times\psi\times M\in\mathds{H}^{2}\times\mathbb{L}_{\pi}^{2} \times\mathbb{M}^{2}$. From Step 3 we have $$\int_{0}^{T}Z_{s}dW_{s}+\int_{0}^{T}\int_{\mathcal{U}}\psi_{s}(u)\widetilde{\pi}(du,ds)+M_{T}= \overline{Y}_{T}+\int_{0}^{T}f(t)dt+\overline{\overline{A}}_{T}-\overline{Y}_{0}$$ where $\overline{\overline{A}}$ is the process from Step 3. Since $\overline{\overline{A}}_{T}\in L^{2}$, $\overline{Y}_{T}\in L^{2}$, $\overline{Y}_{0}\in L^{2}$ and $f\in\mathds{H}^{2}$, the right-hand side belongs to $L^{2}$. Hence, $\int_{0}^{T}Z_{s}dW_{s}\in L^{2}$, $\int_{0}^{T}\int_{\mathcal{U}}\psi_{s}(u)\widetilde{\pi}(du,ds)\in L^{2}$ and $M_{T}\in L^{2}$, and consequently $Z\times\psi\times M\in\mathds{H}^{2}\times\mathbb{L}_{\pi}^{2} \times\mathbb{M}^{2}$. For the uniqueness of the solution, suppose that $(Y,Z,\psi,M,A,C)$ is a solution of the RBSDE with driver $f$ and obstacle $\xi$. Then, by inequality $~\eqref{moneq8}$ in Lemma$~\ref{lemme2}$ (applied with $f^1=f^2=f$) we obtain $Y=\overline{Y}$ in $\mathds{S}^{2}$, where $\overline{Y}$ is given by $~\eqref{moneq28}$. The uniqueness of $A$, $C$, $Z$, $\psi$ and $M$ follows from the uniqueness of the Mertens decomposition of strong optional supermartingales and from the uniqueness of the martingale representation (Lemma$~\ref{lemme1}$). \end{proof} \begin{remark} \label{remarque1} \begin{enumerate} \item We note that the uniqueness of $Z$, $\psi$ and $M$ can also be obtained by applying $~\eqref{unicite}$ in Lemma$~\ref{lemme2}$. \item Let $\beta>0$. For $\phi\in\mathds{S}^{2}$, we have $E[\int_{0}^{T}e^{\beta t}|\phi_{t}|^{2}dt]\leq TE[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}|\phi_{\tau}|^{2}]$. Indeed, by applying Fubini's theorem, we get \begin{multline} E[\int_{0}^{T}e^{\beta t}|\phi_{t}|^{2}dt]= \int_{0}^{T}E[e^{\beta t}|\phi_{t}|^{2}]dt\leq \int_{0}^{T}E[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}|\phi_{\tau}|^{2}]dt=\\ TE[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}|\phi_{\tau}|^{2}] \end{multline} \end{enumerate} \end{remark} In the following theorem, we prove existence and uniqueness of the solution to the RBSDE from Definition$~\ref{definition1}$ in the case of a general Lipschitz driver $f$, by using a fixed-point theorem together with (2) in Remark$~\ref{remarque1}$. \begin{theorem} Let $\xi$ be a left-limited and r.u.s.c. process in $\mathds{S}^{2}$ and let $f$ be a Lipschitz driver. The RBSDE with parameters $(f,\xi)$ from Definition$~\ref{definition1}$ admits a unique solution $(Y,Z,\psi,M,A,C)\in\mathcal{E}^{2}(0,T)$. \end{theorem} \begin{proof} We denote by $\mathcal{E}^{\beta}_{f}$ the space $\mathds{S}^{2}\times\mathds{H}^{2}\times\mathbb{L}^{2}_{\pi}(0,T)$, which we equip with the norm $\|\cdot\|_{\mathcal{E}^{\beta}_{f}}$ defined by $$\|(Y,Z,\psi)\|_{\mathcal{E}^{\beta}_{f}}^{2}=\||Y|\|_{\beta}^{2}+\|Z\|^{2}_{\beta}+\|\psi\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}$$ for all $(Y,Z,\psi)\in\mathds{S}^{2}\times\mathds{H}^{2}\times\mathbb{L}^{2}_{\pi}(0,T)$.
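Let us also record a standard observation (stated here under the assumption, consistent with the computations above, that $\|Z\|_{\beta}^{2}=E[\int_{0}^{T}e^{\beta s}|Z_{s}|^{2}ds]$ and $\||Y|\|_{\beta}^{2}=E[ess\sup_{\tau\in\mathcal{T}_{0,T}}e^{\beta \tau}|Y_{\tau}|^{2}]$): since $1\leq e^{\beta s}\leq e^{\beta T}$ for $\beta>0$ and $s\in[0,T]$, we have, for instance, $$E\Bigl[\int_{0}^{T}|Z_{s}|^{2}ds\Bigr]\leq \|Z\|_{\beta}^{2}\leq e^{\beta T}E\Bigl[\int_{0}^{T}|Z_{s}|^{2}ds\Bigr],$$ and similarly for the other components, so that $\|\cdot\|_{\mathcal{E}^{\beta}_{f}}$ is equivalent to the corresponding unweighted norm and $(\mathcal{E}^{\beta}_{f},\|\cdot\|_{\mathcal{E}^{\beta}_{f}})$ is a Banach space, as used below.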
Next, we define a mapping $\Phi:\mathcal{E}^{\beta}_{f}\rightarrow \mathcal{E}^{\beta}_{f}$ as follows: for a given $(y,z,\varphi)\in\mathcal{E}^{\beta}_{f}$, we let $(Y,Z,\psi)=\Phi(y,z,\varphi)$, where $(Y,Z,\psi)$ denotes the first three components of the solution to the RBSDE associated with driver $f:=f(t,y_t,z_t,\varphi_t)$ and with obstacle $\xi_t$. Let $(A,C)$ be the associated Mertens process, constructed as in Lemma$~\ref{lemme3}$. The mapping $\Phi$ is well-defined by Lemma$~\ref{lemme3}$. Let $(y,z,\varphi)$ and $(y',z',\varphi')$ be two elements of $\mathcal{E}^{\beta}_{f}$. We set $(Y,Z,\psi)=\Phi(y,z,\varphi)$ and $(Y',Z',\psi')=\Phi(y',z',\varphi')$. We also set $\widetilde{Y}=Y-Y'$, $\widetilde{Z}=Z-Z'$, $\widetilde{\psi}=\psi-\psi'$, $\widetilde{y}=y-y'$, $\widetilde{z}=z-z'$ and $\widetilde{\varphi}=\varphi-\varphi'$. We argue as in the proof of Theorem 3.4 in \citet{Ouknine2015}, which treats the Brownian filtration case, and prove that, for a suitable choice of the parameter $\beta>0$, the mapping $\Phi$ is a contraction from the Banach space $\mathcal{E}^{\beta}_{f}$ into itself. Indeed, by applying Lemma$~\ref{lemme2}$, we have, for all $\epsilon>0$ and for all $\beta\geq\frac{1}{\epsilon^{2}}$: \begin{eqnarray*} \||\widetilde{Y}|\|_{\beta}^{2}+\|\widetilde{Z}\|^{2}_{\beta}+\|\widetilde{\psi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}&\leq &\||\widetilde{Y}|\|_{\beta}^{2}+\|\widetilde{Z}\|^{2}_{\beta}+\|\widetilde{M}\|^{2}_{\mathbb{M}^{2}_{\beta}} +\|\widetilde{\psi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}\\ &\leq& \epsilon^{2}(5+16c^{2})\|f(t,y,z,\varphi)-f(t,y',z',\varphi')\|_{\beta}^{2} \end{eqnarray*} By using the Lipschitz property of $f$ and the fact that $(a+b)^2\leq2a^2+2b^2$ for all $(a,b)\in\mathbb{R}^{2}$, we obtain $$\|f(t,y,z,\varphi)-f(t,y',z',\varphi')\|_{\beta}^{2}\leq C_{K}(\|\widetilde{y}\|_{\beta}^{2}+\|\widetilde{z}\|_{\beta}^{2} +\|\widetilde{\varphi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}})$$ where $C_{K}$ is a positive constant depending on the Lipschitz constant $K$ only. Thus, for all $\epsilon>0$ and for all $\beta\geq\frac{1}{\epsilon^{2}}$ we have: $$\||\widetilde{Y}|\|_{\beta}^{2}+\|\widetilde{Z}\|^{2}_{\beta}+\|\widetilde{\psi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}\leq \epsilon^{2}C_{K}(5+16c^{2})\Bigl(\|\widetilde{y}\|_{\beta}^{2}+\|\widetilde{z}\|_{\beta}^{2} +\|\widetilde{\varphi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}\Bigr)$$ The previous inequality, combined with (2) in Remark$~\ref{remarque1}$, gives $$\||\widetilde{Y}|\|_{\beta}^{2}+\|\widetilde{Z}\|^{2}_{\beta}+\|\widetilde{\psi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}\leq \epsilon^{2}C_{K}(5+16c^{2})(T+1)\Bigl(\||\widetilde{y}|\|_{\beta}^{2}+\|\widetilde{z}\|_{\beta}^{2} +\|\widetilde{\varphi}\|^{2}_{\mathbb{L}^{2,\beta}_{\pi}}\Bigr)$$ Thus, for $\epsilon>0$ such that $\epsilon^{2}C_{K}(5+16c^{2})(T+1)<1$ and $\beta>0$ such that $\beta\geq\frac{1}{\epsilon^{2}}$, the mapping $\Phi$ is a contraction. By the Banach fixed-point theorem, we get that $\Phi$ has a unique fixed point in $\mathcal{E}^{\beta}_{f}$. We thus have the existence and uniqueness of the solution to the RBSDE. \end{proof} \section{Applications to dynamic risk measures and optimal stopping} \subsection{On dynamic risk measures} In this subsection, we give an application of reflected BSDEs to dynamic risk measures. Indeed, given $\xi\in\mathds{S}^{2}$, we define the following functional on stopping times.
Set \begin{equation} v(S)=-ess\sup_{\tau\in\mathcal{T}_{S,T}} \mathcal{E}^{f}_{S,\tau}(\xi_{\tau}) \label{moneq38} \end{equation} where $S\in\mathcal{T}_{0,T}$, $v$ is the dynamic risk measure, $\xi_{T'}$ $(T'\in[0,T])$ is the gain of the position at time $T'$ and $-\mathcal{E}^{f}_{t,T'}(\xi_{T'})$ is the $f$-conditional expectation of $\xi_{T'}$ modelling the risk at time $t$, where $t\in[0,T]$. We can show that the minimal risk measure $v$ defined by $~\eqref{moneq38}$ coincides with $-Y$, where $Y$ is (the first component of) the solution to the reflected BSDE associated with driver $f$ and obstacle $\xi$. For this purpose, we can extend the results in Proposition A.5 and Theorem 4.2 in \citet{Ouknine2015} to our setting (see \citet{AaziziOuknine2016} for more details). \subsection{On optimal stopping} We note also that we can show the existence of an $\varepsilon$-optimal stopping time, as well as the existence of an optimal stopping time under suitable assumptions on the barrier $\xi$, i.e. without right continuity of $\xi$, by extending the results of the second part of \citet{Ouknine2015} to our setting. Let $(Y,Z,\psi,M,A,C)$ be the solution of the reflected BSDE with parameters $(f,\xi)$ as in Definition$~\ref{definition1}$; then we have \begin{equation} Y_{S}=ess\sup_{\tau\in\mathcal{T}_{S,T}} \mathcal{E}^{f}_{S,\tau}(\xi_{\tau}) \label{moneq39} \end{equation} For each $S\in\mathcal{T}_{0,T}$ and $\varepsilon >0$, the stopping time $\tau_{S}^{\varepsilon}=\inf \{t\geq S, Y_{t}\leq\xi_{t}+\varepsilon\}$ is $(C\varepsilon)$-optimal for $~\eqref{moneq39}$, where $C$ is a constant which depends only on $T$ and the Lipschitz constant $K$ of $f$: $$Y_{S}\leq\mathcal{E}^{f}_{S,\tau_{S}^{\varepsilon}}(\xi_{\tau_{S}^{\varepsilon}})+C\varepsilon, \,\,\,\ a.s.$$ Under our assumptions on $\xi$ and $f$, we can prove that for each $S\in\mathcal{T}_{0,T}$ and $\widehat{\tau}\in\mathcal{T}_{S,T}$, the stopping time $\widehat{\tau}$ is $S$-optimal, i.e. $$Y_{\widehat{\tau}}=\mathcal{E}^{f}_{S,\widehat{\tau}}(\xi_{\widehat{\tau}}), \,\,\,\ a.s.$$ (see Theorem 4.2 and Proposition 4.3 in \citet{Ouknine2015}). Finally, under an additional assumption of left-upper semicontinuity (l.u.s.c.) of $\xi$ in $\mathds{S}^{2}$, the first time when the value process $Y$ hits $\xi$ is optimal: if $\tau_{S}^{*}=\inf \{u\geq S, Y_{u}=\xi_{u}\}$, $\xi$ is r.u.s.c. and l.u.s.c. in $\mathds{S}^{2}$ and $(Y,Z,\psi,M,A,C)$ is the solution of the reflected BSDE of Definition$~\ref{definition1}$, then the stopping time $\tau_{S}^{*}$ is optimal, that is, $$Y_{S}=\mathcal{E}^{f}_{S,\tau_{S}^{*}}(\xi_{\tau_{S}^{*}}), \,\,\,\ a.s.$$ (see Proposition 4.2 in \citet{Ouknine2015}). \end{document}
\begin{document} \begin{abstract} We consider the inverse problem of determining some class of nonlinear terms appearing in an elliptic equation from boundary measurements. More precisely, we study the stability issue for this class of inverse problems. Under suitable assumptions, we prove a Lipschitz and a H\"older stability estimate associated with the determination of quasilinear and semilinear terms appearing in this class of elliptic equations from measurements restricted to arbitrary parts of the boundary of the domain. Besides their mathematical interest, our stability estimates can be useful for improving the numerical reconstruction of this class of nonlinear terms. Our approach combines the linearization technique with applications of a suitable class of singular solutions. \noindent {\bf Keywords:} Inverse problem, Nonlinear elliptic equations, Stability estimate, Singular solutions.\\ \noindent {\bf Mathematics Subject Classification 2020:} 35R30, 35J61, 35J62. \end{abstract} \author[Yavar Kian]{Yavar Kian} \address{Aix Marseille Univ, Universit\'e de Toulon, CNRS, CPT, Marseille, France.} \email{[email protected]} \maketitle \section{Introduction} \subsection{Statement} Let $\Omega$ be a bounded domain of $\mathbb{R}^n$, $n\geqslant 2$, with $\mathcal C^{2+\alpha}$, $\alpha\in(0,1)$, boundary. Let $\gamma\in \mathcal C^3({\mathbb R};(0,+\infty))$, $F\in \mathcal C^2(\overline{\Omega}\times{\mathbb R}\times{\mathbb R}^n;{\mathbb R})$ and let $a:=(a_{i,j})_{1 \leqslant i,j \leqslant n} \in \mathcal C^2(\overline{\Omega};{\mathbb R}^{n^2})$ be symmetric, that is $$ a_{i,j}(x)=a_{j,i}(x),\ x \in \Omega,\ i,j = 1,\ldots,n, $$ and let $a$ fulfill the ellipticity condition: there exists a constant $c>0$ such that \begin{equation} \label{ell} \sum_{i,j=1}^d a_{i,j}(x) \xi_i \xi_j \geqslant c |\xi|^2, \quad \mbox{for each $x \in \overline{\Omega},\ \xi=(\xi_1,\ldots,\xi_n) \in {\mathbb R}^n$}. \end{equation} We consider the following boundary value problem \begin{equation} \label{eq1} \left\{ \begin{array}{ll} -\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma(u(x)) \partial_{x_j} u(x) \right)+F(x,u(x),\nabla u(x))=0 & \mbox{in}\ \Omega , \\ u=g &\mbox{on}\ \partial\Omega, \end{array} \right. \end{equation} with $g\in \mathcal C^{2+\alpha}(\partial\Omega)$. We assume here that the nonlinear terms $\gamma$, $F$ satisfy one of the following conditions:\\ (i) $F$ takes the form \begin{equation} \label{non1} F(x,u(x),\nabla u(x))=D(x,u(x))\cdot\nabla u(x),\quad x\in\Omega,\end{equation} with $D\in \mathcal C^2(\overline{\Omega}\times{\mathbb R};{\mathbb R}^n)$.\\ (ii) $\gamma=1$ and \begin{equation} \label{non2} F(x,u(x),\nabla u(x))=G(u(x)),\quad x\in\Omega,\end{equation} with $G\in \mathcal C^2({\mathbb R})$ a non-decreasing function. Under suitable assumptions, which will be specified later, we prove that the problem \eqref{eq1} admits a unique and sufficiently smooth solution $u_g$.
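For orientation, let us point out a simple special case covered by condition (i) (a side remark, not needed in the sequel): when $a_{i,j}=\delta_{i,j}$ and $D\equiv0$, problem \eqref{eq1} reduces to $$\left\{ \begin{array}{ll} -\nabla\cdot\left(\gamma(u)\nabla u\right)=0 & \mbox{in}\ \Omega , \\ u=g &\mbox{on}\ \partial\Omega, \end{array} \right.$$ which corresponds to a conductivity $\gamma$ depending on the voltage $u$, in the spirit of the models recalled in the next subsection.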
Fixing an arbitrary open subset $S$ of $\partial\Omega$, we associate with problem \eqref{eq1} the partial Dirichlet-to-Neumann (DN in short) map $$\mathcal N_{\gamma,F}:g\mapsto \gamma(u_g)\partial_{\nu_a} u_g|_{S}$$ with $\nu(x)=(\nu_1(x),\ldots,\nu_n(x))$ the outward unit normal to $\partial\Omega$ computed at $x \in \partial\Omega$ and $$\partial_{\nu_a} u(x)=\sum_{i,j=1}^na_{i,j}(x)\partial_{x_j}u(x)\nu_i(x),\quad x\in\partial\Omega.$$ We study the inverse problem of determining the nonlinear term $\gamma$ when condition (i) is fulfilled, or $G$ when condition (ii) is fulfilled, from some knowledge of $\mathcal N_{\gamma,F}$. More precisely, we are looking for stability results for this inverse problem. \subsection{Motivations} Let us observe that the nonlinear equations under consideration in \eqref{eq1} can be associated with various physical phenomena which cannot be described by classical linear elliptic equations. For instance, we can mention physical models of viscous flows \cite{GR} or plasticity phenomena \cite{CS} where the transfer from voltage to current density cannot be described by the classical Ohm's law but by some more general nonlinear expression with a conductivity $\gamma$ depending on the voltage. The solutions of problem \eqref{eq1} can also be seen as stationary solutions of nonlinear Fokker-Planck or reaction-diffusion equations where the nonlinearity can arise from cooperative interactions between the subsystems of many-body systems (see e.g. \cite{F}), complex mixing phenomena (e.g. \cite{NW}) or models appearing in combustion theory (e.g. \cite{ZF}). In this context, the goal of our inverse problem is to determine the nonlinear law associated with the quasilinear term $\gamma$ or the semilinear term $F$ from measurements of the flux, localized at some arbitrary parts of the boundary $\partial\Omega$, associated with different excitations of the system (voltage, heat source, \ldots) applied at the boundary $\partial\Omega$. More precisely, we are looking for Lipschitz or H\"older stability estimates for this class of inverse problems, which can serve as useful tools for improving the precision of numerical reconstruction (see e.g. \cite{CY}). \subsection{Known results} The determination of nonlinear terms appearing in elliptic equations is an important class of inverse problems which has been intensively studied over the last decades. One of the first important results for these problems can be found in \cite{Is5,Is4} (see also \cite{ImYa}), where the authors considered the unique determination of a general semilinear term of the form $F(x,u)$, $x\in\Omega$, $u\in{\mathbb R}$, by applying the method of linearization initiated by \cite{Is1}. This approach has been extended to the unique determination of quasilinear terms by \cite{EPS1,EPS2,MU,Sh,Sun1,SuUh} and to the determination of semilinear terms of the form $F(u,\nabla u)$ by \cite{Is2,Is3}. More recently, this class of inverse problems has received considerable attention, with different contributions obtained by means of the multiple order linearization approach introduced by \cite{KLU}. In that category, without being exhaustive, we can mention the works \cite{FO20,KKU,KU0,LLLS}, which consider the unique determination of semilinear terms of the form $F(x,u)$, $x\in\Omega$, $u\in{\mathbb R}$, and the works \cite{CFKKU,KKU}, dealing with the unique determination of general quasilinear terms of the form $\gamma(x,u,\nabla u)$, $x\in\Omega$.
All the above mentioned results have been stated in terms of uniqueness. In contrast to the important development of this topic in terms of uniqueness results, only a few authors have considered the stability issue for these problems. For elliptic equations, we are only aware of the works \cite{CHY,LLLS}, where the authors proved a logarithmic stability estimate for the determination of semilinear terms of the form $F(u)$, $u\in{\mathbb R}$, with $F$ a sufficiently smooth function, or semilinear terms of the form $F(x,u)=q(x)u^m$, $x\in\Omega$, $u\in{\mathbb R}$, where $m$ is a known integer and the parameter $q$ is unknown. We mention also the recent work \cite{LLPT}, where the authors studied the stable determination of semilinear terms appearing in nonlinear hyperbolic equations. To the best of our knowledge, there is no result available in the mathematical literature showing Lipschitz or H\"older stability estimates for the determination of nonlinear terms appearing in nonlinear elliptic equations. \subsection{Main results} In order to state our main results, we will start by recalling some properties of the solution $u$ of \eqref{eq1} and the associated partial DN map $\mathcal N_{\gamma,F}$. Assuming that condition (i) is fulfilled, we prove in Proposition \ref{p1} that for every constant $\lambda\in{\mathbb R}$ there exists $\varepsilon_\lambda>0$, depending on $a$, $\gamma$, $F$, $\lambda$ and $\Omega$, such that for $g=\lambda+h$, with $h\in \{f\in\mathcal C^{2+\alpha}(\partial\Omega):\ \norm{f}_{\mathcal C^{2+\alpha}(\partial\Omega)}<\varepsilon_\lambda\}$, problem \eqref{eq1} admits a unique solution $u_g\in \mathcal C^{2+\alpha}(\overline{\Omega})$. In addition, we show in Lemma \ref{l1} that the partial DN map $\mathcal N_{\gamma,F}$ admits a Fr\'echet derivative at $\lambda$, which allows us to consider the map \begin{equation} \label{map}\mathcal C^{2+\alpha}(\partial\Omega)\ni h\mapsto\lim_{\varepsilon\to0}\frac{\mathcal N_{\gamma,F}(\lambda+\varepsilon h)-\mathcal N_{\gamma,F}(\lambda)}{\varepsilon}\end{equation} as a bounded linear map from $\mathcal C^{2+\alpha}(\partial\Omega)$ to $\{f|_S:\ f\in\mathcal C^{1+\alpha}(\partial\Omega)\}$. We prove also that this map can be extended uniquely by density to a continuous linear map from $ H^{\frac{1}{2}}(\partial\Omega)$ to $H^{-\frac{1}{2}}(S)$. Using this property and fixing $\mathcal H_S$, the subset of $\mathcal C^{2+\alpha}(\partial\Omega)$ defined by $$\mathcal H_S:=\{f\in \mathcal C^{2+\alpha}(\partial\Omega):\ \norm{f}_{H^{\frac{1}{2}}(\partial\Omega)}\leqslant 1,\ \textrm{supp}(f)\subset S\},$$ we show in Section 2 that the map \begin{equation} \label{mesure}{\mathbb R}\ni\lambda\mapsto\sup_{h\in\mathcal H_S}\limsup_{\varepsilon\to0}\frac{\norm{\mathcal N_{\gamma,F}(\lambda+\varepsilon h)-\mathcal N_{\gamma,F}(\lambda)}_{H^{-\frac{1}{2}}(S)}}{\varepsilon}\end{equation} lies in $\mathcal C({\mathbb R})$. We will use this last map as measurements for our first stability result, where we treat the determination of the quasilinear term $\gamma$ from the above knowledge of $\mathcal N_{\gamma,F}$. \begin{theorem}\label{t1} For $j=1,2$, let $\gamma_j\in \mathcal C^3({\mathbb R};(0,+\infty))$ and $D_j\in \mathcal C^2(\overline{\Omega}\times{\mathbb R};{\mathbb R}^n)$, and consider $F_j$ defined by \eqref{non1} with $D=D_j$.
We assume that \begin{equation} \label{t11a}\nabla_x\cdot D_j(x,t)\leqslant 0,\quad x\in\Omega,\ t\in{\mathbb R},\ j=1,2.\end{equation} Then, fixing $\mathcal N=\mathcal N_{\gamma_1,F_1}-\mathcal N_{\gamma_2,F_2}$, there exists a constant $C>0$, depending only on $a$, $\Omega$, $S$ such that the following estimate \begin{equation} \label{t1c}\norm{\gamma_1-\gamma_2}_{L^\infty(-R,R)}\leqslant C\sup_{\lambda\in[-R,R]}\sup_{h\in\mathcal H_S}\limsup_{{\varepsilon}ilon\to0}\frac{\norm{\mathcal N(\lambda+{\varepsilon}ilon h)-\mathcal N(\lambda)}_{H^{-\frac{1}{2}}(S)}}{{\varepsilon}ilon},\quad R>0,\end{equation} holds true.\end{theorem} For our second result we assume that condition (ii) is fulfilled. We consider also $S'$ an open subset of $\partial\Omega$, to be defined later (see Section 3.1), whose closure is contained into $S$ and we fix $\chi\in\mathcal C^{2+\alpha}(\partial\Omega)$ such that supp$(\chi)\subset S$ and $\chi=1$ on $S'$. Following \cite[Theorem 8.3, pp. 301]{LU} and \cite[Theorem 9.3, 9.4]{GT}, we can show that for $g=\lambda\chi+h$, with $h\in \mathcal C^{2+\alpha}(\partial\Omega)$ and $\lambda\in{\mathbb R}$ a constant, problem \eqref{eq1} admits a unique solution $u_g\in \mathcal C^{2+\alpha}(\overline{\Omega})$. In addition, in a similar way as above, applying Lemma \mathfrak Rf{l2}, we can prove that the map \begin{equation} \label{mesure1}{\mathbb R}\ni\lambda\mapsto\sup_{h\in\mathcal H_S}\limsup_{{\varepsilon}ilon\to0}\frac{\norm{\mathcal N_{\gamma,F}(\lambda\chi+{\varepsilon}ilon h)-\mathcal N_{\gamma,F}(\lambda\chi)}_{H^{-\frac{1}{2}}(S)}}{{\varepsilon}ilon}\end{equation} is lying in $\mathcal C({\mathbb R})$. We will use this last map for our second result where we prove the stable determination of the semilinear term $G$ from the knowledge of $\mathcal N_{\gamma,F}$ given by \eqref{mesure1}. \begin{theorem}\label{t2} We assume that $n\geqslant3$ and that \begin{equation} \label{t2b}a_{i,j}(x)=\delta_{ij},\quad i,j=1,\ldots,n,\ x\in \Omega,\end{equation} with $\delta$ the Kronecker delta symbol. For $j=1,2$, we fix $G_j\in \mathcal C^2({\mathbb R};{\mathbb R})$ a non-decreasing function and we consider $F_j$ defined by \eqref{non2} with $G=G_j$. We assume that there exists a non-decreasing function $\kappa\in \mathcal C({\mathbb R}_+)$ such that \begin{equation} \label{t2a}G_1(0)=G_2(0),\quad \sum_{i,j=1}^2|G_j^{(i)}(\lambda)|\leqslant \kappa(|\lambda|),\quad \lambda\in{\mathbb R}.\end{equation} Then, fixing $\mathcal N=\mathcal N_{1,F_1}-\mathcal N_{1,F_2}$ and $R>0$, we can find a constant $C_R>0$, depending on $\Omega$, $\chi$, $\kappa$, $R$, such that the following estimate \begin{equation} \label{t2c}\norm{G_1-G_2}_{L^\infty(-R,R)}\leqslant C_R\left(\sup_{\lambda\in[-R,R]}\sup_{h\in\mathcal H_S}\limsup_{{\varepsilon}ilon\to0}\frac{\norm{\mathcal N(\lambda\chi+{\varepsilon}ilon h)-\mathcal N(\lambda \chi)}_{H^{-\frac{1}{2}}(S)}}{{\varepsilon}ilon}\right)^{\frac{1}{3}}\end{equation} holds true.\end{theorem} Let us observe that the estimate \eqref{t1c} in Theorem \mathfrak Rf{t1} is a Lipschitz stability estimate in the determination of the quasilinear term $\gamma(u)$ appearing in \eqref{eq1} from some knowledge of the associated partial DN map $\mathcal N_{\gamma,F}$. The result of Theorem \mathfrak Rf{t1} can be seen as the determination of the nonlinear diffusion term $\gamma(u)$ when the nonlinear drift vector $D(x,u)$ is unknown for nonlinear stationary Fokker–Planck equations. 
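As a simple illustration of assumption \eqref{t11a} (given only for orientation), one may consider convection terms which are divergence free with respect to $x$: for instance, for a fixed vector $e\in{\mathbb R}^n$ and functions $b_j\in\mathcal C^2({\mathbb R})$, $j=1,2$, the choice $$D_j(x,t)=b_j(t)\,e,\quad x\in\Omega,\ t\in{\mathbb R},$$ satisfies $D_j\in\mathcal C^2(\overline{\Omega}\times{\mathbb R};{\mathbb R}^n)$ and $\nabla_x\cdot D_j=0$, so that \eqref{t11a} holds while the convection terms remain unknown and possibly different.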
Not only is the stability estimate \eqref{t1c} established independently of the choice of the convection term $D_j(x,u)$, $j=1,2$, but the constant of this stability estimate is completely independent of the nonlinear terms $\gamma_j$. This means that the result of Theorem \ref{t1} is not a conditional stability estimate requiring any \textit{a priori} estimate of the unknown parameter. In addition, the result of Theorem \ref{t1} is stated with variable second order coefficients and the measurements are restricted to any arbitrary open subset $S$ of the boundary $\partial\Omega$. To the best of our knowledge, we obtain in Theorem \ref{t1} the first result of Lipschitz stability in the determination of a nonlinear term stated in such a general context and with measurements restricted to an arbitrary open subset of the boundary. In contrast to Theorem \ref{t1}, the result of Theorem \ref{t2} is a H\"older stability estimate for the determination of a semilinear term $G$ when condition (ii) is fulfilled. This result is stated in a more restricted context than Theorem \ref{t1} and, in the final stability estimate \eqref{t2c}, the constant depends on an \textit{a priori} estimate of the nonlinear terms $G_j$, $j=1,2$, given by condition \eqref{t2a}. However, in contrast to Theorem \ref{t1}, in Theorem \ref{t2} we restrict both the support of the Dirichlet data $g$ appearing in \eqref{eq1} and the measurements to an arbitrary subset $S$. Let us mention that the proofs of Theorems \ref{t1} and \ref{t2} are based on a suitable application of the linearization technique combined with applications of singular solutions suitably designed for our problem. This approach allows us to obtain Lipschitz and H\"older stability estimates for this inverse problem while, as far as we know, all other results in that category were restricted to logarithmic stability estimates. We recall that this improvement of the stability estimate and the fact that in \eqref{t1c} the constant is independent of the size of the unknown parameter can be exploited for the numerical reconstruction of these parameters by means of Tikhonov's regularization approach (see e.g. \cite{CY}). In the near future, we plan to exploit the material of this article and some further results in order to address the numerical reconstruction of nonlinear terms. Let us remark that, since the goal of our inverse problems is to determine unbounded functions defined on ${\mathbb R}$, our stability estimates need to be localized. This is why the stability estimates \eqref{t1c} and \eqref{t2c} are stated in terms of estimates of the nonlinear terms on an interval of the form $(-R,R)$ for $R>0$ arbitrarily chosen. While in \eqref{t1c} the constant is independent of $R$, the constant of the stability estimate \eqref{t2c} depends on $R$. We mention also that our stability estimates are stated with some knowledge of the DN map that can be seen as the limit value of the ratio of the differences of the map $\mathcal N_{\gamma,F}$. This result can be formulated in a similar way in terms of the first order Fr\'echet derivative of $\mathcal N_{\gamma,F}$ considered in several articles treating the same issue (see e.g. \cite{CHY,CK}). In contrast to this last formulation, the stability estimates \eqref{t1c} and \eqref{t2c} are stated with measurements given by some explicit knowledge of $\mathcal N_{\gamma,F}$ applied to some class of Dirichlet data. This article is organized as follows.
In Section 2, we recall some properties of \eqref{eq1}, including the well-posedness of this boundary value problem and the linearization of our inverse problems. In Section 3, we introduce a class of singular solutions associated with our linearized problems and some of their properties. Finally, Section 4 will be devoted to the completion of the proofs of Theorems \ref{t1} and \ref{t2}. \section{Preliminary properties} In this section we consider several properties including the proof of the well-posedness of \eqref{eq1} in the context of Theorems \ref{t1} and \ref{t2}. We will also introduce the linearization of the problem \eqref{eq1} when condition (i) or condition (ii) is fulfilled. We start by proving the well-posedness of \eqref{eq1} when the Dirichlet data $g$ takes the form $\lambda+h$, with $\lambda\in{\mathbb R}$ a constant and $h\in\mathcal C^{2+\alpha}(\partial\Omega)$ sufficiently small. This result can be stated as follows. \begin{prop}\label{p1} Let condition (i) be fulfilled and let $g=\lambda+h$ with $\lambda\in{\mathbb R}$ a constant and $h\in\mathcal C^{2+\alpha}(\partial\Omega)$. Then, for all $\lambda\in{\mathbb R}$, there exists $\varepsilon_\lambda>0$ depending on $a$, $\gamma$, $D$, $\lambda$, $\Omega$, such that, for $h\in B_{\varepsilon_\lambda}:=\{f\in\mathcal C^{2+\alpha}(\partial\Omega):\ \norm{f}_{\mathcal C^{2+\alpha}(\partial\Omega)}<\varepsilon_\lambda\}$, problem \eqref{eq1} admits a unique solution $u_\lambda\in\mathcal C^{2+\alpha}(\overline{\Omega})$ satisfying \begin{equation} \label{TA1a}\norm{u_\lambda-\lambda}_{\mathcal C^{2+\alpha}(\overline{\Omega})}\leqslant C\norm{h}_{\mathcal C^{2+\alpha}(\partial\Omega)}.\end{equation} \end{prop} \begin{proof} We prove this result by extending the approach of \cite[Theorem B.1]{CFKKU}, where a similar boundary value problem has been studied with infinitely smooth parameters and without the convection term $D$, to problem \eqref{eq1}. Let us first observe that constant functions are solutions of \eqref{eq1} with $g=\lambda$. Thus, by splitting $u_\lambda$ into two terms $u_\lambda=\lambda+u_\lambda^0$, we deduce that $u_\lambda^0$ solves \begin{equation} \label{eq4} \left\{ \begin{array}{ll} -\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma(\lambda+u_\lambda^0) \partial_{x_j} u_\lambda^0(x) \right)+D(x,u_\lambda^0(x)+\lambda)\cdot\nabla u_\lambda^0=0 & \mbox{in}\ \Omega , \\ u_\lambda^0=h &\mbox{on}\ \partial\Omega, \end{array} \right.
\end{equation} Therefore, we only need to prove that there exists ${\varepsilon}ilon_\lambda>0$ depending on $a$, $\gamma$, $D$, $\lambda$, $\Omega$, such that, for $h\in B_{{\varepsilon}ilon_\lambda}$, problem \eqref{eq4} admits a unique solution $u_\lambda^0\in\mathcal C^{2+\alpha}(\overline{\Omega})$ satisfying \begin{equation} \label{TA1v}\norm{u_\lambda^0}_{\mathcal C^{2+\alpha}(\overline{\Omega})}\leqslant C\norm{h}_{\mathcal C^{2+\alpha}(\partial\Omega)}.\end{equation} For this purpose, we introduce the map $\mathcal G$ from $\mathcal C^{2+\alpha}(\partial\Omega)\times\mathcal C^{2+\alpha}(\overline{\Omega})$ to the space $C^{\alpha}(\overline{\Omega})\times\mathcal C^{2+\alpha}(\partial\Omega)$ defined by $$\mathcal G: (f,v)\mapsto\left(-\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma(\lambda+v) \partial_{x_j} v \right)+D(x,v+\lambda)\cdot\nabla v, v_{|\partial\Omega}-f\right).$$ Using the fact that $\gamma\in \mathcal C^{3}({\mathbb R})$ and applying \cite[Theorem A.8]{H}, we deduce that for any $v\in \mathcal C^{2+\alpha}(\overline{\Omega})$ we have $x\mapsto\gamma'(\lambda+v(x))\in \mathcal C^{1+\alpha}(\overline{\Omega})$. Then, applying \cite[Theorem A.7]{H}, we deduce that $$x\mapsto-\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma'(\lambda+v(x)) \partial_{x_j} v(x)\right)\in \mathcal C^{\alpha}(\overline{\Omega})$$ and the map $$\mathcal C^{2+\alpha}(\overline{\Omega})\ni v\mapsto -\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma(\lambda+v) \partial_{x_j} v\right)\in \mathcal C^{\alpha}(\overline{\Omega})$$ is $\mathcal C^1$. In the same way, using the fact that $D\in \mathcal C^2(\overline{\Omega}\times{\mathbb R};{\mathbb R}^n)$, we find $$x\mapsto[\partial_t D(x,t)|_{t=v(x)+\lambda}]\cdot\nabla v(x)\in \mathcal C^{\alpha}(\overline{\Omega})$$ and we deduce that the map $$\mathcal C^{2+\alpha}(\overline{\Omega})\ni v\mapsto D(x,v+\lambda)\cdot\nabla v\in \mathcal C^{\alpha}(\overline{\Omega})$$ is $\mathcal C^1$. It follows that the map $\mathcal G$ is $\mathcal C^1$ from $\mathcal C^{2+\alpha}(\partial\Omega)\times\mathcal C^{2+\alpha}(\overline{\Omega})$ to the space $C^{\alpha}(\overline{\Omega})\times\mathcal C^{2+\alpha}(\partial\Omega)$. Moreover, we have $\mathcal G(0,0)=(0,0)$ and $$\partial_v\mathcal G(0,0)w=\left(-\gamma(\lambda)\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w\right)+D(x,\lambda)\cdot\nabla w, w_{|\partial\Omega}\right).$$ In view of \cite[Theorem 6.8]{GT} and the fact that $\gamma(\lambda)>0$, for any $(F,f)\in \mathcal C^{\alpha}(\overline{\Omega})\times\mathcal C^{2+\alpha}(\partial\Omega)$, the following linear boundary value problem $$ \left\{ \begin{array}{ll} -\gamma(\lambda)\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w \right)+D(x,\lambda)\cdot\nabla w=F & \mbox{in}\ \Omega , \\ w=f &\mbox{on}\ \partial\Omega. \end{array} \right.$$ admits a unique solution $w\in \mathcal C^{2+\alpha}(\overline{\Omega})$ satisfying $$\norm{w}_{\mathcal C^{2+\alpha}(\overline{\Omega})}\leqslant C(\norm{F}_{\mathcal C^{\alpha}(\overline{\Omega})}+\norm{f}_{\mathcal C^{2+\alpha}(\partial\Omega)}),$$ with $C>0$ depending only on $a$, $\lambda$, $\Omega$ and $D$. 
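Let us also indicate, for the reader's convenience, why the Fr\'echet derivative $\partial_v\mathcal G(0,0)$ takes the simple form displayed above: differentiating $\mathcal G$ with respect to $v$ at $v=0$ in a direction $w\in\mathcal C^{2+\alpha}(\overline{\Omega})$, every term produced by the chain rule which contains $\gamma'(\lambda+v)$ or $\partial_t D(x,t)|_{t=v+\lambda}$ is multiplied by $\partial_{x_j}v$ or $\nabla v$, which vanish at $v=0$, so that $$\partial_v\Bigl[-\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma(\lambda+v) \partial_{x_j} v \right)+D(x,v+\lambda)\cdot\nabla v\Bigr]\Big|_{v=0}w=-\gamma(\lambda)\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w\right)+D(x,\lambda)\cdot\nabla w.$$ This, combined with the solvability of the linear problem recalled just above, explains the isomorphism property used below.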
Thus, $\partial_v\mathcal G(0,0)$ is an isomorphism from $\mathcal C^{2+\alpha}(\overline{\Omega})$ to $\mathcal C^{\alpha}(\overline{\Omega})\times\mathcal C^{2+\alpha}(\partial\Omega)$ and, applying the implicit function theorem, we deduce that there exists ${\varepsilon}ilon_\lambda>0$ depending on $a$, $\lambda$, $\Omega$ and $D$, and a $\mathcal C^1$ map $\psi$ from $ B_{{\varepsilon}ilon_\lambda}$ to $\mathcal C^{2+\alpha}(\overline{\Omega})$, such that, for all $f\in B_{{\varepsilon}ilon_\lambda}$, we have $\mathcal G(f,\psi(f))=(0,0)$. This proves that, for all $f\in B_{{\varepsilon}ilon_\lambda}$, $v=\psi(f)$ is a solution of \eqref{eq4}. Recalling that a solution of the problem \eqref{eq4} can also be seen as a solution of the linear problem with sufficiently smooth coefficients depending on $u^0_\lambda$, we can apply again \cite[Theorem 6.8]{GT} in order to deduce that $u^0_\lambda=\psi(f)$ is the unique solution of \eqref{eq4}. Combining this with the fact that $\psi$ is $\mathcal C^1$ from $B_{{\varepsilon}ilon_\lambda}$ to $\mathcal C^{2+\alpha}(\overline{\Omega})$ and $\psi(0)=0$, we obtain \eqref{TA1v}. This completes the proof of the proposition.\end{proof} In view of Proposition \mathfrak Rf{p1}, for all constant $\lambda\in{\mathbb R}$, we can define $\mathcal N_{\gamma,F}$ on the set $\lambda+B_{{\varepsilon}ilon_\lambda}=\{\lambda+h:\ h\in B_{{\varepsilon}ilon_\lambda}\}$. Now let us fix $\lambda\in{\mathbb R}$, $h\in\mathcal C^{2+\alpha}(\partial\Omega)$ and $s_h=\frac{{\varepsilon}ilon_\lambda}{2\norm{h}_{\mathcal C^{2+\alpha}(\partial\Omega)}+1}$. For $s\in[-s_h,s_h]$, we consider the following boundary value problem \begin{equation} \label{eq5} \left\{ \begin{array}{ll} -\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma(u_{\lambda,s}) \partial_{x_j} u_{\lambda,s} \right)+D(x,u_{\lambda,s})\cdot\nabla u_{\lambda,s}=0 & \mbox{in}\ \Omega , \\ u_{\lambda,s}=\lambda+sh &\mbox{on}\ \partial\Omega. \end{array} \right. \end{equation} We consider also the solution of the following linear problem \begin{equation} \label{eq6} \left\{ \begin{array}{ll} -\gamma(\lambda)\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w_\lambda \right)+D(x,\lambda)\cdot\nabla w_\lambda=0 & \mbox{in}\ \Omega , \\ w_\lambda=h &\mbox{on}\ \partial\Omega. \end{array} \right. \end{equation} We prove the following result about the linearization of the problem \eqref{eq5}. \begin{lem}\label{l1} For all $\lambda\in{\mathbb R}$ and $h\in\mathcal C^{2+\alpha}(\partial\Omega)$, the map $[-s_h,s_h]\ni s\longmapsto u_{s,\lambda}$ is lying in $\mathcal C^1([-s_h,s_h];\mathcal C^{2+\alpha}(\overline{\Omega}))$ and we have $\partial_su_{s,\lambda}|_{s=0}=w_\lambda$ with $w_\lambda$ the unique solution of \eqref{eq6}.\end{lem} \begin{proof} Let us observe that applying the result of Proposition \mathfrak Rf{p1}, the unique solution of \eqref{eq1} with $g=\lambda+sh$, $s\in[-s_h,s_h]$, takes the form $\lambda+\psi(sh)$ with $\psi$ a $\mathcal C^1$ map from $ B_{{\varepsilon}ilon_\lambda}$ to $\mathcal C^{2+\alpha}(\overline{\Omega})$. This clearly implies that $[-s_h,s_h]\ni s\longmapsto u_{s,\lambda}\in\mathcal C^1([-s_h,s_h];\mathcal C^{2+\alpha}(\overline{\Omega}))$. Moreover, using the fact that $\lambda$ is the unique solution of \eqref{eq1} with $g=\lambda$, we deduce that $ u_{s,\lambda}|_{s=0}=\lambda$. 
Therefore, we have $$\partial_s\left[-\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x)\gamma(u_{\lambda,s}) \partial_{x_j} u_{\lambda,s} \right)\right]|_{s=0}=-\gamma(\lambda)\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} \partial_su_{\lambda,s}|_{s=0} \right),$$ $$\partial_s[D(x,u_{\lambda,s})\cdot\nabla u_{\lambda,s}]|_{s=0}=D(x,\lambda)\cdot\nabla \partial_su_{\lambda,s}|_{s=0}$$ and it follows that $$\left\{ \begin{array}{ll} -\gamma(\lambda)\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} \partial_su_{\lambda,s}|_{s=0} \right)+D(x,\lambda)\cdot\nabla \partial_su_{\lambda,s}|_{s=0}=0 & \mbox{in}\ \Omega , \\ \partial_su_{\lambda,s}|_{s=0}=h &\mbox{on}\ \partial\Omega. \end{array} \right.$$ Then the uniqueness of the solution of this boundary value problem implies that $\partial_su_{\lambda,s}|_{s=0}=w_\lambda$.\end{proof} Let us also define the partial DN map $\Lambda_{\gamma(\lambda),D(\cdot,\lambda)}:h\mapsto \gamma(\lambda)\partial_{\nu_a} w_\lambda|_{S}$ associated with problem \eqref{eq6}. For $h\in\mathcal C^{2+\alpha}(\partial\Omega)$, $s\in[-s_h,s_h]$ we consider $u_{\lambda,s}$ the solution of \eqref{eq5} and $w_\lambda$ the solution of \eqref{eq6}. Applying Lemma \mathfrak Rf{l1} and assuming that condition (i) is fulfilled, we get $$\partial_s[\gamma(u_{\lambda,s})\partial_{\nu_a}u_{\lambda,s}]|_{s=0}=\partial_su_{\lambda,s}|_{s=0}\gamma'(u_{\lambda,s}|_{s=0})\partial_{\nu_a}u_{\lambda,s}|_{s=0}+\gamma(u_{\lambda,s}|_{s=0})\partial_{\nu_a}w_\lambda.$$ Recalling that $u_{\lambda,s}|_{s=0}=\lambda$, we get that $\partial_{\nu_a}u_{\lambda,s}|_{s=0}\equiv0$ and it follows that $$\partial_s \mathcal N_{\gamma,F}(\lambda+sh)|_{s=0}=\gamma(\lambda)\partial_{\nu_a}w_\lambda|_{S}$$ which implies that \begin{equation} \label{l1b} \partial_s \mathcal N_{\gamma,F}(\lambda+sh)|_{s=0}=\Lambda_{\gamma(\lambda),D(\cdot,\lambda)}h,\quad \lambda\in{\mathbb R},\ h\in \mathcal C^{2+\alpha}(\partial\Omega).\end{equation} Using this identity, we deduce that \eqref{map} can be extended uniquely by density to a continuous linear map from $ H^{\frac{1}{2}}(\partial\Omega)$ to $H^{-\frac{1}{2}}(S)$. In order to prove that \eqref{mesure} is continuous, we need the following result. \begin{lem}\label{l11} The map ${\mathbb R}\ni\lambda\mapsto\Lambda_{\gamma(\lambda),D(\cdot,\lambda)}$ is lying in $\mathcal C({\mathbb R};\mathcal B(H^{\frac{1}{2}}(\partial\Omega),H^{-\frac{1}{2}}(S)))$.\end{lem} \begin{proof} From now on and in all this article, we denote by $\mathcal A$ the differential operator defined by \begin{equation} \label{A}\mathcal Au(x)=-\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} u(x)\right),\quad u\in H^1(\Omega),\ x\in\Omega.\end{equation} For any $h\in H^{\frac{1}{2}}(\partial\Omega)$ and any $\lambda\in{\mathbb R}$, we denote by $w_{\lambda,h}$ the solution of \eqref{eq6}. 
Since $\gamma\in \mathcal C^3({\mathbb R};(0,+\infty))$, the proof of the lemma will be completed if we show that $$\lim_{\delta\to 0}\ \ \underset{\norm{h}_{H^{\frac{1}{2}}(\partial\Omega)}\leqslant 1}{\sup}\norm{\partial_{\nu_a}w_{\lambda+\delta,h}-\partial_{\nu_a}w_{\lambda,h}}_{H^{-\frac{1}{2}}(\partial\Omega)}=0,\quad \lambda\in{\mathbb R}.$$ Using the fact that there exists $C>0$ depending only on $a$ and $\Omega$ such that $$\norm{\partial_{\nu_a}w_{\lambda+\delta,h}-\partial_{\nu_a}w_{\lambda,h}}_{H^{-\frac{1}{2}}(\partial\Omega)}\leqslant C\left[\norm{w_{\lambda+\delta,h}-w_{\lambda,h}}_{H^1(\Omega)}+\norm{\mathcal Aw_{\lambda+\delta,h}-\mathcal Aw_{\lambda,h}}_{L^2(\Omega)}\right],$$ we are left with the task of proving that \begin{equation} \label{l11a}\lim_{\delta\to 0}\ \ \underset{\norm{h}_{H^{\frac{1}{2}}(\partial\Omega)}\leqslant 1}{\sup}[\norm{w_{\lambda+\delta,h}-w_{\lambda,h}}_{H^1(\Omega)}+\norm{\mathcal Aw_{\lambda+\delta,h}-\mathcal Aw_{\lambda,h}}_{L^2(\Omega)}]=0,\quad \lambda\in{\mathbb R}.\end{equation} For this purpose, we fix $\lambda\in{\mathbb R}$, $\delta\in(-1,1)$, $h\in H^{\frac{1}{2}}(\partial\Omega)$ and we consider $w=w_{\lambda,h}-w_{\lambda+\delta,h}$. It is clear that $w$ solves the boundary value problem \begin{equation} \label{eq66} \left\{ \begin{array}{ll} -\gamma(\lambda)\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w \right)+D(x,\lambda)\cdot\nabla w=G_{\delta,h} & \mbox{in}\ \Omega , \\ w=0 &\mbox{on}\ \partial\Omega, \end{array} \right. \end{equation} with $$G_{\delta,h}=[\gamma(\lambda+\delta)-\gamma(\lambda)]\mathcal A w_{\lambda+\delta,h}+[D(x,\lambda+\delta)-D(x,\lambda)]\cdot\nabla w_{\lambda+\delta,h}.$$ Using the fact that $\gamma\in \mathcal C^3({\mathbb R};(0,+\infty))$, $D\in \mathcal C^2(\overline{\Omega}\times{\mathbb R};{\mathbb R}^n)$ and applying \cite[Theorem 8.3]{GT}, we deduce that there exists a constant $C>0$ depending only on $\lambda$, $a$, $\Omega$, $\gamma$ and $D$ such that $$\norm{w_{s,h}}_{H^1(\Omega)}+\norm{\mathcal Aw_{s,h}}_{L^2(\Omega)}\leqslant C\norm{h}_{H^{\frac{1}{2}}(\partial\Omega)},\quad s\in[\lambda-1,\lambda+1].$$ Therefore, we find $$\norm{G_{\delta,h}}_{L^2(\Omega)}\leqslant C\left[|\gamma(\lambda+\delta)-\gamma(\lambda)|+\norm{D(\cdot,\lambda+\delta)-D(\cdot,\lambda)}_{L^\infty(\Omega)}\right]\norm{h}_{H^{\frac{1}{2}}(\partial\Omega)},$$ with $C>0$ independent of $\delta$ and $h$. Applying again \cite[Theorem 8.3]{GT}, we obtain $$\norm{w}_{H^1(\Omega)}+\norm{\mathcal Aw}_{L^2(\Omega)}\leqslant C\left[|\gamma(\lambda+\delta)-\gamma(\lambda)|+\norm{D(\cdot,\lambda+\delta)-D(\cdot,\lambda)}_{L^\infty(\Omega)}\right]\norm{h}_{H^{\frac{1}{2}}(\partial\Omega)}$$ and, using the fact that $\gamma\in \mathcal C^3({\mathbb R};(0,+\infty))$, $D\in \mathcal C^2(\overline{\Omega}\times{\mathbb R};{\mathbb R}^n)$, we get \eqref{l11a}. \end{proof} Combining \eqref{l1b} with Lemma \mathfrak Rf{l11}, we deduce that \eqref{mesure} is lying in $\mathcal C({\mathbb R})$. Now let us consider the linearization of problem \eqref{eq1} when condition (ii) is fulfilled. For this purpose, we use the notation of Theorem \mathfrak Rf{t2} and we assume that the function $G$ in \eqref{non2} satisfies $$|G(t)|+|G'(t)|+|G''(t)|\leqslant \kappa(|t|),\quad t\in{\mathbb R},$$ with $\kappa\in\mathcal C([0,+\infty))$ a non-decreasing function. Then, according to \cite[Theorem 8.3, pp. 
301]{LU} and \cite[Theorem 9.3, 9.4]{GT}, for $g=\lambda\chi+h$, with $\lambda\in[-R,R]$, $R>0$ and $h\in B_1$, the problem \eqref{eq1} admits a unique solution $u\in \mathcal C^{2+\alpha}(\overline{\Omega})$ satisfying $\norm{u}_{L^\infty(\Omega)}\leqslant M(R)$, with $M$ a non-decreasing function on ${\mathbb R}_+$ depending only on $a$, $\Omega$, $\chi$ and $\kappa$. We fix $h \in\mathcal C^{2+\alpha}(\partial\Omega)$, $\tau_h=\frac{1}{\norm{h}_{\mathcal C^{2+\alpha}(\partial\Omega)}+1}$, $s\in[-\tau_h,\tau_h]$, $\lambda\in{\mathbb R}$ and we consider the following boundary value problem \begin{equation} \label{eq7} \left\{ \begin{array}{ll} -\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} v_{\lambda,s} \right)+G( v_{\lambda,s})=0 & \mbox{in}\ \Omega , \\ v_{\lambda,s}=\lambda\chi+sh &\mbox{on}\ \partial\Omega. \end{array} \right. \end{equation} We consider also the solution of the following linear problem \begin{equation} \label{eq8} \left\{ \begin{array}{ll} -\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} p \right)+q_{\lambda,G}(x) p=0 & \mbox{in}\ \Omega , \\ p=h &\mbox{on}\ \partial\Omega, \end{array} \right. \end{equation} where $q_{\lambda,G}(x)=G'(v_{\lambda,s}|_{s=0}(x))$, $x\in\Omega$. Then, following the argumentation of \cite{Is4}, we can show the following. \begin{lem}\label{l2} For all $h\in \mathcal C^{2+\alpha}(\partial\Omega)$, the map $[-\tau_h,\tau_h]\ni s\longmapsto v_{s,\lambda}$ lies in $\mathcal C^1([-\tau_h,\tau_h];\mathcal C^{2+\alpha}(\overline{\Omega}))$ and we have $\partial_sv_{s,\lambda}|_{s=0}=p$ with $p$ the unique solution of \eqref{eq8}.\end{lem} Let us also define the partial DN map $\mathcal D_{q_{\lambda,G}}:h\mapsto \partial_{\nu_a} p_{|S}$ associated with problem \eqref{eq8}. Applying Lemma \ref{l2} and assuming that condition (ii) is fulfilled, we get \begin{equation} \label{l2b} \partial_s \mathcal N_{\gamma,F}(\lambda\chi+sh)|_{s=0}=\mathcal D_{q_{\lambda,G}}h,\quad h\in \mathcal C^{2+\alpha}(\partial\Omega).\end{equation} Let us also observe that, according to the above discussion, we have $q_{\lambda,G}\in L^\infty(\Omega)$ and, for all $R>0$, we have \begin{equation} \label{l2c}\norm{q_{\lambda,G}}_{L^\infty(\Omega)}\leqslant \sup_{s\in[-M(R),M(R)]}|G'(s)|\leqslant \sup_{s\in[0,M(R)]}|\kappa(s)|\leqslant \kappa(M(R)),\quad \lambda\in [-R,R].\end{equation} Combining this estimate with the above argumentation, one can check that the map \eqref{mesure1} lies in $\mathcal C({\mathbb R})$. \section{Special solutions for the linearized problem} In the spirit of the works \cite{A,EPS1,Is2}, the proofs of our main results are based on the construction of a suitable class of singular solutions of the linearized problems \eqref{eq6} and \eqref{eq8}. In contrast to these works, we need to consider a construction with variable second order coefficients and unknown lower order coefficients. Moreover, we will localize the trace of such solutions on $\partial\Omega$ to the arbitrary subset $S$. For this purpose, we will introduce a new construction of this class of solutions designed for our inverse problem. We consider separately the construction of this class of singular solutions for Theorem \ref{t1} and Theorem \ref{t2}.
\subsection{Special solutions for Theorem \mathfrak Rf{t1}} We denote by $\Omega_\star$ a smooth open bounded set of ${\mathbb R}^n$ such that $\overline{\Omega}\subset \Omega_\star$ and we extend $a=(a_{i,j})_{1 \leqslant i,j \leqslant n}$ into a function of $\mathcal C^2(\overline{\Omega_\star};{\mathbb R}^{n^2})$ still denoted by $a=(a_{i,j})_{1 \leqslant i,j \leqslant n}$ and satisfying $a_{i,j}=a_{j,i}$ and $$\sum_{i,j=1}^d a_{i,j}(x) \xi_i \xi_j \geqslant c |\xi|^2, \quad \mbox{for each $x \in \overline{\Omega_\star},\ \xi=(\xi_1,\ldots,\xi_n) \in {\mathbb R}^n$}.$$ From now on, for all $x\in{\mathbb R}^n$ and $r>0$, we set $B(x,r):=\{y\in{\mathbb R}^n:\ |y-x|<r\}$. We introduce the function $$\rho(x,y):=\left(a(y)(x-y)\cdot (x-y)\right)^{\frac{1}{2}},\quad y\in \Omega_\star,\ x\in {\mathbb R}^n$$ and the function $f_n$ on $(0,+\infty)$ by $$f_n(t):=\left\{\begin{aligned}-\frac{1}{2\pi}\ln(t)\quad &\textrm{for $n=2$},\\ \frac{t^{2-n}}{(n-2)d_n\sqrt{det(a(y))}}\quad&\textrm{for $n\geqslant3$},\end{aligned}\right.$$ with $d_n$ the area of the unit sphere. We define $$H(x,y):= f_n(\rho(x,y)),\quad y\in \Omega_\star,\ x\in{\mathbb R}^n\setminus\{y\}.$$ It is well known (see e.g. \cite[pp. 258]{Ka}) that $H$ satisfies the following estimates \begin{equation} \label{est1} \begin{aligned}&|H(x,y)|\leqslant c_1\abs{\ln\left(\frac{|x-y|}{n^2\norm{a}_{L^\infty(\Omega)}}\right)},\quad y\in \Omega_\star,\ x\in{\mathbb R}^n\setminus\{y\},\ n=2,\\ &c_0|x-y|^{2-n}\leqslant H(x,y)\leqslant c_1|x-y|^{2-n},\quad y\in \Omega_\star,\ x\in{\mathbb R}^n\setminus\{y\},\ n\geqslant 3,\end{aligned}\end{equation} where $c_j>0$, $j=0,1$, are constants that depend only on $\Omega$ and $a$. In addition, for any $k=1,\ldots,n$, one can check that \begin{equation} \label{est3} c_2|x-y|^{1-n}\leqslant |\nabla_x H(x,y)|\leqslant c_3|x-y|^{1-n},\quad y\in \Omega_\star,\ x\in{\mathbb R}^n\setminus\{y\},\end{equation} \begin{equation} \label{esti1} |\nabla_x\partial_{x_k} H(x,y)|\leqslant c_4|x-y|^{-n},\quad y\in \Omega_\star,\ x\in{\mathbb R}^n\setminus\{y\},\end{equation} where $c_j>0$, $j=2,\ldots,4,$ are constants that depend only on $\Omega$ and $a$. We fix $U=\{(x,x):\ x\in \Omega_\star\}$. It is well known (see e.g. \cite[Theorem 3]{Ka}) that there exists a function $P\in \mathcal C(\Omega_\star\times \Omega_\star\setminus U)$ taking the form $P=H+\mathcal R$ such that for all $y\in\Omega_\star$ we have $P(\cdot,y)\in \mathcal C^{2}(\Omega_\star\setminus\{y\})$ and \begin{equation} \label{fond1}\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} P(x,y) \right)=0,\quad x\in \Omega_\star\setminus\{y\},\end{equation} \begin{equation} \label{fond2}| \mathcal R(x,y)|\leqslant C|x-y|^{\frac{5}{2}-n},\quad |\nabla_x\mathcal R(x,y)|\leqslant C|x-y|^{\frac{3}{2}-n},\quad y\in \Omega_\star,\ x\in{\mathbb R}^n\setminus\{y\},\end{equation} with $C>0$ depending only on $\Omega$ and $a$. We fix $x_0\in S$ and using boundary normal coordinates (see e.g. \cite[Theorem 2.12]{KKL}) we fix $\delta'>0$ sufficiently small such that for all $\tau\in(0,\delta')$ there exists a unique $y_\tau\in \Omega_\star\setminus \overline{\Omega}$ such that dist$(y_\tau,\partial\Omega)=|y_\tau-x_0|=\tau$. We fix $S'$ an open subset of $S$ containing $x_0$ and we consider $\Omega'$ an open bounded set of ${\mathbb R}^n$ with $\mathcal C^2$ boundary such that $\overline{\Omega'}\subset \Omega_\star$, $\Omega\subset \Omega'$ and $(\partial\Omega\setminus S')\subset \partial\Omega'$, $x_0\in\Omega'$ (see Figure 1). 
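As a quick sanity check of \eqref{est1} and \eqref{est3} (a side remark in the model Euclidean case, not needed for the general construction), when $a_{i,j}=\delta_{i,j}$ and $n\geqslant3$ we have $\rho(x,y)=|x-y|$ and $\det(a(y))=1$, so that $$H(x,y)=\frac{|x-y|^{2-n}}{(n-2)d_n},\qquad \nabla_xH(x,y)=-\frac{x-y}{d_n|x-y|^{n}},\qquad |\nabla_xH(x,y)|=\frac{|x-y|^{1-n}}{d_n},$$ in agreement with \eqref{est1} and \eqref{est3}, with constants depending only on $n$.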
Fixing $\delta=\min($dist$(x_0,\partial\Omega')/3,1,\delta')>0$, for all $\tau\in(0,\delta)$ we have dist$(y_\tau,\partial\Omega')\geqslant\delta$. \begin{figure} \caption{The sets $\Omega$, $\Omega_\star$, $\Omega'$ and the points $x_0$, $y_\tau$. } \label{fig1} \end{figure} Then, we consider $w_\tau^0(x)$ the solution of the following boundary value problem $$\left\{ \begin{array}{ll} -\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w_\tau^0(x) \right)=0 & \mbox{in}\ \Omega' , \\ w_\tau^0=P(\cdot,y_\tau) &\mbox{on}\ \partial\Omega', \end{array} \right.$$ and for all $\tau\in (0,\delta)$ we deduce that $w_\tau^0\in H^2(\Omega)$ and, applying estimates \eqref{est1}-\eqref{est3} and \eqref{fond2}, we get \begin{equation} \label{est4}\norm{w_\tau^0}_{H^1(\Omega')}\leqslant C\norm{P(\cdot,y_\tau)}_{H^{\frac{1}{2}}(\partial\Omega')}\leqslant C,\end{equation} with $C$ depending only on $\Omega'$ and $a$. We set $g_\tau^1\in H^{\frac{3}{2}}(\partial\Omega)$ defined by \begin{equation} \label{g1}g_\tau^1(x)=P(x,y_\tau)-w_\tau^0(x),\quad x\in\partial\Omega,\ \tau\in(0,\delta).\end{equation} Note that here we have $$g_\tau^1(x)=P(x,y_\tau)-w_\tau^0(x)=P(x,y_\tau)-P(x,y_\tau)=0,\quad x\in\partial\Omega\setminus S',$$ which implies that supp$(g_\tau)\subset S$. We fix $B\in W^{1,\infty}(\Omega;{\mathbb R}^n)$ such that $\nabla\cdot B\leqslant0$ and, for $s\in(0,+\infty)$, $\tau\in(0,\delta)$, we consider $w_{s,\tau}\in H^2(\Omega)$ the solution of the following boundary value problem \begin{equation} \label{eqq1}\left\{ \begin{array}{ll} -s\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w_{s,\tau}(x) \right)+B(x)\cdot\nabla w_{s,\tau} =0 & \mbox{in}\ \Omega , \\ w_{s,\tau}=g_\tau^1 &\mbox{on}\ \partial\Omega. \end{array} \right.\end{equation} For $s\in(0,+\infty)$, $\tau\in(0,\delta)$, we consider also $w_{s,\tau}^*\in H^2(\Omega)$ the solution of the adjoint problem \begin{equation} \label{eqq2}\left\{ \begin{array}{ll} -s\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} w_{s,\tau}^*(x) \right)-\nabla\cdot(B(x) w_{s,\tau}^*) =0 & \mbox{in}\ \Omega , \\ w_{s,\tau}^*=g_\tau^1 &\mbox{on}\ \partial\Omega. \end{array} \right.\end{equation} Note that the well-posedness of \eqref{eqq1}-\eqref{eqq2} can be deduced from \cite[Theorem 8.3]{GT}. We can prove the following result. \begin{prop}\label{p2} For all $s\in(0,+\infty)$ and all $\tau\in (0,\delta)$, we have \begin{equation} \label{p2a}w_{s,\tau}(x)=H(x,y_\tau) +z_{s,\tau}(x),\quad w_{s,\tau}^*(x)=H(x,y_\tau) +z_{s,\tau}^*(x),\end{equation} where $z_{s,\tau},z_{s,\tau}^*\in H^2(\Omega)$ satisfy the estimate \begin{equation} \label{p2b}\norm{z_{s,\tau}}_{H^1(\Omega)}+\norm{z_{s,\tau}^*}_{H^1(\Omega)}\leqslant C\max(1,\tau^{\frac{3}{2}-\frac{n}{2}}),\quad s\in(0,+\infty),\ \tau\in (0,\delta)\end{equation} with $C>0$ depending only on $a$, $s$, $\Omega$, $B$.\end{prop} \begin{proof} We will only show the result for $w_{s,\tau}$ the result for $w_{s,\tau}^*$ being treated in a similar fashion. Let us first observe that we can divide $z_{s,\tau}$ into two terms $z_{s,\tau}(x)=\mathcal R(x,y_\tau)+h_{s,\tau}(x)$ where $h_{s,\tau}$ solves the problem \begin{equation} \label{eqq3}\left\{ \begin{array}{ll} -s\sum_{i,j=1}^d \partial_{x_i} \left( a_{i,j}(x) \partial_{x_j} h_{s,\tau}(x) \right)+B(x)\cdot\nabla h_{s,\tau}(x) =-B(x)\cdot\nabla (P(x,y_\tau)+w_\tau^0(x)) & \ x\in\Omega , \\ h_{s,\tau}=0 &\mbox{on}\ \partial\Omega. 
\end{array} \right.\end{equation} According to \cite[Theorem 8.1]{GT}, there exists $C>0$ depending only on $a$, $s$, $\Omega$, $B$ such that \begin{equation} \label{p2c}\norm{h_{s,\tau}}_{H^1(\Omega)}\leqslant C\norm{B\cdot\nabla (P(\cdot,y_\tau)-w_\tau^0)}_{H^{-1}(\Omega)}\leqslant C\norm{B}_{W^{1,\infty}(\Omega)}\norm{P(\cdot,y_\tau)-w_\tau^0}_{L^2(\Omega)}.\end{equation} Consider $R>0$ such that $\Omega\subset B(y_\tau,R)$. Using the fact that $B(y_\tau,\tau)\cap\Omega=\emptyset$, we deduce that $\Omega\subset B(y_\tau,R)\setminus B(y_\tau,\tau)$. Therefore, applying \eqref{est1}, \eqref{fond2} and \eqref{est4}, we obtain $$\norm{P(\cdot,y_\tau)-w_\tau^0}_{L^2(\Omega)}^2\leqslant C\left(1+\int_{B(y_\tau,R)\setminus B(y_\tau,\tau)}|x-y_\tau|^{4-2n}dx\right)\leqslant C(1+\tau^{4-n}),\quad \tau\in (0,\delta),$$ with $C>0$ depending only on $a$, $s$, $\Omega$, $B$. Combining this last estimate with \eqref{p2c}, we obtain $$\norm{h_{s,\tau}}_{H^1(\Omega)}\leqslant C\max(1,\tau^{2-\frac{n}{2}}),\quad \tau\in (0,\delta).$$ In the same way, estimate \eqref{fond2} implies that $$\norm{\mathcal R(\cdot,y_\tau)}_{H^1(\Omega)}\leqslant C\max(1,\tau^{\frac{3}{2}-\frac{n}{2}}),\quad \tau\in (0,\delta),$$ which, combined with \eqref{est4}, clearly implies \eqref{p2b}.\end{proof} Let us also observe that, following the argumentation of Proposition \ref{p2}, we can prove that the estimates \eqref{est1}-\eqref{esti1} and \eqref{fond2} imply \begin{equation} \label{g3}\norm{g_\tau^1}_{H^{\frac{1}{2}}(\partial\Omega)}\leqslant C\max(\tau^{1-\frac{n}{2}},|\ln(\tau)|^{\frac{1}{2}}),\quad \tau\in(0,\delta).\end{equation} \subsection{Special solutions for Theorem \ref{t2}} In this subsection we assume that $n\geqslant3$ and that condition \eqref{t2b} is fulfilled. Under this assumption, we fix $$P(x,y)=H(x,y):= \frac{|x-y|^{2-n}}{(n-2)d_n},\quad y\in {\mathbb R}^n,\ x\in{\mathbb R}^n\setminus\{y\},$$ and we recall that conditions \eqref{est1}-\eqref{esti1} are fulfilled. Note also that in this context, for any $k=1,\ldots,n$, we have $$\Delta_x\partial_{x_k}H(x,y)=\partial_{x_k}\Delta_xH(x,y)=0,\quad y\in{\mathbb R}^n,\ x\in{\mathbb R}^n\setminus\{y\}.$$ We define the sets $\Omega_\star$, $\Omega'$, $S'$ and the points $x_0$ and $y_\tau$, $\tau\in(0,\delta)$, as in the preceding subsection. Using the fact that $\partial_{x_k}H(\cdot,y_\tau)\in \mathcal C^\infty(\overline{\Omega'}\setminus \{y_\tau\})$, $k=1,\ldots,n$, we can define $v_{k,\tau}\in H^1(\Omega')$ the solution of the following boundary value problem $$\left\{ \begin{array}{ll} -\Delta v_{k,\tau} =0 & \mbox{in}\ \Omega' , \\ v_{k,\tau}=\partial_{x_k}H(\cdot,y_\tau) &\mbox{on}\ \partial\Omega' \end{array} \right.$$ and we deduce again that \begin{equation} \label{est6}\norm{v_{k,\tau}}_{H^1(\Omega')}\leqslant C\norm{\partial_{x_k}H(\cdot,y_\tau)}_{H^{\frac{1}{2}}(\partial\Omega')}\leqslant C,\end{equation} with $C$ depending only on $\Omega'$. We set $g_{k,\tau}^2\in H^{\frac{1}{2}}(\partial\Omega)$ defined by \begin{equation} \label{g2}g_{k,\tau}^2(x)=\partial_{x_k}H(x,y_\tau)-v_{k,\tau}(x),\quad x\in\partial\Omega,\ \tau\in(0,\delta)\end{equation} and we notice again that supp$(g_{k,\tau}^2)\subset S$. 
For $\tau\in(0,\delta)$, we consider $w_{k,\tau}\in H^1(\Omega)$ the solution of the following boundary value problem \begin{equation} \label{eqq4}\left\{ \begin{array}{ll} -\Delta w_{k,\tau}(x) +qw_{k,\tau}(x) =0 & \mbox{in}\ \Omega , \\ w_{k,\tau}=g_{k,\tau}^2 &\mbox{on}\ \partial\Omega, \end{array}\right.\end{equation} where $q$ is a non-negative function satisfying $$\norm{q}_{L^\infty(\Omega)}\leqslant M$$ for some $M>0$. This solution satisfies the following properties. \begin{prop}\label{p3} Let condition \eqref{t2b} be fulfilled. Then for all $\tau\in (0,\delta)$, the solution $w_{k,\tau}$ of \eqref{eqq4} takes the form \begin{equation} \label{p3a}w_{k,\tau}(x)=\partial_{x_k}H(x,y_\tau) +J_{k,\tau}(x),\quad x\in\Omega,\end{equation} where $J_{k,\tau}\in H^1(\Omega)$ satisfies the estimate \begin{equation} \label{p3b}\norm{J_{k,\tau}}_{H^1(\Omega)}\leqslant C\max(1,\tau^{2-\frac{n}{2}}),\quad \tau\in (0,\delta),\end{equation} with $C>0$ depending only on $\Omega$ and $M$.\end{prop} \begin{proof}Throughout this proof we denote by $C>0$ a positive constant depending on $\Omega$ and $M$ that may change from line to line. Note first that $h_{k,\tau}=J_{k,\tau}+v_{k,\tau}$ solves the problem \begin{equation} \label{eqq5}\left\{ \begin{array}{ll} -\Delta h_{k,\tau}+q(x)h_{k,\tau} =-q(\partial_{x_k}H(\cdot,y_\tau)-v_{k,\tau}) & \mbox{in}\ \Omega , \\ h_{k,\tau}=0 &\mbox{on}\ \partial\Omega. \end{array} \right.\end{equation} Applying the Sobolev embedding theorem, we deduce that $H^1_0(\Omega)$ is embedded into $L^{\frac{2n}{n-2}}(\Omega)$. Therefore, by duality we deduce that the space $L^{\frac{2n}{n+2}}(\Omega)$ is embedded into $H^{-1}(\Omega)$. Thus, the solution of \eqref{eqq5} satisfies the estimate $$\begin{aligned}\norm{h_{k,\tau}}_{H^1(\Omega)}\leqslant C\norm{q(\partial_{x_k}H(\cdot,y_\tau)-v_{k,\tau}) }_{H^{-1}(\Omega)}&\leqslant C\norm{q(\partial_{x_k}H(\cdot,y_\tau)-v_{k,\tau}) }_{L^{\frac{2n}{n+2}}(\Omega)}\\ &\leqslant C\norm{q}_{L^\infty(\Omega)}\norm{\partial_{x_k}H(\cdot,y_\tau)-v_{k,\tau}}_{L^{\frac{2n}{n+2}}(\Omega)}.\end{aligned}$$ On the other hand, repeating the arguments used in Proposition \ref{p1} and applying estimate \eqref{est3}, we can prove that $$\begin{aligned}\norm{\partial_{x_k}H(\cdot,y_\tau)-v_{k,\tau}}_{L^{\frac{2n}{n+2}}(\Omega)}&\leqslant C\left(\int_\Omega|x-y_\tau|^{{\frac{2n(1-n)}{n+2}}}dx+ 1\right)^{\frac{n+2}{2n}}\\ &\leqslant C\left(\int_{B(y_\tau,R)\setminus B(y_\tau,\tau)}|x-y_\tau|^{{\frac{2n(1-n)}{n+2}}}dx+1\right)^{\frac{n+2}{2n}}\\ &\leqslant C\left(1+\tau^{\frac{2n(1-n)}{n+2}+n}\right)^{\frac{n+2}{2n}}\leqslant C(1+\tau^{2-\frac{n}{2}}),\quad \tau\in (0,\delta),\end{aligned}$$ which implies that $$\norm{h_{k,\tau}}_{H^1(\Omega)}\leqslant C\max(1,\tau^{2-\frac{n}{2}}),\quad \tau\in (0,\delta).$$ Combining this with \eqref{est6} we deduce \eqref{p3b}.\end{proof} Following the argumentation of Proposition \ref{p3}, we can prove that the estimates \eqref{est1}-\eqref{esti1} imply \begin{equation} \label{g4}\norm{g_{k,\tau}^2}_{H^{\frac{1}{2}}(\partial\Omega)}\leqslant C\tau^{-\frac{n}{2}},\quad \tau\in(0,\delta),\end{equation} with $C>0$ depending only on $\Omega$. \section{Proof of the main results} \subsection{Proof of Theorem \ref{t1}} We use here the notation of Section 3. 
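Throughout this section we shall repeatedly use the following elementary radial bound (a sketch of the computation, with $R>0$ such that $\Omega\subset B(y_\tau,R)$ and $d_n$ the area of the unit sphere, as above): for any exponent $\beta>n$,
$$\int_{B(y_\tau,R)\setminus B(y_\tau,\tau)}|x-y_\tau|^{-\beta}\,dx=d_n\int_\tau^R r^{n-1-\beta}\,dr=\frac{d_n}{\beta-n}\left(\tau^{n-\beta}-R^{n-\beta}\right)\leqslant \frac{d_n}{\beta-n}\,\tau^{n-\beta},$$
while for $\beta<n$ the same integral is bounded by $\frac{d_n}{n-\beta}R^{n-\beta}$ and for $\beta=n$ by $d_n\ln(R/\tau)$. For instance, the choice $\beta=2n-4$ gives the bound $C\max(1,\tau^{4-n})$ used in the proof of Proposition \ref{p2}, and $\beta=2n-2$ gives $C\max(1,|\ln(\tau)|,\tau^{2-n})$, which is used repeatedly below.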
Let us first observe that in view of \eqref{l1b}, fixing $\mathcal N=\mathcal N_{\gamma_1,F_1}-\mathcal N_{\gamma_2,F_2}$, we have $$\partial_s \mathcal N(\lambda+sh)|_{s=0}=\Lambda_{\gamma_1(\lambda),D_1(\cdot,\lambda)}h-\Lambda_{\gamma_2(\lambda),D_2(\cdot,\lambda)}h,\quad h\in \mathcal H_S,$$ where we recall that $\Lambda_{\gamma(\lambda),D(\cdot,\lambda)}$ denotes the partial DN map associated with problem \eqref{eq6}. Therefore, fixing $H^{\frac{1}{2}}_0(S):=\{f\in H^{\frac{1}{2}}(\partial\Omega):\ \textrm{supp}(f)\subset S\}$ and $\mathcal C^{2+\alpha}_0(S):=\{f\in \mathcal C^{2+\alpha}(\partial\Omega):\ \textrm{supp}(f)\subset S\}$, denoting by $\Lambda^j_{S,\lambda}$, $j=1,2$, the restriction of $\Lambda_{\gamma_j(\lambda),D_j(\cdot,\lambda)}$ to $H^{\frac{1}{2}}_0(S)$ and applying the density of $\mathcal C^{2+\alpha}_0(S)$ in $H^{\frac{1}{2}}_0(S)$, we obtain $$\begin{aligned}\norm{\Lambda^1_{S,\lambda}-\Lambda^2_{S,\lambda}}_{\mathcal B(H^{\frac{1}{2}}_0(S);H^{-\frac{1}{2}}(S))}&= \sup_{h\in\mathcal H_S}\norm{\Lambda^1_{S,\lambda}h-\Lambda^2_{S,\lambda}h}_{H^{-\frac{1}{2}}(S)}\\ &=\sup_{h\in\mathcal H_S}\limsup_{\varepsilon\to0}\norm{ \frac{\mathcal N(\lambda+\varepsilon h)-\mathcal N(\lambda)}{\varepsilon}}_{H^{-\frac{1}{2}}(S)}.\end{aligned}$$ It follows that, for all $R>0$, we have \begin{equation} \label{t1d}\sup_{\lambda\in[-R,R]}\norm{\Lambda^1_{S,\lambda}-\Lambda^2_{S,\lambda}}_{\mathcal B(H^{\frac{1}{2}}_0(S);H^{-\frac{1}{2}}(S))}=\sup_{\lambda\in[-R,R]}\sup_{h\in\mathcal H_S}\limsup_{\varepsilon\to0}\frac{\norm{\mathcal N(\lambda+\varepsilon h)-\mathcal N(\lambda)}_{H^{-\frac{1}{2}}(S)}}{\varepsilon}\end{equation} and the proof of Theorem \ref{t1} will be completed if we show that there exists a constant $C>0$, depending only on $a$ and $\Omega$, such that the following estimate \begin{equation} \label{t1e}\norm{\gamma_1-\gamma_2}_{L^\infty(-R,R)}\leqslant C\sup_{\lambda\in[-R,R]}\norm{\Lambda^1_{S,\lambda}-\Lambda^2_{S,\lambda}}_{\mathcal B(H^{\frac{1}{2}}_0(S);H^{-\frac{1}{2}}(S))},\quad R>0,\end{equation} holds true. Fixing $R>0$, we define $\lambda_R\in [-R,R]$ such that for $\gamma=\gamma_1-\gamma_2\in\mathcal C^3({\mathbb R})$ we have $$\norm{\gamma_1-\gamma_2}_{L^\infty(-R,R)}=|\gamma(\lambda_R)|$$ and without loss of generality we assume that $\gamma(\lambda_R)\geqslant0$. In light of condition \eqref{t11a}, for $\tau\in(0,\delta)$ and $j=1,2$, we can consider $w_{j,\tau}$ (resp. $w_{j,\tau}^*$) the solution of \eqref{eqq1} (resp. \eqref{eqq2}) with $s=\gamma_j(\lambda_R)$, $B=D_j(\cdot,\lambda_R)$. We fix $D=D_1(\cdot,\lambda_R)-D_2(\cdot,\lambda_R)$ and we recall that $\mathcal A$ is defined by \eqref{A}. Fixing $w_\tau=w_{2,\tau}-w_{1,\tau}$, we deduce that $w_\tau$ solves the problem \begin{equation} \label{t1x}\left\{ \begin{array}{ll} \gamma_2(\lambda_R)\mathcal A w_\tau+D_2(x,\lambda_R)\cdot\nabla w_\tau =K_\tau & \mbox{in}\ \Omega , \\ w_\tau=0 &\mbox{on}\ \partial\Omega, \end{array} \right.\end{equation} with $K_\tau=\gamma(\lambda_R)\mathcal Aw_{1,\tau}+D\cdot\nabla w_{1,\tau}$. 
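For completeness, here is the short computation behind \eqref{t1x} (a sketch): subtracting the two problems \eqref{eqq1} satisfied by $w_{1,\tau}$ and $w_{2,\tau}$, and using that $\gamma_1(\lambda_R)\mathcal A w_{1,\tau}+D_1(\cdot,\lambda_R)\cdot\nabla w_{1,\tau}=0$, we obtain
$$\gamma_2(\lambda_R)\mathcal A w_\tau+D_2(x,\lambda_R)\cdot\nabla w_\tau=-\gamma_2(\lambda_R)\mathcal A w_{1,\tau}-D_2(\cdot,\lambda_R)\cdot\nabla w_{1,\tau}=\gamma(\lambda_R)\mathcal A w_{1,\tau}+D\cdot\nabla w_{1,\tau}=K_\tau,$$
while $w_\tau=g_\tau^1-g_\tau^1=0$ on $\partial\Omega$.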
Multiplying \eqref{t1x} by $w_{2,\tau}^*$ and integrating by parts, we obtain $$-\gamma_2(\lambda_R)\int_{\partial\Omega}\partial_{\nu_a}w_\tau w_{2,\tau}^*d\sigma(x)=\int_\Omega K_\tau w_{2,\tau}^*dx.$$ Integrating again by parts, we find $$\begin{aligned}\int_\Omega K_\tau w_{2,\tau}^*dx=&-\gamma(\lambda_R)\int_{\partial\Omega}\partial_{\nu_a}w_{1,\tau} w_{2,\tau}^*d\sigma(x)+\gamma(\lambda_R)\int_\Omega\sum_{i,j=1}^n a_{i,j}(x) \partial_{x_j} w_{1,\tau}(x)\partial_{x_i}w_{2,\tau}^*(x)dx\\ &+\int_\Omega D\cdot\nabla w_{1,\tau}w_{2,\tau}^*dx.\end{aligned}$$ Combining this with the fact that $$-\gamma_2(\lambda_R)\partial_{\nu_a}w_\tau(x)+\gamma(\lambda_R)\partial_{\nu_a}w_{1,\tau}(x)=\gamma_1(\lambda_R)\partial_{\nu_a}w_{1,\tau}(x)-\gamma_2(\lambda_R)\partial_{\nu_a}w_{2,\tau}(x),\quad x\in\partial\Omega$$ and using the fact that $g_\tau^1\in H^{\frac{1}{2}}_0(S)$, we obtain the identity \begin{equation} \label{t1f} \begin{aligned}&\gamma(\lambda_R)\int_\Omega\sum_{i,j=1}^n a_{i,j}(x) \partial_{x_j} w_{1,\tau}(x)\partial_{x_i}w_{2,\tau}^*(x)dx+\int_\Omega (D_1(\cdot,\lambda_R)-D_2(\cdot,\lambda_R))\cdot\nabla w_{1,\tau}(x)w_{2,\tau}^*(x)dx\\ &=\left\langle (\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R})g_\tau^1,g_\tau^1\right\rangle_{H^{-\frac{1}{2}}(S),H^{\frac{1}{2}}_0(S)}.\end{aligned}\end{equation} In view of Proposition \ref{p2}, we can decompose $w_{1,\tau}$ and $w_{2,\tau}^*$ into two terms \begin{equation} \label{t1g}w_{1,\tau}(x)=H(x,y_\tau) +z_{1,\tau}(x),\quad w_{2,\tau}^*(x)=H(x,y_\tau) +z_{2,\tau}^*(x),\quad x\in\Omega,\end{equation} with \begin{equation} \label{t1h}\norm{z_{1,\tau}}_{H^1(\Omega)}+\norm{z_{2,\tau}^*}_{H^1(\Omega)}\leqslant C_1\max(1,\tau^{\frac{3-n}{2}}),\quad \tau\in(0,\delta),\end{equation} with $C_1>0$ independent of $\tau$. In the same way, applying \eqref{est1}-\eqref{est3} and repeating the argumentation of Proposition \ref{p2}, we get \begin{equation} \label{t1i} \tau\norm{H(\cdot,y_\tau)}_{H^1(\Omega)}+\norm{H(\cdot,y_\tau)}_{L^2(\Omega)}\leqslant C_2\max(1,\tau^{2-\frac{n}{2}}),\quad \tau\in(0,\delta),\end{equation} with $C_2>0$ independent of $\tau$. Applying estimates \eqref{t1h}-\eqref{t1i}, we obtain $$\tau \norm{w_{1,\tau}}_{H^1(\Omega)}+\norm{w_{2,\tau}^*}_{L^2(\Omega)}\leqslant C_3\max(1,\tau^{2-\frac{n}{2}}),\quad \tau\in(0,\delta),$$ with $C_3>0$ independent of $\tau$, and it follows that $$\begin{aligned}&\abs{\int_\Omega (D_1(\cdot,\lambda_R)-D_2(\cdot,\lambda_R))\cdot\nabla w_{1,\tau}(x)w_{2,\tau}^*(x)dx}\\ &\leqslant \norm{D_1(\cdot,\lambda_R)-D_2(\cdot,\lambda_R)}_{L^\infty(\Omega)}\norm{w_{1,\tau}}_{H^1(\Omega)}\norm{w_{2,\tau}^*}_{L^2(\Omega)}\\ \ &\leqslant C_4\max(1,\tau^{3-n}),\quad \tau\in(0,\delta),\end{aligned}$$ with $C_4>0$ a constant independent of $\tau$ which depends on $D_1$, $D_2$, $R$, $\gamma_1$, $\gamma_2$. In the same way, applying \eqref{t1h}-\eqref{t1i} and \eqref{ell}, we get $$\begin{aligned}&\abs{\int_\Omega\sum_{i,j=1}^n a_{i,j}(x) \partial_{x_j} w_{1,\tau}(x)\partial_{x_i}w_{2,\tau}^*(x)dx}\\ &\geqslant \int_\Omega\sum_{i,j=1}^n a_{i,j}(x) \partial_{x_j} H(x,y_\tau)\partial_{x_i}H(x,y_\tau)dx -C_5\max(1,\tau^{\frac{5}{2}-n})\\ &\geqslant c\int_\Omega |\nabla_x H(x,y_\tau)|^2dx-C_5\max(1,\tau^{\frac{5}{2}-n}),\quad \tau\in(0,\delta),\end{aligned}$$ with $C_5>0$ a constant independent of $\tau$ which depends on $D_1$, $D_2$, $R$, $\gamma_1$, $\gamma_2$. 
In addition, applying \eqref{g3}, for all $\tau\in(0,\delta)$, we obtain $$\begin{aligned}|\left\langle (\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R})g_\tau^1,g_\tau^1\right\rangle_{H^{-\frac{1}{2}}(S),H^{\frac{1}{2}}_0(S)}|&\leqslant \norm{\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\norm{g_\tau^1}_{H^{\frac{1}{2}}(\partial\Omega)}^2\\ &\leqslant C_*\norm{\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\max(\tau^{2-n},|\ln(\tau)|),\end{aligned}$$ with $C_*>0$ depending only on $\Omega$ and $a$. Combining all these estimates with \eqref{t1f}, for all $\tau\in(0,\delta)$, we have \begin{equation} \label{t1j} c\gamma(\lambda_R)\int_\Omega |\nabla_x H(x,y_\tau)|^2dx-\gamma(\lambda_R)C_5\tau^{\frac{5}{2}-n}-C_4\tau^{3-n}\leqslant C_*\norm{\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\tau^{2-n}.\end{equation} On the other hand, applying \eqref{est3}, one can find $\delta_1\in(0,\delta)$ such that for all $\tau\in (0,\delta_1)$, we have $$\begin{aligned}\int_\Omega |\nabla_x H(x,y_\tau)|^2dx&\geqslant \int_{B(y_\tau,4\tau)\cap\Omega} |\nabla_x H(x,y_\tau)|^2dx\\ &\geqslant c_2^2\int_{B(y_\tau,4\tau)\cap\Omega}|x-y_\tau|^{2-2n}dx\\ &\geqslant c'\max(\tau^{2-n},|\ln(\tau)|),\quad \tau\in (0,\delta_1),\end{aligned}$$ with $c'>0$ a constant depending on $a$ and $\Omega$. Combining this with \eqref{t1j}, for all $\tau\in (0,\delta_1)$, we obtain $$\begin{aligned}&cc'\gamma(\lambda_R)\max(\tau^{2-n},|\ln(\tau)|)-\gamma(\lambda_R)C_5\max(1,\tau^{\frac{5}{2}-n})-C_4\max(1,\tau^{3-n})\\ &\leqslant C_*\norm{\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\max(\tau^{2-n},|\ln(\tau)|).\end{aligned}$$ Dividing both sides of this inequality by $\max(\tau^{2-n},|\ln(\tau)|)$ and sending $\tau\to0$, we obtain $$cc'\gamma(\lambda_R)\leqslant C_*\norm{\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}.$$ Since $c$, $c'$ and $C_*$ are constants depending only on $\Omega$ and $a$, we deduce that there exists $C>0$ depending only on $\Omega$ and $a$ such that $$\begin{aligned}\norm{\gamma_1-\gamma_2}_{L^\infty(-R,R)}=\gamma(\lambda_R)&\leqslant C\norm{\Lambda^1_{S,\lambda_R}-\Lambda^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\\ &\leqslant C\sup_{\lambda\in[-R,R]}\norm{\Lambda^1_{S,\lambda}-\Lambda^2_{S,\lambda}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}.\end{aligned}$$ Combining this with \eqref{t1d}, we obtain \eqref{t1c}. This completes the proof of the theorem. \subsection{Proof of Theorem \ref{t2}} Fix $R>0$ and consider, for all $\lambda\in[-R,R]$, $q_{j,\lambda}(x)=G_j'(v_{j,\lambda}(x))$, $j=1,2$, with $v_{j,\lambda}$ solving the problem $$\left\{ \begin{array}{ll} -\Delta v_{j,\lambda} +G_j( v_{j,\lambda})=0 & \mbox{in}\ \Omega , \\ v_{j,\lambda}=\lambda\chi &\mbox{on}\ \partial\Omega. \end{array} \right.$$ We denote by $\mathcal D^j_{S,\lambda}$, $j=1,2$, the restriction of $\mathcal D_{q_{j,\lambda}}$ to $H^{\frac{1}{2}}_0(S)$, where we recall that $\mathcal D_{q_{\lambda,G}}$ is the DN map associated with problem \eqref{eq8}. 
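Let us recall, in a purely formal way, why the potentials $q_{j,\lambda}=G_j'(v_{j,\lambda})$ appear here (a sketch; the precise statement is the one provided by Lemma \ref{l2}): if $v_{j,\lambda,\varepsilon}$ denotes the solution of the above semilinear problem with boundary datum $\lambda\chi+\varepsilon h$, then differentiating the equation $-\Delta v_{j,\lambda,\varepsilon}+G_j(v_{j,\lambda,\varepsilon})=0$ with respect to $\varepsilon$ at $\varepsilon=0$ yields, for $w=\partial_\varepsilon v_{j,\lambda,\varepsilon}|_{\varepsilon=0}$,
$$-\Delta w+G_j'(v_{j,\lambda})\,w=0\ \mbox{ in }\Omega,\qquad w=h\ \mbox{ on }\partial\Omega,$$
which is a problem of the same form as \eqref{eqq4} with $q=q_{j,\lambda}$.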
Applying Lemma \ref{l2} and repeating the argumentation of Theorem \ref{t1}, we can prove that $$\sup_{\lambda\in[-R,R]}\norm{\mathcal D^1_{S,\lambda}-\mathcal D^2_{S,\lambda}}_{\mathcal B(H^{\frac{1}{2}}_0(S);H^{-\frac{1}{2}}(S))}=\sup_{\lambda\in[-R,R]}\sup_{h\in\mathcal H_S}\limsup_{\varepsilon\to0}\frac{\norm{\mathcal N(\lambda+\varepsilon h)-\mathcal N(\lambda)}_{H^{-\frac{1}{2}}(S)}}{\varepsilon}$$ and the proof of Theorem \ref{t2} will be completed if we show that for all $R>0$ there exists a constant $C>0$, depending on $\Omega$, $\kappa$, $\chi$ and $R$, such that the following estimate \begin{equation} \label{t2e}\norm{G_1-G_2}_{L^\infty(-R,R)}\leqslant C\left(\sup_{\lambda\in[-R,R]}\norm{\mathcal D^1_{S,\lambda}-\mathcal D^2_{S,\lambda}}_{\mathcal B(H^{\frac{1}{2}}_0(S);H^{-\frac{1}{2}}(S))}\right)^{\frac{1}{3}}\end{equation} holds true. In a similar way to Theorem \ref{t1}, we fix $G=G_1-G_2$ and we consider $\lambda_R\in[-R,R]$ such that $\norm{G_1'-G_2'}_{L^\infty(-R,R)}=|G'(\lambda_R)|$. Without loss of generality we assume that $G'(\lambda_R)\geqslant0$. Fix $\tau\in (0,\delta)$, $k=1,\ldots,n$ and consider the solution $w_{j,k,\tau}$ of \eqref{eqq4} with $q=q_{j,\lambda_R}$. Note that here, in view of \eqref{t2a} and \eqref{l2c}, there exists $M>0$ depending only on $\Omega$, $\kappa$, $\chi$ and $R$ such that $\norm{q_{j,\lambda_R}}_{L^\infty(\Omega)}\leqslant M$. Therefore, applying Proposition \ref{p3}, we can decompose $w_{j,k,\tau}$ into two terms $ w_{j,k,\tau}=\partial_{x_k}H(\cdot,y_\tau)+J_{j,k,\tau}$ with \begin{equation} \label{t2f}\norm{J_{j,k,\tau}}_{H^1(\Omega)}\leqslant C\max(1,\tau^{2-\frac{n}{2}}),\quad \tau\in (0,\delta),\end{equation} where $C>0$ is a constant depending only on $\Omega$, $\chi$, $\kappa$ and $R$. In a similar way to Theorem \ref{t1}, integrating by parts, we obtain the identity $$ \begin{aligned}&\int_\Omega (q_{1,\lambda_R}(x)-q_{2,\lambda_R}(x)) w_{1,k,\tau}(x)w_{2,k,\tau}(x)dx\\ &=\left\langle (\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R})g_{k,\tau}^2,g_{k,\tau}^2\right\rangle_{H^{-\frac{1}{2}}(S),H^{\frac{1}{2}}_0(S)}.\end{aligned}$$ Combining this with \eqref{t2f} and \eqref{g4}, we obtain \begin{equation} \label{t2g}\abs{\int_\Omega (q_{1,\lambda_R}-q_{2,\lambda_R})|\partial_{x_k}H(\cdot,y_\tau)|^2dx} \leqslant C\left[\max(1,\tau^{2-\frac{n}{2}})+\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\tau^{-n}\right],\end{equation} where $C>0$ is a constant depending only on $R$, $\chi$, $\kappa$ and $\Omega$. Now let us consider $q=q_{1,\lambda_R}-q_{2,\lambda_R}$ and recall that $q\in \mathcal C^1(\overline{\Omega})$ with $$\partial_{x_i}q=G_1''(v_{1,\lambda_R})\partial_{x_i}v_{1,\lambda_R}-G_2''(v_{2,\lambda_R})\partial_{x_i}v_{2,\lambda_R},\quad i=1,\ldots,n.$$ On the other hand, applying \eqref{t2a} and following the argumentation after \cite[formula (8.14) p. 
296]{LU} and \cite[Theorems 9.3 and 9.4]{GT}, we deduce that there exists $M_1>0$ depending only on $\kappa$, $\Omega$, $R$ and $\chi$ such that $$\norm{v_{j,\lambda_R}}_{W^{1,\infty}(\Omega)}\leqslant M_1.$$ Therefore, there exists $C>0$ depending only on $\kappa$, $\Omega$, $R$ and $\chi$ such that $$\norm{q}_{W^{1,\infty}(\Omega)}\leqslant C.$$ Since $\Omega$ is of class $\mathcal C^{2+\alpha}$, we can extend $q$ into $\tilde{q}\in \mathcal C^1({\mathbb R}^n)$ satisfying $$\norm{\tilde{q}}_{W^{1,\infty}({\mathbb R}^n)}\leqslant C'\norm{q}_{W^{1,\infty}(\Omega)},$$ with $C'>0$ depending only on $\Omega$. Then, we deduce that there exists $C>0$ depending only on $\kappa$, $\Omega$, $R$ and $\chi$ such that \begin{equation} \label{t2h}\norm{\tilde{q}}_{W^{1,\infty}({\mathbb R}^n)}\leqslant C.\end{equation} Moreover, we have $$q(x)=q(x_0)+\int_0^1\nabla \tilde{q}(x_0+s(x-x_0))\cdot (x-x_0)ds=q(x_0)+Q(x)\cdot(x-x_0),\quad x\in\overline{\Omega}$$ and using the fact that $$q(x_0)=G_1'(v_{1,\lambda_R}(x_0))-G_2'(v_{2,\lambda_R}(x_0))=G_1'(\lambda_R\chi(x_0))-G_2'(\lambda_R\chi(x_0))=G'(\lambda_R),$$ we obtain $$q(x)=G'(\lambda_R)+Q(x)\cdot(x-x_0),\quad x\in\overline{\Omega}.$$ It follows that $$\int_\Omega (q_{1,\lambda_R}-q_{2,\lambda_R})|\partial_{x_k}H(\cdot,y_\tau)|^2dx=G'(\lambda_R)\int_\Omega |\partial_{x_k}H(\cdot,y_\tau)|^2dx+\int_\Omega Q(x)\cdot(x-x_0)|\partial_{x_k}H(\cdot,y_\tau)|^2dx$$ and applying \eqref{t2h}, we get $$\abs{\int_\Omega (q_{1,\lambda_R}-q_{2,\lambda_R})|\partial_{x_k}H(\cdot,y_\tau)|^2dx}\geqslant G'(\lambda_R)\int_\Omega |\partial_{x_k}H(\cdot,y_\tau)|^2dx-C\int_\Omega |x-x_0||\partial_{x_k}H(\cdot,y_\tau)|^2dx.$$ Moreover, applying \eqref{est3}, we obtain $$\begin{aligned}\int_\Omega |x-x_0||\partial_{x_k}H(\cdot,y_\tau)|^2dx&\leqslant C\int_\Omega |x-x_0||x-y_{\tau}|^{2-2n}dx\\ \ &\leqslant C\int_\Omega (|x-y_{\tau}|+|x_0-y_\tau|)|x-y_{\tau}|^{2-2n}dx\\ &\leqslant C\left(\int_\Omega |x-y_{\tau}|^{3-2n}dx+\tau\int_\Omega |x-y_{\tau}|^{2-2n}dx\right)\end{aligned}$$ and repeating the arguments used in Proposition \ref{p2}, we obtain $$\abs{\int_\Omega (q_{1,\lambda_R}-q_{2,\lambda_R})|\partial_{x_k}H(\cdot,y_\tau)|^2dx}\geqslant G'(\lambda_R)\int_\Omega |\partial_{x_k}H(\cdot,y_\tau)|^2dx-C\tau^{3-n},$$ with $C>0$ depending only on $\kappa$, $\Omega$, $R$ and $\chi$. 
Combining this with \eqref{t2g}, we find $$G'(\lambda_R)\int_\Omega |\partial_{x_k}H(\cdot,y_\tau)|^2dx \leqslant C\left[\max(1,\tau^{2-\frac{n}{2}})+\tau^{3-n}+\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\tau^{-n}\right].$$ Taking the sum of the above expression with respect to $k=1,\ldots,n$, we get $$G'(\lambda_R)\int_\Omega |\nabla_x H(\cdot,y_\tau)|^2dx \leqslant C\left[\max(1,\tau^{2-\frac{n}{2}})+\tau^{3-n}+\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\tau^{-n}\right].$$ Then, in a similar way to Theorem \ref{t1}, applying \eqref{est3}, we find $$G'(\lambda_R)\tau^{2-n}\leqslant C\left[\max(1,\tau^{2-\frac{n}{2}})+\tau^{3-n}+\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\tau^{-n}\right],\quad\tau\in(0,\delta).$$ Dividing this inequality by $\tau^{2-n}$, we obtain $$G'(\lambda_R)\leqslant C\left[\tau+\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\tau^{-2}\right],\quad \tau\in(0,\delta).$$ From this last estimate and \eqref{t2a}, by choosing $\tau=\left(\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}\right)^{\frac{1}{3}}$ when $\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}$ is sufficiently small (so that $\tau\in(0,\delta)$ and the two terms in the bracket above coincide), one can easily check that there exists $C>0$ depending only on $\kappa$, $\Omega$, $R$ and $\chi$, such that the following estimate \begin{equation} \label{t2j}G'(\lambda_R)\leqslant C\norm{\mathcal D^1_{S,\lambda_R}-\mathcal D^2_{S,\lambda_R}}_{\mathcal B(H^{\frac{1}{2}}_0(S),H^{-\frac{1}{2}}(S))}^{\frac{1}{3}}\end{equation} holds true. Combining this with the fact that, according to \eqref{t2a}, we have $$|G(\lambda)|\leqslant |G(0)|+|\lambda|\norm{G'}_{L^\infty(-R,R)}\leqslant RG'(\lambda_R),\quad \lambda\in[-R,R],$$ we deduce \eqref{t2e} from \eqref{t2j}. This completes the proof of the theorem. \vskip 1cm \end{document}
\begin{document} \subjclass[2010]{13A18 (12J10, 12J20, 14E15)} \begin{abstract} For a certain field $K$, we construct a valuation-algebraic valuation on the polynomial ring $K[x]$ whose underlying Maclane--Vaqui\'e chain consists of an infinite (countable) number of limit augmentations. \end{abstract} \maketitle \section*{Introduction} Let $(K,v)$ be a valued field. In a pioneering work, Maclane studied the extensions of the valuation $v$ to the polynomial ring $K[x]$ in the case where $v$ is discrete of rank one \cite{mcla}. He proved that all extensions of $v$ to $K[x]$ can be obtained as a kind of limit of chains of augmented valuations: \begin{equation}\label{depthintro} \mu_0\ \stackrel{\phi_1,\gamma_1}{\longrightarrow}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\longrightarrow}\ \cdots \ \longrightarrow\ \mu_{n-1} \ \stackrel{\phi_{n},\gamma_{n}}{\longrightarrow}\ \mu_{n}\ \longrightarrow\ \cdots\ \longrightarrow\ \mu \end{equation} involving the choice of certain \emph{key polynomials} $\phi_n \in K[x]$ and elements $\gamma_n$ belonging to some extension of the value group of $v$. These chains of valuations contain relevant information on $\mu$ and play a crucial role in the resolution of many arithmetic-geometric tasks in number fields and function fields of curves \cite{newapp,gen}. For valued fields of arbitrary rank, several approaches to this problem were developed by Alexandru--Popescu--Zaharescu \cite{APZ}, Kuhlmann \cite{Kuhl}, Herrera--Mahboub--Olalla--Spivakovsky \cite{hos,hmos} and Vaqui\'e \cite{Vaq,Vaq3}. In this general context, \emph{limit augmentations} and the corresponding \emph{limit key polynomials} appear as a new feature. In the henselian case, limit augmentations are linked with the existence of \emph{defect} in the extension $\mu/v$ \cite{VaqDef}. Thus, they are an obstacle for local uniformization in positive characteristic. A chain as in (\ref{depthintro}) is said to be a \emph{Maclane--Vaqui\'e chain} if it is constructed as a mixture of ordinary and limit augmentations, and satisfies a certain additional technical condition (see Section \ref{subsecMLV}). In this case, the intermediate valuations $\mu_n$ are essentially unique and contain intrinsic information about the valuation $\mu$ \cite[Thm. 4.7]{MLV}. In particular, the number of limit augmentations of any Maclane--Vaqui\'e chain of $\mu$ is an intrinsic datum of $\mu$, which is called the \emph{limit-depth} of $\mu$. In this paper, we exhibit an example of a valuation with infinite limit-depth, inspired by a construction by Kuhlmann of infinite towers of Artin--Schreier extensions with defect \cite{KuhlDefect}. 
\section{Maclane--Vaqui\'e chains of valuations on $K[x]$}\label{secComm} In this section we recall some well-known results on valuations on a polynomial ring, mainly extracted from the surveys \cite{KP} and \cite{MLV}. Let $(K,v)$ be a valued field, with valuation ring $\mathcal{O}_v$ and residue class field $k$. Let $\Gamma=v(K^*)$ be the value group and denote by $\Gamma_{\mathbb Q}=\Gamma\otimes{\mathbb Q}$ the divisible hull of $\Gamma$. In the sequel, we write $\Gamma_{{\mathbb Q}\infty}$ instead of $\Gamma_{\mathbb Q}\cup\{\infty\}$. Consider the set $\mathcal{T}$ of all $\Gamma_{\mathbb Q}$-valued extensions of $v$ to the field $K(x)$ of rational functions in one indeterminate. That is, an element $\mu\in\mathcal{T}$ is a valuation on $K[x]$, $$ \mu\colon K[x]\longrightarrow \Gamma_{{\mathbb Q}\infty}, $$ such that $\mu_{\mid K}=v$ and $\mu^{-1}(\infty)=\{0\}$. Let $\Gamma_\mu=\mu(K(x)^*)$ be the value group and $k_\mu$ the residue field. This set $\mathcal{T}$ admits a partial ordering. For $\mu,\nu\in \mathcal{T}$ we say that $\mu\le\nu$ if $$\mu(f)\le \nu(f), \quad\forall\,f\in K[x].$$ As usual, we write $\mu<\nu$ to indicate that $\mu\le\nu$ and $\mu\ne\nu$. This poset $\mathcal{T}$ has the structure of a tree; that is, all intervals $$ (-\infty,\mu\,]:=\left\{\rho\in\mathcal{T}\mid \rho\le\mu\right\} $$ are totally ordered \cite[Thm. 2.4]{MLV}. A node $\mu\in\mathcal{T}$ is a \emph{leaf} if it is a maximal element with respect to the ordering $\le$. Otherwise, we say that $\mu$ is an \emph{inner node}. The leaves of $\mathcal{T}$ are the \emph{valuation-algebraic} valuations in Kuhlmann's terminology \cite{Kuhl}. The inner nodes are the \emph{residually transcendental} valuations, characterized by the fact that the extension $k_\mu/k$ is transcendental. In this case, its transcendence degree is necessarily equal to one \cite{Kuhl}. \subsection{Graded algebra and key polynomials}\label{subsecKP} Take any $\mu\in\mathcal{T}$. For all $\alpha\in\Gamma_\mu$, consider the $\mathcal{O}_v$-modules: $$ \mathcal{P}_\alpha=\{g\in K[x]\mid \mu(g)\ge \alpha\}\supset \mathcal{P}_\alpha^+=\{g\in K[x]\mid \mu(g)> \alpha\}. $$ The \emph{graded algebra of $\mu$} is the integral domain: $$ \mathcal{G}_\mu=\bigoplus\nolimits_{\alpha\in\Gamma_\mu}\mathcal{P}_\alpha/\mathcal{P}_\alpha^+. $$ There is an \emph{initial term} mapping $\op{in}_\mu\colon K[x]\to \mathcal{G}_\mu$, given by $\op{in}_\mu 0=0$ and $$ \op{in}_\mu g= g+\mathcal{P}_{\mu(g)}^+\quad\mbox{for all nonzero }g\in K[x]. 
$$ The following definitions translate properties of the action of $\mu$ on $K[x]$ into algebraic relationships in the graded algebra $\mathcal{G}_\mu$.\medskip \defn Let $g,\,h\in K[x]$. We say that $g$ is \emph{$\mu$-divisible} by $h$, and we write $h\mid_\mu g$, if $\op{in}_\mu h\mid \op{in}_\mu g$ in $\mathcal{G}_\mu$. We say that $g$ is $\mu$-irreducible if $\op{in}_\mu g$ is a prime element; that is, the homogeneous principal ideal of $\mathcal{G}_\mu$ generated by $\op{in}_\mu g$ is a prime ideal. We say that $g$ is $\mu$-minimal if $g\nmid_\mu f$ for all nonzero $f\in K[x]$ with $\deg(f)<\deg(g)$.\medskip Let us recall a well-known characterization of $\mu$-minimality \cite[Prop. 2.3]{KP}. \begin{lemma}\label{minimal0} A polynomial $g\in K[x]\setminus K$ is $\mu$-minimal if and only if $\mu$ acts as follows on $g$-expansions: $$ f=\sum\nolimits_{0\le n}a_n g^n,\quad \deg(a_n)<\deg(g)\quad\ \Longrightarrow\ \quad \mu(f)=\min_{0\le n}\left\{\mu\left(a_n g^n\right)\right\}. $$ \end{lemma} \defn A \emph{(Maclane--Vaqui\'e) key polynomial} for $\mu$ is a monic polynomial in $K[x]$ which is simultaneously $\mu$-minimal and $\mu$-irreducible. The set of key polynomials for $\mu$ is denoted by $\op{KP}(\mu)$. \medskip All $\phi\in\op{KP}(\mu)$ are irreducible in $K[x]$. For all $\phi\in\op{KP}(\mu)$ let $[\phi]_\mu\subset \op{KP}(\mu)$ be the subset of all key polynomials $\varphi\in\op{KP}(\mu)$ such that $\op{in}_\mu \varphi=\op{in}_\mu \phi$. \begin{lemma}\cite[Thm. 1.15]{Vaq}\label{propertiesTMN} Let $\mu<\nu$ be two nodes in $\mathcal{T}$. Let $\mathbf{t}(\mu,\nu)$ be the set of monic polynomials $\phi\in K[x]$ of minimal degree satisfying $\mu(\phi)<\nu(\phi)$. Then, $\mathbf{t}(\mu,\nu)\subset\op{KP}(\mu)$ and $\mathbf{t}(\mu,\nu)=[\phi]_\mu$ for all $\phi\in\mathbf{t}(\mu,\nu)$. Moreover, for all $f\in K[x]$, the equality $\mu(f)=\nu(f)$ holds if and only if $\phi\nmid_{\mu}f$. \end{lemma} The existence of key polynomials characterizes the inner nodes of $\mathcal{T}$. \begin{theorem}\label{leaves} A node $\mu\in\mathcal{T}$ is a leaf if and only if $\op{KP}(\mu)=\emptyset$. \end{theorem} \defn The \emph{degree} $\deg(\mu)$ of an inner node $\mu\in\mathcal{T}$ is defined as the minimal degree of a key polynomial for $\mu$. 
\subsection{Depth zero valuations}\label{subsecDepth0} For all $a\in K$, $\delta\in\Gamma_{\mathbb Q}$, consider the \emph{depth-zero} valuation $$\mu=\omega_{a,\delta}=[v;\,x-a,\delta]\in\mathcal{T},$$ defined in terms of $(x-a)$-expansions as $$ f=\sum\nolimits_{0\le n}a_n(x-a)^n\ \Longrightarrow\ \mu(f)=\min\{v(a_n)+n\delta\mid 0\le n\}. $$ Note that $\mu(x-a)=\delta$. Clearly, $x-a$ is a key polynomial for $\mu$ of minimal degree and $\Gamma_\mu=\langle\Gamma,\delta\rangle$. In particular, $\mu$ is an inner node of $\mathcal{T}$ with $\deg(\mu)=1$. One checks easily that \begin{equation}\label{balls} \omega_{a,\delta}\le \omega_{b,\epsilon} \ \ \Longleftrightarrow\ \ v(a-b)\ge\delta \mbox{\, and \,}\epsilon\ge\delta. \end{equation} \subsection{Ordinary augmentation of valuations}\label{subsecOrdAugm} Let $\mu\in\mathcal{T}$ be an inner node. For all $\phi\in\op{KP}(\mu)$ and all $\gamma\in\Gamma_{\mathbb Q}$ such that $\mu(\phi)<\gamma$, we may construct the \emph{ordinary} augmented valuation $$\mu'=[\mu;\,\phi,\gamma]\in\mathcal{T},$$ defined in terms of $\phi$-expansions as $$ f=\sum\nolimits_{0\le n}a_n\phi^n,\quad \deg(a_n)<\deg(\phi)\ \Longrightarrow\ \mu'(f)=\min\{\mu(a_n)+n\gamma\mid 0\le n\}. $$ Note that $\mu'(\phi)=\gamma$, $\mu<\mu'$ and $\mathbf{t}(\mu,\mu')=[\phi]_\mu$. By \cite[Cor. 7.3]{KP}, $\phi$ is a key polynomial for $\mu'$ of minimal degree. In particular, $\mu'$ is an inner node of $\mathcal{T}$ too, with $\deg(\mu')=\deg(\phi)$. \subsection{Limit augmentation of valuations}\label{subsecLimAugm} Consider a totally ordered family of inner nodes of $\mathcal{T}$, not containing a maximal element: $$ \mathcal{C}=(\rho_i)_{i\in A}\subset\mathcal{T}. $$ We assume that $\mathcal{C}$ is parametrized by a totally ordered set $A$ of indices such that the mapping $A\to\mathcal{C}$ determined by $i\mapsto \rho_i$ is an isomorphism of totally ordered sets. If $\deg(\rho_{i})$ is stable for all sufficiently large $i\in A$, we say that $\mathcal{C}$ has \emph{stable degree}, and we denote this stable degree by $\deg(\mathcal{C})$. We say that $f\in K[x]$ is \emph{$\mathcal{C}$-stable} if, for some index $i\in A$, it satisfies $$\rho_i(f)=\rho_j(f), \quad \mbox{ for all }\ j>i.$$ \begin{lemma}\label{stable=unit} A nonzero $f\in K[x]$ is $\mathcal{C}$-stable if and only if $\,\op{in}_{\rho_i}f$ is a unit in $\mathcal{G}_{\rho_i}$ for some $i\in A$. \end{lemma} \begin{proof} Suppose that $\,\op{in}_{\rho_i}f$ is a unit in $\mathcal{G}_{\rho_i}$ for some $i\in A$. 
Take any $j>i$ in $A$, and let $\mathbf{t}(\rho_i,\rho_j)=\left[\varphi\right]_{\rho_i}$. By Lemma \ref{propertiesTMN}, $\varphi\in\op{KP}(\rho_i)$, so that $\op{in}_{\rho_i}\varphi$ is a prime element. Hence, $\varphi \nmid_{\rho_i}f$, and this implies $\rho_i(f)=\rho_j(f)$, again by Lemma \ref{propertiesTMN}. Conversely, if $f$ is $\mathcal{C}$-stable, there exists $i_0\in A$ such that $\rho_{i_0}(f)=\rho_i(f)$ for all $i>i_0$. Hence, $\op{in}_{\rho_i}f$ is the image of $\op{in}_{\rho_{i_0}}f$ under the canonical homomorphism $\mathcal{G}_{\rho_{i_0}}\to\mathcal{G}_{\rho_i}$. By \cite[Cor. 2.6]{MLV}, $\,\op{in}_{\rho_i}f$ is a unit in $\mathcal{G}_{\rho_i}$. \end{proof}\medskip We obtain a \emph{stability function} $\rho_{\mathcal{C}}$, defined on the set of all $\mathcal{C}$-stable polynomials by $$\rho_{\mathcal{C}}(f)=\max\{\rho_i(f)\mid i\in A\}.$$ \defn We say that $\mathcal{C}$ has a \emph{stable limit} if all polynomials in $K[x]$ are $\mathcal{C}$-stable. In this case, $\rho_{\mathcal{C}}$ is a valuation in $\mathcal{T}$ and we say that $$ \rho_{\mathcal{C}}=\lim(\mathcal{C})=\lim_{i\in A}\rho_i. $$ Suppose that $\mathcal{C}$ has no stable limit. Let $\op{KP}_\infty(\mathcal{C})$ be the set of all monic $\mathcal{C}$-unstable polynomials of minimal degree. The elements in $\op{KP}_\infty(\mathcal{C})$ are said to be \emph{limit key polynomials} for $\mathcal{C}$. Since the product of stable polynomials is stable, all limit key polynomials are irreducible in $K[x]$.\medskip \defn We say that $\mathcal{C}$ is an \emph{essential continuous family} of valuations if it has stable degree and it admits limit key polynomials whose degree is greater than $\deg(\mathcal{C})$.\medskip For all limit key polynomials $\phi\in\op{KP}_\infty\left(\mathcal{C}\right)$, and all $\gamma\in\Gamma_{\mathbb Q}$ such that $\rho_i(\phi)<\gamma$ for all $i\in A$, we may construct the \emph{limit augmented} valuation $$\mu=[\mathcal{C};\,\phi,\gamma]\in\mathcal{T}$$ defined in terms of $\phi$-expansions as: $$ f=\sum\nolimits_{0\le n}a_n\phi^n,\ \deg(a_n)<\deg(\phi)\ \ \Longrightarrow\ \ \mu(f)=\min\{\rho_{\mathcal{C}}(a_n)+n\gamma\mid 0\le n\}. $$ Since $\deg(a_n)<\deg(\phi)$, all coefficients $a_n$ are $\mathcal{C}$-stable. Note that $\mu(\phi)=\gamma$ and $\rho_i<\mu$ for all $i\in A$. By \cite[Cor. 7.13]{KP}, $\phi$ is a key polynomial for $\mu$ of minimal degree, so that $\mu$ is an inner node of $\mathcal{T}$ with $\deg(\mu)=\deg(\phi)$. 
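As a concrete illustration of the expansion formula in Section \ref{subsecDepth0}, the following minimal Python sketch evaluates a depth-zero valuation $\omega_{a,\delta}$ on a polynomial given directly by its $(x-a)$-expansion, taking for the base valuation the $2$-adic valuation on $\mathbb Q$ (the helper names are ours and purely illustrative; this computation plays no role in the arguments of the paper).
\begin{verbatim}
from fractions import Fraction

def v2(q):
    """2-adic valuation of a nonzero rational number q."""
    q = Fraction(q)
    num, den, val = q.numerator, q.denominator, 0
    while num % 2 == 0:
        num //= 2
        val += 1
    while den % 2 == 0:
        den //= 2
        val -= 1
    return Fraction(val)

def depth_zero(coeffs, delta):
    """omega_{a,delta}(f) for f = sum_n coeffs[n]*(x-a)^n with rational
    coefficients: the minimum of v(a_n) + n*delta over nonzero terms."""
    return min(v2(c) + n * Fraction(delta) for n, c in enumerate(coeffs) if c != 0)

# Example: f = 4 + 6(x-a) + (x-a)^3 and delta = 3/2:
# min(v2(4), v2(6) + 3/2, v2(1) + 9/2) = min(2, 5/2, 9/2) = 2.
print(depth_zero([4, 6, 0, 1], Fraction(3, 2)))   # prints 2
\end{verbatim}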
\subsection{Maclane--Vaqui\'e chains}\label{subsecMLV} Consider a countable chain of valuations in $\mathcal{T}$: \begin{equation}\label{depthMLV} v\ \stackrel{\phi_0,\gamma_0}{\longrightarrow}\ \mu_0\ \stackrel{\phi_1,\gamma_1}{\longrightarrow}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\longrightarrow}\ \cdots \ \stackrel{\phi_{n},\gamma_{n}}{\longrightarrow}\ \mu_{n}\ \longrightarrow\ \cdots \end{equation} in which $\phi_0\in K[x]$ is a monic polynomial of degree one, $\mu_0=[v;\,\phi_0,\gamma_0]$ is a depth-zero valuation, and each other node is an augmentation of the previous node, of one of the two types:\medskip \emph{Ordinary augmentation}: \ $\mu_{n+1}=[\mu_n;\, \phi_{n+1},\gamma_{n+1}]$, for some $\phi_{n+1}\in\op{KP}(\mu_n)$.\medskip \emph{Limit augmentation}: \ $\mu_{n+1}=[\mathcal{C}_n;\, \phi_{n+1},\gamma_{n+1}]$, for some $\phi_{n+1}\in\op{KP}_\infty(\mathcal{C}_n)$, where $\mathcal{C}_n$ is an essential continuous family whose first valuation is $\mu_n$.\medskip Therefore, $\phi_n$ is a key polynomial for $\mu_n$ of minimal degree and $\deg(\mu_n)=\deg(\phi_n)$, for all $n\ge0$.\medskip \defn A chain of mixed augmentations as in (\ref{depthMLV}) is said to be a \emph{Maclane--Vaqui\'e (MLV) chain} if every augmentation step satisfies: \begin{itemize} \item If $\,\mu_n\to\mu_{n+1}\,$ is ordinary, then $\ \deg(\mu_n)<\deg(\mu_{n+1})$. \item If $\,\mu_n\to\mu_{n+1}\,$ is limit, then $\ \deg(\mu_n)=\deg(\mathcal{C}_n)$ and $\ \phi_n\not\in\mathbf{t}(\mu_n,\mu_{n+1})$. \end{itemize}\medskip In this case, we have $\phi_n\not\in\mathbf{t}(\mu_n,\mu_{n+1})$ for all $n$. As shown in \cite[Sec. 4.1]{MLV}, this implies that $\mu(\phi_n)=\gamma_n$ and $\Gamma_{\mu_{n}}=\langle\Gamma_{\mu_{n-1}},\gamma_{n}\rangle$ for all $n$. The following theorem is due to Maclane, for the discrete rank-one case \cite{mcla}, and Vaqui\'e for the general case \cite{Vaq}. Another proof may be found in \cite[Thm. 4.3]{MLV}. \begin{theorem}\label{main} Every node $\mu\in\mathcal{T}$ falls in one, and only one, of the following cases. \medskip (a) \ It is the last valuation of a finite MLV chain. 
$$ \mu_0\ \stackrel{\phi_1,\gamma_1}{\longrightarrow}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\longrightarrow}\ \cdots\ \longrightarrow\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}{\longrightarrow}\ \mu_{r}=\mu.$$ (b) \ It is the stable limit of an essential continuous family, $\mathcal{C}=(\rho_i)_{i\in A}$, whose first valuation $\mu_r$ falls in case (a): $$ \mu_0\ \stackrel{\phi_1,\gamma_1}{\longrightarrow}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\longrightarrow}\ \cdots\ \longrightarrow\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}{\longrightarrow}\ \mu_{r}\ \stackrel{\mathcal{C}}{\longrightarrow}\ \rho_{\mathcal{C}}=\mu. $$ Moreover, we may assume that $\deg(\mu_r)=\deg(\mathcal{C})$ and $\phi_r\not\in\mathbf{t}(\mu_r,\mu)$.\medskip (c) \ It is the stable limit, $\mu=\lim_{n\in\mathbb N}\mu_n$, of an infinite MLV chain. \end{theorem} The main advantage of MLV chains is that their nodes are essentially unique, so that we may read in them several data intrinsically associated to the valuation $\mu$. For instance, the sequence $\left(\deg(\mu_n)\right)_{n\ge0}$ and the character ``ordinary'' or ``limit'' of each augmentation step $\mu_n\to\mu_{n+1}$ are intrinsic features of $\mu$ \cite[Sec. 4.3]{MLV}. Thus, we may define order-preserving functions $$ \op{dep},\ \op{ldep}\colon\mathcal{T}\longrightarrow\mathbb N\cup\{\infty\}, $$ where $\op{dep}(\mu)$ is the length of the MLV chain underlying $\mu$, and $\op{ldep}(\mu)$ counts the number of limit augmentations in this MLV chain. It is easy to construct examples of valuations on $K[x]$ of infinite depth. In the next section, we show the existence of valuations with infinite limit-depth too. Their construction is much more involved. \section{A valuation with infinite limit-depth}\label{secLimDepthInfinity} In this section, we exhibit an example of a valuation with infinite limit-depth, based on explicit constructions by Kuhlmann of infinite towers of field extensions with defect \cite{KuhlDefect}. For a prime number $p$, let $\mathbb F$ be an algebraic closure of the prime field $\mathbb F_p$. For an indeterminate $t$, consider the fields of Laurent series, Newton--Puiseux series and Hahn series in $t$, respectively: $$ \mathbb F((t))\subset K=\bigcup_{N\in\mathbb N} \mathbb F((t^{1/N}))\subset H=\mathbb F((t^{\mathbb Q})). $$ For a generalized power series $s=\sum_{q\in\mathbb Q}a_qt^q$, its support is a subset of $\mathbb Q$: $$\op{supp}(s)=\{q\in\mathbb Q\mid a_q\ne0\}.$$ The Hahn field $H$ consists of all generalized power series with well-ordered support. 
The Newton--Puiseux field $K$ contains all series whose support is included in $\frac1N\mathbb Z_{\ge m}$ for some $N\in\mathbb N$, $m\in\mathbb Z$. From now on, we denote by $\op{Irr}_K(b)$ the minimal polynomial over $K$ of any $b\in\overline{K}$. On these three fields we may consider the valuation $v$ defined as $$v(s)=\min(\op{supp}(s)),$$ which clearly satisfies $$v\left(\mathbb F((t))^*\right)=\mathbb Z, \qquad v(K^*)=v(H^*)=\mathbb Q.$$ The valued field $\left(\mathbb F((t)),v\right)$ is henselian, because it is the completion of the discrete rank-one valued field $\left(\mathbb F(t),v\right)$. Since the extension $\mathbb F((t))\subset K$ is algebraic, the valued field $(K,v)$ is henselian too. The Hahn field $H$ is algebraically closed. Thus, it contains an algebraic closure $\overline{K}$ of $K$. The algebraic generalized power series have been described by Kedlaya \cite{Ked0, Ked1}. Let us recall \cite[Lem. 3]{Ked0}, which is essential for our purposes. \begin{lemma}\label{kedlaya} If $s\in H$ is algebraic over $K$, then it is contained in a tower of Artin--Schreier extensions of $K$. In particular, $s$ is separable over $K$ and $\deg_K s$ is a power of $p$. \end{lemma} Any $s\in H$ determines a valuation on $H[x]$ extending $v$: $$ v_s\colon H[x]\longrightarrow\mathbb Q\cup\{\infty\},\qquad g\longmapsto v_s(g)=v(g(s)). $$ We are interested in the valuation on $K[x]$ obtained by restriction of $v_s$, which we still denote by the same symbol $v_s$. If $s$ is algebraic over $K$ and $f=\op{Irr}_K(s)\in K[x]$, we have $v_s(f)=\infty$. Hence, $v_s$ cannot be extended to a valuation on $K(x)$. On the other hand, suppose that $s=\sum_{q\in\mathbb Q}a_qt^q\in H$ is transcendental over $K$ and all its truncations $$ s_r=\sum_{q<r}a_qt^q, \qquad r\in\mathbb R, $$ are algebraic over $K$ and have bounded degree over $K$. Then, it is an easy exercise to check that $v_s$ falls in case (b) of Theorem \ref{main}. Therefore, our example of a valuation with infinite limit-depth must be given by a transcendental $s\in H$, all of whose truncations are algebraic over $K$ and have unbounded degree over $K$. In this case, $v_s$ will necessarily fall in case (c) of Theorem \ref{main}. We want to find an example such that, moreover, all steps in the MLV chain of $v_s$ are limit augmentations. By Lemma \ref{kedlaya}, the truncations of $s$ must belong to some tower of Artin--Schreier extensions of $K$. Let us use a concrete tower constructed by Kuhlmann \cite[Ex. 3.14]{KuhlDefect}. \subsection{A tower of Artin--Schreier extensions of $K$} Let $\wp(g)=g^p-g$ be the Artin--Schreier operator on $K[x]$. It is $\mathbb F_p$-linear and has kernel $\mathbb F_p$. Let us start with the classical Abhyankar example $$ s_0=\sum_{i\ge1}t^{-1/p^i}\in H, $$ which is a root of the polynomial $\varphi_0=\wp(x)-t^{-1}\in K[x]$. Since the denominators of the support of $s_0$ are unbounded, we have $s_0\not\in K$. 
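Indeed, a direct verification in characteristic $p$, using that raising to the $p$-th power is additive, gives
$$\wp(s_0)=s_0^p-s_0=\sum_{i\ge1}t^{-1/p^{i-1}}-\sum_{i\ge1}t^{-1/p^i}=t^{-1},$$
since the two sums telescope. The following small Python sketch (the helper names are ours; truncated series over $\mathbb F_p$ are stored as dictionaries mapping rational exponents to coefficients) checks this relation on a truncation of $s_0$ and computes $v(s_0)=\min(\op{supp}(s_0))=-1/p$; it is only an illustration of the series arithmetic used in this section.
\begin{verbatim}
from fractions import Fraction

p = 3  # any prime works for this illustration

def add(f, g):
    """Sum of two truncated series over F_p (dicts exponent -> coefficient)."""
    h = dict(f)
    for q, c in g.items():
        h[q] = (h.get(q, 0) + c) % p
    return {q: c for q, c in h.items() if c % p}

def frobenius(f):
    """(sum a_q t^q)^p = sum a_q t^{p q}, since a_q^p = a_q for a_q in F_p."""
    return {p * q: c for q, c in f.items()}

def artin_schreier(f):
    """The Artin-Schreier operator f -> f^p - f."""
    return add(frobenius(f), {q: (-c) % p for q, c in f.items()})

def val(f):
    """v(f) = min of the support."""
    return min(f)

# Truncation of Abhyankar's series s_0 = sum_{i >= 1} t^{-1/p^i}:
N = 6
s0 = {Fraction(-1, p**i): 1 for i in range(1, N + 1)}

print(val(s0))              # Fraction(-1, 3), i.e. v(s_0) = -1/p
print(artin_schreier(s0))   # t^{-1} plus the truncation tail -t^{-1/p^N}
\end{verbatim}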
Since the roots of $\varphi_0$ are $s_0+\ell$, for $\ell$ running over $\mathbb F_p$, the polynomial $\varphi_0$ has no roots in $K$. Hence, $\varphi_0$ is irreducible in $K[x]$, because all irreducible polynomials in $K[x]$ have degree a power of $p$. Now, we iterate this construction to obtain a tower of Artin--Schreier extensions $$ K\subsetneq K(s_0)\subsetneq K(s_1) \subsetneq\cdots \subsetneq K(s_n)\subsetneq \cdots $$ where $s_n\in H$ is taken to be a root of $\varphi_n=\wp(x)-s_{n-1}$. The above argument shows that $\varphi_n$ is irreducible over $K(s_{n-1})$ as long as $s_n\not\in K(s_{n-1})$, which is easy to check. From the algebraic relationship $\wp(s_n)=s_{n-1}$ we may deduce a concrete choice for all $s_n$: $$ s_n=\sum_{j\ge n} \binom{j}{n}t^{-1/p^{j+1}},\quad\mbox{ for all }n\ge0, $$ which follows from the well-known identity $$ \binom{j+1}{n+1}=\binom{j}{n+1}+\binom{j}{n},\quad\mbox{ for all }j\ge n. $$ In particular, $$ \deg_K s_n= p^{n+1},\qquad v(s_n)=-1/p^{n+1},\quad\mbox{ for all }n\ge0. $$ For all $n\ge0$, we have $\op{Irr}_K(s_n)=\wp^{n+1}(x)-t^{-1}$, and the set of roots of this polynomial is \begin{equation}\label{ZAS} Z\left(\op{Irr}_K(s_n)\right)=s_n+\operatorname{Ker}(\wp^{n+1})\subset s_n+\mathbb F. \end{equation} In particular, the support of all these conjugates of $s_n$ is contained in $(-1,0]$, and Krasner's constant of $s_n$ is zero: \begin{equation}\label{krasner} \Delta(s_n)=\max\left\{v(s_n-\sigma(s_n))\mid \sigma\in\operatorname{Gal}(\overline{K}/K),\ \sigma(s_n)\ne s_n\right\}=0. \end{equation} We are ready to define our transcendental $s\in H$ as $$ s=\sum_{n\ge0}t^ns_n. $$ Let us introduce some useful notation to deal with the support of $s$ and its truncations. Consider the well-ordered set $$ S=\left\{(n,i)\in\mathbb Z^2_{\operatorname{lex}}\mid \ 0\le n\le i,\quad p\nmid \binom{i}{n}\right\}. $$ The support of $s$ is the image of the following order-preserving embedding $$ \delta\colon S\hookrightarrow \mathbb Q,\qquad (n,i)\longmapsto \delta(n,i)=n-\dfrac1{p^{i+1}}. $$ The limit elements in $S$ are the pairs $(n,n)$ for $n\ge0$. These elements have no immediate predecessor in $S$. On the other hand, all elements in $S$ have an immediate successor: $$ (n,i)\ \rightsquigarrow\ (n,i+m), $$ where $m$ is the least natural number such that $p\nmid\binom{i+m}{n}$. For all $(n,i)\in S$ we consider the truncations of $s$ determined by the rational numbers $\delta(n,i)$: $$ s_{n,i}:=s_{\delta(n,i)}=\sum_{m=0}^{n-1}t^ms_m+t^n\sum_{j=n}^{i-1}\binom{j}{n}t^{-1/p^{j+1}}. $$ For the limit indices $(n,n)\in S$ the truncations are: $$ s_{n,n}=\sum_{m=0}^{n-1}t^ms_m. 
$$ Since $(0,0)=\min(S)$, the truncation $s_{0,0}=0$ is defined by an empty sum. All truncations of $s$ are algebraic over $K$. Their degree is $$ \deg_K s_{n,i}= p^n,\quad\mbox{ for all }(n,i)\in S, $$ because $s_{n-1}$ has degree $p^n$, and all other summands have strictly smaller degree. For instance, the ``tail'' \ $t^n\sum_{j=n}^{i-1}\binom{j}{n}t^{-1/p^{j+1}}$ belongs to $K$. The unboundedness of the degrees of the truncations of $s$ is not sufficient to guarantee that $s$ is transcendental over $K$. To this end, we must analyze some more properties of these truncations. For any pair $(a,\delta)\in \overline{K}\times \mathbb Q$, consider the ultrametric ball $$ B=B_\delta(a)=\{b\in\overline{K}\mid v(b-a)\ge\delta\}. $$ We define the \emph{degree} of such a ball over $K$ as $$ \deg_K B=\min\{\deg_K b\mid b\in B\}. $$ \begin{lemma}\label{minpair2} For all $n\ge 1$, we have $\deg_K B_{n-1}\left(s_{n,n}\right)=p^n$. \end{lemma} \begin{proof} Denote $B=B_{n-1}\left(s_{n,n}\right)$. From the computation in (\ref{krasner}), we deduce that Krasner's constant of $s_{n,n}$ is $\Delta(s_{n,n})=n-1$. Any $u\in B$ may be written as $$ u=s_{n,n}+\ell\, t^{n-1}+b, \qquad \ell\in\mathbb F,\quad b\in \overline{K},\quad v(b)>n-1. $$ Let $z=s_{n,n}+\ell\, t^{n-1}$. Since $\ell\, t^{n-1}$ belongs to $K$, we have $$\deg_K z=p^n,\qquad \Delta(z)=n-1.$$ Since $v(u-z)>\Delta(z)$, we have $K(z)\subset K(u)$ by Krasner's lemma. Hence, $\deg_K u\ge p^n$. Since $B$ contains elements of degree $p^n$, we conclude that $\deg_K B=p^n$. \end{proof} \begin{corollary}\label{strans} The element $s\in H$ is transcendental over $K$. \end{corollary} \begin{proof} If $s$ were algebraic over $K$, it would belong to $B_{n-1}\left(s_{n,n}\right)$ for all $n$. This is impossible, because $\deg_K s$ would be unbounded, by Lemma \ref{minpair2}. \end{proof} \subsection{An MLV chain of $v_s$ as a valuation on $\overline{K}[x]$} For all $(n,i)\in S$, we have $$ v_s(x-s_{n,i})=v(s-s_{n,i})=n-\dfrac1{p^{i+1}}=\delta(n,i). $$ Let $v_{n,i}=\omega_{s_{n,i},\delta(n,i)}$ be the depth-zero valuation on $\overline{K}[x]$ associated to the pair $(s_{n,i},\delta(n,i))\in\overline{K}\times\mathbb Q$; that is, $$ v_{n,i}\left(\sum_{0\le\ell}a_\ell\left(x-s_{n,i}\right)^\ell\right)=\min_{0\le \ell}\left\{v_s\left(a_\ell\left(x-s_{n,i}\right)^\ell\right)\right\}=\min_{0\le \ell}\left\{v\left(a_\ell\right)+\ell\,\delta(n,i)\right\}. 
\begin{lemma}\label{vivs}
For all $(n,i),(m,j)\in S$ we have $v_{n,i}(x-s_{m,j})=\min\{\delta(n,i),\delta(m,j)\}$. In particular, $v_{n,i}<v_s$ for all $(n,i)\in S$.
\end{lemma}

\begin{proof}
The computation of $v_{n,i}(x-s_{m,j})$ follows immediately from the definition of $v_{n,i}$. The inequality $v_{n,i}\le v_s$ follows from comparing the action of both valuations on $(x-s_{n,i})$-expansions. Finally, if we take $\delta(n,i)<\delta(m,j)$, we get
$$
v_{n,i}(x-s_{m,j})=\delta(n,i)<\delta(m,j)=v_s(x-s_{m,j}).
$$
This shows that $v_{n,i}<v_s$.
\end{proof}

\begin{lemma}\label{vsb}
The family $\mathcal{C}=\left(v_{n,i}\right)_{(n,i)\in S}$ is a totally ordered family of valuations on $\overline{K}[x]$ of stable degree one, admitting $v_s$ as its stable limit.
\end{lemma}

\begin{proof}
Let us see that $\mathcal{C}$ is a totally ordered family of valuations. More precisely,
$$
(n,i)<(m,j)\ \ \Longrightarrow\ \ \delta(n,i)<\delta(m,j)\ \ \Longrightarrow\ \ v_{n,i}<v_{m,j}<v_s.
$$
Indeed, this follows from (\ref{balls}) because $v(s_{n,i}-s_{m,j})=v(s_{n,i}-s)=\delta(n,i)$. Clearly, $\mathcal{C}$ contains no maximal element, and all valuations in $\mathcal{C}$ have degree one. Let us show that all polynomials $x-a\in\overline{K}[x]$ are $\mathcal{C}$-stable, and that the stable value coincides with $v_s(x-a)=v(s-a)$. Since $s$ is transcendental over $K$, we have $s\ne a$, so that $q=v(s-a)$ belongs to $\mathbb Q$. For all $(n,i)\in S$ such that $\delta(n,i)>q$ we have
$$
v_{n,i}(x-a)=\min\{v(a-s_{n,i}),\delta(n,i)\}=\min\{q,\delta(n,i)\}=q=v_s(x-a).
$$
This ends the proof of the lemma.
\end{proof}\medskip

Therefore, $v_s$ falls in case (b) of Theorem \ref{main}, as a valuation on $\overline{K}[x]$. An MLV chain of $v_s$ is, for instance,
$$
v_{0,0}\ \stackrel{\mathcal{C}}{\longrightarrow}\ v_s=\lim(\mathcal{C}).
$$
In order to obtain an MLV chain of $v_s$ as a valuation on $K[x]$, we need to ``descend'' this result to $K[x]$. In this regard, we borrow some ideas from \cite{Vaq3}.

\subsection{An MLV chain of $v_s$ as a valuation on $K[x]$}
We say that $(a,\delta)\in \overline{K}\times \mathbb Q$ is a \emph{minimal pair} if $\deg_K B_\delta(a)=\deg_K a$. This concept was introduced in \cite{APZ}. By equation (\ref{balls}), for all $b\in\overline{K}$ we have
$$\omega_{a,\delta}=\omega_{b,\delta} \ \ \Longleftrightarrow\ \ b\in B_\delta(a).$$
However, only the minimal pairs $(a,\delta)$ of this ball carry all the essential information about the valuation on $K[x]$ obtained by restriction of $\omega_{a,\delta}$.

\begin{lemma}\cite[Prop. 3.3]{Vaq3}\label{minpair}
For $(a,\delta)\in\overline{K}\times\mathbb Q$, let $\mu$ be the valuation on $K[x]$ obtained by restriction of the valuation $\omega=\omega_{a,\delta}$ on $\overline{K}[x]$.
Then, for all $g\in K[x]$, $\op{in}_\mu g$ is a unit in $\mathcal{G}_\mu$ if and only if $\op{in}_{\omega} g$ is a unit in $\mathcal{G}_{\omega}$.
\end{lemma}

The following result was originally proved in \cite{PP}; another proof can be found in \cite[Thm. 1.1]{N2019}.

\begin{lemma}\label{minp}
For a minimal pair $(a,\delta)\in\overline{K}\times\mathbb Q$, let $\mu$ be the valuation on $K[x]$ obtained by restriction of the valuation $\omega_{a,\delta}$ on $\overline{K}[x]$. Then, $\op{Irr}_K(a)$ is a key polynomial for $\mu$, of minimal degree.
\end{lemma}

We need one last auxiliary result.

\begin{lemma}\label{minpair3}
For all $(n,i)\in S$ the pair $\left(s_{n,i},\delta(n,i)\right)$ is minimal.
\end{lemma}

\begin{proof}
All pairs $\left(s_{0,i},\delta(0,i)\right)$ are minimal, because $\deg_K s_{0,i}=1$. For $n>0$, denote $B_{n,i}=B_{\delta(n,i)}\left(s_{n,i}\right)$. Since $B_{n,i}\subset B_{n-1}\left(s_{n,n}\right)$, Lemma \ref{minpair2} shows that
$$
\deg_K B_{n,i}\,\ge\,\deg_K B_{n-1}\left(s_{n,n}\right)=p^n.
$$
Since the center $s_{n,i}$ of the ball $B_{n,i}$ has $\deg_K s_{n,i}=p^n$, we deduce $\deg_K B_{n,i}=p^n$. Thus, $\left(s_{n,i},\delta(n,i)\right)$ is a minimal pair.
\end{proof}\medskip

\noindent{\bf Notation. }Let us denote the restriction of $v_{n,i}$ to $K[x]$ by
$$
\rho_{n,i}=\left(v_{n,i}\right)_{\mid K[x]}.
$$
Moreover, for the limit indices $(n,n)$, $n\ge0$, we denote
$$
\mu_n=\rho_{n,n},\qquad \phi_n=\op{Irr}_K\left(s_{n,n}\right),\qquad \gamma_n=v_s\left(\phi_n\right).
$$
By Lemmas \ref{vivs} and \ref{vsb}, the set of all valuations $\left(\rho_{n,i}\right)_{(n,i)\in S}$ is totally ordered, and $\rho_{n,i}<v_s$ for all $(n,i)$.

\begin{proposition}\label{mainvs}
For all $n\ge0$, the set $\mathcal{C}_n=\left(\rho_{n,i}\right)_{(n,i)\in S}$ is an essential continuous family of stable degree $p^n$. Moreover, the polynomial $\phi_{n+1}$ belongs to $\operatorname{KP}_\infty\left(\mathcal{C}_n\right)$ and $\mu_{n+1}=[\mathcal{C}_n;\,\phi_{n+1},\gamma_{n+1}]$.
\end{proposition}

\begin{proof}
Let us fix some $n\ge0$. By Lemmas \ref{minp} and \ref{minpair3}, all valuations in $\mathcal{C}_n$ have degree $p^n$. Hence, $\mathcal{C}_n$ is a totally ordered family of stable degree $p^n$. Let us show that all monic $g\in K[x]$ with $\deg(g)\le p^n$ are $\mathcal{C}_n$-stable. Let $u\in\overline{K}$ be a root of $g$. By Lemma \ref{minpair2}, $u\not\in B_n\left(s_{n+1,n+1}\right)$, so that $v\left(s_{n+1,n+1}-u\right)<n$. Since $v\left(s-s_{n+1,n+1}\right)=\delta(n+1,n+1)>n$, we deduce that $v(s-u)<n$.
Therefore, we may find $j\ge n$ such that
$$
v(s-u)<n-\dfrac1{p^{j+1}}
$$
for all roots $u$ of $g$. As we showed in the course of the proof of Lemma \ref{vsb}, this implies
$$v_{n,i}(x-u)=v_s(x-u)\quad\mbox{ for all }(n,i)\ge (n,j)$$
simultaneously for all roots $u$ of $g$. Therefore, $\rho_{n,i}(g)=v_s(g)$ for all $(n,i)\ge (n,j)$, and $g$ is $\mathcal{C}_n$-stable.

Now, let us show that $\phi_{n+1}$ is $\mathcal{C}_n$-unstable. For all $i\ge n$, we have
$$v\left(s_{n+1,n+1}-s_{n,i}\right)=\delta(n,i)=v_{n,i}(x-s_{n,i}).$$
By \cite[Prop. 6.3]{KP}, $x-s_{n+1,n+1}$ is a key polynomial for $v_{n,i}$; thus, $\op{in}_{v_{n,i}}\left(x-s_{n+1,n+1}\right)$ is not a unit in the graded algebra $\mathcal{G}_{v_{n,i}}$. Hence, $\op{in}_{v_{n,i}}\phi_{n+1}$ is not a unit in $\mathcal{G}_{v_{n,i}}$, and Lemma \ref{minpair} shows that $\op{in}_{\rho_{n,i}}\phi_{n+1}$ is not a unit in $\mathcal{G}_{\rho_{n,i}}$. Since this holds for all $i$, Lemma \ref{stable=unit} shows that $\phi_{n+1}$ is $\mathcal{C}_n$-unstable. Since the irreducible polynomials in $K[x]$ have degree a power of $p$ (Lemma \ref{kedlaya}), $\phi_{n+1}$ is a $\mathcal{C}_n$-unstable polynomial of minimal degree. Therefore, $\mathcal{C}_n$ is an essential continuous family and $\phi_{n+1}\in\operatorname{KP}_\infty(\mathcal{C}_n)$.

Since $\phi_{n+1}$ is $\mathcal{C}_n$-unstable, $\rho_{n,i}\left(\phi_{n+1}\right)<v_s\left(\phi_{n+1}\right)=\gamma_{n+1}$ for all $i$. Thus, it makes sense to consider the limit augmentation $\mu=[\mathcal{C}_n;\,\phi_{n+1},\gamma_{n+1}]$. Let us show that $\mu=\mu_{n+1}$ by comparing their action on $\phi_{n+1}$-expansions. For all $g=\sum_{0\le \ell}a_\ell\phi_{n+1}^\ell$,
\begin{equation}\label{bothmu}
\mu_{n+1}(g)=\min_{0\le\ell}\left\{\mu_{n+1}\left(a_\ell\phi_{n+1}^\ell\right)\right\},\qquad\mu(g)=\min_{0\le\ell}\left\{\mu\left(a_\ell\phi_{n+1}^\ell\right)\right\}.
\end{equation}
Since $\deg(a_\ell)<p^{n+1}=\deg\left(\phi_{n+1}\right)$, all these coefficients $a_\ell$ are $\mathcal{C}_n$-stable. Hence, $\rho_{n,i}\left(a_\ell\right)=v_s\left(a_\ell\right)$ for all $(n,i)$ sufficiently large.
Since $\rho_{n,i}<\mu_{n+1}<v_s$, we deduce
$$\mu\left(a_\ell\right)=\rho_{\mathcal{C}_n}\left(a_\ell\right)=\rho_{n,i}\left(a_\ell\right)=\mu_{n+1}\left(a_\ell\right)=v_s\left(a_\ell\right).$$
Finally, for all $i\ge n+1$, we have $v\left(s_{n+1,i}-s_{n+1,n+1}\right)=\delta(n+1,n+1)$, so that
$$
v_{n+1,n+1}\left(x-s_{n+1,n+1}\right)=\delta(n+1,n+1)=v_{n+1,i}\left(x-s_{n+1,n+1}\right).
$$
By (\ref{ZAS}), the support of every other root $u$ of $\phi_{n+1}$ is contained in $(-1,n]$. Thus, for all $i\ge n+1$ we get
$$
v_{n+1,n+1}\left(x-u\right)=v\left(s_{n+1,n+1}-u\right)=v\left(s_{n+1,i}-u\right)=v_{n+1,i}\left(x-u\right).
$$
Since $\mu_{n+1}=\rho_{n+1,n+1}<\rho_{n+1,i}<v_s$, \cite[Cor. 2.5]{MLV} implies
$$
\mu_{n+1}\left(\phi_{n+1}\right)=\rho_{n+1,i}\left(\phi_{n+1}\right)=v_s\left(\phi_{n+1}\right)=\gamma_{n+1}=\mu\left(\phi_{n+1}\right).
$$
By (\ref{bothmu}), we deduce that $\mu=\mu_{n+1}$.
\end{proof}\medskip

Therefore, we get a countable chain of limit augmentations
$$
\mu_0\ \stackrel{\phi_1,\gamma_1}{\longrightarrow}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\longrightarrow}\ \cdots \ \longrightarrow\ \mu_{n-1} \ \stackrel{\phi_{n},\gamma_{n}}{\longrightarrow}\ \mu_{n} \ \longrightarrow\ \cdots
$$
which is an MLV chain. Indeed, the MLV condition amounts to
$$
\phi_n\not\in \mathbf{t}(\mu_n,\mu_{n+1})\quad \mbox{ for all }n\ge0.
$$
This means $\mu_n(\phi_n)=\mu_{n+1}(\phi_n)$ for all $n$. Since $\mu_n<\mu_{n+1}<v_s$, the desired equality follows from $\mu_n(\phi_n)=\gamma_n=v_s(\phi_n)$.

Finally, the family $\left(\mu_n\right)_{n\in\mathbb N}$ has stable limit $v_s$. Indeed, for all nonzero $f\in K[x]$, there exists $n\in\mathbb N$ such that $\deg(f)<p^n=\deg(\mu_n)$. Let $\mathbf{t}(\mu_n,v_s)=[\varphi]_{\mu_n}$. Since $\deg(f)<\deg(\varphi)$, we have $\varphi\nmid_{\mu_n}f$, and this implies $\mu_n(f)=v_s(f)$ by Lemma \ref{propertiesTMN}. As a consequence, $v_s$ has infinite limit-depth.

\begin{thebibliography}{}

\bibitem{APZ} V. Alexandru, N. Popescu, A. Zaharescu, \emph{Minimal pairs of definition of a residual transcendental extension of a valuation}, J. Math. Kyoto Univ. {\bf 30} (1990), no. 2, 207--225.

\bibitem{newapp} J. Gu\`{a}rdia, J. Montes, E. Nart, \emph{A new computational approach to ideal theory in number fields}, Found. Comput. Math.
{\bf 13} (2013), 729--762.

\bibitem{gen} J. Gu\`{a}rdia, E. Nart, \emph{Genetics of polynomials over local fields}, in \emph{Arithmetic, geometry, and coding theory}, Contemp. Math., vol. 637 (2015), 207--241.

\bibitem{hos} F. J. Herrera Govantes, M. A. Olalla Acosta, M. Spivakovsky, \emph{Valuations in algebraic field extensions}, J. Algebra {\bf 312} (2007), no. 2, 1033--1074.

\bibitem{hmos} F. J. Herrera Govantes, W. Mahboub, M. A. Olalla Acosta, M. Spivakovsky, \emph{Key polynomials for simple extensions of valued fields}, preprint, arXiv:1406.0657v4 [math.AG], 2018.

\bibitem{Ked0} K. S. Kedlaya, \emph{The algebraic closure of the power series field in positive characteristic}, Proc. Amer. Math. Soc. {\bf 129} (2001), no. 12, 3461--3470.

\bibitem{Ked1} K. S. Kedlaya, \emph{On the algebraicity of generalized power series}, Beitr. Algebra Geom. {\bf 58} (2017), 499--527.

\bibitem{Kuhl} F.-V. Kuhlmann, \emph{Value groups, residue fields, and bad places of rational function fields}, Trans. Amer. Math. Soc. {\bf 356} (2004), no. 11, 4559--4660.

\bibitem{KuhlDefect} F.-V. Kuhlmann, \emph{The defect}, in: Commutative Algebra: Noetherian and non-Noetherian Perspectives, M. Fontana, S.-E. Kabbaj, B. Olberding and I. Swanson (eds.), Springer, 2011.

\bibitem{mcla} S. MacLane, \emph{A construction for absolute values in polynomial rings}, Trans. Amer. Math. Soc. {\bf 40} (1936), 363--395.

\bibitem{KP} E. Nart, \emph{Key polynomials over valued fields}, Publ. Mat. {\bf 64} (2020), 195--232.

\bibitem{MLV} E. Nart, \emph{MacLane--Vaqui\'e chains of valuations on a polynomial ring}, Pacific J. Math. {\bf 311} (2021), no. 1, 165--195.

\bibitem{N2019} J. Novacoski, \emph{Key polynomials and minimal pairs}, J. Algebra {\bf 523} (2019), 1--14.

\bibitem{PP} L. Popescu, N. Popescu, \emph{On the residual transcendental extensions of a valuation. Key polynomials and augmented valuations}, Tsukuba J. Math. {\bf 15} (1991), 57--78.

\bibitem{Vaq} M. Vaqui\'e, \emph{Extension d'une valuation}, Trans. Amer. Math. Soc. {\bf 359} (2007), no. 7, 3439--3481.

\bibitem{VaqDef} M. Vaqui\'e, \emph{Famille essentielle de valuations et d\'efaut d'une extension}, J. Algebra {\bf 311} (2007), no. 2, 859--876.

\bibitem{Vaq3} M. Vaqui\'e, \emph{Valuation augment\'ee, paire minimale et valuation approch\'ee}, preprint, hal-02565309, version 2, 2021.

\end{thebibliography}
\end{document}
\begin{document}
\date{\today}
\title{Scheme for generating coherent state superpositions with realistic cross-Kerr nonlinearity}
\author{Bing He}
\email{[email protected]}
\affiliation{Department of Physics and Astronomy, Hunter College of the City University of New York, 695 Park Avenue, New York, NY 10065, USA}
\author{Mustansar Nadeem}
\affiliation{Department of Physics, Quaid-i-Azam University, Islamabad 45320, Pakistan}
\author{J\'{a}nos A. Bergou}
\affiliation{Department of Physics and Astronomy, Hunter College of the City University of New York, 695 Park Avenue, New York, NY 10065, USA}
\pacs{03.67.Mn, 42.50.Dv, 03.67.Lx}
\begin{abstract}
We present a simple scheme using two identical cross-phase modulation processes in a decohering environment to generate superpositions of two coherent states with opposite phases, which are known as cat states. The scheme is shown to be robust against decoherence due to photon absorption losses and other errors, and the design of its experimental setup is also discussed.
\end{abstract}
\date{\today}
\maketitle

Schr\"odinger's famous cat paradox can be realized by optical coherent state superpositions of the form $|CSS_{\pm}(\beta)\rangle=N_{\pm}~(|\beta\rangle \pm |-\beta\rangle)$, where $|\pm\beta\rangle$ are coherent states of amplitude $|\beta|$ and $N_{\pm}=(2\pm 2\exp[-2|\beta|^2])^{-\frac{1}{2}}$. $|CSS_{+}\rangle$ ($|CSS_{-}\rangle$) is called an even (odd) cat state, since it is a superposition of even (odd) photon number states. Cat states and other coherent state superpositions have been proposed for implementing various quantum information tasks such as linear-optics quantum computation \cite{ralph, lund} and quantum metrology \cite{ralph2, gilchrist, m-b}. The generation of these states has therefore been under intensive investigation recently (see \cite{g-v} for a comprehensive review).

One line of research in the field is to generate cat states through a cross-phase modulation (XPM) process in a Kerr medium \cite{v, gerry, jeong}. Such an ideal process is described by the Hamiltonian $H=-\hbar\chi \hat{a}^{\dagger}\hat{a} \hat{b}^{\dagger}\hat{b}$, where $\chi$ is the nonlinear strength, and $\hat{a}$ and $\hat{b}$ are the two coupled optical modes. A simple approach of this kind is Gerry's scheme \cite{gerry}, in which an input coherent state $|\alpha\rangle_1$, acting as the probe in Fig. 1, interacts through an XPM process with one of the single photon modes, $|0,1\rangle_{2,3}\equiv |0\rangle$ and $|1,0\rangle_{2,3}\equiv |1\rangle$, acting as the signal, and the state of the coherent beam is post-selected to a cat state by the detection of the single photon mode at $D_1$ or $D_2$, provided the XPM phase $\theta$ can reach $\pi$. Simple though the scheme is, realizing such a large $\theta$ is still challenging with current technology. Even with electromagnetically induced transparency (EIT) materials \cite{eit}, the initially achieved phase shift $\theta$ at the single photon level is only of the order of $10^{-5}$ \cite{s-i}. Moreover, all Kerr nonlinear materials carry a complex third order susceptibility $\chi^{(3)}=\mathrm{Re}\,\chi^{(3)}+i\,\mathrm{Im}\,\chi^{(3)}$, whose imaginary part inevitably causes decay of the coupled optical modes.
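As a point of reference before losses are included, it is worth recalling how the ideal XPM unitary acts on the probe; the identity below is a standard property of the Kerr Hamiltonian written above and is quoted only for orientation. Because the signal photon is a number eigenstate, the interaction simply rotates the coherent amplitude of the probe,
\begin{eqnarray*}
e^{i\chi t\, \hat{a}^{\dagger}\hat{a}\hat{b}^{\dagger}\hat{b}}\,|\alpha\rangle_a|1\rangle_b=|\alpha e^{i\chi t}\rangle_a|1\rangle_b,
\qquad
e^{i\chi t\, \hat{a}^{\dagger}\hat{a}\hat{b}^{\dagger}\hat{b}}\,|\alpha\rangle_a|0\rangle_b=|\alpha\rangle_a|0\rangle_b,
\end{eqnarray*}
so that the ideal XPM phase is $\theta=\chi t$ for an interaction time $t$.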
Under the decoherence effect caused by such losses, the XPM processes acting on the density matrix $\rho$ of the involved system are quantum operations (QOs) described by the master equation
\begin{eqnarray}
\frac{d\rho}{dt}=i\chi\sum_{i,j}[\hat{a}^{\dagger}_i\hat{a}_i \hat{a}_j^{\dagger}\hat{a}_j, \rho]+\frac{\gamma}{2}\sum_{i}\{[\hat{a}_i\rho,\hat{a}_i^{\dagger}]+[\hat{a}_i,\rho\hat{a}_i^{\dagger}]\},~~
\label{1}
\end{eqnarray}
rather than by ideal unitary transformations. The nonlinear strength $\chi$ and the damping rate $\gamma$ of the coupled optical modes $\hat{a}_i$ come from the real and the imaginary parts of $\chi^{(3)}$, respectively.
\begin{figure}
\caption{(color online) The setup of the single XPM scheme, where the input qubit and coherent state are in the state $2^{-\frac{1}{2}}(|0\rangle+|1\rangle)|\alpha\rangle_1$.}
\end{figure}
To generate cat states with a weak cross-Kerr nonlinearity, Jeong proposed applying the idea of compensating for a small $\theta$ with a large intensity of the input coherent beam in \cite{cc}, and obtained a post-selected mixed state \cite{jeong}
\begin{eqnarray}
\rho_{\pm}(t)&\sim&|A\alpha\rangle\langle A\alpha|\pm C(t)|A\alpha\rangle\langle A\alpha e^{i\theta}|\pm C^{\ast}(t)|A\alpha e^{i\theta}\rangle\langle A\alpha| \nonumber\\
&+&|A\alpha e^{i\theta}\rangle\langle A\alpha e^{i\theta}|\nonumber\\
&=& \frac{1+|C(t)|}{2}|CG_{\pm}\rangle\langle CG_{\pm}| +\frac{1-|C(t)|}{2}|CG_{\mp}\rangle\langle CG_{\mp}|~~~~
\label{2}
\end{eqnarray}
under the decoherence effect, where $|CG_{\pm}\rangle=|A \alpha\rangle\pm \{C^{\ast}(t)/|C(t)|\}|A \alpha e^{i\theta}\rangle$, $A=e^{-\frac{\gamma}{2}t}$, and the closed form of the complex coherence parameter $C(t)$ is given in \cite{l-n}. After one performs a displacement $D(x)$ such that $|A \alpha\rangle\rightarrow |\beta\rangle$ and $|A \alpha e^{i\theta}\rangle\rightarrow |-\beta\rangle$, the pure state components $|CG_{\pm}\rangle$ are transformed to $|CG'_{\pm}\rangle\sim|\beta\rangle\pm e^{i\arg C^{\ast}} e^{i\phi_D}|-\beta\rangle$ ($\phi_D$ is the relative phase from the displacement). A small $\theta$ can therefore be compensated by a large $|\alpha|$, so that the amplitude $|\beta|$ of the realized $|CG'_{\pm}\rangle$ can be large enough. One problem with the scheme is the implementation of the displacement $D(x)$ on an intense coherent beam in the state of Eq. (\ref{2}). A displacement on a coherent state can be approximated by a beam splitter of extremely high transmissivity, fed with a very intense coherent beam at the second port \cite{paris}. However, if one wants to displace the state by a very large $|x|$ as in \cite{jeong}, the intensity of the second beam would be beyond any reasonable value. The other problem is that the generated $|CG'_{\pm}\rangle$ differs from an even or odd cat state $|CSS_{\pm}\rangle$ by a relative phase $\phi=\arg C^{\ast}+\phi_D$ arising from the decoherence and the displacement. Directly changing the relative phase $\phi$ requires some type of nonlinear interaction \cite{g-v}. There are two ways to convert $|CG'_{\pm}\rangle$ to $|CSS_{\pm}\rangle$ with only linear optics: one is to prepare the state of the input single photon qubit as $|0\rangle+e^{-i\phi}|1\rangle$ so as to cancel the relative phase of the coherent states; the other is to have two such states $|CG'_{1,+}\rangle=|\beta\rangle+ e^{i\phi_1}|-\beta\rangle$ and $|CG'_{2,+}\rangle=|\beta\rangle+ e^{i\phi_2}|-\beta\rangle$ satisfying $\phi_1+\phi_2=\pi$, and transform them together by a beam splitter to $|\sqrt{2}\beta\rangle+ |-\sqrt{2}\beta\rangle$ \cite{am}.
Both methods require a perfect match of the qubit and coherent-state relative phases (or of the two relative phases of the coherent states), so the scheme is sensitive to these independently created phases when generating $|CSS_{\pm}\rangle$. Moreover, the $|1\rangle$ component of the single photon picks up an extra phase $\phi_E$ and a decay factor, owing to its propagation velocity being different from that of the $|0\rangle$ component and to its loss in the nonlinear medium, which adds further complications to the experimental realization of the scheme.
\begin{figure}
\caption{(color online) Improved setup for generating even and odd cat states. Under the decoherence from photon absorption losses, the two ideal unitary transformations of the XPM processes should be replaced by the effective actions of two quantum operations, which map the input to a mixture of even and odd cat states conditioned on the detection of the $D_1$ or $D_2$ mode.}
\end{figure}

Here we present a double XPM scheme, outlined in Fig. 2, to overcome the above-mentioned shortcomings of the single XPM scheme (a similar scheme, which does not consider the photon absorption losses in the XPM processes, is given in \cite{k-p}). By {\it double XPM} we mean two identical XPM processes inducing the same phase $\theta$. We choose the single photon state to be the superposition of two polarizations, $2^{-\frac{1}{2}}(|H\rangle+|V\rangle)$ ($H$ and $V$ denote horizontal and vertical polarization, respectively), but the effect will be the same if we use the single photon state of Fig. 1. Separated by a 50/50 beam splitter $BS_1$ and a polarization beam splitter (PBS), the coherent beam and the single photon, as the whole system, will be in the following input state:
\begin{eqnarray}
|\Psi\rangle_{in}=\frac{1}{\sqrt{2}}\left(|H\rangle_3+|V\rangle_4\right)|\alpha\rangle_1|\alpha\rangle_2.
\end{eqnarray}
To study its evolution determined by Eq. (\ref{1}), we should consider four modes, corresponding to the two coherent beams and the $H$/$V$ polarizations of the single photon, and the first summation $\sum_{i,j}$ in Eq. (\ref{1}) will run over the mode pairs $(1,3)$ and $(2,4)$. For simplicity we use the following operators:
\begin{eqnarray}
{\cal K}_{ij}\rho&=&i\chi[\hat{a}^{\dagger}_i\hat{a}_i \hat{a}_j^{\dagger}\hat{a}_j, \rho],\nonumber\\
{\cal J}_i \rho&=&\frac{\gamma}{2}[\hat{a}_i\rho,\hat{a}_i^{\dagger}], ~~~~~~{\cal L}_i \rho=\frac{\gamma}{2}[\hat{a}_i,\rho\hat{a}_i^{\dagger}].
\end{eqnarray}
By dividing the interaction time into infinitely many small periods, we express the QOs on the input, $\rho(t_0)=|\Psi\rangle_{in}\langle\Psi|$, from $t_0$ to $t$ as follows:
\begin{eqnarray}
&&\rho(t)=\lim_{N\rightarrow \infty}\prod_{k=1}^{N-1}\underbrace{(I+\sum_i({\cal J}_i+{\cal L}_i )\Delta t)}\limits_{{\cal D}(t_k)}\underbrace{(I+\sum_{i,j}{\cal K}_{i,j}\Delta t)}\limits_{{\cal U}(t_k)}\rho(t_0),\nonumber\\
\end{eqnarray}
with $\Delta t=(t-t_0)/N$ and $t_k=t_0+k\Delta t$.
The first small step of the operation, ${\cal D}(t_1){\cal U}(t_1)$, maps $\rho(t_0)$ to (the mode indices are omitted)
\begin{widetext}
\begin{eqnarray}
\rho(t_1)&=& \frac{1}{2} {\cal D}(t_1){\cal U}(t_1)\{(|H\rangle+|V\rangle)(\langle H|+ \langle V|)\otimes |\alpha\rangle\langle\alpha|\otimes |\alpha\rangle\langle\alpha|\}\nonumber\\
&=&\frac{1}{2}{\cal D}(t_1)\left(|H\rangle|\alpha e^{i\chi \Delta t}\rangle|\alpha\rangle+|V\rangle |\alpha\rangle |\alpha e^{i\chi \Delta t}\rangle\right) \left(\langle H| \langle\alpha e^{i\chi \Delta t} |\langle\alpha|+\langle V|\langle \alpha|\langle\alpha e^{i\chi \Delta t}|\right)\nonumber\\
&\sim&|H\rangle\langle H|\otimes|\alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t},\alpha e^{-\frac{\gamma}{2}\Delta t}\rangle \langle\alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t},\alpha e^{-\frac{\gamma}{2}\Delta t}|+|V\rangle\langle V|\otimes|\alpha e^{-\frac{\gamma}{2}\Delta t},\alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t}\rangle \langle\alpha e^{-\frac{\gamma}{2}\Delta t},\alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t}|\nonumber\\
&+&C_1|H\rangle\langle V|\otimes|\alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t},\alpha e^{-\frac{\gamma}{2}\Delta t}\rangle \langle\alpha e^{-\frac{\gamma}{2}\Delta t},\alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t}|\nonumber\\
&+&C_1|V\rangle\langle H|\otimes|\alpha e^{-\frac{\gamma}{2}\Delta t},\alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t}\rangle \langle \alpha e^{(-\frac{\gamma}{2}+i\chi) \Delta t},\alpha e^{-\frac{\gamma}{2}\Delta t}|,
\label{6}
\end{eqnarray}
\end{widetext}
where $C_1=\exp\{-(1-e^{-\gamma\Delta t})|\alpha e^{i\chi\Delta t}-\alpha|^2\}$. The two identical unitary operations between the mode pairs $(1,3)$ and $(2,4)$ in ${\cal U}(t_1)$ create the symmetric form of the coherent states on the second line of Eq. (\ref{6}), so that the phases gained from ${\cal D}(t_1)$ by the off-diagonal qubit terms, $|H\rangle\langle V|$ and $|V\rangle\langle H|$, cancel to yield a real coefficient $C_1$. The approximation on the third line of Eq. (\ref{6}) consists in neglecting a common decay factor for all four qubit terms, which arises from the symmetric XPM processes. The $k$-th step of the operation, ${\cal D}(t_k){\cal U}(t_k)$, contributes a similar coefficient $C_k$. The QOs of the XPM processes therefore map $\rho(t_0)$ to $\rho(t)$, which can be decomposed as
\begin{eqnarray}
\rho(t)\sim\frac{1+C(t)}{2}|CS_{+}\rangle\langle CS_{+}| +\frac{1-C(t)}{2}|CS_{-}\rangle\langle CS_{-}|,~
\label{8}
\end{eqnarray}
where
\begin{eqnarray}
|CS_{\pm}\rangle&=&|H\rangle|e^{(-\frac{\gamma}{2}+i\chi)t}\alpha \rangle|e^{-\frac{\gamma}{2}t}\alpha \rangle\nonumber\\
&\pm& |V\rangle |e^{-\frac{\gamma}{2}t}\alpha\rangle|e^{(-\frac{\gamma}{2}+i\chi)t}\alpha \rangle.
\label{9}
\end{eqnarray}
The coherence parameter
\begin{eqnarray}
&&C(t)=\lim_{N\rightarrow \infty} C_1C_2\cdots C_{N-1}\nonumber\\
&=&\exp\{-2|\alpha|^2(\frac{\chi^2}{\gamma^2+\chi^2}-e^{-\gamma t}+\frac{\gamma^2}{\gamma^2+\chi^2}e^{-\gamma t}\cos\chi t\nonumber \\
&&-\frac{\gamma\chi}{\gamma^2+\chi^2}e^{-\gamma t}\sin\chi t)\}
\label{10}
\end{eqnarray}
is shown in Fig. 3. Two 50/50 beam splitters, $BS_2$ and $BS_3$, are applied to transform the coherent and the single photon modes in Eq. (\ref{9}) to the proper forms.
If one detects the single photon mode $D_1$ or $D_2$ in Fig. 2, the generated state will be a mixture of an even and an odd cat state $|CSS_{\pm}(\beta)\rangle$, with its size given by
\begin{eqnarray}
|\beta|^2=2e^{-\gamma t}\sin^2\frac{\chi t}{2}|\alpha|^2,
\label{11}
\end{eqnarray}
and its fidelity by $F=(1+C(t))/2$. Another interesting feature is that a pure coherent state $|\gamma\rangle=|\frac{e^{-\frac{\gamma}{2}t+i\chi t}+e^{-\frac{\gamma}{2}t}}{\sqrt{2}}\alpha\rangle$ exits from the other port in Fig. 2, as long as the two XPM processes are symmetric.
\begin{figure}
\caption{The coherence parameter $C(\tau)$ vs the dimensionless time $\tau$ for $|\alpha|=200$. The solid, dotted and dashed lines represent the cases $\Gamma=0.5$, $1$ and $1.5$, respectively.}
\end{figure}
There are two primary advantages of the scheme: (1) the precise displacement of the strong coherent beam in Fig. 1 is replaced by two XPM processes, which could be easier to implement; (2) the coherence parameter $C(t)$ is real, instead of the complex one in \cite{jeong}, so it is unnecessary to apply further procedures to convert the output to an even or odd cat state. The essential point is that we let two groups of the input optical modes, $\{|\alpha\rangle_1, |H\rangle\}$ and $\{|\alpha\rangle_2, |V\rangle\}$, undergo the same physical process in the nonlinear Kerr medium, so the photon absorption decoherence on both groups, and the phases gained by the two coherent states and the two single photon components, should be identical. To achieve this, we need a well-stabilized setup to process the inputs.

In setting up the circuit of Fig. 2, one could encounter an asymmetry between the two XPM processes. As a result, the symmetric pure state components in Eq. (\ref{9}) become
\begin{eqnarray}
|CS'_{\pm}\rangle&=&|H\rangle|e^{(-\frac{\gamma}{2}+i\chi)t_1}\alpha \rangle|e^{-\frac{\gamma}{2}t_2}\alpha \rangle\nonumber\\
&\pm& e^{i\phi'_E}|V\rangle |e^{-\frac{\gamma}{2}t_1}\alpha\rangle|e^{(-\frac{\gamma}{2}+i\chi)t_2}\alpha \rangle,
\label{9-b}
\end{eqnarray}
with different interaction times $t_1$ and $t_2$ for the two groups of optical modes, which also give rise to a relative phase $\phi'_E$ in the above equation. The resilience of a double XPM scheme to such an asymmetry is estimated in \cite{k-p}: for a deviation between $\theta_1=\chi t_1$ and $\theta_2=\chi t_2$ as large as $10\%$, the fidelity of the output is still larger than $0.95$. Here we propose a direct way to test how good the symmetry of the circuit is. We simply prepare a coherent state $|\gamma\rangle$, chosen according to the target cat state size so as to be identical with the state that exits from the other port in Fig. 2 in the symmetric case, and then compare it with the coherent state components output from the second port by means of a 50/50 beam splitter and a simple photodiode, as in \cite{andersson}. The difference between two coherent states $|\gamma\rangle$ and $|\gamma'\rangle$ caused by the asymmetry can be identified with an efficiency of $1-\exp(-\frac{1}{2}|\gamma-\gamma'|^2)$.

We now look at the design of a setup that generates a cat state with fidelity $F=(1+C)/2$ and amplitude $|\beta|$. From Eqs. (\ref{10}) and (\ref{11}) we obtain the relation
\begin{eqnarray}
|\beta|^2 G(\tau)&=&|\beta|^2\frac{1-\frac{\Gamma^2}{1+\Gamma^2}e^{\tau}-\frac{1}{1+\Gamma^2}\cos \Gamma \tau+\frac{\Gamma}{1+\Gamma^2}\sin\Gamma \tau} {\sin^2\frac{\Gamma \tau}{2}}\nonumber\\
&=&\ln (2F-1),
\label{12}
\end{eqnarray}
where $\tau=\gamma t$ and $\Gamma=\chi/\gamma$.
For any value of $\Gamma$, $G(\tau)$ ranges from $-\infty$ to $0$, as demonstrated for representative values of $\Gamma$ in Fig. 4. From Eq. (\ref{12}) we can therefore always find a dimensionless interaction time $\tau_{int}$ between the coherent beam and the single photon once we specify a fidelity $F$ and a size $|\beta|^2$ for the generated cat state. For a sufficiently high fidelity $F=1-x$ ($0<x\ll 0.1$), this dimensionless interaction time $\tau_{int}$ is simply determined by $G(\tau_{int})=-2x/|\beta|^2$, and it should be very small if $|\beta|$ is also large enough. We thus draw a conclusion: the only way to create cat states of high fidelity and large size through XPM is to couple a sufficiently strong coherent beam to a single photon within a limited time in the Kerr medium. This holds for any cross-Kerr nonlinearity, and for the single XPM scheme \cite{jeong} as well.
\begin{figure}
\caption{Function $G(\tau)$ vs the dimensionless time $\tau$ for three values $\Gamma=0.01$, $5$ and $10$, represented by solid, dashed and dotted lines, respectively.}
\end{figure}
For the example of a generated cat state with amplitude $|\beta|=1.6$ and fidelity $F=0.99$, we provide the following table of $\tau_{int}$ and input beam intensity $|\alpha|^2$:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}\hline
$\Gamma$ & 0.01 & 1 & 25 & 50 & 100\\ \hline
$\tau_{int}$ & 0.0116846 & 0.011685 & 0.011652 & 0.011555& 0.011196 \\ \hline
$|\alpha|^2$& $1.2 \times 10^{12}$& $1.2 \times 10^{8}$&$2.0 \times 10^{5}$ & $5.1 \times 10^{4}$&$1.4 \times 10^{4}$ \\ \hline
\end{tabular}
\end{center}
The $\Gamma$ values range from that of a normal silica core fiber to those achievable in EIT materials. The necessary input coherent beam intensity drops quickly with increasing $\Gamma$, while the $\tau_{int}$ values are very close to one another because the $G(\tau)$ curves for the different $\Gamma$ values nearly coincide close to the origin, as shown in Fig. 4. The data in the table also reflect a trade-off between the necessary nonlinear strength and the required coherent beam intensity, similar to that in \cite{cc}. For a realistic system, however, the intensity of the coherent beam cannot be arbitrarily large, because a very strong beam might also cause other effects. Good candidates for a weak cross-Kerr nonlinearity without the self-phase modulation effect are atomic systems working under EIT conditions. The $\Gamma$ parameters of EIT or double-EIT systems are the ratio of the signal field detuning to the decay rate of the excited state \cite{p-k-h, m-s} (or to a quantity related to this decay rate for the light-storage XPM approach \cite{chen-yu}). To create the EIT condition in hot atoms, for example, the probe beam should be weak enough (at the $\mu$W level) while the coupling beam is strong (at the $m$W level). By choosing a proper detuning, we could use a probe beam of intensity $2|\alpha|^2$ consistent with this requirement in EIT systems to implement the scheme. Other issues, such as controlling the interaction time $t_{int}=\tau_{int}/\gamma$ in an optical cavity or by other means, still await further research.
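As a quick numerical illustration of Eq. (\ref{12}), the interaction times in the table above can be recovered by root finding. The short sketch below is only meant as a cross-check: the helper name \texttt{g\_tau}, the use of SciPy's \texttt{brentq} root finder and the chosen bracket are our own choices, not part of the original analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def g_tau(tau, Gamma):
    # G(tau) from Eq. (12), with tau = gamma*t and Gamma = chi/gamma
    num = (1.0
           - Gamma**2 / (1.0 + Gamma**2) * np.exp(tau)
           - np.cos(Gamma * tau) / (1.0 + Gamma**2)
           + Gamma * np.sin(Gamma * tau) / (1.0 + Gamma**2))
    return num / np.sin(Gamma * tau / 2.0) ** 2

F, beta2 = 0.99, 1.6 ** 2               # target fidelity and cat size |beta|^2
rhs = np.log(2.0 * F - 1.0) / beta2     # Eq. (12): G(tau_int) = ln(2F-1)/|beta|^2

for Gamma in (0.01, 1.0, 25.0, 50.0, 100.0):
    tau_int = brentq(lambda t: g_tau(t, Gamma) - rhs, 1e-6, 1.0)
    print(f"Gamma = {Gamma:6.2f}   tau_int = {tau_int:.6f}")
\end{verbatim}
The resulting $\tau_{int}$ values should match the first row of the table; the corresponding input intensities then follow from Eq. (\ref{11}).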
In summary, we have studied a double XPM scheme to generate cat states in a decohering environment. We have shown that this scheme is robust against photon absorption losses and other errors. The results obtained here are also applicable to the design of other quantum non-demolition (QND) detection setups based on the XPM process.

B. H. thanks Dr. W.-Z. Tang and Prof. I. A. Yu for discussions on the experimental requirements for the XPM process in EIT materials; M. N. is sponsored by the IRSIP project of HEC Pakistan; the authors also thank Prof. M. S. Kim for reminding them of the scheme in \cite{k-p}.
\end{document}
\begin{document}
\newtheorem{t1}{Theorem}[section]
\newtheorem{d1}{Definition}[section]
\newtheorem{n1}{Notation}[section]
\newtheorem{c1}{Corollary}[section]
\newtheorem{l1}{Lemma}[section]
\newtheorem{r1}{Remark}[section]
\newtheorem{e1}{Counterexample}[section]
\newtheorem{p1}{Proposition}[section]
\newtheorem{cn1}{Conclusion}[section]
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\pagenumbering{arabic}
\title{Stochastic Comparison of Parallel Systems with Log-Lindley Distributed Components}
\author{Shovan Chowdhury\\Indian Institute of Management, Kozhikode\\Quantitative Methods and Operations Management Area \\Kerala, India
\and Amarjit Kundu\\Santipur College\\Department of Mathematics\\ West Bengal, India }\maketitle
\begin{abstract}
In this paper, we study stochastic comparisons of parallel systems having log-Lindley distributed components. These comparisons are carried out with respect to the reversed hazard rate and likelihood ratio orderings.
\end{abstract}
{\bf Keywords and Phrases}: Likelihood ratio order, Log-Lindley distribution, Majorization, Multiple-outlier model, Reversed hazard rate order, Schur-convex.\\
{\bf AMS 2010 Subject Classifications}: 62G30, 60E15, 60K10
\section{Introduction}
\setcounter{equation}{0}
\hspace*{0.3in} In reliability optimization and life testing experiments, tests are often censored or truncated, for instance when the failure of a device during the warranty period is not counted, or when items are replaced after a certain time under a replacement policy. Moreover, many reliability systems and biological organisms, including the human life span, are bounded above because of test conditions, cost or other constraints. These situations result in data sets that are modeled by distributions with finite range (i.e.\ with bounded support), such as the power function, finite range, truncated Weibull, beta and Kumaraswamy distributions (see, for example, Ghitany~\cite{gh}, Lai and Jones~\cite{lai1}, Lai and Mukherjee~\cite{lai2}, Moore and Lai~\cite{moo} and Mukherjee and Islam~\cite{muk}). \\\hspace*{0.3in} Recently, G\'omez et al.~\cite{go1} introduced the log-Lindley (LL) distribution with parameters $(\sigma,\lambda)$, written as LL($\sigma,\lambda$), as an alternative to the beta distribution, with probability density function given by
\begin{equation}\label{e0}
f(x;\sigma,\lambda)=\frac{\sigma^2}{1+\lambda\sigma}\left(\lambda-\log x\right) x^{\sigma-1};~0<x<1,~\lambda\geq 0,~\sigma>0,
\end{equation}
where $\sigma$ is the shape parameter and $\lambda$ is the scale parameter. This distribution, which has a simple expression and nice reliability properties, is derived from the generalized Lindley distribution proposed by Zakerzadeh and Dolati~\cite{za}, which is in turn a generalization of the Lindley distribution proposed by Lindley~\cite{li}. The LL distribution exhibits bath-tub failure rates and has an increasing generalized failure rate (IGFR). It has useful applications in the context of inventory management, pricing and supply chain contracting problems (see, for example, Ziya et al.~\cite{zia}, Lariviere and Porteus~\cite{lar1} and Lariviere~\cite{lar2}), where a demand distribution is required to have the IGFR property. Moreover, it has applications in the actuarial context, where the cumulative distribution function (CDF) of the LL distribution is used to distort the premium principle (G\'omez et al.~\cite{go1}). The LL distribution is also shown to fit rates and proportions data better than the beta distribution (G\'omez et al.~\cite{go1}).
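For concreteness, the density (\ref{e0}) is easy to handle numerically. The short sketch below (the parameter values and helper names are illustrative choices of ours, not taken from the sources cited above) checks that (\ref{e0}) integrates to one over $(0,1)$ and that it is consistent with the closed-form distribution function used in Section 3.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def ll_pdf(x, sigma, lam):
    # log-Lindley density f(x; sigma, lambda) of Eq. (1.1), for 0 < x < 1
    return sigma**2 / (1.0 + lam * sigma) * (lam - np.log(x)) * x**(sigma - 1.0)

def ll_cdf(x, sigma, lam):
    # closed-form distribution function of LL(sigma, lambda)
    return x**sigma * (1.0 + sigma * (lam - np.log(x))) / (1.0 + lam * sigma)

sigma, lam = 2.0, 0.5                                   # illustrative values only
total, _ = quad(ll_pdf, 0.0, 1.0, args=(sigma, lam))    # should be ~1.0
partial, _ = quad(ll_pdf, 0.0, 0.3, args=(sigma, lam))  # should match ll_cdf(0.3,...)
print(total, partial, ll_cdf(0.3, sigma, lam))
\end{verbatim}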
\\\hspace*{0.3in} Order statistics play an important role in reliability optimization, life testing, operations research and many other areas. Parallel and series systems are the building blocks of many complex coherent systems in reliability theory. While the lifetime of a series system corresponds to the smallest order statistic $X_{1:n}$, that of a parallel system is represented by the largest order statistic $X_{n:n}$. Although stochastic comparisons of order statistics from homogeneous populations have been studied in detail in the literature, not much work is available so far on order statistics from heterogeneous populations, owing to the complicated nature of the resulting expressions. Such comparisons have been studied for exponential, gamma, Weibull, generalized exponential or Fr\'echet distributed components with unbounded support. One may refer to Dykstra \emph{et al.}~\cite{dkr11}, Misra and Misra~\cite{mm11.1}, Zhao and Balakrishnan~\cite{zb11.2}, Torrado and Kochar~\cite{tr11}, Kundu and Chowdhury~\cite{kun2}, Kundu \emph{et al.}~\cite{kun1}, Gupta \emph{et al.}~\cite{gu} and the references therein. Moreover, not much attention has been paid so far to the stochastic comparison of two systems having finite range distributed components. The notion of majorization (Marshall \emph{et al.}~\cite{Maol}) is also essential to the understanding of the stochastic inequalities for comparing order statistics. This concept is used in the context of optimal component allocation in parallel-series as well as in series-parallel systems, allocation of standbys in series and parallel systems, and so on; see, for instance, El-Neweihi et al.~\cite{el}. It is also used in the context of minimal repair of a two-component parallel system with exponentially distributed lifetimes by Boland and El-Neweihi~\cite{bo}. \\ \hspace*{0.3 in} In this paper our main aim is to compare two parallel systems, in terms of the reversed hazard rate order and the likelihood ratio order, with majorized scale and shape parameters separately, when the components are drawn from two heterogeneous LL distributions as well as from multiple-outlier LL models. The rest of the paper is organized as follows. In Section 2, we give the required notation, definitions and some useful lemmas which are used throughout the paper. Results related to the reversed hazard rate ordering and the likelihood ratio ordering between the two order statistics $X_{n:n}$ and $Y_{n:n}$ are derived in Section 3. \\\hspace*{0.3 in} Throughout the paper, the words increasing (resp.\ decreasing) and nondecreasing (resp.\ nonincreasing) are used interchangeably, and $\Re$ denotes the set of real numbers $\{x:-\infty<x<\infty\}$. We also write $a\stackrel{sign}{=}b$ to mean that $a$ and $b$ have the same sign. For any differentiable function $k(\cdot)$, we write $k'(t)$ to denote the first derivative of $k(t)$ with respect to $t$.
\section{Notations, Definitions and Preliminaries}
\hspace*{0.3 in} For an absolutely continuous random variable $X$, we denote the probability density function, the distribution function and the reversed hazard rate function by $f_X(\cdot)$, $F_X(\cdot)$ and $\tilde r_X(\cdot)$, respectively.
The survival or reliability function of the random variable $X$ is written as $\bar F_X(\cdot)=1-F_X(\cdot)$. \\\hspace*{0.3 in} In order to compare different order statistics, stochastic orders are used for fair and reasonable comparison. In the literature, many different kinds of stochastic orders have been developed and studied. The following well known definitions may be obtained in Shaked and Shanthikumar~\cite{shak1}.
\begin{d1}\label{de1}
Let $X$ and $Y$ be two absolutely continuous random variables with respective supports $(l_X,u_X)$ and $(l_Y,u_Y)$, where $u_X$ and $u_Y$ may be positive infinity, and $l_X$ and $l_Y$ may be negative infinity. Then, $X$ is said to be smaller than $Y$ in
\begin{enumerate}
\item[(i)] likelihood ratio (lr) order, denoted as $X\leq_{lr}Y$, if
$$\frac{f_Y(t)}{f_X(t)}\;\text{is increasing in} \,t\in(l_X,u_X)\cup(l_Y,u_Y);$$
\item[(ii)] hazard rate (hr) order, denoted as $X\leq_{hr}Y$, if
$$\frac{\bar F_Y(t)}{\bar F_X(t)}\;\text{is increasing in}\, t \in (-\infty,\max(u_X,u_Y)),$$
which can equivalently be written as $r_X(t)\geq r_Y(t)$ for all $t$;
\item[(iii)] reversed hazard rate (rhr) order, denoted as $X\leq_{rhr}Y$, if
$$ \frac{F_Y(t)}{ F_X(t)}\;\text{is increasing in}\, t \in(\min(l_X,l_Y),\infty),$$
which can equivalently be written as $\tilde r_X(t)\leq \tilde r_Y(t)$ for all $t$;
\item[(iv)] usual stochastic (st) order, denoted as $X\leq_{st}Y$, if $\bar F_X(t)\leq \bar F_Y(t)$ for all $t\in (-\infty,\infty).$
\end{enumerate}
\end{d1}
In the following diagram we present a chain of implications among these stochastic orders; see, for instance, Shaked and Shanthikumar \cite{shak1}, where the definitions and usefulness of these orders can be found.
$$
\begin{array}{ccccc}
 & & X\leq_{hr}Y & & \\
 & \nearrow & & \searrow & \\
X\leq_{lr}Y & & \longrightarrow & & X\leq_{st}Y \\
 & \searrow & & \nearrow & \\
 & & X\leq_{rhr}Y & &
\end{array}
$$
\hspace*{0.3 in} It is well known that results on different stochastic orders can be established by using majorization order(s). Let $I^n$ denote an $n$-dimensional Euclidean space where $I\subseteq\Re$. Further, let $\mathbf{x}=(x_1,x_2,\dots,x_n)\in I^n$ and $\mathbf{y}=(y_1,y_2,\dots,y_n)\in I^n$ be any two real vectors, with $x_{(1)}\le x_{(2)}\le\cdots\le x_{(n)}$ denoting the increasing arrangement of the components of the vector $\mathbf{x}$. The following definitions may be found in Marshall \emph{et al.} \cite{Maol}.\\
\begin{d1}
The vector $\mathbf{x}$ is said to majorize the vector $\mathbf{y}$ (written as $\mathbf{x}\stackrel{m}{\succeq}\mathbf{y}$) if
\begin{equation*}
\sum_{i=1}^j x_{(i)}\le\sum_{i=1}^j y_{(i)},\;j=1,\;2,\;\ldots, n-1,\;\;\text{and} \;\;\sum_{i=1}^n x_{(i)}=\sum_{i=1}^n y_{(i)}.
\end{equation*}
\end{d1}
\begin{d1}
A function $\psi:I^n\rightarrow\Re$ is said to be Schur-convex (resp.\ Schur-concave) on $I^n$ if
\begin{equation*}
\mathbf{x}\stackrel{m}{\succeq}\mathbf{y} \;\text{implies}\;\psi\left(\mathbf{x}\right)\ge (\text{resp. }\le)\;\psi\left(\mathbf{y}\right)\;\text{for all}\;\mathbf{x},\;\mathbf{y}\in I^n.
\end{equation*}
\end{d1}
\begin{n1}
Let us introduce the following notations.
\begin{enumerate}
\item[(i)] $\mathcal{D}_{+}=\left\{\left(x_{1},x_2,\ldots,x_{n}\right):x_{1}\geq x_2\geq\ldots\geq x_{n}> 0\right\}$.
\item[(ii)] $\mathcal{E}_{+}=\left\{\left(x_{1},x_2,\ldots,x_{n}\right):0< x_{1}\leq x_2\leq\ldots\leq x_{n}\right\}$.
\end{enumerate}
\end{n1}
Next, two lemmas are given which will be used to prove our main results. The first one can be obtained by combining Proposition H2 of Marshall \emph{et al.} (\cite{Maol}, p.~132) with Lemma 3.2 of Kundu \emph{et al.}~\cite{kun1}, while the second one is due to Lemma 3.4 of Kundu \emph{et al.}~\cite{kun1}.
\begin{l1}\label{l3}
Let $\varphi({\bf x})=\sum_{i=1}^ng_i(x_i)$ with ${\bf x}\in \mathcal{D}_+$, where $g_i:\mathbb{R}\to\mathbb{R}$ is differentiable, for all $i=1,2,\ldots, n$. Then $\varphi(\mathbf{x})$ is Schur-convex (Schur-concave) on $\mathcal{D}_+$ if, and only if,
$$g_{i}'(a)\geq\; (\text{resp. }\leq)\ g_{i+1}'(b)\;\text{whenever}\;a\geq b,\;\text{for all}\;i=1,2,\ldots,n-1,$$
where $g_i'(a)=\frac{d g_i(x)}{dx}\big|_{x=a}$.
\end{l1}
\begin{l1}\label{l4}
Let $\varphi({\bf x})=\sum_{i=1}^ng_i(x_i)$ with ${\bf x}\in \mathcal{E}_+$, where $g_i:\mathbb{R}\to\mathbb{R}$ is differentiable, for all $i=1,2,\ldots, n$. Then $\varphi(\mathbf{x})$ is Schur-convex (Schur-concave) on $\mathcal{E}_+$ if, and only if,
$$g_{i+1}'(a)\geq\;(\text{resp. }\leq)\ g_{i}'(b)\;\text{whenever}\;a\geq b,\;\text{for all}\;i=1,2,\ldots,n-1,$$
where $g_i'(a)=\frac{d g_i(x)}{dx}\big|_{x=a}$.
\end{l1}
\section{Main Results}
\setcounter{equation}{0}
\hspace{0.3in} For $i=1,2,\ldots,n$, let $X_i$ (resp.\ $Y_i$) be $n$ independent nonnegative random variables following the LL distribution given in (\ref{e0}).\\
\hspace*{0.3 in} Let $F_{n:n}\left(\cdot\right)$ and $G_{n:n}\left(\cdot\right)$ be the distribution functions of $X_{n:n}$ and $Y_{n:n}$ respectively, where $\mbox{\boldmath$\sigma$}=\left(\sigma_1,\sigma_2,\ldots,\sigma_n\right)$, $\mbox{\boldmath$\theta$}=\left(\theta_1,\theta_2, \ldots,\theta_n\right)$, $\mbox{\boldmath$\lambda$}=\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)$ and $\mbox{\boldmath$\delta$}=\left(\delta_1,\delta_2,\ldots,\delta_n\right)$. Then
\begin{equation*}
F_{n:n}\left(x\right)=\prod_{i=1}^n \frac{x^{\sigma_{i}}\left(1+\sigma_{i}\left(\lambda_{i}-\log x\right)\right)}{1+\lambda_{i}\sigma_{i}},
\end{equation*}
and
\begin{equation*}
G_{n:n}\left(x\right)=\prod_{i=1}^n \frac{x^{\theta_{i}}\left(1+\theta_{i}\left(\delta_{i}-\log x\right)\right)}{1+\delta_{i}\theta_{i}}.
\end{equation*}
Again, if $\tilde{r}_{n:n}^{X}$ and $\tilde{r}_{n:n}^{Y}$ denote the reversed hazard rate functions of $X_{n:n}$ and $Y_{n:n}$ respectively, then
\begin{equation}
\tilde{r}_{n:n}^X\left(x\right)=\sum_{i=1}^n\frac{\sigma_i}{x}\left(1-\frac{1}{1+\sigma_i\left(\lambda_i-\log x\right)}\right)\label{e1},
\end{equation}
and
\begin{equation}
\tilde{r}_{n:n}^Y\left(x\right)=\sum_{i=1}^n\frac{\theta_i}{x}\left(1-\frac{1}{1+\theta_i\left(\delta_i-\log x\right)}\right)\label{e2}.
\end{equation}
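Since the reversed hazard rate is the logarithmic derivative of the distribution function, (\ref{e1}) can be cross-checked numerically. The sketch below (parameter values and helper names are illustrative choices of ours) compares a finite-difference derivative of $\log F_{n:n}$ with the closed form above.
\begin{verbatim}
import numpy as np

def ll_cdf(x, sigma, lam):
    return x**sigma * (1.0 + sigma * (lam - np.log(x))) / (1.0 + lam * sigma)

def rhr_closed(x, sigmas, lams):
    # reversed hazard rate of X_{n:n} from Eq. (3.1)
    return sum(s / x * (1.0 - 1.0 / (1.0 + s * (l - np.log(x))))
               for s, l in zip(sigmas, lams))

sigmas, lams = (1.0, 2.0, 3.0), (0.5, 1.0, 1.5)     # illustrative values only
x, h = 0.4, 1e-6
logF = lambda t: sum(np.log(ll_cdf(t, s, l)) for s, l in zip(sigmas, lams))
finite_diff = (logF(x + h) - logF(x - h)) / (2.0 * h)
print(finite_diff, rhr_closed(x, sigmas, lams))     # the two values should agree
\end{verbatim}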
\hspace*{0.3 in}The following two theorems show that, under certain conditions on the parameters, there exists a reversed hazard rate ordering between $X_{n:n}$ and $Y_{n:n}$.
\begin{t1}\label{th1}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\theta_i,\lambda_i\right)$. Further, suppose that $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{E}_+$. Then,
$$\mbox{\boldmath $\sigma$}\stackrel{m}{\succeq}\mbox{\boldmath $\theta$}\;\text{implies}\; X_{n:n}\ge_{rhr}Y_{n:n}.$$
\end{t1}
{\bf Proof:} Let $g_{i}(y)=\frac{y}{x}\left(1-\frac{1}{1+y\left(\lambda_i-\log x\right)}\right).$ Differentiating $g_{i}(y)$ with respect to $y$, we get
$$g_{i}^{'}(y)=\frac{1}{x}\left(1-\frac{1}{\left(1+y\left(\lambda_i-\log x\right)\right)^2}\right),$$
giving
$$g_{i}^{'}(\sigma_i)-g_{i+1}^{'}(\sigma_{i+1})=\frac{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2-\left(1+\sigma_{i+1}\left(\lambda_{i+1}-\log x\right)\right)^2}{x\left(\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)\left(1+\sigma_{i+1}\left(\lambda_{i+1}-\log x\right)\right)\right)^2}.$$
So, if $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ (resp.\ $\mathcal{E}_+$), then $g_{i}^{'}(\sigma_i)-g_{i+1}^{'}(\sigma_{i+1})\geq \left(\leq\right)0.$ Then, by Lemma \ref{l3} (Lemma \ref{l4}), $\tilde{r}^X_{n:n}\left(x\right)$ is Schur-convex in $\mbox{\boldmath $\sigma$}$, proving the result. $\Box$\\
The counterexample given below shows that the ascending (descending) order of the components of the scale and shape parameter vectors is necessary for the result of Theorem \ref{th1} to hold.
\begin{e1}\label{ce2}
Let $X_i\sim LL\left(\sigma_i, \lambda_i\right)$ and $Y_i\sim LL\left(\theta_i, \lambda_i\right), i=1,2,3.$ Now, if $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(1, 1,5\right)\in \mathcal{E}_+$, $\left(\theta_1,\theta_2, \theta_3\right)=\left(1,2,4\right)\in \mathcal{E}_+$ and $\left(\lambda_1, \lambda_2, \lambda_3\right)=\left(4, 3, 0.2\right)\in \mathcal{D}_+$ are taken, then from Figure \ref{fig1} it is clear that $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ is not monotone, giving $X_{3:3}\ngeq_{rhr}Y_{3:3}$, although $\mbox{\boldmath $\sigma$}\stackrel{m}{\succeq}\mbox{\boldmath $\theta$}$.
\begin{figure}
\caption{\label{fig1} Plot of $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ for the parameters of Counterexample \ref{ce2}.}
\end{figure}
\end{e1}
\hspace*{0.3in} Theorem \ref{th1} guarantees that, for parallel systems of components having independent LL distributed lifetimes with a common scale parameter vector, the system whose shape parameter vector majorizes the other's has a larger lifetime in the sense of the reversed hazard rate ordering. Now the question arises: what happens if the scale parameter vector $\mbox{\boldmath $\lambda$}$ majorizes $\mbox{\boldmath $\delta$}$ while the shape parameter vector remains fixed? The theorem given below shows that if the orders of the components of the shape and scale parameter vectors are reversed, then $X_{n:n}$ is smaller than $Y_{n:n}$ in the reversed hazard rate ordering.
\begin{t1}\label{th2}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\sigma_i,\delta_i\right)$.
Further, suppose that $\mbox{\boldmath $\sigma$}\in \mathcal{E}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}\in \mathcal{D}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{E}_+$. Then,
$$\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}\;\text{implies}\; X_{n:n}\leq_{rhr}Y_{n:n}.$$
\end{t1}
{\bf Proof:} For $i= 1, 2,\ldots, n$, let us consider $g_{i}(y)=\frac{\sigma_i}{x}\left(1-\frac{1}{1+\sigma_i\left(y-\log x\right)}\right).$ Differentiating $g_{i}(y)$ with respect to $y$, we get
$$g_{i}^{'}(y)=\frac{\sigma_i^2}{x\left(1+\sigma_i\left(y-\log x\right)\right)^2},$$
giving
\begin{equation*}
\begin{split}
g_{i}^{'}(\lambda_i)-g_{i+1}^{'}(\lambda_{i+1})&\stackrel{sign}{=}\left(\sigma_i^2-\sigma_{i+1}^2\right)+\sigma_i^2\sigma_{i+1}^2\left[\left(\lambda_{i+1}-\log x\right)^2-\left(\lambda_{i}-\log x\right)^2\right]\\&\quad+2\sigma_i\sigma_{i+1}\left[\left(\sigma_i\lambda_{i+1}-\sigma_{i+1}\lambda_{i}\right)-\log x\left(\sigma_i-\sigma_{i+1}\right)\right].
\end{split}
\end{equation*}
So, if $\mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ (resp.\ $\mathcal{E}_+$) and $\mbox{\boldmath $\sigma$}\in \mathcal{E}_+$ (resp.\ $\mathcal{D}_+$), then $g_{i}^{'}(\lambda_i)-g_{i+1}^{'}(\lambda_{i+1})\leq \left(\geq\right)0.$ So, by Lemma \ref{l3} (Lemma \ref{l4}), $\tilde{r}^X_{n:n}\left(x\right)$ is Schur-concave in $\mbox{\boldmath $\lambda$}$, proving the result. $\Box$\\
Next, a counterexample is provided to show that nothing can be said about the reversed hazard rate ordering between $X_{n:n}$ and $Y_{n:n}$ if $\mbox{\boldmath $\lambda$}$ majorizes $\mbox{\boldmath $\delta$}$ and all of $\mbox{\boldmath $\lambda$}$, $\mbox{\boldmath $\delta$}$ and $\mbox{\boldmath $\sigma$}$ lie either in $\mathcal{E}_+$ or in $\mathcal{D}_+$.
\begin{e1}\label{e3}
Let $X_i\sim LL\left(\sigma_i, \lambda_i\right)$ and $Y_i\sim LL\left(\sigma_i, \delta_i\right), i=1,2,3$. Let $\left(\lambda_1,\lambda_2, \lambda_3\right)=\left(0.1, 0.3, 4.1\right)\in \mathcal{E}_+$ and $\left(\delta_1, \delta_2, \delta_3\right)=\left(0.2, 0.3,4\right)\in \mathcal{E}_+$, giving $\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}$. Now, if $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(0.1, 3,5\right)\in \mathcal{E}_+$ is taken, then Figure \ref{fig2}~(a) shows that $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ is increasing in $x$. Again, if $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(2, 3,5\right)\in \mathcal{E}_+$ is taken, then Figure \ref{fig2}~(b) shows that $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ is decreasing in $x$. So it can be concluded that, for $\mbox{\boldmath $\sigma$}, \mbox{\boldmath$\lambda$}, \mbox{\boldmath$\delta$}\in \mathcal{D}_+$ (resp.\ $\mathcal{E}_+$), $\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}$ does not always imply $X_{3:3}\leq_{rhr}Y_{3:3}$.
\begin{figure}
\caption{\label{fig2} Plots of $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ for Counterexample \ref{e3}: (a) $\mbox{\boldmath$\sigma$}=(0.1,3,5)$; (b) $\mbox{\boldmath$\sigma$}=(2,3,5)$.}
\end{figure}
\end{e1}
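The two opposite behaviours in Counterexample \ref{e3} are easy to reproduce numerically. The sketch below (the grid of evaluation points and the helper names are our own choices) simply evaluates $F_{3:3}(x)/G_{3:3}(x)$ for the two choices of $\mbox{\boldmath$\sigma$}$, so the monotonicity patterns reported in Figure \ref{fig2} can be read off directly.
\begin{verbatim}
import numpy as np

def ll_cdf(x, sigma, lam):
    return x**sigma * (1.0 + sigma * (lam - np.log(x))) / (1.0 + lam * sigma)

def ratio(x, sigmas, lams, deltas):
    # F_{3:3}(x) / G_{3:3}(x), with common shape parameters sigmas
    F = np.prod([ll_cdf(x, s, l) for s, l in zip(sigmas, lams)])
    G = np.prod([ll_cdf(x, s, d) for s, d in zip(sigmas, deltas)])
    return F / G

lams, deltas = (0.1, 0.3, 4.1), (0.2, 0.3, 4.0)
for sigmas in [(0.1, 3.0, 5.0), (2.0, 3.0, 5.0)]:
    vals = [ratio(x, sigmas, lams, deltas) for x in np.linspace(0.05, 0.95, 10)]
    print(sigmas, ["%.4f" % v for v in vals])
\end{verbatim}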
\begin{t1}\label{th3}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\theta_i,\lambda_i\right)$. Further, suppose that $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{E}_+$. Then, if $\lambda_i\sigma_i>1/2$ for all $i$,
$$\mbox{\boldmath $\sigma$}\stackrel{m}{\succeq}\mbox{\boldmath $\theta$}\;\text{implies}\; X_{n:n}\ge_{lr}Y_{n:n}.$$
\end{t1}
{\bf Proof:} In view of Theorem \ref{th1} and using (3.1) and (3.2), we only have to show that
\begin{eqnarray*}
\eta(x):=\frac{\tilde{r}_{n:n}^X\left(x\right)}{\tilde{r}_{n:n}^Y\left(x\right)}=\frac{\sum_{k=1}^nu_k\left(\sigma_k,x\right)}{\sum_{k=1}^nu_k\left(\theta_k,x\right)}
\end{eqnarray*}
is increasing in $x$, where $u_k(y,x)=\frac{y^{2}\left(\lambda_k-\log x\right)}{1+y\left(\lambda_k-\log x\right)}$. Differentiating $\eta(x)$ with respect to $x$,
\begin{eqnarray*}
\eta^{'}(x)&\stackrel{sign}{=}&\sum_{k=1}^n\frac{\partial u_k\left(\sigma_k,x\right)}{\partial x}\sum_{k=1}^nu_k\left(\theta_k,x\right)-\sum_{k=1}^n\frac{\partial u_k\left(\theta_k,x\right)}{\partial x}\sum_{k=1}^nu_k\left(\sigma_k,x\right)\\&=&-h\left(\mbox{\boldmath$\sigma$},x\right)\sum_{k=1}^nu_k\left(\theta_k,x\right)+h\left(\mbox{\boldmath$\theta$},x\right)\sum_{k=1}^nu_k\left(\sigma_k,x\right),
\end{eqnarray*}
where
$$ h(\mbox{\boldmath$\sigma$}, x)=-\sum_{k=1}^n\frac{\partial u_k\left(\sigma_k,x\right)}{\partial x}=\frac{1}{x}\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2} $$
and
$$ h(\mbox{\boldmath$\theta$}, x)=-\sum_{k=1}^n\frac{\partial u_k\left(\theta_k,x\right)}{\partial x}=\frac{1}{x}\sum_{k=1}^n\frac{\theta_k^2}{\left(1+\theta_k\left(\lambda_k-\log x\right)\right)^2}. $$
Thus, to show that $\eta(x)$ is increasing in $x$, we only have to show that
$$\psi\left(\mbox{\boldmath$\sigma$}, x\right)=\frac{h(\mbox{\boldmath$\sigma$}, x)}{\sum_{k=1}^nu_k\left(\sigma_k,x\right)}$$
is Schur-concave in $\mbox{\boldmath$\sigma$}$. \\
Now, as
$$\frac{\partial h(\mbox{\boldmath$\sigma$}, x)}{\partial \sigma_i}=\frac{1}{x}\cdot\frac{2\sigma_i}{\left(1+\sigma_i(\lambda_i-\log x)\right)^3}$$
and
$$\frac{\partial }{\partial \sigma_i}\left[\sum_{k=1}^n u_k\left(\sigma_k,x\right)\right]=1-\frac{1}{\left(1+\sigma_i(\lambda_i-\log x)\right)^2},$$
we get
\begin{eqnarray*}
\frac{\partial \psi}{\partial\sigma_i}&\stackrel{sign}{=}&\frac{2\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\sum_{k=1}^n u_k\left(\sigma_k,x\right)-x\cdot h(\mbox{\boldmath$\sigma$}, x)\left(1-\frac{1}{\left(1+\sigma_i(\lambda_i-\log x)\right)^2}\right).
\end{eqnarray*}
So, if $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ (resp.\ $\mathcal{E}_+$), i.e., if $\sigma_i\geq\sigma_j$ and $\lambda_i\geq\lambda_j$ (resp.\ $\sigma_i\leq\sigma_j$ and $\lambda_i\leq\lambda_j$) for $i\leq j$, then, noticing that $\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}$ is decreasing in $\sigma_i$ as well as in $\lambda_i,$ we can write
$$\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}\leq (\geq) \frac{1}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^2}\leq (\geq) \frac{1}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^2}.$$
Again, as $\sigma_i\lambda_i>\frac{1}{2}$ implies $\sigma_i\left(\lambda_i-\log x\right)>\frac{1}{2}$ for all $0<x<1$, we have
$$\frac{\partial}{\partial\sigma_i}\left(\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\right)=\frac{1-2\sigma_i\left(\lambda_i-\log x\right)}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^4}<0,$$
proving that $\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}$ is decreasing in $\sigma_i$; it is also decreasing in $\lambda_i$. Thus, for all $\sigma_i\geq\sigma_j$ and $\lambda_i\geq\lambda_j$ (resp.\ $\sigma_i\leq\sigma_j$ and $\lambda_i\leq\lambda_j$),
$$\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\leq (\geq)\frac{\sigma_j}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^3}\leq (\geq)\frac{\sigma_j}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^3}.$$
So, for all $i\leq j$,
\begin{equation*}
\begin{split}
\frac{\partial\psi}{\partial\sigma_i}-\frac{\partial\psi}{\partial\sigma_j}&\stackrel{sign}{=}\sum_{k=1}^n\frac{\sigma_k^2\left(\lambda_k-\log x\right)}{1+\sigma_k\left(\lambda_k-\log x\right)}\left[\frac{2\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}-\frac{2\sigma_j}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^3}\right]\\
&\quad+\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2}\left[\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}-\frac{1}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^2}\right]\\
&\leq (\geq)\ 0.
\end{split}
\end{equation*}
Thus the result follows from Lemma 3.1 (Lemma 3.3) of Kundu \emph{et al.} \cite{kun1}. $\Box$\\
\hspace*{0.3in} Although Theorem \ref{th3} requires an additional sufficient condition for two $n$-component systems, the next theorem shows that no such condition is needed when the two systems follow the multiple-outlier LL model with a common scale parameter vector.
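\hspace*{0.3in} Before stating it, we note that the sufficient condition of Theorem \ref{th3} can be illustrated numerically. The short Python sketch below (an illustration of ours; the parameter values are chosen only to satisfy the hypotheses) evaluates $\eta(x)=\sum_{k}u_k(\sigma_k,x)/\sum_{k}u_k(\theta_k,x)$ from the proof above on a grid, for vectors with $\mbox{\boldmath $\sigma$}\stackrel{m}{\succeq}\mbox{\boldmath $\theta$}$, $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in\mathcal{E}_+$ and $\lambda_i\sigma_i>1/2$, and checks that it is non-decreasing on $(0,1)$.
\begin{verbatim}
import numpy as np

def u(y, lam, x):
    # u_k(y, x) = y^2 (lam - log x) / (1 + y (lam - log x)), 0 < x < 1
    t = lam - np.log(x)
    return y**2 * t / (1 + y * t)

# sigma majorizes theta; sigma, theta, lambda all increasing (E_+),
# and lambda_i * sigma_i > 1/2, as required by Theorem th3
sig, theta, lam = (1, 2, 6), (2, 3, 4), (1.0, 1.5, 2.0)

x = np.linspace(0.01, 0.99, 2000)
eta = sum(u(s, l, x) for s, l in zip(sig, lam)) / \
      sum(u(t, l, x) for t, l in zip(theta, lam))
print(np.all(np.diff(eta) >= -1e-10))   # expected True under Theorem th3
\end{verbatim}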
\begin{t1}\label{th5}
For $i=1,2,\ldots,n$, let $X_i$ and $Y_i$ be two sets of independent random variables, each following the multiple-outlier LL model, such that $X_i\sim LL\left(\sigma,\lambda\right)$ and $Y_i\sim LL\left(\theta,\lambda\right)$ for $i=1,2,\ldots,n_1$, and $X_i\sim LL\left(\sigma^*,\lambda^*\right)$ and $Y_i\sim LL\left(\theta^*,\lambda^*\right)$ for $i=n_1+1,n_1+2,\ldots,n_1+n_2\,(=n)$. If
$$(\underbrace{\sigma,\sigma,\ldots,\sigma,}_{n_1} \underbrace{\sigma^*,\sigma^*,\ldots,\sigma^*}_{n_2})\stackrel{m}{\succeq} (\underbrace{\theta,\theta,\ldots,\theta,}_{n_1} \underbrace{\theta^*,\theta^*,\ldots,\theta^*}_{n_2})$$
and either $\{\sigma\ge\sigma^*, \theta\ge\theta^*, \lambda\ge\lambda^*\}$ or $\{\sigma\leq\sigma^*, \theta\leq\theta^*, \lambda\leq\lambda^*\}$, then $ X_{n:n}\ge_{lr}Y_{n:n}$.
\end{t1}
{\bf Proof:} Following Theorem \ref{th3} and in view of Theorem \ref{th1}, we only have to show that
$$\psi_{1}(\mbox{\boldmath$\sigma$},x)=\frac{\sum_{k=1}^n\frac{\sigma_k^{2}}{\left(1+\sigma_k(\lambda_k-\log x)\right)^{2}}}{\sum_{k=1}^n\frac{\sigma_k^{2}(\lambda_k-\log x)}{1+\sigma_k(\lambda_k-\log x)}}$$
is Schur-concave in $\mbox{\boldmath$\sigma$}$.\\
\hspace*{0.3 in} Now, three cases may arise:\\
{\it Case (i)} If $1\leq i<j\leq n_1$, i.e., if $\sigma_i=\sigma_j=\sigma$ and $\lambda_i=\lambda_j=\lambda$, then $\frac{\partial \psi_{1}}{\partial \sigma_i}-\frac{\partial \psi_{1}}{\partial \sigma_j}=0.$ \\
{\it Case (ii)} If $n_1+1\leq i<j\leq n$, i.e., if $\sigma_i=\sigma_j=\sigma^*$ and $\lambda_i=\lambda_j=\lambda^*$, then $\frac{\partial \psi_{1}}{\partial \sigma_i}-\frac{\partial \psi_{1}}{\partial \sigma_j}=0.$ \\
{\it Case (iii)} If $1\leq i\leq n_1$ and $n_1+1\leq j\leq n$, then $\sigma_i=\sigma$, $\lambda_i=\lambda$ and $\sigma_j=\sigma^*$, $\lambda_j=\lambda^*$. It can easily be shown that
\begin{equation*}
\begin{split}
\frac{\partial \psi_1}{\partial \sigma_i}-\frac{\partial \psi_1}{\partial \sigma_j}&\stackrel{sign}{=}\left(\frac{n_1\sigma^{2}}{(1+\xi_1)^{2}}+\frac{n_2\sigma^{*2}}{(1+\xi_2)^{2}}\right)\left(\frac{\xi_2^{2}}{(1+\xi_2)^{2}}-\frac{\xi_1^{2}}{(1+\xi_1)^{2}}\right)\\&\quad+\left(\frac{\sigma\xi_2}{(1+\xi_1)}-\frac{\sigma^*\xi_1}{(1+\xi_2)}\right)\left(\frac{2n_1\sigma}{(1+\xi_2)^2(1+\xi_1)}+\frac{2n_2\sigma^*}{(1+\xi_1)^2(1+\xi_2)}\right),
\end{split}
\end{equation*}
where $\xi_1=\sigma(\lambda-\log x)$ and $\xi_2=\sigma^{*}(\lambda^*-\log x)$. Now, since $\sigma \geq (\leq)\ \sigma^*$ and $\lambda \geq (\leq)\ \lambda^*$, we have $\sigma(\lambda-\log x) \geq (\leq)\ \sigma^*(\lambda^*-\log x)$, i.e., $\xi_1\geq (\leq)\ \xi_2$; moreover, since $\frac{\xi}{1+\xi}=1-\frac{1}{1+\xi}$ is increasing in $\xi$, it follows that $\frac{\xi_2^{2}}{(1+\xi_2)^{2}}\leq (\geq)\ \frac{\xi_1^{2}}{(1+\xi_1)^{2}}$. Again,
\begin{eqnarray*}
\frac{\sigma\xi_2}{1+\xi_1}-\frac{\sigma^*\xi_1}{1+\xi_2}&=&\frac{\sigma\sigma^*\left\{(\lambda^*-\log x)(1+\sigma^*(\lambda^*-\log x))-(\lambda-\log x)(1+\sigma(\lambda-\log x))\right\}}{\left(1+\xi_1\right)\left(1+\xi_2\right)}\\
&\le(\ge)& 0.
\end{eqnarray*}
So, by Lemma 3.1 (Lemma 3.3) of Kundu \emph{et al.} \cite{kun1}, the result is proved.
$ \Box$\\
\hspace*{0.3in} Theorem \ref{th3} guarantees that, for two $n$-component parallel systems (under a sufficient condition) having independent LL distributed lifetimes with a common scale parameter vector, the majorizing shape parameter vector leads to a larger system lifetime in the sense of the likelihood ratio order. The next theorem states that the majorizing scale parameter vector leads to a smaller system lifetime in the sense of the likelihood ratio order when the shape parameter vector of the two $n$-component parallel systems is common.
\begin{t1}\label{th6}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\sigma_i,\delta_i\right)$. Further, suppose that $\mbox{\boldmath $\sigma$}\in \mathcal{E}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}\in \mathcal{D}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{E}_+$. Then,
$$\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}\;\text{implies}\; X_{n:n}\leq_{lr}Y_{n:n}.$$
\end{t1}
{\bf Proof:} In view of Theorem \ref{th2} and using (\ref{e1}) and (\ref{e2}), we have to prove that
\begin{equation*}
\eta_{1}(x)=\frac{\sum_{k=1}^n\frac{\sigma_k}{x}\left(1-\frac{1}{1+\sigma_k(\lambda_k-\log x)}\right)}{\sum_{k=1}^n\frac{\sigma_k}{x}\left(1-\frac{1}{1+\sigma_k(\delta_k-\log x)}\right)}
\end{equation*}
is decreasing in $x$, which, arguing as in the proof of Theorem \ref{th3}, amounts to proving that
$$\psi_{2}(\mbox{\boldmath$\lambda$},x)=\frac{\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2}}{\sum_{k=1}^n\frac{\sigma_k^2\left(\lambda_k-\log x\right)}{1+\sigma_k\left(\lambda_k-\log x\right)}}$$
is Schur-convex in $\mbox{\boldmath$\lambda$}$. Now,
$$\frac{\partial\psi_2}{\partial\lambda_i}\stackrel{sign}{=}-\frac{2\sigma_i^3}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\sum_{k=1}^n\frac{\sigma_k^2\left(\lambda_k-\log x\right)}{1+\sigma_k\left(\lambda_k-\log x\right)}-\frac{\sigma_i^2}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2}.$$
Noticing that
$$\frac{\partial}{\partial\sigma_i}\left[\frac{\sigma_i}{1+\sigma_i\left(\lambda_i-\log x\right)}\right]=\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}>0,$$
so that $\frac{\sigma_i}{1+\sigma_i\left(\lambda_i-\log x\right)}$ is increasing in $\sigma_i$, and that $\mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ (resp.\ $\mathcal{E}_+$) and $\mbox{\boldmath $\sigma$}\in \mathcal{E}_+$ (resp.\
$\mathcal{D}_+$), i.e., for all $i\le j$, $\lambda_i\geq(\leq)\ \lambda_j$ and $\sigma_i\leq(\geq)\ \sigma_j$, we obtain
$$\frac{\sigma_i^3}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\leq(\geq)\ \frac{\sigma_j^3}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^3}\leq(\geq)\ \frac{\sigma_j^3}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^3}$$
and
$$\frac{\sigma_i^2}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}\leq(\geq)\ \frac{\sigma_j^2}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^2}\leq(\geq)\ \frac{\sigma_j^2}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^2}.$$
So,
$$\frac{\partial\psi_2}{\partial\lambda_i}-\frac{\partial\psi_2}{\partial\lambda_j}\geq(\leq)\ 0.$$
Thus the result follows from Lemma 3.1 (Lemma 3.3) of Kundu \emph{et al.} \cite{kun1}. $\Box$\\
\end{document}
\begin{document} \title{Supplementary Materials of Hierarchical Prototype Networks for Continual Graph Representation Learning} \maketitle \IEEEPARstart{I}{n} this document, we provide implementation details in Section \ref{sec:implementation_details} and additional experimental results and analysis in Section \ref{sec:experiments}. \iffalse \section{Details of Theoretical Analysis}\label{sec:theoretical_analysis} \subsection{Overview} The main theoretical results are briefly introduced in the paper. In this section, we provide detailed explanations and proofs for the theoretical results. \subsection{Memory consumption upper bound} The proof and detailed analysis on the memory consumption upper bound has already been given in the paper. In this subsection, we give the computation of a specific case of the general theoretical results. Although the general formulation of the upper bound is not available, we can specially compute $\max_{N} S(d_a, N, 1-t_A)$ for certain $n$s, and verify it with experiments. For example, when $n=2$, the distribution becomes distributing points on a circle with unit radius. Then, $\max_{N} S(d_a, N, 1-t_A)$ can be obtained by evenly distributing the points on the circle with an interval of $t_A$. Finally, the explicit value of $\max_{N} S(d_a, N, 1-t_A)$ can be formulated as: \begin{align} \max_{N} S(d_n, N, 1-t_N) = \frac{2\pi}{\arccos(1-t_A)}, \end{align} then we have: \begin{align} n_A \leqslant (l_a + l_r) \frac{2\pi}{\arccos(1-t_A)}. \end{align} And the upper bound of the number of N- and C-prototypes can be formulated similarly. The above results are used in Section 4.7 in the paper. \subsection{Task distance preserving} In this subsection, we give proofs and detailed analysis on the Theorem 2 in the paper. At below, Lemma \ref{quadrtic_bound}, Lemma \ref{real_symmetric}, Lemma \ref{rank}, and Corollary \ref{cor:1} are from existing knowledge ranging from geometry to linear algebra. The other parts are of our own contributions. In continual learning, the key challenge is to overcome the catastrophic forgetting, which refers to the performance degradation on previous tasks after training the model on new tasks. Based on our model design, we formulate this as: whether learning new tasks affect the representations the model generates for old task data. First, we give definitions on the tasks and task distances: \begin{definition}[Task set]\label{def:task_set} The $p$-th task in a sequence is denoted as $\mathcal{T}^p$ and contains a subgraph $\mathcal{G}_p$ consisting of nodes belonging to some new categories. We denote the associated node set and adjacency matrix as $\mathbb{V}_p$ and $\mathrm{A}_p$. Each $v_p^i \in \mathbb{V}_p$ has a feature vector $\mathbf{x}(v_p^i)$ and a label $y(v_p^i)$. \end{definition} Then, the reason for catastrophic forgetting is that different tasks in a sequence are drawn from heterogeneous distributions, making the model sequentially trained on different tasks unable to maintain satisfying performances on previous tasks. Therefore, given the definition of the tasks (Definition \ref{def:task_set}), we then give a formal definition to quantify the difference between two tasks. \begin{definition}[Task distance]\label{def:task_distance} We define the distance between two tasks as the set distance between the node sets of these two tasks, \textit{i.e.} $\mathbf{dist}(\mathbb{V}_p, \mathbb{V}_q) = \mathrm{inf}\norm{\mathbf{x}(v_p^i) - \mathbf{x}(v_q^j)}, \forall v_p^i \in \mathbb{V}_p, v_q^j \in \mathbb{V}_q$. 
\end{definition} \begin{lemma} The distance between any two tasks is non-negative, i.e. $\forall i,j\in \{1,...,\mathrm{M}^T\}, \mathbf{dist}(\mathbb{V}_p, \mathbb{V}_q) \geqslant 0$, where $\mathrm{M}^T$ is the number of tasks contained in the sequence. \end{lemma} The real-world data could be complex and sometimes may even contain noises that are impossible for any model to learn, which needs extra considerations when justifying the effectiveness of the model. Formally, we give the definition of the contradictory data. \begin{definition}[Contradictory data] $\forall v_p^i \in \mathbb{V}_p, p = 1,...,\mathrm{M}^T$, if $\exists v_q^j \in \mathbb{V}_q, j = 1,...,\mathrm{M}^T$, $\mathrm{st.} \forall l\in \mathbb{N^*}, \forall u \in \mathcal{N}^l(v_p^i) $ and $\forall v \in \mathcal{N}^l(v_q^j)$, $\mathbf{x}(u)=\mathbf{x}(v)$ but $y(v_p^i) \neq y(v_q^j)$, then we say $(v_p^i, y(v_p^i))$ and $(v_q^j, y(v_q^j))$ are contradictory data, as it is contradictory for any model to give different predictions for one node based on the same node features and graph structures. ($\mathbb{N}^*$ denotes the set of non-negative integers) \end{definition} \begin{rmrk} Contradictory data is ignored or simply regarded as outliers in previous works, but in this work, we explicitly analyze its affect for the comprehensiveness of our theory. contradictory data has different situations. If $v_p^i$ and $v_q^j$ are from different tasks, then $y(v_p^i) \neq y(v_q^j)$ is plausible. Because they may be describing a same thing from different aspects. For example, an article from the citation network may be both categorized as 'physics related' and 'computer science related'. In this situation, it would be easy to add an task indicator to the feature of the node, then the feature of $v_p^i$ and $v_q^j$ are no longer equal and are not contradictory data anymore. But within one task, contradictory data are most likely to be wrongly labeled, \textit{e.g.} it does not make sense if an article is both 'related to physics' and 'not related to physics'. \end{rmrk} Besides the distance between tasks, the distance between the embeddings obtained by the AFEs will also be a crucial concept in the proof. \begin{definition}[Embedding distance] Each input node $v_p^i$ is given a set of atomic embeddings $\mathbb{E}_A(v_p^i) = \mathbb{E}^{\mathrm{node}}_A(v_p^i) \cup \mathbb{E}^{\mathrm{struct}}_A(v_p^i)$, where $\mathbb{E}^{\mathrm{node}}_A(v_p^i) = \{\mathbf{a}_i^j | j\in\{1,...,l_a\}\}_p $ containing the atomic node embeddings of $v_p^i$ and $\mathbb{E}^{\mathrm{struct}}_A(v_p^i) = \{\mathbf{r}_i^j | k \in \{ 1,...,l_r\} \}_p$ containing the atomic structure embeddings. $\mathbf{a}_i^j\in \mathbb{R}^{d_a}$ and $\mathbf{r}_i^j \in \mathbb{R}^{d_r}$. To define the distance between representations of two nodes, we concatenate the atomic embeddings of each node into a single vector in a higher dimensional space, i.e. each node $v_p^i$ corresponds to a latent vector $\mathbf{z}_p^i=[\mathbf{a}_i^1;...;\mathbf{a}_i^{l_a};\mathbf{r}_i^1;...;\mathbf{r}_i^{l_r}]\in\mathbb{R}^{l_a \times d_a + l_r \times d_r}$. Then we define the distance between representations of two nodes $v_p^i$ and $v_q^j$ as the Euclidean distance between their corresponding latent vector $\mathbf{z_p^i}$ and $\mathbf{z_q^j}$, i.e. $\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = \norm{\mathbf{z}_p^i-\mathbf{z}_q^j}_2$ . \end{definition} Then we will give some explanations on the linear algebra related theories. 
\begin{lemma}[Bounds for real quadratic forms]\label{quadrtic_bound} Given a real symmetric matrix $\mathbf{A}$, and an arbitrary real vector variable $\mathbf{x}$, we have \begin{align} \lambda_{\min} \leqslant \frac{\mathbf{x}^T \mathbf{A} \mathbf{x}}{\mathbf{x}^T \mathbf{x}} \leqslant \lambda_{\max}, \end{align} where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and maximum eigenvalues of matrix $\mathbf{A}$. \end{lemma} \begin{lemma}[Real symmetric matrix]\label{real_symmetric} For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$, $\mathbf{A}^T\mathbf{A} \in \mathbb{R}^{n\times n}$ is a real symmetric matrix, $rank(\mathbf{A}^T\mathbf{A}) = rank(\mathbf{A})$, and the non-zero eigenvalues of $\mathbf{A}^T\mathbf{A}$ are squares of the non-zero singular values of $\mathbf{A}$. \end{lemma} \begin{lemma}[Rank and number of non-zero singular values]\label{rank} For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$, the number of non-zero singular values equals the rank of $\mathbf{A}$, i.e. $rank(\mathbf{A})$ \end{lemma} \begin{cor}\label{cor:1} For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$. Without loss of generality, we assume $n \leqslant m$. If $\mathbf{A}$ is column full rank, i.e. $rank(\mathbf{A})=n$, then $\mathbf{A}$ has $n$ non-zero singular values. Besides, $rank(\mathbf{A}^T\mathbf{A}) = n$, and $\mathbf{A}^T\mathbf{A}$ has $n$ non-zero singular values. \end{cor} Given the explanations above, we then derive the bound for the change of the distance among data, which will be further used for analyzing the separation of data from different tasks. \begin{lemma}[Embedding distance bound]\label{dist_bound} Given two nodes $v_p^i \in \mathbb{V}_p$ and $v_q^j \in \mathbb{V}_q$ with vertex feature $\mathbf{x}(v_p^i), \mathbf{x}(v_q^j) \in \mathbb{R}^{{d_v}}$, their multi-hop neighboring node sets are denoted as $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$ and $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_q^j)$. The AFEs for generating atomic embeddings are $\mathrm{AFE}_{\mathrm{node}} = \{ \mathbf{A}_i \in \mathbb{R}^{d_{a} \times d_v } | i\in\{1,...,l_a\} $ and $\mathrm{AFE}_{\mathrm{struct}} = \{ \mathbf{R}_j \in \mathbb{R}^{d_r \times d_v } | j\in\{1,...,l_r\}$, corresponding to matrices for atomic node embeddings and atomic structure embeddings, respectively. Then, the square distance $\mathrm{dist}^2(\mathbf{z}_p^i - \mathbf{z}_q^j) = \norm{\mathbf{z}_p^i - \mathbf{z}_q^j}_2^2 \geqslant \lambda_{\min}(\norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2 + \sum_{k=1}^{l_r}\norm{\mathbf{x}(u_k)-\mathbf{x}(\nu_k)}_2^2)$, if $l_a \times d_a + l_r \times d_r \geqslant d_v $, where $u_k$ are nodes sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$, $\nu_k$ are nodes sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_q^j)$, $\lambda_i$ are the eigenvalues of $\mathbf{W}^T \mathbf{W}$, and $\mathbf{W} \in \mathbb{R}^{(l_r+1)d_v \times (l_a d_a + l_r d_r)}$ is constructed with the matrices in $\mathrm{AFE}_{\mathrm{node}}$ and $\mathrm{AFE}_{\mathrm{struct}}$. Specifically, $\mathbf{W}$ is a block matrix constructed as follows: 1. $\mathbf{W}_{1:l_ad_a,1:d_v}$ are filled by the concatenation of $\{\mathbf{A}_i | i=1,...,l_a \}$, i.e. $[\mathbf{A}_1;...;\mathbf{A}_{l_a}] \in \mathbb{R}^{l_a d_a \times d_v}$. 2. For $\mathbf{W}_{l_ad_a+1:l_ad_a+l_rd_r, 1:(l_r+1)d_v}$, the construction is first filling $\mathbf{W}_{l_ad_a+(k-1)d_r:l_ad_a+kd_r, kd_v:(k+1)d_v}$ with $\mathbf{R}_k$, $k=1,...,l_r$. 3. 
For other parts, fill with zeros. \end{lemma} \begin{proof} Given vertex $v_p^i$, we concatenate its feature vector with the $l_r$ neighbors sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$, i.e. $\mathbf{x}_{p,i}' = [\mathbf{x}(v_p^i); \mathbf{x}(u_1); ...; \mathbf{x}(u_{l_r})]\in \mathbb{R}^{(l_r+1)d \times 1}, u_j \in \bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$. Then with the constructed block matrix $\mathbf{W}$, we could formulate the generation of $\mathbf{z}_p^i$ as: $\mathbf{z}_p^i = \mathbf{W} \mathbf{x}_{p,i}'$. Similarly, we can formulate $\mathbf{z}_q^j$ for another vertex $v_q^j$. And their distance can be formulated as: \begin{align*} \scriptsize \mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = \norm{\mathbf{z}_p^i-\mathbf{z}_q^j}_2 = \sqrt{(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)}, \end{align*} where $\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)$ can be further expanded as: \begin{flalign*} (\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j) & = (\mathbf{W} \mathbf{x}_{p,i}'-\mathbf{W} \mathbf{x}_{q,j}')^T (\mathbf{W} \mathbf{x}_{p,i}'-\mathbf{W} \mathbf{x}_{q,j}') &&\\ &=\big(\mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')\big)^T \big(\mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')\big) &&\\ &= (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T\mathbf{W}^T \mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')&& \\ \end{flalign*} According to lemma \ref{real_symmetric}, $\mathbf{W}^T \mathbf{W}$ is a real symmetric matrix, with lemma \ref{quadrtic_bound}, we have \begin{align*} \frac{(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T\mathbf{W}^T \mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')}{(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')} \geqslant \lambda_{min} \end{align*} According to lemma \ref{real_symmetric}, with $l_ad_a+l_rd_r \geqslant (l_r+1)d_v $ and the constraint of column full rank on $\mathbf{W}$, $\mathbf{W}^T \mathbf{W} \in \mathbb{R}^{(l_r+1) \times (l_r+1)}$ has $l_r+1$ positive eigenvalues, thus $\lambda_{min} > 0$. Then we decompose $(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')$ as: \begin{flalign*} &\quad (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')&& \\ &= \sum_{k=0}^{(l_r+1)d-1} \big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')\big)^2&&\\ &= \sum_{k=1}^{(l_r+1)d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')\big)^2 &&\\ &= \sum_{k=1}^{d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')_k\big)^2 + \sum_{k=1}^{l_r}\sum_{m=kd_v+1}^{(k+1)d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')_k\big)^2 &&\\ &= \norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2+ \sum_{m=1}^{l_r}\norm{\mathbf{x}(u_m)-\mathbf{x}(\nu_k)}_2^2&& \end{flalign*} \noindent$\therefore \mathrm{dist}^2(\mathbf{z}_p^i - \mathbf{z}_q^j) = \norm{\mathbf{z}_p^i - \mathbf{z}_q^j}_2^2 \geqslant \lambda_{\min}(\norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2 + \sum_{k=1}^{l_r}\norm{\mathbf{x}(u_k)-\mathbf{x}(\nu_k)}_2^2)$ \end{proof} The key point in these theories is that for any task sequence with certain distance among the tasks, there exists a configuration that ensures HPNs to be capable of preserving the task distance after projecting the data into the hidden space, so that only the prototypes associated with the current task are refined and the prototypes corresponding to the other tasks are preserved. 
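To make the construction of the block matrix $\mathbf{W}$ in Lemma \ref{dist_bound} concrete, the following small NumPy sketch (an illustration with randomly generated matrices standing in for the trained AFEs) assembles $\mathbf{W}$ as described above, with $l_ad_a+l_rd_r$ rows and $(l_r+1)d_v$ columns so that $\mathbf{z}=\mathbf{W}\mathbf{x}'$ is well defined, and numerically checks the lower bound $\|\mathbf{W}\mathbf{u}\|_2^2\geqslant\lambda_{\min}\|\mathbf{u}\|_2^2$ for random difference vectors $\mathbf{u}=\mathbf{x}_{p,i}'-\mathbf{x}_{q,j}'$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
l_a, l_r, d_a, d_r, d_v = 3, 2, 8, 8, 10  # l_a*d_a + l_r*d_r = 40 >= (l_r+1)*d_v = 30

A = [rng.standard_normal((d_a, d_v)) for _ in range(l_a)]  # atomic node AFEs
R = [rng.standard_normal((d_r, d_v)) for _ in range(l_r)]  # atomic structure AFEs

# block matrix W with z = W x', x' = [x(v); x(u_1); ...; x(u_{l_r})]
W = np.zeros((l_a*d_a + l_r*d_r, (l_r + 1)*d_v))
W[:l_a*d_a, :d_v] = np.vstack(A)
for k in range(1, l_r + 1):
    W[l_a*d_a + (k-1)*d_r : l_a*d_a + k*d_r, k*d_v : (k+1)*d_v] = R[k-1]

lam_min = np.linalg.eigvalsh(W.T @ W).min()  # > 0 when W has full column rank

u = rng.standard_normal(((l_r + 1)*d_v, 1000))  # random difference vectors
ratios = np.sum((W @ u)**2, axis=0) / np.sum(u**2, axis=0)
print(ratios.min() >= lam_min - 1e-9)        # expected True (Rayleigh quotient bound)
\end{verbatim}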
Specifically, theorem on zero-forgetting can be formulated as follows: \begin{thm}[Task distance preserving]\label{zero-forget} For $\mathrm{HPNs}$ trained on consecutive tasks $\mathcal{T}^p$ and $\mathcal{T}^{p+1}$. If $l_ad_a+l_rd_r \geqslant (l_r+1)d_v $ and $\mathbf{W}$ is column full rank, then as long as $ t_{A} < \lambda_{\min}(l_r+1) \mathbf{dist}(\mathbb{V}_p, \mathbb{V}_{p+1}) $, learning on $\mathcal{T}^{p+1}$ will not modify representations $\mathrm{HPNs}$ generate for data from $\mathcal{T}^p$, i.e. catastrophic forgetting is avoided. \end{thm} In Theorem \ref{zero-forget}, $\lambda_i$ is eigenvalues of the $\mathbf{W}^T \mathbf{W}$, where $\mathbf{W}$ is the matrix mentioned before constructed via AFEs. $d_v$, $d_a$ and $d_r$ are dimensions of data, atomic node embeddings, and atomic structure embeddings. \begin{proof} Following the proofs above, suppose two nodes $v_p^i$ and $v_q^j$ are embedded into $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$ with the embedding module. Then the distance between $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$ could be formulated as: $\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = ||\mathbf{z}_p^i-\mathbf{z}_q^j||_2 = \sqrt{(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)}$ According to lemma \ref{dist_bound}, we have $\mathrm{dist}^2(\mathbf{z}_p^i, \mathbf{z}_q^j) = ||\mathbf{z}_p^i - \mathbf{z}_q^j||_2^2 \geqslant \lambda_{\min}(||\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)||_2^2 + \sum_{k=1}^{l_r}||\mathbf{x}(u_k)-\mathbf{x}(\nu_k)||_2^2)$. \noindent$\because v_p^i \in \mathbb{V}_p, v_q^j \in \mathbb{V}_q$, \noindent$\therefore ||\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)||_2^2 \geqslant \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$. Similarly, $||\mathbf{x}(u_k)-\mathbf{x}(\nu_k)||_2^2 \geqslant \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$, for $\forall k$. \noindent$\therefore ||\mathbf{z}_p^i, \mathbf{z}_q^j||_2^2 \geqslant \lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$ \noindent$\therefore \mathrm{dist}(\mathbf{z}_p^i, \mathbf{z}_q^j) =||\mathbf{z}_p^i - \mathbf{z}_q^j||_2 \geqslant \sqrt{\lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)} $ \noindent$\therefore$ If $t_A < \sqrt{\lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)}$, the embeddings of two nodes from two different tasks will not be assigned to same A-prototypes. Above all, if the conditions in Theorem \ref{zero-forget} are satisfied, learning on new tasks will not modify the prototypes for previous tasks. Besides, the data from previous tasks will be exactly matched to the correct prototypes after training the model on new tasks. In practice, the conditions may not be easy to be satisfied all the time. However, as mentioned in the paper, the bound given in Theorem \ref{zero-forget} is not tight, thus fully satisfying the conditions may not be necessary. Therefore, in the experimental section in the paper, we practically show how the important factors included in these conditions influence the performance (Section 3.6 in the paper). The results demonstrates that the more we satisfy the conditions, the better performance we will obtain, and certain factors (number of AFEs) influence more than the others. \end{proof} \begin{rmrk} When $\mathrm{dist}(\mathbb{V}_p, \mathbb{V}_q) = 0$, i.e. there exists a non-empty set $\mathbb{V}_{\cap} = \mathbb{V}_p \cap \mathbb{V}_q$, $\mathrm{st.}$ $\mathrm{dist}(\mathbb{V}_p \setminus \mathbb{V}_{\cap}, \mathbb{V}_q \setminus \mathbb{V}_{\cap}) > 0$, then \textrm{Theorem} \ref{zero-forget} holds. 
As for the set $\mathbb{V}_{\cap}$ containing examples that are exactly the same in $\mathbb{V}_p$ and $\mathbb{V}_q$, there are two situations: 1. $\forall v \in \mathbb{V}_{\cap}$, $y_p(v) = y_q(v)$, where $y_p(\cdot)$ and $y_q(\cdot)$ denote the associated labels in tasks $p$ and $q$; 2. $\exists v \in \mathbb{V}_{\cap}$, $y_p(v) \neq y_q(v)$. In situation 1, $\mathbb{V}_{\cap}$ will not cause the model to forget the previous task, as these shared data are exactly the same and will push the model in the same direction. In situation 2, if no task indicator is provided, then these data are contradictory data; if a task indicator is provided, then the indicator can be merged into the feature vector of the node, i.e. $\mathbf{x}(v_p)$, and $v_p$ will no longer belong to $\mathbb{V}_{\cap}$. \end{rmrk} \iffalse \begin{rmrk} Theorem \ref{zero-forget} implies that our model is capable of handling arbitrarily collected tasks. However, our model does not actually depend on the splitting of the tasks. Instead, it will automatically discover different tasks and generate similar representations for similar tasks, and vice versa. This is explained in the proof of Theorem \ref{zero-forget}. \end{rmrk} The constraints on the AFEs and the range for $t_A$ are negatively correlated. In Theorem \ref{zero-forget}, the constraints on the AFEs are tight and thus the range for $t_A$ is wide. In the following, we give another version of Theorem \ref{zero-forget} in which the constraints on the AFEs are relaxed but the range for $t_A$ shrinks. The two versions of the theorem may help in understanding our model. Also, the different versions of the constraints provide more flexible guidance for configuring the implementation. \begin{thm}[Task distance preserving v2]\label{zero-forget2} Consider $\mathrm{HPNs}$ trained on consecutive tasks $\mathcal{T}^p$ and $\mathcal{T}^{p+1}$. If $l_ad_a \geqslant d_v$ and $\mathbf{W}$ is column full rank, then as long as $ t_{A} < \lambda_{\min}\mathrm{dist}(\mathbb{V}_p, \mathbb{V}_{p+1}) $, learning on $\mathcal{T}^{p+1}$ will not modify the representations $\mathrm{HPNs}$ generate for data from $\mathcal{T}^p$, i.e. catastrophic forgetting is avoided. \end{thm} The proof of Theorem \ref{zero-forget2} is similar to the proof of Theorem \ref{zero-forget}. \fi \fi \section{Details of Implementation}\label{sec:implementation_details} \subsection{Datasets and task splitting} In this subsection, we introduce the datasets we used and the details of how each dataset is split into different tasks. We use 5 publicly available datasets, which include 3 citation networks (Cora \cite{sen2008collective}, Citeseer \cite{sen2008collective}, and OGB-Arxiv \cite{wang2020microsoft,mikolov2013distributed}), 1 actor co-occurrence network (Actor) \cite{pei2020geom}, and 1 product co-purchasing network (OGB-Products) \cite{Bhatia16}.
\iffalse \begin{table*}[] \scriptsize \centering \begin{tabular}{lcccccccc}\toprule \textbf{Dataset} & Cornell & Texas & Wisconsin & Cora & Citeseer & Actor & OGB-Arxiv & OGB-Products\\\midrule \# nodes & 183 & 183 & 251 & 2,708 & 3,327 & 7,600 & 169,343 & 2,449,029 \\\midrule \# edges & 295 & 309 & 499 & 5,429 & 4,732 & 33,544 &1,166,243 &61,859,140 \\ \midrule \# features & 1,703 & 1,703 & 1,703 & 1,433 & 3,703 & 931 & 128 & 100 \\\midrule \# classes & 5 & 5 & 5 & 7 & 6 & 4 & 40 & 47 \\\midrule \# tasks &2 &2 &2 & 3 & 3 & 2 & 20 & 23 \\\bottomrule \end{tabular} \caption{The detailed statistics of 8 datasets used in our experiments.} \label{tab:data_statistics} \end{table*} \fi \subsubsection{Citation networks} The original Cora \cite{mccallum2000automating} and Citeseer \cite{giles1998citeseer} are pre-processed by Sen et al. \cite{sen2008collective} with stemming and removing stop words as well as words with document frequency less than 10. Finally, Cora contains 2708 documents, 5429 links denoting the citations among the documents, and each document is represented with 1433 distinct words. Cora contains 7 classes. For training, 140 documents are selected with 20 examples for each class. The validation set contains 500 documents and the test set contains 1000 examples. In our continual learning setting, the first 6 classes are selected and grouped into 3 tasks (2 classes for each task) in the original order. Citeseer results in 3312 documents with each document being represented with 3703 distinct words, and 4732 links. Citeseer contains 6 classes. 20 documents per class are selected, for training, 500 documents are selected, for validation, and 1000 documents are selected as the test set. For continual learning setting, the documents from 6 classes are grouped into 3 tasks with 2 classes per task in the original order. The Cora and Citeseer datasets can be downloaded via \href{https://github.com/tkipf/gcn/tree/master/gcn/data}{Cora$\&$Citeseer}. The OGB-Arxiv dataset is collected in the Open Graph Benchmark \href{https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv}{OGB}. It is a directed citation network between all Computer Science (CS) arXiv papers indexed by MAG \cite{wang2020microsoft}. Totally it contains 169,343 nodes and 1,166,243 edges. Each node is an paper and each directed edge indicates that one paper cites another one. Each paper comes with a 128-dimensional feature vector. The dataset contains 40 classes. As the dataset is not balanced and the numbers of examples in different classes differs significantly, directly grouping the classes into 2-class groups like the Cora and Citeseer will cause certain tasks to be imbalanced. Therefore, we reordered the classes in an descending order according to the number of examples contained in each class, and then group the classes according to the new order. In this way, the number of examples contained in different classes of each task are arranged to be as balanced as possible. Specifically, the class indices of each task are: \{(35, 12),(15, 21),(28, 30), (16, 24), (10, 34), (8, 4), (5, 2), (27, 26), (36, 19), (23, 31), (9, 37), (13, 3), (20, 39), (22, 6), (38, 33), (25, 11), (18, 1), (14, 7), (0, 17), (29, 32)\}. \iffalse \subsubsection{Web page networks} WebKB dataset is collected from different universities by Carnegie Mellon University. The nodes in the datasets are web pages with bag-of-words representation, and edges are hyperlinks between the pages. 
The web pages are manually classified into 5 classes including student, project, course, staff, and faculty. Following setting in \cite{pei2020geom}, we use three subsets including Wisconsin with 251 web pages, Cornell with 183 web pages, and Texas with 183 web pages. For all these datasets, 60\% nodes are used for training, 20\% for validation, and 20\% for testing. For each of the web page networks, we constructed 2 tasks with 2 classes per task. The three web page network datasets can be accessed via \href{https://github.com/graphdml-uiuc-jlu/geom-gcn/tree/master/new_data}{Web Pages}. The balanced splitting of the classes for the three web page networks is \{(2, 3), (0, 4)\}. \fi \subsubsection{Actor co-occurrence network} The actor co-occurrence network is a subgraph of the film-director-actor-writer network \cite{tang2009social}. Each node in this dataset corresponds to an author, and the edges between the nodes are co-occurrence on the same Wikipedia pages. The whole dataset contains 7600 nodes and 33544 edges. Each node is accompanied with a feature vector of 931 dimensions. The nodes are classified into 4 classes according to the number of the average monthly traffic of the web page. For this dataset, we also constructed 2 tasks with 2 classes per task. The link to this dataset is \href{https://github.com/graphdml-uiuc-jlu/geom-gcn/tree/master/new_data/film}{Actor}. The balanced splitting of the classes is \{(0, 1), (2, 3)\}. \subsubsection{Product co-purchasing network} OGB-Products is also collected in the Open Graph Benchmark \href{https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv}{OGB}, and is an undirected and unweighted graph, representing an Amazon product co-purchasing network \href{http://manikvarma.org/downloads/XC/XMLRepository.html}{link}. In total, it contains 2,449,029 nodes and 61,859,140 edges. Nodes represent products sold in Amazon, and edges between two products indicate that the products are purchased together. Node features are generated by extracting bag-of-words features from the product descriptions followed by a Principal Component Analysis to reduce the dimension to 100. 47 top-level categories are used for target labels, in our experiments, we select 46 classes and omit the final class containing only 1 example. Similar to OGB-Arxiv, we reorder the classes in an descending order according to the number of examples contained in each class, and then group the classes according to the new order. The class indices of each tasks are: \{(4, 7), (6, 3), (12, 2), (0, 8), (1, 13), (16, 21), (9, 10), (18, 24), (17, 5), (11, 42), (15, 20), (19, 23), (14, 25), (28, 29), (43, 22), (36, 44), (26, 37), (32, 31), (30, 27), (34, 38), (41, 35), (39, 33), (45, 40)\}. \begin{figure*} \caption{Details of modules in HPNs.} \label{fig:module_structure} \end{figure*} \subsection{Experiment Setup} All models are implemented in PyTorch with SGD optimizer and repeated 5 times on a Nvidia Titan Xp GPU. The average performance and standard deviations are reported for comparison. The network architecture of HPNs is detailed in Figure \ref{fig:module_structure}, and the specific values of hyperparameters will be given in the following. The hyperparameters we provide here correspond to the models used in comparisons with the baselines, while in other experiments the hyperparameters are the research objects and will not be kept unchanged. As the sizes of the datasets we used are greatly different, we adopt different hyperparameters for small datasets and large datasets. 
The small datasets include Cora, Citeseer, Actor, Wisconsin, Cornell, and Texas. The large datasets include OGB-Arxiv and OGB-Products. For the small datasets, we set $l_a'=1$, $l_r'=1$, and $h=2$. We randomly sample 5 one-hop neighbors and 7 two-hop neighbors. The learning rates are managed separately for different modules of the model. For the AFEs, the learning rate is set as 0.1 at the beginning and decays to 0.001 at epoch 35. The learning rate for the prototypes are initialized as 0.1 at epoch 35 and decays to 0.01 at epoch 85. And the learning rates for the other trainable parameters are the same as the AFEs. During training, the AFEs would change rapidly at first and slow down after several epochs. Therefore, at the starting period of training, the same node would not be stably matched to the same set of prototypes due to the rapidly changing AFEs. To avoid this from creating too many redundant prototypes, we start to establish prototypes after training the AFEs at the 35th epochs. The input data has a dimension of 1433, and we set the dimensions of A-, N-, and C-prototypes to be 16. The number of training epochs is 90. Although the training procedure is designed in a delicate way, the model is actually rather robust and can perform well without these delicate procedures. For example, on the largest dataset OGB-Products, we only train the model for 10 epochs, and do not decay the learning rate, the prototypes are established at the beginning, and the model still obtained good results, as shown in in the paper (e.g. results in Section 3.3). For the large datasets, for higher efficiency, we set $h=1$, and only uniformly sample one neighbor from the neighbors. On the OGB-Products, we shrink the dimensions of A-, N-, and C-prototypes to be 2, in order to control the number of prototypes. For both HPNs, the threshold $t_A$, $t_N$, and $t_C$ are selected by cross validation on $\{0.01, 0.05,0.1,0.15,0.2,0.25,0.3,0.35, 0.4\}$. According to the experimental results, there is a wide range for choosing the thresholds. Finally we choose $t_A=t_N=0.3$ and $t_C=0.4$. Figure \ref{fig:module_structure} describes the shapes of the modules in HPNs. The input data are $\mathrm{d_{input}}$ node feature vectors. With the $\mathrm{AFEs}$, the data are transformed into atomic embeddings, which are $\mathrm{d_a}$ dimensional vectors. Then they are mapped to A-prototypes with same dimensions. After that, the matched A-prototypes are further mapped to higher level N- and C-prototypes with the corresponding Fc layers. The dimensions of N- and C- prototypes are $\mathrm{d_n}$ and $\mathrm{d_c}$. Finally, all the prototypes of different levels are concatenated into a single vector with length of $\mathrm{(l_a'+l_r')d_a+d_n+d_c}$ and fed into the classifier (Fc layer) for classification results. The number of logits output by the classifier is $\mathrm{num\_{class}}$, which is the number of classes in each task. Some existing continual learning works with the task-incremental setting will expand the number of logits output by the classifier to $\mathrm{num\_{class}}\cdot \mathrm{num\_{task}}$, where $\mathrm{num\_task}$ is the total number of tasks the model is going to encounter. however, we argue that this is a very impractical scenario because of the following reasons: 1. A continual learning model should not know the number of tasks to learn in advance, therefore $\mathrm{num\_{task}}$ is unknown. 2. 
Setting the number of logits as $\mathrm{num\_{class}}\cdot \mathrm{num\_{task}}$ causes the memory consumption of the model to grows linearly with the number of tasks to learn, which is highly undesirable for continual learning models. Therefore, we set the number of output logits as $\mathrm{num\_{class}}$ and force different tasks to share the output head, which increases the hardness of learning but is much more practical. The baselines have different settings. For the baselines with GCN backbone, 16 is approximately the best for the number of hidden unit. For the GAT based baselines, we set the number of heads and number of hidden units as 8. For GIN, the number of hidden units is 32. For all these baselines, the above mentioned settings are applied on most datasets. For some datasets on which the baselines cannot perform well, we will further tune the models carefully to get better results. \section{Additional Experimental Results and Detailed Analysis}\label{sec:experiments} To further validate our proposed model, in this section, we report additional experimental results by extending the experiments reported in the paper to more datasets. We will also give more detailed analysis on the results, which is omitted in the paper due to space limitations. \begin{figure} \caption{Dynamics of ARS for continual learning tasks on OGB-Products dataset.} \label{fig:ARS_Memory} \end{figure} \begin{figure} \caption{Dynamics of memory consumption of HPNs on both OGB-Arxiv and OGB-Products.} \label{fig:Param_amount} \end{figure} \iffalse \subsection{Comparisons with Baseline Methods on Additional Datasets} In this subsection, we include the comparison results with baseline methods on the other four datasets including Wisconsin, Cornell, Texas, and Actor, the results are shown in Table \ref{tab:comparison}. 
\begin{table*}[] \centering \caption{Performance comparisons between HPNs and baseline models on four other datasets.} \begin{tabular}{c|c||cc|cc|cc|cc} \toprule \multirow{2}{2.5em}{\textbf{C.L.T.}} & \multirow{2}{2em}{\textbf{Base}} & \multicolumn{2}{c|}{Actor}& \multicolumn{2}{c|}{Wisc.}& \multicolumn{2}{c|}{Corn.}& \multicolumn{2}{c}{Texas}\\ \cline{3-10} & & AM & FM & AM & FM & AM & FM & AM & FM \\ \bottomrule\toprule \multirow{3}{2.5em}{None} & GCN & 43.63\% & -9.11\% & 74.71\% & -9.52\% & 34.92\% & -68.00\% & 80.15\% & -12.00\%\\ & GAT & 53.10\% & -4.33\% & 78.82\% & -4.76\% & 46.77\% & -56.00\% & 74.62\% & -16.00\%\\ & GIN & 45.51\% & -8.88\% & 76.44\% & -4.76\% & 34.92\% & -64.00\% & 78.31\% & -12.00\%\\ \midrule \multirow{3}{2.5em}{EWC} & GCN & 44.29\% & -7.06\% & 74.71\% & -9.52\% & 38.92\% & -60.00\% & 82.15\% & -8.00\%\\ & GAT & 54.23\%&-2.51\% &78.82\% &-4.76\%&48.92\%&-44.00\%&78.62\%&-8.00\%\\ & GIN & 47.61\%&-7.29\%&77.09\%&0.00\%&33.23\%&-52.00\%&78.31\% & -12.00\%\\ \midrule \multirow{3}{2.5em}{LwF} & GCN & 49.77\%&-3.65\%&84.65\%&-9.52\%&62.77\%&-20.00\% &56.31\%&-52.00\%\\ & GAT & 52.82\%&-6.15\%&81.20\%&0.00\%&46.77\%&-52.00\%&78.46\%&-20.00\%\\ & GIN &49.70\%&-4.10\%&74.71\% &0.00\%&34.92\%&-64.00\%&34.92\%&-52.00\%\\ \midrule \multirow{3}{2.5em}{GEM} & GCN &52.66\%&+3.91\%&88.71\%&6.25\%&65.08\% &+0.00\%&80.46\%&+4.00\%\\ & GAT &54.31\% &-2.05\% &77.09\% &-9.52\% &65.08\% &-4.00\% &76.77\% & -4.00\% \\ & GIN &45.23\%&-11.16\%&72.78\%&-6.25\%&76.62\%&+4.00\%&72.77\%&+8.00\%\\ \midrule \multirow{3}{2.5em}{MAS} & GCN &50.73\%&-1.59\%&77.75\%& -9.52\%&61.23\%&+0.00\%&78.46\%&+0.00\%\\ & GAT &53.67\%&-1.60\%&76.01\%&-6.25\%&62.62\%&-32.00\%&84.46\%&+0.00\%\\ & GIN &51.69\%&-0.69\%&77.75\%&-4.76\%&63.08\%&+0.00\%&82.86\%&+0.00\%\\ \midrule \multirow{3}{2.5em}{ERGN.} & GCN &52.44\%&+0.69\%&74.71\%&-9.52\%&34.92\%&-68.00\%&80.15\%&-12.00\%\\ & GAT &51.40\%&-7.29\%&78.16\%&-9.52\%&48.77\%&-52.00\%&80.46\%&-12.00\%\\ & GIN &42.72\%&-12.98\%&76.44\% & -4.76\%&34.92\%&-64.00\%&78.31\%&-12.00\%\\ \midrule \multirow{3}{2.5em}{TWP} & GCN &50.59\%&-4.79\%&66.09\%&-14.28\%&56.77\%&-32.00\%&82.31\% &-4.00\%\\ & GAT &54.01\%&-2.05\%&80.54\%&-9.52\%&46.92\%&-48.00\%&78.62\%&-8.00\%\\ & GIN &49.91\%&-3.64\%&71.17\%&-6.25\%&51.08\%&-24.00\%&74.62\%&-4.00\%\\ \midrule \multirow{3}{2.5em}{Join.} & GCN &57.01\% & +0.00\% & 96.72\% & +0.00\% &88.09\% & +0.00\% &86.43\% &+0.00\% \\ & GAT &57.15\% & +0.00\% & 95.94\% & +0.00\% & 89.46\% & +0.00\% & 86.30\% & +0.00\%\\ & GIN &56.97\% & +0.00\% & 96.88\% & +0.00\% & 88.82\% &+0.00\% &86.95\% &+0.00\% \\ \bottomrule\toprule \multicolumn{2}{c||}{\textbf{HPNs}} &\textbf{56.80\%} & \textbf{ -0.92\%} &\textbf{96.55\%} &\textbf{+0.00\% } &\textbf{88.23\%} &\textbf{+0.00\% } &\textbf{86.31\%} &\textbf{+2.77\%}\\ \bottomrule \end{tabular} \label{tab:comparison} \end{table*} \iffalse \begin{figure} \caption{Study on impact of $t_A$ on number of different prototypes of DHPNs-C ($left$) and DHPNs ($right$) over Cora dataset.} \label{fig:n_proto} \end{figure} \begin{figure*} \caption{Visualization of hierarchical prototype representations of test data of different tasks from Cora via TSNE.} \label{fig:tsne} \end{figure*} \fi \fi \begin{figure*} \caption{Left and Middle: AM and FM change when $t_A$ varies on Citeseer. 
Right: impact of $t_A$ on the number of prototypes over Citeseer.} \label{fig:parameter_sensitivity} \end{figure*} \begin{figure*} \caption{Visualization of hierarchical prototype representations of test data of different tasks from Citeseer via t-SNE.} \label{fig:tsne_citeseer} \end{figure*} \subsection{Additional Results on Ablation Study} In this subsection, we provide the ablation study results on another large dataset, OGB-Arxiv; the results are shown in Tables \ref{tab:ablation_proto} and \ref{tab:ablation_loss}. \begin{table}[] \centering \caption{Ablation study on prototypes of different levels over OGB-Arxiv.} \begin{tabular}{c|c|c|c|c|c} \toprule Conf. &A-p. & N-p. & C-p. & AM\% & FM\% \\ \midrule 1 &\checkmark & & & 82.1$\pm$0.9 & +0.0$\pm$1.1 \\ \midrule 2 &\checkmark &\checkmark & & 83.6$\pm$1.2 & +0.2$\pm$0.9 \\ \midrule 3 &\checkmark &\checkmark &\checkmark & 85.8$\pm$0.7 & +0.6$\pm$0.9 \\ \bottomrule \end{tabular} \label{tab:ablation_proto} \end{table} \begin{table}[] \centering \caption{Ablation study on different loss terms over OGB-Arxiv.} \begin{tabular}{c|c|c|c|c|c} \toprule Conf. &$\mathcal{L}_{cls}$ & $\mathcal{L}_{div}$ & $\mathcal{L}_{dis}$ & AM\% & FM\% \\ \midrule 1 &\checkmark & & &79.6$\pm$1.5 &-0.3$\pm$1.3 \\ \midrule 2 &\checkmark & \checkmark & &82.3$\pm$1.0 &+0.4$\pm$0.9 \\ \midrule 3 &\checkmark & &\checkmark &80.7$\pm$1.2 &+0.0$\pm$1.4 \\ \midrule 4 &\checkmark &\checkmark &\checkmark & 85.8$\pm$0.7 & +0.6$\pm$0.9 \\ \bottomrule \end{tabular} \label{tab:ablation_loss} \end{table} From Table \ref{tab:ablation_proto}, we can observe that on the large dataset, the improvements brought by the higher-level prototypes are more significant than on the small dataset (reported in Table 2 in the paper). Similarly, in Table \ref{tab:ablation_loss}, the influence of the different loss terms is also more prominent compared to the results reported in Table 3 in the paper. The above results imply that the proposed hierarchical prototypes and the different loss terms are effective, and that their effectiveness becomes increasingly significant on larger datasets with richer information. \subsection{Additional Results on Learning Dynamics} Besides the learning dynamics on OGB-Arxiv provided in Section 3.5 in the paper, we further provide the results on OGB-Products, as shown in Figure \ref{fig:ARS_Memory}. The learning dynamics shown in Figure \ref{fig:ARS_Memory} are similar to those on OGB-Arxiv shown in the paper. The only difference is that OGB-Products contains more tasks and the ARS of the baselines decreases more than on OGB-Arxiv. \subsection{Additional Results on Parameter Sensitivity} In Figure \ref{fig:parameter_sensitivity}, we further provide the parameter sensitivity results on the Citeseer dataset. The results follow patterns similar to those reported in the paper. \subsection{Additional Results on Memory Consumption} In Figure \ref{fig:Param_amount}, we show the memory consumption as a function of the number of tasks on both OGB-Arxiv and OGB-Products. We use the same model configuration for both datasets, so the upper bounds on the memory consumption are the same. From Figure \ref{fig:Param_amount}, we can see that the memory consumption of HPNs on both datasets increases slowly and stays far below the upper bound. Although OGB-Products is more than ten times larger than OGB-Arxiv, the memory used on OGB-Products is only slightly more than that on OGB-Arxiv, demonstrating the memory efficiency of HPNs.
\subsection{Additional Results on Visualization} In Figure \ref{fig:tsne_citeseer}, we visualize the hierarchical prototype representations of the test nodes on Citeseer via t-SNE~\cite{van2008visualizing}. Similar to the visualization results shown in the paper, Figure \ref{fig:tsne_citeseer} sequentially shows the classes of task 1 (left), tasks 1 and 2 (middle), and tasks 1, 2, and 3 (right). The examples belonging to different classes are denoted by different shapes and colors, as shown in the legend on the right. \ifCLASSOPTIONcaptionsoff \fi \end{document}
\begin{document} \begin{abstract} This paper deals with a vector polynomial optimization problem over a basic closed semi-algebraic set. By invoking some powerful tools from real semi-algebraic geometry, we first introduce the concept of {\it tangency varieties}, and we obtain the relationships among the Palais--Smale condition, the Cerami condition, {\it M}-tameness, and properness related to the considered problem, in which the Mangasarian--Fromovitz constraint qualification at infinity plays an essential role in deriving these relationships. Finally, according to the obtained connections, we establish the existence of Pareto solutions to the problem under consideration and give some examples to illustrate our main findings. \end{abstract} \subjclass[2010]{90C29, 90C30, 49J30} \keywords{Vector optimization; polynomial optimization; Pareto solutions; Palais--Smale condition; Cerami condition; properness} \title{Existence of Pareto Solutions for Vector Polynomial Optimization Problems with Constraints} \section{Introduction}\label{Sec:1} The existence of optimal solutions to optimization problems is an important issue in the study of optimization theory. In the literature on vector optimization (among others), one can find a lot of papers dealing with the existence of different kinds of solutions to vector optimization problems; see, e.g., \cite{Borwein1983,Deng1998a,Deng1998b,Deng2010,Gutierrez2014,Huang2004,Kim2019,Kim2020} and the references therein. Consider the following constrained vector {\em polynomial} optimization problem \begin{align}\label{problem} {\rm Min}_{\mathbb{R}^p_+}\;\big\{f(x)\,\colon \,x\in S\big\},\tag{VPO} \end{align} where $f(x):=(f_1(x), \ldots, f_p(x))$ is a real polynomial mapping, and \begin{align}\label{SAset} S:=\{x\in{\mathbb R}^n \colon g_i(x)=0,\ i=1,\ldots,l,\ h_j(x)\geq0,\ j=1,\ldots,m\} \end{align} is the feasible set of the problem~\eqref{problem}, in which $g_i, i = 1, \ldots, l,$ and $h_j, j = 1, \ldots, m,$ are all real polynomials. As we will see from Definition~\ref{definitionSA}, $S$ is a closed semi-algebraic set. Furthermore, we make the following assumption: \begin{itemize} \item[$\quad$] $\qquad\qquad\qquad\qquad\qquad\quad$ \framebox[1.07\width]{the feasible set $S$ is unbounded.} \end{itemize} Note also that the ``${\rm Min_{{\mathbb R}^p_+}}$" in the above problem~\eqref{problem} is understood in the vector sense, where a partial ordering is induced in the image space ${\mathbb R}^p$ by the non-negative orthant ${\mathbb R}^p_+.$ The partial ordering says that $a \geq b$ if $a - b \in {\mathbb R}^p_+,$ which can equivalently be written as $a_k \geq b_k$ for all $k = 1, \ldots, p,$ where $a_k$ and $b_k$ stand for the $k$th components of the vectors $a$ and $b,$ respectively. \subsection{Pareto values and solutions} In what follows, we recall the Pareto values and Pareto solutions of the problem~\eqref{problem}. Unlike the classical literature on vector optimization (see, e.g., \cite{Ehrgott2005,Jahn2004,Luc1989,Sawaragi1985}), we first introduce the Pareto values of the problem~\eqref{problem} and then give the definition of its Pareto solutions. Let $f(S)$ denote the image of $S$ under the real polynomial mapping $f$. \begin{definition}\label{Pareto}{\rm Let $y\in \mathrm{cl}\, f(S)$.
\begin{itemize}
\item[(i)] $y$ is called a {\em Pareto value} of the problem~\eqref{problem} if
$$f(x)\notin y - (\mathbb{R}^p_+\setminus\{{{\bf 0}}\}), \quad \forall x \in S,$$
where ${\bf 0}:=(0, \ldots, 0) \in {\mathbb R}^p$. The set of all Pareto values of the problem~\eqref{problem} is denoted by $\mathrm{val}\,\eqref{problem}$.
\item[(ii)] $y$ is called a {\em weak Pareto value} of the problem~\eqref{problem} if
$$f(x)\notin y -{\rm int}\,\mathbb{R}^p_+, \quad \forall x\in S.$$
We denote by $\mathrm{val}^w\,\eqref{problem}$ the set of all weak Pareto values of the problem~\eqref{problem}.
\item[(iii)] $\bar x \in S$ is called a {\em Pareto solution} (resp., {\em weak Pareto solution}) of the problem~\eqref{problem} if $f(\bar x)$ is a Pareto value (resp., weak Pareto value) of the problem~\eqref{problem}. We denote by $\mathrm{sol}\,\eqref{problem}$ (resp., $\mathrm{sol}^w\,\eqref{problem}$) the set of all Pareto solutions (resp., weak Pareto solutions).
\end{itemize}
}\end{definition}
According to the above definitions, it is clear that $\mathrm{val}\,\eqref{problem}\subset \mathrm{val}^w\,\eqref{problem}$.
\begin{definition}\label{section}{\rm Let $\Omega$ be a subset of ${\mathbb R}^p$ and $\bar y \in {\mathbb R}^p.$ The set $\Omega \cap (\bar y - {\mathbb R}^p_+)$ is said to be a {\it section} of $\Omega$ at $\bar y,$ and is denoted by $[\Omega]_{\bar y}.$ The section $[\Omega]_{\bar y}$ is said to be {\it bounded} if and only if there is $\omega \in {\mathbb R}^p$ such that
$$[\Omega]_{\bar y} \subset \omega + {\mathbb R}^p_+.$$
}\end{definition}
\subsection{Backgrounds}
In this part, we treat the problem~\eqref{problem} as a standard vector optimization problem (not necessarily in the polynomial setting). First, let us recall some results on the existence of Pareto solutions to the problem~\eqref{problem} in the case that the feasible set $S$ is nonempty and compact. If in addition $f$ is ${\mathbb R}^p_+$-semicontinuous (see \cite[Definition 2.16]{Ehrgott2005}), then the existence of Pareto solutions to the problem~\eqref{problem} was shown by Hartley \cite{Hartley1978} in 1978. Later, in 1980, Corley \cite{Corley1980} also proved the existence of Pareto solutions to the problem~\eqref{problem}, provided that the image $f(S)$ is nonempty and ${\mathbb R}^p_+$-semicompact (see \cite[Definition 2.11]{Ehrgott2005}). In 1983, it was observed by Borwein \cite[Theorem 1]{Borwein1983} that the condition ``the image $f(S)$ has at least one nonempty {\it closed} and {\it bounded} section" is a necessary and also {\it sufficient} condition for the existence of Pareto solutions to the problem~\eqref{problem}; see also \cite[Theorem 2.10]{Ehrgott2005}. Clearly, the compactness of $S$ together with the continuity (or even semi-continuity) of $f$ ensures the compactness of the image $f(S)$; in this case, the problem~\eqref{problem} admits at least one Pareto solution; see, e.g., \cite[Corollary 3.2.1]{Sawaragi1985}. Now, we recall some existence results for the problem~\eqref{problem} in the case that the feasible set $S$ is not compact. By assuming that the objective function is bounded from below and satisfies the so-called (PS)$_1$ condition, H\`a \cite{Ha2006JMAA} proved in 2006 that the problem~\eqref{problem} has {\em weak} Pareto solutions (see \cite[Theorem 4.1]{Ha2006JMAA}).
Later, by exploring {\em quasiboundedness from below} and a {\em refined subdifferential Palais--Smale condition}, Bao and Mordukhovich \cite{Bao2007,Bao2010MP} investigated some vector optimization problems. It is worth noting that only the existence of weak or relative Pareto solutions, but not the existence of Pareto solutions, was established for the vector optimization problems in \cite{Bao2007,Bao2010MP,Ha2006JMAA}. In order to obtain existence results for Pareto solutions, Lee et al. \cite[Theorem 3.1]{Lee2021} proved that the problem~\eqref{problem} admits a Pareto solution if and only if the image $f(S)$ of $f$ has a nonempty and bounded section in the case that $f$ is a convex polynomial mapping (each component of $f$ being a convex polynomial); there, the celebrated existence results for scalar convex polynomial programming problems contributed by Belousov and Klatte \cite[Theorem 3]{Belousov2002} are applied. Very recently, for the case that $S={\mathbb R}^n$ and the image $f({\mathbb R}^n)$ of a polynomial mapping $f$ has a bounded section, Kim et al. \cite{Kim2019} investigated the existence of Pareto solutions to the problem~\eqref{problem} under some novel conditions. Furthermore, in order to investigate existence results in a more general setting, by employing the theory of variational analysis and nonsmooth analysis (instead of methods of semi-algebraic geometry), Kim et al.\ \cite{Kim2020} further proved that nonconvex and nonsmooth vector optimization problems with locally Lipschitzian data have Pareto efficient and Geoffrion-properly efficient solutions. It is also worth mentioning that Liu et al.\ \cite{Liu2021} studied the solvability of a class of regular polynomial vector optimization problems without convexity, and interestingly even without any semi-algebraic assumption on the feasible set $S$ (see \cite[Example 5.4]{Liu2021}). \subsection{Our contributions} In this paper, we will make the following contributions to the area of vector optimization with polynomials. \begin{itemize} \item[{\rm (i)}] We prove the existence of Pareto solutions to the {\it constrained} vector polynomial optimization problem~\eqref{problem} under some conditions. Compared with \cite{Lee2021}, we do not need any convexity assumptions in the problem~\eqref{problem}, and compared with \cite{Kim2019}, we further consider the problem~\eqref{problem} over a closed (and unbounded) semi-algebraic set $S.$ \item[{\rm (ii)}] By constructing some suitable sets (that can be computed effectively) related to the problem~\eqref{problem}, we define the concepts of the Palais--Smale condition, the Cerami condition and $M$-tameness, and also establish some relationships between them (see Theorem~\ref{relationship1}). All of these concepts play important roles in establishing sufficient conditions for the existence of Pareto solutions to the problem~\eqref{problem}. \item[{\rm (iii)}] It is worth emphasizing that, in Theorem~\ref{relationship1}, the Mangasarian--Fromovitz constraint qualification at infinity of $S$ (see Definition~\ref{regulity}) plays an essential role. This significantly improves \cite[Proposition 3.2]{Kim2019}. In order to highlight this observation, we construct an example (see Example~\ref{example0717}) to show that the assumption on the Mangasarian--Fromovitz constraint qualification at infinity of $S$ cannot be dropped. Besides, we also design several examples to illustrate some related notions and the obtained results.
\item[{\rm (iv)}] As results, we establish some sufficient conditions for the existence of Pareto solutions to the problem~\eqref{problem}. The obtained results improve and extend \cite[Theorem 4.1]{Kim2019}, \cite[Theorem 4.1]{Ha2006JMAA}, \cite[Theorem 4]{Bao2007} and \cite[Theorem 4.4]{Bao2010MP}, in the polynomial setting. \end{itemize} The rest of the paper is organized as follows. In Sect.~\ref{Sec:2}, we recall some necessary tools from real semi-algebraic geometry. In Sect.~\ref{Sec:3}, we introduce the concept of the tangency variety, which will be useful in the later, and its properties. In Sect.~\ref{Sec:4}, we construct some suitable sets, by which, we establish some relationships between Palais--Smale condition, Cerami condition, $M$-tameness, and properness for the restrictive polynomial mappings. Section~\ref{Sec:5} contains several existence results of Pareto solutions to the problem~\eqref{problem}. Finally, conclusions and further discussions are given in Sect.~\ref{Sec:7}. \section{Preliminaries}\label{Sec:2} Throughout this paper, we use the following notation and terminology. Fix a number $n \in \mathbb{N},$ $n \geq 1,$ and abbreviate $(x_1, x_2, \ldots, x_n)$ by $x.$ The space ${\mathbb R}^n$ is equipped with the usual scalar product $\langle \cdot, \cdot \rightarrowngle$ and the corresponding Euclidean norm $\| \cdot \|.$ The interior (resp., the closure) of a set $S$ is denoted by ${\rm int} S$ (resp., ${\rm cl\,} S$). The closed unit ball in ${\mathbb R}^n$ is denoted by $\mathbb{B}^n.$ Let ${\mathbb R}^p_+:= \{y := (y_1, \ldots, y_p) {\rm conv\,}lon y_j \geq 0, \ j = 1, \ldots, p\}$ be the nonnegative orthant in ${\mathbb R}^p.$ The cone ${\mathbb R}^p_+$ induces the following partial order in ${\mathbb R}^p: a, b \in {\mathbb R}^p,$ $a \leq b$ if and only if $b - a \in {\mathbb R}^p_+.$ Besides, ${\mathbb R}[x]$ stands for the space of real polynomials in the variable $x.$ Let us recall some notion and results from semi-algebraic geometry (see, e.g., \cite{RASS,Bochnak1998}). \begin{definition}\label{definitionSA}{\rm \begin{enumerate} \item[(i)] A subset of $\mathbb{R}^n$ is called a {\em semi-algebraic} set if it is a finite union of sets of the form $$\{x \in \mathbb{R}^n {\rm conv\,}lon \varrho_i(x) = 0, i = 1, \ldots, k;\ \varrho_i(x) > 0, i = k + 1, \ldots, p\},$$ where all $\varrho_{i}$'s are in ${\mathbb R}[x]$. \item[(ii)] Let $B_1 \subset \mathbb{B}bb{R}^n$ and $B_2 \subset \mathbb{B}bb{R}^m$ be semi-algebraic sets. A mapping $F {\rm conv\,}lon B_1 \to B_2$ is said to be {\em semi-algebraic} if its graph $$\{(x, y) \in B_1 \times B_2 {\rm conv\,}lon y = F(x)\}$$ is a semi-algebraic subset in $\mathbb{B}bb{R}^n\times\mathbb{B}bb{R}^m.$ In particular, if $m=1,$ we call the mapping $F$ a semi-algebraic function. \end{enumerate} }\end{definition} The semi-algebraic sets and functions have many remarkable properties; see, e.g., \cite{RASS,Bochnak1998,HaHV2017}. \begin{theorem}[Tarski--Seidenberg Theorem]\label{Tarski Seidenberg Theorem} The image and inverse image of a semi-algebraic set under a semi-algebraic mapping are semi-algebraic sets. In particular$,$ the projection of a semi-algebraic set is still a semi-algebraic set. \end{theorem} The Curve Selection Lemma at infinity (see \cite{HaHV2017,Milnor1968}) will be frequently used in this paper. 
\begin{lemma}[Curve Selection Lemma at infinity]\label{CurveSelectionLemmaatinfinity} Let $A$ be a semi-algebraic subset of $\mathbb{R}^n,$ and let $$\varrho:=(\varrho_1, \ldots, \varrho_p): {\mathbb R}^n \rightarrow {\mathbb R}^p$$ be a semi-algebraic mapping. Assume that there exists a sequence $\{x^k\}$ with $x^k \in A,$ $\lim_{k \rightarrow \infty}\|x^k \| = \infty$ and $\lim_{k \rightarrow \infty} \varrho(x^k) = y \in \overline {\mathbb R}^p,$ where $\overline {\mathbb R} := {\mathbb R} \cup \{\infty\} \cup \{-\infty\}.$ Then there exist a positive real number $\epsilon$ and a smooth semi-algebraic curve $$\phi \colon (0, \epsilon) \to {\mathbb R}^n$$ such that $\phi(t) \in A$ for all $t \in (0, \epsilon),$ $\lim_{t \rightarrow 0}\| \phi(t)\| = \infty,$ and $\lim_{t \rightarrow 0}\varrho(\phi(t)) = y.$ \end{lemma} In what follows, we will need the following useful results; see \cite{Dries1996}. \begin{lemma}[Growth Dichotomy Lemma] \label{GrowthDichotomyLemma} Let $\varrho:(0, \epsilon) \to {\mathbb R}$ be a semi-algebraic function with $\varrho(t) \not= 0$ for all $t \in (0, \epsilon),$ where $\epsilon$ is a positive real number. Then there exist constants $c \not= 0$ and $q \in \mathbb{Q}$ such that $$\varrho(t) = ct^q + o (t^q) \ \textrm{ as } \ t \to 0^+,$$ where $o(t^q)$ stands for a term satisfying $\lim_{t\to 0^+}\frac{o(t^q)}{t^q}=0$. \end{lemma} Let $\varrho, \varsigma: (0, \epsilon) \to {\mathbb R}$ be nonzero functions such that $\lim\limits_{t \to 0^+} \varrho(t) = \infty$ and $\lim\limits_{t \to 0^+} \varsigma(t) = \infty,$ where $\epsilon$ is a positive real number. If $\lim\limits_{t \to 0^+} \frac{\varrho(t)}{\varsigma(t)} = c_0,$ where $c_0$ is a positive constant, then we denote this relation by $$\varrho(t) \simeq \varsigma(t) \ \textrm{as} \ t \to 0^+.$$ \begin{lemma}\label{jiao0421a} Let $\varrho:(0, \epsilon) \to {\mathbb R}$ be a continuously differentiable semi-algebraic function with $\varrho(t) \not= 0$ for all $t \in (0, \epsilon),$ where $\epsilon$ is a positive real number$,$ and $\varrho(t) \to +\infty$ as $t \to 0^+.$ Then \begin{align}\label{jiao0421b} \varrho(t) \simeq t \varrho'(t) \ \textrm{as} \ t \to 0^+. \end{align} \end{lemma} \begin{proof} Since $\varrho$ is a semi-algebraic function, by Lemma \ref{GrowthDichotomyLemma}, we can write \begin{align*} \varrho(t) = \ \bar c\, t^{\bar q} + o( t^{\bar q}) \ \textrm{ as } \ t \to 0^+, \end{align*} for some $\bar c \not= 0$ and $\bar q \in \mathbb{Q}$. Clearly, $\bar q<0$ and $\bar c>0$, due to $\varrho(t) \to +\infty$ as $t \to 0^+.$ On the other hand, since $\varrho$ is continuously differentiable and semi-algebraic, it yields \begin{align*} \varrho'(t) = \ \bar c \bar q\, t^{\bar q - 1} + o(t^{\bar q - 1}) \ \textrm{ as } \ t \to 0^+, \end{align*} and hence $t\varrho'(t) = \bar c \bar q\, t^{\bar q} + o(t^{\bar q})$, that is, $\varrho(t)$ and $t\varrho'(t)$ are of the same order $t^{\bar q}$. This shows \eqref{jiao0421b} as $t \to 0^+.$ \end{proof} \begin{lemma}[Monotonicity Lemma] \label{MonotonicityLemma} Let $a < b$ in $\mathbb{R}.$ If $\varrho \colon [a, b] \rightarrow \mathbb{R}$ is a semi-algebraic function, then there is a partition $a =: t_1 < \cdots < t_{N} := b$ of $[a, b]$ such that $\varrho|_{(t_l, t_{l + 1})}$ is $C^1$ and either constant or strictly monotone for $l \in \{1, \ldots, N - 1\}.$ \end{lemma} \section{Tangency Variety and Its Properties}\label{Sec:3} In this section, we introduce some concepts related to the vector polynomial optimization problem~\eqref{problem}, and study their properties.
\begin{definition}\label{tangency-variety}{\rm By {\it tangency variety of $f$ on $S$} we mean the set \begin{align*} \Gamma(f,S):=\left\{x\in S{\rm conv\,}lon \left\{ \begin{aligned} &\textrm{there exist } (\tau, \lambda, \nu, \mu) \in ({\mathbb R}^p_+\times {\mathbb R}^l \times {\mathbb R}^m_+\times {\mathbb R}) \setminus \{{\bf 0}\} \ \textrm{such that}\\ &\sum\limits_{k=1}^p\tau_k \nabla f_k(x)- \sum\limits_{i=1}^l\lambda_i\nabla g_i(x)-\sum\limits_{j=1}^m\nu_j\nabla h_j(x)-\mu x= {\bf 0}\\ &\textrm{and } \nu_j h_j(x)=0,\ j=1,\ldots,m \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\}, \end{align*} where $\nabla f_k (x)$ stands for the gradient of $f_k$ at $x.$ }\end{definition} \begin{lemma}\label{unboundness} Let $f:{\mathbb R}^n \to {\mathbb R}^p$ be a polynomial mapping and $S$ be defined as \eqref{SAset}$,$ then $\Gamma(f,S)$ is an unbounded nonempty semi-algebraic set. \end{lemma} \begin{proof} Clearly, it follows from Theorem~\ref{Tarski Seidenberg Theorem} that $\Gamma(f,S)$ is semi-algebraic. Now, we claim that $\Gamma(f,S) \not= \emptyset.$ Indeed, for given $r > 0$, denote by $$\mathbb{S}_r := \{x \in {\mathbb R}^n {\rm conv\,}lon \| x \|^2 = r^2\}.$$ Then $\mathbb{S}_r$ is nonempty, bounded and closed, thus the intersection $\mathbb{S}_r\cap S$ is also nonempty and compact for $r$ large enough, and so is the image $f(\mathbb{S}_r\cap S)$. Therefore, the optimization problem $${\rm Min}_{{\mathbb R}^p_+}\{ f(x) {\rm conv\,}lon x \in \mathbb{S}_r\cap S\}$$ admits a Pareto solution. Denote the Pareto solution as $x(r) \in \mathbb{S}_r\cap S.$ The celebrated Fritz-John optimality conditions \cite[Theorem 7.4]{Jahn2004} imply that $x(r) \in \Gamma(f,S),$ and so $\Gamma(f,S) \not= \emptyset.$ Note that if $r \to \infty$ then $\|x(r)\| = r \to \infty,$ then $\Gamma(f,S)$ is unbounded and we complete the proof. \end{proof} In what follows, we need a constraint qualification ``at infinity", which is inspired by \cite[Definition 3.1]{Pham2020arXiv}, to deal with the case when Pareto solutions occur at infinity. \begin{definition}\label{regulity}{\rm The constraint set $S$ is said to satisfy the {\it Mangasarian--Fromovitz constraint qualification at infinity} (${\rm (MFCQ)_{\infty}}$ in short), if there exists a real number $R_0 > 0$ such that for each $x \in S, \|x \| \geq R_0,$ the gradient vectors $\nabla g_i(x),$ $i = 1,\ldots, l,$ are linearly independent and there exists a vector $v \in {\mathbb R}^n$ such that $$\langle \nabla g_i(x), v\rightarrowngle = 0,\ i = 1,\ldots, l \ \ \textrm{and} \ \ \langle \nabla h_j(x), v\rightarrowngle > 0, \ j \in J(x),$$ where $J(x) := \{j \in \{1, \ldots, m\} {\rm conv\,}lon h_j(x) = 0\}$ is the set of {\it active constraint indices}. }\end{definition} \begin{remark}{\rm In order to deal with the case when optimal solutions to polynomial optimization problems occur at infinity, another constraint qualification ``at infinity" called regular at infinity was introduced by \cite[Definition 3.3]{Dinh2014}. Recall that the constraint set $S$ is said to be {\it regular at infinity} if there exists a real number $R_0 > 0$ such that for each $x \in S, \|x \| \geq R_0,$ the gradient vectors $\nabla g_i(x),$ $i = 1,\ldots, l,$ and $\nabla h_j(x),$ $j \in J(x),$ are linearly independent, where $$J(x) := \{j \in \{1, \ldots, m\} {\rm conv\,}lon h_j(x) = 0\}$$ is called the set of {\it active constraint indices}. 
Observe that ${\rm (MFCQ)_{\infty}}$ of $S$ is weaker than the regularity at infinity, therefore the following results obtained in the paper can also be guaranteed under regularity at infinity. }\end{remark} \begin{lemma}\label{jiao0419b} If the unbounded set $S$ $($defined as in \eqref{SAset}$)$ satisfies ${\rm (MFCQ)_{\infty}},$ then for each $x\in \Gamma(f,S),$ $\|x\| \gg 1,$ there exist real numbers $\tau_k \in {\mathbb R}_+$ with $\sum_{k = 1}^p \tau_k = 1,$ $\lambda_i, \nu_j,$ and $\mu$ such that \begin{align*} &\sum\limits_{k=1}^p\tau_k \nabla f_k(x)- \sum\limits_{i=1}^l\lambda_i\nabla g_i(x)-\sum\limits_{j=1}^m\nu_j\nabla h_j(x)-\mu x={\bf 0}, \textrm{and } \\ &\nu_j h_j(x)=0,\ j=1,\ldots,m. \end{align*} \end{lemma} \begin{proof} Since $S$ is unbounded, so is $\Gamma(f,S)$ by Lemma~\ref{unboundness}. Let $x \in \Gamma(f,S).$ It follows from Definition~\ref{tangency-variety} that there exist $\tau_k,\nu_j\in{\mathbb R}_+,~\lambda_i, \mu \in {\mathbb R},$ not all zero, such that \begin{align} &\sum\limits_{k=1}^p\tau_k \nabla f_k(x)- \sum\limits_{i=1}^l\lambda_i\nabla g_i(x)-\sum\limits_{j=1}^m\nu_j\nabla h_j(x)-\mu x={\bf 0}, \label{jiao01}\\ & \nu_j h_j(x)=0,\ j=1,\ldots,m. \label{jiao02} \end{align} Now, it remains to show, without loss of generality, that $\sum\limits_{k=1}^p\tau_k >0,$ provided that $x \in \Gamma(f,S),$ $\|x \| \gg 1.$ Assume to the contrary that $\sum\limits_{k=1}^p\tau_k = 0,$ then it follows from \eqref{jiao01} and \eqref{jiao02} that \begin{align*} &\sum\limits_{i=1}^l\lambda_i\nabla g_i(x)+\sum\limits_{j=1}^m\nu_j\nabla h_j(x)+\mu x={\bf 0},\\ & \nu_j h_j(x)=0,\ j=1,\ldots,m, \end{align*} for some $\lambda_i, \nu_j, \mu \in {\mathbb R},$ not all zero. By using the Curve Selection Lemma at infinity (Lemma~\ref{CurveSelectionLemmaatinfinity}), there exist a positive real number $\epsilon$, a smooth semi-algebraic curve $\varphi(t)$ and semi-algebraic functions $\lambda_i(t), \nu_j(t), \mu(t), t \in (0,\epsilon],$ such that \begin{itemize} \item[(a1)] $\varphi(t) \in S$ for $ t \in (0, \epsilon];$ \item[(a2)] $\|\varphi(t)\| {\rm ri\,}ghtarrow +\infty$ as $t {\rm ri\,}ghtarrow 0^+;$ \item[(a3)] $\sum_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))+\sum_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t))+\mu(t) \varphi(t) \equiv {\bf 0};$ and \item[(a4)] $\nu_j(t) h_j(\varphi(t))\equiv0,\ j=1,\ldots,m.$ \end{itemize} Since the functions $\nu_j$ and $h_j \circ \varphi$ [note that here and hereafter we denote $h_j(\varphi(t)):= (h_j \circ \varphi)(t)$ in the variable $t$] are semi-algebraic, it follows from the Monotonicity Lemma (Lemma~\ref{MonotonicityLemma}) that for $\epsilon > 0$ small enough, these functions are either constant or strictly monotone. Then, by (a4), we can see that either $\nu_j(t) \equiv 0$ or $(h_j\circ\varphi)(t) \equiv 0;$ in particular, \begin{align}\label{jiao0415} \nu_j(t) \frac{d}{dt} (h_j\circ\varphi)(t) \equiv 0, \quad j=1,\ldots,m. \end{align} It then follows from (a3) that \begin{align*} 0\ & =\ \sum_{i=1}^l\lambda_i(t)\left\langle\nabla g_i(\varphi(t)), \frac{d \varphi}{dt} {\rm ri\,}ght\rightarrowngle +\sum_{j=1}^m\nu_j(t)\left\langle \nabla h_j(\varphi(t)), \frac{d \varphi}{dt} {\rm ri\,}ght\rightarrowngle + \mu(t)\left\langle \varphi(t), \frac{d \varphi}{dt} {\rm ri\,}ght\rightarrowngle \\ & =\ \sum_{i=1}^l\lambda_i(t)\frac{d}{dt}(g_i\circ \varphi) (t) +\sum_{j=1}^m\nu_j(t)\frac{d}{dt}(h_j\circ \varphi) (t) + \frac{\mu(t)}{2}\frac{d \|\varphi(t)\|^2}{dt} \\ & =\ \frac{\mu(t)}{2}\frac{d \|\varphi(t)\|^2}{dt}. 
\tag{by \eqref{jiao0415} and (a1)} \end{align*} Therefore $\mu(t) \equiv 0$ by (a2), which implies \begin{equation}\label{new1} \sum\limits_{i=1}^l\lambda_i(t) \nabla g_i(\varphi(t))+\sum\limits_{j\in J(\varphi(t))}\nu_j(t) \nabla h_j(\varphi(t))={\bf 0}. \end{equation} By ${\rm (MFCQ)_{\infty}}$, there exists $v\in{\mathbb R}^n$ such that $$\langle \nabla g_i(\varphi(t)), v\rightarrowngle = 0,\ i = 1,\ldots, l \ \ \textrm{and} \ \ \langle \nabla h_j(\varphi(t)), v\rightarrowngle > 0, \ j \in J(x).$$ This, combined with \eqref{new1}, yields $$\sum\limits_{j\in J(\varphi(t))}\langle\nu_j(t)\nabla h_i(\varphi(t)),v\rightarrowngle=0.$$ Thus $\nu_j(t)=0$, for all $j\in J(\varphi(t))$. Then by \eqref{new1}, $\lambda_i(t),~i=1,\ldots,l$ are not all zero and $$\sum\limits_{i=1}^l\lambda_i(t) \nabla g_i(\varphi(t))={\bf 0}.$$ which contradicts the linear independence of $\nabla g_i(\varphi(t)),~i=1,\ldots,l.$ Hence, $\sum_{k=1}^p\tau_k >0,$ and without loss of generality, we may get $\sum_{k=1}^p\tau_k = 1$ by normalization. \end{proof} \section{Palais--Smale Condition, Cerami Condition, $M$-tameness and Properness}\label{Sec:4} Recall the unbounded semi-algebraic set $S$ defined as \eqref{SAset} introduced in the Section \ref{Sec:1}. Given a restrictive polynomial mapping $f:=(f_1, \ldots, f_p): S \to {\mathbb R}^p$ and a value $\bar y \in \overline{\mathbb{R}}^p.$ First, we define the (extended) {\it Rabier function} $v {\rm conv\,}lon {\mathbb R}^n \to \overline{\mathbb{R}}$ by \begin{align}\label{Rabierfunction} v(x):= \inf\left\{\left\|\sum_{k = 1}^p \tau_k \nabla f_k(x) - \sum_{i = 1}^l \lambda_i\nabla g_i(x) - \sum_{j = 1}^m \nu_j\nabla h_j(x){\rm ri\,}ght\| {\rm conv\,}lon \left\{ \begin{aligned} &\tau_k \geq 0 \ \textrm{with} \ \sum_{k = 1}^p \tau_k = 1, \\ &(\lambda, \nu) \in {\mathbb R}^l \times {\mathbb R}^m_+,\ \textrm{and }\\ &\nu_j h_j(x)=0, j=1,\ldots,m \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\}. \end{align} Next, we consider the following sets: \begin{align*} \widetilde{K}_{\infty, \leq \bar y}(f, S)&:=\left\{y\in {\mathbb R}^p {\rm conv\,}lon \left\{ \begin{aligned} &\exists\ \{x^{\ell}\} \subset S \ \textrm{with} \ f(x^{\ell}) \leq \bar y\ \textrm{and} \ \|x^{\ell} \| \to \infty \\ &\textrm{such that} \ f(x^{\ell}) \to y,\ v(x^{\ell}) \to 0 \ \textrm{as}\ {\ell} \to \infty \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\}, \\ {K}_{\infty, \leq \bar y}(f, S)&:= \left\{y\in {\mathbb R}^p {\rm conv\,}lon \left\{ \begin{aligned} &\exists\ \{x^{\ell}\} \subset S \ \textrm{with} \ f(x^{\ell}) \leq \bar y\ \textrm{and} \ \|x^{\ell}\| \to \infty \\ &\textrm{such that} \ f(x^{\ell}) \to y,\ \|x^{\ell}\|\ v(x^{\ell}) \to 0 \ \textrm{as}\ {\ell}\to \infty \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\}, \\ {T}_{\infty, \leq \bar y}(f, S)&:= \left\{y\in {\mathbb R}^p {\rm conv\,}lon \left\{ \begin{aligned} &\exists\ \{x^{\ell}\} \subset \Gamma(f, S) \ \textrm{with} \ f(x^{\ell}) \leq \bar y\ \textrm{and} \ \|x^{\ell} \| \to \infty \\ &\textrm{such that} \ f(x^{\ell}) \to y \ \textrm{as}\ {\ell} \to \infty \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\}. \end{align*} If $\bar y = (+\infty, \ldots, +\infty),$ the notations $\widetilde{K}_{\infty, \leq \bar y}(f, S),$ ${K}_{\infty, \leq \bar y}(f, S)$ and ${T}_{\infty, \leq \bar y}(f, S)$ will be written as $\widetilde{K}_{\infty}(f, S),$ ${K}_{\infty}(f, S)$ and ${T}_{\infty}(f, S),$ respectively. We would note here that all of the sets mentioned above can be computed effectively as shown recently in \cite{Dias2015,Dias2017,Dias2021,Jelonek2014}. 
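To make the quantity \eqref{Rabierfunction} more concrete, we include a minimal numerical sketch (ours, and not taken from the references just cited) that approximates $v(x)$ at a given point: it minimizes the squared norm of the weighted-gradient combination over the simplex variable $\tau$, the free multipliers $\lambda$, and nonnegative multipliers $\nu$ supported on the active inequality constraints, so that the complementarity $\nu_j h_j(x)=0$ holds by construction. The function names (\texttt{rabier\_value}, \texttt{grad\_f}, \texttt{grad\_g}, \texttt{grad\_h}, \texttt{active\_tol}) and the solver choice (SLSQP from SciPy) are illustrative assumptions only.
\begin{verbatim}
# Minimal sketch (our illustration): approximate the Rabier function v(x).
# Gradients are supplied as callables R^n -> R^n; nu is restricted to the
# active inequality constraints so that nu_j h_j(x) = 0 holds automatically.
import numpy as np
from scipy.optimize import minimize

def rabier_value(x, grad_f, grad_g, grad_h, h, active_tol=1e-8):
    x = np.asarray(x, dtype=float)
    Gf = np.array([gf(x) for gf in grad_f])                      # (p, n)
    Gg = (np.array([gg(x) for gg in grad_g])
          if grad_g else np.zeros((0, x.size)))                  # (l, n)
    J = [j for j, hj in enumerate(h) if abs(hj(x)) <= active_tol]
    Gh = (np.array([grad_h[j](x) for j in J])
          if J else np.zeros((0, x.size)))                       # (|J(x)|, n)
    p, l, ma = Gf.shape[0], Gg.shape[0], Gh.shape[0]

    def residual(z):
        tau, lam, nu = z[:p], z[p:p + l], z[p + l:]
        return Gf.T @ tau - Gg.T @ lam - Gh.T @ nu

    objective = lambda z: float(residual(z) @ residual(z))       # squared norm
    z0 = np.concatenate([np.full(p, 1.0 / p), np.zeros(l + ma)])
    bounds = [(0, None)] * p + [(None, None)] * l + [(0, None)] * ma
    cons = [{"type": "eq", "fun": lambda z: np.sum(z[:p]) - 1.0}]
    res = minimize(objective, z0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    return float(np.sqrt(max(res.fun, 0.0)))
\end{verbatim}
For instance, applied to the data of Example~\ref{example0717} below at points of the form $x=(0,0,t)$, this sketch returns values close to $\tfrac{\sqrt 2}{2}\,|t|$, in agreement with \eqref{Rabierfunction1}.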
The following result is the constrained version of \cite[Proposition 3.2]{Kim2019}; as shown below, the ${\rm (MFCQ)_{\infty}}$ of $S$ plays an essential role. \begin{theorem}\label{relationship1} Let $S$ be defined as in \eqref{SAset}$,$ $f: S \to {\mathbb R}^p$ be a restrictive polynomial mapping and $\bar y \in \overline{\mathbb{R}}^p.$ Then the following inclusion holds$:$ \begin{align}\label{jiao0425a} {K}_{\infty, \leq \bar y}(f, S) \subset \widetilde{K}_{\infty, \leq \bar y}(f, S). \end{align} If in addition the set $S$ satisfies ${\rm (MFCQ)_{\infty}},$ then \begin{align}\label{jiao0425b} {T}_{\infty, \leq \bar y}(f, S) \subset {K}_{\infty, \leq \bar y}(f, S). \end{align} \end{theorem} \begin{proof} By definition, the inclusion \eqref{jiao0425a} holds immediately. Now, we show the inclusion \eqref{jiao0425b} under ${\rm (MFCQ)_{\infty}}$. Take any $y \in {T}_{\infty, \leq \bar y}(f, S)$ [if ${T}_{\infty, \leq \bar y}(f, S) = \emptyset,$ then the inclusion \eqref{jiao0425b} holds trivially]. By definition there exist sequences $\{x^{\ell}\} \subset S$ and $\{(\tau^\ell, \lambda^\ell, \nu^\ell, \mu^\ell)\} \subset ({\mathbb R}^p_+ \times {\mathbb R}^l \times {\mathbb R}^m_+ \times {\mathbb R}) \setminus \{{\bf 0}\}$ such that \begin{itemize} \item[(b1)] $\lim_{\ell \to \infty}\|x^\ell \| = + \infty;$ \item[(b2)] $\lim_{\ell \to \infty} f(x^\ell) = y;$ \item[(b3)] $f(x^\ell) \leq \bar y;$ \item[(b4)] $\sum_{k=1}^p\tau_k^\ell \nabla f_k(x^\ell)- \sum_{i=1}^l\lambda_i^\ell \nabla g_i(x^\ell)-\sum_{j=1}^m\nu_j^\ell \nabla h_j(x^\ell)-\mu^\ell x^\ell={\bf 0};$ and \item[(b5)] $\nu_j^\ell h_j(x^\ell) = 0,\ j=1,\ldots,m.$ \end{itemize} Without loss of generality, for each $\ell \in \mathbb{N},$ we can normalize the vector $(\tau^\ell, \lambda^\ell, \nu^\ell, \mu^\ell)$ by $$\|(\tau^\ell, \lambda^\ell, \nu^\ell, \mu^\ell)\| = 1.$$ Let \begin{align*} \mathcal{U}:=\left\{(x, \tau, \lambda, \nu, \mu) \in \mathcal{V} \colon \left\{ \begin{aligned} &\sum_{k=1}^p\tau_k \nabla f_k(x)- \sum_{i=1}^l\lambda_i \nabla g_i(x)-\sum_{j=1}^m\nu_j \nabla h_j(x)-\mu x={\bf 0}\\ &f(x) \leq \bar y, \ \|(\tau, \lambda, \nu, \mu)\| = 1, \ \nu_j h_j(x) = 0,\ j=1,\ldots,m \end{aligned} \right.
{\rm ri\,}ght\}, \end{align*} where $\mathcal{V} = S \times {\mathbb R}^p \times {\mathbb R}^l \times {\mathbb R}^m \times {\mathbb R}.$ Observe that, $\mathcal{U}$ is a semi-algebraic set in ${\mathbb R}^{n+p+l+m+1}$ and the sequence $\{(x^\ell, \tau^\ell, \lambda^\ell, \nu^\ell, \mu^\ell)\} \subset \mathcal{U}$ tends to infinity in the sense that $\| (x^\ell, \tau^\ell, \lambda^\ell, \nu^\ell, \mu^\ell) \| \to \infty$ as $\ell \to \infty.$ Now, by using the Curve Selection Lemma at infinity (Lemma~\ref{CurveSelectionLemmaatinfinity}) for the semi-algebraic mapping $$\mathcal{U} \to {\mathbb R}^p, \ (x, \tau, \lambda, \nu, \mu) \mapsto f(x),$$ there exist a positive real number $\epsilon$ and a smooth semi-algebraic curve \begin{align*} (\varphi, \tau, \lambda, \nu, \mu): (0, \epsilon) &\to {\mathbb R}^n \times {\mathbb R}^p_+ \times {\mathbb R}^l \times {\mathbb R}^m_+ \times {\mathbb R} \\ t &\mapsto \left(\varphi(t), \tau(t), \lambda(t), \nu(t), \mu(t){\rm ri\,}ght) \end{align*} such that \begin{itemize} \item[(c1)] $\lim_{t \to 0^+}\|\varphi(t)\| {\rm ri\,}ghtarrow +\infty;$ \item[(c2)] $\lim_{t \to 0^+}f(\varphi(t)) = y;$ \end{itemize} and for $ t \in (0, \epsilon),$ \begin{itemize} \item[(c3)] $\varphi(t) \in S$ and $f(\varphi(t))\leq \bar y;$ \item[(c4)] $\sum_{k=1}^p\tau_k(t)\nabla f_k(\varphi(t))-\sum_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))-\sum_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t))-\mu(t) \varphi(t) \equiv {\bf 0};$ \item[(c5)] $\nu_j(t) h_j(\varphi(t))\equiv0,\ j=1,\ldots,m;$ and \item[(c6)] $\|(\tau(t), \lambda(t), \nu(t), \mu(t))\|\equiv 1.$ \end{itemize} Because $\tau_k,$ $\lambda_i,$ $\nu_j,$ $\mu,$ and $f_k \circ \varphi$ are semi-algebraic, it follows from the Monotonicity Lemma (Lemma~\ref{MonotonicityLemma}) again that for $\epsilon > 0$ small enough, these functions are either constant or strictly monotone. Then, by (c5), either $\nu_j(t) \equiv 0$ or $(h_j\circ\varphi)(t) \equiv 0$. Consequently, \begin{align}\label{jiao0419a} \nu_j(t) \frac{d}{dt} (h_j\circ\varphi)(t) \equiv 0, \quad j=1,\ldots,m. \end{align} Now, by (c4) we obtain \begin{align*} &\frac{1}{2}\mu(t)\frac{d (\|\varphi(t)\|^2)}{dt}\\ & = \ \mu(t) \langle \varphi(t), \varphi'(t)\rightarrowngle \\ & =\ \sum_{k=1}^p\tau_k(t) \left\langle \nabla f_k(\varphi(t)), \varphi'(t) {\rm ri\,}ght\rightarrowngle - \sum_{i=1}^l\lambda_i(t) \left\langle \nabla g_i(\varphi(t)), \varphi'(t){\rm ri\,}ght\rightarrowngle - \sum_{j=1}^m\nu_j(t) \left\langle \nabla h_j(\varphi(t)), \varphi'(t) {\rm ri\,}ght\rightarrowngle \\ & =\ \sum_{k=1}^p\tau_k(t) \frac{d}{dt}(f_k \circ \varphi)(t) - \sum_{i=1}^l\lambda_i(t) \frac{d}{dt}(g_i \circ \varphi)(t) - \sum_{j=1}^m\nu_j(t)\frac{d}{dt}(h_j \circ \varphi)(t) \\ & =\ \sum_{k=1}^p\tau_k(t) \frac{d}{dt}(f_k \circ \varphi)(t). \tag{by \eqref{jiao0419a} and (c3)} \end{align*} Let $P:=\{k\in \{1, \ldots, p\}{\rm conv\,}lon \tau_k(t) \frac{d}{dt}(f_k\circ \varphi)(t) \not\equiv 0\}.$ Then \begin{align}\label{objective-complementary} \frac{\mu(t)}{2}\frac{d \|\varphi(t)\|^2}{dt} = \sum_{k \in P}\tau_k(t) \frac{d}{dt}(f_k \circ \varphi)(t). \end{align} \begin{itemize} \item[{\rm Case 1.}] $P = \emptyset.$ Clearly, combining (c1) and \eqref{objective-complementary} implies that $\mu(t) \equiv 0,$ and along with (c4) and (c5), we have \begin{align}\label{xiangguan} &\sum_{k=1}^p\tau_k(t)\nabla f_k(\varphi(t))-\sum_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))-\sum_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t)) \equiv {\bf 0},\\ &\nu_j(t) h_j(\varphi(t))\equiv0,\ j=1,\ldots,m. 
\end{align} We claim that $\sum\limits_{k=1}^p\tau_k(t)>0$. Otherwise, $\tau_k(t)=0$ for any $k=1,\ldots,p$. This, combined with \eqref{xiangguan}, yields \begin{equation}\label{new2} \sum_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))+\sum_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t)) \equiv {\bf 0}, \end{equation} By ${\rm (MFCQ)_{\infty}}$, there exists $v\in{\mathbb R}^n$ such that $$\langle \nabla g_i(\varphi(t)), v\rightarrowngle = 0,\ i = 1,\ldots, l \ \ \textrm{and} \ \ \langle \nabla h_j(\varphi(t)), v\rightarrowngle > 0, \ j \in J(x).$$ This, combined with \eqref{new2}, yields $$\sum\limits_{j\in J(\varphi(t))}\langle\nu_j(t)\nabla h_i(\varphi(t)),v\rightarrowngle=0.$$ Thus $\nu_j(t)=0$, for all $j\in J(\varphi(t))$. Then by \eqref{new2}, $\lambda_i(t),~i=1,\ldots,l$ are not all zero and $$\sum\limits_{i=1}^l\lambda_i(t) \nabla g_i(\varphi(t))={\bf 0}.$$ which contradicts the linear independence of $\nabla g_i(\varphi(t)),~i=1,\ldots,l.$ Consequently, $v(\varphi(t)) \equiv 0$ by \eqref{Rabierfunction}. Taking (c1)--(c3) into account yields $y \in K_{\infty, \leq \bar y}(f,S).$ \item[{\rm Case 2.}] $P \not= \emptyset.$ For each $k \in P,$ we have $\tau_k(t) \not \equiv 0$ and $\frac{d}{dt}(f_k \circ \varphi)(t)\not \equiv 0,$ thus $(f_k\circ \varphi)(t) \not \equiv y_k.$ It follows from Lemma \ref{GrowthDichotomyLemma} that \begin{align*} \tau_k(t) & = \ a_k t^{\alpha_k} + o( t), \\ (f_k \circ \varphi)(t) & = \ y_k + b_k t^{\beta_k} + o( t), \end{align*} where $a_k >0,$ $b_k \not= 0$, $\alpha_k, \beta_k \in \mathbb{B}bb{Q}$ and $\lim_{t\to 0}\frac{o(t)}{t}=0$. It follows from (c6) and (c2), respectively, that $\alpha_k \geq 0$ and $\beta_k > 0.$ Moreover, $\gamma:= \min_{k\in P} (\alpha_k + \beta_k) > 0.$ Clearly, $\gamma>\bar \alpha:=\min_{k\in P}\alpha_k$ and \begin{equation}\label{sumtau} \sum\limits_{k=1}^{p}\tau_k=\bar a t^{\bar \alpha}+\textrm{ higher order terms in } t, \end{equation} where $\bar a$ is a positive constant. Now, by (c4) and \eqref{objective-complementary}, we have \begin{align*} &\frac{\bigg\|\sum\limits_{k=1}^p\tau_k(t)\nabla f_k(\varphi(t))-\sum\limits_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))-\sum\limits_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t))\bigg\|}{2\|\varphi(t)\|} \bigg|\frac{d \|\varphi(t)\|^2}{dt}\bigg|\\ =\ & \bigg|\sum_{k \in P}\tau_k(t) \frac{d}{dt}(f_k \circ \varphi)(t)\bigg| \end{align*} Note that by Lemma~\ref{jiao0421a}, we have \begin{align*} \|\varphi (t) \|^2 \simeq t\frac{d\|\varphi(t)\|^2}{dt}\ \textrm{as} \ t \to 0^+. \end{align*} Hence, \begin{align*} &\|\varphi(t)\| \bigg\|\sum\limits_{k=1}^p\tau_k(t)\nabla f_k(\varphi(t))-\sum\limits_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))-\sum\limits_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t))\bigg\|\\ \simeq\ & \frac{\bigg\|\sum\limits_{k=1}^p\tau_k(t)\nabla f_k(\varphi(t))-\sum\limits_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))-\sum\limits_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t))\bigg\|}{\|\varphi(t)\|} \bigg|t\frac{d \|\varphi(t)\|^2}{dt}\bigg|. 
\end{align*} Taking (c4) and \eqref{objective-complementary} into account, one has \begin{align*} & \frac{\bigg\|\sum\limits_{k=1}^p\tau_k(t)\nabla f_k(\varphi(t))-\sum\limits_{i=1}^l\lambda_i(t)\nabla g_i(\varphi(t))-\sum\limits_{j=1}^m\nu_j(t)\nabla h_j(\varphi(t))\bigg\|}{\|\varphi(t)\|} \bigg|t\frac{d \|\varphi(t)\|^2}{dt}\bigg|\\ =\ & 2t \bigg|\sum_{k \in P}\tau_k(t) \frac{d}{dt}(f_k \circ \varphi)(t)\bigg| \\ =\ & a_0t^{\gamma+1} + \textrm{ higher order terms in } t, \end{align*} for some constant $a_0 \geq 0.$ On the other hand, taking $$ \bar \tau_k(t)=\frac{\tau_k(t)}{\sum\limits_{k=1}^p\tau_k(t)},~~\bar \lambda_i(t)=\frac{\lambda_i(t)}{\sum\limits_{k=1}^p\lambda_k(t)}~~\text{and}~~\bar \nu_j(t)=\frac{\nu_j(t)}{\sum\limits_{k=1}^{p}\tau_k(t)}. $$ we get $\sum\limits_{k=1}^p\bar\tau_k(t)=1$ and $$\lim\limits_{t \to 0^+} \|\varphi(t)\| \bigg\|\sum\limits_{k=1}^p\bar \tau_k(t)\nabla f_k(\varphi(t))-\sum\limits_{i=1}^l\bar \lambda_i(t)\nabla g_i(\varphi(t))-\sum\limits_{j=1}^m\bar\nu_j(t)\nabla h_j(\varphi(t))\bigg\| = 0,$$ due to $\gamma>\bar\alpha$ and \eqref{sumtau}. This, along with (c1)--(c3), reaches $y \in K_{\infty, \leq \bar y}(f, S).$ \end{itemize} Thus, the proof is complete. \end{proof} \begin{remark}{\rm \begin{enumerate} \item [{\rm (i)}] It is worth noting that the assumption on ${\rm (MFCQ)_{\infty}}$ of $S$ is a generic condition in the sense that it holds in an open dense semi-algebraic set of the entire space of input data (see \cite{Bolte2018,Dinh2014,HaHV2017}). \item [{\rm (ii)}] The inclusion \eqref{jiao0425b} holds under the ${\rm (MFCQ)_{\infty}}$ of the constraint set $S.$ If $S = {\mathbb R}^n,$ the inclusion \eqref{jiao0425b} still holds (of course without any constraint qualifications) in the polynomial mapping setting (see \cite[Proposition 3.2]{Kim2019}), while it may go awry in more general setting, e.g., $f$ is not a polynomial mapping (see \cite[Example 3.1]{Kim2020}). \end{enumerate} }\end{remark} The following example shows that the assumption on ${\rm (MFCQ)_{\infty}}$ of $S$ plays an essential role, and it cannot be dropped. In other words, the inclusion \eqref{jiao0425b} in Theorem~\ref{relationship1} does {\em not} hold if $S$ does not satisfy ${\rm (MFCQ)_{\infty}}$. \begin{example}\label{example0717}{\rm Let $x:=(x_1, x_2, x_3) \in {\mathbb R}^3.$ Let \begin{align*} f(x):=&\ (f_1(x), f_2(x)) = \left(x_2x_3, x_1x_3{\rm ri\,}ght), \\ g_1(x):=&\ (1 - x_1x_2x_3)^2 + x_1^2 + x_2^2 - 1, \\ g_2(x):=&\ x_1x_2,\\ h(x):=&\ x_1^3. 
\end{align*} Consider the following vector polynomial optimization problem with constraints \begin{align}\label{example02} {\rm Min}_{\mathbb{R}^2_+}\;\big\{f(x)\,{\rm conv\,}lon \,x\in S\big\},\tag{VPO$_1$} \end{align} where $S:=\{x \in {\mathbb R}^3 {\rm conv\,}lon g_1(x) = 0, g_2(x) = 0, h(x) \geq 0\} = \{(0, 0, x_3) {\rm conv\,}lon x_3 \in {\mathbb R}\}.$ A simple calculation yields that $$\nabla f_1 = \left( \begin{array}{c} 0\\ x_3\\ x_2 \\ \end{array} {\rm ri\,}ght), \nabla f_2 = \left( \begin{array}{c} x_3\\ 0\\ x_1 \\ \end{array} {\rm ri\,}ght),$$\\ $$\text{and}~~\nabla g_1 = \left( \begin{array}{c} -2(1 - x_1x_2x_3)x_2x_3 + 2x_1\\ -2(1 - x_1x_2x_3)x_1x_3 + 2x_2\\ -2(1 - x_1x_2x_3)x_1x_2\\ \end{array} {\rm ri\,}ght), \nabla g_2 = \left( \begin{array}{c} x_2\\ x_1\\ 0 \\ \end{array} {\rm ri\,}ght), \nabla h = \left( \begin{array}{c} 3x_1^2\\ 0\\ 0 \\ \end{array} {\rm ri\,}ght).$$ By Definition~\ref{tangency-variety}, one has \begin{align*} \Gamma(f,S):=&\ \left\{x\in S{\rm conv\,}lon \left\{ \begin{aligned}&\sum_{k = 1}^2\tau_k \nabla f_k(x) - \sum_{i = 1}^2\lambda_i \nabla g_i (x)-\nu \nabla h (x) - \mu x = {\bf 0}, \\ &\textrm{for some}\ (\tau_1, \tau_2, \lambda_1, \lambda_2, \nu, \mu) \not= 0 \end{aligned}{\rm ri\,}ght. {\rm ri\,}ght\}\\ =&\ \{(0, 0, x_3) {\rm conv\,}lon x_3 \in {\mathbb R}\}. \end{align*} Now, we will show that the inclusion \eqref{jiao0425b} fails to hold in the case for problem~\eqref{example02}, for convenience, let $\bar y = (+\infty, +\infty);$ in other words, \begin{align}\label{jiao-a1} {T}_{\infty}(f, S) \not\subset {K}_{\infty}(f, S). \end{align} Indeed, by calculation, we have \begin{align*} {T}_{\infty}(f, S) = \left\{y\in {\mathbb R}^2 {\rm conv\,}lon \left\{ \begin{aligned} &\exists\ \{x^{\ell}\} \subset \Gamma(f, S) \ \textrm{with} \ \|x^{\ell} \| \to \infty \\ &\textrm{such that} \ f(x^{\ell}) \to y \ \textrm{as}\ {\ell} \to \infty \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\} = \{(0,0)\}. \end{align*} On the other hand, by \eqref{Rabierfunction} \begin{align*} v(x):= \inf\left\{\left\|\sum_{k = 1}^2 \tau_k \nabla f_k(x) - \sum_{i = 1}^2 \lambda_i\nabla g_i(x) - \nu\nabla h (x) {\rm ri\,}ght\| {\rm conv\,}lon \left\{ \begin{aligned} &\tau_1, \tau_2 \geq 0 \ \textrm{with} \ \tau_1 + \tau_2 = 1 \\ &\lambda_1, \lambda_2 \in {\mathbb R}, \nu \in {\mathbb R}_+ \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\}. \end{align*} Now, consider the set $\widetilde{K}_{\infty}(f, S).$ Note that $S= \{(0, 0, x_3) {\rm conv\,}lon x_3 \in {\mathbb R}\}$, for $x\in S$, \begin{align}\label{Rabierfunction1} v(x)=\inf\left\{ \sqrt{(\tau_1^2+\tau_2^2)x_3^2}{\rm conv\,}lon \tau_1, \tau_2\in {\mathbb R}_+, \ \tau_1 + \tau_2 = 1{\rm ri\,}ght\} = \tfrac{\sqrt{2}}{2}|x_3|.\end{align} Hence, \begin{align}\label{jiao-a2} \widetilde{K}_{\infty}(f, S)&:=\left\{y\in {\mathbb R}^2 {\rm conv\,}lon \left\{ \begin{aligned} &\exists\ \{x^{\ell}\} \subset S \ \textrm{with} \ \|x^{\ell} \| \to \infty\ \textrm{such that} \\ &f(x^{\ell}) \to y,\ v(x^{\ell}) \to 0 \ \textrm{as}\ {\ell} \to \infty \end{aligned} {\rm ri\,}ght. {\rm ri\,}ght\} = \emptyset, \end{align} this is because there is no $\{x^{\ell}\} \subset S$ with $\|x^{\ell} \| \to \infty$ such that $v(x^{\ell}) \to 0$ as ${\ell} \to \infty$ by \eqref{Rabierfunction1}. Thus, \eqref{jiao-a2} along with \eqref{jiao0425a} implies that ${K}_{\infty}(f, S) = \emptyset.$ As a result, we get \eqref{jiao-a1}. 
The reason is that the constraint set $S$ in consideration does not satisfy ${\rm (MFCQ)_{\infty}}$ (the reader may check it by definition easily). \qed }\end{example} \begin{definition}{\rm(\cite[Definition 3.2]{Kim2020}) \begin{itemize} \item[{\rm(i)}] the restrictive polynomial mapping $f$ on $S$ is called {\it proper at $\bar y \in \overline{\mathbb{R}}^p$} if $$\forall \{x^{\ell}\} \subset S,\ \|x^{\ell}\| \to \infty,\ f(x^{\ell}) \leq \bar y \Longrightarrow \|f(x^{\ell})\| \to \infty\ \textrm{ as }\ \ell \to \infty;$$ \item[{\rm(ii)}] the restrictive polynomial mapping $f$ on $S$ is called {\it proper} if it is proper at every $\bar y \in \overline{\mathbb{R}}^p.$ \end{itemize} }\end{definition} \begin{remark} {\rm \begin{enumerate} \item [{\rm (i)}] In case of $p=1,$ the properness of $f$ on $S$ is weaker than the coercivity of $f$ on $S.$ Remember that $f$ is said to be {\it coercive} on $S$ (see \cite{Bajbar2015,Bajbar2019} for more information in polynomial setting) if $$\lim_{x \in S,\ \|x\| {\rm ri\,}ghtarrow \infty} f(x) = + \infty.$$ Indeed, if $f$ is coercive on $S,$ then $f$ on $S$ is proper at $\bar y = +\infty.$ Conversely, it may fail to hold in general. For example, let $f(x) = x$ and $S = {\mathbb R},$ it is clear that $f$ is proper (at every $\bar y\in\overline{\mathbb{R}}$) but not coercive. \item [{\rm (ii)}] If $p \geq 2,$ the properness of $f$ on $S$ is also weaker than other coercivity conditions, such as {\it ${\mathbb R}^p_+$-zero-coercivity} $f$ on $S$ introduced by Guti\'errez et al \cite[Definition 3.1]{Gutierrez2014}. Remember that $f$ is said to be {\it ${\mathbb R}^p_+$-zero-coercive} on $S$ with respect to $\xi \in {\mathbb R}^p_+\setminus \{{\bf 0}\}$ if $$\lim_{x \in S,\ \|x\| {\rm ri\,}ghtarrow \infty} \langle \xi, f(x) \rightarrowngle = + \infty.$$ Actually, if $f$ is ${\mathbb R}^p_+$-zero-coercive on $S,$ then $f$ on $S$ is proper at $\bar y = (+\infty, \ldots, + \infty).$ Conversely, it may fail to hold in general. For example, let $f(x_1, x_2) = (x_1, x_2)$ and $S = {\mathbb R}^2,$ it is clear that $f$ is proper (at every $\bar y\in\overline{\mathbb{R}}^p$) but not ${\mathbb R}^p_+$-zero-coercive with respect to any $\xi \in {\mathbb R}^p_+\setminus \{{\bf 0}\}$. \end{enumerate} } \end{remark} \begin{definition}{\rm Let $f: S \to {\mathbb R}^p$ be a restrictive polynomial mapping and $\bar y \in \overline{\mathbb{R}}^p$. \begin{itemize} \item[{\rm(i)}] The restrictive polynomial mapping $f$ on $S$ satisfies the {\it Palais--Smale condition} at $\bar y$ if $\widetilde{K}_{\infty, \leq \bar y}(f, S) = \emptyset;$ \item[{\rm(ii)}] The restrictive polynomial mapping $f$ on $S$ satisfies the {\it Cerami condition} (or {\it weak Palais--Smale condition}) at $\bar y$ if ${K}_{\infty, \leq \bar y}(f, S) = \emptyset;$ \item[{\rm(iii)}] The restrictive polynomial mapping $f$ on $S$ is called {\it M-tame} at $\bar y$ if $T_{\infty, \leq \bar y}(f, S) = \emptyset.$ \end{itemize} }\end{definition} Observe that if the restrictive polynomial mapping $f$ on $S$ is proper at $\bar y \in \overline{\mathbb{R}}^p,$ then by definition, $$T_{\infty, \leq \bar y}(f, S) = {K}_{\infty, \leq \bar y}(f, S) = \widetilde{K}_{\infty, \leq \bar y}(f, S) = \emptyset.$$ But not vice versa. (see \cite{Kim2020}). \begin{theorem}\label{equivalent1} Let $S$ be defined as in \eqref{SAset} and presume that the set $S$ satisfies ${\rm (MFCQ)_{\infty}}$. Let $f: S \to {\mathbb R}^p$ be a restrictive polynomial mapping. 
Presume that there exists $\bar y \in f(S)$ such that the section $[f(S)]_{\bar y}$ is bounded. Then the following assertions are equivalent$:$ \begin{itemize} \item[{\rm(i)}] The restrictive polynomial mapping $f$ on $S$ is proper at $\bar y.$ \item[{\rm(ii)}] The restrictive polynomial mapping $f$ on $S$ satisfies the Palais--Smale condition at $\bar y.$ \item[{\rm(iii)}] The restrictive polynomial mapping $f$ on $S$ satisfies the Cerami condition at $\bar y.$ \item[{\rm(iv)}] The restrictive polynomial mapping $f$ on $S$ is $M$-tame at $\bar y.$ \end{itemize} If in addition one of the above conditions holds$,$ then the section $[f(S)]_{\bar y}$ is compact. \end{theorem} \begin{proof} By definition, the implications $[{\rm (i)}\Rightarrow {\rm (ii)} \Rightarrow {\rm (iii)}]$ hold immediately. Since the set $S$ satisfies ${\rm (MFCQ)_{\infty}},$ the implication $[{\rm (iii)} \Rightarrow {\rm (iv)}]$ follows from Theorem \ref{relationship1}. Now, we will show the implication $[{\rm (iv)} \Rightarrow {\rm (i)}]$. Assume to the contrary that the restrictive polynomial mapping $f$ on $S$ is {\it not} proper at $\bar y.$ Then, by definition, there exists a sequence $\{x^{\ell}\} \subset S$ with $\|x^{\ell}\| \to \infty$ and $f(x^{\ell}) \leq \bar y$ for all $\ell \in \mathbb{N}$ such that $\|f(x^{\ell})\| \not\to \infty$ as $\ell \to \infty.$ Since $\{f(x^{\ell})\} \subset [f(S)]_{\bar y}$ and the section $[f(S)]_{\bar y}$ is bounded, passing to a subsequence if necessary, we may assume that $\|f(x^{\ell})\| \to M$ as $\ell \to \infty,$ where $M$ is a nonnegative constant. For each fixed $\ell \in \mathbb{N},$ we consider the problem \begin{align}\label{jiao0416a} {\rm Min}_{\mathbb{R}^p_+}\;\left\{f(x)\,\colon \,x\in S,\ f(x) \leq \bar y\ \textrm{and}\ \|x \|^2 = \| x^\ell\|^2\right\}.\tag{P} \end{align} Since the set $\{ x \in {\mathbb R}^n \, \colon \, x\in S,\ f(x) \leq \bar y\ \textrm{and}\ \|x \|^2 = \| x^\ell\|^2\}$ is nonempty and compact and $f$ is continuous, the problem~\eqref{jiao0416a} has a Pareto solution, say $z^\ell.$ According to the Fritz-John optimality conditions \cite[Theorem 7.4]{Jahn2004}, there are $({\bf a}, {\bf b}, {\bf c}, {\bf d}, {\bf e}) \in ({\mathbb R}^p_+ \times {\mathbb R}^l \times {\mathbb R}^m_+ \times {\mathbb R}^p_+ \times {\mathbb R}) \setminus \{{\bf 0}\}$ such that \begin{align*} & \sum_{k = 1}^p {\bf a}_k\nabla f_k(z^\ell) - \sum_{i = 1}^l {\bf b}_i\nabla g_i(z^\ell) - \sum_{j = 1}^m {\bf c}_j\nabla h_j(z^\ell) + \sum_{k = 1}^p {\bf d}_k\nabla f_k(z^\ell) - 2{\bf e} z^\ell = {\bf 0}, \\ & {\bf c}_j h_j(z^\ell) = 0, j = 1, \ldots, m, \ \textrm{ and }\ {\bf d}_k (f_k(z^\ell) - \bar y_k) = 0, k = 1, \ldots, p. \end{align*} Letting $\tau_k:={\bf a}_k + {\bf d}_k,$ $k = 1, \ldots, p,$ $\lambda_i := {\bf b}_i,$ $i = 1, \ldots, l,$ $\nu_j := {\bf c}_j,$ $j = 1, \ldots, m$ and $\mu:=2{\bf e},$ it yields that \begin{align*} & \sum_{k = 1}^p \tau_k\nabla f_k(z^\ell) - \sum_{i = 1}^l \lambda_i\nabla g_i(z^\ell) - \sum_{j = 1}^m \nu_j\nabla h_j(z^\ell) - \mu z^\ell = {\bf 0}, \\ & \nu_j h_j(z^\ell) = 0, j = 1, \ldots, m, \ \textrm{ and }\ {\bf d}_k (f_k(z^\ell) - \bar y_k) = 0, k = 1, \ldots, p. \end{align*} Clearly, $(\tau, \lambda, \nu, \mu) \not= {\bf 0}$ (indeed, since ${\bf a}_k, {\bf d}_k \geq 0,$ the equality $(\tau, \lambda, \nu, \mu) = {\bf 0}$ would force $({\bf a}, {\bf b}, {\bf c}, {\bf d}, {\bf e}) = {\bf 0}$), and hence $z^\ell \in \Gamma (f, S).$ Consequently, $\{ z^\ell \}$ has the following properties: \begin{itemize} \item[(d1)] $\{z^\ell\} \subset \Gamma (f, S);$ \item[(d2)] $\|z^\ell\| = \|x^\ell\| \to +\infty$ as $\ell \to +\infty;$ and \item[(d3)] $f(z^\ell) \leq \bar y$ for all $\ell \in \mathbb{N}.$ \end{itemize} Since the section $[f(S)]_{\bar y}$ is bounded, without loss of generality, we may assume that $f (z^\ell) \to y$ for some $y \in {\mathbb R}^p.$ Clearly, $y \leq \bar y$.
Thus $y \in {T}_{\infty, \leq \bar y}(f, S).$ That is, ${T}_{\infty, \leq \bar y}(f, S) \not= \emptyset,$ which contradicts the $M$-tameness of $f$ on $S$ at $\bar y.$ Finally, let us show the compactness of $[f(S)]_{\bar y}$ under condition (i). Set $S':=\{x \in S \colon f(x) \leq \bar y\}.$ If $S'$ contained an unbounded sequence $\{x^{\ell}\},$ then $\|x^{\ell}\| \to \infty$ and $f(x^\ell) \leq \bar y$ for all $\ell \in \mathbb{N},$ while $\{f(x^\ell)\} \subset [f(S)]_{\bar y}$ would remain bounded, contradicting the properness of $f$ on $S$ at $\bar y;$ hence $S'$ is bounded. Meanwhile, the closedness of $S$ and the continuity of the mapping $f$ ensure that $S'$ is closed. Thus, $S'$ is compact, which, together with the continuity of the mapping $f,$ yields that the section $[f(S)]_{\bar y} = f(S')$ is compact. \end{proof} \section{Existence of Pareto Solutions}\label{Sec:5} \begin{theorem}\label{exist-result} Let $S$ be defined as in \eqref{SAset} and assume that the set $S$ satisfies ${\rm (MFCQ)_{\infty}}$. Let $f: S \to {\mathbb R}^p$ be a restrictive polynomial mapping. Presume that there exists $\bar y \in f(S)$ such that the section $[f(S)]_{\bar y}$ is bounded. Then the problem~\eqref{problem} possesses at least one Pareto solution$,$ if one of the following conditions holds$:$ \begin{itemize} \item[{\rm(i)}] The restrictive polynomial mapping $f$ on $S$ is proper at $\bar y.$ \item[{\rm(ii)}] The restrictive polynomial mapping $f$ on $S$ satisfies the Palais--Smale condition at $\bar y.$ \item[{\rm(iii)}] The restrictive polynomial mapping $f$ on $S$ satisfies the Cerami condition at $\bar y.$ \item[{\rm(iv)}] The restrictive polynomial mapping $f$ on $S$ is $M$-tame at $\bar y.$ \end{itemize} \end{theorem} \begin{proof} By Theorem~\ref{equivalent1}, the section $[f(S)]_{\bar y}$ is compact provided one of the equivalent conditions (i)--(iv) holds. Therefore, the result follows from \cite[Theorem 1]{Borwein1983} (or \cite[Theorem 2.10]{Ehrgott2005}). \end{proof} \begin{remark}{\rm Note that if the restrictive polynomial mapping $f$ on $S$ is proper at $\bar y \in f(S),$ and the section $[f(S)]_{\bar y}$ is bounded, then the problem~\eqref{problem} obviously possesses at least one Pareto solution. However, as mentioned in \cite{Kim2020}, the problem of checking whether a function is proper (or coercive) is {\it strongly NP-hard} even for polynomials of degree 4 (see \cite[Theorem 3.1]{Ahmadi2020}). }\end{remark} \begin{example}{\rm Let $x:=(x_1, x_2, x_3) \in {\mathbb R}^3.$ Let \begin{align*} f(x):=&\ \left(x_3, (1- x_1x_2)^2 + x_2^2 + x_3^2\right). \end{align*} Consider the following vector polynomial optimization problem with constraints \begin{align}\label{examPareto} {\rm Min}_{\mathbb{R}^2_+}\;\big\{f(x)\,\colon \,x\in S\big\},\tag{VPO$_2$} \end{align} where $S:=\{(x_1, x_2, x_3) \in {\mathbb R}^3 \colon x_1 \geq 0, x_2 \geq 0\}.$ Clearly, the set $S$ satisfies ${\rm (MFCQ)_{\infty}}$. The image $f(S)$ of $f$ over $S$ is sketched in {\sc Figure}~1. \begin{figure}[h] \caption{The image $f(S)$.} \end{figure} \begin{figure}[h] \caption{The section $[f(S)]_{\bar y}$.} \end{figure} Let $\bar y = (-1, 2).$ It is clear that the section $[f(S)]_{\bar y}$ is bounded but not closed (see {\sc Figure}~2).
On the other hand, by Definition~\ref{Pareto}, one can easily verify that $\mathrm{sol}\,({\rm VPO}_2) \not= \emptyset.$ In this case, $\widetilde{K}_{\infty, \leq \bar y}(f, S) \not= \emptyset.$ \qed }\end{example} \begin{example}{\rm Let $x:=(x_1, x_2) \in {\mathbb R}^2.$ Let \begin{align*} f(x):=&\ (f_1(x), f_2(x)) = \left(x_1^2x_2^4 + x_1^4x_2^2 - 3x_1^2x_2^2 + 1, (x_1 - 1)^2 + (x_2 - 1)^2\right), \end{align*} where $f_1(x)$ is known as the Motzkin polynomial (see \cite{HaHV2017}). Consider the following vector polynomial optimization problem with constraints \begin{align}\label{exam09} {\rm Min}_{\mathbb{R}^2_+}\;\big\{f(x)\,\colon \,x\in S\big\},\tag{VPO$_3$} \end{align} where $S:=\left\{x \in {\mathbb R}^2 \colon h_1(x) = x_1 \geq 0, h_2(x) = x_2 \geq 0\right\} = {\mathbb R}_+ \times {\mathbb R}_+.$ Clearly, the set $S$ satisfies ${\rm (MFCQ)_{\infty}}$, and by definition, we can easily verify that $T_{\infty, \leq \bar y}(f, S) = \emptyset$ for $\bar y = (0, 0).$ Hence, along with Theorem~\ref{exist-result}, $\mathrm{sol}\,({\rm VPO}_3) \not= \emptyset.$ On the other hand, a simple calculation yields that the solution set is $\{(1,1)\}.$ \qed }\end{example} \begin{corollary} Let $S:=\{x \in {\mathbb R}^n \colon Ax = b\},$ where $A\in {\mathbb R}^{m \times n}, b \in {\mathbb R}^m,$ and let $f: {\mathbb R}^n \to {\mathbb R}^p$ be a linear mapping with $f(x):= Cx,$ where $C\in {\mathbb R}^{p \times n}.$ Presume that the rows of $A$ and the rows of $C$ are linearly independent. If there exists $\bar y \in f(S)$ such that the section $[f(S)]_{\bar y}$ is bounded$,$ then the problem~\eqref{problem} possesses at least one Pareto solution. \end{corollary} \section{Conclusions and Further Discussions}\label{Sec:7} In this paper, we derive some sufficient conditions for the existence of Pareto solutions to the constrained vector polynomial optimization problem~\eqref{problem}. Such sufficient conditions are based on the Palais--Smale condition, the Cerami condition, the $M$-tameness, and the properness of the restrictive polynomial mapping $f$ over $S$. Among others, it is worth mentioning that our results are derived under the assumption ${\rm (MFCQ)_{\infty}}$ of $S$, which are significantly different from the results in the literature \cite{Kim2019,Kim2020}. Now, having the results in hand, we will close this paper by mentioning the following possible research directions as future investigations. \begin{itemize} \item[{\rm (i)}] [Finding (weak) Pareto solutions] When all functions in the problem~\eqref{problem} are linear, Blanco et al.\ \cite{Blanco2014} obtained the set of Pareto solutions by using a semidefinite programming method. When the functions in the problem~\eqref{problem} are convex polynomials, Moment-SOS relaxation methods are invoked to compute Pareto solutions in \cite{Jiao2020,Lee2018,Lee2019,Lee2021}. Further important results on computing Pareto solutions/values of the problem~\eqref{problem} are given in \cite{Magron2014,Magron2015,Nie2021}. A natural question arises: how to compute the Pareto solutions/values of the problem~\eqref{problem} without any convexity assumptions? \item[{\rm (ii)}] [On vector polynomial variational inequality problems] By using similar techniques (with possibly significant modifications), it is also very interesting to investigate vector polynomial variational inequality problems; such problems can be found in \cite{Huong2016,Yen2016}.
\end{itemize} \subsection*{Acknowledgments} The authors wish to thank Tiến-Sơn Phạm for many valuable suggestions. This work was supported by the National Natural Science Foundation of China (11971339, 11771319). \end{document}
\begin{document} \ensubject{fdsfd} \ArticleType{ARTICLES} \Year{2019} \Month{} \Vol{60} \No{} \BeginPage{1} \DOI{} \ReceiveDate{January 31, 2019} \AcceptDate{May 5, 2019} \title[]{Infinite dimensional Cauchy-Kowalevski and Holmgren type theorems} {Infinite dimensional Cauchy-Kowalevski and Holmgren type theorems} \author[1]{Jiayang YU}{[email protected]} \author[2,$\ast$]{Xu ZHANG }{zhang$\[email protected]} \AuthorMark{Jiayang Yu} \AuthorCitation{Jiayang Yu, Xu Zhang} \address[1]{ School of Mathematics, Sichuan University, Chengdu 610064, P. R. China} \address[2]{ School of Mathematics, Sichuan University, Chengdu 610064, P. R. China} \abstract{The aim of this paper is to show Cauchy-Kowalevski and Holmgren type theorems with infinite number of variables. We adopt von Koch and Hilbert's definition of analyticity of functions as monomial expansions. Our Cauchy-Kowalevski type theorem is derived by modifying the classical method of majorants. Based on this result, by employing some tools from abstract Wiener spaces, we establish our Holmgren type theorem.} \keywords{Cauchy-Kowalevski theorem, Holmgren theorem, monomial expansions, abstract Wiener space, divergence theorem, method of majorants} \MSC{Primary 35A10, 26E15, 46G20; Secondary 46G20, 46G05, 58B99.} \maketitle \section{Introduction} The classical Cauchy-Kowalevski theorem asserts the local existence and uniqueness of analytic solutions to quite general partial differential equations with analytic coefficients and initial data in the finite dimensional Euclidean space $\mathbb{R}^{n}$ (for any given $n\in \mathbb{N}$). In 1842, A. L. Cauchy first proved this theorem for the second order case; while in 1875, S. Kowalevski proved the general result. Both of them used the method of majorants. Because of its fundamental importance, there exists continued interest to generalize and/or improve this theorem (See \cite{Cha, Fri, Lascar79, Mog, Nir, Nis, Ovs, Saf, Tre, Yam92, Zub01} and the references cited therin). In particular, some mathematicians studied abstract forms of Cauchy-Kowalevski theorem in the context of Banach spaces and Fr\'{e}chet derivatives. For example, the linear cases were considered by L.~Ovsjannikov (\cite{Ovs}) and F.~Tr\`{e}ves (\cite{Tre}). Later, L.~Nirenberg (\cite{Nir}) obtained a nonlinear form, while T.~Nishida (\cite{Nis}) simplified Nirenberg's proof and obtained a more general version, M.~Safonov (\cite{Saf}) gave another proof of Nishida's theorem. There are also some abstract Cauchy-Kowalevski theorems in this respect (\cite{Cha, Mog, Yam92}). On the other hand, Holmgren's uniqueness theorem states the uniqueness of solutions to linear partial differential equations with analytic coefficients in much larger class of functions than analytic ones. In 1901, E.~Holmgren (\cite{Hol}) first proved this theorem for the case of $n=2$ in the context of classic solutions, while in 1949, F.~John (\cite{Joh}) extended it to the general case of $n$ variables. Later the result was extended first to the setting of distribution solutions by L.~H\"{o}rmander (\cite{Hor1, Hor2}) and then to that of hyperfunction solutions by H.~Hedenmalm (\cite{Hed}). All of the above works are addressed to the case of finite dimensional spaces. In the 1970s, B.~Lascar (\cite{Lascar76, Lascar79}) gave a Banach space version of Holmgren's uniqueness theorem. Unfortunately, the proof of Lascar's result in this respect is incomplete and questionable, and therefore, about thirty years later, in \cite{Cha} M. 
Chaperon said that no infinite-dimensional version of Holmgren's theorem seems to be known. The main purpose of this paper is to establish Cauchy-Kowalevski and Holmgren type theorems on $\mathbb{R}^{\infty}$, which is a countable Cartesian product of $\mathbb{R}$. In some sense, this is quite natural because $\mathbb{R}^{\infty}$ is an infinite dimensional counterpart of $\mathbb{R}^{n}$, the $n$-fold Cartesian product of $\mathbb{R}$. Since there are various different topologies on $\mathbb{R}^{\infty}$, we have more freedom and flexibility to introduce suitable assumptions on equations under consideration. Nevertheless, a large family of local derivatives of functions on $\mathbb{R}^{\infty}$ make the analysis in our work more complicated. Clearly, we cannot apply the known abstract Cauchy-Kowalevski and Holmgren type theorems (in the literatures) in the setting of Banach spaces to obtain our results. Indeed, our working space, $\mathbb{R}^{\infty}$, is NOT a Banach space! There exists many different definitions of analyticity in infinite dimensions. As far as we know, the concept of analyticity for functions of infinitely many variables began with H.~von Koch in 1899 (\cite{Koc}). H.~von Koch (\cite{Koc}) introduced a monomial approach to holomorphic functions on infinite dimensional polydiscs which was further developed by D.~Hilbert in 1909 (\cite{Hil}). After the pioneering work of H.~von Koch and D.~Hilbert, it is clear from M.~Fr\'echet \cite{Fre1, Fre2} and R.~G\^{a}teaux \cite{Gat1, Gat2} that the power series expansion in terms of homogeneous polynomials seems more suitable for the analyticity in the context of Banach space (e.g., \cite{Din} for the extensive works on infinite dimensional complex analysis starting from 1960s). Nevertheless, in the recent decades, it was found that Hilbert's definition of analyticity is also useful in some problems. For example, in 1987, R.~Ryan (\cite{Rya}) discovered that every entire function $f$ on $\ell^1$ (the usual Banach space of absolutely summable sequences of real numbers) has a monomial expansion which converges and coincides with $f$; in 1999, L.~Lempert (\cite{Lem}) proved this holds for any open ball of $\ell^1$; while in 2009, A.~Defant, M.~Maestre and C.~Prengel (\cite{DMP}) showed that the monomial expansion of any holomorphic function on the Reinhardt domain $\cal R$ in a Banach sequence space converges uniformly and absolutely on any compact subsets of $\cal R$. In this work, we shall use the definition of analyticity introduced by H.~von Koch and D.~Hilbert. We shall modify the classical method of majorants to derive a Cauchy-Kowalevski type theorem in $\mathbb{R}^{\infty}$. Based on this result, we then employ some tools from abstract Wiener spaces (developed by L.~Gross in \cite{Gro1}) and especially an infinite dimensional divergence theorem (\cite{Goo}) to establish a Holmgren type theorem with infinite number of variables. For the later, the basic idea is more or less the same as that in finite dimensions but the technique details are much more complicated. Indeed, it is well-known that, compared with its finite dimensional counterpart, the analysis tools in infinite dimensions are much less developed. The rest of this paper is organized as follows. 
Section 2 is of preliminary nature, in which we first introduce suitable topologies for $\mathbb{R}^{\infty}$ and for $\mathbb{R}^{\infty}$ plus an infinity point; then, we give the definition of analyticity for functions of infinitely many variables; also, we present a brief introduction to the theory of abstract Wiener space and a divergence theorem. In Section 3, we show a Cauchy-Kowalevski type theorem of infinitely many variables. Section 4 is devoted to establishing our Holmgren type theorem. We refer to \cite{YuZh1} for the details of proofs of the results announced in this paper and some other results in this context. \section{Preliminaries } \subsection{A family of topologies} There exists only one useful topology on finite dimensional space $\mathbb{R}^{n}$. However, we need to use a family of topologies on $\mathbb{R}^{\infty}$. Denote by $\mathscr{B}_{\infty}$ the class of sets (in $\mathbb{R}^{\infty}$): $(x_i)+B_r^{\infty}\triangleq \{(x_i+y_i):\, (y_i)\in B_r^{\infty}\}$, where $(x_i)\in \mathbb{R}^{\infty}$, $r\in(0,+\infty)$ and $B_r^{\infty}\triangleq\{(y_i)\in \mathbb{R}^{\infty}:\,\sup_{1\leq i< \infty}|y_i|<r\}.$ Then $\mathscr{B}_{\infty}$ is a base for a topological space, denoted by $\mathscr{T}_{\infty}$. For any $(x_i),(y_i)\in \mathbb{R}^{\infty}$, define $d_{\infty}((x_i),(y_i))\triangleq \min \{1,\sup_{1\leq i<\infty}|x_i-y_i|\}$. Then, it is easy to show the following result: \begin{proposition}\label{prop 3} The following assertions hold: \begin{itemize} \item[(1)] $(\mathbb{R}^{\infty},\mathscr{T}_{\infty})$ is not a topological vector space; \item[(2)] $(\mathbb{R}^{\infty},\mathscr{T}_{\infty})$ is compatible with the metric $d_{\infty}$ and $(\mathbb{R}^{\infty},d_{\infty})$ is a nonseparable complete metric space. \end{itemize} \end{proposition} For any $p\in[1,\infty)$ and $(x_i),(y_i)\in \mathbb{R}^{\infty}$, define $\text{d}_p((x_i),(y_i))\triangleq \min \{1, (\sum_{i=1}^{\infty}|x_i-y_i|^p )^{\frac{1}{p}} \}.$ Then, $(\mathbb{R}^{\infty}, d_p)$ is a complete metric space. Denote by $\mathscr{T}_p$ the topology induced by the metric $d_p$, and by $\mathscr{B}_p$ the class of sets: $(x_i)+B_r^{p}$, where $(x_i)\in \mathbb{R}^{\infty}$, $r\in(0,+\infty)$ and $B_r^{p}\triangleq \{(x_i)\in \mathbb{R}^{\infty}:\, (\sum_{i=1}^{\infty}|x_i|^p )^{\frac{1}{p}}<r \}$. Obviously, $\mathscr{B}_p$ is a base for the topology space $\mathscr{T}_p$. Also, denote by $\ell^p$ ({\it resp.} $\ell^\infty$) the usual Banach space of sequences $(x_i)\in \mathbb{R}^{\infty}$ so that $(\sum_{i=1}^{\infty}|x_i|^p )^{\frac{1}{p}}<\infty$ ({\it resp.} $\sup_{1\leq i< \infty}|x_i|<\infty$). Likewise, for any $p\in(0,1)$ and $(x_i),(y_i)\in \mathbb{R}^{\infty}$, define $\text{d}_p((x_i),(y_i))\triangleq \min \{1, (\sum_{i=1}^{\infty}|x_i-y_i|^p ) \}$. Then $d_p$ is a metric on $\mathbb{R}^{\infty}$. Denote by $\mathscr{T}_p$ the topology induced by the metric $d_p$, and by $\mathscr{B}_p$ the class of sets: $(x_i)+B_r^{p}$, where $(x_i)\in \mathbb{R}^{\infty}$, $r\in(0,+\infty)$ and $B_r^{p}\triangleq \{(x_i)\in \mathbb{R}^{\infty}:\,\sum_{i=1}^{\infty}|x_i|^p<r \}$. Similarly to Proposition \ref{prop 3}, $(\mathbb{R}^{\infty},\mathscr{T}_{p})$ and $\mathscr{B}_p$ is a base for the topology space $\mathscr{T}_p$ when $0< p< \infty$. Denote by $\mathscr{T}$ the (usual) product topology on $\mathbb{R}^{\infty}$. Now we have defined a family of topologies on $\mathbb{R}^{\infty}$. The following result shows some relations between these topologies. 
\begin{proposition} \label{toplogies inclusion proposition} The inclusion relations $\mathscr{T}\subsetneqq \mathscr{T}_{\infty}\subsetneqq \mathscr{T}_q\subsetneqq\mathscr{T}_p$ hold for any $0<p<q<\infty$. \end{proposition} Let $\widetilde{\mathbb{R}^{\infty} }\triangleq \mathbb{R}^{\infty} \sqcup\{\infty\}$, where $\infty$ is any fixed point not belonging in $\mathbb{R}^{\infty}$. We consider the following family of sets (in $\widetilde{\mathbb{R}^{\infty} }$): $\mathscr{B}^p\triangleq \mathscr{B}_p \bigcup \Big\{\big\{(x_n)\in \mathbb{R}^{\infty} :x_n\neq 0 \text{ for each }n\in \mathbb{N}\,\text{ and } \,\,\sum_{n=1}^{\infty}\frac{1}{|x_n|^p}<r \big\}\sqcup\{\infty\}:\,r>0\Big\}$ for $0<p<\infty$, and $\mathscr{B}^{\infty}\triangleq \mathscr{B}_{\infty} \bigcup \Big\{\big\{(x_n)\in \mathbb{R}^{\infty} :x_n\neq 0 \text{ for each }n\in \mathbb{N}\,\text{ and } \,\,\sup_{1\leq n<\infty}\frac{1}{|x_n|}<r \big\}\sqcup \{\infty\}:\,r>0\Big\}.$ Then $\mathscr{B}^p$ is a base for a topological space on $\widetilde{\mathbb{R}^{\infty} }$ which we will denoted by $\mathcal {T}^p$ for $0<p\leq \infty$. It is easily seen that the subspace topology of $(\widetilde{\mathbb{R}^{\infty} },\mathcal {T}^p)$ on $\mathbb{R}^{\infty}$ is $\mathscr{T}_p$ but $\mathcal {T}^p$ is not the usual one-point compactification of $\mathscr{T}_p$ (and therefore we use the notion $\widetilde{\mathbb{R}^{\infty} }$ rather than $\widehat {\mathbb{R}^{\infty} }$ for the usual one-point compactification). \subsection{Analyticity for Functions of Infinitely Many Variables} We denote by $\mathbb{N}^{(\mathbb{N})}$ the set of all finitely supported sequences of nonnegative integers. For $\textbf{x}=(x_i)\in \mathbb{R}^{\infty}$, $\alpha=(\alpha_i)\in \mathbb{N}^{(\mathbb{N})}$ with $\alpha_k=0$ for $k\geq n+1$ and some $n\in \mathbb{N}$, write $\textbf{x}^{\alpha}\triangleq x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}$, which is called a monomial on $\mathbb{R}^{\infty}$. The following definition of analyticity is essentially from \cite{Koc} and \cite{Hil}. \begin{definition} Suppose $f$ is a real-valued function defined on a subset $D$ of $\mathbb{R}^{\infty}$ and $\textbf{x}_0\in D$. If for each $\alpha\in \mathbb{N}^{(\mathbb{N})}$, there exist a real number $c_{\alpha}$ (depending on $f$ and $x_0$) such that the series $\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}} c_{\alpha}\textbf{x}^{\alpha}$ is absolutely convergent for some $\textbf{x}=(x_i)\in \mathbb{R}^{\infty}$ with $x_i\neq 0$ for each $i\in \mathbb{N}$, and for each $\textbf{h}=(h_i)\in \mathbb{R}^{\infty}$ with $|h_i|\leq |x_i|$ for all $i\in\mathbb{N}$, it holds that $\textbf{x}_0+\textbf{h}\in D$ and \begin{eqnarray}\label{monomial expansion} f(\textbf{x}_0+\textbf{h})=\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}} c_{\alpha}\textbf{h}^{\alpha}, \end{eqnarray} then $f$ is called analytic near $\textbf{x}_0$ (with the monomial expansion (\ref{monomial expansion})). In this case, we write $D_f^{\textbf{x}_0}\triangleq \{\textbf{h}\in \mathbb{R}^{\infty}:\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}} |c_{\alpha}\textbf{h}^{\alpha}|<\infty \}$, and call the set $D_f^{\textbf{x}_0}$ the convergence domain of monomial expansion (\ref{monomial expansion}). \end{definition} \begin{definition} Suppose $f$ is a real-valued function defined on a subset $D$ of $\mathbb{R}^{\infty}$, $\textbf{x}_0\in D$ and $0<p<\infty$. 
If there exists $\textbf{x}=(x_i)\in\mathbb{R}^{\infty}$ with $\sum_{i=1}^{\infty}\frac{1}{|x_i|^p}<\infty$ such that $f$ is analytic near $\textbf{x}_0$ with the monomial expansion (\ref{monomial expansion}) and $\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}} |c_{\alpha}\textbf{x}^{\alpha}|<\infty$, then the monomial expansion of $f$ near $\textbf{x}_0$ is said to be absolutely convergent at a point near $\infty$ in the topology $\mathcal {T}^{p}$. \end{definition} \begin{example} (\textbf{Riemann Zeta Function}) Recall that the Riemann zeta function is defined by $\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}$ for $s\in \mathbb{C}$ with Re$(s)>1$. Suppose that $\{p_n\}_{n=1}^{\infty}$ is the sequence of all positive prime numbers. Then for $s\in \mathbb{R}$ with $s>2$ we have $\zeta(s)=\prod_{n=1}^{\infty}\frac{1}{1-p_n^{-s}} =\sum_{\alpha=(\alpha_1,\cdots,\alpha_n) \in \mathbb{N}^{(\mathbb{N})}}p_1^{-s\alpha_1}\cdots p_n^{-s\alpha_n} =\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}}(p_1^{-s},\cdots,p_n^{-s},\cdots)^{\alpha} =f(p_1^{-s},\cdots,p_n^{-s},\cdots),$ where $f(\textbf{x})\triangleq\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}}\textbf{x}^{\alpha}$, with domain of definition $D=\{\textbf{x}=(x_i)\in\mathbb{R}^{\infty}:\,\sum_{i=1}^{\infty}|x_i|<1\}$, is the geometric series function of infinitely many variables; note that for $s>2$ one has $\sum_{n=1}^{\infty}p_n^{-s}\leq \zeta(2)-1<1$, so $(p_1^{-s},p_2^{-s},\cdots)$ indeed belongs to $D$. \end{example} Naturally, the notion of a majorant function is defined as follows: \begin{definition} Suppose $f$ and $F$ are analytic near $\textbf{x}_0\in \mathbb{R}^{\infty}$ with monomial expansions $f(\textbf{x})=\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}} c_{\alpha}(\textbf{x}-\textbf{x}_0)^{\alpha}$ and $F(\textbf{x})=\sum_{\alpha \in \mathbb{N}^{(\mathbb{N})}} C_{\alpha}(\textbf{x}-\textbf{x}_0)^{\alpha}$, respectively, where $c_{\alpha}\in\mathbb{R}$ and $C_{\alpha}\geq 0$ for each $\alpha \in \mathbb{N}^{(\mathbb{N})}$. If $|c_{\alpha}|\leq C_{\alpha}$ for all $\alpha \in \mathbb{N}^{(\mathbb{N})}$, then $F$ is called a majorant function of $f$ near $\textbf{x}_0$. \end{definition} Suppose $B$ is a real Banach space and $U$ is an open subset of $B$. For each $n\in \mathbb{N}$, a function $g$ from $B$ into $\mathbb{R}$ is called a continuous $n$-homogeneous polynomial if there exists a continuous $n$-linear map $T$ from $\prod_{i=1}^{n}B$ into $\mathbb{R}$ such that $g(x)=T(x,\cdots,x)$ for each $x\in B$. For $n=0$, we call any constant function from $B$ into $\mathbb{R}$ a continuous $0$-homogeneous polynomial. A function $f$ (defined on a subset $D$ of $B$) is called analytic on a subset $U\subset D$ if for each $\xi\in U$ there exist a sequence $\{P_nf(\xi)\}_{n=0}^{\infty}$ of continuous $n$-homogeneous polynomials on $B$ and a radius $r>0$ such that $\xi+B(r)\subset U$ and $f(\xi+x)=\sum_{n=0}^{\infty} P_nf(\xi)(x)$ uniformly in $B(r)$, where $B(r)\triangleq \{y\in B: ||y||<r\}$. It is easy to see that the function $g(\cdot)\triangleq f(\xi+\cdot)$ is then Fr\'{e}chet differentiable in $B(r)$ (see \cite{Din} for more details). We emphasize that $\mathbb{R}^{\infty}$ is not a normed space. Hence analyticity defined through monomial expansions and analyticity defined through power series expansions are two distinct notions. Nevertheless, for every $\textbf{x}\in\mathbb{R}^{\infty}$, $r\in(0,1)$ and $p\in[1,+\infty]$, it holds that $\textbf{x}+B_r^{p}\subset \mathbb{R}^{\infty}$. Motivated by this simple observation, we have the following definition. \begin{definition} Suppose $p\in[1,\infty],\,U\subset \mathbb{R}^{\infty}$, $f$ is a real-valued function defined on $U$ and $\textbf{x}\in U$.
We say that $f$ is Fr\'{e}chet differentiable with respect to $\ell^p$ in a neighborhood of $\textbf{x}$ in the topology $\mathscr{T}_p$ if there exists $r\in(0,1)$ such that $\textbf{x}+B_r^{p}\subset U$, and the function defined by $g(\cdot)\triangleq f(\textbf{x}+\cdot)$ is Fr\'{e}chet differentiable with respect to the Banach space $\ell^p$ in $B_r^{p}$. \end{definition} \subsection{Abstract Wiener Space and Derivatives} Let us recall the notion of an abstract Wiener space, which will be of crucial importance in the proof of the Holmgren type theorem. The material in this subsection is taken from \cite{Dri, Fer, Gro, Gro1}. Suppose $X$ is a real separable Banach space, and denote by $X^*$ its dual space and by $\langle \cdot,\cdot\rangle$ the natural pairing from $X^*\times X$ into $\mathbb{R}$. Let $Y$ be another Banach space. Denote by ${\cal L}(X; Y)$ the Banach space of all bounded linear operators from $X$ to $Y$, with the usual operator norm (when $Y=X$, we simply write ${\cal L}(X)$ instead of ${\cal L}(X; Y)$). A subset $C$ of $X$ is called a cylinder set if it is of the form $$C=\big\{x\in X:(\langle y_1,x\rangle,\cdots,\langle y_n,x\rangle)\in E\big\},$$ where $n\in\mathbb{N}$, $y_1,\cdots,y_n\in X^*$ and $E$ is a Borel set in $\mathbb{R}^n$. If $L$ is a finite-dimensional subspace of $X^*$ such that $\{y_1,\cdots,y_n\}\subset L$, then $C$ is said to be based on $L$. Clearly, the collection of cylinder sets in $X$ is an algebra $\mathscr{R}$, and the collection of cylinder sets based on $L$ is a $\sigma$-algebra, which will be denoted by $\mathscr{S}_L$. We call a nonnegative set function $\mu$ on $\mathscr{R}$ a cylinder set measure on $X$ if $\mu(X)=1$ and $\mu$ is countably additive on $\mathscr{S}_L$ for each finite-dimensional subspace $L$ of $X^*$. Suppose $H$ is a real separable Hilbert space with inner product $(\cdot, \cdot)$ and norm $|\cdot|=\sqrt{(\cdot,\cdot)}$. Then every cylinder set of $H$ is of the form $$C=\big\{x\in H:Px\in E\big\},$$ where $P$ is a finite-dimensional projection in $H$ and $E$ is a Borel set in $PH$. For any $t>0$, a (typical) cylinder set measure $\mu_t$ is defined by $$\mu_t(C)\triangleq\frac{1}{(2\pi t)^{\frac{n}{2}}}\int_E e^{-\frac{|x|^2}{2t}}\,\mathrm{d}x,$$ where $C$, $P$ and $E$ are as above, $n=\dim PH$, and $\mathrm{d}x$ is the Lebesgue measure on $PH$. A measurable semi-norm on $H$ is a semi-norm $||\cdot||$ on $H$ with the property that for every $\epsilon>0$ there exists a finite-dimensional projection $P_0$ such that for every finite-dimensional projection $P$ orthogonal to $P_0$ it holds that $\mu_1(\{x\in H: ||Px||>\epsilon\})<\epsilon.$ As a consequence of this definition, every measurable semi-norm $||\cdot||$ is dominated by the Hilbert norm, i.e., there exists a constant $C$ such that $||x||\leq C|x|$ for all $x\in H$. If $||\cdot||$ is a measurable norm, we denote by $B$ the completion of $H$ with respect to $||\cdot||$. Then $B$ is a separable Banach space. There is a natural embedding $i$ from $H$ into $B$ whose image is dense in $B$, and $i^*$ is then an embedding from $B^*$ into $H^*$. Since $H^*$ can be identified with $H$, we have the inclusion relations $B^*\subset H\subset B.$ Furthermore, we note that an element $x$ of $H$ belongs to $B^*$ if and only if there exists a constant $C>0$ such that the inequality $|(x,y)|\leq C ||y||$ holds for all $y$ in $H$. Then $\mu_t$ induces a cylinder set measure $m_t$ in $B$ as follows.
If $y_1,\cdots,y_n\in B^*$ and $E$ is a Borel set of $\mathbb{R}^n$, we define $$m_t(\{x\in B:(\langle y_1,x\rangle,\cdots,\langle y_n,x\rangle)\in E\})\triangleq \mu_t(\{x\in H:((y_1,x),\cdots,( y_n,x))\in E\}).$$ In \cite{Gro1}, it was proved that $m_t$ is countably additive on the cylinder sets of $B$. By the Carath\'{e}odory extension theorem, it can be uniquely extended to the Borel sets of $B$ as a measure, denoted by $p_t$. The triple $(i,H,B)$ is called an abstract Wiener space and the measure $p_t$ is called the Wiener measure on $B$ with variance parameter $t$. For any $x\in B$ and Borel subset $A$ of $B$, we define $p_t(x,A)\triangleq p_t(A-x).$ By \cite{Gro}, one has the following formula for the Wiener measure. \begin{proposition} For any $s,t\in(0,+\infty)$ and $x,y\in B$, $p_t(x,\cdot)$ and $p_s(y,\cdot)$ are equivalent measures if and only if $s=t$ and $x-y\in H$. Otherwise they are mutually singular. Furthermore, it holds that $p_{ts}(A)=p_t(s^{-\frac{1}{2}}A)$ for any Borel subset $A$ of $B$. \end{proposition} In terms of the inclusion relations $B^*\subset H\subset B$, one has a useful product decomposition of Wiener measure $p_t$ as follows. Suppose $K$ is a finite-dimensional subspace of $B^*$ and $L$ is its annihilator in $B$. If $\{y_1,\cdots,y_n\}$ is an orthonormal basis of $K$ then we define $Qx=\sum_{j=1}^{n}\langle y_j,x\rangle y_j,\,x\in B.$ Then $Q$ is a continuous linear operator from $B$ into $B$. Obviously, the range of $Q$ is $K$ and null space of $Q$ is $L$. We thus get $B=K\oplus L$. Let $K^{\perp}$ be the orthogonal complement of $K$ in $H$. Then $K^{\perp}\subset L$ and $L$ is the closure in $B$ of $K^{\perp}$. It is easy to check that the restriction of a measurable norm to a closed subspace is again a measurable norm. Thus there is a Wiener measure $p_t^{'}$ on the space $L$. Let $\mu_t^{'}$ denote the typical Gaussian measure in $K$ then in the Cartesian product decomposition $B=K\times L$ there holds $p_t=\mu_t^{'}\times p_t^{'}.$ The following exponential integrability of Wiener measure, discovered in \cite{Fer}, is very useful. \begin{theorem}\label{Fernique's Theorem} {\rm (\textbf{Fernique's Theorem})} For any fixed $t\in(0,\infty)$, there exists $\epsilon=\epsilon(p_t)>0$ such that \begin{eqnarray*} \int_B e^{\epsilon ||x||^2}\,p_t(\mathrm{d}x)<\infty. \end{eqnarray*} \end{theorem} Denote by $\mathcal{B}(B)$ the collection of Borel sets of $B$. The following density result (\cite[Proposition 39.8]{Dri}) will also be used later. \begin{proposition}\label{prop 39.8 in Drive} Suppose $\mu$ is a probability measure on $\mathcal{B}(B)$ so that for every $\varphi\in B^*$, there exists a constant $\epsilon=\epsilon( ||\varphi||_{B^*})>0$ such that $\int_Be^{\epsilon |\varphi|}\,\mathrm{d}\mu<\infty$ (Here, $|\varphi|$ stands for the absolute value of $\varphi$). Then $\mathcal {F}\triangleq\{P(\varphi_1,\cdots,\varphi_n):n\in\mathbb{N},\varphi_i\in B^*,1\leq i\leq n, P\text{ is a real polynomial with $n$ variables }\}$ is dense in $L^p(B,\mathcal {B}(B),\mu)$ for $1\leq p<\infty$. \end{proposition} In what follows, $(i,H,B)$ is a given abstract Wiener space. For a real valued function $f$ defined on an open set $O$ of $B$, one has two definitions of derivatives. Firstly, for $x\in O$ if $f$ is $B$-Fr\'{e}chet differentiable at $x$, we will denote the corresponding derivative at $x$ by $f'(x)$. Secondly, we have a function defined by $g(h)=f(x+ih)$ which is a function on a neighborhood of $0$ in $H$. 
If $g$ is $H$-Fr\'{e}chet differentiable at $0$, we will denote the corresponding derivative at $0$ by $Df(x)$. From \cite{Gro} we see that the second derivative is weaker than the first derivative. The second derivative is sometimes called Malliavin derivative, which will be used in the proof of the Holmgren type theorem in this paper. For $n\geq 2$, we will use the notations $f^{n}(x)$ and $D^{n}f(x)$ to denote the corresponding higher order derivatives. \subsection{A Divergence Theorem in Abstract Wiener Space} In this subsection, we will give a brief exposition of surface measures and a divergence theorem in the abstract Wiener space $(i,H,B)$, developed in \cite{Goo}. Firstly, we introduce the concepts of ``smooth" functions and surfaces. \begin{definition} A real-valued function $g$ defined on an open subset $U$ of $B$ is called an $H$-$C^1$ function if $g$ is continuous and $H$-Fr\'{e}chet differentiable on $U$, the map $Dg:U\to H^*$ is continuous and the vector $Dg(x)\in B^*$ for each $x\in U$. \end{definition} \begin{definition} A subset $S$ ({\it resp.} $V$) of $B$ is called an $H$-$C^1$ surface ({\it resp.} to have an $H$-$C^1$ boundary $\partial V$) if for each $x\in S$ ({\it resp.} $x\in\partial V$) there is an open neighborhood $U$ of $x$ in $B$ and an $H$-$C^1$ function $g$ defined on $U$ such that $Dg(x)\neq 0$ and $S\cap U=\{y\in U: g(y)=0\}$ ({\it resp.} $V\cap U=\{y\in U:g(y)<0\}$). \end{definition} Secondly, we introduce the notion of local coordinate for the above $H$-$C^1$ surface $S$. We begin with the concept of normal projection: \begin{definition} A one dimensional orthogonal projection $P$ on $H$ is called a normal projection for $S$ at $x\in S$ if $|P(y-x)|=o(|y-x|)$ as $|y-x|\to 0$ for all $y\in S$ so that $y-x\in H$. \end{definition} Denote by $I$ the identity operator on $H$. One can show the following result: \begin{proposition}\label{prop 2 in Godman DT} There exists a unique map $N_{\cdot}:S\to {\cal L}(B;B^*)$ such that \begin{itemize} \item[(1)] For each $x\in S$ the restriction of $N_x$ to $H$ is a normal projection for $(S-x)\cap H$ at $0$; \item[(2)] For each $x$ the map $J_x=I-N_x$ is a homeomorphism of an open neighborhood of $x$ in $S$ onto an open subset in the null space of $N_x$; \item[(3)] The map $N:S\to {\cal L}(H)$ is continuous. \end{itemize} \end{proposition} For any $y\in S$, by Proposition \ref{prop 2 in Godman DT}, there is an element $h$ in $B^*$ with $|h|=1$ and a neighborhood $W$ of $y$ in $S$ such that $N_yh=h$, $|N_wh|>0$ for all $w$ in $W$, and $J_y=I-N_y$ is a homeomorphism of $W$ into the null space of $N_y$. We call the above element $h$ {\it a unit normal vector} at $y$, and $W$ {\it a coordinate neighborhood} of $y$. For a fixed $x$ in $B$ and $t>0$ we define a measure $\rho(t,x,W,\cdot)$ on the Borel sets of $W$ by $$\rho(t,x,W,E)\triangleq\frac{1}{\sqrt{2\pi t}}\int_{J_y(E)} \frac{1}{|N_{J_y^{-1}z}h|}\exp\Big[-\frac{|N_y(J_y^{-1}z-x)|^2}{2t}\Big]\,p_t^{'}(J_y x,\mathrm{d}z),$$ where $p_t^{'}$ is the Wiener measure on the null space of $N_y$. We call $\rho(t,x,W,\cdot)$ {\it a local version of normal surface measure} with dilation parameter $t$ and translation variable $x$. 
One has the following result: \begin{theorem}\label{theorem 1 in Goodman DT} For any $x\in B$ and $t>0$, there is a unique measure $\sigma_t(x,\cdot)$ on the Borel sets of $S$ such that for any local version of normal surface measure $\rho(t,x,W,\cdot)$ on a coordinate neighborhood $W$ in $S$ and any Borel subset $E$ of $W$, it holds that $\sigma_t(x,E)=\rho(t,x,W,E)$. \end{theorem} The measure $\sigma_t(x,\cdot)$ given in Theorem \ref{theorem 1 in Goodman DT} is called {\it a normal surface measure} on $S$ with dilation parameter $t$ and translation variable $x$. In the sequel, $V$ is a given nonempty open subset of $B$ with an $H$-$C^1$ boundary $\partial V$. We also need the following two concepts. \begin{definition} A map ${\bf n}:\partial V\to H^*$ is called {\it an outward normal map} for $V$ provided that ${\bf n}(y)$ is a unit normal vector at $y$ for the surface $\partial V$ and $y-s{\bf n}(y)\in V$ for every sufficiently small $s>0$. \end{definition} \begin{definition} An ($H$-valued) function $F$ defined on an open subset $U$ of $B$ is said to have finite divergence at $x\in U$ if $F$ is $H$-Fr\'{e}chet differentiable at $x$ and $DF(x)$ is an operator of trace class on $H$. For such a function $F$, the divergence of $F$ at $x$ is defined as the trace of $DF(x)$ and is denoted by $(\text{div }F)(x)$. \end{definition} The following result will play a key role in the proof of our Holmgren type theorem in infinitely many variables. \begin{theorem}\label{Divergence Theorem}{\rm (\textbf{Divergence Theorem})} Assume that $F:V\cup \partial V\to H$ is a continuous function with finite divergence on $V$ and that $F$ is uniformly bounded with respect to the $B^*$-norm on $V$. If for some $x\in B$ and $t>0$, the function $|F(\cdot)|$ is $\sigma_t(x,\cdot)$-integrable on $\partial V$ and the trace class operator norm of $DF$ is $p_t(x,\cdot )$-integrable on $V$, then \begin{eqnarray*} \int_V \bigg[({\rm div }\;F)(y)-\frac{\langle F(y),y-x\rangle}{t}\bigg]\,p_t(x,\mathrm{d}y)=\int_{\partial V}\langle F(y), {\bf n}(y)\rangle\,\sigma_t(x,\mathrm{d}y). \end{eqnarray*} \end{theorem} \section{Cauchy-Kowalevski Type Theorem of Infinitely Many Variables} This section is devoted to the study of the following initial value problem: \begin{equation}\label{one order quasi-linear partial differential equation} \left\{ \begin{array}{ll} \displaystyle\partial^m_t u(t,\textbf{x})=f\big(t,\textbf{x},u, \partial^{\beta}_{\textbf{x}}\partial^{j}_t u \big),\\[2mm]\displaystyle \partial^k_tu(t,\textbf{x})\mid_{t=0}=\phi_k(\textbf{x}),\ \ k=0,1,\cdots,m-1. \end{array} \right. \end{equation} Here $m$ is a given positive integer, $t\in \mathbb{R}$, $\textbf{x}=(x_i)\in \mathbb{R}^{\infty}$, and $\partial^{\beta}_{\textbf{x}}\triangleq\partial^{\beta_1}_{x_1}\cdots\partial^{\beta_n}_{x_n}$ for $\beta=(\beta_1,\cdots,\beta_n,0,\cdots)\in \mathbb{N}^{(\mathbb{N})}$ (for some positive $n\in\mathbb{N}$), where $\partial^{\beta_i}_{x_i}$ stands for $\frac{\partial^{\beta_i}}{\partial x_i^{\beta_i}}$; the unknown $u$ is a real-valued function depending on $t$ and $\textbf{x}$; $f$ is a non-linear real-valued function depending on $t$, $\textbf{x}$, $u$ and the derivatives of $u$ of the form $\partial^{\beta}_x\partial^{j}_t u$, $\beta\in \mathbb{N}^{(\mathbb{N})}$, $j<m$, $1\leq|\beta|+j\leq m.$ Note that the values $u(0,(0)), \partial^{\beta}_x\partial^{j}_t u(0,(0)), \beta\in \mathbb{N}^{(\mathbb{N})},\,j<m,\,1\leq|\beta|+j\leq m$ are determined by (\ref{one order quasi-linear partial differential equation}).
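Indeed (a short verification added for clarity), all of these values can be read off from the initial data: differentiating the initial conditions in (\ref{one order quasi-linear partial differential equation}) with respect to $\textbf{x}$ gives $$\partial^{\beta}_{\textbf{x}}\partial^{j}_t u(0,\textbf{x})=\partial^{\beta}_{\textbf{x}}\phi_j(\textbf{x}),\qquad \beta\in \mathbb{N}^{(\mathbb{N})},\ 0\leq j\leq m-1,$$ and every derivative listed above has $j\leq m-1$; in particular, $u(0,(0))=\phi_0((0))$.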
For simplicity, we write these determined values by $u(0,(0))=u_0,\partial^{\beta}_x\partial^{j}_t u(0,(0))=w_{\beta,j}^0,\beta\in \mathbb{N}^{(\mathbb{N})},\,j<m,\,1\leq|\beta|+j\leq m$ and $\textbf{w}^0= (w_{\beta,j}^0)_{\beta\in \mathbb{N}^{(\mathbb{N})},\,j<m,\,1\leq|\beta|+j\leq m}.$ One can see that $f$ is a function on $\mathbb{R}\times \mathbb{R}^{\infty}\times \mathbb{R}\times \mathbb{R}^{\infty}$ which is also a countable Cartesian product of $\mathbb{R}$. Therefore, we may identify $\mathbb{R}\times \mathbb{R}^{\infty}\times \mathbb{R}\times \mathbb{R}^{\infty}$ with $\mathbb{R}^{\infty}$. We suppose that $f$ is analytic near $(0,(0),u(0,(0)), \partial^{\beta}_x\partial^{j}_t u(0,(0)) )$ with the monomial expansion $f\big(t,(x_i),u, \partial^{\beta}_x\partial^{j}_t u \big)=\sum_{\alpha\in\mathbb{N}^{(\mathbb{N})}}C_{\alpha}\big(t,(x_i),u-u_0, \partial^{\beta}_x\partial^{j}_t u-w_{\beta,j}^0 \big)^{\alpha}$ and let $F(t,\textbf{x},u, \textbf{w})\triangleq\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}|C_{\alpha}| (t,\textbf{x},u, \textbf{w})^{\alpha},$ where $\textbf{w}= (w_{\beta,j})_{\beta\in \mathbb{N}^{(\mathbb{N})},\,1\leq |\beta|+j\leq m,\,j<m}\in \mathbb{R}^{\infty}$ and $\textbf{x}= (x_i)\in \mathbb{R}^{\infty}.$ We also suppose that for each $0\leq k\leq m-1$, $\phi_k$ is analytic near $(0,(0))$ with monomial expansion $\phi_k(\textbf{x})=\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}C_{\alpha,k}\textbf{x}^{\alpha}$ and let $\Phi_k (t,\textbf{x},u, \textbf{w})\triangleq\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}|C_{\alpha,k}|\textbf{x}^{\alpha}.$ By means of the majorant method, we can show the following Cauchy-Kowalevski type theorem of infinitely many variables: \begin{theorem}\label{Infinite Dimensional Cauchy-Kowalevski Theorem} Suppose $1\leq p<\infty$ and that the monomial expansions of $\Phi_k$, $0\leq k\leq m-1$ and $F$ near $(0,(0))$ are absolutely convergent at a point near $\infty$ in the topology $\mathcal {T}^p$. Then the Cauchy problem (\ref{one order quasi-linear partial differential equation}) admits a locally analytic solution (near $(0)$), which is unique in the class of analytic functions under the topology $\mathscr{T}_{p'}$ where $p'$ is the usual H\"older conjugate of $p$. Furthermore, the solution $u$ is Fr\'{e}chet differentiable with respect to $\ell^{p'}$ in a neighborhood of $(0)$ in the topology $\mathcal {T}^{p'}$ and the corresponding Fr\'{e}chet derivative $Du$ is continuous. \end{theorem} \begin{example} (\textbf{The Schr\"{o}dinger operator of infinitely many number of particles}) This example is from \cite{Bere86}. Suppose that $\{a_k\}_{k=1}^{\infty}$ is a sequence of nonnegative real numbers. The following operator \begin{eqnarray}\label{operator in QFT} (Lu)(\textbf{x})\triangleq-\frac{1}{2}\sum_{k=1}^{\infty}a_k\bigg(\frac{\partial^2 u(\textbf{x})}{\partial x_k^2}-2x_k\frac{\partial u(\textbf{x})}{\partial x_k}\bigg),\ \ \textbf{x}=(x_k)\in \mathbb{R}^{\infty}, \end{eqnarray} for $u\in \mathcal {P}(\mathbb{R}^{\infty})$ which is the space of cylindrical polynomials on $\mathbb{R}^{\infty}$, i.e., polynomials depending only on finitely many variables. Note that in quantum mechanics the operator $N_k\triangleq-\frac{1}{2}a_k\Big(\frac{\partial^2 }{\partial x_k^2}-2x_k\frac{\partial }{\partial x_k}\Big)$ is an operator of energy of one-dimensional harmonic oscillator with unit mass and frequency in the space of $L^2\Big(\mathbb{R},\frac{e^{-x_k^2}}{\sqrt{\pi}}\,\mathrm{d}x_k\Big)$. 
Thus the operator (\ref{operator in QFT}) describes a system consisting of infinitely many noninteracting oscillators with frequencies $a_k\geq 0$, $k\in \mathbb{N}$. If $a_1=1$ and we let $t=x_1$, then the equation $Lu=0$ is equivalent to the following one: $$\frac{\partial^2 u(t,\textbf{x})}{\partial t^2}=2t\frac{\partial u(t,\textbf{x})}{\partial t}-\sum_{k=2}^{\infty}a_k\Big(\frac{\partial^2 u(t,\textbf{x})}{\partial x_k^2}-2x_k\frac{\partial u(t,\textbf{x})}{\partial x_k}\Big), $$ which is of the form (\ref{one order quasi-linear partial differential equation}). One can easily see that the above equation satisfies the assumption in Theorem \ref{Infinite Dimensional Cauchy-Kowalevski Theorem} if and only if there exist $(b_k),(c_k)\in \mathbb{R}^{\infty}$ such that $\sum_{k=2}^{\infty}\big(\frac{1}{|b_k|^p}+\frac{1}{|c_k|^p}\big)<\infty $ and $\sum_{k=2}^{\infty}( |a_{k}b_k|+2|a_{k}c_k|)<\infty.$ For example, let $p=2$ and $a_k=\frac{1}{k^3},\,b_k=c_k=k,\,k=2,3,\cdots.$ Then $\sum_{k=2}^{\infty}\big(\frac{1}{|b_k|^2}+\frac{1}{|c_k|^2}\big)=\sum_{k=2}^{\infty}\frac{2}{k^2}<\infty$ and $\sum_{k=2}^{\infty}( |a_{k}b_k|+2|a_{k}c_k|)=\sum_{k=2}^{\infty}\frac{3}{k^2}<\infty.$ \end{example} One can find similar examples, such as the Hamilton-Jacobi equation in infinite dimensions (\cite{CL}) and the Laplacian defined on $\ell^2$ by means of Malliavin derivatives (\cite{Gro}). Now, let us consider the Cauchy problem for the following first order linear homogeneous partial differential equation of infinitely many variables: \begin{equation}\label{one order-zx} \left\{ \begin{array}{ll} \displaystyle\partial_t u(t,\textbf{x}) -\sum_{i=1}^{\infty}a_{i}(t,\textbf{x}) \partial_{ x_i} u(t,\textbf{x}) -b(t,\textbf{x}) u(t,\textbf{x})= 0,\\[2mm] \displaystyle u(t,\textbf{x})\mid_{t=0}=\phi(\textbf{x}). \end{array} \right. \end{equation} Here $t\in \mathbb{R}$, $\textbf{x}=(x_i)_{i=1}^{\infty}\in \mathbb{R}^{\infty}$, the unknown $u$ is a real-valued function depending on $t$ and $\textbf{x}$, and the data $a_i(t,\textbf{x})$, $i\in\mathbb{N}$, $b(t,\textbf{x})$ and $\phi$ are analytic near $(0)$. Let \begin{equation}\label{def of G} G(t,\textbf{x}, \textbf{w})\triangleq\sum_{i=1}^{\infty}a_i(t,\textbf{x})w_i+w_0b(t,\textbf{x}),\quad \textbf{w}= (w_{ j})_{j=0}^{\infty}\in \mathbb{R}^{\infty}. \end{equation} By modifying the proof of Theorem \ref{Infinite Dimensional Cauchy-Kowalevski Theorem}, we can show the following result. \begin{corollary}\label{corollary 20} Suppose $1\leq p<\infty$, the monomial expansion of $G$ near $(0)$ is absolutely convergent at a point near $\infty$ in the topology $\mathcal {T}^{p}$, and $D_\phi^{(0)}=\mathbb{R}^{\infty}$. Then there exists $r\in (0,\infty)$, independent of $\phi$, such that the equation (\ref{one order-zx}) admits locally an analytic solution $u$ near $(0)$ with $D_u^{(0)}\supset B_r^{p'}$. Furthermore, $u$ is Fr\'{e}chet differentiable with respect to $\ell^{p'}$ in $B_r^{p'}$, and the corresponding Fr\'{e}chet derivative $Du$ is continuous. \end{corollary} \section{Holmgren Type Theorem of Infinitely Many Variables} Denote by $\Xi$ the set of real-valued functions, defined locally near $(0)$ in $\mathbb{R}^{\infty}$, which are Fr\'{e}chet differentiable with respect to $\ell^2$ and whose Fr\'{e}chet derivatives are locally continuous near $(0)$ in the $\mathscr{T}_2$ topology. In this section, we shall establish the following Holmgren type theorem of infinitely many variables.
\begin{theorem}\label{Infinite Dimensional Holmgren Type Theorem} Suppose the monomial expansion of $G$ (defined by (\ref{def of G})) near $(0)$ is absolutely convergent at a point near $\infty$ in the topology $\mathcal {T}^{\frac{1}{2}}$ and $\phi\in \Xi$. Then the solution to (\ref{one order-zx}) is locally unique in the class $\Xi$. \end{theorem} In the rest, we shall give a sketch of the proof of Theorem \ref{Infinite Dimensional Holmgren Type Theorem}. It suffices to show that $U\in \Xi$ must be $0$ provided that $U$ solves (\ref{one order-zx}) with $\phi(\cdot)=0$. Without loss of generality, we assume that the data $a_i(t,\textbf{x})\equiv a_i(\textbf{x})$ ($i=1,2,\cdots$) and $b(t,\textbf{x})\equiv b(\textbf{x})$, i.e., they are independent of $t$. By our assumption, for each $i\in \mathbb{N}$, $a_{i}(\textbf{x})$ ({\it resp.} $b(\textbf{x})$) has a monomial expansion near $(0)$: $a_{i}(\textbf{x})=\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}a_{i,\alpha}\textbf{x}^{\alpha}$ ({\it resp.} $b(\textbf{x})=\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}a_{0,\alpha}\textbf{x}^{\alpha}$). Then, there exist $\rho_0>0$ and $0<s_j,t_i<1,\,0\leq i<\infty,\, 1\leq j<\infty$ such that $\sum_{j=1}^{\infty} s_j^{\frac{1}{2}}+\sum_{i=0}^{\infty}t_i^{\frac{1}{2}} =1$ and \begin{equation}\label{equation 1133} \sum_{i=0}^{\infty} \left[\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}|a_{i,\alpha}| (\frac{\rho_0}{s_j} )^{\alpha} \right]\frac{\rho_0}{t_i}<\infty. \end{equation} Now we need to introduce a suitable transformation of variables. In a neighborhood of $(0)$ in $\mathscr{T}_2$, put $$t^{'}\triangleq t+\sum_{i=1}^{\infty} x_i^2,\quad x_i^{'}\triangleq x_i,\,\,i=1,2,\cdots, \quad\widetilde{U}(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\triangleq U\big(t^{'}-\sum_{i=1}^{\infty} (x_i^{'})^2,x_1^{'},\cdots,x_i^{'},\cdots\big). $$ Then $\widetilde{U}\in\Xi$, and \begin{eqnarray}\label{transformation 1} \bigg(1-2\sum_{i=1}^{\infty}x_i^{'}a_{i}(\textbf{x}^{'})\bigg)\partial_{t'} \widetilde{U} =\sum_{i=1}^{\infty}a_{i}(\textbf{x}^{'})\partial_{x_i^{'}} \widetilde{U}+b(\textbf{x}^{'})\widetilde{U}, \end{eqnarray} where $\textbf{x}^{'}=(x_i^{'})\in\ell^2$. Clearly, $|2\sum_{i=1}^{\infty}x_i^{'}a_{i}(\textbf{x}^{'})|<1$ in a neighborhood of $(0)$ in $\mathcal {T}_2$. 
Thus (\ref{transformation 1}) can be written as $\partial_{t'} \widetilde{U} =\sum_{i=1}^{\infty}\widetilde{a}_{i}(\textbf{x}^{'})\partial_{x_i^{'}} \widetilde{U}+\widetilde{b}(\textbf{x}^{'})\widetilde{U},$ where $\widetilde{a}_{i}(\textbf{x}^{'})=\frac{a_{i}(\textbf{x}^{'})}{1-2\sum_{i=1}^{\infty}x_i^{'}a_{i}(\textbf{x}^{'})},1\,\,\leq i<\infty $ and $\widetilde{b}(\textbf{x}^{'})=\frac{b(\textbf{x}^{'})}{1-2\sum_{i=1}^{\infty}x_i^{'}a_{i}(\textbf{x}^{'})}.$ Now let $A_{0 } = t_0^{\frac{1}{4}},\,\,A_{i } =\max\Big\{t_i^{\frac{1}{4}},s_i^{\frac{1}{4}}\Big\},\,\,1\leq i<\infty.$ Then, $\sum_{i=0}^{\infty}A_{i}^2 < \sum_{j=1}^{\infty} s_j^{\frac{1}{2}}+\sum_{i=0}^{\infty}t_i^{\frac{1}{2}} =1$ and by (\ref{equation 1133}) there exists some $\rho_1\in(0,\rho_0)$ such that $\sum_{i=0}^{\infty}\Big[\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}|a_{i,\alpha}|\big(\frac{\rho_1}{A_j^4}\big)_{j\in \mathbb{N}}^{\alpha}\Big]\frac{\rho_1}{A_i^4}<\frac{1}{2}.$ Write the monomial expansions of $ \widetilde{b}(\textbf{x}^{'})$ and $ \widetilde{a}_{i}(\textbf{x}^{'}),\,1\leq i<\infty$ near $(0)$ respectively as $\widetilde{b}(\textbf{x}^{'})=\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}a_{0,\alpha}^{'}\textbf{x}^{'\alpha}$ and $\widetilde{a}_{i}(\textbf{x}^{'})=\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}a_{i,\alpha}^{'}\textbf{x}^{'\alpha},\,\,1\leq i<\infty.$ It is easily seen that \begin{eqnarray}\label{convergence 1} \sum_{i=0}^{\infty}\Bigg[\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}|a_{i,\alpha}^{'}|\bigg(\frac{\rho_1}{A_j^4}\bigg)_{j\in \mathbb{N}}^{\alpha}\Bigg]\frac{\rho_1}{A_i^4}<\infty. \end{eqnarray} Formally write $$ \begin{array}{ll} \displaystyle G_1[\widetilde{U}]\triangleq\partial_{t'} \widetilde{U} -\sum_{i=1}^{\infty}\widetilde{a}_{i}(\textbf{x}^{'})\partial_{x_i^{'}} \widetilde{U}-\widetilde{b}(\textbf{x}^{'})\widetilde{U},\\[2mm] \displaystyle G_2[W]\triangleq-\partial_{t'} W +\sum_{i=1}^{\infty}\partial_{x_i^{'}}[\widetilde{a}_{i}(\textbf{x}^{'})W]+ \Big [- \widetilde{b}(\textbf{x}^{'}) + \frac{t^{'}}{A_0^3}-\sum_{i=1}^{\infty} \frac{x_i^{'}\widetilde{a}_{i}(\textbf{x}^{'})}{A_i^3}\Big]W, \end{array} $$ where the function $W$ will be defined later. For a sufficient small positive number $\lambda$, we put $$H_{\lambda}= \{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}:\sum_{i=1}^{\infty}(x_i^{'})^2< t^{'}<\lambda \}.$$ Also, denote the boundary of $H_{\lambda}$ by $l_{\lambda}\cup I_{\lambda}\cup k_{\lambda}$ where $$ \begin{array}{ll} \displaystyle l_{\lambda}=\{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}: t^{'}=\lambda, \sum_{i=1}^{\infty}(x_i^{'})^2<\lambda\},\\[2mm] \displaystyle I_{\lambda}=\{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}: t^{'}= \lambda,\sum_{i=1}^{\infty}(x_i^{'})^2= t^{'}\},\\[2mm] \displaystyle k_{\lambda}=\{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}: t^{'}< \lambda,\sum_{i=1}^{\infty}(x_i^{'})^2= t^{'}\}. \end{array} $$ From the definition one can see that $H_{\lambda}$ is an open subset of $\ell^2$ and $\widetilde{U}\mid_{I_{\lambda}\cup k_{\lambda}}=0$. 
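Indeed (a one-line verification added for the reader's convenience), on $k_{\lambda}\cup I_{\lambda}$ one has $t^{'}=\sum_{i=1}^{\infty}(x_i^{'})^2$, so the original variables satisfy $t=t^{'}-\sum_{i=1}^{\infty} (x_i^{'})^2=0$, and therefore $\widetilde{U}(t^{'},\textbf{x}^{'})=U(0,\textbf{x}^{'})=\phi(\textbf{x}^{'})=0$ there.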
Let \begin{eqnarray}\label{def of F} F(t^{'},\textbf{x}^{'}) \triangleq\bigg(\frac{t^{'}}{A_0},-\frac{\widetilde{a}_{1}(\textbf{x}^{'}) }{A_1 },\cdots,-\frac{\widetilde{a}_{i}(\textbf{x}^{'}) }{A_i },\cdots\bigg), \qquad t^{'}\in\mathbb{R}, \;\textbf{x}^{'}=(x_i^{'})\in\ell^2, \end{eqnarray} $H\triangleq\Big\{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}: \big(\frac{t^{'}}{A_0 }\big)^2+ \sum_{i=1}^{\infty}\big(\frac{x_i^{'}}{A_i }\big)^2<\infty\Big\}$ and $B\triangleq \{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}: ( t^{'} )^2+ \sum_{i=1}^{\infty} ( x_i^{'} )^2<\infty \}.$ We can view $B$ as $\mathbb{R}\times \ell^2$, and we will use this convention later. Since $\sum_{i=0}^{\infty} A_i^2<\infty,$ the natural inclusion map $i$ from $H$ into $B$ makes the triple $(i,H,B)$ an abstract Wiener space. Since $H^*$ can be identified with $H$, the space $i^{*}B^{*}$ can be identified with a subset of $H$, which will still be denoted by $B^{*}$. Precisely, $B^{*}\triangleq \{h\in H: \,\text{ there exists }C_h\in (0,+\infty)\text{ such that }|(h,g)_{H}|\leq C_h||ig||_{B}\text{ for any }g\in H \}.$ A simple computation shows that $B^{*}= \Big\{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots): \frac{(t^{'})^2}{A_0^4 }+ \sum_{i=1}^{\infty} \frac{(x_i^{'})^2}{A_i^4 } <\infty \Big\}.$ In order to apply Theorem \ref{Divergence Theorem}, we need the following result: \begin{proposition}\label{3 cond F satisfy} For a sufficiently small $\lambda>0$ the following assertions hold: \begin{itemize} \item[(a)] $F$ is uniformly bounded in the $B^*$-norm, i.e., $\sup_{(t^{'},\textbf{x}^{'})\in H_{\lambda}}\sum_{i=1}^{\infty} \Big[\frac{\widetilde{a}_{i}(\textbf{x}^{'}) }{A_i^3} \Big]^2 <\infty;$ \item[(b)] $F$ maps $H_{\lambda}\cup l_{\lambda}\cup k_{\lambda}\cup I_{\lambda}$ into a bounded subset of $H$, i.e., $\sup_{(t^{'},\textbf{x}^{'})\in H_{\lambda}\cup l_{\lambda}\cup k_{\lambda}\cup I_{\lambda}}\sum_{i=1}^{\infty} \Big[\frac{\widetilde{a}_{i}(\textbf{x}^{'}) }{A_i^2} \Big]^2 <\infty;$ \item[(c)] $F$ is $H$-Fr\'{e}chet differentiable, $DF$ is a trace class operator, and the trace norm $||DF||_1$ is $p$-integrable on $ H_{\lambda}$. \end{itemize} \end{proposition} It is easy to see that both $l_{\lambda}$ and $k_{\lambda}$ are differentiable surfaces in $B$; hence, by \cite[Remark 2]{Goo}, both $l_{\lambda}$ and $k_{\lambda}$ are $H$-$C^1$ surfaces in the abstract Wiener space $(i,H,B)$. Denote by $p$ ({\it resp.} $\sigma$) the corresponding Wiener measure on $B$ ({\it resp.} the normal surface measure given in Theorem \ref{theorem 1 in Goodman DT}) with parameters $x=0$ and $t=1$. We may show the following result (which means that, for any $\lambda>0$, $\sigma$ is a finite measure): \begin{lemma}\label{small surface measure} For any $\lambda>0$, it holds that $\sigma(l_{\lambda}\cup k_{\lambda}\cup I_{\lambda})<\infty$. \end{lemma} By Lemma \ref{small surface measure} and the second conclusion in Proposition \ref{3 cond F satisfy}, it follows that $|F|_H$ is $\sigma$-integrable on $l_{\lambda}\cup k_{\lambda}\cup I_{\lambda}$. Note that the Divergence Theorem, i.e., Theorem \ref{Divergence Theorem}, requires the boundary to be a ``smooth" surface, while the boundary of $H_{\lambda}$ is the union of two ``smooth" surfaces.
However, similar arguments in \cite{Goo} can be modified to show that the Divergence Theorem also holds in this following case: \begin{theorem}\label{Divergence theorem coro} Assume that ${\cal F}: H_{\lambda}\cup l_{\lambda}\cup k_{\lambda}\cup I_{\lambda}\rightarrow H$ is a continuous function with finite divergence on $H_{\lambda}$ and that ${\cal F}$ is uniformly bounded with respect to the $B^*$-norm on $H_{\lambda}$. If the function $|{\cal F}(\cdot)|$ is integrable with respect to the normal surface measure $\sigma( \cdot)$ on $l_{\lambda}\cup k_{\lambda}\cup I_{\lambda}$, and the trace class norm of $D{\cal F}$ is $p$-integrable on $H_{\lambda}$, then \begin{eqnarray*} \int_{H_{\lambda}}\left[({\rm div }\;{\cal F})(y)-\langle {\cal F}(y),y\rangle\right]\,p(\mathrm{d}y)=\int_{l_{\lambda}\cup k_{\lambda}\cup I_{\lambda}}\langle {\cal F}(y),{\bf n}(y)\rangle\,\sigma(\mathrm{d}y). \end{eqnarray*} \end{theorem} By Proposition \ref{3 cond F satisfy} and Lemma \ref{small surface measure}, we conclude that $H_{\lambda}$ and the function $F$ defined by (\ref{def of F}) satisfies the assumptions in Theorem \ref{Divergence theorem coro}. If $W $ is Fr\`{e}chet differentiable respect to $B$ with continuous Fr\`{e}chet derivatives, then $F_0=W \widetilde{U} F$ satisfies the assumptions in Theorem \ref{Divergence theorem coro}. Clearly, $\text{div}(W \widetilde{U} F)=A_0\partial_{t'} \Big[\frac{ W\widetilde{U}}{A_0} \Big] -\sum_{i=1}^{\infty}A_i\partial_{x_i^{'}}\Big[\frac{\widetilde{a}_{i} W\widetilde{U}}{A_i} \Big]$ and $\langle W \widetilde{U} F(y),y \rangle= \frac{W\widetilde{U}\cdot t^{'}}{A_0^3}-\sum_{i=1}^{\infty}\frac{\widetilde{a}_{i}(\textbf{x}^{'})W\widetilde{U}\cdot x_i^{'}}{A_i^3},$ where $y=(t^{'}, \textbf{x}^{'})=(t^{'},(x_i^{'})_{i=1}^{\infty} )\in H$. Apply Theorem \ref{Divergence theorem coro}, we have \begin{eqnarray} \int_{H_{\lambda}}(WG_1[\widetilde{U}]- \widetilde{U}G_2[W])\,p(\mathrm{d}y) &=&\frac{\lambda}{A_0^2} \int_{l_{\lambda}}W\widetilde{U} \,\sigma(\mathrm{d}y)\label{formula 20}. \end{eqnarray} Let $b'(\textbf{x}^{'})\triangleq \Big[ \sum_{i=1}^{\infty}\partial_{x_i^{'}}\widetilde{a}_{i}(\textbf{x}^{'})- \widetilde{b}(\textbf{x}^{'}) + \frac{t^{'}}{A_0^3}-\sum_{i=1}^{\infty} \frac{x_i^{'}\widetilde{a}_{i}(\textbf{x}^{'})}{A_i^3}\Big].$ Then $G_2[W]=-\partial_{t'} W +\sum_{i=1}^{\infty}\widetilde{a}_{i}(\textbf{x}^{'})\partial_{x_i^{'}}W+b'(\textbf{x}^{'})W.$ Let $G' (\textbf{x}', \textbf{w} )\triangleq\sum_{i=1}^{\infty}\widetilde{a}_{i}(\textbf{x}')w_i+w_0b(\textbf{x}'),$ where $\textbf{w}= (w_{ j})_{j=0}^{\infty}\in \mathbb{R}^{\infty}$ and $\textbf{x}= (x_i)_{i=1}^{\infty}\in \mathbb{R}^{\infty}.$ Recall that $\sum_{i=0}^{\infty}\Big[\sum_{\alpha\in \mathbb{N}^{(\mathbb{N})}}|a_{i,\alpha}^{'}|\big(\frac{\rho_1}{A_j^4}\big)^{\alpha}\Big]\frac{\rho_1}{A_i^4}<\infty.$ Hence, for any $\rho\in (0,\rho_1)$, the monomial expansion of $G'$ is absolutely convergent at $\big(\big(\frac{\rho}{A_j}\big)_{j=0}^{\infty},\big(\frac{\rho}{A_i}\big)_{i=1}^{\infty}\big),$ which is a point near $\infty$ in the topology $\mathcal {T}^{2}$ by the fact that $\sum_{i=0}^{\infty}A_{i}^2 < 1.$ Therefore, the equation $G_2[W]=0$ satisfies the assumptions in Corollary \ref{corollary 20}. Hence, for any $n\geq 0$ and $k_1,\cdots,k_n\in \mathbb{N}$, there is an analytic solution $W$ to $G_2[W]=0$ and $W\mid_{t^{'}=\lambda}=(x_1^{'})^{k_1}\cdots(x_n^{'})^{k_n}$. 
From Corollary \ref{corollary 20}, it follows that the monomial expansion of $W$ is absolutely convergent in a neighborhood of $(0)$ containing $H_{\lambda}$ for sufficiently small $\lambda$, that $W$ is Fr\'{e}chet differentiable with respect to $B$, and that its Fr\'{e}chet derivative is continuous in this neighborhood. Therefore, for any sufficiently small $\lambda>0$, applying (\ref{formula 20}), we arrive at \begin{eqnarray}\label{polynomial is dense} \int_{l_{\lambda}}(x_1^{'})^{k_1}\cdots(x_n^{'})^{k_n}\widetilde{U} \,\sigma(\mathrm{d}y)=0, \end{eqnarray} for any $n\geq 0$ and $k_1,\cdots,k_n\in \mathbb{N}$. Let $L_{\lambda}= \{(t^{'},x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}: t^{'}=\lambda, \sum_{i=1}^{\infty}(x_i^{'})^2<\infty \}$. Then the surface measure $\sigma^{'}$ on $L_{\lambda}$ is identified with the Gaussian measure with parameters $x=0$ and $t=1$ on the Hilbert space $H_0=\{(x_1^{'},\cdots,x_i^{'},\cdots)\in \mathbb{R}^{\infty}: \sum_{i=1}^{\infty}(x_i^{'})^2<\infty \}.$ Note that (\ref{polynomial is dense}) is equivalent to \begin{eqnarray*} \int_{L_{\lambda}}(x_1^{'})^{k_1}\cdots(x_n^{'})^{k_n}\chi_{l_{\lambda}}\widetilde{U} \,\sigma^{'}(\mathrm{d}y)=0. \end{eqnarray*} We also need the following density result. \begin{lemma}\label{dense lemma} $\hbox{\rm span$\,$}\{(x_1^{'})^{k_1}\cdots(x_n^{'})^{k_n}:n\geq 0,\, k_1,\cdots,k_n\in \mathbb{N}\}$ is dense in $L^2(L_{\lambda},\sigma^{'})$. \end{lemma} One may check that $L^2(l_{\lambda},\sigma)=\chi_{l_{\lambda}}\cdot L^2(L_{\lambda},\sigma^{'})$. From Lemma \ref{dense lemma} and the continuity of $\widetilde{U}$, we deduce that $\widetilde{U}\equiv 0$ on $l_{\lambda}$ for any sufficiently small $\lambda>0$, and hence $\widetilde{U}\mid_{k_{\lambda}}=0$. Therefore, $\widetilde{U}\equiv 0$ on $H_{\lambda}$ for any sufficiently small $\lambda>0$. This implies that $U\equiv 0$ in a neighborhood of $(0)$ in the $\mathscr{T}_2$ topology restricted to the half space $t>0$. In the same way we can prove that $U\equiv 0$ in a neighborhood of $(0)$ in the $\mathscr{T}_2$ topology restricted to the half space $t<0$. Finally, by the continuity of $U$, we conclude that $U\equiv 0$ in a neighborhood of $(0)$ in the $\mathscr{T}_2$ topology. \Acknowledgements{ The research of the first author is supported by NSFC under grant 11501384; the research of the second author is supported by NSFC under grant 11221101, the NSFC-CNRS Joint Research Project under grant 11711530142 and the PCSIRT under grant IRT$\_$16R53 from the Chinese Education Ministry.} \end{document}
\begin{document} \title{Scheduling step-deteriorating jobs to minimize the total weighted tardiness on a single machine} \author{Peng Guo \and Wenming Cheng \and Yi Wang } \institute{P. Guo \at School of Mechanical Engineering, Southwest Jiaotong University, Chengdu, China \\ \email{[email protected]} \and W. Cheng \at School of Mechanical Engineering, Southwest Jiaotong University, Chengdu, China \\ \email{[email protected]} \and Y. Wang \at Corresponding author. Department of Mathematics, Auburn University at Montgomery, AL, USA \\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} This paper addresses the scheduling problem of minimizing the total weighted tardiness on a single machine with step-deteriorating jobs. Under the deterioration assumption, the job processing times are modeled by step functions of the job starting times and of pre-specified job deteriorating dates. The introduction of step-deteriorating jobs makes the single machine total weighted tardiness problem even more intractable. The computational complexity of the problem under consideration had not been determined. In this study, it is first proved to be strongly NP-hard. Then a mixed integer programming model is derived for solving problem instances optimally. In order to tackle large-sized problems, seven dispatching heuristic procedures are developed to obtain near-optimal solutions. Meanwhile, the solutions delivered by the proposed heuristics are further improved by a pair-wise swap movement. Computational results are presented to reveal the performance of all proposed approaches. \end{abstract} \section{Introduction} Scheduling to meet job due dates has received increasing attention from managers and researchers since the Just-In-Time concept was introduced in manufacturing facilities. While meeting due dates is only a qualitative measure of performance, it usually implies that time dependent penalties are assessed on late jobs but that no benefits are derived from completing jobs early \citep{Baker2009book}. Consequently, quantitative scheduling objectives associated with tardiness are naturally highlighted in different manufacturing environments. The total tardiness and the total weighted tardiness are two of the most common performance measures in tardiness related scheduling problems \citep{Sen2003static}. In traditional scheduling theory, these two kinds of single machine scheduling problems have been extensively studied in the literature under the assumption that the processing time of a job is known in advance and remains constant throughout the entire operation process \citep{Koulamas2010TTU,Vepsalainen1987TWTU,Crauwels1998TWTU,Cheng2005TWTU,Bilge2007TWTU,Wang2009TWTU}. However, there are many practical situations in which any delay or waiting in starting to process a job may increase its processing time. Examples can be found in financial management, steel production, equipment maintenance, medical treatment and so on. Such problems are generally known as {\em time dependent scheduling problems} \citep{Gawiejnowicz2008TDS_Book}. Among various time dependent scheduling models, there is one case in which job processing times are formulated by piecewise defined functions. In the literature, jobs with piecewise defined processing times are mainly modeled by {\em piecewise linear deterioration} and/or {\em step-deterioration}. In this paper, the single machine total weighted tardiness scheduling problem with step-deteriorating jobs (SMTWTSD) is addressed.
For a step-deteriorating job, if it fails to be processed prior to a pre-specified threshold, its processing time will be increased by an extra time; otherwise, it only needs the basic processing time. The corresponding single machine scheduling problem was considered firstly by \citet{Sundararaghavan1994_SPSD} for minimizing the sum of the weighted completion times. The single machine scheduling problem of makespan minimization under step-deterioration effect was investigated in \citet{Mosheiov1995_SPSD}, and some simple heuristics were introduced and extended to the setting of multi-machine and multi-step deterioration. \citet{Jeng2004SPSD} proved that the problem proposed by \citet{Mosheiov1995_SPSD} is NP-hard in the ordinary sense based on a pseudo-polynomial time dynamic programming algorithm, and introduced two dominance rules and a lower bound to develop a branch and bound algorithm for deriving optimal solutions. \citet{Cheng2001SPSD} showed the total completion time problem with identical job deteriorating dates is NP-complete and introduced a pseudo-polynomial algorithm for the makespan problem. Moreover, \citet{Jeng2005SPSD} proposed a branch and bound algorithm incorporating a lower bound and two elimination rules for the total completion time problem. Owing to the intractability of the problem, \citet{He2009SPSD} developed a branch and bound algorithm and a weight combination search algorithm to derive the optimal and near-optimal solutions. The problem was extended by \citet{Cheng2012SPSD_VNS} to the case with parallel machines, where a {\em variable neighborhood search algorithm} was proposed to solve the parallel machine scheduling problem. Furthermore, \citet{Layegh2009SPSD} studied the total weighted completion time scheduling problem on a single machine under job step deterioration assumption, and proposed a {\em memetic algorithm} with the average percentage error of 2\%. Alternatively, batch scheduling with step-deteriorating effect also attracts the attention of some researchers, for example, referring to \citet{Barketau2008SPSD_batch}, \citet{Leung2008SPSD_batch} and \citet{Mor2012SPSD_batch}. With regard to the piecewise linear deteriorating model, the single machine scheduling problem with makespan minimization was firstly addressed in \citep{Kunnathur1990SPLD}. Following this line of research, successive research works, such as \citet{Kubiak1998SPLD}, \citet{Cheng2003SPLD}, \citet{Moslehi2010SPLD}, spurred in the literature. However, these objective functions with due dates were rarely studied under step-deterioration model in the literature. \citet{Guo2014GVNSSPSDTT} recently found that the total tardiness problem in a single machine is NP-hard, and introduced two heuristic algorithms. As the more general case, the single machine total weighted tardiness problem (SMTWT) has been extensively studied, and several dispatching heuristics were also proposed for obtaining the near-optimal solutions \citep{Potts1991SPTWT,Kanet2004SPTWT}. \emph{To the best of our knowledge, there is no research that discusses the} SMTWTSD {\em problem}. Although the SMTWT problem has been proved to be strongly NP-hard \citep{Lawler1977pp,Lenstra1977complexity} and \citep[p.~58]{Pin2012}, the complexity of the SMTWTSD problem under consideration is still open. Therefore, this paper gives \emph{the proof of strong NP-hardness of the} {SMTWTSD} {\em problem}. \emph{Several efficient dispatching heuristics are presented and analyzed as well}. 
These dispatching heuristics can deliver a feasible schedule within reasonable computation time for large-sized problem instances. Moreover, they can be used to generate an initial solution of the quality required by a meta-heuristic. The remainder of this paper is organized as follows. Section \ref{sec:Sec_PDF} provides a definition of the single machine total weighted tardiness problem with step-deteriorating jobs and formulates the problem as a mixed integer programming model. The complexity results for the problem considered in this paper and for some extended problems are discussed in Section \ref{sec:Sec_CS}. In Section \ref{sec:Sec_HA}, seven improved heuristic procedures are described. Section \ref{sec:Sec_CE} presents the computational results and analyzes the performance of the proposed heuristics. Finally, Section \ref{sec:Sec_CN} summarizes the findings of this paper. \section{Problem description and formulation \label{sec:Sec_PDF}} The problem considered in this paper is to schedule $n$ jobs of the set $\mathbb{N}_n:=\{1,2,\ldots, n\}$ on a single machine so as to minimize the {\em total weighted tardiness}, where the jobs have {\em stepwise} processing times. Specifically, assume that all jobs are ready at time zero and the machine is available at all times. Moreover, no preemption is allowed. In addition, the machine can handle only one job at a time and is not allowed to stay idle until the last job assigned to it has been processed and finished. For each job $j\in \mathbb{N}_n$, there is a {\em basic processing time} $a_{j}$, a {\em due date} $d_{j}$ and a given {\em deteriorating threshold}, also called the {\em deteriorating date}, $h_{j}$. If the {\em starting time} $s_{j}$ of job $j\in \mathbb{N}_n$ is less than or equal to the given threshold $h_{j}$, then job $j$ only requires the basic processing time $a_{j}$. Otherwise, an extra processing time (penalty) $b_{j}$ is incurred. Thus, the {\em actual processing time} $p_{j}$ of job $j$ is defined by the step function: $p_{j}=a_{j}$ if $s_{j}\leqslant h_{j}$; $p_{j}=a_{j}+b_{j}$, otherwise. Without loss of generality, the four parameters $a_{j}$, $b_{j}$, $d_{j}$ and $h_{j}$ are assumed to be positive integers. Let $\pi=(\pi_{1},\ldots,\pi_{n})$ be a sequence describing the processing order of the jobs in $\mathbb{N}_n$, where $\pi_{k}$, $k\in \mathbb{N}_n$, indicates the job in position $k$. The {\em tardiness} $T_{j}$ of job $j$ in a schedule $\pi$ can be calculated by $$T_{j} =\max\left\{ 0,\,s_{j} +p_{j} -d_{j}\right\} .$$ The objective is to find a schedule such that the {\em total weighted tardiness} $\sum w_jT_{j}$ is minimized, where the weights $w_j$, $j\in \mathbb{N}_n$, are positive constants. Using the standard three-field notation \citep{Graham1979optimization}, the problem studied here can be denoted by $1|p_{j}=a_{j}$ or $a_{j}+b_{j},h_{j}|\sum w_jT_{j}$. Based on the above description, we formulate the problem as a 0-1 integer programming model. First, the decision variable $y_{ij}$, $i,j \in \mathbb{N}_n$, is defined such that $y_{ij}$ is 1 if job $i$ precedes job $j$ (not necessarily immediately) on the single machine, and 0 otherwise.
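To make the objective concrete before stating the model, the following short sketch (an illustration only, not part of the formulation; the function and variable names are ours) evaluates $\sum_{j\in \mathbb{N}_n}w_jT_j$ for a fixed job sequence, applying the step-deterioration rule and the assumption that jobs are processed consecutively from time zero.
\begin{verbatim}
def total_weighted_tardiness(sequence, a, b, d, h, w):
    """Evaluate sum_j w_j * T_j for a fixed job sequence (0-indexed).
    a: basic processing times, b: deterioration penalties,
    d: due dates, h: deteriorating dates, w: tardiness weights."""
    time, objective = 0, 0
    for j in sequence:
        p = a[j] if time <= h[j] else a[j] + b[j]  # step processing time
        time += p                                  # completion time of job j
        objective += w[j] * max(0, time - d[j])    # weighted tardiness
    return objective

# A tiny instance with arbitrary data:
print(total_weighted_tardiness([2, 0, 1],
                               a=[3, 4, 2], b=[2, 1, 5],
                               d=[4, 9, 3], h=[5, 2, 6], w=[1, 2, 1]))
\end{verbatim}
A sequence minimizing this value over all permutations of $\mathbb{N}_n$ is exactly an optimal solution of the model stated next.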
The formulation of the problem is given below.\\ \emph{Objective function}: \begin{equation} \mathrm{minimize} \quad Z:=\sum_{j\in \mathbb{N}_n}w_jT_{j}\label{eq:2.1} \end{equation} \emph{Subject to}: \begin{align} p_{j}=\begin{cases} a_{j}, & \quad s_{j}\leqslant h_{j}\\ a_{j}+b_{j}, & \quad \mathrm{otherwise}, \end{cases} &\quad \begin{gathered}\forall j\in \mathbb{N}_n\end{gathered} \label{eq:2.2}\\ s_{i}+p_{i}\leqslant s_{j}+M(1-y_{ij}), &\quad \forall i,j\in \mathbb{N}_n,i< j \label{eq:2.3}\\ s_j+p_j \leqslant s_i+My_{ij}, &\quad \forall i,j \in\mathbb{N}_n, i<j \label{eq:2.4}\\ s_{j}+p_{j}-d_{j} \leqslant T_{j}, &\quad\forall j\in \mathbb{N}_n\label{eq:2.5}\\ y_{ij} \in \left\{ 0,1\right\}, &\quad\forall i,j\in \mathbb{N}_n,i\neq j\label{eq:2.6}\\ s_{j},T_{j} \geqslant 0, &\quad\forall j\in \mathbb{N}_n, \label{eq:2.7} \end{align} where $M$ is a large positive constant such that $M\rightarrow\infty$ as $n\to \infty$. For example, $M$ may be chosen as $M:=\max_{j\in \mathbb{N}_n}\left\{ d_{j}\right\} +\sum_{j\in \mathbb{N}_n}(a_{j}+b_{j})$. In the above mathematical model, equation (\ref{eq:2.1}) represents the objective of minimizing the total weighted tardiness. Constraint (\ref{eq:2.2}) defines the processing time of each job with the consideration of step-deteriorating effect. Constraint (\ref{eq:2.3}) and \eqref{eq:2.4} determine the starting time $s_{j}$ of job $j$ with respect to the decision variables $y_{ij}$. Constraint (\ref{eq:2.5}) calculates the tardiness of job $j$ with the completion time and the due date. Finally, Constraints (\ref{eq:2.6}) and (\ref{eq:2.7}) define the boundary values of variables $y_{ij}$, $s_j$, $T_j$, for $i,j\in \mathbb{N}_n$. \section{Complexity results \label{sec:Sec_CS}} This section discusses computational complexity of the problem under consideration. It is generally known that once a problem is proved to be strongly NP-hard, it is impossible to find a polynomial time algorithm or a pseudo polynomial time algorithm to produce its optimal solution. Then heuristic algorithms are presented to obtain near optimal solutions for such a problem. Subsequently, the studied single machine scheduling problem is proved to be strongly NP-hard. \begin{theorem}\label{thm:NPhard} The problem $1|p_{j}=a_{j}$ or $a_{j}+b_{j},h_{j}|\sum w_jT_{j}$ is strongly NP-hard. \end{theorem} The proof of the theorem is based on reducing 3-PARTITION to the problem $1|p_{j}=a_{j}$ or $a_{j}+b_{j},h_{j}|\sum w_jT_{j}$. For a positive integer $t\in \mathbb{N}$, let integers $a_j\in \mathbb{N}, j\in \mathbb{N}_{3t}$, and $b\in \mathbb{N}$ such that $\frac{b}{4}<a_j<\frac{b}{2}$, $j\in \mathbb{N}_{3t}$ and $\sum_{j\in \mathbb{N}_{3t}}a_j=tb $. The reduction is based on the following transformation. Let the number of jobs $n=4t-1$. Let the partition jobs be such that for $j\in \mathbb{N}_{3t}$, \\ \begin{equation} d_j=0, \quad h_j=tb+2(t-1), \quad b_j=1, \quad w_j=a_j-\frac{b}{4} \end{equation} and \begin{eqnarray} p_{j}&=&\begin{cases} a_{j}, & s_{j}\leqslant h_{j},\\ a_{j}+b_{j}, & \mathrm{otherwise}. \end{cases} \label{eq:partition} \end{eqnarray} Introduce the notation $\mathbb{N}_{m,n}:=\{m,m+1, \ldots, n\}$. Let the $t-1$ enforcer jobs be such that for $j\in \mathbb{N}_{3t+1,4t-1}$, \begin{eqnarray} d_j=(j-3t)(b+1), \quad h_j=d_j-1, \quad b_j=1, \nonumber \\ w_j=(b+b^2)(t^2-t) \end{eqnarray} and \begin{eqnarray} p_{j}&=&\begin{cases} 1, & s_{j}\leqslant h_{j},\\ 1+b_{j}, & \mathrm{otherwise}. 
\end{cases} \label{eq:enforcer} \end{eqnarray} The first $3t$ partition jobs are due at time $0$, and the last $t-1$ enforcer jobs are due at $b+1$, $2(b+1)$, $\ldots$, $(t-1)(b+1)$, respectively. The deterioration dates of all partition jobs are set at $h_j=tb+2(t-1)$, $j\in \mathbb{N}_{3t}$, while the deterioration dates of all enforcer jobs are set at $d_j-1$, for $j\in \mathbb{N}_{3t+1,4t-1}$, that is, one unit before their respective due dates. We introduce the set notation $\mathbb{S}_i:=\{{3i-2}, {3i-1},{3i}\}$, $i\in \mathbb{N}_t$, for the sets of three partition jobs. In general, assume that for $i\in \mathbb{N}_t$, \begin{equation}\label{eqn:sumtime} a_{3i-2}+a_{3i-1}+a_{3i}=b+\delta_i, \end{equation} where, due to $\frac{b}{4}<a_j<\frac{b}{2}$, $j\in \mathbb{N}_{3t}$, we must have \begin{equation}\label{eqn:deltai} -\frac{b}{4}<\delta_i < \frac{b}{2}. \end{equation} The quantity $\delta_i$, $i\in \mathbb{N}_t$, measures the deviation from $b$ of the sum of the basic processing times of the jobs in $\mathbb{S}_i$. Since $\sum_{j\in \mathbb{N}_{3t}}a_j=tb $, we deduce that \begin{equation}\label{eqn:delta} \sum_{i\in \mathbb{N}_t} \delta_i=0. \end{equation} For convenience we define $\Delta_0=0$ and, for $i\in \mathbb{N}_t$, $$ \Delta_i=\sum_{j\in \mathbb{N}_i} \delta_j. $$ That is, $\Delta_i$ is the cumulative deviation from $ib$ of the cumulative basic processing times of the partition jobs up to the $i$-th set $\mathbb{S}_i$ of three partition jobs. Note that, by \eqref{eqn:delta}, $\Delta_t=0$. The following observation is useful. \begin{lemma} \begin{equation}\label{eqn:conversion} \sum_{j\in \mathbb{N}_t} j \delta_j=-\sum_{j\in \mathbb{N}_{t-1}}\Delta_j= -\sum_{j\in \mathbb{N}_{t}}\Delta_j. \end{equation} \end{lemma} \begin{proof} The proof is a direct calculation. The second equality is obvious because $\Delta_t=0$. For the first equality, we have \begin{eqnarray*} \sum_{j\in \mathbb{N}_t} j \delta_j &= & \delta_1+2\delta_2+\ldots + t\delta_t\\ &=& (\delta_1+\delta_2+\ldots+\delta_t)+(\delta_2+\delta_3+\ldots+\delta_t)\\ && +\ldots + (\delta_{t-1}+\delta_t)+\delta_t\\ &=& 0-\delta_1-(\delta_1+\delta_2)-\ldots \\ &&- (\delta_1+\delta_2+\ldots+\delta_{t-2}) \\ && -(\delta_1+\delta_2+\ldots+\delta_{t-1}), \end{eqnarray*} where in the last equality we have used the equality $\sum_{j\in \mathbb{N}_t}\delta_j=0$. Using the definition of $\Delta_j$, $j\in \mathbb{N}_t$, we thus obtain \begin{eqnarray*} \sum_{j\in \mathbb{N}_t} j \delta_j&=&-\Delta_1-\Delta_2-\ldots-\Delta_{t-2}-\Delta_{t-1}\\ &=&-\sum_{j\in \mathbb{N}_{t-1}}\Delta_j. \end{eqnarray*} The lemma is proved. \end{proof} Let $\lceil x \rceil$ be the least integer greater than or equal to $x$, and for $k\in \mathbb{N}$, define the index set $J_k:=\{ 3\left\lceil \frac{k}{3} \right\rceil -2, 3\left\lceil \frac{k}{3} \right\rceil -1, \ldots, k\}$. We are ready to prove the theorem by showing that there exists a schedule with objective value \begin{equation}\label{eqn:opt_value} z^*=\frac{(t^2-t)}{8}(b+b^2)+\sum_{k\in \mathbb{N}_{3t}}\sum_{j\in J_k }a_j\left(a_k-\frac{b}{4}\right) \end{equation} if and only if there exists a solution to the 3-PARTITION problem. A small computational illustration of the construction is given below, before the formal proof.
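The following sketch is purely illustrative and is not part of the reduction itself: it merely generates, from a given 3-PARTITION instance, the job data specified above (the representation of a job as a Python dictionary is ours, and exact rational arithmetic is used because the weights $w_j=a_j-\frac{b}{4}$ need not be integers).
\begin{verbatim}
from fractions import Fraction

def reduction_instance(a_list, b):
    """Build the n = 4t-1 jobs of the reduction from a 3-PARTITION
    instance: a_list holds the 3t integers a_j with b/4 < a_j < b/2
    and sum(a_list) = t*b."""
    t = len(a_list) // 3
    jobs = []
    for a_j in a_list:                      # the 3t partition jobs
        jobs.append({'a': a_j, 'b': 1, 'd': 0,
                     'h': t * b + 2 * (t - 1),
                     'w': Fraction(a_j) - Fraction(b, 4)})
    for k in range(1, t):                   # the t-1 enforcer jobs
        d_k = k * (b + 1)
        jobs.append({'a': 1, 'b': 1, 'd': d_k, 'h': d_k - 1,
                     'w': (b + b * b) * (t * t - t)})
    return jobs
\end{verbatim}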
\begin{proof}[Proof of Theorem \ref{thm:NPhard}] If the 3-PARTITION problem has a solution, the corresponding $3t$ jobs thus can be partitioned into $t$ subsets $\mathbb{S}_i$, $i\in \mathbb{N}_t$, of three jobs each, with the sum of the three processing times in each subset equal to $b$, that is, $\delta_i=0$, for $i\in \mathbb{N}_t$, and the last $t-1$ jobs are processed exactly during the intervals $$ [b,b+1], [2 b+1, 2b+2], \ldots, [(t-1)b+t-2, (t-1)b+t-1]. $$ In this scenario, all the $t-1$ enforcer jobs are not tardy and all the $3t$ partition jobs are tardy. The tardiness of each partition job equals to its completion time. Moreover, no job is deteriorated. Let $c_j$, $j\in \mathbb{N}_{4t}$ be the completion time of each job. Let $S_i$ be the starting time of the first job in each set $\mathbb{S}_i$, $i\in \mathbb{N}_t$. When no job is deteriorated, that is all jobs are processed with basic processing time, the completion time of each job $j$ in $\mathbb{S}_i$, $i\in \mathbb{N}_t$ is given by $$ c_j=S_i+\sum_{k\in J_j }a_k. $$ The total weighted tardiness of the 3 jobs in $\mathbb{S}_i$, $i\in \mathbb{N}_t$, equals to \begin{eqnarray} z_i&=& \sum_{j\in \mathbb{S}_i} c_jw_j \nonumber\\ &=&S_i\sum_{j\in \mathbb{S}_i} w_j + \sum_{j\in \mathbb{S}_i} \sum_{k\in J_j }a_kw_j \label{eqn:zk}\\ &=&[(i-1)b+(i-1)]\frac{b}{4}+\sum_{j\in \mathbb{S}_i}\sum_{k\in J_j }a_k\left(a_j-\frac{b}{4}\right) , \nonumber \end{eqnarray} since $S_i=(i-1)b+(i-1)$ and $\sum_{j\in \mathbb{S}_i} w_j=\frac{b}{4}$, for each $i\in \mathbb{N}_t$. Thus the total weighted tardiness is $ \sum_{i\in \mathbb{N}_t}z_i$ which sums to $z^*$ given by \eqref{eqn:opt_value}. Conversely, if such a 3-partition is not possible, there is at least one $\delta_i\ne 0$, $i\in \mathbb{N}_t$. We next argue that this must imply $\Delta z:=z-z^*>0$. If $\Delta_i>0$, $i\in \mathbb{N}_{t-1}$, then the $i$-th enforcer job will deteriorate to be processed in the extended time and entail a weighted tardiness. Introduce the notation $x_+:=\max\{x, 0\}$, and let $\mathcal{N}(i)$, $i\in \mathbb{N}_{t-1}$ denote the number of times the value $\Delta_j>0$, $j\in \mathbb{N}_i$. For convenience, we define $\mathcal{N}(0)=0$. The value $\mathcal{N}(i)$, with $0\le \mathcal{N}(i)\le i$, coincides with the cumulative extended processing time of all the enforcer jobs up to the $i$-th enforcer job. This implies that the weighted tardiness of the $i$-th enforcer job is given by $$\left(\left(\Delta_i \right)_+ +\mathcal{N}(i)\right)(b+b^2)(t^2-t).$$ On the other hand, our configuration of the deterioration dates for all the partition jobs ensures that no partition jobs can deteriorate. Therefore, by recalling equation \eqref{eqn:sumtime}, we have in this case that for $i\in \mathbb{N}_t$, $$ S_i= (i-1)b+(i-1)+\Delta_{i-1} +\mathcal{N}(i-1) $$ and $$\sum_{j\in \mathbb{S}_i} w_j=\frac{b}{4}+\delta_i.$$ In view of equation \eqref{eqn:zk} and the weighted tardiness of the enforcer job, we deduce that the change in the objective function value caused by $\delta_i$, $i\in \mathbb{N}_t$, is given by \begin{eqnarray} \Delta z_i&=&\left( (i-1)b+(i-1)+\Delta_{i-1} +\mathcal{N}(i-1) \right)\delta_i \nonumber\\ && +\frac{b}{4}\left(\Delta_{i-1}+\mathcal{N}(i-1)\right) \nonumber \\ & & +\left(\left(\Delta_i \right)_+ +\mathcal{N}(i)\right)(b+b^2)(t^2-t). 
\end{eqnarray} Therefore the total change in the objective function value is given by \begin{eqnarray*} \Delta z &=& \sum_{i\in \mathbb{N}_t} \Delta z_i \\ &\ge & (1+b)\sum_{i\in \mathbb{N}_t} (i-1)\delta_i +\sum_{i\in \mathbb{N}_t} \Delta_{i-1}\left(\delta_i+\frac{b}{4}\right) \\ && + (b+b^2)(t^2-t) \sum_{i\in \mathbb{N}_t}\mathcal{N}(i) \\ &= & (1+b)\sum_{i\in \mathbb{N}_t} i \delta_i +\sum_{i\in \mathbb{N}_{t-1}} \Delta_{i}\left(\delta_{i+1}+\frac{b}{4}\right) \\ && + (b+b^2)(t^2-t) \sum_{i\in \mathbb{N}_{t-1}} \mathcal{N}(i). \end{eqnarray*} In the last equality again we have used $\Delta_t=\sum_{j\in \mathbb{N}_t}\delta_j=0$. Now by equation \eqref{eqn:conversion}, we continue to have \begin{eqnarray} \Delta z &\ge& (1+b)\sum_{i\in \mathbb{N}_{t-1}} (-\Delta_i) +\sum_{i\in \mathbb{N}_{t-1}} \Delta_{i}\left(\delta_{i+1}+\frac{b}{4}\right) \nonumber \\ &&+ (b+b^2)(t^2-t) \sum_{i\in \mathbb{N}_{t-1}} \mathcal{N}(i) \nonumber \\ &=& \sum_{i\in \mathbb{N}_{t-1}} (-\Delta_i) \left( 1+\frac{3}{4}b-\delta_{i+1} \right)+ \nonumber\\ && (b+b^2)(t^2-t) \sum_{i\in \mathbb{N}_{t-1}} \mathcal{N}(i). \nonumber\\ \label{eqn:deltaz} \end{eqnarray} If there is a $\Delta_i>0$ for some $i\in \mathbb{N}_t$, then $\sum_{i\in \mathbb{N}_{t-1}} \mathcal{N}(i) \ge 1$. Recalling that $\frac{b}{2}>\delta_i>-\frac{b}{4}$, for all $i\in \Delta_t$, we have \begin{eqnarray*} \sum_{i\in \mathbb{N}_{t-1}} (-\Delta_i) \left( 1+\frac{3}{4}b-\delta_{i+1} \right) &>& (1+b)\sum_{i\in \mathbb{N}_{t-1}} (-\Delta_i)\\ &>& (1+b)\sum_{i\in \mathbb{N}_{t-1}} \left(-i\frac{b}{2} \right)\\ &=& -\frac{1}{4}b(1+b)t(t-1). \end{eqnarray*} Thus in this case, $\Delta z>0$ by equation \eqref{eqn:deltaz}. On the other hand, if all $\Delta_i\le 0$, and at least one $\delta_i\ne 0$, $i\in \mathbb{N}_t$, then there must be at least one $\Delta_i<0$. Thus in this case \begin{eqnarray*} \Delta z &=& \sum_{i\in \mathbb{N}_{t-1}} (-\Delta_i) \left( 1+\frac{3}{4}b-\delta_{i+1} \right)\\ &>&0 \end{eqnarray*} because $\left( 1+\frac{3}{4}b-\delta_{i+1} \right)>0 $ for $i\in \mathbb{N}_t$. Therefore $\Delta z=0$ if and only if $\delta_i=0$, $i\in \mathbb{N}_t$. We have proved the theorem. \end{proof} {\bf Remark}: with small changes, the above proof also shows that the scheduling problem of total weighted tardiness (without deteriorating jobs), represented by $1|$ $|\sum w_jT_{j}$, is strongly NP-hard, of which proofs might be found in \citep{Lawler1977pp,Lenstra1977complexity} and \citep[p.~58]{Pin2012}. After a reflection, we mention that the following three problems also are strongly NP-hard: 1) the problem of total weighted tardiness with deterioration jobs and job release times $r_j$; 2) the problem of total tardiness with deterioration jobs and job release times $r_j$; 3) the problem of maximum lateness of a single machine scheduling problem with job release time $r_j$. Recall that the maximum lateness is defined to be $$ L_{\max}=\max\{ L_j: L_j=c_j-d_j, j\in \mathbb{N}_n\}. $$ Their proofs can be obtained by slightly modifying our previous proof. In fact, the assumption of release times makes the proof a lot of easier. We summarize these results in the following corollaries. \begin{coro} The problem $1|p_{j}=a_{j}$ or \\$a_{j}+b_{j},h_j,r_{j}|\sum w_jT_{j}$ is strongly NP-hard. \end{coro} \begin{coro} The problem $1|p_{j}=a_{j}$ or \\$a_{j}+b_{j},h_j,r_{j}|\sum T_{j}$ is strongly NP-hard. \end{coro} \begin{coro} The problem $1|p_{j}=a_{j}$ or \\$a_{j}+b_{j},h_j,r_{j}|L_{\max}$ is strongly NP-hard. 
\end{coro} \section{Heuristic algorithms\label{sec:Sec_HA}} Since the problem under study has been proved strongly NP-hard in the previous section, dispatching heuristics are developed to solve it; this section discusses the details of these heuristics. Dispatching heuristics gradually form the whole schedule by adding, one at a time, the job with the best priority index among the unscheduled jobs. Several existing heuristics have been designed for the problem without deteriorating jobs. Since the processing times of the jobs considered in our problem depend on their starting times, these dispatching heuristics are modified to take this characteristic of the problem into account. Before introducing these procedures, the following notation is defined. Let ${N}^s$ denote the {\em ordered set} of already scheduled jobs and ${N}^u$ the {\em unordered set} of unscheduled jobs. Hence $\mathbb{N}_n= {N}^s\cup {N}^u $ when the ordering of $N^s$ is disregarded. Let $t^s$ denote the current time, i.e., the maximum completion time of the scheduled jobs in the set ${N}^s$; we shall call $t^s$ the current time of the sequence $N^s$. At the same time, $t^s$ is the starting time of the next selected job. For each unscheduled job in ${N}^u$, its actual processing time is calculated from its deteriorating date and the current time $t^s$. For most scheduling problems with due dates, the \emph{earliest due date} (EDD) rule is simple and efficient. In this paper, the rule is adopted to obtain a schedule by sorting jobs in non-decreasing order of their due dates. Similarly, the \emph{weighted shortest processing time} (WSPT) rule schedules jobs in non-increasing order of $w_j/p_j$. At each iteration, the actual processing times of the unscheduled jobs in ${N}^u$ need to be recalculated, because the processing time of a job is variable when the step-deteriorating effect is considered. Even when all jobs are necessarily tardy, the WSPT rule does not guarantee an optimal schedule. Moreover, the weighted EDD (WEDD) rule introduced by \citet{Kanet2004SPTWT} sequences jobs in non-decreasing order of WEDD, where \begin{equation}\label{eq:eq_wedd} \mathrm{WEDD}_j=d_j/w_j. \end{equation} The apparent tardiness cost (ATC) heuristic introduced by \citet{Vepsalainen1987TWTU} was developed for the total weighted tardiness problem in which the processing time of each job is constant and known in advance. It showed relatively good performance compared with the EDD and the WSPT rules. The job with the largest ATC value is selected to be processed next. The ATC index of job $j$ is determined by the following equation: \begin{equation}\label{eq:eq_atc} \mathrm{ATC}_j=\frac{w_j}{p_j}\exp\left(-\max\{0, d_j-p_j-t^s\}/(\kappa\rho)\right), \end{equation} where $\kappa$ is a ``look-ahead" parameter, usually between 0.5 and 4.5, and $\rho$ is the average processing time of the remaining unscheduled jobs. The processing time of an already scheduled job $j\in N^s$ may be $a_j$ or $a_j+b_j$, depending on whether it is deteriorated. The current time $t^s$ is then given by the completion time of the last job in $N^s$. The parameter $\rho$ is calculated by averaging the processing times of the unscheduled jobs in $N^u$, assuming that their starting time is $t^s$. Based on the cost over time (COVERT) rule \citep{Fisher1976Covert} and the apparent urgency (AU) rule \citep{Morton1984AU}, \citet{Alidaee1996AR} developed a class of heuristics named COVERT-AU for the standard single machine weighted tardiness scheduling problem.
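All of these rules, as well as the COVERT-AU and WMDD rules described next, share the same dispatching skeleton: at each iteration the actual processing times are recomputed from the current time $t^s$ and the job with the best priority index is appended to $N^s$. A minimal sketch of this skeleton (in Python, with the ATC index of equation \eqref{eq:eq_atc} as an example priority; the function and attribute names are ours) is given below.
\begin{verbatim}
import math

def actual_time(job, t_s):
    """Actual processing time of a job that starts at the current time t_s."""
    return job["a"] if t_s <= job["h"] else job["a"] + job["b"]

def atc_priority(job, t_s, kappa, rho):
    """ATC index of equation (eq:eq_atc), using the actual processing time."""
    p = actual_time(job, t_s)
    return job["w"] / p * math.exp(-max(0, job["d"] - p - t_s) / (kappa * rho))

def dispatch(jobs, priority=atc_priority, kappa=0.5):
    """Generic dispatching loop: N^s is built one job at a time, N^u shrinks."""
    unscheduled = list(range(len(jobs)))            # N^u
    sequence, t_s, total_wt = [], 0, 0              # N^s, current time, objective
    while unscheduled:
        rho = sum(actual_time(jobs[j], t_s) for j in unscheduled) / len(unscheduled)
        j = max(unscheduled, key=lambda i: priority(jobs[i], t_s, kappa, rho))
        t_s += actual_time(jobs[j], t_s)            # completion time of job j
        total_wt += jobs[j]["w"] * max(0, t_s - jobs[j]["d"])
        sequence.append(j)
        unscheduled.remove(j)
    return sequence, total_wt
\end{verbatim}
For rules such as the EDD, WEDD or WMDD, the job with the smallest index would be selected instead of the largest one.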
The COVERT-AU heuristic combines the two well known methods, i.e. COVERT and AU. At the time $t^s$, the COVERT-AU chooses the next job with the largest priority index $\mathrm{CA}_j$ calculated by the equation \begin{equation}\label{eq:eq_ca} \mathrm{CA}_j=\frac{w_j}{p_j}(\kappa\rho/(\kappa\rho+\max\{0, d_j-p_j-t^s\})). \end{equation} {For the convenience of description, the heuristic with equation \eqref{eq:eq_ca} is denoted by CA in this paper hereafter. The \emph{weighted modified due date} (WMDD) rule was developed by \citet{Kanet2004SPTWT} based on modified due date (MDD). In this method, the jobs are processed in non-decreasing order of WMDD. The WMDD is calculated by the equation \begin{equation}\label{eq:eq_wmdd} \mathrm{WMDD}_j= \frac{1}{w_j} (\max\{p_j, (d_j-t^s)\}). \end{equation} Note that when all job weights are equal, the WMDD is equal to the MDD. The above heuristics need to recalculate the processing time of the next job for obtaining the priority index except for the EDD and the WEDD. The procedures to recalculate the processing time of the next job is significantly different from those for the problem without step-deterioration. In order to illustrate how these heuristics work, the detailed steps of the WMDD, as an example, are shown in Algorithm \ref{alg:alg_wmdd}. For other heuristics, the only difference is the calculation of the priority index. \begin{algorithm}[htp] \begin{algorithmic}[1] \caption{\label{alg:alg_wmdd} The WMDD} \STATE Input the initial data of a given instance; \STATE Set ${N}^s=[\;]$, $t^s=0$ and ${N}^u=\{1, 2, \ldots, n\}$; \STATE Set $k=1$; \REPEAT \STATE Compute the processing time of each job in the set ${N}^u$ based on the current time $t^s$; \STATE Calculate the WMDD value of each job in $N^u$ according to equation \eqref{eq:eq_wmdd}; \STATE Choose job $j$ from the set ${N}^u$ with the smallest value $\mathrm{WMDD}_j$ to be scheduled in the $k$th position; \STATE Update the tardiness of job $j$: $T_j=\max\{t^s+p_j-d_j, 0\}$, and ${N}^s=[{N}^s, j]$; \STATE Delete job $j$ from ${N}^u$; \STATE $k=k+1$; \UNTIL {the set ${N}^u$ is empty} \STATE Calculate the total weighted tardiness of the obtained sequence ${N}^s$. \end{algorithmic} \end{algorithm} A very effective and simple combination search heuristic for minimizing total tardiness was proposed by \citet{Guo2014GVNSSPSDTT}. The heuristic is called "Simple Weighted Search Procedure" (SWSP) and works as follows: a combined value of parameters $a_j$, $p_j$ and $h_j$ for job $j$ is calculated as \begin{equation}\label{eq:eq_swsp} m_j=\gamma_1d_j+\gamma_2p_j+\gamma_3h_j, \end{equation} where $\gamma_1$, $\gamma_2$ and $\gamma_3$ are three positive constants. In the SWSP, jobs are sequenced in non-decreasing order of $m$-value. To accommodate the case of the weighted tardiness, equation \eqref{eq:eq_swsp} is modified to compute a priority index $m'$ for a job $j$ calculated by the equation \begin{equation}\label{eq:eq_mswsp} m'_j=\frac{1}{w_j} (\gamma_1d_j+\gamma_2p_j+\gamma_3h_j). \end{equation} The modified method is called "Modified Simple Weighted Search Procedure" (MSWSP). {In equation \eqref{eq:eq_mswsp}, the values of $\gamma_1$, $\gamma_2$ and $\gamma_3$ are determined by using a dynamically updating strategy. The updating strategy is similar to that proposed by \citet{Guo2014GVNSSPSDTT}. Specifically, parameter $\gamma_1$ is linearly increased by 0.1 at each iteration and its range is varied from $\gamma_{1\min}$ to $\gamma_{1\max}$. 
In this study, $\gamma_{1\min}=0.2$ and $\gamma_{1\max}=0.9$ are chosen based on preliminary tests by using randomly generated instances. The parameter $\gamma_2$ adopts a similar approach with $\gamma_{1\min}$ and $\gamma_{1\max}$ replaced by $\gamma_{2\min}=0.1$ and $\gamma_{2\max}=0.7$, respectively. Once the values of parameters $\gamma_1$ and $\gamma_2$ are determined, the parameter $\gamma_3:=\max\{1-\gamma_1-\gamma_2, 0.1\}$.} The detailed steps of the MSWSP is shown in Algorithm \ref{alg:MSWSP}. \begin{algorithm}[htp] \begin{algorithmic}[1] \caption{\label{alg:MSWSP} The MSWSP } \STATE Input the initial data of a given instance; \STATE Set $c_{[0]}=0$, $N^{u}=\{1,\cdots,n\}$ and ${N}^s=[\;]$; \STATE Generate the entire set $\Omega$ of possible triples of weights. For each triple $(\omega_{1}, \omega_{2}, \omega_{3})\in \Omega$, perform the following steps; \STATE Choose job $i$ with minimal due date to be scheduled in the first position; \STATE $\:$$c_{[1]}=c_{[0]}+a_{i}$, ${N}^s=\left[ {N}^s,\; i\right]$; \STATE $\:$delete job $i$ from $N^{u}$; \STATE $\:$set $k=2$; \REPEAT \STATE choose job $j$ from $N^{u}$ with the smallest value $m'_j$ to be scheduled in the $k$th position; \IF { $c_{[k-1]}>h_{j}$} \STATE $c_{[k]}=c_{[k-1]}+a_{j}+b_{j}$; \ELSE \STATE $c_{[k]}=c_{[k-1]}+a_{j}$; \ENDIF \STATE ${N}^s=\left[ {N}^s,\;j\right]$; \STATE delete job $j$ from $N^{u}$; \UNTIL {the set $N^{u}$ is empty}. \STATE Calculate the total weighted tardiness of the obtained schedule $ {N}^s$; \STATE Output the finial solution $ {N}^s$. \end{algorithmic} \end{algorithm} In order to further improve the quality of the near-optimal solutions, a pairwise swap movement (PS) is incorporated into these heuristics. Let $N^s$ be the sequence output by a heuristic. A swap operation chooses a pair of jobs in positions $i$ and $j$, $1\le i,j\le n$, from the sequence $ {N}^s$, and exchanges their positions. Denote the new sequence by $N^s_{ji}$. Subsequently, the total weighted tardiness of the sequence $N^s_{ji}$ is calculated. If the new sequence $N^s_{ji}$ is better with a smaller tardiness than the incumbent one, the incumbent one is replaced by the new sequence. The swap operation is repeated for any combination of two indices $i$ and $j$, where $1\leqslant i < j \leqslant n$. Thus the size of the pairwise swap movement (PS) is $n(n-1)/2$. In the following, a heuristic with the PS movement is denoted by the symbol $\mathrm{ALG}_{\mathrm{PS}}$, where ALG is one of the above mentioned heuristic algorithms. For example, $\mathrm{EDD}_{\mathrm{PS}}$ represents that the earliest due date rule is applied first , then the solution obtained by the EDD is further improved by the PS movement. \section{Computational experiments\label{sec:Sec_CE}} In this section, the computational experiments and results are presented to analyze the performance of the above dispatching heuristics. Firstly, randomly generated test problem instances varying from small to large sizes are described. Next, preliminary experiments are carried out to determine appropriate values for the parameters used in some of the heuristics. Then, a comparative analysis of all seven dispatching heuristics is performed. Furthermore, the results of the best method are compared with optimal solutions delivered by ILOG CPLEX 12.5 for small-sized problem instances. All heuristics were coded in MATLAB 2010 and run on a personal computer with Pentium Dual-Core E5300 2.6 GHz processor and 2 GB of RAM. 
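As an aside on how the optimal baselines can be obtained, the sketch below shows one possible encoding of model \eqref{eq:2.1}--\eqref{eq:2.7}. It is written against the open-source PuLP modelling interface as a stand-in for CPLEX and is illustrative only; in particular, the binary variable $u_j$ together with the one-sided big-$M$ constraint is our linearization of the conditional constraint \eqref{eq:2.2}, which is valid here because enlarging a processing time can never decrease the total weighted tardiness, so an optimal solution sets $u_j=1$ only when job $j$ actually starts after its deteriorating date.
\begin{verbatim}
import pulp

def build_model(jobs, M=None):
    """Big-M model of (2.1)-(2.7); jobs[j] has keys a, b, d, h, w."""
    n = len(jobs)
    if M is None:
        M = max(j["d"] for j in jobs) + sum(j["a"] + j["b"] for j in jobs)
    prob = pulp.LpProblem("step_deteriorating_TWT", pulp.LpMinimize)
    s = [pulp.LpVariable(f"s_{j}", lowBound=0) for j in range(n)]
    T = [pulp.LpVariable(f"T_{j}", lowBound=0) for j in range(n)]
    u = [pulp.LpVariable(f"u_{j}", cat="Binary") for j in range(n)]  # 1 if deteriorated
    y = {(i, j): pulp.LpVariable(f"y_{i}_{j}", cat="Binary")
         for i in range(n) for j in range(n) if i < j}
    p = [jobs[j]["a"] + jobs[j]["b"] * u[j] for j in range(n)]       # affine p_j
    prob += pulp.lpSum(jobs[j]["w"] * T[j] for j in range(n))        # objective (2.1)
    for j in range(n):
        prob += s[j] <= jobs[j]["h"] + M * u[j]                      # linearized (2.2)
        prob += s[j] + p[j] - jobs[j]["d"] <= T[j]                   # (2.5)
    for i in range(n):
        for j in range(i + 1, n):
            prob += s[i] + p[i] <= s[j] + M * (1 - y[i, j])          # (2.3)
            prob += s[j] + p[j] <= s[i] + M * y[i, j]                # (2.4)
    return prob
\end{verbatim}
The returned model can then be handed to any MILP solver supported by PuLP (for instance its default CBC backend) or exported and solved with CPLEX.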
\subsection{Experimental design\label{sec:subsec_ED}} The problem instances were generated using the method proposed by \citet{Guo2014GVNSSPSDTT} as follows. For each job $j$, a basic processing time $a_j$ was generated from the uniform distribution [1, 100], a weight $w_j$ was generated from the uniform distribution [1, 10], and a deteriorating penalty $b_j$ is generated from the uniform distribution [1, 100$\times \tau$], where $\tau=0.5$. Problem hardness is likely to depend on the value ranges of deteriorating dates and due dates. For each job $j$, a deteriorating date $h_j$ was drawn from the uniform distribution over three intervals $H_1$:=[1, \emph{A}/2], $H_2$:=[\emph{A}/2, \emph{A}] and $H_3$:=[1, \emph{A}], where $A=\sum_{j \in \mathbb{N}_n}a_j$. Meanwhile, a due date $d_j$ was generated from the uniform distribution [$C'_{\max}(1-T-R/2), C'_{\max}(1-T+R/2)$], where $C'_{\max}$ is the value of the maximum completion time obtained by scheduling the jobs in the non-decreasing order of the ratios $a_j/b_j$, $j\in \mathbb{N}_n$, $T$ is the average tardiness factor and $R$ is the relative range of due dates. Both $T$ and $R$ were set at 0.2, 0.4, 0.6, 0.8 and 1.0. Overall, 75 different combinations were generated for different $h$, $T$ and $R$. For the purpose of obtaining optimal solutions, the number of jobs in each instance was taken to be one of the two sizes of 8 and 10. For the heuristics, the number of jobs can be varied from the small sizes to the large sizes, that comprises 14 sizes including 8, 10, 15, 20, 25, 30, 40, 50, 75, 100, 250, 500, 750 and 1000. In each combination of $h$, $T$, $R$, and $n$, 10 replicates were generated and solved. Thus, there are 750 instances for each problem size, totalling 10500 problem instances, which are available from http:$ //$www.researchgate.net$/$profile$/$Peng\_Guo9. In general, the performances of a heuristic is measured by the average relative percentage deviation (RPD) of the heuristic solution values from optimal solutions value or best solution values. The average RPD value is calculated as $\frac{1}{K} \sum_{k=1}^{K}\frac{Z^k_{\mathrm{alg}}-Z^k_{\mathrm{opt}}}{Z^k_{\mathrm{opt}}}\times100$, where $K$ is the number of problem instances and $Z^k_{\mathrm{alg}}$ and $Z^k_{\mathrm{opt}}$ are the objective function value of the heuristic method and the optimal solution value for instance $k$, respectively. The objective function value delivered by a heuristics may be equal to 0 for an instance with a low tardiness factor $T$ and a high relative range $R$ of due dates. The zero objective value means that all jobs are finished on time. It is troublesome to obtain the RPD in this case, since necessarily $Z^k_{\mathrm{opt}}=0$, thus it leads to a division by 0 that is undefined. To avoid this situation, in this paper, the relative improvement versus the worst result (RIVW) used by \citet{Valente2012SMWQTSP} is adopted to evaluate the performance of a proposed heuristics. For a given instance, the RIVW for a heuristic is defined by the following way. Let $Z_{\mathrm{best}}$ and $Z_{ \mathrm{worst}}$ denote the best and worst solution values delivered by all considered heuristics in comparison, respectively. When $Z_{ \mathrm{best}}=Z_{ \mathrm{worst}}$, the RIVW value of a heuristic algorithm is set to 0. 
Otherwise, the RIVW value is calculated as $$\mathrm{RIVW}=(Z_{ \mathrm{worst}}-Z_{ \mathrm{alg}})/Z_{ \mathrm{worst}}\times 100.$$ Based on the definition of RIVW, it can be observed that the bigger the RIVW value, the better the quality of the corresponding solution. \subsection{Parameter selection\label{sec:subsec_PS}} In order to select an appropriate value for the parameter $\kappa$, preliminary tests were conducted on a separate problem set, which contains instances with 20, 50, 100, and 500 jobs. For each of these job sizes $n$, 5 replicates were produced for each combination of $n$, $h$, $T$ and $R$. Subsequently, for each problem instance, the objective function value was obtained by using all considered seven heuristics with a candidate value of $\kappa$. Then the results of all problem instances were analyzed to determine the best value of $\kappa$. The candidate values of the parameter $\kappa$ are chosen to be {0.5, 1.0, \ldots, 4.5}, which are usually used in the ATC and the CA for traditional single machine problem \citep{Vepsalainen1987TWTU,Alidaee1996AR}. The computational tests show that the solutions delivered by the ATC and the CA with $\kappa=0.5$ are relatively better compared with the results obtained by other values of $\kappa$. Thus, $\kappa$ is set to 0.5 in the sequel. \subsection{Experimental results\label{sec:subsec_CR}} The computational results of the proposed seven heuristics are listed in Table \ref{tab:tab_CR}. Specifically, this table provides the mean RIVW values for each heuristic, as well as the number of instances with the best solution ($\mathrm{Num}_{\mathrm{best}}$) found by a heuristic method from 750 instances for each problem size. Each mean RIVW value for a heuristic and a particular problem size in Table \ref{tab:tab_CR} is the average of RIVW values from the 750 instances for a given problem size. From the table, the EDD and WEDD rules are clearly outperformed by the WSPT, ATC, CA, WMDD, and MSWSP heuristics. The mean RIVW values delivered by EDD and WEDD are significantly less than that given by other heuristics. This is due to the fact that the EDD rule only considers job due dates, while the WEDD relies only on weights and due dates. In addition, the two rules do not consider the effect of step-deterioration in calculating the priority index. It is worthwhile to note that the results achieved by the WEDD is better than that given by the EDD. As far as the remaining methods, the RIVW values obtained by the WSPT and MSWSP heuristics are worse than that of other three (ATC, CA and WMDD) heuristics. But the performance of the MSWSP is better than the WSPT. This indicates that the MSWSP can produce good results for our problem, but it fails to obtain better solutions compared with the three improved methods (ATC, CA and WMDD). There is no significant difference between the results produced by the ATC, CA and WMDD heuristics. The CA procedure provides slightly higher mean relative improvement versus the worst result values. For large-sized instances, the number of the best solutions achieved by the CA is much higher than the ATC and the WMDD. Therefore, {\em the CA procedure can be deemed as the best one among the seven dispatching heuristic algorithms}. Computational times of these dispatching heuristic algorithms for each job size are listed in Table \ref{tab:tab_ctime1}. As the size of instances increases, the CPU time of all methods grows at different degrees. 
Totally, the MSWSP consumes the most time compared with other algorithms, but surprisingly its maximum CPU time is only 5.71 seconds for the intractable instance with 1000 jobs. The average CPU time of the CA which is 0.66 seconds is less than that of the MSWSP. Since the EDD and the WEDD mainly depend on the ranking index of all jobs' due dates, their computational times are less than 0.01 seconds. It was observed that the CPU time of the other five algorithms follow almost the same trend. \begin{sidewaystable*}[htp]\scriptsize \centering \caption{Computational results of the dispatching heuristic algorithms} \setlength{\tabcolsep}{5pt} \begin{tabular}{rrrrrrrrrrrrrrr} \toprule $n$ & \multicolumn{7}{c}{RIVW(\%)} & \multicolumn{7}{c}{$\mathrm{Num}_{\mathrm{best}}$} \\ \cmidrule[0.05em](r){2-8} \cmidrule[0.05em](lr){9-15} & EDD & WSPT & WEDD & ATC & CA & WMDD & MSWSP & EDD & WSPT & WEDD & ATC & CA & WMDD & MSWSP \\ \midrule 8 & 15.56 & 39.87 & 28.63 & 48.50 & 55.01 & 49.06 & 50.40 & 87 & 202 & 32 & 331 & 375 & 307 & 206 \\ 10 & 15.38 & 40.76 & 26.66 & 49.31 & 57.68 & 52.36 & 50.57 & 70 & 149 & 10 & 299 & 367 & 278 & 145 \\ 15 & 16.97 & 40.15 & 28.23 & 55.60 & 62.25 & 57.82 & 53.01 & 78 & 80 & 4 & 274 & 340 & 249 & 104 \\ 20 & 17.89 & 41.35 & 27.50 & 57.73 & 65.56 & 60.42 & 54.13 & 70 & 72 & 2 & 237 & 363 & 232 & 79 \\ 25 & 16.92 & 41.11 & 27.60 & 58.49 & 67.46 & 62.92 & 53.26 & 67 & 55 & 2 & 206 & 373 & 241 & 72 \\ 30 & 17.39 & 41.47 & 27.41 & 59.93 & 68.64 & 64.49 & 52.52 & 66 & 61 & 0 & 190 & 388 & 215 & 68 \\ 40 & 18.44 & 41.10 & 26.38 & 61.81 & 70.31 & 66.22 & 52.84 & 77 & 46 & 1 & 166 & 383 & 242 & 78 \\ 50 & 18.76 & 40.59 & 27.31 & 62.62 & 71.22 & 67.17 & 52.30 & 71 & 30 & 0 & 183 & 400 & 237 & 71 \\ 75 & 19.00 & 41.34 & 27.41 & 63.65 & 72.03 & 68.59 & 51.70 & 71 & 31 & 0 & 165 & 421 & 247 & 71 \\ 100 & 20.13 & 41.07 & 27.73 & 64.49 & 72.81 & 69.09 & 51.86 & 75 & 36 & 0 & 143 & 428 & 254 & 75 \\ 250 & 19.54 & 41.37 & 27.46 & 65.91 & 73.74 & 69.47 & 49.92 & 78 & 42 & 0 & 124 & 472 & 250 & 78 \\ 500 & 19.55 & 41.47 & 27.42 & 66.47 & 74.01 & 69.42 & 49.29 & 68 & 34 & 0 & 116 & 487 & 245 & 68 \\ 750 & 19.46 & 41.54 & 27.37 & 66.44 & 74.08 & 69.30 & 49.12 & 69 & 35 & 0 & 105 & 499 & 243 & 69 \\ 1000 & 19.50 & 41.46 & 27.32 & 66.38 & 74.13 & 69.32 & 48.92 & 67 & 32 & & 95 & 502 & 250 & 67 \\ Total & & & & & & & & 1014 & 905 & 51 & 2634 & 5798 & 3490 & 1251 \\ Avg. 
& 18.18 & 41.05 & 27.46 & 60.52 & 68.50 & 63.97 & 51.42 & 72 & 65 & 4 & 188 & 414 & 249 & 89 \\ \bottomrule \end{tabular} \label{tab:tab_CR} \end{sidewaystable*} \begin{sidewaystable*}[htp]\footnotesize \centering \caption{Computational times of dispatching heuristic algorithms} \begin{tabular}{cccccccc} \toprule $n$ & \multicolumn{7}{c}{CPU Time(s)} \\ \cmidrule[0.05em](r){2-8} & EDD & WSPT & WEDD & ATC & CA & WMDD & MSWSP \\ \midrule 8 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.01 \\ 10 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.01 \\ 15 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.02 \\ 20 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.03 \\ 25 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.04 \\ 30 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.05 \\ 40 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.06 \\ 50 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.01 & $<$0.01 & 0.08 \\ 75 & $<$0.01 & 0.01 & $<$0.01 & 0.01 & 0.02 & 0.01 & 0.14 \\ 100 & $<$0.01 & 0.02 & $<$0.01 & 0.02 & 0.05 & 0.02 & 0.20 \\ 250 & $<$0.01 & 0.12 & $<$0.01 & 0.14 & 0.30 & 0.14 & 0.65 \\ 500 & $<$0.01 & 0.47 & $<$0.01 & 0.58 & 1.19 & 0.56 & 1.81 \\ 750 & $<$0.01 & 1.11 & $<$0.01 & 1.33 & 2.72 & 1.31 & 3.48 \\ 1000 & $<$0.01 & 2.05 & $<$0.01 & 2.42 & 4.91 & 2.39 & 5.67 \\ & & & & & & & \\ Avg. & $<$0.01 & 0.27 & $<$0.01 & 0.32 & 0.66 & 0.32 & 0.88 \\ \bottomrule \end{tabular} \label{tab:tab_ctime1} \end{sidewaystable*} {Subsequently, the solutions delivered by the seven dispatching heuristics are improved by the PS movement. The seven heuristics with the PS movement are denoted by $\mathrm{EDD}_{\mathrm{PS}}$, $\mathrm{WSPT}_{\mathrm{PS}}$, $\mathrm{WEDD}_{\mathrm{PS}}$, $\mathrm{ATC}_{\mathrm{PS}}$, $\mathrm{CA}_{\mathrm{PS}}$, $\mathrm{WMDD}_{\mathrm{PS}}$ and $\mathrm{MSWSP}_{\mathrm{PS}}$, respectively. A comparison of these methods is given in Table \ref{tab:tab_cre_ps}. Again, this table lists the mean relative improvement versus the worst result (RIVW) for each algorithm and the number of instances with the best solution found by each of the seven algorithms with the PS movement.} From Table \ref{tab:tab_cre_ps}, it can be observed that the $\mathrm{CA}_{\mathrm{PS}}$ provides the best performance among these procedures. In fact, the $\mathrm{CA}_{\mathrm{PS}}$ not only gives the largest RIVW value, but also gives a better solution for most of the instances. For medium- and large-sized instances, the $\mathrm{CA}_{\mathrm{PS}}$ shows better performance in terms of the number of best solution ($\mathrm{Num}_{\mathrm{best}}$) compared with the other six methods. In particular, for the case with 1000 jobs, the $\mathrm{CA}_{\mathrm{PS}}$ gives the best solutions for 538 over the 750 instances. It is found that $\mathrm{CA}_{\mathrm{PS}}$ delivers best solutions for on average 473 out of 750 instances for all job sizes. The average RIVW values delivered by the $\mathrm{WEDD}_{\mathrm{PS}}$, $\mathrm{ATC}_{\mathrm{PS}}$, $\mathrm{CA}_{\mathrm{PS}}$, $\mathrm{WMDD}_{\mathrm{PS}}$ and $\mathrm{MSWSP}_{\mathrm{PS}}$ are more than 40\%. The RIVW value of the WSPT is only 15.86\%, significantly less than that achieved by the other six methods. Computational times of these methods are listed in Table \ref{tab:tab_tab_ct_ps}. The average computational times of the seven methods are very close, and the gap of the average CPU times between these methods is not more than one second. 
As expected, the CPU times of these algorithms increase as the number of jobs increases, but the computational times of the seven methods do not exceed 80 seconds even for the 1000-job case. {In order to further analyze the results, a one-way Analysis of Variance (ANOVA) is used to check whether the observed differences in the RIVW values of the dispatching heuristics with the PS movement are statistically significant. The $\mathrm{WSPT}_{\mathrm{PS}}$ is removed from the statistical analysis since it is clearly worse than the remaining ones. The means plot and the Fisher Least Significant Difference (LSD) intervals at the 95\% confidence level are shown in Figure \ref{fig:fig_h_ps}. If the LSD intervals of two algorithms do not overlap, the performances of the tested algorithms are statistically significantly different; otherwise, the difference between the two algorithms is not statistically significant. As can be seen, {\em $\mathrm{ATC}_{\mathrm{PS}}$, $\mathrm{CA}_{\mathrm{PS}}$ and $\mathrm{WMDD}_{\mathrm{PS}}$ are not statistically different because their confidence intervals overlap.} This observation is important since it yields a conclusion that cannot be obtained from a table of average RIVW results. Moreover, the $\mathrm{CA}_{\mathrm{PS}}$ and $\mathrm{WMDD}_{\mathrm{PS}}$ are statistically significantly better than the other three methods ($\mathrm{EDD}_{\mathrm{PS}}$, $\mathrm{WEDD}_{\mathrm{PS}}$ and $\mathrm{MSWSP}_{\mathrm{PS}}$), as their LSD intervals of the RIVW values lie above those of the other methods. However, the $\mathrm{WEDD}_{\mathrm{PS}}$ and the $\mathrm{MSWSP}_{\mathrm{PS}}$ are not statistically different due to their overlapping confidence intervals.} {To evaluate the effect of the PS movement, a comparison of the CA heuristic with the $\mathrm{CA}_{\mathrm{PS}}$ procedure is provided in Table \ref{tab:tab_CA_CA_PS}. Table \ref{tab:tab_CA_CA_PS} gives the mean relative improvements versus the worst result for the two procedures, as well as the number of instances for which the $\mathrm{CA}_{\mathrm{PS}}$ procedure performs better than ($\mathrm{Num}_{\mathrm{better}}$) or equal to ($\mathrm{Num}_{\mathrm{equal}}$) the CA. Since the CA heuristic never gives a better result than the $\mathrm{CA}_{\mathrm{PS}}$, its RIVW values in Table \ref{tab:tab_CA_CA_PS} are equal to zero for all instances. On average, the RIVW value obtained by the $\mathrm{CA}_{\mathrm{PS}}$ is 24.71\%, compared with zero for the CA.
These results show that {\em the PS movement can significantly improve the quality of solutions delivered by the CA heuristic for most of the instances.}} \begin{sidewaystable*}[htp]\scriptsize \centering \caption{Computational results of dispatching heuristic algorithms with the PS movement} \setlength{\tabcolsep}{4pt} \begin{tabular}{ccccccccccccccc} \toprule $n$ & \multicolumn{7}{c}{RIVW(\%)} & \multicolumn{7}{c}{$\mathrm{Num}_{\mathrm{best}}$} \\ \cmidrule[0.05em](r){2-8} \cmidrule[0.05em](lr){9-15} & $\mathrm{EDD}_{\mathrm{PS}}$ & $\mathrm{WSPT}_{\mathrm{PS}}$ & $\mathrm{WEDD}_{\mathrm{PS}}$ & $\mathrm{ATC}_{\mathrm{PS}}$ & $\mathrm{CA}_{\mathrm{PS}}$ & $\mathrm{WMDD}_{\mathrm{PS}}$ & $\mathrm{MSWSP}_{\mathrm{PS}}$ & $\mathrm{EDD}_{\mathrm{PS}}$ & $\mathrm{WSPT}_{\mathrm{PS}}$ & $\mathrm{WEDD}_{\mathrm{PS}}$ & $\mathrm{ATC}_{\mathrm{PS}}$ & $\mathrm{CA}_{\mathrm{PS}}$ & $\mathrm{WMDD}_{\mathrm{PS}}$ & $\mathrm{MSWSP}_{\mathrm{PS}}$ \\ \midrule 8 & 26.95 & 12.84 & 29.26 & 31.67 & 33.12 & 31.90 & 32.54 & 243 & 441 & 288 & 506 & 535 & 514 & 422 \\ 10 & 28.76 & 14.37 & 32.27 & 35.18 & 36.57 & 35.79 & 35.02 & 162 & 339 & 179 & 460 & 483 & 468 & 335 \\ 15 & 33.13 & 15.04 & 36.71 & 40.43 & 42.39 & 40.40 & 39.80 & 148 & 233 & 140 & 392 & 435 & 389 & 218 \\ 20 & 35.00 & 15.81 & 38.67 & 43.41 & 44.91 & 43.64 & 42.11 & 139 & 202 & 110 & 370 & 416 & 370 & 181 \\ 25 & 35.88 & 14.58 & 40.10 & 44.61 & 46.77 & 44.86 & 42.66 & 131 & 166 & 98 & 346 & 426 & 358 & 160 \\ 30 & 36.92 & 14.06 & 41.30 & 46.29 & 47.81 & 46.46 & 43.76 & 134 & 181 & 113 & 333 & 423 & 351 & 158 \\ 40 & 37.79 & 16.21 & 42.93 & 48.17 & 50.03 & 48.53 & 45.07 & 135 & 174 & 104 & 318 & 420 & 365 & 134 \\ 50 & 37.99 & 16.08 & 43.40 & 48.96 & 50.79 & 49.45 & 45.68 & 130 & 154 & 98 & 305 & 441 & 320 & 133 \\ 75 & 38.47 & 16.16 & 44.23 & 50.22 & 52.24 & 50.65 & 46.34 & 137 & 164 & 100 & 272 & 460 & 304 & 139 \\ 100 & 38.97 & 15.78 & 44.96 & 51.10 & 53.00 & 51.70 & 46.68 & 136 & 153 & 101 & 264 & 471 & 291 & 133 \\ 250 & 38.98 & 16.94 & 45.44 & 51.99 & 54.21 & 52.49 & 47.02 & 141 & 159 & 101 & 234 & 527 & 254 & 140 \\ 500 & 38.87 & 17.72 & 45.43 & 52.20 & 54.56 & 52.66 & 46.95 & 141 & 161 & 103 & 244 & 520 & 263 & 140 \\ 750 & 38.58 & 18.45 & 45.36 & 51.88 & 54.51 & 52.50 & 46.82 & 144 & 166 & 100 & 252 & 523 & 254 & 143 \\ 1000 & 38.37 & 18.02 & 45.20 & 51.67 & 54.52 & 52.44 & 46.64 & 147 & 166 & 105 & 240 & 538 & 252 & 146 \\ Total & & & & & & & & 2068 & 2859 & 1740 & 4536 & 6618 & 4753 & 2582 \\ Avg. 
& 36.05 & 15.86 & 41.09 & 46.27 & 48.24 & 46.68 & 43.36 & 148 & 204 & 124 & 324 & 473 & 340 & 184 \\ \bottomrule \end{tabular} \label{tab:tab_cre_ps} \end{sidewaystable*} \begin{sidewaystable*}[htbp]\footnotesize \centering \caption{Computational times of dispatching heuristic algorithms with the PS movement} \begin{tabular}{cccccccc} \toprule \multirow{2}[1]{*}{$n$} & \multicolumn{7}{c}{CPU Time(s)} \\ \cmidrule[0.05em](r){2-8} & $\mathrm{EDD}_{\mathrm{PS}}$ & $\mathrm{WSPT}_{\mathrm{PS}}$ & $\mathrm{WEDD}_{\mathrm{PS}}$ & $\mathrm{ATC}_{\mathrm{PS}}$ & $\mathrm{CA}_{\mathrm{PS}}$ & $\mathrm{WMDD}_{\mathrm{PS}}$ & $\mathrm{MSWSP}_{\mathrm{PS}}$ \\ \midrule 8 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.01 \\ 10 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.01 \\ 15 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.02 \\ 20 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.03 \\ 25 & $<$0.01 & $<$0.01 & $<$0.01 & $<$0.01 & 0.01 & $<$0.01 & 0.04 \\ 30 & $<$0.01 & 0.01 & $<$0.01 & 0.01 & 0.01 & 0.01 & 0.05 \\ 40 & 0.01 & 0.02 & 0.01 & 0.02 & 0.02 & 0.02 & 0.08 \\ 50 & 0.02 & 0.03 & 0.02 & 0.03 & 0.04 & 0.03 & 0.11 \\ 75 & 0.07 & 0.08 & 0.07 & 0.08 & 0.10 & 0.08 & 0.21 \\ 100 & 0.14 & 0.16 & 0.14 & 0.17 & 0.19 & 0.16 & 0.34 \\ 250 & 1.47 & 1.56 & 1.47 & 1.60 & 1.75 & 1.60 & 2.12 \\ 500 & 9.71 & 9.87 & 9.79 & 10.10 & 10.66 & 10.12 & 11.56 \\ 750 & 30.3 & 30.28 & 30.35 & 30.96 & 32.06 & 30.96 & 33.69 \\ 1000 & 69.43 & 68.93 & 69.61 & 70.29 & 72.10 & 70.27 & 74.75 \\ & & & & & & & \\ Avg. & 7.94 & 7.92 & 7.96 & 8.09 & 8.35 & 8.09 & 8.79 \\ \bottomrule \end{tabular} \label{tab:tab_tab_ct_ps} \end{sidewaystable*} \begin{figure*} \caption{Means plot and the LSD intervals (at the 95\% confidence level) for the different dispatching heuristic algorithms with the PS movement} \label{fig:fig_h_ps} \end{figure*} \begin{table}[htbp]\footnotesize \centering \caption{Comparison of CA with $\mathrm{CA}_{\mathrm{PS}}$} \begin{tabular}{ccccc} \toprule \multirow{2}[3]{*}{$n$} & \multicolumn{2}{c}{RIVW(\%)} & \multicolumn{2}{c}{$\mathrm{CA}_{\mathrm{PS}}$ versus CA} \\ \cmidrule[0.05em](r){2-5} & CA & $\mathrm{CA}_{\mathrm{PS}}$ & $\mathrm{Num}_{\mathrm{equal}}$ & $\mathrm{Num}_{\mathrm{better}}$ \\ \midrule 8 & 0.00 & 18.15 & 225 & 525 \\ 10 & 0.00 & 19.56 & 171 & 579 \\ 15 & 0.00 & 24.22 & 80 & 670 \\ 20 & 0.00 & 25.73 & 59 & 691 \\ 25 & 0.00 & 26.08 & 44 & 706 \\ 30 & 0.00 & 26.09 & 47 & 703 \\ 40 & 0.00 & 27.92 & 46 & 704 \\ 50 & 0.00 & 26.74 & 55 & 695 \\ 75 & 0.00 & 27.63 & 66 & 684 \\ 100 & 0.00 & 28.09 & 72 & 678 \\ 250 & 0.00 & 25.07 & 103 & 647 \\ 500 & 0.00 & 24.40 & 110 & 640 \\ 750 & 0.00 & 23.45 & 117 & 633 \\ 1000 & 0.00 & 22.86 & 119 & 631 \\ Avg. & 0.00 & 24.71 & 94 & 656 \\ \bottomrule \end{tabular} \label{tab:tab_CA_CA_PS} \end{table} Next, the $\mathrm{CA}_{\mathrm{PS}}$ heuristic is compared with the optimal solutions delivered by the CPLEX 12.5 for instances with 8 jobs and 10 jobs. Here, the performance of the heuristic is measured by the mean relative improvement of the optimum objective function value versus the heuristic solution (RIVH), as well as the number of instances with the optimal solution given by the heuristic ($\mathrm{Num}_{\mathrm{opt}}$). For a given instance, the relative improvement of the optimum objective function value versus heuristic is calculated as follows. When $Z_{\mathrm{opt}}=Z_{\mathrm{alg}}$, the RIVH value is set to 0. 
Otherwise, the RIVH value is calculated as $$\mathrm{RIVH}=(Z_{\mathrm{alg}}-Z_{\mathrm{opt}})/Z_{\mathrm{alg}}\times100.$$ According to the definition of the RIVH value, the smaller the mean RIVH value, the better the quality of the solutions delivered by a heuristic is for the set of instances. The comparison results were shown in Table \ref{tab:tab_cos1} and \ref{tab:tab_cos2}. The two tables show that {\em the tardiness factor $T$ and the deteriorating interval $H$ can significantly affect the performance of the heuristic $\mathrm{CA}_{\mathrm{PS}}$.} For large values of $T$, the RIVH values are relatively small and the corresponding objective function values are on average quite close to the optimum achieved by the CPLEX. When the deteriorating interval is $H_2$, most jobs tend to have large deteriorating dates, and may not be deteriorated. Thus the RIVH values of the test instances with $H_2$ are less than that of the instances with $H_1$ and $H_3$. This observation has been demonstrated by \citet{Cheng2012SPSD_VNS} for parallel machine scheduling problem. In this case, the heuristic $\mathrm{CA}_{\mathrm{PS}}$ can give more optimal solutions compared with the instances with $H_1$ and $H_3$. Overall, the $\mathrm{CA}_{\mathrm{PS}}$ is effective in solving the problem under consideration since the maximum mean RIVH value is below 20\% for instances with 10 jobs. \begin{table}[htbp]\footnotesize \centering \caption{Comparison with optimum results for instances with 8 jobs.} \begin{tabular}{rrrrrrrr} \toprule \multicolumn{1}{c}{\multirow{2}[4]{*}{$T$}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{$R$}} & \multicolumn{3}{c}{RIVH(\%)} & \multicolumn{3}{c}{$\mathrm{Num}_{\mathrm{opt}}$} \\ \cmidrule[0.05em](r){3-5} \cmidrule[0.05em](r){6-8} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$H_1$} & \multicolumn{1}{c}{$H_2$} & \multicolumn{1}{c}{$H_3$} & \multicolumn{1}{c}{$H_1$} & \multicolumn{1}{c}{$H_2$} & \multicolumn{1}{c}{$H_3$} \\ \midrule 0.2 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{40.39 } & \multicolumn{1}{c}{1.07 } & \multicolumn{1}{c}{16.94 } & \multicolumn{1}{c}{0 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{48.40 } & \multicolumn{1}{c}{0.47 } & \multicolumn{1}{c}{34.66 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{4 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{9.46 } & \multicolumn{1}{c}{7.97 } & \multicolumn{1}{c}{35.52 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{18.78 } & \multicolumn{1}{c}{10.00 } & \multicolumn{1}{c}{0.00 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{10 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{9.29 } & \multicolumn{1}{c}{4.56 } & \multicolumn{1}{c}{6.25 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{8 } \\ 0.4 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{13.50 } & \multicolumn{1}{c}{2.32 } & \multicolumn{1}{c}{14.23 } & \multicolumn{1}{c}{1 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{2 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{7.00 } & \multicolumn{1}{c}{0.00 } & \multicolumn{1}{c}{7.50 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{10 } & \multicolumn{1}{c}{3 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{12.88 } & \multicolumn{1}{c}{4.62 } & \multicolumn{1}{c}{4.48 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{8 } \\ & \multicolumn{1}{c}{0.8} & 
\multicolumn{1}{c}{33.70 } & \multicolumn{1}{c}{7.99 } & \multicolumn{1}{c}{14.46 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{10.30 } & \multicolumn{1}{c}{9.01 } & \multicolumn{1}{c}{18.17 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{3 } \\ 0.6 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{8.50 } & \multicolumn{1}{c}{1.31 } & \multicolumn{1}{c}{1.90 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{7 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{9.94 } & \multicolumn{1}{c}{0.98 } & \multicolumn{1}{c}{8.97 } & \multicolumn{1}{c}{2 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{4 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{6.38 } & \multicolumn{1}{c}{0.50 } & \multicolumn{1}{c}{2.27 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{5.84 } & \multicolumn{1}{c}{2.55 } & \multicolumn{1}{c}{10.09 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{1 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{13.31 } & \multicolumn{1}{c}{0.03 } & \multicolumn{1}{c}{4.58 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{5 } \\ 0.8 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{3.43 } & \multicolumn{1}{c}{0.36 } & \multicolumn{1}{c}{3.43 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{9.58 } & \multicolumn{1}{c}{0.14 } & \multicolumn{1}{c}{1.09 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{6 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{4.17 } & \multicolumn{1}{c}{0.12 } & \multicolumn{1}{c}{1.71 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{3.32 } & \multicolumn{1}{c}{0.87 } & \multicolumn{1}{c}{0.48 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{7 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{3.25 } & \multicolumn{1}{c}{1.06 } & \multicolumn{1}{c}{1.85 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{7 } \\ 1 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{0.77 } & \multicolumn{1}{c}{0.13 } & \multicolumn{1}{c}{0.78 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{6 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{1.16 } & \multicolumn{1}{c}{0.00 } & \multicolumn{1}{c}{2.35 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{3 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{4.02 } & \multicolumn{1}{c}{0.00 } & \multicolumn{1}{c}{2.36 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{6 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{2.79 } & \multicolumn{1}{c}{0.00 } & \multicolumn{1}{c}{1.48 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{6 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{0.75 } & \multicolumn{1}{c}{0.13 } & \multicolumn{1}{c}{1.28 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{5 } \\ Avg. 
& & 11.24 & 2.25 & 7.87 & 4.40 & 7.36 & 5.24 \\ \bottomrule \end{tabular} \label{tab:tab_cos1} \end{table} \begin{table}[htbp]\footnotesize \centering \caption{Comparison with optimum results for instances with 10 jobs} \begin{tabular}{rrrrrrrr} \toprule \multicolumn{1}{c}{\multirow{2}[4]{*}{$T$}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{$R$}} & \multicolumn{3}{c}{RIVH(\%)} & \multicolumn{3}{c}{$\mathrm{Num}_{\mathrm{opt}}$} \\ \cmidrule[0.05em](r){3-5} \cmidrule[0.05em](r){6-8} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$H_1$} & \multicolumn{1}{c}{$H_2$} & \multicolumn{1}{c}{$H_3$} & \multicolumn{1}{c}{$H_1$} & \multicolumn{1}{c}{$H_2$} & \multicolumn{1}{c}{$H_3$} \\ \midrule 0.2 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{55.29 } & \multicolumn{1}{c}{2.52 } & \multicolumn{1}{c}{38.39 } & \multicolumn{1}{c}{2 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{3 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{70.00 } & \multicolumn{1}{c}{20.40 } & \multicolumn{1}{c}{30.71 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{31.10 } & \multicolumn{1}{c}{5.64 } & \multicolumn{1}{c}{16.71 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{8 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{25.00 } & \multicolumn{1}{c}{16.07 } & \multicolumn{1}{c}{17.36 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{14.85 } & \multicolumn{1}{c}{22.23 } & \multicolumn{1}{c}{31.82 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{5 } \\ 0.4 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{18.28 } & \multicolumn{1}{c}{0.88 } & \multicolumn{1}{c}{10.54 } & \multicolumn{1}{c}{1 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{4 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{20.75 } & \multicolumn{1}{c}{2.59 } & \multicolumn{1}{c}{27.79 } & \multicolumn{1}{c}{1 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{0 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{43.98 } & \multicolumn{1}{c}{7.06 } & \multicolumn{1}{c}{21.03 } & \multicolumn{1}{c}{0 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{3 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{35.02 } & \multicolumn{1}{c}{16.17 } & \multicolumn{1}{c}{23.44 } & \multicolumn{1}{c}{2 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{3 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{16.40 } & \multicolumn{1}{c}{14.17 } & \multicolumn{1}{c}{28.16 } & \multicolumn{1}{c}{0 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{4 } \\ 0.6 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{10.27 } & \multicolumn{1}{c}{1.27 } & \multicolumn{1}{c}{4.09 } & \multicolumn{1}{c}{1 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{4 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{7.45 } & \multicolumn{1}{c}{5.21 } & \multicolumn{1}{c}{5.36 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{3.01 } & \multicolumn{1}{c}{1.55 } & \multicolumn{1}{c}{5.14 } & \multicolumn{1}{c}{2 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{4 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{7.70 } & \multicolumn{1}{c}{3.44 } & \multicolumn{1}{c}{5.92 } & \multicolumn{1}{c}{2 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{3 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{8.51 } & \multicolumn{1}{c}{2.71 } & \multicolumn{1}{c}{5.88 } & \multicolumn{1}{c}{1 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{3 
} \\ 0.8 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{2.31 } & \multicolumn{1}{c}{0.53 } & \multicolumn{1}{c}{0.75 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{8.01 } & \multicolumn{1}{c}{0.03 } & \multicolumn{1}{c}{3.61 } & \multicolumn{1}{c}{1 } & \multicolumn{1}{c}{8 } & \multicolumn{1}{c}{3 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{3.76 } & \multicolumn{1}{c}{1.03 } & \multicolumn{1}{c}{2.81 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{3.52 } & \multicolumn{1}{c}{0.29 } & \multicolumn{1}{c}{2.05 } & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{2 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{4.66 } & \multicolumn{1}{c}{1.06 } & \multicolumn{1}{c}{3.66 } & \multicolumn{1}{c}{3 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{3 } \\ 1 & \multicolumn{1}{c}{0.2} & \multicolumn{1}{c}{0.67 } & \multicolumn{1}{c}{0.18 } & \multicolumn{1}{c}{1.14 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{6 } & \multicolumn{1}{c}{4 } \\ & \multicolumn{1}{c}{0.4} & \multicolumn{1}{c}{2.66 } & \multicolumn{1}{c}{0.20 } & \multicolumn{1}{c}{1.61 } & \multicolumn{1}{c}{2 } & \multicolumn{1}{c}{7 } & \multicolumn{1}{c}{4 } \\ & \multicolumn{1}{c}{0.6} & \multicolumn{1}{c}{2.93 } & \multicolumn{1}{c}{0.13 } & \multicolumn{1}{c}{0.92 } & \multicolumn{1}{c}{1 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c}{1.44 } & \multicolumn{1}{c}{0.01 } & \multicolumn{1}{c}{0.92 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{5 } \\ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{1.37 } & \multicolumn{1}{c}{0.02 } & \multicolumn{1}{c}{1.51 } & \multicolumn{1}{c}{4 } & \multicolumn{1}{c}{9 } & \multicolumn{1}{c}{3 } \\ Avg. & & 15.96 & 5.01 & 11.65 & 2.88 & 6.04 & 3.92 \\ \bottomrule \end{tabular} \label{tab:tab_cos2} \end{table} \section{Conclusions\label{sec:Sec_CN}} The total weighted tardiness as the general case has more important meaning in the practical situation. In this paper, the single machine scheduling problem with step-deteriorating jobs for minimizing the total weighted tardiness was addressed. Based on the characteristics of this problem, a mathematical programming model is presented for obtaining the optimal solution, and, the proof of the strong NP-hardness for the problem under consideration is given. Afterwards, seven heuristics are designed to obtain the near-optimal solutions for randomly generated problem instances. Computational results show that these dispatching heuristics can deliver relatively good solutions at low cost of computational time. Among these dispatching heuristics, the CA procedure as the best solution method can quickly generate a good schedule even for large instances. Moreover, the test results clearly indicate that these methods can be significantly improved by the pairwise swap movement. In the future, the consideration of developing meta-heuristics such as a genetic algorithm or ant colony optimization approach might be an interesting issue. For medium-sized problems, it is possible that a meta-heuristic could give better solutions within reasonable computational time. Another consideration is to investigate the total weighted tardiness problem with the step-deteriorating effects under other machine settings, such as parallel machines or flow-shops. \input{SMSPSDTWT1020.bbl} \end{document}
\begin{document} \title{Improved Algorithm and Lower Bound for Variable Time Quantum Search} \begin{abstract} We study variable time search, a form of quantum search where queries to different items take different time. Our first result is a new quantum algorithm that performs variable time search with complexity $O(\sqrt{T}\log n)$ where $T=\sum_{i=1}^n t_i^2$ with $t_i$ denoting the time to check the $i^{\rm th}$ item. Our second result is a quantum lower bound of $\Omega(\sqrt{T\log T})$. Both the algorithm and the lower bound improve over previously known results by a factor of $\sqrt{\log T}$ but the algorithm is also substantially simpler than the previously known quantum algorithms. \end{abstract} \section{Introduction} We study variable time search \cite{ambainis2010variable}, a form of quantum search in which the time needed for a query depends on which object is being queried. Variable time search and its generalization, variable time amplitude \cite{ambainis2012variable} amplification, are commonly used in quantum algorithms. For example, \begin{itemize} \item Ambainis \cite{ambainis2012variable} used variable time amplitude amplification to improve the running time of HHL quantum algorithm for solving systems of linear equations \cite{harrow2009quantum} from $\widetilde{O}(\kappa^2)$ (where $\kappa$ is the condition number of the system) to $\widetilde{O}(\kappa^{1+o(1)})$ in different contexts; \item Childs et al.~\cite{childs2017quantum} used variable time amplitude amplification to design a quantum algorithm for solving systems of linear equations with an exponentially improved dependence of the running time on the required precision; \item Le Gall \cite{legall2014improved} used variable time search to construct the best known quantum algorithm for triangle finding, with a running time $\widetilde{O}(n^{5/4})$ where $n$ is the number of vertices; \item De Boer et al.~\cite{boer2018attacks} used variable time search to optimize the complexity of quantum attacks against a post-quantum cryptosystem; \item Glos et al.~\cite{glos2021quantum} used variable time search to develop a quantum speedup for a classical dynamic programming algorithm. \item Schrottenloher and Stevens \cite{Schrottenloher2022} used variable time amplitude amplification to transform a classical nested search into a quantum algorithm, with applications to quantum attacks on AES. \end{itemize} In those applications, the oracle for the quantum search is a quantum algorithm whose running time depends on the item that is being queried. For example, we might have a graph algorithm that uses quantum search to find a vertex with a certain property and the time $t_v$ to check the property may depend on the degree of the vertex $v$. In such situations, using standard quantum search would mean that we run the checking algorithm for the maximum possible time $t_{\max} = \max_v t_v$. If most times $t_v$ are substantially smaller, this results in suboptimal quantum algorithms. A more efficient strategy is to use the variable time quantum search algorithm \cite{ambainis2010variable}. It has two variants: the ``known times'' variant when times $t_v$ for checking various $v$ are known in advance and can be used to design the algorithm and the ``unknown times'' variant in which $t_v$ are only discovered when running the algorithm. In the ``known times'' case, VTS (variable time search) has complexity $O(\sqrt{T})$ where $T=\sum_{v} t_v^2$ and there is a matching lower bound \cite{ambainis2010variable}. 
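As a simple illustration (ours, not taken from the cited papers), suppose that a single item requires checking time $t_v=n$ while each of the remaining $n-1$ items requires time $t_v=1$. Then \[ T=\sum_{v} t_v^2=n^2+(n-1), \qquad \sqrt{T}=O(n), \] so variable time search finds a marked item in time $O(n)$ (up to logarithmic factors), whereas standard quantum search, which runs every check for the maximal time $t_{\max}=n$, requires time $O(\sqrt{n}\, t_{\max})=O(n^{3/2})$.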
For the ``unknown times'' case, the complexity of the variable time search increases to $O(\sqrt{T} \log^{1.5} T)$ and the quantum algorithm becomes substantially more complicated. Since almost all of the applications of VTS require the ``unknown times'' setting, it may be interesting to develop a simpler quantum algorithm. In more detail, the ``unknown times'' search works by first running the query algorithm for a small time $T_1$ and then amplifying $v$ for which the query either returns a positive result or does not finish in time $T_1$. This is followed by running the query algorithm for longer time $T_2$, $T_3$, $\ldots$ and each time, amplifying $v$ for which the query either returns a positive result or does not finish in time $T_i$. To determine the necessary amount of amplification, quantum amplitude estimation is used. This results in a complex algorithm consisting of interleaved amplification and estimation steps. This complex structure contributes to the complexity of the algorithm, via log factors and may also lead to large constants hidden under the big-$O$. In this paper, we develop a simple algorithm for variable time search that uses only amplitude amplification. Our algorithm achieves the complexity of $O(\sqrt{T} \log n)$ where $T$ is an upper bound for $\sum_v t_v^2$ provided to the algorithm. (Unlike in the ``known times'' model, we do not need to provide $t_1, \ldots, t_n$ but only an estimate for $T$.) This also improves over the previous algorithm by a $\sqrt{\log}$ factor. To summarize, the key difference from the earlier algorithms \cite{ambainis2010variable,ambainis2012variable} is that the earlier algorithms would use amplitude estimation (once for each amplification step) to determine the optimal schedule for amplitude amplification for this particular $t_1, \ldots, t_n$. In contrast, we use one fixed schedule for amplitude amplification (that depends only on the estimate for $T$ and not on $t_1, \ldots, t_n$). While this schedule may be slightly suboptimal, the losses from it being suboptimal are less than savings from not performing multiple rounds of amplitude estimations. This also leads to the quantum algorithm being substantially simpler. Our second result is a lower bound of $\Omega(\sqrt{T \log T})$, showing that a complexity of $\Theta(\sqrt{T})$ is not achievable. The lower bound is by creating a query problem which can be solved by variable time search and using the quantum adversary method to show a lower bound for this problem. In particular, this proves that ``unknown times'' search is more difficult than ``known times'' search (which has the complexity of $\Theta(\sqrt{T})$). \section{Model, definitions, and previous results} We consider the standard search problem in which the input consists of variables $x_1, \ldots, x_N\in\{0, 1\}$ and the task is to find $i:x_i=1$ if such $i$ exists. Our model is a generalization of the usual quantum query model. We model a situation when the variable $x_i$ is computed by a query algorithm $Q_i$ which is initialized in the state $\ket{0}$ and, after $t_i$ steps, outputs the final state $\ket{x_i}\ket{\psi_i}$ for some unknown $\ket{\psi_i}$. (For most of the paper, we restrict ourselves to the case when $Q_i$ always outputs the correct $x_i$. The bounded error case is discussed briefly at the end of this section.) In the first $t_{i}-1$ steps, $Q_i$ can be in arbitrary intermediate states. The goal is to construct an algorithm $A$ that finds $i:x_i=1$ (if such $i$ exists). 
The algorithm $A$ can run the query $Q_i$ for a chosen $t$, with $Q_i$ outputting $x_i$ if $t_i\leq t$ or * (an indication that the computation is not complete) if $t_i>t$. The complexity of $A$ is the amount of time that is spent running the query algorithms $Q_i$. Transformations that does not involve running $Q_i$ do not count towards the complexity. More formally, we assume that, for any $T$, there is a circuit $C_T$ which, on an input $\sum_{i=1}^n \ket{i} \otimes \ket{0}$ outputs \[ \sum_{i=1}^n \ket{i} \otimes \ket{y_i} \otimes \ket{\psi_i} \] where $y_i=x_i$ if $t_i\leq T$ and $y_i =*$ if $t_i>T$. The state $\ket{\psi_i}$ contains intermediate results of the computation and can be arbitrary. An algorithm $A$ for variable time search consists of two types of transformations: \begin{itemize} \item circuits $C_T$ for various $T$; \item transformations $U_i$ that are independent of $x_1, \ldots, x_n$. \end{itemize} If there is no intermediate measurements, an algorithm $A$ is of the form \[ U_k C_{T_k} U_{k-1} \ldots U_1 C_{T_1} U_0 \] and its complexity is defined as $T_1+T_2+\ldots+T_k$. In the general case, an algorithm is a sequence \[ U_0, C_{T_1}, U_1, \ldots, C_{T_k}, U_k \] with intermediate measurements. Depending on the outcomes of those measurements, the algorithm may stop and output the result or continue with the next transformations. The complexity of the algorithm is defined as $p_1 T_1 + \ldots + p_k T_k$ where $p_i$ is the probability that $C_{T_i}$ is performed. (One could also allow $U_i$ and $T_i$ to vary depending on the results of previous measurements but this will not be necessary for our algorithm.) If there exists $i:x_i=1$, $A$ must output one of such $i$ with probability at least 2/3. If $x_i=0$, $A$ must output ``no such $i$'' with probability at least 2/3. {\bf Known vs.~unknown times.} This model can be studied in two variants. In the ``known times'' variant, the times $t_i$ for each $i\in [n]$ are known in advance and can be used to design the search algorithm. In the ``unknown times'' variant, the search algorithm should be independent of the times $t_i$, $i\in [n]$. The complexity of the variable time search is characterized by the parameter $T = \sum_{i=1}^n t_i^2$. We summarize the previously known results below. \begin{theorem} \label{thm:known} \cite{ambainis2010variable,ambainis2012variable} \begin{enumerate}[label=(\alph*)] \item {\bf Algorithm -- known times:} For any $t_1, \ldots, t_n$, there is a variable time search algorithm $A_{t_1, \ldots, t_n}$ with the complexity $O(\sqrt{T})$. \item {\bf Algorithm -- unknown times:} There is a variable time search algorithm $A$ with the complexity $O(\sqrt{T} \log^{1.5} T)$ for the case when $t_1, \ldots, t_n$ are not known in advance. \item {\bf Lower bound -- known times.} For any $t_1, \ldots, t_n$ and any variable time search algorithm $A_{t_1, \ldots, t_n}$, its complexity must be $\Omega(\sqrt{T})$. \end{enumerate} \end{theorem} Parts (a) and (c) of the theorem are from \cite{ambainis2010variable}. Part (b) is from \cite{ambainis2012variable}, specialized to the case of search. In the recent years there have been attempts to reproduce and improve the aforementioned results by other means. In \cite{cornelissen_et_al:LIPIcs:2020:12694}, the authors obtain a variant of Theorem \ref{thm:known}(a) by converting the original algorithms into span programs, which then are composed and subsequently converted back to a quantum algorithm. 
More recently, \cite{Jeffery2022} gives variable time quantum walk algorithm (which generalizes variable time quantum search) by employing a recent technique of multidimensional quantum walks. While the focus of these two papers is on developing very general frameworks, our focus is on making the variable time search algorithm simpler. Concurrently and independently of our work, a similar algorithm for variable time amplitude amplification was presented in \cite{Schrottenloher2022}, which also relies on recursive nesting of quantum amplitude amplifications. {\bf Variable time search with bounded error inputs.} We present our results for the case when the queries $Q_i$ are perfect (have no error) but our algorithm can be extended to the case if $Q_i$ are bounded error algorithms, at the cost of an extra logarithmic factor. Let $k$ be the maximum number of calls to $C_T$'s in an algorithm $A$. Then, it suffices that each $C_T$ outputs a correct answer with a probability $1-o(1/k^2)$. This can be achieved by repeating $C_T$ $O(\log k)$ times and taking the majority of answers. Possibly, this logarithmic factor can be removed using methods similar to ones for search with bounded error inputs in the standard (not variable time) setting \cite{Hoyer2003quantum}. \section{Algorithm} We proceed in two steps. We first present a simple algorithm for the case when a sufficiently good bound on the number of solutions $m=|i:x_i=1|$ are known (Section \ref{sec:known}). We then present an algorithm for the general case that calls the simple algorithm multiple times, with different estimates for the parameter $\ell$ corresponding to $m$ (Section \ref{sec:unknown}). Both algorithms require an estimate $T$ for which $\sum_{i=1}^n t_i^2 \leq T$, with the complexity depending on $T$. \subsection{Tools and methods} Before presenting our results, we describe the necessary background about quantum amplitude amplification \cite{BHMT02}. {\bf Amplitude amplification -- basic construction.} Assume that we have an algorithm $A$ that succeeds with a small probability and it can be verified whether $A$ has succeeded. Amplitude amplification is a procedure for increasing the success probability. Let \[ A\ket{0} = \sin \alpha \ket{\psi_{\text{succ}}} + \cos \alpha \ket{\psi_{\text{fail}}} .\] Then, there is an algorithm $A(k)$ that involves $k+1$ applications of $A$ and $k$ applications of $A^{-1}$ such that \[ A(k)\ket{0} = \sin \lr{(2k+1) \alpha} \ket{\psi_{\text{succ}}} + \cos \lr{(2k+1) \alpha} \ket{\psi_{\text{fail}}}.\] Knowledge of $\alpha$ is not necessary (the way how $A(k)$ is obtained from $A$ is independent of $\alpha$). {\bf Amplitude amplification -- amplifying to success probability $1-\delta$.} If $\alpha$ is known then one can choose $k = \lfloor \frac{\pi}{4 \alpha}\rfloor$ to amplify to a success probability close to 1 (since $(2k+1) \alpha$ will be close to $\frac{\pi}{2}$). If the success probability of $A$ is $\epsilon$, then $\sin \alpha \approx \sqrt{\epsilon}$ and $k \approx \frac{\pi}{4\sqrt{\epsilon}}$. For unknown $\alpha$, amplification to success probability $1-\delta$ for any $\delta>0$ can be still achieved, via a more complex algorithm. 
Namely, for any $\epsilon, \delta\in(0, 1)$ and any $A$, one can construct an algorithm $A(\epsilon, \delta)$ such that: \begin{itemize} \item $A(\epsilon, \delta)$ invokes $A$ and $A^{-1}$ $O(\frac{1}{\sqrt{\epsilon}}\log \frac{1}{\delta})$ times; \item If $A$ succeeds with probability at least $\epsilon$, $A(\epsilon, \delta)$ succeeds with probability at least $1-\delta$. \end{itemize} To achieve this, we first note that performing $A(k)$ for a randomly chosen $k\in\{1, \ldots, M\}$ for an appropriate $M=O\left(\frac{1}{\sqrt{\epsilon}}\right)$ and measuring the final state gives a success probability that is close to 1/2 (as observed in the proof of Theorem 3 in \cite{BHMT02}). Repeating this procedure $O\left(\log \frac{1}{\delta}\right)$ times achieves the success probability of at least $1-\delta$. \subsection{Algorithm with a fixed number of stages} \label{sec:known} Now we present an informal overview of the algorithm when tight bounds on the number of solutions $m=|i:x_i=1|$ is known. We will define a sequence of times $T_1, T_2, \ldots$ and procedures $A_1, A_2, \ldots$. We choose $T_1= 3 \sqrt{T/n}$ (this ensures that at most $n/9$ of indices $i\in[n]$ have $t_i\geq T_1$) and $T_2=3 T_1$, $T_3 = 3 T_2$, $\ldots$ until $d$ for which $T_d \geq \sqrt{T}$. The procedure $A_1$ creates the superposition $\sum_{i=1}^n \frac{1}{\sqrt{n}} \ket{i}$ and runs the checking procedure $C_{T_1}$, obtaining state of the form $\sum_{i=1}^n \frac{1}{\sqrt{n}} \ket{i, a_i}$, where $a_i\in\{0, 1, *\}$, with $*$ denoting a computation that did not terminate. The subsequent procedures $A_j$ are defined as $A_j = C_{T_j} A_{j-1}(1)$, i.e., we first amplify the parts of the state with outcomes 1 or $*$ and then run the checking procedure $C_{T_j}$. We express the final state of $A_{j-1}$ as \[ \sin \alpha_{j-1} \ket{\psi_{\text{succ}}} + \cos \alpha_{j-1} \ket{\psi_{\text{fail}}} , \] where $\ket{\psi_{\text{succ}}} $ consists of those indices $i\in [n]$ which are either 1 or are still unresolved $*$ (and thus have the potential to turn out to be `1'). Then the amplitude amplification part triples the angle $\alpha_{j-1}$, i.e., amplifies both the `good' and `unresolved' states by a factor of $\sin(3\alpha_{j-1})/\sin(\alpha_{j-1}) \approx 3$. We will show that $\ell = \lceil \log_9 \frac{n}{m} \rceil$ stages are sufficient, i.e., the procedure $A_{\ell}$ the amplitude at the `good' states (if they exist) is sufficiently large. We note that the idea of recursive tripling via amplitude amplification has been used in other contexts. It has been used to build an algorithm for bounded-error search in \cite{Hoyer2003quantum}; more recently, the recursive tripling trick has also been used in, e.g., \cite{chakraborty_et_al:LIPIcs.STACS.2022.20}. Furthermore, the repeated tripling of the angle $\alpha$ also explains the scaling factor 3 when defining the sequence $T_1, T_2, T_3 \ldots$ A formal description follows. We assume an estimate $T \geq \sum_i t_i^2$ to be known and set \begin{equation*} T_1= 3 \sqrt{T/n}, \ T_2 = 3T_1, \ \ldots, T_d = 3 T_{d-1}, \end{equation*} with $ d \in \mbb N $ s.t.~$ T_{d-1 } < \sqrt T \leq T_d $ (equivalently, $ 9^{d-1} < n \leq 9^d $). Let $ \mc M = \lrb{i\in[n] : x_i=1 }$, $m= \lrv{\mc M}$. We assume that we know $\ell$ for which $m$ belongs to the interval $\left [\frac{n}{9^\ell}, \frac{n}{9^{\ell-1}} \right)$ (so that $\ell = \lceil \log_9 \frac{n}{m} \rceil$). Under those assumptions, we now describe a variable time search algorithm with parameters $T, \ell$. 
\begin{algorithm}[H] \caption{VTS algorithm with a fixed number of stages }\label{alg:alg1} \begin{algorithmic}[1] \Statex {\bf Parameters: }{$ T $, $ n $, $\ell$, $\delta$.} \State{Run the amplified algorithm $A(0.04, \delta)$ where $A$ is the procedure defined below and we amplify the part of the state for the second register contains `1' } \Procedure{$A$}{} \State{Run $A_\ell$}\Comment{(defined below)} \State{Run $C_{T_{\ell+1}}$ (or $C_{T_d}$ if $\ell=d$)} \EndProcedure \State Measure the state \If {The second register is `1'} \State Output $i$ from the first register \Else \State Output \texttt{No solutions.} \EndIf \Procedure{$A_j$}{}\Comment{$j \in [d]$} \If{$j=1$} \State Create the state $\sum_{i=1}^n \frac{1}{\sqrt{n}} \ket{i}$ \State Run $C_{T_1}$, obtaining state of the form $\sum_{i=1}^n \frac{1}{\sqrt{n}} \ket{i, a_i} $ where $a_i\in\{0, 1, *\}$. \Else \State Perform the amplified algorithm $A_{j-1}(1)$, amplifying the basis states with 1 or * in the second register \If{$j<\ell$} \State Run $C_{T_j}$. \EndIf \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \begin{lemma}\label{thm:proofOfAlg1} \cref{alg:alg1} with parameter $\ell = \lceil \log_9 \frac{n}{m} \rceil$ finds an index $ i \in \mc M$ with probability at least $1-\delta$ in time $ O \lr{ \sqrt {\frac{T}{m}} \log\frac{n}{m} \log \frac{1}{\delta} } $. \end{lemma} \begin{proof} By $\mc S_j$ we denote the sets of those indices whose amplitudes will be amplified after running $A_j$, namely, the set of indices for which the query either returns a positive result or does not finish in time $T_j$: \begin{equation*} \mc S_j = \lrb{ i \in [n] \ : \ \lr{T_j < t_i} \lor \lr{t_i \leq T_j \land x_i=1 } } , \quad j=0,1,2,\ldots, d, \end{equation*} where $T_0 := 0$. We note that the sets $\mc S_j$ form a decreasing sequence\footnote{Since each $i$ s.t.~$t_i \leq T_j \land x_i=1$ either satisfies $t_i \leq T_{j-1} \land x_i=1$ or $t_i > T_{j-1}$; in both cases $i \in \mc S_{j-1}$.}, i.e., \begin{equation*} [n] = \mc S_0 \supseteq \mc S_1 \supseteq \mc S_2 \supseteq \ldots \supseteq \mc S_{d-1} \supseteq \mc S_d = \mc M. \end{equation*} We shall denote the cardinality of $\mc S_j $ by $s_j$; then \begin{equation*} n = s_0 \geq s_1 \geq \ldots \geq s_d = m. \end{equation*} We express the final state of $A_j$ as \[ \sin \alpha_j \ket{\psi_{\text{succ}, j}} + \cos \alpha_j \ket{\psi_{\text{fail}, j}} \] where $\ket{\psi_{\text{succ}, j}}$ consists of basis states with $\ket{i}$, $i\in \mc S_j$, in the first register and $\ket{\psi_{\text{fail}, j}}$ consists of basis states with $\ket{i}$, $i\notin \mc S_j$, in the first register. We begin by describing how the cardinality of $\mc S_j$ is related to the amplitude $\sin \alpha_j$ (the proof is deferred to \cref{sec:appA}). \begin{restatable}{lemma}{recursiveamplitude}\label{thm:recursiveamplitude} For all $j=1,2,\ldots, \ell$, \begin{equation}\label{eq:nextSinJ} \sin^2 \alpha_{j} = \frac{s_{j}}{n} \prod_{k=1}^{j-1} \lr{\frac{\sin (3\alpha_{k})}{\sin\alpha_{k}}}^2. \end{equation} Moreover, for any $i \in \mc S_j$, the amplitude at $\ket{i,1}$ (or $\ket{i,*}$, if $t_i > T_j$) equals $ \frac{\sin \alpha_j}{\sqrt{s_j}} $. 
\end{restatable} \cref{eq:nextSinJ} and the trigonometric identity \begin{equation*} \sin(3\alpha) = (3-4\sin^2\alpha)\sin \alpha \end{equation*} allows to obtain (for $j=1,2,\ldots,\ell$) \begin{equation}\label{eq:sinRatioLB} \frac{\sin (3\alpha_{j}) }{\sin\alpha_{j} } = 3-4\sin^2\alpha_{j} = 3- \frac{4s_j \cdot 9^{j-1} }{n} \prod_{k=1}^{j-1} \lr{\frac{\sin (3\alpha_{k})}{3\sin\alpha_{k}}}^2 \geq 3- \frac{4s_j }{n} \cdot 9^{j-1}, \end{equation} where the inequality is justified by the observation $\lrv{\frac{\sin (3\alpha)}{3\sin \alpha}} \leq 1 $. This allows to estimate \begin{equation}\label{eq:sinAlphaL} \sin \alpha_\ell = \sqrt{\frac{s_\ell}{n}} \prod_{j=1}^{\ell-1} \frac{\sin (3\alpha_{j})}{\sin\alpha_{j}} \geq 3^{\ell-1} \sqrt{\frac{s_\ell}{n}} \prod_{j=1}^{\ell-1} \lr{ 1 - \frac{4 s_j}{27 n} \cdot 9^j }, \end{equation} as long as each factor on the RHS is positive. We argue that it is indeed the case; moreover, the whole product is lower-bounded by a constant (the proof is deferred to \cref{sec:appA}): \begin{restatable}{lemma}{cOneThree}\label{thm:C1-3} \begin{enumerate}[label=\textbf{C-\arabic*}] \item\label{enum:1} Each factor on the RHS of \eqref{eq:sinAlphaL} is positive: $ \frac{9^j s_j}{n} \leq \frac{9}{4} $, thus \begin{equation*} \lr{ 1 - \frac{4 s_j}{27 n} \cdot 9^j } \geq \frac{2}{3}, \quad\text{for all } j \in [\ell-1]. \end{equation*} \item\label{enum:2} The product $\prod_{j=1}^{\ell-1} \lr{ 1 - \frac{4 s_j}{27 n} \cdot 9^j }$ is lower bounded by $2/3$. \item\label{enum:3} $9^\ell s_\ell \geq 9^\ell s_d \geq n$. \end{enumerate} \end{restatable} From \eqref{eq:sinAlphaL} and \cref{thm:C1-3} it is evident that $ \sin \alpha_\ell \geq \frac{2}{9} \sqrt{ \frac{9^\ell s_\ell}{n} } \geq \frac{2}{9} $. However, after running $A_\ell$, there still could be some unresolved indices $i$ with $t_i > T_\ell$ and some of these unresolved indices may correspond to $x_i=0$. Our next argument is that running $C_{T_{\ell+1}}$, i.e., the checking procedure for $3 T_\ell$ steps, resolves sufficiently many indices in $\mc M$. This argument, however, is necessary only for $\ell < d$; for $\ell=d$, one runs $C_{T_d}$ instead of $C_{T_{\ell+1}}$ and the same estimate \eqref{eq:A1succProbLB} of the success probability applies, with $8m/9$ replaced by $m$. Also notice that in \cref{alg:alg1} we skipped running $C_{T_\ell}$ at the end of $A_\ell$ and immediately proceeded with running $C_{T_{\ell+1}}$ instead. In the analysis, this detail is omitted for convenience (since it is equivalent to running $C_{T_j}$ at the end of each procedure $A_j$ and additionally running $C_{T_{\ell+1}}$ after $A_\ell$). By the choice of $\ell$ we have $\sqrt{\frac{T}{m}} \leq T_\ell = \sqrt{ \frac{9^\ell T }{n} }$ and $T_{\ell+1} \geq 3 \sqrt{\frac{T}{m}}$. Notice that at most $m/9$ of the indices $i \in [n]$ can satisfy $t_i^2 > T_{\ell+1}^2 $ (otherwise, the sum over those indices already exceeds $ \frac{m}{9} \cdot \frac{9T}{m} = T $). Consequently, after running the checking procedure $C_{T_{\ell+1}}$, at least $8m/9$ of the indices in $\mc M$ will be resolved to `1'. 
By \cref{thm:recursiveamplitude}, the amplitude at each of the respective states $\ket{i,1}$ is equal to $\frac{\sin \alpha_\ell}{\sqrt{s_\ell}}$, therefore the probability to measure `1' in the second register is at least \begin{equation}\label{eq:A1succProbLB} \frac{8m}{9} \cdot \frac{\sin^2 \alpha_\ell}{s_\ell} \geq \frac{8m}{9} \cdot {\frac{9^{\ell} }{n}} \lr{\frac{1}{3}\prod_{j=1}^{\ell-1} \lr{ 1 - \frac{4 s_j}{27 n} \cdot 9^j }}^2 \geq \frac{8}{9} \cdot \lr{ \frac{ 2}{9}}^2>0.04 , \end{equation} where the first inequality follows from \eqref{eq:sinAlphaL} and the second inequality is due to \ref{enum:2} and \ref{enum:3}. We conclude that the procedure $ A$ finds an index $i \in \mc M$ with probability at least $0.04$; its running time is easily seen to be \begin{equation*} T_{\ell+1} + T_{\ell} + 3 \lr{ T_{\ell-1} + 3 \lr{T_{\ell-2} + \ldots + 3 \lr{T_2 + 3 T_1 } } } = (3+\ell) T_\ell , \end{equation*} which for our choice of $\ell$ is of order \begin{equation*} O \lr{ \log\frac{n}{m} \sqrt {9^\ell \frac{T}{n}} } = O \lr{ \log\frac{n}{m} \sqrt {\frac{T}{m}} } . \end{equation*} Use $O(\log \frac{1}{\delta})$ rounds amplitude amplification to amplify the success probability of $ A$ to $1-\delta$, concluding the proof. \end{proof} \subsection{Algorithm for the general case} \label{sec:unknown} When the cardinality of $\lrv{\mc M}$ is not known in advance, we run \cref{alg:alg1} with increasing values of $\ell$ (which corresponds to exponentially decreasing guesses of $m$) until either $i : x_i=1$ is found or we conclude that no such $i$ exists. \cref{alg:alg1} also suffers the `souffl\'{e} problem' \cite{Brassard1997} in which iterating too much (choosing $\ell$ in \cref{alg:alg1} larger than its optimal value) may ``overcook'' the state and decrease the success probability. For this reason, before running \cref{alg:alg1} with the next value of $\ell$, we re-run it with all the previous values of $\ell$ to ensure that the probability of running \cref{alg:alg1} with too large $\ell$ is small. This ensures that the algorithm stops in time $ O \lr{ \sqrt {\frac{T}{m}} \log\frac{n}{m} } $ with high probability. Formally, we make the following claim: \begin{restatable}{lemma}{algIIanalysis}\label{thm:alg2analysis} If $\mc M$ is nonempty, \cref{alg:alg2} finds an index $ i \in \mc M$ with probability at least $5/6$ with complexity $ O \lr{ \sqrt {\frac{T}{m}} \log\frac{n}{m} } $. If $\mc M$ is empty, \cref{alg:alg2} outputs \texttt{No solutions.} with complexity $ O \lr{ \sqrt {T} \log n } $. \end{restatable} \begin{algorithm}[H] \caption{VTS algorithm for arbitrary number of solutions $m$ }\label{alg:alg2} \begin{algorithmic}[1] \Statex {\bf Parameters: }{$ T $, $ n $.} \Statex Let $B_k $ stand for \cref{alg:alg1} with parameters $T$, $n$, $k$ and $\delta = 1/6$. \For{$j= 1,2,\ldots,d$} \For{$k = 1,2,\ldots,j$} \State Run $B_k$ \State If $B_k$ returned $i\in \mc M$, output this $i$ and quit \EndFor \EndFor \State {Output \texttt{No solutions.}} \end{algorithmic} \end{algorithm} \begin{proof}[Proof of \cref{thm:alg2analysis}] Let $\delta=1/6$; let us remark that each procedure $B_k$ runs in time $O\lr{k 3^k \sqrt{T/n}}$. Let us consider the case when $ m=\lrv{\mc M}>0 $; denote $\ell := \lceil \log_9 \frac{n}{m} \rceil$. The probability of $ B_k$, $k \neq \ell$, finding an index $i \in \mc M$ is lower-bounded by 0; the probability of $ B_{\ell} $ finding an index $i \in \mc M$ is lower-bounded by $1-\delta$. 
Hence, the total complexity of the algorithm stages $j=1,2,\ldots, \ell$, is of order \begin{equation*} \sqrt{\frac{T}{n}}\sum_{j=1}^\ell \sum_{k=1}^j k 3^k = \sqrt{\frac{T}{n}}\sum_{j=1}^\ell (\ell+1-j) j 3^j \asymp \ell 3^\ell \sqrt{\frac{T}{n}} . \end{equation*} and the last step $B_\ell$ finds $i \in \mc M$ with probability at least $1-\delta$. With probability at most $\delta$, the last step fails to find $i\in \mc M$, and then \cref{alg:alg2} proceeds with $j=\ell+1$ and runs the sequence $B_1$, $B_2$, \ldots, $B_\ell$, where the last step finds $i \in \mc M$ with (conditional) probability at least $1-\delta$ (conditioned on the failure to find $i\in \mc M$ in the previous batch). The complexity of this part is of order \begin{equation*} \delta \sqrt{\frac{T}{n}} \lr{ \sum_{j=1}^\ell j 3^j } \asymp \delta \sqrt{\frac{T}{n}} \ell 3^\ell , \end{equation*} where the $\delta$ factor reflects the fact the respective procedures are invoked with probability $\delta$. With (total) probability at most $\delta^2$, the algorithm still has not found $i\in \mc M$. Then \cref{alg:alg2} runs the sequence $B_{\ell+1}$, $B_1$, $B_2$, \ldots, $B_\ell$ (i.e., finishes with $j=\ell+1$ and continues with $j=\ell+2$), where the last step finds $i \in \mc M$ with (conditional) probability at least $1-\delta$. The complexity of this part is of order \begin{equation*} \delta^2 \sqrt{\frac{T}{n}} \lr{ (\ell+1)3^{\ell+1} + \ell 3^\ell } \asymp \delta^2 \sqrt{\frac{T}{n}} (\ell+1)3^{\ell+1} . \end{equation*} With (total) probability at most $\delta^3$, the algorithm still has not found $i\in \mc M$. Then \cref{alg:alg2} runs the sequence $B_{\ell+1}$,$B_{\ell+2}$, $B_1$, $B_2$, \ldots, $B_\ell$, where the last step finds $i \in \mc M$ with (conditional) probability at least $1-\delta$. The complexity of this part is of order \begin{equation*} \delta^3 \sqrt{\frac{T}{n}} \lr{ (\ell+1)3^{\ell+1} +(\ell+2)3^{\ell+2} + \ell 3^\ell } \asymp \delta^3 \sqrt{\frac{T}{n}} (\ell+2)3^{\ell+2}, \end{equation*} and so on. For $j=d$, the final batch $B_1, B_2, \ldots, B_\ell$ is invoked with probability at most $\delta^{d-\ell}$; with conditional probability at most $\delta$ we still fail to find $i\in \mc M$ and run the remaining sequence $B_{\ell+1}, \ldots, B_d$ (which can completely fail finding any $i\in \mc M$ as it has no non-trivial lower bounds on the success probability). The complexity of the latter sequence is of order \begin{equation*} \delta^{d+1-\ell} \sqrt{\frac{T}{n}} \lr{ (\ell+1)3^{\ell+1} +(\ell+2)3^{\ell+2} + \ldots +d 3^d } \asymp \delta^{d+1-\ell} \sqrt{\frac{T}{n}} \, d \, 3^{d}. \end{equation*} We see that \cref{alg:alg2} fails with probability at most $\delta^{d+1-\ell}$; since $\ell \leq d$, this is upper-bounded by $\delta=1/6$. The total complexity of the algorithm is of order \begin{align*} & 3^\ell \sqrt{\frac{T}{n}}\lr{ \ell + 3\delta ^2 (\ell +1) + 9 \delta^3 (\ell +2) + \ldots + (3 \delta)^{d-\ell} \cdot d \delta } \\ & < 3^\ell \sqrt{\frac{T}{n}} \lr{ \ell + \ell \, 3\delta^2 \sum_{i=0}^\infty(3\delta)^i + 3\delta^2 \sum_{i=1}^\infty i (3\delta)^{i-1} } \\ & = 3^\ell \sqrt{\frac{T}{n}} \lr{ \ell + \ell \frac{3\delta^2 }{1-3\delta} + \frac{3\delta^2 }{(1-3\delta)^2} } \asymp \ell 3^\ell \sqrt{\frac{T}{n}}, \end{align*} since $3\delta=1/2$. Since $3^\ell \asymp \sqrt{\frac{n}{m}} $ and $\ell \asymp \log\frac{n}{m} $, we conclude that the complexity of the algorithm is $ O \lr{ \sqrt {\frac{T}{m}} \log\frac{n}{m} } $, as claimed. 
Let us consider the case when $\mc M $ is empty; then with certainty each $B_j$ fails to output any $i$, and \cref{alg:alg2} correctly outputs \texttt{No solutions}. In this case, the complexity of the algorithm is of order \begin{equation*} \sqrt{\frac{T}{n}}\sum_{j=1}^d \sum_{k=1}^j k 3^k = \sqrt{\frac{T}{n}}\sum_{j=1}^d (d+1-j) j 3^j \asymp d 3^d \sqrt{\frac{T}{n}} \asymp \sqrt{T} \log n, \end{equation*} since $3^d \asymp \sqrt{n}$. \end{proof} \section{Lower bound} For the improved lower bound, we consider a query problem which can be solved with variable time search. Let $g : \{0,1,\star\}^m \to \{0,1\}$ be a partial function defined on the strings with exactly one non-$\star$ value, which is the value of the function. The function $f$ we examine then is the composition of $\text{OR}_n$ with $g$. We note that $g$ is also known in the literature as $\text{pSEARCH}$, which has been used for quantum lower bounds in cryptographic applications \cite{BBHKLS18}. For any $i \in [n]$, if the index of the non-$\star$ element in the corresponding instance of $g$ is $j_i \in [m]$, then we can find this value in $O(\sqrt{j_i})$ queries using Grover's search. This creates an instance of the variable search problem with unknown times $t_i = \sqrt{j_i}$. By examining only inputs with fixed $T = \sum_{i=1}^n t_i^2 = \sum_{i=1}^n j_i$ and the restriction of $f$ on these inputs $f_T$, we are able to prove a $\Omega(\sqrt{T \log T})$ query lower bound using the weighted quantum adversary bound \cite{Aar06}. Since any quantum algorithm for the variable time search also solves $f_T$, this gives the required lower bound. \begin{restatable}{theorem}{thmLB}\label{thm:lb} Any algorithm that solves variable time search with unknown times $t_i$ requires time $\Omega(\sqrt{T \log T})$, where $T = \sum_{i\in[n]} t_i^2$. \end{restatable} We note that the lower bound of Theorem \ref{thm:lb} contains a factor of $\sqrt{\log T}$ while the upper bound of Lemma \ref{thm:alg2analysis} contains a factor of $\log n$. There is no contradiction between these two results as the lower bound uses inputs with $T=\Theta(n \log n)$ and for those inputs $\log T= (1+o(1)) \log n$. \begin{proof}[Proof of Theorem \ref{thm:lb}] Consider a partial function $f : D \to \{0,1\}$, where $D \subset \{\star,0,1\}^{[n]\times [m]}$, defined as follows. An input $x \in D$ if for each $i \in [n]$ there is a unique $j \in [m]$ such that $x_{i,j} \neq \star$; denote this $j$ by $j_{x,i}$. Then $f(x) = 1$ iff there exists an $i$ such that $x_{i,{j_{x,i}}} = 1$. Suppose that $x$ is given by query access to $x_{i,j}$. For any $i$, we can check whether $x_{i,j_{x,i}} = 1$ in $O(\sqrt{j_{x,i}})$ queries with certainty in the following way. There is a version of Grover's search that detects a marked element out of $N$ elements in $O(\sqrt N)$ queries with certainty, if the number of marked elements is either $0$ or $1$ \cite{BHMT02}. By running this algorithm for the first $N$ elements, where we iterate over $N = 1, 2, \ldots, 2^{\lceil \log_2{j_{x,i}}\rceil}$, we will detect whether $x_{i,j_{x,i}} = 1$ in $O(\sqrt{j_{x,i}})$ queries with certainty. Letting $t_i = \sqrt{j_{x,i}}$ and $T = \sum_{i \in [n]} t_i^2$, we get an instance of a variable search problem. Now fix any value of $T$ and examine only inputs with such $T$. Denote $f$ restricted on $T$ by $f_T$. If the quantum query complexity of $f_T$ is $\Q(f_T)$, then any algorithm that solves variable time search must require at least $\Omega(\Q(f_T))$ time. 
In the following, we will prove that $\Q(f_T) = \Omega(\sqrt{T \log T})$. \paragraph{Adversary bound.} We will use the relational version of the quantum adversary bound \cite{Aar06}. Let $X \subseteq f_T^{-1}(0)$ and $Y \subseteq f_T^{-1}(1)$ and $R : X \times Y \to \mathbb R_{\geq 0}$ be a weight function. For any input $x \in X$, define $w(x) = \sum_{y \in Y} R(x,y)$ and for any $i \in [n]$, $j \in [m]$, define $w(x,i,j) = \sum_{y \in Y, x_{i,j} \neq y_{i,j}} R(x,y)$. Similarly define $w(y)$ and $w(y,i,j)$. Then \[\Q(f_T) = \Omega\Biggl(\min_{\substack{x \in x, y \in Y \\ i \in [n], j \in [m] \\ R(x,y) > 0 \\ x_{i,j} \neq y_{i,j}}} \sqrt{\frac{w(x)w(y)}{w(x,i,j)w(y,i,j)}}\Biggr).\] \paragraph{Input sets.} Here we define the subsets of inputs $X$ and $Y$. First, let $k$ be the smallest positive integer such that $T \leq 2^k k$ and $k$ is a multiple of $4$. Denote $d = 2^k$, then $k = \log_2 d$ and $T = \Theta(d \log d)$. An input $z$ from either $X$ or $Y$ must then satisfy the following conditions. \begin{itemize} \item for each $p \in \left[0,\frac{k}{2}\right]$, there are exactly $\frac{d}{2^p}$ indices $i$ such that $j_{z,i} \in [2^p,2^{p+1})$; we will call the set of such indices the \emph{$p$-th block} of $z$; \item moreover, for each $p$ and each $\ell \in [0,2^p)$, there are exactly $\frac{d}{2^{2p}}$ indices $i$ such that $j_{z,i} = 2^p+\ell$. \end{itemize} Consequently, we examine inputs with $n = 2^k + 2^{k-1} + \ldots 2^{\frac{k}{2}}$ and $m = 2^{\frac{k}{2}+1}-1$. Additionally, an input $y$ belongs to $Y$ only if there is a unique $i$ such that $y_{i,j_{y,i}} = 1$. For this $i$, we also require $j_{y,i} \geq 2^{\frac{k}{4}+1}$: equivalently this means that $i$ belongs to a block with $p > \frac{k}{4}$. We verify the value of $T' = \sum_{i\in[n]} t_i^2$ for these inputs. If $i$ belongs to the $p$-th block of an input $z$, then $j_{z,i} = \Theta(2^p)$, as $j_{z,i} \in [2^p, 2^{p+1})$. Then \[T' = \sum_{i\in[n]} t_i^2 = \sum_{i\in[n]} j_{z,i} = \sum_{p \in \left[0,\frac{k}{2}\right]} \frac{d}{2^p}\cdot 2^p = d \left(\frac{k}{2}+1\right) = \Theta(T).\] Note that since $\Q(f_{T'}) \leq \Q(f_T)$, a lower bound on $\Q(f_{T'})$ in terms of $T$ will also give us a lower bound on $\Q(f_T)$. In the remainder of the proof, we will thus lower bound $\Q(f_{T'})$. \paragraph{Relation.} For an index $i \in [n]$ of an input $z$ that belongs to the $p$-th block, we define an \emph{index weight} $W_{z,i} = 2^p$. Then we also define values \begin{itemize} \item $B_p = \frac{d}{2^p} \cdot 2^p = d$ is the total index weight of the $p$-th block; \item $J_p = \frac{d}{2^{2p}} \cdot 2^p = \frac{d}{2^p}$ is the total index weight in the $p$-th block for any $j_{z,i} \in [2^p,2^{p+1})$. \end{itemize} Note that these values do not depend on the input. For the relation, we will call the $p$-th block \emph{light} if $p \in \left[ 0, \frac{k}{4}\right]$ and \emph{heavy} if $p \in \left( \frac{k}{4}, \frac{k}{2} \right]$. Two inputs $x \in X$ and $y \in Y$ have $R(x,y) > 0$ iff: \begin{itemize} \item there are exactly two indices $i_0, i_1 \in [n]$ such that $j_{x,i_b} \neq j_{y,i_b}$; \item $i_0$ is from some light block $p_0$ and $i_1$ is from some heavy block $p_1$ of $y$; let $j_0 = j_{y,i_0}$ and $j_1 = j_{y,i_1}$; \item $y_{i_0,j_0} = 0$, $y_{i_1,j_1} = 1$. \item $x_{i_0,j_1} = x_{i_1,j_0} = 0$. \end{itemize} Then let the weight in the relation be \[R(x,y) = W_{y,i_0}W_{y,i_1} = W_{x,i_1}W_{x,i_0} = 2^{p_0} 2^{p_1}.\] Figure \ref{fig:relation} illustrates the structure of the inputs and the relation. 
\begin{figure} \caption{An example of two inputs $x \in X$ and $y \in Y$ in the relation. Inputs $x$ and $y$ differ only in the 4 highlighted positions. All of the empty cells contain $\star$, not shown for readability. For $y$, the non-$\star$ symbols of the light blocks are located in the left upper area separated by the dashed lines, while the non-$\star$ symbols of the heavy blocks are located in the lower right area. Note that for $x$, $i_0$ is in a heavy block and $i_1$ is in a light block.} \label{fig:relation} \end{figure} \paragraph{Lower bound.} Now we will calculate the values for the adversary bound. Fix two inputs $x \in X$ and $y \in Y$ with $R(x,y) > 0$. First, since for $x$ the index $i_1$ can be any index from any light block and $i_0$ can be any index from any heavy block, \[w(x)=\Biggl(\sum_{p_0 \in \left[0,\frac{k}{4}\right]} B_{p_0}\Biggr)\cdot\Biggl(\sum_{p_1 \in \left(\frac{k}{4},\frac{k}{2}\right]} B_{p_1}\Biggr) = \Theta(d^2k^2).\] For $w(y)$, note that $p_1$ is uniquely determined by the position of the unique symbol $1$ in $y$. However, the choice for $i_0$ is not additionally constrained, hence \[w(y) = \Biggl(\sum_{p_0 \in \left[0,\frac{k}{4}\right]} B_{p_0}\Biggr)\cdot2^{p_1} = \Theta(dk2^{p_1}).\] Therefore, the nominator in the ratio in the adversary bound is \[w(x)w(y) = \Theta(d^3k^32^{p_1}).\] Now note the following important property: if $x_{i,j} \neq y_{i,j}$, then one of $x_{i,j}$ and $y_{i,j}$ is $\star$, and the other is either $0$ or $1$. There are in total exactly $4$ positions $(i,j)$ where $x$ and $y$ differ. We will examine each case separately. \begin{enumerate}[label=(\alph*)] \item $i = i_0$, $j = j_0$. In this case $x_{i,j} = \star$ and $y_{i,j} = 0$. For $x$, $i_1$ is not fixed but $j_0$ is known and hence also $p_0$ is known. Therefore, the total index weight from the light blocks is $J_{p_0}$. On the other hand, the positions of $i_0$ and, therefore, also $p_1$ are fixed. Thus, \[w(x,i,j) = J_{p_0} \cdot 2^{p_1} = \frac{d}{2^{p_0}} \cdot 2^{p_1}.\] For $y$, both $i_0$ and $i_1$ are fixed, hence \[w(y,i,j) = 2^{p_0} \cdot 2^{p_1} < d,\] since $p_0+p_1 \leq \frac{k}{4}+\frac{k}{2} < k$. Overall, \[w(x,i,j)w(y,i,j) < \frac{d}{2^{p_0}} \cdot 2^{p_1} \cdot d = \frac{d^2\cdot 2^{p_1}}{2^{p_0}}.\] \item $i = i_0$, $j = j_1$. In this case $x_{i,j} = 0$ and $y_{i,j} = \star$. For $x$, now the position $i_0$ is fixed, but $i_1$ can be chosen without additional constraints. The index $i_0$ uniquely defines the value of $p_1$. Hence, \[w(x,i,j) = \Biggl(\sum_{p_0 \in \left[0,\frac{k}{4}\right]} B_{p_0}\Biggr)\cdot 2^{p_1} = \Theta(dk2^{p_1}).\] For $y$, similarly as in the previous case, we have $i_0$ and $i_1$ fixed, thus \[w(y,i,j) = 2^{p_0} \cdot 2^{p_1} < d.\] Then \[w(x,i,j)w(y,i,j) = O(dk2^{p_1} \cdot d) = O(d^2k2^{p_1}).\] \item $i = i_1$, $j = j_0$. In this case $x_{i,j} = 0$ and $y_{i,j} = \star$. For $x$, $i_1$ is fixed, so it uniquely determines $p_0$. The index $i_0$ can be chosen without additional restrictions. Hence, \[w(x,i,j) = 2^{p_0}\cdot \Biggl(\sum_{p_1 \in \left(\frac{k}{4},\frac{k}{2}\right]} B_{p_1}\Biggr) = \Theta(2^{p_0}\cdot dk).\] For $y$, $i_0$ is not fixed but $j_0$ is fixed, which also fixes $p_0$. Therefore, the total index weight from the light blocks is $J_{p_0}$. 
On the other hand, $i_1$ and $p_1$ are fixed for $y$ by the position of the symbol $1$, thus \[w(y,i,j) = J_{p_0} \cdot 2^{p_1} = \frac{d}{2^{p_0}} \cdot 2^{p_1}.\] Their product is \[w(x,i,j)w(y,i,j) = \Theta\left(2^{p_0}\cdot dk \cdot \frac{d}{2^{p_0}} \cdot 2^{p_1}\right) = \Theta(d^2k2^{p_1}).\] \item $i = i_1$, $j = j_1$. In this case $x_{i,j} = \star$ and $y_{i,j} = 1$. For $x$, $i_1$ is fixed, hence $p_0$ is also fixed; $i_0$ is not fixed, but $j_1 = j$ and $p_1$ is uniquely defined. Hence, \[w(x,i,j) = 2^{p_0} \cdot J_{p_1} = 2^{p_0} \cdot \frac{d}{2^{p_1}}.\] For $y$, the position of the symbol $1$ must necessarily change, hence \[w(y,i,j) = w(y) = \Theta(dk2^{p_1}).\] The product then is \[w(x,i,j)w(y,i,j) = \Theta\left(2^{p_0} \cdot \frac{d}{2^{p_1}} \cdot dk2^{p_1}\right) = \Theta(d^2k2^{p_0}) = O(d^2k2^{p_1}),\] as $p_0 \leq \frac k 4 < p_1$. \end{enumerate} We can see that in all cases the denominator in the ratio of the adversary bound is $O(d^2k2^{p_1})$. Therefore, \[\frac{w(x)w(y)}{w(x,i,j)w(y,i,j)} = \Omega\left(\frac{d^3k^32^{p_1}}{d^2k2^{p_1}}\right) = \Omega(dk^2) = \Omega(d \log^2 d)\] and since $\log T = \Theta(\log(d \log d)) = \Theta(\log d + \log \log d) = \Theta(\log d)$, we have \[\Q(f_T) \geq \Q(f_{T'}) = \Omega\left(\sqrt{d \log^2 d}\right) = \Omega\left(\sqrt{T \log T}\right). \qedhere\] \end{proof} \section{Conclusion} In this paper, we developed a new quantum algorithm and a new quantum lower bound for variable time search. Our quantum algorithm has complexity $O(\sqrt{T} \log n)$, compared to $O(\sqrt{T} \log^{1.5} T)$ for the best previously known algorithm (quantum variable time amplitude amplification \cite{ambainis2012variable} instantiated to the case of search). It also has the advantage of being simpler than previous quantum algorithms for variable time search. If the recursive structure is unrolled, our algorithm consists of checking algorithms $C_{T_i}$ for various times $T_i$ interleaved with Grover diffusion steps. Thus, the structure is the essentially same as for regular search and the main difference is that $C_{T_i}$ for different $i$ are substituted at different query steps. We note that our algorithm has a stronger assumption about $T$: we assume that an upper bound estimate $T\geq \sum_{i=1}^n t_i^2$ is provided as an input to the algorithm and the complexity depends on this estimate $T$, rather than the actual $\sum_{i=1}^n t_i^2$. Possibly, this assumption can be removed by a doubling strategy that tries values of $T$ that keep increasing by a factor of 2 but the details remain to be worked out. Our quantum lower bound is $\Omega(\sqrt{T \log T})$ which improves over the previously known $\Omega(\sqrt{T})$ lower bound. This shows that variable time search for the ``unknown times'' case (when the times $t_1, \ldots, t_n$ are not known in advance and cannot be used to design the quantum algorithm) is more difficult than for the ``known times'' case (which can be solved with complexity $\Theta(\sqrt{T})$). A gap between the upper and lower bounds remains but is now just a factor of $\sqrt{\log T}$. Possibly, this is due to the lower bound using a set of inputs for which an approximate distribution of values $t_i$ is fixed. In such a case, the problem may be easier than in the general case, as an approximately fixed distribution of $t_i$ can be used for algorithm design. {\bf Acknowledgments.} We thank Krišjānis Prūsis for useful discussions on the lower bound proof. 
The authors are grateful to the anonymous referees for the helpful comments and suggestions. This research was supported by the ERDF project 1.1.1.5/18/A/020. \begin{appendices} \crefalias{section}{appendix} \section{Proofs of \texorpdfstring{\cref{thm:recursiveamplitude,thm:C1-3}}{Lemmas 2 and 3}}\label{sec:appA} \recursiveamplitude* \begin{proof} For each $j$ express the final state of $A_j$ in the canonical basis as \[ \sum_{i=1}^n \beta_{ij} \ket {i, a_{ij}}, \] where $a_{ij} \in \{ 0,1,*\} $ and $a_{ij} = 0$ iff $x_i=0$ and $t_i \leq T_j$ (i.e., iff $i \notin \mc S_j$). Initially, $\beta_{i0}= n^{-1/2}$ for all $i$. Then \begin{equation*} \sin^2 \alpha_j = \sum_{i \in \mc S_j} \lrv{\beta_{ij}}^2, \end{equation*} for all $j$. To see how the amplitude $\beta_{i(j+1)}$ is related to $\beta_{ij}$, consider how the state evolves under $A_{j+1}$: \begin{itemize} \item the final state of $A_{j}$ is \[ \sum_{i \in [n] \setminus \mc S_{j} } \beta_{ij} \ket {i, 0} + \sum_{i \in \mc S_{j} } \beta_{ij}\ket {i, a_{ij}}, \] by the definition of $\beta_{ij}$; moreover, $a_{ij}\in \{ 1,*\}$ for all $i \in \mc S_{j} $. \item Amplitude amplification $A_{j}(1)$ results in the state \[ \sum_{i \in [n] \setminus \mc S_{j} } \frac{\cos(3\alpha_{j})}{\cos \alpha_{j}} \beta_{ij} \ket {i, 0} + \sum_{i \in \mc S_{j} } \frac{\sin(3\alpha_{j})}{\sin \alpha_{j}} \beta_{ij}\ket {i, a_{ij}} . \] \item An application of $C_{T_{j+1}}$ transforms this state to \[ \sum_{i \in [n] \setminus \mc S_{j} } \frac{\cos(3\alpha_{j})}{\cos \alpha_{j}} \beta_{ij} \ket {i, 0} + \sum_{i \in \mc S_{j} \setminus \mc S_{j+1} } \frac{\sin(3\alpha_{j})}{\sin \alpha_{j}} \beta_{ij}\ket {i, 0} + \sum_{i \in \mc S_{j+1} } \frac{\sin(3\alpha_{j})}{\sin \alpha_{j}} \beta_{ij}\ket {i, a_{i(j+1)}}. \] \end{itemize} We conclude that \begin{equation}\label{eq:nextBetaJ} \beta_{i(j+1)} = \begin{cases} \beta_{ij}\frac{\sin(3\alpha_{j})}{\sin \alpha_{j}} , & i \in \mc S_{j} , \\ \beta_{ij} \frac{\cos(3\alpha_{j})}{\cos \alpha_{j}}, & i \in [n] \setminus \mc S_{j}. \end{cases} \end{equation} In particular, for any $j \in [\ell]$ and $i \in \mc S_j $ we have \begin{equation*} \beta_{ij} = \frac{1}{\sqrt n}\prod_{k=1}^{j-1} \frac{\sin (3\alpha_{k})}{\sin\alpha_{k}}, \end{equation*} since each such $i$ is in $\mc S_k$, $k\leq j-1$, thus, by \eqref{eq:nextBetaJ}, the respective amplitude gets multiplied by $ \frac{\sin(3\alpha_{k})}{\sin \alpha_{k}} $ at each step. This establishes the second part of the lemma (that the amplitudes $\beta_{ij}$ are all equal for any $i \in \mc S_j$). For the first part, we arrive at \begin{equation*} \sin^2 \alpha_j = \sum_{i \in \mc S_j} \lrv{\beta_{ij}}^2 = \sum_{i \in \mc S_j} \prod_{k=1}^{j-1} \lr{\frac{\sin (3\alpha_{k})}{\sin\alpha_{k}}}^2 = \frac{s_j}{n} \prod_{k=1}^{j-1} \lr{\frac{\sin (3\alpha_{k})}{\sin\alpha_{k}}}^2. \end{equation*} \end{proof} \cOneThree* \begin{proof} We will prove the following inequality: \begin{equation}\label{eq:sumTillSL} \sum_{j=1}^{\ell-1} s_j 9^j < \frac{9n}{4} . \end{equation} Then \ref{enum:1} will immediately follow, since each term on \eqref{eq:sumTillSL} is nonnegative. Furthermore, also \ref{enum:2} follows from \eqref{eq:sumTillSL} via {the} generalized Bernoulli's inequality: \begin{align*} & \prod_{j=1}^{\ell-1} \lr{ 1 - \frac{4 s_j}{27n} \cdot 9^j } \geq 1 - \frac{4}{27n}\sum_{j=1}^{\ell-1} s_j 9^j \geq 1 -\frac{4}{27n} \cdot \frac{9n}{4} = \frac{2}{3}. 
\end{align*} First we observe that \begin{equation*} \sum_{j=1}^d \sum_{i \in \mc S_{j-1} \setminus \mc S_j } t_i^2 = \sum_{i\in [n]\setminus \mc M} t_i^2 < \sum_{i\in [n] } t_i^2 \leq T. \end{equation*} Notice that each set difference $\mc S_{j-1} \setminus \mc S_j $ can be characterized as follows: \begin{equation*} \mc S_{j-1} \setminus \mc S_j = \lrb{ i \in [n] \ : \ \lr{T_{j-1} < t_i \leq T_j} \land x_i =0}. \end{equation*} Therefore all $t_i^2$ s.t.~$i \in \mc S_{j-1} \setminus \mc S_j $ satisfy the bound \begin{equation*} t_i^2 \geq T_{j-1}^2 = \begin{cases} \frac{9^{j-1} T}{n}, & j>1, \\ 0, & j=1. \end{cases} \end{equation*} Thus we obtain the following inequality: \begin{equation*} \frac{T}{n}\sum_{j=2}^d 9^{j-1} \lrv{\mc S_{j-1} \setminus \mc S_j} < \sum_{j=1}^d \sum_{i \in \mc S_{j-1} \setminus \mc S_j } t_i^2 < T \end{equation*} or \begin{equation}\label{eq:skIneq} \sum_{k=1}^{d-1} 9^k\lr{s_k - s_{k+1}} < n. \end{equation} We also expand $9^\ell s_\ell$ as follows, taking into account $s_d=m$: \begin{equation*} {9^\ell s_\ell} = 9^{\ell} \lr{s_{\ell} - s_{\ell+1} } + \frac{1}{9} \cdot 9^{\ell+1} \lr{s_{\ell+1} - s_{\ell+2} } + \ldots + \frac{1}{9^{d-1-\ell}} \cdot 9^{d-1} \lr{s_{d-1} - s_{d} } + 9^\ell m. \end{equation*} From this equality, taking into account $ s_k - s_{k+1} \geq 0 $, we can upper bound $9^\ell s_\ell $ as \begin{equation}\label{eq:9lsl} 9^\ell s_\ell \leq \sum_{k=\ell}^{d-1} 9^k\lr{s_k - s_{k+1}} + 9^{\ell} m \end{equation} Rewrite \eqref{eq:skIneq} as \begin{align*} & s_1 + \frac{8}{9}\sum_{k=1}^{\ell-1} 9^k s_k-9^{\ell-1} s_\ell + \sum_{k=\ell}^{d-1} 9^k\lr{s_k - s_{k+1}} <n \end{align*} and apply \eqref{eq:9lsl} to obtain \begin{align*} & s_1 + \frac{8}{9}\sum_{k=1}^{\ell-1} 9^k s_k + \sum_{k=\ell}^{d-1} 9^k\lr{s_k - s_{k+1}} < n + 9^{\ell-1} s_\ell \leq n + \frac{1}{9} \sum_{k=\ell}^{d-1} 9^k\lr{s_k - s_{k+1}} + 9^{\ell-1} m \\ & \frac{8}{9}\sum_{k=1}^{\ell-1} 9^k s_k + \frac{8}{9} \sum_{k=\ell}^{d-1} 9^k\lr{s_k - s_{k+1}} < n -s_1 + 9^{\ell-1} m \\ & 8 \sum_{k=1}^{\ell-1} 9^k s_k < 9n - 9s_1 +9^{\ell}m < 9n +9^{\ell}m. \end{align*} By the choice of $\ell$ we have $9^{\ell-1} \leq \frac{n}{m} $, therefore we arrive at \begin{equation*} 8 \sum_{k=1}^{\ell-1} 9^k s_k < 9n +9 \frac{n}{m} \cdot m = 18n, \end{equation*} which is equivalent to \eqref{eq:sumTillSL}. Finally, to show \ref{enum:3}, we recall that $ s_\ell \geq s_d = m $. Again by the choice of $ \ell $, $ 9^\ell \geq \frac{n}{m} $. Consequently, \begin{equation*} 9^\ell s_\ell \geq \frac{n}{m} \cdot m= n, \end{equation*} as claimed. \end{proof} \end{appendices} \end{document}
\begin{document} \title[Hilbert-Kunz criterion] {A Hilbert-Kunz criterion for solid closure in dimension two (characteristic zero)} \author[Holger Brenner]{Holger Brenner} \address{Mathematische Fakult\"at, Ruhr-Universit\"at Bochum, 44780 Bochum, Germany} \email{[email protected]} \subjclass{} \begin{abstract} Let $I$ denote a homogeneous $R_+$-primary ideal in a two-dimensional normal standard-graded domain over an algebraically closed field of characteristic zero. We show that a homogeneous element $f$ belongs to the solid closure $I^*$ if and only if $e_{HK}(I) = e_{HK}((I,f))$, where $e_{HK}$ denotes the (characteristic zero) Hilbert-Kunz multiplicity of an ideal. This provides a version in characteristic zero of the well-known Hilbert-Kunz criterion for tight closure in positive characteristic. \end{abstract} \maketitle \noindent Mathematical Subject Classification (2000): 13A35; 13D40; 14H60 \section*{Introduction} Let $(R, \fom)$ denote a local Noetherian ring or an $\mathbb{N}$-graded algebra of dimension $d$ of positive characteristic $p$. Let $I$ denote an $\fom$-primary ideal, and set $I^{[q]}= (f^q: f\in I)$ for a prime power $q=p^{e}$. Then the Hilbert-Kunz function of $I$ is given by $$ e \longmapsto \length ( R/I^{[p^{e}]}) \, ,$$ where $\length $ denotes the length. The Hilbert-Kunz multiplicity of $I$ is defined as the limit $$ e_{HK} (I)= \lim_{e \ra \infty} \, \length (R/I^{[p^{e}]}) /p^{ed} \, .$$ This limit exists as a positive real number, as shown by Monsky in \cite{monskyhilbertkunz}. It is an open question whether this number is always rational. The Hilbert-Kunz multiplicity is related to the theory of tight closure. Recall that the tight closure of an ideal $I$ in a Noetherian ring of characteristic $p$ is by definition the ideal $$I^* \!\!=\!\! \{f \in R: \exists c \mbox{ not in any minimal prime} : cf^q \in I^{[q]} \mbox{ for almost all } q=p^{e}\} \, .$$ For an analytically unramified and formally equidimensional local ring $R$ the equation $e_{HK}(I) = e_{HK} (J) $ holds if and only if $I^* =J^*$ holds true for ideals $I \subseteq J$ (see \cite[Theorem 5.4]{hunekeapplication}). Hence $f \in I^*$ if and only if $e_{HK}(I) = e_{HK} ( (I,f))$. This is the Hilbert-Kunz criterion for tight closure in positive characteristic. The aim of this paper is to give a characteristic zero version of this relationship between Hilbert-Kunz multiplicity and tight closure for $R_+$-primary homogeneous ideals in a normal two-dimensional graded domain $R$. There are several notions for tight closure in characteristic zero, defined either by reduction to positive characteristic or directly. We will work with the notion of solid closure (see \cite{hochstersolid}). In dimension two, the containment in the solid closure $f \in (f_1 , \ldots , f_n)^*$ means that the open subset $D( \fom) \subset \Spec A$ is not an affine scheme, where $A=R[T_1 , \ldots , T_n]/(f_1T_1 + \ldots + f_nT_n+f)$ is the so-called forcing algebra, see \cite[Proposition 1.3]{brennertightproj}. The definition of the Hilbert-Kunz multiplicity in positive characteristic does not suggest at first sight an analogous notion in characteristic zero. However, a bridge is provided by the following result of \cite{brennerhilbertkunz}, which gives an explicit formula for the Hilbert-Kunz multiplicity and proves its rationality in dimension two (the rationality of the Hilbert-Kunz multiplicity for the maximal ideal was also obtained independently in \cite{trivedihilbertkunz}). 
\begin{theoremintro} Let $R$ denote a two-dimensional standard-graded normal domain over an algebraically closed field of positive characteristic, $Y= \Proj R$. Let $I=(f_1 , \ldots , f_n)$ denote a homogeneous $R_+$-primary ideal generated by homogeneous elements $f_i$ of degree $d_i, i=1 , \ldots , n$. Then the Hilbert-Kunz multiplicity of the ideal $I$ equals $$e_{HK}(I)=\frac{ \deg (Y)}{2}( \sum_{k=1}^t r_k \nu_k^2 - \sum_{i=1}^n d_i^2) \, .$$ \end{theoremintro} Here the numbers $r_k$ and $\nu_k$ come from the strong Harder-Narasimhan filtration of the syzygy bundle $\Syz(f_1^q , \ldots , f_n^q)(0)$ given by the short exact sequence $$0 \lra \Syz(f_1^q , \ldots , f_n^q)(0) \lra \bigoplus_{i=1}^n \O(-q d_i) \stackrel{f_1^q , \ldots , f_n^q}{\lra} \O_Y \lra 0 \, .$$ This syzygy bundle is a locally free sheaf on the smooth projective curve $Y= \Proj R$, and its strong Harder-Narasimhan filtration is a filtration $\shS_1 \subset \ldots \subset \shS_t = \Syz(f_1^q , \ldots , f_n^q)(0)$ such that the quotients $\shS_k/\shS_{k-1}$ are strongly semistable, meaning that every Frobenius pull-back is semistable. Such a filtration exists for $q$ big enough by a theorem of Langer, \cite[Theorem 2.7]{langersemistable}. Then we set $r_k = \rk (\shS_k/\shS_{k-1})$ and $\nu_k=- \mu( \shS_k/\shS_{k-1}) /q \deg (Y)$, where $\mu$ denotes the slope. To define the Hilbert-Kunz multiplicity in characteristic zero we now take the right hand side of the above formula as our model. \begin{definitionintro} Let $R$ denote a two-dimensional normal standard-graded $K$-domain over an algebraically closed field $K$ of characteristic zero. Let $I=(f_1 , \ldots , f_n)$ be a homogeneous $R_+$-primary ideal given by homogeneous ideal generators $f_i$ of degree $d_i$. Let $\shS_1 \subset \ldots \subset \shS_t = \Syz(f_1 , \ldots , f_n)(0)$ denote the Harder-Narasimhan filtration of the syzygy bundle on $Y= \Proj R$, set $\mu_k =\mu( \shS_k/\shS_{k-1}) $ and $r_k = \rk ( \shS_k/\shS_{k-1}) $. Then the Hilbert-Kunz multiplicity of $I$ is by definition $$e_{HK}(I)= \frac{ \deg (Y)}{2}( \sum_{k=1}^t r_k (\frac{\mu_k}{\deg(Y)})^2 - \sum_{i=1}^n d_i^2) = \frac{ \sum_{k=1}^t r_k \mu_k^2 - \deg(Y)^2 \sum_{i=1}^n d_i^2}{2 \deg (Y)} \, .$$ \end{definitionintro} It is easy to show that this definition does not depend on the chosen ideal generators and is therefore an invariant of the ideal, see \cite[Proposition 4.9]{brennerhilbertkunz}. With this invariant we can in fact give the following Hilbert-Kunz criterion for solid closure in characteristic zero in dimension two (see Theorem \ref{hilbertkunzsolid}): \begin{theoremintro} \label{hilbertkunzsolidintro} Let $K$ denote an algebraically closed field of characteristic zero, let $R$ denote a standard-graded two-dimensional normal $K$-domain. Let $I$ be a homogeneous $R_+$-primary ideal and let $f$ denote a homogeneous element. Then $f$ is contained in the solid closure, $f \in I^*$, if and only if $e_{HK} (I)= e_{HK}((I,f))$. \end{theoremintro} To prove this theorem it is convenient to consider more generally for a locally free sheaf $\shS$ on a smooth projective curve $Y$ the expression $$\mu_{HK}(\shS) = \sum_{k=1}^t r_k \mu_k^2 \, ,$$ where $r_k$ and $\mu_k$ are the ranks and the slopes of the semistable quotient sheaves in the Harder-Narasimhan filtration of $\shS$. We call this number the Hilbert-Kunz slope of $\shS$. 
With this notion the Hilbert-Kunz multiplicity of an ideal $I=(f_1 , \ldots , f_n)$ is related to the Hilbert-Kunz slope of the syzygy bundle by $$e_{HK}((f_1 , \ldots , f_n)) = \frac{1}{2 \deg (Y)} \big( \mu_{HK}(\Syz(f_1 , \ldots , f_n)(0)) - \mu_{HK} (\bigoplus_{i=1}^n \O(-d_i)) \big) \, .$$ With this notion we will in fact prove the following theorem, which implies Theorem \ref{hilbertkunzsolidintro} (see Theorem \ref{hilbertkunzaffinecriterion}). \begin{theoremintro} \label{hilbertkunzaffinecriterionintro} Let $Y$ denote a smooth projective curve over an algebraically closed field of characteristic $0$. Let $\shS$ denote a locally free sheaf on $Y$ and let $c \in H^1(Y, \shS)$ denote a cohomology class given rise to the extension $0 \ra \shS \ra \shS' \ra \O_Y \ra 0$ and the affine-linear torsor $\mathbb{P}(\shS'^\dual) - \mathbb{P}(\shS^\dual)$. Then $\mathbb{P}(\shS'^\dual) - \mathbb{P}(\shS^\dual)$ is an affine scheme if and only if $\mu_{HK}(\shS') < \mu_{HK} (\shS)$. \end{theoremintro} \section{The Hilbert-Kunz slope of a vector bundle} We recall briefly some notions for locally free sheaves (or vector bundles), see \cite{huybrechtslehn} or \cite{hardernarasimhan}. Let $Y$ denote a smooth projective curve over an algebraically closed field and let $\shS$ denote a locally free sheaf of rank $r$. Then $\deg (\shS)= \deg (\bigwedge^r \shS)$ is called the degree of $\shS$ and $\mu(\shS)= \deg(\shS)/r$ is called the slope of $\shS$. If $\mu(\shT) \leq \mu(\shS)$ holds for every locally free subsheaf $\shT \subseteq \shS$, then $\shS$ is called semistable. In general there exists the so-called Harder-Narasimhan filtration. This is a filtration of locally free subsheaves $\shS_1 \subset \ldots \subset \shS_t = \shS$ such that the quotient sheaves $\shS_k/\shS_{k-1}$ are semistable locally free sheaves with decreasing slopes $\mu_1 > \ldots > \mu_t$. The Harder-Narasimhan filtration is uniquely determined by these properties. $\shS_1$ is called the maximal destabilizing subsheaf, $\mu_1 =\mu_{\rm max}(\shS)$ is called the maximal slope of $\shS$ and $\mu_t= \mu_{\rm min}(\shS)$ is called the minimal slope of $\shS$. If $\shS \ra \shT$ is a non-trivial sheaf homomorphism, then $\mu_{\rm min}(\shS) \leq \mu_{\rm max}(\shT)$. We begin with the definition of the Hilbert-Kunz slope of $\shS$. \begin{definition} Let $\shS$ denote a locally free sheaf on a smooth projective curve over an algebraically closed field of characteristic $0$. Let $\shS_1 \subset \ldots \subset \shS_t=\shS$ denote the Harder-Narasimhan filtration of $\shS$, set $r_k = \rk (\shS_k/\shS_{k-1})$ and $\mu_k = \mu(\shS_k/\shS_{k-1})$. We define the Hilbert-Kunz slope of $\shS$ by $$\mu_{HK} (\shS) = \sum_{k=1}^t r_k \mu_k^2 = \sum_{k=1}^t \frac{ \deg( \shS_k/\shS_{k-1} )^2}{ r_k}\, .$$ \end{definition} The only justification for considering this number is Theorem \ref{hilbertkunzsolid} below. We gather together some properties of this notion in the following proposition. \begin{proposition} \label{hilbertkunzslopeproperties} Let $\shS$ denote a locally free sheaf on a smooth projective curve over an algebraically closed field of characteristic $0$. Then the following hold true. \numiii \begin{enumerate} \item If $\shS$ is semistable, then $\mu_{HK}(\shS) =\deg(\shS)^2/\rk(\shS)$. \item Let $\shT \subset \shS$ denote a locally free subsheaf occurring in the Harder-Narasimhan filtration of $\shS$. Then $\mu_{HK}(\shS) =\mu_{HK}(\shT) + \mu_{HK}(\shS/\shT)$. 
\item We have $\mu_{HK}(\shS \oplus \shT) = \mu_{HK}(\shS) + \mu_{HK}(\shT)$. \item $\mu_{HK}(\shS)= \mu_{HK}(\shS^\dual)$. \item Let $\shL$ denote an invertible sheaf. Then $$\mu_{HK} (\shS \otimes \shL) = \mu_{HK}(\shS) + 2 \deg(\shS) \deg (\shL) + \rk(\shS) \deg(\shL)^2 \, .$$ \item Let $\varphi: Y' \ra Y$ denote a finite morphism between smooth projective curves of degree $n$. Then $\mu_{HK}( \varphi^*(\shS)) = n^2 \mu_{HK}(\shS)$. \end{enumerate} \end{proposition} \proof (i) and (ii) are clear from the definition. (iii). The maximal destabilizing subsheaf of $\shS \oplus \shT$ is either $\shS_1 \oplus 0$, $0 \oplus \shT_1$ or $\shS_1 \oplus \shT_1$. Hence the result follows from (ii) by induction on the rank of $\shS \oplus \shT$. (iv). Let $0= \shS_0 \subset \shS_1 \subset \ldots \subset \shS_t = \shS$ denote the Harder-Narasimhan filtration of $\shS$. Set $\shQ_k = \shS/\shS_k$. This gives a filtration $0 \subset \shQ_{t-1}^\dual \subset \ldots \subset \shQ_1^\dual \subset \shQ_0^\dual = \shS^\dual$. From $0 \ra \shS_k /\shS_{k-1} \ra \shS/\shS_{k-1} \ra \shS/\shS_k \ra 0$ we get $0 \ra \shQ_k^\dual \ra \shQ_{k-1}^\dual \ra \shQ^\dual_{k-1}/\shQ^\dual_k \cong ( \shS_k/\shS_{k-1})^\dual \ra 0$. Hence the filtration is the Harder-Narasimhan filtration of $\shS^\dual$ and the result follows from $\mu( \shQ^\dual_{k-1}/\shQ^\dual_k) = - \mu(\shS_k/\shS_{k-1})$. (v). The Harder-Narasimhan filtration of $\shS \otimes \shL$ is $\shS_1 \otimes \shL \subset \ldots \subset \shS_t \otimes \shL $ and $\mu(\shS_k \otimes \shL/ \shS_{k-1} \otimes \shL )= \mu( (\shS_k/ \shS_{k-1}) \otimes \shL ) =\mu (\shS_k/ \shS_{k-1}) + \mu(\shL)$. Therefore \begin{eqnarray*} \mu_{HK}(\shS \otimes \shL) &=& \sum_{k=1}^t r_k \mu_k( \shS \otimes \shL)^2 \cr &=& \sum_{k=1}^t r_k (\mu_k + \deg (\shL))^2 \cr &=& \sum_{k=1}^t r_k (\mu_k^2 + 2 \mu_k \deg (\shL) +\deg(\shL)^2) \cr &=& \mu_{HK} (\shS) + 2\deg(\shL) \sum_{k=1}^t r_k\mu_k +\deg(\shL^2)\sum_{k=1}^t r_k \, . \end{eqnarray*} This is the stated result, since $\deg(\shS) = \sum_{k=1}^t r_k \mu_k$ and $\rk(\shS)= \sum_{k=1}^t r_k$. (vi). The pull-back of a semistable sheaf under a separable morphism is again semistable, and the pull-back of the Harder-Narasimhan filtration is the Harder-Narasimhan filtration of $\varphi^* (\shS)$. Hence the result follows from $\deg(\varphi^*(\shS)) = n \deg (\shS)$. \qed \begin{lemma} \label{semistableminimal} The Hilbert-Kunz multiplicity of a locally free sheaf $\shS$ has the property that $\mu_{HK}(\shS) \geq \deg(\shS)^2/\rk(\shS)$, and equality holds if and only if $\shS$ is semistable. \end{lemma} \proof We have to show that $$\sum_{k=1}^t r_k \mu_k ^2 \geq \deg(\shS)^2/\rk(\shS) = (r_1\mu_1 + \ldots + r_t\mu_t)^2/(r_1 + \ldots + r_t) $$ or equivalently that $$(r_1 + \ldots + r_t) (\sum_{k=1}^t r_k \mu_k ^2 ) \geq (r_1\mu_1 + \ldots + r_t\mu_t)^2 \,.$$ The left hand side is $ \sum_{k=1}^t r_k^2 \mu_k^2 + \sum_{i \neq k} r_i r_k \mu_k^2$ (we sum over ordered pairs), and the right hand side is $ \sum_{k=1}^t r_k^2 \mu_k^2 + \sum_{i \neq k} r_i r_k \mu_i \mu_k$. Hence left hand minus right hand is $$ \sum_{i \neq k} r_i r_k \mu_k^2 - \sum_{i \neq k} r_i r_k \mu_i \mu_k $$ So this follows from $0 \leq (\mu_i - \mu_k)^2 =\mu_i^2 + \mu_k^2 -2 \mu_i \mu_k$ for all pairs $i \neq k$. Equality holds if and only if $\mu_i= \mu_k$, but then $t=1$ and $\shS$ is semistable. 
\begin{remark} Lemma \ref{semistableminimal} implies that the number $\mu_{HK}(\shS) - \frac{ \deg(\shS)^2}{\rk(\shS)}$ is nonnegative, and that it vanishes exactly in the semistable case. It follows from Proposition \ref{hilbertkunzslopeproperties} (v) that this number is invariant under tensoring with an invertible sheaf. \end{remark} \begin{proposition} Let $\shS$ and $\shT$ denote two locally free sheaves on $Y$. Then $$\mu_{HK}(\shS \otimes \shT)= \rk(\shT) \mu_{HK}(\shS) + \rk(\shS) \mu_{HK}(\shT) +2 \deg(\shS) \deg(\shT) \, .$$ \end{proposition} \proof Let $r_i$, $\mu_i$, $i \in I$, and $r_j$, $\mu_j$, $j \in J$, ($I$ and $J$ disjoint) denote the ranks and slopes occurring in the Harder-Narasimhan filtration of $\shS$ and $\shT$ respectively. It is a non-trivial fact (in characteristic zero!) that the tensor product of two semistable bundles is again semistable, see \cite[Theorem 3.1.4]{huybrechtslehn}. From this it follows that the semistable quotients of the Harder-Narasimhan filtration of $\shS \otimes \shT$ are given as $(\shS_i/ \shS_{i-1}) \otimes (\shT_j/\shT_{j-1})$ of rank $r_i \cdot r_j$ and slope $\mu_i+ \mu_j$. Therefore the Hilbert-Kunz slope is \begin{eqnarray*} \mu_{HK}(\shS \otimes \shT) &=& \sum_{i,j} r_i r_j (\mu_i + \mu_j)^2 \cr &=& \sum_{i,j} r_i r_j \mu_i^2 + \sum_{i,j}r_ir_j \mu_j^2 +2\sum_{i,j} r_ir_j \mu_i \mu_j \cr &=& (\sum_{j} r_j )( \sum_i r_i \mu_i^2) + (\sum_{i} r_i )( \sum_j r_j \mu_j^2) +2( \sum_i r_i \mu_i)(\sum_j r_j \mu_j) \cr &=& \rk(\shT) \mu_{HK}(\shS) + \rk(\shS) \mu_{HK}(\shT) +2 \deg(\shS) \deg(\shT) \, . \end{eqnarray*} \qed \section{A Hilbert-Kunz criterion for affine torsors} In this section we consider a locally free sheaf $\shS$ on a smooth projective curve $Y$ together with a cohomology class $c \in H^1(Y, \shS) \cong \Ext(\O_Y, \shS)$. Such a class gives rise to an extension $0 \ra \shS \ra \shS' \ra \O_Y \ra 0$. Of course $\deg(\shS')= \deg(\shS)$ and $\rk(\shS')= \rk (\shS) + 1$. We shall investigate the relationship between $\mu_{HK}(\shS)$ and $\mu_{HK}(\shS')$. \begin{lemma} \label{extensionlemma} Let $Y$ denote a smooth projective curve over an algebraically closed field. Let $\shS$, $\shT$ and $\shQ$ denote locally free sheaves on $Y$. Then the following hold. \numiii \begin{enumerate} \item Let $\varphi: \shT \ra \shS$ denote a sheaf homomorphism, $c \in H^1(Y, \shT)$ with corresponding extension $\shT'$, let $\shS'$ denote the extension of $\shS$ corresponding to $\varphi(c) \in H^1(Y, \shS)$. Then there is a sheaf homomorphism $\varphi': \shT' \ra \shS'$ extending $\varphi$. \item Suppose that $0 \ra \shT \ra \shS \ra \shQ \ra 0$ is a short exact sequence, and $c \in H^1(Y, \shT)$. Then $\shT' \subseteq \shS'$ and $\shS'/ \shT' \cong \shS/ \shT$. \item Suppose that $0 \ra \shT \ra \shS \ra \shQ \ra 0$ is a short exact sequence, and $c \in H^1(Y, \shS)$. Then $\shS' \ra \shQ' \ra 0$ and $\shQ' \cong \shS'/\shT$. \item If $\shS$ is semistable of degree $0$ and $c \in H^1(Y,\shS)$, then also $\shS'$ is semistable. \end{enumerate} \end{lemma} \proof The cohomology class $c$ is represented by the \v{C}ech cocycle $\check{c} \in H^0(U_1 \cap U_2, \shS)$, where $Y= U_1 \cup U_2$ is an affine covering. Then $\shS'$ arises from $\shS_1'= \shS|_{U_1} \oplus \O$ and $\shS_2'= \shS|_{U_2} \oplus \O$ by glueing $\shS_1'|_{U_1 \cap U_2} \cong \shS_2'|_{U_1 \cap U_2}$ via $(s,t) \mapsto (s+ t\check{c},t)$. The natural mappings $\shT_i' \ra \shS_i'$, $i=1,2$, glue together to a morphism $\shT' \ra \shS'$.
The injectivity and surjectivity transfer from $\varphi$ to $\varphi'$, since these are local properties. (ii) and (iii) then follow from suitable diagrams. (iv). Suppose that $\shF \subseteq \shS'$ is a semistable subsheaf of positive slope. Then the induced mapping $\shF \ra \O$ is trivial and therefore $\shF \subseteq \shS$, which contradicts the semistability of $\shS$. \qed Let $\shS_1 \subset \ldots \subset \shS_t =\shS$ denote the Harder-Narasimhan filtration of $\shS$ and $c \in H^1(Y, \shS)$. If the image of $c$ in $H^1(Y, \shS/\shS_{t-1})$ is zero, then $c$ stems from a class $c_{t-1} \in H^1(Y, \shS_{t-1})$. So we find inductively a class $c_n \in H^1(Y, \shS_n)$ mapping to $c$ and such that the image in $H^1(Y, \shS_n/\shS_{n-1})$ is not zero (or $c$ itself is $0$). This yields extensions $\shS_k'$ of $\shS_k$ for $k \geq n$. It is crucial for the behavior of $\shS'$ whether $\mu(\shS_n/\shS_{n-1}) \geq 0$ or $< 0$. The following proposition deals with the case $\mu(\shS_n/\shS_{n-1}) \geq 0$. \begin{proposition} \label{positive} Let $\shS_1 \subset \ldots \subset \shS_t=\shS$ be the Harder-Narasimhan filtration of $\shS$ and let $c \in H^1(Y,\shS)$. Let $n$ be such that the image of $c$ in $H^1(Y, \shS_k/\shS_{k-1})$ is $0$ for $k >n$ but such that the image in $H^1(Y,\shS_n/ \shS_{n-1})$ is $ \neq 0$. Suppose that $\mu (\shS_n/ \shS_{n-1})$ is $ \geq 0$. Let $i$ be the biggest number such that $ \mu(\shS_i/\shS_{i-1}) \geq 0$ {\rm(}hence $n \leq i${\rm)}. \numiii \begin{enumerate} \item Suppose that $\mu_i > 0$. Then the Harder-Narasimhan filtration of $\shS'$ is $$\shS_1 \subset \ldots \subset \shS_i \subset \shS_i' \subset \shS_{i+1}' \subset \ldots \subset \shS' \, .$$ \item Suppose that $\mu_i= 0$. Then the Harder-Narasimhan filtration of $\shS'$ is $$\shS_1 \subset \ldots \subset \shS_{i-1} \subset \shS_i' \subset \shS_{i+1}' \subset \ldots \subset \shS' \, .$$ \end{enumerate} \end{proposition} \proof (i). The quotients of the filtration are $ \shS_k/ \shS_{k-1}$, $k \leq i$, which have positive slope, $\shS_i' /\shS_i \cong \O_Y$, and $\shS_k' / \shS'_{k-1} \cong \shS_k /\shS_{k-1}$ (Lemma \ref{extensionlemma}(ii)) for $k >i$, which have negative slope. These quotients are all semistable and the slope numbers are decreasing. (ii). The quotients $\shS_k/\shS_{k-1}$ are semistable with decreasing positive slopes for $k=1 , \ldots , i-1$. The quotients $\shS_k'/\shS_{k-1}' \cong \shS_k /\shS_{k-1}$ are semistable with decreasing negative slopes for $k =i+1 , \ldots , t$. The quotient $\shS_i' /\shS_{i-1}$ is isomorphic to $(\shS_i/\shS_{i-1})'$ by Lemma \ref{extensionlemma}(iii), hence semistable of degree $0$ by Lemma \ref{extensionlemma}(iv). \qed In the rest of this section we study the remaining case, where $\mu( \shS_n/ \shS_{n-1}) <0$. In this case it is not possible to describe the Harder-Narasimhan filtration of $\shS'$ explicitly. However we shall see that in this case the Hilbert-Kunz slope of $\shS'$ is smaller than the Hilbert-Kunz slope of $\shS$. We need the following two lemmata. \begin{lemma} \label{slopecompare} Let $\shT$ denote a locally free sheaf on $Y$ with Harder-Narasimhan filtration $\shT_k$, $\mu_k = \mu(\shT_k/\shT_{k-1})$ and $r_k= \rk(\shT_k/\shT_{k-1})$. Let $$ (\tau_i) =( \mu_1 , \ldots , \mu_1, \mu_2 , \ldots , \mu_2, \mu_3 , \ldots , \mu_{t-1}, \mu_t , \ldots , \mu_t)$$ denote the slopes where each $\mu_k$ occurs $r_k$-times.
Let $\shS \subseteq \shT$ denote a locally free subsheaf of rank $r$ and let $\sigma_i$, $i=1 , \ldots , r$ denote the corresponding numbers for $\shS$. Then $\sigma_i \leq \tau_i$ for $i=1 , \ldots , r$. Moreover, if $\shS$ is saturated {\rm(}meaning that the quotient sheaf is locally free{\rm)} and if no subsheaf $\shS_j$ of the Harder-Narasimhan filtration of $\shS$ occurs in the Harder-Narasimhan filtration of $\shT$, then $\sigma_i \leq \tau_{i+1}$ for $i= 1 , \ldots , r$. \end{lemma} \proof Let $i$, $i=1 , \ldots , r$ be given and let $j$ be such that $\rk (\shS_{j-1}) < i \leq \rk(\shS_j)$, hence $\sigma_i = \mu_j(\shS) = \mu(\shS_j/\shS_{j-1})$. We may assume that $i= \rk(\shS_j)$. Let $k$ be such that $\rk(\shT_{k-1}) < i \leq \rk(\shT_k)$. Therefore $\shS_j \not \subseteq \shT_{k-1}$, and the induced morphism $\shS_j \ra \shT /\shT_{k-1}$ is not trivial. Hence $\sigma_i=\mu_j(\shS) =\mu_{\rm min}(\shS_j) \leq \mu_{\rm max} (\shT/\shT_{k-1}) = \mu_k(\shT)=\tau_i$. Now suppose that $ \sigma_i > \tau_{i+1}$. Then necessarily $\sigma_i > \sigma_{i+1}$ and $\tau_i > \tau_{i+1}$ by what we have already proven. Therefore $i= \rk(\shS_j) = \rk(\shT_k)$. If $\shS_j \subseteq \shT_k$, then they are equal, since both sheaves are saturated of the same rank, but this is excluded by the assumptions. Hence $\shS_j \not\subseteq \shT_k$ and $\shS_j \ra \shT /\shT_k$ is non-trivial. Therefore $\sigma_i = \mu_{\rm min} (\shS_j) \leq \mu_{\rm max}(\shT/\shT_k)= \mu_{k+1}(\shT)= \tau_{i+1}$. \qed \begin{remark} If the numbers $\tau_i$ are given as in the previous lemma, then $\deg(\shT) = \sum_i \tau_i$ and $\mu_{HK}(\shT ) = \sum_{i} \tau_i^2$. \end{remark} \begin{lemma} \label{numkrit} Let $ \alpha_1 \leq \ldots \leq \alpha_r$ and $\beta_1 \leq \ldots \leq \beta_{r+1}$ denote nonnegative real numbers such that $\alpha_{i} \geq \beta_{i+1} $ for $i=1 , \ldots , r$ and $\sum_{i=1}^r \alpha_i = \sum_{i=1}^{r+1} \beta_i$. Then $ \sum_{i=1}^{r+1} \beta_i^2 \leq \sum_{i=1}^r \alpha_i^2 $ and equality holds if and only if $\alpha_i = \beta_{i+1}$ for all $i$. \end{lemma} \proof Let $\alpha_i = \beta_{i+1} +\delta_i$, $\delta_i \geq 0$. From $\sum_{i=1}^r \alpha_i = \sum_{i=1}^r \delta_i + \sum_{i=1}^r \beta_{i+1}= \sum_{i=1}^{r+1} \beta_i$ we get $\beta_1 = \sum_{i=1}^r \delta_i$ ($\leq \beta_2 $). The quadratic sums are $$ \sum_{i=1}^{r} \alpha_i^2 = \sum_{i=2}^{r+1} \beta_i^2 + \sum_{i=1}^r \delta_i^2 +2 \sum_{i=1}^r \delta_i \beta_{i+1} $$ and $$ \sum_{i=1}^{r+1} \beta_i^2 = ( \sum_{i=1}^r \delta_i)^2 + \sum_{i=2}^{r+1} \beta_i^2 = 2 \sum_{i< j }\delta_i \delta_j + \sum_{i=1}^r \delta_i^2 + \sum_{i=2}^{r+1} \beta_i^2 \, .$$ So we have to show that $ \sum_{i <j } \delta _i \delta _j \leq \sum_{i=1}^r \delta_i \beta_{i+1}$. But this is clear from $\sum_{j > i} \delta_j \leq \sum_{j=1}^r \delta_j = \beta_1 \leq \beta_{i+1}$ for all $i= 1 , \ldots , r$. Equality holds if and only if $\delta_i=0$ for all $i$. \qed
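For a quick sanity check of Lemma \ref{numkrit} with made-up numbers, take $r=2$, $(\alpha_1,\alpha_2)=(2,3)$ and $(\beta_1,\beta_2,\beta_3)=(1,1,3)$. Then $\alpha_i \geq \beta_{i+1}$ for $i=1,2$ and $\alpha_1+\alpha_2 = 5 = \beta_1+\beta_2+\beta_3$, and indeed
$$\beta_1^2+\beta_2^2+\beta_3^2 = 11 \leq 13 = \alpha_1^2+\alpha_2^2 \, ,$$
with strict inequality, since equality would force $\beta_1=0$.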
A cohomology class $c \in H^1(Y, \shS)$ corresponds to a geometric $\shS$-torsor $T \ra Y$. This is an affine-linear bundle on which $\shS$ acts transitively. A geometric realization is given as $T = \mathbb{P}(\shS'^\dual) - \mathbb{P}(\shS^\dual)$. The global cohomological properties of this torsor are related to the Hilbert-Kunz slope in the following way. \begin{theorem} \label{hilbertkunzaffinecriterion} Let $Y$ denote a smooth projective curve over an algebraically closed field of characteristic $0$. Let $\shS$ denote a locally free sheaf on $Y$ and let $c \in H^1(Y, \shS)$ denote a cohomology class giving rise to the extension $0 \ra \shS \ra \shS' \ra \O_Y \ra 0$ and the affine-linear torsor $\mathbb{P}(\shS'^\dual) - \mathbb{P}(\shS^\dual)$. Then the following are equivalent. \numiii \begin{enumerate} \item There exists a locally free quotient $\varphi: \shS \ra \shQ \ra 0$ such that $\mu_{\rm max}(\shQ) <0$ and the image $\varphi(c) \in H^1(Y, \shQ)$ is non-trivial. \item The torsor $\mathbb{P}(\shS'^\dual) - \mathbb{P}(\shS^\dual)$ is an affine scheme. \item The Hilbert-Kunz slope drops, that is $\mu_{HK}(\shS') < \mu_{HK}(\shS)$. \end{enumerate} \end{theorem} \proof The equivalence (i) $\Leftrightarrow$ (ii) was shown in \cite[Theorem 2.3]{brennertightplus}. The implication (iii) $\Rightarrow$ (i) follows from Proposition \ref{positive}: if (i) does not hold, then we are in the situation of Proposition \ref{positive}, where $\mu(\shS_n/\shS_{n-1}) \geq 0$. The explicit description of the Harder-Narasimhan filtration of $\shS'$ gives in both cases that $\mu_{HK}(\shS') = \mu_{HK}(\shS)$. So suppose that (i) holds. This means that there exists a subsheaf $\shS_n \subseteq \shS$ occurring in the Harder-Narasimhan filtration of $\shS$ such that $c$ stems from $c_{n} \in H^1(Y, \shS_{n})$ and such that its image in $ H^1(Y, \shS/\shS_{n-1})$ is non-trivial with $ \mu_{\rm max} (\shS/\shS_{n-1}) =\mu(\shS_{n}/\shS_{n-1}) = \mu_n < 0$. Let $\shT_1 \subset \ldots \subset \shT_t=\shS'$ denote the Harder-Narasimhan filtration of $\shS'$ with slopes $\mu_k= \mu(\shT_k/\shT_{k-1})$ and ranks $r_k=\rk(\shT_k/\shT_{k-1})$. Suppose that the maximal slope $\mu(\shT_1)$ is positive. Then the induced mapping $\shT_1 \ra \shS'/\shS \cong \O_Y$ is trivial, and $\shT_1 \subseteq \shS$. This is then also the maximal destabilizing subsheaf of $\shS$, since $\mu_{\rm max} (\shS) \leq \mu_{\rm max}(\shS') =\mu(\shT_1)$. Therefore $\mu_{HK}(\shS) = \mu_{HK}(\shT_1) + \mu_{HK}(\shS/\shT_1)$ and $\mu_{HK}(\shS') = \mu_{HK}(\shT_1) + \mu_{HK}(\shS'/\shT_1)$ by Proposition \ref{hilbertkunzslopeproperties}(ii). Since $\shS'/\shT_1$ is the extension of $\shS/\shT_1$ defined by the image of the cohomology class in $H^1(Y, \shS/\shT_1)$ (Lemma \ref{extensionlemma}(iii)), we may mod out $\shT_1$. Note that this does not change the condition in (i). Hence we may assume inductively that $\mu_{\rm max}(\shS) \leq 0$ and $\mu_{\rm max}(\shS') \leq 0$. Now suppose that $\shT_1$ has degree $0$. Again, if $\shT_1 \subseteq \shS$, then this is also the maximal destabilizing subsheaf of $\shS$, and we can mod out $\shT_1$ as before. So suppose that $\shT_1 \ra \O_Y$ is non-trivial. Then this mapping is surjective; let $\shK \subset \shS$ denote its kernel. This means that the extension defined by $c \in H^1(Y,\shS)$ comes from the extension given by $0 \ra \shK \ra \shT_1 \ra \O_Y \ra 0$, that is, $c$ comes from a class $\tilde{c} \in H^1(Y, \shK)$. $\shK$ is semistable, since its degree is $0$ and $\mu_{\rm max}(\shS) \leq 0$. But then the image of $c$ is $0$ in every quotient sheaf of $\shS$ with negative maximal slope, which contradicts the assumptions. Therefore we may assume that $\mu_{\rm max} (\shS') < 0$. We want to apply Lemma \ref{slopecompare} to $\shS \subset \shS' =\shT$. Assume that $\shS$ and $\shS'$ have a common subsheaf occurring in both Harder-Narasimhan filtrations. Then they have the same maximal destabilizing subsheaf $\shF=\shS_1 =\shT_1$, which has negative degree.
If $c$ comes from $\tilde{c} \in H^1(Y,\shF)$, then $\shF \subset \shF' \subseteq \shS'$ and $\mu(\shF)= \deg(\shF)/\rk (\shF) < \deg(\shF)/(\rk(\shF)+1) = \mu(\shF')$, which contradicts the maximality of $\shF$. Hence the image of $c$ in $H^1(Y, \shS/\shF)$ is not zero and we can mod out $\shF$ as before. Therefore we may assume that $\shS$ and $\shS'$ do not have any common subsheaf in their Harder-Narasimhan filtrations. Then Lemma \ref{slopecompare} yields that $\sigma_i \leq \tau_{i+1}$, and all these numbers are $\leq 0$ and moreover $\tau_i <0$. Lemma \ref{numkrit} applied to $\alpha_i=- \sigma_i$ and $\beta_i=-\tau_i$ yields that $\sum_{i=1}^r \sigma_i^2 \geq \sum_{i=1}^{r+1} \tau_i^2$, and the inequality is strict, since equality in Lemma \ref{numkrit} would force $\beta_1 = -\tau_1 = 0$. \qed \begin{remark} Suppose that $\shS$ is a semistable locally free sheaf of negative degree, and let $c \in H^1(Y, \shS)$ with corresponding extension $\shS'$. Then Theorem \ref{hilbertkunzaffinecriterion} together with Lemma \ref{semistableminimal} yield the inequalities $$ \frac{\deg(\shS)^2}{r+1} \leq \mu_{HK} (\shS') \leq \frac{\deg(\shS)^2}{r} \, .$$ If $\shS'$ is also semistable, then we have equality on the left. \end{remark} \section{A Hilbert-Kunz criterion for solid closure} We now come back to our original setting of interest, that of a two-dimen\-sional normal standard-graded domain $R$ over an algebraically closed field $K$. A homogeneous $R_+$-primary ideal $I=(f_1 , \ldots , f_n)$ gives rise to the syzygy bundle $\Syz(f_1 , \ldots , f_n)(0)$ on $Y= \Proj R$ defined by the presenting sequence $$0 \lra \Syz(f_1 , \ldots , f_n)(m) \lra \bigoplus_{i=1}^n \O_Y(m-d_i) \stackrel{f_1 , \ldots , f_n}{\lra} \O_Y(m) \lra 0 \, .$$ Another homogeneous element $f$ of degree $m$ yields an extension $$0 \lra \Syz(f_1 , \ldots , f_n)(m) \lra \Syz(f_1 , \ldots , f_n,f)(m) \lra \O_Y \lra 0$$ which corresponds to the cohomology class $\delta(f) \in H^1(Y,\Syz(f_1 , \ldots , f_n)(m) )$ coming from the presenting sequence via the connecting homomorphism $$\delta: H^0(Y, \O_Y(m))=R_m \ra H^1(Y, \Syz(f_1 , \ldots , f_n)(m)) \, .$$ The Hilbert-Kunz multiplicities of the ideals and the Hilbert-Kunz slopes of the syzygy bundles are related in the following way. \begin{lemma} \label{hilbertkunzmultiplicityslope} Let $K$ denote an algebraically closed field of characteristic $0$. Let $R$ denote a standard-graded two-dimensional normal $K$-domain, $Y= \Proj R$. Let $I=(f_1 , \ldots , f_n)$ be a homogeneous $R_+$-primary ideal and let $f$ denote a homogeneous element of degree $m$. Then $e_{HK}(I)= e_{HK}((I,f))$ if and only if $\mu_{HK}(\Syz(f_1 , \ldots , f_n)(m)) =\mu_{HK}(\Syz(f_1 , \ldots , f_n,f )(m))$. \end{lemma} \proof Let $\mu_k$ and $r_k$ ($\tilde{\mu}_k$ and $\tilde{r}_k$) denote the slopes and the ranks in the Harder-Narasimhan filtration of $\Syz(f_1 , \ldots , f_n)(0)$ (of $\Syz(f_1 , \ldots , f_n, f)(0)$ respectively).
For the Hilbert-Kunz multiplicities of the ideals $(f_1 , \ldots , f_n)$ and $(f_1 , \ldots , f_n, f)$ we have to compare $$e_{HK}(I) = \frac{1}{2 \deg(Y)} \big(\sum_{k=1}^t r_k \mu_k^2 - \deg(Y)^2 \sum_{i=1}^n d_i^2 \big)$$ and $$e_{HK}((I,f)) = \frac{1}{2\deg(Y)} \big(\sum_{k=1}^{\tilde{t}} \tilde{r}_k \tilde{\mu}_k^2 - \deg(Y)^2 (m^2+\sum_{i=1}^n d_i^2) \big) \, .$$ The extension defined by $c=\delta(f)\in H^1(Y, \Syz(f_1 , \ldots , f_n)(m))$ is $$0 \lra \shS=\Syz(f_1 , \ldots , f_n)(m) \lra \shS'=\Syz(f_1 , \ldots , f_n,f)(m) \lra \O_Y \lra 0$$ and the Hilbert-Kunz slopes of these sheaves are due to Proposition \ref{hilbertkunzslopeproperties} (v) (since $\deg(\Syz(f_1 , \ldots , f_n)(0)) = -\deg(Y) \sum_{i=1}^n d_i$) $$\mu_{HK}(\shS)= \sum_{k=1}^t r_k \mu_k^2 +2 (- \sum_{i=1}^n d_i\deg(Y)) m \deg(Y) + (n-1)m^2 \deg(Y)^2$$ and $\mu_{HK}(\shS')=$ \begin{eqnarray*} &=& \sum_{k=1}^{\tilde{t}} \tilde{r}_k \tilde{\mu}_k^2 + 2 (- ( \sum_{i=1}^n d_i +m) \deg(Y)) m \deg(Y) + nm^2 \deg(Y)^2 \cr &=& \sum_{k=1}^{\tilde{t}} \tilde{r}_k \tilde{\mu}_k^2 - 2 (\sum_{i=1}^n d_i) m \deg(Y)^2 + (n-1)m^2 \deg(Y)^2 - m^2 \deg(Y)^2 \, . \end{eqnarray*} So, up to the factor $1/(2 \deg(Y))$ in the first case, both differences $e_{HK}((I,f)) - e_{HK}(I)$ and $\mu_{HK}(\shS') - \mu_{HK}(\shS)$ are equal to $$ \sum_{k=1}^{\tilde{t}} \tilde{r}_k \tilde{\mu}_k^2 - \sum_{k=1}^t r_k \mu_k^2 - m^2 \deg(Y)^2 \, .$$ Therefore $e_{HK}(I)= e_{HK}((I,f))$ if and only if $$\mu_{HK} (\Syz(f_1 , \ldots , f_n)(m))= \mu_{HK} (\Syz(f_1 , \ldots , f_n,f)(m)) \, .$$ \qed \begin{remark} Let $0 \ra \shS \ra \shT \ra \shQ \ra 0$ denote a short exact sequence of locally free sheaves. Then the alternating sum of the Hilbert-Kunz slopes, that is, $\mu_{HK}(\shS) - \mu_{HK}(\shT) + \mu_{HK}(\shQ)$, does not change when we tensor the sequence with an invertible sheaf. This follows from Proposition \ref{hilbertkunzslopeproperties}(v). For an extension $0 \ra \shS \ra \shS' \ra \O_Y \ra 0$ this number is $\geq 0$ by Theorem \ref{hilbertkunzaffinecriterion}, and we suspect that this is true in general. From the presenting sequence $0 \ra \Syz(f_1 , \ldots , f_n)(0) \ra \bigoplus_{i=1}^n \O(-d_i) \ra \O_Y \ra 0$ it follows via $e_{HK}(I)= \frac{1}{2 \deg(Y)}\big( \mu_{HK} (\Syz(f_1 , \ldots , f_n)(0))- \mu_{HK} (\bigoplus_{i=1}^n \O(-d_i)) \big)$ that the Hilbert-Kunz multiplicity of an ideal is always nonnegative. In fact $I=R$ is the only ideal with $e_{HK}(I)=0$. This follows from Theorem \ref{hilbertkunzsolid} below, since $1 \not\in I^*$ for $I \neq R$. \end{remark} We now come to the main result of this paper. Recall that the solid closure of an $\fom$-primary ideal $I=(f_1 , \ldots , f_n)$ in a two-dimensional normal excellent domain $R$ is given by the condition that $f \in (f_1 , \ldots , f_n)^*$ if and only if $D(\fom) \subset \Spec R[T_1 , \ldots , T_n]/(f_1T_1 + \ldots + f_nT_n+f)$ is not an affine scheme. In positive characteristic this is the same as tight closure, see \cite[Theorem 8.6]{hochstersolid}. In the case of an $R_+$-primary homogeneous ideal in a standard-graded normal $K$-domain this is equivalent to the property that the torsor $ \mathbb{P}({\shS '} ^\dual) - \mathbb{P}(\shS^\dual)$ over the corresponding curve $Y= \Proj R$ is not affine (see \cite[Proposition 3.9]{brennertightproj}). This relates solid closure to the setting of the previous section. \begin{theorem} \label{hilbertkunzsolid} Let $K$ denote an algebraically closed field. Let $R$ denote a standard-graded two-dimensional normal $K$-domain. Let $I$ be a homogeneous $R_+$-primary ideal and let $f$ denote a homogeneous element.
Then $f \in I^*$ if and only if $e_{HK} (I)= e_{HK}((I,f))$. \end{theorem} \proof If the characteristic is positive, then this is a standard result from tight closure theory as mentioned in the introduction. So suppose that the characteristic is $0$. Let $I=(f_1 , \ldots , f_n)$ be generated by homogeneous elements, and set $m =\deg(f)$. The containment in the solid closure, $f \in (f_1 , \ldots , f_n)^*$, is equivalent to the non-affineness of the torsor $ \mathbb{P}({\shS '} ^\dual) - \mathbb{P}(\shS^\dual)$ \cite[Proposition 3.9]{brennertightproj}, where $\shS = \Syz(f_1 , \ldots , f_n)(m)$ and $\shS'$ is the extension given by the cohomology class $\delta(f)$. Hence the result follows from Theorem \ref{hilbertkunzaffinecriterion} and Lemma \ref{hilbertkunzmultiplicityslope}. \qed \end{document}
\begin{document} \title{Admissible sheaves on $\p3$} \author{Marcos Jardim \\ IMECC - UNICAMP \\ Departamento de Matem\'atica \\ Caixa Postal 6065 \\ 13083-970 Campinas-SP, Brazil } \maketitle \begin{abstract} Admissible locally-free sheaves on $\p3$, also known in the literature as mathematical instanton bundles, arise in twistor theory, and are in 1-1 correspondence with instantons on $\mathbb{R}^4$. In this paper, we study admissible sheaves on $\p3$ from the algebraic geometric point of view. We discuss examples and compare the admissibility condition with semistability and splitting type. \end{abstract} \tableofcontents \baselineskip18pt \section*{Introduction} \label{intro} An intense interest on the construction and classification of locally-free sheaves on the 3-dimensional complex projective space started on the late 70's, when twistor theory yielded a 1-1 correspondence between instantons (i.e. anti-self-dual connections of finite $L^2$ norm) on $\mathbb{R}^4$ and certain holomorphic vector bundles on $\p3$; this is the celebrated Penrose-Ward correspondence \cite{Ma,WW}. This fact was later used by Atiyah, Drinfeld, Hitchin and Manin to construct and classify all instantons \cite{ADHM}. Since then, many authors have studied the so-called {\em mathematical} (or {\em complex}) {\em instanton bundles}, defined in the literature as rank 2 locally-free sheaves $E$ on $\p3$ with $c_1(E)=c_3(E)=0$ and $c_2(E)=c>0$ satisfying $H^0(\p3,E(-1))=H^1(\p3,E(-2))=0$; see \cite{CTT} for a recent brief survey of this topic. These correspond to $SL(2,\cpx)$ instantons on $\mathbb{R}^4$ of charge $c$. The correct generalization for higher rank sheaves is given by Manin and Drinfeld (see \cite{Ma}), and leads us to the key definition of this paper: \begin{definition} An admissible sheaf on $\p3$ is a coherent sheaf $E$ satisfying: $$ H^p(\p3,E(k))=0 ~~ {\it for} ~~ p\leq1 ~,~ p+k\leq-1, ~~ {\it and} ~~ p\geq2 ~,~ p+k\geq0 $$ \end{definition} Admissible locally-free sheaves of rank $r$ and vanishing first Chern class are in 1-1 correspondence with $SL(r,\cpx)$ instantons on $\mathbb{R}^4$ \cite{Ma}. In this paper, we study mainly torsion-free admissible sheaves with vanishing first Chern class, which can be regarded as a generalization of instantons. Our focus is on the algebraic geometric properties of such objects, like semistability and splitting type. The paper is organized as follows. In Section \ref{s1} we remark that admissible sheaves are in 1-1 correspondence with certain monads, exploring a few properties and some examples in Section \ref{ex}. We then discuss how admissibility and semistability compare with one another in Section \ref{s2}, and conclude with an analysis of the splitting type of torsion-free admissible sheaves with vanishing first Chern class in the last section. \paragraph{Acknowledgment.} Some of the results presented here were obtained in joint work with Igor Frenkel \cite{FJ2}; we thank him for his continued support. We also thank the organizers and participants of the XVIII Brazilian Algebra Meeting. \section{Monads} \label{s1} Let $X$ be a smooth projective variety. A {\em monad} on $X$ is a sequence $V_{\bullet}$ of the following form: \begin{equation} \label{m1} \cV_{\bullet} ~ : ~ 0 \to V_{-1} \stackrel{\alpha}{\longrightarrow} V_{0} \stackrel{\beta}{\longrightarrow} V_{1} \to 0 \end{equation} which is exact on the first and last terms. Here, $V_k$ are locally free sheaves on $X$. 
The sheaf $E=\ker\beta/{\rm Im}~\alpha$ is called the cohomology of the monad $\cV_{\bullet}$, also denoted by $H^1(\cV_{\bullet})$. In this paper, we will focus on the so-called {\em special monads} on $\p3$, which are of the form: $$ 0\to V\otimes\op3(-1) \stackrel{\alpha}{\longrightarrow} W\otimes\op3 \stackrel{\beta}{\longrightarrow} V'\otimes\op3(1) \to 0 ~, $$ where $\alpha$ is injective and $\beta$ is surjective. The existence of such objects has been completely settled by Floystad in \cite{F}; let $v=\dim V$, $w=\dim W$ and $v'=\dim V'$. \begin{theorem} \label{exist} There exists a special monad on $\p3$ as above if and only if at least one of the following conditions holds: \begin{itemize} \item $w \geq 2v'+ 2$ and $w \geq v+v'$; \item $w\geq v + v' + 3$. \end{itemize} \end{theorem} Monads have appeared in a wide variety of contexts within algebraic geometry, like the construction of locally free sheaves on complex projective spaces, the study of curves in $\p3$ and surfaces in $\p4$. In this section, we will see how they are related to admissible sheaves on $\p3$. \begin{theorem} \label{T1} Every admissible torsion-free sheaf $E$ on $\p3$ can be obtained as the cohomology of a special monad \begin{equation} \label{m2} 0\to V\otimes\op3(-1) \stackrel{\alpha}{\longrightarrow} W\otimes\op3 \stackrel{\beta}{\longrightarrow} V'\otimes\op3(1) \to 0 ~, \end{equation} where $V=H^1(\p3,E\otimes\Omega^2_{\p3}(1))$, $W=H^1(\p3,E\otimes\Omega^1_{\p3})$ and $V'=H^1(\p3,E(-1))$. \end{theorem} \begin{proof} Manin proves the case in which $E$ is locally-free in \cite[p. 91]{Ma}, using the Beilinson spectral sequence. However, the argument generalizes word for word to the case in which $E$ is torsion-free; just note that the projection formula $$ R^ip_{1*}\left(p_1^*\op3(k)\otimes p_2^* F\right)= \op3(k)\otimes H^i(\p3,F) $$ holds for every torsion-free sheaf $F$, where $p_1$ and $p_2$ are the natural projections of $\p3\times\p3$ onto the first and second factors. \end{proof} Clearly, the cohomology sheaf $E$ is always coherent, but more can be said in particular situations. Note that $\alpha\in{\rm Hom}(V,W)\otimes H^0(\op3(1))$ and $\beta\in{\rm Hom}(W,V')\otimes H^0(\op3(1))$. Clearly, the surjectivity of $\beta$ as a sheaf map implies that the localized map $\beta_x$ is surjective for all $x\in\p3$, while the injectivity of $\alpha$ as a sheaf map implies that the localized map $\alpha_x$ is injective only for generic $x\in\p3$. \begin{theorem} \label{T2} The cohomology $E$ of the monad \begin{equation} \label{m4} 0 \to V\otimes\op3(-1) \stackrel{\alpha}{\longrightarrow} W\otimes\op3 \stackrel{\beta}{\longrightarrow} V'\otimes\op3(1) \to 0 \end{equation} is a coherent admissible sheaf with: $$ {\rm rank}(E) = \dim W - \dim V - \dim V' ~~ , ~~ c_1(E) = \dim V - \dim V' $$ $$ ch_2(E) = -\frac{1}{2}(\dim V + \dim V') ~~ {\rm and} ~~ ch_3(E) = \frac{1}{6}(\dim V - \dim V') ~ . $$ Moreover: \begin{itemize} \item $E$ is torsion-free if and only if the localized maps $\alpha_x$ are injective away from a subset of codimension 2; \item $E$ is reflexive if and only if the localized maps $\alpha_x$ are injective away from finitely many points; \item $E$ is locally-free if and only if the localized maps $\alpha_x$ are injective for all $x\in\p3$. \end{itemize} \end{theorem} \begin{proof} The kernel sheaf ${\cal K}=\ker\beta$ is locally-free, and one has the sequence: \begin{equation} \label{ker} 0 \to V\otimes\op3(-1) \stackrel{\alpha}{\longrightarrow} {\cal K} \to E \to 0 \end{equation} so $E$ is clearly coherent.
Notice also that: $$ {\rm ch}(E) = \dim W \cdot {\rm ch}(\op3) - \dim V \cdot {\rm ch}(\op3(-1)) - \dim V' \cdot {\rm ch}(\op3(1)) $$ from which the calculation of the Chern classes of $E$ follows easily. Taking the dual of the sequence (\ref{ker}), we obtain: \begin{equation} \label{ker*} 0 \to E^* \to {\cal K}^* \stackrel{\alpha^*}{\longrightarrow} V^*\otimes\op3(1) \to {\rm Ext}^1(E,\op3) \to 0 \end{equation} since ${\cal K}$ is locally-free. In particular, ${\rm Ext}^2(E,\op3)={\rm Ext}^3(E,\op3)=0$ and $$ {\cal I} = {\rm supp}~{\rm Ext}^1(E,\op3) = \{ x\in\p3 ~ | ~ \alpha_x ~ {\rm is~not~injective} ~ \} ~. $$ So it is now enough to argue that $E$ is torsion-free if and only if $\dim {\cal I}\leq1$ and that $E$ is reflexive if and only if $\dim {\cal I}\leq0$; the third statement is clear. Recall that the $m^{\rm th}$-singularity set of a coherent sheaf $\cal F$ is given by: $$ S_m({\cal F}) = \{ x\in\p3 ~|~ dh({\cal F}_x) \geq 3-m \} $$ where $dh({\cal F}_x)$ stands for the homological dimension of ${\cal F}_x$ as an ${\cal O}_x$-module: $$ dh({\cal F}_x) = d ~~~ \Longleftrightarrow ~~~ \left\{ \begin{array}{l} {\rm Ext}^d_{{\cal O}_x}({\cal F}_x,{\cal O}_x) \neq 0 \\ {\rm Ext}^p_{{\cal O}_x}({\cal F}_x,{\cal O}_x) = 0 ~~ \forall p>d \end{array} \right. $$ In the case at hand, we have that $dh(E_x) = 1$ if $x\in {\cal I}$, and $dh(E_x) = 0$ if $x\notin {\cal I}$. Therefore $S_0(E)=S_1(E)=\emptyset$, while $S_2(E)={\cal I}$. It then follows from \cite[Proposition 1.20]{ST} that: \begin{itemize} \item if $\dim {\cal I} \leq 1$, then $\dim S_m(E)\leq m-1$ for all $m<3$, hence $E$ is a locally 1$^{\rm st}$-syzygy sheaf; \item if $\dim {\cal I} \leq 0$, then $\dim S_m(E)\leq m-2$ for all $m<3$, hence $E$ is a locally 2$^{\rm nd}$-syzygy sheaf. \end{itemize} The desired statements follow from the observation that $E$ is torsion-free if and only if it is a locally 1$^{\rm st}$-syzygy sheaf, while $E$ is reflexive if and only if it is a locally 2$^{\rm nd}$-syzygy sheaf \cite[p. 148-149]{OSS}. \end{proof} As part of the proof above, it is worth emphasizing that if $E$ is admissible then ${\rm Ext}^2(E,\op3)={\rm Ext}^3(E,\op3)=0$. It follows from Theorems \ref{T1} and \ref{T2} that there exists a (set theoretical) 1-1 correspondence between special monads and admissible sheaves. As shown by Manin and Drinfeld (see \cite{Ma}), this correspondence is categorical. \begin{theorem} The functor that associates a special monad on $\p3$ to its cohomology sheaf defines an equivalence between the categories of special monads and admissible sheaves. \end{theorem} We complete this section with an important fact: \begin{proposition}\label{dv} If $E$ is an admissible sheaf, then $H^0(\p3,E^*(k))=0$ for all $k\leq-1$. \end{proposition} \begin{proof} $E$ is the cohomology of the monad (\ref{m2}); with $V$, $W$ and $V'$ as in Theorem \ref{T1}, one has the sequences $$ 0 \to {\cal K}(k) \to W\otimes\op3(k) \to V'\otimes\op3(k+1) \to 0 ~~ {\rm and} $$ $$ 0 \to V\otimes\op3(k-1) \to {\cal K}(k) \to E(k) \to 0 ~ , $$ where ${\cal K}=\ker\{W\otimes\op3\to V'\otimes\op3(1)\}$ is a locally-free sheaf. It follows from the first sequence that: $$ H^0(\p3,{\cal K}(k))=0 ~~ \forall k\leq-1 ~,~ H^3(\p3,{\cal K}(k))=0 ~~ \forall k\geq-3 $$ $$ {\rm and} ~~ H^0(\p3,{\cal K}^*(k))=0 ~~ \forall ~ k\leq-1 ~~, ~~ {\rm by~Serre~duality}.$$ The proposition then follows easily from the dual of the second sequence. \end{proof}
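Before turning to examples, here is a routine numerical check of the Chern character formulas in Theorem \ref{T2}, carried out for the dimension vector that will reappear in the next section: for $\dim V=\dim V'=1$ and $\dim W=4$ one gets
$$ {\rm rank}(E)=2 ~~,~~ c_1(E)=0 ~~,~~ ch_2(E)=-1 ~~,~~ ch_3(E)=0 ~, $$
hence $c_2(E)=\frac{1}{2}\big(c_1(E)^2-2\,ch_2(E)\big)=1$ and $c_3(E)=0$, in agreement with the rank 2 examples below.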
\section{Examples of admissible sheaves}\label{ex} Let us now study various examples of admissible sheaves on $\p3$. Theorem \ref{exist} implies that there are admissible coherent sheaves in rank 0 and 1, but there can be no admissible sheaves with zero first Chern class in these ranks, apart from the trivial ones. Examples of admissible sheaves with vanishing first Chern class start in rank 2. The basic one is an admissible torsion-free sheaf $E$ which is not locally-free; it arises as the cohomology $E$ of the monad: \begin{equation} \label{ex-tf} \op3(-1) \stackrel{\alpha}{\rightarrow} \op3^{\oplus4} \stackrel{\beta}{\rightarrow} \op3(1) \end{equation} $$ \alpha = \left(\begin{array}{c} x \\ y \\ 0 \\ 0 \end{array}\right) ~~{\rm and}~~ \beta= (-y ~~ x ~~ z ~~ w) $$ It is easy to see that $\beta$ is surjective for all $[x:y:z:w]\in\p3$, while $\alpha$ is injective provided $(x,y)\neq(0,0)$. It then follows from Theorem \ref{T2} that $E$ is torsion-free, but not locally-free. In particular, the singularity set of $E$ (i.e. the support of $E^{**}/E$) consists of the line $\{x=y=0\}\subset\p3$. Note also that $c_2(E)=1$ and $c_1(E)=c_3(E)=0$. Reflexive sheaves on $\p3$ have been extensively studied in a series of papers by Hartshorne \cite{Ha}, among other authors. In particular, it was shown that a rank 2 reflexive sheaf ${\cal F}$ on $\p3$ is locally-free if and only if $c_3({\cal F})=0$. Therefore, we conclude: \begin{proposition} ({\bf Hartshorne \cite{Ha}}) There are no rank 2 admissible sheaves on $\p3$ which are reflexive but not locally-free. \end{proposition} The situation for higher rank is quite different, though, and it is easy to construct a rank 3 admissible sheaf which is reflexive but not locally-free. Setting $w=5$ and $v=v'=1$, consider the monad: \begin{equation} \label{ex-ref} \op3(-1) \stackrel{\alpha}{\rightarrow} \op3^{\oplus5} \stackrel{\beta}{\rightarrow} \op3(1) \end{equation} $$ \alpha = \left(\begin{array}{c} x \\ y \\ 0 \\ 0 \\ z \end{array}\right) ~~{\rm and}~~ \beta= (-y ~~ x ~~ z ~~ w ~~ 0) $$ Again, it is easy to see that $\beta$ is surjective for all $[x:y:z:w]\in\p3$, while $\alpha$ is injective provided $(x,y,z)\neq(0,0,0)$. It then follows from Theorem \ref{T2} that $E$ is reflexive, but not locally-free; its singularity set is just the point $[0:0:0:1]\in\p3$. Note also that $c_2(E)=1$ and $c_1(E)=c_3(E)=0$. Finally, we give an example of a rank 2 admissible locally-free sheaf. Setting $w=4$ and $v=v'=1$, consider the monad: \begin{equation} \label{ex-lf} \op3(-1) \stackrel{\alpha}{\rightarrow} \op3^{\oplus4} \stackrel{\beta}{\rightarrow} \op3(1) \end{equation} $$ \alpha = \left(\begin{array}{c} x \\ y \\ -w \\ z \end{array}\right) ~~{\rm and}~~ \beta= (-y ~~ x ~~ z ~~ w) $$ It is easy to see that $\beta$ is surjective and $\alpha$ is injective for all $[x:y:z:w]\in\p3$, so $E$ is indeed locally-free; note that $c_2(E)=1$ and $c_1(E)=c_3(E)=0$. With these simple examples in low rank, we can produce high rank admissible sheaves using the following: \begin{proposition} \label{ext} If $F'$ and $F''$ are coherent admissible sheaves, then any extension $E$: $$ 0 \to F' \to E \to F'' \to 0 $$ is also admissible. \end{proposition} The proof is an easy consequence of the associated long exact sequence in cohomology, and it is left to the reader. As a consequence of Serre duality, we have: \begin{proposition} \label{tensor-dual} If $E$ is a locally-free admissible sheaf, then $E^*$ is also admissible. \end{proposition}
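For the reader's convenience, we sketch the Serre duality argument behind Proposition \ref{tensor-dual}. For a locally-free sheaf $E$ on $\p3$ one has
$$ H^p(\p3,E^*(k)) \cong H^{3-p}(\p3,E(-k-4))^* \, , $$
so the vanishing required of $E^*$ for $p\leq1$, $p+k\leq-1$ corresponds to the vanishing of $H^{p'}(\p3,E(k'))$ with $p'=3-p\geq2$ and $p'+k'=-(p+k)-1\geq0$, while the conditions for $p\geq2$, $p+k\geq0$ correspond to those with $p'\leq1$ and $p'+k'\leq-1$; all of these hold because $E$ is admissible.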
\section{Semistability of torsion-free admissible sheaves} \label{s2} Recall that a torsion-free sheaf $E$ on $\p3$ is said to be {\em semistable} if for every coherent subsheaf $0\neq F\hookrightarrow E$ we have $$ \mu(F)=\frac{c_1(F)}{{\rm rk}(F)} \leq \frac{c_1(E)}{{\rm rk}(E)}=\mu(E) ~ . $$ Furthermore, if for every coherent subsheaf $0\neq F\hookrightarrow E$ with $0<{\rm rk}(F)<{\rm rk}(E)$ we have $$ \frac{c_1(F)}{{\rm rk}(F)} < \frac{c_1(E)}{{\rm rk}(E)} ~ , $$ then $E$ is said to be {\em stable}. It is also important to remember that: \begin{itemize} \item $E$ is (semi)stable if and only if $E^*$ is; \item $E$ is (semi)stable if and only if $\mu(F)<\mu(E)$ ($\mu(F)\leq\mu(E)$) for all coherent subsheaves $F\hookrightarrow E$ whose quotient $E/F$ is torsion-free; \item $E$ is (semi)stable if and only if $\mu(Q)>\mu(E)$ ($\mu(Q)\geq\mu(E)$) for all torsion-free quotients $E\to Q\to 0$ with $0<{\rm rk}(Q)<{\rm rk}(E)$. \end{itemize} Furthermore, if $E$ is locally-free, it is enough to test the locally-free subsheaves $F\hookrightarrow E$ with $0<{\rm rk}(F)<{\rm rk}(E)$ to conclude that $E$ is stable. The goal of this section is to compare the semistability and admissibility conditions. We provide two positive results for admissible sheaves of rank 2 and 3. \begin{theorem} \label{ss-adm} Let $E$ be a semistable torsion-free sheaf with $c_1(E)=0$. Then $E$ is admissible if and only if $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$. Furthermore, an admissible torsion-free sheaf is stable if and only if $H^0(\p3,E)=0$. \end{theorem} In other words, if $E$ is a semistable torsion-free sheaf with $c_1(E)=0$ and $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$, then $E$ is admissible. \begin{proof} Semistability implies immediately that $H^0(\p3,E(k))=0$, for all $k\leq-1$. If $E$ is locally-free, then by Serre duality we have $H^3(\p3,E(k))=0$ for all $k\geq-3$, since $E^*$ is also semistable. If $E$ is torsion-free, we can use the semistability of $E^{**}$ and the sequence $$ 0 \to E \to E^{**} \to Q \to 0 ~~,~~ Q=E^{**}/E $$ to conclude that $H^3(\p3,E(k))=H^3(\p3,E^{**}(k))=0$ for all $k\geq-3$, since $Q$ is supported in dimension less than or equal to 1. Now we assume that $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$, and let $\wp$ be a plane in $\p3$. From the sequence: \begin{equation} \label{wp} 0 \to E(k-1) \to E(k) \to E|_{\wp}(k) \to 0 \end{equation} we conclude that $H^0(\wp,E|_{\wp}(-1))=H^2(\wp,E|_{\wp}(-2))=0$. \begin{claim} If $V$ is a torsion-free sheaf on $\p2$ with $H^0(\p2,V(-1))=H^2(\p2,V(-2))=0$, then $H^0(\p2,V(k))=0$ for $k\leq-1$ and $H^2(\p2,V(k))=0$ for $k\geq-2$. \end{claim} \noindent {\it Proof of the claim:} For any line $\ell\subset\p2$, we have the sequence $$ 0 \to V(k-1) \to V(k) \to V|_{\ell}(k) \to 0 ~~, $$ so that $$ 0 \to H^0(\p2,V(k-1)) \to H^0(\p2,V(k)) ~~ {\rm and} ~~ H^2(\p2,V(k-1)) \to H^2(\p2,V(k)) \to 0 ~~. $$ The claim follows easily by induction. Returning to (\ref{wp}), we also have: $$ H^0(\wp,E|_{\wp}(k)) \to H^1(\p3,E(k-1)) \to H^1(\p3,E(k)) $$ so, for $k\leq-1$, if $H^1(\p3,E(k))=0$, then also $H^1(\p3,E(k-1))=0$. Thus by induction we conclude that $H^1(\p3,E(k))=0$ for all $k\leq-2$. Similarly, we have: $$ H^2(\p3,E(k-1)) \to H^2(\p3,E(k)) \to H^2(\wp,E|_{\wp}(k)) $$ and again by induction we conclude that $H^2(\p3,E(k))=0$ for all $k\geq-2$, as desired. \end{proof} The converse statement seems to depend on the rank, as we will see in the two results below. \begin{theorem}\label{ss-rk2} Every rank 2 admissible torsion-free sheaf $E$ with $c_1(E)=0$ is semistable.
Moreover, if $H^0(\p3,E^*)=0$, then $E$ is stable. \end{theorem} \begin{proof} First, assume that $L$ is a rank 2 reflexive sheaf with $c_1(L)=0$ and $H^0(L(k))=0$ for all $k\leq-1$; we show that $L$ is semistable. Indeed, let $F\hookrightarrow L$ be a torsion-free subsheaf of rank 1, with torsion-free quotient $Q=L/F$. By Lemma 1.1.16 in \cite[p. 158]{OSS}, it follows that $F$ is also reflexive; but every rank 1 reflexive sheaf is locally-free, thus $F=\op3(d)$. Any non-zero map $F\to L$ yields a section in $H^0(\p3,L(-d))$, hence $c_1(F)=d\leq0$ and $L$ is semistable; it is moreover stable if $H^0(\p3,L)=0$. Now if $E$ is a rank 2 admissible torsion-free sheaf with $c_1(E)=0$, then $L=E^*$ is a rank 2 reflexive sheaf with $c_1(L)=0$ and $H^0(L(k))=0$ for all $k\leq-1$, by Proposition \ref{dv} and since the dual of any coherent sheaf is always reflexive. Thus $E^*$ is semistable, so $E$ is as well. Clearly, $E$ is stable if $H^0(\p3,E^*)=0$, as desired. \end{proof} A similar result for rank 3 sheaves requires a stronger hypothesis: reflexivity, rather than torsion-freeness. \begin{theorem}\label{ss-rk3} Every rank 3 admissible reflexive sheaf $E$ with $c_1(E)=0$ is semistable. Moreover, if $H^0(\p3,E)=H^0(\p3,E^*)=0$, then $E$ is stable. \end{theorem} \begin{proof} In fact, one can show that every rank 3 reflexive sheaf with $c_1(E)=0$ and $H^0(E(k))=H^0(E^*(k))=0$ for all $k\leq-1$ is semistable. The desired theorem follows easily from this fact. Indeed, let $F\hookrightarrow E$ be a torsion-free subsheaf, with torsion-free quotient $Q=E/F$, so that $c_1(F)=-c_1(Q)$. As in the proof of Theorem \ref{ss-rk2}, it follows that $F$ is reflexive. There are two possibilities: \noindent {\bf (i)} ${\rm rank}~F=1$. In this case, $F$ is locally-free, so a non-zero map $F\to E$ yields a section in $H^0(\p3,E(-d))$, where $d=c_1(F)$. Hence $c_1(F)\leq0$. \noindent {\bf (ii)} ${\rm rank}~F=2$, so ${\rm rank}~Q=1$. Now $Q^*$ is a reflexive (hence locally-free) subsheaf of $E^*$, which gives a section in $H^0(\p3,E^*(-d))$, where $d=c_1(Q^*)=c_1(F)$. Hence $c_1(F)\leq0$. It follows that $E$ is semistable, being stable if $H^0(\p3,E)=H^0(\p3,E^*)=0$. \end{proof} Together with Theorem \ref{ss-adm}, we conclude that: \begin{itemize} \item a rank 2 torsion-free sheaf on $\p3$ with $c_1(E)=0$ is admissible if and only if it is semistable and $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$; \item a rank 3 reflexive sheaf on $\p3$ with $c_1(E)=0$ is admissible if and only if it is semistable and $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$. \end{itemize} \section{Trivial splitting type} \label{s3} Since every locally-free sheaf on a projective line splits as a sum of line bundles, one can study sheaves on projective spaces by looking into the behavior of their restrictions to lines \cite{OSS}. \begin{definition} A torsion-free sheaf $E$ on $\p3$ is said to be of trivial splitting type if there is a line $\ell\subset\p3$ such that $E|_\ell$ is the trivial locally-free sheaf, i.e. $E|_\ell\simeq{\cal O}_{\ell}^{\oplus{\rm rk}E}$. \end{definition} A sheaf of trivial splitting type necessarily has vanishing first Chern class. Note that, by semicontinuity, if $E$ is of trivial splitting type then $E|_\ell$ is trivial for a generic line in $\p3$. Torsion-free sheaves of trivial splitting type were completely classified in \cite{FJ2}, and they were shown to be closely related to a complex version of the celebrated Atiyah-Drinfeld-Hitchin-Manin matrix equations.
Furthermore, every torsion-free sheaf of trivial splitting type is semistable; indeed, assume that $E$ has rank $r$, and let $F\hookrightarrow E$ be a coherent subsheaf of rank $s$, with torsion-free quotient $E/F$. Then on a generic line $\ell\subset\p3$ we have: $$ F|_\ell = {\cal O}_{\ell}(a_1)\oplus\cdots\oplus{\cal O}_{\ell}(a_s) \hookrightarrow E|_\ell\simeq{\cal O}_{\ell}^{\oplus r} ~~, $$ where $c_1(F)=a_1+\cdots+a_s$. It follows that $c_1(F)\leq0$, since we must have $a_k\leq0$, $k=1,\dots,s$. \begin{theorem} \label{tst-adm} Let $E$ be a torsion-free sheaf of trivial splitting type. $E$ is admissible if and only if $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$. \end{theorem} Of course, this is an easy consequence of Theorem \ref{ss-adm} and the observation above, but here is a direct proof. \begin{proof} Let $E$ be a torsion-free sheaf of trivial splitting type such that $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$; since the converse implication is immediate, it suffices to show that $E$ is admissible. Without loss of generality, we can assume that $E|_{\ell_\infty}$ is trivial for $\ell_\infty=\{z=w=0\}$. Let $\wp$ be a plane containing $\ell_\infty$, e.g. $\wp=\{z=0\}$. Then $E|_{\wp}$ is a torsion-free sheaf on $\wp$ whose restriction to $\ell_\infty$ is trivial. From the proof of Theorem \ref{ss-adm} (using that $E$ is semistable, by the observation above) we know that: \begin{equation} \label{v1} H^0(\wp,E|_{\wp}(k)) = 0 ~ \forall k\leq-1 ~~ , ~~ H^2(\wp,E|_{\wp}(k)) = 0 ~ \forall k\geq-2 \end{equation} Now consider the sheaf sequence: \begin{equation} \label{v2} 0 \to E(k-1) \stackrel{\cdot z}{\longrightarrow} E(k) \longrightarrow E|_{\wp}(k) \to 0 \end{equation} Using (\ref{v1}), we conclude that: $$ H^3(\p3,E(k)) = H^3(\p3,E(k-1)) ~ \forall k\geq-2 $$ But, by Serre's vanishing theorem, $H^3(\p3,E(N))=0$ for sufficiently large $N$, thus $H^3(\p3,E(k)) = 0$ for all $k\geq-3$. Similarly, we have: $$ H^0(\p3,E(k-1)) = H^0(\p3,E(k)) ~ \forall k\leq-1 $$ Since $E\hookrightarrow E^{**}$, we have via Serre duality: $$ H^0(\p3,E(k))\hookrightarrow H^0(\p3,E^{**}(k)) = H^3(\p3,E^{***}(-k-4))^*~. $$ Thus, again by Serre's vanishing theorem, $H^0(\p3,E(-N))=0$ for sufficiently large $N$, so that $H^0(\p3,E(k)) = 0$ for all $k\leq-1$. We also have that: $$ 0 \to H^1(\p3,E(k-1)) \to H^1(\p3,E(k)) ~ \forall k\leq-1 $$ hence $H^1(\p3,E(-2)) = 0$ implies that $H^1(\p3,E(k)) = 0$ for all $k\leq-2$. Furthermore, $$ H^2(\p3,E(k-1)) \to H^2(\p3,E(k)) \to 0 ~ \forall k\geq-2 $$ forces $H^2(\p3,E(k)) = 0$ for all $k\geq-2$ once $H^2(\p3,E(-2)) = 0$. \end{proof} As in the previous section, the converse statement seems to depend on the rank. The generic splitting type of a semistable locally-free sheaf with vanishing first Chern class is determined by Theorem 2.1.4 in \cite[p. 205-206]{OSS}. In particular, it follows that every semistable rank 2 locally-free sheaf is of trivial splitting type. Thus, from Theorem \ref{ss-rk2}, we conclude: \begin{theorem} \label{tst-rk2} Every rank 2 admissible locally-free sheaf $E$ with $c_1(E)=0$ is of trivial splitting type. \end{theorem} To explore a few easy consequences of the classical theory of locally-free sheaves on complex projective spaces, let $\mathbb{G}$ denote the Grassmannian of lines in $\p3$. \begin{definition} Let $E$ be a locally-free sheaf of trivial splitting type. The set $$ {\cal J}_E =\{ \ell\in\mathbb{G} ~~|~~ E|_{\ell} ~ {\rm is~not~trivial} \} $$ is called the set of jumping lines of $E$; it is always a closed subvariety of $\mathbb{G}$. Moreover, $E$ is said to be uniform if ${\cal J}_E$ is empty, i.e. if $E|_{\ell}$ is independent of $\ell\in\mathbb{G}$.
\end{definition} \begin{theorem}\label{uni} Every rank 2 uniform, admissible locally-free sheaf $E$ with $c_1(E)=0$ is trivial. \end{theorem} \begin{proof} By Theorem \ref{tst-rk2}, $E$ is of trivial splitting type. Since $E$ is uniform, $E|_\ell$ must be trivial for all $\ell\in\mathbb{G}$. It then follows from Theorem 3.2.1 in \cite[p. 51]{OSS} that $E$ is trivial. \end{proof} Our last result, regarding the set of jumping lines, follows from Theorem 2.2.3 in \cite[p. 228]{OSS}. \begin{theorem} If $E$ is a rank 2 admissible locally-free sheaf with $c_1(E)=0$, then its set of jumping lines ${\cal J}_E$ is a subvariety of pure codimension 1 in $\mathbb{G}$. \end{theorem} \section{Open problems} The results proved in this paper point to a number of quite interesting questions and possible generalizations. First of all, we expect that if $E$ is a properly torsion-free or properly reflexive admissible sheaf, then its dual $E^*$ is not admissible, but we have not been able to construct any examples. We would also like to see whether the results in Section \ref{s2} can be generalized to higher rank. It seems too much to expect every admissible sheaf to be semistable; but the correspondence between instantons and locally-free admissible sheaves makes the statement ``every admissible locally-free sheaf is semistable'' an attractive conjecture. On the other hand, is Theorem \ref{ss-rk3} optimal, i.e. is there a rank 3 torsion-free admissible sheaf which is not semistable? It would also be interesting to study the connection between admissibility and Gieseker stability. Since every Gieseker semistable sheaf on a projective space is also semistable, we conclude from Theorem \ref{ss-adm} that every Gieseker semistable torsion-free sheaf with $c_1(E)=0$ and $H^1(\p3,E(-2))=H^2(\p3,E(-2))=0$ is admissible; one would like to determine to what extent the converse is also true. Theorems \ref{tst-rk2} and \ref{uni} point to interesting properties of higher rank admissible sheaves: is every admissible locally-free sheaf with vanishing first Chern class of trivial splitting type? Is every uniform, admissible locally-free sheaf with vanishing first Chern class trivial? We have also seen that if $E$ is an admissible torsion-free sheaf and $\wp$ is a plane in $\p3$, then the restriction $E|_{\wp}$ satisfies the following cohomological condition: $$ H^0(\wp,E|_{\wp}(k)) = 0 ~ \forall k\leq-1 ~~ , ~~ H^2(\wp,E|_{\wp}(k)) = 0 ~ \forall k\geq-2 ~~ .$$ Sheaves on $\p2$ satisfying the above conditions are called {\em instanton sheaves}; they are very interesting in their own right, being also closely related to instantons. The analysis of how the instanton condition compares with semistability and trivial splitting type is work in progress \cite{J-p2}, but many of the results proved here have their analogs for instanton sheaves in $\p2$. In particular, it is shown in \cite{J-p2} that every instanton sheaf is the cohomology of a special monad, and that every rank 2 torsion-free instanton sheaf is semistable. We can then conjecture that if a (rank 2) torsion-free sheaf $E$ on $\mathbb{P}^k$ is the cohomology of a special monad, then $E$ is semistable; this is true for $k=2,3$. \end{document}
\begin{document} \newtheorem{lemma}{Lemma}{\bf}{\it} \newtheorem{theorem}{Theorem}{\bf}{\it} \title{A New Approach to GraphMaps, a System Browsing\\ Large Graphs as Interactive Maps} \author{\authorname{Debajyoti Mondal\sup{1} and Lev Nachmanson\sup{2}} \affiliation{\sup{1}Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada} \affiliation{\sup{2}Microsoft Research, Redmond, WA, U.S.A.} \email{[email protected], [email protected]} } \keywords{Network Visualization, Layered Drawing, Geometric Spanners, Competition Mesh, Network Flow } \abstract{A GraphMaps is a system that visualizes a graph using zoom levels, which is similar to a geographic map visualization. GraphMaps reveals the structural properties of the graph and enables users to explore the graph in a natural way by using the standard zoom and pan operations. The available implementation of GraphMaps faces many challenges such as the number of zoom levels may be large, nodes may be unevenly distributed to different levels, shared edges may create ambiguity due to the selection of multiple nodes. In this paper, we develop an algorithmic framework to construct GraphMaps from any given mesh (generated from a 2D point set), and for any given number of zoom levels. We demonstrate our approach introducing competition mesh, which is simple to construct, has a low dilation and high angular resolution. We present an algorithm for assigning nodes to zoom levels that minimizes the change in the number of nodes on visible on the screen while the user zooms in and out between the levels. We think that keeping this change small facilitates smooth browsing of the graph. We also propose new node selection techniques to cope with some of the challenges of the GraphMaps approach.} \onecolumn \maketitle \normalsize \section{\uppercase{Introduction}} Traditional data visualization systems render all the vertices and edges of the graph on a single screen. For large graphs, this approach requires rendering many objects on the screen, which overwhelms the user. A GraphMaps system confronts the challenge of visualizing large graphs by enabling the users to browse the graphs as interactive maps. Like Google or Bing Maps, a GraphMaps system visualizes the high priority features on the top level, and as we zoom in, the low priority entities start to appear in the subsequent levels. To achieve this effect, for a given graph $G$ and a positive integer $k > 0$, GraphMaps creates the graphs $G_1, G_2,\ldots, G_k$, where $G_i$, $1 \le i < k$, is an induced subgraph of $G_{i+1}$, and $G_k = G$. \begin{figure*} \caption{(a) A node-link diagram of a graph $G$. (b--d) A GraphMaps visualization of $G$. (e) An example of edge bundling. } \label{fig:introduction} \end{figure*} The graph $G_i$, where $ 1\le i\le k$, corresponds to the $i$th zoom level. Assume that the nodes of $G$ are ranked by their importance. The discussion on what a node importance is and how the ranking is obtained, is out of the scope of the paper, but by default GraphMaps uses Pagerank~\cite{bp12} to obtain such a ranking. Let $V(G_i)$ be the nodes of $G_i$. We build graphs $G_i$ in such a way that the nodes of $G_i$ are equally or more important than the nodes of $V(G_{i+1}) \setminus V(G_i)$. At the top view, we render the graph $G_1$. As we zoom in and the zoom reaches $2^{i-1}$, the rendering switches from $G_{i-1}$ to $G_i$, exposing less important nodes and their incident edges. 
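The level-selection rule just described can be summarized in a few lines. The following sketch (our own illustration, not code taken from the GraphMaps implementation; the function name and the treatment of zoom factors below 1 are our own choices) maps a zoom factor to the index of the graph $G_i$ to be rendered.

\begin{verbatim}
import math

def level_for_zoom(zoom, k):
    """Index i of the graph G_i to render at a given zoom factor.

    G_1 is shown at the top view (zoom < 2); the rendering switches from
    G_{i-1} to G_i once the zoom reaches 2^(i-1); G_k is never exceeded.
    Zoom factors below 1 are treated as 1.
    """
    zoom = max(zoom, 1.0)
    i = int(math.floor(math.log2(zoom))) + 1
    return min(i, k)

# With k = 3 levels: zoom 1 -> G_1, zoom 2 -> G_2, zoom 4 and beyond -> G_3.
assert [level_for_zoom(z, 3) for z in (1, 2, 4, 8)] == [1, 2, 3, 3]
\end{verbatim}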
To create spatial stability, GraphMaps keeps the node positions fixed, and the rendering of edges changes incrementally between $G_i$ and $G_{i+1}$ as described in Section~\ref{sec:cm}. By browsing a graph with GraphMaps, the user obtains a quick overview of the important elements. Navigation through different zoom levels reveals the structure of the graph. In addition, users can interact with the system. For example, when the user clicks on a node $u$, the visualization highlights and renders all neighbors of $u$ (even those that do not belong to the current $G_i$) and the edges that connect $u$ to its neighbors. By using this interaction the user can explore a path by selecting a set of successive nodes on the path, and can answer adjacency questions by selecting the corresponding pair of nodes. We draw the nodes as points, and edges as polygonal chains. Each maximal straight line segment in the drawing is called a \emph{rail}. The edges may share rails. Every point where two rails meet is either a node or a point which we call a \emph{junction}. Figure~\ref{fig:introduction}(a) depicts a traditional node-link diagram of a graph $G$. Figures~\ref{fig:introduction}(b--d) illustrate a GraphMaps visualization of $G$ on three zoom levels. The gray region at each level corresponds to a viewport in that level. The higher ranked nodes of $G$ have a darker color. The tiny gray dots represent the locations of the nodes that are not visible in the current layer. Figure~\ref{fig:zoom} illustrates the node selection technique and the zoom feature. The rails rendered by thick lines correspond to the shortest paths from the selected nodes $a$ and $h$ to their neighbors. \begin{figure} \caption{Node selections and zoom in. } \label{fig:zoom} \end{figure} In our scheme, where we change the rendering depending on the zoom level, the quality of the visualization depends both on the quality of the drawing on each level, and on the differences between the drawings of successive zoom levels. We think that a good drawing of a graph on a single zoom level satisfies the following properties: \begin{enumerate} \item[-] The angular resolution is large. \item[-] The \emph{degree} at a node or at a junction is small. \item[-] The amount of \emph{ink}, that is the sum of the lengths of all distinct rails used in the drawing, is small. \item[-] The \emph{edge stretch factor} or \emph{dilation}, that is the ratio of the length of an edge route to the Euclidean distance between its end nodes, is small. \end{enumerate} \noindent These properties help to follow the edge routes, reduce the visual load, and thus improve the readability of a drawing. Since some of the principles contradict each other, optimizing all of them simultaneously is a difficult task. Our algorithm, in addition to creating a good drawing of each $G_i$, attempts to construct these drawings in such a way that a switch from $G_i$ to $G_{i+1}$ does not cause a large change on the screen. We try to keep the amount of newly appearing detail relatively small and also try to keep the edge geometry stable. \begin{figure*} \caption{A partial display of a flight network with approximately 3K nodes and 19K edges. (left) Traditional node-link diagram over the airports of North America. (middle) The top-level of a GraphMaps visualization based on our approach, where the airports TUS and YWG are selected.
(right) A view after zoom in.} \label{abs} \end{figure*} \subsection{Related Work} A large number of graph visualization tools, e.g., Centrifuge, Cytoscape, Gephi, Graphviz, Biolayout3d, have been developed over the past few decades due to a growing interest in exploring network data. A good visualization requires the careful placement of nodes, e.g., sometimes nodes with similar properties are placed close to each other, whereas the nodes that are dissimilar are placed far away. Force directed approaches, multi-dimensional scaling and stochastic neighbor embedding are some common techniques to generate the node positions~\cite{Hu,KlimentaB12,MaatenH08}. Techniques that try to make the visualization readable by drawing the edges carefully consider various types of edge bundling~\cite{DBLP:journals/tvcg/ErsoyHPCT11,GansnerHNS11,LambertBA10,PupyrevNBH11} and edge routing techniques~\cite{DobkinGKN97,HoltenW09,DwyerN09}. Informally, the edge bundling technique groups the edges that are travelling towards a common direction, and routes these edges through some narrow tunnel. Figure~\ref{fig:introduction}(e) illustrates an example of edge bundling. Other forms of clutter reduction approaches include node aggregation~\cite{DBLP:conf/chi/Wattenberg06,DBLP:conf/chi/DunneS13,DBLP:journals/tvcg/ZinsmaierBDS12}, topology compression~\cite{DBLP:conf/bigdataconf/ShiLSCL13,DBLP:journals/jgaa/BrunelGKRW14}, and sampling algorithms~\cite{DBLP:conf/globecom/GaoHL15}. This paper focuses on GraphMaps, proposed by Nachmanson et al.~\cite{Nachmanson15}, that reduces clutter by distributing nodes to different zoom levels and routing edges on shared rails. Like the clutter reduction approaches, a primary goal of GraphMaps is to make the visualization more readable and interactive in the higher levels of abstraction. Nachmanson et al.~\cite{Nachmanson15} use multidimensional scaling to create the node positions. To distribute the nodes into zoom levels, they consider at each level $i$, an uniform $2^i\times 2^i$ grid, where each grid cell is called a \emph{tile}. The tiles are filled with nodes, the most important nodes first. While filling the levels with nodes, they maintain a node and a rail quota that bound the number of nodes and rails intersecting a tile. Whenever an insertion of a new node creates a tile intersecting more than one fourth of the node quota nodes or more than one quarter of rail quota rails, a new zoom level is created to insert the rest of the entities. The visualization of GraphMaps works in such a way that each viewport is covered by four tiles of the current level. This ensures that not more than the node quota nodes and the rail quota rails are rendered per viewport. GraphMaps visualization also relates to the hierarchical visualization of clustered networks~\cite{DBLP:journals/tochi/SchafferZGBDDR96,DBLP:conf/apvis/BalzerD07}. We refer the reader to the survey~\cite{eurovisstar.20151110} for more details on visualizing graphs based on graph partitioning. There exist some systems that render large graphs on multiple layers by using the notion of temporal graphs, e.g., evolving software systems~\cite{CollbergKNPW03,LambertBA10}. A generalization of stochastic neighbor embedding renders nodes on multiple maps~\cite{MaatenH12}. Gansner et al.~\cite{GansnerHK10} proposed a visualization that emphasizes node clusters as geographic regions. 
\subsection{Contribution} The existing implementation of GraphMaps~\cite{Nachmanson15} focuses mainly on the quality of the layout at individual zoom level. The construction follows a top-down approach, where the successive levels were obtained by inserting nodes incrementally in a greedy manner. We propose an algorithm to construct a GraphMaps visualization starting from a complete drawing of the graph $G(=G_k)$ at the bottom level. Specifically, given an arbitrary mesh and the edge routes of $G_k$ on this mesh, our method builds the edge routes for $G_{k-1},\ldots, G_{2}, G_1$, in this order. We introduce a particular type of mesh, called competition mesh, which is of independent interest due to its low edge stretch factor $(2+\sqrt{2})$, and high angular resolution $45^\circ$. We then construct GraphMaps visualizations by applying our algorithm to this mesh. We develop a node assignment algorithm that minimizes the change in the drawing when switching from $G_i$ to $G_{i+1}$ during zoom in, where $1\le i <k$. Moreover, we propose new node selection techniques to cope with some of the challenges of the GraphMaps approach. We also carried out experiments on some real-life datasets (see Figure~\ref{abs} and Section~\ref{EXPERIMENTS}). Our experiments reveal the usefulness of GraphMaps, even in its basic implementations, for understanding the network information through interactive exploration. \section{\uppercase{Technical Background}} \label{section:TB} We now introduce the mesh that we use for edge routing and analyze its properties. Let $P$ be a set of $n$ distinct points that correspond to the node positions, and let $R(P)$ be the smallest axis aligned rectangle that encloses all the points of $P$. A \emph{competition mesh} of $P$ is a geometric graph constructed by shooting from each point, four axis-aligned rays at the same constant speed (towards the top, bottom, left and right), where each ray stops as soon as it hits any other ray or $R(P)$. We break the ties arbitrarily, i.e., if two non-parallel rays hit each other simultaneously, then arbitrarily one of these rays stops and the other ray continues. If two rays are collinear and hit each other from the opposite sides, then both rays stop. We denote this graph by $M(P)$, e.g., see Figure~\ref{fig:competitionmesh}. The vertices of $M(P)$ are the points of $P$ (\emph{nodes}), and the points where a pair of the rays meet (\emph{junctions}). Two vertices in $M(P)$ are adjacent if and only if the straight line segment connecting them belongs to $M(P)$. A competition mesh can also be viewed as a variation of a motorcycle graph~\cite{EppsteinGKT08}, or a geometric spanner with Steiner points~\cite{BoseS13}. In the rest of the paper the term `vertices' denotes all the nodes and junctions of $M(P)$. \begin{figure*} \caption{(a) A point set and its corresponding competition mesh. (b--c) Bounding the bottom-left quadrant of $t$. (d) A monotone path inside the bottom-left quadrant of $t$. } \label{fig:competitionmesh} \end{figure*} For any point $u$, let $u_x$ and $u_y$ be the $x$ and $y$-coordinates of $u$, respectively, and let $l_v(u)$ and $l_h(u)$ be the vertical and horizontal straight lines through $u$, respectively. For any two points $p,q$ in $\mathbb{R}^2$, let $dist_E(p,q)$ be the Euclidean distance between $p$ and $q$. For each point $u$ of the plane we define four \emph{quadrants} formed by the horizontal and vertical lines passing through $u$. 
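Before stating the lemmas, we include a small illustration (ours, not part of the original GraphMaps implementation): a Python sketch that measures the edge stretch factor of a given mesh empirically, which can be used to sanity-check the $(2+\sqrt{2})$ bound of Lemma~\ref{dilation} below. It assumes the mesh is given as a list of straight-line segments that are already subdivided at nodes and junctions; all function and variable names are ours.
\begin{verbatim}
# Illustrative sketch (not from the GraphMaps paper): empirical stretch factor
# of a mesh given as segments already split at nodes and junctions.
import heapq
from math import dist  # Euclidean distance, Python >= 3.8

def mesh_graph(segments):
    """Build an adjacency map from segments (pairs of (x, y) endpoints)."""
    adj = {}
    for p, q in segments:
        w = dist(p, q)
        adj.setdefault(p, []).append((q, w))
        adj.setdefault(q, []).append((p, w))
    return adj

def graph_distance(adj, s, t):
    """Dijkstra's algorithm; length of a shortest s-t path in the mesh."""
    best = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > best.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def stretch_factor(segments, node_pairs):
    """Maximum ratio of graph distance to Euclidean distance over node pairs."""
    adj = mesh_graph(segments)
    return max(graph_distance(adj, s, t) / dist(s, t) for s, t in node_pairs)

# Tiny example: an L-shaped mesh between the nodes (0, 0) and (1, 1).
segs = [((0, 0), (1, 0)), ((1, 0), (1, 1))]
print(stretch_factor(segs, [((0, 0), (1, 1))]))  # 2 / sqrt(2), about 1.414
\end{verbatim}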
A path $v_1,\dots, v_k$ is \emph{monotone} in the direction of a vector $\textbf{a}$ if for each $1 \le i < k$ the dot product $\textbf{a}\cdot (v_{i+1} - v_i)$ is nonnegative. Lemmas~\ref{monotone}--\ref{dilation} prove some properties of $M(P)$. \begin{lemma} \label{monotone} Let $u$ be a node in $M(P)$. Then in each quadrant of $u$ there is a path in $M(P)$ that starts at $u$, ends at some point on $R(P)$, and is monotone in both horizontal and vertical directions. \end{lemma} \begin{proof} Without loss of generality it suffices to prove the lemma for the first quadrant of $u$, which consists of the set of points $v$ such that $v_x\ge u_x$ and $v_y\ge u_y$. Suppose for a contradiction that there is no such path in this quadrant. Consider a maximal $xy$-monotone path $\Pi$ that starts at $u$ and ends at some node or junction $w$ of $M(P)$. If $w$ is a node, then we extend $\Pi$ using the right or top ray of $w$, which is a contradiction. Therefore, $w$ must be a junction in $M(P)$. Without loss of generality assume that the straight line segment $\ell$ incident to $w$ is horizontal. Since $\Pi$ is a maximal $xy$-monotone path, the ray $r_\ell$ corresponding to $\ell$ must be stopped by some vertical ray $r'$ generated by some vertex $w'$. Observe that $w_y> w'_y$, otherwise we can extend $\Pi$ towards $w'$. Since $r_\ell$ is stopped, the ray $r'$ must continue unless there is some other downward ray $r''$ that hits $r'$ at $w$. In both cases we can extend $\Pi$, either by following $r'$ (if it continues), or by following the source of $r''$ (if $r'$ is stopped by $r''$), which contradicts the assumption that $\Pi$ is maximal. \end{proof} \begin{lemma} \label{dilation} For any set $P$ with $n$ points, $M(P)$ has $O(n)$ vertices and edges. The graph distance between any two nodes of $M(P)$ is at most $(2+\sqrt{2})$ times their Euclidean distance. \end{lemma} \begin{proof} By construction of $M(P)$, whenever a junction is created, one ray stops. Since $|P|=n$ there are at most $4n$ rails, and therefore we cannot have more than $4n$ junctions, which proves that the number of vertices in $M(P)$ is $O(n)$. Since $M$ is a planar graph, the number of edges is also $O(n)$. We now show that the ratio of the graph distance and the Euclidean distance between any two nodes of $M(P)$ is at most $(2+\sqrt{2})$. Let $C_{left}, C_{right},C_{top}$, and $C_{bottom}$ be the four cones with the apex at $(0,0)$ determined by the lines $y=\pm x$. Let $s$ and $t$ be two nodes in $M(P)$. Without loss of generality assume that $s$ is located at $(0,0)$ and $t$ lies in $C_{right}$. Consider now an $x$-monotone orthogonal path $P_{right}=(v_0,v_1,v_2,\ldots,v_q)$ in the mesh such that $v_0$ coincides with $s$, for each $0<i\le q$, $v_i$ is a node of $M(P)$ whose ray stops the rightward ray of $v_{i-1}$, and $v_q$ lies on or to the right of $l_v(t)$, e.g., see Figure~\ref{fig:competitionmesh}(b). Suppose that $t$ is either above or below $P_{right}$. If $t$ is above $P_{right}$, then consider a $y$-monotone path $P_{top}=(w_0,w_1,w_2,\ldots,w_r)$ such that $w_0$ coincides with $s$, for each $0<i\le r$, $w_i$ is a node of $M(P)$ whose ray stops the upward ray of $w_{i-1}$, and $w_r$ lies on or above $l_h(t)$. If $t$ is below $P_{right}$, then define a path $P_{bottom}$ symmetrically, e.g., see Figure~\ref{fig:competitionmesh}(c). Without loss of generality assume that $t$ is above $P_{right}$. 
Observe that the paths $P_{top}$ and $P_{right}$ remain inside the cones $C_{top}$ and $C_{right}$, respectively, and bound the bottom-left quadrant of $t$, as shown by the shaded region in Figure~\ref{fig:competitionmesh}(b). By Lemma~\ref{monotone}, $t$ has a $(-x)(-y)$-monotone path $P$ that starts at $t$ and reaches the boundary of $R(P)$, e.g., see Figure~\ref{fig:competitionmesh}(d). This path $P$ must intersect either $P_{top}$ or $P_{right}$. Hence we can find a path $P'$ from $s$ to $t$, where $P'$ starts at $s$, travels along either $P_{top}$ or $P_{right}$ depending on which one $P$ intersects, and then follows $P$ from the intersection point. We now show that the length of $P'$ is at most $(2+\sqrt{2})\cdot dist_E(s,t)$. Since a ray is never shorter than a ray it stops, the sum of the lengths of the vertical segments of $P_{right}$ is at most the sum of the lengths of the horizontal segments. Therefore, the part of $P_{right}$ inside the bottom-left quadrant of $t$ has length at most $2t_x$. Similarly, the part of $P_{top}$ inside the bottom-left quadrant of $t$ has length at most $2t_y$. Path $P$ is not longer than $t_x+t_y$ (see Figure~\ref{fig:competitionmesh}(d)). Therefore, the length of $P'$ is at most $t_x+t_y + 2 \cdot\max\{ t_x, t_y\} \le \sqrt{2} \cdot dist_E(s,t) +2 \cdot\max\{ t_x, t_y\} \le (2+\sqrt{2}) \cdot dist_E(s,t)$. In the case when $t$ belongs to $P_{right}$, the length of the path $P$ is zero and the proof easily follows. \end{proof} The following lemma states that a competition mesh can be constructed in $O(n\log n)$ time. \begin{lemma} \label{lem:time} For any set $P$ with $n$ points, the competition mesh $M(P)$ can be constructed in $O(n \log n)$ time. \end{lemma} \begin{proof} Define, for each point $w\in P$, a set of eight non-overlapping cones as follows: the central angle of each cone is $45^\circ$ and the cones are ordered counterclockwise around $w$. The first cone lies in the first quadrant of $w$ between the lines $y-w_y=x-w_x$ and $y=w_y$, as shown in Figure~\ref{fig:construction}(a). Guibas and Stolfi~\cite{GuibasS83} showed that in $O(n\log n)$ time, one can find for every point $w\in P$ the nearest neighbor of $w$ (according to the Manhattan metric) in each of the eight cones of $w$. Let $\delta_y = \min_{a,b\in P,\, a_y \not= b_y} |a_y - b_y|$, which can be computed in $O(n \log n)$ time by sorting the points according to $y$-coordinates. \begin{figure*} \caption{(a) A point set $P$ and the nearest neighbor of $w$ in each of the eight cones around $w$. (b--e) Construction of the mesh of $P$, while processing (b) top rays, (c) bottom rays, (d) left rays and (e) right rays.} \label{fig:construction} \end{figure*} We construct $M$ in four phases. The first phase iterates through the top rays of each point $w$ and finds the point $w'$ closest to $l_h(w)$ (in the Manhattan metric) such that $|w'_x-w_x|\le |w'_y-w_y|$. Note that if such a $w'$ exists, then one of the horizontal rays $r'$ of $w'$ would reach the point $(w_x,w'_y)$ before the top ray $r$ of $w$ (while all rays are grown at a uniform speed). According to the definition of the competition mesh, we can continue the ray $r'$ and stop growing the ray $r$. If no such $w'$ exists, then $r$ must hit $R(P)$. To find the point $w'$, it suffices to compare the Manhattan distances of the nearest neighbors in the second and third cones of $w$ (breaking ties arbitrarily). 
Since the nearest neighbors in each cone can be accessed in $O(1)$ time, we can process all the top rays in linear time. Figure~\ref{fig:construction}(b) shows the junctions and nodes created after the first phase, where all the rays that stopped growing are shown with thin lines. The nearest neighbors in the second and third cones of $w$ are $a$ and $b$, respectively. Since $a$ is closer to $l_h(w)$ than $b$, the top ray of $w$ does not grow beyond $(w_x,a_y)$. The second phase processes the bottom rays in a similar fashion, e.g., see Figure~\ref{fig:construction}(c). Both the first and the second phase use a ray shooting data structure to check whether the current ray already hits an existing ray. We describe this data structure in the next paragraph while discussing phase three. Let the planar subdivision at the end of phase two be $S$. In the third phase, we grow each left ray until it hits any other vertical segment in $S$, as follows: For each vertical edge $ab$ in $S$, construct a segment $a'b'$ by shrinking $ab$ by $\delta_y/3$ from both ends. For each node and junction $w$, construct a segment $w_1w_2$ such that $w_1 = (w_x,w_y-\delta_y/4)$ and $w_2 = (w_x,w_y+\delta_y/4)$. Let $S_v$ be the set of constructed segments. Note that the segments in $S_v$ are non-intersecting. Giyora and Kaplan~\cite{GiyoraK09} gave a ray shooting data structure $D$ that can process $O(n)$ non-intersecting vertical segments in $O(n\log n)$ time, and given a query point $q$, $D$ can find in $O(\log n)$ time the segment (if any) in $S_v$ immediately to the left of $q$. Furthermore, $D$ supports insertion and deletion in $O(\log n)$ time. For each point $q\in P$ in increasing order of $x$-coordinates, we shoot a leftward ray $r$ from $q$, and find the first segment $ab$ hit by the ray. Assume that $a_y<b_y$ and that $r$ hits $ab$ at a point $p$. We update the subdivision $S$ accordingly, then delete segment $ab$ from $D$, and insert segment $pb$ into $D$. Note that these updates keep all the segments in $D$ non-intersecting. Since there are $O(n)$ left rays, processing all these rays takes $O(n \log n)$ time. Figure~\ref{fig:construction}(d) illustrates the third phase. The fourth phase processes the right rays in a similar fashion, e.g., see Figure~\ref{fig:construction}(e). Since the preprocessing time of the data structures we use is $O(n\log n)$, and since each phase runs in $O(n \log n)$ time, the construction of the competition mesh takes $O(n \log n)$ time in total. \end{proof} \section{\uppercase{GraphMaps System}} \label{sec:cm} Our technique for calculating the graphs $G_1,\dots, G_k$ (equivalently, the node level assignment) is described in Section~\ref{sec:smooth}. For now, let us assume that the sequence of graphs is ready. We now show how to route the edges of the graphs $G_i$. The computation of the edge routes starts from the bottom level. Namely, we build a competition mesh $M$ for the graph $G (=G_k)$ and route each edge $(u,v)\in G$ as a shortest path $P_{uv}$ in $M$. Next we modify $M$ to make the routes more visually appealing; let us denote by $M'$ the mesh we obtain after applying these modifications to $M$. We perform local modifications and try to minimize the total ink of the routes, which is the sum of the lengths of the edges of $M'$ used in the routes~\cite{gansner2006improved}, and remove thin faces. During the modifications we keep the angular resolution greater than or equal to some $\alpha >0$, and the minimum distance between non-incident vertices and edges of $M'$ greater than or equal to some $\beta>0$. 
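To make the ink objective concrete, the following small sketch (ours, for illustration only; it is not the GraphMaps implementation) computes the total ink of a set of routes, assuming each route is given as the list of mesh vertices it passes through. Rails shared by several routes are counted only once, which is why routing edges on shared rails reduces ink.
\begin{verbatim}
# Illustrative sketch (not from the GraphMaps paper): total ink of a set of
# routes, each route given as a polyline of mesh vertices (x, y).
from math import dist  # Euclidean distance, Python >= 3.8

def total_ink(routes):
    """Sum of the lengths of the distinct mesh edges used by the routes."""
    used = set()
    for route in routes:
        for p, q in zip(route, route[1:]):
            used.add((min(p, q), max(p, q)))  # undirected edge, stored once
    return sum(dist(p, q) for p, q in used)

# Two routes sharing the rail (0, 0)-(1, 0); the shared piece is counted once.
routes = [[(0, 0), (1, 0), (1, 1)], [(0, 0), (1, 0), (2, 0)]]
print(total_ink(routes))  # 1 + 1 + 1 = 3
\end{verbatim}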
The local modifications are described below. \begin{figure} \caption{(a--b) Removal of thin faces. (b--c) Moving junctions towards median. } \label{fig:detour} \end{figure} \begin{figure*} \caption{(a) Zoom level 2. (b--d) Transition from level 1 to 2. (bottom) Transition in our GraphMaps system. } \label{fig:pathsim} \end{figure*} \textit{Face refinement:} For each face $f$ of $M'$ that does not contain a node of $G$ in its boundary, we compute the width of $f$, which is the smallest Euclidean distance between any two non-adjacent rails of $f$. If the width of $f$ is smaller than some given threshold, then remove the longest edge of $f$ from $M'$ (breaking ties arbitrarily). Figures~\ref{fig:detour}(a--b) depict such a removal, where the thin face is shown in gray. The edge routes using the removed edge are rerouted through the remaining boundary of $f'$. \textit{Median:} Move each junction $\kappa$ of $M'$ toward the geometric median of its neighbors, i.e., the point that minimizes the sum of distance to the neighbors, as long as the restriction mentioned above holds. Iterate the move for a certain number of times, or until the change becomes smaller than some given threshold. Figures~\ref{fig:detour}(c--d) illustrate the outcome of this step. \textit{Shortcut:} Remove every degree two junction and replace the two edges adjacent to it by the edge shortcutting the removed junction, as long as the restriction mentioned above holds. In all the above modifications, the routes are updated accordingly. Modifications ``Median'' and ``Shortcut'' diminish the ink. The final $M'$ gives the geometry of the bottom-level drawing of $G$ in our version of GraphMaps. Given a mesh $M_i$ representing the geometry for the drawing of $G_i$, where $1<i\le k$, we construct $M_{i-1}$ from $M_i$ by removing from the latter the nodes $V(G_i) \setminus V(G_{i-1})$, and by removing the edges that are not used by any route $P_{u,v}$, where $(u,v)$ is an edge of $G_{i-1}$. Some routes $P_{u,v}$ can be straightened in $M_{i-1}$. We use the simplification algorithm~\cite{pathsim} to morph the paths of $M_{i}$ to paths of $M_{i-1}$. Figures~\ref{fig:pathsim}(a--b) illustrate the simplification. This change in the edge routes geometry diminishes the consistency between the drawings of successive levels. To smoothen the differences while transiting from zoom level $i$ to $i+1$ we linearly interpolate between the paths of $M_i$ and $M_{i+1}$, as demonstrated in Figures~\ref{fig:pathsim}(b--d). The idea of path simplification and transition via linear interpolation enables us to construct GraphMaps in a bottom-up approach. In fact, the above strategy can be applied to transform any mesh generated from a set of 2D points to a GraphMaps visualization. \section{\uppercase{Computing Node Levels}} \label{sec:smooth} Let us consider in more details how the view changes when we zoom by examining Figures~\ref{fig:transition}(a)--(d). On the top-left tile of Figure~\ref{fig:transition}(a), the user's viewport covers the whole graph, so $G_1$ is exposed. In Figure~\ref{fig:transition}(d), the user's viewport contains only the top-left tile of Figure~\ref{fig:transition}(a), and the visualization switches to graph $G_2$. Seven new nodes, which were not fully visible in zoom level 1, become fully visible for the current viewport, as represented in light gray. If all of a sudden, a large number of new nodes become fully visible, then it may disrupt user's mental map. Here we propose an algorithm to keep this change small. 
\begin{figure*} \caption{(a) Zoom level 1. (b--c) Transition from level 1 to 2. (d) Zoom level 2. } \label{fig:transition} \end{figure*} We build the tiles as in~\cite{Nachmanson15}. In the first level we have only one tile coinciding with the graph bounding box. On the $i$th level, where $i>1$, the tiles are obtained by splitting each tile of the $(i-1)$th level into a uniform $2\times 2$ grid. This arrangement of tiles can be considered as a rooted tree $T$, where the tiles correspond to the nodes of the tree. Specifically, the topmost tile is the root of $T$, and a node $u$ is a child of another node $v$ if the corresponding tiles $t_u$ and $t_v$ lie in two different but adjacent levels, and $t_u\subset t_v$. We refer to $T$ as a \emph{tile tree}. For every node $v$ in $T$, denote by $S(v)$ the set of nodes that are fully visible in the tile $t_v$. For an edge $e=(v,w)$ in $T$, where $v$ is the parent of $w$, we denote by $\delta_e$ the number of new nodes that become visible while navigating from $t_v$ to $t_w$, i.e., $\delta_e = |S(w)\setminus S(v)|$. We can control the rate at which nodes appear in and disappear from the viewport by minimizing $\sum_{e\in E(T)}\delta_e^2$, where $E(T)$ is the set of edges in $T$. For simplicity we show how to solve the problem in one dimension, where all the points lie on a horizontal line. It is straightforward to extend the technique to $\mathbb{R}^2$. \begin{enumerate} \item[] \textit{Problem.} \textsc{Balanced Visualization} \item[] \textit{Input.} A set $P$ of $n$ points on a horizontal line, where every point $q\in P$ is assigned a rank $r(q)$; a tile tree $T$ of height $\rho$; and a node quota $Q$, i.e., the maximum number of points allowed to appear in each tile. \item[] \textit{Output.} Compute a mapping $g:P \rightarrow \{1,2,\ldots, \rho\}$ (if one exists) that \begin{enumerate} \item[-] satisfies the node quota, \item[-] minimizes the objective $F = \sum_{e\in E(T)}\delta_e^2$, and \item[-] for every pair of points $q,q'\in P$ with $r(q) \ge r(q')$, satisfies the inequality $g(q)\le g(q')$, which we refer to as the \emph{rank condition}. \end{enumerate} \end{enumerate} If the ranks of all the points are distinct, then the solution to \textsc{Balanced Visualization} is unique, and can be computed greedily. But the problem becomes non-trivial when many points may have the same rank. In this scenario, we prove that the \textsc{Balanced Visualization} problem can be solved in $O(\tau^2\log^2 \tau )+O(n \log n)$ time, where $\tau$ is the number of nodes in $T$. This is quite fast since the maximum zoom level is a small number in practice, i.e., at most 10. We reduce the problem to computing a minimum cost maximum flow, where the edge costs can be convex~\cite{Orlin,OrlinV13}, e.g., a quadratic function of the flow passing through the edge. Figure~\ref{fig:flow}(a) depicts a set of points on a line, where the associated tiles are shown using rectangular regions. The number in each rectangle is the number of points in the corresponding tile. Figure~\ref{fig:flow}(b) shows a corresponding network $G$, where the source is denoted by $s$, and the sinks are denoted by $T_1,T_2,\ldots,T_8$. \begin{figure*} \caption{(a) A set of points on a line and the associated tiles are shown using rectangular regions. (b) A corresponding network $G$. (c) A solution to the minimum cost maximum flow problem. 
(d) A solution to the \textsc{Balanced Visualization} \label{fig:flow} \end{figure*} The excess at the source and the deficit at the sinks are written in their associated squares. We allow each internal node $w$ (unfilled square) of the tile tree to pass at most $Q$ units of flow through it. This can be modeled by replacing $w$ by an edge $(u,v)$ of capacity $Q$, where all the edges incoming to $w$ are incident to $u$ and the outgoing edges are incident to $v$. This transformation is not shown in the figure. Set the capacity of all other edges to $\infty$. The production of the source is $n$ units, and the units of flow that each sink can consume is equal to the number of points lying in the corresponding tile. The cost of sending flows along the tree edges (solid black) is zero. The cost of sending flows along the dotted edge connecting the source and the tree root is also zero. The cost of sending $x$ units of flow along the dashed edges is $x^2$; sending $x$ units of flow through a dashed edge corresponds to $x$ new nodes that are becoming visible when we zoom in at the tile of the edge target. Figure~\ref{fig:flow}(c) illustrates a solution to the minimum cost maximum flow problem, where the flows are interpreted as follows: \begin{enumerate} \item[(A)] The number in a square denotes the number of points that would be visible in the associated tile. \item[(B)] The edges $(s,w)$, where $w$ is not the root, are labeled by numbers. Each such number corresponds to the new nodes that will appear while zooming in from the tile associated to $parent(w)$ to the tile associated to $w$. Thus the cost of this network flow is the sum of the squares of these numbers, i.e., $35$. \item[(C)] Each edge of $(u,v)$ of $T$ is labeled by the number of nodes that are fully visible both $u$ and in $v$. \item[(D)] Any one unit source-to-sink flow corresponds to a point of $P$, where the flow path $source, $ $u_1,u_2,\ldots,u_k(=sink)$ denotes that the point appeared in all the tiles associated to $u_1,\ldots,u_k$. \end{enumerate} \begin{lemma} \label{lem:flow} A minimum cost maximum flow in $G$ minimizes the objective function $F$ of the \textsc{Balanced Visualization} problem. \end{lemma} \begin{proof} If the amount of flow consumed at sink is smaller than $n$, then we can find a cut in $G$ with total capacity less than $n$. Thus even if we saturate the corresponding tiles with points from $P$, we will not be able to visualize all the points without violating the node quota. Therefore, we can visualize all the points if and only if the flow is maximum and the total consumption is $n$. Therefore, the only concern is whether the solution with cost $\lambda$ obtained from flow-network model minimizes the sum of the squared node differences between every parent and child tiles. Suppose for a contradiction that there exists another solution of \textsc{Balanced Visualization} with cost $\lambda' < \lambda$. In this scenario we can label the edges of the network according to the interpretation used in (A)--(D) to obtain a maximum flow in $G$ with cost $\lambda'$. Therefore, the minimum cost computed via the flow-network model cannot be $\lambda$, a contradiction. \end{proof} Given a solution to the network flow, we can construct a corresponding solution to the \textsc{Balanced Visualization} problem as described below. \begin{enumerate} \item[-] For each point $w$, set $g (w) = \infty$. \item[-] For each zoom level $z$ from $\rho$ to 1, process the tiles of zoom level $z$ as follows. 
Let $W$ be a tile in zoom level $z$. Find the amount of flow $x$ incoming to $W$ from $s$ in $G$. Note that this amount $x$ corresponds to the difference in the number of points between $W$ and $parent(W)$. Therefore, we find a set $L$ of the $x$ lowest priority points in $W$ whose value of $g$ is still $\infty$, and for each $w\in L$ we set $g(w) = z$. Figure~\ref{fig:flow}(d) illustrates a solution to the \textsc{Balanced Visualization} problem that corresponds to the network flow of Figure~\ref{fig:flow}(c). \end{enumerate} \noindent If the resulting mapping does not satisfy the rank condition, then the instance of \textsc{Balanced Visualization} does not have a feasible solution. The best known running time for solving a convex cost network flow problem on a network of size $O(\tau)$ is $O(\tau^2 \log^2 \tau)$~\cite{Orlin,OrlinV13}. Besides, it is straightforward to compute the corresponding node assignments in $O(n\log n)$ time by augmenting the merge sort technique with basic data structures. Hence we obtain the following theorem. \begin{theorem} Given a set of $n$ points in $\mathbb{R}$, a tile tree of $\tau$ nodes, and a quota $Q$, one can find a balanced visualization (if one exists) in $O(\tau^2 \log^2 \tau) {+} O(n\log n)$ time. \end{theorem} While implementing GraphMaps, we need to choose a node quota $Q$ depending on the given total number of zoom levels $\rho$. Using a binary search on the number of nodes, in $O(\log n)$ iterations one can find a minimal node quota that allows visualizing all the points of $P$ while satisfying the rank condition. \section{\uppercase{Experiments}} \label{EXPERIMENTS} The GraphMaps system proposed previously~\cite{Nachmanson15} uses Triangle\footnote{\href{https://www.cs.cmu.edu/~quake/triangle.html}{\url{https://www.cs.cmu.edu/~quake/triangle.html}}} to obtain the graph for routing edges on a level. Our approach does not depend on Triangle, but uses the competition mesh. This has several advantages. For example, the competition mesh usually produces fewer edges than Triangle. As a result the edge routing runs much faster. With the same initial layout for the nodes, the GraphMaps system based on our approach sped up the initial processing significantly, up to 8 times on some graphs. A graph with 38395 nodes and 85763 edges was processed in the new system within 1 hour and 45 minutes, whereas the previous GraphMaps system took 6 hours~\cite{Nachmanson15}. Besides, the competition mesh is more robust than Triangle. We did not experience the failures that were reported for Triangle~\cite{Nachmanson15}. \begin{figure} \caption{Node selection in previous GraphMaps~\cite{Nachmanson15}.} \label{det} \end{figure} The previous GraphMaps~\cite{Nachmanson15} supports node selection, which is initiated by user clicks. Selection of a node highlights the paths to its neighbors in red. This may create ambiguity. For example, Figure~\ref{det}(top-left) shows a graph of burglary events (April 2015) in Manchester, UK, where two events are adjacent if they are located within 1km of each other. Selection of the node `Burnaby Street' highlights a rail very close to the node `Shopping area', which gives a false impression that these nodes are adjacent. After zooming in one can see that there is a detour that carries the highlighted path away from `Shopping area', e.g., see Figure~\ref{det}(top-right). This also gives a wrong impression of the node degree. Besides, since the edges may share rails, selection of multiple nodes may obscure the adjacency relationship, e.g., see Figure~\ref{pn}(left). 
\begin{figure} \caption{Selection of multiple nodes: (a) previous GraphMaps~\cite{Nachmanson15}.} \label{pn} \end{figure} \begin{figure*} \caption{A visualization of gemstone trade relations among countries ($\approx$7K edges) in 2015 (https://resourcetrade.earth). Selected nodes: (left) Iraq, Sudan, Cuba, and (right) China.} \label{trade} \end{figure*} We introduce new visualizations that allow the user to better understand the node neighborhood. Clicking on a node toggles its status from selected to not selected. When a node is selected, its neighbors are highlighted in yellow, and the edges connecting the clicked node with the neighbors are highlighted with a unique color. If the mouse pointer hovers over a node highlighted in yellow, then a tooltip appears with the list of the node's neighbors, e.g., see Figure~\ref{pn}(right). When a selected node is unselected, every edge adjacent to it is rendered in the default color, and the highlighting is removed from each neighbor, unless it is a neighbor of another selected node. These visualization measures help to resolve some ambiguities caused by edge bundling. Note that our approach does not create any detour, and thus avoids the circular artifacts (rails) around the nodes. Exploring node adjacencies and degrees becomes comparatively easy, and the smaller number of rails aids faster level transitions. Like the previous GraphMaps, our approach can also reveal the structural properties of the graph. Figure~\ref{trade} depicts a gemstone trade graph, where the countries with the most trade relations, e.g., China, are in central positions and the countries with a small number of trade relations fall into the periphery. Figure~\ref{Drug} visualizes drugs-crime events in Manchester, UK, where two events are connected if they are located within a distance of (left) 1km and (right) 5km of each other. The high-risk events form clusters in the top-level visualizations. \begin{figure*} \caption{A visualization of drugs-crime events in Manchester, UK, with approximately (left) 0.5K edges, and (right) 8K edges. The rails in light blue and white illustrate the first and second zoom levels, respectively.} \label{Drug} \end{figure*} Figure~\ref{fig:z3incl} shows an experiment with the graph of include dependencies of the C++ sources of Z3\footnote{\href{https://github.com/Z3Prover/z3}{\url{https://github.com/Z3Prover/z3}}}. The developer was interested in how his files are used by the rest of the system. He clicked the node `lar\_solver.h', representing an important header file of his, and created the red neighborhood (the upper drawing). Then he noticed that the file `theory\_lra.cpp' includes `lar\_solver.h' and clicked the former, creating the blue neighborhood (in the lower drawing). Then he noticed two files, marked by the black oval, that were included into `theory\_lra.cpp' by mistake. In another experiment, a user was analyzing collaboration between Chinese and Russian composers in the 20th and 21st centuries. By highlighting the neighborhoods of Chinese composers of the 20th century, the user saw that there were no connections between the composers of these two countries in this period. In the 21st century, the only such relation he found was between Tan Dun\footnote{ \href{https://en.wikipedia.org/wiki/Tan\_Dun}{\url{https://en.wikipedia.org/wiki/Tan\_Dun}}} and Sofia Gubaidulina\footnote{\href{https://en.wikipedia.org/wiki/Sofia\_Gubaidulina}{\url{https://en.wikipedia.org/wiki/Sofia\_Gubaidulina}}}. 
\begin{figure*} \caption{Highlighting the neighborhood in a unique color helps to understand relations.} \label{fig:z3incl} \end{figure*} More details can be seen in a video\footnote{\href{https://www.youtube.com/watch?v=qCUP20dQqBo&feature=youtu.be}{\url{https://www.youtube.com/watch?v=qCUP20dQqBo&feature=youtu.be}}}. \section{\uppercase{Limitations}} Adjacency relations and node degrees are readily visible in small traditional node-link diagrams. GraphMaps can process large networks, but it loses these two aspects at the expense of avoiding clutter. The users need to select nodes to explore adjacencies and node degrees. Currently, we use colors to disambiguate node selections, which limits us to selecting a small number of nodes without ambiguity. GraphMaps is sensitive to the node quota, i.e., the maximum number of nodes per tile. Selecting a large node quota may increase the interaction latency during level transitions. On the other hand, selecting a small node quota may place only a few nodes on the top level, which may fail to give an overview of the graph structure. An appropriate choice of the node quota based on the graph size and node layout is yet to be discovered. For simplicity, we used polygonal chains to represent the edges, different colors for multiple node selections, and color transparency to avoid ambiguity. It would be interesting to find ways of improving the visual appeal of a GraphMaps visualization, e.g., using splines for edges, enabling tooltip texts for showing quick information, and so on. \section{\uppercase{Conclusion}} We described our algorithm to construct GraphMaps visualizations using a competition mesh. Recall that the edge stretch factor of the competition mesh we created is at most $(2+\sqrt{2})$. A natural open question is to establish a tight bound on the edge stretch factor of the competition mesh. It would also be interesting to find bounded degree spanners (possibly with Steiner points) that are monotone and have a low stretch factor. We refer the reader to~\cite{BoseS13,DBLP:journals/jgaa/DehkordiFG15,DBLP:conf/compgeom/FelsnerIKKMS16} for more details on such geometric meshes and spanners. We leave it as future work to examine how the quality of a GraphMaps system may vary depending on the choice of the geometric mesh. \begin{figure} \caption{A visualization of the flight network dataset (https://openflights.org/data.html) using previous GraphMaps~\cite{Nachmanson15}.} \label{abs2} \end{figure} The previous GraphMaps~\cite{Nachmanson15} uses incremental mesh generation, which does not require path simplification. Since the construction of the upper levels does not take the lower level nodes into account, the top-level view is usually sparse, e.g., see Figure~\ref{abs2}. Our approach is powerful in the sense that any mesh can be transformed into a GraphMaps visualization. But the upper levels are simplifications of the bottom-level mesh, and thus the quality of the top level depends on both the bottom-level mesh and the simplification process. It will be interesting to further explore the pros and cons of both approaches. We believe that our results will inspire further research to enhance the appeal and usability of GraphMaps visualizations. \section*{\uppercase{Acknowledgements}} \noindent This work was initiated when the first author was a summer intern at Microsoft Research, Redmond, USA. His subsequent work was partially supported by NSERC. \end{document}
\begin{document} \title{A Shorter Proof of Marten's Theorem} \author{Adam Ginensky} \date{6-2016} \maketitle \begin{abstract} This note is dedicated to the memory of my friend and teacher R. Narasimhan. Marten's theorem is only one of the thousands of topics that Narasimhan could, and would, talk about with great enthusiasm and with great depth of knowledge. \end{abstract} \tableofcontents \section{Introduction and Notation } This note is intended to give a new proof of Marten's theorem, which states that $\dim(W^r_d) \leq d-2r$ for any smooth curve, with equality occurring exactly in the case when $C$ is hyperelliptic. The proof follows the general lines of the proof given in 'The Geometry of Algebraic Curves'. What is different is that it uses Hopf's theorem to simplify the proof and strengthen the conclusions of the theorem. More specifically, Mumford and Keem have explicated the curves for which $\dim(W^r_d) = d-2r-1$. Our analysis allows us to state, for any $ e>0$, which curves are eligible to have the bound $ \dim(W^r_d) = d-2r-e$. This is done in terms of the Clifford index of $C$. Finally, we sketch a proof of Hopf's theorem. In \cite{ACGH} this result is credited to Hopf, but no proof or reference is given. In \url{http://mathoverflow.net/questions/156674/who-stated-and-proved-the-hopf-lemma-on-bilinear-maps} some references are given, but my reading of the references does not show a proof that an algebraic geometer might easily follow. Prof. Mohan Kumar has communicated to me an algebro-geometric proof of this result. With his permission it is included in this note. I would also like to thank Prof. Kumar for reading an earlier version of this note and giving me corrections and useful comments. Mathematics has always been an avocation and not a vocation for me. My lack of daily contact with mathematics and mathematicians makes me particularly prone to make errors and have slips of form. For that reason all comments and corrections are greatly appreciated. We assume the following throughout: \begin{itemize} \item $C$ will be a smooth curve defined over an algebraically closed field. \item $L$ is a special line bundle on $C$. \item $\deg(L) = d, \nd d \leq g-2, \nd h^0(L) = r+1 $. \item That is, $L \in W^r_d$. \end{itemize} I will denote by $\alpha_L$ the Petri map \[ H^0(L) \otimes H^0(K_C \otimes L^{-1})\stackrel{\alpha_L}{\rightarrow} H^0(K_C) . \] Notice that this map is always injective on each factor. In addition, recall that if $ L \cong \mathcal{O}_C(D)$, then $\mbox{Cliff}(L) = \mbox{Cliff}(D) = d-2r$. Finally we recall the basic result due to Hopf and its corollary.\\ \begin{lemma}[Hopf] \label{hopfLemma} Let $A\otimes B \to C$ be a bi-linear map of vector spaces defined over an algebraically closed field with $\dim(A) = a \nd \dim(B) = b$. If the map is injective on each factor, then $\dim(Im(A \otimes B)) \geq a+b - 1$. \end{lemma} \begin{corollary} With $L$ as above, \begin{itemize} \item $\dim(\alpha_L) \geq g - \mbox{Cliff} (L) $ \item $\dim(\mbox{coker}(\alpha_L)) \leq \mbox{Cliff}(L)$ \end{itemize} \end{corollary} \begin{proof} Since $\alpha_L$ is injective on each factor and $(r+1) + (g-d+r) = g -(d-2r) +1 $, the first item follows from Hopf's lemma with $A = H^0(L), B = H^0(K_C \otimes L^{-1}) , \nd C = H^0(K_C)$. The second assertion follows directly from the first assertion. \end{proof} \section{Marten's Theorem } Marten's theorem bounds the dimension of $W^r_d$. We follow the proof in \cite{ACGH}, p.~192. 
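Before turning to the argument, note (as a quick illustration added here, not needed for the proof) that Hopf's bound is attained by multiplication of binary forms: the bilinear map \[ H^0(\mathbb{P}^1,\mathcal{O}(a))\otimes H^0(\mathbb{P}^1,\mathcal{O}(b))\longrightarrow H^0(\mathbb{P}^1,\mathcal{O}(a+b)), \qquad s\otimes t\mapsto st, \] is injective on each factor, since the polynomial ring has no zero divisors, and its image is the whole target space, of dimension $a+b+1 = (a+1)+(b+1)-1$.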
Starting with the fact, \[ \dim(W^r_d) \leq \dim(T_{L,W^r_d}) = \dim(\mbox{coker}(\alpha_L)) \] for a general $ L \in W^r_d$, the authors then show that $\dim(W^r_d) > d-2r$ leads to $h^0(K_C \otimes L^{-2})$ being too large to satisfy the conditions of Clifford's theorem. We present a shorter and more direct proof using the corollary to Hopf's lemma. \begin{theorem}[Marten] $\dim(W^r_d) \leq d-2r$, and if $C$ is not hyperelliptic, then $\dim(W^r_d) \leq d-2r-1$. \end{theorem} \begin{proof} As remarked above, it is enough to bound the tangent space $T_{L,W^r_d}$ at a point $ L \notin W^{r+1}_d$. By Proposition 4.2 on p.~189 of \cite{ACGH} we know that if $L \notin W^{r+1}_d$, then \[ \dim(T_{L,W^r_d}) = \dim(\mbox{coker}(\alpha_L))\] and by the corollary to Hopf's lemma, \[ \dim(\mbox{coker}(\alpha_L)) \leq \mbox{Cliff}(L) = d-2r. \] This bound is clearly achieved for hyperelliptic curves. The better bound in the case that $C$ is not hyperelliptic will be explained and generalized in the next section. \end{proof} \section{Extensions of Marten's Theorem} The sharper result that in fact $\dim(W^r_d) \leq d-2r-1$ if $C$ is not hyperelliptic follows from an inductive argument that bounds $h^0(K_C \otimes L^{-2})$ from below. Mumford (and Keem) have complemented the analysis by analyzing the border cases when $\dim(W^r_d) = d-2r-1$. We will present an argument below that we believe simplifies the exposition in \cite{ACGH}. The proof allows us to deduce a corollary that relates the dimension of $W^r_d$ to the Clifford index of the curve. While not as sharp as Mumford's result for the border case $\dim(W^r_d) = d-2r-1$, it does give new information as to which curves can achieve the bound $\dim(W^r_d) = d-2r-e$ with $ e \geq 2$. The result is the following. \textbf{Main Result:} If $ \mbox{Cliff}(C) = c$, then $\dim(W^r_d) \leq d-2r -\left\lceil \frac{c}{2}\right\rceil$. Firstly, we may assume that $L$ is base point free; otherwise remove the base points and perform the argument with the new $d$. We introduce some notation. Let $s_1,s_2, \ldots, s_{r+1}$ be a basis of $H^0(L)$ such that $s_1,s_2$ generate $L$. Then for every $k \geq 2$ we get an exact sequence, \[ 0 \to \mbox{Ker}_k \to \mathcal{O}^{k} \xrightarrow{(s_1,\ldots , s_k)} L \to 0 \] and after tensoring with $K_C \otimes L^{-1}$ we get another exact sequence, \[ 0 \to \mathcal{K}_k \to \left((K_C \otimes L^{-1}) \otimes \mathcal{O}^{k}\right) \stackrel{\alpha_k}{\to} K_C \to 0 \] where $\mathcal{K}_k =\mbox{Ker}_k \otimes K_C \otimes L^{-1}$ and $\alpha_k$ is the Petri map restricted to the $k$ given sections. By the base point free pencil trick, $\mathcal{K}_2 \cong K_C \otimes L^{-2} $, and hence for $k>2$ we have an exact sequence 
\[ 0 \to K_C \otimes L^{-2} \to \mathcal{K}_{k} \to \left((K_C \otimes L^{-1}) \otimes \mathcal{O}^{k-2}\right) \to 0. \] Setting $ k = r+1 $ we get an inequality, \[ h^0(\mathcal{K}_{r+1}) \leq h^0(K_C \otimes L^{-2}) + (r-1)h^0\left(K_C \otimes L^{-1}\right) = h^0(K_C \otimes L^{-2}) + (r-1)(g-d+r). \] If $\dim(\alpha_L) = g - \mbox{Cliff}(L) + e $, then \[ h^0(\mathcal{K}_{r+1}) = \dim(\ker(\alpha_L)) = (g-d+r)(r+1) - (g-d + 2r +e), \] and consequently we can bound $h^0(K_C\otimes L^{-2}) $ from below (and this is the essence of the argument in \cite{ACGH}) via \begin{align*} h^0(K_C\otimes L^{-2}) \geq & \, h^0(\mathcal{K}_{r+1}) - (r-1)h^0(K_C \otimes L^{-1})\\ & = h^0(\mathcal{K}_{r+1}) - (r-1)(g-d+r)\\ & = \left[h^0\left((K_C \otimes L^{-1}) \otimes \mathcal{O}^{r+1}\right) -\dim(\alpha_L)\right] - (r-1) (g-d+r) \\ & =(g-d+r)(r+1) - (g-(d - 2r) +e) - (r-1)(g-d+r) \\ & = g -d -e. \end{align*} Since \[ \deg(K_C\otimes L^{-2}) = 2g-2 - 2d \nd h^0(K_C\otimes L^{-2}) \geq g-d -e, \] we compute that \[ \mbox{Cliff}(K_C\otimes L^{-2}) = \mbox{Cliff}(\mathcal{O}_C(2D)) \leq 2g-2-2d - 2(g-d-e-1) = 2e. \] The conclusion is, since by assumption $2D$ is eligible to calculate the Clifford index, that $ 2e \geq \mbox{Cliff}(C)$. We summarize the discussion in the following. \begin{theorem} Suppose $\dim(\alpha_L) = g-d+2r +e$; then $ 2e \geq \mbox{Cliff}(C) $ and consequently $ e \geq \left\lceil \frac{c}{2}\right\rceil$. \end{theorem} \begin{corollary} Suppose $\mbox{Cliff}(C) = c$; then for any special line bundle $L$, we have $\dim(\alpha_L) \geq g- (d-2r) + \left\lceil \frac{c}{2}\right\rceil $. \end{corollary} \begin{proof} Our assumption is that $\dim(\alpha_L) = g - \mbox{Cliff}(L) + e $, and $ e \geq \left\lceil \frac{c}{2}\right\rceil $ by the previous theorem. \end{proof} \begin{corollary}[Main Result] $\dim(W^r_d) \leq d-2r -\left\lceil \frac{c}{2}\right\rceil$, where $c = \mbox{Cliff}(C)$. In particular, if $C$ is not hyperelliptic, then $\dim(W^r_d) \leq d-2r-1 $, and if $\dim(W^r_d) = d-2r-1 $, then the Clifford index of $C$ is at most two. \end{corollary} \section{ Discussion of Mumford's refinement of Marten's Theorem } The last corollary in some sense generalizes Mumford's refinement. It is not as precise as Mumford's result for the cases that Mumford covers, but it does give different information. What Mumford proves is the following. \begin{theorem}[Mumford] If $ \dim(W^r_d) = d -2r-1 $, then $C$ is either trigonal, bi-elliptic, or a smooth plane quintic. \end{theorem} Notice that the trigonal and plane quintic cases are exactly the cases of Clifford index one, and a bi-elliptic curve does have Clifford index 2. However, there are many curves of Clifford index 2 which are not bi-elliptic. In particular, a plane sextic has Clifford index 2. The new information of this note suggests an approach to recreating and extending Mumford's result. Namely, to find the curves for which the bound in Marten's theorem is sharp, that is, the curves for which $ \dim(W^r_d) = d -2r-\left\lceil \frac{c}{2}\right\rceil$, one has to find curves on which there is a divisor $D$ such that not only is $\mbox{Cliff}(D)$ `small', but so is $\mbox{Cliff}(2D)$. I suspect that, in the end, the kind of case-by-case analysis that Mumford did is necessary. On the other hand, Corollary 3 gives more precise information than was previously reported in the literature on how the bound in Marten's theorem depends on the existence of very special divisors. To the best of my knowledge, the Corollary provides new information about the case when $ \dim (W^r_d) \leq d-2r-2$. 
The best result (known to me) is Keem's theorem. \begin{theorem}[Keem] If $C$ is a smooth algebraic curve of genus $g \geq 11$ and we have integers $d$ and $r$ satisfying $ d \leq g+r-4$ and $ \dim(W^r_d) \geq d - 2r -2 $, then $C$ has a $g^1_4$ and hence is of Clifford index 2. \end{theorem} Notice that a plane sextic has genus 10, and that is why (presumably) all curves of lower genus are eliminated. By comparison, Corollary 2 gives the following. \begin{theorem} Suppose $C$ is a smooth curve, $ d \leq g-2 $ and $ \dim(W^r_d) = d - 2r -2 $; then the Clifford index of $C$ is two and hence $C$ is either a plane sextic or $4$-gonal. \end{theorem} \begin{proof} Every curve of Clifford index two is either a plane sextic or possesses a $g^1_4$. \end{proof} This improves upon Keem's result by eliminating the lower bound on the genus and improving the bound on $r$ to the optimal value. \section{ Proof of Hopf's Lemma} We wish to thank Prof. Mohan Kumar for allowing us to include this proof. While it is conceivable that this proof is known to others, I can find no reference to it in the literature. I would sincerely appreciate any comments or references on this topic. I note that while this result is called Hopf's theorem in 'The Geometry of Algebraic Curves', it seems also to be known as Hopf's lemma or Hopf's lemma on bi-linear forms in the topology literature. We call it a lemma in deference to its usage in topology, which seems to have been the original use of the lemma. \begin{lemma}[Hopf] Let $A\otimes B \to C$ be a bi-linear map of vector spaces defined over an algebraically closed field with $\dim(A) = a \nd \dim(B) = b$. If the map is injective on each factor, then $\dim(Im(A \otimes B)) \geq a+b - 1$.\end{lemma} \begin{proof} The proof uses the fact that if we let $M=(x_{ij})$ be the space of matrices of size $m \times n$, and set $X_r \subset \mathbb{A}^{mn} $ to be all the matrices of rank at most $r$, then $X_r$ has codimension equal to $(m-r)(n-r)$ in $\mathbb{A}^{mn} $. While Prof. Kumar communicated a very short algebraic proof of this fact, in the spirit of this note we prefer to reference \cite{ACGH}, p.~67. Firstly, note that we may trivially assume that $ b \geq 2$; moreover, arguing by contradiction, we may assume that the image lies in a subspace of dimension $a+b-2$, and, replacing $C$ by such a subspace, that $\dim(C)= c = a+b-2 $. For $ b \in B $, denote by $f_b$ the linear map $ A \to C $ obtained by pairing with $b$. By assumption, all the maps $f_b$ with $b\neq 0$ are injective. In fact, if $b_1, \ldots, b_n$ is a basis for $B$ and $ f_1, \ldots, f_n$ are the linear maps corresponding to the $b_i$, then the $f_i$ must be linearly independent in $\mbox{Hom}\left(A,C\right)$, because any linear relation $ \sum a_i f_i = 0 $ would imply that $ f_{\sum a_i b_i} = 0$. Consequently these elements span a linear subspace $ H \subset \mathbb{P}^{ac-1} = \mathbb{P}\left(\mbox{Hom}\left(A,C\right)\right) $ of dimension $b-1$. Consider $ X_1 \subset \mathbb{P}^{ac-1} $, the subset of maps $A \to C $ which are not injective, i.e., of rank at most $a-1$. This has codimension \[ (c-(a-1) )(a - (a-1) ) = c - a + 1 = a+b-2 -a + 1 = b-1. \] Hence, by the projective dimension theorem, $ X_1 \cap H \neq \emptyset $. Since we have assumed that our base field is algebraically closed, this means there exists a point $ p \in X_1 \cap H$ with coordinates $(a_1, \ldots, a_n) $, and then $ \sum a_i f_i $ is not injective, contradicting the hypothesis of the lemma. \end{proof} \end{document}
\begin{document} \title{Simple normal crossing Fano varieties and log Fano manifolds} \begin{abstract}{\noindent A projective log variety $(X, D)$ is called ``a log Fano manifold" if $X$ is smooth and if $D$ is a reduced simple normal crossing divisor on $X$ with $-(K_X+D)$ ample. The $n$-dimensional log Fano manifolds $(X, D)$ with nonzero $D$ are classified in this article when the log Fano index $r$ of $(X, D)$ satisfies either $r\geq n/2$ with $\rho(X)\geq 2$ or $r\geq n-2$. This result is a partial generalization of the classification of logarithmic Fano threefolds by Maeda. } \end{abstract} \section{Introduction}\label{introsection} As is well known, Fano varieties play an essential roll in various situations, especially in birational geometry. Many algebraic geometers have been studying Fano varieties, in both smooth and singular cases. During the past 30 years, researches of smooth Fano varieties, so called \emph{Fano manifolds}, have been advanced very much by using the theory of extremal rays; Fano manifolds can be classified up to dimension three. We have also known that the anticanonical degree of $n$-dimensional Fano manifolds are bounded for an arbitrary $n$. However, very little is known for higher dimensional case. Nowadays, many people use the \emph{Fano index} of a Fano manifold, the largest positive integer $r$ such that the anticanonical divisor is $r$ times a Cartier divisor, or the \emph{Fano pseudoindex} of a Fano manifold, the minimum of the intersection numbers of the anticanonical divisor with irreducible rational curves, in order to classify higher dimensional Fano manifolds. For example, the $n$-dimensional Fano manifolds with the Fano indices $r\geq n-2$ have been classified (\cite{KO,fujitabook,isk77,MoMu,mukai,wis,wisn90,wisn91}). Another example is the \emph{Mukai conjecture} \cite[Conjecture 4]{mukaiconj} (resp.\ \emph{generalized Mukai conjecture}), that the product of the Picard number and the Fano index (resp.\ the Fano pseudoindex) minus one is less than or equal to the dimension for any Fano manifold. The generalized Mukai conjecture is one of the most significant topics in the classification theory of Fano manifolds and is still open even now excepts for the case $n\leq 5$ or $\rho(X)\leq 3$ (see \cite{Occ,NO}). One of the most significant results related to the Mukai conjecture is due to Wi\'sniewski \cite{wisn90,wisn91}; he has classified $n$-dimensional Fano manifolds with the Fano index $r>n/2$ and the Picard number $\rho(X)\geq 2$. On the other hand, it is difficult to classify singular irreducible ($\mathbb{Q}$-)Fano varieties since we cannot use deformation theory successfully. In this article, we consider \emph{simple normal crossing Fano varieties}, that is, projective simple normal crossing varieties whose dualizing sheaves are dual of ample invertible sheaves. The concept of simple normal crossing Fano varieties is one of the most natural generalization of that of Fano manifolds. It also seems to be more tractable than that of singular irreducible ($\mathbb{Q}$-)Fano varieties. We also consider the \emph{snc Fano indices} (resp.\ the \emph{snc Fano pseudoindices}) of simple normal crossing Fano varieties; the largest positive integer $r$ such that the dualizing sheaf is $(-r)$-th power of some ample invertible sheaf (resp.\ the minimum of the intersection number of the dual of the dualizing sheaf and a rational curve). 
It is expected that simple normal crossing Fano varieties with large snc Fano indices have various applications as smooth Fano varieties do (for example, see \cite{Kol11}). It is natural to consider their irreducible components with the conductor divisors in order to investigate simple normal crossing Fano varieties. The component with the conductor has been considered by Maeda in his studies of a logarithmic Fano variety \cite{Maeda}. He has classified logarithmic Fano varieties of dimension at most three. A logarithmic Fano variety is called a \emph{log Fano manifold} in this article, which is a pair $(X, D)$ consisting of a smooth projective variety $X$ and a reduced simple normal crossing divisor $D$ on $X$ such that $-(K_X+D)$ is ample. These manifolds share similar properties with Fano manifolds but there are some differences between them. For example, as Maeda pointed out, the degree $(-(K_X+D)^{\cdot n})$ cannot be bounded for log Fano manifolds $(X, D)$ of fixed dimension $n\geq 3$. We also introduce the concept of the \emph{log Fano index} (resp.\ the \emph{log Fano pseudoindex}) of a log Fano manifold, the largest positive integer $r$ dividing $-(K_X+D)$, i.e., $-(K_X+D)\sim rL$ for a Cartier divisor $L$ (resp.\ the minimum of the intersection number of $-(K_X+D)$ and a rational curve). Our main new treatment is to consider log Fano manifolds with the log Fano indices (or the log Fano pseudoindices). The main idea to investigate $n$-dimensional log Fano manifolds $(X, D)$ with the log Fano index $r$ (resp.\ the log Fano pseudoindex $\iota$) and $D\neq 0$ is the following. First, to see the contraction morphism associated to an extremal ray intersecting the log divisor positively. We can classify del Pezzo type log Fano manifolds using only this strategy in Section \ref{delpezzo_section}. Second, one can show that $D$ is an $(n-1)$-dimensional simple normal crossing Fano variety such that the snc Fano index is divisible by $r$ (resp.\ the snc Fano pseudoindex is at least $\iota$). Hence we can use inductive arguments; the larger the snc Fano index or the snc Fano pseudoindex be, the simpler the structure of simple normal crossing Fano varieties be, as for Fano manifolds. As a consequence, we can analyze the structure of $X$ by viewing the restriction of contraction morphisms to $D$. Now we quickly organize this article. Section \ref{junbisection} is a preliminary section. We introduce the notion of simple normal crossing varieties and log manifolds in Section \ref{snc_section}. We also investigate the condition when log manifolds (and invertible sheaves on them) are glued as a simple normal crossing variety (and an invertible sheaf on it) in Theorem \ref{glue} and Proposition \ref{picglue}. We define simple normal crossing Fano varieties and log Fano manifolds in Section \ref{sncfano_section} and quickly give some properties in Section \ref{firstprop_section}. We also show some basic results of bundle structures (Section \ref{bdle_section}), extremal contractions (Section \ref{extcont_section}) and the special projective bundles, so called the Hirzebruch--Kleinschmidt varieties (Section \ref{property_scroll}). In Section \ref{ex_section}, we give various examples of log Fano manifolds with large log Fano indices, which occur in the theorems in Section \ref{thm_section}. In Section \ref{thm_section}, we state the main results of this article. 
In Section \ref{delpezzo_section}, we treat an $n$-dimensional log Fano manifold $(X, D)$ with $D\neq 0$ such that the log Fano pseudoindex is equal to $n-1$ (Proposition \ref{dP}), whose proof is easy but is optimal to understand how to classify log Fano manifolds with index data. The main purpose of this article, which we discuss in Section \ref{mukaiconjsection}, is to classify $n$-dimensional log Fano manifolds of the log Fano indices $r$ with nonzero boundaries such that $n\leq 2r$ and $\rho(X)\geq 2$ (Theorem \ref{mukai0} and Theorem \ref{mukai1}, see also Table \ref{maintable}), which is a log version of the treatment of the Mukai conjecture by Wi\'sniewski \cite{wisn90,wisn91}. We prove Theorem \ref{mukai1} in Section \ref{prf_section}. We note that Wi\'sniewski argued the case $r>n/2$ and we treat the case $r\geq n/2$ and nonzero boundaries. We also note that we do not treat Maeda's case $n=3$ and $r=1$; some of the techniques of the proof are similar to Maeda's one but we investigate completely different objects to Maeda's one. \begin{thm}[= Theorem \ref{mukai0}]\label{main0_intro} If $(X,D)$ is an $n$-dimensional log Fano manifold with the log Fano pseudoindex $\iota>n/2$, $D\neq 0$ and $\rho(X)\geq 2$, then $n=2\iota-1$ and \[ (X, D)\simeq(\mathbb{P}[\mathbb{P}^{\iota-1};0^\iota, m], \,\, \mathbb{P}^{\iota-1}\times\mathbb{P}^{\iota-1}) \] with $m\geq 0$, where the embedding $D\subset X$ is obtained by the canonical projection $\mathbb{P}[\mathbb{P}^{\iota-1};0^\iota]\subset_{\operatorname{can}}\,\mathbb{P}[\mathbb{P}^{\iota-1};0^\iota, m]$. $($This is exactly the case in Example \ref{2rminusone} in Section \ref{ex_section}.$)$ \end{thm} \begin{thm}[= Main Theorem \ref{mukai1}]\label{main1_intro} Let $(X, D)$ be a $2r$-dimensional log Fano manifold with the log Fano index $r\geq 2$, $D\neq 0$ and $\rho(X)\geq 2$. Then $(X, D)$ is exactly in the Examples \ref{burouappu}, \ref{pP}, \ref{kayaku}, \ref{fanoQ}, \ref{rthree}, \ref{rtwo}, \ref{Tp}, \ref{Pp}, \ref{zerozeroone}, \ref{zeroonebig}, \ref{zerozerobig} $($See Table \ref{maintable} for the list of $(X, D)$$).$ \end{thm} As a consequence of Theorems \ref{mukai0}, \ref{mukai1}, combining Maeda's result, we have classified $n$-dimensional log Fano manifolds with the log Fano indices $r\geq n-2$ and nonzero boundaries, which we discuss in Section \ref{mukaisection} (Corollary \ref{coindex3}). \operatorname{sm}allskip \noindent\textbf{Acknowledgements.} The author is grateful to Professor J\'anos Koll\'ar for showing him the earlier version of \cite{Kol11} and for making a suggestion to classify snc Fano varieties. He also would like to expresses his gratitude to Professors Shigefumi Mori and Noboru Nakayama for their warm encouragements and for making various suggestions. He thanks Professor Eiichi Sato and Doctor Kazunori Yasutake for suggesting him to replace log Fano index by log Fano pseudoindex in Theorem \ref{mukai0} during the participation of Algebraic Geometry Seminar in Kyushu University. The author is partially supported by JSPS Fellowships for Young Scientists. \noindent\textbf{Notation and terminology.} We always work in the category of algebraic (separated and finite type) schemes over a fixed algebraically closed field $\Bbbk$ of characteristic zero. A \emph{variety} means a connected and reduced algebraic scheme. For the theory of extremal contraction, we refer the readers to \cite{KoMo}. For a complete variety $X$, the Picard number of $X$ is denoted by $\rho(X)$. 
For a smooth projective variety $X$, we define the effective cone $\operatorname{Eff}(X)$ (resp.\ the nef cone $\operatorname{Nef}(X)$) as the cone in $N^1(X)$ spanned by the classes of effective (resp.\ nef) divisors on $X$. For a smooth projective variety $X$ and a $K_X$-negative extremal ray $R\subset\overline{\operatorname{NE}}(X)$, \[ l(R):=\min\{(-K_X\cdot C)\mid C\text{ is a rational curve with } [C]\in R\} \] is called the \emph{length} $l(R)$ of $R$. A rational curve $C\subset X$ with $[C]\in R$ and $(-K_X\cdot C)=l(R)$ is called \emph{a minimal rational curve of $R$}. For a morphism of algebraic schemes $f\colon X\rightarrow Y$, we define the \emph{exceptional locus} $\operatorname{Exc}(f)$ \emph{of $f$} by \[ \operatorname{Exc}(f):=\{x\in X\mid f \text{ is not an isomorphism around }x\}. \] For a complete variety $X$, an invertible sheaf $\mathcal{L}$ on $X$ and a nonnegative integer $i$, we denote the dimension of the $\Bbbk$-vector space $H^i(X,\mathcal{L})$ by $h^i(X,\mathcal{L})$. We also define $h^i(X,L)$ as $h^i(X,\mathcal{O}_X(L))$ for a Cartier divisor $L$ on $X$. For algebraic schemes (or coherent sheaves on a fixed algebraic scheme) $X_1,\dots,X_m$, the projection is denoted by $p_{i_1,\dots,i_k}\colon\prod_{i=1}^mX_i\rightarrow\prod_{j=1}^kX_{i_j}$ for any $1\leq i_1<\cdots<i_k\leq m$. For an algebraic scheme $X$ and a locally free sheaf of finite rank $\mathcal{E}$ on $X$, let $\mathbb{P}_X(\mathcal{E})$ be the projectivization of $\mathcal{E}$ in the sense of Grothendieck and $\mathcal{O}_\mathbb{P}(1)$ be the tautological invertible sheaf. We usually denote the projection by $p\colon\mathbb{P}_X(\mathcal{E})\rightarrow X$. For locally free sheaves $\mathcal{E}_1,\dots,\mathcal{E}_m$ of finite rank on $X$ and $1\leq i_1<\cdots<i_k\leq m$, we sometimes denote the embedding obtained by the natural projection $p_{i_1,\dots,i_k}\colon\bigoplus_{i=1}^m\mathcal{E}_i\rightarrow\bigoplus_{j=1}^k\mathcal{E}_{i_j}$ by \[ \mathbb{P}_X\Biggl(\bigoplus_{j=1}^k\mathcal{E}_{i_j}\Biggr)\quad\subset_{\operatorname{can}}\,\quad \mathbb{P}_X\Biggl(\bigoplus_{i=1}^m\mathcal{E}_i\Biggr) \] and we say that this embedding is \emph{obtained by the canonical projection}. The symbol $\mathbb{Q}^n$ (resp.\ $\mathcal{Q}^n$) denotes a smooth (resp.\ possibly non-smooth) hyperquadric in $\mathbb{P}^{n+1}$ for $n\geq 2$. We write $\mathcal{O}_{\mathbb{Q}^n}(1)$ (resp.\ $\mathcal{O}_{\mathcal{Q}^n}(1)$) for the invertible sheaf which is the restriction of $\mathcal{O}_{\mathbb{P}^{n+1}}(1)$ under the natural embedding. We sometimes write $\mathcal{O}(m)$ instead of $\mathcal{O}_{\mathbb{Q}^n}(m)$ (or $\mathcal{O}_{\mathcal{Q}^n}(m)$, $\mathcal{O}_{\mathbb{P}^n}(m)$) for simplicity. For an irreducible projective variety $V$ with $\operatorname{Pic}(V)=\mathbb{Z}$, generated by the ample invertible sheaf $\mathcal{O}_V(1)$, a nonnegative integer $t$ and integers $a_0,\dots,a_t$, we denote the projective space bundle \[ \mathbb{P}_V(\mathcal{O}_V(a_0)\oplus\dots\oplus\mathcal{O}_V(a_t))\quad\text{by}\quad\mathbb{P}[V; a_0,\dots,a_t] \] for simplicity. (We often denote \[ \mathbb{P}[V; \underbrace{b_0,\dots,b_0}_{n_0 \text{ times}},\dots, \underbrace{b_u,\dots,b_u}_{n_u \text{ times}}] \quad\text{by}\quad\mathbb{P}[V; b_0^{n_0},\dots,b_u^{n_u}] \] for any integers $b_0,\dots,b_u$ and positive integers $n_0,\dots,n_u$.)
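For instance, with these conventions, $\mathbb{P}[\mathbb{P}^2; 0^3, 1]$ stands for the projective space bundle $\mathbb{P}_{\mathbb{P}^2}(\mathcal{O}_{\mathbb{P}^2}^{\oplus 3}\oplus\mathcal{O}_{\mathbb{P}^2}(1))$, a $\mathbb{P}^3$-bundle over $\mathbb{P}^2$.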
We also denote by $\mathcal{O}(m; n)$ the invertible sheaf \[ p^*\mathcal{O}_V(m)\otimes\mathcal{O}_\mathbb{P}(n)\quad\text{on}\quad\mathbb{P}[V; a_0,\dots,a_t] \] for any integers $m$ and $n$, where $p\colon \mathbb{P}[V; a_0,\dots,a_t]\rightarrow V$ is the projection and $\mathcal{O}_\mathbb{P}(1)$ is the tautological invertible sheaf with respect to $p$. For any $0\leq i_1<\cdots<i_k\leq t$, we denote the embedding \[ \mathbb{P}_V(\mathcal{O}_V(a_{i_1})\oplus\dots\oplus\mathcal{O}_V(a_{i_k})) \subset_{\operatorname{can}}\,\mathbb{P}_V(\mathcal{O}_V(a_0)\oplus\dots\oplus\mathcal{O}_V(a_t)) \] obtained by the canonical projection by $\mathbb{P}[V; a_{i_1},\dots,a_{i_k}]\subset_{\operatorname{can}}\,\mathbb{P}[V; a_0,\dots,a_t]$, and we also say that this embedding is \emph{obtained by the canonical projection}. \section{Preliminaries}\label{junbisection} \subsection{Snc varieties and log manifolds}\label{snc_section} First, we define simple normal crossing varieties and log manifolds. \begin{definition}[normal crossing singularities]\label{ncdfn} Let $X$ be a variety and $x\in X$ be a closed point. We say that $X$ has a \emph{normal crossing singularity} at $x$ if $X$ is equi-dimensional of dimension $n$ around $x$ and if the completion of the local ring $\mathcal{O}_{X,x}$ is isomorphic to \[ \Bbbk[[x_1,\dots,x_{n+1}]]/(x_1\cdots x_k) \] for some $1\leq k\leq n+1$. \end{definition} \begin{definition}[snc varieties and log manifolds]\label{sncdfn} \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item A \emph{simple normal crossing variety} (\emph{snc} variety, for short) is a variety $\mathcal{X}$ having normal crossing singularities at every closed point $x\in\mathcal{X}$ such that each irreducible component of $\mathcal{X}$ is a smooth variety. \item A \emph{log manifold} is a pair $(X,D)$ such that $X$ is a smooth variety and $D$ is a simple normal crossing divisor on $X$, that is, $D$ is reduced and each connected component of $D$ is an snc variety. \end{enumerate} \end{definition} \begin{remark}\label{nrmlrmk} Let $\mathcal{X}$ be an snc variety and $\nu\colon\overline{X}\rightarrow\mathcal{X}$ be the normalization morphism of $\mathcal{X}$. Then $\overline{X}$ is exactly the disjoint union of all the irreducible components of $\mathcal{X}$ and $\nu$ is induced by the natural embeddings. \end{remark} \begin{definition}[conductor divisor]\label{conddfn} Let $\mathcal{X}$ be an snc variety with the irreducible decomposition $\mathcal{X}=\bigcup_{1\leq i\leq m}X_i$. For any distinct $1\leq i$, $j\leq m$, the intersection $X_i\cap X_j$ can be regarded as a smooth divisor $D_{ij}$ on $X_i$. We define \[ D_i:=\sum_{j\neq i}D_{ij} \] and call it the \emph{conductor divisor} in $X_i$ (with respect to $\mathcal{X}$). We often write that $(X_i, D_i)\subset\mathcal{X}$ is an irreducible component for the sake of simplicity. We also write $\mathcal{X}=\bigcup_{1\leq i\leq m}(X_i, D_i)$ in order to emphasize the conductor divisors. \end{definition} \begin{remark}\label{adjunctionrmk} If $\mathcal{X}$ is an snc variety, then $\mathcal{X}$ has an invertible dualizing sheaf $\omega_{\mathcal{X}}$ since $\mathcal{X}$ has only Gorenstein singularities. Furthermore, if $(X,D)\subset\mathcal{X}$ is an irreducible component with the conductor divisor, then $(X, D)$ is a log manifold and \[ \omega_{\mathcal{X}}|_X\simeq\mathcal{O}_X(K_X+D) \] by the adjunction formula, where $K_X$ denotes the canonical divisor of $X$.
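For instance, if $\mathcal{X}=L_1\cup L_2\subset\mathbb{P}^2$ is a reducible conic, then $\omega_{\mathcal{X}}\simeq(\omega_{\mathbb{P}^2}\otimes\mathcal{O}_{\mathbb{P}^2}(\mathcal{X}))|_{\mathcal{X}}\simeq\mathcal{O}_{\mathbb{P}^2}(-1)|_{\mathcal{X}}$, while on each line $L_i\simeq\mathbb{P}^1$, with conductor divisor the point $D_i=L_1\cap L_2$, we have \[ \mathcal{O}_{L_i}(K_{L_i}+D_i)\simeq\mathcal{O}_{\mathbb{P}^1}(-2+1)\simeq\mathcal{O}_{\mathbb{P}^1}(-1), \] in agreement with the formula above.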
\end{remark} \begin{definition}[{strata and lc centers \cite{fujino}}]\label{lccenter} \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item Let $\mathcal{X}$ be an snc variety with the irreducible decomposition $\mathcal{X}=\bigcup_{1\leq i\leq m}X_i$. \begin{itemize} \item A \emph{stratum} of $\mathcal{X}$ is an irreducible component of $\bigcap_{i\in I}X_i$ with the reduced scheme structure for a subset $I\subset\{1,\dots,m\}$. \item A \emph{minimal stratum} of $\mathcal{X}$ is a stratum of $\mathcal{X}$ which is minimal in the set of strata of $\mathcal{X}$ with respect to the partial order given by inclusion. \end{itemize} \item Let $(X, D)$ be a log manifold. \begin{itemize} \item An \emph{lc center} of $(X, D)$ is a stratum of some connected component of $D$. \item A \emph{minimal lc center} of $(X, D)$ is an lc center of $(X, D)$ which is minimal in the set of lc centers of $(X, D)$ with respect to the partial order given by inclusion. \end{itemize} \end{enumerate} \end{definition} We now give a theorem on gluing log manifolds together into snc varieties. Recall that an algebraic scheme has the \emph{Chevalley-Kleiman property} if every finite subscheme is contained in an open affine subscheme (see \cite[Definition 47]{quot}). For example, any projective variety has the Chevalley-Kleiman property. \begin{thm}\label{glue} Let $(X_1, D_1),\dots,(X_m, D_m)$ be $n$-dimensional log manifolds such that $X_i$ has the Chevalley-Kleiman property for any $1\leq i\leq m$. Assume that $D_i$ is decomposed into \[ D_i=\sum_{j\neq i,\,\, 1\leq j\leq m}D_{ij} \] such that the $D_{ij}$ are smooth $($possibly disconnected or empty$)$ divisors for any $1\leq i\leq m$, and that there exist isomorphisms \[ \phi_{ij}\colon D_{ij}\rightarrow D_{ji} \] for all distinct $1\leq i, j\leq m$. We assume furthermore that \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{\rm{(\theenumi)}} \item\label{glue1} $\phi_{ji}=\phi_{ij}^{-1}$ for any distinct $i$, $j$, \item\label{glue2} $\phi_{ij}(D_{ij}\cap D_{ik})=D_{ji}\cap D_{jk}$ and $\phi_{jk}|_{D_{ji}\cap D_{jk}}\circ\phi_{ij}|_{D_{ij}\cap D_{ik}}=\phi_{ik}|_{D_{ij}\cap D_{ik}}$ for any distinct $i$, $j$, $k$. \end{enumerate} Then there exists an algebraic scheme $\mathcal{X}$ which has the Chevalley-Kleiman property such that any connected component of $\mathcal{X}$ is an $n$-dimensional snc variety, the normalization of $\mathcal{X}$ can be written as \[ \nu\colon X_1\sqcup\dots\sqcup X_m\rightarrow\mathcal{X}, \] and $\mathcal{X}$ also satisfies the following: \begin{itemize} \item The conductor divisor in $X_i$ (with respect to $\mathcal{X}$) is exactly $D_i$ for any $1\leq i\leq m$. \item $\nu(x_i)=\nu(x_j)$ if and only if $x_i\in D_{ij}$, $x_j\in D_{ji}$ and $\phi_{ij}(x_i)=x_j$, for any $1\leq i<j\leq m$, $x_i\in X_i$ and $x_j\in X_j$. \end{itemize} \end{thm} \begin{proof}[Sketch of the proof] Let $X:=X_1\sqcup\dots\sqcup X_m$ and let $D:=D_1\sqcup\dots\sqcup D_m\subset X$, a divisor on $X$. We note that $\{\phi_{ij}\}_{i\neq j}$ defines an involution on $\bigsqcup_{i\neq j}D_{ij}$, which is the normalization of $D$, by condition \eqref{glue1}. It is easy to show that this involution generates a finite equivalence relation in the sense of Koll\'ar \cite[(26), \S 3]{kollarbook} by condition \eqref{glue2}.
Therefore, by \cite[Theorem 23, \S 3]{kollarbook} and \cite[Corollary 48]{quot} (see also \cite{fer}), we have a finite surjective morphism $\nu\colon X\rightarrow\mathcal{X}$ onto a seminormal variety which has the Chevalley-Kleiman property such that the following property holds: for $x_i\in X_i$ and $x_j\in X_j$, $\nu(x_i)=\nu(x_j)$ holds if and only if either $i=j$ and $x_i=x_j$, or $i\neq j$, $x_i\in D_{ij}$, $x_j\in D_{ji}$ and $\phi_{ij}(x_i)=x_j$. Pick an affine open subscheme $\mathcal{U}=\operatorname{Spec} A\subset\mathcal{X}$. Since $\nu$ is a finite morphism, $U_i:=\nu^{-1}(\mathcal{U})\cap X_i$ is affine for any $1\leq i\leq m$. Let $U_i=\operatorname{Spec} A_i$. Then we have a finite extension of $\Bbbk$-algebras $A\hookrightarrow\prod_{i=1}^m A_i$ obtained from $\nu$. After replacing $\mathcal{U}$ by a smaller one, $D_{ij}\cap U_i\subset U_i$ can be defined by a single element $f_{ij}\in A_i$ for any $i$, $j$. Let $\psi_{ij}\colon A_i/(f_{ij})\rightarrow A_j/(f_{ji})$ be the isomorphism of $\Bbbk$-algebras associated to the isomorphism $\phi_{ji}|_{D_{ji}\cap U_j}\colon D_{ji}\cap U_j\rightarrow D_{ij}\cap U_i$. Then it is easy to show the equality \[ A=\Biggl\{(a_i)_{i}\in\prod_{i=1}^m A_i\ \Bigg|\ \psi_{ij}(a_i\operatorname{mod}\, (f_{ij}))=a_j\operatorname{mod}\, (f_{ji}) \quad\text{for all }i\neq j\Biggr\}, \] that the restriction $\nu|_{X_i}$ is a closed embedding for any $1\leq i\leq m$, and that $\operatorname{Spec} A$ has normal crossing singularities at all closed points. \end{proof} Next, we consider the descent of invertible sheaves. \begin{proposition}\label{picglue} Let $\mathcal{X}$ be an $n$-dimensional snc variety with the irreducible decomposition $\mathcal{X}=\bigcup_{i=1}^mX_i$ which has a unique minimal stratum. We also let $X_{ij}:=X_i\cap X_j$ $($scheme-theoretic intersection$)$ for any $1\leq i<j\leq m$. Then we have an exact sequence \[ 0\rightarrow\operatorname{Pic}(\mathcal{X})\xrightarrow{\eta}\bigoplus_{i=1}^m\operatorname{Pic}(X_i) \xrightarrow{\mu}\bigoplus_{1\leq i<j\leq m}\operatorname{Pic}(X_{ij}), \] where $\eta$ is the restriction homomorphism and \[ \mu\Bigl((\mathcal{H}_i)_i\Bigr):=(\mathcal{H}_i|_{X_{ij}}\otimes\mathcal{H}_j^\vee|_{X_{ij}})_{i<j}. \] \end{proposition} \begin{proof} Let $\mathcal{X}_i:=\bigcup_{j=1}^iX_j\subset\mathcal{X}$ for any $1\leq i\leq m$. Then it is easy to show that both $\mathcal{X}_i$ and $\mathcal{X}_i\cap X_{i+1}$ are snc varieties and have a unique minimal stratum. The units of the structure sheaves form an exact sequence \[ 1\rightarrow\mathcal{O}_{\mathcal{X}_{i+1}}^*\rightarrow\mathcal{O}_{X_{i+1}}^*\times\mathcal{O}_{\mathcal{X}_i}^* \rightarrow\mathcal{O}_{\mathcal{X}_i\cap X_{i+1}}^*\rightarrow 1 \] of sheaves of abelian groups, which induces a long exact sequence \begin{eqnarray*} 1 & \rightarrow & \Bbbk^*\rightarrow\Bbbk^*\times\Bbbk^* \xrightarrow{\upsilon}\Bbbk^* \\ & \rightarrow & \operatorname{Pic}(\mathcal{X}_{i+1})\xrightarrow{\lambda}\operatorname{Pic}(X_{i+1})\oplus\operatorname{Pic}(\mathcal{X}_i) \rightarrow\operatorname{Pic}(\mathcal{X}_i\cap X_{i+1}). \end{eqnarray*} Here $\mathcal{X}_i$, $\mathcal{X}_{i+1}$ and $X_{i+1}$ are all connected. The map $\lambda$ above is injective since $\upsilon$ is surjective. In particular, $\eta$ is injective. It is obvious that $\mu\circ\eta=0$. Assume that $(\mathcal{H}_i)_i\in\bigoplus_{i=1}^m\operatorname{Pic}(X_i)$ satisfies $\mu\bigl((\mathcal{H}_i)_i\bigr)=0$.
We will construct invertible sheaves $\mathcal{L}_i\in\operatorname{Pic}(\mathcal{X}_i)$ for $1\leq i\leq m$ by induction on $i$ such that \begin{itemize} \item $\mathcal{L}_i|_{X_i}\simeq\mathcal{H}_i$ and \item $\mathcal{L}_i|_{\mathcal{X}_{i-1}}\simeq\mathcal{L}_{i-1}$ (if $i\geq 2$). \end{itemize} If $i=1$, then $\mathcal{L}_1$ must be (isomorphic to) $\mathcal{H}_1$. Assume that we have constructed $\mathcal{L}_1,\dots,\mathcal{L}_i$. Since \[ \operatorname{Pic}(\mathcal{X}_{i+1})\rightarrow\operatorname{Pic}(X_{i+1})\oplus\operatorname{Pic}(\mathcal{X}_i) \rightarrow\operatorname{Pic}(\mathcal{X}_i\cap X_{i+1}) \] is exact, in order to construct $\mathcal{L}_{i+1}$ it is enough to show that $\mathcal{L}_i|_{\mathcal{X}_i\cap X_{i+1}}\simeq \mathcal{H}_{i+1}|_{\mathcal{X}_i\cap X_{i+1}}$. We already know that the natural sequence \[ 0\rightarrow\operatorname{Pic}(\mathcal{X}_i\cap X_{i+1})\xrightarrow{\kappa} \bigoplus_{j=1}^i\operatorname{Pic}(X_{j,i+1}) \] is exact since $\mathcal{X}_i\cap X_{i+1}$ has a unique minimal stratum. Both $\mathcal{L}_i|_{\mathcal{X}_i\cap X_{i+1}}$ and $\mathcal{H}_{i+1}|_{\mathcal{X}_i\cap X_{i+1}}$ are mapped by $\kappa$ to $(\mathcal{H}_j|_{X_{j,i+1}})_j$, and thus we can construct $\mathcal{L}_{i+1}$. Therefore we obtain an invertible sheaf $\mathcal{L}_m\in\operatorname{Pic}(\mathcal{X})$ such that $\mathcal{L}_m|_{X_i}\simeq\mathcal{H}_i$ for any $1\leq i\leq m$. Thus $\eta(\mathcal{L}_m)=(\mathcal{H}_i)_i$. \end{proof} \subsection{Snc Fano varieties and log Fano manifolds}\label{sncfano_section} We shall define simple normal crossing Fano varieties and log Fano manifolds. \begin{definition}[snc Fano varieties and log Fano manifolds]\label{sncfanopredfn} \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item A projective snc variety $\mathcal{X}$ is said to be a \emph{simple normal crossing Fano variety} (\emph{snc Fano variety}, for short) if the dual of the dualizing sheaf $\omega_{\mathcal{X}}^\vee$ is ample. \item A projective log manifold $(X,D)$ is said to be a \emph{log Fano manifold} if $-(K_X+D)$ is ample. \end{enumerate} \end{definition} \begin{ex}[See also Proposition \ref{indn}]\label{curve} Let $(X, D)$ be a one-dimensional log Fano manifold. Then $X\simeq\mathbb{P}^1$ and $D$ is either one point or empty, since $0<\deg (-(K_X+D))=2-2g-\deg D$ holds for the genus $g$ of $X$. Therefore, if $\mathcal{X}$ is a one-dimensional snc Fano variety, then $\mathcal{X}$ is isomorphic to either a smooth conic or a reducible conic. \end{ex} We also define the \emph{index} and the \emph{pseudoindex} of a simple normal crossing Fano variety and of a log Fano manifold; these notions are essential in this paper. \begin{definition}[index]\label{sncfanodfn} \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item Let $\mathcal{X}$ be an snc Fano variety. We define the \emph{snc Fano index} of $\mathcal{X}$ as \[ \max\{r\in\mathbb{Z}_{>0}\mid\omega_{\mathcal{X}}^{\vee}\simeq\mathcal{L}^{\otimes r} \text{ for some }\mathcal{L}\in\operatorname{Pic}(\mathcal{X})\}. \] \item Let $(X, D)$ be a log Fano manifold. We define the \emph{log Fano index} of $(X, D)$ as \[ \max\{r\in\mathbb{Z}_{>0}\mid -(K_X+D)\sim rL \text{ for some Cartier divisor }L\text{ on }X\}.
\] \end{enumerate} \end{definition} \begin{definition}[pseudoindex]\label{psncfanodfn} \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item Let $\mathcal{X}$ be an snc Fano variety. We define the \emph{snc Fano pseudoindex} of $\mathcal{X}$ as \[ \min\{\deg_C(\omega_{\mathcal{X}}^{\vee}|_C)\mid C\subset\mathcal{X} \text{ rational curve}\}. \] \item Let $(X, D)$ be a log Fano manifold. We define the \emph{log Fano pseudoindex} of $(X, D)$ as \[ \min\{(-(K_X+D)\cdot C)\mid C\subset X \text{ rational curve}\}. \] \end{enumerate} \end{definition} \begin{remark}\label{pseudo_geq} For an snc Fano variety $\mathcal{X}$ (resp.\ a log Fano manifold $(X, D)$), the snc Fano pseudoindex (resp.\ the log Fano pseudoindex) $\iota$ is divisible by the snc Fano index (resp.\ the log Fano index) $r$, since every degree $\deg_C(\omega_{\mathcal{X}}^{\vee}|_C)$ (resp.\ every intersection number $(-(K_X+D)\cdot C)$) is a positive multiple of $r$. In particular, $\iota\geq r$ holds. \end{remark} \begin{remark}\label{fanoirrrmk} Let $\mathcal{X}$ be an $n$-dimensional snc Fano variety with the snc Fano index $r$ and the snc Fano pseudoindex $\iota$, and let $\mathcal{L}$ be an invertible sheaf on $\mathcal{X}$ such that $\omega_{\mathcal{X}}^{\vee}\simeq\mathcal{L}^{\otimes r}$ holds. Then $(X,D)$ is an $n$-dimensional log Fano manifold such that $-(K_X+D)\sim rL$ holds, where $(X,D)\subset\mathcal{X}$ is an irreducible component with the conductor and $L$ is a divisor corresponding to the restriction of $\mathcal{L}$ to $X$; this follows easily from Remark \ref{adjunctionrmk}. Hence the log Fano index of $(X, D)$ is divisible by $r$ and the log Fano pseudoindex of $(X, D)$ is at least $\iota$. \end{remark} \subsection{First properties of log Fano manifolds}\label{firstprop_section} We quickly collect some basic properties of log Fano manifolds. \begin{thm}[{\cite[Theorem 1.3, 1.4]{Maeda}, \cite[Theorem 3.35]{KoMo}}]\label{cone} Let $(X,D)$ be a log Fano manifold. Then $\operatorname{NE}(X)$ is spanned by finitely many extremal rays. Furthermore, for any extremal ray $R\subset\operatorname{NE}(X)$, we have: \begin{itemize} \item The ray $R$ is spanned by the class of a rational curve $C$ on $X$. \item There exists a contraction morphism $\operatorname{cont}_R\colon X\rightarrow Y$ associated to $R$, and there exists an exact sequence \[ 0\rightarrow\operatorname{Pic}(Y)\xrightarrow{\operatorname{cont}_R^*}\operatorname{Pic}(X)\xrightarrow{(\bullet\cdot C)}\mathbb{Z}. \] \end{itemize} \end{thm} \begin{lemma}[{\cite[Corollary 2.2, Lemma 2.3]{Maeda}}]\label{picard} Let $(X,D)$ be a log Fano manifold. Then $\operatorname{Pic}(X)$ is torsion free. Furthermore, if $\Bbbk=\mathbb{C}$, the homomorphism \[ \operatorname{Pic}(X)\rightarrow H^2(X^\text{\rm{an}}; \mathbb{Z}) \] is an isomorphism. \end{lemma} \begin{remark}\label{logfanopic_rmk} Let $(X, D)$ be a log Fano manifold, $r$ be a positive integer and $L$ be a divisor on $X$ such that $-(K_X+D)\sim rL$. Then $L$ is uniquely determined up to linear equivalence by $X$, $D$ and $r$ by Lemma \ref{picard}. \end{remark} \begin{remark}\label{sncpicrmk} Let $\mathcal{X}$ be an snc Fano variety. Then $\operatorname{Pic}(\mathcal{X})$ is torsion free by Lemma \ref{picard}, Theorem \ref{fujinothm} \eqref{fujinothm2} and Proposition \ref{picglue}. Hence we can also say the following: for an snc Fano variety $\mathcal{X}$, a positive integer $r$ and an invertible sheaf $\mathcal{L}$ such that $\omega_{\mathcal{X}}^{\vee}\simeq\mathcal{L}^{\otimes r}$, the sheaf $\mathcal{L}$ is uniquely determined up to isomorphism by $\mathcal{X}$ and $r$.
\end{remark} \begin{proposition}\label{rhoonefano} Let $(X, D)$ be a log Fano manifold with $\rho(X)=1$ and $D\neq 0$. Then $X$ is a Fano manifold whose Fano index $($resp.\ Fano pseudoindex$)$ is larger than the log Fano index $($resp.\ the log Fano pseudoindex$)$ of $(X, D)$. \end{proposition} \begin{proof} Since $-K_X\sim -(K_X+D)+D$ and $D$ is ample (we note that $\rho(X)=1$), the assertions are obvious. \end{proof} \begin{thm}[cf. {\cite[Lemma 2.4]{Maeda}}]\label{fujinothm} \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{$(\theenumi)$} \item\label{fujinothm1} Let $(X,D)$ be a log Fano manifold such that the log Fano index is divisible by $r$ $($resp.\ the log Fano pseudoindex is at least $\iota$$)$. Then $D$ is a $($connected$)$ snc Fano variety and the snc Fano index is also divisible by $r$ $($resp.\ the snc Fano pseudoindex is at least $\iota$$)$. \item\label{fujinothm2} Let $\mathcal{X}$ be an snc Fano variety. Then there is a unique minimal stratum of $\mathcal{X}$. In particular, any two irreducible components of $\mathcal{X}$ intersect each other. \end{enumerate} \end{thm} \begin{proof} \eqref{fujinothm1} We know that $D$ is connected by \cite[Lemma 2.4 (a)]{Maeda}. Let $L$ be a divisor on $X$ such that $-(K_X+D)\sim rL$. Then $\omega_D^\vee\simeq\mathcal{O}_X(-(K_X+D))|_D\simeq(\mathcal{O}_X(L)|_D)^{\otimes r}$ by adjunction. \eqref{fujinothm2} This can be proved using the same idea as in \cite[Lemma 2.4 $(\rm{a}')$]{Maeda}. We remark that it also follows directly from \cite[Theorem 6.6 (ii)]{Ambro} and \cite[Theorem 3.47 (ii)]{fujino}. \end{proof} \begin{corollary}\label{cpnt} For an $n$-dimensional snc Fano variety $\mathcal{X}$ with the snc Fano pseudoindex $\iota$, the number of irreducible components of $\mathcal{X}$ is at most $n+2-\iota$. \end{corollary} \begin{proof} We may assume that $\mathcal{X}$ is reducible, since the assertion is well known in the irreducible case by \cite{mor79}. We argue by induction on $n$. If $n=1$, then $\mathcal{X}$ is isomorphic to a reducible conic (see Example \ref{curve}). Hence the assertion is obvious. Now we assume that the assertion holds in dimension $n-1$. We know that any two irreducible components of $\mathcal{X}$ intersect each other by Theorem \ref{fujinothm} \eqref{fujinothm2}. Therefore, for any irreducible component with the conductor $(X,D)\subset\mathcal{X}$, the number of irreducible components of $\mathcal{X}$ minus one is equal to the number of irreducible components of $D$. We note that $D$ is an $(n-1)$-dimensional snc Fano variety such that the snc Fano pseudoindex is at least $\iota$ by Remark \ref{fanoirrrmk} and Theorem \ref{fujinothm} \eqref{fujinothm1}. The number of irreducible components of $D$ is at most $n+1-\iota$ by the induction hypothesis. Therefore the number of irreducible components of $\mathcal{X}$ is at most $n+2-\iota$. \end{proof} By Theorem \ref{fujinothm} \eqref{fujinothm2}, we also obtain the following corollary, using Theorem \ref{glue} and Proposition \ref{picglue}. \begin{corollary}\label{logfanopic} Fix $n$, $r$, $m\in\mathbb{Z}_{>0}$. For any $1\leq i\leq m$, let $(X_i, D_i)$ be an $n$-dimensional log Fano manifold whose log Fano index is divisible by $r$.
Assume that the irreducible decomposition is written as $D_i=\sum_{j\neq i, 1\leq j\leq m}D_{ij}$ for any $1\leq i\leq m$, and that there exist isomorphisms \[ \phi_{ij}\colon D_{ij}\rightarrow D_{ji} \] for all distinct $1\leq i$, $j\leq m$ which satisfy the cocycle conditions \eqref{glue1} and \eqref{glue2} in Theorem \ref{glue}. Then there exists an $n$-dimensional snc Fano variety $\mathcal{X}$ whose snc Fano index is divisible by $r$ and whose irreducible decomposition can be written as $\mathcal{X}=\bigcup_{i=1}^m(X_i, D_i)$. \end{corollary} Thanks to recent progress in the minimal model program, we also know the following results. \begin{thm}[{\cite[Theorem 1]{zhang}}]\label{rat_conn} Let $(X, D)$ be a log Fano manifold. Then $X$ is a rationally connected variety, that is, any two closed points of $X$ are joined by an irreducible rational curve. \end{thm} \begin{thm}[{\cite[Corollary 1.3.2]{BCHM}}]\label{dream} Let $(X, D)$ be a log Fano manifold. Then $X$ is a Mori dream space $($see \cite{HK} for the definition$)$. \end{thm} \subsection{Bundles and subbundles}\label{bdle_section} In this subsection, we recall some facts about bundle structures. The following lemma is well known. \begin{lemma}\label{lemP} Let $X$ be an irreducible variety, $D\subset X$ be an effective Cartier divisor and $c$ be a nonnegative integer. Let $\pi\colon X\rightarrow Y$ be a $\mathbb{P}^c$-bundle such that $\pi|_D\colon D\rightarrow Y$ is a $\mathbb{P}^{c-1}$-subbundle. That is, $\pi$ is a proper and smooth morphism such that $\pi^{-1}(y)\simeq\mathbb{P}^c$ and $(\pi|_D)^{-1}(y)$ is isomorphic to a hyperplane section under this isomorphism for any closed point $y\in Y$. Then there exists a commutative diagram of $Y$-morphisms \[ \begin{CD} D @>{\iota}>> X \\ @V{\iota_D}VV @V{\iota_X}VV \\ \mathbb{P}_Y((\pi|_D)_*\mathcal{N}_{D/X}) @>>j> \mathbb{P}_Y(\pi_*\mathcal{O}_X(D)), \end{CD} \] where \begin{itemize} \item $\iota$ is the inclusion map, \item both $\iota_D$ and $\iota_X$ are isomorphisms, \item $j$ is obtained by the natural surjection \[ \pi_*\mathcal{O}_X(D)\rightarrow (\pi|_D)_*\mathcal{N}_{D/X}, \] where $\mathcal{N}_{D/X}$ is the normal sheaf $\mathcal{O}_D(D)$. \end{itemize} Furthermore, we have $D\in|\mathcal{O}_{\mathbb{P}}(1)|$ under these isomorphisms, where $\mathcal{O}_\mathbb{P}(1)$ is the tautological invertible sheaf on $\mathbb{P}_Y(\pi_*\mathcal{O}_X(D))$. \end{lemma} Next, we consider $\mathcal{Q}^{c+1}$-bundles and $\mathcal{Q}^c$-subbundles. \begin{definition}\label{Qq} Let $\pi\colon X\rightarrow Y$ be a morphism between irreducible varieties and $c$ be a positive integer. We say that $\pi\colon X\rightarrow Y$ is a \emph{$\mathcal{Q}^{c+1}$-bundle} if $\pi$ is a proper and flat morphism such that $\pi^{-1}(y)$ is (scheme-theoretically) isomorphic to a hyperquadric in $\mathbb{P}^{c+2}$ for any closed point $y\in Y$. For a $\mathcal{Q}^{c+1}$-bundle $\pi\colon X\rightarrow Y$ and an effective Cartier divisor $D$ on $X$, we say that $\pi|_D\colon D\rightarrow Y$ is a \emph{$\mathcal{Q}^c$-subbundle} of $\pi$ if $(\pi|_D)^{-1}(y)$ is isomorphic to a hyperplane section under the isomorphism $\pi^{-1}(y)\simeq\mathcal{Q}^{c+1}$ for any closed point $y\in Y$. We note that the morphisms $\pi$ and $\pi|_D$ are not required to be smooth. (That is why we do not use the symbol $\mathbb{Q}$ but $\mathcal{Q}$.) \end{definition} \begin{lemma}\label{lemQ} Let $X$ be an irreducible variety, $D\subset X$ be an effective Cartier divisor, $Y$ be a smooth variety and $c$ be a positive integer.
Suppose that $\pi\colon X\rightarrow Y$ is a $\mathcal{Q}^{c+1}$-bundle and $\pi|_D\colon D\rightarrow Y$ is a $\mathcal{Q}^c$-subbundle. Then we have: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{\rm{(\theenumi)}} \item\label{lemQ1} The natural sequence \[ 0\rightarrow\mathcal{O}_Y\rightarrow\pi_*\mathcal{O}_X(D)\rightarrow(\pi|_D)_*\mathcal{N}_{D/X}\rightarrow 0 \] is exact. \item\label{lemQ2} $\pi_*\mathcal{O}_X(D)$ and $(\pi|_D)_*\mathcal{N}_{D/X}$ are locally free of rank $c+3$ and $c+2$, respectively. In particular, $P:=\mathbb{P}_Y(\pi_*\mathcal{O}_X(D))$ is a $\mathbb{P}^{c+2}$-bundle over $Y$ and $H:=\mathbb{P}_Y((\pi|_D)_*\mathcal{N}_{D/X})$ is a $\mathbb{P}^{c+1}$-subbundle. \item\label{lemQ3} The natural homomorphism \[ \pi^*\pi_*\mathcal{O}_X(D)\rightarrow\mathcal{O}_X(D) \] is surjective, and it induces a relative quadric embedding $X\hookrightarrow P$ over $Y$. \item\label{lemQ4} $D$ is isomorphic to the complete intersection $X\cap H$ in $P$ under these embeddings. \end{enumerate} \end{lemma} \begin{proof} For any hyperquadric $\mathcal{Q}^{c+1}$ in $\mathbb{P}^{c+2}$, the equalities \[ h^0(\mathcal{Q}^{c+1},\mathcal{O}_{\mathcal{Q}^{c+1}}(1))=c+3,\quad h^1(\mathcal{Q}^{c+1},\mathcal{O}_{\mathcal{Q}^{c+1}}(1))=0 \] hold. Hence we get the assertions \eqref{lemQ1} and \eqref{lemQ2} by the cohomology and base change theorem. The surjectivity of the homomorphism in \eqref{lemQ3} is easily shown since $\mathcal{O}_{\mathcal{Q}^{c+1}}(1)$ is generated by global sections. We can also show that this surjection induces a relative quadric embedding $X\hookrightarrow P$. Thus we have shown \eqref{lemQ3}. Similarly, we get a surjection \[ (\pi|_D)^*(\pi|_D)_*\mathcal{N}_{D/X}\rightarrow\mathcal{N}_{D/X} \] and this gives a relative quadric embedding $D\hookrightarrow H$. Then the composition $D\hookrightarrow H\subset P$ is equal to $D\subset X\hookrightarrow P$ by construction. Now we prove that $D=X\cap H$ in $P$ under these embeddings. Let $p\colon P\rightarrow Y$ be the projection. We note that for any closed point $y\in Y$ there exists a unique hyperplane $H^0_y\subset p^{-1}(y)$ containing $(\pi|_D)^{-1}(y)$. Indeed, if there exist two such distinct hyperplanes $H^1_y$, $H^2_y\subset p^{-1}(y)$ containing $(\pi|_D)^{-1}(y)$, then $(\pi|_D)^{-1}(y)$ is contained in the reduced $c$-dimensional linear subspace $H^1_y\cap H^2_y$, which is a contradiction. Let $H^0:=\bigcup_{y\in Y}H^0_y\subset P$. Then $H^0$ is the unique divisor of $P$ which is a $\mathbb{P}^{c+1}$-subbundle containing $D$. Therefore, $H=H^0$ and $D=X\cap H$, since $D=X\cap H^0$ by construction. \end{proof} \begin{lemma}\label{PQlem} Let $X$ be an irreducible variety such that $h^1(X, \mathcal{O}_X)=0$. \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{$(\theenumi)$} \item\label{PQlem1} Let $c$ be a nonnegative integer and $p_1\colon X\times\mathbb{P}^c\rightarrow X$, $p_2\colon X\times\mathbb{P}^c\rightarrow\mathbb{P}^c$ be the projections. Then \[ (p_1)_*(p_2^*\mathcal{O}_{\mathbb{P}^c}(1))\simeq\mathcal{O}_X^{\oplus c+1}. \] \item\label{PQlem2} Let $c\geq 2$ and $p_1\colon X\times\mathbb{Q}^c\rightarrow X$, $p_2\colon X\times\mathbb{Q}^c\rightarrow\mathbb{Q}^c$ be the projections. Then \[ (p_1)_*(p_2^*\mathcal{O}_{\mathbb{Q}^c}(1))\simeq\mathcal{O}_X^{\oplus c+2}. \] \end{enumerate} \end{lemma} \begin{proof} We prove both assertions by induction on $c$. \eqref{PQlem1} The case $c=0$ is trivial.
We assume that the assertion holds for $c-1$. There is a canonical exact sequence \[ 0\rightarrow\mathcal{O}_{X\times\mathbb{P}^c}\rightarrow p_2^*\mathcal{O}_{\mathbb{P}^c}(1)\rightarrow (p_2|_{X\times\mathbb{P}^{c-1}})^*\mathcal{O}_{\mathbb{P}^{c-1}}(1)\rightarrow 0. \] After taking $(p_1)_*$, the sequence \[ 0\rightarrow\mathcal{O}_X\rightarrow(p_1)_*(p_2^*\mathcal{O}_{\mathbb{P}^c}(1))\rightarrow (p_1|_{X\times\mathbb{P}^{c-1}})_*\bigl((p_2|_{X\times\mathbb{P}^{c-1}})^*\mathcal{O}_{\mathbb{P}^{c-1}}(1)\bigr)\rightarrow 0 \] is exact. We note that $(p_1|_{X\times\mathbb{P}^{c-1}})_*((p_2|_{X\times\mathbb{P}^{c-1}})^*\mathcal{O}_{\mathbb{P}^{c-1}}(1))\simeq\mathcal{O}_X^{\oplus c}$ by the induction hypothesis. The sequence always splits since $h^1(X, \mathcal{O}_X)=0$. Hence we have proved \eqref{PQlem1}. \eqref{PQlem2} The case $c=2$ is a direct consequence of \eqref{PQlem1} since $\mathbb{Q}^2$ is isomorphic to $\mathbb{P}^1\times\mathbb{P}^1$. We assume that the assertion holds for $c-1$. There is a canonical exact sequence \[ 0\rightarrow\mathcal{O}_{X\times\mathbb{Q}^c}\rightarrow p_2^*\mathcal{O}_{\mathbb{Q}^c}(1)\rightarrow (p_2|_{X\times\mathbb{Q}^{c-1}})^*\mathcal{O}_{\mathbb{Q}^{c-1}}(1)\rightarrow 0. \] After taking $(p_1)_*$, we obtain the split exact sequence \[ 0\rightarrow\mathcal{O}_X\rightarrow(p_1)_*(p_2^*\mathcal{O}_{\mathbb{Q}^c}(1))\rightarrow \mathcal{O}_X^{\oplus c+1}\rightarrow 0 \] by repeating the argument in the proof of \eqref{PQlem1}. Hence we have proved \eqref{PQlem2}. \end{proof} \subsection{Facts on extremal contractions and their applications}\label{extcont_section} In this subsection, we describe the structure of the contraction morphism associated to a special extremal ray using Wi\'sniewski's inequality. First, we give a criterion for a smooth projective variety to have Picard number one in Lemma \ref{rhoone}. Second, we give some finer structural properties of special log Fano manifolds in Proposition \ref{structure_cor}, which are essential for the proofs of Theorems \ref{mukai0} and \ref{mukai1}. We first recall Wi\'sniewski's inequality, which plays an essential role in this subsection. \begin{thm}[Wi\'sniewski's inequality \cite{wisn91}]\label{wisn_ineq} Let $X$ be an $n$-dimensional smooth projective variety and $R\subset\overline{\operatorname{NE}}(X)$ be a $K_X$-negative extremal ray with the associated contraction morphism $\pi\colon X\rightarrow Y$. Then we have the inequality \[ \dim\operatorname{Exc}(\pi)+\dim F\geq n+l(R)-1 \] for any nontrivial fiber $F$ of $\pi$. \end{thm} We give a criterion for a smooth projective variety $X$ to satisfy $\rho(X)=1$, using Theorem \ref{wisn_ineq}. \begin{lemma}\label{rhoone} Let $X$ be an irreducible smooth projective variety, $D\subset X$ be a prime divisor and $R\subset\overline{\operatorname{NE}}(X)$ be a $K_X$-negative extremal ray with the associated contraction morphism $\pi\colon X\rightarrow Y$ such that $(D\cdot R)>0$. \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{$(\theenumi)$} \item\label{rhoone1} If the restriction morphism $\pi|_D\colon D\rightarrow\pi(D)$ is not birational, then $\pi$ is of fiber type, i.e., $\dim Y<\dim X$ holds. \item\label{rhoone2} If $l(R)\geq 3$, then $\pi|_D\colon D\rightarrow Y$ is not a finite morphism. Furthermore, if $\rho(D)=1$ holds in addition, then $X$ is a Fano manifold with $\rho(X)=1$.
\end{enumerate} \end{lemma} \begin{proof} \eqref{rhoone1} If $\pi$ is birational, then it is a divisorial contraction and the exceptional divisor is exactly $D$, since $\pi|_D\colon D\rightarrow\pi(D)$ is not birational. However, we then get a contradiction since $(D\cdot R)>0$. Hence $\pi$ is of fiber type. \eqref{rhoone2} Let us choose an arbitrary nontrivial fiber $F$ of $\pi$. We have $D\cap F\neq\emptyset$ since $(D\cdot R)>0$. Then \[ \dim(F\cap D)\geq\dim F-1\geq l(R)-2\geq 1 \] by Wi\'sniewski's inequality (Theorem \ref{wisn_ineq}). Hence $F\cap D$ contains a curve. Now we assume that the Picard number of $D$ is equal to one. Then $\pi(D)$ must be a point since all curves in $D$ are numerically proportional. Therefore $\pi$ is of fiber type by \eqref{rhoone1}. If $\dim Y\geq 1$, then $D$ does not dominate $Y$ and hence $(D\cdot R)=0$, a contradiction; hence $Y$ is a point. In particular, $\rho(X)=1$. Thus $X$ is a Fano manifold since there exists a $K_X$-negative extremal ray. \end{proof} We also show that there exists a `special' $K_X$-negative extremal ray for a log Fano manifold with nonzero boundary, which is essential for classifying certain special log Fano manifolds. \begin{lemma}\label{longray} Let $(X,D)$ be a log Fano manifold with the log Fano index $r$ and the log Fano pseudoindex $\iota$, $L$ be a divisor on $X$ such that $-(K_X+D)\sim rL$ holds, and assume that $D\neq 0$. Then there exists an extremal ray $R\subset\operatorname{NE}(X)$ such that $(D\cdot R)>0$. Let $R$ be an extremal ray satisfying $(D\cdot R)>0$ and $\pi\colon X\rightarrow Y$ be the contraction morphism associated to $R$. Then $R$ is always $K_X$-negative and $l(R)\geq\iota+1$. Moreover, the restriction morphism \[ \pi|_{D_1}\colon D_1\rightarrow \pi(D_1) \] to its image is an algebraic fiber space, that is, $(\pi|_{D_1})_*\mathcal{O}_{D_1}=\mathcal{O}_{\pi(D_1)}$, for any irreducible component $D_1\subset D$. Furthermore, for a minimal rational curve $C\subset X$ of $R$, we have the following properties: \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{$(\theenumi)$} \item\label{longray1} If $l(R)=\iota+1$, then $(D\cdot C)=1$. \item\label{longray2} If $l(R)=r+2$ and $r\geq 2$, then $(L\cdot C)=1$ and $(D\cdot C)=2$. \end{enumerate} \end{lemma} \begin{proof} The existence of such an extremal ray is obvious, since $D$ is a nonzero effective divisor and $\operatorname{NE}(X)$ is spanned by finitely many extremal rays. Let $R\subset\operatorname{NE}(X)$ be an extremal ray such that $(D\cdot R)>0$. Then $R$ is $K_X$-negative since $(-K_X\cdot R)=(-(K_X+D)\cdot R)+(D\cdot R)>0$. To see that $\pi|_{D_1}\colon D_1\rightarrow \pi(D_1)$ is an algebraic fiber space, it is enough to show that the homomorphism $\pi_*\mathcal{O}_X\rightarrow(\pi|_{D_1})_*\mathcal{O}_{D_1}$ is surjective. We know that the sequence \[ \pi_*\mathcal{O}_X\rightarrow(\pi|_{D_1})_*\mathcal{O}_{D_1}\rightarrow R^1\pi_*\mathcal{O}_X(-D_1) \] is exact and $R^1\pi_*\mathcal{O}_X(-D_1)=0$ by a vanishing theorem (see for example \cite[Theorem 2.42]{fujino}). Hence $\pi|_{D_1}\colon D_1\rightarrow \pi(D_1)$ is an algebraic fiber space for any irreducible component $D_1\subset D$. Let $C\subset X$ be a minimal rational curve of $R$. Then we have \[ l(R)=(-K_X\cdot C)=(-(K_X+D)\cdot C)+(D\cdot C)\geq\iota+1. \] If $l(R)=\iota+1$, then the above inequality is an equality; hence $(D\cdot C)=1$ holds. If $l(R)=r+2$ and $r\geq 2$, then \[ r+2=l(R)=r(L\cdot C)+(D\cdot C)\geq r+1. \] Since $(L\cdot C)\geq 1$ and $(D\cdot C)\geq 1$, and since $(L\cdot C)\geq 2$ would force $(D\cdot C)=r+2-r(L\cdot C)\leq 2-r\leq 0$, we conclude that $(L\cdot C)=1$ and $(D\cdot C)=2$.
\end{proof} Using Lemma \ref{longray}, we can show some finer structural properties of certain log Fano manifolds. \begin{proposition}\label{structure_cor} Let $(X,D)$ be a log Fano manifold with log Fano index $r$ and log Fano pseudoindex $\iota$, and assume that $D\neq 0$. Pick an arbitrary extremal ray $R\subset\operatorname{NE}(X)$ such that $(D\cdot R)>0$ and let $\pi\colon X\rightarrow Y$ be the contraction morphism associated to $R$. Let $F$ be an arbitrary nontrivial fiber of $\pi$. Then $\dim(D\cap F)\geq\iota-1$ holds. Furthermore, we have the following results. \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{\rm{(\theenumi)}} \item\label{structure_cor1} If $\dim(D\cap F)=\iota-1$ for any nontrivial fiber $F$, then $\pi\colon X\rightarrow Y$ is a $\mathbb{P}^\iota$-bundle and $\pi|_D\colon D\rightarrow Y$ is a $\mathbb{P}^{\iota-1}$-subbundle. \item\label{structure_cor2} If $r\geq 2$ and there exists an irreducible component $D_1$ of $D$ such that $\dim(D_1\cap F)=r$ for any $F$, then one of the following holds. \begin{enumerate} \renewcommand{\theenumii}{\alph{enumii}} \renewcommand{\labelenumii}{\rm{(\theenumii)}} \item\label{structure_cor2a} $Y$ is a smooth projective variety and $\pi$ is the blowing up along an irreducible smooth projective subvariety $W\subset Y$ of codimension $r+2$. \item\label{structure_cor2b} $Y$ is smooth, $\pi\colon X\rightarrow Y$ is a $\mathcal{Q}^{r+1}$-bundle and $\pi|_{D_1}\colon D_1\rightarrow Y$ is a $\mathcal{Q}^r$-subbundle $($cf. Definition \ref{Qq}$)$. \item\label{structure_cor2c} $\pi\colon X\rightarrow Y$ is a $\mathbb{P}^{r+1}$-bundle and $\pi|_{D_1}\colon D_1\rightarrow Y$ is a $\mathbb{P}^r$-subbundle. \item\label{structure_cor2d} $\pi_*\mathcal{O}_X(L)$ is locally free of rank $r+2$, where $L$ is a divisor on $X$ such that $-(K_X+D)\sim rL$. Furthermore, $\pi\colon X\rightarrow Y$ is isomorphic to the projection $p\colon\mathbb{P}_Y(\pi_*\mathcal{O}_X(L))\rightarrow Y$ and $(\pi|_{D_1})^{-1}(y)$ is a hyperquadric section under the isomorphism $\pi^{-1}(y)\simeq\mathbb{P}^{r+1}$ for any closed point $y\in Y$. Moreover, $\pi_*\mathcal{O}_X(L)\simeq(p|_{D_1})_*(\mathcal{O}_{\mathbb{P}}(1)|_{D_1})$ under the isomorphism. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Let $L$ be an ample divisor on $X$ such that $-(K_X+D)\sim rL$ and let $C$ be a minimal rational curve of $R$. We note that $D$ and $F$ intersect each other since $(D\cdot R)>0$. Hence \begin{equation}\label{long_ineq} \dim(D\cap F)\geq\dim F-1\geq \dim X-\dim\operatorname{Exc}(\pi)+l(R)-2 \geq l(R)-2\geq\iota-1\geq r-1 \end{equation} by Wi\'sniewski's inequality (Theorem \ref{wisn_ineq}) and by Lemma \ref{longray}. First, we consider the case \eqref{structure_cor1}. Then equality holds everywhere in (\ref{long_ineq}); in particular $\dim\operatorname{Exc}(\pi)=\dim X$ and $l(R)=\iota+1$. Hence $\pi$ is of fiber type, all fibers of $\pi$ are of dimension $\iota$, and the equalities $(D\cdot C)=1$ and $(-K_X\cdot C)=\iota+1$ hold by Lemma \ref{longray}. Therefore $\pi\colon X\rightarrow Y$ is a $\mathbb{P}^\iota$-bundle and $\pi|_D\colon D\rightarrow Y$ is a $\mathbb{P}^{\iota-1}$-subbundle by \cite[Lemma 2.12]{fujita}. Next, we consider the case \eqref{structure_cor2}. We first show that $(D_1\cdot R)>0$. If not, then since every nontrivial fiber $F$ meets $D_1$, any nontrivial fiber $F$ is contained in $D_1$ (in particular, $\pi$ is birational). Then Wi\'sniewski's inequality (Theorem \ref{wisn_ineq}) and Lemma \ref{longray} show that \[ \dim\operatorname{Exc}(\pi)+r\geq\dim X+l(R)-1\geq\dim X+\iota.
\] Hence $\pi$ is of fiber type, which is a contradiction. Consequently, we have $(D_1\cdot R)>0$. We first assume that $\dim\operatorname{Exc}(\pi)<\dim X$. Then $\dim\operatorname{Exc}(\pi)=\dim X-1$ and $l(R)=r+1$ by substituting $D_1$ for $D$ in (\ref{long_ineq}). Hence $\pi$ is a divisorial contraction such that $\dim F=r+1$ for any $F$, and the equality $(D\cdot C)=1$ holds by Lemma \ref{longray} \eqref{longray1}. Thus $Y$ is a smooth projective variety and $\pi$ is the blowing up whose center $W\subset Y$ is a smooth projective subvariety of codimension $r+2$ by \cite[Theorem 4.1 (iii)]{AW}. Therefore the condition \eqref{structure_cor2a} is satisfied in the case $\dim\operatorname{Exc}(\pi)<\dim X$. Next, we consider the case where $\dim\operatorname{Exc}(\pi)=\dim X$, that is, $\pi$ is of fiber type. We note that $l(R)=r+1$ or $r+2$ by (\ref{long_ineq}). We consider the case where $\pi$ is of fiber type and $l(R)=r+1$. Then $\dim F=r+1$ for any fiber, and the equalities $(D_1\cdot C)=1$ and $(-K_X\cdot C)=r+1$ hold by (\ref{long_ineq}) and Lemma \ref{longray} \eqref{longray1}. Thus $\pi_*\mathcal{O}_X(D_1)$ is locally free of rank $r+3$ and $X$ is embedded over $Y$ into $\mathbb{P}_Y(\pi_*\mathcal{O}_X(D_1))$ as a divisor of relative degree $2$ by \cite[Theorem B]{ABW}. Therefore the condition \eqref{structure_cor2b} is satisfied in the case $\dim\operatorname{Exc}(\pi)=\dim X$ and $l(R)=r+1$. We consider the case where $\pi$ is of fiber type and $l(R)=r+2$. Then $(L\cdot C)=1$ and either $(D_1\cdot C)=1$ or $2$ holds by Lemma \ref{longray}. Thus $\pi\colon X\rightarrow Y$ is isomorphic to the $\mathbb{P}^{r+1}$-bundle $\mathbb{P}_Y(\pi_*\mathcal{O}_X(L))$ by \cite[Lemma 2.12]{fujita}. If $(D_1\cdot C)=1$, then $\pi|_{D_1}\colon D_1\rightarrow Y$ is a $\mathbb{P}^r$-subbundle. Therefore the condition \eqref{structure_cor2c} is satisfied in the case $\dim\operatorname{Exc}(\pi)=\dim X$, $l(R)=r+2$ and $(D_1\cdot C)=1$. Finally, we consider the remaining case where $\pi$ is of fiber type, $l(R)=r+2$ and $(D_1\cdot C)=2$. Under the isomorphism $X\simeq\mathbb{P}_Y(\pi_*\mathcal{O}_X(L))$, we have a natural exact sequence \[ 0\rightarrow\mathcal{O}_{\mathbb{P}}(1)(-D_1)\rightarrow\mathcal{O}_{\mathbb{P}}(1) \rightarrow\mathcal{O}_{\mathbb{P}}(1)|_{D_1}\rightarrow 0, \] where $\mathcal{O}_\mathbb{P}(1)$ is the tautological invertible sheaf on $\mathbb{P}_Y(\pi_*\mathcal{O}_X(L))$. After taking $p_*$, we obtain an isomorphism \[ \pi_*\mathcal{O}_X(L) \xrightarrow{\ \sim\ }(p|_{D_1})_*(\mathcal{O}_{\mathbb{P}}(1)|_{D_1}) \] by the cohomology and base change theorem, since $p_*(\mathcal{O}_{\mathbb{P}}(1)(-D_1))=R^1p_*(\mathcal{O}_{\mathbb{P}}(1)(-D_1))=0$; indeed $h^i(\mathbb{P}^{r+1}, \mathcal{O}(-1))=0$ holds for $i=0,1$. Therefore the condition \eqref{structure_cor2d} is satisfied in the case $\dim\operatorname{Exc}(\pi)=\dim X$, $l(R)=r+2$ and $(D_1\cdot C)=2$. \end{proof} \subsection{Properties on scrolls}\label{property_scroll} In Section \ref{property_scroll}, we consider special toric varieties, namely projective space bundles over projective spaces associated with direct sums of invertible sheaves, the so-called Hirzebruch--Kleinschmidt varieties. Throughout Section \ref{property_scroll}, we fix the following notation. \begin{notation}\label{scroll_not} Let $s$, $t$ be positive integers and $a_0,\dots,a_t$ be integers with $0=a_0\leq a_1\leq\dots\leq a_t$. Let \[ X:=\mathbb{P}[\mathbb{P}^s; a_0,\dots,a_t], \] that is, \[ X=\mathbb{P}_{\mathbb{P}^s}(\mathcal{O}(a_0)\oplus\dots\oplus\mathcal{O}(a_t)).
\] We also let \[ D_i:=\mathbb{P}[\mathbb{P}^s; a_0,\dots,a_{i-1},a_{i+1},\dots,a_t]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{P}^s; a_0,\dots,a_t], \] that is, the embedding is obtained by the canonical projection, for any $0\leq i\leq t$. (See Notation and terminology in Section \ref{introsection}.) \end{notation} \begin{lemma}\label{scroll_lem} We have the following properties. \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{$(\theenumi)$} \item\label{scroll_lem1} $\operatorname{Pic}(X)=\mathbb{Z}[\mathcal{O}(1; 0)]\oplus\mathbb{Z}[\mathcal{O}(0; 1)]$. \item\label{scroll_lem2} $\mathcal{O}_X(-K_X)\simeq\mathcal{O}(s+1-\sum_{i=1}^ta_i;\,\, t+1)$. \item\label{scroll_lem3} $D_i\in|\mathcal{O}(-a_i; 1)|$ for any $0\leq i\leq t$. \item\label{scroll_lem35} $\deg_{C_f}(\mathcal{O}(u; v)|_{C_f})=v$ and $\deg_{C_h}(\mathcal{O}(u; v)|_{C_h})=u$, where $C_f$ is a line in a fiber of $X\rightarrow\mathbb{P}^s$ and $C_h$ is a line in $\mathbb{P}[\mathbb{P}^s; a_0]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{P}^s; a_0,\dots,a_t].$ \item\label{scroll_lem4} $\operatorname{Nef}(X)=\mathbb{R}_{\geq 0}[\mathcal{O}(1; 0)]+\mathbb{R}_{\geq 0}[\mathcal{O}(0; 1)]$ and $\operatorname{Eff}(X)=\mathbb{R}_{\geq 0}[\mathcal{O}(1; 0)]+\mathbb{R}_{\geq 0}[\mathcal{O}(-a_t; 1)].$ \item\label{scroll_lem5} For a divisor $D=\sum_{i=1}^tc_iD_i+dH$ with $c_i, d\in\mathbb{Z}$, where $H$ is the pullback of a hyperplane in $\mathbb{P}^s$, the value $h^0(X, \mathcal{O}_X(D))$ is exactly the number of the elements of the set \begin{eqnarray*} \Biggl\{(P_1,\dots,P_s,Q_1,\dots,Q_t)\in\mathbb{Z}^{\oplus s+t}\,\Bigg| \, -\sum_{j=1}^tQ_j\geq 0,\,\, Q_i\geq-c_i\ (1\leq i\leq t),\\ -\sum_{i=1}^sP_i+\sum_{j=1}^ta_jQ_j\geq -d,\,\, P_1,\dots,P_s\geq 0\Biggr\}. \end{eqnarray*} \item\label{scroll_lem6} If there exists an effective divisor $D$ on $X$ with $D\in|\mathcal{O}(k; 1)|$ such that $k<-a_{t-1}$ holds, then $D$ always contains $D_t$ as an irreducible component. \item\label{scroll_lem7} If a member $D\in|\mathcal{O}(k; 2)|$ is reduced, then $k\geq -a_t-a_{t-1}$. \item\label{scroll_lem8} Assume that $a_{t-2}<a_t$. Then any effective and reduced divisor $D$ on $X$ with $D\in|\mathcal{O}(-a_t-a_{t-1}; 2)|$ is decomposed into two irreducible components $D^t$ and $D^{t-1}$ such that $D^t\sim D_t$ and $D^{t-1}\sim D_{t-1}$; here $D^t=D_t$ if $a_{t-1}<a_t$. Furthermore, after taking an automorphism of $X$ over $\mathbb{P}^s$, $D^t$ and $D^{t-1}$ can be moved to $D_t$ and $D_{t-1}$, respectively. \end{enumerate} \end{lemma} \begin{proof} Since \eqref{scroll_lem1}--\eqref{scroll_lem5} are well known, we shall prove \eqref{scroll_lem6}--\eqref{scroll_lem8}. We note that the total coordinate ring of $X$ is the $\mathbb{Z}^{\oplus 2}$-graded polynomial ring \[ \Bbbk[x_0, \dots, x_s, y_0,\dots,y_t] \] with the grading \[ \deg x_i=(1, 0),\quad\deg y_j=(-a_j, 1). \] First, we consider \eqref{scroll_lem6}. A defining equation of $D$ is expressed as a linear combination of the monomials \[ y_t\prod_{i=0}^sx_i^{m_i} \] with $\sum_{i=0}^sm_i=k+a_t$, since $k<-a_{t-1}$. Thus $D$ contains the divisor defined by $y_t=0$, which is nothing but $D_t$. Next, we consider \eqref{scroll_lem7}. If $k<-a_t-a_{t-1}$, then a defining equation of $D$ is expressed as a linear combination of the monomials \[ y_t^2\prod_{i=0}^sx_i^{m_i} \] with $\sum_{i=0}^sm_i=k+2a_t$. Therefore $D$ must be nonreduced since $D-2D_t$ is an effective divisor. Finally, we consider \eqref{scroll_lem8}.
If $a_{t-1}<a_t$, then a defining equation of $D$ is expressed as a linear combination of the monomials \[ y_t^2\prod_{i=0}^sx_i^{m_i} \] with $\sum_{i=0}^sm_i=a_t-a_{t-1}$ and \[ y_{t-j}y_t \] with $1\leq j\leq l$, where $l$ is the number of the elements of the set $\{1\leq j\leq t-1\mid a_j=a_{t-1}\}$. Then $D$ contains the divisor defined by $y_t=0$, which is nothing but $D_t$. We can also show that $D^{t-1}:=D-D_t$ is irreducible if $D$ is reduced, and that $D^{t-1}\sim D_{t-1}$. If $a_{t-2}<a_{t-1}=a_t$, a defining equation of $D$ is expressed as a linear combination of the monomials \[ y_{t-1}^2,\quad y_{t-1}y_t\quad\text{and}\quad y_t^2. \] Hence $D$ has exactly two irreducible components $D^t$ and $D^{t-1}$ such that $D^t\sim D^{t-1}\sim D_t(\sim D_{t-1})$ since $D$ is reduced. The existence of an automorphism $\alpha$ of $X$ over $p\colon X\rightarrow\mathbb{P}^s$ such that $\alpha(D^t)=D_t$ and $\alpha(D^{t-1})=D_{t-1}$ is easy to see in either case by a simple linear-algebraic argument. \end{proof} \begin{corollary}\label{scroll_cor1} Let $D$ be a member of $|\mathcal{O}(c; d)|$ for some $d>0$. Assume that $(X, D)$ is a log Fano manifold such that the log Fano index is $r$ and the log Fano pseudoindex is $\iota$. \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{$(\theenumi)$} \item\label{scroll_cor11} If $\iota\geq t$, then $d=1$, $t=\iota$ and $s\geq\iota-1$ hold. Furthermore, if $s=\iota-1$, then $a_1=\cdots=a_{\iota-1}=0$ and $c=-a_\iota$. \item\label{scroll_cor12} If $r\geq t$ $($hence $\iota\geq t$ holds$)$, $s=r$ and $r\geq 2$, then we have $r=\iota$ and either $(a_1,\dots,a_{r-2},a_{r-1},c)=(0,\dots,0,0,1-a_r)$ or $(0,\dots,0,1,-a_r)$ holds. \end{enumerate} \end{corollary} \begin{proof} By Lemma \ref{scroll_lem} \eqref{scroll_lem2}, \[ \mathcal{O}_X(-(K_X+D))\simeq\mathcal{O}\Biggl(s+1-\sum_{i=1}^ta_i-c;\,\,\,t+1-d\Biggr). \] Hence $t+1-d\geq\iota$ by Lemma \ref{scroll_lem} \eqref{scroll_lem35}. Thus $d=1$ and $t=\iota$ if $\iota\geq t$ holds (resp.\ $d=1$ and $t=\iota=r$ if $r\geq t$ holds). We also note that $s+1-\sum_{i=1}^\iota a_i-c$ is at least $\iota$ and is a positive multiple of $r$, and that $c\geq -a_\iota$. Hence $s\geq\iota-1+\sum_{i=1}^{\iota-1}a_i\geq\iota-1$. \eqref{scroll_cor11} If $s=\iota-1$, then $\iota-\sum_{i=1}^\iota a_i\geq\iota+c\geq\iota-a_\iota$. Therefore $\sum_{i=1}^{\iota-1}a_i=0$ and $c=-a_\iota$ hold. \eqref{scroll_cor12} If $s=r$ and $r\geq 2$, then $r+1-\sum_{i=1}^ra_i-c$ is divisible by $r$ and $\sum_{i=1}^ra_i+c\geq\sum_{i=1}^{r-1}a_i\geq 0$. Hence $\sum_{i=1}^ra_i+c=1$. Therefore either $(a_1,\dots,a_{r-2},a_{r-1},c)=(0,\dots,0,0,1-a_r)$ or $(0,\dots,0,1,-a_r)$ holds. \end{proof} \begin{corollary}\label{scroll_cor2} Let $r:=t-1$ with $r\geq 2$ and let $D$ be a member of $|\mathcal{O}(c; d)|$ for some $d>0$. Assume that $(X, D)$ is a log Fano manifold such that the log Fano index is divisible by $r$. Then we have $d=2$ and $s\geq r-1$. Furthermore, if $s=r-1$, then $a_1=\cdots=a_{r-1}=0$ and $c=-a_r-a_{r+1}$. \end{corollary} \begin{proof} We repeat an argument similar to that of Corollary \ref{scroll_cor1}. By Lemma \ref{scroll_lem} \eqref{scroll_lem2}, \[ \mathcal{O}_X(-(K_X+D))\simeq \mathcal{O}\Biggl(s+1-\sum_{i=1}^{r+1}a_i-c;\,\,\, r+2-d\Biggr). \] Thus we have $d=2$ since $r+2-d$ is a positive multiple of $r$ and $r\geq 2$. We also know that $s\geq r-1+\sum_{i=1}^{r+1}a_i+c\geq r-1+\sum_{i=1}^{r-1}a_i\geq r-1$ by Lemma \ref{scroll_lem} \eqref{scroll_lem7}.
Furthermore, if $s=r-1$, then $r-\sum_{i=1}^{r+1}a_i\geq r+c\geq r-a_r-a_{r+1}$. This completes the proof. \end{proof} \section{Examples}\label{ex_section} In this section, we give some examples of log Fano manifolds with large log Fano indices. \subsection{Example of dimension $2\iota-1$ and log Fano (pseudo)index $\iota$}\label{sbsc_mukai0} First, we consider the case \eqref{scroll_cor11} in Corollary \ref{scroll_cor1}, which gives an important example of a $(2\iota-1)$-dimensional log Fano manifold with log Fano (pseudo)index $\iota$ (see Theorem \ref{mukai0}). \begin{exmuk}\label{2rminusone} Let $\iota\geq 2$, $m\geq 0$, \[ X=\mathbb{P}[\mathbb{P}^{\iota-1}; 0^\iota, m]\quad\text{and}\quad D\in|\mathcal{O}(-m; 1)|. \] We know that $\mathcal{O}(1; 1)$ is an ample invertible sheaf and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{O}(1; 1)^{\otimes\iota}$; indeed, by Lemma \ref{scroll_lem} \eqref{scroll_lem2} we have $\mathcal{O}_X(-K_X)\simeq\mathcal{O}(\iota-m;\,\iota+1)$, hence $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{O}(\iota;\,\iota)$. If $m>0$, then $D$ is unique and $D=\mathbb{P}[\mathbb{P}^{\iota-1}; 0^\iota]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{P}^{\iota-1}; 0^\iota, m]$ by Lemma \ref{scroll_lem} \eqref{scroll_lem6}. If $m=0$, then $X=\mathbb{P}^{\iota-1}\times\mathbb{P}^\iota$ and $D\in|\mathcal{O}_{\mathbb{P}^{\iota-1}\times\mathbb{P}^\iota}(0,1)|$. Hence any member $D\in|\mathcal{O}_{\mathbb{P}^{\iota-1}\times\mathbb{P}^\iota}(0,1)|$ is always an irreducible smooth divisor and can be moved to $\mathbb{P}[\mathbb{P}^{\iota-1}; 0^\iota]\subset_{\operatorname{can}}\,\mathbb{P}[\mathbb{P}^{\iota-1}; 0^\iota, m]$ after taking an automorphism of $X$ over $\mathbb{P}^{\iota-1}$. We note that the dimension of $|\mathcal{O}_{\mathbb{P}^{\iota-1}\times\mathbb{P}^\iota}(0,1)|$ is equal to $\iota$ by Lemma \ref{scroll_lem} \eqref{scroll_lem5}. Therefore $(X, D)$ is a $(2\iota-1)$-dimensional log Fano manifold with the log Fano index $\iota$ and the log Fano pseudoindex $\iota$ for any $D\in|\mathcal{O}(-m; 1)|$. \end{exmuk} \subsection{Examples of dimension $2r$ and log Fano index $r$}\label{sbsc_mukai1} Next, we give examples of $2r$-dimensional log Fano manifolds with log Fano index $r$ (see Theorem \ref{mukai1}). \begin{example}\label{burouappu} Let $\operatorname{Bl}\colon X:=\operatorname{Bl}_{\mathbb{P}^{r-2}}\mathbb{P}^{2r}\rightarrow\mathbb{P}^{2r}$ be the blowing up of $\mathbb{P}^{2r}$ along a linear subspace $\mathbb{P}^{r-2}$ of dimension $r-2$. Let $E\subset X$ be the exceptional divisor. Consider the linear system $|\operatorname{Bl}^*\mathcal{O}_{\mathbb{P}^{2r}}(1)\otimes\mathcal{O}_X(-E)|$. It is easy to show that the linear system is of dimension $r+1$ and that any element $D$ in the system is the strict transform of a hyperplane in $\mathbb{P}^{2r}$ containing the center of the blowing up. In particular, $D$ is irreducible and smooth. The invertible sheaf $\mathcal{H}:=\operatorname{Bl}^*\mathcal{O}_{\mathbb{P}^{2r}}(2)\otimes\mathcal{O}_X(-E)$ is ample. We also know that $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$ for any $D\in|\operatorname{Bl}^*\mathcal{O}_{\mathbb{P}^{2r}}(1)\otimes\mathcal{O}_X(-E)|$. \end{example} \begin{example}\label{pP} Let $X:=\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}$ and let $D$ be an effective divisor on $X$ such that $D\in|\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}}(0,2)|$. Then the dimension of the linear system is $(r+2)(r+3)/2-1$, and $D$ is a simple normal crossing divisor if and only if $D$ is the pullback of a smooth or reducible hyperquadric in $\mathbb{P}^{r+1}$.
In particular, a general element in the linear system is a simple normal crossing divisor. Let $\mathcal{H}:=\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}}(1,1)$. Then $\mathcal{H}$ is an ample invertible sheaf on $X$ and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$ for any simple normal crossing $D\in|\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}}(0,2)|$. \end{example} \begin{example}\label{kayaku} Let \[ X:=\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_1, m_2] \] with $0\leq m_1\leq m_2$ and $1\leq m_2$, and let $D$ be an effective divisor on $X$ such that $D\in|\mathcal{O}(-m_1-m_2; 2)|$. Every reduced element in this linear system can be seen as the sum of \[ \mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_1],\,\,\, \mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_2]\,\,\, \subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_1, m_2] \] after taking an automorphism of $X$ over $\mathbb{P}^{r-1}$ by Lemma \ref{scroll_lem} \eqref{scroll_lem8}. In particular, such a divisor is a simple normal crossing divisor. We note that the dimension of the linear system $|\mathcal{O}(-m_1-m_2; 2)|$ is equal to \[ \left\{ \begin{array}{ll} 2 & (m_1=m_2) \\ \binom{m_2+r-1}{r-1}+r & (m_1=0) \\ \binom{m_2-m_1+r-1}{r-1} & (0<m_1<m_2) \end{array} \right. \] by Lemma \ref{scroll_lem} \eqref{scroll_lem5}. Let $\mathcal{H}:=\mathcal{O}(1; 1)$. Then $\mathcal{H}$ is an ample invertible sheaf on $X$ and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$ for any reduced $D\in|\mathcal{O}(-m_1-m_2; 2)|$. \end{example} \begin{example}[See also Remark \ref{fanoQrmk}]\label{fanoQ} Let \[ E:=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}]\subset_{\operatorname{can}}\, X':=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}, m] \] with $m\geq 0$. We note that $E\simeq\mathbb{P}^{r-1}\times\mathbb{P}^r$. Consider a smooth divisor $B$ in $X'$ with $B\in|\mathcal{O}(0; 2)|$ such that the intersection $B\cap E$ is also smooth. We note that the homomorphism \[ H^0(X', \mathcal{O}(0; 2))\rightarrow H^0(E, \mathcal{O}(0; 2)|_E) \] is surjective since $H^1(X', \mathcal{O}(0; 2)(-E)) =0$ by Kodaira's vanishing theorem. Hence a general $B\in|\mathcal{O}(0; 2)|$ satisfies this property. We also note that the dimension of the linear system $|\mathcal{O}(0; 2)|$ is equal to \[ \binom{2m+r-1}{r-1}+(r+1)\binom{m+r-1}{r-1}+\frac{(r+1)(r+2)}{2}-1 \] by Lemma \ref{scroll_lem} \eqref{scroll_lem5}. Let $\tau\colon X\rightarrow X'$ be the double cover of $X'$ with the branch divisor $B$, and let $D$ be the strict transform of $E$ on $X$. Then $X$ is smooth and $D\simeq\mathbb{P}^{r-1}\times\mathbb{Q}^r$ by construction. We know that $\mathcal{O}_X(-K_X)\simeq\tau^*(\mathcal{O}_{X'}(-K_{X'})\otimes\mathcal{O}(0; -1)) \simeq\tau^*\mathcal{O}(r-m;\, r+1)$ and $\mathcal{O}_X(D)\simeq\tau^*\mathcal{O}(-m; 1)$. Let $\mathcal{H}:=\tau^*\mathcal{O}(1; 1)$, which is an ample invertible sheaf on $X$. Then $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. We also note that $\mathcal{H}$ is not divisible by any integer larger than one in $\operatorname{Pic}(X)$ by Remark \ref{fanoQrmk}. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$. \end{example} \begin{example}[See also Remark \ref{rthreermk}]\label{rthree} In this example, we consider the case $r\geq 3$. Let \[ D:=\mathbb{P}[\mathbb{Q}^r; 0^r]\subset_{\operatorname{can}}\, X:=\mathbb{P}[\mathbb{Q}^r; 0^r, m] \] with $m\geq 0$.
We note that $D$ is isomorphic to $\mathbb{P}^{r-1}\times\mathbb{Q}^r$. We also note that there exists a unique element in $|D|$ if $m>0$. If $m=0$ then $X=\mathbb{P}^r\times\mathbb{Q}^r$ and $D\in|\mathcal{O}_{\mathbb{P}^r\times\mathbb{Q}^r}(1, 0)|$ hence the dimension of the linear system $|D|$ is equal to $r$; any element in $|D|$ defines a smooth divisor in $X$. Let $\mathcal{H}:=\mathcal{O}(1; 1)$. We can show that $\mathcal{H}$ is an ample invertible sheaf on $X$ provided that $m\geq 0$, and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$ by an easy calculation. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$. \end{example} \begin{example}\label{rtwo} In this example, we only consider the case $r=2$. Let \[ D:=\mathbb{P}_{\mathbb{P}^1\times\mathbb{P}^1}(\mathcal{O}^{\oplus 2})\subset_{\operatorname{can}}\, X:=\mathbb{P}_{\mathbb{P}^1\times\mathbb{P}^1}(\mathcal{O}^{\oplus 2} \oplus\mathcal{O}(m_1, m_2)) \] with $0\leq m_1\leq m_2$. We note that there exists a unique element in $|D|$ if $m_2>0$. If $m_1=m_2=0$ then $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^2$ and $D\in|\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^2}(0,0,1)|$ hence the dimension of the linear system $|D|$ is equal to $2$; any element in $|D|$ defines a smooth divisor. Let $\mathcal{H}:=p^*\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(1,1)\otimes\mathcal{O}_\mathbb{P}(1)$, where $p\colon X\rightarrow\mathbb{P}^1\times\mathbb{P}^1$ is the projection and $\mathcal{O}_\mathbb{P}(1)$ is the tautological invertible sheaf with respect to the projection $p$. We can show that $\mathcal{H}$ is an ample invertible sheaf on $X$ provided that $0\leq m_1\leq m_2$, and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes 2}$ by an easy calculation. Therefore $(X, D)$ is a $4$-dimensional log Fano manifold with the log Fano index $2$. \end{example} \begin{example}[See also Remark \ref{Tprmk}]\label{Tp} Let \[ D:=\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r})\subset_{\operatorname{can}}\, X:=\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r}\oplus\mathcal{O}(m)) \] with $m\geq 1$. We first note that $D$ is unique in its linear system $|D|$ if $m\geq 2$. If $m=1$ then the dimension of $|D|$ is equal to $r+1$. This is easy from the exact sequence \[ 0\rightarrow\mathcal{O}_X\rightarrow\mathcal{O}_X(D)\rightarrow\mathcal{N}_{D/X}\rightarrow 0 \] and the fact $\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{P}^r\times\mathbb{P}^r}(1-m,1)|_D$ under an embedding $D\subset\mathbb{P}^r\times\mathbb{P}^r$ of bidegree $(1, 1)$. We note that there exists an embedding $X\subset X_1:=\mathbb{P}[\mathbb{P}^r; 1^{r+1}, m]$ obtained by the surjection $\alpha$ in the exact sequence \[0\rightarrow\mathcal{O}_{\mathbb{P}^r}\rightarrow\mathcal{O}(1)^{\oplus r+1}\xrightarrow{\alpha} T_{\mathbb{P}^r}\rightarrow 0. \] Let $\mathcal{H}:=\mathcal{O}(0; 1)$ on $X_1$. Then $\mathcal{H}$ is an ample invertible sheaf on $X_1$ provided that $m\geq 1$, and satisfies $\mathcal{O}_X(-(K_X+D))\simeq(\mathcal{H}|_X)^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$. \end{example} \begin{example}\label{Pp} Let $X:=\mathbb{P}^r\times\mathbb{P}^r$ and $D$ is an effective divisor on $X$ with $D\in|\mathcal{O}_{\mathbb{P}^r\times\mathbb{P}^r}(1,1)|$. Then the dimension of the linear system is $r(r+2)$, any smooth element is isomorphic to $\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r})$ and any non-smooth element is the union of the first and second pullbacks of hyperplanes. 
In particular, any $D$ in the linear system $|\mathcal{O}_{\mathbb{P}^r\times\mathbb{P}^r}(1,1)|$ is a simple normal crossing divisor. Let $\mathcal{H}:=\mathcal{O}_{\mathbb{P}^r\times\mathbb{P}^r}(1,1)$. Then $\mathcal{H}$ is an ample invertible sheaf and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$ for any $D\in|\mathcal{O}_{\mathbb{P}^r\times\mathbb{P}^r}(1,1)|$. \end{example} \begin{example}\label{zerozeroone} Let \[ X:=\mathbb{P}[\mathbb{P}^r; 0^r, 1]. \] We can view $X$ as the blowing up of $P:=\mathbb{P}^{2r}$ along a linear subspace $H\subset P$ of dimension $r-1$. Let $\phi\colon X\rightarrow P$ be the blowing up and let $E$ be the exceptional divisor of $\phi$. Then \[ E=\mathbb{P}[\mathbb{P}^r; 0^r]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{P}^r; 0^r, 1]. \] Let $D$ be an effective divisor such that $D\in|\mathcal{O}(0; 1)|$. Any smooth element in the linear system $|\mathcal{O}(0; 1)|$ corresponds to the strict transform of a hyperplane in $P$ which does not contain $H$. Any non-smooth element in the linear system $|\mathcal{O}(0; 1)|$ can be written as $E+D_0$, where $D_0$ is the strict transform of a hyperplane in $P$ which contains $H$. In particular, any divisor in the linear system $|\mathcal{O}(0; 1)|$ is a simple normal crossing divisor. We also note that the dimension of the linear system $|\mathcal{O}(0; 1)|$ is equal to $2r$. Let $\mathcal{H}:=\mathcal{O}(1; 1)$. Then $\mathcal{H}$ is an ample invertible sheaf on $X$ and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$ for any $D\in|\mathcal{O}(0; 1)|$. \end{example} \begin{example}\label{zeroonebig} Let \[ X:=\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1, m] \] with $m\geq 1$, and let $D$ be an effective divisor on $X$ with $D\in|\mathcal{O}(-m; 1)|$. If $m\geq 2$, then $D$ is unique in the linear system $|\mathcal{O}(-m; 1)|$ and must be equal to the subbundle \[ \mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1, m]. \] If $m=1$, then the dimension of the linear system is equal to $1$ by Lemma \ref{scroll_lem} \eqref{scroll_lem5}, and it is easy to show that any element is smooth and can also be seen as the subbundle obtained by the above canonical projection after taking an automorphism of $X$ over $\mathbb{P}^r$. In particular, any $D\in|\mathcal{O}(-m; 1)|$ is a smooth divisor. Let $\mathcal{H}:=\mathcal{O}(1; 1)$. Then $\mathcal{H}$ is an ample invertible sheaf on $X$ and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$ for any $D\in|\mathcal{O}(-m; 1)|$. \end{example} \begin{example}\label{zerozerobig} Let \[ X:=\mathbb{P}[\mathbb{P}^r; 0^r, m] \] with $m\geq 2$, and let $D$ be an effective divisor on $X$ such that $D\in|\mathcal{O}(-m+1; 1)|$. We note that any such $D$ contains \[ D_0:=\mathbb{P}[\mathbb{P}^r; 0^r]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{P}^r; 0^r, m] \] as an irreducible component by Lemma \ref{scroll_lem} \eqref{scroll_lem6}. Furthermore, $D-D_0$ is the pull back of a hyperplane in $\mathbb{P}^r$ by the projection $p\colon X\rightarrow\mathbb{P}^r$. Therefore any $D\in|\mathcal{O}(-m+1; 1)|$ is a simple normal crossing divisor. We also note that the dimension of the linear system $|\mathcal{O}(-m+1; 1)|$ is equal to $r$ by Lemma \ref{scroll_lem} \eqref{scroll_lem5}.
Let $\mathcal{H}:=\mathcal{O}(1; 1)$. Then $\mathcal{H}$ is an ample invertible sheaf on $X$ and $\mathcal{O}_X(-(K_X+D))\simeq\mathcal{H}^{\otimes r}$. Therefore $(X, D)$ is a $2r$-dimensional log Fano manifold with the log Fano index $r$ for any $D\in|\mathcal{O}(-m+1; 1)|$. \end{example} Now, we state some remarks about these examples. \begin{remark}\label{fanoQrmk} In Example \ref{fanoQ}, the homomorphism \[ \tau^*\colon\operatorname{Pic}(X')\rightarrow\operatorname{Pic}(X) \] is an isomorphism. In particular, $\rho(X)=2$. \end{remark} \begin{proof} We can assume $\Bbbk=\mathbb{C}$. The case $m=0$ is obvious since $X\simeq\mathbb{P}^{r-1}\times\mathbb{Q}^{r+1}$. We consider the case $m>0$. Let $R\subset X$ be the ramification divisor of $\tau$. We know that the linear system $|\mathcal{O}(0; 1)|$ in $X'$ gives a divisorial contraction morphism $f\colon X'\rightarrow Q$ contracting $E\simeq\mathbb{P}^{r-1}\times\mathbb{P}^r$ to $\mathbb{P}^r$. We note that $B\subset X'$ is the pull back of some ample divisor $A\subset Q$. Thus $H_i((X'\setminus B)^{\text{an}}; \mathbb{Z})=0$ for all $i>2r+r-2$ by \cite[p.25, (2.3) Theorem]{morse} applied to the proper morphism $f|_{X'\setminus B}\colon X'\setminus B\rightarrow Q\setminus A$ to an affine variety. Thus $H_c^i((X'\setminus B)^{\text{an}}; \mathbb{Z})=0$ for all $i<r+2$ by Poincar\'e duality. We know that there exists an exact sequence \[ H_c^2((X'\setminus B)^{\text{an}}; \mathbb{Z})\rightarrow H^2((X')^{\text{an}}; \mathbb{Z})\xrightarrow{\alpha}H^2(B^{\text{an}}; \mathbb{Z}) \rightarrow H_c^3((X'\setminus B)^{\text{an}}; \mathbb{Z}). \] Thus $\alpha$ is an isomorphism. Applying the same argument to the composition $f\circ\tau\colon X\rightarrow Q$, we obtain the isomorphism $H^2(X^{\text{an}}; \mathbb{Z})\xrightarrow{\sim}H^2(R^{\text{an}}; \mathbb{Z})\simeq H^2(B^{\text{an}}; \mathbb{Z})$. Therefore $H^2((X')^{\text{an}}; \mathbb{Z})\simeq H^2(X^{\text{an}}; \mathbb{Z})$. Therefore $\operatorname{Pic}(X')\simeq\operatorname{Pic}(X)$ by Lemma \ref{picard}. \end{proof} \begin{remark}\label{rthreermk} If $m<0$, then $(X, D)$ in Example \ref{rthree} is never a log Fano manifold. \end{remark} \begin{proof} Let $S:=\mathbb{P}[\mathbb{Q}^r; m]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{Q}^r; 0^r, m]$, the section of the projection $p\colon X\rightarrow\mathbb{Q}^r$. Then $\mathcal{O}_X(-(K_X+D))|_S\simeq\mathcal{O}_{\mathbb{Q}^r}(r(m+1))$. Therefore $-(K_X+D)$ is never ample. \end{proof} \begin{remark}\label{Tprmk} If $m<1$, then $(X, D)$ in Example \ref{Tp} is never a log Fano manifold. \end{remark} \begin{proof} Let \[ S:=\mathbb{P}_{\mathbb{P}^r}(\mathcal{O}(m))\subset_{\operatorname{can}}\, X=\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r}\oplus\mathcal{O}(m)). \] Then $\mathcal{O}_X(-(K_X+D))|_S\simeq\mathcal{O}_{\mathbb{P}^r}(mr)$. Therefore $-(K_X+D)$ is never ample. \end{proof} \begin{remark}\label{mukai1rmk} For Examples \ref{burouappu}, \ref{pP}, \ref{kayaku}, \ref{fanoQ}, \ref{rthree}, \ref{rtwo}, \ref{Tp}, \ref{Pp}, \ref{zerozeroone}, \ref{zeroonebig}, \ref{zerozerobig}, the varieties $X_1$ and $X_2$ are not isomorphic to each other for any two pairs $(X_1, D_1)$ and $(X_2, D_2)$ if either of the following holds: \begin{itemize} \item $(X_1, D_1)$ and $(X_2, D_2)$ appear in different examples. \item $(X_1, D_1)$ and $(X_2, D_2)$ appear in the same example but the parameters $m$ or $(m_1, m_2)$ are distinct. \end{itemize} We also note that any $X$ in these examples is unique up to isomorphism except for Example \ref{fanoQ}.
\end{remark} We tabulate these examples in Section \ref{sbsc_mukai1}. We will show in Theorem \ref{mukai1} that these are the all examples of $2r$-dimensional log Fano manifolds $(X, D)$ wih the log Fano indices $r\geq 2$, $D\neq 0$ and $\rho(X)\geq 2$. \begin{center} \begin{table}\caption{The list of $2r$-dimensional log Fano manifolds $(X, D)$ with the log Fano indices $r\geq 2$, $\rho(X)\geq 2$ and nonzero boundaries}\label{maintable} \begin{tabular}[t]{|c|c|c|} \hline \rule[-1ex]{0ex}{3.5ex} \operatorname{sm}all{No.} & $X$ & $D$ \\ \hline \rule[-0.5ex]{0ex}{3ex} \ref{burouappu} & $\operatorname{Bl}_{\mathbb{P}^{r-2}}\mathbb{P}^{2r}$ & $\operatorname{Bl}_{\mathbb{P}^{r-2}}\mathbb{P}^{2r-1}$ with \\ \rule[-1ex]{0ex}{3ex} & & $\mathbb{P}^{r-2}\subset\mathbb{P}^{2r-1}\subset\mathbb{P}^{2r}$ linear \\ \hline \rule[-0.5ex]{0ex}{3ex} \ref{pP} & $\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}$ & $\mathbb{P}^{r-1}\times\mathbb{Q}^r$ \\ \cline{3-3} \rule[-0.5ex]{0ex}{3ex} & & $\mathbb{P}^{r-1}\times\mathbb{P}^r\cup\mathbb{P}^{r-1}\times\mathbb{P}^r$ \\ \hline \rule[-0.5ex]{0ex}{3ex} \ref{kayaku} & $\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_1, m_2]$ & $\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_1]\cup \mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_2]$ \\ \rule[-1ex]{0ex}{3ex} & with $0\leq m_1\leq m_2, 1\leq m_2$ & \\ \hline \rule[-0.5ex]{0ex}{3ex} \ref{fanoQ} & the double cover of & the strict transform of \\ \rule[-1ex]{0ex}{3ex} & $\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}, m]$ $(m\geq 0)$ with & $\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}]$\,\,$(\simeq\mathbb{P}^{r-1}\times\mathbb{Q}^r)$; \\ \rule[-1ex]{0ex}{3ex} & the smooth branch $B\in|\mathcal{O}(0; 2)|$ & smooth \\ \hline \rule[-1.5ex]{0ex}{4ex} \ref{rthree} & $(r\geq 3)$ $\mathbb{P}[\mathbb{Q}^r; 0^r, m]$ with $m\geq 0$ & $\mathbb{P}[\mathbb{Q}^r; 0^r](\simeq\mathbb{P}^{r-1}\times\mathbb{Q}^r)$ \\ \hline \rule[-0.5ex]{0ex}{3ex} \ref{rtwo} & $(r=2)$ $\mathbb{P}_{\mathbb{P}^1\times\mathbb{P}^1}(\mathcal{O}^{\oplus 2}\oplus\mathcal{O}(m_1, m_2))$ & $\mathbb{P}_{\mathbb{P}^1\times\mathbb{P}^1}(\mathcal{O}^{\oplus 2})(\simeq\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)$ \\ \rule[-1ex]{0ex}{3ex} & with $0\leq m_1\leq m_2$ & \\ \hline \rule[-1.5ex]{0ex}{4ex} \ref{Tp} & $\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r}\oplus\mathcal{O}(m))$ with $m\geq 1$ & $\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r})$\\ \hline \rule[-0.5ex]{0ex}{3ex} \ref{Pp} & $\mathbb{P}^r\times\mathbb{P}^r$ & $\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r})\in|\mathcal{O}(1, 1)|$\\ \cline{3-3} \rule[-0.5ex]{0ex}{3ex} & & $\mathbb{P}^{r-1}\times\mathbb{P}^r\cup\mathbb{P}^r\times\mathbb{P}^{r-1}$\\ \hline \rule[-0.5ex]{0ex}{3ex} \ref{zerozeroone} & $\mathbb{P}[\mathbb{P}^r; 0^r, 1]$ & $\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1](\simeq\operatorname{Bl}_{\mathbb{P}^{r-2}}\mathbb{P}^{2r-1})$\\ \cline{3-3} \rule[-0.5ex]{0ex}{3ex} & & $\mathbb{P}[\mathbb{P}^r; 0^r]\cup\mathbb{P}[\mathbb{P}^{r-1}; 0^r, 1]$\\ \hline \rule[-1.5ex]{0ex}{4ex} \ref{zeroonebig} & $\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1, m]$ with $m\geq 1$ & $\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1](\simeq\operatorname{Bl}_{\mathbb{P}^{r-2}}\mathbb{P}^{2r-1})$ \\ \hline \rule[-1.5ex]{0ex}{4ex} \ref{zerozerobig} & $\mathbb{P}[\mathbb{P}^r; 0^r, m]$ with $m\geq 2$ & $\mathbb{P}[\mathbb{P}^r; 0^r]\cup\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m]$\\ \hline \end{tabular} \\ \begin{itemize} \item[\ref{burouappu}]\,\, $D$ is the strict transform of a hyperplane in $\mathbb{P}^{2r}$ which contains the center. \item[\ref{pP}]\,\, $D\in|\mathcal{O}(0, 2)|$. 
The upper column is smooth and the lower is reducible. \item[\ref{kayaku}]\,\, Both components of $D$ to $X$ are obtained by the canonical projections. \item[\ref{fanoQ}]\,\, $\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}]\subset_{\operatorname{can}}\,\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}, m]$. We request that both $B$ and $B\cap\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}]$ are smooth. \item[\ref{rthree},] \ref{rtwo},\, \ref{Tp},\, \ref{zeroonebig}\,\, The embeddings $D\subset X$ are obtained by the canonical projections. \item[\ref{Pp}]\,\, $D\in|\mathcal{O}(1, 1)|$. The upper column is smooth and the lower is non-smooth that is obtained by the union of the first and second pullbacks of hyperplanes. \item[\ref{zerozeroone}]\,\, $\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1]$, $\mathbb{P}[\mathbb{P}^r; 0^r]\subset_{\operatorname{can}}\,\mathbb{P}[\mathbb{P}^r; 0^r, 1]$ and $\mathbb{P}[\mathbb{P}^{r-1}; 0^r, 1]$ is the pullback of a hyperplane in $\mathbb{P}^r$. \item[\ref{zerozerobig}]\,\, $\mathbb{P}[\mathbb{P}^r; 0^r]\subset_{\operatorname{can}}\,\mathbb{P}[\mathbb{P}^r; 0^r, m]$ and $\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m]$ is the pullback of a hyperplane in $\mathbb{P}^r$. \end{itemize} \end{table} \end{center} \section{Theorems}\label{thm_section} In this section, we state the main propositions and theorems of classification results. \subsection{Log Fano manifolds of del Pezzo type}\label{delpezzo_section} First, we give the classification result of $n$-dimensional log Fano manifolds $(X,D)$ with the log Fano indices $r\geq n-1$. The case $D=0$ is well-known as del Pezzo manifolds (see for example \cite[I \S 8]{fujitabook}), hyperquadric or projective space (see \cite{KO}). Hence we consider the case that $D\neq 0$. We note that the case $(n, r)=(2, 1)$ has been classified by Maeda \cite[\S 3]{Maeda}. We treat the log Fano pseudoindex instead of the log Fano index. \begin{proposition}\label{indn} Let $(X,D)$ be an $n$-dimensional log Fano manifold with the log Fano pseudoindex $\iota$ and assume $D$ is nonzero. Then $\iota\leq n$. If $\iota=n$, then $X\simeq\mathbb{P}^n$ and $D$ is a hyperplane section under this isomorphism. \end{proposition} \begin{proof} Assume that $\iota\geq n$. Choose an extremal ray $R$ with a minimal rational curve $[C]\in R$ such that $(D\cdot R)>0$. Then $l(R)\geq n+1$ by Lemma \ref{longray}. We know that $l(R)\leq n+1$, where the equality holds if and only if $X\simeq\mathbb{P}^n$ and $D\in|\mathcal{O}(1)|$ by \cite{CMSB}. \end{proof} \begin{proposition}\label{dP} Let $(X,D)$ be an $n$-dimensional log Fano manifold with the log Fano pseudoindex $\iota=n-1$ and assume $D$ is nonzero and $n\geq 3$. Then $X$ is isomorphic to $\mathbb{P}^n$ or $\mathbb{Q}^n$ unless $n=3$ and $(X, D)$ is isomorphic to the case $\iota=2$ in Example \ref{2rminusone} $($cf. Section \ref{sbsc_mukai0}$)$. Moreover: \begin{itemize} \item If $X=\mathbb{P}^n$, then $D\in|\mathcal{O}(2)|$, i.e., $D$ is a smooth or reducible hyperquadric. \item If $X=\mathbb{Q}^n$, then $D\in|\mathcal{O}(1)|$, i.e., $D$ is a smooth hyperplane section. \end{itemize} \end{proposition} \begin{proof} Let $R$ be an extremal ray with a minimal rational curve $[C]\in R$ such that $(D\cdot R)>0$. Then $l(R)\geq n$ holds by Lemma \ref{longray}. Let $\pi\colon X\rightarrow Y$ be the contraction morphism associated to $R$. By Wi\'sniewski's inequality (Theorem \ref{wisn_ineq}), \[ \dim\operatorname{Exc}(\pi)+\dim F\geq n+l(R)-1\geq 2n-1 \] holds for any nontrivial fiber $F$ of $\pi$. 
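Indeed, if $\pi$ were birational, then $\dim\operatorname{Exc}(\pi)\leq n-1$ and every nontrivial fiber $F$ of $\pi$ would be contained in $\operatorname{Exc}(\pi)$, so that $\dim\operatorname{Exc}(\pi)+\dim F\leq 2n-2$, contradicting the inequality above.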
Hence $\pi$ is of fiber type, that is, $X=\operatorname{Exc}(\pi)$ holds. If $\rho(X)=1$, then $X$ itself is a Fano manifold and the Fano pseudoindex of $X$ is larger than $n-1$ by Proposition \ref{rhoonefano}. If $\rho(X)=1$ and the Fano pseudoindex of $X$ is at least $n+1$, then $X\simeq\mathbb{P}^n$ by \cite{CMSB} and $D\in|\mathcal{O}(2)|$ holds. If $\rho(X)=1$ and the Fano pseudoindex is equal to $n$, then $(D\cdot C)=1$ by Lemma \ref{longray} \eqref{longray1}. Therefore $X\simeq\mathbb{Q}^n$ and $D\in|\mathcal{O}(1)|$ by \cite{KO}. We consider the remaining case $\rho(X)\geq 2$. Then $\dim F=n-1$ for any fiber $F$ of $\pi$ and $l(R)=n$. Hence $(D\cdot C)=1$ holds by Lemma \ref{longray} \eqref{longray1}. Therefore $\pi$ is a $\mathbb{P}^{n-1}$-bundle over a smooth projective curve $Y$ by \cite[Theorem 2]{fujita}. Then $Y\simeq\mathbb{P}^1$ since any extremal ray of $X$ is spanned by the class of a rational curve (this can also be shown by Theorem \ref{rat_conn}). Hence we can assume $X=\mathbb{P}[\mathbb{P}^1; a_0,\dots,a_{n-1}]$, where $0=a_0\leq a_1\leq\dots\leq a_{n-1}$. Thus $1\geq n-2$, that is, $n\leq 3$, by Corollary \ref{scroll_cor1}. Since $n\geq 3$, we have $n=3$, $a_1=0$ and $D\in|\mathcal{O}(-a_2; 1)|$ by Corollary \ref{scroll_cor1} (1). This is exactly the case which we have considered in Example \ref{2rminusone} for $\iota=2$. \end{proof} \subsection{Log Fano manifolds related to Mukai conjecture}\label{mukaiconjsection} In the classification theory of Fano manifolds, the so-called Mukai conjecture \cite[Conjecture 4]{mukaiconj} is famous: it asserts that the product of the Picard number and the Fano index minus one is at most the dimension, that is, $\rho(X)(r_X-1)\leq n$ for any $n$-dimensional Fano manifold $X$ with Picard number $\rho(X)$ and Fano index $r_X$. The case where the Fano index is larger than half of the dimension was treated by Wi\'sniewski \cite{wisn90, wisn91}. We consider the log version, which gives the main results of this article. \begin{thm}\label{mukai0} Let $(X,D)$ be an $n$-dimensional log Fano manifold with the log Fano pseudoindex $\iota>n/2$, $D\neq 0$ and $\rho(X)\geq 2$. Then $n=2\iota-1$, and $(X, D)$ is isomorphic to the case in Example \ref{2rminusone} in Section \ref{ex_section}. \end{thm} \begin{proof} We prove the theorem by induction on $n$. The cases $n\leq 4$ have been treated in Proposition \ref{dP}, thus we may assume $n\geq 5$. Choose an extremal ray $R$ with a minimal rational curve $[C]\in R$ as in Lemma \ref{longray} and let $\pi\colon X\rightarrow Y$ be the associated contraction. Then $l(R)\geq\iota+1\geq 4$. We also choose an irreducible component with the conductor divisor $(D_1, E_1)\subset D$ such that $(D_1\cdot R)>0$. We know that $\rho(D_1)\geq 2$ by Lemma \ref{rhoone} \eqref{rhoone2}, and $D$ is an $(n-1)$-dimensional snc Fano variety whose snc Fano pseudoindex is $\geq\iota$ by Theorem \ref{fujinothm} \eqref{fujinothm1}. Thus $(D_1, E_1)$ is an $(n-1)$-dimensional log Fano manifold whose log Fano pseudoindex is $\geq\iota$, with $\rho(D_1)\geq 2$ and $\iota>n/2>(n-1)/2$. Hence $E_1=0$ (hence $D=D_1$) by the induction hypothesis. Applying \cite[Corollary 4.3]{Occ} to $D$, we have $n-1=2(\iota-1)$ and $D\simeq\mathbb{P}^{\iota-1}\times\mathbb{P}^{\iota-1}$. We know from Lemma \ref{rhoone} \eqref{rhoone2} that $\pi|_D$ contracts a curve. Since $D\simeq\mathbb{P}^{\iota-1}\times\mathbb{P}^{\iota-1}$, $\pi|_D\colon D\rightarrow\pi(D)$ is not birational. Thus $\pi\colon X\rightarrow Y$ is of fiber type by Lemma \ref{rhoone} \eqref{rhoone1}, and $\pi|_D\colon D\rightarrow Y$ is surjective since $(D\cdot R)>0$.
We know that $\pi|_D\colon D\rightarrow Y$ is an algebraic fiber space by Lemma \ref{longray}. Hence $\pi|_D$ is isomorphic to the first projection \[ p_1\colon\mathbb{P}^{\iota-1}\times\mathbb{P}^{\iota-1}\rightarrow\mathbb{P}^{\iota-1}. \] In particular, $\dim(\pi^{-1}(y)\cap D)=\iota-1$ for any closed point $y\in Y\simeq\mathbb{P}^{\iota-1}$. Therefore, $\pi\colon X\rightarrow Y$ is a $\mathbb{P}^\iota$-bundle and $\pi|_D\colon D\rightarrow Y$ is a $\mathbb{P}^{\iota-1}$-subbundle by Proposition \ref{structure_cor} \eqref{structure_cor1}. Since $D\simeq\mathbb{P}^{\iota-1}\times\mathbb{P}^{\iota-1}$, there exists an integer $m\in\mathbb{Z}$ such that $(\pi|_D)_*\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{P}^{\iota-1}}(-m)^{\oplus\iota}$ by Lemma \ref{lemP}. We also know by Lemma \ref{lemP} that $X\simeq\mathbb{P}_{\mathbb{P}^{\iota-1}}(\pi_*\mathcal{O}_X(D))$ and the embedding $D\subset X$ is obtained by the surjection $\alpha$ in the natural exact sequence \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^{\iota-1}}\rightarrow\pi_*\mathcal{O}_X(D)\xrightarrow{\alpha} (\pi|_D)_*\mathcal{N}_{D/X}\rightarrow 0. \] Since $\iota-1=(n+1)/2-1\geq 2$, this exact sequence always splits. Hence we have $\pi_*\mathcal{O}_X(D)\simeq\mathcal{O}_{\mathbb{P}^{\iota-1}}\oplus\mathcal{O}_{\mathbb{P}^{\iota-1}}(-m)^{\oplus\iota}$ and $D\subset X$ is obtained by the canonical projection \[ \mathcal{O}_{\mathbb{P}^{\iota-1}}\oplus\mathcal{O}_{\mathbb{P}^{\iota-1}}(-m)^{\oplus\iota}\rightarrow \mathcal{O}_{\mathbb{P}^{\iota-1}}(-m)^{\oplus\iota}. \] This case has already been considered in Corollary \ref{scroll_cor1} \eqref{scroll_cor11}; in particular, $m\geq 0$ holds. This is exactly the case which we have considered in Example \ref{2rminusone}. This completes the proof of Theorem \ref{mukai0}. \end{proof} We recall Wi\'sniewski's classification result. \begin{thm}[{\cite{wisn91}}]\label{wisn_classify} Let $X$ be an $n$-dimensional Fano manifold with the Fano index $r$. If $n=2r-1$ and $\rho(X)\geq 2$, then $X$ is isomorphic to one of the following: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{{\rm \theenumi)}} \item\label{wisn_classify1} $\mathbb{P}^{r-1}\times\mathbb{Q}^r,$ \item\label{wisn_classify2} $\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r}),$ \item\label{wisn_classify3} $\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1].$ \end{enumerate} \end{thm} Using Theorems \ref{mukai0} and \ref{wisn_classify}, we classify $2r$-dimensional log Fano manifolds $(X, D)$ with the log Fano index $r\geq 2$ and $D\neq 0$. We note that the case $r=1$ has been classified by Maeda \cite[\S 3]{Maeda}. \begin{thm}[Main Theorem]\label{mukai1} Let $(X, D)$ be a $2r$-dimensional log Fano manifold with the log Fano index $r\geq 2$, $D\neq 0$ and $\rho(X)\geq 2$. Then $(X, D)$ is isomorphic to one of the pairs constructed in Examples \ref{burouappu}, \ref{pP}, \ref{kayaku}, \ref{fanoQ}, \ref{rthree}, \ref{rtwo}, \ref{Tp}, \ref{Pp}, \ref{zerozeroone}, \ref{zeroonebig}, \ref{zerozerobig} (see Table \ref{maintable} for the list of $X$ and $D$). \end{thm} We prove Theorem \ref{mukai1} in Section \ref{prf_section}. \subsection{Classification of Mukai-type log Fano manifolds}\label{mukaisection} \begin{corollary}\label{coindex3} We have classified the $n$-dimensional log Fano manifolds $(X, D)$ with the log Fano index $r\geq n-2$. \end{corollary} \begin{proof} We can assume $r=n-2$ since the cases $r\geq n-1$ have been treated in Section \ref{delpezzo_section}.
The case $D=0$ is well known; such manifolds are called Mukai manifolds (see \cite{isk77,MoMu,mukai,wis,wisn90,wisn91}). The case $n=3$, $D\neq 0$ has been classified by Maeda \cite{Maeda}. The case $n\geq 4$, $D\neq 0$, $\rho(X)\geq 2$ is already known by Propositions \ref{indn}, \ref{dP} and Theorems \ref{mukai0}, \ref{mukai1}. The remaining case $n\geq 4$, $D\neq 0$, $\rho(X)=1$ is evident from Proposition \ref{rhoonefano}; $(X, D)$ is isomorphic to one of the following: \begin{enumerate} \renewcommand{\theenumi}{\Roman{enumi}} \renewcommand{\labelenumi}{{\rm (\theenumi)}} \item\label{MI} $X\simeq\mathbb{P}^n$ and $D\in|\mathcal{O}(3)|$. \item\label{MII} $X\simeq\mathbb{Q}^n$ and $D\in|\mathcal{O}(2)|$. \item\label{MIII} $X\simeq V_d$ and $D\in|\mathcal{O}(1)|$ with $1\leq d\leq 5$, where $V_d$ is a del Pezzo manifold of degree $d$ in the sense of Takao Fujita \cite[Theorem 8.11, 1)--5)]{fujitabook}, and $\mathcal{O}(1)$ is the ample generator of $\operatorname{Pic}(V_d)$. \end{enumerate} Conversely, we know that general elements in the linear systems of \eqref{MI}--\eqref{MIII} are smooth. Hence the cases \eqref{MI}--\eqref{MIII} actually occur. \end{proof} \section{Proof of Main Theorem \ref{mukai1}}\label{prf_section} In this section, we give a proof of Main Theorem \ref{mukai1}. Let $L$ be an ample divisor on $X$ such that $-(K_X+D)\sim rL$. Pick an extremal ray $R$ with a minimal rational curve $[C]\in R$ such that $(D\cdot R)>0$ and let $\pi\colon X\rightarrow Y$ be the associated contraction morphism. Then we know that $l(R)\geq r+1\geq 3$ by Lemma \ref{longray}. We note that $D$ is a $(2r-1)$-dimensional snc Fano variety whose snc Fano index is divisible by $r$ by Theorem \ref{fujinothm} \eqref{fujinothm1}. Let $(D_1, E_1)\subset D$ be an irreducible component of $D$ with the conductor divisor such that $(D_1\cdot R)>0$. By the assumption $\rho(X)\geq 2$ and Lemma \ref{rhoone} \eqref{rhoone2}, the morphism $\pi|_{D_1}\colon D_1\rightarrow\pi(D_1)$ is not a finite morphism and $\rho(D_1)\geq 2$ holds. Since $(D_1, E_1)$ is a $(2r-1)$-dimensional log Fano manifold whose log Fano index is divisible by $r$, the morphism $\pi|_{D_1}\colon D_1\rightarrow\pi(D_1)$ (which is an algebraic fiber space by Lemma \ref{longray}) is isomorphic to exactly one of the morphisms in the following list by Theorems \ref{mukai0} and \ref{wisn_classify}. \begin{enumerate} \renewcommand{\theenumi}{\arabic{enumi}} \renewcommand{\labelenumi}{$(\theenumi)$} \item\label{main1} $\mathbb{P}^{r-1}\times\mathbb{Q}^r\xrightarrow{p_1}\mathbb{P}^{r-1}$, where $E_1=0$. \item\label{main2} $\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m]\xrightarrow{p}\mathbb{P}^{r-1}$, where $E_1\in|\mathcal{O}(-m; 1)|$ with $m\geq 0$. \item\label{main3} $\mathbb{P}^{r-1}\times\mathbb{Q}^r\xrightarrow{p_2}\mathbb{Q}^r$, where $E_1=0$. \item\label{main4} $\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r})\xrightarrow{p}\mathbb{P}^r$, where $E_1=0$. \item\label{main5} $\mathbb{P}[\mathbb{P}^{r}; 0^{r-1}, 1]\xrightarrow{p}\mathbb{P}^r$, where $E_1=0$. \item\label{main6} $\mathbb{P}^r\times\mathbb{P}^{r-1}\xrightarrow{p_1}\mathbb{P}^r$, where $E_1\in|\mathcal{O}_{\mathbb{P}^r\times\mathbb{P}^{r-1}}(1, 0)|$ (the case in Theorem \ref{mukai0} with $m=0$). \item\label{main7} $\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m]\xrightarrow{\phi}Z$, the divisorial contraction morphism contracting $E_1\simeq\mathbb{P}^{r-1}\times\mathbb{P}^{r-1}$ to $\mathbb{P}^{r-1}$, where $m>0$.
\item\label{main8} $\operatorname{Bl}_{\mathbb{P}^{r-2}}\mathbb{P}^{2r-1}\xrightarrow{\operatorname{Bl}}\mathbb{P}^{2r-1}$, the blowing up of $\mathbb{P}^{2r-1}$ along a linear subspace $\mathbb{P}^{r-2}$, where $E_1=0$. \end{enumerate} \begin{remark} For the cases \eqref{main1}, \eqref{main2} and \eqref{main8}, $\dim(F\cap D_1)=r$ for any nontrivial fiber $F$ of $\pi$. For the cases \eqref{main3}, \eqref{main4}, \eqref{main5}, \eqref{main6} and \eqref{main7}, $\dim(F\cap D_1)=r-1$ for any nontrivial fiber $F$ of $\pi$. \end{remark} We separate the cases whether the contraction morphism $\pi$ is of fiber type (Section \ref{fiber_type}) or not (Section \ref{birational_type}). \subsection{Fiber type case}\label{fiber_type} Here, we consider the case where $\pi$ is of fiber type. Since $\dim F\geq l(R)-1\geq r\geq 2$ for any fiber $F$ of $\pi$, we have $\dim D_1>\dim Y$. Hence $\pi|_{D_1}$ is surjective and belongs to the cases \eqref{main1}--\eqref{main6} (we note that $\pi|_{D_1}$ is an algebraic fiber space by Lemma \ref{longray}). \noindent\textbf{The cases \eqref{main1} and \eqref{main2}} First, we consider the cases \eqref{main1} and \eqref{main2}. Then $\dim(\pi|_{D_1})^{-1}(y)=r$ for any closed point $y\in Y\simeq\mathbb{P}^{r-1}$. Thus one of \eqref{structure_cor2b}, \eqref{structure_cor2c} or \eqref{structure_cor2d} in Proposition \ref{structure_cor} holds. \underline{The case \eqref{main1}} Since $\pi|_D$ is a $\mathcal{Q}^r$-bundle over $Y\simeq\mathbb{P}^{r-1}$, only the case \eqref{structure_cor2b} or \eqref{structure_cor2d} can occurs. First, we consider the case \eqref{structure_cor2b}. Since $D\simeq\mathbb{P}^{r-1}\times\mathbb{Q}^r$, $\pi|_D$ is isomorphic to the first projection and we can write $\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{Q}^r}(-m,1)$ for some integer $m\in\mathbb{Z}$. Then $(\pi|_D)_*\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{P}^{r-1}}(-m)^{\oplus r+2}$ by Lemma \ref{PQlem}, and the sequence \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^{r-1}}\rightarrow\pi_*\mathcal{O}_X(D)\xrightarrow{\alpha} \mathcal{O}_{\mathbb{P}^{r-1}}(-m)^{\oplus r+2}\rightarrow 0 \] is exact. Furthermore, $X$ is obtained as a smooth divisor belonging to $|p^*\mathcal{O}_{\mathbb{P}^{r-1}}(s)\otimes\mathcal{O}_{\mathbb{P}}(2)|$ in $\mathbb{P}:=\mathbb{P}_{\mathbb{P}^{r-1}}(\pi_*\mathcal{O}_X(D))$ for some $s\in\mathbb{Z}$, where $p\colon\mathbb{P}\rightarrow\mathbb{P}^{r-1}$ is the projection, $D$ is the complete intersection of $X$ with $H:=\mathbb{P}[\mathbb{P}^{r-1}; (-m)^{r+2}]$ in $\mathbb{P}$. Here $H\subset\mathbb{P}$ is the subbundle of $p$ obtained by the surjection $\alpha$ in the above exact sequence, by Lemma \ref{lemQ}. Under the isomorphism $H\simeq\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}$, the divisor $D\simeq\mathbb{P}^{r-1}\times\mathbb{Q}^r$ belongs to $|\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}}(s-2m, 2)|$. Thus $s\geq 2m$ since $h^0(\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}, \mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}}(t, 2))=0$ for any $t<0$. If $s>2m$, then the restriction homomorphism $\operatorname{Pic}(\mathbb{P}^{r-1}\times\mathbb{P}^{r+1})\rightarrow\operatorname{Pic}(D)$ is isomorphism by Lefschetz hyperplane theorem and $\mathcal{O}_D(-K_D)\simeq\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}}(r-(s-2m), r)|_D$ by the adjunction theorem, but we know that $-K_D$ is divisible by $r$, which leads to a contradiction. Therefore we have $s=2m$. \begin{claim}\label{mclaim} $m\geq 0$ holds. 
\end{claim} \begin{proof} We first consider the case $r=2$. Since $\rho(X)=2$, we can write $\mathbb{N}E(X)=R+R'$ and let the contraction morphism associated to $R'$ be $\pi'\colon X\rightarrow Y'$. We note that any nontrivial fiber $F'$ of $\pi'$ satisfies $\dim F'=1$, since no curve in $F'$ is contracted by $\pi$. If $(D\cdot R')>0$, then any nontrivial fiber $F'$ of $\pi'$ satisfies $\dim F'\geq 2$ by the same argument as in Proposition \ref{structure_cor}; this is a contradiction. If $(D\cdot R')<0$, then $\operatorname{Exc}(\pi')\subset D$. Hence $m>0$. If $(D\cdot R')=0$, then $R'$ is a $K_X$-negative extremal ray and $l(R')\geq 2$ by the same argument as in Proposition \ref{structure_cor}. Hence $\pi'$ is of fiber type by Wi\'sniewski's inequality (Theorem \ref{wisn_ineq}). Thus $\pi'|_D$ is not a finite morphism since $(D\cdot R')=0$. Therefore $m\geq 0$. Now we consider the case $r\geq 3$. The above exact sequence always splits, hence $H=\mathbb{P}[\mathbb{P}^{r-1}; (-m)^{r+2}]\subset_{\operatorname{can}}\,\mathbb{P}=\mathbb{P}[\mathbb{P}^{r-1}; (-m)^{r+2}, 0]$. Assume that $m<0$ holds. The total coordinate ring of $\mathbb{P}$ is the $\mathbb{Z}^{\oplus 2}$-graded polynomial ring \[ \Bbbk[x_0, \dots, x_{r-1}, y_0, y_1, \dots, y_{r+2}] \] with the grading \begin{eqnarray*} \deg x_i & = & (1, 0) \ (0\leq i\leq r-1),\\ \deg y_0 & = & (0, 1),\\ \deg y_i & = & (m, 1) \ (1\leq i\leq r+2). \end{eqnarray*} $X$ is obtained by a graded equation of bidegree $(2m, 2)$. Since $m<0$, any bidegree $(2m, 2)$ polynomial is a linear combination of the elements of $\{y_iy_j\}_{1\leq i\leq j\leq r+2}$. Then any divisor defined by a graded equation of bidegree $(2m, 2)$ has singular points along the locus defined by the graded equations $y_1=\dots=y_{r+2}=0$ by the Jacobian criterion. This is a contradiction since $X$ must be a smooth divisor. Therefore $m\geq 0$. \end{proof} Hence the above exact sequence splits. We now normalize the bundle structures for simplicity. That is, we rewrite \[ H:=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+2}]\subset_{\operatorname{can}}\,\mathbb{P}:=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+2}, m] \] with $m\geq 0$; then $X$ is a smooth divisor on $\mathbb{P}$ with $X\in|\mathcal{O}(0; 2)|$, $D=X\cap H$, and $D$ is smooth. Since $H\simeq\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}$ and $D\simeq\mathbb{P}^{r-1}\times\mathbb{Q}^r$, we can take $S(\simeq\mathbb{P}^{r-1})\subset H$, the pull back of a point of $\mathbb{P}^{r+1}$ under the second projection $p_2\colon H\simeq\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}\rightarrow\mathbb{P}^{r+1}$, such that $S\cap D=\emptyset$. We can assume that $S$ is the section of $p\colon\mathbb{P}\rightarrow\mathbb{P}^{r-1}$ obtained by the canonical first projection, that is, \[ S=\mathbb{P}[\mathbb{P}^{r-1}; 0]\subset_{\operatorname{can}}\,\mathbb{P}=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+2}, m]. \] Then the relative linear projection from $S$ over $\mathbb{P}^{r-1}\simeq Y$ gives the morphism \[ \sigma\colon\mathbb{P}\setminus S\rightarrow X':=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}, m] \] over $\mathbb{P}^{r-1}\simeq Y$. The restriction of $\sigma$ to $X$ gives a double cover morphism $\tau\colon X\rightarrow X'$. It is easy to show that the branch divisor $B\subset X'$ of $\tau$ is a smooth divisor on $X'$ with $B\in|\mathcal{O}(0; 2)|$. Since the strict transform under $\tau$ of the divisor \[ D':=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}]\subset_{\operatorname{can}}\, X'=\mathbb{P}[\mathbb{P}^{r-1}; 0^{r+1}, m] \] is exactly $D$, the intersection $B\cap D'$ is also smooth.
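For the reader's convenience, we sketch the check that this configuration has the anticanonical description of Example \ref{fanoQ} (using the standard canonical bundle formula for projective space bundles together with the Hurwitz formula for the double cover $\tau$). Since $\mathcal{O}_{X'}(-K_{X'})\simeq\mathcal{O}(r-m; r+2)$ and the branch divisor satisfies $B\in|\mathcal{O}(0; 2)|$, we have
\[
\mathcal{O}_X(-K_X)\simeq\tau^*\bigl(\mathcal{O}_{X'}(-K_{X'})\otimes\mathcal{O}(0; -1)\bigr)\simeq\tau^*\mathcal{O}(r-m; r+1),
\]
and, since $\mathcal{O}_{X'}(D')\simeq\mathcal{O}(-m; 1)$ and $D=\tau^*D'$, we obtain $\mathcal{O}_X(-(K_X+D))\simeq\tau^*\mathcal{O}(r; r)\simeq(\tau^*\mathcal{O}(1; 1))^{\otimes r}$.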
This is exactly the case in Example \ref{fanoQ}. Now, we consider the case \eqref{structure_cor2d}. We write $\mathcal{E}:=\pi_*\mathcal{O}_X(L)$, then \[ X\simeq\mathbb{P}_{\mathbb{P}^{r-1}}(\mathcal{E})\xrightarrow{p}\mathbb{P}^{r-1}. \] We can write $\mathcal{O}_{\mathbb{P}}(1)|_D\simeq\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{Q}^r}(-m, 1)$ for some integer $m\in\mathbb{Z}$, where $\mathcal{O}_\mathbb{P}(1)$ is the tautological invertible sheaf on $X$ with respect to the projection $p$. Hence $\mathcal{E}\simeq(p|_D)_*(\mathcal{O}_{\mathbb{P}}(1)|_D)\simeq\mathcal{O}_{\mathbb{P}^{r-1}}(-m)^{\oplus r+2}$ holds by Lemma \ref{PQlem}. Therefore $X\simeq\mathbb{P}^{r-1}\times\mathbb{P}^{r+1}$, which is exactly the case in Example \ref{pP}. \underline{The case \eqref{main2}} For convenience, let $m_1:=m$, where $m$ is in \eqref{main2}. Then only the case \eqref{structure_cor2c} can occurs since $\pi|_{D_1}$ is a $\mathbb{P}^r$-bundle over $Y\simeq\mathbb{P}^{r-1}$. We note that $D$ has exactly two irreducible components $D_1$ and $D_2$ since $E_1$ is irreducible. We note that $(D_2\cdot R)>0$ since $\pi$ is of fiber type and $\pi|_{E_1}$ is surjective. Hence $\rho(D_2)\geq 2$ by the previous argument. Therefore $\pi|_{D_2}\colon D_2\rightarrow Y$ is isomorphic to \[ \mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_2]\xrightarrow{p}\mathbb{P}^{r-1} \] with $m_2\geq 0$. That is, $D_2$ also satisfy the case \eqref{main2} by repeating the same argument. We can assume $0\leq m_1\leq m_2$. Under the isomorphism $D_1\simeq\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_1]$, we can write $\mathcal{N}_{D_1/X}\simeq\mathcal{O}(u; 1)$ with $u\in\mathbb{Z}$. We have $u=-m_2$ since $\mathcal{N}_{D_1/X}|_{D_1\cap D_2}\simeq\mathcal{N}_{D_1\cap D_2/D_2}$ and $\mathcal{N}_{D_1\cap D_2/D_2}\simeq\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{P}^{r-1}}(-m_2, 1)$. Hence \begin{eqnarray*} p_*\mathcal{N}_{D_1/X} & \simeq & p_*(p^*\mathcal{O}_{\mathbb{P}^{r-1}}(-m_2)\otimes\mathcal{O}_{\mathbb{P}}(1))\\ & \simeq & \mathcal{O}_{\mathbb{P}^{r-1}}(-m_2)^{\oplus r}\oplus\mathcal{O}_{\mathbb{P}^{r-1}}(m_1-m_2). \end{eqnarray*} Thus the exact sequence \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^{r-1}}\rightarrow\pi_*\mathcal{O}_X(D_1)\rightarrow \mathcal{O}_{\mathbb{P}^{r-1}}(-m_2)^{\oplus r}\oplus\mathcal{O}_{\mathbb{P}^{r-1}}(m_1-m_2)\rightarrow 0 \] splits since $m_1\leq m_2$. Therefore \[ X\simeq\mathbb{P}[\mathbb{P}^{r-1}; 0^r, m_1, m_2] \] with $0\leq m_1\leq m_2$. We note that $D\in|\mathcal{O}(-m_1-m_2; 2)|$ by Corollary \ref{scroll_cor2}. This is exactly the case in Example \ref{kayaku}. \noindent\textbf{The cases \eqref{main3}--\eqref{main6}} Next, we consider the cases \eqref{main3}--\eqref{main6}. Then $\dim(\pi|_{D_1})^{-1}(y)=r-1$ for any closed point $y\in Y$. Hence only the case \eqref{structure_cor1} in Proposition \ref{structure_cor} occurs. \underline{The case \eqref{main3}} In this case, $Y$ is isomorphic to $\mathbb{Q}^r$. First, we consider the case $r=2$. $\pi|_D$ is isomorphic to $p_{23}\colon\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\rightarrow\mathbb{P}^1\times\mathbb{P}^1$ and we can write $\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1}(1, -m_1, -m_2)$ with $m_1$, $m_2\in\mathbb{Z}$. \begin{claim} $m_1$, $m_2\geq 0$ holds. \end{claim} \begin{proof} It is enough to show $m_1\geq 0$. Let $f=\{t\}\times\mathbb{P}^1\subset \mathbb{P}^1\times\mathbb{P}^1\simeq Y$ be an arbitrary fiber of $p_1:\mathbb{P}^1\times\mathbb{P}^1\rightarrow\mathbb{P}^1$, where $t\in\mathbb{P}^1$. 
Let $X_f$ (resp.\ $D_f$) be the intersection of $\pi^{-1}(f)$ and $X$ (resp.\ $D$). Then $X_f\rightarrow f$ is a $\mathbb{P}^2$-bundle, $D_f$ is a smooth divisor in $X_f$ with $D_f\neq 0$ and \[ \mathcal{O}_{X_f}(-(K_{X_f}+D_f))\simeq\mathcal{O}_X(-(K_X+D))|_{X_f}\simeq\mathcal{O}_X(2L)|_{X_f}. \] Thus $(X_f, D_f)$ is a $3$-dimensional log Fano manifold whose log Fano index is an even number and $\rho(X_f)=2$. Hence \[ D_f=\mathbb{P}[\mathbb{P}^1; 0^2](\simeq\mathbb{P}^1\times\mathbb{P}^1) \subset_{\operatorname{can}}\, X_f=\mathbb{P}[\mathbb{P}^1; 0^2, m] \] with $m\geq 0$ by Proposition \ref{dP}, thus $\mathcal{N}_{D_f/X_f}\simeq\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(1, -m)$. Since $\mathcal{N}_{D_f/X_f}\simeq\mathcal{N}_{D/X}|_{D_f}\simeq\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(1, -m_1)$, we have $m_1=m\geq 0$. \end{proof} We know that ${p_{23}}_*\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(-m_1, -m_2)^{\oplus 2}$ by Lemma \ref{PQlem}. Hence we can show that the exact sequence obtained by Lemma \ref{lemP} \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}\rightarrow\pi_*\mathcal{O}_X(D)\rightarrow \mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(-m_1, -m_2)^{\oplus 2}\rightarrow 0 \] splits. Hence we we can show that \[ D=\mathbb{P}_{\mathbb{P}^1\times\mathbb{P}^1}(\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}^{\oplus 2})\subset_{\operatorname{can}}\, X=\mathbb{P}_{\mathbb{P}^1\times\mathbb{P}^1}(\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}^{\oplus 2}\oplus\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(m_1, m_2)) \] with $0\leq m_1\leq m_2$ by Lemma \ref{lemP}. This is exactly the case in Example \ref{rtwo}. We now consider the remaining case $r\geq 3$. We can write the normal sheaf $\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{P}^{r-1}\times\mathbb{Q}^r}(1, -m)$ with $m\in\mathbb{Z}$. Then $(\pi|_D)_*\mathcal{N}_{D/X}\simeq\mathcal{O}_{\mathbb{Q}^r}(-m)^{\oplus r}$ by Lemma \ref{PQlem}. Hence we can see that the exact sequence obtained by Lemma \ref{lemP} \[ 0\rightarrow\mathcal{O}_{\mathbb{Q}^r}\rightarrow\pi_*\mathcal{O}_X(D) \rightarrow\mathcal{O}_{\mathbb{Q}^r}(-m)^{\oplus r}\rightarrow 0 \] splits. Hence we can show that \[ D=\mathbb{P}[\mathbb{Q}^r; 0^r]\subset_{\operatorname{can}}\, X=\mathbb{P}[\mathbb{Q}^r; 0^r, m] \] by Lemma \ref{lemP}. This is exactly the case in Example \ref{rthree}; the divisor $-(K_X+D)$ is ample if and only if $m\geq 0$ by Remark \ref{rthreermk}. \underline{The case \eqref{main4}} We can write $(\pi|_D)_*\mathcal{N}_{D/X}\simeq T_{\mathbb{P}^r}\otimes\mathcal{O}_{\mathbb{P}^r}(-m)$ with $m\in\mathbb{Z}$ by Lemma \ref{lemP}. Hence we obtain the exact sequence such that the right hand of the sequence has been seen in Lemma \ref{lemP}: \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^r}\rightarrow\pi_*\mathcal{O}_X(D)\rightarrow T_{\mathbb{P}^r}\otimes\mathcal{O}_{\mathbb{P}^r}(-m)\rightarrow 0. \] It is well known that \[ \operatorname{Ext}_{\mathbb{P}^r}^1(T_{\mathbb{P}^r}\otimes\mathcal{O}_{\mathbb{P}^r}(-m), \mathcal{O}_{\mathbb{P}^r})\simeq \begin{cases} 0 & (m\neq 0)\\ \Bbbk & (m=0). \end{cases} \] We also know that all unsplit exact sequences for the case $m=0$ are obtained by the canonical exact sequences \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^r}\rightarrow\mathcal{O}_{\mathbb{P}^r}(1)^{\oplus r+1} \rightarrow T_{\mathbb{P}^r}\rightarrow 0. \] If the exact sequence is not split, then $X\simeq\mathbb{P}^r\times\mathbb{P}^r$ by the above argument. This case has been considered by Example \ref{Pp}. 
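Before treating the split case, let us briefly indicate why the above $\operatorname{Ext}$ computation holds (a standard verification included here for the reader's convenience). Since $T_{\mathbb{P}^r}\otimes\mathcal{O}_{\mathbb{P}^r}(-m)$ is locally free, we have
\[
\operatorname{Ext}_{\mathbb{P}^r}^1(T_{\mathbb{P}^r}\otimes\mathcal{O}_{\mathbb{P}^r}(-m), \mathcal{O}_{\mathbb{P}^r})\simeq H^1(\mathbb{P}^r, \Omega_{\mathbb{P}^r}^1(m)),
\]
which, by Bott's formula (recall that $r\geq 2$), vanishes for $m\neq 0$ and is one-dimensional for $m=0$; in the latter case every non-split extension is, up to isomorphism, the Euler sequence displayed above.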
If the exact sequence splits, then we can show that \[ D=\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r})\subset_{\operatorname{can}}\, X=\mathbb{P}_{\mathbb{P}^r}(T_{\mathbb{P}^r}\oplus\mathcal{O}_{\mathbb{P}^r}(m)). \] This case has been considered by Example \ref{Tp}; the divisor $-(K_X+D)$ is ample if and only if $m\geq 1$ by Remark \ref{Tprmk}. \underline{The case \eqref{main5}} We can write $(\pi|_D)_*\mathcal{N}_{D/X}\simeq(\mathcal{O}_{\mathbb{P}^r}^{\oplus r-1}\oplus\mathcal{O}_{\mathbb{P}^r}(1)) \otimes\mathcal{O}_{\mathbb{P}^r}(-m)$ with $m\in\mathbb{Z}$ by Lemma \ref{lemP}. Since $r\geq 2$, the exact sequence \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^r}\rightarrow\pi_*\mathcal{O}_X(D)\rightarrow (\mathcal{O}_{\mathbb{P}^r}^{\oplus r-1}\oplus\mathcal{O}_{\mathbb{P}^r}(1)) \otimes\mathcal{O}_{\mathbb{P}^r}(-m)\rightarrow 0 \] splits. Thus \[ X\simeq\mathbb{P}[\mathbb{P}^r; 0^{r-1}, 1, m] \] and $D\in|\mathcal{O}(-m; 1)|$. Since $\mathcal{O}_X(-K_X)\simeq\mathcal{O}(r-m; r+1)$, we have $\mathcal{O}_X(L)\simeq\mathcal{O}(1; 1)$. We know in Corollary \ref{scroll_cor1} \eqref{scroll_cor12} such that $m\geq 0$; this case has been considered in Examples \ref{zerozeroone} and \ref{zeroonebig}. \underline{The case \eqref{main6}} We can write $(\pi|_{D_1})_*\mathcal{N}_{D_1/X}\simeq\mathcal{O}_{\mathbb{P}^r}(-m)^{\oplus r}$ with $m\in\mathbb{Z}$ by Lemma \ref{lemP}. Since $r\geq 2$, the exact sequence \[ 0\rightarrow\mathcal{O}_{\mathbb{P}^r}\rightarrow\pi_*\mathcal{O}_X(D)\rightarrow \mathcal{O}_{\mathbb{P}^r}(-m)^{\oplus r}\rightarrow 0 \] splits. Thus \[ X\simeq\mathbb{P}[\mathbb{P}^r; 0^r, m]. \] We know in Corollary \ref{scroll_cor1} \eqref{scroll_cor12} such that $m\geq 0$ holds; this case has been considered in Examples \ref{Pp}, \ref{zerozeroone} and \ref{zerozerobig}. \subsection{Birational type case}\label{birational_type} We know that $\pi|_{D_1}\colon D_1\rightarrow \pi(D_1)$ is a birational morphism by Lemma \ref{rhoone} \eqref{rhoone1} and an algebraic fiber space by Lemma \ref{longray}. Hence $\pi|_{D_1}\colon D_1\rightarrow \pi(D_1)$ belongs to the cases \eqref{main7} and \eqref{main8}. However, we have $\dim(D_1\cap F)=r-1$ for any nontrivial fiber $F$ of $\pi$ for the case \eqref{main7}; this contradicts to Proposition \ref{structure_cor} \eqref{structure_cor1}. For the case \eqref{main8}, we have $\dim(D\cap F)=r$ for any nontrivial fiber $F$ of $\pi$. Thus only the case \eqref{structure_cor2a} in Proposition \ref{structure_cor} occurs. That is, $Y$ is smooth and $\pi$ is the blowing up along a smooth projective subvariety $W\subset Y$ of dimension $r-2$. Let $D_Y:=\pi(D)\subset Y$. Then $D_Y\simeq\mathbb{P}^{2r-1}$, and $W\subset D_Y$ is a linear subspace of dimension $r-2$ under the isomorphism $D_Y\simeq\mathbb{P}^{2r-1}$. Let $E\subset X$ be the exceptional divisor of $\pi$. Then $\pi^*D_Y=D+E$. We note that there exists a divisor $L_Y$ on $Y$ such that $\pi^*\mathcal{O}_Y(L_Y)\simeq\mathcal{O}_X(L+E)$ by Theorem \ref{cone} since $(E\cdot C)=-1$ and $(L\cdot C)=1$. Therefore \[ \mathcal{O}_Y(rL_Y)\simeq\mathcal{O}_Y(-(K_Y+D_Y)) \] by Theorem \ref{cone} since $\pi^*\mathcal{O}_Y(rL_Y)\simeq\mathcal{O}_X(rL+rE) \simeq\mathcal{O}_X(-(K_X+D)+rE)\simeq\mathcal{O}_X(-\pi^*K_Y-D-E)\simeq\pi^*\mathcal{O}_Y(-(K_Y+D_Y))$. \begin{claim}\label{cont_ample} $(Y, D_Y)$ is also a $2r$-dimensional log Fano manifold whose log Fano index is divisible by $r$. \end{claim} \begin{proof} It is enough to show that $L_Y$ is an ample divisor on $Y$. 
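Indeed, $Y$ is smooth, $D_Y\simeq\mathbb{P}^{2r-1}$ is a smooth divisor on $Y$, and we have already seen that $\mathcal{O}_Y(-(K_Y+D_Y))\simeq\mathcal{O}_Y(rL_Y)$; hence, once $L_Y$ is known to be ample, $(Y, D_Y)$ is a $2r$-dimensional log Fano manifold whose log Fano index is divisible by $r$.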
We know that $\mathbb{N}E(Y)$ is a closed convex cone since so is $\mathbb{N}E(X)$. Hence it is enough to show that $(L_Y\cdot C_Y)>0$ for any irreducible curve $C_Y\subset Y$. If $C_Y\not\subset W$, then, taking the strict transform $\widehat{C}_Y$ of $C_Y$ in $X$, we have $(L_Y\cdot C_Y)=(L\cdot\widehat{C}_Y)+(E\cdot\widehat{C}_Y)>0$. Hence it is enough to treat the case $C_Y\subset W$. We note that $W\subset D_Y$ and all curves in $D_Y$ are numerically proportional since $D_Y\simeq\mathbb{P}^{2r-1}$. Therefore we can reduce to the case $C_Y\not\subset W$. \end{proof} Since $D_Y\simeq\mathbb{P}^{2r-1}$, we have $\rho(Y)=1$ by Lemma \ref{rhoone} \eqref{rhoone2}. Therefore $Y\simeq\mathbb{P}^{2r}$ and $D_Y$ is a hyperplane under this isomorphism by \cite[Theorem 7.18]{fujitabook}. This is exactly the case in Example \ref{burouappu}. This completes the proof of Theorem \ref{mukai1}. \bigskip \noindent K.\ Fujita\\ Research Institute for Mathematical Sciences (RIMS), Kyoto University,\\ Oiwake-cho, Kitashirakawa, Sakyo-ku, Kyoto 606-8502, Japan\\ [email protected] \end{document}
\begin{document} \title[Shape derivatives II: Dielectric scattering]{Shape derivatives of boundary integral operators in electromagnetic scattering. Part II: Application to scattering by a homogeneous dielectric obstacle } \author{Martin Costabel} \address{IRMAR, Institut Math\'ematique, Universit\'e de Rennes 1, 35042 Rennes, France} \email{[email protected]} \author{Fr\'ed\'erique Le Lou\"er} \address{ Institut f\"ur Numerische und Andgewandte Mathematik, Universit\"at G\"ottingen, 37083 G\"ottingen, Germany} \email{[email protected] } \begin{abstract} We develop the shape derivative analysis of solutions to the problem of scattering of time-harmonic electromagnetic waves by a penetrable bounded obstacle. Since boundary integral equations are a classical tool to solve electromagnetic scattering problems, we study the shape differentiability properties of the standard electromagnetic boundary integral operators. The latter are typically bounded on the space of tangential vector fields of mixed regularity $\TT\HH\sp{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$. Using Helmholtz decomposition, we can base their analysis on the study of pseudo-differential integral operators in standard Sobolev spaces, but we then have to study the G\^ateaux differentiability of surface differential operators. We prove that the electromagnetic boundary integral operators are infinitely differentiable without loss of regularity. We also give a characterization of the first shape derivative of the solution of the dielectric scattering problem as a solution of a new electromagnetic scattering problem. \end{abstract} \keywords{Maxwell's equations, boundary integral operators, surface differential operators, shape derivatives, Helmholtz decomposition.} \date{} \maketitle \section{Introduction} Consider the scattering of time-harmonic electromagnetic waves by a bounded obstacle $\Omega$ in $\R^3$ with a smooth and simply connected boundary $\Gamma$ filled with an homogeneous dielectric material. This problem is described by the system of Maxwell's equations with piecewise constant electric permittivity and magnetic permeability, valid in the sense of distributions, which implies two transmission conditions on the boundary of the obstacle guaranteeing the continuity of the tangential components of the electric and magnetic fields across the interface. The transmission problem is completed by the Silver--M\"uller radiation condition at infinity (see \cite{Monk} and \cite{Nedelec}). Boundary integral equations are an efficient method to solve such problems for low and high frequencies. The dielectric scattering problem is usually reduced to a system of two boundary integral equations for two unknown tangential vector fields on the interface (see \cite{BuffaHiptmairPetersdorffSchwab} and \cite{Nedelec}). We refer to \cite{CostabelLeLouer2} for methods developed by the authors to solve this problem using a single boundary integral equation. Optimal shape design with a goal function involving the modulus of the far field pattern of the dielectric scattering problem has important applications, such as antenna design for telecommunication systems and radars. The analysis of shape optimization methods is based on the analysis of the dependency of the solution on the shape of the dielectric scatterer, and a local analysis involves the study of derivatives with respect to the shape. 
An explicit form of the shape derivatives is desirable in view of their implementation in shape optimization algorithms such as gradient methods or Newton's method. In this paper, we present a complete analysis of the shape differentiability of the solution of the dielectric scattering problem and of its far field pattern, using integral representations. Even if numerous works exist on the calculus of shape derivatives of various shape functionals \cite{DelfourZolesio, DelfourZolesio2,Hadamard, PierreHenrot, Zolesio}, in the framework of boundary integral equations the scientific literature is not extensive. However, one can cite the papers \cite{Potthast2}, \cite{Potthast1} and \cite{Potthast3}, where R.~Potthast has considered the question, starting with his PhD thesis \cite{Potthast4}, for the Helmholtz equation with Dirichlet or Neumann boundary conditions and the perfect conductor problem, in spaces of continuous and H\"older continuous functions. Using the integral representation of the solution, one is lead to study the G\^ateaux differentiability of boundary integral operators and potential operators with weakly singular and hypersingular kernels. The natural space of distributions (energy space) which occurs in the electromagnetic potential theory is $\TT\HH\sp{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$, the set of tangential vector fields whose components are in the Sobolev space $H\sp{-\frac{1}{2}}(\Gamma)$ and whose surface divergence is in $H\sp{-\frac{1}{2}}(\Gamma)$. We face two main difficulties: On one hand, the solution of the scattering problem is given in terms of products of boundary integral operators and their inverses. In order to be able to construct shape derivatives of such products, it is not sufficient to find shape derivatives of the boundary integral operators, but it is imperative to prove that the derivatives are bounded operators between the same spaces as the boundary integral operators themselves. On the other hand, the very definition of shape differentiability of operators defined on the shape-dependent space $\TT\HH\sp{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ poses non-trivial problems. Our strategy consists in using the Helmholtz decomposition of this Hilbert space which gives a representation of a tangential vector field in terms of (tangential derivatives of) two scalar potentials. In this way, we split the analysis into two steps: First the G\^ateaux differentiability analysis of scalar boundary integral operators and potential operators with strongly and weakly singular kernels, and second the study of shape derivatives of surface differential operators. This work contains results from the thesis \cite{FLL} where this analysis has been used to develop a shape optimization algorithm of dielectric lenses in order to obtain a prescribed radiation pattern. This is the second of two papers on shape derivatives of boundary integral operators, the first one \cite{CostabelLeLouer} being aimed at a general theory of shape derivatives of singular integral operators appearing in boundary integral equation methods. The paper is organized as follows: In Section \ref{BoundIntOp} we recall some standard results about trace mappings and regularity properties of the boundary integral operators in electromagnetism. In Section \ref{ScatProb} we define the scattering problem for time-harmonic electromagnetic waves at a dielectric interface. 
We then give an integral representation of the solution --- and of the quantity of interest, namely the far field of the dielectric scattering problem --- following the single source integral equation method developed in \cite{CostabelLeLouer2}. The remaining parts of the paper are dedicated to the shape differentiability analysis of the solution of the dielectric scattering problem. We use the results of our first paper \cite{CostabelLeLouer} on the G\^ateaux differentiability of boundary integral operators with pseudo-homogeneous kernels. We refer to this paper for a discussion of the notion of G\^ateaux derivatives in Fr\'echet spaces and of some of their basic properties. In Section \ref{Helmholtzdec} we discuss the difficulties posed by the shape dependency of the function space $\TT\HH\sp{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ on which the integral operators are defined, and we present a strategy for dealing with this difficulty, namely using the well-known tool \cite{delaBourdonnaye} of Helmholtz decomposition. In our approach, we map the variable spaces $\TT\HH\sp{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$ to a fixed reference space with a transformation that preserves the Hodge structure. This technique involves the analysis of surface differential operators that have to be considered in suitable Sobolev spaces. Therefore in Section \eqref{SurfDiffOp} we recall and extend the results on the differentiability properties of surface differential operators established in \cite[Section 5]{CostabelLeLouer}. Using the rules on derivatives of composite and inverse functions, we obtain in Section \ref{ShapeSol} the shape differentiability properties of the solution of the scattering problem. More precisely, we prove that the boundary integral operators are infinitely G\^ateaux differentiable without loss of regularity, whereas previous results allowed such a loss \cite{Potthast3}, and we prove that the shape derivatives of the potentials are smooth away from the boundary but they lose regularity in the neighborhood of the boundary. This implies that the far field is infinitely G\^ateaux differentiable, whereas the shape derivatives of the solution of the scattering problem lose regularity. These new results generalize existing results: In the acoustic case, using a variational formulation, a characterization of the first G\^ateaux derivative was given by A. Kirsch \cite{Kirsch} for the Dirichlet problem and then by Hettlich \cite{Hettlich, HettlichErra} for the impedance problem and the transmission problem. An alternative technique was introduced by Kress and P\"aiv\"arinta in \cite{KressPaivarinta} to investigate Fr\'echet differentiability in acoustic scattering by the use of a factorization of the difference of the far-field pattern of the scattered wave for two different obstacles. In the electromagnetic case, Potthast used the integral equation method to obtain a characterization of the first shape derivative of the solution of the perfect conductor scattering problem. In \cite{Kress}, Kress improved this result by using a far-field identity and in \cite{HadarKress} Kress and Haddar extended this technique to acoustic and electromagnetic impedance boundary value problems. At the end of Section \ref{ShapeSol} we obtain a characterization of the first shape derivative of the solution of the dielectric scattering problem as the solution of a new electromagnetic transmission problem. 
We show, by differentiating the integral representation of the solution, that the first derivative satisfies the homogeneous Maxwell equations, and by directly differentiating the boundary values of the solution itself we see that the first derivative satisfies two new transmission conditions on the boundary. In the end we will have obtained two different algorithms for computing the shape derivative of the solution of the dielectric scattering problem and of the far field pattern: a first one by differentiating the integral representations and a second one by solving the new transmission problem associated with the first derivative. The characterization of the derivatives as solutions to boundary value problems has been obtained in the acoustic case by Kress \cite{ColtonKress}, Kirsch \cite{Kirsch}, Hettlich and Rundell \cite{HettlichRundell} and Hohage \cite{Hohage} and has been used for the construction of Newton-type or second degree iterative methods in acoustic inverse obstacle scattering. Whereas the use of these characterizations requires high-order regularity assumptions on the boundary, we expect that the differentiation of the boundary integral operators does not require much regularity. Although in this paper we treat the case of a smooth boundary, in the last section we give some ideas on possible extensions of the results of this paper to non-smooth domains. A final remark on the terminology of derivatives with respect to the variation of the domain: For the solutions of boundary value problems with a moving boundary, in the context of continuum mechanics one frequently distinguishes between \emph{material derivatives} and \emph{shape derivatives}. The former correspond to the G\^ateaux derivative with respect to the deformation, which is then interpreted as a flow associated with a velocity field, when the solution is pulled back to a fixed undeformed reference domain (Lagrangian coordinates). Depending on the interpretation of the flow, other names are used, such as ``Lagrangian derivative'' or ``substantial derivative''. The shape derivatives, on the other hand, correspond to the G\^ateaux derivative of the solution in the deformed domain (Eulerian coordinates). One also talks about the ``Eulerian derivative''. The difference between the two has the form of a convection term, which can lead to a loss of one order of regularity of the shape derivative on the support of the velocity field \cite{LeugeringetalAMOptim11}. In the context of this terminology, the derivatives of the boundary integral operators that we studied in Part I of this paper \cite{CostabelLeLouer} for general pseudo-homogeneous kernels, and in this paper for the boundary integral operators of electromagnetism, correspond to the Lagrangian point of view, because they are obtained by pull-back to a fixed reference boundary. However, the derivative of the solution of the dielectric scattering problem that we construct in Section \ref{ShapeSol} -- with the help of the derivatives of the boundary integral operators -- is the Eulerian shape derivative. We also observe the loss of one order of regularity of the shape derivative with respect to the solution of the transmission problem. Finally, for the far field, the two notions of derivative coincide, because the deformation has compact support. \section{Boundary integral operators and their main properties} \label{BoundIntOp} Let $\Omega$ be a bounded domain in $\R^{3}$ and let $\Omega^c$ denote the exterior domain $\R^3\backslash\overline{\Omega}$.
Throughout this paper, we will for simplicity assume that the boundary $\Gamma$ of $\Omega$ is a smooth and simply connected closed surface, so that $\Omega$ is diffeomorphic to a ball. We use standard notation for surface differential operators and boundary traces. More details can be found in \cite{Nedelec}. For a vector function $\vv\in\mathscr{C}^k(\R^3,\C^3)$ with $k\in\N^*$, we denote by $[\nabla\vv]$ the matrix the $i$-th column of which is the gradient of the $i$-th component of $\vv$, and we set $[\D\vv]=\transposee{[\nabla\vv]}$. Let $\nn$ denote the outer unit normal vector on the boundary $\Gamma$. The tangential gradient of a complex-valued scalar function $u\in\mathscr{C}^k(\Gamma,\C)$ is defined by \begin{equation}\label{G} \nabla_{\Gamma}u=\nabla\tilde{u}_{|\Gamma}-\left(\nabla\tilde{u}_{|\Gamma}\cdot\nn\right)\nn, \end{equation} and the tangential vector curl is defined by \begin{equation}\label{RR} \operatorname{\mathbf{curl}}_{\Gamma}u=\nabla\tilde{u}_{|\Gamma}\wedge\nn, \end{equation} where $\tilde{u}$ is a smooth extension of $u$ to the whole space $\R^3$. For a complex-valued vector function $\uu\in\mathscr{C}^k(\Gamma,\C^3)$, we denote by $[\nabla_{\Gamma}\uu]$ the matrix the $i$-th column of which is the tangential gradient of the $i$-th component of $\uu$ and we set $[\D_{\Gamma}\uu]=\transposee{[\nabla_{\Gamma}\uu]}$. The surface divergence of $\uu\in\mathscr{C}^k(\Gamma,\C^3)$ is defined by \begin{equation}\label{e:D} \operatorname{\mathrm{div}}_{\Gamma}\uu=\operatorname{\mathrm{div}}\tilde{\uu}_{|\Gamma}-\left([\nabla\tilde{\uu}_{|\Gamma}]\nn\cdot\nn\right),\end{equation} and the surface scalar curl is defined by \begin{equation*}\label{R} \operatorname{\mathrm{curl}}_{\Gamma}\uu=\nn\cdot\left(\operatorname{\mathbf{curl}}\tilde{\uu}\right)_{|\Gamma}\,. \end{equation*} These definitions do not depend on the choice of the extension $\tilde{\uu}$. \begin{definition}\label{2.1} For a vector function $\vv\in \mathscr{C}^{\infty}(\overline{\Omega},\C^3)$, a scalar function $v\in\mathscr{C}^{\infty}(\overline{\Omega},\C)$ and $\kappa\in\C\setminus\{0\}$, we define the traces: $$\gamma v=v_{|_{\Gamma}} ,$$ $$\gamma_{n} v=\frac{\partial}{\partial\nn}v=\nn\cdot(\nabla v)_{|_{\Gamma}},$$ $$\gamma_{D}\vv:=\nn\wedge (\vv)_{|_{\Gamma}}\textrm{ (Dirichlet),}$$ $$\gamma_{N_{\kappa}}\vv:=\dfrac{1}{\kappa}\nn\wedge(\operatorname{\mathbf{curl}} \vv)_{|_{\Gamma}}\textrm{ (Neumann).}$$ \end{definition} We define in the same way the exterior traces $\gamma^c$, $\gamma_{n}^c$, $\gamma_{D}^c$ and $\gamma_{N_{\kappa}}^c$. For a domain $G\subset\R^3$ we denote by $H^s(G)$ the usual $L^2$-based Sobolev space of order $s\in\R$, and by $H^s_{\mathrm{loc}}(\overline G)$ the space of functions whose restrictions to any bounded subdomain $B$ of $G$ belong to $H^s(B)$. Spaces of vector functions will be denoted by boldface letters, thus $$ \HH^s(G)=(H^s(G))^3\,. $$ If $\D$ is a differential operator, we write: \begin{eqnarray*} \HH^s(\D,\Omega)& = &\{ u \in \HH^s(\Omega) : \D u \in \HH^s(\Omega)\}\\ \HH^s_{\mathrm{loc}}(\D,\overline{\Omega^c})&=& \{ u \in \HH^s_{\mathrm{loc}}(\overline{\Omega^c}) : \D u \in \HH^s_{\mathrm{loc}}(\overline{\Omega^c}) \}. \end{eqnarray*} The space $\HH^s(\D, \Omega)$ is endowed with the natural graph norm. When $s=0$, this defines in particular the Hilbert spaces $\HH(\operatorname{\mathbf{curl}},\Omega)$ and $\HH(\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}},\Omega)$.
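Here, as is customary, the graph norm is given by
$$
||\uu||_{\HH^s(\D,\Omega)}=\left(||\uu||_{\HH^s(\Omega)}^{2}+||\D\uu||_{\HH^s(\Omega)}^{2}\right)^{\frac12},
$$
and the analogous expression is used on bounded subdomains for the local spaces $\HH^s_{\mathrm{loc}}(\D,\overline{\Omega^c})$.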
We introduce the Hilbert spaces $H^s(\Gamma)=\gamma\left(H^{s+\frac{1}{2}}(\Omega)\right)$ and $\TT\HH^{s}(\Gamma)=\gamma_{D}\left(\HH^{s+\frac{1}{2}}(\Omega)\right)$. For $s>0$, the trace mappings $$\gamma:H^{s+\frac{1}{2}}(\Omega)\rightarrow H^{s}(\Gamma),$$ $$\gamma_{n}:H^{s+\frac{3}{2}}(\Omega)\rightarrow H^{s}(\Gamma),$$ $$\gamma_{D}:\HH^{s+\frac{1}{2}}(\Omega)\rightarrow \TT\HH^{s}(\Gamma) $$ are then continuous. The duals of $H^s(\Gamma)$ and $\TT\HH^{s}(\Gamma)$ with respect to the $L^2$ (or $\LL^2$) scalar product are denoted by $H^{-s}(\Gamma)$ and $\TT\HH^{-s}(\Gamma)$, respectively. The surface differential operators defined above can be extended to Sobolev spaces: For $s\in\R$ the tangential gradient and the tangential vector curl are obviously linear and continuous operators from $H^{s+1}(\Gamma)$ to $\TT\HH^s(\Gamma)$. The surface divergence and the surface scalar curl can then be defined on tangential vector fields by duality, extending the duality relations valid for smooth functions \begin{equation}\label{dualgrad} \int_{\Gamma}(\operatorname{\mathrm{div}}_{\Gamma}\jj)\cdot\varphi\,ds=-\int_{\Gamma}\jj\cdot\nabla_{\Gamma}\varphi\,ds \qquad \text{for all }\jj\in\TT\HH^{s+1}(\Gamma),\;\varphi\in H^{-s}(\Gamma), \end{equation} \begin{equation}\label{dualrot} \int_{\Gamma}(\operatorname{\mathrm{curl}}_{\Gamma}\jj)\cdot\varphi\,ds=\int_{\Gamma}\jj\cdot\operatorname{\mathbf{curl}}_{\Gamma}\varphi\,ds \qquad\text{for all }\jj\in\TT\HH^{s+1}(\Gamma),\;\varphi\in H^{-s}(\Gamma). \end{equation} We have the following equalities: \begin{equation}\label{eqd2} \operatorname{\mathrm{curl}}_{\Gamma}\nabla_{\Gamma}=0\text{ and }\operatorname{\mathrm{div}}_{\Gamma}\operatorname{\mathbf{curl}}_{\Gamma}=0, \end{equation} \begin{equation}\label{eqd3} \operatorname{\mathrm{div}}_{\Gamma}(\nn\wedge \jj)=-\operatorname{\mathrm{curl}}_{\Gamma}\jj\text{ and }\operatorname{\mathrm{curl}}_{\Gamma}(\nn\wedge \jj)=\operatorname{\mathrm{div}}_{\Gamma}\jj.\end{equation} \begin{definition} Let $s\in\R$. We define the Hilbert space $$ \TT\HH^{s}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)=\left\{ \jj\in\TT\HH^{s}(\Gamma) \,;\, \operatorname{\mathrm{div}}_{\Gamma}\jj \in H^{s}(\Gamma)\right\} $$ endowed with the norm $$ ||\cdot||_{\TT\HH^{s}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)}= \left(||\cdot||_{\TT\HH^{s}(\Gamma)}^{2}+||\operatorname{\mathrm{div}}_{\Gamma}\cdot||_{H^{s}(\Gamma)}^{2}\right)^{\frac12}. $$ \end{definition} \begin{lemma}\label{2.3} The operators $\gamma_{D}$ and $\gamma_{N_{\kappa}}$ are linear and continuous from $\mathscr{C}^{\infty}(\overline{\Omega},\C^3)$ to $\TT\LL^2(\Gamma)$ and they can be extended to continuous linear operators from $\HH(\operatorname{\mathbf{curl}},\Omega)$ and $\HH(\operatorname{\mathbf{curl}},\Omega)\cap\HH(\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}},\Omega)$, respectively, to $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$. \end{lemma} For $\uu\in \HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega^c})$ and $\vv \in \HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}},\overline{\Omega^c})$ we define $\gamma_{D}^c\uu $ and $\gamma_{N_{\kappa}}^c\vv $ in the same way, and the same mapping properties hold true. Recall that we assume that the boundary $\Gamma$ is smooth and topologically trivial. For a proof of the following result, we refer to \cite{BuffaCiarlet,MartC,Nedelec}. \begin{lemma}\label{LapBel} Let $s\in\R$.
The Laplace--Beltrami operator defined by \begin{equation}\label{eqd1} \Delta_{\Gamma}u=\operatorname{\mathrm{div}}_{\Gamma}\nabla_{\Gamma}u=-\operatorname{\mathrm{curl}}_{\Gamma}\operatorname{\mathbf{curl}}_{\Gamma}u \end{equation} is linear and continuous from $H^{s+2}(\Gamma)$ to $H^s(\Gamma)$. For $f\in H^s(\Gamma)$ and $u\in H^{s+2}(\Gamma)$, the equation $\Delta_{\Gamma}u=f$ has the equivalent formulation \begin{equation}\label{ippLB} \int_{\Gamma}\nabla_{\Gamma}u\cdot\nabla_{\Gamma}\varphi\,ds = -\int_{\Gamma}f\cdot\varphi\,ds,\qquad\text{ for all }\varphi\in H^{-s}(\Gamma). \end{equation} The operator $\Delta_{\Gamma}:H^{s+2}(\Gamma)\to H^{s}(\Gamma)$ is Fredholm of index zero, its kernel and cokernel consisting of constant functions, so that $\Delta_{\Gamma}:H^{s+2}(\Gamma)\slash\C\to H^{s}_{*}(\Gamma)$ is an isomorphism. Here we define the space $H^s_{*}(\Gamma)$ by $$ f\in H^s_{*}(\Gamma)\quad\Longleftrightarrow\quad f\in H^s(\Gamma)\textrm{ and }\int_{\Gamma}f\,ds=0. $$ For $f\in H^s_{*}(\Gamma)$ we denote the unique solution $u\in H^{s+2}(\Gamma)\slash\C$ of \eqref{ippLB} by $u=\Delta_{\Gamma}^{-1}f$. \end{lemma} This result is due to the injectivity of the operator $\nabla_{\Gamma}$ from $H^{s+2}(\Gamma)\slash\C$ to $\TT\HH^{s+1}(\Gamma)$, the Lax-Milgram lemma applied to \eqref{ippLB} for $s=-1$, and standard elliptic regularity theory. Note that $\operatorname{\mathbf{curl}}_{\Gamma}$ is also injective from $H^{s+2}(\Gamma)\slash\C$ to $\TT\HH^{s+1}(\Gamma)$, and by duality both $\operatorname{\mathrm{div}}_{\Gamma}$ and $\operatorname{\mathrm{curl}}_{\Gamma}$ are surjective from $\TT\HH^{s+1}(\Gamma)$ to $H^s_{*}(\Gamma)$. Notice that $\operatorname{\mathrm{curl}}_{\Gamma}$ is defined in a natural way on all of $\HH^{s+1}(\Gamma)$ and maps to $H^s_{*}(\Gamma)$, because we have $\operatorname{\mathrm{curl}}_{\Gamma}(\varphi\nn)=0$ for any scalar function $\varphi\in H^{s+1}(\Gamma)$. Thus \eqref{dualrot} is still valid for a not necessarily tangential density $\jj\in\HH^{s+1}(\Gamma)$. An analogous property for $\operatorname{\mathrm{div}}_{\Gamma}$ defined by \eqref{e:D} is not available. We now recall some well known results about electromagnetic potentials. Details can be found in \cite{BuffaCiarlet, BuffaCostabelSchwab, BuffaCostabelSheen,BuffaHiptmairPetersdorffSchwab,HsiaoWendland,Nedelec}. Let $\kappa$ be a positive real number and let $G_{a}(\kappa,|x-y|)=\dfrac{e^{i\kappa|x-y|}}{4\pi| x-y|} $ be the fundamental solution of the Helmholtz equation $ {\Delta u + \kappa^2u =0}. $ The single layer potential ${\mathbb P}si_{\kappa}$ is given by $$ {\mathbb P}si_{\kappa}u(x) = \displaystyle{\int_{\Gamma}G_{a}(\kappa,|x-y|)u(y) ds(y)}\qquad x \in\R^3\backslash\Gamma, $$ and its trace by $$ V_{\kappa}u(x)= \int_{\Gamma}G_{a}(\kappa,|x-y|)u(y) ds(y)\qquad x \in\Gamma. $$ As discussed in the first part of this paper \cite{CostabelLeLouer}, the fundamental solution is pseudo-homogeneous of class $-1$. The single layer potential ${\mathbb P}si_{\kappa}u$ is continuous across the boundary $\Gamma$. As a consequence we have the following result: \begin{lemma} \label{3.1} Let $s\in\R$.
The operators $$ \begin{array}{ll} {\mathbb P}si_{\kappa} & : H^{s-\frac{1}{2}}(\Gamma)\rightarrow H^{s+1}(\Omega)\cap H^{s+1}_{\mathrm{loc}}(\overline{\Omega^{c}}) \; \Big(H^{s-\frac{1}{2}}(\Gamma)\rightarrow H^{s+1}_{\mathrm{loc}}(\R^3)\text{ if }s<\frac12\Big) \\[1ex] V_{\kappa}& : H^{s-\frac{1}{2}}(\Gamma)\rightarrow H^{s+\frac{1}{2}}(\Gamma) \end{array} $$ are continuous. \end{lemma} The electric potential operator ${\mathbb P}si_{E_{\kappa}}$ is defined for $\jj\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma) $ by $$ {\mathbb P}si_{E_{\kappa}}\jj := \kappa\,{\mathbb P}si_{\kappa}\,\jj + \dfrac{1}{\kappa}\nabla{\mathbb P}si_{\kappa}\operatorname{\mathrm{div}}_{\Gamma}\jj \,. $$ In $\R^{3}\setminus\Gamma$, this can be written as ${\mathbb P}si_{E_{\kappa}}\jj := \dfrac{1}{\kappa}\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}{\mathbb P}si_{\kappa}\jj$ because of the Helmholtz equation and the identity $\strut\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} = -\Delta +\nabla\operatorname{\mathrm{div}}$. The magnetic potential operator ${\mathbb P}si_{M_{\kappa}}$ is defined for $\mm\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma) $ by $$ {\mathbb P}si_{M_{\kappa}}\mm := \operatorname{\mathbf{curl}}{\mathbb P}si_{\kappa}\mm. $$ We denote the identity operator by $\Id$. \begin{lemma} \label{3.2} The potential operators ${\mathbb P}si_{E_{\kappa}}$ and ${\mathbb P}si_{M_{\kappa}}$ are continuous from $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ to $\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\R^3)$. For $\jj,\mm\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ we have $$ (\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} -\kappa^2\Id){\mathbb P}si_{E_{\kappa}}\jj = 0\textrm{ and }(\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} -\kappa^2\Id){\mathbb P}si_{M_{\kappa}}\mm = 0\textrm{ in }\R^3\backslash\Gamma, $$ and ${\mathbb P}si_{E_{\kappa}}\jj$ and ${\mathbb P}si_{M_{\kappa}}\mm$ satisfy the Silver-M\"uller condition. \end{lemma} We define the electric and the magnetic far field operators for a density $\jj$ and an element $\hat{x}$ of the unit sphere $S^2$ of $\R^3$ by \begin{equation}\label{FF} \begin{split} {\mathbb P}si_{E_{\kappa}}^{\infty}\,\jj(\hat{x})=&\;\kappa\;\hat{x}\wedge\left(\int_{\Gamma}e^{-i\kappa\hat{x}\cdot y}\jj(y)ds(y)\right)\wedge\hat{x},\\{\mathbb P}si_{M_{\kappa}}^{\infty}\,\jj(\hat{x})=&\;i\kappa\;\hat{x}\wedge\left(\int_{\Gamma}e^{-i\kappa\hat{x}\cdot y}\jj(y)ds(y)\right). \end{split} \end{equation} These operators are bounded from $\TT\HH^{s}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ to $\TT\LL^2(S^2)=\{\hh\in\LL^2(S^2);\;\hh(\hat{x})\cdot\hat{x}=0\}$, for all $s\in\R$.
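For orientation, let us note where the far field operators come from: they encode the leading large-distance behaviour of the corresponding potentials. Indeed, one has the classical expansion
$$
G_{a}(\kappa,|x-y|)=\frac{e^{i\kappa|x|}}{4\pi|x|}\Big(e^{-i\kappa\hat{x}\cdot y}+O\big(|x|^{-1}\big)\Big)\qquad\text{ as }|x|\rightarrow\infty,
$$
uniformly for $y$ on the bounded surface $\Gamma$, where $\hat{x}=x/|x|$. Inserting this expansion into the potentials ${\mathbb P}si_{E_{\kappa}}$ and ${\mathbb P}si_{M_{\kappa}}$ and keeping the leading term, one can check that the expressions \eqref{FF} are obtained, up to the normalization used for the far field pattern in Section \ref{ScatProb}.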
We can now define the main boundary integral operators: \begin{align} C_{\kappa}\jj(x)&=-\dfrac{1}{\kappa}\displaystyle{\int_{\Gamma}\nn(x)\wedge\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}^x\{G_{a}(\kappa,|x-y|)\jj(y)\}ds(y)}\nonumber\\ \label{Ck} &=\left(-\kappa\;\;\nn\wedge V_{\kappa}\,\jj+\dfrac{1}{\kappa}\operatorname{\mathbf{curl}}_{\Gamma}V_{\kappa}\operatorname{\mathrm{div}}_{\Gamma}\jj\right)(x),\\ \intertext{\quad\; and} M_{\kappa}\jj(x)&=-\displaystyle{\int_{\Gamma}\nn(x)\wedge\operatorname{\mathbf{curl}}^x\{G_{a}(\kappa,|x-y|)\jj(y)\}ds(y)}\nonumber\\ \label{MkDkBk} &=\;\;({D_{\kappa}}\,\jj-{B_{\kappa}}\,\jj)(x),\\ \intertext{\quad\; with} {B_{\kappa}}\,\jj(x)&=\displaystyle{\int_{\Gamma}\nabla^xG_{a}(\kappa,|x-y|)\left(\jj(y)\cdot\nn(x)\right)ds(y),}\nonumber\\ {D_{\kappa}}\,\jj(x)&=\displaystyle{\int_{\Gamma}\left(\nabla^xG_{a}(\kappa,|x-y|)\cdot\nn(x)\right)\jj(y)ds(y).}\nonumber \end{align} The operators $M_{\kappa}$ and $C_{\kappa}$ are bounded operators from $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ to itself. \section{The dielectric scattering problem} \label{ScatProb} We consider the scattering of time-harmonic waves at a fixed frequency $\omega$ by a three-dimensional bounded and non-conducting homogeneous dielectric obstacle represented by the domain $\Omega$. The electric permittivity $\epsilon$ and the magnetic permeability $\mu$ are assumed to take constant positive real values in $\Omega$ and $\Omega^c$. Thus they will be discontinuous across the interface $\Gamma$, in general. The wave number is given by $\kappa=\omega\sqrt{\mu\epsilon}$. We distinguish the dielectric quantities related to the interior domain $\Omega$ by the index $i$ and to the exterior domain $\Omega^c$ by the index $e$; in particular, $\kappa_{i}=\omega\sqrt{\mu_{i}\epsilon_{i}}$ and $\kappa_{e}=\omega\sqrt{\mu_{e}\epsilon_{e}}$. The time-harmonic Maxwell system can be reduced to second order equations for the electric field only. The time-harmonic dielectric scattering problem is then formulated as follows. \textbf{The dielectric scattering problem:} Consider the scattering of a given incident electric wave $\EE^{inc}\in\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\R^3)$ that satisfies $\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} \EE^{inc} - \kappa_{e}^2\EE^{inc} =0$ in a neighborhood of $\overline{\Omega}$. The interior electric field $\EE^{i}\in \HH(\operatorname{\mathbf{curl}},\Omega)$ and the exterior electric scattered field $\EE^{s}\in \HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega^c})$ satisfy the time-harmonic Maxwell equations \begin{eqnarray} \label{(1.2a)} \operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} \EE^{i} - \kappa_{i}^2\EE^{i}& = 0&\text{ in }\Omega, \\ \label{(1.2b)} \operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} \EE^{s} - \kappa_{e}^2\EE^{s} &= 0&\text{ in }\Omega^c, \end{eqnarray} the two transmission conditions, \begin{eqnarray} \label{T1}&\,\,\;\;\nn\wedge \EE^{i}=\nn\wedge( \EE^{s}+\EE^{inc})&\qquad\text{ on }\Gamma \\ \label{T2} &\mu_{i}^{-1}(\nn\wedge\operatorname{\mathbf{curl}} \EE^{i}) = \mu_{e}^{-1}\nn\wedge\operatorname{\mathbf{curl}}(\EE^{s}+\EE^{inc})&\qquad\text{ on }\Gamma \end{eqnarray} and the Silver-M\"uller radiation condition: \begin{equation}\label{T3} \lim_{|x|\rightarrow+\infty}|x|\left| \operatorname{\mathbf{curl}} \EE^{s}(x)\wedge\frac{x}{| x |}- i\kappa_{e}\EE^{s}(x) \right| =0. \end{equation} It is well known that the problem \eqref{(1.2a)}-\eqref{T3} admits a unique solution for any positive real values of the exterior wave number $\kappa_{e}$.
We refer the reader to \cite{BuffaHiptmairPetersdorffSchwab, CostabelLeLouer2,MartinOla} for a proof via boundary integral equation methods. To analyze the dependency of the solution on the shape of the scatterer $\Omega$, we will use an integral representation of the solution, obtained by the single boundary integral equation method developed by the authors in \cite{CostabelLeLouer2}. It is based on the layer ansatz for the exterior electric field $\EE^s$: \begin{equation} \label{u2c} \EE^{s} = -{{\mathbb P}si_{E_{\kappa_{e}}}}\jj -i\eta{{\mathbb P}si_{M_{\kappa_{e}}}}{C_{0}^*}\jj\quad \text{ in }\R^3\setminus\overline{\Omega} \end{equation} where $\eta$ is a positive real number, $\jj\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ and the operator ${C}_{0}^{*}$ is defined for $\jj\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ by $$ C_{0}^*\jj=-\nn\wedge V_{0}\,\jj-\operatorname{\mathbf{curl}}_{\Gamma}V_{0}\operatorname{\mathrm{div}}_{\Gamma}\jj. $$ Thanks to the transmission conditions and the Stratton-Chu formula, we have the integral representation of the interior field \begin{equation} \label{u12} \EE^{i} =-\frac{1}{\rho}({\mathbb P}si_{E_{\kappa_{i}}}\{\gamma_{N_{\kappa_{e}}}^c\EE^{inc} +N_{e}\jj\}) - ({\mathbb P}si_{M_{\kappa_{i}}}\{\gamma_{D}^c\EE^{inc} + L_{e}\jj\})\text{ in }\Omega\end{equation} where $\rho=\dfrac{\kappa_{i}\mu_{e}}{\kappa_{e}\mu_{i}}$ and $$ L_{e}={C_{\kappa_{e}}}+i\eta\left(-\frac{1}{2}\Id+{M_{\kappa_{e}}}\right){C_{0}^*},$$ $$N_{e}=\left(-\frac{1}{2}\Id+{M_{\kappa_{e}}}\right)+i\eta {C_{\kappa_{e}}}{C_{0}^*}.$$ The exterior Dirichlet trace applied to the right-hand side of \eqref{u12} vanishes. The density $\jj$ then solves the following boundary integral equation \begin{equation} \label{SS} {\boldsymbol{\mathsf S}} \jj \equiv \rho\left(-\tfrac{1}{2}\Id + M_{\kappa_{i}}\right)L_{e}\jj + C_{\kappa_{i}}N_{e}\jj = -\rho\left(-\tfrac{1}{2}\Id + M_{\kappa_{i}}\right)\gamma_{D}\EE^{inc}+ C_{\kappa_{i}}\gamma_{N_{\kappa_{e}}}\EE^{inc}. \end{equation} \begin{theorem} \label{thT} The operator $\boldsymbol{\mathsf S}$ from $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ to itself is linear, bounded and invertible. Moreover, given the electric incident field $\EE^{inc}\!\in\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\R^3)$, the integral representations {\rm \eqref{u2c}, \eqref{u12}} of $\EE^{i}$ and $\EE^{s}$ give the unique solution of the dielectric scattering problem for all positive real values of the constants $\mu_{i}$, $\mu_{e}$, $\epsilon_{i}$ and $\epsilon_{e}$. \end{theorem} An important quantity that is of interest in many shape optimization problems is the far field pattern of the electric field, defined on the unit sphere of $\R^3$ by $$ \EE^{\infty}(\hat{x})=\lim_{|x|\rightarrow\infty}4\pi|x|\dfrac{\EE^s(x)}{e^{i\kappa_{e}|x|}},\qquad \text{ with }\dfrac{x}{|x|}=\hat{x}. $$ We have $\EE^{\infty}\in\TT\LL^2(S^2)\cap\mathscr{C}^{\infty}(S^2,\C^3)$. To obtain the integral representation of the far field $\EE^{\infty}$ of the solution, it suffices to replace in \eqref{u2c} the potential operators ${\mathbb P}si_{E_{\kappa_{e}}}$ and ${\mathbb P}si_{M_{\kappa_{e}}}$ by the far field operators ${\mathbb P}si_{E_{\kappa_{e}}}^{\infty}$ and ${\mathbb P}si_{M_{\kappa_{e}}}^{\infty}$ defined in \eqref{FF}, respectively.
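Explicitly, with the density $\jj$ solving \eqref{SS}, this yields the representation
$$
\EE^{\infty}(\hat{x}) = \left(-{\mathbb P}si_{E_{\kappa_{e}}}^{\infty}\jj - i\eta\,{\mathbb P}si_{M_{\kappa_{e}}}^{\infty}C_{0}^{*}\jj\right)(\hat{x}),\qquad \hat{x}\in S^2,
$$
which is the formula whose dependence on the shape of $\Gamma$ is analyzed in the following sections.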
In the method we have described, the solution $\EE=(\EE^{i},\EE^{s})$ and the far field $\EE^{\infty}$ are constructed from operators defined by integrals on the boundary $\Gamma$ and from the incident field. For a fixed incident field and fixed constants $\kappa_{i}$, $\kappa_{e}$, $\mu_{i}$, $\mu_{e}$, these quantities therefore depend only on the geometry of the boundary $\Gamma$ of the scatterer $\Omega$. In the sequel we analyze the $\Gamma$-dependence of the solution following the definition of shape derivatives and the notation of Section 4 of the paper \cite{CostabelLeLouer}. \section{Shape dependence via Helmholtz decomposition}\label{Helmholtzdec} Let us fix a reference domain $\Omega$. We consider variations of $\Omega$ generated by transformations of the form $x\mapsto x+r(x)$ of points $x\in\R^3$, where $r$ is a smooth vector function defined on $\Gamma$. This transformation deforms the domain $\Omega$ into a domain $\Omega_{r}$ with boundary $\Gamma_{r}$. The functions $r$ are assumed to belong to the Fr\'echet space $\mathscr{C}^{\infty}(\Gamma,\R^3)$. For $\varepsilon>0$ and some metric $d_{\infty}$ on $\mathscr{C}^{\infty}(\Gamma,\R^3)$, we set $$ B^{\infty}(0,\varepsilon)= \left\{r\in\mathscr{C}^{\infty}(\Gamma,\R^3),\; d_{\infty}(0,r)<\varepsilon\right\}. $$ In the following, we choose $\varepsilon$ small enough so that for any $r\in B^{\infty}(0,\varepsilon)$, $(\Id+r)$ is a diffeomorphism from $\Gamma$ to $\Gamma_{r}=(\Id+r)\Gamma=\left\{x_{r}=x+r(x); x\in\Gamma\right\}$. The aim of this paper is to study the shape differentiability, that is, the G\^ateaux differentiability with respect to $r$, of the functionals mapping $r$ to the solution $$ (\mathscr{E}^{i}(r),\mathscr{E}^{s}(r))=\left(\EE^i(\Gamma_{r}),\EE^s(\Gamma_{r})\right) $$ of the dielectric scattering problem with obstacle $\Omega_{r}$, and to the far field $\mathscr{E}^{\infty}(r)=\EE^\infty(\Gamma_{r})$. In the following, we use the superscript $r$ for integral operators and trace mappings pertaining to $\Gamma_{r}$, while functions defined on $\Gamma_{r}$ will often carry a subscript $r$. According to the boundary integral equation method described in the previous section, we have \begin{equation} \mathscr{E}^s(r)= \left(-{\mathbb P}si_{E_{\kappa_{e}}}^r-i\eta{\mathbb P}si_{M_{\kappa_{e}}}^rC_{0}^{*,r}\right)\jj_{r}\qquad\text{ in }\Omega_{r}^c=\R^3\backslash\overline{\Omega_{r}}, \end{equation} where $\jj_{r}$ solves the integral equation $$ \boldsymbol{\mathsf S}^r\jj_{r}=-\rho\left(-\frac{1}{2}\Id+M_{\kappa_{i}}^r\right)\gamma_{D}^r\EE^{inc}-C_{\kappa_{i}}^r\gamma_{N_{\kappa_{e}}}^r\EE^{inc}, $$ and \begin{equation}\label{risr2} \mathscr{E}^{i}(r) = -\frac{1}{\rho}{\mathbb P}si_{E_{\kappa_{i}}}^r\gamma_{N_{\kappa_{e}}}^{c,r}\mathscr{E}^{tot}(r) - {\mathbb P}si_{M_{\kappa_{i}}}^r\gamma_{D}^{c,r}\mathscr{E}^{tot}(r)\qquad\text{ in }\Omega_{r}, \end{equation} with \begin{equation}\label{risr1} \mathscr{E}^{tot}(r)=\EE^{inc}+\mathscr{E}^s(r)\,. \end{equation} The far field pattern of the dielectric scattering problem by the interface $\Gamma_{r}$ is $$ \mathscr{E}^{\infty}(r)= \left(-{\mathbb P}si^{\infty,r}_{E_{\kappa_{e}}}-i\eta{\mathbb P}si_{M_{\kappa_{e}}}^{\infty,r}C_{0}^{*,r}\right)\jj_{r}. $$ As defined in \eqref{SS}, the operator $\boldsymbol{\mathsf S}^r$ is composed of the operators $C_{\kappa_{e}}^r$, $M_{\kappa_{e}}^r$, $C_{\kappa_{i}}^r$ and $M_{\kappa_{i}}^r$, which are all bounded operators on the space $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$.
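Let us briefly recall the notion of derivative that we use (see \cite{CostabelLeLouer, Schwartz} for the precise framework in Fr\'echet spaces): for a mapping $F$ defined on $B^{\infty}(0,\varepsilon)$ with values in a normed space, the first G\^ateaux derivative at $r$ in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is
$$
dF[r,\xi]=\lim_{t\rightarrow 0}\frac{F(r+t\xi)-F(r)}{t},
$$
whenever this limit exists, and higher order derivatives are defined iteratively; $\mathscr{C}^{\infty}$-G\^ateaux differentiability requires in addition suitable continuity properties of these derivatives, for which we refer to \cite{CostabelLeLouer}.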
Therefore we have to study the G\^ateaux differentiability of the following mappings on $B^{\infty}(0,\varepsilon)$: $$ \begin{array}{ll} r\mapsto M_{\kappa}^r, C_{\kappa}^r\in\mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})) \\ r\mapsto {\mathbb P}si_{M_{\kappa}}^r,{\mathbb P}si_{E_{\kappa}}^r\in\mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r}),\HH(\operatorname{\mathbf{curl}},\Omega_{r})\cup\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega_{r}^c})) \\ r\mapsto {\mathbb P}si^{\infty,r}_{M_{\kappa}},{\mathbb P}si^{\infty,r}_{E_{\kappa}}\in\mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r}),\TT\LL^2(S^2)\cap\mathscr{C}^{\infty}(S^2,\C^3)). \end{array} $$ Finally, the differentiability properties of the mapping $r\mapsto C_{0}^{*,r}$ can be deduced from those of the mapping $r\mapsto C_{\kappa}^r$. In this approach, several difficulties have to be overcome. The first one is that the solution of the scattering problem is given as a product of operators and of their inverses, all defined on the same space $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ (for the derivative at $r=0$); in order to differentiate such a product, it is necessary to prove that the derivatives themselves are defined as bounded operators on the same space, too. On the other hand, the very definition of the differentiability of operators defined on $\TT\HH\sp{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ raises non-trivial questions. To reduce the variable space $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$ to a fixed reference space, it is not sufficient to use a simple change of variables, as we did in the scalar case studied in the first part \cite{CostabelLeLouer}. Let us discuss in detail the question: \emph{How to define the shape derivative of operators defined on the variable space $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$?} We recall the notation $\tau_{r}$ for the ``pullback'' induced by the change of variables. It maps a function $u_{r}$ defined on $\Gamma_{r}$ to the function $\tau_{r}u_{r}=u_{r}\circ(\Id+r)$ defined on $\Gamma$. For $r\in B^{\infty}(0,\varepsilon)$, the transformation $\tau_{r}$ is an isomorphism from $H^t(\Gamma_{r})$ to $H^t(\Gamma)$. We have $$ (\tau_{r}u_{r})(x)=u_{r}(x+r(x))\text{ and }(\tau_{r}^{-1}u)(x_{r})=u(x). $$ The natural idea to use this for a product of operators, proposed by Potthast in \cite{Potthast2} in the acoustic case, is to insert the identity $\tau_{r}^{-1}\tau_{r}=\Id_{\HH^{-\frac{1}{2}}(\Gamma_{r})}$ between the factors. This allows one to consider integral operators on the fixed boundary $\Gamma$ only, and it would require studying the differentiability of the mappings \begin{equation} \label{taurM} r\mapsto\tau_{r}C_{\kappa}^{r}\tau_{r}^{-1}, \quad r\mapsto\tau_{r}M_{\kappa}^{r}\tau_{r}^{-1}, \quad r\mapsto{\mathbb P}si_{E_{\kappa}}^{r}\tau_{r}^{-1}, \quad r\mapsto{\mathbb P}si_{M_{\kappa}}^{r}\tau_{r}^{-1}, \end{equation} but as has been already pointed out in \cite{Potthast3}, difficulties remain. The main cause for this is that $\tau_{r}$ does not map vector fields tangential to $\Gamma_{r}$ to vector fields tangential to $\Gamma$, and in particular, $$ \tau_{r}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r}))\not=\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma).
$$ This will lead to a loss of regularity if we simply try to differentiate the mappings in \eqref{taurM}. Let us explain this for the operator $M_{\kappa}$. The operator $M_{\kappa}^r$, when acting on vector fields tangential to $\Gamma_{r}$, has additional regularity, similar to what is known for the scalar double layer potential: it has a pseudo-homogeneous kernel of class $-1$, whereas it is of class $0$ when considered on all vector fields (see \eqref{MkDkBk}). If we differentiate the kernel of $\tau_{r}M_{\kappa}^r\tau_{r}^{-1}$, we will not obtain a pseudo-homogeneous kernel of class $-1$ on the set of vector fields tangential to $\Gamma$, so that we find a loss of regularity for the G\^ateaux derivative of $\tau_{r}M_{\kappa}^r\tau_{r}^{-1}$. For mapping tangent vector fields to tangent vector fields, the idea of Potthast was to use projectors from one tangent plane to the other. Let us denote by $\pi(r)$ the pullback $\tau_{r}$ followed by the orthogonal projection onto the tangent plane to $\Gamma$. This maps any vector function on $\Gamma_{r}$ to a tangential vector function on $\Gamma$, and we have $$ (\pi(r)\uu_{r})(x)=\uu_{r}(x+r(x))-\left(\nn(x)\cdot \uu_{r}(x+r(x))\right)\nn(x). $$ The restriction of $\pi(r)$ to tangential functions on $\Gamma_{r}$ admits an inverse, denoted by $\pi^{-1}(r)$, if $r$ is sufficiently small. The mapping $\pi^{-1}(r)$ is defined by $$ (\pi^{-1}(r)\uu)(x+r(x))=\uu(x)-\nn(x)\frac{\nn_{r}(x+r(x))\cdot \uu(x)}{\nn_{r}(x+r(x))\cdot \nn(x)}, $$ and it is easy to see that $\pi(r)$ is an isomorphism between the spaces of continuous tangential vector functions on $\Gamma_{r}$ and on $\Gamma$, and, for any $t$, between $\TT\HH^t(\Gamma_{r})$ and $\TT\HH^t(\Gamma)$. In the framework of continuous tangential functions it suffices to insert the product $\pi^{-1}(r)\pi(r)=\Id_{\TT\mathscr{C}^0(\Gamma_{r})}$ between factors in the integral representation of the solution to reduce the analysis to the study of boundary integral operators defined on $\TT\mathscr{C}^0(\Gamma)$, which does not depend on $r$. In our case, we would obtain operators defined on the space $$ \pi(r)\left(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})\right)=\left\{\uu\in \TT\HH^{-\frac{1}{2}}(\Gamma), \operatorname{\mathrm{div}}_{\Gamma_{r}}(\pi^{-1}(r)\uu)\in H^{-\frac{1}{2}}(\Gamma_{r})\right\}, $$ which still depends on $r$ and is, in general, different from $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$. We propose a different approach, using the Helmholtz decomposition of the space $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$ to introduce a new pullback operator ${\mathbb P}p_{r}$ that defines an isomorphism between $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$ and $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$. Recall that we assume that the boundary $\Gamma$ is smooth and simply connected. We have the following decomposition. We refer to \cite{delaBourdonnaye} for the proof. \begin{theorem} The Hilbert space $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ admits the following Helmholtz decomposition:\begin{equation} \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)= \nabla_{\Gamma} H^{\frac{3}{2}}(\Gamma) \oplus {\operatorname{\mathbf{curl}}}_{\Gamma} H^{\frac{1}{2}}(\Gamma) .
\end{equation} \end{theorem} Since $\varepsilon$ is chosen such that for all $r\in B^{\infty}(0,\varepsilon)$ the surfaces $\Gamma_{r}$ are still regular and simply connected, the spaces $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$ admit similar decompositions. The operator of change of variables $\tau_{r}$ is an isomorphism from $H^{\frac{3}{2}}(\Gamma_{r})$ to $H^{\frac{3}{2}}(\Gamma)$ and from $H^{\frac{1}{2}}(\Gamma_{r})$ to $H^{\frac{1}{2}}(\Gamma)$, and it maps constant functions to constant functions. Let $\jj_{r}\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$ and let $\jj_{r}=\nabla_{\Gamma_{r}}\;p_{r}+\operatorname{\mathbf{curl}}_{\Gamma_{r}}\;q_{r}$ be its Helmholtz decomposition. The scalar functions $p_{r}$ and $q_{r}$ are determined uniquely up to additive constants. The following operator \begin{equation} \label{Prsmooth} \begin{array}{llcl} {\mathbb P}p_{r}:&\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})&\longrightarrow& \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma) \\ &\jj_{r}=\nabla_{\Gamma_{r}}\;p_{r}+\operatorname{\mathbf{curl}}_{\Gamma_{r}}\;q_{r}&\mapsto&\jj=\nabla_{\Gamma}\;(\tau_{r}p_{r})+\operatorname{\mathbf{curl}}_{\Gamma}\;(\tau_{r}q_{r}) \end{array} \end{equation} is therefore well defined, linear, continuous and invertible. Its inverse ${\mathbb P}p_{r}^{-1}$ is given by \begin{equation} \begin{array}{llcl} {\mathbb P}p_{r}^{-1}:&\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)&\longrightarrow &\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r}) \\ &\jj=\nabla_{\Gamma}\;p+\operatorname{\mathbf{curl}}_{\Gamma}\;q&\mapsto&\jj_{r}= \nabla_{\Gamma_{r}}\;\tau_{r}^{-1}(p)+\operatorname{\mathbf{curl}}_{\Gamma_{r}}\;\tau_{r}^{-1}(q). \end{array} \end{equation} Obviously for $r=0$ we have ${\mathbb P}p_{0}={\mathbb P}p_{0}^{-1}=\Id_{\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)}$. We can now insert the identity $\Id_{\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})}={\mathbb P}p_{r}^{-1}{\mathbb P}p_{r}$ between factors in the integral representation of the solution $(\mathscr{E}^{i}(r),\mathscr{E}^s(r))$, and we are finally led to study the G\^ateaux differentiability properties of the following mappings, defined on $r$-independent spaces: \begin{equation}\label{reformulation} \begin{array}{lclll} \; B^{\infty}(0,\varepsilon) &\rightarrow& \mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma), \HH(\operatorname{\mathbf{curl}}, K_{p})) & : & r\mapsto {\mathbb P}si^{r}_{E_{\kappa}}{\mathbb P}p_{r}^{-1} \\ \; B^{\infty}(0,\varepsilon)& \rightarrow& \mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma), \HH(\operatorname{\mathbf{curl}},K_{p})) & : & r\mapsto {\mathbb P}si^{r}_{M_{\kappa}}{\mathbb P}p_{r}^{-1} \\ \; B^{\infty}(0,\varepsilon)& \rightarrow& \mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma), \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma))& : & r\mapsto {\mathbb P}p_{r}M^{r}_{\kappa}{\mathbb P}p_{r}^{-1} \\ \;B^{\infty}(0,\varepsilon)&\rightarrow& \mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma), \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma))& : & r\mapsto {\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1} \end{array} \end{equation} where $K_{p}$ is a compact subset of $\R^3\backslash\Gamma$.
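Let us note how the scalar potentials, and hence the pullback ${\mathbb P}p_{r}$, can be computed: by \eqref{eqd1} and \eqref{eqd2}, the Helmholtz decomposition $\jj_{r}=\nabla_{\Gamma_{r}}\;p_{r}+\operatorname{\mathbf{curl}}_{\Gamma_{r}}\;q_{r}$ gives $\operatorname{\mathrm{div}}_{\Gamma_{r}}\jj_{r}=\Delta_{\Gamma_{r}}p_{r}$ and $\operatorname{\mathrm{curl}}_{\Gamma_{r}}\jj_{r}=-\Delta_{\Gamma_{r}}q_{r}$, so that, up to additive constants,
$$
p_{r}=\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{div}}_{\Gamma_{r}}\jj_{r}
\qquad\text{ and }\qquad
q_{r}=-\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\jj_{r}.
$$
This already indicates the role that the operator $\Delta_{\Gamma_{r}}^{-1}$ and the surface differential operators will play below.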
These mappings are composed of scalar singular integral operators, the shape derivatives of which we studied in the first part \cite{CostabelLeLouer}, of surface differential operators, and of the inverse of the Laplace--Beltrami operator, which appears in the construction of the Helmholtz decomposition. Let us look at the representation of the operators in \eqref{reformulation} in terms of the Helmholtz decomposition. \noindent\textbf{Helmholtz representation of ${\mathbb P}si^{r}_{E_{\kappa}}{\mathbb P}p_{r}^{-1}$}\\ The operator ${\mathbb P}si^{r}_{E_{\kappa}}{\mathbb P}p_{r}^{-1}$ is defined for $\jj=\nabla_{\Gamma}\;p+\mathbf{\operatorname{\mathbf{curl}}}_{\Gamma}\;q\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ and $x\in K_{p}$ by: \begin{equation*}\label{PE} \begin{split} {\mathbb P}si^{r}_{E_{\kappa}}{\mathbb P}p_{r}^{-1}\jj(x) =&\;\kappa\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x-y_{r}|)\left(\nabla_{\Gamma_{r}}\tau_{r}^{-1}p\right)(y_{r})ds(y_{r})} \\ &\;+\kappa\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x-y_{r}|)\left(\operatorname{\mathbf{curl}}_{\Gamma_{r}} \tau_{r}^{-1}q\right)(y_{r})ds(y_{r})} \\ &\;+ \dfrac{1}{\kappa}\nabla\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x-y_{r}|)\left(\Delta_{\Gamma_{r}}\tau_{r}^{-1} p\right)(y_{r})ds(y_{r})}. \end{split} \end{equation*} \noindent\textbf{Helmholtz representation of ${\mathbb P}si^{r}_{M_{\kappa}}{\mathbb P}p_{r}^{-1}$}\\ The operator ${\mathbb P}si^{r}_{M_{\kappa}}{\mathbb P}p_{r}^{-1}$ is defined for $\jj=\nabla_{\Gamma}\;p+\mathbf{\operatorname{\mathbf{curl}}}_{\Gamma}\;q\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ and $x\in K_{p}$ by: \begin{equation*}\label{PM} \begin{split} {\mathbb P}si^{r}_{M_{\kappa}}{\mathbb P}p_{r}^{-1}\jj(x)=&\;\operatorname{\mathbf{curl}}\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x-y_{r}|)\left(\nabla_{\Gamma_{r}}\tau_{r}^{-1}p\right)(y_{r})ds(y_{r})} \\ &+\operatorname{\mathbf{curl}}\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x-y_{r}|)\left(\mathbf{\operatorname{\mathbf{curl}}}_{\Gamma_{r}}\tau_{r}^{-1} q\right)(y_{r})ds(y_{r})}. \end{split} \end{equation*} \noindent\textbf{Helmholtz representation of ${\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$}\\ Recall that for $\jj_{r}\in\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$, the operator $C_{\kappa}^r$ is defined by \begin{equation*} \begin{array}{ll} C^{r}_{\kappa}\jj_{r}(x_{r})=&-\kappa\,\nn_{r}(x_{r})\wedge\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x_{r}-y_{r}|)\jj_{r}(y_{r})ds(y_{r})} \\ &-\dfrac{1}{\kappa}\nn_{r}(x_{r})\wedge\nabla_{\Gamma_{r}}^{x_{r}}\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x_{r}-y_{r}|)\operatorname{\mathrm{div}}_{\Gamma_{r}}\jj_{r}(y_{r})ds(y_{r})}. \end{array} \end{equation*} We want to write ${C^{r}_{\kappa}}\jj_{r}$ in the form $\nabla_{\Gamma_{r}}P_{r}+\operatorname{\mathbf{curl}}_{\Gamma_{r}}Q_{r}$. Using formulas \eqref{eqd1} and \eqref{eqd2}, we find $$ \operatorname{\mathrm{div}}_{\Gamma_{r}}{C^{r}_{\kappa}}\jj_{r}=\Delta_{\Gamma_{r}} P_{r}\;\text{ and } \operatorname{\mathrm{curl}}_{\Gamma_{r}}{C^{r}_{\kappa}}\jj_{r}=-\Delta_{\Gamma_{r}} Q_{r}.
$$ As a consequence we have for $x_{r}\in\Gamma_{r}$ \begin{equation} \begin{array}{ll} P_{r}(x_{r})=&-\kappa\;\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{div}}_{\Gamma_{r}}\left(\nn_{r}(x_{r})\wedge\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x_{r}-y_{r}|)\jj_{r}(y_{r})ds(y_{r})}\right) \end{array} \end{equation} and \begin{equation*} \begin{array}{rl} Q_{r}(x_{r})=&-\kappa\;(-\Delta_{\Gamma_{r}}^{-1})\operatorname{\mathrm{curl}}_{\Gamma_{r}}\left(\nn_{r}(x_{r})\wedge\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x_{r}-y_{r}|)\jj_{r}(y_{r})ds(y_{r})}\right) \\ &\mkern-20mu -\dfrac{1}{\kappa}(-\Delta_{\Gamma_{r}}^{-1})\operatorname{\mathrm{curl}}_{\Gamma_{r}}(-\operatorname{\mathbf{curl}}_{\Gamma_{r}})\displaystyle{\int_{\Gamma_{r}}\!\!G_{a}(\kappa,|x_{r}-y_{r}|)\operatorname{\mathrm{div}}_{\Gamma_{r}}\jj_{r}(y_{r})ds(y_{r})} \\ =&\kappa\;\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\left(\nn_{r}(x_{r})\wedge\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x_{r}-y_{r}|)\jj_{r}(y_{r})ds(y_{r})}\right) \\ &+\dfrac{1}{\kappa}\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|x_{r}-y_{r}|)\operatorname{\mathrm{div}}_{\Gamma_{r}}\jj_{r}(y_{r})ds(y_{r})}. \end{array} \end{equation*} The operator ${\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$ is defined for $\jj=\nabla_{\Gamma}\;p+\mathbf{\operatorname{\mathbf{curl}}}_{\Gamma}\;q\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ by $$ {\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}\jj=\nabla_{\Gamma}P(r)+\operatorname{\mathbf{curl}}_{\Gamma}Q(r), $$ with \begin{multline*} P(r)(x)=\\ -\kappa\left(\tau_{r}\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{div}}_{\Gamma_{r}}\tau_{r}^{-1}\right)\Big((\tau_{r}\nn_{r})(x)\wedge\tau_{r}\Big\{\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|\cdot-y_{r}|)(\nabla_{\Gamma_{r}}\tau_{r}^{-1}p)(y_{r})ds(y_{r})} \\ +\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|\cdot-y_{r}|)(\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1}q)(y_{r})ds(y_{r})}\Big\}(x)\Big) \end{multline*} and \begin{multline*} Q(r)(x)=\\ \kappa\left(\tau_{r}\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\tau_{r}^{-1}\right)\Big((\tau_{r}\nn_{r})(x)\wedge\tau_{r}\Big\{\displaystyle{\int_{\Gamma_{r}} \!\! G_{a}(\kappa,|\cdot-y_{r}|)(\nabla_{\Gamma_{r}}\tau_{r}^{-1}p)(y_{r})ds(y_{r})} \\ +\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|\cdot-y_{r}|)(\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1}q)(y_{r})ds(y_{r})}\Big\}(x)\Big) \\ +\dfrac{1}{\kappa}\tau_{r}\Big(\displaystyle{\int_{\Gamma_{r}}G_{a}(\kappa,|\cdot-y_{r}|)(\Delta_{\Gamma_{r}}\tau_{r}^{-1}p)(y_{r})ds(y_{r})}\Big)(x). \end{multline*} \noindent\textbf{Helmholtz representation of ${\mathbb P}p_{r}M^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$}\\ Recall that for all $\jj_{r}\in\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}}, \Gamma_{r})$, the operator $M_{\kappa}^r$ is defined by \begin{equation*} \begin{array}{ll} M^{r}_{\kappa}\jj_{r}(x_{r})=&\displaystyle{\int_{\Gamma_{r}}\big((\nabla^{x_{r}}G_{a}(\kappa,|x_{r}-y_{r}|))\cdot\nn_{r}(x_{r})\big)\jj_{r}(y_{r})ds(y_{r})} \\ &-\displaystyle{\int_{\Gamma_{r}}\nabla^{x_{r}}G_{a}(\kappa,|x_{r}-y_{r}|)\big(\nn_{r}(x_{r})\cdot\jj_{r}(y_{r})\big) ds(y_{r})}. \end{array} \end{equation*} Using the equalities \eqref{eqd3} and the identity $\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}=-\Delta+\nabla\operatorname{\mathrm{div}}$, we have $$ \begin{array}{rl} \operatorname{\mathrm{div}}_{\Gamma_{r}}M^r_{\kappa}\jj_{r}(x_{r})=&\nn_{r}(x_{r})\cdot\!\displaystyle{\int_{\Gamma_{r}} \!\!
\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}^{x_{r}} \left\{G_{a}(\kappa,|x_{r}-y_{r}|)\jj_{r}(y_{r}) \right\} ds(y_{r})} \\ =&\kappa^2\nn_{r}(x_{r})\cdot\displaystyle{\int_{\Gamma_{r}}\left\{G_{a}(\kappa,|x_{r}-y_{r}|)\jj_{r}(y_{r}) \right\}ds(y_{r})} \\ & +\displaystyle{\int_{\Gamma_{r}}\dfrac{\partial}{\partial\nn_{r}(x_{r})} \left\{G_{a}(\kappa,|x_{r}-y_{r}|)\operatorname{\mathrm{div}}_{\Gamma_{r}}\jj_{r}(y_{r}) \right\}ds(y_{r})}. \end{array} $$ Proceeding in the same way as with the operator ${\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$, we obtain that the operator ${\mathbb P}p_{r}M^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$ is defined for $\jj=\nabla_{\Gamma}\;p+\mathbf{\operatorname{\mathbf{curl}}}_{\Gamma}\;q\in \TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ by: $$ {\mathbb P}p_{r}M^{r}_{\kappa}{\mathbb P}p_{r}^{-1}\jj=\nabla_{\Gamma}P'(r)+\operatorname{\mathbf{curl}}_{\Gamma}Q'(r), $$ with \begin{multline*} P'(r)(x)=\\ \left(\tau_{r}\Delta_{\Gamma_{r}}^{-1}\tau_{r}^{-1}\right)\tau_{r}\left\{\kappa^2 \int_{\Gamma_{r}}\nn_{r}\cdot \left\{G_{a}(\kappa,|\cdot-y_{r}|)\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1}q(y_{r})\right\}ds(y_{r}) \right. \\ +\,\kappa^{2}\int_{\Gamma_{r}}\nn_{r}\cdot \left\{G_{a}(\kappa,|\cdot-y_{r}|)\nabla_{\Gamma_{r}}\tau_{r}^{-1}p(y_{r})\right\}ds(y_{r}) \\ +\left.\int_{\Gamma_{r}}\dfrac{\partial}{\partial\nn_{r}}G_{a}(\kappa,|\cdot-y_{r}|)(\Delta_{\Gamma_{r}}\tau_{r}^{-1}p)(y_{r})ds(y_{r})\right\}(x), \end{multline*} and \begin{multline*} Q'(r)(x)=\\ \left(\tau_{r}\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\tau_{r}^{-1}\right)\tau_{r}\left\{\int_{\Gamma_{r}}\left(\nabla G_{a}(\kappa,|\cdot-y_{r}|)\cdot\nn_{r}\right)(\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1}q)(y_{r})ds(y_{r}) \right. \\ +\int_{\Gamma_{r}}\left( \nabla G_{a}(\kappa,|\cdot-y_{r}|)\cdot\nn_{r}\right)(\nabla_{\Gamma_{r}}\tau_{r}^{-1} p)(y_{r})ds(y_{r}) \\ - \int_{\Gamma_{r}}\nabla G_{a}(\kappa,|\cdot-y_{r}|)\left(\nn_{r}\cdot(\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1} q)(y_{r})\right) ds(y_{r}) \\ -\left.\int_{\Gamma_{r}}\nabla G_{a}(\kappa,|\cdot-y_{r}|)\left(\nn_{r}\cdot(\nabla_{\Gamma_{r}}\tau_{r}^{-1}p)(y_{r})\right) ds(y_{r})\right\}(x). \end{multline*} These operators are composed of boundary integral operators with weakly singular kernels and of the surface differential operators defined in Section \ref{BoundIntOp}. Each of these weakly singular boundary integral operators has a pseudo-homogeneous kernel of class $-1$. The $\mathscr{C}^{\infty}$-G\^ateaux differentiability properties of such boundary integral operators have been established in the preceding paper \cite{CostabelLeLouer}. It remains now to show that the surface differential operators, more precisely $\tau_{r}\nabla_{\Gamma_{r}}\tau_{r}^{-1},\;\tau_{r}\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1},\;\tau_{r}\operatorname{\mathrm{div}}_{\Gamma_{r}}\tau_{r}^{-1},\;\tau_{r}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\tau_{r}^{-1}$, as well as $\tau_{r}\Delta_{\Gamma_{r}}\tau_{r}^{-1}$ and its inverse, preserve their mapping properties under differentiation with respect to $r$. \section{G\^ateaux differentiability of surface differential operators}\label{SurfDiffOp} The analysis of the surface differential operators requires the differentiability properties of some auxiliary functions, such as the outer unit normal vector $\nn_{r}$ and the Jacobian $J_{r}$ of the change of variables $x\mapsto x+r(x)$.
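Here and in what follows, the surface Jacobian $J_{r}\in\mathscr{C}^{\infty}(\Gamma,\R)$ is the positive function characterized by the change of variables formula for surface integrals,
$$
\int_{\Gamma_{r}}u_{r}\,ds=\int_{\Gamma}\big(\tau_{r}u_{r}\big)\,J_{r}\,ds
\qquad\text{ for integrable functions }u_{r}\text{ on }\Gamma_{r},
$$
a formula that will be used repeatedly below (see, e.g., \eqref{rotdual}).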
We recall some results established in the first part \cite[Section 4]{CostabelLeLouer}. For the definition of G\^ateaux derivatives and the corresponding analysis, see \cite{Schwartz}. \begin{lemma} \label{N} The mapping $\mathcal{N}:B^{\infty}(0,\varepsilon)\ni r\mapsto\tau_{r}\nn_{r}=\nn_{r}\circ(\Id+r)\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is $\mathscr{C}^{\infty}$-G\^ateaux-differentiable and its first derivative in the direction of $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is given by $$ d \mathcal{N}[r,\xi]=-\left[\tau_{r}\nabla_{\Gamma_{r}}(\tau_{r}^{-1}\xi)\right]\mathcal{N}(r). $$ \end{lemma} \begin{lemma}\label{J} The mapping $\mathcal{J}$ from $r\in B^{\infty}(0,\varepsilon)$ to the surface Jacobian $J_{r}\in\mathscr{C}^{\infty}(\Gamma,\R)$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and its first derivative in the direction of $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is given by $$ d\mathcal{J}[r_{0},\xi]=J_{r_{0}}\cdot\big(\tau_{r_{0}}\operatorname{\mathrm{div}}_{\Gamma_{r_{0}}}(\tau^{-1}_{r_{0}}\xi)\big). $$ \end{lemma} The differentiability properties of the tangential gradient and of the surface divergence in the framework of classical Sobolev spaces are established in \cite[Section 5]{CostabelLeLouer}. \begin{lemma}\label{nabla} The mapping $$ \begin{array}{cccc} \mathcal{G}:&B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(H^{s+1}(\Gamma),\HH^{s}(\Gamma)) \\ &r&\mapsto&\tau_{r}\nabla_{\Gamma_{r}}\tau_{r}^{-1} \end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and its first derivative in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is given by $$ d \mathcal{G}[r,\xi]u=-[\mathcal{G}(r)\xi]\mathcal{G}(r)u+\big(\mathcal{G}(r)u\cdot[\mathcal{G}(r)\xi]\mathcal{N}(r)\big)\,\mathcal{N}(r). $$ \end{lemma} \begin{lemma}\label{D} The mapping $$ \begin{array}{cccc} \mathcal{D}:&B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(\HH^{s+1}(\Gamma),H^s(\Gamma)) \\ &r&\mapsto&\tau_{r}\operatorname{\mathrm{div}}_{\Gamma_{r}}\tau_{r}^{-1} \end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and its first derivative in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is given by $$ d\mathcal{D}[r,\xi]\uu=-\Tr([\mathcal{G}(r)\xi][\mathcal{G}(r)\uu])+\left([\mathcal{G}(r)\uu]\mathcal{N}(r)\cdot[\mathcal{G}(r)\xi]\mathcal{N}(r)\right). $$ \end{lemma} Similar results can now be obtained for the tangential vector curl by composition of the tangential gradient with the normal vector. \begin{lemma}\label{rot2} The mapping $$ \begin{array}{cccc} \boldsymbol{\mathcal{R}}:& B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(H^{s+1}(\Gamma),\HH^s(\Gamma)) \\ &r&\mapsto&\tau_{r}\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1} \end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and its first derivative in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is given by $$ d \boldsymbol{\mathcal{R}}[r,\xi]u=\transposee{[\mathcal{G}(r)\xi]}\boldsymbol{\mathcal{R}}(r)u-\mathcal{D}(r)\xi\cdot\boldsymbol{\mathcal{R}}(r)u. $$ \end{lemma} \begin{proof} Let $u\in H^{s+1}(\Gamma)$. By definition, we have $\boldsymbol{\mathcal{R}}(r)u=\mathcal{G}(r)u\wedge \mathcal{N}(r)$. By Lemmas \ref{N} and \ref{nabla} this mapping is $\mathscr{C}^{\infty}$-G\^ateaux differentiable. For the derivative in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ we find $$ d\boldsymbol{\mathcal{R}}[r,\xi]u=-{[\mathcal{G}(r)\xi]}\mathcal{G}(r)u\wedge \mathcal{N}(r)-\mathcal{G}(r)u\wedge[\mathcal{G}(r)\xi]\mathcal{N}(r).
$$ For any $(3\times3)$ matrix $A$ and vectors $b$ and $c$ there holds $$ (Ab)\wedge c+b\wedge Ac=\Tr(A)(b\wedge c)-\transposee{A}(b\wedge c). $$ We obtain the expression of the first derivative with the choice $A=-[\mathcal{G}(r)\xi]$, $b=\mathcal{G}(r)u$ and $c=\mathcal{N}(r)$. \end{proof} \begin{lemma}\label{rot1} The mapping $$ \begin{array}{cccc} \mathcal{R}:&B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(\HH^{s+1}(\Gamma),H^s(\Gamma)) \\ &r&\mapsto&\tau_{r}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\tau_{r}^{-1} \end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and its first derivative in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ is given by $$ d\mathcal{R}[r,\xi]\uu=-\sum_{i=1}^3\left(\mathcal{G}(r)\xi_{i}\cdot\boldsymbol{\mathcal{R}}(r)u_{i}\right)- \mathcal{D}(r)\xi\cdot \mathcal{R}(r)\uu $$ where $\uu=(u_{1},u_{2},u_{3})$ and $\xi=(\xi_{1},\xi_{2},\xi_{3})$. \end{lemma} \begin{proof} Let $\uu\in\HH^{s+1}(\Gamma)$. Notice that we have $\operatorname{\mathrm{curl}}_{\Gamma}\uu=-\Tr\big([\operatorname{\mathbf{curl}}_{\Gamma}\uu]\big)$. We can therefore write $$ \mathcal{R}(r)\uu=-\Tr(\boldsymbol{\mathcal{R}}(r)\uu). $$ The $\mathscr{C}^{\infty}$-differentiability of $\mathcal{R}$ results from the $\mathscr{C}^{\infty}$-differentiability of $\boldsymbol{\mathcal{R}}$. The first derivative in the direction $\xi$ is \begin{equation*} \begin{split} d\mathcal{R}[r,\xi]\uu=&-\Tr\left(d \boldsymbol{\mathcal{R}}[r,\xi]\uu\right) \\ =&-\Tr\left(\transposee{[\mathcal{G}(r)\xi]}[\boldsymbol{\mathcal{R}}(r)\uu]\right)-\mathcal{D}(r)\xi\cdot \Tr\left(-[\boldsymbol{\mathcal{R}}(r)\uu]\right) \\ =&-\sum_{i=1}^3\left(\mathcal{G}(r)\xi_{i}\cdot\boldsymbol{\mathcal{R}}(r)u_{i}\right)- \mathcal{D}(r)\xi\cdot \mathcal{R}(r)\uu. \end{split} \end{equation*} \end{proof} Higher order derivatives of the tangential vector curl operator and of the surface scalar curl operator can be obtained by applying these results recursively. In view of the integral representations of the operators ${\mathbb P}p_{r}C_{\kappa}^r{\mathbb P}p_{r}^{-1}$ and ${\mathbb P}p_{r}M_{\kappa}^r{\mathbb P}p_{r}^{-1}$, we have to study the G\^ateaux differentiability of the mappings $$ \begin{array}{lll} r&\mapsto&\tau_{r}\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{div}}_{\Gamma_{r}}\tau_{r}^{-1} \\ r&\mapsto&\tau_{r}\Delta_{\Gamma_{r}}^{-1}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\tau_{r}^{-1}. \end{array} $$ We have seen that for $r\in B^{\infty}(0,\varepsilon)$ the operator $\operatorname{\mathrm{curl}}_{\Gamma_{r}}$ is linear and continuous from $\HH^{s+1}(\Gamma_{r})$ to $H^s_{*}(\Gamma_{r})$, that the operator $\operatorname{\mathrm{div}}_{\Gamma_{r}}$ is linear and continuous from $\TT\HH^{s+1}(\Gamma_{r})$ to $H_{*}^s(\Gamma_{r})$, and that $\Delta_{\Gamma_{r}}^{-1}$ is defined from $H^{s}_{*}(\Gamma_{r})$ to $H^{s+2}(\Gamma_{r})\slash\C$. To use the chain rule, it is necessary to prove that the derivatives at $r=0$ act between the spaces $\HH^{s+1}(\Gamma)$ and $H^s_{*}(\Gamma)$ for the scalar curl operator, between the spaces $\TT\HH^{s+1}(\Gamma)$ and $H^s_{*}(\Gamma)$ for the divergence operator, and between the spaces $H^s_{*}(\Gamma)$ and $H^{s+2}(\Gamma)/\C$ for the Laplace--Beltrami operator. An important observation is that $$ u_{r}\in H^s_{*}(\Gamma_{r})\text{ if and only if }J_{r}\,u_{r}\circ(\Id+r)\in H^s_{*}(\Gamma).
$$ Using the duality \eqref{dualrot} on the boundary $\Gamma_{r}$ we can write for any vector density $\jj\in\HH^{s+1}(\Gamma)$ and any scalar density $\varphi\in H^{-s}(\Gamma)$ \begin{equation}\label{rotdual} \begin{aligned} \int_{\Gamma}\tau_{r}\Big(\operatorname{\mathrm{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}\jj)\Big)\cdot \varphi \,J_{r}ds &= \int_{\Gamma_{r}}\operatorname{\mathrm{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}\jj)\cdot(\tau_{r}^{-1}\varphi)\,ds \\ &=\int_{\Gamma_{r}}(\tau_{r}^{-1}\jj)\cdot\operatorname{\mathbf{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}\varphi)\,ds \\ &=\int_{\Gamma}\jj\cdot\tau_{r}\Big(\operatorname{\mathbf{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}\varphi)\Big)\,J_{r}\,ds. \end{aligned} \end{equation} If we take $\varphi\in\R$ (i.e.\ $\varphi$ is a constant function), then the right-hand side vanishes. This means that $J_{r}\big(\tau_{r}\operatorname{\mathrm{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}\jj)\big)$ is of vanishing mean value. \begin{lemma} The mapping $$ \begin{array}{cccc} \mathcal{R}^{*}: &B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(\HH^{s+1}(\Gamma),H^{s}_{*}(\Gamma)) \\ &r&\mapsto&J_{r}\,\tau_{r}\operatorname{\mathrm{curl}}_{\Gamma_{r}}\tau_{r}^{-1} \end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and we have, for $\uu=(u_{1},u_{2},u_{3})\in\HH^{s+1}(\Gamma)$ and in any direction $\xi=(\xi_{1},\xi_{2},\xi_{3})\in\mathscr{C}^{\infty}(\Gamma,\R^3)$, $$ \left\{\begin{array}{ccl}\dfrac{\partial \mathcal{R}^{*}}{\partial r}[r,\xi]\uu&=&-\mathcal{J}(r)\cdot\sum\limits_{i=1}^3\left(\mathcal{G}(r)\xi_{i}\cdot\boldsymbol{\mathcal{R}}(r)u_{i}\right), \\ \dfrac{\partial^m \mathcal{R}^{*}}{\partial r^m}[r,\xi]&=&0,\;\text{ for all }m\ge2.\end{array}\right. $$ \end{lemma} \begin{proof} Looking at the expressions of the derivatives of the tangential gradient and of the tangential vector curl in Lemmas \ref{nabla} and \ref{rot2}, we prove iteratively that all the derivatives of $\mathcal{G}(r)\varphi$ and of $\boldsymbol{\mathcal{R}}(r)\varphi$ are composed of $\mathcal{G}(r)\varphi$ and $\boldsymbol{\mathcal{R}}(r)\varphi$, so that for $\varphi\in\R$ the derivatives of the right-hand side of \eqref{rotdual} vanish. We have for all $m\in\N$ and $\jj\in\HH^{s+1}(\Gamma)$: $$ \frac{\partial^m}{\partial r^m}\left\{\int_{\Gamma}J_{r}\big(\tau_{r}\operatorname{\mathrm{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}\jj)\big) \,ds\right\}[r,\xi]=\int_{\Gamma}\frac{\partial^m}{\partial r^m}\left\{\mathcal{R}^{*}\right\}[r,\xi]\jj \,ds=0. $$ This can also be obtained by directly differentiating the expression of $\mathcal{R}^{*}(r)\jj$ using the formulas obtained in Lemmas \ref{N} to \ref{rot1}. The first derivative of $\mathcal{R}^{*}$ is given by $$ \begin{array}{ll} d\left(\mathcal{R}^{*}\right)[r,\xi]\uu&=-\mathcal{J}(r)\cdot\sum\limits_{i=1}^3\left(\mathcal{G}(r)\xi_{i}\cdot\boldsymbol{\mathcal{R}}(r)u_{i}\right)\\ &=-J_{r}\,\tau_{r}\left(\sum\limits_{i=1}^3\nabla_{\Gamma_{r}}(\tau_{r}^{-1}\xi_{i})\cdot\operatorname{\mathbf{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}u_{i})\right). \end{array} $$ The right-hand side is of vanishing mean value since the space $\nabla_{\Gamma_{r}}H^s(\Gamma_{r})$ is orthogonal to $\operatorname{\mathbf{curl}}_{\Gamma_{r}}H^s(\Gamma_{r})$ with respect to the $\LL^2(\Gamma_{r})$ duality product.
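Indeed, by the analogue of \eqref{dualgrad} on $\Gamma_{r}$ and by \eqref{eqd2}, one has, for smooth scalar functions $\phi$ and $\psi$ on $\Gamma_{r}$,
$$
\int_{\Gamma_{r}}\nabla_{\Gamma_{r}}\phi\cdot\operatorname{\mathbf{curl}}_{\Gamma_{r}}\psi\,ds
= -\int_{\Gamma_{r}}\big(\operatorname{\mathrm{div}}_{\Gamma_{r}}\operatorname{\mathbf{curl}}_{\Gamma_{r}}\psi\big)\,\phi\,ds=0.
$$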
For the second-order derivative we differentiate $r\mapsto d\left(\mathcal{R}^{*}\right)[r,\xi]\uu$ in the direction $\eta=(\eta_{1},\eta_{2},\eta_{3})\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ and we obtain $$\begin{array}{lcl}d^2\left(\mathcal{R}^{*}\right)[r;\xi,\eta]\uu&=&\mathcal{J}(r)\cdot\sum\limits_{i=1}^3\left([\mathcal{G}(r)\eta]\mathcal{G}(r)\xi_{i}\cdot\boldsymbol{\mathcal{R}}(r)\uu_{i}\right)\\&&-\mathcal{J}(r)\cdot\sum\limits_{i=1}^3\left(\mathcal{G}(r)\xi_{i}\cdot\transposee{[\mathcal{G}(r)\eta]}\boldsymbol{\mathcal{R}}(r)\uu_{i}\right)\\&=&0.\end{array}$$ Higher order derivatives of $\mathcal{R}^{*}$ vanish. \end{proof} For the surface divergence, similar arguments can be applied. Using the duality \eqref{dualgrad}, we can write for $\jj\in\TT\HH^{s+1}(\Gamma)$ and $\varphi\in H^{-s}(\Gamma)$: $$ \begin{aligned} \displaystyle{\int_{\Gamma}\tau_{r}\Big(\operatorname{\mathrm{div}}_{\Gamma_{r}}(\pi(r)^{-1}\jj)\Big)\cdot \varphi \,J_{r}ds} &=\displaystyle{\int_{\Gamma_{r}}\operatorname{\mathrm{div}}_{\Gamma_{r}}(\pi(r)^{-1}\jj)\cdot(\tau_{r}^{-1}\varphi)\,ds} \\ &=\displaystyle{-\int_{\Gamma_{r}}(\pi(r)^{-1}\jj)\cdot\nabla_{\Gamma_{r}}(\tau_{r}^{-1}\varphi)\,ds} \\ &=\displaystyle{-\int_{\Gamma}\tau_{r}(\pi(r)^{-1}\jj)\cdot\tau_{r}\Big(\nabla_{\Gamma_{r}}(\tau_{r}^{-1}\varphi)\Big)\,J_{r}ds}. \end{aligned} $$ This shows that for constant $\varphi\in\R$ the right-hand side vanishes and therefore $J_{r}\big(\tau_{r}\operatorname{\mathrm{div}}_{\Gamma_{r}}(\pi^{-1}(r)\jj)\big)$ is of vanishing mean value. We obtain the following result. \begin{lemma} The mapping $$ \begin{array}{cccc} \mathcal{D}^{*}:&B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(\TT\HH^{s+1}(\Gamma),H^{s}_{*}(\Gamma)) \\ &r&\mapsto&J_{r}\,\tau_{r}\operatorname{\mathrm{div}}_{\Gamma_{r}}\pi^{-1}(r) \end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable. \end{lemma} Now it remains to analyze the inverse of the Laplace--Beltrami operator $\Delta_{\Gamma}$. We apply the following abstract result on the G\^ateaux derivative of the inverse in a Banach algebra. We leave its proof to the reader. \begin{lemma}\label{d-1} Let $U$ be an open subset of a Fr\'echet space $\mathcal{X}$ and let $\mathcal{Y}$ be a Banach algebra. Assume that $f : U\rightarrow \mathcal{Y}$ is G\^ateaux differentiable at $r_{0}\in U$, that $f(r)$ is invertible in $\mathcal{Y}$ for all $r\in U$, and set $g(r)=f(r)^{-1}$ for $r\in U$. Then $g$ is G\^ateaux differentiable at $r_{0}$ and its first derivative in the direction $\xi\in\mathcal{X}$ is \begin{equation}d g[r_{0},\xi] = -f(r_{0})^{-1} \circ d f[r_{0},\xi] \circ f(r_{0})^{-1}. \end{equation} Moreover, if $f$ is $\mathscr{C}^m$-G\^ateaux differentiable then $g$ is, too. \end{lemma} From the preceding results we deduce the $\mathscr{C}^{\infty}$-G\^ateaux differentiability of the mapping $$ \begin{array}{cccl} \mathcal{L}^{*} :& B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(H^{s+2}(\Gamma),H^{s}_{*}(\Gamma)) \\ &r&\mapsto& J_{r}\tau_{r}\Delta_{\Gamma_{r}}\tau_{r}^{-1}=-\mathcal{R}^*(r)\boldsymbol{\mathcal{R}}(r). \end{array} $$ Let us note that $\tau_{r}$ induces an isomorphism between the quotient spaces $H^s(\Gamma_{r})/\C$ and $H^s(\Gamma)/\C$.
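Before applying Lemma~\ref{d-1} to $\mathcal{L}^{*}$, let us point out that both this lemma and the $(3\times3)$ matrix identity recalled above can be checked numerically in a few lines. The following sketch is ours and purely illustrative (it is not part of the analysis); it uses \texttt{numpy} with random test data and verifies the identity $(Ab)\wedge c+b\wedge Ac=\Tr(A)(b\wedge c)-\transposee{A}(b\wedge c)$ as well as the inverse-derivative formula of Lemma~\ref{d-1} in the finite-dimensional Banach algebra of $4\times4$ matrices, where the G\^ateaux derivative reduces to a directional derivative that can be approximated by a difference quotient.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Identity (Ab) x c + b x (Ac) = tr(A) (b x c) - A^T (b x c) for a 3x3 matrix A.
A = rng.standard_normal((3, 3))
b, c = rng.standard_normal(3), rng.standard_normal(3)
lhs = np.cross(A @ b, c) + np.cross(b, A @ c)
rhs = np.trace(A) * np.cross(b, c) - A.T @ np.cross(b, c)
print("identity residual:", np.max(np.abs(lhs - rhs)))          # rounding error

# Lemma d-1 for the affine map f(r) = F0 + r*Xi, so that d f[0, 1] = Xi.
F0 = np.eye(4) + 0.1 * rng.standard_normal((4, 4))               # invertible for this draw
Xi = rng.standard_normal((4, 4))
h = 1e-6
finite_diff = (np.linalg.inv(F0 + h * Xi) - np.linalg.inv(F0)) / h
exact = -np.linalg.inv(F0) @ Xi @ np.linalg.inv(F0)              # formula of Lemma d-1
print("inverse-derivative residual:", np.max(np.abs(finite_diff - exact)))
\end{verbatim}
Both residuals are negligible (rounding error for the identity, of the order of the step $h$ for the difference quotient), in agreement with the statements above.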
\begin{lemma}\label{delta-1} The mapping $$ \begin{array}{ccc} B^{\infty}(0,\varepsilon)&\rightarrow&\mathscr{L}(H^{s}_{*}(\Gamma), H^{s+2}(\Gamma)\slash\C) \\ r&\mapsto&\big(\mathcal{L}^{*}(r)\big)^{-1} \end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and we have in any direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ \begin{equation}\label{dLapBm1} d\left\{r\mapsto\big(\mathcal{L}^{*}(r)\big)^{-1}\right\}[0,\xi]=\;-\Delta_{\Gamma}^{-1}\circ d\mathcal{L}^{*}[0,\xi]\circ\Delta_{\Gamma}^{-1}. \end{equation} \end{lemma} \begin{proof} We have seen in section \ref{BoundIntOp} that the Laplace--Beltrami operator is invertible from $H^{s+2}(\Gamma_{r})\slash\C$ to $H^s_{*}(\Gamma_{r})$. As a consequence $\mathcal{L}^{*}(r)$ is invertible from $H^{s+2}(\Gamma)\slash\C$ to $H^s_{*}(\Gamma)$. We conclude by using Lemma \ref{d-1}. \end{proof} Let us give another formulation of \eqref{dLapBm1}. For any $u\in H^{s+2}(\Gamma)$ and $\varphi\in H^{-s}(\Gamma)$ we have $$ \int_{\Gamma}\tau_{r}\Big(\Delta_{\Gamma_{r}}(\tau_{r}^{-1}u)\Big)\cdot \varphi \,J_{r}ds=-\int_{\Gamma}\Big(\mathcal{G}(r)u\cdot\mathcal{G}(r)\varphi\Big)\,J_{r}ds. $$ It is more convenient to differentiate the right-hand side than the left hand side. For $f\in H^s_{*}(\Gamma_{r})$, the element $\big(\mathcal{L}^{*}(r)\big)^{-1}f$ is the solution $u$ of $$ -\int_{\Gamma}\Big(\mathcal{G}(r)u\cdot\mathcal{G}(r)\varphi\Big)\,J_{r}ds=\int_{\Gamma}f\cdot\varphi\,ds,\qquad\text{ for all }\varphi\in H^s(\Gamma). $$ The formula \eqref{dLapBm1} means that the first derivative at $r=0$ in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ of $r\mapsto\big(\mathcal{L}^{*}(r)\big)^{-1}f$ is the solution $v$ of \begin{equation}\label{LapLip} \int_{\Gamma}\frac{\partial}{\partial r}\Big\{\Big(\mathcal{G}(r)u_{0}\cdot\mathcal{G}(r)\varphi\Big)\,J_{r}\Big\}[0,\xi]ds=-\int_{\Gamma}\nabla_{\Gamma}v\cdot\nabla_{\Gamma}\varphi\,ds,\;\text{ for all }\varphi\in H^s(\Gamma), \end{equation} with $u_{0}=\Delta_{\Gamma}^{-1}f$. Now we have all the tools to establish the differentiability properties of the electromagnetic boundary integral operators and then of the solution to the dielectric scattering problem. \section{Shape derivatives of the solution of the dielectric problem}\label{ShapeSol} For the shape-dependent integral operators we now use the following simplified notation $$ \begin{aligned} {\mathbb P}si_{E_{\kappa}}(r)&={\mathbb P}si_{E_{\kappa}}^r{\mathbb P}p_{r}^{-1}, &{\mathbb P}si_{M_{\kappa}}(r)={\mathbb P}si_{M_{\kappa}}^r{\mathbb P}p_{r}^{-1},\ \; \\ C_{\kappa}(r)&={\mathbb P}p_{r}C^r_{{\kappa}}{\mathbb P}p_{r}^{-1}, &\ \;M_{\kappa}(r)={\mathbb P}p_{r}M^r_{{\kappa}}{\mathbb P}p_{r}^{-1}. \end{aligned} $$ In the following we use the results of the preceding paper \cite{CostabelLeLouer} about the G\^ateaux differentiability of potentials and boundary integral operators with pseudo-homogeneous kernels. \begin{theorem}\label{thpsi} The mappings $$ \begin{array}{rcl}B^{\infty}(0,\varepsilon)&\rightarrow& \mathscr{L}(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma), \HH(\operatorname{\mathbf{curl}},K_{p}))\\r&\mapsto&{\mathbb P}si_{E_{\kappa}}(r)\\r&\mapsto&{\mathbb P}si_{M_{\kappa}}(r)\end{array} $$ are infinitely G\^ateaux differentiable. 
The derivatives can be written in explicit form by differentiating the kernels of the operators ${\mathbb P}si_{E_{\kappa}}^r$ and ${\mathbb P}si_{M_{\kappa}}^r$, see \cite[Theorem 4.7]{CostabelLeLouer}, and by using the formulas for the derivatives of the surface differential operators given in Section~\ref{SurfDiffOp}. The first derivatives at $r=0$ can be extended to bounded linear operators from $\TT\HH^{\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ to $\HH(\operatorname{\mathbf{curl}},\Omega)$ and to $\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega^c})$. Given $\jj\in\TT\HH^{\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$, the potentials $d{\mathbb P}si_{E_{\kappa}}[0,\xi]\jj$ and $d{\mathbb P}si_{M_{\kappa}}[0,\xi]\jj$ satisfy the Maxwell equations $$ \operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}\uu-\kappa^2\uu=0 $$ in $\Omega$ and $\Omega^c$, and the Silver-M\"uller radiation condition. \end{theorem} \begin{proof} Let $\jj\in\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ and let $\jj=\nabla_{\Gamma}p+\operatorname{\mathbf{curl}}_{\Gamma}q$ be its Helmholtz decomposition. Recall that ${\mathbb P}si_{E_{\kappa}}(r)\jj$ and ${\mathbb P}si_{M_{\kappa}}(r)\jj$ can be written as: \begin{equation*} \begin{split} {\mathbb P}si_{E_{\kappa}}(r)\,\jj&=\kappa {\mathbb P}si^{r}_{\kappa}\tau_{r}^{-1}(\tau_{r}\mathbf{P_{r}}^{-1}\jj)-\dfrac{1}{\kappa}\nabla {\mathbb P}si^{r}_{\kappa}\tau_{r}^{-1}\big(\tau_{r}\Delta_{\Gamma_{r}}(\tau_{r}^{-1}p)\big), \\{\mathbb P}si_{M_{\kappa}}(r)\,\jj&=\operatorname{\mathbf{curl}}{\mathbb P}si_{\kappa}^{r}\tau_{r}^{-1}(\tau_{r}\mathbf{P_{r}}^{-1}\jj). \end{split} \end{equation*} By composition of differentiable mappings, we deduce that $r\mapsto{\mathbb P}si_{E_{\kappa}}(r)$ and $r\mapsto{\mathbb P}si_{M_{\kappa}}(r)$ are infinitely G\^ateaux differentiable far from the boundary and that their first derivatives are continuous from $\TT\HH^{\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ to $\LL^2(\Omega)\cup \LL^2_{\mathrm{loc}}(\overline{\Omega^c})$. Recall that we have $$ \operatorname{\mathbf{curl}}{\mathbb P}si_{E_{\kappa}}(r)\jj=\kappa{\mathbb P}si_{M_{\kappa}}(r)\jj\text{ and }\operatorname{\mathbf{curl}}{\mathbb P}si_{M_{\kappa}}(r)\jj=\kappa{\mathbb P}si_{E_{\kappa}}(r)\jj. $$ Far from the boundary we can interchange the differentiation with respect to $x$ and the differentiation with respect to $r$, which gives $$ \operatorname{\mathbf{curl}} d{\mathbb P}si_{E_{\kappa}}[0,\xi]\jj=\kappa\, d{\mathbb P}si_{M_{\kappa}}[0,\xi]\jj\text{ and }\operatorname{\mathbf{curl}} d{\mathbb P}si_{M_{\kappa}}[0,\xi]\jj=\kappa\, d{\mathbb P}si_{E_{\kappa}}[0,\xi]\jj. $$ It follows that $d{\mathbb P}si_{E_{\kappa}}[0,\xi]\jj$ and $ d{\mathbb P}si_{M_{\kappa}}[0,\xi]\jj$ are in $\HH(\operatorname{\mathbf{curl}},\Omega)\cup\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega^c})$ and that they satisfy the Maxwell equations and the Silver-M\"uller condition.
\end{proof} We recall from Section~\ref{Helmholtzdec} that with the notation of Section~\ref{SurfDiffOp} the operator $C_{\kappa}(r)$ admits the following representation \begin{equation} C_{\kappa}(r)\,\jj=\label{C}\mathbf{P_{r}}C^{r}_{\kappa}\mathbf{P_{r}}^{-1}\jj=\nabla_{\Gamma}P(r)+\operatorname{\mathbf{curl}}_{\Gamma}Q(r), \end{equation} where \begin{equation*} \begin{array}{lcl}P(r)&=&-\kappa\;(\mathcal{L}^{*}(r))^{-1}\mathcal{R}^{*}(r)\left(\tau_{r}V^{r}_{\kappa}\tau_{r}^{-1}\right)\left[\mathcal{G}(r)p+\boldsymbol{\mathcal{R}}(r)q\right]\end{array} \end{equation*} and \begin{equation*} \begin{array}{lcl}Q(r)&=&-\kappa\;(\mathcal{L}^{*}(r))^{-1}\mathcal{D}^{*}(r)\pi(r)\tau_{r}^{-1}\left(\tau_{r}V^{r}_{\kappa}\tau_{r}^{-1}\right)\left[\mathcal{G}(r)p+\boldsymbol{\mathcal{R}}(r)q\right] \\ &&+\dfrac{1}{\kappa}\left(\tau_{r}V^{r}_{\kappa}\tau_{r}^{-1}\right)\left(\tau_{r}\Delta_{\Gamma_{r}}(\tau_{r}^{-1}p)\right).\end{array} \end{equation*} Let $\jj\in \TT\HH^{s}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ and let $\jj=\nabla_{\Gamma}\;p+\operatorname{\mathbf{curl}}_{\Gamma}\;q$ be its Helmholtz decomposition. We want to derive $$ \begin{array}{ll}{\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}\jj&={\mathbb P}p_{r}C^{r}_{\kappa}(\nabla_{\Gamma_{r}}\tau_{r}^{-1}p+\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1}q)\\&={\mathbb P}p_{r}(\nabla_{\Gamma_{r}}P_{r}+\operatorname{\mathbf{curl}}_{\Gamma_{r}}Q_{r})\\&=\nabla_{\Gamma}P(r)+\operatorname{\mathbf{curl}}_{\Gamma}Q(r).\end{array} $$ We find $$ dC_{\kappa}[0,\xi]\jj=\nabla_{\Gamma}dP[0,\xi]+\operatorname{\mathbf{curl}}_{\Gamma}dQ[0,\xi]. $$ Thus the derivative with respect to $r$ of $\mathbf{P_{r}}C^{r}_{\kappa}\mathbf{P_{r}}^{-1}\jj$ is given by the derivatives of the functions $P(r)=\tau_{r}(P_{r})$ and of $Q(r)=\tau_{r}(Q_{r})$.\newline We also note that for an $r$-dependent vector function $f(r)$ on $\Gamma$ there holds $$ d \{\pi(r)\tau_{r}^{-1}f(r)\}[0,\xi]=\pi(0) df[0,\xi]. $$ By composition of infinitely differentiable mappings we obtain the following theorem. \begin{theorem} The mapping: $$ \begin{array}{lcl}B^{\infty}(0,\varepsilon)&\rightarrow &\mathscr{L}\left(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma),\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)\right)\\r&\mapsto&{\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}\end{array} $$ is infinitely G\^ateaux differentiable. The derivatives can be written in explicit form by differentiating the kernel of the operator $C_{\kappa}^r$, see \cite[Corollary 4.5]{CostabelLeLouer}, and by using the formulas for the derivatives of the surface differential operators given in Section~\ref{SurfDiffOp}. 
\end{theorem} Similarly, recall that the operator ${\mathbb P}p_{r}M^{r}_{\kappa}\mathbf{P_{r}}^{-1}$ admits the following representation: \begin{equation*}\label{M} \mathbf{P_{r}}M^{r}_{\kappa}\mathbf{P_{r}}^{-1}\jj=\nabla_{\Gamma}P'(r)+\operatorname{\mathbf{curl}}_{\Gamma}Q'(r), \end{equation*} where \begin{equation*}\label{P'} \begin{array}{rl}P'(r)=&(\mathcal{L}^{*}(r))^{-1}\big(\kappa^2J_{r}\cdot\tau_{r}\nn_{r}\cdot (\tau_{r}V^r_{\kappa}\tau_{r}^{-1})\left[\mathcal{G}(r)p+\boldsymbol{\mathcal{R}}(r)q\right]\big) \\& +(\mathcal{L}^{*}(r))^{-1}(J_{r}\cdot\tau_{r}D_{\kappa}^{r}\tau_{r}^{-1})(\tau_{r}\Delta_{\Gamma_{r}}(\tau_{r}^{-1}p)) \end{array} \end{equation*} and \begin{equation*} \begin{array}{ll} Q'(r)=(\mathcal{L}^{*}(r))^{-1}\mathcal{R}^{*}(r)(\tau_{r}(B_{\kappa}^{r}-D_{\kappa}^{r})\tau_{r}^{-1})\left[\mathcal{G}(r)p+\boldsymbol{\mathcal{R}}(r)q\right] \end{array} \end{equation*} with \begin{equation*} \begin{aligned} \tau_{r}B_{\kappa}^r{\mathbb P}p_{r}^{-1}\jj &=\tau_{r}\left\{\displaystyle{\int_{\Gamma_{r}}\nabla G(\kappa,|\cdot-y_{r}|)\left(\nn_{r}(\,\cdot\,)\cdot(\nabla_{\Gamma_{r}}\tau_{r}^{-1} p)(y_{r})\right)ds(y_{r})}\right. \\ &\quad +\left.\displaystyle{\int_{\Gamma_{r}}\nabla G(\kappa,|\cdot-y_{r}|)\left(\nn_{r}(\,\cdot\,)\cdot(\operatorname{\mathbf{curl}}_{\Gamma_{r}}\tau_{r}^{-1} q)(y_{r})\right)ds(y_{r})}\right\}. \end{aligned} \end{equation*} \begin{theorem} The mapping: $$ \begin{array}{lcl}B^{\infty}(0,\varepsilon)&\rightarrow& \mathscr{L}\left(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma),\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)\right)\\r&\mapsto&{\mathbb P}p_{r}M^{r}_{\kappa}{\mathbb P}p_{r}^{-1}\end{array} $$ is infinitely G\^ateaux differentiable. The G\^ateaux derivatives have the same regularity as $M_{\kappa}$, so that they are compact operators in $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$. The derivatives can be written in explicit form by differentiating the kernel of the operator $M_{\kappa}^r$, see \cite[Corollary 4.5]{CostabelLeLouer}, and by using the formulas for the derivatives of the surface differential operators given in Section~\ref{SurfDiffOp}. \end{theorem} \begin{proof} The differentiability of the double layer boundary integral operator is established in \cite[Example 4.10]{CostabelLeLouer}. It remains to prove the infinite G\^ateaux differentiability of the mapping $$ \begin{array}{lcl}B^{\infty}(0,\varepsilon)&\rightarrow& \mathscr{L}\left(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma),\HH^{\frac{1}{2}}(\Gamma)\right)\\r&\mapsto&\tau_{r}B^r_{\kappa}{\mathbb P}p_{r}^{-1}.\end{array} $$ The function $(x,y-x)\mapsto \nabla G(\kappa,|x-y|)$ is pseudo-homogeneous of class 0. We then have to prove that for any fixed $(x,y)\in(\Gamma\times\Gamma)^*$ and any function $p\in H^{\frac{3}{2}}(\Gamma)$ the G\^ateaux derivatives of $$ r\mapsto (\tau_{r}\nn_{r})(x)\cdot\left(\tau_{r}\nabla_{\Gamma_{r}}\tau_{r}^{-1}p\right)(y) $$ behave as $|x-y|^2$ when $x-y$ tends to zero. To do so, either we write $$ (\tau_{r}\nn_{r})(x)\cdot\left(\tau_{r}\nabla_{\Gamma_{r}}\tau_{r}^{-1}p\right)(y)=\left((\tau_{r}\nn_{r})(x)-(\tau_{r}\nn_{r})(y)\right)\cdot\left(\tau_{r}\nabla_{\Gamma_{r}}\tau_{r}^{-1}p\right)(y) $$ or we use Lemmas \ref{N} and \ref{nabla} and straightforward computations.
\end{proof} \begin{theorem} \label{shapederiv} Assume that $\EE^{inc}\in\HH^1_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\R^3)$ and that the mappings $$ \begin{array}{rcl}B^{\infty}(0,\varepsilon)&\rightarrow&\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)\\ r&\mapsto&{\mathbb P}p_{r}\left(\nn_{r}\wedge\EE^{inc}_{|\Gamma_{r}}\right)\\ r&\mapsto& {\mathbb P}p_{r}\left(\nn_{r}\wedge\left(\operatorname{\mathbf{curl}}\EE^{inc}\right)_{|{\Gamma_{r}}}\right)\end{array} $$ are G\^ateaux differentiable at $r=0$. Then the mapping from $r\in B^{\infty}(0,\varepsilon)$ to the solution $\mathscr{E}(r)=\EE(\Omega_{r})\in\HH(\operatorname{\mathbf{curl}},\Omega_{r})\cup\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega_{r}^c})$ of the scattering problem by the obstacle $\Omega_{r}$ is G\^ateaux differentiable at $r=0$. \end{theorem} \begin{proof} We use the integral equation method described in Theorem~\ref{thT}. Let $\jj$ be the solution of the integral equation \eqref{SS}. By composition of infinitely differentiable mappings we see that $$ \begin{array}{lcl}B^{\infty}(0,\varepsilon)&\rightarrow &\mathscr{L}\left(\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma),\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)\right)\\r&\mapsto&\boldsymbol{\mathsf S}(r)={\mathbb P}p_{r}\boldsymbol{\mathsf S}^r{\mathbb P}p_{r}^{-1}\end{array} $$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable. Then with \eqref{u2c} and \eqref{SS} we get for the exterior field $\mathscr{E}^s$ \begin{equation*} \begin{split} d \mathscr{E}^s[0,\xi]=\;&\left(-d{\mathbb P}si_{E_{\kappa_{e}}}[0,\xi]-i\eta d{\mathbb P}si_{M_{\kappa_{e}}}[0,\xi]C^{*}_{0}-i\eta{\mathbb P}si_{M_{\kappa_{e}}}d C^{*}_{0}[0,\xi]\right)\jj \\ &+(-{\mathbb P}si_{E_{\kappa_{e}}}-i\eta{\mathbb P}si_{M_{\kappa_{e}}}C^*_{0})\boldsymbol{\mathsf S}^{-1}\big(-d \boldsymbol{\mathsf S}[0,\xi]\,\jj\big) \\ &+(-{\mathbb P}si_{E_{\kappa_{e}}}-i\eta{\mathbb P}si_{M_{\kappa_{e}}}C^*_{0})\boldsymbol{\mathsf S}^{-1}\left(-\rho d M_{\kappa_{i}}[0,\xi]\gamma_{D}\EE^{inc}- d C_{\kappa_{i}}[0,\xi]\gamma_{N_{\kappa_{e}}}\EE^{inc}\right) \\ &+(-{\mathbb P}si_{E_{\kappa_{e}}}-i\eta{\mathbb P}si_{M_{\kappa_{e}}}C^*_{0})\boldsymbol{\mathsf S}^{-1}\left(-\rho\big(\tfrac{1}{2}+M_{\kappa_{i}}\big)d\left\{ {\mathbb P}p_{r}\gamma_{D}^{r}\EE^{inc}\right\}[0,\xi]\right) \\ &+(-{\mathbb P}si_{E_{\kappa_{e}}}-i\eta{\mathbb P}si_{M_{\kappa_{e}}}C^*_{0})\boldsymbol{\mathsf S}^{-1}\left(-C_{\kappa_{i}}d\left\{ {\mathbb P}p_{r}\gamma_{N_{\kappa_{e}}}^{r}\EE^{inc}\right\}[0,\xi]\right). \end{split} \end{equation*} We know that $\jj\in\TT\HH^{\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$, so that the first three terms on the right-hand side are in $\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega^c})$, and the hypotheses guarantee that the last two terms are in $\HH_{\mathrm{loc}}(\operatorname{\mathbf{curl}},\overline{\Omega^c})$ as well.
For the interior field we write \begin{equation*} \begin{split} d \mathscr{E}^{i}[0,\xi]=\;&-\frac{1}{\rho}d {\mathbb P}si_{E_{\kappa_{i}}}[0,\xi]\gamma_{N_{\kappa_{e}}}^c\left(\EE^{s}+\EE^{inc}\right)-d{\mathbb P}si_{M_{\kappa_{i}}}[0,\xi]\gamma_{D}^c\left(\EE^{s}+\EE^{inc}\right) \\ &-\frac{1}{\rho}{\mathbb P}si_{E_{\kappa_{i}}}d\hspace{-.5mm}\left\{{\mathbb P}p_{r}\gamma_{N_{\kappa_{e}}}^{c,r}\left(\mathscr{E}^s(r)+\EE^{inc}\right)\right\}[0,\xi]\\ &\qquad\qquad-{\mathbb P}si_{M_{\kappa_{i}}}d\hspace{-.5mm}\left\{{\mathbb P}p_{r}\gamma_{D}^{c,r}\left(\mathscr{E}^s(r)+\EE^{inc}\right)\right\}\hspace{-1mm}[0,\xi]. \end{split} \end{equation*} The hypotheses guarantee that $\gamma_{N_{\kappa_{e}}}^c\left(\EE^{s}+\EE^{inc}\right)$ and $\gamma_{D}^c\left(\EE^{s}+\EE^{inc}\right)$ are in $\TT\HH^{\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$, which implies that the first two terms are in $\HH(\operatorname{\mathbf{curl}},\Omega)$, and that the last two terms are in $\HH(\operatorname{\mathbf{curl}},\Omega)$. \end{proof} \begin{theorem} The mapping sending $r\in B^{\infty}(0,\varepsilon)$ to the far field pattern $\EE^{\infty}(\Omega_{r})\in\TT\mathscr{C}^{\infty}(S^2)$ of the solution to the scattering problem by the obstacle $\Omega_{r}$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable. \end{theorem} \begin{proof} The mapping $B^{\infty}(0,\varepsilon)\ni r\mapsto\left\{(\hat{x},y)\mapsto e^{i\kappa\hat{x}\cdot(y+r(y))}\right\}\in \mathscr{C}^{\infty}(S^2\times\Gamma)$ is $\mathscr{C}^{\infty}$-G\^ateaux differentiable and the derivatives define smooth kernels. By the linearity of the integral we deduce that the boundary--to--far--field operators $$ \begin{array}{lcl} B_{\delta}&\rightarrow&\mathscr{L}(\TT\HH^s(\operatorname{\mathrm{div}}_{\Gamma},\Gamma),\mathscr{C}^{\infty}(S^2)) \\ r&\mapsto&{\mathbb P}si^{\infty}_{E_{\kappa}}(r)={\mathbb P}si^{\infty,r}_{E_{\kappa}}\tau_{r}^{-1} \\ r&\mapsto&{\mathbb P}si^{\infty}_{M_{\kappa}}(r)={\mathbb P}si_{M_{\kappa}}^{\infty,r}\tau_{r}^{-1} \end{array} $$ are $\mathscr{C}^{\infty}$-G\^ateaux differentiable. For $\jj\in~\TT\HH^s(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ we have: $$ d{\mathbb P}si_{E_{\kappa}}^{\infty}[0,\xi]\jj(\hat{x})=i\kappa\hat{x}\wedge\left(\int_{\Gamma}e^{-i\kappa\hat{x}\cdot y}\big(\operatorname{\mathrm{div}}_{\Gamma}\xi(y)-i\kappa\hat{x}\cdot\xi(y)\big)\jj(y)ds(y)\right)\wedge\hat{x}, $$ and $$ d{\mathbb P}si_{M_{\kappa}}^{\infty}[0,\xi]\jj(\hat{x})=\kappa\hat{x}\wedge\left(\int_{\Gamma}e^{-i\kappa\hat{x}\cdot y}\big(\operatorname{\mathrm{div}}_{\Gamma}\xi(y)-i\kappa\hat{x}\cdot\xi(y)\big)\jj(y)ds(y)\right). $$ We conclude by using the integral representation of $\EE^{\infty}(\Omega_{r})$ and previous theorems. \end{proof} \subsection{Characterization of the first derivative} The following theorem gives a caracterization of the first G\^ateaux derivative of $r\mapsto\mathscr{E}(r)$ at $r=0$. \begin{theorem} \label{T:firstder} Under the hypotheses of Theorem {\rm \ref{shapederiv}}, the first derivative of the solution $\mathscr{E}(r)$ of the dielectric scattering problem at $r=0$ in the direction $\xi\in\mathscr{C}^{\infty}(\Gamma,\R^3)$ solves the following transmission problem : \begin{equation} \left\{\begin{aligned} \operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} d \mathscr{E}^{i}[0,\xi]-\kappa_{i}^2 d\mathscr{E}^{i}[0,\xi]=0 \\ \operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}} d \mathscr{E}^{s}[0,\xi]-\kappa_{e}^2 d\mathscr{E}^{s}[0,\xi]=0 \end{aligned}\right. 
\end{equation} with the interface conditions \begin{equation} \label{E:firstder} \left\{\begin{aligned} \nn\wedge d\mathscr{E}^{i}[0,\xi]-\nn\wedge d\mathscr{E}^{s}[0,\xi]=g_{D} \\ \mu_{i}^{-1}\nn\wedge\operatorname{\mathbf{curl}} d\mathscr{E}^{i}[0,\xi]-\mu_{e}^{-1}\nn\wedge\operatorname{\mathbf{curl}} d\mathscr{E}^{s}[0,\xi]=g_{N}, \end{aligned}\right. \end{equation} where with the solution $(\EE^{i},\EE^{s})$ of the scattering problem, \begin{equation*} \begin{split} g_{D}=&-\left(\xi\cdot\nn\right)\Big(\nn\wedge\operatorname{\mathbf{curl}}\EE^{i}-\nn\wedge\operatorname{\mathbf{curl}}(\EE^s+\EE^{inc})\Big)\wedge\nn \\ &+\operatorname{\mathbf{curl}}_{\Gamma}\Big((\xi\cdot\nn)\big(\nn\cdot\EE^{i}-\nn\cdot(\EE^s+\EE^{inc})\big)\Big), \end{split} \end{equation*} and \begin{equation*} \begin{split} g_{N}=&-\left(\xi\cdot\nn\right)\left(\dfrac{\kappa_{i}^2}{\mu_{i}}\nn\wedge\EE^{i}-\dfrac{\kappa_{e}^2}{\mu_{e}}\nn\wedge(\EE^s+\EE^{inc})\right)\wedge\nn \\ &+\operatorname{\mathbf{curl}}_{\Gamma}\Big((\xi\cdot\nn)\;\left(\mu_{i}^{-1}\operatorname{\mathrm{curl}}_{\Gamma}\EE^{i}-\mu_{e}^{-1}\operatorname{\mathrm{curl}}_{\Gamma}(\EE^s+\EE^{inc})\right)\Big), \end{split} \end{equation*} and $d\mathscr{E}^{s}[0,\xi]$ satisfies the Silver-M\"uller radiation condition. \end{theorem} \begin{proof} We have shown in the previous paragraph that the potential operators and their G\^ateaux derivatives satisfy the Maxwell equations and the Silver-M\"uller radiation condition. It remains to compute the boundary conditions. They can be obtained from the integral representation, but this is rather tedious. A simpler way consists in deriving for a fixed $x\in\Gamma$ the expression \begin{equation}\label{bvder} \nn_{r}(x+r(x))\wedge\left(\mathscr{E}^i(r)(x+r(x))-\mathscr{E}^s(r)(x+r(x))-\EE^{inc}(x+r(x))\right)=0. \end{equation} This gives $$ \begin{aligned} 0=\;&d\mathcal{N}[0,\xi](x)\wedge\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right) \\ &+\nn(x)\wedge\left(d\mathscr{E}^i[0,\xi](x)-d\mathscr{E}^s[0,\xi](x)\right) \\ &+\nn\wedge\left(\xi(x)\cdot\nabla\left(\EE^{i}-\EE^s-\EE^{inc}\right)\right). \end{aligned} $$ We now use the explicit form of the shape derivatives of the normal vector given in Lemma~\ref{N} : $d\mathcal{N}[0,\xi]=-\left[\nabla_{\Gamma}\xi\right]\nn$, and the formula $ \nabla u=\nabla_{\Gamma}u+\left(\frac{\partial u}{\partial \nn}\right)\nn. $ We obtain \begin{multline*} \nn(x)\wedge\left(d\mathscr{E}^i[0,\xi](x)-d\mathscr{E}^s[0,\xi](x)\right)=\\ \left[\nabla_{\Gamma}\xi\right]\nn\wedge\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right) \\ -\nn\wedge\left(\xi(x)\cdot\nabla_{\Gamma}\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right)\right) \\ -(\xi\cdot\nn)\nn\wedge\dfrac{\partial}{\partial\nn}\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right). \end{multline*} Since the tangential component of $\EE^{i}-\EE^s-\EE^{inc}$ vanishes, we have \begin{multline*} \left(\xi(x)\cdot\nabla_{\Gamma}\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right)\right)\\ =\left(\nn\cdot\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right)\right)\left(\left[\transposee{\nabla_{\Gamma}\nn}\right]\xi\right) \end{multline*} and \begin{multline*} \left(\left[\nabla_{\Gamma}\xi\right]\nn\right)\wedge\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right)\\ =\left(\left[\nabla_{\Gamma}\xi\right]\nn\right)\wedge\nn\left(\EE^{i}(x)-\EE^s(x)-\EE^{inc}(x)\right)\cdot\nn. 
\end{multline*} Since we are on a regular surface, we have $\nabla_{\Gamma}\nn=\transposee{\nabla_{\Gamma}\nn}$ and $$ \left(\left[\nabla_{\Gamma}\xi\right]\nn\right)\wedge\nn-\nn\wedge\left(\left[\transposee{\nabla_{\Gamma}\nn}\right]\xi\right)=\operatorname{\mathbf{curl}}_{\Gamma}\left(\xi\cdot\nn\right). $$ Using the expansion (see \cite[p.\ 75]{Nedelec}) $$ \operatorname{\mathbf{curl}}\uu=\left(\operatorname{\mathrm{curl}}_{\Gamma}\uu\right)\nn+\operatorname{\mathbf{curl}}_{\Gamma}\left(\uu_{\Gamma}\cdot\nn\right)-\left([\nabla_{\Gamma}\nn]\uu\right)\wedge\nn-\left(\frac{\partial\uu}{\partial\nn}\right)\wedge\nn $$ we obtain that $$ \begin{aligned} -\nn\wedge\left(\gamma_{n}\EE^{i}-\gamma_{n}^c\big(\EE^s+\EE^{inc}\big)\right)&=-\nn\wedge\left(\gamma\operatorname{\mathbf{curl}}\EE^{i}-\gamma^c\operatorname{\mathbf{curl}}\big(\EE^s+\EE^{inc}\big)\right)\wedge\nn \\ &\qquad+\operatorname{\mathbf{curl}}_{\Gamma}\left(\nn\cdot\left(\gamma\EE^{i}-\gamma^c\big(\EE^s+\EE^{inc}\big)\right)\right). \end{aligned} $$ Here we used that the curvature operator $[\nabla_{\Gamma}\nn]$ acts on the tangential component of vector fields, so that $$ [\nabla_{\Gamma}\nn]\left(\gamma\EE^{i}-\gamma^c\big(\EE^s+\EE^{inc}\big)\right)=0. $$ Thus we have \begin{equation*} \begin{split} g_{D}=&-\left(\xi\cdot\nn\right)\nn\wedge\left(\gamma\operatorname{\mathbf{curl}}\EE^{i}-\gamma^c\operatorname{\mathbf{curl}}\big(\EE^s+\EE^{inc}\big)\right)\wedge\nn \\ &+\operatorname{\mathbf{curl}}_{\Gamma}\left((\xi\cdot\nn)\big(\nn\cdot\gamma\EE^{i}-\nn\cdot\gamma^c(\EE^s+\EE^{inc})\big)\right). \end{split} \end{equation*} To obtain the second transmission condition, we use similar computations with the electric field $\EE$ replaced by the magnetic field $\dfrac{1}{i\omega\mu}\operatorname{\mathbf{curl}}\EE$. \end{proof} \section{Perspectives: Non-smooth boundaries} We have presented a complete differentiability analysis of the electromagnetic integral operators with respect to smooth deformations of a smooth boundary in the framework of Sobolev spaces. Using the boundary integral equation approach we have established that the far-field pattern of the dielectric scattering problem is infinitely differentiable with respect to the deformations and we gave a characterization of the first derivative as the far-field pattern of a new transmission problem. In the case of a non-smooth boundary --- a polyhedral or more generally a Lipschitz boundary --- the formulas determining the first derivative given in Theorem~\ref{T:firstder} are problematic. The normal vector field $\nn$ will have discontinuities, and the factor $\xi\cdot\nn$ and vector product with $\nn$ that appear in the right-hand side of \eqref{E:firstder} may not be well defined in the energy trace spaces. It is, however, known for the acoustic case that the far field is infinitely shape differentiable for non-smooth boundaries, too, see \cite{Hohage} for a proof via the implicit function theorem. Our procedure of using a boundary integral representation gives an alternative way of characterizing the shape derivatives of the solution of the dielectric scattering problem and of its far field. We do not require the computation of the boundary traces of the solution and taking tangential derivatives and multiplication by possibly discontinuous factors. Instead we determine the shape derivatives of the boundary integral operators. 
While the study of G\^ateaux differentiability of boundary integral operators, for the case of smooth deformations of a Lipschitz domain, is still an open problem that will require further work, our approach via Helmholtz decompositions seems to be a promising starting point for tackling this question. Let us briefly indicate why we think this is so. We consider the case where the boundary $\Gamma$ is merely Lipschitz, but the deformation is defined by a vector field $\xi$ that is smooth (at least $\mathscr{C}^1$) in a neighborhood of $\Gamma$. Note that the reduction to purely normal displacements that is often used for studying shape optimization problems for smooth boundaries does not make sense here, as soon as there are corners present. In this situation, many of the ingredients of our toolbox are still available. Here are some of them. First, the change of variables mapping $\tau_{r}$ still defines an isomorphism between $H^{\frac{1}{2}}(\Gamma_{r})$ and $H^{\frac{1}{2}}(\Gamma)$. By duality, we see that the mapping $u_{r}\mapsto J_{r}\tau_{r}u_{r}$ defines an isomorphism between $H^{-\frac{1}{2}}(\Gamma_{r})$ and $H^{-\frac{1}{2}}(\Gamma)$. When we want to transport the energy space $\TT\HH^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})$, we can still use Helmholtz decomposition. Namely, the following result is known, see \cite{BuffaCiarlet,BuffaCiarlet2,BuffaCostabelSchwab,BuffaCostabelSheen,BuffaHiptmairPetersdorffSchwab}. \begin{lemma} \label{HelmLip} Assume that $\Gamma$ is a simply connected closed Lipschitz surface. The Hilbert space $\TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ admits the following Helmholtz decomposition: $$ \TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)= \nabla_{\Gamma}\mathcal{H}(\Gamma)\oplus {\operatorname{\mathbf{curl}}}_{\Gamma}\,H^{\frac{1}{2}}(\Gamma). $$ where $$ \mathcal{H}(\Gamma)=\{u\in H^1(\Gamma)\;:\;\Delta_{\Gamma}u\in H^{-\frac{1}{2}}(\Gamma)\}. $$ \end{lemma} The notation $\TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ recalls the fact that special care has to be taken for the definition of the energy space. A natural idea for the transport of the energy trace space is then, instead of \eqref{Prsmooth}, to define \begin{equation*} \begin{array}{rrcl} {\mathbb P}p_{r}:&\TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})&\longrightarrow& \TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)\\ &\nabla_{\Gamma_{r}}\;p_{r}+\operatorname{\mathbf{curl}}_{\Gamma_{r}}\;q_{r}&\mapsto&\nabla_{\Gamma}\;\Delta_{\Gamma}^{-1}\left(J_{r}\big(\tau_{r}\Delta_{\Gamma_{r}}p_{r}\big)\right)+\operatorname{\mathbf{curl}}_{\Gamma}(\tau_{r}q_{r}). \end{array} \end{equation*} This is justified by the sequence of isomorphisms $$ \begin{array}{ccccccc} \mathcal{H}(\Gamma_{r})/\C&\displaystyle{ \stackrel{\Delta_{\Gamma_{\hspace{-.5mm}r}}\;}{\longrightarrow}}&H^{-\frac{1}{2}}_{*}(\Gamma_{r})& {\longrightarrow}&H^{-\frac{1}{2}}_{*}(\Gamma)&\stackrel{\Delta_{\Gamma}^{-1}}{\longrightarrow}& \mathcal{H}(\Gamma)/\C\\ p_{r}&\mapsto&\Delta_{\Gamma_{r}}p_{r}&\mapsto&J_{r}(\tau_{r}\Delta_{\Gamma_{r}}p_{r})&\mapsto&\Delta_{\Gamma}^{-1}\big(J_{r}(\tau_{r}\Delta_{\Gamma_{r}}p_{r})\big). 
\end{array} $$ The inverse of the transformation ${\mathbb P}p_{r}$ is given by \begin{equation*} \begin{array}{rrcl} {\mathbb P}p_{r}^{-1}:&\TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)&\longrightarrow& \TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma_{r}},\Gamma_{r})\\ &\nabla_{\Gamma}\;p+\operatorname{\mathbf{curl}}_{\Gamma}\;q&\mapsto&\nabla_{\Gamma_{r}}\;\tau_{r}^{-1}\big(\mathcal{L}^{*}(r)\big)^{-1}\Delta_{\Gamma}p+\operatorname{\mathbf{curl}}_{\Gamma_{r}}(\tau_{r}^{-1}q). \end{array} \end{equation*} In this situation it seems to be more convenient to rewrite the operators ${\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$ and ${\mathbb P}p_{r}M^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$ as operators acting on the scalar fields $p^{*}=\Delta_{\Gamma}p\in H^{-\frac{1}{2}}_{*}(\Gamma)$ and $q\in H^{\frac{1}{2}}(\Gamma)/\C$ instead of $p$ and $q$. For example, the operator ${\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$ is defined for $\jj=\nabla_{\Gamma}\Delta_{\Gamma}^{-1}p^{*}+\operatorname{\mathbf{curl}}_{\Gamma}q\in \TT\HH_{\|}^{-\frac{1}{2}}(\operatorname{\mathrm{div}}_{\Gamma},\Gamma)$ by $$ {\mathbb P}p_{r}C^{r}_{\kappa}{\mathbb P}p_{r}^{-1}\jj=\nabla_{\Gamma}\Delta_{\Gamma}^{-1}P^{*}(r)+\operatorname{\mathbf{curl}}_{\Gamma}Q(r), $$ with \begin{equation*} \begin{array}{lcl}P^{*}(r)&=&-\kappa\;\mathcal{R}^{*}(r)\left(\tau_{r}V^{r}_{\kappa}\tau_{r}^{-1}\right)\left[\mathcal{G}(r)\big(\mathcal{L}^{*}(r)\big)^{-1}p^{*}+\boldsymbol{\mathcal{R}}(r)q\right]\end{array} \end{equation*} and \begin{equation*} \begin{array}{lcl}Q(r)&=&-\kappa\;(\mathcal{L}^{*}(r))^{-1}\mathcal{D}^{*}(r)\pi(r)\left(\tau_{r}V^{r}_{\kappa}\tau_{r}^{-1}\right)\left[\mathcal{G}(r)\big(\mathcal{L}^{*}(r)\big)^{-1}p^{*}+\boldsymbol{\mathcal{R}}(r)q\right] \\ &&+\dfrac{1}{\kappa}\left(\tau_{r}V^{r}_{\kappa}\tau_{r}^{-1}\right)\left(J_{r}^{-1}p^{*}\right). \end{array} \end{equation*} Here we have used the same notation for the surface differential operators as introduced in Section~\ref{SurfDiffOp}. These formulas together with similar ones for the operator ${\mathbb P}p_{r}M^{r}_{\kappa}{\mathbb P}p_{r}^{-1}$ can now be the starting point for a generalization of the analysis of shape differentiability of the Maxwell boundary integral operators to Lipschitz domains. We expect that the results for the differentiability of the surface differential operators and then also of the boundary integral operators will be similar to what we have obtained for the case of smooth domains. This is, however, far from trivial and will require further work. \end{document}
\begin{document} \title{\textbf{Moderate deviations of generalized $N$-urn Ehrenfest models}} \author{Lirong Ren \thanks{\textbf{E-mail}: [email protected] \textbf{Address}: School of Science, Beijing Jiaotong University, Beijing 100044, China.}\\ Beijing Jiaotong University\\ Xiaofeng Xue \thanks{\textbf{E-mail}: [email protected] \textbf{Address}: School of Science, Beijing Jiaotong University, Beijing 100044, China.}\\ Beijing Jiaotong University} \date{} \maketitle \noindent {\bf Abstract:} This paper is a further investigation of the generalized $N$-urn Ehrenfest model introduced in \cite{Xue2020}. A moderate deviation principle from the hydrodynamic limit of the model is derived. The proof of this main result follows a routine procedure introduced in \cite{Kipnis1989}, where a replacement lemma plays the key role. To prove the replacement lemma, the large deviation principle of the model given in \cite{Xue2020} is utilized. \quad \noindent {\bf Keywords:} hydrodynamic limit, $N$-urn Ehrenfest model, moderate deviation, replacement lemma. \section{Introduction and main results}\label{section one} In this paper we will prove a moderate deviation principle from the hydrodynamic limit of the generalized $N$-urn Ehrenfest model introduced in \cite{Xue2020}. We first recall the definition of the model. Initially some gas molecules are put into $N$ boxes, where $N\geq 2$ is an integer. We assume that the numbers of gas molecules in different boxes are independent and that the number of gas molecules in the $i$th box follows a Poisson distribution with mean $\phi(\frac{i}{N})$ for $1\leq i\leq N$, where $\phi$ is a positive function in $C\left([0, 1]\right)$. For $1\leq i,j\leq N$, each gas molecule in the $i$th box jumps to the $j$th box at rate $\frac{1}{N}\lambda\left(\frac{i}{N}, \frac{j}{N}\right)$, where $\lambda$ is a positive function in $C^{1,1}\left([0, 1]\times[0, 1]\right)$. When $\lambda\equiv 1$, the above model reduces to the classic $N$-urn Ehrenfest model introduced in \cite{Cheng2020}. For any $t\geq 0$, let $X_t^N(i)$ be the number of gas molecules in the $i$th box at time $t$ and \[ X_t^N=\left(X_t^N(1), X_t^N(2),\ldots, X_t^N(N)\right), \] then $\{X_t^N\}_{t\geq 0}$ is a continuous-time Markov process with state space $\{0, 1,2,\ldots\}^N$ and generator $\mathcal{L}_N$ given by \[ \mathcal{L}_Nf(x)=\sum_{i=1}^N\sum_{j=1}^N\frac{x(i)}{N}\lambda\left(\frac{i}{N}, \frac{j}{N}\right)\left[f(x^{i,j})-f(x)\right] \] for any $f\in C\left(\{0, 1,2,\ldots\}^N\right)$ and $x\in \{0, 1,2,\ldots\}^N$, where $x^{i,j}=x$ when $i=j$ and \[ x^{i,j}(l)= \begin{cases} x(l) & \text{~if~}l\neq i,j,\\ x(i)-1 & \text{~if~}l=i,\\ x(j)+1 & \text{~if~}l=j \end{cases} \] when $i\neq j$. Now we recall the hydrodynamic limit of $\{X_t^N\}_{t\geq 0}$ given in \cite{Xue2020}. For each $N\geq 1$ and any $t\geq 0$, we define the empirical measure $\mu_t^N$ as \[ \mu_t^N(du)=\frac{1}{N}\sum_{i=1}^NX_t^N(i)\delta_{\frac{i}{N}}(du), \] where $\delta_{\frac{i}{N}}(du)$ is the Dirac measure concentrated at $\frac{i}{N}$. That is to say, $\mu_t^N$ is a random linear operator from $C([0, 1])$ to $\mathbb{R}$ such that \[ \mu_t^N(f)=\int_{[0, 1]} f(u)\mu_t^N(du)=\frac{1}{N}\sum_{i=1}^NX_t^N(i)f(\frac{i}{N}) \] for any $f\in C([0, 1])$.
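Although no simulations are needed for our proofs, the dynamics just described is easy to simulate, which may help the reader form a concrete picture of $X_t^N$ and $\mu_t^N$. The following Python sketch (ours, purely illustrative) uses the standard Gillespie algorithm; the particular choices of $\phi$, $\lambda$, $N$, the test function and the time horizon are ad hoc and not taken from \cite{Xue2020}.
\begin{verbatim}
import numpy as np

def simulate(N=200, T=1.0, phi=lambda u: 1.0 + u, lam=lambda u, v: 1.0 + u * v, seed=0):
    """Gillespie simulation of the generalized N-urn Ehrenfest model up to time T."""
    rng = np.random.default_rng(seed)
    u = np.arange(1, N + 1) / N                 # box labels i/N
    X = rng.poisson(phi(u))                     # independent Poisson(phi(i/N)) initial data
    L = lam(u[:, None], u[None, :])             # L[i, j] = lambda(i/N, j/N)
    out_rate = L.sum(axis=1) / N                # jump rate of one molecule sitting in box i
    t = 0.0
    while True:
        rates = X * out_rate                    # total jump rate out of each box
        R = rates.sum()
        if R == 0.0:                            # no molecules at all (essentially impossible)
            break
        t += rng.exponential(1.0 / R)
        if t > T:
            break
        i = rng.choice(N, p=rates / R)          # box that loses a molecule
        j = rng.choice(N, p=L[i] / L[i].sum())  # target box (j = i is a harmless null move)
        X[i] -= 1
        X[j] += 1
    return u, X

u, X = simulate()
f = lambda v: v ** 2
print("mu_T^N(f) =", np.mean(X * f(u)))        # (1/N) sum_i X_T(i) f(i/N)
\end{verbatim}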
Let $P_1$ be the linear operator from $C([0, 1])$ to $C([0, 1])$ such that \[ (P_1f)(x)=\int_0^1\lambda(x,y)f(y)dy \] for any $f\in C([0,1]), x\in [0,1]$ and let $P_2$ be the one such that \[ (P_2f)(x)=\int_0^1\lambda(x,y)f(x)dy \] for any $f\in C([0, 1]), x\in [0,1]$; then it is shown in \cite{Xue2020} that there is a unique deterministic measure-valued process $\{\mu_t\}_{t\geq 0}$ such that \begin{equation}\label{equation HydrodynamicLimit} \mu_t(f)=\int_0^1f(x)\phi(x)dx+\int_0^t\mu_s\left((P_1-P_2)f\right)ds \end{equation} for any $t\geq 0$ and $f\in C([0, 1])$. The following proposition is proved in \cite{Xue2020}, which gives the hydrodynamic limit of $\{X_t^N\}_{t\geq 0}$ as $N\rightarrow+\infty$. \begin{proposition}[{\cite[Theorem 2.3]{Xue2020}}]\label{proposition 1.1 HydrodynamicLimit} Let $\mu$ be defined as in Equation \eqref{equation HydrodynamicLimit}, then \[ \lim_{N\rightarrow+\infty}\mu_t^N(f)=\mu_t(f) \] in probability for any $t\geq 0$ and $f\in C([0, 1])$. \end{proposition} In this paper, we are concerned with the moderate deviation principle from the hydrodynamic limit given in Proposition \ref{proposition 1.1 HydrodynamicLimit}. To give our results, we first introduce some notation and definitions. We use $\mathcal{S}$ to denote the dual of $C([0, 1])$, i.e., the set of continuous linear functionals from $C([0, 1])$ to $\mathbb{R}$. For later use, we use $\mathcal{A}$ to denote the subset of $\mathcal{S}$ consisting of nonnegative measures, i.e., $\nu\in \mathcal{A}$ if and only if $\nu(f)\geq 0$ for any nonnegative $f\in C([0, 1])$. For a given $T_0>0$, we use $\mathcal{D}([0, T_0], \mathcal{S})$ to denote the set of c\`{a}dl\`{a}g functions from $[0, T_0]$ to $\mathcal{S}$. For any $\nu\in \mathcal{S}$, we define \begin{equation}\label{equ 1.2 definition of Iini} I_{ini}(\nu)=\sup_{f\in C([0, 1])}\left\{\nu(f)-\frac{1}{2}\int_0^1\phi(x)f^2(x)dx\right\}. \end{equation} For any $\pi\in \mathcal{D}([0, T_0], \mathcal{S})$, we define \begin{align}\label{equ 1.3 defintion of Idyn} I_{dyn}(\pi)&=\sup_{G\in C^{1,0}([0, T_0]\times [0, 1])}\Bigg\{\pi_{T_0}(G_{T_0})-\pi_0(G_0)-\int_0^{T_0}\pi_s\left((\partial_s+P_1-P_2)G_s\right)ds \notag\\ &-\frac{1}{2}\int_0^{T_0}\left(\int_{[0,1]}\left(\int_0^1\lambda(x,y)\left(G_s(y)-G_s(x)\right)^2dy\right)\mu_s(dx)\right)ds\Bigg\}, \end{align} where $\mu$ is defined as in Equation \eqref{equation HydrodynamicLimit} and $G_t(\cdot)=G(t, \cdot)$ for any $G\in C^{1,0}([0, T_0]\times [0, 1]), 0\leq t\leq T_0$. Let $\{a_N\}_{N\geq 1}$ be a given positive sequence such that $\lim_{N\rightarrow+\infty}\frac{a_N}{N}=\lim_{N\rightarrow+\infty}\frac{\sqrt{N}}{a_N}=0$; then we define the random measure $\theta_t^N$ by \[ \theta_t^N(du)=\frac{1}{a_N}\sum_{i=1}^N\left(X_t^N(i)-EX_t^N(i)\right)\delta_{\frac{i}{N}}(du) \] for any $N\geq 1$ and $0\leq t\leq T_0$. We use $\theta^N$ to denote $\{\theta_t^N:~0\leq t\leq T_0\}$; then $\theta^N\in \mathcal{D}([0, T_0], \mathcal{S})$. Now we give our main result.
\begin{theorem}\label{theorem 1.1 MDP}Let $I_{ini}$ and $I_{dyn}$ be defined as in Equations \eqref{equ 1.2 definition of Iini} and \eqref{equ 1.3 defintion of Idyn} respectively, then \begin{equation}\label{equ 1.4 UpperBoundofMDP} \limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\theta^N\in C\right)\leq -\inf_{\pi\in C}\left(I_{ini}(\pi_0)+I_{dyn}(\pi)\right) \end{equation} for any closed set $C\subseteq \mathcal{D}([0, T_0], \mathcal{S})$ and \begin{equation}\label{equ 1.5 LowerBoundofMDP} \liminf_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\theta^N\in O\right)\geq -\inf_{\pi\in O}\left(I_{ini}(\pi_0)+I_{dyn}(\pi)\right) \end{equation} for any open set $O\subseteq \mathcal{D}([0, T_0], \mathcal{S})$. \end{theorem} To make Theorem \ref{theorem 1.1 MDP} easy to catch, our next result gives alternative representation formulas of $I_{ini}$ and $I_{dyn}$. For any $f, g\in C([0, 1])$ and $0\leq t\leq T_0$, we define \[ \langle f|g \rangle_t=\int_{[0,1]}\left(\int_0^1\lambda(x,y)\left(f(y)-f(x)\right)\left(g(y)-g(x)\right)dy\right)\mu_t(dx). \] Furthermore, for any $F, G\in C([0, T_0]\times [0, 1])$, we define \[ \ll F, G\gg=\int_0^{T_0}\langle F_s|G_s \rangle_s ds. \] For $F_1, F_2\in C([0, T_0]\times [0, 1])$, we write $F_1\sim F_2$ when $\ll F_1-F_2, F_1-F_2\gg=0$. We use $\mathcal{H}$ to denote the completion of $C([0, T_0]\times[0, 1])/\sim$ under the inner product $\ll \cdot, \cdot\gg$, then we have the following result. \begin{theorem}\label{theorem 1.3 alternative representation formula} If $\nu\in \mathcal{S}$ makes $I_{ini}(\nu)<+\infty$, then there exists $g\in L^2([0, 1])$ that $\nu(dx)=g(x)\phi(x)dx$ and \[ I_{ini}(\nu)=\nu(g)-\frac{1}{2}\int_0^1\phi(x)g^2(x)dx=\frac{\int_0^1\phi(x)g^2(x)dx}{2}. \] If $\pi\in \mathcal{D}([0, T_0], \mathcal{S})$ makes $I_{dyn}(\pi)<+\infty$, then there exists $F\in \mathcal{H}$ that \[ \pi_{T_0}(G_{T_0})-\pi_0(G_0)-\int_0^{T_0}\pi_s\left((\partial_s+P_1-P_2)G_s\right)ds=\ll G, F\gg \] for any $G\in C^{1,0}([0, T_0]\times [0, 1])$ and $I_{dyn}(\pi)=\frac{\ll F, F\gg}{2}$. \end{theorem} Theorem \ref{theorem 1.3 alternative representation formula} is a routine result since $I_{ini}$ and $I_{dyn}$ are both defined as the supremum of a linear function minus a positive definite quadratic one. The proof of Theorem \ref{theorem 1.3 alternative representation formula} follows the same procedure as those in proofs of analogue results such as Lemma 5.1 of \cite{Kipnis1989} and Equation (2.2) of \cite{Xue2021}, where a crucial step is the utilization of Riesz representation theorem. Hence, we omit the proof of Theorem \ref{theorem 1.3 alternative representation formula} in this paper. The proof of Theorem \ref{theorem 1.1 MDP} follows a routine strategy introduced in \cite{Kipnis1989}, where an exponential martingale plays the key role. A replacement lemma is crucial for the execution of the above strategy, which is the main difficulty we need to overcome in this paper. We prove this replacement lemma according to the large deviation principle of our model given in \cite{Xue2020}. For mathematical details, see Section \ref{section two}. \section{Replacement lemma}\label{section two} In this section we will prove the following replacement lemma. 
\begin{lemma}\label{lemma 2.1 replacement} Let $\mu$ be defined as in Equation \eqref{equation HydrodynamicLimit}, then for any $G\in C([0, T_0]\times[0, 1])$ and $\epsilon>0$, \begin{equation}\label{equation supperexpoential in relacement lemma} \limsup_{N\rightarrow+\infty}\frac{1}{a_N}\log P\left(\sup_{0\leq t\leq T_0}\left|\mu_t^N(G_t)-\mu_t(G_t)\right|\geq \epsilon\right)=-\infty. \end{equation} \end{lemma} The large deviation principle of our model given in \cite{Xue2020} is crucial for the proof of Lemma \ref{lemma 2.1 replacement}, which we recall here. For any $\nu\in \mathcal{S}$, we define \[ J_{ini}(\nu)=\sup_{f\in C([0, 1])}\left\{\nu(f)-\int_0^1\phi(x)\left(e^{f(x)}-1\right)dx\right\}. \] For any $\pi\in \mathcal{D}([0, T_0], \mathcal{S})$, we define \[ J_{dyn}(\pi)=\sup_{G\in C^{1,0}([0, T_0]\times [0, 1])}\left\{\pi_{T_0}(G_{T_0})-\pi_0(G_0)-\int_0^{T_0}\pi_s\left((\partial_s+\mathcal{B})G_s\right)ds\right\}, \] where \[ \mathcal{B}f(x)=\int_0^1\lambda(x, y)\left(e^{f(y)-f(x)}-1\right)dy \] for any $f\in C([0, 1])$ and $x\in [0, 1]$. Then the following upper bound of large deviation principle is given in \cite{Xue2020}. \begin{proposition}[{\cite[Theorem 2.6]{Xue2020}}]\label{proposition 2.2 LargeDeviation} Let $\mu^N=\{\mu_t^N\}_{0\leq t\leq T_0}$, then \[ \limsup_{N\rightarrow+\infty}\frac{1}{N}\log P\left(\mu^N\in C\right)\leq -\inf_{\pi\in C}(J_{ini}(\pi_0)+J_{dyn}(\pi)) \] for any closed set $C\subseteq \mathcal{D}([0, T_0], \mathcal{S})$. \end{proposition} Note that although Reference \cite{Xue2020} adopts the assumption that $\lambda(x,y)=\lambda_1(x)\lambda_2(y)$ for some $\lambda_1, \lambda_2\in C([0, 1])$, this assumption is utilized in the proof of the lower bound of the large deviation principle. The upper bound does not rely on the this assumption. To prove Lemma \ref{lemma 2.1 replacement}, we need the following two lemmas. \begin{lemma}\label{lemma 2.3} If $\pi\in \mathcal{D}([0, T_0], \mathcal{S})$ makes $J_{ini}(\pi_0)+J_{dyn}(\pi)=0$, then $\pi=\mu$. \end{lemma} \begin{lemma}\label{lemma 2.4} For any $0<C<+\infty$, \[ \mathfrak{A}_C:=\left\{\pi\in \mathcal{D}([0, T_0], \mathcal{S}):~J_{ini}(\pi_0)+J_{dyn}(\pi)\leq C\text{~and~}\pi_t\in \mathcal{A}\text{~for all~}0\leq t\leq T_0\right\} \] is compact. \end{lemma} We first utilize Lemmas \ref{lemma 2.3} and \ref{lemma 2.4} to prove Lemma \ref{lemma 2.1 replacement}. \proof[Proof of Lemma \ref{lemma 2.1 replacement}] For any $\epsilon>0$ and given $G\in C^{1,0}([0, T_0]\times[0, 1])$, we define $D_{\epsilon, G}$ as \[ D_{\epsilon, G}=\left\{\pi\in \mathcal{D}([0, T_0], \mathcal{S}):~\sup_{0\leq t\leq T_0}\left|\pi_t(G_t)-\mu_t(G_t)\right|\geq \epsilon\text{~and~}\pi_t\in \mathcal{A}\text{~for all~}0\leq t\leq T_0\right\}. \] Since $\mu^N_t\in \mathcal{A}$ for all $0\leq t\leq T_0$ and $\frac{N^2}{a_N^2}\rightarrow+\infty$, by Proposition \ref{proposition 2.2 LargeDeviation}, we only need to show that \[ \inf_{\pi\in D_{\epsilon, G}}\left(J_{ini}(\pi_0)+J_{dyn}(\pi)\right)>0 \] to prove Lemma \ref{lemma 2.1 replacement}. If $\inf_{\pi\in D_{\epsilon, G}}\left(J_{ini}(\pi_0)+J_{dyn}(\pi)\right)=0$, then there exists a sequence $\{\pi^n\}_{n\geq 1}$ in $D_{\epsilon, G}\cap \mathfrak{A}_1$ that \begin{equation}\label{equ 2.4} \lim_{n\rightarrow+\infty}\left(J_{ini}(\pi^n_0)+J_{dyn}(\pi^n)\right)=0. \end{equation} By Lemma \ref{lemma 2.4}, $\mathfrak{A}_1$ is compact. 
Hence, there exist $\hat{\pi}\in \mathfrak{A}_1$ and a subsequence $\{\pi^{n_k}\}_{k\geq 1}$ of $\{\pi^n\}_{n\geq 1}$ such that $\lim_{k\rightarrow+\infty}\pi^{n_k}=\hat{\pi}$. Since $J_{ini}$ and $J_{dyn}$ are both defined as suprema of continuous functions, it is easy to check that $J_{ini}(\pi_0)+J_{dyn}(\pi)$ is a lower semi-continuous function of $\pi$. Then, by Equation \eqref{equ 2.4}, \[ J_{ini}(\hat{\pi}_0)+J_{dyn}(\hat{\pi})=0 \] and consequently $\hat{\pi}=\mu$ according to Lemma \ref{lemma 2.3}. However, since $D_{\epsilon, G}$ is closed, $\hat{\pi}\in D_{\epsilon, G}$ and hence \[ \sup_{0\leq t\leq T_0}\left|\hat{\pi}_t(G_t)-\mu_t(G_t)\right|\geq \epsilon, \] which contradicts $\hat{\pi}=\mu$. \qed Finally, we prove Lemmas \ref{lemma 2.3} and \ref{lemma 2.4}. \proof[Proof of Lemma \ref{lemma 2.3}] Since $J_{ini}(\pi_0)\geq \pi_0(0)-\int_0^1\phi(x)\left(e^0-1\right)dx=0$ and \[ J_{dyn}(\pi)\geq\pi_{T_0}(0)-\pi_0(0)-\int_0^{T_0}\pi_s\left((\partial_s+\mathcal{B})0\right)ds=0, \] $J_{ini}(\pi_0)+J_{dyn}(\pi)=0$ implies that $J_{ini}(\pi_0)=J_{dyn}(\pi)=0$. Then, for any $f\in C([0, 1])$, \[ K_f(c):=\pi_0(cf)-\int_0^1\phi(x)\left(e^{cf(x)}-1\right)dx \] attains its maximum $0$ at $c=0$ and hence $\frac{d}{dc}K_f(c)\Big|_{c=0}=0$, which implies that \[ \pi_0(f)=\int_0^1\phi(x)f(x)dx \] for any $f\in C([0,1])$ and hence $\pi_0(dx)=\phi(x)dx$. Similarly, for any $h\in C^1([0,T_0])$ and $f\in C([0, 1])$, let $G^{h,f}(t,x)=h(t)f(x)$, then \[ \Gamma_{h,f}(c):=\pi_{T_0}(cG^{h,f}_{T_0})-\pi_0(cG^{h,f}_0)-\int_0^{T_0}\pi_s\left((\partial_s+\mathcal{B})(cG^{h,f}_s)\right)ds \] attains its maximum at $c=0$ and hence $\frac{d}{dc}\Gamma_{h,f}(c)\Big|_{c=0}=0$, which implies that \[ h_{T_0}\pi_{T_0}(f)-h_0\pi_0(f)-\int_0^{T_0}h^\prime_s\pi_s(f)ds=\int_0^{T_0}h_s\pi_s\left((P_1-P_2)f\right)ds \] for any $f\in C([0, 1]), h\in C^1([0, T_0])$ and hence $\{\pi_t(f)\}_{0\leq t\leq T_0}$ is differentiable with \[ \frac{d}{dt}\pi_t(f)=\pi_t\left((P_1-P_2)f\right) \] for any $f\in C([0, 1])$. Consequently, $\pi=\mu$. \qed \proof[Proof of Lemma \ref{lemma 2.4}] By the Arzel\`{a}--Ascoli theorem, we only need to show that for any nonnegative $f\in C([0, 1])$, \[ \left\{\pi_t(f):~0\leq t\leq T_0\right\}_{\pi\in \mathfrak{A}_C} \] are uniformly bounded and equicontinuous. Let $\vec{1}$ be the function such that $\vec{1}(x)=1$ for all $x\in [0, 1]$, then for any $\pi\in\mathfrak{A}_C$, \begin{equation}\label{equ 2.2} \pi_0(\vec{1})\leq C+\int_0^1\phi(x)\left(e^{\vec{1}(x)}-1\right)dx=C+(e-1)\int_0^1\phi(x)dx. \end{equation} For a given $0<t<T_0$ and sufficiently large $n$, let $\Lambda^t_n$ be the function from $[0, T_0]$ to $\mathbb{R}$ such that $\Lambda^t_n(s)=0$ when $s\leq t$ or $s\geq t+\frac{1}{n}$ and $\Lambda^t_n(s)=-n$ when $t<s<t+\frac{1}{n}$. Since $C([0, T_0])$ is dense in $L^1([0, T_0])$, let $\{\tilde{\Lambda}^t_{n,m}\}_{m\geq 1}$ be a sequence in $C([0, T_0])$ such that $\tilde{\Lambda}^t_{n,m}$ converges in $L^1$ to $\Lambda^t_n$ as $m\rightarrow+\infty$. Then we define $h^t_n\in C([0,T_0]), \tilde{h}^t_{n,m}\in C^1([0, T_0])$ by \[ h^t_n(s)=1+\int_0^s\Lambda^t_n(u)du,~\tilde{h}^t_{n,m}(s)=1+\int_0^s\tilde{\Lambda}^t_{n,m}(u)du \] for all $s\in [0, T_0]$. As a result, $\tilde{h}^t_{n,m}$ converges to $h^t_n$ uniformly on $[0, T_0]$ as $m\rightarrow+\infty$.
Let $\tilde{G}^t_{n,m}\in C^{1,0}([0, T_0]\times[0, 1])$ be such that $\tilde{G}^t_{n,m}(s, x)=\tilde{h}^t_{n,m}(s)\vec{1}(x)$ for any $0\leq s\leq T_0$ and $0\leq x\leq 1$; then for $\pi\in \mathfrak{A}_C$, \[ \pi_{T_0}\left(\tilde{G}^t_{n,m,T_0}\right)-\pi_{0}\left(\tilde{G}^t_{n,m,0}\right)\leq C+\int_0^{T_0}\pi_s\left((\partial_s+\mathcal{B})\tilde{G}^t_{n,m,s}\right)ds. \] Letting $m\rightarrow+\infty$, we have \[ -\pi_0(\vec{1})\leq C-n\int_t^{t+\frac{1}{n}}\pi_s(\vec{1})ds. \] Since $\pi$ is right-continuous, letting $n\rightarrow\infty$ we obtain \begin{equation}\label{equ 2.1point5} \pi_t(\vec{1})\leq \pi_0(\vec{1})+C. \end{equation} Then, by Equation \eqref{equ 2.2}, \begin{align}\label{equ 2.3} \pi_t(f)&\leq \left(\max_{0\leq x\leq 1}f(x)\right)\pi_t(\vec{1}) \notag\\ &\leq \left(\max_{0\leq x\leq 1}f(x)\right)\left(2C+(e-1)\int_0^1\phi(x)dx\right) \end{align} for any $0\leq t\leq T_0$ and hence $\left\{\pi_t(f):~0\leq t\leq T_0\right\}_{\pi\in \mathfrak{A}_C}$ are uniformly bounded. For $s<t<T_0$ and sufficiently large $n$, we define $\hat{\Lambda}^{t,s}_n$ as the function from $[0, T_0]$ to $\mathbb{R}$ such that \[ \hat{\Lambda}^{t,s}_n(u)= \begin{cases} 0 &\text{~if~}u\leq s, s+\frac{1}{n}<u\leq t \text{~or~}u>t+\frac{1}{n},\\ n &\text{~if~}s<u\leq s+\frac{1}{n},\\ -n &\text{~if~}t<u\leq t+\frac{1}{n}. \end{cases} \] Then, by replacing $\Lambda^t_n$ by $\hat{\Lambda}^{t,s}_n$ and $\vec{1}$ by $g\in C([0, 1])$ in the analysis leading to Equation \eqref{equ 2.1point5}, we have \[ \pi_t(g)\leq \pi_s(g)+C+\int_s^t\pi_u\left(\mathcal{B}g\right)du \] for any $g\in C([0, 1])$. For any given $M>0$, taking $g=Mf$ we have \[ \pi_t(f)\leq \pi_s(f)+\frac{C}{M}+\int_s^t \frac{1}{M}\pi_u\left(\mathcal{B}(Mf)\right)du. \] By Equation \eqref{equ 2.3}, for any $\pi\in \mathfrak{A}_C$ and $u\in (s, t)$, \[ \pi_u\left(\mathcal{B}(Mf)\right)\leq \left(\max_{0\leq x\leq 1}|\mathcal{B}(Mf)(x)|\right)\left(2C+(e-1)\int_0^1\phi(x)dx\right). \] Hence, for any $\epsilon>0$, we can first choose $M$ so large that $\frac{C}{M}<\frac{\epsilon}{2}$, and then there exists $\delta_1>0$, depending only on $M$ and $f$, such that \[ \int_s^t \frac{1}{M}\pi_u\left(\mathcal{B}(Mf)\right)du<\frac{\epsilon}{2} \] and hence \[ \pi_t(f)\leq \pi_s(f)+\epsilon \] for any $t-s\leq \delta_1$ and $\pi\in \mathfrak{A}_C$. Taking $g=-Mf$, it is proved similarly that there exists $\delta_2>0$, depending only on $\epsilon$ and $f$, such that \[ \pi_t(f)\geq \pi_s(f)-\epsilon \] for any $t-s\leq \delta_2$ and $\pi\in \mathfrak{A}_C$. As a result, $\{\pi_t(f):~0\leq t\leq T_0\}_{\pi\in \mathfrak{A}_C}$ are equicontinuous and hence the proof is complete. \qed \section{The proof of Equation \eqref{equ 1.4 UpperBoundofMDP}}\label{Section 3} In this section we give the proof of Equation \eqref{equ 1.4 UpperBoundofMDP}. With Lemma \ref{lemma 2.1 replacement}, the proof of our main result follows a routine procedure introduced in \cite{Kipnis1989}, which has also been utilized in \cite{Gao2003}, \cite{Xue2021} and other references to prove MDPs for models such as exclusion processes and density-dependent Markov chains. Hence, in this paper we only give an outline of the proof without repeating details similar to those in the above references.
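Before entering the proof, the following short numerical sketch (ours; the choices of $\phi$, $f$ and $a_N=N^{3/4}$ are purely illustrative) may help to see where the quadratic functional $\frac{1}{2}\int_0^1\phi(x)f^2(x)dx$ appearing in $I_{ini}$ comes from at the moderate-deviation scale: for independent Poisson initial data the logarithmic moment generating function of $\frac{a_N^2}{N}\theta_0^N(f)$ can be computed exactly, and after multiplication by $\frac{N}{a_N^2}$ it converges to this quadratic functional as $N\rightarrow+\infty$.
\begin{verbatim}
import numpy as np

# Exact log-mgf for independent Poisson(phi(i/N)) initial data:
#   log E exp{(a_N/N) sum_i (X_0(i) - E X_0(i)) f(i/N)}
#     = sum_i phi(i/N) * (exp(t_i) - t_i - 1),  with t_i = (a_N/N) f(i/N).
# Multiplied by N/a_N^2 it should converge to (1/2) * int_0^1 phi(x) f(x)^2 dx.

phi = lambda u: 1.0 + u                     # illustrative choice
f = lambda u: np.cos(2.0 * np.pi * u)       # illustrative test function

for N in (10**4, 10**6):
    a_N = N ** 0.75                         # sqrt(N) << a_N << N
    u = np.arange(1, N + 1) / N
    t = (a_N / N) * f(u)
    log_mgf = np.sum(phi(u) * (np.exp(t) - t - 1.0))
    print(N, (N / a_N**2) * log_mgf)

# For these choices, (1/2) * int_0^1 (1+x) * cos(2*pi*x)^2 dx = 3/8 = 0.375.
\end{verbatim}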
For later use, for a given positive sequence $\{c_N\}_{N\geq 1}$ such that $\lim_{N\rightarrow+\infty}c_N=+\infty$ and a sequence of random variables $\{Y_N\}_{N\geq 1}$, we write $Y_N$ as $o_{\exp}(c_N)$ when \[ \lim_{N\rightarrow+\infty}\frac{1}{c_N}\log P(|Y_N|\geq \epsilon)=-\infty \] for any $\epsilon>0$ and write $Y_N$ as $O_{\exp}(c_N)$ when \[ \limsup_{N\rightarrow+\infty}\frac{1}{c_N}\log P(|Y_N|\geq \epsilon)<0 \] for any $\epsilon>0$. Now we first prove Equation \eqref{equ 1.4 UpperBoundofMDP} for compact sets $K\subseteq \mathcal{D}([0, T_0], \mathcal{S})$. \proof [Proof of Equation \eqref{equ 1.4 UpperBoundofMDP} for compact sets] For each $N\geq 1$ and any $G\in C^{1,1}([0, T_0]\times [0, 1])$, we define $H^N_G(t, X_t^N)$ as \[ H_G^N(t, X_t^N)=\exp\left\{\frac{a_N^2}{N}\theta_t^N(G_t)\right\} =\exp\left\{\frac{a_N}{N}\sum_{i=1}^N\left(X_t^N(i)-EX_t^N(i)\right)G_t(\frac{i}{N})\right\} \] and define $\Gamma_t^N(G)$ as \[ \Gamma_t^N(G)=\frac{H_G^N(t, X_t^N)}{H_G^N(0, X_0^N)}\exp\left\{-\int_0^t\frac{\left(\partial_s+\mathcal{L}_N\right)H_G^N(s, X_s^N)}{H_G^N(s, X_s^N)}ds\right\}. \] Then it is easy to check that $\{\Gamma_t^N(G)\}_{0\leq t\leq T_0}$ is a martingale with mean $1$ by It\^{o}'s formula. Therefore, for any $0\leq t\leq T_0$ and $f\in C([0, 1])$, \begin{equation}\label{equ 3.1} Ee^{\frac{a_N}{N}\sum_{i=1}^N\left(X_0^N(i)-EX_0^N(i)\right)f(\frac{i}{N})} =E\left(e^{\frac{a_N}{N}\sum_{i=1}^N\left(X_0^N(i)-EX_0^N(i)\right)f(\frac{i}{N})}\Gamma_t^N(G)\right). \end{equation} According to our assumption on $X_0^N$, the fact that $\lim_{N\rightarrow+\infty}\frac{a_N}{N}=0$ and Taylor's expansion formula up to the second order, it is easy to check that \begin{equation}\label{equ 3minus1} \lim_{N\rightarrow+\infty}\frac{N}{a_N^2}\log Ee^{\frac{a_N}{N}\sum_{i=1}^N\left(X_0^N(i)-EX_0^N(i)\right)f(\frac{i}{N})}=\frac{1}{2}\int_0^1\phi(x)f^2(x)dx. \end{equation} For later use, for each $N\geq 1$, we define $P_1^Nf(x)=\frac{1}{N}\sum_{j=1}^N\lambda(x, \frac{j}{N})f(\frac{j}{N})$, $P_2^Nf(x)=f(x)\frac{1}{N}\sum_{j=1}^N\lambda(x, \frac{j}{N})$, $\mathcal{K}f(x)=\int_0^1\lambda(x, y)(f(y)-f(x))^2dy$ and $\mathcal{K}^Nf(x)=\frac{1}{N}\sum_{j=1}^N\lambda(x, \frac{j}{N})(f(\frac{j}{N})-f(x))^2$ for any $f\in C([0, 1]), x\in [0, 1]$. According to the generator $\mathcal{L}_N$ of $\{X_t^N\}_{t\geq 0}$, \[ \frac{d}{dt}EX_t^N(i)=-EX_t^N(i)\sum_{j=1}^N\frac{\lambda(\frac{i}{N},\frac{j}{N})}{N}+\sum_{j=1}^N\frac{\lambda(\frac{j}{N}, \frac{i}{N})}{N}EX_t^N(j) \] while \[ \mathcal{L}_NH_G^N(t, X_t^N)=\sum_{i=1}^N\sum_{j=1}^N\frac{X_t^N(i)}{N}\lambda\left(\frac{i}{N}, \frac{j}{N}\right)H_G^N(t, X_t^N)\left(e^{\frac{a_N}{N}\left(G_t(\frac{j}{N})-G_t(\frac{i}{N})\right)}-1\right).
\] Then, by the fact that $\frac{a_N}{N}\rightarrow 0$ and Taylor's expansion formula up to the second order, it is not difficult to show that \begin{equation}\label{equ 3.0} \Gamma_{T_0}^N(G)=\exp\left\{\frac{a_N^2}{N}\left(l(\theta^N, G)+\epsilon^N\right)\right\}, \end{equation} where \begin{align*} l(\pi, G)=&\pi_{T_0}(G_{T_0})-\pi_0(G_0)-\int_0^{T_0}\pi_s\left((\partial_s+P_1-P_2)G_s\right)ds \\ &-\frac{1}{2}\int_0^{T_0}\left(\int_{[0,1]}\left(\int_0^1\lambda(x,y)\left(G_s(y)-G_s(x)\right)^2dy\right)\mu_s(dx)\right)ds \notag \end{align*} for any $\pi\in \mathcal{D}([0, T_0], \mathcal{S})$ and \[ \epsilon^N=\int_0^{T_0}\left(\epsilon_{1, t}^N+\epsilon_{2, t}^N+\epsilon_{3,t}^N+\epsilon_{4, t}^N\right)dt, \] where $\epsilon_{1,t}^N$ is the third order Lagrange's remainder of the Taylor's formula that \[ |\epsilon_{1, t}^N|\leq C_1\frac{a_N}{N}\frac{1}{N}\sum_{i=1}^N\left(X_t^N(i)+EX_t^N(i)\right) \] with constant $C_1<+\infty$ independent of $t$ and $N$, \[ \epsilon_{2,t}^N=\mu^N_t(\mathcal{K}G_t)-\mu_t(\mathcal{K}G_t), \text{~}\epsilon_{3,t}^N=\mu^N_t(\mathcal{K}^NG_t)-\mu^N_t(\mathcal{K}G_t) \] and \[ \epsilon_{4,t}^N=\theta_t^N\left((P_1^N-P_2^N)G_t-(P_1-P_2)G_t\right). \] According to the fact that $\sum_{i=1}^NX_t^N(i)\equiv \sum_{i=1}^NX_0^N(i)$ and our assumption of $X_0^N$, it is easy to check that $\sup_{t\leq T_0}|\epsilon_{1, t}^N|=O_{\exp}\left(\frac{N^2}{a_N}\right)=o_{\exp}\left(a_N\right)$ by Markov's inequality. Since $\lambda\in C^{1,1}([0, 1]\times[0,1]), G_t\in C^1([0, 1])$, it is easy to check that \[ |\epsilon_{4,t}^N|\leq \frac{C_2}{Na_N}\sum_{i=1}^N(X_0^N(i)+EX_0^N(i)) \] for some $C_2<+\infty$ independent of $t$ and $N$ according to Lagrange's mean value theorem. Then, we similarly have $\sup_{t\leq T_0}|\epsilon_{4, t}^N|=O_{\exp}\left(Na_N\right)=o_{\exp}\left(a_N\right)$ according to Markov's inequality. According to a similar analysis, \[ |\epsilon_{3,t}^N|\leq \frac{C_3}{N^2}\sum_{i=1}^NX_0^N(i) \] for some $C_3<+\infty$ independent of $t, N$ and hence $\sup_{t\leq T_0}|\epsilon_{3, t}^N|=O_{\exp}\left(N^2\right)=o_{\exp}\left(a_N\right)$ according to Markov's inequality. By Lemma \ref{lemma 2.1 replacement}, $\sup_{t\leq T_0}|\epsilon_{2, t}^N|=o_{\exp}\left(a_N\right)$. In conclusion, \begin{equation*} \epsilon^N=o_{\exp}\left(a_N\right)=o_{\exp}\left(\frac{a_N^2}{N}\right). \end{equation*} As a result, for any $\epsilon>0$ and compact $K\subseteq \mathcal{D}\big([0, T_0], \mathcal{S}\big)$, \begin{equation}\label{equ 3.3} \limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\theta^N\in K, |\epsilon^N|\leq \epsilon\right) =\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\theta^N\in K\right). \end{equation} By Equation \eqref{equ 3.0}, $\Gamma_{T_0}^N(G)\geq \exp\left\{\frac{a_N^2}{N}\left(l(\theta^N, G)-\epsilon\right)\right\}$ when $|\epsilon^N|\leq \epsilon$. Therefore, by Equation \eqref{equ 3.1}, \begin{align*} &Ee^{\frac{a_N}{N}\sum_{i=1}^N\left(X_0^N(i)-EX_0^N(i)\right)f(\frac{i}{N})}\\ &\geq E\left(e^{\frac{a_N}{N}\sum_{i=1}^N\left(X_0^N(i)-EX_0^N(i)\right)f(\frac{i}{N})}\Gamma_{T_0}^N(G)1_{\{\theta^N\in K, |\epsilon^N|\leq \epsilon\}}\right)\\ &\geq \exp\left\{\frac{a_N^2}{N}\inf_{\pi\in K}\left\{\pi_0(f)+l(\pi, G)-\epsilon\right\}\right\}P\left(\theta^N\in K, |\epsilon^N|\leq \epsilon\right). 
\end{align*} Then, according to Equations \eqref{equ 3minus1} and \eqref{equ 3.3}, \begin{align*} &\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\theta^N\in K\right) =\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\theta^N\in K, |\epsilon^N|\leq \epsilon\right)\\ &\leq -\inf_{\pi\in K}\left\{\pi_0(f)+l(\pi, G)\right\}+\frac{1}{2}\int_0^1\phi(x)f^2(x)dx+\epsilon\\ &=-\inf_{\pi\in K}\left\{\pi_0(f)-\frac{1}{2}\int_0^1\phi(x)f^2(x)dx+l(\pi, G)\right\}+\epsilon. \end{align*} Since $f, G, \epsilon$ are arbitrary, \begin{align}\label{equ 3.4} &\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\theta^N\in K\right) \notag\\ &\leq -\sup_{f\in C([0, 1]), \atop G\in C^{1,1}([0, T_0]\times[0, 1])}\inf_{\pi\in K}\left\{\pi_0(f)-\frac{1}{2}\int_0^1\phi(x)f^2(x)dx+l(\pi, G)\right\}. \end{align} Since $\pi_0(f)-\frac{1}{2}\int_0^1\phi(x)f^2(x)dx+l(\pi, G)$ is concave with $(f, G)$ and convex with $\pi$, according to the minimax theorem given in \cite{Sion1958}, \begin{align*} &\sup_{f\in C([0, 1]), \atop G\in C^{1,1}([0, T_0]\times[0, 1])}\inf_{\pi\in K}\left\{\pi_0(f)-\frac{1}{2}\int_0^1\phi(x)f^2(x)dx+l(\pi, G)\right\}\\ &=\inf_{\pi\in K}\sup_{f\in C([0, 1]), \atop G\in C^{1,1}([0, T_0]\times[0, 1])}\left\{\pi_0(f)-\frac{1}{2}\int_0^1\phi(x)f^2(x)dx+l(\pi, G)\right\}\\ &=\inf_{\pi \in K}\left(\sup_{f\in C([0, 1])}\left\{\pi_0(f)-\frac{1}{2}\int_0^1\phi(x)f^2(x)dx\right\}+\sup_{G\in C^{1,1}([0, T_0]\times[0, 1])}l(\pi, G)\right)\\ &=\inf_{\pi \in K}\left(I_{ini}(\pi_0)+\sup_{G\in C^{1,1}([0, T_0]\times[0, 1])}l(\pi, G)\right). \end{align*} Since $C^{1,1}\left([0, T_0]\times[0,1]\right)$ is dense in $C^{1,0}\left([0, T_0]\times[0, 1]\right)$, \[ \sup_{G\in C^{1,1}([0, T_0]\times[0, 1])}l(\pi, G)=\sup_{G\in C^{1,0}([0, T_0]\times[0, 1])}l(\pi, G)=I_{dyn}(\pi) \] and hence Equation \eqref{equ 1.4 UpperBoundofMDP} holds for all compact $K\subseteq \mathcal{D}\left([0, T_0], \mathcal{S}\right)$ according to Equation \eqref{equ 3.4}. \qed To prove Equation \eqref{equ 1.4 UpperBoundofMDP} for all closed sets, we need the following two lemmas as preliminaries. \begin{lemma}\label{lemma 3.1 independent and Poisson} Under our assumption of $X_0^N$, $X_t^N(1), X_t^N(2),\ldots, X_t^N(N)$ are independent for any $t\geq 0$ and $X_t^N(i)$ follows Poisson distribution with mean $EX_t^N(i)$ for all $1\leq i\leq N$. \end{lemma} \begin{lemma}\label{lemma 3.2 control of integration} For any $f\in C([0, 1])$ and $\epsilon>0$, \[ \limsup_{M\rightarrow+\infty}\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\sup_{0\leq t\leq T_0}\left|\int_0^t\theta^N_s(f)ds\right|>M\right)=-\infty \] and \[ \limsup_{\delta\rightarrow0}\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\sup_{|t-s|\leq\delta\atop 0\leq s<t\leq T_0}\left|\int_s^t\theta_u^N(f)du\right|>\epsilon\right)=-\infty. \] \end{lemma} The proof of Lemma \ref{lemma 3.1 independent and Poisson} is given in Appendix \ref{subsection A.1}. With Lemma \ref{lemma 3.1 independent and Poisson}, we have \begin{equation}\label{equ 3.6} E\left(\exp\{\frac{a^2_N}{N}\theta_t^N(f)\}\right)=\exp\left\{\sum_{i=1}^NEX_t^N(i)\left(e^{\frac{a_N}{N}f\left(\frac{i}{N}\right)} -\frac{a_N}{N}f\left(\frac{i}{N}\right)-1\right)\right\}. \end{equation} We have shown in \cite{Xue2020} that there exists $C_5<+\infty$ independent of $N$ that \begin{equation}\label{equ 3.7} \sup_{N\geq 1, 1\leq i\leq N, 0\leq t\leq T_0}EX_t^N(i)\leq C_5. 
\end{equation}
With Equations \eqref{equ 3.6} and \eqref{equ 3.7}, the proof of Lemma \ref{lemma 3.2 control of integration} follows the same procedure as that introduced in the proof of Lemma 2.2 of \cite{Gao2003}, where a crucial step is the use of the Garsia--Rodemich--Rumsey lemma. Hence we omit the details of the proof of Lemma \ref{lemma 3.2 control of integration} here.

At last, we give the proof of Equation \eqref{equ 1.4 UpperBoundofMDP} for all closed sets.

\proof[Proof of Equation \eqref{equ 1.4 UpperBoundofMDP}]

Since we have proved Equation \eqref{equ 1.4 UpperBoundofMDP} for all compact sets, we only need to show that $\{\theta^N\}_{N\geq 1}$ is exponentially tight to complete this proof. By the criteria given in \cite{Puhalskii1994}, we only need to show that
\begin{equation}\label{equ 3.8}
\limsup_{M\rightarrow+\infty}\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P\left(\sup_{0\leq t\leq T_0}\left|\theta_t^N(f)\right|>M\right)=-\infty
\end{equation}
and
\begin{equation}\label{equ 3.9}
\limsup_{\delta\rightarrow 0}\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log \sup_{\tau\in \Upsilon}P\left(\sup_{0<t\leq \delta}\left|\theta^N_{\tau+t}(f)-\theta_\tau^N(f)\right|>\epsilon\right)=-\infty
\end{equation}
for any $f\in C^1([0,1])$ and $\epsilon>0$, where $\Upsilon$ is the set of all stopping times bounded by $T_0$ from above. With Lemma \ref{lemma 3.2 control of integration}, the proofs of Equations \eqref{equ 3.8} and \eqref{equ 3.9} follow the same procedures as those of Equations (3.3) and (3.4) of \cite{Gao2003} respectively, where a crucial step is the application of Doob's inequality to the exponential martingale $\{\Gamma_t^N(f)\}_{0\leq t\leq T_0}$. Consequently, the proof is complete.

\qed

\section{The proof of Equation \eqref{equ 1.5 LowerBoundofMDP}}\label{section 4}

In this section we prove Equation \eqref{equ 1.5 LowerBoundofMDP}. As mentioned above, our proof follows the strategy introduced in \cite{Kipnis1989}, where a crucial step is to derive the law of large numbers of $\theta^N$ under the transformed probability measure with $\Gamma^N_{T_0}(G)$ introduced in Section \ref{Section 3} as the Radon--Nikodym derivative with respect to the original measure of $\{X_t^N\}_{t\geq 0}$.

For later use, we first introduce some notation and definitions. For any $f\in C([0,1])$ and sufficiently large $N$, we denote by $P^N_f$ the probability measure of our process $\{X_t^N\}_{t\geq 0}$ under the initial condition that $\{X_0^N(i)\}_{1\leq i\leq N}$ are independent and $X_0^N(i)$ follows the Poisson distribution with mean $\phi\left(\frac{i}{N}\right)+\frac{a_N}{N}f\left(\frac{i}{N}\right)$. For any $G\in C^{1,1}\left([0, T_0]\times [0, 1]\right)$, we define $\hat{P}^N_{f, G}$ as the probability measure such that
\[
\frac{d\hat{P}^N_{f, G}}{dP^N_f}=\Gamma_{T_0}^N(G).
\]
Then the following lemma is crucial for us to prove Equation \eqref{equ 1.5 LowerBoundofMDP}, which gives the law of large numbers of $\theta^N$ under the transformed measure $\hat{P}^N_{f, G}$.
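Before stating it, let us record a small clarification that will be used repeatedly. Since $\Gamma_0^N(G)=1$ and $\{\Gamma_t^N(G)\}_{0\leq t\leq T_0}$ is again a mean-one martingale under $P^N_f$ (the It\^{o} argument in Section \ref{Section 3} only uses the generator, not the initial distribution), the measure $\hat{P}^N_{f, G}$ is indeed a probability measure. Moreover, assuming, as in the definition of $P^N_f$, that $N$ is large enough for the Poisson means above to be positive, the measures $P$, $P^N_f$ and $\hat{P}^N_{f, G}$ are mutually absolutely continuous, so the chain rule for Radon--Nikodym derivatives gives
\[
\frac{dP}{d\hat{P}^N_{f, G}}=\frac{dP}{dP^N_f}\cdot\frac{dP^N_f}{d\hat{P}^N_{f, G}}=\frac{dP}{dP^N_f}\left(\Gamma_{T_0}^N(G)\right)^{-1},
\]
which is the identity behind the heuristic computation in Remark \ref{Remark 4.1} below and the proof of Equation \eqref{equ 1.5 LowerBoundofMDP} at the end of this section.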
\begin{lemma}\label{lemma 4.1 LLNunderTransformedMeasure}
For given $G\in C^{1,1}\left([0, T_0]\times [0, 1]\right)$, $\theta^N$ converges in $\hat{P}^N_{f, G}$-probability to $\vartheta^{f,G}$ as $N\rightarrow+\infty$, where $\vartheta^{f, G}$ is the unique element in $\mathcal{D}\left([0, T_0], \mathcal{S}\right)$ such that
\begin{equation}\label{equ 4.1 measureValuedODE}
\begin{cases}
\frac{d}{dt}\vartheta_t^{f,G}(h)=\vartheta_t^{f, G}\left((P_1-P_2)h\right)+\langle G_t | h\rangle_t \text{~for all~}0\leq t\leq T_0\text{~and~}h\in C([0, 1]),\\
\vartheta_0^{f, G}(dx)=f(x)dx.
\end{cases}
\end{equation}
\end{lemma}

\begin{remark}\label{Remark 4.1}
Lemma \ref{lemma 4.1 LLNunderTransformedMeasure} is a routine auxiliary result for the proof of the lower bound of the MDP. Analogues of Lemma \ref{lemma 4.1 LLNunderTransformedMeasure}, such as Theorem 3.1 of \cite{Kipnis1989}, Theorem 4.1 of \cite{Gao2003} and Lemma 4.2 of \cite{Xue2021}, have been given in the literature to prove LDPs or MDPs for models such as exclusion processes and density-dependent Markov chains. With Lemma \ref{lemma 4.1 LLNunderTransformedMeasure}, roughly speaking, we can estimate $P(\theta^N=d\pi)$ for some $\pi\in \mathcal{D}\left([0, T_0], \mathcal{S}\right)$ as follows. Choose $f, G$ such that $\vartheta^{f,G}=\pi$; then
\begin{align*}
P(\theta^N=d\pi)=E_{\hat{P}^N_{f, G}}\left(\frac{dP}{dP^N_f}\left(\Gamma_{T_0}^N(G)\right)^{-1}1_{\{\theta^N=d\pi\}}\right).
\end{align*}
Lemma \ref{lemma 4.1 LLNunderTransformedMeasure} implies that $\hat{P}^N_{f, G}\left(\theta^N=d\pi\right)=1+o(1)$ and hence our MDP holds once we can show that
\[
\frac{dP}{dP^N_f}\left(\Gamma_{T_0}^N(G)\right)^{-1}\Bigg|_{\theta^N=\pi}=\exp\left\{-\frac{a_N^2}{N}\left(I_{ini}(\pi_0)+I_{dyn}(\pi)+o(1)\right)\right\},
\]
which can be obtained according to Theorem \ref{theorem 1.3 alternative representation formula}. The rigorous statement of the above intuitive analysis is given at the end of this section.
\end{remark}

The following lemma is a preliminary for us to prove Lemma \ref{lemma 4.1 LLNunderTransformedMeasure}.

\begin{lemma}\label{lemma 4.2}
For given $f, h\in C([0, 1])$ and $G\in C^{1,1}([0, T_0]\times[0, 1])$,
\begin{equation}\label{equ 4.2}
\sum_{0\leq t\leq T_0}\left(\theta^N_t(h)-\theta^N_{t-}(h)\right)^2=o_{\exp}(N)
\end{equation}
under both $P_f^N$ and $\hat{P}_{f,G}^N$.
\end{lemma}

The proof of Lemma \ref{lemma 4.2} is given in Appendix \ref{subsection A.2}.

For the proof of Lemma \ref{lemma 4.1 LLNunderTransformedMeasure}, we introduce some notation and definitions. For a sequence of random variables $\{Y_N\}_{N\geq 1}$, we write $Y_N$ as $o_p(1)$ when $\lim_{N\rightarrow+\infty}Y_N=0$ in probability. For any $h\in C^1([0, 1])$ and $G\in C^{1,1}([0, T_0]\times [0, 1])$, we define
\begin{align*}
\mathcal{M}_t(\theta^N(h))&=\theta_t^N(h)-\theta_0^N(h)-\int_0^t\left(\partial_s+\mathcal{L}_N\right)\theta_s^N(h)ds\\
&=\theta_t^N(h)-\theta_0^N(h)-\int_0^t\theta_s^N\left((P_1^N-P_2^N)h\right)ds
\end{align*}
and
\[
\mathcal{M}_t(H_G^N)=H_G^N(t, X_t^N)-H_G^N(0, X_0^N)-\int_0^t\left(\partial_s+\mathcal{L}_N\right)H_G^N(s, X_s^N)ds,
\]
where $H_G^N$ is defined as in Section \ref{Section 3}. According to Dynkin's formula, $\{\mathcal{M}_t(\theta^N(h))\}_{t\geq 0}$ and $\{\mathcal{M}_t(H_G^N)\}_{t\geq 0}$ are both martingales.
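For the reader's convenience, let us briefly sketch the second equality in the definition of $\mathcal{M}_t(\theta^N(h))$; the computation only uses the definitions of $P_1^N, P_2^N$ and the equation for $\frac{d}{dt}EX_t^N(i)$ recalled in Section \ref{Section 3}. Since a molecule jumps from the $i$th box to the $j$th box at rate $\frac{\lambda(\frac{i}{N}, \frac{j}{N})X_t^N(i)}{N}$, as reflected in the expression for $\mathcal{L}_NH_G^N$ above, the generator acts on linear functions of the configuration according to
\[
\mathcal{L}_N\left(\sum_{i=1}^Nh(\tfrac{i}{N})X_t^N(i)\right)=\sum_{i=1}^N\sum_{j=1}^N\frac{\lambda(\frac{i}{N}, \frac{j}{N})X_t^N(i)}{N}\left(h(\tfrac{j}{N})-h(\tfrac{i}{N})\right)=\sum_{i=1}^NX_t^N(i)\left((P_1^N-P_2^N)h\right)(\tfrac{i}{N}),
\]
while, by the equation for $\frac{d}{dt}EX_t^N(i)$ and an exchange of the order of summation,
\[
\frac{d}{dt}\sum_{i=1}^Nh(\tfrac{i}{N})EX_t^N(i)=\sum_{i=1}^NEX_t^N(i)\left((P_1^N-P_2^N)h\right)(\tfrac{i}{N}).
\]
Dividing the difference of these two identities by $a_N$ gives $\left(\partial_s+\mathcal{L}_N\right)\theta_s^N(h)=\theta_s^N\left((P_1^N-P_2^N)h\right)$.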
In this paper, for two local martingales $\{\mathcal{M}^1_t\}_{t\geq 0}, \{\mathcal{M}^2_t\}_{t\geq 0}$, we use $\{\langle \mathcal{M}^1, \mathcal{M}^2\rangle_t\}_{t\geq 0}$ to denote the predictable quadratic-covariation process, which is continuous, and use $\{[\mathcal{M}^1, \mathcal{M}^2]_t\}_{t\geq 0}$ to denote the optional quadratic-covariation process such that
\[
\lim_{\sup_i(t_{i+1}-t_i)\rightarrow 0}\sum_{i}\left(\mathcal{M}^1_{t_{i+1}}-\mathcal{M}^1_{t_i}\right)\left(\mathcal{M}^2_{t_{i+1}}-\mathcal{M}^2_{t_i}\right)=[\mathcal{M}^1, \mathcal{M}^2]_t
\]
in probability, where the limit is over all partitions $\{t_i\}$ of $[0, t]$. Then, according to basic properties of Markov processes and direct calculations,
\begin{align}\label{equ 4.3}
&d\langle \mathcal{M}(H_G^N), \mathcal{M}(\theta^N(h))\rangle_t \notag\\
&=\left(-\theta_t^N(h)\mathcal{L}_NH_G^N(t, X_t^N)-H_G^N(t, X_t^N)\mathcal{L}_N\theta_t^N(h)+\mathcal{L}_N\left(\theta^N_t(h)H_G^N(t, X_t^N)\right)\right)dt\\
&=\sum_{i=1}^N\sum_{j=1}^N\frac{\lambda\left(\frac{i}{N}, \frac{j}{N}\right)X_t^N(i)}{N}H_G^N(t, X_t^N)\left(e^{\frac{a_N}{N}\left(G_t\left(\frac{j}{N}\right)-G_t\left(\frac{i}{N}\right)\right)}-1\right)\frac{h\left(\frac{j}{N}\right) -h\left(\frac{i}{N}\right)}{a_N}dt\notag.
\end{align}
Now we prove Lemma \ref{lemma 4.1 LLNunderTransformedMeasure}. Our proof follows the strategy introduced in the proof of Lemma 4.2 of \cite{Xue2021}, where a crucial step is the use of a generalized version of Girsanov's theorem introduced in \cite{Schuppen1974}.

\proof[Proof of Lemma \ref{lemma 4.1 LLNunderTransformedMeasure}]

The existence and uniqueness of the solution to Equation \eqref{equ 4.1 measureValuedODE} is proved in Appendix \ref{Appendix A.3}. We further prove in Appendix \ref{Appendix A.4} that $\{\theta^N\}_{N\geq 1}$ is $\hat{P}^N_{f, G}$-tight. Since $C^1([0, 1])$ is dense in $C([0, 1])$, we only need to check that if $\varpi$ is a $\hat{P}^N_{f, G}$-weak limit of a subsequence of $\{\theta^N\}_{N\geq 1}$, then $\varpi$ satisfies Equation \eqref{equ 4.1 measureValuedODE} for all $h\in C^1([0, 1])$. According to It\^{o}'s formula and the definition of $\Gamma^N_t(G)$,
\begin{align}\label{equ 4.4}
d\Gamma^N_{t}(G)&=\frac{1}{H_G^N(0, X_0^N)}\exp\left\{-\int_0^t\frac{(\partial_u+\mathcal{L}_N)H_G^N(u, X_u^N)}{H_G^N(u, X_u^N)}du\right\}d\mathcal{M}_t(H_G^N)\notag\\
&=\Gamma^N_{t}(G)d\widetilde{\mathcal{M}}_t(H_G^N),
\end{align}
where
\[
\widetilde{\mathcal{M}}_t(H_G^N)=\int_0^t\frac{1}{H_G^N(u, X_u^N)}d\mathcal{M}_u(H_G^N).
\]
For any $h\in C^1([0, 1])$, let
\[
\widehat{\mathcal{M}}_t(\theta^N(h))=\mathcal{M}_t(\theta^N(h))-\langle\mathcal{M}(\theta^N(h)), \widetilde{\mathcal{M}}(H_G^N)\rangle_t,
\]
then according to Equation \eqref{equ 4.4} and Theorem 3.2 of \cite{Schuppen1974}, which is a generalized version of Girsanov's theorem, $\left\{\widehat{\mathcal{M}}_t(\theta^N(h))\right\}_{0\leq t\leq T_0}$ is a local martingale under $\hat{P}^{N}_{f, G}$ for all $h\in C^1([0, 1])$ and
\[
\left[\widehat{\mathcal{M}}(\theta^N(h)),\widehat{\mathcal{M}}(\theta^N(h))\right]_t=\left[\mathcal{M}(\theta^N(h)), \mathcal{M}(\theta^N(h))\right]_t
\]
under both $P_f^N$ and $\hat{P}^N_{f, G}$. Since $\{X^N_t\}_{t\geq 0}$ is a pure jump process and $EX^N_t(i)$ is differentiable in $t$ for $1\leq i\leq N$,
\[
\left[\mathcal{M}(\theta^N(h)), \mathcal{M}(\theta^N(h))\right]_t=\sum_{0\leq u\leq t}\left(\theta_u^N(h)-\theta_{u-}^N(h)\right)^2.
\]
Hence, by Lemma \ref{lemma 4.2} and Doob's inequality, $\sup_{0\leq t\leq T_0}|\widehat{\mathcal{M}}_t(\theta^N(h))|=o_p(1)$ under $\hat{P}^N_{f, G}$. By Equation \eqref{equ 4.3} and the definition of $\widetilde{\mathcal{M}}_t(H_G^N)$,
\begin{align*}
&d\langle\mathcal{M}(\theta^N(h)), \widetilde{\mathcal{M}}(H_G^N)\rangle_t\\
&=\sum_{i=1}^N\sum_{j=1}^N\frac{\lambda\left(\frac{i}{N}, \frac{j}{N}\right)X_t^N(i)}{N}\left(e^{\frac{a_N}{N}\left(G_t\left(\frac{j}{N}\right)-G_t\left(\frac{i}{N}\right)\right)}-1\right)\frac{h\left(\frac{j}{N}\right) -h\left(\frac{i}{N}\right)}{a_N}dt.
\end{align*}
As a result, under $\hat{P}^N_{f, G}$,
\begin{align}\label{equ 4.5}
&\theta^N_t(h)=\theta^N_0(h)+o_p(1)+\int_0^t\theta_s^N\left((P_1^N-P_2^N)h\right)ds\\
&+\int_0^t\Bigg(\sum_{i=1}^N\sum_{j=1}^N\frac{\lambda\left(\frac{i}{N}, \frac{j}{N}\right)X_s^N(i)}{N}\Big(e^{\frac{a_N}{N}\left(G_s\left(\frac{j}{N}\right)-G_s\left(\frac{i}{N}\right)\right)} -1\Big)\frac{h\left(\frac{j}{N}\right) -h\left(\frac{i}{N}\right)}{a_N}\Bigg)ds.\notag
\end{align}
According to Taylor's expansion formula up to the second order,
\[
e^{\frac{a_N}{N}\left(G_s\left(\frac{j}{N}\right)-G_s\left(\frac{i}{N}\right)\right)}=1+\frac{a_N}{N}\left(G_s\left(\frac{j}{N}\right)-G_s\left(\frac{i}{N}\right)\right)
+O(\frac{a_N^2}{N^2}).
\]
Then, by an analysis similar to that in the proof of Equation \eqref{equ 1.4 UpperBoundofMDP},
\begin{align*}
&\int_0^t\Bigg(\sum_{i=1}^N\sum_{j=1}^N\frac{\lambda\left(\frac{i}{N}, \frac{j}{N}\right)X_s^N(i)}{N}\Big(e^{\frac{a_N}{N}\left(G_s\left(\frac{j}{N}\right)-G_s\left(\frac{i}{N}\right)\right)} -1\Big)\frac{h\left(\frac{j}{N}\right) -h\left(\frac{i}{N}\right)}{a_N}\Bigg)ds\\
&+\int_0^t\theta_s^N((P_1^N-P_2^N)h)ds\\
&=\int_0^t\mu_s^N(\mathcal{R}_s(G,h))ds+\int_0^t\theta_s^N((P_1-P_2)h)ds+\epsilon_{7,t}^N,
\end{align*}
where $\mathcal{R}_s(G,h)(x)=\int_0^1\lambda(x, y)(G_s(y)-G_s(x))(h(y)-h(x))dy$ for all $0\leq x\leq 1$ and $\sup_{0\leq t\leq T_0}|\epsilon_{7,t}^N|=o_{\exp}\left(a_N\right)$ under $P$, the original probability measure of our model. As we have shown in the proof of Lemma \ref{lemma 4.2}, conditioned on $\sum_{i=1}^NX_0^N(i)\leq NM$, $\frac{d\hat{P}^N_{f,G}}{dP^N_f}=\Gamma_{T_0}^N(G)\leq \exp\{a_NMC_6\}$ for some $C_6$ independent of $N$. Similarly, it is easy to check that
\begin{align*}
\frac{dP^N_f}{dP}&=\frac{e^{-\sum_{i=1}^N\left(\frac{a_N}{N}f\left(\frac{i}{N}\right)+\phi\left(\frac{i}{N}\right)\right)}
\prod_{i=1}^N\left(\frac{a_N}{N}f\left(\frac{i}{N}\right)+\phi\left(\frac{i}{N}\right)\right)^{X_0^N(i)}}
{e^{-\sum_{i=1}^N\phi\left(\frac{i}{N}\right)}\prod_{i=1}^N\phi\left(\frac{i}{N}\right)^{X_0^N(i)}}\\
&\leq \exp\{a_NMC_7\}
\end{align*}
for some $C_7=C_7(f)$ independent of $N$ conditioned on $\sum_{i=1}^NX_0^N(i)\leq NM$. Then it is easy to check that $\sup_{0\leq t\leq T_0}|\epsilon_{7,t}^N|=o_{\exp}\left(a_N\right)$ under $\hat{P}_{f, G}^{N}$ according to Equation \eqref{equ A.3} with $P^N_f$ replaced by $P$. By Lemma \ref{lemma 2.1 replacement},
\[
\int_0^t\mu_s^N(\mathcal{R}_s(G,h))ds=\int_0^t\mu_s(\mathcal{R}_s(G,h))ds+\epsilon_{8,t}^N,
\]
where $\sup_{0\leq t\leq T_0}|\epsilon_{8,t}^N|=o_{\exp}(a_N)$ under $P$. Then, by an analysis similar to that for $\epsilon_{7,t}^N$, $\sup_{0\leq t\leq T_0}|\epsilon_{8,t}^N|=o_{\exp}\left(a_N\right)$ under $\hat{P}_{f, G}^N$.
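Let us sketch once the transfer argument used here and for the other error terms, since it always follows the same pattern; in this sketch $Y_N$ stands for any of $\sup_{0\leq t\leq T_0}|\epsilon_{7,t}^N|$, $\sup_{0\leq t\leq T_0}|\epsilon_{8,t}^N|$ or $\epsilon^N$, and $C_6, C_7$ are the constants from the bounds on the Radon--Nikodym derivatives recorded above. Conditioned on $\sum_{i=1}^NX_0^N(i)\leq NM$ we have $\frac{d\hat{P}^N_{f,G}}{dP}=\Gamma_{T_0}^N(G)\frac{dP^N_f}{dP}\leq e^{a_NM(C_6+C_7)}$, and hence
\[
\hat{P}^N_{f, G}\left(|Y_N|\geq\epsilon,\ \sum_{i=1}^NX_0^N(i)\leq NM\right)\leq e^{a_NM(C_6+C_7)}P\left(|Y_N|\geq\epsilon\right),
\]
so that $\frac{1}{a_N}\log$ of the left-hand side converges to $-\infty$ whenever $Y_N=o_{\exp}(a_N)$ under $P$. The remaining event $\{\sum_{i=1}^NX_0^N(i)>NM\}$ is controlled, at the exponential scale $a_N$, by Equation \eqref{equ A.3} in the appendix after letting $M\rightarrow+\infty$, since $X_0^N$ has the same distribution under $\hat{P}^N_{f, G}$ as under $P^N_f$.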
In conclusion, since $\mu_s(\mathcal{R}_s(G,h))=\langle G_s|h\rangle_s$, \[ \theta_t^N(h)=\theta_0^N(h)+o_p(1)+\int_0^t\theta_s^N\left((P_1-P_2)h\right)ds+\int_0^t\langle G_s|h\rangle_sds \] under $\hat{P}^N_{f, G}$, where $o_p(1)$ can be chosen uniformly for $0\leq t\leq T_0$. Since we have proved that $\{\theta^N\}_{N\geq 1}$ is $\hat{P}_{f, G}^N$-tight in Appendix \ref{Appendix A.4}, we only need to show that \begin{equation}\label{equ 4.6} \theta_0^N(h)=\int_0^1f(x)h(x)dx+o_p(1) \end{equation} under $\hat{P}^N_{f, G}$ to finish the proof. As we have introduced in Section \ref{Section 3}, distributions of $\theta^N_0$ under $\hat{P}^N_{f, G}$ and $P^N_f$ are equal. As a result, Equation \eqref{equ 4.6} follows directly from the definition of $P^N_f$ and Chebyshev's inequality and hence the proof is complete. \qed At last we prove Equation \eqref{equ 1.5 LowerBoundofMDP}. The proof is a rigorous statement of the intuitive analysis given in Remark \ref{Remark 4.1}. \proof[Proof of Equation \eqref{equ 1.5 LowerBoundofMDP}] Equation \eqref{equ 1.5 LowerBoundofMDP} is trivial when $\inf_{\pi\in O}\{I_{ini}(\pi_0)+I_{dyn}(\pi)\}=+\infty$. Hence we only deal with the case where $\inf_{\pi\in O}\{I_{ini}(\pi_0)+I_{dyn}(\pi)\}<+\infty$. For any $\epsilon>0$, there exists $\pi^\epsilon\in O$ that \[ I_{ini}(\pi^\epsilon_0)+I_{dyn}(\pi^\epsilon)\leq \inf_{\pi\in O}\{I_{ini}(\pi_0)+I_{dyn}(\pi)\}+\epsilon. \] Then, by Theorem \ref{theorem 1.3 alternative representation formula}, there exist $f^\epsilon\in L^2([0, 1])$ and $F^\epsilon\in \mathcal{H}$ that $\pi_0^\epsilon(dx)=f^\epsilon(x)dx$, \begin{equation}\label{equ 4.7} \pi_{T_0}^\epsilon(G_{T_0})-\pi_0^\epsilon(G_0)-\int_0^{T_0}\pi_s^\epsilon\left((\partial_s+P_1-P_2)G_s\right)ds=\ll G, F^\epsilon\gg \end{equation} for any $G\in C^{1,0}([0, T_0]\times [0, 1])$ and \[ I_{ini}(\pi_0^\epsilon)=\frac{1}{2}\int_0^1\frac{\left(f^\epsilon(x)\right)^2}{\phi(x)}dx, \text{~}I_{dyn}(\pi^\epsilon)=\frac{\ll F^\epsilon, F^\epsilon \gg}{2}. \] By Equation \eqref{equ 4.7}, let $G(s,x)=l(s)h(x)$ for some $l\in C^1([0, T_0])$ and $h\in C([0, 1])$, then \[ l_{T_0}\pi^\epsilon_{T_0}(h)-l_0\pi^\epsilon_{0}(h)-\int_0^{T_0}l^\prime(s)\pi^\epsilon_s(h)ds=\int_0^{T_0}l(s)\pi^\epsilon_s((P_1-P_2)h)ds +\int_0^{T_0}l(s)\langle F^\epsilon_s|h\rangle_sds. \] Since $l$ can be chosen arbitrarily, $\pi^\epsilon_t(h)$ is absolutely continuous with respect to $t$ and \[ \frac{d}{dt}\pi^\epsilon_t(h)=\pi_t^\epsilon((P_1-P_2)h)+\langle F^\epsilon_t|h\rangle_t. \] Therefore, as we have shown in Appendix \ref{Appendix A.3}, \begin{equation}\label{equ 4.8} \pi_t^\epsilon=e^{t(P_1-P_2)^{*}}\pi_0^\epsilon+\int_0^te^{(t-u)(P_1-P_2)^*}\Xi_u^{F^\epsilon}du. \end{equation} Since $C([0, 1])$ is dense in $L^2([0, 1])$ and $C^{1,0}([0, T_0]\times [0, 1])$ is dense in $\mathcal{H}$, there exist a sequence $\{f^n\}_{n\geq 1}$ in $C([0, 1])$ and a sequence $\{F^n\}_{n\geq 1}$ in $C^{1,0}([0, T_0]\times [0, 1])$ that $f^n\rightarrow f^\epsilon$ in $L^2$ and $F^n\rightarrow F^\epsilon$ in $\mathcal{H}$ and then, \[ \lim_{n\rightarrow+\infty}\frac{1}{2}\int_0^1\frac{\left(f^n(x)\right)^2}{\phi(x)}dx=\frac{1}{2}\int_0^1 \frac{\left(f^\epsilon(x)\right)^2}{\phi(x)}dx, \text{~}\lim_{n\rightarrow+\infty}\frac{\ll F^n, F^n\gg}{2}=\frac{\ll F^\epsilon, F^\epsilon\gg}{2}. 
\] For each $n\geq 1$, let $\pi^n$ be the unique element in $\mathcal{D}([0, T_0], \mathcal{S})$ that \begin{equation*} \begin{cases} \frac{d}{dt}\pi_t^n(h)=\pi_t^n\left((P_1-P_2)h\right)+\langle F_t^n | h\rangle_t \text{~for all~}0\leq t\leq T_0\text{~and~}h\in C([0, 1]),\\ \pi_0^n(dx)=f^n(x)dx. \end{cases} \end{equation*} i.e., $\pi_t^n=e^{t(P_1-P_2)^{*}}\pi_0^n+\int_0^te^{(t-u)(P_1-P_2)^*}\Xi_u^{F^n}du$ as we have shown in Appendix \ref{Appendix A.3}. Then, by Equation \eqref{equ 4.8}, $\pi^n\rightarrow \pi^\epsilon$ in $\mathcal{D}([0, T_0], \mathcal{S})$. Furthermore, by Theorem \ref{theorem 1.3 alternative representation formula}, $I_{ini}(\pi^n_0)=\frac{1}{2}\int_0^1\frac{\left(f^n(x)\right)^2}{\phi(x)}dx$ and $I_{dyn}(\pi^n)=\frac{\ll F^n, F^n\gg}{2}$. Then, since $O$ is open, there exists $m\geq 1$ that $\pi^m\in O$ and \[ I_{ini}(\pi^m_0)+I_{dyn}(\pi^m)\leq \inf_{\pi\in O}\{I_{ini}(\pi_0)+I_{dyn}(\pi)\}+2\epsilon. \] By Equation \eqref{equ 3.0}, \[ \Gamma_{T_0}^N(F^m)=\exp\left\{\frac{a_N^2}{N}\left(l(\theta^N, F^m)+\epsilon^N\right)\right\}, \] where $\epsilon^N=o_{\exp}(a_N)$ under $P$. According to a similar analysis with that of $\epsilon_{7,t}^N$, it is easy to check that $\epsilon^N=o_{\exp}(a_N)$ under $\hat{P}^N_{f^m, F^m}$. Let \[ D^\epsilon=\left\{\pi:~|l(\pi, F^m)-l(\pi^m, F^m)|<\epsilon\right\}\cap O, \] then $\hat{P}^N_{f^m, F^m}(\theta^N\in D^\epsilon)=1+o(1)$ as $N\rightarrow+\infty$ by Lemma \ref{lemma 4.1 LLNunderTransformedMeasure} and the fact that $\pi^m\in D^\epsilon$. According to the definition of $F^m$ and $\pi^m$, it is easy to check that $l(\pi^m, F^m)=\frac{\ll F^m, F^m\gg}{2}=I_{dyn}(\pi^m)$. By Chebyshev's inequality and the definition of $P^N_f$, it is easy to check that \[ \frac{dP}{dP_{f^m}^N}=\exp\left\{-\frac{a_N^2}{N}\left(\frac{1}{2}\int_0^1\frac{\left(f^m(x)\right)^2}{\phi(x)}dx+o_p(1)\right)\right\} \] under $\hat{P}^{N}_{f^m, F^m}$. As a result, let \[ \hat{D}^{\epsilon,N}=\{\theta^N\in D^\epsilon\}\cap\{|\epsilon^N|<\epsilon\} \cap\left\{\frac{dP}{dP_{f^m}^N}\geq\exp\left\{-\frac{a_N^2}{N}\left(\frac{1}{2}\int_0^1\frac{\left(f^m(x)\right)^2}{\phi(x)}dx+\epsilon\right)\right\}\right\}, \] then $\hat{P}^N_{f^m, F^m}(\hat{D}^{\epsilon,N})=1+o(1)$ as $N\rightarrow+\infty$ and \[ \frac{dP}{d\hat{P}^N_{f^m, F^m}}=\left(\Gamma_{T_0}^N(F^m)\right)^{-1}\frac{dP}{dP_{f^m}^N} \geq \exp\left\{-\frac{a_N^2}{N}\left(I_{ini}(\pi_0^m)+I_{dyn}(\pi^m)+3\epsilon\right)\right\} \] on $\hat{D}^{\epsilon, N}$. Since $\hat{D}^{\epsilon, N}\subseteq \{\theta^N\in O\}$, \begin{align*} P(\theta^N\in O)&\geq P(\hat{D}^{\epsilon, N})=E_{\hat{P}^N_{f^m, F^m}}\left(\frac{dP}{d\hat{P}^N_{f^m, F^m}}1_{\{\hat{D}^{\epsilon, N}\}}\right)\\ &\geq \exp\left\{-\frac{a_N^2}{N}\left(I_{ini}(\pi_0^m)+I_{dyn}(\pi^m)+3\epsilon\right)\right\}(1+o(1)) \end{align*} and hence \begin{align*} \liminf_{N\rightarrow+\infty}\frac{N}{a_N^2}\log P(\theta^N\in O)&\geq -\left(I_{ini}(\pi_0^m)+I_{dyn}(\pi^m)\right)-3\epsilon\\ &\geq -\inf_{\pi\in O}\left(I_{ini}(\pi_0)+I_{dyn}(\pi)\right)-5\epsilon. \end{align*} Since $\epsilon$ is arbitrary, the proof is complete. \qed \appendix{} \section{Appendix} \subsection{Proof of Lemma \ref{lemma 3.1 independent and Poisson}}\label{subsection A.1} In this subsection we prove Lemma \ref{lemma 3.1 independent and Poisson}. 
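The key probabilistic fact behind Lemma \ref{lemma 3.1 independent and Poisson} is the thinning property of the Poisson distribution; as a warm-up for the computation below, let us record the one-box case. If $X_0$ follows the Poisson distribution with mean $m$ and each of the $X_0$ molecules is placed, independently, in the $j$th box with probability $p(j)$, where $\sum_{j=1}^Np(j)=1$, then for any $r_1,\ldots,r_N\in\mathbb{R}$,
\[
E\left(\exp\left\{\sum_{j=1}^Nr_jZ_j\right\}\right)=E\left(\left(\sum_{j=1}^Ne^{r_j}p(j)\right)^{X_0}\right)=\exp\left\{m\sum_{j=1}^N\left(e^{r_j}-1\right)p(j)\right\},
\]
where $Z_j$ denotes the number of molecules placed in the $j$th box; hence $Z_1,\ldots,Z_N$ are independent and $Z_j$ follows the Poisson distribution with mean $m\,p(j)$. The proof below is the superposition of this computation over all starting boxes.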
\proof[Proof of Lemma \ref{lemma 3.1 independent and Poisson}]

For $1\leq i, j\leq N$, we use $p_t^N(i,j)$ to denote the probability that a given gas molecule is in the $j$th box at time $t$ given that it is in the $i$th box at time $0$. Then, according to our assumption on $X_0^N$,
\begin{equation}\label{equ A.1}
EX_t^N(i)=\sum_{l=1}^NEX_0^N(l)p_t^N(l,i)=\sum_{l=1}^N\phi(\frac{l}{N})p_t^N(l,i).
\end{equation}
For $i\neq j$ and $1\leq k\leq X_0^N(i)$, we use $A_k^{N,t}(i, j)$ to denote the indicator function of the event that the $k$th gas molecule in the $i$th box at time $0$ is in the $j$th box at time $t$. Then, for given $r_1, r_2, \ldots, r_N\in \mathbb{R}$,
\[
\exp\left\{\sum_{j=1}^Nr_jX_t^N(j)\right\}=\exp\left\{\sum_{l=1}^N\sum_{k=1}^{X_0^N(l)}\sum_{j=1}^Nr_jA_k^{N,t}(l,j)\right\}.
\]
Therefore, according to our assumption on $X_0^N$ and Equation \eqref{equ A.1},
\begin{align*}
E\left(\exp\left\{\sum_{j=1}^Nr_jX_t^N(j)\right\}\Bigg|X_0^N\right)&=\prod_{l=1}^N\prod_{k=1}^{X_0^N(l)}\left(\sum_{j=1}^Ne^{r_j}p_t^N(l,j)\right) \\
&=\prod_{l=1}^N\left(\sum_{j=1}^Ne^{r_j}p_t^N(l, j)\right)^{X_0^N(l)}
\end{align*}
and
\begin{align}\label{equ A.2}
&E\left(\exp\left\{\sum_{j=1}^Nr_jX_t^N(j)\right\}\right)=\prod_{l=1}^NE\left(\left(\sum_{j=1}^Ne^{r_j}p_t^N(l, j)\right)^{X_0^N(l)}\right)\\
&=\prod_{l=1}^N \exp\left\{\left(\sum_{j=1}^Ne^{r_j}p_t^N(l, j)-1\right)\phi\left(\frac{l}{N}\right)\right\}
=\exp\left\{\sum_{j=1}^N\left(e^{r_j}-1\right)\sum_{l=1}^N\phi\left(\frac{l}{N}\right)p_t^N(l,j)\right\}\notag\\
&=\exp\left\{\sum_{j=1}^N\left(e^{r_j}-1\right)EX_t^N(j)\right\}. \notag
\end{align}
Since $r_1, r_2,\ldots, r_N$ are arbitrary, Lemma \ref{lemma 3.1 independent and Poisson} follows from Equation \eqref{equ A.2} directly.

\qed

\subsection{Proof of Lemma \ref{lemma 4.2}}\label{subsection A.2}

In this subsection, we prove Lemma \ref{lemma 4.2}.

\proof[Proof of Lemma \ref{lemma 4.2}]

We first show that Equation \eqref{equ 4.2} holds under $P^N_f$. For any $M>0$, according to Markov's inequality,
\[
\limsup_{N\rightarrow+\infty}\frac{1}{N}\log P^N_f\left(\frac{1}{N}\sum_{i=1}^NX_0^N(i)\geq M\right)\leq (e-1)\int_0^1\phi(x)dx-M
\]
and hence
\begin{equation}\label{equ A.3}
\limsup_{M\rightarrow+\infty}\limsup_{N\rightarrow+\infty}\frac{1}{N}\log P^N_f\left(\frac{1}{N}\sum_{i=1}^NX_0^N(i)\geq M\right)=-\infty.
\end{equation}
Note that Equation \eqref{equ A.3} still holds when $P^N_f$ is replaced by the original probability measure $P$ of our process, according to the same analysis as that under $P^N_f$. Conditioned on $\frac{1}{N}\sum_{i=1}^NX_0^N(i)\leq M$, $\theta^N_t$ jumps at rate at most $\|\lambda\|NM$, where $\|\lambda\|=\sup_{0\leq x,y\leq 1}|\lambda(x,y)|$, and
\[
\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\leq \frac{4}{a^2_N}\|h\|^2
\]
when $t$ is a jump moment, where $\|h\|=\sup_{0\leq x\leq 1}|h(x)|$. As a result, conditioned on $\frac{1}{N}\sum_{i=1}^NX_0^N(i)\leq M$, $\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2$ is stochastically dominated from above by $\frac{4\|h\|^2}{a_N^2}Y(NM\|\lambda\|T_0)$, where $\{Y(t)\}_{t\geq 0}$ is a Poisson process with rate $1$.
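The next display relies only on the elementary exponential Markov bound for the Poisson distribution, which we record for clarity: if $Y(s)$ follows the Poisson distribution with mean $s$ and $m>0$, then
\[
P\left(Y(s)\geq m\right)\leq e^{-m}E\left(e^{Y(s)}\right)=\exp\left\{(e-1)s-m\right\},
\]
applied below with $s=NM\|\lambda\|T_0$ and $m=\frac{a_N^2\epsilon}{4\|h\|^2}$.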
Hence, by Markov's inequality, \begin{align*} &P_f^N\left(\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon, \sum_{i=1}^NX_0^N(i)\leq NM\right)\leq P\left(Y(NM\|\lambda\|T_0)\geq \frac{a_N^2\epsilon}{4\|h\|^2}\right) \\ &\leq e^{-\frac{a_N^2\epsilon}{4\|h\|^2}}e^{(e-1)NMT_0\|\lambda\|} \end{align*} and consequently \[ \limsup_{N\rightarrow+\infty}\frac{1}{N}\log P_f^N\left(\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon, \sum_{i=1}^NX_0^N(i)\leq NM\right)=-\infty \] since $\frac{a_N^2}{N}\rightarrow+\infty$. As a result, \begin{align*} &\limsup_{N\rightarrow+\infty}\frac{1}{N}\log P_f^N\left(\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon\right)\\ &\leq \limsup_{N\rightarrow+\infty}\frac{1}{N}\log P_f^N\left(\frac{1}{N}\sum_{i=1}^NX_0^N(i)\geq M\right). \end{align*} Since $M$ is arbitrary, Equation \eqref{equ 4.2} holds under $P_f^N$ according to Equation \eqref{equ A.3}. \quad Now we prove that Equation \eqref{equ 4.2} holds under $\hat{P}^N_{f, G}$. Conditioned on $\frac{1}{N}\sum_{i=1}^NX_0^N(i)\leq M$, it is easy to check that there exists $C_6=C_6(G)<+\infty$ independent of $N$ that $\Gamma_{T_0}^N(G)\leq e^{a_NC_6M}$ for sufficiently large $N$. Then, for sufficiently large $N$, \begin{align*} &\hat{P}_{f, G}^N\left(\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon, \frac{1}{N}\sum_{i=1}^NX_0^N(i)\leq M\right) \\ &=E_{P_f^N}\left(\Gamma_{T_0}^N(G)1_{\left\{\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon, \frac{1}{N}\sum_{i=1}^NX_0^N(i)\leq M\right\}}\right)\\ &\leq e^{a_NC_6M}P_f^N\left(\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon\right). \end{align*} Since we have shown that Equation \eqref{equ 4.2} holds under $P_f^N$ and $\lim_{N\rightarrow+\infty}\frac{a_N}{N}=0$, \[ \lim_{N\rightarrow+\infty} \frac{1}{N}\log \hat{P}_{f, G}^N\left(\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon, \frac{1}{N}\sum_{i=1}^NX_0^N(i)\leq M\right)=-\infty \] and hence \begin{align*} &\limsup_{N\rightarrow+\infty}\frac{1}{N}\log \hat{P}_{f,G}^N\left(\sum_{0\leq t\leq T_0}\left(\theta_t^N(h)-\theta_{t-}^N(h)\right)^2\geq \epsilon\right)\\ &\leq \limsup_{N\rightarrow+\infty}\frac{1}{N}\log \hat{P}_{f,G}^N\left(\frac{1}{N}\sum_{i=1}^NX_0^N(i)\geq M\right). \end{align*} Since $\Gamma_0^N(G)=1$ and $\{\Gamma_t^N(G)\}_{0\leq t\leq T_0}$ is a martingale, distributions of $X_0^N$ under $P_f^N$ and $\hat{P}_{f, G}^N$ are the same. Then, since $M$ is arbitrary, Equation \eqref{equ 4.2} holds under $\hat{P}_{f, G}^N$ according to Equation \eqref{equ A.3}. \qed \subsection{Existence and uniqueness of the solution to Equation \eqref{equ 4.1 measureValuedODE}}\label{Appendix A.3} In this subsection we give the proof of existence and uniqueness of the solution to Equation \eqref{equ 4.1 measureValuedODE}. \proof[Proof of existence and uniqueness of the solution to Equation \eqref{equ 4.1 measureValuedODE}] For any $\mu \in \mathcal{S}$, we use $\|\mu\|$ to denote the norm of $\mu$, i.e., \[ \|\mu\|=\sup\left\{|\mu(f)|:~f\in C([0, 1])\text{~and~}\sup_{0\leq x\leq 1}|f(x)|\leq 1\right\}. \] We further define $(P_1-P_2)^{*}$ as the linear operator from $\mathcal{S}$ to $\mathcal{S}$ that \[ \left((P_1-P_2)^{*}\mu\right)(f)=\mu\left((P_1-P_2)f\right) \] for any $\mu\in \mathcal{S}$ and $f\in C([0, 1])$. 
Then it is easy to check that $\|(P_1-P_2)^{*}\mu\|\leq 2\|\lambda\|\|\mu\|$ for any $\mu\in \mathcal{S}$. As a result, it is reasonable to define
\[
e^{c(P_1-P_2)^{*}}=\sum_{n=0}^{+\infty}\frac{c^n((P_1-P_2)^{*})^n}{n!}
\]
for any $c\in \mathbb{R}$ and the domain of $e^{c(P_1-P_2)^{*}}$ is $\mathcal{S}$. For $G\in C^{1,1}([0, T_0]\times[0, 1])$ and any $0\leq t\leq T_0$, let $\Xi_t^G$ be the element in $\mathcal{S}$ such that $\Xi_t^G(f)=\langle G_t|f\rangle_t$ for any $f\in C([0, 1])$. Then Equation \eqref{equ 4.1 measureValuedODE} can be considered as an $\mathcal{S}$-valued linear ODE:
\[
\begin{cases}
&\frac{d}{dt}\vartheta^{f,G}_t=(P_1-P_2)^{*}\vartheta^{f,G}_t+\Xi_t^G\text{~for~}0\leq t\leq T_0,\\
&\vartheta^{f,G}_0(dx)=f(x)dx.
\end{cases}
\]
Therefore,
\[
\frac{d}{dt}\left(e^{-t(P_1-P_2)^{*}}\vartheta^{f,G}_t\right)=e^{-t(P_1-P_2)^{*}}\Xi_t^G
\]
and hence
\[
\vartheta_t^{f,G}=e^{t(P_1-P_2)^{*}}\vartheta_0^{f,G}+\int_0^te^{(t-u)(P_1-P_2)^{*}}\Xi_u^Gdu,
\]
where $\vartheta^{f,G}_0(dx)=f(x)dx$. Since we have directly solved Equation \eqref{equ 4.1 measureValuedODE}, the solution exists and is unique.

\qed

\subsection{$\hat{P}^N_{f, G}$-tightness of $\{\theta^N\}_{N\geq 1}$}\label{Appendix A.4}

In this subsection we prove that $\{\theta^N\}_{N\geq 1}$ is $\hat{P}^N_{f, G}$-tight.

\proof[Proof of $\hat{P}^N_{f, G}$-tightness of $\{\theta^N\}_{N\geq 1}$]

By Aldous' criteria, we only need to check that the following two claims hold.

\textbf{Claim} 1. For all $h\in C([0, 1])$,
\[
\lim_{M\rightarrow+\infty}\limsup_{N\rightarrow+\infty} \hat{P}^N_{f, G}\left(|\theta^N_t(h)|\geq M\right)=0
\]
for all $0\leq t\leq T_0$.

\textbf{Claim} 2. For any $\epsilon>0$ and $h\in C([0, 1])$,
\[
\lim_{\delta\rightarrow 0}\limsup_{N\rightarrow+\infty}\sup_{\tau\in \Upsilon, s\leq \delta}\hat{P}^N_{f, G}\left(|\theta^N_{\tau+s}(h)-\theta^N_\tau(h)|>\epsilon\right)=0,
\]
where $\Upsilon$ is the set of stopping times of $\{X_t^N\}_{t\geq 0}$ bounded by $T_0$.

We first check Claim 1. As we have shown in Sections \ref{Section 3} and \ref{section 4},
\[
\Gamma_{T_0}^N(G)=\exp\left\{\frac{a_N^2}{N}\left(l(\theta^N, G)+\epsilon^N\right)\right\},
\]
where $\epsilon^N=o_{\exp}(a_N)$ under both $P$ and $\hat{P}^N_{f, G}$. Hence, to check Claim 1, we only need to show that
\begin{equation}\label{equ A.4}
\lim_{M\rightarrow+\infty}\limsup_{N\rightarrow+\infty} \hat{P}^N_{f, G}\left(|\theta^N_t(h)|\geq M, |\epsilon^N|\leq 1\right)=0.
\end{equation}
By H\"{o}lder's inequality, Markov's inequality and the fact that
\[
\left(\frac{d\hat{P}^N_{f, G}}{dP^N_f}\right)^2=\left(\Gamma_{T_0}^N(G)\right)^2\leq\exp\left\{\frac{2a_N^2}{N}\left(l(\theta^N, G)+1\right)\right\}
\]
when $|\epsilon^N|\leq 1$, to prove Equation \eqref{equ A.4} we only need to show that
\begin{equation}\label{equ A.5}
\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log\sup_{0\leq t\leq T_0}E_{_{P^N_f}}\left(\exp\left\{\frac{a_N^2}{N}\theta_t^N(h)\right\}\right)<+\infty
\end{equation}
and
\begin{equation}\label{equ A.6}
\limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log\sup_{0\leq t\leq T_0}E_{_{P^N_f}}\left(\exp\left\{\frac{Ca_N^2}{N}l(\theta^N, G)\right\}\right)<+\infty
\end{equation}
for any $C>0$. By Lemma \ref{lemma 3.1 independent and Poisson}, under $P^N_f$, $\{X_t^N(i)\}_{1\leq i\leq N}$ are independent and $X_t^N(i)$ follows the Poisson distribution with mean
\[
E_{_{P^N_f}} X_t^N(i)=\sum_{j=1}^N\left(\phi\left(\frac{j}{N}\right)+\frac{a_N}{N}f\left(\frac{j}{N}\right)\right)p_t^N(j,i)
\]
for all $1\leq i\leq N$.
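In the computation below we use the moment generating function of the Poisson distribution: if $X$ follows the Poisson distribution with mean $m$, then for any $\theta\in\mathbb{R}$ and any constant $c$,
\[
E\left(e^{\theta X}\right)=\exp\left\{m\left(e^{\theta}-1\right)\right\},\qquad
E\left(e^{\theta (X-c)}\right)=\exp\left\{m\left(e^{\theta}-1\right)-\theta c\right\}.
\]
This is applied with $\theta=\frac{a_N}{N}h(\frac{i}{N})$, $m=E_{_{P^N_f}}X_t^N(i)$ and $c=EX_t^N(i)$, together with the independence of $\{X_t^N(i)\}_{1\leq i\leq N}$ under $P^N_f$.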
As a result, \begin{align*} &E_{_{P^N_f}}\left(\exp\left\{\frac{a_N^2}{N}\theta_t^N(h)\right\}\right)\\ &=e^{\sum_{i=1}^NE_{_{P^N_f}}X_t^N(i)\left(e^{\frac{a_N}{N}h(\frac{i}{N})}-\frac{a_N}{N}h(\frac{i}{N})-1\right) +\sum_{i=1}^N\frac{a_N}{N}h(\frac{i}{N})\left(E_{_{P^N_f}}X_t^N(i)-EX_t^N(i)\right)}. \end{align*} Since $\sum_{i=1}^Np_t^N(j,i)=1$, \[ \sum_{i=1}^N\frac{a_N}{N}h(\frac{i}{N})\left(E_{_{P^N_f}}X_t^N(i)-EX_t^N(i)\right)\leq \frac{a_N^2}{N}\|h\|\|f\|. \] According to a similar analysis with that given in Section 4 of \cite{Xue2020}, there exists $C_9$ independent of $N$ such that \[ \sup_{1\leq i\leq N, 0\leq t\leq T_0}E_{_{P^N_f}}X_t^N(i)\leq C_9 \] for sufficiently large $N$. Therefore, according to Taylor's expansion formula up to the second order, \[ \limsup_{N\rightarrow+\infty}\frac{N}{a_N^2}\log\sup_{0\leq t\leq T_0}E_{_{P^N_f}}\left(\exp\left\{\frac{a_N^2}{N}\theta_t^N(h)\right\}\right)\leq C_9\int_0^1h^2(x)dx+\|h\|\|f\| \] and hence Equation \eqref{equ A.5} holds. Now we check Equation \eqref{equ A.6}. By repeated utilizing H\"{o}lder's inequality and Jensen's inequality, \begin{align}\label{equ A.7} &E_{_{P_f^N}}\left(\exp\left\{\frac{Ca_N^2}{N}\theta_{T_0}^N(G_{T_0})-\frac{Ca_N^2}{N}\theta_0^N(G_0) -\int_0^{T_0}\frac{Ca_N^2}{N}\theta_s^N\left((\partial_s+P_1-P_2)G_s\right)ds\right\}\right) \notag\\ &\leq \sqrt{E_{_{P_f^N}}e^{\frac{a_N^2}{N}\theta_{T_0}^N(2CG_{T_0})+\frac{a_N^2}{N}\theta_0^N(-2CG_0)}}\sqrt{E_{_{P_f^N}} e^{\int_0^{T_0}\frac{a_N^2}{N}\theta_s^N\left(-2C(\partial_s+P_1-P_2)G_s\right)ds}} \\ &\leq \left(E_{_{P_f^N}}e^{\frac{a_N^2}{N}\theta_{T_0}^N(4CG_{T_0})}\right)^{\frac{1}{4}}\left(E_{_{P_f^N}}e^{\frac{a_N^2}{N}\theta_0^N(-4CG_0)}\right)^{\frac{1}{4}} \notag\\ &\text{\quad}\times\sqrt{E_{_{P_f^N}}\left(\frac{1}{T_0}\int_0^{T_0}e^{\frac{a_N^2}{N}\theta_s^N\left(-2CT_0(\partial_s+P_1-P_2)G_s\right)}ds\right)} \notag\\ &= \left(E_{_{P_f^N}}e^{\frac{a_N^2}{N}\theta_{T_0}^N(4CG_{T_0})}\right)^{\frac{1}{4}}\left(E_{_{P_f^N}}e^{\frac{a_N^2}{N}\theta_0^N(-4CG_0)}\right)^{\frac{1}{4}} \notag\\ &\text{\quad}\times\sqrt{\frac{1}{T_0}\int_0^{T_0}\left(E_{_{P_f^N}}e^{\frac{a_N^2}{N}\theta_s^N\left(-2CT_0(\partial_s+P_1-P_2)G_s\right)}\right)ds}. \notag \end{align} Equation \eqref{equ A.6} follows from Equations \eqref{equ A.5} and \eqref{equ A.7} and hence Claim 1 holds. Now we check Claim 2. As we have shown in Section \ref{section 4}, under $\hat{P}^N_{f, G}$, \[ \theta^N_t(h)=\int_0^1f(x)h(x)dx+o_p(1)+\int_0^t\theta_s^N((P_1-P_2)h)ds+\int_0^t\langle G_s|h\rangle_s ds, \] where $o_p(1)$ can be chosen uniformly for $0\leq t\leq T_0$. Hence, to prove Claim 2, we only need to check that \begin{equation}\label{equ A.8} \lim_{\delta\rightarrow 0}\limsup_{N\rightarrow+\infty}\sup_{\tau\in \Upsilon, s\leq \delta}\hat{P}^N_{f, G}\left(\left|\int_{\tau}^{\tau+s}\theta_u^N((P_1-P_2)h)du\right|>\epsilon\right)=0. \end{equation} As we have shown in Section \ref{section 4}, \[ \frac{dP}{dP^N_f}=\exp\left\{-\frac{a_N^2}{N}\left(\frac{1}{2}\int_0^1\frac{f^2(x)}{\phi(x)}dx+\epsilon_9^N\right)\right\}, \] where $\epsilon_9^N=o_p(1)$ under $P^f_N$ and $\hat{P}^f_N$. Hence, to prove Equation \eqref{equ A.8}, we only need to check that \begin{equation}\label{equ A.9} \lim_{\delta\rightarrow 0}\limsup_{N\rightarrow+\infty}\sup_{\tau\in \Upsilon, s\leq \delta}\hat{P}^N_{f, G}\left(\left|\int_{\tau}^{\tau+s}\theta_u^N((P_1-P_2)h)du\right|>\epsilon, |\epsilon^N|\leq 1, |\epsilon_9^N|\leq 1\right)=0. 
\end{equation} By H\"{o}lder's inequality, \begin{align*} &\hat{P}^N_{f, G}\left(\left|\int_{\tau}^{\tau+s}\theta_u^N((P_1-P_2)h)du\right|>\epsilon, |\epsilon^N|\leq 1, |\epsilon_9^N|\leq 1\right)\\ &\leq\sqrt{E_{_{P^N_f}}e^{\frac{2a_N^2}{N}\left(l(\theta^N, G)+1\right)}}\sqrt{P_f^N\left(\left|\int_{\tau}^{\tau+s}\theta_u^N((P_1-P_2)h)du\right|>\epsilon, |\epsilon^N_9|\leq 1\right)}\\ &\leq \sqrt{E_{_{P^N_f}}e^{\frac{2a_N^2}{N}\left(l(\theta^N, G)+1\right)}}\sqrt{e^{\frac{a_N^2}{N}\left(1+\frac{1}{2}\int_0^1\frac{f^2(x)}{\phi(x)}dx\right)}} \sqrt{P\left(\left|\int_{\tau}^{\tau+s}\theta_u^N((P_1-P_2)h)du\right|>\epsilon\right)}\\ &\leq \sqrt{E_{_{P^N_f}}e^{\frac{2a_N^2}{N}\left(l(\theta^N, G)+1\right)}}\sqrt{e^{\frac{a_N^2}{N}\left(1+\frac{1}{2}\int_0^1\frac{f^2(x)}{\phi(x)}dx\right)}}\\ &\text{\quad}\times \sqrt{P\left(\sup_{0\leq t_1<t_2\leq T_0, \atop |t_2-t_1|<\delta}\left|\int_{t_1}^{t_2}\theta_u^N((P_1-P_2)h)du\right|>\epsilon\right)}. \end{align*} As a result, Equation \eqref{equ A.9} follows from Lemma \ref{lemma 3.2 control of integration} and Equation \eqref{equ A.6} and hence Claim 2 holds. Since Claims 1 and 2 both hold, the proof is complete. \qed \quad \textbf{Acknowledgments.} The author is grateful to the financial support from the National Natural Science Foundation of China with grant number 11501542. {} \end{document}
\begin{document}
\title{Viscosity Solutions for Doubly-Nonlinear Evolution Equations}
\author{Luca Courte}
\address[Luca Courte]{Abteilung f\"ur Angewandte Mathematik, Albert-Ludwigs-Universit\"at Freiburg, Raum 228, Hermann-Herder-Straße 10, 79104 Freiburg i. Br.}
\email{[email protected]}
\urladdr{https://aam.uni-freiburg.de/mitarb/courte/index.html}
\author{Patrick Dondl}
\address[Patrick Dondl]{Abteilung f\"ur Angewandte Mathematik, Albert-Ludwigs-Universit\"at Freiburg, Raum 217, Hermann-Herder-Straße 10, 79104 Freiburg i. Br.}
\email{[email protected]}
\urladdr{https://aam.uni-freiburg.de/agdo/index.html}
\subjclass[2010]{35D40, 35G31, 35K55, 34E15, 34A60, 34C55}
\keywords{viscosity solution, partial differential inclusions, doubly nonlinear equations, vanishing viscosity limit, minimizing movement, hysteresis}
\date{\today}
\begin{abstract}
We extend the theory of viscosity solutions to treat scalar-valued doubly-nonlinear evolution equations. Such equations arise naturally in many mechanical models including dry friction. After providing a suitable definition for discontinuous viscosity solutions in this setting, we show that Perron's construction is still available, i.e., we prove an existence result. Moreover, we will prove comparison principles and stability results for these problems. The theoretical considerations are accompanied by several examples, e.g., we prove the existence of a solution to a rate-independent level-set mean curvature flow. Finally, we discuss in detail a rate-independent ordinary differential equation stemming from a problem with non-convex energy. We show that the solution obtained by maximal minimizing movements and the solution obtained by the vanishing viscosity method coincide with the upper and lower Perron solutions and show the emergence of a rate-independent hysteresis loop.
\end{abstract}
\maketitle

\section{Introduction}\label{sec:intro}

In this article, we consider scalar-valued doubly-nonlinear evolution equations and we discuss the existence and uniqueness of solutions from a viscosity solution perspective \cite{Crandall1992, CourteDondlStefanelli}. To be precise, given an open domain $\Omega \subset \mathbb{R}^n$, a time interval $I = (0, T)$ with $T >0$, and a boundary condition $u_0 : \partial_P (\Omega\times I) \to \mathbb{R}$, where $\partial_P (\Omega\times I)$ is the parabolic boundary, i.e., $\partial_P (\Omega\times I) \coloneqq \Omega \times \{0\} \cup \partial \Omega \times I$, we are interested in the existence and properties of \emph{discontinuous viscosity solutions} $u : \Omega \times I \to \mathbb{R}$ to differential inclusions of the following type
\begin{align}
F(x, t, u, u_t, \nabla u, D^2 u) \in \mathcal{S}(u_t) G(x, t, u, \nabla u) &\text{ in } \Omega \times I, \label{eq:pde}\\
u(\cdot, 0) = u_0 &\text{ on } \partial_P(\Omega\times I), \label{eq:pde_bdry}
\end{align}
where $F : \Omega \times I \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}^n \times \operatorname{Sym}(n) \to \mathbb{R}$ is a possibly nonlinear function, $\operatorname{Sym}(n)$ are the symmetric $n\times n$ matrices, $G:\Omega \times I \times \mathbb{R} \times \mathbb{R}^n \to [0, \infty)$ is a positive and possibly nonlinear function, and $\mathcal{S} : \mathbb{R} \to \mathcal{P}(\mathbb{R}) \setminus \{\emptyset\}$ is a set-valued function. The symbol $\mathcal{P}(\mathbb{R})$ denotes the power-set of $\mathbb{R}$. Such doubly nonlinear set-valued equations arise naturally while modeling dry friction (or rate independent) mechanics.
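To fix ideas, let us indicate one possible way to cast a dry-friction heat equation in the form \eqref{eq:pde}; the specific choices of $F$, $G$ and $\mathcal{S}$ below are only meant as an illustration, consistent with the conventions used later (degenerate ellipticity of $F$ and a decreasing, bounded $\mathcal{S}$). Taking
\[
F(x, t, r, a, p, X)\coloneqq -\operatorname{tr}(X)-f(x, t),\qquad G\equiv 1,\qquad \mathcal{S}(a)\coloneqq -\partial R(a),
\]
where $f$ is a given continuous loading and $\partial R$ denotes the subdifferential of the absolute value (see \eqref{eq:R} below), the inclusion \eqref{eq:pde} becomes
\[
\partial R(u_t)-\Delta u\ni f \quad\text{ in } \Omega\times I.
\]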
For an overview, see \cite{Mielke2015}. The prevalence of rate independent friction is due to the fact that it (as well as other, more complex dissipation models) emerges as the limit of viscous evolution in highly oscillatory (wiggly) environments \cite{Mielke2011, MR4027693}. The concept of energetic solutions derived using iterative minimization schemes has been shown to be powerful method to treat such rate independent models \cite{Francfort2006}. Classical applications include models for fracture \cite{Francfort1998} and phase transitions \cite{MielkeTheil}. More recently, the concept has been applied to damage mechanics \cite{knees_rossi_zanini_2019, Knees2019, knees_17} and plasticity \cite{Mielke2004, Mielke2017}, among many others (e.g., \cite{Mielke2007, Rossi2008, Mielke2012, Mielke2016, Knees2019}). One central issue when considering iterative minimization as a means to prove existence of solutions for rate independent systems is that a certain amount of compactness is necessary -- in all but some exceptional, linear cases, the dissipation distance must be compact on sets where the potential energy is finite. Viscosity solutions provide a means to overcome this issue, albeit at the expense that only scalar problems can be treated. The concept of discontinuous viscosity solutions \cite{Barles1987, Bardi1997, Crandall1992, Barles2013} turns out to be a good setting for the study of \eqref{eq:pde}, as potential time discontinuities, i.e., jumps, are one of the main features of doubly-nonlinear evolution equations. The proofs of comparison principles and existence of solutions follow the classical theory and are therefore relegated in Appendix \ref{app:proof}. Instead, we focus on the usefulness of this concept by applying the results to several examples relevant in mechanics. In particular, in section \ref{sec:per_sol}, we consider an ordinary differential inclusion and show how the Perron solutions relate to the solutions obtained by the vanishing viscosity limit and the minimizing movements procedure (see, e.g., \cite{Efendiev_Mielke, MR3021776}). The extension of the theory of viscosity solutions to this novel setting and the relationship between the Perron solutions, the vanishing viscosity solution, and the maximal minimizing movement solution are the main contributions of this article. The remainder of this article is organized as follows. In section \ref{sec:disc_vs}, we extend the notion of discontinuous viscosity solutions to equations of type \eqref{eq:pde} and present applications of the theory. As the proofs are similar to the case of (discontinuous) viscosity solutions in the sense of \cite{Crandall1992} they have been moved to Appendix \ref{app:proof} and the main focus in that section lies in the application of the results. With the help of Perron's construction \cite{Ishii1987}, we prove in subsection \ref{ssec:existence} an existence result for \eqref{eq:pde} under some mild continuity conditions on $G$ and $\mathcal{S}$. Contrary to the references mentioned in the beginning, this allows us to easily prove existence of solutions to equations, even if $-\mathcal{S}$ is not maximal monotone. 
Indeed, we highlight this by proving existence of solutions to the equation $$\mathcal{F}(u_t) - \Delta u \ni f(x, t),$$ where $\mathcal{F} : \mathbb{R} \to \mathcal{P}(\mathbb{R})$ models a combination of viscous friction, dry friction, and static friction (i.e., a variation of ``sticktion'' \cite{Visintin1994}), e.g., $\mathcal{F}(0) = [-2, 2]$ and $\mathcal{F}(a) = \{\operatorname{sgn}(a) + a\}$, whenever $a\not= 0$, see Example \ref{ex:fric}. In subsection \ref{ssec:comp}, we prove that equation \eqref{eq:pde} satisfies a strict comparison principle under some monotonicity assumptions, i.e., we can compare a strict subsolution with a supersolution and a subsolution with strict supersolution. We illustrate the usefulness of this in two examples. First, we will show the existence and uniqueness of a solution to a doubly nonlinear evolution equation with a degenerate elliptic operator, see Example \ref{ex:nonlinear}. Second, in Example \ref{ex:mcf}, we prove that there exists a family of level-set functions $u(\cdot,t) : \mathbb{R}^n \to \mathbb{R}$ satisfying \begin{align*} |\nabla u| \partial R(u_t) - |\nabla u| \operatorname{div}(\tfrac{\nabla u}{|\nabla u|}) \ni f \text{ on } \{u(\cdot, t) = 0\}, \end{align*} i.e., the set $\{u=0\}$ is a solution to a rate-independent level-set mean curvature flow if we assume $\partial R$ to be the subdifferential of the absolute value, \begin{equation} \label{eq:R} \partial R(a) \coloneqq \left\{ \begin{array}{ll} \lbrack-1, 1\rbrack &\text{ if } a = 0, \\ \operatorname{sgn}(a) &\text{ elsewhere.} \end{array} \right. \end{equation} Note that these techniques also work if we replace $\partial R$ by $\mathcal{F}$ as in Example \ref{ex:fric} or other (continuous) multi-valued maps. This provides a theory that extends beyond the advancements made in \cite{Visintin1990}, as we can treat problems with mixed dynamics. Finally, we devote section \ref{sec:per_sol} to study how discontinuous viscosity solutions of the rate-independent ordinary differential equation \begin{align*} \partial R(u_t) + e(u) &\ni f, \end{align*} relate to other solution concepts when $\partial R$ is given by \eqref{eq:R}, $e : \mathbb{R} \to \mathbb{R}$ is non-monotone, and $f : \mathbb{R} \to \mathbb{R}$ is a given loading. We also explain how a rate-independent hysteresis loop emerges if $f$ is a periodic loading, see subsection \ref{ssec:hysteresis}. Let us briefly present the results obtained in this section. Given a subsolution $u$ and a supersolution $v$, the Perron solutions are the following two functions \begin{align} \label{eq:greater_perron}U(x, t) &\coloneqq \sup\{ \tilde{u}(x, t) \;|\; \tilde{u} \text{ is a subsolution with } u \le \tilde{u} \le v \}, \\ \label{eq:smaller_perron}V(x, t) &\coloneqq \inf\{ \tilde{v}(x, t) \;|\; \tilde{v} \text{ is a supersolution with } u \le \tilde{v} \le v \}, \end{align} and it turns out that they are discontinuous viscosity solutions, see Theorem \ref{thm:perron}. Assume $f$ is increasing, then the smaller Perron solution \eqref{eq:smaller_perron} corresponds to the solution that arises during a vanishing viscosity limit. On the other hand, it turns out that the greater Perron solution \eqref{eq:greater_perron} can be constructed using a maximal minimizing movements sequence, i.e., it is an energetic solution. As any other discontinuous viscosity solution lies between $V$ and $U$, this shows that this theory provides solutions that capture the behavior of any (meaningful) evolution to this equation. 
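For orientation, the time-incremental scheme behind the minimizing movements construction can be sketched as follows; this is only a rough outline, written under the assumption that $e=E'$ for a (possibly non-convex) energy $E$ and for a partition $0=t_0<t_1<\dots<t_K$ of the time interval. One solves iteratively
\[
u_{k+1}\in\operatorname{argmin}_{v\in\mathbb{R}}\left\{\,|v-u_k|+E(v)-f(t_{k+1})\,v\,\right\},\qquad k=0,1,\dots,K-1,
\]
interpolates the resulting values and passes to the limit as the time step vanishes; the precise selection rule leading to the \emph{maximal} minimizing movement is discussed in section \ref{sec:per_sol}.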
We conjecture that this link persists even in the case of the partial differential equation \eqref{eq:pde} if all the involved solution concepts are meaningful.

\section{Discontinuous Viscosity Solutions}\label{sec:disc_vs}

For the convenience of the reader, we start by providing the basic concepts and notations required to state the definition of discontinuous viscosity solutions. Let $\Omega \subset \mathbb{R}^n$; we write $USC(\Omega)$ for the set of all upper-semicontinuous functions from $\Omega$ to $\mathbb{R}$. Likewise, $LSC(\Omega)$ contains all lower-semicontinuous functions from $\Omega$ to $\mathbb{R}$. For upper- and lower-semicontinuous functions, one can introduce the notion of super- and sub-jets respectively \cite[Section~8]{Crandall1992}.

\begin{definition}[Parabolic Jets]
Let $\Omega \subset \mathbb{R}^n$ be an open domain, $T > 0$, $I \coloneqq (0, T)$, and $u \in USC(\Omega \times I)$. The second order parabolic super-jet $\mathcal{P}^{2, +}u(\hat x, \hat t)$ at $(\hat x, \hat t) \in \Omega \times I$ contains all triplets $(a, p, X) \in \mathbb{R} \times \mathbb{R}^n \times \operatorname{Sym}(n)$ such that
\[
u(x, t) \le u(\hat x, \hat t) + a(t-\hat t) + \left<p, x - \hat x\right> + \left<X(x-\hat x), x-\hat x\right> + {\rm o}(|t-\hat t| + |x-\hat x|^2).
\]
Moreover, the second order parabolic sub-jet of a lower-semicontinuous function $v \in LSC(\Omega \times I)$ is defined as $\mathcal{P}^{2, -} v(\hat y, \hat s) \coloneqq - \mathcal{P}^{2, +} (-v)(\hat y, \hat s)$ for all $(\hat y, \hat s) \in \Omega \times I$, i.e., it contains all triplets $(b, q, Y) \in \mathbb{R} \times \mathbb{R}^n \times \operatorname{Sym}(n)$ such that
\[
v(y, s) \ge v(\hat y, \hat s) + b(s-\hat s) + \left<q, y - \hat y\right> + \left<Y(y-\hat y), y-\hat y\right> + {\rm o}(|s-\hat s| + |y-\hat y|^2).
\]
\end{definition}

\begin{definition}[Semicontinuous Envelopes]
Let $\Omega \subset \mathbb{R}^n$ and let $f : \Omega \to \mathbb{R}$ be a function. We call the function
\[
f^*(x) \coloneqq \inf\{ g \in USC(\Omega) \;|\; g \ge f \}
\]
the upper-semicontinuous envelope of $f$. Analogously,
\[
f_*(x) \coloneqq \sup\{ g \in LSC(\Omega) \;|\; g \le f \}
\]
is the lower-semicontinuous envelope of $f$.
\end{definition}

Now that we have all the ingredients in hand, we can define discontinuous viscosity solutions in our novel setting. The following definition extends the definition in \cite{CourteDondlStefanelli}. For a detailed discussion of viscosity solutions, we refer to \cite{Barles1987, Bardi1997, Crandall1992, Barles2013}.

\begin{definition}[Discontinuous Viscosity Solutions]\label{def:dis_vc}
Let $\Omega \subset \mathbb{R}^n$ be an open domain and $I\coloneqq (0, T)$ a finite time interval. We call an upper-semicontinuous function $u \in USC(\Omega \times I)$ a viscosity subsolution of \eqref{eq:pde} if for every $(x, t) \in \Omega \times I$ and every $(a, p, X) \in \mathcal{P}^{2, +} u(x, t)$ there is some $\mu \in \mathcal{S}(a)$ such that
\begin{equation}\label{eq:def_subsol}
F_*(x, t, u(x, t), a, p, X) \le \mu G(x, t, u(x, t), p).
\end{equation}
Moreover, a lower-semicontinuous function $v \in LSC(\Omega \times I)$ is called a viscosity supersolution of \eqref{eq:pde} if for every $(x, t) \in \Omega \times I$ and every $(b, q, Y) \in \mathcal{P}^{2, -} v(x, t)$ there is some $\nu \in \mathcal{S}(b)$ such that
\begin{equation}\label{eq:def_supersol}
F^*(x, t, v(x, t), b, q, Y) \ge \nu G(x, t, v(x, t), q).
\end{equation}
Finally, a function $u$ whose upper-semicontinuous envelope $u^*$ is a viscosity subsolution and whose lower-semicontinuous envelope $u_*$ is a viscosity supersolution, is called a discontinuous viscosity solution. If a discontinuous viscosity solution $u$ is continuous, i.e., $u^* = u_* = u$, then $u$ is simply called a viscosity solution.
\end{definition}

\begin{remark}
We only obtain a meaningful solution concept if $F$ is degenerate elliptic, i.e., $F(x, t, r, a, p, Y) \le F(x, t, r, a, p, X)$ whenever $X\le Y$, for all $(x, t) \in \Omega\times I$, $r, a \in \mathbb{R}$, $p \in \mathbb{R}^n$ and $X, Y \in \operatorname{Sym}(n)$, and if $G$ is continuous.
\end{remark}

\subsection{Existence}\label{ssec:existence}

We will employ Perron's construction (see \cite{Ishii1987}) to prove the existence of discontinuous viscosity solutions of equation \eqref{eq:pde}. Perron's method is based on the (semi-)continuity of the equation. Due to the definition of discontinuous sub- and supersolutions, no further assumptions on $F$ are necessary here. Furthermore, we assume that $G$ is continuous and that $\mathcal{S}$ satisfies the following conditions:
\begin{enumerate}[label=C\arabic*)]
\item\label{it:C_seq} If $a_n \to a$ then any sequence $\mu_n \in \mathcal{S}(a_n)$ has a subsequence $\mu_{n_k}$ such that $\mu_{n_k} \to \mu \in \mathcal{S}(a)$.
\item\label{it:C_epsDelta} For all $a\in \mathbb{R}$ and for each $\epsilon > 0$ there is a $\delta > 0$ such that for all $b\in\mathbb{R}$ with $|a-b| < \delta$ there are $\mu \in \mathcal{S}(a)$ and $\nu \in \mathcal{S}(b)$ with $|\mu-\nu| <\epsilon$.
\item\label{it:C_compact} For all $a\in \mathbb{R}$ the set $\mathcal{S}(a)$ is compact.
\end{enumerate}
In \cite{CourteDondlStefanelli} it was already noted that if $-\mathcal{S}$ is maximal monotone, these conditions are satisfied. The arguments required to prove the following result are an adaptation of the theory from \cite[22-24]{Crandall1992}, \cite[302-305]{Bardi1997}.

\begin{theorem}[Perron's Method]\label{thm:perron}
Let $\Omega \subset \mathbb{R}^n$, $I \coloneqq (0, T)$, $T > 0$. Moreover, let $u$ be a subsolution and $v$ be a supersolution of \eqref{eq:pde} with $u \le v$ on $\Omega \times I$. Then the functions
\begin{align*}
U(x, t) &\coloneqq \sup\{ \tilde{u}(x, t) \;|\; \tilde{u} \text{ is a subsolution with } u \le \tilde{u} \le v \} \\
V(x, t) &\coloneqq \inf\{ \tilde{v}(x, t) \;|\; \tilde{v} \text{ is a supersolution with } u \le \tilde{v} \le v \}
\end{align*}
are discontinuous viscosity solutions of \eqref{eq:pde}. Moreover, if $u$ and $v$ are continuous, it holds that $V^* \le U$, $V \le U_*$, and in particular
\[
u \le V \le U \le v ~~\text{ on $\Omega \times I$.}
\]
\end{theorem}

\begin{proof}
This theorem is proved in \ref{proof:perron}.
\end{proof}

\begin{remark}
As soon as comparison holds for the equation, it follows that $U^* \le U_*$ and $V^* \le V_*$, i.e., $U$ and $V$ are continuous. Moreover, we obtain $U = V$ if $u = v$ on the parabolic boundary.
\end{remark}

\begin{example}[``Sticktion'']\label{ex:fric}
Let $\Omega \subset \mathbb{R}^n$ be an open domain and $I =(0, T)$ a finite time interval. We consider the following differential inclusion
\begin{align*}
\mathcal{F}(u_t) - \Delta u &\ni f \text{ in } \Omega \times I, \\
u(\cdot, 0) &= 0 \text{ on } \Omega,
\end{align*}
where $\mathcal{F} : \mathbb{R} \to \mathcal{P}(\mathbb{R})$ is given by $\mathcal{F}(0) \coloneqq [-2, 2]$ and $\mathcal{F}(a) \coloneqq \{\operatorname{sgn}(a) + a\}$, whenever $a\not= 0$.
This set-valued function can be seen as a model for a dissipation that combines viscous friction, dry friction, and static friction. Moreover, the right-hand side $f \in C(\Omega \times I)$ is assumed to be a bounded continuous loading. In order to employ Perron's construction, we have to verify that $\mathcal{F}$ satisfies the required conditions. First of all, by definition $\mathcal{F}$ satisfies \ref{it:C_compact}. To simplify notation, define $F : \mathbb{R} \setminus \{0\} \to \mathbb{R}$, $F(a) \coloneqq \operatorname{sgn}(a) + a$. If $a\not=0$ then $\mu \in \mathcal{F}(a)$ if and only if $\mu = F(a)$. Now, take any sequence $a_n \to a$ and any sequence $\mu_n \in \mathcal{F}(a_n)$. Due to the boundedness of $a_n$, the sequence $\mu_n$ is bounded and we can extract a converging subsequence. If $a \not= 0$ then we are at a point of continuity of $F$ and hence the subsequence converges to $F(a)$, i.e., to an element of $\mathcal{F}(a)$. Now assume that $a=0$, then we can pass to a further subsequence such that either all elements $a_n = 0$ or all $a_n \not= 0$. In the first case, the compactness of $\mathcal{F}(0)$ shows that a subsequence of $\mu_n$ converges to an element of $\mathcal{F}(0)$. In the second case, we pass to a further subsequence such that $a_n > 0$ or $a_n < 0$. In either case, we can look at the right-continuous or left-continuous extension of $F$ and see that $\mu_n \to \mu \in \mathcal{F}(0)$. This finishes the proof that $\mathcal{F}$ satisfies \ref{it:C_seq}. With a similar argument one can also show that $\mathcal{F}$ satisfies \ref{it:C_epsDelta}. Hence, we showed that all the prerequisites to apply Theorem \ref{thm:perron} are fulfilled. The missing ingredients are the existence of a sub- and a supersolution that satisfy the Dirichlet boundary condition in a strong sense. Let $\underline{u}(x, t) \coloneqq -||f||_\infty t$, then for all $(x, t) \in \Omega \times I$ it holds \[ \mathcal{F}(\underline{u}_t(x, t)) - \Delta \underline{u}(x, t) \ni -1 - ||f||_\infty \le f(x, t). \] Therefore, $\underline{u}$ is a subsolution and we can prove analogously that $\overline{u} \coloneqq -\underline{u}$ is a supersolution. Moreover, it holds that $\underline{u} \le \overline{u}$ and $\underline{u}(\cdot, 0) = \overline{u}(\cdot, 0) = 0$. Hence, we can construct a discontinuous viscosity solution using Perron's method. \end{example} \subsection{Comparison Principles}\label{ssec:comp} In the previous subsection, we saw how Perron's construction can be used to construct discontinuous viscosity solutions. Unfortunately, in general, one cannot hope to find a unique solution to equation \eqref{eq:pde}. Hence, we cannot hope that \eqref{eq:pde} fulfills a strong comparison principle. Despite this, it turns out that under some mild monotonicity assumptions, we can still prove a comparison principle with a strict subsolution and a supersolution or a subsolution and a strict supersolution. This strict comparison principle allows us to study the behavior of any solution to the equation. This turns out to be particularly useful when studying the Perron solution in section \ref{sec:per_sol}. Note, that we will provide a version of the comparison principle that is suitable to treat geometric singularities as they arise in the mean curvature flow. As already mentioned we have to make some additional assumptions on $F$, $G$ and $\mathcal{S}$. 
Indeed, we assume that the function $G : \Omega \times I \times \mathbb{R} \times \mathbb{R}n \to [0, \infty)$ satisfies \begin{enumerate}[label=G\arabic*)] \item\label{it:G_Lipschitz} There is a constant $L_G > 0$ such that $\left|G(x, t, r, p)-G(x, t, s, p)\right| \le L_G |r-s|$ for all $r, s \in \mathbb{R}$, $(x, t) \in \Omega \times I$ and $p \in \mathbb{R}n$, \item\label{it:G_modulus} There exists a modulus of continuity $\omega_G$ such that for all $\alpha > 0$ big enough, it holds \begin{align*} &G(x, t, r, 4\alpha |x-y|^2 (x-y)) - G(y, t, r, 4\alpha |x-y|^2 (x-y)) &&\\&\le \omega_G(|x-y| + \alpha |x-y|^4), \end{align*} whenever $x, y \in \Omega, t \in I, r\in \mathbb{R}$. \end{enumerate} The set-valued function $\mathcal{S} :\mathbb{R} \to \mathcal{P}(\mathbb{R}) \setminus \{\emptyset\}$ is decreasing and bounded, i.e., \begin{enumerate}[label=S\arabic*)] \item\label{it:S_mon} $\sup S(a) \le \inf S(b)$ for all $a, b \in \mathbb{R}$ with $a > b$, \item\label{it:S_bdd} It holds $\mathcal{S}_\mathrm{max} \coloneqq \sup \left\{\bigcup_{a\in \mathbb{R}} \bigcup_{s\in\mathcal{S}(a)} |s|\right\} < \infty$ . \end{enumerate} Moreover, $F : \Omega \times I \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}n \times \mathrm{Sym}(n) \to \mathbb{R}$ satisfies \begin{enumerate}[label=F\arabic*)] \item\label{it:F_inc}$\gamma(r-s) \le F_*(x, t, r, a, p, X) - F_*(x, t, s, a, p, X)$ for all $r, s \in \mathbb{R}$ with $r \ge s$ and $\gamma \coloneqq \mathcal{S}_{\textrm{max}}L_G$ with $L_G$ from \ref{it:G_Lipschitz} and $\mathcal{S}_{\mathrm{max}}$ from \ref{it:S_bdd}, \item\label{it:F_modulus} There exists a modulus of continuity $\omega_F$ such that \begin{align*} F^*(y, t, r, b, p, Y) - F_*(x, t, r, a, p, X) \le \omega_F(|x-y| + \alpha |x-y|^4), \end{align*} whenever $x, y \in \Omega, t \in I, r, a \in \mathbb{R}, \alpha \in \mathbb{R}_{\ge0}$, $p \coloneqq 4\alpha |x-y|^2 (x-y)$, $b \le a$, and \[ -4||Z|| \left(\begin{array}{cc}\operatorname{Id} & 0 \\ 0 & \operatorname{Id}\end{array}\right) \le \left(\begin{array}{cc}X & 0 \\ 0 & -Y\end{array}\right)\le \left(\begin{array}{cc}Z+\frac{1}{2||Z||}Z^2 & -(Z+\frac{1}{2||Z||}Z^2) \\ -(Z+\frac{1}{2||Z||}Z^2 ) & Z+\frac{1}{2||Z||}Z^2 \end{array}\right),\] with $Z \coloneqq 4 \alpha |x-y|^2 \operatorname{Id} + 8\alpha (x-y) \otimes (x-y)$. In particular, the last inequality implies that $X \le Y$ and $||X||, ||Y|| \le C |x-y|^2$. \end{enumerate} \begin{theorem}[Strict Comparison Principle on Bounded Domains]\label{thm:comp} Let $\Omega \subset\mathbb{R}n$ be a bounded, open domain and $I \coloneqq (0, T)$ a finite time interval. Moreover, let $u$ be a strict viscosity subsolution, i.e., there exists $\lambda > 0$ such that for every $(x, t) \in \Omega \times I$ and every $(a, p, X) \in \mathcal{P}^{2, +} u(x, t)$ and some $\mu \in \mathcal{S}(a)$ we have \begin{equation} F_*(x, t, u(x, t), a, p, X) \le \mu G(x, t, u(x, t), p) - \lambda, \end{equation} and $v$ be a viscosity supersolution of \eqref{eq:pde} with $u \le v$ on the parabolic boundary $\partial_P (\Omega \times I)$, i.e., on $ \partial\Omega \times I \cup \Omega \times \{0\}$, then $u \le v$ in $\Omega \times I$. The same result holds if $u$ is merely a subsolution and $v$ is a strict supersolution. \end{theorem} \begin{proof} This theorem is proved in \ref{proof:comp}. 
\end{proof} \begin{remark}\label{rmk:comp} If the partial differential inclusion is well-behaved, the strict comparison principle can be extended to a comparison principle, i.e., we can compare any subsolution with any supersolution. For instance, consider the equation \[ u_t + F(x, t, u, \nabla u, D^2 u) \in \mathcal{S}(u_t) G(x, t, u, \nabla u), \] such that \ref{it:F_modulus}, \ref{it:G_Lipschitz}, \ref{it:G_modulus}, \ref{it:S_bdd}, and \ref{it:S_mon} hold, and $F$ is either increasing or Lipschitz in the third variable. Then an exponential rescaling, i.e., $U(x, t) \coloneqq e^{-\lambda t} u(x, t)$ can be used to cast the equation and any subsolution and any supersolution in a form such that Theorem \ref{thm:comp} is applicable. Note that whenever comparison holds for any subsolution and any supersolution, the Perron solutions from Theorem \ref{thm:perron} coincide. \end{remark} \begin{example}[Equation with degenerate elliptic operator]\label{ex:nonlinear} Now that we have proved a comparison principle, we can present the existence theory we developed in full glory. For this, we give an example of a doubly-nonlinear equation with a degenerate elliptic operator. Assume that $\Omega \coloneqq B_1(0) \subset \mathbb{R}^2$, and $I = (0, T)$ a finite time-interval and consider the following equation \begin{align*} \partial R(u_t) + \max\{- \Delta u, 0 \} + u^3 \ni f &, \text{ in } \Omega \times [0, T) \\ u = 0 &, \text{ on } \partial_P(\Omega \times I) , \end{align*} with $f \in C^0(\Omega \times [0, T])$ and $\partial R$ the subdifferential of the absolute value, see \eqref{eq:R}. Moreover, we assume that there is a continuous, increasing function $m : I \to [0, \infty)$ with $m(0) = 0$ such that \[ |f(x, t)| \le 1 + \left(m(t)(1-|x|^2)\right)^3. \] Let $\underline{u}(x, t) \coloneqq m(t)(|x|^2-1)$ and $\overline{u}(x, t) \coloneqq m(t)(1-|x|^2)$, then it holds \begin{align*} \partial R(\underline{u}_t) + \max\{ -\Delta \underline{u}, 0 \}+ \underline{u}^3 \ni -1 + 0 - \left(m(t)(1-|x|^2)\right)^3 \le f(x, t) \end{align*} and likewise \[ \partial R(\overline{u}_t) + \max\{ -\Delta \overline{u}, 0 \}+ \overline{u}^3 \ge f(x, t). \] Hence, $\underline{u}$ and $\overline{u}$ are viscosity sub- and supersolutions with $\underline{u} = \overline{u} = 0$ on the parabolic boundary. Perron's method, Theorem \ref{thm:perron}, now allows to construct a viscosity solution to the problem. The uniqueness follows by comparison (see Theorem \ref{thm:comp} and Remark \ref{rmk:comp}), as the equation is strictly monotone in $u$. \end{example} We can also extend the strict comparison principle on the whole domain $\mathbb{R}n$. To be able to prove such a result, the functions $F$ and $G$ have to satisfy stronger conditions. To be precise, we use the following conditions: \begin{enumerate}[label=FU)] \item\label{it:F_growth} The function $F$ can be written as $F(x, t, r, a, p, X) = F_1(x, t, r, a) + F_2(t, p, X)$ with $F_1$ continuous. 
Moreover, there are $C_{F_1} > 0$, $K_F > 0$, and moduli of continuity $\omega_{F_1}$ and $\omega_{F_2}$, such that for all $x, y \in \mathbb{R}n$, $t \in I$, $r\in \mathbb{R}$, $a, b \in \mathbb{R}n$, $p, q \in \mathbb{R}n$, and $X, Y \in \operatorname{Sym}(n)$, the following conditions hold: \begin{enumerate}[label=(\roman*)] \item $F_1(y, t, r, b) - F_1(x, t, r, a) \le \omega_{F_1}(|x-y| + b-a)$, whenever $a \le b$, \item $F_2^*(t, p, Y) - {F_2}_*(t, p, X) \le \omega_{F_2}(|x-y| + \alpha |x-y|^4)$, whenever $p \coloneqq 4\alpha |x-y|^2 (x-y)$, and \[ -4||Z|| \left(\begin{array}{cc}\operatorname{Id} & 0 \\ 0 & \operatorname{Id}\end{array}\right) \le \left(\begin{array}{cc}X & 0 \\ 0 & -Y\end{array}\right)\le \left(\begin{array}{cc}Z+\frac{1}{2||Z||}Z^2 & -(Z+\frac{1}{2||Z||}Z^2) \\ -(Z+\frac{1}{2||Z||}Z^2 ) & Z+\frac{1}{2||Z||}Z^2 \end{array}\right),\] with $Z \coloneqq 4 \alpha |x-y|^2 \operatorname{Id} + 8\alpha (x-y) \otimes (x-y)$. \item $F_1(y, t, r, b) - F_1(x, t, r, a) \le C_{F_1} + K_F |x-y|$, whenever $b \le a$ \item $F_2^*(t, p, Y) - {F_2}_*(t, p, X)$ is locally bounded. \end{enumerate} \end{enumerate} \begin{enumerate}[label=GU)] \item\label{it:G_growth} The function $G$ can be written as $G(x, t, r, p) = G_1(x, t, r) + G_2(t, p)$ with $G_1$ and $G_2$ continuous. Moreover, there are $C_{G_1} > 0$, $K_G > 0$, and a modulus of continuity $\omega_{G_1}$, such that for all $x, y \in \mathbb{R}n$, $t \in I$, $r\in \mathbb{R}$, and $p, q \in \mathbb{R}n$, the following conditions hold: \begin{enumerate}[label=(\roman*)] \item $G_1(x, t, r) - G_1(y, t, r) \le \omega_{G_1}(|x-y|)$, \item $G_1(x, t, r) - G_1(y, t, r) \le C_{G_1} + K_G |x-y|$. \end{enumerate} \end{enumerate} \begin{theorem}[Comparison Principle in $\mathbb{R}n$]\label{thm:comp_rn} Let $I \coloneqq (0, T)$ with $T < \infty$. Assume that $F$ satisfies \emph{\ref{it:F_inc}}, with $\gamma = \mathcal{S}_{\textrm{max}}L_G +\eta$, $\eta > 0$, \emph{\ref{it:F_growth}}, $G$ satisfies \emph{\ref{it:G_Lipschitz}}, \emph{\ref{it:G_growth}}, and $\mathcal{S}$ fulfills \emph{\ref{it:S_mon}}, \emph{\ref{it:S_bdd}} on $\mathbb{R}n$. Moreover, let $u$ be a viscosity subsolution and $v$ be a viscosity supersolution of \eqref{eq:pde} with \begin{equation}\label{eq:comp_rn_assumption} u(x, t) - v(y, t) \le L(1+|x|+|y|) \text{ for all } (x, y, t) \in \mathbb{R}n \times \mathbb{R}n \times I \end{equation} for some $L > 0$ which is independent of $t$. If $u(\cdot, 0) \le v(\cdot, 0)$ then $u \le v \text{ in } \mathbb{R}n \times I$. \end{theorem} \begin{proof} This theorem is proved in \ref{proof:comp_rn}. \end{proof} \begin{remark} If the difference of the sub- and supersolution in the assumptions of Theorem \ref{thm:comp_rn} is a priori bounded, the growth estimate holds trivially. Hence, $\eta$ can be chosen as zero, i.e., we do not need to have strong monotonicity. However to deal with the second part of the theorem, one of the functions then has to be a strict sub- or supersolution as in Theorem \ref{thm:comp}. \end{remark} \begin{example}[Level-set Mean Curvature Flow]\label{ex:mcf} It turns out that viscosity solutions are particularly well-suited to prove existence of solutions to the level-set formulation of the mean curvature flow \cite{Crandall1992}. In fact, let $d$ be the signed distance function of some surface, then it turns out that the normal velocity is given by $d_t$ and the curvature by $\Delta d$. 
Replacing $d$ by any differentiable function $u$ with $\operatorname{sgn}(u) = \operatorname{sgn}(d)$ and $u = 0 \iff d = 0$, we can compute the normal velocity by $\frac{u_t}{|\nabla u|}$ and the curvature by $\operatorname{div}(\frac{\nabla u}{|\nabla u|})$. Therefore, we postulate the following equation (see also \eqref{eq:R}) as a rate-independent mean curvature flow \[ \partial R(\tfrac{u_t}{|\nabla u|}) \ni \operatorname{div}(\tfrac{\nabla u}{|\nabla u|}) \text{ on } \{u(\cdot, t) = 0\}. \] Multiplying this equation by $|\nabla u|$, rearranging, and introducing a time-dependent forcing $f$, we obtain \begin{equation}\label{eq:ri_mcf} |\nabla u| \partial R(\tfrac{u_t}{|\nabla u|}) - |\nabla u| \operatorname{div}(\tfrac{\nabla u}{|\nabla u|}) \ni f \text{ on } \{u(\cdot, t) = 0\}. \end{equation} Now to prove existence of a continuous function $u : \mathbb{R}n \times I \to \mathbb{R}$ that solves \eqref{eq:ri_mcf}, we are going to solve the following equation \begin{equation}\label{eq:ri_mcf_monotone} |\nabla u| \partial R(u_t) - |\nabla u| \operatorname{div}(\tfrac{\nabla u}{|\nabla u|}) + u \ni f \text{ on } \mathbb{R}n \times I. \end{equation} Note, that in the set $\{u(\cdot, t) = 0\}$ the term $+u$ vanishes and it renders the whole equation monotone in $u$. Let us now introduce $$F(x, t, r, p, X) \coloneqq -\operatorname{tr}(X) + \operatorname{tr}(\frac{p\otimes p}{|p|^2}X) + r - f(x, t),\quad G(p) \coloneqq |p|,$$then \eqref{eq:ri_mcf_monotone} is equivalent to \begin{equation*} F(x, t, u, \nabla u, D^2 u) \in -\partial R(u_t) G(\nabla u) \text{ on } \mathbb{R}n \times I. \end{equation*} The function $F$ is degenerate elliptic, discontinuous in $p=0$, $G$ is Lipschitz and can be decomposed as required by \ref{it:F_growth}, \ref{it:G_growth}. Indeed, the only difficulty is to prove that \ref{it:F_growth}(ii) is satisfied. For this, we need to compute the upper and lower semi-continuous envelopes. It is easy to see that \begin{align*} F^*(x, t, r, p, X) &= \left\{ \begin{array}{ll} F(x, t, r, p, X) &\text{ if } p \not= 0, \\ -\operatorname{tr}(X) + \lambda_\textrm{max}(X) + r - f(x, t) &\text{ if } p = 0, \end{array} \right. \\ F_*(x, t, r, p, X) &= \left\{ \begin{array}{ll} F(x, t, r, p, X) &\text{ if } p \not= 0, \\ -\operatorname{tr}(X) + \lambda_\textrm{min}(X) + r - f(x, t) &\text{ if } p = 0, \end{array} \right. \end{align*} If $f$ is uniformly continuous, then \ref{it:F_growth}(ii) is satisfied by degenerate ellipticity if $p\not=0$. If $p = 0$, then $|x-y| = 0$ and hence $Z = 0$ which implies that $X = Y = 0$ and the inequality is trivially satisfied. \begin{theorem}\label{thm:mcf_sol} Assume that $f\in C(\mathbb{R}n \times I)$ is uniformly continuous and it holds that $|f(x, t) -f(x, 0)| \le \Lambda t$, $u_0 \in C^2(\mathbb{R}n)$ is such that there exists some $\mu \in [-1, 1]$ with \[ \mu|\nabla u_0(x)| - |\nabla u_0(x)| \operatorname{div}(\tfrac{\nabla u_0(x)}{|\nabla u_0(x)|}) + u_0(x) = f(x, 0), \] then there is a unique viscosity solution $u \in C(\mathbb{R}n \times I)$ to \eqref{eq:ri_mcf_monotone} with $u(\cdot, 0) = u_0$. \end{theorem} \begin{proof} We already discussed that equation \eqref{eq:ri_mcf_monotone} satisfies conditions \ref{it:F_inc}, \ref{it:F_growth}, \ref{it:G_Lipschitz}, \ref{it:G_growth}, \ref{it:C_seq}, \ref{it:C_epsDelta}, \ref{it:S_bdd}, and \ref{it:S_mon}. Moreover, we have a strict monotonicity in $r$. 
Hence, the comparison principle, Theorem \ref{thm:comp_rn}, Remark \ref{rmk:comp}, and Perron's construction, Theorem \ref{thm:perron}, are applicable. As we assumed that the initial condition satisfies the equation, we can introduce $\overline{u}(x, t) \coloneqq u_0(x) + \Lambda t$, for which it holds \[ |\nabla u_0| - |\nabla u_0| \operatorname{div}(\tfrac{\nabla u_0}{|\nabla u_0|}) + u_0(x) + \Lambda t \ge f(x, 0) + \Lambda t \ge f(x, t). \] Hence, $\overline{u}$ is a viscosity supersolution with $\overline{u}(x, 0) = u_0(x)$. Likewise, a viscosity subsolution $\underline{u}(x, t) \coloneqq u_0(x) - \Lambda t$ can be constructed. Due to the existence of these solutions, we can construct a unique viscosity solution $u$ with $\underline{u} \le u \le \overline{u}$ and hence $u(\cdot, 0) = u_0$. \end{proof} It is a rather delicate affair to construct sub- and supersolutions that provide quantitative statements on the solutions of this mean-curvature flow model. Let us, however, mention how to pose a problem in this setting: consider an initial condition $u_0(x) \coloneqq \phi(\tilde{u}_{0}(x))$, where $\tilde{u}_0$ is some smooth level-set function and $\phi$ is a smooth, increasing cutoff function with $\phi(0) = 0$ and $\phi'(s) > 0$ if and only if $s \in (-\epsilon, \epsilon)$, with some $\epsilon$ that will be fixed later. Note that $\{ u_0(x) = 0\} = \{ \tilde{u}_0(x) = 0\}$, that due to the cutoff we control the growth of the function at infinity, and that we have the relations $|\nabla u_0| = |\phi'(\tilde{u}_0)| |\nabla \tilde{u}_0|$ and $|\nabla u_0| \operatorname{div}(\frac{\nabla u_0}{|\nabla u_0|}) = \phi'(\tilde{u}_0)|\nabla \tilde{u}_0| \operatorname{div}(\frac{\nabla \tilde{u}_0}{|\nabla \tilde{u}_0|})$. Moreover, as $u_0$ has to lie in the stable set, a possible choice for $f$ is the following, given some decreasing Lipschitz-continuous function $\eta : [0, \infty) \to [0, 1]$ that decays from one to zero, \[ f(x, t) \coloneqq (-\phi'(\tilde{u}_0(x)) \operatorname{div}(\tfrac{\nabla \tilde{u}_0(x)}{|\nabla \tilde{u}_0(x)|}) + \phi(\tilde{u}_0(x))) \eta(t). \] The existence of a solution now follows by Theorem \ref{thm:mcf_sol}. Consider now, for instance, the submanifold $\mathbb{S}^{n-1}_r \coloneqq \{ x\in \mathbb{R}^n \;|\; |x| = r\}$. Given an appropriate level-set function, at $|x| = r$ we then have $|\nabla u|\operatorname{div}(\tfrac{\nabla u}{|\nabla u|}) = 2(n-1)$ and $|\nabla u|\,\partial R(u_t) = 2r\, \partial R(u_t)$. Hence, if $r > n-1$, then the curvature of the function is small enough so that it lies in the stable set and the solution is stationary. On the other hand, if the curvature is large, i.e., $r < n-1$, then the submanifold has to shrink to a point in order to satisfy the equation for all $t$. This can be generalized to other manifolds, i.e., only the points of the manifold with large enough curvature evolve. \end{example}
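For concreteness, the quantities used above for $\mathbb{S}^{n-1}_r$ can be checked with the (purely illustrative) level-set function $u(x) = |x|^2 - r^2$: then $\nabla u = 2x$, $|\nabla u| = 2|x|$ and $\operatorname{div}\big(\tfrac{\nabla u}{|\nabla u|}\big) = \operatorname{div}\big(\tfrac{x}{|x|}\big) = \tfrac{n-1}{|x|}$, so that on $\{|x| = r\}$
\[
|\nabla u|\operatorname{div}\Big(\tfrac{\nabla u}{|\nabla u|}\Big) = 2(n-1), \qquad |\nabla u|\,\partial R(u_t) = 2r\,\partial R(u_t),
\]
which is precisely the comparison of the curvature against the sticking threshold used above.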
\subsection{Stability Result}\label{ssec:stability} In this section, we prove a stability result for discontinuous viscosity solutions. If stability holds, then the solutions of a sequence of partial differential equations converge to a solution of the limit equation. One of the remarkable features of discontinuous viscosity solutions is that this is true under rather weak assumptions on the equations. In the case of discontinuous viscosity solutions, the stability result is also called the half-relaxed limit method \cite{Barles1987}. The half-relaxed limits of a uniformly locally bounded sequence of functions $u_n : \Omega \to \mathbb{R}$, i.e., on each compact $K \subset \Omega$ we have $\sup_{n\in \mathbb{N}} \sup_{x\in K} |u_n(x)| < \infty$, are \[ \operatornamewithlimits{limsup^\ast}_{n\to\infty} u_n(x) \coloneqq \limsup_{n\to \infty, y\to x} u_n(y), \quad \operatornamewithlimits{liminf_\ast}_{n\to\infty} u_n(x) \coloneqq \liminf_{n\to \infty, y\to x} u_n(y). \] \begin{theorem}[Stability Theorem]\label{thm:stability} Let $u_n \in \operatorname{USC}(\Omega \times I)$, $n \in \mathbb{N}$, be a sequence of viscosity subsolutions of \[ F_n(x, t, u_n, ({u_n})_t, \nabla u_n, D^2 u_n) \in \mathcal{S}_n(({u_n})_t) G_n(x, t, u_n, \nabla u_n) \text{ in } \Omega \times I, \] with $F_n: \Omega \times I \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}^n \times \operatorname{Sym}(n) \to \mathbb{R}$ uniformly locally bounded, with the functions $G_n : \Omega \times I \times \mathbb{R} \times \mathbb{R}^n \to [0, \infty)$ converging uniformly to some $G$, and with $\mathcal{S}_n : \mathbb{R} \to \mathcal{P}(\mathbb{R})$ such that there exists $\mathcal{S}: \mathbb{R} \to \mathcal{P}(\mathbb{R})$ with the property that whenever $a_n \to a$ and $\mu_n \in \mathcal{S}_n(a_n)$, there exists a subsequence $\mu_{n_k}$ with $\mu_{n_k} \to \mu \in \mathcal{S}(a)$. If the functions $u_n$ are uniformly locally bounded on $\Omega \times I$, then $\overline{u} \coloneqq \operatornamewithlimits{limsup^\ast}_{n\to \infty} u_{n}$ is a subsolution to \[ \underline{F}(x, t, u, u_t, \nabla u, D^2 u) \in \mathcal{S}(u_t) G(x, t, u, \nabla u) \text{ in } \Omega \times I, \] where $\underline{F} \coloneqq \operatornamewithlimits{liminf_\ast}_{n\to \infty} F_n$. The same result holds for a sequence of supersolutions with the obvious modifications. \end{theorem} \begin{proof} This theorem is proved in \ref{proof:stability}. \end{proof} \begin{remark} Note that this theorem also shows the following: if we have a sequence of viscosity solutions in the sense of \cite{Crandall1992} to equations of type \eqref{eq:pde} in which the $\mathcal{S}_n$ are single-valued, then they converge in the above sense to a function that is a viscosity solution in the sense of Definition \ref{def:dis_vc} even when the limit $\mathcal{S}$ is set-valued. This property was used in \cite{CourteDondlStefanelli} to prove regularity for viscosity solutions to a specific partial differential inclusion. \end{remark} \section{Relation between the Perron Solutions, the Vanishing Viscosity Solution and the Maximal Minimizing Movement Solution}\label{sec:per_sol} In the final section of this article, we want to discuss the relation between the Perron solutions, the vanishing viscosity solution, and the maximal minimizing movement solution. The two Perron solutions $U$ and $V$ of an equation are constructed by taking the supremum over all subsolutions and the infimum over all supersolutions, respectively; see Theorem \ref{thm:perron}. Hence, they are in a sense the extremal solutions. Indeed, if we take any other discontinuous viscosity solution that lies in the given bounds, it will by definition lie between $V$ and $U$.
Therefore, both solutions are of particular interest and we analyze them for the following ordinary differential inclusion \begin{equation}\label{eq:ex2} \begin{array}{ll} \partial R(u_t) + e(u) \ni f & \text{ in } (0, T), \\ u(0) = 0 &, \end{array} \end{equation} where $e \in C_{\rm loc}^{0, 1}(\mathbb{R})$ is a locally Lipschitz-continuous function with $e(r) = 0$ if and only if $r=0$ and $\lim_{r\to \pm \infty} e(r) = \pm \infty$ (e.g., $-e$ is the derivative of a tilted double-well potential), and $f : (0, T) \to \mathbb{R}$ is also Lipschitz-continuous and increasing with $|f(0)| \le 1$. The subdifferential $\partial R$ is given by \eqref{eq:R}. In the following, we need to consider the monotonically increasing envelopes of $e$. They are defined as follows: \begin{align} \label{eq:mon_env_lower}e_m(x) \coloneqq \sup\{ m(x) \;|\; m \text{ is increasing and } m \le e \}, \\ \label{eq:mon_env_upper}e^m(x) \coloneqq \inf\{ m(x) \;|\; m \text{ is increasing and } m \ge e \}. \end{align} As $e$ is continuous and hence locally bounded, they are well-defined and also increasing. The existence of the Perron solutions $U$ and $V$, see Theorem \ref{thm:perron}, follows by constructing suitable sub- and supersolutions. To be as general as possible, we have to find rapidly decreasing subsolutions and rapidly increasing supersolutions, so that the choice of the sub-/supersolution does not constrain the Perron solutions. For this, we note that any differentiable, monotonically decreasing function $u$ with $u(0) = 0$ is a viscosity subsolution, as $f$ is increasing, i.e., $$ \partial R(u_t) + e(u) \ni -1 + e(u) \le -1 + 0 \le f(0) \le f(t).$$ Moreover, if we take any strictly monotone function $m \ge e^m$, the equation \begin{equation} \begin{array}{ll} \partial R(v_t) + m(v) \ni f &, \text{ in } (0, T), \\ v(0) = 0 &, \end{array} \end{equation} has a unique viscosity solution $v$, as comparison (Theorem \ref{thm:comp}) holds. Any such solution $v$ is a viscosity supersolution to \eqref{eq:ex2}, i.e., for all $t\in I$ and all $b\in \mathcal{P}^{1,-} v(t)$ there is $\nu \in \partial R(b)$ such that $$\nu + e(v(t)) \le \nu + m(v) \le f(t).$$ If we choose a rapidly decreasing $u$ and a rapidly increasing $v$, we obtain Perron solutions $U$ and $V$ from Theorem \ref{thm:perron} and it holds that $u \le V \le U \le v$. \begin{remark} Any discontinuous viscosity solution to \eqref{eq:ex2} lies between $V$ and $U$ if it is bounded by $u$ and $v$. By our choice of $u$ and $v$ these bounds will always be satisfied. \end{remark} \subsection{Viscous Approximation} We now prove that the smaller Perron solution $V$, which is the infimum over all supersolutions, coincides with the discontinuous viscosity solution $u_{\rm vis}$ of \eqref{eq:ex2} obtained from the vanishing viscosity limit. To this end, consider for $\epsilon > 0$ the following system \begin{equation}\label{eq:ex2_vis} \begin{array}{ll} \epsilon u_t + \partial R(u_t) + e(u) \ni f &, \text{ in } (0, T), \\ u(0) = 0 &. \end{array} \end{equation}
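Before analyzing \eqref{eq:ex2_vis}, let us record a minimal numerical sketch (purely illustrative, not used in the proofs) of how this viscous regularization can be integrated in time: the rate terms are treated implicitly through the resolvent of $\epsilon\,\mathrm{id} + \partial R$, while $e$ is, as a simplifying assumption of this sketch, evaluated explicitly at the previous step.
\begin{verbatim}
import numpy as np

def viscous_path(e, f, eps, T=16.0, N=4000):
    # Semi-implicit scheme for  eps*u' + dR(u') + e(u) contains f :
    # at each step resolve  eps*a + dR(a) contains g,  g = f(t_k) - e(u_{k-1}).
    t = np.linspace(0.0, T, N + 1)
    tau = T / N
    u = np.zeros(N + 1)
    for k in range(1, N + 1):
        g = f(t[k]) - e(u[k - 1])
        if g > 1.0:
            a = (g - 1.0) / eps   # positive rate, dR(a) = {1}
        elif g < -1.0:
            a = (g + 1.0) / eps   # negative rate, dR(a) = {-1}
        else:
            a = 0.0               # sticking: g lies in dR(0) = [-1, 1]
        u[k] = u[k - 1] + tau * a
    return t, u
\end{verbatim}
For small $\epsilon$ the computed path approximates $u_\epsilon$ and illustrates the jumps that survive in the vanishing viscosity limit studied below.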
\begin{theorem} For any $\epsilon > 0$, there exists a unique viscosity solution $u_\epsilon \in C(I)$ with $u_\epsilon(0) = 0$ to \eqref{eq:ex2_vis}, and $u_\epsilon$ is positive and increasing. \end{theorem} \begin{proof} Let us first note that all the prerequisites for Theorem \ref{thm:comp} and Remark \ref{rmk:comp}, i.e., \ref{it:C_seq}, \ref{it:C_epsDelta}, \ref{it:C_compact}, \ref{it:F_inc}, \ref{it:F_modulus}, \ref{it:G_Lipschitz}, \ref{it:G_modulus}, are satisfied. Therefore, we have a comparison principle. Existence and uniqueness are now a matter of constructing a sub- and a supersolution by Perron's method, Theorem \ref{thm:perron}. With a similar discussion as above, we can deduce the existence of a unique solution $u_\epsilon$. First, let us note that the zero function is a subsolution and hence $0 \le u_\epsilon$. Now, take $\delta > 0$ and define $w(t) \coloneqq u_\epsilon(t+\delta)$, then $w(0) \ge 0$. Moreover, for all $t \in I$ and $b\in \mathcal{P}^{1, -} w(t) = \mathcal{P}^{1, -} u_\epsilon(t+\delta)$ there exists $\nu \in \partial R(b)$ such that \[ \epsilon b + \nu + e(w(t)) = \epsilon b + \nu + e(u_\epsilon(t+\delta)) \ge f(t+\delta) \ge f(t). \] Recall that we assumed $f$ to be increasing. Hence, $w$ is a supersolution and therefore $u_\epsilon(t) \le w(t) = u_\epsilon(t+\delta)$. As this holds for any $\delta > 0$, $u_\epsilon$ is increasing. \end{proof} \begin{theorem}\label{thm:ex2_vis_approx} Let $u_{\rm vis}(t) \coloneqq \sup_{\epsilon > 0}u_\epsilon(t)$. Then $u_{\rm vis}$ is a discontinuous viscosity solution to \eqref{eq:ex2} and it holds that \[ {u_{\rm vis}}_*(t) = u_{\rm vis}(t) = \operatornamewithlimits{liminf_\ast}_{\epsilon\to 0} u_\epsilon(t) \text{ and } u_{\rm vis}^*(t) = \operatornamewithlimits{limsup^\ast}_{\epsilon\to 0} u_\epsilon(t). \] If $f$ is strictly increasing with $f'\ge \gamma > 0$, then $u_{\rm vis}$ is also a discontinuous viscosity solution to \begin{equation*} \partial R(u_t) + e^m(u) \ni f , \text{ in } (0, T). \end{equation*} \end{theorem} \begin{proof} We note that the monotonicity of $u_\epsilon$ implies that the parabolic sub- and superjets contain only positive values. This allows us to use the comparison principle to show that $u_{\epsilon'} \ge u_\epsilon$ whenever $\epsilon \ge \epsilon'$. Now, we can apply \cite[Lemma~2.18]{Bardi1997} combined with the stability result, Theorem \ref{thm:stability}, to obtain the first assertion. The proof of the second statement is a little more involved. We consider the set $\{e^m - e \not=0\}$ and take any connected component $(u_0, u_1)$ of this set (due to the continuity of $e$ all connected components of this set are open). Moreover, as $e(x) = 0$ if and only if $x=0$, we have $0\not\in(u_0, u_1)$. Now, assume that there is a time $\overline{t} \in I$ with $u_{\rm vis}(\overline{t}) = \overline{u} \in (u_0, u_1)$. Then there is a sequence $(t_\epsilon) \subset I$ with $u_\epsilon(t_\epsilon) = u_0$ and $u_\epsilon(t) \not= u_0$ for all $t > t_\epsilon$. If $\epsilon \ge \epsilon'$ then we have $u_{\epsilon}(t_{\epsilon}) = u_0 = u_{\epsilon'}(t_{\epsilon'}) \ge u_{\epsilon}(t_{\epsilon'})$ and we thus obtain $t_{\epsilon} \ge t_{\epsilon'}$. We conclude that $(t_\epsilon)$ is a decreasing sequence, bounded from below by $0$ and therefore convergent to a limit $t_0 \in I$. It holds \[ u_{\rm vis}(t_0) = \sup_{\epsilon > 0} u_\epsilon(t_0) \le \sup_{\epsilon > 0} u_\epsilon(t_\epsilon) = u_0 < u_{\rm vis}(\overline{t}) \] and therefore we have $t_0 < \overline{t}$. Now, define for $\lambda > 0$ \[ v_\epsilon(t) \coloneqq u_0 + \tfrac{2}{\pi} \arctan(\lambda \tfrac{(t-t_\epsilon)^2}{\epsilon})(u_1-u_0). \] This function is a subsolution to $\epsilon u_t +\partial R(u_t) + e(u) \ni f$ on $(t_\epsilon, T)$.
Indeed, by noting that $e(v_{\epsilon}(t_\epsilon + s)) \le e(v_{\epsilon}(t_\epsilon))$, it holds for any $t = t_\epsilon + s \in (t_\epsilon, T)$ that \begin{align*} \frac{4\lambda}{\pi} \frac{s}{1+(\lambda \frac{s^2}{\epsilon})^2}(u_1 - u_0) + 1 + e(v_\epsilon(t_\epsilon + s)) &\le \frac{4\lambda s}{\pi }(u_1-u_0) + e(v_\epsilon(t_\epsilon)) + 1\\ &\le \frac{4\lambda s}{\pi }(u_1-u_0) + f(t_\epsilon). \end{align*} By choosing $\lambda = 4^{-1} \gamma\pi(u_1-u_0)^{-1}$ we obtain that the right-hand side is smaller than $f(t)$, since $f' \ge \gamma$. Hence, by comparison, we have $v_\epsilon(t_\epsilon + \delta) \le u_\epsilon(t_\epsilon + \delta)$ for any $\delta > 0$. Fix any $\delta > 0$; then there is a subsequence such that $u_{\rm vis}(t_0 + 2\delta) = \lim_{\epsilon' \to 0} u_{\epsilon ' }(t_0 + 2\delta)$. Moreover, for $\epsilon' > 0$ small enough it holds $t_0 + 2\delta > t_{\epsilon'} + \delta$ and therefore \[ u_{\rm vis}(t_0 + 2\delta) = \lim_{\epsilon' \to 0} u_{\epsilon'}(t_0 + 2\delta) \ge \limsup_{\epsilon' \to 0} u_{\epsilon'}(t_{\epsilon'} + \delta) \ge \limsup_{\epsilon' \to 0} v_{\epsilon'}(t_{\epsilon'} + \delta) = u_1. \] Hence $u_{\rm vis}^*(t_0) = \inf_{\delta > 0} u_{\rm vis}(t_0 + 2\delta) \ge u_1 > u_{\rm vis}(\overline{t})$. Due to the monotonicity of $u_{\rm vis}$, this would require $t_0 \ge \overline{t}$, contradicting $t_0 < \overline{t}$. Hence, $u_{\rm vis}(t) \not\in \{e^m - e \not=0\}$ for all $t \in I$. \end{proof} \begin{corollary} Assume that $f$ is strictly increasing with $f'\ge \gamma > 0$. Then it holds that \[ u_{\rm vis} = V_*. \] \end{corollary} \begin{proof} First of all, we show that $u_{\rm vis}$ is a viscosity supersolution to \eqref{eq:ex2}. This follows from the first part of Theorem \ref{thm:ex2_vis_approx} and hence it holds that \[ V \le u_{\rm vis}. \] For the other inequality, note that $V_*$ is a viscosity supersolution to \begin{equation*} \partial R(u_t) + e^m(u) \ni f , \text{ in } (0, T), \end{equation*} with $e^m$ as defined in \eqref{eq:mon_env_upper}. To see this, take any $t\in I$ and any $b \in \mathcal{P}^{1, -} V_*(t)$; then there is a $\nu \in \partial R(b)$ such that \[ \nu + e^m(V_*(t)) \ge \nu + e(V_*(t)) \ge f(t). \] Moreover, the function $w_\epsilon(t) \coloneqq u_{\rm vis}^*(t-\epsilon)$ is a strict subsolution to the above equation due to the second part of Theorem \ref{thm:ex2_vis_approx} and the strict monotonicity of $f$. This allows us to invoke the strict comparison principle, Theorem \ref{thm:comp}, and we obtain for all $t\in I$ that $w_\epsilon(t) \le V_*(t)$; in the limit $\epsilon \to 0$ we obtain $w_*(t) \le V_*(t)$, which is equivalent to $u_{\rm vis}(t) \le V_*(t)$, and the statement follows. \end{proof} \subsection{Minimizing Movements} We will now construct a minimizing movement solution \cite{Mielke2015} which is maximal among all minimizing movement solutions. In this way, we guarantee that the solution jumps as soon as possible. To this end, let us take a partition $0 = t_0 < t_1 < \dots < t_{N-1} < t_N = T$ and define \begin{align*} q^0 &\coloneqq 0, \\ q^k &\coloneqq \sup\{q \;|\; q \in \arg\min_{\tilde{q}\in \mathbb{R}} \{ E(\tilde{q}) - f(t_k)\tilde{q} + |\tilde{q}-q^{k-1}| \} \}, \end{align*} with $f$ as above, i.e., increasing and $f(0) \in [0, 1]$, and $E(a) \coloneqq \int_{0}^{a} e(s) \;\mathrm{d}s$; note that $E \ge 0$. \begin{lemma} As $f$ is increasing, we have $q^k \ge q^{k-1}$, and $q^k = 0$ as long as $f(t_k) \le 1$.
Moreover, if $f$ is strictly increasing then for all $k \ge k_0$, where $k_0$ is the first $k$ such that $f(t_k) > 1$, we have $q^k > q^{k-1}$. \end{lemma} \begin{proof} First of all, we have that \[ E(q^1) - f(t_1)q^1 + |q^1| \le 0 \] and as $E$ is nonnegative, $q^1$ has to be nonnegative. Moreover, it holds for $k \ge 2$ that \begin{align*} E(q^k) - f(t_k)q^k + |q^k - q^{k-1}| &\le E(q^{k-1}) - f(t_k)q^{k-1} \\ &\le E(q^{k}) - f(t_{k-1})q^{k} + (f(t_{k-1}) - f(t_k))q^{k-1} \\&~~+ |q^k - q^{k-2}| - |q^{k-1} - q^{k-2}|. \end{align*} Hence, we obtain \[ (f(t_{k-1}) - f(t_k))q^k \le (f(t_{k-1}) - f(t_k))q^{k-1} \] and as $f(t_{k-1}) - f(t_k) \le 0$, we conclude that $q^k \ge q^{k-1}$. If $f(t_1) \le 1$, then for any $\tilde{q} > 0$ we have $E(\tilde{q})-f(t_1)\tilde{q} + |\tilde{q}| \ge E(\tilde{q}) - \tilde{q} +\tilde{q} = E(\tilde{q}) > 0$, as $e$ is increasing around $0$. This shows that $0$ is the only minimizer. Now, we can iterate this argument to see that $q^k = 0$ while $f(t_k) \le 1$. The final statement can again be proven using induction. First, we show that $q^{k_0} > q^{k_0-1} = 0$. As $f(t_{k_0}) = 1+\epsilon$ for some $\epsilon > 0$, we obtain for all $\delta > 0$ that \[ E(\delta) - f(t_{k_0})\delta + \delta = \int_{0}^{\delta} e(s) \;\mathrm{d}s - \epsilon \delta \le \tfrac{L}{2}\delta^2 -\epsilon \delta, \] where $L=L(0) > 0$ is the Lipschitz constant of $e$ around $0$. Now, we can choose $\delta < \frac{2\epsilon}{L}$ and obtain \[ E(\delta) - f(t_{k_0})\delta + \delta < 0, \] and hence $0$ cannot be a minimizer, which implies $q^{k_0} > 0$. Now, take any $k \ge k_0$ and assume that $q^{k-1} > q^{k-2} > \dots > q^{k_0} > 0$. Again, we have for any $\delta > 0$ that \begin{align*} E(q^{k}) - f(t_{k})q^{k} + (q^{k}-q^{k-1}) &\le E(q^{k-1}+\delta) - f(t_{k})(q^{k-1}+\delta) + \delta \\ &= E(q^{k-1}) + \int_{q^{k-1}}^{q^{k-1}+\delta} e(s) \;\mathrm{d}s - f(t_{k})(q^{k-1}+\delta) + \delta \\ &\le E(q^{k-1}) + \delta e(q^{k-1}) + \tfrac{L}{2}\delta^2 - f(t_{k})(q^{k-1}+\delta) + \delta, \end{align*} where $L=L(q^{k-1})$ is the Lipschitz constant of $e$ in some interval around $q^{k-1}$, say $[q^{k-1}-1, q^{k-1}+1]$. We reorder the inequality and use $E(q^k) - E(q^{k-1}) \ge 0$ to obtain \[ (1-f(t_{k})) (q^k-q^{k-1}) \le \delta (e(q^{k-1})- f(t_{k}) + 1 + \tfrac{L}{2}\delta). \] Again, we write $f(t_k) = f(t_{k-1}) + \epsilon$ for some $\epsilon > 0$, and we use that $e(q^{k-1}) - f(t_{k-1}) + 1 = 0$, as $q^{k-1}$ is a minimizer and $q^{k-1} > q^{k-2}$, to conclude \[ (1-f(t_{k})) (q^k-q^{k-1}) \le \delta (- \epsilon + \tfrac{L}{2}\delta). \] Finally, we can choose $\delta$ small enough so that the right-hand side becomes negative and then divide by $(1-f(t_k))<0$ in order to show that \[ q^k-q^{k-1} > 0. \] \end{proof} As viscosity solutions are not defined at discrete time points, we have to consider some interpolation of the discrete values. Let us therefore define the interpolant \begin{align*} q^N(t) \coloneqq q^{k-1} \text{ for } t \in [t_{k-1}, t_k). \end{align*} \begin{lemma}\label{lem:mm_inc} The interpolant $q^N$ is a discontinuous viscosity solution of \begin{align*} \partial R(u_t) + e(u) \ni f^N, \end{align*} where $f^N(t) \coloneqq f(t_{k-1})$ for $t \in [t_{k-1}, t_k)$. Moreover, if $f$ is strictly increasing then $q^N$ is also a discontinuous viscosity solution of \begin{align*} \partial R(u_t) + e_m(u) \ni f^N.
\end{align*} \end{lemma} \begin{proof} As $q^k \in \arg\min_{\tilde{q}\in \mathbb{R}} \{ E(\tilde{q}) - f(t_k)\tilde{q} + |\tilde{q}-q^{k-1}| \}$, it follows by the properties of subdifferentials that \begin{align} \label{eq:proof_mm_incl} 0 &\in [-1 + e(q^k) - f(t_k), 1+ e(q^k) - f(t_k)]&\text{ if } q^k = q^{k-1}, \\ \label{eq:proof_mm_eq} 0 &= 1 + e(q^k) - f(t_k) & \text{ if } q^k > q^{k-1}, \end{align} for all $k \in \{1, \dots, N\}$. Whenever $t\in I$ is a continuity point of $q^N$, then $\mathcal{P}^{1, +}q^N(t) = \mathcal{P}^{1, -}q^N(t)= \{0\}$ and, due to the inclusion \eqref{eq:proof_mm_incl}, $q^N$ satisfies the sub- and supersolution inequalities. Let us now look at the jump points, i.e., $q^{k} = (q^N)^*(t) > (q^N)_*(t) = q^{k-1}$. In this case it holds $\mathcal{P}^{1, +}(q^N)^*(t) = \mathcal{P}^{1, -}(q^N)_*(t)= [0, \infty)$. Hence, we obtain for all $a \in \mathcal{P}^{1, +}(q^N)^*(t)$ with \eqref{eq:proof_mm_eq} that \begin{align*} \partial R(a) + e((q^N)^*(t)) &\ni 1 + e(q^{k}) = f(t_k) = f^N(t) \le (f^N)^*(t), \end{align*} i.e., $(q^N)^*$ is a viscosity subsolution. On the other hand, for all $b \in \mathcal{P}^{1, -}(q^N)_*(t)$ we have \begin{align*} \partial R(b) + e((q^N)_*(t)) &\ni 1 + e(q^{k-1}) \ge f(t_{k-1}) = (f^N)_*(t), \end{align*} and therefore $(q^N)_*$ is a viscosity supersolution and the function $q^N$ is a discontinuous viscosity solution. Hence, the first statement holds. As $q^k$ is chosen to be the maximal value of the minimizers, the function $e$ in the equations above can be replaced by $e_m$. Indeed, as long as $t_k < t_{k_0}$, we have $q^k = 0$ and by assumption we also have that $e(0) = e_m(0)$. For any $t_k \ge t_{k_0}$, it holds due to the preceding lemma and \eqref{eq:proof_mm_eq} that \[ e(q^k)-f(t_k) + 1 = 0, \] and by the definition of $q^k$ this is equivalent to $q^k = \max\{e^{-1}(-1+f(t_k)) \}$ (note that we used the coercivity of $e$ here). However, this and the local Lipschitz-continuity of $e$ imply that there is a monotone function that lies below $e$ and touches $e$ at $q^k$, and hence by the definition of $e_m$ it follows that $e(q^k) = e_m(q^k)$. \end{proof} Now, by refining the partition we can analyze the limit $N \to \infty$. Due to the stability result, Theorem \ref{thm:stability}, the function $u_{\rm mm} \coloneqq \operatornamewithlimits{liminf_\ast}_{N\to\infty}q^N$ is a viscosity supersolution of \begin{align*} \partial R(u_t) + e_m(u) \ni f. \end{align*} \begin{theorem} If $f$ is strictly increasing, then $u_{\rm mm} = U_*$, where $U$ is the larger of the two Perron solutions of \eqref{eq:ex2}. \end{theorem} \begin{proof} First of all, we notice that $q^N$ is a viscosity subsolution of \eqref{eq:ex2} as $f$ is increasing, i.e., \[ \partial R(q^N_t) + e(q^N) \le f^N \le f. \] Therefore $q^N \le U$ and hence $u_{\rm mm} \le U_*$. On the other hand, if we set $h(t) \coloneqq u_{\rm mm}(t+\epsilon)$ for some $\epsilon > 0$ then \[ \partial R(h_t) + e_m(h) \ni f(t+\epsilon) > f(t), \] i.e., $h$ is a strict supersolution to this equation and $U$ is a subsolution as \[ \partial R(U_t) + e_m(U) \le \partial R(U_t) + e(U) \le f(t). \] The comparison principle, Theorem \ref{thm:comp}, allows us to conclude that $U(t) \le u_{\rm mm}(t+\epsilon)$. The asserted statement follows by taking the limit as $\epsilon \to 0$. \end{proof}
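For readers who wish to experiment, the incremental scheme defining $q^k$ can be implemented directly; the following minimal Python sketch (again purely illustrative; restricting the minimization to a bounded grid is a simplifying assumption of the sketch) approximates the $\arg\min$ on a grid and, as in the definition of $q^k$, selects its largest element.
\begin{verbatim}
import numpy as np

def minimizing_movement(e, f, T=16.0, N=400, q_grid=None):
    # q^k = max argmin_q { E(q) - f(t_k) q + |q - q^{k-1}| },  E(a) = int_0^a e(s) ds.
    # e must accept numpy arrays (e.g., a polynomial written with ** and *).
    if q_grid is None:
        q_grid = np.linspace(-1.0, 5.0, 20001)
    t = np.linspace(0.0, T, N + 1)
    dq = np.diff(q_grid)
    E = np.concatenate(([0.0], np.cumsum(0.5 * (e(q_grid[1:]) + e(q_grid[:-1])) * dq)))
    E = E - np.interp(0.0, q_grid, E)      # normalize so that E(0) = 0
    q = np.zeros(N + 1)
    for k in range(1, N + 1):
        J = E - f(t[k]) * q_grid + np.abs(q_grid - q[k - 1])
        minimizers = np.flatnonzero(np.isclose(J, J.min(), atol=1e-9))
        q[k] = q_grid[minimizers[-1]]      # sup of the arg min
    return t, q
\end{verbatim}
Combined with the sketch of the viscous approximation above, this allows one to reproduce qualitatively the two distinct jump patterns compared in the next subsection.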
\subsection{Hysteresis}\label{ssec:hysteresis} \begin{figure} \caption{The top figures show the graph of the functions $e$ and $f$ from subsection \ref{ssec:hysteresis}.} \label{fig:hysteresis} \end{figure} We have characterized the Perron solutions under an increasing loading $f$. If we assume that $f$ is decreasing, then the previously established correspondence changes, i.e., $U$ corresponds to the vanishing viscosity solution and $V$ corresponds to the maximal minimizing movement solution. To see the emergence of a rate-independent hysteresis loop, we assume that $f$ is a periodic loading that changes direction outside of the set $$[ \min\{e^m-e_m \not= 0\} - 1, \max\{e^m-e_m \not= 0\} + 1].$$ In this case, when $f$ changes direction, we have $U=V$ and the correspondence changes as described above. However, we have also seen that $V < U$ in the set $\{e^m-e_m \not= 0\}$ and hence we obtain one (or multiple) rate-independent hysteresis loops for $u_{\rm vis}$ and $u_{\rm mm}$. Note that there are two different effects that contribute to the hysteresis loop: first, due to the term $\partial R$, we obtain hysteresis when the loading changes direction; second, due to the non-monotone $e$, we obtain a further hysteresis loop. To highlight this, we refer to Figure \ref{fig:hysteresis}, where the images of the curves $t \mapsto (f(t), u_{\rm vis}(t))$ and $t \mapsto (f(t), u_{\rm mm}(t))$ are plotted for $e : \mathbb{R} \to \mathbb{R}$ given by $e(x) \coloneqq x^3 - \frac{9}{2} x^2 + \frac{11}{2} x$ and $f : [0, 16] \to \mathbb{R}$ given by \[ f(t) = \left\{\begin{array}{ll} t &\text{if } t \le 4, \\ 8-t &\text{if } t \in (4, 10), \\ t-12 &\text{if } t \ge 10. \end{array}\right. \] Note that $e$ has a local maximum at $x_{\rm max} \coloneqq \frac{3}{2} - \frac{1}{2}\sqrt{\frac{5}{3}}$ with $$e(x_{\rm max}) = \frac{1}{36} (54+5\sqrt{15}) \approx 2.038$$ and a local minimum at $x_{\rm min} \coloneqq \frac{3}{2} + \frac{1}{2}\sqrt{\frac{5}{3}}$ with $$e(x_{\rm min}) = \frac{1}{36} (54-5\sqrt{15}) \approx 0.962,$$ and hence we expect $u_{\rm vis}$ to jump first when $f(t) = e(x_{\rm max}) + 1 \approx 3.038$ and second when $f(t) = e(x_{\rm min}) - 1 \approx -0.038$. Furthermore, we expect $u_{\rm mm}$ to have a first jump when $f(t) = e(x_{\rm min}) + 1 \approx 1.962$ and a second jump when $f(t) = e(x_{\rm max}) - 1 \approx 1.038$. \section*{Declaration of Interest} Declarations of interest: none. \appendix \section{Proofs}\label{app:proof} \subsection{Perron's method} The next two lemmas turn out to be helpful while proving Theorem \ref{thm:perron}. \begin{lemma}\label{lem:sup_subsolutions} Let $\mathcal{F}$ be a non-empty set of subsolutions of \eqref{eq:pde} and set \[ w(x, t) \coloneqq \sup_{u\in \mathcal{F}} u(x, t). \] If $w(x, t) < \infty$ then its upper-semicontinuous envelope $w^*$ is a subsolution of \eqref{eq:pde}. The analogous result holds for supersolutions. \end{lemma} \begin{proof} It is enough to prove the result for subsolutions, as each supersolution is a subsolution of $-F(x, t, u, u_t, \nabla u, D^2 u) \in -\mathcal{S}(a) G(x, t, u, \nabla u)$, i.e., by replacing $F$ with $-F$ (noting that the degenerate ellipticity assumption is not used in the proof) and $\mathcal{S}$ with $-\mathcal{S}$ we obtain the result for supersolutions. Let $(x, t) \in \Omega \times I$ and $(a, p, X) \in \mathcal{P}^{2, +}w^*(x, t)$.
By definition of the upper-semicontinuous envelope one can find a sequence $(x_n, t_n, u_n) \in \Omega \times I \times \mathcal{F}$ such that $u_n(x_n, t_n) \to w^*(x, t)$ and for any other sequence $(x'_n, t'_n) \to (x, t)$ we have $\limsup_{n\to \infty} u_n(x'_n, t'_n) \le w^*(x, t)$. This implies that there is a sequence $(\hat x_n, \hat t_n) \in \Omega \times I$ and a sequence $(a_n, p_n, X_n) \in \mathcal{P}^{2, +} u_n(\hat x_n,\hat t_n)$ such that \[ (\hat x_n, \hat t_n, u_n(\hat x_n, \hat t_n), a_n, p_n, X_n) \to (x, t, w^*(x, t), a, p, X), \] see \cite[Proposition~4.3]{Crandall1992}. As $u_n \in \mathcal{F}$ is a subsolution there is a $\mu_n \in \mathcal{S}(a_n)$ such that \[ F_*(\hat x_n, \hat t_n, u(\hat x_n, \hat t_n), a_n, p_n, X_n) \le \mu_n G(\hat x_n, \hat t_n, u(\hat x_n, \hat t_n), p_n). \] Passing to a subsequence, by \ref{it:C_seq} we have $\mu_{n_k} \to \mu \in \mathcal{S}(a)$ and the last inequality still holds for this subsequence. Due to the lower semicontinuity of $F_*$ and the continuity of $G$, we can pass to limit inferior as $k\to \infty$ and we obtain \[ F_*(x,t, w^*(x, t), a, p, X) \le \mu G(x, t, w^*(x, t), p). \] We proved that for all $(x, t) \in \Omega \times I$ and all $(a, p, X) \in \mathcal{P}^{2, +} w^*(x, t)$ there is a $\mu \in \mathcal{S}(a)$ such that the last inequality holds. This shows that $w^*$ is a subsolution of \eqref{eq:pde}. \end{proof} \begin{lemma}\label{lem:bump_const} Let $u$ be a subsolution of \eqref{eq:pde}. Assume that $u_*$ is not a supersolution at some point $(\hat x, \hat t)$, i.e., there exists $(a, p, X) \in \mathcal{P}^{2, -} u_*(\hat x, \hat t)$ such that for all $\mu \in \mathcal{S}(a)$ we have \begin{equation} F^*(\hat x, \hat t, u_*(\hat x, \hat t), a, p, X) < \mu G(\hat x, \hat t, u_*(\hat x, \hat t), p). \label{eq:strict_ineq} \end{equation} In this case, for any $\epsilon > 0$ there exists a subsolution $u_\epsilon: \Omega \times I \to \mathbb{R}$ satisfying \begin{itemize} \item $u_\epsilon(x, t) \ge u(x, t)$, \item $\sup(u_\epsilon - u) > 0$, \item $u_\epsilon(x, t) = u(x, t)$ for all $(x, t)\in \Omega\times I$ with $|(x, t)-(\hat{x}, \hat{t})|\ge\epsilon$. \end{itemize} Again, the same result holds also for supersolutions with the obvious modifications. \end{lemma} \begin{proof} Let $(\hat{x}, \hat{t})$ and $(a, p, X) \in \mathcal{P}^{2, -} u_*(\hat{x}, \hat{t})$ be such that inequality \eqref{eq:strict_ineq} holds. By \ref{it:C_compact} the set $\mathcal{S}(a)$ is compact and there is $\alpha > 0$ such that \[ F^*(\hat x, \hat t, u_*(\hat x, \hat t), a, p, X) - \mu G(\hat x, \hat t, u_*(\hat x, \hat t), p) \le -\alpha \] for all $\mu \in \mathcal{S}(a)$. Let us define \begin{equation}\label{eq:bump_const_proof_1} u_{\delta, \gamma}(x, t) \coloneqq u_*(\hat x, \hat t) + \delta + a(t-\hat{t})+ \left<p, x-\hat x\right>+ \tfrac{1}{2}\left<X(x-\hat x), x-\hat x\right> - \tfrac{\gamma}{2} |(x, t) - (\hat{x}, \hat{t})|^2. \end{equation} As $F^*$ is upper-semicontinuous, we have for $|(x, t) - (\hat{x}, \hat{t})| \to 0$, \begin{align*} &F^*( x, t, u_{\delta, \gamma}( x, t), (u_{\delta, \gamma})_t( x, t), \nabla u_{\delta, \gamma}(x, t), D^2 u_{\delta,\gamma}(x, t)) &\\&= F^*( x, t, u_{\delta, \gamma}( x, t), a - \gamma (t-\hat t), p - \gamma (x-\hat{x}), X - \gamma \operatorname{Id}) \\ &\le F^*(\hat x, \hat t, u_*(\hat x, \hat t), a, p, X) + {\rm{o}}(1). 
\end{align*} The continuity of $G$ can be used in the same way to obtain $$ G( x, t, u_{\delta, \gamma}( x, t), \nabla u_{\delta, \gamma}(x, t))= G(\hat x, \hat t, u_*(\hat x, \hat t), p) + {\rm{o}}(1). $$ Moreover, $\partial_t u_{\delta, \gamma}( x, t) = a -\gamma (t-\hat{t})$, which, together with condition \ref{it:C_epsDelta}, implies that for $\gamma$ small enough there are $\mu_\gamma \in \mathcal{S}(\partial_t u_{\delta, \gamma}( x, t))$ such that $\mu_\gamma = \mu + o(1)$. As \eqref{eq:bump_const_proof_1} holds, we conclude that if $\delta, \gamma, r$ are small enough, then $u_{\delta, \gamma}$ is a subsolution of \eqref{eq:pde} in $B_r(\hat{x}, \hat{t})$. Moreover, since \[ u(x, t) \ge u_*(x, t) \ge u_*(\hat{x}, \hat{t})+ a(t-\hat t)+ \left<p, x-\hat x\right>+ \tfrac{1}{2}\left<X(x-\hat x), x-\hat x\right> + {\rm{o}}(|(x, t)-(\hat x, \hat t)|^2), \] we can choose $\delta = c(\gamma, r)$ to obtain $u(x, t) > u_{\delta, \gamma}(x, t)$ for $(x, t) \in B_r(\hat{x}, \hat{t}) \setminus B_{\frac{r}{2}}(\hat{x}, \hat{t})$. Therefore the function \[ u_{\gamma}(x, t) \coloneqq \left\{\begin{array}{ll} \max\{ u(x, t), u_{\delta, \gamma}(x, t) \} & \text{ in } B_r(\hat{x}, \hat{t}), \\ u(x, t) & \text{ elsewhere.} \end{array}\right. \] is a subsolution by Lemma \ref{lem:sup_subsolutions}. It is clear that $u_\gamma(x, t) \ge u(x, t)$ and that in a neighborhood of $(\hat{x}, \hat{t})$ we have $u_\gamma(x, t) > u(x, t)$. For given $\epsilon$, by choosing $r, \gamma < \epsilon$ we have that $u_{\gamma}^*$ satisfies all the required properties. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:perron}]\label{proof:perron} As $U \le v < \infty$, Lemma \ref{lem:sup_subsolutions} implies that $U^*$ is a subsolution. Now assume that $(U^*)_* = U_*$ is not a supersolution; then there has to be a neighborhood where $U_* < v$. In this neighborhood we can apply Lemma \ref{lem:bump_const} to obtain subsolutions $u_\epsilon$ that are strictly larger than $U_*$. Moreover, as $\epsilon \to 0$ these subsolutions $u_\epsilon$ converge to $U_*$. Hence, we can choose $\epsilon$ small enough so that $U_* \le u_\epsilon < v$. This contradicts the maximality of $U$. We have thus shown that $U_*$ is a supersolution and $U^*$ is a subsolution, i.e., $U$ is indeed a discontinuous viscosity solution. The same arguments also prove that $V$ is a discontinuous viscosity solution. To see the last statement of the theorem, we note that $V^*$ is a subsolution and $U_*$ is a supersolution with $V^* \le v^* = v$ and $u = u_* \le U_*$. \end{proof} \subsection{Comparison Principles} The proofs of the comparison principles in this section, on both bounded and unbounded domains, are primarily an adaptation of the methods in \cite{Crandall1992}. \begin{proof}[Proof of Theorem \ref{thm:comp}]\label{proof:comp} Assume by contradiction that comparison does not hold, i.e., \[ \sup_{\substack{x\in \Omega \\ t\in I}} \left\{ u(x, t) - v(x, t)\right\} \eqqcolon \delta > 0 \] and define \[ M_{\alpha, \gamma} \coloneqq \sup_{\substack{x, y \in \Omega \\ t\in I}} \left\{ u(x, t) - v(y, t) - \alpha|x-y|^4 - \tfrac{\gamma}{T-t}\right\}. \] We have $M_{\alpha, \gamma} > \delta/2$ for $\gamma$ small enough. Since the domain is bounded, the supremum is achieved at a point $(\hat{x}, \hat{y}, \hat{t}) \in \overline{\Omega}\times \overline{\Omega} \times [0,T)$. We will now show that the triplet $(\hat{x}, \hat{y}, \hat{t})$ is in the interior of the parabolic domain if $\alpha$ is large enough.
Assume first that $\hat t = 0$ then \[ M_{\alpha, \gamma } = u(\hat{x}, 0) - v(\hat{y}, 0) - \alpha|\hat x-\hat y|^4 - \tfrac{\gamma}{T-\hat t} \le - \alpha|\hat x-\hat y|^4 - \tfrac{\gamma}{T-\hat t} \le 0, \] since $u\le v$ on the parabolic boundary $\partial_P (\Omega \times I)$, contradicting $M_{\alpha, \gamma} > \delta/2$. We now check that, if $\alpha$ is chosen to be large enough, $\hat{x}$ and $\hat{y}$ necessarily belong to $\Omega$. Assume the contrary, namely there exists a subsequence $\alpha_n \to \infty$ with $\hat{x}_n \in \partial \Omega$ realizing the sup. Then, we also have $\hat{y}_n \to \hat{y}_\infty \in \partial \Omega$ and therefore \[ \lim_{n\to \infty}M_{\alpha_n, \gamma } = \lim_{n\to \infty}\left(u(\hat{x}_n, \hat{t}) - v(\hat{y}_n, \hat{t}) - \alpha_n|\hat x_n-\hat y_n|^4 - \tfrac{\gamma}{T-\hat t}\right) \le 0 - \tfrac{\gamma}{T} \le 0, \] where we used again that $u \le v$ on $\partial_P(\Omega\times I)$ and reached a contradiction. Therefore, we have proved that $(\hat{x}, \hat{y}, \hat{t}) \in \Omega \times \Omega \times (0, T)$, at least if $\alpha$ is large enough. Hence, we have \cite[Theorem 8.3]{Crandall1992} \[ (a, p, X) \in \mathcal{P}^{2, +} u(\hat{x}, \hat{t}) \text{ and } (b, p, Y) \in \mathcal{P}^{2, -} v(\hat{y}, \hat{t}) \] with $a-b = \tfrac{\gamma}{(T-\hat{t})^2}$, $p \coloneqq 4\alpha |\hat x-\hat y|^2 (\hat x-\hat y)$, and \[ -4||Z|| \left(\begin{array}{cc}\operatorname{Id} & 0 \\ 0 & \operatorname{Id}\end{array}\right) \le \left(\begin{array}{cc}X & 0 \\ 0 & -Y\end{array}\right)\le \left(\begin{array}{cc}Z+\frac{1}{2||Z||}Z^2 & -(Z+\frac{1}{2||Z||}Z^2) \\ -(Z+\frac{1}{2||Z||}Z^2 ) & Z+\frac{1}{2||Z||}Z^2 \end{array}\right),\] with $Z \coloneqq 4 \alpha |\hat x-\hat y|^2 \operatorname{Id} + 8\alpha (\hat x-\hat y) \otimes (\hat x-\hat y)$. This means that condition \ref{it:F_modulus} and condition \ref{it:G_modulus} can be used. As $u$ is a subsolution and $v$ is a supersolution, we can find $\mu \in \mathcal{S}(a)$ and $\nu \in \mathcal{S}(b)$ such that \begin{align} F_*(\hat x,\hat t, u, a, p, X) - \mu G(\hat x,\hat t, u, p) \le 0,\label{eq:sub} \\ F^*(\hat y,\hat t, v, b, p, Y) - \nu G(\hat y,\hat t, v, p) \ge 0,\label{eq:super} \end{align} where one of the inequalities holds even if we replace $0$ by $\mp\lambda$ with $\lambda > 0$. By subtracting \eqref{eq:sub} from \eqref{eq:super}, we obtain \begin{align*} \lambda \le&~ F^*(\hat y,\hat t, v, b, p, Y) - F_*(\hat x,\hat t, u, a, p, X)- \nu G(\hat y,\hat t, v, p) + \mu G(\hat x,\hat t, u, p). \end{align*} Adding and subtracting terms, we get \begin{align*} \lambda \le&~ F^*(\hat y,\hat t, v, b, p, Y) - F_*(\hat x,\hat t, v, a, p, X) \\ &+ F_*(\hat x,\hat t, v, a, p, X)- F_*(\hat x,\hat t, u, a, p, X) \\ &- \nu G(\hat y,\hat t, v, p)+ \nu G(\hat x,\hat y, v, p) \\ &- \nu G(\hat x,\hat t, v, p)+ \nu G(\hat x,\hat t, u, p) \\&- \nu G(\hat x,\hat t, u, p) + \mu G(\hat x,\hat t, u, p) \\ \le&~ \omega_F(|\hat x-\hat y| + \alpha |\hat x-\hat y|^4) + \mathcal{S}_{\mathrm{max}}L_G(v-u) \\ &+|\nu| \omega_G(|\hat x-\hat y|+ \alpha |\hat x-\hat y|^4) + \mathcal{S}_{\mathrm{max}} L_G |u-v| \\ &+ (\mu - \nu) G(\hat x,\hat t, u, p). \end{align*} Where the second inequality follows from \ref{it:F_inc}, \ref{it:F_modulus}, and \ref{it:G_Lipschitz}, \ref{it:G_modulus} by noticing that, $X \le Y$, and $a > b$. In particular, \ref{it:S_mon} implies that $\mu-\nu \le 0$. As $G \ge 0$, it follows that the last term above is negative. 
Eventually, by means of $u(\hat x, \hat t) > v(\hat y, \hat t)$ we get \begin{align*} \lambda \le&~\omega_F(|\hat x-\hat y|+ \alpha |\hat x-\hat y|^4)+\mathcal{S}_{\mathrm{max}}\omega_G(|\hat x-\hat y|+ \alpha |\hat x-\hat y|^4). \end{align*} By taking $\alpha \to \infty$, we have $\alpha |\hat x-\hat y|^4 \to 0$ (see \cite{Crandall1992}) which also implies $|\hat x-\hat y| \to 0$. Therefore, the right-hand side above goes to $0$, leading to a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:comp_rn}]\label{proof:comp_rn} We are going to subdivide the proof in two steps. First, we are going to prove that the difference $u-v$ satisfies a growth estimate and afterwards we will prove the comparison principle. \emph{Step 1: Growth estimate} The first step of the proof consists in proving that the difference $u-v$ satisfies the growth estimate \begin{equation} \sup_{(x, y, t) \in \mathbb{R}n \times \mathbb{R}n \times I} u(x, t) - v(y, t) - 2\eta^{-1}K|x-y| - \tfrac{\gamma}{T-t} < \infty, \label{eq:comp_rn_growth} \end{equation} where $K \coloneqq K_{F} + \mathcal{S}_{\textrm{max}} K_G$. Following \cite[Theorem~5.1.]{Crandall1992}, we choose a family $\beta_R$ of $C^2(\mathbb{R}n)$ functions such that \begin{enumerate}[label=$\roman*)$] \item $\beta_R \ge 0$, \item $\liminf_{|x| \to \infty} \frac{\beta_R(x)}{|x|} \ge 2L$, \item $|D\beta_R(x)| + |D^2\beta_R(x)| \le C$, for $R \ge 1$, $x\in \mathbb{R}n$, \item $\lim_{R\to \infty} \beta_R(x) = 0$ for $x\in \mathbb{R}n$, \end{enumerate} where $C > 0$ is a positive constant. Let us now define the function \[ \Phi(x, y, t) \coloneqq u(x, t) - v(y, t) - 2\eta^{-1}K(1+|x-y|^2)^{\frac{1}{2}} - \left( \beta_R(x) + \beta_R(y)\right) - \tfrac{\gamma}{T-t}. \] Note that condition $ii)$ implies that there is a constant $r(R)$ such that $\beta_R(x) \ge \frac{3}{2}L|x|$ if $|x| > r(R)$. Moreover by \eqref{eq:comp_rn_assumption}, we obtain for $|x|, |y| > r(R)$ the estimate \begin{align*} \Phi(x, y, t) &\le L(1+|x|+|y|) - 2\eta^{-1}K - \tfrac{3}{2}L|x| - \tfrac{3}{2}L|y| - \tfrac{\gamma}{T-t}\\ &= L - 2\eta^{-1}K - \tfrac{\gamma}{T-t} - \tfrac{1}{2}L(|x|+|y|). \end{align*} Hence, the function has to attain its supremum in a compact subset of $\mathbb{R}n \times \mathbb{R}n \times I$. Let $(\hat{x}, \hat{y}, \hat{t})$ be this maximum. If the asserted inequality \eqref{eq:comp_rn_growth} fails to hold, then $\Phi(\hat{x}, \hat{y}, \hat{t}) > 0$ for $R>0$ big enough. Secondly assume the other case, i.e. $\Phi(\hat{x}, \hat{y}, \hat{t}) > 0$, which implies that \begin{equation} 2\eta^{-1}K|\hat{x}-\hat{y}| \le u(\hat{x}, \hat{t}) - v(\hat{y}, \hat{t}) - \tfrac{\gamma}{T-\hat{t}}. \label{eq:comp_rn_bound} \end{equation} In case $\hat{t} = 0$ then we would get \[ 0 < \Phi(\hat{x}, \hat{y}, 0) \le - 2K(1+|\hat{x}-\hat{y}|^2)^{\frac{1}{2}} - \left( \beta_R(\hat{x}) + \beta_R(\hat{y})\right) + \tfrac{\gamma}{T} \le 0, \] which is a contradiction. 
Hence, the maximum $(\hat{x}, \hat{y}, \hat{t})$ lies inside $\mathbb{R}n \times \mathbb{R}n \times (0, T)$, yielding \begin{align*} (a, p + D\beta_R(\hat{x}), X + D^2\beta_R(\hat{x})) &\in \mathcal{P}^{2, +} u(\hat{x}, \hat{t}),\\ (b, p - D\beta_R(\hat{y}), -X - D^2\beta_R(\hat{y})) &\in \mathcal{P}^{2, -} v(\hat{y}, \hat{t}), \end{align*} with $a = b+ \frac{\gamma}{(T-\hat{t})^2}$, $p = 2\eta^{-1}K \frac{\hat{x}-\hat{y}}{1+ |\hat{x}-\hat{y}|^2}$, and $$X = \frac{2\eta^{-1}K}{1+|\hat{x}-\hat{y}|^2} \operatorname{Id} - 4\eta^{-1}K \frac{\hat{x}-\hat{y}}{1+|\hat{x}-\hat{y}|^2} \otimes \frac{\hat{x}-\hat{y}}{1+|\hat{x}-\hat{y}|^2}.$$ This implies that one can find $\mu \in \mathcal{S}(a)$, and $\nu \in \mathcal{S}(b)$ such that \begin{align} F_*(\hat{x}, \hat{t}, u, a, p + D\beta_R(\hat{x}), X + D^2\beta_R(\hat{x})) &\le \mu G(\hat{x}, \hat{t}, u, p + D\beta_R(\hat{x})) \label{eq:comp_rn_subsol}, \\ F^*(\hat{y}, \hat{t}, v, b, p - D\beta_R(\hat{y}), -X - D^2\beta_R(\hat{y})) &\ge \nu G(\hat{y}, \hat{t}, v, p - D\beta_R(\hat{y})) \label{eq:comp_rn_supersol}. \end{align} Subtracting \eqref{eq:comp_rn_subsol} from \eqref{eq:comp_rn_supersol} shows that \begin{align*} 0 \le&~F^*(\hat y,\hat t, v, b, p - D\beta_R(\hat{y}), -X - D^2\beta_R(\hat{y})) - F_*(\hat x,\hat t, u, a, p + D\beta_R(\hat{x}), X + D^2\beta_R(\hat{x})) \\ &- \nu\left( G(\hat y,\hat t, v, p - D\beta_R(\hat{y})) - G(\hat x,\hat t, u, p + D\beta_R(\hat{x}))\right) + (\mu - \nu) G(\hat x,\hat t, u, p + D\beta_R(\hat{x})). \end{align*} As $b \le a$, we have $\mu \le \nu$ and we can estimate the last term in the right-hand side above by $0$. To treat other terms we use condition \ref{it:F_inc}, \ref{it:F_growth}, \ref{it:G_Lipschitz}, and \ref{it:G_growth}. As the difference of $F_2^* - {F_2}_*$ is locally bounded and $G_2$ is continuous there is $C(p, X, D\beta_R, D^2\beta_R) > 0$ locally bounded such that \begin{align*} 0 \le&~ \eta (v(\hat y, \hat t) - u(\hat x, \hat t)) + C_{F_1} + \mathcal{S}_{\textrm{max}}C_{G_1} + C(p, X, D\beta_R, D^2\beta_R) \\ &+ (K_F + \mathcal{S}_{\textrm{max}}K_G) |\hat x - \hat y|. \end{align*} As $p, X$ are bounded and $D\beta_R$, $D^2 \beta_R$ are bounded independently of $R$ we can introduce a constant $C > 0$ which is independent of $R$ and obtain \[ 0 \le C + \eta (v(\hat y, \hat t) - u(\hat x, \hat t)) + K |\hat x - \hat y|. \] Finally, using \eqref{eq:comp_rn_bound}, we see that $u(\hat x, \hat t) - v(\hat y, \hat t)$ is uniformly bounded, i.e., \[ u(\hat x, \hat t) - v(\hat y, \hat t) \le \tfrac{2C}{\eta}. \] The bound on $u-v$ implies that \[ \Phi(x, y, t) \le \Phi(\hat{x}, \hat{y}, \hat{t}) \le u(\hat{x}, \hat{t}) - v(\hat{y}, \hat{t}) \le \tfrac{2C}{\eta}. \] By sending $R \to \infty$ we obtain \[ u(x, t) - v(y, t) - 2K(1+|x-y|^2)^{\frac{1}{2}} - \tfrac{\gamma}{T-t} \le \tfrac{2C}{\eta} \] and \eqref{eq:comp_rn_growth} is proved. \emph{Step 2: Comparison Principle} It is now time to prove the comparison principle. Let us therefore assume that \[ \sup_{\substack{x\in \mathbb{R}n \\ t\in [0, T)}} \left\{ u(x, t) - v(x, t)\right\} \eqqcolon \delta > 0 \] and define \[ M_{\alpha, \epsilon, \gamma } \coloneqq \sup_{\substack{x, y \in \mathbb{R}n \\ t\in I}} \left\{ u(x, t) - v(y, t) - \alpha|x-y|^4 - \epsilon(|x|^2+|y|^2) - \tfrac{\gamma}{T-t}\right\}. \] Due to the growth estimate \eqref{eq:comp_rn_growth}, $M_{\alpha, \epsilon, \gamma}$ is uniformly bounded. Moreover, we have $M_{\alpha, \epsilon, \gamma} > \delta/2$ for $\gamma, \epsilon$ small enough. 
Using \eqref{eq:comp_rn_growth}, we see that $M_{\alpha, \epsilon, \gamma}$ is attained at some $(\hat{x}, \hat{y}, \hat{t})$ satisfying \begin{align} \label{eq:comp_rn_estimate} \alpha|\hat x-\hat y|^4 + \epsilon(|\hat x|^2+|\hat y|^2) &\le u(\hat x, \hat t) - v(\hat y, \hat t) - \tfrac{\gamma}{T-\hat t} \le 2K\eta^{-1}|\hat{x} - \hat{y}| + C \\ \nonumber &\le \tfrac{\alpha}{4} |\hat{x}-\hat{y}|^4 + \tfrac{3}{4}(\tfrac{2K\eta^{-1}}{\alpha^{1/4}})^{4/3} + C, \end{align} for some constant $C = C(K, C_{F_1}, C_{F_2}) > 0$. Hence, the maximum is achieved inside of the domain, i.e. $(\hat{x}, \hat{y}, \hat{t}) \in \mathbb{R}n \times \mathbb{R}n \times (0, T)$ and we can again apply the Jensen-Ishii lemma to obtain that \begin{align*} (a, p+2\epsilon \hat{x}, X + 2\epsilon\operatorname{Id}) &\in \mathcal{P}^{2, +} u(\hat{x}, \hat{t}), \\ (b, p-2\epsilon \hat{y}, Y-2\epsilon\operatorname{Id}) &\in \mathcal{P}^{2, -} v(\hat{y}, \hat{t}) , \end{align*} with $a-b = \tfrac{\gamma}{(T-\hat{t})^2}$, $p \coloneqq 4\alpha |\hat x-\hat y|^2 (\hat x-\hat y)$, and \[ -4||Z|| \left(\begin{array}{cc}\operatorname{Id} & 0 \\ 0 & \operatorname{Id}\end{array}\right) \le \left(\begin{array}{cc}X & 0 \\ 0 & Y\end{array}\right)\le \left(\begin{array}{cc}Z+\frac{1}{2||Z||}Z^2 & -(Z+\frac{1}{2||Z||}Z^2) \\ -(Z+\frac{1}{2||Z||}Z^2 ) & Z+\frac{1}{2||Z||}Z^2 \end{array}\right),\] with $Z \coloneqq 4 \alpha |\hat x-\hat y|^2 \operatorname{Id} + 8\alpha (\hat x-\hat y) \otimes (\hat x-\hat y)$. As $u$ is a subsolution and $v$ is a supersolution, we can find $\mu \in \mathcal{S}(a)$ and $\nu \in \mathcal{S}(b)$ such that \begin{align} F_*(\hat x,\hat t, u, a, p+2\epsilon\hat x, X+ 2\epsilon\operatorname{Id}) - \mu G(\hat x,\hat t, u, p + 2\epsilon\hat x) \le 0,\label{eq:comp_rn_proof_sub} \\ F^*(\hat y,\hat t, v, b, p - 2\epsilon\hat y, Y- 2\epsilon\operatorname{Id}) - \nu G(\hat y,\hat t, v, p- 2\epsilon\hat y) \ge 0,\label{eq:comp_rn_proof_super}. \end{align} By subtracting \eqref{eq:comp_rn_proof_sub} from \eqref{eq:comp_rn_proof_super}, we obtain \begin{align*} 0 \le&~ F^*(\hat y,\hat t, v, b, p - 2\epsilon \hat y, Y- 2\epsilon\operatorname{Id}) - F_*(\hat x,\hat t, u, a, p + 2\epsilon \hat x, X + 2\epsilon\operatorname{Id}) \\ &- \nu G(\hat y,\hat t, v, p - 2\epsilon \hat{y}) + \mu G(\hat x,\hat t,u, p + 2\epsilon \hat{x}). \end{align*} By adding and subtracting terms one has that \begin{align*} 0 \le&~ F^*(\hat y,\hat t, v, b, p- 2\epsilon \hat y, Y -2\epsilon\operatorname{Id}) - F_*(\hat x,\hat t, v, a, p + 2\epsilon \hat x, X + 2\epsilon\operatorname{Id}) \\ &+ F_*(\hat x,\hat t, v, a, p + 2\epsilon \hat x, X + 2\epsilon\operatorname{Id})- F_*(\hat x,\hat t, u, a, p + 2\epsilon \hat x, X + 2\epsilon\operatorname{Id}) \\ &- \nu G(\hat y,\hat t, v, p- 2\epsilon \hat y)+ \nu G(\hat x,\hat t, v, p+ 2\epsilon \hat x) \\ &- \nu G(\hat x,\hat t, v, p+ 2\epsilon \hat x)+ \nu G(\hat x,\hat t, u, p+ 2\epsilon \hat x) \\ &- \nu G(\hat x,\hat t, u, p+ 2\epsilon \hat x) + \mu G(\hat x,\hat t, u, p+ 2\epsilon \hat x)\\ \le&~\eta(v(\hat y, \hat t) - u(\hat x, \hat t)) \\ &+ F^*(\hat y,\hat t, v, b, p- 2\epsilon \hat y, Y -2\epsilon\operatorname{Id}) - F_*(\hat x,\hat t, v, a, p + 2\epsilon \hat x, X + 2\epsilon\operatorname{Id}) \\ &- \nu G(\hat y,\hat t, v, p- 2\epsilon \hat y)+ \nu G(\hat x,\hat t, v, p+ 2\epsilon \hat x). \end{align*} In the second inequality we used \ref{it:F_inc}, \ref{it:G_Lipschitz}, \ref{it:S_mon}, along with $u(\hat x, \hat t) > v(\hat y, \hat t)$, $X\le Y$, and $a\ge b$. 
To treat the terms in the last inequality, we use \ref{it:F_growth}, \ref{it:G_growth}, and \ref{it:S_bdd} to obtain \begin{align*} \eta \tfrac{\delta}{2} \le&~ F_1(\hat y, \hat t, v, b) - F_1(\hat x, \hat t, v, a) +F_2^*(\hat t, p- 2\epsilon \hat y, Y -2\epsilon\operatorname{Id}) - {F_2}_*(\hat t, p + 2\epsilon \hat x, X + 2\epsilon\operatorname{Id}) \\ &+\mathcal{S}_\textrm{max} | G_1(\hat y, \hat t, v) - G_1(\hat x,\hat t, v) | +\mathcal{S}_\textrm{max} | G_2(\hat t, p + 2\epsilon \hat x) - G_2(\hat t, p- 2\epsilon \hat y) | \\ \le&~\omega_{F_1}(|\hat x - \hat y| + \tfrac{\gamma}{T-\hat t}) + F_2^*(\hat t, p- 2\epsilon \hat y, Y -2\epsilon\operatorname{Id}) - {F_2}_*(\hat t, p + 2\epsilon \hat x, X + 2\epsilon\operatorname{Id}) \\ &~+\mathcal{S}_\textrm{max}\omega_{G_1}(|\hat x - \hat y|) +\mathcal{S}_\textrm{max} | G_2(\hat t, p + 2\epsilon \hat x) - G_2(\hat t, p- 2\epsilon \hat y) |. \end{align*} Since \eqref{eq:comp_rn_estimate} implies that $\alpha |\hat x - \hat y|^4$ is bounded independently of $\epsilon$, and therefore so are $p$, $X$, and $Y$, we can take the limit superior as $\epsilon \to 0$ in the above inequality to obtain \begin{align*} \eta \tfrac{\delta}{2} \le&~\omega_{F_1}(|\hat x - \hat y| + \tfrac{\gamma}{T-\hat t}) + F_2^*(\hat t, p, Y) - {F_2}_*(\hat t, p, X)+\mathcal{S}_\textrm{max}\omega_{G_1}(|\hat x - \hat y|). \end{align*} Here we used the upper semicontinuity of $F_2^* -{F_2}_*$ and the continuity of $G$. Finally, we can again use \ref{it:F_growth} to reach a contradiction as follows: \begin{align*} \eta \tfrac{\delta}{2} &\le\liminf_{\alpha \to \infty}\liminf_{\gamma \to 0}\omega_{F_1}(|\hat x - \hat y| + \tfrac{\gamma}{T-\hat t}) +\omega_{F_2}(|\hat x - \hat y| + \alpha |\hat x - \hat y|^4)+\mathcal{S}_\textrm{max}\omega_{G_1}(|\hat x - \hat y|)\\ &= 0, \end{align*} provided that $\liminf_{\gamma \to 0} \liminf_{\epsilon \to 0} \tfrac{\gamma}{T-\hat{t}} = 0$ and $\liminf_{\alpha\to \infty}\liminf_{\gamma \to 0} \liminf_{\epsilon \to 0} \alpha |\hat x - \hat y|^4 = 0$. These equalities hold by an argument similar to the one in \cite{Giga1991}. \end{proof} \subsection{Stability Result} The proof is based on \cite[Chapter~6]{Barles2013} with the necessary adaptations. \begin{proof}[Proof of Theorem \ref{thm:stability}]\label{proof:stability} Consider any $(x, t) \in \Omega \times I$ and $(a, p, X) \in \mathcal{P}^{2, +} \overline{u}(x, t)$. By applying Lemma 6.1 from \cite{Barles2013} (see also \cite[Proposition~4.3]{Crandall1992}), we see that there are sequences $(x_{n_k}, t_{n_k}) \in \Omega \times I$ and $(a_{n_k}, p_{n_k},X_{n_k}) \in \mathcal{P}^{2, +} u_{n_k}(x_{n_k}, t_{n_k})$ such that \[ (x_{n_k}, t_{n_k}, u_{n_k}(x_{n_k}, t_{n_k}), a_{n_k}, p_{n_k},X_{n_k}) \to (x, t, \overline{u}(x, t), a, p, X). \] As the $u_{n_k}$ are viscosity subsolutions, there are $\mu_{n_k} \in \mathcal{S}_{n_k}(a_{n_k})$ such that \[ F_{n_k}(x_{n_k}, t_{n_k}, u_{n_k}, a_{n_k}, p_{n_k}, X_{n_k}) - \mu_{n_k} G_{n_k}(x_{n_k}, t_{n_k}, u_{n_k}, p_{n_k}) \le 0. \] By the assumptions in the theorem, there is a further subsequence (not relabeled) such that $\mu_{n_k} \to \mu \in \mathcal{S}(a)$. Hence, by the definition of the half-relaxed limit $\underline{F}$ and the uniform convergence of $G_{n_k}$, we have \begin{align*} &\underline{F}(x, t, \overline{u}, a, p, X) - \mu G(x, t, \overline{u}, p) \\ &\le \liminf_{k\to \infty } \left(F_{n_k}(x_{n_k}, t_{n_k}, u_{n_k}, a_{n_k}, p_{n_k}, X_{n_k}) - \mu_{n_k} G_{n_k}(x_{n_k}, t_{n_k}, u_{n_k}, p_{n_k})\right) \\ &\le 0. \end{align*} \end{proof} \end{document}
\begin{document} \title{Privacy of a lossy bosonic memory channel} \author{Giovanna Ruggeri} \email{[email protected]} \affiliation{Dipartimento di Fisica, Universit\`a di Lecce, I-73100 Lecce, Italy} \author{Stefano Mancini} \email{[email protected]} \affiliation{Dipartimento di Fisica, Universit\`{a} di Camerino, I-62032 Camerino, Italy} \begin{abstract} We study the security of the information transmission between two honest parties realized through a lossy bosonic \emph{memory} channel when losses are captured by a dishonest party. We then show that entangled inputs can enhance the private information of such a channel, which, however, never exceeds that of unentangled inputs in the absence of memory. \end{abstract} \pacs{03.67.Hk, 03.65.Ud, 42.50.Dv} \maketitle \section{Introduction} Quantum communications concern the study of quantum channels that also use continuous alphabets \cite{brau}. These can be modeled by bosonic field modes whose phase-space quadratures enable continuous-variable encoding/decoding. The lossy bosonic channel, which consists of a collection of bosonic modes that lose energy en route from the transmitter to the receiver, belongs to the class of Gaussian channels, which provide a fertile testing ground for the general theory of quantum channels' capacities \cite{Hol99} and are easy to implement experimentally with a high level of accuracy (beam splitters and squeezers are examples of Gaussian operations) \cite{brau}. Hence, the increasing attention devoted to channels with memory effects, where the noise may be strongly correlated between uses of the channel, has been extended to bosonic channels \cite{GM05}. The main motivation for investigating memory effects in such channels has been the possibility to enhance their classical capacity by means of entangled inputs \cite{Rug05, Cerf04}. This raises the question of the effectiveness of entangled inputs for other kinds of capacities. Here we study the \emph{private classical information capacity} \cite{Dev} of a lossy bosonic memory channel. We use the model introduced in Ref.\cite{Rug05}, where the memory effects are realized by considering quantum correlations among the environments acting on different channel uses \cite{GM05}. We then analyze the security (privacy) of the channel when there is a third, dishonest party who captures the information lost during the process of transmission between the sender and the receiver, i.e., the eavesdropper accesses the environment's final state. We shall show that entangled inputs can enhance the private classical information capacity \cite{Dev}, but not above that of unentangled inputs in the absence of memory. This is in contrast with what happens for the simple classical information capacity \cite{Rug05,Cerf04}. \section{The model} Let us consider a $2$-shot bosonic channel with correlated noise, acting on two independent modes of the electromagnetic field associated with the annihilation operators ${\hat a}_1,{\hat a}_2$. Here, each mode ${\hat a}_k$ (with $k=1,2$) interacts with an environment mode ${\hat b}_k$ through a beam splitter of transmittivity $\eta\in[0,1]$, modeling losses. For single-mode Gaussian channels it is conjectured that Gaussian inputs suffice to achieve the classical capacities. Here we do not question this generally accepted conjecture, coming from Ref.\cite{Hol99}, and we assume it to be valid also for two-mode (memory) channels, as in Ref.\cite{Cerf04}.
Hence, we consider a general Gaussian input as a mixture of entangled (two-mode squeezed) coherent states, given by \begin{equation} |\psi \left( \mu_1\,,\mu_2 \right)\rangle={\hat S}_{a}(r)\left [{\hat D}_{a_2} (\mu_2)|0 \rangle_{a_2} {\hat D}_{a_1}(\mu_1)|0\rangle_{a_1}\right], \label{psi} \end{equation} where ${\hat D}_{a_k} (\mu_k)=\exp \left(\mu_k {\hat a}_{k}^{\dagger}-\mu_k^* {\hat a}_{k} \right)$ is the displacement operator of the single mode $k$, corresponding to the complex number $\mu_k$, and ${\hat S}_a(r)=\exp \left[\frac{1}{2}r \left( {\hat a}_1^{\dagger}{\hat a}_2^{\dagger}-{\hat a}_1 {\hat a}_2\right) \right]$ denotes the two-mode squeeze operator \cite{brau}, with $r$ the entanglement parameter between the two inputs ($r=0$ corresponds to product input states). Let us write the input (\ref{psi}) as a density operator \begin{eqnarray} {\rho}_{in}\left(\mu_1\,,\mu_2 \right)=|\psi\left(\mu_1\,,\mu_2 \right)\rangle \langle\psi\left(\mu_1\,,\mu_2 \right)|. \label{rhoin} \end{eqnarray} Then, we assume that such states are weighted with the Gaussian probability distribution \begin{equation} P\left(\mu_1\,,\mu_2 \right)=\frac{1}{\pi^2 N^2} \exp \left( -\frac{|\mu_1|^2+|\mu_2|^2}{N}\right)\,, \label{Pimu} \end{equation} where $N$ is the average photon number per channel use. In practice, due to the entanglement, the effective average photon number per channel use will be $N_{eff}=N+\sinh^2 r$. According to Ref.\cite{GM05}, we introduce the correlations between the environment actions on the two channel uses through a two-mode squeezed vacuum state, \begin{equation} {\rho}_{env}={\hat S}_b(s)\left [|0 \rangle_{b_2}|0\rangle_{b_1}{}_{b_1}\langle 0|{}_{b_2}\langle 0|\right] {\hat S}^{\dag}_b(s), \label{rhoenv} \end{equation} where ${\hat S}_b(s)=\exp \left[\frac{1}{2}s \left( {\hat b}_1^{\dagger}{\hat b}_2^{\dagger}-{\hat b}_1 {\hat b}_2\right) \right]$, with $s$ the memory parameter ($s=0$ corresponds to the memoryless case). In our model, the interaction between input and environment is characterized by the beam splitter transformation \cite{brau} \begin{eqnarray} \label{unouno} {\hat a}_k&\mapsto&\sqrt{\eta}\; {\hat a}_k-\sqrt{1-\eta}\; {\hat b}_k,\\ {\hat b}_k&\mapsto&\sqrt{1-\eta}\; {\hat a}_k+\sqrt{\eta}\; {\hat b}_k. \label{duedue} \end{eqnarray} As a consequence, the input and environment states are mapped onto a global (possibly entangled) state ${\rho}$, \begin{equation} {\rho}_{in}\otimes{\rho}_{env}\mapsto {\rho}. \label{rhomap} \end{equation} Then, on the one hand, the receiver will get the output state $\rho_{out}={\rm Tr}_{b_1b_2}(\rho)$. On the other hand, the losses might be captured by a dishonest party, the eavesdropper, accessing the environment's final state ${\rho}_{eve}={\rm Tr}_{a_1a_2}(\rho)$. Since we deal with Gaussian states, it is useful to work with Wigner distribution functions \cite{brau}. Let $q_k\,,p_k$ be the quadrature variables of $\hat{a}_k$, i.e., the classical variables corresponding to $\hat{q}_k=(\hat{a}_k +\hat{a}_k^{\dag})/\sqrt{2}$, $\hat{p}_k=-i(\hat{a}_k -\hat{a}_k^{\dag})/\sqrt{2}$. Moreover, let $\mu_k^R\,,\mu_k^I$ be the real and imaginary parts of $\mu_k$.
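Before proceeding, we note as an elementary consistency check (included only for convenience) that the transformation in Eqs.(\ref{unouno}) and (\ref{duedue}) is canonical, i.e., the transformed modes still obey bosonic commutation relations:
\begin{align*}
\bigl[\sqrt{\eta}\,{\hat a}_k-\sqrt{1-\eta}\,{\hat b}_k,\;\sqrt{\eta}\,{\hat a}_k^{\dagger}-\sqrt{1-\eta}\,{\hat b}_k^{\dagger}\bigr]
&=\eta\,[{\hat a}_k,{\hat a}_k^{\dagger}]+(1-\eta)\,[{\hat b}_k,{\hat b}_k^{\dagger}]=1\,,\\
\bigl[\sqrt{\eta}\,{\hat a}_k-\sqrt{1-\eta}\,{\hat b}_k,\;\sqrt{1-\eta}\,{\hat a}_k^{\dagger}+\sqrt{\eta}\,{\hat b}_k^{\dagger}\bigr]
&=\sqrt{\eta(1-\eta)}\,[{\hat a}_k,{\hat a}_k^{\dagger}]-\sqrt{\eta(1-\eta)}\,[{\hat b}_k,{\hat b}_k^{\dagger}]=0\,.
\end{align*}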
We introduce the row vectors in $\mathbb{R}^{4}$, \begin{eqnarray} \mbox{\boldmath$u$}&=&\left( q_1,q_2,p_1,p_2\right)\,,\\ \mbox{\boldmath$\mu$}&=&\left( \mu_1^R,\mu_2^R,\mu_1^I,\mu_2^I\right)\,, \end{eqnarray} and the real $4 \times 4$ matrix \begin{equation} \label{Ar} {\cal A}_r= \left( \begin{array}{cccc} \cosh {2r} & -\sinh {2r} & 0 & 0 \\ -\sinh {2r} & \cosh {2r} & 0 & 0 \\ 0 & 0 & \cosh {2r} & \sinh {2r} \\ 0 & 0 & \sinh {2r} & \cosh {2r} \end{array} \right)\,. \end{equation} The Wigner function corresponding to ${\rho}_{in}$ in Eq.(\ref{rhoin}) reads \begin{equation} W_{in}(\mbox{\boldmath$u$};\mbox{\boldmath$\mu$})=\frac{1}{\pi^2} \exp\left[-\mbox{\boldmath$u$}{\cal A}_r \mbox{\boldmath$u$}^T-\mbox{\boldmath$\mu$}{\cal A}_r \mbox{\boldmath$\mu$}^T+2\mbox{\boldmath$\mu$}{\cal A}_r \mbox{\boldmath$u$}^T\right]. \label{Win} \end{equation} Moreover, let us denote by \begin{eqnarray} \mbox{\boldmath$v$}&=&\left(x_1,x_2, y_1,y_2\right)\, \end{eqnarray} the real 4-component vector of the quadrature variables $x_k$, $y_k$ associated with the environment operators ${\hat b}_k$, i.e., the classical variables corresponding to $\hat{x}_k=(\hat{b}_k +\hat{b}_k^{\dag})/\sqrt{2}$, $\hat{y}_k=-i(\hat{b}_k -\hat{b}_k^{\dag})/\sqrt{2}$. Then, the Wigner function corresponding to ${\rho}_{env}$ of Eq.(\ref{rhoenv}) reads \begin{eqnarray} W_{env}(\mbox{\boldmath$v$})=\frac{1}{\pi^2} \exp\left[-\mbox{\boldmath$v$}{\cal A}_s \mbox{\boldmath$v$}^T\right]\,, \end{eqnarray} where ${\cal A}_s$ is given by Eq.(\ref{Ar}) with the replacement $r\to s$. To proceed, let us define the vectors in $\mathbb{R}^{8}$ \begin{equation} \mbox{\boldmath$\gamma$}=(\mbox{\boldmath$u$},\mbox{\boldmath$v$})\,,\quad \mbox{\boldmath$\theta$}=(\mbox{\boldmath$u$},\mbox{\boldmath$0$})\,,\quad \mbox{\boldmath$\tau$}=(\mbox{\boldmath$0$},\mbox{\boldmath$v$})\,,\quad \mbox{\boldmath$\kappa$}=(\mbox{\boldmath$\mu$},\mbox{\boldmath$0$})\,. \label{gathtaka} \end{equation} We can write the total (input plus environment) Wigner function, corresponding to ${\rho}_{in}\otimes {\rho}_{env}$, as \begin{eqnarray}\label{Wtot} &&W_{in}\left(\mbox{\boldmath$u$};\mbox{\boldmath$\mu$}\right)W_{env}\left(\mbox{\boldmath$v$}\right)\\ &=&\frac{1}{\pi^4} \exp\left[ -\mbox{\boldmath$\gamma$}{\cal A}\mbox{\boldmath$\gamma$}^T+2\mbox{\boldmath$\kappa$}{\cal A}\mbox{\boldmath$\gamma$}^T-\mbox{\boldmath$\kappa$}{\cal A}\mbox{\boldmath$\kappa$}^T\right]\,, \nonumber \end{eqnarray} where ${\cal A}$ is the $8 \times 8$ diagonal block matrix \begin{equation} {\cal A}= \left( \begin{array}{cc} {\cal A}_r & 0 \\ 0 & {\cal A}_s \end{array} \right)\,. \end{equation} As a consequence of Eqs.(\ref{unouno}) and (\ref{duedue}), the signal-noise coupling corresponds to the change of variables \begin{equation}\label{bs} \mbox{\boldmath$\gamma$}^T\longrightarrow {\cal B}\mbox{\boldmath$\gamma$}^T\, \end{equation} produced by the unitary beam splitter matrix \begin{equation} \label{O} {\cal B}= \left( \begin{array}{cc} \sqrt{\eta}\,\, {\cal I} & \sqrt{1-\eta} \,\, {\cal I} \\ -\sqrt{1-\eta} \,\, {\cal I} & \sqrt{\eta} \,\, {\cal I} \end{array} \right)\,, \end{equation} with $ {\cal I}$ the $4\times 4$ identity matrix.
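Note, as an elementary check recorded here for later convenience, that ${\cal B}$ is orthogonal, so that the change of variables (\ref{bs}) has unit Jacobian and preserves the normalization of the Wigner function:
\begin{equation*}
{\cal B}^T{\cal B}=
\left(
\begin{array}{cc}
\bigl[\eta+(1-\eta)\bigr]\,{\cal I} & \bigl[\sqrt{\eta(1-\eta)}-\sqrt{\eta(1-\eta)}\bigr]\,{\cal I}\\
\bigl[\sqrt{\eta(1-\eta)}-\sqrt{\eta(1-\eta)}\bigr]\,{\cal I} & \bigl[(1-\eta)+\eta\bigr]\,{\cal I}
\end{array}
\right)
=\left(
\begin{array}{cc}
{\cal I} & 0\\
0 & {\cal I}
\end{array}
\right).
\end{equation*}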
Inserting Eq.(\ref{bs}) into Eq.(\ref{Wtot}), we obtain the total Wigner function after the interaction between input and environment, corresponding to the state $\rho$ of Eq.(\ref{rhomap}), \begin{align} W(\mbox{\boldmath$\gamma$}; \mbox{\boldmath$\kappa$})=\frac{1}{\pi^4} \exp&\left[ -\mbox{\boldmath$\gamma$}\, {\cal B}^T{\cal A}{\cal B} \,\mbox{\boldmath$\gamma$}^T\right.\nonumber\\ &\left.+2\mbox{\boldmath$\kappa$}\, {\cal A}{\cal B} \,\mbox{\boldmath$\gamma$}^T-\mbox{\boldmath$\kappa$}\,{\cal A}\,\mbox{\boldmath$\kappa$}^T \right]. \label{Wtotafter} \end{align} \section{Private Classical Information Capacity} Integrating Eq.(\ref{Wtotafter}) over the variable $\mbox{\boldmath$v$}$ (resp. $\mbox{\boldmath$u$}$), we get the output (resp. eavesdropper) Wigner function $W_{out}(\mbox{\boldmath$\theta$}; \mbox{\boldmath$\kappa$})$ (resp. $W_{eve}(\mbox{\boldmath$\tau$}; \mbox{\boldmath$\kappa$})$). This means that we have the following correspondences \begin{eqnarray} {\rho}_{out}&\leftrightarrow& W_{out}(\mbox{\boldmath$\theta$};\mbox{\boldmath$\kappa$})=\int d\mbox{\boldmath$v$} \;W(\mbox{\boldmath$\gamma$};\mbox{\boldmath$\kappa$}),\label{rhoout}\\ {\rho}_{eve}&\leftrightarrow& W_{eve}(\mbox{\boldmath$\tau$};\mbox{\boldmath$\kappa$})=\int d\mbox{\boldmath$u$} \;W(\mbox{\boldmath$\gamma$};\mbox{\boldmath$\kappa$}).\label{rhoeve} \end{eqnarray} Averaging over the input distribution (\ref{Pimu}), we also have \begin{eqnarray} {\overline\rho}_{out}&\leftrightarrow& {\overline W}_{out}(\mbox{\boldmath$\theta$})=\int d\mbox{\boldmath$\mu$} \;P(\mbox{\boldmath$\mu$}){W}_{out}(\mbox{\boldmath$\theta$};\mbox{\boldmath$\kappa$}), \label{orhoout}\\ {\overline\rho}_{eve}&\leftrightarrow& {\overline W}_{eve}(\mbox{\boldmath$\tau$})=\int d\mbox{\boldmath$\mu$} \; P(\mbox{\boldmath$\mu$}){W}_{eve}(\mbox{\boldmath$\tau$};\mbox{\boldmath$\kappa$}).
\label{orhoeve} \end{eqnarray} From Eqs.(\ref{rhoout}), (\ref{rhoeve}), (\ref{orhoout}), and (\ref{orhoeve}), taking into account Eq.(\ref{gathtaka}), we get the following Gaussian functions \begin{eqnarray} W_{out}(\mbox{\boldmath$u$};\mbox{\boldmath$\mu$})&=&\frac{1}{(2\pi)^2\sqrt{\det V_{out}}} \exp \left[-\frac{1}{2} \mbox{\boldmath$u$} V_{out}^{\,-1}\mbox{\boldmath$u$}^T\right.\\ & & \left.+\sqrt{\eta}\mbox{\boldmath$\mu$}V_{out}^{\,-1}\mbox{\boldmath$u$}^T -\frac{1}{2}\eta \mbox{\boldmath$\mu$}V_{out}^{\,-1}\mbox{\boldmath$\mu$}^T\right],\nonumber\\ W_{eve}(\mbox{\boldmath$v$};\mbox{\boldmath$\mu$})&=&\frac{1}{(2\pi)^2\sqrt{\det V_{eve}}} \exp\left[-\frac{1}{2} \mbox{\boldmath$v$} V_{eve}^{\,-1}\mbox{\boldmath$v$}^T\right.\\ & & \left.+\sqrt{1-\eta}\mbox{\boldmath$\mu$}V_{eve}^{\,-1}\mbox{\boldmath$v$}^T -\frac{1}{2}(1-\eta) \mbox{\boldmath$\mu$}V_{eve}^{\,-1}\mbox{\boldmath$\mu$}^T\right],\nonumber\\ {\overline W}_{out}(\mbox{\boldmath$u$})&=&\frac{1}{(2\pi)^2\sqrt{\det{\overline V}_{out}}}\exp{\left[-\frac{1}{2} \mbox{\boldmath$u$} {\overline V}_{out}^{\,-1}\mbox{\boldmath$u$}^T \right]},\\ {\overline W}_{eve}(\mbox{\boldmath$v$})&=&\frac{1}{(2\pi)^2\sqrt{\det{\overline V}_{eve}}}\exp{\left[-\frac{1}{2} \mbox{\boldmath$v$} {\overline V}_{eve}^{\,-1}\mbox{\boldmath$v$}^T \right]}, \end{eqnarray} whose covariance matrices turn out to be \begin{eqnarray} V_{out}\,&=&\frac{1}{2}\left[\eta {\mathcal A}_{r}^{-1}+\left(1-\eta\right){\mathcal A}_{s}^{-1}\right],\label{Vout}\\ V_{eve}\,&=&\frac{1}{2}\left[\left(1-\eta\right) {\mathcal A}_{r}^{-1}+\eta {\mathcal A}_{s}^{-1}\right],\label{Veve}\\ {\overline V}_{out}&=&V_{out}+\frac{1}{2}\,\eta\, N {\cal I},\label{oVout}\\ {\overline V}_{eve}&=&V_{eve}+\frac{1}{2}\left(1-\eta\right) \,N {\cal I}.\label{oVeve} \end{eqnarray} Since any excess information the receiver has relative to the eavesdropper can in principle be exploited by the receiver and the sender to distill a shared secret key \cite{CK78}, it makes sense to consider the difference between the Holevo information of the receiver and that of the eavesdropper as the guaranteed privacy of the channel \cite{Dev}. Hence, we introduce the private information normalized to the number of channel uses, \begin{eqnarray} \label{Ip} I_p(2)=\frac{1}{2}\left(\chi_{out}-\chi_{eve}\right), \end{eqnarray} with \begin{eqnarray} \chi_{out}&=& S\left(\overline{\rho}_{out}\right)-\int d\mbox{\boldmath$\mu$} P(\mbox{\boldmath$\mu$}) S\left({\rho}_{out}\right),\label{chiout}\\ \chi_{eve}&=& S\left(\overline{\rho}_{eve}\right)-\int d\mbox{\boldmath$\mu$} P(\mbox{\boldmath$\mu$}) S\left({\rho}_{eve}\right),\label{chieve} \end{eqnarray} where $S$ denotes the von Neumann entropy. The supremum of the private information (\ref{Ip}) represents the 2-shot private classical information capacity \cite{Dev}. Due to the initial conjecture on the optimality of Gaussian encoding and the generality of the state (\ref{psi}), it amounts to the maximum of $I_p$ over the parameter $r$.
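Before turning to explicit expressions, we record a minimal numerical sketch, for illustration only and not used in the analysis, of how the private information (\ref{Ip}) can be evaluated directly from the covariance matrices (\ref{Vout})-(\ref{oVeve}); it assumes Python with NumPy, computes symplectic eigenvalues as the moduli of the eigenvalues of $\Omega V$, and the routine names are ours, chosen merely for this illustration.
\begin{verbatim}
import numpy as np

def A(x):        # the 4x4 matrix A_x (same structure as A_r, parameter x)
    c, s = np.cosh(2*x), np.sinh(2*x)
    return np.block([[np.array([[c, -s], [-s, c]]), np.zeros((2, 2))],
                     [np.zeros((2, 2)), np.array([[c, s], [s, c]])]])

def symp_eigs(V):   # symplectic eigenvalues of a 4x4 covariance matrix
    Omega = np.block([[np.zeros((2, 2)), np.eye(2)],
                      [-np.eye(2), np.zeros((2, 2))]])
    return np.sort(np.abs(np.linalg.eigvals(Omega @ V)))[::2]

def g(x):
    return 0.0 if x <= 0 else (x + 1)*np.log2(x + 1) - x*np.log2(x)

def S(V):           # von Neumann entropy from the symplectic spectrum
    return sum(g(lam - 0.5) for lam in symp_eigs(V))

def I_p(eta, r, s, N):
    Vout = 0.5*(eta*np.linalg.inv(A(r)) + (1 - eta)*np.linalg.inv(A(s)))
    Veve = 0.5*((1 - eta)*np.linalg.inv(A(r)) + eta*np.linalg.inv(A(s)))
    chi_out = S(Vout + 0.5*eta*N*np.eye(4)) - S(Vout)
    chi_eve = S(Veve + 0.5*(1 - eta)*N*np.eye(4)) - S(Veve)
    return 0.5*(chi_out - chi_eve)

Neff = 2.0          # scan r at fixed N_eff, with N = N_eff - sinh(r)^2
for r in (0.0, 0.3, 0.6):
    print(r, I_p(0.8, r, 1.0, Neff - np.sinh(r)**2))
\end{verbatim}
Scanning $r$ at fixed $N_{eff}$ in this way can be used to generate the curves discussed below.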
The symplectic eigenvalues of the covariance matrices (\ref{Vout}), (\ref{Veve}), (\ref{oVout}), and (\ref{oVeve}), \begin{eqnarray} \lambda_{out,j}&=&\frac{1}{2}\left[1-2\,\eta\,\left(1-\eta\right)+2\,\eta\,\left(1-\eta\right)\,\cosh 2(r-s)\right]^{1/2},\nonumber\\ & & \\ \lambda_{eve,j}&=&\frac{1}{2}\left[1-2\,\eta\,\left(1-\eta\right)+2\,\eta\,\left(1-\eta\right)\,\cosh 2(r-s)\right]^{1/2},\nonumber\\ & & \\ {\overline\lambda}_{out,j}&=&\frac{1}{2}\left\{1+\eta^2\left(N^2+2\,N\cosh 2r\right)\right.\nonumber\\ & & \left.+2\,\eta\,(1-\eta)\left[\cosh 2(r-s)+N\cosh 2s-1\right]\right\}^{1/2},\nonumber\\ & & \\ {\overline\lambda}_{eve,j}&=&\frac{1}{2}\left\{1+(1-\eta)^2\left(N^2+2\,N\cosh 2r\right)\right.\nonumber\\ & & \left.+2\,\eta\,(1-\eta)\left[\cosh 2(r-s)+N\cosh 2s-1\right]\right\}^{1/2},\nonumber\\ & & \end{eqnarray} allow us to calculate the entropies in Eqs.(\ref{chiout}) and (\ref{chieve}) as \cite{Hol99} \begin{eqnarray} S({\rho}_{out})&=&\sum_{j=1}^{2}g\left(|\lambda_{out,j}|-\frac{1}{2}\right), \label{Srhoout}\\ S({\rho}_{eve})&=&\sum_{j=1}^{2}g\left(|\lambda_{eve,j}|-\frac{1}{2}\right), \label{Srhoeve}\\ S(\overline{\rho}_{out})&=&\sum_{j=1}^{2}g\left(|{\overline\lambda}_{out,j}|-\frac{1}{2}\right),\\ S(\overline{\rho}_{eve})&=&\sum_{j=1}^{2}g\left(|{\overline\lambda}_{eve,j}|-\frac{1}{2}\right), \end{eqnarray} where $g(x)=(x+1)\log_2 (x+1)-x\log_2 x$. It turns out that Eqs.(\ref{Srhoout}) and (\ref{Srhoeve}) do not depend on $\mbox{\boldmath$\mu$}$; hence, since $\int d\mbox{\boldmath$\mu$}\,P(\mbox{\boldmath$\mu$})=1$, the integrals in Eqs.(\ref{chiout}) and (\ref{chieve}) reduce to $S({\rho}_{out})$ and $S({\rho}_{eve})$, and we straightforwardly obtain $\chi_{out}$ and $\chi_{eve}$. \section{Results and Conclusions} We are now in a position to analyze the behavior of the quantity $I_p$ of Eq.(\ref{Ip}). Since we want to bound the effective average photon number per channel use, in practice we fix $N_{eff}$ as the effective input photon number and let $N$ vary as a function of $r$ ($N=N_{eff}-\sinh^2 r$), limiting the range of $r$ to those values for which $N\ge 0$. Another parameter to take into account is the beam splitter transmittivity $\eta$. We distinguish three limiting cases. For $\eta=1$ the channel is lossless, thus all the information arrives at the receiver; i.e., $I_p\ge 0$ and it is maximal with respect to $\eta$. For $\eta=1/2$ the receiver and the eavesdropper have the same information; i.e., $I_p=0$. For $\eta=0$ all the information is lost and captured by the eavesdropper; i.e., $I_p\le 0$ and it is minimal with respect to $\eta$. Here we present two intermediate situations: $\eta=0.8$ (Fig.\ref{fig1}) and $\eta=0.2$ (Fig.\ref{fig2}). \begin{figure} \caption{Private information $I_p$ versus the entanglement parameter $r$. Curves from top to bottom are for $s=0$, $1$, $2$, $3$. The values of the other parameters are $\eta=0.8$ and $N_{eff}=2$.} \label{fig1} \end{figure} \begin{figure} \caption{Private information $I_p$ versus the entanglement parameter $r$. Curves from bottom to top are for $s=0$, $1$, $2$, $3$. The values of the other parameters are $\eta=0.2$ and $N_{eff}=2$.} \label{fig2} \end{figure} In Fig.\ref{fig1} the private information $I_p$ is shown {\em vs.} the entanglement parameter $r$ for different values of the degree of memory $s$, for an input photon number $N_{eff}=2$. For $s=0$ the behavior of $I_p$ is symmetric with respect to the value $r=0$, at which it attains its maximum. That is, entangled inputs are of no use in the memoryless case. As soon as the degree of memory increases ($s>0$), the symmetry is broken.
For a certain range of values of $r$, the function $I_p$ rises above its value corresponding to the product input state ($r=0$). This clearly shows an improvement of the security of transmission through a memory channel when entangled inputs are used instead of product states. However, the effectiveness of entangled inputs in the presence of memory never exceeds that of unentangled inputs in the absence of memory (the maxima of the curves for $s> 0$ lie below the maximum of the curve for $s=0$). This is in contrast with what happens for the simple classical capacity \cite{Rug05,Cerf04}. It is also intuitive that, as the memory strength increases, the private information becomes lower and lower, eventually flattening to zero. In fact, correlations among the (classical) symbols help the eavesdropper to predict them, while completely uncorrelated symbols would prevent this. In Fig.\ref{fig2} the private information {\em vs.} the entanglement parameter $r$ is negative, as expected for $\eta\le 1/2$.\footnote{$I_p$ shows a mirror symmetry with respect to the abscissa when $\eta\rightarrow (1-\eta)$ and a mirror symmetry with respect to the ordinate when $s\rightarrow -s$.} Hence, in such cases the channel does not represent a secure means of information transmission, even in the presence of memory. The security threshold $\eta=1/2$ recalls those found for secure continuous-variable cryptographic key distribution \cite{Gro05}. In particular, our model corresponds to a coherent attack on the lossy channel in which the receiver is also allowed to perform general collective measurements. Thus, due to the symmetry between the receiver's and the eavesdropper's operations, the security threshold coincides with that of individual attacks, $\eta=1/2$ \cite{Gro05}. It is straightforward to extend the studied model to many uses of the channel, by employing the same correlation strength among all environment modes (as in Ref.\cite{Rug05}), that is, by employing an operator $\hat{S}_b$ of the form $\exp\left[\frac{1}{2}s\sum_{k\neq k'}\left(\hat{b}_k^{\dag}\hat{b}_{k'}^{\dag}-\hat{b}_k\hat{b}_{k'}\right)\right]$. In such a case we observe that the quantities $\chi_{out}$ and $\chi_{eve}$ are linear in the number of uses (number of modes). Thus, $I_p(n)=I_p(2)$ and the presented results still hold. A much more demanding task would be the study of the private classical information when non-symmetric noise correlations are involved in many channel uses, i.e., when the memory has a finite range over the channel uses. However, we expect the same conclusions about the effectiveness of entangled inputs. In conclusion, we have studied the private classical information capacity of a lossy bosonic channel including memory effects. We have shown the possibility of enhancing it by means of entangled inputs with respect to product states. However, the effectiveness of entangled inputs in the presence of memory never exceeds that of unentangled inputs in the absence of memory. This is in contrast with what happens for the simple classical capacity. \end{document}
\begin{document} \begin{abstract} Superconductivity for Type II superconductors in external magnetic fields of magnitude between the second and third critical fields is known to be restricted to a narrow boundary region. The profile of the superconduc\-ting order parameter in the Ginzburg-Landau model is expected to be governed by an effective one-dimensional model. This is known to be the case for external magnetic fields sufficiently close to the third critical field. In this text we prove such a result on a larger interval of validity. \end{abstract} \title{Superconductivity between $H_{C_2}$ and $H_{C_3}$} \section{Introduction} \subsection{Background} When studying superconductivity in the Ginzburg-Landau mo\-del in strong magnetic fields, one encounters three critical values of the magnetic field strength. The first critical field is where a vortex appears and will not concern us in the present text. At the second critical field, denoted $H_{C_2}$, superconductivity becomes essentially restricted to the boundary and is weak in the interior. At the third critical field, $H_{C_3}$, superconductivity disappears altogether. In this paper we will discuss superconductivity in the zone between $H_{C_2}$ and $H_{C_3}$. The Ginzburg-Landau model of superconductivity is the following functional, \begin{align}\label{eq-hc2-GL} \mathcal E_{\kappa, H}[\psi,\mathbf{A}]&=\int_\Omega |(\nabla-i\kappa H\mathbf{A})\psi|^2-\kappa^2|\psi|^2+\frac{\kappa^2}2|\psi|^4+(\kappa H)^2|{\rm curl}(\mathbf{A}-\mathbf{F})|^2\,dx\,. \end{align} Here $\psi \in W^{1,2}(\Omega)$ is a complex-valued wave function, $\mathbf{A} \in W^{1,2}(\Omega,{\mathbb R}^2)$ a vector potential, $\kappa$ is the Ginzburg-Landau parameter (a material parameter), and $H$ is the strength of the applied magnetic field. The potential $\mathbf{F}:\Omega\to{\mathbb R}^2$ is the unique vector field satisfying \begin{equation}\label{eq-hc2-F} \curl\mathbf{F}=1\,,\quad\Div\mathbf{F}=0\quad\text{in}~\Omega\,, \qquad\qquad N\cdot \mathbf{F}=0\quad\text{on}~\partial\Omega\,, \end{equation} where $N$ is the unit inward normal vector of $\partial\Omega$. With this notation, the critical fields behave as follows for large $\kappa$: \begin{align} \label{eq:24} H_{C_2} = \kappa + o(\kappa),\qquad H_{C_3} = \frac{\kappa}{\Theta_0} + o(\kappa), \end{align} where $\Theta_0 \approx 0.59$ is a universal constant. The definition of $\Theta_0$ is recalled in~\eqref{eqTheta_0} below. Therefore, when we study the Ginzburg-Landau functional for $H = b \kappa$, $1<b<\Theta_0^{-1}$, superconductivity should be a boundary phenomenon. This was proved in a weak sense in~\cite{pan2}. \begin{theorem}[\cite{pan2}]\label{thm:Pan} For any $b\in \left]1, \Theta_0^{-1}\right[$, there exists a constant $E_b$, such that, for $H = \kappa b$, \begin{align} \label{eq:25} \inf_{(\psi,{\bf A}) \in W^{1,2}(\Omega) \times W^{1,2}(\Omega;{\mathbb R}^2)} {\mathcal E}_{\kappa, H} [\psi, {\bf A}] = - \sqrt{\kappa H} E_b |\partial \Omega| + o(\kappa),\qquad \text{ as } \kappa \rightarrow \infty. \end{align} \end{theorem} Local energy results are also obtained in~\cite{pan2}. Theorem~\ref{thm:Pan} indicates that superconductivity is uniformly distributed along the boundary. However, the constant $E_b$ is only defined as a limit and its calculation is not easy. A number of conjectures related to the calculation of $E_b$ are given in~\cite{pan2}. In~\cite{AlHe} (see also~\cite[Chapter~14]{fohebook}), the constant $E_b$ is determined for $b$ in the vicinity of $\Theta_0^{-1}$.
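For orientation (using only the numerical value $\Theta_0 \approx 0.59$ quoted above), the window in Theorem~\ref{thm:Pan} is
\begin{equation*}
1 < b < \Theta_0^{-1} \approx \frac{1}{0.59} \approx 1.69,
\end{equation*}
i.e., applied fields $H = b\kappa$ between roughly $\kappa$ and $1.69\,\kappa$.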
It turns out that the determination of the constant in this {\it non-linear} problem can be reduced to the positivity of a {\it linear} operator. Define the space ${\mathcal B}^1({\mathbb R}^+)$ as \begin{align} \label{eq:29} {\mathcal B}^1({\mathbb R}^+) = \{ \phi \in L^2({\mathbb R}^+) \,:\, \phi' \in L^2({\mathbb R}^+) \text{ and } t \phi \in L^2({\mathbb R}^+)\}. \end{align} Define, for $z \in {\mathbb R}$, $\lambda>0$, \begin{align} \label{eq:44} {\mathcal F}_{z,\lambda}(\phi)&:= \int_0^{+\infty}|\phi'(t)|^2 + (t-z)^2 |\phi(t)|^2 + \frac{\lambda}{2} |\phi(t)|^4 - \lambda |\phi(t)|^2\,dt\,, \end{align} and let $f_{z,\lambda}$ be a non-negative minimizer of this functional (see Theorem~\ref{thm:Sammenkog} below for properties of minimizers---in particular the fact that $f_{z,\lambda}$ exists and is unique). For given $\lambda >0$, minimize ${\mathcal F}_{z,\lambda}(f_{z,\lambda})$ over $z$ and denote a minimum by $\zeta(\lambda)$---we will prove below that such a minimum exists when $\lambda \in \left]\Theta_0,1\right]$. By definition of $f_{\zeta(\lambda) ,\lambda}$, \begin{align}\label{eq:26} {\mathcal F}_{z,\lambda}(\phi) \geq {\mathcal F}_{\zeta(\lambda),\lambda}(f_{\zeta(\lambda),\lambda}), \end{align} for all $(z,\phi) \in {\mathbb R} \times {\mathcal B}^1({\mathbb R^+})$. We also introduce a linear operator ${\mathfrak k}_{\lambda}$. Define, for $\nu \in {\mathbb R}$, $\lambda \in {\mathbb R^+}$, the operator ${\mathfrak k}_{\lambda}= {\mathfrak k}_{\lambda}(\nu)$ to be the Neumann realization of \begin{align}\label{eq:1b} {\mathfrak k}_{\lambda}(\nu) = -\frac{d^2}{dt^2} + (t - \nu)^2 + \lambda f_{\zeta(\lambda),\lambda}(t)^2, \end{align} on $L^2({\mathbb R}_{+})$. We denote by $\{\lambda_j(\nu)\}_{j=1}^{\infty}$ the spectrum of ${\mathfrak k}_{\lambda}(\nu)$. Also $\{ v_j(t;\nu)\}_{j=1}^{\infty}$ will be the associated real, normalized eigenfunctions. \begin{remark} Notice the following complication: Since we do not know that $\zeta(\lambda)$ is unique, the operator ${\mathfrak k}_{\lambda}(\nu)$ is really a family of operators, \[ {\mathfrak k}^{(j)}_{\lambda}(\nu) = -\frac{d^2}{dt^2} + (t - \nu)^2 + \lambda f_{\zeta_j(\lambda),\lambda}(t)^2, \] one for every minimum $\zeta_j(\lambda)$. \end{remark} It follows from~\cite{AlHe,fohebook} that \begin{theorem}\label{eq:SpecThm} Let $\lambda \in \left]\Theta_0,1\right[$. Suppose that there exists a minimum $\zeta(\lambda)$ such that for the corresponding choice of the operator ${\mathfrak k}_{\lambda}(\nu)$ we have \begin{align} \label{eq:27} \lambda \leq \inf_{\nu\in\mathbb{R}} \lambda_1(\nu)\,. \end{align} Then \begin{align} \label{eq:28} E_{\lambda^{-1}} = \frac{\lambda}{2} \| f_{\zeta(\lambda),\lambda}\|_{L^4({\mathbb R}^+)}^4. \end{align} \end{theorem} It is also proved in~\cite{AlHe,fohebook} (see Proposition~14.2.13 in~\cite{fohebook}) that there exists $\epsilon >0$ such that~\eqref{eq:27} is satisfied for $\lambda \in \left]\Theta_0, \Theta_0+\epsilon\right[$. The objective of the present paper is to give explicit bounds on the magnitude of $\epsilon$. \begin{remark} A minimizer $f_{z,\lambda}$ of the functional ${\mathcal F}_{z,\lambda}$ will be a solution to the Euler-Lagrange equations for the minimization problem~\eqref{eq:44} \begin{align}\label{eq:9b} -u'' + (t-z)^2 u + \lambda |u|^2 u = \lambda u, \qquad u'(0) = 0. 
\end{align} In particular, when $\nu=\zeta(\lambda)$ we have $\lambda_1(\nu) =\lambda$, since (by~\eqref{eq:9b} with $z = \zeta(\lambda)$) $f_{\zeta(\lambda),\lambda}$ will be a positive eigenfunction of ${\mathfrak k}_{\lambda}(\zeta(\lambda))$. \end{remark} \subsection{Main results} We are not able to prove~\eqref{eq:27} for all $\lambda\in]\Theta_0,1]$. Here we state some partial results. Clearly, $\nu=\zeta$ is a stationary point for $\lambda_1(\nu)$. Our first result shows that this is a local minimum. \begin{theorem}\label{thm:nuzeta}~ \begin{enumerate} \item Let $\Theta_0<\lambda\leq 1$. Then $\lambda_1(\nu)$ has a local minimum for $\nu=\zeta,$ i.e., there exist positive constants $\delta_\lambda$ and $c_\lambda$ such that for all $|\nu-\zeta|<\delta_\lambda$ it holds that \[ \lambda_1(\nu)\geq \lambda + c_\lambda(\nu-\zeta)^2. \] \item Let $\lambda > \Theta_0,$ $z \in {\mathbb R},$ and let $f_{z,\lambda}$ be a positive minimizer of ${\mathcal F}_{z,\lambda}$. Define \begin{align}\label{eq:61b} \lambda_1(\nu;z) := \inf \Spec\Big\{-\frac{d^2}{dt^2} + (t-\nu)^2 + \lambda f_{z,\lambda}^2 \Big\} , \end{align} where we consider the Neumann realization on $L^2({\mathbb R}^+)$ of the operator. Then, $\lambda_1(\nu;z) \rightarrow 1$ as $\nu \rightarrow +\infty$. Furthermore, there exists $\nu_0= \nu_0(\lambda,z)>0$ such that \begin{equation} \label{eq:61} \lambda_1(\nu;z)> 1, \end{equation} for all $\nu \geq \nu_0$. \end{enumerate} \end{theorem} \begin{remark} In particular, the second item in Theorem~\ref{thm:nuzeta} implies that~\eqref{eq:27} is not true for $\lambda >1$. It is therefore natural to expect that~\eqref{eq:27} will be valid if and only if $\lambda \in \left]\Theta_0, 1\right]$. Notice that we will not prove that a minimum $\zeta(\lambda)$ exists for $\lambda >1$. This explains the somewhat cumbersome statement in the second item in Theorem~\ref{thm:nuzeta}. \end{remark} We also obtain an explicit range of values of $\lambda$ for which the condition~\eqref{eq:27} is satisfied. The results contain some explicit universal constants that will be defined later. In this introduction we will only state the numerical values obtained. \begin{theorem}\label{thm:largenu}~ \begin{itemize} \item[(i)] Let $\Theta_0<\lambda\leq 1$. For all $\nu\leq 1.33$ it holds that $\lambda_1(\nu)\geq\lambda$. \item[(ii)] Let $\Theta_0\leq\lambda\leq 0.8$. Then~\eqref{eq:27} holds, i.e. \begin{equation*} \inf_{\nu\in\mathbb{R}}\lambda_1(\nu)\geq \lambda. \end{equation*} \end{itemize} \end{theorem} \begin{figure} \caption{A schematic picture of what we know about $\lambda_1(\nu)$ from Theorems~\ref{thm:nuzeta} and~\ref{thm:largenu}.} \label{fig:whatweknow} \end{figure} In Section~\ref{sec:linear} we recall some well-known results about the linear de Gennes operator, and give some new spectral estimates. In Section~\ref{sec:nonlinear} we study the nonlinear problem appearing from the functional $\mathcal{F}_{z,\lambda}(\phi)$ in~\eqref{eq:44} and prove~\eqref{eq:61}. In Section~\ref{sec:mainop} we consider the operator $\mathfrak{k}_\lambda(\nu)$ and prove the remainder of Theorem~\ref{thm:nuzeta} and Theorem~\ref{thm:largenu}. \section{The linear problem}\label{sec:linear} \subsection{Reminder for the de Gennes operator} Define \begin{align} \label{defmathfrakh} {\mathfrak h}(\xi) = -\frac{d^2}{dt^2} + (t - \xi)^2, \end{align} in $L^2({\mathbb R}^{+})$ with Neumann boundary conditions at $0$.
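Purely as an illustration (not used in any of the proofs), the lowest Neumann eigenvalue of \eqref{defmathfrakh}, and hence the value of $\Theta_0$, can be checked by a crude finite-difference discretization; the following sketch assumes Python with NumPy, and the truncation length and grid size are chosen arbitrarily.
\begin{verbatim}
import numpy as np

def mu1(xi, L=12.0, n=400):
    # lowest eigenvalue of -d^2/dt^2 + (t-xi)^2 on [0,L],
    # symmetric finite differences with Neumann conditions at both ends
    h = L / n
    t = np.linspace(0.0, L, n + 1)
    main = 2.0 / h**2 + (t - xi)**2
    main[0] -= 1.0 / h**2      # Neumann correction at t = 0
    main[-1] -= 1.0 / h**2     # and at the artificial endpoint t = L
    off = -np.ones(n) / h**2
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

xis = np.linspace(0.5, 1.1, 25)
vals = [mu1(x) for x in xis]
i = int(np.argmin(vals))
print(xis[i], vals[i])   # roughly 0.77 and 0.59
\end{verbatim}
Minimizing the lowest discrete eigenvalue over $\xi$ gives back $\Theta_0 \approx 0.59$, with a minimizer close to $\sqrt{\Theta_0}\approx 0.77$, in agreement with \eqref{eqTheta_0} below.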
We will denote the eigenvalues of this operator by $\{\mu_j(\xi)\}_{j=1}^{\infty}$ and corresponding (real normalized) eigenfunctions by $u_j(t) = u_j(t;\xi)$. From a similar calculation as the one leading to~(A.18) in~\cite{bohe}, \begin{equation}\label{eq:muonebohe} \mu_1(\xi) \geq 1 - C_1\xi\exp(-\xi^2), \end{equation} for some constant $C_1>0$ and for sufficiently large $\xi$. As part of the proof of Proposition~\ref{prop:Asympu} below we will obtain a weaker asymptotics of $\mu_1(\xi)$. A basic identity from perturbation theory (Feynman-Hellmann) is \begin{align} \label{eq:1} \mu_j'(\xi) = -2 \int_0^{+\infty} (t -\xi) |u_j(t;\xi)|^2\,dt. \end{align} An integration by parts, combined with the equation satisfied by $u_j(t;\xi)$ yields the useful alternative formula from Dauge-Helffer~\cite{DaugeHelffer}: \begin{align} \label{eq:4} \mu_j'(\xi) = (\xi^2-\mu_j(\xi)) |u_j(0;\xi)|^2\,. \end{align} From~\eqref{eq:4} it is simple to deduce that $\mu_j$ has a unique minimum attained at $\xi_0^{(j)}$ satisfying \begin{align} \label{eq:5} \mu_j(\xi_0^{(j)}) = (\xi_0^{(j)})^2\,. \end{align} Notice that, from~\eqref{eq:1}, we obtain \begin{align} \label{eq:6} \xi_0^{(j)} > 0\,, \end{align} for all $j$. We will sometimes write $\xi_0=\xi_0^{(1)}$. By definition \begin{align} \label{eqTheta_0} \Theta_0 = \inf_{\xi\in\mathbb{R}} \mu_1(\xi) = \mu_1(\xi_0^{(1)}) = (\xi_0^{(1)})^2. \end{align} Finally, we recall that \begin{equation} \mu_j(0) = 1+ 4(j-1) \,,\quad \lambda_j^D(0) = 3 + 4(j-1)\,, \end{equation} where $\lambda_j^D(\xi)$ denotes the $j$-th eigenvalue of the Dirichlet realization of $\mathfrak h(\xi)$ in $L^2(\mathbb R^+)$. These identities follow upon noticing that the eigenfunctions of the harmonic oscillator on the entire line are respectively even or odd functions. \subsection{Comparison Dirichlet-Neumann} In this section we recall useful links between the Dirichlet spectrum and the Neumann spectrum of the family $\mathfrak h(\xi)$ ($\xi \in \mathbb R$) in $L^2(\mathbb R^+)$\,. By domain monotonicity, it is standard that $\xi \mapsto \lambda_j^D(\xi)$ is monotonically decreasing. By comparison of the form domains: \begin{equation} \mu_j(\xi) \leq \lambda_j^D(\xi)\,. \end{equation} Also, \begin{gather*} \lim_{\xi \to +\infty} \lambda_1^D(\xi)= \lim_{\xi \to +\infty} \mu_1(\xi) = 1\,,\\ \lim_{\xi \to +\infty} \lambda_2^D(\xi)= \lim_{\xi \to +\infty} \mu_2(\xi) = 3\,. \end{gather*} Using Sturm-Liouville theory, we also observe that, for any $j\geq 2$ and any $\xi$, there exists $\xi'$ such that \begin{equation} \mu_j(\xi)= \lambda_{j-1}^D(\xi')\,. \end{equation} In particular, using that \begin{equation} \inf _{\xi\in\mathbb{R}} \lambda_1^D(\xi)=1\,, \end{equation} we get \begin{equation}\label{minormu2} \mu_2(\xi) > 1\,. \end{equation} \subsection{The virial theorem} For $\ell >0$, the map $t \mapsto \ell t$ can be unitarily implemented on $L^2(\mathbb{R}^+)$ by the operator $U f(t) = \sqrt{\ell} f(\ell t)$. Therefore, $\mathfrak h(\xi)$ is isospectral to the (Neumann realization of the) operator \begin{equation*} {\mathfrak k}_{\ell} := -\ell^{-2} \frac{d^2}{dt^2} + (\ell t- \xi)^2. \end{equation*} Since the eigenvalues are unchanged when $\ell$ varies we can take the derivative at $\ell =1$ and find (using~\eqref{eq:1}) \begin{align*} 0 &= \int_0^{+\infty} |u_j'(t;\xi)|^2\,dt - \int_0^{+\infty} t (t - \xi) |u_j(t;\xi)|^2\,dt \\ &= \int_0^{+\infty} |u_j'(t;\xi)|^2\,dt - \int_0^{+\infty} (t - \xi)^2 |u_j(t;\xi)|^2\,dt + \frac{\xi}{2} \mu_j'(\xi). 
\end{align*} Combined with the definition of the energy \begin{align*} \mu_j(\xi) = \int_0^{+\infty} |u_j'(t;\xi)|^2\,dt + \int_0^{+\infty} (t- \xi)^2 |u_j(t;\xi)|^2\,dt\,, \end{align*} we get \begin{align} \label{eq:2a} \int_0^{+\infty} |u_j'(t;\xi)|^2\,dt &= \frac{\mu_j(\xi)}{2} - \frac{\xi \mu_j'(\xi)}{4}\,, \end{align} and \begin{align}\label{eq:2b} \int_0^{+\infty} (t- \xi)^2 |u_j(t;\xi)|^2\,dt &= \frac{\mu_j(\xi)}{2} + \frac{\xi \mu_j'(\xi)}{4}\,. \end{align} \subsection{Lower bounds on $\mu_j(\xi)$} \subsubsection{Estimates on $\mu_1$} As a warm-up, we recall the lower bound on $\mu_1(\xi)$. Let $u_1(\,\cdot\,;\xi)$ be the ground state of ${\mathfrak h}(\xi)$. We use this function as a trial state for ${\mathfrak h}(0)$ and find \begin{align} 1 = \inf {\rm Spec}\, {\mathfrak h}(0) &< \langle u_1(\,\cdot\,;\xi), {\mathfrak h}(0) u_1(\,\cdot\,;\xi)\rangle =\mu_1(\xi) + 2 \xi \int_0^{+\infty} (t-\xi) u_1(t;\xi)^2 \,dt + \xi^2. \nonumber \end{align} So we obtain the inequality \begin{equation} \label{eq:13} 1 < \mu_1(\xi) - \xi \mu_1'(\xi) + \xi^2\,. \end{equation} We insert $\xi_0^{(1)}$, using $(\xi_0^{(1)})^2 = \Theta_0 = \min_{\xi} \mu_1(\xi)$ and $\mu_1'(\xi_0^{(1)})=0$, and get \begin{align} \label{eq:14} \frac{1}{2} < \Theta_0\,. \end{align} \subsubsection{Estimates on $\mu_j$, $j>1$} From~\eqref{eq:5},~\eqref{eq:6} and the fact that $\lim_{\xi\to+\infty}\mu_j(\xi)=(2j-1)$ we find that \[ 0<\xi^{(j)}_0<\sqrt{2j-1}. \] The function $\xi\mapsto \mu_j(\xi)$ decreases from its value $\mu_j(0)=4j-3$ until it reaches its minimum at $\xi^{(j)}_0$, after which it becomes increasing, so there exists a unique point $\widehat{\xi}_j > 0$ such that $\mu_j(\widehat{\xi}_j)=2j-1$. By comparison with the harmonic oscillator on a half-axis it can be seen that $\widehat{\xi}_j$ coincides with the smallest value of $\xi$ for which $h_j'(\xi)=0$, where $h_j$ denotes the $j$th Hermite function. In particular one easily finds that \begin{equation}\label{eq:xihat} \widehat{\xi}_2 = 1,\quad \text{and}\quad \widehat{\xi}_3 = \sqrt{5/2}. \end{equation} To get the behavior of $\widehat{\xi}_j$ as $j\to\infty$ we observe by reflection that $-\widehat{\xi}_j$ is given by the value of $\xi$ for which $\mu_1(\xi)=2j-1$. Let us get an upper bound on $\mu_1(\xi)$ for $\xi$ negative. For any $\gamma>0$ and any $\xi\in\mathbb{R}$ we use the inequality \[ (t-\xi)^2 \leq (1+\gamma)t^2 + (1+1/\gamma)\xi^2 \] to obtain the quadratic form comparison (here and below $\int_0^{+\infty} |u|^2\,dt =1$) \[ \int_0^{+\infty} |u'|^2 + (t-\xi)^2|u|^2\,dt \leq \int_0^{+\infty} |u'|^2 + (1+\gamma)t^2|u|^2\,dt + (1+1/\gamma)\xi^2. \] Comparing the first eigenvalue $\mu_1(\xi)$ with the first eigenvalue of the (scaled) harmonic oscillator, we find \[ \mu_1(\xi)\leq \sqrt{1+\gamma}+(1+1/\gamma)\xi^2. \] The upper bound we get from this turns out to be rather poor. For any $\gamma>0$ and any $\xi\in\mathbb{R}$ we use the inequality \[ (t-\widehat{\xi}_j)^2 \leq (1+\gamma)(t-\xi)^2 + (1+1/\gamma)(\widehat{\xi}_j-\xi)^2 \] to obtain the quadratic form comparison \[ \int_0^{+\infty} |u'|^2 + (t-\widehat{\xi}_j)^2|u|^2\,dt \leq \int_0^{+\infty} |u'|^2 + (1+\gamma)(t-\xi)^2|u|^2\,dt + (1+1/\gamma)(\widehat{\xi}_j-\xi)^2. \] By scaling and change of function, we have that the quadratic form on the right-hand side is unitarily equivalent to \[ \sqrt{1+\gamma}\int_0^{+\infty} |u'|^2 + (t-(1+\gamma)^{1/4}\xi)^2|u|^2\,dt +(1+1/\gamma)(\widehat{\xi}_j-\xi)^2.
\] In particular, with the choice $\xi=\xi^{(j)}_0(1+\gamma)^{-1/4}$ we obtain, comparing the $j$th eigenvalue of the corresponding operators and using~\eqref{eq:5}, that \begin{align*} 2j-1 =\mu_j(\widehat{\xi}_j) &\leq \sqrt{1+\gamma}\mu_j(\xi^{(j)}_0)+ (1+1/\gamma)\bigl(\widehat{\xi}_j-\xi^{(j)}_0(1+\gamma)^{-1/4}\bigr)^2\\ & = \sqrt{1+\gamma}\bigl(\xi^{(j)}_0\bigr)^2+ (1+1/\gamma)\bigl(\widehat{\xi}_j-\xi^{(j)}_0(1+\gamma)^{-1/4}\bigr)^2. \end{align*} Now let $j=2$. By~\eqref{eq:xihat} we have \[ 3 \leq \sqrt{1+\gamma}\bigl(\xi^{(2)}_0\bigr)^2+ (1+1/\gamma)\bigl(\xi^{(2)}_0(1+\gamma)^{-1/4}-1\bigr)^2. \] Completing the square, we get \[ \bigl(\xi^{(2)}_0-(1+\gamma)^{-3/4}\bigr)^2 \geq \frac{2\gamma}{(1+\gamma)^{3/2}}, \] and hence the inequality \begin{equation}\label{eq:xitwogamma} \xi^{(2)}_0 > \frac{1+\sqrt{2\gamma}}{(1+\gamma)^{3/4}} \end{equation} (since $\frac{1-\sqrt{2\gamma}}{(1+\gamma)^{3/4}} < 1$ for all $\gamma>0$. Indeed, the function $\gamma\mapsto \frac{1-\sqrt{2\gamma}}{(1+\gamma)^{3/4}}$ starts at $1$ for $\gamma=0$ and then decreases to its minimal value $-1/\sqrt{3}$ for $\gamma=8$ after which it increases to $0$ as $\gamma\to\infty$). Optimizing~\eqref{eq:xitwogamma} in $\gamma>0$ we find that the maximal value is attained for $\gamma=1/2$, for which we have \[ \xi^{(2)}_0 > \frac{2^{7/4}}{3^{3/4}}\approx 1.48. \] The corresponding lower bound for $\mu_2$ is \begin{equation}\label{eq:mutwoopt} \mu_2\bigl(\xi^{(2)}_0\bigr) \geq \frac{2^{7/2}}{3^{3/2}}\approx 2.18. \end{equation} Continuing with $j=3$, we arrive at the inequality \[ 5 \leq \sqrt{1+\gamma}\bigl(\xi^{(3)}_0\bigr)^2+ (1+1/\gamma)\bigl(\xi^{(3)}_0(1+\gamma)^{-1/4}-\sqrt{5/2}\bigr)^2. \] The same type of calculation shows that \[ \xi^{(3)}_0 > \sqrt{\frac{5}{2}}\frac{1+\sqrt{\gamma}}{(1+\gamma)^{3/4}}. \] Optimizing over $\gamma>0$ yields $\gamma=\frac{1}{2}\bigl(13-3\sqrt{17}\bigr)\approx 0.32$ with corresponding inequality \[ \xi^{(3)}_0 >\frac{\sqrt{5}\Bigl(2+\sqrt{26-6\sqrt{17}}\Bigr)}{(30-6\sqrt{17})^{3/4}} \approx 2.01 \] which in turn gives \[ \mu_3(\xi^{(3)}_0) \geq \frac{5\Bigl(2+\sqrt{26-6\sqrt{17}}\Bigr)^2}{(30-6\sqrt{17})^{3/2}} \approx 4.04. \] \begin{remark} We can compare these estimates with the numerical values \[ \xi^{(2)}_0\approx 1.62,\quad \mu_2(\xi^{(2)}_0)\approx 2.64,\quad \xi^{(3)}_0\approx 2.16,\quad \text{and}\quad \mu_3(\xi^{(3)}_0)\approx 4.65. \] \end{remark} \subsection{Asymptotics of $u_1$} We end this section by giving an asymptotic formula for $u_1(\,\cdot\,;\xi)$ for large $\xi$. \begin{proposition}\label{prop:Asympu} For all $\alpha<1$ there exist $C_{\alpha}>0$ and $\Xi_0>0$ such that \begin{align} \label{eq:62} \Bigl|u_1(t,\xi) - \frac{1}{\sqrt{\pi}} \exp\bigl[ -(t-\xi)^2/2\bigr]\Bigr| \leq C_{\alpha} \exp(-\alpha \xi^2/2), \end{align} for all $t \in {\mathbb R}^+$ and all $\xi>\Xi_0$. \end{proposition} \begin{proof} Let $\phi$ be smooth, $\phi(t) = 0$ for $t\leq 1$, $\phi(t) = 1$ for $t\geq 2$ and define \begin{align} \label{eq:63} \tilde u(t) = \phi(t) \frac{1}{\sqrt{\pi}} \exp\big[ -(t-\xi)^2/2\big]. 
\end{align} An elementary calculation now yields (for $\xi>2$ and some constant $C>0$) \begin{align} \label{eq:64} \bigl\| [{\mathfrak h}(\xi) - 1 ]\tilde u \bigr\|^2 \leq C \xi^2 \exp\bigl(-(\xi-2)^2\bigr), \end{align} Using the lower bound on $\mu_2(\xi)$ and the spectral theorem this implies that \begin{align} \label{eq:70} |\mu_1(\xi) -1| \leq C \exp(-\alpha \xi^2/2), \end{align} and the existence of a (possibly non-normalized) ground state eigenfunction $u_1$ such that \begin{align} \label{eq:71} \| \tilde u - u_1 \|_2 \leq C \exp(-\alpha \xi^2/2). \end{align} One now obtains the similar estimate in $W^{1,2}(\mathbb{R}^+)$, from which the pointwise estimate follows. \end{proof} \section{Estimates on the non-linear problem}\label{sec:nonlinear} We now analyse the functional ${\mathcal F}_{z,\lambda}$ defined in~\eqref{eq:44}. \subsection{Preliminaries} We introduce the notation \begin{align}\label{eq:2} {\mathfrak I}(\lambda):= \{ \xi \in {\mathbb R} \,:\, \mu_1(\xi) < \lambda \}. \end{align} For future reference, we notice that if $\Theta_0 < \lambda <1$, then there exist $\xi_1(\lambda), \xi_2(\lambda)>0$ such that \begin{align}\label{eq:7} {\mathfrak I}(\lambda) = \bigl]\xi_1(\lambda), \xi_2(\lambda)\bigr[. \end{align} For $\lambda =1$ we have $ {\mathfrak I}(\lambda)=\left[0,\infty\right[$. \begin{theorem}\label{thm:Sammenkog}~ \begin{itemize} \item For all $z \in {\mathbb R}, \lambda >0,$ the functional ${\mathcal F}_{z,\lambda}$ admits a non-negative minimizer $f_{z,\lambda} \in {\mathcal B}^1({\mathbb R}^{+})$, which is non-trivial if and only if $\lambda > \mu_1(z)$. The minimizer $f_{z,\lambda}$ is a solution to the Euler-Lagrange equation~\eqref{eq:9b} and satisfies the bound \begin{equation}\label{eq:fone} \|f_{z,\lambda}\|_\infty \leq 1\,. \end{equation} Furthermore, minimizers are unique up to multiplication by a constant $c \in {\mathbb S}^1 \subset {\mathbb C}$. \item For all $\epsilon\in \left]0,1/2\right[$, $\lambda >0$ and $z\in {\mathfrak I}(\lambda)$, there exist constants $c_{\epsilon}, C_{\epsilon}>0$ such that \begin{align} \label{eq:46} c_{\epsilon} \exp\Big( -\Big[\frac{1}{2}+\epsilon\Big](t-z)^2\Big) \leq f_{z,\lambda}(t) \leq C_{\epsilon} \exp\Big( -\Big[\frac{1}{2}-\epsilon\Big](t-z)^2\Big). \end{align} \end{itemize} \end{theorem} \begin{proof} The first item in Theorem~\ref{thm:Sammenkog} is a slight improvement of known results (see~\cite[Proposition~14.2.1 and~14.2.2]{fohebook}), so we will only give brief indications of proof. For given $z$ and $\lambda$ the functional is clearly bounded from below, so the existence of minimizers is standard. Also, by differentiation of the absolute value, we see that minimizers can be chosen non-negative. The proof of the non-triviality statement is also straight-forward. The equation~\eqref{eq:9b} follows by variation around a minimum, and~\eqref{eq:fone} is a consequence of the maximum principle applied to~\eqref{eq:9b}. We finally consider the uniqueness question. Let $u$ be a minimizer and let $f = |u|$. By the Euler-Lagrange equation~\eqref{eq:9b} we see that \begin{align} {\mathfrak k}_{\lambda}(z ) f = \lambda f,\qquad {\mathfrak k}_{\lambda}(z ) u = \lambda u. \end{align} By Cauchy uniqueness, we therefore have $u = c f$ for some $c \in {\mathbb S}^1$. Therefore, to prove uniqueness it suffices to prove uniqueness of non-negative minimizers. The proof of this (which does not use any bound on the value of $\lambda$) is given in the proof of \cite[Proposition~14.2.2]{fohebook} and will not be repeated. 
The upper and lower bounds in~\eqref{eq:46} can both be proved using the following strategy, so we only consider the upper bound. We start from the equation for $f_{z,\lambda}$ in the form \begin{equation} \label{eq:56} f_{z,\lambda}''(t) = [ (t-z)^2 + \lambda f_{z,\lambda}^2(t) - \lambda] f_{z,\lambda}(t). \end{equation} Define, for $\alpha <1$, the function $g$ as $g(t) = C \exp(-\frac{\alpha}{2}(t-z)^2)$, for some constant $C>0$. Then \begin{equation} \label{eq:57} g''(t)= [\alpha^2 (t-z)^2 - \alpha] g(t). \end{equation} Choose $T>z$ so large that \begin{equation} \label{eq:58} 0<[\alpha^2 (t-z)^2 - \alpha] \leq [ (t-z)^2 + \lambda f_{z,\lambda}^2(t) - \lambda] , \end{equation} for all $t \geq T$. This is possible since $\alpha <1$. Choose $C>0$ in such a way that \begin{align} \label{eq:59} g(T) > f_{z,\lambda}(T). \end{align} Suppose that the inequality $g(t) \geq f_{z,\lambda}(t)$ fails for some $t > T$. Since both functions tend to $0$ at $+\infty$ (at least along some sequence, since $f \in L^2({\mathbb R}^+)$), we deduce that $u:= f- g$ has a positive maximum at some point $t_0>T$. Thus $u''(t_0) \leq 0$. But, for $t \geq T$, we have \begin{align} \label{eq:60} u''(t) &= [ (t-z)^2 + \lambda f_{z,\lambda}^2(t) - \lambda] f_{z,\lambda}(t)- [\alpha^2 (t-z)^2 - \alpha] g(t) \nonumber \\ &\geq [\alpha^2 (t-z)^2 - \alpha] u(t). \end{align} At $t_0$ this is strictly positive and we get a contradiction. \end{proof} By a continuity argument, we find \begin{proposition} For $0 < \lambda \leq 1$, the function \begin{align} \label{eq:45} {\mathbb R} \ni z \mapsto {\mathcal F}_{z,\lambda}(f_{z,\lambda}) \end{align} admits a minimum $\zeta(\lambda) >0$. \end{proposition} Notice that for $\lambda > 1$, the existence of a minimum is an open problem. \begin{proof} Only the case $\lambda =1$ needs some consideration. We will prove that the minimal energy in that case tends to $0$ as $z \to +\infty$. By continuity this implies the proposition. We calculate, for arbitrary $\phi \in {\mathcal B}^1({\mathbb R}^+)$ and $\alpha \in \left]0,1\right[$, and estimating (part of) the quadratic expression from below by the linear ground state energy \begin{align} \label{eq:47} {\mathcal F}_{z,1}(\phi) &\geq \int_0^{+\infty} \bigl[\alpha (t -z)^2 +(1-\alpha)\mu_1(z) - 1\bigr] |\phi|^2 + \frac{1}{2} |\phi|^4 \,dt\nonumber \\ &\geq \int_{\bigl\{|t-z|\leq \sqrt{[1 - (1-\alpha)\mu_1(z)]/\alpha} \bigr\}} \bigl[(1-\alpha)\mu_1(z) - 1\bigr] |\phi|^2 + \frac{1}{2} |\phi|^4\,dt\nonumber \\ &\geq - \bigl[(1-\alpha)\mu_1(z) - 1\bigr]^2 \sqrt{\frac{1 -(1-\alpha)\mu_1(z)}{\alpha}}\nonumber \\ &= - \bigl[1 -\mu_1(z)+\alpha\mu_1(z)\bigr]^2 \sqrt{\frac{1-\mu_1(z) + \alpha\mu_1(z)}{\alpha} } , \end{align} where the last inequality follows by completing the square. We choose $\alpha = \alpha(z) = 1-\mu_1(z) \rightarrow 0$ as $z \rightarrow +\infty$ to get the conclusion. \end{proof} We can now prove~\eqref{eq:61}. \begin{proof}[Proof of the second item in Theorem~\ref{thm:nuzeta}] Let $z\in {\mathbb R}$ and let $f_{z,\lambda}$ be a positive minimizer of ${\mathcal F}_{z,\lambda}$. Notice that $z$ and $\lambda$ will be fixed in the remainder of the proof. We therefore write $f$ instead of $f_{z,\lambda}$. We also denote by $\tilde{\lambda}_j(\nu)=\lambda_j(\nu,z)$ the eigenvalues of the operator in~\eqref{eq:61b}. We apply Temple's inequality (see~\cite{kato3}) with $u_1 := u_1(\,\cdot\,;\nu)$ as a test function. 
Under the condition that $\tilde{\lambda}_2(\nu)>A$, Temple's inequality says that \begin{equation}\label{eq:Temple1} \tilde{\lambda}_1(\nu)\geq A - \frac{B}{\tilde{\lambda}_2(\nu)-A}, \end{equation} where \[ A=\Big\langle u_1,\Big\{-\frac{d^2}{dt^2} + (t-\nu)^2 + \lambda f^2\Big\} u_1 \Big\rangle = \mu_1(\nu) + \lambda \| f u_1 \|_2^2 \] and \[ B = \Big\|\Big\{-\frac{d^2}{dt^2} + (t-\nu)^2 + \lambda f^2\Big\} u_1\Big\|_2^2 - A^2 = \lambda^2 \| f^2 u_1 \|_2^2 - \lambda^2 \| f u_1\|_2^4. \] Using the upper bound in~\eqref{eq:62} and~\eqref{eq:46}, $\| f u_1 \|_2 \rightarrow 0$ as $\nu \rightarrow \infty$. Since $\tilde{\lambda}_2(\nu) \geq \mu_2(\nu)$ we see that the condition $\tilde{\lambda}_2(\nu)>A$ is satisfied for large $\nu$'s, and there \begin{align} \label{eq:65} \tilde{\lambda}_1(\nu)\geq \mu_1(\nu) + \lambda \| f u_1 \|_2^2 - C \lambda^2 \| f^2 u_1 \|_2^2, \end{align} for some $C>0$ independent of $\nu$. Using the upper bounds in~\eqref{eq:62} and~\eqref{eq:46}, we get for all $0<\alpha < 1$, and large $\nu$, \begin{align} \label{eq:66} \| f^2 u_1 \|_2^2 &\leq C \exp(-\alpha \nu^2) + C \int_{-\infty}^{+\infty} \exp(-2\alpha(t-z)^2)\exp(-\alpha(t-\nu)^2)\,dt \nonumber\\ &\leq C \exp(-\alpha \nu^2) + C'\exp(-2\alpha' \nu^2/3), \end{align} where $\alpha' < \alpha$ is arbitrary. Without striving for optimality, we make the simple estimate \begin{align} \label{eq:67} \| f u_1 \|_2^2 \geq \int_{\nu/2 - 1}^{\nu/2+1} f^2 u_1^2\,dt. \end{align} In this interval of integration it follows from~\eqref{eq:62} that $u_1^2 \geq C \exp\bigl(-(\nu/2 +1)^2\bigr)$ and from~\eqref{eq:46} that $f^2 \geq C \exp(-\beta \nu^2/4)$ for any $\beta >1$. Inserting in the integral yields, for any $\beta'>1$, \begin{align} \label{eq:68} \| f u_1 \|_2^2 \geq C \exp( - \beta' \nu^2/2). \end{align} Combining~\eqref{eq:65},~\eqref{eq:66},~\eqref{eq:68} and the asymptotics of $\mu_1$ from~\eqref{eq:muonebohe} gives that \begin{align} \label{eq:69} \tilde{\lambda}_1(\nu) > 1, \end{align} for large $\nu$, which is~\eqref{eq:61}. To prove that $\tilde{\lambda}_1(\nu) \rightarrow 1$, we use the variational principle with $u_1=u_1(\,\cdot\,;\nu)$ as a test function. Notice that by the lower bound just established, we only need to prove an upper bound with limit $1$ at infinity. The variational principle gives \begin{align} \tilde{\lambda}_1(\nu) \leq \mu_1(\nu) + \lambda \| f u_1 \|_2^2. \end{align} Since we have seen above that $\| f u_1 \|_2 \rightarrow 0$ and $\mu_1(\nu) \rightarrow 1$ in the large $\nu$ limit, this implies the upper bound required. \end{proof} \subsection{A virial-type result} The function $f_{\zeta,\lambda}$ satisfies the Euler-Lagrange equation~\eqref{eq:9b}. Since, $\zeta = \zeta(\lambda)$ is a minimum for the non-linear energy, we get \begin{align} \label{eq:36} \int_0^{+\infty} (t-\zeta) f_{\zeta,\lambda}^2 \,dt = 0. \end{align} In particular it holds that $\zeta(\lambda)>0$. Moreover, multiplying~\eqref{eq:9b} by $f_{\zeta,\lambda}$ and integrating, we obtain \begin{align} \label{eq:10b} \| f_{\zeta,\lambda}' \|_2^2 + \|(t- \zeta) f_{\zeta,\lambda} \|_2^2 + \lambda \| f_{\zeta,\lambda} \|_4^4 = \lambda \|f_{\zeta,\lambda} \|_2^2\,. \end{align} \begin{lemma} Assume that $\Theta_0\leq\lambda\leq1$ and that $(\zeta,f_{\zeta,\lambda})$ is a minimizer of the functional~\eqref{eq:44}. 
Then \begin{align} \|f'_{\zeta(\lambda),\lambda}\|_2^2 - \|(t - \zeta(\lambda))f_{\zeta(\lambda),\lambda}\|_2^2 + \frac \lambda 4 \|f_{\zeta(\lambda),\lambda}\|_4^4 &=0\,,\label{eq:Hzero}\\ 2 \|f'_{\zeta(\lambda),\lambda}\|_2^2 + \frac{5\lambda}{4} \| f_{\zeta(\lambda),\lambda}\|_4^4 &= \lambda \|f_{\zeta(\lambda),\lambda}\|_2^2\,,\label{eq:Hone}\\ \intertext{ and } 2 \| (t - \zeta(\lambda))f_{\zeta(\lambda),\lambda}\|_2^2 + \frac {3\lambda}{ 4 }\| f_{\zeta(\lambda),\lambda}\|_4^4 &= \lambda \|f_{\zeta(\lambda),\lambda}\|_2^2\,.\label{eq:Htwo} \end{align} \end{lemma} \begin{proof} By a change of variable and of function in the functional ${\mathcal F}_{z,\lambda}$ we get a rescaled functional \[ \phi \mapsto \int_0^{+\infty} \rho^2 |\phi'(t)|^2 + \Bigl(\frac t \rho - \zeta\Bigr)^2 |\phi(t)|^2 + \frac {\lambda \rho}{ 2 } |\phi(t)|^4 - \lambda |\phi(t)|^2 \,dt \] with same infimum. Expressing that the infimum is independent of $\rho$, we obtain (using~\eqref{eq:36}) at $\rho=1$ and $\zeta =\zeta(\lambda)$, the identity~\eqref{eq:Hzero}. Combining with~\eqref{eq:10b} we also get~\eqref{eq:Hone} and~\eqref{eq:Htwo}. \end{proof} \subsection{Different bounds on $f_{\zeta,\lambda}$} \begin{proposition}\label{Prop:UnifNormsf} Assume that $\Theta_0\leq\lambda\leq 1$ and let $(\zeta,f_{\zeta,\lambda})$ be a minimum of the function $(z,f) \mapsto {\mathcal F}_{z,\lambda}(f)$ with ${\mathcal F}$ defined in~\eqref{eq:44}. Then \begin{equation}\label{eq:fzero} f_{\zeta,\lambda}(0)^2 = \frac{2}{\lambda}\bigl(\lambda-\zeta^2\bigr). \end{equation} Furthermore, \begin{equation}\label{eq:41} 2(\lambda-\zeta^2) \leq \lambda \| f_{\zeta,\lambda} \|_{\infty}^2 \leq \frac{9}{2^{4/3}}\zeta^{2/3}\lambda^{1/3} \biggl(\frac{1}{2} -\frac{5(\lambda-\Theta_0)}{12\zeta^{1/2} \lambda\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr)^{1/3} (\lambda-\mu_1(\zeta)) \end{equation} and \begin{equation}\label{eq:42} \frac{\lambda-\Theta_0}{\|u_1(\,\cdot\,:\xi_0)\|_4^2} \leq \lambda \| f_{\zeta,\lambda} \|_4^2 \leq \frac{3}{2} \zeta^{1/2} (\lambda - \mu_1(\zeta)). \end{equation} \end{proposition} \begin{remark} A numerical calculation yields the approximate value $\|u_1(\,\cdot\,;\xi_0)\|_4^4 \approx 0.584$. One can also get a lower bound to $\|u_1(\,\cdot\,;\xi_0)\|_4^4$ using~\eqref{eq:42}: We have \begin{displaymath} \|u_1(\,\cdot\,;\xi_0)\|_4^4 \geq \frac{4}{9}\lim_{\lambda \rightarrow \Theta_0} \frac{(\lambda-\Theta_0)^2} {\zeta(\lambda)\bigl(\lambda - \mu_1(\zeta(\lambda))\bigr)^2} =\frac{4}{9\xi_0}\approx 0.579. \end{displaymath} \end{remark} \begin{proof} The lower bound in~\eqref{eq:41} is an easy consequence of~\eqref{eq:fzero}. Both are proved in~\cite{pan2}. We reproduce the short proof for the sake of completeness. Indeed, define the function \[ H(t)=f_{\zeta,\lambda}'(t)^2 - (t-\zeta)^2f_{\zeta,\lambda}(t)^2 +\lambda f_{\zeta,\lambda}(t)^2 -\frac{\lambda}{2}f_{\zeta,\lambda}(t)^4. \] A calculation, using~\eqref{eq:9b} shows that $H'(t)=-2(t-\zeta)f_{\zeta,\lambda}(t)^2$. By exponential decay it also holds that $\lim_{t\to\infty}H(t)=0$. Hence, by~\eqref{eq:36} we have that $H(0)=-\int_0^\infty H'(t)\,dt=0$. On the other hand we also have $H(0)=(\lambda-\zeta^2)f_{\zeta,\lambda}(0)^2 -\frac{\lambda}{2}f_{\zeta,\lambda}(0)^4$. Since $f_{\zeta,\lambda}(0)\neq 0$, we get the equality in~\eqref{eq:fzero}. We continue with the lower bound in~\eqref{eq:42}. 
By definition we have \begin{align} \label{eq:54} - \frac{\lambda}{2} \| f_{\zeta,\lambda} \|_4^4 = {\mathcal F}_{\zeta,\lambda}[f_{\zeta,\lambda}] = \inf_{z \in {\mathbb R}, \phi \in B^1} {\mathcal F}_{z,\lambda}[\phi]. \end{align} We insert the trial state $z= \xi_0$, $\phi = \rho u_1(\,\cdot\,;\xi_0)$, with $\rho = \sqrt{(\lambda-\Theta_0)/[\lambda \| u_1(\,\cdot\,;\xi_0)\|_4^4]}$, in~\eqref{eq:54}. This yields, \begin{align} \label{eq:55} - \frac{\lambda}{2} \| f_{\zeta,\lambda} \|_4^4 \leq - \frac{\lambda}{2} \frac{(\lambda-\Theta_0)^2}{\lambda^2 \| u_1(\,\cdot\,;\xi_0)\|_4^4}. \end{align} This finishes the proof of the lower bound in~\eqref{eq:42}. Finally, we turn to the upper bounds. Using the variational characterization of $\mu_1(\zeta),$ equation~\eqref{eq:10b} implies that \begin{equation}\label{eq:10badd} \lambda \| f_{\zeta,\lambda} \|_4^4 \leq (\lambda-\mu_1(\zeta)) \| f_{\zeta,\lambda}\|_2^2\,. \end{equation} We estimate, using~\eqref{eq:36}, and for $\alpha>1$ (recall that $\zeta >0$), \begin{align} \label{eq:37} \| f_{\zeta,\lambda}\|_2^2 &\leq \int_0^{\alpha \zeta} |f_{\zeta,\lambda}|^2\,dt + \frac{1}{\zeta(\alpha-1)} \int_{\alpha\zeta}^{+\infty} (t-\zeta) |f_{\zeta,\lambda}|^2\,dt \nonumber \\ & = \int_0^{\alpha \zeta} \frac{\alpha \zeta -t}{\zeta(\alpha-1)} |f_{\zeta,\lambda}|^2\,dt\nonumber \\ &\leq \zeta^{1/2} \sqrt{\frac{\alpha^3}{3(\alpha-1)^2}} \|f_{\zeta,\lambda}\|_4^2. \end{align} We choose the optimal $\alpha = 3$ and implement~\eqref{eq:10badd} to get \begin{equation}\label{eq:38} \| f_{\zeta,\lambda}\|_2^2 \leq \frac{3}{2}\zeta^{1/2} \|f_{\zeta,\lambda}\|_4^2 \leq \frac{3}{2} \zeta^{1/2} \sqrt{\frac{\lambda - \mu_1(\zeta)}{\lambda}}\| f_{\zeta,\lambda}\|_2, \end{equation} i.e. \begin{align} \label{eq:39} \| f_{\zeta,\lambda}\|_2 \leq \frac{3}{2}\, \zeta^{1/2}\, \sqrt{\frac{\lambda - \mu_1(\zeta)}{\lambda}}. \end{align} Combining~\eqref{eq:10badd} and~\eqref{eq:39} yields the upper bound~\eqref{eq:42}. One easily obtains \begin{align}\label{eq:35} f_{\zeta,\lambda}(t)^3 = - \int_t^{+\infty} (f_{\zeta,\lambda}^3)'(\tau) \,d\tau \leq 3 \| f_{\zeta,\lambda}\|_4^2 \|f_{\zeta,\lambda}'\|_2\,. 
\end{align} From~\eqref{eq:Hone},~\eqref{eq:55} and~\eqref{eq:38} we have \begin{align} \|f_{\zeta,\lambda}'\|_2^2 & = \lambda\biggl(\frac{1}{2}\|f_{\zeta,\lambda}\|_2^2 - \frac{5}{8}\|f_{\zeta,\lambda}\|_4^4\biggr)\\ & \leq \lambda\|f_{\zeta,\lambda}\|_2^2 \biggl(\frac{1}{2}-\frac{5}{12\zeta^{1/2}} \|f_{\zeta,\lambda}\|_4^2\biggr)\\ & \leq \lambda\|f_{\zeta,\lambda}\|_2^2 \biggl(\frac{1}{2} -\frac{5(\lambda-\Theta_0)}{12\zeta^{1/2} \lambda\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr), \end{align} which combined with~\eqref{eq:10badd},~\eqref{eq:39} and~\eqref{eq:35} implies \begin{multline} \label{eq:40} \lambda \| f_{\zeta,\lambda} \|_{\infty}^2 \leq \lambda \big\{ 3\|f_{\zeta,\lambda}\|_4^2 \|f_{\zeta,\lambda}'\|_2\big\}^{2/3}\\ \leq \lambda\Biggl\{3\frac{1}{\sqrt{\lambda}}(\lambda-\mu_1(\zeta))^{1/2} \|f_{\zeta,\lambda}\|_2 \sqrt{\lambda}\|f_{\zeta,\lambda}\|_2\biggl(\frac{1}{2} -\frac{5(\lambda-\Theta_0)}{12\zeta^{1/2} \lambda\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr)^{1/2}\Biggr\}^{2/3}\\ \leq \lambda \Biggl\{3(\lambda-\mu_1(\zeta))^{1/2} \frac{9}{4}\zeta\frac{1}{\lambda}(\lambda-\mu_1(\zeta)) \biggl(\frac{1}{2} -\frac{5(\lambda-\Theta_0)}{12\zeta^{1/2} \lambda\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr)^{1/2} \Biggr\}^{2/3}\\ \leq \frac{9}{2^{4/3}}\zeta^{2/3}\lambda^{1/3} \biggl(\frac{1}{2} -\frac{5(\lambda-\Theta_0)}{12\zeta^{1/2} \lambda\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr)^{1/3} (\lambda-\mu_1(\zeta)). \end{multline} \end{proof} \subsection{Bounds on $\zeta(\lambda)$} It follows from Theorem~\ref{thm:Sammenkog} that $\zeta(\lambda) \in {\mathfrak I}(\lambda)$. These bounds on $\zeta$ can be sharpened considerably. \begin{lemma}\label{lem:intervals} Let $\Theta_0 < \lambda \leq 1$. It holds that \begin{equation}\label{eq:zetabounds} \sqrt{\lambda/2}\leq \zeta(\lambda)\leq \sqrt{\lambda}. \end{equation} \end{lemma} \begin{proof} From~\eqref{eq:fzero} we find that $\zeta^2<\lambda$. Moreover, combining the bound $\|f_{\zeta,\lambda}\|_\infty\leq 1$ from~\eqref{eq:fone} with the lower bound in~\eqref{eq:41}, we easily obtain $\zeta(\lambda)\geq \sqrt{\lambda/2}$. \end{proof} \begin{remark}\label{rem:zetabound} The lower bound in Lemma~\ref{lem:intervals} can be improved using both the lower and upper bounds in~\eqref{eq:41}, see Figure~\ref{fig:zetabound}. \begin{figure} \caption{Different bounds on $\zeta(\lambda)$, obtained from Lemma~\ref{lem:intervals} and from the bounds in~\eqref{eq:41}.} \label{fig:zetabound} \end{figure} \end{remark} \section{The analysis of ${\mathfrak k}_{\lambda}(\nu)$}\label{sec:mainop} \subsection{Starting point} Recall the operator ${\mathfrak k}_{\lambda}(\nu)$ with associated eigenvalues $\{ \lambda_{j}(\nu)\}$ defined in~\eqref{eq:1b}. For brevity, we write $f$ instead of $f_{\zeta(\lambda),\lambda}$ and $\zeta$ instead of $\zeta(\lambda)$ in this section. From the sign of the perturbation and Proposition~\ref{Prop:UnifNormsf} we get: \begin{proposition} Let $\Theta_0\leq \lambda\leq 1$. We have the following estimates on the eigenvalues of ${\mathfrak k}_{\lambda}(\nu)$: \begin{equation}\label{eq:45-1} \mu_j(\nu) \leq \lambda_j(\nu) \leq \mu_j(\nu) + \frac{9}{2^{4/3}}\zeta^{2/3} \lambda^{1/3} \biggl(\frac{1}{2} -\frac{5(\lambda-\Theta_0)}{12\zeta^{1/2} \lambda\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr)^{1/3} (\lambda - \mu_1(\zeta)), \end{equation} and \begin{equation}\label{eq:46-1} \mu_1(\nu) \leq \lambda_1(\nu) \leq \mu_1(\nu) + \frac{3^{3/4}}{2^{1/2}} \zeta^{1/2} (\lambda - \mu_1(\zeta)) \bigl(\mu_1(\nu)/2-\nu\mu_1'(\nu)/4\bigr)^{1/4}.
\end{equation} \end{proposition} \begin{proof} The estimate~\eqref{eq:45-1} is an immediate consequence of~\eqref{eq:41}. To show the second estimate~\eqref{eq:46-1}, we notice that \begin{equation*} \lambda_1(\nu) \leq \langle u_1 , {\mathfrak k}_{\lambda}(\nu) u_1 \rangle = \mu_1(\nu) + \lambda \| f u_1 \|_2^2 \leq \mu_1(\nu)+ \lambda \| f \|_4^2 \|u_1 \|_4^2, \end{equation*} and \begin{equation}\label{eq:nagy} \|u_1 \|_4^2 \leq \frac{2^{1/2}}{3^{1/4}}\| u_1 \|_2^{3/2} \| u_1' \|_2^{1/2} \leq \frac{2^{1/2}}{3^{1/4}} \bigl(\mu_1(\nu)/2-\nu\mu_1'(\nu)/4\bigr)^{1/4}. \end{equation} The first inequality in~\eqref{eq:nagy} is due to Nagy~\cite{na}, while the second one follows from~\eqref{eq:2a}. The upper bound in~\eqref{eq:46-1} now follows from the upper bound in~\eqref{eq:42}. \end{proof} \begin{lemma} If $\nu\not\in \mathfrak{I}(\lambda)$ then $\lambda_1(\nu) \geq \lambda$. \end{lemma} \begin{proof} If $\nu\not\in\mathfrak{I}(\lambda)$ then, by~\eqref{eq:45-1}, we get $\lambda_1(\nu)\geq \mu_1(\nu)\geq \lambda$. \end{proof} We continue with some identities. \begin{proposition} Suppose that $\nu_0$ is a stationary point for $\lambda_1$, i.e. \begin{align}\label{eq:2bis} \lambda_1'(\nu_0) = 0\,. \end{align} Then we have the following identities: \begin{gather} \{ \lambda_1(\nu_0) - \nu_0^2 -\lambda f^2(0) \} v^2_1(0;\nu_0) = 2\lambda \int_0^{+\infty} v^2_1(t;\nu_0) f(t) f'(t)\,dt\,,\label{eq:3b}\\ \int_0^{+\infty} (t- \nu_0) v^2_1(t;\nu_0) \,dt=0\,,\label{eq:3afh}\\ \| (t-\nu_0) v_1(\,\cdot\,;\nu_0)\|_2^2 + \lambda \int_0^{+\infty} t v^2_1(t;\nu_0) f(t) f'(t)\,dt = \| v_1'(\,\cdot\,;\nu_0)\|_2^2\,, \label{eq:4b}\\ \| v_1'(\,\cdot\,;\nu_0)\|_2^2 + \| (t-\nu_0) v_1(\,\cdot\,;\nu_0)\|_2^2 + \lambda \| f \, v_1(\,\cdot\,;\nu_0) \|_2 ^2=\lambda_1(\nu_0)\,. \label{eq:5b} \end{gather} \end{proposition} \begin{proof} Equation~\eqref{eq:3b} is a Dauge-Helffer type formula,~\eqref{eq:3afh} is the Feynman-Hell\-mann formula,~\eqref{eq:4b} follows by the virial theorem, and~\eqref{eq:5b} is just the energy equation. \end{proof} \begin{corollary} If $0<\zeta<\nu_0$, $\lambda_1'(\nu_0)=0$ and $\int_0^{+\infty} v^2_1(t;\nu_0) f(t) f'(t)\,dt\geq 0$ then $\lambda_1(\nu_0) > \lambda$. \end{corollary} \begin{proof} From~\eqref{eq:3b} and~\eqref{eq:fzero} we get \begin{equation*} \lambda_1(\nu_0) \geq \lambda f(0)^2+\nu_0^2 = \lambda + (\lambda-\zeta^2)+(\nu_0^2-\zeta^2) > \lambda, \end{equation*} since $\lambda\geq \zeta^2$ by~\eqref{eq:zetabounds} and $\nu_0^2>\zeta^2$ by the assumption. \end{proof} \begin{remark} From Theorem~\ref{thm:largenu} we notice that it is enough to consider $\nu_0>1.33$, and so the condition on $\nu_0$ and $\zeta$ is not restrictive since $\zeta<1$. It is also worth noticing that if $\int_0^{+\infty} v^2_1(t;\nu_0) f(t) f'(t)\,dt < 0$ then also \begin{equation*} \int_0^{+\infty} t v^2_1(t;\nu_0) f(t) f'(t)\,dt < 0, \end{equation*} since there exists a $t_0$ such that $f'(t)$ is positive for $t\in\left]0,t_0\right[$ and negative for $t\in\left]t_0,\infty\right[$, see~\cite{pan2}. \end{remark} \subsection{Lower bound on $\lambda_1(\nu)$} \begin{lemma} If $\lambda_2(\nu)>\lambda+(\nu-\zeta)^2$ then it holds that \begin{equation}\label{eq:temple} \lambda_1(\nu)\geq \lambda + (\nu-\zeta)^2 \biggl[1 - \frac{4\|(t-\zeta)f\|_2^2} {\bigl(\lambda_2(\nu)-\lambda-(\nu-\zeta)^2\bigr)\|f\|_2^2}\biggr].
\end{equation} \end{lemma} \begin{proof} The Temple inequality (see~\cite{kato3}), applied with $f/\|f\|_2$ as trial state, implies that if $\lambda_2(\nu)>A$ then \begin{equation}\label{eq:Temple} \lambda_1(\nu)\geq A - \frac{B}{\lambda_2(\nu)-A}, \end{equation} where \[ A=\frac{\langle f,{\mathfrak k}_{\lambda}(\nu)f \rangle}{\|f\|_2^2} = \lambda + (\nu-\zeta)^2 \] and \[ B = \frac{\langle f, ({\mathfrak k}_{\lambda}(\nu)-A)^2 f\rangle}{\|f\|_2^2} = \frac{\langle f, {\mathfrak k}_{\lambda}(\nu)^2 f\rangle}{\|f\|_2^2} - A^2. \] Using that ${\mathfrak k}_{\lambda}(\zeta)f=\lambda f$, we find that \[ {\mathfrak k}_{\lambda}(\nu)f = \lambda f - 2(\nu-\zeta)(t-\zeta)f + (\nu-\zeta)^2 f, \] and so, since the cross term vanishes by~\eqref{eq:36}, \[ \|{\mathfrak k}_{\lambda}(\nu)f\|_2^2 = \bigl(\lambda+(\nu-\zeta)^2\bigr)^2\|f\|_2^2 + 4(\nu-\zeta)^2\|(t-\zeta)f\|_2^2. \] We conclude that \[ B = 4(\nu-\zeta)^2\frac{\|(t-\zeta)f\|_2^2}{\|f\|_2^2}. \] Inserting these expressions for $A$ and $B$ into~\eqref{eq:Temple} yields~\eqref{eq:temple}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:nuzeta}] We only consider (1), since the second item has already been established. Using~\eqref{eq:Htwo} together with the lower bounds on $\|f\|_4$ from~\eqref{eq:38} and~\eqref{eq:42}, we first get \begin{equation} \begin{aligned} 2\|(t-\zeta)f\|_2^2 &= \lambda\|f\|_2^2-\frac{3\lambda}{4}\|f\|_4^4\\ & \leq \lambda\|f\|_2^2\Bigl(1-\frac{1}{2\zeta^{1/2}}\|f\|_4^2\Bigr)\\ & \leq \lambda\|f\|_2^2 \biggl(1-\frac{\lambda-\Theta_0} {2\lambda\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr). \end{aligned} \end{equation} Inserting this into~\eqref{eq:temple} and using the simple inequality $\lambda_2(\nu)\geq\mu_2(\nu)$, we obtain \begin{equation}\label{eq:nuzetabracket} \lambda_1(\nu)\geq \lambda+(\nu-\zeta)^2 \Biggl[ \frac{\mu_2(\nu)-\bigl(3\lambda- \frac{\lambda-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2} \bigr) -(\nu-\zeta)^2}{\lambda_2(\nu)-\lambda-(\nu-\zeta)^2} \Biggr]. \end{equation} By continuity it suffices to verify that \begin{align} \label{eq:Numerator} \mu_2(\zeta) - \biggl(3\lambda- \frac{\lambda-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2} \biggr) > 0, \end{align} and \begin{align} \lambda_2(\zeta)-\lambda>0. \end{align} The last inequality is trivially satisfied since $\lambda_2\geq \mu_2$ and $\mu_2$ satisfies the lower bound~\eqref{eq:mutwoopt}. Thus we only have to consider~\eqref{eq:Numerator}. Notice that the expression in parentheses in~\eqref{eq:Numerator} is strictly less than $3$. Since $\mu_2$ is decreasing on $\left[0,1\right]$ and $\mu_2(1)=3$, this finishes the proof. \end{proof} Define the set ${\mathcal X}(\lambda) \subset {\mathfrak I} (\lambda)$ of possible values of $\zeta$, i.e. \begin{align} {\mathcal X}(\lambda) := \{ \zeta \in {\mathbb R} \,:\, \text{ the function } {\mathbb R}\ni z \mapsto {\mathcal F}_{z,\lambda}(f_{z,\lambda}) \text{ has a minimum at } \zeta\}. \end{align} By Lemma~\ref{lem:intervals} we have ${\mathcal X}(\lambda) \subset \bigl[\sqrt{\lambda/2}, \sqrt{\lambda}\,\bigr]$, but from Figure~\ref{fig:zetabound} it actually follows that \begin{equation}\label{eq:bestzeta} {\mathcal X}(\lambda) \subset \bigl[\xi_0, \sqrt{\lambda}\,\bigr]. \end{equation} We can summarize the result~\eqref{eq:nuzetabracket} of Temple's inequality as follows. \begin{proposition}\label{prop:numden} Let $\Theta_0 \leq \lambda \leq 1$.
Assume that \begin{equation}\label{eq:nume} \mu_2(\nu)-\biggl(3\lambda- \frac{\lambda-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2} \biggr) -(\nu-\zeta)^2 \geq 0, \end{equation} and \begin{equation} \label{eq:72} \mu_2(\nu)-\lambda-(\nu-\zeta)^2 > 0 \end{equation} for all $\zeta\in \mathcal{X}(\lambda)$ and $\nu\in\mathfrak{I}(\lambda)$. Then $ \lambda_1(\nu) \geq \lambda$ for all $\nu \in {\mathfrak I} (\lambda)$. \end{proposition} \begin{proof}[Proof of Theorem~\ref{thm:largenu}] We will use Proposition~\ref{prop:numden}. We start by verifying~\eqref{eq:72}. To prove (i) we need only consider $0\leq\nu\leq 1.33$, and to prove (ii) it suffices to consider $0\leq\nu\leq 1.5$, since the right endpoint of the interval $\mathfrak{I}(0.8)$ is less than $1.5$ (solving the equation $\mu_1(\nu)=0.8$ gives a numerical value $\nu\approx 1.496$). The inequality~\eqref{eq:72} holds for all $0\leq\nu\leq 1.5$, $\zeta\in\mathcal{X}(\lambda)$ and $\Theta_0\leq\lambda\leq 1$. Indeed, $(\nu-\zeta)^2<1$ by~\eqref{eq:bestzeta} and $\mu_2(\nu)\geq 2.18$ by~\eqref{eq:mutwoopt}. We now consider~\eqref{eq:nume}. If $0\leq\nu\leq\zeta$, $\xi_0\leq \zeta\leq1$ and $\Theta_0\leq\lambda\leq 1$ then \begin{equation}\label{eq:nulessze} \mu_2(\nu)-3\lambda-(\nu-\zeta)^2 \geq \mu_2(\nu)-3-(\nu-1)^2. \end{equation} From Figure~\ref{fig:mu12prime} it is clear that $\mu_2'(\nu)<2(\nu-1)$ on $0\leq\nu\leq1$. Hence, the function $\nu\mapsto \mu_2(\nu)-3-(\nu-1)^2$ is decreasing on this interval. Since $\mu_2(1)=3$, we find that the right-hand side of~\eqref{eq:nulessze} is bounded from below by $0$, and it follows that~\eqref{eq:nume} holds for $\nu\leq\zeta$. To complete the proof of (i) it is sufficient to show that the inequality~\eqref{eq:nume} holds for $\xi_0\leq\zeta\leq \sqrt{\lambda}$, $\zeta<\nu\leq 1.33$ and $\Theta_0\leq\lambda\leq 1$. From Figure~\ref{fig:mu1andmu2} we note that $\mu_2$ is decreasing for these values of $\nu$, and so, since $\nu>\zeta$, it follows that the left-hand side in~\eqref{eq:nume} is decreasing as a function of $\nu$. Hence we get a lower bound by replacing $\nu$ by the right endpoint $1.33$. Moreover, $3-1/\bigl(\xi_0^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2\bigr)\approx 1.5>0$, so we also get a lower bound if we replace $\lambda$ by $1$, i.e. \begin{multline}\label{eq:nuzetai} \mu_2(\nu)-\biggl(3\lambda- \frac{\lambda-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2} \biggr) -(\nu-\zeta)^2\\ \geq \mu_2(1.33)- \biggl(3- \frac{1-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr) -(1.33-\zeta)^2. \end{multline} Differentiating the right-hand side of~\eqref{eq:nuzetai} with respect to $\zeta$ and estimating on $\xi_0\leq\zeta\leq 1$, we find \begin{gather*} \frac{d}{d\zeta}\biggl[ \mu_2(1.33)- \biggl(3- \frac{1-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr) -(1.33-\zeta)^2\biggr]\\ \begin{aligned} &=-\frac{1-\Theta_0}{2\zeta^{3/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}+2(1.33-\zeta)\\ &\geq -\frac{1-\Theta_0}{2\xi_0^{3/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}+2(1.33-1)\\ &\approx 0.26. \end{aligned} \end{gather*} Thus, we get a lower bound on the right-hand side of~\eqref{eq:nuzetai} by inserting the left endpoint $\zeta=\xi_0$. The lower bound is \[ \mu_2(1.33)- \biggl(3- \frac{1-\Theta_0}{\xi_0^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr) -(1.33-\xi_0)^2\approx 0.01. \] This finishes the proof of (i). We continue with (ii).
It is sufficient to show that the inequality~\eqref{eq:nume} holds for $\xi_0\leq\zeta\leq \sqrt{\lambda}$, $\zeta<\nu\leq 1.5$ and $\Theta_0\leq \lambda\leq 0.8$, where the endpoint $1.5$ is chosen to be slightly larger than the right endpoint of the interval $\mathfrak{I}(0.8)$. Again $\mu_2$ is decreasing for these values of $\nu$, and so since $\nu>\zeta$ it follows that the left-hand side in~\eqref{eq:nume} is decreasing as a function of $\nu$. Hence we get a lower bound replacing $\nu$ by $1.5$. In the same way as for (i) we also get a lower bound if we replace $\lambda$ by the right endpoint $0.8$, i.e. \begin{multline}\label{eq:nuzetaii} \mu_2(\nu)-\biggl(3\lambda- \frac{\lambda-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2} \biggr) -(\nu-\zeta)^2\\ \geq \mu_2(1.5)- \biggl(3\times 0.8 - \frac{0.8-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr) -(1.5-\zeta)^2. \end{multline} We differentiate the right-hand side of~\eqref{eq:nuzetaii}, and estimate for $\xi_0\leq\zeta\leq\sqrt{0.8}$, to find \begin{gather} \frac{d}{d\zeta}\biggl[ \mu_2(1.5)- \biggl(3\times 0.8 - \frac{0.8-\Theta_0}{\zeta^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr) -(1.5-\zeta)^2 \biggr]\\ \begin{aligned} &= -\frac{0.8-\Theta_0}{2\zeta^{3/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}+2(1.5-\zeta)\\ &\geq -\frac{0.8-\Theta_0}{2\xi_0^{3/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2} +2(1.5-\sqrt{0.8})\\ &\approx 1.0. \end{aligned} \end{gather} Hence, we get a lower bound of the right-hand side of~\eqref{eq:nuzetaii} by inserting the left endpoint $\zeta=\xi_0$. The lower bound we get is \[ \mu_2(1.5)- \biggl(3\times 0.8 - \frac{0.8-\Theta_0}{\xi_0^{1/2}\|u_1(\,\cdot\,;\xi_0)\|_4^2}\biggr) -(1.5-\xi_0)^2\approx 0.026. \] This finishes the proof of (ii). \end{proof} \appendix \section{Comments on the numerical calculations}\label{sec:numerical} We give some details on how the numerical calculations were done. The solutions to the eigenvalue equation $\mathfrak{h}(\xi)u=\mu(\xi) u$, not taking the Neumann boundary condition into account, are given by \begin{equation}\label{eq:usol} u(t)= c_1 e^{-\frac12(t-\xi)^2}H_{\frac12(\mu(\xi)-1)}(t-\xi) + c_2 e^{\frac12(t-\xi)^2}H_{-\frac12(\mu(\xi)+1)}(i(t-\xi)). \end{equation} Here, $H_\nu(t)$ solves the Hermite equation (see Section~10.13 in~\cite{erd2}) \begin{equation*} -y''(t)+2ty'(t)-2\nu y(t)=0, \end{equation*} and is polynomially bounded at infinity. Hence, for the function $u$ in~\eqref{eq:usol} to be square integrable, we must set $c_2=0$. Using the well-known relations for the derivative of $H_\nu$, $\frac{d}{dt}H_\nu(t)=2\nu H_{\nu-1}(t)$, we find that the Neumann condition $u'(0)=0$ reads \begin{equation}\label{eq:muxi} (\mu(\xi)-1)H_{\frac12(\mu(\xi)-3)}(-\xi)+\xi H_{\frac12(\mu(\xi)-1)}(-\xi)=0. \end{equation} Hence, for $\xi\in\mathbb{R}$, the $j$th eigenvalue $\mu_j(\xi)$ of the operator $\mathfrak{h}(\xi)$ is given by the $j$th (positive) solution $\mu(\xi)$ of~\eqref{eq:muxi}. To obtain an equation for $\mu_j'(\xi)$ we differentiate~\eqref{eq:muxi} implicitly. We use the software Mathematica from Wolfram Research (who claims that Mathematica is able to calculate these special functions to any given precision\footnote{See \url{http://reference.wolfram.com/mathematica/ref/HermiteH.html}.}) to solve these equations numerically and draw the plots. By inserting~\eqref{eq:5} into~\eqref{eq:muxi} we are also able to calculate the constant $\Theta_0$ to any precision (see also Remark~A.6 in~\cite{fope}). 
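For readers without access to Mathematica, the same computation can be reproduced with open-source tools. The following minimal Python sketch (it is not the code actually used for the figures, and the initial guesses for the roots are heuristic) evaluates the left-hand side of~\eqref{eq:muxi} with the \texttt{mpmath} library, whose \texttt{hermite} function accepts non-integer order, and approximates $\Theta_0$ by locating the minimum of $\mu_1$ numerically rather than via~\eqref{eq:5}.
\begin{verbatim}
# Sketch only: solve the Neumann condition (eq:muxi) numerically with mpmath.
from mpmath import mp, hermite, findroot, diff

mp.dps = 25  # working precision (decimal digits)

def neumann(mu, xi):
    # Left-hand side of (eq:muxi): (mu-1) H_{(mu-3)/2}(-xi) + xi H_{(mu-1)/2}(-xi)
    return (mu - 1) * hermite((mu - 3) / 2, -xi) + xi * hermite((mu - 1) / 2, -xi)

def mu_root(xi, guess):
    # An eigenvalue mu_j(xi) of h(xi): root of the Neumann condition near 'guess'
    return findroot(lambda m: neumann(m, xi), guess)

xi = mp.mpf('0.77')
print('mu_1 ~', mu_root(xi, 0.6), '   mu_2 ~', mu_root(xi, 2.5))

# Theta_0 = min_xi mu_1(xi): locate the critical point of mu_1 numerically
xi0 = findroot(lambda s: diff(lambda t: mu_root(t, 0.6), s), mp.mpf('0.77'))
print('xi_0 ~', xi0, '   Theta_0 ~', mu_root(xi0, 0.6))
\end{verbatim}
Increasing \texttt{mp.dps} refines the answer, and evaluating \texttt{mu\_root} on a grid of $\xi$ values produces the plots of $\mu_1$, $\mu_2$ and their derivatives used above.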
\section{Additional graphs}\label{sec:plots} In this appendix we have collected some additional graphs that have to do with the eigenvalues $\mu_j(\xi)$ of $\mathfrak{h}(\xi)$. \begin{figure} \caption{A plot of $\mu_1$ (dashed) and $\mu_2$ (solid).} \label{fig:mu1andmu2} \end{figure} \begin{figure} \caption{A plot of $\mu_1'$ (dashed) and $\mu_2'$ (solid).} \label{fig:mu12prime} \end{figure} \end{document}
\begin{document} \setcounter{equation}{0} \begin{titlepage} \begin{center} {\large \bf CRITICAL RANDOM WALK IN RANDOM ENVIRONMENT \\ ON TREES} \\ \end{center} \begin{center} {\sc Robin Pemantle \footnote{Research supported in part by a National Science Foundation postdoctoral fellowship} $\! ^, \!$ \footnote{Department of Mathematics, University of Wisconsin-Madison, Van Vleck Hall, 480 Lincoln Drive, Madison, WI 53706} and Yuval Peres} \footnote{Current address: Department of Statistics, University of California, Berkeley, CA 94720} \\ University of Wisconsin-Madison and Yale University \end{center} \centerline{\bf Abstract} \noindent We study the behavior of Random Walk in Random Environment (RWRE) on trees in the critical case left open in previous work. Representing the random walk by an electrical network, we assume that the ratios of resistances of neighboring edges of a tree $\Gamma$ are i.i.d. random variables whose logarithms have mean zero and finite variance. Then the resulting RWRE is transient if simple random walk on $\Gamma$ is transient, but not vice versa. We obtain general transience criteria for such walks, which are sharp for symmetric trees of polynomial growth. In order to prove these criteria, we establish results on boundary crossing by tree-indexed random walks. These results rely on comparison inequalities for percolation processes on trees and on some new estimates of boundary crossing probabilities for ordinary mean-zero finite variance random walks in one dimension, which are of independent interest. \noindent{Keywords:} tree, random walk, random environment, random electrical network, tree-indexed process, percolation, boundary crossing, capacity, Hausdorff dimension. \noindent{Subject classification: } Primary: 60J15. Secondary: 60G60, 60G70, 60E07. \end{titlepage} \section{Introduction} Precise criteria are known for the transience of simple random walk on a tree (in this paper, a tree is an infinite, locally finite, rooted acyclic graph and has no leaves, i.e. no vertices of degree one). See for example Woess (1986), Lyons (1990) or Benjamini and Peres (1992a). How is the type of the random walk affected if the transition probabilities are randomly perturbed? Qualitatively we can say that if this perturbation has no ``backward push'' (defined below), then the random walk tends to become more transient; the primary aim of this paper is to establish a quantitative version of this assertion. Designate a vertex $\rho$ of the tree as its root. For any vertex $\sigma \neq \rho$, denote by $\sigma'$ the unique neighbor of $\sigma$ closer to $\rho$ ($\sigma'$ is also called the parent of $\sigma$). An {\em environment\/} for random walk on a fixed tree, $\Gamma$, is a choice of transition probabilities $q(\sigma, \tau)$ on the vertices of $\Gamma$, with $q(\sigma , \tau) > 0$ if and only if $\sigma$ and $\tau$ are neighbors. When these transition probabilities are taken as random variables, the resulting mixture of Markov chains is called {\em Random Walk in Random Environment} (RWRE). Following Lyons and Pemantle (1992), we study random environments under the homogeneity condition \begin{equation} \label{eq1} \mbox{The variables } X (\sigma) = \log \left ( {q(\sigma' , \sigma) \over q(\sigma' , \sigma'')} \right ) \mbox{ are i.i.d. for } |\sigma| \geq 2 , \end{equation} where $|\sigma|$ denotes the distance from $\sigma$ to $\rho$. Let $X$ denote a random variable with this common distribution. 
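To fix ideas, here is a minimal Python sketch (ours, not part of the original paper) of one concrete way to generate an environment satisfying (\ref{eq1}) on a finite binary tree: i.i.d. increments $X(\sigma)$ are attached to the vertices, the edge from $\sigma'$ to $\sigma$ receives the conductance $\prod_{\rho < \tau \leq \sigma} e^{X(\tau)}$ (this is the resistor-network description used in Theorem~\ref{th2.1} below), and the transition probabilities at each vertex are taken proportional to the conductances of the incident edges; the final line checks that the resulting log-ratios are exactly the prescribed $X(\sigma)$.
\begin{verbatim}
# Sketch: an environment satisfying (eq1) on a finite binary tree.
import itertools, math, random

random.seed(1)
DEPTH = 4
# Vertices encoded as 0/1-tuples; () is the root, v[:-1] is the parent of v.
vertices = [t for d in range(DEPTH + 1) for t in itertools.product((0, 1), repeat=d)]

X = {v: random.gauss(0.0, 1.0) for v in vertices if v != ()}  # i.i.d. increments
S = {(): 0.0}
for v in vertices:
    if v != ():
        S[v] = S[v[:-1]] + X[v]            # S(v) = sum of X along the path to v
C = {v: math.exp(S[v]) for v in vertices if v != ()}  # conductance of edge (parent(v), v)

def q(v, w):
    """Transition probability from v to a neighbour w, proportional to conductances."""
    nbrs = ([v[:-1]] if v != () else []) + [v + (b,) for b in (0, 1) if len(v) < DEPTH]
    cond = {u: (C[u] if len(u) > len(v) else C[v]) for u in nbrs}
    return cond[w] / sum(cond.values())

# Condition (1): q(sigma', sigma) / q(sigma', sigma'') = exp(X(sigma)) for |sigma| >= 2.
sigma = (0, 1)
print(q(sigma[:-1], sigma) / q(sigma[:-1], sigma[:-2]), math.exp(X[sigma]))
\end{verbatim}
The two printed numbers agree, so the log-ratios in (\ref{eq1}) are indeed the i.i.d. variables $X(\sigma)$ we started from.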
There are several motivations for studying RWRE under the condition (\ref{eq1}): \begin{itemize} \item For nearest-neighbour RWRE on the integers, the assumption (\ref{eq1}) is equivalent to assuming that the transition probabilities themselves are i.i.d. The first result on RWRE was obtained by Solomon (1975), who showed that if the $X(\sigma)$ have mean zero and finite variance, then RWRE on the integer line is recurrent, while it is transient if ${\bf{E}} X>0$. Thus in determining the type of the RWRE, $X$ plays the primary role. The integer line is the simplest infinite tree, and Theorem 2.1 below determines almost exactly the class of trees for which the same criterion applies. An assumption that random variables analogous to $X(\sigma)$ are stationary (and in particular, identically distributed) is also crucial in the work of Durrett (1986), which extended the RWRE results of Sinai (1982) to the multidimensional integer lattice. \item In terms of the associated resistor network, (\ref{eq1}) means that the ratios of the resistances of adjacent edges in $\Gamma$ are i.i.d.; such networks are useful for determining the Hausdorff measures of certain random fractals -- see Falconer (1988) and Lyons (1990). The logarithms of the resistances in such a network form a tree-indexed random walk. (A precise definition of such walks is given below.) This structure appears in a variety of settings: \newline As a generalization of branching random walk (Joffe and Moncayo 1973); in a model for ``random distribution functions'' (Dubins and Freedman 1967); in the analysis of game trees (Nau 1983); in studies of random polymers (Derrida and Spohn 1988); and in first-passage percolation (Lyons and Pemantle (1992), Benjamini and Peres (1994b)). \item The tools developed to analyse RWRE satisfying the assumption (\ref{eq1}) are also useful when that assumption is relaxed, e.g. to allow for some dependence between vertices which are ``siblings''. In Section 7 we describe an application to certain {\em reinforced random walks} which may be reduced to a RWRE. \end{itemize} The main result of Lyons and Pemantle (1992) is that RWRE on $\Gamma$ is a.s. transient if the {\em Hausdorff dimension}, denoted $dim(\Gamma)$, is strictly greater than the {\em backward push} \begin{equation} \label{eq2} \beta (X) \, {\stackrel {def} {=}} - \log \min_{0 \leq \lambda \leq 1} {\bf{E}} e^{\lambda X} \end{equation} and a.s. recurrent if $dim(\Gamma) < \beta (X)$. (The definition of $dim(\Gamma)$ will be given in Section~2; the quantity $e^{dim(\Gamma)}$ is called the branching number of $\Gamma$ in papers of R. Lyons; the backward push $\beta$ is zero whenever $X$ has mean zero.) While subsuming previous results in Lyons (1990) and Pemantle (1988), these criteria leave some interesting cases unresolved. For instance if ${\bf{E}} X = 0$ (the random environment is ``fair'') then one easily sees that $\beta (X) = 0$, so the above criteria yield transience of the RWRE only when $\Gamma$ has positive Hausdorff dimension, which in particular implies exponential growth. In fact, much smaller trees suffice for transience of the RWRE in this case, at least if $X$ has a finite second moment (Theorem~\ref{th2.1} below). In particular, this RWRE is a.s. transient whenever simple random walk on the same tree is transient. To illustrate the difference between old criteria such as exponential growth and the criteria set forth in this paper, we limit the discussion for the rest of the introduction to {\em spherically symmetric trees}, i.e.
trees determined by a growth function $f : {\bf{Z}}^+ \rightarrow {\bf{Z}}^+$ for which every vertex at distance $n$ from the root has degree $1 + f(n)$. Note, however, that much of the interest in these results stems from their applicability to nonsymmetric trees; criteria for general trees involve the notion of capacity and are deferred to the next section. Assume that $\Gamma$ is spherically symmetric and that the variables $X(\sigma)$ in~(\ref{eq1}) have mean zero and finite variance (the assumption of finite variance is plausible since the $X(\sigma)$ are logs of ratios, so the ratios themselves may still have large tails). Our first result, Theorem~\ref{th2.1}, is that the RWRE is almost surely transient if \begin{equation} \label{eq3} \sum_n n^{-1/2} |\Gamma_n|^{-1} < \infty \end{equation} and this condition is also necessary, provided that the regularity condition \begin{equation} \label{eq4} \sum_n n^{-3/2} \log |\Gamma_n| < \infty \end{equation} holds, where $|\Gamma_n|$ is the cardinality of the $n^{th}$ level of $\Gamma$. We conjecture that condition~(\ref{eq3}) is necessary as well as sufficient for transience of RWRE. To see why this is a natural conjecture, and to point out that the randomness makes the walk more transient, compare this to the following known result: simple random walk on a spherically symmetric tree is transient if and only if $\sum |\Gamma_n|^{-1} < \infty$. The key to proving the above result is an analysis of {\em tree-indexed random walks}. These are random fields $\{ S (\sigma) : \sigma \in \Gamma \}$ defined from a collection of $\mbox{i.i.d.}$ real random variables $\{ X (\sigma) : \sigma \in \Gamma, \sigma \neq \rho \}$ by letting $S(\sigma)$ be the sum of $X(\tau)$ over vertices $\tau$ on the path from the root to $\sigma$. Note that when $\Gamma$ is a single infinite ray (identified with the positive integers) then this is just an ordinary random walk; when $\Gamma$ is the family tree of a Galton-Watson branching process, this is a branching random walk. Random walks indexed by general trees first appeared in Joffe and Moncayo (1973). The motivating question for the study of tree-indexed random walks (cf. Benjamini and Peres (1994a, 1994b)) is this: when is $\Gamma$ large enough so that for a $\Gamma$-indexed random walk, the values of $S(\sigma)$ along at least one ray of $\Gamma$ exhibit a prescribed behavior atypical for an ordinary random walk? In this paper we prove several results in this direction, one of which we now describe, and apply them to RWRE on trees. In the special case where the variables $X(\sigma)$ take only the values $\pm 1$ with equal probability, Benjamini and Peres (1994b) obtained conditions for the existence of a ray in $\Gamma$ along which the partial sums $S(\sigma)$ tend to infinity. In particular, for spherically symmetric trees,~(\ref{eq3}) suffices for the existence of such a ray while the condition \begin{equation} \label{insert} \liminf_{n \rightarrow \infty} n^{-1/2} |\Gamma_n| > 0 \end{equation} is necessary. In Theorem~\ref{th2.2} below, this result is extended to variables $X(\sigma)$ with zero mean and finite variance, and also sharpened. For spherically symmetric trees, we show that~(\ref{eq3}) is necessary and sufficient for the existence of a ray along which $S(\sigma) \rightarrow \infty$. 
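As a quick illustration of these objects (a sketch of ours, not taken from the paper), the following Python snippet builds a spherically symmetric tree from a growth function with $|\Gamma_n|$ comparable to $n$, so that~(\ref{eq3}) holds, and estimates by Monte Carlo the probability that some vertex at depth $N$ has $S(\sigma) \geq 0$ along its entire path to the root -- a finite-depth version of the event appearing in Theorem~\ref{th2.2} below.
\begin{verbatim}
# Sketch: Gamma-indexed random walk on a spherically symmetric tree (finite depth).
import random

def num_children(n, f):
    # A vertex at distance n has degree 1 + f(n); at the root all neighbours are children.
    return f(n) + (1 if n == 0 else 0)

def has_nonnegative_path(N, f):
    """One sample: is there a depth-N vertex with S(sigma) >= 0 along its whole path?"""
    alive = [0.0]                                   # S-values of surviving vertices
    for n in range(N):
        nxt = []
        for s in alive:
            for _ in range(num_children(n, f)):
                t = s + random.gauss(0.0, 1.0)      # i.i.d. increment X(sigma)
                if t >= 0.0:
                    nxt.append(t)
        if not nxt:
            return False
        alive = nxt
    return True

# Growth function doubling at powers of two, so |Gamma_n| grows roughly like n.
f = lambda n: 2 if n >= 1 and (n & (n - 1)) == 0 else 1

random.seed(0)
N, trials = 64, 200
print(sum(has_nonnegative_path(N, f) for _ in range(trials)) / trials)
\end{verbatim}
For comparison, with the constant growth function $f \equiv 1$, so that $|\Gamma_n|$ stays bounded, the estimated probability decays to zero as $N$ grows, consistent with the necessity of~(\ref{insert}).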
As is well known, transience of a reversible Markov chain is equivalent to finite resistance of the associated resistor network, where the transition probabilities from any vertex are proportional to the conductances (reciprocal resistances); see for example Doyle and Snell (1984). For an environment satisfying~(\ref{eq1}), the conductance attached to the edge between $\sigma'$ and $\sigma$ is $e^{S(\sigma)}$, where $\{ S(\sigma) \}$ is the $\Gamma$-indexed random walk with increments $\{ X(\tau) : \tau \neq \rho \}$. Since finite resistance is a tail event, transience of the environment satisfies a zero-one law. In particular, the network will have finite resistance whenever a ray exists along which $S(\sigma) \rightarrow \infty$ sufficiently fast so that $e^{-S(\sigma)}$ is summable. In this way Theorem~\ref{th2.2} yields Theorem~\ref{th2.1}. For completeness, we state here a result from Pemantle (1992) about the case where the $\mbox{i.i.d.}$ random variables $\{ X (\sigma) \}$ have negative mean and the backward push $\beta (X)$ is positive. For a spherically symmetric tree $\Gamma$, the result of Lyons and Pemantle (1992) yields recurrence of the RWRE if $$\liminf_{n \rightarrow \infty} e^{-n\beta} |\Gamma_n| = 0$$ and transience if $$|\Gamma_n| \geq Ce^{n(\beta + \epsilon)}$$ for some $C,\epsilon > 0$ and all $n$. Here, analyzing the critical case is more difficult, but assuming a regularity condition on the random environment it can be shown that the boundary between transience and recurrence occurs when $$ |\Gamma_n| \approx e^{\beta n + c n^{1/3}} .$$ Here, unlike in the mean zero case, randomness makes the RWRE more recurrent, since the known necessary and sufficient condition for transience of RWRE when $X (\sigma) = - \beta$ a.s. is that $$\sum e^{n \beta} |\Gamma_n|^{-1} < \infty .$$ The rest of the paper is organized as follows. Precise statements of our main results are collected in the next section. Some estimates for ordinary, mean zero, finite variance random walks that will be needed in the sequel are collected in Section~3. Some of these, along the lines of Woodroofe (1976), may be of independent interest. In particular, we determine the rate of growth of a mean-zero, finite variance random walk conditioned to remain positive; this sharpens considerably a result of Ritter (1981). Section~4 explains the second moment method for trees, developed by R. Lyons. Here we have some new results comparing dependent and independent percolation and comparing spherically symmetric trees to nonsymmetric trees of the same size (Theorem~\ref{th4.2}). After these preliminaries, tree-indexed random walks are discussed in Section~5, along with an example in which the increments are symmetric stable random variables. The example shows that the {\em sustainable speed} of a $\Gamma$-indexed random walk with a given increment distribution may not be determined by the dimension of $\Gamma$. Also, this answers a question of R. Lyons (personal communication) by providing an RWRE satisfying~(\ref{eq1}) which is transient on a tree of polynomial growth, even though ${\bf{E}} X(\sigma) < 0$. The RWRE with no backward push is discussed in Section~6 and an application to reinforced random walk is described in Section~7. \section{Statements of results} \setcounter{equation}{0} We begin with some definitions. Recall that all our trees are infinite, locally finite, rooted at some vertex $\rho$, and have no leaves. We use the notation $\sigma \in \Gamma$ to mean that $\sigma$ is a vertex of $\Gamma$. 
\begin{quote} 1. An infinite path from the root of a tree $\Gamma$ is called a {\em ray} of $\Gamma$. We refer to the collection of all rays as the {\em boundary}, $\partial \Gamma$, of $\Gamma$. 2. If a vertex $\tau$ of $\Gamma$ is on the path connecting the root, $\rho$, to a vertex $\sigma$, then we write $\tau \leq \sigma$. For any two vertices $\sigma$ and $\tau$, let $\sigma \wedge \tau$ denote their greatest lower bound, i.e. the vertex where the paths {}from $\rho$ to $\sigma$ and $\tau$ diverge. Similarly, the vertex at which two rays $\xi$ and $\eta$ diverge is denoted $\xi \wedge \eta$. 3. A set of vertices $\Pi$ of $\Gamma$ which intersects every ray of $\Gamma$ is called a {\em cutset}. 4. Let $\phi : {\bf{Z}}^+ \rightarrow {\bf{R}}$ be a decreasing positive function with $\phi (n) \rightarrow 0$ as $n \rightarrow \infty$. The Hausdorff measure of $\Gamma$ in gauge $\phi$ is $$\liminf_{\Pi} \; \sum_{\sigma \in \Pi} \phi (|\sigma|) ,$$ where the liminf is taken over $\Pi$ such that the distance from $\rho$ to the nearest vertex in $\Pi$ goes to infinity. The supremum over $\alpha$ for which $\Gamma$ has positive Hausdorff measure in gauge $\phi (n) = e^{-n\alpha}$ is called the {\em Hausdorff dimension} of $\Gamma$. Strictly speaking, this is the Hausdorff dimension of the boundary of $\Gamma$ in the metric $d(\xi , \eta) = e^{-|\eta \wedge \xi|}$. For spherically symmetric trees, this is just the liminf exponential growth rate; for general trees it may be smaller. 5. Hausdorff measure may be defined for Borel subsets $A \subseteq \partial \Gamma$ by only requiring the cutsets $\Pi$ to intersect all rays in $A$. Say that $\Gamma$ has $\sigma$-finite Hausdorff measure in gauge $\phi$ if $\partial \Gamma$ is the union of countably many subsets with finite Hausdorff measure in gauge $\phi$. 6. Say that $\Gamma$ has {\em positive capacity} in gauge $\phi$ if there is a probability measure $\mu$ on $\partial \Gamma$ for which the {\em energy} $$I_\phi (\mu ) = \int_{\partial \Gamma} \int_{\partial \Gamma} \phi (|\xi \wedge \eta|)^{-1} \; d\mu (\xi) \, d\mu (\eta) $$ is finite. The infimum over probability measures $\mu$ of this energy is denoted by $1 / \mbox{Cap}_\phi (\Gamma)$. \end{quote} An important fact about capacity and Hausdorff measure, proved by Frostman in 1935, is that $\sigma$-finite Hausdorff measure in gauge $\phi$ implies zero capacity in gauge $\phi$; the converse just barely fails; cf.\ Carleson (1967, Theorem~4.1). This gap is either the motivation or the bane of much of the present work, since many of our criteria would be necessary and sufficient if zero capacity were identical to $\sigma$-finite Hausdorff measure. \begin{th}[proved in Section 6] \label{th2.1} Suppose that $\mbox{i.i.d.}$ random variables $\{ X(\sigma) : \rho \neq \sigma \in \Gamma \}$ are used to define an environment on a tree $\Gamma$ via~(\ref{eq1}), i.e. the edge from $\sigma'$ to $\sigma$ is assigned the conductance $$\prod_{\rho < \tau \leq \sigma} e^{X(\tau)} .$$ Assume that $X(\sigma)$ have zero mean and finite variance. \begin{quote} $(i)$ If $\Gamma$ has positive capacity in gauge $\phi(n) = n^{-1/2}$, then the resulting RWRE is transient. $(ii)$ If $\Gamma$ has zero Hausdorff measure in the same gauge, then the RWRE is recurrent.
$(iii)$ If $\Gamma$ satisfies the regularity condition \begin{equation} \label{regularity} \sum_{n=1}^\infty n^{-3/2} \log |\Gamma_n| < \infty \end{equation} then $\sum_{n=1}^\infty n^{-1/2} |\Gamma_n|^{-1} = \infty$ implies recurrence of the RWRE. In particular, if $\Gamma$ is spherically symmetric and satisfies the regularity condition, then positive capacity in gauge $n^{-1/2}$ is necessary and sufficient for transience. \end{quote} \end{th} {\bf Remarks:} \\ \noindent{1.} Lyons (1990) shows that simple random walk ($X(\sigma) = 0$ with probability one) is transient if and only if $\Gamma$ has positive capacity in gauge $\phi (n) = n^{-1}$. Thus part $(i)$ of the theorem justifies the assertion in the introduction that a fair random environment makes the random walk more transient. For spherically symmetric trees the definitions of Hausdorff measure and capacity are simpler and the theorem reduces to the conditions in~(\ref{eq3})~-~(\ref{insert}). \noindent{2.} Any spherically symmetric tree to which this theorem does not apply must have zero capacity in gauge $n^{-1/2}$ but fail the regularity condition; this implies it grows in vigorous bursts, satisfying $|\Gamma_n| < n^{1/2 + \epsilon}$ infinitely often, and $|\Gamma_n| > \exp (n^{1/2 - \epsilon})$ infinitely often as well. \noindent{3.} If the variables $X(\sigma)$ in~(\ref{eq1}) have positive expectation then (trivially) for any tree $\Gamma$ the RWRE is transient, since the sum of the resistances along any fixed ray is almost surely finite. Part $(i)$ of the theorem is proved by showing that in the mean zero case there exists a random ray with the same property. This in turn is deduced from the next theorem concerning tree-indexed random walks. \begin{th}[proved in Section 5] \label{th2.2} Let $\{ X(\sigma) \}$ be $\mbox{i.i.d.}$ random variables indexed by the vertices of $\Gamma$, and let $S(\sigma) = \sum_{\rho < \tau \leq \sigma} X(\tau)$. Suppose that $X(\sigma)$ have zero mean and finite variance. Then \begin{quote} $(i)$ If $\Gamma$ has positive capacity in gauge $\phi (n) = n^{-1/2}$ then $${\bf{P}} (\exists \xi \in \partial \Gamma \,:\, \forall \sigma \in \xi \;\; S(\sigma) \geq 0) > 0 .$$ Furthermore, under the same capacity condition, for every increasing positive function $f$ satisfying \begin{equation} \label{eq5} \sum_{n=1}^\infty n^{-3/2} f(n) < \infty \end{equation} there exists with probability one a ray $\xi$ of $\Gamma$ such that $S(\sigma) \geq f(|\sigma|)$ for all but finitely many $\sigma \in \xi$. $(ii)$ If $\Gamma$ has $\sigma$-finite Hausdorff measure in gauge $\phi (n) = n^{-1/2}$, then $${\bf{P}} (\exists \xi \in \partial \Gamma \,:\, \forall \sigma \in \xi \;\; S(\sigma) \geq 0) = 0 .$$ Furthermore, for any increasing $f$ satisfying~(\ref{eq5}), there is with probability one NO ray $\xi$ such that $S(\sigma) \geq -f(|\sigma|)$ for all but finitely many $\sigma \in \xi$. $(iii)$ The conclusions of part $(ii)$ also hold if we assume, instead of the Hausdorff measure assumption, that $\sum_n n^{-1/2} |\Gamma_n|^{-1} = \infty$. \end{quote} \end{th} {\bf Remark:} \\ For spherically symmetric trees, parts $(i)$ and $(iii)$ cover all cases, thus proving a sharp dichotomy for tree-indexed random walks: either there exist rays with $S(\sigma)$ tending to infinity faster than $n^{1/2 - \epsilon}$ for all $\epsilon > 0$, or else along every ray $S(\sigma)$ must dip below $-n^{1/2 - \epsilon}$ infinitely often.
We believe this dichotomy holds for all trees but the proof eludes us. In general, the condition in part $(iii)$ is not comparable to the condition in part $(ii)$. \section{Estimates for mean zero, finite variance random walk} \setcounter{equation}{0} Here we collect estimates for ordinary, one-dimensional, mean zero, finite variance random walks which are needed in the sequel. Begin with a classical estimate whose proof may be found in Feller (1966), Section XII.8. \begin{pr} \label{prFeller} Let $X_1 , X_2 , \ldots$ be $\mbox{i.i.d.}$, nondegenerate, mean zero, random variables with finite variance and let $S_n = \sum_{k=1}^n X_k$. Let $T_0$ denote the hitting time on the negative half-line: $T_0 = \min \{ n \geq 1 : S_n < 0 \}$. Then \begin{equation} \label{eqFeller} \lim_{n \rightarrow \infty} \sqrt{n} {\bf{P}} (T_0 > n) = c_1 > 0 , \end{equation} and in particular, $c_1' \leq \sqrt{n} {\bf{P}} (T_0 > n) \leq c_1''$ for all $n$. $ \Box$ \end{pr} We now determine which boundaries $f(n)$ behave like the horizontal boundary $f(n) \equiv 0$ in that ${\bf{P}} (S_k > f(k) , k = 1 , \ldots , n)$ is still asymptotically $c n^{-1/2}$. \begin{th} \label{th3.1} With $X_n$ and $S_n$ as in the previous proposition, let $f (n)$ be any increasing positive sequence. Then \begin{quote} (I) The condition \begin{equation} \label{eqsummable} \sum_{n=1}^\infty n^{-3/2} f(n) < \infty \end{equation} is necessary and sufficient for the existence of an integer $n_f$ such that $$\inf_{n \geq n_f} \sqrt{n} {\bf{P}} (S_k \geq f(k) \mbox{ for } n_f \leq k \leq n) > 0 .$$ (II) The same condition~(\ref{eqsummable}) is necessary and sufficient for $$\sup_{n \geq 1} \sqrt{n} {\bf{P}} (S_k \geq -f(k) \mbox{ for } 1 \leq k \leq n) < \infty .$$ \end{quote} \end{th} {\bf Remarks:} \\ \noindent{1.} Part (I) may be restated as asserting that if $f$ satisfies~(\ref{eqsummable}), then random walk conditioned to stay positive to time $n$ stays above $f$ between times $n_f$ and $n$ with probability bounded away from zero as $n \rightarrow \infty$. This condition arises in the classical Dvoretzky-Erd\"os test for random walk in three-space to eventually avoid a sequence of concentric balls of radii $f(n)$. Passing to the continuous limit, the absolute value of random walk in three-space becomes a Bessel (3) process, which is just a (one-dimensional) Brownian motion conditioned to stay positive. Proving Theorem~\ref{th3.1} via this connection (using Skorohod representation, say) seems more troublesome than the direct proof. \noindent{2.} In the case where the positive part of the summands $X_i$ is bounded and the negative part has a moment generating function, Theorem~\ref{th3.1} (with further asymptotics) was proved by Novikov (1981). He conjectured that these conditions could be weakened. For the Brownian case, see Millar (1976). Other estimates of this type are given by Woodroofe (1976) and Roberts (1991). For their statistical ramifications, see Siegmund (1986) and the references therein. Our estimate can be used to calculate the rate of escape of a random walk conditioned to stay positive {\em forever}; this process has been studied by several authors -- see Keener (1992) and the references therein. The proof uses the following three-part lemma. Let $$T_h = \min \{ n \geq 1 : S_n < -h \}$$ denote the hitting time on $(-\infty , - h)$. \begin{lem} \label{lem ii-iv} ~~~ \begin{quote} (i) ${\bf{P}} (T_h > n) \leq c_2 h n^{-1/2}$ for all integers $n \geq 1$ and real $h \geq 1$. 
(ii) ${\bf{E}} (S_n^2 \, | \, T_0 > n) \leq c_3 n$ for $n \geq 1$. (iii) ${\bf{P}} (T_h > n) \geq c_4 h n^{-1/2}$ for all integers $n \geq 1$ and real $h \leq \sqrt{n}$. \end{quote} \end{lem} {\bf Remarks} (corresponding to the assertions in the lemma): \noindent{$(i)$} This estimate is from Kozlov (1976); as we shall see, it follows immediately from Proposition~\ref{prFeller}. \noindent{$(ii)$} In fact we shall verify that $${\bf{E}} (S_n^2 \, | \, T_0 > n) \leq 2 n \mbox{ Var}(X_1) + o(n) ,$$ where the constant 2 cannot be reduced in general. \noindent{$(iii)$} Under the additional assumption that ${\bf{E}} |X_1|^3 < \infty$, this estimate is proved in Lemma~1 of Zhang (1991). Assuming only finite variance, as we do, the estimate was known to Kesten (personal communication) and is implicit in Lawler and Polaski (1992). {\sc Proof:} $(i)$ We may assume that $h \leq \sqrt{n}$ and that $h$ is an integer. By the central limit theorem there exists an $r \geq 1$, depending only on the common distribution of the $X_k$, such that $${\bf{P}} (S_{r^2 h^2} > h) > 1/3 .$$ {}From Proposition~\ref{prFeller} and the FKG inequality (or the Harris inequality; see Grimmett (1989, Section~2.2)): $$ {\bf{P}} (S_{r^2 h^2} > h \mbox{ and } T_0 > r^2 h^2) > {c_1'(rh)^{-1} \over 3} = c h^{-1} .$$ Consequently \begin{eqnarray*} ch^{-1} P (T_h > n) & \leq & P \left [ S_{r^2 h^2} > h \mbox{ and } T_0 > r^2 h^2 \mbox{ and } \sum_{r^2 h^2 + 1}^{r^2 h^2 + k} X_j \geq -h \mbox{ for } 1 \leq k \leq n \right ] \\[2ex] & \leq & P(T_0 > r^2 h^2 + n) \\[2ex] & \leq & c_1'' n^{-1/2} \end{eqnarray*} which yields the required estimate. $(ii)$ Consider the minimum of $T_0$ and $n$: $$ {\bf{E}} (T_0 \wedge n) = \sum_{k=1}^n {\bf{P}} (T_0 \geq k) = \sum_{k=1}^n (c_1 + o(1)) k^{-1/2} = 2 (c_1 + o(1)) n^{1/2} .$$ Therefore, using Wald's identity, $$ {\bf{E}} (S_n^2 {\bf 1}_{T_0 > n}) \leq {\bf{E}} S_{T_0 \wedge n}^2 = {\bf{E}} X_1^2 {\bf{E}} (T_0 \wedge n) = 2 {\bf{E}} X_1^2 (c_1 + o(1)) n^{1/2} .$$ Dividing both sides by ${\bf{P}} (T_0 > n)$ and using Proposition~\ref{prFeller} gives $${\bf{E}} (S_n^2 \, | \, T_0 > n) \leq (2 + o(1)) n {\bf{E}} X_1^2 .$$ Analyzing the proof, it is easy to see that this is sharp at least when the increments $X_j$ are bounded. $(iii)$ First, we use $(ii)$ to derive the estimate \begin{equation} \label{eq6} {\bf{E}} (S_n^2 \, | \, T_h > n) \leq cn \end{equation} with $c$ independent of $h$ and $n$. By the invariance principle, $$\inf \{ {\bf{P}} (T_h > n) : n \geq 1 , h \geq \sqrt{n} \} > 0$$ so~(\ref{eq6}) is immediate for $h \geq \sqrt{n}$. Assume then that $h < \sqrt{n}$. Let $A_i$ denote the event that $T_h > n$ and $S_i$ is the last minimal element among $0 = S_0 , S_1 , \ldots , S_n$. Then $${\bf{E}} (S_n^2 \, | \, T_h > n) = \sum_{i=1}^n {\bf{P}} (A_i) {\bf{E}} (S_n^2 \, | \, A_i) .$$ Conditioning further on $S_i$ we see that this is at most $$ \sup \{ {\bf{E}} (S_n^2 \, | \, A_i , S_i = y) : 1 \leq i \leq n , -h \leq y \leq 0 \} . $$ But by the Markov property, $${\bf{E}} (S_n^2 \, | \, A_i , S_i = y) = {\bf{E}} ((y + S_{n-i})^2 \, | \, S_k > 0 \mbox{ for } 1 \leq k \leq n-i ) .$$ Since $y < 0$ and $y^2 < n$, this gives $$ {\bf{E}} (S_n^2 \, | \, T_h > n) \leq \sup_{0 \leq i \leq n} \{ n + {\bf{E}} (S_{n-i}^2 \, | \, S_k > 0 \mbox{ for } 1 \leq k \leq n-i) \} \leq (1+c_3)n $$ with $c_3$ as in $(ii)$, proving~(\ref{eq6}). 
Now letting $A$ denote the event $\{ T_h > n \}$, we have $$0 = {\bf{E}} S_{T_h \wedge n} = {\bf{P}} (A) {\bf{E}} (S_n \, | \, A) + {\bf{P}} (A^c) {\bf{E}} (S_{T_h} \, | \, A^c) $$ and therefore (using~(\ref{eq6}) in the final step): $${{\bf{P}} (A) \over {\bf{P}} (A^c)} = {{\bf{E}} (-S_{T_h} \, | \, A^c) \over {\bf{E}} (S_n \, | \, A)} \geq {h \over ({\bf{E}} (S_n^2 \, | \, A))^{1/2}} \geq {h \over (cn)^{1/2}} .$$ This establishes $(iii)$. $ \Box$ {\sc Proof of Theorem}~\ref{th3.1}, part (I): First assume that $\sum n^{-3/2} f(n) < \infty$. Consider the events $$V_m = \left \{ S_k > f(2^m) \mbox{ for } 2^{m-1} < k \leq 2^m \right \} .$$ We claim that for any $N \geq 2^{m-1}$, \begin{equation} \label{eq7} {\bf{P}} (V_m^c \, | \, T_0 > 4N) \leq {\tilde{c}} f(2^m) 2^{-m/2} \end{equation} with some constant ${\tilde{c}} > 0$. Indeed by conditioning on the first $k \in [2^{m-1} + 1 , 2^m]$ for which $S_k \leq f(2^m)$ one sees that \begin{eqnarray*} && {\bf{P}} (V_m^c \cap \{ T_0 > 4N \}) \\[2ex] & \leq & {\bf{P}} (T_0 > 2^{m-1}) \max_{k \leq 2^m} {\bf{P}} (S_j - S_k \geq -f(2^m) \mbox{ for all } j \in [k+1 , 4N] ) \\[2ex] & \leq & {\bf{P}} (T_0 > 2^{m-1}) {\bf{P}} (T_{f(2^m)} \geq N) \\[2ex] & \leq & c_1''c_2 f(2^m) (N 2^{m-1})^{-1/2} . \end{eqnarray*} Using Proposition~\ref{prFeller}, this establishes~(\ref{eq7}). Since $f$ is nondecreasing, the hypothesis $$\sum_n n^{-3/2} f(n) < \infty$$ is equivalent to $\sum_{m=1}^\infty 2^{-m/2} f(2^m) < \infty$. Choose an integer $m_f$ such that $$ {\tilde{c}} \sum_{m = m_f}^\infty 2^{-m/2} f(2^m) < {1 \over 2} .$$ By~(\ref{eq7}), for every $M > m_f$ we have $$\sum_{m = m_f}^M {\bf{P}} (V_m^c \, | \, T_0 > 2^{M+1}) \leq {1 \over 2}$$ and hence $${\bf{P}} \left ( \bigcap_{m = m_f}^M V_m \, | \, T_0 > 2^{M+1} \right ) \geq {1 \over 2} .$$ Taking $n_f = 2^{m_f}$ and recalling the definition of $V_m$ concludes the proof. For the converse, first recall a known fact about mean zero, finite variance random walks, namely that \begin{equation} \label{eq8} \lim_{R \rightarrow \infty} \sup_{h \geq 0} {\bf{P}} (h + S_{T_h} \leq -R) = 0 . \end{equation} In other words the amounts by which $\{ S_n \}$ {\em overshoots} the boundary $-h$ are tight as $h$ varies over $(0,\infty)$. Indeed the overshoots for the random walk $\{ S_n \}$ are the same as for the associated renewal process $\{ L_n \}$ of descending ladder random variables, where $L_1$ is the first negative value among $S_1 , S_2 , \ldots$, and in general, $L_{n+1}$ is the first among $\{ S_k \}$ which is less than $L_n$. The differences $L_{n+1} - L_n$ are $\mbox{i.i.d.}$; they have finite first moment if and only if ${\bf{E}} X_1^2 < \infty$ (see Feller (1966), Section XVIII.5). In this case, the overshoots are tight since by the renewal theorem, they converge in distribution (see Feller (1966), Section XI.3). This proves~(\ref{eq8}). Some new notation will be useful: \begin{defn} For any function $f(n)$ and any random walk $\{ S_n \}$, let $A(f;a,b)$ denote the event that $S_n \geq f(n)$ for all $n \in [a,b]$. Let $A(f;b)$ denote $A(f;1,b)$. \end{defn} Proceeding now to the proof itself, it is required to prove~(\ref{eqsummable}) from the assumption that for some $n_f \geq 1$, \begin{equation} \label{eq9} \inf_{n \geq n_f} {\bf{P}} ( A(f;n_f,n) \, | \, T_0 > n) > 0 . 
\end{equation} It may be assumed without loss of generality that $f(n) \rightarrow \infty$, since otherwise there is nothing to prove; also, by changing $f$ at finitely many integers, it may be assumed, without affecting the condition~(\ref{eqsummable}) we are trying to prove, that~(\ref{eq9}) holds with $n_f = 1$. Impose the restriction $f(n) \leq \sqrt{n}$; this restriction will be removed at the end of the proof. Let $c_6$ be the infimum of probabilities ${\bf{P}} (A(f;n) \, | \, T_0 > n)$, which is positive by~(\ref{eq9}). The key estimate to proving~(\ref{eqsummable}) is \begin{equation} \label{eq11} {\bf{P}} (A(f;2n)^c \, | \, A(f;n) \mbox{ and } T_0 > N) \geq c_7 f(n) n^{-1/2} \end{equation} for some $c_7 > 0$, all sufficiently large $n$ and all $N \geq 2n$. Verifying this estimate involves several steps. \begin{quote} \underline{Step 1: Controlling $S_n$, given $A(f;n)$}. From part~$(ii)$ of Lemma~\ref{lem ii-iv}, $${\bf{E}} (S_n^2 \, | \, A(f;n)) \leq c_6^{-1} {\bf{E}} (S_n^2 \, | \, T_0 > n) \leq (c_3 / c_6) n .$$ Therefore \begin{equation} \label{eq12} {\bf{P}} (S_n \leq c_8 \sqrt{n} \, | \, A(f;n)) \geq {1 \over 2}, \end{equation} where $c_8 = 2 c_3 / c_6 > 0$. \underline{Step 2: Securing a dip below the boundary}. By the central limit theorem there exists an integer $n^*$ and a constant $c_9 > 0$ such that $${\bf{P}} (S_{2n} - S_n < - c_8 \sqrt{n} ) \geq 4 c_9 \mbox{ for } n \geq n^* .$$ Let $t(n) = \min \{ k > n : S_k < f(n) \}$. Then nonnegativity of $f$ and the Markov property imply $${\bf{P}} (t(n) \leq 2n \, | \, A(f;n) , S_n ) \geq 4 c_9 {\bf 1}_{S_n \leq c_8 \sqrt{n}} ,$$ and hence by~(\ref{eq12}), \begin{equation} \label{eq13} {\bf{P}} (t(n) \leq 2n \, | \, A(f;n)) \geq 2 c_9 \end{equation} whenever $n \geq n^*$. \underline{Step 3: Controlling the overshoot}. Use tightness of the overshoots to pick an $R > 0$ such that ${\bf{P}} (S_{t(n)} \geq f(n) - R \, | \, S_n = y) \geq 1 - c_9$ for any $y \geq f(n)$. Increase $n^*$ if necessary to ensure that $f(n^*) > 2R$ and hence for all $n \geq n^*$, ${\bf{P}} (S_{t(n)} \geq f(n) / 2 \, | \, A(f;n)) \geq 1 - c_9$. Combining this with~(\ref{eq13}) yields $${\bf{P}} \left ( t(n) \leq 2n \mbox{ and } S_{t(n)} \geq {f(n) \over 2} \, | \, A(f;n) \right ) \geq c_9 ;$$ thus by Proposition~\ref{prFeller} and the definition of $c_6$ there is some $c_{10} > 0$ such that for all $n$, \begin{equation} \label{eq 940102a} {\bf{P}} (A(f;n) \cap \{ t(n) \leq 2n \} \cap \{ S_{t(n)} \geq {f(n) \over 2} \}) \geq c_{10} n^{-1/2} . \end{equation} \underline{Step 4: Maintaining positivity}. From the strong Markov property and part~$(iii)$ of Lemma~\ref{lem ii-iv}, the event $\{ S_k - S_{t(n)} \geq -f(n) / 2 \mbox{ for } k \in [t(n)+1 , t(n) +N] \}$ is independent of the random walk up to time $t(n)$ and has probability at least $(c_4 / 2) f(n) N^{-1/2}$. Multiplying by the inequality~(\ref{eq 940102a}) proves that $${\bf{P}} (A(f;n) \cap A(f;2n)^c \cap \{ T_0 > N \}) \geq c_{11} f(n) (nN)^{-1/2} ,$$ where $c_{11} = c_{10} \cdot c_4 / 2$. Now the key estimate~(\ref{eq11}) follows from Proposition~\ref{prFeller}. \end{quote} \noindent{From} here the rest is easy sailing. If $n > n^*$ and $2^M \geq 2n$ then $$ {\bf{P}} (A(f;2n) \, | \, T_0 > 2^M) \leq (1 - c_7 f(n) n^{-1/2}) {\bf{P}} (A(f;n) \, | \, T_0 > 2^M) . 
$$ Therefore $${\bf{P}} (A(f;2^M) \, | \, T_0 > 2^M) \leq \prod_{\log_2 n^* < m \leq M} \bigl( 1 - c_7 f(2^m) 2^{-m/2} \bigr) .$$ Recalling that the LHS is bounded away from zero, we infer that necessarily $$\sum_m 2^{-m/2} f(2^m) < \infty .$$ Having proved summability from~(\ref{eq9}) when $f(n) \leq \sqrt{n}$, we now remove the restriction. If the restriction is violated finitely often, this is easily corrected where it is used in Step~2 by choosing $n^*$ sufficiently large. If the restriction is violated infinitely often, then the above proof works for $g(n) = f(n) \wedge \sqrt{n}$ to show that $$ \sum_{m=1}^\infty 2^{-m/2} \min (f(2^m) , 2^{m/2}) < \infty $$ which contradicts infinitely many violations. Thus $2^{-m/2} f(2^m)$ is summable in any event, which is equivalent to~(\ref{eqsummable}). $ \Box$ {\sc Proof of Theorem}~\ref{th3.1}, part (II): We may assume that $f(n) \uparrow \infty$ since Lemma~\ref{lem ii-iv} part $(i)$ covers bounded $f$. Retain the notation $A(f;a,b)$ {}from the previous proof. One of the two halves of the equivalence, namely that $\sup_{n \geq 1} \sqrt{n} {\bf{P}} (S_k \geq -f(k) \mbox{ for } 1 \leq k \leq n) < \infty$ implies the summability condition~(\ref{eqsummable}), is easy. Assume that ${\bf{P}} (A(-f;n)) \leq C n^{-1/2}$ for all $n \geq 1$. Under this assumption, we may repeat the proof of~(\ref{eq11}) substituting 0 for the upper boundary, $f$, and substituting $-f$ for the lower boundary, 0, to yield \begin{equation} \label{eq18} {\bf{P}} (A(0;2n)^c \, | \, A(0;n) \cap A(-f;N)) \geq c_7 f(n) n^{-1/2} \end{equation} for all large $n$ and $N \geq 4n$. Our assumption implies that the products $$ \prod_{m=1}^M {\bf{P}} (A(0;2^{m+1}) \, | \, A(0;2^m) \cap A(-f;2^{M+2})) , $$ being greater than ${\bf{P}} (A(0;2^{M+1}) \, | \, A(-f;2^{M+2}))$, must be bounded below by a positive constant. From~(\ref{eq18}) it follows that the product of $1 - c_7 n^{-1/2} f(n)$ is nonzero as $n$ ranges over powers of two, which implies $\sum 2^{-m/2} f(2^m) < \infty$, completing the proof for this half of the equivalence. The other direction rests on an inequality which will require some work to prove. {\sc Claim:} {\em There is a constant $\, {c_{12}} \geq 1 \, $ for which} \begin{equation} \label{new 3.10} {\bf{P}} \left ( T_0 < 2n \, | \, T_0 \geq n \mbox{ and } A(-f ; N) \right ) \leq {c_{12}} f(3n) n^{-1/2} \end{equation} {\em provided that\/} $N \geq 4n$. (Observe that when $f(3n)^2 \geq n$ the inequality is trivial. Also note that it suffices to establish (\ref{new 3.10}) for large $n$, since ${c_{12}}$ can be chosen large enough to render the inequality trivial for small $n$.) Assuming~(\ref{new 3.10}) for the moment, \begin{eqnarray} && {\bf{P}} (T_0 \geq 2^{M+1} \, | \, A(-f; 2^{M+2})) \nonumber \\[2ex] & \geq & {\bf{P}} (T_0 \geq 2^{m_0} \, | \, A(-f; 2^{M+2})) \cdot \prod_{m=m_0}^M \left ( 1 - {c_{12}} f(3 \cdot 2^m) 2^{-m/2} \right ) \nonumber \\[2ex] & \geq & c_{m_0} \prod_{m=m_0}^M \left ( 1 - {c_{12}} f(3 \cdot 2^m) 2^{-m/2} \right ) , \label{new 3.11} \end{eqnarray} where $m_0$ is large enough so that all the factors on the right are positive. Since the summability of~(\ref{eqsummable}) is equivalent to $$ \sum f(3 \cdot 2^m) 2^{-m/2} < \infty , $$ the RHS of~(\ref{new 3.11}) is bounded from below by a positive constant $c_{13}$ and therefore $${\bf{P}} (A( -f ; 2^{M+2})) \leq c_{13}^{-1} {\bf{P}} (T_0 \geq 2^{M+1}) .$$ Proposition~\ref{prFeller} then easily yields the inequality we are seeking (use the smallest $M$ such that $2^M \geq n$). Thus it suffices to establish~(\ref{new 3.10}).
This is done by cutting and pasting portions of the random walk trajectory. To bound the LHS of~(\ref{new 3.10}) we will condition on the time $T_0 = k \in [n , 2n)$ of the first negative value of the random walk, and on the overshoot $S_{T_0} = -y < 0$. This gives \begin{equation} \label{new 3.12} {\bf{P}} ( n \leq T_0 < 2n \mbox{ and } A(-f;N)) \leq P (T_0 \geq n) \max_{\begin{array}{c} n \leq k < 2n \\ 0 < y < f(k) \end{array}} {\bf{P}} (A( -f ; k , N) \, | \, S_k = -y) . \end{equation} Next observe that, given $S_k = -y$, the events $A(-f; k,N)$ and $\{ S_{3n} > \sqrt{n} \}$ are both increasing events in the conditionally independent variables $X_{k+1} , \ldots , X_N$. Applying the Harris inequality (or FKG) yields \begin{eqnarray} && {\bf{P}} (A(-f; k,N) \mbox{ and } S_{3n} > \sqrt{n} \, | \, S_k = -y) \nonumber \\[2ex] & \geq & {\bf{P}} (A(-f; k,N) \, | \, S_k = -y) \cdot {\bf{P}} ( S_{3n} > \sqrt{n} \, | \, S_k = -y) \nonumber \\[2ex] & \geq & c_{14} {\bf{P}} (A(-f; k,n) \, | \, S_k = -y) , \label{new 3.13} \end{eqnarray} where the last inequality uses the fact that $3n-k > n$, that $0 < y \leq f(3n) < \sqrt{n}$, and the central limit theorem. Now begins the cutting and pasting. We shall combine a trajectory $X_{k+1}^{(1)} , X_{k+2}^{(1)} , \ldots , X_N^{(1)}$ in the event on the LHS of~(\ref{new 3.13}) (called $B_1$ in figure~1) with two independent random walk trajectories $\{ X_j^{(2)} : j \geq 1 \}$ and $\{ X_j^{(3)} : j \geq 1 \}$ depicted in figures~2 and~3 respectively. Assume without loss of generality that $f(3n)^2$ is an integer. Define an event (i.e.\ a subset of sequence space) by $$B(k,y) = A (y-f; 0, N-k) \cap \{ S_{3n-k} > \sqrt{n} + y \}$$ and observe that $B_1 \cap \{ S_k = -y \}$ may be written as the intersection of $\{ S_k = -y \}$ with the event that the shifted sequence $X_{k+1}^{(1)}, X_{k+2}^{(1)}, \ldots$ is in $B(k,y)$. Define the mapping taking the three trajectories $\{ X_j^{(i)} \}$, $i = 1,2,3$ into a trajectory $\{ {\tilde{X}}_j : 1 \leq j \leq N \}$ as follows. $$\begin{array}{rrcl} I. & {\tilde{X}}_j = X_j^{(2)} & \mbox{ for } & 1 \leq j \leq f(3n)^2 \\[2ex] II. & {\tilde{X}}_{f(3n)^2 + j} = X_{k+j}^{(1)} & \mbox{ for } & 1 \leq j \leq 3n - k \\[2ex] III. & {\tilde{X}}_{f(3n)^2 + 3n - k + j} = X_j^{(3)} & \mbox{ for } & 1 \leq j \leq k - f(3n)^2 \\[2ex] IV. & {\tilde{X}}_j = X_j^{(1)} & \mbox{ for } & 3n + 1 \leq j \leq N . \end{array}$$ \begin{picture}(400,100) \put(100, 20) {\framebox(200,60){Figures 1 - 4 go here}} \end{picture} \noindent We claim that the ``pasted'' trajectory $\{{\tilde{X}}_j : 1 \leq j \leq N \}$ lies in the event \newline $B_4 \, {\stackrel {def} {=}} A(0;n) \cap A(-f;N)$ depicted in figure~4 whenever the trajectories $\{ X_j^{(1)} \}$, $\{ X_j^{(2)} \}$ and $\{ X_j^{(3)} \}$ lie in $B_1$, $B_2$ and $B_3$ respectively (See figures~1-4 for the definitions of $B_1, B_2$ and $B_3$). Indeed, let ${\cal{S}}T_j = {\tilde{X}}_1 + \cdots + {\tilde{X}}_j$ and observe that for $1 \leq j \leq 3n - k$, $$ \sum_{i=k+1}^{k+j} X_i^{(1)} \geq - f(k+j) - (-y) \geq - f(3n) .$$ Thus ${\cal{S}}T_{f(3n)^2 + j} \geq {\cal{S}}T_{f(3n)^2} - f(3n) \geq 0$. This verifies that part II of the trajectory in figure~4 satisfies the requirements to be in $B_4$; the other verifications are immediate. 
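Purely as a bookkeeping aid, and not as part of the proof, the pasting map I--IV can be transcribed directly. In the sketch below the function name and the list representation are ours; the three input trajectories are given as lists of increments, and the parameters $k$, $n$, $N$ and $m = f(3n)^2$ are those of the surrounding argument.
\begin{verbatim}
def paste(X1, X2, X3, k, n, N, m):
    # Build the pasted increment sequence (tilde X_1, ..., tilde X_N) from
    # three independent trajectories, following cases I-IV above.
    # X1 must have length at least N, X2 at least m, X3 at least k - m,
    # with n <= k < 2n and m = f(3n)^2 <= n.
    tilde = [None] * N
    for j in range(1, m + 1):                  # I:   tilde X_j = X^(2)_j
        tilde[j - 1] = X2[j - 1]
    for j in range(1, 3 * n - k + 1):          # II:  tilde X_{m+j} = X^(1)_{k+j}
        tilde[m + j - 1] = X1[k + j - 1]
    for j in range(1, k - m + 1):              # III: tilde X_{m+3n-k+j} = X^(3)_j
        tilde[m + 3 * n - k + j - 1] = X3[j - 1]
    for j in range(3 * n + 1, N + 1):          # IV:  tilde X_j = X^(1)_j
        tilde[j - 1] = X1[j - 1]
    return tilde
\end{verbatim}
The ranges in I--IV partition $\{1, \ldots, N\}$, so every coordinate of the pasted sequence is filled exactly once.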
Since the sequence $\{ {\tilde{X}}_j \}$ is a fixed permutation of the three sequences $\{ X_j^{(1)} \}$, $\{ X_j^{(2)} \}$ and $\{ X_j^{(3)} \}$, it is still i.i.d.\ and hence \begin{equation} \label{new 3.15} {\bf{P}} (B(k,y)) {\bf{P}} (B_2) {\bf{P}} (B_3) = {\bf{P}} (B_1 \, | \, S_k = -y) {\bf{P}} (B_2) {\bf{P}} (B_3) \leq {\bf{P}} (B_4) . \end{equation} Now the event $B_2$ is the intersection of two increasing events, so by the Harris inequality, Proposition~\ref{prFeller} and the central limit theorem, \begin{equation} \label{new 3.16} {\bf{P}} (B_2) \geq {\bf{P}} (A(0 ; f(3n)^2)) {\bf{P}} (S_{f(3n)^2} \geq f(3n)) \geq { c_{15} \over f(3n)} \end{equation} for some positive $c_{15}$ when $n$ (and hence $f(3n)$) are sufficiently large. Similarly, by Lemma~\ref{lem ii-iv}~$(iii)$, the CLT and the Harris inequality, \begin{equation} \label{new 3.17} {\bf{P}} (B_3) \geq ({1 \over 2} - o(1)) {\bf{P}} (A(- \sqrt{n} ; k-f(3n)^2 )) \geq c_{16} > 0. \end{equation} Together with~(\ref{new 3.15}) and~(\ref{new 3.16}) this yields \begin{equation} \label{new 3.18} {\bf{P}} (B_1 \, | \, S_k = -y) \leq c_{17} f(3n) {\bf{P}} (B_4) . \end{equation} By~(\ref{new 3.13}), $${\bf{P}} (A( -f; k,N) \, | \, S_k = -y) \leq c_{18} f(3n) {\bf{P}} (B_4) $$ and since this is true for any choice of $y$, we integrate out $y$ to get $${\bf{P}} (A( -f; k,N)) \leq c_{18} f(3n) {\bf{P}} (B_4) .$$ Finally, recalling~(\ref{new 3.12}) and the definition of $B_4$, we obtain \begin{eqnarray*} {\bf{P}} (n \leq T_0 < 2n \mbox{ and } A(-f; N)) & \leq & c_{18} {\bf{P}} (T_0 \geq n) f(3n) {\bf{P}} (B_4) \\[2ex] & \leq & c_{19} n^{-1/2} f(3n) {\bf{P}} (T_0 \geq n \mbox{ and } A(-f ; N)) \end{eqnarray*} which is equivalent to~(\ref{new 3.10}). This completes the proof of Theorem~\ref{th3.1}. $ \Box$ \section{Target percolation on trees} \setcounter{equation}{0} In this section only, it will be convenient to consider finite as well as infinite trees. Among finite trees, we allow only those of constant height, i.e. all maximal paths from the root (still called rays) have the same length $N$. Thus the set $\partial \Gamma$ of rays may be identified with $\Gamma_N$. The definitions of energy and capacity remain the same. The definition of Hausdorff measure fails, since the cutset ${\bf{P}}i$ cannot go to infinity; replacing the liminf by an infimum defines the {\em Hausdorff content}. Following Lyons (1992), we consider a very general {\em percolation process} on $\Gamma$: a random subgraph $W \subseteq \Gamma$ chosen {}from some arbitrary distribution on sets of vertices in $\Gamma$. The event that $W$ contains the path connecting $\rho$ and $\sigma$ is denoted $\{ \rho \leftrightarrow \sigma \}$; similarly write $\{ \rho \leftrightarrow \partial \Gamma \}$ for the event that $W$ contains a ray of $\Gamma$. A familiar example from percolation theory is when each edge $e$ is retained independently with some probability $p(e)$. The random component $W$ of this subgraph that contains the root is called a {\em Bernoulli percolation} on $\Gamma$. Another example that is general enough to include nearly all cases of interest is a {\em target} percolation. 
This is defined {}from a family of $\mbox{i.i.d.}$ real random variables $\{X(\sigma) : \sigma \neq \rho \}$ by choosing some closed set $B \subseteq {\bf{R}}^N$ and defining $$W = \bigcup_{k=0}^N \{ \sigma \in \Gamma_k : (X(\tau_1) , \ldots , X(\tau_k)) \in \pi_k B \} ,$$ where $\pi_k$ is the projection of $B$ on the first $k$ coordinates, the sequence $\rho , \tau_1, \ldots , \tau_k = \sigma$ is the path from the root to $\sigma$, and $\rho$ is defined always to be in $W$. The set $B$ is called the target set. Observe that since $B$ is closed, a ray $\xi = (\rho, \sigma_1 , \sigma_2 , \ldots)$ is in $W$ if and only if $(X(\rho) , X(\sigma_1) , \ldots) \in B$. Letting $\{ X (\sigma) \}$ all be uniform on $[0,1]$ and $B = \{ \prod_{j=1}^\infty [0,a_j] \}$ for some $a_j \in [0,1]$ recovers a class of Bernoulli percolations. The following lemma, which will be sharpened below, is contained in the results of Lyons (1992). Because of notational differences, the brief proof is included. \begin{lem} \label{lem4.1} Consider a percolation in which ${\bf{P}} (\rho \leftrightarrow \sigma) = p(|\sigma|)$ for some strictly positive function $p$. \begin{quote} $(i)$ First moment method: ${\bf{P}} (\rho \leftrightarrow \partial \Gamma)$ is bounded above by the Hausdorff content of $\Gamma$ in the gauge $p(n)$. If $\Gamma$ is infinite and has zero Hausdorff measure in gauge $\{ p(n) \}$, then ${\bf{P}} (\rho \leftrightarrow \partial \Gamma) = 0$. $(ii)$ Second moment method: Suppose further that there is a positive, nonincreasing function $g : {\bf{Z}}^+ \rightarrow {\bf{R}}$ such that for any two vertices $\sigma , \tau \in \Gamma_n$ with $|\sigma \wedge \tau| = k$, \begin{equation} \label{eqQB} {\bf{P}} (\rho \leftrightarrow \sigma \mbox{ and } \rho \leftrightarrow \tau) \leq {p(n)^2 \over g(k) } . \end{equation} Then $\,{\bf{P}} (\rho \leftrightarrow \partial \Gamma) \geq \mbox{Cap}_g (\Gamma)$. \end{quote} \end{lem} {\bf Remark:} \\ For Bernoulli percolations,~(\ref{eqQB}) holds with equality for $g(k) = p(k)$. More generally, when $g(k) = p(k) / M$ for some constant $M > 0$, the percolation is termed {\em quasi-Bernoulli} (Lyons 1989). In this case, \begin{equation} \label{eqQB2} {\bf{P}} (\rho \leftrightarrow \partial \Gamma) \geq {\mbox{Cap}_p (\Gamma) \over M} . \end{equation} Note also that any target percolation satisfies the condition in the lemma: ${\bf{P}} (\rho \leftrightarrow \sigma) = p(|\sigma|)$. {\sc Proof:} $(i)$: For any cutset ${\bf{P}}i$, $${\bf{P}} (\rho \leftrightarrow \partial \Gamma) \leq {\bf{P}} (\rho \leftrightarrow \sigma \mbox{ for some } \sigma \in {\bf{P}}i) \leq \sum_{\sigma \in {\bf{P}}i} p(|\sigma|) .$$ The assertion follows by taking the infimum over cutsets. $(ii)$: First assume that $\Gamma$ has finite height $N$. Let $\mu$ be a probability measure on $\partial \Gamma = \Gamma_N$. Consider the random variable $$ Y_N = \sum_{\sigma \in \Gamma_N} \mu (\sigma) {\bf 1}_{\rho \leftrightarrow \sigma} .$$ Clearly ${\bf{E}} Y_N = p(N)$ and \begin{eqnarray*} {\bf{E}} Y_N^2 & = & {\bf{E}} \sum_{\sigma , \tau \in \Gamma_N} \mu (\sigma) \mu (\tau) {\bf 1}_{\rho \leftrightarrow \sigma \mbox{ and } \rho \leftrightarrow \tau} \\[2ex] & \leq & \sum_{\sigma , \tau \in \Gamma_N} \mu (\sigma) \mu (\tau) { p(N)^2 \over g(|\sigma \wedge \tau|)} \\[2ex] & = & p(N)^2 I_g (\mu) \end{eqnarray*} by the definition of the energy $I_g$. 
The Cauchy-Schwartz inequality gives $$p(N)^2 = {\bf{E}} (Y_N {\bf 1}_{Y_n > 0})^2 \leq {\bf{E}} Y_N^2 {\bf{P}} (Y_N > 0)$$ and dividing the previous inequality by ${\bf{E}} Y_N^2$ gives $1 \leq I_g (\mu) {\bf{P}} (Y_N > 0)$. Since $\mbox{Cap}_g (\Gamma)$ is the supremum of $I_g(\mu)^{-1}$ over probability measures $\mu$, it follows that ${\bf{P}} (Y_N > 0) \geq \mbox{Cap}_g (\Gamma)$. The case where $N = \infty$ is obtained form a straightforward passage to the limit. $ \Box$ For infinite trees, we are primarily interested in whether ${\bf{P}} (\rho \leftrightarrow \partial \Gamma)$ is positive. The above lemma fails to give a sharp answer even for quasi-Bernoulli percolations, since the condition $\mbox{Cap}_p (\Gamma) = 0$ does not imply zero Hausdorff measure. We believe but cannot prove the following. \begin{conj} \label{conjcapacity} For any target percolation on an infinite tree with ${\bf{P}} (\rho \leftrightarrow \partial \Gamma) > 0$, the tree $\Gamma$ must have positive capacity in gauge $p(n)$, where $p(|\sigma|) = {\bf{P}} (\rho \leftrightarrow \sigma)$. \end{conj} To see why we restrict to target percolations, let $\Gamma$ be the infinite binary tree, let $\xi$ be a ray chosen uniformly from the canonical measure on $\partial \Gamma$ and let $W = \xi$. Then ${\bf{P}} (\rho \rightarrow \partial \Gamma) = 1$, but $\Gamma$ has zero capacity in gauge $p(n) = 2^{-n}$. Evans (1992) gives a capacity criterion on the target set $B$ necessary and sufficient for ${\bf{P}} (\rho \leftrightarrow \partial \Gamma) > 0$ in the special case where $\Gamma$ is a homogeneous tree (every vertex has the same degree). His work was extended by Lyons (1992), who showed that $\mbox{Cap}_p (\Gamma) > 0$ was necessary for ${\bf{P}}(\rho \leftrightarrow \partial \Gamma) > 0$ for all Bernoulli percolations and for non-Bernoulli percolations satisfying a certain condition. Specific non-Bernoulli target percolations are used in Lyons (1989) to analyze the Ising model, and in Lyons and Pemantle (1992), Benjamini and Peres (1994b) and Pemantle and Peres (1994) to determine the speed of first-passage percolation. In the present work, the special case $$B = \{ {\bf x} \in {\bf{R}}^\infty : \sum_{i=1}^n x_i \geq 0 \mbox{ for all } n \}$$ will play a major role. Unfortunately, this set does not satisfy Lyons' (1992) condition. It is, however, an {\em increasing set}, meaning that if ${\bf x} \geq {\bf y}$ componentwise and ${\bf y} \in B$, then ${\bf x} \in B$. This motivates the next lemma. \begin{lem}[sharpened first moment method] \label{sharpened} Consider a target percolation on a tree $\Gamma$ in which the target set $B$ is an increasing set. Assume that $p(n) \, {\stackrel {def} {=}} p(\rho \leftrightarrow \sigma)$ for $\sigma \in \Gamma_n$ goes to zero as $n \rightarrow \infty$. \begin{quote} $(i)$ With probability one, the number of surviving rays (elements of $W$) is either zero or infinite. $(ii)$ If $\partial \Gamma$ has $\sigma$-finite Hausdorff measure in the gauge $\{ p(n) \}$, then ${\bf{P}} (\rho \leftrightarrow \partial \Gamma) = 0$. \end{quote} \end{lem} \noindent{Remark:} With further work, we can show that the assumption that $B$ is an increasing set may be dropped; since this is the only case we need, we impose the assumption to greatly simplify the proof. The above example shows that the ``target'' assumption cannot be dropped. \noindent{\sc Proof:} Assume that ${\bf{P}} (\mbox{ finitely many rays survive}) > 0$. 
Let $A_k$ denote the event that exactly $k$ rays survive and fix a $k$ for which ${\bf{P}} (A_k) > 0$. Let ${\cal{F}}_n$ denote the $\sigma\mbox{-field}$ generated by $\{ X (\sigma) : |\sigma| \leq n \}$. Convergence of the martingale ${\bf{P}} (A_k \, | \, {\cal{F}}_n)$ shows that for sufficiently large $n$ the probability that ${\bf{P}} (\mbox{ exactly $k$ surviving rays} \, | \, {\cal{F}}_n) > .99$ is positive. Let $\{ x (\sigma) : |\sigma| \leq n \}$ be a set of values for which $${\bf{P}} (\mbox{exactly $k$ rays survive} \, | \, X(\sigma) = x (\sigma) : |\sigma| \leq n) > 0.99 \, .$$ Totally order the rays of $\Gamma$ in any way; since the probability of any fixed ray surviving is zero, it follows that there is a ray $\xi_0$ such that $${\bf{P}} (\mbox{some $\xi < \xi_0$ survives} \, | \, X(\sigma) = x (\sigma) : |\sigma| \leq n) = {1 \over 2} \, .$$ This implies that $${\bf{P}} (\mbox{at least $k$ rays $\xi > \xi_0$ survive} \, | \, X(\sigma) = x (\sigma) : |\sigma| \leq n) \geq 0.49 \, .$$ Since all the $X(\sigma)$ are conditionally independent given $X(\sigma) = x (\sigma)$ for $|\sigma| \leq n$, and since the events of at least one ray less than $\xi_0$ surviving and at least $k$ rays greater than $\xi_0$ surviving are both increasing, we may apply the FKG inequality to conclude that $${\bf{P}} (\mbox{at least $k+1$ rays survive} \, | \, X(\sigma) = x (\sigma) : |\sigma| \leq n) \geq {0.49 \over 2} \, .$$ This contradicts the choice of $x(\sigma)$, so we conclude that $${\bf{P}} (\mbox{ finitely many rays survive}) = 0.$$ For part~$(ii)$, write $\Gamma = \bigcup_{n=1}^\infty \Gamma_k$ where each $\partial \Gamma_k$ has finite Hausdorff measure $h_k$ in the gauge $\{ p_n \}$. For each $k$ there are cutsets ${\bf{P}}i_k^{(j)}$ of $\Gamma_k$ tending to infinity such that $$\sum_{\sigma \in {\bf{P}}i_k^{(j)}} p (|\sigma|) \rightarrow h_k$$ and therefore the expected number of surviving rays of $\Gamma_k$ is at most $h_k$. Since the number of surviving rays of $\Gamma_k$ is either 0 or infinite, we conclude it is almost surely 0. $ \Box$ The remainder of this section makes progress towards Conjecture~\ref{conjcapacity} by proving that positive capacity is necessary for ${\bf{P}} (\rho \leftrightarrow \partial \Gamma) > 0$ in some useful cases. We do this by comparing different target percolations, varying either $\Gamma$ or the set $B$. The notation $p(n) = {\bf{P}} (\rho \leftrightarrow \sigma)$ for $\sigma \in \Gamma_n$ is written $p(B;n)$ when we want to emphasize dependence on $B$; similarly, ${\bf{P}} (B; \cdots)$ reflects dependence on $B$. It is assumed that the common distribution of the $X(\sigma)$ defining the target percolation never change, since this may always be accomplished through a measure-theoretic isomorphism. It will be seen below (and this makes the comparison theorems useful) that spherically symmetric trees are much easier to handle than general trees. This is partly because ${\bf{P}}(\rho \leftrightarrow \partial \Gamma)$ may be calculated recursively by conditioning. In particular, if $f(n)$ is the growth function for $\Gamma$ (i.e. 
each $\sigma \in \Gamma_{n-1}$ has $f(n)$ neighbors in $\Gamma_n$), $\Gamma (\sigma)$ is the subtree of $\Gamma$ rooted at a vertex $\sigma \in \Gamma_1$, and $B/x = \{ y \in {\bf{R}}^{N-1} : (x , y_1 , y_2 , \ldots, y_{N-1}) \in B \} \subseteq {\bf{R}}^{N-1}$ is the cross-section of $B$ at $x$, then conditioning on $X(\sigma)$ for $\sigma \in \Gamma_1$ gives \begin{equation} \label{eqrecursive} {\bf{P}} (B ; \rho \not\leftrightarrow \partial \Gamma) = \left [ {\bf{E}} {\bf{P}} ( B / X(\sigma) ; \sigma \not\leftrightarrow \partial \Gamma (\sigma)) \right ]^{f(1)} . \end{equation} Notice that this recursion makes sense if $f(1)$ is any positive real, not necessarily an integer. As a notational convenience, we define ${\bf{P}} (\rho \leftrightarrow \partial \Gamma)$ for {\em virtual} spherically symmetric trees with positive real growth functions, $f$, by~(\ref{eqrecursive}) for trees of finite height and passage to the limit for infinite trees. Specifically, if $B$ is any target set and $\Gamma$ is any tree, we let $f(n) = |\Gamma_n| / |\Gamma_{n-1}|$ and define the symbol ${\cal{S}}(\Gamma)$ to stand for the spherical symmetrization of $\Gamma$ in the sense that the expression ${\bf{P}} (B ; \rho \not\leftrightarrow \partial {\cal{S}}(\Gamma))$ is defined to stand for the value of the function $\Psi (B ; f(1) , \ldots , f(N))$, where $\Psi$ is defined by the following recursion in which $X$ is a random variable with the common distribution of the $X (\sigma)$: \begin{quote} $\Psi (B;a) = {\bf{P}} (X \notin B)^a$ if $B \subseteq {\bf{R}}$ and $a > 0$; $\Psi (B ; a_1 , \ldots , a_n) = \left [ {\bf{E}} \Psi (B/X ; a_2 , \ldots , a_n) \right ]^{a_1}$ ~~if $B \subseteq {\bf{R}}^n$ and $a_1 , \ldots , a_n > 0$. \end{quote}
\begin{th} \label{th4.2} Let $\Gamma$ be any tree of height $N \leq \infty$, let $B \subset {\bf{R}}^N$ be any target set, and let ${\cal{S}} (\Gamma)$ be the (virtual) spherically symmetric tree with $f(n) = |\Gamma_n| / |\Gamma_{n-1}|$. Let ${\cal{S}} (B)$ be a Cartesian product target set $\{ y \in {\bf{R}}^N : y_i \leq b_i \mbox{ for all finite } i \leq N \}$ with $b_i$ chosen so that $$\prod_{i=1}^n {\bf{P}} (X(\sigma) \leq b_i) = p(B;n) .$$ Then \begin{eqnarray} {\bf{P}} (B ; \rho \leftrightarrow \partial \Gamma) & \leq & {\bf{P}} (B ; \rho \leftrightarrow \partial {\cal{S}} (\Gamma)) \label{eqcomp1} \\[2ex] & \leq & {\bf{P}} ({\cal{S}} (B) ; \rho \leftrightarrow \partial {\cal{S}} (\Gamma)) \label{eqcomp2} \\[2ex] & \leq & 2 \left [ p(B;N)^{-1} |\Gamma_N|^{-1} + \sum_{k=0}^{N-1} p(B;k)^{-1} (|\Gamma_k|^{-1} - |\Gamma_{k+1}|^{-1}) \right ]^{-1} \label{eqcomp3} \end{eqnarray} where the $p(B;N)^{-1} |\Gamma_N|^{-1}$ term appears only if $N < \infty$. If ${\cal{S}}(\Gamma)$ exists as a tree, i.e.\ $f(n)$ is an integer for all $n$, then the expression~(\ref{eqcomp3}) is precisely $2 \mbox{Cap}_p (\Gamma)$. \end{th}
When $\Gamma$ is spherically symmetric, ${\cal{S}}(\Gamma) = \Gamma$ and we get: \begin{cor} \label{cor4.25} If $\Gamma$ is spherically symmetric then $\mbox{Cap}_p (\Gamma) > 0$ is necessary for ${\bf{P}} (\rho \leftrightarrow \partial \Gamma) > 0$ in any target percolation. $ \Box$ \end{cor} \noindent{Remark and counterexample:} The inequality ${\bf{P}} (B ; \rho \leftrightarrow \partial \Gamma) \leq {\bf{P}} ({\cal{S}} (B) ; \rho \leftrightarrow \partial \Gamma)$ holds for spherically symmetric trees by~(\ref{eqcomp2}) and also for certain types of target sets $B$ (see Theorem~\ref{th4.4} below) but fails in general.
Whenever this inequality holds, Lyons' (1992) result that $\mbox{Cap}_p (\Gamma) > 0$ is necessary for ${\bf{P}} (\rho \leftrightarrow \partial \Gamma) > 0$ in Bernoulli percolation implies Conjecture~\ref{conjcapacity} for that case, since ${\cal{S}} (B)$ defines a Bernoulli percolation. A counterexample to the general inequality is the tree: \\ \begin{picture}(260,150) \put(110,10){\circle*{3}} \put(150,10){\circle*{3}} \put(190,10){\circle*{3}} \put(130,50){\circle*{3}} \put(170,50){\circle*{3}} \put(150,90){\circle*{3}} \put(150,130){\circle*{3}} \put(150,130){\line(0,-1){40}} \put(150,90){\line(1,-2){20}} \put(150,90){\line(-1,-2){20}} \put(130,50){\line(-1,-2){20}} \put(130,50){\line(1,-2){20}} \put(170,50){\line(1,-2){20}} \put(240,50){$\Gamma$} \end{picture} \\ Let $X(\sigma)$ be uniform on $[0,1]$ and define $$ \left ( B = [0,1/2] \times [2\epsilon,1] \times [0,1] \right ) \cup \left ( [1/2,1] \times [0,1] \times [4\epsilon,1] \right ) .$$ Then ${\cal{S}} (B) = [0,1] \times [0,1-\epsilon] \times [0 , {1 - 3\epsilon \over 1 - \epsilon}]$ and $$ {\bf{P}} ({\cal{S}} (B) ; \rho \leftrightarrow \partial \Gamma) \leq 1 - 3 \epsilon^2 < 1 - 2 \epsilon^2 - 32\epsilon^3 = {\bf{P}} (B ; \rho \leftrightarrow \partial \Gamma)$$ for sufficiently small $\epsilon$. \noindent{{\bf Question:}} Is there an infinite tree, $\Gamma$, and a target set, $B$, for which $${\bf{P}} ({\cal{S}} (B) ; \rho \leftrightarrow \partial \Gamma) = 0 < {\bf{P}} (B ; \rho \leftrightarrow \partial \Gamma) \; ?$$ The proof of Theorem~\ref{th4.2} is based on the following convexity lemma. \begin{lem} \label{lem4.3} For any tree $\Gamma$ of height $n < \infty$, define the function $h_\Gamma (z_1 , \ldots , z_n)$ for arguments $1 \geq z_1 \geq \cdots \geq z_n \geq 0$ by $$h_\Gamma (z_1 , \ldots , z_n) = {\bf{P}} ({\cal{S}} (B) ; \rho \not\leftrightarrow \Gamma_n)$$ where $B$ defines a target percolation with ${\bf{P}} (B ; \rho \leftrightarrow \sigma) = z_{|\sigma|}$. If $\Gamma$ is spherically symmetric then $h_\Gamma$ is a convex function. The same holds for virtual trees, under the restriction that the growth function $f(n)$ is always greater than or equal to one. \end{lem} {\sc Proof:} Proceed by induction on $n$. When $n = 1$, certainly $h_\Gamma (z_1) = (1-z_1)^{|\Gamma_1|}$ is convex. Now assume the result for trees of height $n-1$ and let $\Gamma (\sigma)$ denote the subtree of $\Gamma$ rooted at $\sigma$: $\{ \tau : \tau \geq \sigma \}$. Since $\Gamma$ is spherically symmetric, all subtrees $\Gamma (\sigma)$ with $\sigma \in \Gamma_1$ are isomorphic, spherically symmetric trees of height $n-1$. By definition of $h_\Gamma$, $$h_\Gamma (z_1 , \ldots , z_n) = \left [ (1 - z_1) + z_1 h_{\Gamma (\sigma)} \left ( {z_2 \over z_1} , \ldots , {z_n \over z_1} \right ) \right ]^{|\Gamma_1|} $$ where $\sigma \in \Gamma_1$. By induction, the function $h_{\Gamma (\sigma)}$ is convex. This implies that $$g (z_1 , \ldots , z_n) = z_1 h_{\Gamma (\sigma)} \left ( {z_2 \over z_1} , \ldots , {z_n \over z_1} \right ) $$ is convex: since $g$ is homogeneous of degree one, it suffices to check convexity on the affine hyperplane $\{ z_1 = 1 \}$, where it is clear. Adding a linear function to $g$ and taking a power of at least one preserves convexity, so $h_\Gamma$ is convex, completing the induction. $ \Box$ {\sc Proof of Theorem}~\ref{th4.2}: The first inequality is proved in Pemantle and Peres (1994). For the second inequality it clearly suffices to consider trees of finite height, $N$. 
When $N = 1$ there is nothing to prove, so fix $N > 1$ and assume for induction that the inequality holds for trees of height $N-1$. Define $\Gamma ( \sigma)$, $h_\Gamma$, $B / x$ and $p(B;k)$ as previously, and observe that for every $j < N$, $${\bf{E}} \, p(B / X ; j-1) = p(B ; j)$$ where $X$ has the common distribution of the $\{X(\sigma)\}$. Use the induction hypothesis and the fact that $X(\sigma)$ are independent to get \begin{eqnarray*} && {\bf{P}} (B ; \rho \not\leftrightarrow \partial \Gamma) \\[2ex] & = & \prod_{\sigma \in \Gamma_1} \left [ 1 - p(B;1) + p(B;1) \, {\bf{E}} {\bf{P}} (B/X ; \sigma \not\leftrightarrow \partial \Gamma (\sigma)) \right ] \\[2ex] & \geq & \left [ 1 - p(B;1) + p(B;1) \, {\bf{E}} h_{\Gamma (\sigma)} \left ( {p(B/X;1) \over p(B;1)} , \ldots , {p(B/X;N-1) \over p(B;1)} \right ) \right ]^{|\Gamma_1|} . \end{eqnarray*} Utilizing the convexity of $h_{\Gamma (\sigma)}$ and Jensen's inequality, the last expression is at least \begin{eqnarray*} && \left [ 1 - p(B;1) + p(B;1) \, h_\Gamma \left ( {p(B;2) \over p(B;1)} , \ldots , {p(B;N) \over p(B;1)} \right ) \right ]^{|\Gamma_1|} \\[2ex] & = & h_\Gamma (p(B;1) , \ldots , p(B;N)) \\[2ex] & = & {\bf{P}} ({\cal{S}} (B) ; \rho \not\leftrightarrow \partial \Gamma) \end{eqnarray*} completing the induction and the proof of the second inequality. When $\Gamma$ is spherically symmetric, Theorem~2.1 of Lyons (1992) asserts that $${\bf{P}} ( \rho \leftrightarrow \partial \Gamma) \leq 2 \mbox{Cap}_p (\Gamma).$$ The measure $\mu$ that minimizes $I_p (\mu)$ for spherically symmetric trees is easily seen to be uniform, with $$I_p (\mu) = \int\int p(|\xi \wedge \eta|)^{-1} d\mu^2 ; $$ summing by parts shows that RHS of~(\ref{eqcomp3}) is equal to $2 I_p (\mu)^{-1}$. When $\Gamma$ is not spherically symmetric, the final inequality is proved by induction, as follows. Fix $B$ and let $p_n = {\bf{P}} (B; \rho \leftrightarrow \sigma) = {\bf{P}} ({\cal{S}}(B) ; \rho \leftrightarrow \sigma)$ for $|\sigma| = n$. Let $\psi_p (\lambda_1 , \ldots , \lambda_N)$ denote ${\bf{P}} ({\cal{S}} (B) ; \rho \leftrightarrow \partial \Gamma)$ for a (possibly virtual) spherically symmetric tree $\Lambda$ whose growth numbers $f(n)$ satisfy $\prod_{i=0}^{k-1} f(i) = \lambda_k$. Write $R_p ( \lambda_1 , \ldots , \lambda_N)$ for the ``electrical resistance'' of this tree when edges at level $i$ are assigned resistance $p_i^{-1} - p_{i-1}^{-1}$ and an additional unit resistor is attached to the root. Explicitly, define $$R_p ( \lambda_1 , \ldots , \lambda_N) = p_N^{-1} \lambda_N^{-1} + \sum_{i=0}^{N-1} p_i^{-1} (\lambda_i^{-1} - \lambda_{i+1}^{-1}) ,$$ where $\lambda_0 \, {\stackrel {def} {=}} 1$, so the inequality to be proved is \begin{equation} \label{eq19} \psi_p (\lambda_1 , \ldots , \lambda_N) \leq 2 R_p (\lambda_1 , \ldots , \lambda_N)^{-1} . \end{equation} Proceed by induction, the case $N = 0$ boiling down to $1 \leq 2$. 
Letting $p_i' = p_i / p_1$, we have $$\psi_p (\lambda_1 , \ldots , \lambda_N) = 1 - \left [ 1 - p_1 \psi_{p'} \left ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} \right ) \right ]^{\lambda_1} $$ Using the elementary inequality $${1 - x^{\lambda_1} \over 1 + x^{\lambda_1}} \leq \lambda_1 {1 - x \over 1 + x} , $$ valid for all $x \in [0,1]$ and $\lambda_1 \geq 1$, we obtain \begin{eqnarray*} {\psi_p ( \lambda_1 , \ldots , \lambda_N) \over 2 - \psi_p ( \lambda_1 , \ldots , \lambda_N)} & = & {1 - \left [ 1 - p_1 \psi_{p'} ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} ) \right ]^{\lambda_1} \over 1 + \left [ 1 - p_1 \psi_{p'} ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} ) \right ]^{\lambda_1} } \\[3ex] & \leq & { \lambda_1 p_1 \psi_{p'} ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} ) \over 2 - p_1 \psi_{p'} ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} )} . \end{eqnarray*} Applying the inductive hypothesis, this is at most $$ { 2 p_1 \lambda_1 R_{p'} ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} )^{-1} \over 2 - 2 p_1 R_{p'} ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} )^{-1}} = {p_1 \lambda_1 \over R_{p'} ( {\lambda_2 \over \lambda_1} , \ldots , {\lambda_N \over \lambda_1} ) - p_1 } . $$ The last expression may be simplified using $$R_p ( \lambda_1 , \ldots , \lambda_N) = 1 - \lambda_1^{-1} + p_1^{-1} \lambda_1 ^{-1} R_{p'} (\lambda_2 / \lambda_1 , \ldots , \lambda_N / \lambda_1)$$ to get $$ {\psi_p ( \lambda_1 , \ldots , \lambda_N) \over 2 - \psi_p ( \lambda_1 , \ldots , \lambda_N)} \leq {p_1 \lambda_1 \over p_1 \lambda_1 (R_p (\lambda_1 , \ldots , \lambda_N) - 1)} = {2 R_p (\lambda_1 , \ldots , \lambda_N)^{-1} \over 2 - 2 R_p (\lambda_1 , \ldots , \lambda_N)^{-1}} .$$ This proves~(\ref{eq19}) and the theorem. $ \Box$ The last result of this section gives a condition on the target set $B$, sufficient to imply that ${\bf{P}} (\rho \leftrightarrow \partial \Gamma)$ increases when $B$ is replaced by ${\cal{S}}(B)$. The condition is rather strong, however it can be applied in two very natural cases -- see Theorem~\ref{th5.1} below. \begin{th} \label{th4.4} Let $\Gamma$ be any tree of height $N \leq \infty$, let $X(\sigma)$ be $\mbox{i.i.d.}$ real random variables, and let $B$ be any target set. For integers $k \leq j \leq N$ and real numbers $x_1 , \ldots , x_k$, define $$p_j (x_1 , \ldots , x_k) = {\bf{P}} ((X_{k+1} , \ldots , X_j) \in \pi_j (B) / x_1 \cdots x_k) $$ to be the measure of the cross section at $x_1 , \ldots , x_k$ of the projection onto the first $j$ coordinates of $B$. Suppose that for every fixed $x_1 , \ldots , x_{k-1}$, the matrix $M$ whose $(y,j)$-entry is $p_j (x_1 , \ldots , x_{k-1} , y)$ is totally positive of order two ($TP_2$), i.e. $M_{xi} M_{yj} \geq M_{yi} M_{xj}$ when $k \leq i < j$ and $x < y$. Then \begin{equation} \label{eq20} {\bf{P}} (B ; \rho \leftrightarrow \partial \Gamma) \leq {\bf{P}} ({\cal{S}} (B) ; \rho \leftrightarrow \partial \Gamma) . \end{equation} \end{th} We shall require the following version of Jensen's inequality. \begin{lem} \label{po convexity} Let $\unlhd$ be a partial order on ${\bf{R}}^n$ such that for every $w \in {\bf{R}}^n$ the set $\{ z : z \unlhd w \}$ is convex. Let $\mu$ be a probability measure supported on a bounded subset of ${\bf{R}}^n$ which is totally ordered by $\unlhd$, and let $h : {\bf{R}}^n \rightarrow {\bf{R}}$ be a continuous function. 
If $h$ is convex on any segment connecting two comparable points $z \unlhd w$, then $$ h \left ( \int x \, d\mu \right ) \leq \int h(x) \, d\mu .$$ \end{lem} {\sc Proof:} It is enough to prove this in the case where $\mu$ has finite support, since we may approximate any measure by measures supported on finite subsets and use continuity of $h$ and bounded support to pass to the limit. Letting $z_1 \unlhd \cdots \unlhd z_m$ denote the support of $\mu$ and letting $a_i = \mu \{ z_i \}$, we proceed by induction on $m$. If $m = 2$, the desired inequality $h ( a_1 z_1 + a_2 z_2) \leq a_1 h(z_1) + a_2 h(z_2)$ is a direct consequence of the assumption on $h$. If $m > 2$, let $\nu$ be the measure which puts mass $a_1 + a_2$ at $(a_1 z_1 + a_2 z_2)/(a_1 + a_2)$ and mass $a_i$ at $z_i$ for each $i \geq 3$. The support of $\nu$ is a totally ordered set of cardinality $n-1$ so the induction hypothesis implies $$ h \left ( \int x \, d\mu \right ) = h \left ( \int x \, d\nu \right ) \leq \int h(x) \, d\nu .$$ Applying the convexity assumption on $h$ at $z_1$ and $z_2$ then gives $$\int h(x) \, d\nu \leq \int h(x) \, d\mu ,$$ completing the induction. $ \Box$ {\sc Proof of Theorem}~\ref{th4.4}: For each $n$, let $\bigtriangleup_n$ denote the space of points $\{ (z_1 , \ldots , z_n) \in {\bf{R}}^n : 1 \geq z_1 \geq \cdots \geq z_n \geq 0\}$ and for ${\bf z} , {\bf w} \in \bigtriangleup_n$, define ${\bf z} \preceq {\bf w}$ if and only if the matrix with rows ${\bf z}$ and ${\bf w}$ and first column $(1,1)$ is $TP_2$ (equivalently, $z_i / w_i$ is at most one and nonincreasing in $i$). Define $h_\Gamma$ on $\bigcup \bigtriangleup_n$ as in Lemma~\ref{lem4.3} so that $$h_\Gamma (z_1 , \ldots , z_n) = \prod_{\sigma \in \Gamma_1} \left [ (1 - z_1) + z_1 h_{\Gamma (\sigma)} \left ( {z_2 \over z_1} , \ldots , {z_n \over z_1} \right ) \right ] . $$ In order to use Lemma~\ref{po convexity} for $h_\Gamma$, $\preceq$, and $\bigtriangleup_n$, we observe first that $\preceq$ is closed under convex combinations in either argument. Observe also that $\bigtriangleup_n$ is compact and $h_\Gamma$ continuous; we now establish by induction on $n$ that $h_\Gamma$ is convex along the line segment joining ${\bf z}$ and ${\bf w}$ whenever ${\bf z} \preceq {\bf w} \in \bigtriangleup_n$. The initial step is immediate: $h_\Gamma (z_1) = (1-z_1)^{|\Gamma_1|}$, which is convex for $z_1 \in [0,1]$. When $n > 1$, observe that for each $\sigma \in \Gamma_1$, the function $1 - z_1 + z_1 h_{\Gamma (\sigma)} (z_2 / z_1 , \ldots , z_n / z_1)$ is decreasing along the line segment {}from ${\bf z}$ to ${\bf w}$ when ${\bf z} \preceq {\bf w}$. The product of decreasing convex functions is again convex, so it suffices to check that each $$\phi (z_1 , \ldots z_n) \, {\stackrel {def} {=}} z_1 h_{\Gamma(\sigma)} (z_2 / z_1 , \ldots , z_n / z_1) $$ is convex along such a line segment. Pictorially, we must show that the graph of $\phi$ in $\bigtriangleup_n \times {\bf{R}}$ defined by $\{ (z_1 , \ldots , z_{n+1}) : z_{n+1} = z_1 h_{\Gamma (\sigma)} (z_2 / z_1 , \ldots , z_n / z_1) \}$ lies below any chord $\overline ({\bf z} , \phi ({\bf z})) ({\bf w} , \phi ({\bf w}))$ whenever ${\bf z} \preceq {\bf w}$. Observe that the graph of $\phi$ is the cone of the set $\{ (1 , z_2 , \ldots , z_{n+1}) : z_{n+1} = h_{\Gamma (\sigma)} (z_2 , \ldots , z_n) \}$ with the origin. 
In other words, viewing $\bigtriangleup_{n-1}$ as embedded in $\bigtriangleup_n$ by $(z_2 , \ldots , z_n) \mapsto (1 , z_2 , \ldots , z_n)$, the graph of $\phi$ is the cone of the graph of the $n-1$-argument function $h_{\Gamma (\sigma)}$. To check that the chord of the graph of $\phi$ between ${\bf z}$ and ${\bf w}$ lies above the graph, it then suffices to see that the chord of the graph of $h_{\Gamma (\sigma)}$ between $(z_2 / z_1 , \ldots , z_n / z_1)$ and $(w_2 / w_1 , \ldots , w_n / w_1)$ lies above the graph. But ${\bf z} \preceq {\bf w} \in \bigtriangleup_n$ implies $(z_2 / z_1 , \ldots , z_n / z_1) \preceq (w_2 / w_1 , \ldots , w_n / w_1) \in \bigtriangleup_{n-1}$, so this follows from the induction hypothesis. We now prove the theorem for trees of finite height, the infinite case following from writing ${\bf{P}} (\rho \leftrightarrow \partial \Gamma)$ as the decreasing limit of ${\bf{P}} (\rho \leftrightarrow \Gamma_n)$. Let $N < \infty$ and proceed by induction on $N$, the case $N=1$ being trivial since $B = {\cal{S}}(B)$. Assume therefore that $N > 1$ and that the theorem is true for smaller values of $N$. The induction is then completed by justifying the following chain of identities and inequalities. By conditioning on the independent random variables $\{ X(\sigma) : \sigma \in \Gamma_1 \}$ and using the induction hypothesis we get: \begin{eqnarray} {\bf{P}} (B ; \rho \not\leftrightarrow \partial \Gamma) & = & \prod_{\sigma \in \Gamma_1} {\bf{E}} {\bf{P}} (B / X ; \sigma \not\leftrightarrow \partial \Gamma (\sigma)) \label{new 4.7} \\[2ex] & \geq & \prod_{\sigma \in \Gamma_1} {\bf{E}} {\bf{P}} ({\cal{S}} (B) / X ; \sigma \not\leftrightarrow \partial \Gamma (\sigma)) . \nonumber \end{eqnarray} Recalling the definition of $h_{\Gamma (\sigma)}$ and $p_j (x)$, this is equal to \begin{eqnarray} && \prod_{\sigma \in \Gamma_1} {\bf{E}} \left [ h_{\Gamma (\sigma)} (p_2 (X) , \ldots , p_N (X)) \right ] \nonumber \\[2ex] & = & \prod_{\sigma \in \Gamma_1} \left [ (1 - p_1) + p_1 \int h_{\Gamma (\sigma)} (p_2 (x) , \ldots , p_N (x)) \, d\mu (x) \right ] \label{new 4.10} , \end{eqnarray} where $\mu$ is the conditional distribution of $X$ given $X \in \pi_1 (B)$ and $p_1 = {\bf{P}} (X \in \pi_1 (B))$. Now observe that the vectors $(p_2 (x) , \ldots , p_N (x))$ with $x \in \pi_1 (B)$ are totally ordered by $\preceq$ according to the $k=1$ case of the $TP_2$ assumption of the theorem. Since $\int p_j (x) \, d\mu (x) = p_j / p_1$ for $2 \leq j \leq N$, Lemma~\ref{po convexity} applied to $h_{\Gamma (\sigma)}$ and $\preceq$ on the set $\bigtriangleup_{N-1}$ shows that~(\ref{new 4.10}) is at least \begin{eqnarray} && \prod_{\sigma \in \Gamma_1} \left [ (1 - p_1) + p_1 h_{\Gamma (\sigma)} \left ( {p_2 \over p_1} , \ldots , {p_N \over p_1} \right ) \right ] \nonumber \\[2ex] & = & h_{\Gamma} ( p_1 , \ldots , p_N) \nonumber \\[2ex] & = & {\bf{P}} ({\cal{S}} (B) ; \rho \not\leftrightarrow \partial \Gamma ) . \label{new 4.13} \end{eqnarray} Comparing~(\ref{new 4.7}) to~(\ref{new 4.13}) we see that the theorem is established. $ \Box$ \section{Positive rays for tree-indexed random walks} \setcounter{equation}{0} This section puts together results from the previous two sections in order to prove Theorem~\ref{th2.2} and a few related corollaries and examples. {\sc Proof of Theorem}~\ref{th2.2}: $(i)$ We are given a tree $\Gamma$ with positive capacity in gauge $\phi (n) = n^{-1/2}$ and $\mbox{i.i.d.}$ real random variables $\{ X(\sigma) \}$ with mean zero and finite variance. 
Consider the target percolation with target set $B^{(0)} = \{ {\bf x} \in {\bf{R}}^\infty : \sum_{i=1}^n x_i \geq 0 \mbox{ for all } n \}$. By Proposition~\ref{prFeller}, $$c_1' |\sigma|^{-1/2} \leq {\bf{P}} (\rho \leftrightarrow \sigma) \leq c_1'' |\sigma|^{-1/2} .$$ Now we verify that this percolation is quasi-Bernoulli, that is to say, \begin{equation} \label{eq21} {\bf{P}} ( \rho \leftrightarrow \sigma \mbox{ and } \rho \leftrightarrow \tau \, | \, \rho \leftrightarrow \sigma \wedge \tau) \leq c \, { |\sigma \wedge \tau| \over |\sigma|^{1/2} |\tau|^{1/2}} \end{equation} for $\sigma , \tau \in \Gamma$. Assume without loss of generality that $|\sigma \wedge \tau| \leq (1/2) \min (|\sigma| , |\tau|)$, since otherwise the claim is immediate. The LHS of~(\ref{eq21}) is equal to \begin{equation} \label{eq 940102b} \begin{array}{l} \int_0^\infty {\bf{P}} (S(\sigma \wedge \tau) \in dy \, | \, \rho \leftrightarrow \sigma \wedge \tau) \cdot \\ ~~~~{\bf{P}} (\rho \leftrightarrow \sigma \, | \, \rho \leftrightarrow \sigma \wedge \tau, S(\sigma \wedge \tau) = y) \cdot {\bf{P}} (\rho \leftrightarrow \tau \, | \, \rho \leftrightarrow \sigma \wedge \tau, S(\sigma \wedge \tau) = y) \, . \end{array} \end{equation} Recalling the definition of $T_y$ as the first time a trajectory is less than $-y$, we may write \begin{eqnarray*} {\bf{P}} (\rho \leftrightarrow \sigma \, | \, \rho \leftrightarrow \sigma \wedge \tau, S(\sigma \wedge \tau) = y) & = & \\ {\bf{P}} ( T_y \geq |\sigma| - |\sigma \wedge \tau|) & \leq & 4 c_2 {y \over |\sigma|^{1/2}} \end{eqnarray*} by part~$(i)$ of Lemma~\ref{lem ii-iv}. A similar bound holds for the last factor in~(\ref{eq 940102b}). Now use the second part of Lemma~\ref{lem ii-iv} to show that~(\ref{eq 940102b}) is at most \begin{eqnarray*} && 4 c_2^2 \int_0^\infty {\bf{P}} (S(\sigma \wedge \tau) \in dy \, | \, \rho \leftrightarrow \sigma \wedge \tau) {y^2 \over |\sigma|^{1/2} |\tau|^{1/2}} \\[2ex] & = & {4 c_2^2 \over |\sigma|^{1/2} |\tau|^{1/2}} {\bf{E}} (S(\sigma \wedge \tau)^2 \, | \, \rho \leftrightarrow \sigma \wedge \tau) \\[2ex] & \leq & 4 c_2^2 c_3 \, {|\sigma \wedge \tau| \over |\sigma|^{1/2} |\tau|^{1/2}} , \end{eqnarray*} verifying the claim. Putting $|\sigma| = |\tau| = n$ and $|\sigma \wedge \tau| = k$, it immediately follows that ${\bf{P}} (\rho \leftrightarrow \sigma \mbox{ and } \rho \leftrightarrow \tau) \leq c \sqrt{k} / n$, and the second moment method (Lemma~\ref{lem4.1} part~$(ii)$) implies that with positive probability a ray exists along which $S(\sigma)$ remains nonnegative.
To obtain the full assertion of the theorem, let $f(n)$ be any increasing sequence satisfying $\sum n^{-3/2} f(n) < \infty$ and define a new target percolation with target set $$B^{(f)} = \{ {\bf x} \in {\bf{R}}^\infty : \sum_{i=1}^n x_i \geq f(n) {\bf 1}_{n \geq n_f} \mbox{ for all } n \geq 1 \} $$ where $n_f$ is as in Theorem~\ref{th3.1}, part~$(i)$. The conclusion of Theorem~\ref{th3.1}, part~$(i)$, shows that ${\bf{P}} (B^{(f)} ; \rho \leftrightarrow \sigma)$ is of order $|\sigma|^{-1/2}$; since ${\bf{P}} (B^{(f)} ; \rho \leftrightarrow \sigma \mbox{ and } \rho \leftrightarrow \tau) \leq {\bf{P}} (B^{(0)} ; \rho \leftrightarrow \sigma \mbox{ and } \rho \leftrightarrow \tau)$, the new percolation $B^{(f)}$ is still quasi-Bernoulli. Thus with positive probability, $\Gamma$ contains a ray along which $S(\sigma) \geq f(|\sigma|)$ for sufficiently large $|\sigma|$.
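As a sanity check on the statement just obtained (an illustration only, not needed for the proof; the $\pm 1$ increments and the depth values are arbitrary choices), the probability that a binary tree of finite depth contains a vertex joined to the root by a path along which all partial sums of $\pm 1$ labels are nonnegative can be computed exactly by conditioning level by level, in the spirit of the recursion~(\ref{eqrecursive}). Since the binary tree has positive capacity in gauge $n^{-1/2}$, these probabilities should decrease to a positive limit as the depth grows.
\begin{verbatim}
def p_nonneg_ray(depth):
    # Exact probability that the rooted binary tree of the given depth has a
    # vertex at the bottom level whose path from the root keeps all partial
    # sums >= 0, when every non-root vertex carries an independent +-1 label.
    # q[s] = P(some ray of the remaining depth survives | current sum is s)
    q = {s: 1.0 for s in range(depth + 1)}           # remaining depth 0
    for d in range(1, depth + 1):
        new_q = {}
        for s in range(depth - d + 1):
            # chance that one fixed child leads to no surviving ray
            dead = (0.5 * (1.0 - q[s + 1])
                    + 0.5 * (1.0 if s == 0 else 1.0 - q[s - 1]))
            new_q[s] = 1.0 - dead ** 2               # the two children are independent
        q = new_q
    return q[0]

for D in (10, 50, 200, 800):
    print(D, p_nonneg_ray(D))
\end{verbatim}
The event constructed above, that some ray of $\Gamma$ satisfies $S(\sigma) \geq f(|\sigma|)$ for all sufficiently large $|\sigma|$, has so far only been shown to have positive probability.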
To see that this event actually has probability one, observe that it contains the tail event $$\{ \exists \xi \in \partial \Gamma \; \exists C > 0 \; \forall \sigma \in \xi , \; S(\sigma) \geq f(|\sigma|) + |\sigma|^{1/4} - C \} , $$ which has positive probability by the preceding argument, hence probability one.
$(ii)$ Assume that $\Gamma$ has $\sigma$-finite Hausdorff measure in gauge $\phi (n) = n^{-1/2}$. For any nondecreasing function $f$, consider the random subgraph $W_{-f} = \{ \sigma \in \Gamma : S(\sigma) \geq -f (|\sigma|)\}$. By the summability assumption on $f$ and by Theorem~\ref{th3.1} part II, there is a constant $c$ for which $$p(|\sigma|) = {\bf{P}} (\rho \leftrightarrow \sigma) \leq c |\sigma|^{-1/2} .$$ The sharpened first moment method (Lemma~\ref{sharpened}) implies that $W_{-f}$ almost surely fails to contain a ray of $\Gamma$. This easily implies the stronger statement that the subgraph $W_{-f}$ has no infinite components, almost surely.
$(iii)$ Define $W_{-f}$ and $p(|\sigma|)$ as above. From Theorem~\ref{th4.2} we get \begin{equation} \label{eq23} {\bf{P}} (\rho \leftrightarrow \partial \Gamma) \leq 2 \left [ \sum_{k=0}^\infty p(k)^{-1} \left ( |\Gamma_k|^{-1} - |\Gamma_{k+1}|^{-1} \right ) \right ]^{-1} . \end{equation} Since $p(k) \leq c k^{-1/2}$, summation by parts shows that \begin{eqnarray*} && \sum_{k=0}^\infty p(k)^{-1} \left ( |\Gamma_k|^{-1} - |\Gamma_{k+1}|^{-1} \right ) \\[2ex] & \geq & c^{-1} \sum_{k=0}^\infty k^{1/2} \left ( |\Gamma_k|^{-1} - |\Gamma_{k+1}|^{-1} \right ) \\[2ex] & = & c^{-1} \sum_{k=1}^\infty [k^{1/2} - (k-1)^{1/2}] |\Gamma_k|^{-1} \\[2ex] & \geq & (2c)^{-1} \sum_{k=1}^\infty k^{-1/2} |\Gamma_k|^{-1} , \end{eqnarray*} so if the last sum is infinite then the RHS of~(\ref{eq23}) is zero, completing the proof. $ \Box$
For certain special distributions of the step sizes $\{ X(\sigma)\}$, the class of trees for which some ray stays positive with positive probability may be sharply delineated. \begin{th} \label{th5.1} Let $\Gamma$ be any infinite tree and let the $\mbox{i.i.d.}$ random variables $\{X(\sigma)\}$ have common distribution $F_1$ or $F_2$, where $F_1$ is a standard normal and $F_2$ is the distribution putting probability $1/2$ each on $\pm 1$. Then the probability that $S(\sigma) \geq 0$ along some ray of $\Gamma$ is nonzero if and only if $\Gamma$ has positive capacity in gauge $\phi (n) = n^{-1/2}$. \end{th} {\bf Remark:} The usual variants also follow. When $\Gamma$ has positive capacity in gauge $\phi$, the probability is one that some ray of $\Gamma$ has $S(\sigma) < 0$ finitely often. This is equivalent to finding, almost surely, a ray for which $S(\sigma) \geq f(|\sigma|)$ all but finitely often, for any monotone $f$ satisfying $\sum n^{-3/2} |f(n)| < \infty$.
{\sc Proof:} Let $B \subseteq {\bf{R}}^\infty$ be the target set $\{ {\bf x} \in {\bf{R}}^\infty : \sum_{i=1}^n x_i \geq 0 \mbox{ for all } n \}$. One half of the theorem, namely that positive capacity implies ${\bf{P}} (B ; \rho \leftrightarrow \partial \Gamma) > 0$, follows immediately from part~$(i)$ of Theorem~\ref{th2.2}. For the other half, observe that zero capacity implies ${\bf{P}} ({\cal{S}} (B) ; \rho \leftrightarrow \partial \Gamma) = 0$, since ${\cal{S}}(B)$ is Bernoulli with the same values of $p(n)$, and $\mbox{Cap}_p (\Gamma) > 0$ is known to be necessary and sufficient for percolation (cf. remarks after Conjecture~\ref{conjcapacity}).
The present theorem then follows from~(\ref{eq20}) once the conditions of Theorem~\ref{th4.4} are verified. It suffices to establish $M_{yj} / M_{xj} > M_{yi} / M_{xi}$ for $y > x$, in the case where $j = i+1$. To verify this, pick any $j,k$ and any $x_1 , \ldots , x_k \geq 0$ and observe that $p_j (x_1 , \ldots , x_k) = {\bf{P}} (x_1 + \cdots + x_k + S_i \geq 0 \mbox{ for all } i \leq j)$, where $\{ S_i \}$ is a random walk with step sizes distributed as the $\{ X(\sigma)\}$. It suffices then to show that for $0 \leq x < y$, $${\bf{P}} (y + S_{j+1} \geq 0 \, | \, y + S_i \geq 0 : i \leq j) \geq {\bf{P}} (x + S_{j+1} \geq 0 \, | \, x + S_i \geq 0 : i \leq j) .$$ To see this in the case of $F_1$, use induction on $j$. For $j = 0$ one pointmass obviously dominates the other. Assuming it now for $j-1$, write $${\bf{P}} (y + S_{j+1} \geq 0 \, | \, y + S_i \geq 0 : i \leq j) = \int d\nu_y (z) {\bf{P}} (z + S_j \geq 0 \, | \, z + S_i \geq 0 : i \leq j-1) ,$$ where $\nu_y (z)$ is the conditional measure of $y + S_1$ given $S_i \geq -y : i \leq j$. The Radon-Nikodym derivative $d\nu_y / d\nu_x$ at $z \geq y$ is $dF_1 (z-y) / dF_1 (z-x)$ times a normalizing constant. This is an increasing function of $z$, by the increasing likelihood property of the normal distribution. Thus $\nu_y$ stochastically dominates $\nu_x$. By induction, the integrand is increasing in $z$, which, together with the stochastic domination, establishes the inequality. The same argument works for $F_2$, noting that $y - x$ is always an even integer. $ \Box$ \begin{cor} \label{bouncing rays} Suppose the edges of an infinite tree $\Gamma$ are labeled by $\mbox{i.i.d.}$, mean zero, finite variance random variables $\{ X(\sigma)\}$ with partial sums $\{ S(\sigma)\}$. Assume that $$ \begin{array}{l} \mbox{either } \Gamma \mbox{ is spherically symmetric} \\ \mbox{or } \\ \{ X(\sigma)\} \mbox{ are normal or take values } \pm 1 \end{array} \hspace{2in} (*)$$ If there is almost surely a ray along which $\inf S(\sigma) > - \infty$, then there is almost surely a ray along which $\lim S(\sigma) = + \infty$. \end{cor} {\sc Proof:} Both are equivalent to $\Gamma$ having positive capacity in gauge $n^{-1/2}$. $ \Box$ {\bf Problem:} Remove the assumption (*). If the moment generating function of $X$ fails to exist in a neighborhood of zero, it is possible that ${\bf{E}} X < 0$ but still some trees of polynomial growth have rays along which $S(\sigma) \rightarrow \infty$. The critical growth exponent need not be $1/2$ in this case. We conclude this section with such an example. Suppose that the common distribution of the $X(\sigma)$ is a symmetric, stable random variable with index $\alpha \in (1,2)$. Fix $c > 0$ and consider the target set $$B = \{ {\bf x} \in {\bf{R}}^\infty : \sum_{i=1}^n x_i > cn \mbox{ for all } n \}$$ and $$B' = \{ {\bf x} \in {\bf{R}}^\infty : cn^2 > \sum_{i=1}^n x_i > cn \mbox{ for all } n \} . $$ The following estimates may be proved. \begin{pr} \label{pr stables} For $\sigma , \tau \in \Gamma_n$, let $k = |\sigma \wedge \tau|$. Then \begin{equation} \label{eqst1} {\bf{P}} (B' ; \rho \leftrightarrow \sigma) \leq {\bf{P}} (B ; \rho \rightarrow \sigma) \leq c_1 n^{-\alpha} \end{equation} \begin{equation} \label{eqst2} {\bf{P}} (B' ; \rho \leftrightarrow \sigma \mbox{ and } \rho \leftrightarrow \tau) / {\bf{P}} (B' ; \rho \rightarrow \sigma)^2 \leq c_2 (\epsilon) k^{1 + 4\alpha + \epsilon} \end{equation} for any $\epsilon > 0$ and some constants $c_i > 0$. 
\end{pr} $ \Box$ Suppose that $\Gamma$ is a spherically symmetric tree with growth rate $|\Gamma_n| \approx n^{\beta}$. Plugging~(\ref{eqst1}) into Lemma~\ref{lem4.1}, we see that ${\bf{P}} (B' ; \rho \leftrightarrow \partial \Gamma)$ is zero when $\beta < \alpha$, while plugging in~(\ref{eqst2}) shows that ${\bf{P}} (B' ; \rho \leftrightarrow \partial \Gamma)$ is positive when $\beta > (1 + 4 \alpha)$. If one then considers the tree-indexed random walk whose increments are distributed as $X - c$, one sees that $\beta < \alpha$ implies that with probability one $S(\sigma) < 0$ infinitely often on every ray, whereas $\beta > 1 + 4 \alpha$ implies that with probability one $S(\sigma) \rightarrow \infty$ with at least linear rate along some ray. For $\beta > 1 + 4 \alpha$, RWRE with this distribution of $X$ is therefore transient even though ${\bf{E}} X < 0$. Defining the {\em sustainable speed} of a tree-indexed random walk to be the almost surely constant value $$\sup_{\xi} \liminf_{\sigma \in \xi} {S(\sigma) \over |\sigma|} ,$$ Lyons and Pemantle (1992) have shown that the Hausdorff dimension of $\Gamma$ and the distribution of $X$ together determine the sustainable speed of the tree-indexed random walk, as long as $X$ has a moment generating function in a neighborhood of zero. When the increments are symmetric stable random variables, the moment hypothesis is violated, and the analysis above shows that the sustainable speed of the tree-indexed random walk can be different for different polynomially growing trees of Hausdorff dimension zero. \section{Critical RWRE: proofs} \setcounter{equation}{0} The following easy lemma will be useful. \begin{lem} \label{bottleneck} If $\Gamma$ is any tree with conductances $C(\sigma)$, let $U(\sigma) = \min_{\rho < \tau \leq \sigma} C(\tau)$. Then the conductance from $\rho$ to a cutset ${\bf{P}}i$ is at most \begin{equation} \label{qottle} \sum_{\sigma \in {\bf{P}}i} U(\sigma) . \end{equation} \end{lem} {\sc Proof:} For each $\sigma \in {\bf{P}}i$, let $\gamma (\sigma)$ be the sequence of conductances on the path from $\rho$ to $\sigma$, and let $\Gamma'$ be a tree consisting of disjoint paths for each $\sigma \in {\bf{P}}i$, each path having conductances $\gamma (\sigma)$. $\Gamma$ is a contraction of $\Gamma'$, so by Rayleigh's monotonicity law (Doyle and Snell 1984), the conductance to ${\bf{P}}i$ in $\Gamma$ is less than or equal to the conductance of $\Gamma'$, which is the sum over $\sigma \in {\bf{P}}i$ of conductances bounded above by $U(\sigma)$. $ \Box$ {\sc Proof of Theorem}~\ref{th2.1}: $(i)$ This is almost immediate from Theorem~\ref{th2.2}, which was proved in the previous section. Since $\Gamma$ has positive capacity in gauge $n^{-1/2}$, that theorem guarantees the almost sure existence of a ray $\xi$ along which the partial sums $S(\sigma) = \sum_{\rho < \tau \leq \sigma} X(\tau)$ satisfy $S(\sigma) \geq 2 \log |\sigma|$ when $\sigma$ is sufficiently large. The total resistance along $\xi$ is then $\sum_{\sigma \in \xi} e^{-S(\sigma)} < \infty$, so the resistance of the entire tree is finite and the RWRE transient. $(ii)$ First we calculate a bound for the expected conductance along the path from $\rho$ to $\sigma$ in terms of $|\sigma|$, then use the Hausdorff measure assumption to bound the net conductance of the tree from $\rho$ to $\partial \Gamma$. In this calculation it is convenient to attach an auxiliary unit resistor, thought of as an edge $\rho' \rho$ comprising the $-1$ level of $\Gamma$. 
After this addition, the minimal conductance in any edge in the path from $\rho'$ to $\sigma$ is $e^{m(\sigma)}$, where $m(\sigma) = \min \{ S(\tau) : \rho \leq \tau \leq \sigma \}$ is nonpositive since $S(\rho) = 0$. Applying the first part of Lemma~\ref{lem ii-iv} gives $${\bf{P}} (m(\sigma) \geq -y) \leq c |\sigma|^{-1/2} \max \{ y,1 \} $$ for some constant $c$. Plug this into the identity $${\bf{E}} e^{m(\sigma)} = \int_0^1 {\bf{P}} (e^{m(\sigma)} \geq u) \, du ,$$ changing variables to $y = \log (1/u)$ to get \begin{eqnarray*} && {\bf{E}} e^{m(\sigma)} = \int_0^\infty {\bf{P}} (m(\sigma) \geq -y) e^{-y} \, dy \\[2ex] & \leq & c |\sigma|^{-1/2} \left ( 1 + \int_1^\infty y e^{-y} \, dy \right ) \\[2ex] & \leq & 2 c |\sigma|^{-1/2} . \end{eqnarray*} The Hausdorff measure assumption implies that for any $\epsilon > 0$ there is a cutset $\Pi (\epsilon)$ for which $\sum_{\sigma \in \Pi (\epsilon)} |\sigma|^{-1/2} < \epsilon$, hence by Lemma~\ref{bottleneck} the expected conductance from $\rho'$ to $\Pi (\epsilon)$ is at most $2c\epsilon$. Thus the net conductance from $\rho'$ to $\partial \Gamma$ vanishes almost surely, so the RWRE is almost surely recurrent.
$(iii)$ Set $f(n) = \log |\Gamma_n| + 2 \log (n+1)$. The assumptions in~$(iii)$ imply that $f$ is increasing and $\sum n^{-3/2} f(n) < \infty$, so Theorem~\ref{th2.2} implies that with probability one, every ray of $\Gamma$ has $S(\sigma) < -f(|\sigma|)$ infinitely often. Thus for every $N \geq 1$ there exists almost surely a random cutset $\Pi$ such that for every $\sigma \in \Pi$, $|\sigma| > N$ and $S(\sigma) < -f(|\sigma|)$. The net conductance from $\rho$ to this $\Pi$ is at most \begin{eqnarray*} \sum_{\sigma \in \Pi} e^{S(\sigma)} & \leq & \sum_{\sigma \in \Pi} e^{-f(|\sigma|)} \\[2ex] & \leq & \sum_{n=N}^\infty |\Gamma_n| e^{-f(n)} \\[2ex] & \leq & \sum_{n=N}^\infty n^{-2} . \end{eqnarray*} Taking $N$ large shows that the net conductance to $\partial \Gamma$ vanishes almost surely. $ \Box$
\section{Reinforced random walk} Reinforced RW is a process introduced by Coppersmith and Diaconis (unpublished) to model a tendency of the random walker to revisit familiar territory. The variant of this process analysed in Pemantle (1988) has an inherent bias toward the root, i.e. a positive backward push; here we consider the following ``unbiased'' variant which fits into the general framework of reinforcement described in Davis (1990), and may be analysed using the tools developed in the previous sections of this paper. Let $\Gamma$ be an infinite rooted tree, with (dynamically changing) positive weights $\, w_n(e) $ for $ n\geq 0$ attached to each edge $e$. At time zero, all weights are set to one: $w_0(e)=1$ for all $e$. Let $Y_0$ be the root of $\Gamma$, and for every $n \geq 0$, given $Y_1, \ldots, Y_n$ let $Y_{n+1}$ be a randomly chosen vertex adjacent to $Y_n$, so that each edge $e$ emanating from $Y_n$ has conditional probability proportional to $w_n(e)$ to be the edge connecting $Y_n$ to $Y_{n+1}$. Each time an edge is traversed {\em back and forth}, its weight is increased by 1, i.e. $w_k(e)-1$ is the number of ``return trips taken on $e$ by time $k$''. Call the resulting process $\{Y_n\}$ an ``unbiased reinforced random walk''. \begin{th} \label{th7.1} \begin{description} \item{$(i)$} If $\Gamma$ has positive capacity in gauge $\phi(n) = n^{-1/2}$, then the resulting reinforced RW is transient, i.e., ${\bf{P}}(Y_n=Y_0 \mbox{\, infinitely often})=0$.
\item{$(ii)$} If $\Gamma$ has zero Hausdorff measure in the same gauge, then the reinforced RW is recurrent, i.e., ${\bf{P}}(Y_n=Y_0 \mbox{\, infinitely often})=1$. \end{description} \end{th}
\noindent{\sc Proof:} Fix a vertex $\sigma$ in $\Gamma$, of degree $d$. As explained in Section 3 of Pemantle (1988), for every vertex $\sigma$ of $\Gamma$, the sequence of edges by which the walk leaves $\sigma$ is equivalent to ``Polya's urn'' stopped at a random time. Initially the urn contains $d$ balls, one of each color. (The colors correspond to the edges emanating from $\sigma$.) Each time the walk leaves $\sigma$, a ball is picked at random from the urn, and returned to the urn along with another ball of the same color. (This corresponds to increasing the weight of the relevant edge.) From Section VII.4 of Feller (1966) we find that the sequence of edges taken from $\sigma$ is stochastically equivalent to a mixture of sequences of i.i.d. variables, where the mixing measure is uniform over the simplex of probability vectors of length $d$. A standard method to generate a uniform random vector on the simplex is to pick $d \,$ independent identically distributed exponential random variables, and normalize them by their sum. This leads to the following {\bf RWRE description of the reinforced RW}: \newline Assign to each edge $e$ in $\Gamma$ two exponential random variables $U( \stackrel{\rightarrow}{e})$ and $U( \stackrel{\leftarrow}{e})$, one for each orientation, so that all the assigned variables are i.i.d. These labels are then used to define an environment for a random walk on $\Gamma$ such that the transition probability from a vertex $\sigma$ to a neighboring vertex $\tau$ is $$ q(\sigma,\tau)={U(\stackrel{\rightarrow}{\sigma \tau}) \over \sum \{U( \stackrel{\rightarrow}{e}) : \; \stackrel{\rightarrow}{e} \mbox{\, emanates from \,} \sigma \} } \; . $$ Thus the log-ratios $\{X(\sigma) \}_{\sigma \in \Gamma}$ defined in (\ref{eq1}) are identically distributed, and any subcollection of these variables in which no two of the corresponding vertices are siblings is independent. Clearly the variables $\{X(\sigma) \}$ have mean 0 and finite variance.
The proof of Theorem~\ref{th2.1}$(ii)$ goes over unchanged to prove part $(ii)$ of the present theorem, since in order to apply the ``first moment method'', Lemma~\ref{lem4.1}$(i)$, it suffices that for any ray $\xi$ in $\Gamma$, the variables $\{X(\sigma) \}$ for $\sigma$ on $\xi$ be independent. To prove part $(i)$ via the second moment method, Lemma~\ref{lem4.1}$(ii)$, consider the percolation process defined by retaining only vertices for which the partial sum from the root of $X(\sigma)$ is positive. It suffices to verify that this percolation is quasi-Bernoulli, i.e. it satisfies~(\ref{eq21}); this involves only a minor modification (which we omit) of the proof of Theorem~\ref{th2.2}$(i)$ given in Section~5. $ \Box$
\noindent{\sc Acknowledgements:} We are indebted to the referee for a remarkably careful reading of the paper. We gratefully acknowledge Russell Lyons and the Institute for Iterated Dining for bringing us together. \end{document}
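The urn description of the reinforced random walk used in the proof of Theorem~\ref{th7.1} is easy to test empirically. The following sketch is an illustration only: the degree $d=3$, the number $m=4$ of exits and the sample sizes are arbitrary choices for the example. It generates the vector of exit counts at a single vertex in two ways, by running the Polya urn directly and by first drawing i.i.d. exponential weights and then making i.i.d. choices with probabilities proportional to them, and compares the two empirical distributions, which should agree.
\begin{verbatim}
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def polya_counts(d, m):
    # Counts of each colour after m draws from a Polya urn started with one
    # ball of each of d colours (draw a ball, return it plus one copy).
    balls = np.ones(d)
    for _ in range(m):
        i = rng.choice(d, p=balls / balls.sum())
        balls[i] += 1
    return tuple(int(b) - 1 for b in balls)

def mixture_counts(d, m):
    # Counts after m i.i.d. choices whose probabilities are d i.i.d.
    # exponential weights normalised by their sum (uniform on the simplex).
    w = rng.exponential(size=d)
    draws = rng.choice(d, size=m, p=w / w.sum())
    return tuple(int(c) for c in np.bincount(draws, minlength=d))

trials, d, m = 20000, 3, 4
print(Counter(polya_counts(d, m) for _ in range(trials)).most_common(5))
print(Counter(mixture_counts(d, m) for _ in range(trials)).most_common(5))
\end{verbatim}
Both printed distributions should be close to uniform over the count vectors summing to $m$, in agreement with the mixture representation.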
\begin{document} \title{Online Generalized Network Design Under (Dis)Economies of Scale} \begin{abstract} We consider a general online network design problem where a sequence of $N$ {\mathsf{e}m requests} arrive over time, each of which needs to use some subset of the available {\mathsf{e}m resources} $E$. The cost incurred by a resource $e\in E$ is some function $f_e$ of the total load $\mathsf{e}ll_e$ on that resource. The objective is to minimize the total cost $\sum_{e\in E} f_e(\mathsf{e}ll_e)$. We focus on cost functions that exhibit (dis)economies of scale, that are of the form $f_e(x) = \sigma_e + \xi_e\cdot x^{\alpha_e}$ if $x>0$ (and zero if $x=0$), where the exponent $\alpha_e\ge 1$. Optimization problems under these functions have received significant recent attention due to applications in energy-efficient computing. Our main result is a deterministic online algorithm with tight competitive ratio $\Theta\left(\max_{e\in E} \left(\frac{\sigma_e}{\xi_e}\right)^{1/\alpha_e}\right)$ when $\alpha_e$ is constant for all $e\in E$. This framework is applicable to a variety of network design problems in undirected and directed graphs, including multicommodity routing, Steiner tree/forest connectivity and set-connectivity. In fact, our online competitive ratio even matches the previous-best (offline) approximation ratio for generalized network design. \mathsf{e}nd{abstract} \setcounter{page}{1} \section{Introduction} Network design problems (involving selecting a subgraph with certain connectivity properties) are of significant practical and theoretical interest. A classic setting in network design is as follows. There are several requests that need to be routed through a network, where each resource $e$ has a non-decreasing cost-function $f_e$ that determines the cost $f_e(\mathsf{e}ll_e)$ incurred at $e$ as a function of its load $\mathsf{e}ll_e$. The objective is to minimize the overall cost $\sum_e f_e(\mathsf{e}ll_e)$. Traditional network design models involve {\mathsf{e}m concave} cost-functions. These are cost functions that exhibit ``economies of scale'', i.e., a larger load results in a smaller cost-per-unit-load. This is the setting in buy-at-bulk network design, that has been studied extensively in approximation and online algorithms~\cite{AA97,ChekuriHKS10,ChakrabartyEKP18}. The most basic problems in this setting are Steiner tree and forest~\cite{AgrawalKR95,GoemansW95}. Recent applications in energy-efficient scheduling and routing have motivated the study of cost-functions with ``diseconomies of scale'' \cite{AndrewsAZZ12,MakarychevS18}. Here, larger load results in a larger cost-per-unit-load. These functions capture the energy consumption of network resources that are {\mathsf{e}m speed scalable} and adjust their speed in proportion to their load. The energy consumed at speed/load $x$ grows super-linearly as $x^\alpha$ where the exponent $\alpha>1$. For most technologies, exponent $\alpha$ lies between $1$ and $3$ \cite{AndrewsAZZ12,WiermanAT12}. As discussed in \cite{AndrewsAZZ12}, a more accurate model for energy consumption involves a start-up cost in addition to the super-linear $x^\alpha$ term. This leads to the cost function: \begin{equation}\label{eq:energy} f_e(x) = \left\{ \begin{array}{ll} 0 & \mbox{ if } x=0\\ \sigma_e + \xi_e\cdot x^{\alpha_e} & \mbox{ if } x>0 \mathsf{e}nd{array}\right., \mathsf{e}nd{equation} where the parameters $\sigma_e, \xi_e\ge 0$ and $\alpha_e\ge 1$ depend on the particular device (resource). 
The first term $\sigma_e$ represents the cost incurred in simply keeping the device powered on but idle, and the second term $\xi_e\cdot x^{\alpha_e}$ represents the cost incurred due to speed-scaling. These cost functions exhibit {\em both} economies and diseconomies of scale. Indeed, they appear concave for small values of the load $x$ and convex for large values of the load. So these functions are said to exhibit {\em (dis)economies of scale}. A major challenge in designing algorithms for such cost-functions is that one needs to balance two opposing goals: (1) aggregating demands in the concave regime and (2) separating demands in the convex regime. Prior work~\cite{AndrewsAZ16,AntoniadisIKMNP20,KrishnaswamyNPS14} has mainly focused on the special case of {\em uniform} (or related) cost functions where the $\alpha_e$s and $\frac{\sigma_e}{\xi_e}$s are uniform across all resources $e$. Recently, \cite{EmekKLS20} studied a large class of {\em generalized network design} problems under cost functions of the form~\eqref{eq:energy}, which included routing requests, Steiner tree/forest connectivity and set-connectivity in undirected and directed graphs. The main result in \cite{EmekKLS20} was a unified approximation framework that provided an $O\left(\max_{e} \left(\frac{\sigma_e}{\xi_e}\right)^{1/\alpha_e}\right)$ approximation algorithm assuming only a ``minimum cost oracle'' that can satisfy a {\em single} request at minimum cost.

In this paper, we consider the same class of generalized network design (\ensuremath{\mathsf{GND}}\xspace) problems as \cite{EmekKLS20}, but in the {\em online setting}. Here, requests arrive over time and each request needs to be (irrevocably) assigned to some resources immediately upon arrival. Our main result is a deterministic online algorithm with competitive ratio $O\left(\max_{e} \left(\frac{\sigma_e}{\xi_e}\right)^{1/\alpha_e}\right)$, which even matches the best approximation ratio known for \ensuremath{\mathsf{GND}}\xspace. We also show that no deterministic online algorithm can do better (up to a constant factor).

\subsection{Problem Definition}\label{subsec:prelim}

In the generalized network design (\ensuremath{\mathsf{GND}}\xspace) problem, we have a set $E$ of resources and $N$ requests that use these resources. Each request $i\in [N]$ is associated with: \begin{itemize} \item a collection $\mathcal{P}_i\subseteq 2^E$ of ``replies'', from which an algorithm needs to choose some $p_i\in \mathcal{P}_i$ in order to satisfy request $i$. The reply collections may be specified implicitly. \item a weight vector $w_i \in \mathbb{R}_{\ge 1}^E$, where request $i$ induces a load of $w_{i,e}$ on each resource $e$ that it uses. Note that the weights on different resources may be unrelated. (The requirement that weights/demands of requests are at least one is common to all prior work.) \end{itemize} Each resource $e\in E$ is associated with an individual cost function $f_e:\mathbb{R}\rightarrow \mathbb{R}$ of the form~\eqref{eq:energy}. We will refer to such functions as {\em (D)oS functions}. We emphasize that the parameters $\sigma_e$, $\xi_e$ and $\alpha_e$ may be different across resources. So we can handle networks with heterogeneous resources (for example, routers running on different technologies). A solution is just a choice of reply $p_i\in \mathcal{P}_i$ for each request $i\in [N]$.
Then, the load on each resource $e\in E$ is $\mathsf{e}ll_e = \sum_{i: e\in p_i} w_{i,e}$. The objective is to minimize the total cost $\sum_{e\in E} f_e(\mathsf{e}ll_e)$. In the online setting, the requests $i\in [N]$ arrive over time, and the algorithm should choose a reply $p_i\in \mathcal{P}_i$ for each request $i$ immediately upon arrival (which cannot be changed later). As usual, we use competitive analysis to measure the performance of an online algorithm, which is relative to the offline optimum that knows the entire request sequence upfront. We use $m:=|E|$ to denote the number of resources. For each resource $e\in E$, define $q_e : = (\sigma_e/\xi_e)^{1/\alpha_e}$. Note that $q_e$ is the value of load $x$ at which the two terms $\sigma_e$ and $\xi_e \cdot x^{\alpha_e}$ in the (D)oS cost function $f_e(x)$ become equal. Let $q:=\max_{e\in E} q_e$. Also, let $\alpha:= \max_{e\in E} \alpha_e$ denote the maximum exponent in the (D)oS functions. \paragraph{Min-cost Oracle} We will assume that the reply-collections $\mathcal{P}_i$ are such that one can find an approximately min-cost reply efficiently. Formally, we assume that there is a $\tau$-approximation algorithm for the problem $\min_{p\in \mathcal{P}_i} \sum_{e\in p} d_e$ for any request $i\in [N]$ and any scalars $\{d_e\ge 0\}_{e\in E}$. If computational complexity is not a consideration (which is sometimes the case with online algorithms) then this assumption is satisfied trivially with $\tau=1$. \noindent {\bf Example 1 (multicommodity routing).} The resources $E$ are edges in some directed graph $G=(V,E)$. Each request $i\in [N]$ consists of a source $s_i\in V$, destination $t_i\in V$ and demand $d_i\ge 1$. For each $i\in [N]$, the reply-collection $\mathcal{P}_i$ consists of all $s_i-t_i$ paths in $G$, and the weights $w_{i,e}=d_i$ for all $e\in E$. The resulting \mathsf{e}nsuremath{\mathsf{GND}}\xspace instance corresponds to selecting an $s_i-t_i$ routing path carrying $d_i$ units of flow (for each request $i$), so as to minimize the total energy cost of the routing. The min-cost oracle in this case corresponds to the shortest path problem in directed graphs, which admits an exact algorithm: so $\tau=1$. \noindent {\bf Example 2 (set connectivity and set-strong-connectivity).} The resources $E$ are edges in some undirected (resp. directed) graph $G=(V,E)$. Each request $i\in [N]$ consists of a subset $T_i\subseteq V$ of nodes and demand $d_i\ge 1$. The reply-collection $\mathcal{P}_i$ consists of all edge-subsets that induce a connected (resp. strongly connected) subgraph containing $T_i$. The weights $w_{i,e}=d_i$ for all $e\in E$. The resulting \mathsf{e}nsuremath{\mathsf{GND}}\xspace instance corresponds to selecting an overlay network for each terminal-set $T_i$ that can support $d_i$ units of flow. The min-cost oracle for the undirected case corresponds to the Steiner tree problem: so we have $\tau=1.39$~\cite{ByrkaGRS13}. In the directed case, the oracle is the strongly connected Steiner subgraph problem, for which we have (i) $\tau=k^\mathsf{e}psilon$ for any constant $\mathsf{e}psilon>0$ in polynomial time~\cite{CharikarCCDGGL99} or (ii) $\tau=O(\frac{\log^2k}{\log\log k})$ in {\mathsf{e}m quasi-polynomial} time~\cite{GLL19,GhugeN20}. Here $k=\max_i |T_i|$ is the maximum number of terminals in any request. 
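To make the setup concrete, here is a small, self-contained sketch (ours, purely illustrative and not taken from the paper) of a multicommodity-routing instance as in Example 1: each directed edge carries its own (D)oS parameters $(\sigma_e,\xi_e,\alpha_e)$ from \eqref{eq:energy}, the min-cost oracle is Dijkstra's algorithm (so $\tau=1$), and, for illustration only, a single request is routed along the path of smallest marginal increase in the objective; the algorithm actually analyzed later uses modified costs rather than plain marginal costs. All names and numbers below are made up.

\begin{verbatim}
import heapq

# (D)oS cost: f_e(x) = 0 if x = 0, else sigma_e + xi_e * x**alpha_e.
def dos_cost(sigma, xi, alpha, load):
    return 0.0 if load == 0 else sigma + xi * load ** alpha

# Toy instance of Example 1: resources are directed edges, each with its
# own (sigma_e, xi_e, alpha_e).
edges = {
    ("s", "a"): (4.0, 1.0, 2.0),
    ("a", "t"): (4.0, 1.0, 2.0),
    ("s", "t"): (9.0, 1.0, 3.0),
}

def shortest_path(cost, src, dst):
    """Min-cost oracle for Example 1: Dijkstra under nonnegative costs d_e."""
    adj = {}
    for (u, v), d in cost.items():
        adj.setdefault(u, []).append((v, d))
    heap, dist = [(0.0, src, [])], {src: 0.0}
    while heap:
        d_u, u, path = heapq.heappop(heap)
        if u == dst:
            return path                      # the chosen reply: a list of edges
        if d_u > dist.get(u, float("inf")):
            continue                         # stale heap entry
        for v, w in adj.get(u, []):
            if d_u + w < dist.get(v, float("inf")):
                dist[v] = d_u + w
                heapq.heappush(heap, (d_u + w, v, path + [(u, v)]))
    return None

# Route one unit of demand from s to t along the cheapest marginal increase.
loads = {e: 0.0 for e in edges}
marginal = {e: dos_cost(*edges[e], loads[e] + 1.0) - dos_cost(*edges[e], loads[e])
            for e in edges}
reply = shortest_path(marginal, "s", "t")
for e in reply:
    loads[e] += 1.0
print(reply, sum(dos_cost(*edges[e], loads[e]) for e in edges))
\end{verbatim}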
\subsection{Our Results and Techniques}\label{subsec:results} Our main result is the following: \begin{theorem}\label{thm:main-1} There is a polynomial time $O(q\tau + (\mathsf{e} \alpha\tau)^\alpha)$-competitive deterministic online algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace assuming a $\tau$-approximation algorithm for the min-cost oracle. \mathsf{e}nd{theorem} Above, $\mathsf{e}\approx 2.718$ is the base of the natural logarithm. The running time of this algorithm is $O(Nm + N\cdot \mathcal{P}hi(m))$ where $\mathcal{P}hi(m)$ is the time taken by the min-cost oracle. Note that when $\tau=1$, we obtain a competitive ratio of $O(q+(\mathsf{e}\alpha)^\alpha)$. To the best of our knowledge, previous online algorithms for \mathsf{e}nsuremath{\mathsf{GND}}\xspace were restricted to the case of multicommodity routing in undirected graphs with uniform edge-cost functions~\cite{AntoniadisIKMNP20}. Our result provides a unified framework to address various types of requests (including Steiner and set-connectivity) in both undirected and directed graphs. Moreover, this is the first competitive ratio (even in the previously-studied setting \cite{AntoniadisIKMNP20}) that does not grow with the network size or the number of requests. Finally, our result also applies to {\mathsf{e}m non-uniform} cost functions: in this setting, no online algorithm was known even for single-commodity routing with edge costs. As noted earlier, our competitive ratio matches the $O_\alpha(q\tau + \tau^\alpha)$ approximation algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace obtained in \cite{EmekKLS20}.\footnote{The $O_\alpha$ notation treats $\alpha$ as constant and suppresses factors that depend on $\alpha$.} Even when used in the offline setting, our algorithm has several advantages. First, the dependence on $\alpha$ in the approximation ratio is better: we obtain a factor of $(\mathsf{e} \alpha)^\alpha=\mathsf{e}^{\alpha (1+\ln \alpha)}$ whereas the previous algorithm had a $3^{\alpha^2}$ factor \cite{EmekKLS20-comm}. Second, our algorithm is deterministic whereas the previous algorithm was randomized. Third, our running time is better. Fourth, our algorithm itself is very simple and (arguably) simpler to analyze. To prove Theorem~\ref{thm:main-1}, we first show that any (D)oS function $f_e(x)$ of the form \mathsf{e}qref{eq:energy} can be well-approximated by a weighted sum of power functions of form $h_e(x)=\mathsf{e}ta_e\cdot x + \xi_e\cdot x^{\alpha_e}$. This reduction loses a factor of $2(\sigma_e/\xi_e)^{1/\alpha_e}$ in the objective. This allows us to then focus on the \mathsf{e}nsuremath{\mathsf{GND}}\xspace problem under (non-uniform) power cost functions, which is a convex objective. For \mathsf{e}nsuremath{\mathsf{GND}}\xspace under power cost functions, if we were only interested in an offline approximation algorithm, we could use the approach in \cite{MakarychevS18} that was based on a convex relaxation and rounding to obtain an $A_\alpha$-approximation algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace (assuming $\tau=1$). Here, $A_\alpha \approx (\frac{\alpha}{\ln(1+ \alpha)})^\alpha$ is the fractional Bell number. This approach however does not work in the online setting. Instead, we use a more direct approach motivated by work on online load balancing with $\mathsf{e}ll_p$-norms~\cite{AwerbuchAGKKV95}. For each request $i$, our algorithm basically selects the reply in $\mathcal{P}_i$ that results in the smallest increase in the objective. 
(The actual algorithm involves tracking a modified objective function.) We analyze our algorithm using the online primal-dual method for convex programs. The idea is to (1) write a convex relaxation for \mathsf{e}nsuremath{\mathsf{GND}}\xspace and its dual, and (2) upper bound the (integral) primal objective by some factor $\rho$ times the dual objective. By weak duality, we then obtain a competitive ratio of $\rho$. There have been a number of recent papers using the online primal-dual approach for convex programs (see \S\ref{subsec:related} for more details). The work closest to ours is \cite{GuptaKP12}, where an $O(\alpha)^\alpha$-competitive algorithm was obtained for the special case of \mathsf{e}nsuremath{\mathsf{GND}}\xspace with uniform $\alpha$ power cost functions and multicommodity routing requests. Our approach is more general as it can handle a much wider class of requests and non-uniform $\alpha_e$ powers. From a technical perspective, while our primal convex program is the natural extension of that in \cite{GuptaKP12} (for multicommodity routing), we use a different (re)formulation of the dual program and also set dual variables differently. Our dual formulation is easier to reason about, and hence allows for a clean analysis even in more general settings. Implementing the above approach directly leads to an $O(q(\mathsf{e} \alpha \tau)^\alpha)$-competitive algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace using a $\tau$-approximate min-cost oracle. To obtain the more refined guarantee in Theorem~\ref{thm:main-1}, we improve both steps above. In the reduction from (D)oS functions $f_e(x)$ to power functions $h_e(x)$, we show that the factor $q_e$ loss only affects the linear term in $h_e(x)$. Then, in the online algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace under power functions, we show that the greedy objective can be further modified to ensure a stronger $O(\tau)$ competitive ratio for the linear terms, while the non-linear terms incur an $O((\mathsf{e} \alpha \tau)^\alpha)$ competitive ratio. We also provide a nearly matching lower bound for online \mathsf{e}nsuremath{\mathsf{GND}}\xspace: \begin{theorem}\label{thm:main-2} Every deterministic online algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace has competitive ratio $\Omega\left( q +(1.44 \alpha)^\alpha\right)$. \mathsf{e}nd{theorem} As usual with online lower bounds, this is information-theoretic and independent of computational requirements. So this nearly matches the $O(q+(\mathsf{e} \alpha)^\alpha)$ competitive ratio from Theorem~\ref{thm:main-1} when $\tau=1$. The lower bound instance involves single-commodity routing requests in directed graphs. The $\Omega(q)$ part of the lower bound relies on a construction similar to the online directed Steiner tree lower bound~\cite{FaloutsosPS02}. The $\Omega((1.44 \alpha)^\alpha)$ part of the lower bound follows from the corresponding result for online load balancing with $\alpha^{th}$ power of loads~\cite{Caragiannis08}. Finally, we can also extend our main result to a larger class of functions called {\mathsf{e}m real exponent polynomials} (REP) that were studied in \cite{EmekKLS20}. These have the form \begin{equation}\label{eq:REP} \bar{f}_e(x) = \left\{ \begin{array}{ll} 0 & \mbox{ if } x=0\\ \sigma_e + \sum_{j=1}^q \xi_{e,j} \cdot x^{\alpha_{e,j}} & \mbox{ if } x>0 \mathsf{e}nd{array}\right., \mathsf{e}nd{equation} where the parameters $\sigma_e, \xi_{e,1},\cdots \xi_{e,q}\ge 0$ and the exponents $\alpha_{e,j}\ge 1$. 
\begin{theorem}\label{thm:main-3} There is a polynomial time $O(Q\tau + (\mathsf{e} \alpha\tau)^\alpha)$-competitive deterministic online algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace under REP cost functions assuming a $\tau$-approximation algorithm for the min-cost oracle. Here $Q=\max_{e\in E} \min_{j\in [q]} (\sigma_e/\xi_{e,j})^{1/\alpha_{e,j}}$. \mathsf{e}nd{theorem} The idea here is to reduce any \mathsf{e}nsuremath{\mathsf{GND}}\xspace problem with REP costs into another instance with (D)oS cost functions of form~\mathsf{e}qref{eq:energy} but with more resources. \subsection{Related Work}\label{subsec:related} Most of the prior work in network design under (D)oS cost functions has focused on multicommodity routing requests with uniform weights (i.e., $w_{i,e}=d_i$ for all resources $e$ and requests $i$). \cite{AndrewsAZZ12} were the first to study this model and obtained an $O(q\cdot \log^{\alpha-1} D)$-approximation algorithm where $D=\max_{i=1}^N d_i$ is the maximum weight. When $\sigma_e=0$ for all resources $e$ (in which case the objective is a weighted sum of power functions), \cite{MakarychevS18} obtained an improved $A_\alpha$-approximation algorithm. These results apply to undirected as well as directed graphs. Further results are known for multicommodity routing in {\mathsf{e}m undirected graphs} in the special case of {\mathsf{e}m uniform} cost functions, where $f_e(x) =c_e\cdot f(x)$ for a common (D)oS function $f(x)$. When costs are incurred on edges, \cite{AndrewsAZ16} obtained a poly-logarithmic $O(\log^{O(\alpha)} N)$-approximation algorithm, and \cite{AntoniadisIKMNP20} later improved the approximation ratio to $O(\log^{ \alpha } N)$. When costs are incurred on nodes (which is harder than the edge-version), \cite{KrishnaswamyNPS14} obtained an $O(\log^{O(\alpha)} N)$-approximation algorithm. All these results rely crucially on the uniformity of the cost function. In particular, they use the fact that it is best to aggregate $q=(\sigma/\xi)^{1/\alpha}$ units of demand, after which the aggregated demands can be routed in a ``well separated'' manner. It is unclear if these techniques can be used for non-uniform costs as the ``aggregate demand'' quantity for different resources is different (it is $q_e = (\sigma_e/\xi_e)^{1/\alpha_e}$ for each resource $e$). Furthermore, these results relied on cut-sparsification and small flow-cut gaps, which do not extend to directed graphs. In fact, the directed Steiner forest problem (which is a special case of \mathsf{e}nsuremath{\mathsf{GND}}\xspace) is hard to approximate better than $\Omega(2^{\log^{1-\mathsf{e}psilon} N})$ for any constant $\mathsf{e}psilon>0$ \cite{DodisK99}. We note that the parameter $q\approx N$ for \mathsf{e}nsuremath{\mathsf{GND}}\xspace instances corresponding to Steiner forest: so we cannot expect an approximation ratio much better than $poly(q)$ for \mathsf{e}nsuremath{\mathsf{GND}}\xspace. In fact, any $o(\sqrt{q})$-approximation algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace would improve on the best approximation ratio known for directed Steiner forest~\cite{ChekuriEGS11,FeldmanKN12}. As mentioned earlier, \cite{EmekKLS20} considered the much wider class of \mathsf{e}nsuremath{\mathsf{GND}}\xspace problems, and obtained an $O(q)$-approximation algorithm. As discussed in \cite{EmekKLS20}, their result extends prior work involving (D)oS cost functions in several ways: unrelated weights, non-uniform cost functions, strongly polynomial runtime etc. 
Our result inherits all these advantages even in the online setting. The technique in \cite{EmekKLS20} was based on the ``smoothness'' toolbox from \cite{Roughgarden15}. Our approach (discussed above) is completely different, and leads to a much simpler algorithm. In the online setting, \cite{AntoniadisIKMNP20} obtained an $\tilde{O}(\log^{3\alpha +1}N)$-competitive randomized algorithm for multicommodity routing in undirected graphs with uniform cost functions on edges and uniform weights. This ratio is incomparable to the $O(q+(\mathsf{e} \alpha)^\alpha)$ deterministic online ratio that we obtain (even in more general settings). When $\sigma_e=0$ for all resources $e$ and all $\alpha_e$ are uniform, $O(\alpha)^\alpha$-competitive online algorithms were known for load balancing~\cite{AwerbuchAGKKV95} and multicommodity routing \cite{GuptaKP12}. Our algorithm can be seen as a natural extension of these results to the setting of \mathsf{e}nsuremath{\mathsf{GND}}\xspace. \cite{AwerbuchAGKKV95} used a potential-function analysis that appears hard to extend to non-uniform $\alpha_e$s. As discussed in \S\ref{subsec:results}, though our approach as well as \cite{GuptaKP12} are based on the online primal-dual method, there are important differences as well. The online primal-dual method (see the survey~\cite{BuchbinderN09}) is a very general technique that has led to several strong results in online algorithms. Typically, this approach is applied with covering/packing linear-program relaxations, e.g. \cite{AlonAABN09,AlonAABN06,BansalBN12}. However, a number of recent papers, e.g. \cite{GuptaKP12,AnandGK12,DevanurH18,AzarBCCCGHKNNP16,NagarajanS17,HuangK19}, have extended this to the setting of covering programs with convex objectives. Our result adds to this line of work. Although our fractional relaxation is a ``convex covering program'' as studied in \cite{AzarBCCCGHKNNP16}, we cannot use the general-purpose algorithm presented there because the number of variables in our relaxation for \mathsf{e}nsuremath{\mathsf{GND}}\xspace is exponential: the competitive ratio in \cite{AzarBCCCGHKNNP16} is logarithmic in the number of variables. We note however that our idea of setting dual variables based on the gradient of the primal objective (at the final solution) was partly motivated from \cite{AzarBCCCGHKNNP16}. \subsection{Paper Outline} We start with the reduction from (D)oS cost functions to weighted power functions in \S\ref{sec:redn}. In \S\ref{sec:frac} we provide a fractional online algorithm for the natural convex relaxation of \mathsf{e}nsuremath{\mathsf{GND}}\xspace under power cost functions. Then, in \S\ref{sec:int} we extend this to an integral online algorithm. \S\ref{sec:overall} puts things together and finishes the proofs of Theorems~\ref{thm:main-1} and \ref{thm:main-3}. Finally, \S\ref{sec:LB} provides the online lower bounds (Theorem~\ref{thm:main-2}). \section{Reducing (D)oS Functions to Weighted Power Functions}\label{sec:redn} We first make the simple but useful observation that any cost-function $f_e$ of the form~\mathsf{e}qref{eq:energy} can be approximated by a {\mathsf{e}m convex power function}, at the loss of a multiplicative factor $2q_e$, where $q_e : = (\sigma_e/\xi_e)^{1/\alpha_e}$. To this end, define for each $e\in E$, a new function \begin{equation}\label{eq:convex-fn} h_e(x) \,:= \,\xi_e q_e^{\alpha_e-1}\cdot x + \xi_e\cdot x^{\alpha_e},\quad \mbox{for all }x\ge 0. 
\mathsf{e}nd{equation} \begin{lemma}\label{lem:fn-apx} For each $e\in E$ and $x\in \{0\}\cup \mathbb{R}_{\ge 1}$, we have $$\frac12 \cdot h_e(x)\le f_e(x)\le \max\{q_e,1\} \cdot \xi_e q_e^{\alpha_e-1}\cdot x + \xi_e\cdot x^{\alpha_e} \le \max\{q_e,1\} \cdot h_e(x).$$ \mathsf{e}nd{lemma} \begin{proof} At $x = 0$ the inequalities trivially hold. So we assume $x\ge 1$ in the rest of the proof. For the first inequality, we divide it into two cases. If $x < q_e$, then \[ h_e(x) = \xi_e q_e^{\alpha_e-1}\cdot x + \xi_e\cdot x^{\alpha_e} \leq \xi_e q_e^{\alpha_e} + \xi_e\cdot x^{\alpha_e} = \sigma_e + \xi_e\cdot x^{\alpha_e}= f_e(x) \] If $x \geq q_e$, then \[ h_e(x) = \xi_e q_e^{\alpha_e-1}\cdot x + \xi_e\cdot x^{\alpha_e} \leq 2 \xi_e x^{\alpha_e} \leq 2 (\xi_e x^{\alpha_e} + \sigma_e) = 2f_e(x) \] For the second inequality, we have $$\max\{1,q_e\}\cdot \xi_e q_e^{\alpha_e-1}\cdot x + \xi_e\cdot x^{\alpha_e} \ge \xi_e q_e^{\alpha_e}\cdot x + \xi_e\cdot x^{\alpha_e} = \sigma_e x + \xi_e\cdot x^{\alpha_e} \geq \sigma_e + \xi_e\cdot x^{\alpha_e} = f_e(x),$$ where the second inequality uses $x\ge 1$. \mathsf{e}nd{proof} Recall that $q:=\max_{e\in E} q_e$. By Lemma~\ref{lem:fn-apx}, at the loss of factor $2\max\{q,1\}$, it suffices to solve the \mathsf{e}nsuremath{\mathsf{GND}}\xspace problem under power cost functions, where each resource $e\in E$ has a cost function of the form $g_e(x)=c_e\cdot x^{\alpha_e}$ (see details in \S\ref{sec:overall}). In the next two sections, we provide online algorithms for \mathsf{e}nsuremath{\mathsf{GND}}\xspace under {\mathsf{e}m weighted power functions}. \section{Fractional Online Algorithm}\label{sec:frac} We consider the following convex program relaxation for \mathsf{e}nsuremath{\mathsf{GND}}\xspace, denoted $(P)$. \begin{align} \min\quad&\sum_{e\in E} c_e\cdot \left( \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e}\notag\\ \mbox{s.t.}\quad& \sum_{p\in \mathcal{P}_i}x_{i,p} \ge 1,\qquad \forall i\in [N]\label{cons:primal}\\ &\mathbf{x} \ge \mathbf{0}.\notag \mathsf{e}nd{align} Note that all constraints are of ``covering type'' and the objective is convex. However, there are an exponential number of variables as the replies $\mathcal{P}_i$ are implicitly specified. We will solve this program approximately using the online primal-dual method. First, we provide a continuous time online algorithm, that is easier to describe and analyze (Theorem~\ref{thm:frac-online}). Then, we explain how to obtain a polynomial time implementation at a small loss in the competitive ratio (\S\ref{app:poly-time}). Let $E_1=\{e\in E: \alpha_e=1\}$. The dual of convex program $(P)$ is below, denoted $(D)$. \begin{align} \max\quad&\sum_{i=1}^N y_i \,-\, \sum_{e\in E\setminus E_1} \frac{ c_e\alpha_e}{\beta_e} \cdot z_e^{\beta_e} \notag\\ \mbox{s.t.}\quad& \sum_{e\in p}w_{i,e} c_e \alpha_e\cdot z_e \ge y_{i},\qquad \forall p\in \mathcal{P}_i,\, \forall i\in [N] \label{cons:dual}\\ & z_e \le 1, \qquad\qquad\qquad\qquad\forall e\in E_1 \label{cons:d-UB}\\ &\mathbf{y}, \mathbf{z} \ge \mathbf{0}.\notag \mathsf{e}nd{align} Above, for each $e\in E\setminus E_1$, value $\beta_e> 1$ is the conjugate of $\alpha_e$, i.e. $\frac{1}{\alpha_e}+\frac{1}{\beta_e}=1$. Note that there are no terms in the dual objective corresponding to $e\in E_1$. We derive this dual in Appendix~\ref{app:derive-dual}. It turns out that strong duality holds for this primal-dual pair. However, we will only use weak duality, which is proved below. 
\begin{lemma}\label{lem:weak-duality} For any primal $x\in (P)$ and dual $(y,z)\in (D)$ solutions, $$ \sum_{e\in E} c_e\cdot \left( \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e} \,\,\ge \,\, \sum_{i=1}^N y_i \,-\, \sum_{e\in E\setminus E_1} \frac{ c_e\alpha_e}{\beta_e} \cdot z_e^{\beta_e}.$$ \mathsf{e}nd{lemma} \begin{proof} For easier notation, let $\mathsf{e}ll_e:=\sum_i \sum_{p\in \mathcal{P}_i: e\in p} w_{i,e}\cdot x_{i,p}$ be the fractional load on each $e\in E$. For each $e\in E_1$, let $\beta_e=\infty$: note that $\frac1{\beta_e} z_e^{\beta_e}=0$ as $z_e\le 1$. We will show that $$\sum_i y_i \le \sum_{e\in E} \alpha_e c_e\cdot \left( \frac{1}{\alpha_e}\cdot \mathsf{e}ll_e^{\alpha_e} + \frac{1}{\beta_e}\cdot z_e^{\beta_e} \right),$$ which would prove the lemma. Indeed, we have: \begin{align} \sum_i y_i & \le \sum_i \left(\sum_{p\in \mathcal{P}_i} x_{i,p} \right) \cdot y_i \le \sum_i \sum_{p\in \mathcal{P}_i} x_{i,p} \cdot \left(\sum_{e\in p} w_{i,e}c_e\alpha_e\cdot z_e\right) \label{eq:weak-dual-1}\\ & = \sum_e c_e \alpha_e \cdot z_e \left(\sum_i \sum_{p\in \mathcal{P}_i: e\in p} w_{i,e} \cdot x_{i,p}\right) = \sum_e c_e \alpha_e \cdot z_e \cdot \mathsf{e}ll_e \label{eq:weak-dual-2}\\ &\le \sum_{e\in E_1} c_e \cdot \mathsf{e}ll_e + \sum_{e\in E\setminus E_1} c_e \alpha_e \cdot z_e \cdot \mathsf{e}ll_e \le \sum_e c_e \alpha_e \left( \frac{1}{\alpha_e}\cdot \mathsf{e}ll_e^{\alpha_e} + \frac{1}{\beta_e}\cdot z_e^{\beta_e} \right).\label{eq:weak-dual-3} \mathsf{e}nd{align} Above, the first inequality in \mathsf{e}qref{eq:weak-dual-1} is by constraint~\mathsf{e}qref{cons:primal} and non-negativity, and the last inequality in \mathsf{e}qref{eq:weak-dual-1} is by constraint~\mathsf{e}qref{cons:dual}. The equality in \mathsf{e}qref{eq:weak-dual-2} is by interchanging summation. The first inequality in \mathsf{e}qref{eq:weak-dual-3} is by constraint~\mathsf{e}qref{cons:d-UB} and the last inequality is by Young's inequality, which says $A\cdot B \le \frac{1}{\alpha}\cdot A^\alpha + \frac{1}{\beta}\cdot B^\beta$ for any $A,B\ge 0$ and $\alpha,\beta > 1$ with $\frac{1}{\alpha}+\frac{1}{\beta}=1$. This completes the proof. \mathsf{e}nd{proof} \begin{algorithm} \DontPrintSemicolon Upon arrival of request $i$, do the following.\; \For{each continuous time $t\in [0,1]$}{ Choose reply $p^*\in \mathcal{P}_i$ using the min-cost oracle under costs $d_e= \alpha_e c_e \cdot \mathsf{e}ll_e^{\alpha_e-1}\cdot w_{i,e}$ for each $e\in E$, where $\mathsf{e}ll_e=\sum_i \sum_{p\in \mathcal{P}_i: e\in p} w_{i,e}\cdot x_{i,p}$ is the current fractional load on $e$.\; Raise primal variable $ x_{i,p^*}$ at rate one, i.e. $\frac{\partial}{\partial t} x_{i,p^*}=1$.\; } \caption{Fractional online algorithm for $(P)$} \mathsf{e}nd{algorithm} \begin{theorem}\label{thm:frac-online} The fractional online algorithm has competitive ratio at most $\alpha^\alpha$ where $\alpha=\max_{e\in E} \alpha_e$. \mathsf{e}nd{theorem} \begin{proof} The proof is by dual fitting: we will provide a feasible dual solution $(y,z)$ and show that the online primal solution $\bar{x}$ has objective at most $\alpha^\alpha$ times the dual objective. Combined with Lemma~\ref{lem:weak-duality}, this would imply the theorem. Let $\bar{\mathsf{e}ll}_e= \sum_i \sum_{p\in \mathcal{P}_i: e\in p} w_{i,e}\cdot \bar{x}_{i,p}$ be the final load on each $e\in E$. 
Let $\psielta\in (0,1]$ be some parameter, and define the dual solution: $$z_e = \psielta\cdot \bar{\mathsf{e}ll}_e^{\alpha_e-1},\qquad \forall e\in E.$$ $$y_i = \min_{p\in \mathcal{P}_i} \sum_{e\in p} w_{i,e}c_e\alpha_e\cdot z_e,\qquad \forall i\in [N].$$ Note that dual-constraint~\mathsf{e}qref{cons:d-UB} is satisfied as $z_e=\psielta\le 1$ for all $e\in E_1$. Moreover, \mathsf{e}qref{cons:dual} is satisfied by definition of $y$. So $(y,z)$ is a feasible dual solution. For each request $i$, let $q_i\in \mathcal{P}_i$ denote the reply that achieves the minimum cost in the definition of $y_i$ above. We now relate the primal objective $\bar{P} = \sum_e c_e\cdot \bar{\mathsf{e}ll}_e^{\alpha_e}$ with the dual objective $D$, by showing: \begin{equation} \label{eq:primal-dual-frac} D \,\,\ge \,\, \left(\psielta -(\alpha-1)\cdot \psielta^{\frac{\alpha}{\alpha -1}}\right)\cdot \bar{P} \mathsf{e}nd{equation} Consider the algorithm when some request $i$ arrives. For each time $t\in [0,1]$, if $p^*\in\mathcal{P}_i$ is the current reply and $\{\mathsf{e}ll_e\}_{e\in E}$ denotes the current loads, then by the primal update: \begin{align*} \frac{\partial}{\partial t} \bar{P} & = \sum_{e\in p^*} c_e\alpha_e \left( \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e-1} w_{i,e} = \sum_{e\in p^*} w_{i,e} c_e\alpha_e\cdot \mathsf{e}ll_e^{\alpha_e-1} \le \sum_{e\in q_i} w_{i,e} c_e\alpha_e\cdot \mathsf{e}ll_e^{\alpha_e-1} \\ & \le \sum_{e\in q_i} w_{i,e} c_e\alpha_e\cdot \bar{\mathsf{e}ll}_e^{\alpha_e-1} = \frac{1}{\psielta}\cdot y_i. \mathsf{e}nd{align*} Above, the first inequality is by the choice of the current reply $p^*$ at time $t$, the second inequality is by monotonicity of the primal solution $x$ over time, and the last equality is by the choice of the dual value $y_i$. It follows that the increase in $\bar{P}$ due to request $i$ is at most $\frac{y_i}{\psielta}$. Adding over all $i$, $$\bar{P} \le \frac{1}{\psielta}\sum_{i=1}^N y_i.$$ Now, consider the contribution of the $z$-variables to the dual objective: $$\sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e}\cdot z_e^{\beta_e} = \sum_{e\in E\setminus E_1} \psielta^{\beta_e}\frac{c_e\alpha_e}{\beta_e} \left(\bar{\mathsf{e}ll}_e^{\alpha_e-1}\right)^{\beta_e}= \sum_{e\in E\setminus E_1} \psielta^{\beta_e} c_e(\alpha_e-1) \bar{\mathsf{e}ll}_e^{\alpha_e}\le \psielta^{\frac{\alpha}{\alpha-1}}(\alpha-1) \sum_{e\in E\setminus E_1}c_e \bar{\mathsf{e}ll}_e^{\alpha_e}. $$ The equalities use the fact that $\frac{1}{\beta_e}=1-\frac{1}{\alpha_e}$. The inequality above uses that $\psielta\le 1$ and $\beta_e = 1+\frac{1}{\alpha_e-1}\ge 1+\frac{1}{\alpha-1}$ for all $e$. Finally, the right-hand-side above is at most $\psielta^{\frac{\alpha}{\alpha-1}}(\alpha-1) \cdot \bar{P}$. Therefore, the dual objective is: $$D = \sum_{i=1}^N y_i - \sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e}\cdot z_e^{\beta_e} \ge \psielta\cdot \bar{P} - \psielta^{\frac{\alpha}{\alpha-1}}(\alpha-1) \cdot \bar{P},$$ which proves \mathsf{e}qref{eq:primal-dual-frac}. Finally, choosing $\psielta=1/\alpha^{\alpha-1}$, we obtain $\bar{P}\le \alpha^\alpha\cdot D$. \mathsf{e}nd{proof} \subsection{Polynomial Time Algorithm}\label{app:poly-time} To make the previous (continuous-time) algorithm run in polynomial time, we show how to reduce the number of queries to the min-cost oracle. 
The main idea is to perform a new query whenever the cost (under current loads) of the current reply increases by a factor $1+\mathsf{e}psilon$, where $\mathsf{e}psilon>0$ is a constant. Recall that the cost-function under loads $\{\mathsf{e}ll_e\}_{e\in E}$ is $d_e= \alpha_e c_e \cdot \mathsf{e}ll_e^{\alpha_e-1}\cdot w_{i,e}$ for all $e\in E$. We also artificially increase the initial load on every resource to be $\mathsf{e}ta\rightarrow 0$ rather than zero. Below, we will ensure that (1) the number of queries to the min-cost oracle is polynomial and (2) the competitive ratio is still roughly $\alpha^\alpha$. Recall that $m=|E|$ is the number of resources, and $\alpha = \max_e \alpha_e$. By scaling costs, we can assume (without loss of generality) that $c_e\ge 1$ for all $e\in E$. Moreover, recall that all weights $w_{i,e}\ge 1$. Let $B$ denote the maximum cost/weight in the instance. Let $p^*$ denote the current reply at any point of the new algorithm. Note that reply $p^*$ is always a $1+\mathsf{e}psilon$ approximately min-cost reply under the current cost function $\{d_e\}$. For the competitive ratio, consider how the analysis changes when the reply $p^*$ is only guaranteed to be a $1+\mathsf{e}psilon$ approximate reply (rather than min-cost). The increase in the primal objective due to request $i$ is then at most $\frac{(1+\mathsf{e}psilon) y_i}{\psielta}$. Adding over all $i$, we obtain $\bar{P} - I \leq \frac{1+\mathsf{e}psilon}{\psielta} \sum y_i$, where $I=\sum_{e\in E} c_e \mathsf{e}ta^{\alpha_e}$ is the initial primal objective and $\bar{P}$ is the final objective. Note that $I\le mB\mathsf{e}ta$. As before, the dual objective is bounded as \[D \geq \frac{\psielta}{1+\mathsf{e}psilon} \cdot (\bar{P} -I) - \psielta^\frac{\alpha}{\alpha - 1}(\alpha - 1) \cdot \bar{P} \ge \left(\frac{\psielta}{1+\mathsf{e}psilon} - \psielta^\frac{\alpha}{\alpha - 1}(\alpha - 1) \right)\cdot \bar{P} -I\] Choosing $\psielta$ as $(\frac{1}{\alpha (1+\mathsf{e}psilon)})^{\alpha-1}$ to maximize the coefficient on $\bar{P}$, we obtain $\bar{P} \leq ((1+\mathsf{e}psilon)\alpha)^{\alpha} \cdot (D + I)$. By weak duality, we know that $D\le \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$ the optimal fractional value. We now bound $I\le mB\mathsf{e}ta$ in terms of \mathsf{e}nsuremath{\mathsf{OPT}}\xspace. In any fractional solution $\{x_{i,p}\}$, for any request $i$, we have $$\sum_e \sum_{p \in P_i: e\in p} x_{i,p} = \sum_{p\in \mathcal{P}_i} |p|\cdot x_{i,p}\ge \sum_{p\in \mathcal{P}_i} x_{i,p}\ge 1.$$ Averaging over all resources, some $e\in E$ has $ \sum_{p \in P_i: e\in p} x_{i,p}\ge \frac1m$, which means its load is at least $\frac{1}{m}$ (as all weights are at least one). So cost of any fractional solution is at least $\frac{1}{m^\alpha}$. It now follows that $I\le mB\mathsf{e}ta \le m^{\alpha+1}B\mathsf{e}ta\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$. Choosing $\mathsf{e}ta = \frac{\mathsf{e}psilon}{m^{1+\alpha} B}$, the primal objective $\bar{P}\le (1+\mathsf{e}psilon)^{\alpha+1}\alpha^\alpha\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$. We now bound the number of queries. Note that the min-cost of any reply is at least $\mathsf{e}ta^{\alpha-1}$ as all the loads are initially $\mathsf{e}ta$. Moreover, any load $\mathsf{e}ll_e\le NB$ which implies that the maximum cost of any reply is at most $\alpha m N^{\alpha-1}B^{\alpha + 1}$. 
As we make a new query only when the current cost of $p^*$ increases by a factor $1+\mathsf{e}psilon$, the number of queries is at most $$\log_{1+\mathsf{e}psilon} \left(\frac{\alpha m N^{\alpha-1}B^{\alpha+1}}{ \mathsf{e}ta^{\alpha-1}}\right) = O\left(\alpha^2 \log(mnB)\right),$$ where we used the above choice of $\mathsf{e}ta$. So the number of queries is polynomial. \section{Integer Online Algorithm}\label{sec:int} We now provide an integral online algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace. It is well-known (see e.g. \cite{AndrewsAZZ12}) that the convex relaxation $(P)$ used in \S\ref{sec:frac} has a polynomially large integrality gap even for single-commodity routing on undirected graphs. To get around this, we use an idea from \cite{AzarE05} for load balancing, by adding additional {\mathsf{e}m linear terms} corresponding to the $\alpha_e^{th}$ power of loads from individual requests. Let $\rho\ge 1$ be a parameter to be set later. Upon the arrival of request $i$, we do the following: \begin{itemize} \item Choose reply $p_i\in \mathcal{P}_i$ using the min-cost oracle under the costs \begin{equation}\label{eq:cost-integer-online} \psi_e= \alpha_e c_e \cdot \mathsf{e}ll_e^{\alpha_e-1}\cdot w_{i,e} \,+\, \frac{\rho}{\mathsf{e}^{\alpha}} \cdot c_e \alpha_e w_{i,e}^{\alpha_e}, \mbox{ for each }e\in E, \mathsf{e}nd{equation} where $\mathsf{e}ll_e \,:=\, \sum_{j<i : e\in p_j} w_{j,e}$ is the current load on $e$. \mathsf{e}nd{itemize} \begin{theorem}\label{thm:online} The online \mathsf{e}nsuremath{\mathsf{GND}}\xspace algorithm has competitive ratio at most $2(\mathsf{e}\alpha)^{\alpha}$ where $\alpha=\max_{e\in E} \alpha_e$. \mathsf{e}nd{theorem} We prove this result in the rest of this section. Let $A_e$ denote the final load on each resource $e\in E$. The online algorithm's objective is then $A:=\sum_{e} c_e\cdot A_e^{\alpha_e}$. We will use a different (stronger) convex relaxation for \mathsf{e}nsuremath{\mathsf{GND}}\xspace and relate $A$ to the new relaxation. The new relaxation has the same constraints in $(P)$ but the objective is now: \begin{equation} \label{eq:new-obj} \sum_{e\in E} c_e\cdot \left( \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e}\,\,+\,\,\sum_{e\in E} \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} \cdot \sum_{i=1}^N w_{i,e}^{\alpha_e}\sum_{p\in \mathcal{P}_i: e\in p} x_{i,p} \mathsf{e}nd{equation} \begin{lemma} \label{lem:new-opt} The optimal value of the new convex program with objective~\mathsf{e}qref{eq:new-obj} is at most $(1+\alpha\mathsf{e}^{-\alpha})\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$, where \mathsf{e}nsuremath{\mathsf{OPT}}\xspace is the optimal value of the (integral) \mathsf{e}nsuremath{\mathsf{GND}}\xspace instance. \mathsf{e}nd{lemma} \begin{proof} Consider an optimal solution to \mathsf{e}nsuremath{\mathsf{GND}}\xspace with objective \mathsf{e}nsuremath{\mathsf{OPT}}\xspace. We set a corresponding solution for ($P$) by setting $x_{i,p}$ to 1 if $p$ is the reply used to satisfy request $i$ and 0 otherwise. 
Using the fact that each $x_{i,p}$ is either 0 or 1, we have for each $e$, \[\sum_{i=1}^N w_{i,e}^{\alpha_e}\sum_{p\in \mathcal{P}_i: e\in p} x_{i,p} \leq \left(\sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e}.\] So, the objective of the new relaxation is at most \[\left(1+\frac{\alpha}{\mathsf{e}^\alpha}\right)\sum_{e\in E} c_e\cdot \left( \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e} = \left(1+\frac{\alpha}{\mathsf{e}^\alpha}\right)\ensuremath{\mathsf{OPT}}\xspace,\] which proves the lemma. \end{proof}

To make notation simpler, for the analysis we imagine adding dummy resources $E'=\{e' : e\in E\}$ corresponding to the second term in the new objective. We set $\alpha_{e'} := 1$, $c_{e'} := 1$ and $w_{i,e'} := \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e}$ for all $i\in [N]$ and $e\in E$. Moreover, we extend each reply $p\in \mathcal{P}_i$ so that it contains {\em both} copies $e,e'$ of each resource $e\in p$. The new reply collections are referred to as $\{\mathcal{P}'_i\}_{i=1}^N$. The dual of the new convex program, denoted $(D')$, is given below. \begin{align} \max\quad&\sum_{i=1}^N y_i \,-\, \sum_{e\in E\setminus E_1} \frac{ c_e\alpha_e}{\beta_e} \cdot z_e^{\beta_e}\notag\\ \mbox{s.t.}\quad& \sum_{e\in p}w_{i,e} c_e \alpha_e\cdot z_e \ge y_{i},\qquad \forall p\in \mathcal{P}'_i,\, \forall i\in [N] \label{cons:new-dual}\\ & z_e \le 1, \qquad\qquad\qquad\qquad\forall e\in E_1\cup E' \label{cons:new-d-UB}\\ &\mathbf{y}, \mathbf{z} \ge \mathbf{0}.\notag \end{align} Above, $E_1=\{e\in E: \alpha_e=1\}$. Note that all the dummy resources $E'$ have the exponent $\alpha_{e'}=1$: so they do not appear in the second term of the dual objective. Define the dual solution: $$z_e := \frac1\rho \cdot A_e^{\alpha_e-1},\qquad \forall e\in E.$$ $$z_{e'} := 1,\qquad \forall e'\in E'.$$ $$y_i := \min_{p'\in \mathcal{P}'_i} \sum_{e\in p'} w_{i,e}c_e\alpha_e\cdot z_e= \min_{p\in \mathcal{P}_i} \sum_{e\in p} \left( w_{i,e}c_e\alpha_e\cdot z_e + \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e}\right),\qquad \forall i\in [N].$$ The second equality above (for $y_i$) follows from the definitions of the new reply-collection $\mathcal{P}_i'$ and weights $w_{i,e'}$, and the setting $z_{e'}=1$ for $e'\in E'$. Note that dual-constraint~\eqref{cons:new-d-UB} is satisfied since $z_e=\frac1\rho\cdot A_e^{\alpha_e-1}=\frac1\rho\le 1$ for all $e\in E_1$ (recall that $\rho\ge 1$) and $z_{e'}=1$ for all $e'\in E'$. Moreover, \eqref{cons:new-dual} is satisfied by definition of $y$. So $(y,z)$ is a feasible dual solution. For each request $i$, let $q_i\in \mathcal{P}_i$ denote the reply that achieves the minimum cost in the definition of $y_i$ above.

We now relate $A$ with the dual objective $D$. Consider the algorithm when some request $i$ arrives. Let $\ell_e$ denote the load on each $e\in E$ before request $i$ is assigned. Recall that $p_i\in \mathcal{P}_i$ is the selected reply.
Then, the increase in the algorithm's objective, $(\Delta A)_i$ equals: \begin{align} &= \sum_{e\in p_i} c_e \left( (\mathsf{e}ll_e+ w_{i,e})^{\alpha_e} - \mathsf{e}ll_e^{\alpha_e} \right) \le \sum_{e\in p_i} c_e \alpha_e (\mathsf{e}ll_e+ w_{i,e})^{\alpha_e-1} w_{i,e} \label{eq:int-dual-1} \\ &\le\sum_{e\in p_i} c_e \alpha_e w_{i,e} \left(\mathsf{e} \cdot \mathsf{e}ll_e^{\alpha_e-1} + \alpha_e^{\alpha_e-1} \cdot w_{i,e}^{\alpha_e-1} \right) = \mathsf{e}\cdot \sum_{e\in p_i} \left( c_e \alpha_e w_{i,e} \mathsf{e}ll_e^{\alpha_e-1} + \frac{1}{\mathsf{e}}c_e\alpha_e^{\alpha_e} w_{i,e}^{\alpha_e} \right) \label{eq:int-dual-2} \\ &\le \mathsf{e}\cdot \sum_{e\in p_i} \left( c_e \alpha_e w_{i,e} \mathsf{e}ll_e^{\alpha_e-1} + \frac{\rho}{\mathsf{e}^{\alpha}}\cdot c_e \alpha_e w_{i,e}^{\alpha_e} \right) \le \mathsf{e}\cdot \sum_{e\in q_i} \left( c_e \alpha_e w_{i,e} \mathsf{e}ll_e^{\alpha_e-1} + \frac{\rho}{\mathsf{e}^{\alpha}}\cdot c_e\alpha_e w_{i,e}^{\alpha_e} \right) \label{eq:int-dual-3}\\ &= \mathsf{e}\rho \cdot \sum_{e\in q_i} \left( \frac1\rho \cdot c_e \alpha_e w_{i,e} \mathsf{e}ll_e^{\alpha_e-1} + \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e} \right)\le \mathsf{e}\rho \cdot \sum_{e\in q_i} \left( \frac1\rho \cdot c_e \alpha_e w_{i,e} A_e^{\alpha_e-1} + \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e} \right)\label{eq:int-dual-4} \\ &= \mathsf{e}\rho \cdot \sum_{e\in q_i} \left( w_{i,e}c_e\alpha_e\cdot z_e + \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e}\right) = \mathsf{e}\rho \cdot y_i. \label{eq:int-dual-5} \mathsf{e}nd{align} The inequality in \mathsf{e}qref{eq:int-dual-1} uses convexity of the $x^{\alpha_e}$ function. The inequality in \mathsf{e}qref{eq:int-dual-2} uses the inequality $(X+Y)^{\alpha-1} \le \mathsf{e}\cdot X^{\alpha-1} + \alpha^{\alpha-1}\cdot Y^{\alpha-1}$ for $\alpha\ge 1$ and $X,Y\ge 0$, which follows from Lemma~4.1 in \cite{AwerbuchAGKKV95} (by setting $c=\mathsf{e}$). The first inequality in \mathsf{e}qref{eq:int-dual-3} uses $\rho\ge (\mathsf{e} \alpha)^{\alpha-1}$ which we will ensure. The second inequality in \mathsf{e}qref{eq:int-dual-3} uses the choice of $p_i$ under the costs \mathsf{e}qref{eq:cost-integer-online}. The inequality in \mathsf{e}qref{eq:int-dual-4} uses the fact that loads are monotonically non-decreasing. The equalities in \mathsf{e}qref{eq:int-dual-5} use the definition of reply $q_i$ and choice of dual variables $y_i$ and $z_e$. Adding over all $i$, $$A \le \mathsf{e}\rho \cdot \sum_{i=1}^N y_i.$$ Now, consider the contribution of the $z$-variables to the dual objective: $$\sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e}\cdot z_e^{\beta_e} = \sum_{e\in E\setminus E_1} \rho^{-\beta_e}\frac{c_e\alpha_e}{\beta_e} \left(A_e^{\alpha_e-1}\right)^{\beta_e}= \sum_{e\in E\setminus E_1} \rho^{-\beta_e} c_e(\alpha_e-1) A_e^{\alpha_e}\le \rho^{-\frac{\alpha}{\alpha-1}}(\alpha-1) \sum_{e\in E\setminus E_1}c_e A_e^{\alpha_e}, $$ which follows the same way as for the fractional online algorithm. Therefore, the dual objective is: $$D = \sum_{i=1}^N y_i - \sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e}\cdot z_e^{\beta_e} \ge \left(\frac{1}{\mathsf{e} \rho} - \rho^{-\frac{\alpha}{\alpha-1}}(\alpha-1) \right)\cdot A.$$ Finally, choosing $\rho=(\mathsf{e} \alpha)^{\alpha-1}$, we obtain $A\le (\mathsf{e}\alpha)^\alpha\cdot D$. Combined with the observation that $D\le (1+\alpha\mathsf{e}^{-\alpha})\mathsf{e}nsuremath{\mathsf{OPT}}\xspace$ (by Lemma~\ref{lem:new-opt}), we obtain Theorem~\ref{thm:online}. 
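As a companion to the analysis above, the following sketch (ours, not from the paper; the instance encoding and the brute-force oracle are illustrative stand-ins) implements the online rule of this section for power cost functions $c_e x^{\alpha_e}$: upon each arrival it selects the reply minimizing the modified costs \eqref{eq:cost-integer-online} with $\rho=(\mathsf{e}\alpha)^{\alpha-1}$ and commits to it.

\begin{verbatim}
import math

def online_gnd(requests, resources, oracle):
    """Integral online rule under power costs g_e(x) = c_e * x**alpha_e.
    resources: dict e -> (c_e, alpha_e); requests: iterable of (replies, w),
    where replies is a list of resource subsets and w maps e -> w_{i,e} >= 1;
    oracle(costs, replies) returns an (approximately) cheapest reply under
    the additive costs psi_e."""
    alpha = max(a for (_, a) in resources.values())
    rho = (math.e * alpha) ** (alpha - 1)   # the choice of rho in the analysis
    load = {e: 0.0 for e in resources}
    chosen = []
    for replies, w in requests:
        psi = {}
        for e, (c, a) in resources.items():
            w_ie = w.get(e, 1.0)
            # modified cost: gradient term plus (rho / e^alpha) * c_e * alpha_e * w^alpha_e
            psi[e] = (a * c * load[e] ** (a - 1) * w_ie
                      + (rho / math.e ** alpha) * c * a * w_ie ** a)
        p = oracle(psi, replies)            # commit irrevocably to this reply
        chosen.append(p)
        for e in p:
            load[e] += w.get(e, 1.0)
    cost = sum(c * load[e] ** a for e, (c, a) in resources.items())
    return chosen, cost

def brute_force_oracle(costs, replies):
    # Stand-in for the min-cost oracle (tau = 1) on explicitly listed replies.
    return min(replies, key=lambda p: sum(costs[e] for e in p))
\end{verbatim}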
\subsection{Using Approximate Min-Cost Replies} \psief\overline{A}{\overline{A}} Here we consider the situation where an exact min-cost reply cannot be computed efficiently. This is indeed the case in some applications. We extend our online algorithm so that it also works with approximately min-cost replies. Moreover, we obtain a stronger guarantee for the linear terms in the objective, which will be used in proving our main result (see \S\ref{sec:overall}). Recall that $E_1\subseteq E$ denotes the resources with exponent $\alpha_e=1$. \begin{theorem}\label{thm:apx-online} Assume that there is a $\tau$-approximation algorithm for the min-cost oracle in \mathsf{e}nsuremath{\mathsf{GND}}\xspace. Then, there is a polynomial time $2(\mathsf{e} \alpha\tau )^\alpha$-competitive online algorithm for \mathsf{e}nsuremath{\mathsf{GND}}\xspace. In fact, if $L$ and $H$ denote the costs incurred by the algorithm on resources in $E_1$ and $E\setminus E_1$ respectively, then $L\le 2\tau\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$ and $H\le 2(\mathsf{e}\alpha\tau)^\alpha\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$. \mathsf{e}nd{theorem} \begin{proof} This algorithm is a slight modification of the previous one. Upon arrival of request $i\in [N]$, we select the reply $p_i\in \mathcal{P}_i$ returned by the $\tau$-approximate min-cost oracle under costs: \begin{equation}\label{eq:cost-int-LMP} \bar{\psi}_e := \left\{ \begin{array}{ll} \alpha_e c_e \cdot \mathsf{e}ll_e^{\alpha_e-1}\cdot w_{i,e} \,+\, \frac{\rho}{\mathsf{e}^{\alpha}} \cdot c_e \alpha_e w_{i,e}^{\alpha_e} &\mbox{ if } e\in E\setminus E_1\\ \rho c_e w_{i,e} & \mbox{ if }e\in E_1 \mathsf{e}nd{array}\right.. \mathsf{e}nd{equation} Note that the only difference from the costs \mathsf{e}qref{eq:cost-integer-online} in Theorem~\ref{thm:online} is in the cost setting for $E_1$. We will also set the parameter $\rho\ge 1$ differently. We only prove the second statement in the theorem, which clearly implies the first statement. As before, let $A_e$ denote the algorithm's final load on each resource $e\in E$. Let $L:=\sum_{e\in E_1} c_e A_e$ and $H:=\sum_{e\in E\setminus E_1} c_e A_e^{\alpha_e}$ denote the costs incurred by the algorithm on resources in $E_1$ and $E\setminus E_1$ respectively. Note that the algorithm's cost $A=L+H$. We will bound the modified objective $\overline{A}:=\mathsf{e} \rho\cdot L +H$. In the analysis, we will use a slightly different relaxation for \mathsf{e}nsuremath{\mathsf{GND}}\xspace. The new relaxation has the same constraints in $(P)$ with objective: \begin{equation} \label{eq:improved-obj} \sum_{e\in E} c_e\cdot \left( \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e}\,\,+\,\,\sum_{e\in E\setminus E_1} \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} \sum_{i=1}^N w_{i,e}^{\alpha_e}\sum_{p\in \mathcal{P}_i: e\in p} x_{i,p} \mathsf{e}nd{equation} As in Lemma~\ref{lem:new-opt}, the optimal value of this new convex program is at most $(1+\alpha\mathsf{e}^{-\alpha})\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$, where \mathsf{e}nsuremath{\mathsf{OPT}}\xspace is the optimal value of the (integral) \mathsf{e}nsuremath{\mathsf{GND}}\xspace instance. We imagine adding dummy resources $E''=\{e' : e\in E\setminus E_1\}$ corresponding to the second term in the new objective \mathsf{e}qref{eq:improved-obj}. We set $\alpha_{e'} := 1$, $c_{e'} := 1$ and $w_{i,e'} := \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e}$ for all $i\in [N]$ and $e\in E\setminus E_1$. 
Moreover, we extend each reply $p\in \mathcal{P}_i$ so that it contains {\mathsf{e}m both} copies $e,e'$ of each resource $e\in p\setminus E_1$. The new reply collections are referred to as $\{\mathcal{P}''_i\}_{i=1}^N$. The dual of the new convex program, denoted $(D'')$, is given below. \begin{align*} \max\quad&\sum_{i=1}^N y_i \,-\, \sum_{e\in E\setminus E_1} \frac{ c_e\alpha_e}{\beta_e} \cdot z_e^{\beta_e} \\ \mbox{s.t.}\quad& \sum_{e\in p''}w_{i,e} c_e \alpha_e\cdot z_e \ge y_{i},\qquad \forall p''\in \mathcal{P}''_i,\, \forall i\in [N] \\ & z_e \le 1, \qquad\qquad\qquad\qquad\forall e\in E_1\cup E'' \\ &\mathbf{y}, \mathbf{z} \ge \mathbf{0}. \mathsf{e}nd{align*} Note that $\alpha_{e'}=1$ for all dummy resources $e'\in E''$: so they do not appear in the second term of the dual objective. We define the following dual solution: $$z_e := \frac1\rho \cdot A_e^{\alpha_e-1},\qquad \forall e\in E\setminus E_1.$$ $$z_{e'} := 1,\qquad \forall e'\in E_1\cup E''.$$ $$y_i := \min_{p''\in \mathcal{P}''_i} \sum_{e\in p''} w_{i,e}c_e\alpha_e\cdot z_e= \min_{p\in \mathcal{P}_i} \left( \sum_{e\in p \cap E_1}c_e w_{i,e} + \sum_{e\in p\setminus E_1} \left( c_e\alpha_e w_{i,e}\cdot z_e + \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e}\right)\right),\quad \forall i\in [N].$$ The only difference from the choice in Theorem~\ref{thm:online} is that now $z_e=1$ for all $e\in E_1\cup E''$. It is easy to see that this is feasible for the dual program $(D'')$. Let $q_i\in \mathcal{P}_i$ denote the reply that achieves the min-cost in the definition of $y_i$. Now consider the increase in $\overline{A}$ when request $i$ arrives. Let $\mathsf{e}ll_e$ denote the load on $e\in E$ before request $i$ is assigned. \begin{align} (\Delta \overline{A})_i &= \mathsf{e} \rho \sum_{e\in p_i\cap E_1} c_e w_{i,e} \,+\, \sum_{e\in p_i\setminus E_1} c_e\left( (\mathsf{e}ll_e+ w_{i,e})^{\alpha_e} - \mathsf{e}ll_e^{\alpha_e} \right) \notag\\ &\le \mathsf{e} \rho \sum_{e\in p_i\cap E_1} c_e w_{i,e} \,+\, \mathsf{e}\cdot \sum_{e\in p_i\setminus E_1} \left( c_e \alpha_e w_{i,e} \mathsf{e}ll_e^{\alpha_e-1} + \frac{\rho}{\mathsf{e}^{\alpha}}\cdot c_e \alpha_e w_{i,e}^{\alpha_e} \right) \label{eq:improved-2}\\ &= \mathsf{e} \sum_{e\in p_i} \bar{\psi}_e \le \mathsf{e} \tau \sum_{e\in q_i} \bar{\psi}_e = \mathsf{e} \tau \rho \left(\sum_{e\in q_i\cap E_1} c_e w_{i,e} + \sum_{e\in q_i\setminus E_1} \left( \frac{c_e \alpha_e }\rho w_{i,e} \mathsf{e}ll_e^{\alpha_e-1} + \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e} \right)\right) \label{eq:improved-3}\\ &\le \mathsf{e} \tau \rho \left(\sum_{e\in q_i\cap E_1} c_e w_{i,e} + \sum_{e\in q_i\setminus E_1} \left( \frac1\rho c_e \alpha_e w_{i,e} A_e^{\alpha_e-1} + \frac{c_e \alpha_e }{\mathsf{e}^{\alpha}} w_{i,e}^{\alpha_e} \right)\right) = \mathsf{e}\tau \rho \cdot y_i. \label{eq:improved-4} \mathsf{e}nd{align} Inequality \mathsf{e}qref{eq:improved-2} follows by the same calculations as in \mathsf{e}qref{eq:int-dual-1}-\mathsf{e}qref{eq:int-dual-3}. The equalities in \mathsf{e}qref{eq:improved-3} is by definition of the costs~\mathsf{e}qref{eq:cost-int-LMP}, and the inequality in \mathsf{e}qref{eq:improved-3} is by choice of $p_i$. The inequality in \mathsf{e}qref{eq:improved-4} is by the monotonicity of loads. So we obtain $\overline{A} \le \mathsf{e}\tau\rho \sum_{i=1}^N y_i$. 
Moreover, we can bound the contribution of the $z$-variables in the exact same way as before, to obtain: $$\sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e}\cdot z_e^{\beta_e} \le \rho^{-\frac{\alpha}{\alpha-1}}(\alpha-1) \sum_{e\in E\setminus E_1}c_e A_e^{\alpha_e} = \rho^{-\frac{\alpha}{\alpha-1}}(\alpha-1) \cdot H.$$ Therefore, the dual objective is: $$D = \sum_{i=1}^N y_i - \sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e}\cdot z_e^{\beta_e} \ge \frac{\overline{A}}{\mathsf{e}\tau \rho} - \rho^{-\frac{\alpha}{\alpha-1}}(\alpha-1) \cdot H = \frac{L}{\tau} + \left(\frac{1}{\mathsf{e} \tau\rho} - \rho^{-\frac{\alpha}{\alpha-1}}(\alpha-1) \right)H. $$ Finally, choosing $\rho=(\mathsf{e}\tau \alpha)^{\alpha-1}$, we obtain: $$\frac{L}{\tau} \,+\, \frac{H}{(\mathsf{e}\tau\alpha)^\alpha} \le D\le (1+\alpha\mathsf{e}^{-\alpha})\mathsf{e}nsuremath{\mathsf{OPT}}\xspace,$$ where we used that the optimal value of $(D'')$ is at most $(1+\alpha\mathsf{e}^{-\alpha})\mathsf{e}nsuremath{\mathsf{OPT}}\xspace\le 2\mathsf{e}nsuremath{\mathsf{OPT}}\xspace$. Hence, $L\le 2\tau\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$ and $H\le 2(\mathsf{e}\alpha\tau)^\alpha\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace$, which completes the proof. \mathsf{e}nd{proof} \section{Application to \mathsf{e}nsuremath{\mathsf{GND}}\xspace with (D)oS Costs}\label{sec:overall} \psief\ensuremath{{\cal I}}\xspace{\mathsf{e}nsuremath{{\cal I}}\xspace} \psief\ensuremath{{\cal J}}\xspace{\mathsf{e}nsuremath{{\cal J}}\xspace} We now complete the proof of our main result (Theorem~\ref{thm:main-1}). Given an instance \ensuremath{{\cal I}}\xspace of \mathsf{e}nsuremath{\mathsf{GND}}\xspace with (D)oS costs as in \mathsf{e}qref{eq:energy}, we use Lemma~\ref{lem:fn-apx} to define a new instance \ensuremath{{\cal J}}\xspace of \mathsf{e}nsuremath{\mathsf{GND}}\xspace with power cost functions, as follows. For each original resource $e\in E$, we have two copies $e_1$ and $e_a$. Let $E_1:=\{e_1:e\in E\}$ and $E_a:=\{e_a:e\in E\}$: so the resources in \ensuremath{{\cal J}}\xspace are $E'=E_1\cup E_a$. Define scalars $c_{e_1}:=\xi_e q_e^{\alpha_e-1}$ and $c_{e_a}:=\xi_e$ for all $e\in E$. Also, define exponents $\alpha_{e_1}:=1$ and $\alpha_{e_a}:=\alpha_e$ for all $e\in E$. The weighted power functions in instance \ensuremath{{\cal J}}\xspace are $g_{r}(x):= c_{r}\cdot x^{\alpha_{r}}$ for all resources $r\in E'$. The reply-collections are extended so that for each reply $p\in \mathcal{P}_i$ in \ensuremath{{\cal I}}\xspace, there is a corresponding reply in \ensuremath{{\cal J}}\xspace that contains {\mathsf{e}m both} copies of resources $e\in p$. For each $e\in E$, note that function $h_e(x)$ used in Lemma~\ref{lem:fn-apx} is $h_e(x) = g_{e_1}(x)+g_{e_a}(x)$. Using the first inequality in Lemma~\ref{lem:fn-apx}, the optimal value of instance \ensuremath{{\cal J}}\xspace is $\mathsf{e}nsuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal J}}\xspace \le 2\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal I}}\xspace$. Now, using Theorem~\ref{thm:apx-online} on instance \ensuremath{{\cal J}}\xspace, we obtain: $$L=\sum_{r\in E_1} c_r A_r \le 2\tau\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal J}}\xspace \mbox{ and } H=\sum_{r\in E_a} c_r A_r^{\alpha_r} \le 2(\mathsf{e}\tau\alpha)^\alpha\cdot \mathsf{e}nsuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal J}}\xspace,$$ where $\{A_r\}_{r\in E'}$ denote the final loads in the algorithm. For each $e\in E$, note that $A_{e_1}=A_{e_a}$; we use $A_e$ to denote this load. 
As every weight is at least one, we have each $A_e\in \{0\}\cup \mathbb{R}_{\ge 1}$. The objective value in the original instance \ensuremath{{\cal I}}\xspace is
\begin{align}
\sum_{e\in E} f_e(A_e) &\le \sum_{e\in E} \left( \max\{q_e,1\}\cdot \xi_e q_e^{\alpha_e-1} \cdot A_e + \xi_e \cdot A_e^{\alpha_e}\right)\label{eq:combine-0} \\
&=\sum_{e\in E} \left( \max\{q_e,1\}\cdot c_{e_1}\cdot A_e + c_{e_a}\cdot A_e^{\alpha_e}\right)\le \max\{q,1\}\cdot L + H \label{eq:combine-1} \\
&\le 2\left(\max\{q,1\}\tau+ (e\tau\alpha)^\alpha\right)\cdot \ensuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal J}}\xspace\le 4\left(\max\{q,1\}\tau+ (e\tau\alpha)^\alpha\right)\cdot \ensuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal I}}\xspace.\label{eq:combine-2}
\end{align}
Inequality \eqref{eq:combine-0} is by Lemma~\ref{lem:fn-apx} (2nd inequality) and $A_e\in \{0\}\cup \mathbb{R}_{\ge 1}$. In \eqref{eq:combine-1}, the equality is by the definition of the scalars $c_r$ and the inequality is by the definition of $L$ and $H$. In \eqref{eq:combine-2}, the first inequality is by the above bounds on $L$ and $H$, and the last inequality uses $\ensuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal J}}\xspace \le 2\cdot \ensuremath{\mathsf{OPT}}\xspace_\ensuremath{{\cal I}}\xspace$. This completes the proof of Theorem~\ref{thm:main-1}.

\paragraph{Remark:} The requirement that every weight is at least one is crucial in obtaining our result. As noted earlier, this requirement also appears in all prior work, e.g. \cite{AndrewsAZZ12,AndrewsAZ16,AntoniadisIKMNP20,MakarychevS18,EmekKLS20}. In fact, any $r(q)$-competitive ratio for \ensuremath{\mathsf{GND}}\xspace under arbitrary weights (possibly less than one) would lead to an $O(1)$-competitive online algorithm, which is not possible even for the simplest setting of single-commodity flow in edge-weighted undirected graphs. To see this, consider a new instance of \ensuremath{\mathsf{GND}}\xspace with weights $w'_{i,e}=w_{i,e}/q$ and parameters $\sigma'_e=\sigma_e/q^\alpha$ and $\xi'_e=\xi_e$. Note that the new \ensuremath{\mathsf{GND}}\xspace instance is equivalent to the old one (the objective value of each solution is scaled down by $q^\alpha$). Moreover, the new value $q'=1$, which means that we have an $r(q')=O(1)$-competitive algorithm.

\paragraph{REP cost functions} We now consider the \ensuremath{\mathsf{GND}}\xspace problem under more general costs of the form~\eqref{eq:REP} and prove Theorem~\ref{thm:main-3}. The main idea is to replace each resource $e\in E$ with $q$ {\em copies} $e_1,\cdots, e_q$, each with a (D)oS cost function of the usual form~\eqref{eq:energy}. Then, we will directly apply Theorem~\ref{thm:main-1}. For each $e\in E$, (by renumbering if needed) let
$$\left(\frac{\sigma_e}{\xi_{e,1}}\right)^{1/\alpha_{e,1}}\,\, =\,\, \min_{j=1}^q \left(\frac{\sigma_e}{\xi_{e,j}}\right)^{1/\alpha_{e,j}}.$$
The new \ensuremath{\mathsf{GND}}\xspace instance has resources $\bar{E}:=\{e_j : j\in [q], e\in E\}$. For each $e\in E$, set
$$\sigma_{e_j}:= \left\{ \begin{array}{ll} \sigma_e & \mbox{ if }j=1\\ 0 & \mbox{ if } j=2,\cdots, q \end{array}\right., \mbox{ and } \alpha_{e_j}:=\alpha_{e,j}, \xi_{e_j}:=\xi_{e,j} \mbox{ for all } j\in [q].$$
Let $f_{e_j}(x)$ denote the (D)oS cost function for each $e_j\in \bar{E}$. Clearly, $\bar{f}_e(x) = \sum_{j=1}^q f_{e_j}(x)$ for all $x\ge 0$ and $e\in E$.
Moreover,
$$q:=\max_{f\in \bar{E}} \left(\frac{\sigma_f}{\xi_f}\right)^{1/\alpha_f} = \max_{e\in E} \min_{j\in [q]} \left(\frac{\sigma_e}{\xi_{e,j}}\right)^{1/\alpha_{e,j}} = Q.$$
Recall the definition of $Q$ in Theorem~\ref{thm:main-3}. For each request $i$, the new reply-collection is
$$\bar{\mathcal{P}}_i:=\{\cup_{e\in p} \{e_1,\cdots, e_q\} \,:\, p\in \mathcal{P}_i\},$$
i.e., each new reply corresponds to selecting all copies of the resources in some original reply $p$. Assuming a $\tau$-approximation algorithm for the min-cost oracle under $\mathcal{P}_i$, it is easy to obtain a $\tau$-approximation algorithm for the new min-cost oracle under $\bar{\mathcal{P}}_i$. Indeed, given costs $d:\bar{E}\rightarrow \mathbb{R}_+$ we define costs $d' : E\rightarrow \mathbb{R}_+$ as $d'_e:=\sum_{j=1}^q d_{e_j}$ for each $e\in E$ and apply the oracle for $\mathcal{P}_i$. Using Theorem~\ref{thm:main-1} on the new \ensuremath{\mathsf{GND}}\xspace instance, we obtain an $O(q\tau+(e \alpha\tau)^\alpha) = O(Q\tau+(e \alpha\tau)^\alpha)$-competitive online algorithm under REP cost functions. This proves Theorem~\ref{thm:main-3}. The runtime of this algorithm is $O(Nmq + N\Phi(mq))$, where $\Phi(\cdot)$ denotes the time taken by the min-cost oracle.

\section{Lower Bounds}\label{sec:LB}

We now show that our competitive ratio is tight up to a constant factor and prove Theorem~\ref{thm:main-2}. We consider the single commodity routing problem (\ensuremath{\mathsf{SSR}}\xspace) in directed graphs, which is a special case of \ensuremath{\mathsf{GND}}\xspace. We are given a directed graph $(V,E)$ with a weight $c_e\ge 0$ associated with each edge $e\in E$. There is a common source $s\in V$ and each online request $i$ corresponds to routing unit flow from $s$ to a sink node $t_i\in V$. The edge cost function of each edge is $f_e(x) = c_e\cdot f(x)$ where
$$f(x) = \left\{ \begin{array}{ll} 0 &\mbox{ if }x=0\\ \sigma+x^\alpha&\mbox{ if }x>0 \end{array} \right.. $$
Note that $q=\sigma^{1/\alpha}$. The min-cost reply oracle corresponds to a shortest path computation: so we also have a polynomial time exact oracle in this case. We provide two different instances of \ensuremath{\mathsf{SSR}}\xspace that show lower bounds of (i) $\Omega(q)$ for every choice of $\alpha\ge 1$ and $\sigma\ge 0$, and (ii) $\Omega((1.44\alpha)^\alpha)$ even when $q=0$.

The $\Omega((1.44\alpha)^\alpha)$ lower bound follows from the restricted-assignment scheduling problem with $\ell_p$-norm of loads~\cite{Caragiannis08}. Recall that in that problem there are $m$ machines and $N$ jobs arrive over time. Each job $i$ specifies a subset $M_i$ of machines and needs to be assigned to one of them. The objective is to minimize the sum of $p^{th}$ powers of the machine loads. This corresponds to the directed graph on nodes $\{s\}\cup\{u_e\}_{e=1}^m\cup \{v_i\}_{i=1}^N$ where $s$ is the source, $u$-nodes correspond to machines and $v$-nodes correspond to jobs. There is an edge from $s$ to each $u_e$ with weight $1$. For each job $i\in [N]$ and machine $e\in M_i$, there is an edge from $u_e$ to $v_i$ of weight zero. We also set $\alpha=p$ and $\sigma=0$. The resulting \ensuremath{\mathsf{SSR}}\xspace instance is clearly equivalent to the scheduling problem.

The $\Omega(q)$ lower bound uses a construction similar to the lower bound for online directed Steiner tree~\cite{FaloutsosPS02}.
Fix any value of $\sigma>0$ and $\alpha\ge 1$ (which also fixes $q$). We will show an $\Omega(q)$ lower bound for \ensuremath{\mathsf{SSR}}\xspace instances with this value of $\sigma$ and $\alpha$. The graph $G$ consists of a complete binary tree $B$ of depth $q$ rooted at node $t$ with all edges directed towards $t$, and a source $s$ with edges to all nodes of the tree. Let $S$ denote all the edges out of $s$. All edges of the binary tree have weight zero and all edges in $S$ have weight one. The input sequence consists of $q$ requests as follows. At any point in the algorithm, let $A$ denote all edges that carry flow at least one: so the current cost is at least $|A\cap S|\cdot \sigma$. The first sink is $t_1=t$. For $i=2,\cdots, q$, sink $t_i$ is chosen to be the child of $t_{i-1}$ in $B$ such that $A$ does not contain an $s-t_i$ path. It is clear that $|A\cap S|\ge q$ at the end of this request sequence. So the online cost is at least $q\sigma$. Note that the sinks $t_1, \cdots, t_q$ lie on a single directed path in the tree $B$: so an offline solution can just select the edges $(s,t_q)$ followed by $(t_i, t_{i-1})$ for $i=q,\cdots, 2$. The cost of this solution is at most $\sigma+q^\alpha=2\sigma$, as it uses only one edge in $S$ (which carries a flow of $q$). Thus, the competitive ratio is at least $q/2$.

\appendix

\section{Deriving the Dual Program}\label{app:derive-dual}

The primal program $(P)$ is:
\begin{align*}
\min\quad&\sum_{e\in E} c_e\cdot \left( \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}\right)^{\alpha_e}\\
\mbox{s.t.}\quad& \sum_{p\in \mathcal{P}_i}x_{i,p} \ge 1,\qquad \forall i\in [N] \\
&\mathbf{x} \ge \mathbf{0}.
\end{align*}
Let $k=\sum_{i=1}^N|\mathcal{P}_i|$ denote the number of variables in this convex program. Let $A \in \mathbb{R}^{N\times k}$ denote the constraint matrix for the covering constraints; note that $A$ is block-diagonal with
$$a_{j, (i,p)}=\left\{\begin{array}{ll} 1 &\mbox{ if }i=j\\ 0 & \mbox{ if } i\ne j \end{array}\right., \qquad \forall j\in [N],\, i\in [N],\, p\in \mathcal{P}_i.$$
For simpler notation, let $\ensuremath{\mathsf{w}_e}\xspace\in \mathbb{R}^k$ denote the vector with $\ensuremath{\mathsf{w}_e}\xspace(i,p)=w_{i,e}$ if $p\ni e$ (and $0$ otherwise) for all $p\in \mathcal{P}_i$ and $i\in [N]$. So $\ensuremath{\mathsf{w}_e}\xspace^T x = \sum_{i=1}^N w_{i,e} \sum_{p\in \mathcal{P}_i: e\in p} x_{i,p}$. Define functions
$$g_e(x) := c_e\cdot \left(\ensuremath{\mathsf{w}_e}\xspace^T x\right)^{\alpha_e},\,\forall e\in E, \quad \mbox{ and } \quad g(x) := \sum_{e\in E} g_e(x).$$
Note that $g(x)$ is the objective in our convex program. Letting $\{y_i\}_{i=1}^N$ denote the Lagrange multipliers for the covering constraints, we obtain the Lagrangian function:
\[L(y)=L(y_1, \dots, y_N) = \inf_{x \geq 0} \left( g(x) + y^T (\mathbf{1} - Ax)\right) = y^T \mathbf{1} - g^*(A^Ty),\]
where $g^*(\mu) := \sup_{x\ge 0} (\mu^T x - g(x))$ is the Fenchel conjugate of $g(x)$. The dual program is then:
\begin{equation} \label{eq:app-dual} \sup_{y\ge 0} L(y) = \sup_{y\ge 0} \left(y^T \mathbf{1} - g^*(A^Ty)\right).\end{equation}
Note that we always have $\mu:=A^Ty\ge 0$ as $y\ge 0$ and $A$ is non-negative. As $g(x) = \sum_{e \in E} g_e(x)$, we can use the Moreau--Rockafellar formula to compute
\[g^*(\mu) = \bigoplus_{e\in E} g_e^*(\mu) = \inf_{\sum_{e } \mu_e = \mu} \sum_{e\in E}g_e^*(\mu_e),\]
where $\oplus$ is the infimal convolution.
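As a one-dimensional warm-up for the computations in the next two claims (a standard calculation, recorded only to fix the exponents), consider a single scalar variable $t\ge 0$ and the function $\phi(t)=c\,t^{\alpha}$ with $c>0$ and $\alpha>1$. For $\mu\ge 0$, the supremum in $\phi^*(\mu)=\sup_{t\ge 0}\left(\mu t - c\,t^{\alpha}\right)$ is attained at $t^*=\left(\frac{\mu}{c\alpha}\right)^{1/(\alpha-1)}$, and
$$\phi^*(\mu) \;=\; \mu t^* - c (t^*)^{\alpha} \;=\; c(\alpha-1)\left(\frac{\mu}{c\alpha}\right)^{\frac{\alpha}{\alpha-1}},$$
which is exactly the expression appearing in Claim~\ref{cl:conj>1}, with $\lambda=\max_{i,p}\mu(i,p)/\ensuremath{\mathsf{w}_e}\xspace(i,p)$ playing the role of $\mu$.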
We now compute the conjugate functions for the $g_e$s depending on whether $\alpha_e>1$ or $\alpha _e=1$. Recall that $E_1=\{e\in E:\alpha_e=1\}$.

\begin{cl}\label{cl:conj>1}
For any $e\in E$ with $\alpha_e>1$ and $\mu\ge 0$,
$$g_e^*(\mu) = c_e(\alpha_e-1)\left(\frac{\lambda}{c_e\alpha_e}\right)^{\beta_e}, \quad \mbox{ where }\lambda = \max_{i,p} \frac{\mu(i,p)}{\ensuremath{\mathsf{w}_e}\xspace(i,p)} \mbox{ and }\beta_e=\frac{\alpha_e}{\alpha_e-1}.$$
Above, we treat $0/0=0$. Also, $g^*_e(\mu)=\infty$ if $\mu(i,p)>0$ for any $(i,p)$ with $\ensuremath{\mathsf{w}_e}\xspace(i,p)=0$.
\end{cl}
\begin{proof}
Note that $g_e^*(\mu) = \sup_{x\ge 0} \left(\mu^T x - c_e(\ensuremath{\mathsf{w}_e}\xspace^Tx)^{\alpha_e}\right)$. For simpler notation, we use $h$ to index the coordinates $(i,p)$ in the vectors $x$ and $\ensuremath{\mathsf{w}_e}\xspace$.

We first show that if the supremum in $g_e^*(\mu)$ is achieved, there is an optimal $x$ with at most one positive coordinate. Suppose that $x$ achieves the supremum in $g_e^*(\mu)$ and has $x_1,x_2>0$. Without loss of generality, say $\frac{\mu(1)}{\ensuremath{\mathsf{w}_e}\xspace(1)} \ge \frac{\mu(2)}{\ensuremath{\mathsf{w}_e}\xspace(2)}$. Then, for some $\epsilon>0$, consider the new solution $x'$ with $x'_1=x_1+\epsilon \ensuremath{\mathsf{w}_e}\xspace(2)$, $x'_2=x_2-\epsilon \ensuremath{\mathsf{w}_e}\xspace(1)$ and $x'_h=x_h$ for the other coordinates $h$. Note that $\ensuremath{\mathsf{w}_e}\xspace^Tx'=\ensuremath{\mathsf{w}_e}\xspace^Tx$, so the increase in the objective is $\epsilon (\mu(1)\ensuremath{\mathsf{w}_e}\xspace(2) - \mu(2)\ensuremath{\mathsf{w}_e}\xspace(1)) \ge 0$. So we can choose an $\epsilon>0$ such that $x'_2=0$ and the objective at $x'$ is at least as much as at $x$. Repeating this process, we will end up with a solution with at most one positive coordinate, as claimed.

By the above argument, we can assume that an optimal $x$ (if any) will only have a positive value on the coordinate $h$ that maximizes $\mu(h)/\ensuremath{\mathsf{w}_e}\xspace(h)$. By renumbering let $h=1$ denote this coordinate. Then $g_e^*(\mu) = \sup_{x_1\ge 0} \left(\mu(1)\cdot x_1 - c_e (\ensuremath{\mathsf{w}_e}\xspace(1)\cdot x_1)^{\alpha_e}\right)$. By simple calculus, if $\ensuremath{\mathsf{w}_e}\xspace(1)=0$ and $\mu(1)>0$ then $g_e^*(\mu)=\infty$; otherwise the supremum is achieved and $g_e^*(\mu)=c_e(\alpha_e-1)\left(\frac{\mu(1)}{\ensuremath{\mathsf{w}_e}\xspace(1) c_e\alpha_e}\right)^{\frac{\alpha_e}{\alpha_e-1}}$. The claim follows by the definition of $\lambda$ and $\beta_e$.
\end{proof}

Using Claim~\ref{cl:conj>1}, for any $e\in E\setminus E_1$ we can re-write:
\begin{equation}\label{eq:app-dual-1}
g^*_e(\mu_e) = \min \left\{ \frac{c_e\alpha_e}{\beta_e} \left( \frac{\lambda_e}{c_e\alpha_e}\right)^{\beta_e} \,:\, \ensuremath{\mathsf{w}_e}\xspace(i,p)\cdot \lambda_e \ge \mu_e(i,p) \mbox{ for all }(i,p)\right\},
\end{equation}
where an infeasible problem has value $\infty$.

\begin{cl}\label{cl:conj=1}
For any $e\in E$ with $\alpha_e=1$ and $\mu\ge 0$,
$$g_e^*(\mu) = \left\{ \begin{array}{ll} 0& \mbox{ if }\mu(i,p) \le c_e\cdot \ensuremath{\mathsf{w}_e}\xspace(i,p) \mbox{ for all } i\in [N] \mbox{ and } p\in \mathcal{P}_i\\ \infty & \mbox{ otherwise}. \end{array}\right..$$
\end{cl}
\begin{proof}
Note that $g_e^*(\mu) = \sup_{x\ge 0} \left(\mu^T x - c_e(\ensuremath{\mathsf{w}_e}\xspace^T x)\right)$. Again, we use $h$ to index the coordinates $(i,p)$ in the vectors $x$ and $\ensuremath{\mathsf{w}_e}\xspace$.
First, suppose $\mu(h)>c_e\cdot \ensuremath{\mathsf{w}_e}\xspace(h)$ for some $h$ (say $h=1$). Then, setting $x_h=0$ for $h\ne 1$ and letting $x_1\rightarrow \infty$ we obtain $g_e^*(\mu)=\infty$. Now, we assume $\mu \le c_e\cdot \ensuremath{\mathsf{w}_e}\xspace$. Then $\mu^T x - c_e(\ensuremath{\mathsf{w}_e}\xspace^T x)\le 0$ for every $x\ge 0$, with equality at $x=0$, so $g_e^*(\mu)=0$.
\end{proof}

Using Claim~\ref{cl:conj=1}, for $e\in E_1$, we have:
\begin{equation}\label{eq:app-dual-2}
g^*_e(\mu_e) = \min \left\{ 0 \,:\, \ensuremath{\mathsf{w}_e}\xspace(i,p)\cdot \lambda_e \ge \mu_e(i,p) \mbox{ for all }(i,p),\, \mbox{ and }\lambda_e\le c_e\right\},
\end{equation}
where an infeasible problem has value $\infty$. By Claims~\ref{cl:conj>1} and \ref{cl:conj=1}, we know that $g^*_e(\mu)$ is non-decreasing in $\mu$. So we can write:
\[g^*(\mu) = \inf_{\sum_{e } \mu_e \ge \mu} \sum_{e\in E}g_e^*(\mu_e).\]
Combined with \eqref{eq:app-dual-1} and \eqref{eq:app-dual-2}, we have:
\begin{align*}
g^*(\mu) = \quad &\min\quad \sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e} \left( \frac{\lambda_e}{c_e\alpha_e}\right)^{\beta_e} \\
&\mbox{s.t.}\quad \sum_{e\in E} \lambda_e \cdot \ensuremath{\mathsf{w}_e}\xspace(i,p) \ge \mu(i,p),\qquad \forall i\in [N],\, p\in \mathcal{P}_i, \\
&\qquad\lambda_e\le c_e,\qquad \forall e\in E_1,\\
&\qquad\mathbf{\lambda} \ge \mathbf{0}.
\end{align*}
Combining this with \eqref{eq:app-dual}, using $\mu=A^T y$ and the definition of the matrix $A$, the dual program is:
\begin{align*}
\max&\quad \sum_{i=1}^N y_i \, -\, \sum_{e\in E\setminus E_1} \frac{c_e\alpha_e}{\beta_e} \left( \frac{\lambda_e}{c_e\alpha_e}\right)^{\beta_e} \\
\mbox{s.t.}&\quad \sum_{e\in E} \lambda_e \cdot \ensuremath{\mathsf{w}_e}\xspace(i,p) \ge y_i,\qquad \forall i\in [N],\, p\in \mathcal{P}_i, \\
&\qquad\lambda_e\le c_e,\qquad \forall e\in E_1,\\
&\qquad\mathbf{\lambda}, \mathbf{y} \ge \mathbf{0}.
\end{align*}
Using new variables $z_e=\lambda_e/(c_e\alpha_e)$ we obtain the dual program $(D)$ as described in \S\ref{sec:frac}.

\end{document}
\begin{document}
\title{The growth order of the optimal constants \\ in Tur\'{a}n-Er\H {o}d type inequalities in $L^q(K,\mu)$}
\author{P. Yu. Glazyrina, Yu. S. Goryacheva and Sz. Gy. R\'{e}v\'{e}sz}
\date{\today}
\maketitle
\begin{abstract}
In 1939 Tur\'{a}n raised the question about lower estimations of the maximum norm of the derivatives of a polynomial $p$ of maximum norm $1$ on the compact set $K$ of the complex plane under the normalization condition that the zeroes of $p$ in question all lie in $K$. Tur\'{a}n studied the problem for the interval $I=[-1,1]$ and the unit disk $D$ and found that with $n := \deg p$ tending to infinity, the precise growth order of the minimal possible derivative norm (oscillation order) is $\sqrt{n}$ for $I$ and $n$ for $D$. Er\H{o}d continued the work of Tur\'{a}n considering other domains. Finally, in 2006, Hal\'{a}sz and R\'{e}v\'{e}sz proved that the growth of the minimal possible maximal norm of the derivative is of order $n$ for all compact convex domains. Although Tur\'{a}n himself gave comments about the above oscillation question in $L^q$ norms, till recently results were known only for $D$ and $I$. Recently, we have found order $n$ lower estimations for several general classes of compact convex domains, and proved that in $L^q$ norm the oscillation order is at least $n/\log n$ for all compact convex domains. In the present paper we prove that the oscillation order is not greater than $n$ for all compact (not necessarily convex) domains $K$ and $L^q$-norm with respect to any measure supported on more than two points on $K$.
\end{abstract}
{\it Mathematics subject classification (2010):} 41A17, 30E10, 52A10.

{\it Keywords:} polynomial, Tur\'{a}n's lower estimate of derivative norm, compact set, positive width, measure, weight, Lebesgue measure, area measure.

\section{Introduction }\label{sec:Introduction}

Let $K$ be a compact set of the complex plane $\mathbb{C}$ and $\mu$ be a finite Borel measure on $K$. For a polynomial $p \in \mathbb{C}[z]$ and a parameter $0<q<\infty$ we set
$$ \| p \|_q :=\| p \|_{L^q(K,\mu)} := \left(\int\limits_{K}|p(z)|^q \mu(dz)\right)^{1/q}, $$
and for $q=\infty$ we will also consider the limiting case\footnote{The last equality follows by continuity of $p(z)$; otherwise one should have taken $\inf\left\{ \sup_{z\in Q} |p(z)| \colon Q\subset K,\ \mu(K\setminus Q)=0\right\}$. Note that this definition of the $\infty$-norm is essentially independent of the measure $\mu$ (apart from its support), and hence is different from the usual weighted maximum norm definition $\|p\|_{w,\infty}:=\|pw\|_\infty$. While in case of an absolutely continuous measure $\mu$ with density function $w$ (with respect to Lebesgue or area measure $\lambda$ restricted to $K$), the weighted $L^q(K,\mu)$ norm matches the weighted norm $\|p\|_{w,q}:=\left( \int_K |p(z)|^q w(z) d\lambda(z)\right)^{1/q}$ for any $0<q<\infty$ (and analogously if the measure $\mu$ is absolutely continuous with respect to the arc length measure $\ell$ of $\partial K$), the limiting case of this relation provides our above definition, and does not always match the $\|\cdot\|_{w,q}$-norm. Therefore, such weighted maximum norms require separate studies. They may, however, be obtained as a limiting case of $L^q$ norms with respect to varying weights $w^q$, as then $\|p\|_{w,\infty}=\lim_{q \to \infty} \|pw\|_q$. }
$$ \| p \|_\infty :=\| p \|_{L^\infty(K,\mu)} :=\lim_{q \to \infty} \| p \|_q=\max_{z\in \mathop{\mathrm{supp}} \mu} |p(z)|.
$$
Let $d$ denote the diameter of $K$ and $w$ denote the width of $K$,
$$ d:=d(K) := \max_{z, z' \in K}|z - z'| , \quad \displaystyle w:=w(K) := \min_{\gamma \in [-\pi, \pi]} \max_{z,z' \in K}\left(\mathop{\mathrm{Re}}(ze^{i\gamma})-\mathop{\mathrm{Re}}(z'e^{i\gamma})\right) . $$
The width $w$ is equal to the smallest distance between two parallel lines between which $K$ lies, hence $w\le d$. Denote by $\mathcal{P}_n(K)$ the set of algebraic polynomials $p$ of degree exactly $n$, all of whose zeros lie in $K$. The (normalized) quantity under our study is the ``inverse Markov factor'' or ``oscillation factor''
\begin{equation}\label{mainconst}
M_{n,q}(K,\mu) := \inf_{\substack{p\in\mathcal{P}_n(K),\\ \|p\|_q\neq 0}} \frac{\|p'\|_q}{\|p\|_q}.
\end{equation}
The most important case is when $K$ is convex and $\mu$ is the arc length measure on the boundary of $K$, in which case we will simply write $M_{n,q}(K)$.

Problem \eqref{mainconst} goes back to Tur\'{a}n \cite{Tur}, who raised the question of the inequality
\begin{equation}\label{tur}
\|p'\| _{C(K)} \ge M_{n,\infty} (K)\|p\|_{C(K)}, \quad p \in \mathcal{P}_n(K).
\end{equation}
This inequality is a kind of converse to the classical inequalities of Markov and Bernstein. With any positive constant in place of $M_{n,q}(K)$ the inequality \eqref{tur} fails to hold on the set of all polynomials of degree $n$, as is shown by the example of the polynomials $z^n+c$ as $c\to \infty$. For this reason, we need to restrict the class of polynomials in order to get a meaningful inequality with a nonzero constant. Tur\'{a}n imposed the additional condition $\displaystyle p \in \mathcal{P}_n(K)$, and studied the problem for the interval $K=I:=[-1,1]$ and the unit disk $K=D:=\{z\in \mathbb{C}:|z|\leq1\}$. He proved that
\begin{equation}\label{T1}
M_{n,\infty} (D)=\frac{n}{2}, \quad \text{and} \quad M_{n,\infty} (I)\geq \frac{\sqrt{n}}{6}.
\end{equation}
Moreover, he pointed out by the example of $(1 - x^2)^n$ that the $\sqrt{n}$ order in the latter relation cannot be improved upon. In the same year, Er\H{o}d \cite{Erod} continued Tur\'{a}n's research and showed that $M_{n,\infty}(I) = \sqrt{n/e}+O(1/n),$ $n\to \infty$. More importantly, he extended the study of Tur\'{a}n's problem to general convex domains of $\mathbb{C}$, and obtained several results in the maximum norm for various general classes of convex domains. On account of the above, we will term these types of inequalities ``Tur\'an-Er\H{o}d type inequalities'', and call the respective optimal constants $M_{n,q}(K,\mu)$ the ``Tur\'an-Er\H{o}d constants'', too. The growth order of $M_{n,q}(K,\mu)$ in terms of the degree $n$ is in the focus of our study.

In the full generality of all compact convex sets Levenberg and Poletsky proved the general inequality $M_{n,\infty} (K)\geq \dfrac{\sqrt{n}}{20 d} $ \cite[Theorem 3.2]{LP}. The order of this result is best possible for the interval case, but for domains with nonempty interior a substantial improvement is possible. Namely, for any compact convex domain $K$ it was obtained in 2006 \cite[Theorem 1]{SzR} by Hal\'{a}sz and R\'{e}v\'{e}sz that for all $n\in \mathbb{N}$
$$\displaystyle M_{n,\infty} (K)\geq 0.0003 \frac{w}{d^2} n,$$
and R\'{e}v\'{e}sz also proved \cite[Theorem 2]{SzR}
\begin{equation}\label{Revesz01}
M_{n,\infty}(K) \leq 600 \frac{w}{d^2} n, \quad \textrm{for} \ n>\dfrac{d^2}{128w^2}\ln\dfrac{d}{16w}.
\end{equation} Note that both estimates have the same form and depend only on $n$, $w$ and $d$, so that up to an absolute constant factor, even the dependence on the geometric features of the general convex body is established. Recently these results were partially extended to the $L^q$ norms ($q\ge 1$) with respect to the arc length measure $\ell$ on the boundary curve $\Gamma$ of $K$ by Glazyrina and R\'{e}v\'{e}sz. They proved \cite{GR1, GR2} that the growth order of $M_{n,q}(K)$ is again $n$ for a certain class of compact convex domains, including all smooth\footnote{A convex domain is smooth if it has a unique supporting line at each boundary point.}, compact convex domains $K$ and also convex polygonal domains having no acute angles at their vertices. It was conjectured \cite[Conjecture 1]{GR2} that even for arbitrary compact convex domains the growth order of $M_{n,q}(K)$ should be $n$. In \cite{GR3}, it is shown that in $L^q$ norm the oscillation order is at least $n/\log n$ for all compact convex domains. From the other direction, Glazyrina and R\'{e}v\'{e}sz proved that one cannot expect more than order $n$ growth because in fact $M_{n,q} (K)\le \dfrac{15}{d}n$, $(q\ge 1)$ (a combination of Theorem 5 and Remark 6 of \cite{GR1}.) In this paper we study $L^q$-norms for finite $q$, and obtain $\infty$-norm estimates only via direct limiting cases. Therefore, in this introduction we also restrict mainly to $L^q$ results. Detailed overviews of further results in Tur\'{a}n type inequalities can be seen in \cite{SzR,GR1,GR2, GR3}. Note also that until our work \emph{weighted} $L^q$ norms were only considered for the interval $I$ (and even there only with absolutely continuous measures with some density function $w$). One notable result is due to Varma \cite{Varma1, Varma2}, who proved that for the interval $\sqrt{\frac12 n + \frac34 +\frac{3}{4n}} <M_{n,2}(I) \le \sqrt{\frac12 n + \frac34 +\frac{3}{4(n-1)}}$, which in itself is not a weighted result, but Varma also compared some weighted norms of $p'$ to non-weighted norm of $p$, which implied his above mentioned results. Varma and Underhill \cite{Underhill} studied the inequality \begin{equation}\label{Iweight} \|p'w\|_{L^q[-1,1]} \geq C_{n,q,w}\|pw\|_{L^q[-1,1]}, \quad p \in \mathcal{P}_n([-1,1]). \end{equation} They found the sharp value $M_{n,q}(I,w)$ of $C_{n,q,w}$ for even $n$, $q=2$, $w(x)=(1-x^2)^{\alpha}$, $\alpha>1$, and for $q=4$, $w(x)=(1-x^2)^3$. For the cases of $q=4$, $w(x)=(1-x^2)$ and $w(x)=(1-x^2)^2$, they established the right order of magnitude of the respective Tur\'an-Er\H{o}d constants. Xiao and Zhou \cite[p. 198, Theorem 1, Corollary 1]{Xiao} proved that $M_{n,\infty}(I,w) \ge C(w) \sqrt{n}$ for any nonnegative, continuous, piecewise monotonic weight $w(x)$ on $[-1,1]$. They also pointed out that for the Jacobi weight $(1+x)^{\alpha}(1-x)^{\beta}$, $\alpha,\beta \ge 0$ the $\sqrt{n}$ order cannot be improved. Wang and Zhou \cite[Theorems 1 and 2]{Wang} proved $M_{n,q}(I,w) \ge C(w,q) \sqrt{n}$ for generalized Jacobi weights, i.e. for weights with finite total mass $\int_{-1}^1 w(x)dx$ and satisfying that values of $w(x_1), w(x_2)$ are within constant ratio if the variables $x_1, x_2$ have bounded proportion of distances from the interval endpoints 1 or $-1$. Yu and Wei \cite[Corollaries 1 and 2]{Yu} obtained Tur\'an type inequalities for doubling, and in case of $q=\infty$, for so-called $A^*$ weights. 
Subsequently, in \cite{Chinese} the results for doubling weights were extended to a somewhat larger class of weights, called "$N$-doubling weights". Some $L^q$ results were obtained for the disk and its perimeter with the (unweighted) arc length measure on it, but truly weighted versions were not derived. Combining the results of Malik \cite{Malik69} (obtained for $R \le 1$) and Govil \cite{Govil} (proved for $R \ge 1$), it is known that denoting $D_R:=\{z\in\mathbb{C}~:~|z| \le R\}$ and putting $M_{n,q}(K,Q):=\inf_{P \in \mathcal{P}_n(K)} \| P'\|_{L^q(Q)} /\|P\|_{L^q(Q)}$, we have $$ M_{n,\infty}(D_R,D)= \begin{cases} \dfrac{n}{1+R}, R\le 1 \\ \dfrac{n}{1+R^n}, R\ge 1 \end{cases} . $$ Regarding maximum norm, there are some further results in the literature for the two discs case, sharpening the above in case there is suitable information on zeroes or coefficients, but these results do not improve the above--sharp in themselves--inequalities in general. In \cite{Malik} the exact Tur\'an-Er\H{o}d-type comparison constant of the circle is computed between $\infty$-norm and $q$-norm, i.e. it is proved that for any $q>0$ and $P\in P_n(D)$ it holds $\|p'\|_{\infty} \ge A_q \|p\|_q$, where $A_q$ is an explicit constant, attained in case $P(z)=z^n+1$ (and equivalent polynomials). This was generalized by Aziz \cite{Aziz} to any $L^q$ norms, even different ones on the two sides of the respective inequalities, both for zeroes in $D_R$ with $R\le 1$ and also for $R>1$. Formerly \cite{GR1, GR2, GR3}, we extended $L^q$-norm investigations from the interval and circle case to boundary arc length $L^q$ norm of compact convex sets, but only here we proceed towards general weighted norms, moreover even to measures $\mu$ and $L^q(K,\mu)$ norms. This seems to be appropriate also regarding the other, natural possibility of extensions, namely, regarding the ``two-set problem'' with two prescribed sets: one, say $Q$, for taking the norm and one, say $K$, for the location of zeros. Maximum norm results for two sets $K$ and $Q$ can be viewed as the limiting case of $L^q(K,\mu)$ estimates with respect to a fixed measure $\mu$ supported exactly in $Q$, while restricting the zeros of the considered polynomials to lie in $K$ (i.e., the polynomials to belong to $\mathcal{P}_n(K)$). As discussed above, that type of results were derived in the disk case with $Q$ being the unit circle and $K$ being another concentric disk \cite{Malik69, Govil, Aziz}. As an enlightening example (and the only one we know about apart from the concentric circle cases), let us quote a recent result of Komarov. In 2019, Komarov \cite{Komarov} obtained the estimate $M_{n,\infty}(D^+,I)\ge 2\sqrt{n} /(3\sqrt{210e})$ for $K=D^{+}:=\{z\in D~:~ \mathop{\mathrm{Im}} z\geq0\}$, where in general we mean $M_{n,\infty}(K,Q):=\inf_{p\in \mathcal{P}_n (K)} \|p'\|_{C(Q)}/\|p\|_{C(Q)}$. The $\sqrt{n}$ order in this estimate is sharp\footnote{Indeed, already Tur\'an showed that there exists a polynomial $p\in \mathcal{P}_n(I)$ with $\|p'\|_{C(I))}=O(\sqrt{n}) \|p\|_{C(I)}$, and any such polynomial is automatically in Komarov's class $\mathcal{P}_n(D^{+})$.}. As explained above, this can be considered as a special case of a measure defined weighted norm estimate, with $\mathop{\mathrm{supp}} \mu=I$ (say $\mu$ can be the linear Lebesgue measure on $I$). 
We also note that Erd\'{e}lyi \cite{Erd} generalized Komarov's result further to the set of polynomials of degree $n$ having at least $n-\ell$ zeros in $D^+$ and at least one zero in $[-1,1].$

As Komarov's result shows, if the width of $\mathop{\mathrm{supp}} \mu$ is zero, then the result may follow the pattern of Tur\'{a}n's result for $I$, the order of growth being $\sqrt{n}$. We are more interested in the second case, when the support of the measure does not degenerate to a subset of a straight line segment. The only cases known are cases when $\mu$ is the arc length measure $\ell$ of the boundary curve $\partial K$ of a certain compact convex domain $K$; and in all known cases the growth order of $M_{n,q}(K)$ is $n$. However, for fully general compact convex domains we only know estimates to the effect $n/\log n \ll M_{n,q}(K) \ll n$ and only conjectured that for all compact convex domains $K$ the order of $M_{n,q}(K)$ should indeed be $n$. Here we will extend investigations to other measures, and will in particular consider the situation with $\mu$ being the area- (Lebesgue-) measure $\lambda$ restricted to $K$, where $K$ is a compact convex domain.

Let us recall the following. From Tur\'{a}n's proof \cite[the footnote on p. 93]{Tur} it follows that if for the point $z\in \partial K$ there is a disk $D_R$ of radius $R$ passing through $z$ and containing $K$ then
\begin{equation}\label{Rcirc}
|p'(z)|\ge \frac{n}{2R}|p(z)|.
\end{equation}
In 2002, Levenberg and Poletsky called a set $K$ $R$-circular if \eqref{Rcirc} holds for all points $z\in \partial K$ \cite[p. 176, Theorem 2.2]{LP}. It is easy to verify that an $R$-circular set is always convex \cite[p. 176, Corollary 2.3, Remark]{LP}. Suppose that $K$ is $R$-circular and a finite measure $\mu$ is supported on $\partial K$ -- the boundary of $K$. Raising inequality \eqref{Rcirc} to a power $q>0$ and integrating over $\partial K$ with respect to $\mu$, we get
\begin{equation*}\label{Rcirc1}
\int_{\partial K}|p'(z)|^q \mu(dz) \ge \frac{n^q}{(2R)^q} \int_{\partial K}|p(z)|^q \mu(dz).
\end{equation*}
Hence $M_{n,q}(K,\mu)\ge \dfrac{n}{2R}$. This was already noted by Tur\'{a}n for the disk (and arc length on $\partial D$), and formally presented by Levenberg and Poletsky for general $R$-circular domains and the boundary arc length measure $\ell$ on $\partial K$, see Proposition 5 in \cite{LP}. However, it automatically extends to any measure supported on $\partial K$ as well.

Our goal is to obtain an upper bound for $M_{n,q}(K,\mu)$ in terms of $n$, $d$, $w$, and some characteristics of the measure $\mu$. Let us note that we cannot expect any sound results if $\mathop{\mathrm{supp}} \mu$ is finite, for one can easily guarantee (for sufficiently large degree $n$) that $p'$ vanishes on a given prescribed finite set in $K$. Therefore, we can surely restrict our attention to the case when, for two points $A$ and $B$, the measure is not supported on just those two points. In other words, $\mu(\{A,B\})<\mu(K)$, i.e. $\mu(K\setminus\{A,B\})>0$. Let us select and fix any diameter of $K$, and let $A$ and $B$ be the endpoints of this given diameter. Then the straight lines $a$ and $b$, passing through $A$ and $B$ and orthogonal to the diameter $AB$, must be strict supporting lines, so that $a\cap K=\{A\}$, and $b \cap K =\{B\}$ only. In fact, more is true: $K$ is a subset of the disks of radius $d$ about $A$ and about $B$ as well.
Therefore, outer regularity of the measure $\mu$ entails that with sufficiently close parallel lines $a'$ and $b'$ to $a$ resp. $b$, even the $\mu$-measure of the part of $K$ between $a'$ and $b'$ stays positive. These considerations motivate our formal parameterization of the main result of the paper.

For the following, we introduce some further notation. Let $K$ be a compact set, and points $A, B\in K$ be such that $|B-A|=d$. We will denote by $K_\delta$ the part of the set $K$, enclosed between two parallel lines that are perpendicular to the segment $[A,B]$ and located at the distance $\delta d/2$ from the midpoint $(A+B)/2$ of the segment, i.e.,
\begin{equation}\label{Kdelta}
K_\delta:=\left\{z\in K : \mathop{\mathrm{Re}}\left((z-(A+B)/2)e^{-i\arg(B-A)}\right) \in [- \delta d/2, \delta d/2] \right\}.
\end{equation}

\begin{theorem}\label{theorem1}
Let $\displaystyle K $ be a compact subset of~$\mathbb{C}$ with width $w>0$ and diameter $d$. Suppose that a finite non-negative measure $\mu$ is given on $K$ and there are $\theta \in (0,1)$ and $\delta \in (0,1)$ such that
\begin{equation}\label{ogrMu}
\mu\left(K_\delta\right) \geq \theta \mu(K).
\end{equation}
Then for any $0<q\le \infty$ and
\begin{equation}\label{ncondTh}
\displaystyle n \ge 2(1+1/q)\frac{d^2}{w^2}\ln\frac{d}{w}
\end{equation}
we have\footnote{We set $1/q=0$ for $q=\infty$.}
\begin{equation}\label{Main}
M_{n,q}(K,\mu)\le C_q(\delta,\theta)\frac{w}{d^2}n, \quad \mbox{where} \quad C_q(\delta,\theta):= \dfrac{121}{1-\delta}\left(1 +\frac{2}{\theta} \right)^{1/q}.
\end{equation}
\end{theorem}

If $K$ is convex and $\mu$ is the area on $K$ (i.e., the restriction to $K$ of the Lebesgue measure $\lambda$ on the plane) or $\mu$ is the (linear) Lebesgue measure (i.e. the arc length measure $\ell$) on the boundary of $K$, then \eqref{ogrMu} holds for any $\theta \in (0,1)$ with a suitably chosen $\delta$. As we will show in the last section, it is possible to estimate $\theta$ via $\delta$, and then optimize with respect to $\delta\in(0,1)$, finally getting a bound depending on $q$ only. This leads to the following corollaries.

\begin{corollary}\label{cor1}
Let $K\subset \mathbb{C}$ be a compact convex subset of $\mathbb{C}$ having width $w>0$ and diameter $d$. Let $0<q\le \infty$, and $\displaystyle {n \ge 2(1+1/q)\frac{d^2}{w^2}\ln\frac{d}{w}}$. If $\mu$ is the linear Lebesgue measure on the boundary of $K$ (arc length measure $d\ell$), then we have
$$ M_{n,q}(K) \le C_q \frac{w}{d^2} n, $$
where
\begin{equation}\label{C1}
C_q:= 121 \frac{3q+2+2\sqrt{q^2+3q+1}}{5q} \left(3+2q+2\sqrt{q^2+3q+1}\right)^{1/q}.
\end{equation}
\end{corollary}

As a consequence, if $q=\infty$, then $M_{n,\infty}(K)\le 121 \dfrac{w}{d^2} n$ for $n\ge 2\dfrac{d^2}{w^2}\ln \dfrac d w$, improving upon the earlier estimate \eqref{Revesz01}.

\begin{corollary}\label{cor2}
Let $K\subset \mathbb{C}$ be a compact convex subset of $\mathbb{C}$ having width $w>0$ and diameter $d$. Let $0<q\le \infty$, and $ \displaystyle {n \ge 2(1+1/q)\frac{d^2}{w^2}\ln\frac{d}{w}}$. If $\mu$ is the two dimensional Lebesgue measure $\lambda$ on $K$ (area), then we have
$$ M_{n,q}(K,\lambda) \le C_q^*\frac{w}{d^2} n, $$
where
\begin{equation}\label{C2}
C_q^*:=121 \frac{5q+4+2\sqrt{4q^2+10q+4}}{9q} \left( 4q+5+2\sqrt{4q^2+10q+4}\right)^{1/q}.
\end{equation}
\end{corollary}

\section{Auxiliary results for the upper estimate of $M_{n,q}(K,\mu)$ }\label{ar}

Our proof starts with the observation that the constant $C_q(\delta,\theta)$ is invariant under an affine map of $\mathbb{C}$. To be more precise, consider an affine map $\displaystyle \phi(z) = \kappa (z- z_0)$, $\kappa\in \mathbb{C}\setminus \{0\},$ $z_0\in \mathbb{C}$, and the image $ \widetilde{K}=\{\phi(z) \ : \ z\in K\}$ of $K$ under this map. Define the Borel measure $\w{\mu}$ on $\w{K}$ by the formula $\w{\mu} (\widetilde{E})= \mu ( \psi(\widetilde{E})),$ where $\psi(t)=t/\kappa +z_0$ is the inverse map to $\phi.$ The widths and the diameters of $K$ and $\widetilde{K}$ are related by the equalities
\begin{equation*}\label{dwK}
\displaystyle w(\widetilde{K}) = |\kappa| w(K), \qquad d(\widetilde{K})=|\kappa| d(K).
\end{equation*}
The parameters $\theta$ and $\delta$ in \eqref{Kdelta} are the same for $K$ and $\widetilde{K}.$ A polynomial $p(z)\in \mathcal{P}_n(K)$ if and only if $\widetilde{p}(t):=p(\psi(t)) \in \mathcal{P}_n(\widetilde{K})$. By \cite[p.~190, Theorem~3.6.1]{Bogachev} we have
$$ \|\w p\|_{L^q(\w K)}^q=\int_{\w K}|\w p(t)|^q (\mu \circ \psi) (dt)=\int_{K}|\w p(\phi(z))|^q \mu(dz) =\int_{K}|p(z)|^q \mu(dz)= \|p\|_{L^q(K)}^q, $$
and $ \|\w{p} '\|_{L^q(\w K)}=(1/|\kappa|)\|p'\|_{L^q(K)}$, as $\displaystyle \w{p}'(t) = p'(\psi(t))\psi'(t) =(1/\kappa)p'(\psi(t)).$ If we obtain estimate~\eqref{Main} for the set $\w{K}$, then it will yield
$$ \frac{\|p'\|_q}{\|p\|_q}=|\kappa|\frac{\|\widetilde{p}'\|_q}{\|\widetilde{p}\|_q}\le |\kappa| C_q(\delta,\theta) \frac{w(\w{K})}{d^2(\w{K})}n=C_q(\delta,\theta)\frac{w(K)}{d^2(K)}n, $$
i.e. we will obtain estimates for $K$ and $\mu$ with the same constant factor. On account of the above observation, we can assume without loss of generality that the compact set $K$ has diameter $d=2$ and the selected diameter endpoints are the points $-1$ and $1$. In this case the set $K_\delta$ defined by \eqref{Kdelta} takes the form
\begin{equation}\label{Kdeltanorm}
K_\delta=\{x+iy\in K : x\in[-\delta,\delta]\}.
\end{equation}

\begin{lemma}\label{lemma1}
Let $\displaystyle K $ be a set having width $w>0$ and diameter $d=2$, with the points $-1$ and $1$ belonging to $K$. Then $K$ is contained in the rectangle $[-1,1]\times [-iw,iw]$.
\end{lemma}
\begin{proof}
The idea of the proof was proposed by the Hungarian mathematician S\'{a}ndor Krenedits. Denote by $c^*=x+iy^*$ a point on the boundary of $K$ such that $\displaystyle |y^*| = \max_{x+iy \in K} |y|.$ Obviously, $K$ is contained in the rectangle $[-1,1]\times [-i|y^*|,i|y^*|]$. We need to show that $|y^*|\le w.$ Denote by $T$ the triangle with the vertices $a=-1$, $c^*$, $b= 1$. As $w$ is the width of $K$ and all vertices of $T$ are points of $K$, we have $w(T)\le w=w(K)$. The width of $T$ is equal to the smallest height of the triangle, i.e. the height drawn to the longest side. Since the longest side is $ab$ ($2$ is the diameter of $K$), the length of the smallest height equals $|y^*|$. Hence $|y^*|=w(T) \le w(K).$
\end{proof}

\begin{lemma}\label{lemma2}
Let $ K $ be a compact set with a positive width $w>0$ and diameter $d=2$, and let $ K $ contain the points $-1$, $1.$ Suppose that a finite non-negative measure $\mu$ is given on $K$ and there are $\theta \in (0,1)$ and $\delta \in (0,1)$ such that \eqref{ogrMu} holds.
If $w+\delta<1$, then there exists an interval $[A-w,A]$ such that either $0<A\le \delta$ or $-\delta \le A-w<0$ and the set
$$Q^* = \{x+iy \in K:\, A-w \leq x \le A \}$$
satisfies the inequality
\begin{equation}\label{lemma2_02}
\mu(Q^*) \geq \frac{w}{2}\theta\mu(K).
\end{equation}
\end{lemma}
\begin{proof}
Let us set $L:=\lceil\delta/w\rceil$ (the least integer greater than or equal to $\delta/w$). Note that $L\ge 1$ by definition. Let us represent the set $K_\delta$ as
$$K_\delta=\bigcup\limits_{\ell=-L+1}^L Q_\ell, \ \mbox{where} \ Q_{\ell} :=\{x+iy\in K: x\in \left[(\ell-1) w, \ell w\right]\cap [-\delta,\delta]\}. $$
Take $\ell_0 \in\{-L+1,\ldots,L\}$ such that $\displaystyle \mu(Q_{\ell_0})=\max_\ell \mu(Q_{\ell})$, then
$$ \theta \mu(K)\le \mu(K_\delta)\le \sum_{\ell=-L+1}^L \mu(Q_{\ell}) \le 2L \mu(Q_{\ell_0})\le 2\dfrac{\delta+w}{w} \mu(Q_{\ell_0})\le \dfrac2w \mu(Q_{\ell_0}), $$
hence $\mu(Q_{\ell_0})\ge \dfrac w2 \theta \mu(K)$. If $\ell_0>0$ we set $A:=\min \{\ell_0 w, \delta\}$, and if $\ell_0\le 0$ we set $A:=w+\max \{(\ell_0-1)w, -\delta\}$. In both cases $Q_{\ell_0} \subset Q^*$, which implies \eqref{lemma2_02}.
\end{proof}

The proof of Theorem~\ref{theorem1} will consist of a construction of a suitable polynomial $p$ and careful estimates of its norm and the norm of its derivative. We will take this polynomial in the form
\begin{equation}\label{pnk}
p(z)=(1+z)^{n-k}(1-z)^k.
\end{equation}
It is worth pointing out that the values of $|p(z)|=|p(x+iy)|$ increase with increasing $|y|$. For this reason we will need estimates of $|p|$ on the intervals $[-1,1]$ and $[-1+iw,1+iw].$ In the following three lemmas we study the behaviour of $|p|$ on these intervals.

\begin{lemma}\label{lemma4}
Suppose that $n$ and $k$ are positive integers, $2k <n$. Then $p(x)=(1+x)^{n-k}(1-x)^k$ attains its maximum on $[-1,1]$ at the point $M=1-2k/n \in (0,1)$, increases on $[-1,M]$, decreases on $[M,1]$, and
\begin{equation}\label{compp}
p(M-x) \ge p(M+x) \quad \mbox{for all} \quad x\in [0, 1-M].
\end{equation}
\end{lemma}
\begin{proof}
An easy computation shows that
\begin{equation}\label{pp}
p'(x) = n(1+x)^{n-k-1}(1-x)^{k-1}(1-2k/n-x),
\end{equation}
thus the polynomial $p$ attains its maximum on $[-1,1]$ at the point $M=1-2k/n \in (0,1)$, increases on $[-1,M]$, decreases on $[M,1]$. We can write $p(M+x)$ for $x\in(-(1-M),1-M)$ in the form
\begin{align*}
p(M+x)&=(1+M)^{n-k}(1-M)^{k}\left(1+\frac{x}{1+M}\right)^{n-k}\left(1-\frac{x}{1-M}\right)^{k}
\end{align*}
and
$$ \ln p(M+x) = \ln p(M)+n\tau(x), \ \text{where} \ \tau(x) = \left(1-\frac{k}{n}\right)\ln\left(1+\frac{x}{1+M}\right)+\frac{k}{n}\ln\left(1-\frac{x}{1-M}\right). $$
We note that $k/n = (1-M)/2$, $1-k/n=(1+M)/2$ and expand $\tau(x)$ in a Taylor series on the interval $(-(1-M), 1-M)$:
\begin{align*}
\tau(x)&=\frac{1+M}{2}\sum_{\ell=1}^{\infty}\frac{(-1)^{\ell-1}x^{\ell}}{\ell(1+M)^{\ell}}+ \frac{1-M}{2}\sum_{\ell=1}^{\infty}\frac{(-1)^{\ell-1}(-1)^{\ell}x^{\ell}}{\ell(1-M)^{\ell}}\\
&= \sum_{\ell=1}^{\infty}\frac{(-1)^{\ell-1}(1-M)^{\ell-1}-(1+M)^{\ell-1}} {2\ell(1-M^2)^{\ell-1}}x^{\ell}.
\end{align*}
Since $1-M^2>0$, it follows that for odd $\ell$ the sign of the coefficient of $x^\ell$ is determined by the expression $\displaystyle (-1)^{\ell-1}(1-M)^{\ell-1} -(1+M)^{\ell-1}=(1-M)^{\ell-1}-(1+M)^{\ell-1}$, which vanishes for $\ell=1$ and is negative for every odd $\ell\ge 3$. This proves that $\tau(x)<\tau(-x),$ $x\in(0, 1-M)$, and consequently $p(M-x) \ge p(M+x)$ for $x\in[0, 1-M)$. The inequality is valid at the point $x=1-M$ due to the continuity of $p$.
\end{proof} For given positive integers $n$ and $k$, $0<2k\le n$ and a given $w\in(0,1)$ we introduce the following notation \begin{equation*}\label{fnk} f_{n,k}(x)=((1+x)^2+w^2)^{n-k}((1-x)^2+w^2)^k, \quad M_{n,k}:=1-2k/n. \end{equation*} \begin{lemma}\label{lemma5} Suppose that $n$ and $k$ are positive integers, $0<2k \sz{<} n$, $M=M_{n,k},$ $w\in(0,(1-M)/4)$. Then there are points $$\ell_{n,k}\in(-1, -M), \quad \widetilde{M}=\widetilde{M}_{n,k}\in (M,\, M+w/2), \quad \mbox{and} \quad r_{n,k}\in (1-w/2,1)$$ such that $f_{n,k}$ decreases on $[-1,\ell_{n,k}]$ and $[\widetilde{M},r_{n,k}]$, and increases on $[\ell_{n,k},\widetilde{M}]$ and $[r_{n,k},1]$. \end{lemma} \begin{proof} As observed above $k/n=(1-M)/2$ and $(n-k)/n=(1+M)/2$. Our goal is to estimate extremum points of the function $$ h(x)=\frac1n \ln f_{n,k}(x)= \frac{1+M}{2}\ln((1+x)^2+w^2)+\frac{1-M}{2}\ln((1-x)^2+w^2). $$ We have $$h'(x) = \frac{(1+M)(1+x)}{(1+x)^2+w^2}- \frac{(1-M)(1-x)}{(1-x)^2+w^2}=\frac{2u(x)}{((1+x)^2+w^2)((1-x)^2+w^2)}, $$ where $u(x) =(M+x)w^2+(M-x)(1-x^2)=x^3-Mx^2-(1-w^2)x+M(1+w^2).$ By Descartes' rule of signs, the polynomial $u(x)$ has two positive zeros or no positive zeros at all. It is easily seen that $u(-1)<0$ and $u(-M), \, u(M),\, u(1)>0$. Suppose $x\in (M,1),$ then $M-x<0$ and \begin{align*} u(x)&\le (M+1)w^2+(M-x)(1-x)(1+M)=(1+M)(x^2-x(1+M)+M+w^2). \end{align*} The parabola $v(x)=x^2-x(1+M)+M+w^2$ vanishes at the points\footnote{Here we use $4w <1-M$.} $$x_1=\frac{1}{2}\left(1+M-\sqrt{(1-M)^2-4w^2} \right) \ \mbox{and} \ x_2=\frac{1}{2}\left(1+M+\sqrt{(1-M)^2-4w^2} \right).$$ We can estimate the left zero from above and the right zero from below as \begin{align*} x_1&=\frac{1}{2}\left(1+M-(1-M)+(1-M)\left(1-\sqrt{1-4w^2/(1-M)^2}\right) \right)=\\ &= M+\frac{1-M}{2}\frac{1-(1-4w^2/(1-M)^2)}{1+\sqrt{1-4w^2/(1-M)^2}}\le M+\frac{1-M}{2} \frac{4w^2}{(1-M)^2} \\&= M+ \frac{2w^2}{1-M}\le M+\frac{w}{2}, \end{align*} $$ x_2=(1+M)/2+ (1+M)/2-x_1\ge 1+M-M-w/2=1-w/2. $$ Hence $u(x)$ lying below $v(x)$, is negative on $[M+w/2, 1-w/2]$. Therefore, $u(x)$ has a zero at some point in $(M,\, M + w/2)$ -- we denote this point by $\widetilde{M}$ -- and another zero at some point in $(1-w/2,\,1)$, which we denote by $r_{n,k}$. Observe that $u(x)$ changes its sign on $[-1,-M]$, $u(x)$ has at most three zeros and $u(x)$ has two zeros on $(M,1)$. It follows that $u(x)$ has one zero on $(-1,-M)$, which we denote by $\ell_{n,k}$, the other two which we denoted by $\widetilde{M}$ and $r_{n,k}$, and no more. The Lemma is proved. \end{proof} \begin{lemma}\label{lemma6} Let $w\in(0,1)$ and $A\in(0,1)$ be given numbers such that $B:=A+3w<1$. Suppose that positive integers $n$ and $k$ are chosen such that $0<2k < n-1$, \begin{equation}\label{nw} n\ge 4/w \quad (\Leftrightarrow 2/n\le w/2), \end{equation} and \begin{equation}\label{Mcond} A+w \le M=1-2k/n \le A+\frac32 w. \end{equation} If \begin{equation}\label{l503} w\le \dfrac{1-B}{\sqrt{e^2-1}+2}, \end{equation} then \begin{equation}\label{fn1k} f_{n-1,k}(x)\le f_{n-1,k}(A-2w), \quad x\in[-1,A-2w], \end{equation} and \begin{equation}\label{fn1k1} f_{n-1,k-1}(x)\le f_{n-1,k-1}(B+2w), \quad x\in[B+2w,1]. \end{equation} \end{lemma} \begin{proof} Let us first verify \eqref{fn1k}. To simplify notation we set $M_1=M_{n-1,k}=1-\dfrac{2k}{n-1}$. Since $2k<n-1$ by condition, we get $$ 0<M-M_1=\dfrac{2k}{n-1}-\dfrac{2k}{n}=\frac{2k}{n(n-1)}\le \dfrac{w}{4} \quad \mbox {and}\quad A- 2w< M - \frac12 w\le M_1. 
$$
By Lemma \ref{lemma5}\footnote{ We need here $2k<n-1$, not just $2k\le n-1$.} there are points $\ell_{n-1,k}\in (-1,-M_1)$ and $\widetilde{M}_{n-1,k}\in(M_1,M_1+w/2)$ such that the function $f_{n-1,k}$ decreases on $[-1, \ell_{n-1,k}]$ and increases on $[\ell_{n-1,k}, \widetilde{M}_{n-1,k}]$. Thus it is enough to verify $f_{n-1,k}(-1)\le f_{n-1,k}(A-2w).$ Applying the inequality $\ln x\le x-1,$ $x>0$, gives
\begin{equation*}
\begin{aligned}
&\frac{1}{2}\ln\frac{4+w^2}{(1-A+2w)^2+w^2}\le \ln\frac{2}{1-A+2w}\le \frac{2}{1-A+2w}-1= \frac{1+(A-2w)}{1-(A-2w)} \le \frac{1+M_1}{1-M_1}.
\end{aligned}
\end{equation*}
Using the last estimate we deduce
\begin{align*}
&\frac1n\ln\frac{ f_{n-1,k}(-1)}{ f_{n-1,k}(A-2w)}= \frac{1+M_1}{2}\ln\frac{w^2}{(1+A-2w)^2+w^2} +\frac{1-M_1}{2}\ln\frac{4+w^2}{(1-A+2w)^2+w^2} \\
&\le\frac{1+M_1}{2}\ln \frac{w^2}{(1+A-2w)^2+w^2}+(1-M_1)\dfrac{1+M_1}{1-M_1}= \frac{1+M_1}{2}\ln \frac{e^2w^2}{(1+A-2w)^2+w^2}.
\end{align*}
Solving the inequality $\dfrac{e^2w^2}{(1+A-2w)^2+w^2}\le 1$, we obtain
\begin{equation}\label{l501}
\frac{e^2w^2}{(1+A-2w)^2+w^2}\le 1 \quad \Leftrightarrow \quad \sqrt{e^2-1}w\le 1+A-2w \quad \Leftrightarrow \quad w\le \frac{1+A}{\sqrt{e^2-1}+2}.
\end{equation}
Condition \eqref{l501} is weaker than \eqref{l503}.

The proof for \eqref{fn1k1} is similar. To simplify notation we set $M_2=M_{n-1,k-1}=1-\dfrac{2(k-1)}{n-1}$. We have
$$ M_2-M=\dfrac{2k}{n}-\dfrac{2(k-1)}{n-1}=\frac{2(n-k)}{n(n-1)}\le \frac 2n\le \frac w2, \quad M_2+w/2\le M+w<B \le B+2w. $$
By Lemma \ref{lemma5} there are points $\widetilde{M}_{n-1,k-1}\in (M_2,M_2+w/2)$ and $r_{n-1,k-1}\in (1-w/2,1)$ such that the function $f_{n-1,k-1}$ decreases on $[\widetilde{M}_{n-1,k-1},r_{n-1,k-1}]$ and increases on $[r_{n-1,k-1},1]$. Thus it is enough to verify $f_{n-1,k-1}(1) \le f_{n-1,k-1}(B+2w).$ We have
\begin{equation*}
\begin{aligned}
&\frac{1}{2}\ln\frac{4+w^2}{(1+B+2w)^2+w^2}\le \ln\frac{2}{1+B+2w}\le \frac{1-(B+2w)}{1+(B+2w)} \le \frac{1-M_2}{1+M_2}
\end{aligned}
\end{equation*}
and
\begin{align*}
&\frac1{n-1}\ln\frac{ f_{n-1,k-1}(1)}{ f_{n-1,k-1}(B+2w)}= \frac{1+M_2}{2}\ln\frac{4+w^2}{(1+B+2w)^2+w^2} \\&+\frac{1-M_2}{2}\ln\frac{w^2}{(1-B-2w)^2+w^2} \le\frac{1-M_2}{2}\ln \frac{e^2w^2}{(1-B-2w)^2+w^2} .
\end{align*}
Solving the inequality $\displaystyle \frac{e^2w^2}{(1-B-2w)^2+w^2}\le 1$, we find $ \displaystyle w\le \frac{1-B}{\sqrt{e^2-1}+2}. $ The Lemma is proved.
\end{proof}

\section{Outline of the proof of Theorem \ref{theorem1}}\label{sec:outline}

Recall that on account of the observation at the beginning of Section \ref{ar}, we can assume that $K$ has diameter $d=2$ and contains the points $-1$ and $1$. In this case the set $K_\delta$, defined by \eqref{Kdelta}, takes the form \eqref{Kdeltanorm}. We put
\begin{equation}\label{mcond}
m:= 5+\sqrt{e^2-1}.
\end{equation}
The proof of Theorem~\ref{theorem1} will be divided into two cases: the case when the width $w$ is relatively small, namely, $w \le (1-\delta)/m$, and the case when the width $w$ is large, $w >(1-\delta)/m$. In both cases we first establish the assertion of Theorem~\ref{theorem1} for $0<q<\infty$, and then use a passage to the limit to deal with $q=\infty$.

\section{The case of a small width}\label{Secsmallw}

Suppose that
$$w\le (1-\delta)/m.$$
Then $w+\delta<1$ and by Lemma~\ref{lemma2} there exists an interval $[A-w,A]$ such that either $0<A\le \delta$ or $-\delta \le A-w<0$ and the set $Q^* = \{x+iy \in K:\, A-w \leq x \le A \}$ satisfies \eqref{lemma2_02}.
Without loss of generality we assume that $0<A\le \delta$: the other case, $-\delta \le A-w<0$, reduces to this one by applying the reflection $z\mapsto -z$ to $K$, $\mu$ and the polynomials considered, which leaves $d$, $w$, $K_\delta$ and all the norms unchanged and replaces the interval $[A-w,A]$ by $[A'-w,A']$ with $A':=w-A\in(0,\delta]$. The proof falls into four subsections.

\subsection{Construction of a polynomial and a partition of the set $K$}\label{Ss}

Let us take $B:=A+3w$ and introduce another bigger set
$$ Q := \left\{x+iy \in K:\, A-2w \leq x \leq B+2w\right\}. $$
Our conditions imply that
$$ A-2w\ge -2w \ge -\dfrac{2}{m}>-1, \quad B+2w=A+5w< A+mw\le A+1-\delta\le 1, $$
thus $Q\subset[-1,1]\times[-iw,iw].$ Consider the polynomial $p(z) = (1+z)^{n-k}(1-z)^k$ for $2\le 2k <n$. By Lemma~\ref{lemma4} it attains its maximum on $[-1,1]$ at the point $\displaystyle M=1-2k/n$. If we take $n$ subject to \eqref{nw}, then we can choose $k:=k(n)$ such that \eqref{Mcond} is satisfied. Further, our choice of $A$ guarantees that $[A+w,A+\frac32 w]\subset(0,1).$

Suppose that $0<q<\infty.$ Let us write the norms $\|p\|^q_q$ and $\|p'\|^q_q$ as
$$ \|p\|^q_q= \int_{Q}|p(z)|^{q} \mu(dz)+\int_{K \backslash Q}|p(z)|^{q}\mu(dz)=:J_Q+J_{K \backslash Q}, $$
and
$$ \|p'\|^q_q= n^q\left(\int_{Q}\frac{|p'(z)|^q}{n^q}\mu(dz)+\int_{K\backslash Q}\frac{|p'(z)|^q}{n^q} \mu(d z)\right)=: n^q\left(J'_Q+J_{K \backslash Q}'\right). $$
In the next subsections we prove two inequalities of the form
\begin{equation}\label{ab}
J'_Q\leq \alpha w^qJ_Q \quad \mbox{and} \quad J_{K \backslash Q}'\leq \beta w^q J_{Q}.
\end{equation}
Since
\begin{equation*}
\|p'\|_q=n (J'_Q+J_{K \backslash Q}')^{1/q} \leq n (\alpha+\beta)^{1/q} w J_Q^{1/q} \le 4(\alpha+\beta)^{1/q} \dfrac{w}{d^2}n\|p\|_q,
\end{equation*}
this will allow us to derive
\begin{equation}\label{abf}
C_{q}(\delta,\theta)\le 4(\alpha+\beta)^{1/q}.
\end{equation}
In the following we will need some estimates of the same type. It is convenient to list them all right here at the beginning of the argument. So we will need
\begin{equation}\label{e0}
1-A\ge (1-\delta)\ge mw,
\end{equation}
\begin{equation}\label{e1}
1-B=1-A-3w\ge (m-3)w,
\end{equation}
\begin{equation}\label{e2}
1-(B+w) \ge (m-4)w,
\end{equation}
\begin{equation}\label{e3}
1-(B+2w) =1-A-5w \ge 1-\delta-5(1-\delta)/m=(1-\delta)(1-5/m),
\end{equation}
\begin{equation}\label{e4}
B+w-M = A+4w-M\ge ( 5/2)w\ge 2w.
\end{equation}

\subsection{Estimate of $J'_{Q}$}

Using representation \eqref{pp} with $z$ in place of $x$, we estimate the integral $J'_Q$ as
$$ J'_Q = \int_{Q} \left(|1+z|^{(n-k)}|1-z|^{k}\right)^q\left(\frac{|M-z|}{|1+z||1-z|}\right)^q \mu(dz) \le \max_{z\in Q }\left(\frac{|M-z|}{|1+z||1-z|}\right)^q J_Q. $$
It is easily seen that ($z=x+iy$, $x,y\in \mathbb{R}$)
\begin{align*}
&\frac{|M-z|^2}{|1+z|^2|1-z|^2} = \frac{\left(M-x\right)^2+y^2}{((1+x)^2+y^2)((1-x)^2+y^2)} \le \frac{\left(M-x\right)^2/w^2+1}{(1-x^2)^2} w^2.
\end{align*}
By the definition of the set $Q$, we get
$$ (x-M)/w\le (B+2w-A-w)/w=4 \quad \text{and} \quad (M-x)/w \le (A+1.5 w-A+2w)/w \le 3.5, $$
thus $(M-x)^2/w^2\le 16$, whilst by \eqref{e3} we have
$$ 1-x^2\ge 1-x\ge 1-(B+2w)\ge (1-\delta)(1-5/m). $$
Hence we obtain the estimate
\begin{equation}\label{JpQ}
J'_Q \le \dfrac{17^{q/2}}{(1-\delta)^q(1-5/m)^q}w^q J_Q.
\end{equation}

\subsection{Estimate of $J'_{K\setminus Q}$}\label{SsJpKQ}

1.) We first estimate $J_{Q}$ from below. By Lemma~\ref{lemma4} the polynomial $p(x)$ increases on ${[A-w, A]\subset[-1,M]}$, hence for $z\in Q^*$
\begin{equation*}
|p(z)|=|1+z|^{n-k}|1-z|^{k} = ((1+x)^2+y^2)^{(n-k)/2}((1-x)^2+y^2)^{k/2} \geq p(x) \geq p(A-w).
\end{equation*}
It follows that
\begin{equation}\label{JQA}
J_{Q}\ge \int_{Q^*}|p(z)|^q \mu(dz) \ge p(A-w)^q\mu(Q^*) \ge p(A-w)^q\frac{\theta}{2}w \mu(K).
\end{equation}
Furthermore, $p(A-w)=p(M-(M-A+w))\ge p(M+(M-A+w))$ by \eqref{compp} and $p(M+(M-A+w)) \ge p(B+w)$ as $M<M+M-A+w \le A+\frac32 w+\frac32 w+w= B+w<1$, and $p(x)$ decreases on $[M,1]$. Combining this estimate and \eqref{JQA} yields
\begin{equation}\label{JQB}
J_{Q}\ge p(B+w)^q\frac{\theta}{2}w \mu(K).
\end{equation}
To estimate $J'_{K\setminus Q}$ from above, we represent $K\backslash Q$ in the form $K_A \bigcup K_B,$ where
$$K_A = \{x+iy \in K:\, x\in [-1 , \,A-2w]\},\quad K_B = \{x+iy \in K:\, x\in [ B+2w,\, 1]\}. $$
In the following applications use will be made of Lemma~\ref{lemma6}. All assumptions of the Lemma follow from Subsection \ref{Ss}. Let us verify condition \eqref{l503}. Indeed, because of \eqref{e1} condition \eqref{l503} will be proved once we prove the inequality
$$ w\le \dfrac{(m-3)w}{\sqrt{e^2-1}+2} \quad \textrm{or equivalently} \quad \sqrt{e^2-1}+2 \le m-3. $$
According to the choice \eqref{mcond} of $m$, this holds with equality.

Suppose $z\in K_A$, then $|M-z|\le |1-z|$. Applying this, inequality \eqref{fn1k} from Lemma~\ref{lemma6}, and the estimate $1+A-2w \ge 1-A-2w \ge (1-\delta)(1-2/m)$ we obtain
\begin{align*}
\frac{|p'(z)|}{n}&=|1+z|^{n-k-1}|1-z|^{k-1}|M-z|\le |1+z|^{n-k-1}|1-z|^k \le f_{n-1,k}(x)^{1/2}\le \\&\le f_{n-1,k}(A-2w)^{1/2}= \frac{| p(A-2w+iw)|}{|1+A-2w+iw|}\le \frac{ | p(A-2w+iw)|}{(1-\delta)(1-2/m)}.
\end{align*}
Consequently,
\begin{equation}\label{estA}
J'_{K_A}=\frac{1}{n^q}\int_{K_A}|p'(z)|^q\mu(dz) \le \frac{| p(A-2w+iw)|^q}{(1-\delta)^q(1-2/m)^q}\mu(K_A).
\end{equation}
Suppose $z\in K_B$, then $|M-z|\le |1+z|.$ Applying this, inequality \eqref{fn1k1} from Lemma~\ref{lemma6}, and estimate \eqref{e3} we obtain
\begin{align*}
\frac{|p'(z)|}{n}&=|1+z|^{n-k-1}|1-z|^{k-1}|M-z|\le |1+z|^{n-k}|1-z|^{k-1} \le f_{n-1,k-1}(x)^{1/2}\le \\&\le f_{n-1,k-1}(B+2w)^{1/2}= \frac{| p(B+2w+iw)|}{|1-(B+2w)+iw|}\le \frac{| p(B+2w+iw)|}{(1-\delta)(1-5/m)}.
\end{align*}
Consequently,
\begin{equation}\label{estB}
J'_{K_B}=\frac{1}{n^q}\int_{K_B}|p'(z)|^q\mu(dz) \le \frac{| p(B+2w+iw)|^q}{(1-\delta)^q(1-5/m)^q}\mu(K_B).
\end{equation}
Adding \eqref{estA} to \eqref{estB} we get
\begin{align*}
J'_{K\setminus Q}&=J'_{K_A}+J'_{K_B}\le \frac{| p(A-2w+iw)|^q}{(1-\delta)^q(1-2/m)^q}\mu(K_A)+ \frac{| p(B+2w+iw)|^q}{(1-\delta)^q(1-5/m)^q}\mu(K_B)\le \\ &\le \frac{\mu(K)}{(1-\delta)^q(1-5/m)^q}\max\{| p(A-2w+iw)|,\, | p(B+2w+iw)| \}^q.
\end{align*}
Combining this with \eqref{JQA} and \eqref{JQB} we deduce
\begin{equation}\label{Jp01}
\frac{J'_{K\setminus Q}}{J_Q}\le \frac{2w^q}{\theta(1-\delta)^q(1-5/m)^q} \max\left\{\frac{| p(A-2w+iw)|}{w^{1+1/q} p(A-w)}, \, \frac{| p(B+2w+iw)|}{w^{1+1/q} p(B+w)} \right\}^q.
\end{equation}
2.) Now our goal is to find conditions on the degree $n$ of the polynomial $p$ under which
\begin{equation}\label{Jp02}
\dfrac{|p(A-2w+iw)|}{ w^{1+1/q}p(A-w)} \le 1 \quad \mbox{and} \quad \dfrac{| p(B+2w+iw)|}{ w^{1+1/q}p(B+w)} \le 1
\end{equation}
or equivalently
\begin{equation}\label{TwoNeqq}
\ln \frac{|p(A-2w+iw)|}{p(A-w)} \le (1+1/q) \ln w
\end{equation}
and
\begin{equation}\label{TwoNeqq2}
\ln \frac{|p(B+2w+iw)|}{p(B+w)} \le (1+1/q) \ln w.
\end{equation}
We first deal with inequality~\eqref{TwoNeqq}.
To simplify notation we set $A'=A-w$, $$a := \frac{(1+A'-w)^2+w^2}{(1+A')^2},\quad b := \frac{(1-(A'-w))^2+w^2}{(1-A')^2}=\frac{(1-A'+w)^2+w^2}{(1-A')^2}.$$ Then the left side of \eqref{TwoNeqq} can be written as \begin{gather*} \ln \frac{|p(A-2w+iw)|}{p(A-w)}=\ln \frac{|p(A'-w+iw)|}{p(A')}=\frac12\ln a^{n-k}b^{k}=\frac{n}{4}\left((1+M)\ln a + (1-M)\ln b\right). \end{gather*} Since $\ln x \leq x-1$ ($x>0$), we have $\ln a \le a-1$ and $\ln b \leq b-1$. We proceed \begin{equation}\label{f02} \begin{gathered} (1+M)\ln a + (1-M)\ln b \leq (1+M)(a-1) + (1-M)(b-1) \\ = (1+M)\frac{-2(1+A')w+2w^2}{(1+A')^2} + (1-M)\frac{2(1-A')w+2w^2}{(1-A')^2} \\= \frac{4w}{1-A'^2} \left(-(M-A')+w-\frac{2A'(M-A')w}{1-A'^2}\right) \\ < \frac{4w}{1-A'^2} \left(-(M-A')+w\right)= \frac{4w}{1-A'^2} \left(-(M-A)\right) \le 4w\left(-(M-A)\right) \le -4w^2, \end{gathered} \end{equation} the last inequality following from \eqref{Mcond}. Therefore, inequality \eqref{TwoNeqq} will be satisfied once we get $\dfrac n4(-4w^2)\le (1+1/q) \ln w.$ This is equivalent to the following restriction on $n$: \begin{equation}\label{Jp04} n \ge \frac{(1+1/q)\ln 1/w }{w^2}. \end{equation} We now turn to inequality~\eqref{TwoNeqq2}. With the notation $B':=B+w$, $$ a_1 := \frac{(1+B'+w)^2+w^2}{(1+B')^2},\quad b_1 := \frac{(1-(B'+w))^2+w^2}{(1-B')^2}=\frac{(1-B'-w)^2+w^2}{(1-B')^2}, $$ the left side of \eqref{TwoNeqq2} can be written as \begin{gather*} \ln \frac{|p(B+2w+iw)|}{p(B+w)}=\ln \frac{|p(B'+w+iw)|}{p(B')}\\ =\frac12\ln a_1^{n-k}b_1^{k}=\frac{n}{4}\left((1+M)\ln a_1 + (1-M)\ln b_1\right). \end{gather*} Replacing in \eqref{f02} $A'$ by $B'=B+w$ and $w$ by $-w$, and successively applying inequality \eqref{e2} to estimate $1-B'$ from below and inequality \eqref{e4} to estimate $B'-M$ from below, we get \begin{gather*} (1+M)\ln a_1 + (1-M)\ln b_1 \le \frac{-4w}{1-B'^2}\left(-(M-B')-w-\frac{2B'(M-B')(-w)}{1-B'^2}\right) \\ = \frac{4w}{1-B'^2}\left((M-B')+w+\frac{2B'(B'-M)w}{1-B'^2}\right)\\ \le \frac{4w}{1-B'^2} \left(-(B'-M)+w+\frac{(B'-M)w}{1-B'}\right)\le \frac{4w}{1-B'^2} \left(-(B'-M)+\frac{B'-M}{m-4}+w\right) \\ =\frac{-4w^2}{1-B'^2} \left(\dfrac{B'-M}{w}\left(1-\frac{1}{m-4}\right)-1\right) \le\frac{-4w^2}{1-B'^2} \left(\dfrac52\left(1-\frac{1}{m-4}\right)-1\right) \\ \le -2w^2\left(3-\dfrac{5}{m-4}\right). \end{gather*} The inequality \eqref{TwoNeqq2} will be satisfied once we get $$ \dfrac{n}{4}(-2w^2)\left(3-\dfrac{5}{m-4}\right)\le (1+1/q) \ln w. $$ This is equivalent to \begin{equation}\label{Jp05} n \ge \dfrac{2}{\left(3-5/(m-4)\right)}\frac{(1+1/q)\ln 1/w}{w^2}. \end{equation} So, we have established \eqref{TwoNeqq2} under restriction \eqref{Jp05}. Therefore, if both \eqref{Jp04} and \eqref{Jp05} are satisfied, and, moreover, we also have \eqref{nw}, then both \eqref{TwoNeqq} and \eqref{TwoNeqq2} follows, whence \eqref{Jp02} obtains, too. Combining \eqref{Jp01} and \eqref{Jp02} then yields \begin{equation}\label{Jp06} J'_{K\setminus Q}\le \frac{2}{\theta(1-\deltalta)^q(1-5/m)^q}w^q J_Q \end{equation} for $n$ satisfying \eqref{Jp04}, \eqref{Jp05} and \eqref{nw}. \subsection{Estimate of $J'_K$} Suppose $0<q<\infty.$ Estimates \eqref{JpQ} and \eqref{Jp06} give us the numbers $\alpha$ and $\beta$ from \eqref{ab}. 
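Explicitly, in view of \eqref{JpQ} and \eqref{Jp06} we may take $$ \alpha=\dfrac{17^{q/2}}{(1-\deltalta)^q(1-5/m)^q} \quad \mbox{and} \quad \beta=\frac{2}{\theta(1-\deltalta)^q(1-5/m)^q}. $$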
Substituting them into \eqref{abf} we obtain \begin{align*} C_{q}(\deltalta,\theta)&\le 4(\alpha+\beta)^{1/q} = 4\left( \dfrac{17^{q/2}}{(1-\deltalta)^q(1-5/m)^q}+ \frac{2}{\theta(1-\deltalta)^q(1-5/m)^q} \right)^{1/q} \\&\le \frac{4\cdot 17^{1/2}}{(1-5/m)}\dfrac{1}{(1-\deltalta)}\left( 1 +\frac{2}{\theta} \right)^{1/q} \end{align*} for $n$ satisfying \eqref{Jp04}, \eqref{Jp05} and \eqref{nw}. Let us compare these three conditions. Observe that $7<m<8$, which entails \begin{equation*}\label{ncond01} 1<\dfrac{2}{3-5/4}<\dfrac{2}{3-5/(m-4)} < \dfrac{2}{3-5/3}=1.5. \end{equation*} Thus we can replace the restriction \eqref{Jp05} by \begin{equation}\label{ncond1} n > \frac{1.5(1+1/q)\ln 1/w}{w^2}. \end{equation} which is stronger than \eqref{Jp04} and \eqref{ncondTh} in Theorem~\ref{theorem1}. From $w\le(1-\deltalta)/m\le 1/7$ and monotonicity of $x\ln x$ for $x\ge 7$ we get that $(1/w)\ln(1/w) \ge 7\ln 7\ge 7$. Thus it follows that $$ \dfrac{2}{\left(3-5/(m-4)\right)}\frac{(1+1/q)\ln 1/w}{w^2}\ge \frac{\ln 1/w}{w^2}\ge \dfrac{7}{w}. $$ Thus \eqref{Jp05} is stronger than \eqref{nw} for all $q$, and in fact even without the term containing $q$. Finally taking into account that $4 \cdot 17^{1/2}/ (1 - 5/m)\le 4 \cdot 5/ (1- 5/7)= 70<121$ we obtain the assertion of Theorem~\ref{theorem1} for $0<q<\infty$ and all $n$ satisfying \eqref{ncond1}. We are left with the case $q=\infty.$ Observe that the constructed polynomial $p$ is independent of $q$, and that any $n\ge \dfrac{2\ln 1/w}{w^2}$ meets \eqref{ncond1} for sufficiently large $q$. Therefore we can pass to the limit (see, e.g., \cite[Problem 4.7.44]{Bogachev}) for all $n \ge \dfrac{2\ln 1/w}{w^2}$ and obtain \begin{equation}\label{qinfty} M_{n,\infty}(K,\mu)\le \dfrac{\|p'\|_\infty}{\|p\|_\infty}=\lim_{q\to \infty}\dfrac{\|p'\|_q}{\|p\|_q}\le \limsup _{q\to \infty}C_q(\deltalta,\theta)\dfrac{w}{d^2}n\le \dfrac{121}{1-\deltalta}\dfrac{w}{d^2}n. \end{equation} This proves Theorem~\ref{theorem1} for $q=\infty.$ \section{The case of a large width} We now turn to the case $(1-\deltalta)/m < w\le 1$. Recall that there exist $\theta \in(0,1)$ and $\deltalta \in (0,1)$ such that \eqref{ogrMu} holds. Then at least one of the ``halves'' $K_\ell=K_\deltalta \cap( [-\deltalta,0]\times [-iw,iw])$ or $K_r=K_\deltalta \cap ([0,\deltalta]\times [-iw,iw])$ has measure greater than $\dfrac \theta 2 \mu(K).$ Without loss of generality we assume that $\mu(K_\ell)\ge \dfrac \theta 2 \mu(K).$ Let us take the polynomial $p(z)=(1-z)^n$, $n\ge 2.$ As in Section~\ref{Secsmallw} we first deal with $0 < q <\infty$. We divide $K$ into two subsets $$K_1 = \{x+iy \in K \ : \ x\in [-1, 3/4]\},\quad \mbox{and} \quad K_2 = \{x+iy \in K \ : \ x\in [3/4, 1]\}.$$ The estimate $|1-z|\geq |1-x|\ge 1/4$ for $z\in K_1$ yields \begin{equation}\label{bw02} \left\|p'\right\|_{L^q(K_1,\mu)}^q=n\left\|(1-z)^{n-1}\right\|_{L^q(K_1,\mu)}^q \le n^q \max_{z\in K_1}\frac{1}{|1-z|^q}\|p\|_{L^q(K_1,\mu)}^q \le (4n)^q\|p\|_q^q . \end{equation} We proceed to estimate $\left\|p'\right\|_{L^q(K_2)}.$ Since the diameter $d(K)=2$ and $-1, 1\in K$, the set $K$ is bounded by the circle of radius $2$ and centered at the point $-1$, thus $y^2\le 4-(1+x)^2$ for any $z=x+iy\in K$. This gives $$|1-z|^2=(1-x)^2+y^2 \le (1-x)^2+4-(1+x)^2= 4-4x\le 1, \quad z\in K_2,$$ hence \begin{equation}\label{bw01} \left\|p'\right\|_{L^q(K_2,\mu)}^q=n^q\left\|(1-z)^{n-1}\right\|_{L^q(K_2,\mu)} \leq n^q\max_{z\in K_2}|1-z|^{(n-1)q}\mu(K_2) \leq n^q \mu(K). 
\end{equation} To estimate $\mu(K)$ we note that $\displaystyle \|p\|_q^q \ge \|p\|_{L^q(K_\ell)}^q \ge \min_{z\in K_\ell}|1-z|^{nq} \mu(K_\ell) \ge \dfrac{\theta}{2} \mu(K), $ thus $\mu(K)\le \dfrac 2\theta \|p\|_q^q.$ Substituting this into \eqref{bw01} yields \begin{equation}\label{bw03} \left\|p'\right\|_{L^q(K_2,\mu)}^q \le n^q \frac 2\theta \|p\|_q^q. \end{equation} Adding \eqref{bw02} to \eqref{bw03} we deduce that $$ \|p'\|_q^q \le \left((4n)^q +n^q\frac{2}{\theta} \right)\|p\|_q^q \le (4n)^q\left(1 +\frac{2}{\theta} \right) \|p\|_q^q, $$ and finally that $$\dfrac{\|p'\|_q}{\|p\|_q} \le 4n\left(1 +\frac{2}{\theta} \right)^{1/q} \le 4n\frac{m}{1-\deltalta}w \dfrac{4}{d^2}\left(1 +\frac{2}{\theta} \right)^{1/q}\le \frac{16m}{1-\deltalta}\left(1 +\frac{2}{\theta} \right)^{1/q}\frac{w}{d^2}n. $$ It remains to observe that $16m\le 121.$ A passage to the limit similar to \eqref{qinfty} proves the case $q=\infty$. \section{Proof of the Corollaries} Suppose first that $\mu$ is the linear Lebesgue measure on the boundary of $K$. We want to derive Corollary \ref{cor1} from Theorem \ref{theorem1}. \begin{proof}[Proof of Corollary \ref{cor1}] It is clear that for any $0<\delta<1$ there is some appropriate $\theta$ with which Theorem \ref{theorem1} can be applied. To obtain an effective estimate, we first look for a concrete admissible value of $\theta$ corresponding to an arbitrary value of $\delta\in(0,1)$. By Lemma~\ref{lemma1}, $K$ is contained in the rectangle $[-1,1]\times [-iw, iw]$. By properties of convex curves \cite[p. 52, Property 5]{Bonnesen} the arc length of $\partial K$ is not greater than the perimeter of the rectangle, i.e. $\mu(K)\le 4(1+w)\le 8$, while the arc length of the part of $\partial K$ belonging to $K_\deltalta$ is trivially at least $4\deltalta$, i.e. $\mu(K_\deltalta)\ge 4\deltalta$. This provides us with the admissible value $\theta=\delta/2.$ Therefore, an application of Theorem \ref{theorem1} leads to \begin{equation}\label{C1plus} M_{n,q}(K) \le C_q(\deltalta) \frac{w}{d^2} n, \quad \textrm{where} \quad C_q(\deltalta):=C_q\left(\deltalta,\frac{\deltalta}2\right)= \frac{121}{1-\deltalta}\left(1 +\frac{4}{\deltalta} \right)^{1/q} \qquad (0<\delta<1). \end{equation} It remains to optimize the choice of $\deltalta$ as a function of $q$. So we are looking for $\displaystyle \min_{0<\delta<1} \dfrac{1}{121} C_q(\delta) =\min_{0<\delta<1} \dfrac{(1+4/\delta)^{1/q}}{1-\delta}$. Denote $$\varphi(\delta):= \log \dfrac{(1+4/\delta)^{1/q}}{1-\delta} = \dfrac{1}{q}\log(1+4/\delta)-\log(1-\delta).$$ It is easy to see that on $(0,1)$ this function is strictly convex, with $\varphi(0^+)=\varphi(1^-)=+\infty$, so its graph is $U$-shaped. We are looking for its unique minimum point. Differentiation results in $\varphi'(\delta)= \dfrac{-4}{q} \dfrac{1}{\delta^2+4\delta} +\dfrac{1}{1-\delta}$, and the unique root of this in $(0,1)$ can be obtained from the quadratic equation $\delta^2+4\delta=\dfrac{4}{q}(1-\delta)$ or $\delta^2+(4+4/q)\delta-4/q=0$. This equation has two roots, one on the negative semiaxis and one in $(0,1)$: the latter is $\delta_1:= \frac{2}{q} \left(Q-(q+1) \right)$, where $Q:=\sqrt{q^2+3q+1}$.
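For completeness, note that $\delta_1$ indeed lies in $(0,1)$: we have $Q>q+1$ because $Q^2-(q+1)^2=q>0$, while $2Q<3q+2$ because $(3q+2)^2-4Q^2=5q^2>0$, and these two inequalities are equivalent to $\delta_1>0$ and $\delta_1<1$, respectively.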
This latter point is the unique minimum point for $\varphi(\delta)$, so that we get \begin{align*} \min_{0<\delta<1} \frac{1}{121} C_q(\delta) &=\min_{0<\delta<1} \exp(\varphi(\delta)) = \exp(\varphi(\delta_1))= \frac{\{1+4/(\frac{2}{q}(Q-(q+1)))\}^{1/q}}{1-\frac{2}{q}(Q-(q+1))} \\&= q \frac{\{1+2q\frac{Q+q+1}{Q^2-(q+1)^2}\}^{1/q}}{3q+2-2Q} =q (3q+2+2Q) \frac{\{1+2q\frac{Q+q+1}{q}\}^{1/q}}{(3q+2)^2-4Q^2} \\&= q (3q+2+2Q) \frac{\{1+2(Q+q+1)\}^{1/q}}{5q^2} = \frac{3q+2+2Q}{5q} (3+2q+2Q)^{1/q}. \end{align*} Substituting back into \eqref{C1plus} yields the stated inequality \eqref{C1}. \end{proof} Next, we consider the case when $\mu$ is the restriction to $K$ of the 2-dimensional Lebesgue measure (area). \begin{proof}[Proof of Corollary \ref{cor2}] Again by Lemma~\ref{lemma1}, $\mu (K) \le \mu ([-1,1]\times [-iw, iw])= 4w.$ Taking into account the definition of the minimal width, we observe that there exist $\alpha \in [0,1]$ and points $x^+$, $x^-\in [-1,1]$ such that the points $z^+=x^++i\alpha w$ and $z^-=x^--i(1-\alpha )w$ belong to $K$ (as otherwise the width of $K$ in the vertical direction would be less than $w$). Write $K^+_\deltalta=\{x+iy\in K_\deltalta \colon y\ge 0\},$ $K^-_\deltalta=\{x+iy\in K_\deltalta \colon y\le 0\}.$ If $x^+\in[-\deltalta,\deltalta]$, then by convexity of $K$, the value of $\mu(K_\deltalta^+)$ is at least the area of the triangle with the vertices $-\deltalta$, $z^+$, $\deltalta$, which is equal to $\alpha w \deltalta$. If $x^+\in[\deltalta,1]$, then $\mu(K_\deltalta^+)$ is not less than the area of the trapezium bounded by the real axis, the straight lines $x=\pm\deltalta$, and the straight line passing through the points $-1$ and $1+i\alpha w.$ The area of this trapezium is also $\alpha w \deltalta$. Similarly, if $x^+\in[-1,-\deltalta]$, then $\mu(K_\deltalta^+)$ is not less than the area of the trapezium bounded by the real axis, the straight lines $x=\pm\deltalta$, and the straight line passing through the points $-1+i\alpha w$ and $1$. Hence, in all cases, $\mu(K_\deltalta^+)\ge \alpha w \deltalta.$ Analogously, $\mu(K_\deltalta^-)\ge (1-\alpha) w \deltalta.$ Finally, $\mu(K_\deltalta) \ge w\deltalta$. Therefore, $\mu(K_\deltalta)/\mu(K)\ge \deltalta/4$. Thus for any $0<\delta<1$ we can set $\theta:=\deltalta/4$ as an admissible value for an application of Theorem~\ref{theorem1}. The theorem provides \begin{equation}\label{C2plus} M_{n,q}(K,\lambda) \le C_q^*(\deltalta) \frac{w}{d^2} n, \quad \textrm{where} \quad C_q^*(\deltalta):=C_q\left(\deltalta,\frac{\deltalta}4\right) =\frac{121}{1-\deltalta}\left(1 +\frac{8}{\deltalta} \right)^{1/q}. \end{equation} Now we consider $\varphi(\delta):=\log \frac{1}{121} C_q^*(\delta)$, which is again a strictly convex, $U$-shaped function with $\varphi(0^+)=\varphi(1^-)=+\infty$. To find the minimum, we compute the unique critical point satisfying $\varphi'(\delta)=0$. This equation can be written as $$ \frac1q \frac{1}{1+8/\delta}\cdot(-\frac{8}{\delta^2})-\frac{1}{1-\delta}\cdot(-1)=0 \quad \text{or} \quad q\delta^2+(8q+8)\delta-8=0. $$ The quadratic equation has two roots, one on the negative semiaxis and another one in $(0,1)$: this latter one is $\delta_1=\frac2q \left(Q-2q-2\right)$, where $Q:=\sqrt{4q^2+10q+4}$. This is the minimum point of $\varphi(\delta)$.
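As before, one checks directly that $0<\delta_1<1$: indeed $Q>2q+2$ since $Q^2-(2q+2)^2=2q>0$, and $2Q<5q+4$ since $(5q+4)^2-4Q^2=9q^2>0$.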
Therefore, \begin{align*} \min_{0<\delta<1} \frac{1}{121} C_q^*(\delta) &=\min_{0<\delta<1} \exp(\varphi(\delta)) = \exp(\varphi(\delta_1))= \frac{\{1+8/(\frac{2}{q}(Q-2q-2))\}^{1/q}}{1-\frac{2}{q}(Q-2q-2)} \\&= q \frac{\left(1+\frac{4q}{Q-2q-2}\right)^{1/q}}{q-2Q+4q+4} =\frac{q(5q+4+2Q)}{(5q+4)^2-4Q^2} \left(1+ \frac{4q(Q+2q+2)}{Q^2-(2q+2)^2} \right)^{1/q} \\&= \frac{q(5q+4+2Q)}{9q^2} \left( 1+\frac{4q(Q+2q+2)}{2q}\right)^{1/q} = \frac{5q+4+2Q}{9q} \left(4q+5+2Q\right)^{1/q}. \end{align*} Substitution in \eqref{C2plus} results in the stated assertion \eqref{C2}. \end{proof} Our present work has dealt with the case of complex \emph{domains} with positive width $w>0$. The partition of the set $K$ and the calculations depend on $w$ and are valid only for $n>d^2/w^2$, a condition which is fatal if $w=0$. As we mention in the introduction, in all known cases the \emph{lower estimates} for $w=0$ are of order $\sqrt{n}$. However, we are not aware of general \emph{upper estimates} which would ensure that the order is indeed $\sqrt{n}$. The most general result we are aware of is the remark of Xiao and Zhou following \cite[p.~198, Theorem~1, Corollary~1]{Xiao}, pointing out that their estimate is of the right order for Jacobi weights. It seems likely that $O(\sqrt{n})$ upper estimates for $M_{n,q}(I,\mu)$ remain valid even for more general weights and measures. \label{lastpage} {\it Acknowledgments.} P. Yu. Glazyrina was supported by the Ministry of Science and Higher Education of the Russian Federation (Ural Federal University Program of Development within the Priority-2030 Program). Sz. Gy.~R\'{e}v\'{e}sz was supported in part by the Hungarian National Research, Development and Innovation Fund, project \# K-132097. \noindent \hspace*{5mm} \begin{minipage}{\textwidth} \noindent \hspace*{-5mm} Polina Yu.{} Glazyrina\\ Institute of Natural Sciences and Mathematics,\\ Ural Federal University,\\ Mira street 19\\ 620002 Ekaterinburg, Russia\\ \end{minipage} \noindent \hspace*{5mm} \begin{minipage}{\textwidth} \noindent \hspace*{-5mm} Yulia S.{} Goryacheva\\ Institute of Natural Sciences and Mathematics,\\ Ural Federal University,\\ Mira street 19\\ 620002 Ekaterinburg, Russia\\ \end{minipage} \noindent \hspace*{5mm} \begin{minipage}{\textwidth} \noindent \hspace*{-5mm} Szilárd Gy.{} Révész\\ Alfréd Rényi Institute of Mathematics\\ Reáltanoda utca 13-15\\ 1053 Budapest, Hungary \\ \end{minipage} \end{document}
\begin{document} \title{Right-angled Artin groups are symmetric diagram groups} \begin{abstract} In this article, we show that, for every $n \geq 2$, the pure virtual twin group $PVT_n$ can be naturally described as a symmetric diagram group, an instance of a family of groups introduced by V. Guba and M. Sapir and associated to semigroup presentations. Inspired by this observation, we prove that every finitely generated right-angled Artin group is a symmetric diagram group. This contrasts with the fact that not all right-angled Artin groups are planar diagram groups. \end{abstract} \section{Introduction} \noindent In \cite{MR1396957}, Guba and Sapir extended combinatorial group theory to the realm of diagram groups. One of the most motivating examples was Thompson's group $F$, the group of all piecewise-linear orientation-preserving homeomorphisms of the unit interval whose break points are dyadic rationals and whose slopes are powers of $2$. The study of diagram groups led to new results about Thompson's group $F$. Since then, these groups have been studied substantially, in particular towards investigating properties of other groups which can be viewed as diagram groups or subgroups thereof. Work on this subject includes computational problems such as the word, conjugacy and commutation problems, finiteness properties, homology, orderability and different geometries, to name a few. We refer to the recent survey article \cite{Diagram-Groups-Genevois} for a detailed account.\\ Interestingly, diagram groups have several distinct yet equivalent interpretations. One of these alternative definitions allows us to define different generalisations of diagram groups. To distinguish them, classical diagram groups, from now on referred to as \textit{planar diagram groups}, have their elements represented as pictures in the plane consisting of non-intersecting wires and transistors. A similar definition on annuli yields \textit{annular diagram groups}, and allowing wires to cross in an arbitrary manner yields \textit{symmetric diagram groups} \cite{MR1396957, MR2136028}. These extensions mimic the case of Thompson's groups $F \subset T \subset V$. That is, for every semigroup presentation $\mathcal{P}= \langle \Sigma \mid \mathcal{R} \rangle$ and every non-empty word $w \in \Sigma^+$, we define three nested groups $D_p(\mathcal{P},w) \subset D_a(\mathcal{P},w) \subset D_s(\mathcal{P},w)$, where $D_p(\mathcal{P},w)$ (resp. $D_a(\mathcal{P},w)$, $D_s(\mathcal{P},w)$) is a planar (resp. annular, symmetric) diagram group. \\ The characterisation of groups which are (planar) diagram groups is wide open, and it is usually difficult to determine whether or not a given group can be described as a diagram group. The particular case of right-angled Artin groups is interesting. It is known that not every right-angled Artin group is a planar diagram group \cite{MR1725439}, but a complete classification is unknown. This motivates us to investigate the relation between right-angled Artin groups and symmetric diagram groups. In the symmetric case, we prove that every finitely generated right-angled Artin group turns out to be a symmetric diagram group. \begin{thm}\label{thm:MainIntro} Every finitely generated right-angled Artin group is a symmetric diagram group. \end{thm} \noindent As a nice application, it follows that there exist torsion-free symmetric diagram groups that are not planar diagram groups.
It is clear that symmetric diagram groups may not be planar diagram groups, since the latter are torsion-free while the former may contain torsion. Examples include Thompson's group $V$ and the Houghton groups. However, whether torsion-free symmetric diagram groups are automatically planar diagram groups is not clear at first glance. To our knowledge, we exhibit the first example providing a negative answer. \begin{cor}\label{Symmetric-Not-Planar} There exist torsion-free symmetric diagram groups, such as right-angled Artin groups defined by cycles of odd length $\geq 5$, that are not planar diagram groups. \end{cor} The fact that the right-angled Artin groups mentioned above are not planar diagram groups is proved in \cite{MR1725439}. See also Theorem~\ref{thm:NotPlanar}. Besides, this article explores the world of symmetric diagram groups, about which not much is known. In this direction, we consider, in particular, twin groups and their generalisations, and relate their pure subgroups to planar, annular, and symmetric diagram groups. The term \textit{twin group} was coined in the work of Khovanov \cite{MR1370644}, and these groups appear in the literature under the names of Grothendieck cartographical groups \cite{MR1106916}, planar braid groups \cite{MR4170471}, traid groups \cite{MR4035955} and flat braids \cite{MR1738392}. They are right-angled Coxeter groups which serve as planar analogues of classical Artin braid groups. These groups are related to doodles on the $2$-sphere, which are finite collections of closed curves without triple intersections. The kernel of the natural map from the twin group onto the symmetric group is known as the \textit{pure twin group} $PT_n$, which is the fundamental group of the complement of a real codimension-two subspace arrangement. In other words, it is the fundamental group of the configuration space of $n$ particles on a line in which no three particles may collide. Through \cite{Twin-Group-Diagram-Group,article3}, it is known that $PT_n$ is a planar diagram group.\\ Motivated by virtual knots and braids, a virtual extension of twin groups was introduced in \cite{MR4027588}. Similarly to the pure twin group, the corresponding subgroup of the virtual twin group is called the \textit{pure virtual twin group} $PVT_n$, which turns out to be an irreducible right-angled Artin group \cite{Structure-Automorphisms-Pure-Virtual-Twin-Groups}. From the aforementioned results, we deduce that the pure virtual twin group is a symmetric diagram group. However, elements of these groups can be described in terms of monotone strands and real and virtual crossings in the plane. This allows us to give an alternative and more transparent proof in which the symmetric aspect of the diagram group is clearly visible. We prove the following result. \begin{thm}\label{Main-Theorem-Pure-Virtual-Twin} Let $\mathcal{P}_n= \langle x_1, x_2, \dots, x_n ~|~ x_i x_j =x_j x_i \text{ for all } i \neq j \rangle$ be a semigroup presentation. The symmetric diagram group $D_s(\mathcal{P}_n,x_1x_2 \cdots x_n )$ is isomorphic to the pure virtual twin group $PVT_n$. \end{thm} For completeness, we also deduce from \cite{MR1725439} that the right-angled Artin group $PVT_n$ is not a planar diagram group for $n \geq 5$ (see Theorem~\ref{thm:NotPlanar}), providing another example illustrating Corollary~\ref{Symmetric-Not-Planar}.
Quite recently, Mostovoy defined \textit{annular twin groups}, motivated by considering the configuration space of $n$ particles on a circle rather than on a line; the fundamental group of this configuration space is the \textit{pure annular twin group} $aPT_n$. Farley \cite[Theorem 2]{Twin-Group-Diagram-Group} independently considered and defined pure annular twin groups in the sense of diagrams, and proved that $aPT_n$ is an annular diagram group. As an immediate corollary to Theorem \ref{Main-Theorem-Pure-Virtual-Twin}, we get the following non-trivial result. \begin{cor} The pure twin group embeds in the pure annular twin group, which further embeds in the pure virtual twin group. \end{cor} It is worth mentioning that it is not a trivial question to ask whether a group embeds in its virtual extension. For example, one can prove that braid groups embed in virtual braid groups (and similarly that $T_n$ embeds in $VT_n$) using a classical result on embeddings between parabolic subgroups of Artin and Coxeter groups (see for instance \cite{MR3519103}). In summary, symmetric diagrams can be roughly viewed as a virtual extension of planar diagrams, in the sense of knots and virtual knots, braids and virtual braids, twin groups and virtual twin groups. More specifically, just as virtual (pure) braids provide all possible crossings between different strands, symmetric diagrams realise all possible combinations of wires. Recently, it was pointed out in \cite{Virtual-Artin-Groups} that virtual braid groups can also be realised by mimicking the action of the symmetric group on its root system. An interesting consequence is that in this way one obtains the usual presentation of pure virtual braid groups, where generators are in one-to-one correspondence with simple roots of the symmetric group. It seems interesting to investigate whether symmetric diagrams can be similarly defined in terms of underlying actions of the symmetric group on the geometry of the corresponding planar diagram group. \textbf{Acknowledgements.} The first author was partially supported by the ANR project AlMaRe (ANR-19-CE40-0001). The third author has received funding from the European Union's Horizon Europe Research and Innovation programme under the Marie Sklodowska-Curie grant agreement no.~101066588. \section{Diagram groups and diagram products} \subsection{Diagram groups}\label{section:diagrams} In the literature, planar diagram groups have various distinct, yet equivalent definitions. Informally, they are defined either as the fundamental group of a Squier square complex, or as groups consisting of equivalence classes of certain pictures comprising labelled directed graphs with cells (which leads to viewing planar diagram groups as two-dimensional analogues of free groups). They can also be viewed as collections of equivalent dual pictures consisting of transistors and wires. This last definition is the one we adopt throughout the paper. \paragraph{\textbf{Symmetric semigroup diagrams.}} In this paragraph, we introduce the main definitions related to \emph{symmetric semigroup diagrams}, first introduced in \cite{MR1396957} and further studied in \cite{MR2136028}. We follow the definition given in \cite{MR3822286} and we refer to Example \ref{ex:diagram} below to get a good picture to keep in mind when reading the definition (notice, however, that on our diagrams the frames are not drawn).
\noindent The \textit{free semigroup} on the set $\Sigma$ is the set $\Sigma^+$ of all non-empty strings formed from $\Sigma$, that is, $$\Sigma^{+}= \{ u_1u_2\dots u_n ~|~ n \in \mathbb{N}, ~u_i \in \Sigma \text{ for all }i\}.$$ Let $1$ denotes the empty string and $\Sigma^*= \Sigma^+ \cup \{1\}$. We define operation of concatenation on sets $\Sigma^+$ and $\Sigma^*$. We say $$w_1 = w_2$$ if $w_1$ and $w_2$ are equal as words in $\Sigma^*$. A \emph{set of relations} $\mathcal{R}$ is a subset of $\Sigma^+ \times \Sigma^+$. For clarity, we denote an element $(u,v)$ of $\mathcal{R}$ as $u=v$.\\ A \textit{semigroup presentation} $\mathcal{P}:= \langle \Sigma \mid \mathcal{R} \rangle$ is the data of an alphabet $\Sigma$ and a set of relations $\mathcal{R}$. In all our paper, we follow the convention that, if $u=v$ is a relation of $\mathcal{R}$, then $v=u$ does not belong to $\mathcal{R}$; as a consequence, $\mathcal{R}$ does not contain relations of the form $u=u$.\\ We now begin to define the fundamental pieces which we will glue together to construct (symmetric) diagrams. \begin{itemize} \item A \emph{wire} is a homeomorphic copy of $[0,1]$. The point $0$ is the \emph{bottom} of the wire, and the point $1$ its \emph{top}. \item A \emph{transistor} is a homeomorphic copy of $[0,1]^2$. One says that $[0,1] \times \{1 \}$ (resp. $[0,1] \times \{ 0 \}$, $\{0\} \times [0,1]$, $\{1 \} \times [0,1]$) is the \emph{top} (resp. \emph{bottom}, \emph{left}, \emph{right}) \emph{side} of the transistor. Its top and bottom sides are naturally endowed with left-to-right orderings. \item A \emph{frame} is a homeomorphic copy of $\partial [0,1]^2$. It has \emph{top}, \emph{bottom}, \emph{left} and \emph{right sides}, just as a transistor does, and its top and bottom sides are naturally endowed with left-to-right orderings. \end{itemize} Fix a semigroup presentation $\mathcal{P}= \langle \Sigma \mid \mathcal{R} \rangle$. A \emph{symmetric diagram over $\mathcal{P}$} is a labelled oriented quotient space obtained from \begin{itemize} \item a finite non-empty collection $W(\Delta)$ of wires; \item a labelling function $\ell : W(\Delta) \to \Sigma$; \item a finite (possibly empty) collection $T(\Delta)$ of transistors; \item and a frame, \end{itemize} which satisfies the following conditions: \begin{itemize} \item each endpoint of each wire is attached either to a transistor or to the frame; \item the bottom of a wire is attached either to the top of a transistor or to the bottom of the frame; \item the top of a wire is attached either to the bottom of a transistor or the top of the frame; \item the images of two wires in the quotient must be disjoint; \item if $\mathrm{top}(T)$ (resp. $\mathrm{bot}(T)$) denotes the word obtained by reading from left to right the labels of the wires which are connected to the top side (resp. the bottom side) of a given transistor $T$, then either $\mathrm{top}(T)= \mathrm{bot}(T)$ or $\mathrm{bot}(T)= \mathrm{top}(T)$ belongs to $\mathcal{R}$; \item if, given two transistors $T_1$ and $T_2$, we write $T_1 \prec T_2$ when there exists a wire whose bottom contact is a point on the top side of $T_1$ and whose top contact is a point on the bottom side of $T_2$, and if we denote by $<$ the transitive closure of $\prec$, then $<$ is a strict partial order on the set of the transistors. 
\end{itemize} Two symmetric diagrams are \emph{equivalent} if there exists a homeomorphism between them which preserves the labellings of wires and all the orientations (left-right and top-bottom) on all the transistors and on the frame. From now on, every symmetric diagram will be considered up to equivalence. This allows us to represent a diagram in the plane so that the frame and the transistors are straight rectangles such that their left-right and top-bottom orientations coincide with a fixed orientation of the plane, and so that wires are vertically monotone. \noindent Given a symmetric diagram $\Delta$, one defines its \emph{top label} (resp. its \emph{bottom label}), denoted by $\mathrm{top}(\Delta)$ (resp. $\mathrm{bot}(\Delta)$), as the word obtained by reading from left to right the labels of the wires connected to the top (resp. the bottom) of the frame. A \emph{symmetric $(u,v)$-diagram} is a symmetric diagram whose top label is $u$ and whose bottom label is $v$. \noindent Fixing two symmetric diagrams $\Delta_1$ and $\Delta_2$ satisfying $\mathrm{top}(\Delta_2)= \mathrm{bot}(\Delta_1)$, one can define their \emph{concatenation} $\Delta_1 \circ \Delta_2$ by gluing bottom endpoints of the wires of $\Delta_1$ which are connected to the bottom side of the frame to the top endpoints of the wires of $\Delta_2$ which are connected to the top side of the frame, following the left-to-right ordering, and by identifying, and next removing, the bottom side of the frame of $\Delta_1$ with the top side of the frame of $\Delta_2$. Loosely speaking, we ``glue'' $\Delta_2$ below $\Delta_1$. \noindent Given a symmetric diagram $\Delta$, a \emph{dipole} in $\Delta$ is the data of two transistors $T_1,T_2$ satisfying $T_1 \prec T_2$ such that, if $w_1, \ldots, w_n$ denotes the wires connected to the top side of $T_1$, listed from left to right: \begin{itemize} \item the top endpoints of the $w_i$'s are connected to the bottom of $T_2$ with the same left-to-right order, and no other wires are attached to the bottom of $T_2$; \item the top label of $T_2$ is the same as the bottom label of $T_1$. \end{itemize} Given such a dipole, one may \emph{reduce} it by removing the transistors $T_1, T_2$ and the wires $w_1, \ldots, w_n$, and connecting the top endpoints of the wires which are connected to the top side of $T_2$ with the bottom endpoints of the wires which are connected to the bottom side of $T_1$ (preserving the left-to-right orderings). \noindent A symmetric diagram without dipoles is \emph{reduced}. Clearly, any symmetric diagram can be transformed into a reduced one by reducing its dipoles, and according to \cite[Lemma~2.2]{MR2136028}, the reduced diagram we get does not depend on the order we choose to reduce its dipoles. We refer to this reduced diagram as the \emph{reduction} of the initial diagram. Two diagrams are the same \emph{modulo dipoles} if they have the same reduction. \begin{ex}\label{ex:diagram} Consider the semigroup presentation $$\mathcal{P}= \langle a,b,c \mid ab=ba, ac=ca, bc=cb \rangle.$$ Figure \ref{figure9} shows the concatenation of two symmetric diagrams over $\mathcal{P}$, and the reduction of the resulting symmetric diagram. \begin{figure}\label{figure9} \end{figure} \end{ex} \begin{definition} For every $w \in \Sigma^+$, the \emph{symmetric diagram group} $D_s(\mathcal{P},w)$ is the set of all the symmetric $(w,w)$-symmetric semigroup diagrams over $\mathcal{P}$, modulo dipoles, endowed with the concatenation. 
\end{definition} \noindent This is indeed a group according to \cite{MR2136028} who refer them as \textit{picture groups}. They also appear under the name of \textit{braided diagram groups} in the work of Guba-Sapir \cite{MR1396957}. \begin{remark} We recall the remark given in \cite{MR3822286} that the braided diagrams, despite the name, are not truly braided. We noticed that the two braided diagrams are equivalent if there is a certain type of marked homeomorphism between them. Thus, the equivalence does not depend upon the embedding into the larger space. Though these groups seem to have little in common with Artin braid groups. So conventionally, we prefer to call these symmetric diagram groups. \end{remark} \begin{ex}\label{ex:V} The symmetric diagram group $D_s(\mathcal{P},x)$ associated to the semigroup presentation $\mathcal{P} = \langle x \mid x=x^2 \rangle$ is isomorphic to Thompson's group $V$. See \cite[Example 16.6]{MR1396957} for a proof. \end{ex} \paragraph{\textbf{Planar and annular diagram groups.}} In this section, we fix a semigroup presentation $\mathcal{P}= \langle \Sigma \mid \mathcal{R} \rangle$ and a baseword $w \in \Sigma^+$. Below, we define two variations of symmetric semigroup diagrams. Our first variation is the main topic of \cite{MR1396957}. \begin{definition} A symmetric diagram over $\mathcal{P}$ is \emph{planar} if there exists an embedding $\Delta \to \mathbb{R}^2$ which preserves the left-to-right orderings and the top-bottom orientations on the transistors and the frame. The \emph{planar diagram group} $D_p(\mathcal{P},w)$ is the subgroup of $D_s(\mathcal{P},w)$ consisting of all planar diagrams. \end{definition} \begin{ex}\label{ex:F} The planar diagram group $D_p(\mathcal{P},x)$ associated to the semigroup presentation $\mathcal{P} = \langle x \mid x=x^2 \rangle$ is isomorphic to Thompson's group $F$. See \cite[Example 6.4]{MR1396957} for a proof. \end{ex} \noindent Our second variation was also introduced in \cite{MR1396957}, and further studied in \cite{MR2136028}. Keep in mind Figure \ref{figure6} when reading the definition. \begin{figure} \caption{Annular diagram.} \label{figure6} \end{figure} \begin{definition} A symmetric diagram $\Delta$ is \emph{annular} if it can be embedded into an annulus by preserving the left-to-right orderings and the top-bottom orientations on the transistors and the frame. More precisely, suppose that we replace the frame of $\Delta$ with a pair of disjoint circles, both endowed with the counterclockwise orientation in place of the previous left-to-right orderings of the top and bottom sides of the frame; we also fix a basepoint on each circles (which will be disjoint from the wires). Transistors and wires are defined as before, and their attaching maps are subject to the same conditions as before, where the inner (resp. outer) circle of the frame plays the role of the top (resp. bottom) side of the frame. The resulting diagram is \emph{annular} if it embeds into the plane by preserving the left-to-right orderings on the transistors. The \emph{annular diagram group} $D_a(\mathcal{P},w)$ is the set of annular $(w,w)$-diagrams over $\mathcal{P}$, modulo dipoles, endowed with the concatenation. \end{definition} \noindent All the definitions introduced in the previous section naturally generalise to planar and annular semigroup diagrams. 
Nevertheless, we mention that, when concatenating two annular diagrams, i.e.\ when identifying the top circle of the second diagram with the bottom circle of the first one, we have to match the basepoints on the different circles. \begin{ex}\label{ex:T} The annular diagram group $D_a(\mathcal{P},x)$ associated to the semigroup presentation $\mathcal{P} = \langle x \mid x=x^2 \rangle$ is isomorphic to Thompson's group $T$. See \cite[Example 16.5]{MR1396957} for a proof. \end{ex} \noindent It is worth noticing that a planar diagram is an annular diagram as well, so that, in the same way that $F \subset T \subset V$, one has $$D_p(\mathcal{P},w) \subset D_a(\mathcal{P},w) \subset D_s(\mathcal{P},w)$$ for every semigroup presentation $\mathcal{P}= \langle \Sigma \mid \mathcal{R} \rangle$ and every baseword $w \in \Sigma^+$. \subsection{Diagram products} \noindent In this section, we describe symmetric diagram products as introduced in \cite{MR4033512}. Example~\ref{ex:diagramsPG} and Figure~\ref{figure10} below illustrate the corresponding definitions which we give now. \noindent Let $\mathcal{P}= \langle \Sigma \mid \mathcal{R} \rangle$ be a semigroup presentation and $\mathcal{G}= \{ G_s \mid s \in \Sigma \}$ a collection of groups indexed by the alphabet $\Sigma$. We set a new alphabet $$\Sigma(\mathcal{G})= \{ (s,g) \mid s \in \Sigma, \ g \in G_s \}$$ and a new set of relations $\mathcal{R}(\mathcal{G})$ containing $$ (u_1,g_1) \cdots (u_n,g_n) = (v_1,h_1) \cdots (v_m,h_m) $$ for all $u_1 \cdots u_n=v_1 \cdots v_m \in \mathcal{R}$ and $g_1 \in G_{u_1}, \ldots, g_n \in G_{u_n}$, $h_1 \in G_{v_1}, \ldots, h_m \in G_{v_m}$. We get a new semigroup presentation $\mathcal{P}(\mathcal{G})= \langle \Sigma(\mathcal{G}) \mid \mathcal{R} (\mathcal{G}) \rangle$. A \emph{symmetric diagram over $(\mathcal{P}, \mathcal{G})$} is a symmetric semigroup diagram over $\mathcal{P}(\mathcal{G})$. If $\Delta$ is such a diagram, we denote by $\mathrm{top}^-(\Delta)$ (resp. $\mathrm{bot}^-(\Delta)$) the image of $\mathrm{top}(\Delta)$ (resp. $\mathrm{bot}(\Delta)$) under the natural projection $\Sigma(\mathcal{G})^+ \to \Sigma^+$ (which ``forgets'' the second coordinate). If $u,v \in \Sigma^+$ are words, a \emph{$(u,v)$-diagram over $(\mathcal{P}, \mathcal{G})$} is a diagram $\Delta$ satisfying $\mathrm{top}^-(\Delta)=u$ and $\mathrm{bot}^-(\Delta)=v$; we say that $\Delta$ is a \emph{$(u,\ast)$-diagram over $(\mathcal{P}, \mathcal{G})$} if we do not want to mention $v$. \noindent All the vocabulary introduced in Section \ref{section:diagrams} applies to symmetric diagrams over $(\mathcal{P}, \mathcal{G})$ thought of as symmetric semigroup diagrams over $\mathcal{P}(\mathcal{G})$, except the concatenation and the dipoles which we define now. \noindent If $\Delta_1$ and $\Delta_2$ are two symmetric diagrams over $(\mathcal{P}, \mathcal{G})$ satisfying $\mathrm{top}^-(\Delta_2)= \mathrm{bot}^-(\Delta_1)$, we define the \emph{concatenation} of $\Delta_1$ and $\Delta_2$ as the symmetric diagram over $(\mathcal{P}, \mathcal{G})$ obtained in the following way. Write $\mathrm{bot}(\Delta_1) = (w_1,g_1) \cdots (w_n,g_n)$ and $\mathrm{top}(\Delta_2)= (w_1,h_1) \cdots (w_n,h_n)$ for some $w_1, \ldots, w_n \in \Sigma$ and $g_1, \ldots, g_n,h_1, \ldots, h_n \in \bigsqcup\limits_{G \in \mathcal{G}} G$.
Now, \begin{itemize} \item we glue the top endpoints of the wires of $\Delta_2$ connected to the top side of the frame to the bottom endpoints of the wires of $\Delta_1$ connected to the bottom side of the frame (respecting the left-to-right ordering); \item we label these wires from left to right by $(w_1,g_1h_1), \ldots, (w_n,g_nh_n)$; \item we identify, and then remove, the bottom side of the frame of $\Delta_1$ and the top side of the frame of $\Delta_2$. \end{itemize} The symmetric diagram we get is the \emph{concatenation} $\Delta_1 \circ \Delta_2$. \noindent Given a symmetric diagram $\Delta$ over $(\mathcal{P}, \mathcal{G})$, a \emph{dipole} in $\Delta$ is the data of two transistors $T_1,T_2$ satisfying $T_1 \prec T_2$ such that, if $w_1, \ldots, w_n$ denote the wires connected to the top side of $T_1$, listed from left to right: \begin{itemize} \item the top endpoints of the $w_i$'s are connected to the bottom of $T_2$ with the same left-to-right order, and no other wires are attached to the bottom of $T_2$; \item the labels $\mathrm{top}^-(T_2)$ and $\mathrm{bot}^-(T_1)$ are the same; \item the wires $w_1, \ldots, w_n$ are labeled by letters of $\Sigma(\mathcal{G})$ with trivial second coordinates. \end{itemize} Given such a dipole, one may \emph{reduce} it by \begin{itemize} \item removing the transistors $T_1,T_2$ and the wires $w_1, \ldots, w_n$; \item connecting the top endpoints of the wires $a_1, \ldots, a_m$ (from left to right) which are connected to the top side of $T_2$ with the bottom endpoints of the wires $b_1, \ldots, b_m$ (from left to right) which are connected to the bottom side of $T_1$ (preserving the left-to-right orderings); \item and labelling the new wires by $( \ell_1, g_1h_1), \ldots, (\ell_m, g_mh_m)$ from left to right, if $a_i$ is labelled by $(\ell_i,h_i)$ and $b_i$ by $(\ell_i,g_i)$ for every $1 \leq i \leq m$. \end{itemize} A symmetric diagram over $(\mathcal{P}, \mathcal{G})$ which does not contain any dipole is \emph{reduced}. Of course, any symmetric diagram can be transformed into a reduced one by reducing its dipoles, and the same argument as \cite[Lemma 2.2]{MR2136028} shows that the reduced diagram we get does not depend on the order we choose to reduce its dipoles. We refer to this reduced diagram as the \emph{reduction} of the initial diagram. Two diagrams are the same \emph{modulo dipoles} if they have the same reduction. \begin{definition} For every $w \in \Sigma^+$, the \emph{symmetric diagram product} $D_s(\mathcal{P}, \mathcal{G},w)$ is the set of all the symmetric $(w,w)$-diagrams $\Delta$ over $(\mathcal{P},\mathcal{G})$, modulo dipoles, endowed with the concatenation. \end{definition} \noindent This is indeed a group, for the same reason that a symmetric diagram group turns out to be a group. \begin{ex}\label{ex:diagramsPG} Consider the semigroup presentation $\mathcal{P}= \langle a,b,p \mid a=ap, b=pb \rangle$ and the collection of groups $\mathcal{G}= \{ G_a=G_b=G_p= \mathbb{Z} \}$. Figure \ref{figure10} shows the concatenation of two symmetric diagrams over $(\mathcal{P}, \mathcal{G})$, and the reduction of the resulting diagram. \begin{figure} \caption{Concatenation and reduction of two symmetric diagrams.} \label{figure10} \end{figure} \end{ex} \subsection{Combination theorem} \noindent It is proved in \cite[Theorem~4]{MR1725439} that a planar diagram product of planar diagram groups is again a diagram group. 
The motivation is clear: given a diagram representing an element of our diagram product, one can produce a diagram over a bigger semigroup presentation by expanding each wire labelled by an element with a diagram representing it. This idea allows one to construct an injective morphism from every (planar or symmetric) diagram product of (planar or symmetric) diagram groups into a (planar or symmetric) diagram group. The difficulty is to choose the latter group in order to get a morphism which is also surjective. Let us illustrate what may happen with a specific example. \noindent Let $\mathcal{P} = \langle a \mid \ \rangle$. Given a group $G$, the symmetric diagram product $D(\mathcal{P},\mathcal{G},a^2)$, where $\mathcal{G}= \{G(a):=G\}$, is isomorphic to the wreath product $G \wr \mathbb{Z}/2\mathbb{Z}:= (G \oplus G) \rtimes \mathbb{Z}/2\mathbb{Z}$. Now, assume that $G$ is isomorphic to Thompson's group $V$. We know that $V$ can be described as the symmetric diagram group $D(\mathcal{Q},x)$ where $\mathcal{Q}:= \langle x \mid x=x^2 \rangle$. A natural attempt is to set the semigroup presentation $$\mathcal{S}:= \langle a,x \mid a=x, x=x^2 \rangle$$ and to ask whether $D(\mathcal{P},\mathcal{G},a^2)$ is isomorphic to $D(\mathcal{S},a^2)$. However, every reduced $(a^2,a^2)$-diagram over $\mathcal{S}$ can be written (up to dipole reduction) as a concatenation $A \circ \Delta \circ A^{-1}$ where $\Delta$ is an $(x^2,x^2)$-diagram over $\mathcal{Q}$ and where $A$ is the $(a^2,x^2)$-diagram given by the derivation $aa \to xa \to xx$. Thus, $D(\mathcal{S},a^2)$ is isomorphic to $D(\mathcal{Q},x^2)$. Since $x=x^2$ modulo $\mathcal{Q}$, the latter is also isomorphic to $D(\mathcal{Q},x)$, and we conclude that $D(\mathcal{S},a^2)$ is actually isomorphic to $V$. \noindent In our example, the obstruction comes from the fact that the two natural copies of $D(\mathcal{Q},x)$ in $D(\mathcal{S},a^2)$ ``interact'', which is not the case for the two copies of $V$ in the diagram product. In planar diagram products, it is possible to insert ``separation letters'' in order to prevent such an interaction. However, in symmetric diagram products, the trick does not work anymore since wires are allowed to cross. \noindent Actually, it is reasonable to think that, contrary to the planar case, symmetric diagram groups are not stable under symmetric diagram products. As explicit candidates: \begin{question} Are the free product $V \ast V$ and the wreath product $V \wr \mathbb{Z}/2\mathbb{Z}$ symmetric diagram groups? \end{question} \noindent Nevertheless, it turns out that symmetric diagram products of symmetric diagram groups are symmetric diagram groups in some cases. Below, we record a simple case which will help us prove Theorem~\ref{thm:MainIntro}. \begin{thm}\label{thm:Combination} Let $\mathcal{P}:= \langle \Sigma \mid \mathcal{R} \rangle$ be a semigroup presentation and $w \in \Sigma^+$ a word. Set $\mathcal{G}:= \{ G(s)= \mathbb{Z}, s \in \Sigma\}$. The diagram product $D(\mathcal{P},\mathcal{G},w)$ is isomorphic to the diagram group $D(\mathcal{Q},w)$ where $$\mathcal{Q}:=\left\langle \Sigma \sqcup \bigsqcup_{s \in \Sigma} \{a_s,b_s,c_s\} \mid \mathcal{R} \sqcup \bigsqcup\limits_{s \in \Sigma} \{s=a_s,a_s=b_s,b_s=c_s, c_s=s\} \right\rangle.$$ \end{thm} \begin{proof} To every diagram $\Delta$ over $(\mathcal{P},\mathcal{G})$, we associate a diagram $\Theta(\Delta)$ over $\mathcal{Q}$ as follows.
Given a wire in $\Delta$ labelled by some $(s,k)$ with $s \in \Sigma$ and $k \in G(s)$, we replace it with the $(s,s)$-diagram $M_s(k)$ over $\mathcal{Q}$ given by the derivation $$s \to \underset{k \text{ times}}{\underbrace{a_s \to b_s \to c_s \to \cdots \to a_s \to b_s \to c_s}} \to s $$ if $k \geq 0$ or its inverse if $k<0$. \noindent The first observation is that $\Theta$ sends diagrams which are equivalent modulo dipole reduction to diagrams which are equivalent modulo dipole reduction. Indeed, let $\Delta'$ be a diagram obtained from another diagram $\Delta$ by reducing a dipole, say $(T_1,T_2)$. The wires connecting $T_1$ and $T_2$ do not cross and have trivial second coordinates. Therefore, $(T_1,T_2)$ also defines a dipole in $\Theta(\Delta)$. Clearly, reducing $(T_1,T_2)$ in $\Theta(\Delta)$ yields $\Theta(\Delta')$. This justifies our assertion. \noindent Thus, $\Theta$ defines a map $D(\mathcal{P},\mathcal{G},w) \to D(\mathcal{Q},w)$. Let us verify that it is a morphism. Let $\Phi,\Psi \in D(\mathcal{P},\mathcal{G},w)$ be two diagrams and let $w_1, \ldots,w_n$ denote the letters of $w$ from left to right. We can decompose $\Phi$ (resp. $\Psi$) as a concatenation $\Phi_0 \circ \epsilon$ (resp. $\eta \circ \Psi_0$) where $\Phi_0$ (resp. $\Psi_0$) has its bottom (resp. top) wires whose labels have trivial second coordinates and where $\epsilon$ (resp. $\eta$) has no transistor but only wires labelled from left to right by, say, $g_1 \in G(w_1), \ldots, g_n \in G(w_n)$ (resp. $h_1 \in G(w_1), \ldots, h_n \in G(w_n)$). Then $\Phi \circ \Psi$ can be written as $\Phi_0 \circ \mu \circ \Psi_0$ where $\mu$ has no transistor but only wires labelled from left to right by $g_1h_1, \ldots, g_nh_n$. By construction, $\Theta( \Phi \circ \Psi)= \Theta( \Phi_0) \circ \Theta(\mu) \circ \Theta(\Psi_0)$. But we clearly have $\Theta(\mu) \equiv \Theta(\epsilon) \circ \Theta(\eta)$, $\Theta(\Phi_0) \circ \Theta(\epsilon)= \Theta(\Phi)$, and $\Theta(\eta) \circ \Theta(\Psi_0)= \Theta(\Psi)$. Hence $\Theta(\Phi \circ \Psi) \equiv \Theta(\Phi) \circ \Theta(\Psi)$. \noindent Now, let us verify that $\Theta$ is injective. Let $\Delta \in D(\mathcal{P},\mathcal{G},w)$ be a diagram. Assume that $\Theta(\Delta)$ contains a dipole, say $(T_1,T_2)$. Because $\Theta(\Delta)$ is obtained from $\Delta$ by expanding the wires with reduced diagrams, necessarily $T_1$ and $T_2$ already existed in $\Delta$, and necessarily define a dipole in $\Delta$. Therefore, $\Theta$ sends reduced diagrams to reduced diagrams, which implies its injectivity. \noindent Finally, it remains to show that $\Theta$ is surjective. So let $\Delta \in D(\mathcal{Q},w)$ be an arbitrary reduced diagram. Because $\Delta$ is reduced, for every $s \in \Sigma$: \begin{itemize} \item a transistor labelled by $a_s=b_s$ must lie between two transistors labelled by $s=a_s$ and $b_s=c_s$ or $c_s=a_s$ and $b_s=a_s$; \item a transistor labelled by $b_s=c_s$ must lie between two transistors labelled by $a_s=b_s$ and $c_s=s$ or $c_s=a_s$; \item a transistor labelled by $c_s=a_s$ must lie between two transistors labelled by $b_s=c_s$ and $a_s=s$ or $a_s=b_s$. \end{itemize} As a consequence, there exist subdiagrams $\Phi_1, \ldots, \Phi_n$ such that every transistor labelled by a relation not in $\mathcal{R}$ is contained in some $\Phi_i$ and such that, for each $1 \leq i \leq n$, $\Phi_i = M_s(k)$ for some $s \in \Sigma$ and $k \in \mathbb{Z}$. Let $\Psi$ denote the diagram obtained from $\Delta$ by collapsing each such $\Phi_i$ to a single wire labelled by $(s,k)$.
Then $\Psi$ is a diagram over $(\mathcal{P},w)$ and $\Theta(\Psi)=\Delta$ by construction. \end{proof} \section{Graph products} \subsection{Preliminaries} \noindent Let $\Gamma$ be a simplicial graph and $\mathcal{G}= \{ G_u \mid u \in V(\Gamma) \}$ be a collection of groups indexed by the vertex-set $V(\Gamma)$ of $\Gamma$. The \emph{graph product} $\Gamma \mathcal{G}$ is defined as the quotient $$\left( \underset{u \in V(\Gamma)}{\ast} G_u \right) / \langle \langle [g,h]=1, g \in G_u, h \in G_v \ \text{if} \ (u,v) \in E(\Gamma) \rangle \rangle$$ where $E(\Gamma)$ denotes the edge-set of $\Gamma$. The groups of $\mathcal{G}$ are referred to as \emph{vertex-groups}. \noindent A \emph{word} in $\Gamma \mathcal{G}$ is a product $g_1 \cdots g_n$ for some $n \geq 0$ and, for every $1 \leq i \leq n$, $g_i \in G$ for some $G \in \mathcal{G}$; the $g_i$'s are the \emph{syllables} of the word, and $n$ is the \emph{length} of the word. Clearly, the following operations on a word does not modify the element of $\Gamma \mathcal{G}$ it represents: \begin{description} \item[Cancellation] delete the syllable $g_i=1$; \item[Amalgamation] if $g_i,g_{i+1} \in G$ for some $G \in \mathcal{G}$, replace the two syllables $g_i$ and $g_{i+1}$ by the single syllable $g_ig_{i+1} \in G$; \item[Shuffling] if $g_i$ and $g_{i+1}$ belong to two adjacent vertex-groups, switch them. \end{description} A word is \emph{graphically reduced} if its length cannot be shortened by applying these elementary moves. Every element of $\Gamma \mathcal{G}$ can be represented by a graphically reduced word, and this word is unique up to the shuffling operation. For more information about graphically reduced words, we refer to \cite{GreenGP} (see also \cite{HsuWise,VanKampenGP}). \subsection{Main theorem}The rest of the section is dedicated to the proof of Theorem~\ref{thm:GraphProduct}. The strategy is to realise graph products of groups as symmetric diagram groups and next to apply Theorem~\ref{thm:Combination} in order to conclude. \begin{definition} Let $n \in \mathbb{N} \cup \{\infty\}$ be an integer and $\mathscr{C}$ a collection of subsets in $[n]:= \{1, \ldots, n\}$. The \emph{disjointness graph} $\Delta_\mathscr{C}$ of $\mathscr{C}$ is the graph whose vertex-set is $\mathscr{C}$ and whose edges connect two elements of $\mathscr{C}$ whenever they are disjoint. \end{definition} \noindent As a preliminary observation, we show that every countable graph can be realised as a disjointness graph. \begin{lemma}\label{lem:DisjointGraph} For every countable graph $\Gamma$, there exist some $n \in \mathbb{N} \cup \{\infty\}$ and some collection of finite subsets $\mathscr{C}$ of $[n]$ such that $\Delta_\mathscr{C}$ is isomorphic to $\Gamma$. \end{lemma} \begin{proof} If $\Gamma$ is the union of two isolated vertices, it suffices to take $n=2$ and $\mathscr{C}=\{ \{1\}, \{1,2\}\}$. From now on, we assume that $\Gamma$ is not the union of two isolated vertices. \noindent Let $\Gamma^{\mathrm{opp}}$ denote the opposite graph of $\Gamma$, namely the graph whose vertex-set is $V(\Gamma)$ and whose edges connect two vertices whenever they are not adjacent in $\Gamma$. Let $n \in \mathbb{N} \cup \{\infty\}$ denote the number of edges in $\Gamma^{\mathrm{opp}}$. From now on, we identify $[n]$ with the edge-set of $\Gamma^{\mathrm{opp}}$. For every vertex $u$ of $\Gamma$, let $S(u) \subset [n]$ denote the set of the edges of $\Gamma^\mathrm{opp}$ containing $u$. 
\noindent We claim that the map $\Psi : u \mapsto S(u)$ induces a graph isomorphism $\Gamma \to \Delta_\mathscr{C}$ where $\mathscr{C}:= \{ S(u), u \in V(\Gamma)\}$. Because $\Gamma$ is not the union of two isolated vertices, $\Psi$ induces a bijection between the vertices of $\Gamma$ and $\Delta_\mathscr{C}$. If two vertices $u,v \in \Gamma$ are adjacent, then there is no edge in $\Gamma^\mathrm{opp}$ connecting $u$ and $v$, which amounts to saying that $S(u)$ and $S(v)$ are disjoint. Thus, $S(u)$ and $S(v)$ are adjacent in $\Delta_\mathscr{C}$. Conversely, if $u,v \in \Gamma$ are not adjacent, then $S(u)$ and $S(v)$ contain the edge of $\Gamma^\mathrm{opp}$ connecting $u$ and $v$, so $S(u)$ and $S(v)$ cannot be adjacent in $\Delta_\mathscr{C}$. \end{proof} \noindent \begin{thm}\label{thm:GraphProduct} Let $n \in \mathbb{N}$ be an integer, $\mathscr{C}$ a collection of subsets in $[n]$, and $\mathcal{G}:= \{G_I, I \in \mathscr{C}\}$ a collection of groups indexed by $\mathscr{C}$. Let $\mathcal{P}_\mathscr{C}$ denote the semigroup presentation $$\left\langle x_i \ (1 \leq i \leq n), \ a_I \ (I \in \mathscr{C}) \mid x_{i_1} \cdots x_{i_k} = a_I \ (I=\{i_1< \cdots < i_k\} \in \mathscr{C}) \right\rangle.$$ Setting $\mathcal{H}:= \{ H(x_i)= \{1\} \ (1 \leq i \leq n), \ H(a_I)=G_I \ (I \in \mathscr{C})\}$, the symmetric diagram product $D_s(\mathcal{P}_\mathscr{C},\mathcal{H},x_1 \cdots x_n)$ is isomorphic to the graph product $\Delta_\mathscr{C} \mathcal{G}$. \end{thm} \begin{proof} For convenience, set $w:=x_1 \cdots x_n$. For every vertex $I= \{i_1< \cdots < i_k\} \in \Delta_\mathscr{C}$ and every element $g \in G_I$, let $\Theta(g)$ denote the $(w,w)$-diagram obtained by connecting the wires $x_{i_1}, \ldots, x_{i_k}$ to a transistor $T$ labelled by $x_{i_1} \cdots x_{i_k} =a_I$, by labelling the bottom wire of $T$ by $(a_I,g)$, which we connect to a transistor labelled by $a_I=x_{i_1} \cdots x_{i_k}$. See Figure~\ref{Theta}. \begin{figure} \caption{Example of a $\Theta(g)$.} \label{Theta} \end{figure} \noindent Observe that: \begin{itemize} \item for every $I \in \Delta_\mathscr{C}$ and for all $g,h \in G_I$, $\Theta(g) \circ \Theta(h) \equiv \Theta(gh)$; \item for all adjacent vertices $I,J \in \Delta_\mathscr{C}$ and all elements $g \in G_I$, $h \in G_J$, we have $\Theta(g) \circ \Theta(h) = \Theta(h) \circ \Theta(g)$ because the transistors of the two diagrams are connected to two disjoint sets of wires (respectively labelled by $x_i$ for $i \in I$ and $x_j$ for $j \in J$). \end{itemize} It follows that $\Theta$ extends to a morphism $\Delta_\mathscr{C} \mathcal{G} \to D(\mathcal{P}_\mathscr{C}, \mathcal{H},w)$ by sending a word in generators $g_1 \cdots g_n$ to $\Theta(g_1) \circ \cdots \circ \Theta(g_n)$. \noindent Fix a word in generators $g=g_1 \cdots g_n$. If $\Theta(g)=\Theta(g_1) \circ \cdots \circ \Theta(g_n)$ contains a dipole, two cases may occur. First, there may exist some $1 \leq i \leq n$ such that $\Theta(g_i)$ contains a dipole, which amounts to saying that $g_i=1$. Otherwise, if all the $\Theta(g_i)$ are reduced, then there must exist $1 \leq i < j \leq n$ such that our dipole contains one transistor in $\Theta(g_i)$ and the other in $\Theta(g_j)$.
This implies that $g_i$ and $g_j$ belong to the same vertex-group of $\Delta_\mathscr{C}\mathcal{G}$, say $G_I$ with $I \in \mathscr{C}$, and that the \emph{captive} wires (i.e.\ the top and bottom wires connected to a transistor) of the $\Theta(g_\ell)$ for $i < \ell < j$ are connected to the \emph{free} (i.e.\ not captive) wires of $\Theta(g_i)$ and $\Theta(g_j)$. This amounts to saying that each $g_\ell$ for $i<\ell<j$ belongs to a vertex-group $G_L$ with $L$ disjoint from $I$. Thus, in our word $g_1 \cdots g_n$, the syllable $g_j$ can be shuffled next to $g_i$ and finally merged with $g_i$. \noindent Our argument shows that, if $\Theta(g)$ is not reduced, then $g_1 \cdots g_n$ is not graphically reduced. Consequently, $\Theta$ sends a (non-empty) graphically reduced word in generators to a (non-trivial) reduced diagram. This implies that $\Theta$ is an injective morphism. \noindent It remains to show that $\Theta$ is surjective. So let $\Delta \in D(\mathcal{P}_\mathscr{C},\mathcal{H},w)$ be a reduced diagram. If $\Delta$ is trivial, then $\Theta(1)= \Delta$. From now on, we assume that $\Delta$ is non-trivial. Necessarily, $\Delta$ contains a transistor $R$ labelled by $x_{i_1} \cdots x_{i_k}=a_I$ for some $I=\{i_1< \cdots < i_k\} \in \mathscr{C}$ such that the top wires of $R$ are also top wires of $\Delta$. The bottom wire of $R$, whose label has first coordinate $a_I$, must be connected to a transistor $S$ labelled by $a_I=x_{i_1} \cdots x_{i_k}$. Thus, the union of $R$ and $S$ coincides with $\Theta(g)$ where $g \in G_I$ is such that the wire connecting $R$ and $S$ is labelled by $(a_I,g)$. Then, the reduction of $\Theta(g)^{-1} \circ \Delta$ has fewer transistors than $\Delta$. By iterating the argument, we conclude that $\Delta$ belongs to the image of $\Theta$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:MainIntro}.] The theorem now follows from Theorem~\ref{thm:GraphProduct}, Lemma~\ref{lem:DisjointGraph}, and Theorem~\ref{thm:Combination}. \end{proof} \section{The pure virtual twin group is a symmetric diagram group} The purpose of this section is to give a new perspective on the pure virtual twin group by viewing it as a symmetric diagram group. We recall the necessary preliminaries in the next subsection. \subsection{Virtual twin groups} The {\it virtual twin group} $VT_n$, $n \ge 2$, is generated by the set $\{ s_1, s_2, \ldots, s_{n-1}, \rho_1, \rho_2, \ldots, \rho_{n-1}\}$ with defining relations \begin{eqnarray} s_i^{2} &=&1 \hspace*{5mm} \textrm{for } i = 1, 2, \dots, n-1, \label{1}\\ s_is_j &=& s_js_i \hspace*{5mm} \textrm{for } |i - j| \geq 2,\label{2}\\ \rho_i^{2} &=& 1 \hspace*{5mm} \textrm{for } i = 1, 2, \dots, n-1, \label{3}\\ \rho_i\rho_j &=& \rho_j\rho_i \hspace*{5mm} \textrm{for } |i - j| \geq 2, \label{4}\\ \rho_i\rho_{i+1}\rho_i &=& \rho_{i+1}\rho_i\rho_{i+1}\hspace*{5mm} \textrm{for } i = 1, 2, \dots, n-2, \label{5}\\ \rho_is_j &=& s_j\rho_i \hspace*{5mm} \textrm{for } |i - j| \geq 2, \label{6}\\ \rho_i\rho_{i+1} s_i &=& s_{i+1} \rho_i \rho_{i+1}\hspace*{5mm} \textrm{for } i = 1, 2, \dots, n-2. \label{7} \end{eqnarray} In particular, $VT_2 \cong \mathbb{Z}_2 * \mathbb{Z}_2$, the infinite dihedral group. There is a natural surjection $\pi:VT_n \to S_n$ given by $$\pi(s_i) = \pi(\rho_i) = (i, i+1)$$ for all $1\leq i \leq n-1$. The kernel $PVT_n$ of this surjection is called the \textit{pure virtual twin group}. The group $PVT_n$ is an analogue of the pure virtual braid group.
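For instance, $s_1\rho_1$ lies in $PVT_n$, since $\pi(s_1\rho_1)=(1,2)(1,2)$ is the identity, whereas $s_1\rho_2\notin PVT_n$ for $n\geq 3$, since $\pi(s_1\rho_2)=(1,2)(2,3)\neq 1$.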
The map $S_n \to VT_n$ given by $(i, i+1)\mapsto \rho_i$ is a splitting of the short exact sequence $$1 \to PVT_n \to VT_n \to S_n \to 1,$$ and hence $VT_n= PVT_n \rtimes S_n$. Let $$\lambda_{i, i+1}= s_i \rho_i,$$ for each $1 \le i \le n-1$ and $$\lambda_{i,j} = \rho_{j-1} \rho_{j-2} \dots \rho_{i+1} \lambda_{i, i+1} \rho_{i+1} \dots \rho_{j-2} \rho_{j-1},$$ for each $1 \leq i < j \leq n$ and $j \ne i+1$. The following result \cite{Structure-Automorphisms-Pure-Virtual-Twin-Groups} describes the structure of $PVT_n$. \begin{thm}\label{pvtn-right-angled-artin} The pure virtual twin group $PVT_n$ on $n \ge 2 $ strands is an irreducible right-angled Artin group and is presented by $$\big\langle \lambda_{i,j},~1 \leq i < j \leq n ~|~ \lambda_{i,j} \lambda_{k,l} = \lambda_{k,l} \lambda_{i,j} \text{ for distinct integers } i, j, k, l \big\rangle.$$ \end{thm} These virtual twin groups are classical generalisation of twin groups, which are generated by the set $\{ s_1, s_2, \ldots, s_{n-1}\}$ whose generators satisfy the defining relations (\ref{1})--(\ref{2}), whose pure subgroup is denoted by $PT_n$. Analogous to (virtual) braid groups \cite{MR2128049}, (virtual) twin groups have a nice diagrammatical interpretation which we describe in this section (see \cite{MR4209535} for more details).\\ Consider a set $Q$ of fixed $n$ points on the real line $\mathbb{R}$. A \textit{virtual twin diagram} on $n$ strands is a subset $D$ of the strip $\mathbb{R} \times [0,1]$ consisting of $n$ intervals called {\it strands} such that the boundary of $D$ is $Q \times \{0,1\}$ and the following conditions are met: \begin{enumerate} \item the natural projection $\mathbb{R} \times [0,1] \to [0,1]$ maps each strand homeomorphically onto $[0,1]$. Informally, the strands are monotonic, \item the set $V(D)$ of all crossings of the diagram $D$ consists of transverse double points of $D$ where each crossing is preassigned to be a real or a virtual crossing as shown below. A virtual crossing is depicted by a crossing encircled with a small circle. \end{enumerate} \begin{figure} \caption{Real and virtual crossing} \label{Crossings} \end{figure} We say that two virtual twin diagrams on $n$ strands are said to be \textit{equivalent} if one can be obtained from the other by isotopies of the plane and a finite sequence of planar Reidemeister moves as shown in Figure [\ref{ReidemeisterMoves}]. \begin{figure} \caption{Reidemeister moves for virtual twin diagrams} \label{ReidemeisterMoves} \end{figure} A \textit{virtual twin} is then defined as an equivalence class of such virtual twin diagrams. The product $D_1D_2$ of two virtual twin diagrams $D_1$ and $D_2$ is defined by placing $D_1$ on top of $D_2$ and then shrinking the interval to $[0,1]$. It is not difficult to check that this is a well-defined binary operation on the set of all virtual twins on $n$ strands. This set of all virtual twins on $n$ strands forms a group which is isomorphic to the abstractly defined group $VT_n$. The generators $s_i$ and $\rho_i$ of $VT_n$ can be represented as in Figure [\ref{generator-vtn}]. This approach of defining virtual twin groups is crucial in viewing pure virtual twin group as a symmetric diagram group, which we prove in the following subsection. \begin{figure} \caption{Generator $s_i$ and $\rho_i$} \label{generator-vtn} \end{figure} \subsection{Main theorems} We first start by proving that the group $PVT_n$ is not a planar diagram group. 
\begin{thm}\label{thm:NotPlanar} The group $PVT_n$ is a planar diagram group if and only if $n \leq 4$. \end{thm} \begin{proof} It follows from Theorem~\ref{pvtn-right-angled-artin} that $PVT_2 \simeq \mathbb{Z}$, $PVT_3 \simeq \mathbb{F}_3$, and $PVT_4 \simeq \mathbb{Z}^2 \ast \mathbb{Z}^2 \ast \mathbb{Z}^2$. All these groups are planar diagram groups. Now, fix an integer $n \geq 5$. According to Theorem~\ref{pvtn-right-angled-artin}, $PVT_n$ is isomorphic to the right-angled Artin group $A(\Gamma_n)$, where $\Gamma_n$ is the graph whose vertices are the pairs of integers $\{i<j\}$ in $[1,n]$ and whose edges connect two pairs whenever they are disjoint. As mentioned in \cite[Corollary~3.19]{Diagram-Groups-Genevois}, the proof of \cite[Theorem~30]{MR1725439} shows that a right-angled Artin group whose defining graph contains an induced cycle of odd length $\geq 5$ cannot be a planar diagram group. Therefore, it suffices to exhibit in $\Gamma_n$ an induced cycle of length $5$. The vertices $\{1,2\}$, $\{2,3\}$, $\{3,5\}$, $\{4,5\}$, $\{1,4\}$ define such a cycle. \end{proof} \begin{thm} Let $\mathcal{P}_n= \langle x_1, x_2, \dots, x_n ~|~ x_i x_j =x_j x_i \text{ for all } i \neq j \rangle$ be a semigroup presentation. The symmetric diagram group $D_s(\mathcal{P}_n,x_1x_2 \cdots x_n )$ is isomorphic to the pure virtual twin group $PVT_n$. \end{thm} \begin{proof} Consider a pure virtual twin $b$. We stack the twin in a frame such that the strings are labelled $x_1, x_2, \dots , x_n$ from left to right, and the ends of the strings connect the top of the frame with the bottom of the frame. By abuse of notation, we denote the top contacts (and likewise the bottom contacts) of the frame, read from left to right, by $x_1, x_2, \dots , x_n$. Since $b \in PVT_n$, this configuration is well defined, as shown in the figure below. In the diagram of $b$ there are real and virtual crossings, which we assume to be transversal; the virtual crossings will play the role of crossings amongst the wires, while the real crossings will be used to construct the transistors. Consider a real crossing $c$ in which the $x_i$ strand crosses the strand $x_j$ from the left as we go down the twin diagram. Then we replace the crossing $c$ with an $(x_ix_j, x_jx_i)$-transistor. It is to be noted here that the real crossings correspond to the transistors and the virtual crossings correspond to the intersections of wires in the symmetric diagram. For clarity of the proof, we encircle the intersections of wires. Doing this for all the real crossings, we get an $(x_1x_2 \cdots x_n, x_1x_2 \cdots x_n)$-symmetric diagram over the presentation $\mathcal{P}_n$. Conversely, given any $(x_1x_2 \cdots x_n, x_1x_2 \cdots x_n)$-symmetric diagram over the presentation $\mathcal{P}_n$, we replace all the transistors with real crossings and obtain a pure virtual twin diagram. \\ We now show that the map between $PVT_n$ and $D_s(\mathcal{P}_n,x_1x_2 \cdots x_n )$ is a bijection. The correspondence between the equivalence relations on twin diagrams and on symmetric diagrams is shown in Figure \ref{fig:Proof-Illustration-II}. \begin{figure}\label{fig:Proof-Illustration-II} \end{figure} \begin{figure}\label{fig:Proof-Illustration} \end{figure} This implies that the moves involving real crossings in twin diagrams correspond to equivalence modulo dipoles. Next, it is easy to see that the virtual moves in twin diagrams correspond to isotopies of wires.
One of the moves is shown in Figure \ref{fig:Proof-Illustration}. It is crucial to remember that the definition allows that by isotopy a wire can intersect a transistor, since it does not alter the top and bottom contacts of the transistor. Thus, this map is indeed an isomorphism, and we are done. \end{proof} We refer the readers \cite[Section 1.3]{Mostovoy-Round-Twin} definition of annular twin group and we get the following result. \begin{cor} The pure twin group embeds in the pure annular twin group which further embeds in the pure virtual twin group. \end{cor} \begin{proof} It follows from the fact that the pure (annular) twin groups are (annular) planar diagram groups \cite{Twin-Group-Diagram-Group}, and that for semigroup presentation $\mathcal{P}_n= \langle x_1, x_2, \dots, x_n ~|~ x_i x_j =x_j x_i \text{ for all } i \neq j \rangle$ and word $w=x_1x_2 \cdots x_n \in \Sigma^+$, we have $D_p(\mathcal{P},w) \subset D_a(\mathcal{P},w) \subset D_s(\mathcal{P},w)$. \end{proof} \addcontentsline{toc}{section}{References} {\footnotesize } \Address \end{document}
\begin{document} \title{Entanglement generation and detection in split exciton-polariton condensates} \author{Jingyan Feng} \affiliation{State Key Laboratory of Precision Spectroscopy, School of Physical and Material Sciences, East China Normal University, Shanghai 200062, China} \author{Hui Li} \affiliation{State Key Laboratory of Precision Spectroscopy, School of Physical and Material Sciences, East China Normal University, Shanghai 200062, China} \author{Zheng Sun} \affiliation{State Key Laboratory of Precision Spectroscopy, School of Physical and Material Sciences, East China Normal University, Shanghai 200062, China} \author{Tim Byrnes} \email{[email protected]} \affiliation{New York University Shanghai, 567 West Yangsi Road, Shanghai, 200126, China; NYU-ECNU Institute of Physics at NYU Shanghai, 3663 Zhongshan Road North, Shanghai 200062, China; Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, NYU Shanghai, 567 West Yangsi Road, Shanghai, 200126, China.} \affiliation{State Key Laboratory of Precision Spectroscopy, School of Physical and Material Sciences, East China Normal University, Shanghai 200062, China} \affiliation{Center for Quantum and Topological Systems (CQTS), NYUAD Research Institute, New York University Abu Dhabi, UAE.} \affiliation{Department of Physics, New York University, New York, NY 10003, USA} \date{\today} \begin{abstract} We propose a method of generating and detecting entanglement in two spatially separated exciton-polariton Bose-Einstein condensates (BECs) at steady-state. In our scheme we first create a spinor polariton BEC, such that steady-state squeezing is obtained under a one-axis twisting interaction. Then the condensate is split either physically or virtually, which results in entanglement generated between the two parts. A virtual split means that the condensate is not physically split, but its near-field image is divided into two parts and the spin correlations are deduced from polarization measurements in each half. We theoretically model and examine logarithmic negativity criterion and several correlation-based criteria to show that entanglement exists under experimentally achievable parameters. \end{abstract} \maketitle \section{Introduction}\label{i} Entanglement is a central property of quantum physics that distinguishes it from classical physics \cite{Vedral97,Vedral14}, and is considered an essential resource for applications such as quantum information \cite{Bennett93,Lee02}, quantum cryptography \cite{Gisin02,Yin2020,Ekert91} and quantum metrology \cite{Giovannetti2011,pezze18}. Entangled states have already been achieved at the macroscopic scale, and in systems such as atomic ensembles \cite{Krauter13} and mechanical resonators \cite{Kotler21}. Several experiments realized the generation of entanglement and other quantum correlations between the atoms of a single BEC cloud \cite{Lange2018,Kunkel2018,Fadel18,Schmied16}, which have been proposed for several applications \cite{Berrada13,Esteve08}. A well-known platform of creating BECs is with suitably structured semiconductor systems supporting exciton-polaritons. Exciton-polaritons are a superposition of an exciton (an electron-hole bound pair) and a cavity photon, and form a bosonic quasiparticle \cite{Deng10,Kasprzak06,Jonathan11,Byrnes2014}. The coupling between the exciton and photon results in an extremely light mass for the exciton-polaritons \cite{Deng02,Byrnes2014}, allowing for the possibility of realizing BECs \cite{Kasprzak06,Deng02,Balili07}. 
One of the advantages of polariton BECs is that they can be experimentally implemented at higher temperatures, even at room temperatures, by using materials such as GaN, ZnO. \cite{Christopoulos07,Baumberg08,kena10,Guillet11,Plumhof14,Fei22}. This makes the polariton system attractive for future technological applications, as they would not require bulky cryogenic apparatus. Currently, entanglement in spatially separate Bose-Einstein condensates, of any species (atomic, polaritonic, or otherwise) is yet to be observed. However, experimental demonstration of generating and detecting entanglement between spatially separated regions of a single BEC has been achieved \cite{Lange2018,Kunkel2018,Fadel18}. In these works, entanglement was first created between the atoms on a single $^{87}$Rb atomic BEC, using methods such as state-dependent forces, spin-nematic squeezing and spin-changing collisions. Then by using a magnified near-field image of the single atomic BEC, two different spatial regions of the same BEC were examined for correlations. It was shown that the entanglement can be detected after releasing the atomic gases from the traps. While the splitting process is only virtual, and not physically separated, this constitutes the first step to showing that BECs can be put in an entangled state. Numerous theoretical proposals have been made for generating entanglement in two completely separate atomic BECs \cite{Treutlein06,yumang19,kitzinger20,Idlas16,Pyrkov13,Pettersson17,Abdelrahman14,Rosseau14,Hussain14}. For polariton condensates, to date, no reports of detection of entanglement in a single or multiple polariton condensates have been made. The virtual splitting procedure may be an excellent first candidate for observing such entanglement. As shown in Ref. \cite{yumang19}, a physical or virtual split gives identical results in terms of entanglement, and extensions of the approach to completely separate BECs may be performed in the future. A very promising first candidate for entanglement between spatially separated BECs would be polariton BECs, which can be easily manipulated \cite{Kim08,Estrecho21,yingjie22}. In this paper, we propose a method of generating entanglement in a split polariton BEC and give an experimental scheme of detecting entanglement (see Fig. \ref{experimentsetup}). A single spinor polariton BEC is initially excited in the quantum wells (QWs) by optically pumping. Due to the natural self-interactions between the polaritons, this produces a one-axis twisting effect, producing multi-particle entanglement which involves all polaritons in the BEC. The single BEC is then spatially split into two ensembles which produces two separate spins. We note that this splitting procedure can be either a physical split or a virtual splitting procedure, where the image of the polaritons is partitioned into two (Fig. \ref{experimentsetup} (b)) \cite{yumang19}. After the splitting procedure, the sub-systems are still entangled due to the one-axis twisting producing multiparticle entanglement (see Fig. \ref{becsplit}) \cite{Tichy12,Bouvrie16,Bouvrie19}. We use a spin mapping to map our system with particle number fluctuations onto a fixed particle number space in order to use well-established spin correlators developed to detect entanglement. We calculate logarithmic negativity and correlation-based criteria to demonstrate that multi-particle entanglement exists not only in each BEC, but in a spatially separated configuration between two BECs. 
We show that our system exhibits stronger entanglement for larger particle number sectors in various regimes. By adjusting realistic system parameters one can improve the entanglement level. \begin{figure} \caption{The experimental setup for our system. (a) A spinor exciton-polariton BEC forms in the QWs, generated by a pump laser. The spinor BEC is formed from the spin components of the polaritons, and is excited by applying a laser of suitable polarization (both clockwise and anti-clockwise circular polarization) to excite equal populations of the spins. The photon component of the polaritons leaks through the semiconductor microcavity, and its image is focused on a Charge Coupled Device (CCD) of a camera. By individually detecting the polarization of the separate parts of the photoluminescence (imaged light) on the CCD, one may deduce the presence of entanglement between different spatial regions of the BEC as imaged on the CCD. (b) The enlarged image resolved from the CCD. The middle dashed line shows the two regions used to detect entanglement. \label{experimentsetup}} \end{figure} This paper is organized as follows. In Sec. \ref{ii} we introduce the theoretical model for a single spinor exciton-polariton condensate, and introduce the splitting operation, which produces two spatially separate BECs. In Sec. \ref{iii}, we numerically simulate our method and analyze our simulation results. In Sec. \ref{iv}, we show the main results of entanglement generation and detection by using different entanglement criteria. Finally, in Sec. \ref{v} we summarize and discuss our results. \section{Spin squeezed polariton condensates}\label{ii} \subsection{Theoretical model} We now describe the theoretical model used to simulate our interacting spinor polariton condensate. For further details we refer the reader to Ref.~\cite{jingyan21}, which analyzes a similar situation prior to splitting. The master equation for the spinor polariton BEC is \begin{align} \frac{d\rho}{dt}= - \frac{i}{\hbar}[H_{\text{system}},\rho]-\frac{\gamma}{2}{\cal L} [a,\rho]-\frac{\gamma}{2} {\cal L} [b,\rho], \label{masterequation} \end{align} where the Hamiltonian $H_{\text{system}}=H_{0}+H_{\text{pump}}+H_{\text{int}}$ is defined by \begin{align} H_{0} &=\hbar \Delta(a^{\dagger}a+b^{\dagger}b),\nonumber\\ H_{\text{pump}} &=\hbar A(a^{\dagger}e^{-i\theta_{a}}+ae^{i\theta_{a}}+b^{\dagger}e^{-i\theta_{b}}+be^{i\theta_{b}}),\nonumber\\ H_{\text{int}} &=\frac{\hbar U}{2}(a^{\dagger}a(a^{\dagger}a-1))+\frac{\hbar U}{2}(b^{\dagger}b(b^{\dagger}b-1))\nonumber\\ &+\hbar V a^{\dagger}a b^{\dagger} b. \end{align} Here, $a^{\dagger},b^{\dagger}$ and $a,b$ are the creation and annihilation operators for the two zero momentum polariton spin species $ s = \pm 1 $, respectively, which obey the bosonic commutation relations \begin{align} [a,a^\dagger ] & = [b, b^\dagger ] = 1, \nonumber \\ [a,b] & =0 . \end{align} The contribution of higher momentum polariton modes is not considered in our proposal, as these modes do not affect the spin squeezing entanglement, which is the focus of this study. The above Hamiltonian models resonant excitation, where the polaritons are typically excited at zero in-plane momentum, such that the remaining momenta are relatively unpopulated. One may also consider off-resonant excitation, where other momenta will also be present, but in such a scheme only the zero momentum polaritons should be examined, which could be achieved by filtering in momentum space.
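To make the model concrete, the steady state of the master equation (\ref{masterequation}) can also be obtained by direct numerical integration. The following is a minimal illustrative sketch, not part of the analysis of this paper, assuming the open-source QuTiP package; the parameter values are placeholders rather than those used in our simulations (we set $\hbar=1$ and measure all rates in units of $\gamma$).
\begin{verbatim}
import numpy as np
import qutip as qt

# Illustrative (not fitted) parameters, in units of gamma:
Nmax, Delta, A, U, V, gamma = 6, 0.0, 1.0, 0.3, 0.1, 1.0
theta_a = theta_b = 0.0

a = qt.tensor(qt.destroy(Nmax), qt.qeye(Nmax))   # spin s = +1 mode
b = qt.tensor(qt.qeye(Nmax), qt.destroy(Nmax))   # spin s = -1 mode
na, nb = a.dag()*a, b.dag()*b

H0 = Delta*(na + nb)
Hp = A*(a.dag()*np.exp(-1j*theta_a) + a*np.exp(1j*theta_a)
        + b.dag()*np.exp(-1j*theta_b) + b*np.exp(1j*theta_b))
Hi = 0.5*U*(na*na - na) + 0.5*U*(nb*nb - nb) + V*na*nb
H = H0 + Hp + Hi

# The loss terms -(gamma/2) L[a,rho] - (gamma/2) L[b,rho] are standard
# Lindblad dissipators with collapse operators sqrt(gamma) a, sqrt(gamma) b:
c_ops = [np.sqrt(gamma)*a, np.sqrt(gamma)*b]

rho_ss = qt.steadystate(H, c_ops)        # steady-state density matrix
print(qt.expect(na + nb, rho_ss))        # mean polariton number
\end{verbatim}
Any equivalent integrator of (\ref{masterequation}) may be used in place of this sketch; in our calculations the density matrix is expanded in the Fock basis as described below.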
We note that resonant excitation techniques have been used in numerous experimental studies of polariton BECs, and is considered to be an equivalent way of obtaining a condensed polariton cloud, although it lacks the condensation step that characterizes the BEC phase transition \cite{Adiyatullin17,Takesue04,Boulier14}. The Hamiltonian $H_{0}$ defines the energy $\hbar \Delta $ of zero-momentum polaritons with respect to the pump laser. $H_{\text{pump}}$ is the Hamiltonian for the pump laser with amplitude $A$, and $\theta_{a}$,$\theta_{b}$ represent the pumping phases of modes $a$ and $b$, respectively. The Hamiltonian $H_{\text{int}}$ includes the non-linear interaction energy $ \hbar U $ between the same spins and $ \hbar V $ for different spins. The superoperator \begin{align} {\cal L}[a,\rho]& =a^{\dagger}a\rho+\rho a^{\dagger}a-2a\rho a^{\dagger}, \nonumber\\ {\cal L}[b,\rho]& =b^{\dagger}b\rho+\rho b^{\dagger}b-2b\rho b^{\dagger}, \end{align} is the Lindbladian loss for photons leaking through the cavity. According to the master equation (\ref{masterequation}), the polariton population decays with rate $\gamma$. \begin{figure} \caption{Entanglement in a split polariton condensate. (a) A single spinor polariton BEC first forms in the QWs, generating multi-particle entanglement at steady-state, represented by the wiggly lines. (b) The external potential trapping the condensate is modified such that it is spatially split into two BECs. The entanglement is transformed to a non-local form where it exists between the two split BECs. \label{becsplit} \label{becsplit} \end{figure} To solve the master equation, we decompose the density matrix in the Fock basis, and numerically evolve the master equation. The density matrix can be written as \begin{align} \rho=\sum_{klk'l'} \rho_{klk'l'}|k, l \rangle \langle k' ,l'|, \label{densmatexp} \end{align} where \begin{align} | k, l \rangle = \frac{(a^{\dagger}) ^{k} (b^{\dagger})^{l}}{\sqrt{k ! l !}} |0\rangle, \label{fockstatedef} \end{align} are the normalized Fock states that obey $\langle k,l|k',l'\rangle = \delta_{k k'} \delta_{l l'}$. \subsection{Splitting the polariton condensate} Initially the spin modes $a$ and $b$ form a single BEC with all polaritons forming a multipartite entangled state due to the non-linear interaction as illustrated in Fig. \ref{becsplit} (a). In order to obtain the sub-modes $a_{1},a_{2},b_{1},b_{2} $ of the two spins $a$ and $b$, we apply the transformation \begin{align} a \rightarrow \frac{1}{\sqrt{2}}(a_{1}+a_{2}),\nonumber\\ b \rightarrow \frac{1}{\sqrt{2}}(b_{1}+b_{2}).\nonumber\\ \label{split1} \end{align} The above splitting implies that there exists unoccupied modes undergoing the transformation \begin{align} \widetilde{a} \rightarrow \frac{1}{\sqrt{2}}(a_{1}-a_{2}),\nonumber\\ \widetilde{b} \rightarrow \frac{1}{\sqrt{2}}(b_{1}-b_{2}).\nonumber\\ \label{split2} \end{align} This transformation corresponds to a coherent splitting process similar to that shown in Fig. \ref{becsplit} (b). Alternatively, it could correspond to the virtual splitting as that shown in Fig. \ref{experimentsetup}, where the polariton condensate is split into two parts according to two spatial regions. These spatial regions have a one-to-one relation to the optical modes that emerge from the microcavity and hence may be spatially imaged according to the scheme shown in Fig. \ref{experimentsetup} (b). 
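As an elementary illustration of (\ref{split1}), not needed for the derivation below but useful to fix ideas, a single polariton prepared in mode $a$ transforms under the split as \begin{align} a^{\dagger}|0\rangle \rightarrow \frac{1}{\sqrt{2}}\left(a_{1}^{\dagger}+a_{2}^{\dagger}\right)|0\rangle , \end{align} so that after the split the polariton is found in either half with probability $1/2$, while the orthogonal combination $(a_{1}^{\dagger}-a_{2}^{\dagger})|0\rangle/\sqrt{2}$, which is the image of the unoccupied mode $\widetilde{a}$ in (\ref{split2}), remains empty.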
The above splitting operation forms either two physically separate BECs or two distinct halves of a BEC, and changes the entanglement structure, as we show in Fig.~\ref{becsplit}(b). After the split, the Fock states transform as \begin{align} |k, l \rangle \rightarrow & \frac{1}{\sqrt{k ! l !}} \left(\frac{a_{1}^{\dagger}+a_{2}^{\dagger}}{\sqrt{2}}\right)^{k} \left(\frac{b_{1}^{\dagger}+b_{2}^{\dagger}}{\sqrt{2}}\right)^{l} |0\rangle \nonumber\\ =&\frac{1}{\sqrt{2^{k+l}} \sqrt{k ! l !}} \sum_{nm} {k \choose n} {l \choose m} \nonumber\\ &\times (a_{1}^{\dagger})^{n} (a_{2}^{\dagger})^{k-n} (b_{1}^{\dagger})^{m} (b_{2}^{\dagger})^{l-m} |0\rangle \nonumber\\ =&\frac{1}{\sqrt{2^{k+l}}} \sum_{nm} \sqrt{ {k \choose n} {l \choose m} } |n,m,k-n,l-m \rangle, \label{11111} \end{align} where the normalized Fock state with four modes can be written as \begin{align} |k_1,l_1,k_2,l_2\rangle = \frac{(a_{1}^{\dagger})^{k_1} (b_{1}^{\dagger})^{l_1}(a_{2}^{\dagger})^{k_2} (b_{2}^{\dagger})^{l_2}}{\sqrt{k_1 ! l_1 ! k_2 ! l_2 ! }}|0\rangle. \end{align} Substituting the above into (\ref{densmatexp}), the density matrix of the split condensate is written in general as \begin{align} \rho^{\text{sp}} =& \sum_{\substack{kl\\k'l'}} \sum_{\substack{nm\\n'm'}} \frac{\rho_{klk'l'}} {\sqrt{2^{k+l+k'+l'}}} \sqrt{{k \choose n} {l \choose m} {k' \choose n'} {l' \choose m'}} \nonumber\\ &\times |n,m,k-n,l-m \rangle \langle n',m',k'-n',l'-m'|. \label{densmatexpnew} \end{align} The spin operators on the split BEC are defined as \begin{align} S_{j}^{x} &=a^{\dagger}_{j} b_{j}+b_{j}^{\dagger}a_{j}, \nonumber\\ S_{j}^{y} &=i(b^{\dagger}_{j} a_{j}-a^{\dagger}_{j} b_{j}), \nonumber\\ S_{j}^{z} &=a^{\dagger}_{j} a_{j}-b^{\dagger}_{j} b_{j}, \label{schwingerspin} \end{align} where $j \in \{1,2 \}$ labels the two BECs (either physical or virtual). These spin operators obey the commutation relations \begin{align} [S^{l}, S^{m}]=2i\epsilon_{lmn}S^{n}, \end{align} where $\epsilon_{lmn}$ is the Levi-Civita symbol and $ l,m,n \in \{x,y,z\} $. The number operators for the two parts can be written as \begin{align} {\cal N}_j = a_j^\dagger a_j + b^\dagger_j b_j, \label{numberoperator} \end{align} where $j \in \{1,2 \}$. \subsection{Number fixing} The exciton-polariton BEC system is an open dissipative system and does not obey conservation of total polariton number. In the context of atomic BECs, the total atom number $N$ is assumed to be fixed for a single run of the experiment. Any relation that is derived for fixed atom number (such as an entanglement criterion) is not necessarily valid if the total particle number fluctuates. In order to deal with this, we use a similar approach to Ref.~\cite{jingyan21} (Sec. \uppercase\expandafter{\romannumeral2}) to map $\rho^{\text{sp}}$ onto a fixed Hilbert space. Thus we define the density matrix in the $N$-sector as \begin{align} \rho^{\text{sp}}_N=\frac{\Pi_N \rho^{\text{sp}} \Pi_N}{p_{N}}, \label{rhospfixed} \end{align} where \begin{align} \Pi_N = &\sum_{N_{1}=0}^N \sum_{k_{1}=0}^{N_1} \sum_{k_{2}=0}^{N-N_1} |k_{1}, N_{1}-k_{1}, k_{2}, N-N_{1}-k_{2} \rangle \nonumber\\ & \times \langle k_{1}, N_{1}-k_{1}, k_{2}, N-N_{1}-k_{2} | \label{spinprojector} \end{align} is the projector on the $N$-particle subspace, and $N_{1}$ is the number of polaritons of the first BEC. The probability of the $N$-sector is defined as \begin{align} p_{N}=\text{Tr}(\Pi_N \rho^{\text{sp}} \Pi_N), \end{align} which satisfies the relation \begin{align} \sum_{N} p_{N}=1.
\end{align} Next we define the expectation values of quantum operator $ {\cal O } $ in fixed $N$-sectors \begin{align} \langle {\cal O } \rangle_N \equiv \text{Tr} ( \rho_{N} {\cal O } ) , \label{expectation} \end{align} where $\rho_N$ is the projection of $\rho$ in a fixed $N$ space and the subscript $N$ refers to the fixed subspace. Therefore, the total polariton number would be \begin{align} \langle {\cal N}_1 \rangle_N +\langle {\cal N}_2 \rangle_N = N. \end{align} The variance of operator ${\cal O}$ for $N$-sector is defined as \begin{align} \text{Var}_N ({\cal O})= \langle {\cal O}^2 \rangle_N - \langle {\cal O} \rangle_{N}^2. \end{align} The projector (\ref{spinprojector}) involves a fixed polariton number $N$. However, the total polariton number collapses to a fixed $N_1$ and $N_2$ after measurement. To define the projector on the fixed $N_1,N_2$ space $(N_2=N-N_1)$, we denote \begin{align} \Pi_{N_1,N_2} = &\sum_{k_{1}=0}^{N_1} \sum_{k_{2}=0}^{N_2} |k_{1}, N_{1}-k_{1}, k_{2}, N_2-k_{2} \rangle \nonumber\\ & \times \langle k_{1}, N_{1}-k_{1}, k_{2}, N_2-k_{2} | , \label{spinprojectorfixeN1} \end{align} which gives a fixed particle number on two halves. Thus the expectation values for the operator $\cal O$ in this space can be written as \begin{align} \langle {\cal O } \rangle_{N_1,N_2} \equiv \text{Tr} ( \rho_{N_1,N_2} {\cal O } ) , \label{expectationfixed} \end{align} we then obtain the relation of the expectation value of $\cal O$ for the two types of number fixing: \begin{align} \langle {\cal O } \rangle_N &= \text{Tr}(\rho_N {\cal O}) \nonumber\\ &= \sum_{N_1=0}^N \sum_{N'_1=0}^N \text{Tr}(\Pi_{N_1,N_2} \rho_N \Pi_{N'_1,N'_2} \cal O) \nonumber\\ &= \sum_{N_1=0}^N p_{N_1,N_2|N}\text{Tr} ( \rho_{N_1,N_2} \cal O)\nonumber\\ &= \sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O} \rangle_{N_1,N_2}, \label{representation} \end{align} where $p_{N_1,N_2|N}$ is the conditional probability satisfies $\sum_{N_1}^N p_{N_1,N_2|N} = 1$, we assume $\cal O$ is a locally particle number conserving operator, and we use the fact that $\Pi_{N}^2=\Pi_N, \Pi_{N_1,N_2}^2=\Pi_{N_1,N_2}$. The above relations will be useful when it comes to examining correlation-based entanglement detection criteria, since these are often derived in the context of fixed $N_1, N_2 $ and we wish to relate these to number fluctuating averages. \section{Numerical simulation}\label{iii} \subsection{Evaluation of expectation values} In simulating the master equation (\ref{masterequation}), a truncation is necessary, since the full Hilbert space is unbounded. Therefore we impose a cutoff $N_{\text{max}}$, which means that the number of bosons that occupy each mode is restricted to $ k,l \in [0,N_\text{max} ] $. Any states with $ k,l>N_\text{max}$ are set to have zero amplitude. We note that the calculation of the effective spin still involves consideration of the truncation space within its context \cite{jingyan21}. We then use (\ref{spinprojector}) to project the states on fixed total number $N$. We note that physically such a projection is automatically done when any measurement is performed. In any entanglement detection procedure, one requires detection of correlation between the two halves of the condensate. This involves detecting polaritons on the two sides of the condensate, and implicitly this involves a number fixing procedure. The density matrix (\ref{densmatexpnew}) is defined in a large Hilbert space with four spin modes $a_1,b_1,a_2,b_2$. 
Due to the numerical overhead with directly calculating the split four mode case, we calculate the expectation value of spin quantities $\cal O$ based on the original space before the splitting transformation, which contains only two modes. For example, the spin operators under this transformation will be written as \begin{align} S^x_j = a^{\dagger}_j b_j + b^{\dagger}_j a_j &\rightarrow \frac{1}{2} ( a^{\dagger} b + a^{\dagger} \widetilde b + {\widetilde a}^{\dagger} b + {\widetilde a}^{\dagger} \widetilde b) \nonumber \\ &+ \frac{1}{2} (b^{\dagger} a + b^{\dagger} \widetilde a + {\widetilde b}^{\dagger} a+ {\widetilde b}^{\dagger} \widetilde a ), \nonumber \\ S^y_j = i(b^{\dagger}_j a_j - a^{\dagger}_j b_j) &\rightarrow \frac{i}{2} ( b^{\dagger} a + b^{\dagger} \widetilde a + {\widetilde b}^{\dagger} a + {\widetilde b}^{\dagger} \widetilde a) \nonumber \\ &- \frac{i}{2} (a^{\dagger} b + a^{\dagger} \widetilde b + {\widetilde a}^{\dagger} b+ {\widetilde a}^{\dagger} \widetilde b ), \nonumber \\ S^z_j = a^{\dagger}_j a_j - b^{\dagger}_j b_j &\rightarrow \frac{1}{2} ( a^{\dagger} a + a^{\dagger} \widetilde a + {\widetilde a}^{\dagger} a + {\widetilde a}^{\dagger} \widetilde a) \nonumber \\ &- \frac{1}{2} (b^{\dagger} b + b^{\dagger} \widetilde b + {\widetilde b}^{\dagger} b+ {\widetilde b}^{\dagger} \widetilde b ), \nonumber \\ \end{align} where $j \in \{1,2 \}$, and we applied the inverse unitary transformation of the splitting procedure \begin{align} a_1 \rightarrow \frac{1}{\sqrt{2}}(a+\widetilde a), \nonumber\\ b_1 \rightarrow \frac{1}{\sqrt{2}}(b+\widetilde b), \nonumber\\ a_2 \rightarrow \frac{1}{\sqrt{2}}(a-\widetilde a), \nonumber\\b_2 \rightarrow \frac{1}{\sqrt{2}}(b-\widetilde b). \label{inversetran} \end{align} The transformed spin operators involve both the original modes $ a,b $ as well as the unoccupied modes $ \widetilde{a}, \widetilde{b} $. Since we know that prior to the splitting operations $\widetilde a,\widetilde b$ annihilation operators are unoccupied, expectation values involving the operators $ \widetilde{a}, \widetilde{b} $ will give zero. For example, expectation values of the local modes give \begin{align} \langle S^x_j \rangle_N = \frac{1}{2} \langle a^{\dagger} b + b^{\dagger} a \rangle_N, \nonumber\\ \langle S^y_j \rangle_N = \frac{i}{2} \langle b^{\dagger} a - a^{\dagger} b \rangle_N, \nonumber\\ \langle S^z_j \rangle_N = \frac{1}{2} \langle a^{\dagger} a - b^{\dagger} b \rangle_N. \nonumber\\ \end{align} where $j \in \{1,2 \}$. For second order spin correlations, we have \begin{align} &\langle S_{1}^{x}S_{2}^{x} \rangle_N \nonumber\\ &= \frac{1}{4} \langle a^{\dagger} b a^{\dagger} b + a^{\dagger} b b^{\dagger} a - a^{\dagger} a +b^{\dagger} a a^{\dagger} b -b^{\dagger} b + b^{\dagger} a b^{\dagger} a \rangle_N, \nonumber\\ &\langle S_{1}^{y}S_{2}^{y} \rangle_N \nonumber\\ &= \frac{1}{4} \langle a^{\dagger} b b^{\dagger} a - a^{\dagger} a - a^{\dagger} b a^{\dagger} b - b^{\dagger} a b^{\dagger} a + b^{\dagger} a a^{\dagger} b - b^{\dagger} b \rangle_N, \nonumber\\ &\langle S_{1}^{z}S_{2}^{z} \rangle_N \nonumber\\ &= \frac{1}{4} \langle a^{\dagger} a a^{\dagger} a - a^{\dagger} a -a^{\dagger} a b^{\dagger} b -b^{\dagger} b a^{\dagger} a +b^{\dagger} b b^{\dagger} b - b^{\dagger} b \rangle_N, \nonumber\\ \end{align} where we used the commutation relation $[\widetilde a,\widetilde {a}^{\dagger}] = [\widetilde b,\widetilde {b}^{\dagger}]=1$. 
The elements of the density matrix (\ref{densmatexpnew}) can also be obtained from the original space via \begin{align} & \langle k_1,l_1,k_2,l_2| \rho^{\text{sp}} |k'_1,l'_1,k'_2,l'_2\rangle \nonumber\\ =& \frac{1}{\sqrt{k_1 ! l_1 ! k_2 ! l_2 ! k'_1 ! l'_1 ! k'_2 ! l'_2 ! }} \nonumber\\ &\times \langle 0|a_{1}^{k_1} b_{1}^{l_1} a_{2}^{k_2} b_{2}^{l_2} \rho^{\text{sp}} (a_{1}^{\dagger})^{k'_1} (b_{1}^{\dagger})^{l'_1}(a_{2}^{\dagger})^{k'_2} (b_{2}^{\dagger})^{l'_2} |0 \rangle \nonumber\\ \rightarrow& \frac{1}{\sqrt{k_1 ! l_1 ! k_2 ! l_2 ! k'_1 ! l'_1 ! k'_2 ! l'_2 ! }} \times \frac{1}{\sqrt{2}^{k_1+l_1+k_2+l_2+k'_1+l'_1+k'_2+l'_2}}\nonumber\\ &\times \langle 0| a^{k_1+k_2} b^{l_1+l_2} \rho (a^{\dagger})^{k'_1+k'_2} (b^{\dagger})^{l'_1+l'_2} |0 \rangle, \nonumber \end{align} where again we used the inverse unitary transformation (\ref{inversetran}). In the non-interacting limit $(U/\gamma=V/\gamma=0)$, each $N$-sector corresponds to a spin coherent state $|1/\sqrt{2},1/\sqrt{2}\rangle \rangle_1 \otimes |1/\sqrt{2},1/\sqrt{2}\rangle \rangle_2$ in the pump regime where $\theta_a = \theta_b = 0$ and $\Delta = 0$ after spin mapping \cite{jingyan21}, where we define the spin coherent state \begin{align} | \alpha, \beta \rangle \rangle & = \frac{1}{\sqrt{N!}} ( \alpha a^\dagger + \beta b^\dagger)^N | 0 \rangle \nonumber \\ & = \sum_k \sqrt{N \choose k} \alpha^k \beta^{N-k} | k, N-k \rangle. \label{spincohdef} \end{align} For the initial state that is polarized in the $S^{x}$ direction, we find \begin{align} \langle S_{1}^{x} \rangle_N =\frac{1}{2^N} \sum_{N_1} {N \choose N_{1}} N_1 = \frac{N}{2}, \end{align} \begin{align} \langle S_{2}^{x} \rangle_N =\frac{1}{2^N} \sum_{N_2} {N \choose N_{2}} N_2 = \frac{N}{2}. \end{align} Hence, for each chosen $N$-sector, the average number of polaritons in each part is $N/2$, which indicates the equivalence of the calculation in the large space and that based on the split procedure. \subsection{Effective entangling Hamiltonian} To show the effect of one-axis twisting in the split procedure, we project the total spin operator $S^z$ onto the fixed $N_1,N_2$ space by using (\ref{spinprojectorfixeN1}), \begin{align} &\Pi_{N_1, N_2} S^z \Pi_{N_1, N_2} = \Pi_{N_1, N_2} a^{\dagger}a \Pi_{N_1, N_2}- \Pi_{N_1, N_2} b^{\dagger}b \Pi_{N_1, N_2} \nonumber\\ &=\frac{1}{2}\Pi_{N_1, N_2}(a_{1}^{\dagger}a_{1}-b_{1}^{\dagger}b_{1} + a_{2}^{\dagger}a_{2}-b_{2}^{\dagger}b_{2})\Pi_{N_1, N_2} \nonumber\\ &+ \frac{1}{2}\Pi_{N_1, N_2}(a_{1}^{\dagger}a_{2}+ a_{2}^{\dagger}a_{1} - b_{1}^{\dagger}b_{2} -b_{2}^{\dagger}b_{1})\Pi_{N_1, N_2} \nonumber\\ &\rightarrow \Pi_{N_1, N_2}(a_{1}^{\dagger}a_{1}-b_{1}^{\dagger}b_{1} + a_{2}^{\dagger}a_{2}-b_{2}^{\dagger}b_{2})\Pi_{N_1, N_2}\nonumber\\ &=\Pi_{N_1, N_2}(S_1^{z}+S_2^{z})\Pi_{N_1, N_2}, \end{align} where the cross terms vanish when projected onto fixed $N_1,N_2$. Using the above result we then obtain \begin{align} &\Pi_{N_1, N_2} (S^z)^2 \Pi_{N_1, N_2} = (\Pi_{N_1, N_2} S^z \Pi_{N_1, N_2})^2 \nonumber\\ &\rightarrow \Pi_{N_1, N_2}(S_1^{z}+S_2^{z})^2\Pi_{N_1, N_2} \nonumber\\ &= \Pi_{N_1, N_2}((S_{1}^{z})^2+2S_{1}^{z}S_{2}^{z}+(S_{2}^{z})^2)\Pi_{N_1, N_2}, \label{effHam} \end{align} where we applied the relations $\Pi_{N_1,N_2}^{2}=\Pi_{N_1,N_2}$ and $[\Pi_{N_1,N_2},S^z]=0$.
Thus, in a fixed $N_1,N_2$ subspace, the squeezing-generating operator $(S^{z})^2$ of the single polariton BEC corresponds in the two spatial BECs case to $(S^{z})^2 \rightarrow (S_{1}^{z})^2+2S_{1}^{z}S_{2}^{z}+(S_{2}^{z})^2$. This shows that we expect squeezing on each BEC individually, due to the terms $ (S^z_1)^2 $ and $ (S^z_2)^2 $, while the term $2S_1^z S_2^z$ generates entanglement between the two BECs. This is similar to the one-axis two-spin (1A2S) squeezing Hamiltonian, which produces entanglement with a fractal time dependence \cite{Byrnes13,Kurkjian13}. \begin{figure} \caption{The logarithmic negativity (\ref{logcriteria}) as a function of the pumping rate $A/\gamma$ and the interaction parameter $U/\gamma$. \label{log}} \end{figure} \section{Entanglement detection}\label{iv} \subsection{Logarithmic negativity} Logarithmic negativity is an entanglement monotone that is used to quantify the bipartite entanglement in mixed states \cite{Plenio05,Vidal02}. It is defined as \begin{align} E(\rho^{\text{sp}}_N)=\log_{2}||(\rho^{\text{sp}}_N)^{T_{2}}|| = \log_{2}\sum_{i}|\lambda_{i}|, \label{logcriteria} \end{align} where $(\rho^{\text{sp}}_N)^{T_{2}}$ is the partial transpose of $\rho^{\text{sp}}_N$ with respect to the second polariton BEC, $||{\cal X}||$ is the Schatten-1 norm of ${\cal X}$, and $|\lambda_{i}|$ are the absolute values of the eigenvalues of $(\rho^{\text{sp}}_N)^{T_{2}}$. The range of $E$ is from $0$ to the maximum value $E_{\text{max}}=\log_2(N/2+1)$, attained in the maximally entangled case with $N_1=N_2=N/2$. We note that this result only involves the $N$-sector which has the maximum $p_{N}$ for the particular parameter set that we choose, i.e.\ the most likely measured $N$-sector. In Fig.~\ref{log} we examine the logarithmic negativity, showing (\ref{logcriteria}) as a function of the pumping rate $A/\gamma$ and the squeezing interaction parameter $U/\gamma$. We find that $E=0$ for a spin coherent state $(U/\gamma=0)$ and $E>0$ when both $U/\gamma, A/\gamma>0 $, as expected. The growth of $E$ with $A/\gamma$ and $U/\gamma$ is clearly seen. We note that larger $ N $ requires a shorter time to obtain the same squeezing level, as expected from the optimal squeezing time $ \propto 1/N^{2/3}$ for one-axis squeezing \cite{Kitagawa93,byrnes2020quantum}. Larger pumping and interaction correspond to a higher level of squeezing, giving rise to more entanglement in our system. We thus expect that entanglement should be present in the current polariton system at steady state in the large pumping rate and high-Q cavity regime \cite{jingyan21}. Concretely, this corresponds to parameters $1/\gamma>30$~ps and $U/\gamma>0.3$. \begin{figure} \caption{Entanglement criteria for the split polariton BEC system at steady state: criteria (\ref{GMVT})--(\ref{Holfmann}) as a function of the $N$-sector. \label{criteria}} \end{figure} \subsection{Correlation-based criteria} While a non-zero logarithmic negativity gives an unambiguous signal of entanglement, it may be difficult to detect in experiment due to the need for full density matrix tomography. Thus experimental limitations may require the use of alternative measures that are better suited to the available measurements. Several correlation-based entanglement detectors are available, based on expectation values and variances of the collective spin operators.
The first one we consider is the Giovannetti-Mancini-Vitali-Tombesi (GMVT) criterion \cite{Giovannetti03}, which states that for any separable state \begin{align} \frac{\sqrt{\text{Var}_N (g_{y}S_{1}^y - S_{2}^y)\text{Var}_N (g_{z}S_{1}^z + S_{2}^z)}}{|g_{y}g_{z}|(|\langle S_{1}^x \rangle_N | +|\langle S_{2}^x \rangle_N |)} \ge 1, \label{GMVT} \end{align} where the $g_{y},g_{z}$ are free parameters to minimize the left hand side. The inequality (\ref{GMVT}) is true for all separable states. Hence a violation of the inequality indicates that the state must be entangled. In our case we choose $g_{y}=g_{z}=1$. The second criterion is the Duan-Giedke-Cirac-Zoller (DGCZ) criterion \cite{Duan00} valid for any separable state \begin{align} \frac{\text{Var}_N (S_{1}^y - S_{2}^y) + \text{Var}_N (S_{1}^z + S_{2}^z)}{2(|\langle S_{1}^x \rangle_N |+|\langle S_{2}^x \rangle_N |)} \ge 1. \label{DGCZ} \end{align} The third criterion is the Hofmann-Takeuchi (HT) criterion \cite{Hofmann03} valid for any separable state \begin{align} \frac{\text{Var}_N (S_{1}^x + S_{2}^x) + \text{Var}_N (S_{1}^y - S_{2}^y) + \text{Var}_N (S_{1}^z + S_{2}^z)}{2(\langle {{\cal N}_1} \rangle_N+ \langle {{\cal N}_2}\rangle_N)} \ge 1, \label{Holfmann} \end{align} where $N$ is the total polariton number of $N$-sector. The above inequalities have been converted from their fixed $N_1, N_2 $ relations to a fixed $ N $ through an averaging procedure. We note that the average variance of the operator ${\cal O}$ in the fixed $N_1, N_2$ space is either equal to or less than the variance defined using $N$-sectors, denoted by $\text{Var}_N ({\cal O})$ (see Appendix \ref{appendix}). Furthermore, the average expectation values are equal to $\langle {\cal O} \rangle_N$ (see Appendix \ref{appendixb}). Therefore, the violation of the inequalities (\ref{GMVT})-(\ref{Holfmann}) indicates the presence of entanglement within each $N$-sector. \begin{figure} \caption{Entanglement criteria (\ref{GMVT} \label{criteria2} \end{figure} Fig. \ref{criteria}(a)-(c) shows the three criteria as a function of the $N$-sectors respectively. The first thing that we notice is the similar behavior of GMVT and DGCZ criteria. The curves show that the former detects entanglement in a wider range than the latter. In Fig. \ref{criteria}(a), we see that for experimentally reasonable parameter choices, for larger $A/\gamma$ and small $\Delta/\gamma$, the entanglement criteria decrease monotonically with $N$. This corresponds to more squeezing for larger $N$, which was observed from the $Q$ functions and squeezing parameters in Ref. \cite{jingyan21}. Comparing the different parameters $U/\gamma$, we find that we obtain a higher entanglement level in a high Q-cavity, since the larger $U/\gamma$ represents more squeezing. Further, we show that the small detuning can enhance entanglement to a large extent. Depending upon the $ N $-sector examined, in some cases increasing the pump $ A/\gamma $ does not necessarily lead to an enhancement of the entanglement. We find a threshold in the $N$-sector, where below the threshold a larger pump rate $A/\gamma$ tends to increase entanglement, while above the threshold it decreases. For example, under HT criterion in Fig. \ref{criteria}(c), the threshold is $ N \approx 6 $. Fig. \ref{criteria2}(a) shows a ``staircase" dependence of GMVT, DGCZ and HT criteria. The staircase dependence is observed because we consider the most likely $ N $-sector to be measured, and with increasing $ A/\gamma $ or $ U/\gamma $ this changes. 
For example, at $A/\gamma\sim 1.25$ the entanglement level suddenly drops due to the change of the most probable $N$-sector, after which it decreases only slightly. What this shows is that while increasing $ A/\gamma $ can slightly degrade the entanglement within a fixed $ N $-sector, a larger pump can also change the most probable $ N $-sector, which can lead to an improvement of the entanglement. This is why in Fig. \ref{log} we generally observe an increase in entanglement with larger pumping. In experiments a larger pump rate is typically easy to achieve, so the most accessible regime is that of higher $A/\gamma$. In Fig. \ref{criteria2}(b) we show these three criteria versus the interaction parameter $U/\gamma$. We see that below the threshold $U/\gamma\sim 0.09$ the entanglement level improves monotonically, but it then saturates and again shows a ``staircase'' dependence due to the changes of $ N $-sector. Therefore, a moderate interaction $ U/\gamma $ may already be sufficient to obtain an optimized level of entanglement. \section{Conclusion}\label{v} In this work we theoretically proposed a method of generating spatially separated entanglement at steady state in a spinor exciton-polariton BEC and gave two ways of realizing the experimental setup. In the first approach, the polaritons would be physically split in a coherent fashion, for example by raising an external potential barrier. The second approach involves virtually splitting the polariton condensate into two halves, by examining a spatially resolved near-field image of the polariton BEC. In both of these approaches, equivalent results are obtained in the ideal case. Technically, the virtual split is much easier to achieve than the physical split, since the extra manipulations required for a physical split may introduce additional sources of decoherence. However, the physical split is more in line with the notion of two separated entangled BECs, and is yet to be realized in any physical system. The entanglement in the initially formed condensate can be attributed to the one-axis spin squeezing interaction between the polariton modes. This type of interaction leads to entanglement generation between all polaritons in the system. The formation of entanglement between the two halves can be attributed to the cross term $2S_1^z S_2^z$ in (\ref{effHam}). By examining and comparing the logarithmic negativity, GMVT, DGCZ, and HT criteria in various regimes, we show that such entanglement can be detected between the two BECs. The entanglement can be improved by increasing the pump rate $A/\gamma$. We also find that a small detuning $\Delta/\gamma$ can enhance entanglement. Further, one may obtain an optimal entanglement level by adjusting the interaction parameter $U/\gamma$. To date, there has not been any report of entanglement generation within a polariton BEC. Several experiments with atomic BECs have demonstrated entanglement at the single-BEC level, but two separate BECs have never been entangled. Due to the controllability of polariton condensates, there is an opportunity to experimentally realize some of these milestones in the near future.
\begin{acknowledgments} This work is supported by the National Natural Science Foundation of China (62071301); NYU-ECNU Institute of Physics at NYU Shanghai; Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning; the Joint Physics Research Institute Challenge Grant; the Science and Technology Commission of Shanghai Municipality (19XD1423000,22ZR1444600); the NYU Shanghai Boost Fund; the China Foreign Experts Program (G2021013002L); the NYU Shanghai Major-Grants Seed Fund; Tamkeen under the NYU Abu Dhabi Research Institute grant CG008; and the SMEC Scientific Research Innovation Project (2023ZKZD55). \end{acknowledgments} \appendix \section{Derivation of the variance average in the fixed $N_1,N_2$ space } \label{appendix} The definition of the variance average of a quantum operator ${\cal O}$ is \begin{align} &\sum_{N_1=0}^N p_{N_1,N_2|N} \text{Var}_{N_1,N_2} ({\cal O})\nonumber\\ =& \sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O}^2 \rangle_{N_1,N_2} - \sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O} \rangle_{N_1,N_2}^2. \end{align} Using Cauchy-Schwarz inequality \cite{Duan00}, we find \begin{align} &\sum_{N_1=0}^N p_{N_1,N_2|N} \text{Var}_{N_1,N_2} ({\cal O})\nonumber\\ \leq& \sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O}^2 \rangle_{N_1,N_2} - \left(\sum_{N_1=0}^N p_{N_1,N_2|N} |\langle {\cal O} \rangle_{N_1,N_2}|\right)^2 \nonumber\\ \leq& \sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O}^2 \rangle_{N_1,N_2} - \left(\sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O} \rangle_{N_1,N_2}\right)^2, \end{align} by substituting $\langle {\cal O}^2 \rangle_{N_1,N_2}$,$\langle {\cal O} \rangle_{N_1,N_2}^2$ with $\langle {\cal O}^2 \rangle_N$,$\langle {\cal O} \rangle_N^2$ (\ref{representation}), we have \begin{align} \sum_{N_1=0}^N p_{N_1,N_2|N} \text{Var}_{N_1,N_2} ({\cal O}) \leq \text{Var}_N ({\cal O}). \end{align} Replacing $\cal O$ with $S_1^x+S_2^x$,$S_1^y-S_2^y$,$S_1^z+S_2^z$ respectively, we obtain the relations in (\ref{GMVT})-(\ref{Holfmann}). \section{Derivation of the average of expectation values in the fixed $N_1,N_2$ space} \label{appendixb} The definition of the average of expectation values of a quantum operator ${\cal O}$ is \begin{align} \sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O} \rangle_{N_1,N_2}, \end{align} by using the relations (\ref{representation}) we have \begin{align} \sum_{N_1=0}^N p_{N_1,N_2|N} \langle {\cal O} \rangle_{N_1,N_2} = \langle {\cal O} \rangle_N. \end{align} Replacing $\cal O$ with $S_1^x$,$S_2^x$,${{\cal N}_1}$,${{\cal N}_2}$ respectively, we obtain the relations in (\ref{GMVT})-(\ref{Holfmann}). \end{document}
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{definition}{Definition} \newtheorem{quest}[theorem]{Question} \newtheorem{remark}[theorem]{Remark} \newtheorem{conj}[theorem]{Conjecture} \numberwithin{equation}{section} \numberwithin{theorem}{section} \def\D{{\mathbb D}} \def\T{{\mathbb T}} \def\F{{\mathbb F}} \def\K{{\mathbb K}} \def\N{{\mathbb N}} \def\Z{{\mathbb Z}} \def\Q{{\mathbb Q}} \def\R{{\mathbb R}} \def\C{{\mathbb C}} \title[Level curves of rational functions]{Level curves of rational functions and unimodular points on rational curves} \author[F. Pakovich]{Fedor Pakovich} \address{Department of Mathematics, Ben Gurion University of the Negev, P.O.B. 653, Beer Sheva, 8410501, Israel} \email{[email protected]} \author[I. E. Shparlinski]{Igor E. Shparlinski} \address{Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia} \email{[email protected]} \begin{abstract} We obtain an improvement and broad generalisation of a result of N.~Ailon and Z.~Rudnick (2004) on common zeros of shifted powers of polynomials. Our approach is based on reducing this question to a more general question of counting intersections of level curves of complex functions. We treat this question via classical tools of complex analysis and algebraic geometry. \end{abstract} \keywords{Unimodular points, Ailon and Rudnick theorem, Blaschke product} \subjclass[2010]{11D61, 12D10, 30C15, 30J10} \maketitle \section{Introduction} Recall that Ailon and Rudnick~\cite[Theorem~1]{AR} have shown that for any multiplicatively independent polynomials $P_1(z)$ and $P_2(z)$ with complex coefficients there exists a polynomial $F(z) \in \C[z]$ such that for any positive integer $k$ the greatest common divisor of $ P_1(z)^k-1$ and $P_2(z)^k-1$ divides $F$, that is $$ \gcd\left(P_1(z)^k-1, P_2(z)^k-1\right) \mid F(z), \quad k=1,2, \ldots. $$ Since it is easy to see that for a non-trivial polynomial $P(z) \in \C[z]$ the multiplicity of any factor of $P(z)^k-1$ does not exceed $\deg P$, the theorem of Ailon and Rudnick is equivalent to the following statement: if $P_1$ and $P_2$ are complex polynomials, then \begin{equation} \label{eq:Zero AR} \#\bigcup_{k=1}^\infty \{z \in \C~:~P_1(z)^k = P_2(z)^k =1\} \le C(P_1,P_2), \end{equation} for some constant $C(P_1,P_2)$ that depends only on $P_1$ and $P_2$, unless for some non-zero integers $m_1$ and $m_2$ we have \begin{equation} \label{eq:P} P_1^{m_1}(z)P_2^{m_2}(z)=1 \end{equation} identically. Different versions and generalisations of the Ailon-Rudnick result have been studied in many recent papers (see, for example,~\cite{GhHsTu2,HsTu,Ost,PaWa} and the references therein).
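To fix ideas, consider the simple example $P_1(z)=z$ and $P_2(z)=1-z$, which we include only as an illustration. A common zero of $P_1(z)^k-1$ and $P_2(z)^k-1$ must satisfy $|z|=|1-z|=1$, which forces $z=e^{\pm i\pi/3}$; both points indeed occur for $k=6$, since $\left(e^{\pm i\pi/3}\right)^6=1$ and $1-e^{\pm i\pi/3}=e^{\mp i\pi/3}$. Hence the set on the left-hand side of~\eqref{eq:Zero AR} consists of exactly two points, and one may take $F(z)=z^2-z+1$. Note that these two points are precisely the intersections of the level curves $|z|=1$ and $|1-z|=1$, which is the point of view developed below.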
The method of Ailon and Rudnick~\cite{AR} relies on a result conjectured by Lang and proved by Ihara, Serre and Tate, which states that the intersection of an irreducible curve ${\mathcal C}$ in $\C^*\times \C^*$ with the roots of unity $\mu_{\infty}\times \mu_{\infty}$ is finite, unless ${\mathcal C}$ is of the form $X^nY^m - \eta= 0$ or $X^m - \eta Y^n = 0$, where $\eta\in \mu_{\infty}$, that is unless ${\mathcal C}$ is a translate by a torsion point of an algebraic subgroup of $\C^*\times \C^*$ (see \cite{Lang}, \cite{Lang2}, and also \cite{BeSm}). Corvaja, Masser, and Zannier~\cite{CMZ} ask about a possible extension of the Lang statement~\cite{Lang}, where instead of the intersection of ${\mathcal C}$ with $\mu_{\infty}\times \mu_{\infty}$ the intersection with $S^1\times S^1$ is considered (here $S^1$ is treated as the topological closure of torsion points). In particular, they proved that the system \begin{equation} \label{eq:Z} \left| z \right | =\left| P(z)\right | =1, \end{equation} where $P(z)$ is a polynomial, has finitely many solutions, unless $P(z)$ is a monomial. They also remarked that if $P(z)$ is allowed to be a rational function, then the system~\eqref{eq:Z} might have infinitely many solutions for non-monomial $P(z)$. In this paper we consider the system of equations for the level curves \begin{equation} \label{eq:Unimod} \left| P_1(z)\right | =\left| P_2(z)\right | =1, \end{equation} where $P_1(z)$ and $P_2(z)$ are arbitrary rational functions, generalising the systems~\eqref{eq:Zero AR} and~\eqref{eq:Z}. Using classical tools of complex analysis and algebraic geometry, we describe $P_1$ and $P_2$ for which this system has infinitely many solutions and provide bounds for the number of solutions in the other cases. Thus, our results can be considered as extensions of the result of Ailon and Rudnick~\cite{AR} as well as of the Lang statement~\cite{Lang} in the particular case of curves of genus zero. \section{Results} Recall that a {\it finite Blaschke product} is a rational function $B(z) \in \C(z)$ of the form $$B(z)=\zeta\prod_{i=1}^n\left(\frac{z-a_i}{1-\bar{a_i}z}\right)^{m_i}, $$ where $a_i$ are complex numbers in the open unit disc $$\D = \{z \in \C~:~|z| <1\},$$ the exponents $m_i$, $i=1, \ldots, n$, are positive integers, and $\left| \zeta\right |=1$. A rational function $Q(z)$ of the form $Q(z) = B_1(z)/B_2(z)$, where $B_1$ and $B_2$ are finite Blaschke products, is called a {\it quotient of finite Blaschke products}. In the above notation, our first result is the following. \begin{theorem} \label{thm:LC-GenZero} Let ${\mathcal C}: F(x,y)=0$, where $F(x,y) \in \C[x,y]$, be an irreducible algebraic curve of genus zero and of degree $d= \deg F$. Then ${\mathcal C}$ has at most $d^2$ unimodular points, unless it can be parametrised by some quotients of finite Blaschke products $x = Q_1(z)$ and $y = Q_2(z)$. \end{theorem} Our second result is the following generalisation of the bound~\eqref{eq:Zero AR}. \begin{theorem} \label{thm:cor} Let $P_1(z)$ and $P_2(z)$ be complex rational functions of degrees $n_1$ and $n_2$.
Then \begin{equation} \label{eq:Zero PS} \begin{split} \# \{z \in \C~:~\left| P_1(z)\right | =\left| P_2(z)\right | =1\} \le (n_1+n_2)^2, \end{split} \end{equation} unless \begin{equation} \label{eq:PBW} P_1=B_1\circ W \qquad \mbox{and} \qquad P_2=B_2\circ W \end{equation} for some quotients of finite Blaschke products $B_1$ and $B_2$ and a rational function $W$. \end{theorem} In order to see that Theorem~\ref{thm:cor} implies~\eqref{eq:Zero AR} it is enough to observe that if a quotient of finite Blaschke products is a polynomial, then this polynomial is necessarily a power. Thus,~\eqref{eq:PBW} reduces to $$P_1=W^{m_1} \qquad \mbox{and} \qquad P_2=W^{m_2},$$ implying~\eqref{eq:P}. Notice that since any quotient of finite Blaschke products maps the unit circle $$ \T = \{z \in \C~:~|z| =1\} $$ to itself, if $P_1$ and $P_2$ satisfy~\eqref{eq:PBW}, then the level curves $\left| P_1(z)\right | =1$ and $\left| P_2(z)\right | =1$ have a common component $W^{-1}\{\T\}$, so a bound like~\eqref{eq:Zero PS}, or any other finiteness result, cannot exist. In particular, this happens if $P_1$ is a unimodular constant and $P_2$ is an arbitrary rational function (in this case~\eqref{eq:PBW} holds for $B_1=P_1$, $B_2=z$, and $W=P_2$). \section{Proofs} \label{sec:curve} Since the inverse Cayley transform $$z \mapsto T(z) = i\frac{1+z}{1-z}$$ maps $\D$ to the upper half-plane and the unit circle $\T$ to the extended real line, a rational function $Q$ is a quotient of finite Blaschke products if and only if the rational function $$R=T\circ Q\circ T^{-1}$$ maps $\R\cup \infty$ to $\R\cup \infty$. In turn, the last condition is equivalent to the condition that $R$ has real coefficients (since $R(z)$ and $\overline{R}(z)$ coincide for infinitely many values of $z$). Thus, Theorem~\ref{thm:LC-GenZero} is equivalent to the following statement. \begin{theorem} \label{lem:RealPoint Curve} If an irreducible algebraic curve ${\mathcal C}: F(x,y)=0$ of genus zero and degree $d$ has more than $d^2$ real points, then ${\mathcal C}$ can be para\-met\-rised by rational functions with real coefficients. \end{theorem} \begin{proof} Observe that real points of ${\mathcal C}$ belong to the intersection of the curve ${\mathcal C}$ and the curve $\overline{{\mathcal C}}:\overline{F}(x,y)=0$. Therefore, it follows from the B\'ezout theorem that whenever ${\mathcal C}$ has more than $d^2$ real points there exists $c\in \C$ such that $\overline{F}=c F$. Such $c$ must satisfy $c\overline{c}=1$, implying that we can find a complex number $\lambda$ such that $\lambda^2=c$ and $\lambda\overline{\lambda} =1$. Since $$\overline{\lambda F}=\overline{\lambda}\lambda^2 F=\lambda F,$$ the polynomial $\lambda F$ has real coefficients, and hence ${\mathcal C}$ can be defined over $\R$. Since the maximal number of singular points of a plane curve of degree $d$ does not exceed $$\frac{(d-1)(d-2)}{2}$$ (see, for example,~\cite[Page~49]{fi}) and ${\mathcal C}$ has more than $d^2$ real points, ${\mathcal C}$ has a non-singular real point. Finally, an algebraic curve ${\mathcal C}$ of genus zero defined over $\R$ admits a parametrisation by rational functions defined over $\R$ whenever ${\mathcal C}$ has at least one non-singular $\R$-point (see, for example,~\cite[Theorem~7.6]{swd}).
\end{proof} In order to prove Theorem~\ref{thm:cor}, recall that if a parametrisation $x=P_1(z)$, $y=P_2(z)$ of an algebraic curve $C$ of genus zero is proper, that is, if $$ \C(z)=\C(P_1(z),P_2(z)), $$ then $$\deg P_1=\deg_yF \qquad \mbox{and} \qquad \deg P_2=\deg_x F$$ (see, for example,~\cite[Theorem~4.21]{swd}). Let now $P_1$ and $P_2$ be rational functions of degrees $n_1$ and $n_2$. Then the L\"uroth theorem implies that there exist a rational function $W$ and rational functions $Q_1$ and $Q_2$ such that the equalities~\eqref{eq:PBW} hold, and $$x=Q_1(z), \qquad y=Q_2(z),$$ is a proper parametrisation of an algebraic curve $C$ of degree at most $n_1+n_2$. Therefore, if~\eqref{eq:Zero PS} does not hold, then $Q_1$ and $Q_2$ are quotients of finite Blaschke products by Theorem~\ref{thm:LC-GenZero}. \begin{remark}{\rm We observe that the above argument provides a simple geometric criterion for a curve ${\mathcal C}:G(x,y)=0$ to have infinitely many unimodular points. Namely, considering instead of the curve ${\mathcal C}$ a curve $\widehat{{\mathcal C}}: \widehat G(x,y)=0$, where $$\widehat G(x,y)= G\left(T(x),T(y)\right),$$ we reduce the question to the question about real points of $\widehat{\mathcal C}$. On the other hand, it is easy to see that an algebraic curve has infinitely many real points if and only if it is defined over $\R$ and has at least one simple $\R$-point. Indeed, the necessity has been proved above. In the other direction, if a curve defined over $\R$ has a simple $\R$-point, then the implicit function theorem implies that it has infinitely many $\R$-points. } \end{remark} \begin{thebibliography}{9999} \bibitem{AR} N. Ailon and Z. Rudnick, \textit{Torsion points on curves and common divisors of $a^k-1$ and $b^k-1$}, Acta Arith., {\bf 113} (2004), 31--38. \bibitem{BeSm} F. Beukers and C. J. Smyth, \textit{Cyclotomic points on curves}, Number theory for the millennium (Urbana, Illinois, 2000), I, A K Peters, 2002, 67--85. \bibitem{CMZ} P. Corvaja, D. Masser and U. Zannier, \textit{Sharpening `Manin--Mumford' for certain algebraic groups of dimension 2}, Enseign. Math., {\bf 59} (2013), 1--45. \bibitem{fi} G. Fischer, \textit{Plane algebraic curves}, Amer. Math. Soc., Providence, RI, 2001. \bibitem{GhHsTu2} D. Ghioca, L.-C. Hsia and T. J. Tucker, \textit{On a variant of the Ailon-Rudnick theorem in finite characteristic}, New York Journal of Math., {\bf 23} (2017), 213--225. \bibitem{HsTu} L.-C. Hsia and T. J. Tucker, \textit{Greatest common divisors of iterates of polynomials}, Algebra and Number Theory, {\bf 11} (2017), 1437--1459. \bibitem{Lang} S. Lang, \textit{Division points on curves}, Ann. Mat. Pura Appl., {\bf 70} (1965), 229--234. \bibitem{Lang2} S. Lang, \textit{Fundamentals of Diophantine geometry}, Springer-Verlag, New York, 1983. \bibitem{Ost} A. Ostafe, \textit{On some extensions of the Ailon-Rudnick theorem}, Monatsh. Math., {\bf 181} (2016), 451--471. \bibitem{PaWa} H. Pasten and J. T.-Y. Wang, \textit{GCD bounds for analytic functions}, Intern. Math. Res. Not., {\bf 2017} (2017), 47--95. \bibitem{swd} J. Sendra, F. Winkler and S. P\'erez-D\'iaz, \textit{Rational algebraic curves. A computer algebra approach}, Algorithms and Computation in Mathematics, vol. 22, Springer, Berlin, 2008. \end{thebibliography} \end{document}
\begin{document} \title{Binary domain generalization for sparsifying binary neural networks\thanks{FG and MAZ are supported by the French government, through the 3IA Côte d’Azur Investments in the Future project managed by the ANR (ANR-19-P3IA-0002)}} \toctitle{Binary domain generalization for sparsifying binary neural networks} \author{Riccardo Schiavone {\Letter} \inst{1}\orcidID{0000-0002-5089-7499} \and Francesco Galati\inst{2}\orcidID{0000-0001-6317-6298} \and Maria~A.~Zuluaga\inst{2}\orcidID{0000-0002-1147-766X}} \authorrunning{R. Schiavone et al.} \tocauthor{Riccardo Schiavone, Francesco Galati, Maria A. Zuluaga} \institute{Department of Electronics and Telecommunications, Politecnico di Torino, Italy\\ \email{[email protected]} \and Data Science Department, EURECOM, Sophia Antipolis, France \\ \email{\{galati,zuluaga\}@eurecom.fr} } \maketitle \begin{abstract} Binary neural networks (BNNs) are an attractive solution for developing and deploying deep neural network (DNN)-based applications in resource constrained devices. Despite their success, BNNs still suffer from a fixed and limited compression factor that may be explained by the fact that existing pruning methods for full-precision DNNs cannot be directly applied to BNNs. In fact, weight pruning of BNNs leads to performance degradation, which suggests that the standard binarization domain of BNNs is not well adapted for the task. This work proposes a novel more general binary domain that extends the standard binary one that is more robust to pruning techniques, thus guaranteeing improved compression and avoiding severe performance losses. We demonstrate a closed-form solution for quantizing the weights of a full-precision network into the proposed binary domain. Finally, we show the flexibility of our method, which can be combined with other pruning strategies. Experiments over CIFAR-10 and CIFAR-100 demonstrate that the novel approach is able to generate efficient sparse networks with reduced memory usage and run-time latency, while maintaining performance. \keywords{Binary neural networks \and Deep neural networks \and Pruning \and Sparse representation.} \end{abstract} \section{Introduction} The increasing number of connected Internet-of-Things (IoT) devices, now surpassing the number of humans connected to the internet~\cite{evans2011}, has led to a sensors-rich world, capable of addressing real-time applications in multiple domains, where both accuracy and computational time are crucial~\cite{iot_application}. Deep neural networks (DNNs) have the potential of enabling a myriad of new IoT applications, thanks to their ability to process large complex heterogeneous data and to extract patterns needed to take autonomous decisions with high reliability~\cite{bengioDNN}. However, DNNs are known for being resource-greedy, in terms of required computational power, memory, and energy consumption~\cite{DNNanalysis}, whereas most IoT devices are characterized by limited resources. They usually have limited processing power, small storage capabilities, they are not GPU-enabled and they are powered with batteries of limited capacity, which are expected to last over 10 years without being replaced or recharged. These constraints represent an important bottleneck towards the deployment of DNNs in IoT applications~\cite{yao2018}. A recent and notable example to enable the usage of DNNs in limited resource devices are binary neural networks (BNNs) \cite{Courbariaux2016}. 
BNNs use binary weights and activation functions that allow them to replace computationally expensive multiplication operations with low-cost bitwise operations during forward propagation. This results in faster inference and better compression rates, while maintaining an acceptable accuracy for complex learning tasks~\cite{guo2022join,liu2020reactnet}. For instance, BNNs have achieved over $80\%$ classification accuracy on ImageNet~\cite{guo2022join,Imagenet}. Despite the good results, BNNs have a fixed and limited compression factor compared to full-precision DNNs, which may be insufficient for certain size and power constraints of devices~\cite{lin2020mcunet}. A way to further improve BNNs' compression capacity is through network pruning, which seeks to control a network's sparsity by removing parameters and shared connections~\cite{han2015}. Pruning BNNs, however, is a more challenging task than pruning full-precision neural networks and it is still a challenge with many open questions~\cite{xu2019main}. Current attempts~\cite{guerra2021automatic,kuhar2022signed,munagala2020stq,schiavone2021sparse,wu2020sbnn,wang2021sub,xu2019main} often rely on training procedures that require more training stages than standard BNNs, making learning more complex. Moreover, these methods fail in highly pruned scenarios, showing severe accuracy degradation over simple classification problems. In this work, we introduce sparse binary neural network (SBNN), a more robust pruning strategy to achieve sparsity and improve the performance of BNNs. Our strategy relies on entropy to optimize the network to be largely skewed to one of the two possible weight values, i.e. having a very low entropy. Unlike BNNs that use symmetric values to represent the network's weights, we propose a more general binary domain that allows the weight values to adapt to the asymmetry present in the weights distribution. This enables the network to capture valuable information, achieve better representation, and, thus better generalization. The main contributions of our work can be summarized as follows: 1) We introduce a more general binary domain w.r.t. the one used by BNNs to quantize real-valued weights; 2) we derive a closed-form solution for binary values that minimizes quantization error when real-valued weights are mapped to the proposed domain; 3) we enable the regularization of the BNNs weights distribution by using entropy constraints; 4) we present efficient implementations of the proposed algorithm, which reduce the number of bitwise operations in the network proportionally to the entropy of the weight distribution; and 5) we demonstrate SBNN's competitiveness and flexibility through benchmark evaluations. The remaining of this work is organized as follows. Section~\ref{sec:related} discusses previous related works. The core of our contributions are described in Section~\ref{sec:method}. In Section~\ref{sec:results}, we study the properties of the proposed method and assess its performance, in terms of accuracy and operation reduction at inference, through a set of experiments using, CIFAR-10, CIFAR-100 \cite{Cifar10} and ImageNet \cite{Imagenet} datasets. Finally, a discussion on the results and main conclusions are drawn in Section~\ref{sec:conclusions}. \section{Related Work}\label{sec:related} We first provide an overview of BNNs. 
Next, we review sparsification through pruning~\cite{Bello1992,han2015,louizos2017,srivastava2014} and quantization~\cite{compress_prune,courbariaux2017,yang2019quantization,Zhang2018}, the two network compression strategies this work relies on. A broad review covering further network compression and speed-up techniques can be found in \cite{liang2021pruning}. \noindent \textbf{Binary Neural Networks.} BNNs~\cite{Courbariaux2016} have gained attention in recent years due to their computational efficiency and improved compression. Subsequent works have extended \cite{Courbariaux2016} to improve its accuracy. For instance, \cite{xnornet} introduced a channel-wise scaling coefficient to decrease the quantization error. ABC-Net adopts multiple binary bases \cite{lin2017towards}, and Bi-Real \cite{liu2018birealnet} recommends short residual connection to reduce the information loss and a smoother gradient for the signum function. Recently, ReActNet~\cite{liu2020reactnet} generalized the traditional $\text{sign}(\cdot)$ and PReLU activation functions to extend binary network capabilities, achieving an accuracy close to full-precision ResNet-18 \cite{he2016resnet} and MobileNet V1 \cite{howard2017mobilenets} on ImageNet \cite{Imagenet}. By adopting the RSign, the RPReLU along with an attention formulation Guo et al. \cite{guo2022join} surpassed the $80\%$ accuracy mark on ImageNet. Although these works have been successful at increasing the performance of BNNs, few of them consider the compression aspect of BNNs. \noindent \textbf{Network Sparsification.} The concept of sparsity has been well studied beyond quantized neural networks as it reduces a network's computational and storage requirements and it prevents overfitting. Methods to achieve sparsity either explicitly induce it during learning through regularization (e.g. $L_{0}$ \cite{louizos2017} or $L_{1}$ \cite{han2015} regularization), or do it incrementally by gradually augmenting small networks \cite{Bello1992}; or by post hoc pruning \cite{Gomez2019,sparse_using_binary,srivastava2014}. BNNs pruning is particularly challenging because weights in the $\lbrace\pm1\rbrace$ domain cannot be pruned based only on their magnitude. Existing methods include removing unimportant channels and filters from the network \cite{guerra2021automatic,munagala2020stq,wu2020sbnn,xu2019main}, but optimum metrics are still unclear; quantizing binary kernels to a smaller bit size than the kernel size \cite{wang2021sub}; or using the $\lbrace0,\pm 1\rbrace$ domains~\cite{kuhar2022signed,schiavone2021sparse}. Although these works suggest that the standard $\lbrace \pm1\rbrace$ binary domain has severe limitations regarding compression, BNNs using the $\lbrace0,\pm1\rbrace$ domain have reported limited generalization capabilities~\cite{kuhar2022signed,schiavone2021sparse}. In our work, we extend the traditional binary domain to a more general one, that can be efficiently implemented via sparse operations. Moreover, we address sparsity explicitly with entropy constraints, which can be formulated as magnitude pruning of the generic binary weight values mapping them in the $\lbrace0,1\rbrace$ domain. In our proposed domain, BNNs are more robust to pruning strategies and show better generalization properties than other pruning techniques for the same sparsity levels. \noindent \textbf{Quantization.} Network quantization allows the use of fixed-point arithmetic and a smaller bit-width to represent network parameters w.r.t the full-precision counterpart. 
Representing the values using only a finite set requires a quantization function that maps the original elements to the finite set. The quantization can be done after training the model, using parameter sharing techniques \cite{compress_prune}, or during training by quantizing the weights in the forward pass, as ternary neural networks (TNNs)~\cite{ternary2014}, BNNs~\cite{Courbariaux2015} and other quantized networks do~\cite{courbariaux2017,yang2019quantization}. Our work builds upon the strategy of BNNs by introducing a novel quantization function that maps weights to a binary domain that is more general than the $\{\pm1\}$ domain used in most state-of-the-art BNNs. This broader domain significantly reduces the distortion-rate curves of BNNs across various sparsity levels, enabling us to achieve greater compression. \section{Method}\label{sec:method} The proposed SBNN achieves network pruning via sparsification by introducing a novel quantization function that extends standard BNNs weight domain $\{\pm1\}$ to a more generic binary domain $\{\alpha,\beta\}$ and a new penalization term in the objective loss controlling the entropy of the weight distribution and the sparsity of the network (Section~\ref{problem_formulation_sec}). We derive in Section~\ref{sec:weight_optimization} the optimum SBNN's $\{\alpha,\beta\}$ values, i.e. the values that minimize the quantization loss when real-valued weights are quantized in the proposed domain. In Section~\ref{training_sec}, we use BNN's state-of-the-art training algorithms for SBNN training by adding the sparsity regularization term to the original BNN's objective loss. Section~\ref{implementation_sec} describes the implementation details of the proposed SBNN to illustrate their speed-up gains w.r.t BNNs. \subsection{Preliminaries}\label{preliminaries} The training of a full-precision DNN can be seen as a loss minimization problem: \begin{equation}\label{eq:standard} \displaystyle \argmin_{\widetilde{\textbf{W}}} \mathcal{L}(y,\hat{y}) \end{equation} where $\mathcal{L}(\cdot)$ is a loss function between the true labels $y$ and the predicted values $\hat{y}=f(\textbf{x};\widetilde{\textbf{W}})$, which are a function of the data input $\textbf{x}$ and the network's full precision weights $\widetilde{\textbf{W}}=\smash{\lbrace \widetilde{\textbf{w}}^{\ell} \rbrace}$, with $\smash{\widetilde{\textbf{w}}^{\ell} \in \mathbb{R}^{N^{\ell}}}$ the weights of the $\ell^{th}$ layer, and $N=\sum_{\ell} N^{\ell}$ the total number of weights in the DNN. We denote the $i^{th}$ weight element of $\widetilde{\textbf{w}}^{\ell}$ as $\widetilde{w}_i^{\ell}$.\\ A BNN~\cite{Courbariaux2016} uses a modified signum function as quantization function that maps full precision weights $\widetilde{\textbf{W}}$ and activations $\widetilde{\textbf{a}}$ to the $\lbrace\pm 1\rbrace$ binary domain, enabling the use of low-cost bitwise operations in the forward propagation, i.e. \begin{equation*} \overline{\textbf{W}} = \text{sign}(\widetilde{\textbf{W}})\,, \qquad \dfrac{\displaystyle \partial g(\widetilde{w}_i)}{\displaystyle \partial \widetilde{w}_i} = \left\{\begin{array}{ll} \frac{\displaystyle \partial g(\widetilde{w}_i)}{\displaystyle \partial \overline{w}_i} & \quad \text{, if} -1 \leq \widetilde{w}_i \leq 1 \\ 0 & \quad \text{, otherwise}, \end{array}\right. 
\end{equation*} where $\text{sign}(\cdot)$ denotes the modified sign function applied entry-wise to a vector, $g(\cdot)$ is a differentiable function, $\overline{\textbf{W}}$ the network's weights in the $\lbrace\pm 1\rbrace$ binary domain, $\overline{w}_i$ a given weight in the binary domain, and $\widetilde{w}_i$ the associated full-precision weight. \subsection{Sparse Binary Neural Network (SBNN) Formulation}\label{problem_formulation_sec} Given $\Omega^\ell=\lbrace \alpha^\ell, \beta^\ell \rbrace$ a general binary domain, with $\alpha^\ell, \beta^\ell \in \mathbb{R}$ and $\alpha^\ell < \beta^\ell$, let us define an SBNN such that, for any given layer $\ell$, \begin{equation} w_i^{\ell} \in \Omega^\ell \qquad \forall \,\, i, \end{equation} with $w_i^{\ell}$ the $i^{th}$ weight element of the weight vector ${\textbf{w}}^{\ell}$, and $\textbf{w}= \left\{\textbf{w}^{\ell}\right\}$ the set of weights of the whole SBNN. We denote by $S_{\alpha^\ell}$ and $S_{\beta^\ell}$ the sets of indices of the weights with value $\alpha^\ell$ and $\beta^\ell$, respectively, in $\mathbf{w}^\ell$: \begin{equation} S_{\alpha^\ell} = \lbrace i \, | \, 1 \leq i \leq N^\ell, w^\ell_i = \alpha^\ell \rbrace ,\qquad S_{\beta^\ell} = \lbrace i \, | \, 1 \leq i \leq N^\ell, w^\ell_i = \beta^\ell \rbrace. \nonumber \end{equation} Since $\alpha^\ell < \beta^\ell \,\, \forall \,\, \ell$, it is possible to count the weights taking the lower and upper values of the general binary domain over the whole network: \begin{equation} L^\ell = |S_{\alpha^\ell}|,\qquad U^\ell = |S_{\beta^\ell}|,\qquad L = \sum_\ell L^\ell, \qquad U = \sum_\ell U^\ell, \label{eq:ul2} \end{equation} with $L+U =N$, the total number of SBNN weights. In the remainder of the manuscript, for simplicity and without loss of generality, we drop the layer index $\ell$ from the weight notation. To express the SBNN weights $\mathbf{w}$ in terms of binary $\lbrace 0,1 \rbrace$ weights, we now define a mapping function $r: \lbrace 0,1 \rbrace \longrightarrow \lbrace \alpha, \beta \rbrace$ such that \begin{equation}\label{alphabeta_to_zeroone_eq} w_{i} = r\left(w_{\lbrace0,1\rbrace,i}\right) = \left(w_{\lbrace0,1\rbrace,i} + \xi\right) \cdot \eta \end{equation} with \begin{equation} \alpha=\xi \cdot \eta,\qquad \beta = (1 + \xi) \cdot \eta, \label{eq:map_alpha} \end{equation} and $w_{\lbrace0,1\rbrace,i} \in \lbrace 0,1 \rbrace$ the $i^{th}$ weight of the SBNN when restricted to the binary set $\lbrace 0,1 \rbrace$. Through this mapping, $0$-valued weights are pruned from the network, making the SBNN sparse. The bit-width of an SBNN is measured with the binary entropy $h(\cdot)$ of the distribution of $\alpha$-valued and $\beta$-valued weights, \begin{equation} h(p)= -p\log_2(p)-(1-p)\log_2(1-p) \qquad \left[ \text{bits}/ \text{weight}\right], \end{equation} with $p = U / N$. Achieving network compression using a smaller bit-width than that of standard BNN weights (1 bit/weight) is equivalent to constraining the SBNN's entropy to be less than or equal to a desired value $h^*$, i.e.
\begin{equation} h(U/N) \leq h^*.\label{eq:entropy_constraint} \end{equation} Given $h^{-1}(\cdot)$, the inverse binary entropy function for $0 \leq p \leq 1/2$, it is straightforward to rewrite this constraint as $U\leq M$, where \begin{equation} M\triangleq N \cdot h^{-1}(h^*).\label{eq:define_M} \end{equation} Eqs.~\eqref{eq:entropy_constraint} and \eqref{eq:define_M} imply that the constraint corresponds to restricting the maximum number of $1$s in the network, and thus the sparsity of the network. Thus, the original full-precision DNN loss minimization problem (Eq.~\eqref{eq:standard}) can be reformulated as: \begin{equation}\label{min_problem_zeroone_eq} \begin{aligned} \displaystyle & \argmin_{\textbf{w}_{\lbrace0,1\rbrace},\xi,\eta} & & \mathcal{L}(y,\hat{y}) \\ & \text{s.t.} & & \textbf{w}_{\lbrace0,1\rbrace} \in \{0,1\}^{N}, \\ &&& U \leq M < N. \end{aligned} \end{equation} The mixed optimization problem in Eq.~\eqref{min_problem_zeroone_eq} can be simplified by relaxing the sparsity constraint on $U$ through the introduction of a non-negative function $\displaystyle g(\cdot)$, which penalizes the weights when $U > M$: \begin{equation}\label{min_problem_zeroone_loss_eq} \begin{aligned} \displaystyle & \argmin_{\textbf{W}_{\lbrace0,1\rbrace},\xi,\eta} & & \mathcal{L}(y,\hat{y}) + \lambda g(\textbf{W}_{\lbrace0,1\rbrace})\\ & \text{s.t.} & & \textbf{W}_{\lbrace0,1\rbrace} \in \{0,1\}^{N}\\ \end{aligned} \end{equation} where $\lambda$ controls the influence of $g(\cdot)$. A simple yet effective choice of $g(\textbf{W}_{\lbrace0,1\rbrace})$ is the following one: \begin{equation}\label{penalty_function_eq} g\left(\textbf{W}_{\lbrace0,1\rbrace}\right) = \text{ReLU}\left( U/N - \text{EC} \right), \end{equation} where $\text{EC} = M/N$ represents the fraction of expected connections, i.e. the maximum allowed fraction of $1$-valued weights in $\textbf{W}_{\lbrace0,1\rbrace}$ over its total number of weights. Eq.~\eqref{min_problem_zeroone_eq} allows us to compare the proposed SBNN with the standard BNN formulation. Setting $\xi= -1/2$ and $\eta = 2$, for which $\alpha = -1$ and $\beta=+1$ (Eq.~\eqref{alphabeta_to_zeroone_eq}), and removing the constraint on $U$ leads to the standard formulation of a BNN. This implies that any BNN can be represented using the $\{0,1\}$ domain and can perform sparse operations. However, in practice, when $U$ is not constrained to be $\leq M$, then $U\approx N/2$ and $h(1/2) = 1$ bit/weight, which means that standard BNNs cannot be compressed further. \subsection{Weight Optimization}\label{sec:weight_optimization} In this section, we derive the value of $\Omega=\lbrace \alpha,\beta \rbrace$ which minimizes the quantization error when real-valued weights are quantized using it. The minimization of the quantization error amounts to minimizing the binarization loss, $\mathcal{L}_B$, which is the optimal estimator when $\widetilde{\textbf{W}}$ is mapped to $\textbf{W}$~\cite{xnornet}. This minimization is equivalent to finding the values of $\alpha$ and $\beta$ which minimize $\mathcal{L}_B$. To simplify the derivation of the optimum $\alpha$ and $\beta$ values, we minimize $\mathcal{L}_B$ over two variables in one-to-one correspondence with $\alpha$ and $\beta$. To achieve this, as in Eqs.~\ref{alphabeta_to_zeroone_eq}-\ref{eq:map_alpha}, we map $w_i \in \Omega$ to $\overline{w}_i \in \{-1,+1\}$, i.e.
\begin{equation*} w_i = \tau\overline{w}_i+\phi, \end{equation*} where $\tau$ and $\phi$ are two real-valued variables, and $\alpha=-\tau+\phi$ and $\beta=\tau+\phi$. As a result, $\alpha$ and $\beta$ are in one-to-one correspondence with $\tau$ and $\phi$, and the minimization of $\mathcal{L}_B$ can be formulated as \begin{align} \begin{aligned} \tau^*,\phi^* & = \arg \min_{\tau,\phi} \mathcal{L}_B = \arg \min_{\tau,\phi} \left\lVert \widetilde{\textbf{w}} - (\tau\overline{\textbf{w}}+\phi\bm{1})\right\rVert_{2} \end{aligned} \label{eq:minimization_phitau} \end{align} where $\lVert\cdot\rVert_{2}$ is the $\ell_2$-norm and $\bm{1}$ is the vector of all ones. By first expanding the $\ell_2$-norm term and using the fact that $\text{sum}(\overline{\textbf{w}}) = N^{\ell} (2p-1)$, it is straightforward to reformulate Eq.~\ref{eq:minimization_phitau} as a function of the sum of the real-valued weights, their ${\ell_1}$-norm, the fraction of $+1$-valued binarized weights and the two optimization parameters. In this case, the gradient $\nabla \mathcal{L}_B$ is \begin{equation}\label{eq:gradient} \nabla \mathcal{L}_B = \begin{pmatrix} \frac{\partial \mathcal{L}_B}{\partial \tau}\\ \frac{\partial \mathcal{L}_B}{\partial \phi} \end{pmatrix} = 2 \begin{pmatrix}-\lVert\widetilde{\textbf{w}}\rVert_{1}+N^{\ell}\bigl(\tau + \phi (2p-1)\bigr)\\ -\text{sum}(\widetilde{\textbf{w}})+N^{\ell} \bigl(\phi + \tau (2p-1)\bigr) \end{pmatrix}. \end{equation} Setting the gradient to zero and solving for the optimal values of $\tau$ and $\phi$, we obtain \begin{equation} \tau^* = \frac{\lVert\widetilde{\textbf{w}}\rVert_{1}}{N^{\ell}} - \phi^* (2p-1)\,,\,\,\,\,\, \phi^* = \frac{\text{sum}(\widetilde{\textbf{w}})}{N^{\ell}} - \tau^* (2p-1). \label{eq:optimal_phitau} \end{equation} When $p=0.5$, as in standard BNNs, this gives the classical value $\tau^*=\lVert\widetilde{\textbf{w}}\rVert_{1}/N^{\ell}$ of \cite{xnornet}. By substituting $\phi^*$ into the first relation of Eq.~\eqref{eq:optimal_phitau}, we obtain the closed-form solution \begin{equation} \tau^* = \frac{\lVert\widetilde{\textbf{w}}\rVert_{1}-(2p-1)\text{sum}(\widetilde{\textbf{w}})}{N^{\ell}(1- (2p-1)^2)}\,,\,\,\,\,\, \phi^* = \frac{\text{sum}(\widetilde{\textbf{w}})-(2p-1)\lVert\widetilde{\textbf{w}}\rVert_{1}}{N^{\ell}(1- (2p-1)^2)}. \label{eq:optimal_phitau_closed} \end{equation} As the gradient (Eq.~\ref{eq:gradient}) is linear in $\phi$ and $\tau$, there is a unique critical point. Moreover, an analysis of the Hessian matrix confirms that $\mathcal{L}_B$ is convex, so that this critical point is the global minimum. The derivation is omitted here as it is straightforward. \subsection{Network Training}\label{training_sec} The SBNN training algorithm builds upon state-of-the-art BNN training algorithms~\cite{bethge2019backtosimplicity,Courbariaux2016,liu2020reactnet}, while introducing network sparsification. To profit from the BNN training scheme, we replace $\mathbf{W}_{\lbrace0,1\rbrace}$, $\xi$ and $\eta$ (Eq.~\eqref{min_problem_zeroone_loss_eq}) with $\overline{\mathbf{W}}$, $\tau$ and $\phi$. In this way, $\mathcal{L}(y,\hat{y})$ corresponds to the loss of BNN algorithms, $\mathcal{L}_{\text{\tiny{BNN}}}$. SBNN training also requires adding the penalization term from Eq.~\eqref{penalty_function_eq} to account for sparsity.
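As a concrete illustration of the weight optimization step (Sec.~\ref{sec:weight_optimization}) and of the sparsity penalty in Eq.~\eqref{penalty_function_eq}, the following minimal NumPy sketch (ours, not the authors' released code; function and variable names are hypothetical) evaluates the closed-form solution of Eq.~\eqref{eq:optimal_phitau_closed} for one layer and the corresponding $\{\alpha,\beta\}$ levels.
\begin{verbatim}
# Minimal sketch (ours, not the authors' released code) of the closed-form
# quantization step and of the sparsity penalty g = ReLU(U/N - EC).
# Assumes 0 < p < 1.
import numpy as np

def optimal_levels(w_tilde, p):
    """w_tilde: real-valued weights of one layer (1-D array);
    p = U/N: fraction of weights assigned to the upper (+1) level.
    Returns (tau, phi, alpha, beta) from the closed-form solution."""
    n = w_tilde.size
    l1 = np.abs(w_tilde).sum()              # ||w~||_1
    s = w_tilde.sum()                       # sum(w~)
    d = n * (1.0 - (2.0 * p - 1.0) ** 2)
    tau = (l1 - (2.0 * p - 1.0) * s) / d
    phi = (s - (2.0 * p - 1.0) * l1) / d
    return tau, phi, -tau + phi, tau + phi  # alpha = -tau+phi, beta = tau+phi

def ec_penalty(u, n, ec):
    """Sparsity penalty ReLU(U/N - EC) used in the SBNN objective."""
    return max(u / n - ec, 0.0)
\end{verbatim}
For instance, calling the sketch with $p=0.5$ returns $\tau=\lVert\widetilde{\mathbf{w}}\rVert_1/N^{\ell}$, recovering the scaling factor of \cite{xnornet} mentioned above.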
To account for $\overline{\textbf{W}}$, the regularization function $g(\mathbf{W}_{\lbrace0,1\rbrace})$ (Eq.~\eqref{penalty_function_eq}) is redefined according to \begingroup\makeatletter\def\f@size{9.5}\check@mathfonts \begin{equation}\label{actual_ones_eq} j(\overline{\textbf{W}}) = \text{ReLU}\left(\left(\sum_i \dfrac{\overline{w}_i +1}{2N}\right)-\text{EC}\right), \end{equation} \endgroup and the SBNN objective loss can be expressed as \begin{equation}\label{loss_bnn_sbnn_eq} \mathcal{L}_{\text{\tiny{SBNN}}} = \mathcal{L}_{\text{\tiny{BNN}}} + \lambda \,j(\overline{\textbf{W}}). \end{equation} During training, we modulate the contribution of the regularization term $j(\overline{\textbf{W}})$ by imposing, at every training iteration, that it equals a fixed fraction $\gamma$ of $\mathcal{L}_{\text{\tiny{SBNN}}}$, i.e. \begin{equation}\label{gamma_eq} \gamma = \dfrac{\lambda\,j(\overline{\textbf{W}})}{\mathcal{L}_{\text{\tiny{SBNN}}}}. \end{equation} The hyperparameter $\gamma$ is set to a fixed value throughout the training process. Since $\mathcal{L}_{\text{\tiny{SBNN}}}$ changes at every iteration, this forces $\lambda$ to adapt, thus modulating the influence of $j(\overline{\textbf{W}})$ proportionally to the changes in the loss. The lower $\gamma$ is set, the less influence $j(\overline{\textbf{W}})$ has on the total loss. This means that network sparsification will be slower, but convergence will be achieved faster. In the opposite case (high $\gamma$), the training will favor sparsification. \begin{figure*} \caption{BNNs vs. SBNNs operations in a convolutional layer.} \label{implementation_fig} \end{figure*} \subsection{Implementation Gains}\label{implementation_sec} We discuss the speed-up gains of the proposed SBNN through its efficient implementation using linear layers in the backbone architecture. Its extension to convolutional layers (Fig.~\ref{implementation_fig}) is straightforward, thus we omit it for the sake of brevity. We describe the use of sparse operations, as can be done on an FPGA device~\cite{fu2022towards,wang2021sub}. Instead, when implemented on CPUs, SBNNs can take advantage of pruned layers, kernels and filters for acceleration~\cite{guerra2021automatic,munagala2020stq,wu2020sbnn,xu2019main}. Moreover, for kernels with only a single binary weight equal to $1$ there is no need to perform a convolution, since such kernels simply select shifted elements of their input. The connections in an SBNN are the mapped one-valued weights, i.e. the set $S_1$. Therefore, SBNNs do not require any $\mathrm{XNOR}$ operation on FPGA, $\mathrm{popcount}$ being the only bitwise operation needed during the forward pass. The latter, however, is performed only on the layer's input bits connected through the one-valued weights, rather than on the full input. For any given layer $\ell$, the number of binary operations of a BNN is $\mathcal{O}_{\text{\tiny{BNN}}} = 2 N^{\ell}$ \cite{bethge2019backtosimplicity}, i.e. $N^{\ell}$ $\mathrm{XNOR}$ operations and $N^{\ell}$ $\mathrm{popcount}$s. A rough estimate of the implementation gain in terms of the number of binary operations of SBNNs w.r.t. BNNs can be expressed in terms of the EC as \begin{equation} \dfrac{\mathcal{O}_{\text{\tiny{BNN}}}}{\mathcal{O}_{\text{\tiny{SBNN}}}} \approx \dfrac{2N^{\ell}}{\text{EC}\cdot N^{\ell}} \approx \dfrac{2}{\text{EC}}, \end{equation} which indicates that the lower the EC fraction, the higher the gain w.r.t. BNNs. Binary operations are not the only ones involved in the inference of SBNN layers.
After the sparse $\{0,1\}$ computations, the mapping operations to the $\{\alpha,\beta\}$ domain take place, also benefiting from implementation gains. To analyze these, let us now denote by $\mathbf{x}$ the input vector to any layer and by $\mathbf{z}=\mathbf{w}\,\mathbf{x}$ its output. Using Eq.~\eqref{alphabeta_to_zeroone_eq}, $\mathbf{z}$ can be computed as \begin{equation} \mathbf{z} = \eta\,\mathbf{z}' + \xi\,\eta\,\mathbf{q}, \label{implementation_eq} \end{equation} where $\mathbf{z}'=\mathbf{w_{\lbrace0,1\rbrace}}\,\mathbf{x}$ is the result of sparse operations (Fig.~\ref{implementation_fig}), $\mathbf{q} = \mathbf{1}\,\mathbf{x}$, and $\mathbf{1}$ is the all-ones matrix. All the elements in $\mathbf{q}$ take the value $2\cdot\mathrm{popcount}(\mathbf{x}) - |\mathbf{x}|$, with $|\mathbf{x}|$ the size of $\mathbf{x}$. Therefore, this value is computed only once and reused for every row of $\mathbf{1}$. Since $\xi$ and $\eta$ are known at inference time, they can be used to precompute the threshold in the threshold-comparison stage that implements the $\mathrm{batchnorm}$ and $\mathrm{sign}$ operations following the computation of $\mathbf{z}$~\cite{umuroglu2017finn}. Thus, SBNNs require $|\mathbf{x}|$ binary operations, one real product and $|\mathbf{x}|$ real sums to obtain $\mathbf{z}$ from $\mathbf{z}'$. \section{Experiments and Results}\label{sec:results} We first run a set of ablation studies to analyze the properties of the proposed method (Section \ref{sec:result_domain_comparison}). Namely, we analyze the generalization of SBNNs in a standard binary domain and in the proposed generic binary domain; we study the role of the quantization error in the network's performance; and we assess the effects of sparsifying binary kernels. Next, we compare our proposed method to other state-of-the-art techniques using the well-established CIFAR-10 and CIFAR-100~\cite{Cifar10} datasets. Preliminary results on ImageNet~\cite{Imagenet} are also discussed. All our code has been made publicly available\footnote{\href{https://github.com/robustml-eurecom/SBNN/}{github.com/robustml-eurecom/SBNN}}. \subsection{Ablation Studies}\label{sec:result_domain_comparison} \noindent \textbf{Experimental setup.} We use a binarized ResNet-18 model trained on CIFAR-10 as backbone architecture. We train the networks for 300 epochs, with a batch size of 512, a learning rate of $10^{-3}$, and standard data augmentation techniques (random crops, rotations, horizontal flips and normalization). We use an Adam optimizer and a cosine annealing schedule for the learning rate, as suggested in \cite{liu2021adam}, and we follow the binarization strategy of IR-Net \cite{qin2020forward}.\\ \noindent \textbf{Generalization properties.}\label{sec:comparison_domains} We compare the performance of the proposed generic binary domain to other binary domains used by BNNs by assessing the networks' generalization capabilities when the sparsity ratio is $95\%$. For this experiment, we use the $\{-\beta,+\beta\}$ domain from~\cite{xnornet} with no sparsity constraints as the baseline. Additionally, we consider the same domain with a $95\%$ sparsity constraint and the $\{\alpha,\beta\}$ domain obtained by optimizing $\tau$ and $\phi$ according to Eq.~\eqref{eq:optimal_phitau_closed} with the $95\%$ sparsity constraint. Table \ref{tab:quantization_error} reports the obtained results in terms of top-1 accuracy and accuracy loss w.r.t. the BNN baseline model ($\Delta$). When we impose the $95\%$ sparsity constraint with the $\{-\beta,+\beta\}$ domain, the accuracy drop with respect
to the baseline is $2.98\%$. Using the $\{\alpha,\beta\}$ domain, the loss goes down to $2.47\%$, nearly $0.5\%$ better than the $\{-\beta,+\beta\}$ domain. The results suggest that a more general domain leads to improved generalization capabilities.\\ \noindent \textbf{Impact of the quantization error}\label{sec:quantization_error} We investigate the impact of the quantization error in the SBNN generalization. To this end, we compare the proposed quantization technique (Sec.~\ref{sec:weight_optimization}) with the strategy of learning $\Omega$ via back-propagation. We denote this approach Learned $\{\alpha,\beta\}$ (Table~\ref{tab:quantization_error}). The obtained results show that with the learning of the parameters the accuracy loss w.r.t. the BNN baseline decreases down to $-0.09\%$, thus $2.38\%$ better than when $\tau$ and $\phi$ are analytically obtained with Eq.~\eqref{eq:optimal_phitau_closed}. This result implies that the quantization error is one of the sources of accuracy degradation when mapping real-valued weights to any binary domain, but it is not the only source. Indeed, activations are also quantized. Moreover, errors are propagated throughout the network. Learning $\Omega$ can partially compensate for these other error sources. \begin{table*}[t] \caption{Role of the binary domain and the quantization error when sparsifying BNNs. Experiments performed on CIFAR-10 with a binarized ResNet-18 model.} \label{tab:quantization_error} \begin{center} { \begin{tabular}{c|ccc} \hline Domain & Sparsity constraint & Top-1 Accuracy & $\Delta$ \\\hline \hline Baseline & / & 88.93\% & /\\\cdashline{1-4} $\{-\beta,+\beta\}$ \cite{xnornet} & 95\% & 85.95\% & -2.98\%\\ $\{\alpha,\beta\}$ & 95\% & 86.46\% & -2.47\%\\ Learned $\{\alpha,\beta\}$ & 95\% & 88.84\% & -0.09\%\\ \hline \end{tabular} } \end{center} \end{table*} \ \\ \noindent \textbf{Effects of network sparsification}\label{sec:pruning_effects} We investigate the effects of network sparsification and how they can be leveraged to reduce the binary operations (BOPs) required in SBNNs. In Section \ref{sec:comparison_domains}, we showed that our binary domain is more adept at learning sparse network representations compared to the standard binary domain. This allows us to increase the sparsity of SBNNs while maintaining a desired level of accuracy. When the sparsity is sufficiently high, many convolutional kernels can be entirely removed from the network, which further reduces the BOPs required for SBNNs. Additionally, convolutional kernels with only a single binary weight equal to $1$ do not require a convolution to be performed, as these kernels simply remove certain elements from the input. To illustrate this effect, we plotted the distribution of binary kernels for the $5$th, $10$th, and $15$th layers of a binarized ResNet-18 model (Fig. ~\ref{tab:kernels}). The first column shows the distribution when no sparsity constraints are imposed, while the second and third columns show the distribution for sparsity levels of $95\%$ and $99\%$, respectively. The kernels are grouped based on their Hamming weights, which is the number of non-zero elements in each $\{0,1\}^{3\times3}$ kernel. The plots suggest that increasing the sparsity of SBNNs results in a higher number of kernels with Hamming weights of $0$ and $1$. 
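The following short sketch (ours; the tensor shape and the random $95\%$-sparse example are hypothetical) reproduces the statistic behind these plots: it computes the fraction of $3\times3$ binary kernels of a layer at each Hamming weight, from which the fraction of kernels requiring no full convolution (Hamming weight $0$ or $1$) follows directly.
\begin{verbatim}
# Sketch (ours) of the kernel statistics discussed above, for a {0,1}
# weight tensor of shape (c_out, c_in, 3, 3).
import numpy as np

def kernel_hamming_histogram(w01):
    hw = w01.reshape(-1, 9).sum(axis=1).astype(int)  # Hamming weight per kernel
    counts = np.bincount(hw, minlength=10)
    return counts / counts.sum()                     # fractions for weights 0..9

# Hypothetical example: random kernels at roughly 95% sparsity.
rng = np.random.default_rng(0)
w01 = (rng.random((64, 64, 3, 3)) < 0.05).astype(np.uint8)
hist = kernel_hamming_histogram(w01)
print("fraction of kernels with Hamming weight 0 or 1:", hist[0] + hist[1])
\end{verbatim}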
\newcommand\w{0.39} \newcommand\h{0.22} \begin{table}[!t] \centering \begin{tabular}{cccc} & Sparsity $50\%$ & Sparsity $95\%$ & Sparsity $99\%$ \\ \multirow{3}{*}{\rotatebox[origin=c]{90}{Percentage of kernels}} & \begin{tikzpicture} \begin{axis}[name=plot1, xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 15, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,5,10,15,20,30,40,50,60,70,80,90,100}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC50_L5_kernelsweights.dat}; \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[name=plot2, at=(plot1.right of south east), xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 60, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,20,40,60,70,80,90,100}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC5_L5_kernelsweights.dat}; \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[name=plot3, at=(plot2.right of south east), xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 100, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,30,60,90}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC1_L5_kernelsweights.dat}; \end{axis} \end{tikzpicture} \\ & \begin{tikzpicture} \begin{axis}[name=plot1, xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 20, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,5,10,15,20,30,40,50,60,70,80,90,100}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC50_L10_kernelsweights.dat}; \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[name=plot2, at=(plot1.right of south east), xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 60, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,20,40,60,70,80,90,100}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC5_L10_kernelsweights.dat}; \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[name=plot3, at=(plot2.right of south east), xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 100, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,30,60,90}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC1_L10_kernelsweights.dat}; \end{axis} \end{tikzpicture} \\ & \begin{tikzpicture} \begin{axis}[name=plot1, xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 15, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,5,10,15,20,30,40,50,60,70,80,90,100}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC50_L15_kernelsweights.dat}; \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[name=plot2, at=(plot1.right of south east), xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 100, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,30,60,90}, grid = both, minor tick num = 0, major grid style = 
{lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC5_L15_kernelsweights.dat}; \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[name=plot3, at=(plot2.right of south east), xmin = -0.25, xmax = 9.25, ymin = 0, ymax = 105, xtick= {0,1,2,3,4,5,6,7,8,9}, ytick={0,33,67,100}, grid = both, minor tick num = 0, major grid style = {lightgray}, minor grid style = {lightgray!25}, width = \w\columnwidth, height = \h\columnwidth] \addplot+ [ycomb,] file {Graphics/results_kernels/result_EC1_L15_kernelsweights.dat}; \end{axis} \end{tikzpicture} \\ & & kernel's Hamming weight & \\ \end{tabular} \captionof{figure}{Percentage of binary kernels for various Hamming weights of a binarized Resnet-18 model over CIFAR-10 for different sparsity constraints. The $5$-th, $10$-th and $15$-th layers are shown in the top, middle and bottom rows, respectively.} \label{tab:kernels} \end{table} \subsection{Benchmark}\label{sec:benchmark} \noindent \textbf{CIFAR-10.} We compare our method against state-of-the-art methods over a binarized ResNet-18 model using CIFAR-10. Namely, we consider: STQ \cite{munagala2020stq}, Slimming \cite{wu2020sbnn}, Dual-P \cite{fu2022towards}, Subbit \cite{wang2021sub}, IR-Net~\cite{qin2020forward} and our method with learned $\tau$ and $\phi$, for different sparsity constraints. We use the IR-Net as BNN baseline to be compressed. We use the experimental setup described in Sec. \ref{sec:result_domain_comparison} with some modifications. We extend the epochs to 500 as in~\cite{wang2021sub}, and we use a MixUp strategy~\cite{zhang2017mixup}. In the original IR-Net formulation~\cite{qin2020forward}, the training setup is missing. We use our setup to train it, achieving the same accuracy as in \cite{qin2020forward}. Table \ref{tab:CIFAR_10} reports the obtained results in terms of accuracy (Acc.), accuracy loss w.r.t. the IR-Net model ($\Delta$), and BOPs reduction (BOPs PR). For our SBNN, we estimate BOPs PR by counting the number of operations which are not computed from the convolutional kernels with Hamming weight $0$ and $1$. For other methods, we refer the reader to the original publications. We assess our method at different levels of sparsity, in the range 50 to 99\%. For SBNNs we also report the percentage of SBNN's convolutional kernels with Hamming weight $0$ ($K_0$) and with Hamming weight $1$ ($K_1$). The results suggest that our method is competitive with other more complex pruning strategies. Moreover, our method reports similar accuracy drops w.r.t. state-of-the-art Subbit and Dual-P for similar BOPs PR. However, we need to point out that Subbit and Dual-P results refer to BOPs PR on FPGA, where SBNN can take advantage of sparse operations (Section \ref{implementation_sec}) also for the kernels with larger Hamming weights than $0$ and $1$, because on FPGA all operations involving $0$-valued weights can be skipped. For instance, the use of sparse operations on the SBNN 95\% allows to remove $\approx$ 84.9\% BOPs. 
\begin{table}[t] \scriptsize \caption{Evaluation of kernel removal for different pruning targets using a binarized Resnet-18 model on CIFAR-10.} \label{tab:CIFAR_10} \begin{center} \resizebox{.8\linewidth}{!}{ \begin{tabular}{ccccrr} \hline \multicolumn{1}{c}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{Acc.}} & \multicolumn{1}{c}{$\bm{\Delta}$} & \multicolumn{1}{c}{\textbf{BOPs PR}} &\multicolumn{1}{c}{$\bm{K_0}$} & \multicolumn{1}{c}{$\bm{K_1}$} \\\hline IR-Net & 91.50\% & / & / & / & /\\\cdashline{1-6} STQ & 86.56\% & -5.50\% & -40.0\%& / & / \\ Slimming & 89.30\% & -2.20\% & -50.0\%& / & / \\ Dual-P (2$\rightarrow$1) & 91.02\% & -0.48\% & -70.0\%& / & /\\ Dual-P (3$\rightarrow$1) & 89.81\% & -1.69\% & -80.6\%& / & / \\ Dual-P (4$\rightarrow$1) & 89.43\% & -2.07\% & -85.4\% & / & / \\ Subbit $0.67$-bits & 91.00\% & -0.50\% & -47.2\%& / & / \\ Subbit $0.56$-bits & 90.60\% & -0.90\% & -70.0\% & / & /\\ Subbit $0.44$-bits & 90.10\% & -1.40\% & -82.3\%& / & /\\ \rowcolor{LightCyan} SBNN 50\% [\textbf{our}] & 91.70\% & +0.20\% & -11.1\%& 5.6\% & 6.8\% \\ \rowcolor{LightCyan} SBNN 75\% [\textbf{our}] & 91.71\% & +0.21\% & -24.5\% & 30.7\% & 15.9\% \\ \rowcolor{LightCyan} SBNN 90\% [\textbf{our}] & 91.16\% & -0.24\% & -46.5\%& 61.8\% & 15.5\% \\ \rowcolor{LightCyan} SBNN 95\% [\textbf{our}] & 90.94\% & -0.56\% & -63.2\%& 77.1\% & 11.8\% \\ \rowcolor{LightCyan} SBNN 96\% [\textbf{our}] & 90.59\% & -0.91\% & -69.7\%& 81.0\% & 10.1\% \\ \rowcolor{LightCyan} SBNN 97\% [\textbf{our}] & 90.71\% & -0.79\% & -75.7\%& 84.8\% & 8.7\%\\ \rowcolor{LightCyan} SBNN 98\% [\textbf{our}] & 89.68\% & -1.82\% & -82.5\%& 89.3\% & 6.5\%\\ \rowcolor{LightCyan} SBNN 99\% [\textbf{our}] & 88.87\% & -2.63\% & -88.7\%& 94.6\% & \,3.3\%\\ \hline \end{tabular} } \end{center} \end{table} \noindent \textbf{CIFAR-100.} We compare our method in the more challenging setup of CIFAR-100, with 100 classes and 500 images per class, against two state-of-the-art methods: STQ \cite{munagala2020stq}, and Subbit~\cite{wang2021sub}. We use ReActNet-18~\cite{liu2020reactnet} as the backbone architecture, using a single training step and no teacher. We train for 300 epochs with the same setup used for CIFAR-10 with Mixup augmentation. As no previous results for this setup have been reported for ReActNet-18 and Subbit, for a fair comparison, we trained them from scratch using our setup. We report the same metrics used for CIFAR-10, plus the the reduction of binary parameters (BParams PR). For our SBNN, we estimate BParams PR as follows. For each kernel we use 2 bits to differentiate among zero Hamming weight kernels, one Hamming weight kernels and all the other kernels. Then, we add 4 bits to the kernels with Hamming weight 1 to represent the index position of their $1$-valued bit, whereas we add $9$ bits for all the other kernels with Hamming weight larger than 1, which are their original bits. For the other methods, please refer to their work for their estimate of BParams PR. Table~\ref{tab:CIFAR_100} reports the obtained results for the different methods and our SBNN for various sparsity targets. We can see that our pruning method is more effective in reducing both the BOPs and the parameters than Subbit. It allows to remove $79.2\%$ of kernels, while increasing the original accuracy by $0.79\%$ w.r.t. the ReActNet-18 baseline. Instead, we observe nearly $1\%$ accuracy drop for a Subbit network for a similar BOPs reduction. Moreover, our method allows to remove nearly $15\%$ more binary parameters. 
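As a back-of-the-envelope check of this encoding (a sketch of ours, using the kernel statistics reported in Table~\ref{tab:CIFAR_100}), the binary-parameter reduction can be computed directly from the kernel counts; for the SBNN $98\%$ configuration ($K_0=88.1\%$, $K_1=8.0\%$) it yields about $70\%$, matching the reported $-70.3\%$ up to rounding.
\begin{verbatim}
# Sketch (ours) of the binary-parameter count described above:
# 2 bits per 3x3 kernel to tag its class, plus 4 extra bits for Hamming-
# weight-1 kernels (index of the single 1) and 9 extra bits for kernels of
# Hamming weight > 1 (their original bits); baseline is 9 bits per kernel.
def bparams_reduction(k_total, k0, k1):
    k_rest = k_total - k0 - k1
    encoded = 2 * k_total + 4 * k1 + 9 * k_rest
    return 1.0 - encoded / (9.0 * k_total)

# SBNN 98% row of the CIFAR-100 table: K0 = 88.1%, K1 = 8.0%
print(bparams_reduction(1000, 881, 80))   # ~0.703
\end{verbatim}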
\begin{table*}[t] \caption{Evaluation of kernel removal for different pruning targets using a ReActNet-18 model on CIFAR-100.} \label{tab:CIFAR_100} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{cccccrr} \hline \multicolumn{1}{c}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{Acc.}} & \multicolumn{1}{c}{$\bm{\Delta}$} & \multicolumn{1}{c}{\textbf{BOPs PR}} & \multicolumn{1}{c}{\textbf{BParams PR}} & \multicolumn{1}{c}{$\bm{K_0}$} & \multicolumn{1}{c}{$\bm{K_1}$} \\\hline ReActNet-18$^*$ & 62.79\% & / & / & / & / & /\\\cdashline{1-7} STQ & 57.72\% & -5.05\% & -36.1\% & -36.1\% & / & / \\ Subbit $0.67$-bits$^*$& 62.60\% & -0.19\% & -47.2\% & -33.3\%& / & / \\ Subbit $0.56$-bits$^*$& 62.07\% & -0.72\% & -70.0\% & -44.4\%& / & / \\ Subbit $0.44$-bits$^*$& 61.80\% & -0.99\% & -82.3\% & -55.6\%& / & / \\ \rowcolor{LightCyan} SBNN 50\% [\textbf{our}] & 63.03\% & +0.24\% & -11.1\% & /& 5.6\% & 6.8\% \\ \rowcolor{LightCyan} SBNN 95\% [\textbf{our}] & 63.33\% & +0.54\% & -66.2\% & -59.9\%& 72.9\% & 16.6\% \\ \rowcolor{LightCyan} SBNN 96\% [\textbf{our}] & 63.04\% & +0.25\% & -67.3\% & -63.7\% & 78.9\% & 12.6\%\\ \rowcolor{LightCyan} SBNN 97\% [\textbf{our}] & 62.41\% & -0.38\% & -73.4\% & -66.8\%& 82.9\% & 11.1\% \\ \rowcolor{LightCyan} SBNN 98\% [\textbf{our}] & 63.58\% & +0.79\% & -79.2\% & -70.3\%& 88.1\% & 8.0\%\\ \rowcolor{LightCyan} SBNN 99\% [\textbf{our}] & 62.23\% & -0.57\% & -87.8\% & -74.0\%& 93.6\% & 4.7\% \\ \hline \multicolumn{7}{l}{ $^*$ our implementation.} \end{tabular} } \end{center} \end{table*} \noindent \textbf{ImageNet.} We assess our proposed SBNN trained with target sparsity of $75\%$ and $90\%$ on ImageNet. We compare them with state-of-the-art BNNs, namely: XNOR-Net~\cite{xnornet}, Bi-RealNet-18~\cite{liu2018birealnet} and ReActNet-18, ReActNet-A \cite{liu2020reactnet} and Subbit \cite{wang2021sub}. Moreover, we also report the accuracy of the full-precision ResNet-18~\cite{he2016resnet} and MobileNetV1~\cite{howard2017mobilenets} models, as a reference. We use a ReActNet-A~\cite{liu2020reactnet} as SBNN's backbone with its MobileNetV1 ~\cite{howard2017mobilenets} inspired topology and with the distillation procedure used in~\cite{liu2020reactnet}, whereas in Subbit \cite{wang2021sub} they used ReActNet-18 as backbone. One of the limitations of Subbit \cite{wang2021sub} is that their method cannot be applied to the pointwise convolutions of MobileNetV1~\cite{howard2017mobilenets}. Due to GPUs limitations, during our training, we decreased the batch size to 64. For a fair comparison, we retrained the original ReActNet-A model with our settings. Table~\ref{Imagenet_results_table} reports the results in terms of accuracy (Acc). We also include the number of operations (OPs) to be consistent with other BNNs assessment on ImageNet. For BNNs, OPs are estimated by the sum of floating-point operations (FLOPs) plus BOPs rescaled by a factor 1/64 \cite{xnornet,liu2018birealnet,liu2020reactnet}. We assume sparse operations on FPGA to estimate BOPs for SBNN. We observe that BOPs are the main contributors to ReActNet-A's OPs (Table~\ref{Imagenet_results_table}), thus decreasing them largely reduces the OPs. This, instead, does not hold for ReActNet-18, which may explain why Subbit is not effective in reducing OPs of its baseline. Our method instead is effective even for less severe pruning targets and it requires less than $3.4\times$ OPs w.r.t. state-of-the-art ReActNet-A model, while incurring in an acceptable generalization loss between $1.9-3.4\%$. 
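For completeness, the OPs column can be reproduced with the rescaling just described (a trivial sketch of ours; the row values are taken from Table~\ref{Imagenet_results_table}):
\begin{verbatim}
# Sketch (ours) of the OPs metric used for ImageNet: OPs = FLOPs + BOPs/64.
def total_ops(bops, flops):
    return flops + bops / 64.0

print(total_ops(48e8, 0.12e8))  # ReActNet-A: 0.87e8 OPs
print(total_ops(8e8, 0.12e8))   # SBNN 75% ReActNet-A: ~0.25e8 OPs
\end{verbatim}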
\begin{table*}[t] \caption{Method comparison on ImageNet. } \label{Imagenet_results_table} \begin{center} \resizebox{.8\linewidth}{!}{ \begin{tabular}{l|crrr} \hline \multicolumn{1}{c|}{Model} & \multicolumn{1}{c}{Acc} & \multicolumn{1}{c}{BOPs} & \multicolumn{1}{c}{FLOPs} & \multicolumn{1}{c}{OPs} \\ & \multicolumn{1}{c}{Top-1} & ($\times10^8$) & ($\times10^8$) & ($\times10^8$)\\ \hline \hline MobileNetV1 \cite{howard2017mobilenets} (full-precision) & $70.60$ & - & 5.7 & 5.7\\% & ResNet-18 \cite{he2016resnet} (full-precision) & $72.12$ & - & 19 & 19\\% & \cdashline{1-5} {XNOR-Net \cite{xnornet}} & $51.20$ & 17 & 1.41 & 1.67\\ {Bi-RealNet-18 \cite{liu2018birealnet}} & $56.40$ & 17 & 1.39 & 1.63\\ {ReActNet-18 \cite{liu2020reactnet}} & $65.50$ & 17 & 1.63 & 1.89\\% & {ReActNet-A \cite{liu2020reactnet}$^*$} & $68.12$ & 48 & 0.12 & 0.87\\% & \cdashline{1-5} Subbit 0.67-bits ReActNet-18 & $63.40$ & 9 & 1.63 & 1.77\\% & {Subbit 0.56-bits ReActNet-18} & $62.10$ & 5 & 1.63 & 1.71\\% & {Subbit 0.44-bits ReActNet-18} & $60.70$ & 3 & 1.63 & 1.68\\% & \rowcolor{LightCyan} {SBNN 75\% ReActNet-A \textbf{[ours]}} & $66.18$ & 8 & 0.12 & 0.25\\%& - \\ \rowcolor{LightCyan} {SBNN 90\% ReActNet-A \textbf{[ours]}} & $64.72$ & 2 & 0.12 & 0.16\\%& - \\ \hline \multicolumn{5}{l}{$^*$ our implementation.} \end{tabular} } \end{center} \end{table*} \section{Conclusions}\label{sec:conclusions} We have presented sparse binary neural network (SBNN), a novel method for sparsifying BNNs that is robust to simple pruning techniques by using a more general binary domain. Our approach involves quantizing weights into a general $\Omega = \{\alpha, \beta\}$ binary domain that is then expressed as 0s and 1s at the implementation stage. We have formulated the SBNN method as a mixed optimization problem, which can be solved using any state-of-the-art BNN training algorithm with the addition of two parameters and a regularization term to control sparsity. Our experiments demonstrate that SBNN outperforms other state-of-the-art pruning methods for BNNs by reducing the number of operations, while also improving the baseline BNN accuracy for severe sparsity constraints. Future research can investigate the potential of SBNN as a complementary pruning technique in combination with other pruning approaches. In summary, our proposed SBNN method provides a simple yet effective solution to improve the efficiency of BNNs, and we anticipate that it will be a valuable addition to the field of binary neural network pruning. \section*{Ethical Statement} \renewcommand{$\bullet$}{$\bullet$} The proposed SBNN can in principle extend the range of devices, at the edge of communication networks, in which DNN models can be exploited. Our work touches various ethical considerations: \begin{itemize} \item \textbf{Data Privacy and Security}: By performing inference of DNNs directly on edge devices, data remains localized and does not need to be transmitted to centralized servers. This reduces the risk of sensitive data exposure during data transfer, enhancing privacy protection. \item \textbf{Fairness and Bias}: SBNNs, like other DNNs at the edge, can be susceptible to biased outcomes, as they rely on training data that may reflect societal biases. However, by simplifying the weight representation to binary values, SBNNs may reduce the potential for biased decision-making because they may be less influenced by subtle variations that can introduce bias. 
Nevertheless, it is essential to address and mitigate biases in data to ensure fairness in outcomes and avoid discriminatory practices. \item \textbf{Transparency and Explainability}: The SBNN approach can be applied to DNN models that are designed to provide transparency and explainability. Moreover, the binary nature of SBNNs can make them more interpretable and easier to understand compared to complex, multi-valued neural networks. This interpretability can help users gain insights into the decision-making process and facilitate transparency. \item \textbf{Human-Centric Design}: SBNNs can extend the use of DNNs at the edge, broadening the range of users of applications focused on human well-being, human dignity, and inclusivity. \item \textbf{Resource Allocation and Efficiency}: SBNNs allow DNNs to be used more efficiently in terms of energy, memory, and other crucial resources, thus helping to reduce the environmental impact of DNNs. \item \textbf{Ethics of Compression}: While SBNNs offer computational efficiency and reduced memory requirements, the compression of complex information into binary values may raise ethical concerns. Compression may lead to oversimplification or loss of critical details, potentially impacting the fairness, accuracy, or reliability of decision-making systems. \end{itemize} It is important to consider these ethical aspects of SBNNs when evaluating their suitability for specific applications and to ensure responsible and ethical deployment in alignment with societal values and requirements. \end{document}
\begin{document} \title{Combinatorics-Based Approaches to Controllability Characterization for Bilinear Systems} \tikzstyle{vertex}=[circle, draw, fill=black!50, inner sep=0pt, minimum width=4pt] \newdimen\R \R=8mm \begin{abstract} The control of bilinear systems has attracted considerable attention in the field of systems and control for decades, owing to their prevalence in diverse applications across science and engineering disciplines. Although much work has been conducted on analyzing controllability properties, the most commonly used tool remains the Lie algebra rank condition. In this paper, we develop alternative approaches based on theory and techniques in combinatorics to study controllability of bilinear systems. The core idea of our methodology is to represent vector fields of a bilinear system by permutations or graphs, so that Lie brackets are represented by permutation multiplications or graph operations, respectively. Following these representations, we derive combinatorial characterizations of controllability for bilinear systems, which in turn provide novel applications of symmetric group theory and graph theory to control theory. Moreover, the developed combinatorial approaches are compatible with Lie algebra decompositions, including the Cartan and non-intertwining decompositions. This compatibility enables the exploitation of representation theory for analyzing controllability, which allows us to characterize controllability properties of bilinear systems governed by semisimple and reductive Lie algebras. \end{abstract} \begin{keywords} Bilinear systems, Lie groups, graph theory, symmetric groups, representation theory, Cartan decomposition \end{keywords} \section{Introduction} Bilinear systems, a class of nonlinear systems, emerge naturally as mathematical models to describe the dynamics of numerous processes in science and engineering. Prominent examples include the Bloch system governing the dynamics of spin-$\tfrac{1}{2}$ nuclei immersed in a magnetic field in quantum physics \cite{Glaser98,Li_PRA06,Li_PNAS11}, the compartmental model describing the movement of cells and molecules in biology \cite{Mohler78,Eisen79,Mohler80}, and the integrate-and-fire model characterizing the membrane potential of a neuron under synaptic inputs and injected current in neuroscience \cite{Dayan05,Gerstner02}. For decades, the prevalence of bilinear systems has actively promoted research in control theory and engineering on the analysis and manipulation of such systems. The initial investigation into control problems involving bilinear systems traces back to 1935, when the Greek mathematician Constantin Carath\'{e}odory studied optimal control of bilinear systems presented in terms of Pfaffian forms by using calculus of variations and partial differential equations \cite{Caratheodory35}. However, the systematic analysis of fundamental properties of bilinear control systems did not flourish until the early 1970s, when leading control theorists, such as Brockett, Jurdjevic, and Sussmann, developed geometric control theory, introducing techniques from Lie theory and differential geometry into classical control theory \cite{Brockett14,Brockett72,Jurdjevic72,Hirschorn75,Brockett76,Hermann77}.
One of the most remarkable results in geometric control theory is the Lie algebra rank condition (LARC), which establishes an equivalence between controllability of control-affine systems defined on smooth manifolds and the Lie algebras generated by the vector fields governing the system dynamics \cite{Brockett72,Isidori95,Jurdjevic96}. In our recent work, based on the LARC, we developed a necessary and sufficient controllability condition for bilinear systems by using techniques in symmetric group theory \cite{Zhang19}. In particular, we introduced a monoid structure on symmetric groups so that Lie bracket operations are compatible with monoid operations. This then resulted in a characterization of controllability in terms of elements in ``symmetric monoids'' for bilinear systems, which also offered an alternative to the LARC and further shed light on interpreting geometric control theory from an algebraic perspective. In this paper, we propose a combinatorics-based framework to analyze controllability of bilinear systems defined on Lie groups by adopting techniques in symmetric group theory and graph theory. Specifically, the main idea is to associate such systems with permutations or graphs, so that Lie bracket operations of the vector fields governing the system dynamics can be represented by permutation multiplications and edge operations on the graphs. This combinatorial approach immediately leads to characterizations of controllability in terms of permutation cycles and graph connectivity. In particular, we identify the classes of bilinear systems for which controllability has equivalent symmetric group and graph representations. A prominent example is the system defined on $\son$, the special orthogonal group, for which we reveal a correspondence between permutation cycles in the symmetric group and trees in the graph associated with these systems. It is worth noting that, different from our previous work on the symmetric group method \cite{Zhang19}, the correspondence between Lie bracket operations and permutation multiplications established in this paper does not require any monoid structure on symmetric groups. On the other hand, the application of graph theory in the developed combinatorics-based framework offers a distinct viewpoint to the field of control theory. Specifically, in the existing literature, graphs are naturally used in the context of networked and multi-agent systems, e.g., for describing the coupling topology and deriving structural controllability conditions \cite{Mou2016,Qin2016,Tsopelakos2019}, while, in this work, we establish a non-trivial relationship between graph connectivity and controllability for a single bilinear system. Moreover, a great advantage of the developed framework is its compatibility with various Lie algebra decomposition techniques in representation theory. In particular, we illustrate the application of these methods to systems whose underlying Lie algebras are semisimple or reductive; in these cases, the correspondence between Lie bracket operations and permutation multiplications, as well as graph operations, is elusive due to the complicated algebraic structures involved. In this work, we exploit the Cartan and non-intertwining decompositions to decompose the system Lie algebras into simple components, so that the combinatorics-based controllability analysis is equivalently carried over to these components. This paper is organized as follows.
In Section~\ref{sec:prelim}, we provide the preliminaries relevant to our developments, including the LARC for systems on Lie groups and a brief review of the Lie algebra $\mathfrak{so}(n)$. In Section~\ref{sec:com.framework}, we establish the symmetric group and graph-theoretic methods based upon the study of bilinear systems on $\son$. In Section~\ref{sec:non-standard}, we introduce the notions and tools of Cartan and non-intertwining decompositions for decomposing the system Lie algebras into simpler components, which facilitates the generalization of the combinatorics-based framework to broader classes of bilinear systems. A brief review of the basics of symmetric groups and Lie algebra decompositions can be found in the appendices. \section{Preliminaries} \label{sec:prelim} To prepare for our development of the combinatorial controllability conditions, in this section, we briefly review the Lie algebra $\mathfrak{so}(n)$ and the LARC for right-invariant bilinear systems. Meanwhile, we introduce the notations we use throughout this paper. \subsection{The Lie Algebra Rank Condition} The LARC has been the most recognizable, if not the only, tool for analyzing controllability of bilinear systems since the 1970s. It establishes a connection between controllability and the Lie algebra generated by the vector fields governing the system dynamics. In this paper, we primarily focus on the bilinear system evolving on a compact and connected Lie group of the form, \begin{equation} \label{eq:bilinear.system} \dot{X}(t)=B_0X(t)+\Bigl(\sum_{i=1}^mu_i(t)B_i\Bigr)X(t),\quad{}X(0)=I, \end{equation} where $X(t)\in G$ is the state on a compact and connected Lie group $G$, $I$ is the identity element of $G$, $B_i$ are elements in the Lie algebra $\mathfrak{g}$ of $G$, and $u_i(t)\in\mathbb{R}$ are piecewise constant control inputs. For any subset $\Gamma\subseteq\mathfrak{g}$, we use $\mathrm{Lie}\,(\Gamma)$ to denote the Lie subalgebra generated by $\Gamma$, i.e., the smallest vector subspace of $\mathfrak{g}$ containing $\Gamma$ that is closed under the Lie bracket defined by $[C,D]:=CD-DC$ for $C,D\in\mathfrak{g}$. With these notations, the LARC for the system in \cref{eq:bilinear.system} can be stated as follows. \begin{theorem}[LARC] \label{thm:LARC} The system in \cref{eq:bilinear.system} is \emph{controllable} on $G$ if and only if $\mathrm{Lie}\,(\Gamma)=\mathfrak{g}$, where $\Gamma=\{B_0, B_1, \ldots, B_m\}$. \end{theorem} \begin{proof} See \cite{Brockett72}. \end{proof} \subsection{Basics of the Lie Algebra \texorpdfstring{{$\mathfrak{so}(n)$}}{so(n)}} The Lie algebra $\mathfrak{so}(n)$ is a vector space of dimension $n(n-1)/2$, which consists of all $n$-by-$n$ real skew-symmetric matrices. In particular, if we use $\Omega_{ij}$ to denote the skew-symmetric matrix with $1$ in the $(i,j)$-th entry, ${-1}$ in the $(j,i)$-th entry, and $0$ elsewhere, then the set $\mathcal{B} =\{\Omega_{ij}\in\mathbb{R}^{n\times{}n}: 1\leqslant{}i<j\leqslant{}n\}$ forms a basis of $\mathfrak{so}(n)$, which we refer to as the \emph{standard basis} of $\mathfrak{so}(n)$. The following lemma then reveals the Lie bracket relations among elements in $\mathcal{B}$.
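Before stating it formally, we remark that the bracket relations, and the LARC itself, can also be checked numerically by brute force. The following minimal sketch is an illustration only, assuming NumPy is available; the helper names \texttt{omega} and \texttt{lie\_dimension} are ours and are not part of any cited implementation. It builds standard basis elements of $\mathfrak{so}(n)$, iterates matrix commutators, and compares the dimension of the resulting span with $\dim\mathfrak{so}(n)=n(n-1)/2$.
\begin{verbatim}
import itertools
import numpy as np

def omega(i, j, n):
    # Standard basis element Omega_ij of so(n), with 1 <= i < j <= n.
    M = np.zeros((n, n))
    M[i - 1, j - 1], M[j - 1, i - 1] = 1.0, -1.0
    return M

def lie_dimension(generators, tol=1e-10):
    # Dimension of the Lie algebra generated by the given matrices,
    # obtained by adding commutators until the span stops growing.
    def rank(mats):
        return np.linalg.matrix_rank(np.stack([m.ravel() for m in mats]), tol=tol)
    basis = list(generators)
    while True:
        current = rank(basis)
        brackets = [A @ B - B @ A for A, B in itertools.combinations(basis, 2)]
        basis = basis + [C for C in brackets if np.linalg.norm(C) > tol]
        if rank(basis) == current:
            return current

# Gamma = {Omega_{i,i+1} : i = 1,...,4} generates so(5): both printed values are 10,
# so the corresponding driftless system is controllable on SO(5) by the LARC.
n = 5
Gamma = [omega(i, i + 1, n) for i in range(1, n)]
print(lie_dimension(Gamma), n * (n - 1) // 2)
\end{verbatim}
The printed dimension agrees with the hand computation carried out for the same set of vector fields in \cref{ex:so(5)} below.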
\begin{lemma} \label{lem:son} The Lie bracket of $\Omega_{ij}$ and $\Omega_{kl}$ satisfies the relation $[\Omega_{ij},\Omega_{kl}]=\delta_{jk}\Omega_{il}+\delta_{il}\Omega_{jk}+\delta_{jl}\Omega_{ki}+\delta_{ik}\Omega_{lj}$, where $\delta$ is the Kronecker delta function defined by \[ \delta_{mn}= \begin{cases} 1, & \text{if }m=n; \\ 0, & \text{otherwise}. \end{cases} \] \end{lemma} \begin{proof} The proof follows directly from computations. \end{proof} The relations in \cref{lem:son} can also be equivalently expressed as follows: for $\{i,j\}\neq\{k,l\}$, $[\Omega_{ij},\Omega_{kl}]\neq{}0$ if and only if $i=k$, $i=l$, $j=k$, or $j=l$. This algebraic structure facilitates controllability characterization of the bilinear system governed by the vector fields represented in the standard basis $\mathcal{B}$, which is the main focus of the next section. \section{Combinatorics-Based Controllability Analysis for Bilinear Systems} \label{sec:com.framework} In this section, we introduce a combinatorics-based framework to characterize controllability of bilinear systems. Within this framework, we adopt tools from two subfields of combinatorics, namely symmetric group theory and graph theory, and connect Lie brackets of vector fields to permutation multiplications in symmetric groups and to operations on graph edges, respectively. Such connections enable us to characterize controllability in terms of permutation cycles and graph connectivity. Here, we will investigate bilinear systems defined on $\son$, given by \begin{equation} \label{eq:system_SOn} \dot{X}(t)=\Omega_{i_0j_0}X+\Bigl(\sum_{k=1}^mu_k(t)\Omega_{i_k j_k}\Bigr)X, \quad \Omega_{i_kj_k}\in\mathcal{B}, \quad X(0)=I, \end{equation} as building blocks to establish this framework. Furthermore, we will show that owing to the special algebraic structure of $\mathfrak{so}(n)$ presented in \cref{lem:son}, the symmetric group and graph-theoretic approaches, when applied to \eqref{eq:system_SOn}, give equivalent characterizations of controllability through an interconnection between symmetric groups and graphs. \subsection{The Symmetric Group Method for Controllability Analysis} \label{sec:Sn} In this subsection, we introduce the symmetric group method for analyzing controllability of the system in \eqref{eq:system_SOn}. In this approach, a subset of vector fields in $\mathcal{B}$ is represented using a permutation in $S_n$, the symmetric group on $n$ letters. Through this representation, we connect the Lie brackets of vector fields to permutation multiplications, so that controllability is determined by the length of permutation cycles. For a brief summary of symmetric groups and permutations, see \cref{appd:Sn}. \subsubsection{Mapping Lie Brackets to Permutations} To establish a relation from Lie brackets to permutation multiplications, we first define a \emph{relation} between subsets of $\mathcal{B}$ (elements of the power set $\mathcal{P}(\mathcal{B})$) and permutations in $S_n$ by \begin{equation} \label{eq:iota} \iota:\mathcal{P}(\mathcal{B})\rightarrow S_n,\quad \iota(\{\Omega_{i_0j_0},\Omega_{i_1j_1},\dots,\Omega_{i_mj_m}\})= (i_0j_0)(i_1j_1)\cdots(i_mj_m). \end{equation} Because every permutation can be decomposed into a product of transpositions ($2$-cycles), the relation $\iota$ is surjective; moreover, every subset of $\mathcal{B}$ admits a permutation representation. To see how Lie bracket operations are related to permutation multiplications by $\iota$, we illustrate the idea using two elements $\Omega_{ij},\Omega_{kl}\in\mathcal{B}$.
On the Lie algebra level, if $[\Omega_{ij},\Omega_{kl}]\neq0$, then \cref{lem:son} implies that $\{i,j\}$ and $\{k,l\}$ have a common index. Without loss of generality, we may assume $j=k$ and $i\neq l$, so that $[\Omega_{ij},\Omega_{jl}]=\Omega_{il}$. Meanwhile, on the symmetric group level, we have $\iota(\{\Omega_{ij},\Omega_{jl}\})=(ij)(jl)=(ijl)$, so the permutation multiplication increases the cycle length by $1$, from the $2$-cycle factors $(ij)$ and $(jl)$ to a $3$-cycle $(ijl)$. However, if $[\Omega_{ij},\Omega_{kl}]=0$, then $\{i,j\}\cap\{k,l\}=\emptyset$, and $\iota(\{\Omega_{ij},\Omega_{kl}\})=(ij)(kl)$ is a product of two disjoint cycles. The phenomenon that elements of $\mathcal{B}$ with non-vanishing Lie brackets correspond to a cycle of increased length extends inductively to larger subsets of $\mathcal{B}$. To be more specific, if $\Gamma\subset\mathcal{B}$ contains $m$ elements whose iterated Lie brackets are non-vanishing, then $\iota(\Gamma)$ is an $(m+1)$-cycle. This observation immediately motivates the use of cycle length to examine controllability of systems on $\son$ in \cref{eq:system_SOn}. Before we state and prove our main theorem, let us first illustrate the symmetric group method by two examples. \begin{example} \label{ex:so(5)} Consider a system evolving on $\mathrm{SO}(5)$, given by \begin{equation} \label{eq:ex_so(5)} \dot{X}(t)=\Bigl(\sum_{i=1}^4u_i(t)\Omega_{i,i+1}\Bigr)X(t),\quad X(0)=I, \end{equation} and let $\Gamma=\{\Omega_{i,i+1}: i=1,\ldots,4\}$ denote the set of control vector fields. The correspondence between Lie brackets in $\Gamma$ and permutation multiplications in $S_5$ is given by \begin{equation} \label{eq:so5_S5} \begin{aligned} [\Omega_{12},\Omega_{23}] &=\Omega_{13} & &\leftrightarrow & (12)(23) &=(123), \\ [\Omega_{23},\Omega_{34}] &=\Omega_{24} & &\leftrightarrow & (23)(34) &=(234), \\ [\Omega_{34},\Omega_{45}] &=\Omega_{35} & &\leftrightarrow & (34)(45) &=(345), \\ [\Omega_{12},[\Omega_{23},\Omega_{34}]] &=\Omega_{14} & &\leftrightarrow & (12)(234) &=(1234), \\ [\Omega_{23},[\Omega_{34},\Omega_{45}]] &=\Omega_{25} & &\leftrightarrow & (23)(345) &=(2345), \\ [\Omega_{12},[\Omega_{23},[\Omega_{34},\Omega_{45}]]] &=\Omega_{15} & &\leftrightarrow & (12)(2345) &=(12345). \end{aligned} \end{equation} Note that successively Lie bracketing elements in $\Gamma$ results in $\Omega_{13}$, $\Omega_{14}$, $\Omega_{15}$, $\Omega_{24}$, $\Omega_{25}$, and $\Omega_{35}$; together with the $4$ elements of $\Gamma$, this yields $10$ linearly independent vector fields. Because $\mathfrak{so}(5)$ is a $10$-dimensional Lie algebra, we conclude $\mathrm{Lie}\,(\Gamma)=\mathfrak{so}(5)$, which implies that the system in \cref{eq:ex_so(5)} is controllable on ${\rm SO}(5)$ by the LARC. On the other hand, \cref{eq:so5_S5} also shows $\iota(\Gamma)=(12345)$, a cycle of maximum length in $S_5$. This suggests that controllability of systems on $\son$ can be characterized by cycles of \emph{maximum} length in the corresponding symmetric group. \end{example} \begin{example} \label{ex:so(5)_2} Consider another system evolving on ${\rm SO}(5)$ driven by three controls, given by \begin{equation} \label{eq:ex_so(5)_2} \dot{X}(t)=\bigl(u_1(t)\Omega_{12}+u_2(t)\Omega_{23}+u_3(t)\Omega_{45}\bigr)X(t), \quad X(0)=I.
\end{equation} In this case, the single Lie brackets, \[ \begin{aligned} [\Omega_{12},\Omega_{23}] &=\Omega_{13} & &\leftrightarrow & &(12)(23)=(123),\\ [\Omega_{12},\Omega_{45}] &=0 & &\leftrightarrow & &(12)(45),\\ [\Omega_{23},\Omega_{45}] &=0 & &\leftrightarrow & &(23)(45), \end{aligned} \] and the double Lie brackets, \[ \begin{aligned} [\Omega_{13},\Omega_{12}] &=[[\Omega_{12},\Omega_{23}],\Omega_{12}]=\Omega_{23} & &\leftrightarrow & (12)(23)(12) &=(13), \\ [\Omega_{23},\Omega_{13}] &=[\Omega_{23},[\Omega_{12},\Omega_{23}]]=\Omega_{12} & &\leftrightarrow & (23)(12)(23) &=(13), \\ [\Omega_{13},\Omega_{45}] &=[[\Omega_{12},\Omega_{23}],\Omega_{45}]=0 & &\leftrightarrow & (12)(23)(45) &=(123)(45), \end{aligned} \] result in a Lie subalgebra of dimension $4$. Therefore, this system is \emph{not} controllable on ${\rm SO}(5)$. On the other hand, for $\Gamma=\{\Omega_{12}, \Omega_{23}, \Omega_{45}\}$, the computations above also show $\iota(\Gamma)=(123)(45)$, which is \emph{not} a single cycle of maximum length in $S_5$. \end{example} \Cref{ex:so(5),ex:so(5)_2} together verify the observation that cycles of maximum length characterize controllability of bilinear systems on $\son$, which we will prove in the next section. \begin{remark} \label{rmk:iota.not.a.map} Note that the relation $\iota$ introduced in \cref{eq:iota} is \emph{not} a well-defined function, because, for a given $\Gamma\subseteq\mathcal{B}$, $\iota(\Gamma)$ depends on the ordering of the elements in $\Gamma$. If, say, $\Gamma=\{\Omega_{12}, \Omega_{14}, \Omega_{23}, \Omega_{24}, \Omega_{34}\}$, then different element orderings, \[ \begin{aligned} &\{\Omega_{12}, \Omega_{14}, \Omega_{23}, \Omega_{24}, \Omega_{34}\} & &\leftrightarrow & (12)(14)(23)(24)(34) &=(14) \\ &\{\Omega_{14}, \Omega_{12}, \Omega_{24}, \Omega_{23}, \Omega_{34}\} & &\leftrightarrow & (14)(12)(24)(23)(34) &=(1234) \end{aligned} \] could result in \emph{different} permutations. Nevertheless, we can verify that for any $\Gamma\subseteq\mathcal{B}$, there always exists a subset $\Sigma\subseteq\Gamma$ such that $\iota$ relates $\Sigma$ to permutations with the same (maximal) orbits, regardless of the ordering of the elements in $\Sigma$. For example, for the subset $\Sigma=\{\Omega_{12}, \Omega_{23}, \Omega_{34}\}$ of $\Gamma$, $\iota(\Sigma)$ is always a $4$-cycle with its orbit being $\{1,2,3,4\}$, regardless of the element ordering. The existence of such a subset will be clear once we develop a graph visualization of the permutations in Section~\ref{sec:graph}. \end{remark} \subsubsection{Controllability Characterization in Terms of Permutation Cycles} Leveraging the technique of mapping Lie brackets to permutations developed in the previous section, we are able to characterize controllability of systems on $\son$ in terms of permutation cycles as shown in the following theorem. \begin{theorem} \label{thm:SOn_Sn} The control system defined on $\son$ of the form \begin{equation} \label{eq:son} \dot{X}(t)=\Bigl(\Omega_{i_0j_0}+\sum_{k=1}^mu_k(t)\Omega_{i_kj_k}\Bigr)X(t), \quad X(0)=I, \end{equation} (the same system as in \cref{eq:system_SOn}) where $\Gamma:=\{\Omega_{i_kj_k}: k=0,\dots,m\}\subseteq\mathcal{B}$, is controllable if and only if there is a subset $\Sigma\subseteq\Gamma$ such that $\iota(\Sigma)$ is an $n$-cycle, where $\iota$ is the relation defined in \cref{eq:iota}. \end{theorem} \begin{proof} By the LARC, the system in \cref{eq:son} is controllable on $\son$ if and only if $\mathrm{Lie}\,(\Gamma)=\mathfrak{so}(n)$.
Therefore, it is equivalent to showing that $\mathrm{Lie}\,(\Sigma)=\mathfrak{so}(n)$ if and only if $\iota(\Sigma)$ is an $n$-cycle for some $\Sigma\subseteq\Gamma$. (Sufficiency): Suppose there exists a subset $\Sigma\subseteq\Gamma$ such that $\iota(\Sigma)$ is an $n$-cycle. Because an $n$-cycle can be decomposed into a product of at least $n-1$ transpositions, this implies that $\Sigma$ contains at least $n-1$ elements. Hence, it suffices to assume that the cardinality of $\Sigma$ is $n-1$, and, without loss of generality, let $\Sigma=\{\Omega_{i_1j_1},\dots,\Omega_{i_{n-1}j_{n-1}}\}$. Because $\iota(\Sigma)$ is an $n$-cycle, it follows that the index set $\{i_1,j_1,\dots,i_{n-1},j_{n-1}\}=\{1,\ldots,n\}$. Note that the set $\{i_1,j_1,\dots,i_{n-1},j_{n-1}\}$ contains repeated elements. Next, we prove the sufficiency by induction. When $n=3$, suppose there exists a subset $\Sigma=\{\Omega_{ij},\Omega_{kl}\}\subset\Gamma$ and that $\iota(\Sigma)=(ij)(kl)$ is a 3-cycle, so we must have one of the following: $i=k$, $j=k$, $i=l$, or $j=l$. Consequently, $[\Omega_{ij},\Omega_{kl}]$ equals, up to sign, an element of $\mathcal{B}\backslash\Sigma$, so $\{\Omega_{ij},\Omega_{kl},[\Omega_{ij},\Omega_{kl}]\}$ spans $\mathfrak{so}(3)$. Therefore, the system in \cref{eq:son} is controllable on $\mathrm{SO}(3)$. Now let us assume that for $n\geqslant 4$, a system defined on ${\rm SO}(n-1)$ in the form of \cref{eq:son} is controllable if there is $\Sigma\subseteq\Gamma$ such that $\iota(\Sigma)$ is an $(n-1)$-cycle. Let $\Sigma\subseteq\Gamma$ be a set of $n-1$ elements such that $\iota(\Sigma)=(i_{n-1}j_{n-1})(i_{n-2}j_{n-2})\cdots(i_1j_1)$ is a cycle of length $n$; then for every integer $1 \leqslant{}k \leqslant{}n-1$, there exists some $1 \leqslant{}l \leqslant{}n-1$ with $l\neq k$ such that $\{i_k,j_k\}\cap\{i_l,j_l\}\neq\emptyset$. Consequently, there are $n-2$ transpositions of the form $(i_kj_k)$, $k=1,\dots,n-1$, such that their product is a cycle of length $n-1$. Without loss of generality, we may assume that $\iota(\Sigma\backslash\{\Omega_{i_{n-1}j_{n-1}}\}) =(i_{n-2}j_{n-2})\cdots(i_1j_1)$ is an $(n-1)$-cycle with the nontrivial orbit $\{i_1,j_1,\dots,i_{n-2},j_{n-2}\} =\{1,\dots,n-1\}$. By the induction hypothesis, the system in \cref{eq:son} is controllable on $\mathrm{SO}(n-1)\subset\son$. Equivalently, any $\Omega_{ij}\in \mathcal{B}$ such that $1 \leqslant{} i<j\leqslant n-1$ can be generated by iterated Lie brackets of the elements in $\Sigma\backslash\{\Omega_{i_{n-1}j_{n-1}}\}$. Because $\iota(\Sigma)=(i_{n-1}j_{n-1})\iota(\Sigma\backslash\{\Omega_{i_{n-1}j_{n-1}}\})$ is an $n$-cycle, we must have $i_{n-1}\in\{1,\dots,n-1\}$ and $j_{n-1}=n$. Therefore, for any $k=1,\dots,n-1$ with $k\neq i_{n-1}$, $\Omega_{kn}$ can be generated by the Lie bracket $[\Omega_{ki_{n-1}},\Omega_{i_{n-1}j_{n-1}}]$, while $\Omega_{i_{n-1}n}=\Omega_{i_{n-1}j_{n-1}}\in\Sigma$. As a result, the system in \cref{eq:son} is controllable on $\son$. (Necessity): Because the system in \cref{eq:son} is controllable, $\mathrm{Lie}\,(\Gamma)=\mathfrak{so}(n)$. Then, there exists a subset $\Sigma$ of $\Gamma$ such that $\mathrm{Lie}\,(\Sigma)=\mathfrak{so}(n)$ and $\Sigma$ contains no \emph{redundant elements}, i.e., elements that can be generated by iterated Lie brackets of the other elements in $\Sigma$. Without loss of generality, we assume $\Sigma=\{\Omega_{i_1j_1},\dots,\Omega_{i_lj_l}\}$, where $l\leqslant m+1$. By \cref{lem:son}, for any $\Omega_{ab},\Omega_{cd}\in\Sigma$, if $[\Omega_{ab},\Omega_{cd}]\neq 0$, then there must exist a bridging index, i.e., we must have one of the following cases: $a=c$, $a=d$, $b=c$, or $b=d$.
This, together with $\mathrm{Lie}\,(\Sigma)=\mathfrak{so}(n)$, implies that the index set $J$ of $\Sigma$ is $J=\{i_1,j_1,\dots,i_l,j_l\}=\{1,\dots,n\}$, and that for any $\Omega_{i_kj_k}\in\Sigma$, there exists some $\Omega_{i_sj_s}\in \Sigma$ with $s\neq k$ such that $\{i_k,j_k\}\cap\{i_s,j_s\}\neq\emptyset$. Moreover, because $\Sigma$ contains no redundant elements, $\iota(\Sigma)=(i_lj_l)\cdots(i_1j_1)$ is a cycle whose orbit contains every element in $\{1,\dots,n\}$, namely, it is a cycle of length $n$. In addition, the cardinality of $\Sigma$ is $n-1$. \end{proof} \begin{remark} \label{rem:number_of_control} Following the above proof, full controllability of the system on $\son$ in \cref{eq:son} requires at least $n-1$ vector fields from $\mathcal{B}$, that is, at least $n-2$ control inputs in addition to the drift; this is also the minimum number of transpositions needed for $\iota(\Sigma)$, $\Sigma\subseteq\Gamma$, to reach a cycle of length $n$. \end{remark} Similar to the case in \cref{thm:SOn_Sn} for controllable systems, the controllable submanifold for an uncontrollable system also depends on the permutation related to a subset of $\Gamma$. To be more specific, the cycle decomposition of such a permutation determines the involutive distribution of the submanifold. \begin{corollary} \label{cor:Sn_submanifold} Given a system evolving on $\son$ in the form of \cref{eq:system_SOn}, let $\Xi$ be a minimal subset of $\Gamma$ such that $\mathrm{Lie}\,(\Xi)=\mathrm{Lie}\,(\Gamma)$. If $\iota(\Xi)=\sigma_1\sigma_2\cdots\sigma_l$, where the $\sigma_k$, $1 \leqslant{} k \leqslant{}l$, are pairwise disjoint cycles with nontrivial orbits $\mathcal{O}_k$, then the controllable submanifold of the system is the Lie subgroup of $\son$ with the Lie algebra $\mathrm{Lie}\,(\Gamma)=\bigoplus_{k=1}^l{\rm span}\,\{\Omega_{ij}:i,j\in\mathcal{O}_k\}$. Conversely, if $\mathrm{Lie}\,(\Gamma)=\bigoplus_{k=1}^l{\rm span}\,\{\Omega_{ij}:i,j\in\mathcal{O}_k\}$ for some $\mathcal{O}_k\subset\{1,2,\dots,n\}$, then $\iota(\Xi)=\sigma_1\sigma_2\cdots\sigma_l$, where the $\sigma_k$ are disjoint cycles with nontrivial orbits $\mathcal{O}_k$. \end{corollary} \begin{proof} Let $\Xi$ be a minimal subset of $\Gamma$ such that $\mathrm{Lie}\,(\Xi)=\mathrm{Lie}\,(\Gamma)$ and $\Xi$ does not contain redundant elements. First, let $\sigma=\iota(\Xi)\in S_n$ be a cycle with nontrivial orbit $\mathcal{O}$; then \cref{thm:SOn_Sn} implies $\mathrm{Lie}\,(\Xi)=\mathrm{span}\,\{\Omega_{ij}:i,j\in\mathcal{O},i<j\}$. Next, if $\sigma=\sigma_1\cdots\sigma_l$ is a permutation given as a product of disjoint cycles $\sigma_1,\dots,\sigma_l$ with $l\geqslant 2$, then there exists a partition $\{\Xi_1,\dots,\Xi_l\}$ of $\Xi$ such that $\iota(\Xi_k)=\sigma_k$ for each $k=1,\dots,l$. Let $\mathcal{O}_k$ denote the nontrivial orbit of $\sigma_k$ for each $k=1,\dots,l$; then $\mathrm{Lie}\,(\Xi_k)=\mathrm{span}\,\{\Omega_{ij}:i,j\in\mathcal{O}_k,i<j\}$ and the sets $\mathcal{O}_1,\dots,\mathcal{O}_l$ are pairwise disjoint subsets of $\{1,\dots,n\}$. Hence, $\mathrm{Lie}\,(\Xi_i)\cap\mathrm{Lie}\,(\Xi_j)=\{0\}$ holds for all $i\neq j$, and consequently, we have $\mathrm{Lie}\,(\Xi)=\mathrm{Lie}\,(\Xi_1)\oplus\cdots\oplus\mathrm{Lie}\,(\Xi_l)$, where $\oplus$ denotes the direct sum of vector spaces. By the Frobenius Theorem \cite{warner.lie.groups}, $\mathrm{Lie}\,(\Xi)$ is completely integrable, and the set of all its maximal integral manifolds forms a foliation $\mathcal{F}$ of $\son$.
Since the initial condition of the system in \cref{eq:son} is the identity matrix $I$, the leaf of $\mathcal{F}$ passing through $I$ is the controllable submanifold of the system in \cref{eq:son}. The converse follows from a very similar argument. \end{proof} According to \cref{thm:SOn_Sn,cor:Sn_submanifold}, mapping the control vector fields in $\Gamma$ to permutations provides not only an alternative approach to effectively examine controllability of systems defined on $\son$, but also a systematic procedure to characterize the controllable submanifold when the system is not fully controllable. Let us now revisit a previous example and see how permutations help determine system controllability. \begin{example}[Controllable Submanifold] \label{ex:controllable_submanifold} Recall \cref{ex:so(5)_2}, where the system in \cref{eq:ex_so(5)_2} is not controllable and there exists no subset $\Sigma$ of $\Gamma=\{\Omega_{12},\Omega_{23},\Omega_{45}\}$ such that $\iota(\Sigma)$ is a $5$-cycle. In addition, the controllable submanifold is the integral manifold of the involutive distribution $\Delta=\mathrm{Lie}\,\{\Omega_{12}X, \Omega_{23}X, \Omega_{13}X, \Omega_{45}X\}=\mathrm{span}\,\{\Omega_{ij}X: i,j\in\{1,2,3\}\text{ or }i,j\in\{4,5\}\}$, which can be identified by the nontrivial orbits of $\iota(\Gamma)=(123)(45)$. On the other hand, for each $X\in\mathrm{SO}(5)$, the complement $\Delta_X^{\perp}={\rm span}\,\{\Omega_{ij}X:i=1,2,3, j=4,5\}$ of the distribution evaluated at $X$ contains the bridging elements required for full controllability of this system. \end{example} \subsection{The Graph-Theoretic Method for Controllability Analysis} \label{sec:graph} Graphs appear naturally in the research of networked systems, especially in modeling multi-agent systems and analyzing structural controllability \cite{Mou2016,Qin2016,Tsopelakos2019}. However, in the existing literature, most graph-theoretic methods are dedicated to studying networked control systems, rather than to understanding fundamental properties of a single bilinear system. Here, we use graphs to represent the structure of Lie algebras and then characterize controllability of bilinear systems by graph connectivity. In contrast to the symmetric group method presented in Section \ref{sec:Sn}, this graph-theoretic method establishes a correspondence between Lie bracket operations of vector fields and operations on the edges of graphs. \subsubsection{Mapping Lie Brackets to Graphs} \label{sec:Lie_graph} A graph $G$, conventionally denoted by a 2-tuple, $G=(V,E)$, consists of a vertex set $V$ and an edge set $E$. For the purpose of analyzing controllability of the system on $\son$, we are particularly interested in simple graphs, i.e., undirected graphs with no loops or multiple edges, of $n$ vertices. Here, we denote the collection of such graphs by $\mathcal{G}$. Without loss of generality, we further assume that every graph in $\mathcal{G}$ has the same vertex set $V=\{v_1,\dots,v_n\}$. Following these notations, we define a map \begin{equation} \label{eq:graph-map} \tau: \mathcal{P}(\mathcal{B})\rightarrow\mathcal{G}\quad{}\text{by}\quad{} \tau(\Gamma)=(V,E_{\Gamma}):=G_\Gamma, \end{equation} where $\mathcal{P}(\mathcal{B})$ denotes the power set of $\mathcal{B}$, i.e., the set consisting of all subsets of $\mathcal{B}$, and $E_\Gamma=\{v_iv_j:\Omega_{ij}\in\Gamma\}$. Some basic properties of $\tau$ are summarized in the following proposition.
\begin{proposition}[Properties of $\tau$] \label{prop:property_tau} \mbox{} \begin{enumerate}[font=\normalfont,label={(\roman*)}] \item The map $\tau$ defined in \cref{eq:graph-map} is bijective. \item For any $\Gamma\subseteq\mathcal{B}$, $|\Gamma|=|E_\Gamma|$ holds, where $|\cdot|$ denotes the cardinality of a set. \item Let $K_n$ denote the complete graph of $n$ vertices, i.e., the graph whose vertices are pairwise adjacent; then $\tau(\mathcal{B})=K_n$. \end{enumerate} \end{proposition} \begin{proof} Note that (i) and (ii) directly follow from the definition of $\tau$. For (iii), the edge set of $\tau(\mathcal{B})$ satisfies $E_{\mathcal{B}} =\{v_iv_j:\Omega_{ij}\in\mathcal{B}\} =\{v_iv_j:1\leqslant{}i<j\leqslant{}n\}$, which is precisely the edge set of $K_n$, and hence we conclude $\tau(\mathcal{B})=K_n$. \end{proof} Property (i) in \cref{prop:property_tau} reveals a one-to-one correspondence between the subsets of $\mathcal{B}$ and the graphs in $\mathcal{G}$, which enables the representation of Lie bracket operations by graph operations as follows. Algebraically, for any $\Omega_{ij},\Omega_{jk}\in\mathcal{B}$, \cref{lem:son} implies $[\Omega_{ij},\Omega_{jk}] =\Omega_{ik}\neq{}0$, so that $\mathrm{Lie}\,\{\Omega_{ij},\Omega_{jk}\} ={\mathrm{span}}\{\Omega_{ij},\Omega_{jk},\Omega_{ik}\}$. Graphically, by the definition of $\tau$, the two edges $\tau(\Omega_{ij})=v_iv_j$ and $\tau(\Omega_{jk})=v_jv_k$ share a common vertex $v_j$, and the edge $\tau([\Omega_{ij},\Omega_{jk}])=\tau(\Omega_{ik})=v_iv_k$ intersects with $\tau(\Omega_{ij})$ and $\tau(\Omega_{jk})$ at endpoints $v_i$ and $v_k$, respectively. Therefore, the three edges $\tau(\Omega_{ij})$, $\tau(\Omega_{jk})$, and $\tau([\Omega_{ij},\Omega_{jk}])$ form a triangle, or equivalently, $\tau(\{\Omega_{ij},\Omega_{jk},[\Omega_{ij},\Omega_{jk}]\}) =\{v_iv_j,v_jv_k,v_iv_k\}=K_3$. This observation, as summarized in the following lemma, reveals the relationship between first-order Lie brackets and graph operations for standard basis elements of $\mathfrak{so}(n)$, which lays the foundation for the graph-theoretic controllability analysis of bilinear systems. \begin{lemma} \label{lem:lie-graph} If $\Omega_{ij},\Omega_{kl}\in\mathcal{B}$ satisfy $[\Omega_{ij},\Omega_{kl}]\neq0$, then \begin{enumerate}[font=\normalfont,label={(\roman*)}] \item the two edges $\tau(\Omega_{ij})$ and $\tau(\Omega_{kl})$ are \emph{incident} (i.e., they share a common vertex); \item the three edges $\tau(\Omega_{ij})$, $\tau(\Omega_{kl})$, and $\tau([\Omega_{ij},\Omega_{kl}])$ form a triangle. \end{enumerate} \end{lemma} To graphically characterize higher-order Lie brackets among arbitrary collections of standard basis elements of $\mathfrak{so}(n)$, we introduce the notion of triangular closure for graphs, which generalizes the action of ``forming triangles'' in \cref{lem:lie-graph}. \begin{definition}[Triangular Closure] \label{def:tri.closure} Let $G=(V,E)$ be a graph, and $\{G^m=(V,E^m):m=0,1,\dots\}$ be an ascending chain of graphs, i.e., $G^m\subseteq G^{m+1}$ for any $m=0,1,\dots$, satisfying \begin{enumerate}[font=\normalfont,label={(\roman*)}] \item $G^0=G$, i.e., $E^0=E$. \item For any $m\geqslant{}0$, $v_iv_j\in{}E^{m+1}$ if and only if $v_iv_j\in{}E^m$ or there exists some vertex $v_k\in{}V$ such that $v_iv_k,v_kv_j\in{}E^m$. \end{enumerate} Then the union of all $G^m$, denoted $\bar{G}=\bigcup_{m=0}^\infty G^m$, or equivalently, $\bar{G}=(V,\bar{E})=(V,\bigcup_{m=0}^{\infty}E^m)$, is called the \emph{triangular closure} of $G$.
Moreover, a graph $G$ is called \emph{triangularly closed} if $G=\bar{G}$. \end{definition} Note that for a finite graph $G$, i.e., a graph with finitely many vertices and edges, the ascending chain of graphs $G =G^0\subseteq{} G^1\subseteq\cdots$ in \cref{def:tri.closure} stabilizes in finitely many steps; that is, there exists a nonnegative integer $m$ such that $G^m=G^{m+1}=\cdots$, which then implies $\bar{G}=G^m$. In particular, for a graph with $n$ vertices, since it has at most $n(n-1)/2$ edges, its triangular closure can be obtained in at most $n(n-1)/2$ steps. \begin{remark} For readers familiar with graph theory, \cref{def:tri.closure} is mathematically equivalent to the standard definition of \emph{transitive closure}, and the equivalence will become transparent in the proof of \cref{thm:connectivity.controllability}. The triangular closure we introduce here imitates the computations of graded Lie brackets/algebras in a more natural way, so that all orders of Lie brackets can be calculated in a graph. \end{remark} Recall from \cref{lem:lie-graph} that given a subset $\Gamma\subseteq\mathcal{B}$ and its associated graph $G=\tau(\Gamma)$, taking first-order Lie brackets of the elements in $\Gamma$ corresponds to adding edges that connect the endpoints of incident edges in $G$. Applying this procedure to $G=G^{0}$, as defined in \cref{def:tri.closure}, exactly results in $G^1$. Inductively, successively Lie bracketing the elements in $\Gamma$ up to order $m$ will generate the graph $G^m$, as shown below. \begin{theorem} \label{thm:lie-graph} Given a subset $\Gamma\subseteq\mathcal{B}$, let $\Gamma^0\subseteq\Gamma^1\subseteq\cdots$ be an ascending chain of subsets of $\mathcal{B}$ such that $\Gamma^0=\Gamma$, $\Gamma^1=[\Gamma^{0},\Gamma^{0}]\bigcup\Gamma^{0}$, $\dots$, $\Gamma^{m+1}=[\Gamma^{m},\Gamma^{m}]\bigcup\Gamma^{m}$, $\dots$, where $[\Gamma^{m},\Gamma^{m}]=\{[A,B]:A,B\in\Gamma^{m}\}$ (nonzero brackets being identified, up to sign, with the corresponding elements of $\mathcal{B}$, and zero brackets being discarded). Then $G^m=\tau(\Gamma^m)$ holds for all $m=0,1,\ldots$ \end{theorem} \begin{proof} This follows immediately from the definitions of $G^m$ and $\Gamma^m$. \end{proof} Recall that for any finite $G\in\mathcal{G}$, $G^m$ stabilizes to $\bar{G}$ in finitely many steps. Meanwhile, by \cref{thm:lie-graph}, $\Gamma^m$ also stabilizes to a subset $\hat{\Gamma}\subseteq\mathcal{B}$ which must satisfy $\bar G=\tau(\hat{\Gamma})$. Intuitively, $\hat{\Gamma}$ is supposed to contain all the elements that can be generated by the iterated Lie brackets of the elements in $\Gamma$, because $\bar{G}$ is the largest graph generated by $G$. This conclusion is then rigorously verified in the following corollary. \begin{corollary} \label{cor:lie-graph} Let $\Gamma$ be a subset of $\mathcal{B}$ and $G=\tau(\Gamma)$ be the graph associated with $\Gamma$. If $\hat\Gamma\subseteq\mathcal{B}$ satisfies $\tau(\hat\Gamma)=\bar{G}$, then $\mathrm{Lie}\,(\Gamma)=\mathrm{span}\,(\hat\Gamma)$. \end{corollary} \begin{proof} Let $m$ be a nonnegative integer satisfying $G^m=\bar{G}$; then \cref{thm:lie-graph} implies that $\hat\Gamma=\Gamma^m$, and hence $\Gamma^r=\hat{\Gamma}$ holds for all $r\geqslant{}m$. Consequently, by the definition of $\mathrm{Lie}\,(\Gamma)$, we have $\mathrm{Lie}\,(\Gamma) =\mathrm{span}\,(\bigcup_{i=0}^\infty\Gamma^i) =\mathrm{span}\,(\Gamma^m) =\mathrm{span}\,(\hat\Gamma)$. \end{proof} For the purpose of controllability analysis, the subsets of $\mathcal{B}$ generating the whole Lie algebra $\mathfrak{so}(n)$ are of particular interest.
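As a computational aside, the triangular closure, and hence the completeness test used in the sequel, can be computed directly from the index pairs of the elements of $\Gamma$. The following is a minimal pure-Python sketch; the function names \texttt{triangular\_closure} and \texttt{is\_controllable\_on\_SOn} are ours and purely illustrative, not part of any cited software.
\begin{verbatim}
def triangular_closure(n, edges):
    # Triangular closure of a simple graph on vertices {1,...,n}:
    # add an edge {i,j} whenever some k gives edges {i,k} and {k,j},
    # and repeat until no new edge appears.
    E = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        for i in range(1, n + 1):
            for j in range(i + 1, n + 1):
                if frozenset((i, j)) in E:
                    continue
                if any(frozenset((i, k)) in E and frozenset((k, j)) in E
                       for k in range(1, n + 1) if k not in (i, j)):
                    E.add(frozenset((i, j)))
                    changed = True
    return E

def is_controllable_on_SOn(n, Gamma):
    # Gamma is a set of index pairs (i, j) with Omega_ij in the standard basis;
    # controllability on SO(n) amounts to the closure being the complete graph K_n.
    return len(triangular_closure(n, Gamma)) == n * (n - 1) // 2

print(is_controllable_on_SOn(4, [(1, 2), (2, 3), (1, 3), (3, 4)]))  # True
print(is_controllable_on_SOn(5, [(1, 2), (2, 3), (3, 4)]))          # False
\end{verbatim}
The two calls correspond to the systems analyzed below in \cref{ex:controllable,ex:uncontrollable.so5} and return \texttt{True} and \texttt{False}, respectively.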
We therefore characterize such subsets by their associated graphs in the following corollary, which is also a special case of \cref{cor:lie-graph}. \begin{corollary} \label{cor:completion} Consider a subset $\Gamma\subseteq\mathcal{B}$ with the associated graph $G=\tau(\Gamma)$; then $\mathrm{Lie}\,(\Gamma)=\mathfrak{so}(n)$ if and only if $\bar{G}=K_n$. \end{corollary} \begin{proof} (Sufficiency): Let $\hat{\Gamma}\subseteq\mathcal{B}$ satisfy $\bar{G}=\tau(\hat{\Gamma})=K_n$; then properties (i) and (iii) in \cref{prop:property_tau} imply $\hat\Gamma=\mathcal{B}$. Consequently, $\mathrm{Lie}\,(\Gamma)=\mathrm{span}\, (\hat\Gamma)=\mathrm{span}\,(\mathcal{B})=\mathfrak{so}(n)$ by \cref{cor:lie-graph}. (Necessity): If $\mathrm{Lie}\,(\Gamma)=\mathfrak{so}(n)$, then there exists some nonnegative integer $m$ such that $\Gamma^m=\mathcal{B}$. By \cref{thm:lie-graph}, we obtain $\bar{G}\supseteq{}G^m=\tau(\Gamma^m)=K_n$. On the other hand, since $\bar{G}\subseteq K_n$, we conclude $\bar{G}=K_n$. \end{proof} Furthermore, \cref{cor:completion} sheds light on a graph representation of controllability, which in turn can be characterized in terms of graph connectivity. In what follows, we rigorously investigate this observation. \subsubsection{Controllability Characterization in Terms of Graph Connectivity} \label{sec:graph_controllability} The relationship between Lie brackets and graph operations developed in Section~\ref{sec:Lie_graph} enables us to employ graph theory techniques to analyze controllability of systems on $\son$ as in \cref{eq:system_SOn}. In particular, motivated by the connection between a Lie subalgebra and its associated graph presented in \cref{cor:completion}, controllability can be analyzed through the notion of triangular closure defined in \cref{def:tri.closure}. \begin{proposition} \label{prop:controllability} The bilinear system in \cref{eq:system_SOn} is controllable on $\son$ if and only if $\overline{\tau(\Gamma)}=K_n$, where $\tau$ is defined as in \cref{eq:graph-map}, $\Gamma=\{\Omega_{i_0j_0},\dots,\Omega_{i_mj_m}\}$, and $K_n$ is the complete graph of $n$ vertices. \end{proposition} \begin{proof} By the LARC shown in \cref{thm:LARC}, the system in \cref{eq:system_SOn} is controllable on $\mathrm{SO}(n)$ if and only if $\mathrm{Lie}\,(\Gamma)=\mathfrak{so}(n)$, which is equivalent to $\overline{\tau(\Gamma)}=K_n$ by \cref{cor:completion}. \end{proof} Using the following two examples, we will verify \cref{prop:controllability} and draw a parallel between examining the LARC and generating the triangular closure of the graph associated with the considered system. This comparison in turn provides a graphical visualization of the algebraic procedure of generating the Lie algebra of the drift and control vector fields. \begin{example} \label{ex:controllable} Consider the system on $\mathrm{SO}(4)$ given by \begin{equation} \label{eq:controllable.so4} \dot{X}(t)=(u_1\Omega_{12}+u_2\Omega_{23}+u_3\Omega_{13}+u_4\Omega_{34})X(t),\quad X(0)=I. \end{equation} Applying $\tau$ to the set of control vector fields $\Gamma$ results in its associated graph $G=(V,E)$ as follows: \[ \Gamma=\{\Omega_{12},\Omega_{23},\Omega_{13},\Omega_{34}\} \xleftrightarrow{\tau} \{v_1v_2,v_2v_3,v_1v_3,v_3v_4\}=E. \] Because the first-order Lie brackets $[\Omega_{23},\Omega_{34}]=\Omega_{24}$ and $[\Omega_{13},\Omega_{34}]=\Omega_{14}$ are not in $\Gamma$, we have $\Gamma^1=\Gamma\cup\{\Omega_{24},\Omega_{14}\}$.
Correspondingly, according to \cref{thm:lie-graph}, $G^1=(V,E^{1})$ can be obtained by applying $\tau$ to $\Gamma^1$, i.e., \[ \Gamma^1=\Gamma\cup\{\Omega_{24},\Omega_{14}\} \xleftrightarrow{\tau} \{v_2v_4,v_1v_4\}\cup E=E^{1}. \] Notice that $\mathrm{span}\,(\Gamma^1)=\mathfrak{so}(4)$ and simultaneously $G^1=\bar{G}=K_4$, which establishes controllability of the system in \cref{eq:controllable.so4} from both algebraic and graph-theoretic perspectives. The graphs $G$ and $G^1$ are shown in \cref{fig:controllable.so4}. In particular, the two red edges in $G^1$, which are not in $G$, correspond to the elements in $[\Gamma,\Gamma]$. \begin{figure} \caption{The graph $G$ associated with the system in \cref{eq:controllable.so4}.} \label{fig:controllable.so4} \end{figure} \end{example} \Cref{ex:controllable} presents a controllable system whose associated graph has a complete triangular closure, which in turn validates the sufficiency of \cref{prop:controllability}. The necessity is illustrated by the following example of an uncontrollable system. \begin{example} \label{ex:uncontrollable.so5} Consider the system on $\mathrm{SO}(5)$ driven by three control inputs, given by \begin{equation} \label{eq:uncontrollable.so5} \dot{X}(t)=(u_1\Omega_{12}+u_2\Omega_{23}+u_3\Omega_{34})X(t),\quad X(0)=I, \end{equation} and let $\Gamma=\{\Omega_{12},\Omega_{23},\Omega_{34}\}$ denote the set of control vector fields. Some straightforward calculations yield the Lie algebra $\mathrm{Lie}\,(\Gamma)={\mathrm{span}}\,\{\Omega_{12},\Omega_{23}, \Omega_{34},\Omega_{13},\Omega_{14},\Omega_{24}\}$, which has dimension $6$. Therefore, the system in \cref{eq:uncontrollable.so5} is not controllable, since $\dim\mathfrak{so}(5)=10$. Using the graph-theoretic approach, \cref{fig:uncontrollable.so5} shows the procedure of generating $\bar{H}$ from $H=\tau(\Gamma)$. In particular, $\bar{H}=H^2$ shown in \cref{fig:uncontrollable.so5} is not complete, which verifies the necessity of \cref{prop:controllability}. \begin{figure} \caption{The graph visualization of Lie bracketing control vector fields of the system in \cref{eq:uncontrollable.so5}.} \label{fig:uncontrollable.so5} \end{figure} \end{example} It is worth noting that the graph $G$ in \cref{fig:controllable.so4} associated with the controllable system in \cref{eq:controllable.so4} is connected, but the graph $H$ in \cref{fig:uncontrollable.so5} associated with the uncontrollable system in \cref{eq:uncontrollable.so5} is not. This observation inspires the characterization of controllability for systems on $\son$ by graph connectivity. \begin{theorem} \label{thm:connectivity.controllability} The system in \cref{eq:system_SOn} is controllable on $\son$ if and only if $\tau(\Gamma)$ is connected, where $\Gamma=\{\Omega_{i_0 j_0},\dots,\Omega_{i_m j_m}\}$ and $\tau(\Gamma)$ is the graph associated with $\Gamma$. \end{theorem} \begin{proof} Owing to \cref{prop:controllability}, it suffices to prove that the triangular closure of $\tau(\Gamma)$ is complete if and only if $\tau(\Gamma)$ is connected. (Sufficiency): Suppose that $G=\tau(\Gamma)=(V,E)$ is connected; then there is a path in $G$ from $v_i$ to $v_j$ for any $v_i,v_j\in{}V$, say $v_iw_1w_2\cdots{}w_kv_j$ with $w_1,\ldots,w_k\in{}V$. Therefore, we have $v_iw_2\in{}E^1,\ldots, v_iw_k\in{}E^{k-1}$ and $v_iv_j\in{}E^{k}\subseteq\bar{E}$. Since $v_i,v_j\in V$ are chosen arbitrarily, we conclude that the triangular closure $\bar{G}$ contains all edges $v_iv_j$, hence $\bar{G}=K_n$.
In addition, this process of generating $\bar G$ is illustrated in \cref{fig:proof.completeness} for the case $k=5$. \begin{figure} \caption{Illustration of the proof of sufficiency of \cref{thm:connectivity.controllability}.} \label{fig:proof.completeness} \end{figure} (Necessity): We assume that the triangular closure $\bar G$ of $G=\tau(\Gamma)$ is complete. If there exists an edge $v_iv_j$ \emph{not} in $G$, since $v_iv_j$ is in $\bar{G}=(V,\bar{E})$, we may then assume $v_iv_j\in{}E^k$ and $v_iv_j\not\in{}E^{k-1}$ for some positive integer $k$. Hence, by \cref{def:tri.closure}, there is some vertex $w_1$ such that $v_iw_1,w_1v_j\in{}E^{k-1}$, i.e., there exists a path $v_iw_1v_j$ in $G^{k-1}$ connecting $v_i$ and $v_j$. Repeating this procedure results in a path in $G$ connecting $v_i$ and $v_j$, which implies that $G$ is connected and completes the proof. \end{proof} \begin{remark} In addition to controllability characterization, \cref{thm:connectivity.controllability} highlights a crucial property of the map $\tau$, that is, $\mathrm{Lie}\,(\Gamma)=\mathfrak{so}(n)$ for some $\Gamma\subseteq\mathcal{B}$ \emph{if and only if} $\tau(\Gamma)$ is \emph{connected}, which is an equivalent formulation of \cref{thm:connectivity.controllability}. \end{remark} Because a connected graph with $n$ vertices contains at least $n-1$ edges, \cref{thm:connectivity.controllability} also identifies the minimum number of control inputs for the system in \cref{eq:system_SOn} to be controllable, in agreement with the symmetric group method presented in \cref{thm:SOn_Sn,rem:number_of_control}. \begin{corollary} \label{cor:control_number} If a system on $\son$ in \cref{eq:system_SOn} is controllable, then the number of control inputs $m$ is at least $n-2$, i.e., $m\geqslant{}n-2$. \end{corollary} Although \cref{thm:connectivity.controllability} is developed to examine controllability, it also helps establish some general facts in graph theory from the control systems perspective. In the following, we present one such result that is related to triangular closures. This property also plays an important role in characterizing controllable submanifolds for uncontrollable systems by the connected components of the graph associated with the control system. \begin{lemma} \label{lem:tri.closure.complete.components} The triangular closure $\bar{G}$ of a graph $G$ is a disjoint union of complete components, one for each connected component of $G$. \end{lemma} \begin{proof} The proof is a direct application of the proof of \cref{thm:connectivity.controllability} to each connected component of $G$. \end{proof} By the above \cref{lem:tri.closure.complete.components}, we can adopt our main result in \cref{thm:connectivity.controllability} to study an uncontrollable system by taking the triangular closure of its associated graph, which is the union of the triangular closures of all its connected components. \begin{theorem} \label{thm:controllable.submanifold} The controllable submanifold of the system in \cref{eq:system_SOn} is determined by the connected components of its associated graph. \end{theorem} \begin{proof} Let $\Gamma\subseteq\mathcal{B}$ be the set of vector fields governing the dynamics of the system in \cref{eq:system_SOn}, $G=\tau(\Gamma)$ be the graph representation of $\Gamma$, and $\bar{G}$ denote the triangular closure of $G$. Since connected components of $G$ determine the complete components of $\bar{G}$, it suffices to show that the controllable submanifold of the system is determined by the complete components of $\bar{G}$.
According to the Frobenius Theorem~\cite{warner.lie.groups}, the controllable submanifold of the system in \cref{eq:system_SOn} is the maximal integral submanifold of $\mathrm{Lie}\,(\Gamma)$ passing through the identity matrix $I$. Hence, by \cref{lem:tri.closure.complete.components}, since each component of $\bar{G}$ is complete, the set $\tau^{-1}(\bar{G})\subseteq\mathcal{B}$ is closed under the Lie bracket, which implies $\mathrm{span}\,\tau^{-1}(\bar{G})=\mathrm{Lie}\,(\Gamma)$. Therefore, we conclude that $\mathrm{Lie}\,(\Gamma)$, and thus its maximal integral submanifold, is determined by $\bar G$. \end{proof} \Cref{thm:controllable.submanifold} further reveals a one-to-one correspondence between the Lie algebra generated by a subset of $\mathcal{B}$ and the triangular closure of its associated graph in $\mathcal{G}$. Leveraging this one-to-one correspondence, we are able to give an explicit characterization of controllable submanifolds for uncontrollable systems in terms of connected components of their associated graphs. \begin{example}[Controllable Submanifold] \label{ex:submanifold} Consider two bilinear systems defined on $\mathrm{SO}(6)$ in the form of \cref{eq:system_SOn} governed by the vector fields $\Gamma_1=\{\Omega_{12},\Omega_{23},\Omega_{45},\Omega_{46}\}$ and $\Gamma_2=\{\Omega_{13},\Omega_{23},\Omega_{46},\Omega_{56}\}$, respectively. \Cref{fig:controllable.submanifolds} shows their associated graphs $G_1=\tau(\Gamma_1)$ and $G_2=\tau(\Gamma_2)$, neither of which is connected. Therefore, by \cref{thm:connectivity.controllability}, neither system is controllable on $\mathrm{SO}(6)$. On the other hand, we notice that $\overline{G_1}=\overline{G_2}$. Thus, by \cref{thm:controllable.submanifold}, the two systems have the same controllable submanifold. Specifically, the controllable submanifold is the Lie subgroup of $\mathrm{SO}(6)$ with the Lie algebra \[ \mathrm{Lie}\,(\Gamma_1)=\mathrm{Lie}\,(\Gamma_2) =\mathrm{span}\,\{\Omega_{ij}:1\leqslant{}i<j\leqslant{}3\} \oplus\mathrm{span}\,\{\Omega_{ij}:4\leqslant{}i<j\leqslant{}6\}. \] Moreover, both $\overline{G_1}$ and $\overline{G_2}$ contain two complete components with the vertex sets $U=\{v_1,v_2,v_3\}$ and $W=\{v_4,v_5,v_6\}$, which are also the vertex sets of the connected components of $G_1$ (or $G_2$). It then follows that the Lie algebra of the controllable submanifold, $\mathrm{span}\,\{\Omega_{ij}:v_i,v_j\in{}U\} \oplus\mathrm{span}\,\{\Omega_{ij}:v_i,v_j\in{}W\}$, can be explicitly characterized by the vertex sets of the complete components of $\overline{G_1}$ and $\overline{G_2}$, as well as the connected components of $G_1$ and $G_2$. \begin{figure} \caption{The graphs and their triangular closures associated with the systems in \cref{ex:submanifold}.} \label{fig:controllable.submanifolds} \end{figure} \end{example} Furthermore, the developed method of characterizing controllability in terms of graph connectivity is not constrained to systems defined on Lie groups. In particular, as shown in the following example, it can be applied to study formation control of multi-agent systems defined on graphs. From an algebraic perspective, it is equivalent to using the graph-theoretic method to analyze the Lie algebra generated by \emph{symmetric matrices}. \begin{example}[Formation Control] \label{ex:formation.control} In this example, we consider formation control of a multi-agent system, which concerns the problem of coordinating the system to a consensus state.
For such a purpose, the dynamics of each agent in a network of $N$ agents with the coupling topology given by the graph $G=(V,E)$ with $V=\{v_1,\ldots,v_N\}$, is generally represented by \begin{equation} \label{eq:dynamic.law} \dot{x}_i(t)=\sum_{j\in{}V(i)} u_{ij}(x_j-x_i), \quad{} 1\leqslant{} i\leqslant{} N, \end{equation} where $x_i(t)\in\mathbb{R}^n$ denotes the state of the $i$-th agent, $V(i)=\{1\leqslant{} j\leqslant{} N: v_iv_j\in E\}$ denotes the set of neighboring agents of $i$, and $u_{ij}=u_{ji}$ are the external inputs that control the reciprocal interaction between the $i$-th and $j$-th agents~\cite{Chen14}. We first rewrite the dynamic law in \cref{eq:dynamic.law} in matrix form and then apply our analysis to a Lie algebra associated with it. To do this, let \[ A_{ij}:=E_{ii}+E_{jj}-E_{ij}-E_{ji} \] be an $N$-by-$N$ symmetric matrix with zero row and column sums, where $E_{ij}$ denotes the $N$-by-$N$ matrix with $1$ in the $(i,j)$-th entry and $0$ elsewhere, and let $X\in\mathbb{R}^{N\times{}n}$ denote a matrix whose row vectors are the states of the agents: \[ X=\begin{bmatrix} x_1^{\intercal} \\ \vdots \\ x_N^{\intercal} \end{bmatrix}. \] Then, we can rewrite \cref{eq:dynamic.law} into the following matrix form, \begin{equation} \label{eq:dynamic.law.matrix} \dot{X}=\sum_{v_iv_j\in{}E}u_{ij}A_{ij}X, \end{equation} where, compared with \cref{eq:dynamic.law}, the sign of $x_j-x_i$ has been absorbed into the real-valued controls $u_{ij}$. The formation controllability of the multi-agent system in \cref{eq:dynamic.law.matrix} is determined by the LARC~\cite{Chen14}. Thus, to study this system, we need to know the algebraic structure of the matrices $\{A_{ij}\}$. Observe that for $B_{ijk}=-(\Omega_{ij}+\Omega_{jk}+\Omega_{ki})$ and distinct indices $1\leqslant{} i,j,k,l,m\leqslant{} N$, we have \begin{equation} \label{eq:formation.control.algebraic.property} \begin{aligned} [A_{ij}, A_{jk}] &= B_{ijk}, \\ [B_{ijk}, A_{ij}] &= 2(A_{ik}-A_{jk}), \\ [B_{ijk}, A_{il}] &= -A_{ij}+A_{jl}+A_{ik}-A_{kl}, \\ [B_{ijk}, B_{ijl}] &= B_{ikl}+B_{jkl}, \\ [B_{ijk}, B_{ilm}] &= B_{jlm}+B_{kml} = B_{lkj}+B_{mjk}. \end{aligned} \end{equation} Therefore, the Lie algebra $\mathfrak{g}:=\mathrm{Lie}\,\{A_{ij}\}$ has a decomposition, $\mathfrak{g}=\mathfrak{g}_1\oplus{}\mathfrak{g}_{-1}$, with $\mathfrak{g}_1=\mathrm{span}\,\{B_{ijk}\}$ and $\mathfrak{g}_{-1}=\mathrm{span}\,\{A_{ij}\}$. As a consequence, by the LARC, controllability of system~\cref{eq:dynamic.law.matrix} depends on whether the set $\Gamma:=\{A_{ij}: v_iv_j\in{}E\}$ generates the Lie algebra $\mathfrak{g}$. Similar to bilinear systems on $\son$, we can adopt a graph-theoretic method for $\mathfrak{g}$ by associating \emph{one part of} $\mathfrak{g}$, i.e., $\mathfrak{g}_{-1}$, to a graph, which, in the case of this example, \emph{coincides} with the graph on which the system is defined. To be more specific, for the complete graph $K_N$ on the vertex set $V$, we may define a map $\tau$ which sends $A_{ij}\in\Gamma$ to the edge $v_iv_j$ of $K_N$, so that the image of $\Gamma$ is exactly the edge set $E$ of $G$. Following the correspondence $\tau$, for two adjacent edges $v_iv_j$ and $v_jv_k$, since $\mathrm{Lie}\,\{A_{ij},A_{jk}\} =\mathrm{Lie}\,\{A_{ij},A_{jk},A_{ki}\} =\mathrm{span}\,\{A_{ij}, A_{jk}, A_{ki}, B_{ijk}\}$, the triangle with edges $v_iv_j$, $v_jv_k$, $v_kv_i$ represents the Lie subalgebra spanned by $\{A_{ij}, A_{jk}, A_{ki}, B_{ijk}\}$. More generally, by the algebraic relations in \cref{eq:formation.control.algebraic.property}, any triangularly closed subgraph of $K_N$ is associated with a subalgebra of $\mathfrak{g}$.
Therefore, the Lie (sub)algebra generated by $\Gamma$ can be represented by the triangular closure of $G$; and if $G$ is connected, then its triangular closure is complete, which implies that $\mathrm{Lie}\,(\Gamma)$ contains all $A_{ij}$'s, so we have $\mathrm{Lie}\,(\Gamma)=\mathfrak{g}$. In conclusion, the controllability of system~\cref{eq:dynamic.law}, and equivalently, system~\cref{eq:dynamic.law.matrix}, is determined by the connectivity of $G$. \end{example} By now, we have conducted a detailed investigation into controllability of bilinear systems on $\son$ governed by the standard basis elements of $\mathfrak{so}(n)$. Before we extend the scope of our investigation to general bilinear systems, we show that, in contrast to \cref{cor:control_number}, a driftless bilinear system on $\son$ can be controllable using only two control inputs, for all $n>0$. \begin{example} Recall that by \cref{cor:control_number}, driftless bilinear systems on $\mathrm{SO}(n)$ with control vector fields in the standard basis of $\mathfrak{so}(n)$ require at least $n-1$ inputs to be controllable. However, this conclusion may not hold for general systems governed by vector fields not in the standard basis. For example, the following system with two control inputs \begin{equation} \label{eq:minimal.control.inputs} \dot{X}(t)=\bigl[u_1(t)C_1+u_2(t)C_2\bigr]X(t),\quad X(0)=I, \end{equation} where $C_1=\Omega_{12}$ and $C_2=\sum_{i=1}^{n-1}\Omega_{i,i+1}$, is controllable on $\son$. To see this, we will show $\Omega_{1k}\in\mathrm{Lie}\,(\{C_1,C_2\})$ for any $2\leqslant{}k\leqslant{}n$ by induction. First, note that $\Omega_{13}=[C_1,C_2]\in\mathrm{Lie}\,(\{C_1,C_2\})$. Next, we assume $\Omega_{12}, \Omega_{13},\ldots,\Omega_{1k}\in\mathrm{Lie}\,(\{C_1,C_2\})$ for some $3\leqslant{}k<n$, which is the induction hypothesis. Consequently, we have $[\Omega_{1k},C_2]=\Omega_{2k}-\Omega_{1,k-1}+\Omega_{1,k+1}$ and $[\Omega_{1k},C_1]=\Omega_{2k}$, which implies $\Omega_{1,k+1}=[\Omega_{1k},C_2]-[\Omega_{1k},C_1]+\Omega_{1,k-1} \in\mathrm{Lie}\,(\{C_1,C_2\})$. By induction, we conclude $\Omega_{1k}\in\mathrm{Lie}\,(\{C_1,C_2\})$ for any $2\leqslant{}k\leqslant{}n$. This result implies $\mathrm{Lie}\,(\Sigma)\subseteq\mathrm{Lie}\,(\{C_1,C_2\})$, where $\Sigma=\{\Omega_{1k}:2\leqslant k\leqslant n\}$. Obviously $\tau(\Sigma)$ is a connected graph, and hence by \cref{thm:connectivity.controllability}, $\mathrm{Lie}\,(\Sigma)=\mathfrak{so}(n)$, and the system in \cref{eq:minimal.control.inputs} is thus controllable. \end{example} \subsection{Equivalence Between the Symmetric Group and Graph-Theoretic Methods} \label{sec:Sn_graph} In Sections~\ref{sec:Sn} and~\ref{sec:graph}, we developed two combinatorics-based methods to analyze controllability of bilinear systems. Both methods connect the Lie brackets of vector fields to operations on combinatorial objects. We will show next that an equivalence exists between the symmetric group method and the graph-theoretic method for systems on $\son$. We first illustrate this equivalence through a controllable system on $\mathrm{SO}(4)$. \begin{example} \label{ex:revisit.controllable.so4} Let us revisit the system in \cref{eq:controllable.so4} in \cref{ex:controllable}, governed by the set of vector fields $\Gamma=\{\Omega_{12},\Omega_{23},\Omega_{13},\Omega_{34}\}$.
We have shown therein that this system is controllable on ${\rm SO}(4)$ by using the graph-theoretic method; and for the symmetric group method, we may choose $\Sigma_1=\{\Omega_{12},\Omega_{13},\Omega_{34}\}\subset\Gamma$ so that $\iota(\Sigma_1)=(1342)$ is a $4$-cycle. However, $\Sigma_1$ is not the only subset that is related to a $4$-cycle, and, for example, one can easily verify that $\iota{}$ also relates the subsets $\Sigma_2=\{\Omega_{13}, \Omega_{23}, \Omega_{34}\}$ and $\Sigma_3=\{\Omega_{12}, \Omega_{23}, \Omega_{34}\}$ to $4$-cycles as $\iota(\Sigma_2)=(13)(23)(34)=(1342)$ and $\iota(\Sigma_3)=(12)(23)(34)=(1234)$. Moreover, it is worth noting that $\Sigma_1$, $\Sigma_2$, and $\Sigma_3$ are the only subsets of $\Gamma$ that are related to $4$-cycles. Meanwhile, and more importantly, their graph representations $\tau(\Sigma_1)$, $\tau(\Sigma_2)$, and $\tau(\Sigma_3)$ coincide with all three spanning trees of the graph $\tau(\Gamma)$ associated with the system (see \cref{tab:permutation-graph.controllable}). On the other hand, from the aspect of Lie algebra, we observe that $\Sigma_i$ is a \emph{minimal} subset of $\Gamma$ generating $\mathrm{Lie}\,(\Gamma)$ for each $i=1,2,3$, that is, $\Sigma'=\Sigma_i$ for any $\Sigma'\subseteq\Sigma_i$ satisfying $\mathrm{Lie}\,(\Sigma')=\mathrm{Lie}\,(\Gamma)$. This observation sheds light on the general result: given a system on $\son$ governed by the set of vector fields $\Gamma$, if $\Sigma$ is a minimal subset of $\Gamma$ with $\mathrm{Lie}\,(\Sigma)=\mathrm{Lie}\,(\Gamma)$, then $\iota(\Sigma)$ is an $n$-cycle if and only if $\tau(\Sigma)$ is a spanning tree of $\tau(\Gamma)$. \end{example} \begin{table}[tbp] \centering \begin{tabularx}{\textwidth}{lYl} \toprule Set of control vector fields & Graph & Permutation in $S_4$ \\ \midrule $\Gamma=\{\Omega_{12},\Omega_{23},\Omega_{13},\Omega_{34}\}$ & \begin{tikzpicture}[baseline={4mm},thick] \node[left] at (0,1) {$v_1$}; \node[right] at (1,1) {$v_2$}; \node[right] at (1,0) {$v_3$}; \node[left] at (0,0) {$v_4$}; \draw (0,1) -- (1,1); \draw (1,1) node[vertex]{} -- (1,0); \draw (1,0) -- (0,1) node[vertex]{}; \draw (1,0) node[vertex]{} -- (0,0) node[vertex]{}; \end{tikzpicture} & $\iota(\Gamma)=(12)(23)(13)(34)=(234)$ \\ \midrule $\Sigma_1=\{\Omega_{12},\Omega_{13},\Omega_{34}\}$ & \begin{tikzpicture}[baseline={4mm},thick] \node[left] at (0,1) {$v_1$}; \node[right] at (1,1) {$v_2$}; \node[right] at (1,0) {$v_3$}; \node[left] at (0,0) {$v_4$}; \draw (0,1) -- (1,1) node[vertex]{}; \draw (1,0) -- (0,1) node[vertex]{}; \draw (1,0) node[vertex]{} -- (0,0) node[vertex]{}; \end{tikzpicture} & $\iota(\Sigma_1)=(12)(13)(34)=(1342)$ \\ \midrule $\Sigma_2=\{\Omega_{13},\Omega_{23},\Omega_{34}\}$ & \begin{tikzpicture}[baseline={4mm},thick] \node[left] at (0,1) {$v_1$}; \node[right] at (1,1) {$v_2$}; \node[right] at (1,0) {$v_3$}; \node[left] at (0,0) {$v_4$}; \draw (0,1) node[vertex]{} -- (1,0); \draw (1,1) node[vertex]{} -- (1,0); \draw (0,0) node[vertex]{} -- (1,0) node[vertex]{}; \end{tikzpicture} & $\iota(\Sigma_2)=(13)(23)(34)=(1342)$ \\ \midrule $\Sigma_3=\{\Omega_{12},\Omega_{23},\Omega_{34}\}$ & \begin{tikzpicture}[baseline={4mm},thick] \node[left] at (0,1) {$v_1$}; \node[right] at (1,1) {$v_2$}; \node[right] at (1,0) {$v_3$}; \node[left] at (0,0) {$v_4$}; \draw (0,1) node[vertex]{} -- (1,1); \draw (1,1) node[vertex]{} -- (1,0); \draw (1,0) node[vertex]{} -- (0,0) node[vertex]{}; \end{tikzpicture} & $\iota(\Sigma_3)=(12)(23)(34)=(1234)$ \\ \bottomrule \end{tabularx} \caption{A comparison between two 
methods analyzing controllability: the symmetric group method and the graph-theoretic method. Note that the graphs associated with $\Sigma_1$, $\Sigma_2$ and $\Sigma_3$ are \emph{spanning trees} of the associated graph of $\Gamma$, and that each of these spanning trees is related to a $4$-cycle in the symmetric group $S_4$.} \label{tab:permutation-graph.controllable} \end{table}
\begin{theorem} \label{thm:spanning.tree}
Consider a bilinear system on $\son$ as in \cref{eq:system_SOn} and let $\Gamma\subseteq\mathcal{B}$ denote the set of vector fields governing the system dynamics. If $\Sigma\subseteq\Gamma$ is a \emph{minimal} subset such that $\iota(\Sigma)=\sigma\in S_n$ is an $n$-cycle (i.e., $\Sigma$ has no proper subset that is also related to an $n$-cycle via $\iota$), then its associated graph $\tau(\Sigma)$ is a \emph{spanning tree} of $\tau(\Gamma)$, and the system is therefore controllable. Conversely, for a controllable system, any spanning tree $T$ of the connected graph $\tau(\Gamma)$ corresponds to a subset $\Sigma'=\tau^{-1}(T)\subseteq\Gamma$ that is minimal and such that $\iota(\Sigma')$ is an $n$-cycle in $S_n$.
\end{theorem}
\begin{proof}
From group theory we know that a \emph{minimal} $\Sigma$ with $\iota{}(\Sigma)$ being an $n$-cycle must consist of $n-1$ transpositions, and that the union of the orbits of all $n-1$ transpositions is the orbit of $\sigma$. This means the graph $\tau(\Sigma)$ has $n$ vertices and $n-1$ edges. Moreover, $\tau(\Sigma)$ is \emph{connected}: if it were not, every transposition in $\Sigma$ would be supported on a single connected component, so no orbit of their product could meet more than one component, contradicting the fact that the orbit of the $n$-cycle $\sigma$ is all of $\{1,\ldots,n\}$. A connected graph with $n$ vertices and $n-1$ edges is also \emph{acyclic}, and since $\tau(\Sigma)$ covers all $n$ vertices of $\tau(\Gamma)$, we conclude that $\tau(\Sigma)$ is a spanning tree of $\tau(\Gamma)$. On the other hand, for a subset $\Sigma'\subseteq\Gamma$ such that $\tau(\Sigma')$ is a spanning tree of $\tau(\Gamma)$, we must have $|\Sigma'|=n-1$. Since a decomposition of an $n$-cycle needs at least $n-1$ transpositions, if $\iota(\Sigma')$ is an $n$-cycle, then $\Sigma'$ is obviously minimal. The following claim shows that $\iota(\Sigma')$ is indeed an $n$-cycle, regardless of the ordering of elements in $\Sigma'$.
\begin{claim}
A tree consisting of $k$ edges in the connected graph $\tau(\Gamma)$ in \cref{thm:spanning.tree} is related to a $(k+1)$-cycle via $\iota$, regardless of the ordering of the transpositions.
\end{claim}
\begin{claimproof}
Let us consider a tree $T$ with $k$ edges in $\tau(\Gamma)$, and prove the claim by induction. It is trivial for $k=1$; and for $k=2$, say $T=v_{j_1}v_{j_2}v_{j_3}$, then $\iota$ sends $T$ to either $(j_1j_2j_3)$ or $(j_1j_3j_2)$, depending on the order of $(j_1j_2)$ and $(j_2j_3)$. Assume the claim is true for $k=l-1$ for some integer $l\geqslant2$; for a tree $T$ with $k=l$ edges, we can choose a subtree $T'$ of $T$ that consists of $l-1$ edges. Without loss of generality, we may assume $T'$ has vertices $\{v_1, \ldots, v_l\}$ and that $T$ has an additional vertex $v_{l+1}$ and an additional edge $v_1v_{l+1}$. Let $\{\sigma_1, \ldots, \sigma_{l-1}\}$ be a set of transpositions such that each $\sigma_i$ is related to a distinct edge in $T'$ by $\iota$, and let $\rho=(1,l+1)$ denote the transposition related to the additional edge $v_1v_{l+1}$ in $T$. Our goal is to show that the permutation
\begin{equation} \label{eq:permutation} \sigma_{i_1}\cdots\sigma_{i_t}\rho\sigma_{i_{t+1}}\cdots\sigma_{i_{l-1}} \end{equation}
is an $(l+1)$-cycle.
Note that for any $1\leqslant{} i,j_1,j_2\leqslant{} l$, we have the following law of commutation: \begin{equation} \label{eq:commutation.law} (i,l+1)(j_1j_2)= \begin{cases} (j_1j_2)(i,l+1) & \text{if neither $j_1$ or $j_2$ equals to $i$;} \\ (j_1j_2)(j_2,l+1) & \text{if $j_1=i$.} \end{cases} \end{equation} Therefore, we can rewrite the permutation~\cref{eq:permutation} as $\sigma_{i_1}\cdots\sigma_{i_{l-1}}\rho'$, where $\rho'=(j_p,l+1)$ for some $1\leqslant{} j_p\leqslant{} l$. By our assumption, $\sigma_{i_1}\cdots{}\sigma_{i_{l-1}}$ is an $l$-cycle: $\sigma_{i_1}\cdots{}\sigma_{i_{l-1}}=(j_1j_2\cdots{}j_l)$, so finally we have \[ \begin{aligned} \sigma_{i_1}\cdots\sigma_{i_t}\rho\sigma_{i_{t+1}}\cdots\sigma_{i_{l-1}} &=\sigma_{i_1}\cdots{}\sigma_{i_{l-1}}\rho' =(j_1j_2\cdots{}j_l)(j_p,l+1) \\ &=(j_1\cdots{} j_p,l+1,j_{p+1}\cdots{} j_l), \end{aligned} \] which is an $(l+1)$-cycle. It is clear that the ordering of transpositions is irrelevant in our proof. \end{claimproof} Therefore, a spanning tree $\tau(\Sigma')$ consisting of $n-1$ edges is related to an $n$-cycle in $S_n$ via $\iota$, which finishes our proof. \end{proof} Given a controllable system on $\son$, \cref{thm:spanning.tree} reveals the relation between $n$-cycles and spanning trees of the associated graph. In particular, for such a system governed by the set $\Gamma\subseteq\mathcal{B}$ of vector fields, this theorem supplements \cref{thm:connectivity.controllability} by explicitly describing the subsets of $\Gamma$ that are related to $n$-cycles using graphs. The following corollary then summarizes all the symmetric group and graph-theoretic characterizations of controllability for systems on $\son$. \begin{corollary} \label{cor:controllability.summary} Consider a bilinear system defined on $\son$ as in \cref{eq:system_SOn}, and let $\Gamma$ denote the set of vector fields governing the system dynamics. The following are equivalent: \begin{enumerate}[font=\normalfont,label={(\arabic*)}] \item The system is controllable on $\son$. \item $\tau(\Gamma)$ is a connected graph. \item For any minimal subset $\Sigma\subseteq\Gamma$ generating $\mathfrak{so}(n)$, $\iota(\Sigma)$ is an $n$-cycle and $\tau(\Sigma)$ is a spanning tree of $\tau(\Gamma)$. \end{enumerate} \end{corollary} In the remainder of this section, we will focus on uncontrollable systems. Recall \cref{thm:controllable.submanifold} that the controllable submanifold for an uncontrollable system on $\son$ is determined by the connected components of its associated graph. Meanwhile, according to \cref{cor:Sn_submanifold}, by applying the method of symmetric groups, the controllable submanifold can also be characterized by the nontrivial orbits of $\iota(\Xi)$ for a minimal subset $\Xi\subseteq\Gamma$ generating $\mathrm{Lie}\,(\Gamma)$. To see that the two methods are equivalent and to extend \cref{thm:spanning.tree} to uncontrollable cases, we first introduce the concept of \emph{spanning forests}, which generalizes the notion of spanning trees to disconnected graphs. Given a (disconnected) graph, its \emph{spanning forest} is a maximal acyclic subgraph, or equivalently, a subgraph consisting of a spanning tree in each connected component of the graph \cite{Bollobas98}. Following this definition, we will show that the minimal subset $\Xi\subseteq\Gamma$ in \cref{cor:Sn_submanifold} corresponds to a spanning forest of $\tau(\Gamma)$, so that the controllable submanifold can also be equivalently described by the connected components of the spanning forest. 
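As a small computational companion to this definition (an illustration added here, not needed for any of the proofs), the Python sketch below extracts a spanning forest from an edge list, composes the corresponding transpositions in an arbitrary order, and reads off the nontrivial orbits; the edge list is the graph $\tau(\Gamma)$ of the $\mathrm{SO}(6)$ system considered in the example that follows, and the computed orbits are exactly its connected components $\{1,2,3,4\}$ and $\{5,6\}$.
\begin{verbatim}
# Spanning forest -> product of transpositions -> nontrivial orbits.
def spanning_forest(n, edges):
    parent = list(range(n + 1))          # union-find over vertices 1..n
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest = []
    for (i, j) in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j))
    return forest

def image(transpositions, x):
    # image of x under the product t_1 t_2 ... t_k (rightmost factor first)
    for (i, j) in reversed(transpositions):
        x = j if x == i else i if x == j else x
    return x

def nontrivial_orbits(n, transpositions):
    seen, orbits = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        orbit, x = [start], image(transpositions, start)
        while x != start:
            orbit.append(x)
            x = image(transpositions, x)
        seen.update(orbit)
        if len(orbit) > 1:
            orbits.append(sorted(orbit))
    return orbits

edges = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4), (5, 6)]
forest = spanning_forest(6, edges)
print(forest, nontrivial_orbits(6, forest))   # orbits [1,2,3,4] and [5,6]
\end{verbatim}
Reordering the edges in \texttt{forest} changes the resulting permutation but not its orbits, in line with the claim proved above.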
This result is illustrated in the following example. \begin{example} Consider a bilinear system on $\mathrm{SO}(6)$ in the form of \cref{eq:system_SOn} governed by the set of vector fields $\Gamma=\{\Omega_{12}, \Omega_{14}, \Omega_{23}, \Omega_{24},\Omega_{34}, \Omega_{56}\}$. As shown in \cref{tab:permutation-graph.uncontrollable}, $\tau(\Gamma)$ is disconnected, with two connected components, and hence this system is not controllable on ${\rm SO}(6)$. To describe its controllable submanifold, we choose a spanning forest $\tau(\Xi_1)$ of the associated graph $\tau(\Gamma)$ with $\Xi_1=\{\Omega_{14}, \Omega_{24}, \Omega_{34}, \Omega_{56}\}$. Note that the permutation $\iota(\Xi_1)=(14)(24)(34)(56)=(1432)(56)$ has two nontrivial orbits: $\mathcal{O}_1=\{1,2,3,4\}$ and $\mathcal{O}_2=\{5,6\}$, each of which corresponds to a connected component of the graph $\tau(\Gamma)$, or equivalently, to a summand in the decomposition of the Lie algebra of the controllable submanifold: \[ \mathrm{Lie}\,(\Gamma)={\rm span}\,\{\Omega_{ij}:i,j\in\mathcal{O}_{1}\}\oplus{\rm span}\,\{\Omega_{ij}:i,j\in\mathcal{O}_{2}\}. \] Now suppose we choose a different spanning forest $\tau(\Xi_2)$ which corresponds to another subset $\Xi_2=\{\Omega_{12}, \Omega_{24}, \Omega_{34}, \Omega_{56}\}\subseteq\Gamma$. Note that the permutation $\iota(\Xi_2)=(1243)(56)$ is different from $\iota{}(\Xi_1)$, but both have the \emph{same} orbits. The graphs and permutations associated with $\Gamma$ and its subsets $\Xi_1$ and $\Xi_2$ are also listed in \cref{tab:permutation-graph.uncontrollable}. \end{example} In general, for a spanning forest $F$ of $\tau(\Gamma)$, we know by \cref{thm:spanning.tree} that each tree $T_i$ in $F$ consisting of $n_i$ vertices is related to an $n_i$-cycle via $\iota$, which characterizes a summand of the decomposition of $\mathrm{Lie}\,(\Gamma)$. So by applying \cref{thm:spanning.tree} to each (maximal) tree in the forest $F$, we have the following \cref{cor:spanning.tree.uncontrollable}, which describes the relation between the associated graphs and permutations for an uncontrollable bilinear system.
\begin{table}[tbp] \centering \begin{tabularx}{\textwidth}{lYl} \toprule Set of control vector fields & Graph & \makecell[l]{Permutation in $S_6$ and \\ its nontrivial orbits} \\ \midrule \(\Gamma=\Bigl\{ \begin{array}{l} \Omega_{12}, \Omega_{14},\Omega_{23}, \\ \Omega_{24}, \Omega_{34},\Omega_{56} \end{array}\Bigr\}\) & \begin{tikzpicture}[baseline={-1ex},scale=1.0,thick] \node[left] at (180:\R) {$v_1$}; \node[above left] at (120:\R) {$v_2$}; \node[above right] at (60:\R) {$v_3$}; \node[right] at (0:\R) {$v_4$}; \node[below right] at (300:\R) {$v_5$}; \node[below left] at (240:\R) {$v_6$}; \draw (180:\R) -- (120:\R); \draw (180:\R) node[vertex]{} -- (0:\R); \draw (120:\R) -- (60:\R); \draw (120:\R) node[vertex]{} -- (0:\R); \draw (0:\R) node[vertex]{} -- (60:\R) node[vertex]{}; \draw (240:\R) node[vertex]{} -- (300:\R) node[vertex]{}; \end{tikzpicture} & \makecell[l]{ \(\begin{aligned} \iota(\Gamma) &=(12)(14)(23)(24)(34)(56) \\ &=(14)(56) \end{aligned}\) \\ \gape[t]{$\mbox{Orbits}=\{1,4\}, \{5,6\}$}} \\ \midrule $\Xi_1=\{\Omega_{14},\Omega_{24},\Omega_{34},\Omega_{56}\}$ & \begin{tikzpicture}[baseline={-1ex},scale=1.0,thick] \node[left] at (180:\R) {$v_1$}; \node[above left] at (120:\R) {$v_2$}; \node[above right] at (60:\R) {$v_3$}; \node[right] at (0:\R) {$v_4$}; \node[below right] at (300:\R) {$v_5$}; \node[below left] at (240:\R) {$v_6$}; \draw (180:\R) node[vertex]{} -- (0:\R); \draw (60:\R) node[vertex]{} -- (0:\R); \draw (120:\R) node[vertex]{} -- (0:\R) node[vertex]{}; \draw (240:\R) node[vertex]{} -- (300:\R) node[vertex]{}; \end{tikzpicture} & \makecell[l]{ \(\begin{aligned} \iota(\Xi_1) &=(14)(24)(34)(56) \\ &=(1432)(56) \end{aligned}\) \\ \gape[t]{$\mbox{Orbits}=\{1,2,3,4\}, \{5,6\}$}} \\ \midrule $\Xi_2=\{\Omega_{12},\Omega_{24},\Omega_{34},\Omega_{56}\}$ & \begin{tikzpicture}[baseline={-1ex},scale=1.0,thick] \node[left] at (180:\R) {$v_1$}; \node[above left] at (120:\R) {$v_2$}; \node[above right] at (60:\R) {$v_3$}; \node[right] at (0:\R) {$v_4$}; \node[below right] at (300:\R) {$v_5$}; \node[below left] at (240:\R) {$v_6$}; \draw (180:\R) node[vertex]{} -- (120:\R); \draw (120:\R) node[vertex]{} -- (0:\R); \draw (0:\R) node[vertex]{} -- (60:\R) node[vertex]{}; \draw (240:\R) node[vertex]{} -- (300:\R) node[vertex]{}; \end{tikzpicture} & \makecell[l]{ \(\begin{aligned} \iota(\Xi_2) &=(12)(24)(34)(56) \\ &=(1243)(56) \end{aligned}\) \\ \gape[t]{$\mbox{Orbits}=\{1,2,3,4\}, \{5,6\}$}} \\ \bottomrule \end{tabularx} \caption{A comparison between the symmetric group method and the graph-theoretic method for an uncontrollable system on $\mathrm{SO}(6)$. Both graphs associated with subsets $\Xi_1$ and $\Xi_2$ are \emph{spanning forest} of the associated graph of $\Gamma$.} \label{tab:permutation-graph.uncontrollable} \end{table} \begin{corollary} \label{cor:spanning.tree.uncontrollable} Given an uncontrollable bilinear system defined on $\son$ in the form of \cref{eq:system_SOn} governed by the set of vector fields $\Gamma$. Let $F$ be a spanning forest of $\tau(\Gamma)$ and if we denote $\Xi=\tau^{-1}(F)$, then $\Xi$ is a minimal subset of $\Gamma$ with the same generating Lie algebra and the controllable submanifold of the system is determined by the nontrivial orbits of $\iota(\Xi)$. \end{corollary} \begin{proof} For a spanning forest $F$ of $\tau(\Gamma)$, let $T_1, \ldots, T_l$ be (maximal) trees in $F$ s.t.\ $F=T_1\sqcup\cdots\sqcup{} T_l$, where $T_i=(V_i, E_i)$ with $|V_i|=n_i$, $|E_i|=n_i-1$. 
By \cref{thm:spanning.tree}, each $T_i$ is related to an $n_i$-cycle $\iota(\Xi_i)\in{}S_{n_i}$ for $\Xi_i=\tau^{-1}(T_i)$, and the orbit of $\iota(\Xi_i)$ determines the Lie (sub)algebra $\mathfrak{g}_i:=\mathrm{Lie}\,(\Xi_i)$. Therefore, the nontrivial orbits of $\iota(\Xi)$ are precisely the orbits of $\iota(\Xi_1), \ldots, \iota(\Xi_l)$, and these determine the Lie algebra generated by $\Gamma$: $\mathrm{Lie}\,(\Gamma)=\mathrm{Lie}\,(\Xi)=\mathfrak{g}_1\oplus\cdots\oplus{} \mathfrak{g}_l$. Since the controllable submanifold is determined by the Lie subalgebra $\mathrm{Lie}\,(\Gamma)=\mathrm{Lie}\,(\Xi)$, we conclude that it is also determined by the nontrivial orbits of $\iota(\Xi)$, for $\Xi=\tau^{-1}(F)$.
\end{proof}
\section{Combinatorics-Based Controllability Analysis via Lie Algebra Decompositions} \label{sec:non-standard}
Utilizing the algebraic structure of $\mathfrak{so}(n)$, we have developed combinatorial methods that identify vector fields in the standard basis of $\mathfrak{so}(n)$, as well as vector fields generating structured Lie algebras, e.g., for the multi-agent system described in \cref{ex:formation.control}, with transpositions in $S_n$ and edges of $n$-vertex graphs. It was also shown that such identifications lead to an equivalence between the two methods for analyzing controllability of systems on $\son$ as defined in \cref{eq:system_SOn}. However, in many cases, the system Lie algebra may be too complicated to associate each of its elements with a permutation or a graph edge, so that the combinatorial methods cannot be applied directly. This dilemma can be resolved through the decomposition of the Lie algebra into components with simpler algebraic structures, such that the combinatorial methods can be applied to each component. This idea allows us to generalize the combinatorial framework to bilinear systems defined on broader classes of Lie groups. To this end, we adopt techniques from representation theory, including the Cartan and non-intertwining decompositions. Some basics of representation theory can be found in \cref{appd:representation}.
\subsection{Cartan Decomposition in Symmetric Group Method} \label{sec:Cartan}
The Cartan decomposition, named after the influential French mathematician \'{E}lie Cartan, provides a major tool for understanding the algebraic structures of semisimple Lie groups and Lie algebras. Its generalized form, the root space decomposition, decomposes a Lie algebra into a direct sum of vector subspaces, called the root spaces, as introduced in \cref{appd:representation}. However, a root space is not necessarily a Lie subalgebra, i.e., the root spaces need not be closed under the Lie bracket. This feature of the Cartan (root space) decomposition rules out a direct use of the graph-theoretic method, since it violates the ``triangle rule'' shown in \cref{lem:lie-graph}~(ii). As a result, here we pursue and generalize the symmetric group method to analyze controllability of systems whose vector fields live in the root spaces of semisimple Lie algebras. In representation theory, the Lie algebra $\mathfrak{sl}(3,\mathbb{C})$, which consists of $3\times{} 3$ complex matrices with vanishing trace, serves as a primary example to illustrate the Cartan decomposition of semisimple Lie algebras.
Therefore, to illustrate our idea, we consider the driftless bilinear system evolving on the Lie group ${\rm SL}(3,\mathbb{C})$ consisting of $3\times 3$ complex matrices with determinant~$1$, given by \begin{equation} \label{eq:sl3} \dot{Z}(t)=\Bigl(\sum_{j=1}^m{} u_j(t)B_j\Bigr)Z(t), \end{equation} where the state $Z(t)\in {\rm SL}(3,\mathbb{C})$, the control vector fields $B_j\in\Gamma\subseteq\mathcal{B}'':=\{H_k,X_l,Y_l: k=1,2; l=1,2,3\}$, the basis of $\mathfrak{sl}(3,\mathbb{C})$ with \begingroup \allowdisplaybreaks \begin{align*} H_1 &=\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, & H_2 &=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}, & \\ X_1 &=\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, & X_2 &=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, & X_3 &=\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \\ Y_1 &=\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, & Y_2 &=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, & Y_3 &=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \end{align*} \endgroup and the control inputs $u_j(t)\in\mathbb{C}$. One can easily check that the two Lie subalgebras $\mathfrak{k}_1=\mathrm{Lie}\,\{H_1, X_1,Y_1\}$ and $\mathfrak{k}_2=\mathrm{Lie}\,\{H_2,X_2,Y_2\}$, when considered as Lie algebras over $\mathbb{R}$, are isomorphic to $\mathfrak{so}(3)$. As discussed in Section~\ref{sec:Sn}, controllability of systems on $\mathrm{SO}(3)$ can be characterized by permutation cycles in $S_3$. This suggests that we can characterize controllability of systems on $\mathrm{SL}(3,\mathbb{C})$ by two copies of $S_3$. Formally, we want to establish a map $\iota:\mathcal{P}(\mathcal{B''})\rightarrow S_3\oplus S_3$, where $\oplus$ denotes the direct sum of groups, so that non-vanishing Lie brackets correspond to cycles with increased length. In this case, we define an element $\sigma=(\sigma_1,\sigma_2)$ in $S_3\oplus S_3$ to be a cycle if both $\sigma_1$ and $\sigma_2$ are cycles in $S_3$, and the length of $\sigma$ is defined to be the sum of the length of $\sigma_1$ and $\sigma_2$. Here is one possible definition of $\iota$: \begin{align*} H_1 &\mapsto (e,e), & H_2 &\mapsto (e,e), & \\ X_1 &\mapsto ((12),e), & X_2 &\mapsto (e,(12)), & X_3 &\mapsto ((12),(12)), \\ Y_1 &\mapsto ((23),e), & Y_2 &\mapsto (e,(23)), & Y_3 &\mapsto ((23),(23)), \end{align*} where $e$ denotes the identity of $S_3$. Following this definition of $\iota$, we can check that if $B_1,B_2\in\mathcal{B}''$ satisfy $[B_1,B_2]\neq0$, then the length of $\iota([B_1,B_2])$ is greater than or equal to the length of both $\iota(B_1)$ and $\iota(B_2)$. Moreover, if neither $B_1$ nor $B_2$ is equal to $H_1$ or $H_2$, then the length of $\iota([B_1,B_2])$ is strictly greater than the length of both $\iota(B_1)$ and $\iota(B_2)$. This relation between Lie brackets of elements in $\mathcal{B}''$ and length of cycles in $S_3\oplus S_3$ allows us to draw the following conclusion: \begin{proposition} \label{prop:sl3c} The system in \cref{eq:sl3} is controllable on ${\rm SL}(3,\mathbb{C})$ if and only if there exists a subset $\Sigma$ of $\Gamma=\{B_1,\dots,B_m\}$ such that $\iota(\Sigma)$ is a 6-cycle in $S_3\oplus S_3$. 
\end{proposition}
From the perspective of representation theory, the basis $\mathcal{B}''$ induces the Cartan decomposition of the Lie algebra $\mathfrak{sl}(3,\mathbb{C})$, in which the $2$-dimensional Cartan subalgebra is spanned by $H_1$ and $H_2$. Moreover, the Weyl group of $\mathfrak{sl}(3,\mathbb{C})$ is $S_3$. The above facts provide another explanation for requiring two copies of $S_3$ in the characterization of controllability for systems on ${\rm SL}(3,\mathbb{C})$ governed by vector fields in $\mathcal{B}''$. Notice that the concepts of Cartan subalgebras and Weyl groups are well defined for all semisimple Lie algebras, not only for ${\rm SL}(3,\mathbb{C})$. Also, Weyl groups are all finite groups and thus subgroups of some symmetric groups. As a result, it is possible to extend the symmetric-group characterization of controllability to systems defined on general semisimple Lie groups. To be more specific, consider the bilinear system defined on a semisimple Lie group $G$ of the form
\begin{equation} \label{eq:semisimple} \dot X=\Bigl(\sum_{i=1}^mu_iB_i\Bigr)X,\quad X(0)=I, \end{equation}
where $B_i$ are elements in the Lie algebra $\mathfrak{g}$ of $G$. Moreover, let $\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{k}$ be the Cartan decomposition of $\mathfrak{g}$, with $\mathfrak{h}$ the Cartan subalgebra, and let $W$ be the Weyl group of $\mathfrak{g}$. We further assume that $B_i\in\mathfrak{h}$ or $B_i\in\mathfrak{k}$ for every $i=1,\dots,m$. The above discussion then leads to the following conjecture for systems defined on semisimple Lie groups.
\begin{conjecture} \label{conj:cartan.decomp.controllable}
The system in \cref{eq:semisimple} is controllable on $G$ if and only if there exists $\Sigma\subseteq\Gamma$ such that $\iota(\Sigma)$ is a cycle of maximal length in $W^h$, where $\Gamma=\{B_1,\dots,B_m\}$ is the set of control vector fields, $h=\dim\mathfrak{h}$, and $W^h$ denotes the direct sum of $h$ copies of $W$.
\end{conjecture}
Recall that the central idea of the symmetric group approach to controllability analysis is to map elements with non-vanishing Lie brackets to cycles with increased length. However, all elements in the Cartan subalgebra have vanishing Lie brackets with one another. The intuition behind the above conjecture comes from the need to represent these elements appropriately using permutations, by mapping elements in different root spaces to permutation cycles in different components of the direct sum of $h$ copies of symmetric groups, where $h$ denotes the dimension of the Cartan subalgebra. Moreover, because the interaction between elements in and outside the Cartan subalgebra is characterized by the Weyl group, which is a subgroup of a symmetric group, the symmetric group method applies directly.
\subsection{Non-Intertwining Decomposition in Graph-Theoretic Method}
In the case that the Lie algebra generated by the drift and control vector fields of a bilinear system can be decomposed into components that are Lie subalgebras, we will see that the graph-theoretic method applies more naturally for controllability analysis. One decomposition of this type is the \emph{non-intertwining decomposition}, through which a Lie algebra is decomposed into a direct sum of Lie subalgebras so that elements from different Lie subalgebras have vanishing Lie brackets. The non-intertwining decomposition generalizes the notion of block diagonalization for matrices.
\begin{definition} \label{def:non-intertwining.decomp}
For a given Lie algebra $\mathfrak{g}$, we call a decomposition $\mathfrak{g}=\mathfrak{g}_1\oplus\cdots\oplus \mathfrak{g}_m$ into Lie subalgebras $\mathfrak{g}_1,\ldots,\mathfrak{g}_m$ \emph{non-intertwining} if $[\mathfrak{g}_i, \mathfrak{g}_j]=0$ for any $1 \leqslant{}i\neq{}j \leqslant{}m$.
\end{definition}
For example, every reductive Lie algebra admits a non-intertwining decomposition, and many familiar Lie algebras are reductive, such as the algebra of $n\times n$ complex matrices $\mathfrak{gl}(n,\mathbb{C})$ and the algebra of $n\times n$ skew-symmetric complex matrices $\mathfrak{so}(n,\mathbb{C})$ \cite{Knapp2002lie}. If a Lie algebra admits a non-intertwining decomposition, then we will be able to associate each of its components with a graph. The subsequent question is whether the graph representation developed in Section \ref{sec:graph} remains valid to characterize controllability. The answer to this question can be illustrated by a system defined on ${\rm SO}(4)$ whose Lie algebra $\mathfrak{so}(4)$ can be decomposed into a direct sum of two non-intertwining copies of $\mathfrak{so}(3)$, as shown in the following example.
\begin{example} \label{ex:spin.reps}
Let $\mathcal{B}'=\{A_1,A_2,A_3,B_1,B_2,B_3\}$ be a non-standard basis of $\mathfrak{so}(4)$, where
\begin{equation} \label{eq:spin.reps.so4.basis}
\begin{aligned}
A_1 &=\frac{\Omega_{23}+\Omega_{14}}{2}, & A_2 &=\frac{\Omega_{13}-\Omega_{24}}{2}, & A_3 &=\frac{\Omega_{12}+\Omega_{34}}{2}, \\
B_1 &=\frac{\Omega_{13}+\Omega_{24}}{2}, & B_2 &=\frac{\Omega_{14}-\Omega_{23}}{2}, & B_3 &=\frac{\Omega_{12}-\Omega_{34}}{2}.
\end{aligned}
\end{equation}
The Lie brackets of the elements in $\mathcal{B}'$ satisfy $[A_i,A_j]=A_k$, $[B_i,B_j]=B_k$ for any ordered 3-tuple $(i,j,k)=(1,2,3)$, $(2,3,1)$ or $(3,1,2)$, and $[A_i,B_j]=0$ for any $1 \leqslant{} i,j \leqslant{} 3$. As a result, $\mathfrak{so}(4)$ admits a non-intertwining decomposition as $\mathfrak{so}(4)=\mathrm{Lie}\,\{A_1,A_2,A_3\}\oplus\mathrm{Lie}\,\{B_1,B_2,B_3\}$. We note that the Lie bracket relations among elements in $\{A_1,A_2,A_3\}$, as well as $\{B_1,B_2,B_3\}$, are the same as the Lie bracket relations among elements in the standard basis of $\mathfrak{so}(3)$. In other words, both $\mathrm{Lie}\,\{A_1,A_2,A_3\}$ and $\mathrm{Lie}\,\{B_1,B_2,B_3\}$ are isomorphic to $\mathfrak{so}(3)$, so $K_3$ becomes a suitable graph representation for each set. Moreover, because $[A_i,B_j]=0$ for any $i,j=1,2,3$, the graph representation for the non-standard basis $\mathcal{B}'=\{A_1,A_2,A_3\}\sqcup\{B_1,B_2,B_3\}$ is a \emph{disjoint union} of two copies of $K_3$, as shown in \cref{fig:spin.reps}, instead of the complete graph $K_4$ associated with the standard basis of $\mathfrak{so}(4)$.
\begin{figure}
\caption{The graphs associated with the sets $\{A_1,A_2,A_3\}$ and $\{B_1,B_2,B_3\}$ in \cref{eq:spin.reps.so4.basis}: a disjoint union of two copies of $K_3$.}
\label{fig:spin.reps}
\end{figure}
This example illustrates how the graph representation of controllability developed in Section \ref{sec:graph} can be extended to bilinear systems governed by vector fields generating a non-intertwining Lie algebra, after modifying the definition of $\tau$ in \cref{eq:graph-map} accordingly.
\begin{proposition} \label{prop:controllability.two.K3s}
Consider a bilinear system on $\mathrm{SO}(4)$ governed by the vector fields in $\mathcal{B}'$, given by
\begin{equation} \label{eq:so4_non_standard} \dot{X}(t)=\Bigl(\sum_{i=1}^mu_iC_i\Bigr)X(t), \quad X(0)=I, \end{equation}
with $\Gamma=\{C_1,\dots,C_m\}\subseteq\mathcal{B}'$.
Given a graph map $\tau':\mathcal{P}(\mathcal{B}')\to\mathcal{G}'$, where $\mathcal{G}'$ denotes the collection of subgraphs of $K_3\sqcup K_3$, satisfying \[ \tau'(A_i)=v_iv_{i+1} \quad \text{and} \quad \tau'(B_i)=w_iw_{i+1}, \] with the index taken modulo $3$, the system in \cref{eq:so4_non_standard} is controllable if and only if $\overline{\tau'(\Gamma)}=K_3\sqcup K_3$, or equivalently, if and only if each component of $\tau'(\Gamma)$ is connected in $K_3$. \end{proposition} \begin{proof} The above result becomes obvious once we verify the following properties of $\tau'$ (c.f.\ \cref{lem:lie-graph}), which are straightforward. \begin{enumerate}[font=\normalfont,label={(\arabic*)}] \item $\tau'(\mathcal{B}')=K_3\sqcup{}K_3$; \item For distinct $C_1,C_2\in\mathcal{B}'$, their Lie bracket $[C_1,C_2]\neq{}0$ if and only if the two edges $\tau'(C_1)$ and $\tau'(C_2)$ have a common vertex; \item The edges $\tau'(C_1),\tau'(C_2)$ and $\tau'([C_1,C_2])$ form a triangle if $[C_1,C_2]\neq{}0$, or equivalently, \[ \tau'(\{C_1,C_2,[C_1,C_2]\})=K_3, \] for any $C_1,C_2\in\mathcal{B}'$ such that $[C_1,C_2]\neq{}0$. \end{enumerate} \end{proof} In addition, recall from \cref{cor:control_number} that three control inputs are enough to have a controllable driftless system on $\mathrm{SO}(4)$ governed by the vector fields in the standard basis; or equivalently, three edges can form a connected graph with four vertices. However, for systems in the form of \cref{eq:so4_non_standard}, they require at least four control inputs to be controllable on $\mathrm{SO}(4)$. From the graph aspect, this is because both components of $\tau(\Gamma)$ require at least two edges to be connected. \end{example} \Cref{ex:spin.reps} further illustrates that for bilinear systems evolving on $\son$ governed by non-standard basis vector fields, i.e., vector fields that are not in the form of standard basis elements, in $\mathfrak{so}(n)$, controllability may not be characterized by using one complete graph $K_n$. Taking the system in \cref{eq:so4_non_standard} as an example, because the Lie algebra of its state-space can be decomposed into a direct sum of two non-intertwining components, its graph representation also requires two components. This finding elucidates that the number of components of the graph associated with a bilinear system is determined by the number of summands in the non-intertwining decomposition of the underlying Lie algebra of the system. \begin{theorem} \label{thm:direct.sum.lie.algebra.disjoint.union.graphs} Given a bilinear system \begin{equation} \label{eq:general.non.intertwining.system} \dot{X}(t)=\Bigl(\sum_{i=1}^m\sum_{j=1}^{n_i} u_{ij}B_{ij}\Bigr)X(t), \quad X(0)=I, \end{equation} defined on a Lie group $G$ whose Lie algebra $\mathfrak{g}$ admits a non-intertwining decomposition as $\mathfrak{g}=\mathfrak{g}_1\oplus\cdots\oplus\mathfrak{g}_m$, where $B_{ij}\in\mathcal{B}_i$ and $\mathcal{B}_i$ is a basis of $\mathfrak{g}_i$ for each $i$. Suppose each $\mathcal{B}_i$ is associated with a connected graph $G_i$ such that a subset $\Sigma_i\subseteq\mathcal{B}_i$ generates $\mathfrak{g}_i$ if and only if its associated graph $\tau(\Sigma_i)$ is a connected subgraph of $G_i$, then the system in \cref{eq:general.non.intertwining.system} is controllable on $G$ if and only if $\tau(\Gamma_i)$ is connected for every $i=1,\dots,m$, where $\Gamma_i=\{B_{ij}:j=1,\dots, n_i\}$. 
\end{theorem} \begin{proof} By the assumption, $\tau(\Gamma_i)$ is connected if and only if $\mathrm{Lie}\,(\Gamma_i)=\mathfrak{g}_i$ for each $i=1,\dots,m$. Together with the non-intertwining property between each pair of $\mathfrak{g}_i$ and $\mathfrak{g}_j$, the connectivity of $\tau(\Gamma_i)$ for all $i$ is equivalent to \[ \mathrm{Lie}\,(\Gamma) =\bigoplus_{i=1}^m\mathrm{Lie}\,(\Gamma_i) =\bigoplus_{i=1}^m\mathfrak{g}_i =\mathfrak{g}, \] where $\Gamma=\bigcup_{i=1}^m\Gamma_i$. The proof is then concluded by applying the LARC. \end{proof} \begin{remark}[Symmetric Group Method for Systems Governed by Non\hyp{}intertwining Lie Algebras] We find it worthwhile to mention that the symmetric group method also applies to bilinear systems with their underlying Lie algebras admitting a non-intertwining decomposition, through a properly defined $\iota$. For instance, in \cref{ex:spin.reps}, since both $\{A_i\}$ and $\{B_i\}$ in \cref{eq:spin.reps.so4.basis} are isomorphic to the standard basis in $\mathfrak{so}(3)$, the symmetric group method extends to the systems in \cref{eq:so4_non_standard} as well, by associating each component in the decomposition to a copy of $S_3$ and defining $\iota(A_i,B_j)=\bigl((i,i+1), (j,j+1)\bigr)$, with the index taken modulo $3$. Consequently, the system in \cref{eq:so4_non_standard} is controllable if and only if $\iota$ relates $\Gamma$ to two disjoint $3$-cycles in $S_3\oplus S_3$. \end{remark} \section{Summary} In this paper, we develop a combinatorics-based framework to characterize controllability of bilinear systems evolving on Lie groups, in which Lie bracket operations of vector fields are represented by operations on permutations in a symmetric group and edges in a graph. Through such representations, we obtain the tractable and transparent combinatorial characterizations of controllability in terms of permutation cycles and graph connectivity. This framework is established by first considering bilinear systems on $\son$, and we show that, in this case, the permutation and graph representations are equivalent. Then, by exploiting techniques in representation theory, we extend our investigation into a more general category of bilinear systems via proper decompositions of the underlying Lie algebras of the systems. In particular, we illustrate the application of the developed combinatorial methods to bilinear systems whose underlying Lie algebras admit the Cartan or non-intertwining decomposition. The presented methodology not only provides an alternative to the LARC, but also advances geometric control theory by integrating it with techniques in combinatorics and representation theory. As a final remark, compared to known graph-theoretic methods mostly developed for networked or multi-agent systems, our framework proposes novel applications of graphs to the study of bilinear control systems. \appendix \section{Symmetric Groups and Permutations} \label{appd:Sn} In this appendix, we give a brief review of the symmetric group theory. For a thorough discussion on symmetric groups, the reader can refer to any standard algebra textbook, for example~\cite{Lang02}. Let $X_n$ be a finite set of $n$ elements, and without loss of generality, we may assume $X_n=\{1,\cdots,n\}$. A \emph{permutation} $\sigma$ of $X_n$ is a bijection from $X_n$ onto itself, and is denoted by \[ \sigma=\begin{pmatrix} 1 & 2 & \cdots & n \\ i_1 & i_2 & \cdots & i_n \end{pmatrix} \] if $\sigma(1)=i_1$, \dots, $\sigma(n)=i_n$ for distinct $i_1,\ldots,i_n\in{}X_n$. 
A permutation that switches only two elements is called a \emph{transposition}, and is denoted by $\sigma=(i_1i_2)$ if $i_1\neq i_2$, $\sigma(i_1)=i_2$, $\sigma(i_2)=i_1$, and $\sigma$ fixes all other indices. More generally, an \emph{$r$-cycle}, denoted by $\sigma=(i_1i_2\cdots{}i_r)$, is a permutation that satisfies $\sigma(i_1)=i_2$, $\sigma(i_{2})=i_3$, \ldots, $\sigma(i_r)=i_1$ and fixes all other indices. It can be shown that any permutation can be decomposed, uniquely up to the ordering of the factors, into disjoint cycles (cycles that have no common indices). For example, when $n=4$, the permutation \(\bigl(\begin{smallmatrix} 1 & 2 & 3 & 4\\ 2 & 3 & 4 & 1 \end{smallmatrix}\bigr)\) can be represented by a single $4$-cycle $(1234)$, while the permutation \(\bigl(\begin{smallmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{smallmatrix}\bigr)\) is the composition of two transpositions ($2$-cycles): $(13)(24)$. Given a permutation $\sigma$ of $X_n$ and an integer $i$, $1\leqslant{}i\leqslant{}n$, the \emph{orbit} of $i$ is its orbit under the action of the cyclic group generated by $\sigma$, that is, $\{\sigma^k(i):k\in\mathbb{N}\}$. So for $\sigma=(1234)$, the orbit of $2$ is $\{\sigma^k(2):k\in\mathbb{N}\} =\{2, \sigma(2), \sigma^2(2), \sigma^3(2)\} =\{1,2,3,4\}$; and for $\sigma=(13)(24)$, the orbit of $2$ is $\{\sigma^k(2):k\in\mathbb{N}\} =\{2,\sigma(2)\} =\{2,4\}$. The \emph{symmetric group} $S_n$ is defined as the group of permutations on $X_n$, with its group operation being the composition of bijections.
\section{Basics of Representation Theory} \label{appd:representation}
Representation theory is a branch of algebra that studies algebraic structures by representing the elements of an algebraic object, such as a group, a module, or an algebra, as linear transformations of vector spaces. In this appendix, we will review some basic concepts and results in the representation theory of Lie algebras that are used in this paper. Detailed discussions of Lie representation theory can be found in \cite{fulton1991,Knapp2002lie}. To study the algebraic structure of a Lie algebra, let us introduce some related definitions.
\begin{definition} \label{def:semisimple.appendix} \mbox{}
\begin{itemize}
\item A Lie algebra $\mathfrak{g}$ is said to be \emph{abelian} if \[ [\mathfrak{g},\mathfrak{g}]:={\rm span}\,\{[X,Y]:X,Y\in\mathfrak{g}\}=0. \]
\item A subspace $\mathfrak{h}$ of $\mathfrak{g}$ is a \emph{Lie subalgebra} of $\mathfrak{g}$ if $[\mathfrak{h},\mathfrak{h}]\subseteq\mathfrak{h}$. In other words, $\mathfrak{h}$ is a Lie algebra itself w.r.t.\ $[\cdot,\cdot]$.
\item A Lie subalgebra $\mathfrak{h} \leqslant{}\mathfrak{g}$ is an \emph{ideal} in $\mathfrak{g}$ if $[\mathfrak{h},\mathfrak{g}]\subseteq\mathfrak{h}$.
\item The Lie algebra $\mathfrak{g}$ is said to be \emph{simple} if it is nonabelian and has no proper nonzero ideals, and \emph{semisimple} if it has no nonzero abelian ideals.
\end{itemize}
\end{definition}
It can be shown that every semisimple Lie algebra $\mathfrak{g}$ can be decomposed into a direct sum of simple Lie algebras which are ideals in $\mathfrak{g}$. Moreover, this decomposition is unique, and the only ideals of $\mathfrak{g}$ are the direct sums of some of these simple Lie algebras. For example, each special orthogonal Lie algebra $\mathfrak{so}(n)=\{\Omega\in\mathbb{R}^{n\times n}:\Omega+\Omega^{\intercal}=0\}$, which we use extensively in this paper, is simple for every $n\geqslant3$ except $n=4$, while $\mathfrak{so}(4)$ is semisimple but not simple: as shown in \cref{ex:spin.reps}, $\mathfrak{so}(4)=\mathfrak{so}(3)\oplus\mathfrak{so}(3)$.
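To make the last statement concrete, the following short Python sketch (an illustration added here, not part of the original text) verifies the bracket relations behind this splitting, using the basis in \cref{eq:spin.reps.so4.basis} and assuming the convention $\Omega_{ij}=E_{ij}-E_{ji}$; with the opposite sign convention the right-hand sides change sign, but the two commuting copies of $\mathfrak{so}(3)$ remain.
\begin{verbatim}
# Check [A_i,A_j] = A_k, [B_i,B_j] = B_k (cyclic) and [A_i,B_j] = 0 in so(4).
import numpy as np

def Om(i, j):
    M = np.zeros((4, 4))
    M[i - 1, j - 1], M[j - 1, i - 1] = 1.0, -1.0   # Omega_ij = E_ij - E_ji
    return M

def br(X, Y):
    return X @ Y - Y @ X

A = [(Om(2, 3) + Om(1, 4)) / 2, (Om(1, 3) - Om(2, 4)) / 2, (Om(1, 2) + Om(3, 4)) / 2]
B = [(Om(1, 3) + Om(2, 4)) / 2, (Om(1, 4) - Om(2, 3)) / 2, (Om(1, 2) - Om(3, 4)) / 2]

for (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:        # cyclic triples
    assert np.allclose(br(A[i], A[j]), A[k])
    assert np.allclose(br(B[i], B[j]), B[k])
assert all(np.allclose(br(Ai, Bj), 0) for Ai in A for Bj in B)
print("so(4) = so(3) (+) so(3): bracket relations verified")
\end{verbatim}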
The study of algebraic structures of semisimple Lie algebras plays a central role in representation theory. One of the most prominent results is the Cartan decomposition, which traces back to the work of \'{E}lie Cartan and Wilhelm Killing in the 1880s and generalizes the notion of the singular value decomposition of matrices. Given a semisimple Lie algebra $\mathfrak{g}$, its \emph{Cartan subalgebra} $\mathfrak{h}$ is a maximal abelian subalgebra of $\mathfrak{g}$ such that ${\rm ad}_H$ is diagonalizable for all $H\in\mathfrak{h}$, where ${\rm ad}_XY=[X,Y]$ for all $X,Y\in\mathfrak{g}$. Moreover, the dimension of $\mathfrak{h}$ is called the \emph{rank} of $\mathfrak{g}$. Let $\mathfrak{h}^{\ast}$ denote the dual space of $\mathfrak{h}$, i.e., the space of linear functionals on $\mathfrak{h}$. A nonzero element $\alpha\in\mathfrak{h}^{\ast}$ is called a \emph{root} of $\mathfrak{g}$ if there exists some nonzero $X\in\mathfrak{g}$ such that ${\rm ad}_HX=\alpha(H)X$ for all $H\in\mathfrak{h}$, and $\mathfrak{g}_{\alpha}:=\{X\in\mathfrak{g}: {\rm ad}_HX=\alpha(H)X, \forall H\in\mathfrak{h}\}$ is a vector space called the \emph{root space} associated with $\alpha$, which can be shown to be one-dimensional. Let $R$ denote the set of roots of $\mathfrak{g}$; then $R$ is finite and spans $\mathfrak{h}^*$. With the above notations, the \emph{root space decomposition}, which generalizes the classical \emph{Cartan decomposition}, is defined as \[ \mathfrak{g}=\mathfrak{h}\oplus\Bigl(\bigoplus_{\alpha\in R}\mathfrak{g}_{\alpha}\Bigr). \] A major tool to study the properties of $R$ is the Weyl group, which is defined as follows. Let $\alpha\in R$ be a root and let $s_{\alpha}:\mathfrak{h}^*\rightarrow\mathfrak{h}^*$ denote the reflection about the hyperplane in $\mathfrak{h}^*$ orthogonal to $\alpha$, i.e., $s_\alpha(\beta) =\beta-\frac{2\langle\beta,\alpha\rangle}{\langle\alpha,\alpha\rangle}\alpha$ for all $\beta\in\mathfrak{h}^*$, where $\langle\cdot,\cdot\rangle$ is an inner product on $\mathfrak{h}^*$. The \emph{Weyl group} $W$ of $R$ is the subgroup of the orthogonal group ${\rm O}(\mathfrak{h}^*)$ of $\mathfrak{h}^*$ generated by all $s_\alpha$ for $\alpha\in R$. It can be shown that $W$ is a finite group and hence a subgroup of a symmetric group by Cayley's theorem.
\end{document}
\begin{document} \title{Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks} \author{Stefan Depeweg$^{1,2}$, Jos\'e Miguel Hern\'andez-Lobato$^3$, Steffen Udluft$^2$, Thomas Runkler$^{1,2}$ \\ 1 - Technical University of Munich, Germany. 2 - Siemens AG, Germany. \\ 3 - University of Cambridge, United Kingdom. } \maketitle
\begin{abstract} We derive a novel sensitivity analysis of input variables for predictive epistemic and aleatoric uncertainty. We use Bayesian neural networks with latent variables as a model class and illustrate the usefulness of our sensitivity analysis on real-world datasets. Our method increases the interpretability of complex black-box probabilistic models. \end{abstract}
\section{Introduction}
Extracting human-understandable knowledge out of black-box machine learning methods is a highly relevant topic of research. One aspect of this is to determine how sensitive the model response is to each input variable. This can be useful both as a sanity check that the approximated function is reasonable and as a way to gain new insights about the problem at hand. For neural networks this kind of model inspection can be performed by a sensitivity analysis \cite{fu1993sensitivity,montavon2017methods}, a simple method that works by considering the gradient of the network output with respect to the input variables. Our key contribution is to transfer this idea to predictive uncertainty: What features impact the uncertainty in the predictions of our model? To that end we use Bayesian neural networks (BNN) with latent variables \cite{depeweg2016learning,depeweg2017decomposition}, a recently introduced probabilistic model that can describe complex stochastic patterns while at the same time accounting for model uncertainty. From their predictive distributions we can extract epistemic and aleatoric uncertainties \cite{kendall2017uncertainties,depeweg2017decomposition}. The former uncertainty originates from our lack of knowledge of model parameter values and is determined by the amount of available data, while aleatoric uncertainty consists of irreducible stochasticity originating from unobserved (latent) variables. By combining the sensitivity analysis with a decomposition of predictive uncertainty into its epistemic and aleatoric components, we can analyze which features influence each type of uncertainty. The resulting sensitivities can provide useful insights into the model at hand. On one hand, a feature with high epistemic sensitivity suggests that careful monitoring or safety mechanisms are required to keep the values of this feature in regions where the model is confident. On the other hand, a feature with high aleatoric sensitivity indicates a dependence of that feature on other unobserved (latent) variables.
\section{Bayesian Neural Networks with Latent Variables}
Bayesian Neural Networks (BNNs) are scalable and flexible probabilistic models. Given a training set~${\mathcal{D} = \{ \mathbf{x}_n, \mathbf{y}_n \}_{n=1}^N}$, formed by feature vectors~${\mathbf{x}_n \in \mathbb{R}^D}$ and targets~${\mathbf{y}_n \in \mathbb{R}^K}$, we assume that~${\mathbf{y}_n = f(\mathbf{x}_n,z_n;\mathcal{W}) + \bm \epsilon_n}$, where~$f(\cdot , \cdot;\mathcal{W})$ is the output of a neural network with weights $\mathcal{W}$ and $K$ output units. The network receives as input the feature vector $\mathbf{x}_n$ and the latent variable $z_n \sim \mathcal{N}(0,\gamma)$.
We choose rectifiers, ${\varphi(x) = \max(x,0)}$, as activation functions for the hidden layers and the identity function, ${\varphi(x) = x}$, for the output layer. The network output is corrupted by the additive noise variable~$\bm \epsilon_n \sim \mathcal{N}(\bm 0,\bm \Sigma)$ with diagonal covariance matrix $\bm \Sigma$. The role of the latent variable $z_n$ is to capture unobserved stochastic features that can affect the network's output in complex ways. The network has~$L$ layers, with~$V_l$ hidden units in layer~$l$, and~${\mathcal{W} = \{ \mathbf{W}_l \}_{l=1}^L}$ is the collection of~${V_l \times (V_{l-1}+1)}$ weight matrices. The $+1$ is introduced here to account for the additional per-layer biases. We approximate the exact posterior distribution $p(\mathcal{W},\mathbf{z}\,|\,\mathcal{D})$ with: \begin{align} q(\mathcal{W},\mathbf{z}) = & \underbrace{\left[ \prod_{l=1}^L\! \prod_{i=1}^{V_l}\! \prod_{j=1}^{V_{l\!-\!1}\!+\!1} \mathcal{N}(w_{ij,l}| m^w_{ij,l},v^w_{ij,l})\right]}_{\text{\small $q(\mathcal{W})$}} \times \underbrace{\left[\prod_{n=1}^N \mathcal{N}(z_n \,|\, m_n^z, v_n^z) \right]}_{\text{\small $q(\mathbf{z})$}}\,. \label{eq:posterior_approximation} \end{align} The parameters~$m^w_{ij,l}$,~$v^w_{ij,l}$ and~$m^z_n$,~$v^z_n$ are determined by minimizing a divergence between $p(\mathcal{W},\mathbf{z}\,|\,\mathcal{D})$ and the approximation $q$. For more details the reader is referred to the work of \cite{depeweg2016learning,hernandez2016black}. In our experiments we use black-box $\alpha$-divergence minimization with $\alpha=1.0$.
\subsection{Uncertainty Decomposition}\label{sec:unc_decomp}
BNNs with latent variables can describe complex stochastic patterns while at the same time accounting for model uncertainty. They achieve this by jointly learning $q(\mathbf{z})$, which captures specific values of the latent variables in the training data, and $q(\mathcal{W})$, which represents any uncertainty about the model parameters. The result is a principled Bayesian approach for learning flexible stochastic functions. For these models, we can identify two types of uncertainty: \emph{aleatoric} and \emph{epistemic} \citep{KIUREGHIAN2009105,kendall2017uncertainties}. Aleatoric uncertainty originates from random latent variables, whose randomness cannot be reduced by collecting more data. In the BNN this is given by $q(\mathbf{z})$ (and constant additive Gaussian noise $\bm \epsilon$, which we omit). Epistemic uncertainty, on the other hand, originates from lack of statistical evidence and can be reduced by gathering more data. In the BNN this is given by $q(\mathcal{W})$, which captures uncertainty over the model parameters. These two forms of uncertainty are entangled in the approximate predictive distribution for a test input $\mathbf{x}^\star$: \begin{equation} p(\mathbf{y}^\star|\mathbf{x}^\star) = \int p(\mathbf{y}^\star|\mathcal{W},\mathbf{x}^\star,z^\star)p(z^\star)q(\mathcal{W})\,dz^\star\,d\mathcal{W} \,,\label{eq:final_predictive_dist} \end{equation} where $p(\mathbf{y}^\star|\mathcal{W},\mathbf{x}^\star,z^\star)=\mathcal{N}(\mathbf{y}^\star|f(\mathbf{x}^\star,z^\star;\mathcal{W}),\bm \Sigma)$ is the likelihood function of the BNN and $p(z^\star)=\mathcal{N}(z^\star|0,\gamma)$ is the prior on the latent variables. We can use the variance $\sigma^2(y^\star_k|\mathbf{x}^\star)$ as a measure of predictive uncertainty for the $k$-th component of $\mathbf{y}^\star$.
The variance can be decomposed into an epistemic and an aleatoric term using the law of total variance: \begin{align} \sigma^2(y^\star_k|\mathbf{x}^\star) &= \sigma^2_{q(\mathcal{W})}(\mathbf{E}_{p(z^\star)}[y^\star_k|\mathcal{W},\mathbf{x}^\star]) + \mathbf{E}_{q(\mathcal{W})}[\sigma^2_{p(z^\star)}(y^\star_k|\mathcal{W},\mathbf{x}^\star)] \label{eq:decomp} \end{align} The first term, $\sigma^2_{q(\mathcal{W})}(\mathbf{E}_{p(z^\star)}[y^\star_k|\mathcal{W},\mathbf{x}^\star])$, is the variability of $y^\star_k$ when we integrate out $z^\star$ but not $\mathcal{W}$. Because $q(\mathcal{W})$ represents our belief over model parameters, this is a measure of the \emph{epistemic} uncertainty. The second term, $\mathbf{E}_{q(\mathcal{W})}[\sigma^2_{p(z^\star)}(y^\star_k|\mathcal{W},\mathbf{x}^\star)]$, represents the average variability of $y^\star_k$ not originating from the distribution over model parameters $\mathcal{W}$. This measures \emph{aleatoric} uncertainty, as the variability can only come from the latent variable $z^\star$.
\section{Sensitivity Analysis of Predictive Uncertainty}
In this section we will extend the method of sensitivity analysis to predictive uncertainty. The goal is to provide insight into the question of which features affect the stochasticity of our model, which results in aleatoric uncertainty, and which features impact its epistemic uncertainty. For instance, if we have limited data about different settings of a particular feature $i$, even a small change of its value can have a large effect on the confidence of the model. Answers to these two questions can provide useful insights about the model at hand. For instance, a feature with high aleatoric sensitivity indicates a strong interaction with other unobserved/latent features. If a practitioner can expand the set of features by taking more refined measurements, it may be advisable to look into variables which may exhibit dependence with that feature and which may explain the stochasticity in the data. Furthermore, a feature with high epistemic sensitivity suggests that careful monitoring or extended safety mechanisms are required to keep the values of this feature in regions where the model is confident. We start by briefly reviewing the technique of sensitivity analysis \cite{fu1993sensitivity,montavon2017methods}, a simple method that can provide insight into how changes in the input affect the network's prediction. Let $\mathbf{y} = f(\mathbf{x};\mathcal{W})$ be a neural network fitted on a training set~${\mathcal{D} = \{ \mathbf{x}_n, \mathbf{y}_n \}_{n=1}^N}$, formed by feature vectors~${\mathbf{x}_n \in \mathbb{R}^D}$ and targets~${\mathbf{y}_n \in \mathbb{R}^K}$. We want to understand how each feature $i$ influences the output dimension $k$. Given some test data $\mathcal{D}_\text{test}=\{ \mathbf{x}_n^\star, \mathbf{y}_n^\star \}_{n=1}^{N_\text{test}}$, we use the partial derivative of the output dimension $k$ w.r.t. feature $i$: \begin{equation} I_{i,k} = \frac{1}{N_\text{test}} \sum_{n=1}^{N_\text{test}} \big| \frac{\partial f(\mathbf{x}^\star_n)_k}{\partial x^\star_{i,n}} \big|\,. \label{eq:sensitivity} \end{equation} In Section \ref{sec:unc_decomp} we saw that we can decompose the variance of the predictive distribution of a BNN with latent variables into its epistemic and aleatoric components. Our goal is to obtain sensitivities of these components with respect to the input variables.
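Before describing the sampling-based estimates, the following minimal Python sketch (an illustration added here, not part of the original method; the helper name and the finite-difference approximation are ours) shows how the classical measure in Eq.~(\ref{eq:sensitivity}) can be computed for a black-box predictor; the networks used in this paper can, of course, be differentiated exactly instead.
\begin{verbatim}
# I_{i,k}: mean absolute partial derivative of output k w.r.t. feature i,
# approximated here by central finite differences over a test set.
import numpy as np

def sensitivities(f, X_test, k=0, h=1e-4):
    # f maps an (N, D) array of inputs to an (N, K) array of outputs
    N, D = X_test.shape
    I = np.zeros(D)
    for i in range(D):
        e = np.zeros(D)
        e[i] = h
        grad_i = (f(X_test + e)[:, k] - f(X_test - e)[:, k]) / (2.0 * h)
        I[i] = np.mean(np.abs(grad_i))
    return I

# toy check with the deterministic part of the toy function used in the experiments
f = lambda X: 7.0 * np.sin(X[:, [0]])
X_test = np.random.uniform(-4.0, 4.0, size=(1000, 2))
print(sensitivities(f, X_test))   # feature 1 dominates, feature 2 is ~0
\end{verbatim}
In the same spirit, replacing $f$ by the Monte-Carlo estimates of the epistemic and aleatoric standard deviations described next gives finite-difference versions of Eqs.~(\ref{eq:epistemic_sensitivity}) and~(\ref{eq:aleatoric_sensitivity}).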
For this we use a sampling-based approach to approximate the two uncertainty components \cite{depeweg2017decomposition} and then calculate the partial derivative of these w.r.t. the input variables. For each test data point $\mathbf{x}_n^\star$, we perform $N_w \times N_z$ forward passes through the BNN. We first sample $w \sim q(\mathcal{W})$ a total of $N_w$ times and then, for each of these samples of $q(\mathcal{W})$, perform $N_z$ forward passes in which $w$ is fixed and we only sample the latent variable $z$. We can then empirically estimate the expected predictive value and the two components on the right-hand side of Eq. (\ref{eq:decomp}): \begin{align} \mathbf{E}[y^\star_{n,k}|\mathbf{x}_n^\star] &\approx \frac{1}{N_w} \frac{1}{N_z} \sum_{n_w=1}^{N_w} \sum_{n_z=1}^{N_z} y^\star_{n_w,n_z}(\mathbf{x}^\star_n)_k \\ \sigma_{q(\mathcal{W})}(\mathbf{E}_{p(z^\star)}[y^\star_{n,k}|\mathcal{W},\mathbf{x}^\star_n]) &\approx \hat{\sigma}_{N_w}( \frac{1}{N_z}\sum_{n_z=1}^{N_z} y^\star_{n_w,n_z}(\mathbf{x}^\star_n)_k) \\ \mathbf{E}_{q(\mathcal{W})}[\sigma^2_{p(z^\star)}(y^\star_{n,k}|\mathcal{W},\mathbf{x}^\star_n)]^{\frac{1}{2}} &\approx \big(\frac{1}{N_w}\sum_{n_w=1}^{N_w} \hat{\sigma}^2_{N_z}(y^\star_{n_w,n_z}(\mathbf{x}^\star_n)_k)\big)^{\frac{1}{2}} \,. \end{align} where $y^\star_{n_w,n_z}(\mathbf{x}^\star_n)_k = f(\mathbf{x}^\star_n,z^{n_w,n_z};\mathcal{W}^{n_w})_k$ and $\hat{\sigma}^2_{N_z}$ ($\hat{\sigma}^2_{N_w}$) is an empirical estimate of the variance over $N_z$ ($N_w$) samples of $z$ ($\mathcal{W}$). We have used the square root of each component so that all terms share the same units as $y^\star_{n,k}$. Now we can calculate the sensitivities: \begin{align} I_{i,k} &= \frac{1}{N_\text{test}} \sum_{n=1}^{N_\text{test}} \big| \frac{\partial \mathbf{E}[y^\star_{n,k}|\mathbf{x}^\star_n])}{\partial x^\star_{i,n}} \big| \label{eq:expected_sensitivity} \\ I_{i,k}^{\text{epistemic}} &= \frac{1}{N_\text{test}} \sum_{n=1}^{N_\text{test}} \big|\frac{\partial \sigma_{q(\mathcal{W})}(\mathbf{E}_{p(z^\star)}[y^\star_{n,k}|\mathcal{W},\mathbf{x}^\star_n])}{\partial x^\star_{i,n}}\big| \label{eq:epistemic_sensitivity} \\ I_{i,k}^{\text{aleatoric}} &= \frac{1}{N_\text{test}} \sum_{n=1}^{N_\text{test}} \big|\frac{\partial \mathbf{E}_{q(\mathcal{W})}[\sigma^2_{p(z^\star)}(y^\star_{n,k}|\mathcal{W},\mathbf{x}^\star_n)]^{\frac{1}{2}}}{\partial x^\star_{i,n}}\big| \label{eq:aleatoric_sensitivity} \,, \end{align} where Eq. (\ref{eq:expected_sensitivity}) is the standard sensitivity term. We also note that the general drawbacks \cite{montavon2017methods} of the sensitivity analysis, such as considering every variable in isolation, arise due to its simplicity. These will also apply when focusing on the uncertainty components.
\section{Experiments}
\begin{figure}
\caption{Sensitivity analysis for the predictive expectation and uncertainty on the toy data and on the UCI datasets.}
\label{fig:toy} \label{fig:pplant} \label{fig:kin8nm} \label{fig:energy} \label{fig:concrete} \label{fig:protein} \label{fig:wine} \label{fig:boston} \label{fig:naval} \label{fig:uci}
\end{figure}
In this section we conduct an exploratory study. For that we will first use an artificial toy dataset and then eight datasets from the UCI repository \cite{lichman2013uci} of varying domains and dataset sizes. For all experiments, we use a BNN with two hidden layers. We first perform model selection on the number of hidden units per layer from $\{20,40,60,80\}$ on the available data; details can be found in Appendix \ref{ap:ms}.
We train for $3000$ epochs with a learning rate of $0.001$ using Adam as the optimizer. For the sensitivity analysis we draw $N_w=200$ samples $w \sim q(\mathcal{W})$ and $N_z=200$ samples $z \sim \mathcal{N}(0,\gamma)$. All experiments were repeated $5$ times and we report average results.
\subsection{Toy Data}
We consider a regression task for a stochastic function with heteroskedastic noise: $y= 7 \sin (x_1) + 3|\cos (x_2 / 2)| \epsilon$ with $\epsilon \sim \mathcal{N}(0,1)$. The first input variable $x_1$ is responsible for the shape of the function whereas the second variable $x_2$ determines the noise level. We sample $500$ data points with $x_1 \sim \text{exponential}(\lambda=0.5)-4$ and $x_2 \sim \mathcal{U}(-4,4)$. Fig. \ref{fig:toy} shows the sensitivities. The first variable $x_1$ is responsible for the epistemic uncertainty whereas $x_2$ is responsible for the aleatoric uncertainty, which is consistent with the generative model of the data.
\subsection{UCI Datasets}
We consider several real-world regression datasets from the UCI data repository \cite{lichman2013uci}. Detailed descriptions, including feature and target explanations, can be found on the respective website. For evaluation we use the same training and test data splits as in \cite{hernandez2016black}. In Fig. \ref{fig:uci} we show the results of all experiments. For some problems the aleatoric sensitivity is most prominent (Fig. \ref{fig:protein},\ref{fig:wine}), while in others we have predominantly epistemic sensitivity (Fig. \ref{fig:concrete},\ref{fig:boston}) and a mixture in others. This makes sense, because we have variable dataset sizes (e.g. Boston Housing with 506 data points and 13 features, compared to Protein Structure with 45730 points and 9 features) and also likely different heterogeneity in the datasets. In the power-plant example, features $1$ (temperature) and $2$ (ambient pressure) are the main sources of aleatoric uncertainty of the target, the net hourly electrical energy output. The data in this problem originates from a combined cycle power plant consisting of gas and steam turbines. The provided features likely provide only limited information about the energy output, which is subject to complex combustion processes. We can expect that a change in temperature and pressure will influence this process in a complex way, which can explain the high sensitivities we see. The task in the naval-propulsion-plant example, shown in Fig. \ref{fig:naval}, is to predict the compressor decay state coefficient of a gas turbine operated on a naval vessel. Here we see that two features, the compressor inlet air temperature and air pressure, have high epistemic sensitivity, but do not influence the overall sensitivity much. This makes sense, because we only have a single value of both features in the complete dataset. The model has learned no influence of these features on the output (because they are constant), but any change away from that constant value makes the model highly uncertain.
\section{Conclusion}
In this paper we presented a new sensitivity analysis for predictive epistemic and aleatoric uncertainty. Experiments indicate that this method yields useful insights on real-world datasets.
\begin{appendices}
\section{Model Selection}\label{ap:ms}
We perform model selection on the number of hidden units for a BNN with two hidden layers. Table \ref{tab:results_tll} shows test log-likelihoods with standard error on all UCI datasets.
Underfitting would show itself by high aleatoric and low epistemic uncertainty whereas overfitting would result in high epistemic and low aleatoric uncertainty. \begin{table*}[!ht] \centering \caption{Test Log-likelihood on UCI Dataset.} \begin{tabular}{l@{\hspace{0.25cm}}r@{$\pm$}l@{\hspace{0.25cm}}r@{$\pm$}l@{\hspace{0.25cm}}r@{$\pm$}l@{\hspace{0.25cm}}r@{$\pm$}l@{\hspace{0.25cm}}}\hline &\multicolumn{8}{c}{Hidden units per layer} \\ \bf{Dataset}&\multicolumn{2}{c}{\bf{$20$}}&\multicolumn{2}{c}{\bf{ $40$ }}&\multicolumn{2}{c}{\bf{$60$}}&\multicolumn{2}{c}{\bf{ $80$ }}\\ \hline {\bf Power Plant } & -2.76&0.03&-2.71&0.01&{\bf -2.69}&{\bf 0.02}&-2.71&0.03 \\ {\bf Kin8nm} & 1.23&0.01&1.30&0.01&1.30&0.01&{\bf 1.31}&{\bf 0.01} \\ {\bf Energy Efficiency}&-0.79&0.10&{\bf -0.66}&{\bf 0.07}&-0.79&0.07&-0.74&0.06 \\ {\bf Concrete Strength} & -3.02&0.05&{\bf -2.99}&{\bf 0.03}&-3.00&0.01&-3.01&0.04 \\ {\bf Protein Structure} & -2.40&0.01&-2.32&0.00&-2.26&0.01&{\bf -2.24}&{\bf 0.01} \\ {\bf Wine Quality Red} & 1.94& 0.11&{\bf 1.95}&{\bf 0.09}&1.78&0.13&1.62&0.13 \\ {\bf Boston Housing} & -2.49&0.12&{\bf -2.39}&{\bf 0.05}&-2.52&0.03&-2.66&0.03 \\ {\bf Naval Propulsion} & 1.80&0.04&1.74&0.06&1.81&0.05&{\bf 1.86}&{\bf 0.05} \\ \hline \end{tabular} \label{tab:results_tll} \end{table*} \end{appendices} \end{document}
\begin{document} \removeabove{0.5cm} \removebetween{0.5cm} \removebelow{0.6cm} \maketitle \begin{prelims} \DisplayAbstractInEnglish \DisplayKeyWords \DisplayMSCclass \medskip \languagesection{Fran\c{c}ais} \DisplayTitleInFrench \DisplayAbstractInFrench \end{prelims} \setcounter{tocdepth}{2} \tableofcontents \section{Introduction} A conjecture of Manin and Orlov states that the Grothendieck--Knudsen moduli space $\overline{\mathcal{M}}_{0,n}$ of stable, rational curves with $n$ markings admits a full, exceptional collection which is invariant (as a set) under the action of the symmetric group $S_n$ permuting the markings. The conjecture has been proved by the authors in \cite{CT_partII} by reducing it to the similar statement for several Hassett spaces, one of which is the space under consideration in this paper. While the proof presented in \cite{CT_partII} for other needed Hassett spaces is valid in this particular case as well, it was not discussed in \cite{CT_partII} and we prefer to give a different and much simpler proof here. For a vector of rational weights $\mathbf{a}=(a_1,\ldots,a_n)$ with $0<a_i\le1$ and $\sum a_i>2$, the Hassett space $\overline{\text{\rm M}}_{\mathbf{a}}$ is the moduli space of weighted pointed stable rational curves, i.e.,~pairs $(C,\sum a_ip_i)$ with slc singularities, such that $C$ is a genus~$0$, at worst nodal, curve and the $\mathbb{Q}$-line bundle $\omega_C(\sum a_ip_i)$ is ample. For example, $\overline{\mathcal{M}}_{0,n}=\overline{\text{\rm M}}_{(1,\ldots,1)}$. There exist birational reduction morphisms $\overline{\text{\rm M}}_{\mathbf{a}}\to\overline{\text{\rm M}}_{\mathbf{a'}}$ whenever the weight vectors are such that $a_i\ge a_i'$ for every~$i$. Understanding the derived categories of the Hassett spaces $\overline{\text{\rm M}}_{\mathbf{a}}$ was considered in the work of Ballard, Favero and Katzarkov \cite{BFK}, and earlier, for $\overline{\mathcal{M}}_{0,n}$, in the work of Manin and Smirnov \cite{ManinSmirnov1} (see also \cite{Smirnov Thesis,ManinSmirnov2}). However, here we consider a modified question. If $\Gamma_{\mathbf{a}}\subseteq S_n$ denotes the stabilizer of the set of weights ${\mathbf{a}}$, we ask whether there exists a full, $\Gamma_{\mathbf{a}}$-invariant exceptional collection on $\overline{\text{\rm M}}_{\mathbf{a}}$. \cite[Theorem 1.5]{CT_partII} reduces the existence of such collections on $\overline{\mathcal{M}}_{0,n}$, as well as on many other Hassett spaces $\overline{\text{\rm M}}_{\mathbf{a}}$, to the following cases: \begin{itemize} \item[(I)] The Losev--Manin spaces $\overline{\text{\rm M}}_{\mathbf{a}}$, where ${\mathbf{a}}=(1,1,\epsilon,\ldots,\epsilon)$, $0<\epsilon\ll1$. \item[(II)] The Hassett spaces $\overline{\text{\rm M}}_{p,q}$, for $p+q=n$ ($q\geq0$, $p\geq2$) having $p$ \emph{heavy} weights and $q$ \emph{light} weights with the following properties: \begin{equation}\label{Mpq} a_1=\ldots=a_p=a+\eta,\quad a_{p+1}=\ldots=a_n=\epsilon,\quad pa+q\epsilon=2, \end{equation} where $0<\eta,\epsilon\ll1.$ \end{itemize} To reduce to the above cases, the authors were inspired by results of Bergstrom and Minabe \cite{BergstromMinabe, BergstromMinabeLM} that used reduction maps between Hassett spaces. The existence of a full, invariant, exceptional collection in case (I) was proved in \cite{CT_partI}.
The work in \cite{CT_partII} proves the statement for the spaces $\overline{\text{\rm M}}_{p,q}$ in (II) with $p\ge3$ and is the most difficult part of the argument. The current paper treats the spaces $\overline{\text{\rm M}}_{p,q}$ in (II) with $p=2$. We emphasize that this case is not explicitly proved in \cite{CT_partII}. However, the proof for $p>2$ is valid even when $p=2$. The proof for $p>2$ requires many comparisons between different Hassett spaces. Here we prove that this can be avoided when $p=2$. More precisely, the main space under consideration when $p=2$ is the following: \begin{notn}\label{Z} Let $Z_N$ denote the Hassett moduli space of rational curves with markings $N\cup\{0,\infty\}$, with the weights of the markings $0$ and $\infty$ equal to $\frac{1}{2}+\eta$ and the weights of the markings from $N$ equal to $\frac{1}{n}$, where $0<\eta\ll1$. We also write $Z_n:=Z_N$ for $n=|N|$ when there is no ambiguity. \end{notn} The condition on the weights is equivalent to the condition (\ref{Mpq}) for $p=2$ (in which case, $a=1-\frac{(n-2)\epsilon}{2}$). Explicitly, all light points may coincide with one another and one heavy point may coincide with at most $\lfloor\frac{n-1}{2}\rfloor$ light points. We have the following description: \begin{thm}\label{Z description} When $n$ is odd, the space $Z_n$ is isomorphic to the symmetric GIT quotient $Z_n=(\mathbb{P}^1)^n\mathbin{/\mkern-6mu/}_{\mathcal{O}(1,\ldots,1)}\mathbb{G}_m$, with respect to the diagonal action of $\mathbb{G}_m$ on $(\mathbb{P}^1)^n$, coming from $\mathbb{G}_m$ acting on $\mathbb{P}^1$ by $z\cdot [x,y]=[zx,z^{-1}y]$. When $n$ is even, $Z_n$ is isomorphic to the Kirwan desingularization of the same GIT quotient. \end{thm} Theorem \ref{Z description} for $n$ odd is stated in \cite{Ha} within a more general set-up. Theorem \ref{Z description} for $n$ even is a direct consequence of \cite{Ha}. For the reader's convenience, we give the proofs in Lemma \ref{identify} and Lemma~\ref{identify2}. The group $S_2\times S_n$ acts on $Z_n$ by permuting $0$, $\infty$, and the markings from $N$ respectively. In a similar fashion, the Losev--Manin space $\overline{\text{LM}}_N$ (or $\overline{\text{LM}}_n$, for $n=|N|$) of dimension $(n-1)$ is the Hassett space with markings from $N\cup\{0,\infty\}$ and weights $(1,1,\epsilon,\ldots,\epsilon)$: the weights of $0$, $\infty$ are equal to $1$, while the weights of the markings from $N$ are equal to $\epsilon$, with $0<\epsilon\ll1$. The space $\overline{\text{LM}}_N$ is isomorphic to an iterated blow-up of $\mathbb{P}^{n-1}$ along points $q_1,\ldots, q_n$ in linearly general position, and all linear subspaces spanned by $\{q_i\}$. In particular, $\overline{\text{LM}}_n$ is a toric variety. The action of $S_n$ permuting the markings from $N$ corresponds to a relabeling of the points $\{q_i\}$, while the action of $S_2$, permuting $0$, $\infty$, corresponds, at the level of $\mathbb{P}^{n-1}$, to a Cremona transformation with center at the points $\{q_i\}$. There is a birational $S_2\times S_N$-equivariant morphism, reducing the weights of $0$ and $\infty$: $p: \overline{\text{LM}}_N\rightarrow Z_N$. In particular, $Z_N$ is also a toric variety. Our main theorem is the following: \begin{thm}\label{main} The Hassett space $Z_n$ has a full exceptional collection which is invariant under the action of $(S_2\times S_n)$.
In particular, the K-group $K_0(Z_n)$ is a permutation $(S_2\times S_n)$-module. \end{thm} Theorem \ref{main} is the immediate consequence of Theorem \ref{odd} (case of $n$ odd) and Theorem \ref{even} (case of $n$ even). We now describe the collections. \begin{defn}\label{tautological} If $(\pi:\mathcal{U}\rightarrow\overline{\text{\rm M}}, \sigma_1,\ldots, \sigma_n)$ is the universal family over the Hassett space $\overline{\text{\rm M}}$, one defines tautological classes $$\psi_i:=\sigma_i^*\omega_{\pi},\quad \delta_{ij}=\sigma_i^*\sigma_j.$$ Note that when $n$ is odd, we have $\psi_0+\psi_{\infty}=0$ on $Z_n$. For other relations, including the case when $n$ is even, see Section \ref{Hassett}. \end{defn} \begin{defn}\label{L odd case} Assume $n$ is odd. Let $E\subseteq N$ and $p\in\mathbb{Z}$, such that if $e=|E|$ we have that $p+e$ is even. We define line bundles on $Z_n$ as follows: $$L_{E,p}:=-\left(\frac{e-p}{2}\right)\psi_{\infty}-\sum_{j\in E}\delta_{j\infty}.$$ As sums of $\mathbb{Q}$-line bundles, $L_{E,p}=\frac{p}{2}\psi_{\infty}+\frac{1}{2}\sum_{j\in E} \psi_j=-\frac{p}{2}\psi_0+\frac{1}{2}\sum_{j\in E} \psi_j$. In particular, the action of $S_2$ exchanges $L_{E,p}$ with $L_{E,-p}$. The line bundles $L_{E,p}$ are natural from the GIT point of view, see (\ref{odd translate}). \end{defn} \begin{thm}\label{odd} Let $n=2s+1$ be odd. The line bundles $\{L_{E,p}\}$ (Definition \ref{L odd case}) form a full, $(S_2\times S_n)$-invariant exceptional collection in $D^b(Z_n)$ under the condition: $$|p|+\min(e,n-e)\leq s,\quad \text{where}\quad e=|E|,\quad p+e\quad \text{even}.$$ The line bundles are ordered by decreasing $e$, and for a fixed $e$, arbitrarily. \end{thm} The collection in Theorem \ref{odd} is the dual of the collection in \cite[Theorem 1.10]{CT_partII} for $p=2$, with some of the constraints on the order removed. See also Remark \ref{elaborate1} for a more precise statement. Consider now the case when $n=2s+2\geq2$ is even. In this case the universal family over $Z_n$ has reducible fibers. For each partition $N=T\sqcup T^c$, $|T|=|T^c|=s+1$, we denote by $\delta_{T\cup\{\infty\}}\subseteq Z_n$ the boundary component parametrizing nodal rational curves with two components, with markings from $T\cup\{\infty\}$ on one component and $T^c\cup\{0\}$ on the other. Moreover, $\delta_{T\cup\{\infty\}}=\mathbb{P}^s\times\mathbb{P}^s$ and we have that $Z_n\rightarrow (\mathbb{P}^1)^n\mathbin{/\mkern-6mu/}_{\mathcal{O}(1,\ldots,1)}\mathbb{G}_m$ is a Kirwan resolution of singularities with exceptional divisors $\delta_{T\cup\{\infty\}}$. \begin{defn}\label{L even case} Assume $n$ is even. Let $E\subseteq N$ and $p\in\mathbb{Z}$, such that if $e=|E|$ we have that $p+e$ is even. We define line bundles on $Z_n$ as follows: $$L_{E,p}:=-\left(\frac{e-p}{2}\right)\psi_{\infty}-\sum_{j\in E}\delta_{j\infty}-\sum_{|E\cap T|-\frac{e-p}{2}>0}\left(|E\cap T|-\frac{e-p}{2}\right)\delta_{T\cup\{\infty\}}.$$ The line bundles $L_{E,p}$ are natural from the GIT point of view, see Definition \ref{even translate} and the discussion thereafter. From this point of view, it is also clear that the action of $S_2$ exchanges $L_{E,p}$ with $L_{E,-p}$. \end{defn} \begin{thm}\label{even} Assume $n=2s+2$ is even, $s\geq0$.
The following form a full, $(S_2\times S_n)$-invariant exceptional collection in $D^b(Z_n)$: \begin{itemize} \item The torsion sheaves $\mathcal{O}(-a,-b)$ supported on $\delta_{T\cup\{\infty\}}=\mathbb{P}^s\times\mathbb{P}^s$, for all $T\subseteq N$, $|T|=|T^c|=s+1$, such that one of the following holds: \begin{itemize} \item $0<a\leq s$, $0<b\leq s$, \item $a=0$, $0<b<\frac{s+1}{2}$, \item $b=0$, $0<a<\frac{s+1}{2}$. \end{itemize} \item The line bundles $\{L_{E,p}\}$ (Definition \ref{L even case}) under the following condition: $$|p|+\min(e,n+1-e)\leq s+1,\quad \text{where}\quad e=|E|,\quad p+e\quad \text{even}.$$ \end{itemize} The order is as follows: all torsion sheaves precede the line bundles, the torsion sheaves are arranged in order of decreasing $(a+b)$, while the line bundles are arranged in order of decreasing $e$, and for a fixed $e$, arbitrarily. \end{thm} The torsion part of the collection in Theorem \ref{even} is the same as the torsion part of the collection in \cite[Theorem 1.15]{CT_partII} for $p=2$. However, the remaining parts are not the same, nor are they dual to each other, as in the case of Theorem \ref{odd}. There is a relationship between the dual collection $\{L^\vee_{E,p}\}$ and the torsion free part of the collection in \cite[Theorem 1.15]{CT_partII} for $p=2$, but this is more complicated -- see Remark \ref{elaborate2} for a precise statement. To prove that our collections are exceptional, we use the method of windows \cite{DHL,BFK}. We then use some of the main results of \cite[Proposition 1.8, Theorem 1.10]{CT_partI} to prove fullness, by using the reduction map $p:\overline{\text{LM}}_n\rightarrow Z_n$ in order to compare our collections on $Z_n$ with the push-forward of the full exceptional collection on the Losev--Manin space. We emphasize that while in \cite{CT_partII} we prove exceptionality and fullness on spaces like $Z_N$ indirectly, by working on their contractions (small resolutions of the singular GIT quotient when $n$ is even), in this paper we prove both exceptionality and fullness directly, by using the method of windows (for $n$ even on the Kirwan resolution, the blow-up of the strictly semistable locus). As remarked in \cite{CT_partI}, we do not know any smooth projective toric varieties $X$ with an action of a finite group $\Gamma$ normalizing the torus action which do not have a $\Gamma$-equivariant exceptional collection $\{E_i\}$ of maximal possible length (equal to the topological Euler characteristic of $X$). From this point of view, the Losev--Manin spaces $\overline{\text{LM}}_N$ and their birational contractions $Z_N$ provide evidence that this may be true in general. The existence of such a collection implies that the K-group $K_0(X)$ is a permutation $\Gamma$-module. In the Galois setting (when $X$ is defined over a field which is not algebraically closed and $\Gamma$ is the absolute Galois group), an analogous statement was conjectured by Merkurjev and Panin \cite{MP}. Of course one may further wonder if $\{E_i\}$ is in fact full, which is related to the (non)-existence of phantom categories on $X$, another difficult open question. We refer to \cite{CT_Duke,CT_Crelle,CT_rigid} for background information on the birational geometry of $\overline{\mathcal{M}}_{0,n}$, the Losev--Manin space and other related spaces.
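To get a feeling for the size and shape of these collections, let us record the smallest cases explicitly; this example is ours and is included only as a sanity check on the numerology. For $n=3$ (so $s=1$), the condition $|p|+\min(e,3-e)\leq 1$ with $p+e$ even allows exactly the pairs $(E,p)$ with $(e,p)\in\{(0,0),(2,0),(3,\pm1)\}$, so the collection of Theorem \ref{odd} consists of the six line bundles $$\mathcal{O}_{Z_3},\qquad -\psi_{\infty}-\delta_{j\infty}-\delta_{k\infty}\ \ (\{j,k\}\subseteq N),\qquad -\psi_{\infty}-\sum_{j\in N}\delta_{j\infty},\qquad -2\psi_{\infty}-\sum_{j\in N}\delta_{j\infty},$$ which matches the Euler characteristic $e(\Sigma_3)=3{2\choose 1}=6$ computed in Section \ref{odd section}; the three bundles with $e=2$ are permuted by $S_3$ and fixed by $S_2$, while $S_2$ exchanges the two bundles with $e=3$. Similarly, in the smallest even cases one checks that Theorem \ref{even} yields $2$ objects for $n=2$ (no torsion sheaves and the two line bundles $L_{\emptyset,0}$ and $L_{N,0}$) and $6+18=24$ objects for $n=4$ (one torsion sheaf $\mathcal{O}(-1,-1)$ on each of the six divisors $\delta_{T\cup\{\infty\}}$, plus $18$ line bundles).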
\noindent{\bf Organization of paper.} In Section 2 we discuss preliminaries on Hassett spaces and prove some general results on how tautological classes pull back under reduction morphisms. These results are of independent interest and have been already used in a crucial way in \cite{CT_partII}. In Section 3, we discuss the GIT interpretation of the Hassett spaces $Z_n$ in the $n$ odd case and prove Theorem \ref{odd}. In Section 4, we do the same for the $n$ even case and prove Theorem \ref{even}. Section 5 serves as an appendix, recalling results on Losev--Manin spaces from \cite{CT_partI} and calculating the push forward to $Z_n$ of the full exceptional collection on the Losev--Manin space $\overline{\text{LM}}_n$. These results are used in Sections 3 and 4 to prove fullness in Theorems \ref{odd} and \ref{even}. Throughout the paper, we do not distinguish between line bundles and the corresponding divisor classes. \noindent{\bf Acknowledgements.} We are grateful to Alexander Kuznetsov for suggesting the problem about the derived categories of moduli spaces of pointed curves in the equivariant setting. We thank Daniel Halpern-Leistner for his help with windows in derived categories. We thank Valery Alexeev and the anonymous referee for useful comments. \section{Preliminaries on Hassett spaces}\label{Hassett} We refer to \cite{Ha} for background on the Hassett moduli spaces. Recall that for a choice of weights $${\mathbf{a}}=(a_1, \ldots, a_n),\quad a_i\in\mathbb{Q},\quad 0<a_i\leq 1,\quad\sum a_i>2,$$ we denote by $\overline{\text{\rm M}}_{\mathbf{a}}$ the fine moduli space of weighted rational curves with $n$ markings which are stable with respect to the set of weights $\mathbf{a}$. Moreover, $\overline{\text{\rm M}}_\mathbf{a}$ is a smooth projective variety of dimension $(n-3)$. Note that the polytope of weights has a chamber structure with walls $\sum_{i\in I} a_i=1$ for every subset $I\subseteq \{1,\ldots,n\}$. One obtains the Losev--Manin space $\overline{\text{LM}}_N$ by considering weights on the set of markings $\{0,\infty\}\cup N$: $$\big(1,1,\frac{1}{n},\ldots,\frac{1}{n}\big),\quad n=|N|.$$ Replacing the weights equal to $\frac{1}{n}$ with some $\epsilon\in\mathbb{Q}$, for some $0<\epsilon\ll 1$, defines the same moduli problem, hence, gives isomorphic moduli spaces. Similarly, the moduli space $Z_N$ of Notation \ref{Z} is the moduli space with set of markings $\{0,\infty\}\cup N$ and weights $$\big(\frac{1}{2}+\eta,\frac{1}{2}+\eta,\frac{1}{n},\ldots,\frac{1}{n}\big),\quad \eta\in\mathbb{Q},\quad 0<\eta\ll 1.$$ If $\mathbf{a}=(a_1, \ldots, a_n)$ and $\mathbf{a}'=(b_1, \ldots, b_n)$ are such that $a_i\geq b_i$, for all $i$, there is a reduction morphism $\rho: \overline{\text{\rm M}}_\mathbf{a}\rightarrow\overline{\text{\rm M}}_\mathbf{a}'$. This is a birational morphism whose exceptional locus consists of boundary divisors $\delta_{I}$ (parametrizing reducible curves with a node that disconnects the markings from $I$ and $I^c$) for every subset $I\subseteq N$ such that $\sum_{i\in I} a_i>1$, but $\sum_{i\in I} b_i\leq1$. For us a special role will be played by the reduction map $p:\overline{\text{LM}}_N\rightarrow Z_N$ which reduces the weights of $\{0,\infty\}$ from $1$ to the minimum possible.
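As an illustration of the last statement (our own remark, spelling out the general description above in this particular case): for the map $p:\overline{\text{LM}}_N\rightarrow Z_N$ the weights drop from $(1,1,\epsilon,\ldots,\epsilon)$ to $(\frac{1}{2}+\eta,\frac{1}{2}+\eta,\frac{1}{n},\ldots,\frac{1}{n})$, so the divisors contracted by $p$ are exactly the $\delta_{I\cup\{0\}}$ and $\delta_{I\cup\{\infty\}}$ with $I\subseteq N$ and $1\leq |I|\leq\lfloor\frac{n-1}{2}\rfloor$: for such $I$ one has $1+|I|\epsilon>1$, while $\frac{1}{2}+\eta+\frac{|I|}{n}\leq1$ precisely when $|I|\leq\lfloor\frac{n-1}{2}\rfloor$ and $0<\eta\ll1$. These are exactly the index sets appearing in Corollary \ref{pull by p} below.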
For a Hassett space $\overline{\text{\rm M}}=\overline{\text{\rm M}}_\mathbf{a}$, with universal family $(\pi:\mathcal{U}\rightarrow\overline{\text{\rm M}}, \{\sigma_i\})$, recall that we define $\psi_i:=\sigma_i^*\omega_{\pi}$, $\delta_{ij}=\sigma_i^*\sigma_j$. Since the sections $\sigma_i$ lie in the locus where the map $\pi$ is smooth, the identity $\sigma_i\cdot\omega_{\pi}=-\sigma_i^2$ holds on $\mathcal{U}$. Therefore, $-\psi_i=\pi_*\big(\sigma_i^2\big)=\sigma_i^*\sigma_i$. \begin{lemma}\label{relations} Assume $\overline{\text{\rm M}}$ is a Hassett space whose universal family $\pi:\mathcal{U}\rightarrow\overline{\text{\rm M}}$ is a $\mathbb{P}^1$-bundle. Then the identity $-\omega_{\pi}=2\sigma_i+\pi^*(\psi_i)$ holds on $\mathcal{U}$, and therefore, on $\overline{\text{\rm M}}$ we have for all $i\neq j$: $$\psi_i+\psi_j=-2\delta_{ij}.$$ Hence, for all distinct $i,j,k$, we have $\psi_i=-\delta_{ij}-\delta_{ik}+\delta_{jk}$. \end{lemma} \begin{proof} Indeed, $-\omega_{\pi}-2\sigma_i$ restricts to the fibers of the $\mathbb{P}^1$-bundle trivially, and therefore is of the form $\pi^*(L)$ for some line bundle on $\overline{\text{\rm M}}$. Pulling back by $\sigma_i$ shows that $L=\psi_i$. \end{proof} When $n$ is odd, the universal family $\mathcal{U}\rightarrow Z_N$ is a $\mathbb{P}^1$-bundle and the sections $\sigma_0$ and $\sigma_{\infty}$ are distinct. Lemma \ref{relations} has the following: \begin{cor} The following identities hold on $Z_N$ when $n$ is odd: \begin{equation}\label{P1-bundle identities} \psi_0=-\psi_{\infty}=-\delta_{i0}+\delta_{i\infty},\quad \psi_i=-\delta_{i0}-\delta_{i\infty}. \end{equation} \end{cor} \begin{lemma}\label{general statement} Let $\overline{\text{\rm M}}'=\overline{\text{\rm M}}_\mathbf{a}$, $\overline{\text{\rm M}}=\overline{\text{\rm M}}_{\mathbf{b}}$ be Hassett spaces, with $\mathbf{a}=(a_i)$, $\mathbf{b}=(b_i)$, $a_i\geq b_i$ for all $i$. Consider the corresponding reduction map $p:\overline{\text{\rm M}}'\rightarrow\overline{\text{\rm M}}$. Let $(\pi:\mathcal{U}\rightarrow\overline{\text{\rm M}}, \{\sigma_i\})$, $(\pi':\mathcal{U}'\rightarrow\overline{\text{\rm M}}',\{\sigma'_i\})$ be the universal families. Denote by $(\rho: \mathcal{V}\rightarrow \overline{\text{\rm M}}', \{s_i\})$ the pull-back of $(\pi:\mathcal{U}\rightarrow\overline{\text{\rm M}},\{\sigma_i\})$ to $\overline{\text{\rm M}}'$.
Then there exists a commutative diagram: \begin{equation*} \begin{CD} \mathcal{U}' @>v>> \mathcal{V}@>q>> \mathcal{U}\\ @VV{\pi'}V @V{\rho}VV @V{\pi}VV \\ \overline{\text{\rm M}}' @>Id>> \overline{\text{\rm M}}' @>{p}>> \overline{\text{\rm M}} \end{CD} \end{equation*} Furthermore, identifying $\mathcal{U}'$ with a Hassett space $\overline{\text{\rm M}}_{\tilde{\mathbf{a}}}$, where $\tilde{\mathbf{a}}=(a_1,\ldots, a_n, 0)$ (with an additional marking $x$ with weight $0$) \cite[2.1.1]{Ha}, we have: $$v^*\omega_{\rho}=\omega_{\pi'}-\sum_{|I|\geq2, \sum_{i\in I}a_i>1, \sum_{i\in I}b_i\leq1} \delta_{I\cup\{x\}},$$ $$v^*s_i=\sigma_i+\sum_{i\in I, |I|\geq2, \sum_{i\in I}a_i>1, \sum_{i\in I}b_i\leq1} \delta_{I\cup\{x\}},$$ $$p^*\psi_i=\psi_i-\sum_{i\in I, |I|\geq2, \sum_{i\in I}a_i>1, \sum_{i\in I}b_i\leq1}\delta_I,$$ $$p^*\delta_{ij}=\delta_{ij}+\sum_{i,j\in I, |I|\geq 3,\sum_{i\in I}a_i>1, \sum_{i\in I}b_i\leq1}\delta_I.$$ \end{lemma} \begin{proof} The spaces $\mathcal{U}$ and $\mathcal{U}'$ are smooth \cite[Propositions 5.3 and 5.4]{Ha}. The existence of the commutative diagram follows from semi-stable reduction \cite[Proof of Theorem 4.1]{Ha}. The map $v$ is obtained by applying the relative MMP for the line bundle $\omega_{\pi'}(\sum b_i\sigma'_i)$. Concretely, the relative MMP results in a sequence of blow-downs, followed by a small crepant map: $$\mathcal{U}'=\mathcal{S}^1\rightarrow\mathcal{S}^2\rightarrow\ldots\rightarrow\mathcal{S}^r=\mathcal{V},$$ (all over $\overline{\text{\rm M}}'$). The resulting map $v:\mathcal{U}'\rightarrow\mathcal{V}$ is a birational map which contracts divisors in $\mathcal{U}'$ to codimension $2$ loci in $\mathcal{V}$ (as the relative dimension drops from $1$ to $0$). Note that $\mathcal{V}$ is generically smooth along these loci. The $v$-exceptional divisors can be identified via $\mathcal{U}'\cong\overline{\text{\rm M}}_{\tilde{\mathbf{a}}}$ with boundary divisors $\delta_{I\cup\{x\}}$ ($I\subseteq N$), with the property that $\sum_{i\in I} a_i>1$, $\sum_{i\in I} b_i\leq1$. For a flat family of nodal curves $u: \mathcal{C}\rightarrow B$ with Gorenstein base $B$ (in our case smooth) the relative dualizing sheaf $\omega_u$ is a line bundle on $\mathcal{C}$ with first Chern class $K_{\mathcal{C}}-u^*K_B$, where $K_{\mathcal{C}}$ and $K_B$ denote the corresponding canonical divisors. In particular: $$\omega_{\pi'}=K_{\mathcal{U}'}-{\pi'}^*K_{\overline{\text{\rm M}}'}, \quad \omega_{\rho}=K_{\mathcal{V}}-\rho^*K_{\overline{\text{\rm M}}'}.$$ Since the map $v$ on an open set is the blow-up of codimension $2$ loci in $\mathcal{V}$, it follows that $K_{\mathcal{U}'}=v^*K_{\mathcal{V}}+\sum E,$ by the blow-up formula. Hence, $v^*\omega_{\rho}=\omega_{\pi'}-\sum E$, where the sum runs over all prime divisors $E$ which are $v$-exceptional. This proves the first identity. For the second, we identify the sections $\sigma'_i$ (resp., $\sigma_i$) with the boundary divisors $\delta_{ix}$ in $\mathcal{U}'$ (resp., in $\mathcal{U}$). Note that the proper transform of the section $s_i$ is $\sigma'_i$ and that, for $\delta_{I\cup\{x\}}$ $v$-exceptional ($|I|\geq2$), $s_i$ contains $v(\delta_{I\cup\{x\}})$ if and only if $i\in I$. Moreover, in this case, $v(\delta_{I\cup\{x\}})$ is contained in $s_i$ (with codimension $1$) and $s_i$ is smooth (since $\overline{\text{\rm M}}'$ is). The second identity follows.
By Definition \ref{tautological} and the diagram, $$p^*\psi_i=p^*\sigma_i^*\omega_{\pi}=s_i^*q^*\omega_{\pi}=s_i^*\omega_{\rho}= {\sigma'_i}^*v^*\omega_{\rho},$$ $$p^*\delta_{ij}=p^*\sigma_i^*(\sigma_j)=s_i^*q^*(\sigma_j)=s_i^*s_j= {\sigma'}_i^*v^*s_j.$$ The last two formulas now follow using the first two and the fact that ${\sigma'}_i^*\delta_{I\cup\{x\}}=\delta_I$ if $i\in I$ and is $0$ otherwise. \end{proof} \begin{cor}\label{pull by p} Let $p:\overline{\text{LM}}_N\rightarrow Z_N$ be the reduction map. Let $s:=\Bigl\lfloor\frac{n-1}{2}\Bigr\rfloor$. Then $$p^*\psi_0=\psi_0-\sum_{I\subseteq N, 1\leq |I|\leq s}\delta_{I\cup\{0\}},$$ $$p^*\psi_i=-\sum_{i\in I\subseteq N, 1\leq |I|\leq s}\big(\delta_{I\cup\{0\}}+\delta_{I\cup\{\infty\}}\big) \quad (i\in N),$$ $$p^*\delta_{i0}=\sum_{i\in I\subseteq N, 1\leq |I|\leq s}\delta_{I\cup\{0\}}\quad (i\in N),$$ $$p^*\delta_{ij}=\delta_{ij}+\sum_{i,j\in I\subseteq N, 1\leq |I|\leq s}\big(\delta_{I\cup\{0\}}+\delta_{I\cup\{\infty\}}\big) \quad (i,j\in N).$$ \end{cor} \begin{lemma}\label{psi_i is zero on LM} On the Losev--Manin space $\overline{\text{LM}}_N$, we have $\psi_i=0$ for all $i\in N$. \end{lemma} \begin{proof} Apply Lemma \ref{general statement} to a reduction map $p: \overline{\text{\rm M}}_{0,N\cup\{0,\infty\}}\rightarrow\overline{\text{LM}}_N$: $$p^*\psi_i=\psi_i-\sum_{i\in I, |I|\geq2, 0,\infty\in I^c}\delta_I.$$ The right hand side of the equality is $0$ by \cite[Lemma 3.4]{KeelTevelev}. Therefore, $p^*\psi_i=0$. As $p_*\mathcal{O}=\mathcal{O}$, by the projection formula, we have $\psi_i=0$. \end{proof} \begin{proof}[Proof of Corollary \ref{pull by p}] This follows from Lemma \ref{general statement} and Lemma \ref{psi_i is zero on LM}. In the notation of Lemma \ref{general statement}, the universal family $\mathcal{U}'$ over $\overline{\text{\rm M}}'=\overline{\text{LM}}_N$ can be identified with $\overline{\text{\rm M}}_{\tilde{\mathbf{a}}}$, where $\tilde{\mathbf{a}}=(1,1,\epsilon,\ldots,\epsilon,0)$, with an additional marking $x$ with weight $0$. But $\mathcal{U}'$ can also be identified with $\overline{\text{LM}}_{N\cup\{x\}}=\overline{\text{\rm M}}_{(1,1,\epsilon,\ldots,\epsilon)}$ (here $x$ has weight $\epsilon$). Via this identification, boundary divisors $\delta_J$ correspond to boundary divisors $\delta_J$, for any $J\subseteq N\cup\{0,\infty,x\}$. The $v$-exceptional divisors appearing in the sum are $\delta_{I\cup\{x,0\}}$, $\delta_{I\cup\{x,\infty\}}$, $I\subseteq N$, $|I| \leq \Bigl\lfloor\frac{n-1}{2}\Bigr\rfloor$. \end{proof} When $n=|N|$ is even, the Hassett space $Z_N=\overline{\text{\rm M}}_{(\frac{1}{2}+\eta, \frac{1}{2}+\eta,\frac{1}{n},\ldots,\frac{1}{n})}$ of Notation \ref{Z} is closely related to the following Hassett spaces: $$Z'_N=\overline{\text{\rm M}}_{(\frac{1}{2}+\epsilon, \frac{1}{2},\frac{1}{n},\ldots,\frac{1}{n})},\quad Z''_N=\overline{\text{\rm M}}_{(\frac{1}{2}, \frac{1}{2}+\epsilon,\frac{1}{n},\ldots,\frac{1}{n})},$$ with weights assigned to $(\infty, 0, p_1, \ldots, p_n)$. There exist reduction maps $p': Z_N\rightarrow Z'_N$ and $p'': Z_N\rightarrow Z''_N$ that contract the boundary divisors using the two different projections. The universal families over $Z'_N$ and $Z''_N$ are $\mathbb{P}^1$-bundles.
Lemma \ref{general statement} applied to the reduction maps $p'$, $p''$ leads to: \begin{lemma}\label{relations2} Assume $n=|N|$ is even. The following relations hold between the tautological classes on the Hassett space $Z_N$: $$\psi_0=\delta_{i\infty}-\delta_{i0}+\sum_{i\in T, |T|=\frac{n}{2}}\delta_{T\cup\{\infty\}},\quad \psi_{\infty}=\delta_{i0}-\delta_{i\infty}+\sum_{i\notin T, |T|=\frac{n}{2}}\delta_{T\cup\{\infty\}},$$ $$\psi_0+\psi_{\infty}=\sum_{|T|=\frac{n}{2}}\delta_{T\cup\{\infty\}}.$$ \end{lemma} \begin{proof} The second relation follows from the first using the $S_2$ symmetry, while the third follows by adding the first two. To prove the first relation, consider the reduction map $p': Z_N\rightarrow Z'_N$. To avoid confusion, we denote by $\psi'_i$, $\delta'_{ij}$ (resp., $\psi_i$, $\delta_{ij}$) the tautological classes on $Z'_N$ (resp., on $Z_N$). The universal family $\mathcal{C}'\rightarrow Z'_N$ is a $\mathbb{P}^1$-bundle. By Lemma \ref{relations}, we have $\psi'_{\infty}=\delta'_{i0}-\delta'_{i\infty}$ (since $\delta'_{0\infty}=0$). The relation follows, as by Lemma \ref{general statement}, we have $${p'}^*\psi'_{\infty}=\psi_{\infty}-\sum_{|T|=\frac{n}{2}}\delta_{T\cup\{\infty\}},\quad {p'}^*\delta'_{i\infty}=\delta_{i\infty}+\sum_{i\in T, |T|=\frac{n}{2}}\delta_{T\cup\{\infty\}},\quad {p'}^*\delta'_{i0}=\delta_{i0}.$$ \end{proof} \section{Proof of Theorem \ref{odd}}\label{odd section} We start with a few generalities on GIT quotients $(\mathbb{P}^1)^n_{ss}\mathbin{/\mkern-6mu/}\mathbb{G}_m$. For $n$ odd, we first show that the Hassett space $Z_N$ introduced in (\ref{Z}) can be identified with symmetric GIT quotients $(\mathbb{P}^1)^n_{ss}\mathbin{/\mkern-6mu/}\mathbb{G}_m$. We use the method of windows from \cite{DHL} to prove exceptionality of the collections in Theorem \ref{odd}. We then prove that the collection is full, by using the full exceptional collection on the Losev--Manin spaces $\overline{\text{LM}}_N$ (see Section \ref{LM}). \subsection{Generalities on GIT quotients $(\mathbb{P}^1)^n_{ss}\mathbin{/\mkern-6mu/}\mathbb{G}_m$} Assume $n$ is an arbitrary positive integer. Let $\mathbb{G}_m=\text{Spec}\, k[z,z^{-1}]$ act on $\mathbb{A}^2$ by $z\cdot(x,y)=(zx,z^{-1}y)$. Let $P\mathbb{G}_m:=\mathbb{G}_m/\{\pm1\}$. Note that $P\mathbb{G}_m$ acts on $\mathbb{P}^1$ faithfully. Let $0\in\mathbb{P}^1$ be the point with homogeneous coordinates $[0:1]$ and let $\infty=[1:0]$. We use concepts of ``linearized vector bundles'' and ``equivariant vector bundles'' interchangeably. For (complexes of) coherent sheaves, we prefer ``equivariant''. We endow the line bundle $\mathcal{O}_{\mathbb{P}^1}(-1)$ with a $\mathbb{G}_m$-linearization induced by the above action of $\mathbb{G}_m$ on its total space $\mathbb{V}\mathcal{O}_{\mathbb{P}^1}(-1)\subset\mathbb{P}^1\times\mathbb{A}^2$. Consider the diagonal action of $\mathbb{G}_m$ on $(\mathbb{P}^1)^n$. For $\bar j=(j_1,\ldots,j_n)$ in $\mathbb{Z}^n$, we denote $\mathcal{O}(\bar j)$ the line bundle $\mathcal{O}(j_1,\ldots,j_n)$ on $(\mathbb{P}^1)^n$ with $\mathbb{G}_m$-linearization given by the tensor product of linearizations above.
We denote $\mathcal{O}\otimes z^k$ the trivial line bundle with $\mathbb{G}_m$-linearization given by the character $\mathbb{G}_m\rightarrow\mathbb{G}_m$, $z\mapsto z^k$. For every equivariant coherent sheaf $\mathcal{F}$ (resp.,~a complex of sheaves $\mathcal{F}^\bullet$), we denote by $\mathcal{F}\otimes z^k$ (resp.,~$\mathcal{F}^\bullet\otimes z^k$) the tensor product with $\mathcal{O}\otimes z^k$. Note that $\mathcal{O}(\bar j)\otimes z^k$ is $P\mathbb{G}_m$-linearized iff $j_1+\ldots+j_n+k$ is even. There is an action of $S_2\times S_n$ on $(\mathbb{P}^1)^n$ which normalizes the $\mathbb{G}_m$ action. Namely, $S_n$ permutes the factors of $(\mathbb{P}^1)^n$ and $S_2$ acts on $\mathbb{P}^1$ by $z\mapsto z^{-1}$. This action permutes linearized line bundles $\mathcal{O}(\bar j)\otimes z^k$ as follows: $S_n$ permutes components of $\bar j$ and $S_2$ flips $k\mapsto-k$. \begin{notn} Consider the GIT quotient $$\Sigma_n:=(\mathbb{P}^1)^n_{ss}\mathbin{/\mkern-6mu/}_{\mathcal{L}}\mathbb{G}_m,\quad \mathcal{L}=\mathcal{O}(1,\ldots,1),$$ with respect to the ample line bundle $\mathcal{L}$ (with its canonical $\mathbb{G}_m$-linearization described above). Here $(\mathbb{P}^1)^n_{ss}$ denotes the semi-stable locus with respect to this linearization. Let $\phi: (\mathbb{P}^1)_{ss}^n\rightarrow \Sigma_n$ denote the canonical morphism. \end{notn} As GIT quotients $X\mathbin{/\mkern-6mu/}_{\mathcal{L}}~G$ are by definition $\text{\rm Proj} \big(R(X, \mathcal{L})^G\big)$, where $R(X, \mathcal{L})^G$ is the invariant part of the section ring $R(X, \mathcal{L})$, we may replace $\mathcal{L}$ with any positive multiple. As the action of $P\mathbb{G}_m$ on $(\mathbb{P}^1)^n$ is induced from the action of $\mathbb{G}_m$, $\Sigma_n$ is isomorphic to the GIT quotient $(\mathbb{P}^1)^n_{ss}\mathbin{/\mkern-6mu/} P\mathbb{G}_m$ (with respect to any even multiple of $\mathcal{L}$). The action of $S_2\times S_n$ on $(\mathbb{P}^1)^n$ descends to $\Sigma_n$. By the Hilbert-Mumford criterion, a point $(z_i)$ in $(\mathbb{P}^1)^n$ is semi-stable (resp., stable) if $\leq\frac{n}{2}$ (resp., $<\frac{n}{2}$) of the $z_i$ equal $0$ or equal $\infty$. \subsection{The space $Z_N$ as a GIT quotient when $n$ is odd} When $n$ is odd, there are no strictly semistable points and the action of $P\mathbb{G}_m$ on $(\mathbb{P}^1)^n_{ss}$ is free. In particular, $\Sigma_n$ is smooth and by Kempf's descent lemma, any $P\mathbb{G}_m$-linearized line bundle on $(\mathbb{P}^1)^n_{ss}$ descends to a line bundle on $\Sigma_n$. Furthermore, $\Sigma_n$ can be identified with the quotient stack $[(\mathbb{P}^1)^n_{ss}/P\mathbb{G}_m]$ and its derived category $D^b(\Sigma_n)$ with the equivariant derived category $D^b_{P\mathbb{G}_m}((\mathbb{P}^1)^n_{ss})$. Consider the trivial $\mathbb{P}^1$-bundle on $(\mathbb{P}^1)^n$ with the following sections: $$\rho: (\mathbb{P}^1)^n\times \mathbb{P}^1=\text{\rm Proj}(\text{\rm Sym}(\mathcal{O}\oplus\mathcal{O}))\rightarrow (\mathbb{P}^1)^n,$$ $$s_0(\overline z)= (\overline z, 0), \quad s_{\infty}(\overline z)= (\overline z, \infty), \quad s_i(\overline z)= (\overline z, pr_i(\overline z)),$$ where $pr_i:(\mathbb{P}^1)^n\rightarrow\mathbb{P}^1$ is the $i$-th projection.
The sections $s_0$, resp., $s_{\infty}$ are induced by the projection $p_2: \mathcal{O}\oplus\mathcal{O}\rightarrow\mathcal{O}$, resp., $p_1: \mathcal{O}\oplus\mathcal{O}\rightarrow\mathcal{O}$, while the section $s_i$ is induced by the map $\mathcal{O}\oplus\mathcal{O}\rightarrow pr_i^*\mathcal{O}(1)$ given by the sections $x_i=pr_i^*x, y_i=pr_i^*y$ of $pr_i^*\mathcal{O}(1)$ that define $0$ and $\infty$ on the $i$-th copy of $\mathbb{P}^1$. \begin{notn}\label{Delta} Let $\Delta_{i0}=pr_i^{-1}(\{0\})\subseteq(\mathbb{P}^1)^n$ and $\Delta_{i\infty}=pr_i^{-1}(\{\infty\})\subseteq(\mathbb{P}^1)^n$. \end{notn} Note that $\Delta_{i0}$ is the zero locus of the section $x_i$, or the locus in $(\mathbb{P}^1)^n$ where $s_i=s_0$. Similarly, $\Delta_{i\infty}$ is the zero locus of the section $y_i$. We now endow all the above vector bundles with $\mathbb{G}_m$-linearizations. Let $$\mathcal{L}_0=\mathcal{O}\otimes z,\quad \mathcal{L}_{\infty}=\mathcal{O}\otimes z^{-1}, \quad \mathcal{L}_i=pr_i^*\mathcal{O}(1)\otimes 1,\quad \mathcal{E}=\mathcal{L}_0\oplus \mathcal{L}_{\infty}.$$ The maps $\mathcal{L}_0\rightarrow\mathcal{L}_i$, $\mathcal{L}_{\infty}\rightarrow\mathcal{L}_i$ (given by the sections $x_i$, $y_i$) are $\mathbb{G}_m$-equivariant, hence, induce $\mathbb{G}_m$-equivariant surjective maps $\mathcal{E}\rightarrow\mathcal{L}_i$. The projection maps $\mathcal{E}\rightarrow\mathcal{L}_0$ and $\mathcal{E}\rightarrow\mathcal{L}_{\infty}$ are clearly $\mathbb{G}_m$-equivariant. While none of $\mathcal{E}$, $\mathcal{L}_0, \mathcal{L}_{\infty}, \mathcal{L}_i$ are $P\mathbb{G}_m$-linearized vector bundles, tensoring with $\mathcal{O}(1,\ldots, 1)$ solves this problem, and we obtain a non-trivial $\mathbb{P}^1$-bundle $\pi: \mathbb{P}(\mathcal{E})\rightarrow \Sigma_n$ with disjoint sections $\sigma_0$, $\sigma_{\infty}$ and additional sections $\sigma_1,\ldots, \sigma_n$. Denote by $\delta_{i0}$ the locus in $\Sigma_n$ where $\sigma_i=\sigma_0$. This is the zero locus of the section giving the map $\mathcal{L}_{\infty}\rightarrow\mathcal{L}_i$ on $\Sigma_n$, i.e., the section whose pull-back to $(\mathbb{P}^1)^n$ is the section $x_i$. Similarly, we let $\delta_{i\infty}$ be the locus in $\Sigma_n$ where $\sigma_i=\sigma_{\infty}$. Hence, the sections $x_i$, $y_i$ of $pr_i^*\mathcal{O}(1)\otimes 1$ defining $\Delta_{i0}$, $\Delta_{i\infty}$ descend to global sections of the corresponding line bundle on $\Sigma_n$ and define $\delta_{i0}$, $\delta_{i\infty}$. \begin{lemma}\label{dictionary} Assume $n$ is odd. We have the following dictionary between line bundles on the GIT quotient $\Sigma_n$ and $P\mathbb{G}_m$-linearized line bundles on $(\mathbb{P}^1)^n$: $$\mathcal{O}(\delta_{i0})=pr^*_i\mathcal{O}(1)\otimes z,\quad \mathcal{O}(\delta_{i\infty})=pr^*_i\mathcal{O}(1)\otimes z^{-1},$$ $$\psi_0=\mathcal{O}\otimes z^{-2},\quad \psi_{\infty}=\mathcal{O}\otimes z^{2},\quad \psi_i=pr^*_i\mathcal{O}(-2)\otimes 1.$$ \end{lemma} \begin{proof} The first two formulas follow from the previous discussion: $\mathcal{O}(\delta_{i0})$ corresponds to the $P\mathbb{G}_m$-linearized line bundle $\mathcal{L}_i\otimes \mathcal{L}_{\infty}^\vee$. The remaining formulas follow from Lemma \ref{identify} and the identities (\ref{P1-bundle identities}).
\end{proof} \begin{lemma}\label{identify} If $n=|N|$ is odd, the Hassett space $Z_N$ (see Notation \ref{Z}) is isomorphic to the GIT quotient $\Sigma_n=(\mathbb{P}^1)^n_{ss}\mathbin{/\mkern-6mu/}_{\mathcal{O}(1,\ldots,1)}\mathbb{G}_m.$ \end{lemma} \begin{proof} The trivial $\mathbb{P}^1$-bundle $\rho: (\mathbb{P}^1)^n_{ss}\times \mathbb{P}^1\rightarrow (\mathbb{P}^1)^n_{ss}$ with sections $s_0$, $s_{\infty}$, $s_i$ is the pull-back of the $\mathbb{P}^1$-bundle $\pi: \mathbb{P}(\mathcal{E})\rightarrow \Sigma_n$ and sections $\sigma_0$, $\sigma_{\infty}$, $\sigma_i$. Since the former is a family of $\mathcal{A}$-stable rational curves, where $\mathcal{A}=(\frac{1}{2}+\eta, \frac{1}{2}+\eta,\frac{1}{n},\ldots,\frac{1}{n})$, we have an induced morphism $f: \Sigma_n\rightarrow Z_N$. Clearly, every $\mathcal{A}$-stable pointed rational curve is represented in the family over $(\mathbb{P}^1)^n_{ss}$ (hence, over $\Sigma_n$). Furthermore, two elements of this family are isomorphic if and only if they belong to the same orbit under the action of $\mathbb{G}_m$. It follows that $f$ is one-to-one on closed points. As both $Z_N$ and $\Sigma_n$ are smooth, $f$ must be an isomorphism. Alternatively, there is an induced morphism $F: (\mathbb{P}^1)^n_{ss}\rightarrow Z_N$ which is $\mathbb{G}_m$-equivariant (with $\mathbb{G}_m$ acting trivially on $Z_N$). As $\Sigma_n$ is a categorical quotient, it follows that $F$ factors through $\Sigma_n$ and, as before, the resulting map $f: \Sigma_n\rightarrow Z_N$ must be an isomorphism. \end{proof} \subsection{Exceptionality} When $n$ is odd, $\Sigma_n$ is a smooth polarized projective toric variety for the torus $\mathbb{G}_m^{n-1}$ and its polytope is a cross-section of the $n$-dimensional cube (the polytope of $(\mathbb{P}^1)^n$ with respect to $\mathcal{L}$) by the hyperplane normal to and bisecting the big diagonal. In particular, the topological Euler characteristic $e(\Sigma_n)$ is equal to the number of edges of the hypercube intersecting that hyperplane: $$e(\Sigma_n)=n{n-1\choose {n-1\over 2}}=n{n\choose 0}+(n-2){n\choose 1}+(n-4){n\choose 2}+\ldots.$$ By Lemma \ref{dictionary}, the line bundles $\{L_{E,p}\}$ in Theorem \ref{odd} correspond to restrictions to $(\mathbb{P}^1)^n_{ss}$ of the $P\mathbb{G}_m$-linearized line bundles on $(\mathbb{P}^1)^n$ \begin{equation}\label{odd translate} L_{E,p}=\mathcal{O}(-E)\otimes z^p, \end{equation} where $\mathcal{O}(-E)=\mathcal{O}(\bar j)$, and $\bar j$ is a vector of $0$'s and $(-1)$'s, with $-1$'s corresponding to the indices in $E\subseteq N$. (Here we abuse notation and denote by $L_{E,p}$ both the line bundle on $(\mathbb{P}^1)^n$ and the corresponding one on $\Sigma_n$.) The collection is $(S_2\times S_n)$-equivariant and consists of $e(\Sigma_n)$ line bundles. \begin{proof}[Proof of Theorem \ref{odd} -- exceptionality] Let $G:=P\mathbb{G}_m$. We use the method of windows \cite{DHL}. We describe the Kempf--Ness stratification \cite[Section 2.1]{DHL} of the unstable locus $(\mathbb{P}^1)^n_{us}$ with respect to $\mathcal{L}$. The $G$-fixed points are $$Z_I=\{(x_i)\,|\,x_i=0\ \hbox{\rm for}\ i\not\in I,\ x_i=\infty\ \hbox{\rm for}\ i\in I\}$$ for every subset $I\subseteq\{1,\ldots,n\}$. Let $\sigma_I:\,Z_I\hookrightarrow (\mathbb{P}^1)^n$ be the inclusion map.
The stratification comes from an ordering of the pairs $(\lambda, Z)$, where $\lambda: \mathbb{G}_m\rightarrow G$ is a $1$-PS and $Z$ is a connected component of the $\lambda$-fixed locus (the points $Z_I$ in our case). The ordering is such that the function $$\mu(\lambda, Z)=-\frac{\hbox{\rm weight}_\lambda\mathcal{L}|_{Z}}{|\lambda|},$$ is decreasing. Here $|\lambda|$ is an Euclidean norm on $\text{\rm Hom}(\mathbb{G}_m,G)\otimes_{\mathbb{Z}}\mathbb{R}$. We refer to \cite[Section 2.1]{DHL} for the details. As $\mu(\lambda, Z)=\mu(\lambda^k, Z)$ for any integer $k>0$, it follows that, in our situation, one only has to consider pairs $(\lambda, Z_I)$ and $(\lambda', Z_I)$, for the two $1$-PS $\lambda(z)=z$ and $\lambda'(z)=z^{-1}$. Recall that $$\hbox{\rm weight}_\lambda\mathcal{O}(-1)|_{\infty}=+1,\quad \hbox{\rm weight}_\lambda\mathcal{O}(-1)|_{0}=-1,$$ $$\hbox{\rm weight}_\lambda(\mathcal{O}\otimes z^p)|_q=p\quad \text{ for all points }q\in\mathbb{P}^1.$$ It follows that $\hbox{\rm weight}_{\lambda'}\mathcal{O}(-1)|_{\infty}=-1$, $\hbox{\rm weight}_{\lambda'}\mathcal{O}(-1)|_{0}=+1$ and $$\hbox{\rm weight}_\lambda\mathcal{L}|_{Z_I}=|I^c|-|I|, \quad \hbox{\rm weight}_{\lambda'}\mathcal{L}|_{Z_I}=-|I^c|+|I|.$$ The unstable locus is the union of the following Kempf--Ness strata: $$S_I=\{(x_i)\,|\,x_i=\infty\ \hbox{\rm if}\ i\in I, x_i\neq\infty\ \hbox{\rm if}\ i\notin I\}\cong \mathbb{A}^{|I^c|}\quad \text{for}\quad |I|>n/2,$$ $$S'_I=\{(x_i)\,|\,x_i=0\ \hbox{\rm if}\ i\not\in I, x_i\neq0\ \hbox{\rm if}\ i\in I\}\simeq \mathbb{A}^{|I|}\quad \text{for}\quad |I|<n/2.$$ The destabilizing $1$-PS for $S_I$ (resp.~for $S'_I$) is $\lambda$ (resp.~$\lambda'$). The $1$-PS $\lambda$ (resp., $\lambda'$) acts on the conormal bundle $N^\vee_{S_I|(\mathbb{P}^1)^n}$ (resp., $N^\vee_{S'_I|(\mathbb{P}^1)^n}$) restricted to $Z_I$ with positive weights and their sum $\eta_I$ (resp., $\eta'_I$) can be computed as $$\eta_I=2|I|,\quad\hbox{\rm resp.}\quad \eta'_I=2|I^c|.$$ To see this, note that the sum of $\lambda$-weights of $\big(N^\vee_{S_I|(\mathbb{P}^1)^n}\big)_{|Z_I}$ equals $$\hbox{\rm weight}_\lambda \big(\det N^\vee_{S_I|(\mathbb{P}^1)^n}\big)_{|Z_I}= \hbox{\rm weight}_\lambda \big(\det T_{S_I}\big)_{|Z_I}-\hbox{\rm weight}_\lambda \big(\det T_{(\mathbb{P}^1)^n}\big)_{|Z_I}.$$ Note that $S_I$ can be identified with $\mathbb{A}^{|I^c|}$ and the point $Z_I\in S_I$ with the point $0\in\mathbb{A}^{|I^c|}$. The action of $G$ on $\mathbb{A}^{|I^c|}$ is via $z\cdot (x_j)=(z^2x_j)$. It follows that $\hbox{\rm weight}_\lambda {T_{S_I}}_{|Z_I}=2|I^c|$. Similarly, the tangent space $\big(\det T_{(\mathbb{P}^1)^n}\big)_{|Z_I}$ can be identified with the tangent space of $T_0\mathbb{A}^n$, with the action of $G$ on $(x_j)\in \mathbb{A}^n$ being $z\cdot x_j=z^2x_j$ if $j\in I^c$ and $z\cdot x_j=z^{-2}x_j$ if $j\in I$.
It follows that $\hbox{\rm weight}_\lambda \big(T_{(\mathbb{P}^1)^n}\big)_{|Z_I}=2|I^c|-2|I|$. Hence, $\eta_I=2|I|$. Similarly, $\eta'_I=2|I^c|$. For the Kempf-Ness strata $S_I$ and $S'_I$ we make a choice of ``weights'' $$w_I=w'_I=-2s,\quad \text{where}\quad n=2s+1.$$ By the main result of \cite[Theorem 2.10]{DHL}, $D^b_G((\mathbb{P}^1)^n_{ss})$ is equivalent to the window $\mathbb{G}_w$ in the equivariant derived category $D^b_G((\mathbb{P}^1)^n)$, namely the full subcategory of all complexes of equivariant sheaves $\mathcal{F}^\bullet$ such that all weights (with respect to the corresponding destabilizing $1$-PS) of the cohomology sheaves of the complex $\sigma_I^*\mathcal{F}^\bullet$ lie in the segment $$[w_I,w_I+\eta_I)\quad\hbox{\rm or}\quad [w'_I,w'_I+\eta'_I),\quad\hbox{\rm respectively.}$$ We prove that the window $\mathbb{G}_w$ contains all linearized line bundles $L_{E,p}=\mathcal{O}(-E)\otimes z^p$ from Theorem \ref{odd}. Recall that $n=2s+1$. Since the collection is $S_2$-invariant and $S_2$ flips the strata $S_I$ and $S'_I$, it suffices to check the window conditions for $S_I$. The $\lambda$-weight of $\mathcal{O}(-E)\otimes z^p$ restricted to $Z_I$ equals $|I\cap E|-|I^c\cap E|+p$. It is straightforward to check that the maximum of this quantity over all $E$ is equal to $2s+2|I|-n+1$ when $s$ is odd, or $2s+2|I|-n-1$ when $s$ is even, and the minimum to $-2s$, hence the claim. Since our collection of linearized line bundles is clearly an exceptional collection on $D^b_G((\mathbb{P}^1)^n)$, it follows that it is an exceptional collection in $D^b_G(Z_n)$. \end{proof} \subsection{Fullness} We will prove the following general statement. \begin{thm}\label{full odd} The collection in Theorem \ref{odd} generates all line bundles $$L_{E,p}:=\mathcal{O}(-E)\otimes z^p,$$ for all $E\subseteq N$, $e=|E|$, $p\in\mathbb{Z}$ with $e+p$ even. \end{thm} \begin{proof}[Proof of Theorem \ref{odd} -- fullness] By Theorem \ref{full odd}, the collection in Theorem \ref{odd} generates all the objects $Rp_*({\pi^*_I}\hat{\mathbb{G}})$ from Corollary \ref{rewrite}. Fullness then follows by Corollary \ref{S is enough}. Alternatively, it is easy to see that the line bundles $L_{E,p}$ generate the derived category of the stack $[(\mathbb{P}^1)^n/P\mathbb{G}_m]$ and we can finish as in \cite[Proposition 4.1]{CT_partII}. \end{proof} \begin{proof}[Proof of Theorem \ref{full odd}] For simplicity, denote by $\mathcal{C}$ the collection in the theorem. We introduce the \emph{score} of a pair $(E,p)$, with $e=|E|$, as $$s(E,p):=|p|+\min\{e,n-e\}.$$ The collection $\mathcal{C}$ consists of the $L_{E,p}$ with $s(E,p)\leq s$. We prove the statement by induction on the score $s(E,p)$, and for equal score, by induction on $|p|$. Let $(E,p)$ be any pair as in Theorem \ref{full odd}. If $s(E,p)\leq s$, there is nothing to prove. Assume $s(E,p)>s$. Using the $S_2$-symmetry, we may assume w.l.o.g. that $p\geq0$. We will use the two types of $P\mathbb{G}_m$-equivariant Koszul resolutions from Lemma \ref{G Koszul} to successively generate all objects.
\underline{Case $e\leq s$.} The sequence (1) in Lemma \ref{K} for a set $I$ with $|I|=s+1$, followed by tensoring with $L_{E,p}=\mathcal{O}(-E)\otimes z^p$, gives an exact sequence $$ 0\rightarrow L_{E\cup I,p-s-1}\rightarrow\ldots\rightarrow\bigoplus_{J\subseteq I, |J|=j}L_{E\cup J,p-j}\rightarrow\ldots\rightarrow L_{E,p}\rightarrow0.$$ We prove that each term $L_{E\cup J,p-j}$ is generated by $\mathcal{C}$ for all $j>0$. Note that $s(E,p)=|p|+e=p+e$. If $p-j\geq0$, then $$s(E\cup J, p-j)\leq (p-j)+(e+j)=p+e=s(E,p),$$ but as $p-j<p$, we are done by induction on $|p|$. If $p-j<0$ then $$s(E\cup J, p-j)\leq (j-p)+n-(e+j)=n-e-p<e+p=s(E,p)$$ since we assume $e+p>s$. In particular, $L_{E\cup J,p-j}$ is in $\mathcal{C}$. \underline{Case $e\geq s+1$.} Let $I\subseteq E$, with $|I|=s+1$. The sequence (2) in Lemma \ref{K} for the set $I$, followed by tensoring with $L_{E',p-s-1}=\mathcal{O}(-E')\otimes z^{p-s-1}$, where $E'=E\setminus I$, gives an exact sequence $$0\rightarrow L_{E,p}\rightarrow\ldots\rightarrow\bigoplus_{J\subseteq I, |J|=j}L_{E'\cup J,j+p-s-1}\rightarrow\ldots\rightarrow L_{E',p-s-1}\rightarrow0.$$ We prove that each term $L_{E'\cup J,j+p-s-1}$ is generated by $\mathcal{C}$ for all $J\neq I$ (note that for $J=I$ we have $(E'\cup J,j+p-s-1)=(E,p)$). Note that $s(E,p)=p+n-e$. We let $e':=|E'|=e-s-1$. If $j+p-s-1\geq0$, then $$s(E'\cup J, j+p-s-1)\leq (j+p-s-1)+(n-e'-j)=p+n-e=s(E,p).$$ As $p+j-s-1\leq p$ with equality if and only if $J=I$, we are done by induction on $|p|$. If $j+p-s-1<0$, then $$s(E'\cup J, j+p-s-1)\leq -(j+p-s-1)+(e'+j)=e-p<s(E,p)=p+n-e,$$ since we assume $s(E,p)>s$, which gives $e-p\leq s$. \end{proof} \begin{lemma}\label{G Koszul}\label{K} Let $n=2s+1$, $I\subseteq N$, $|I|=s+1$. There are two types of $P\mathbb{G}_m$-equivariant resolutions: \begin{itemize} \item[(1)] The restriction to $(\mathbb{P}^1)^n_{ss}$ of the Koszul complex of the intersection of the divisors $\Delta_{i0}$ (Notation \ref{Delta}) for $i\in I$, which takes the form $$0\rightarrow \mathcal{O}(-I)\otimes z^{-(s+1)}\rightarrow\ldots\rightarrow\bigoplus_{J\subseteq I, |J|=j}\mathcal{O}(-J)\otimes z^{-j}\rightarrow\ldots\rightarrow\mathcal{O}\otimes 1\rightarrow0$$ \item[(2)] The restriction to $(\mathbb{P}^1)^n_{ss}$ of the Koszul complex of the intersection of the divisors $\Delta_{i\infty}$ (Notation \ref{Delta}) for $i\in I$, which takes the form $$0\rightarrow \mathcal{O}(-I)\otimes z^{(s+1)}\rightarrow\ldots\rightarrow\bigoplus_{J\subseteq I, |J|=j}\mathcal{O}(-J)\otimes z^j\rightarrow\ldots\rightarrow\mathcal{O}\otimes 1\rightarrow0$$ \end{itemize} \end{lemma} \begin{proof} Let $G=P\mathbb{G}_m$. Denote for simplicity $D_i=\Delta_{i0}$, for all $i\in N$. The divisors $D_1,\ldots, D_n$ intersect with simple normal crossings. Let $Y_I:=\cap_{i\in I}D_i\subseteq (\mathbb{P}^1)^n$. Consider the Koszul resolution of $Y_I$: $$\ldots \rightarrow\oplus_{i<j,\, i,j\in I}\mathcal{O}(-D_i-D_j)\rightarrow \oplus_{i\in I}\mathcal{O}(-D_i)\rightarrow\mathcal{O}\rightarrow\mathcal{O}_{Y_I}\rightarrow0.$$ Each of the maps in the sequence is a direct sum of maps of the form $$\mathcal{O}(-D_{j_1}-\ldots-D_{j_t})\rightarrow \mathcal{O}(-D_{j_1}-\ldots-D_{j_{t-1}})$$ obtained by multiplication with a canonical section corresponding to the effective divisor $D_{j_t}$.
This can be made into a $G$-equivariant map: $$\mathcal{O}(-D_{j_1}-\ldots-D_{j_t})\otimes z^{-t}\rightarrow \mathcal{O}(-D_{j_1}-\ldots-D_{j_{t-1}})\otimes z^{-(t-1)},$$ since $\mathcal{O}(-D_i)\otimes z^{-1}\rightarrow\mathcal{O}$ is the $G$-equivariant map given by multiplication with $x_i$, whose zero locus is $D_i=\Delta_{i0}$ (see Lemma \ref{dictionary} and the discussion preceding it). The Lemma follows by restriction to $(\mathbb{P}^1)^n_{ss}$. Note that $Y_I\cap(\mathbb{P}^1)^n_{ss}=\emptyset$. The proof of (2) is similar, with the only difference that multiplication with $y_i$, the canonical section of $\Delta_{i\infty}$, corresponds to a $G$-equivariant map $\mathcal{O}(-\Delta_{i\infty})\otimes z\rightarrow\mathcal{O}$. \end{proof} \begin{rmk}\label{elaborate1} We explain the connection with the case $p=2$, $q=n=2s+1$ of \cite[Theorem 1.10]{CT_partII}. The collection there is the following: (i) The line bundles $F_{0,E}:=-\frac{1}{2}\sum_{j\in E}\psi_j$ ($e=|E|$ is even) in the so-called group $1$ (group $1A$ and group $1B$ of the theorem coincide in this case). (ii) The line bundles in the so-called group $2$: $$\mathcal{T}_{l,\{u\}\cup E}:=\sigma_u^*\big(\omega_{\pi}^{\frac{e+1-l}{2}}(E\cup\{u\})\big)= \frac{e-l-1}{2}\psi_u+\sum_{j\in E}\delta_{ju}=-\frac{l+1}{2}\psi_u-\sum_{j\in E}\frac{1}{2}\psi_j,$$ where $e=|E|$, $u\in\{0,\infty\}$, $l\geq0$, $l+|E\cap\{u\}|$ even (i.e., $l+e$ odd), with $$l+\min\{e,n-e\}\leq s-1.$$ This collection is the dual of the one in Theorem \ref{odd}. The elements in group $2$ with $l=p-1$, $u=\infty$ recover the dual of the collection in Theorem \ref{odd} when $p>0$. Similarly, elements in group $2$ with $l=-p-1$, $u=0$ recover the dual of the collection in Theorem \ref{odd} when $p<0$. The elements of group $1$ recover the dual of the collection in Theorem \ref{odd} when $p=0$. \end{rmk} \section{Proof of Theorem \ref{even}} We employ a similar strategy as in Section \ref{odd section}. We identify the Hassett space $Z_N$ (see (\ref{Z})) when $n=|N|$ is even with the Kirwan resolution of the symmetric GIT quotient $\Sigma_n$. We use the method of windows \cite{DHL} to prove the exceptionality part of Theorem \ref{even}. We prove fullness using previous results on the Losev--Manin spaces $\overline{\text{LM}}_N$ (see Section \ref{LM}). \subsection{The space $Z_N$ as a GIT quotient, $n$ even}\label{even intro} Assume $n=2s+2$. There are ${n\choose s+1}$ strictly semistable points $\{p_T\}\in(\mathbb{P}^1)_{ss}^n$, one for each subset $T\subseteq N$, $|T|=s+1$. More precisely, the point $p_T$ is obtained by taking $\infty$ for the spots in $T$ and $0$ for the spots in $T^c$. Instead of the GIT quotient $\Sigma_n$, which is singular at the images of these points, we consider its Kirwan resolution $\tilde\Sigma_n$, constructed as follows. Let $W=W_n$ be the blow-up of $(\mathbb{P}^1)^n$ at the points $\{p_T\}$ and let $\{E_T\}$ be the corresponding exceptional divisors. The action of $\mathbb{G}_m$ lifts to $W$. To describe this action locally around a point $p_T$, assume for simplicity $T=\{s+2,\ldots, n\}$. Consider the affine chart $$\mathbb{A}^n=(\mathbb{P}^1\setminus\{\infty\})^{s+1}\times(\mathbb{P}^1\setminus\{0\})^{s+1}.$$ In the new coordinates, we have $p_T=0=(0,\ldots,0)$.
We let $((x_i), (y_i))$, resp., $((t_i), (u_i))$, for $i=1,\ldots, s+1$, be coordinates on $\mathbb{A}^n$, resp., homogeneous coordinates on $\mathbb{P}^{n-1}$. Then $W$ is locally the blow-up $\text{\rm Bl}_0\mathbb{A}^n$, with equations $$x_it_j=x_jt_i,\quad x_iu_j=y_jt_i,\quad y_iu_j=y_ju_i.$$ The action of $\mathbb{G}_m$ on $W$ is given by $$z\cdot\big((x_i,y_i),[t_i,u_i]\big)=\big((z^2x_i,z^{-2}y_i),[z^2t_i,z^{-2}u_i]\big).$$ The fixed locus of the action of $\mathbb{G}_m$ on $E_T$ consists of the subspaces $$Z^+_T=\{u_1=\ldots=u_{s+1}=0\}=\mathbb{P}^s\subseteq\mathbb{P}^{n-1}=E_T,$$ $$Z^-_T=\{t_1=\ldots=t_{s+1}=0\}=\mathbb{P}^s\subseteq\mathbb{P}^{n-1}=E_T.$$ As $\text{\rm Bl}_0\mathbb{A}^n$ is the total space $\mathbb{V}(\mathcal{O}_{E_T}(-1))$ of the line bundle $\mathcal{O}_{E_T}(-1)=\mathcal{O}_{E_T}(E_T)$ and the action of $\mathbb{G}_m$ on $\text{\rm Bl}_0\mathbb{A}^n$ coincides with the canonical action of $\mathbb{G}_m$ on $\mathbb{V}(\mathcal{O}_{E_T}(-1))$ coming from the action of $G$ on $E_T=\mathbb{P}^{n-1}$ given by $$z\cdot[t_1,\ldots, t_{s+1}, u_1,\ldots, u_{s+1}]=[z^2t_1,\ldots, z^2t_{s+1}, z^{-2}u_1,\ldots, z^{-2}u_{s+1}],$$ it follows that $\mathcal{O}_{E_T}(E_T)$ (and hence, $\mathcal{O}(E_T)$) has a canonical $\mathbb{G}_m$-linearization. With respect to this linearization, we have: \begin{equation}\label{B} \text{weight}_\lambda{\mathcal{O}_{E_T}(-1)}_{|q}=\text{weight}_\lambda{\mathcal{O}(E_T)}_{|q}=+2,\quad q\in Z^+_T,\quad \lambda(z)=z, \end{equation} $$\text{weight}_{\lambda}{\mathcal{O}_{E_T}(-1)}_{|q}=\text{weight}_{\lambda}{\mathcal{O}(E_T)}_{|q}=-2,\quad q\in Z^-_T,\quad \lambda(z)=z,$$ and similarly, $$\text{weight}_{\lambda'}{\mathcal{O}_{E_T}(-1)}_{|q}=\text{weight}_{\lambda'}{\mathcal{O}(E_T)}_{|q}=-2,\quad q\in Z^+_T,\quad \lambda'(z)=z^{-1},$$ $$\text{weight}_{\lambda'}{\mathcal{O}_{E_T}(-1)}_{|q}=\text{weight}_{\lambda'}{\mathcal{O}(E_T)}_{|q}=+2,\quad q\in Z^-_T,\quad \lambda'(z)=z^{-1}.$$ We denote by $\mathcal{O}(\bar j)(\sum \alpha_T E_T)$ the line bundle $\pi^*\mathcal{O}(j_1,\ldots, j_n)(\sum \alpha_T E_T)$ on $W_n$ (where the $j_i, \alpha_T$ are integers and $\pi: W_n\rightarrow (\mathbb{P}^1)^n$ is the blow-up map), with the $\mathbb{G}_m$-linearization given by the tensor product of the canonical linearizations above. As before, for every equivariant coherent sheaf $\mathcal{F}$, we denote by $\mathcal{F}\otimes z^k$ the tensor product with $\mathcal{O}\otimes z^k$. For a subset $E\subseteq N$, we denote $$\mathcal{O}(-E):=\pi^*\mathcal{O}(\bar j)$$ with $j_i=-1$ if $i\in E$ and $j_i=0$ otherwise. Note that the action of $S_2$ exchanges $\mathcal{O}(-E)\otimes z^p$ with $\mathcal{O}(-E)\otimes z^{-p}$ and $E_T$ with $E_{T^c}$ (Lemma \ref{dictionary2}).
Consider the GIT quotient with respect to a (fractional) polarization $$\mathcal{L}=\mathcal{O}(1,\ldots,1)\left(-\epsilon\sum E_T\right),$$ where $0<\epsilon\ll 1$, $\epsilon\in\mathbb{Q}$, and the sum is over all exceptional divisors (with the canonical linearization described above): $$\tilde\Sigma_n=(W_n)_{ss}\mathbin{/\mkern-6mu/}_{\mathcal{L}}\mathbb{G}_m.$$
\begin{lemma} The $\mathbb{G}_m$-linearized line bundle $\mathcal{O}(\bar j)(\sum \alpha_T E_T)\otimes z^p$ descends to the GIT quotient $\tilde\Sigma_n$ if and only if for all subsets $I\subseteq N$ with $|I|\neq s+1$ $$-\sum_{i\in I}j_i+\sum_{i\in I^c} j_i+p\quad \text{ is even }$$ and for all subsets $I\subseteq N$ with $|I|=s+1$, we have $$-\sum_{i\in I}j_i+\sum_{i\in I^c} j_i+p\pm2\alpha_I\quad \text{ is divisible by }\quad 4.$$
\end{lemma}
\begin{proof} By Kempf's descent lemma, a $G$-linearized line bundle $L$ descends to the GIT quotient if and only if the stabilizer of any point in the semistable locus acts trivially on the fiber of $L$ at that point, or equivalently, $\text{weight}_\lambda L_{|q}=0$, for any semistable point $q$ and any $1$-PS $\lambda:\mathbb{G}_m\rightarrow G$. By definition, $\text{weight}_\lambda L_{|q}=\text{weight}_\lambda L_{|p_0}$, where $p_0$ is the fixed point $\lim_{t\to 0}\lambda(t)\cdot q$. For any point $q$ in $(\mathbb{P}^1)^n\setminus\{p_T\}$ such that $q=(z_i)$ has $z_i=\infty$ for $i\in I$ and $z_i\neq\infty$ for $i\in I^c$, we have for $\lambda(z)=z$ that $\lim_{t\to 0}\lambda(t)\cdot q$ is the point with coordinates $z_i=\infty$ for $i\in I$ and $z_i=0$ for $i\in I^c$, and hence: \begin{equation}\label{A} \text{weight}_{\lambda}\big(\mathcal{O}(\bar j)(\sum \alpha_T E_T)\otimes z^p\big)_{|q}=-\sum_{i\in I}j_i+\sum_{i\in I^c} j_i+p. \end{equation} Note that such a point $q$ is semistable if and only if $|I|<s+1$. Similarly, if $q$ has $z_i=0$ for $i\in I^c$ and $z_i\neq0$ for $i\in I$, $\lambda'(z)=z^{-1}$: $$\text{weight}_{\lambda'}\big(\mathcal{O}(\bar j)(\sum \alpha_T E_T)\otimes z^p\big)_{|q}=\sum_{i\in I}j_i-\sum_{i\in I^c} j_i+p.$$ Note that $q$ is semistable iff $|I|>s+1$. The stabilizer of $q$ is $\{\pm1\}$ in both cases. If $q\in E_T\setminus (Z^+_T\sqcup Z^-_T)$ then $\lim_{t\to 0}\lambda(t)\cdot q\in Z^-_T$, $\lim_{t\to 0}\lambda'(t)\cdot q\in Z^+_T$ and using (\ref{B}) we obtain $$\text{weight}_{\lambda}\big(\mathcal{O}(\bar j)(\sum \alpha_T E_T)\otimes z^p\big)_{|q}=-\sum_{i\in T}j_i+\sum_{i\in T^c} j_i+p-2\alpha_T,$$ $$\text{weight}_{\lambda'}\big(\mathcal{O}(\bar j)(\sum \alpha_T E_T)\otimes z^p\big)_{|q}=\sum_{i\in T}j_i-\sum_{i\in T^c} j_i+p-2\alpha_T.$$ A point $q\in E_T\setminus (Z^+_T\sqcup Z^-_T)$ has stabilizer $\{\pm1, \pm i\}$. The conclusion follows.
\end{proof}
\begin{cor} For $E\subseteq N$, $p\in\mathbb{Z}$, the line bundle $\mathcal{O}(-E)(\sum\alpha_T E_T)\otimes z^p$ descends to the GIT quotient $\tilde\Sigma_n$ if and only if for all subsets $I\subseteq N$ with $|I|\neq s+1$ $$|I\cap E|-|I^c\cap E|+p\quad \text{ is even }$$ and for all subsets $I\subseteq N$ with $|I|=s+1$, we have $$|I\cap E|-|I^c\cap E|+p-2\alpha_I\quad \text{ is divisible by }\quad 4.$$
\end{cor}
\begin{lemma}\label{identify2} If $n=|N|$ is even, the Hassett space $Z_N=\overline{\text{\rm M}}_{0,(\frac{1}{2}+\eta, \frac{1}{2}+\eta,\frac{1}{n},\ldots,\frac{1}{n})}$ is isomorphic to the GIT quotient $\tilde\Sigma_n=(W_n)_{ss}\mathbin{/\mkern-6mu/}_{\mathcal{O}(1,\ldots,1)(-\epsilon\sum E_T)}\mathbb{G}_m.$
\end{lemma}
\begin{proof} The trivial $\mathbb{P}^1$-bundle $(\mathbb{P}^1)^n\times \mathbb{P}^1\rightarrow (\mathbb{P}^1)^n$ has sections $s_0$, $s_{\infty}$, $s_i$. We still denote by $s_0$, $s_{\infty}$, $s_i$ the induced sections of the pull-back $W_{ss}\times\mathbb{P}^1\rightarrow W_{ss}$. The family is not $\mathcal{A}$-stable at the points $p_T$, where $s_i=s_{\infty}$ for all $i\in T$ and $s_i=s_0$ for all $i\in T^c$ (markings in $T$ are identified with $\infty$, and markings in $T^c$ with $0$). Here $\mathcal{A}=(\frac{1}{2}+\eta, \frac{1}{2}+\eta,\frac{1}{n},\ldots,\frac{1}{n})$. Let $\mathcal{C}'$ be the blow-up of $W\times \mathbb{P}^1$ along the codimension $2$ loci $$E_T\times\{0\}=s_0(E_T),\quad E_T\times\{\infty\}=s_{\infty}(E_T).$$ Denote by $\tilde E^0_T$ and $\tilde E^{\infty}_T$ the corresponding exceptional divisors in $\mathcal{C}'$. The resulting family $\pi': \mathcal{C}'\rightarrow W$ has as fibers above points $p\in E_T$ a chain of $\mathbb{P}^1$'s of the form $C_0\cup \tilde F\cup C_{\infty}$, where $\tilde F$ is the proper transform of the fiber of $W\times\mathbb{P}^1\rightarrow W$ and $\tilde F$ meets each of $C_0$ (the fiber of $\tilde E^0_T\rightarrow E_T$ at $p$) and $C_{\infty}$ (the fiber of $\tilde E^{\infty}_T\rightarrow E_T$ at $p$). The proper transforms of $s_i$ for $i\in T$ (resp., $i\in T^c$) intersect $C_{\infty}$ (resp., $C_0$) at distinct points. The dualizing sheaf $\omega_{\pi'}$ is relatively nef, with degree $0$ on $\tilde F$. It follows that $\omega_{\pi'}$ induces a morphism $\mathcal{C}'\rightarrow \mathcal{C}$ over $W_{ss}$ which contracts the component $\tilde F$ in each of the above fibers, resulting in an $\mathcal{A}$-stable family. Therefore, we have an induced morphism $F: W_{ss}\rightarrow Z_N$. Clearly, the map $F$ is $\mathbb{G}_m$-equivariant (where $\mathbb{G}_m$ acts trivially on $Z_N$). As the GIT quotient $\tilde\Sigma_n$ is a categorical quotient, there is an induced morphism $f: \tilde\Sigma_n\rightarrow Z_N$. Two fibers of the family $\mathcal{C}\rightarrow W_{ss}$ are isomorphic if and only if they belong to the same orbit under the action of $\mathbb{G}_m$. Hence, the map $f$ is one-to-one on closed points (as there are no strictly semistable points in $W_{ss}$, $\tilde\Sigma_n$ is a geometric quotient \cite[p. 94]{Dolgachev}). It follows that $f$ is an isomorphism.
\end{proof}
\begin{lemma}\label{dictionary2} Assume $n=2s+2$ is even.
We have the following dictionary between tautological line bundles on the Hassett space $Z_N$ (identified with the GIT quotient $\tilde\Sigma_n$) and $\mathbb{G}_m$-linearized line bundles on $W_n$: $$\mathcal{O}(\delta_{i0})=pr^*_i\mathcal{O}(1)\left(-\sum_{i\notin T}E_T\right)\otimes z,\quad \mathcal{O}(\delta_{i\infty})=pr^*_i\mathcal{O}(1)\left(-\sum_{i\in T}E_T\right)\otimes z^{-1}$$ $$\psi_0=\mathcal{O}\left(\sum E_T\right)\otimes z^{-2},\quad \psi_{\infty}=\mathcal{O}\left(\sum E_T\right)\otimes z^{2},\quad \psi_i=pr^*_i\mathcal{O}(-2)\left(\sum E_T\right)\otimes 1,$$ $$\mathcal{O}(\delta_{T\cup\{\infty\}})=\mathcal{O}(2E_T)\otimes 1\quad (|T|=s+1).$$
\end{lemma}
\begin{proof} Denote $\delta_T=\delta_{T\cup\{\infty\}}$. We start with the proof of $\mathcal{O}(\delta_T)=\mathcal{O}(2E_T)\otimes 1$. Consider the affine chart $$\mathbb{A}^n=(\mathbb{P}^1\setminus\{\infty\})^{s+1}\times (\mathbb{P}^1\setminus\{0\})^{s+1}$$ around the point $p_T$ (markings in $T=\{s+2,\ldots,n\}$ are identified with $\infty$, and markings in $T^c$ with $0$). We have coordinates $x_1,\ldots, x_{s+1}, y_1,\ldots, y_{s+1}$. The GIT quotient map $(\mathbb{P}^1)^n_{ss}\rightarrow \Sigma$ is locally at $p_T$ given by $$f: \mathbb{A}^n\rightarrow Y=f(\mathbb{A}^n)\subseteq\mathbb{A}^{(s+1)^2},\quad f((x_i),(y_j))=(x_iy_j)_{ij}.$$ The morphism $F: W_{ss}\rightarrow \tilde\Sigma_n=\tilde\Sigma$ induced by the universal family over $W_{ss}$ (proof of Lemma \ref{identify2}) is locally the restriction to the semistable locus of the rational map (which we still call $F$) $$F: \text{\rm Bl}_0\mathbb{A}^n\dashrightarrow\text{\rm Bl}_0 Y\subseteq\text{\rm Bl}_0\mathbb{A}^{(s+1)^2}.$$ Consider coordinates $((x_i,y_i),[t_i,u_i])$ (with $x_it_j=x_jt_i$, $x_iu_j=y_jt_i$, $y_iu_j=y_ju_i$) on $\text{\rm Bl}_0\mathbb{A}^n\subseteq \mathbb{A}^n\times\mathbb{P}^{n-1}$ and coordinates $(z_{ij},[w_{ij}])$ on $\text{\rm Bl}_0\mathbb{A}^{(s+1)^2}$ (with $z_{ij}w_{kl}=z_{kl}w_{ij}$). Consider the affine charts $U_1=\{t_1\neq0\}\subseteq \text{\rm Bl}_0\mathbb{A}^n$ and $V_{1j}=\{w_{1j}\neq0\}\subseteq \text{\rm Bl}_0\mathbb{A}^{(s+1)^2}$. The map $F_{|U_1}$ is the rational map $$F: U_1=\mathbb{A}^n_{x_1,t_2,\ldots,t_{s+1},u_1,\ldots,u_{s+1}}\dashrightarrow V_{1j}=\mathbb{A}^{(s+1)^2}_{z_{1j},(w_{kl})_{kl\neq 1j}},$$ $$z_{1j}=x_1^2u_j,\quad w_{kl}=\frac{t_ku_l}{u_j}.$$ The exceptional divisor $\tilde E$ in $\text{\rm Bl}_0\mathbb{A}^{(s+1)^2}$ has local equation $z_{1j}=0$ in $V_{1j}$, while the exceptional divisor $E_T$ of $\text{\rm Bl}_0\mathbb{A}^n$ has equation $x_1=0$ in $U_1$. It follows that $F^*\mathcal{O}(\tilde E)=\mathcal{O}(2E_T)$. In particular, as $\delta_T=\text{\rm Bl}_0 Y\cap \tilde E$, it follows that $F^*\mathcal{O}(\delta_T)=\mathcal{O}(2E_T)$. It follows that $\mathcal{O}(\delta_T)=\mathcal{O}(2E_T)\otimes z^k$, for some integer $k$ (the same for all $T$, by the $S_n$-symmetry). On the other hand, by the $S_2$-symmetry, $\mathcal{O}(\delta_{T^c})=\mathcal{O}(2E_{T^c})\otimes z^{-k}$. Hence, we must have $k=0$. We now prove that $\mathcal{O}(\delta_{i0})=pr^*_i\mathcal{O}(1)(-\sum_{i\notin T}E_T)\otimes z$. (Note that all other relations will then follow by $S_2$-symmetry and Lemma \ref{relations2}.)
Clearly, $F^*\mathcal{O}(\delta_{i0})$ is the line bundle $\mathcal{O}(\tilde\Delta_{i0})_{|W_{ss}}$, where $\tilde\Delta_{i0}$ is the proper transform in $W$ of the diagonal $\Delta_{i0}$ in $(\mathbb{P}^1)^n$ defined by $x_i=0$, where $z_i=[x_i,y_i]$ now denotes coordinates on $(\mathbb{P}^1)^n$. As $\tilde\Delta_{i0}=\Delta_{i0}-\sum_{i\notin T}E_T$ (markings in $T^c$ are identified with $0$), it follows that $$\mathcal{O}(\delta_{i0})=pr^*_i\mathcal{O}(1)\left(-\sum_{i\notin T}E_T\right)\otimes z^k,$$ for some integer $k$. The pull-back of the canonical section of the effective divisor $\delta_{i0}$ (which is $x_i$) must be an invariant section. The section $x_i$ of $\mathcal{O}_{\mathbb{P}^1}(1)$ becomes the constant section $1$ in the open chart $U:~x_i\neq 0$. Considering a point $q=(q_1,\ldots, q_n)$ in $U$, with $q_i=\infty$ and $q_j\in\mathbb{P}^1$ general for $j\neq i$, it follows that for the $1$-PS $\lambda(z)=z$ we have $\text{weight}_{\lambda} pr_i^*\mathcal{O}(1)_{|q}=-1$, $\text{weight}_{\lambda} \mathcal{O}\otimes z^k_{|q}=k$; hence, the constant section $1$ becomes $z^{-1+k}$ under the action of $\lambda$ and we must have $k=1$ for the section to be invariant.
\end{proof}
\begin{lemma}\label{restrict to delta} Let $\delta_T:=\delta_{T\cup\{\infty\}}=\mathbb{P}^s\times \mathbb{P}^s$. We have $${\delta_{i\infty}}_{|\delta_T}= \begin{cases} \mathcal{O}(1,0) & \text{ if }\quad i\in T\cr \mathcal{O} & \text{ if }\quad i\notin T\end{cases},\quad {\delta_{i0}}_{|\delta_T}= \begin{cases} \mathcal{O}(0,1) & \text{ if }\quad i\notin T\cr \mathcal{O} & \text{ if }\quad i\in T\end{cases},$$ $${\psi_{\infty}}_{|\delta_T}=\mathcal{O}(-1,0),\quad {\psi_{0}}_{|\delta_T}=\mathcal{O}(0,-1),\quad {\delta_T}_{|\delta_T}=\mathcal{O}(-1,-1).$$
\end{lemma}
\begin{proof} By symmetry, it suffices to compute ${\delta_{i\infty}}_{|\delta_T}$ and ${\psi_{\infty}}_{|\delta_T}$. Clearly, the intersection ${\delta_{i\infty}}\cap \delta_T=\emptyset$ if $i\notin T$. We identify $\delta_T=\overline{\text{\rm M}}'\times \overline{\text{\rm M}}''=\mathbb{P}^s\times \mathbb{P}^s$, where $\overline{\text{\rm M}}'$, resp., $\overline{\text{\rm M}}''$ are Hassett spaces with weights $(\frac{1}{2}+\eta, \frac{1}{n},\ldots, \frac{1}{n}, 1)$, with the attaching point $x$ having weight $1$. We identify $\overline{\text{\rm M}}'=\mathbb{P}^s$ via the isomorphism $|\psi_x|: \overline{\text{\rm M}}'\rightarrow\mathbb{P}^s$. We have ${\delta_{i\infty}}_{|\delta_T}=\delta_{i\infty}\boxtimes\mathcal{O}$, ${\psi_{\infty}}_{|\delta_T}=\psi_{\infty}\boxtimes\mathcal{O}$. By Lemma \ref{relations}, on $\overline{\text{\rm M}}'$ we have $\psi_{\infty}+\psi_x=0$ since $\delta_{x\infty}=0$, and $\delta_{i\infty}=-\psi_{\infty}=\mathcal{O}(1)$ if $i\in T$. The identity ${\delta_T}_{|\delta_T}=\mathcal{O}(-1,-1)$ now follows from the previous ones by restricting to $\delta_T$ any of the identities in Lemma \ref{relations2}.
\end{proof}
\subsection{Exceptionality} Note that $W_n$ is a polarized toric variety with the polytope $\Delta$ obtained by truncating the $n$-dimensional cube at the vertices lying on the hyperplane $H$ normal to and bisecting the big diagonal. Then $\tilde\Sigma_n$ is a smooth polarized projective toric variety for the torus $\mathbb{G}_m^{n-1}$ and its polytope is $\Delta\cap H$. In particular, the topological Euler characteristic $e(\tilde\Sigma_n)$ is equal to the number of edges of $\Delta$ intersecting $H$: $$e(\tilde \Sigma_{n})=(s+1)^2{n\choose s+1} =s^2{n\choose s+1}+(n-1){n\choose s+1}\quad (n=2s+2). $$ Note that $(s+1){n\choose s+1}=n{n\choose 0}+(n-2){n\choose 1}+(n-4){n\choose 2}+\ldots+2{n\choose s}$.
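As a quick sanity check of these counts (an illustration only, not used in what follows), take the smallest even case $n=4$, $s=1$: the formula gives $e(\tilde\Sigma_4)=2^2{4\choose 2}=24=1\cdot{4\choose 2}+3\cdot{4\choose 2}$, while the displayed identity reads $2{4\choose 2}=12=4{4\choose 0}+2{4\choose 1}$.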
\begin{defn}\label{even translate} For $E\subseteq N$, $e=|E|$, $p\in\mathbb{Z}$ such that $p+e$ is even, let $$L_{E,p}:=\mathcal{O}(-E)\left(\sum_{T\subseteq N, |T|=s+1}\alpha_{T,E,p} E_T\right)\otimes z^p\quad \text{where}$$ \begin{equation}\label{The alpha} \alpha_{T,E,p}:=-|x_{T,E,p}|,\quad x_{T,E,p}:=|E\cap T|-\frac{e-p}{2}, \end{equation} i.e., the descent to $\tilde \Sigma_n$ of the restriction to $(W_n)_{ss}$ of the above $\mathbb{G}_m$-linearized line bundle on $W_n$. By Lemma \ref{dictionary2} we recover Definition \ref{L even case}: \begin{equation}\label{again} L_{E,p}=-\left(\frac{e-p}{2}\right)\psi_{\infty}-\sum_{i\in E}\delta_{i\infty}-\sum_{x_{T,E,p}>0} x_{T,E,p}\delta_{T\cup\{\infty\}}. \end{equation} We write $x_T$ if there is no ambiguity. Note that $x_{T,E,p}=-x_{T^c,E,-p}$.
\end{defn}
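For illustration (this example is not used later), take $n=4$ (so $s=1$), $E=\{1,2\}$ and $p=0$, so that $\frac{e-p}{2}=1$. Then $x_{T}=|E\cap T|-1$ equals $1$ for $T=\{1,2\}$, $-1$ for $T=\{3,4\}$, and $0$ for the remaining four subsets $T$ with $|T|=2$; accordingly $\alpha_{T,E,0}=-1$ for $T\in\{\{1,2\},\{3,4\}\}$ and $\alpha_{T,E,0}=0$ otherwise, and (\ref{again}) reads $L_{\{1,2\},0}=-\psi_{\infty}-\delta_{1\infty}-\delta_{2\infty}-\delta_{\{1,2\}\cup\{\infty\}}$.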
\begin{lemma} The action of $S_2$ on $Z_N$ exchanges $L_{E,p}$ with $L_{E,-p}$.
\end{lemma}
\begin{proof} The statement follows immediately from (\ref{again}) and Lemma \ref{relations2}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{even} - exceptionality] Lemma \ref{Torsion} implies that the torsion sheaves $\mathcal{O}_{\delta}(-a,-b)$ form an exceptional collection. Let now $\delta:=\delta_{T\cup\{\infty\}}$. To prove that $\{\mathcal{O}_{\delta}(-a,-b), L_{E,p}\}$ form an exceptional pair, i.e., that $L^\vee_{|\delta}\otimes\mathcal{O}(-a,-b)$ is acyclic, note that by Lemma \ref{restrict to delta} and Definition \ref{even translate} we have, letting $\alpha_T:=\alpha_{T,E,p}$: $$L^\vee_{|\delta}=\begin{cases} \mathcal{O}(0,\alpha_T) & \text{ if }\quad p+|E\cap T|-|E\cap T^c|\geq0\cr \mathcal{O}(\alpha_T,0) & \text{ if }\quad p+|E\cap T|-|E\cap T^c|\leq0.\end{cases}$$ Clearly, if $a,b>0$ then $L^\vee_{|\delta}\otimes\mathcal{O}(-a,-b)$ is acyclic. Consider now the case when one of $a,b$ is $0$. Using the $S_2$-symmetry, we may assume $a=0$. Let $0<b<\frac{s+1}{2}$. Since by (\ref{alpha ineq}) we have $-\lfloor\frac{s+1}{2}\rfloor\leq \alpha_T\leq0$, the result follows. We describe the Kempf--Ness stratification of the unstable locus in $W_n$. Let $G=\mathbb{G}_m$. As before, we consider pairs $(\lambda, Z)$, with a $1$-PS $\lambda: \mathbb{G}_m\rightarrow G$ and $Z$ a connected component of the $\lambda$-fixed locus. It suffices to consider $\lambda(z)=z$ and $\lambda'(z)=z^{-1}$. The $G$-fixed locus in $W=W_n$ consists of the points $$Z_I=\{(x_i)\,|\,x_i=\infty\ \hbox{\rm for}\ i\in I,\ x_i=0\ \hbox{\rm for}\ i\not\in I\}\in(\mathbb{P}^1)^n\setminus\{p_T\}$$ for every subset $I\subseteq N$ with $|I|\neq s+1$ and the loci $Z^+_T\sqcup Z^-_T\subseteq E_T$, for each subset $T\subseteq N$, $|T|=s+1$. The pairs $(\lambda, Z)$ to be considered are therefore $$(\lambda, Z_I),\quad (\lambda', Z_I)\quad (I\subseteq N, |I|\neq s+1),$$ $$(\lambda, Z^+_T),\quad (\lambda', Z^+_T),\quad (\lambda, Z^-_T),\quad (\lambda', Z^-_T)\quad (T\subseteq N, |T|=s+1).$$ Recall that our polarization is $\mathcal{L}=\mathcal{O}(1,\ldots,1)(-\epsilon\sum E_T)$ and for any subset $I\subseteq N$ with $|I|\neq s+1$ we have $$\hbox{\rm weight}_\lambda\mathcal{L}|_{Z_I}=|I^c|-|I|, \quad \hbox{\rm weight}_{\lambda'}\mathcal{L}|_{Z_I}=-|I^c|+|I|,$$ while for all subsets $T\subseteq N$ with $|T|=s+1$ we have: $$\hbox{\rm weight}_\lambda\mathcal{L}|_{q}=-2\epsilon, \quad \hbox{\rm weight}_{\lambda'}\mathcal{L}|_{q}=+2\epsilon\quad (q\in Z^+_T),$$ $$\hbox{\rm weight}_\lambda\mathcal{L}|_{q}=+2\epsilon, \quad \hbox{\rm weight}_{\lambda'}\mathcal{L}|_{q}=-2\epsilon\quad (q\in Z^-_T).$$ As in the $n$ odd case, we define for any subset $I\subseteq N$ affine subsets: $$S_I=\{(x_i)\,|\,x_i=\infty\ \hbox{\rm if}\ i\in I, x_i\neq\infty\ \hbox{\rm if}\ i\notin I\}\cong \mathbb{A}^{|I^c|}$$ $$S'_I=\{(x_i)\,|\,x_i=0\ \hbox{\rm if}\ i\not\in I, x_i\neq0\ \hbox{\rm if}\ i\in I\}\cong \mathbb{A}^{|I|}.$$ The unstable locus arises from the pairs with negative weight: $$(\lambda, Z_I)\quad (\text{for } |I|>s+1),\quad (\lambda', Z_I)\quad (\text{for } |I|<s+1),$$ $$(\lambda, Z^+_T),\quad (\lambda', Z^-_T)\quad (\text{for } |T|=s+1):$$ $$S_I\cong \mathbb{A}^{|I^c|}\quad (\text{for } |I|>s+1),\quad S'_I\cong \mathbb{A}^{|I|}\quad (\text{for } |I|<s+1),$$ $$S^+_T=\text{\rm Bl}_{p_T}S_T=\text{\rm Bl}_0\mathbb{A}^{|T^c|},\quad S^-_T=\text{\rm Bl}_{p_T}S'_T=\text{\rm Bl}_0\mathbb{A}^{|T|}\quad (\text{for } |T|=s+1).$$ The destabilizing $1$-PS for $S_I$ (resp.,~for $S'_I$) is $\lambda$ (resp.~$\lambda'$). The $1$-PS $\lambda$ (resp., $\lambda'$) acts on the restriction to $Z_I$ of the conormal bundle $N^\vee_{S_I|(\mathbb{P}^1)^n}$ (resp., $N^\vee_{S'_I|(\mathbb{P}^1)^n}$) with positive weights. Their sum $\eta_I$ (resp., $\eta'_I$) is: $$\eta_I=2|I|,\quad\hbox{\rm resp.,}\quad \eta'_I=2|I^c|.$$ When $|T|=s+1$, the destabilizing $1$-PS for $S^+_T$ (resp.~for $S^-_T$) is $\lambda$ (resp.~$\lambda'$). The $1$-PS $\lambda$ (resp., $\lambda'$) acts on $N^\vee_{S^+_T|W}$ (resp., $N^\vee_{S^-_T|W}$) restricted to $q\in Z^+_T$ (resp., $Z^-_T$), with positive weights. Their sum $\eta^+_T$ (resp., $\eta^-_T$) is: $$\eta^+_T=4|T|=2n,\quad\hbox{\rm resp.,}\quad \eta^-_T=4|T^c|=2n.$$ To see this, let $q\in Z^+_T$. The sum of $\lambda$-weights of $\big(N^\vee_{S^+_T|W}\big)_{|q}$ equals $$\hbox{\rm weight}_\lambda \big(\det N^\vee_{S^+_T|W}\big)_{|q}=\hbox{\rm weight}_\lambda \big(\det T_{S^+_T}\big)_{|q}-\hbox{\rm weight}_\lambda \big(\det T_W\big)_{|q}.$$ We use the local coordinates introduced in \S\ref{even intro} (assume again w.l.o.g. that $T=\{s+2,\ldots, n\}$). We may assume also that the point $$q=[t_1,\ldots, t_{s+1},0,\ldots,0]\in Z^+_T\subseteq E_T=\mathbb{P}^{n-1}$$ has $t_1=1$.
Then local coordinates on an open set $U=\mathbb{A}^n\subseteq W$ around $q$ are given by $x_1, t_2,\ldots, t_{s+1}$, $u_1,\ldots, u_{s+1}$, with the blow-up map $\mathbb{A}^n\rightarrow \mathbb{A}^n$: $$(x_1, t_2,\ldots, t_{s+1}, u_1,\ldots, u_{s+1})\mapsto (x_1, x_1t_2,\ldots, x_1 t_{s+1}, x_1u_1,\ldots, x_1 u_{s+1}).$$ Then $S^+_T\cap U\subseteq U$ has equations $u_1=\ldots=u_{s+1}=0$ (the proper transform of $S_T: y_1=\ldots=y_{s+1}=0$). The action of $G$ on $W$ induces an action on $U$: $$z\cdot x_1=z^2 x_1,\quad z\cdot t_i=t_i\quad (i=2,\ldots,s+1),\quad z\cdot u_i=z^{-4}u_i \quad (i=1,\ldots,s+1).$$ It follows that $$\hbox{\rm weight}_\lambda \big(\det T_{W}\big)_{|q}=-2-4s,\quad \hbox{\rm weight}_\lambda \big(\det T_{S^+_T}\big)_{|q}=2.$$ Hence, $\eta^+_T=4s+4=2n$. Similarly, $\eta^-_T=2n$: for $q\in Z^-_T$ and coordinates $y_1, t_1,\ldots, t_{s+1}, u_2,\ldots, u_{s+1}$ on the chart $u_1=1$, the action of $G$ is given by: $$z\cdot y_1=z^{-2} y_1,\quad z\cdot t_i=z^4t_i\quad (i=1,\ldots,s+1),\quad z\cdot u_i=u_i \quad (i=2,\ldots,s+1).$$ Letting $m:=\Bigl\lfloor\frac{n}{4}\Bigr\rfloor=\Bigl\lfloor\frac{s+1}{2}\Bigr\rfloor$, we make a choice of windows $\mathbb{G}_w$: $$[w_I, w_I+\eta_I),\quad [w'_I, w'_I+\eta'_I),\quad [w^+_T, w^+_T+\eta^+_T),\quad [w^-_T, w^-_T+\eta^-_T),$$ $$w_I=w'_I=-(s+1),\quad w^+_T=w^-_T=-4m=-n\quad \text{ if}\quad s\quad \text{is odd},$$ $$w_I=w'_I=-s,\quad w^+_T=w^-_T=-4m=-n+2\quad \text{ if}\quad s\quad \text{is even}.$$ We prove that $\mathbb{G}_w$ contains the $G$-linearized line bundles that descend to the $L_{E,p}$ in Theorem \ref{even}. Since the collection is $S_2$ invariant and $S_2$ flips the strata $S_I$ and $S'_I$, it suffices to check the window conditions for the strata $S_I$, $S^+_T$. For $I\subseteq N$, $|I|>s+1$, at the point $Z_I\in S_I$ we have by (\ref{A}) $$\hbox{\rm weight}_\lambda\big(L_{E,p}\big)_{|Z_I}=|E\cap I|-|E\cap I^c|+p,$$ which lies in $[w_I, w_I+\eta_I)$ by Lemma \ref{MaxMin}. For $T\subseteq N$ with $|T|=s+1$, we have by (\ref{B}) and (\ref{A}) that \begin{align*} \hbox{\rm weight}_\lambda\big(L_{E,p}\big)_{|q\in Z^+_T}&=|E\cap T|-|E\cap T^c|+p-2|x_T|\\ &=\begin{cases} 0 & \text{ if }\quad |E\cap T|-|E\cap T^c|+p\geq0\cr -4|x_T| & \text{ if }\quad |E\cap T|-|E\cap T^c|+p\leq0,\end{cases} \end{align*} which by (\ref{alpha ineq}) lies in $[w^+_T, w^+_T+\eta^+_T)$. Hence, all $\{L_{E,p}\}$ in Theorem \ref{even} are contained in the window $\mathbb{G}_w$. We now check exceptionality. Consider two line bundles as in Theorem \ref{even}: $$L_{E,p}=\mathcal{O}(-E)\left(\sum_{|T|=s+1}\alpha_T E_T\right)\otimes z^p,\quad L_{E',p'}=\mathcal{O}(-E')\left(\sum_{|T|=s+1}\alpha'_T E_T\right)\otimes z^{p'},$$ where $\alpha_T:=\alpha_{T,E,p}$, $\alpha'_T:=\alpha_{T,E',p'}$. Assume that $e=|E|\geq e'=|E'|$. Hence, $E\nsubseteq E'$ unless $E=E'$.
By \cite[Theorem 2.10]{DHL}, we have that $R\text{\rm Hom}(L_{E', p'}, L_{E,p})$ equals the weight $(p'-p)$ part (with respect to the canonical action of $G$) of $$R\text{\rm Hom}_{W}(L_{E',p'},L_{E,p})=R\Gamma\left(\mathcal{O}(E'-E)\otimes\mathcal{O}(\sum_{T}(\alpha'_T-\alpha_T)E_T)\right).$$ Hence, letting $$M_0:=\mathcal{O}(E'-E)\otimes\mathcal{O}\left(\sum_{\beta_T\leq0}(-\beta_T )E_T\right),\quad \text{where}\quad \beta_T:=\alpha_T-\alpha'_T,$$ we need to understand the weight $(p'-p)$ part of $$R\Gamma\left(M_0\otimes\mathcal{O}\left(\sum_{|T|=s+1,\ \beta_T>0}(-\beta_T )E_T\right)\right).$$ Note that $M_0$ is, up to an effective sum of exceptional divisors, a pull-back from $(\mathbb{P}^1)^n$; hence, by the projection formula, $R\Gamma(M_0)=R\Gamma(\mathcal{O}(E'-E))$ (which is $0$ if $E\nsubseteq E'$). Consider a simplified situation. For a line bundle $M$ on $W$, a divisor $D:=E_T$ and $\beta:=\beta_T>0$, consider the exact sequences: $$0\rightarrow M(-(i+1) D)\rightarrow M(-iD)\rightarrow M(-iD)_{|D}\rightarrow 0\quad (i=0,\ldots, \beta-1).$$ To prove that the weight $(p'-p)$ part of $R\Gamma(M(-\beta D))$ is $0$, it suffices to prove that $$R\Gamma(M),\quad R\Gamma(M(-iD)_{|D})\quad (i=0,1,\ldots,\beta-1),$$ have no weight $(p'-p)$ part. Put an arbitrary order on the subsets $T$ with $\beta_T>0$ ($T_1,T_2,\ldots$). Applying the above observation successively, first for $M_0$, $E_{T_1}$, then inductively for $M_0(-\beta_{T_1}E_{T_1}-\ldots-\beta_{T_i}E_{T_i})$, $E_{T_{i+1}}$ (note that the exceptional divisors $E_{T_k}$ are pairwise disjoint, so the twists by the previous $E_{T_k}$ restrict trivially to $E_{T_{i+1}}$), it suffices to prove that for all $T$, the following spaces $$R\Gamma(M_0),\quad R\Gamma(M_0(-iE_T)_{|E_T})\quad (i=0,1,\ldots,\beta_T-1)$$ have no weight $(p'-p)$ part. We start with $R\Gamma(M_0)$. If $E\neq E'$, then $E\nsubseteq E'$ and hence $R\Gamma(M_0)=0$. If $E=E'$, then $M_0=\mathcal{O}$ and the action of $G$ on $R\Gamma(M_0)$ is trivial. Hence, unless $p=p'$ (i.e., $L_{E,p}=L_{E',p'}$), $R\Gamma(M_0)$ has no weight $(p'-p)$ part. We now continue with $R\Gamma(M_0(-iE_T)_{|E_T})$. By the projection formula, $$R\Gamma(M_0(-iE_T)_{|E_T})={M_0}_{|p_T}\otimes R\Gamma(\mathcal{O}(-iE_T)_{|E_T}),$$ where ${M_0}_{|p_T}$ is the fiber of $M_0$ at $p_T$ (we denote by $M_0$ both the line bundle on $(\mathbb{P}^1)^n$ and its pull-back to $W$). By (\ref{A}), the action of $G$ on ${M_0}_{|p_T}$ has weight $$\big(|E\cap T|-|E\cap T^c|\big)-\big(|E'\cap T|-|E'\cap T^c|\big).$$ Consider coordinates $t_i, u_i$ on $E_T=\mathbb{P}^{n-1}$, such that $t_i$ (resp., $u_i$) have weight $2$ (resp., weight $-2$). There is a canonical identification $$R\Gamma(\mathcal{O}(-iE_T)_{|E_T})=\mathbb{C}\left\{\prod t_k^{a_k}\prod u_k^{b_k}\quad |\quad a_k,b_k\in\mathbb{Z}_{\geq0},\quad \sum a_k+\sum b_k=i\right\},$$ with the weight of $\prod t_k^{a_k}\prod u_k^{b_k}$ equal to $2\sum a_k-2\sum b_k$. As $2\sum a_k-2\sum b_k$ ranges through all even numbers between $-2i$ and $2i$, it follows that the possible weights of elements in $R\Gamma(M_0(-iE_T)_{|E_T})$ are $$\big(|E\cap T|-|E\cap T^c|\big)-\big(|E'\cap T|-|E'\cap T^c|\big)+2j,$$ for all the values of $j$ between $-i$ and $i$. Assume now that for some $0\leq i\leq \beta_T-1=\alpha_T-\alpha'_T-1$, $-i\leq j\leq i$, $$\big(|E\cap T|-|E\cap T^c|\big)-\big(|E'\cap T|-|E'\cap T^c|\big)+2j=p'-p.$$ Using the definition of $\alpha_T$, $\alpha'_T$, it follows that $\pm 2\alpha_T \pm 2\alpha'_T=-2j$.
\begin{claim} None of $\pm \alpha_T \pm \alpha'_T$ lies in the interval $[-(\alpha_T-\alpha'_T-1), (\alpha_T-\alpha'_T-1)]$.
\end{claim}
\begin{proof} By symmetry, it is enough to prove that none of $\pm \alpha_T \pm \alpha'_T$ lies in the interval $[0, (\alpha_T-\alpha'_T-1)]$. Since $\alpha_T, \alpha'_T\leq0$ and $\alpha_T>\alpha'_T$, both $\alpha_T+\alpha'_T$ and $-\alpha_T+\alpha'_T$ are negative. Hence, it remains to prove that $-\alpha_T-\alpha'_T$, $\alpha_T-\alpha'_T$ do not lie in the interval $[0, (\alpha_T-\alpha'_T-1)]$. But clearly, $-\alpha_T-\alpha'_T>\alpha_T-\alpha'_T-1$ and $\alpha_T-\alpha'_T>\alpha_T-\alpha'_T-1$.
\end{proof}
This finishes the proof that the collection in Theorem \ref{even} is exceptional.
\end{proof}
\begin{lemma}\label{Torsion} Let $0\leq a, b, a', b'\leq s$. Let $\delta$ be a divisor in a Hassett space $\overline{\text{\rm M}}$ such that $\delta=\mathbb{P}^s\times\mathbb{P}^s$ and with normal bundle $\mathcal{O}(-1,-1)$. Assume that the restriction map $\text{\rm Pic}(\overline{\text{\rm M}})\rightarrow\text{\rm Pic}(\delta)$ is surjective. Then $\{\mathcal{O}_{\delta}(-a,-b), \mathcal{O}_{\delta}(-a',-b')\}$ is not an exceptional pair if and only if one of the following happens: \begin{itemize} \item $a'\geq a$, $b'\geq b$, \item $a'=0$, $a=s$, $b'>b$, \item $b'=0$, $b=s$, $a'>a$, \item $a'=b'=0$, $a=b=s$. \end{itemize} When $a=a'$, $b=b'$, we have $R\text{\rm Hom}(\mathcal{O}_{\delta}(-a',-b'), \mathcal{O}_{\delta}(-a,-b))=\mathbb{C}$.
\end{lemma}
\begin{proof} As any line bundle on $\delta$ is the restriction of a line bundle on $\overline{\text{\rm M}}$, we have that $$R\text{\rm Hom}(\mathcal{O}_{\delta}(-a',-b'), \mathcal{O}_{\delta}(-a,-b))=R\text{\rm Hom}(\mathcal{O}_{\delta}, \mathcal{O}_{\delta}(a'-a,b'-b)).$$ Applying $R\text{\rm Hom}(-, \mathcal{O}_{\delta}(a'-a,b'-b))$ to the canonical sequence $$0\rightarrow\mathcal{O}(-\delta)\rightarrow\mathcal{O}\rightarrow\mathcal{O}_{\delta}\rightarrow0,$$ it follows that there is a long exact sequence on $\overline{\text{\rm M}}$ $$\ldots\rightarrow\text{\rm Ext}^i(\mathcal{O}_{\delta}, \mathcal{O}_{\delta}(a'-a,b'-b))\rightarrow$$ $$\rightarrow\text{H}^i(\mathcal{O}_{\delta}(a'-a,b'-b))\rightarrow \text{H}^i(\mathcal{O}_{\delta}(a'-a-1,b'-b-1))\rightarrow\ldots$$ It is clear now that if any of the conditions in the Lemma holds, then $$R\text{\rm Hom}(\mathcal{O}_{\delta}(-a',-b'), \mathcal{O}_{\delta}(-a,-b))\neq0.$$ Assume now that none of the conditions holds. Then either $a'<a$ or $b'<b$. Assume $a'<a$. Since $a'-a\geq -a\geq -s$, $\mathcal{O}_{\delta}(a'-a,b'-b)$ is acyclic. But in this case $\mathcal{O}_{\delta}(a'-a-1,b'-b-1)$ is not acyclic if and only if $a'=0$, $a=s$ and either $b'-b>0$ or $b'-b\leq -s$ (in which case, we must have $b'=0$, $b=s$). This gives precisely two of the listed cases. The case $b'<b$ is similar.
\end{proof}
\begin{lemma}\label{MaxMin} Let $n=2s+2$.
For a fixed set $I\subseteq N$ with $|I|>s+1$, we have $$\max_{(E,p)}\big(|E\cap I|-|E\cap I^c|+p\big)= \begin{cases} 2|I| -(s+3) & \text{ if}\quad s\quad \text{is odd} \cr 2|I| -(s+2) & \text{ if}\quad s\quad \text{is even},\end{cases}$$ $$\min_{(E,p)}\big(|E\cap I|-|E\cap I^c|+p\big)= \begin{cases} -(s+1) & \text{ if }\quad s\quad \text{is odd}\cr -s & \text{ if }\quad s\quad \text{is even},\end{cases}$$ where the maximum and the minimum are taken over all the pairs $(E,p)$ corresponding to each line bundle $L_{E,p}$ in Theorem \ref{even}. Similarly, for $T\subseteq N$, $|T|=s+1$, $$\max_{(E,p)}\big(p+|E\cap T|-|E\cap T^c|\big)=2m,\quad\min_{(E,p)}\big(p+|E\cap T|-|E\cap T^c|\big)=-2m,$$ $$\text{where}\quad m:=\Bigl\lfloor\frac{n}{4}\Bigr\rfloor=\Bigl\lfloor\frac{s+1}{2}\Bigr\rfloor.$$ In particular, when $(E,p)$ are as in Theorem \ref{even}, the coefficients $\alpha_{T,E,p}$ in (\ref{The alpha}) satisfy \begin{equation}\label{alpha ineq} -m\leq \alpha_{T,E,p}=-|x_{T,E,p}|\leq 0. \end{equation}
\end{lemma}
The proof is straightforward and we omit it.
\subsection{Fullness} Let $\mathcal{C}$ be the collection in Theorem \ref{even}. We denote by $\mathcal{A}\subset \mathcal{C}$ the collection of torsion sheaves in Theorem \ref{even}. We prove more generally:
\begin{thm}\label{full even} The collection $\mathcal{C}$ in Theorem \ref{even} generates all line bundles $\{L_{E,p}\}$ (see Definition \ref{L even case} and Definition \ref{even translate}) for all $E\subseteq N$, $e=|E|$, $p\in\mathbb{Z}$ with $e+p$ even.
\end{thm}
\begin{proof}[Proof of Theorem \ref{even} - fullness] By Theorem \ref{full even}, the collection $\mathcal{C}$ generates all the objects $Rp_*({\pi^*_I}\hat{\mathbb{G}})$ from Corollary \ref{rewrite}. Fullness then follows by Corollary \ref{S is enough}.
\end{proof}
To prove Theorem \ref{full even} we do an induction on the \emph{score} $S(E,p)$: \begin{equation}\label{score notn} S(E,p):=|p|+\min\{e,n-e\}, \end{equation} \begin{equation}\label{q} \text{written as}\quad S(E,p)=2\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+2q,\quad q\in\mathbb{Z}. \end{equation}
\begin{rmk}\label{reformulate} As $S(E,p)$ is even, the range of $(E,p)$ in Theorem \ref{even} is precisely: \begin{itemize} \item If $s$ is even: $S(E,p)\leq s$, \item If $s$ is odd: $S(E,p)\leq s+1$ if $e\leq s+1$ and $S(E,p)\leq s-1$ if $e\geq s+2$. \end{itemize} Using notation (\ref{q}), $(E,p)$ is not in the range of Theorem \ref{even} if $q\geq1$ when $s$ is even, or when $s$ is odd and $e\geq s+2$; and if $q\geq2$ when $s$ is odd and $e\leq s+1$.
\end{rmk}
To prove Theorem \ref{full even} we introduce three other types of line bundles.
\begin{notn}\label{extraLB} Let $n=2s+2$, $E\subseteq N$, $e=|E|$ and $p\in\mathbb{Z}$. On $Z_N$ let $$R_{E,p}=-\left(\frac{e-p}{2}\right)\psi_{\infty}-\sum_{i\in E}\delta_{i\infty},\quad Q_{E,p}=-\left(\frac{e+p}{2}\right)\psi_0-\sum_{i\in E}\delta_{i0},$$ \begin{equation}\label{RS to V} V_{E,p}:=R_{E,p}+\sum_{x_{T,E,p}<0}|x_{T,E,p}| \delta_{T\cup\{\infty\}} =Q_{E,p}+\sum_{x_{T,E,p}>0}|x_{T,E,p}| \delta_{T\cup\{\infty\}}, \end{equation} where the last equality follows from (\ref{L to R}) and (\ref{L to S}).
\end{notn}
We recall for the reader's convenience that, using (\ref{The alpha}), we have $$L_{E,p}=-\left(\frac{e-p}{2}\right)\psi_{\infty}-\sum_{i\in E}\delta_{i\infty}-\sum_{x_{T,E,p}>0}x_{T,E,p}\delta_{T\cup\{\infty\}}.$$ Therefore, \begin{equation}\label{L to R} R_{E,p}=L_{E,p}+\sum_{x_{T,E,p}>0} |x_{T,E,p}|\delta_{T\cup\{\infty\}} \end{equation} and by using Lemma \ref{relations2}, we have also \begin{equation}\label{L to S} Q_{E,p}=L_{E,p}+\sum_{x_{T,E,p}<0} |x_{T,E,p}|\delta_{T\cup\{\infty\}}. \end{equation} We remark that using Lemma \ref{dictionary2}, we have: $$R_{E,p}=\mathcal{O}(-E)(\sum x_T E_T)\otimes z^p,\quad Q_{E,p}=\mathcal{O}(-E)(-\sum x_T E_T)\otimes z^p,$$ $$L_{E,p}=\mathcal{O}(-E)(-\sum |x_T| E_T)\otimes z^p,\quad V_{E,p}=\mathcal{O}(-E)(\sum |x_T| E_T)\otimes z^p.$$
\begin{rmk}\label{symmetry} It is clear by the definition that by the $S_2$ symmetry (i.e., exchanging $0$ with $\infty$) the line bundle $R_{E,p}$ is exchanged with $Q_{E,-p}$. The line bundles $R_{E,p}$, $Q_{E,p}$ will be crucial for the proof of Theorem \ref{full even}. We note that the line bundles $V_{E,p}$ are used only in the proof of Corollary \ref{next torsion}.
\end{rmk}
For every divisor $\delta_T:= \delta_{T\cup\{\infty\}}$, we have by Lemma \ref{restrict to delta} that \begin{equation}\label{resRS} {R_{E,p}}_{|\delta_T}=\mathcal{O}(-x_{T,E,p},0),\quad {Q_{E,p}}_{|\delta_T}=\mathcal{O}(0, x_{T,E,p}). \end{equation} From here on, the notation $\mathcal{O}(-a,-b)$ indicates that $\mathcal{O}(-a)$ (resp., $\mathcal{O}(-b)$) corresponds to the component marked by $\infty$ (resp., marked by $0$).
\begin{defn} We say that line bundles $L$ and $L'$ are \emph{related by quotients} $Q^i$ if there are exact sequences $$0\rightarrow L^{i-1}\rightarrow L^i\rightarrow Q^i\rightarrow0\quad (i=1,\ldots, t),$$ $$L^0=L,\quad L^t=L'.$$ Note that when $L'=L+\sum \beta_T \delta_T$ with $\beta_T\geq 0$ for all $T$, the quotients $Q^i$ are direct sums of torsion sheaves of type $\mathcal{O}_{\delta_T}(-a,-b)$.
\end{defn}
\begin{lemma}\label{quotients} Let $E\subseteq N$, $e=|E|$, $p\in\mathbb{Z}$, such that $e+p$ is even. Then: \begin{itemize} \item[(i) ] $L_{E,p}$ and $R_{E,p}$ are related by quotients which are direct sums of type $$\mathcal{O}_{\delta_T}(-x_T+i,i),\quad 0\leq i<|x_T|=x_T\quad (x_T>0)$$ \item[(ii) ] $L_{E,p}$ and $Q_{E,p}$ are related by quotients which are direct sums of type $$\mathcal{O}_{\delta_T}(i,x_T+i),\quad 0\leq i<|x_T|=-x_T\quad (x_T<0)$$ \item[(iii) ] $R_{E,p}$ and $V_{E,p}$ are related by quotients which are direct sums of type $$\mathcal{O}_{\delta_T}(-x_T-i,-i),\quad 0<i\leq |x_T|=-x_T\quad (x_T<0)$$ \item[(iv) ] $Q_{E,p}$ and $V_{E,p}$ are related by quotients which are direct sums of type $$\mathcal{O}_{\delta_T}(-i,x_T-i),\quad 0<i\leq |x_T|=x_T\quad (x_T>0),$$ \end{itemize} where we denote for simplicity $\delta_T:=\delta_{T\cup\{\infty\}}$ and $x_T:=x_{T,E,p}$. In particular, all pairs are related by quotients of type $$\mathcal{O}(-a,*),\quad \mathcal{O}(*,-a),\quad \text{with}\quad 0<a\leq\frac{S(E,p)}{2}.$$
\end{lemma}
\begin{proof} This follows immediately from (\ref{resRS}), (\ref{L to R}), (\ref{L to S}) and (\ref{RS to V}). The last statement follows by Lemma \ref{ineqX_lemma}.
\end{proof}
\begin{lemma}\label{ineqX_lemma} Let $n=2s+2$, $E\subseteq N$, $e=|E|$, $p\in\mathbb{Z}$, $e+p$ even.
Then for all $T$ \begin{equation}\label{score ineq} |x_{T,E,p}|\leq \frac{S(E,p)}{2}, \end{equation} where $S(E,p)$ is the score of the pair $(E,p)$ (Notation \ref{score notn}). Furthermore, $$ |x_{T,E,p}|=x_{T,E,p}=\frac{S(E,p)}{2}\quad\text{ if and only if }\quad T\subseteq E,\quad p\geq0.$$
\end{lemma}
\noindent The proof is straightforward and we omit it. Note that (\ref{alpha ineq}) is a particular case.
\begin{cor}\label{next torsion} Let $e=s+1$, $p\geq0$, and $(E,p)$ such that $$S(E,p)=2\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+2q,$$ with $p=2q-1$, $q\geq1$ if $s$ is even, and $p=2q-2$, $q\geq2$ if $s$ is odd. Assume the following objects are generated by $\mathcal{C}$: \begin{itemize} \item[(i) ] All torsion sheaves $\mathcal{O}_{\delta_T}(-a,0)$ for all $0<a<\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+q$ and all $T$, \item[(ii) ] The line bundles $R_{E,p}$, $Q_{E,p}$. \end{itemize} Then $\mathcal{O}_{\delta_{T}}(-(\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+q),0)$ with $T=E$ is generated by $\mathcal{C}$. Here $\delta_T:=\delta_{T\cup\{\infty\}}$.
\end{cor}
As $\mathcal{C}$ is invariant under the action of $S_2$, it follows from Corollary \ref{next torsion} that a similar statement holds when replacing $\mathcal{O}_{\delta_T}(-a,0)$ with $\mathcal{O}_{\delta_T}(0,-a)$.
\begin{proof} We claim that $V_{E,p}$ is generated by $\mathcal{C}$. Since $R_{E,p}$ is generated by $\mathcal{C}$ by assumption, using Lemma \ref{quotients}(iii), it suffices to prove that, when $x_T<0$, the sheaves $\mathcal{O}_{\delta_T}(-x_T-i,-i)$, $0<i\leq |x_T|$, are generated by $\mathcal{A}$, for which it suffices that $|x_T|<\frac{s+1}{2}$. Since the assumptions on $q$ imply that $p>0$, we have that $$|x_T|=-x_T=\frac{e-p}{2}-|E\cap T|\leq \frac{e-p}{2}=\frac{s+1-p}{2}<\frac{s+1}{2},$$ and the claim follows. By Lemma \ref{quotients}(iv), the quotients relating $Q_{E,p}$ and $V_{E,p}$ have the form $\mathcal{O}_{\delta_T}(-i,x_T-i)$ for $0<i\leq x_T$. By Lemma \ref{ineqX_lemma}, we have that $x_T\leq \frac{S(E,p)}{2}=\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+q$, with equality if and only if $T\subseteq E$. Since $e=s+1$, we must have $T=E$. It follows that all but one quotient, namely $\mathcal{O}_{\delta_{T}}(-(\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+q),0)$ for $T=E$ (when $i=x_T=\frac{S(E,p)}{2}$), are already by assumption generated by $\mathcal{C}$. Note that this quotient appears exactly once. Since $Q_{E,p}$, $V_{E,p}$ are generated by $\mathcal{C}$, it follows that this quotient is also generated by $\mathcal{C}$.
\end{proof}
\begin{cor}\label{next torsion 2} Let $q\in\mathbb{Z}$, $q>0$. Assume that $R_{E,p}$, $Q_{E,p}$ are generated by $\mathcal{C}$ whenever $S(E,p)=2\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+2q'$, with $0<q'\leq q$, and $e=s+1$. Then for all $T$, $\delta_T:=\delta_{T\cup\{\infty\}}$, the following torsion sheaves are generated by $\mathcal{C}$: $$\mathcal{O}_{\delta_T}(-a,0),\quad \mathcal{O}_{\delta_T}(0,-a)\quad \text{when}\quad 0<a\leq\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+q.$$
\end{cor}
\begin{proof} By the $S_2$ symmetry, it suffices to prove the statement for $\mathcal{O}_{\delta_T}(-a,0)$. For any $q>0$, taking $E\subseteq N$ with $e=s+1$ and $p=2q-1$ when $s$ is even, or $p=2q-2$ when $s$ is odd, gives a pair $(E,p)$ with $S(E,p)=2\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+2q$. If $s$ is even, or if $s$ is odd and $q\geq2$, the assumptions of Corollary \ref{next torsion} are satisfied. By induction on $q>0$, $\mathcal{O}_{\delta_T}(-a,0)$ is generated by $\mathcal{C}$ when $T=E$, $a=\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+q$.
The only case left is when $s$ is odd and $q=1$ ($p=0$). By assumption $R_{E,0}$, $Q_{E,0}$ are generated by $\mathcal{C}$ if $e=s+1$ ($S(E,0)=s+1$). We have to prove that $\mathcal{O}_{\delta_T}(-\frac{s+1}{2},0)$ is generated by $\mathcal{C}$. Taking $E=T$, $p=0$, we have that the pair $(E,0)$ is in the range of Theorem \ref{even}. Hence, $L_{E,0}$ is in $\mathcal{C}$. By Lemma \ref{quotients} and Lemma \ref{ineqX_lemma}, $L_{E,0}$ and $R_{E,0}$ are related by quotients which are direct sums of sheaves in $\mathcal{A}$, with only one quotient which is $\mathcal{O}_{\delta_T}(-\frac{s+1}{2},0)$ for $T=E$ (the only possibility to have $x_T=\frac{S(E,0)}{2}=\frac{s+1}{2}$ is when $T=E$). Note that this quotient appears exactly once. The statement follows.
\end{proof}
\begin{lemma}[Koszul resolutions]\label{koszul} Let $p\in\mathbb{Z}$, $E\subseteq N$. \begin{itemize} \item[(K1) ] If $e\leq s+1$, letting $I\subseteq N\setminus E$, $|I|=s+1$, there is a long exact sequence: $$0\rightarrow Q_{E\cup I,p-s-1}\rightarrow\ldots\rightarrow\bigoplus_{J\subseteq I, |J|=j}Q_{{E\cup J},p-j}\rightarrow\ldots\rightarrow Q_{E, p}\rightarrow0.$$ \item[(K2) ] If $e\geq s+1$, letting $I\subseteq E$, $|I|=s+1$, there is a long exact sequence: $$0\rightarrow R_{E,p}\rightarrow\ldots\rightarrow\bigoplus_{J\subseteq I, |J|=j}R_{{E\setminus J},p-j}\rightarrow\ldots\rightarrow R_{E\setminus I, p-s-1}\rightarrow0.$$ \end{itemize}
\end{lemma}
\begin{proof} We have $\bigcap_{i\in I}\delta_{i\infty}=\emptyset$ and the boundary divisors $\{\delta_{i\infty}\}_{i\in I}$ intersect transversely (the divisors intersect properly and the intersection is smooth, being a Hassett space). It follows that there is a long exact sequence $$0\rightarrow \mathcal{O}\left(-\sum_{i\in I} \delta_{i\infty}\right)\rightarrow\bigoplus_{j\in I} \mathcal{O}\left(-\sum_{i\in I\setminus\{j\}} \delta_{i\infty}\right) \rightarrow\bigoplus_{j,k\in I} \mathcal{O}\left(-\sum_{i\in I\setminus\{j,k\}}\delta_{i\infty}\right)\rightarrow\ldots$$ $$\ldots\rightarrow \bigoplus_{i\in I} \mathcal{O}(-\delta_{i\infty})\rightarrow\mathcal{O}\rightarrow 0.$$ Tensoring this long exact sequence with the line bundle $-\sum_{i\in E\setminus I}\delta_{i\infty}-\frac{e-p}{2}\psi_{\infty}$ gives the second long exact sequence in the lemma. The first long exact sequence is obtained in a similar way by considering the Koszul resolution of the intersection of the boundary divisors $\{\delta_{i0}\}_{i\in I}$.
\end{proof}
\begin{lemma}\label{slopes in K} Assume $p\geq0$ and $E\subseteq N$ such that $$S(E,p)=2\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+2q,$$ with $q\geq1$ if $s$ is even and $q\geq 2$ if $s$ is odd. In the notations of Lemma \ref{koszul}, we have: \begin{itemize} \item[(1) ] If $e\leq s+1$ then $Q_{E\cup J, p-j}$ in Lemma \ref{koszul}(K1) satisfies $S(E\cup J, p-j)\leq S(E,p)$. If equality holds, then $|p-j|<p$ if $j\neq0$. \item[(2) ] If $e\geq s+1$ then $R_{E\setminus J, p-j}$ in Lemma \ref{koszul}(K2) satisfies $S(E\setminus J, p-j)\leq S(E,p)$. If equality holds, then $|p-j|<p$ if $j\neq0$. \end{itemize}
\end{lemma}
\begin{proof} We prove (1). We have $S(E,p)=p+e$. If $p-j\geq0$, then $$S(E\cup J, p-j)\leq (p-j)+e+j=p+e=S(E,p),$$ and clearly $|p-j|=p-j<p$ if $j\neq0$. If $p-j<0$, we prove that the inequality on scores is strict.
We have $$S(E\cup J, p-j)\leq (j-p)+(n-e-j)=n-p-e<e+p=S(E,p),$$ since $S(E,p)=e+p=2\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+2q>s+1$. We prove (2). We have $S(E,p)=p+(n-e)$. If $p-j\geq0$, then $$S(E\setminus J, p-j)\leq (p-j)+(n-e+j)=p+(n-e)=S(E,p),$$ and clearly $|p-j|=p-j<p$ if $j\neq0$. If $p-j<0$, we prove that the inequality on scores is strict. We have $$S(E\setminus J, p-j)\leq (j-p)+(e-j)=e-p<p+n-e=S(E,p),$$ since $e-p<s+1$, as $S(E,p)=p+n-e=2\Bigl\lfloor\frac{s}{2}\Bigr\rfloor+2q>s+1$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{full even}]
\noindent {\bf Case $s$ even.} For any $(E,p)$ write the score $S(E,p)$ as \begin{equation} S(E,p)=s+2q. \end{equation} Note that if $q\leq0$ then $L_{E,p}$ is already in $\mathcal{C}$ (Remark \ref{reformulate}). Moreover, if $q\leq 0$, by Lemma \ref{quotients} $R_{E,p}$ and $Q_{E,p}$ are related to $L_{E,p}$ by quotients which are direct sums of torsion sheaves of the form $\mathcal{O}(-a,*)$ or $\mathcal{O}(*,-a)$, with $0<a\leq |x_T|$. As $|x_T|\leq \frac{S(E,p)}{2}\leq\frac{s}{2}<\frac{s+1}{2}$, such quotients are in $\mathcal{A}$. We prove by induction on $q\geq0$, and for equal $q$, by induction on $|p|$, that $R_{E,p}$, $Q_{E,p}$ with $S(E,p)=s+2q$ are generated by $\mathcal{C}$. By Corollary \ref{next torsion 2}, it follows that all $\mathcal{O}_{\delta_T}(-a,-b)$ are generated by $\mathcal{C}$. Lemma \ref{quotients} then implies that all line bundles $L_{E,p}$ are generated by $\mathcal{C}$. We now prove the inductive statement. For $q\leq0$, we already proved that $R_{E,p}$, $Q_{E,p}$ are generated by $\mathcal{C}$. Assume $q\geq1$. Take a pair $(E,p)$ with score $S(E,p)=s+2q$. Using the $S_2$ symmetry, we may assume $p\geq0$. For any $(E',p')$ with strictly smaller score than $s+2q$, or equal score and strictly smaller $|p|$, we have by induction that $Q_{E',p'}$, $R_{E',p'}$ are generated by $\mathcal{C}$. If $e\leq s+1$, we apply Lemma \ref{koszul} and get a resolution for $Q_{E,p}$. Using Lemma \ref{slopes in K}(1), all terms in the resolution are generated by $\mathcal{C}$ by induction. Hence, $Q_{E,p}$ is generated by $\mathcal{C}$ if $e\leq s+1$. Similarly, using Lemma \ref{koszul}, Lemma \ref{slopes in K}(2) and induction, $R_{E,p}$ is generated by $\mathcal{C}$ if $e\geq s+1$. In particular, both $Q_{E,p}$ and $R_{E,p}$ are generated by $\mathcal{C}$ if $e=s+1$. By Corollary \ref{next torsion 2} and the induction assumption, $\mathcal{O}_{\delta_T}(-a,0)$, $\mathcal{O}_{\delta_T}(0,-a)$ for $0<a\leq\frac{s}{2}+q$ are generated by $\mathcal{C}$. By Lemma \ref{quotients} we have that $L_{E,p}$ is related to each of $Q_{E,p}$, $R_{E,p}$ by quotients which are direct sums of $\mathcal{O}_{\delta_T}(-a,*)$, $\mathcal{O}_{\delta_T}(*,-a)$ with $0<a\leq\frac{S(E,p)}{2}=\frac{s}{2}+q$. Since for any $e\neq s+1$, one of $Q_{E,p}$, $R_{E,p}$ is generated by $\mathcal{C}$, it follows that $L_{E,p}$ is generated by $\mathcal{C}$.
\noindent {\bf Case $s$ odd.} For any $(E,p)$ write the score $S(E,p)$ as \begin{equation} S(E,p)=(s-1)+2q. \end{equation} We prove by induction on $q\geq0$, and for equal $q$, by induction on $|p|$, that the line bundles $R_{E,p}$ and $Q_{E,p}$ with $S(E,p)=(s-1)+2q$ are generated by $\mathcal{C}$. This proves the theorem, as Corollary \ref{next torsion 2} gives that all torsion sheaves supported on the boundary are generated by $\mathcal{C}$. The inductive argument we did for $s$ even goes through verbatim if $q\geq2$ (the assumption is used in Lemma \ref{slopes in K}).
Hence, we only need to prove that $R_{E,p}$, $Q_{E,p}$ are generated by $\mathcal{C}$ for $q=0$ and $q=1$. We may assume w.l.o.g. that $p\geq0$. Assume $q=0$. Fix a pair $(E,p)$ with $S(E,p)=s-1$. Then $(E,p)$ is in the range of Theorem \ref{even} and $L_{E,p}$ is in $\mathcal{C}$. As in the previous case, by Lemma \ref{quotients}, the line bundles $R_{E,p}$, $Q_{E,p}$ are related to $L_{E,p}$ by quotients generated by $\mathcal{A}$. Hence, $R_{E,p}$, $Q_{E,p}$ are generated by $\mathcal{C}$. Assume now $q=1$ and fix a pair $(E,p)$ with $S(E,p)=s+1$.
\begin{claim}\label{first bdry} $\mathcal{O}_{\delta_T}(-\frac{s+1}{2},0)$, $\mathcal{O}_{\delta_T}(0,-\frac{s+1}{2})$ are generated by $\mathcal{C}$.
\end{claim}
\begin{proof} By Corollary \ref{next torsion 2}, it suffices to prove that $R_{E,0}$, $Q_{E,0}$ are generated by $\mathcal{C}$ for some $E$ with $e=|E|=s+1$. Take such an $E$. By Remark \ref{symmetry}, $R_{E,0}$ and $Q_{E,0}$ are exchanged by the action of $S_2$. Hence, by symmetry, it suffices to prove that $R_{E,0}$ is generated by $\mathcal{C}$. Consider the resolution in Lemma \ref{koszul}(K2) for $(E,0)$, with $I=E$. The terms that appear, other than $R_{E,0}$, are $R_{E\setminus J, -j}$, with $J\subseteq E$, $j>0$. For all $j\geq0$, $S(E\setminus J, -j)=s+1$ and all $(E\setminus J, -j)$ are in the range of Theorem \ref{even}. Hence, the $L_{E\setminus J, -j}$ are generated by $\mathcal{C}$. We claim that if $j>0$, the quotients relating $R_{E\setminus J, -j}$ to $L_{E\setminus J, -j}$ are generated by $\mathcal{A}$. By Lemma \ref{quotients} the quotients relating $R_{E\setminus J, -j}$ to $L_{E\setminus J, -j}$ are $$\mathcal{O}_{\delta_T}(-x_T+i,i),\quad 0\leq i<x_T=x_{T,E\setminus J,-j},\quad \text{where}$$ $$x_T=|(E\setminus J)\cap T|-\frac{s+1}{2}\leq |E\setminus J|-\frac{s+1}{2}\leq s-\frac{s+1}{2}=\frac{s-1}{2}.$$ The claim follows. Hence, for $j>0$, $R_{E\setminus J, -j}$ is generated by $\mathcal{C}$, and using the resolution, so is $R_{E,0}$.
\end{proof}
Assume that $e\leq s+1$. Then $(E,p)$ is in the range of Theorem \ref{even} and $L_{E,p}$ is in $\mathcal{C}$. Since $R_{E,p}$, $Q_{E,p}$ are related to $L_{E,p}$ by quotients $\mathcal{O}_{\delta_T}(-a,*)$, $\mathcal{O}_{\delta_T}(*,-a)$ with $0<a\leq\frac{S(E,p)}{2}=\frac{s+1}{2}$, it follows by Claim \ref{first bdry} that $R_{E,p}$, $Q_{E,p}$ are generated by $\mathcal{C}$. Assume now that $e>s+1$. Then $(E,p)$ is not in the range of Theorem \ref{even}. Note that it suffices to prove that $R_{E,p}$ is generated by $\mathcal{C}$, since by Lemma \ref{quotients} $R_{E,p}$, $L_{E,p}$ are related by quotients which are direct sums of $\mathcal{O}_{\delta_T}(-a,*)$ with $0<a\leq\frac{S(E,p)}{2}=\frac{s+1}{2}$ (generated by $\mathcal{C}$ by Claim \ref{first bdry}). To prove that $R_{E,p}$ is generated by $\mathcal{C}$, we do an induction on $e\geq s+1$ (for $(E,p)$ of fixed score $s+1$) by using a resolution as in Lemma \ref{koszul} for $R_{E,p}$.
\end{proof}
\begin{rmk}\label{elaborate2} For $n=2s+2\geq 2$, the exceptional collection on $Z_n$ given in \cite[Theorem 1.15]{CT_partII} consists of: (i) The same torsion sheaves $\mathcal{O}_{\delta_T}(-a,-b)$ as in Theorem \ref{even}.
(ii) The line bundles in the so-called group $1$ (group $1A$ and group $1B$ of that theorem coincide in this case): for all $E\subseteq N$, with $e=|E|$ even, \begin{equation}\label{F0E n even} F_{0,E}=\frac{e}{2}\psi_\infty+\sum_{j\in E} \delta_{j\infty}-\sum_{\frac{e}{2}-|E\cap T|>0} \big(\frac{e}{2}-|E\cap T|\big)\delta_{T\cup\{\infty\}}. \end{equation} The line bundles $F_{0,E}$ are defined in \cite{CT_partII} as $R\pi_*(N_{0,E})$, for certain line bundles $N_{0,E}$ on the universal family over $Z_n$. One checks directly (or see the proof of \cite[Lemma 5.8]{CT_partII}) that $N_{0,E}$ restricts trivially to every component of any fiber of the universal family $\pi:\mathcal{U}\rightarrow Z_n$. Hence, $$N_{0,E}=\pi^*F_{0,E},\quad F_{0,E}=\sigma_u^*N_{0,E},$$ for any marking $u$. In particular, for $u\in\{0,\infty\}$, we obtain formula (\ref{F0E n even}). (iii) The objects in the so-called group $2B$, which in this case are line bundles (corresponding only to the $J=\emptyset$ term in \cite[Notation 11.5]{CT_partII}): $$\mathcal{T}_{l,\{u\}\cup E}:=\frac{e-l-1}{2}\psi_u+\sum_{j\in E}\delta_{ju}-\sum_{\frac{e-l-1}{2}-|E\cap T|>0}\left(\frac{e-l-1}{2}-|E\cap T|\right)\delta_{T\cup\{u\}}$$ where $u\in\{0,\infty\}$, $E\subseteq N$, $e=|E|$, $l\in\mathbb{Z}$, $l\geq0$ such that $|E\cap\{u\}|+l$ is even (i.e., $e+l$ is odd), subject to the condition $$l+\min\{e,n+1-e\}\leq s\quad (\text{group}\quad 2B).$$ The formula generalizing both expressions in (ii) and (iii) is $$\frac{e-p}{2}\psi_u+\sum_{j\in E}\delta_{ju}-\sum_{\frac{e-p}{2}-|E\cap T|>0}\left(\frac{e-p}{2}-|E\cap T|\right)\delta_{T\cup\{u\}}$$ which, when $u=\infty$, is exactly the line bundle $V^\vee_{E,p}$ (the dual of $V_{E,p}$; see Notation \ref{extraLB}). Hence, group $2B$ with $l=p-1$, $u=\infty$ recovers all the $\{V^\vee_{E,p}\}$ when $p>0$. Similarly, group $2B$ with $l=-p-1$, $u=0$ recovers all the $\{V^\vee_{E,p}\}$ when $p<0$. The elements of group $1$ recover all the $\{V^\vee_{E,p}\}$ when $p=0$. An argument similar to the one in this section shows that the collection in \cite[Theorem 1.15]{CT_partII} (the torsion sheaves in (i) and the line bundles $\{V^\vee_{E,p}\}$, for $(E,p)$ as in Theorem \ref{even}) is a full exceptional collection.
\end{rmk}
\section{Pushforward of the exceptional collection on the Losev--Manin space $\overline{\text{LM}}_N$ to $Z_N$}\label{LM}
We refer to \cite{CT_partI} for background on Losev--Manin spaces. Recall that the Losev--Manin moduli space $\overline{\text{LM}}_N$ is the Hassett space with markings $N\cup\{0,\infty\}$ and weights $(1,1,\frac{1}{n},\ldots,\frac{1}{n})$, where $n=|N|$. The space $\overline{\text{LM}}_N$ parametrizes nodal linear chains of $\mathbb{P}^1$'s marked by $N\cup\{0,\infty\}$, with $0$ on the left tail and $\infty$ on the right tail of the chain. Both $\psi_0$ and $\psi_\infty$ induce birational morphisms $\overline{\text{LM}}_N\to\mathbb{P}^{n-1}$ (Kapranov models) which realize $\overline{\text{LM}}_N$ as an iterated blow-up of $\mathbb{P}^{n-1}$ in $n$ points (the standard basis vectors), followed by blowing up the ${n\choose 2}$ proper transforms of the lines connecting pairs of these points, etc. In particular, $\overline{\text{LM}}_N$ is a toric variety of dimension $n-1$.
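For instance (an illustration only), for $|N|=3$ this description realizes $\overline{\text{LM}}_3$ as the blow-up of $\mathbb{P}^2$ at the three coordinate points, the toric surface whose polytope is a hexagon; in general, $\overline{\text{LM}}_N$ is the toric variety associated with the $(n-1)$-dimensional permutohedron.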
Its toric orbits (or rather their closures, the boundary strata in the modular interpretation) are given by partitions $N=N_1\sqcup\ldots\sqcup N_k$, $|N_i|>0$ for all $i$, which correspond to boundary strata $$Z_{N_1,\ldots, N_k}=\delta_{N_1\cup\{0\}}\cap \delta_{N_1\cup N_2\cup\{0\}}\cap\ldots\cap \delta_{N_1\cup \ldots\cup N_{k-1}\cup\{0\}}.$$ The stratum $Z_{N_1,\ldots,N_k}$ parametrizes (degenerations of) linear chains of $\mathbb{P}^1$'s with points marked by, respectively, $N_1\cup\{0\}$, $N_2$, \ldots, $N_{k-1}$, $N_k\cup\{\infty\}$. We can identify $$Z_{N_1,\ldots, N_k}\simeq \overline{\text{LM}}_{N_1}\times\ldots\times\overline{\text{LM}}_{N_k},$$ where the left node of every component is marked by $0$ and the right node by $\infty$. There are forgetful maps $\pi_K:\, \overline{\text{LM}}_N\to \overline{\text{LM}}_{N\setminus K}$, for all $K\subseteq N$, $1\le|K|\le n-1$, given by forgetting the points marked by $K$ and stabilizing.
\begin{defn}[\protect{\emph{cf.} \cite[Definition 1.4]{CT_partI}}] The cuspidal block $D^b_{cusp}(\overline{\text{LM}}_N)$ consists of the objects $E\in D^b(\overline{\text{LM}}_N)$ such that for all $i\in N$ we have $$R{\pi_i}_*E=0.$$
\end{defn}
\begin{prop}[\protect{\emph{cf.} \cite[Proposition 1.8]{CT_partI}}]\label{LM decomp} There is a semi-orthogonal decomposition $$D^b(\overline{\text{LM}}_N)=\langle D^b_{cusp}(\overline{\text{LM}}_N), \ \{\pi_K^*D^b_{cusp}(\overline{\text{LM}}_{N\setminus K})\}_{K\subset N},\ \mathcal{O}\rangle,$$ where the subsets $K$ with $1\le |K|\le n-2$ are ordered by increasing cardinality.
\end{prop}
\begin{defn}[\protect{\emph{cf.} \cite[Definition 1.9]{CT_partI}}] Let $\mathbb{G}_N=\{G_1^\vee,\ldots,G_{n-1}^\vee\}$ be the set of the following line bundles on $\overline{\text{LM}}_N$: $$G_a=a\psi_{0}-(a-1)\sum_{k\in N}\delta_{k0}-(a-2)\sum_{k,l\in N}\delta_{kl0}-\ldots-\sum_{J\subset N, |J|=a-1}\delta_{J\cup\{0\}} $$ for every $a=1,\ldots,n-1$. Let $\hat{\mathbb{G}}$ be the collection of sheaves of the form $$\mathcal{T}=(i_Z)_*\mathcal{L},\quad \mathcal{L}=G_{a_1}^\vee\boxtimes\ldots\boxtimes G_{a_t}^\vee$$ for all \emph{massive} strata $Z=Z_{N_1,\ldots,N_t}$, i.e., such that $|N_i|\ge 2$ for every $i$, and for all $1\leq a_i\leq |N_i|-1$. Here $i_Z:\,Z\hookrightarrow \overline{\text{LM}}_N$ is the inclusion map. If $t=1$ we get the line bundles $\mathbb{G}_N$, and for $t\geq2$ these sheaves are torsion sheaves.
\end{defn}
\begin{thm}[\protect{\emph{cf.} \cite[Theorem 1.10]{CT_partI}}]\label{LM main} $\hat{\mathbb{G}}$ is a full exceptional collection in $D^b_{cusp}(\overline{\text{LM}}_N)$, which is invariant under the group $S_2\times S_N$.
\end{thm}
Clearly, by Theorem \ref{LM main}, Proposition \ref{LM decomp} and adjointness, we have the following
\begin{cor}\label{S is enough} If $E\in D^b(Z_N)$ is such that $R\text{\rm Hom}(E, F)=0$ for all $F$ of the form $Rp_*({\pi_K}^*\hat{\mathbb{G}})$, for all $K\subseteq N$, including $K=\emptyset$, then $E=0$.
\end{cor}
We now proceed to calculate the objects in the collection $Rp_*({\pi_K}^*\hat{\mathbb{G}})$.
\begin{prop}\label{pull-backs push} Let $p:\overline{\text{LM}}_N\rightarrow Z_N$ be the reduction map.
\begin{itemize}
\item[(1) ] For all $I\subseteq N$ with $0\leq |I|\leq n-2$ and all $1\leq a\leq n-|I|-1$, we have
$$Rp_*\big(\pi_I^*G_a^\vee\big)=-a\psi_0-\sum_{j\in N\setminus I}\delta_{j0}\quad \text{if $n$ is odd},$$
$$Rp_*\big(\pi_I^*G_a^\vee\big)=-a\psi_0-\sum_{j\in N\setminus I}\delta_{j0}+\sum_{J\subseteq N, |J|=\frac{n}{2}, |J\cap (N\setminus I)|< a} \big(a- |J\cap (N\setminus I)|\big) \delta_{J\cup\{0\}},$$
if $n$ is even. Moreover, $Rp_*\mathcal{O}=\mathcal{O}$.
\item[(2) ] If $n$ is odd, all the torsion sheaves and their pull-backs, i.e., all sheaves $\mathcal{T}$ in the collection $\hat{\mathbb{G}}$ not considered in (1), have $Rp_*(\mathcal{T})=0$.
\item[(3) ] If $n$ is even, we have $Rp_*\big(G^\vee_{a_1}\boxtimes\ldots\boxtimes G^\vee_{a_t}\big)=0$, except for sheaves $G^\vee_a\boxtimes G^\vee_b$ with support $Z=\overline{\text{LM}}_{N_1}\times \overline{\text{LM}}_{N_2}$, where $|N_1|=|N_2|=\frac{n}{2}$, when
$$Rp_*\big(G^\vee_a\boxtimes G^\vee_b\big)=\mathcal{O}(-a)\boxtimes\mathcal{O}(-b),$$
where we use the identification $p(Z)=\mathbb{P}^{\frac{n}{2}-1}\times \mathbb{P}^{\frac{n}{2}-1}$.
\item[(4) ] If $n$ is even, $I\neq\emptyset$ and $\mathcal{T}\in \hat{\mathbb{G}}_{N\setminus I}$ is a torsion sheaf, then either
$$Rp_*\big(\pi_I^*\mathcal{T}\big)=0,$$
or $Rp_*\big(\pi_I^*\mathcal{T}\big)$ is generated by the sheaves $\mathcal{O}(-a)\boxtimes\mathcal{O}(-b)$ supported on the images $\mathbb{P}^{\frac{n}{2}-1}\times \mathbb{P}^{\frac{n}{2}-1}$ of strata $\overline{\text{LM}}_{N_1}\times \overline{\text{LM}}_{N_2}$ with $|N_1|=|N_2|=\frac{n}{2}$ and with $0<a,b\leq \frac{n}{2}-1$.
\end{itemize}
\end{prop}
We use here that if $n=2s+2$ is even, the restriction of the map $p$ to a stratum of the form $\overline{\text{LM}}_{s+1}\times \overline{\text{LM}}_{s+1}$ is a product of reduction maps of type $\overline{\text{LM}}_{s+1}\rightarrow\overline{\text{\rm M}}_{0,\mathbf{a}}$, where $\mathbf{a}=(1,\frac{1}{2}+\eta,\frac{1}{n},\ldots, \frac{1}{n})$ (with $\frac{1}{n}$ appearing $s+1$ times). By \cite[Remark 4.6]{Ha}, we have $\overline{\text{\rm M}}_{0,\mathbf{a}}=\mathbb{P}^s$ (the Kapranov model of $\overline{\text{LM}}_{s+1}$ with respect to the first marking).
\begin{proof}[Proof of Proposition \ref{pull-backs push}]
Throughout, we denote $s:=\left\lfloor\frac{n-1}{2}\right\rfloor$. We first prove (1). As $p$ is a birational morphism between smooth projective varieties, we have $Rp_*\mathcal{O}=\mathcal{O}$. We write $\pi_I^*G_a^\vee$ and $p^*\big(-a\psi_0-\sum_{j\in N\setminus I} \delta_{j0}\big)$ in the Kapranov model with respect to the $0$ marking. We denote by
$$H:=\psi_0,\quad E_J:=\delta_{J\cup\{0\}}\quad (J\subseteq N,\quad |J|\leq n-2)$$
the hyperplane class and the exceptional divisors, respectively.
We have:
$$\pi_I^*G_a^\vee=-aH+\sum_{J\subseteq N, |J|\geq1, |J\cap (N\setminus I)|< a} \big(a- |J\cap (N\setminus I)|\big) E_J,$$
$$p^*\big(-a\psi_0-\sum_{j\in N\setminus I} \delta_{j0}\big)=-aH+\sum_{J\subseteq N, 1\leq|J|\leq s} \big(a- |J\cap (N\setminus I)|\big) E_J,$$
where the last equality follows by Corollary \ref{pull by p}. It follows that
$$\pi_I^*G_a^\vee=p^*\big(-a\psi_0-\sum_{j\in N\setminus I} \delta_{j0}\big)+\Sigma^1+\Sigma^2,$$
where $\Sigma^1$ consists of all the terms that appear in $\pi_I^*G_a^\vee$ but do not appear in $p^*\big(-a\psi_0-\sum_{j\in N\setminus I} \delta_{j0}\big)$, and $\Sigma^2$ consists of the terms that appear in $p^*\big(-a\psi_0-\sum_{j\in N\setminus I} \delta_{j0}\big)$ but do not appear in $\pi_I^*G_a^\vee$, taken with a negative sign:
$$\Sigma^1=\sum_{J\subseteq N, |J|\geq1, |J\cap (N\setminus I)|< a, |J|>s} \big(a- |J\cap (N\setminus I)|\big) E_J,$$
$$\Sigma^2=\sum_{J\subseteq N, |J|\geq1, |J\cap (N\setminus I)|> a, |J|\leq s} \big(|J\cap (N\setminus I)|-a\big) E_J.$$
When $|J|\leq s$, the codimension of $p(E_J)$ in $Z_N$ is $|J|$. For the terms in the sum $\Sigma^2$, the coefficient of $E_J$ satisfies
$$|J\cap (N\setminus I)|-a\leq |J|-1=\text{codim}(p(E_J))-1.$$
Hence, one may apply Lemma \ref{push exceptional} successively to the terms of the sum $\Sigma^2$. We use here that the map $p$ can be decomposed into a sequence of blow-ups, with exceptional divisors $\delta_{J\cup\{0\}}$, $\delta_{J\cup\{\infty\}}$, with $1\leq |J|\leq s$, in order of increasing $|J|$. Note that the divisors $E_J$ with fixed $|J|$ are disjoint. Similarly, when $|N\setminus J|\leq s$, the codimension of $p(E_J)$ in $Z_N$ is $|N\setminus J|$. For the terms in the sum $\Sigma^1$, the coefficient of $E_J$ with $|N\setminus J|\leq s$ satisfies
$$a-|J\cap (N\setminus I)|\leq n-1-|I|-|J\cap (N\setminus I)|\leq n-1-|J|=\text{codim}(p(E_J))-1,$$
so one may again apply Lemma \ref{push exceptional} to the terms of the sum $\Sigma^1$ which satisfy $|N\setminus J|\leq s$. When $n=2s+1$, the inequality $|N\setminus J|\leq s$ is equivalent to $|J|>s$. However, when $n=2s+2$, the inequality $|N\setminus J|\leq s$ is equivalent to $|J|>s+1$. Hence, in the case when $n=2s+2$, one is left with the terms in the sum $\Sigma^1$ that have $|J|=s+1$. This proves (1).

Now we turn to the torsion objects, i.e., objects of the form $\pi_I^*(\mathcal{T})$, where
$$\mathcal{T}={i_Z}_*\big(G^\vee_{a_1}\boxtimes\ldots\boxtimes G^\vee_{a_t}\big),\quad Z=\overline{\text{LM}}_{K_1}\times\ldots\times\overline{\text{LM}}_{K_t},$$
with $I\subseteq N$, $N\setminus I=K_1\sqcup\ldots\sqcup K_t$ and $|K_j|\geq2$ for all $j$. Consider first the case when $I=\emptyset$. If $|K_1|\leq s$, the map $Z\rightarrow p(Z)$ is a product of reduction maps, the first of which is the constant map $\overline{\text{LM}}_{K_1}\rightarrow pt$. It follows in this case that $Rp_*(\mathcal{T})=0$, since $R\Gamma(G^\vee_{a_i})=0$. The same argument applies when $|K_t|\leq s$. It follows that $Rp_*(\mathcal{T})=0$, except possibly in the case when $n=2s+2$, $t=2$ and $|K_1|=|K_2|=s+1$.
In this case, the map $Z\rightarrow p(Z)$ is a product of Kapranov maps $\overline{\text{LM}}_{s+1}\times \overline{\text{LM}}_{s+1}\rightarrow \mathbb{P}^s\times\mathbb{P}^s$, and it follows (for example by \cite[Lemma 5.7]{CT_partI}) that in this case $Rp_*(\mathcal{T})=\mathcal{O}_{\mathbb{P}^s}(-a)\boxtimes\mathcal{O}_{\mathbb{P}^s}(-b)$. This proves (3) and the case $I=\emptyset$ of (2).

Consider now the case when $I\neq\emptyset$. To compute $Rp_*(\pi_I^*\mathcal{T})$, consider the boundary divisors $D_1, D_2, \ldots, D_{t-1}$ whose intersection is $Z$, denoting
$$D_i=\overline{\text{LM}}_{K_1\sqcup\ldots\sqcup K_i}\times \overline{\text{LM}}_{K_{i+1}\sqcup\ldots\sqcup K_t}\quad (i=1,\ldots, t-1).$$
For the remaining part of the proof, we denote for simplicity
$$K'=K_1,\quad K''=K_2\sqcup\ldots\sqcup K_t$$
and consider the canonical inclusions
$$i_1: Z\hookrightarrow D_1=\overline{\text{LM}}_{K'}\times\overline{\text{LM}}_{K''},\quad i_{D_1}: D_1\rightarrow \overline{\text{LM}}_{N\setminus I}.$$
We resolve ${i_1}_*\mathcal{O}_Z$ using the Koszul complex
$$\ldots \rightarrow\oplus_{2\leq i<j\leq t-1}\mathcal{O}(-D_i-D_j)_{|D_1}\rightarrow \oplus_{2\leq i\leq t-1}\mathcal{O}(-D_i)_{|D_1}\rightarrow\mathcal{O}_{D_1}\rightarrow {i_1}_*\mathcal{O}_Z\rightarrow0.$$
By our choice of $D_1$, for all $2\le i \le t-1$ we have $\mathcal{O}(D_i)_{|D_1}=\mathcal{O}\boxtimes\mathcal{O}(D'_i)$, for the corresponding boundary divisor on $\overline{\text{LM}}_{K''}$:
$$D'_i=\overline{\text{LM}}_{K_2\sqcup\ldots\sqcup K_i}\times \overline{\text{LM}}_{K_{i+1}\sqcup\ldots\sqcup K_t}.$$
By Lemma \ref{lift}, we may choose a line bundle $\mathcal{M}$ on $\overline{\text{LM}}_{K''}$ such that the restriction of $\mathcal{M}$ to the massive stratum $\overline{\text{LM}}_{K_2}\times\ldots\times\overline{\text{LM}}_{K_t}$ is $G^\vee_{a_2}\boxtimes\ldots\boxtimes G^\vee_{a_t}$ and $\mathcal{M}\otimes\mathcal{O}(-D'_{i_1}-\ldots-D'_{i_k})$ is acyclic for any $2\le i_1<\ldots<i_k\le t-1$. Consider the line bundle $\mathcal{L}=G^\vee_{a_1}\boxtimes\mathcal{M}$ on $D_1$. Then $\mathcal{L}_{|Z}=G^\vee_{a_1}\boxtimes\ldots\boxtimes G^\vee_{a_t}$. We now: (1) tensor the Koszul sequence with $\mathcal{L}$, (2) apply $R{i_{D_1}}_*(-)$, and (3) apply $L\pi_I^*(-)$. Since $\pi_I$ is flat, we obtain a resolution of $\pi_I^*\mathcal{T}$ by sheaves whose support is contained in $\pi_I^{-1}(D_1)$. To prove (4) and the remaining part of (2), it suffices to show that for all $2\le i_1<\ldots<i_k\le t-1$,
$$Rp_*\pi_I^*R{i_{D_1}}_*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)$$
is $0$ when $n$ is odd, or generated by the sheaves $\mathcal{O}(-a)\boxtimes\mathcal{O}(-b)$ ($a,b>0$) supported on the divisors $\mathbb{P}^{\frac{n}{2}-1}\times \mathbb{P}^{\frac{n}{2}-1}$ as in (4), when $n$ is even. Here we need the same statement also for $Rp_*\pi_I^*R{i_{D_1}}_*\big(\mathcal{L}\big)$ (i.e., $k=0$).
Note that
$$\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}=G^\vee_{a_1}\boxtimes\big(\mathcal{M}\otimes\mathcal{O}(-D'_{i_1}-\ldots-D'_{i_k})\big).$$
There is a commutative diagram
\begin{equation*}
\begin{CD}
\pi_I^{-1}(D_1) @>{i_{\pi_I^{-1}(D_1)}}>> \overline{\text{LM}}_N @>p>> Z_N\\
@V{\rho_I}VV @V{\pi_I}VV\\
D_1 @>{i_{D_1}}>> \overline{\text{LM}}_{N\setminus I}
\end{CD}
\end{equation*}
where $i_{\pi_I^{-1}(D_1)}$ is the canonical inclusion map and $\rho_I$ is the restriction of $\pi_I$ to $\pi_I^{-1}(D_1)$. Let $q=p \circ i_{\pi_I^{-1}(D_1)}$. As $\pi_I$ is flat, we have
$$Rp_*\pi_I^*R{i_{D_1}}_*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)= Rq_*\rho_I^*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big).$$
The preimage $\pi_I^{-1}(D_1)$ has several components $B_{I_1,I_2}$:
$$B_{I_1,I_2}=\overline{\text{LM}}_{K'\cup I_1}\times \overline{\text{LM}}_{K''\cup I_2}\quad \text{for every partition}\quad I=I_1\sqcup I_2.$$
We order the set $\{B_{I_1,I_2}\}$ as follows: $B_{I_1,I_2}$ must come before $B_{J_1,J_2}$ if $|I_1|>|J_1|$, and in an arbitrary order if $|I_1|=|J_1|$. Hence, if $B_{I_1,I_2}$ comes before $B_{J_1,J_2}$, then $B_{I_1,I_2}\cap B_{J_1,J_2}\neq\emptyset$ if and only if $J_1\subsetneq I_1$, in which case the intersection takes the form
$$B_{I_1,I_2}\cap B_{J_1,J_2}=\overline{\text{LM}}_{K'\cup J_1}\times \overline{\text{LM}}_{I_1\setminus J_1}\times \overline{\text{LM}}_{K''\cup I_2}.$$
For simplicity, we rename the resulting ordered sequence as $B_1, B_2, \ldots, B_r$. A consequence of the ordering is that $B_r$ is the component $B_{I_1,I_2}$ with $I_1=\emptyset$, $I_2=I$, and if $1\leq i\leq r-1$ and $B_i$ is $B_{I_1,I_2}=\overline{\text{LM}}_{K'\cup I_1}\times \overline{\text{LM}}_{K''\cup I_2}$, then
$$\mathcal{O}_{B_{i}}(B_{i+1}+\ldots+B_r)=\mathcal{O}(\sum_{J\subsetneq I_1}\delta_{J\cup K'\cup\{0\}})\boxtimes\mathcal{O}= \mathcal{O}(\sum_{\emptyset\neq S\subseteq I_1}\delta_{S\cup\{x\}})\boxtimes\mathcal{O},$$
where the first sum runs over all $J\subsetneq I_1$ (including $J=\emptyset$), while for the second sum we use the identification
$$\delta_{J\cup K'\cup\{0\}}=\delta_{(I_1\setminus J)\cup\{x\}}=\overline{\text{LM}}_{J\cup K'}\times\overline{\text{LM}}_{I_1\setminus J},$$
as divisors in $\overline{\text{LM}}_{K'\cup I_1}$ (with $x$ being the attaching point). Consider now the following exact sequences resolving $\mathcal{O}_{\pi_I^{-1}(D_1)}=\mathcal{O}_{B_1\cup\ldots\cup B_r}$:
$$0\rightarrow \mathcal{O}_{B_1\cup\ldots\cup B_{r-1}}(-B_r)\rightarrow\mathcal{O}_{B_1\cup\ldots\cup B_r}\rightarrow\mathcal{O}_{B_r}\rightarrow 0,$$
$$0\rightarrow \mathcal{O}_{B_1\cup\ldots\cup B_{r-2}}(-B_{r-1}-B_r)\rightarrow\mathcal{O}_{B_1\cup\ldots\cup B_{r-1}}(-B_r)\rightarrow\mathcal{O}_{B_{r-1}}(-B_r)\rightarrow 0,$$
$$\vdots$$
$$0\rightarrow \mathcal{O}_{B_1}(-B_2-\ldots-B_r)\rightarrow\mathcal{O}_{B_1\cup B_2}(-B_3-\ldots-B_r)\rightarrow\mathcal{O}_{B_2}(-B_3-\ldots-B_r)\rightarrow 0.$$
We tensor all the above exact sequences with $\rho_I^*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)$ and then apply $Rq_*(-)$.
As the restriction of the map $\rho_I$ to a component $B_i$ of the form $B_{I_1,I_2}$, for some partition $I=I_1\sqcup I_2$, is the product of forgetful maps $\pi_{I_1}\times\pi_{I_2}$, it follows that, if $i\neq r$, then
\begin{align*}
\mathcal{O}_{B_{i}}(-B_{i+1}-\ldots-B_r)&\otimes\rho_I^*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)=\\
&\big(\pi_{I_1}^*G^\vee_{a_1}\otimes\mathcal{O}(-\sum_{\emptyset\neq S\subseteq I_1}\delta_{S\cup\{x\}})\big)\boxtimes \pi_{I_2}^*\big(\mathcal{M}\otimes\mathcal{O}(-D'_{i_1}-\ldots-D'_{i_k})\big),
\end{align*}
while
$$\mathcal{O}_{B_{r}}\otimes\rho_I^*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)=G^\vee_{a_1}\boxtimes \pi_I^*\big(\mathcal{M}\otimes\mathcal{O}(-D'_{i_1}-\ldots-D'_{i_k})\big).$$
(Recall that $B_r$ corresponds to the partition $I_1=\emptyset$, $I_2=I$.) We claim that both components of all the above sheaves are acyclic. To prove the claim, recall that $\mathcal{M}\otimes\mathcal{O}(-D'_{i_1}-\ldots-D'_{i_k})$ is acyclic by the choice of $\mathcal{M}$. We are left to prove that $\pi_{I_1}^*(G^\vee_{a_1})\otimes\mathcal{O}(-\sum_{\emptyset\neq S\subseteq I_1}\delta_{S\cup\{x\}})$ is acyclic when $I_1\neq\emptyset$. Since we may rewrite the line bundle $G^\vee_{a_1}$ using the $x$ marking, we are done by the following:
\begin{claim}
Consider the forgetful map $\pi_I: \overline{\text{LM}}_{N\cup I}\rightarrow \overline{\text{LM}}_N$ for some subset $I\neq\emptyset$. For all $1\leq b\leq |N|-1$, the line bundle $\pi_{I}^*(G^\vee_{b})\otimes\mathcal{O}(-\sum_{\emptyset\neq S\subseteq I}\delta_{S\cup\{0\}})$ is acyclic.
\end{claim}
\begin{proof}
Using the Kapranov model with respect to the $0$ marking, we have
$$\pi_{I}^*(G^\vee_{b})=-bH+\sum_{J\subseteq N\cup I, |J\cap N|< b}(b-|J\cap N|) E_J,\quad \mathcal{O}\left(-\sum_{\emptyset\neq S\subseteq I}\delta_{S\cup\{0\}}\right)=-\sum_{\emptyset\neq S\subseteq I}E_S.$$
As $b-|J\cap N|-1\geq0$, the result follows by Lemma \ref{easy acyclic}.
\end{proof}
Recall that the map $p$ either contracts $B_{I_1,I_2}=\overline{\text{LM}}_{K'\cup I_1}\times \overline{\text{LM}}_{K''\cup I_2}$, by mapping $\overline{\text{LM}}_{K'\cup I_1}$ to a point if $|K'\cup I_1|<\frac{n}{2}$, or by mapping $\overline{\text{LM}}_{K''\cup I_2}$ to a point if $|K''\cup I_2|<\frac{n}{2}$, or we have $|K'\cup I_1|=|K''\cup I_2|=\frac{n}{2}$ and $p(B_{I_1,I_2})$ is a divisor in $Z_N$ which is isomorphic to $\mathbb{P}^{\frac{n}{2}-1}\times \mathbb{P}^{\frac{n}{2}-1}$. Hence,
$$Rq_*\big(\mathcal{O}_{B_{i}}(-B_{i+1}-\ldots-B_r)\otimes\rho_I^*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)\big),$$
$$Rq_*\big(\mathcal{O}_{B_{r}}\otimes\rho_I^*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)\big),$$
are either $0$ or supported on the divisors $\mathbb{P}^{\frac{n}{2}-1}\times \mathbb{P}^{\frac{n}{2}-1}$ as above (in particular, $n$ is even). In the latter case, writing $n=2s+2$, as both components of the above sheaves are acyclic, such objects are generated by $\mathcal{O}(-a)\boxtimes\mathcal{O}(-b)$ for $0<a,b\leq s$.
We use here that if $\mathcal{A}$ is an object in $D^b(\overline{\text{LM}}_{s+1})$ with $R\Gamma(\mathcal{A})=0$ and $f:\overline{\text{LM}}_{s+1}\rightarrow\mathbb{P}^s$ is a Kapranov map, then $Rf_*\mathcal{A}$ has the same property, and therefore it is generated by $\mathcal{O}(-a)$, for $0<a\leq s$. Using the above exact sequences,
$$Rq_*\rho_I^*\big(\mathcal{L}\otimes \mathcal{O}(-D_{i_1}-\ldots-D_{i_k})_{|D_1}\big)$$
is either $0$ or, when $n$ is even, generated by $\mathcal{O}(-a)\boxtimes\mathcal{O}(-b)$ ($0<a,b\leq s$) on $\mathbb{P}^s\times \mathbb{P}^s$. Proposition \ref{pull-backs push} now follows.
\end{proof}
\begin{lemma}[\emph{cf.} \protect{\cite[Lemma 4.6]{CT_partI}}]\label{easy acyclic}
Consider the divisor $D=-dH+\sum m_IE_I$ on $\overline{\text{LM}}_N$ written in some Kapranov model. The divisor $D$ is acyclic if
$$1\leq d\leq n-3, \quad 0\leq m_I\leq n-3-|I|.$$
\end{lemma}
The following lemma is well known:
\begin{lemma}\label{push exceptional}
Let $p: X\rightarrow Y$ be the blow-up of a smooth projective variety $Y$ along a smooth subvariety $Z$ of codimension $r+1$. Let $E$ be the exceptional divisor. Then for all $1\leq i\leq r$ we have $Rp_*\mathcal{O}_X(iE)=\mathcal{O}_Y$.
\end{lemma}
\begin{lemma}\label{lift}
Let $Z=\overline{\text{LM}}_{N_1}\times\ldots\times\overline{\text{LM}}_{N_t}$ be a massive stratum in $\overline{\text{LM}}_N$ and let $D_1,\ldots, D_{t-1}$ be the boundary divisors whose intersection is $Z$. Let
$$\mathcal{T}=\mathcal{T}_1\boxtimes\ldots\boxtimes\mathcal{T}_t$$
be a sheaf supported on $Z$, with either $\mathcal{T}_i=\mathcal{O}$ or $\mathcal{T}_i=G^\vee_{a_i}$, for some $1\leq a_i< |N_i|$, and not all $\mathcal{T}_i=\mathcal{O}$. Then there exists a line bundle $\mathcal{L}$ on $\overline{\text{LM}}_N$ such that:
\begin{itemize}
\item[(a) ] $\mathcal{L}_{|Z}=\mathcal{T}$;
\item[(b) ] $\mathcal{L}$ is acyclic;
\item[(c) ] For all $1\le i_1<\ldots<i_k\le t-1$, the restriction $\mathcal{L}_{|D_{i_1}\cap\ldots\cap D_{i_k}}$ is acyclic.
\end{itemize}
In addition, $\mathcal{L}\otimes\mathcal{O}(-D_{i_1}-\ldots-D_{i_k})$ is acyclic for all $1\le i_1<\ldots<i_k\le t-1$.
\end{lemma}
\begin{proof}
The proof is by induction on $t\geq1$. The statement is trivially true when $t=1$, i.e., when $Z=\overline{\text{LM}}_N$ (as $\mathcal{L}=\mathcal{T}$ and there are no boundary divisors to be considered). In addition, if all but one of the $\mathcal{T}_i$'s are trivial, say $\mathcal{T}_i=G^\vee_{a_i}$, we are done by \cite[Lemma 4.3(3)]{CT_partI}, as we can take
$$\mathcal{L}=G^\vee_{a_i+|N_1|+\ldots+|N_{i-1}|}.$$
Assume now $t\geq2$ and that at least two of the $\mathcal{T}_i$'s are non-trivial. Consider $\pi_{N_1}: \overline{\text{LM}}_N\rightarrow\overline{\text{LM}}_{N\setminus N_1}$ and let $Z'=\pi_{N_1}(Z)$. Then $Z'$ can be identified with $\overline{\text{LM}}_{N_2}\times\ldots\times\overline{\text{LM}}_{N_t}$ and the map $\pi_{N_1}: Z\rightarrow Z'$ is the second projection. Let $\mathcal{T}'=\mathcal{T}_2\boxtimes\ldots\boxtimes\mathcal{T}_t$. By induction, there is an acyclic line bundle $\mathcal{L}'$ on $\overline{\text{LM}}_{N\setminus N_1}$ such that $\mathcal{L}'_{|Z'}=\mathcal{T}'$ and whose restriction to every stratum containing $Z'$ is also acyclic. If $\mathcal{T}_1=\mathcal{O}$, we let $\mathcal{L}=\pi_{N_1}^*\mathcal{L}'$ and clearly all of the properties are satisfied.
If $\mathcal{T}_1=G^\vee_a$, we define $\mathcal{L}=G^\vee_a\otimes\pi_{N_1}^*\mathcal{L}'$. Clearly, $\mathcal{L}_{|Z}=\mathcal{T}$. By the projection formula, $R{\pi_{N_1}}_*(\mathcal{L})=\mathcal{L}'\otimes R{\pi_{N_1}}_*(G^\vee_a)$. As $R{\pi_i}_*(G^\vee_a)=0$ for all $i$, it follows that $R{\pi_{N_1}}_*(\mathcal{L})=0$, i.e., $\mathcal{L}$ is acyclic. The same argument applies to show that the restriction of $\mathcal{L}$ to a stratum $W$ containing $Z$ is acyclic. Consider such a stratum:
$$W=\overline{\text{LM}}_{M_1}\times\ldots\times\overline{\text{LM}}_{M_s},$$
and let $W'=\overline{\text{LM}}_{M_2}\times\ldots\times\overline{\text{LM}}_{M_s}$, considered as a stratum in $\overline{\text{LM}}_{N\setminus M_1}$. If $M_1=N_1$, the restriction $\mathcal{L}_{|W}$ equals $G^\vee_a\boxtimes(\mathcal{L}'_{|W'})$ and is clearly acyclic. If $M_1\neq N_1$, then $M_1=N_1\sqcup\ldots\sqcup N_i$, with $i\geq2$, and $\pi_{N_1}(W)$ is the stratum $\overline{\text{LM}}_{M_1\setminus N_1}\times W'$ in $\overline{\text{LM}}_{N\setminus N_1}$. The restriction of $\mathcal{L}'$ to this stratum has the form $\mathcal{L}'_1\boxtimes\mathcal{L}'_2$. Then $\mathcal{L}_{|W}=(G^\vee_a\otimes{\pi'}^*_{N_1}{\mathcal{L}'_1})\boxtimes\mathcal{L}'_2$, where $\pi'_{N_1}: \overline{\text{LM}}_{M_1}\rightarrow\overline{\text{LM}}_{M_1\setminus N_1}$ is the forgetful map. Again, by the projection formula, $\mathcal{L}_{|W}$ is acyclic.

We now prove the last assertion in the lemma. As $\mathcal{L}$ and $\mathcal{L}_{|D_i}$ are both acyclic, $\mathcal{L}(-D_i)$ is acyclic (the case $k=1$). The statement follows by induction on $k$, using the Koszul resolution for the intersection $\cap_{j\in I} D_j$, $I=\{i_1,\ldots, i_k\}$:
$$\ldots \rightarrow\oplus_{l<j,\, l,j\in I}\mathcal{O}(-D_l-D_j)\rightarrow \oplus_{j\in I}\mathcal{O}(-D_j)\rightarrow\mathcal{O}\rightarrow\mathcal{O}_{D_{i_1}\cap\ldots\cap D_{i_k}}\rightarrow0.$$
\end{proof}
Proposition \ref{pull-backs push} and Lemma \ref{dictionary} imply the following:
\begin{cor}\label{rewrite}
Assume $n=|N|$ is odd. Let $p:\overline{\text{LM}}_N\rightarrow Z_N$ be the reduction map. For all $I\subseteq N$ with $0\leq |I|\leq n-2$ and all $1\leq a\leq n-|I|-1$, we have
$$Rp_*\big(\pi_I^*G_a^\vee\big)=\mathcal{O}(-I^c)\otimes z^{2a-|I^c|},\quad I^c=N\setminus I.$$
Alternatively, this is the collection of $P\mathbb{G}_m$-linearized line bundles
$$\mathcal{O}(-E)\otimes z^p,\quad 0\leq |p|\leq e-2,\quad 2\leq e\leq n\quad (e=|E|,\quad E\subseteq N).$$
Moreover, $Rp_*\mathcal{O}=\mathcal{O}$ and $Rp_*E=0$ for all other objects $E$ in the collection $\hat{\mathbb{G}}$.
\end{cor}
Proposition \ref{pull-backs push} and Lemma \ref{dictionary2} imply the following:
\begin{cor}\label{rewrite2}
Assume $|N|=2s+2$ is even. Let $p:\overline{\text{LM}}_N\rightarrow Z_N$ be the reduction map. For all $E\subseteq N$, $e=|E|\geq2$ and all $1\leq a\leq e-1$,
$$Rp_*\big(\pi_{N\setminus E}^*G_a^\vee\big)=\mathcal{O}(-E)\big(\sum \left|a-|E\cap T^c|\right|E_T\big)\otimes z^{2a-e},$$
where $\left| a-|E\cap T^c|\right|$ denotes the absolute value of $a-|E\cap T^c|$. Moreover, $Rp_*\mathcal{O}=\mathcal{O}$.
For all $G^\vee_a\boxtimes G^\vee_b$ supported on strata $\overline{\text{LM}}_{s+1}\times\overline{\text{LM}}_{s+1}$, we have
$$Rp_*\big(G^\vee_a\boxtimes G^\vee_b\big)=\mathcal{O}(-a)\boxtimes\mathcal{O}(-b)\quad (0<a, b\leq s).$$
All other pushforwards are either $0$ or are generated by the above torsion sheaves.
\end{cor}
When $n=4$, the map $p:\overline{\text{LM}}_N\rightarrow Z_N$ is an isomorphism. In particular, the objects in $Rp_*\pi_I^*\hat{\mathbb{G}}$ form a full exceptional collection. However, it is straightforward to see that this collection is different from the one in Theorem \ref{even}.
\begin{thebibliography}{BFK19}
\bibitem[BFK19]{BFK} M. Ballard, D. Favero, and L. Katzarkov, \emph{Variation of geometric invariant theory quotients and derived categories}, J. Reine Angew. Math. {\bf 746} (2019), 235--303.
\bibitem[BM13]{BergstromMinabe} J. Bergstr\"om and S. Minabe, \emph{On the cohomology of moduli spaces of (weighted) stable rational curves}, Math. Z. {\bf 275} (2013), no. 3-4, 1095--1108.
\bibitem[BM14]{BergstromMinabeLM} J. Bergstr\"om and S. Minabe, \emph{On the cohomology of the Losev--Manin moduli space}, Manuscripta Math. {\bf 144} (2014), no. 1-2, 241--252.
\bibitem[CT12]{CT_rigid} A.-M. Castravet and J. Tevelev, \emph{Rigid curves on $\overline{\text{\rm M}}_{0,n}$ and arithmetic breaks}. In: Compact moduli spaces and vector bundles, pp. 19--67, Contemp. Math., vol. 564, Amer. Math. Soc., Providence, RI, 2012.
\bibitem[CT13]{CT_Crelle} A.-M. Castravet and J. Tevelev, \emph{Hypertrees, projections, and moduli of stable rational curves}, J. Reine Angew. Math. {\bf 675} (2013), 121--180.
\bibitem[CT15]{CT_Duke} A.-M. Castravet and J. Tevelev, \emph{$\overline{\text{\rm M}}_{0,n}$ is not a Mori dream space}, Duke Math. J. {\bf 164} (2015), no. 8, 1641--1667.
\bibitem[CT20a]{CT_partI} A.-M. Castravet and J. Tevelev, \emph{Derived category of moduli of pointed curves. I}, Algebr. Geom. {\bf 7} (2020), no. 6, 722--757.
\bibitem[CT20b]{CT_partII} A.-M. Castravet and J. Tevelev, \emph{Derived category of moduli of pointed curves. II}, preprint \arXiv{2002.02889} (2020).
\bibitem[Dol03]{Dolgachev} I. Dolgachev, \emph{Lectures on invariant theory}, London Mathematical Society Lecture Note Series, vol. 296, Cambridge University Press, Cambridge, 2003.
\bibitem[HL15]{DHL} D. Halpern-Leistner, \emph{The derived category of a GIT quotient}, J. Amer. Math. Soc. {\bf 28} (2015), no. 3, 871--912.
\bibitem[Has03]{Ha} B. Hassett, \emph{Moduli spaces of weighted pointed stable curves}, Adv. Math. {\bf 173} (2003), no. 2, 316--352.
\bibitem[Huy06]{Huy} D. Huybrechts, \emph{Fourier--Mukai transforms in algebraic geometry}, Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, Oxford, 2006.
\bibitem[KL09]{KeelTevelev} S. Keel and J. Tevelev, \emph{Equations for $\overline{M}_{0,n}$}, Internat. J. Math. {\bf 20} (2009), no. 9, 1159--1184.
\bibitem[LM00]{LM} A. Losev and Y. Manin, \emph{New moduli spaces of pointed curves and pencils of flat connections}, Michigan Math. J. {\bf 48} (2000), 443--472.
\bibitem[MS13]{ManinSmirnov1} Yu. I. Manin and M. N. Smirnov, \emph{On the derived category of $\overline{M}_{0,n}$}, Izv. Ross. Akad. Nauk Ser. Mat.
{\bf 77} (2013), no. 3, 93--108.
\bibitem[MS14]{ManinSmirnov2} Yu. I. Manin and M. N. Smirnov, \emph{Towards motivic quantum cohomology of $\overline{M}_{0,S}$}, Proc. Edinb. Math. Soc. (2) {\bf 57} (2014), no. 1, 201--230.
\bibitem[MP97]{MP} A. S. Merkurjev and I. A. Panin, \emph{$K$-theory of algebraic tori and toric varieties}, $K$-Theory {\bf 12} (1997), no. 2, 101--143.
\bibitem[Smi13]{Smirnov Thesis} M. N. Smirnov, \emph{Gromov--Witten correspondences, derived categories, and Frobenius manifolds}, Ph.D. thesis, University of Bonn, 2013. Available at \href{https://bonndoc.ulb.uni-bonn.de/xmlui/handle/20.500.11811/5627}{https://bonndoc.ulb.uni-bonn.de/xmlui/handle/20.500.11811/5627}
\end{thebibliography}
\end{document}
\begin{document}
\title{Minimal coloring number for $\mathbb{Z}$-colorable links}
\begin{abstract}
The minimal coloring number of a $\mathbb{Z}$-colorable link is the minimal number of colors for non-trivial $\mathbb{Z}$-colorings on diagrams of the link. In this paper, we show that the minimal coloring number of any non-splittable $\mathbb{Z}$-colorable link is four. As an example, we consider the link obtained by replacing each component of a given link with several parallel strands, which we call a parallel of the link. We show that an even parallel of a link is $\mathbb{Z}$-colorable except for the case of $2$-parallels with non-zero linking number. We then give a simple way to obtain a diagram which attains the minimal coloring number for such even parallels of links.
\end{abstract}
\section{Introduction}
In \cite{Fox}, Fox introduced one of the most well-known invariants for knots and links, which is now called \textit{the Fox $n$-coloring}, or simply the {\it $n$-coloring}, for $n\ge 2$. On the other hand, it is known that links with determinant $0$ cannot admit Fox $n$-colorings for any $n \ge 2$. For such links, the $\mathbb{Z}$-coloring can be defined as a generalization of the Fox $n$-coloring. In \cite{HK}, Harary and Kauffman defined the minimal coloring number for the Fox $n$-coloring. We define the minimal coloring number for the $\mathbb{Z}$-coloring as a generalization of the minimal coloring number for the Fox $n$-coloring. See Section 2.

The minimal coloring number of any splittable $\mathbb{Z}$-colorable link is shown to be $2$. In \cite{IM}, for a non-splittable $\mathbb{Z}$-colorable link $L$ that has a diagram with a ``simple'' $\mathbb{Z}$-coloring, we proved that the minimal coloring number of $L$ is $4$. In this paper, we show that any non-splittable $\mathbb{Z}$-colorable link has a diagram with a ``simple'' $\mathbb{Z}$-coloring, and hence its minimal coloring number is $4$. \\
\noindent \textbf{Theorem \ref{main}.} {\it The minimal coloring number of any non-splittable $\mathbb{Z}$-colorable link is equal to $4$.}\\
This result was also proved, almost independently, by Meiqiao Zhang, Xian'an Jin and Qingying Deng in \cite{Zhang}. Zhang previously shared with us the manuscript of her Master's thesis. There she showed that if a $\mathbb{Z}$-colorable link has a diagram with a $1$-diff crossing, then the link has a diagram with only $0$-diff crossings and $1$-diff crossings. Our proof is based on her arguments.

In the proof of Theorem \ref{main}, we give a procedure to obtain a diagram with a $\mathbb{Z}$-coloring of $4$ colors from any given diagram with a non-trivial $\mathbb{Z}$-coloring of a non-splittable $\mathbb{Z}$-colorable link. However, the diagram and the $\mathbb{Z}$-coloring obtained by this procedure might be very complicated. In Section \ref{secthm1}, we give simple diagrams and $\mathbb{Z}$-colorings with $4$ colors for a particular class of $\mathbb{Z}$-colorable links. In fact, we consider the link obtained by replacing each component of a given link with several parallel strands, which we call a parallel of the link.
\section{Preliminaries}
Let us begin with the definition of a $\mathbb{Z}$-coloring of a link.
\begin{definition}\label{def1}
Let $L$ be a link and $D$ a regular diagram of $L$. We consider a map $\gamma: \{\text{arcs of } D\}\rightarrow \mathbb{Z}$.
If $\gamma$ satisfies the condition $2\gamma(a)= \gamma(b)+\gamma(c)$ at each crossing of $D$ with the over arc $a$ and the under arcs $b$ and $c$, then $\gamma$ is called a \textit{$\mathbb{Z}$-coloring} on $D$. A $\mathbb{Z}$-coloring which assigns the same integer to all the arcs of the diagram is called the \textit{trivial $\mathbb{Z}$-coloring}. A link is called \textit{$\mathbb{Z}$-colorable} if it has a diagram admitting a non-trivial $\mathbb{Z}$-coloring.
\end{definition}
Throughout this paper, we often call the integers in the image of a $\mathbb{Z}$-coloring {\it colors}. We define the minimal coloring number for $\mathbb{Z}$-colorings as follows.
\begin{definition}\label{def2}
Let us consider the number of colors of a non-trivial $\mathbb{Z}$-coloring on a diagram of a $\mathbb{Z}$-colorable link $L$. We call the minimum of such numbers of colors over all non-trivial $\mathbb{Z}$-colorings on all diagrams of $L$ the \textit{minimal coloring number} of $L$, and denote it by $mincol_\mathbb{Z}(L)$.
\end{definition}
In \cite{IM}, we defined a simple $\mathbb{Z}$-coloring.
\begin{definition}
Let $L$ be a non-trivial $\mathbb{Z}$-colorable link, and $\gamma$ a $\mathbb{Z}$-coloring on a diagram $D$ of $L$. Suppose that there exists a natural number $d$ such that, at all the crossings in $D$, the differences between the colors of the over arcs and the under arcs are $d$ or $0$. Then we call $\gamma$ a \textit{simple} $\mathbb{Z}$-coloring.
\end{definition}
Moreover, we proved the following result in \cite{IM}.
\begin{theorem}{\cite[Theorem 4.2]{IM}}\label{simplethm}
Let $L$ be a non-splittable $\mathbb{Z}$-colorable link. If there exists a simple $\mathbb{Z}$-coloring on a diagram of $L$, then $mincol_\mathbb{Z}(L)=4$.
\end{theorem}
\section{Main theorem}
In this section, we prove Theorem \ref{main}.
\begin{theorem}\label{main}
The minimal coloring number of any non-splittable $\mathbb{Z}$-colorable link is equal to $4$.
\end{theorem}
In her thesis, Zhang calls a crossing an {\it $n$-diff crossing} with respect to a $\mathbb{Z}$-coloring $\gamma$ if $|b-a|$ and $|b-c|$ are equal to $n$, where the over arc at the crossing is colored by $b$ and the under arcs are colored by $a$ and $c$. We also use this notion in our proof.
\begin{proof}[Proof of Theorem \ref{main}]
Let $L$ be a non-splittable $\mathbb{Z}$-colorable link. If the link $L$ admits a simple $\mathbb{Z}$-coloring, then, by Theorem \ref{simplethm}, we see that $mincol_\mathbb{Z}(L)=4$. Now let $D$ be a diagram of $L$ with a non-simple $\mathbb{Z}$-coloring. We define $d_m$ as the maximum of the set $\{0, d_1, d_2, \cdots\}$, where the $d_i$ are the positive integers such that $D$ has a $d_i$-diff crossing. We can find a path on $D$ from a $d_m$-diff crossing to a $d$-diff crossing with $0<d<d_m$, passing only through $0$-diff crossings. Such a path is of one of the $4$ types [1], [2], [3] and [4] illustrated in Figure \ref{dm}. In all the figures in this proof, a crossing with $n$ inside a circle is an $n$-diff crossing.
\begin{figure}\label{dm}
\end{figure}
In the following, for a path from a $d_m$-diff crossing as in Figure \ref{dm}, we will modify the diagram and the coloring to eliminate the $d_m$-diff crossing. For a path of type [1], we modify the diagram and the coloring as shown in Figure \ref{fig1}.
\begin{figure}\label{fig1}
\end{figure}
For a path of type [2], we modify the diagram and the coloring as shown in Figure \ref{fig2-1} or \ref{fig2-2}.
\begin{figure}\label{fig2-1}
\end{figure}
\begin{figure}\label{fig2-2}
\end{figure}
For a path of type [3], we modify the diagram and the coloring as shown in Figure \ref{fig3}.
\begin{figure}\label{fig3}
\end{figure}
For a path of type [4], we modify the diagram and the coloring as shown in Figure \ref{fig4-1} or \ref{fig4-2}.
\begin{figure}\label{fig4-1}
\end{figure}
\begin{figure}\label{fig4-2}
\end{figure}
Here the obtained diagram has $|d_m-d|$-diff crossings and $|d_m-2d|$-diff crossings, but no $d_m$-diff crossings. From $0<d<d_m$, we see that $|d_m-d|$ and $|d_m-2d|$ are less than $d_m$. By induction on $d_m$, $L$ has a diagram with a $\mathbb{Z}$-coloring which has only $0$-diff crossings and $\alpha$-diff crossings for some $\alpha>0$. That is, $L$ admits a simple $\mathbb{Z}$-coloring. By Theorem \ref{simplethm}, we conclude that $mincol_\mathbb{Z}(L)=4$.
\end{proof}
\begin{remark}
By Theorem \ref{main}, any non-splittable $\mathbb{Z}$-colorable link has a diagram with a $\mathbb{Z}$-coloring of $4$ colors. However, the diagram and the $\mathbb{Z}$-coloring obtained from a given diagram of a $\mathbb{Z}$-colorable link by the procedure given in our proof of Theorem \ref{main} might be very complicated.
\end{remark}
\section{Even parallels}\label{secthm1}
In this section, we give a simple way to obtain a diagram which attains the minimal coloring number for a particular family of $\mathbb{Z}$-colorable links. That is, we consider the link obtained by replacing each component of a given link with several parallel strands, which we call a parallel of a link, as follows.
\begin{definition}
Let $L=K_1\cup\cdots\cup K_c$ be a link with $c$ components and $D$ a diagram of $L$. For a tuple $(n_1,\cdots ,n_c)$ of integers $n_i\geq 1$, we denote by $D^{(n_1,\cdots ,n_c)}$ the diagram obtained by taking $n_i$ parallel copies of the $i$-th component $K_i$ of $D$ in the plane for $1\leq i\leq c$. The link $L^{(n_1,\cdots ,n_c)}$ represented by $D^{(n_1,\cdots ,n_c)}$ is called {\it the $(n_1,\cdots ,n_c)$-parallel of $L$}. When $L$ is a knot, that is, $c=1$, we call the $(n)$-parallel $L^{(n)}$ simply an $n$-parallel, and denote it by $L^n$. We call a $2$-parallel of a knot {\it untwisted} if the linking number of the $2$ components of the parallel is $0$.
\end{definition}
Examples of $(n_1,\cdots ,n_c)$-parallels of links are shown in Figure \ref{hopf} and Figure \ref{trefoil}.
\begin{figure}
\caption{A $(3,2)$-parallel of the Hopf link}
\label{hopf}
\end{figure}
\begin{figure}
\caption{A $2$-parallel of the trefoil}
\label{trefoil}
\end{figure}
We show that an even parallel of a link is $\mathbb{Z}$-colorable except for the case of $2$-parallels with non-zero linking number.
\begin{theorem}\label{thmparallel}
[1] For a non-trivial knot $K$ and any diagram $D$ of $K$ whose writhe is $0$, $D^{2}$ always represents a $\mathbb{Z}$-colorable link. Moreover, there exists a diagram $D_0$ of $K$ such that $D_0^{2}$ is locally equivalent to a minimally $\mathbb{Z}$-colorable diagram.
\noindent [2] Let $L$ be a non-splittable $c$-component link and $D$ any diagram of $L$. For any even numbers $n_1,\cdots,n_c$ at least $4$, $D^{(n_1,\cdots ,n_c)}$ always represents a $\mathbb{Z}$-colorable link and is locally equivalent to a minimally $\mathbb{Z}$-colorable diagram.
\end{theorem}
Here we give the definitions used in Theorem \ref{thmparallel}.
\begin{definition}
Let $L$ be a $\mathbb{Z}$-colorable link, and $D$ a diagram of $L$.
$D$ is called a {\it minimally $\mathbb{Z}$-colorable diagram} if there exists a $\mathbb{Z}$-coloring $\gamma$ on $D$ such that the number of colors of $\gamma$ is equal to the minimal coloring number of $L$.
\end{definition}
\begin{definition}
For diagrams $D$ and $D'$ of $L$, $D$ is {\it locally equivalent} to $D'$ if there exist mutually disjoint open subsets $U_1, U_2, \cdots ,U_n$ of $\mathbb{R}^2$ such that $D'$ is obtained from $D$ by Reidemeister moves performed only in $\bigcup_{i=1}^n U_i$.
\end{definition}
To prove Theorem \ref{thmparallel} [1], we prepare the next lemma about the linking number of the components of the $2$-parallel of a knot.
\begin{lemma}\label{lem1}
Let $D$ be a diagram of a knot $K$. For the 2-parallel $K^2=K_1\cup K_2$ represented by $D^2$, the linking number of $K_1$ and $K_2$ is equal to the writhe of $D$.
\end{lemma}
\begin{proof}
Any crossing $c$ on $D$ is replaced by four crossings when we take the $2$-parallel copies. Among the four crossings, two consist of arcs of only $K_1$ or only $K_2$, and the other two consist of arcs of both $K_1$ and $K_2$.
\begin{figure}\label{lem1fig}
\end{figure}
Then $c$ and the crossings formed by arcs of different components have the same sign. See Figure \ref{lem1fig}. Therefore the linking number of $K_1$ and $K_2$ is equal to the writhe of $D$.
\end{proof}
Concerning the non-splittability of parallels of knots and links, we can also show the following.
\begin{lemma}\label{lem11}
[1] Any $n$-parallel of a non-trivial knot is non-splittable.
[2] Any $(n_1,\cdots ,n_c)$-parallel of a non-splittable link is non-splittable.
\end{lemma}
\begin{proof}
[1] Let $K^n$ be an $n$-parallel of a non-trivial knot $K$. Suppose that $K^n$ is splittable, that is, there exists a $2$-sphere $S$ in $S^3-K^n$ such that $S$ does not bound any $3$-ball in $S^3-K^n$. On the other hand, there exists an embedded annulus $A$ in $S^3$ such that $K^n \subset A$ and the core of $A$ is parallel to the components of $K^n$. By an isotopy of $S$, we minimize $S\cap A$. If $S \cap A$ is empty, then we have a contradiction with the definition of $S$, since any knot complement in $S^3$ is irreducible. We suppose otherwise, that is, $S\cap A$ is not empty. We consider a component $C$ of $S\cap A$ on $A$. Then there are two possibilities:
(1) $C$ is trivial on $A$, that is, $C$ bounds a disk on $A$, or\\
(2) $C$ is parallel to a component of $K^n$.
In case (1), $C$ can be removed by an isotopy of $S$, which contradicts the minimality of $S\cap A$. In case (2), since $C$ bounds a disk in $S$, the knot $K$ must be trivial. This also gives a contradiction. Therefore we see that $K^n$ is non-splittable.
\noindent [2] From a link $L=K_1 \cup\cdots\cup K_c$ with $c$ at least 2, we obtain a parallel $L^{(n_1,\cdots ,n_c)}=K^{n_1}_1 \cup\cdots\cup K^{n_c}_c$. We assume that $L^{(n_1,\cdots ,n_c)}$ is splittable. Then there exists a $2$-sphere $S$ in $S^3-L^{(n_1,\cdots ,n_c)}$ such that $S^3=B_1\cup B_2$ with $B_1\cap B_2=S$ and $L^{(n_1,\cdots ,n_c)}=L_1 \cup L_2$ with a non-empty link $L_i \subset B_i$ for $i=1,2$. Here we take a component $l_1$ from $L_1$ and $l_2$ from $L_2$. Consider first the case that $l_1 \subset K^{n_i}_i$ and $l_2 \subset K^{n_j}_j$ with $i\neq j$. Take $l_k \subset K^{n_k}_k$ for each $k\neq i,j$. Then we see that $L'=(l_1\cup l_2)\cup \bigcup_{k\neq i,j}l_k$ is equivalent to $L$. Now $S$ splits $l_1$ and $l_2$. Since $L'$ is ambient isotopic to $L$, this contradicts the assumption that $L$ is non-splittable. Consider now the case that $l_1, l_2 \subset K^{n_i}_i$. Take $l_k \subset K^{n_k}_k$ for each $k\neq i$.
Since $l_1 \subset B_1$ and since the link $l_1\cup \bigcup_{k\neq i} l_k$ is equivalent to the non-splittable link $L$, this link must be contained in $B_1$. On the other hand, considering $l_2$, the link $l_2 \cup \bigcup_{k\neq i}l_k$ must be contained in $B_2$. These two conclusions contradict each other. Therefore $L^{(n_1,\cdots ,n_c)}$ is non-splittable.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmparallel}]
[1] Let $K$ be a non-trivial knot in $S^3$. First we give an orientation to the knot $K$. Let us take a diagram $D$ of $K$, and consider the $2$-parallel $K^2$ obtained from $D$. Then the $2$-parallel $K^2=K_1\cup K_2$ admits the orientation induced from that of $K$. Suppose that $K^2$ is untwisted, i.e., the linking number of $K_1$ and $K_2$ is $0$. Then we see that the writhe of $D$ is $0$ by Lemma \ref{lem1}. Note that, as $D^2$ has parallel arcs such that the crossings at both ends have positive signs, there exists the same number of parallel arcs such that the crossings at both ends have negative signs, as shown in Figure \ref{same}.
\begin{figure}\label{same}
\end{figure}
Here we add a full twist to the parallel arcs, with the sign as shown in Figure \ref{r1a} and Figure \ref{r1b}.
\begin{figure}\label{r1a}
\label{r1b}
\end{figure}
Since the writhe of $D$ is $0$, there exist as many positive crossings as negative crossings on $D$. Therefore the diagram obtained by this modification is equivalent to $D^2$, and hence it also represents $K^2$. In the following, we will use the same notation $D^2$ to denote the modified diagram for convenience. We assign the colors (integers) $a$ and $a+d$ to a pair of parallel arcs on $D^2$; that is, $d$ is the difference of the colors of the two parallel arcs. We give colors to the remaining arcs around them so as to satisfy the condition of a $\mathbb{Z}$-coloring. See Figure \ref{diff}. Then the difference $d$ stays constant after the pair passes under another two parallel arcs.
\begin{figure}\label{diff}
\end{figure}
Therefore, $D^2$ has only the crossings shown in Figures \ref{2+} and \ref{2-}. There $x_i$ and $y_i$ with $i=1,2$ are arcs of the same component.
\begin{figure}\label{2+}
\label{2-}
\end{figure}
They appear alternately as we trace the two parallel arcs. We fix the colors of a pair of parallel arcs to be $0$ and $1$. The arc colored by $0$ comes to be colored by $1$ after passing under another two parallel arcs. Then we see the colors as shown in Figure \ref{diff2}.
\begin{figure}\label{diff2}
\end{figure}
We obtain $y=0, 2$ and $y'=1, 3$. We see that $2y=0$ or $4$, $2y-1=-1$ or $3$, $2y'-2=0$ or $4$ and $2y'-3=-1$ or $3$. Thus $D^2$ admits a $\mathbb{Z}$-coloring $C$ such that Im$(C)=\{-1,0,1,2,3,4\}$. Therefore $K^2$ is $\mathbb{Z}$-colorable.\\
Here we focus on the arcs colored by $4$ or $-1$. We can get a $\mathbb{Z}$-coloring without the colors $4$ and $-1$ by changing the diagram and the coloring obtained above, as shown in Figure \ref{del4} and Figure \ref{del-1}.
\begin{figure}\label{del4}
\end{figure}
\begin{figure}\label{del-1}
\end{figure}
We see that $K^2$ also admits a coloring $C'$ such that Im$(C')=\{0,1,2,3\}$. By Lemma \ref{lem11}, $K^2$ is non-splittable. By Theorem \ref{main}, the obtained diagram is a minimally $\mathbb{Z}$-colorable diagram, which is locally equivalent to $D^2$.
\noindent [2] Let $D^{(n_1,\cdots ,n_c)}$ be a diagram of $L^{(n_1,\cdots ,n_c)}$. The diagram $D^{(n_1,\cdots ,n_c)}$ has only the crossings shown in Figure \ref{N-parallel}.
\begin{figure}\label{N-parallel}
\end{figure}
We put a circle fencing the crossings as shown in Figure \ref{circle}.
\begin{figure}\label{circle}
\end{figure}
Note that each crossing of $D^{(n_1,\cdots,n_c)}$ is contained in one of the encircled regions. For any parallel family of arcs $\{a_1,\cdots,a_k\}$ outside the regions, we fix the colors of $a_{k/2}$ and $a_{k/2+1}$ to be $1$ and those of the others to be $0$, as shown in Figure \ref{out}. \\
\begin{figure}\label{out}
\end{figure}
For the arcs inside the regions, we assign colors as follows. \\
In the case $n_j=4m$ for some integer $m$, we assign the colors $-1,0,1,2,3$ as shown in Figure \ref{4m}.
\begin{figure}\label{4m}
\end{figure}
Then, at each crossing inside the region, the obtained coloring satisfies the condition of a $\mathbb{Z}$-coloring. Here the colors of the arcs intersecting the circle are compatible with those of the arcs outside. \\
In the case $n_j=4m+2$, in the same way, we assign the colors $-1,0,1,2$ as shown in Figure \ref{4m+2}.
\begin{figure}\label{4m+2}
\end{figure}
Then, at each crossing inside the region, the obtained coloring satisfies the condition of a $\mathbb{Z}$-coloring, and again the colors of the arcs intersecting the circle are compatible with those of the arcs outside. We see that $D^{(n_1,\cdots ,n_c)}$ admits a $\mathbb{Z}$-coloring $C$ such that Im$(C)=\{-1,0,1,2\}$ or $\{-1,0,1,2,3\}$. Therefore $L^{(n_1,\cdots ,n_c)}$ is $\mathbb{Z}$-colorable.\\
Moreover, in the case that the color $3$ appears, we delete the color $3$ by changing the diagram and the coloring as shown in Figure \ref{2-del3}.
\begin{figure}
\caption{Delete the color $3$}
\label{2-del3}
\end{figure}
Therefore there exists a coloring $C'$ such that Im$(C')=\{-1,0,1,2\}$. Since we are assuming that $L$ is non-splittable, by Lemma \ref{lem11}, the parallel $L^{(n_1,\cdots ,n_c)}$ is non-splittable. By Theorem \ref{main}, the obtained diagram is a minimally $\mathbb{Z}$-colorable diagram, which is locally equivalent to $D^{(n_1,\cdots ,n_c)}$.
\end{proof}
\end{document}
\begin{document} \title{A Rudimentary Quantum Compiler} \author{Robert R. Tucci\\ P.O. Box 226\\ Bedford, MA 01730\\ [email protected]} \date{ \today} \maketitle \vskip2cm \section*{Abstract} We present a new algorithm for reducing an arbitrary unitary matrix into a sequence of elementary operations (operations such as controlled-nots and qubit rotations). Such a sequence of operations can be used to manipulate an array of quantum bits (i.e., a quantum computer). We report on a C++ program called ``Qubiter" that implements our algorithm. Qubiter source code is publicly available. \section*{1. Introduction} \subsection*{1(a) Previous Work} \mbox{}\indent In classical computation and digital electronics, one deals with sequences of elementary operations (operations such as AND, OR and NOT). These sequences are used to manipulate an array of classical bits. The operations are elementary in the sense that they act on only a few bits (1 or 2) at a time. Henceforth, we will sometimes refer to sequences as products and to operations as operators, instructions, steps or gates. Furthermore, we will abbreviate the phrase ``sequence of elementary operations" by ``SEO". In quantum computation\enote{DiV}, one also deals with SEOs (with operations such as controlled-nots and qubit rotations), but for manipulating quantum bits (qubits) instead of classical bits. SEOs are often represented graphically by a qubit circuit. In quantum computation, one often knows the unitary operator $U$ that describes the evolution of an array of qubits. One must then find a way to reduce $U$ into a SEO. In this paper, we present a new algorithm for accomplishing this task. We also report on a C++ program called ``Qubiter" that implements our algorithm. Qubiter source code is publicly available at www.ar-tiste.com/qubiter.html . We call Qubiter a ``quantum compiler" because, like a classical compiler, it produces a SEO for manipulating bits. Our algorithm is general in the sense that it can be applied to any unitary operator $U$. Certain $U$ are known to be polynomial in $\nb$; i.e, they can be expressed as a SEO whose number of gates varies as a polynomial in $\nb$ as $\nb$ varies, where $\nb$ is the number of bits. For example, a Discrete Fourier Transform (DFT) can be expressed as Order($\nb^2$) steps \enote{DFT}. Our algorithm expresses a DFT as Order($4^\nb$) steps. Hence, our algorithm is not ``polynomially efficient"; i.e., it does not give a polynomial number of steps for all $U$ for which this is possible. However, we believe that it is possible to introduce optimizations into the algorithm so as to make it polynomially efficient. Future papers will report our progress in finding such optimizations. Previous workers \enote{CastOfThousands} have described another algorithm for reducing a unitary operator into a SEO. Like ours, their algorithm is general, and it is not expected to be polynomial efficient without optimizations. Our algorithm is significantly different from theirs. Theirs is based on a mathematical technique described in Refs.\enote{Murn}-\enote{Reck}, whereas ours is based on a mathematical technique called CS Decomposition\enote{Stew}-\enote{PaiWei}, to be described later. Quantum Bayesian (QB) Nets\enote{Tucci95}-\enote{QFog} are a method of modelling quantum systems graphically in terms of network diagrams. In a companion paper\enote{Tucci98b}, we show how to apply the results of this paper to QB nets. 
\subsection*{1(b) CS Decomposition} \mbox{}\indent As mentioned earlier, our algorithm utilizes a mathematical technique called CS Decomposition\enote{Stew}-\enote{PaiWei}. In this name, the letters C and S stand for ``cosine" and ``sine". Next we will state the special case of the CS Decomposition Theorem that arises in our algorithm. Suppose that $U$ is an $N\times N$ unitary matrix, where $N$ is an even number. Then the CS Decomposition Theorem states that one can always express $U$ in the form \begin{equation} U = \left [ \begin{array}{cc} L_0 & 0 \\ 0 & L_1 \end{array} \right ] D \left [ \begin{array}{cc} R_0 & 0 \\ 0 & R_1 \end{array} \right ] \;, \eqlabel{1.1}\end{equation} where $L_0, L_1, R_0, R_1$ are $\frac{N}{2}\times \frac{N}{2}$ unitary matrices and \begin{equation} D = \left [ \begin{array}{cc} D_{00} & D_{01} \\ D_{10} & D_{11} \end{array} \right ] \;, \eqlabel{1.2a}\end{equation} \begin{equation} D_{00} = D_{11} = diag(C_1, C_2, \ldots, C_{\frac{N}{2}}) \;, \eqlabel{1.2b}\end{equation} \begin{equation} D_{01} = diag(S_1, S_2, \ldots, S_{\frac{N}{2}}) \;, \eqlabel{1.2c}\end{equation} \begin{equation} D_{10} = - D_{01} \;. \eqlabel{1.2d}\end{equation} For $i \in \{ 1, 2, \ldots , \frac{N}{2} \} $, $C_i$ and $S_i$ are real numbers that satisfy \begin{equation} C_i^2 + S_i^2 = 1 \;. \eqlabel{1.2e}\end{equation} Henceforth, we will use the term {\it D matrix} to refer to any matrix that satisfies Eqs.(1.2). If one partitions $U$ into four blocks $U_{ij}$ of size $\frac{N}{2}\times \frac{N}{2}$ , then \begin{equation} U_{ij} = L_i D_{ij} R_j \;, \eqlabel{1.3}\end{equation} for $i, j \in \{0, 1\}$. Thus, $D_{ij}$ gives the singular values of $U_{ij}$. More general versions of the CS Decomposition Theorem allow for the possibility that we partition $U$ into 4 blocks of unequal size. Note that if $U$ were a general (not necessarily unitary) matrix, then the four blocks $U_{ij}$ would be unrelated. Then to find the singular values of the four blocks $U_{ij}$ would require eight unitary matrices (two for each block), instead of the four $L_i, R_j$. This double use of the $L_i, R_j$ is a key property of the CS decomposition. \subsection*{1(c) Bird's Eye View of Algorithm} \mbox{}\indent Our algorithm is described in detail in subsequent sections. Here we will only give a bird's eye view of it. \begin{center} \epsfig{file=Fig1-Compiler.eps} {Fig.1 A binary tree. Each node $\beta$ has a single parent. If the parent is to $\beta$'s right (ditto, left), then $\beta$ contains the names of the matrices produced by applying the CS Decomposition Theorem to the $L$ matrices (ditto, $R$ matrices) of $\beta$'s parent.} \end{center} Consider Fig.1. We start with a unitary matrix $U$. Without loss of generality, we can assume that the dimension of $U$ is $2^\nb$ for some $\nb\geq 1$. (If initially $U$'s dimension is not a power of 2, we replace it by a direct sum $U\oplus I_r$ whose dimension is a power of two.) We apply the CS Decomposition method to $U$. This yields a matrix $D(0, U)$ of singular values, two unitary matrices $L(0, U)$ and $L(1, U)$ on the left and two unitary matrices $R(0, U)$ and $R(1, U)$ on the right. Then we apply the CS Decomposition method to each of the 4 matrices $L(0, U), L(1, U), R(0,U)$ and $R(1, U)$ that were produced in the previous step. Then we apply the CS Decomposition method to each of the 16 $R$ and $L$ matrices that were produced in the previous step. And so on. 
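To make the recursion concrete, a single step of Eq.(1.1) can be tried numerically. The following sketch is only an illustration (it is not part of Qubiter); it assumes NumPy and SciPy, whose routine \verb|scipy.linalg.cossin| computes precisely this decomposition for the equal $\frac{N}{2}+\frac{N}{2}$ partition of a unitary matrix.
\begin{verbatim}
import numpy as np
from scipy.linalg import cossin
from scipy.stats import unitary_group

nb = 3
N = 2 ** nb                                 # dimension 2^nb, as in the text
U = unitary_group.rvs(N, random_state=0)    # a random unitary matrix to decompose

# One node of the tree: U = diag(L0, L1) D diag(R0, R1), Eq.(1.1).
u, d, vdh = cossin(U, p=N // 2, q=N // 2)

print(np.allclose(u @ d @ vdh, U))          # True: the factorization holds
print(np.allclose(u[:N // 2, N // 2:], 0))  # u is block diagonal, u = diag(L0, L1)
print(np.allclose(d[:N // 2, :N // 2],      # D_00 = D_11 = diag(C_1, ..., C_{N/2})
                  d[N // 2:, N // 2:]))
\end{verbatim}
Applying the same call recursively to the diagonal blocks of the left and right factors reproduces the tree of Fig.1.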
The lowest row of the pyramid in Fig.1 has $L$'s and $R$'s that are $1\times 1$ dimensional, i.e., just complex numbers. Call a {\it central matrix} either (1) a single D matrix, or (2) a direct sum $D_1 \oplus D_2 \oplus \cdots \oplus D_r$ of D matrices, or (3) a diagonal unitary matrix. From Fig.1 it is clear that the initial matrix $U$ can be expressed as a product of central matrices, with each node of the tree providing one of the central matrices in the product. Later on we will present techniques for decomposing any central matrix into a SEO. \section*{2. Preliminaries} \mbox{}\indent In this section, we introduce some notation and some general mathematical concepts that will be used in subsequent sections. \subsection*{2(a) General Notation} \mbox{}\indent We define $Z_{a, b} = \{ a, a+1, \ldots, b\}$ for any integers $a$ and $b$. $\delta(x,y)$ equals one if $x=y$ and zero otherwise. We will use the symbol $\nb$ for the number ($\geq 1$) of bits and $\ns = 2^\nb$ for the number of states with $\nb$ bits. Let $Bool = \{0,1\}$. We will use lower case Latin letters $a,b,c\ldots \in Bool$ to represent bit values and lower case Greek letters $\alpha, \beta, \gamma, \ldots \in \usualz $ to represent bit positions. A vector such as $\va = a_{\nb-1} \ldots a_2 a_1 a_0$ will represent a string of bit values, $a_\mu$ being the value of the $\mu$'th bit for $\mu \in \usualz$. A bit string $\va$ has a decimal representation $d(\va) = \sum^{\nb-1}_{\mu=0} 2^\mu a_\mu$. For $\beta\in \usualz$, we will use $\vec{u}(\beta)$ to denote the $\beta$'th standard unit vector, i.e, the vector with bit value of 1 at bit position $\beta$ and bit value of zero at all other bit positions. $I_r$ will represent the $r$ dimensional unit matrix. Suppose $\beta\in \usualz$ and $M$ is any $2\times 2$ matrix. We define $M(\beta)$ by \begin{equation} M(\beta) = I_2 \otimes \cdots \otimes I_2 \otimes M \otimes I_2 \otimes \cdots \otimes I_2 \;, \eqlabel{2.1}\end{equation} where the matrix $M$ on the right side is located at bit position $\beta$ in the tensor product of $\nb$ $2\times 2$ matrices. The numbers that label bit positions in the tensor product increase from right to left ($\leftarrow$), and the rightmost bit is taken to be at position 0. For any two square matrices $A$ and $B$ of the same dimension, we define the $\odot$ product by $A \odot B = A B A^{\dagger}$, where $A^{\dagger}$ is the Hermitian conjugate of $A$. $\vec{\sigma} = (\sx, \sy , \sz)$ will represent the vector of Pauli matrices, where \begin{equation} \sx= \left( \begin{array}{cc} 0&1\\ 1&0 \end{array} \right) \;, \;\; \sy= \left( \begin{array}{cc} 0&-i\\ i&0 \end{array} \right) \;, \;\; \sz= \left( \begin{array}{cc} 1&0\\ 0&-1 \end{array} \right) \;. \eqlabel{2.2}\end{equation} \subsection*{2(b) Sylvester-Hadamard Matrices} \mbox{}\indent The Sylvester-Hadamard matrices\enote{Had} $H_r$ are defined by: \begin{equation} H_1 = \begin{tabular}{r|rr} & {\tiny 0} & {\tiny 1} \\ \hline {\tiny 0}& 1& 1\\ {\tiny 1}& 1& -1\\ \end{tabular} \;, \eqlabel{2.3a}\end{equation} \begin{equation} H_2 = \begin{tabular}{r|rrrr} & {\tiny 00} & {\tiny 01} & {\tiny 10} & {\tiny 11}\\ \hline {\tiny 00}& 1& 1& 1& 1\\ {\tiny 01}& 1& -1& 1& -1\\ {\tiny 10}& 1& 1& -1& -1\\ {\tiny 11}& 1& -1& -1& 1\\ \end{tabular} \;, \eqlabel{2.3b}\end{equation} \begin{equation} H_{r+1} = H_1 \otimes H_r \;, \eqlabel{2.3c}\end{equation} for any integer $r\geq 1$. 
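As a quick sanity check of this definition (a sketch only; it assumes NumPy and SciPy, whose \verb|scipy.linalg.hadamard| uses the same Sylvester construction), one can build $H_r$ by repeated Kronecker products and confirm the symmetry and inverse properties noted below.
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

r = 3
H1 = np.array([[1, 1], [1, -1]])
H = H1.copy()
for _ in range(r - 1):
    H = np.kron(H1, H)                    # H_{r+1} = H_1 tensor H_r, Eq.(2.3c)

print(np.array_equal(H, hadamard(2 ** r)))                        # same matrix
print(np.array_equal(H, H.T))                                     # H_r is symmetric
print(np.array_equal(H @ H, 2 ** r * np.eye(2 ** r, dtype=int)))  # H_r^2 = 2^r identity
\end{verbatim}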
In Eqs.(2.3), we have labelled the rows and columns with binary numbers in increasing order, and $\otimes$ indicates a tensor product of matrices. From Eqs.(2.3), one can show that the entry of $H_r$ at row $\va$ and column $\vb$ is given by \begin{equation} (H_r)_{\va, \vb} = (-1)^{\va\cdot \vb} \;, \eqlabel{2.4}\end{equation} where $\va\cdot \vb = \sum^{r-1}_{\mu=0} a_\mu b_\mu$. It is easy to check that \begin{equation} H_r^T = H_r \;, \eqlabel{2.5}\end{equation} \begin{equation} H_r^2 = 2^r I_{2^r} \;. \eqlabel{2.6}\end{equation} In other words, $H_r$ is a symmetric matrix, and the inverse of $H_r$ equals $H_r$ divided by $2^r$, the number of its rows. \subsection*{2(c) Permutations} \mbox{}\indent Subsequent sections will use the following very basic facts about permutations. For more details, see, for example, Ref.\enote{Perm}. A {\it permutation} is a one-to-one map from a finite set $X$ onto itself. The set of permutations on a set $X$ is a group if group multiplication is taken to be function composition. $S_n$, the {\it symmetric group on $n$ letters}, is defined as the group of all permutations on any set $X$ with $n$ elements. If $X= Z_{1,n}$, then a permutation $G$ which maps $i\in X$ to $a_i\in X$ (where $i\neq j$ implies $a_i\neq a_j$) can be represented by a matrix with entries \begin{equation} (G)_{j, i} = \delta(a_j, i) \;, \eqlabel{2.7}\end{equation} for all $i,j \in X$. Note that all entries in any given row or column equal zero except for one entry which equals one. An alternative notation for $G$ is \begin{equation} G = \left ( \begin{array}{ccccc} 1 & 2 & 3 & \cdots & n \\ a_1 & a_2 & a_3 & \cdots & a_n \end{array} \right ) \;. \eqlabel{2.8}\end{equation} The product of two symbols of the type shown in Eq.(2.8) is defined by function composition. For example, \begin{equation} \left ( \begin{array}{ccc} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array} \right ) \left ( \begin{array}{ccc} 1 & 2 & 3 \\ a_1 & a_2 & a_3 \end{array} \right ) = \left ( \begin{array}{ccc} 1 & 2 & 3 \\ b_1 & b_2 & b_3 \end{array} \right ) \;. \eqlabel{2.9}\end{equation} Note how we have applied the permutations on the left side from right to left ($\leftarrow$). (Careful: Some authors apply them in the opposite direction ($\rightarrow$)). A cycle is a special type of permutation. If $G\in S_n$ maps $a_1\rarrow a_2$, $a_2\rarrow a_3$, \ldots, $a_{r-1}\rarrow a_r$, $a_r\rarrow a_1$, where $i\neq j$ implies $a_i\neq a_j$ and $r\leq n$, then we call $G$ a {\it cycle}. $G$ may be represented as in Eqs.(2.7) and (2.8). Another way to represent it is by \begin{equation} G = ( a_1, a_2, a_3, \ldots, a_r) \;. \eqlabel{2.10}\end{equation} (Careful: some people write $( a_r, \ldots, a_3, a_2, a_1)$ instead.) We say that the cycle of Eq.(2.10) has {\it length} $r$. Cycles of length 1 are just the identity map. A cycle of length 2 is called a {\it transposition}. The product of two cycles need not be another cycle. For example, \begin{equation} (2,1,5) (1,4,5,6) = \left ( \begin{array}{cccccc} 1 & 2 & 3 & 4 & 5 & 6\\ 4 & 1 & 3 & 2 & 6 & 5 \end{array} \right ) \; \eqlabel{2.11}\end{equation} cannot be expressed as a single cycle. Any permutation can be written as a product of cycles. For example, \begin{equation} \left ( \begin{array}{cccccc} 1 & 2 & 3 & 4 & 5 & 6\\ 4 & 1 & 3 & 2 & 6 & 5 \end{array} \right ) = (5,6) (1,4,2) \;. \eqlabel{2.12}\end{equation} The cycles on the right side of Eq.(2.12) are {\it disjoint}; i.e., they have no elements in common. Disjoint cycles commute.
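The cycle arithmetic of Eqs.(2.9)-(2.12) can be checked mechanically. Below is a minimal Python sketch (the helper names are ours); permutations are stored as dictionaries rather than as the matrices of Eq.(2.7), and products are applied from right to left, as in Eq.(2.9).
\begin{verbatim}
def cycle(*elts, n=6):
    """The cycle (a_1, ..., a_r) of Eq.(2.10), as a map on Z_{1,n}."""
    g = {i: i for i in range(1, n + 1)}
    for a, b in zip(elts, elts[1:] + elts[:1]):
        g[a] = b
    return g

def compose(g, f):
    """g o f, applied right to left."""
    return {i: g[f[i]] for i in f}

lhs = compose(cycle(2, 1, 5), cycle(1, 4, 5, 6))   # Eq.(2.11)
assert lhs == {1: 4, 2: 1, 3: 3, 4: 2, 5: 6, 6: 5}

rhs = compose(cycle(5, 6), cycle(1, 4, 2))         # Eq.(2.12)
assert lhs == rhs
# Disjoint cycles commute:
assert rhs == compose(cycle(1, 4, 2), cycle(5, 6))
\end{verbatim}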
Any cycle can be expressed as a product of transpositions (assuming a group with $\geq 2$ elements), by using identities such as: \begin{equation} (a_1, a_2, \ldots, a_n) = (a_1, a_2) (a_2, a_3) \cdots (a_{n-1}, a_n) \;, \eqlabel{2.13}\end{equation} \begin{equation} (a_1, a_2, \ldots, a_n) = (a_1, a_n) \cdots (a_1, a_3)(a_1, a_2) \;. \eqlabel{2.14}\end{equation} Another useful identity is \begin{equation} (a, b) = (a, p)(p, b)(a, p) \;. \eqlabel{2.15}\end{equation} This last identity can be applied repeatedly. For example, applied twice, it gives \begin{equation} (a, b) = (a, p_1)(p_1, b)(a, p_1) = (a, p_1) (p_1, p_2) (p_2, b) (p_1, p_2) (a, p_1) \;. \eqlabel{2.16}\end{equation} Since any permutation equals a product of cycles, and each of those cycles can be expressed as a product of transpositions, all permutations can be expressed as a product of transpositions (assuming a group with $\geq 2$ elements). The decomposition of a permutation into transpositions is not unique. However, the number of transpositions whose product equals a given permutation is always either even or odd. An {\it even} (ditto, {\it odd}) {\it permutation} is defined as one which equals an even (ditto, odd) number of transpositions. \subsection*{2(d) Projection Operators} \mbox{}\indent Consider a single qubit first. The qubit's basis states $\ket{0}$ and $\ket{1}$ will be represented by \begin{equation} \ket{0} = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \;\;,\;\; \ket{1} = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \;. \eqlabel{2.17}\end{equation} The number operator $n$ of the qubit is defined by \begin{equation} n = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right] = \frac{ 1 - \sz}{2} \;. \eqlabel{2.18}\end{equation} Note that \begin{equation} n \ket{0} = 0 \;\;,\;\; n \ket{1} = \ket{1} \;. \eqlabel{2.19}\end{equation} We will often use $\antin$ as shorthand for \begin{equation} \antin = 1-n = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right] = \frac{ 1 + \sz}{2} \;. \eqlabel{2.20}\end{equation} Define $P_0$ and $P_1$ by \begin{equation} P_0 = \antin = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right] \;\;,\;\; P_1 = n = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right] \;. \eqlabel{2.21}\end{equation} $P_0$ and $P_1$ are orthogonal projectors and they add to one: \begin{equation} P_a P_b = \delta(a, b) P_b \;\;\;\;\; {\rm for} \;\; a,b\in Bool \;, \eqlabel{2.22a}\end{equation} \begin{equation} P_0 + P_1 = I_2 \;. \eqlabel{2.22b}\end{equation} Now consider $\nb$ bits instead of just one. For $\beta\in \usualz$, we define $P_0(\beta)$, $P_1(\beta)$, $n(\beta)$ and $\antin(\beta)$ by means of Eq.(2.1).\enote{Havel} For $\va \in Bool^\nb$, let \begin{equation} P_{\va} = P_{a_{\nb-1}} \otimes \cdots \otimes P_{a_2} \otimes P_{a_1} \otimes P_{a_0} \;. \eqlabel{2.23}\end{equation} Note that \begin{equation} \sum_{\va\in Bool^\nb } P_{\va} = I_2 \otimes I_2 \otimes \cdots \otimes I_2 = I_{2^\nb} \;. \eqlabel{2.24}\end{equation} For example, with 2 bits we have \begin{equation} P_{00} = P_0 \otimes P_0 = diag(1, 0, 0, 0) \;, \eqlabel{2.25a}\end{equation} \begin{equation} P_{01} = P_0 \otimes P_1 = diag(0, 1, 0, 0) \;, \eqlabel{2.25b}\end{equation} \begin{equation} P_{10} = P_1 \otimes P_0 = diag(0, 0, 1, 0) \;, \eqlabel{2.25c}\end{equation} \begin{equation} P_{11} = P_1 \otimes P_1 = diag(0, 0, 0, 1) \;, \eqlabel{2.25d}\end{equation} \begin{equation} \sum_{a,b \in Bool} P_{a,b} = I_4 \;. 
\eqlabel{2.26}\end{equation} For $r\geq 1$, suppose $P_1, P_2, \ldots P_r$ are orthogonal projection operators (i.e., $P_i P_j = \delta(i, j) P_j$ ), and $\alpha_1, \alpha_2 \ldots \alpha_r$ are complex numbers. Then it is easy to show by Taylor expansion that \begin{equation} \exp ( \sum_{i=1}^r \alpha_i P_i) = \sum_{i=1}^r \exp(\alpha_i) P_i + ( 1 - \sum_{i=1}^r P_i ) \;. \eqlabel{2.27}\end{equation} In other words, one can ``pull out" the sum over $P_i$'s from the argument of the exponential, but only if one adds a compensating term $1 - \sum_i P_i$ so that both sides of the equation agree when all the $\alpha_i$'s are zero. \section*{3. State Permutations that Act on Two Bits} \mbox{}\indent The goal of this paper is to reduce unitary matrices into qubit rotations and controlled-nots (c-nots). A qubit rotation (i.e., $\exp[i\vec{\theta}\cdot\vec{\sigma}(\beta)]$ for $\beta\in \usualz$ and some real 3-dimensional vector $\vec{\theta}$) acts on a single qubit at a time. This section will discuss gates such as c-nots that are state permutations that act on two bits at a time. \subsection*{3(a) $\nb = 2$} \mbox{}\indent Consider first the case when there are only 2 bits. Then there are four possible states--00, 01, 10, 11. With these 4 states, one can build 6 distinct transpositions: \begin{equation} (00, 01) = \left [ \begin{array}{cc} \sx & \\ & I_2 \end{array} \right ] = P_0 \otimes \sx + P_1 \otimes I_2 = \cnotno{1}{0} \;, \eqlabel{3.1a}\end{equation} \begin{equation} (00, 10) = \left [ \begin{array}{cc} P_1 & P_0 \\ P_0 & P_1 \end{array} \right ] = I_2 \otimes P_1 + \sx \otimes P_0 = \cnotno{0}{1} \;, \eqlabel{3.1b}\end{equation} \begin{equation} (00, 11) = \left [ \begin{array}{ccc} & & 1 \\ & I_2 & \\ 1& & \end{array} \right ] \;, \eqlabel{3.1c}\end{equation} \begin{equation} (01, 10) = \left [ \begin{array}{ccc} 1 & & \\ & \sx & \\ & & 1 \end{array} \right ] \;, \eqlabel{3.1d}\end{equation} \begin{equation} (01, 11) = \left [ \begin{array}{cc} P_0 & P_1 \\ P_1 & P_0 \end{array} \right ] = I_2 \otimes P_0 + \sx \otimes P_1 = \cnotyes{0}{1} \;, \eqlabel{3.1e}\end{equation} \begin{equation} (10, 11) = \left [ \begin{array}{cc} I_2 & \\ & \sx \end{array} \right ] = P_0 \otimes I_2 + P_1 \otimes \sx = \cnotyes{1}{0} \;, \eqlabel{3.1f}\end{equation} where matrix entries left blank should be interpreted as zero. The rows and columns of the above matrices are labelled by binary numbers in increasing order (as in Eq.(2.3b) for $H_2$). Note that the 4 transpositions Eqs.(3.1 a,b,e,f) change only one bit value, whereas the other 2 transpositions Eqs.(3.1 c,d) change both bit values. We will call $(00, 11)$ the {\it Twin-to-twin-er} and $(01, 10)$ the {\it Exchanger}. Exchanger\enote{Fey} has four possible representations as a product of c-nots: \begin{equation} (01, 10) = (01, 00) (00, 10) (01, 00) = \cnotno{1}{0}\cnotno{0}{1} \cnotno{1}{0} \;, \eqlabel{3.2a}\end{equation} \begin{equation} (01, 10) = (10, 11) (11, 01) (10, 11) = \cnotyes{1}{0}\cnotyes{0}{1} \cnotyes{1}{0} \;, \eqlabel{3.2b}\end{equation} \begin{equation} (01, 10) = (10, 00) (00, 01) (10, 00) = \cnotno{0}{1}\cnotno{1}{0} \cnotno{0}{1} \;, \eqlabel{3.2c}\end{equation} \begin{equation} (01, 10) = (01, 11) (11, 10) (01, 11) = \cnotyes{0}{1}\cnotyes{1}{0} \cnotyes{0}{1} \;. 
\eqlabel{3.2d}\end{equation} Note that one can go from Eq.(3.2a) to (3.2b) by exchanging $n$ and $\antin$; from Eq.(3.2a) to (3.2c) by exchanging bit positions 0 and 1; from Eq.(3.2a) to (3.2d) by doing both, exchanging $n$ and $\antin$ and exchanging bit positions 0 and 1. We will often represent Exchanger by $E(0,1)$. It is easy to show that \begin{equation} E^T(0,1) = E(0,1) = E^{-1}(0, 1) \;, \eqlabel{3.3a}\end{equation} \begin{equation} E(0,1) = E(1,0) \;, \eqlabel{3.3b}\end{equation} \begin{equation} E^2(0,1) = 1 \;. \eqlabel{3.3c}\end{equation} Furthermore, if $X$ and $Y$ are any two arbitrary $2\times2$ matrices, then, by using the matrix representation Eq.(3.1d) of Exchanger, one can show that \begin{equation} E(1,0)\odot (X\otimes Y) = Y\otimes X \;. \eqlabel{3.4}\end{equation} Thus, Exchanger exchanges the position of matrices $X$ and $Y$ in the tensor product. Twin-to-twin-er also has 4 possible representations as a product of c-nots. One is \begin{equation} (00, 11) = (00, 01) (01, 11) (00, 01) = \cnotno{1}{0}\cnotyes{0}{1} \cnotno{1}{0} \;. \eqlabel{3.5}\end{equation} As with Exchanger, the other 3 representations are obtained by replacing (1) $n$ and $\antin$, (2) bit positions 0 and 1, (3) both. \begin{center} \epsfig{file=Fig2-Compiler.eps} {Fig.2 Circuit symbols for the 4 different types of c-nots.} \end{center} \begin{center} \epsfig{file=Fig3-Compiler.eps} {Fig.3 Four equivalent circuit diagrams for Exchanger.} \end{center} \begin{center} \epsfig{file=Fig4-Compiler.eps} {Fig.4 Four equivalent circuit diagrams for Twin-to-twin-er.} \end{center} Figures 2, 3 and 4 give a diagrammatic representation of the 6 possible transpositions of states for $\nb=2$. \subsection*{3(b) Any $\nb \geq 2$} \mbox{}\indent Suppose $a_1, b_1, a_2, b_2 \in Bool$ and $\alpha, \beta \in \usualz$. We define \begin{equation} (a_1 b_1, a_2 b_2)_{\alpha, \beta} = \prod_{ (\Lambda, \Lambda', \Lambda'')\in Bool^{\nb -2} } (\Lambda a_1 \Lambda' b_1 \Lambda'', \Lambda a_2 \Lambda' b_2 \Lambda'') \;, \eqlabel{3.6}\end{equation} where on the right side, $a_1, a_2$ are located at bit position $\alpha$, and $b_1, b_2$ are located at bit position $\beta$. (Note that the transpositions on the right side of Eq.(3.6) are disjoint so they commute.) For example, for $\nb=3$, \begin{equation} \cnotyes{1}{0} = (10, 11)_{1,0} = \prod_{a\in Bool} (a10, a11) = (010, 011) (110, 111) \;. \eqlabel{3.7}\end{equation} Clearly, any permutation of states with $\nb$ bits that permutes only 2 bits (i.e., Exchanger, Twin-to-twin-er, and all c-nots) can be represented by $(a_1 b_1, a_2 b_2)_{\alpha, \beta}$. For $\alpha, \beta \in \usualz$, let $E(\alpha, \beta)$ represent Exchanger: \begin{equation} E(\alpha, \beta) = (01, 10)_{\alpha, \beta} \;. \eqlabel{3.8}\end{equation} As in the $\nb=2$ case, $E(\alpha, \beta)$ can be expressed as a product of c-nots in 4 different ways. One way is \begin{equation} E(\alpha, \beta) = \cnotyes{\beta}{\alpha} \cnotyes{\alpha}{\beta} \cnotyes{\beta}{\alpha} \;. \eqlabel{3.9}\end{equation} The other 3 ways are obtained by exchanging (1) $n$ and $\antin$, (2) bit positions $\alpha$ and $\beta$, (3) both. Again as in the $\nb=2$ case, \begin{equation} E^T(\alpha, \beta) = E(\alpha, \beta) = E^{-1}(\alpha, \beta) \;, \eqlabel{3.10a}\end{equation} \begin{equation} E(\alpha, \beta) = E(\beta, \alpha) \;, \eqlabel{3.10b}\end{equation} \begin{equation} E^2(\alpha, \beta) = 1 \;. 
\eqlabel{3.10c}\end{equation} Furthermore, if $X$ and $Y$ are two arbitrary $2\times 2$ matrices and $\alpha, \beta\in \usualz$ such that $\alpha\neq \beta$, then \begin{equation} E(\alpha, \beta)\odot [X(\alpha) Y(\beta)] = X(\beta) Y(\alpha) \;. \eqlabel{3.11}\end{equation} Equation (3.11) is an extremely useful result. It says that $E(\alpha, \beta)$ is a transposition of bit positions. (Careful: this is not the same as a transposition of bit states.) Furthermore, the $E(\alpha, \beta)$ generate the group of $\nb !$ permutations of bit positions. (Careful: this is not the same as the group of $(2^{\nb})!$ permutations of states with $\nb$ bits.) \begin{center} \epsfig{file=Fig5-Compiler.eps} {Fig.5 Circuit symbol for Exchanger.} \end{center} \begin{center} \epsfig{file=Fig6-Compiler.eps} {Fig.6 Circuit diagram for Eq.(3.12).} \end{center} \begin{center} \epsfig{file=Fig7-Compiler.eps} {Fig.7 Circuit diagram for Eq.(3.14).} \end{center} An example of how one can use Eq.(3.11) is \begin{equation} E(2, 0) \odot [ \cnotyes{1}{0} \cnotyes{0}{2} \cnotyes{2}{3} ] = \cnotyes{1}{2} \cnotyes{2}{0} \cnotyes{0}{3} \;. \eqlabel{3.12}\end{equation} Figure 5 gives a convenient way of representing $E(\alpha, \beta)$ diagrammatically. Using this symbol, the example of Eq.(3.12) can be represented by Fig.6. Of course, identities that are true for a general transposition are also true for $E(\alpha, \beta)$. For example, \begin{equation} (2,0) = (2,1)(1,0)(2,1) \;. \eqlabel{3.13}\end{equation} Therefore, \begin{equation} E(2,0) = E(2,1)E(1,0)E(2,1) \;. \eqlabel{3.14}\end{equation} Figure 7 is a diagrammatic representation of Eq.(3.14). \section*{4. Decomposing Central Matrix into SEO} \mbox{}\indent In Section 1(c), we gave only a partial description of our algorithm. In this section, we complete that description by showing how to decompose each of the 3 possible kinds of central matrices into a SEO. \subsection*{4(a)When Central Matrix is a Single D Matrix} \mbox{}\indent D matrices are defined by Eqs.(1.2). They can be expressed in terms of projection operators as follows: \begin{equation} D = \sum_{\va \in Bool^{\nb-1}} \exp(i \phi_\va \sy )\otimes P_\va \;, \eqlabel{4.1}\end{equation} where the $\phi_\va$ are real numbers. Note that in Eq.(4.1), $\va$ has $\nb -1$ components instead of the full $\nb$. Using the identity Eq.(2.27), one gets \begin{equation} D = \exp \left ( i \sum_{\va\in Bool^{\nb - 1} } \phi_{\va} \sy \otimes P_\va \right ) \;. \eqlabel{4.2}\end{equation} Now define new angles $\theta_\vb$ by \begin{equation} \phi_\va = \sum_{\vb \in Bool^{\nb -1} } (-1)^{\va\cdot\vb} \theta_\vb \;. \eqlabel{4.3}\end{equation} Suppose $\vec{\phi}$ (ditto, $\vec{\theta}$) is a column vector whose components are the numbers $\phi_\va$ (ditto, $\theta_\va$) arranged in order of increasing $\va$. Then Eq.(4.3) is equivalent to \begin{equation} \vec{\phi} = H_{\nb -1} \vec{\theta} \;. \eqlabel{4.4}\end{equation} This is easily inverted to \begin{equation} \vec{\theta} = \frac{1}{ 2^{\nb -1} } H_{\nb - 1} \vec{\phi} \;. \eqlabel{4.5}\end{equation} Let $A_\vb$ for $\vb \in Bool^{\nb - 1}$ be defined by \begin{equation} A_\vb = \exp \left ( i \theta_\vb \sy \otimes \sum_{\va \in Bool^{\nb-1}} (-1)^{\va\cdot\vb} P_\va \right ) \;. \eqlabel{4.6}\end{equation} Then $D$ can be written as \begin{equation} D = \prod_{ \vb \in Bool^{\nb -1} } A_\vb \;. \eqlabel{4.7}\end{equation} Note that the $A_\vb$ operators on the right side commute, so the order in which they are multiplied is irrelevant.
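Before turning the $A_\vb$ into c-nots and qubit rotations, it is reassuring to verify Eqs.(4.1)-(4.7) numerically. The sketch below (plain numpy and SciPy; the helper names are our own) builds $D$ from random angles for $\nb = 3$, computes $\vec{\theta}$ via Eq.(4.5), and checks the product formula Eq.(4.7).
\begin{verbatim}
import numpy as np
from functools import reduce
from scipy.linalg import expm

nb = 3
sy = np.array([[0, -1j], [1j, 0]])
phi = np.random.default_rng(0).uniform(0, 2 * np.pi, 2 ** (nb - 1))

def proj(a, m):
    """P_a of Eq.(2.23) on m bits, with a given as a decimal label."""
    bits = [(a >> (m - 1 - k)) & 1 for k in range(m)]  # leftmost bit first
    return reduce(np.kron, [np.diag([1.0 - b, 1.0 * b]) for b in bits])

# D from Eq.(4.1)
D = sum(np.kron(expm(1j * phi[a] * sy), proj(a, nb - 1))
        for a in range(2 ** (nb - 1)))

# theta from Eq.(4.5), with H_{nb-1} built as in Eq.(2.3c)
H = reduce(np.kron, [np.array([[1, 1], [1, -1]])] * (nb - 1))
theta = H @ phi / 2 ** (nb - 1)

def dot(a, b):                    # bitwise a . b (only its parity matters)
    return bin(a & b).count("1")

def A(b):
    """A_b of Eq.(4.6)."""
    Z = sum((-1) ** dot(a, b) * proj(a, nb - 1)
            for a in range(2 ** (nb - 1)))
    return expm(1j * theta[b] * np.kron(sy, Z))

prod = reduce(np.matmul, [A(b) for b in range(2 ** (nb - 1))])
assert np.allclose(prod, D)       # Eq.(4.7)
\end{verbatim}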
Next we establish 2 useful identities: If $\beta\in Z_{0, \nb -1} $ and $\vec{u}(\beta) \in Bool^{\nb }$ is the $\beta$'th standard unit vector, then \begin{equation} \begin{array}{l} \sum_{\va \in Bool^{\nb } } (-1)^{\va \cdot \vec{u}(\beta) } P_\va \\ = I_2 \otimes \cdots\otimes I_2\otimes \left [\sum_{a_\beta \in Bool} (-1)^{a_\beta} P_{a_\beta} \right ] \otimes I_2 \otimes \cdots\otimes I_2\\ = P_0(\beta) - P_1(\beta)\\ = \sz(\beta) \end{array} \;. \eqlabel{4.8}\end{equation} If $\beta, \alpha \in \usualz $ and $\alpha \neq \beta$, then \begin{equation} \begin{array}{l} \cnotyes{\alpha}{\beta} \odot \sy(\beta)\\ = [ \sx(\beta) P_1(\alpha) + P_0(\alpha) ] \odot \sy(\beta)\\ = \sy(\beta) [ - P_1(\alpha) + P_0(\alpha) ]\\ = \sy(\beta) \sz(\alpha) \end{array} \;. \eqlabel{4.9}\end{equation} Now we are ready to express $A_\vb$ in terms of elementary operators. For any $\vb \in Bool^{\nb -1} $, we can write \begin{equation} \vb = \sum_{j=0}^{r-1} \vec{u}(\beta_j) \;, \eqlabel{4.10}\end{equation} where \begin{equation} \nb-2 \geq \beta_{r-1} > \cdots > \beta_1 > \beta_0 \geq 0 \;. \eqlabel{4.11}\end{equation} In other words, $\vb$ has bit value of 1 at bit positions $\beta_j$. At all other bit positions, $\vb$ has bit value of 0. $r$ is the number of bits in $\vb$ whose value is 1. When $\vb =0$, $r=0$. Applying Eqs.(4.8) and (4.9), one gets \begin{equation} \begin{array}{l} [ \cnotyes{\beta_{r-1}}{\nb -1 } \cdots \cnotyes{\beta_2}{\nb -1 } \cnotyes{\beta_1}{\nb -1 } ] \odot \sy (\nb -1)\\ = \sy(\nb -1) \prod_{j=0}^{r-1} \sz(\beta_j)\\ = \prod_{j=0}^{r-1} \left( \sum_{\va } (-1)^{\va\cdot\vec{u}(\beta_j)} \sy \otimes P_\va \right )\\ = \sum_{\va } (-1)^{\va \cdot \vb} \sy \otimes P_\va\ \end{array} \;. \eqlabel{4.12}\end{equation} Thus, \begin{equation} A_\vb = [ \cnotyes{\beta_{r-1}}{\nb -1 } \cdots \cnotyes{\beta_2}{\nb -1 } \cnotyes{\beta_1}{\nb -1 } ] \odot \exp[ i\theta_\vb \sy(\nb -1) ] \;. \eqlabel{4.13}\end{equation} There are other ways of decomposing $A_\vb$ into a SEO. For example, using the above method, one can also show that \begin{equation} A_\vb = [ \cnotyes{\beta_{r-1}}{\beta_{r-2} } \cdots \cnotyes{\beta_2}{\beta_1} \cnotyes{\beta_1}{\beta_0} ] \odot \exp[ i\theta_\vb \sy(\beta_0) ] \;. \eqlabel{4.14}\end{equation} In conclusion, we have shown how to decompose a D matrix into a SEO. For example, suppose $\nb = 3$. Then \begin{equation} D = \sum_{a,b \in Bool} \exp ( i \phi_{ab} \sy ) \otimes P_a \otimes P_b \;. \eqlabel{4.15}\end{equation} Define $\vec{\theta}$ by \begin{equation} \vec{\theta} = \frac{1}{4} H_2 \vec{\phi} \;. \eqlabel{4.16}\end{equation} By Eqs.(4.7) and (4.13), \begin{equation} D = A_{00} A_{01} A_{10} A_{11} \;, \eqlabel{4.17}\end{equation} where \begin{equation} A_{00} = \exp( i \theta_{00} \sy ) \otimes I_2 \otimes I_2 \;, \eqlabel{4.18a}\end{equation} \begin{equation} A_{01} = \cnotyes{0}{2}\odot [\exp( i \theta_{01} \sy ) \otimes I_2 \otimes I_2 ] \;, \eqlabel{4.18b}\end{equation} \begin{equation} A_{10} = \cnotyes{1}{2}\odot [\exp( i \theta_{10} \sy ) \otimes I_2 \otimes I_2 ] \;, \eqlabel{4.18c}\end{equation} \begin{equation} A_{11} = [\cnotyes{1}{2} \cnotyes{0}{2}] \odot [\exp( i \theta_{11} \sy ) \otimes I_2 \otimes I_2 ] \;. \eqlabel{4.18d}\end{equation} \subsection*{4(b)When Central Matrix is a Direct Sum of D Matrices} \mbox{}\indent Consider first the case $\nb = 3$. Let $R(\phi) = \exp(i \sy \phi) $. 
Previously we used the fact that any D matrix $D$ can be expressed as \begin{equation} D = \sum_{a, b\in Bool} R(\phi_{ab}'') \otimes P_a \otimes P_b \;. \eqlabel{4.19}\end{equation} But what if $R$ were located at bit positions 0 or 1 instead of 2? By expressing both sides of the following equations as $8 \times 8$ matrices, one can show that \begin{equation} D_0 \oplus D_1 = \sum_{a,b\in Bool} P_a \otimes R(\phi_{ab}') \otimes P_b \;, \eqlabel{4.20}\end{equation} \begin{equation} D_{00}\oplus D_{01}\oplus D_{10}\oplus D_{11} = \sum_{a,b\in Bool} P_a \otimes P_b \otimes R(\phi_{ab}) \;, \eqlabel{4.21}\end{equation} where the $D_j$ and $D_{ij}$ are D matrices. One can apply a string of Exchangers to move $R$ in Eqs.(4.20) and (4.21) to any bit position. Thus, \begin{equation} D_0 \oplus D_1 = E(1,2) \odot \left ( \sum_{a,b\in Bool} R(\phi_{ab}') \otimes P_a \otimes P_b \right ) \;, \eqlabel{4.22}\end{equation} \begin{equation} D_{00}\oplus D_{01}\oplus D_{10}\oplus D_{11} = E(0,1) E(1,2) \odot \left ( \sum_{a,b\in Bool} R(\phi_{ab}) \otimes P_a \otimes P_b \right ) \;. \eqlabel{4.23}\end{equation} (Careful: $E(0, 2) \neq E(0,1) E(1,2)$. $E(0,2)$ will change $\sum_{a,b} R(\phi_{ab})\otimes P_a \otimes P_b$ to $\sum_{a,b} P_b \otimes P_a \otimes R(\phi_{ab})$, which is not the same as the left side of Eq.(4.21) ). For general $\nb \geq 1$, if $k\in \usualz$ and \begin{equation} E = \left \{ \begin{array}{l} 1 \;\;\;\; {\rm if} \;\; k=0 \;\; {\rm or} \;\; \nb=1\\ E(\nb - k - 1, \nb -k)\cdots E(\nb - 3, \nb - 2) E(\nb - 2, \nb - 1) \;\;\;\; {\rm otherwise} \end{array} \right . \;, \eqlabel{4.24}\end{equation} then a direct sum of $2^k$ D matrices can be expressed as \begin{equation} E\odot \left ( \sum_{\va\in Bool^{\nb -1}} R(\phi_{\va}) \otimes P_\va \right ) \;. \eqlabel{4.25}\end{equation} It follows that if we want to decompose a direct sum of D matrices into a SEO, we can do so in 2 steps: (1) decompose into a SEO the D matrix that one obtains by moving the qubit rotation to bit position $\nb -1$, (2) Replace each bit name in the decomposition by its ``alias". By alias we mean the new name assigned by the bit permutation $E$ defined by Eq.(4.24). \subsection*{4(c)When Central Matrix is a Diagonal Unitary Matrix} \mbox{}\indent Any diagonal unitary matrix $C$ can be expressed as \begin{equation} C = \sum_{\va \in Bool^{\nb}} \exp(i \phi_\va) P_\va \;, \eqlabel{4.26}\end{equation} where the $\phi_\va$ are real numbers. Using the identity Eq.(2.27) yields \begin{equation} C = \exp \left ( i \sum_{ \va \in Bool^\nb } \phi_\va P_\va \right ) \;. \eqlabel{4.27}\end{equation} Now define new angles $\theta_\vb$ by \begin{equation} \phi_\va = \sum_{\vb \in Bool^{\nb} } (-1)^{\va\cdot\vb} \theta_\vb \;. \eqlabel{4.28}\end{equation} In terms of vectors, \begin{equation} \vec{\phi} = H_{\nb} \vec{\theta} \;, \eqlabel{4.29}\end{equation} and \begin{equation} \vec{\theta} = \frac{1}{ 2^{\nb} } H_{\nb} \vec{\phi} \;. \eqlabel{4.30}\end{equation} Let $A_\vb$ for $\vb \in Bool^\nb$ be defined by \begin{equation} A_\vb = \exp \left( i \theta_\vb \sum_{\va \in Bool^{\nb}} (-1)^{\va\cdot\vb} P_\va \right) \;. \eqlabel{4.31}\end{equation} Then $C$ can be written as \begin{equation} C = \prod_{ \vb \in Bool^{\nb} } A_\vb \;, \eqlabel{4.32}\end{equation} where the $A_\vb$ operators commute. For any $\vb \in Bool^\nb$, we can write \begin{equation} \vb = \sum_{j=0}^{r-1} \vec{u}(\beta_j) \;, \eqlabel{4.33}\end{equation} where \begin{equation} \nb-1 \geq \beta_{r-1} > \cdots > \beta_1 > \beta_0 \geq 0 \;. 
\eqlabel{4.34}\end{equation} (Careful: Compare this with Eq.(4.11). Now $\vb\in Bool^\nb$ instead of $Bool^{\nb -1}$ and $\beta_{r-1}$ can be as large as $\nb-1$ instead of $\nb-2$.) One can show using the techniques of Section 4(a) that \begin{equation} A_\vb = \left \{ \begin{array}{l} \exp[i \theta_0] \;\;\;\; {\rm if} \;\;\; r=0 \\ \exp[i \theta_\vb \sz(\beta_0)] \;\;\;\; {\rm if} \;\;\; r=1 \\ \left [\cnotyes{\beta_{r-1}}{\beta_0 } \cdots \cnotyes{\beta_2}{\beta_0} \cnotyes{\beta_1}{\beta_0} \right ]\odot \exp[ i\theta_\vb \sz(\beta_0) ] \;\;\;\;{\rm if}\;\;\; r\geq 2 \end{array} \right . \;. \eqlabel{4.35}\end{equation} As in Section 4(a), there are other ways of decomposing $A_\vb$ into a SEO. In conclusion, we have shown how to decompose a diagonal unitary matrix into a SEO. For example, suppose $\nb = 2$. Then \begin{equation} C = diag( e^{i \phi_{00}}, e^{i \phi_{01}}, e^{i \phi_{10}},e^{i \phi_{11}} ) \;. \eqlabel{4.36}\end{equation} Define $\vec{\theta}$ by \begin{equation} \vec{\theta} = \frac{1}{4} H_2 \vec{\phi} \;. \eqlabel{4.37}\end{equation} By Eqs.(4.32) and (4.35), \begin{equation} C = A_{00} A_{01} A_{10} A_{11} \;, \eqlabel{4.38}\end{equation} where \begin{equation} A_{00} = \exp( i \theta_{00} ) \;, \eqlabel{4.39a}\end{equation} \begin{equation} A_{01} = I_2 \otimes \exp( i \theta_{01} \sz ) \;, \eqlabel{4.39b}\end{equation} \begin{equation} A_{10} = \exp( i \theta_{10} \sz ) \otimes I_2 \;, \eqlabel{4.39c}\end{equation} \begin{equation} A_{11} = \cnotyes{1}{0} \odot [ I_2 \otimes \exp( i \theta_{11} \sz ) ] \;. \eqlabel{4.39d}\end{equation} \section*{5. Qubiter} \mbox{}\indent At present, Qubiter is a very rudimentary program. We hope that its fans will extend and enhance it in the future. Qubiter1.0 is written in pure C++, and has no graphical user interface. In its ``compiling" mode, Qubiter takes as input a file with the entries of a unitary matrix and returns as output a file with a SEO. In its ``decompiling" mode, it does the opposite: it takes a SEO file and returns the entries of a matrix. The lines in a SEO file are of 4 types: \begin{description} \item{(a) PHAS \qquad $ang$}\newline where $ang$ is a real number. This signifies a phase factor $\exp(i(ang)\frac{\pi}{180})$. \item{(b) CNOT \qquad $\alpha$ \qquad $char$ \qquad $\beta$}\newline where $\alpha, \beta\in \usualz$ and $char\in \{ T, F \}$. $T$ and $F$ stand for true and false. If $char$ is the character $T$, this signifies $\cnotyes{\alpha}{\beta}$. Read it as ``c-not: if $\alpha$ is true, then flip $\beta$." If $char$ is the character $F$, this signifies $\cnotno{\alpha}{\beta}$. Read it as ``c-not: if $\alpha$ is false, then flip $\beta$." \item{(c) ROTY \qquad $\alpha$ \qquad $ang$}\newline where $\alpha \in \usualz$ and $ang$ is a real number. This signifies the rotation of qubit $\alpha$ about the Y axis by an angle $ang$ in degrees. In other words, $\exp(i\sy(\alpha)ang\frac{\pi}{180})$. \item{(d) ROTZ \qquad $\alpha$ \qquad $ang$}\newline This is the same as (c) except that the rotation is about the Z axis instead of the Y one. \end{description} As a example, consider what Qubiter gives for the Discrete Fourier Transform matrix $U$. This matrix has entries \begin{equation} U_{\va, \vb} = \frac{1}{\sqrt{\ns}} \exp \left[ \frac{i 2\pi d(\va) d(\vb) }{\ns} \right ] \;, \eqlabel{5.1}\end{equation} where $\va, \vb \in Bool^{\nb}$. $\ns$ and $d(\cdot)$ were defined in Section 2(a). When $\nb = 2$, Qubiter gives 33 operations (see Fig. 8). 
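For readers who want to repeat this experiment, the $\nb = 2$ input matrix of Eq.(5.1) can be generated in a few lines (a numpy sketch; Qubiter itself reads the matrix entries from a file).
\begin{verbatim}
import numpy as np

nb = 2
ns = 2 ** nb
d = np.arange(ns)              # d(va) for the ns bit strings: 0, ..., ns-1
U = np.exp(2j * np.pi * np.outer(d, d) / ns) / np.sqrt(ns)   # Eq.(5.1)

assert np.allclose(U.conj().T @ U, np.eye(ns))               # U is unitary
\end{verbatim}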
After doing the trivial optimization of removing all factors $A_\vb$ for which the rotation angle is zero, the 33 operations in Fig.8 reduce to 25 operations in Fig.9. In the future, we plan to introduce into Qubiter many more optimizations and some quantum error correction. Much work remains to be done. \begin{center} \epsfig{file=Fig8-Compiler.eps} {Fig.8 Output of Qubiter with zero angle optimization turned OFF. (input: 2-bit Discrete Fourier Transform matrix). } \end{center} \begin{center} \epsfig{file=Fig9-Compiler.eps} {Fig.9 Output of Qubiter with zero angle optimization turned ON. (input: 2-bit Discrete Fourier Transform matrix). } \end{center} \section*{FIGURE CAPTIONS:} \begin{description} \item{\sc Fig.1} A binary tree. Each node $\beta$ has a single parent. If the parent is to $\beta$'s right (ditto, left), then $\beta$ contains the names of the matrices produced by applying the CS Decomposition Theorem to the $L$ matrices (ditto, $R$ matrices) of $\beta$'s parent. \item{\sc Fig.2} Circuit symbols for the 4 different types of c-nots. \item{\sc Fig.3} Four equivalent circuit diagrams for Exchanger. \item{\sc Fig.4} Four equivalent circuit diagrams for Twin-to-twin-er. \item{\sc Fig.5} Circuit symbol for Exchanger. \item{\sc Fig.6} Circuit diagram for Eq.(3.12). \item{\sc Fig.7} Circuit diagram for Eq.(3.14). \item{\sc Fig.8} Output of Qubiter with zero angle optimization turned OFF. (input: 2-bit Discrete Fourier Transform matrix). \item{\sc Fig.9} Output of Qubiter with zero angle optimization turned ON. (input: 2-bit Discrete Fourier Transform matrix). \end{description} \end{document}
\begin{document} \title{The SQ-universality and residual properties of relatively hyperbolic groups} \author{G. Arzhantseva \and A. Minasyan \thanks{The work of the first two authors was supported by the Swiss National Science Foundation Grant $\sharp$~PP002-68627.} \and D. Osin \thanks{The work of the third author has been supported by the Russian Foundation for Basic Research Grant $\sharp$~03-01-06555.}} \date{} \maketitle \begin{abstract} In this paper we study residual properties of relatively hyperbolic groups. In particular, we show that if a group $G$ is non-elementary and hyperbolic relative to a collection of proper subgroups, then $G$ is SQ-universal. \end{abstract} \section{Introduction} The notion of a group hyperbolic relative to a collection of subgroups was originally suggested by Gromov \cite{Gro} and since then it has been elaborated from different points of view \cite{Bow,F,DSO,RHG}. The class of relatively hyperbolic groups includes many examples. For instance, if $M$ is a complete finite-volume manifold of pinched negative sectional curvature, then $\pi _1(M)$ is hyperbolic with respect to the cusp subgroups \cite{Bow,F}. More generally, if $G$ acts isometrically and properly discontinuously on a proper hyperbolic metric space $X$ so that the induced action of $G$ on $\partial X$ is geometrically finite, then $G$ is hyperbolic relative to the collection of maximal parabolic subgroups \cite{Bow}. Groups acting on $CAT(0)$ spaces with isolated flats are hyperbolic relative to the collection of flat stabilizers \cite{KH}. Algebraic examples of relatively hyperbolic groups include free products and their small cancellation quotients \cite{RHG}, fully residually free groups (or Sela's limit groups) \cite{Dah}, and, more generally, groups acting freely on $\mathbb R^n$-trees \cite{Gui}. The main goal of this paper is to study residual properties of relatively hyperbolic groups. Recall that a group $G$ is called {\it SQ-universal} if every countable group can be embedded into a quotient of $G$ \cite{Sch}. It is straightforward to see that any SQ-universal group contains an infinitely generated free subgroup. Furthermore, since the set of all finitely generated groups is uncountable and every single quotient of $G$ contains (at most) countably many finitely generated subgroups, every SQ-universal group has uncountably many non-isomorphic quotients. Thus the property of being SQ-universal may, in a very rough sense, be considered as an indication of ``largeness'' of a group. The first non-trivial example of an SQ-universal group was provided by Higman, Neumann and Neumann \cite{HNN}, who proved that the free group of rank $2$ is SQ-universal. Presently many other classes of groups are known to be SQ-universal: various HNN-extensions and amalgamated products \cite{FT,Los,Sas}, groups of deficiency $2$ \cite{BP}, most $C(3)\, \& \, T(6)$-groups \cite{How}, etc. The SQ-universality of non-elementary hyperbolic groups was proved by Olshanskii \cite{Ols} and, independently, by Delzant \cite{Delzant}. On the other hand, for relatively hyperbolic groups, there are some partial results. Namely, in \cite{Fine} Fine proved the SQ-universality of certain Kleinian groups. The case of fundamental groups of hyperbolic 3-manifolds was studied by Ratcliffe in \cite{Rat}. In this paper we prove the SQ-universality of relatively hyperbolic groups in the most general settings. 
Let a group $G$ be hyperbolic relative to a collection of subgroups $\{ H_\lambda \} _{ \lambda \in \Lambda }$ (called {\it peripheral subgroups}). We say that $G$ is {\it properly hyperbolic relative to $\{ H_\lambda \} _{ \lambda \in \Lambda }$} (or $G$ is a {\it PRH group} for brevity), if $H_\lambda \ne G$ for all $\lambda \in \Lambda $. Recall that a group is {\it elementary}, if it contains a cyclic subgroup of finite index. We observe that every non-elementary PRH group has a unique maximal finite normal subgroup denoted by $E_G(G)$ (see Lemmas \ref{prop} and \ref{Ashot's Lemma} below). \begin{thm}\label{SQ} Suppose that a group $G$ is non-elementary and properly relatively hyperbolic with respect to a collection of subgroups $\{ H_\lambda \} _{ \lambda \in \Lambda }$. Then for each finitely generated group $R$, there exists a quotient group $Q$ of $G$ and an embedding $R\hookrightarrow Q$ such that: \begin{enumerate} \item $Q$ is properly relatively hyperbolic with respect to the collection $\{ \psi(H_\lambda) \} _{\lambda \in \Lambda } \cup \{R\}$ where $\psi\colon G \to Q$ denotes the natural epimorphism; \item For each $\lambda \in \Lambda$, we have $H_\lambda \cap \ker(\psi)= H_\lambda \cap E_G(G)$, that is, $\psi(H_\lambda)$ is naturally isomorphic to $H_\lambda/(H_\lambda \cap E_G(G))$. \end{enumerate} \end{thm} In general, we can not require the epimorphism $\psi$ to be injective on every $H_\lambda$. Indeed, it is easy to show that a finite normal subgroup of a relatively hyperbolic group must be contained in each infinite peripheral subgroup (see Lemma \ref{lem:qbf}). Thus the image of $E_G(G)$ in $Q$ will have to be inside $R$ whenever $R$ is infinite. If, in addition, the group $R$ is torsion-free, the latter inclusion implies $E_G(G) \le \ker(\psi)$. This would be the case if one took $G=F_2 \times \mathbb{Z}/(2\mathbb{Z})$ and $R=\mathbb{Z}$, where $F_2 $ denotes the free group of rank $2$ and $G$ is properly hyperbolic relative to its subgroup $\mathbb{Z}/(2\mathbb{Z})=E_G(G)$. Since any countable group is embeddable into a finitely generated group, we obtain the following. \begin{cor}\label{SQ-cor} Any non-elementary PRH group is SQ-universal. \end{cor} Let us mention a particular case of Corollary \ref{SQ-cor}. In \cite{FT} the authors asked whether every finitely generated group with infinite number of ends is SQ-universal. The celebrated Stallings theorem \cite{Stall} states that a finitely generated group has infinite number of ends if and only if it splits as a nontrivial HNN-extension or amalgamated product over a finite subgroup. The case of amalgamated products was considered by Lossov who provided the positive answer in \cite{Los}. Corollary \ref{SQ-cor} allows us to answer the question in the general case. Indeed, every group with infinite number of ends is non-elementary and properly relatively hyperbolic, since the action of such a group on the corresponding Bass-Serre tree satisfies Bowditch's definition of relative hyperbolicity \cite{Bow}. \begin{cor} A finitely generated group with infinite number of ends is SQ-universal. \end{cor} The methods used in the proof of Theorem \ref{SQ} can also be applied to obtain other results: \begin{thm}\label{comquot} Any two finitely generated non-elementary PRH groups $G_1, G_2$ have a common non-elementary PRH quotient $Q$. Moreover, $Q$ can be obtained from the free product $G_1\ast G_2$ by adding finitely many relations. 
\end{thm} In \cite{Ols00} Olshanskii proved that any non-elementary hyperbolic group has a non-trivial finitely presented quotient without proper subgroups of finite index. This result was used by Lubotzky and Bass \cite{BL} to construct representation rigid linear groups of non-arithmetic type thus solving in negative the Platonov Conjecture. Theorem \ref{comquot} yields a generalization of Olshanskii's result. \begin{defn} Given a class of groups $\mathcal G$, we say that a group $R$ is {\it residually incompatible with} $\mathcal G$ if for any group $A\in \mathcal G$, any homomorphism $R\to A$ has a trivial image. \end{defn} If $G$ and $R$ are finitely presented groups, $G$ is properly relatively hyperbolic, and $R$ is residually incompatible with a class of groups $\mathcal G$, we can apply Theorem \ref{comquot} to $G_1=G$ and $G_2=R\ast R$. Obviously, the obtained common quotient of $G_1$ and $G_2$ is finitely presented and residually incompatible with $\mathcal G$. \begin{cor}\label{ResInc} Let $\mathcal G$ be a class of groups. Suppose that there exists a finitely presented group $R$ that is residually incompatible with $\mathcal G$. Then every finitely presented non-elementary PRH group has a non-trivial finitely presented quotient group that is residually incompatible with $\mathcal G$. \end{cor} Recall that there are finitely presented groups having no non-trivial recursively presented quotients with decidable word problem \cite{Mil}. Applying the previous corollary to the class $\mathcal G$ of all recursively presented groups with decidable word problem, we obtain the following result. \begin{cor} Every non-elementary finitely presented PRH group has an infinite finitely presented quotient group $Q$ such that the word problem is undecidable in each non-trivial quotient of $Q$. \end{cor} In particular, $Q$ has no proper subgroups of finite index. The reader can easily check that Corollary \ref{ResInc} can also be applied to the classes of all torsion (torsion-free, Noetherian, Artinian, amenable, etc.) groups. \section{Relatively hyperbolic groups} We recall the definition of relatively hyperbolic groups suggested in \cite{RHG} (for equivalent definitions in the case of finitely generated groups see \cite{Bow,DSO,F}). Let $G$ be a group, $\{ H_\lambda \} _{ \lambda \in \Lambda }$ a fixed collection of subgroups of $G$ (called {\it peripheral subgroups}), $X$ a subset of $G$. We say that $X$ is a {\it relative generating set of $G$} with respect to $\{ H_\lambda \} _{ \lambda \in \Lambda }$ if $G$ is generated by $X$ together with the union of all $H_\lambda $ (for convenience, we always assume that $X=X^{-1}$). In this situation the group $G$ can be considered as a quotient of the free product \begin{equation} F=\left( \ast _{\lambda\in \Lambda } H_\lambda \right) \ast F(X), \label{F} \end{equation} where $F(X)$ is the free group with the basis $X$. Suppose that $\mathcal R $ is a subset of $F$ such that the kernel of the natural epimorphism $F\to G$ is a normal closure of $\mathcal R $ in the group $F$, then we say that $G$ has {\it relative presentation} \begin{equation}\label{G} \langle X,\; \{ H_\lambda\}_{\lambda\in \Lambda} \mid R=1,\, R\in\mathcal R \rangle . \end{equation} If sets $X$ and $\mathcal R$ are finite, the presentation (\ref{G}) is said to be {\it relatively finite}. \begin{defn} We set \begin{equation}\label{H} \mathcal H=\bigsqcup\limits_{\lambda\in \Lambda} (H_\lambda \setminus \{ 1\} ) . 
\end{equation} A group $G$ is {\it relatively hyperbolic with respect to a collection of subgroups} $\{ H_\lambda \} _{ \lambda \in \Lambda }$, if $G$ admits a relatively finite presentation (\ref{G}) with respect to $\{ H_\lambda \} _{ \lambda \in \Lambda }$ satisfying a {\it linear relative isoperimetric inequality}. That is, there exists $C>0$ satisfying the following condition. For every word $w$ in the alphabet $X\cup \mathcal H$ representing the identity in the group $G$, there exists an expression \begin{equation} w=_F\prod\limits_{i=1}^k f_i^{-1}R_i^{\pm 1}f_i \label{prod} \end{equation} with the equality in the group $F$, where $R_i\in \mathcal R$, $f_i\in F $, for $i=1, \ldots , k$, and $k\le C\| w\| $, where $\| w\| $ is the length of the word $w$. This definition is independent of the choice of the (finite) generating set $X$ and the (finite) set $\mathcal R$ in (\ref{G}). \end{defn} For a combinatorial path $p$ in the Cayley graph $\G$ of $G$ with respect to $X\cup \mathcal H$, $p_-$, $p_+$, $l(p)$, and ${\rm\bf Lab\, }(p)$ will denote the initial point, the ending point, the length (that is, the number of edges) and the label of $p$ respectively. Further, if $\Omega$ is a subset of $G$ and $g \in \langle \Omega \rangle \le G$, then $|g|_\Omega$ will be used to denote the length of a shortest word in $\Omega^{\pm 1}$ representing $g$. Let us recall some terminology introduced in \cite{RHG}. Suppose $q$ is a path in $\Gamma (G, X\cup \mathcal H)$. \begin{defn} A subpath $p$ of $q$ is called an {\it $H_\lambda $-component} for some $\lambda \in \Lambda $ (or simply a {\it component}) of $q$, if the label of $p$ is a word in the alphabet $H_\lambda\setminus \{ 1\} $ and $p$ is not contained in a bigger subpath of $q$ with this property. Two components $p_1, p_2$ of a path $q$ in $\Gamma (G, X\cup \mathcal H)$ are called {\it connected} if they are $H_\lambda $-components for the same $\lambda \in \Lambda $ and there exists a path $c$ in $\Gamma (G, X\cup \mathcal H)$ connecting a vertex of $p_1$ to a vertex of $p_2$ such that ${\phi (c)}$ entirely consists of letters from $ H_\lambda $. In algebraic terms this means that all vertices of $p_1$ and $p_2$ belong to the same coset $gH_\lambda $ for a certain $g\in G$. We can always assume $c$ to have length at most $1$, as every nontrivial element of $H_\lambda $ is included in the set of generators. An $H_\lambda $-component $p$ of a path $q$ is called {\it isolated } if no distinct $H_\lambda $-component of $q$ is connected to $p$. A path $q$ is said to be {\it without backtracking} if all its components are isolated. \end{defn} The next lemma is a simplification of Lemma 2.27 from \cite{RHG}. \begin{lem}\label{Omega} Suppose that a group $G$ is hyperbolic relative to a collection of subgroups $\{ H_\lambda \} _{ \lambda \in \Lambda }$. Then there exists a finite subset $\Omega \subseteq G$ and a constant $K\ge 0$ such that the following condition holds. Let $q$ be a cycle in $\Gamma (G, X\cup \mathcal H)$, $p_1, \ldots , p_k$ a set of isolated $H_\lambda $-components of $q$ for some $\lambda \in \Lambda $, $g_1, \ldots , g_k$ elements of $G$ represented by labels $\phi(p_1), \ldots , \phi(p_k)$ respectively. 
Then $g_1, \ldots , g_k$ belong to the subgroup $\langle \Omega\rangle \le G$ and the word lengths of $g_i$'s with respect to $\Omega $ satisfy the inequality $$ \sum\limits_{i=1}^k |g_i|_\Omega \le Kl(q).$$ \end{lem} \section{Suitable subgroups of relatively hyperbolic groups} Throughout this section let $G$ be a group which is properly hyperbolic relative to a collection of subgroups $\{ H_\lambda \} _{ \lambda \in \Lambda }$, $X$ a finite relative generating set of $G$, and $\Gamma (G, X\cup \mathcal H)$ the Cayley graph of $G$ with respect to the generating set $X\cup \mathcal H$, where $\mathcal H$ is given by (\ref{H}). Recall that an element $g\in G$ is called {\it hyperbolic} if it is not conjugate to an element of some $H_\lambda $, $\lambda\in \Lambda $. The following description of elementary subgroups of $G$ was obtained in \cite{ESBG}. \begin{lem}\label{Eg} Let $g$ be a hyperbolic element of infinite order of $G$. Then the following conditions hold. \begin{enumerate} \item The element $g$ is contained in a unique maximal elementary subgroup $E_G(g)$ of $G$, where \begin{equation} \label{eq:elem} E_G(g)=\{ f\in G\; :\; f^{-1}g^nf=g^{\pm n}\; {\rm for \; some\; } n\in \mathbb N\}. \end{equation} \item The group $G$ is hyperbolic relative to the collection $\Hl\cup \{ E_G(g)\} $. \end{enumerate} \end{lem} Given a subgroup $S\le G$, we denote by $S^0$ the set of all hyperbolic elements of $S$ of infinite order. Recall that two elements $f,g\in G^0$ are said to be {\it commensurable} (in G) if $f^k$ is conjugated to $g^l$ in $G$ for some non-zero integers $k$ and $l$. \begin{defn} A subgroup $S\le G$ is called {\it suitable}, if there exist at least two non-commensurable elements $f_1, f_2\in S^0$, such that $E_G(f_1)\cap E_G(f_2)=\{1\}$. \end{defn} If $S^0\ne \emptyset $, we define $$E_G(S)=\bigcap\limits_{g\in S^0} E_G(g).$$ \begin{lem} \label{Ashot's Lemma} If $S \le G$ is a non-elementary subgroup and $S^0 \neq \emptyset$, then $E_G(S)$ is the maximal finite subgroup of $G$ normalized by $S$. \end{lem} \begin{proof} Indeed, if a finite subgroup $M \le G$ is normalized by $S$, then $|S:C_S(M)|<\infty$ where $C_S(M)=\{g \in S\; : \; g^{-1}xg=x, ~\forall \; x \in M\}$. Formula \eqref{eq:elem} implies that $M\le E_G(g)$ for every $g\in S^0$, hence $M \le E_G(S)$. On the other hand, if $S$ is non-elementary and $S^0\ne \emptyset $, there exist $h \in S^0$ and $a \in S^0 \setminus E_G(h)$. Then $a^{-1}ha \in S^0$ and the intersection $E_G(a^{-1}ha) \cap E_G(h)$ is finite. Indeed if $E_G(a^{-1}ha) \cap E_G(h)$ were infinite, we would have $(a^{-1}ha)^n=h^k$ for some $n,k\in \mathbb Z\setminus \{ 0\}$, which would contradict to $a\notin E_G(h)$. Hence $E_G(S)\le E_G(a^{-1}ha) \cap E_G(h)$ is finite. Obviously, $E_G(S)$ is normalized by $S$ in $G$. \end{proof} The main result of this section is the following \begin{prop}\label{suit} Suppose that a group $G$ is hyperbolic relative to a collection $\{ H_\lambda \} _{ \lambda \in \Lambda }$ and $S$ is a subgroup of $G$. Then the following conditions are equivalent. \begin{enumerate} \item[(1)] $S$ is suitable; \item[(2)] $S^0\ne \emptyset $ and $E_G(S)=\{1\}$. \end{enumerate} \end{prop} Our proof of Proposition \ref{suit} will make use of several auxiliary statements below. 
\begin{lem}[Lemma 4.4, \cite{ESBG}]\label{ah} For any $\lambda \in \Lambda $ and any element $a\in G\setminus H_\lambda $, there exists a finite subset $\mathcal F_\lambda =\mathcal F_\lambda (a) \subseteq H_\lambda $ such that if $h\in H_\lambda \setminus \mathcal F_\lambda $, then $ah$ is a hyperbolic element of infinite order. \end{lem} It can be seen from Lemma \ref{Eg} that every hyperbolic element $g \in G$ of infinite order is contained inside the elementary subgroup $$E^+_G(g)=\{f \in G\; :\; f^{-1}g^nf=g^{n}\; {\rm for \; some\; } n\in \mathbb{N}\} \le E_G(g),$$ and $|E_G(g):E^+_G(g)| \le 2$. \begin{lem} \label{lem:E^+_G} Suppose $g_1,g_2 \in G^0$ are non-commensurable and $A=\langle g_1, g_2 \rangle \le G$. Then there exists an element $h \in A^0$ such that: \begin{enumerate} \item $h$ is not commensurable with $g_1$ and $g_2$; \item $E_G(h)=E^+_G(h) \le \langle h, E_G(g_1) \cap E_G(g_2) \rangle$. If, in addition, $E_G(g_j)=E^+_G(g_j)$, $j=1,2$, then $E_G(h)=E^+_G(h) =\langle h \rangle \times (E_G(g_1) \cap E_G(g_2))$. \end{enumerate} \end{lem} \begin{proof} By Lemma \ref{Eg}, $G$ is hyperbolic relative to the collection of peripheral subgroups $\mathfrak{C}_1=\{ H_\lambda \} _{ \lambda \in \Lambda }\cup \{E_G(g_1)\}\cup \{E_G(g_2)\}$. The center $Z(E^+_G(g_j))$ has finite index in $E^+_G(g_j)$, hence (possibly, after replacing $g_j$ with a power of itself) we can assume that $g_j \in Z(E^+_G(g_j))$, $j=1,2$. Using Lemma \ref{ah} we can find an integer $n_1 \in \mathbb{N}$ such that the element $g_3=g_2g_1^{n_1} \in A$ is hyperbolic relatively to $\mathfrak{C}_1$ and has infinite order. Applying Lemma \ref{Eg} again, we achieve hyperbolicity of $G$ relative to $\mathfrak{C}_2=\mathfrak{C}_1 \cup \{E_G(g_3)\}$. Set $\mathcal{H}'= \bigsqcup_{H \in \mathfrak{C}_2} (H \setminus \{1\})$. Let $\Omega \subset G$ be the finite subset and $K>0$ the constant chosen according to Lemma~\ref{Omega} (where $G$ is considered to be relatively hyperbolic with respect to $\mathfrak{C}_2$). Using Lemma \ref{ah} two more times, we can find numbers $m_1,m_2,m_3 \in \mathbb{N}$ such that \begin{equation} \label{eq:choose_m} g_i^{m_i} \notin \{y \in \langle \Omega \rangle \; : \; |y|_{\Omega} \le 21K\},\;\;\; i=1,2,3, \end{equation} and $h=g_1^{m_1}g_3^{m_3}g_2^{m_2}\in A$ is a hyperbolic element (with respect to $\mathfrak{C}_2$) and has infinite order. Indeed, first we choose $m_1$ to satisfy \eqref{eq:choose_m}. By Lemma \ref{ah}, there is $m_3$ satisfying \eqref{eq:choose_m}, so that $g_1^{m_1}g_3^{m_3} \in A^0$. Similarly $m_2$ can be chosen sufficiently big to satisfy \eqref{eq:choose_m} and $g_1^{m_1}g_3^{m_3}g_2^{m_2}\in A^0$. In particular, $h$ will be non-commensurable with $g_j$, $j=1,2$ (otherwise, there would exist $f \in G$ and $n \in \mathbb{N}$ such that $f^{-1}h^nf \in E(g_j)$, implying $h \in fE(g_j)f^{-1}$ by Lemma \ref{Eg} and contradicting the hyperbolicity of $h$). Consider a path $q$ labelled by the word $(g_1^{m_1} g_3^{m_3} g_2^{m_2})^l$ in {$\Gamma (G, X\cup \mathcal{H}')$} for some $l \in \mathbb{Z} \setminus \{0\}$, where each $g_i^{m_i}$ is treated as a single letter from $\mathcal{H}'$. After replacing $q$ with $q^{-1}$, if necessary, we assume that $l\in \mathbb{N}$. Let $p_1,\dots,p_{3l}$ be all components of $q$; by the construction of $q$, we have $l(p_j)=1$ for each $j$. Suppose not all of these components are isolated. 
Then one can find indices $1 \le s <t \le 3l$ and $i \in \{1,2,3\}$ such that $p_s$ and $p_t$ are $E_G(g_i)$-components of $q$, $(p_t)_-$ and $(p_s)_+$ are connected by a path $r$ with $\phi(r) \in E_G(g_i)$, $l(r)\le 1$, and $(t-s)$ is minimal with this property. To simplify the notation, assume that $i=1$ (the other two cases are similar). Then $p_{s+1}, p_{s+4}, \dots, p_{t-2}$ are isolated $E_G(g_3)$-components of the cycle $p_{s+1}p_{s+2}\dots p_{t-1} r$, and there are exactly $(t-s)/3 \ge 1$ of them. Applying Lemma \ref{Omega}, we obtain $g_3^{m_3} \in \langle \Omega \rangle$ and $$\frac{t-s}{3} |g_3^{m_3}|_\Omega \le K(t-s).$$ Hence $|g_3^{m_3}|_\Omega \le 3K,$ contradicting \eqref{eq:choose_m}. Therefore two distinct components of $q$ can not be connected with each other; that is, the path $q$ is without backtracking. To finish the proof of Lemma \ref{lem:E^+_G} we need an auxiliary statement below. Denote by $\mathcal{W}$ the set of all subwords of words $(g_1^{m_1} g_3^{m_3} g_2^{m_2})^l$, $l \in \mathbb{Z}$ (where $g_i^{\pm m_i}$ is treated as a single letter from $\mathcal H^\prime $). Consider an arbitrary cycle $o=rqr'q'$ in {$\Gamma (G, X\cup \mathcal{H}')$}, where $\phi(q), \phi(q') \in \mathcal{W}$; and set $C=\max\{ l(r),l(r')\}$. Let $p$ be a component of $q$ (or $q'$). We will say that $p$ is {\it regular} if it is not an isolated component of $o$. As $q$ and $q'$ are without backtracking, this means that $p$ is either connected to some component of $q'$ (respectively $q$), or to a component of $r$, or $r'$. \begin{lem}\label{lem:new_way} In the above notations \begin{itemize} \item[\rm (a)] if $C\le 1$ then every component of $q$ or $q'$ is regular; \item[\rm (b)] if $C\ge 2$ then each of $q$ and $q'$ can have at most $15C$ components which are not regular. \end{itemize} \end{lem} \begin{proof} Assume the contrary to (a). Then one can choose a cycle $o=rqr'q'$ with $l(r),l(r') \le 1$, having at least one $E(g_i)$-isolated component on $q$ or $q'$ for some $i\in \{1,2,3\}$, and such that $l(q)+l(q')$ is minimal. Clearly the latter condition implies that each component of $q$ or $q'$ is an isolated component of $o$. Therefore $q$ and $q'$ together contain $k$ distinct $E(g_i)$-components of $o$ where $k \ge 1$ and $k\ge \lfloor l(q)/3 \rfloor+\lfloor l(q')/3 \rfloor$. Applying Lemma \ref{Omega} we obtain $g_i^{m_i} \in \langle \Omega \rangle$ and $k|g_i^{m_i}|_\Omega \le K(l(q)+l(q')+2)$, therefore $|g_i^{m_i}|_\Omega \le 11 K$, contradicting the choice of $m_i$ in \eqref{eq:choose_m}. Let us prove (b). Suppose that $C \ge 2$ and $q$ contains more than $15C$ isolated components of $o$. We consider two cases: {\bf Case 1}. No component of $q$ is connected to a component of $q'$. Then a component of $q$ or $q'$ can be regular only if it is connected to a component of $r$ or $r'$. Since $q$ and $q'$ are without backtracking, two distinct components of $q$ or $q'$ can not be connected to the same component of $r$ (or $r'$). Hence $q$ and $q'$ together can contain at most $2C$ regular components. Thus there is an index $i \in \{1,2,3\}$ such that the cycle $o$ has $k$ isolated $E(g_i)$-components, where $k \ge \lfloor l(q)/3 \rfloor + \lfloor l(q')/3 \rfloor -2C \ge \lfloor 5C \rfloor-2C >2C>3$. 
By Lemma \ref{Omega}, $g_i^{m_i} \in \langle \Omega \rangle$ and $k|g_i^{m_i}|_\Omega \le K(l(q)+l(q')+2C)$, hence $$|g_i^{m_i}|_\Omega \le K\frac{3(\lfloor l(q)/3 \rfloor +1) + 3(\lfloor l(q')/3\rfloor+1)+2C}{\lfloor l(q)/3 \rfloor + \lfloor l(q')/3 \rfloor -2C} \le K \left(3+\frac{6+8C}{2C} \right) \le 9 K,$$ contradicting the choice of $m_i$ in \eqref{eq:choose_m}. {\bf Case 2.} The path $q$ has at least one component which is connected to a component of $q'$. Let $p_1,\dots,p_{l(q)}$ denote the sequence of all components of $q$. By part (a), if $p_{s}$ and $p_{t}$, $1 \le s \le t \le l(q)$, are connected to components of $q'$, then for any $j$, $s \le j \le t$, $p_j$ is regular. We can take $s$ (respectively $t$) to be minimal (respectively maximal) possible. Consequently $p_1,\dots,p_{s-1}, p_{t+1},\dots,p_{l(q)}$ will contain the set of all isolated components of $o$ that belong to $q$. Without loss of generality we may assume that $s-1 \ge 15C/2$. Since $p_s$ is connected to some component $p'$ of $q'$, there exists a path $v$ in {$\Gamma (G, X\cup \mathcal{H}')$} satisfying $v_-=(p_{s})_-$, $v_+=p'_+$, $\phi(v) \in \mathcal{H}'$, $l(v)=1$. Let $\bar q$ (respectively $\bar q'$) denote the subpath of $q$ (respectively $q'$) from $q_-$ to $(p_s)_-$ (respectively from $p'_+$ to $q'_+$). Consider a new cycle $\bar o = r \bar q v \bar q'$. Reasoning as before, we can find $i \in \{1,2,3\}$ such that $\bar o$ has $k$ isolated $E(g_i)$-components, where $k \ge \lfloor l(\bar q)/3 \rfloor + \lfloor l({\bar q}')/3 \rfloor -C-1 \ge \lfloor 15C/6 \rfloor-C-1 >C-1\ge 1$. Using Lemma \ref{Omega}, we get $g_i^{m_i} \in \langle \Omega \rangle$ and $k|g_i^{m_i}|_\Omega \le K(l(\bar q)+l(\bar q')+C+1)$. The latter inequality implies $|g_i^{m_i}|_\Omega \le 21K$, yielding a contradiction in the usual way and proving (b) for $q$. By symmetry this property holds for $q'$ as well. \end{proof} Continuing the proof of Lemma \ref{lem:E^+_G}, consider an element $x \in E_G(h)$. According to Lemma \ref{Eg}, there exists $l \in \mathbb{N}$ such that \begin{equation} \label{eq:xhx} xh^lx^{-1}=h^{\epsilon l}, \end{equation} where $\epsilon=\pm 1$. Set $C=|x|_{X\cup \mathcal{H}'}$. After raising both sides of \eqref{eq:xhx} in an integer power, we can assume that $l$ is sufficiently large to satisfy $l>32C+3$. Consider a cycle $o=rqr'q'$ in {$\Gamma (G, X\cup \mathcal{H}')$} satisfying $r_-=q'_+=1$, $r_+=q_-=x$, $q_+=r'_-=xh^l$, $r'_+=q'_-=xh^lx^{-1}$, $\phi(q) \equiv (g_1^{m_1} g_3^{m_3} g_2^{m_2})^l$, $\phi(q') \equiv (g_1^{m_1} g_3^{m_3} g_2^{m_2})^{-\epsilon l}$, $l(q)=l(q')=3l$, $l(r)=l(r')=C$. Let $p_1,p_2,\dots,p_{3l}$ and $p'_1,p'_2,\dots,p'_{3l}$ be all components of $q$ and $q'$ respectively. Thus, $p_3,p_6,p_9,\dots,p_{3l}$ are all $E_G(g_2)$-components of $q$. Since $l>17C$ and $q$ is without backtracking, by Lemma \ref{lem:new_way}, there exist indices $1 \le s,s' \le 3l$ such that the $E_G(g_2)$-component $p_s$ of $q$ is connected to the $E_G(g_2)$-component $p'_{s'}$ of $q'$. Without loss of generality, assume that $s \le 3l/2$ (the other situation is symmetric). There is a path $u$ in {$\Gamma (G, X\cup \mathcal{H}')$} with $u_-=(p'_{s'})_-$, $u_+=(p_s)_+$, $\phi(u) \in E_G(g_2)$ and $l(u)\le 1$. We obtain a new cycle $o' =up_{s+1} \dots p_{3l}r'p'_1\dots p'_{s'-1}$ in the Cayley graph {$\Gamma (G, X\cup \mathcal{H}')$}. 
Due to the choice of $s$ and $l$, the same argument as before will demonstrate that there are $E_G(g_2)$-components $p_{\bar s}$, $p'_{\bar s'}$ of $q$, $q'$ respectively, which are connected and $s<\bar s \le 3l$, $1 \le \bar s'<s'$ (in the case when $s>3l/2$, the same inequalities can be achieved by simply renaming the indices correspondingly). It is now clear that there exist $i \in \{1,2,3\}$ and connected $E_G(g_i)$-components $p_{t}$, $p'_{t'}$ of $q$, $q'$ ($s <t\le 3l$, $1\le t' <s'$) such that $t>s$ is minimal. Let $v$ denote a path in {$\Gamma (G, X\cup \mathcal{H}')$} with $v_-=(p_{t})_-$, $v_+=(p_{t'})_+$, $\phi(v) \in E_G(g_i)$ and $l(v) \le 1$. Consider a cycle $o''$ in {$\Gamma (G, X\cup \mathcal{H}')$} defined by $o''=up_{s+1} \dots p_{t-1}vp'_{t'+1}\dots p'_{s'-1}$. By part a) of Lemma \ref{lem:new_way}, $p_{s+1}$ is a regular component of the path $p_{s+1} \dots p_{t-1}$ in $o''$ (provided that $t-1\ge s+1$). Note that $p_{s+1}$ can not be connected to $u$ or $v$ because $q$ is without backtracking, hence it must be connected to a component of the path $p'_{t'+1}\dots p'_{s'-1}$. By the choice of $t$, we have $t=s+1$ and $i=1$. Similarly $t'=s'-1$. Thus $p_{s+1}=p_t$ and $p'_{s'-1}=p'_{t'}$ are connected $E_G(g_1)$-components of $q$ and $q'$. In particular, we have $\epsilon=1$. Indeed, otherwise we would have $\phi(p_{s'-1}) \equiv g_3^{m_3}$ but $g_3^{m_3} \notin E_G(g_1)$. Therefore $x \in E^+_G(h)$ for any $x \in E_G(h)$, consequently $E_G(h)=E^+_G(h)$. Observe that $u_-=v_+$ and $u_+=v_-$, hence $\phi(u)$ and $\phi(v)^{-1}$ represent the same element $z \in E_G(g_2) \cap E_G(g_1)$. By construction, $x=h^{\alpha}zh^{\beta}$ where $\alpha=(3l-s')/3 \in \mathbb{Z}$, and $\beta =-s/3 \in \mathbb{Z}$. Thus $x \in \langle h, E_G(g_1) \cap E_G(g_2) \rangle$ and the first part of the claim 2 is proved. Assume now that $E_G(g_j)=E^+_G(g_j)$ for $j=1,2$. Then $h=g_1^{m_1}(g_2g_1^{n_1})^{m_3}g_2^{m_2}$ belongs to the centralizer of the finite subgroup $E_G(g_1) \cap E_G(g_2)$ (because of the choice of $g_1,g_2$ above). Consequently $E_G(h)=\langle h \rangle \times (E_G(g_1) \cap E_G(g_2))$. \end{proof} \begin{lem} \label{lem:good-elem} Let $S$ be a non-elementary subgroup of $G$ with $S^0 \neq \emptyset$. Then \begin{enumerate} \item[\rm(i)] there exist non-commensurable elements $h_1, h'_1 \in S^0$ with $E_G(h_1) \cap E_G(h_1')=E_G(S)$; \item[\rm(ii)] $S^0$ contains an element $h$ such that $E_G(h)=\langle h \rangle \times E_G(S)$. \end{enumerate} \end{lem} \begin{proof} Choose an element $g_1 \in S^0$. By Lemma \ref{Eg}, $G$ is hyperbolic relative to the collection $\mathfrak{C}=\{ H_\lambda \} _{ \lambda \in \Lambda }\cup \{E_G(g_1)\}$. Since the subgroup $S$ is non-elementary, there is $a \in S \setminus E_G(g_1)$, and Lemma \ref{ah} provides us with an integer $n \in \mathbb{N}$ such that $g_2=ag_1^{n} \in S$ is a hyperbolic element of infinite order (now, with respect to the family of peripheral subgroups $\mathfrak{C}$). In particular, $g_1$ and $g_2$ are non-commensurable and hyperbolic relative to $\Hl$. Applying Lemma \ref{lem:E^+_G}, we find $h_1 \in S^0$ (with respect to the collection of peripheral subgroups $\Hl$) with $E_G(h_1)=E^+_G(h_1)$ such that $h_1$ is not commensurable with $g_j$, $j=1,2$. Hence, $g_1$ and $g_2$ stay hyperbolic after including $E_G(h_1)$ into the family of peripheral subgroups (see Lemma \ref{Eg}). 
This allows us to construct (in the same manner) one more element $h_2 \in \langle g_1,g_2\rangle \le S$ which is hyperbolic relative to $(\{ H_\lambda \} _{ \lambda \in \Lambda }\cup \{E_G(h_1)\})$ and satisfies $E_G(h_2)=E^+_G(h_2)$. In particular, $h_2$ is not commensurable with $h_1$. We claim now that there exists $x \in S$ such that $E_G(x^{-1}h_2x) \cap E_G(h_1)=E_G(S)$. By definition, $E_G(S) \subseteq E_G(x^{-1}h_2x) \cap E_G(h_1)$. To obtain the reverse inclusion, arguing by contradiction, suppose that for each $x \in S$ we have \begin{equation} \label{eq:inter-e} (E_G(x^{-1}h_2x) \cap E_G(h_1)) \setminus E_G(S) \neq \emptyset.\end{equation} Note that if $g\in S^0$ with $E_G(g)=E^+_G(g)$, then the set of all elements of finite order in $E_G(g)$ forms a finite subgroup $T(g) \le E_G(g)$ (this is a well-known property of groups, all of whose conjugacy classes are finite). The elements $h_1$ and $h_2$ are not commensurable, therefore $$E_G(x^{-1}h_2x) \cap E_G(h_1) =T(x^{-1}h_2x) \cap T(h_1)=x^{-1}T(h_2)x \cap T(h_1).$$ For each pair of elements $(b,a) \in D=T(h_2) \times (T(h_1)\setminus E_G(S))$ choose $x=x(b,a) \in S$ so that $x^{-1}bx=a$ if such $x$ exists; otherwise set $x(b,a)=1$. The assumption \eqref{eq:inter-e} clearly implies that $\displaystyle S=\bigcup_{(b,a) \in D} x(b,a) C_S(a)$, where $C_S(a)$ denotes the centralizer of $a$ in $S$. Since the set $D$ is finite, a well-known theorem of B. Neumann \cite{Neumann} implies that there exists $a \in T(h_1) \setminus E_G(S)$ such that $|S:C_S(a)|<\infty$. Consequently, $a \in E_G(g)$ for every $g \in S^0$, that is, $a \in E_G(S)$, a contradiction. Thus, $E_G(x^{-1}h_2x) \cap E_G(h_1)=E_G(S)$ for some $x \in S$. After setting $h'_1=x^{-1}h_2x \in S^0$, we see that the elements $h_1$ and $h'_1$ satisfy claim (i). Since $E_G(h'_1)=x^{-1}E_G(h_2)x$, we have $E_G(h'_1)=E^+_G(h'_1)$. To demonstrate (ii), it remains to apply Lemma \ref{lem:E^+_G} and obtain an element $h \in \langle h_1,h'_1 \rangle \le S$ which has the desired properties. \end{proof} \begin{proof}[Proof of Proposition \ref{suit}] The implication $(1) \Rightarrow (2)$ is an immediate consequence of the definition. The converse implication follows directly from the first claim of Lemma \ref{lem:good-elem} ($S$ is non-elementary as $S^0 \neq \emptyset$ and $E_G(S) =\{1\}$). \end{proof} \section{Proofs of the main results} The following simplification of Theorem 2.4 from \cite{SCT} is the key ingredient of the proofs in the rest of the paper. \begin{thm}\label{glue} Let $U$ be a group hyperbolic relative to a collection of subgroups $\{ V_\lambda \} _{ \lambda \in \Lambda }$, $S$ a suitable subgroup of $U$, and $T$ a finite subset of $U$. Then there exists an epimorphism $\eta \colon U\to W$ such that: \begin{enumerate} \item The restriction of $\eta $ to $\bigcup_{\lambda \in \Lambda} V_\lambda $ is injective, and the group $W$ is properly relatively hyperbolic with respect to the collection $\{\eta (V_\lambda )\}_{\lambda \in \Lambda }$. \item For every $t\in T$, we have $\eta (t)\in \eta (S)$. \end{enumerate} \end{thm} Let us also mention two known results we will use. The first lemma is a particular case of Theorem 1.4 from \cite{RHG} (if $g \in G$ and $H \le G$, $H^g$ denotes the conjugate $g^{-1}Hg \le G$). \begin{lem}\label{malnorm} Suppose that a group $G$ is hyperbolic relative to a collection of subgroups $\{ H_\lambda \} _{ \lambda \in \Lambda }$.
Then \begin{enumerate} \item[(a)] For any $g\in G$ and any $\lambda , \mu \in \Lambda $, $\lambda \ne \mu $, the intersection $H_\lambda^g\cap H_\mu $ is finite. \item[(b)] For any $\lambda \in \Lambda $ and any $g\notin H_\lambda $, the intersection $H_\lambda^g \cap H_\lambda $ is finite. \end{enumerate} \end{lem} The second result can easily be derived from Lemma \ref{ah}. \begin{lem}[Corollary 4.5, \cite{ESBG}] \label{prop} Let $G$ be an infinite properly relatively hyperbolic group. Then $G$ contains a hyperbolic element of infinite order. \end{lem} \begin{lem}\label{lem:qbf} Let the group $G$ be hyperbolic with respect to the collection of peripheral subgroups $\Hl$ and let $N \lhd G$ be a finite normal subgroup. Then \begin{enumerate} \item If $H_\lambda $ is infinite for some $\lambda \in \Lambda $, then $N\le H_\lambda $; \item The quotient $\bar G=G/N$ is hyperbolic relative to the natural image of the collection $\{ H_\lambda \} _{ \lambda \in \Lambda }$. \end{enumerate} \end{lem} \begin{proof} Let $K_\lambda $, $\lambda \in \Lambda $, be the kernel of the action of $H_\lambda $ on $N$ by conjugation. Since $N$ is finite, $K_\lambda $ has finite index in $H_\lambda $. On the other hand $K_\lambda \le H_\lambda \cap H_\lambda ^g$ for every $g\in N$. If $H_\lambda $ is infinite this implies $N\le H_\lambda $ by Lemma \ref{malnorm}. To prove the second assertion, suppose that $G$ has a relatively finite presentation \eqref{G} with respect to the free product $F$ defined in \eqref{F}. Denote by $\bar X$ and $\bar H_\lambda$ the natural images of $X$ and $H_\lambda$ in $\bar G$. In order to show that $\bar G$ is relatively hyperbolic, one has to consider it as a quotient of the free product $\bar F=(\ast_{\lambda \in \Lambda} \bar H_\lambda)*F(\bar X)$. As $G$ is a quotient of $F$, we can choose some finite preimage $M\subset F$ of $N$. For each element $f \in M$, fix a word in $X \cup \mathcal{H}$ which represents it in $F$ and denote by $\mathcal{S}$ the (finite) set of all such words. By the universality of free products, there is a natural epimorphism $\varphi: F \to \bar F$ mapping $X$ onto $\bar X$ and each $H_\lambda$ onto $\bar H_\lambda$. Define the subsets $\bar{\mathcal{R}}$ and $\bar{\mathcal{S}}$ of words in $\bar X \cup \bar{\mathcal{H}}$ (where $\bar{\mathcal{H}} = \bigsqcup_{\lambda \in \Lambda} (\bar H_\lambda \setminus \{1\})$) by $\bar{\mathcal{R}}=\varphi(\mathcal R)$ and $\bar{\mathcal{S}}=\varphi(\mathcal{S})$. Then the group $\bar G$ possesses the relatively finite presentation \begin{equation} \label{eq:G_1} \langle \bar X,\; \{ \bar H_\lambda\}_{\lambda\in \Lambda} \mid \bar R=1,\, \bar R\in\bar{\mathcal{R}};\, \bar S=1,\,\bar S\in\bar{\mathcal{S}}\rangle.\end{equation} Let $\psi: F \to G$ denote the natural epimorphism and $D=\max\{\|s\|~:~s \in \mathcal{S}\}$. Consider any non-empty word $\bar w$ in the alphabet $\bar X \cup \bar{\mathcal{H}}$ representing the identity in $\bar G$. Evidently we can choose a word $w$ in $X \cup \mathcal{H}$ such that $\bar w =_{\bar F} \varphi(w)$ and $\|w\|=\|\bar w\|$. Since $\ker(\psi) \cdot M$ is the kernel of the induced homomorphism from $F$ to $\bar G$, we have $w=_{F}vu$ where $u \in \mathcal{S}$ and $v$ is a word in $X \cup \mathcal{H}$ satisfying $v=_{G} 1$ and $\|v\| \le \|w\|+D$. Since $G$ is relatively hyperbolic there is a constant $C \ge 0$ (independent of $v$) such that $$ v=_F\prod\limits_{i=1}^k f_i^{-1}R_i^{\pm 1}f_i,$$ where $R_i \in \mathcal{R}$, $f_i \in F$, and $k \le C \|v\| $. 
Set $\bar R_i=\varphi(R_i) \in \bar{\mathcal{R}}$, $\bar f_i=\varphi(f_i) \in \bar F$, $i=1,2,\dots,k$, and ${\bar R}_{k+1}=\varphi(u) \in \bar{\mathcal{S}}$, ${\bar f}_{k+1} =1$. Then $$\bar w =_{\bar F} \prod \limits_{i=1}^{k+1} {\bar f}_i^{-1} \bar R_i^{\pm 1} {\bar f}_i, $$ where $$k+1 \le C\|v\|+1 \le C(\|w\|+D)+1 \le C\|\bar w\| +CD + 1 \le (C+CD+1)\|\bar w\|.$$ Thus, the relative presentation \eqref{eq:G_1} satisfies a linear isoperimetric inequality with the constant $(C+CD+1)$. \end{proof} Now we are ready to prove Theorem \ref{SQ}. \begin{proof}[Proof of Theorem \ref{SQ}] Observe that the quotient of $G$ by the finite normal subgroup $N=E_G(G)$ is obviously non-elementary. Hence the image of any finite $H_\lambda $ is a proper subgroup of $G/N$. On the other hand, if $H_\lambda$ is infinite, then $N \le H_\lambda \lneqq G$ by Lemma \ref{lem:qbf}, hence its image is also proper in $G/N$. Therefore $G/N$ is properly relatively hyperbolic with respect to the collection of images of $H_\lambda $, $\lambda \in \Lambda $ (see Lemma \ref{lem:qbf}). Lemma \ref{Ashot's Lemma} implies $E_{G/N}(G/N)=\{ 1\} $. Thus, without loss of generality, we may assume that $E_G(G)=1$. It is straightforward to see that the free product $U=G\ast R$ is hyperbolic relative to the collection $\{ H_\lambda \} _{ \lambda \in \Lambda }\cup \{ R\} $ and $E_{G\ast R} (G)=E_G(G)=1$. Note that $G^0$ is non-empty by Lemma \ref{prop}. Hence $G$ is a suitable subgroup of $G\ast R$ by Proposition \ref{suit}. Let $Y$ be a finite generating set of $R$. It remains to apply Theorem \ref{glue} to $U=G\ast R$, the obvious collection of peripheral subgroups, and the finite set $Y$. \end{proof} To prove Theorem \ref{comquot} we need one more auxiliary result which was proved in full generality in \cite{RHG} (see also \cite{F}): \begin{lem}[Theorem 2.40, \cite{RHG}]\label{exhyp} Suppose that a group $G$ is hyperbolic relative to a collection of subgroups $\{ H_\lambda \} _{ \lambda \in \Lambda }\cup \{ S_1, \ldots , S_m\} $, where $S_1, \ldots, S_m $ are hyperbolic in the ordinary (non-relative) sense. Then $G$ is hyperbolic relative to $\{ H_\lambda \} _{ \lambda \in \Lambda }$. \end{lem} \begin{proof}[Proof of Theorem \ref{comquot}] Let $G_1$, $G_2$ be finitely generated groups which are properly relatively hyperbolic with respect to collections of subgroups $\{ H_{1\lambda }\} _{ \lambda \in \Lambda } $ and $\{ H_{2\mu }\} _{ \mu \in M }$ respectively. Denote by $X_i$ a finite generating set of the group $G_i$, $i=1,2 $. As above we may assume that $E_{G_1}(G_1)=E_{G_2}(G_2)=\{ 1\} $. We set $G=G_1\ast G_2$. Observe that $E_G(G_i)=E_{G_i}(G_i)=\{ 1\} $ and hence $G_i$ is suitable in $G$ for $i=1,2$ (by Lemma \ref{prop} and Proposition \ref{suit}). By the definition of suitable subgroups, there are two non-commensurable elements $g_1, g_2\in G_2^0$ such that $E_G(g_1)\cap E_G(g_2)=\{ 1\} $. Further, by Lemma \ref{Eg}, the group $G$ is hyperbolic relative to the collection $\mathfrak P= \{ H_{1\lambda }\} _{ \lambda \in \Lambda } \cup \{ H_{2\mu }\} _{ \mu \in M } \cup \{ E_G(g_1), E_G(g_2)\} $. We now apply Theorem \ref{glue} to the group $G$ with the collection of peripheral subgroups $\mathfrak P$, the suitable subgroup $G_1\le G$, and the subset $T=X_2$. The resulting group $W$ is obviously a quotient of $G_1$. Observe that $W$ is hyperbolic relative to (the image of) the collection $\{ H_{1\lambda }\} _{ \lambda \in \Lambda } \cup \{ H_{2\mu }\} _{ \mu \in M } $ by Lemma \ref{exhyp}.
We would like to show that $G_2$ is a suitable subgroup of $W$ with respect to this collection. To this end we note that $\eta (g_1)$ and $\eta (g_2) $ are elements of infinite order as $\eta $ is injective on $E_G(g_1)$ and $E_G(g_2)$. Moreover, $\eta (g_1)$ and $\eta (g_2) $ are not commensurable in $W$. Indeed, otherwise, the intersection $\bigr(\eta(E_G(g_1))\bigl)^g \cap \eta (E_G(g_2))$ is infinite for some $g\in G$ that contradicts the first assertion of Lemma \ref{malnorm}. Assume now that $g\in E_{W}(\eta (g_i))$ for some $i\in \{ 1,2\} $. By the first assertion of Lemma \ref{Eg}, $\big( \eta (g_i^m)\big) ^g= \eta (g_i^{\pm m})$ for some $m\ne 0$. Therefore, $\big( \eta (E_G(g_i)) \big) ^g\cap \eta (E_G(g_i))$ contains $\eta (g_i^m)$ and, in particular, this intersection is infinite. By the second assertion of Lemma \ref{malnorm}, this means that $g\in \eta (E_G(g_i))$. Thus, $E_{W}(\eta (g_i))=\eta (E_G(g_i))$. Finally, using injectivity of $\eta $ on $E_G(g_1) \cup E_G(g_2)$, we obtain $$ E_{W}(\eta (g_1))\cap E_{W}(\eta (g_2))=\eta (E_G(g_1))\cap \eta (E_G(g_2))=\eta \big( E_G(g_1)\cap E_G(g_2)\big) = \{ 1\} .$$ This means that the image of $G_2$ is a suitable subgroup of $W$. Thus we may apply Theorem \ref{glue} again to the group $W$, the subgroup $G_2$ and the finite subset $X_1$. The resulting group $Q$ is the desired common quotient of $G_1$ and $G_2$. The last property, which claims that $Q$ can be obtained from $G_1 \ast G_2$ by adding only finitely many relations, follows because $G_1\ast G_2$ and $G$ are hyperbolic with respect to the same family of peripheral subgroups and any relatively hyperbolic group is relatively finitely presented. \end{proof} \small \textsc{ G. Arzhantseva, Universit\'{e} de Gen\`{e}ve, Section de Math\'{e}matiques, 2-4 rue du Li\`{e}vre, Case postale 64, 1211 Gen\`{e}ve 4, Switzerland} {\it Email:} \texttt{[email protected]} \textsc{ A. Minasyan, Universit\'{e} de Gen\`{e}ve, Section de Math\'{e}matiques, 2-4 rue du Li\`{e}vre, Case postale 64, 1211 Gen\`{e}ve 4, Switzerland} {\it Email:} \texttt{[email protected]} \textsc{D. Osin, NAC 8133, Department of Mathematics, The City College of the City University of New York, Convent Ave. at 138th Street, New York, NY 10031, USA} {\it Email:} \texttt{[email protected]} \end{document}
\begin{document} \title[Grothendieck--Lefschetz type theorems] {Grothendieck--Lefschetz type theorems \\ for the local Picard group} \author{J\'anos Koll\'ar} \maketitle A special case of the Lefschetz hyperplane theorem asserts that if $X$ is a smooth projective variety and $H\subset X$ an ample divisor then the restriction map $\pic(X)\to \pic(H)$ is an isomorphism for $\dim X\geq 4$ and an injection for $\dim X\geq 3$. If $X$ is normal, then the isomorphism part usually fails. Injectivity is proved in \cite[p.305]{Kleiman66b} and an optimal variant for the class group is established in \cite{rav-sri}. For the local versions of these theorems, studied in \cite{sga2}, the projective variety is replaced by the germ of a singularity $(p\in X)$ and the ample divisor by a Cartier divisor $p\in X_0\subset X$. The usual (global) Picard group is replaced by the {\it local Picard group} $\pic^{\rm loc}(p\in X) $; see Definition \ref{loc.pic.defn}. Grothendieck proves in \cite[XI.3.16]{sga2} that if $\depth_p \o_X\geq 4$ then the map between the local Picard groups $\pic^{\rm loc}(p\in X)\to \pic^{\rm loc}(p\in X_0)$ is an injection. Note that this does not imply the Lefschetz version since a cone over a smooth projective variety usually has only depth 2 at the vertex. The aim of this note is to propose a strengthening of Grothendieck's theorem that generalizes Kleiman's variant of the global Lefschetz theorem. Then we prove some special cases that have interesting applications to moduli problems. \begin{prob} \label{main.g-l.conj} Let $X$ be a normal (or $S_2$ and pure dimensional) scheme, $X_0\subset X$ a Cartier divisor and $x\in X_0$ a closed point. Assume that $\dim_x X\geq 4$. What can one say about the kernel of the restriction map between the local Picard groups $$ \operatorname{rest}^X_{X_0}:\pic^{\rm loc}(x\in X)\to \pic^{\rm loc}(x\in X_0)? \eqno{(\ref{main.g-l.conj}.1)} $$ We consider three conjectural answers to this question. \begin{enumerate}\setcounter{enumi}{1} \item The map (\ref{main.g-l.conj}.1) is an injection if $X_0$ is $S_2$. \item The kernel is $p$-torsion if $X$ is an excellent, local $\f_p$-algebra. \item The kernel is contained in the connected subgroup of $\pic^{\rm loc}(x\in X) $. \end{enumerate} \end{prob} The main result of this paper gives a positive answer to the topological variant (\ref{main.g-l.conj}.4) in some cases. The precise conditions in Theorem \ref{G-L.setup.say} are technical and they might even seem unrealistically special. Instead of stating them, I focus on three applications first. My interest in this subject started with trying to understand higher dimensional analogs of three theorems and examples concerning surface singularities and their deformations; see \cite{MR0396565, art-def, har-def} for introductions. The main results of this note imply that none of them occurs for isolated singularities in dimensions $\geq 3$. \begin{say}[Three phenomena in the deformation theory of surfaces] \label{3.phenomena.surf}{\ } \begin{enumerate} \item There is a projective surface $S_0$ with quotient singularities and ample canonical class such that $S_0$ has a smoothing $\{S_t:t\in \dd\}$ where $S_t$ is a rational surface for $t\neq 0$. Explicit examples were written down only recently (see \cite{lee-park, pps1, pps2} or the simpler \cite[3.76]{kk-singbook}), but it has been known for a long time that $K^2$ can jump in flat families of surfaces with quotient singularities. 
The simplest such example is classical and was known to Bertini (though he probably did not consider $K^2$ for a singular surface). Let $C_4\subset \p^5$ be the cone over the degree 4 rational normal curve in $\p^4$. It has two different smoothings. In one family $\{S_t:t\in \dd\}$ (where $S_0:=C_4$) the general fiber is $\p^1\times \p^1\subset \p^5$ embedded by $\o_{\p^1\times \p^1}(1,2)$. In the other family $\{R_t:t\in \dd\}$ (where $R_0:=C_4$) the general fiber is $\p^2\subset \p^5$ embedded by $\o_{\p^2}(2)$. Note that $K_{R_t}^2=9$ and $K_{S_t}^2=8$ for $t\neq 0$, thus $K^2$ jumps in one of the families. In fact, it is easy to compute that $K_{C_4}^2=9$, so the jump occurs in the family $\{S_t:t\in \dd\}$. \item There are non-normal, isolated, smoothable surface singularities $(0\in S_0)$ whose normalization is simple elliptic \cite{MR541025}. \item Every rational surface singularity $(0\in S_0)$ has a smoothing that admits a simultaneous resolution. It is known that such smoothings form a whole component of the deformation space $\defo(0\in S_0)$, called the {\it Artin component} \cite{art-bri}. Generalizations of this can be used to describe all components of the deformation space of quotient singularities \cite{ksb}, and, conjecturally, of all rational surface singularities and many other non-rational singularities \cite{Kollar91, MR1163728}. \end{enumerate} \end{say} The higher dimensional versions of these were studied with the ultimate aim of compactifying the moduli space of varieties of general type; see \cite{k-modsurv} for an introduction. The general theory of \cite{ksb, k-modsurv} suggests that one should work with {\it log canonical} singularities; see \cite{km-book} or (\ref{lc.defn}) for their definition and basic properties. This guides our generalizations of (\ref{3.phenomena.surf}.1--2). In order to develop (\ref{3.phenomena.surf}.3) further, recall that a surface singularity is rational iff its divisor class group is finite; see \cite{mumf-top}. In this paper we state the results for normal varieties. In the applications to moduli questions one needs these results for semi-log canonical pairs $(X,\Delta)$. All the theorems extend to this general setting using the methods of \cite[Chap.5]{kk-singbook}; see the forthcoming \cite[Chap.3]{k-modbook} for details. We say that a variety $W$ is {\it smooth} (resp.\ {\it normal}) {\it in codimension $r$} if there is a closed subscheme $Z\subset W$ of codimension $\geq r+1$ such that $W\setminus Z$ is smooth (resp.\ normal). \begin{thm} \label{3.phenomena.hd} None of the above examples (\ref{3.phenomena.surf}.1--3) exists for varieties with isolated singularities in dimension $\geq 3$. More generally the following hold. \begin{enumerate} \item Let $X_0$ be a projective variety with log canonical singularities and ample canonical class. If $X_0$ is smooth in codimension 2 then every deformation of $X_0$ also has ample canonical class. \item Let $X_0$ be a non-normal variety whose normalization is log canonical. If $X_0$ is normal in codimension 2 then $X_0$ is not smoothable, it does not even have normal deformations. \item Let $ X_0$ be a normal variety whose local class groups are torsion and $\{X_t:t\in \dd\}$ a smoothing. If $X_0$ is smooth in codimension 2 then $\{X_t:t\in \dd\}$ does not admit a simultaneous resolution. \end{enumerate} \end{thm} Our results in Sections \ref{appl.sec}--\ref{exdivdef.sec} are even stronger; we need only some control over the singularities in codimension 2. 
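
For the reader's convenience, here is a minimal sketch of the computations quoted in (\ref{3.phenomena.surf}.1); the notation ($\mathbb{F}_4$ for the minimal resolution of $C_4$, with negative section $E$) is used only in this paragraph. On the smooth fibers,
$$
K_{\p^2}=\o_{\p^2}(-3)\ \Rightarrow\ K_{\p^2}^2=9 \qtq{and} K_{\p^1\times \p^1}=\o_{\p^1\times \p^1}(-2,-2)\ \Rightarrow\ K_{\p^1\times \p^1}^2=8.
$$
For the central fiber, the minimal resolution $\pi\colon \mathbb{F}_4\to C_4$ contracts $E$, where $E^2=-4$, $K_{\mathbb{F}_4}^2=8$ and $K_{\mathbb{F}_4}\cdot E=2$, so $\pi^*K_{C_4}=K_{\mathbb{F}_4}+\tfrac{1}{2} E$ and
$$
K_{C_4}^2=\bigl(K_{\mathbb{F}_4}+\tfrac{1}{2} E\bigr)^2=8+2-1=9.
$$
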
Such results have been known if $X_0$ (or its normalization) has rational singularities. These essentially follow from \cite[XI.3.16]{sga2}; see \cite{k-flat} for details. Thus the new part of Theorem \ref{3.phenomena.hd}.1--2 is that the claims also hold for log canonical singularities that are not rational. \subsection*{Further comments and problems}\footnote{Recently Bhatt and de~Jong proved Conjecture \ref{main.g-l.conj}.3 in general and Conjecture \ref{main.g-l.conj}.2 for schemes essentially of finite type in characteristic 0.}{\ } The theorems of SGA rarely have unnecessary assumptions, so an explanation is needed why Problem \ref{main.g-l.conj} could be an exception. One reason is that while our assumptions are weaker, the conclusions in \cite{sga2} are stronger. \begin{thm} \cite[XI.2.2]{sga2} \label{fibw.cartier.codim3.G.thm} Let $X$ be a scheme of pure dimension $n+1$, $X_0\subset X$ a Cartier divisor and $Z\subset X$ a closed subscheme such that $Z_0:=X_0\cap Z$ has dimension $\leq n-3$. Let $D^*$ be a Cartier divisor on $X\setminus Z $ such that $D^*|_{X_0\setminus Z_0}$ extends to a Cartier divisor on $X_0$. Assume furthermore that $X_0$ is $S_2$ and $\depth_{Z_0}\o_{X_0}\geq 3$. Then $D^*$ extends to a Cartier divisor on $X$, in some neighborhood of $X_0$. \qed \end{thm} In Problem \ref{main.g-l.conj} we assume that $Z$ is contained in $X_0$; thus it is not entirely surprising that the depth condition $\depth_{Z_0} X_0\geq 3$ could be relaxed. Example \ref{no.str.GL.forcones} shows that in Theorem \ref{fibw.cartier.codim3.G.thm} the condition $\depth_{Z_0} X_0\geq 3$ is necessary. Another reason why Problem \ref{main.g-l.conj} may have escaped attention is that the topological version of it fails. In (\ref{Lef.fails.for.H2.lem}--\ref{Lef.fails.for.H2.exmp}) we construct normal, projective varieties $Y$ (of arbitrarily large dimension) with a single singular point $y\in Y$ and a smooth hyperplane section $H$ (not passing through $y$) such that the restriction map $H^2(Y, \q)\to H^2(H, \q)$ is not injective. However, the kernel does not contain $(1,1)$-classes. In the example $Y$ even has a log canonical singularity at $y$. The arguments in Section \ref{top-approach.sec} show that, at least over $\c$, a solution to Problem \ref{main.g-l.conj} would be implied by the following. \begin{prob} \label{main.g-l.stein.conj} Let $W$ be a normal Stein space of dimension $\geq 3$ and $L$ a holomorphic line bundle on $W$. Assume that there is a compact set $K\subset W$ such that the restriction of $c_1(L)$ is zero in $H^2(W\setminus K, \z)$. Does this imply that $c_1(L)$ is zero in $H^2(W, \z)$? \end{prob} Another approach would be to use intersection cohomology to restore Poincar\'e duality in (\ref{leff.in.intersection.hom}.2). For this to work, the solution of the following is needed. \begin{prob} \label{main.IH.prob} Let $W$ be a normal analytic space. Is there an exact sequence $$ H^1(W, \o_W)\to \pic^{\rm an}(W)\otimes\q\to IH^2(W, \q)? \eqno{(\ref{main.IH.prob}.1)} $$ \end{prob} Note first that the sequence exists. Indeed, let $g:W'\to W$ be a resolution of singularities. If $L$ is a line bundle on $W$, then $g^*L$ is a line bundle on $W'$, hence it has a Chern class $c_1\bigl(g^*L\bigr)\in H^2(W', \z)$. By the decomposition theorem \cite{MR751966} $IH^2(W, \q)$ is a direct summand of $ H^2(W', \q)$ (at least for algebraic varieties).
Arapura explained to me that the sequence (\ref{main.IH.prob}.1) should be exact for projective varieties by weight considerations, but the general complex case is not clear. For our applications we need the case when $W$ is Stein. \section{Definitions and examples} \begin{defn}[Local Picard groups]\label{loc.pic.defn} Let $X$ be a scheme and $p\in X$ a point. The {\it local Picard group} $\pic^{\rm loc}(p\in X) $ is a group whose elements are $S_2$ sheaves $F$ on some neighborhood $p\in U\subset X$ such that $F$ is locally free on $U\setminus \{p\}$. Two such sheaves give the same element if they are isomorphic over some neighborhood of $p$. The product is given by the $S_2$-hull of the tensor product. One can also realize the local Picard group as the direct limit of $\pic(U\setminus\{p\})$ as $U$ runs through all open Zariski neighborhoods of $p$ or as $\pic(\spec \o_{p,X}\setminus\{p\})$. If $X$ is normal and $X\setminus \{p\}$ is smooth then $\pic^{\rm loc}(p\in X) $ is isomorphic to the divisor class group of $\o_{p,X}$. In many contexts it is more natural to work with the {\it \'etale-local Picard group} $\pic^{\rm et-loc}(p\in X) := \pic(\spec \o_{p,X}^h\setminus\{p\})$ where $\o_{p,X}^h $ is the Henselization of the local ring $\o_{p,X} $. If $X$ is defined over $\c$, let $W\subset X$ be the intersection of $X$ with a small (open) ball around $p$. The {\it analytic local Picard group} $\pic^{\rm an-loc}(p\in X)$ can be defined as above using (analytic) $S_2$ sheaves on $W$. By \cite{Artin69d}, there is a natural isomorphism $$ \pic^{\rm et-loc}(p\in X)\cong \pic^{\rm an-loc}(p\in X). $$ Note that $\pic^{\rm an}(W\setminus\{p\})$ is usually much bigger than $\pic^{\rm an-loc}(p\in X) $. (This happens already for $X=\c^2$.) However, $\pic^{\rm an}(W\setminus\{p\})=\pic^{\rm an-loc}(p\in X) $ if $\depth_p\o_X\geq 3$ \cite{MR0245837}. (The literature does not seem to be consistent; any of the above four variants is called the local Picard group by some authors.) \end{defn} Let $X$ be a complex space and $p\in X$ a closed point. Set $U:=X\setminus \{p\}$. As usual, $\pic(U)\cong H^1(U, \o_U^*)$ and the exponential sequence $$ 0\to \z_U\stackrel{2\pi i}{\longrightarrow} \o_U \stackrel{\rm exp}{\longrightarrow} \o_U^*\to 1 $$ gives an exact sequence $$ H^1\bigl(U, \o_U\bigr)\to \pic(U) \stackrel{c_1}{\longrightarrow} H^2\bigl(U, \z\bigr). $$ A piece of the local cohomology exact sequence is $$ H^1\bigl(X, \o_X\bigr)\to H^1\bigl(U, \o_U\bigr)\to H^2_p\bigl(X, \o_X\bigr)\to H^2\bigl(X, \o_X\bigr). $$ Thus if $X$ is Stein then we have an isomorphism $$ H^1\bigl(U, \o_U\bigr)\cong H^2_p\bigl(X, \o_X\bigr) $$ and the latter vanishes iff $\depth_p\o_X\geq 3$; see \cite[Sec.3]{gro-loc-coh-MR0224620}. Combining with \cite{MR0245837} we obtain the following well-known result. \begin{lem} \label{pic.H2.lem} Let $X$ be a Stein space and $p\in X$ a closed point. If $\depth_p\o_X\geq 3$ then taking the first Chern class gives an injection $$ c_1: \pic^{\rm an-loc}(p\in X)\DOTSB\lhook\joinrel\to H^2\bigl(X\setminus \{p\}, \z\bigr). \qed $$ \end{lem} \begin{defn}[Log canonical singularities]\label{lc.defn} (See \cite{km-book} for an introduction and \cite{kk-singbook} for a comprehensive treatment of these singularities.) Let $X$ be a normal variety such that $mK_X$ is Cartier for some $m>0$. Let $f:Y\to X$ be a resolution of singularities with exceptional divisors $\{E_i:i\in I\}$. One can then write $$ mK_Y\sim f^*(mK_X)+m\cdot \tsum_{i\in I} a(E_i, X)E_i.
$$ The number $a(E_i, X)\in \q$ is called the {\it discrepancy} of $E_i$; it is independent of the choice of $m$. If $\min\{a(E_i, X):i\in I\}\geq -1$ then the minimum is independent of the resolution $f:Y\to X$ and its value is called the {\it discrepancy} of $X$. $X$ is called {\it log canonical} if $\min\{a(E_i, X):i\in I\}\geq -1$ and {\it log terminal} if $\min\{a(E_i, X):i\in I\}> -1$. A cone over a smooth variety with trivial canonical class is log canonical but not log terminal. Let $X$ be log canonical, $g:X'\to X$ any resolution and $E\subset X'$ an exceptional divisor such that $a(E, X)=-1$. The subvariety $g(E)\subset X$ is called a {\it log canonical center} of $X$. Log canonical centers hold the key to understanding log canonical varieties, see \cite[Chaps.4--5]{kk-singbook}. Log terminal singularities are rational \cite[5.22]{km-book}. Log canonical singularities are usually not rational but they are Du~Bois \cite{k-db}. Log canonical singularities (and their non-normal versions, called semi-log canonical singularities) are precisely those that are needed to compactify the moduli of varieties of general type. \end{defn} We use only the following two theorems about log canonical singularities. \begin{thm}\label{inv.adj.thms}\cite{MR2264806, ale-lim} Let $X$ be a normal variety over a field of characteristic 0, $Z\subset X$ a closed subscheme of codimension $\geq 3$ and $Z\subset X_0\subset X$ a Cartier divisor such that $X_0\setminus Z$ is normal and the normalization of $X_0$ is log canonical. Assume also that $K_X$ is $\q$-Cartier. Then $X_0$ is normal and it does not contain any log canonical center of $X$. \qed \end{thm} \begin{thm}\label{small.res.thms} Let $X$ be a normal variety over a field of characteristic 0, $Z\subset X$ a closed subscheme of codimension $\geq 3$ and $D$ a Weil divisor on $X$ that is Cartier on $X\setminus Z$. Then there is a proper, birational morphism $f:Y\to X$ such that $f$ is small (that is, its exceptional set has codimension $\geq 2$) and the birational transform $f^{-1}_*D$ is $\q$-Cartier and $f$-ample if one of the following assumptions is satisfied. \begin{enumerate} \item \cite{ksb} There is a Cartier divisor $Z\subset X_0\subset X$ such that $X_0\setminus Z$ is normal and the normalization of $X_0$ is log canonical. \item \cite{birkar11, hacon-xu, oda-xu} $X$ is log canonical and $Z$ does not contain any log canonical center of $X$. \qed \end{enumerate} \end{thm} Observe that $D$ is $\q$-Cartier iff $f:Y\to X$ is an isomorphism. In our applications we show that $Y\neq X$ leads to a contradiction. {\it Comments on the references.} \cite{MR2264806} proves that the pair $(X, X_0)$ is log canonical. This implies that $X_0$ does not contain any log canonical center of $X$ by an easy monotonicity argument \cite[2.27]{km-book}. Then \cite{ale-lim} shows that $X_0$ is $S_2$, hence normal since we assumed normality in codimension 1. \cite{ksb} claimed (\ref{small.res.thms}.1) only for $\dim X=3$ since the necessary results of Mori's program were known only for $\dim X\leq 3$ at that time. The proof of the general case is the same. The second case (\ref{small.res.thms}.2) is not explicitly claimed in the references but it easily follows from them. For details on both cases see \cite[Sec.1.4]{kk-singbook}. The next example shows that Theorem \ref{fibw.cartier.codim3.G.thm} fails if $\depth_{Z_0} X_0<3$, even if the dimension is large. 
\begin{exmp} \label{no.str.GL.forcones} Let $(A,\Theta)$ be a principally polarized Abelian variety over a field $k$. The affine cone over $A$ with vertex $v$ is $$ C_a(A,\Theta):=\spec_k \tsum_{m\geq 0} H^0\bigl(A, \o_A(m\Theta)\bigr). $$ Note that $\depth_{v} C_a(A,\Theta)=2$ since $H^1(A, \o_A)\neq 0$. Set $X:=C_a(A,\Theta)\times \pic^0(A)$ with $f:X\to \pic^0(A)$ the second projection. Since $L(\Theta)$ has a unique section for every $L\in \pic^0(A)$, there is a unique divisor $D_A$ on $A\times \pic^0(A)$ whose restriction to $A\times \{[L]\}$ is the above divisor. By taking the cone we get a divisor $D_X$ on $X$. For $L\in \pic^0(A)$, let $D_{[L]}$ denote the restriction of $D_X$ to the fiber $C_a(A,\Theta)\times \{[L]\}$ of $f$. We see that \begin{enumerate} \item $D_{[L]}$ is Cartier iff $L\cong \o_A$. \item $mD_{[L]}$ is Cartier iff $L^m\cong \o_A$. \item $D_{[L]}$ is not $\q$-Cartier for very general $L\in \pic^0(A)$. \end{enumerate} \end{exmp} \section{The main technical theorem} The following is our main result concerning Problem \ref{main.g-l.conj}. In the applications the key question will be the existence of the bimeromorphic morphism $f:Y\to X$. This is a very hard question in general, but in our cases existence is guaranteed by Theorem \ref{small.res.thms}. \begin{thm} \label{G-L.setup.say} Let $f:Y\to X$ be a proper, bimeromorphic morphism of normal analytic spaces of dimension $\geq 4$ and $L$ a line bundle on $Y$ whose restriction to every fiber is ample. Assume that there is a closed subvariety $Z_Y\subset Y$ of codimension $\geq 2$ such that $Z:=f\bigl(Z_Y\bigr)$ has dimension $\leq 1$ and $f$ induces an isomorphism $Y\setminus Z_Y\cong X\setminus Z$. Let $X_0\subset X$ be a Cartier divisor such that $Z\cap X_0$ is a single point $p$. Let $p\in U\subset X$ be a contractible open neighborhood of $p$. Note that $U\setminus Z\cong f^{-1}(U)\setminus Z_Y$, hence the restriction $L|_{U\setminus Z}$ makes sense. Set $U_0:=X_0\cap U$. The following are equivalent. \begin{enumerate} \item The map $f$ is an isomorphism over $U$. \item The Chern class of $L|_{U\setminus Z}$ vanishes in $H^2\bigl( U\setminus Z, \q\bigr)$. \item The Chern class of $L|_{U_0\setminus \{p\}}$ vanishes in $H^2\bigl( U_0\setminus \{p\}, \q\bigr)$. \end{enumerate} \end{thm} Proof. If $f$ is an isomorphism then $L$ is a line bundle on the contractible space $U$ hence $c_1(L)=0$ in $H^2\bigl( U, \q\bigr)$. Thus (2) holds and clearly (2) implies (3). The key part is to prove that (3) $\Rightarrow$ (1). The assumption and the conclusion are both local near $p$ in the Euclidean topology. By shrinking $X$ we may assume that the Cartier divisor $X_0$ gives a morphism $g:X\to \dd$ to the unit disc $\dd$ whose central fiber is $X_0$. Note that $Y_0:=f^{-1}(X_0)$ is a Cartier divisor in $Y$. Let $W\subset X$ be the intersection of $X$ with a closed ball of radius $0<\epsilon \ll 1$ around $p$. Set $W_0:=X_0\cap W$, $V:=f^{-1}(W)$ and $V_0:=f^{-1}(W_0)$. We may assume that $W, W_0$ are contractible and $f^{-1}(p)$ is a strong deformation retract of both $V$ and of $V_0$. Let $\bar \dd_{\delta}\subset \dd $ denote the closed disc of radius $\delta$. If $0< \delta\ll \epsilon$ then the pair $\bigl( W_0, \partial W_0\bigr)$ is a strong deformation retract of $\bigl( W\cap g^{-1}\bar \dd_{\delta}, \partial W\cap g^{-1}\bar \dd_{\delta}\bigr)$ and $\bigl( V_0, \partial V_0\bigr)$ is a strong deformation retract of $\bigl( V\cap (gf)^{-1}\bar \dd_{\delta}, \partial V\cap (gf)^{-1}\bar \dd_{\delta}\bigr)$. 
These retractions induce continuous maps (unique up-to homotopy) $$ r_c: \bigl( V_c, \partial V_c\bigr) \to \bigl( V_0, \partial V_0\bigr), \eqno{(\ref{G-L.setup.say}.4)} $$ where $V_c$ is the fiber of $g\circ f:V\to \dd$ over $c\in \dd_{\delta}$. The induced maps $$ r_c^*: \q\cong H^{2n}\bigl( V_0, \partial V_0, \q\bigr) \stackrel{\cong}{\to} H^{2n}\bigl( V_c, \partial V_c,\q, \bigr)\cong \q \eqno{(\ref{G-L.setup.say}.5)} $$ are isomorphisms where $n=\dim_{\c} V_0=\dim_{\c} V_c$. Our aim is to study the cup product pairing $$ H^{2}\bigl( V_0, \partial V_0, \q\bigr)\times H^{2n-2}\bigl( V_0, \q\bigr)\to H^{2n}\bigl( V_0, \partial V_0, \q\bigr)\cong \q. \eqno{(\ref{G-L.setup.say}.6)} $$ (See \cite{Hatcher}, especially pages 209 and 240 for the relevant facts on cup and cap products.) We prove in Lemmas \ref{G-L.setup.vanish.lem} and \ref{G-L.setup.vanish.lem2}, by arguing on $V_c$, that it is zero and in Lemma \ref{G-L.setup.nonvanish.lem}.2, by arguing on $V_0$, that it is nonzero if $V_0\to W_0$ is not finite. Thus $f^{-1}(p)$ is 0-dimensional, hence $f$ is a biholomorphism. \qed For later applications, in the next lemmas we consider the more general case when $f:Y\to X$ is a proper, bimeromorphic morphism of normal analytic spaces and $g:X\to \dd$ a flat morphism of relative dimension $n$. \begin{lem} \label{G-L.setup.vanish.lem} Notation and assumptions as in {\rm (\ref{G-L.setup.say})}. If $H_{2n-2}\bigl( V_c,\q \bigr)=0$ then the cup product pairing $$ H^{2}\bigl( V_0, \partial V_0, \q\bigr)\times H^{2n-2}\bigl( V_0, \q\bigr)\to H^{2n}\bigl( V_0, \partial V_0, \q\bigr)\cong \q \qtq{is zero.} $$ \end{lem} Proof. Using $r_c^*$ and the Poincar\'e duality map, the cup product pairing factors through the following cup and cap product pairings, where the right hand sides are isomorphic by (\ref{G-L.setup.say}.5), $$ \begin{array}{ccccccc} H^{2}\bigl( V_0, \partial V_0, \q\bigr)&\times& H^{2n-2}\bigl( V_0, \q\bigr)&\to & H^{2n}\bigl( V_0, \partial V_0, \q\bigr)& \cong & \q\\ \downarrow && \downarrow &&\hphantom{\cong}\downarrow & &|| \\ H^{2}\bigl( V_c, \partial V_c, \q\bigr)&\times & H^{2n-2}\bigl( V_c, \q\bigr)&\to & H^{2n}\bigl( V_c, \partial V_c, \q\bigr)& \cong & \q\\ \downarrow && \downarrow &&\hphantom{\cong}\downarrow & &|| \\ H_{2n-2}\bigl( V_c, \q \bigr)&\times & H^{2n-2}\bigl( V_c, \q\bigr)&\to & H_{0}\bigl( V_c, \q\bigr)& \cong & \q \end{array} $$ The first factor in the bottom row is zero, hence the pairing is zero. \qed We apply the next result to $V_c\to W_c$ to check the homology vanishing assumption in Lemma \ref{G-L.setup.vanish.lem}. \begin{lem} \label{G-L.setup.vanish.lem2} Let $V'\to W'$ be a proper bimeromorphic map of normal complex spaces of dimension $n\geq 3$. Assume that every fiber has complex dimension $\leq n-2$ and $W'$ is Stein. Then $H_{2n-2}\bigl( V', \q \bigr)=0$. \end{lem} Proof. Let $E'\subset V'$ denote the exceptional set and $F'\subset W'$ its image. Then $\dim F'\leq n-2$, hence the exact sequence $$ H_{2n-2}\bigl( F', \q \bigr)\to H_{2n-2}\bigl( W', \q \bigr)\to H_{2n-2}\bigl( W', F', \q \bigr) \to H_{2n-3}\bigl( F', \q \bigr) $$ shows that $ H_{2n-2}\bigl( W', \q \bigr)\cong H_{2n-2}\bigl( W', F', \q \bigr)$. The latter group is in turn isomorphic to $H_{2n-2}\bigl( V', E', \q \bigr) $ which sits in an exact sequence $$ H_{2n-2}\bigl( E', \q \bigr)\to H_{2n-2}\bigl( V', \q \bigr)\to H_{2n-2}\bigl( V', E', \q \bigr). 
$$ Here $H_{2n-2}\bigl( E', \q \bigr) $ is generated by the fundamental classes of the compact irreducible components of $E'$, but we assumed that there are no such. Thus we have an injection $$ H_{2n-2}\bigl( V', \q \bigr)\DOTSB\lhook\joinrel\to H_{2n-2}\bigl( W', \q \bigr). $$ Since $W'$ is Stein and $2n-2>n$, Theorem \ref{stein.top.main.thm} implies that $H_{2n-2}\bigl( W', \q \bigr) =0$. Thus we conclude that $H_{2n-2}\bigl( V', \q \bigr) =0$. \qed During the proof we have used the following. \begin{thm} \label{stein.top.main.thm} \cite{MR684017, MR817633} or \cite[p.152]{gm-book}. Let $W$ be a Stein space of dimension $n$. Then $H_i(W,\z)$ and $H^i(W,\z)$ both vanish for $i>n$. More generally, $W$ is homotopic to a CW complex of dimension $\leq n$.\qed \end{thm} Next we describe two cases when the cup product pairing (\ref{G-L.setup.say}.6) is nonzero. The first of these is used in Proposition \ref{stab.exc.divs.cor} and the second in Theorem \ref{G-L.setup.say}. \begin{lem} \label{G-L.setup.nonvanish.lem} Let $f_0:V_0\to W_0$ be a projective, bimeromorphic morphism between irreducible complex spaces. Let $p\in W_0$ be a point. Assume that $f_0$ is an isomorphism over $W_0\setminus \{p\}$ and $\dim f_0^{-1}(p)>0$. Assume furthermore that one of the following holds. \begin{enumerate} \item There is a nonzero $\q$-Cartier divisor $E_0\subset V_0$ supported on $f_0^{-1}(p)$. \item There is an $f_0$-ample line bundle $L$ such that $c_1(L)|_{\partial V_0}=0$. \end{enumerate} Then $f_0^{-1}(p)$ has codimension 1 in $ V_0$ and the cup product pairing $$ H^{2}\bigl( V_0, \partial V_0, \q\bigr)\times H^{2n-2}\bigl( V_0, \q\bigr)\to H^{2n}\bigl( V_0, \partial V_0, \q\bigr)\cong \q \qtq{is nonzero.} $$ \end{lem} Proof. Consider first Case (1). Then $E_0\neq 0$ shows that $f_0^{-1}(p)$ has codimension 1. Let $H$ be a relatively very ample line bundle. We have $c_1(E_0)\in H^{2}\bigl( V_0, \partial V_0, \q\bigr)$ and $ c_1(H)\in H^{2}\bigl( V_0, \q\bigr)$. If $E_0$ is effective then $$ c_1(E_0)\cup c_1(H)^{n-1}= c_1\bigl(H|_{E_0}\bigr)^{n-1} \in H_0\bigl( E_0, \q\bigr)\to H_0\bigl( V_0, \q\bigr) \eqno{(\ref{G-L.setup.nonvanish.lem}.3)} $$ is positive. If $E_0$ is not assumed effective then we claim that $$ c_1(E_0)\cup c_1(E_0)\cup c_1(H)^{n-2}\in H^{2n}\bigl( V_0, \partial V_0, \q\bigr) \qtq{is nonzero.} $$ The complete intersection of $(n-2)$ general members of $H$ gives an algebraic surface $S$, proper over $W_0$ such that $E_0\cap S$ is a nonzero linear combination of exceptional curves. Thus, by the Hodge index theorem, $$ c_1(E_0)^2\cup c_1(H)^{n-2}=c_1\bigl(E_0|_S\bigr)^2 <0, $$ completing the proof of (1). Next assume that (2) holds. By assumption we can lift $c_1(L)$ to $\tilde c_1(L)\in H^{2}\bigl( V_0, \partial V_0, \q\bigr)$. (The lifting is in fact unique, but this is not important for us.) From this we obtain a class $$ \bigl[\tilde c_1(L)\bigr]\in H_{2n-2}\bigl( V_0, \q\bigr)= H_{2n-2}\bigl( f^{-1}(p), \q\bigr)=\tsum \q[A_i] \eqno{(\ref{G-L.setup.nonvanish.lem}.4)} $$ where $A_i\subset f^{-1}(p)$ are the irreducible components of dimension $n-1$. So far we have not established that $\dim f^{-1}(p)=n-1$, thus the sum in (\ref{G-L.setup.nonvanish.lem}.4) could be empty. The key step is the following. {\it Claim} \ref{G-L.setup.nonvanish.lem}.5. $\bigl[\tilde c_1(L)\bigr]=\sum a_i[A_i]$ where $a_i<0$ for every $i$ and the sum is not empty. 
Once this is shown we conclude as in (\ref{G-L.setup.nonvanish.lem}.3) using the equality $$ \tilde c_1(L)\cup c_1(L)^{n-1}= \tsum a_i \cdot c_1\bigl(L|_{A_i}\bigr)^{n-1}<0. \eqno{(\ref{G-L.setup.nonvanish.lem}.6)} $$ In order to prove (\ref{G-L.setup.nonvanish.lem}.5) we aim to use \cite[3.39]{km-book}, except there it is assumed that every $A_i$ is $\q$-Cartier. To overcome this, take a resolution $\pi: V'_0\to V_0$ and write the homology class $\bigl[\tilde c_1(\pi^*L)\bigr]$ as a linear combination $\sum a'_i[A'_i]$ where $A'_i\subset (f\circ \pi)^{-1}(p)$ are the irreducible components of dimension $n-1$. Since $L$ is $f$-ample and $\dim f^{-1}(p)>0$, we see that $\pi^*L$ is nef and not numerically trivial on $(f\circ \pi)^{-1}(p)$. Apply \cite[3.39.2]{km-book} to $\pi^*L$. We obtain that $-\sum a'_i[A'_i]$ is effective and its support contains $(f\circ \pi)^{-1}(p)$. Thus $a'_i<0$ for every $i$ and so $\bigl[\tilde c_1(L)\bigr]=\pi_*\sum a'_i[A'_i]$ shows (\ref{G-L.setup.nonvanish.lem}.5) unless there are no $f$-exceptional divisors $A_i\subset f^{-1}(p)$. If this happens, then $\sum a'_i[A'_i]$ is $\pi$-exceptional and, as the homology class of $\pi^*L$, it has zero intersection with every curve that is contracted by $\pi$. Thus we can apply \cite[3.39.1]{km-book} to both $\pm \sum a'_i[A'_i]$ and conclude that $\sum a'_i[A'_i]=0$. This is a contradiction since $L$ and hence $\pi^*L$ have positive intersection with some curve. \qed \section{Deformations of log canonical singularities} \label{appl.sec} Here we derive stronger forms of the three assertions of Theorem \ref{3.phenomena.hd}. We start with (\ref{3.phenomena.hd}.1--2). \begin{thm}\label{codim.3.slc.GL.thm} Let $X$ be a normal variety over $\c$ and $g:X\to C$ a flat morphism of pure relative dimension $n$ to a smooth curve. Let $0\in C$ be a point and $Z_0\subset X_0$ a closed subscheme of dimension $\leq n-3$. Assume that \begin{enumerate} \item $K_X$ is $\q$-Cartier on $X\setminus Z_0$, \item the fibers $X_c$ are log canonical for $c\neq 0$, \item $X_0\setminus Z_0$ is log canonical and \item the normalization of $X_0$ is log canonical. \end{enumerate} Then $X_0$ is normal and $K_X$ is $\q$-Cartier on $X$. \end{thm} Proof. By localization we may assume that $Z_0=\{p\}$ is a closed point. Next we use Theorem \ref{small.res.thms} to obtain $f:Y\to X$ such that $f$ is an isomorphism over $X\setminus \{p\}$ and $f^{-1}_*K_X$ is an $f$-ample $\q$-Cartier divisor. By the Lefschetz principle, we may assume that everything is defined over $\c$. We apply Theorem \ref{G-L.setup.say} to $L:=mf^{-1}_*K_X$ for a suitable $m>0$. Let $U$ be the intersection of $X$ with a small ball around $p$ and set $U_0:=X_0\cap U$. Note that $U_0$ is naturally a subset of $X$, of $Y$ and also of the normalization of $ X_0$. The latter shows that $L|_{U_0\setminus\{p\}}$ is trivial, thus the assumption (\ref{G-L.setup.say}.3) is satisfied. Hence $f$ is an isomorphism and so $K_X$ is $\q$-Cartier. Now Theorem \ref{inv.adj.thms} implies that $X_0$ is normal. \qed \begin{thm}\label{GL.pic-inj.nonlc.thm} Let $X$ be a log canonical variety of dimension $\geq 4$ over $\c$ and $p\in X$ a closed point that is not a log canonical center (\ref{lc.defn}). Let $p\in X_0\subset X$ be a Cartier divisor. Let $p\in U$ be a Stein neighborhood such that $U$ and $U_0:=X_0\cap U$ are both contractible. Then the restriction maps $$ \pic^{\rm loc}(p\in X)\to \pic^{\rm loc}(p\in X_0) \qtq{and} \pic^{\rm loc}(p\in X)\to H^2\bigl(U_0\setminus \{p\}, \z\bigr) $$ are injective.
\end{thm} Proof. Let $D$ be a divisor on $X$ such that $D|_{X\setminus \{p\}}$ is Cartier and $c_1\bigl( D|_{X_0\setminus \{p\}}\bigr)$ is zero in $H^2\bigl(X_0\setminus \{p\}, \z\bigr)$. First we show that $D$ is $\q$-Cartier at $p$. By Theorem \ref{small.res.thms} there is a proper birational morphism $f:Y\to X$ such that $f$ is an isomorphism over $X\setminus \{p\}$, $f^{-1}_*D$ is $f$-ample and $f$ has no exceptional divisors. The Cartier divisor $X_0$ gives a morphism $X\to \dd$ whose central fiber is $X_0$. As in Theorem \ref{G-L.setup.say} let $W_0\subset X_0$ be the intersection of $X_0$ with a closed ball of radius $\epsilon$ around $p$ and $V_0:=f^{-1}(W_0)$. Set $n:=\dim X_0$. By Lemma \ref{G-L.setup.vanish.lem} we see that the cup product pairing $$ H^{2}\bigl( V_0, \partial V_0, \q\bigr)\times H^{2n-2}\bigl( V_0, \q\bigr)\to H^{2n}\bigl( V_0, \partial V_0, \q\bigr)\cong \q \qtq{is zero. } $$ On the other hand, by (\ref{G-L.setup.nonvanish.lem}.2) it is nonzero unless $f:Y\to X$ is finite. Thus $f$ is an isomorphism and $D$ is $\q$-Cartier at $p$. Now we can use \cite[X.3.2]{sga2} to show that $D$ is Cartier at $p$. \qed \begin{cor}\label{fibw.cartier.codim3.thm} Let $g:X\to C$ and $Z_0\subset X_0$ be as in Theorem \ref{codim.3.slc.GL.thm}. Assume that the fibers $X_c$ are all log canonical and $K_X$ is $\q$-Cartier. Let $D^*$ be Cartier divisor on $X\setminus Z_0 $ such that $D^*|_{X_0\setminus Z_0}$ extends to a Cartier divisor on $X_0$. Then $D^*$ extends to a Cartier divisor on $X$. \end{cor} Proof. Choose $Z$ to be the smallest closed subset such that $D^*$ is Cartier on $X\setminus Z$. We need to show that $Z=\emptyset$. If not, let $p\in Z$ be a generic point. By localization we are reduced to the case when $Z=\{p\}$ is a closed point of $X$. Note that $p$ is not a log canonical center of $X$ by Theorem \ref{inv.adj.thms}. Thus (\ref{fibw.cartier.codim3.thm}) is a special case of Theorem \ref{GL.pic-inj.nonlc.thm}. \qed \begin{say}[Proof of Theorem \ref{3.phenomena.hd}.1--2] Let $g:X\to C$ be a flat, proper morphism to a smooth curve whose fibers are normal and log canonical. Let $0\in C$ be a closed point and $Z_0\subset X_0$ a subscheme of codimension $\geq 3$ such that $K_X$ is $\q$-Cartier on $X\setminus Z_0$. Then $K_X$ is $\q$-Cartier by Corollary \ref{fibw.cartier.codim3.thm} thus $mK_X$ is Cartier for some $m>0$. So $\o_X(mK_X)$ is a line bundle on $X$. For a flat family of line bundles, ampleness is an open condition, proving (\ref{3.phenomena.hd}.1). The second assertion (\ref{3.phenomena.hd}.2) directly follows from Theorem \ref{codim.3.slc.GL.thm}. \qed \end{say} \section{Stability of exceptional divisors} \label{exdivdef.sec} We consider part 3 of Theorem \ref{3.phenomena.hd}. Let $g:X\to C$ be a flat morphism to a smooth curve. Let $0\in C$ be a closed point such that $X_0$ is $\q$-factorial. Let $Z_0\subset X_0$ be a subscheme of codimension $\geq 3$ and $f:Y\to X$ be a projective, birational morphism such that $f$ is an isomorphism over $X\setminus Z_0$ and $f_0:Y_0\to X_0$ is birational but not an isomorphism. Let $H_0\subset Y_0$ be an ample divisor. Since $X_0$ is $\q$-factorial, $m\cdot f_0(H_0)$ is Cartier for some $m>0$. Thus $$ E_0:=f_0^*\bigl(m f_0(H_0)\bigr)-mH_0 $$ is a nonzero, $f_0$-exceptional Cartier divisor. We will show that this implies that $Y_t\to X_t$ is not an isomorphism, contrary to our assumptions. More generally, let $Y_0$ be a complex analytic space and $E_0\subset Y_0$ a proper, complex analytic subspace. 
We would like to prove that, under certain conditions, every deformation $\{Y_t:t\in \dd\}$ induces a corresponding deformation $\{E_t\subset Y_t:t\in \dd\}$. If $E_0$ is a Cartier divisor, then by deformation theory (see, for instance, \cite[Sec.I.2]{rc-book} or \cite[Sec.6]{har-def}) the obstruction space is $H^1\bigl(E_0, \o_{Y_0}(E_0)|_{E_0}\bigr)$. If $E_0$ is smooth and its normal bundle is negative, then by Kodaira's vanishing theorem the obstruction group is zero, hence every flat deformation of $Y_0$ induces a flat deformation of the pair $(E_0\subset Y_0)$. Here we address the more general case when there is a projective morphism $f_0:Y_0\to X'_0$ which contracts $E_0$ to a point. (This always holds if the normal bundle of $E_0$ is negative, at least for analytic or algebraic spaces, see \cite{artin}.) We allow $E_0$ to be singular. By \cite{MR0304703}, any flat deformation $\{Y_t:t\in \dd\}$ induces a corresponding deformation $\{f_t:Y_t\to X_t:t\in \dd\}$ (with the slight caveat that $X_0$ need not be normal, but its normalization is $X'_0$). We can state our result in a more general form as follows. \begin{prop}\label{stab.exc.divs.cor} Let $g:X\to \dd$ be a flat morphism of pure relative dimension $n$. Let $f:Y\to X$ be a projective, bimeromorphic morphism such that $f_0:Y_0\to X_0$ is also bimeromorphic. Assume that there is a nonzero (but not necessarily effective) $\q$-Cartier divisor $E_0\subset Y_0$ such that $\dim f_0\bigl(\supp E_0\bigr)\leq n-3$. Then, for every $|t|\ll 1$ there is a nonzero exceptional divisor $E_t\subset \ex(f_t)$. \end{prop} Proof. By taking general hyperplane sections of $X$ we may assume that $f_0\bigl(\supp E_0\bigr)$ is a point $p\in X_0$. We use the notation of Theorem \ref{G-L.setup.say}. Lemma \ref{G-L.setup.nonvanish.lem} shows that the cup product pairing $$ H^{2}\bigl( V_0, \partial V_0, \q\bigr)\times H^{2n-2}\bigl( V_0, \q\bigr)\to H^{2n}\bigl( V_0, \partial V_0, \q\bigr)\cong \q $$ is nonzero. On the other hand, if $\dim \ex(f_t)\leq n-2$ for $t\neq 0$ then Lemma \ref{G-L.setup.vanish.lem2} applies and so Lemma \ref{G-L.setup.vanish.lem} shows that the above cup product pairing is zero, a contradiction.\qed \begin{rem}\label{divcont.def.applic.rem} (1) Note first that we do not assert that $\{E_t:t\in \dd\}$ is a flat family of divisors, nor do we claim that the $E_t$ are $\q$-Cartier. Most likely both of these hold under some natural hypotheses. (2) The dimension restriction $\dim f_0\bigl(\supp E_0\bigr)\leq n-3$ is indeed necessary. If $Y_0$ is a smooth surface and $E_0\subset Y_0$ is a smooth rational curve then the analog of Proposition \ref{stab.exc.divs.cor} holds only if $E_0$ is a $(-1)$-curve. (3) The existence of a $\q$-Cartier divisor $E_0\subset Y_0$ seems an unusual assumption, but it is necessary, as shown by the following examples. Let $W$ be any smooth projective variety of dimension $n$ and $L$ a very ample line bundle on $W$. Let $Y$ be the total space of the rank $r\geq 2$ bundle $L^{-1}+\cdots +L^{-1}$ with zero section $W\cong E\subset Y$. Let $f:Y\to X$ be the contraction of $W$ to a point; that is, $X$ is the spectrum of the symmetric algebra $ H^0\bigl(W, \sym\bigl(L+\cdots +L\bigr)\bigr)$. For any general map $g:X\to \a^1$ the conclusion of Proposition \ref{stab.exc.divs.cor} fails since we have $Y\to X$. The fiber over the origin is a hypersurface $Y_0\subset Y$ that contains $E$ and the codimension of $E$ in $Y_0$ is $r-1$. If $r>n$ then a general $Y_0$ is smooth but for these $\dim E\leq \frac12 \dim Y_0$. 
If $r\leq n$ then $Y_0$ is always singular. The most interesting case is when $r=n$. Then, for general choices, the only singularities of $Y_0$ are ordinary nodes along $E$. If $n=r=2$ then $E$ is a divisor in $Y_0$ but it is not $\q$-Cartier at these nodes. An interesting special case arises when $W=\p^n$, $L=\o_{\p^n}(1)$ and $r=n+1$. Then $X_0$ has a terminal singularity and $Y_0\to X_0$ is crepant. (4) The conclusion of Proposition \ref{stab.exc.divs.cor} should hold if $f$ is only proper, but the current proof uses projectivity in an essential way. (5) An examination of the proof shows that Proposition \ref{stab.exc.divs.cor} can be extended to higher codimension exceptional sets as follows. In view of the examples in (3), the assumptions seem to be optimal. \end{rem} \begin{prop} \label{divcont.def.applic.rem.6} Let $g:X\to \dd$ be a flat morphism of pure relative dimension $n$. Let $f:Y\to X$ be a projective, bimeromorphic morphism such that $f_0:Y_0\to X_0$ is also bimeromorphic. Assume that $\ex(f_0)$ is mapped to a point, $d:=\dim \ex(f_0)>n/2$ and $\ex(f_0)$ supports an effective $d$-cycle that is the Poincar\'e dual of a cohomology class in $H^{2(n-d)}(Y_0, \partial Y_0,\q)$. (The latter always holds if $Y_0$ is smooth.) Then $\dim \ex(f_t)=d$ for every $t$. \qed \end{prop} \section{Another topological approach} \label{top-approach.sec} The aim of this section is to recall the usual topological approach to Problem \ref{main.g-l.conj}, going back at least to \cite{MR0215323}. This works if $X\setminus X_0$ is smooth. Then we discuss a possible modification of the method that could lead to a proof over $\c$. \begin{say}[Set-up]\label{leff.in.intersection.hom.setup} Let $X\subset \c^N$ be an analytic space of pure dimension $n$ and $X_0\subset X$ a Cartier divisor. Let $p\in X_0$ be a point, $W\subset X$ the intersection of $X$ with a small closed ball around $p$ and set $W_0:=W\cap X_0$. We assume that \begin{enumerate} \item the interior of $W$ is Stein, \item $W\setminus \{p\}$ is homeomorphic to $\partial W\times [0,1)$, \item $W_0\setminus \{p\}$ is homeomorphic to $\partial W_0\times [0,1)$ and \item $X\setminus X_0$ is smooth. \end{enumerate} \end{say} \begin{prop} \label{leff.in.intersection.hom} Notation and assumptions as above. Then the natural map $$ H^i(W \setminus \{p\}, \z)\to H^i(W_0 \setminus \{p\}, \z) $$ is an isomorphism for $i\leq n-3$ and an injection for $i=n-2$. \end{prop} Proof. By assumption, $$ H^i(W \setminus \{p\}, \z)\cong H^i(\partial W , \z) \qtq{and} H^i(W_0 \setminus \{p\}, \z)\cong H^i(\partial W_0 , \z). $$ We have an exact sequence $$ H^i(\partial W , \partial W_0 ,\z)\to H^i(\partial W , \z)\to H^i(\partial W_0 , \z)\to H^{i+1}(\partial W , \partial W_0 ,\z) \eqno{(\ref{leff.in.intersection.hom}.1)} $$ Since $W\setminus W_0$ is smooth, Poincar\'e duality shows that $$ H^i(\partial W , \partial W_0 ,\z)= H_{2n-1-i} (\partial W \setminus \partial W_0 ,\z)\cong H_{2n-1-i} ( W \setminus W_0 ,\z). \eqno{(\ref{leff.in.intersection.hom}.2)} $$ By assumption the interior of $W$ is $n$-dimensional and Stein, hence so is the interior of $ W \setminus W_0$. Thus $ H_{2n-1-i} ( W \setminus W_0 ,\z)=0$ for $2n-1-i\geq n+1$ by Theorem \ref{stein.top.main.thm}. If $i\leq n-3$ then both groups at the end of (\ref{leff.in.intersection.hom}.1) are zero, giving the isomorphism $H^i(W \setminus \{p\}, \z)\cong H^i(W_0 \setminus \{p\}, \z)$. 
If $i= n-2$ then only the group on the left vanishes, thus we get only an injection $H^{n-2}(W \setminus \{p\}, \z)\DOTSB\lhook\joinrel\to H^{n-2}(W_0 \setminus \{p\}, \z)$. \qed \begin{cor} Notation and assumptions as above. Assume in addition that $\depth_p\o_{X_0}\geq 2$ and $\dim X_0\geq 3$. Then the natural restriction $$ \pic^{an}(W \setminus \{p\})\to \pic^{an}(W_0 \setminus \{p\}) $$ is an injection. \end{cor} Proof. Consider the commutative diagram $$ \begin{array}{ccc} \pic^{an}(W \setminus \{p\})& \DOTSB\lhook\joinrel\to & H^2(W \setminus \{p\}, \z)\\ \downarrow && \downarrow\\ \pic^{an}(W_0 \setminus \{p\})& \to & H^2(W_0 \setminus \{p\}, \z) \end{array} $$ The top horizontal arrow is injective by Lemma \ref{pic.H2.lem} and the right hand vertical arrow is injective by Proposition \ref{leff.in.intersection.hom}, hence the left hand vertical arrow is also injective.\qed The next lemma shows that, even in the classical setting, that is when $Y$ is projective and $Y_0\subset Y$ is an ample divisor, the restriction map $H^2(Y, \q)\to H^2(Y_0, \q)$ is not injective under some conditions. We then show in (\ref{Lef.fails.for.H2.exmp}) that such examples do exist. \begin{lem} \label{Lef.fails.for.H2.lem} Let $X$ be a smooth projective variety of dimension $n$ and $Z\subset X$ a smooth divisor. Assume that $H^1(X,\q)=0$ and there is a morphism $g:X\to Y$ that contracts $Z$ to a point $y\in Y$ and is an isomorphism otherwise. Let $Y_0\subset Y$ be a smooth divisor (not passing through $y$). Then the kernel of the restriction map $H^2(Y, \q)\to H^2(Y_0, \q)$ contains $H^1(Z, \q) $. \end{lem} Proof. The cohomology sequence of the pair $(Y, y)$ shows that $H^2(Y, \q)\cong H^2(Y,y, \q)$ and $H^2(Y,y, \q)\cong H^2(X,Z, \q)$. The latter in turn sits in an exact sequence $$ H^1(X,\q)\to H^1(Z, \q)\to H^2(X,Z, \q)\to H^2(X, \q). $$ We assumed that $H^1(X,\q)=0$ hence there is an injection $$ H^1(Z, \q)\DOTSB\lhook\joinrel\to \ker\bigl[H^2(X,Z, \q)\to H^2(X, \q)\bigr]. $$ Since $H^2(Y, \q)\to H^2(Y_0, \q)$ factors as $$ H^2(Y, \q)\cong H^2(X,Z, \q)\to H^2(X, \q)\to H^2(Y_0, \q) $$ we obtain an injection $$ H^1(Z, \q)\DOTSB\lhook\joinrel\to \ker\bigl[H^2(Y, \q)\to H^2(Y_0, \q)\bigr].\qed $$ We thus need to find examples as above where $H^1(Z, \q)\neq 0$. In the next examples, $Z$ is an Abelian variety. \begin{exmp}\label{Lef.fails.for.H2.exmp} Let $\pi_0: S\to \p^1$ be a simply connected elliptic surface. For instance, we can take $S$ to be the blow-up of $\p^2$ at the 9 base points of a cubic pencil. By composing $\pi$ with suitable automorphisms of $\p^1$ we get $n$ simply connected elliptic surfaces $\pi_i:S_i\to \p^1$ such that for every point $p\in \p^1$ at most one of the $\pi_i$ has a singular fiber over $p$. Thus the fiber product $$ X_1:=S_1\times_{\p^1}S_2\times_{\p^1}\cdots \times_{\p^1}S_n $$ is a smooth variety of dimension $n+1$. General fibers of the projection $\pi_1:X_1\to \p^1$ are Abelian varieties of dimension $n$ and $X_1$ is simply connected. Fix Abelian fibers $A_1, A_2\subset X_1$. Let $H_1\subset X_1$ be a very ample divisor such that $A_1\cap H_1$ is smooth. Let $X:=B_{A_1\cap H_1}X_1\to X_1$ denote the blow-up of $A_1\cap H_1$. Let $H\subset X$ denote the birational transform of $H_1$ and $A\subset X$ the birational transform of $A_1$. For $m\gg 1$, the linear system $|H+mA_2|$ is base point free. This gives a morphism $g:X\to Y$. Note that $g$ contracts $A$ to a point $y\in Y$ and $g:X\setminus A\to Y\setminus \{y\}$ is an isomorphism. 
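
To make explicit why this construction fits the hypotheses of Lemma \ref{Lef.fails.for.H2.lem} (a brief verification, using only the invariance of $H^1$ under blow-ups along smooth centers): $X\to X_1$ and $A\to A_1$ do not change $H^1$ with $\q$-coefficients, so
$$
H^1(X,\q)\cong H^1(X_1,\q)=0 \qtq{and} H^1(A,\q)\cong H^1(A_1,\q)\cong \q^{2n}\neq 0.
$$
Hence, applying Lemma \ref{Lef.fails.for.H2.lem} with $Z=A$, the kernel of $H^2(Y,\q)\to H^2(Y_0,\q)$ contains a copy of $\q^{2n}$ for every smooth divisor $Y_0\subset Y$ not passing through $y$.
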
\end{exmp}

\noindent Princeton University, Princeton NJ 08544-1000

{\begin{verbatim}[email protected]\end{verbatim}}

\end{document}
\begin{document} \title{An adaptive kriging method for solving nonlinear inverse statistical problems} \author{Shuai Fu\thanks{EDF Lab Chatou \& Universit\'e Paris XI}, Mathieu Couplet\thanks{EDF Lab Chatou} and Nicolas Bousquet\thanks{EDF Lab Chatou \& Institut de Math\'ematique de Toulouse: [email protected] }} \maketitle \begin{abstract} {In various industrial contexts, estimating the distribution of unobserved random vectors $X_i$ from some noisy indirect observations $H(X_i)+U_i$ is required. If the relation between $X_i$ and the quantity $H(X_i)$, measured with the error $U_i$, is implemented by a CPU-consuming computer model $H$, a major practical difficulty is to perform the statistical inference with a relatively small number of runs of $H$.} Following \citet{fu14}, a Bayesian statistical framework is considered \textcolor{black}{to make use of possible prior knowledge on the parameters of the distribution of the {$X_i$}, which is assumed Gaussian.} Moreover, a \textcolor{black}{Markov Chain Monte Carlo} (MCMC) algorithm is carried out to estimate their posterior distribution by replacing $H$ by a kriging metamodel built from a limited number of \textcolor{black}{simulated experiments}. Two heuristics, involving two different criteria to be optimized, are proposed to sequentially design these computer experiments within the limits of a given computational budget. The first criterion is a Weighted Integrated Mean Square Error (WIMSE) \citep{picheny10}. The second one, called Expected Conditional Divergence (ECD), {developed in the spirit of the} Stepwise Uncertainty Reduction (SUR) {criterion} \citep{vazquez09,bect12}, is based on the {discrepancy} between two consecutive approximations of the target posterior distribution. Several numerical comparisons, conducted on a toy example and then on a motivating real case-study, show that such adaptive designs can significantly outperform the classical choice of a maximin Latin Hypercube Design (LHD) of experiments. \textcolor{black}{Dealing with a major concern in hydraulic engineering, a particular emphasis is placed upon the prior elicitation of the case-study, highlighting the overall feasibility of the methodology.} Faster convergence and manageability considerations lead us to recommend the use of the ECD criterion in practical applications. \end{abstract} {\small \textbf{Keywords:} {Inverse statistical problems}; {Bayesian inference}; {Kriging}; {Adaptive design of experiments}; {Metropolis-Hastings-within-Gibbs algorithm}; {Prior elicitation}. } \section{Introduction} \label{intro} In many industrial problems, engineers have to deal with uncertain quantities which cannot be directly measured. Moreover, \textcolor{black}{some of them can suffer} from inherent variability. For instance, in hydraulics, the assessment of a risk of flooding usually depends on some quantities, called Manning-Strickler coefficients, which represent the roughness of the river bed. Because rivers are complex, changeable systems, it appears reasonable to consider these coefficients as random variables. Although they cannot be directly measured, it appears possible to estimate their randomness from flooding data by means of computer simulation.
Estimating the probability distribution of such random unobserved variables involves some observations {$Y_i\in\mathbb{R}^p$} (e.g., water levels), $i=1\ldots{}n$, and a computer model $H$ (e.g., a Saint-Venant equation solver) which links the unobserved variables of interest {$X_i\in\mathbb{R}^q$} (e.g., Manning-Strickler coefficients) to the $Y_i$: \begin{eqnarray}\label{eq:model} Y_i = H(X_i, d_i) + U_i \end{eqnarray} where $d_i\in\mathbb{R}^{q_2}$ stands for some known or observed quantities and where $U_i$ represents some unobserved measurement errors. A Gaussian framework is adopted in this article: the $\trans{(X_i \; U_i)}$ are assumed to be independent Gaussian random vectors such that \begin{eqnarray}\label{eq:gaussianFramework} \vr{X_i\\ U_i} \sim \mathcal{N}_{q+p}(\vr{m\\ 0}, \begin{mx}{cc} C & 0\\ 0 & R \end{mx}) \end{eqnarray} where $\mathcal{N}_k(\mu, \Sigma)$ is the $k$-dimensional Gaussian distribution of mean $\mu$ and covariance matrix $\Sigma$. The issue is then to estimate the unknown parameters $\theta=(m, C)$ of the probability distribution of the $X_i$ from some field data $(y_i, d_i)$\footnote{ {where $y_i$ is a realization of the random vector $Y_i$.}}, $1\leq i\leq n$, given $H$ and the error covariances $R$. The accuracy of measurements is generally given or can be assessed: the assumption that $R$ is known is a sound basis for the inference, since it prevents a non-identifiability problem for $(\theta, R)$. From the general perspective of the analysis of some independent measurements $y_i$ performed on similar systems (under conditions $d_i$), this statistical model makes it possible to capture the inherent variability of some variables $X_i$ in the population which is studied. For instance, mechanical tests generally involve a production-lot population of components whose precise characteristics (\textit{e.g.} Young's modulus or thermal expansion coefficient) suffer from a non-negligible variability.\\ The major practical obstacle to the estimation of $\theta$ is the CPU cost and time needed to evaluate $H(x, d)$, given an input $(x, d)\in\mathbb{R}^Q$ ($Q=q+q_2$). In hydraulics, one run of $H$ typically takes a few hours per CPU. Several methods have been developed to tackle this difficulty. \citet{celeux07} considered maximum likelihood estimation by Expectation-Conditional Maximisation Either (ECME) \citep{liu94} based on an iterative linearisation of $H$: this algorithm should be avoided if the nonlinearities of $H$ relative to $x$ are significant, otherwise it can be very efficient. \citet{barbillon11} proposed to couple a Stochastic Expectation Maximisation (SEM) algorithm \citep{celeux88} with a kriging metamodelling of $H$ to improve the robustness of the estimation. Kriging, also known as Gaussian Process (GP) regression, was suggested by \citet{sacks89a} to deal with CPU-expensive computer models. The purpose of this metamodelling technique is to build an accurate surrogate model of $H$ from some computer experiments (some runs of $H$). A crucial question is then how to determine the Design of these Experiments (DoE). Several methods of calibration of computer models relying on kriging were proposed by \citet{kennedy01} and \citet{bayarri07}.
Although their statistical models are close to the one postulated here, an important difference is that the $X_i$ are assumed to be random in this article, whereas the unknown inputs $x$ are part of the parameters $\theta$ to estimate in their studies. Hereafter, the Bayesian framework suggested by \citet{fu14}, which involves a kriging of $H$, is considered. It makes it possible to take into account prior information about the $X_i$ (which could possibly arise from expert or past assessments) through the definition of a so-called prior probability distribution for $\theta$, the density of which is denoted $\pi(\theta)$. A Metropolis-Hastings-within-Gibbs Markov Chain Monte-Carlo (MCMC) sampling can then be carried out to estimate the posterior distribution of $\theta$ (given the field data). The benefit of kriging is twofold: extensive sampling becomes feasible, and the uncertainty about $H$ can be accounted for by embedding the GP into the statistical model and the MCMC procedure. From a Bayesian point of view, there is no reason to drop the uncertainty on $H$ by only keeping the kriging predictor $\hat{H}(.)$. Besides, the development of purpose-oriented adaptive DoE approaches, such as the Stepwise Uncertainty Reduction (SUR) \citep{vazquez09,bect12}, is then made possible. Such approaches seek a trade-off between shrinking the uncertainty on $H$ (which is measured by the kriging covariance) as much as possible, and exploring the most interesting areas of the input space of $H$ regarding the considered objective. A classical example comes from the field of global optimisation, where the Expected Improvement criterion was proposed by \citet{mockus78,jones98}. The purpose of this article is to contribute to the definition of efficient adaptive DoE algorithms for solving the inverse statistical problem specified earlier.\\ The article is organised as follows. Section \ref{review} gives details about kriging metamodelling, as well as the maximin Latin Hypercube Design (maximin LHD) which provides a first knowledge of the computer model $H$ (before starting a purpose-oriented exploration of the input space of $H$). In Section \ref{prior_MCMC}, the method used to specify an informative prior $\pi(\theta)$, then the inference by MCMC, are described. Afterwards, two methods, called {Expected Conditional Divergence (ECD)} and Weighted Integrated Mean Square Error (WIMSE), which derive from two purpose-oriented criteria to optimise, are proposed to sequentially enrich the DoE in Section~\ref{ECD} and Section~\ref{WIMSE}. \textcolor{black}{Numerical studies are conducted on a toy example in Section~\ref{tests} to compare the efficiency of these approaches with a posterior approximation based on a static space-filling design (maximin LHD). Finally, the full methodology is applied to a real hydraulic computer model: its input roughness parameters are calibrated from noisy observations of water levels. } Section~\ref{conclusion} concludes the article by giving major directions for further work. \section{Kriging and maximin LHD space-filling design} \label{review} This section recalls some basics of kriging and of the design of computer experiments which matter for the remainder of the article.
\subsection{Kriging} \label{sec:kriging} Kriging is a geostatistical method \citep{matheron71} which was suggested by \citet{sacks89a} to build a cheap surrogate model of a computer model, from a limited of runs of the latter, over a hypercube $\Omega\subset\mathbb{R}^Q$. This method has known a growing interest in metamodelling with the writings of \citet{koehler96,stein99,kennedy01,santner03}, amongst others. In this section, a scalar function $\dfc{h}{\Omega}{\mathbb{R}}$ is considered: in the case of a vector-valued function $\dfc{H}{\Omega}{\mathbb{R}^p}$, each component $h_i(.)$ of $H(.)$ can be ``kriged'' independently from the others, as done in the numerical experiments in Section~\mathds{R}f{tests}. A usual manner to present kriging is starting from the premise that the considered function $h(.)$ is a particular realization of an underlying GP $\scriptscriptstyle{\leq}athcal{H}(.)$: \begin{eqnarray}\label{eq:krig} \forall z\in\Omega, & & \scriptscriptstyle{\leq}athcal{H}(z) = F(z)\,\beta + \mathcal{G}(z), \end{eqnarray} where $\beta$ is a vector of $\mathbb{R}^K$, where $F(z) = \vr{\!\!f_1(z)\cdots f_K(z)\!\!}$ with $\dfc{f_k}{\mathbb{R}^Q}{\mathbb{R}^p}$, $1\leq k\leq K$, a family of linearly independent functions, and where $\mathcal{G}$ is a centered GP ($\mathbb{E}sp{\mathcal{G}(z)}=0$, for all $z\in\Omega$). The GP hypothesis means that $\trans{\vr{\!\!\mathcal{G}(z_1)\:\cdots\:\mathcal{G}(z_k)\!\!}}$ is a $k$-dimensional Gaussian vector for any set $\{z_1, \cdots, z_k\}\in\Omega$ and any $k\geq 1$. Although it may appear artificial, this assumption leads to a flexible statistical model which has been applied successfully in many contexts, and, in a Bayesian perspective, it can be interpreted as the definition of a prior on $h$ \citep{rasmussen06}. For any $(z,w)\in\Omega^2$, the mean function $\dfc{\scriptscriptstyle{\leq}u}{\Omega}{\mathbb{R}}$ of $\scriptscriptstyle{\leq}athcal{H}$ is defined by $\scriptscriptstyle{\leq}u(z) = \mathbb{E}sp{\scriptscriptstyle{\leq}athcal{H}(z)} = F_z\,\beta$, and the covariance function $\dfc{K}{\Omega^2}{\mathbb{R}}$ of $\mathcal{G}$ (and $\scriptscriptstyle{\leq}athcal{H}$) by $\Cov{\mathcal{G}(z),\mathcal{G}(w)} = K(z,w)$. In the following, as most often assumed by authors when modelling computer models, $\mathcal{G}$ is stationary, thus $K(z,w)$ only depends on $z-w$: $K(z,w)=\sigma^2\,K(z-w)$ by abuse of notation, with $K(0)=1$. Let $D_N=\{z_1, \cdots, z_N\}\subset\Omega$ be a DoE associated to observations $h_N\in\mathbb{R}^N$, and $\trans{\scriptscriptstyle{\leq}athcal{H}_N}=\vr{\!\!\scriptscriptstyle{\leq}athcal{H}(z_1)\cdots\scriptscriptstyle{\leq}athcal{H}(z_N)\!\!}$, then, from a direct application of a classical theorem relative to the conditioning of Gaussian vectors, the process $\scriptscriptstyle{\leq}athcal{H}$ conditioned by the observations, that is $\scriptscriptstyle{\leq}athcal{H}|\scriptscriptstyle{\leq}athcal{H}_N\!=\!h_N$, is still a GP over $\Omega$ with mean function $\dfc{\scriptscriptstyle{\leq}u_{D_N}}{\Omega}{\mathbb{R}}$ and covariance function $\dfc{K_{D_N}}{\Omega^2}{\mathbb{R}}$. 
Namely, for all $(z,w)$, \begin{eqnarray} h_N(z) & \sim & {\cal{N}}\left(\mu_{D_N}(z), {K_{D_N}(z,z)}\right) \label{dist_GP} \end{eqnarray} where \begin{eqnarray} & \mu_{D_N}(z) = F(z)\,\beta + K_{z,N} {K_{N,N}}^{-1} (h_N\!-\!F_N\,\beta) \label{eq:mu}\\[1ex] & K_{D_N}(z,w) = K(z,w) - K_{z,N}{K_{N,N}}^{-1}\trans{K_{w,N}} \end{eqnarray} with $K_{z,N}=\vr{\!\!K(z,z_1)\cdots K(z,z_N)\!\!}$ (idem for $K_{w,N}$) and ${[K_{N,N}]}_{i,j} = K(z_i, z_j)$. Of course, $K_{D_N}(z,z) < K(z,z)$: the more observations are available, the less uncertainty on $\mathcal{H}$ remains. If $K(.,.)$ and $\beta$ are known, then $\mu_{D_N}(z)$ is the kriging predictor of $\mathcal{H}(z)|\mathcal{H}_N\!=\!h_N$, that is its Best Linear Unbiased Predictor (BLUP), and $K_{D_N}(z,w)$ is the kriging covariance, that is the covariance function of the error of prediction. In particular, $K_{D_N}(z,z)$ is the Mean Square Error (MSE) of the BLUP at $z$. Furthermore, if $K(.,.)$ is known but $\beta$ unknown (universal kriging), then the generalised least-squares estimator \begin{eqnarray}\label{eq:beta} \hat{\beta} = {(\trans{F_N}{K_{N,N}}^{-1}F_N)}^{-1}\trans{F_N}{K_{N,N}}^{-1}h_N \end{eqnarray} of $\beta$ is also the maximum likelihood estimator, and the BLUP $\hat{h}_{D_N}(z)$ of $\mathcal{H}(z)|\mathcal{H}_N\!=\!h_N$ is obtained by substituting $\hat{\beta}$ for $\beta$ in Equation~\eqref{eq:mu}. Last but not least, the kriging covariance associated with $\hat{h}_{D_N}(z)$ is then \begin{eqnarray} K_{D_N}(z,w) & = & K(z,w) - K_{z,N}{K_{N,N}}^{-1}\trans{K_{w,N}} \nonumber \\ & + & \trans{(F(z) - \trans{F_N}{K_{N,N}}^{-1}\trans{K_{z,N}})} \label{eq:krigingCov} {(\trans{F_N}{K_{N,N}}^{-1}F_N)}^{-1} \\ & & \ \ \times \ (F(w) - \trans{F_N}{K_{N,N}}^{-1}\trans{K_{w,N}}) \nonumber \end{eqnarray} (we use the same notation as before - $\beta$ known - for the sake of simplicity) with $[{F_N}]_{i,j} = f_j(z_i)$. It can be seen as an approximation of the covariance function of the GP $\mathcal{H}|\mathcal{H}_N\!=\!h_N$ and, together with $\hat{h}_{D_N}(z)$, is generally used, for example in \citep{bect12}, to model the uncertainty on the Gaussian vector $\trans{\vr{\!\!\mathcal{H}(w_1)\cdots\mathcal{H}(w_M)}}|\mathcal{H}_N\!=\!h_N$ for any set $\{w_1,\cdots,w_M\}\subset\Omega$; see also \citet{santner03,bachoc13} for further details. Following \citet{fu14}, $K_{D_N}(z,w)$ is used in the MCMC procedure to account for the dependence between the missing data $X_i$ due to the uncertainty on (each component $h_i(.)$ of) the computer model $H(.)$; see Section~\ref{prior_MCMC} for more details. The induced Mean Square Error $\mbox{MSE}_{D_N}:z\mapsto K_{D_N}(z,z)$ {plays} an important role in Section~\ref{WIMSE}. In practical applications, $K(.,.)$ is unknown and is estimated, thanks to a model $K_{\psi}(.,.)$ parametrised by $\psi\in\mathbb{R}^L$, by different techniques such as maximum likelihood (as hereafter) or cross-validation. In the remainder of this article, the plug-in estimates obtained by replacing $K(.,.)$ by $K_{\hat{\psi}}(.,.)$, with $\hat{\psi}$ the estimator of $\psi$, are employed.
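For illustration purposes only, the universal kriging formulas \eqref{eq:mu}, \eqref{eq:beta} and \eqref{eq:krigingCov} can be turned into the following short numerical sketch. The Gaussian covariance model and the constant-plus-linear trend basis are arbitrary choices made for this illustration, not those retained in the article.
{\small\begin{verbatim}
# Minimal universal-kriging sketch of eq. (mu), (beta) and (krigingCov).
# The Gaussian covariance and the constant+linear trend are illustrative choices.
import numpy as np

def cov(z, w, sigma2=1.0, rho=0.3):
    """Stationary covariance K(z,w) = sigma2 * exp(-||z-w||^2 / (2 rho^2))."""
    d2 = np.sum((z[:, None, :] - w[None, :, :]) ** 2, axis=-1)
    return sigma2 * np.exp(-0.5 * d2 / rho ** 2)

def trend(z):
    """Trend basis F(z) = [1, z_1, ..., z_Q]."""
    return np.hstack([np.ones((z.shape[0], 1)), z])

def universal_kriging(ZN, hN, Znew):
    K_NN = cov(ZN, ZN) + 1e-10 * np.eye(len(ZN))     # jitter for conditioning
    K_zN = cov(Znew, ZN)
    F_N, F_z = trend(ZN), trend(Znew)
    Ki = np.linalg.inv(K_NN)
    A = F_N.T @ Ki
    beta = np.linalg.solve(A @ F_N, A @ hN)          # GLS estimator (eq. beta)
    mu = F_z @ beta + K_zN @ Ki @ (hN - F_N @ beta)  # BLUP (eq. mu)
    U = F_z.T - A @ K_zN.T                           # columns: F(z) - F_N' K^{-1} K_{z,N}'
    K_post = (cov(Znew, Znew) - K_zN @ Ki @ K_zN.T
              + U.T @ np.linalg.inv(A @ F_N) @ U)    # kriging covariance (eq. krigingCov)
    return mu, K_post

rng = np.random.default_rng(0)
ZN = rng.uniform(size=(12, 2))                       # design D_N in [0,1]^2
hN = np.sin(3 * ZN[:, 0]) + ZN[:, 1] ** 2            # toy responses standing in for h
mu, K_post = universal_kriging(ZN, hN, rng.uniform(size=(5, 2)))
print(mu, np.diag(K_post))                           # predictor and MSE at new points
\end{verbatim}}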
\subsection{Design of experiments ({\it maximin}-Latin Hypercube Designs)} Obviously, the predictive accuracy of kriging highly depends on the DoE $D_N$. Following \citet{picheny10}, it is possible to distinguish three kinds of DoEs: \begin{itemize} \item {\it space-filling} designs, which aim to fill the input space with a finite number of points independently of the considered model (e.g., {\it maximin}-LHD); \item {\it model-oriented} designs, which attempt to build a DoE suited to the features of the model $H$ or the metamodel {(e.g. IMSE, see Section~\ref{WIMSE:intro})}; \item {\it purpose-oriented} designs, which account for the final aim of the study in order to find the best-adapted DoE (e.g., to compute an exceedance probability by accelerated Monte Carlo methods). \end{itemize} {In this article, a {\it purpose-oriented} DoE is built in an adaptive way. A first calibration of the covariance parameters is {performed} from an initial {\it maximin}-LHD, then the DoE is sequentially enriched using the strategies detailed in Sections~\ref{ECD} and~\ref{WIMSE}. The concept of LHDs was introduced in \cite{mckay79}; such designs ensure a good coverage of the interval to which each scalar variable belongs. Then \cite{johnson90} proposed the {\it maximin} distance criterion to optimize LHDs. {\it Maximin} means maximizing the minimum inter-site distance within the set of $N$ points: \begin{eqnarray*} \delta_D & = & \min_{i\neq j} \Vert z_{(i)}-z_{(j)}\Vert_2. \end{eqnarray*} Therefore, the {\it maximin} criterion prevents the points of the design from being close to each other. In the present work, {\it maximin}-LHDs are obtained by the algorithm of \citet{morris95}.} \section{Bayesian statement and inference} \label{prior_MCMC} \subsection{Prior elicitation}\label{prior.elicitation} In the Bayesian statistical framework favored in \cite{fu14}, a Gaussian-Inverse Wishart prior distribution was elicited: \begin{eqnarray} m\,|\,C & \sim & \mathcal{N}_q(\mu,C/a),\label{eq:InfoPrior1}\\ C & \sim & \mathcal{IW}_q(\Lambda,\nu).\label{eq:InfoPrior2} \end{eqnarray} \textcolor{black}{ This prior can be interpreted as the posterior distribution of virtual data given a noninformative prior, which presents some advantages in subjective Bayesian analysis \citep{Bousquet2015}. In particular, a clear meaning can be given to the hyperparameters $(\mu,a,\Lambda,\nu)$, which simplifies prior calibration.} \textcolor{black}{ Indeed, $a$ can be understood as the size of a {\it virtual} sample of data $X$, which modulates the strength of the practitioner's belief in prior information (for instance provided by subjective experts). It should be calibrated under the constraint $a<n$ to ensure that the posterior behavior is mainly driven by objective data information. A default (say, ``objective'') choice is $a=1$.} \textcolor{black}{Furthermore, $\mu$ is the prior predictive mean, median and most probable value of $X$, which can be estimated by a measure of central tendency provided by past calibration results in close situations.
In the motivating case-study explored in Section \mathds{R}f{numerical.experiments}, such information was found by bibliographical researches (Table \mathds{R}f{prior.strickler}).} \textcolor{black}{Finally, denoting $\scriptscriptstyle{\leq}athbf{X,Y}$ the set of missing and truly observed data, the reparametrizations $\Lambda = (a+1)\cdot C_{e}$ and $\nu = a+q+2$ imply that the conditional posterior distribution of $C$ given $m$ is the Inverse Wishart distribution $\scriptscriptstyle{\leq}athcal{IW}\Big((a+1)\,C_{e}+(n+1)\,\hat{C}_n, \,\nu+n+1\Big)$ with $\hat{C}_n = \frac{1}{n} \sum_{i=1}^n (m-x_i)(m-x_i)^T$, the {expectation} of which being \begin{eqnarray*} \scriptscriptstyle{\leq}athbb{E}[C\,|\,m,\scriptscriptstyle{\leq}athbf{X,Y}] & = & \frac{a+1}{a+n+2}\cdot C_{e} + \frac{n+1}{a+n+2} \cdot \hat{C}_n. \end{eqnarray*} This last expression highlights the meaning and influence of $a$ as a virtual size. The components of $C_{e}$ are to be calibrated in function of prior knowledge on $X$ too, expressed through its predictive prior distribution, which is a decentered Student law: \begin{eqnarray*} X & \sim & \scriptscriptstyle{\leq}box{St}_q\Big(\scriptscriptstyle{\leq}u,\,\frac{(a+1)^2}{a(a+3)}C_{e},\,a+3 \Big) \label{chap6:predic} \end{eqnarray*} with mean vector $\scriptscriptstyle{\leq}u$ and covariance matrix $\frac{a+1}{a} C_{e}$. Again, in the case-study that motivated this work, prior information on the ratio between average values and standard deviation of Strickler-Manning coefficients was available (Figure \mathds{R}f{usa9600}), which allowed for a full prior calibration (Section \mathds{R}f{numerical.experiments}). } \subsection{Posterior computation}\label{posterior.computation} A Gibbs sampler \citep{tierney96} was proposed to compute the posterior distribution of $\theta=(m,C)$. 
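Before turning to the adaptation of the sampler to the kriging emulator, the two conjugate conditional draws of $\theta=(m,C)$ implied by the prior \eqref{eq:InfoPrior1}--\eqref{eq:InfoPrior2} can be sketched as follows. The sketch is illustrative only: it takes the missing data $\mathbf{X}$ as given, which is precisely what the Metropolis-Hastings step of the adapted sampler described below has to provide, and it relies on SciPy's inverse-Wishart sampler as an arbitrary implementation choice.
{\small\begin{verbatim}
# Conjugate conditional draws for (C | m, X) and (m | C, X) under the
# Gaussian-Inverse Wishart prior; illustrative sketch only.
import numpy as np
from scipy.stats import invwishart

def gibbs_theta_step(X, m, mu0, a, Lam, nu, rng):
    """One draw of (m, C) given the current missing data X (n x q)."""
    n, q = X.shape
    # C | m, X ~ IW(Lam + sum_i (m - x_i)(m - x_i)' + a (m - mu0)(m - mu0)', nu + n + 1)
    S = (X - m).T @ (X - m) + a * np.outer(m - mu0, m - mu0)
    C = invwishart.rvs(df=nu + n + 1, scale=Lam + S, random_state=rng)
    # m | C, X ~ N( a/(n+a) mu0 + n/(n+a) xbar , C/(n+a) )
    xbar = X.mean(axis=0)
    m_new = rng.multivariate_normal((a * mu0 + n * xbar) / (n + a), C / (n + a))
    return m_new, C

rng = np.random.default_rng(1)
q, n = 2, 30
X = rng.normal([0.5, 0.6], [0.2, 0.25], size=(n, q))  # stand-in for the missing data
mu0, a, nu = np.zeros(q), 1.0, q + 3
Lam = 2 * np.diag([0.18 ** 2, 0.4 ** 2])
m, C = mu0.copy(), np.eye(q)
for _ in range(1000):                                 # toy chain on (m, C) only
    m, C = gibbs_theta_step(X, m, mu0, a, Lam, nu, rng)
print(m, C)
\end{verbatim}}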
Actually, replacing the expensive-to-compute function $H$ with a kriging emulator $\widehat{H}$, as in \citet{barbillon11}, and introducing a new {\it emulator error} $\mbox{MSE}$, the Gibbs sampler can be adapted as follows:

\paragraph{Gibbs sampler (at the $(r+1)$-th iteration)}\label{algo:Gibbs}
\rule[0.5ex]{6cm}{0.1mm}
\small
\texttt{Given $(m^{(r)}, C^{(r)}, \mathbf{X}^{(r)})$ for $r=0,1,2,\dots$, generate:} \\
\begin{enumerate}
\item $C^{(r+1)}|\dots \sim \mathcal{IW}\Big(\Lambda+\sum_{i=1}^n (m^{(r)}-X_i^{(r)})(m^{(r)}-X_i^{(r)})'+a(m^{(r)}-\mu)(m^{(r)}-\mu)', \,\nu+n+1\Big)$,
\item $m^{(r+1)}|\dots \sim \mathcal{N}\Big(\frac{a}{n+a}\mu+\frac{n}{n+a}\overline{\mathbf{X}_n^{(r)}},\, \frac{C^{(r+1)}}{n+a}\Big)$ where $\overline{\mathbf{X}_n^{(r)}}=n^{-1} \sum_{i=1}^n X_i^{(r)}$,
\item $\mathbf{X}^{(r+1)}|\dots \propto |\mathbf{R}+\mbox{MSE}^{(r+1)}|^{-\frac{1}{2}}\cdot \exp\Bigg\{-\frac{1}{2} \sum_{i=1}^n (X_i^{(r+1)}-m^{(r+1)})'\Big[C^{(r+1)}\Big]^{-1}(X_i^{(r+1)}-m^{(r+1)}) - \frac{1}{2}\Big(\Big(\mathcal{Y}_1-\widehat{H}_{N,1}^{(r+1)}\Big)',\dots,\Big(\mathcal{Y}_n-\widehat{H}_{N,n}^{(r+1)}\Big)'\Big) \Big(\mathbf{R}+\mbox{MSE}^{(r+1)}\Big)^{-1} \left( \begin{array}{c} \mathcal{Y}_1-\widehat{H}_{N,1}^{(r+1)}\\ \vdots \\ \mathcal{Y}_n-\widehat{H}_{N,n}^{(r+1)} \end{array} \right)\Bigg\}$ \label{eq:posterX}
\texttt{where $\widehat{H}_{N,i}^{(r+1)}=\widehat{H}_N(X_i^{(r+1)},d_i)$ and $\mbox{MSE}^{(r+1)}=\mbox{MSE}(\mathbf{X}^{(r+1)},\mathbf{d})$ is the block diagonal matrix}
\begin{eqnarray*}
\mbox{MSE}(\mathbf{X}^{(r+1)},\mathbf{d}) & = & \left(\begin{array}{ccc} \mbox{MSE}_1(\mathbf{X}^{(r+1)},\mathbf{d}) & & {\bf 0}\\ & \ddots & \\ {\bf 0} & &\mbox{MSE}_p(\mathbf{X}^{(r+1)},\mathbf{d}) \end{array}\right),
\end{eqnarray*}
\texttt{each diagonal block being of size $n\times n$.}
\end{enumerate}
\rule[0.5ex]{\textwidth}{0.1mm}
\normalsize

\noindent In the third step, the variance matrices $\mbox{MSE}_j(\mathbf{X}^{(r+1)},\mathbf{d})\in\mathcal{M}^{n\times n}$ are defined by
\begin{eqnarray*}
\mbox{MSE}_j(\mathbf{X}^{(r+1)},\mathbf{d}) & = & \mathbb{E}\left(\left(\mathcal{H}_j(\mathbf{X}^{(r+1)},\mathbf{d})-\widehat{H}_j(\mathbf{X}^{(r+1)},\mathbf{d})\right)^2\,|\,\mathbf{H}_{D_N}\right),
\end{eqnarray*}
for $j=1,\ldots,p$, where $\mathcal{H}_j$ denotes the $j$-th dimension of the Gaussian process $\mathcal{H}$. Moreover,
\begin{eqnarray*}
\mathbf{R} & = & \left(\begin{array}{ccc} \mathbf{R}_{1} & & {\bf 0}\\ & \ddots & \\ {\bf 0} & &\mathbf{R}_{p} \end{array}\right), \qquad \mathbf{R}_{i} \, = \, \left(\begin{array}{ccc} R_{ii} & & 0 \\ & \ddots & \\ 0 & & R_{ii}\end{array}\right) \in\mathcal{M}^{n\times n},
\end{eqnarray*}
where $R_{ii}$ is the $i$-th diagonal component of the diagonal variance matrix $R$. It is worth noting that this third conditional distribution does not belong to any closed-form family of distributions. Therefore a Metropolis-Hastings (MH) step is used to simulate $\mathbf{X}^{(r+1)}$ (see Appendix A). As discussed in \cite{fu14}, the use of MCMC algorithms involves several possible sources of error. According to experimental trials, the accuracy of the metamodel plays a critical role in the estimation problem. MCMC algorithms can produce Markov chains converging towards the desired posterior distribution; however, if the function $H$ is badly approximated, then, apart from the {\it algorithmic error} introduced by the MCMC algorithm, the result can also suffer from an {\it emulator error}.

\section{The Expected Conditional Divergence criterion for adaptive designs}
\label{ECD}

The two following sections address the issue of building adaptive designs of experiments, by proposing two strategies. In this section, a criterion called $\mbox{ECD}$ (Expected Conditional Divergence) is {built}, which {can be seen as an adaptation of} the Expected Improvement criterion proposed in \citet{jones98}. {Let us notice that the expected divergence criterion proposed below, although close to a Stepwise Uncertainty Reduction (SUR) criterion, does not derive from the SUR formulation of \citet{vazquez09,bect12}. The latter would lead to a more challenging approach from a computational perspective in our context.}

\subsection{Principle}

Ideally, the posterior distribution of the parameters $\theta=(m,C)$ after adding a new point $z_{(N+1)}$ to the current DoE $D_N$ should be as close as possible to the posterior distribution knowing the original function $H$, i.e. a relevant discrepancy measure between these two distributions must be minimized.
Based on information-theoretical arguments given in \citet{cover06}, the Kullback-Leibler {(KL)} divergence
\begin{eqnarray}
& & \mbox{KL}\Big(\pi(\theta|{\bf y,d},H)\, ||\,\pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{H(z)\})\Big), \label{eq:idealKL}
\end{eqnarray}
is a good choice of discrepancy measure. Recall that, given two densities $p(x)$ and $q(x)$ defined over the same space $\mathcal{X}$,
\begin{eqnarray*}
\mbox{KL}(p||q) & = & \int_{\mathcal{X}} p(x)\log\frac{p(x)}{q(x)}\, dx.
\end{eqnarray*}
Ideally, the next point $z_{(N+1)}$ should be sought within the feasible region $\Omega$ as the global minimizer of this divergence. \textcolor{black}{Obviously, the unknown term $\pi(\theta|{\bf y,d},H)$ makes this formulation intractable. However, a tractable sub-optimal criterion can be heuristically derived from it by the following rationale.} It must be noticed that
\begin{eqnarray}
z_{(N+1)} & = & \argmin {z\in\Omega}\, \mbox{KL}\Big(\pi(\theta|{\bf y,d},H)\, ||\, \pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{H(z)\})\Big),\nonumber\\
& = & \argmin {z\in\Omega}\, \mbox{KL}\Big(\pi(\theta|{\bf y,d},H)\, ||\, \pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{H(z)\})\Big) \nonumber\\
& & \ \ \ - \ \mbox{KL}\Big(\pi(\theta|{\bf y,d},H)\, ||\, \pi(\theta|{\bf y,d},\mathbf{H}_{D_N})\Big),\nonumber\\
& = & \argmax {z\in\Omega}\, \int \pi(\theta|{\bf y,d},H)\, \log\frac{\pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{H(z)\})}{\pi(\theta|{\bf y,d},\mathbf{H}_{D_N})}\, d\theta,\nonumber
\end{eqnarray}
where the second equality holds because the subtracted term does not depend on $z$. The {intractable} target density $\pi(\theta|{\bf y,d},H)$ has to be replaced with its best available approximation, which is $\pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{H(z)\})$. Under the kriging assumptions, for any $z$ this distribution is closer to $\pi(\theta|{\bf y,d},H)$ than $\pi(\theta|{\bf y,d},\mathbf{H}_{D_N})$. Therefore, a sub-optimal version of the idealistic criterion is:
\begin{eqnarray*}
z_{(N+1)} & = & \argmax {z\in\Omega}\, \mbox{KL}\Big(\pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{H(z)\})\, ||\,\pi(\theta|{\bf y,d}, \mathbf{H}_{D_N})\Big).\label{eq:KL1}
\end{eqnarray*}
In other words, the chosen strategy aims at finding the optimal point $z_{(N+1)}$ which modifies the current distribution $\pi(\theta|{\bf y,d},\mathbf{H}_{D_N})$ as much as possible in an information-theoretic sense.
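In practice, the posterior distributions entering these KL divergences are only known through MCMC samples, so the divergence itself has to be estimated. The article relies on the nearest-neighbour estimator of \citet{wang09} (see Appendix~B); the minimal one-nearest-neighbour variant sketched below is given for illustration only.
{\small\begin{verbatim}
# Nearest-neighbour estimate of KL(p || q) from two samples; illustrative sketch
# (the estimator actually used in the article is that of Wang et al., Appendix B).
import numpy as np
from scipy.spatial import cKDTree

def kl_nn(theta_p, theta_q):
    """KL(p||q) estimated from theta_p ~ p (n x d) and theta_q ~ q (m x d)."""
    n, d = theta_p.shape
    m = theta_q.shape[0]
    r = cKDTree(theta_p).query(theta_p, k=2)[0][:, 1]  # NN distance within the p-sample
    s = cKDTree(theta_q).query(theta_p, k=1)[0]        # NN distance to the q-sample
    return d * np.mean(np.log(s / r)) + np.log(m / (n - 1.0))

rng = np.random.default_rng(2)
p_sample = rng.normal(0.0, 1.0, size=(2000, 2))
q_sample = rng.normal(0.3, 1.2, size=(2000, 2))
print(kl_nn(p_sample, q_sample))  # close to the analytical Gaussian KL divergence
\end{verbatim}}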
First proposed by \citet{stein1964} as a loss function, the asymmetric {KL} divergence between the two consecutive posterior distributions, which is invariant under one-to-one transformations of the random vector $\theta$, has an operative interpretation as the loss of information (in natural information units or {\it nits}) which may be expected by choosing the poorer approximation $\pi(\theta|{\bf y,d}, \mathbf{H}_{D_N})$ instead of the better available one, $\pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{H(z)\})$ \citep{cover06,berger2015}. \\ The preceding formulation is not satisfactory yet, since one evaluation of the criterion requires one evaluation of $H$, which is time-consuming. However, in the spirit of $\mbox{EGO}$, it is possible to derive a new criterion by considering the following Gaussian process, based on the available observations $\mathbf{H}_{D_N}$ instead of $H$:
\begin{eqnarray}\label{eq:hNz}
h_N(z) & := & \mathcal{H}(z)\,|\,\mathbf{H}_{D_N},
\end{eqnarray}
which follows the normal distribution given in (\ref{dist_GP}). Thus, we define the {\it expected divergence criterion}:
\begin{eqnarray}
z_{(N+1)} & = & \argmax {z\in\Omega} \mathbb{E}_{\pi(h_N)}\left[ \mbox{KL}\left(\pi(\theta|{\bf y,d},\mathbf{H}_{D_N}\cup \{h_N(z)\})\, \right.\right. \label{eq:expKL} \\
& & \hspace{3.5cm} \left.\left. ||\,\pi(\theta|{\bf y,d}, \mathbf{H}_{D_N})\right)\right].\hspace*{1.5cm} \nonumber
\end{eqnarray}
The idea of considering the Gaussian variable $h_N(z)$ rather than the predictor $\widehat{H}_N(z)$ makes it possible to account for the uncertainty introduced by the kriging methodology, while it requires usual Monte Carlo methods to approximate the double integrals, i.e. the expectation and the {KL divergence}. {Even if no run of $H$ is required, the evaluation of this expected divergence criterion requires many calculations. In the next subsection, a heuristic is proposed to shrink the computational cost of the approach.}

\subsection{The Expected Conditional Divergence {heuristic}}
\label{CD:algo}

Preliminary experiments showed that the criterion defined in \eqref{eq:expKL} is {generally too expensive to be useful, except for extremely CPU-consuming codes $H$. The main reason is that any test of a new point $z$ requires running a Gibbs sampler.} Therefore a last adaptation of the criterion is proposed: the Expected Conditional Divergence ($\mbox{ECD}$) criterion depends only on the intermediate full-conditional posterior distributions of $\theta$.
More precisely, at the $(r+1)$-th iteration of the Metropolis-Hastings-within-Gibbs algorithm, the strategy is defined as: \begin{eqnarray}\label{eq:critSequ} z_{(N+1)} & = & \argmax {z\in\Omega} \mbox{ECD}(z) \end{eqnarray} with \begin{eqnarray}\label{eq:ecd} \mbox{ECD}(z) & = & \scriptscriptstyle{\leq}athbb{E}_{{\scriptscriptstyle{>}}i(h_{N})}\left[ \mbox{KL}\Big({\scriptscriptstyle{>}}i(\theta|\tilde{\mathbf{X}}^{(r+1)}(z))\, ||\,{\scriptscriptstyle{>}}i(\theta|\mathbf{X}^{(r+1)})\Big)\right], \label{ecdtheo} \end{eqnarray} where $\mathbf{X}^{(r+1)}$ and $\tilde{\mathbf{X}}^{(r+1)}(z)$ denote the missing data samples simulated from \begin{eqnarray*} \mathbf{X}^{(r+1)} & \sim & {\scriptscriptstyle{>}}i\left(\cdot|{\bf y,d},\theta^{(r+1)},\scriptscriptstyle{\leq}athbf{H}_{D_N}\right),\\ \tilde{\mathbf{X}}^{(r+1)}(z) & \sim & {\scriptscriptstyle{>}}i\left(\cdot|{\bf y,d},\theta^{(r+1)},\scriptscriptstyle{\leq}athbf{H}_{D_N}\cup \{h_{N}(z)\}\right). \end{eqnarray*} It is worth noting that in the $\mbox{ECD}$ criterion, the final posterior distribution of $\theta$ is replaced by its sequential conditional posterior distribution at the $(r+1)$-th iteration. {At the $(r+1)$-th iteration of the Gibbs sampling, given a candidate $z$ to enrich the DoE, this heuristic enables to compute a value $\mbox{ECD}(z)$ which is likely a sufficient approximation of the expected divergence criterion for the global algorithm to perform well. Moreover, once $\mbox{ECD}(z)$ has been evaluated, the computation of $\mbox{ECD}(z')$ at a new candidate $z'$ takes benefit of the computations performed during the calculation of $\mbox{ECD}(z)$ (sampling of $\mathbf{X}^{(r+1)}$ by Metropolis-Hastings, then sampling of $\theta$ given $\mathbf{X}^{(r+1)}$) and does not require a full Gibbs sampling anymore (just the MCMC sampling of $\tilde{\mathbf{X}}^{(r+1)}(z)$, then the sampling of $\theta$ given $\tilde{\mathbf{X}}^{(r+1)}(z)$). Hence it allows an exploration of the input space (optimization of $\mbox{ECD}$) for a acceptable CPU-cost. Finally, using a standard Monte-Carlo estimator to estimate the expectation of the KL divergence according to ${\scriptscriptstyle{>}}i(h_{N})$ (see~\eqref{ecd}), the ECD heuristic algorithm proceeds as follows: {\scriptscriptstyle{>}}aragraph{ECD strategy} \rule[0.5ex]{11cm}{0.1mm} \vspace*{0.1cm} \small \texttt{~\\[-2ex]Given $(m^{(0)}, C^{(0)}, \scriptscriptstyle{\leq}athbf{X}^{(0)})$, an initial design $D_N$ with the corresponding evaluations $\scriptscriptstyle{\leq}athbf{H}_{D_N}$ of $H$: \begin{enumerate} \item $r := 0$. \item Perform $k$ new Gibbs iterations (Section~\mathds{R}f{posterior.computation}); $r := r+k$: this gives $\theta^{(r+1)}$. \item Sample $\mathbf{X}^{(r+1)}$ from ${\scriptscriptstyle{>}}i\left(\cdot|{\bf y,d},\theta^{(r+1)},\scriptscriptstyle{\leq}athbf{H}_{D_N}\right)$ (see Appendix~A). \item Sample $\Upsilon=\{\theta_1,\dots,\theta_{L_2}\}$ from ${\scriptscriptstyle{>}}i(\cdot|\mathbf{X}^{(r+1)},{\bf y,d})$ (explicit distribution: see steps~1 and~2 of Section~\mathds{R}f{posterior.computation}). 
\item Get a new point $z_{(N+1)}$ to enrich the DoE by the optimization of $\mbox{ECD}$ (simulated annealing, see Appendix~C): for any $z$, assess $\mbox{ECD}(z)$ if needed by: \begin{enumerate} \item Generate $M$ samples $(h^1_N(z),\dots,h^M_N(z))$ according to \eqref{hNz} and build $M$ corresponding emulators $(\widehat{H}^1_{N+1}(z),\dots,\widehat{H}^M_{N+1}(z))$ with $\widehat{H}^i_{N+1}(z)$ based on the dataset $\scriptscriptstyle{\leq}athbf{H}_{D_N}\cup \{h^i_N(z)\}$ (no re-estimation of the covariance function parameters ${\scriptscriptstyle{>}}si$, see Section~\mathds{R}f{sec:kriging}). \item for $1\leq i\leq M$, \begin{description} \item[(i)] Sample $\tilde{\mathbf{X}}^{(r+1),i}(z)$ from ${\scriptscriptstyle{>}}i(\cdot|{\bf y,d},\theta^{(r+1)},\widehat{H}^i_{N+1}(z))$ (see Appendix~A). \item[(ii)] Sample $\Theta^i=\{\theta^i_1,\dots,\theta^i_{L_1}\}$ with $\theta=(m_1,\dots,m_q,C_{11},\dots,C_{qq})$ from ${\scriptscriptstyle{>}}i(\cdot|\tilde{\mathbf{X}}^{(r+1),i}(z),{\bf y,d})$(explicit distribution: see steps~1 and~2 of Section~\mathds{R}f{posterior.computation}). \end{description} \item $ECD(z) := \frac{1}{M}\sum_{i=1}^M \widehat{\mbox{KL}}\Big(\Theta^i\, ||\,\Upsilon\Big)$ where $\widehat{\mbox{KL}}(.||.)$ denotes the KL divergence estimate (see Appendix~B). \end{enumerate} \item $D_N := D_N\cup\{z_{N+1}\}$ and $\scriptscriptstyle{\leq}athbf{H}_{D_N} := \scriptscriptstyle{\leq}athbf{H}_{D_N}\cup\{H(z_{N+1})\}$ (new run of $H$). \item Return to 2 if $\#\scriptscriptstyle{\leq}athbf{H}_{D_N}$ is less than the maximal number of runs of $H$. \end{enumerate} \rule[0.5ex]{\textwidth}{0.1mm} } \normalsize In our numerical experiments, the optimization (step~5) and the KL divergence estimation (step 5.(c)) are respectively performed using the simulated annealing (SA) method \citep{kirkpatrick83} and the Nearest-Neighbor (NN) method of \citet{wang09} (see Appendices~C and~B for detail): other choices are possible.\\ Let us remark that it can be reasonable to decrease the CPU-cost of ECD by neglecting the dependencies between the components of $\theta$: eventually, assuming that these components are independent substantially decreases the cost of the k-NN KL divergence estimation, since the multivariate KL divergence is then the sum of univariate KL divergences. It would be also feasible to suppose that $\theta$ is made up with independent random vectors (e.g. assuming independence between $m$ and $C$). In fact, this technique could be directly applied to the expected divergence criterion (previous section), thus offers an alternative to ECD. However, it is not investigated hereafter, because ECD alone leads to a satisfactory trade-off between efficiency of the DoE enrichment and computational cost, in our industrial context.} \section{The Weighted-IMSE criterion for adaptive designs} \label{WIMSE} This section is devoted to propose an alternative criterion of adaptive design, by adapting the popular weighted-IMSE criterion \citep{sacks89a,picheny10}, reminded hereinafter, to the Bayesian context of probabilistic inversion. \subsection{The Integrated MSE criterion} \label{WIMSE:intro} The Integrated Mean Square Error (IMSE) criterion \citep{sacks89a} is a measure of the average accuracy of the kriging metamodel over the domain $\Omega$: \begin{eqnarray*} \scriptscriptstyle{\leq}box{IMSE}(\Omega) & = & \int_{\Omega} \mbox{MSE}(z) \, dz, \end{eqnarray*} where $\mbox{MSE}(z)$ is defined in the Gibbs sampler in $\S$ \mathds{R}f{posterior.computation}. 
Given a current design $D_N$ of $N$ points, \citet{picheny10} proposed the following $\mbox{WIMSE}$ criterion as an alternative approach to improve the prediction accuracy in regions of main interest: \begin{eqnarray} \mbox{WIMSE}(z^*) & = & \int_{\Omega} \mbox{MSE}\left(z|D_N \cup\{z^*\}\right) w\left(z|D_N,\scriptscriptstyle{\leq}athbf{H}_{D_N}\right) \, dz, \label{eq:wimse1} \end{eqnarray} where $\mbox{MSE}\left(z|D_N \cup\{z^*\}\right)$ denotes the prediction variance by adding the point $z^*=(x^*,d^*)$ into $D_N$ and $w\left(z|D_N,\scriptscriptstyle{\leq}athbf{H}_{D_N}\right)$ is a weight function emphasizing the $\mbox{MSE}$ term over these regions of interest. The calculation of $\mbox{MSE}$ does not depend on the expensive evaluation $H(z^*)$ and the weight factor $w$ only depends on the available observations $\scriptscriptstyle{\leq}athbf{H}_{D_N}$. The next point to add to the DoE is thus defined by \begin{eqnarray*} z_{(N+1)} & = & \arg \scriptscriptstyle{\leq}in_{z\in\Omega} \mbox{WIMSE}(z). \end{eqnarray*} \subsection{Adaptation to the Bayesian inversion context} \label{WIMSE:adapt} Defining the regions of interest is the essential task in applying the $\mbox{WIMSE}$ criterion. As presented in previous sections, a probabilistic solution to inverse problems is to approximate the posterior distribution of the parameters $\theta=(m,C)$ using a Metropolis-Hastings-within-Gibbs algorithm (cf. {Section~\mathds{R}f{posterior.computation}}). Assuming that the $(N+1)-$th new point is added at the $(r+1)-$th iteration of the Gibbs sampling, the weight function is defined by the following formula: \begin{eqnarray} w\left(z|D_N,\scriptscriptstyle{\leq}athbf{H}_{D_N}\right) & {\scriptscriptstyle{>}}ropto & {\scriptscriptstyle{>}}rod_{i=1}^n {\scriptscriptstyle{>}}i\left(x,d|y_i,\theta^{(r+1)},D_N,\scriptscriptstyle{\leq}athbf{H}_{D_N} \right), \label{eq:omega} \\ & {\scriptscriptstyle{>}}ropto & {\scriptscriptstyle{>}}rod_{i=1}^n |\scriptscriptstyle{\leq}athbf{R}+\scriptscriptstyle{\leq}box{MSE}(x,d)|^{-\frac{1}{2}} \cdot \exp\Bigg\{- \frac{1}{2} \Delta_i \Bigg\}\nonumber \end{eqnarray} where \begin{eqnarray*} \Delta_i & =& (x-m^{(r+1)})'\Big[C^{(r+1)}\Big]^{-1}(x-m^{(r+1)}) \\ & & \ - \ \Big(y_i-\widehat{H}(x,d)\Big)' \Big(\scriptscriptstyle{\leq}athbf{R}+\scriptscriptstyle{\leq}box{MSE}(x,d)\Big)^{-1} \Big(y_i-\widehat{H}(x,d)\Big), \nonumber \end{eqnarray*} which is derived from the full conditional posterior distribution of $\mathbf{X}$ described in {Section~\mathds{R}f{posterior.computation}}. It can be considered as a measure of the posterior prediction error. The advantage of this choice is twofold. First, this weight function $\omega$ indicates a potential position for the missing-data $\mathbf{X}$ where the accuracy of the metamodel should be improved. Second, this weight function depends on the observation sample $\mathbf{y}=\{y_1,\dots,y_n\}$, coherently with the Bayesian conditioning process and providing a {\it purpose-oriented} sense to the design. \\ Besides, since the two terms $\mbox{MSE}(\cdots)$ and $w(\cdots)$ of \eqref{wimse1} are different in nature, a tuning parameter $\alpha$ is introduced (as an exponent) to allow for a trade-off between the two. Therefore the following version of the $\mbox{WIMSE}$ criterion is proposed: \begin{eqnarray} \mbox{WIMSE}(z^*) & = & \int_{\Omega} \mbox{MSE}^{\alpha}\left(z|D_N \cup\{z^*\}\right) \, w^{1-\alpha}\left(z|D_N,\scriptscriptstyle{\leq}athbf{H}_{D_N}\right) \, dz. 
\label{eq:wimse2} \end{eqnarray} In this equation, $\alpha$ varying between 0 and 1 makes the criterion more flexible: if $\alpha$ is close to 1, the impact of the weight function $w$ disappears and the criterion becomes IMSE; if $\alpha$ approaches 0, the prediction error $\mbox{MSE}$ will not be accounted for. Experimental trials showed that the choice of $\alpha$ is critical. Furthermore, such a weight function $w$, defined as the product of $n$ possibly small densities, may cause numerical (underflow) problems. Replacing $w^{1-\alpha}$ by the probability density function ${w^{1-\alpha}}/{\int w^{1-\alpha}}$, as suggested in \citet{picheny10}, can solve such difficulties. In practice, a Monte Carlo method must be used to estimate the normalizing constant. For a DoE of dimension one or two, a Cartesian grid over the design space $\Omega$ can be used to solve the numerical integration and optimization problems \citep{picheny10}. In more general cases of higher dimension, stochastic integration and global optimization techniques should be preferred, e.g. Monte Carlo methods and SA algorithms (Appendix C). \\ \section{Numerical experiments}\label{numerical.experiments2} \label{tests} \textcolor{black}{In this section, numerical studies are conducted on a manageable example to assess the performance of both adaptive kriging strategies. The $\mbox{WIMSE}$ and $\mbox{ECD}$ criteria are compared with the standard {\it maximin}-LHD and the simple MMSE (maximum MSE) criterion, defined by \begin{eqnarray*} z_{(N+1)} & = & \argmin {z^* \in \Omega} \, \max_{z\in\Omega} \mbox{MSE}\left(z|D_N \cup\{z^*\}\right), \end{eqnarray*} under the same evaluation budget. A good kriging metamodel has been built using a large DoE to play a benchmark role.} \\ Consider the parametric function previously used in \citet{bastos09}: \begin{eqnarray} \label{eq:modelToyTwo} H(x_1,x_2) & = & \left(1-\exp\left(-\frac{1}{2x_2}\right) \right)\left(\frac{2300x^3_1+1900x^2_1+2092x_1+60}{100x^3_1+500x^2_1+4x_1+20} \right), \end{eqnarray} with $x_i\in[0,1], i=1,2$. In the experimental trials, the design domain is $\Omega=[0,1]^2$. The dataset $\mathbf{Y}=(Y_i,i=1,\dots,30)$ of size $n=30$ is simulated from the uncertainty model \eqref{eq:modelToyTwo}, where the missing data $X_i$ are generated from the following Gaussian distribution, truncated to the domain $\Omega$: \begin{eqnarray} X_i & \sim & \mathcal{N}_2\left\{\left(\begin{array}{c} 0.52 \\ 0.59 \\ \end{array} \right),\left( \begin{array}{ll} 0.19^2 & 0 \\ 0 & 0.25^2 \\ \end{array} \right)\right\} \cdot \mathds{1}_{\Omega}, \end{eqnarray} and the error term $U_i$ is the realization of a $\mathcal{N}_1(0, 10^{-5})$ random variable. Moreover, in \eqref{eq:InfoPrior1} and \eqref{eq:InfoPrior2}, the hyperparameters are chosen as follows: $a=1$, $\nu=5$, $\mu=(0,0)$ and \begin{eqnarray*} \Lambda & = & 2\cdot\left( \begin{array}{ll} 0.18^2 & 0 \\ 0 & 0.4^2 \\ \end{array} \right). \end{eqnarray*} \textcolor{black}{In practice, the burn-in period of the MCMC algorithm can be verified by the Brooks-Gelman convergence diagnostic $\widehat{R}_{BG}$ \citep{brooks98}. It was calculated every 50 iterations and the convergence was not accepted until $\widehat{R}_{BG}<1.05$ for at least 3,000 successive iterations.} The main features of the generated DoEs are summarized on Table \ref{table:DoE1}.
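For the record, the data-generating mechanism of this toy example (the test function \eqref{eq:modelToyTwo}, the truncated Gaussian inputs and the observation noise) can be sketched as follows; the rejection-based truncation is an illustrative implementation choice.
{\small\begin{verbatim}
# Synthetic data for the toy example: Y_i = H(X_i) + U_i, X_i truncated Gaussian on [0,1]^2.
import numpy as np

def H(x1, x2):
    """Parametric test function used as the toy computer model (eq. modelToyTwo)."""
    return (1.0 - np.exp(-1.0 / (2.0 * x2))) * (
        (2300 * x1**3 + 1900 * x1**2 + 2092 * x1 + 60)
        / (100 * x1**3 + 500 * x1**2 + 4 * x1 + 20))

def truncated_gaussian(n, mean, sd, rng):
    """Rejection sampling of N(mean, diag(sd^2)) truncated to [0,1]^2 (illustrative)."""
    out = []
    while len(out) < n:
        x = rng.normal(mean, sd, size=(4 * n, 2))
        out.extend(x[np.all((x >= 0.0) & (x <= 1.0), axis=1)].tolist())
    return np.array(out[:n])

rng = np.random.default_rng(3)
n = 30
X = truncated_gaussian(n, mean=[0.52, 0.59], sd=[0.19, 0.25], rng=rng)
U = rng.normal(0.0, np.sqrt(1e-5), size=n)   # measurement errors with variance 1e-5
Y = H(X[:, 0], X[:, 1]) + U                  # observations y_1, ..., y_n
print(Y[:5])
\end{verbatim}}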
All initial DoEs consist of the same five points produced by {\it maximin}-LHD, and then are completed by five other points selected by the criteria. \textcolor{black}{Table \mathds{R}f{table:ToyModel} displays the value of parameters involved in carrying out the two criteria and the SA algorithm.} Figure \mathds{R}f{fig:DesignWIMSE2D} provides a comparison of all designs with the standard 10-points-{\it maximin}-LHD (encompassing the initial DoE). For the W-IMSE criterion, the added points are found not far from the hypothesized mean $(0.5,0.7)$ and the four $\mbox{WIMSE}$ designs are quite similar. However, the posterior distributions of $\theta$ are quite sensitive to the choice of $\alpha$. Figure \mathds{R}f{fig:TestKrigWIMSE2D} displays these posterior distributions for the corresponding metamodels. The $\mbox{WIMSE}$ criterion improved the posterior distributions of $m_2$ and $C_{22}$, but the choices $\alpha=1, 0.5$ and $0.2$ do not work well for the posterior distribution of $m_1$ and $C_{11}$. It can be seen that the 10-points-{\it maximin}-LHD performs poorly, with respect to a 5-points-{\it maximin}-LHD sequentially completed. Moreover, the MMSE criterion performs correctly. However, other experiments, conducted using the best value $\alpha=0.8$ for the $\mbox{WIMSE}$ criterion, are summarized on Figure \mathds{R}f{fig:krigECDTwoTyo}. These results highlight, on this example, that the design build using the $\mbox{ECD}$ criterion can significantly outperform the 10-point-{\it maximin}-LHD, can perform more efficiently than the MMSE criterion and can do as well as the $\mbox{WIMSE}$ criterion. \section{Case-study: calibrating roughness coefficients of an hydraulic engineering model}\label{numerical.experiments} \textcolor{black}{ The case-study that motivated this work is the calibration, from observed water levels $Y$ and upstream flow values $d$, of the roughness (so-called Strickler) coefficient $X$ of the hydraulic computer model TELEMAC-2D. This software tool is considered as one of the major standards in the field of free-surface flow by solving shallow water (Saint-Venant) equations \citep{galland1991}. This parameter vector summarizes the influence of the land nature on the water level, for a given discharge $d$. The model is used here to reproduce in two dimensions (geographical coordinates) the downstream water level of the French river La Garonne between {Tonneins and La R\'eole} (Figure \mathds{R}f{fig:RiverGaronne}).} \textcolor{black}{ The flow simulation of this 50km river section, including riverbed and floodplain (cf. Figure \mathds{R}f{beds}), is conducted on very fine meshes defined by 41,000 knots, each parametrized by a roughness value. The dimension of $X$ is diminished to $q=4$ by taking account of: (a) the homogeneity of the land regularity in large areas surrounding the riverbed between four measuring stations (Table \mathds{R}f{details} and Figure \mathds{R}f{fig:RiverGaronne}) ; and : (b) the lack of observations of floodplain water levels at the uppermost subsection, which requires to fix the corresponding roughness coefficient. Details about the notation and meaning of each component of $X$ are provided in Table \mathds{R}f{details}.} \\ \textcolor{black}{ The strong but physically limited uncertainty that penalizes the knowledge of Strickler coefficients is compatible, according to \citet{WOH98}, with simple and classic statistical distributions as the Gaussian law (numerically truncated in 0). 
Based on the available bibliography summarized in Table \ref{prior.strickler} and after discussing with ground experts, values for the hyperparameter $\mu$ for each dimension of $X$ were simple to elicit (see Table \ref{details}). It was trickier to find information about the correlations between the $X$. The strong differences of land nature between the riverbed and the floodplain made plausible the assumption of independence between the corresponding components of $X$. On the contrary, it is likely that two connected riverbed sections share roughness features. However, in the absence of any additional information about these possible correlations, $C_e$ was chosen diagonal: \begin{eqnarray*} C_{e}& = & \left( \begin{array}{llll} \sigma^2_{\text{maj}} & 0 & 0 & 0 \\ 0 & \sigma^2_{\text{min}_{TA}} & 0 & 0 \\ 0 & 0 & \sigma^2_{\text{min}_{AA}} & 0 \\ 0 & 0 & 0 & \sigma^2_{\text{min}_{AL}} \\ \end{array} \right). \end{eqnarray*} The calibration of each $\sigma$ was conducted by using marginal prior knowledge about the mean variation of the Manning coefficient $M=1/X$, discussed in \citet{LIU09} and displayed on Figure \ref{usa9600}. A prior Manning estimator $(\hat{M}=1/\mu,\sigma_M)$ can then be produced. A magnitude for the corresponding prior estimator of $\sigma$ (for the Strickler $X=1/M$) can be derived assuming that the results on Table \ref{prior.strickler} and Figure \ref{usa9600} summarize a large number of past estimations. Under this assumption, the crude convergence in law \begin{eqnarray*} \sigma^{-1}_M(\hat{M}-M) & \xrightarrow{{\cal{L}}}{} & {\cal{N}}(0,1), \end{eqnarray*} combined with the delta method, provides the approximate result \begin{eqnarray*} \mu^{-2}\sigma^{-1}_M(\mu-X) & \xrightarrow{{\cal{L}}}{} & {\cal{N}}(0,1), \end{eqnarray*} and finally $\sigma^2\simeq \mu^{4} \sigma^{2}_M$. The prior assessments of these variances are provided on Table \ref{prior.strickler}, assuming a virtual size $a=1$ for each dimension (see $\S$ \ref{prior.elicitation} for details). } \textcolor{black}{ The relevance of a metamodelling approach was acknowledged since each run of TELEMAC-2D can take several hours. {\it Maximin}-LHD designs were produced over the domain $\Omega$, defined for the input vector $z=(x,d)$ as \begin{eqnarray*} \Omega & = & \Omega_{\text{maj}} \times \Omega_{\text{min}_{TA}} \times \Omega_{\text{min}_{AA}} \times\Omega_{\text{min}_{AL}} \times \Omega_d \end{eqnarray*} according to the bounds of the variation domains summarized in Table \ref{prior.strickler}: $\Omega_{\text{maj}}=[0,30]$ and $\Omega_{\text{min}_{TA}}=\Omega_{\text{min}_{AA}}=\Omega_{\text{min}_{AL}}=[20,70]$ (in $m^{1/3}.s^{-1}$). The domain $\Omega_d$ was chosen as $[q_{0.05},q_{0.95}]=[510,2373]$ where $q_{\alpha}$ is the quantile of order $\alpha$ of the known flow distribution, which is Gumbel with mode 1013 $m^3.s^{-1}$ and scale parameter 458. } \\ \textcolor{black}{Before running TELEMAC-2D, however, a Bayesian inferential study was briefly conducted using the MASCARET simplified computer code \citep{goutal12}, which describes a river by a curvilinear abscissa and uses the same input vector.
Although it is much less accurate than TELEMAC-2D, this simplified model has the advantage of a much shorter CPU time per run, so that the MCMC proposed in \cite{fu14} can be conducted in due time using a static {\it maximin}-LHD design (and a metamodel calibrated once), using 20,000 iterations. The aim of this study was to test the agreement between the prior assessments and the observations, following recommendations in \cite{bousquet08,fu14}. A set of $n=50$ observations was available, among which the 10 most recent were preferentially selected, as the most representative of the actual conditions (riverbed homogeneity). For several sizes of design and the two datasets, the marginal posterior distributions are displayed on Figure \ref{mascaret-1}. For each dimension, it appears that the regions of highest posterior density are in accordance with the prior guesses, which makes us confident in the relevance of the prior elicitation process. }\\ \textcolor{black}{Based on this good relevance of the Bayesian model, a comparison of the designs considered in this article was conducted by comparing the {\it emulator errors} yielded by the designs, using the {\it coefficient of predictability} $Q_2$. A cross-validation {\it leave-one-out} version of this criterion is used here for computational simplicity \citep{vanderpoorten01}:} \begin{eqnarray*} Q_{2} & = & 1-\frac{\mbox{PRESS}}{\sum_{i=1}^{N} \big\|H(z_{(i)})-\overline H_{D_N}\big\|^2}, \end{eqnarray*} where $\overline H_{D_N} = \frac{1}{N} \sum_{i=1}^N H(z_{(i)})$ and $\mbox{PRESS} = \sum_{i=1}^N e_{(i)}^2 \, = \, \sum_{i=1}^N \big\|H(z_{(i)})-\widehat{H}_{-i}(z_{(i)})\big\|^2$, where \begin{itemize} \item $e_{(i)}$ is the prediction error at $z_{(i)}$ of a model fitted without the point $z_{(i)}$; \item $\widehat{H}_{-i}(z_{(i)})$ is the approximation of $H$ at $z_{(i)}$ derived from all the points of the design except $z_{(i)}$. \end{itemize} \textcolor{black}{The closer $Q_2$ is to 1, the larger the share of variance explained by the emulator and the better the quality of the design (in terms of prediction power for the metamodel). Four designs are tested: two {\it maximin}-LHD designs $D_{20}$ and $D_{500}$, of 20 and 500 points respectively (the second one playing the role of a ``reference design'' leading to a very good approximation of the posterior distribution), and two other designs sequentially elaborated using the ECD and WIMSE criteria, starting from an initial design $D_{10}$ of 10 points to which 10 other points are added.} \textcolor{black}{Displayed on Figure \ref{Q2compare}, the $Q_2$ coefficient related to the {\it maximin}-LHD $D_{20}$ equals 0.9745 and the benchmark $Q_2$ corresponding to $D_{500}$ equals 0.9933. Starting from a design of 10 points only, it appears natural that the other designs are characterized by a lower $Q_2$. However, by adding 10 points iteratively to the initial design $D_{10}$ according to the two proposed criteria, an increasing value of $Q_2$ is obtained, which quickly beats the predictability generated by the {\it maximin}-LHD $D_{20}$. Finally, using the $\mbox{ECD}$ criterion provides a slightly better $Q_2$ value than using the WIMSE criterion. } \\ \textcolor{black}{Coming back to the TELEMAC-2D computer code, the convergence of the MCMC chains was obtained (using the $n=10$ best observations) after 30,000 iterations.
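As a side note, the leave-one-out $Q_2$ diagnostic used above can be sketched as follows; the linear ``emulator'' standing in for the kriging refit is an arbitrary placeholder chosen only to keep the illustration self-contained.
{\small\begin{verbatim}
# Leave-one-out Q2 (coefficient of predictability); illustrative sketch in which
# fit_predict(Z_train, h_train, Z_test) stands for any emulator refit (e.g. kriging).
import numpy as np

def q2_loo(Z, Hvals, fit_predict):
    N = Z.shape[0]
    press = 0.0
    for i in range(N):
        keep = np.arange(N) != i
        pred_i = fit_predict(Z[keep], Hvals[keep], Z[i:i + 1])  # H_hat_{-i}(z_(i))
        press += np.sum((Hvals[i] - pred_i) ** 2)
    return 1.0 - press / np.sum((Hvals - Hvals.mean(axis=0)) ** 2)

def linear_fit_predict(Zt, Ht, Znew):                  # placeholder emulator
    A = np.hstack([np.ones((len(Zt), 1)), Zt])
    coef, *_ = np.linalg.lstsq(A, Ht, rcond=None)
    return np.hstack([np.ones((len(Znew), 1)), Znew]) @ coef

rng = np.random.default_rng(4)
Z = rng.uniform(size=(20, 2))
Hvals = (3 * Z[:, 0] - Z[:, 1] + rng.normal(0, 0.05, 20)).reshape(-1, 1)
print(q2_loo(Z, Hvals, linear_fit_predict))            # close to 1 for a well-predicted code
\end{verbatim}}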
For the various designs proposed in this article, the marginal posterior distributions of the first four parameters are displayed in Figure \ref{telemac-1}. The {\it Maximin}-LHD design $D_{20}$ (producing the approximate posterior in red) was made of 40 points, while the other situations start from a DOE of 20 initial points, to which 20 other points are added sequentially (producing the approximate posteriors in blue and black). The reference {\it Maximin}-LHD design $D_{500}$ (producing the best approximation of the target posterior, in green) is made of 500 points, as for the MASCARET application. Again, the approximate posterior distribution produced using $\mbox{ECD}$ is noticeably closer to the target than the approximation produced by the WIMSE approach.} \section{Conclusions and perspectives} \label{conclusion} This article aims to provide an adaptive methodology to calibrate, in a Bayesian framework, the distribution of unknown inputs of a nonlinear, time-consuming numerical model from observed outputs. This methodology is based on improving a space-filling design of experiments, typically the {\it maximin}-Latin Hypercube Design, which offers a non-intrusive exploration of the model. Kriging metamodelling is used to avoid costly runs of the model. In this methodology, two adaptive criteria have been proposed to sequentially complete the current design. The first one is an adaptation of the standard Weighted-IMSE criterion to the Bayesian framework. It is obtained by weighting the $\mbox{MSE}$ term over a region of interest indicated by the current full conditional posterior distribution. The other criterion, called Expected CD, is based on maximizing the Kullback-Leibler (KL) divergence between two consecutive approximate posterior distributions related to the DoE. A clearer interpretation can be given to this second criterion, as a crude approximation of the negative KL divergence between the target posterior and the current approximate posterior distributions. Numerical experiments have highlighted, on two examples, that applying this adaptive procedure can reduce the prediction error and improve the accuracy of the metamodelling approximation, compared with a standard space-filling DoE. Therefore such adaptive procedures appear to be useful when the CPU time required to compute one run of the simulator $H$ of a physical model is dramatically greater than the time required to run a Gibbs sampler, perform a Monte Carlo integration, or carry out an optimization with a Simulated Annealing procedure. Both criteria involve expensive numerical integration. For a similar gain in information, the $\mbox{ECD}$ criterion appears to be a little more expensive than the $\mbox{WIMSE}$ criterion since it requires the calculation of the empirical $\mbox{KL}$ divergence. However, in the definition of $\mbox{WIMSE}$, the choice of $\alpha$ is quite important. As the second weight function is globally much smaller than the first prediction error term, this balance parameter is needed to obtain a good behavior of the criterion. In this article, this important parameter was not systematically studied, and the need to compute the best (or at least a "good") value of $\alpha$ makes $\mbox{WIMSE}$ less easy to use. Together with the better information-theoretic interpretation of $\mbox{ECD}$, this feature leads us to a clear preference for the ECD criterion.
\\ This work is a first approach to designing sequential strategies for both exploring a black-box, time-consuming computer code and, in parallel, calibrating some of its unobserved random inputs. The widespread use of metamodelling requires, in practice, various approximations. For instance, it is common that the hyperparameters of kriging metamodels are only updated (e.g., by maximum likelihood estimation) after several additions of points to an original design, since each update (which should formally be conducted after each addition of a new point) can itself be a costly operation without fundamental improvement \citep{toal08,toal11}. Following the same idea of reaching a trade-off between theoretical aims and practical tractability, idealized criteria often have to be approximated, or are favored partly because their computation can be made explicit. This is, for instance, the case of the Expected Improvement (EI) criterion proposed by \citet{jones98}, which takes advantage of the Gaussian properties of kriging metamodels. Such approximations appeared necessary to conduct this first study and to highlight the interest of the approach. The rationale developed in Section \ref{ECD} must now be followed by theoretical work that could robustify the proposed choices, accompanied by more systematic simulation studies involving other static or dynamic designs of numerical experiments. In particular, the statistical control of the metamodelling-based posterior approximation with respect to the target posterior should be a focus of future studies, taking advantage of the relationships between Kullback-Leibler divergences and discrepancy measures \citep{pollard13}, as well as of recent theoretical developments relaxing the assumptions under which metamodelling provides a fair approximation of the real numerical model (e.g., \citet{vazquez11}). Such work is currently being conducted. For the time being, it should be noted that the approximate posterior distribution produced by the ECD approach can be considered as a fast, non-intrusive way of building an instrumental distribution, to be used in a final importance sampling step (typically to compute a posterior mean), provided a small computational budget is kept or made available for running the numerical model. \section*{Acknowledgments} The authors gratefully thank Gilles Celeux (INRIA) for many fruitful discussions and advice. This work was partially supported by the French Ministry of Economy in the context of the CSDL (\textit{Complex Systems Design Lab}) project of the Business Cluster System@tic Paris-R\'{e}gion. \subsection*{Appendix A. Metropolis-Hastings step within the Gibbs sampler\label{Appen:MH}} At step $r+1$ of Gibbs sampling, after simulating $m^{(r+1)}$ and $C^{(r+1)}$, the missing data $\mathbf{X}^{(r+1)}$ can be updated with a Metropolis-Hastings ($\mbox{MH}$) algorithm.
The MH step updates $\mathbf{X}^{(r)}=(X^r_1, \dots, X^r_n)'$ in the following way: \begin{itemize} \item \texttt{For $i=1,\ldots,n$} \begin{enumerate} \item \texttt{Generate $\widetilde{X}_i \sim J(\cdot \mid X_i^r)$ where $J$ is the proposal distribution.} \item \texttt{Let} \begin{eqnarray*} \alpha(X_i^{r}, \widetilde{X}_i) & = & \min \Big(\frac{\pi_{\widehat{H}}(\widetilde{\mathbf{X}} \mid \boldsymbol{\mathcal{Y}}, \theta^{(r+1)},\rho,\mathbf{d},H_D)\, J(X_i^r|\widetilde{X}_i)}{\pi_{\widehat{H}}(\mathbf{X}^{(r)} \mid \boldsymbol{\mathcal{Y}}, \theta^{(r+1)},\rho,\mathbf{d},H_D)\, J(\widetilde{X}_i|X_i^r)}, 1\Big),\nonumber\\ \label{eq:alpha_2} \end{eqnarray*} where \begin{eqnarray*} \widetilde{\mathbf{X}} & = & \Big(X_1^{r+1},\, \dots,\,X_{i-1}^{r+1},\,\widetilde{X}_i,\,X_{i+1}^r,\,\dots,\,X_n^r \Big)'\\ \mathbf{X}^{(r)} & = & \Big(X_1^r,\, \dots,\,X_{i-1}^r,\,X_i^r,\,X_{i+1}^r,\,\dots,\,X_n^r \Big)' \end{eqnarray*} \item \texttt{Take} \begin{eqnarray*} X_i^{r+1} & = & \left\{\begin{array}{ll} \widetilde{X}_i & \textrm{\texttt{with probability} $\alpha(X_i^r, \widetilde{X}_i)$},\\ X_i^{r} & \textrm{\texttt{otherwise.}} \end{array}\right. \end{eqnarray*} \end{enumerate} \end{itemize} \paragraph{Remarks} \begin{itemize} \item Many choices are possible for the proposal distribution $J$. It appears that an independent MH sampler with $J$ chosen to be the normal distribution $\mathcal{N}\Big( m^{(r+1)}, C^{(r+1)}\Big)$ gives satisfactory results for the model (\ref{eq:model}). \item In practice, it can be beneficial to choose the order of the updates by a random permutation of $\{1, \ldots, n\}$ to accelerate the convergence of the Markov chain to its limit distribution. \end{itemize} \subsection*{Appendix B. Nearest-Neighbor approach\label{Appen:NN}} The {Kullback-Leibler} (KL) divergence between samples $\Theta^i$ and $\Psi$ can be estimated empirically through the Nearest-Neighbor approach: \begin{eqnarray}\label{eq:NNesti} \widehat{\mbox{KL}}_{L_1,L_2}(\Theta^i\, ||\,\Psi) & = & \frac{d}{L_1}\sum_{j=1}^{L_1} \log \frac{\nu_{L_2}(\theta^i_j)}{\rho^i_{L_1}(\theta^i_j)}\,+\,\log\frac{L_2}{L_1-1}, \end{eqnarray} where $d$ denotes the dimension of the parameter $\theta$ ($2q$ in our case), $\nu_{L_2}(\theta^i_j)$ denotes the (Euclidean) distance between $\theta^i_j\in\Theta^i$ and its nearest neighbor in the sample $\Psi$, \begin{eqnarray*} \nu_{L_2}(\theta^i_j) & = & \min_{\substack{r=1,\dots, L_2}}\,||\theta_r-\theta^i_j||_2, \end{eqnarray*} and $\rho^i_{L_1}(\theta^i_j)$ denotes the (Euclidean) distance of $\theta^i_j$ to its nearest neighbor in $\Theta^i$ other than itself (as it is also included in $\Theta^i$), \begin{eqnarray*} \rho^i_{L_1}(\theta^i_j) & = & \min_{\substack{l=1,\dots, L_1;\, l\neq j}}\,||\theta^i_l-\theta^i_j||_2.
\end{eqnarray*} It has been proved in \cite{wang09} that, under some regularity conditions on the samples $\Theta^i$ and $\Psi$, the estimator $\widehat{\mbox{KL}}_{L_1,L_2}(\Theta^i\, ||\,\Psi)$ is consistent in the sense that \begin{eqnarray} \lim_{L_1,L_2\rightarrow \infty} \mathbb{E}\left(\widehat{\mbox{KL}}_{L_1,L_2}(\Theta^i\, ||\,\Psi) - \mbox{KL}(\Theta^i\, ||\,\Psi) \right)^2 & = & 0, \end{eqnarray} and asymptotically unbiased, i.e., \begin{eqnarray} \lim_{\substack{L_1,L_2\rightarrow \infty}} \mathbb{E}\left[\widehat{\mbox{KL}}_{L_1,L_2}(\Theta^i\, ||\,\Psi) \right] & = & \mbox{KL}(\Theta^i\, ||\,\Psi). \end{eqnarray} \subsection*{Appendix C. Simulated Annealing algorithm (searching for the minimum of a function $f$) \label{Appen:SA}} Proposed by \citet{kirkpatrick83}, the SA algorithm is a stochastic optimization algorithm. \texttt{Given the current point $z^{(k)}$, at iteration $k+1$~:} \\ \begin{enumerate} \item \texttt{Generate $\widetilde{z}\sim\mathcal{N}\Big(z^{(k)},\sigma^2\Big)$, with a certain fixed variance $\sigma^2$.} \item \texttt{Let} \begin{eqnarray*} \lambda\Big(z^{(k)}, \widetilde{z}\Big) & = & \min\Big(1, \exp\Big(\frac{f(z^{(k)})-f(\widetilde{z})}{\beta_{k+1}}\Big) \Big), \end{eqnarray*} \texttt{where $\beta_{k+1}$ is the current temperature at step $k+1$. } \item \texttt{Accept} \begin{eqnarray*} z^{(k+1)} & = & \left\{\begin{array}{ll} \widetilde{z}, & \textrm{\texttt{with probability} $\lambda\Big(z^{(k)}, \widetilde{z}\Big)$},\\ z^{(k)}, & \textrm{\texttt{otherwise.}} \end{array} \right. \end{eqnarray*} \item \texttt{Update $\beta_{k+1} = 0.99\times \beta_k$.} \end{enumerate} \begin{table}[h!] \small \begin{center} \begin{tabular}{lcc} \hline \textbf{DoE 1} & \multicolumn{2}{c}{10-point {\it maximin}-LHD} \\ & \\ \textbf{DoE 2} & 5-point {\it maximin}-LHD + & 5-point $\mbox{WIMSE}$ \\ & & or 5-point $\mbox{ECD}$\\ & & \hspace{0.1cm} or 5-point MMSE\\ & \\ \textbf{DoE 3} & \multicolumn{2}{c}{100-point {\it maximin}-LHD ({\it benchmark})} \\ \hline \end{tabular} \caption{Description of the three types of designs of experiments (DoE) (two-dimensional toy example).} \label{table:DoE1} \end{center} \end{table} \begin{table}[h!]
\small \centering \begin{tabular}{lccc} \hline \textbf{$\mbox{WIMSE}$} & $\alpha$ & Number $L$ of iterations & Size $M$ of the \\ & & of the SA algorithm & Monte Carlo algorithm \\ & 1, 0.8, 0.5, 0.2& 1,000 & 1,000 \\ & \\ \textbf{$\mbox{ECD}$} & Number $M$ of & Sizes $L_1$ and $L_2$ of & Number $L$ of iterations \\ & generated GPs & the samples $\Theta^i$ and $\Psi$ & of the SA algorithm \\ & 100 & 1,000 & 1,000 \\ & \\ \textbf{SA algorithm} & Initial point $x^{[0]}$ & Initial temperature $\beta$ & Standard deviation $\sigma$ \\ & $x$ & 100 & 100 \\ \hline \end{tabular} \caption{Choice of parameters for the design criteria computation and the SA algorithm (two-dimensional toy example).} \label{table:ToyModel} \end{table} \begin{table}[hbtp] \centering \begin{tabular}{ll} \hline Nature of surface & Value of Strickler coefficient ($m^{1/3}\cdot s^{-1}$) \\ \hline & \\ {\bf Riverbed} & \\ Smooth concrete & 75-90 \\ Earthen channel & 50-60 \\ Plain river, without shrub vegetation & 35-40 \\ Plain river, with shrub vegetation & 30 \\ Slow winding natural river & 30-50 \\ Very cluttered riverbed & 10-30 \\ Proliferating algae & 3.3-12.5 \\ & \\ {\bf Floodplain} & \\ Meadows, uncultivated fields & 20 \\ Cultivated lands with low size vegetation & 15-20 - {\bf 18} \\ Cultivated lands with large size vegetation & 10-15 - {\bf 13} \\ Bush and undergrowth areas & 8-12 - {\bf 10} \\ Forest & $<$10 \\ Low density urban sprawl & 8-10 \\ High density urban sprawl & 5-8 \\ \hline \end{tabular} \caption{Realistic ranges of values for the Strickler coefficient as a function of the nature of the surface, summarized from \citet{USG89,WAL89,SEL97} and \citet{VIO98}. Median values in bold type are interpreted by international experts as the most likely values, taking into account uncertainties about the nature of vegetation, topographic irregularities, etc. } \label{prior.strickler} \end{table} \begin{table}[hbtp] \centering \begin{tabular}{c|llll} (Sub)section & Position & $X$ component & \multicolumn{2}{c}{Marginal hyperparameters $(m^{1/3}\cdot s^{-1})$} \\ \hline Tonneins & & \\ $\downarrow$ & floodplain & $X_{s,\text{maj}}$ & $\mu_{\text{maj}} = 17$ & $\sigma_{\text{maj}} = 4.1$\\ La R\'eole & & & \\ & \\ & \\ Tonneins & & \\ $\downarrow$ & riverbed & $X_{s,\text{min}_{TA}}$ & $\mu_{\text{min}_{TA}} = 45$ & $\sigma_{\text{min}_{TA}} = 7.1$ \\ Aval de Mas d'Augenais & & & \\ $\downarrow$ & riverbed & $X_{s,\text{min}_{AA}}$ & $\mu_{\text{min}_{AA}} = 38$ & $\sigma_{\text{min}_{AA}} = 7.1$ \\ Amont de Marmande & & & \\ $\downarrow$ & riverbed & $X_{s,\text{min}_{AL}}$ & $\mu_{\text{min}_{AL}} = 40$ & $\sigma_{\text{min}_{AL}} = 7.1$ \\ La R\'eole & & & \\ \hline \end{tabular} \caption{Detailed meanings and prior modelling for each component of $X$ (La Garonne roughness coefficients). The riverbed roughness coefficients are differentiated between the measuring stations listed in the first column.
A virtual size $a=1$ was chosen for each dimension.} \label{details} \end{table}
\begin{figure} \caption{Standard {\it maximin}-LHD design (two-dimensional toy example).} \label{fig:DesignWIMSE2D} \end{figure}
\begin{figure} \caption{Posterior distributions of $\theta$ with the benchmark, standard {\it maximin}-LHD and WIMSE-based designs.} \label{fig:TestKrigWIMSE2D} \end{figure}
\begin{figure} \caption{Posterior distributions of $\theta$ with the benchmark, standard {\it maximin}-LHD and ECD-based designs.} \label{fig:krigECDTwoTyo} \end{figure}
\begin{figure} \caption{Riverbed profile of the French river La Garonne.} \label{fig:RiverGaronne} \end{figure}
\begin{figure} \caption{Cross-section of a classical river.} \label{beds} \end{figure}
\begin{figure} \caption{Uncertainty over the estimators of the Manning coefficient ($M=1/X$), from \citet{USA96}.} \label{usa9600} \end{figure}
\begin{figure} \caption{Approximations of the marginal posterior distributions of $\theta$ for several sizes $N$ of {\it maximin}-LHD designs (MASCARET computer code).} \label{mascaret-1} \end{figure}
\begin{figure} \caption{Comparison of the quality of different designs of numerical experiments using the MASCARET computer code.} \label{Q2compare} \end{figure}
\begin{figure} \caption{Approximations of the marginal posterior distributions of $\theta$ (first four dimensions) produced by several designs using the TELEMAC-2D computer code. The red stars indicate the prior means for each parameter.} \label{telemac-1} \end{figure}
\end{document}
\begin{document} \begin{frontmatter} \title{Stable Hierarchical Model Predictive Control Using an Inner Loop Reference Model and $\lambda$-Contractive Terminal Constraint Sets} \author[First]{Chris Vermillion} \author[Second]{Amor Menezes} \author[Third]{Ilya Kolmanovsky} \address[First]{Altaeros Energies, Boston, MA 02110 (e-mail: [email protected])} \address[Second]{California Institute for Quantitative Biosciences, University of California, Berkeley, Berkeley, CA 94704 (e-mail: [email protected])} \address[Third]{Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109 (e-mail: [email protected])} \begin{abstract} This paper details the hierarchical model predictive control (MPC) framework presented in \cite{hierarchical_vermillion} and \cite{hierarchical_vermillion_journal}, along with proofs that were omitted in those works. The method described in this paper differs significantly from previous approaches to guaranteeing overall stability, which have relied upon a multi-rate framework where the inner loop (low level) is updated at a faster rate than the outer loop (high level), and the inner loop must reach a steady-state within each outer loop time step. In contrast, the method proposed in this paper is aimed at stabilizing the origin of an error system characterized by the difference between the inner loop state and the state specified by a full-order reference model. This makes the method applicable to systems with reduced levels of time scale separation. This paper reviews the fundamental results of \cite{hierarchical_vermillion} and \cite{hierarchical_vermillion_journal} and presents proofs that were omitted due to space limitations. \end{abstract} \begin{keyword} Model predictive control, Hierarchical control, Control of constrained systems, Decentralization. \end{keyword} \end{frontmatter} \section{Introduction}\label{intro} This paper focuses on a two-layer inner loop/outer loop hierarchical control structure where the ultimate objective is to stabilize the overall system. The actuator and plant represent a cascade, depicted in Fig. \ref{hierarchical_ol}, wherein an actuator output, denoted by $v$, characterizes an overall force, moment, or generalized effect produced by the actuators, and is referred to as a \emph{virtual control input}. In the hierarchical control strategy, an outer loop controller sets a desired value for this virtual control input, denoted by $v_{des}$, and it is the responsibility of the inner loop to generate control inputs $u$ that drive $v$ close to $v_{des}$. \begin{figure} \caption{Block diagram of the actuator/plant cascade considered in this work.} \label{hierarchical_ol} \end{figure} This control approach is employed in a number of automotive, aerospace, and marine applications, such as \cite{ca2}, \cite{ca3}, \cite{ca4}, \cite{ca5}, and \cite{ca6}. Hierarchical control has become commonplace in industrial applications, as it offers two key advantages over its centralized counterpart: \begin{enumerate} \item Plug-and-play integration of new design features (for example, a new inner loop) without requiring a complete system redesign; \item Reduction in overall computational complexity, in terms of the number of inputs and/or states considered by each controller. \end{enumerate} The use of MPC for constrained hierarchical control has been a natural choice in instances where constraint satisfaction is critical and/or multiple control objectives must be traded off.
Only recently, however, has an effort been made to provide theoretical stability guarantees for the hierarchical system. In many recent papers, including \cite{hierarchical_app1}, \cite{hierarchical1}, \cite{hierarchical2}, \cite{hierarchical_review}, and \cite{hierarchical3}, the inner loop is updated at a faster rate than the outer loop, and the inner loop is designed to reach a steady-state, wherein $v=v_{des}$, within a single outer loop time step. This strategy represents an effective way of guaranteeing stability under large time-scale separation, but numerous systems, including those described in \cite{ca2}, \cite{ca3}, and \cite{ca4} (which address a flight control application), and \cite{ca6}, \cite{ca7}, and \cite{ca8} (which address a thermal management system), do not exhibit such a demonstrable time scale separation. Our approach differs from that of \cite{hierarchical1}, \cite{hierarchical2}, and \cite{hierarchical3} in that it drives the inner loop states to those of a \emph{reference model} rather than to the steady state values corresponding to $v_{des}$. Our stability formulation relies on $\lambda$-contractive terminal constraint sets for the outer and inner loop, in addition to rate-like constraints that ensure that the optimized MPC trajectories do not vary too much from one instant to the next. The contractive nature of the terminal constraint allows MPC-optimized control input trajectories to vary between one time step and the next. The work presented in this paper is an extension of an original IFAC conference paper (\cite{hierarchical_vermillion}) and builds upon it through two key mechanisms: \begin{itemize} \item Allowance for inexact (approximate) inner loop reference model matching; \item Greater flexibility in the decay of contraction rates within the MPC optimization. \end{itemize} \section{Problem Statement}\label{problem} In this paper, we consider two interconnected systems, as depicted in Fig. \ref{hierarchical_basic}, whose dynamics in discrete time are given by: \begin{eqnarray}\label{ssr_d} \nonumber x_{1}(k+1)&=&A_{1}x_{1}(k)+B_{1}v(k), \\ x_{2}(k+1)&=&A_{2}x_{2}(k)+B_{2}u(k), \\ \nonumber v(k) &=& Cx_{2}(k), \end{eqnarray} \noindent where $v\in\mathbb{R}^{q}$ represents the virtual control input, $x_{1} \in \mathbb{R}^{n_{1}}$ represents the plant states, which are driven by the virtual control input, $v$, whereas $x_{2}\in \mathbb{R}^{n_{2}}$ represents the actuator states, which are driven by the real control inputs, $u\in\mathbb{R}^{p}$, where $p \geq q$. The control inputs $u$ are subject to a saturation constraint set $U$, such that $u(k) \in U$ at every time instant. We assume that: \begin{itemize} \item \emph{Assumption 1:} The pair $(A_{1},B_{1})$ is stabilizable. \item \emph{Assumption 2:} The pair $(A_{2},B_{2})$ is controllable. \item \emph{Assumption 3:} Without loss of generality, the actuator dynamics of (\ref{ssr_d}) are written in the block controllable canonical form (CCF) described in \cite{miso_ccf}. \end{itemize} The assumption of stabilizability is clearly essential to any problem whose objective is stabilization of the origin. The stronger assumption of inner loop controllability (Assumption 2) allows us to generate an inner loop error system and control law that appropriately places all of the poles of the closed inner loop to satisfy the reference model specifications.
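To make the cascade structure of (\ref{ssr_d}) concrete, the following minimal sketch (in Python) simulates the actuator/plant interconnection for a few steps. The matrices $A_{1}$, $B_{1}$, $A_{2}$, $B_{2}$, $C$ and the input sequence are illustrative placeholders chosen only to satisfy Assumptions 1-3; they are not taken from any application discussed in this paper.
\begin{verbatim}
import numpy as np

# Placeholder cascade matrices (illustrative only; not from the paper's examples).
A1 = np.array([[1.0, 0.1],
               [0.0, 1.0]])          # plant dynamics (stabilizable with B1)
B1 = np.array([[0.0],
               [0.1]])
A2 = np.array([[0.0, 1.0],
               [-0.2, 0.9]])         # actuator dynamics in CCF (controllable)
B2 = np.array([[0.0],
               [1.0]])
C  = np.array([[1.0, 0.0]])          # virtual control input v = C x2

def cascade_step(x1, x2, u):
    """One step of the cascade: plant driven by v = C x2, actuator driven by u."""
    v = C @ x2
    x1_next = A1 @ x1 + B1 @ v
    x2_next = A2 @ x2 + B2 @ u
    return x1_next, x2_next

# Simulate a few steps under an arbitrary (placeholder) input sequence u(k) in U.
x1, x2 = np.array([1.0, 0.0]), np.zeros(2)
for k in range(5):
    x1, x2 = cascade_step(x1, x2, np.array([0.1]))
    print(k, x1, x2)
\end{verbatim}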
\section{Control Design Formulation}\label{control_design} Our approach relies on the design of an inner loop \emph{reference model}, which describes the ideal input-output behavior from $v_{des}$ to $v$. We will proceed to derive an error system describing the difference between the inner loop and reference model states, and we will show how closed-form control laws can be used to achieve exact or sufficiently accurate reference model matching near the origin of this system, ultimately resulting in local stability of the overall system. MPC is used to enlarge the region of attraction of the overall system to include states for which the closed-form control laws would violate the saturation constraints. The specific control algorithm incorporates both outer and inner loop terminal constraint sets, wherein closed-form control laws are used to achieve reference model matching (or approximate matching) behavior. Farther from the origins of the outer and inner loops, MPC is used to drive the system into these constraint sets, explicitly accounting for saturation constraints. \subsection{Reference Model Design and Assumptions} This reference model is given by: \begin{eqnarray}\label{ref_model} x_{f}(k+1) &=& A_{f}x_{f}(k)+B_{f}v_{des}(k), \\ \nonumber v_{des}^{f}(k) &=& Cx_{f}(k), \end{eqnarray} \noindent where $x_{f}\in \mathbb{R}^{n_{2}}$, $v_{des} \in \mathbb{R}^{q}$, and $v_{des}^{f} \in \mathbb{R}^{q}$. We assume that: \begin{itemize} \item \emph{Assumption 4:} The reference model is stable, i.e., $\|\bar{\lambda}_{i}(A_{f})\| < 1, \forall i$ ($\bar{\lambda}_{i}$ represents the $i^{th}$ eigenvalue of $A_{f}$); \item \emph{Assumption 5:} The reference model does not share any zeros with unstable poles of $A_{1}$. \end{itemize} \noindent Because Assumptions 4 and 5 are on the reference model, which is freely chosen by the control system designer, they do not restrict the applicability of the proposed control design. We use the reference model to analyze the closed-loop behavior of the inner loop through the following error system: \begin{eqnarray}\label{il_error_dynamics} \nonumber \tilde{x}(k+1) &=& A_{2}\tilde{x}(k)+(A_{2}-A_{f})x_{f}(k)+B_{2}u(k) \\ && -B_{f}v_{des}(k), \\ \nonumber \tilde{v}(k) &=& C\tilde{x}(k), \end{eqnarray} \noindent where $\tilde{x}(k) = x_{2}(k)-x_{f}(k)$ and it follows that $\tilde{v}(k)=v(k)-v_{des}^{f}(k)$. For notational convenience throughout the paper, because the reference model is embedded in the outer loop, we will introduce the augmented outer loop state, $x_{1}^{aug} \triangleq \left[\begin{array}{cc}x_{1}^{T} & x_{f}^{T}\end{array} \right]^{T}$, which results in augmented outer loop dynamics given by: \begin{eqnarray}\label{ol_aug_dynamics} x_{1}^{aug}(k+1) &=& A_{1}^{aug}x_{1}^{aug}(k) + B_{1}^{aug}\tilde{v}(k) \\ \nonumber && + B_{f}^{aug}v_{des}(k) \end{eqnarray} \noindent where: \begin{eqnarray} A_{1}^{aug} &=& \left[ \begin{array}{cc} A_{1} & B_{1}C \\ 0 & A_{f} \end{array} \right], \\ \nonumber B_{1}^{aug} &=& \left[\begin{array}{cc} B_{1}^{T} & 0 \end{array}\right]^{T}, \\ \nonumber B_{f}^{aug} &=& \left[\begin{array}{cc} 0 & B_{f}^{T} \end{array}\right]^{T}. \end{eqnarray} \subsection{Model Predictive Control Framework} An MPC optimization is carried out whenever the outer \emph{or} inner loop states are outside of predetermined $\lambda$-contractive terminal constraint sets $G_{1}$ and $G_{2}$, respectively. A closed-form terminal control law is active once the inner \emph{and} outer loop states have reached the terminal sets.
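The resulting switching logic between the terminal control laws and the MPC optimizations, formalized below in (\ref{mpc_outer_law}) and (\ref{mpc_inner_law}), can be summarized by the following sketch (Python). The quadratic forms, levels, and gains below are hypothetical placeholders; in this paper the sets $G_{1}$ and $G_{2}$ are the sublevel sets constructed in Section \ref{lambda_contractive}, and the MPC solvers are treated here as black boxes.
\begin{verbatim}
import numpy as np

# Hypothetical quadratic forms, levels, and gains (placeholders only).
Q, V1_star = np.eye(4), 1.0   # G1 = {x1_aug : x1_aug' Q x1_aug < V1_star}
P, V2_star = np.eye(2), 0.5   # G2 = {x_tilde : x_tilde' P x_tilde <= V2_star}
K1  = 0.1 * np.ones((1, 4))   # outer terminal gain (placeholder)
K21 = np.array([[1.0]])       # inner terminal gains (placeholders)
K22 = 0.1 * np.ones((1, 2))

def in_G1(x1_aug):
    return float(x1_aug @ Q @ x1_aug) < V1_star

def in_G2(x_tilde):
    return float(x_tilde @ P @ x_tilde) <= V2_star

def hierarchical_control(x1_aug, x_tilde, x2, outer_mpc, inner_mpc):
    """Terminal laws when both states lie in their terminal sets, MPC otherwise."""
    if in_G1(x1_aug) and in_G2(x_tilde):
        v_des = -K1 @ x1_aug                 # outer terminal law
        u = K21 @ v_des - K22 @ x2           # inner terminal law u_t(k)
    else:
        v_des = outer_mpc(x1_aug)[0]         # first element of optimized v_des trajectory
        u = inner_mpc(x_tilde, x2)[0]        # first element of optimized u trajectory
    return v_des, u
\end{verbatim}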
The block diagram of the closed-loop system when MPC is active is given in Fig. \ref{hierarchical_MPC}, whereas the closed-loop system under closed-form terminal control laws conforms to the block diagram of Fig. \ref{hierarchical_basic}. Whenever the MPC optimization is carried out, an optimal control trajectory is computed for an $N$-step prediction horizon, along with a corresponding state trajectory. The outer loop virtual control and state trajectories are given by: \begin{eqnarray}\label{sequences} \nonumber \mathbf{v_{des}}(k) &=& \left[\begin{array}{ccc}\mathbf{v_{des}}(k|k) & \ldots & \mathbf{v_{des}}(k+N-1|k) \end{array} \right], \\ \mathbf{x_{1}^{aug}}(k) &=& \left[\begin{array}{ccc}\mathbf{x_{1}^{aug}}(k|k) & \ldots & \mathbf{x_{1}^{aug}}(k+N|k) \end{array} \right]. \end{eqnarray} \noindent The inner loop control and state trajectories are given by: \begin{eqnarray}\label{sequences_inner} \nonumber \mathbf{u}(k) &=& \left[\begin{array}{ccc}\mathbf{u}(k|k) & \ldots & \mathbf{u}(k+N-1|k) \end{array} \right], \\ \mathbf{\tilde{x}}(k) &=& \left[\begin{array}{ccc}\mathbf{\tilde{x}}(k|k) & \ldots & \mathbf{\tilde{x}}(k+N|k) \end{array} \right]. \end{eqnarray} \noindent The notation $(i|k)$ denotes the chosen/predicted value of a variable at step $i$ when the optimization is carried out at time $k$ ($k \leq i$). \begin{figure} \caption{Block diagram of the hierarchical control strategy that is implemented when model predictive control is active. In this scenario, $N$-step predictions of interconnection variables (shown in \textbf{bold}) are exchanged between the outer and inner loop controllers.} \label{hierarchical_MPC} \end{figure} \begin{figure} \caption{Block diagram of the hierarchical control strategy that is implemented under terminal control laws.} \label{hierarchical_basic} \end{figure} The mathematical description of the outer loop control law is: \begin{eqnarray}\label{mpc_outer_law} \nonumber v_{des}(k) &=& \left\{ \begin{array}{ll} -K_{1}x_{1}^{aug}(k) & \text{if } x_{1}^{aug}(k) \in G_{1} \text{ and } \tilde{x}(k) \in G_{2}, \\ \mathbf{v_{des}^{o}}(k|k) & \text{otherwise.} \end{array}\right. \end{eqnarray} \noindent Here, $K_{1}$ is the terminal control gain and $\mathbf{v_{des}^{o}}(k)$ is the optimized control input sequence from the outer loop MPC optimization, given by: \begin{equation}\label{mpc_outer_opt} \mathbf{v_{des}^{o}}(k) = \arg \underset{\mathbf{v_{des}}\in \mathbf{V_{des}}}{\min} J_{1}(\mathbf{v_{des}}(k)|x_{1}^{aug}(k),\mathbf{\tilde{v}}(k-1)), \end{equation} subject to the dynamics of (\ref{ol_aug_dynamics}) and constraints: \begin{eqnarray}\label{mpc_outer_constraints} \nonumber \mathbf{x_{1}^{aug}}(k+N-1|k) & \in & G_{1}, \\ \mathbf{x_{1}^{aug}}(k+N|k) & \in & \lambda_{1}G_{1}, \\ \nonumber \|\mathbf{v_{des}}(k+i|k)-\mathbf{v_{des}^{o}}(k+i|k-1)\| & \leq & (\delta_{v_{des}}^{max})\beta^{\min(k,N_{1}^{*})}, \\ \nonumber && i=0 \ldots N-2, \end{eqnarray} \noindent and cost function: \begin{eqnarray}\label{mpc_outer_cost} \nonumber J_{1}(\mathbf{v_{des}}(k)|x_{1}^{aug}(k),\mathbf{\tilde{v}}(k-1)) &=& \sum_{i=k}^{k+N-1}g_{1}(\mathbf{x_{1}^{aug}}(i|k), \\ && \mathbf{v_{des}}(i|k)). \end{eqnarray} \noindent Here, $\lambda_{1}$, $\delta_{v_{des}}^{max}$, $\beta$, and $N_{1}^{*}$ are design parameters, which are summarized in Table \ref{MPC_parameters}. $\mathbf{V_{des}}$ is the set of all feasible $\mathbf{v_{des}}$ trajectories. For the results in this paper, there are no restrictions on the form of the stage cost, $g_{1}(\mathbf{x_{1}^{aug}}(i|k),\mathbf{v_{des}}(i|k))$.
The mathematical description of the inner loop control law is: \begin{eqnarray}\label{mpc_inner_law} u(k) &=& \left\{\begin{array}{ll} u_{t}(k) & \text{if } x_{1}^{aug}(k) \in G_{1} \text{ and } \tilde{x}(k) \in G_{2}, \\ \nonumber \mathbf{u^{o}}(k|k) & \text{otherwise,} \end{array}\right. \end{eqnarray} \noindent where \begin{equation}\label{inner_terminal} u_{t}(k) = K_{21}v_{des}(k) - K_{22}x_{2}(k). \end{equation} Here, $\mathbf{u^{o}}(k)$ is the optimized control input sequence from the inner loop MPC optimization, given by: \begin{equation}\label{mpc_inner_opt} \mathbf{u^{o}}(k) = \arg \underset{\mathbf{u}(k) \in \mathbf{U}}{\min} J_{2}(\mathbf{u}(k)|\tilde{x}(k),\mathbf{x_{f}}(k)), \end{equation} subject to the dynamics of (\ref{il_error_dynamics}) and constraints: \begin{eqnarray}\label{mpc_inner_constraints} \nonumber \mathbf{\tilde{x}}(k+N|k) & \in & \lambda_{2}G_{2}, \\ \|\mathbf{u}(k+i|k)-\mathbf{u^{o}}(k+i|k-1)\| & \leq & (\delta_{u}^{max})\beta^{\min(k,N_{2}^{*})}, \\ \nonumber && i=0 \ldots N-2 \\ \nonumber \mathbf{u}(k+i|k) &\in& U, i=0 \ldots N-1, \end{eqnarray} \noindent where $U$ reflects the actuator saturation limits of $u$ and $\mathbf{U}$ is the set of all feasible control input $\mathbf{u}$ trajectories. The inner loop cost function is given by: \begin{equation}\label{mpc_inner_cost} J_{2}(\mathbf{u}(k)|\tilde{x}(k),\mathbf{x_{f}}(k)) = \sum_{i=k}^{k+N-1}g_{2}(\mathbf{\tilde{x}}(i|k),\mathbf{u}(i|k)). \end{equation} Here, $\lambda_{2}$, $\delta_{u}^{max}$, $\beta$, and $N_{2}^{*}$ are design parameters, which are summarized in Table \ref{MPC_parameters}. As with the outer loop, there are no restrictions on the form of the stage cost, $g_{2}(\mathbf{\tilde{x}}(i|k),\mathbf{u}(i|k))$. The terms $\beta^{\min(k,N_{1}^{*})}$ and $\beta^{\min(k,N_{2}^{*})}$ impose the requirement that the trajectories $\mathbf{v_{des}}(k)$ and $\mathbf{u}(k)$ calculated at any two subsequent time steps must be sufficiently close to each other, and that the allowed deviation between successive trajectories shrinks over time (since $\beta < 1$), until $k=N_{1}^{*}$ and $k=N_{2}^{*}$, respectively. Formulas for the required values of $N_{1}^{*}$ and $N_{2}^{*}$ are given in the proof of Proposition \ref{convergence}; the required values depend on the contraction rates $\lambda_{1}$ and $\lambda_{2}$, the system dynamics, the horizon length ($N$), and $\beta$. Key MPC design parameters, including those that are essential for our stability formulation, are provided in Table \ref{MPC_parameters}. Fig. \ref{MPC_sequence} provides a graphical depiction of the sequence of operations that occur in a single time instant when MPC is active. \begin{figure} \caption{Sequence of operations when MPC is active, including the physical locations where the operations occur.} \label{MPC_sequence} \end{figure} Computationally, the outer loop MPC must consider $n_{1}+n_{2}$ states and $q$ control inputs, whereas the inner loop must consider $n_{2}$ states and $p$ control inputs. Both optimizations are individually computationally simpler than a centralized counterpart, which must consider $n_{1}+n_{2}$ states and $p$ control inputs. The resulting computational simplification can be especially significant when the algorithm is applied to systems with complex outer loops ($n_{1} \gg n_{2}$) and several actuators for a given virtual control ($p \gg q$), which is commonplace in industry.
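The shrinking rate-like bounds appearing in (\ref{mpc_outer_constraints}) and (\ref{mpc_inner_constraints}) are straightforward to evaluate; the sketch below (Python, with placeholder values for $\delta^{max}$, $\beta$, and $N^{*}$) computes the allowed deviation $\delta^{max}\beta^{\min(k,N^{*})}$ and checks it against two successive optimized trajectories that are assumed to be aligned at the same prediction times.
\begin{verbatim}
import numpy as np

def rate_bound(k, delta_max, beta, N_star):
    """Allowed deviation delta_max * beta**min(k, N_star) at time k (beta < 1)."""
    return delta_max * beta ** min(k, N_star)

def satisfies_rate_constraint(traj_k, traj_prev, k, delta_max, beta, N_star):
    """traj_k[i]    ~ value planned at step k for prediction time k+i,
       traj_prev[i] ~ value planned at step k-1 for the same time k+i
       (the previous trajectory shifted by one entry), i = 0..N-2."""
    bound = rate_bound(k, delta_max, beta, N_star)
    return all(np.linalg.norm(a - b) <= bound
               for a, b in zip(traj_k, traj_prev))

# Placeholder parameters and trajectories (illustrative values only).
delta_max, beta, N_star = 0.5, 0.8, 10
prev = [np.array([0.10]), np.array([0.12]), np.array([0.15])]
new  = [np.array([0.11]), np.array([0.13]), np.array([0.14])]
print(satisfies_rate_constraint(new, prev, k=3, delta_max=delta_max,
                                beta=beta, N_star=N_star))
\end{verbatim}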
\begin{table} \centering \normalsize \caption{Key MPC Design Parameters}\label{MPC_parameters} \begin{tabular}{|c|c|} \hline Parameter & Description \\ \hline $G_{1}$ & outer loop terminal constraint set \\ $G_{2}$ & inner loop terminal constraint set \\ $K_{1}$ & outer terminal control gain matrix \\ $K_{21},K_{22}$ & inner terminal control gain matrices \\ $\lambda_{1}$ & outer contraction rate ($<1$) \\ $\lambda_{2}$ & inner contraction rate ($<1$) \\ $\delta_{v_{des}}^{max}$ & rate-like constraint on outer loop MPC \\ $\delta_{u}^{max}$ & rate-like constraint on inner loop MPC \\ $\beta$ & any scalar that is $<1$ \\ $N_{1}^{*}$ & maximum steps until convergence to $G_{1}$ \\ $N_{2}^{*}$ & maximum steps until convergence to $G_{2}$ \\ \hline \end{tabular} \end{table} \section{Deriving Terminal Control Laws and $\lambda$-Contractive Terminal Constraint Sets}\label{lambda_contractive} In this section, we will first derive control laws that, in the absence of constraints, will lead to overall system stability. Having derived these control laws, we will then show that there exist $\lambda$-contractive sets $G_{1}$ and $G_{2}$, as described in \cite{lambda_contractive}, such that once $x_{1}^{aug}$ and $\tilde{x}$ enter $G_{1}$ and $G_{2}$, they remain there (and in fact are driven further into the sets at the next instant). We consider two options for inner loop terminal control design, namely: \begin{itemize} \item \emph{Exact reference model matching} - We design the inner loop control law such that $\tilde{x}(k+1)=A_{f}\tilde{x}(k)$, which guarantees that $v(k)$ asymptotically tracks $v_{des}^{f}(k)$. \item \emph{Approximate reference model matching} - The inner loop control law is designed such that the closed inner loop is stable and a small gain condition is satisfied. \end{itemize} \subsection{Terminal Control Law Design with Exact Reference Model Matching} In the case of exact reference model matching, in order to derive a model-matching controller, we assume that the reference model is cast in a specific form that is compatible with the actuator dynamics; specifically, we assume that: \begin{itemize} \item \emph{Assumption 6.} $A_{f}$ in (\ref{ref_model}) is written in the same block CCF as $A_{2}$ (as described in \cite{miso_ccf}). \end{itemize} \begin{itemize} \item \emph{Assumption 7.} Taking $R_{2}$ and $R_{f}$ as the sets of rows of $B_{2}$ and $B_{f}$, respectively, that contain nonzero entries (which also correspond to the full rows of $A_{2}$ and $A_{f}$), we assume that $R_{f} \subseteq R_{2}$, i.e., each nonzero row of $B_{f}$ is also a nonzero row of $B_{2}$. \end{itemize} \noindent These assumptions, in conjunction with Assumptions 1-5, place restrictions on $A_{f}$ and $B_{f}$ that ensure that a stabilizing, reference model-matching inner loop control law can be designed.
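As a concrete illustration of the role of these structural assumptions, the following sketch (Python, with toy block-CCF matrices chosen only so that Assumptions 6 and 7 hold; the numerical values are illustrative and not from any application in this paper) computes gains $K_{21}$ and $K_{22}$ satisfying $B_{2}K_{21}=B_{f}$ and $B_{2}K_{22}=A_{2}-A_{f}$, as used in Proposition \ref{terminal_control} below, and verifies that the resulting error dynamics collapse to $\tilde{x}(k+1)=A_{f}\tilde{x}(k)$.
\begin{verbatim}
import numpy as np

# Toy actuator and reference model in the same block CCF: two chains of
# length 2, p = 2 inputs, q = 1 virtual input (illustrative values only).
A2 = np.array([[0.0, 1.0, 0.0, 0.0],
               [-0.2, 0.5, 0.1, 0.0],
               [0.0, 0.0, 0.0, 1.0],
               [0.0, 0.1, -0.3, 0.6]])
B2 = np.array([[0.0, 0.0],
               [1.0, 0.0],
               [0.0, 0.0],
               [0.0, 1.0]])
Af = np.array([[0.0, 1.0, 0.0, 0.0],
               [-0.1, 0.2, 0.0, 0.0],
               [0.0, 0.0, 0.0, 1.0],
               [0.0, 0.0, -0.1, 0.3]])
Bf = np.array([[0.0],
               [1.0],
               [0.0],
               [0.5]])

# Least-squares solutions of B2 K21 = Bf and B2 K22 = A2 - Af; these are exact
# here because the nonzero rows of Bf and of A2 - Af are nonzero rows of B2
# (Assumptions 6 and 7).
K21 = np.linalg.lstsq(B2, Bf, rcond=None)[0]
K22 = np.linalg.lstsq(B2, A2 - Af, rcond=None)[0]

assert np.allclose(B2 @ K21, Bf)
assert np.allclose(B2 @ K22, A2 - Af)
# Under u = K21 v_des - K22 x2 the error dynamics reduce to x_tilde(k+1) = Af x_tilde(k):
assert np.allclose(A2 - B2 @ K22, Af)
print("K21 =\n", K21, "\nK22 =\n", K22)
\end{verbatim}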
In particular, it is possible to design outer and inner loop terminal control laws with desirable properties, according to the following proposition: \begin{prop} \label{terminal_control} (Terminal control laws for exact matching): Given that Assumptions 1-7 hold, there exist control laws $v_{des}(k) = -K_{1}x_{1}^{aug}(k)$ and $u(k) = K_{21}v_{des}(k) - K_{22}x_{2}(k)$ which, when substituted into (\ref{ol_aug_dynamics}) and (\ref{il_error_dynamics}), yield: \begin{eqnarray}\label{full_cl_dynamics} \nonumber x_{1}^{aug}(k+1) &=& (A_{1}^{aug}-B_{1}^{aug}K_{1})x_{1}^{aug}(k) \\ && + B_{1}^{aug}\tilde{v}(k), \\ \nonumber \tilde{x}(k+1) &=& A_{f}\tilde{x}(k), \end{eqnarray} \noindent where $\|\bar{\lambda}_{i}(A_{1}^{aug}-B_{1}^{aug}K_{1})\| < 1, \forall i$, and render the origin of the overall system, $x_{1}^{aug}=0,\tilde{x}=0$, asymptotically stable. \end{prop} \begin{pf} Since the pair $(A_{1},B_{1})$ is stabilizable and the reference model does not share zeros with unstable poles of $A_{1}$, it follows that the pair $(A_{1}^{aug},B_{1}^{aug})$ is stabilizable. Thus, $K_{1}$ can be designed to ensure that $\|\bar{\lambda}_{i}(A_{1}^{aug}-B_{1}^{aug}K_{1})\| < 1, \forall i$. To show the second part of the proposition, recall that the inner loop dynamics are expressed in (\ref{il_error_dynamics}) by: \begin{equation}\label{inner_dynamics_again} \tilde{x}(k+1) = A_{2}\tilde{x}(k)+(A_{2}-A_{f})x_{f}(k)+B_{2}u(k)-B_{f}v_{des}(k). \end{equation} It follows from the block CCF of $A_{2}$, $B_{2}$, in conjunction with Assumptions 6 and 7 (which impose a suitable block CCF structure on $A_{f}$ and $B_{f}$), that we can choose $K_{21}$ and $K_{22}$ to satisfy: \begin{eqnarray} B_{2}K_{21} &=& B_{f}, \\ \nonumber B_{2}K_{22} &=& A_{2} - A_{f}, \end{eqnarray} \noindent which, when substituted into the inner loop dynamics, yields: \begin{equation} \tilde{x}(k+1) = A_{f} \tilde{x}(k). \end{equation} \noindent To see this, let $R_{2}$ be the indices corresponding to the nonzero rows of $B_{2}$, and let $R_{f}$ be the indices corresponding to the nonzero rows of $B_{f}$. Because Assumption 7 requires that $R_{f} \subseteq R_{2}$, it is possible to achieve $B_{2}K_{21} = B_{f}$. Furthermore, let $i$ represent any zero row of $B_{2}$ (and also $B_{f}$). It follows from the block CCF (imposed by Assumption 6) and Assumption 7 that $A_{2ij} = A_{fij} \forall j$, making it possible to achieve $B_{2}K_{22} = A_{2} - A_{f}$. Because $\|\bar{\lambda}_{i}(A_{1}^{aug}-B_{1}^{aug}K_{1})\| < 1, \forall i$ and $\|\bar{\lambda}_{i}(A_{f})\| < 1, \forall i$, it follows that both the closed inner and outer loops (\ref{full_cl_dynamics}) are input-to-state stable (ISS) under the aforementioned control laws. For small gain analysis, it is convenient to recast the system block diagram of Fig. \ref{hierarchical_basic} in the nonminimal representation of Fig. \ref{hierarchical_rm_analysis}, where offsetting copies of the reference model are embedded in both the inner and outer loops. Closed outer loop stability guarantees a finite $l_{2}$ gain, $\gamma_{1}$, from $\tilde{v}$ to $v_{des}$, and exact inner loop reference model matching guarantees an $l_{2}$ gain of $\gamma_{2}=0$, from $v_{des}$ to $\tilde{v}$. Therefore, the small gain condition, $\gamma_{1}\gamma_{2}<1$, is satisfied, and in conjunction with outer and inner loop ISS, this proves asymptotic stability of $x_{1}^{aug}=0,\tilde{x}=0$.
\begin{flushright} $\Box$ \end{flushright} \end{pf} \begin{figure} \caption{Block diagram of the hierarchical control strategy under terminal control laws, rearranged for analysis purposes.} \label{hierarchical_rm_analysis} \end{figure} \subsection{Terminal Control Law Design with Inexact Reference Model Matching and Small Gain Condition} For many systems, such as non-minimum phase systems and high-order, high relative degree systems, exact reference model matching is unrealistic. Exact matching is not essential, however, as shown in the following proposition: \begin{prop} \label{terminal_control_inexact} (Terminal control laws for inexact matching): Given that Assumptions 1-5 hold, there exist control laws $v_{des}(k) = -K_{1}x_{1}^{aug}(k)$ and $u(k) = K_{21}v_{des}(k) - K_{22}x_{2}(k)$ which, when substituted into (\ref{ol_aug_dynamics}) and (\ref{il_error_dynamics}), yield: \begin{eqnarray}\label{full_inexact_cl_dynamics} \nonumber x_{1}^{aug}(k+1) &=& (A_{1}^{aug}-B_{1}^{aug}K_{1})x_{1}^{aug}(k) + B_{1}^{aug}\tilde{v}(k), \\ \tilde{x}(k+1) &=& (A_{2}-B_{2}K_{22})\tilde{x}(k) \\ \nonumber && + (A_{2}-A_{f}-B_{2}K_{22})x_{f}(k) \\ \nonumber && + (B_{2}K_{21}-B_{f})v_{des}(k), \end{eqnarray} \noindent where $\|\bar{\lambda}_{i}(A_{1}^{aug}-B_{1}^{aug}K_{1})\| < 1, \forall i$ and $\|\bar{\lambda}_{i}(A_{2}-B_{2}K_{22})\| < 1, \forall i$. Furthermore, suppose that $K_{1}$, $K_{21}$, and $K_{22}$ are designed so that the $l_{2}$ gains from $\tilde{v}$ to $v_{des}$ and $v_{des}$ to $\tilde{v}$, denoted $\gamma_{1}$ and $\gamma_{2}$, respectively, satisfy the small gain condition, $\gamma_{1}\gamma_{2} < 1$. Then the origin of the overall system, $x_{1}^{aug}=0,\tilde{x}=0$, is asymptotically stable. \end{prop} \begin{pf} Since the pair $(A_{1},B_{1})$ is stabilizable and the reference model does not share zeros with unstable poles of $A_{1}$, it follows that the pair $(A_{1}^{aug},B_{1}^{aug})$ is stabilizable. Thus, $K_{1}$ can be designed to ensure that $\|\bar{\lambda}_{i}(A_{1}^{aug}-B_{1}^{aug}K_{1})\| < 1, \forall i$. From Assumption 2, the pair ($A_{2},B_{2}$) is controllable, and therefore there exists $K_{22}$ such that $\|\bar{\lambda}_{i}(A_{2}-B_{2}K_{22})\| < 1, \forall i$. Thus, both the inner and outer closed-loop dynamics of (\ref{full_inexact_cl_dynamics}) are input-to-state stable (ISS). By the hypotheses of Proposition \ref{terminal_control_inexact}, the small gain condition, i.e., $\gamma_{1}\gamma_{2}<1$, is satisfied. Together with ISS, this proves asymptotic stability of $x_{1}^{aug}=0,\tilde{x}=0$. \begin{flushright} $\Box$ \end{flushright} \end{pf} Since $\|\bar{\lambda}_{i}(A_{1}^{aug}-B_{1}^{aug}K_{1})\| < 1, \forall i$ and $\|\bar{\lambda}_{i}(A_{2}-B_{2}K_{22})\| < 1, \forall i$, it follows (\cite{hierarchical1}) that there exist quadratic Lyapunov functions, $V_{1}(x_{1}^{aug}) = x_{1}^{aug,T}Qx_{1}^{aug}$ and $V_{2}(\tilde{x}) = \tilde{x}^{T}P\tilde{x}$, where $Q$ and $P$ are positive definite symmetric matrices, such that when $v_{des}(k)=-K_{1}x_{1}^{aug}(k)$ and $u(k) = K_{21}v_{des}(k) - K_{22}x_{2}(k)$: \begin{eqnarray}\label{lyap_lambda_inexact} V_{1}(x_{1}^{aug}(k+1))-V_{1}(x_{1}^{aug}(k)) &<& -\alpha_{1} V_{1}(x_{1}^{aug}(k)) \\ \nonumber && + \bar{\gamma}_{1} \|\tilde{v}(k)\|^{2}, \end{eqnarray} \begin{eqnarray}\label{lyap_alpha_inexact} V_{2}(\tilde{x}(k+1))-V_{2}(\tilde{x}(k)) &<& -\alpha_{2}V_{2}(\tilde{x}(k)) \\ \nonumber && + \bar{\gamma}_{21} \|v_{des}(k)\|^{2} + \bar{\gamma}_{22} \|x_{f}(k)\|^{2}.
\end{eqnarray} \noindent for some $\alpha_{1} > 0$, $\alpha_{2} > 0$, $\bar{\gamma}_{1} > 0$, $\bar{\gamma}_{21} > 0$, and $\bar{\gamma}_{22} \geq 0$ ($\bar{\gamma}_{22} = 0$ under exact reference model matching). This fact will be important in demonstrating the existence and construction of $\lambda$-contractive terminal constraint sets. \subsection{Design of Terminal Constraint Sets Under Exact Reference Model Matching} Now, we show that $\lambda$-contractive sets, $G_{1}$ and $G_{2}$, conforming to the definition in \cite{lambda_contractive}, exist for the outer and inner loops. To guarantee that such sets exist, we make the following mild assumption regarding the feasible control input set, $U$: \begin{itemize} \item \emph{Assumption 8.} $u=0$ lies in the interior of $U$. \end{itemize} \noindent We now demonstrate the existence of $\lambda$-contractive sets through the following proposition: \begin{prop} \label{lambda_contractive} (Existence of $\lambda$-contractive sets): Under Assumptions 1-8, there exist sets $G_{1} \subset \mathbb{R}^{n_{1}+n_{2}}$ and $G_{2} \subset \mathbb{R}^{n_{2}}$, along with scalars $\lambda_{1}: 0 \leq \lambda_{1} < 1$ and $\lambda_{2}: 0 \leq \lambda_{2} < 1$ such that if: \begin{eqnarray} \nonumber x_{1}^{aug}(k) &\in& G_{1}, \\ \tilde{x}(k) &\in& G_{2}, \\ \nonumber v_{des}(k) &=& -K_{1}x_{1}^{aug}(k), \\ \nonumber u(k) &=& K_{21}v_{des}(k) - K_{22}x_{2}(k), \end{eqnarray} then: \begin{eqnarray} \nonumber u(k) &\in& U, \\ x_{1}^{aug}(k+1) &\in& \lambda_{1}G_{1}, \\ \nonumber \tilde{x}(k+1) &\in& \lambda_{2}G_{2}. \end{eqnarray} \end{prop} \begin{pf} To construct $G_{1}$, take: \begin{equation}\label{G_1_def} G_{1} \triangleq \{x_{1}^{aug}: V_{1}(x_{1}^{aug}) < V_{1}^{*}\}, \end{equation} \noindent where $V_{1}^{*} > 0$. It follows from the continuity of $V_{1}(x_{1}^{aug})$ that there exist some $\lambda_{1}: 0 \leq \lambda_{1} < 1$, $\lambda_{1}^{*}: 0 \leq \lambda_{1}^{*} < 1$, and $\epsilon_{1} > 0$ such that: \begin{eqnarray}\label{epsilon} \lambda_{1}^{*} &>& 1-\alpha_{1}+\epsilon_{1}, \\ \nonumber V_{1}(x_{1}^{aug}(k+1))&\leq& \lambda_{1}^{*}V_{1}^{*} \Rightarrow x_{1}^{aug}(k+1) \in \lambda_{1}G_{1}. \end{eqnarray} It follows from (\ref{lyap_lambda_inexact}), (\ref{G_1_def}), and (\ref{epsilon}) that if: \begin{equation}\label{v_tilde_limits} \|\tilde{v}(k)\|^{2} \leq \frac{\epsilon_{1} V_{1}^{*}}{\bar{\gamma}_{1}}, \end{equation} and $x_{1}^{aug}(k) \in G_{1}$, then: \begin{eqnarray}\label{last_G1} V_{1}(x_{1}^{aug}(k+1)) &\leq& \lambda_{1}^{*} V_{1}^{*}, \\ \nonumber x_{1}^{aug}(k+1) &\in& \lambda_{1}G_{1}. \end{eqnarray} \noindent To see this, note that (\ref{lyap_lambda_inexact}) can be rearranged as: \begin{eqnarray} V_{1}(x_{1}^{aug}(k+1)) &<& (1-\alpha_{1}+\epsilon_{1}) V_{1}^{*} \\ \nonumber && + (1-\alpha_{1})(V_{1}(x_{1}^{aug}(k))-V_{1}^{*}), \end{eqnarray} \noindent and noting that $(1-\alpha_{1})(V_{1}(x_{1}^{aug}(k))-V_{1}^{*}) \leq 0$ when $x_{1}^{aug} \in G_{1}$, it follows that: \begin{equation} V_{1}(x_{1}^{aug}(k+1)) < (1-\alpha_{1}+\epsilon_{1}) V_{1}^{*}. \end{equation} To construct $G_{2}$, take: \begin{equation} G_{2} \triangleq \{\tilde{x}: V_{2}(\tilde{x})\leq V_{2}^{*}\}, \end{equation} \noindent where $V_{2}^{*} > 0$.
It follows from (\ref{lyap_alpha_inexact}) and the continuity of $V_{2}(\tilde{x})$ that if $\tilde{x}(k) \in G_{2}$ and $u(k)=u_{t}(k)$, then: \begin{eqnarray} V_{2}(\tilde{x}(k+1)) &<& (1-\alpha_{2})V_{2}^{*}, \\ \nonumber \tilde{x}(k+1) &\in& \lambda_{2} G_{2}, \end{eqnarray} \noindent for some $\lambda_{2}: 0 \leq \lambda_{2} < 1$. It remains to select $V_{1}^{*}$ and $V_{2}^{*}$ such that $u_{t}(k) \in U, \forall x_{1}^{aug}(k) \in G_{1}, \tilde{x}(k) \in G_{2}$, and $\|\tilde{v}(k)\|^{2}$ satisfies (\ref{v_tilde_limits}) whenever $\tilde{x}(k) \in G_{2}$. To ensure that $u_{t}(k) \in U$, note that the inner loop terminal control law (\ref{inner_terminal}) can be written as: \begin{equation}\label{inner_control_rewritten} u_{t}(k) = -K_{21}K_{1}x_{1}^{aug}(k) - K_{22}x_{f}(k) - K_{22}\tilde{x}(k), \end{equation} \noindent and that \begin{equation}\label{inner_control_norms} \|u_{t}(k)\| \leq (\|K_{21}K_{1}\|+\|K_{22}\|)\|x_{1}^{aug}(k)\| + \|K_{22}\|\|\tilde{x}(k)\|. \end{equation} It follows from (\ref{inner_control_norms}) and Assumption 8 that one can choose $x_{1}^{max} > 0$ and $\tilde{x}^{max} > 0$ such that whenever $\|x_{1}^{aug}(k)\| \leq x_{1}^{max}$ and $\|\tilde{x}(k)\| \leq \tilde{x}^{max}$, $u_{t}(k) \in U$. From the quadratic structure of $V_{1}(x_{1}^{aug})$, it follows that $\|x_{1}^{aug}(k)\| \leq x_{1}^{max}$ whenever \begin{equation} V_{1}(x_{1}^{aug}(k)) \leq \bar{\lambda}_{min}(Q)(x_{1}^{max})^{2}, \end{equation} \noindent where $\bar{\lambda}_{min}(Q)$ is the smallest eigenvalue of $Q$. Therefore, taking \begin{equation}\label{V_1_star_constraint} V_{1}^{*} \leq \bar{\lambda}_{min}(Q)(x_{1}^{max})^{2} \end{equation} \noindent guarantees that $\|x_{1}^{aug}\| \leq x_{1}^{max}$ whenever $x_{1}^{aug} \in G_{1}$. Similarly, from the quadratic structure of $V_{2}(\tilde{x})$, it follows that $\|\tilde{x}(k)\| \leq \tilde{x}^{max}$ whenever \begin{equation} V_{2}(\tilde{x}(k)) \leq \bar{\lambda}_{min}(P)(\tilde{x}^{max})^{2}, \end{equation} \noindent where $\bar{\lambda}_{min}(P)$ is the smallest eigenvalue of $P$. Therefore, taking \begin{equation}\label{V_2_star_constraint1} V_{2}^{*} \leq \bar{\lambda}_{min}(P)(\tilde{x}^{max})^{2} \end{equation} \noindent guarantees that $\|\tilde{x}\| \leq \tilde{x}^{max}$ whenever $\tilde{x} \in G_{2}$. (\ref{V_1_star_constraint}) and (\ref{V_2_star_constraint1}) together guarantee that $u_{t}(k) \in U$ whenever $x_{1}^{aug}(k) \in G_{1}, \tilde{x}(k) \in G_{2}$. Finally, $V_{2}^{*}$ needs to be selected so that $\|\tilde{v}(k)\|^{2}$ satisfies (\ref{v_tilde_limits}) whenever $\tilde{x}(k) \in G_{2}$. Manipulation of (\ref{v_tilde_limits}) shows that this is the case when \begin{equation}\label{first_v_tilde_eq} \|\tilde{x}(k)\|^{2} \leq \frac{\epsilon_{1} V_{1}^{*}}{\|C\|^{2}\bar{\gamma}_{1}}, \end{equation} \noindent and it follows from the quadratic structure of $V_{2}(\tilde{x})$ that (\ref{first_v_tilde_eq}) is satisfied whenever \begin{equation} V_{2}(\tilde{x}(k)) \leq \frac{\epsilon_{1} V_{1}^{*}\bar{\lambda}_{min}(P)}{\|C\|^{2}\bar{\gamma}_{1}}. \end{equation} \noindent Substituting (\ref{V_1_star_constraint}) for $V_{1}^{*}$, it follows that by taking \begin{equation} V_{2}^{*} \leq \frac{\epsilon_{1} (x_{1}^{max})^{2}\bar{\lambda}_{min}(Q)\bar{\lambda}_{min}(P)}{\bar{\gamma}_{1} \|C\|^{2}}, \end{equation} \noindent one guarantees that (\ref{v_tilde_limits}) is satisfied whenever $\tilde{x}(k) \in G_{2}$.
In order to simultaneously ensure that $u_{t}(k) \in U$, we take: \begin{equation}\label{V_2_star_constraint} V_{2}^{*} = \min \{(\tilde{x}^{max})^{2}\bar{\lambda}_{min}(P),\frac{\epsilon_{1} (x_{1}^{max})^{2}\bar{\lambda}_{min}(Q)\bar{\lambda}_{min}(P)}{\bar{\gamma}_{1} \|C\|^{2}}\}. \end{equation} \begin{flushright} $\Box$ \end{flushright} \end{pf} The proof of Proposition \ref{lambda_contractive} is constructive in the sense that it provides the method by which one can construct $G_{1}$ and $G_{2}$, and determine suitable values for $\lambda_{1}$ and $\lambda_{2}$, respectively. \subsection{Design of Terminal Constraint Sets Under Inexact Reference Model Matching} It is also possible to derive constraint sets $G_{1}$ and $G_{2}$ under inexact, but sufficiently accurate, reference model matching. The existence of $\lambda$-contractive constraint sets and the conditions under which they are guaranteed to exist are given in the following proposition: \begin{prop} \label{lambda_contractive_inexact} (Existence of $\lambda$-contractive sets with inexact reference model matching): Suppose that Assumptions 1-5 and Assumption 8 hold. Furthermore, suppose that $V_{1}(x_{1}^{aug})$ and $V_{2}(\tilde{x})$, along with the scalars $\alpha_{1}$, $\alpha_{2}$, $\bar{\gamma}_{1}$, $\bar{\gamma}_{21}$, and $\bar{\gamma}_{22}$ from (\ref{lyap_lambda_inexact}) and (\ref{lyap_alpha_inexact}), together with the matrices $C$ and $K_{1}$, satisfy the following inequality: \begin{equation}\label{small_gain_constraint_inequality} \alpha_{1}\alpha_{2}\bar{\lambda}_{min}(P)\bar{\lambda}_{min}(Q) \geq \bar{\gamma}_{1}\|C\|^{2}(\bar{\gamma}_{21}\|K_{1}\|^{2}+\bar{\gamma}_{22}), \end{equation} \noindent where $\bar{\lambda}_{min}(P)$ and $\bar{\lambda}_{min}(Q)$ are the minimum eigenvalues of $P$ and $Q$, respectively. Then there exist sets $G_{1} \subset \mathbb{R}^{n_{1}+n_{2}}$ and $G_{2} \subset \mathbb{R}^{n_{2}}$, along with scalars $\lambda_{1}: 0 \leq \lambda_{1} < 1$ and $\lambda_{2}: 0 \leq \lambda_{2} < 1$ such that if: \begin{eqnarray} \nonumber x_{1}^{aug}(k) &\in& G_{1}, \\ \tilde{x}(k) &\in& G_{2}, \\ \nonumber v_{des}(k) &=& -K_{1}x_{1}^{aug}(k), \\ \nonumber u(k) &=& K_{21}v_{des}(k) - K_{22}x_{2}(k), \end{eqnarray} then: \begin{eqnarray} \nonumber u(k) &\in& U, \\ x_{1}^{aug}(k+1) &\in& \lambda_{1}G_{1}, \\ \nonumber \tilde{x}(k+1) &\in& \lambda_{2}G_{2}. \end{eqnarray} \end{prop} \begin{pf} The construction of $G_{1}$ is identical to that in Proposition \ref{lambda_contractive}, taking: \begin{equation}\label{G_1_def_inexact} G_{1} \triangleq \{x_{1}^{aug}: V_{1}(x_{1}^{aug}) < V_{1}^{*}\}, \end{equation} \noindent where $V_{1}^{*} > 0$. Equations (\ref{epsilon})-(\ref{last_G1}) remain unchanged and follow the same derivation as in Proposition \ref{lambda_contractive}. For the construction of $G_{2}$, we take: \begin{equation}\label{G_2_def_inexact} G_{2} \triangleq \{\tilde{x}: V_{2}(\tilde{x})\leq V_{2}^{*}\}, \end{equation} \noindent where $V_{2}^{*} > 0$. It follows from the continuity of $V_{2}(\tilde{x})$ that there exist some $\lambda_{2}: 0 \leq \lambda_{2} < 1$, $\lambda_{2}^{*}: 0 \leq \lambda_{2}^{*} < 1$, and $\epsilon_{2} > 0$ such that: \begin{eqnarray}\label{epsilon_inexact} \lambda_{2}^{*} &>& 1-\alpha_{2}+\epsilon_{2}, \\ \nonumber V_{2}(\tilde{x}(k+1))&\leq& \lambda_{2}^{*}V_{2}^{*} \Rightarrow \tilde{x}(k+1) \in \lambda_{2}G_{2}.
\end{eqnarray} It follows from (\ref{lyap_alpha_inexact}), (\ref{G_2_def_inexact}), and (\ref{epsilon_inexact}) that if: \begin{equation}\label{v_des_limits} \bar{\gamma}_{21}\|v_{des}(k)\|^{2}+\bar{\gamma}_{22}\|x_{f}(k)\|^{2} \leq \epsilon_{2}V_{2}^{*}, \end{equation} and $\tilde{x}(k) \in G_{2}$, then: \begin{eqnarray} V_{2}(\tilde{x}(k+1)) &\leq& \lambda_{2}^{*} V_{2}^{*}, \\ \nonumber \tilde{x}(k+1) &\in& \lambda_{2}G_{2}. \end{eqnarray} It remains to select $V_{1}^{*}$ and $V_{2}^{*}$ such that $u_{t}(k) \in U, \forall x_{1}^{aug}(k) \in G_{1}, \tilde{x}(k) \in G_{2}$, and $\|\tilde{v}(k)\|^{2}$ satisfies (\ref{v_tilde_limits}) whenever $\tilde{x}(k) \in G_{2}$. This derivation is exactly the same here as in Proposition \ref{lambda_contractive}, and (\ref{inner_control_rewritten})-(\ref{V_2_star_constraint1}) all hold. Finally, $V_{1}^{*}$ needs to be selected so that $\|v_{des}(k)\|^{2}$ and $\|x_{f}(k)\|^{2}$ satisfy (\ref{v_des_limits}) whenever $x_{1}^{aug}(k) \in G_{1}$, and $V_{2}^{*}$ needs to be selected so that $\|\tilde{v}(k)\|^{2}$ satisfies (\ref{v_tilde_limits}) whenever $\tilde{x}(k) \in G_{2}$. For $V_{2}^{*}$, the derivation is the same as in Proposition \ref{lambda_contractive} and the requirement is given by: \begin{equation}\label{V2_inexact} V_{2}^{*} \leq \frac{\epsilon_{1} V_{1}^{*}\bar{\lambda}_{min}(P)}{\|C\|^{2}\bar{\gamma}_{1}}. \end{equation} \noindent For $V_{1}^{*}$, we begin by noting that if: \begin{equation}\label{V1_inexact} V_{1}^{*} \leq \frac{\epsilon_{2} V_{2}^{*}\bar{\lambda}_{min}(Q)}{\|K_{1}\|^{2}\bar{\gamma}_{21}+\bar{\gamma}_{22}}, \end{equation} \noindent then (\ref{v_des_limits}) is satisfied. To see this, note first that whenever $x_{1}^{aug}(k) \in G_{1}$, it follows from the quadratic form of $V_{1}(x_{1}^{aug})$ that: \begin{equation} \bar{\lambda}_{min}(Q)\|x_{1}^{aug}(k)\|^{2} \leq V_{1}^{*}, \end{equation} \noindent from which it follows, by substitution into (\ref{V1_inexact}), that: \begin{equation} \|x_{1}^{aug}(k)\|^{2} \leq \frac{\epsilon_{2}V_{2}^{*}}{\|K_{1}\|^{2}\bar{\gamma}_{21}+\bar{\gamma}_{22}}. \end{equation} \noindent Noting that $\bar{\gamma}_{21}\|v_{des}(k)\|^{2}+\bar{\gamma}_{22}\|x_{f}(k)\|^{2} \leq \|x_{1}^{aug}(k)\|^{2}(\|K_{1}\|^{2}\bar{\gamma}_{21}+\bar{\gamma}_{22})$, we can see immediately that (\ref{v_des_limits}) is satisfied. Combining (\ref{V2_inexact}) and (\ref{V1_inexact}) with the requirements of (\ref{V_1_star_constraint}) and (\ref{V_2_star_constraint}) gives the following two nonlinear equations that must be solved for $V_{1}^{*}$ and $V_{2}^{*}$: \begin{equation}\label{V_1_star_constraint_inexact} V_{1}^{*} = \min \{(x_{1}^{max})^{2}\bar{\lambda}_{min}(Q),\frac{\epsilon_{2} V_{2}^{*}\bar{\lambda}_{min}(Q)}{\|K_{1}\|^{2}\bar{\gamma}_{21}+\bar{\gamma}_{22}}\}. \end{equation} \begin{equation}\label{V_2_star_constraint_inexact} V_{2}^{*} = \min \{(\tilde{x}^{max})^{2}\bar{\lambda}_{min}(P),\frac{\epsilon_{1} V_{1}^{*}\bar{\lambda}_{min}(P)}{\|C\|^{2}\bar{\gamma}_{1}}\}. \end{equation} \noindent (\ref{V_1_star_constraint_inexact}) and (\ref{V_2_star_constraint_inexact}) will only admit a solution if: \begin{equation}\label{inexact_constraint_epsilon} \frac{\epsilon_{1} \bar{\lambda}_{min}(P)}{\|C\|^{2}\bar{\gamma}_{1}} \geq \frac{ \|K_{1}\|^{2}\bar{\gamma}_{21}+\bar{\gamma}_{22}}{\epsilon_{2}\bar{\lambda}_{min}(Q)}.
\end{equation} \noindent Noting that the only requirements on $\epsilon_{1}$ and $\epsilon_{2}$ are that $\epsilon_{1}<\alpha_{1}$ and $\epsilon_{2}<\alpha_{2}$, $\epsilon_{1}$ and $\epsilon_{2}$ in (\ref{inexact_constraint_epsilon}) can be replaced with $\alpha_{1}$ and $\alpha_{2}$, and (\ref{inexact_constraint_epsilon}) can be rearranged to yield the constraint: \begin{equation}\label{small_gain_constraint_inequality_repeat} \alpha_{1}\alpha_{2}\bar{\lambda}_{min}(P)\bar{\lambda}_{min}(Q) \geq \bar{\gamma}_{1}\|C\|^{2}(\bar{\gamma}_{21}\|K_{1}\|^{2}+\bar{\gamma}_{22}), \end{equation} \noindent completing the proof. \begin{flushright} $\Box$ \end{flushright} \end{pf} \noindent The proof of Proposition \ref{lambda_contractive_inexact} follows similar arguments to that of Proposition \ref{lambda_contractive}, with the exception that now $V_{1}^{*}$ and $V_{2}^{*}$, which define the boundaries of $G_{1}$ and $G_{2}$, must satisfy two coupled equations, and a solution to these coupled equations only exists when (\ref{small_gain_constraint_inequality}) is satisfied. Qualitatively speaking, satisfaction of (\ref{small_gain_constraint_inequality}) depends on two factors: \begin{enumerate} \item Free response speed of the outer and inner loop systems, indicated by $\alpha_{1}$ and $\alpha_{2}$; \item Level of coupling between the outer and inner loop systems, indicated by $\bar{\gamma}_{1}$, $\bar{\gamma}_{21}$, and $\bar{\gamma}_{22}$. \end{enumerate} \section{Deriving Rate-Like Constraints on Control Inputs and Desired Virtual Control Inputs}\label{rate_constraints} The results of Section \ref{lambda_contractive} provide a means by which outer and inner loop control laws can be designed to yield local stability of the origin of the overall system, i.e., $x_{1}^{aug}=0$, $\tilde{x}=0$. The MPC optimizations of (\ref{mpc_outer_opt})-(\ref{mpc_outer_cost}) and (\ref{mpc_inner_opt})-(\ref{mpc_inner_cost}) are employed in order to expand the region of attraction beyond the intersection of $G_{1}$ and $G_{2}$. In order to guarantee convergence to $G_{1}$ and $G_{2}$, the MPC optimizations must not only impose a terminal constraint but must also ensure that optimized trajectories do not differ too much from one time step to the next, in order to ultimately guarantee persistent feasibility of the optimization. This assurance is accomplished through the imposition of the rate-like constraints presented in this section. These rate-like constraints, characterized by $\delta_{v_{des}}^{max}$ and $\delta_{u}^{max}$, limit the variation of the $\mathbf{v_{des}}$ and $\mathbf{u}$ trajectories from one time instant to the next. We begin with the following proposition, which follows from examination of the time series representation of the $x_{1}^{aug}$ trajectory: \begin{prop} \label{OL_robustness_to_v_tilde_variation} (Robustness of outer loop MPC to variation in $\mathbf{\tilde{v}}$): Suppose that, given \begin{equation} \nonumber \mathbf{\tilde{v}}(k-1) = \left[\begin{array}{ccc} \mathbf{\tilde{v}}(k-1|k-1) & \ldots & \mathbf{\tilde{v}}(k+N-1|k-1)\end{array}\right], \end{equation} \noindent a trajectory \begin{equation} \nonumber \mathbf{v_{des}}(k) = \left[\begin{array}{ccc} \mathbf{v_{des}}(k|k) & \ldots & \mathbf{v_{des}}(k+N-1|k)\end{array}\right], \end{equation} \noindent is computed that yields $\mathbf{x_{1}^{aug}}(k+N|k) \in \lambda_{1}G_{1}$.
Then there exists $\epsilon_{\tilde{v}}^{max} > 0$ such that if $\|\mathbf{\tilde{v}}(k+i|k)-\mathbf{\tilde{v}}(k+i|k-1)\| \leq \epsilon_{\tilde{v}^{max}}, i=1 \ldots N-1$ and $\mathbf{v_{des}}(k+i|k+1)= \mathbf{v_{des}}(k+i|k), i=1 \ldots N-1$, then $\mathbf{x_{1}^{aug}}(k+N|k+1) \in G_{1}$. \end{prop} \begin{pf} At step $k$ the outer loop dynamics over the MPC horizon can be expressed as: \begin{eqnarray} \mathbf{x_{1}^{aug}}(k+i|k) &=& (A_{1}^{aug})^{i}x_{1}^{aug}(k) \\ \nonumber && +\sum_{j=0}^{i-1}(A_{1}^{aug})^{j}(B_{f}^{aug}\mathbf{v_{des}}(k+i-j-1|k)\\ \nonumber && +B_{1}^{aug}\mathbf{\tilde{v}}(k+i-j-1|k-1)) \end{eqnarray} \noindent for $i=1 \ldots N$. An analogous expression exists at step $k+1$. When $\mathbf{v_{des}}(k+i|k)= \mathbf{v_{des}}(k+i|k+1), i=1 \ldots N-1$, the difference between the predicted trajectories at steps $k$ and $k+1$ is then given by: \begin{eqnarray} \nonumber \mathbf{x_{1}^{aug}}(k+i|k+1)-\mathbf{x_{1}^{aug}}(k+i|k) &=& \sum_{j=0}^{i-1}(A_{1}^{aug})^{j}B_{1}^{aug}(\mathbf{\tilde{v}}(k+i-j-1|k) \\ \nonumber &&-\mathbf{\tilde{v}}(k+i-j-1|k-1)), \end{eqnarray} \noindent which, in the case that $\|\mathbf{\tilde{v}}(k+i|k)-\mathbf{\tilde{v}}(k+i|k-1)\| \leq \epsilon_{\tilde{v}^{max}}, i=1 \ldots N-1$, leads to the inequality: \begin{equation} \|\mathbf{x_{1}^{aug}}(k+i|k)-\mathbf{x_{1}^{aug}}(k+i|k+1)\| \leq \epsilon_{\tilde{v}}^{max}\sum_{j=0}^{i-1}\|(A_{1}^{aug})^{j}B_{1}^{aug}\|. \end{equation} \noindent Since $\lambda_{1} < 1$, it follows that there exists $\Delta_{1} > 0$ such that if $\mathbf{x_{1}^{aug}}(k+N|k) \in \lambda_{1}G_{1}$ and $\mathbf{x_{1}^{aug}}(k+N|k)-\mathbf{x_{1}^{aug}}(k+N|k+1) \leq \Delta_{1}$, then $\mathbf{x_{1}^{aug}}(k+N|k+1) \in G_{1}$. Thus, by taking: \begin{equation} \epsilon_{\tilde{v}}^{max} \leq \frac{\Delta_{1}}{\sum_{j=0}^{N-1}\|(A_{1}^{aug})^{j}B_{1}^{aug}\|} \end{equation} \noindent we guarantee that $\mathbf{x_{1}^{aug}}(k+N|k) \in G_{1}$. \begin{flushright} $\Box$ \end{flushright} \end{pf} \noindent The proof relies on a time series representation of the $x_{1}^{aug}$ trajectory, which demonstrates that the step-to-step variation in $\mathbf{x_{1}^{aug}}$ can be upper bounded by restricting the variation in $\mathbf{\tilde{v}}$. We arrive at a very similar conclusion regarding the robustness of the inner loop MPC to variation in $\mathbf{x_{f}}$: \begin{prop} \label{IL_robustness_to_x_f_variation} (Robustness of inner loop MPC to variation in $\mathbf{x_{f}}$): Suppose that, given \begin{equation} \nonumber \mathbf{x_{f}}(k) = \left[\begin{array}{ccc} \mathbf{x_{f}}(k|k) & \ldots & \mathbf{x_{f}}(k+N|k)\end{array}\right], \end{equation} \noindent a trajectory \begin{equation} \nonumber \mathbf{u}(k) = \left[\begin{array}{ccc} \mathbf{u}(k|k) & \ldots & \mathbf{u}(k+N-1|k)\end{array}\right], \end{equation} \noindent is computed that yields $\mathbf{\tilde{x}}(k+N|k) \in \lambda_{2}G_{2}$. Then there exists $\epsilon_{x_{f}}^{max} > 0$ such that if $\|\mathbf{x_{f}}(k+N|k+1)-\mathbf{x_{f}}(k+N|k)\| \leq \epsilon_{x_{f}}^{max}$ and $\mathbf{u}(k+i|k+1)= \mathbf{u}(k+i|k), i=1 \ldots N-1$, then $\mathbf{\tilde{x}}(k+N|k+1) \in G_{2}$. \end{prop} \begin{pf} Taking $\mathbf{u}(k+i|k+1)=\mathbf{u}(k+i|k), i=1 \ldots N-1$ yields $\mathbf{x_{2}}(k+i|k+1)=\mathbf{x_{2}}(k+i|k), i=1 \ldots N$. Thus, \begin{equation} \label{x2_traj} \mathbf{\tilde{x}}(k+i|k+1)-\mathbf{\tilde{x}}(k+i|k) = \mathbf{x_{f}}(k+i|k+1)-\mathbf{x_{f}}(k+i|k), i=1 \ldots N. 
\end{equation} \noindent Since $\lambda_{2}<1$, there exists $\Delta_{2}>0$ such that if $\mathbf{\tilde{x}}(k+N|k) \in \lambda_{2}G_{2}$ and $\|\mathbf{\tilde{x}}(k+N|k+1)-\mathbf{\tilde{x}}(k+N|k)\| \leq \Delta_{2}$, then \begin{equation} \label{x_tilde_inthere} \mathbf{\tilde{x}}(k+N|k+1) \in G_{2}. \end{equation} \noindent From (\ref{x2_traj}) and (\ref{x_tilde_inthere}) it follows that by taking $\epsilon_{x_{f}}^{max} = \Delta_{2}$, we guarantee that $\mathbf{\tilde{x}}(k+N|k+1) \in G_{2}$. \begin{flushright} $\Box$ \end{flushright} \end{pf} It is possible to convert the state constraints of Propositions \ref{OL_robustness_to_v_tilde_variation} and \ref{IL_robustness_to_x_f_variation} to input constraints (on $\mathbf{v_{des}}$ and $\mathbf{u}$), which are easily enforced and will always result in a feasible optimization problem (as opposed to state constraints, which are not in general guaranteed to result in a feasible constrained optimization). These input constraints are given in the following propositions: \begin{prop} \label{OL_robustness_to_u_variation} (Converting constraints on $\mathbf{\tilde{v}}$ to constraints on $\mathbf{u}$): There exists $\delta_{u}^{max} > 0$ such that if $\|\mathbf{u}(k+i|k)-\mathbf{u^{o}}(k+i|k-1)\|\leq \delta_{u}^{max}$, $i=0 \ldots N-2$, then $\|\mathbf{\tilde{v}}(k+i|k)-\mathbf{\tilde{v}}(k+i|k-1)\| \leq \epsilon_{\tilde{v}}^{max}$, $i=0 \ldots N-1$. \end{prop} \begin{pf} For this proof, it is convenient to express the inner loop dynamics as: \begin{equation} \tilde{v}(k+1) = C(A_{2}x_{2}(k)+B_{2}u(k)-x_{f}(k+1)), \end{equation} \noindent from which it follows, via a time series expansion, that: \begin{eqnarray} \nonumber \mathbf{\tilde{v}}(k+i|k)-\mathbf{\tilde{v}}(k+i|k-1) &=& C\sum_{j=0}^{i-1}(A_{2}^{j}B_{2}(\mathbf{u}(k+i-j-1|k)\\ && -\mathbf{u^{o}}(k+i-j-1|k-1))) \\ && \nonumber -C(\mathbf{x_{f}}(k+i|k)-\mathbf{x_{f}}(k+i|k-1)), \end{eqnarray} \noindent and \begin{equation} \|\mathbf{\tilde{v}}(k+i|k)-\mathbf{\tilde{v}}(k+i|k-1)\| \leq \delta_{u}^{max} \|C\| \sum_{j=0}^{i-1}\|A_{2}^{j}B_{2}\| + \|C\| \epsilon_{x_{f}}^{max}. \end{equation} \noindent It follows that if we take: \begin{equation} \delta_{u}^{max} \leq \frac{\epsilon_{\tilde{v}}^{max}-\|C\| \epsilon_{x_{f}}^{max}}{\|C\| \sum_{j=0}^{N-1}\|A_{2}^{j}B_{2}\|}, \end{equation} \noindent then we have $\|\mathbf{\tilde{v}}(k+i|k)-\mathbf{\tilde{v}}(k+i|k-1)\| \leq \epsilon_{\tilde{v}}^{max}, i = 0 \ldots N-1$. \begin{flushright} $\Box$ \end{flushright} \end{pf} \noindent The proof uses the time series expression of the inner loop dynamics to demonstrate that one can restrict the step-to-step variation in $\mathbf{u}$ and achieve the required bound on the step-to-step variation in $\mathbf{\tilde{v}}$. Constraints on $\mathbf{x_{f}}$ can similarly be converted to constraints on $\mathbf{v_{des}}$, as presented in the following proposition: \begin{prop} \label{IL_robustness_to_v_des_variation} (Converting constraints on $\mathbf{x_{f}}$ to constraints on $\mathbf{v_{des}}$): There exists $\delta_{v_{des}}^{max} > 0$ such that if $\|\mathbf{v_{des}}(k+i|k+1)-\mathbf{v_{des}}(k+i|k)\| \leq \delta_{v_{des}}^{max}$, $i=1 \ldots N-1$, then $\|\mathbf{x_{f}}(k+N|k+1)-\mathbf{x_{f}}(k+N|k)\| \leq \epsilon_{x_{f}}^{max}$.
\end{prop} \begin{pf} Recall that the reference model dynamics are given by: \begin{equation} x_{f}(k+1) = A_{f}x_{f}(k) + B_{f}v_{des}(k), \end{equation} \noindent from which it follows that: \begin{eqnarray} \nonumber \mathbf{x_{f}}(k+i|k+1)-\mathbf{x_{f}}(k+i|k) &=& \sum_{j=0}^{i-1}(A_{f}^{j}B_{f}(\mathbf{v_{des}}(k+i-j-1|k+1)\\ && -\mathbf{v_{des}}(k+i-j-1|k))), \end{eqnarray} \noindent and \begin{equation} \|\mathbf{x_{f}}(k+i|k+1)-\mathbf{x_{f}}(k+i|k)\| \leq \delta_{v_{des}}^{max} \sum_{j=0}^{i-1}\|A_{f}^{j}B_{f}\|. \end{equation} \noindent If we take: \begin{equation} \delta_{v_{des}}^{max} \leq \frac{\epsilon_{x_{f}^{max}}}{\sum_{j=0}^{N-1}\|A_{f}^{j}B_{f}\|}, \end{equation} \noindent then we have $\|\mathbf{x_{f}}(k+N|k+1)-\mathbf{x_{f}}(k+N|k)\| \leq \epsilon_{x_{f}}^{max}$. \begin{flushright} $\Box$ \end{flushright}\end{pf} \section{Persistent Feasibility, Convergence, and Stability}\label{feasibility_stability} In this section, we show how the constraints derived in Sections 4 and 5 result in persistent feasibility of the MPC optimization problem and asymptotic stability of the overall system, with a region of attraction that is identical to the set of states for which the initial optimization problem is feasible. \subsection{Persistent Feasibility} Because the rate-like constraints cannot be applied at step $k=0$ (since there is no step $k=-1$ against which to compare), we make the following initial feasibility Assumption for step $k=0$: \emph{Initial Feasibility Assumption}: There exists a set $X \in \mathbb{R}^{n_{1}+2n_{2}}$, such that if $\left[\begin{array}{cc}x_{1}^{aug}(0)^{T} & \tilde{x}(0)^{T}\end{array}\right]^{T} \in X$, then $\mathbf{v_{des}}(0)$ and $\mathbf{u}(0)$ can be chosen and are chosen such that $|\mathbf{\tilde{v}}(i|0)-\mathbf{\tilde{v}}(i|-1)| \leq \epsilon_{\tilde{v}}, i=0 \ldots N-1$ and the MPC optimization problem is feasible. Given this assumption, we now state the persistent feasibility result. \begin{prop} \label{feasibility} (Persistent feasibility): Suppose that the initial conditions satisfy $\left[\begin{array}{cc}x_{1}^{aug}(0)^{T} & \tilde{x}(0)^{T}\end{array}\right]^{T} \in X$. Then both the outer and inner loop MPC optimizations are feasible at every step, $k \geq 0$. \end{prop} \begin{pf} Feasibility at $k=0$ is guaranteed by the initial feasibility assumption. \emph{Outer loop MPC feasibility for $k \geq 1$}: By inner loop constraint (\ref{mpc_inner_constraints}), combined with Proposition \ref{OL_robustness_to_u_variation}, we guarantee that $\|\mathbf{\tilde{v}}(k+i|k)-\mathbf{\tilde{v}}(k+i|k-1)\| \leq \epsilon_{\tilde{v}}^{max}$ for $i= 0 \ldots N-2$. Thus, if we take $\mathbf{v_{des}}(k+i|k)=\mathbf{v_{des}}(k+i|k-1)$ for $i=0 \ldots N-2$, then we achieve: \begin{eqnarray} \mathbf{x_{1}^{aug}}(k+N-1|k) & \in & G_{1}, \\ \nonumber \|\mathbf{v_{des}}(k+i|k)-\mathbf{v_{des}}(k+i|k-1)\| = 0 &\leq& \delta_{v_{des}}^{max} \beta^{k}, \\ \nonumber && i=0 \ldots N-2. \end{eqnarray} By construction of $G_{1}$ and $G_{2}$, taking $\mathbf{v_{des}^{o}}(k+N-1|k)=-K_{1}\mathbf{x_{1}^{aug}}(k+N-1|k)$ results in $\mathbf{x_{1}^{aug}}(k+N|k) \in \lambda G_{1}$. \emph{Inner loop MPC feasibility for $k \geq 1$}: By outer loop constraint (\ref{mpc_outer_constraints}), combined with Proposition \ref{IL_robustness_to_v_des_variation}, we guarantee that $\|\mathbf{x_{f}}(k+i|k)-\mathbf{x_{f}}(k+i|k-1)\| \leq \epsilon_{x_{f}}^{max}$ for $i=0 \ldots N-2$. 
Thus, if we take $\mathbf{u^{o}}(k+i|k)=\mathbf{u^{o}}(k+i|k-1)$ for $i=0 \ldots N-2$, then we achieve: \begin{eqnarray} \nonumber \mathbf{\tilde{x}}(k+N-1|k) & \in & G_{2}, \\ \mathbf{u}(k+i|k) & \in & U, \\ \nonumber \|\mathbf{u}(k+i|k)-\mathbf{u^{o}}(k+i|k-1)\| = 0 &\leq& \delta_{u}^{max} \beta^{k}, \\ \nonumber && i=0 \ldots N-2. \end{eqnarray} Given that $\mathbf{x_{1}^{aug}}(k+N-1|k) \in G_{1}$, applying $\mathbf{u}(k+N-1|k) = K_{21}\mathbf{v_{des}}(k+N-1|k) - K_{22}\mathbf{x_{f}}(k+N-1|k) - K_{22}\mathbf{\tilde{v}}(k+N-1|k)$ yields $\mathbf{\tilde{x}}(k+N|k) \in \lambda_{2}G_{2}$. \begin{flushright} $\Box$ \end{flushright} \end{pf} \noindent The proof follows from the rate-like constraints imposed on $\mathbf{v_{des}}(k)$ and $\mathbf{u}(k)$. Specifically, if the variations in $\mathbf{v_{des}}$ and $\mathbf{u}$ are sufficiently small from step $k$ to $k+1$, then the optimization problem remains feasible at step $k+1$. \subsection{Convergence} Having shown that the optimization problems are persistently feasible, the next step is to show that the control laws do in fact result in finite-time convergence to $G_{1}$ and $G_{2}$. This is given in the following proposition: \begin{prop} \label{convergence} (Convergence to $G_{1}$, $G_{2}$): Suppose that the initial conditions satisfy $\left[\begin{array}{cc}x_{1}^{aug}(0)^{T} & \tilde{x}(0)^{T}\end{array}\right] \in X$. Then there exists a scalar integer $N^{*} > 0$ such that, after applying the MPC algorithm for $N^{*}$ steps, we have $x_{1}^{aug}(N^{*}) \in G_{1}$ and $\tilde{x}(N^{*}) \in G_{2}$. \end{prop} \begin{pf} By the inner and outer loop rate-like constraints, we have: \begin{eqnarray} \|\mathbf{u^{o}}(k+i|k)-u(k+i)\| &\leq& i \delta_{u}^{max} \beta^{k}, \\ \nonumber \|\mathbf{v_{des}^{o}}(k+i|k)-v_{des}(k+i)\| &\leq& i \delta_{v_{des}}^{max} \beta^{k}, \end{eqnarray} \noindent For the outer loop, it follows that: \begin{eqnarray} \nonumber \|\mathbf{x_{1}^{aug}}(k+N|k)-x_{1}^{aug}(k+N)\| &\leq& N(\sum_{j=0}^{i-1} \|(A_{1}^{aug})^{j}B_{1}^{aug}\| \epsilon_{\tilde{v}}^{max} \\ \nonumber && + \sum_{j=0}^{i-1} \|A_{1}^{j}B_{2}\| \epsilon_{x_{f}}^{max})\beta^{k}, \end{eqnarray} \noindent which, after collecting constant terms into one lumped constant, $Q$, can be rewritten compactly as: \begin{equation}\label{ol_convergence_1} \|\mathbf{x_{1}^{aug}}(k+N|k)-x_{1}^{aug}(k+N)\| \leq QN \beta^{k}. \end{equation} Because $\lambda_{1}G_{1} \in G_{1}$, there exists a positive scalar $\Delta x_{1}^{aug}$ such that for any two vectors $x_{1a}^{aug} \in \lambda_{1}G_{1}$ and $x_{1b}^{aug} \in G_{1}$, $\|x_{1a}^{aug}-x_{1b}^{aug}\|<\Delta x_{1}^{aug}$. To guarantee that $\mathbf{x_{1}^{aug}}(k+N|k) \in \lambda_{1}G_{1} \Rightarrow x_{1}^{aug}(k+N) \in G_{1}$, it suffices to ensure that: \begin{equation}\label{ol_convergence_2} \|\mathbf{x_{1}^{aug}}(k+N|k)-x_{1}^{aug}(k+N)\| < \Delta x_{1}^{aug}. \end{equation} It follows through manipulation of (\ref{ol_convergence_1}), using (\ref{ol_convergence_2}), that whenever \begin{equation} k > \frac{\ln (\frac{\Delta x_{1}^{aug}}{QN})}{\ln \beta} =: N_{1}^{*}, \end{equation} \noindent $\mathbf{x_{1}^{aug}}(k+N|k) \in \lambda_{1}G_{1} \Rightarrow x_{1}^{aug}(k+N) \in G_{1}$. Through the same process, one can show that there exists $N_{2}^{*}$ for which $\tilde{x} \in G_{2}$. 
Specifically, for the inner loop: \begin{equation} \|\mathbf{\tilde{x}}(k+N|k)-\tilde{x}(k+N)\| \leq N (\sum_{j=0}^{i-1} \|A_{2}^{j}B_{2}\|\delta_{u}^{max} + \epsilon_{x_{f}}^{max})\beta^{k}, \end{equation} \noindent which, after collecting constant terms into one lumped constant $P$, can be rewritten compactly as: \begin{equation}\label{il_convergence_1} \|\mathbf{\tilde{x}}(k+N|k)-\tilde{x}(k+N)\| \leq PN\beta^{k}. \end{equation} \noindent Because $\lambda_{2}G_{2} \subset G_{2}$, there exists a positive scalar $\Delta \tilde{x}$ such that for any two vectors $\tilde{x}_{a} \in \lambda_{2}G_{2}$ and $\tilde{x}_{b} \in G_{2}$, $\|\tilde{x}_{a}-\tilde{x}_{b}\|<\Delta \tilde{x}$. To guarantee that $\mathbf{\tilde{x}}(k+N|k) \in \lambda_{2}G_{2} \Rightarrow \tilde{x}(k+N) \in G_{2}$, it suffices to ensure that: \begin{equation}\label{il_convergence_2} \|\mathbf{\tilde{x}}(k+N|k)-\tilde{x}(k+N)\| < \Delta \tilde{x}. \end{equation} It follows through manipulation of (\ref{il_convergence_1}), using (\ref{il_convergence_2}), that whenever \begin{equation} k > \frac{\ln (\frac{\Delta \tilde{x}}{PN})}{\ln \beta} =: N_{2}^{*}, \end{equation} \noindent $\mathbf{\tilde{x}}(k+N|k) \in \lambda_{2}G_{2} \Rightarrow \tilde{x}(k+N) \in G_{2}$. Taking $N^{*} \triangleq \max \{N_{1}^{*},N_{2}^{*}\}$ completes the proof. \begin{flushright} $\Box$ \end{flushright} \end{pf} \noindent The proof relies on the fact that the variation in $\mathbf{v_{des}^{o}}$ and $\mathbf{u^{o}}$ is not only limited, but is also required to decay over time (through the use of $\beta < 1$ in (\ref{mpc_outer_constraints}) and (\ref{mpc_inner_constraints})). \subsection{Overall Stability} We now state our main result, namely asymptotic stability of the origin of the overall system, with region of attraction $X$: \begin{thm} \label{stability} (Asymptotic stability): Under the MPC controller, specified by (\ref{mpc_outer_law})-(\ref{mpc_inner_cost}), the origin, $x_{1}^{aug} = 0$, $\tilde{x} = 0$, is asymptotically stable with region of attraction $X$. \end{thm} \begin{pf} Propositions \ref{terminal_control} and \ref{terminal_control_inexact} establish the local asymptotic stability of the origin, $x_{1}^{aug}=0, \tilde{x}=0$, under the terminal control laws, $v_{des}(k)=-K_{1}x_{1}^{aug}(k)$ and $u(k)=u_{t}(v_{des}(k),\tilde{x}(k),x_{f}(k))$. Because these terminal control laws are active whenever $x_{1}^{aug} \in G_{1}$ and $\tilde{x} \in G_{2}$, and because $x_{1}^{aug}(k) \in G_{1}, \tilde{x}(k) \in G_{2} \Rightarrow x_{1}^{aug}(k+1) \in G_{1}, \tilde{x}(k+1) \in G_{2}$, it follows that the origin of the overall system, $x_{1}^{aug} = 0$, $\tilde{x} = 0$, is (locally) asymptotically stable with region of attraction $\{(x_{1}^{aug},\tilde{x}) : x_{1}^{aug} \in G_{1}, \tilde{x} \in G_{2}\}$. From Proposition \ref{convergence}, we know that, under the proposed control law, if $\left[\begin{array}{cc} x_{1}^{aug}(0)^{T} & \tilde{x}(0)^{T} \end{array}\right]^{T} \in X$, then there exists $N^{*}$ for which $x_{1}^{aug}(N^{*}) \in G_{1}$ and $\tilde{x}(N^{*}) \in G_{2}$. It follows that $x_{1}^{aug}=0$, $\tilde{x}=0$ has region of attraction $X$. \begin{flushright} $\Box$ \end{flushright} \end{pf} \noindent The proof contains two parts.
First, local asymptotic stability with region of attraction $\{(x_{1}^{aug},\tilde{x}): x_{1}^{aug} \in G_{1}, \tilde{x} \in G_{2}\}$ is shown by demonstrating that both the inner and outer loop systems are input-to-state stable (ISS) and the small gain condition is satisfied within this (invariant) region of attraction. Through the use of MPC, the region of attraction is enlarged to $X$. \section{Conclusions and Future Work} In this paper, we reviewed a novel alternative approach to hierarchical MPC that relies on an inner loop reference model rather than a multi-rate approach for achieving overall system stability. This new approach broadens the class of systems for which overall stability of a hierarchical MPC framework can be guaranteed by allowing the inner closed loop to track the output of a prescribed reference model rather than requiring the inner loop to reach a steady state at each outer loop step. This paper presented proofs that were omitted in other works by the authors due to space constraints. \end{document}
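The set sizing in the propositions above reduces to simple arithmetic once the Lyapunov matrices and coupling constants are known. The sketch below illustrates one possible way to check the small-gain condition and to solve the coupled equations for $V_{1}^{*}$ and $V_{2}^{*}$ by fixed-point iteration; all numerical values ($P$, $Q$, $K_{1}$, $C$, the $\alpha_{i}$ and $\bar{\gamma}$ constants) are hypothetical placeholders rather than quantities taken from the text, and the iteration is only one convenient solution method, not one prescribed by the authors.
\begin{verbatim}
import numpy as np

# Hypothetical data standing in for the quantities of the propositions.
Q = np.eye(2)                  # outer-loop Lyapunov matrix, V1 = x' Q x
P = np.eye(2)                  # inner-loop Lyapunov matrix, V2 = x' P x
alpha1, alpha2 = 0.3, 0.5      # free-response decay rates of the two loops
g1, g21, g22 = 0.1, 0.05, 0.05 # coupling constants gamma_1, gamma_21, gamma_22
K1 = 0.5 * np.ones((1, 2))     # terminal outer-loop gain
C = np.eye(2)                  # inner-loop output matrix

def lmin(M):
    # smallest eigenvalue of a symmetric matrix
    return np.linalg.eigvalsh(M).min()

nC2 = np.linalg.norm(C, 2) ** 2
nK2 = np.linalg.norm(K1, 2) ** 2

# Small-gain condition:
# alpha1*alpha2*lmin(P)*lmin(Q) >= g1*||C||^2*(g21*||K1||^2 + g22)
print("small-gain condition:",
      alpha1 * alpha2 * lmin(P) * lmin(Q) >= g1 * nC2 * (g21 * nK2 + g22))

# Coupled equations for V1*, V2*, solved by fixed-point iteration.
eps1, eps2 = 0.9 * alpha1, 0.9 * alpha2  # any eps_i < alpha_i is admissible
x1max, xtmax = 1.0, 1.0                  # radii of the state-constraint sets
V1, V2 = x1max ** 2 * lmin(Q), xtmax ** 2 * lmin(P)
for _ in range(100):
    V1 = min(x1max ** 2 * lmin(Q), eps2 * V2 * lmin(Q) / (nK2 * g21 + g22))
    V2 = min(xtmax ** 2 * lmin(P), eps1 * V1 * lmin(P) / (nC2 * g1))
print("V1*, V2* =", V1, V2)
\end{verbatim}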
\begin{document} \pagestyle{empty} \pagenumbering{roman} \begin{titlepage} \begin{center} \vspace*{1.0cm} \Huge {\bf Grothendieck's Classification of Holomorphic Bundles over the Riemann Sphere} \vspace*{1.0cm} \Large Andean Medjedovic \\ \normalsize \vspace*{1.0cm} \begin{center}\textbf{Abstract}\end{center} In this paper we look at Grothendieck's work on classifying holomorphic bundles over ${\mathbb{P}^1(\mathbb{C})}$. The paper is divided into $4$ parts. In the first and second parts we build up the necessary background to talk about vector bundles, sheaves, cohomology, etc. The main result of the $3^{rd}$ chapter is the classification of holomorphic vector bundles over ${\mathbb{P}^1(\mathbb{C})}$. In the $4^{th}$ chapter we introduce principal $G$-bundles and some of the theory behind them and finish off by proving Grothendieck's theorem in full generality. The goal is a (mostly) self-contained proof of Grothendieck's result accessible to someone who has taken differential geometry. \end{center} \end{titlepage} \pagestyle{plain} \setcounter{page}{2} \cleardoublepage \renewcommand\contentsname{Table of Contents} \tableofcontents \cleardoublepage \phantomsection \pagenumbering{arabic} \chapter{Complex Manifolds and Vector Bundles} \section{Complex Manifolds} \begin{definition}[Complex Manifold] We say a manifold $M$ is a complex manifold if each of the charts, $\phi_\alpha$, maps from an open subset $U_\alpha$ to an open subset $V_\alpha \subset {\mathbb C}^n$ and the transition maps $\phi_{\alpha \beta} = \phi_{\beta}\circ\phi^{-1}_{\alpha}$ are biholomorphisms (bijective holomorphisms with a holomorphic inverse) as maps from $\phi_{\alpha}(U_{\alpha} \cap U_\beta)$ to $\phi_{\beta}(U_{\alpha} \cap U_\beta)$. \end{definition} We say that the (complex) dimension of the manifold over ${\mathbb C}$ is $n$. A Riemann surface is a complex manifold of dimension $1$. \begin{definition}[Projective Spaces] We define the $n$ dimensional projective space over $\mathbb{C}$, $\mathbb{P}({\mathbb C})^n$, as the set of equivalence classes of non-zero vectors $v \in {\mathbb C}^{n+1}$ under the equivalence $v \sim \lambda v$ for $\lambda \in {\mathbb C}^*$. \end{definition} \begin{prop} The $n$ dimensional projective space is indeed an $n$-dimensional complex manifold. \end{prop} \begin{proof} Let $[z_1, \cdots, z_{n+1}]$ ($z_i \in {\mathbb C}$, not all zero) be the equivalence class corresponding to $(z_1, \cdots, z_{n+1}) \in {\mathbb C}^{n+1}$. Let $U_i$ be the set of equivalence classes with $z_i$ non-zero. Then the $U_i$ cover $\mathbb{P}({\mathbb C})^n$. Define $\phi_i : U_i \to {\mathbb C}^n$ by $[z_1, \cdots, z_{n+1}] \mapsto (\frac{z_1}{z_i},\cdots,\frac{z_{i-1}}{z_i},\frac{z_{i+1}}{z_i},\cdots, \frac{z_{n+1}}{z_i})$. One immediately sees that $\phi_i$ is well defined and if $z_i,z_j \ne 0$ then $\phi_j \circ \phi_i^{-1}:\phi_{i}(U_i \cap U_j) \to \phi_{j}(U_{i} \cap U_{j})$ is a biholomorphism. \end{proof} \begin{theorem} A holomorphic function, $f$, on a compact Riemann surface is constant. \end{theorem} \begin{proof} We have that $|f(p)|$ is maximal for some $p$. Take a chart around $p$ to a neighbourhood of $0$. Then the composition of $f$ with the chart is maximal at $0$, contradicting the maximum modulus principle. \end{proof} The Riemann sphere is defined to be the Riemann surface ${\mathbb{P}^1(\mathbb{C})}$. \section{Vector Bundles} Let $M$ be a manifold. We say a (real) vector bundle $V$ over $M$ is a pair of a manifold and a projection map $(V, \pi)$ with $\pi: V \to M$ so that for every $p \in M$, $\pi^{-1}(p)$ is an ${\mathbb R}$-vector space and there is some open $U$ around $p$ and a homeomorphism $\varphi_U: \pi^{-1}(U) \to U \times {\mathbb R}^k$. We say that the rank of the vector bundle $V$ (over ${\mathbb R}$) is $k$. We can extend this definition to holomorphic vector bundles over a complex manifold in the following way: \begin{definition} Let $M$ be a complex manifold. We say a pair $(E,\pi)$ over $M$ with rank $k$ is a holomorphic vector bundle if for every $p \in M$, $\pi^{-1}(p)$ is a ${\mathbb C}$-vector space and there is an open subset $U$ of $M$ with a biholomorphism $\varphi_U: \pi^{-1}(U) \to U \times {\mathbb C}^k$. Equivalently, we can require the transition maps to be ${\mathbb C}$-linear isomorphisms: $${\rm proj}(\varphi_U \circ \varphi_V^{-1})|_{(U \cap V)\times {\mathbb C}^k} : {\mathbb C}^k \to {\mathbb C}^k$$ \end{definition} Furthermore, we say a vector bundle is a line bundle if it has rank $1$ and we say a (complex) vector bundle $E$ is trivial if it is isomorphic to ${\mathbb C}^k \times M$. Note that, locally, every bundle is trivial. \begin{definition} A section of a vector bundle $V$ is a continuous map $\sigma: M \to V$ so that $\pi\circ\sigma = 1_M$. The vector space of all sections on $V$ over $M$ is denoted by $\Gamma(M,V)$. If the vector bundle $E$ is holomorphic and the map $\sigma$ is holomorphic we say it is a holomorphic section and denote the corresponding vector space $H^{0}(M,\mathcal{O}(E))$. \end{definition} The reason we use this notation will become clear later on. \begin{prop} Let $E_1$ and $E_2$ be $2$ holomorphic vector bundles over a complex manifold $M$ of ranks $k$ and $l$. Then we can define the vector bundles $E_1^*$, $\det(E_1)$, $E_1 \oplus E_2$, and $E_1 \otimes E_2$. \end{prop} \begin{proof} Let $U,V$ be sufficiently small open sets around $p$. Let $\phi_1$, $\phi_2$ be $2$ corresponding charts for $E_1$ and similarly $\varphi_1$, $\varphi_2$ for $E_2$. Let $T_{12}$ and $\mathcal{T}_{12}$ be the linear transition maps for $E_1$ and $E_2$. Then we define the transition charts for $E_1^*$, $\det(E_1)$, $E_1 \oplus E_2$, and $E_1 \otimes E_2$: \begin{itemize} \item $T_{12}^*$ \item $\det(T_{12})$ \item $T_{12}\oplus \mathcal{T}_{12}$ \item $T_{12} \otimes \mathcal{T}_{12}$ \end{itemize} These are then invertible and linear and the vector bundles have rank $k, 1, k+l$, and $kl$ respectively. \end{proof} \begin{definition} Let $E_1$ and $E_2$ be $2$ holomorphic vector bundles over a complex manifold $M$. Suppose we have an invertible map $f$ so that the following diagram commutes and the restriction, $f|_{\pi_1^{-1}(p)}: \pi_1^{-1}(p) \to \pi_2^{-1}(p)$, is linear. 
\tikzset{every picture/.style={line width=0.75pt}} \begin{center} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (277.5,124) -- (349.5,123) ; \draw [shift={(351.5,123)}, rotate = 539.23] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (273.5,134) -- (303.35,176.37) ; \draw [shift={(304.5,178)}, rotate = 234.82999999999998] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (358.5,130) -- (326.65,175.36) ; \draw [shift={(325.5,177)}, rotate = 305.07] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (314,107) node {$f$}; \draw (274,159) node {$\pi _{1}$}; \draw (352,159) node {$\pi _{2}$}; \draw (266,124) node {$E_{1}$}; \draw (364,124) node {$E_{2}$}; \draw (314,187) node {$M$}; \end{tikzpicture} \end{center} We say that $E_1$ and $E_2$ are isomorphic. Similarly, if there is an injection from $E_1$ to $E_2$ then we say $E_1$ is a subbundle of $E_2$. \end{definition} \begin{theorem} Let $M$ be a complex manifold and consider a short exact sequence of vector bundles over $M$: $$0 \to E_1 \xrightarrow{p} E \xrightarrow{q} E_2 \to 0$$ This sequence splits, that is, $E \cong E_1 \oplus E_2$. \end{theorem} \begin{proof} We first construct an inner product over $E$. Let $U_\alpha$ cover $M$ with $E$ trivial over each $U_\alpha$. Let $\rho_\alpha$ be a corresponding partition of unity. We can choose an inner product $\langle, \rangle _\alpha$ on each $E|_{U_\alpha}$. Extend each inner product to be $0$ outside $E|_{U_\alpha}$. Now consider the inner product given by: $$\langle, \rangle = \sum_{\alpha}\rho_\alpha\langle, \rangle _\alpha$$ This is defined on all of $E$. Under this inner product we can write $E = (p(E_1))^{\perp} \oplus (p(E_1))$. Note that $p(E_1) \cong E_1$ by injectivity. We also have that the restriction $q|_{(\ker q)^{\perp}}: (\ker q)^{\perp} \to E_2$ is surjective (by exactness) and injective, as $q(x) = q(y)$ means $x-y \in \ker(q)$. But $E_2 \cong (\ker q)^{\perp} = (p(E_1))^{\perp}$ and so the sequence splits. \end{proof} \begin{lemma} Let $L$ be a line bundle on a complex manifold $M$. Then $L$ is trivial if and only if there is some nowhere $0$ section on $L$. \end{lemma} \begin{proof} Suppose $L$ is the trivial bundle. Let $x \in M$. Then the section sending $x \mapsto (x,1)$ is nowhere $0$. Conversely, suppose we have a nowhere $0$ section $\sigma$, sending $x \mapsto (x,\sigma(x))$. Then consider the isomorphism $f: M \times {\mathbb C} \to L$ by $(x,c) \mapsto (x,c\sigma(x))$. \end{proof} \begin{lemma} Let $S$ be a Riemann surface and let $E$ be a vector bundle over $S$. Then $E \cong E' \oplus I_{{\rm rank} E - 1}$ for some line bundle $E'$. \end{lemma} \begin{proof} We note that if the rank of $E$ is at least $2$ then there is a section $\sigma$ that is non-zero everywhere by perturbing a section (not identically $0$) locally around its zeroes. Take the line bundle parametrized by $\sigma$, $L \cong I_1$. We then obtain a short exact sequence $$0 \to I_1 \to E \to E_1 \to 0$$ for some $E_1$.
We then do the same for $E_1$, inductively, so $E \cong E' \oplus I_{{\rm rank} E - 1}$. \end{proof} Once we have the notion of a degree of a line bundle we will be able to show that $2$ line bundles are isomorphic if and only if their degree is the same. This, combined with the above, gives us that $E_1 \cong E_2$ if and only if they have the same rank and same degree. \chapter{Sheaves And Cohomology} \section{Sheaves} \begin{definition} Let $X$ be a topological space. For every open $U \subset X$ we associate an abelian group $\mathcal{F}(U)$ so that: \begin{itemize} \item $\mathcal{F}(\emptyset) = 0$ \item If $V \subseteq U$ there is a group morphism $\rho_{U,V}:\mathcal{F}(U) \to \mathcal{F}(V)$ \item $\rho_{U,U} = 1$ \item If $W \subseteq V \subseteq U$ then $\rho_{U,W} = \rho_{V,W} \circ \rho_{U,V}$ \end{itemize} We say that $\mathcal{F}$ is a presheaf. We can write $\rho_{U,V}(f)$ as $f|_V^{U}$. \end{definition} \begin{definition} Let $\mathcal{F}$ be a presheaf on $X$. Let $U$ be open with open cover $\{U_i\}$. $\mathcal{F}$ is a sheaf if we have: \begin{itemize} \item If $s\in \mathcal{F}(U)$ with $\rho_{U,U_i}(s) = 0$ for all $i$, then $s= 0$ \item If $s_i \in \mathcal{F}(U_i)$ with (for any $i,j$): $$\rho_{U_i, U_i\cap U_j}(s_i) = \rho_{U_j, U_i\cap U_j}(s_j)$$ then there is $s \in \mathcal{F}(U)$ so that $\rho_{U,U_i}(s) = s_i$ for all $i$ \end{itemize} \end{definition} \begin{prop} Let $k \in \mathbb{Z}$. Let $E$ be a vector bundle over a complex manifold $M$. Let $X$ be a Riemann surface with $x \in X$ and $L$ be a line bundle over $X$. Then the following are sheaves: \begin{itemize} \item $\mathcal{O}(E)$, where to each $U \subset M$ we associate the abelian group (under pointwise addition) $H^{0}(U,\mathcal{O}(E))$ with the maps $\rho_{U,V}$ being restrictions. \item $\mathcal{O}_M$, where to each $U \subset M$ we associate the abelian group of holomorphic functions $f: U \to {\mathbb C}$ \item $\mathcal{O}^*_M$, where to each $U \subset M$ we associate the abelian group of holomorphic functions $f: U \to {\mathbb C}^*$ \item $\mathcal{O}_X(-kx)$, where to each $U \subset X$ we associate the abelian group of holomorphic functions $f: U \to {\mathbb C}$ that vanish at $x$ with multiplicity at least $k$. \item $\mathcal{O}_X(kx)$, where to each $U \subset X$ we associate the abelian group of meromorphic functions $f: U \to {\mathbb C}$ that have a pole of order at most $k$ at $x$. \item $L(-x)$, where $U$ is associated to the holomorphic sections of $L|_{U}$, vanishing at $x$. \item ${\mathbb C}_x$, the skyscraper sheaf, with $U$ associated to ${\mathbb C}$ if $x\in U$ and $0$ otherwise. \end{itemize} \end{prop} \section{\v{C}ech Cohomology} Let $\mathfrak{U} =\{U_\alpha\}$ cover a complex manifold $X$ (for $\alpha$ in some index set $I$) and let $\mathcal{F}$ be a sheaf on $X$. 
\begin{definition} Let $$C^i = \prod_{\alpha_1, \cdots, \alpha_i \in I} \mathcal{F}(\cap_{k = 1}^{i}U_{\alpha_k})$$ and let $d_i : C^i \to C^{i+1}$ be given (taking the product of the maps over the indices) by: $$\big(d_i f\big)_{\{\alpha_1, \cdots, \alpha_{i+1}\}} = \sum_{k=1}^{i+1}(-1)^k f_{\{\alpha_1, \cdots, \widehat{\alpha_k}, \cdots, \alpha_{i+1}\}}\big|_{\cap_{j}U_{\alpha_j}}^{\cap_{j \ne k}U_{\alpha_j}} $$ \end{definition} This gives rise to the \v{C}ech complex: $$C^0 \xrightarrow{d_0}C^1 \xrightarrow{d_1} \cdots $$ \begin{exer} One can see this forms a complex by verifying that $d_{i+1} \circ d_i = 0$. \end{exer} \begin{definition} We define the $p^{th}$ \v{C}ech cohomology group by taking the quotient group: $$H^p(X, \mathfrak{U}, \mathcal{F}) = \frac{\ker(d_p)}{\im (d_{p-1})}$$ \end{definition} One may wonder to what extent the cohomology depends on the open cover. It turns out that by a result due to Leray, beyond the scope of this paper, we can choose sufficiently refined coverings so the cohomology doesn't change. \begin{note} Our use of the notation $H^{0}(X, \mathcal{O}(E))$ before is justified as $\ker d_0 = H^{0}(X, \mathcal{O}(E))$. \end{note} \begin{lemma}[Induced Long Exact Sequences] Suppose we have a short exact sequence of sheaves: $$0 \to \mathcal{E} \to \mathcal{F} \to \mathcal{G} \to 0$$ That is, for any open set $U$, the corresponding groups of sections over $U$ form a short exact sequence. Then there is an induced long exact sequence of cohomology groups: $$0 \to H^0(X, \mathcal{E}) \to H^0(X, \mathcal{F}) \to H^0(X, \mathcal{G}) \to H^1(X,\mathcal{E}) \to \cdots$$ \end{lemma} \begin{proof} The proof is an easy but tedious application of the snake lemma twice. We will not, however, prove it here. \end{proof} \chapter{Line Bundles over ${\mathbb{P}^1(\mathbb{C})}$ and Grothendieck's Theorem for Vector Bundles} \section{The degree map} For the rest of the paper we can fix $p \in X$. We now turn our attention to line bundles over ${\mathbb{P}^1(\mathbb{C})}$. We proved earlier in the paper that any vector bundle $E$ over $X$ can be written as $L \oplus I_m$ where $L$ is a line bundle. When are $2$ line bundles isomorphic? We leave the following proposition as an exercise. \begin{prop} Let $E,F,G,H$ be vector bundles. If $$0\to E \to F\to G\to 0$$ is short exact then: $$0\to E\otimes H \to F\otimes H\to G\otimes H\to 0$$ is short exact as well. Furthermore, if $H^1(X,G^* \otimes E) = 0$, then $F \cong E\oplus G$. \end{prop} \begin{definition} Consider the short exact sequence: $$0 \to \mathcal{O}_X({\mathbb Z}) \to \mathcal{O}_X \xrightarrow{f \mapsto e^{2\pi i f}} \mathcal{O}_X^* \to 0$$ and the induced long exact sequence: $$\cdots \to H^1(X, \mathcal{O}_X) \to H^1(X, \mathcal{O}_X^*) \xrightarrow{\deg} H^2(X, \mathcal{O}_X({\mathbb Z})) \to H^2(X, \mathcal{O}_X) \to \cdots$$ The degree map is then defined to be $\deg$. \end{definition} \begin{lemma} The degree map is a bijection. \end{lemma} \begin{proof} By Leray $H^1(X, \mathcal{O}_X) \cong 0$ and by a theorem of Grothendieck's $H^2(X, \mathcal{O}_X) \cong 0$. By exactness, the degree map is a bijection. Furthermore, by Poincaré duality, $H^2(X, \mathcal{O}_X({\mathbb Z})) \cong H_0(X, \mathcal{O}_X({\mathbb Z}))\cong {\mathbb Z}$. \end{proof} \section{The Classification of Vector Bundles} \begin{prop} $H^1(X,\mathcal{O}_X^*)$ is the set of line bundles on $X$ (up to isomorphism). \end{prop} \begin{proof} Let $L$ be a line bundle. Choose a cover fine enough so that $L$ is trivial on each intersection. 
Let $\phi_j^{-1} \circ \phi_i$ be the transitions; then $\ker d_1$ is precisely the set of $\phi_j^{-1} \circ \phi_i$, as $(\phi_j^{-1} \circ \phi_i)^{-1} = \phi_i^{-1} \circ \phi_j$. Let $\varphi_i$ be another trivialization of $L$. Then $\varphi_i^{-1} \circ \varphi_j = f^{-1}(\phi_i^{-1} \circ \phi_j)g$ where $f,g$ are biholomorphic maps on $U_i \cap U_j$. Note that $\im(d_0)$ is the set of maps that can be written as $fg^{-1}$ for some $f:U_i \to {\mathbb C}^*$, $g:U_j \to {\mathbb C}^*$. So up to $\im(d_0)$, line bundles are unique elements of $\ker d_1$. The conclusion follows. \end{proof} \begin{prop} $H^1(X, {\mathbb C}_p) = 0$. \end{prop} \begin{proof} We want to show $\ker(d_1) = 0$ so it suffices to check that $d_0(C_0) = 0$. Take any refinement with only one open set, $U_1$, containing $p$. Let $c \in\mathcal{F}(U_1)$, then $d_0(c) = 0$. \end{proof} We also need the following $2$ lemmas: \begin{lemma} $\dim H^0(X, \mathcal{O}(m)) = m+1$ if $m \geq 0$. \end{lemma} \begin{proof} There is some chart $\phi_0$ on $U_1$ around $0$ and some chart $\phi_1$ on $U_2$ with $U_1 \cap U_2 \ne \emptyset$ so that the transition function is $\frac{1}{z^m}$. It follows that the image of any section under the charts must have Laurent expansion: $$\frac{1}{z^m}\sum_{k = 0}^{m}\alpha_k z^k$$ \end{proof} Note that if $m < 0$, we only have the $0$ section. \begin{lemma} For any vector bundle $E$ of rank $k$ over $X$ we can find some ${\mathcal O}(n)$ so that $E \otimes {\mathcal O}(n)$ has a holomorphic section that is not everywhere $0$. \end{lemma} \begin{proof} Let $n > \dim H^1(X, E)$. We have a section $\sigma$ of ${\mathcal O}(n)$ that vanishes only at $p$. This yields the following short exact sequence: $$0 \to E \to E\otimes{\mathcal O}(n) \xrightarrow{\sigma(p)^n} {\mathbb C}_p^{kn} \to 0$$ This induces a long exact sequence with the sum of alternating dimensions being $0$, so we now have \begin{align*} \dim H^0(X, E \otimes {\mathcal O}(n)) &=\dim H^1(X, E \otimes {\mathcal O}(n)) + \dim H^0({\mathbb C}_p^{nk}) \\ & + \dim H^0(X,E) - \dim H^1(X,E)\\ &\geq nk - \dim H^1(X,E). \end{align*} And so $\dim H^0(X, E \otimes {\mathcal O}(n)) \geq 1$. \end{proof} Note that this implies we can choose $n$ so that $\dim H^0(X, E \otimes {\mathcal O}(n-1)) = 0$ but $\dim H^0(X, E \otimes {\mathcal O}(n)) > 0$ (as $\dim H^0(X, E \otimes {\mathcal O}(n-1)) < \dim H^0(X, E \otimes {\mathcal O}(n))$). We are finally ready to prove Grothendieck's classification of vector bundles. \begin{theorem}[Grothendieck] Let $E$ be a rank $k$ vector bundle over $X$. Then: $$E \cong \bigoplus_{i = 1}^{k}{\mathcal O}(d_i)$$ \end{theorem} \begin{proof} Let ${\mathcal O}(n)$ be as above for $p \in X$, arbitrary. We can then take a holomorphic section $\sigma$ that never vanishes (if it did vanish at $p$, then $\sigma\sigma_p^{-1}$ would be a non-zero holomorphic section of $E\otimes {\mathcal O}(n-1)$, contradicting our choice of $n$) and thus find a trivial subbundle $L$ of $E\otimes {\mathcal O}(n)$. We let $Q$ be the quotient bundle $(E\otimes {\mathcal O}(n))/L$ and suppose by induction that it decomposes as $Q = \bigoplus_{i=1}^{k-1}{\mathcal O}(b_i)$.\\ Note by Riemann-Roch, $\dim H^1(X, {\mathcal O}(-1)) = 0$.
So we have the following $2$ exact sequences (after tensoring with ${\mathcal O}(-1)$): $$0\to {\mathcal O}(-1) \to {\mathcal O}(E\otimes {\mathcal O}(n-1)) \to {\mathcal O}(Q(-1)) \to 0$$ $$0 \to H^0(X,{\mathcal O}(Q(-1))) \to 0$$ So $b_i \leq 0$. Note by Riemann-Roch, $\dim H^1(X, {\mathcal O}(-b_i)) = 0$. We now calculate: $$H^1(X, Q^*) = H^1(X, \oplus_{i = 1}^{k-1}{\mathcal O}(-b_i)) = 0$$ Now consider again $$0 \to L \to E\otimes {\mathcal O}(n) \xrightarrow{\alpha} Q \to 0$$ Tensoring by $Q^*$: $$0 \to {\mathcal O}(Q^*) \to {\mathcal O}({\rm Hom}(Q, E\otimes {\mathcal O}(n))) \to {\mathcal O}({\rm Hom} (Q,Q))\to 0$$ The induced cohomology has the following surjection: $$H^0(X, {\rm Hom}(Q,E\otimes {\mathcal O}(n))) \to H^0(X, {\rm Hom}(Q,Q)) \to 0$$ Thus there is some $\beta: Q \to E\otimes {\mathcal O}(n)$ so that $\alpha \circ \beta = id_Q$, and by the splitting lemma $$E\otimes {\mathcal O}(n) \cong L \oplus Q.$$ Tensoring with ${\mathcal O}(-n)$, $$E \cong {\mathcal O}(-n)\oplus\bigoplus_{i=1}^{k-1}{\mathcal O}(-n+b_i),$$ as required. \end{proof} \chapter{Principal Bundles} \section{Preliminaries} Let $X$ be a Riemann surface. \begin{defi} A fiber bundle over $X$ is a triple $(E,F,\pi)$ with $E$ and $F$ being topological spaces so that: \begin{itemize} \item $\pi: E \to X$ is a surjection. \item For every $x\in X$ there is an open set $x \in U$ and a chart $\phi: \pi^{-1}(U) \to U\times F$ so that ${\rm proj}_{U}\circ \phi(q) = \pi(q)$ for $q \in \pi^{-1}(U)$ \end{itemize} \end{defi} We say that $F$ is the fiber, $E$ is the total space and $\pi$ is the projection. \begin{defi} Let $G$ be a group. A principal $G$-bundle, $P$, is a fiber bundle with $G$ as its fiber. We also require a continuous right $G$-action on $P$ that is free and transitive on each fiber. \end{defi} We mainly concern ourselves with $G$ being a Lie group. \begin{defi} Let $(P,\pi)$ be a principal $G$-bundle over $X$. Let $\rho$ be a continuous action of $G$ by homeomorphisms on a topological space $F$. Consider the right action of $G$ on $P \times F$ given by $(p,f)g = (pg, \rho(g^{-1})f)$. We say the associated bundle is $(P \times_{\rho} F, \pi_\rho)$ where: \begin{itemize} \item $P \times_{\rho} F = P \times F /\sim$ where the equivalence classes are given by $[pg,f] = [p,\rho(g)f]$ \item $\pi_{\rho}[p,f] = \pi(p)$ \end{itemize} \end{defi} \begin{defi} Let $H$ be a subgroup of $G$. We say $P$ has a reduction to $H$ if there is a global section of $P \times_G G/H$. \end{defi} \begin{defi} Let $\rho$ be a representation of $G$ into $GL(V)$. We define $P \times_G \rho: P\times_G G \to P \times_G GL(V)$. \end{defi} \begin{defi} Let $G$ be a connected compact Lie group. Let $T$ be a maximal torus and $N$, its normalizer. Then we define the Weyl group to be $N/T$. \end{defi} \begin{defi} We say a Lie group is reductive if its Lie algebra is reductive. We say a Lie algebra is reductive if it can be written as a direct sum of a semi-simple algebra and its center. \end{defi} From this point on we let $G$ be a compact Lie group, let $G_0$ be the connected component of the identity and let $\mathfrak{g}$ be its Lie algebra. We let $H$, $N$, $W$ and $\mathfrak{h}$ be a Cartan subgroup, the normalizer of the Cartan subgroup, the Weyl group and the Lie algebra of the Cartan subgroup. 
Let $ad$ be the adjoint representation of $G$. Let $P$ be a holomorphic principal $G$-bundle and $E = P \times_G ad$. Finally, let $H^1(X,{\mathcal O}_X(G))$ be the set of holomorphic $G$-bundles over $X$. \begin{theorem}[Grothendieck's Theorem for the Orthogonal Case] A vector bundle $V$ has an orthogonal form if and only if it is isomorphic to its dual. \end{theorem} We do not prove this theorem within this paper. \begin{lemma} Suppose we have a holomorphic section $s$ in $E$ and there is a fiber $E_a$ so that $s(a)$ is a regular element of the Lie algebra $E_a$. Then for any $x$, $s(x)$ is regular in $E_x$. \end{lemma} \begin{proof} The coefficients of the characteristic polynomial of $ad\, s(x)$ must be constant as they are holomorphic functions, by compactness of $X$. Thus $s(x)$ is a regular element everywhere. \end{proof} \begin{lemma} Suppose we have a section $s$ in $E$. Then we have a section in $P \times_G G/N$. \end{lemma} \begin{proof} By the maximal torus theorem, any $2$ Cartan subgroups are conjugate. The stabilizer of any particular maximal torus $T$ under this action is $N(T)$. It follows that $G/N$ is the set of Cartan subalgebras. The section given by sending $s(x)$ to its corresponding subalgebra gives a section in $P \times_G G/N$. \end{proof} \begin{lemma} Suppose we have a section in $P \times_G G/N$. Then we have a section in $P \times_G G/T$. \end{lemma} \begin{proof} We first prove the Weyl group is discrete. For any torus $T$ of rank $n$ we have the following short exact sequence: $$0 \to {\mathbb Z}^n \to {\mathbb R}^n \to T \to 0$$ It follows that $\aut(T) \subset \GL_n({\mathbb Z})$, which is discrete.\\ We then have the sequence: $$0 \to P \times_G W \to P\times_G G/T \to P\times_G G/N \to 0$$ Since $X$ is simply connected $P \times_G W$ is trivial and we obtain the desired section. \end{proof} \begin{defi} We define the Killing form as $B(x,y) = {\rm tr}(ad(x)ad(y))$ for $x,y\in \mathfrak{g}$. It has a few key properties that we will use. Namely: \begin{itemize} \item The Killing form of a nilpotent algebra is everywhere $0$. \item A Lie algebra is semi-simple if and only if the Killing form is non-degenerate over the algebra. \item If $2$ ideals of a Lie algebra intersect trivially, then they are orthogonal with respect to the Killing form. \end{itemize} \end{defi} Suppose $G$ is a compact reductive Lie group. Writing $\mathfrak{g} = \mathfrak{z} \oplus \mathfrak{s}$ for the abelian and semi-simple parts respectively induces a decomposition of each of the fibers $E_x = E_x^1 \oplus E_x^o$. It suffices to show we can find a regular element in the semi-simple part.\\ Now let $G$ be a compact semi-simple Lie group. Let $E_k$ be the vector subfibers of $E$ with meromorphic sections of degree at least $k$. Notice that $[E_i,E_j] \subset E_{i+j}$ by counting degrees. This implies that elements of $E_1$ are $ad_{\mathfrak{g}}$-nilpotent. Let the sub-algebra defined by $E_1$ be $\mathfrak{g}_1$ and we now have an orthogonal fiber $E_0$, by the Killing form. Let the orthogonal sub-algebra under the Killing form be $\mathfrak{g}_0$. \begin{lemma} There is a section in $P \times_G ad$ that is regular at some point. \end{lemma} \begin{proof} Consider the Cartan subalgebras of $\mathfrak{g}_0$. Choose a regular element. Since $\mathfrak{g}_0$ is orthogonal to $\mathfrak{g}_1$, lift it to a global section. \end{proof} We need $1$ more lemma before we are finally ready to prove Grothendieck's theorem. \begin{lemma} If $G$ is a reductive connected Lie group, 
then there is some finite subgroup, $z$, so that $G/z$ is the product of an abelian and a semisimple group. \end{lemma} \section{Grothendieck's Theorem} \begin{theorem}[Classification of Principal Bundles on ${\mathbb{P}^1(\mathbb{C})}$] Let $G$ be a reductive connected Lie group. The map $$H^1(X,O_X(H))/W \to H^1(X,O_X(G))$$ is a bijection. \end{theorem} \begin{proof} We have seen the surjectivity of this map above. Consider the commutative diagram: \tikzset{every picture/.style={line width=0.75pt}} \begin{center} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (296,105.22) -- (338,105.22) ; \draw [shift={(340,105.22)}, rotate = 180] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (301,168.22) -- (343,168.22) ; \draw [shift={(345,168.22)}, rotate = 180] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (240,121.22) -- (240.95,161.22) ; \draw [shift={(241,163.22)}, rotate = 268.64] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (395,120.22) -- (395.95,158.22) ; \draw [shift={(396,160.22)}, rotate = 268.57] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (245,104) node {$H^{1}(X,O_{X}(H))$}; \draw (391,103) node {$H^{1}(X,O_{X}(G))$}; \draw (404,170) node {$H^{1}(X,O_{X}(G/z))$}; \draw (243,172) node {$H^{1}(X,O_{X}(H/z))$}; \end{tikzpicture} \end{center} Suppose $\alpha,\beta \in H^1(X,O_X(H))$ are mapped to the same image in $H^1(X,O_X(G))$. Looking at the diagram, it is clear that they must have the same image in $H^1(X,O_X(G/z))$ or $H^1(X,O_X(H/z))$. In the first case, by the $3^{rd}$ isomorphism theorem we have that $\alpha$ and $\beta$ are in the same equivalence class when taking the quotient by the Weyl group: $(N/z)/(H/z) = W$. In the second case, $H^1(X,z) = 0$ since $z$ is finite and $X$ is simply connected, which induces a bijection between the first cohomology groups $H^1(X,H)$ and $H^1(X,H/z)$, so again $\alpha = \beta$. \end{proof} \chapter{Acknowledgments} We would like to thank Stephen New for his patience when explaining topics in differential geometry. He taught me all the Lie theory I know and suggested this topic as a capstone for the course in Compact Lie Theory. \printbibliography \nocite{*} \end{document}
\begin{document} \title{Unconditionality of orthogonal spline systems in $L^p$} \author[M. Passenbrunner]{Markus Passenbrunner} \address{Institute of Analysis, Johannes Kepler University Linz, Austria, 4040 Linz, Altenberger Strasse 69} \email{[email protected]} \keywords{orthonormal spline system, unconditional basis, $L^p$} \subjclass[2010]{42C10, 46E30} \date{\today} \begin{abstract} Given any natural number $k$ and any dense point sequence $(t_n)$, we prove that the corresponding orthonormal spline system of order $k$ is an unconditional basis in reflexive $L^p$. \end{abstract} \maketitle \section{Introduction} In this work, we are concerned with orthonormal spline systems of arbitrary order $k$ with arbitrary partitions. We let $(t_n)_{n=2}^\infty$ be a dense sequence of points in the open unit interval such that each point occurs at most $k$ times. Moreover, define $t_0:=0$ and $t_1:=1$. Such point sequences are called \emph{admissible}. For $n\geq 2$, we define $\mathcal S_n^{(k)}$ to be the space of polynomial splines of order $k$ with grid points $(t_j)_{j=0}^n$, where the points $0$ and $1$ both have multiplicity $k$. For each $n\geq 2$, the space $\mathcal S_{n-1}^{(k)}$ has codimension $1$ in $\mathcal S_{n}^{(k)}$ and, therefore, there exists a function $f_{n}^{(k)}\in \mathcal S_{n}^{(k)}$ that is orthonormal to the space $\mathcal S_{n-1}^{(k)}$. Observe that this function $f_{n}^{(k)}$ is unique up to sign. In addition, let $(f_{n}^{(k)})_{n=-k+2}^1$ be the collection of orthonormal polynomials in $L^2[0,1]$ such that the degree of $f_n^{(k)}$ is $k+n-2$. The system of functions $(f_{n}^{(k)})_{n=-k+2}^\infty$ is called \emph{orthonormal spline system of order $k$ corresponding to the sequence $(t_n)_{n=0}^\infty$}. We will frequently omit the parameter $k$ and write $f_n$ instead of $f_{n}^{(k)}$. The purpose of this article is to prove the following \begin{thm}\label{thm:uncond} Let $k\in\mathbb{N}$ and $(t_n)_{n\geq 0}$ be an admissible sequence of knots in $[0,1]$. Then the corresponding general orthonormal spline system of order $k$ is an unconditional basis in $L^p[0,1]$ for every $1<p<\infty$. \end{thm} A celebrated result of A. Shadrin \cite{Shadrin2001} states that the orthogonal projection operator onto the space $\mathcal S_n^{(k)}$ is bounded on $L^\infty[0,1]$ by a constant that depends only on the spline order $k$. As a consequence, $(f_n)_{n\geq -k+2}$ is a basis in $L^p[0,1],$ $1\leq p<\infty$. There are various results on the unconditionality of spline systems restricting either the spline order $k$ or the partition $(t_n)_{n\geq 0}$. The first result in this direction is \cite{Bockarev1975}, who proves that the classical Franklin system---that is orthonormal spline systems of order $2$ corresponding to dyadic knots---is an unconditional basis in $L^p[0,1],\ 1<p<\infty$. This argument was extended in \cite{Ciesielski1975} to prove unconditionality of orthonormal spline systems of arbitrary order, but still restricted to dyadic knots. Considerable effort has been made in the past to weaken the restriction to dyadic knot sequences. In the series of papers \cite{GevorkyanKamont1998, GevorkyanSahakian2000, GevKam2004} this restriction was removed step-by-step for general Franklin systems, with the final result that it was shown for each admissible point sequence $(t_n)_{n\geq 0}$ with parameter $k=2$, the associated general Franklin system forms an unconditional basis in $L^p[0,1]$, $1<p<\infty$. 
We combine the methods used in \cite{GevorkyanSahakian2000, GevKam2004} with some new inequalities from \cite{PassenbrunnerShadrin2013} to prove that orthonormal spline systems are unconditional in $L^p[0,1],$ $1<p<\infty$, for any spline order $k$ and any admissible point sequence $(t_n)_{n\geq 0}$. The organization of the present article is as follows. In Section \ref{sec:prel}, we give some preliminary results concerning polynomials and splines. Section \ref{sec:proporth} develops some estimates for the orthonormal spline functions $f_n$ using the crucial notion of associating to each function $f_n$ a characteristic interval $J_n$ in a delicate way. Section \ref{sec:comb} treats a central combinatorial result concerning the cardinality of indices $n$ such that a given grid interval $J$ can be a characteristic interval of $f_n$. In Section \ref{sec:techn} we prove a few technical lemmata used in the proof of Theorem \ref{thm:uncond} and Section \ref{sec:main} finally proves Theorem \ref{thm:uncond}. We remark that the results and proofs in Sections \ref{sec:techn} and \ref{sec:main} follow closely \cite{GevKam2004}. \section{Preliminaries}\label{sec:prel} Let $k$ be a positive integer. The parameter $k$ will always be used for the order of the underlying polynomials or splines. We use the notation $A(t)\sim B(t)$ to indicate the existence of two constants $c_1,c_2>0$ that depend only on $k$, such that $c_1 B(t)\leq A(t)\leq c_2 B(t)$ for all $t$, where $t$ denotes all implicit and explicit dependences that the expressions $A$ and $B$ might have. If the constants $c_1,c_2$ depend on an additional parameter $p$, we write this as $A(t)\sim_p B(t)$. Correspondingly, we use the symbols $\lesssim,\gtrsim,\lesssim_p,\gtrsim_p$. For a subset $E$ of the real line, we denote by $|E|$ the Lebesgue measure of $E$ and by $\ensuremath{\mathbbm 1}_E$ the characteristic function of $E$. First, we recall a few elementary properties of polynomials. \begin{prop}\label{prop:poly} Let $0<\rho<1$. Let $I$ be an interval and $A\subset I$ be a subset of $I$ with $|A|\geq \rho |I|$. Then, for every polynomial $Q$ of order $k$ on $I$, \[ \max_{t\in I}|Q(t)|\lesssim_\rho \sup_{t\in A}|Q(t)|\qquad\text{and}\qquad \int_I |Q(t)|\,\mathrm{d} t\lesssim_\rho \int_A |Q(t)|\,\mathrm{d} t. \] \end{prop} \begin{lem}\label{lem:proj} Let $V$ be an open interval and $f$ be a function satisfying $\int_V |f(t)|\,\mathrm{d} t\leq \lambda |V|$ for some $\lambda>0$. Then, denoting by $T_V f$ the orthogonal projection of the function $f\cdot\ensuremath{\mathbbm 1}_V$ onto the space of polynomials of order $k$ on $V$, \begin{equation}\label{eq:polyproj1} \|T_V f\|_{L^2(V)}^2\lesssim \lambda^2 |V|. \end{equation} Moreover, \begin{equation}\label{eq:polyproj2} \|T_V f\|_{L^p(V)}\lesssim \|f\|_{L^p(V)},\qquad 1\leq p\leq \infty. \end{equation} \end{lem} \begin{proof} Let $l_j$, $0\leq j\leq k-1$ be the $j$-th Legendre polynomial on $[-1,1]$ with the normalization $l_j(1)=1$. It is a consequence of the integral identity \[ l_j(x)=\frac{1}{\pi}\int_0^\pi \Big(x+\sqrt{x^2-1}\cos\varphi\Big)^j\,\mathrm{d} \varphi,\quad x\in\mathbb{C}\setminus\{-1,1\}, \] that $l_j$ is uniformly bounded by $1$ on the interval $[-1,1]$. We have the orthogonality relation \begin{equation}\label{eq:legendre} \int_{-1}^1 l_i(x) l_j(x) \,\mathrm{d} x=\frac{2}{2j+1}\delta(i,j),\qquad 0\leq i,j\leq k-1, \end{equation} where $\delta(\cdot,\cdot)$ denotes the Kronecker delta. Now let $\alpha:=\inf V$ and $\beta:=\sup V$. 
For \[ l_j^V(x):=2^{1/2}|V|^{-1/2} l_j\Big(\frac{2x-\alpha-\beta}{\beta-\alpha}\Big),\quad x\in [\alpha,\beta], \] relation \eqref{eq:legendre} still holds for the sequence $(l_j^V)_{j=0}^{k-1}$, that is \[ \int_\alpha^\beta l_i^V(x) l_j^V(x) \,\mathrm{d} x=\frac{2}{2j+1}\delta(i,j),\qquad 0\leq i,j\leq k-1. \] So, $T_Vf$ can be represented in the form \[ T_Vf=\sum_{j=0}^{k-1} \frac{2j+1}{2}\langle f,l_j^V\rangle l_j^V. \] Thus we obtain \begin{align*} \|T_V f\|_{L^2(V)}&\leq \sum_{j=0}^{k-1}\frac{2j+1}{2}|\langle f,l_j^V \rangle| \|l_j^V\|_{L^2(V)}=\sum_{j=0}^{k-1}\sqrt{\frac{2j+1}{2}}|\langle f,l_j^V\rangle| \\ &\leq \|f\|_{L^1(V)}\sum_{j=0}^{k-1}\sqrt{\frac{2j+1}{2}}\|l_j^V\|_{L^\infty(V)}\lesssim \|f\|_{L^1(V)}|V|^{-1/2}, \end{align*} Now, \eqref{eq:polyproj1} is a consequence of the assumption $\int_V|f(t)|\,\mathrm{d} t\leq \lambda|V|$. If we set $p'=p/(p-1)$, the second inequality \eqref{eq:polyproj2} follows from \begin{align*} \|T_Vf\|_{L^p(V)}\leq \sum_{j=0}^{k-1}\frac{2j+1}{2}\|f\|_{L^p(V)}\|l_j^V\|_{L^{p'}(V)}\|l_j^V\|_{L^p(V)}\lesssim \|f\|_{L^p(V)}, \end{align*} since $\|l_j^V\|_{L^p(V)}\lesssim |V|^{1/p-1/2}$ for $0\leq j\leq k-1$ and $1\leq p\leq \infty$. \end{proof} We now let \begin{equation}\label{eq:part} \mathcal T=(0=\tau_1=\dots=\tau_k<\tau_{k+1}\leq\dots\leq\tau_M<\tau_{M+1}=\dots=\tau_{M+k}=1) \end{equation} be a partition of $[0,1]$ consisting of knots of multiplicity at most $k$, that means $\tau_i<\tau_{i+k}$ for all $1\leq i\leq M$. Let $\mathcal{S}_{\mathcal T}^{(k)}$ be the space of polynomial splines of order $k$ with knots $\mathcal T$. The basis of $L^\infty$-normalized B-spline functions in $\mathcal{S}_{\mathcal{T}}^{(k)}$ is denoted by $(N_{i,k})_{i=1}^M$ or for short $(N_{i})_{i=1}^M$. Corresponding to this basis, there exists a biorthogonal basis of $\mathcal{S}_{\mathcal{T}}^{(k)}$, which is denoted by $(N_{i,k}^*)_{i=1}^M$ or $(N_{i}^*)_{i=1}^M$. Moreover, we write $\nu_i = \tau_{i+k}-\tau_i$. We continue with recalling a few important results for B-splines $N_i$ and their dual functions $N_i^*$. \begin{prop}\label{prop:lpstab} Let $1\leq p\leq \infty$ and $g=\sum_{j=1}^M a_j N_j$. Then, \begin{equation}\label{eq:lpstab} |a_j|\lesssim |J_j|^{-1/p}\|g\|_{L^p(J_j)},\qquad 1\leq j\leq M, \end{equation} where $J_j$ is the subinterval $[\tau_i,\tau_{i+1}]$ of $[\tau_j,\tau_{j+k}]$ of maximal length. Additionally, \begin{equation}\label{eq:deboorlpstab} \|g\|_p\sim \Big(\sum_{j=1}^M |a_j|^p \nu_j\Big)^{1/p}=\| (a_j\nu_j^{1/p})_{j=1}^M\|_{\ell^p}. \end{equation} Moreover, if $h=\sum_{j=1}^M b_j N_j^*$, \begin{equation} \|h\|_p\lesssim\Big(\sum_{j=1}^M |a_j|^p \nu_j^{1-p}\Big)^{1/p}=\|(a_j\nu_j^{1/p-1})_{j=1}^M\|_{\ell^p}. \label{eq:lpstabdual} \end{equation} \end{prop} The two inequalites \eqref{eq:lpstab} and \eqref{eq:deboorlpstab} are Lemma 4.1 and Lemma 4.2 in \cite[Chapter 5]{DeVoreLorentz1993}, respectively. Inequality \eqref{eq:lpstabdual} is a consequence of the celebrated result of Shadrin \cite{Shadrin2001}, that the orthogonal projection operator onto $\mathcal S_{\mathcal{T}}^{(k)}$ is bounded on $L^\infty$ independently of $\mathcal T$. For a deduction of \eqref{eq:lpstabdual} from this result, see \cite[Property P.7]{Ciesielski2000}. The next thing to consider are estimates for the inverse of the Gram matrix $(\langle N_{i,k},N_{j,k}\rangle)_{i,j=1}^{M}$. 
Before we do that, we recall the concept of totally positive matrices. Let $Q_{m,n}$ be the set of strictly increasing sequences of $m$ integers from the set $\{1,\dots,n\}$ and let $A$ be an $n\times n$-matrix. For $\alpha,\beta\in Q_{m,n}$, we denote by $A[\alpha;\beta]$ the submatrix of $A$ consisting of the rows indexed by $\alpha$ and the columns indexed by $\beta$. Furthermore, we let $\alpha'$ (the complement of $\alpha$) be the uniquely determined element of $Q_{n-m,n}$ that consists of all integers in $\{1,\dots,n\}$ not occurring in $\alpha$. In addition, we use the notation $A(\alpha;\beta):=A[\alpha';\beta']$.
\begin{defin}
Let $A$ be an $n\times n$-matrix. $A$ is called \emph{totally positive} if
\begin{equation*}
\det A[\alpha;\beta]\geq 0,\quad \text{for }\alpha,\beta\in Q_{m,n},1\leq m\leq n.
\end{equation*}
\end{defin}
The cofactor formula $b_{ij}=(-1)^{i+j}\det A(j;i)/\det A$ for the inverse $B=(b_{ij})_{i,j=1}^n$ of the matrix $A$ leads to
\begin{prop}\label{prop:checkerboard}
Inverses $B=(b_{ij})$ of totally positive matrices $A=(a_{ij})$ have the checkerboard property. This means that
\begin{equation*}
(-1)^{i+j} b_{ij}\geq 0\quad \text{for all }i,j.
\end{equation*}
\end{prop}
\begin{thm}[\cite{deBoor1968}]
Let $k\in\mathbb{N}$ and $\mathcal{T}$ be an arbitrary partition of $[0,1]$ as in \eqref{eq:part}. Then the Gram matrix $A=(\langle N_{i,k},N_{j,k}\rangle)_{i,j=1}^{M}$ of the B-spline functions is totally positive.
\end{thm}
This theorem is a consequence of the so-called basic composition formula \cite[Chapter 1, Equation (2.5)]{Karlin1968} and the fact that the kernel $N_{i,k}(x)$, depending on the variables $i$ and $x$, is totally positive \cite[Theorem 4.1, Chapter 10]{Karlin1968}. As a consequence, the inverse $B=(b_{ij})_{i,j=1}^M$ of $A$ possesses the checkerboard property by Proposition \ref{prop:checkerboard}.
\begin{thm}[\cite{PassenbrunnerShadrin2013}]\label{thm:maintool}
Let $k\in\mathbb{N}$, let the partition $\mathcal T$ be defined as in \eqref{eq:part} and let $(b_{ij})_{i,j=1}^M$ be the inverse of the Gram matrix $(\langle N_{i,k},N_{j,k}\rangle)_{i,j=1}^{M}$ of B-spline functions $N_{i,k}$ of order $k$ corresponding to the partition $\mathcal T$. Then,
\[
|b_{ij}|\leq C \frac{\gamma^{|i-j|}}{\tau_{\max(i,j)+k}-\tau_{\min(i,j)}},\qquad 1\leq i,j\leq M,
\]
where the constants $C>0$ and $0<\gamma<1$ depend only on the spline order $k$.
\end{thm}
Let $f\in L^p[0,1]$ for some $1\leq p<\infty$. Since the orthonormal spline system $(f_n)_{n\geq -k+2}$ is a basis in $L^p[0,1]$, we can write $f=\sum_{n=-k+2}^\infty a_n f_n$. Based on this expansion, we define the \emph{square function} $Sf:=\big(\sum_{n=-k+2}^\infty |a_n f_n|^2\big)^{1/2}$ and the \emph{maximal function} $Mf:=\sup_m \big| \sum_{n\leq m} a_n f_n \big|$. Moreover, given a measurable function $g$, we denote by $\mathcal Mg$ the \emph{Hardy-Littlewood maximal function} of $g$ defined as
\[
\mathcal Mg(x):=\sup_{I\ni x} |I|^{-1} \int_I |g(t)|\,\mathrm{d} t,
\]
where the supremum is taken over all intervals $I$ containing the point $x$. A corollary of Theorem \ref{thm:maintool} gives the following relation between $M$ and $\mathcal M$:
\begin{thm}[\cite{PassenbrunnerShadrin2013}]\label{thm:maxbound}
If $f\in L^1[0,1]$, we have
\[
Mf(t)\lesssim \mathcal M f(t),\qquad t\in[0,1].
\]
\end{thm}
\section{Properties of orthogonal spline functions}\label{sec:proporth}
This section treats the calculation and estimation of a single orthonormal spline function $f_n^{(k)}$, for fixed $k\in\mathbb N$ and $n\geq 2$, induced by the admissible sequence $(t_n)_{n=0}^\infty$. Let $i_0$ be an index with $k+1\leq i_0\leq M$. The partition $\mathcal T$ is defined as follows:
\begin{align*}
\mathcal T=(0=\tau_1=\dots=\tau_k<\tau_{k+1}&\leq\dots\leq\tau_{i_0} \\
&\leq\dots\leq\tau_{M}<\tau_{M+1}=\dots=\tau_{M+k}=1),
\end{align*}
and the partition $\widetilde{\mathcal T}$ is defined to be the same as $\mathcal T$, but with $\tau_{i_0}$ removed. In the same way, we denote by $(N_i:1\leq i\leq M)$ the B-spline functions corresponding to $\mathcal T$ and by $(\widetilde{N}_i:1\leq i\leq M-1)$ the B-spline functions corresponding to $\widetilde{\mathcal T}$. Böhm's formula \cite{Boehm1980} gives us the following relationship between $N_i$ and $\widetilde{N}_i$:
\begin{equation}\label{eq:boehm}
\left\{
\begin{aligned}
\widetilde{N}_i(t)&=N_i(t) &\text{if }1\leq i\leq i_0-k-1, \\
\widetilde{N}_i(t)&=\frac{\tau_{i_0}-\tau_i}{\tau_{i+k}-\tau_i}N_i(t)+\frac{\tau_{i+k+1}-\tau_{i_0}}{\tau_{i+k+1}-\tau_{i+1}}N_{i+1}(t) &\text{if }i_0-k\leq i\leq i_0-1, \\
\widetilde{N}_i(t)&= N_{i+1}(t) &\text{if }i_0\leq i\leq M-1.
\end{aligned}
\right.
\end{equation}
In order to calculate the orthonormal spline function corresponding to the partitions $\widetilde{\mathcal T}$ and $\mathcal T$, we first determine a function $g\in\operatorname{span}\{N_i:1\leq i\leq M\}$ such that $g\perp \widetilde{N}_j$ for all $1\leq j\leq M-1$. To this end, we write $g$ in the form
\[
g=\sum_{j=1}^M \alpha_j N_j^*,
\]
where $(N_j^*:1\leq j\leq M)$ is the biorthogonal system to the functions $(N_i:1\leq i\leq M)$. In order for $g$ to be orthogonal to $\widetilde{N}_j$, $1\leq j\leq M-1$, it has to satisfy the identities
\[
0=\langle g,\widetilde{N}_i\rangle=\sum_{j=1}^M \alpha_j\langle N_j^*,\widetilde{N}_i\rangle,\quad 1\leq i\leq M-1.
\]
By \eqref{eq:boehm}, these identities imply that $\alpha_j=0$ if $1\leq j\leq i_0-k-1$ or $i_0+1\leq j\leq M$. For $i_0-k\leq i\leq i_0-1$, we have the recursion formula
\begin{equation}\label{eq:recalpha}
\begin{aligned}
\alpha_{i+1}\frac{\tau_{i+k+1}-\tau_{i_0}}{\tau_{i+k+1}-\tau_{i+1}}+\alpha_{i}\frac{\tau_{i_0}-\tau_i}{\tau_{i+k}-\tau_i}=0,
\end{aligned}
\end{equation}
which determines the sequence $(\alpha_j)$ up to a multiplicative constant. We choose
\begin{equation*}
\alpha_{i_0-k}=\prod_{\ell=i_0-k+1}^{i_0-1}\frac{\tau_{\ell+k}-\tau_{i_0}}{\tau_{\ell+k}-\tau_{\ell}}
\end{equation*}
for symmetry reasons. This starting value and the recursion \eqref{eq:recalpha} yield the explicit formula
\begin{equation}\label{eq:alpha2}
\alpha_j=(-1)^{j-i_0+k}\Big(\prod_{\ell=i_0-k+1}^{j-1}\frac{\tau_{i_0}-\tau_{\ell}}{\tau_{\ell+k}-\tau_{\ell}}\Big)\Big(\prod_{\ell=j+1}^{i_0-1}\frac{\tau_{\ell+k}-\tau_{i_0}}{\tau_{\ell+k}-\tau_{\ell}}\Big),\quad i_0-k\leq j\leq i_0.
\end{equation}
So, the function $g$ is given by
\begin{align*}
g&=\sum_{j=i_0-k}^{i_0} \alpha_j N_j^* =\sum_{j=i_0-k}^{i_0} \sum_{\ell=1}^{M} \alpha_j b_{j\ell} N_\ell,
\end{align*}
where $(b_{j\ell})_{j,\ell=1}^M$ is the inverse of the Gram matrix $(\langle N_j,N_\ell\rangle)_{j,\ell=1}^M$.
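The coefficients $\alpha_j$ are completely explicit and may also be experimented with numerically. The following minimal sketch (again in Python with NumPy; the function name, the order $k=3$, the knot data and the choice $i_0=6$ are ours and purely illustrative) implements \eqref{eq:alpha2} and checks the recursion \eqref{eq:recalpha} together with the sign alternation used below; it is not part of the proofs.
\begin{verbatim}
import numpy as np

def alpha_coefficients(tau, i0, k):
    # Coefficients alpha_j, i0-k <= j <= i0, from the closed formula
    # (eq:alpha2); tau uses 1-based indexing (tau[0] is a dummy) to
    # match the notation of the text.
    alpha = {}
    for j in range(i0 - k, i0 + 1):
        p1 = np.prod([(tau[i0] - tau[l]) / (tau[l + k] - tau[l])
                      for l in range(i0 - k + 1, j)])
        p2 = np.prod([(tau[l + k] - tau[i0]) / (tau[l + k] - tau[l])
                      for l in range(j + 1, i0)])
        alpha[j] = (-1.0) ** (j - i0 + k) * p1 * p2
    return alpha

if __name__ == "__main__":
    k = 3
    # knots of an illustrative partition as in (eq:part): 0 and 1 with
    # multiplicity k and distinct interior knots; tau[0] is unused
    tau = [np.nan] + [0.0] * k + [0.15, 0.3, 0.5, 0.55, 0.8] + [1.0] * k
    i0 = 6                                 # the new knot tau_{i0} = 0.5
    alpha = alpha_coefficients(tau, i0, k)
    # the recursion (eq:recalpha) holds up to rounding errors
    for i in range(i0 - k, i0):
        res = (alpha[i + 1] * (tau[i + k + 1] - tau[i0])
               / (tau[i + k + 1] - tau[i + 1])
               + alpha[i] * (tau[i0] - tau[i]) / (tau[i + k] - tau[i]))
        print("i =", i, " residual =", res)
    print("signs alternate:",
          all(alpha[j] * alpha[j + 1] < 0 for j in range(i0 - k, i0)))
\end{verbatim}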
We remark that the sequence $(\alpha_j)$ alternates in sign and, since the matrix $(b_{j\ell})_{j,\ell=1}^M$ has the checkerboard property, we see that the B-spline coefficients of $g$, namely
\begin{equation}\label{eq:defwj}
w_\ell:=\sum_{j=i_0-k}^{i_0} \alpha_j b_{j\ell},\qquad 1\leq \ell\leq M,
\end{equation}
satisfy
\begin{equation}\label{eq:betragreinziehen}
\Big| \sum_{j=i_0-k}^{i_0}\alpha_j b_{j\ell}\Big|= \sum_{j=i_0-k}^{i_0}|\alpha_j b_{j\ell}|,\qquad 1\leq \ell\leq M.
\end{equation}
In the following Definition \ref{def:characteristic}, we assign to each orthonormal spline function a characteristic interval, which is a grid interval $[\tau_i,\tau_{i+1}]$ lying in the proximity of the newly inserted point $\tau_{i_0}$. We will later see that the choice of this interval is crucial for proving important properties that are needed for showing that the system $(f_n^{(k)})_{n=-k+2}^\infty$ is an unconditional basis in $L^p$, $1<p<\infty$, for all admissible knot sequences $(t_n)_{n\geq 0}$. This approach was already used by G.\ G.\ Gevorkyan and A.\ Kamont \cite{GevKam2004} in the proof that general Franklin systems are unconditional in $L^p$, $1<p<\infty$, where the characteristic intervals were called J-intervals. Since we give a slightly different construction here, we name them characteristic intervals.
\begin{defin}\label{def:characteristic}
Let $\mathcal T,\widetilde{\mathcal T}$ be as above and $\tau_{i_0}$ the new point in $\mathcal T$ that is not present in $\widetilde{\mathcal T}$. We define the \emph{characteristic interval $J$ corresponding to $\tau_{i_0}$} as follows.
\begin{enumerate}
\item Let
\[
\Lambda^{(0)}:=\{i_0-k\leq j\leq i_0 : |[\tau_j,\tau_{j+k}]|\leq 2\min_{i_0-k\leq \ell\leq i_0}|[\tau_\ell,\tau_{\ell+k}]| \}
\]
be the set of all indices $j$ for which the corresponding support of the B-spline function $N_j$ is approximately minimal. Observe that $\Lambda^{(0)}$ is nonempty.
\item Define
\[
\Lambda^{(1)}:=\{j\in \Lambda^{(0)}: |\alpha_j|=\max_{\ell\in \Lambda^{(0)}} |\alpha_\ell|\}.
\]
For an arbitrary, but fixed index $j^{(0)}\in \Lambda^{(1)}$, set $J^{(0)}:=[\tau_{j^{(0)}},\tau_{j^{(0)}+k}]$.
\item The interval $J^{(0)}$ can now be written as the union of $k$ grid intervals
\[
J^{(0)}=\bigcup_{\ell=0}^{k-1}[\tau_{j^{(0)}+\ell},\tau_{j^{(0)}+\ell+1}]\qquad\text{with }j^{(0)}\text{ as above}.
\]
We define the \emph{characteristic interval} $J=J(\tau_{i_0})$ to be one of the above $k$ intervals that has maximal length.
\end{enumerate}
\end{defin}
We remark that in the definition of $\Lambda^{(0)}$, we may replace the factor $2$ by any other constant $C>1$. It is essential, though, that $C>1$ in order to obtain the following theorem, which is crucial for further investigations.
\begin{thm}\label{thm:estwj}
With the above definition \eqref{eq:defwj} of $w_\ell$ for $1\leq \ell\leq M$ and the index $j^{(0)}$ given in Definition \ref{def:characteristic},
\begin{equation}\label{eq:estwj}
|w_{j^{(0)}}|\gtrsim b_{j^{(0)},j^{(0)}}.
\end{equation}
\end{thm}
Before we start the proof of this theorem, we state a few remarks and lemmata. For the choice of $j^{(0)}$ in Definition \ref{def:characteristic}, we have, by construction, the following inequalities: for all $i_0-k\leq \ell\leq i_0$ with $\ell\neq j^{(0)}$,
\begin{equation}\label{eq:constrJ1}
|\alpha_\ell|\leq |\alpha_{j^{(0)}}|\quad\text{or}\quad |[\tau_\ell,\tau_{\ell+k}]|>2 \min_{i_0-k\leq s\leq i_0} |[\tau_s,\tau_{s+k}]|.
\end{equation} We recall the identity \begin{equation}\label{eq:formalphaj} |\alpha_j|=\Big(\prod_{\ell=i_0-k+1}^{j-1}\frac{\tau_{i_0}-\tau_{\ell}}{\tau_{\ell+k}-\tau_{\ell}}\Big)\Big(\prod_{\ell=j+1}^{i_0-1}\frac{\tau_{\ell+k}-\tau_{i_0}}{\tau_{\ell+k}-\tau_{\ell}}\Big),\qquad i_0-k\leq j\leq i_0. \end{equation} Since by \eqref{eq:betragreinziehen}, \begin{equation*} |w_{j^{(0)}}|=\sum_{j=i_0-k}^{i_0}|\alpha_j b_{j,j^{(0)}}|\geq |\alpha_{j^{(0)}}||b_{j^{(0)},j^{(0)}}|, \end{equation*} in order to show \eqref{eq:estwj}, we prove the inequality \begin{equation*} |\alpha_{j^{(0)}}|\geq D_k>0 \end{equation*} with a constant $D_k$ only depending on $k$. By \eqref{eq:formalphaj}, this inequality follows from the more elementary inequalities \begin{equation}\label{eq:alphabed} \begin{aligned} \tau_{i_0}-\tau_{\ell}&\gtrsim \tau_{\ell+k}-\tau_{i_0},&\qquad i_0-k+1&\leq \ell\leq j^{(0)}-1, \\ \tau_{\ell+k}-\tau_{i_0}&\gtrsim \tau_{i_0}-\tau_{\ell},&\qquad j^{(0)}+1&\leq \ell\leq i_0-1. \end{aligned} \end{equation} We will only prove the second line of \eqref{eq:alphabed} for all choices of $j^{(0)}$. The first line of \eqref{eq:alphabed} is then proved by a similar argument. We observe that if $j^{(0)}\geq i_0-1$, then there is nothing to prove, so we assume \begin{equation}\label{eq:assj0} j^{(0)}\leq i_0-2. \end{equation} Moreover, we need only show the single inequality \begin{equation}\label{eq:ulttoshow} \tau_{j^{(0)}+k+1}-\tau_{i_0}\gtrsim \tau_{i_0}-\tau_{j^{(0)}+1}, \end{equation} since if we assume \eqref{eq:ulttoshow}, for any $j^{(0)}+1\leq \ell\leq i_0-1$, \begin{equation*} \tau_{\ell+k}-\tau_{i_0}\geq \tau_{j^{(0)}+k+1}-\tau_{i_0}\gtrsim \tau_{i_0}-\tau_{j^{(0)}+1}\geq \tau_{i_0}-\tau_\ell. \end{equation*} We now choose the index $j$ to be the minimal index in the range $i_0\geq j>j^{(0)}$ such that \begin{equation}\label{eq:bedalpha} |\alpha_j|\leq |\alpha_{j^{(0)}}|. \end{equation} If there is no such $j$, we set $j=i_0+1$. If $j\leq i_0$, we employ \eqref{eq:formalphaj} to get that \eqref{eq:bedalpha} is equivalent to \begin{equation}\label{eq:finalpha} \begin{aligned} (\tau_{j+k}-\tau_j)^{1-\delta(j,i_0)}&\prod_{\ell=j^{(0)}\vee(i_0-k+1)}^{j-1}(\tau_{i_0}-\tau_\ell) \\ &\leq(\tau_{j^{(0)}+k}-\tau_{j^{(0)}})^{1-\delta(j^{(0)},i_0-k)}\prod_{\ell=j^{(0)}+1}^{j\wedge(i_0-1)}(\tau_{\ell+k}-\tau_{i_0}), \end{aligned} \end{equation} where $\delta(\cdot,\cdot)$ is the Kronecker delta. Furthermore, let the index $m$ in the range $i_0-k\leq m\leq i_0$ be such that $\tau_{m+k}-\tau_m=\min_{i_0-k\leq s\leq i_0}(\tau_{s+k}-\tau_s)$. Now, from the minimality of $j$ and \eqref{eq:constrJ1}, we obtain the inequalities \begin{equation}\label{eq:condmax} \tau_{\ell+k}-\tau_\ell>2 (\tau_{m+k}-\tau_m),\qquad j^{(0)}+1\leq \ell\leq j-1. \end{equation} Thus, by definition, the index $m$ satisfies \begin{equation}\label{eq:rangem} m\leq j^{(0)}\quad\text{or}\quad m\geq j. \end{equation} \begin{lem}\label{lem:tau} In the above notation, if $m\leq j^{(0)}$ and $j-j^{(0)}\geq 2$, we have \eqref{eq:ulttoshow} or more precisely, \begin{equation}\label{eq:lem:tau} \tau_{j^{(0)}+k+1}-\tau_{i_0}\geq \tau_{i_0}-\tau_{j^{(0)}+1}. \end{equation} \end{lem} \begin{proof} We expand the left hand side of \eqref{eq:lem:tau} and write \[ \tau_{j^{(0)}+k+1}-\tau_{i_0}=\tau_{j^{(0)}+k+1}-\tau_{j^{(0)}+1}-(\tau_{i_0}-\tau_{j^{(0)}+1}). \] By \eqref{eq:condmax} (observe that $j-j^{(0)}\geq 2$), we further conclude \[ \tau_{j^{(0)}+k+1}-\tau_{i_0}\geq 2(\tau_{m+k}-\tau_m)-(\tau_{i_0}-\tau_{j^{(0)}+1}). 
\] Since $m+k\geq i_0$ and $m\leq j^{(0)}$, we obtain finally \[ \tau_{j^{(0)}+k+1}-\tau_{i_0}\geq \tau_{i_0}-\tau_{j^{(0)}+1}, \] which is the conclusion of the lemma. \end{proof} \begin{lem}\label{lem:tau2} Let $j^{(0)},j$ and $m$ be as above. If $j^{(0)}+1\leq \ell\leq j-1$ and $m\geq j$, we have \begin{equation*} \tau_{i_0}-\tau_\ell\geq \tau_{\ell+1+k}-\tau_{i_0}. \end{equation*} \end{lem} \begin{proof} Let $j^{(0)}+1\leq \ell\leq j-1$. Then we obtain from \eqref{eq:condmax} \begin{equation}\label{eq:mgtrj1} \tau_{i_0}-\tau_{\ell}=\tau_{\ell+1+k}-\tau_{\ell}-(\tau_{\ell+1+k}-\tau_{i_0})\geq 2(\tau_{m+k}-\tau_m)-(\tau_{\ell+1+k}-\tau_{i_0}). \end{equation} Since we assumed $m\geq j\geq \ell+1$, we get $m+k\geq \ell+1+k$ and additionally we have $m\leq i_0$ by definition of $m$. Thus we conclude from \eqref{eq:mgtrj1} \begin{equation*} \tau_{i_0}-\tau_{\ell}\geq \tau_{\ell+1+k}-\tau_{i_0}. \end{equation*} Since the index $\ell$ was arbitrary in the range $j^{(0)}+1\leq \ell\leq j-1$, the proof of the lemma is completed. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:estwj}] We employ the above definition of the indices $j^{(0)},j,$ and $m$ and split our analysis in a few cases distinguishing various possibilities for the parameters $j^{(0)}$ and $j$. In each case we will show \eqref{eq:ulttoshow}. \textsc{Case 1: }There is no index $j>j^{(0)}$ such that $|\alpha_j|\leq |\alpha_{j^{(0)}}|$. \\ In this case, \eqref{eq:rangem} implies $m\leq j^{(0)}$. Since $j^{(0)}\leq i_0-2$ by \eqref{eq:assj0}, we apply Lemma \ref{lem:tau} to conclude the proof of \eqref{eq:ulttoshow}. \textsc{Case 2: } $i_0-k+1\leq j^{(0)}<j\leq i_0-1$. \\ Using the restrictions on our parameters $j^{(0)}$ and $j$, we see that \eqref{eq:finalpha} becomes \begin{equation*} (\tau_{j^{(0)}+k}-\tau_{j^{(0)}})\prod_{\ell=j^{(0)}+1}^{j}(\tau_{\ell+k}-\tau_{i_0})\geq (\tau_{j+k}-\tau_j)\prod_{\ell=j^{(0)}}^{j-1}(\tau_{i_0}-\tau_{\ell}). \end{equation*} This implies \begin{align*} \tau_{j^{(0)}+k+1}-\tau_{i_0}\geq \frac{(\tau_{j+k}-\tau_j)(\tau_{i_0}-\tau_{j^{(0)}})}{\tau_{j^{(0)}+k}-\tau_{j^{(0)}}}\prod_{\ell=j^{(0)}+1}^{j-1}\frac{\tau_{i_0}-\tau_\ell}{\tau_{\ell+1+k}-\tau_{i_0}}. \end{align*} Since by definition of $j^{(0)}$, we have in particular $\tau_{j^{(0)}+k}-\tau_{j^{(0)}}\leq 2 (\tau_{j+k}-\tau_j)$, we conclude further \begin{equation}\label{eq:case2-1} \tau_{j^{(0)}+k+1}-\tau_{i_0}\geq \frac{\tau_{i_0}-\tau_{j^{(0)}+1}}{2}\prod_{\ell=j^{(0)}+1}^{j-1}\frac{\tau_{i_0}-\tau_\ell}{\tau_{\ell+1+k}-\tau_{i_0}}. \end{equation} If $j=j^{(0)}+1$, the assertion \eqref{eq:ulttoshow} follows from \eqref{eq:case2-1}, since then, the product is empty. If $j\geq j^{(0)}+2$ and $m\leq j^{(0)}$, we apply Lemma \ref{lem:tau} to obtain \eqref{eq:ulttoshow}. If $j\geq j^{(0)}+2$ and $m\geq j$, we use Lemma \ref{lem:tau2} on the terms in the product appearing in \eqref{eq:case2-1} to conclude \eqref{eq:ulttoshow}. This finishes the proof of Case 2. \textsc{Case 3: }$i_0-k+1\leq j^{(0)}<j=i_0$.\\ Recall that $j^{(0)}\leq i_0-2=j-2$ by \eqref{eq:assj0}. If $m\leq j^{(0)}$, we apply Lemma \ref{lem:tau} and we are done with the proof of \eqref{eq:ulttoshow}. So we assume $m\geq j$. Since $i_0=j$ and $m\leq i_0$, we have $m=j$. The restrictions on the indices $j^{(0)},j$ yield that condition \eqref{eq:finalpha} is nothing else than \begin{equation*} (\tau_{j^{(0)}+k}-\tau_{j^{(0)}})\prod_{\ell=j^{(0)}+1}^{i_0-1}(\tau_{\ell+k}-\tau_{i_0})\geq \prod_{\ell=j^{(0)}}^{i_0-1}(\tau_{i_0}-\tau_{\ell}). 
\end{equation*}
Thus, in order to show \eqref{eq:ulttoshow}, it is enough to prove that there exists a constant $D_k>0$ only depending on $k$ such that
\begin{equation}\label{eq:tau1}
\frac{\tau_{i_0}-\tau_{j^{(0)}}}{\tau_{j^{(0)}+k}- \tau_{j^{(0)}}}\prod_{\ell=j^{(0)}+2}^{i_0-1}\frac{\tau_{i_0}-\tau_{\ell}}{\tau_{\ell+k}-\tau_{i_0}}\geq D_k.
\end{equation}
First observe that by Lemma \ref{lem:tau2},
\[
\tau_{i_0}-\tau_{j^{(0)}}\geq \tau_{j^{(0)}+k+2}-\tau_{i_0}\geq \tau_{j^{(0)}+k}-\tau_{i_0}.
\]
Inserting this inequality in \eqref{eq:tau1} and applying Lemma \ref{lem:tau2} directly to the terms in the product, we obtain the assertion \eqref{eq:tau1}.
\textsc{Case 4: } $i_0-k=j^{(0)}<j=i_0$. \\
We have $j^{(0)}\leq i_0-2$ by \eqref{eq:assj0}. If $m\leq j^{(0)}$, just apply Lemma \ref{lem:tau} to obtain \eqref{eq:ulttoshow}. Thus we assume $m\geq j$. Since $i_0=j$ and $m\leq i_0$, we have $m=j$. The restrictions on the indices $j^{(0)},j$ yield that condition \eqref{eq:finalpha} takes the form
\begin{equation*}
\prod_{\ell=i_0-k+1}^{i_0-1}(\tau_{\ell+k}-\tau_{i_0})\geq \prod_{\ell=i_0-k+1}^{i_0-1}(\tau_{i_0}-\tau_{\ell}).
\end{equation*}
Thus, in order to show \eqref{eq:ulttoshow}, it is enough to prove that there exists a constant $D_k>0$ only depending on $k$ such that
\begin{equation*}
\prod_{\ell=i_0-k+2}^{i_0-1}\frac{\tau_{i_0}-\tau_{\ell}}{\tau_{\ell+k}-\tau_{i_0}}\geq D_k.
\end{equation*}
But this is a consequence of Lemma \ref{lem:tau2}, finishing the proof of Case 4.
\textsc{Case 5: }$i_0-k=j^{(0)}<j\leq i_0-1$.\\
In this case, \eqref{eq:ulttoshow} becomes
\begin{equation}\label{eq:tautoshow5}
\tau_{i_0+1}-\tau_{i_0}\gtrsim \tau_{i_0}-\tau_{i_0-k+1}
\end{equation}
and \eqref{eq:finalpha} is nothing else than
\begin{equation}\label{eq:lastcase}
\prod_{\ell=i_0-k+1}^{j} (\tau_{\ell+k}-\tau_{i_0}) \geq (\tau_{j+k}-\tau_j)\prod_{\ell=i_0-k+1}^{j-1}(\tau_{i_0}-\tau_{\ell}).
\end{equation}
For $j=i_0-k+1$, \eqref{eq:tautoshow5} follows easily from \eqref{eq:lastcase}. If we assume $j-j^{(0)}\geq 2$ and $m\leq j^{(0)}$, we just apply Lemma \ref{lem:tau} to obtain \eqref{eq:ulttoshow}. If $j-j^{(0)}\geq 2$ and $m\geq j$, then, in order to show \eqref{eq:tautoshow5}, it is enough to prove that there exists a constant $D_k>0$ only depending on $k$ such that
\begin{equation*}
\frac{(\tau_{j+k}-\tau_j)\prod_{\ell=i_0-k+2}^{j-1}(\tau_{i_0}-\tau_{\ell})}{\prod_{\ell=i_0-k+2}^{j}(\tau_{\ell+k}-\tau_{i_0})}\geq D_k.
\end{equation*}
This follows from the obvious inequality $\tau_{j+k}-\tau_j\geq \tau_{j+k}-\tau_{i_0}$ and from Lemma \ref{lem:tau2}. Thus, the proof of Case 5 is completed, thereby concluding the proof of Theorem \ref{thm:estwj}.
\end{proof}
We will use this result to prove lemmata connecting the $L^p$ norm of the function $g$ and the corresponding characteristic interval $J$. Before we start, we need another simple
\begin{lem}\label{lem:estdiaginverse}
Let $C=(c_{ij})_{i,j=1}^n$ be a symmetric positive definite matrix. Then, for $(d_{ij})_{i,j=1}^n=C^{-1}$ we have
\[
c_{ii}^{-1}\leq d_{ii},\qquad 1\leq i\leq n.
\]
\end{lem}
\begin{proof}
Since $C$ is symmetric, $C$ is diagonalizable and we have
\[
C=S\Lambda S^T,
\]
for some orthogonal matrix $S=(s_{ij})_{i,j=1}^n$ and for the diagonal matrix $\Lambda$ consisting of the eigenvalues $\lambda_1,\dots,\lambda_n$ of $C$. These eigenvalues are positive, since $C$ is positive definite. Clearly,
\[
C^{-1}=S\Lambda ^{-1}S^T.
\]
Let $i$ be an arbitrary integer in the range $1\leq i\leq n$.
Then,
\[
c_{ii}=\sum_{\ell=1}^n s_{i\ell}^2 \lambda_\ell\qquad\text{and}\qquad d_{ii}=\sum_{\ell=1}^n s_{i\ell}^2 \lambda_\ell^{-1}.
\]
Since $\sum_{\ell=1}^n s_{i\ell}^2=1$ and the function $x\mapsto x^{-1}$ is convex on $(0,\infty)$, we conclude by Jensen's inequality
\[
c_{ii}^{-1}=\big(\sum_{\ell=1}^n s_{i\ell}^2 \lambda_\ell\big)^{-1}\leq \sum_{\ell=1}^n s_{i\ell}^2 \lambda_\ell^{-1}=d_{ii},
\]
and thus the assertion of the lemma.
\end{proof}
\begin{lem}\label{lem:orthsplineJinterval}Let $\mathcal T,\,\widetilde{\mathcal T}$ be as above and $g=\sum_{j=1}^M w_jN_j$ be the function in $\operatorname{span}\{N_i:1\leq i\leq M\}$ that is orthogonal to every $\widetilde{N}_i$, $1\leq i\leq M-1$, with $(w_j)_{j=1}^M$ given in \eqref{eq:defwj}. Moreover, let $\varphi=g/\|g\|_2$ be the $L^2$-normalized orthogonal spline function corresponding to the mesh point $\tau_{i_0}$. Then,
\[
\|\varphi\|_{L^p(J)}\sim\|\varphi\|_p\sim |J|^{1/p-1/2},\qquad 1\leq p\leq \infty,
\]
where $J$ is the characteristic interval associated to the point $\tau_{i_0}$ given in Definition \ref{def:characteristic}.
\end{lem}
\begin{proof}
As a consequence of inequality \eqref{eq:lpstab} in Proposition \ref{prop:lpstab}, we get
\begin{equation}\label{eq:orthlpnorm:1}
\|g\|_{L^p(J)} \gtrsim |J|^{1/p} |w_{j^{(0)}}|.
\end{equation}
By Theorem \ref{thm:estwj}, $|w_{j^{(0)}}|\gtrsim b_{j^{(0)},j^{(0)}}$, where we recall that $(b_{ij})_{i,j=1}^M$ is the inverse of the Gram matrix $(a_{ij})_{i,j=1}^M=(\langle N_i,N_j\rangle)_{i,j=1}^M$. Now we invoke Lemma \ref{lem:estdiaginverse} and identity \eqref{eq:deboorlpstab} of Proposition \ref{prop:lpstab} to conclude from \eqref{eq:orthlpnorm:1}
\begin{align*}
\|g\|_{L^p(J)} & \gtrsim |J|^{1/p} b_{j^{(0)},j^{(0)}}\geq |J|^{1/p} a_{j^{(0)},j^{(0)}}^{-1} \\
& = |J|^{1/p} \|N_{j^{(0)}}\|^{-2}_2 \gtrsim |J|^{1/p} \nu_{j^{(0)}}^{-1}.
\end{align*}
Since, by construction, $J$ is of maximal length among the $k$ grid intervals whose union is $J^{(0)}$, we have $\nu_{j^{(0)}}=|J^{(0)}|\leq k|J|$ and finally get
\begin{equation}\label{eq:Jinterval1}
\|g\|_{L^p(J)} \gtrsim |J|^{1/p-1}.
\end{equation}
On the other hand, $g=\sum_{j=i_0-k}^{i_0}\alpha_j N_j^*$, so we use equation \eqref{eq:lpstabdual} of Proposition \ref{prop:lpstab} to obtain
\[
\|g\|_p \lesssim \Big(\sum_{j=i_0-k}^{i_0} |\alpha_j|^p \nu_j^{1-p}\Big)^{1/p}.
\]
Since $|\alpha_j|\leq 1$ for all $j$ and $\nu_{j^{(0)}}$ is minimal (up to the factor $2$) among the values $\nu_j,\ i_0-k\leq j\leq i_0$, we can estimate this further by
\[
\|g\|_p\lesssim \nu_{j^{(0)}}^{1/p-1}.
\]
We now use the inequality $|J|\leq \nu_{j^{(0)}}=|J^{(0)}|$ from the construction of $J$ to get
\begin{equation}\label{eq:Jinterval2}
\|g\|_p \lesssim |J|^{1/p-1}.
\end{equation}
The assertion of the lemma now follows from the two inequalities \eqref{eq:Jinterval1} and \eqref{eq:Jinterval2} after renormalization.
\end{proof}
By $d_{\mathcal T}(x)$ we denote the number of points in $\mathcal T$ between $x$ and $J$ counting endpoints of $J$. Correspondingly, for an interval $V\subset [0,1]$, by $d_{\mathcal T}(V)$ we denote the number of points in $\mathcal T$ between $V$ and $J$ counting endpoints of both $J$ and $V$.
\begin{lem}\label{lem:lporthspline}
Let $\mathcal T,\widetilde{\mathcal T}$ be as above and $g=\sum_{j=1}^M w_jN_j$ be orthogonal to every $\widetilde{N}_i$, $1\leq i\leq M-1$, with $(w_j)_{j=1}^M$ as in \eqref{eq:defwj}.
Moreover, let $\varphi=g/\|g\|_2$ be the normalized orthogonal spline function corresponding to $\tau_{i_0}$, and let $\gamma<1$ be the constant from Theorem \ref{thm:maintool}, which depends only on the spline order $k$. Then we have
\begin{equation}\label{eq:wj}
|w_j|\lesssim \frac{\gamma^{d_{\mathcal T}(\tau_j)}}{|J|+\operatorname{dist}(\operatorname{supp} N_j,J)+\nu_j}\quad\text{for all }1\leq j\leq M.
\end{equation}
Moreover, if $x<\inf J$, we have
\begin{equation}\label{eq:phiplinks}
\|\varphi\|_{L^p(0,x)} \lesssim \frac{\gamma^{d_{\mathcal T}(x)}|J|^{1/2}}{(|J|+\operatorname{dist}(x,J))^{1-1/p}},\qquad 1\leq p\leq \infty.
\end{equation}
Similarly, for $x>\sup J$,
\begin{equation}\label{eq:phiprechts}
\|\varphi\|_{L^p(x,1)} \lesssim \frac{\gamma^{d_{\mathcal T}(x)}|J|^{1/2}}{(|J|+\operatorname{dist}(x,J))^{1-1/p}},\qquad 1\leq p\leq \infty.
\end{equation}
\end{lem}
\begin{proof}
We begin by showing \eqref{eq:wj}. By definition of $w_j$ and $\alpha_\ell$ (see \eqref{eq:defwj} and \eqref{eq:alpha2}), we have
\[
|w_j|\lesssim \max_{i_0-k\leq \ell\leq i_0} |b_{j\ell}|.
\]
Now we invoke Theorem \ref{thm:maintool} to deduce
\begin{equation}\label{eq:startwj}
\begin{aligned}
|w_j| &\lesssim \frac{\max_{i_0-k\leq \ell\leq i_0}\gamma^{|\ell-j|}}{\min_{i_0-k\leq \ell\leq i_0}(\tau_{\max(\ell,j)+k}-\tau_{\min(\ell, j)})} \\
&\lesssim \frac{\gamma^{d_{\mathcal T}(\tau_j)}}{\min_{i_0-k\leq \ell\leq i_0}(\tau_{\max(\ell,j)+k}-\tau_{\min(\ell, j)})},
\end{aligned}
\end{equation}
where the second inequality follows from the location of $J$ in the interval $[\tau_{i_0-k},\tau_{i_0+k}]$. It remains to estimate the minimum in the denominator of this expression. Let $\ell$ be an arbitrary natural number in the range $i_0-k\leq \ell\leq i_0$. First we observe
\begin{equation}
\tau_{\max(\ell,j)+k}-\tau_{\min(\ell,j)}\geq \tau_{j+k}-\tau_j=|\operatorname{supp} N_j|=\nu_j.
\label{eq:phi1}
\end{equation}
Moreover, by definition of $J$,
\begin{equation}
\begin{aligned}
\tau_{\max(\ell,j)+k}-\tau_{\min(\ell,j)} & \geq \min_{i_0-k\leq r\leq i_0} (\tau_{r+k}-\tau_r) \geq |J^{(0)}|/2\geq |J|/2.
\end{aligned}
\label{eq:phi2}
\end{equation}
If now $j\geq \ell$,
\begin{equation}
\tau_{\max(\ell,j)+k}-\tau_{\min(\ell,j)}=\tau_{j+k}-\tau_\ell\geq \tau_{j+k}-\tau_{i_0}\geq \max(\tau_j-\sup J^{(0)},0),
\label{eq:phi3}
\end{equation}
since $\tau_{i_0}\leq \sup J^{(0)}$. But $\max(\tau_j-\sup J^{(0)},0)=\operatorname{dist}([\tau_j,\tau_{j+k}],J^{(0)})$ due to the fact that $\inf J^{(0)}\leq \tau_{i_0}\leq \tau_{\ell+k}\leq \tau_{j+k}$ for the current choice of $j$. Additionally, $\operatorname{dist}([\tau_j,\tau_{j+k}],J)\leq |J^{(0)}|+ \operatorname{dist}([\tau_j,\tau_{j+k}], J^{(0)})$. So, as a consequence of \eqref{eq:phi3},
\begin{equation}
\tau_{\max(\ell,j)+k}-\tau_{\min(\ell,j)}\geq \operatorname{dist}([\tau_j,\tau_{j+k}],J)-|J^{(0)}|.
\label{eq:phi4}
\end{equation}
An analogous calculation proves \eqref{eq:phi4} also in the case $j\leq \ell$. We now combine our inequality \eqref{eq:startwj} with \eqref{eq:phi1}, \eqref{eq:phi2} and \eqref{eq:phi4} to obtain the assertion \eqref{eq:wj}.
We now consider the integral $\big(\int_0^x |g(t)|^p\,\mathrm{d} t\big)^{1/p}$ for $x<\inf J$. The analogous estimate \eqref{eq:phiprechts} follows from a similar argument. Let $\tau_s$ be the first grid point in $\mathcal T$ to the right of $x$ and observe that $\operatorname{supp} N_r\cap [0,\tau_s)=\emptyset$ for $r\geq s$. Then we get
\[
\|g\|_{L^p(0,x)}\leq \|g\|_{L^p(0,\tau_s)}\leq \Big\|\sum_{i=1}^{s-1} w_i N_i\Big\|_p.
\]
By \eqref{eq:deboorlpstab} of Proposition \ref{prop:lpstab}, we conclude further
\[
\|g\|_{L^p(0,x)}\lesssim \big\|\big(w_i\nu_i^{1/p}\big)_{i=1}^{s-1}\big\|_{\ell^p}.
\]
We now use \eqref{eq:wj} for $w_i$ and get
\[
\|g\|_{L^p(0,x)}\lesssim \Big\|\Big(\frac{\gamma^{d_{\mathcal T}(\tau_i)}\nu_i^{1/p}}{|J|+\operatorname{dist}(\operatorname{supp} N_i,J)+\nu_i}\Big)_{i=1}^{s-1}\Big\|_{\ell^p}.
\]
Since $\nu_i\leq |J|+\operatorname{dist}(\operatorname{supp} N_i,J)+\nu_i$ for all $1\leq i\leq M$ and $\operatorname{dist}(\operatorname{supp} N_i,J)+\nu_i\geq \operatorname{dist}(x,J)$ for all $1\leq i\leq s-1$, we estimate the last display further to get
\[
\|g\|_{L^p(0,x)}\lesssim \big(|J|+\operatorname{dist}(x,J)\big)^{-1+1/p}\|(\gamma^{d_{\mathcal T}(\tau_i)})_{i=1}^{s-1}\|_{\ell^p}.
\]
The $\ell^p$-norm in the last display is dominated by a geometric series whose largest term is $\gamma^{d_{\mathcal T}(x)}$, so we obtain
\[
\|g\|_{L^p(0,x)} \lesssim \frac{\gamma^{d_{\mathcal T}(x)}}{(|J|+\operatorname{dist}(x,J))^{1-1/p}}.
\]
This concludes the proof of the lemma, since we have seen in the proof of Lemma \ref{lem:orthsplineJinterval} that $\|g\|_2\sim |J|^{-1/2}$.
\end{proof}
\begin{rem}\label{rem:linftyorthspline}
Analogously we obtain the inequality
\begin{align*}
\sup_{\tau_{j-1}\leq t\leq \tau_j}|\varphi(t)|&\lesssim \max_{j-k\leq i\leq j-1} \frac{\gamma^{d_{\mathcal T}(\tau_i)}|J|^{1/2}}{|J|+\operatorname{dist}(\operatorname{supp} N_i,J)+\nu_i} \\
&\lesssim \frac{\gamma^{d_{\mathcal T}(\tau_{j})}|J|^{1/2}}{|J|+\operatorname{dist}(J,[\tau_{j-1},\tau_j])+|[\tau_{j-1},\tau_j]|},
\end{align*}
since for all integers $i$ with $j-k\leq i\leq j-1$ we have $[\tau_{j-1},\tau_j]\subset \operatorname{supp} N_i$.
\end{rem}
\section{Combinatorics of characteristic intervals}\label{sec:comb}
Let $(t_n)_{n=0}^\infty$ be an admissible sequence of points and $(f_n)_{n=-k+2}^\infty$ the corresponding orthonormal spline functions of order $k$. For $n\geq 2$, the partition $\mathcal T_n$ associated with $f_n$ is defined to consist of the grid points $(t_j)_{j=0}^n$, where the knots $t_0=0$ and $t_1=1$ both have multiplicity $k$ in $\mathcal T_n$. If $n\geq 2$, we denote by $J_n^{(0)}$ and $J_n$ the characteristic intervals $J^{(0)}$ and $J$ from Definition \ref{def:characteristic} associated to the new grid point $t_n$. If $n$ is in the range $-k+2\leq n\leq 1$, we additionally set $J_n:=[0,1]$. For any $x\in [0,1]$, we define $d_n(x)$ to be the number of grid points in $\mathcal T_n$ between $x$ and $J_n$ counting endpoints of $J_n$. Moreover, for a subinterval $V$ of $[0,1]$, we denote by $d_n(V)$ the number of knots in $\mathcal T_n$ between $V$ and $J_n$ counting endpoints of both $V$ and $J_n$. Finally, if $\mathcal T_n$ is of the form
\begin{align*}
\mathcal T_n=(0=\tau_{n,1}=\dots=\tau_{n,k}<\tau_{n,k+1}&\leq\dots\leq\tau_{n,n+k-1}\\
&<\tau_{n,n+k}=\dots=\tau_{n,n+2k-1}=1),
\end{align*}
and if $t_n=\tau_{n,i_0}$, then we denote by $t_n^{+\ell}$ the point $\tau_{n,i_0+\ell}$. For the proof of the central Lemma \ref{lem:jinterval} of this section, we need the combinatorial lemma of Erd\H{o}s and Szekeres:
\begin{lem}[Erd\H{o}s-Szekeres]\label{lem:erdsze}
Let $n$ be a positive integer. Every sequence $(x_1,\dots,x_{(n-1)^2+1})$ of real numbers of length $(n-1)^2+1$ contains a monotone subsequence of length $n$.
\end{lem}
We now use this result to prove a lemma about the combinatorics of characteristic intervals $J_n$:
\begin{lem}\label{lem:jinterval}
Let $x,y\in (t_n)_{n=0}^\infty$ be such that $x<y$ and let $0\leq \beta\leq 1/2$.
Then there exists a constant $F_{k}$ only depending on $k$ such that \[ N_0:=\card\{n:J_n\subseteq [x,y], |J_n|\geq(1-\beta)|[x,y]|\} \leq F_{k}, \] where $\card E$ denotes the cardinality of the set $E$. \end{lem} \begin{proof} Let $N_0$ be defined as above. If $n$ is an index such that $J_n\subseteq [x,y]$ and $|J_n|\geq (1-\beta)|[x,y]|$, then, by definition of $J_n$, we have $t_n\in[0,(1-\beta)x+\beta y]\cup [\beta x+(1-\beta)y,1]$. Thus, by the pigeon hole principle, in one of the two sets $[0,(1-\beta)x+\beta y]$ and $[\beta x+(1-\beta)y,1]$, there are at least \[ N_1:=\Big\lfloor \frac{N_0-1}{2}\Big\rfloor +1 \] indices $n$ with $J_n\subset [x,y]$ and $|J_n|\geq (1-\beta)|[x,y]|$. Assume without loss of generality that this set is $[\beta x+(1-\beta)y,1]$. Now, let $(n_i)_{i=1}^{N_1}$ be an increasing sequence of indices such that $t_{n_i}\in [\beta x+(1-\beta)y,1]$ and $J_{n_i}\subset [x,y]$, $ |J_{n_i}|\geq (1-\beta)|[x,y]|$ for every $1\leq i\leq N_1$. Observe that for such $i$, $J_{n_i}$ is to the left of $t_{n_i}$. By the Erd\H{o}s-Szekeres-Lemma \ref{lem:erdsze}, the sequence $(t_{n_i})_{i=1}^{N_1}$ contains a monotone subsequence $(t_{m_i})_{i=1}^{N_2}$ of length \[ N_2:=\lfloor\sqrt{N_1-1}\rfloor +1. \] If $(t_{m_i})_{i=1}^{N_2}$ is increasing, we obtain that $N_2\leq k$. Indeed, if $N_2\geq k+1$, there are at least $k$ points (namely $t_{m_1},\dots,t_{m_k}$) in the sequence $\mathcal{T}_{m_{k+1}}$ between $\inf J_{m_{k+1}}$ and $t_{m_{k+1}}$. This is in conflict with the location of $J_{m_{k+1}}$. If $(t_{m_i})_{i=1}^{N_2}$ is decreasing, we let \[ s_1\leq \dots \leq s_L \] be an enumeration of the elements in the sequence $\mathcal{T}_{m_1}$ such that $\inf J_{m_1}\leq s\leq t_{m_1}$. By definition of $J_{m_1}$, we obtain that $L\leq k+1$. Thus, there are at most $k$ intervals $[s_\ell,s_{\ell+1}], 1\leq \ell\leq L-1$, contained in $[\inf J_{m_1},t_{m_1}]$. Again, by the pigeon hole principle, there exists one index $1\leq \ell\leq L-1$ such that the interval $[s_\ell,s_{\ell+1}]$ contains (at least) \[ N_3:=\Big\lfloor \frac{N_2-1}{k}\Big\rfloor +1 \] points of the sequence $(t_{m_i})_{i=1}^{N_2}$. Let $(t_{r_i})_{i=1}^{N_3}$ be a subsequence of length $N_3$ of such points. Furthermore, define \[ N_4:=\Big\lfloor \frac{N_3}{k}\Big\rfloor. \] Since $(t_{r_i})_{i=1}^{N_3}$ is decreasing, we have a quantity of $N_4$ disjoint intervals \[ I_\mu:=(t_{r_{\mu\cdot k}},t_{r_{\mu\cdot k}}^{+k})\subseteq [s_\ell,s_{\ell+1}],\qquad 1\leq \mu\leq N_4. \] Consequently, there exists (at least) one index $\mu$ such that \[ |I_\mu|\leq \frac{|[s_\ell,s_{\ell+1}]|}{N_4}. \] We next observe that the definition of $J_{m_1}$ yields \begin{equation*} |J_{m_1}|\geq |[s_\ell,s_{\ell+1}]|. \end{equation*} We thus get \begin{equation}\label{eq:J02} \begin{aligned} |J_{r_{\mu\cdot k}}^{(0)}|&\geq |J_{r_{\mu\cdot k}}|\geq (1-\beta)|[x,y]|\geq (1-\beta)|J_{m_1}|\\ &\geq (1-\beta)|[s_\ell,s_{\ell+1}]|\geq (1-\beta)N_4|I_\mu|. \end{aligned} \end{equation} On the other hand, the construction of $J_{r_{\mu\cdot k}}^{(0)}$ implies in particular \begin{equation}\label{eq:J01} |J_{r_{\mu\cdot k}}^{(0)}|\leq 2(t_{r_{\mu\cdot k}}^{+k}-t_{r_{\mu \cdot k}}) = 2|I_\mu|. \end{equation} The inequalities \eqref{eq:J02} and \eqref{eq:J01} imply $N_4\leq 2/(1-\beta)\leq 4$. Since the definition of $N_4$ involves only $k$, this proves the assertion of the lemma. 
\end{proof} \section{Technical estimates}\label{sec:techn} \begin{lem}\label{lem:techn1} Let $f=\sum_{n=-k+2}^\infty a_n f_n$ and $V$ be an open subinterval of $[0,1]$. Then, \begin{align} \int_{V^c}\sum_{j\in\Gamma}|a_j f_j(t)|\,\mathrm{d} t&\lesssim\int_V\Big(\sum_{j\in\Gamma}|a_j f_j(t)|^2\Big)^{1/2}\,\mathrm{d} t, \label{eq:lemtechn1:1} \end{align} where \[ \Gamma:=\{j : J_j\subset V\text{ and }-k+2\leq j<\infty\}. \] \end{lem} \begin{proof} First, assume that $|V|=1$. Then \eqref{eq:lemtechn1:1} holds trivially. In the following, we assume that $|V|<1$. We define $x:=\inf V$, $y:=\sup V$ and fix an index $n\in \Gamma$. Observe that in this case, the definition of $\Gamma$ implies $n\geq 2$, since $J_j=[0,1]$ for $-k+2\leq j\leq 1$. We only estimate the integral in \eqref{eq:lemtechn1:1} over the interval $[y,1]$. The integral over $[0,x]$ is estimated similarly. Lemma \ref{lem:lporthspline} implies \[ \int_y^1|f_n(t)|\,\mathrm{d} t\lesssim \gamma^{d_n(y)}|J_n|^{1/2}. \] Applying Lemma \ref{lem:orthsplineJinterval} yields \begin{equation}\label{eq:techn1:1} \int_y^1 |f_n(t)|\,\mathrm{d} t\lesssim \gamma^{d_n(y)}\int_{J_n}|f_n(t)|\,\mathrm{d} t. \end{equation} Now choose $\beta=1/4$ and let $J_n^\beta$ be the unique closed interval that satisfies \[ |J_n^\beta|=\beta|J_n|\quad\text{and}\quad \inf J_n^\beta=\inf J_n. \] Since $f_n$ is a polynomial of order $k$ on the interval $J_n$, we apply Proposition \ref{prop:poly} to \eqref{eq:techn1:1} and estimate further \begin{equation}\label{eq:techn1:2} \int_y^1 |a_n f_n(t)|\,\mathrm{d} t \lesssim \gamma^{d_n(y)} \int_{J_n^\beta} |a_n f_n(t)|\,\mathrm{d} t\leq \gamma^{d_n(y)} \int_{J_n^\beta} \Big(\sum_{j\in\Gamma}|a_jf_j(t)|^2\Big)^{1/2}\,\mathrm{d} t \end{equation} Define $\Gamma_s:=\{j\in\Gamma:d_j(y)=s\}$ for $s\geq 0$. For fixed $s\geq 0$ and $j_1,\,j_2\in\Gamma_s$, we have either \[ J_{j_1}\cap J_{j_2}=\emptyset\quad\text{or}\quad \sup J_{j_1}=\sup J_{j_2}. \] So, Lemma \ref{lem:jinterval} implies that there exists a constant $F_{k}$ only depending on $k$, such that each point $t\in V$ belongs to at most $F_{k}$ intervals $J_{j}^\beta$, $j\in\Gamma_s$. Thus, summing over $j\in \Gamma_s$, we get from \eqref{eq:techn1:2} \begin{equation*} \begin{aligned} \sum_{j\in\Gamma_s}\int_y^1 |a_jf_j(t)|\,\mathrm{d} t &\lesssim \sum_{j\in\Gamma_s}\gamma^s \int_{J_{j}^\beta} \Big(\sum_{\ell\in\Gamma}|a_\ell f_\ell(t)|^2\Big)^{1/2} \,\mathrm{d} t \\ &\lesssim \gamma^s\int_V\Big(\sum_{\ell \in\Gamma}|a_\ell f_\ell(t)|^2\Big)^{1/2} \,\mathrm{d} t. \end{aligned} \end{equation*} Finally, we sum over $s\geq 0$ to obtain inequality \eqref{eq:lemtechn1:1}. \end{proof} Let $g$ be a real-valued function defined on the closed unit interval. In the following, we denote by $[g>\lambda]$ the set $\{x\in[0,1]: g(x)>\lambda\}$ for any number $\lambda>0$. \begin{lem}\label{lem:triv} Let $f=\sum_{n=-k+2}^\infty a_n f_n$ with only finitely many nonzero coefficients $a_n$, $\lambda>0,\ r<1$ and \[ E_\lambda=[Sf>\lambda],\quad B_{\lambda,r}=[\mathcal{M}\ensuremath{\mathbbm 1}_{E_\lambda}>r]. \] Then we have \[ E_\lambda\subset B_{\lambda,r}. \] \end{lem} \begin{proof} Let $t\in E_\lambda$ be fixed. The square function $Sf=\big(\sum_{n=-k+2}^\infty |a_nf_n|^2\big)^{1/2}$ is continuous except possibly at finitely many grid points, where $Sf$ is at least continuous from the right. As a consequence, for $t\in E_\lambda$, there exists an interval $I\subset E_\lambda$ such that $t\in I$. 
This implies the following estimate:
\begin{align*}
(\mathcal{M}\ensuremath{\mathbbm 1}_{E_\lambda})(t)&=\sup_{U\ni t}|U|^{-1}\int_U \ensuremath{\mathbbm 1}_{E_\lambda}(x)\,\mathrm{d} x \\
&=\sup_{U\ni t}\frac{|E_\lambda\cap U|}{|U|}\geq \frac{|E_\lambda\cap I|}{|I|}=\frac{|I|}{|I|}=1>r.
\end{align*}
The above inequality shows $t\in B_{\lambda,r}$, proving the lemma.
\end{proof}
\begin{lem}\label{lem:Sf}
Let $f=\sum_{n=-k+2}^\infty a_n f_n$ with only finitely many nonzero coefficients $a_n$, and let $\lambda>0$ and $r<1$. Define
\[
E_\lambda:=[Sf>\lambda],\qquad B_{\lambda,r}:= [\mathcal M\ensuremath{\mathbbm 1}_{E_\lambda}>r].
\]
If
\[
\Lambda=\{n:J_n\not\subset B_{\lambda,r}\text{ and }-k+2\leq n<\infty\}\qquad\text{and}\qquad g=\sum_{n\in\Lambda} a_nf_n,
\]
we have
\begin{equation}\label{eq:Sfid}
\int_{E_\lambda}Sg(t)^2\,\mathrm{d} t\lesssim_r \int_{E_\lambda^c}Sg(t)^2\,\mathrm{d} t.
\end{equation}
\end{lem}
\begin{proof}
First, we observe that in the case $B_{\lambda,r}=[0,1]$, the index set $\Lambda$ is empty and thus \eqref{eq:Sfid} holds trivially. So let us assume $B_{\lambda,r}\neq [0,1]$. We start the proof of \eqref{eq:Sfid} with an application of Lemma \ref{lem:orthsplineJinterval} (for $n\geq 2$) and of the fact that $J_n=[0,1]$ for $n\leq 1$ to obtain
\[
\int_{E_\lambda} Sg(t)^2\,\mathrm{d} t=\sum_{n\in\Lambda}\int_{E_\lambda}|a_n f_n(t)|^2\,\mathrm{d} t\lesssim \sum_{n\in\Lambda} \int_{J_n}|a_nf_n(t)|^2\,\mathrm{d} t.
\]
We split the latter expression into the parts
\[
I_1:=\sum_{n\in\Lambda} \int_{J_n\cap E_\lambda^c}|a_nf_n(t)|^2\,\mathrm{d} t,\quad I_2:=\sum_{n\in\Lambda} \int_{J_n\cap E_\lambda}|a_nf_n(t)|^2\,\mathrm{d} t.
\]
For $I_1$, we clearly have
\begin{equation}\label{eq:lemSf:1}
I_1\leq \sum_{n\in\Lambda} \int_{E_\lambda^c}|a_nf_n(t)|^2\,\mathrm{d} t=\int_{E_\lambda^c} Sg(t)^2\,\mathrm{d} t.
\end{equation}
It remains to estimate $I_2$. First we observe that by Lemma \ref{lem:triv}, $E_\lambda\subset B_{\lambda,r}$. Since the set $B_{\lambda,r}=[\mathcal M\ensuremath{\mathbbm 1}_{E_\lambda}>r]$ is open in $[0,1]$, we decompose it into a countable collection of disjoint open subintervals $(V_j)_{j=1}^\infty$ of $[0,1]$. Utilizing this decomposition, we estimate
\begin{equation}\label{eq:lemSf:2}
I_2\leq \sum_{n\in\Lambda}\sum_{j:|J_n\cap V_j|>0} \int_{J_n\cap V_j}|a_nf_n(t)|^2\,\mathrm{d} t.
\end{equation}
If the indices $n$ and $j$ are such that $n\in\Lambda$ and $|J_n\cap V_j|>0$, then, by definition of $\Lambda$, $J_n$ is an interval containing at least one endpoint $x\in\{\inf V_j,\sup V_j\}$ of $V_j$ for which
\[
\mathcal{M}\ensuremath{\mathbbm 1}_{E_\lambda}(x)\leq r.
\]
This implies
\[
|E_\lambda\cap J_n\cap V_j|\leq r |J_n\cap V_j|\quad\text{or equivalently}\quad |E_\lambda^c\cap J_n\cap V_j|\geq (1-r)|J_n\cap V_j|.
\]
Using this inequality and the fact that $|f_n|^2$ is a polynomial of order $2k-1$ on $J_n$, we can apply Proposition \ref{prop:poly} to conclude from \eqref{eq:lemSf:2}
\begin{equation*}
\begin{aligned}
I_2&\lesssim_r \sum_{n\in\Lambda}\sum_{j:|J_n\cap V_j|> 0} \int_{E_\lambda^c\cap J_n\cap V_j}|a_nf_n(t)|^2\,\mathrm{d} t \\
&\leq \sum_{n\in\Lambda} \int_{E_\lambda^c\cap J_n\cap B_{\lambda,r}}|a_nf_n(t)|^2\,\mathrm{d} t \\
&\leq \sum_{n\in\Lambda} \int_{E_\lambda^c}|a_nf_n(t)|^2\,\mathrm{d} t=\int_{E_\lambda^c}Sg(t)^2\,\mathrm{d} t.
\end{aligned}
\end{equation*}
The latter inequality combined with \eqref{eq:lemSf:1} completes the proof of the lemma.
\end{proof}
\begin{lem}\label{lem:techn2}
Let $V$ be an open subinterval of $[0,1]$, $x:=\inf V$, $y:=\sup V$ and $f=\sum_{n=-k+2}^\infty a_n f_n \in L^p[0,1]$ for $1<p<2$ with $\operatorname{supp} f\subset V$. Let $R>1$ be an arbitrary number satisfying $R\gamma<1$, where $\gamma$ is the constant from Theorem \ref{thm:maintool}. Then,
\begin{equation}\label{eq:lemtechn2}
\sum_{n=\polyfun(V)}^\infty R^{p d_n(V)}|a_n|^p \|f_n\|_{L^p(\widetilde{V}^c)}^p\lesssim_{p,R} \|f\|_p^p,
\end{equation}
where $\polyfun(V)=\min\{n:\mathcal{T}_n\cap V\neq\emptyset\}$ and $\widetilde{V}=(\widetilde{x},\widetilde{y})$ with $\widetilde{x}=x-2|V|$ and $\widetilde{y}=y+2|V|$.
\end{lem}
\begin{proof}
First observe that $\widetilde{V}^c=[0,\widetilde{x}]\cup [\widetilde{y},1]$. We estimate only the part corresponding to the interval $[0,\widetilde{x}]$ and assume that $\widetilde{x}>0$. The other part is treated analogously. Let $m\geq 0$ and define
\begin{equation}\label{eq:techn2:0.5}
T_{m}:=\{n\in\mathbb{N}:n\geq \polyfun(V),\ \card\{i\leq n:\widetilde{x}\leq t_i\leq x\}=m\},
\end{equation}
where $\card E$ is the cardinality of a set $E$. We remark that the index set $T_{m}$ is finite, since the sequence $(t_n)_{n=0}^\infty$ is dense in the unit interval $[0,1]$. We now split the index set $T_{m}$ further into the following six subcollections.
\begin{align*}
T_{m}^{(1)}&= \{n\in T_{m}:J_n\subset [\widetilde{x},x]\},\\
T_{m}^{(2)}&= \{n\in T_{m}:\widetilde{x}\in J_n, |J_n\cap [\widetilde{x},x]|\geq |V|, J_n\not\subset [\widetilde{x},x]\},\\
T_{m}^{(3)}&= \{n\in T_{m}: J_n\subset [0,\widetilde{x}] \text{ or } \\
&\qquad\big(\widetilde{x}\in J_n \text{ with }|J_n\cap [\widetilde{x},x]|\leq |V| \text{ and }J_n\not\subset [\widetilde{x},x]\big)\},\\
T_{m}^{(4)}&= \{n\in T_{m}: x\in J_n, |J_n\cap [\widetilde{x},x]|\geq |V|, J_n\not\subset[\widetilde{x},x]\},\\
T_{m}^{(5)}&= \{n\in T_{m}:J_n\subset [x,\widetilde{y}]\text{ or } \\
&\qquad\big(x\in J_n\text{ with }|J_n\cap[\widetilde{x},x]|\leq |V|\text{ and }J_n\not\subset[\widetilde{x},x]\big)\},\\
T_{m}^{(6)}&= \{n\in T_{m}:J_n\subset [\widetilde{y},1]\text{ or }\big(\widetilde{y}\in J_n\text{ with }J_n\not\subset [x,\widetilde{y}]\big)\}.
\end{align*}
We treat each of these index sets separately. Before we begin examining sums like in \eqref{eq:lemtechn2} where $n$ is restricted to one of the above index sets, we note that, by the definition of $a_n=\langle f,f_n\rangle$, the support assumption on $f$ and H\"older's inequality, we have for all $n$
\begin{equation}\label{eq:techn2:1}
|a_n|^p\leq \int_V |f(t)|^p\,\mathrm{d} t\cdot \Big(\int_V |f_n(t)|^{p'}\,\mathrm{d} t\Big)^{p-1},
\end{equation}
where $p'=p/(p-1)$ denotes the conjugate Hölder exponent to $p$.
\textsc{Case 1: } $n\in T_{m}^{(1)}=\{n\in T_{m}:J_n\subset [\widetilde{x},x]\}$.\\
Let $\widetilde{T}_{m}^{(1)}:=T_{m}^{(1)}\setminus \{\min T_{m}^{(1)}\}$. By definition, the interval $J_n$ is at most $k-1$ grid points in $\mathcal{T}_n$ away from $t_n$. Since the number $m$ of grid points between $\widetilde{x}$ and $x$ is constant for all $n\in T_{m}$, there are only $2(k-1)$ possibilities for $J_n$ with $n\in \widetilde{T}_{m}^{(1)}$. By Lemma \ref{lem:jinterval}, applied with $\beta=0$, every such interval can be the characteristic interval $J_n$ for at most $F_{k}$ indices $n$ and thus,
\begin{equation}\label{eq:techn2:2}
\card T_{m}^{(1)}\leq 2(k-1)F_k+1.
\end{equation}
By Lemma \ref{lem:lporthspline} and Lemma \ref{lem:orthsplineJinterval} respectively,
\begin{equation}\label{eq:techn2:3}
\int_0^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t\lesssim \gamma^{p d_n(\widetilde{x})}\|f_n\|_p^p\quad\text{ and } \int_V |f_n(t)|^{p'}\,\mathrm{d} t\lesssim \gamma^{p' d_n(V)}\|f_n\|_{p'}^{p'}
\end{equation}
for $n\in T_{m}^{(1)}$. Furthermore, $d_n(\widetilde{x})+d_n(V)=m$ by definition of $d_n$, the location of $J_n$ and the fact that $n\in T_{m}^{(1)}$. So, using \eqref{eq:techn2:1}, \eqref{eq:techn2:3} and Lemma \ref{lem:orthsplineJinterval} respectively,
\begin{align*}
\sum_{n\in T_{m}^{(1)}}& R^{p d_n(V)} |a_n|^p \int_0^{\widetilde{x}} |f_n(t)|^p\,\mathrm{d} t \\
&\leq \sum_{n\in T_{m}^{(1)}} R^{p d_n(V)} \int_V |f(t)|^p\,\mathrm{d} t\cdot\Big(\int_V|f_n(t)|^{p'}\,\mathrm{d} t\Big)^{p-1} \int_0^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t\\
&\lesssim \sum_{n\in T_{m}^{(1)}} R^{pd_n(V)} \gamma^{p(d_n(\widetilde{x})+d_n(V))} \|f_n\|_p^p \|f_n\|_{p'}^p \int_V|f(t)|^p\,\mathrm{d} t \\
&\lesssim \sum_{n\in T_{m}^{(1)}} (R\gamma)^{pm} \int_{V} |f(t)|^p\,\mathrm{d} t.
\end{align*}
Finally, we employ \eqref{eq:techn2:2} to obtain
\begin{equation}\label{eq:techn2:4}
\sum_{n\in T_{m}^{(1)}} R^{p d_n(V)} |a_n|^p \int_0^{\widetilde{x}} |f_n(t)|^p\,\mathrm{d} t \lesssim (R\gamma)^{pm}\int_V |f(t)|^p\,\mathrm{d} t,
\end{equation}
which concludes the proof of Case 1.
\textsc{Case 2: } $n\in T_{m}^{(2)}=\{n\in T_{m}:\widetilde{x}\in J_n, |J_n\cap [\widetilde{x},x]|\geq |V|, J_n\not\subset [\widetilde{x},x]\}$. \\
In this case we have $d_n(V)=m$ and thus Lemma \ref{lem:lporthspline} implies
\begin{equation*}
\int_V |f_n(t)|^{p'}\,\mathrm{d} t\leq \|f_n\|_{L^\infty(V)}^{p'}|V| \lesssim \gamma^{p'm}|J_n|^{-p'/2}|V|.
\end{equation*}
So we use \eqref{eq:techn2:1} and this estimate to obtain
\begin{equation*}
\begin{aligned}
|a_n|^p\|f_n\|_p^p &\leq \int_V |f(t)|^p\,\mathrm{d} t\cdot \Big(\int_V |f_n(t)|^{p'}\,\mathrm{d} t\Big)^{p-1} \|f_n\|_p^p \\
&\lesssim \int_V |f(t)|^p\,\mathrm{d} t\cdot \gamma^{pm}|J_n|^{-p/2}|V|^{p-1} \|f_n\|_p^p.
\end{aligned}
\end{equation*}
We continue and employ Lemma \ref{lem:orthsplineJinterval} to get further
\begin{equation}\label{eq:techn2:5}
\begin{aligned}
|a_n|^p\|f_n\|_p^p &\lesssim \gamma^{pm} |J_n|^{-p/2+1-p/2}|V|^{p-1} \int_V |f(t)|^p\,\mathrm{d} t \\
&\leq \gamma^{pm}|J_n|^{1-p}|V|^{p-1}\|f\|_p^p.
\end{aligned}
\end{equation}
If $n_0<n_1<\dots <n_s$ is an enumeration of all elements in $T_{m}^{(2)}$, we have by definition of $T_{m}^{(2)}$
\begin{equation*}
J_{n_0}\supset J_{n_1}\supset \dots \supset J_{n_s}\qquad \text{and} \qquad|J_{n_s}|\geq |V|.
\end{equation*}
Thus, Lemma \ref{lem:jinterval} and the fact that $1<p<2$ imply
\begin{equation}\label{eq:techn2:6}
\sum_{n\in T_{m}^{(2)}} |J_n|^{1-p} \sim_p |J_{n_s}|^{1-p}\leq |V|^{1-p}.
\end{equation}
We finally use \eqref{eq:techn2:5} and \eqref{eq:techn2:6} to conclude
\begin{equation}\label{eq:techn2:7}
\begin{aligned}
\sum_{n\in T_{m}^{(2)}} R^{p d_n(V)} |a_n|^p \|f_n\|_p^p&\lesssim (R\gamma)^{pm}|V|^{p-1}\|f\|_p^p \sum_{n\in T_{m}^{(2)}} |J_n|^{1-p} \\
&\lesssim_p (R\gamma)^{pm} \|f\|_p^p.
\end{aligned}
\end{equation}
\textsc{Case 3: }$n\in T_{m}^{(3)}=\{n\in T_{m}: J_n\subset [0,\widetilde{x}] \text{ or } \big(\widetilde{x}\in J_n \text{ with }|J_n\cap [\widetilde{x},x]|\leq |V| \text{ and }J_n\not\subset [\widetilde{x},x]\big)\}$.
For $n\in T_{m}^{(3)}$, we denote by the finite sequence $(x_i)_{i=1}^m$ the points in $\mathcal{T}_n\cap [\widetilde{x},x]$ in increasing order and counting multiplicities. If there exists an index $n\in T_{m}^{(3)}$ such that $x_1$ is the right endpoint of $J_n$ and $\widetilde{x}\in J_n$, we define $x^*:=x_1$. If not, we set $x^*:=\widetilde{x}$. By definition of $T_{m}^{(3)}$ and $x^*$, we have \begin{equation}\label{eq:techn2:8} |V|\leq |[x^*,x]|\leq 2|V|. \end{equation} Furthermore, for all $n\in T_{m}^{(3)}$, \begin{equation*} J_n\subset [0,x^*]\quad \text{and}\quad |[x^*,x]\cap \mathcal{T}_n|=m. \end{equation*} Moreover, \begin{equation}\label{eq:techn2:10} m+d_n(x^*)-k\leq d_n(V)\leq m+d_n(x^*), \end{equation} where the exact value of $d_n(V)$ depends on the multiplicity of $x^*$ in $\mathcal{T}_n$ (which cannot exceed $k$). By Lemma \ref{lem:lporthspline} and \eqref{eq:techn2:10} we have \begin{equation*} \sup_{t\in V} |f_n(t)|\lesssim \gamma^{m+d_n(x^*)}\frac{|J_n|^{1/2}}{|J_n|+\operatorname{dist}(x,J_n)}. \end{equation*} We use this inequality to get \begin{equation}\label{eq:techn2:12} \int_V |f_n(t)|^{p'}\,\mathrm{d} t\lesssim |V|\cdot \gamma^{p'(m+d_n(x^*))}\frac{|J_n|^{p'/2}}{(|J_n|+\operatorname{dist}(x,J_n))^{p'}}. \end{equation} Employing \eqref{eq:techn2:1}, \eqref{eq:techn2:12} and Lemma \ref{lem:orthsplineJinterval} respectively, \begin{equation*} \begin{aligned} R^{p d_n(V)}&|a_n|^p \|f_n\|_p^p \\ &\leq R^{pd_n(V)}\int_V |f(t)|^p\,\mathrm{d} t\cdot\Big(\int_V |f_n(t)|^{p'}\,\mathrm{d} t\Big)^{p-1}\|f_n\|_p^p \\ &\lesssim R^{pd_n(V)}\|f\|_p^p |V|^{p-1}\gamma^{p(m+d_n(x^*))}\frac{|J_n|^{p/2}}{(|J_n|+\operatorname{dist}(x,J_n))^{p}} \|f_n\|_p^p \\ &\lesssim R^{pd_n(V)}\|f\|_p^p |V|^{p-1}\gamma^{p(m+d_n(x^*))}\frac{|J_n|}{(|J_n|+\operatorname{dist}(x,J_n))^{p}}. \end{aligned} \end{equation*} Inequality \eqref{eq:techn2:10} then yields \begin{equation}\label{eq:techn2:13} R^{p d_n(V)}|a_n|^p \|f_n\|_p^p \leq (R\gamma)^{p(m+d_n(x^*))}\|f\|_p^p |V|^{p-1}\frac{|J_n|}{(|J_n|+\operatorname{dist}(x,J_n))^{p}}. \end{equation} We now have to sum this inequality. In order to do this we split our analysis depending on the value of $d_n(x^*)$. For fixed $j\in\mathbb{N}_0$ we view $n\in T_{m}^{(3)}$ with $d_n(x^*)=j$. Let $\beta=1/4$, then, by Lemma \ref{lem:jinterval}, each point $t$ (which is not a grid point) belongs to at most $F_{k}$ intervals $J_{n}^\beta$ with $n\in T_{m}^{(3)}$ and $d_n(x^*)=j$. Here $J_n^\beta$ is the unique closed interval that satisfies the requirements \[ |J_n^\beta|=\beta|J_n|\quad\text{and}\quad \inf J_n^\beta=\inf J_n. \] Furthermore, for $t\in J_n$, we have \begin{equation*} |J_n|+\operatorname{dist}(x,J_n)\geq x-t. \end{equation*} These facts allow us to estimate \begin{equation*} \begin{aligned} \sum_{\substack{n\in T_{m}^{(3)}\\ d_n(x^*)=j}}\frac{|J_n||V|^{p-1}}{(|J_n|+\operatorname{dist}(x,J_n))^p}&\leq \beta^{-1} \sum_{\substack{n\in T_{m}^{(3)}\\ d_n(x^*)=j}}\int_{J_n^\beta}\frac{|V|^{p-1}}{(x-t)^p}\,\mathrm{d} t \\ &\leq \frac{F_{k}}{\beta} |V|^{p-1}\int_{-\infty}^{x^*}(x-t)^{-p} \,\mathrm{d} t\\ &\lesssim_{p} \frac{|V|^{p-1}}{(x-x^*)^{p-1}}\leq 1, \end{aligned} \end{equation*} where in the last step we used \eqref{eq:techn2:8}. Combining \eqref{eq:techn2:13} and the latter and summing over $j$ (here we use the fact that $R\gamma<1$), we arrive at \begin{equation}\label{eq:techn2:16} \sum_{n\in T_{m}^{(3)}} R^{pd_n(V)}|a_n|^p\|f_n\|_p^p\lesssim_{p,R} (R\gamma)^{pm}\|f\|_p^p. 
\end{equation}
\textsc{Case 4: }$n\in T_{m}^{(4)}=\{n\in T_{m}: x\in J_n, |J_n\cap [\widetilde{x},x]|\geq |V|, J_n\not\subset[\widetilde{x},x]\}$. \\
We can ignore the case $m=0$ and the case where $m=1$ and $[\widetilde{x},x]\cap \mathcal{T}_n=\{x\}$, since these are settled in Case 2. We thus define $\widetilde{T}_{m}^{(4)}$ as the set of all remaining indices from $T_{m}^{(4)}$. Let $n\in \widetilde{T}_{m}^{(4)}$. Then the definition of $T_{m}^{(4)}$ implies
\begin{equation}\label{eq:techn2:17}
d_n(V)=d_n([x,y])=0.
\end{equation}
Moreover, there exists at least one point of $\mathcal{T}_n$ in $V$ (since $n\geq \polyfun(V)$ for $n\in T_m$) and at least one point of $\mathcal{T}_n$ in $[\widetilde{x},x]$ (since $m\geq 1$). Thus we have the following two-sided bound on $|J_n|$:
\begin{equation}\label{eq:techn2:18}
|V|\leq |J_n|\leq 3|V|.
\end{equation}
Since $x\in J_n$ for all $n\in \widetilde{T}_{m}^{(4)}$, the family $\{J_n:n\in \widetilde{T}_m^{(4)}\}$ forms a decreasing collection of sets. Inequality \eqref{eq:techn2:18} and a multiple application of Lemma \ref{lem:jinterval} with sufficiently large $\beta$ give us a constant $c_k$ depending only on $k$ such that
\begin{equation}\label{eq:techn2:19}
\card \widetilde{T}_{m}^{(4)}\leq c_k.
\end{equation}
We employ Lemma \ref{lem:lporthspline} and Lemma \ref{lem:orthsplineJinterval} respectively to get
\begin{equation}\label{eq:techn2:20}
\int_0^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t\lesssim \gamma^{pm}|J_n|^{p/2-p+1}=\gamma^{pm}|J_n|^{1-p/2}\lesssim \gamma^{pm}\|f_n\|_p^p.
\end{equation}
Thus we are able to conclude
\begin{equation*}
\begin{aligned}
\sum_{n\in \widetilde{T}_{m}^{(4)}}& R^{p d_n(V)} |a_n|^p\int_0^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t \\
&\lesssim \sum_{n\in \widetilde{T}_{m}^{(4)}}\int_V |f(t)|^p\,\mathrm{d} t\cdot \Big(\int_V |f_n(t)|^{p'}\,\mathrm{d} t\Big)^{p-1} \int_0^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t \\
&\lesssim \sum_{n\in \widetilde{T}_{m}^{(4)}}\int_V |f(t)|^p\,\mathrm{d} t\cdot \|f_n\|_{p'}^p \gamma^{pm}\|f_n\|_p^p \\
&\lesssim\sum_{n\in \widetilde{T}_m^{(4)}}\gamma^{pm}\|f\|_p^p,
\end{aligned}
\end{equation*}
where we used \eqref{eq:techn2:17} and \eqref{eq:techn2:1} in the first inequality, \eqref{eq:techn2:20} in the second inequality and Lemma \ref{lem:orthsplineJinterval} in the last inequality. Consequently, considering \eqref{eq:techn2:19}, the latter display implies
\begin{equation}\label{eq:techn2:22}
\sum_{n\in \widetilde{T}_{m}^{(4)}}R^{p d_n(V)}|a_n|^p\int_0^{\widetilde{x}} |f_n(t)|^p\,\mathrm{d} t\lesssim \gamma^{pm}\|f\|_p^p.
\end{equation}
\textsc{Case 5: }$n\in T_{m}^{(5)}=\{n\in T_{m}:J_n\subset [x,\widetilde{y}]\text{ or }\big(x\in J_n\text{ with }|J_n\cap[\widetilde{x},x]|\leq |V|\text{ and }J_n\not\subset[\widetilde{x},x]\big)\}.$\\
If there exists $n\in T_{m}^{(5)}$ with $x_m=\inf J_n$, then we define $x'=x_m$. If there exists no such index, we set $x'=x$. We now fix $n\in T_m^{(5)}$. By definition of $x'$ and $\widetilde{x}$,
\begin{equation}\label{eq:techn2:25}
m+d_n(x')-k\leq d_n(\widetilde{x})\leq m+d_n(x').
\end{equation}
The exact relation between $d_n(\widetilde{x})$ and $d_n(x')$ depends on the multiplicity of the point $x'$ in the grid $\mathcal{T}_n$. By definition of $T_m^{(5)}$,
\[
\operatorname{dist}(\widetilde{x},J_n)\leq 5|V|\qquad\text{and}\qquad |V|\leq \operatorname{dist}(\widetilde{x},J_n).
\]
Moreover,
\begin{equation}\label{eq:techn2:26}
|J_n|\leq |[x',\widetilde{y}]|\leq 4|V|\qquad\text{and}\qquad d_n(V)\leq d_n(x').
\end{equation}
The latter two displays now imply
\begin{equation*}
|J_n|+\operatorname{dist}(\widetilde{x},J_n)\sim |V|.
\end{equation*}
Lemma \ref{lem:lporthspline}, together with the former observation, yields
\begin{align*}
\int_0^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t&\lesssim \gamma^{p d_n(\widetilde{x})}\frac{|J_n|^{p/2}}{(|J_n|+\operatorname{dist}(\widetilde{x},J_n))^{p-1}} \\
&\lesssim \gamma^{pd_n(\widetilde{x})}\frac{|J_n|^{p/2}}{|V|^{p-1}}.
\end{align*}
Inserting \eqref{eq:techn2:25} in this inequality, we get
\begin{equation}\label{eq:techn2:29}
\int_0^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t\lesssim\gamma ^{p(d_n(x')+m)}\frac{|J_n|^{p/2}}{|V|^{p-1}}.
\end{equation}
For each $n\in T_{m}^{(5)}$, we split the interval $[x',\widetilde{y}]$ into the union of three disjoint subintervals $I_\ell$, $1\leq \ell\leq 3$, defined by
\begin{equation*}
I_1:=[x',\inf J_n],\qquad I_2:=J_n, \qquad I_3:=[\sup J_n,\widetilde{y}].
\end{equation*}
Corresponding to these subintervals, we set
\begin{equation*}
a_{n,\ell}:=\int_{I_\ell\cap V} f(t)f_n(t)\,\mathrm{d} t,\qquad\ell=1,2,3.
\end{equation*}
We start by analyzing the parameter choice $\ell=2$ and first observe that, by definition of $I_2$,
\begin{equation}\label{eq:techn2:32}
|a_{n,2}|^p\leq \|f_n\|_{p'}^p \int_{J_n}|f(t)|^p\,\mathrm{d} t.
\end{equation}
We split the index set $T_{m}^{(5)}$ further and look at the set of those $n\in T_m^{(5)}$ such that $d_n(x')=j$ for fixed $j\in\mathbb{N}_0$. These indices $n$ may be arranged in packets such that the intervals $J_n$ from one packet have the same left endpoint and such that the maximal intervals of different packets are disjoint. Observe that the intervals $J_n$ from one packet form a decreasing collection of sets. Let $J_{n_0}$ be the maximal interval of one packet. Define the index set $\mathcal{I}_j:=\{n\in T_m^{(5)}: d_n(x')=j,\ J_n\subset J_{n_0}\}$. Then we use \eqref{eq:techn2:26} and \eqref{eq:techn2:32} to estimate
\begin{align*}
E_{2,j}&:=\sum_{n\in\mathcal{I}_j} R^{p d_n(V)}|a_{n,2}|^p\int_{0}^{\widetilde{x}}|f_n(t)|^p\,\mathrm{d} t \\
&\leq \sum_{n\in \mathcal{I}_j} R^{pj} \|f_n\|_{p'}^p \int_{J_n}|f(t)|^p\,\mathrm{d} t \int_0^{\widetilde{x}} |f_n(t)|^p\,\mathrm{d} t.
\end{align*}
We continue and use \eqref{eq:techn2:29} to get
\[
E_{2,j}\lesssim R^{pj}\int_{J_{n_0}}|f(t)|^p\,\mathrm{d} t \sum_{n\in\mathcal I_j} \|f_n\|_{p'}^p \gamma^{p(d_n(x')+m)}\frac{|J_n|^{p/2}}{|V|^{p-1}}.
\]
By Lemma \ref{lem:orthsplineJinterval}, $\|f_n\|_{p'}\sim |J_n|^{1/p'-1/2}$, and thus,
\[
E_{2,j}\lesssim (R\gamma)^{pj} \gamma^{pm} \int_{J_{n_0}} |f(t)|^p\,\mathrm{d} t \cdot\sum_{n\in\mathcal{I}_j} \frac{|J_n|^{p-1}}{|V|^{p-1}}.
\]
We apply Lemma \ref{lem:jinterval} to the above sum and conclude
\begin{align*}
E_{2,j}&\lesssim_p (R\gamma)^{pj}\gamma^{pm}\int_{J_{n_0}}|f(t)|^p\,\mathrm{d} t\cdot \frac{|J_{n_0}|^{p-1}}{|V|^{p-1}} \\
&\lesssim (R\gamma)^{pj}\gamma^{pm}\int_{J_{n_0}}|f(t)|^p\,\mathrm{d} t,
\end{align*}
where in the last inequality we used \eqref{eq:techn2:26}. Now, summing over all maximal intervals $J_{n_0}$ and over $j$ finally yields (note that $R\gamma<1$)
\begin{equation}\label{eq:techn2:34}
\sum_{n\in T_{m}^{(5)}} R^{p d_n(V)}|a_{n,2}|^p \int_{0}^{\widetilde{x}} |f_n(t)|^p\,\mathrm{d} t \lesssim_{p,R} \gamma^{pm}\|f\|_p^p.
\end{equation}
This completes the proof of the part $\ell=2$. We continue with the parameter choice $\ell=3$. Let $j\in \mathbb{N}_0$ be fixed and let $(n_{j,r})_{r=1}^\infty$ be the subsequence of all $n\in T_{m}^{(5)}$ with $d_n(x')=j$.
For two such indices $n_1<n_2$ we have either \begin{equation*} (\inf J_{n_1}=\inf J_{n_2}\text{ and }J_{n_2}\subset J_{n_1})\quad\text{or}\quad \sup J_{n_2}\leq \inf J_{n_1}. \end{equation*} Observe that $J_{n_2}=J_{n_1}$ is possible, but, by Lemma \ref{lem:jinterval} (with $\beta=0$), this can happen at most $F_k$ times, where $F_k$ depends only on $k$. Therefore, with $\beta_{n_{j,r}}:=\sup J_{n_{j,r}}$ for $r\geq 1$ and $\beta_{n_{j,0}}:=\widetilde{y}$, \begin{equation*} d_{n_{j,s}}(\beta_{n_{j,r}})\geq \frac{s-r}{F_k}-1,\qquad s\geq r\geq 1. \end{equation*} Thus, for $s\geq r\geq 1$, Lemma \ref{lem:lporthspline} and Lemma \ref{lem:orthsplineJinterval} yield \begin{equation}\label{eq:techn2:37} \begin{aligned} \int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} |f_{n_{j,s}}(t)|^{p'}\,\mathrm{d} t&\lesssim \gamma^{p'd_{n_{j,s}}(\beta_{n_{j,r}})} \|f_{n_{j,s}}\|_{p'}^{p'} \lesssim \gamma^{p'\frac{s-r}{F_k}} \|f_{n_{j,s}}\|_{p'}^{p'} \end{aligned} \end{equation} and similarly, using also \eqref{eq:techn2:25}, \begin{equation}\label{eq:techn2:37.5} \int_{0}^{\widetilde{x}}|f_{n_{j,s}}(t)|^p\,\mathrm{d} t\lesssim \gamma^{p d_{n_{j,s}}(\widetilde{x})}\|f_{n_{j,s}}\|^p_p \lesssim \gamma^{p (m+d_{n_{j,s}}(x'))}\|f_{n_{j,s}}\|^p_p. \end{equation} Choosing $\kappa:=\gamma^{1/(2F_k)}<1$, we conclude \begin{equation*} \begin{aligned} |a_{n_{j,s},3}|^p&=\Big|\int_{\beta_{n_{j,s}}}^{\widetilde{y}} f(t) f_{n_{j,s}}(t)\,\mathrm{d} t \Big|^p \\ &=\Big|\sum_{r=1}^s \kappa^{s-r}\kappa^{r-s}\int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} f(t) f_{n_{j,s}}(t)\,\mathrm{d} t\Big|^p \\ &\leq \Big(\sum_{r=1}^s\kappa^{p'(s-r)}\Big)^{p/p'}\sum_{r=1}^s \kappa^{p(r-s)}\Big|\int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} f(t) f_{n_{j,s}}(t)\,\mathrm{d} t\Big|^p \\ &\lesssim \sum_{r=1}^s \kappa^{p(r-s)} \int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} |f(t)|^p \,\mathrm{d} t\cdot \Big(\int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} |f_{n_{j,s}}(t)|^{p'}\,\mathrm{d} t\Big)^{p/p'}. \end{aligned} \end{equation*} We use inequality \eqref{eq:techn2:37} to obtain from the latter expression \begin{equation}\label{eq:techn2:38} |a_{n_{j,s},3}|^p \lesssim\sum_{r=1}^s \gamma^{p\frac{s-r}{2F_k}} \int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} |f(t)|^p \,\mathrm{d} t \cdot \|f_{n_{j,s}}\|_{p'}^p. \end{equation} Combining \eqref{eq:techn2:38} and \eqref{eq:techn2:37.5} yields \begin{equation*} \begin{aligned} E_{3,j}&:=\sum_{\substack{n\in T_{m}^{(5)}\\d_n(x')=j}} R^{p d_n(V)}|a_{n,3}|^p \|f_n\|_{L^p(0,\widetilde{x})}^p \\ &\leq \sum_{s\geq 1} R^{pj}|a_{n_{j,s},3}|^p \|f_{n_{j,s}}\|_{L^p(0,\widetilde{x})}^p \\ &\lesssim \sum_{s\geq 1}R^{pj}\sum_{r=1}^s \gamma^{p\frac{s-r}{2F_k}}\|f_{n_{j,s}}\|_{p'}^p \int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} |f(t)|^p \,\mathrm{d} t\cdot \gamma^{p(m+j)} \|f_{n_{j,s}}\|^p_p. \end{aligned} \end{equation*} Using again Lemma \ref{lem:orthsplineJinterval} gives \begin{align*} E_{3,j}&\lesssim \gamma^{pm}(R\gamma)^{pj}\sum_{r\geq 1} \int_{\beta_{n_{j,r}}}^{\beta_{n_{j,r-1}}} |f(t)|^p \,\mathrm{d} t\sum_{s\geq r}\gamma^{p\frac{s-r}{2F_k}} \\ &\lesssim \gamma^{pm}(R\gamma)^{pj}\|f\|_p^p. \end{align*} Summing over $j$ finally yields \begin{equation}\label{eq:techn2:40} \sum_{n\in T_{m}^{(5)}} R^{p d_n(V)}|a_{n,3}|^p\|f_n\|_{L^p(0,\widetilde{x})}^p\lesssim_{p,R} \gamma^{pm}\|f\|_p^p, \end{equation} since $R\gamma<1$. This finishes the proof of the part $\ell=3$. We now come to the final part $\ell=1$.
Let $j$ and $n$ be fixed such that $d_n(x')=j$ and let $L_{1,n},\dots, L_{j,n}$ be the grid intervals in the grid $\mathcal T_n$ between $x'$ and $J_n$ from left to right. Observe that $f_n$ is a polynomial on each of the intervals $L_{i,n}$. We define \begin{equation*} b_{i,n}:=\int_{L_{i,n}} f(t)f_n(t)\,\mathrm{d} t,\qquad 1\leq i\leq j. \end{equation*} For $n$ with $d_n(x')=j$, we clearly have $a_{n,1}=\sum_{i=1}^j b_{i,n}$ and Hölder's inequality implies \begin{equation}\label{eq:techn2:42} |b_{i,n}|^p\leq \int_{L_{i,n}} |f(t)|^p\,\mathrm{d} t \cdot\Big(\int_{L_{i,n}}|f_n(t)|^{p'}\,\mathrm{d} t\Big)^{p/p'}. \end{equation} Remark \ref{rem:linftyorthspline} yields the bound \begin{equation*} \sup_{t\in L_{i,n}}|f_n(t)|\lesssim \gamma^{j-i}\frac{|J_n|^{1/2}}{|J_n|+\operatorname{dist}(J_n,L_{i,n})+|L_{i,n}|} \end{equation*} and inserting this into \eqref{eq:techn2:42} gives \begin{equation}\label{eq:techn2:44} |b_{i,n}|^p\leq \int_{L_{i,n}} |f(t)|^p\,\mathrm{d} t\cdot \gamma^{p(j-i)}\frac{|J_n|^{p/2}|L_{i,n}|^{p-1}}{(|J_n|+\operatorname{dist}(J_n,L_{i,n})+|L_{i,n}|)^p}. \end{equation} Observe that we have the elementary inequality \begin{equation}\label{eq:techn2:45} \frac{|J_n|^{p/2}|L_{i,n}|^{p-1}}{(|J_n| +\operatorname{dist}(J_n,L_{i,n})+|L_{i,n}|)^p}\cdot\frac{|J_n|^{p/2}}{|V|^{p-1}}\leq \frac{|J_n|}{|V|^{p-1}}\big(|J_n|+ \operatorname{dist}(J_n,L_{i,n})+|L_{i,n}|\big)^{p-2}. \end{equation} Combining \eqref{eq:techn2:44}, \eqref{eq:techn2:45} and \eqref{eq:techn2:29} allows us to estimate (recall that we assumed $n$ is such that $d_n(x')=j$) \begin{equation}\label{eq:techn2:46} \begin{aligned} &R^{pd_n(V)}|b_{i,n}|^p \cdot \int_0^{\widetilde{x}} |f_n(t)|^p\,\mathrm{d} t\\ &\lesssim R^{pj} \gamma^{p(j-i)}\int_{L_{i,n}} |f(t)|^p\,\mathrm{d} t \frac{|J_n|^{p/2}|L_{i,n}|^{p-1}}{(|J_n|+\operatorname{dist}(J_n,L_{i,n})+|L_{i,n}|)^p}\cdot \gamma^{p(j+m)}\frac{|J_n|^{p/2}}{|V|^{p-1}} \\ &\lesssim R^{pj}\gamma^{p(2j+m-i)} \frac{|J_n|}{|V|^{p-1}}(|J_n|+\operatorname{dist}(J_n,L_{i,n})+|L_{i,n}|)^{p-2}\int_{L_{i,n}}|f(t)|^p\,\mathrm{d} t. \end{aligned} \end{equation} For fixed $j$ and $i$, we consider those indices $n$ such that $d_n(x')=j$ together with the corresponding intervals $L_{i,n}$. These intervals can be collected in packets such that intervals $L_{i,n}$ from one packet have the same left endpoint and maximal intervals of different packets are disjoint. For $\beta=1/4$, we denote by $J_{n}^\beta$ the unique interval that has the same right endpoint as $J_n$ and length $\beta |J_n|$. The intervals $J_n$ corresponding to $L_{i,n}$'s from one packet can now be grouped in the same way as the $L_{i,n}$'s and thus, Lemma \ref{lem:jinterval} implies the existence of a constant $F_{k}$ depending only on $k$ such that every point $t\in[0,1]$ belongs to at most $F_k$ intervals $J_n^\beta$ corresponding to the intervals $L_{i,n}$ from one packet. We now consider one such packet and denote by $u^*$ the left endpoint of (all) intervals $L_{i,n}$ in this packet. Then we have for $t\in J_n^\beta$ \begin{equation}\label{eq:techn2:47} |J_n|+\operatorname{dist}(L_{i,n},J_n)+|L_{i,n}|\geq |t-u^*|.
\end{equation} If $L_{i}^*$ is the maximal interval of the present packet, \eqref{eq:techn2:46} and \eqref{eq:techn2:47} yield \begin{equation*} \begin{aligned} &\hspace{-1cm}\sum_{n:L_{i,n}\text{ in one packet}} R^{p d_n(V)}|b_{i,n}|^p \|f_n\|_{L^p(0,\widetilde{x})}^p \\ &\lesssim \frac{R^{pj}\gamma^{p(2j+m-i)}}{|V|^{p-1}} \sum_n|J_n|(|J_n|+\operatorname{dist}(L_{i,n},J_n)+|L_{i,n}|)^{p-2}\int_{L_{i,n}}|f(t)|^p\,\mathrm{d} t \\ &\lesssim \frac{R^{pj}\gamma^{p(2j+m-i)}}{|V|^{p-1}}\int_{L_{i}^*}|f(t)|^p\,\mathrm{d} t\cdot \sum_n \int_{J_n^\beta} |t-u^*|^{p-2}\,\mathrm{d} t. \end{aligned} \end{equation*} Since every point $t$ belongs to at most $F_k$ intervals $J_n^\beta$ in one package of $L_{i,n}$'s, we can continue this chain of inequalities and get further, by using the facts $J_n\subset [x',\widetilde{y}]$ and $p<2$: \begin{equation*} \begin{aligned} &\hspace{-1.5cm}\sum_{n:L_{i,n}\text{ in one packet}} R^{p d_n(V)}|b_{i,n}|^p \|f_n\|_{L^p(0,\widetilde{x})}^p \\ &\lesssim \frac{R^{pj}\gamma^{p(2j+m-i)}}{|V|^{p-1}}\int_{L_{i}^*}|f(t)|^p\,\mathrm{d} t\cdot \int_{u^*}^{\widetilde{y}} |t-u^*|^{p-2}\,\mathrm{d} t \\ &\lesssim R^{pj}\gamma^{p(2j+m-i)}\int_{L_{i}^*}|f(t)|^p\,\mathrm{d} t, \end{aligned} \end{equation*} where in the last inequality we used \eqref{eq:techn2:26}. Since the maximal intervals $L_i^*$ of different packets are disjoint, we can sum over all packets (for fixed $j$ and $i$) to obtain \begin{equation}\label{eq:techn2:50} \sum_{\substack{n\in T_{m}^{(5)}\\d_n(x')=j}} R^{p d_n(V)}|b_{i,n}|^p\|f_n\|_{L^p(0,\widetilde{x})}^p \lesssim R^{pj}\gamma^{p(2j+m-i)}\|f\|_p^p. \end{equation} Let $\kappa:=\gamma^{1/2}<1$. Then, for $n$ such that $d_n(x')=j$ we have \begin{equation}\label{eq:techn2:51} |a_{n,1}|^p=\Big|\sum_{i=1}^j b_{i,n}\Big|^p=\Big| \sum_{i=1}^j \kappa^{j-i}\kappa^{i-j}b_{i,n}\Big|^p \lesssim_p\sum_{i=1}^j \kappa^{p(i-j)}|b_{i,n}|^p \end{equation} Combining \eqref{eq:techn2:51} with \eqref{eq:techn2:50} we get \begin{equation*} \begin{aligned} \sum_{\substack{n\in T_{m}^{(5)}\\d_n(x')=j}}& R^{p d_n(V)}|a_{1,n}|^p\|f_n\|_{L^p(0,\widetilde{x})}^p \\ &\lesssim_p \sum_{i=1}^j \kappa^{p(i-j)}\sum_{\substack{n\in T_{m}^{(5)}\\d_n(x')=j}} R^{p d_n(V)}|b_{i,n}|^p\|f_n\|_{L^p(0,\widetilde{x})}^p \\ &\lesssim \sum_{i=1}^j \kappa^{p(i-j)} R^{pj}\gamma^{p(2j+m-i)}\|f\|_p^p\lesssim (R\gamma)^{pj}\gamma^{pm}\|f\|_p^p. \end{aligned} \end{equation*} Since $R\gamma<1$ we sum over $j$ to conclude finally \begin{equation}\label{eq:techn2:53} \sum_{n\in T_{m}^{(5)}} R^{p d_n(V)}|a_{n,1}|^p \|f_n\|_{L^p(0,\widetilde{x})}^p \lesssim_{p,R} \gamma^{pm}\|f\|_p^p \end{equation} This finishes the proof of case $\ell=1$. We can now combine the proved inequalities for $\ell=1,2,3$, that is \eqref{eq:techn2:53}, \eqref{eq:techn2:34} and \eqref{eq:techn2:40}, to complete the analysis of Case 5 with the estimate \begin{equation}\label{eq:techn2:54} \sum_{n\in T_{m}^{(5)}} R^{pd_n(V)} |a_n|^p \|f_n\|_{L^p(0,\widetilde{x})}^p\lesssim_{p,R} \gamma^{pm}\|f\|_p^p. \end{equation} \textsc{Case 6: }$n\in T_{m}^{(6)}=\{n\in T_{m}:J_n\subset [\widetilde{y},1]\text{ or }\big(\widetilde{y}\in J_n\text{ with }J_n\not\subset [x,\widetilde{y}]\big)\}$. \\ Similarly to \eqref{eq:techn2:0.5}, we may use the symmetric splitting of the indices $n$ to \begin{equation*} T_{\operatorname{r},s}:=\{n\geq \polyfun(V):|[y,\widetilde{y}]\cap \mathcal{T}_n|=s\}, \end{equation*} where $\operatorname{r}$ stands for ``right". 
These collections of indices are again split into six subcollections $T_{\operatorname{r},s}^{(i)}$, $1\leq i\leq 6$, where the two of interest are \begin{equation*}\label{eq:techn2:56} \begin{aligned} T_{\operatorname{r},s}^{(2)}&=\{n\in T_{\operatorname{r},s}:\widetilde{y}\in J_n, |J_n\cap [y,\widetilde{y}]|\geq |V|, J_n\not\subset [y,\widetilde{y}]\}, \\ T_{\operatorname{r},s}^{(3)}&=\{ n\in T_{\operatorname{r},s}: J_n\subset [\widetilde{y},1] \text{ or } \\ &\qquad\big(\widetilde{y}\in J_n \text{ with } |J_n\cap [y,\widetilde{y}]|\leq |V| \text{ and } J_n\not\subset[y,\widetilde{y}]\big)\}. \end{aligned} \end{equation*} The results \eqref{eq:techn2:7} and \eqref{eq:techn2:16} for $T_{m}^{(2)}$ and $T_{m}^{(3)}$, respectively, have the form \begin{equation*} \sum_{n\in T_{m}^{(2)}\cup T_{m}^{(3)}} R^{pd_n(V)}|a_n|^p \|f_n\|_p^p \lesssim_{p,R} (R\gamma)^{pm}\|f\|_p^p. \end{equation*} Observe that the $p$-norm of $f_n$ on the left-hand side of the inequality is over the whole interval $[0,1]$. The same argument as for $T_m^{(2)}$ and $T_m^{(3)}$ yields \begin{equation}\label{eq:techn2:58} \sum_{n\in T_{\operatorname{r},s}^{(2)}\cup T_{\operatorname{r},s}^{(3)}} R^{pd_n(V)}|a_n|^p \|f_n\|_p^p \lesssim_{p,R} (R\gamma)^{ps}\|f\|_p^p. \end{equation} Now, since \begin{equation*} \bigcup_{m\geq 0} T_{m}^{(6)}\subset \bigcup_{s\geq 0} T_{\operatorname{r},s}^{(2)}\cup T_{\operatorname{r},s}^{(3)}, \end{equation*} inequality \eqref{eq:techn2:58} implies \begin{equation}\label{eq:techn2:60} \begin{aligned} &\hspace{-0.5cm}\sum_{m=0}^\infty \sum_{n\in T_{m}^{(6)}} R^{pd_n(V)} |a_n|^p \|f_n\|_p^p \\ &\leq \sum_{s=0}^\infty \sum_{n\in T_{\operatorname{r},s}^{(2)}\cup T_{\operatorname{r},s}^{(3)}} R^{p d_n(V)} |a_n|^p\|f_n\|_p^p\lesssim_{p,R} \|f\|_p^p. \end{aligned} \end{equation} After summation of \eqref{eq:techn2:4}, \eqref{eq:techn2:7}, \eqref{eq:techn2:16}, \eqref{eq:techn2:22} and \eqref{eq:techn2:54} over $m$, we add inequality \eqref{eq:techn2:60} to finally obtain \begin{equation*} \sum_{n\geq \polyfun(V)} R^{pd_n(V)} |a_n|^p \|f_n\|_{L^p(0,\widetilde{x})}^p \lesssim_{p,R} \|f\|_p^p. \end{equation*} The symmetric inequality \begin{equation*} \sum_{n\geq \polyfun(V)} R^{pd_n(V)} |a_n|^p \|f_n\|_{L^p(\widetilde{y},1)}^p \lesssim_{p,R} \|f\|_p^p \end{equation*} is treated analogously, and thus the proof of the lemma is complete. \end{proof} \section{Proof of the Main Theorem}\label{sec:main} In this section, we prove our main result, Theorem \ref{thm:uncond}, that is, the unconditionality of orthonormal spline systems corresponding to an arbitrary admissible point sequence $(t_n)_{n\geq 0}$ in $L^p[0,1]$ for $1<p<\infty$. \begin{proof}[Proof of Theorem \ref{thm:uncond}] We recall the notation \[ Sf(t)=\Big(\sum_{n=-k+2}^\infty |a_nf_n(t)|^2\Big)^{1/2},\quad Mf(t)=\sup_{m\geq -k+2}\Big|\sum_{n=-k+2}^m a_n f_n(t)\Big| \] when \[ f=\sum_{n=-k+2}^\infty a_nf_n. \] Since $(f_n)_{n=-k+2}^\infty$ is a basis in $L^p[0,1],\ 1\leq p<\infty,$ Khintchine's inequality implies that a necessary and sufficient condition for $(f_n)_{n=-k+2}^\infty$ to be an unconditional basis in $L^p[0,1]$ for some $p$ in the range $1<p<\infty$ is \begin{equation}\label{eq:maintoprove} \|Sf\|_p \sim_p \|f\|_p,\quad f\in L^p[0,1]. \end{equation} We will prove \eqref{eq:maintoprove} for $1<p<2$, since the case $p=2$ is immediate and the cases $p>2$ then follow by a duality argument. We first prove the inequality \begin{equation}\label{eq:firsttoprove} \|f\|_p\lesssim_p \|Sf\|_p. \end{equation} To begin with, let $f\in L^p[0,1]$ with $f=\sum_{n=-k+2}^\infty a_n f_n$.
Without loss of generality, we may assume that the sequence $(a_n)_{n\geq -k+2}$ has only finitely many nonzero entries. We will prove \eqref{eq:firsttoprove} by showing the inequality $\|Mf\|_p\lesssim_p \|Sf\|_p$ and we first observe that \begin{equation}\label{eq:vertfkt} \|Mf\|_p^p = p \int_0^\infty \lambda^{p-1} \psi(\lambda)\,\mathrm{d} \lambda, \end{equation} with $\psi(\lambda):=|[Mf>\lambda]|$. Next we decompose $f$ into two parts $\varphi_1,\varphi_2$ and estimate the corresponding distribution functions $\psi_i(\lambda):=|[M\varphi_i >\lambda/2]|$, $i\in\{1,2\}$, separately. We continue with the definition of the functions $\varphi_i$. For $\lambda>0$, we define \begin{align*} E_\lambda &:= [Sf>\lambda],& B_\lambda&:=[\mathcal{M}\ensuremath{\mathbbm 1}_{E_\lambda}>1/2], \\ \Gamma&:=\{n:J_n\subset B_\lambda,-k+2\leq n<\infty\},& \Lambda&:=\Gamma^c, \end{align*} where we recall that $J_n$ is the characteristic interval corresponding to the grid point $t_n$ and the function $f_n$. Then, let \begin{align*} \varphi_1:= \sum_{n\in\Gamma} a_nf_n\qquad\text{and}\qquad \varphi_2:=\sum_{n\in\Lambda}a_nf_n. \end{align*} Now we estimate $\psi_1(\lambda)=|[M\varphi_1>\lambda/2]|$: \begin{align*} \psi_1(\lambda) &=|\{t\in B_\lambda: M\varphi_1(t)>\lambda/2\}|+|\{t\notin B_\lambda: M\varphi_1(t)>\lambda/2\}| \\ &\leq |B_\lambda|+\frac{2}{\lambda}\int_{B_\lambda^c}M\varphi_1(t)\,\mathrm{d} t \\ &\leq |B_\lambda|+\frac{2}{\lambda}\int_{B_\lambda^c} \sum_{n\in \Gamma} |a_nf_n(t)|\,\mathrm{d} t. \end{align*} We decompose $B_\lambda$ into a disjoint collection of open subintervals of $[0,1]$ and apply Lemma \ref{lem:techn1} to each of those intervals to conclude from the latter expression \begin{align*} \psi_1(\lambda) &\lesssim |B_\lambda|+\frac{1}{\lambda}\int_{B_\lambda} Sf(t)\,\mathrm{d} t \\ &= |B_\lambda|+\frac{1}{\lambda}\int_{B_\lambda\setminus E_\lambda} Sf(t)\,\mathrm{d} t +\frac{1}{\lambda}\int_{E_\lambda\cap B_\lambda}Sf(t)\,\mathrm{d} t \\ &\leq |B_\lambda|+|B_\lambda\setminus E_\lambda|+\frac{1}{\lambda}\int_{E_\lambda} Sf(t)\,\mathrm{d} t, \end{align*} where in the last inequality, we simply used the definition of $E_\lambda$. Since the Hardy-Littlewood maximal function operator $\mathcal M$ is of weak type $(1,1)$, we have $|B_\lambda|\lesssim |E_\lambda|$, and thus we finally obtain \begin{equation} \label{eq:main:5} \psi_1(\lambda)\lesssim |E_\lambda|+\frac{1}{\lambda}\int_{E_\lambda} Sf(t)\,\mathrm{d} t. \end{equation} We now estimate $\psi_2(\lambda)$ and obtain from Theorem \ref{thm:maxbound} and the fact that $\mathcal M$ is a bounded operator on $L^2[0,1]$ \begin{equation*} \begin{aligned} \psi_2(\lambda)&\lesssim \frac{1}{\lambda^2}\|\mathcal{M} \varphi_2\|_2^2\lesssim \frac{1}{\lambda^2}\|\varphi_2\|_2^2=\frac{1}{\lambda^2}\|S\varphi_2\|_2^2 \\ &=\frac{1}{\lambda^2}\Big(\int_{E_\lambda}S\varphi_2(t)^2\,\mathrm{d} t+\int_{E_\lambda^c}S\varphi_2(t)^2\,\mathrm{d} t\Big). \end{aligned} \end{equation*} We apply Lemma \ref{lem:Sf} to the former expression to get \begin{equation}\label{eq:main:3.5} \psi_2(\lambda)\lesssim \frac{1}{\lambda^2}\int_{E_\lambda^c} S\varphi_2(t)^2\,\mathrm{d} t. \end{equation} Thus, combining \eqref{eq:main:5} and \eqref{eq:main:3.5}, and using that $S\varphi_2\leq Sf$ pointwise, \begin{equation*} \begin{aligned} \psi(\lambda)&\leq \psi_1(\lambda)+\psi_2(\lambda)\\ &\lesssim |E_\lambda|+\frac{1}{\lambda}\int_{E_\lambda}Sf(t)\,\mathrm{d} t+\frac{1}{\lambda^2}\int_{E_\lambda^c}Sf(t)^2\,\mathrm{d} t.
\end{aligned} \end{equation*} Inserting this inequality into \eqref{eq:vertfkt}, \begin{equation*} \begin{aligned} \|Mf\|_p^p &\lesssim p\int_0^\infty \lambda^{p-1} |E_\lambda|\,\mathrm{d}\lambda+p\int_0^\infty \lambda^{p-2}\int_{E_\lambda} Sf(t)\,\mathrm{d} t\,\mathrm{d}\lambda \\ &\quad+p\int_0^\infty \lambda^{p-3} \int_{E_\lambda^c}Sf(t)^2\,\mathrm{d} t\,\mathrm{d}\lambda \\ &=\|Sf\|_p^p+p\int_0^1 Sf(t)\int_0^{Sf(t)}\lambda^{p-2} \,\mathrm{d}\lambda\,\mathrm{d} t \\ &\quad + p\int_0^1 Sf(t)^2 \int_{Sf(t)}^\infty \lambda^{p-3}\,\mathrm{d}\lambda\,\mathrm{d} t, \end{aligned} \end{equation*} and thus, since $1<p<2$, \[ \|Mf\|_p \lesssim_p \|Sf\|_p. \] So, the inequality $\|f\|_p\lesssim_p \|Sf\|_p$ is proved. We now turn to the proof of the inequality \begin{equation}\label{eq:mainproofsquarefunction} \|Sf\|_p\lesssim_p \|f\|_p,\qquad 1<p<2. \end{equation} It is enough to show that the operator $S$ is of weak type $(p,p)$ for each exponent $p$ in the range $1<p<2$. This is because $S$ is (clearly) also of strong type $2$ and we can use the Marcinkiewicz interpolation theorem to obtain \eqref{eq:mainproofsquarefunction}. Thus we have to show \begin{equation}\label{eq:mainproofweaktypesquarefunction} |[Sf>\lambda]|\lesssim_p \frac{\|f\|_p^p}{\lambda^p},\qquad f\in L^p[0,1],\ \lambda>0. \end{equation} We fix the function $f$ and the parameter $\lambda>0$. To begin with the proof of \eqref{eq:mainproofweaktypesquarefunction}, we define $G_\lambda:=[\mathcal Mf>\lambda]$ for $\lambda>0$ and observe that \begin{equation}\label{eq:weaktypeGlambda} |G_\lambda| \lesssim_p \frac{\|f\|_p^p}{\lambda^p}, \end{equation} since $\mathcal M$ is of weak type $(p,p)$, and, by the Lebesgue differentiation theorem, \begin{equation}\label{eq:lebesgue} |f|\leq \lambda\qquad\text{a.\,e. on $G_\lambda^c$}. \end{equation} We decompose the open set $G_\lambda\subset[0,1]$ into a collection $(V_j)_{j=1}^\infty$ of disjoint open subintervals of $[0,1]$ and split the function $f$ into the two parts $h$ and $g$ defined by \begin{equation*} h:=f\cdot\ensuremath{\mathbbm 1}_{G_\lambda^c}+\sum_{j=1}^\infty T_{V_j}f,\qquad g:=f-h, \end{equation*} where for fixed index $j$, $T_{V_j}f$ is the projection of $f\cdot\ensuremath{\mathbbm 1}_{V_j}$ onto the space of polynomials of order $k$ on the interval $V_j$. We treat the functions $h,g$ separately and begin with $h$. The definition of $h$ implies \[ \|h\|_2^2=\int_{G_\lambda^c} |f(t)|^2\,\mathrm{d} t+\sum_{j=1}^\infty \int_{V_j} (T_{V_j}f)(t)^2\,\mathrm{d} t, \] since the intervals $V_j$ are disjoint. We apply \eqref{eq:lebesgue} to the first summand and \eqref{eq:polyproj1} to the second to obtain \[ \|h\|_2^2 \lesssim \lambda^{2-p}\int_{G_\lambda^c} |f(t)|^p\,\mathrm{d} t+\lambda^2 |G_\lambda|, \] and thus, in view of \eqref{eq:weaktypeGlambda}, \[ \|h\|_2^2\lesssim_p \lambda^{2-p}\|f\|_p^p. \] This inequality allows us to estimate \begin{equation*} |[Sh>\lambda/2]|\leq \frac{4}{\lambda^2}\|Sh\|_2^2 = \frac{4}{\lambda^2}\|h\|_2^2\lesssim_p \frac{\|f\|_p^p}{\lambda^p}, \end{equation*} which concludes the proof of \eqref{eq:mainproofweaktypesquarefunction} for the part $h$. We turn to the proof of \eqref{eq:mainproofweaktypesquarefunction} for the function $g$. 
Since $p<2$, we have \begin{equation}\label{eq:main:8} Sg(t)^p=\Big(\sum_{n=-k+2}^\infty|\langle g,f_n\rangle|^2 f_n(t)^2\Big)^{p/2}\leq \sum_{n=-k+2}^\infty |\langle g,f_n\rangle|^p |f_n(t)|^p \end{equation} For each index $j$, we define $\widetilde{V}_j$ to be the open interval with the same center as $V_j$ but with $5$ times its length. Then, set $\widetilde{G}_\lambda:=\bigcup_{j=1}^\infty \widetilde{V}_j\cap [0,1]$ and observe that $|\widetilde{G}_\lambda|\leq 5|G_\lambda|$. We get \begin{equation*} |[Sg>\lambda/2]|\leq |\widetilde{G}_\lambda|+\frac{2^p}{\lambda^p}\int_{\widetilde{G}_\lambda^c}Sg(t)^p\,\mathrm{d} t. \end{equation*} By \eqref{eq:weaktypeGlambda} and \eqref{eq:main:8}, this becomes \[ |[Sg>\lambda/2]|\lesssim_p \lambda^{-p}\Big(\|f\|_p^p+\sum_{n=-k+2}^\infty \int_{\widetilde{G}_\lambda^c}|\langle g,f_n\rangle|^p|f_n(t)|^p\,\mathrm{d} t\Big). \] But by definition of $g$ and \eqref{eq:polyproj2}, \[ \|g\|_p^p=\sum_j \int_{V_j} |f(t)-T_{V_j}f(t)|^p\,\mathrm{d} t\lesssim_p\sum_j\int_{V_j}|f(t)|^p\lesssim \|f\|_p^p, \] so in order to prove the inequality $|[Sg>\lambda/2]|\leq \lambda^{-p}\|f\|_p^p$, it is enough to show the inequality \begin{equation}\label{eq:main:finaltoprove} \sum_{n=-k+2}^\infty \int_{\widetilde{G}_\lambda^c} |\langle g,f_n\rangle|^p|f_n(t)|^p\,\mathrm{d} t\lesssim \|g\|_p^p. \end{equation} We now let $g_j:=g\cdot \ensuremath{\mathbbm 1}_{V_j}$. The supports of $g_j$ are therefore disjoint and we have $\|g\|_p^p=\sum_{j=1}^\infty \|g_j\|_p^p$. Furthermore $g=\sum_{j=1}^\infty g_j$ with convergence in $L^p$. Thus for each $n$, we obtain \[ \langle g,f_n\rangle=\sum_{j=1}^\infty \langle g_j,f_n\rangle, \] and it follows from the definition of $g_j$ that \begin{equation*} \int_{V_j} g_j(t)p(t)\,\mathrm{d} t=0 \end{equation*} for each polynomial $p$ on $V_j$ of order $k$. This implies that $\langle g_j,f_n\rangle=0$ for $n<\polyfun(V_j)$, where \[ \polyfun(V):=\min\{n:\mathcal{T}_n\cap V\neq\emptyset\}. \] Thus we obtain for all $R>1$ and for every $n$ \begin{equation}\label{eq:main:9} \begin{aligned} |\langle g,f_n\rangle |^p&=\Big| \sum_{j:n\geq\polyfun(V_j)}\langle g_j,f_n\rangle\Big|^p\leq \Big(\sum_{j:n\geq \polyfun(V_j)}R^{d_n(V_j)}|\langle g_j,f_n\rangle|R^{-d_n(V_j)}\Big)^p \\ &\leq \Big(\sum_{j:n\geq \polyfun(V_j)}R^{p d_n(V_j)}|\langle g_j,f_n\rangle|^p\Big)\Big(\sum_{j:n\geq\polyfun(V_j)}R^{-p'd_n(V_j)}\Big)^{p/p'}, \end{aligned} \end{equation} where $p'=p/(p-1)$. If we fix $n\geq\polyfun(V_j)$, there is at least one point of the partition $\mathcal T_n$ contained in $V_j$. This implies that for each fixed $s\geq 0$, there are at most two indices $j$ such that $n\geq \polyfun(V_j)$ and $d_n(V_j)=s$. Therefore, \[ \Big(\sum_{j:n\geq\polyfun(V_j)}R^{-p'd_n(V_j)}\Big)^{p/p'}\lesssim_p 1, \] thus we obtain from \eqref{eq:main:9}, \begin{equation*} |\langle g,f_n\rangle |^p\lesssim_p \sum_{j:n\geq\polyfun(V_j)}R^{pd_n(V_j)}|\langle g_j,f_n\rangle|^p. 
\end{equation*} Now we insert this inequality in \eqref{eq:main:finaltoprove} to get \begin{equation*} \begin{aligned} \sum_{n=-k+2}^\infty&\int_{\widetilde{G}_\lambda^c}|\langle g,f_n\rangle|^p|f_n(t)|^p\,\mathrm{d} t \\ &\lesssim_p\sum_{n=-k+2}^\infty\sum_{j:n\geq\polyfun(V_j)}R^{p d_n(V_j)}|\langle g_j,f_n\rangle|^p\int_{\widetilde{G}_\lambda^c} |f_n(t)|^p\,\mathrm{d} t \\ &\leq \sum_{n=-k+2}^\infty\sum_{j:n\geq\polyfun(V_j)}R^{p d_n(V_j)}|\langle g_j,f_n\rangle|^p\int_{\widetilde{V}_j^c} |f_n(t)|^p\,\mathrm{d} t \\ &\leq \sum_{j=1}^\infty\sum_{n\geq\polyfun(V_j)}R^{p d_n(V_j)}|\langle g_j,f_n\rangle|^p\int_{\widetilde{V}_j^c} |f_n(t)|^p\,\mathrm{d} t \end{aligned} \end{equation*} We choose $R>1$ such that $R\gamma<1$ with the parameter $\gamma<1$ from Theorem \ref{thm:maintool} and apply Lemma \ref{lem:techn2} to obtain \[ \sum_{n=-k+2}^\infty\int_{\widetilde{G}_\lambda^c}|\langle g,f_n\rangle|^p|f_n(t)|^p\,\mathrm{d} t \lesssim_p \sum_{j=1}^\infty \|g_j\|_p^p = \|g\|_p^p, \] proving \eqref{eq:main:finaltoprove} and with it the inequality $\|Sf\|_p^p\lesssim_p \|f\|_p^p$. Thus the proof of Theorem \ref{thm:uncond} is completed. \end{proof} \subsubsection*{\bfseries \emph{Acknowledgments}} During the development of this paper, the author visited repeatedly the IMPAN in Sopot/Gda\'nsk. It is his pleasure to thank this institution for its hospitality and for providing excellent working conditions. He is grateful to Anna\,Kamont, who suggested the problem considered in this paper and contributed many valuable remarks towards its solution, and to Zbigniew\,Ciesielski for many useful discussions. The author is supported by the Austrian Science Fund, FWF project P 23987-N18 and the stays in Sopot/Gda\'nsk were supported by MNiSW grant N N201 607840. \end{document}
\begin{document} \title[Threshold results for parabolic systems ] {Threshold results for semilinear parabolic systems } \author{Qiuyi Dai \ Haiyang He \ Junhui Xie} \thanks{Department of Mathematics, Hunan Normal University, Changsha Hunan 410081, P.R.China} \thanks{Corresponding author: Haiyang He([email protected])} \thanks{This work was supported by NNSFC (No.10971061)} \maketitle \vskip 0.2cm \arraycolsep1.5pt \newtheorem{Lemma}{Lemma}[section] \newtheorem{Theorem}{Theorem}[section] \newtheorem{Definition}{Definition}[section] \newtheorem{Proposition}{Proposition}[section] \newtheorem{Remark}{Remark}[section] \newtheorem{Corollary}{Corollary}[section] \begin{abstract} In this paper, we study the initial-boundary value problem for the semilinear parabolic system \begin{equation}\label{eq:0.1} \left\{ \begin{array}{ll} u_t-\Delta u=v^p \ & (x,t)\in \Omega\times(0, T), \\[2mm] v_t-\Delta v= u^q \ & (x,t)\in \Omega\times(0, T),\\[2mm] u(x,t)=v(x, t)=0 \ & (x,t)\in \partial\Omega\times[0, T],\\[2mm] u(x,0)=u_0(x)\geq0 \ \ & x\in \Omega,\\[2mm] v(x,0)=v_0(x)\geq 0 \ \ & x\in \Omega \end{array} \right. \end{equation} and prove that any positive solution of its steady-state problem \begin{equation}\label{eq:0.2} \left\{ \begin{array}{ll} -\Delta u=v^p \ & x\in \Omega, \\[2mm] -\Delta v= u^q \ & x\in \Omega,\\[2mm] u=v=0 \ & x\in \partial\Omega \end{array} \right. \end{equation} is an initial datum threshold for the existence and nonexistence of global solutions to problem (\ref{eq:0.1}). For the precise statement of this result, see Theorem 1.1 in the introduction. \end{abstract} \vspace{3mm} \noindent {\bf Key words}: Initial boundary value problem, Semi-linear parabolic systems, Threshold result, Steady-state problem. \\ \noindent {\bf AMS} classification: 35J50, 35J60. \vspace{3mm} \setcounter{equation}{0}\Section {Introduction} Let $\Omega$ be a bounded domain in $R^N$. We consider the following initial-boundary value problem \begin{equation}\label{eq:1.1} \left\{ \begin{array}{ll} u_t-\Delta u=v^p \ & (x,t)\in \Omega\times(0, T), \\[2mm] v_t-\Delta v= u^q \ & (x,t)\in \Omega\times(0, T),\\[2mm] u(x,t)=v(x, t)=0 \ & (x,t)\in \partial\Omega\times[0, T],\\[2mm] u(x,0)=u_0(x)\geq 0 \ \ & x\in \Omega,\\[2mm] v(x,0)=v_0(x)\geq 0 \ \ & x\in \Omega, \end{array} \right. \end{equation} where $u_t, v_t$ are, respectively, the partial derivatives of $u(x,t)$ and $v(x,t)$ with respect to the variable $t$, $\Delta=\sum\limits_{i=1}^{N}\frac{\partial^2}{\partial x_i^2}$ is the Laplace operator, and $N\geq 2$, $p,q>1$ satisfy \begin{equation}\label{eq2} \frac{1}{p+1}+\frac{1}{q+1}>\frac{N-2}{N}. \end{equation} It is well known that for any $u_0(x), v_0(x)\in L^{\infty}(\Omega)$, problem (\ref{eq:1.1}) has a unique classical solution $(u(x,t),v(x,t))$ on a short time interval, which is called a local solution of problem (\ref{eq:1.1}). Let $T_{\max}$ denote the maximal existence time of $(u(x,t),v(x,t))$ as a classical solution. If $T_{\max}=+\infty$, then we say that $(u(x,t),v(x,t))$ exists globally, or that problem (\ref{eq:1.1}) has a global solution. If $T_{\max}<+\infty$, then we have $$\lim\limits_{t\rightarrow T_{\max}^-}\sup\limits_{x\in\Omega}u(x,t)=\lim\limits_{t\rightarrow T_{\max}^-}\sup\limits_{x\in\Omega}v(x,t)=+\infty,$$ in which case we say that $(u(x,t),v(x,t))$ blows up in a finite time (see for example \cite{QuSoup} for more details).
It is also well known that the solution $(u(x,t),v(x,t))$ exists globally when the initial value $(u_0(x),v_0(x))$ is small enough in some sense, and blows up in a finite time when the initial value $(u_0(x),v_0(x))$ is large enough in a suitable sense (see \cite{CM, D, EH1, EH2, QuSoup} for the exact statements). However, the classification of the initial data $(u_0(x),v_0(x))$ according to the existence or nonexistence of global solutions to problem (\ref{eq:1.1}) is still far from complete. Hence, an important task in the study of problem (\ref{eq:1.1}) is to find exact conditions on the initial datum $(u_0(x),v_0(x))$ which can ensure the existence or nonexistence of global solutions to problem (\ref{eq:1.1}). In this direction, we present here a so-called threshold result for problem (\ref{eq:1.1}) by making use of its positive equilibria. To state our result simply and precisely, we first introduce some notation and definitions. For any planar vectors $(a,b)$ and $(c,d)$, we use $(a,b)\geq(c,d)$ to mean that $a\geq c$ and $b\geq d$, and $(a,b)=(c,d)$ to mean that $a=c$ and $b=d$. If $a,b,c,d$ are functions of the variable $x$, we use $(a(x),b(x))\not\equiv(c(x),d(x))$ to mean that there exists at least one point $x_0$ such that $(a(x_0),b(x_0))\neq(c(x_0),d(x_0))$. Finally, we say that $(U(x),V(x))$ is a positive equilibrium of problem (\ref{eq:1.1}) if it is a solution of the following steady-state problem related to problem (\ref{eq:1.1}): \begin{equation}\label{eq:1.2} \left\{\begin{array}{ll} -\Delta U=V^p & x\in \Omega, \\ -\Delta V=U^q & x\in \Omega,\\ (U,V)>(0,0) & x\in\Omega,\\ (U,V)=(0,0) \ & x\in\partial\Omega. \end{array} \right. \end{equation} Keeping the above notation and definitions in mind, the main result of this paper can be stated as follows. \begin{theorem}\label{tm:1.1} Assume that $p,q>1$ satisfy (\ref{eq2}), and that $(U(x), V(x))$ is an arbitrary smooth solution of problem (\ref{eq:1.2}). Then the following statements hold. \vskip 0.1in (i) If $\big(0, 0\big)\leq \big(u_0(x), v_0(x)\big)\leq \big(U(x), V(x)\big)$ and $\big(u_0(x), v_0(x)\big)\not\equiv \big(U(x), V(x)\big)$, then problem (\ref{eq:1.1}) has a global solution $(u(x,t), v(x, t))$. Moreover, $\lim\limits_{t\rightarrow\infty} (u(x, t),v(x,t))=(0,0)$. \vskip 0.1in (ii) If $\big(u_0(x), v_0(x)\big)\geq \big(U(x), V(x)\big)$ and $\big(u_0(x), v_0(x)\big)\not\equiv \big(U(x), V(x)\big)$, then the solution $(u(x,t), v(x, t))$ of problem (\ref{eq:1.1}) blows up in a finite time. \end{theorem} We remark here that Theorem 1.1 is a natural generalization of results on scalar equations proved by P.-L. Lions in \cite{Lion} and A. A. Lacey in \cite{Lac}, but the method we use here is different. Roughly speaking, Theorem 1.1 says that any smooth solution of problem (\ref{eq:1.2}) is an initial datum threshold for the existence and nonexistence of global solutions to problem (\ref{eq:1.1}). It is also worth pointing out that the restriction (\ref{eq2}) on the exponents $p$ and $q$ is optimal in the sense that problem (\ref{eq:1.2}) has no solutions for star-shaped domains $\Omega$ when (\ref{eq2}) is violated (see \cite{CFM}). The plan of this paper is as follows. Section 2 is devoted to proving two lemmas needed in the proof of Theorem 1.1. The proof of Theorem 1.1 is given in Section 3. Some further remarks are included in Section 4.
\setcounter{equation}{0}\Section {Preliminaries} In this section, we prove two lemmas which will be used later in the proof of our main result. \begin{lemma}\label{lm:3.1} Let $(g(x), h(x))$ and $(U(x), V(x))$ be two distinct smooth solutions of problem (\ref{eq:1.4}). Then we have \[\int_\Omega g(x) U(x)(g^{q-1}-U^{q-1})\ dx=\int_\Omega h(x)V(x)(V^{p-1}-h^{p-1})\ dx.\] \end{lemma} \begin{proof} This result can be found in \cite{DF}. However, for the reader's convenience, we give a proof here. Since $(g(x), h(x))$ and $(U(x), V(x))$ are solutions of problem (\ref{eq:1.4}), we have \begin{equation}\label{eq:3.2} \left\{ \begin{array}{ll} -\Delta g(x)=h^p \ & x\in \Omega, \\[2mm] -\Delta h(x)= g^q \ & x\in \Omega,\\[2mm] g=h=0 \ & x\in \partial\Omega \end{array} \right. \end{equation} and \begin{equation}\label{eq:3.3} \left\{ \begin{array}{ll} -\Delta U(x)=V^p \ & x\in \Omega, \\[2mm] -\Delta V(x)= U^q \ & x\in \Omega,\\[2mm] U=V=0 \ & x\in \partial\Omega. \end{array} \right. \end{equation} From these, we can derive \[\int_\Omega h^p V(x)\ dx=-\int_\Omega \Delta g(x) V(x) \ dx=-\int_\Omega g(x)\Delta V\ dx=\int_\Omega g(x) U^q\ dx,\] \[\int_\Omega h V^p\ dx=-\int_\Omega \Delta U(x) h(x) \ dx=-\int_\Omega U(x) \Delta h(x)\ dx=\int_\Omega U g^q\ dx.\] Consequently, \[\int_\Omega g(x)U(x)(g^{q-1}-U^{q-1})\ dx=\int_\Omega h(x)V(x)(V^{p-1}-h^{p-1})\ dx.\] \end{proof} \begin{lemma}\label{lm:2.1} Assume that $x>0, y>0$, and $0<a<1$. Then \[x^a+y^a\leq 2^{1-a}(x+y)^a.\] \end{lemma} \begin{proof} Let $g(t)=t^a+(1-t)^a$, $0<t<1$. An easy computation yields \[g'(t)=a t^{a-1}-a(1-t)^{a-1}. \] Hence, we have \[ \left\{ \begin{array}{ll} g'(t)>0\ \ \ & \ \ 0<t<\frac{1}{2}, \\[2mm] g'(t)=0 \ \ \ & \ \ t=\frac{1}{2},\\[2mm] g'(t)<0 \ \ \ & \ \ \frac{1}{2}<t<1 \end{array} \right.\] From this, we conclude that \[g(t)=t^a+(1-t)^a\leq 2^{1-a}\qquad\text{for all } t\in(0,1). \] Substituting $t=\frac{x}{x+y}$ into the above inequality and multiplying by $(x+y)^a$, we finally obtain that \[x^a+y^a\leq 2^{1-a}(x+y)^a.\] \end{proof} \setcounter{equation}{0}\Section {Proof of Theorem 1.1} \textbf{Proof of Theorem 1.1:}\ (i) Since $(0,0)\leq(u_0(x), v_0(x))\leq(U(x), V(x))$ and $U(x), V(x)\in L^\infty(\Omega)$, the comparison principle guarantees that problem (\ref{eq:1.1}) has a global solution $(u(x, t), v(x, t))$. Noticing that $(u_0(x), v_0(x))\not\equiv(U(x), V(x))$, it follows from the maximum principle and the strong comparison principle that \[(0,0)\leq(u(x, t), v(x, t))<(U(x), V(x))\] for any $(x,t)\in \Omega\times (0, +\infty)$. Therefore, we may assume, by replacing $(u_0(x), v_0(x))$ with $(u(x, T), v(x, T))$ for some $T>0$ if necessary, that $(u_0(x), v_0(x))\leq(\alpha U(x), \alpha V(x))$ for some constant $0<\alpha<1$. Let $(g_\alpha (x), h_\alpha(x))=(\alpha U(x), \alpha V(x))$. It is easy to verify that $(g_\alpha (x), h_\alpha(x))$ satisfies \begin{equation}\label{eq:3.4} \left\{ \begin{array}{ll} -\Delta g_\alpha >(h_\alpha)^p \ & x\in \Omega, \\[2mm] -\Delta h_\alpha> (g_\alpha )^q \ & x\in \Omega,\\[2mm] g_\alpha=h_\alpha=0 \ & x\in \partial\Omega,\\[2mm] g_\alpha >0, h_\alpha >0\ & x\in \Omega. \end{array} \right. \end{equation} Indeed, $-\Delta g_\alpha=\alpha V^p>\alpha^p V^p=(h_\alpha)^p$ and $-\Delta h_\alpha=\alpha U^q>\alpha^q U^q=(g_\alpha)^q$ in $\Omega$, since $0<\alpha<1$ and $p,q>1$. This implies that $(g_\alpha (x), h_\alpha(x))$ is a strict super-solution of the following problem \begin{equation}\label{eq:3.5} \left\{ \begin{array}{ll} G_t-\Delta G=H^p \ & (x,t)\in \Omega\times(0, T), \\[2mm] H_t-\Delta H= G^q \ & (x,t)\in \Omega\times(0, T),\\[2mm] G=H=0 \ & (x,t)\in \partial\Omega\times[0, T],\\[2mm] G(x,0)=g_\alpha \geq0 \ \ & x\in \Omega,\\[2mm] H(x,0)=h_\alpha\geq 0 \ \ & x\in \Omega. \end{array} \right.
\end{equation} Let $(G(x,t), H(x, t))$ be the solution of (\ref{eq:3.5}). By the strong comparison principle we know that $(G(x, t), H(x, t))$ is strictly decreasing with respect to $t$, and $(0,0)\leq(G(x, t), H(x,t))\leq(U(x), V(x))$. Therefore, $(G(x, t), H(x, t))$ exists globally. Moreover, there are some functions $g(x)$ and $h(x)$ such that \[\lim\limits_{t\rightarrow\infty} G(x,t)=g(x), \quad \lim\limits_{t\rightarrow\infty} H(x,t)=h(x)\] uniformly on $\Omega$, and $(g(x), h(x))$ is a smooth solution of the following problem: \begin{equation}\label{eq:1.4} \left\{\begin{array}{ll} -\Delta g=h^p & x\in \Omega, \\ -\Delta h=g^q & x\in \Omega,\\ (g,h)=(0,0) \ & x\in\partial\Omega. \end{array} \right. \end{equation} From this, we conclude that $(g(x), h(x))\equiv(0,0)$. Otherwise, by the strong maximum principle, we would have $(g(x), h(x))>(0,0)$ in $\Omega$. On the other hand, we have $(g(x),h(x))<(U(x),V(x))$ since $(G(x, t), H(x, t))$ is strictly decreasing with respect to $t$. Thus \[\int_\Omega g(x) U(x)(g^{q-1}-U^{q-1})\ dx<0, \quad \int_\Omega h(x)V(x)(V^{p-1}-h^{p-1})\ dx>0.\] This contradicts Lemma 2.1. Therefore \[\lim\limits_{t\rightarrow\infty} (G(x,t),H(x,t))=(0,0).\] Noticing that $(0, 0)\leq(u_0, v_0) \leq( g_\alpha(x), h_\alpha(x)) $, the comparison principle ensures \[(0,0)\leq (u(x, t),v(x,t))\leq (G(x, t),H(x, t)).\] By the squeeze principle, we obtain \[\lim\limits_{t\rightarrow\infty}(u(x, t),v(x,t))=(0,0).\] \vskip 0.1in (ii)\ We prove the conclusion (ii) of Theorem 1.1 by contradiction. To this end, we assume that $(u_0(x), v_0(x))\geq(U(x), V(x))$, $(u_0(x), v_0(x)) \not\equiv( U(x), V(x))$, and that problem (\ref{eq:1.1}) has a global solution $(u(x, t), v(x,t))$. By the strong comparison principle, we have \[(u(x, t), v(x,t)) > (U(x), V(x)) \] for any $(x, t)\in \Omega\times (0,+\infty)$. Therefore, we may assume, by replacing $(u_0(x), v_0(x))$ with $(u(x, T), v(x, T))$ for some $T>0$ if necessary, that $(u_0(x), v_0(x))\geq (\beta U(x),\beta V(x))$ for some constant $\beta>1$. Let $(g_\beta, h_\beta)=(\beta U(x), \beta V(x))$. It is easy to verify that $(g_\beta, h_\beta)$ satisfies \begin{equation}\label{eq:3.7} \left\{ \begin{array}{ll} -\Delta g_\beta<(h_\beta)^p \ & x\in \Omega, \\[2mm] -\Delta h_\beta< (g_\beta)^q \ & x\in \Omega,\\[2mm] g_\beta=h_\beta=0 \ & x\in \partial\Omega. \end{array} \right. \end{equation} Indeed, $-\Delta g_\beta=\beta V^p<\beta^p V^p=(h_\beta)^p$ and $-\Delta h_\beta=\beta U^q<\beta^q U^q=(g_\beta)^q$ in $\Omega$, since $\beta>1$ and $p,q>1$. Hence, $(g_\beta, h_\beta)$ is a strict sub-solution of the following problem \begin{equation}\label{eq:3.8} \left\{ \begin{array}{ll} G_t-\Delta G=H^p \ & (x,t)\in \Omega\times(0, T), \\[2mm] H_t-\Delta H= G^q \ & (x,t)\in \Omega\times(0, T),\\[2mm] G=H=0 \ & (x,t)\in \partial\Omega\times[0, T],\\[2mm] G(x,0)=g_\beta \geq0 \ \ & x\in \Omega,\\[2mm] H(x,0)=h_\beta\geq 0 \ \ & x\in \Omega. \end{array} \right. \end{equation} Let $(G(x, t), H(x, t))$ be the solution of problem (\ref{eq:3.8}). Then it follows from the comparison principle that \[(G(x, t), H(x, t))\leq (u(x, t), v(x, t))\] for any $(x, t)$, since $(g_\beta (x), h_\beta(x))\leq \big(u_0(x), v_0(x)\big)$. Consequently, $(G(x, t), H(x, t))$ exists globally and is strictly increasing with respect to $t$.
Let \[\varphi(t)=\int_\Omega G(x, t)H(x,t)\ dx\] and \[E(t)=\int_\Omega \nabla G\cdot\nabla H\ dx-\frac{1}{p+1}\int_\Omega H^{p+1}\ dx-\frac{1}{q+1}\int_\Omega G^{q+1}\ dx.\] By making use of (\ref{eq:3.8}), we can verify that $\varphi(t)$ and $E(t)$ satisfy \[\dfrac{d \varphi}{dt}=-2E(t)+\frac{p-1}{p+1}\int_\Omega H^{p+1}\ dx+\frac{q-1}{q+1}\int_\Omega G^{q+1}\ dx,\] \[\begin{array}{ll} \dfrac{d E(t)}{dt}&=\int_\Omega \nabla G_t \cdot\nabla H\ dx+\int_\Omega \nabla H_t \cdot\nabla G\ dx -\int_\Omega H^p H_t\ dx-\int_\Omega G^q G_t\ dx \\[5mm] &=-2\int_\Omega G_t H_t\ dx\leq 0, \end{array}\] where the last inequality holds because $G_t\geq 0$ and $H_t\geq 0$. Let $\gamma=\frac{(p+1)(q+1)}{p+q+2}$. It follows from the assumptions $p>1$ and $q>1$ that \[\gamma>1, \ \ \frac{q+1}{\gamma}>1, \ \ \frac{p+1}{\gamma}>1, \ \ \frac{\gamma}{q+1}+\frac{\gamma}{p+1}=1.\] By H\"{o}lder's inequality, Young's inequality, and Lemma 2.2, we have \[\begin{array}{ll} \varphi(t)&\leq \frac{q+1}{\gamma}\int_\Omega G^\frac{q+1}{\gamma}\ dx +\frac{p+1}{\gamma}\int_\Omega H^\frac{p+1}{\gamma}\ dx\\[5mm] &\leq \frac{\max\{p,q\}+1}{\gamma}|\Omega|^{1-\frac{1}{\gamma}}\big((\int_\Omega G^{q+1}\ dx)^\frac{1}{\gamma} +(\int_\Omega H^{p+1}\ dx)^\frac{1}{\gamma}\big)\\[5mm] &\leq \frac{\max\{p,q\}+1}{\gamma}|\Omega|^{1-\frac{1}{\gamma}} 2^{1-\frac{1}{\gamma}}[\int_\Omega G^{q+1}\ dx+\int_\Omega H^{p+1}\ dx]^\frac{1}{\gamma}. \end{array}\] Hence, there exists a positive constant $C$ such that \[\dfrac{d \varphi}{dt}\geq -2E(t)+C\varphi^{\gamma}(t).\] Since $E(t)$ is nonincreasing in $t$, we have $E(t)\leq E(0)$ for any $t>0$. Consequently, \[\dfrac{d \varphi}{dt}\geq -2E(0)+C\varphi^{\gamma}(t).\] From this, we may conclude that \[\sup\limits_{t\geq0}\int_\Omega GH\ dx<+\infty .\] Otherwise, we would have $\int_\Omega GH\ dx\rightarrow +\infty$ as $t\rightarrow\infty$, since $\varphi(t)=\int_\Omega GH\ dx$ is strictly increasing in $t$. Hence, there would exist $t_0>0$ large enough such that \[\frac{d}{dt}\int_\Omega GH\ dx\geq \frac{C}{2}\Big(\int_\Omega GH\ dx\Big)^{\gamma}\] for any $t>t_0$. Since $\gamma>1$, this differential inequality forces $\varphi(t)$ to become infinite in finite time, so $(G(x,t),H(x,t))$ must blow up in a finite time, which contradicts the fact that $(G(x,t),H(x,t))$ is a global solution of problem (\ref{eq:3.8}). Let \[T(t)=\int_\Omega G^{q+1}\ dx+\int_\Omega H^{p+1}\ dx.\] Then $T(t)$ is strictly increasing in $t$ because $(G(x,t),H(x,t))$ is. Thus, setting $C_1:=\sup\limits_{t\geq 0}\varphi(t)<+\infty$, for any $t>0$ we have \[\begin{array}{ll} C_1\geq \int_t^{t+1}\frac{d}{d s}\int_\Omega GH \ dx \ ds&=-2\int_t^{t+1} E(s)\ ds +\frac{p-1}{p+1} \int_t^{t+1} \int_\Omega H^{p+1}\ dx \ ds \\[5mm] &\ \ \ \ +\frac{q-1}{q+1}\int_t^{t+1} \int_\Omega G^{q+1}\ dx \ ds\\[5mm] &\geq -2 E(0)+\min\{\frac{p-1}{p+1}, \frac{q-1}{q+1}\}T(t).
\end{array}\] From this, we can easily see that \[\sup\limits_{t\geq 0}T(t)<+\infty.\] Consequently, there are functions $g(x)\in L^{q+1}(\Omega)$ and $h(x)\in L^{p+1}(\Omega)$ such that \[G(x, t)\rightarrow g(x)\ \ \mbox{weakly in}\ \ L^{q+1}(\Omega), \qquad H(x, t)\rightarrow h(x)\ \ \mbox{weakly in} \ \ L^{p+1}(\Omega).\] Multiplying the first and the second equation in (\ref{eq:3.8}) by functions $\xi$ and $\eta$, respectively, where $\xi,\eta\in C^2(\overline{\Omega})$ with $\xi=\eta=0$ on $\partial\Omega$, and integrating the resulting equations over $\Omega\times[t,t+1]$, we obtain \[\int_\Omega [G(x, t+1)-G(x, t)]\xi \ dx +\int_t^{t+1}\int_\Omega G(-\Delta \xi )\ dx \ ds= \int_t^{t+1}\int_\Omega H^p\xi \ dx \ ds,\] \[\int_\Omega[ H(x, t+1)-H(x, t)]\eta \ dx +\int_t^{t+1}\int_\Omega H(-\Delta \eta )\ dx \ ds= \int_t^{t+1}\int_\Omega G^q\eta \ dx \ ds.\] Passing to the limit as $t\rightarrow \infty$, we find that \[\int_\Omega g(-\Delta \xi )\ dx = \int_\Omega h^p\xi \ dx ,\] \[\int_\Omega h(-\Delta \eta )\ dx = \int_\Omega g^q\eta \ dx .\] This implies that $(g(x), h(x))$ is an $L^1$ solution of problem (\ref{eq:1.2}) (for the definition of an $L^1$ solution, we refer to \cite{QuSoup}). Noticing that $p,q>1$ satisfy (\ref{eq2}) and \[\int_\Omega g^{q+1}\ dx<+\infty, \quad \int_\Omega h^{p+1}\ dx<+\infty,\] it follows from the regularity theory (bootstrap method) for $L^1$ solutions that $g, h\in L^\infty(\Omega)$ (see \cite{QuSoup}). With the $L^{\infty}$ estimate in hand, we can establish the $H_0^1$ estimate of $g(x)$ and $h(x)$ by making use of the following facts: \begin{equation}\label{eq:3.9} \left\{ \begin{array}{ll} -\Delta G\leq H^p \quad & (x,t)\in\Omega\times(0,+\infty), \\[2mm] -\Delta H\leq G^q \quad & (x,t)\in\Omega\times(0, +\infty),\\[2mm] G=H=0 \quad & (x,t)\in\partial\Omega\times[0, +\infty). \end{array} \right. \end{equation} Now, we can conclude that $(g, h)$ is a classical solution of problem (\ref{eq:1.2}) by the standard regularity theory of elliptic differential equations (see \cite{GT}). Since $( g_\beta(x), h_\beta(x))>(U(x),V(x))$ in $\Omega$, it follows from the strong comparison principle that \[ G(x, t)>U(x), \quad H(x, t)>V(x)\] for any $(x, t)$. Consequently \[g(x)>U(x), \quad h(x)>V(x).\] From this, we have \[\int_\Omega g(x) U(x)(g^{q-1}-U^{q-1})\ dx>0\ \ \ \mbox{and}\ \ \ \int_\Omega h(x)V(x)(V^{p-1}-h^{p-1})\ dx<0.\] This contradicts the conclusion of Lemma 2.1, and the proof of Theorem 1.1 (ii) is complete. $\Box$ \setcounter{equation}{0}\Section {Further Remarks} The method used in the proof of Theorem 1.1 can be applied to study the following inhomogeneous problem \begin{equation}\label{eq:4.1} \left\{\begin{array}{ll} u_t-\Delta u=v^p+\lambda f(x) \ & (x,t)\in \Omega\times(0, T), \\[2mm] v_t-\Delta v=u^q+\lambda g(x) \ & (x,t)\in \Omega\times(0, T),\\[2mm] (u,v)=(0,0) \ & (x,t)\in \partial\Omega\times[0, T],\\[2mm] (u(x,0),v(x,0))=(u_0(x),v_0(x))\geq(0,0) \ \ & x\in \Omega, \end{array} \right. \end{equation} where $p,q>1$ satisfy (\ref{eq2}), and $(0,0)\leq(f(x),g(x))\not\equiv(0,0)$. The main difference between problem (\ref{eq:1.1}) and (\ref{eq:4.1}) lies in the structure of their equilibrium sets. From Lemma 2.1, we can easily see that any two distinct equilibria of problem (\ref{eq:1.1}) must intersect; a short verification is sketched below. However, problem (\ref{eq:4.1}) has a unique minimal equilibrium for $\lambda>0$ small enough, which is separated from the other equilibria.
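For completeness, we sketch the simple argument, based on Lemma 2.1, behind the first assertion. Suppose that $(U_1,V_1)$ and $(U_2,V_2)$ are two solutions of problem (\ref{eq:1.2}) which are ordered, say $(U_1,V_1)\leq (U_2,V_2)$. Applying Lemma 2.1 with $(g,h)=(U_1,V_1)$ and $(U,V)=(U_2,V_2)$ gives
\begin{equation*}
\int_\Omega U_1U_2\big(U_1^{q-1}-U_2^{q-1}\big)\ dx=\int_\Omega V_1V_2\big(V_2^{p-1}-V_1^{p-1}\big)\ dx.
\end{equation*}
The left-hand side is nonpositive and the right-hand side is nonnegative, so both integrals vanish. Since $U_1,U_2,V_1,V_2>0$ in $\Omega$ and $p,q>1$, this forces $U_1\equiv U_2$ and $V_1\equiv V_2$. Hence two distinct equilibria of problem (\ref{eq:1.1}) can never be ordered, that is, they must intersect.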
To state our results precisely, we consider the following steady-state problem of problem (\ref{eq:4.1}): \begin{equation}\label{eq:4.2} \left\{\begin{array}{ll} -\Delta u=v^p+\lambda f(x) \ & x\in \Omega, \\[2mm] -\Delta v=u^q+\lambda g(x) \ & x\in \Omega,\\[2mm] (u,v)>(0,0)\ & x\in\Omega,\\ (u,v)=(0,0) \ & x\in\partial\Omega. \end{array} \right. \end{equation} By the sub- and super-solution method, it is not difficult to prove the following \begin{lemma}\label{lm:4.1} There exists a positive number $\lambda^*$ such that the following two statements are true. \vskip 0.08in (i)\ If $\lambda>\lambda^*$, then problem (\ref{eq:4.2}) has no solution. \vskip 0.08in (ii)\ If $0<\lambda<\lambda^*$, then problem (\ref{eq:4.2}) has a unique minimal solution $(u_{\min}(x),v_{\min}(x))$ in the sense that $(u_{\min}(x),v_{\min}(x))\leq(u(x),v(x))$ for any solution $(u(x),v(x))$ of problem (\ref{eq:4.2}). Moreover, if $(u(x),v(x))\not\equiv(u_{\min}(x),v_{\min}(x))$, then $(u_{\min}(x),v_{\min}(x))<(u(x),v(x))$. \end{lemma} Let $u(x)=U(x)+u_{\min}(x)$ and $v(x)=V(x)+v_{\min}(x)$. Then it is easy to see that $(U,V)$ satisfies \begin{equation}\label{eq:4.3} \left\{\begin{array}{ll} -\Delta U=(V+v_{\min})^p-v^p_{\min} & x\in \Omega, \\ -\Delta V=(U+u_{\min})^q-u^q_{\min} & x\in \Omega,\\ (U,V)=(0,0) \ & x\in\partial\Omega. \end{array} \right. \end{equation} By variational methods, we can prove that problem (\ref{eq:4.3}) has at least one positive solution provided that (\ref{eq2}) holds (see \cite{HanL}). Hence, we have \begin{theorem}\label{tm:4.2} Assume that $p,q>1$ satisfy (\ref{eq2}). Let $\lambda^*$ be the number obtained in Lemma 4.1. Then, for any $\lambda\in(0,\lambda^*)$, problem (\ref{eq:4.2}) has at least two solutions, and among them there exists a minimal one. \end{theorem} By the same method as that used in the proof of Lemma 2.1, we can prove the following \begin{lemma}\label{lm:4.3} Let $(U_1, V_1)$ and $(U_2, V_2)$ be any two smooth solutions of problem (\ref{eq:4.3}), $G(u)=\frac{(u+u_{\min})^{q}-u^{q}_{\min}}{u}$ and $H(v)=\frac{(v+v_{\min})^{p}-v^{p}_{\min}}{v}$. Then we have \[\int_\Omega U_1U_2(G(U_2)-G(U_1))\ dx =\int_\Omega V_1V_2(H(V_1)-H(V_2))\ dx.\] \end{lemma} Noting that $G(u)$ and $H(v)$ are strictly increasing in $u$ and $v$, respectively, because $p,q>1$, we infer from Lemma 4.2 that the following result on the structure of the solution set of problem (\ref{eq:4.2}) holds \begin{theorem}\label{tm:4.4} With the same assumption as that of Theorem 4.1, problem (\ref{eq:4.2}) has at least two solutions, and among them there exists a minimal one. Moreover, any two distinct solutions of problem (\ref{eq:4.2}) which are also different from the minimal one must intersect somewhere. \end{theorem} With Theorem 4.2 established, by an argument similar to that used in the proof of Theorem 1.1, we can reach the following \begin{theorem}\label{tm:4.3} Assume that $p,q>1$ satisfy (\ref{eq2}). Let $\lambda^*$ be the number obtained in Lemma 4.1. Then, we have \vskip 0.08in (i)\ If $\lambda>\lambda^*$, then, for any initial value $(u_0(x),v_0(x))\geq(0,0)$, the solution $(u(x,t),v(x,t))$ of problem (\ref{eq:4.1}) must blow up in a finite time.
\vskip 0.08in (ii)\ If $0<\lambda<\lambda^*$, and $(U(x), V(x))$ is an arbitrary smooth solution of problem (\ref{eq:4.2}) which is different from the minimal one, then problem (\ref{eq:4.1}) has a global solution $(u(x,t), v(x, t))$ with $\lim\limits_{t\rightarrow\infty} (u(x, t),v(x,t))=(u_{\min}(x),v_{\min}(x))$ provided that $(0, 0)\leq (u_0(x), v_0(x))\leq(U(x), V(x))$ and $(u_0(x), v_0(x))\not\equiv (U(x), V(x))$; whereas the solution $(u(x,t),v(x,t))$ of problem (\ref{eq:4.1}) must blow up in a finite time if $(u_0(x), v_0(x))\geq(U(x), V(x))$ and $(u_0(x), v_0(x))\not\equiv(U(x), V(x))$. \end{theorem} Finally, we point out that the method of this paper can also be applied to study the following initial-boundary value problem with Robin boundary conditions: \begin{equation}\label{eq:4.5} \left\{\begin{array}{ll} u_t-\Delta u=v^p \ & (x,t)\in \Omega\times(0, T), \\[2mm] v_t-\Delta v=u^q \ & (x,t)\in \Omega\times(0, T),\\[2mm] \frac{\partial}{\partial n}(u,v)+\beta(u,v)=(0,0) \ & (x,t)\in \partial\Omega\times[0, T],\\[2mm] (u(x,0),v(x,0))=(u_0(x),v_0(x))\geq(0,0) \ \ & x\in \Omega, \end{array} \right. \end{equation} where $n$ is the outward unit normal vector to the boundary $\partial\Omega$ of $\Omega$, and $\beta$ is a positive constant. By arguments similar to those used in the proof of Theorem 1.1, we can also prove the following result. \begin{theorem}\label{tm:4.6} Assume that $p,q>1$ satisfy (\ref{eq2}), and that $(U(x), V(x))$ is an arbitrary smooth positive equilibrium of problem (\ref{eq:4.5}). Then the following statements hold. \vskip 0.1in (i) If $(0, 0)\leq(u_0(x), v_0(x))\leq (U(x), V(x))$ and $(u_0(x), v_0(x))\not\equiv(U(x), V(x))$, then problem (\ref{eq:4.5}) has a global solution $(u(x,t), v(x, t))$. Moreover, $\lim\limits_{t\rightarrow\infty} (u(x, t),v(x,t))=(0,0)$. \vskip 0.1in (ii) If $(u_0(x), v_0(x))\geq(U(x), V(x))$ and $(u_0(x), v_0(x))\not\equiv(U(x), V(x))$, then the solution $(u(x,t), v(x, t))$ of problem (\ref{eq:4.5}) must blow up in a finite time. \end{theorem} \end{document}
\begin{document} \title{A Discrete Event Simulation Model for Coordinating Inventory Management and Material Handling in Hospitals} \author[1a]{Amogh Bhosekar} \affil[1]{\small Department of Industrial Engineering, Clemson University, Freeman Hall, Clemson, SC 29634 \authorcr Email: [email protected] \authorcr Email: [email protected]} \author[2]{Sandra Ek\c{s}io\u{g}lu} \affil[2]{Department of Industrial Engineering, University of Arkansas, Bell Engineering Center, Fayetteville, AR 72701 \authorcr Email: [email protected]} \author[1b]{Tu\u{g}\c{c}e I\c{s}{\i}k} \author[3]{Robert Allen} \affil[3]{Perioperative Services, Prisma Health, 701 Grove Rd, Greeville, SC 29605 \authorcr Email: [email protected]} \date{} \maketitle \title{\large \bf A Discrete Event Simulation Model for Coordinating Inventory Management and Material Handling in Hospitals} \begin{abstract} { For operating rooms (ORs) and hospitals, inventory management of surgical instruments and material handling decisions of perioperative services are critical to hospitals’ service levels and costs. However, efficiently integrating these decisions is challenging due to hospitals’ interdependence and the uncertainties they face. These challenges motivated the development of this study to answer the following research questions: (R1) \emph{How does the inventory level of surgical instruments, including owned, borrowed and consigned, impact the efficiency of ORs?} (R2): \emph{How do material handling activities impact the efficiency of ORs?} (R3): \emph{How do integrating decisions about inventory and material handling impact the efficiency of ORs?} Three discrete event simulation models are developed here to address these questions. Model 1, \emph{Current}, assumes no coordination of material handling and inventory decisions. Model 2, \emph{Two Batch}, assumes partial coordination, and Model 3, \emph{Just-In-Time} (JIT), assumes full coordination. These models are verified and validated using real life-data from a partnering hospital. A thorough numerical analysis indicates that, in general, coordination of inventory management of surgical instruments and material handling decisions has the potential to improve the efficiency and reduce OR costs. More specifically, a \emph{JIT} delivery of instruments used in short-duration surgeries leads to lower inventory levels without jeopardizing the service level provided. } \end{abstract} \textbf{Keywords: }{OR in Health Services, Simulation, Inventory Management, Automated Guided Vehicles, Data Analytics} \section{Introduction}\label{intro} \textbf{Motivation:} The cost of supply chain activities that support ORs contributes to a hospital’s total expenses, accounting for as much as 40\% of the operating budget in hospitals [\citenum{dobson2015configuring}]. Holding inventory of supplies and surgical instruments makes up about 10\% to 18\% of these expenses [\citenum{volland2017material}]. For example, a 2014 study conducted in acute care hospitals in California suggests that an OR costs, on average, between \$36 and \$37 each minute. The use of surgical instruments costs between \$2.50 and \$3.50 each minute [\citenum{childers2018understanding}]. Hospitals maintain large inventories to ensure that the required instruments are available for a scheduled surgery, since the lack of an instrument leads to delays. A study conducted by \Mycite{wubben2010equipment} suggests that 45.9\% of the delays in an OR happened because an instrument was unavailable [\citenum{wubben2010equipment}]. 
These delays resulted in longer working hours for doctors and staff, and thus, additional costs for the hospital. A surgery delayed due to a lack of instruments also negatively impacts the quality of care and adverse effects can occur [\citenum{wubben2010equipment}]. Increasing inventory levels may not necessarily eliminate these delays since some delays occur because of inefficiencies in the material handling process. For example, congestion, due to the movement of Automated Guided Vehicles (AGVs) along the narrow corridors of a hospital, can lead to delays. Based on our review of the literature, very little research evaluates the effects of inventory and material handling decisions relative to the efficiency of and service levels provided by ORs in hospitals. This gap in the literature is the main motivation for this research. {\bf Background:} Surgeries performed in most hospitals are categorized as elective or emergency [\citenum{gupta2008appointment}]. This research focuses on elective surgeries. An elective surgery is scheduled within 12 weeks, and the exact timing of the surgery and the room assignment are finalized between 24 and 48 hours before the day of the surgery. The OR scheduler makes these assignments after considering the availability and preferences of the surgeon and surgical staff, as well as the availability of the required equipment and instruments. These assignments impact the availability of instruments for the rest of the day. Emergency surgeries are incorporated into the daily schedule since the timing of intervention is critical for patients’ safety [\citenum{gupta2008appointment}]. To maintain high service levels, some hospitals develop plans to ensure that sufficient ORs, instruments, and equipment are available. The limited availability of instruments restricts hospitals from utilizing ORs and surgeons’ time efficiently. In most hospitals, an instrument is not used more than once the same day because it should be decontaminated and sterilized before reuse. This process can take up to 3 or 4 hours. Most of the time, an OR operates 12 hours each day, and most of the surgeries last no more than 5 hours. \textbf{Research Questions:} Surgical instruments are categorized as: (1) owned by the hospital, (2) borrowed from other hospitals or doctors, or (3) consigned by a vendor who owns the instrument [\citenum{chobin2015best}]. The process of adopting a new surgical instrument is initiated upon a surgeon’s request. Next, the hospital evaluates whether buying, renting, or consigning this instrument is the best option. The main factors impacting this decision are the selling price of the instrument and the frequency of its use. Typically, a hospital would not purchase an instrument if only used in rare or specialty surgeries [\citenum{chobin2015best}]. In such a case, the hospital would consign or borrow the instrument and pay the owner upon its use. A hospital has several other reasons to borrow instruments, such as to match demand and supply, accommodate doctors’ requests for scheduling consecutive surgeries during a given day, continue operations on a limited budget, or mitigate a lack of storage space [\citenum{seavey2010reducing}]. According to \Mycite{seavey2010reducing}, such practices lead to inefficiencies because borrowed instruments create an additional workload for the sterile processing department (SPD) since the hospital is required to maintain documentation and pack and sterilize instruments [\citenum{seavey2010reducing}]. 
Additionally, some instruments have special cleaning procedures, which may differ from other procedures for instruments owned by the hospital. Following these procedures adds to employees’ workloads. Also, consigned instruments stored at the hospital occupy additional storage space. A recent study conducted in a major US academic hospital suggests that half of the instruments are consigned, and their cost is, on average, 12\% more than instruments owned by the hospital [\citenum{mandava2017how}]. These challenges motivate the first research question: \textbf{(R1)} \emph{How does the inventory level of surgical instruments, including owned, borrowed, and consigned, impact the efficiency of ORs?} Typically, the inventory level is determined by the total number of surgeries scheduled in a day, the daily schedule of surgeries that use the same instrument, the processing capacity of the central sterile storage division (CSSD), and the schedule of material handling activities. This study presents data-driven, discrete event simulation (DES) models and a numerical study that evaluates how inventory levels impact the utilization of instruments and the delay of surgeries. The models are validated with data from a partnering US-based hospital. Instruments are delivered to ORs via containers called case carts that are transported by carriers, such as staff or AGVs. After a surgery, soiled instruments are delivered to the CSSD using the same carriers. Inefficiencies in material handling activities lead to delays, which impact the availability of instruments. Additionally, the duration of a surgery is uncertain; thus, a surgery may take longer than planned, keeping instruments unavailable. For these reasons, some hospitals deliver instruments to a storage area beside the OR the night before the surgery. Such a practice ensures that the instruments required are available during the surgery. As a result, the same instrument cannot be reused in other surgeries scheduled on the same day. Alternatively, an instrument could be delivered to an OR directly from the CSSD a short time before the start of a surgery, the JIT delivery approach. Such a practice increases the utilization of instruments used in short-duration surgeries performed earlier in the day. This approach can also lead to lower inventory levels and lower inventory holding costs. However, such an approach requires coordination of material handling, instrument decontamination and sterilization, and OR scheduling. These challenges motivate the second research question: \textbf{(R2)} \emph{How do material handling activities impact the efficiency of ORs?} A numerical analysis via the proposed DES models is conducted to answer this question. These models take different approaches to the handling of instruments. Each material handling approach follows a different schedule of delivering case carts to ORs. For each approach, a numerical study is conducted to evaluate how the number of carriers impacts travel time, congestion, the utilization of carriers, delivery time, and the utilization of instruments. Coordinating decisions about material handling schedules and instrument inventory is challenging. For example, the decision to reduce the inventory of an instrument limits the time that instrument is available. This, in turn, negatively impacts the flexibility of scheduling a surgery needing the instrument and, therefore, the service level provided. 
The problem becomes even more challenging when other important considerations are added, such as the inconsistent material handling system, stochastic demand, and uncertain surgery duration. These challenges motivate the following research question: \textbf{(R3)} \emph{How do integrating decisions about inventory and material handling impact the efficiency of ORs?} To answer this question, another numerical study using the DES models is conducted. \textbf{Contributions:} The proposed research offers several important contributions: (\textit{i}) This study highlights the role of coordinating decisions between material handling and inventory management in improving OR efficiency and reducing costs while maintaining high service levels. In particular, this work demonstrates that JIT delivery of surgical cases for short-duration surgeries can potentially improve the efficiency of ORs and reduce the cost of healthcare. (\textit{ii}) This study develops a real-life case study using data from a US-based hospital. The proposed material handling approaches, which are intuitive and easy to implement, are verified and validated using historical data. The results of the proposed analysis have inspired the partner hospital featured in this study to make improvements in material handling and inventory management practices. While other healthcare facilities may not choose to implement the models presented here, they can learn from these practices. \textbf{Outline:} The rest of this article is organized as follows: Section 2 reviews the literature relevant to this work, and Section 3 provides a detailed description of the problem. Section 4 describes the proposed simulation models, and Section 5 introduces the case study. Section 6 discusses the results of the experiments, and finally, Section 7 summarizes the results and presents concluding remarks. \section{Literature Review} The main stream of existing literature relevant to this work is inventory management of reusable surgical instruments. Since AGVs are used as carriers by this study's partner hospital, as well as in many others, the literature that discusses the use of AGV systems for material handling in hospitals is also reviewed. {\bf Inventory Management of Reusable Surgical Instruments:} The cost of ORs is impacted by the availability of surgical supplies and implants [\citenum{chasseigne2018assessing}]. Surgical supplies include soft goods and instruments required for surgery. These supplies can be either reusable or disposable. Numerous studies show that the cost of reusable supplies is significantly lower than the cost of disposable supplies [\citenum{demoulin1996cost}, \citenum{eddie1996comparison}, \citenum{schaer1995single}, \citenum{manatakis2014reducing}, \citenum{adler2005comparison}], {and} disposable supplies negatively impact the environment [\citenum{adler2005comparison}]. \Mycite{chasseigne2018assessing} conduct a study to evaluate the cost of opened, unused soft goods and instruments in a French hospital. They reported that wasted supplies have a median cost of \euro 4.1 per procedure, which accounts for about 20.1\% of the cost of surgical supplies. However, most hospitals do not have standardized procedures to manage the inventory of surgical supplies [\citenum{ahmadi2018inventory}]. 
In their review paper, \Mycite{ahmadi2018inventory} indicate that inventory management of sterile instruments requires three important considerations: instrument and quantity assignment for each tray-type, the tray-type’s assignment to a surgeon or procedure, and the number of trays carried by the hospital. Decisions related to the first two considerations are impacted by the surgeon’s preferences, indicated in the doctor preference card (DPC). The cost of surgical supplies can be reduced in several ways, including by 1) improving the accuracy of the DPCs, 2) increasing surgeon awareness, and 3) standardizing surgical techniques. Accuracy of the DPC can be improved by reviewing it periodically [\citenum{harvey2017physician}, \citenum{farrelly2017surgical}] or by recording which instruments are used on a tray and removing the instruments that are not used [\citenum{nast2019decreasing}, \citenum{dyas2018reducing}]. For example, \Mycite{harvey2017physician} show that engaging physicians in the review of the corresponding DPC led to the removal of 109 disposable supplies and the elimination of 3 reusable instrument trays. Consequently, the cost of a case cart was reduced by \$16 on average. According to a survey conducted by \Mycite{jackson2016surgeon}, surgeons often underestimate the cost of expensive items and overestimate the cost of less expensive items due to internal bias and cost ignorance [\citenum{jackson2016surgeon}]. Thus, the cost of a surgical procedure can be reduced by increasing surgeons’ awareness of standardized operating equipment and the cost of instruments [\citenum{gitelis2015educating}, \citenum{avansino2013standardization}]. Finally, work by \Mycite{skarda2015one} shows that standardization of surgical techniques can significantly reduce operating costs without impacting the quality of a procedure [\citenum{skarda2015one}]. \Mycite{stockert2014assessing} indicate that tailored and streamlined tray compositions lead to significant cost savings [\citenum{stockert2014assessing}]. Additionally, surgeons prefer trays that have fewer unsolicited instruments [\citenum{dobson2015configuring}, \citenum{stockert2014assessing}]. Several optimization models have been developed to solve the tray optimization problem and address tray composition and inventory management for reusable surgical instruments. The objective of this problem is to minimize an OR's cost by optimizing the number of trays utilized and the amount of inventory supplied. The problem also addresses surgeon preferences for instruments. \Mycite{dobson2015configuring} develop a linear integer programming formulation and propose a heuristic algorithm to obtain a solution to this problem [\citenum{dobson2015configuring}]. \Mycite{reymondon2008optimization} propose a resource sharing method for reusable devices [\citenum{reymondon2008optimization}]. The objective is to minimize storage, processing, and wastage costs for supplies that have not been used. \Mycite{van2008optimizing} propose a deterministic model that minimizes the storage and delivery cost of instruments by optimizing tray composition [\citenum{van2008optimizing}]. \Mycite{ahmadi2019bi} present a bi-objective optimization model for configuration of surgical trays with ergonomic considerations [\citenum{ahmadi2019bi}]. The first objective function minimizes the total number of assembled tray types, and the second objective function minimizes the total number of instruments that were not requested. 
They use the $\epsilon$-constraint method to obtain the Pareto-optimal front. \Mycite{dollevoet2018solution} develop an exact integer linear programming formulation, a row and column generation approach, a greedy heuristic, and several metaheuristics. These approaches are evaluated based on the average computation time, the average value of the objective function, and the number of solutions for which optimality is proven. {\bf AGV Systems and Operations in Hospitals:} The existing literature on AGV systems focuses on fleet size selection. Simulation and optimization models are proposed to identify AGV fleet size (\Mycite{choobineh2012fleet}, \Mycite{maxwell1982design}, \Mycite{arifin2000determination}, \Mycite{rajotia1998determination}, \Mycite{sinriech1992economic}, \Mycite{egbelu1987use}, \Mycite{tanchoco1987determination}, and \Mycite{Bhosekar:2018}). \Mycite{katevas2001mobile} discusses several factors that must be considered to design a mobile robotic system for healthcare applications, and his work provides several guidelines for researchers to improve these designs. \Mycite{rossetti2000simulation} compare the manual delivery of clinical and pharmaceutical items with the performance of robotic courier delivery [\citenum{rossetti2000simulation}]. They use cost, turnaround time, variability of turnaround time, cycle time, and utilization as performance measures. The proposed simulation model shows that using robotic delivery is economically viable and improves the performance measures listed above. \Mycite{rossetti2001multi} use the Analytic Hierarchy Process to build a decision problem that evaluates the performance of a robotic healthcare delivery system based on technical, economic, and several other factors. Their proposed simulation model assesses technical factors, including the speed of robot and human couriers, based on the arrival rates of visitors who request the elevator, elevator availability, the arrival rates of the delivery items that request robots, and robot availability. In their case study, \Mycite{chikul2017technology} compare three supply chain models that use: a) manual inventory check and delivery, b) RFID inventory check and manual delivery, and c) manual inventory check and AGV-based material handling. This study shows that combined RFID tracking and AGV-based delivery maximizes cost savings in the supply chain model and yields ergonomic benefits due to reduced manpower requirements. Via a simulation-based case study, \Mycite{pedan2017implementation} identify potential benefits of utilizing an AGV system in a hospital. Finally, \Mycite{fragapane2018material} evaluate the impact that material and information flow have on costs at a Norwegian hospital. Unlike this literature, which focuses either on improving the inventory management of surgical instruments or on improving material handling activities in hospitals, the proposed work evaluates the impact of integrating material handling and inventory management decisions. The numerical analysis shows that coordinating these decisions leads to lower inventory levels, fewer AGVs used, and higher AGV utilization, while the quality of service provided remains intact. \section{Problem Description}\label{mhp} \textbf{Material Handling and Inventory Management Processes:} Figure \ref{fig: MHProcess} describes a typical material handling process for the delivery of surgical case carts in a hospital. The process begins by creating a detailed schedule of surgeries.
This schedule is prepared by the OR manager. Based on the schedule and the doctors' preferences, a list of instruments and soft goods is prepared and submitted to the materials division (MD). For each surgery, a clean case cart is loaded with the requested instruments, soft goods, and implants. These case carts are moved to pick-up/drop-off stations for carriers to pick up. Then, the clean case carts are moved from the MD to the case cart storage area (CCSA). At the CCSA, each case cart is inspected to ensure that it contains the required materials. The case carts are held at the CCSA until they are moved to the corresponding OR; they are delivered to the ORs prior to the surgeries. ORs are divided into separate cores based on the specialties they serve. Specialty instruments and implants required for the surgical cases, which are stored in the OR cores, are added to the case carts before the surgery. After the surgery, the instruments and case carts are considered soiled and must be decontaminated. The soiled carts and instruments are transported to the CSSD by the carrier. The instruments and case carts are washed and sterilized at the CSSD. The specialty surgical instruments are returned to the corresponding OR cores. This process ensures the availability of instruments before the scheduled surgery. Figure \ref{fig: Model1} presents the locations of the departments and paths traversed by the carriers (i.e., AGVs) at our partnering hospital.
\begin{figure} \caption{Material Handling Process} \label{fig: MHProcess} \end{figure}
\begin{figure} \caption{GMH Floor Map} \label{fig: Model1} \end{figure}
\begin{table}[ht] \centering \caption{AGV Movements by the Time of the Day \label{Table: ByTime}} {\begin{tabular}{llllll} \hline
Route & Time & No. of & \multicolumn{2}{c}{Travel Time [Min]} & Coefficient of \\\cline{4-5} & Interval & Trips & Average & Std. Dev. & Variation \\ \hline
& 12 am-3 am & 131 & 4.66 & 10.26 & 2.2\\ & 3 am-6 am & 227 & 5.96 & 9.55 & 1.6 \\ & 6 am-9 am & 112 & 5.27 & 6.17 & 1.17 \\ Materials Department - & 9 am-12 pm & 80 & 5.8 & 2.4 & 0.41 \\ Case Cart Storage Area & 12 pm-3 pm & 101 & 5.33 & 3.69 & 0.69 \\ & 3 pm-7 pm & \textbf{1,416} & \textbf{8.94} & \textbf{6.49} & 0.73 \\ & 7 pm-9 pm & 254 & 5.67 & 4.45 & 0.78\\ & 9 pm-12 am & 196 & 4.88 & 5.72 & 1.17\\ \hline
& 12 am-3 am & 44 & 7.28 & 6.16 & 0.85\\ & 3 am-6 am & 53 & 6.87 & 6.16 & 0.9\\ & 6 am-9 am & 146 & 5.55 & 3.27 & 0.59\\ 2nd Floor Soiled & 9 am-12 pm & 981 & 4.56 & 4.04 & 0.88 \\ Cart Storage - CSSD & 12 pm-3 pm & 882 & 5.17 & 2.56 & 0.49\\ & 3 pm-7 pm & 753 & \textbf{9.71} & \textbf{8.51} & 0.88 \\ & 7 pm-9 pm & 126 & 6.65 & 3.04 & 0.46\\ & 9 pm-12 am & 77 & 5.45 & 1.01 & 0.19\\ \hline \end{tabular}}\\ {\footnotesize{This table was obtained from prior research [\citenum{Bhosekar:2018}].}} \end{table}
To reduce the cost of inventory, hospitals need to coordinate inventory management and material handling decisions. This coordination becomes even more important in the face of uncertainty. For example, if surgery durations and carrier travel times were fixed, hospitals could calculate the necessary inventory levels with certainty and decide how many instruments to loan or consign. However, in order to ensure a high service level under uncertainty, many hospitals keep large inventories, loan instruments, and prepare and deliver case carts one day before the surgery.
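To illustrate the deterministic calculation mentioned above, the following Python sketch (not part of the simulation models; the timing parameters and function name are illustrative assumptions, not GMH data) computes the minimum number of identical instruments needed to cover one day of surgeries when surgery durations, transport times, the sterilization delay, and the delivery interval are all fixed.
\begin{verbatim}
def min_instruments(schedule, delivery_interval, transport, sterilization):
    """Greedy count of identical instruments needed for one day's schedule.

    schedule: list of (start_hour, duration_hours) for surgeries that use the
    same instrument. An instrument must be ready for dispatch delivery_interval
    hours before a surgery starts, and it becomes available again only after
    the surgery ends and the soiled cart is transported and sterilized.
    """
    ready_times = []  # next time each instrument in inventory is clean and free
    for start, duration in sorted(schedule):
        needed_by = start - delivery_interval
        free = [i for i, t in enumerate(ready_times) if t <= needed_by]
        if free:
            idx = min(free, key=lambda i: ready_times[i])  # reuse earliest-ready one
        else:
            ready_times.append(0.0)                        # add one more instrument
            idx = len(ready_times) - 1
        ready_times[idx] = start + duration + transport + sterilization
    return len(ready_times)

# Four same-instrument cases in one day; sterilization 3 h, transport 15 min.
day = [(7.5, 1.5), (9.0, 1.0), (12.0, 1.5), (15.0, 1.0)]
print(min_instruments(day, delivery_interval=1.0, transport=0.25, sterilization=3.0))   # 3
print(min_instruments(day, delivery_interval=14.0, transport=0.25, sterilization=3.0))  # 4
\end{verbatim}
In this toy example, a one-hour (JIT-style) delivery interval allows the last case to reuse the instrument from the first case, whereas delivery on the previous evening (a 14-hour interval) requires one instrument per case.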
We observed at our partner hospital that, if the delivery of instruments from the CSSD to the ORs was completed within a short time before the start of the surgery, the instruments could be reused within the same day. Such an approach has the potential to reduce the cost of using loaned or consigned instruments. This observation led to the development of the material handling approaches proposed next.
\begin{table}[ht] \centering \caption{A List of Instruments Used \label{Table: IAGMH}} {\begin{tabular}{r|rr} \hline \multicolumn{1}{c|}{\bf Type} & {\bf Total Number} & {\bf Total in \%} \\ \hline Loaner & 266 & 5\% \\ Consigned & 1,095 & 19\% \\ Owned & 3,507 & 61\% \\ Other Services & 927 & 16\% \\ \hline {\bf Total} & 5,795 & 100\% \\ \hline \end{tabular}} {} \end{table}
\textbf{Experimental Setup:} The movement of clean instruments to ORs and the movement of soiled instruments to the CSSD affect inventory availability and the starting times of surgeries. This is the reason why some hospitals, like our partner hospital, prepare and deliver the surgical case carts to the CCSA one day in advance. Such a practice ensures the availability of surgical instruments, but it introduces a number of inefficiencies in material handling and inventory management. For example, Table \ref{Table: ByTime} summarizes the data obtained on the travel times of AGVs during different times of the day at the partner hospital. The data shows that the average travel time and the corresponding standard deviation are highest between 3 pm and 7 pm. This is because the clean surgical carts for elective surgeries are being delivered to the CCSA during this period. The consequent increase in the number of AGV movements leads to congestion and, thus, longer travel times for every AGV that uses the same path. These delays lead to an increased inventory of instruments since an instrument cannot be reused in different surgeries scheduled on the same day. Table \ref{Table: IAGMH} lists the instruments used by the partner hospital by ownership type, together with the corresponding totals and percentages. Notice that about 24\% of the instruments used are either loaned or consigned. This study evaluates three approaches to delivering surgical supplies to ORs and compares their performances. The first approach, Model 1, is referred to as the \emph{Current} approach and assumes that materials required by surgeries are delivered to the CCSA the night before the surgery. The \emph{Current} approach reflects the ongoing practice of the partner hospital during the period covered by the data presented here. Next, the \emph{Two Batch} approach, Model 2, assumes that materials required by surgeries scheduled in the morning are delivered to the CCSA the previous evening, and the materials required by surgeries scheduled in the afternoon are delivered in the morning on the day of the surgery. This approach provides the opportunity to reuse the instruments from the surgeries scheduled later in the day. Since the CSSD works 24 hours each day, the instruments can be washed overnight and delivered in the morning. Since instruments are delivered a few hours in advance of the surgery, the staff has ample time to intervene if an instrument becomes unavailable. Thus, the risk of instruments not being delivered on time is minimal and does not affect the quality of care in the hospital. Finally, the \emph{Just-in-Time} approach, Model 3, assumes that the materials required are delivered shortly before the start of the surgery.
The time between the delivery of surgical supplies and the surgery, referred to as the delivery interval, needs to be determined and impacts the inventory levels. The inventory level required increases with the delivery interval. For example, consider two surgeries that require the same instrument and are scheduled on the same day. In the current system, a hospital must have two identical instruments since the case carts are delivered to the CCSA the day before the surgery. If the delivery interval is 1 hour, the instrument can be sterilized and delivered before the subsequent surgery, provided that the two surgeries are scheduled several hours apart. In this case, the hospital needs only one instrument. However, if the delivery interval is chosen to be longer than 3 hours, then, to avoid any delay of the second surgery, two instruments are needed. Note that the implementation of JIT and other lean methods in healthcare, unlike in manufacturing, should be considered with caution because such practices could delay surgeries and jeopardize the well-being of patients. Three performance measures are used to compare the proposed approaches: (\textit{i}) the average delay of a surgery's start time, which is a measure of the service level provided, (\textit{ii}) the number of instruments inventoried, which measures the efficiency of the inventory system, and (\textit{iii}) the number of carriers required for each proposed material handling approach, which measures the efficiency of the material handling system. Delays in a surgery's start time are separated into two categories: delays due to carriers, e.g., long travel times because of congestion or the unavailability of carriers, and delays due to unavailable instruments. Delays due to carriers can create challenges for the JIT approach, but these delays can be reduced by optimizing the fleet size. Delays due to unavailable instruments are caused either by delays in the delivery of soiled case carts or by an increase in the number of emergency surgeries. A delay in the delivery of soiled case carts subsequently delays the cleaning process of instruments and carts, which leads to a delay of the start of the next surgery that uses the same instrument. Delays due to unavailable instruments can be reduced by optimizing the inventory level. In this research, simulation experiments are conducted to determine the optimal fleet size and inventory level under each proposed material handling approach. Based on the results of these experiments, the delivery interval that optimizes the performance measures identified is also determined. Each of the proposed material handling approaches requires a different number of carriers to deliver materials on time. This number is impacted by the surgery schedule and the material handling process. For instance, the number of carriers needed for the JIT delivery approach is lowest since the delivery of case carts is spread throughout the day. The number of carriers needed by the \emph{Current} delivery approach is larger because the delivery of case carts is completed within a short time period. The number of carriers needed also depends on the total number of cases scheduled and the spread of the schedule. A tight schedule would require more carriers to complete material handling on time. {\bf Limitations of this Research:} {\bf Model:} The research proposed here is conducted in collaboration with Greenville Memorial Hospital (GMH), a US-based hospital located in Greenville, South Carolina.
The models presented here are motivated by the material handling and inventory management practices at GMH. The research team worked closely with the perioperative services department, which consists of the MD, the CSSD, and the OR Division. GMH uses AGVs to transport surgical case carts to and from ORs. The problem setting proposed here and the assumptions made are influenced by the practices at GMH. The models presented here are a valuable contribution to the literature because, based on a careful review of the literature, similar material handling and inventory management practices for surgical instruments are followed by other hospitals. \noindent {\bf Data:} Nine months of real-life data are used to develop the case study. This data includes information about the number of surgical cases each day and ranges over a time period long enough to observe how seasonality impacts the number of surgical cases. Ideally, a longer history of data would have been available; however, only these nine months of data were accessible for this study. \section{Simulation Model} DES models are developed to evaluate and compare the three approaches proposed for the delivery of surgical supplies. These models are created in the ARENA simulation software by Rockwell Automation. An entity type represents a surgery type, and each entity represents a surgical case of a particular type. An entity has three attributes: \emph{duration}, \emph{starting time}, and \emph{type}. \emph{Duration} is randomly generated using the distributions listed in Table \ref{Table: LOS}. These distributions are derived from the data collected at GMH. The \emph{starting time} and \emph{type} are fed to the model from the actual data. Other entities are used to control the movement of AGVs and elevators, as well as to handle other specific requirements, such as calculating the value of certain variables (e.g., the number of AGVs to activate each day). ORs, case carts, cart washers, and elevators are modeled as resources. Variables are used to track the number of busy resources. A guided path transporter network is developed with intersections and links to replicate the movement of AGVs along the corridors of the hospital. This network was constructed using actual distances obtained from a GMH floor map. The links of the network are unidirectional, bidirectional, or spurs (dead ends). The intersections represent the areas where two or more links intersect. The intersections allow AGVs to make turns and move from one link to the next, following their routes. Intersections are also used to represent pick-up/drop-off stations. A spur link marks the end of a route. Each department can handle only a certain number of AGVs at a time; variables are used to enforce these processing capacities.
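The ARENA models themselves are not reproduced here; however, the seize/delay/release logic described above can be sketched with an open-source DES library. The following Python/SimPy fragment is only an illustration of the entity flow: the resource counts are taken from the input parameters summarized in Table \ref{tab:param}, while the 5-minute AGV trip and the three-case schedule are invented for the example, and the whole sketch is not the ARENA implementation.
\begin{verbatim}
import simpy

def surgical_case(env, name, scheduled_start, duration, ors, washers, instruments):
    # One surgical-case entity: hold a clean instrument, seize an OR at the
    # scheduled start time, then send the soiled cart/instrument to the CSSD
    # for washing. All times are in minutes and purely illustrative.
    with instruments.request() as inst:
        yield inst                                            # wait for a clean instrument
        yield env.timeout(max(0, scheduled_start - env.now))  # hold until the start time
        with ors.request() as room:
            yield room                                        # seize an OR
            delay = env.now - scheduled_start                 # surgery start delay (output)
            yield env.timeout(duration)                       # perform the surgery
        yield env.timeout(5)                                  # AGV trip to the CSSD (assumed)
        with washers.request() as washer:
            yield washer
            yield env.timeout(180)                            # instrument washing, ~3 hours
    print(f"{name}: start delay {delay:.1f} min")

env = simpy.Environment()
ors = simpy.Resource(env, capacity=32)        # number of ORs
washers = simpy.Resource(env, capacity=3)     # CSSD washing capacity
instruments = simpy.Resource(env, capacity=2) # inventory level under study
for i, (start, dur) in enumerate([(480, 90), (600, 60), (720, 90)]):
    env.process(surgical_case(env, f"case{i}", start, dur, ors, washers, instruments))
env.run()
\end{verbatim}
With an inventory of two instruments and three cases, the third case starts late because it must wait for a washed instrument, which is exactly the kind of delay the models below quantify.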
\begin{table}[ht] \caption{Input Parameters: Surgery Duration \label{Table: LOS}} {\footnotesize \begin{tabular}{l|cc|l|l|c} \hline
Service & From & To & Distribution & Expression (Length of Surgery) & Squared Error \\ \hline
ENT Surgery & 00:00 & 08:00 & Lognormal & LOGN(2.02, 2.12) & 0.008 \\ & 08:00 & 14:00 & Lognormal & LOGN(1.62, 1.2) & 0.007 \\ & 14:00 & 00:00 & Lognormal & LOGN(1.23, 0.672) & 0.003 \\ \hline
Gynecology Service & 07:00 & 08:00 & Beta & 0.01 + 4.81 * BETA(2.85, 4.03) & 0.009 \\ & 15:00 & 16:00 & Lognormal & 0.27 + LOGN(0.965, 0.511) & 0.005 \\ & 16:00 & 07:00 & Lognormal & LOGN(1.65, 0.859) & 0.011 \\ \hline
Neurological Surgery & 00:00 & 09:00 & Gamma & GAMM(0.494, 5.44) & 0.007 \\ & 09:00 & 13:00 & Erlang & ERLA(0.454, 5) & 0.005 \\ & 13:00 & 00:00 & Beta & 12 * BETA(4.95, 25.8) & 0.028 \\ \hline
Ortho Trauma Surgery & 00:00 & 08:00 & Erlang & ERLA(0.587, 5) & 0.002 \\ & 08:00 & 14:00 & Lognormal & LOGN(2.57, 1.29) & 0.004 \\ & 14:00 & 00:00 & Lognormal & LOGN(2.09, 0.983) & 0.004 \\ \hline
Pediatric Surgery & 00:00 & 00:00 & Lognormal & LOGN(1.35, 0.693) & 0.011 \\ \hline
Urology Surgery & 00:00 & 07:00 & Lognormal & LOGN(1.63, 0.975) & 0.001 \\ & 07:00 & 08:00 & Lognormal & LOGN(1.2, 0.821) & 0.007 \\ & 08:00 & 00:00 & Erlang & ERLA(0.244, 4) & 0.011 \\ \hline
Vascular Surgery & 00:00 & 07:00 & Beta & 0.03 + 8.97 * BETA(0.97, 1.78) & 0.004 \\ & 07:00 & 09:00 & Gamma & GAMM(0.608, 3.58) & 0.017 \\ & 09:00 & 14:00 & Gamma & GAMM(0.42, 4.39) & 0.025 \\ & 14:00 & 00:00 & Triangular & TRIA(0.13, 0.83, 3.54) & 0.011 \\ \hline \end{tabular}} {} \end{table}
The first DES model, Model 1, depicts the \emph{Current} material handling approach used at GMH. Figure \ref{fig: SimulationFlowchart} describes this model, and Table \ref{tab:param} summarizes the values of its input parameters. In this model, the release of entities begins at 3 pm; these entities have start times after 6 am the next day. Next, the availability of instruments is checked using the decide module. If an instrument is not available, the case cart is held at the MD until the instrument becomes available. An available instrument seizes a case cart and is delivered to the CCSA. There, the entity is held until the scheduled start time of the surgery. At this point in time, the entity seizes an available OR for the duration of the surgery. At the end of a surgery, the OR is released, and the corresponding case cart and instrument are moved to the CSSD to be washed and sterilized. The resources at the CSSD are seized for the duration of service. The variables that record the number of busy units are updated when resources are released.
\begin{figure} \caption{Flowchart of the Simulation Model} \label{fig: SimulationFlowchart} \end{figure}
The second DES model, Model 2, depicts the \emph{Two Batch} material handling approach. In this model, entities are released twice a day, at 6 am and 3 pm. The entities released at 6 am have a start time after 12 pm the same day. Next, these entities follow a procedure similar to the one described above for Model 1. Entities released at 3 pm have a start time between 6 am and noon the next day. These entities are held until the next morning using the hold module, and then they follow the procedure outlined above. The third DES model, Model 3, depicts the \emph{JIT} approach. In this model, entities are released one hour prior to their \emph{start time}. This delivery interval was chosen based on the results reported in Table \ref{Table:CCS}.
Next, these entities follow the same procedure outlined above. In every model, the delivery of soiled case carts begins as soon as a surgery is completed. \begin{table}[h] \caption{Summary of Input Parameters } \label{tab:param} \resizebox{\textwidth}{!}{ \begin{tabular}{lll} \hline \textbf{Parameter} & \textbf{Source} & \textbf{Description} \\ \hline Entity Creation Time & Surgery schedule data & Read from the data \\ Attribute Duration & Surgery schedule data & Random variable from \\ & & the corresponding distribution \\ Network link distances & GMH floor maps & Read from the data \\ No. of Case carts & AGV system data & 110 \\ No. of ORs & GMH Survey & 32 \\ No. of loading personnel & GMH Survey & 4 \\ No. of AGVs & AGV system data & {[}6,8,10{]} \\ Capacity of elevators & GMH Survey & {[}2,2{]} \\ Capacity of cart washers & GMH Survey & 3 \\ Cart loading delay & GMH Survey & Triangular(2,3,5) minutes \\ Cart washing delay & GMH Survey & 20 Minutes \\ Elevator movement delay to carry AGV & GMH Survey & 40 seconds \\ Cart loading unloading delay & GMH Survey & 15 seconds \\ Instrument washing delay & GMH Survey & 3 hours \\ \hline \end{tabular} } \end{table} \section{A Case Study} {\bf Input Data Analysis:} The main objective of the data collection and analysis is to evaluate the impacts that the \emph{Current} material handling approach has on the inventory of surgical instruments. The data collected is used to develop the DES models. The data is presented in the following sets: The first data set provides information about the surgeries scheduled at GMH from Jan. 1, 2018, until Sept. 11, 2018. This data includes the surgery identification number (ID), OR’s ID , the date of the surgery, the scheduled start and finish times of a surgery, the type of surgery (i.e., vascular, orthopedic, neurological etc.), information about the surgeon, the primary procedure, and the instruments requested. The second data set provides information about the surgical instruments used at GMH. This data presents the instrument ID, the type of surgery the instrument is used for, the inventory level, and information regarding its ownership. The hospital offers 46 different surgical services. Our experimental analysis focuses on the following 7 types of surgeries: ENT, pediatric, ortho trauma, neurology, gynecology, urology, and vascular. We focus on these surgical services because they are scheduled multiple times each day. Therefore, there is an opportunity to reduce the size and cost of inventory by reusing some of the instruments. The duration of a surgery is calculated using the actual start and finish times. Surgeries are grouped based on service type, duration, and scheduled start times. For each service type, an hypothesis test is conducted to evaluate whether the duration of surgeries within each service type differ based on the starting time of the given surgery. When differences were observed, the distribution of surgery duration was estimated separately. Otherwise, the data was used to derive a single distribution for surgeries of the same type that were started at different times of the day. The results of the hypothesis test generated the input parameters used in the simulation model. For example, the surgery duration differs based on the time of the day the surgery is scheduled, by day of the week, and also by service type. A continuous distribution was fitted using the Input Analyzer of Rockwell Automation to represent the surgery duration. 
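The fitted expressions in Table \ref{Table: LOS} can also be reproduced outside the Input Analyzer. The short Python sketch below assumes that Arena's LOGN expression is parameterized by the mean and standard deviation of the lognormal variable itself (this is an assumption about Arena's convention, stated here for transparency); NumPy's generator instead expects the parameters of the underlying normal, so a conversion is needed.
\begin{verbatim}
import numpy as np

def sample_arena_logn(mean, std, size, rng=np.random.default_rng(7)):
    # Sample surgery durations (hours) from a lognormal specified, Arena-style,
    # by the mean and standard deviation of the lognormal variable itself.
    sigma2 = np.log(1.0 + (std / mean) ** 2)   # variance of the underlying normal
    mu = np.log(mean) - 0.5 * sigma2           # mean of the underlying normal
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

# Example: the afternoon ENT expression LOGN(1.23, 0.672).
durations = sample_arena_logn(1.23, 0.672, size=10_000)
print(durations.mean(), durations.std())  # sample moments close to 1.23 and 0.672
\end{verbatim}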
Table \ref{Table: LOS} shows the service types, distribution of the length of surgeries, and the squared error. The real-life scheduled start times of the surgeries are used in the simulation model obtained from the data set and presented here. Table \ref{Table: NOS} summarizes the total number of surgical cases scheduled between Jan. 1, 2018, and Sept. 11, 2018. Here, only the surgery types that were scheduled more frequently are listed. Each of these surgery type is scheduled more than once a day and requires multiple instruments of the same kind. For each surgery type, only one set of instruments, common to all the surgical cases of that type, is used. Table \ref{Table: IISI} lists the instruments selected for this study and their corresponding inventory levels. GMH carries multiple instruments for each surgery type for three main reasons: First, the same surgery could be scheduled more than once in the same day if the hospital follows a block schedule approach. This approach assigns the same block of time to a surgeon or a group of surgeons who perform similar procedures every week because surgeons perform back-to-back specialty surgeries in the assigned blocks and use similar instruments. The Current material handling system requires that every instrument is available one day before the surgery. Second, surgeons of different specialties may request the same instrument for the same procedure. Third, the hospital carries safety stock to respond to instrument-related incidents, such as dropping or breaking an instrument during a surgical procedures. \begin{table}[ht] \caption{Input Parameters: Number of Surgeries \label{Table: NOS}} \resizebox{\textwidth}{!}{ {\begin{tabular}{l|ccccccc|c} \hline \textbf{Service} & \textbf{Sunday} & \textbf{Monday} & \textbf{Tuesday} & \textbf{Wednesday} & \textbf{Thursday} & \textbf{Friday} & \textbf{Saturday} & \textbf{Total} \\ \hline \begin{tabular}[c]{@{}c@{}} ENT Surgery\end{tabular} & 25 & 295 & 148 & 264 & 231 & 302 & 20 & 1,285 \\ \begin{tabular}[c]{@{}c@{}}Gynecology Service\end{tabular} & 17 & 227 & 133 & 181 & 176 & 198 & 13 & 945 \\ \begin{tabular}[c]{@{}c@{}}Neurological Surgery\end{tabular} & 22 & 174 & 168 & 293 & 163 & 225 & 17 & 1,062 \\ \begin{tabular}[c]{@{}c@{}}Ortho Trauma Surgery\end{tabular} & 2 & 205 & 171 & 174 & 206 & 207 & 56 & 1,021 \\ \begin{tabular}[c]{@{}c@{}}Pediatric Surgery\end{tabular} & 62 & 145 & 248 & 153 & 276 & 158 & 79 & 1,121 \\ \begin{tabular}[c]{@{}c@{}}Urology Surgery\end{tabular} & 39 & 293 & 333 & 298 & 382 & 466 & 72 & 1,883 \\ \begin{tabular}[c]{@{}c@{}}Vascular Surgery\end{tabular} & 61 & 141 & 242 & 224 & 241 & 235 & 81 & 1,225 \\ \hline {\bf Total} & 228 & 1,480 & 1,443 & 1,587 & 1,675 & 1,791 & 338 & 8,542 \\ \hline \end{tabular}} {} } \end{table} \begin{table}[ht] \centering \caption{Number of Instruments in the Inventory \label{Table: IISI}} {\begin{tabular}{llc} \hline \textbf{Service Type} & \textbf{Instrument} & \textbf{Inventory} \\ \hline {ENT Surgery} & Set T \& A GMMC 1047 & 10 \\ {Gynecology Service} & Set D \& C mini GMMC 15896 & 10 \\ {Neurological Surgery} & Set Back Neuro GMMC 1341 & 12 \\ {Ortho Trauma Surgery} & Set Minor Ortho GMMC 100031 & 17 \\ {Pediatric Surgery} & Set Pediatric Minor GMMC 1247 & 8 \\ {Urology Surgery} & Ureteroscope 7.5 Comp GMMC 12656 & 18 \\ {Vascular Surgery} & Probe Doppler Pencil 8.1 GMMC 1824 & 25 \\ \hline \end{tabular}} {} \end{table} {\bf Verification and Validation:} Verification and validation procedures are used to compare the conceptual model with the 
proposed DES models. The development of the DES models is guided by the process flowchart and uses input data provided by GMH staff, who examined and approved these models. Additionally, the approach proposed by \Mycite{sargent2010verification} is adopted to verify and validate the DES models. \textbf{Data Validity:} The input data analysis section describes our data collection and analysis. This analysis indicates that the data is accurate and used appropriately. \textbf{Conceptual Model Validation:} The proposed conceptual model is validated via \emph{face validation} by GMH staff and via \emph{traces} following specific entities through the model. Flowcharts of the conceptual model are verified by GMH staff. \textbf{Computerized Model Verification:} The DES models are verified via techniques listed in \Mycite{sargent2010verification}. These techniques include animation, comparison with other models, and running several replications of the model. \textbf{Operational Validation:} A thorough sensitivity analysis was conducted to check the accuracy of the DES models. In the sensitivity analysis, the number of resources used (i.e., the number of AGVs, the number of instruments, etc.) was varied, and the impact of these changes on the model outputs was monitored. For example, when the number of AGVs is changed, the outputs monitored are the average and standard deviation of the travel time of the AGVs. Next, hypothesis tests were conducted to evaluate whether the differences between the outputs of the DES models and the real-world data are statistically significant. At the 0.05 significance level, the tests indicate that the differences are not statistically significant. \section{Discussion of Results} The results from the DES models are used to address the research questions outlined in Section \ref{intro}. \textbf{R1: How does the inventory level of surgical instruments, including owned, borrowed and consigned, impact the efficiency of ORs?} A simulation-optimization experiment is conducted using ARENA OptQuest to answer this question. The objective of the simulation-optimization is to minimize the total delay in surgery start times by changing the inventory level. The delay of a surgery is calculated as the difference between the \textit{Actual Start Time} and the \textit{Scheduled Start Time}. The decision variables of type \textit{integer} are the number of instruments in the inventory for each of the seven service types (see Table \ref{Table: IISI}). In order to reduce the computational time of the simulation-optimization experiments, a lower bound on the number of instruments inventoried is developed based on the data collected at GMH. Let $n$ be the maximum number of surgeries scheduled in a day for each service type. The lower bound equals $\lceil n/2 \rceil$. A lower bound is added for each surgery type via these constraints: (\textit{i}) the number of instruments used $\leq$ the number of instruments in the inventory, and (\textit{ii}) the number of instruments used $\geq$ the lower bound. Experiments are conducted for three different scenarios. Scenario 1 assumes that the available inventory of instruments equals the current inventory level of GMH; this inventory level is considered an upper bound. Scenario 2 assumes that the available inventory of instruments equals the lower bound. Scenario 3 assumes that the available inventory of instruments equals the average value of the upper and lower bounds.
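As an illustration of how this lower bound can be computed from the schedule data, the following Python sketch derives $n$ and $\lceil n/2 \rceil$ for each service type; the column names and the tiny inline data set are hypothetical placeholders, not the actual GMH export.
\begin{verbatim}
import math
import pandas as pd

# Hypothetical extract of the surgery schedule: one row per surgical case.
schedule = pd.DataFrame({
    "service": ["Pediatric", "Pediatric", "Neurological", "Pediatric", "Neurological"],
    "date":    ["2018-01-02", "2018-01-02", "2018-01-02", "2018-01-03", "2018-01-03"],
})

# n = maximum number of surgeries scheduled on a single day, per service type;
# the lower bound on the instrument inventory is ceil(n / 2).
daily_counts = schedule.groupby(["service", "date"]).size()
n = daily_counts.groupby(level="service").max()
lower_bound = n.apply(lambda k: math.ceil(k / 2))
print(lower_bound)
\end{verbatim}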
Table \ref{Table:R1} summarizes the results of these experiments, and the following observations result: \emph{Observation 1:} The \emph{Current} material handling approach, Model 1, is the most sensitive to changes in the inventory level, compared to \emph{Two Batch}, Model 2, and \emph{JIT}, Model 3. A decrease in the inventory level, from Scenario 1 to Scenario 2, increases the average delay from 0.42 to 31.27 minutes per surgery in the \emph{Current} approach. In the \emph{Two Batch} approach, the corresponding average delay increases from 0.01 to 5.12 minutes, and in the \emph{JIT} approach from 0.00 to 1.47 minutes per surgery (see Table \ref{Table:R1}). \emph{Observation 2:} The \emph{Current} material handling approach requires higher inventory levels to maintain the same service level, as measured by the average delay per surgery, compared to the proposed \emph{Two Batch} and \emph{JIT} approaches (see Table \ref{Table:R2}). \emph{Observation 3:} The \emph{JIT} approach leads to reduced inventory levels of instruments used in short-duration surgeries without reducing the service level. Table \ref{Table: NOS} presents the total number of neurological surgeries conducted at GMH during the 9-month period reviewed here. This number averages about 4.2 surgeries per day. Table \ref{Table: NOS} also presents the number of pediatric surgeries during the same time period, which corresponds to about 4.4 surgeries per day. The duration of neurological surgeries is about 1 hour longer than that of pediatric surgeries. A hypothesis test (at the 0.05 significance level) was conducted to evaluate the difference between the durations of neurological and pediatric surgeries. This test indicates that the difference is statistically significant (see Table \ref{tab:LengthComparison}). The results in Table \ref{Table:R2} show that, in Models 2 and 3, the inventory required for neurological surgeries decreases less, relative to Model 1, than the inventory required for pediatric surgeries. This is because instruments used in pediatric surgeries can be reused on the same day due to the shorter duration of these surgeries.
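The duration comparison can also be checked directly from the summary statistics in Table \ref{tab:LengthComparison}. A minimal SciPy computation is shown below; a Welch two-sample $t$-test is assumed here, since the exact test used in the study is not stated.
\begin{verbatim}
from scipy import stats

# Summary statistics for surgery durations in hours (neurological vs. pediatric).
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=2.31, std1=1.16, nobs1=1062,   # neurological surgery
    mean2=1.36, std2=0.77, nobs2=1121,   # pediatric surgery
    equal_var=False,                     # Welch's test (unequal variances)
)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # p is far below 0.05
\end{verbatim}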
\begin{table}[H] \caption{The Average Delay per Surgery} \label{Table:R1} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccc} \hline \multicolumn{7}{c}{\textbf{Number of Instruments per Service Type}} & \textbf{} & \multicolumn{3}{c}{\textbf{Average Delay/Surgery (Minutes)}} \\ \hline \multicolumn{1}{c|}{\textbf{Scenario}} & \textbf{ENT} & \textbf{Gynecology} & \textbf{Neurological} & \textbf{Ortho Trauma} & \textbf{Pediatric} & \textbf{Urology} & \multicolumn{1}{c|}{\textbf{Vascular}} & \textbf{Model 1} & \textbf{Model 2} & \textbf{Model 3} \\ \hline \multicolumn{1}{c|}{\textbf{1}} & 10 & 10 & 13 & 17 & 8 & 18 & \multicolumn{1}{c|}{16} & 0.42 & 0.01 & 0 \\ \multicolumn{1}{c|}{\textbf{2}} & 6 & 6 & 5 & 9 & 4 & 10 & \multicolumn{1}{c|}{6} & 31.27 & 5.12 & 1.47 \\ \multicolumn{1}{c|}{\textbf{3}} & 8 & 8 & 9 & 13 & 6 & 14 & \multicolumn{1}{c|}{10} & 3.58 & 0.25 & 0.01 \\ \hline \end{tabular} } \end{table} \begin{table}[H] \caption{Inventory Level of Instruments} \label{Table:R2} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline & & \multicolumn{7}{c}{\textbf{Number of Instruments per Service}} \\ \hline \multicolumn{1}{c|}{\textbf{Model}} & \multicolumn{1}{c|}{\textbf{Delay/ Surgery (Minutes)}} & \textbf{ENT} & \textbf{Gynecology} & \textbf{Neurological} & \textbf{Ortho Trauma} & \textbf{Pediatric} & \textbf{Urology} & \textbf{Vascular} \\ \hline \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{0.42} & 10 & 10 & 13 & 17 & 8 & 18 & 16 \\ \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{0.41} & 6 & 10 & 13 & 13 & 6 & 18 & 12 \\ \multicolumn{1}{c|}{\textbf{3}} & \multicolumn{1}{c|}{0.41} & 6 & 6 & 9 & 9 & 4 & 16 & 12 \\ \hline \end{tabular} } \end{table} \begin{table}[H] \caption{Comparison of Two Service Types} \label{tab:LengthComparison} \centering \resizebox{0.55\textwidth}{!}{ \begin{tabular}{l|cc} \hline \textbf{Statistics} & \textbf{Neurology Surgery} & \textbf{Pediatric Surgery} \\ \hline \textbf{Sample Size} & 1062 & 1121 \\ \textbf{Average Length} & 2.31 & 1.36 \\ \textbf{95\% CI} & (2.24,2.38) & (1.31,1.40) \\ \textbf{Standard Deviation} & 1.16 & 0.77 \\ \hline \end{tabular} } \end{table} \textbf{R2: How do material handling activities impact the efficiency of ORs?} Two sets of experiments are conducted. The first set focuses on the impact that changing the number of AGVs has on the performance of the material handling system. This performance is measured via the average travel time per trip, the total travel time, and the corresponding standard deviations. The delivery time of clean and soiled case carts is analyzed as the number of AGVs increases from 6 to 8 to 10. Experiments with fewer than 6 AGVs led to extensive delays in delivering all the case carts in \emph{Current} system, which requires employees to work overtime, so these experiments are not considered in this analysis. The second set of experiments focuses on the impact that changing the timing of delivery has on the performance of the material handling system. For this purpose, the performances of Models 1, 2, and 3 are compared. The results of these experiments are summarized in Tables \ref{Table:CCS} and \ref{Table:SCS} and Figures \ref{fig:EffectofMaterialHandlingCCS} and \ref{fig:EffectofMaterialHandlingSCS}. The following observations result: \emph{Observation 1:} The average daily travel time of clean case carts is longest in the \emph{Current} material handling approach and shortest in the \emph{JIT} approach (see Figures \ref{fig:cm1}, \ref{fig:cm2}, and \ref{fig:cm3}). 
The \emph{Current} approach has the longest travel time due to congestion since the delivery of clean case carts for elective surgeries takes place during 3-7pm. \emph{Observation 2:} The average daily travel time of clean case carts increases with the number of AGVs (see Figures \ref{fig:cm1}, \ref{fig:cm2}, and \ref{fig:cm3}). This increase is highest in the \emph{Current} material handling approach. \emph{Observation 3:} The average daily travel time of clean case carts in the \emph{JIT} approach is not impacted by the increase in the number of AGVs since the delivery of case carts is spread over the day. These deliveries do not cause congestion (see Figure \ref{fig:cm3}). \emph{Observation 4:} The average daily travel time of soiled case carts for every material handling approach is slightly impacted by the increase in the number of AGVs (see Figures \ref{fig:dm1}, \ref{fig:dm2}, and \ref{fig:dm3}). Note that the difference in the average travel time per trip is small but still statistically significant. The change in travel time due to the increase in the number of AGVs for every Model is small because soiled case carts are delivered to CSSD right after the surgery; thus, they are delivered throughout the day, and these deliveries have a minimal impact on congestion. \begin{table}[ht] \caption{Sensitivity Analysis of Clean Case Carts Delivery} \label{Table:CCS} \centering \resizebox{0.7\textwidth}{!}{ \begin{tabular}{c|c|cccc} \hline \textbf{} & \textbf{} & \multicolumn{4}{c}{\textbf{Travel Time (Minutes)}} \\ \hline \textbf{No. of AGVs} & \textbf{Model} & \textbf{Average} & \textbf{StDev} & \textbf{CI for Average} & \textbf{CI for StDev} \\ \hline \textbf{6} & \textbf{1} & 4.83 & 0.31 & (4.83, 4.84) & (0.30, 0.31) \\ \textbf{} & \textbf{2} & 4.96 & 0.17 & (4.95, 4.96) & (0.17, 0.18) \\ \textbf{} & \textbf{3} & 3.31 & 0.27 & (3.31, 3.32) & (0.26, 0.27) \\ \hline \textbf{8} & \textbf{1} & 6.53 & 0.60 & (6.51, 6.54) & (0.59, 0.61) \\ \textbf{} & \textbf{2} & 6.47 & 0.64 & (6.46, 6.48) & (0.62, 0.65) \\ \textbf{} & \textbf{3} & 3.43 & 0.36 & (3.42, 3.44) & (0.35, 0.36) \\ \hline \textbf{10} & \textbf{1} & 8.02 & 1.16 & (7.99, 8.05) & (1.14, 1.17) \\ \textbf{} & \textbf{2} & 7.66 & 1.17 & (7.63, 7.69) & (1.14, 1.19) \\ \textbf{} & \textbf{3} & 3.46 & 0.39 & (3.46, 3.47) & (0.38, 0.39) \\ \hline \end{tabular} } \end{table} \begin{figure} \caption{Total Travel Time: Model 1} \label{fig:cm1} \caption{Total Travel Time: Model 2} \label{fig:cm2} \caption{Total Travel Time: Model 3} \label{fig:cm3} \caption{Sensitivity Analysis of Clean Case Carts Delivery} \label{fig:EffectofMaterialHandlingCCS} \end{figure} \begin{table}[ht] \centering \caption{Sensitivity Analysis of Soiled Case Carts Delivery} \label{Table:SCS} \resizebox{0.7\textwidth}{!}{ \begin{tabular}{c|c|cccc} \hline \textbf{} & \textbf{} & \multicolumn{4}{c}{\textbf{Travel Time (Minutes)}} \\ \hline \textbf{No. 
of AGVs} & \textbf{Model} & \textbf{Average} & \textbf{StDev} & \textbf{CI for Average} & \textbf{CI for StDev} \\ \hline \textbf{6} & \textbf{1} & 5.46 & 0.10 & (5.46, 5.46) & (0.094, 0.097) \\ \textbf{} & \textbf{2} & 5.44 & 0.09 & (5.44, 5.44) & (0.087, 0.089) \\ \textbf{} & \textbf{3} & 5.39 & 0.07 & (5.39, 5.39) & (0.069, 0.071) \\ \hline \textbf{8} & \textbf{1} & 5.60 & 0.18 & (5.59, 5.60) & (0.179, 0.186) \\ \textbf{} & \textbf{2} & 5.53 & 0.15 & (5.53, 5.54) & (0.149, 0.154) \\ \textbf{} & \textbf{3} & 5.40 & 0.07 & (5.39, 5.40) & (0.072, 0.074) \\ \hline \textbf{10} & \textbf{1} & 5.75 & 0.29 & (5.75, 5.76) & (0.290, 0.299) \\ \textbf{} & \textbf{2} & 5.64 & 0.23 & (5.63, 5.64) & (0.232, 0.238) \\ \textbf{} & \textbf{3} & 5.40 & 0.07 & (5.39, 5.40) & (0.072, 0.074) \\ \hline \end{tabular} } \end{table} \begin{figure} \caption{Total Travel Time: Model 1} \label{fig:dm1} \caption{Total Travel Time: Model 2} \label{fig:dm2} \caption{Total Travel Time: Model 3} \label{fig:dm3} \caption{Sensitivity Analysis of Soiled Case Carts Delivery} \label{fig:EffectofMaterialHandlingSCS} \end{figure} \textbf{R3: How does integrating decisions about inventory and material handling impact the efficiency of ORs?} \emph{Observation 1:} The average delay per surgery and the total number of delays are lowest in the \emph{JIT} material handling approach. These statistics are highest in the \emph{Current} approach. A successful implementation of \emph{JIT} requires coordination of material handling and inventory decisions, and numerical results show that this coordination leads to improved OR efficiency. This observation is true at different inventory levels, represented by Scenarios 1, 2, and 3 in Table \ref{Table:R1}; for different material handling approaches, represented by Models 1, 2, and 3 in Tables \ref{tab:AvgDelayByType} and \ref{tab:delayfrequency}; and for different material handling capacities, represented by the number of AGVs in Tables \ref{tab:AvgDelayByType} and \ref{tab:delayfrequency}. \emph{Observation 2:} The average inventory level is lowest in the \emph{JIT} material handling approach. This observation is supported by the results of Tables \ref{Table:R1} and \ref{Table:R2}. \textbf{Recommendations:} The following recommendations are made based on the observations presented above: \emph{Recommendation 1:} Coordinating material handling and inventory management decisions has the potential to improve the efficiency and reduce the cost of ORs without jeopardizing the service level provided. To facilitate this coordination, the requirements for each surgery, the number of available instruments, and the location of instruments must be known at all times. Transparent information technology systems will facilitate the coordination of decisions. \emph{Recommendation 2:} Hospitals should consider implementing a \emph{JIT} material handling approach for instruments used in short-duration surgeries because such an approach leads to lower inventory levels without jeopardizing the service level provided. \emph{Recommendation 3:} Hospitals should frequently reevaluate their material handling system to identify improvements. For example, GMH currently uses 10 AGVs. Using only 6 or 8 AGVs leads to reduced congestion along the corridors of the hospital and leads to shorter delivery times. The remaining AGVs can be used for transportation of trash, linen, and pharmaceuticals, among other items. 
\begin{table}[H] \caption{Average Delay by Service Type (Hours)} \label{tab:AvgDelayByType} \resizebox{\textwidth}{!}{ \begin{tabular}{c|ccc|ccc|ccc} \hline \textbf{} & \multicolumn{3}{c|}{\textbf{Model 1}} & \multicolumn{3}{c|}{\textbf{Model 2}} & \multicolumn{3}{c}{\textbf{Model 3}} \\ \hline \textbf{Service Type} & \textbf{6 AGV} & \textbf{8 AGV} & \textbf{10 AGV} & \textbf{6 AGV} & \textbf{8 AGV} & \textbf{10 AGV} & \textbf{6 AGV} & \textbf{8 AGV} & \textbf{10 AGV} \\ \hline ENT & 0.060 & 0.006 & 0.001 & 0.009 & 0.000 & 0.000 & 0.001 & 0.000 & 0.000 \\ Gynecology & 0.132 & 0.011 & 0.001 & 0.014 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ Neurological & 1.672 & 0.124 & 0.005 & 0.330 & 0.011 & 0.000 & 0.085 & 0.000 & 0.000 \\ Ortho trauma & 0.018 & 0.000 & 0.000 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ Pediatric & 1.186 & 0.244 & 0.042 & 0.266 & 0.020 & 0.002 & 0.030 & 0.001 & 0.000 \\ Urology & 0.245 & 0.024 & 0.001 & 0.014 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ Vascular & 0.324 & 0.003 & 0.000 & 0.028 & 0.000 & 0.000 & 0.003 & 0.000 & 0.000 \\ \hline \multicolumn{3}{c}{\footnotesize{*The total number of replications is 30.}} \end{tabular} } \end{table} \begin{table}[H] \caption{Frequency of Delayed Surgeries} \resizebox{\textwidth}{!}{ \begin{tabular}{c|ccc|ccc|ccc} \hline & \multicolumn{3}{c|}{\textbf{Model 1}} & \multicolumn{3}{c|}{\textbf{Model 2}} & \multicolumn{3}{c|}{\textbf{Model 3}} \\ \hline \textbf{Service Type} & \textbf{6 AGV} & \textbf{8 AGV} & \textbf{10 AGV} & \textbf{6 AGV} & \textbf{8 AGV} & \textbf{10 AGV} & \textbf{6 AGV} & \textbf{8 AGV} & \textbf{10 AGV} \\ \hline {ENT} & 378 & 41 & 5 & 175 & 14 & 2 & 17 & 0 & 0 \\ {Gynecology} & 580 & 60 & 5 & 138 & 10 & 0 & 10 & 0 & 0 \\ {Neurological} & 6,254 & 595 & 22 & 3,352 & 314 & 0 & 1,820 & 0 & 0 \\ {Ortho Trauma} & 120 & 0 & 0 & 63 & 0 & 0 & 0 & 0 & 0 \\ {Pediatric} & 4,859 & 1,293 & 255 & 2,532 & 377 & 52 & 743 & 20 & 0 \\ {Urology} & 2,261 & 268 & 8 & 652 & 39 & 0 & 8 & 0 & 0 \\ {Vascular} & 1,722 & 24 & 0 & 615 & 5 & 0 & 85 & 0 & 0 \\ \hline \multicolumn{3}{c}{\footnotesize{*The total number of replications is 30.}} \end{tabular} } \label{tab:delayfrequency} \end{table} \section{Summary and Concluding Remarks} The proposed research and the models presented here are motivated by the opportunities for improvement observed in GMH’s inventory management and material handling. Based on their \emph{Current} material handling system, case carts loaded with instruments required by surgeries scheduled during a day are delivered, via AGVs, to ORs within a few hours or the evening before the surgery. This delivery schedule leads to (\textit{i}) increased inventory of instruments, whether owned, loaned, or cosigned; (\textit{ii}) increased traffic and congestion, which delay the deliveries of case carts and the delivery other materials that use AGVs; and (\textit{iii}) delayed surgery start times. The inefficiencies identified motivated the following research questions: \textbf{(R1)}: \emph{How does the inventory level of surgical instruments, including owned, borrowed and cosigned, impact the efficiency of ORs?} \textbf{(R2)}: \emph{How do material handling activities impact the efficiency of ORs?} \textbf{(R3)}: \emph{How does integrating decisions about inventory and material handling impact the efficiency of ORs?} In order to address these research questions, two new material handling approaches are proposed and compared to the approach already in place. 
Along with the already-existing \emph{Current} approach, the \emph{Two Batch} approach delivers surgical carts to ORs twice a day, in the morning and in the evening, and the \emph{JIT} approach delivers a surgical cart to an OR shortly before the surgery. Three DES models are developed, one for each of the three approaches, and they are verified and validated using real-life data collected at a partnering hospital. A thorough sensitivity analysis of the DES models is conducted and leads to a number of observations and recommendations. The \emph{Current} material handling approach is most sensitive to changes in the inventory level, requires the highest levels of inventory to maintain a high service level, and leads to congestion and delays in the delivery of surgical case carts. Both the \emph{Two Batch} and \emph{JIT} approaches outperform the \emph{Current} material handling approach. The implementation of the \emph{JIT} approach leads to the greatest improvements in OR efficiency and service levels. Based on these observations, hospitals should identify opportunities to coordinate material handling and inventory management decisions since doing so leads to improved OR efficiency. New, data-based approaches to material handling and inventory management, like the \emph{JIT} delivery of surgical cases, have the potential to improve the efficiency of short-duration surgeries in hospital ORs. \subsection*{ACKNOWLEDGMENTS} This research is partially funded via a Spark Research Grant awarded by the Material Handling Institute (MHI) and the College-Industry Council on Material Handling Education (CICMHE). The authors are grateful to MHI and CICMHE for their support. The authors thank the staff of Greenville Memorial Hospital for their continuous support of this research and for providing the data and expertise necessary to verify and validate the models developed. \end{document}
\begin{document} \baselineskip 18pt \setcounter{section}{0} \title{A $\Delta^2_2$ Well-Order of the Reals And Incompactness of $L(Q^{MM})$} \author{Uri Abraham \\ Department of Mathematics and Computer Science\\ Ben-Gurion University, Be'er Sheva, ISRAEL; \and Saharon Shelah \thanks{Supported by a grant from the Israeli Academy of Sciences. Pub. 403} \\ Institute of Mathematics\\ The Hebrew University, Jerusalem, ISRAEL.} \date{8-16-91} \maketitle \begin{abstract} A forcing poset of size $2^{2^{\aleph_1}}$ which adds no new reals is described and shown to provide a $\Delta^2_2$ definable well-order of the reals (in fact, any given relation on the reals may be so encoded in some generic extension). The encoding of this well-order is obtained by playing with products of Aronszajn trees: some products are special while others are Suslin trees. The paper also deals with the Magidor-Malitz logic: it is consistent that this logic is highly non-compact. \end{abstract} \noindent{\bf\large Preface} \label{sP} This paper deals with three issues: the question of definable well-orders of the reals, the compactness of the Magidor-Malitz logic, and the forcing techniques for specializing Aronszajn trees without adding new reals. In the hope of attracting a wider readership, we have tried to make this paper as self-contained as possible; in some cases we have reproved known results, or given informal descriptions to remind the reader of what he or she probably knows. We could not do so for the theorem that the iteration of $\dD$-complete proper forcing adds no reals, and the reader may wish to consult Chapter V of Shelah [1982], or the new edition [1992]. Anyhow, we rely on this theorem only at one point. The question of the existence of a definable well-order of the set of reals, $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$, with all of its variants, is central in set theory. As a starting point for the particular question which is studied here, we take the theorem of Shelah and Woodin [1990] by which the existence of a large cardinal (a supercompact, and even much less) implies that there is no well-order of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ in $L(\hbox{{\sf I}\kern-.1500em \hbox{\sf R}})$. Assuming that there exists a cardinal which is simultaneously measurable and Woodin, Woodin [in preparation] has shown that: {\it If CH holds, then every $\Sigma^2_1$ set of reals is determined. Hence there is no $\Sigma^2_1$ well-order of the reals. } A $\Sigma^2_i$ formula is a formula over the structure $\langle \hbox{\bf N}, {\cal P}(\hbox{\bf N}),{\cal P}({\cal P}(\hbox{\bf N})), \in, \ldots \rangle$ of type $\exists X_1\subseteq{\cal P} (\hbox{\bf N}) \forall X_2\subseteq {\cal P} (\hbox{\bf N})\ldots \varphi(X_1\ldots)$, where there are $i$ alternations of quantifiers, and in $\varphi$ all quantifications are over $\hbox{\bf N}$ and ${\cal P}(\hbox{\bf N})$. Equivalently, ${\cal P}(\hbox{\bf N})$ can be replaced by $\langle H=H(\omega_1),\in \rangle$, the collection of all hereditarily countable sets; this seems to be useful in applications. So a $\Sigma^2_2$ formula has the form $\exists X_1 \subseteq H \forall X_2 \subseteq H \varphi(X_1\ldots)$, where $\varphi$ is first order over $H$ and the $X_i$'s are predicates (subsets of $H$).
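Since the abstract and the results below speak of $\Delta^2_2$ definability, let us also record the standard companion conventions explicitly; they are not spelled out at this point of the text, and we add them only for the reader's convenience. A $\Pi^2_2$ formula is the negation of a $\Sigma^2_2$ formula, i.e., one of the form
$$ \forall X_1\subseteq H\ \exists X_2\subseteq H\ \varphi(X_1,X_2,\ldots), $$
with $\varphi$ first order over $H=H(\omega_1)$, and a relation is $\Delta^2_2$ iff it is definable both by a $\Sigma^2_2$ and by a $\Pi^2_2$ formula.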
A natural question asked by Woodin is whether his theorem cited above could not be generalized to exclude $\Sigma^2_2$ well-orders of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$: Perhaps CH and some large cardinal may imply that there is no $\Sigma^2_2$ well-order of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$. We give a negative answer by providing a forcing poset (of small size, $2^{2^{\aleph_0}}$) which adds no reals and gives generic extensions in which there exists a $\Sigma^2_2$ well-order of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$. Since supposedly any large cardinal retains its largeness after a ``small'' forcing extension, no large cardinal contradicts a $\Sigma^2_2$ well-order of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$. Specifically, we are going to prove the following Main Theorem. \begin{ntheorem}{Theorem A} Assume $\diamond_{\omega_{1}}$. Let $P(x)$ be a predicate (symbol). There is a (finite) sentence $\psi$ in the language containing $P(x)$ with the Magidor-Malitz quantifiers, such that the following holds. Given any ${\sf P}\subseteq \omega_1$, (1) there is a model $M$ of $\psi$, enriching $(\omega_1,<,{\sf P})$, such that $P^M={\sf P}$, and (2) assuming $2^{\aleph_0}=\aleph_1$ and $2^{\aleph_1}=\aleph_2$, there is a forcing poset $Q$ of size $\aleph_2$ satisfying the $\aleph_2$-c.c. and adding no reals such that in $V^Q$ \begin{tabular}{c} $M$ is the single model of $\psi$ (up to isomorphism). \end{tabular} \end{ntheorem} Recall that the Magidor-Malitz logic $L(Q^{MM})$ is obtained by adjoining to the regular first-order logic the quantifier $Q$, where $Qxy\,\varphi(x,y)$ is true in a structure of size $\aleph_1$ iff there exists an uncountable subset of that structure's universe such that for any two distinct $x$ and $y$ in the set, $\varphi(x,y)$ holds. (See Magidor and Malitz [1977].) Observe that if only CH is assumed in the ground model, but not $\diamond_{\omega_{1}}$, the theorem would still be applicable, since $\diamond_{\omega_{1}}$ can be obtained in such a case by a forcing which adds no reals and is of size $\aleph_1$. (See Jech [1978], Exercise 22.12.) Let us see why Theorem A implies a $\Sigma^2_2$ well-order of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ in the generic extension. Since CH is assumed, it is possible to find ${\sf P}\subseteq \omega_1$ which encodes in a natural way a well-order of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ of type $\omega_1$. For example, set ${\sf P}\subseteq \omega_1$ in such a way that ${\sf P}\cap[\omega\alpha,\omega\alpha+\omega)$, the intersection of ${\sf P}$ with the $\alpha$th $\omega$-block of $\omega_1$, ``is'' a subset, $r_\alpha$, of $\omega$, and so that $\langle r_\alpha : \alpha<\omega_1\rangle$ is an enumeration of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$. Now use Theorem A to find a formula $\psi$ and a model $M$ of $\psi$ (with $P^M={\sf P}$) and a generic extension in which $M$ is the unique model of $\psi$. In this generic extension, the well-order relation $r_\alpha<r_\beta$ (that is, $\alpha<\beta$) can be defined by: {\em There is a model $K$ of $\psi$ where $r_\alpha$ appears in $P^K$ before $r_\beta$ does.} Now, for any formula $\varphi$ in the Magidor-Malitz logic, the statement ``there is a model $K$ of $\varphi$'' is (equivalent to) a $\Sigma^2_2$ statement (see below), and hence the well-order of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ is $\Delta_2^2$ in the generic extension. (Since any $\Sigma^2_2$ linear order must be $\Delta^2_2$.)
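For completeness, here is the one-line computation behind the last parenthetical remark; it is standard and we record it only for the reader's convenience. If $\prec$ is a $\Sigma^2_2$ linear order of the reals, then
$$ \neg\,(r\prec s)\ \Longleftrightarrow\ (s\prec r)\ \vee\ (r=s), $$
and the right-hand side is again $\Sigma^2_2$ (equality of reals is even first order over $H(\omega_1)$); hence the complement of $\prec$ is also $\Sigma^2_2$, that is, $\prec$ is $\Delta^2_2$.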
We can start with any relation ${\sf P} \subset \omega_1$ (not necessarily a well-order of the reals) and get by Theorem A a generic extension $V^Q$ in which this relation is $\Delta^2_2$. To see the above remark, note that for any Magidor-Malitz formula $\varphi$ we can encode the existence of $K\models \varphi$ as a $\Sigma^2_2$ statement concerning $H(\omega_1)$ thus:\\ {\it There is a relation $\varepsilon$ on $H(\omega_1)$ and a truth function which defines a model $(H,\varepsilon)$ of enough set theory, in which $\omega_1^H$ is (isomorphic to) the real $\omega_1$, and inside $H$ there is a model $K$ for the formula $\varphi$, such that: For any subformula $\delta(u,v)$ of $\varphi$ with parameters in $K$, if $X\subseteq \omega_1$ is uncountable and such that for any two distinct $a,b\in X$, $\delta(a,b)$ holds, then there is such an $X$ in $H$ as well.} Now this statement ``is'' $\Sigma^2_2$, and the model $K$ of $\varphi$ found in $H$ is a real Magidor-Malitz model of $\varphi$, not only in the eyes of $H$. The second issue of the paper, the ``strong'' incompactness of the Magidor-Malitz logic, is an obvious consequence of the fact that $\psi$ has no non-standard models. The proof of the Main Theorem involves a construction of an $\omega_1$-sequence of Suslin trees at the first stage (constructing the model $M$ of $\psi$), and then an iteration of posets which specialize given Aronszajn trees at the second stage (making $M$ the unique model of $\psi$). The main ingredient in the iteration is the definition of a new poset $\dS({\sf T})$ for specializing an Aronszajn tree ${\sf T}$ without addition of new reals. For his well-known model of CH $\&$ {\it there are no Suslin trees} (SH), Jensen provides (in $L$) a poset which iteratively specializes each of the Aronszajn trees. Each step in this iteration (including the limit stages) is in fact a Suslin tree. Both the square and the diamond are judiciously used to construct this $\omega_2$-sequence of Suslin trees. Since forcing with a Suslin tree adds no new reals, the generic extension satisfies CH. (See Devlin and Johnsbr\aa ten [1974].) In Shelah [1982] (Chapter 5) this result is obtained in the general and more flexible setting of proper-forcing iterations which add no reals. In particular, a proper forcing which adds no reals and specializes a given Aronszajn tree is defined there. The poset $\dS({\sf T})$ of our paper is simpler than the one in Shelah [1982] because it involves no closed unbounded subsets of $\omega_1$, and so our paper could be profitably read by anyone who wants a (somewhat) simpler proof of Jensen's CH $\&$ SH. The paper is organized as follows:\\ Section~\ref{s1} gives preliminaries and sets our notation. Section~\ref{s2} shows how to construct sequences of Suslin trees such that, at will, some products of the trees are Suslin while the others are special. Section~\ref{s3} is a preservation theorem for countable support iteration of proper forcing: a Suslin tree cannot suddenly lose its Suslinity at limit stages of the iteration. Section~\ref{s4} describes the poset which is used to specialize Aronszajn trees. Section~\ref{s5} shows that the specializing posets of Section~\ref{s4} can be iterated without adding reals. In Section~\ref{s6} we start with a given family of Suslin trees and show how the iteration of the specializing posets obtains a model of ZFC in which this given family is the family of {\em all} Suslin trees; all other Suslin trees are killed.
Sections~\ref{s7} and~\ref{s8} are the heart of the paper, and the reader may want to look there first to get some motivation. In Section~\ref{s8}, a version of Theorem A which suffices to answer Woodin's question is proved first, and then the remaining details (by now easy) are given to complete the proof. Concerning the related question for models where CH does not hold, let us report that:\\ \begin{enumerate} \item Woodin obtained the following: Assume there is an inaccessible cardinal $\kappa$. Then there is a c.c.c. forcing extension in which $\kappa=2^{\aleph_0}$ (is weakly inaccessible) and there is a $\Delta^2_2$ well-ordering of the reals. \item Extending the methods of this paper, Solovay obtained a forcing poset of size $2^{2^{\aleph_0}}$ such that the following holds in the extension: \begin{enumerate} \item $2^{\aleph_0}=2^{\aleph_1}=\aleph_2$, \item MA for $\sigma$-centered posets, \item There is a $\Delta^2_1$ well-ordering of the reals. \end{enumerate} \item Motivated by this result of Solovay, Shelah obtained the following: If $\kappa$ is an inaccessible cardinal and GCH holds on a cofinal segment of cardinals below $\kappa$, then there is an extension such that \begin{enumerate} \item $2^{\aleph_0}=\kappa$, cardinals and cofinalities are not changed, \item MA, \item There is a $\Delta^2_1$ well-ordering of the reals. \end{enumerate} \end{enumerate} Theorem A was obtained by Shelah during his visit to Caltech in 1985, and he would like to thank H. Woodin for asking this question and R. Solovay for encouraging conversations. We also thank Solovay for some helpful suggestions which were incorporated here. The result of Section~\ref{s6} (a model of ZFC with few Suslin trees) is due to Abraham and appeared in fact in Section 4 of Abraham and Shelah [1985]. (However, there the machinery of Jensen iteration of Suslin trees was used, while here the approach of proper forcing is used.) The poset $\dS({\sf T})$ for specializing an Aronszajn tree ${\sf T}$ was found by Abraham, who proved that any Suslin tree ${\sf S}$ remains Suslin after the forcing, unless ${\sf S}$ is embeddable into ${\sf T}$. As said above, $\dS({\sf T})$ is simpler than the corresponding poset $P$ of Shelah [1982], but the closed unbounded set forcing involved in $P$ is still necessary in order to make two Aronszajn trees isomorphic on a club. \section{Preliminaries}\label{s1} In this section we set our notation and remind the reader of some facts concerning trees and forcing. All of these appear in more detail in the book of Jech [1978], in Todor\v{c}evi\'{c} [1984], or in the monograph of Devlin and Johnsbr\aa ten [1974], which describes Jensen's results. In saying that ``${\sf T}$ is a tree'' we intend that the height of ${\sf T}$ is $\omega_1$, each level ${\sf T}_\alpha$ is countable ($\alpha<\omega_1$), and every node has $\aleph_0$ many (immediate) successors. We do not insist that the tree has a unique root. For a node $t\in {\sf T}$ define its predecessor branch by $$ (\cdot,t)=\{s\in{\sf T} \mid s <_{{\sf T}} t\}. $$ Usually it is required that $(\cdot,a)\not=(\cdot,b)$ for $a \not=b$ at a limit level, but we allow branches with more than one least upper bound. For a node $a \in {\sf T}$, ${\it level}(a)$ is that ordinal $\alpha$ such that $a \in {\sf T}_\alpha$ (that is, the order-type of $(\cdot,a)$). We also say that $a$ is of height $\alpha$ in this case. ${\sf T}\rest \alpha$ is the tree consisting of all nodes of height $< \alpha$.
For a node $a \in {\sf T}$, ${\sf T}_a=\{x\in{\sf T} \mid a \leq_{{\sf T}} x \}$ is the tree consisting of all extensions of $a$ in ${\sf T}$. A {\em branch} in a tree is a linearly ordered (usually downward closed) subset. An {\em antichain} is a pairwise incomparable subset of the tree. An Aronszajn tree is a tree with no uncountable branches. It is special if there is an order-preserving map $f:{\sf T}\rightarrow \Q$ (i.e., $x <_{\sf T} y$ implies $f(x)<f(y)$). A {\em Suslin} tree is one with no uncountable antichain (and hence no uncountable chain either). A Suslin tree has the property that any cofinal branch (in an extension of the universe) is in fact a generic branch. The reason is that for any dense open subset $D\subset {\sf T}$, for some $\alpha$, ${\sf T}_\alpha \subset D$ (see Lemma 22.2 in Jech [1978]). If $G \subset {\sf T}$ is a branch of length $\gamma$, then for $\alpha<\gamma$, $G_\alpha$ denotes ${\sf T}_\alpha \cap G$, and $G\rest \alpha=G\cap({\sf T}\rest \alpha)$. {\bf Product of trees:} The product ${\sf T}^1\times{\sf T}^2$ of two trees consists of all pairs $\langle a_1,a_2\rangle$, where for some $\alpha$, $a_i\in {\sf T}^i_\alpha$. The pairs are ordered coordinatewise: $\langle a_1,a_2\rangle < \langle a'_1,a'_2\rangle$ iff $a_i<_{{\sf T}^i}a'_i$ for both $i$'s. The product of a finite number of trees is similarly defined. When $\langle {\sf T}^\xi \mid \xi < \alpha \rangle$ is a sequence of trees, and $e=\langle \xi_1 \dots \xi_n \rangle$ is a sequence (or set) of indices, then the product of these $n$ trees is denoted $$ {\sf T}^{(e)} = {\mbox{\large $\times$}}_{\xi \in e} {\sf T}^\xi= {\sf T}^{\xi_1} \times \ldots \times {\sf T}^{\xi_n}. $$ This notation should not be confused with the one for their union: $$ {\sf T}^e = \bigcup \{ {\sf T}^\xi \mid \xi \in e \}= {\sf T}^{\xi_1} \cup \ldots \cup {\sf T}^{\xi_n}. $$ The union of the trees is defined under the assumption that their domains are pairwise disjoint. (It is to simplify this definition that we drop the requirement for a unique root.) A {\em derived} tree of ${\sf T}$ is formed by taking, for some $\alpha < \omega_1$, $n$ distinct nodes $a_1,\ldots,a_n\in{\sf T}_\alpha$, and forming the product ${\sf T}_{a_1}\times {\sf T}_{a_2} \times \cdots \times {\sf T}_{a_n}$. The product of a Suslin tree with itself is never a Suslin tree, and the product of a special tree with any tree is again special. In Devlin and Johnsbr\aa ten [1974] Jensen constructs (using the diamond $\diamond$) a Suslin tree such that all of its derived trees are Suslin too. We will describe this construction in Subsection~\ref{s2.1}. Let {\sf T}\ be an Aronszajn tree (of {\it height}\ $\omega_1$). A function $f:{\sf T}\rightarrow\Q$ is a {\em specialization} (of {\sf T}) if $x<_{{\sf T}} y\Rightarrow f(x)< f(y)$. When $f$ is a partial function, it is called a {\em partial specializing} function on {\sf T}. $^n{\sf T}_\beta$ denotes the set of all $n$-tuples $\bar{x}=\seqn{x_0}{x_{n-1}}$ where $x_i\in{\sf T}_\beta$ for all $i<n$. We also write $\bar{x}\in{\sf T}_\beta$ instead of $\bar{x}\in\ ^n{\sf T}_\beta$. $^n{\sf T}=\cup\{ ^n{\sf T}_\beta\mid \beta < {\it height}\ {\sf T}\}$. For $Y \subseteq\ ^n{\sf T}$, $Y_\gamma=Y \cap\ ^n{\sf T}_\gamma$. If $x\in{\sf T}_\beta$ and $\alpha\leq\beta$, then $x\lceil \alpha$ denotes the unique $y\leq_{{\sf T}} x$ with $y\in{\sf T}_\alpha$.
Similarly, for $\bar{x}\in{\sf T}_\beta$, $\bar{x}\lceil\alpha\eqdf\seqn{x_0\lceil \alpha}{x_{n-1}\lceil\alpha}\in\ ^n{\sf T}_\alpha$. If $\alpha<\beta$, and $X\subseteq\ ^n{\sf T}_\beta$, then $X\lceil\alpha\eqdf\setm{\bar{x}\lceil \alpha}{\bar{x}\in X}$. Also, if $h:\ ^n{\sf T}_\beta\rightarrow\Q$ is a finite function, then for $\alpha<\beta$, if the projection taking $x \in\ ^n{\sf T}_\beta$ to $x \lceil\alpha$ is one-to-one, then $h\lceil \alpha:\ ^n{\sf T}_\alpha\rightarrow\Q$ is the function $h'$ defined by $h'(x\lceil \alpha)=h(x)$. And for a set, $H$, of finite functions, $H\lceil\alpha=\setm{h\lceil\alpha}{h\in H}$. We use similar notation for a branch, $B$, of ${\sf T}$, denoting by $B\lceil\alpha$ the subset of $B$ consisting of those nodes of $B$ of ${\it height}<\alpha$. Sometimes we think of $\bar{x}\in\ ^n{\sf T}$ as a set rather than a sequence; for example, when we say that $\bar{x}_1$ and $\bar{x}_2$ are disjoint, we refer to the ranges of the sequences, of course. More often, $\bar{x}$ refers to the sequence $\seqn{x_0}{x_{n-1}}$. For example, $\bar{x}_1\leq_{{\sf T}} \bar{x}_2$ means that ${\it length}(\bar{x}_1)={\it length}(\bar{x}_2)=n$, and for $i<n$, $x_{1i}\leq_{{\sf T}} x_{2i}$. We do not demand that $x_i\neq x_j$. A set of $n$-tuples, $X\subseteq\ ^n{\sf T}_\beta$, is said to be {\em dispersed} if for every finite $t\subseteq{\sf T}_\beta$ there is an $n$-tuple in $X$ disjoint to $t$. The following Lemma is from Devlin and Johnsbr\aa ten [1974] (Lemma 7 in Chapter VI): \begin{lemma} \label{l1.1} If ${\sf T}$ is an Aronszajn tree and $X\subseteq\ ^n{\sf T}$ is uncountable and downward closed ($\bar{y}\leq_{{\sf T}}\bar{x}\in X\Rightarrow\bar{y}\in X$), then, for some $\beta<\omega_1$, there is an uncountable $Y\subseteq X$ such that: \begin{enumerate} \item For $\beta\leq \gamma_0<\gamma_1<\omega_1,\ Y_{\gamma_{0}}=Y_{\gamma_{1}}\lceil \gamma_0$. \item $Y_\gamma=Y\cap\ ^n{\sf T}_\gamma$ is dispersed for every $\beta\leq\gamma<\omega_1$ (equivalently, $Y_\beta$ is dispersed). \end{enumerate} \end{lemma} \section{Construction of Suslin trees} \label{s2} The diamond sequence, $\diamond$, on $\omega_1$ enables the construction of Suslin trees with some degree of freedom concerning their products. For example, the construction of two Suslin trees ${\sf A}$ and ${\sf B}$ such that the product ${\sf A}\times{\sf B}$ is special; or the construction of three Suslin trees ${\sf A}, {\sf B}$ and ${\sf C}$ such that ${\sf A}\times{\sf B}\times{\sf C}$ is special, but ${\sf A}\times{\sf B}$, ${\sf A}\times{\sf C}$ and ${\sf B}\times{\sf C}$ are Suslin trees. This freedom is demonstrated in this section by showing that, given any reasonable prescribed requirement on which products are Suslin and which are special, the diamond yields a sequence $\ssseqn{{\sf S}(\zeta)}{\zeta<\omega_1}$ of trees satisfying this requirement. `Reasonable' here means that no subproduct of a Suslin product is required to be special. None of the {\em ideas} in this section is new, and we could have shortened our construction by referring to Devlin and Johnsbr\aa ten [1974] and leaving the details to the reader. We decided, however, to give a somewhat fuller presentation in the hope that some readers will find it useful. The constructions are presented gradually, so that for the more complex constructions we can concentrate on the main ideas and claim that some of the technical details are as before.
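To make the word `reasonable' concrete, here is one admissible specification; this is only our illustration, phrased in the notation of the theorem of Subsection~\ref{s2.3} below. The three-tree example just mentioned corresponds to taking ${\sf A}={\sf A}^0$, ${\sf B}={\sf A}^1$, ${\sf C}={\sf A}^2$ and
$$ {\sf sp}=\{\, e : e\ \mbox{is a finite subset of $\omega_1$ and}\ \{0,1,2\}\subseteq e \,\}, $$
with ${\sf su}$ consisting of all remaining non-empty finite subsets of $\omega_1$. This ${\sf sp}$ is closed under supersets, and no proper subproduct of the special product ${\sf A}^0\times{\sf A}^1\times{\sf A}^2$ is required to be special; this is exactly the restriction expressed by `reasonable'.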
In the following subsection we use the diamond to construct a Suslin tree such that all of its derived trees are Suslin as well. (Recall that a derived tree of ${\sf T}$ has the form ${\sf T}_{ a_{1}}\times\ldots\times{\sf T}_{a_{n}}$ where $a_1,\ldots,a_n\in{\sf T}_\alpha$ are distinct members of the $\alpha$th level of ${\sf T}$ for some $\alpha<\omega_1$.) Then we show the construction of two Suslin trees ${\sf A}$ and ${\sf B}$ such that ${\sf A}\times{\sf B}$ is special; and the last subsection gives the desired general construction. For the rest of this section we assume a `diamond' sequence $\ssseqn{S_\xi}{\xi<\omega_1}$ where $S_\xi\subseteq\xi$ and, for every $X\subseteq\omega_1$, $\setm{\xi}{X\cap\xi=S_\xi}$ is stationary in $\omega_1$. \subsection{A Suslin tree with all derived trees Suslin} \label{s2.1} Let us recall the construction of a Suslin tree ${\sf A}$. The $\alpha$th level ${\sf A}_\alpha$ of the Suslin tree ${\sf A}$ is defined by induction on $\alpha$. In order to be able to apply the diamond to ${\sf A}$ we wish to see ${\sf A}$'s universe as $\omega_1$, and we assume that the subtree ${\sf A}\rest\alpha$ consists of the set of ordinals below $\omega\cdot\alpha$. For $\alpha<\beta$ we shall require that the tree ${\sf A}\rest\beta$ is an end-extension of ${\sf A}\rest\alpha$ (the reader is asked to forgive us for using the notation ${\sf A}\rest\beta$ even though the tree ${\sf A}$ itself has not yet been constructed). At successor stages, the passage from ${\sf A}\rest(\mu+1)$ to ${\sf A}\rest(\mu+2)$, that is, the construction of ${\sf A}_{\mu+1}$, requires no special care: only that each node in ${\sf A}_\mu$ has countably many extensions in ${\sf A}_{\mu+1}$. At a limit stage $\delta<\omega_1$, first set ${\sf A}\rest\delta=\bigcup_{\alpha<\delta}\ {\sf A}\rest\alpha$, and then the $\delta$th level ${\sf A}_\delta$ is obtained by defining (as follows) a countable set of branches, $\setm{b_i}{i\in\omega}$, and putting one point in ${\sf A}_\delta$ above each $b_i$. The branch $b_i$ is cofinal in ${\sf A}\rest\delta$, and each node in ${\sf A}\rest\delta$ is contained in at least one $b_i$. If we only wish to construct a Suslin tree, then the diamond set $S_\delta\subseteq{\sf A}\rest\delta$ is used as usual: each node $a$ in ${\sf A}\rest\delta$ is first extended to some $x_0\geq a$ in $S_\delta$ (if possible), and then, in $\omega$ steps, an increasing sequence, $x_0<x_1<\ldots,$ is defined so that ${\it level}(x_i),\ i<\omega$, is cofinal in $\delta$. This sequence defines one of our countably many branches. We see now the need for the following statement to hold at every stage $\delta <\omega_1$: $$ \mbox{\em For any }\zeta_0<\zeta_1<\delta\ \mbox{\em and }a\in{\sf A}_{\zeta_{0}},\ \mbox{\em there is some } b\in{\sf A}_{\zeta_{1}}\mbox{\em\ extending }a.$$ Now let us require a little more of ${\sf A}$ and ask that each of its derived trees is Suslin too (Devlin and Johnsbr\aa ten [1974]). This variation is manifest in the construction of ${\sf A}_\delta$ for limit $\delta$, and is perhaps better described by means of a generic filter over a countable structure as follows. Let $\dP=\dP({\sf A}\rest\delta)$ be the poset defined thus: $$ \dP=\setm{\bar{a}}{\mbox{\it for some }n,\ \bar{a}=\trpl{a_0}{\ldots}{a_{n-1}} \mbox{\it\ and, for some }\alpha<\delta,\ \mbox{\it for all }0\leq i<n,\ a_i\in{\sf A}_\alpha}. $$ The ${\it level}$ of $\bar{a}\in\dP$ is the $\alpha$ such that $a_i\in{\sf A}_\alpha$ for all $i<{\it length}(\bar{a})$.
A partial order, `$\bar{b}$ {\it extends}\ $\bar{a}$', is defined as follows:\\ $\bar{b}\ {\it extends}\ \bar{a}$ iff\\ ${\it level}(\bar{a})\leq{\it level}(\bar{b})$, ${\it length}(\bar{a})\leq{\it length}(\bar{b})$, and $\forall i<{\it length}(\bar{a})$, $a_i\leq b_i$ (in ${\sf A}\rest\delta$).\\ It is not required for $\bar{a}\in \dP$ to be one-to-one: $a_i=a_j$ is possible, although, by genericity, they will split at some stage. Now let $M$ be a countable model (of a sufficient portion of set theory) which includes $\dP$ and ${\sf A}\rest\delta$ and the diamond set $S_\delta$; and let $G$ be a $\dP$-generic filter over $M$. Using suitably defined dense sets, it is easy to see that for each fixed $i<\omega$, $b_i=\setm{x}{\mbox{\it for some }\bar{a}\in G,\ x=a_i}$ is a branch in ${\sf A}\rest\delta$ going all the way up to $\delta$. The collection $\setm{b_i}{i<\omega}$ determines ${\sf A}_\delta$, and this ends the definition of ${\sf A}$. Let ${\sf T}={\sf A}_{t_{0}}\times\ldots\times{\sf A}_{t_{n-1}}$ be any derived tree of ${\sf A}$; we will prove that ${\sf T}$ is Suslin. Let $\alpha$ be the level of $\seqn{t_0}{t_{n-1}}$ in ${\sf A}$ (so $t_i \in {\sf A}_\alpha$ for all $i < n$). Let $E\subseteq{\sf T}$ be any dense open subset. By the diamond property, using some natural encoding of $n$-tuples of ordinals as ordinals, for some limit $\delta>\alpha$, $E\cap({\sf T}\rest\delta)=E\cap\delta=S_\delta$, and $E\cap \delta$ is dense open in ${\sf T}\rest\delta$. We must prove that every $\bar{x}\in{\sf T}_\delta$ is in $E$, in order to be able to prove that an arbitrary antichain in ${\sf T}$ is countable. $\bar{x} \in {\sf T}_\delta$ has the form $\bar{x}=\seqn{x_0}{x_{n-1}}$ where $x_k\in {\sf A}_{t_{k}}$. Since the $t_k$'s are distinct, the $x_k$'s give distinct branches of ${\sf A}\rest\delta$. Recall the $\dP({\sf A}\rest\delta)$-generic filter $G$ (over $M$) used to define ${\sf A}_\delta$, and let $b_{i(k)}$ be the branch of ${\sf A}\rest\delta$ which gave $x_k$. If for some $\bar{a}=\seqn{a_0}{a_{l-1}} \in G$ the $n$-tuple $\seqn{a_{i(0)}}{a_{i(n-1)}}$ is in the dense set $E$, then $\bar{x}$, which is above this $n$-tuple, must be in $E$ too. The existence of such an $\bar{a}$ in $G$ is a consequence of the following density argument: let $D$ contain all those $\bar{a}\in\dP({\sf A}\rest\delta)$ for which $i(k)<{\it length}(\bar{a})$ for every $k<n$, and either the subsequence $s=\seqn{a_{i(0)}}{a_{i(n-1)}}$ of $\bar{a}$ is not in ${\sf T}$, or else $s\in E$. $D$ is dense in $\dP$ and $D\in M$, because $S_\delta$ is in $M$. Hence $D\cap G\neq\emptyset$, and any $\bar{a}\in D\cap G$ of height $>\alpha$ is as required. \subsection{The case of two trees} \label{s2.2} The construction in the previous subsection is combined now with the construction of a special Aronszajn tree to yield two Suslin trees ${\sf A}$ and ${\sf B}$ such that: \begin{enumerate} \item Each derived tree of ${\sf A}$ and of ${\sf B}$ is Suslin, \item ${\sf A}\times{\sf B}$ is a special tree. \end{enumerate} We commence by recalling the construction of a special Aronszajn tree ${\sf A}$ together with a strictly increasing $f:{\sf A}\rightarrow \Q$.
In the inductive definition, ${\sf A}\rest\alpha$ and $f\rest\alpha= f\rest({\sf A}\rest \alpha)$, $\alpha < \omega_1$, are defined so that the following holds: $$ ({\it 1})\ \mbox{\em For any }a\in{\sf A}\rest\alpha,\ \mbox{\em rational }\varepsilon>0, \mbox{\em and ordinal }\tau<\alpha\ \mbox{\em such that }{\it height}(a)<\tau,$$ $$\mbox{\em there is an extension }b>a \ \mbox{\em in }{\sf A}_\tau\ \mbox{\em such that }0< f(b)-f(a)<\varepsilon.$$ This condition is needed at a limit stage $\alpha$ if we don't want our cofinal branches to run out of rational numbers; it enables the assignment of $f(e)\in\Q$ for $e\in{\sf A}_\alpha$, but it requires some care to keep it true at all stages. There is nothing very special at successor stages: since we assume that each node has $\aleph_0$ many successors, condition (1) above may be achieved by assigning to the successors of each node $e$ all the possible rational values $>f(e)$ (a forcing-like description of the successor stage is also possible---see below). For a limit $\alpha < \omega_1$, it seems again convenient to formulate the construction of ${\sf A}_\alpha$ and $f\rest{\sf A}_\alpha$ in terms of a generic filter $G$ over a countable structure $M$. So given ${\sf A}\rest\alpha$ and $f\rest\alpha$, a countable poset $\dq=\dq({\sf A}\rest\alpha,f\rest\alpha)$ is defined first. \begin{defn} Let $\dq$ be the collection of all pairs $(\bar{a},\bar{q})$ of the form $\bar{a}=\seqn{a_0}{a_{n-1}}$, $\bar{q}=\seqn{q(0)}{q(n-1)}$, such that: \begin{enumerate} \item $\bar{a}$ is an $n$-tuple in ${\sf A}_\xi$ for some $\xi<\alpha$. \item $q(i)\in\Q$ and $f(a_i)<{q}(i)$ for all $i<n$. \end{enumerate} \end{defn} Intuitively, ${q}(i)$ is going to be the value of $f(b)$ for $b\in {\sf A}_\alpha$ defined by the `generic' branch $\setm{a_i}{(\bar{a},\bar{q})\in G}$. The order relation on $\dq$ is accordingly defined: $(\bar{a}_2,\bar{q}_2)\ {\it extends}\ (\bar{a}_1,\bar{q}_1)$ iff $\bar{a}_2\ {\it extends}\ \bar{a}_1$ and $\bar{q}_1$ is an initial sequence of $\bar{q}_2$ (that is, ${\it length}(\bar{q}_1)\leq{\it length}(\bar{q}_2)$, and for $i<{\it length}(\bar{q}_1),\ \bar{q}_1(i)=\bar{q}_2(i)$). Now we turn to the construction of two Suslin trees ${\sf A}$ and ${\sf B}$ such that all the derived trees of ${\sf A}$ and ${\sf B}$ are Suslin and yet ${\sf A}\times{\sf B}$ is special. In this case ${\sf A}\rest\alpha$, ${\sf B}\rest\alpha$ and the specializing function $f:({\sf A}\rest\alpha)\times({\sf B}\rest\alpha)\rightarrow \Q$ are simultaneously constructed by induction. There are three jobs to do at the limit $\alpha$th stage: (i) to ensure that the cofinal branches of ${\sf A}\rest\alpha$ and of its derived trees all pass through the diamond set $S_\alpha$; (ii) to ensure the similar requirement for ${\sf B}\rest\alpha$; (iii) to specialize $({\sf A}\rest\alpha+1)\times({\sf B}\rest\alpha+1)$. It turns out that it suffices to take care of (iii) in a natural way, and genericity will take care of the two other requirements, thereby ensuring that ${\sf A}$ and ${\sf B}$ and their derived trees are Suslin.
The inductive requirement ({\it 1}) is needed here too, but in fact an even stronger requirement will be used: {\it (2)} If $\bar{a},\ \bar{b}$ are $n$- and $m$-tuples in ${\sf A}\rest\alpha$ and ${\sf B}\rest\alpha$ respectively, $\bar{c}$ is an $n$-tuple in ${\sf A}_\theta$ extending $\bar{a}$ for some $\theta < \alpha$, and $q:\ n \times m \rightarrow \Q$ is such that $\forall i,j\ f(a_i,b_j) < q(i,j)$, then there is an $m$-tuple $\bar{d}$ in ${\sf B}_\theta$, extending $\bar{b}$, such that $\forall i,j\ f(c_i, d_j)<q(i,j)$. (A similar requirement is made for $\bar{c}$ in ${\sf B}_\theta$.) In fact, by taking smaller $q(i,j)$, it even follows that for any finite $D \subset {\sf B}_\theta$ there is an $m$-tuple $\bar{d}$ in ${\sf B}_\theta$, extending $\bar{b}$ and disjoint to $D$, such that $\forall i,j\ f(c_i, d_j)=q(i,j)$. (A similar requirement is made for $\bar{c}$ in ${\sf B}_\theta$.) Again, we only describe the limit case, and leave the details of the successor case to the reader (take care of condition ({\it 2})). So assume $\alpha<\omega_1$ is a limit ordinal and ${\sf A}\rest\alpha\ (=\cup_{\mu<\alpha}{\sf A}\rest\mu)$, ${\sf B}\rest\alpha$ and $f\rest\alpha$ are given. Let $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}=\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}(({\sf A}\rest\alpha)\times({\sf B}\rest\alpha), f\rest\alpha)$ be the poset defined as follows: $(\bar{a},\bar{b},\bar{q})\in\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ iff for some $\mu<\alpha$, $\bar{a}$ is an $n$-tuple in ${\sf A}_\mu$, $\bar{b}$ is an $m$-tuple in ${\sf B}_\mu$, and $\bar{q}:n\times m\rightarrow\Q$ are such that for all $0\leq i<n,\ 0\leq j<m$, $f(a_i,b_j)<\bar{q}(i,j)$. Extension is defined naturally. If $M$ is now a chosen countable structure containing all the above, and the diamond $S_\alpha$ in particular, then pick an $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$-generic filter, $G$, over $M$ and define the $\alpha$th levels ${\sf A}_\alpha$ and ${\sf B}_\alpha$ and extend $f$ on ${\sf A}_\alpha\times{\sf B}_\alpha$ in the following way: for each $i$, $$ \begin{array}{l} u_i=\mbox{\it a node above the branch\ } \setm{x}{\mbox{\it for some }(\bar{a},\bar{b},\bar{q})\in G,\ \ a_i=x},\\ v_i=\mbox{\it a node above the branch\ } \setm{y}{\mbox{\it for some }(\bar{a},\bar{b},\bar{q})\in G,\ \ b_i=y},\\ f(u_i,v_j)=\bar{q}(i,j),\ \mbox{\it where } (\bar{a},\bar{b},\bar{q})\in G\mbox{ \it for some }\bar{a},\bar{b}. \end{array} $$ By condition ({\it 2}), it is clear that this local forcing adds branches above every node, and that condition ({\it 2}) continues to hold for $\alpha+1$. So the product of the trees ${\sf A}$ and ${\sf B}$ thus obtained is special; but why are ${\sf A}$ and ${\sf B}$ and each of their derived trees Suslin? To see that, we argue that if we restrict our attention to ${\sf A}$, for example, then it is in fact the construction of a Suslin tree given in Subsection~\ref{s2.1} which describes ${\sf A}$. For this aim we will define, for each limit $\alpha$, a projection $\Pi_{{\sf A}}$ from the poset $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ used in the construction of $({\sf A}\rest\alpha+1)\times({\sf B}\rest\alpha+1)$ onto the poset $\dP({\sf A}\rest\alpha)$ used in Subsection~\ref{s2.1}. Simply set $\Pi_{{\sf A}}(\bar{a},\bar{b},\bar{q})=\bar{a}$.
We must check the following properties, which ensure that the projection of the $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$-generic filter is a $\dP({\sf A}\rest\alpha)$-generic filter: \begin{enumerate} \item $\Pi_{{\sf A}}$ is order preserving: $x_0\leq_{\mbox{\small \hbox{{\sf I}\kern-.1500em \hbox{\sf R}}}} x_1\Rightarrow\Pi_{{\sf A}}(x_0)\leq_{\dP}\Pi_{{\sf A}}(x_1)$. \item Whenever $p\in\dP$ extends $\Pi_{{\sf A}}(x_0)$, there is an extension $x_1$ of $x_0$ in $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ such that $\Pi_{{\sf A}}(x_1)=p$. \end{enumerate} This is not difficult to prove using ({\it 2}). \subsection{$\omega_1$ many trees} \label{s2.3} In this section (waving our hands even harder) we extend the previous construction to $\omega_1$ many Suslin trees with any reasonable requirement on which products are Suslin and which are special. \begin{theorem} Assume $\diamond_{\omega_{1}}$. Let {\sf sp}\ (for special) be a collection of non-empty finite subsets of $\omega_1$ closed under supersets, and let {\sf su}\ be the collection of those non-empty finite sets $e\subset \omega_1$ which are not in ${\sf sp}$. Then there is a sequence of $\omega_1$-trees $\sseqn{{\sf A}^\zeta }{\zeta<\omega_1}$ such that for a finite set $e=\fsetn{\zeta_1}{\zeta_n}$: \begin{enumerate} \item If $e\in{\sf sp}$, then ${\sf A}^{(e)}\stackrel{Def}{=} {\sf A}^{\zeta_1}\times\cdots\times{\sf A}^{\zeta_n}$ is special. \item If $e\in{\sf su}$, then ${\sf A}^e \stackrel{Def}{=} {\sf A}^{\zeta_1}\cup\ldots\cup{\sf A}^{\zeta_n}$ and all of its derived trees are Suslin. \end{enumerate} \end{theorem} \par\noindent{\bf Proof:}\quad By induction on $\alpha<\omega_1$, the sequence $\setm{{\sf A}^\zeta\rest\alpha+1}{\zeta<\alpha}$ is defined together with specializing functions $f_e\rest\alpha+1$ for $e\in{\sf sp},\ e\subseteq\alpha$. $f_e$ is, of course, a specializing function from ${\sf A}^{(e)}$ into $\Q$. It is convenient to require that $f_e\rest\alpha+1$ is only defined on the $\beta$ levels of the product tree for $\beta > \max (e)$. The definition of the trees requires some notations and preliminary definitions. Let $\alpha < \omega_1$ be any ordinal, successor or limit, and assume that $({\sf A}^\zeta\rest\alpha)$ for $\zeta<\alpha$, and $f_e\rest\alpha$ for $e\subseteq\alpha$ in {\sf sp}, are given. Let us define, for any finite $d \subseteq \alpha$, $$ \dP({\sf A}^d\rest\alpha)={\mbox{\large $\times$}}_{\xi \in d} \dP({\sf A}^\xi\rest\alpha) $$ as follows. $a=\langle\overline{a}^\xi \mid \xi \in d \rangle \in \dP({\sf A}^d\rest\alpha)$ if for some $\mu < \alpha$, for all $\xi \in d$, $\overline{a}^\xi$ is an $n_\xi$-tuple in ${\sf A}^\xi_\mu$, enumerated as follows: $\overline{a}^\xi=\langle a^\xi_i \mid i \in I_\xi \rangle$ where $ I_\xi = I_\xi(a) \subset \omega$ is a finite set of size $n_\xi$. Extension is naturally defined in $\dP({\sf A}^d\rest\alpha)$: $b$ extends $a$ iff for all $\xi \in d$,\ $I_\xi(a) \subseteq I_\xi(b)$ and for every $i \in I_\xi(a),\ a^\xi_i < b^\xi _i$ in ${\sf A}^\xi \rest \alpha$. We need one more definition.
For $a \in \dP({\sf A}^d \rest \alpha)$ and $e \subseteq d$ with $e \in {\sf sp}$, let us say that $q^e$ {\em bounds} $a$ iff $q^e$ is a function $$ q^e:\ I^{(e)}={\mbox{\large $\times$}}_{\xi \in e} I_\xi(a) \rightarrow \Q $$ such that for every $\overline{i}=\langle i_\xi \mid \xi \in e \rangle \in I^{(e)}$, \[ f_e(\langle a^\xi_{i_\xi} \mid \xi \in e \rangle ) < q^e(\overline{i}).\] Now we can formulate the inductive property of the trees and functions at the $\alpha$th stage:\\ $ (3_\alpha)$ {\it If } $a \in \dP({\sf A}^d \rest \alpha),$ {\it where} $ d \subseteq \alpha$ {\it is finite, and if} $\langle q^s \mid s \subseteq d,\ s \in {\sf sp} \rangle$ {\it is such that each} $q^s$ {\it bounds} $a$, {\it then for every } $e \subseteq d$ {\it with } $e \in {\sf su},$ {\it and } $b$ {\it extending } $a\rest e$ {\it in } $\dP({\sf A}^e \rest \alpha),$ {\it there is} $b_1>a$ {\it in} $\dP({\sf A}^d \rest \alpha)$ {\it such that each} $q^s$ {\it bounds } $b_1,$ {\it and } $b_1 \rest e =b.$ \\ Observe that this property makes sense for $\alpha$'s which are limit as well as for $\alpha$'s which are successor ordinals. Let us now return to the inductive definition of the trees.\\ {\bf Case 1} $\alpha$ is a limit ordinal. In this case we first take the union of the trees and functions obtained so far. So for each $\xi<\alpha$, and $e \subseteq \alpha$ in ${\sf sp}$: $$ {\sf A}^\xi \rest \alpha = \bigcup_{\mu < \alpha} {\sf A} ^\xi \rest \mu \quad\mbox{\it and}\quad f_e \rest \alpha = \bigcup _{\mu < \alpha} f_e\rest \mu. $$ Then we add the $\alpha$th levels and extend $f_e$ according to the following procedure. A countable poset $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}=\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}_\alpha$ is defined as a convenient way to express how the $\alpha$-branches are added to each $({\sf A}^\zeta\rest\alpha)$, enabling the definition of ${\sf A}^\zeta_\alpha$ and of the extensions of the $f_e$'s. A condition in $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ gives finite information on the branches and the values of the appropriate $f_e$'s. So a condition $r\in\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ has two components: $r=(a,\overline{q})$, where for some $\mu=\mu(a)<\alpha$, $a$ gives information on the intersection of the (locally) generic $\alpha$-branches with the $\mu$ level, and $\overline{q}$ tells us the future rational values on products of these branches. Formally, we require that for some finite $d=d(r) \subseteq \alpha$ $$ \begin{array}{ll} (1) & a \in \dP({\sf A}^d\rest\alpha),\\ (2) & \overline{q}=\langle q^e \mid e \subseteq d,\ e \in {\sf sp} \rangle,\ \mbox{and for each } e \subseteq d\ \mbox{in } {\sf sp},\ q^e \ \mbox{bounds } a. \end{array} $$ In plain words, $r\in\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}_\alpha$ has a finite domain $d\subseteq\alpha$ on which it speaks. For $\zeta\in d$, $a^\zeta_i$ is the intersection with ${\sf A}^\zeta_{\mu}$ of the proposed $i$th branch added to ${\sf A}^\zeta\rest\alpha$, and $q^e$ gives information on how to specialize those trees required to be special. So if $e\subseteq d$ is in {\sf sp}, then $q^e$ gives rational upper bounds to the range of the specializing function $f_e$ on the branches added to ${\sf A}^{(e)}\rest\alpha$. We write $d=d(r),\ \bar{a}=\bar{a}(r),\ \bar{q}=\bar{q}(r),\ \mu=\mu(r)$ etc. to denote the components of $r\in\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$.
A countable $M$ is chosen with $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}_\alpha$, $S_\alpha$, the trees so far constructed, and so on in $M$, and an $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}_\alpha$-generic filter $G$ over $M$ is used to define the branches and the new values of $f_e$. \begin{enumerate} \item For $\zeta<\alpha$ and $i<\omega$, $u_i^\zeta=\setm{x}{\mbox{\it for some } r\in G,\ a_i^\zeta=x}$ is the $i$th branch added to ${\sf A}^\zeta\rest\alpha$. This determines ${\sf A}_\alpha^\zeta$. \item For $e\in {\sf sp}$, we define $f_e$ on the $\alpha$th level of ${\sf A}^{(e)}$ as follows. Any $\alpha$-level node, $w$, of ${\sf A}^{(e)}$ has the form $\seqn{u_{i_1}^{\zeta_1}}{u_{i_n}^{\zeta_n}}$ where $e=\fsetn{\zeta_1}{\zeta_n}$; then $f_e(w)=q^e(i_1,\ldots,i_n)$, where $q^e$ comes from $G$ (that is, $q^e=\bar{q}(r)^e$ for some $r\in G$). \end{enumerate} As evidenced by $f_e$, ${\sf A}^{(e)}$ becomes a special tree for $e\in {\sf sp}$. When $e\not\in{\sf sp}$, ${\sf A}^e$ is Suslin and so are all of its derived trees. It is here that the assumption that $e\not\in{\sf sp}$ implies $e'\not\in{\sf sp}$ for every non-empty $e'\subseteq e$ is used. We must prove that for $e\not\in{\sf sp}$, the construction of ${\sf A}^e$ follows the specification described in Subsection~\ref{s2.1}. To do that, observe that for $e\not\in{\sf sp}$ the map $\Pi$ taking $r$ to $\ssseqn{\bar{a}^\zeta(r)}{\zeta\in e}$ is a projection of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$ onto $\dP({\sf A}^e\rest\alpha)$. {\bf Case 2} is for $\alpha$ a successor ordinal. Put $\alpha = \rho+1$. Not only does the $\alpha$th level have to be defined for all existing trees, but a new tree, ${\sf A}^\rho$, and new functions must be introduced. The definition of the new functions $f_e$ with $\rho \in e$ is somewhat facilitated by our assumption that these are only defined on the $\alpha$th level. \section{Suslin-tree preservation by proper forcing} \label{s3} In this section it is shown that a Suslin tree ${\sf S}$ remains Suslin at the limit stages of a countable support iteration of proper forcings, provided that no earlier stage of the iteration destroys the Suslinity of ${\sf S}$. We assume our posets are separative: if $q$ does not extend $p$ then some extension of $q$ is incompatible with $p$. Let $\vec{\dP}=\ssseqn{\dP_\alpha}{\alpha\leq\beta}$ be a countable support iteration of length $\beta$ (a limit ordinal) of proper forcing posets, where $\dP_{\alpha+1}=\dP_\alpha*\dq_\alpha$ is a two-step iteration: $\dP_\alpha$ followed by $\dq_\alpha$. The following preservation theorem holds for $\vec{\dP}$. \begin{theorem} \label{t4.1} Let ${\sf S}$ be a Suslin tree of {\it height}\ $\omega_1$; suppose that for every $\beta'<\beta$, ${\sf S}$ remains Suslin in $V^{\dP_{\beta'}}$. Then {\sf S}\ remains Suslin in $V^{\dP_{\kern -2pt \beta}}$ as well. \end{theorem} \par\noindent{\bf Proof:}\quad To show that every antichain of {\sf S}\ is countable, it is enough to prove that:\\ { \em For any dense open set $E\subseteq {\sf S}$ there is a level ${\sf S}_\lambda$, $\lambda<\omega_1$, such that ${\sf S}_\lambda\subseteq E$}. So let $\bE$ be a $\dP_\beta$-name of a dense open subset of {\sf S}. Fix some countable elementary submodel, $M\prec H(\kappa)$, such that {\sf S}, $\beta,\ \dP_\beta,\ \bE$ etc. are in $M$, where $\kappa$ is ``big enough''. ($H(\kappa)$ is the collection of all sets of cardinality hereditarily $<\kappa$.
In fact, all that is needed is that $M$ reflects enough of $V$ to enable the following constructions and arguments to be carried out.) \begin{description} \item[$\bullet$] Let $\lambda=M\cap\omega_1$. $\lambda$ is a countable ordinal. \item[$\bullet$] Let $\ssseqn{\beta(i)}{i<\omega}$ be an increasing $\omega$-sequence of ordinals in $\beta\cap M$ and cofinal in $\beta\cap M$. \item[$\bullet$] Let $\setm{b_n}{n\in\omega}$ be an enumeration of ${\sf S}_\lambda$. \end{description} We will produce a condition $q\in\dP_\beta$ (extending some given condition) such that for every $n\in\omega$, $q{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt} b_n\in \bE,$ and thus $$ q{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt} {\sf S}_\lambda\subseteq \bE. $$ First, sequences $q_n\in\dP_{\beta(n)}$ and $\underline{p}_n$ are inductively constructed such that the following hold: \begin{enumerate} \item $q_n$ is an $M$-generic condition for $\dP_{\beta(n)}$ and $q_{n+1}\rest\beta(n)=q_n$. \item $\underline{p}_n$ is a {\it name}\ in $V^{\dP_{\kern -2pt\beta(n)}}$, forced to be a condition in $\dP_\beta\cap M$. \item \begin{description} \item{(a)} $q_n\force_{\dP_{\gb(n)}}\ \underline{p}_n\rest\beta(n)$ {\em is in the canonical generic filter}, $G_n$. \item{(b)} $q_n{\force_{\dP_{\gb(n)}}}\ \underline{p}_n$ {\it extends}\ $\underline{p}_{n-1}$ {\em in} $\dP_\beta$. \item{(c)} $q_n\force_{\dP_{\gb(n)}}$ ($\underline{p}_n\force_{\dP_{\gb}}\ b_n$ {\em is above some member of } $\bE$). \end{description} \end{enumerate} (Recall that the canonical generic filter $G$ is defined so that $q{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt} q\in G$ for every $q$). Suppose for a moment that we do have such sequences; this is how $q$ is obtained: $q=\bigcup_{n<\omega}q_n$. Then $q\in\dP_\beta$ extends each $q_n$, since $q_{n+1}\rest \beta(n)=q_n$ is assumed for all $n$. We also have the following: \begin{claim} $q\force_{\dP_{\gb}} {\sf S}_\lambda\subseteq \bE$. \end{claim} \par\noindent{\bf Proof:}\quad This is a consequence of 1-3, obtained as follows. Let $\underline{G}$ be the name of the $\dP_\beta$ canonical generic filter. It is enough to show that for each $n$ $$ (*)\hspace{1cm} q\force_{\dP_{\gb}}\ \underline{p}_n\in \underline{G}, $$ because then we use 3(c) to deduce that $q$ forces $b_n$ to be in $\bE$. To prove (*) we observe that \begin{enumerate} \item $q\force_{\dP_{\gb}}\ \underline{p}_n$ {\em extends} $\underline{p}_m$ {\em in } $\dP_\beta$ {\em for } $m<n$, and \item $q\force_{\dP_{\gb}}\ (\underline{p}_m\rest\beta(m))\in \underline{G}_m$ {\em for all } $m<\omega$. \end{enumerate} Hence: \[ q\force_{\dP_{\gb}} (\underline{p}_m\rest\beta(n))\in \underline{G}_n \ \mbox{\em for all } m<n.\] From this it follows that for any $q'$ extending $q$ in $\dP_\beta$, if $q'$ determines $\underline{p}_m$, that is, for some $p\in\dP_\beta$, $q'{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt} \underline{p}_m=p$, then $q'{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt} p\rest\beta(n)\in \underline{G}_n$, and hence $q'$ extends $p\rest\beta(n)$, for all $n$'s, and thus $q'$ extends $p$. Thus $q'\force_{\dP_{\gb}}\ \underline{p}_m\in \underline{G}$. This is so for an arbitrary extension of $q$ which determines $\underline{p}_m$, and hence (*): $q\force_{\dP_{\gb}}\ \underline{p}_m\in \underline{G}$. Return now to the construction of the sequences. Suppose $\underline{p}_n$ and $q_n$ are constructed (or that we are about to start the construction).
In order to describe $\underline{p}_{n+1}$ and $q_{n+1}$ (in that order), imagine a generic extension $V[G_n]$ of our universe $V$, where $G_n\subseteq\dP_{\beta(n)}$ is a generic filter containing $q_n$. Then $M[G_n]$ can be formed; it is the $G_n$-interpretation of all $\dP_{\beta(n)}$-names in $M$. Moreover, $M[G_n]\prec H(\kappa)[G_n]$. {\sf S}\ is still a Suslin tree in $V[G_n]$ by our assumption. In $V[G_n]$, $\underline{p}_n$ is realized as a condition denoted $p_n$; $p_n\in\dP_\beta\cap M$, and $p_n\rest\beta(n)\in G_n$ by the inductive assumption in 3(a). Since $\bE$ is forced to be dense in {\sf S}, for any $s\in{\sf S}$ and $p\in\dP_\beta$, there are $s\leq_{{\sf S}} s'$ and an extension $p'$ of $p$ in $\dP_\beta$ such that $$ (**)\hspace{1cm} p'\force_{\dP_{\gb}}\ s'\in \bE. $$ Moreover, by genericity of $G_n$, we may require that $$ p\rest\beta(n)\in G_n\Rightarrow p'\rest\beta(n)\in G_n. $$ Thus, the set $F$ of $s'\in {\sf S}$ for which there is $p'\in\dP_\beta$ extending $p_n$ with $p'\rest\beta(n)\in G_n$ and satisfying ($\ast \ast$) is dense in {\sf S}\ and is (defined) in $M[G_n]$. Now {\sf S}\ is a Suslin tree in $M[G_n]$, and hence every branch of ${\sf S}\rest\lambda$ of length $\lambda$ is $M[G_n]$-generic. (Recall $\lambda=\omega_1\cap M$.) Thus $b_{n+1}$ (the $(n+1)$th node of ${\sf S}_\lambda$) is above some node in $F$; and it is possible to pick $p_{n+1}$ in $\dP_\beta\cap M$ extending $p_n$ with $p_{n+1}\rest\beta(n)\in G_n$ and such that $p_{n+1}\force_{\dP_{\gb}}\ b_{n+1}\in \bE$. This description of $p_{n+1}$ made use of the $\dP_{\beta(n)}$-generic filter $G_n$. Back in $V$, we define $\underline{p}_{n+1}$ to be the {\em name} of that $p_{n+1}$ in $V^{\dP_{\beta(n)}}$ (and so, evidently, in $V^{\dP_{\beta(n+1)}}$). Next we define $q_{n+1}$. We demand the following from $q_{n+1}$: \begin{enumerate} \item $q_{n+1}$ is an $M$-generic condition for $\dP_{\beta(n+1)}$, and $q_{n+1}\rest\beta(n)=q_n$. \item $q_{n+1}{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dP_{\beta(n+1)}}\ \underline{p}_{n+1}\rest\beta(n+1)\in G_{n+1}$. \end{enumerate} The existence of $q_{n+1}$ satisfying (1) and (2) is a general fact about proper forcing. It is a consequence of the following statement, which can be proved by induction on $\beta_2$: Suppose $\beta_1<\beta_2\leq\beta$, $q_1$ is an $M$-generic condition over $\dP_{\beta_{1}}$, and $\underline{p}$ is a name in $V^{\dP_{\beta_1}}$ such that $q_1{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dP_{\beta_1}}$ $\underline{p}\in\dP_{\beta_2}\cap M$ {\em and} $\underline{p}\rest\beta_1$ {\it is in the canonical $\dP_{\beta_1}$ generic filter.} Then there is an $M$-generic condition over $\dP_{\beta_2}$, $q_2$, such that $q_2\rest\beta_1=q_1$ and $q_2\force_{\dP_{\beta_{2}}}\ \underline{p}\rest \beta_2$ {\it is in the canonical $\dP_{\beta_{2}}$ generic filter.} \section{How to specialize Aronszajn trees without adding reals} \label{s4} The forcing notions which turn a given Aronszajn tree into a special tree naturally fall into two categories: those which use finite conditions and satisfy properties such as the c.c.c., and those which use infinite conditions and have nice closure properties. In this section we describe how infinite conditions can be used to specialize an Aronszajn tree without adding new countable sets, and how to iterate such posets. In a moment we will define the poset $\dS({\sf T})$ used to specialize an Aronszajn tree {\sf T}.
Meanwhile, let us see what the problems are with the direct approach, which takes the poset $\dS_1$ of all specializing functions $f$ defined on some downward closed countable subtree of the form ${\sf T}\rest \alpha+1$. To see that this poset collapses $\omega_1$ in forcing, look at the following dense open sets, defined for $n<\omega$: $$ D_n=\setm{f\in\dS_1 }{f \mbox{\it\ is defined on some } {\sf T}\rest\alpha+1, \mbox{\it\ and for every }x\in {\sf T}_\alpha,\ f(x)\geq n}. $$ Clearly $D_n$ is dense open. So for every $n < \omega$ there is an $\alpha < \omega_1$ such that some $f \in D_n$ defined on ${\sf T} \rest \alpha+1$ is in the generic filter $G$. But if $\omega_1$ is {\em not} collapsed, the generic filter must contain a condition which is simultaneously in every $D_n$, and this is a contradiction. Thus, there must be some limitation on the growth of the generic specializing function. We may try the following poset: $\dS_2$ consists of all specializing $f:{\sf T}\rest \alpha+1\rightarrow\Q$, $\alpha<\omega_1$, such that $$ \forall\alpha_0<\alpha\ \forall\bar{x}\in{\sf T}_{\alpha_0}, \mbox{ if } f(\bar{x})<\bar{q}\mbox{ then for some }\bar{y}\in{\sf T}_\alpha,\ \bar{x}<\bar{y}\mbox{ and } f(\bar{y})=\bar{q}. $$ Here, $\bar{x}$ is an $n$-tuple of nodes, $\bar{q}$ is an $n$-tuple of rational numbers, and $f(\bar{x})<\bar{q}$ is a shorthand for: $f(x_i)<q_i$ for all $i$'s. If we assume that every node in {\sf T}\ has infinitely many successors, it is not difficult to see that any condition in $\dS_2$ can be extended to any countable height. Therefore forcing with $\dS_2$ specializes {\sf T}. If {\sf T}\ is a Suslin tree such that each derived tree of {\sf T}\ is Suslin too, then $\dS_2$ adds no new countable sets. If {\sf T}\ is an arbitrary Aronszajn tree, however, $\dS_2$ may collapse $\omega_1$. Such is the case when {\sf T}\ has the form ${\sf T}_1\cup{\sf T}_2$, a disjoint union of ${\sf T}_1$ with a copy ${\sf T}_2$ of itself. Let $i:{\sf T}_1\rightarrow{\sf T}_2$ be the map which takes a node in ${\sf T}_1$ to its copy in ${\sf T}_2$. Define $$ D_n=\left\{f\in\dS_2 \ | \ \begin{array}{l} {\it dom}(f)={\sf T}\rest\alpha+1\mbox{ for some }\alpha, \mbox{ and } \forall x\in{\sf T}_{1,\alpha}:\\ f(x)>n\mbox{ or } f(i(x))>n\end{array}\right\}. $$ Again, $D_n$ is seen to be dense open; and since the generic filter contains a condition in each $D_n$ but there is no condition in the intersection of all the $D_n$'s, $\omega_1$ must be collapsed. This shows that the limitations one imposes on the growth of the generic specializing function must have a different character. We are now going to define the poset $\dS=\dS({\sf T})$ used to specialize a given Aronszajn tree {\sf T}. A condition $p\in\dS$ is a pair $p=(f,\Gamma)$ where $f$ is a countable partial specialization of {\sf T}, called an ``approximation''; and $\Gamma$ is an uncountable object, called a ``promise''. Its role is to ensure that $\dS$ is a proper poset. $\Gamma$ consists of ``requirements'', so we have to explain what these are first. Throughout, {\sf T}\ is a fixed Aronszajn tree. \begin{defn} \label{d4.1} (1) We say that $H$ is a {\em requirement} (of {\it height}\ $\gamma<\omega_1$) iff for some $n=n(H)<\omega$, $H$ is a set of finite functions of the form $h:{\sf T}_\gamma\rightarrow\Q$, with ${\it dom}(h)\in\ ^n{\sf T}_\gamma$.\\ \noindent(2) An {\em approximation} (on {\sf T}) is a partial specializing function $f:{\sf T}\rest(\alpha+1)\rightarrow\Q$; that is, an order-preserving function defined on $\bigcup_{\zeta\leq\alpha}{\sf T}_\zeta$ into the rationals.
The countable ordinal $\alpha$ is called ${\it last}(f)$. We say that a finite function $h:{\sf T}_\alpha\rightarrow\Q$ {\em bounds} $f$ iff $\forall x\in {\it dom}(h)\ (f(x)<h(x))$. More generally, for $\beta \geq \alpha={\it last}(f)$, $h: {\sf T}_\beta \rightarrow \Q$ bounds $f$ iff $\forall x \in {\it dom}(h)\ (f(x\lceil\alpha)<h(x))$ (i.e., if $h\lceil\alpha$ is defined, then $h\lceil\alpha$ bounds $f$).\\ \noindent(3) An approximation $f$ with ${\it last}(f)=\alpha$ is said to {\em fulfill} a requirement $H$ of {\it height}\ $\alpha$ iff for every finite $t\subseteq{\sf T}_\alpha$ there is some $h\in H$ which bounds $f$ and such that ${\it dom}(h)$ is disjoint to $t$.\\ \noindent(4) A {\em promise} $\Gamma$ (for {\sf T}) is a function $\sseqn{\Gamma(\gamma)}{\beta\leq\gamma<\omega_1}$ ($\beta$ is denoted $\beta(\Gamma)$) such that \begin{description} \item{(a)} $\Gamma(\gamma)$ is a countable collection of requirements of height\ $\gamma$. There is a fixed $n$ such that $n=n(H)$ for all $\gamma$ and $H \in \Gamma(\gamma)$. \item{(b)} For $\gamma\geq\beta$, each $H\in\Gamma(\gamma)$ is dispersed. That is, for every finite $t\subseteq{\sf T}_\gamma$ there is some $h\in H$ with $t\cap {\it dom}(h)=\emptyset$. \item{(c)} For every $\beta\leq\alpha_0<\alpha_1<\omega_1$, $$ \Gamma(\alpha_0)=\setm{(X\lceil\alpha_0)}{X\in\Gamma(\alpha_1)}. $$ \end{description} \noindent(5) An approximation $f$ fulfills a promise $\Gamma$ iff ${\it last}(f)\geq \beta(\Gamma)$ and $f$ fulfills each requirement $H$ in $\Gamma({\it last}(f))$. \end{defn} \begin{defn}[of $\dS({\sf T})$] \label{d4.2} For any Aronszajn tree {\sf T}\ define $\dS=\dS({\sf T})$ by: $p=(f,\Gamma)\in\dS$ iff $f$ is an approximation on {\sf T}, $\Gamma$ is a promise, and $f$ fulfills $\Gamma$. The partial order is naturally defined: $p_1=(f_1,\Gamma_1)\ {\it extends}\ p_0=(f_0,\Gamma_0)$ iff $f_0\subseteq f_1$ and $\Gamma_0\rest(\omega_1-{\it last}(f_1))\subseteq\Gamma_1$. That is, any requirement of height $\gamma\geq{\it last}(f_1)$ in $\Gamma_0$ is also in $\Gamma_1$. If $p=(f,\Gamma)$ is a condition in $\dS$ we write $f=f(p),\ \Gamma=\Gamma(p)$. In an abuse of notation, we write ${\it last}(p)$ for ${\it last}(f(p))$, and $p(x)$ instead of $f(x)$. We also call ${\it last}(p)$ `the {\it height}' of $p$. (Recall that $f(p)$ is defined on ${\sf T}\rest{\it last}(p)+1$.) \end{defn} \noindent{\bf Remark} If our only aim is to obtain a model of CH \& SH, it is enough to assume that $\Gamma(\gamma)$ is a singleton. The assumption that $\Gamma(\gamma)$ is a countable collection of requirements will be used in order to show that this forcing preserves certain Suslin trees. \noindent{\bf Remark} If $p=(f,\Gamma)\in\dS$, $\gamma$ is the {\it height}\ of $p$, and $g:{\sf T}\rest\gamma+1\rightarrow \Q$ is a specializing function satisfying $\forall x\in{\it dom}(f)\ (g(x)\leq f(x))$, then $g$ fulfills the promise $\Gamma$ which $f$ fulfills. This simple remark is used in the following way. Suppose that $p_1$ extends $p_0$; put $\mu_i={\it last}(p_i)$, $f_i=f(p_i)$ for $i=0,1$. Let $\delta$ be an order-preserving map of the set of positive rationals $\Q^+$ into $\Q^+$ such that $\delta(r)\leq r$ for all $r$. Then define, for any $x\in{\sf T}_\alpha$, where $\mu_0<\alpha\leq \mu_1$, $$ g(x)=f_0(x\lceil\mu_0)+\delta(f_1(x)-f_0(x\lceil\mu_0)). $$ In words: $g$ uses $\delta$ to compress $f_1$ on ${\sf T}\rest\mu_1+1 \setminus {\sf T}\rest\mu_0+1$. \noindent Extend further $g$ and, for $x\in{\sf T}\rest\mu_0+1$, define $g(x)=f_0(x)$.
Then $(g,\Gamma(p_1))$ is also an extension of $p_0$ of height\ $\mu_1$. Our next aim is to show that it is possible to extend conditions to any height, and to enlarge promises. Then we will show properness of $\dS$. In the following subsection, $\dS$ is shown to specialize only those trees it must specialize. Then, in the next subsection $\dS$ is proved to satisfy the condition which allows one to conclude that a countable support iteration of such posets adds no new countable sets.
\begin{lemma}[The extension lemma]
If $p\in\dS$ and ${\it last}(p)<\mu<\omega_1$, then there is an extension $q$ of $p$ in $\dS$ with $\mu={\it last}(q)$, and such that $\Gamma(q)=\Gamma(p)$. Moreover, if $h:{\sf T}_\mu\rightarrow\Q$ is finite and bounds $p$, then $h$ bounds an extension $q$ of $p$ of {\it height}\ $\mu$.
\end{lemma}
\par\noindent{\bf Proof:}\quad The `moreover' clause of the Lemma is, in fact, a direct consequence of the first part and the Remark above. Indeed, if $h$ bounds $p$ as in the Lemma, pick first {\em any} extension $p_1$ of $p$ with $\mu={\it last}(p_1)$, and then correct $p_1$ as follows to obtain $q$. Put $\mu_0={\it last}(p),\ f=f(p)$. For some $d>0$, $\forall x\in{\it dom}(h)$
$$
h(x)>f(x\lceil\mu_0)+d.
$$
Let $\delta$ be an order-preserving map of $\Q^+$ into the interval $(0,d)$ such that $\delta(x)<x$ for all $x$. Now use the Remark to correct $p_1$ and to obtain an extension $q$ of $p$ which satisfies $q(x)-p(x\lceil\mu_0)<d$ for every $x\in{\sf T}\rest\mu+1$ above level $\mu_0$. Hence $h$ bounds $q$. The proof of the first part of the Extension Lemma is done by induction on $\mu$. Since the proof is quite easy, only the outline is given.
\noindent{\bf CASE I} $\mu=\mu_0+1$ is a successor ordinal. By the inductive assumption, ${\it last}(p)=\mu_0$ can be assumed, and we have to extend $f=f(p)$ on ${\sf T}_{\mu_0+1}$, fulfilling all the requirements in $\Gamma(\mu)$ (where $\Gamma=\Gamma(p)$). Given any requirement $H\in\Gamma(\mu)$, we know that $H\lceil\mu_0=H_0\in\Gamma(\mu_0)$ is fulfilled by $f$. So, $H_0$ contains an infinite pairwise disjoint set of functions $h$ which bound $f$. This gives plenty of room to extend $f$, in $\omega$ steps, and to keep the promise $\Gamma$ at the level $\mu$.
\noindent{\bf CASE II} $\mu$ is a limit ordinal. Pick an increasing sequence of ordinals $\mu_i$, $i<\omega$, cofinal in $\mu$. We are going to define an increasing sequence $p_i\in \dS$ (beginning with $p_0=p$) and finite $h_i:{\sf T}_\mu\rightarrow\Q$ which bound $p_i$, by induction on $i<\omega$. Then we will set $q=(f,\Gamma)$, where $f=\bigcup\setm{f(p_i)}{i<\omega}\cup\bigcup\setm{h_i}{i<\omega}$ and $\Gamma=\Gamma(p)$. Here ${\it last}(p_i)=\mu_i$, and the passage from $p_i$ to $p_{i+1}$ uses the inductive assumption for $\mu_{i+1}$. The role of the $h_i$'s is not only to ensure that $f$ is bounded on the branches of ${\sf T}\rest\mu$ determined by the nodes of ${\sf T}_\mu$, but also to ensure that the promise made in $\Gamma(p)=\Gamma$, namely $\Gamma(\mu)$, is kept by $h=\bigcup_{i<\omega}h_i$. Each requirement $H\in\Gamma(\mu)$ must appear infinitely often in a list of missions, and at each step $i<\omega$ of the definition, $h_{i+1}$ takes care of one more $h\in H$, so that finally an infinite pairwise disjoint subset of $H$ consists of functions which bound $f$. It is here that we use the assumption that $\Gamma(\mu_i)=\Gamma(\mu)\lceil\mu_i$, i.e., that $\Gamma(\mu_i)=\setm{(H\lceil\mu_i)}{H\in\Gamma(\mu)}$. Next we show that promises can be added.
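Before doing so, we record for concreteness one admissible choice of the compression maps $\delta$ used in the Remark above and in the proof of the Extension Lemma (this particular formula is only an illustration; any map with the stated properties will do): given a rational $d>0$, let
$$
\delta(r)=\frac{d\,r}{d+r} \qquad (r\in\Q^+).
$$
This $\delta$ is an order-preserving map of $\Q^+$ into the rational interval $(0,d)$, and $\delta(r)<r$ for every $r\in\Q^+$.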
Let $p=(f,\Gamma)\in\dS$ be a condition of height\ $\mu$, and let $\Psi$ be any promise. We say that $p$ `includes' $\Psi$ iff for all $\gamma$ such that $\mu\leq\gamma<\omega_1$ $$ \Psi(\gamma)\subseteq\Gamma(\gamma). $$ That is, any requirement $H\in\Psi(\gamma)$ is already in $\Gamma(\gamma)$. If $p$ includes $\Psi$ then, obviously $p$ fulfills $\Psi$. Otherwise, it is not always possible to extend $p$ to fulfill $\Psi$. However, if the following simple condition holds, then this can be done. \begin{lemma}\label{l4.4} [Addition of promises] Let $p\in\dS$, put $\mu={\it last}(p)$. Let $\Psi$ be a promise with $\mu<\beta=\beta(\Psi)$. Suppose for some finite $g:{\sf T}_\mu\rightarrow\Q$ (called a {\em basis} for $\Psi$), $g$ bounds $f(p)$ and $$ \forall \gamma\geq\beta,\ \forall H\in\Psi(\gamma),\ \forall h\in H\ (h\lceil\mu=g), $$ then there is an extension $p_1$ of $p$ in $\dS$ of {\it height}\ $\beta$ which includes $\Psi$. \end{lemma} \par\noindent{\bf Proof:}\quad This is an easy application of the Extension Lemma. Put $f=f(p)$, then for some rational $d>0$, $\forall x\in{\it dom}(g)\ g(x)>f(x)+d$. Now every $H\in\Psi(\beta)$ is a dispersed collection of functions $h$ with $h\lceil\mu=g$. Let $p_1$ be any extension of $p$ of ${\it height}\ \beta$; set $f_1=f(p_1)$. The desired extension of $p$ will be obtained by correcting $f_1$ so as to fulfill $\Psi(\beta)$ and then to add $\Psi$. This is done as follows. Let $\delta$ be an order-preserving map of the positive rationals into the rational interval $(0,d)$, such that $\delta(r)<r$ for every $r$. Define now for $x\in T_\alpha,\ \mu<\alpha\leq\beta$: $f_2(x)=f(x\lceil\mu)+\delta(f_1(x)-f(x\lceil\mu))$. Then $f_2 \cup f$ fulfills each $H\in\Psi(\beta)$, and thus gives the desired extension. The properness of $\dS$ is not so easy to prove, and it is here that the need for the promises appears. Given an elementary countable substructure $M\prec H(\kappa)$, such that $\dS\in M$, and given a condition $p_0\in M$, we have to find an ``$M$-generic" condition $q$ extending $p_0$. In fact, we will find $q$ with a stronger property which implies that no new reals are added: for every dense open set $D\subseteq\dS$ in $M$, $q\in D$. As in the definition of $q$ in the Extension Lemma (the limit case), here too an increasing sequence $p_i\in\dS\cap M$ of conditions and finite functions $h_i:{\sf T}_\mu \rightarrow\Q$ are defined; where $\mu=\omega_1\cap M$. But now we are faced with an extra mission in defining $p_{i+1}$: to put $p_{i+1}$ in $D$, the $i$-th dense open subset of $\dS$ in $M$ (in some enumeration of the countable $M$). The problem with this mission is that perhaps whenever $r$ extends $p_i$ is in $D$, then $h_i$ does not bound $r$. To show that this bad event never happens, requires the following main Lemma. \begin{lemma} Let ${\sf T}$ be an \forallron\ tree. Let $M\prec H(\kappa)$ be a countable elementary substructure, where $\kappa$ is some big enough cardinal; ${\sf T},\dS=\dS({\sf T})\in M$. Let $p\in M$ be a condition in $\dS$, $\mu=\omega_1\cap M$ and $h:{\sf T}_\mu\rightarrow\Q$ be a finite function which bounds $p$. Let $D\subseteq\dS$, $D\in M$ be dense open. Then there is an extension of $p$, $r\in D\cap M$, such that $h$ bounds $r$. \end{lemma} \par\noindent{\bf Proof:}\quad Assume for the sake of a contradiction that this is not so, and let ${\sf T}, M, p,h$ etc. be a counterexample. 
Let $\mu_0={\it last}\ {p}$; $\bar{x}={\it domain}\ (h)$ enumerated in some way; so $\bar{x}\in\ ^n{\sf T}_\mu$, $\bar{x}=\seqn{x_0}{x_{n-1}}$. Put $\bar{q}=h(\bar{x})$; that is, $q_i=h(x_i)$. Denote $\bar{v}=\bar{x}\lceil \mu_0$; then $\bar{v}\in\ ^n{\sf T}_{\mu_0}$, and we may assume $v_i\neq v_j$ for $i\neq j$ (or else, extend $p$ above the splittings of $\bar{x}$). In $M$: {\it If $r\in D\cap M$ extends $p$, then $h$ does not bound $r$.} \noindent Put $g_0=h\lceil\mu_0$. Then $g_0\in M$. Say that a finite function $g:{\sf T}_\gamma\rightarrow \Q$ is {\em bad} iff \begin{enumerate} \item $\mu_0\leq\gamma<\omega_1$, and $g\lceil\mu_0=g_0$. \item Whenever $r\in D$ extends $p$ and $\gamma\geq{\it last}(r)$, $g$ does not bound $r$. \end{enumerate} In other words, $g$ is bad if it mimics $h\lceil \gamma$, but it may live on other $n$-tuples of ${\sf T}$. Of course, $h\lceil \gamma$ itself is bad for any $\gamma$ with $\mu_0\leq \gamma<\mu$. It follows that, in $M$ and hence in $H(\kappa)$, there are uncountably many bad $g$'s. Indeed, if there were only countably many bad functions, there would be a bound $\gamma$, in $M$, for $\setm{{\it height}(g)}{g\mbox{ is bad}}$; and as $\gamma<\mu$, $h\lceil \gamma$ would not be bad. Observe that if $g$ is bad and $\mu_0\leq\gamma_0<{\it height}(g)$, then $g\lceil\gamma_0$ is bad too. Now put $$ B=\setm{dom(g)}{g\mbox{ is bad}}. $$ Then $B$ is uncountable and closed downwards (above $\mu_0$) subset of $\bigcup_{\mu_0\leq\gamma<\omega_1}\ ^n{\sf T}_\gamma$. As ${\sf T}$ is an \forallron\ tree, Lemma ~\ref{l1.1} implies that for some $\beta>\mu_0$ and some $B^0\subseteq{\sf B}$, if we put $B^0_\gamma=B^0\cap\ ^n{\sf T}_\gamma$, then \begin{enumerate} \item For $\beta\leq\gamma_0<\gamma_1<\omega_1$,\ $B^0_{\gamma_0}=B^0_{\gamma_1}\lceil\gamma_0$, and \item $B^0_\beta$ (and thus every $B^0_\gamma$, $\beta<\gamma$) is dispersed. \end{enumerate} We may find $B^0$ in $M$, since only parameters in $M$ were mentioned in its definition. For $\beta \leq \gamma < \omega_1$, let $\Psi(\gamma)$ consists of $ H_\gamma= \setm{g}{g\mbox{ is bad and } dom(g)\in B^0_\gamma}$. By Lemma ~\ref{l4.4} (Addition of promises), there is an extension $p_o$ of $p$ in $M$ of height $\beta$ which includes $\Psi$. That is, if $\Gamma_0=\Gamma(p_0)$, then for every $\gamma\geq{\it last}(p_0)=\beta$, $H_\gamma\in\Gamma_0(\gamma)$. Now let $r\in\dS$ be {\em any} condition extending $p_0$ and in $D$. Let $\gamma={\it last}(r)$. Since $r$ fulfills $\Gamma_0$, for some $g\in H_\gamma$, $g$ bounds $r$. But this contradicts the fact that $g$ is bad. \subsection{Specialization, while Safeguarding Suslin trees} Suppose that we care about a Suslin tree {\sf S}, and wish to specialize an \forallron\ tree {\sf T}\ while keeping {\sf S}\ Suslin. Obviously, this is not always possible: for example if {\sf S}\ {\it is} {\sf T}, or if they contain isomorphic uncountable subtrees. We will show in this section that, if {\sf T}\ remains \forallron\ even after the addition of a cofinal branch to {\sf S}, then the poset $\dS({\sf T})$ specializes {\sf T}\ while keeping {\sf S}\ Suslin. \begin{theorem} \label{t4.6} Let {\sf S}\ be a Suslin tree, and {\sf T}\ be an \forallron\ tree such that $\|{\sf T}\mbox{ is \forallron}\|^{{\sf S}}=\bf \mbox{{\small $1$}}$. Then $\|{\sf S}\mbox{ is Suslin }\|^{\dS({\sf T})}=\bf \mbox{{\small $1$}}$. \end{theorem} \par\noindent{\bf Proof:}\quad The forcing poset $\dS=\dS({\sf T})$ was defined in the previous subsection and shown there to be proper. 
To prove the theorem we let $\bD$ be a name in $\dS$ forcing of a dense open subset of the tree ${\sf S}$. We will find a condition $p\in\dS$ (extending an arbitrarily given condition in $\dS$) such that for some $\mu<\omega_1$, $p{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dS}\ {\sf S}_\mu\subseteq \bD$. This is enough to show that $\dS$ does not destroy the Susliness of ${\sf S}$. The framework for the construction of $p$ is similar to the one for showing the properness of $\dS$, and the following Lemma suffices for the proof of the Theorem. \begin{lemma} Let ${\sf S}$ and ${\sf T}$ be as in the Theorem. Let $\bD$ be a name in $\dS=\dS(T)$ forcing of a dense open subset of ${\sf S}$. Let $M\prec H(\kappa)$ be a countable elementary substructure, containing ${\sf T},{\sf S}, \bD $, and let $p_0\in \dS\cap M$ be a condition. Let $\mu=\omega_1\cap M$, and $h_0:{\sf T}_\mu \rightarrow\Q$ be a finite function which bounds $p_0$. For any $b\in{\sf S}_\mu$ there is an extension $p\in\dS\cap M$ of $p_0$ such that $h_0$ bounds $p$, and $p{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dS}$ $b\in \bD$. \end{lemma} \par\noindent{\bf Proof:}\quad Assume that this Lemma does not hold. Let $M, p_0, h_0$ etc. be a counterexample. Put $\mu_0={\it last}(p_0)$, and $g_0=h_0\lceil\mu_0$. The Suslin tree ${\sf S}$ is a c.c.c. forcing notion which adds no new countable sets. We are going to define first a {\em name} $\bB$, in ${\sf S}$ forcing, of an uncountable tree of `bad' functions, and derive a promise $\Gamma$ out of this $\bB$, a promise which, when adjoined to $p_0$, will give the desired contradiction. Forcing with ${\sf S}$, extend $b\in{\sf S}_\mu$ to a (generic) branch $G$ of ${\sf S}$, and let $V[G]$ be the extension of the universe $V$ thus obtained. We have: $$ M[G]\prec H(\kappa)[G]=H(\kappa)^{V[G]}. $$ In $V[G]$, and hence in $M[G]$, ${\sf T}$ is still an \forallron\ tree by the assumption of the Theorem. The following definition is carried out in $V[G]$, but all its parameters are in $M[G]$: \begin{defn} A finite function $h:{\sf T}_\gamma\rightarrow\Q$ is bad iff:\\ \noindent 1. $\mu_0\leq\gamma<\omega_1$, and $h\lceil\mu_0=g_0$.\\ 2. Whenever $p\in\dS$ extends $p_0$ and $\gamma\geq {\it last}{p}$ and $G_\gamma=e$, if $p{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dS}\ e\in \bD$ then $h$ does not dominate $p$. (Recall that $G_\gamma$ is the unique node in $G\cap {\sf S}_\gamma$.) \end{defn} For any $\mu_0\leq\gamma <\mu$, $h_0\lceil \gamma$ is bad. (If not, by elementarity of $M[G]$, there is, in $M$, an extension $p$ of $p_0$, of height $\gamma$ and such that $h_0\lceil\gamma$ bounds $p$ and $p{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dS}\ (G_\gamma)\in \bD$. But then, as $b>G_\gamma$, $p{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dS}\ b\in \bD$, in contradiction to our assumption.) Hence the set of bad functions is uncountable. Obviously, if $h$, of {\it height}\ $\gamma$, is bad and $\mu_0\leq\gamma'<\gamma$, then $h\lceil\gamma'$ is bad too. We know how to find (in $M[G]$) an ordinal $\mu_0 \leq \beta < \omega_1$, and a collection $B(\gamma)$, $\beta\leq\gamma<\omega_1$, such that $B(\gamma)$ is a set of bad functions of height $\gamma$, and\\ 1. For $\beta\leq\gamma_0<\gamma_1<\omega_1$, $B(\gamma_0)=B(\gamma_1)\lceil\gamma_0$,\\ 2. $B(\beta)$ is dispersed. (See Definition ~\ref{d4.1} (4)(b), and Lemma ~\ref{l1.1}.) 
Let $\bB\in M$ be a name of $B$ in $V^{{\sf S}}$, and let $b_0<b$ be a condition in ${\sf S}$ which forces these properties of $B$. In particular, $b_0$ forces ``all functions in $\bB(\gamma)$ are bad". Now, back in $V$, we define the promise $\Gamma$. For every countable $\gamma\geq\beta$, $\Gamma(\gamma)$ is the collection of all requirements $H$ of height $\gamma$ such that $\|H=\bB(\gamma)\|^{{\sf S}}>{\mbox{{\small\bf $0$}}}$. Again $\Gamma\in M$. Since ${\sf S}$ is a c.c.c. poset, $\Gamma(\gamma)$ is countable, and since ${\sf S}$ adds no new countable sets, $\Gamma(\gamma)$ is non-empty (some condition in ${\sf S}$ above $b_0$ `describes' $\bB(\gamma)$) and $\Gamma(\gamma)$ is countable. Since $g_0$ is a basis of $\Gamma$, and $g_0$ bounds $p_0$, there is an extension $p_1$ of $p_0$, in $\dS\cap M$, which includes $\Gamma$. (See the Addition of Promises Lemma ~\ref{l4.4}.) Next, find a node $d\in {\sf S}$, with $b_0<d$, such that $p_2{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dS }\ d\in \bD$ for some extension $p_2$ of $p_1$ with $p_2\in\dS\cap M$. This is possible since $\bD$ is assumed to be a name in $V^{\dS}$ such that $\|\bD\mbox{ is dense in } {\sf S}\|^{\dS}=\bf \mbox{{\small $1$}}$. Let $\gamma={\it last} (p_2)$, and let $d_1>d$ be a node in ${\sf S}$ which forces ``$H=\bB(\gamma)$" for some requirement $H$ of height $\gamma$. Then $H\in\Gamma(\gamma)$ and so some $h\in H$ bounds $p_2$ (as $p_2$ fulfills $\Gamma$). But $d_1{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{{\sf S}}\ h$ {\em is bad}, contradicts $p_2{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{\dS}\ d\in \bD$. \section{$\alpha$-properness and $\omega_1$-\hbox{D \kern -20pt D}-completeness of \dS}\label{s5} In chapter V of Shelah [1982] (Sections 3,5 and 6) the notions of $\alpha$-properness and $\dD$-completeness are defined, and an $<\omega_1$-proper, simple $\dD$-complete forcing which specializes an \forallron\ tree is described. Section 7 there shows that the iteration of such forcings adds no reals. In chapters VII and VIII different notions of chain conditions are introduced: the $\aleph_2$-e.c.c and the $\aleph_2$-p.i.c. Any of them can be used to show that our iterations satisfy the $\aleph_2$-c.c. (The second is particularly useful if $2^{\aleph_1}>\aleph_2$). We shall review here these definitions, but will not give proofs for the preservation theorems which may be found in the Proper Forcing book. To use the theory of proper forcings which add no reals we will show that (1) the specializing poset $\dS=\dS({\sf T})$ is $\alpha$-proper for every $\alpha<\omega_1$, and that (2) for some simple $\omega_1$-completeness system \dD, \dS\ is \dD-complete. By ``a tower of {\it length}\ $\alpha+1$ of substructures of $H(\lambda)$" we mean here a sequence $\bar{N}=\sseqn{N_i}{i\leq \alpha}$ of countable $N_i\prec H(\lambda)$ such that \begin{enumerate} \item $\bar{N}$ is continuously increasing. ($N_\delta=\bigcup_{i<\delta} N_i$, for limit $\delta \leq \alpha$). \item $\sseqn{N_j}{j\leq i}\in N_{i+1}$. \end{enumerate} \begin{defn} $P$ is $\alpha$-proper ($\alpha<\omega_1$), iff for every large enough $\lambda$ and tower $\sseqn{N_i}{i\leq\alpha}$ of countable substructures of $H(\lambda)$ of {\it length}\ $\alpha+1$, if $P\in N_0$ and $p\in P\cap N_0$, then there is an extension $q$ of $p$ in $P$ such that $q$ is an $(N_i,P)$-generic condition {\em for every} $i\leq\alpha$. \end{defn} \begin{theorem} $\dS$ is $\alpha$-proper for every $\alpha<\omega_1$. 
\end{theorem}
We only {\em indicate} the proof since there is not much to say which was not said for the case $\alpha=1$. The proof is by induction on $\alpha$. The case of a successor ordinal is an obvious application of the inductive assumption and the properness of $\dS$. In case $\alpha$ is a limit ordinal, given a tower $\sseqn{N_i}{i\leq \alpha}$ as in the definition, pick an increasing $\omega$-sequence $i_n<\alpha$, $n<\omega$, converging to $\alpha$. We will define an increasing sequence $p_n\in N_{i_{n}+1}$ such that $p_n$ is an $(N_j,P)$-generic condition for every $j\leq i_n$. The inductive assumption and the assumption that $\sseqn{N_k}{k\leq i_n}\in N_{i_{n}+1}$ are used to get $p_n$ in the elementary substructure $N_{i_{n}+1}$. We must be careful so that $f\stackrel{\rm def}{=} \bigcup_{n<\omega} f(p_n)$ is bounded on every branch of ${\sf T}\rest\alpha$ determined by points in ${\sf T}_\alpha$, and that $f$ fulfills every requirement in $\Gamma(\alpha)$ for $\Gamma=\Gamma(p_n),\ n<\omega$. But we learned how to do this when proving the properness of \dS. The \dD-completeness of \dS\ is equally simple, once the definition is clear. Let us review it on the informal level first. Think of the difference between the poset \dP, for adding a new subset to $\omega_1$ with countable conditions, on the one hand, and the poset ${\sf T}$, a Suslin tree, on the other hand. Both posets add no new countable sets, but while posets like \dP\ can be iterated without adding reals, an iteration of Suslin trees can add a new real (see Jensen and Johnsbr\aa ten [1974]). Pick a countable $M\prec H(\lambda)$ with $\dP, {\sf T}\in M$ and look for \dP-generic and ${\sf T}$-generic filters $G_{\dP}$ and $G_{\sf T}$ over $M$ which have an upper bound in \dP, and in ${\sf T}$ (this is what it takes to show ``no new countable sets are added"). While $G_{\dP}$ can be defined, in a sense, from within $M$, the definition of $G_{{\sf T}}$ requires knowing ${\sf T}_\alpha$ (where $\alpha=\omega_1\cap M$). To see this difference clearly, let $\Pi:M\rightarrow\overline{M}$ be the transitive collapse of $M$ onto the transitive structure $\overline{M}$. If we only have $\overline{M}$ at hand (and a countable enumeration of $\overline{M}$) then we can define a \dP-generic filter over $\overline{M}$, and any such filter has an upper bound in \dP. However, for ${\sf T}$ the situation is radically different: even though any branch of ${\sf T}\cap M$ is $\overline{M}$-generic, there is no way to know which branches have an upper bound in ${\sf T}$, unless ${\sf T}_\alpha$ is given to us. For the poset $\dS({\sf T})$ (${\sf T}$ now an \forallron\ tree) the situation is subtly in between \dP\ and ${\sf T}$: it seems that we need to know ${\sf T}_\alpha$ (and more) to define $M$-generic filters over \dS, but in fact this is less crucial: there is room for some errors. Let us make this more precise in the following. Recall the properness proof, and suppose that $M\prec H(\lambda)$ and the collapse $\Pi: M \rightarrow \overline{M}$ are given. We seek to find a generic filter $G$ over $\overline{M}$ such that $\Pi^{-1}G$ has an upper bound in $\dS$. Besides $\overline{M}$, the only parameters of importance were ${\sf T}_\mu$ ($\mu=\omega_1\cap M$) and the function $\Gamma$ which assigns to each $p\in\dS\cap M$ the countable set $\Gamma(p)(\mu)$ of requirements (in the sense of ${\sf T}'_\mu$) of height $\mu$.
Suppose now that we are not given the real ${\sf T}_\mu$ and $\Gamma$, but just a countable set of cofinal branches of ${\sf T} \cap \overline{M}$ (called ${\sf T}'_\mu$) and any function $\Gamma'$ such that $\Gamma'(p)$ is a countable collection of requirements of height $\mu$, and for every $p\in\dS\cap M$ and $\beta\in\omega_1\cap M$, $\Gamma'(p)(\beta)=\setm{X\lceil \beta}{X\in \Gamma'(p)}$. Then the increasing, $\overline{M}$-generic sequence of conditions $p_i\in\dS\cap \overline{M}$, $i<\omega$, could be defined to give a filter $G$. Of course, if ${\sf T}'_\mu$ and $\Gamma'$ are arbitrary, then we cannot be sure that $\Pi^{-1}$ of the filter $G$ thus obtained has an upper bound in \dS. {\em However}, the following observation comes to our rescue: given a countable collection $\setm{\pair{{\sf T}^i_\mu}{\Gamma^i}}{i<\omega}$, it is possible to find a generic $G$ which is good for {\em every} $\pair{{\sf T}_\mu^i}{\Gamma^i}$. `Good' in the sense that if some $\pair{{\sf T}^i_\mu}{\Gamma^i}$ were the real thing, then $G$ would have an upper bound in the external \dS. This is the essence of the notion of simple \dD-completeness. For completeness, we give now the definition from chapter V of Shelah's Proper Forcing [1982]. The reader can then see that \dS\ is indeed \dD-complete for a simple $\omega_1$-completeness system. The theory developed there shows that the countable support iteration of $\alpha$-proper ($\alpha < \omega_1$) and simple \dD-complete posets adds no new countable sets.
\noindent{\bf Definition of \dD-completeness}. (See Definitions 5.2, 5.3 and 5.5 in Shelah [1982].) For any structure $N$, let $\pi: N \rightarrow \bar{N}$ denote the Mostowski collapse of $N$ to a transitive structure. When enough set-theory is present in $\bar{N}$, the forcing relation can be defined in $\bar{N}$. So, given a poset $\bar{P} \in \bar{N}$, if $\bar{N}$ is countable, an $\bar{N}$-generic filter over $\bar{P}$ can be found, and the generic extension $\bar{N}[G]$ can be formed. We let\\
$Gen(\bar{N},\bar{P},\bar{p})=\{G\subset \bar{P} \mid G \ \mbox{\it is an } \bar{N}\mbox{\it -generic filter over}\ \bar{P},\ {\it and}\ \bar{p} \in G\}.$
The function $\dD$ is called an $\aleph_1$-completeness system iff\\
{\it For every countable transitive model} $\bar{N}$ {\it (of enough set-theory) and} $\bar{p} \in \bar{P} \in \bar{N}$, $\dD(\bar{N}, \bar{P}, \bar{p})$ {\it is a family of subsets of} $Gen(\bar{N}, \bar{P}, \bar{p})$ {\it such that every intersection of countably many sets in that family is non-empty.}
Thus if $G \in A \in \dD(\bar{N},\bar{P},\bar{p})$ then $G$ is an $\bar{N}$-generic filter over $\bar{P}$ with $\bar{p} \in G$, and if $A^i \in \dD(\bar{N},\bar{P},\bar{p})$ for $i<\omega$ then $\cap_{i<\omega}A^i$ is non-empty. Given a completeness system $\dD$, we say that the poset $P$ is $\dD$-complete iff for some large enough $\kappa$ the following holds: For every $N \prec H(\kappa)$ with $P \in N$ and for every $p \in P\cap N$, let $\pi:N \rightarrow \bar{N}$ be the transitive collapse of $N$; put $\bar{P}=\pi(P)$, $\bar{p}=\pi(p)$. There is some $A \in \dD(\bar{N},\bar{P},\bar{p})$ such that for every $G \in A$:\\
$\pi^{-1}(G)=\{\pi^{-1}(g) \mid g \in G \}$ has an upper bound in $P$.
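As an illustration of this definition (this observation only connects it with the informal discussion above and is not used later), consider the poset \dP\ of countable conditions for adding a subset of $\omega_1$. For \dP\ one may take $A=Gen(\bar{N},\bar{P},\bar{p})$ itself: if $G$ is {\em any} $\bar{N}$-generic filter over $\bar{P}$ containing $\bar{p}$, then $\pi^{-1}(G)$ is a countable directed family of countable conditions, and its union is a condition of \dP\ which is an upper bound of $\pi^{-1}(G)$. Thus the trivial completeness system already witnesses that \dP\ is $\dD$-complete, in accordance with the remark that a \dP-generic filter can be found ``from within'' $\overline{M}$.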
Finally, let us say that the completeness system $\dD$ is {\em simple} iff it is given by a formula $\psi(G,\bar{P},\bar{p},x)$ in the following way:\\ $\dD(\bar{N},\bar{P},\bar{p})=\{A_x \mid x \subset \bar{N}\}$, where\\ $A_x=\{G \in Gen(\bar{N},\bar{P},\bar{p})\mid $\langle$gle \bar{N} \cup {\cal P}(\bar{N}),\in $\rangle$gle \models \psi(G,\bar{P},\bar{p},x) \}.$ In our case, the parameter $x$ describes ${\sf T}_\alpha$ and the function $p \longmapsto \Gamma(p)(\alpha)$, where $\alpha=\omega_1^{\bar{N}}$. As for the $\aleph_2$-chain condition of $\dS$, it follows from CH by the obvious remark that if two conditions have the same specializing function (but different promises) then they are compatible. In Chapter 8 of Shelah [1982] the notion of $\aleph_2$-p.i.c ($\aleph_2$ proper isomorphism condition) is defined, and it is shown that countable support iteration of length $\omega_2$ of such posets satisfies the $\aleph_2$-c.c. if CH is assumed. Our posets $\dS$ clearly satisfy the $\aleph_2$-p.i.c. and hence that result may be applied to conclude that the $\aleph_2$ chain condition holds for the iteration. \section{Models with few Suslin trees} \label{s6} Suppose ${\sf S}$ and all the derived trees of ${\sf S}$ are Suslin trees. We shall find now a generic extension in which the only Suslin trees are ${\sf S}$ and its derived trees, and such that no new countable sets are added by this extension. The extension is obtained as an iteration of length $2^{\aleph_1}=\aleph_2$ of posets of type $\dS({\sf T})$ described as follows. By the result of Section ~\ref{s3}, we know that if ${\sf S}$ and its derived trees remain Suslin at each stage of the iteration, then this also holds for the final limit of the iteration. We know that no new countable sets are added by the iteration of $\dS({\sf T})$ forcings, and that the $\aleph_2$-chain condition holds. The definition of the iteration is such that if, for some $\alpha<\omega_2$, $\dP_\alpha$ is defined, then $\dP_{\alpha+1}$ is obtained as an iteration $\dP_{\alpha}*\dqq_\alpha$ where (in $V^{\dP_\alpha}$) $\dqq_\alpha$ has the form $\dS( {\sf T})$ for an `appropriate' \forallron\ tree (${\sf T}$ is appropriate if for any derived tree ${\sf S}^1$ of ${\sf S}$, $\dline{{\sf T}\mbox{ is \forallron}}^{{\sf S}^1}=\bf \mbox{{\small $1$}}$). Then, by Theorem ~\ref{t4.6}, $\dline{\mbox{all derived trees of } {\sf S} \mbox{ are Suslin }}^{\dS(T)}=\bf \mbox{{\small $1$}}$. When care is taken of all appropriate ${\sf T}$ as above, the final extension $V[G]$ satisfies \begin{enumerate} \item ${\sf S}$ and its derived trees are Suslin. \item For any \forallron\ tree ${\sf T}$, either ${\sf T}$ is special, or for some derived tree ${\sf S}^1$ of ${\sf S}$, $\dline{{\sf T}\mbox{ is not \forallron}}^{{\sf S}^1}>{\bf 0}$. \end{enumerate} The latter possibility implies, as the following Lemma shows, that ${\sf T}$ contains a club-isomorphic copy of a derived tree of ${\sf S}$. \begin{lemma} Assume that ${\sf S}$ and its derived trees are all Suslin. Suppose ${\sf T}$ is an \forallron\ tree, and $\dline{{\sf T}\mbox{ is not \forallron}}^{{\sf S}^{1}}=\bf \mbox{{\small $1$}}$ for a derived tree ${\sf S}^1$ of ${\sf S}$. Assume further that ${\sf S}^1$ is of least dimension with this property. Then ${\sf S}^1$ is embeddable on a club set into ${\sf T}$. \end{lemma} \par\noindent{\bf Proof:}\quad Let us fix first some notation. 
${\sf S}^1$ has the form ${\sf S}_{a_{1}}\times\ldots\times {\sf S}_{a_{n}}$ for some $n$-tuple $\seqn{a_1}{a_n}$ of distinct elements of ${\sf S}_\gamma$ for some $\gamma<\omega_1$. $n$ is called the dimension of ${\sf S}^1$. Now, when we say that $e\in {\sf S}^1$ is of the form $e=\seqn{e_1}{e_n}$ it is assumed that $e_i>a_i$ in ${\sf S}$. For this Lemma, we assume that any node of limit height in ${\sf T}$ is determined by its predecessors. Let $b$ be a name in $V^{{\sf S}^{1}}$ such that
$$\dline{b\mbox{ is a cofinal branch in }{\sf T}}^{{\sf S}^{1}}=\bf \mbox{{\small $1$}}.$$
For any $e_1\in {\sf S}^1$ and $\alpha<\omega_1$ there is an extension $e_2$ of $e_1$ which determines $b_\alpha=b\cap {\sf T}_\alpha$. That is, for some $x\in {\sf T}_\alpha$, $e_2{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{{\sf S}^1}\ x\in b$. The set $D_\alpha$ of all conditions in ${\sf S}^1$ which thus determine $b_\alpha$ is a dense open set in ${\sf S}^1$. Since ${\sf S}^1$ is Suslin, there is a club set $E\subseteq \omega_1$ of limit ordinals such that for $\alpha\in E$, if $e\in ({\sf S}^1)_\alpha$ then $e\in D_\beta$ for all $\beta<\alpha$. That is, $e$ determines $b_\beta$ for all $\beta<\alpha$. But then $e$ must determine $b_\alpha$ as well, since there is a {\em single} node in ${\sf T}_\alpha$ above all those determined $b_\beta$'s, $\beta<\alpha$. So, we have that for $\alpha\in E$, $({\sf S}^1)_\alpha\subseteq D_\alpha$; hence for every $e\in({\sf S}^1)_\alpha$ there is some $f(e)\in {\sf T}_\alpha$ with $e{\raise 2pt\hbox{\small{$|$}}\kern -3pt \vdash \kern 3pt}_{{\sf S}^1}\ b_\alpha=f(e)$. As we will see, $f$ is an embedding of ${\sf S}^1$ on some club into ${\sf T}$. Clearly $f$ is an order preserving map of ${\sf S}^1\rest E$ into ${\sf T}\rest E$. We will find a club set $D\subseteq E$ such that, on ${\sf S}^1\rest D$, $f$ is one-to-one. The basic observation is that, as ${\sf T}$ is \forallron\ (and any node in ${\sf S}^1$ has extensions to every higher level), every $e\in {\sf S}^1\rest E$ has two extensions, $e_1$ and $e_2$, such that $f(e_1)\neq f(e_2)$. The following is a slight strengthening of this, which is obtained from the minimality of the dimension $n$ of ${\sf S}^1$.
\noindent{\bf Claim:} For every $e\in {\sf S}^1$, and for every set of indices $h\subset\fsetn{1}{n}$ (strict inclusion), there are two extensions, $e'=\seqn{e'_1}{e'_n}$ and $e''=\seqn{e''_1}{e''_n}$ of $e$, such that $f(e')\neq f(e'')$, and $e'_i= e''_i$ for $i\in h$.
\par\noindent{\bf Proof:}\quad Suppose this is not so; then there are $h=\fsetn{h(1)}{h(k)}\subseteq\fsetn{1}{n}$ with $k<n$ and $e=\seqn{e_1}{e_n}$ such that for every two extensions $e'$ and $e''$ of $e$ with the same restriction to $h$, $f(e')=f(e'')$. This means that, restricted to ${\sf S}^1_e$, the function $f$ actually depends only on ${\sf S}^2= {\sf S}_{e_{h(1)}}\times\ldots\times {\sf S}_{e_{h(k)}}$. This enables us to define a name, in ${\sf S}^2$ forcing, of a branch in ${\sf T}$. But then ${\sf S}^2$, of dimension $k$, contradicts the minimality of $n$. Now the proof of the Lemma can be concluded by showing that the embedding $f$ defined above is one-to-one on a club set. This follows from the Claim, since not only ${\sf S}^1$ but any other derived tree of ${\sf S}$ is Suslin. Take for example a countable elementary substructure $M$ of some $H_\kappa$, and put $\delta=M \cap \omega_1$. We claim that if $e^1 \not = e^2$ are in $({\sf S}^1)_\delta$ then $f(e^1) \not = f(e^2)$.
Confusing sequences with sets, put $e=e^1 \cup e^2$; then for some $k$ with $n<k \leq 2n$, $e$ is a $k$-tuple. $e$ `is' in fact an $M$-generic branch of a derived tree of ${\sf S}$ of dimension $k$. What the Claim implies is that for a dense open set of $k$-tuples of the form $e' \cup e^{''}$ in that derived tree, $f(e') \not = f(e^{''})$. Since $e$ is in that dense set, $f(e^1) \not = f(e^2)$, as claimed. The generalization of our discussion to any collection of trees poses no problems. Suppose we are given a collection ${\cal U}$ of Suslin trees such that if ${\sf S}\in {\cal U}$ then all derived trees of ${\sf S}$ are Suslin. Assume $2^{\aleph_0}=\aleph_1$ and $2^{\aleph_1}=\aleph_2$. Then iterate $\dS({\sf T})$ posets, just as before, so that all trees in ${\cal U}$ and their derived trees remain Suslin. We know that this is possible, for any \forallron\ tree ${\sf T}$, unless $\dline{{\sf T}\mbox{ is not \forallron}}^{{\sf S}^{1}}>0$ for some ${\sf S}^1$ which is a derived tree of a tree in ${\cal U}$. In such a case, we know that ${\sf T}$ must contain a restriction to a club set of a derived tree of some ${\sf S}\in {\cal U}$.
\section{The uniqueness of simple primal Suslin sequences}\label{s7}
This section sets up the preliminaries needed to prove the main theorem: the notions of {\em simple} and {\em primal} sequences of Suslin trees are defined, and the uniqueness of such sequences is proved. Using this material and the machinery developed to construct Suslin trees and to specialize them at will, the Encoding Theorem will be easily demonstrated in the following section. We will deal here not only with $\omega_1$-sequences of \forallron\ trees, but also with $I$-sequences, ${\cal T}=\sseqn{{\sf T}^\zeta}{\zeta\in I}$, of \forallron\ trees, where $I$ is an $\omega_1$-like set of indices. A linear order $(I,<)$ is said to be $\omega_1$-like iff it is uncountable but all proper initial segments are countable. In this paper we need a slightly stronger version, and add to these requirements that every point has a successor and that a first element exists. We say that $a\in I$ is a `limit' point if it is not a successor (so the first element is a limit). A point $a\in I$ is said to be `even' iff it is a limit or it has the form $\delta+n$, where $\delta$ is a limit and $n<\omega$ is an even integer. Similarly, `odd' points of $I$ are defined. We will call the members of $I$ `indices', since this is how they will be used. In some cases $I$ is, or is isomorphic to, $\omega_1$, but in general an $\omega_1$-like order need not be well-founded. Indeed, an important point of our argument is that, in some universe, the Magidor-Malitz quantifiers can be used to force $I$ to be well-founded. Let ${\cal T}=\sseqn{{\sf T}^\zeta}{\zeta\in I}$ be a sequence of \forallron\ trees. Recall that for $d\in[I]^{<\omega}-\{\emptyset\}$ (a finite non-empty subset of $I$), ${\sf T}^d=\bigcup_{\zeta\in d}{\sf T}^\zeta$ is the disjoint union of the \forallron\ trees with indices in $d$. A derived tree of ${\sf T}^d$ is thus a product of derived trees of the ${\sf T}^\zeta$'s. One of these derived trees is ${\sf T}^{(d)}={\mbox{\large $\times$}}_{\zeta\in d}{\sf T}^\zeta$. Let {\sf su}\ be a collection of non-empty finite subsets of $I$ which is closed under subsets, and let ${\sf sp}=[I]^{<\omega}-\{\emptyset\}-{\sf su}$ be the complement of {\sf su}. ({\sf sp}\ is closed under supersets.)
We say then that $({\sf su},{\sf sp})$ is a pattern (over $I$) \begin{defn} We say that the $I$-sequence ${\cal T}$ of \forallron\ trees has the pattern $({\sf su},{\sf sp})$ if \begin{enumerate} \item For $d\in{\sf su}$, every derived tree of ${\sf T}^d$ is Suslin. \item For $d\in{\sf sp}$, ${\sf T}^{(d)}$ is a special tree. \end{enumerate} \end{defn} \begin{defn} \begin{enumerate} \item A collection ${\cal U}$ of Suslin trees is {\em primal} iff all derived trees of trees in ${\cal U}$ are Suslin, and for any Suslin tree ${\sf A}$ there exists some ${\sf S}\in{\cal U}$ such that a derived tree of ${\sf S}$ is club-embeddable into ${\sf A}$. \item The sequence ${\cal T}$ with Suslin-special pattern $({\sf su},{\sf sp})$ is called primal iff the collection ${\cal U}=\setm{{\sf T}^d}{d\in{\sf su}}$ is primal. \end{enumerate} \end{defn} We may summarize our results obtained so far in the following: \begin{theorem} {\rm 1.} Assume $\diamond_{\omega_{1}}$. Given any pattern $({\sf su},{\sf sp})$ over $\omega_1$, there exists an $\omega_1$-sequence of \forallron\ trees ${\cal T}=\sseqn{{\sf T}^\zeta}{\zeta\in\omega_1}$ with this pattern. (Section ~\ref{s2.3}.)\\ \noindent{\rm 2.} Assume $2^{\aleph_0}=\aleph_1$ and $2^{\aleph_1}=\aleph_2$. Let ${\cal U}$ be a collection of Suslin trees such that for ${\sf S}\in{\cal U}$ all derived trees of ${\sf S}$ are Suslin as well. There is then an $\aleph_2$-c.c. generic extension which adds no new countable sets, and in which ${\cal U}$ is a primal collection of Suslin trees. (Section ~\ref{s6}.) \end{theorem} \subsection{Simple Patterns}\label{s7.1} Let $I$ be an $\omega_1$-like order. We will have to refer to quadruples $\zeta_1<\zeta_2<\zeta_3<\zeta_4$ of indices given in increasing order in $I$, with some simple properties called types. For example, $\bar{\zeta}=\seqn{\zeta_1}{\zeta_4}$ is of type $$\langle$gle$odd, even, odd, even$$\rangle$gle$ if $\zeta_1$ and $\zeta_3$ are odd, and $\zeta_2,\ \zeta_4$ are even. Similar notations are obvious. Now we define when the pattern pair $({\sf su},{\sf sp})$ is said to be {\em simple}: If {\sf su}\ contains all tuples in the columns Suslin, and {\sf sp}\ contains all tuples in the column Special, in the following table. In case $I=\omega_1$, for each limit ordinal $\delta$, pick a canonical well-order $E_\delta$ of $\omega$ of order-type $\delta$. By standard encoding, we assume that $E_\delta\subseteq \omega$. \begin{tabular}{ll} \underline{Suslin} & \underline{Special}\\ \\ All triples, pairs, and singletons & All quintuples\\ $\langle$ even, even, even, even$\rangle$ & \\ $\langle$ odd, odd, odd, odd$\rangle$ \\ $\langle$ odd, even, odd, even$\rangle$& $\langle$ even, odd, even, odd$\rangle$\\ $\langle$ odd, even, even, odd$\rangle$ & $\langle$ odd, even, even, even$\rangle$ \\ $\langle$ limit $\delta$, even, odd, odd$\rangle$ & $\langle$ non-limit even, even, odd, odd$\rangle$ \\ $\langle$ $\zeta$, $\zeta+1$, odd, odd$\rangle$ & $\langle$ $\alpha$, $\beta$,odd, odd$\rangle$ where $\alpha$ and $\beta$ have\\ & different parity, and $\alpha +1 < \beta$.\\ $\langle$ $\delta+1,\ \delta+3,\ \delta+4,\ \delta+(5+i)$$\rangle$ & $\langle$ $\delta+1,\ \delta+3,\ \delta+4,\ \delta+(5+i)$$\rangle$ \\ where $i \in E_\delta$, $\delta$ a limit & where $\delta$ is limit and $i\not\in E_\delta$. \end{tabular} In the table we used the sets $E_\delta\subseteq\omega$; these are required only in case $I$ is well-ordered. 
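To illustrate how the table is read (a spelled-out instance only; it plays no role in the proofs), let $\delta$ be a limit point of $I$. The quadruple $\delta+1<\delta+2<\delta+3<\delta+4$ is of type $\langle$ odd, even, odd, even$\rangle$\ and therefore lies in {\sf su}, while $\delta+2<\delta+3<\delta+4<\delta+5$ is of type $\langle$ even, odd, even, odd$\rangle$\ and therefore lies in {\sf sp}; one checks that no other row of the table applies to either quadruple.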
It should be checked that if $d\subseteq I$ appears in the Suslin column of the table, and $e\subseteq I$ appears in the Special column, then $e\not\subseteq d$. The Suslin tuples are closed under subsets, and even after closing the Special tuples under supersets, disjoint sets are obtained. The following theorem explains the use of simple sequences (sequences of trees with simple patterns). In fact, as the reader may find out, there are other notions of simplicity which can be used to derive the conclusion of the Theorem.
\begin{theorem}[Unique Pattern]
\label{t7.4}
Suppose that ${\cal T}=\sseqn{{\sf T}^\zeta}{\zeta\in \omega_1}$ and ${\cal A}=\sseqn{{\sf A}^\zeta}{\zeta\in I}$ are an $\omega_1$-sequence and an $\omega_1$-like sequence of Suslin trees with simple Suslin-special patterns. If ${\cal T}$ is primal, then $I$ is isomorphic to $\omega_1$, and ${\cal T}$ and ${\cal A}$ have the {\em same} Suslin-special pattern.
\end{theorem}
\par\noindent{\bf Proof:}\quad We are going to define an order isomorphism $\fnn{d}{I}{\omega_1}$, and show that for every $\zeta\in I$, ${\sf A}^\zeta$ contains a club-embedding of a derived tree of ${\sf T}^{d(\zeta)}$. This suffices to derive the equality of the patterns of ${\cal T}$ and ${\cal A}$. We shall use the following easy observations: If ${\sf A}$ is special and {\sf B}\ is any tree, then ${\sf A}\times{\sf B}$ is special too. If $\fnn{h}{{\sf A}}{{\sf B}}$ is a club embedding of the tree ${\sf A}$ into ${\sf B}$, then \\
\begin{enumerate}
\item ${\sf B}$ is special $\Rightarrow$ {\sf A}\ is special (on a club set of levels and hence on all levels; see Devlin and Johnsbr\aa ten [1974]).
\item ${\sf A}$ is special $\Rightarrow$ ${\sf B}$ is not Suslin.
\end{enumerate}
Let $({\sf su}_1,{\sf sp}_1),\ ({\sf su}_2,{\sf sp}_2)$ be the patterns of ${\cal T}$ and ${\cal A}$ respectively. Since ${\cal T}$ is assumed to be primal, for every Suslin tree ${\sf A}^\xi$ there is $d=d(\xi)\in{\sf su}_1$ such that ${\sf A}^\xi$ contains a club image of a derived tree of ${\sf T}^d$. Our aim is to prove that
$$
d(\xi)\mbox{\it\ is a singleton, and } d\ \mbox{\it establishes an isomorphism of }I\mbox{\it\ onto}\ \omega_1.
$$
This will be achieved in the following steps.
\begin{description}
\itm{a} {\em There is no quintuple $\xi_1,\ldots,\xi_5$ such that, for all indices $i,j$, $d(\xi_i)=d(\xi_j)$.} Suppose, for the sake of a contradiction, that for some $d\in{\sf su}_1$, $d=d(\xi_i)$ for five indices $\xi_1,\ldots,\xi_5$. Then there are club embeddings, $h_i$, from derived trees of ${\sf T}^d$ into ${\sf A}^{\xi_i}$, $1\leq i\leq 5$. We may combine these embeddings into a club embedding of a derived tree of ${\sf T}^d$ into ${\mbox{\large $\times$}} _{1\leq i\leq 5} {\sf A}^{\xi_i}$. But since every quintuple is in ${\sf sp}_2$, this product is a special tree, and so this derived tree of ${\sf T}^d$ cannot be Suslin.
\itm{b} {\em There are not uncountably many $\xi$'s with $|d(\xi)|>1$.} To see this, suppose the contrary, and let an uncountable set $X\subseteq I$ be such that for $\xi\in X$, $d(\xi)$ contains more than one element. We may assume that the finite sets $d(\xi)$, $\xi\in X$, form a $\Delta$-system, and that either all members of $X$ are odd or all are even. Hence every quadruple $e\subset X$ is in ${\sf su}_2$, and so all derived trees of ${\sf A}^e$ are Suslin. We will get the contradiction by considering the two possibilities for the $\Delta$-system.
If the core of the system is all of $d(\xi)$, i.e., $d=d(\xi_1)=d(\xi_2)$ for $\xi_1,\xi_2\in X$, then a contradiction to (a) is obtained. If the core of the system is strictly included in $d(\xi),$ for $ \xi\in X$, then for a quadruple $\xi_1,\ldots,\xi_4$ in $X$, $d=\bigcup_{1\leq i\leq 4} d(\xi_i)$ contains $\geq 5$ indices. Now ${\mbox{\large $\times$}}_{i\in d}{\sf T}^i$ is a special tree, and has a derived tree which is club embeddable into ${\mbox{\large $\times$}}_{1\leq i\leq 4}{\sf A}^{\xi_i}$ which is a Suslin tree. This is clearly impossible since a derived tree of a special tree is special. Now that we have proved that on a co-countable set $d(\xi)$ is a singleton, we proceed to show that $d(\xi)$ is a singleton for every $\xi$. \itm{c} {\em For every $\xi\in I,\ |d(\xi)|=1$.} This will enable us to change notation and write $d(\xi)\in \omega_1$ (instead of $d(\xi)\subseteq \omega_1$). Assume, for some $\xi_1$, $|d(\xi_1)|\geq 2$. Suppose, for example, that $\xi_1$ is even. We can find (in the co-countable set of (b)) even indices $\xi_2,\xi_3,\xi_4$ such that $c=\bigcup_{1\leq i\leq 4} d(\xi_i)$ contains $\geq 5$ indices. Since $e=\fsetn{\xi_1}{\xi_4}\in{\sf su}_2$, all derived trees of $A^e$ are Suslin, and in particular $B={\mbox{\large $\times$}}_{i\in e}A^i$ is Suslin. On the other hand, there is an embedding of a derived tree of ${\sf T}^c$ on a club into {\sf B}, but this is impossible as any such derived tree of ${\sf T}^c$ is special (as $\mid c \mid \geq 5$). \itm{d} $d$ {\em is one-to-one}. Suppose that $\xi_1<\xi_2$ but $d(\xi_1)=d(\xi_2)$. Consider the four possibilities for $(\xi_1,\xi_2)$. (1) both are even, (2) both are odd, (3) $\xi_1$ is even and $\xi_2$ is odd, (4) $\xi_1$ is odd and $\xi_2$ is even. In each case, it is possible to find $\zeta_1,\ \zeta_2$ such that $e=\{\xi_1,\xi_2\}\cup \{ \zeta_1,\zeta_2\} \in{\sf sp}_2$. (For example, if both $\xi_1$ and $\xi_2$ are even, find odd $\zeta_1,\ \zeta_2$ such that $\xi_1<\zeta_1 < \xi_2 < \zeta_2$.) Yet $t=\setm{d(\xi)}{\xi\in e}$ contains at most 3 indices, and a club embedding of a derived tree of ${\sf T}^t$ into ${\mbox{\large $\times$}}_{\xi\in e}{\sf A}^\xi$ is obtained. This contradicts the fact that the first tree is Suslin and the latter is special. At this stage we don't know yet that $d$ is order preserving; but as $d$ is one-to-one from an $\omega_1$-like order into an $\omega_1$-like order, for any $\alpha$, if $\beta > \alpha$ is sufficiently large, then $d(\beta)>d(\alpha)$. This simple remark is used below. \itm{e} $\xi$ {\em is even if and only if $d(\xi)$ is even}. Let us first check this: Could it be that there are both uncountably many even $\xi$'s with $d(\xi)$ odd, and uncountably many odd $\xi$'s with $d(\xi)$ even? No, because in such a case (using the remark made above) we will find a quadruple $\seqn{\xi_1}{\xi_4}$ of type $\langle$ even, odd, even, odd$\rangle$\ with $d$-image of type $\langle$ odd, even, odd, even$\rangle$. But this is impossible since the first type is in the special column, and the second in the Suslin column of the simplicity table. It follows now that there are not uncountably many odd $\xi$'s for which $d(\xi)$ is even. For otherwise, using our result above, on a co-countable set of even $\xi$'s, $d(\xi)$ is even. Then a quadruple of type $\langle$ even, odd, even, odd$\rangle$\ has $d$-image of type $\langle$ even, even, even, even$\rangle$. Again this is impossible. Similarly, there are no uncountable many even $\xi$'s with $d(\xi)$ odd. 
So there can be at most countably many changes of parity. In fact, even if there is a single odd $\xi_1$ with even $d(\xi_1)$, we get a contradiction by finding even $\xi_2,\xi_3,\xi_4$ so that $\langle \xi_1\ldots \xi_4\rangle$ is of type $\langle$ odd, even, even, even$\rangle$, but its $d$-image is of type $\langle$ even, even, even, even$\rangle$. Likewise, there is no even $\xi$ with odd $d(\xi)$.
\itm{f} $d$ {\em is order preserving}. Since $d$ is one-to-one, it is order preserving on an uncountable set. First we prove that if $\xi_1<\xi_2$ is of type $\langle$ even, odd$\rangle$\ or of type $\langle$ odd, even$\rangle$, then $d(\xi_1)<d(\xi_2)$. Assume this is not the case, and $d(\xi_1)>d(\xi_2)$. Then find $\xi_3,\xi_4$ such that $\seqn{\xi_1}{\xi_4}$ is of type $t_1=$$\langle$ even, odd, even, odd$\rangle$\ or of type $t_2=$$\langle$ odd, even, even, odd$\rangle$\ and such that only $\xi_1$ and $\xi_2$ change places. Then, since the first two coordinates change their place, the $d$-image of this quadruple is (respectively) of type $t_2$ or $t_1$. But since $t_1$ and $t_2$ are in different columns, a contradiction is derived. To see now that $d$ is order preserving on {\em any} pair, pick $\xi_1<\xi_2$ with the same parity. Then, since $\xi_1+1$ is of the other parity, $d(\xi_1)<d(\xi_1+1)<d(\xi_2)$.
\itm{g} {\em $\delta$ is limit iff $d(\delta)$ is limit}.\\
This follows from the fact that $\langle$ limit, even, odd, odd$\rangle$\ is of type Suslin, while $\langle$ even but not limit, even, odd, odd$\rangle$\ is of type special.
\itm{h} {\em For every $\xi$, $d(\xi+1)=d(\xi)+1$.} This is a consequence of the assumption that $\langle \xi,\xi+1,\mbox{odd, odd}\rangle$ is of type Suslin, while $\langle d(\xi),d(\xi+1),\mbox{odd, odd}\rangle$ is of type special in case $d(\xi)+1<d(\xi +1)$.
\itm{i} $d$ {\em is onto} $\omega_1$. Since $d$ preserves the order, $I$ is well-ordered as well. Now, since for limit $\delta\in I$, $d(\delta)$ is limit, $d$ maps the block $[\delta,\delta+\omega)$ onto the block $[d(\delta),d(\delta)+\omega)$; thus the order-type of $\delta$ is the same as the order-type of $d(\delta)$ (use $E_\delta$). Hence $\delta=d(\delta)$ and $d$ is onto.
\end{description}
So we have concluded that $d$ is the identity, and thence the Unique Pattern Theorem follows.
\section{The encoding scheme}
\label{s8}
We now have all the ingredients for the main result---to show how to encode subsets of $\omega_1$ with simple patterns of Suslin sequences. The Magidor-Malitz quantifiers provide a concise way to describe our result. Recall that the quantified formula $Q x,y\ \varphi(x,y)$ holds in a structure $M$ if there is a set ${\sf A}\subseteq M$ of cardinality $\aleph_1$ such that for any two distinct $x,y\in {\sf A}$, $M$ satisfies $\varphi(x,y)$ (Magidor and Malitz [1977]). Let us see what can be stated in this language. We may say that the set ${\sf A}$ (a unary predicate) is uncountable, simply by stating $Q x,y\ ({\sf A}(x))$. To say that a linear order relation $\prec$ is $\omega_1$-like we just say that it is uncountable, and any initial segment $\setm{x}{x\prec y}$ is countable (and the obvious first-order properties). Let us accept a slightly freer notion of $\omega_1$-trees: that of $\omega_1$-like trees. These are trees whose set of levels is not $\omega_1$ but an $\omega_1$-like order. The predecessors of a node in an $\omega_1$-like tree form a countable chain which is not necessarily well-ordered.
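Purely as an illustration of the non-well-founded possibility (this example is not used anywhere below): start with $\omega_1$, keep the first $\omega$ points, and replace every other point by a copy of the integers, ordered lexicographically. The resulting order is uncountable, every proper initial segment is countable, it has a first element and every point has a successor, yet it is not well-ordered, as each copy of the integers has no least element.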
Since an $\omega_1$-like order embeds $\omega_1$, any $\omega_1$-like tree contains an $\omega_1$-tree. The notions of $\omega_1$-like Suslin trees and $\omega_1$-like special trees can be defined and characterized in the Magidor-Malitz logic. There is a sentence $\sigma$ (in the language containing a binary relation $<$) such that ${\sf T}\models \sigma$ iff $({\sf T},<)$ is an $\omega_1$-like Suslin tree. $\sigma$ will simply state that ${\sf T}$ is an uncountable tree (with the obvious properties), and that there is no uncountable set of pairwise incomparable nodes in ${\sf T}$. Going one more step, we describe now a sentence, $\varphi$, which holds true only in simple $\omega_1$-like sequences of $\omega_1$-like Suslin trees $\sseqn{{\sf T}^\zeta}{\zeta\in I}$, where $I$ is an $\omega_1$-like ordering $\prec$. For this we may have to introduce a one-place predicate symbol $I$, and a two-place predicate ${\sf T}(a,i)$ which, for a particular $i\in I$, describes the tree ${\sf T}^i$. There is also need for a function which specializes those products which must be specialized. Giving more information on how $\varphi$ looks may annoy the reader, who can find these details for herself; so we stop and state our theorem.
\begin{theorem}[Encoding Theorem]
\label{t8.1}
There is a sentence $\psi$ in the Magidor-Malitz logic which contains, besides the symbols $\prec$ etc. described above, a one place predicate $P(x)$, such that the following holds: Assuming $\diamond_{\omega_{1}}$, for any $X\subseteq \omega_1$,
\begin{enumerate}
\item There is a model $M\models \psi$ for which $I^M=\omega_1$ and $P^M=X$.
\item Assume $2^{\aleph_1}=\aleph_2$. There is a generic extension of the universe which adds no new countable sets and collapses no cardinals, and such that in this extension: If $N$ is any model satisfying $\psi$, then $I^N$ has order-type $\omega_1$, and (identifying $I^N$ with $\omega_1$)
$$
P^N=X.
$$
\end{enumerate}
\end{theorem}
\par\noindent{\bf Proof:}\quad Let us first remark that the assumptions $\diamond_{\omega_1}$ and $2^{\aleph_1}=\aleph_2$ are not crucial, since these assumptions can be obtained with a forcing of size $2^{2^{\aleph_1}}$. The sentence $\psi$ is the simple-sequence sentence $\varphi$ partially described above with the addition of
$$
\forall \zeta\in I\ (\langle 3,5,6,\zeta\rangle\in{\sf su}\mbox{ iff }P(\zeta)).
$$
Given $X\subseteq\omega_1$ (assume that $X$ contains only ordinals $>6$), let $\langle {\sf su},{\sf sp}\rangle$ be a simple pattern such that $X=\setm{\zeta\in\omega_1}{\langle 3,5,6,\zeta\rangle\in{\sf su}}$. Then use $\diamond_{\omega_{1}}$ to construct an $\omega_1$-sequence ${\cal T}=\sseqn{{\sf T}^\zeta}{\zeta\in\omega_1}$ of Suslin trees which has the pattern $\langle{\sf su},{\sf sp}\rangle$. This takes care of {\it 1}. The generic extension is a countable support iteration of specializing posets which keeps every derived tree of every ${\sf T}^d$, $d\in{\sf su}$, Suslin, but specializes any \forallron\ tree when possible. As we showed (Section ~\ref{s6}), an iteration of {\it length}\ $\aleph_2$ suffices to make ${\cal T}$ a primal sequence: any \forallron\ tree ${\sf A}$ in the extension is either special or it contains a club-embedding of a derived tree of some ${\sf T}^d$, $d\in{\sf su}$. Now we have the required uniqueness. Let $N$ be a model of $\psi$.
Then $I=I^N$ is an $\omega_1$-like ordering, and for each $i\in I$, a Suslin tree ${\sf A}^i$ can be reconstituted from the set of $a$'s for which ${\sf T}(a,i)$ holds in $N$. The sequence ${\cal A}= \sseqn{{\sf A}^i}{i\in I}$ is simple, with pattern $\pair{{\sf su}^N}{{\sf sp}^N}$. The Unique Pattern Theorem ~\ref{t7.4} can now be applied to ${\cal T}$ and ${\cal A}$ to yield $I^N\approx\omega_1$ and ${\sf su}={\sf su}^N$; that is, the two sequences have the same pattern. From this it follows that $X=P^N$.
\subsection{The complete proof of Theorem A}
To obtain the $\Delta^2_2$ encoding of any subset of $\hbox{{\sf I}\kern-.1500em \hbox{\sf R}}$, Theorem ~\ref{t8.1} is sufficient. However, the statement of Theorem A in the Preface is neater because it provides a categorical sentence, while Theorem ~\ref{t8.1} only establishes the uniqueness of $P^N$. The sentence $\psi$ described above contains predicates and function symbols other than $P$, and they too must be encoded by the unique pattern sequence of trees. To prove Theorem A, we ``catch our tail'' in the following way. Not only the tree sequence $\langle {\sf T}^\xi \rest \alpha +1 \mid \xi \in \alpha \rangle$ is constructed inductively, but so is the simple Suslin-special pattern itself. More precisely, at the limit stage $\alpha$, we encode the countable structure so far defined (the trees, $P$, etc.) as a subset of the ordinal-interval $(\alpha, \alpha+\omega )$, and we put $\langle 3,5,6,\xi \rangle \in {\sf su}$ for $\xi \in (\alpha, \alpha+\omega)$ so that it encodes that countable structure. The categorical sentence $\psi'$ tells us this fact as well. The proof continues just as before: any model $M$ of $\psi'$ determines a Suslin-special pattern, which must be the unique such simple pattern; this pattern in turn determines $M$, and hence $M$ is unique.
\newpage
\section*{References}
\noindent
\vskip 9pt\noindent {\sf U. Abraham and S. Shelah} \ipar{1985} {\em Isomorphism Types of Aronszajn Trees}, Israel J. Math., Vol. 50, pp. 75-113.
\vskip 9pt\noindent {\sf K. J. Devlin and H. Johnsbr\aa ten} \ipar{1974} {\em The Souslin Problem}, Lecture Notes in Math. {\bf 405}, Springer-Verlag, New York.
\vskip 9pt\noindent {\sf T. Jech} \ipar{1978} {\em Set Theory}, Academic Press, New York.
\vskip 9pt\noindent {\sf R. B. Jensen and H. Johnsbr\aa ten} \ipar{1974} {\em A New Construction of a Non-Constructible $\Delta^1_3$ Subset of $\omega$}, Fund. Math. 81, pp. 279-290.
\vskip 9pt\noindent {\sf M. Magidor and J. Malitz} \ipar{1977} {\em Compact Extensions of $L(Q)$}, Annals of Mathematical Logic, Vol. 11, pp. 217-261.
\vskip 9pt\noindent {\sf S. Shelah} \ipar{1982} {\em Proper Forcing}, Lecture Notes in Math. {\bf 940}, Springer-Verlag, Berlin, Heidelberg, New York.
\ipar{1992} {\em Proper and Improper Forcing}, (relevant preprints may be obtained from the author).
\vskip 9pt\noindent {\sf S. Shelah and H. Woodin} \ipar{1990} {\em Large cardinals imply every reasonably definable set is measurable}, Israel J. Math. 70, pp. 381-394.
\vskip 9pt\noindent {\sf S. Todor\v{c}evi\'{c}} \ipar{1984} {\em Trees and Linearly Ordered Sets}, in {\em Handbook of Set-Theoretic Topology} (K. Kunen and J. E. Vaughan, eds.), North-Holland, Amsterdam, pp. 235-293.
\vskip 9pt\noindent {\sf H. Woodin} \ipar{.} {\em Large cardinals and determinacy}, in preparation.
\end{document}
\betagin{document} \sloppy \newenvironment{proo}{\betagin{trivlist} \item{\sc {Proof.}}} { $\square$ \end{trivlist}} \longrightarrowg\def\symbolfootnote[#1]#2{\betagingroup \def\thefootnote{{\mathcal O}nsymbol{footnote}}{\mathcal O}ootnote[#1]{#2}\endgroup} \title{On quantizable odd Lie bialgebras} \author{Anton~Khoroshkin} {\mathrm a\mathrm d}dress{Anton~Khoroshkin: National Research University Higher School of Economics, International Laboratory of Representation Theory and Mathematical Physics, 20 Myasnitskay Ulitsa, Moscow 101000, Russia \newline and ITEP, Bolshaya Cheremushkinskaya 25, 117259, Moscow, Russia } \email{[email protected]} \author{Sergei~Merkulov} {\mathrm a\mathrm d}dress{Sergei~Merkulov: Mathematics Research Unit, Luxembourg University, Grand Duchy of Luxembourg} \email{[email protected]} \author{Thomas~Willwacher} {\mathrm a\mathrm d}dress{Thomas~Willwacher: Institute of Mathematics, University of Zurich, Zurich, Switzerland} \email{[email protected]} \betagin{abstract} Motivated by the obstruction to the deformation quantization of Poisson structures in {\em infinite}\, dimensions we introduce the notion of a quantizable odd Lie bialgebra. The main result of the paper is a construction of the highly non-trivial minimal resolution of the properad governing such Lie bialgebras, and its link with the theory of so called {\em quantizable}\, Poisson structures. \end{abstract} \maketitle \section{Introduction} \subsection{Even and odd Lie bialgebras} A Lie ($c,d$)-bialgebra is a graded vector space $V$ which carries both a degree $c$ Lie algebra structure $$ \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}\simeq [\ ,\ ]: V\otimes V\bar{i}ghtarrow V[c] $$ and a degree $d$ Lie coalgebra structure, $$ \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{}, \end{xy}\simeq \Delta:V \bar{i}ghtarrow V\otimes V[d] $$ satisfying the following compatibility condition: \betagin{multline*} \betagin{xy} <0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-}, <-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-}, <0mm,3mm>*{\circ};<0mm,3mm>*{}**@{}, <0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-}, <0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{}, <-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{}, <0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{}, <0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy} \ - \ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{}, \end{xy} \ - (-1)^{d}\ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, 
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{}, \end{xy} \ -(-1)^{c+d} \ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{}, \end{xy} \ - (-1)^{c} \ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{}, \end{xy}\ \simeq \\ \simeq\ \Delta([x_1,x_2]) - [\Delta(x_1),x_2\otimesimes 1 + 1\otimesimes x_2] {\partial}m [x_1\otimesimes 1 + 1\otimesimes x_1,\Delta(x_2)] = 0. \end{multline*} If the ${\mathbb Z}_2$-parities of both structures are the same, i.e.\ if $c+d\in 2{\mathbb Z}$, the Lie bialgebra is called {\em even}, if the ${\mathbb Z}_2$-parities are opposite, $c+d\in 2{\mathbb Z}+1$, it is called {\em odd}. In the even case the most interesting for applications Lie bialgebras have $c=d=0$. Such Lie bialgebras were introduced by Drinfeld in \cite{D1} in the context of the theory of Yang-Baxter equations, and they have since found numerous applications, most prominently in the theory of Hopf algebra deformations of universal enveloping algebras (see the book \cite{ES} and references cited there). If the composition of the cobracket and bracket of a Lie bialgebra is zero, that is \betagin{equation}\lambdabel{1: involutivity condition} \betagin{array}{c}\resizebox{4mm}{!} {\xy (0,0)*{\circ}="a", (0,6)*{\circ}="b", (3,3)*{}="c", (-3,3)*{}="d", (0,9)*{}="b'", (0,-3)*{}="a'", \ar@{-} "a";"c" <0pt> \ar @{-} "a";"d" <0pt> \ar @{-} "a";"a'" <0pt> \ar @{-} "b";"c" <0pt> \ar @{-} "b";"d" <0pt> \ar @{-} "b";"b'" <0pt> \endxy} \end{array} = 0, \end{equation} then the Lie bialgebra is called \emph{involutive}. This additional constraint is satisfied in many interesting examples studied in homological algebra, string topology, symplectic field theory, Lagrangian Floer theory of higher genus, and the theory of cohomology groups $H({\mathcal M}_{g,n})$ of moduli spaces of algebraic curves with labelings of punctures skewsymmetrized \cite{D1,ES,Ch,ChSu,CFL,S,CMW,MW0}. In the odd case the most interesting for applications Lie bialgebras have $c=1$, $d=0$. They have been introduced in \cite{Me1} and have seen applications in Poisson geometry, deformation quantization of Poisson structures \cite{Me2} and in the theory of cohomology groups $H({\mathcal M}_{g,n})$ of moduli spaces of algebraic curves with labelings of punctures { symmetrized} \cite{MW0}. 
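For orientation, here is a minimal unpacking of the degree conventions just fixed in the odd case $c=1$, $d=0$, which is the case of main interest below (this is only a restatement of the definitions): the bracket raises the total degree by one while the cobracket preserves it,
$$
|[x_1,x_2]| = |x_1|+|x_2|+1, \qquad \Delta(V^i)\subset \bigoplus_{j+k=i} V^j\otimes V^k ,
$$
so that, in particular, the composite $[\ ,\ ]\circ\Delta: V\rightarrow V$ appearing in (\ref{1: involutivity condition}) is a map of degree $c+d=1$.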
The homotopy and deformation theories of even/odd Lie bialgebras, and also of involutive Lie bialgebras, have been studied in \cite{CMW,MW}. A key tool in those studies is a minimal resolution of the properad governing the algebraic structure under consideration. The minimal resolutions of the properads $\mathcal{L}\mathit{ieb}$ and $\mathcal{L}\mathit{ieb}_{odd}$ governing even and, respectively, odd Lie bialgebras were constructed in \cite{Ko,MaVo,V} and, respectively, in \cite{Me1,Me2}. Constructing a minimal resolution $\mathcal{H}\mathit{olieb}^\diamond$ of the properad $\mathcal{L}\mathit{ieb}^\diamond$ governing {\em involutive}\, Lie bialgebras turned out to be a more difficult problem, and that goal was achieved only very recently in \cite{CMW}.

\subsection{Quantizable odd Lie bialgebras} For odd Lie bialgebras the involutivity condition (\ref{1: involutivity condition}) is trivial, i.e.\ it is satisfied automatically for any odd Lie bialgebra $V$. There is, however, a higher genus analogue of that condition,
\begin{equation}\label{1: quantizability condition}
\begin{array}{c}
\resizebox{8mm}{!}
{
\begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-},
<0mm,0.8mm>*{}**@{},
<2.45mm,2.35mm>*{}**@{},
<2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-},
<3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-},
<-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-},
<2.96mm,5mm>*{};
<0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-},
<0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-},
<5.5mm,5mm>*{};<3.4mm,7.0mm>*{}**@{-},
<2.9mm,8.2mm>*{};<2.9mm,10.5mm>*{}**@{-},
<-2.8mm,2.5mm>*{};<0mm,5mm>*{\circ}**@{},
<0mm,-0.8mm>*{\circ};
<2.96mm,2.4mm>*{\circ};
<2.96mm,7.5mm>*{\circ}**@{},
\end{xy}}
\end{array}\ =\ 0,
\end{equation}
which is highly non-trivial, and which can be considered as an odd analogue of (\ref{1: involutivity condition}). We prefer, however, to call odd Lie bialgebras satisfying the extra constraint (\ref{1: quantizability condition}) {\em quantizable}\, ones rather than involutive. The reason for this terminology is explained in \S 2: this graph controls, on the one hand, the obstruction to the universal quantization of Poisson structures in {\em infinite}\, dimensions \cite{Me1,Me2}, and, on the other hand, it controls the obstruction to the existence of a geometrically meaningful Frobenius structure on $\mathrm{Chains}({\mathbb R})$ \cite{JF1}.

Our main result is an explicit construction in \S 3 of a (highly non-obvious) minimal resolution $\mathcal{H}\mathit{olieb}_{odd}^\diamond$ of the properad $\mathcal{L}\mathit{ieb}_{odd}^\diamond$ governing quantizable odd Lie bialgebras. One of the key tricks in \cite{CMW} used to solve an analogous problem for the properad $\mathcal{L}\mathit{ieb}^\diamond$ of even involutive Lie bialgebras reduced the ``hard'' problem of computing the cohomology of a certain dg properad to the ``easy'' computation of the minimal resolutions of a family of auxiliary {\em quadratic}\, algebras. Remarkably enough, this approach works for the constraint (\ref{1: quantizability condition}) as well, but it leads instead to a certain family of {\em cubic}\, homogeneous algebras which are studied in the Appendix.
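To give a preview of the shape of these cubic algebras (they reappear in the proof of the Main Theorem in \S 3, from which the following description is taken): up to a rescaling of generators, the relevant cubic elements are the iterated commutators
$$
[[x_i,x_{i+2}],\,x_{i+1}], \qquad 1\leq i\leq n-2,
$$
in a free associative algebra on generators $x_1,\ldots,x_n$.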
Another important technical ingredient in our construction of $\mathcal{H}\mathit{olieb}_{odd}^\diamond$ comes from the paper \cite{MW}, in which the cohomologies of the deformation complexes of the properads $\mathcal{L}\mathit{ieb}$, $\mathcal{L}\mathit{ieb}_{odd}$ and $\mathcal{L}\mathit{ieb}^\diamond$ have been computed; it was proven there, in particular, that the properad $\mathcal{L}\mathit{ieb}_{odd}$ admits precisely one non-trivial deformation, and it is that unique non-trivial deformation which leads us to the dg properad $\mathcal{H}\mathit{olieb}_{odd}^\diamond$. We explain this link in \S 3.

There are plenty of examples of $\mathcal{H}\mathit{olieb}_{odd}^\diamond$ algebra structures associated with ordinary (or formal power series) Poisson structures $\pi\in {\mathcal T}_{poly}({\mathbb R}^d)$ on ${\mathbb R}^d$, $d\geq 2$, which vanish at $0\in {\mathbb R}^d$; for a generic Poisson structure $\pi$ on ${\mathbb R}^d$ the associated $\mathcal{H}\mathit{olieb}_{odd}^\diamond$ algebra structure $\pi^\diamond$,
$$
\pi^\diamond: \mathcal{H}\mathit{olieb}_{odd}^\diamond \longrightarrow {\mathcal E}nd_{{\mathbb R}^d},
$$
in ${\mathbb R}^d$ is highly non-trivial and can be given explicitly only by transcendental formulae (i.e.\ the ones involving converging integrals over suitable configuration spaces \cite{MW2}). Such a $\mathcal{H}\mathit{olieb}_{odd}^\diamond$ algebra structure $\pi^\diamond$ in ${\mathbb R}^d$ can also be interpreted as a formal power series bivector field $\pi^\diamond\in {\mathcal T}_{poly}({\mathbb R}^d)[[\hbar]]$,
$$
\pi^\diamond = \pi_0 + \hbar \pi_1 + \hbar^2 \pi_2 +\ldots,
$$
satisfying a certain formal power series equation,
$$
\frac{1}{2}[\pi^\diamond,\pi^\diamond]_2 + \frac{\hbar}{4!} [\pi^\diamond,\pi^\diamond,\pi^\diamond,\pi^\diamond]_4 + \ldots =0.
$$
Here the collection of operators,
$$
\left\{[\, \ , \ldots ,\ ]_{2n}: {\mathcal T}_{poly}({\mathbb R}^d)^{\otimes 2n}\rightarrow {\mathcal T}_{poly}({\mathbb R}^d)[3-4n] \right\}_{n\geq 1},
$$
defines a so-called {\em Kontsevich-Shoikhet} ${\mathcal L}ie_\infty$ structure \cite{Sh} on ${\mathcal T}_{poly}({\mathbb R}^d)$, with $[\ ,\ ]_2$ being the standard Schouten bracket. This huge class $\{\pi^\diamond\}$ of highly non-trivial examples of $\mathcal{H}\mathit{olieb}_{odd}^\diamond$ algebra structures in ${\mathbb R}^d$ depends in general on the choice of a Drinfeld associator \cite{MW2}; it was a major motivation for our study of the homotopy theory of odd Lie bialgebras.

\subsection{Some notation} The set $\{1,2, \ldots, n\}$ is abbreviated to $[n]$; its group of automorphisms is denoted by ${\mathbb S}_n$; the trivial one-dimensional representation of ${\mathbb S}_n$ is denoted by ${\mbox{1 \hskip -8pt 1}}_n$, while its one-dimensional sign representation is denoted by ${\mathit s \mathit g\mathit n}_n$. The cardinality of a finite set $A$ is denoted by $\# A$. We work throughout in the category of ${\mathbb Z}$-graded vector spaces over a field ${\mathbb K}$ of characteristic zero. If $V=\oplus_{i\in {\mathbb Z}} V^i$ is a graded vector space, then $V[k]$ stands for the graded vector space with $V[k]^i:=V^{i+k}$.
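As a quick illustration of this shift convention (a restatement of the definition only): if $V$ is concentrated in degree $0$, then $V[k]$ is concentrated in degree $-k$; in particular, a one-dimensional representation placed in degree zero and shifted by $[m-2]$ sits in degree $2-m$,
$$
\left({\mathbb K}[m-2]\right)^{2-m}={\mathbb K},
$$
which is the degree assignment used for the $(m,n)$-corollas generating the minimal resolutions below.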
For a prop(erad) ${\mathcal P}$ we denote by ${\mathcal P}\{k\}$ a prop(erad) which is uniquely defined by the following property: for any graded vector space $V$ a representation of ${\mathcal P}\{k\}$ in $V$ is identical to a representation of ${\mathcal P}$ in $V[k]$. \section{\bf Quantizable odd Lie bialgebras} } \subsection{Odd lie bialgebras} By definition \cite{Me1}, the properad, ${\caL \mathit{ieb}_{odd}^\diamond}$, of odd Lie bialgebras is a quadratic properad given as the quotient, $$ {\caL \mathit{ieb}_{odd}^\diamond}:={\mathcal F} ree\lambdangle E{\bar{a}}ngle/\lambdangle{\mathcal R} {\bar{a}}ngle, $$ of the free properad generated by an ${\mathbb S}$-bimodule $E=\{E(m,n)\}_{m,n\geq 1}$ with all $E(m,n)=0$ except $$ E(2,1):= {\mathit s \mathit g\mathit n}_2\otimes {\mbox{1 \hskip -8pt 1}}_1=\mbox{span}\left\lambdangle \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{}, \end{xy} =- \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^1}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^2}**@{}, \end{xy} \bar{i}ght{\bar{a}}ngle $$ $$ E(1,2):= {\mbox{1 \hskip -8pt 1}}_1\otimes {\mbox{1 \hskip -8pt 1}}_2[-1]=\mbox{span}\left\lambdangle \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}= \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^1}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^2}**@{}, \end{xy} \bar{i}ght{\bar{a}}ngle $$ modulo the ideal generated by the following relations \betagin{equation}\lambdabel{R for LieB} {\mathcal R}:\left\{ \betagin{array}{c} \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^2}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^1}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^3}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, 
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^1}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^3}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^2}**@{}, \end{xy}\ =\ 0, \\ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{}, <-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^2}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^1}**@{}, <-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^3}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^1}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^3}**@{}, <-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^2}**@{}, \end{xy}\ =\ 0, \\ \betagin{xy} <0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-}, <-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-}, <0mm,3mm>*{\circ};<0mm,3mm>*{}**@{}, <0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-}, <0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{}, <-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{}, <0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{}, <0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy} \ - \ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{}, \end{xy} \ -\ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{}, 
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{}, \end{xy}\ =\ 0. \end{array} \bar{i}ght. \end{equation} A minimal resolution ${\caH \mathit{olieb}_{odd}}$ of ${\caL \mathit{ieb}_{odd}}$ was constructed in \cite{Me1,Me2}. It is a free properad, $$ {\caH \mathit{olieb}_{odd}}={\mathcal F} ree \lambdangle\hat{E}{\bar{a}}ngle $$ generated by an ${\mathbb S}$--bimodule $\hat{E}=\{ \hat{E}(m,n)\}_{m,n\geq 1, m+n\geq 3}$, $$ \hat{E}(m,n):= sgn_m\otimes {\mbox{1 \hskip -8pt 1}}_n[m-2]=\mbox{span}\left\lambdangle \resizebox{14mm}{!}{\betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{}, \end{xy}} \bar{i}ght{\bar{a}}ngle, $$ and comes equipped with the differential $$ \delta \resizebox{14mm}{!}{\betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{}, \end{xy}} \ \ = \ \ \sum_{[1,\ldots,m]=I_1\sqcup I_2\atop {|I_1|\geq 0, |I_2|\geq 1}} \sum_{[1,\ldots,n]=J_1\sqcup J_2\atop {|J_1|\geq 1, |J_2|\geq 1} }\hspace{0mm} {\partial}m\ \resizebox{24mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<12.4mm,4.8mm>*{}**@{-}, <0mm,0mm>*{};<-2mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<-2mm,9mm>*{^{I_1}}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, 
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<0mm,-10.6mm>*{_{J_1}}**@{}, <13mm,5mm>*{};<13mm,5mm>*{\circ}**@{}, <12.6mm,5.44mm>*{};<5mm,10mm>*{}**@{-}, <12.6mm,5.7mm>*{};<8.5mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,10mm>*{\ldots}**@{}, <13.4mm,5.7mm>*{};<16.5mm,10mm>*{}**@{-}, <13.6mm,5.44mm>*{};<20mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,12mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<13mm,14mm>*{^{I_2}}**@{}, <12.4mm,4.3mm>*{};<8mm,0mm>*{}**@{-}, <12.6mm,4.3mm>*{};<12mm,0mm>*{\ldots}**@{}, <13.4mm,4.5mm>*{};<16.5mm,0mm>*{}**@{-}, <13.6mm,4.8mm>*{};<20mm,0mm>*{}**@{-}, <13mm,5mm>*{};<14.3mm,-2mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<14.3mm,-4.5mm>*{_{J_2}}**@{}, \end{xy}} $$ It was shown in \cite{Me1,Me2} that representations ${\caH \mathit{olieb}_{odd}}\bar{i}ghtarrow {\mathcal E} nd_V$ of the minimal resolution of ${\caL \mathit{ieb}_{odd}}$ in a graded vector space $V$ are in 1-1 correspondence with formal graded Poisson structures ${\partial}i\in {\mathcal T}_{poly}^{\geq 1}(V^*)$ on the dual vector space $V^*$ (viewed as a linear manifold) which vanish at the zero point in $V$, ${\partial}i|_0=0$. \subsection{\bf Quantizable odd Lie bialgebras} We define the properad ${\caL \mathit{ieb}_{odd}^\diamond}$ of {\em quantizable}\, odd Lie bialgebras as the quotient of the properad ${\caL \mathit{ieb}_{odd}}$ by the ideal generated by the following element \betagin{equation}\lambdabel{equ:obstr_element} \betagin{array}{c} \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\circ}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\circ}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, \end{xy} \end{array}\ \in {\caL \mathit{ieb}_{odd}}. \end{equation} The associated relation on Lie and coLie brackets looks like a higher genus odd analogue of the involutivity condition (\ref{1: involutivity condition}) in the case of even Lie bialgebras. However, we prefer to use the adjective {\em quantizable}\, rather than {\em involutive} for odd Lie bialgebras satisfying (\ref{1: quantizability condition}) because that condition has a clear interpretation within the framework of the theory of deformation quantization, and its quantizability property becomes even more clear when one raises it to the level of representations of its minimal resolution ${\caH \mathit{olieb}_{odd}^\diamond}$. An odd Lie bialgebra structure in a vector space $V$ can be understood as a pair $$ \left(\xi\in {\mathcal T}_{V^*}, {\mathbb P}hi\in \wedge^2 {\mathcal T}_{V^*}\bar{i}ght) $$ consisting of a degree 1 quadratic vector field $\xi$ (corresponding to the Lie cobracket $\Delta$ in $V$) and a linear Poisson structure ${\mathbb P}hi$ in $V^*$ (corresponding to the Lie bracket $[\ ,\ ]$ in $V$). 
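In coordinates the dictionary reads as follows (schematically, with all signs and combinatorial prefactors suppressed since we are in the graded setting): if $\{e_a\}$ is a basis of $V$, regarded as linear coordinate functions $x_a$ on $V^*$, and $[e_a,e_b]=\sum_c C_{ab}^c\, e_c$, $\Delta(e_c)=\sum_{a,b} Q_c^{ab}\, e_a\otimes e_b$, then
$$
\Phi \,\sim\, \sum_{a,b,c} C_{ab}^{c}\, x_c\, \frac{\partial}{\partial x_a}\wedge\frac{\partial}{\partial x_b},
\qquad\qquad
\xi \,\sim\, \sum_{a,b,c} Q_c^{ab}\, x_a x_b\, \frac{\partial}{\partial x_c},
$$
so that $\Phi$ is indeed linear and $\xi$ quadratic in the coordinates, as stated above.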
All the (compatibility) equations for the algebraic operations $[\ ,\ ]$ and $\Delta$ get encoded into a single equation,
$$
\{\xi + \Phi, \xi + \Phi\}=0,
$$
where $\{\ ,\ \}$ stands for the standard Schouten bracket on the algebra ${\mathcal T}_{poly}(V^*)$ of polyvector fields on $V^*$ (viewed as an affine manifold). Therefore the sum $\xi + \Phi$ gives us a graded Poisson structure on $V^*$, and one can talk about its deformation quantization, that is, about an associated Maurer-Cartan element $\Gamma$ (deforming $\xi + \Phi$) in the Hochschild dg Lie algebra
$$
C^\bullet({\mathcal O}_V,{\mathcal O}_V):= \bigoplus_{n\geq 0} \mathrm{Hom}({\mathcal O}_V^{\otimes n}, {\mathcal O}_V),
$$
where $\mathrm{Hom}({\mathcal O}_V^{\otimes n}, {\mathcal O}_V)$ stands for the vector space of polydifferential operators on the graded commutative algebra ${\mathcal O}_V:=\odot^\bullet V$ of polynomial functions on $V^*$. As the graded Poisson structure $\xi+\Phi$ is non-negatively graded, its deformation quantization must satisfy the condition
$$
\Gamma\in \mathrm{Hom}({\mathbb K}, {\mathcal O}_V) \oplus \mathrm{Hom}({\mathcal O}_V,{\mathcal O}_V) \oplus \mathrm{Hom}({\mathcal O}_V^{\otimes 2},{\mathcal O}_V),
$$
with the corresponding splitting of $\Gamma$ into a sum of three terms,
$$
\Gamma=\Gamma_0 + \Gamma_1 + \Gamma_2,
$$
of degrees $2$, $1$ and $0$ respectively. The term $\Gamma_2=\Gamma_2(\Phi)$ has degree zero and hence can depend (universally) only on the Lie bracket $\Phi$. It makes $({\mathcal O}_V, \star:=\Gamma_2)$ into an associative non-commutative algebra, and, up to gauge equivalence, the algebra $({\mathcal O}_V, \star)$ can always be identified with the universal enveloping algebra of the Lie algebra $(V,[ \ ,\ ])$. The operation $\Gamma_1$ is a deformation of the differential on ${\mathcal O}_V$ induced by $\xi$. (Note that this undeformed differential squares to zero by the second Jacobi identity in the list (\ref{R for LieB}).) The Maurer-Cartan equation for $\Gamma$ states that $\Gamma_1$ is a derivation with respect to the star product $\Gamma_2$, and that it squares to zero modulo the star-product commutator with the (potential) obstruction term $\Gamma_0$. One can now ask whether it is possible to find $\Gamma$ as above such that the obstruction (sometimes called the ``curvature'') term $\Gamma_0$ vanishes, and hence $\Gamma_1^2=0$, so that $\Gamma_1$ is an honest differential. Since the algebra $({\mathcal O}_V, \star)$ is generated by $V$, any derivation with respect to the product $\star$ is uniquely determined by its values on $V$. Let $\xi_\star$ be the unique derivation of the associative algebra $({\mathcal O}_V, \star)$ that agrees with $\xi$ on $V$. This derivation is well defined since it annihilates the defining relations of the universal enveloping algebra by the third relation in (\ref{R for LieB}); the component equations just described are summarized schematically below.
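Schematically, and with all signs suppressed, these component equations read, for $f,g\in {\mathcal O}_V$,
$$
\Gamma_1(f\star g) \,=\, \Gamma_1(f)\star g \,\pm\, f\star \Gamma_1(g),
\qquad\qquad
\Gamma_1^2(f) \,=\, \pm\,\big(\Gamma_0\star f - f\star \Gamma_0\big),
$$
so that, in particular, $\Gamma_1$ is an honest differential whenever the curvature term $\Gamma_0$ vanishes.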
Then the derivation $\xi_\star$ satisfies $\xi_\star^2 = \frac 1 2 [\xi_\star,\xi_\star]=0$ if and only if $\xi \simeq \begin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}$ and $\Phi\simeq \begin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}$ satisfy the extra compatibility condition (\ref{1: quantizability condition}) (see \cite{Me3}). Therefore, if $\xi$ and $\Phi$ come from a representation of $\mathcal{L}\mathit{ieb}_{odd}^\diamond$ in $V$, then they admit a very simple deformation quantization of the form
$$
\Gamma= \xi_\star + \Gamma_2(\Phi),
$$
and this quantization makes sense even in the case when $V$ is {\em infinite}-dimensional. If $\xi$ and $\Phi$ do not satisfy the extra compatibility condition (\ref{1: quantizability condition}), then their deformation quantization is possible only in {\em finite}\, dimensions, and it involves a non-zero ``curvature'' term $\Gamma_0$ which in turn involves graphs with closed paths of directed edges and is given explicitly in \cite{Me3} (in fact this argument proves the non-existence of Kontsevich formality maps for {\em infinite}\, dimensional manifolds). These considerations motivate our terminology ``quantizable Lie bialgebras'' for odd Lie bialgebras satisfying the condition (\ref{1: quantizability condition}).

We shall construct below a minimal resolution $\mathcal{H}\mathit{olieb}_{odd}^\diamond$ of the properad $\mathcal{L}\mathit{ieb}_{odd}^\diamond$; its representations in a graded vector space $V$ give us so-called {\em quantizable}\, Poisson structures on $V$, which can be deformation quantized via a trivial (i.e.\ without using Drinfeld associators) perturbation even if $\dim V=\infty$ (see \cite{Wi2,B}); in finite dimensions there is a 1-1 correspondence between ordinary Poisson structures on $V$ and quantizable ones, but this correspondence is highly non-trivial --- it depends on the choice of a Drinfeld associator \cite{MW2}.

The properad Koszul dual to the properad $\mathcal{L}\mathit{ieb}_{odd}$ is the properad of odd Frobenius algebras (cf.\ \cite{V}). A remarkable ``Koszul dual'' meaning of the graph (\ref{1: quantizability condition}) was found by Theo Johnson-Freyd in \cite{JF1} --- it controls the obstruction to the existence of a geometrically meaningful homotopy odd Frobenius structure on the complex $\mathrm{Chains}_\bullet({\mathbb R})$.

{\Large
\section{\bf A minimal resolution of $\mathcal{L}\mathit{ieb}_{odd}^\diamond$}
}

\subsection{Oriented graph complexes and a Kontsevich-Shoikhet MC element} Let $G_{n,l}^{or}$ be the set of connected graphs $\Gamma$ with $n$ vertices and $l$ directed edges such that (i) $\Gamma$ has no {\em closed}\, directed paths of edges, and (ii) some bijection from the set of edges $E(\Gamma)$ to the set $[l]$ is fixed. There is a natural right action of the group ${\mathbb S}_l$ on the set $G^{or}_{n,l}$ by relabeling the edges.
Consider a graded vector space $$ \mathsf{fGC}_2^{or}:= {\partial}rod_{n\geq 1, l\geq 0} {\mathbb K} \lambdangle G_{n,l}^{or}{\bar{a}}ngle \otimes_{{\mathbb S}_l} {\mathit s \mathit g\mathit n}_l [l +2(1-n)] $$ It was shown in \cite{Wi2} that this vector space comes equipped with a Lie bracket $[\ ,\ ]$ (given, as often in the theory of graph complexes, by substituting graphs into vertices of another graphs), and that the degree $+1$ graph $$\xy (0,0)*{\bulletllet}="a", (6,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy \in \mathsf{fGC}_2^{or} $$ is a Maurer-Cartan element making $\mathsf{fGC}_2^{or}$ into a {\em differential}\, Lie algebra with the differential given by $$ \delta:=[\xy (0,0)*{\bulletllet}="a", (6,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy, \ ]. $$ It was proven in \cite{Wi2} that the cohomology group $H^1(\mathsf{fGC}_2^{or})$ is one-dimensional and is spanned by the following graph $$ \Upsilon_4:=\betagin{array}{c}\xy (0,0)*{\bulletllet}="1", (-7,16)*{\bulletllet}="2", (7,16)*{\bulletllet}="3", (0,10)*{\bulletllet}="4", \ar @{<-} "2";"4" <0pt> \ar @{<-} "3";"4" <0pt> \ar @{<-} "4";"1" <0pt> \ar @{<-} "2";"1" <0pt> \ar @{<-} "3";"1" <0pt> \endxy\end{array} + 2 \betagin{array}{c}\xy (0,0)*{\bulletllet}="1", (-6,6)*{\bulletllet}="2", (6,10)*{\bulletllet}="3", (0,16)*{\bulletllet}="4", \ar @{<-} "4";"3" <0pt> \ar @{<-} "4";"2" <0pt> \ar @{<-} "3";"2" <0pt> \ar @{<-} "2";"1" <0pt> \ar @{<-} "3";"1" <0pt> \endxy\end{array} + \betagin{array}{c}\xy (0,16)*{\bulletllet}="1", (-7,0)*{\bulletllet}="2", (7,0)*{\bulletllet}="3", (0,6)*{\bulletllet}="4", \ar @{->} "2";"4" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"1" <0pt> \ar @{->} "2";"1" <0pt> \ar @{->} "3";"1" <0pt> \endxy\end{array}. $$ while the cohomology group $H^2(\mathsf{fGC}_2^{or},\delta_0)$ is also one-dimensional and is generated by a linear combination of graphs with four vertices (whose explicit form plays no role in this paper). This means that one can construct by induction a new Maurer-Cartan element (the integer subscript stand for the number of vertices) $$ \Upsilon_{KS}= \xy (0,0)*{\bulletllet}="a", (6,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy + \Upsilon_4 + \Upsilon_6 + \Upsilon_8 + \ldots $$ in the Lie algebra $\mathsf{fGC}_2^{or}$. Indeed, the Lie brackets in $\mathsf{fGC}_2^{or}$ has the property that a commutator $[A,B]$ of a graph $A$ with $p$ vertices and a graph $B$ with $q$ vertices has $p+q-1$ vertices. Therefore, all the obstructions to extending the sum $ \xy (0,0)*{\bulletllet}="a", (6,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy + \Upsilon_4 $ to a Maurer-Cartan element have $7$ or more vertices and hence do not hit the unique cohomology class in $H^2(\mathsf{fGC}_2^{or},\delta)$. Up to gauge equivalence, this new MC element $\Upsilon_{KS}$ is the {\em only}\, non-trivial deformation of the standard MC element $\xy (0,0)*{\bulletllet}="a", (6,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy$. We call it the {\em Kontsevich-Shoikhet}\, element as it was introduced (via a different line of thought) by Boris Shoikhet in \cite{Sh} with a reference to an important contribution by Maxim Kontsevich via an informal communication. \subsubsection{\bf A formal power series extension of $\mathsf{fGC}_2^{or}$} Let $\hbar$ be a formal parameter of degree $0$ and let $\mathsf{fGC}_2^{or}[[\hbar]]$ be a topological vector space of formal power series in $\hbar$ with coefficients in $\mathsf{fGC}_2^{or}$. 
This is naturally a topological Lie algebra in which the formal power series $$ \Upsilon_{KS}^\hbar= \xy (0,0)*{\bulletllet}="a", (6,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy + \hbar\Upsilon_4 + + \hbar^2 \Upsilon_6 + \hbar^3 \Upsilon_8 + \ldots $$ is a Maurer-Cartan element. \subsection{From the Kontsevich-Shoikhet element to a minimal resolution of ${\caL \mathit{ieb}_{odd}^\diamond}$} Consider a (non-differential) free properad ${\caH \mathit{olieb}_{odd}^\diamond}$ generated by the following (skewsymmetric in outputs and symmetric in inputs) corollas of degree $2-m$, \betagin{equation}\lambdabel{2: generating corollas of LoB infty} \resizebox{15mm}{!}{ \xy (-9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,-6)*{\ldots}; (-10,-8)*{_1}; (-6,-8)*{_2}; (10,-8)*{_n}; (-9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,6)*{\ldots}; (-10,8)*{_1}; (-6,8)*{_2}; (10,8)*{_m}; \endxy} =(-1)^{\sigma} \resizebox{18mm}{!}{ \xy (-9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,-6)*{\ldots}; (-12,-8)*{_{{\mathbf a}u(1)}}; (-6,-8)*{_{{\mathbf a}u(2)}}; (12,-8)*{_{{\mathbf a}u(n)}}; (-9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,6)*{\ldots}; (-12,8)*{_{\sigma(1)}}; (-6,8)*{_{\sigma(2)}}; (12,8)*{_{\sigma(m)}}; \endxy}\ \ \ {\mathcal O}orall \sigma\in {\mathbb S}_m, {\mathcal O}orall {\mathbf a}u\in {\mathbb S}_n, \end{equation} where $m+n+ a\geq 3$, $m\geq 1$, $n\geq 1$, $a\geq 0$. Let $\widehat{\mathcal{H}\mathit{olieb}}_{odd}^{\Ba{c} _{\hspace{-2mm}\diamond} \Ea}$ be the genus completion of ${\caH \mathit{olieb}_{odd}^\diamond}$. 
\subsubsection{\bf Lemma} {\em The Lie algebra $\mathsf{fGC}_2^{or}[[\hbar]]$ acts (from the right) on the properad $\widehat{\mathcal{H}\mathit{olieb}}_{odd}^{\Ba{c} _{\hspace{-2mm}\diamond} \Ea}$ by continuous derivations, that is, there is a morphism of Lie algebras $$ \betagin{array}{rccc} F: & \mathsf{fGC}_2^{or}[[\hbar]] & \longrightarrow & \mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{odd}^{\Ba{c} _{\hspace{-2mm}\diamond} \Ea})\\ & \hbar^k \Gamma &\longrightarrow & F(\hbar^k \Gamma) \end{array} $$ where the derivation $F(\hbar^k \Gamma)$ is given on the generators as follows $$ F(\hbar^k \Gamma) \cdot \resizebox{15mm}{!}{ \xy (-9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,-6)*{\ldots}; (-10,-8)*{_1}; (-6,-8)*{_2}; (10,-8)*{_n}; (-9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,6)*{\ldots}; (-10,8)*{_1}; (-6,8)*{_2}; (10,8)*{_m}; \endxy} := \left\{\betagin{array}{ll} \displaystyle \sum_{m,n} \sum_{a=a_1+\ldots+ a_{\#V(\Gamma)} +k } \overbrace{ \underbrace{\betagin{array}{c}\resizebox{10mm}{!} { \xy (0,4.5)*+{...}, (0,-4.5)*+{...}, (0,0)*+{\Gamma}="o", (-5,6)*{}="1", (-3,6)*{}="2", (3,6)*{}="3", (5,6)*{}="4", (-3,-6)*{}="5", (3,-6)*{}="6", (5,-6)*{}="7", (-5,-6)*{}="8", \ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} }_{n\times} }^{m\times} & \ \ \ \ \ \ {\mathcal O}orall\ \Gamma\in \mathsf{fGC}_2^{or}, \ \ \ \ {\mathcal O}orall\ k\in [0,1,\ldots, a]\\ 0 & \ \ \ \ \ \ {\mathcal O}orall\ \Gamma\in \mathsf{fGC}_2^{or},\ \ \ \ {\mathcal O}orall\ k>a, \end{array} \bar{i}ght. $$ where the first sum is taking over to attach $m$ output legs and $n$ input legs to the vertices of the graph $\Gamma$, and the second sum is taken over all possible ways to decorate the vertices of $\Gamma$ with non-negative integers $a_1,\ldots,a_{\# V(\Gamma)}$ such they sum to $a-k$. } { Proof} is identical to the proofs of similar statements (Theorems 1.2.1 and 1.2.2) in \cite{MW}. \subsubsection{\bf Corollary} {\em The completed free properad $\widehat{\mathcal{H}\mathit{olieb}}_{odd}^{\Ba{c} _{\hspace{-2mm}\diamond} \Ea}$ comes equipped with a differential $\delta_\diamond:=F(\Upsilon_{KS}^\hbar)$. The differential $\delta$ restricts to a differential in the free properad ${\caH \mathit{olieb}_{odd}^\diamond}$}. \betagin{proof} When applied to any generator of $\widehat{\mathcal{H}\mathit{olieb}}_{odd}^{\Ba{c} _{\hspace{-2mm}\diamond} \Ea}$ the differential $\delta$ gives always a {\em finite}\, sum of graphs. It follows that it is well defined in ${\caH \mathit{olieb}_{odd}^\diamond}$ as well. 
\end{proof} There is an injection of dg free properads $$ ({\caH \mathit{olieb}_{odd}}, \delta) \longrightarrow ({\caH \mathit{olieb}_{odd}^\diamond}, \delta_\diamond) $$ given on generators by $$ \resizebox{14mm}{!}{\betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{}, \end{xy}} \longrightarrow \resizebox{15mm}{!}{ \xy (-9,-6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (-5,-6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (9,-6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (5,-6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (0,-6)*{\ldots}; (-10,-8)*{_1}; (-6,-8)*{_2}; (10,-8)*{_n}; (-9,6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (-5,6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (9,6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (5,6)*{}; (0,0)*+{0}*\cir{} **\dir{-}; (0,6)*{\ldots}; (-10,8)*{_1}; (-6,8)*{_2}; (10,8)*{_m}; \endxy} $$ Identifying from now on weight zero generators of ${\caH \mathit{olieb}_{odd}^\diamond}$ with generators of ${\caH \mathit{olieb}_{odd}}$, we may write $$ \delta_\diamond \xy (0,5)*{}; (0,0)*+{_1}*\cir{} **\dir{-}; (0,-5)*{}; (0,0)*+{_1}*\cir{} **\dir{-}; \endxy =\betagin{array}{c} \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\circ}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\circ}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, \end{xy} \end{array} $$ and hence conclude that there is a natural morphism of dg properads $$ {\partial}i: ({\caH \mathit{olieb}_{odd}^\diamond},\delta_\diamond) \longrightarrow ({\caL \mathit{ieb}_{odd}^\diamond},0) $$ Our main result in this paper is the following theorem. 
\subsection{\bf Main Theorem} {\em The map ${\partial}i$ is a quasi-isomorphism, i.e.\ ${\caH \mathit{olieb}_{odd}^\diamond}$ is a minimal resolution of ${\caL \mathit{ieb}_{odd}^\diamond}$.} \betagin{proof} Let ${\mathcal P}$ be a dg properad generated by a degree $1$ corollas $\betagin{xy} <0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-}, <0mm,0.5mm>*{};<0mm,3mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy}$\, and $\betagin{array}{c}\betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}= \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^1}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^2}**@{}, \end{xy}\end{array}$, and a degree zero corolla, $ \betagin{array}{c} \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{}, \end{xy} =- \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^1}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^2}**@{}, \end{xy}\end{array} $ and modulo relations \betagin{equation}\lambdabel{2: relation in P} \xy (0,-1.9)*{\bullet}="0", (0,1.9)*{\bullet}="1", (0,-5)*{}="d", (0,5)*{}="u", \ar @{-} "0";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "1";"d" <0pt> \endxy=0, \ \ \ \ \xy (0,-1.9)*{\circ}="0", (0,1.9)*{\bullet}="1", (-2.5,-5)*{}="d1", (2.5,-5)*{}="d2", (0,5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d1" <0pt> \ar @{-} "0";"d2" <0pt> \endxy + \xy (-2.5,-1.9)*{\bullet}="0", (0,1.9)*{\circ}="1", (-2.5,-1.9)*{}="d1", (2.5,-1.9)*{}="d2", (-2.5,-5)*{}="d", (0,5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy + \xy (2.5,-1.9)*{\bullet}="0", (0,1.9)*{\circ}="1", (-2.5,-1.9)*{}="d1", (2.5,-1.9)*{}="d2", (2.5,-5)*{}="d", (0,5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy =0\ , \ \ \ \ \xy (0,1.9)*{\circ}="0", (0,-1.9)*{\bullet}="1", (-2.5,5)*{}="d1", (2.5,5)*{}="d2", (0,-5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d1" <0pt> \ar @{-} "0";"d2" <0pt> \endxy - \xy (-2.5,1.9)*{\bullet}="0", (0,-1.9)*{\circ}="1", (-2.5,1.9)*{}="d1", (2.5,1.9)*{}="d2", (-2.5,5)*{}="d", (0,-5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy - \xy (2.5,1.9)*{\bullet}="0", (0,-1.9)*{\circ}="1", (-2.5,1.9)*{}="d1", (2.5,1.9)*{}="d2", (2.5,5)*{}="d", (0,-5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy =0. \end{equation} and the three relations in (\ref{R for LieB}). 
The differential in ${\mathcal P}$ is given on the generators by \betagin{equation}\lambdabel{2: differential in P} d\, \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}=0, \ \ \ \ d\, \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy} =0, \ \ \ \ d\, \betagin{xy} <0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-}, <0mm,0.5mm>*{};<0mm,3mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy}= \betagin{array}{c} \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\circ}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\circ}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, \end{xy} \end{array} \end{equation} {\sc Claim I}: {\em The surjective morphism of dg properads, \betagin{equation}\lambdabel{2: nu quasi-iso to P} \nu: {\caH \mathit{olieb}_{odd}^\diamond} \longrightarrow {\mathcal P}, \end{equation} which sends all generators to zero except for the following ones \betagin{equation}\lambdabel{2: map from P to P} \nu\left( \xy (-3,-4)*{}; (0,0)*+{_0}*\cir{} **\dir{-}; (3,-4)*{}; (0,0)*+{_0}*\cir{} **\dir{-}; (0,5)*{}; (0,0)*+{_0}*\cir{} **\dir{-}; \endxy\bar{i}ght) =\betagin{xy} <0mm,0.66mm>*{};<0mm,4mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-3.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-3.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\ ,\ \ \ \ \ \ \nu\left( \xy (-3,4)*{}; (0,0)*+{_0}*\cir{} **\dir{-}; (3,4)*{}; (0,0)*+{_0}*\cir{} **\dir{-}; (0,-5)*{}; (0,0)*+{_0}*\cir{} **\dir{-}; \endxy\bar{i}ght) =\betagin{xy} <0mm,-0.66mm>*{};<0mm,-4mm>*{}**@{-}, <0.4mm,0.4mm>*{};<2.2mm,3.2mm>*{}**@{-}, <-0.4mm,0.4mm>*{};<-2.2mm,3.2mm>*{}**@{-}, <0mm,-0.1mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\ \ , \ \ \nu\left( \xy (0,5)*{}; (0,0)*+{_1}*\cir{} **\dir{-}; (0,-5)*{}; (0,0)*+{_1}*\cir{} **\dir{-}; \endxy\bar{i}ght)=\betagin{xy} <0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-}, <0mm,0.5mm>*{};<0mm,3mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy} \end{equation} is a quasi-isomorphism.} The proof of this claim is identical to the proof of Theorem 2.7.1 in \cite{CMW} so that we can omit the details. The proof of the Main Theorem will be completed once we show the following {\sc Claim II:} {\em The natural map \betagin{equation}\lambdabel{3: Claim 2 map mu} \mu: ({\mathcal P}, d) \longrightarrow ({\caL \mathit{ieb}_{odd}^\diamond},0) \end{equation} is a quasi-isomorphism.} Let us define a new homological grading in the properad ${\mathcal P}$ by assigning to the generator $\betagin{xy} <0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-}, <0mm,0.5mm>*{};<0mm,3mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy}$ degree $-1$ and to the remaining generators the degree zero; to avoid confusion with the original grading let us call this new grading $s$-{\em grading}. Then Claim II is proven once we show that the cohomology $H({\mathcal P})$ of ${\mathcal P}$ is concentrated in $s$-degree zero. 
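For orientation, here is a sketch (only a rephrasing of the definitions and gradings just introduced) of why this reduction suffices. The only generator of ${\mathcal P}$ of non-zero $s$-degree is the $(1,1)$-corolla, so in $s$-degree zero ${\mathcal P}$ coincides with $\mathcal{L}\mathit{ieb}_{odd}$, while the differential (\ref{2: differential in P}) raises the $s$-degree by one and its image in $s$-degree zero is precisely the ideal generated by the element (\ref{equ:obstr_element}). Hence
$$
H^{s=0}({\mathcal P},d)\,\cong\, \mathcal{L}\mathit{ieb}_{odd}\big/\big\langle(\ref{equ:obstr_element})\big\rangle\,=\,\mathcal{L}\mathit{ieb}_{odd}^\diamond,
$$
and the map $\mu$ induces an isomorphism in $s$-degree zero; what remains is the vanishing of the cohomology in non-zero $s$-degrees.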
Consider a path filtration \cite{Ko,MaVo} of the dg properad ${\mathcal P}$. The associated graded $\mathrm{gr}{\mathcal P}$ can be identified with dg properad generated by the same corollas $\betagin{xy} <0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-}, <0mm,0.5mm>*{};<0mm,3mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy}$, $\betagin{array}{c}\betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}= \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^1}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^2}**@{}, \end{xy}\end{array}$ and $ \betagin{array}{c} \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{}, \end{xy} =- \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^1}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^2}**@{}, \end{xy}\end{array} $ which are subject to the relations (\ref{2: relation in P}), the first two relation in (\ref{R for LieB}) and the following one $$ \betagin{xy} <0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-}, <-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-}, <0mm,3mm>*{\circ};<0mm,3mm>*{}**@{}, <0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-}, <0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{}, <-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{}, <0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{}, <0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy}=0 $$ The differential in $\mathrm{gr}{\mathcal P}$ is given by the original formula (\ref{2: differential in P}). The {\sc Claim II} is proven once it is shown that the cohomology of the properad $\mathrm{gr}{\mathcal P}$ is concentrated in $s$-degree zero, or equivalently, the cohomology of dg prop ${\mathcal U}(\mathrm{gr}{\mathcal P})$ generated by this properad is concentrated in $s$-degree zero (as the universal enveloping functor ${\mathcal U}$ from the category of properads to the category of props is exact). 
Consider a free prop ${\mathcal F} ree\lambdangle E {\bar{a}}ngle$ generated by an ${\mathbb S}$-bimodule $E=\{E(m,n)\}$ with all $E(m,n)=0$ except the following ones, \betagin{equation}rn E(1,1) &=& {\mathbb K}[-1]=\mathrm{span}\left\lambdangle \betagin{xy} <0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-}, <0mm,0.5mm>*{};<0mm,3mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy} \bar{i}ght{\bar{a}}ngle \\ E(2,1) &=& sgn_2 =\mathrm{span}\left\lambdangle \betagin{array}{c} \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{}, \end{xy} =- \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^1}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^2}**@{}, \end{xy}\end{array}\bar{i}ght{\bar{a}}ngle\\ E(1,2) &=& {\mathbb K}[{\mathbb S}_2][-1] =\mathrm{span}\left\lambdangle \betagin{array}{c}\betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}\neq \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^1}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^2}**@{}, \end{xy}\end{array} \bar{i}ght{\bar{a}}ngle \end{equation}rn We assign to the above generators $s$-degrees $-1$, $0$ and $0$ respectively. 
Define next a dg prop ${\mathcal A}$ as the quotient of the above free prop ${\mathcal F} ree\lambdangle E{\bar{a}}ngle$ by the ideal generated by the relations $$ \xy (0,-1.9)*{\bullet}="0", (0,1.9)*{\bullet}="1", (0,-5)*{}="d", (0,5)*{}="u", \ar @{-} "0";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "1";"d" <0pt> \endxy=0, \ \ \ \ \xy (0,-1.9)*{\bullet}="0", (0,1.9)*{\bullet}="1", (-2.5,-5)*{}="d1", (2.5,-5)*{}="d2", (0,5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d1" <0pt> \ar @{-} "0";"d2" <0pt> \endxy + \xy (-2.5,-1.9)*{\bullet}="0", (0,1.9)*{\bullet}="1", (-2.5,-1.9)*{}="d1", (2.5,-1.9)*{}="d2", (-2.5,-5)*{}="d", (0,5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy + \xy (2.5,-1.9)*{\bullet}="0", (0,1.9)*{\bullet}="1", (-2.5,-1.9)*{}="d1", (2.5,-1.9)*{}="d2", (2.5,-5)*{}="d", (0,5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy =0\ , \ \ \ \ \xy (0,1.9)*{\circ}="0", (0,-1.9)*{\bullet}="1", (-2.5,5)*{}="d1", (2.5,5)*{}="d2", (0,-5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d1" <0pt> \ar @{-} "0";"d2" <0pt> \endxy - \xy (-2.5,1.9)*{\bullet}="0", (0,-1.9)*{\circ}="1", (-2.5,1.9)*{}="d1", (2.5,1.9)*{}="d2", (-2.5,5)*{}="d", (0,-5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy - \xy (2.5,1.9)*{\bullet}="0", (0,-1.9)*{\circ}="1", (-2.5,1.9)*{}="d1", (2.5,1.9)*{}="d2", (2.5,5)*{}="d", (0,-5)*{}="u", \ar @{-} "1";"u" <0pt> \ar @{-} "0";"1" <0pt> \ar @{-} "0";"d" <0pt> \ar @{-} "1";"d1" <0pt> \ar @{-} "1";"d2" <0pt> \endxy =0 $$ and $$ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^2}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^1}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^3}**@{}, \end{xy} \ + \ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^1}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^3}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^2}**@{}, \end{xy}\ =\ 0, \ \ \ \ \betagin{array}{c}\betagin{xy} <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{}, 
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{}, \end{xy}\end{array} \ + \ \betagin{array}{c}\betagin{xy} <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{}, <2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <2.8mm,-2.9mm>*{};<4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<-3mm,-4.0mm>*{^1}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-6.7mm>*{^2}**@{}, <-2.8mm,-2.9mm>*{};<5.2mm,-6.7mm>*{^3}**@{}, \end{xy}\end{array} =0, \ \ \ \ \ \ \betagin{xy} <0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-}, <-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-}, <0mm,3mm>*{\circ};<0mm,3mm>*{}**@{}, <0mm,-0.8mm>*{\bullet};<0mm,-0.8mm>*{}**@{}, <-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-}, <0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{}, <-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{}, <0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{}, <0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy}=0 $$ A differential in ${\mathcal A}$ is defined by $$ d\, \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy}=0, \ \ \ \ d\, \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy} =0, \ \ \ \ d\, \betagin{xy} <0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-}, <0mm,0.5mm>*{};<0mm,3mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy}= \betagin{array}{c} \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\ast}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\ast}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, \end{xy} \end{array} \ \ \mbox{where}\ \ \ \betagin{array}{c}\xy <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\ast};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \endxy \end{array}:= \betagin{array}{c}\xy <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \endxy \end{array} + \betagin{array}{c}\xy <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^1}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^2}**@{}, \endxy \end{array} $$ Note that the generator $\betagin{array}{c}\xy <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\ast};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \endxy \end{array}$ satisfies the 
second relation in (\ref{R for LieB}) so that we have a canonical injection of dg props $$ i: {\mathcal U}(\mathrm{gr}{\mathcal P}) \longrightarrow {\mathcal A} $$ It is easy to see that image of ${\mathcal U}(\mathrm{gr}{\mathcal P})$ under this injection is a {\em direct}\, summand in the complex $({\mathcal A}, d)$. Hence {\sc Claim 2} is proven once we show that the cohomology of the prop ${\mathcal A}$ is concentrated in $s$-degree zero. Using the associativity relation for the generator $ \betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, \end{xy}$ and the Jacobi relation for the generator $\betagin{array}{c} \betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\end{array}$ one obtains an equality \betagin{equation}rn \betagin{array}{c} \betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\ast}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\ast}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, \end{xy} \end{array} &=& \betagin{array}{c}\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\ast}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\bullet}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, \end{xy} \end{array} \ + \ \betagin{array}{c}\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\bullet}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\bullet}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, \end{xy} \end{array} \ - \ 2 \betagin{array}{c}\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <-0.4mm,0.0mm>*{};<-2.4mm,2.1mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<5mm,5mm>*{}**@{-}, <5mm,5mm>*{};<0mm,10mm>*{}**@{-}, <0mm,-0.8mm>*{\circ}; <-2.96mm,2.4mm>*{\circ}; <-2.4mm,2.8mm>*{};<-0.2mm,5mm>*{}**@{-}, <-3.35mm,2.9mm>*{};<-5.5mm,5mm>*{}**@{-}, <0mm,10.5mm>*{\bullet}**@{}, <-2.8mm,7.5mm>*{};<0mm,10.5mm>*{}**@{-}, <-2.96mm,7.5mm>*{\bullet}**@{}, <-0.2mm,5.0mm>*{};<-2.8mm,7.5mm>*{}**@{-}, <-0.2mm,5.1mm>*{};<-2.8mm,7.5mm>*{}**@{-}, <-5.5mm,5mm>*{};<-2.8mm,7.5mm>*{}**@{-}, <0mm,10.5mm>*{};<0mm,13.5mm>*{}**@{-}, \end{xy} \end{array} \\ &=& -3\ 
\betagin{array}{c}\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <-0.4mm,0.0mm>*{};<-2.4mm,2.1mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.5mm,2.5mm>*{}**@{-}, <2.5mm,7.5mm>*{};<0mm,10mm>*{}**@{-}, <0mm,-0.8mm>*{\circ}; <-2.96mm,2.4mm>*{\circ}; <-2.4mm,2.8mm>*{};<2.5mm,7.5mm>*{}**@{-}, <-3.35mm,2.9mm>*{};<-5.5mm,5mm>*{}**@{-}, <0mm,10.5mm>*{\bullet}**@{}, <-2.8mm,7.5mm>*{};<0mm,10.5mm>*{}**@{-}, <-2.96mm,7.5mm>*{\bullet}**@{}, <2.5mm,2.5mm>*{};<-2.8mm,7.5mm>*{}**@{-}, <-5.5mm,5mm>*{};<-2.8mm,7.5mm>*{}**@{-}, <0mm,10.5mm>*{};<0mm,13.5mm>*{}**@{-}, \end{xy} \end{array}= -3\ {\mathfrak r}ac{\betagin{array}{c}\xy <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{}, <-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{}, \endxy\end{array}}{\betagin{array}{c}\betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^2}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^3}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{}, \end{xy}\end{array}} \end{equation}rn where the horizontal line stands for the properadic composition in accordance with the labels shown. This result propmts us to consider a dg associative non-commutative algebra $\mathbf{A}_n$ generated by degree zero variable $\{x_i\}_{1\leq i\leq n}$ and degree $-1$ generators $\{u_{i,i+1,u+2}\}_{1\leq i\leq n-2}$ with the differential $$ d u_{i,i+1,i+2}= -3[[x_i,x_{i+2}],x_{i+1}] $$ or, equivalently (after rescaling the generators $u_\bullet$), with the differential $$ d u_{i,i+1,i+2}= [[x_i,x_{i+2}],x_{i+1}] $$ Arguing in exactly the same way as in \cite{CMW} one concludes that the cohomology of the dg operad operad ${\mathcal A}$ is concentrated in $s$-degree zero if and only if the collections of algebras $\mathbf{A}_n$, $n\geq 3$, has cohomology concentrated in ordinary degree zero. The latter fact is established in the appendix. The proof is completed. \end{proof} \subsection{Representations of ${\caH \mathit{olieb}_{odd}^\diamond}$ and quantizable Poisson structures} Let $V={\mathbb R}^d$ be a $d$-dimensional vector space, and ${\mathcal O}_V={\partial}rod_{n\geq 1} \odot^n V^*$ the commutative algebra of formal power series functions on $V$, with the obvious complete topology. If $\mathrm{Der}({\mathcal O}_V)$ stands for the Lie algebra of continuous derivations of ${\mathcal O}_V$, then the Lie algebra of formal polyvector fields on $V$ is defined as the Lie algebra of continuous multiderivations, $$ {\mathcal T}_{poly}(V):= \widehat{\odot^\bullet_{{\mathcal O}_V}}\left(\mathrm{Der}({\mathcal O}_V)[1]\bar{i}ght)\cong {\partial}rod_{m\geq 0} \wedge^m V\otimes {\mathcal O}_{V} \simeq {\partial}rod_{m,n\geq 0} \wedge^m V \otimes \odot^n V^* \, . $$ There is an obvious chain of injections of topological commutative algebras, $$ \ldots \longrightarrow {\mathcal O}_{{\mathbb R}^d} \longrightarrow {\mathcal O}_{{\mathbb R}^{d+1}} \longrightarrow {\mathcal O}_{{\mathbb R}^{d+2}} \longrightarrow \ldots. 
$$ We denote the associated {\em direct}\, limit by $$ {\mathcal O}_{{\mathbb R}^\infty} := \lim_{d\rightarrow \infty} {\mathcal O}_{{\mathbb R}^d}. $$ For $V={\mathbb R}^\infty$ we define ${\mathcal T}_{poly}(V)$ as the Lie algebra of continuous multiderivations of ${\mathcal O}_{{\mathbb R}^\infty}$, i.e., $$ {\mathcal T}_{poly}(V)\cong \prod_{m\geq 0} \mathrm{Hom}(\wedge^m {\mathbb R}^\infty, {\mathcal O}_{{\mathbb R}^\infty}) \, . $$ We can also consider the space ${\mathcal T}_{poly}(V)[[\hbar]]$ of formal power series in a formal variable $\hbar$ with coefficients in ${\mathcal T}_{poly}(V)$. These arguments can be easily generalized to a finite- or infinite-dimensional {\em graded}\, vector space $V$. Consider now a representation of our minimal resolution $$ \rho: {\caH \mathit{olieb}_{odd}^\diamond} \longrightarrow {\mathcal E}nd_V $$ in a (possibly infinite-dimensional) dg vector space $V$. It is uniquely determined by the values of $\rho$ on the generators of ${\caH \mathit{olieb}_{odd}^\diamond}$, $$ \rho\left( \resizebox{15mm}{!}{ \xy (-9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,-6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,-6)*{\ldots}; (-10,-8)*{_1}; (-6,-8)*{_2}; (10,-8)*{_n}; (-9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (-5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (9,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (5,6)*{}; (0,0)*+{a}*\cir{} **\dir{-}; (0,6)*{\ldots}; (-10,8)*{_1}; (-6,8)*{_2}; (10,8)*{_m}; \endxy}\right):=\pi^m_n(a)\in \wedge^m V \otimes \odot^n V^*. $$ We can assemble these values into a formal power series $$ \pi^\diamond := \sum_{m,n\geq 0}\sum_{a\geq 0}\hbar^a \pi^m_n(a)\in {\mathcal T}_{poly}(V)[[\hbar]] $$ which gives us a formal polyvector field on $V$ depending formally on the parameter $\hbar$. The values $\pi^m_n(a)$ cannot be chosen arbitrarily, as the map $\rho$ must respect the differentials in ${\caH \mathit{olieb}_{odd}^\diamond}$ and $V$, $$ \rho\circ \delta_\diamond= d\circ \rho. $$ Untwisting the definition of $\delta_\diamond$, we conclude that the above formal power series $\pi^\diamond$ (with $\pi^1_1(0):=d$) comes from a representation of ${\caH \mathit{olieb}_{odd}^\diamond}$ if and only if it satisfies the equation $$ \frac{1}{2}[\pi^\diamond,\pi^\diamond]_2 + \frac{\hbar}{4!} [\pi^\diamond,\pi^\diamond,\pi^\diamond,\pi^\diamond]_4 + \ldots =0, $$ where the collection of operators $$ \left\{[\, \ , \ldots ,\ ]_{2n}: {\mathcal T}_{poly}(V)^{\otimes 2n}\rightarrow {\mathcal T}_{poly}(V)[3-4n] \right\}_{n\geq 1} $$ comes from the values on the graphs $\Upsilon_{2n}$ from \S 3.1 of the standard morphism \cite{Wi1} of dg Lie algebras $$ \mathsf{fGC}or_2 \rightarrow CE^\bullet( {\mathcal T}_{poly}(V), {\mathcal T}_{poly}(V)), $$ $CE^\bullet( {\mathcal T}_{poly}(V), {\mathcal T}_{poly}(V))$ being the Chevalley-Eilenberg deformation complex of the Lie algebra of polyvector fields. Therefore formal {\em quantizable}\, Poisson structures on a graded vector space $V$ (viewed as a formal manifold) come from representations of our properad ${\caH \mathit{olieb}_{odd}^\diamond}$.
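Let us note, as a consistency check, that reducing the above equation modulo $\hbar$ kills all the higher operations (each of them enters, as the displayed powers of $\hbar$ indicate, with a strictly positive power of $\hbar$), so that the $\hbar$-independent part of $\pi^\diamond$ satisfies $$ \tfrac{1}{2}\big[\pi^\diamond|_{\hbar=0},\pi^\diamond|_{\hbar=0}\big]_2=0\,, $$ an equation of classical Schouten (Maurer-Cartan) type; this is consistent with the examples of quantizable Poisson structures discussed next.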
There are plenty of examples of such quantizable Poisson structures on {\em finite}-dimensional vector spaces, one for each ordinary formal graded Poisson structure $\pi$ on $V$ (which is, by definition, an element of $ {\mathcal T}_{poly}(V)$ which satisfies the standard Schouten equation $[\pi,\pi]_2=0$). However the association $$ \pi \longrightarrow \pi^\diamond $$ is highly non-trivial and depends on the choice of a Drinfeld associator \cite{MW2}. It is an open problem to find a non-trivial example of a quantizable Poisson structure in {\em infinite}\, dimensions. Perhaps, for any graded vector space $W$ equipped with an odd symplectic form, the associated total space of cyclic words $$ V:= \prod_{n\geq 1} (\otimes^n W)_{{\mathbb Z}_n} $$ comes equipped with such a structure given by the formulae from Theorem 4.3.3 in \cite{MW0}; however it is hard to check this conjecture by a direct computation as it involves infinitely many equations. \appendix \section{} \subsection{\bf Lemma}\label{lem::free::grob} {\em Let $\{c_\sigma\,|\, \sigma\in S_3\}$ be a collection of $6$ numbers such that \begin{equation} \begin{split} \label{eq::prop::lm} \text{for each pair $(c_{ijk},c_{ikj})$ with the same first index } i \\ \text{ at least one of these elements is different from zero. } \end{split} \end{equation} Then the associative algebra $ A:= \Bbbk \left\langle x_1,\ldots, x_n \left| \begin{array}{c} \sum_{\sigma\in S_3} c_\sigma x_{i+\sigma(1)} x_{i+\sigma(2)} x_{i+\sigma(3)}, \\ i = 0,\ldots, n-2 \end{array} \right. \right\rangle $ has global dimension $2$. } \begin{proof} Let us consider any linear ordering of the set of generators such that $$ \forall k,l,m \ \ x_{3k} > x_{3l+2}, x_{3m+1}\,. $$ We extend this ordering to a degree-lexicographical ordering of the set of monomials in the free associative algebra $\Bbbk\langle x_1,\ldots,x_n\rangle$. The leading monomials of the relations $\sum_{\sigma\in S_3} c_\sigma x_{i+\sigma(1)} x_{i+\sigma(2)} x_{i+\sigma(3)}$ are different for all $i$ because they contain different letters $\{x_{i+1},x_{i+2},x_{i+3}\}$. Moreover, there is exactly one number divisible by $3$ in each triple of consecutive integers, thus after reordering we have $\{i+1,i+2,i+3\}= \{3s,r,t\}$ for appropriate $r,s$ and $t$ such that $r$ and $t$ are not divisible by $3$. Recall that by property~\eqref{eq::prop::lm} at least one of the two monomials $c_{{3s} r t}x_{3s}x_r x_t$ and $c_{{3s} t r}x_{3s}x_t x_r$ is different from zero. Hence the first letter of the leading monomial of each relation $\sum_{\sigma\in S_3} c_\sigma x_{i+\sigma(1)} x_{i+\sigma(2)} x_{i+\sigma(3)}$ has index divisible by $3$, while the indices of the two remaining letters are not divisible by $3$. Consequently, the leading monomials of the generating relations have no compositions, and the set of generating relations forms a \emph{strongly free} Gr\"obner basis; it follows that the algebra $A$ has global dimension $2$. (See~\cite{ufn} \S4.3 for details on strongly free relations.) \end{proof} \subsection{\bf Corollary} {\em The minimal free resolution $\mathbf{ A}_{n}$ of the algebra $${A}_{n}:= \Bbbk\left\langle x_1,\ldots,x_n \left| \begin{array}{c} [[x_i,x_{i+2}],x_{i+1}] \\ i=1,\ldots,n-2 \end{array} \right.
\bar{i}ght{\bar{a}}ngle$$ is generated by $x_1,\ldots,x_n$ and $u_{1,2,3}\ldots,u_{n-2,n-1,n}$ such that } $$ \betagin{array}{ccc} \deg(x_i)=0, & & \deg(u_{i,i+1,i+2})=-1;\\ d(x_i)=0; & \quad & d(u_{i,i+1,i+2})= [[x_i,x_{i+2}],x_{i+1}]. \end{array} $$ \betagin{proof} Let us expand the commutators in the relations we are working with: $$ [[x_1,x_{3}],x_{2}] = (x_1 x_3 x_2 - x_3 x_1 x_2 - x_2 x_1 x_3 + x_2 x_3 x_1) $$ As we can see they satisfy the condition~\eqref{eq::prop::lm} of Lemma~{\ref{lem::free::grob}} and algebra $A_n$ has global dimension $2$, meaning that the following complex $$ 0\to \mathrm{span}\lambdangle{\mathbf e}xt{relations}{\bar{a}}ngle \otimesimes A_n \to \mathrm{span}\lambdangle{\mathbf e}xt{generators}{\bar{a}}ngle\otimesimes A_n \to A_n \to \Bbbk \to 0 $$ is acyclic in the leftmost term and, consequently, acyclic everywhere. Therefore, the minimal resolution of $A_n$ is generated by generators and generating relations of $A_n$. \end{proof} \def$'${$'$} \betagin{thebibliography}{} \bibitem[B]{B} T.\ Backman, PhD thesis under preparation. \bibitem[CMW]{CMW} R.\ Campos, S.\ Merkulov and T.\ Willwacher {\em The Frobenius properad is Koszul}, arXiv:1402.4048. To appear in Duke Math.\ J. \bibitem[C]{Ch} M.\ Chas, {\em Combinatorial Lie bialgebras of curves on surfaces}, Topology {\bf 43} (2004), no. 3, 543--568. \bibitem[CS]{ChSu} M.\ Chas and D.\ Sullivan, {\em Closed string operators in topology leading to Lie bialgebras and higher string algebra}, in: { The legacy of Niels Henrik Abel}, pp.\ 771--784, Springer, Berlin, 2004. \bibitem[CFL]{CFL} K. Cieliebak, K. Fukaya and J. Latschev, {\em Homological algebra related to surfaces with boundary}, preprint arxiv:1508.02741, 2015. \bibitem[D]{D1} V.\ Drinfeld, {\em Hamiltonian structures on Lie groups, Lie bialgebras and the geometric meaning of the classical Yang-Baxter equations}, Soviet Math. Dokl. {\bf 27} (1983) 68--71. \bibitem[ES]{ES} P.\ Etingof and O.\ Schiffmann, Lectures on Quantum Groups, International Press, 2002. \bibitem[JF]{JF1} T.\ Johnson-Freyd, {\em $\mathrm{Chains}({\mathbb R})$ does not admit a geometrically meaningful properadic homotopy Frobenius algebra structure}, preprint arXiv:1308.3423 (2013). \bibitem[KM]{KM} M.\ Kapranov and Yu.I.\ Manin, {\em Modules and Morita theorem for operads}. Amer. J. Math. {\bf 123} (2001), no. 5, 811-838. \bibitem[Ko]{Ko} M.\ Kontsevich, a letter to Martin Markl, November 2002. \bibitem[MaVo]{MaVo} M.\ Markl and A.A.\ Voronov, {\em PROPped up graph cohomology}, in: Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Vol. II, Progr. Math., 270, Birkh\"auser Boston, Inc., Boston, MA (2009) pp. 249--281. \bibitem[M1]{Me1} S.A.\ Merkulov, {\em Prop profile of Poisson geometry}, Commun.\ Math.\ Phys. {\bf 262} (2006), 117-135. \bibitem[M2]{Me2} S.A.\ Merkulov, {\em Graph complexes with loops and wheels}, in: ``Algebra, Arithmetic and Geometry - Manin Festschrift" (eds. Yu.\ Tschinkel and Yu.\ Zarhin), Progress in Mathematics, Birkha\"user (2010), pp. 311-354. \bibitem[M3]{Me3} S.A.\ Merkulov, {\em Lectures on props, Poisson geometry and deformation quantization}, in {Poisson Geometry in Mathematics and Physics}, Contemporary Mathematics, 450 (eds.\ G.\ Dito, J.H.\ Lu, Y.\ Maeda and A.\ Weinstein), AMS (2009), 223-257. \bibitem[MV]{MV} S.A.\ Merkulov and B.\ Vallette, {\em Deformation theory of representations of prop(erad)s I \& II}, Journal f\"ur die reine und angewandte Mathematik (Crelle) {\bf 634}, 51--106, \& {\bf 636}, 123--174 (2009). \bibitem[MW1]{MW0} S.A. 
Merkulov and T.\ Willwacher, {\em Props of ribbon graphs, involutive Lie bialgebras and moduli spaces of curves}, preprint arXiv:1511.07808 (2015) 51pp. \bibitem[MW2]{MW} S.A. Merkulov and T.\ Willwacher, {\em Deformation theory of Lie bialgebra properads}, preprint 2015. \bibitem[MW3]{MW2} S.A. Merkulov and T.\ Willwacher, {\em On explicit formula for quantization of Lie bialgebras}, to appear. \bibitem[Sc]{S} T.\ Schedler, {\em A Hopf algebra quantizing a necklace Lie algebra canonically associated to a quiver}. Intern. Math. Res. Notices (2005), 725--760. \bibitem[Sh]{Sh} B.\ Shoikhet, {\em A $L_\infty$ algebra structure on polyvector fields }, preprint arXiv:0805.3363, (2008). \bibitem[U]{ufn} V. A. Ufnarovsky, {\it Combinatorial and asymptotical methods in algebra,} Sovr. probl. mat., Fund. napr., {\bf 57} (1990), p.~5--177 [Russian] Engl. transl.: Algebra VI, Encycl. Math. Sci., Springer, Berlin 1995, p.~1--196 \bibitem[V]{V} B.\ Vallette, {\em A Koszul duality for props}, Trans.\ AMS, {\bf 359} (2007), 4865-4943. \bibitem[W1]{Wi1} T.\ Willwacher, {\em M.\ Kontsevich's graph complex and the Grothendieck-Teichm\"uller Lie algebra}, Invent. Math. 200 (2015), no. 3, 671--760. \bibitem[W2]{Wi2} T.\ Willwacher, {\em The oriented graph complexes}, Comm. Math. Phys. 334 (2015), no. 3, 1649--1666. \end{thebibliography} \end{document}
\begin{document} \title{Stability of geometric flows of closed forms} \author{Lucio Bedulli and Luigi Vezzoni} \date{\today} \begin{abstract} We prove a general result about the stability of geometric flows of \lq\lq closed\rq\rq\, sections of vector bundles on compact manifolds. Our theorem allows us to prove a stability result for the modified Laplacian coflow in ${\rm G}_2$-geometry introduced by Grigorian in \cite{Gri} and for the balanced flow introduced by the authors in \cite{Wlucione}. \end{abstract} \maketitle \section{Introduction} In \cite{Bryant} Bryant introduced a new flow in ${\rm G}_2$-geometry which evolves an initial closed ${\rm G}_2$-structure along its Laplacian. Bryant's {\em Laplacian flow} is a flow of closed $3$-forms and its well-posedness is not standard, since the evolution equation is weakly parabolic only in the direction of closed forms. The short-time existence of the flow on compact manifolds was proved by Bryant and Xu in \cite{BryantXu} by introducing a gauge fixing of the flow, called the {\em Laplacian-DeTurck flow}, and then applying the Nash-Moser theorem. In \cite{L2} Lotay and Wei proved that in the compact case torsion-free ${\rm G}_2$-structures are stable under the Laplacian flow. This means that if the initial datum is \lq\lq close enough\rq\rq\, to a torsion-free ${\rm G}_2$-structure, the Laplacian flow is defined for any positive time $t$ and converges as $t\to\infty$ in $C^\infty$-topology to a torsion-free ${\rm G}_2$-structure. Following the ideas of Bryant and Xu, other similar flows have been introduced in ${\rm G}_2$-geometry. For instance Karigiannis, McKay and Tsui defined in \cite{K} the {\em Laplacian coflow}, which is the \lq\lq dual flow\rq\rq\, to the Laplacian flow since it evolves a closed ${\rm G}_2$ $4$-form along its Laplacian. Although the Laplacian flow and coflow are similar from the geometric point of view, it turns out that their defining equations are quite different from the analytic point of view, and the well-posedness of the Laplacian coflow is still an open problem. To overcome this technical difficulty, Grigorian modified in \cite{Gri} the Laplacian coflow by introducing two extra terms, one of which depends on a parameter $A$. In the compact case, this modification is always well-posed for any choice of $A\in \R$ \cite{Gri}, but it has been shown that the behaviour of the flow may significantly depend on the choice of $A$ (see \cite{Anna1}). In \cite{Wlucione} the authors showed that the proof of Bryant and Xu of the well-posedness of the Laplacian-DeTurck flow can be generalized to a quite large family of flows, proving a general result which allows one to treat the short-time behaviour of Grigorian's modified Laplacian coflow and of a new flow of balanced metrics in Hermitian geometry. In the same spirit, in the present paper we prove a general result about the stability of a significant class of flows around linearly stable static solutions. Our theorem can be used to re-obtain the Lotay-Wei stability of the Laplacian-DeTurck flow around torsion-free ${\rm G}_2$-structures (which is a significant part of the main theorem in \cite{L2}). As a main application of our theorem we prove the stability of the modified Laplacian coflow when the parameter $A$ is zero. Note that in \cite{Gri} it is suggested to consider only the case where $A>0$ is big enough, in order to ensure, at least initially, that the volume increases.
However the term involving the parameter $A$ does not affect the well-posedness of the flow and our result suggests to consider the case $A=0$ as the best choice for the parameter. Finally the main result of the present paper applies to the geometric flow of balanced metrics introduced by the authors in \cite{Wlucione} and yields the stability of the flow around Ricci-flat K\"ahler metrics. The paper is organized as follows. In section \ref{2} we give the statement of the main result and we declare some notation we will use in the sequel. Section \ref{coflow} is devoted to the proof of the stability of the modified Laplacian coflow around torsion-free ${\rm G}_2$-structures when the parameter $A$ is zero. The proof is obtained by mixing the use of our main theorem with some techniques used in \cite{L2} to prove the stability of the Laplacian flow. In section \ref{3} we prove a stability result involving the balanced flow around Calabi-Yau metrics. In the last section we give the proof of the main theorem. \noindentoindent {\bf Acknowledgements}. The authors would like to thank Jason Lotay and Gao Chen for useful discussions. The authors are also grateful to the anonymous referee for raising important points and helping to considerably improve the presentation of the paper. \section{Statement of the main result}\label{2} In this section we describe our setting and give the precise statement of our result. \\ Following the terminology introduced in \cite{Wlucione} a {\begin{equation}m Hodge system} on a compact Riemannian manifold $(M,g)$ consists of a quadruplet $(E_-,E,D,\Delta_D)$, where $E_-$ and $E$ are vector bundles over $M$ with an assigned metric along their fibers, $D\colon C^{\infty}(M,E_-)\to C^{\infty}(M,E)$ and $\Delta_D\colon C^{\infty}(M,E)\to C^{\infty}(M,E)$ are differential operators such that \begin{equation}\label{G} \partialsi=DGD^*\partialsi \begin{equation}nd{equation} for every $\partialsi\in {\rm Im}\,D$, where $G$ is the Green operator of $\Delta_D$ and $D^*$ is the formal adjoint of $D$. The foremost example of Hodge system over $M$ is defined by $E_-=\Lambda^p$, $E=\Lambda^{p+1}$, $D=d$ and $\Delta_D=dd^*+d^*\!d$ is the standard Laplace operator, on a compact Riemannian manifold. Condition \begin{equation}qref{G} in this case is a consequence of the standard Hodge theory. Another interesting example of Hodge system occurs in the study of {\begin{equation}m balanced metrics} in complex geometry and it is defined by setting $D=i\partialartial \bar\partialartial$ and as $\Delta_D$ the {\begin{equation}m Aeppli Laplacian} (see the discussion in section \ref{3}). Given a compact manifold $M$ with a Hodge system $(E_-,E,D,\Delta_D)$, we consider an open fiber subbundle $\mathcal E$ of $E$ and a partial differential operator of order $2m$ $$ Q\colon C^{\infty}(M,\mathcal E)\to C^{\infty}(M,E)\,, $$ and a linear partial differential operator $$ D_+\colon C^{\infty}(M,E)\to C^{\infty}(M,E_+) $$ such that $$ {\rm Im}\,D\subseteq \ker\, D_+ \,, $$ where $E_+$ is a vector bundle over $M$. Let $\Phi=\ker D_+\cap C^{\infty}(M,\mathcal E)$. We assume \begin{enumerate} \item[1.] $Q(\Phi)\subset {\rm Im}\,D$; \item[2.] there exists a smooth family of strongly elliptic linear partial differential operators $L_\varphi\colon C^{\infty}(M,E)\to C^{\infty}(M,E)$, $\varphi\in C^{\infty}(M,\mathcal E)$, such that $$ Q_{*|\varphi}(\partialsi)=L_{\varphi}(\partialsi) $$ for every $\varphi\in \Phi$ and $\partialsi\in {\rm Im}\,D$; \item[3.] 
there exists a smooth family of strongly elliptic linear partial differential operators $l_\varphi\colon C^{\infty}(M,E_-)\to C^{\infty}(M,E_-)$, $\varphi\in C^{\infty}(M,\mathcal E)$, such that $$ Q_{*|\varphi}(D\theta)=Dl_{\varphi}(\theta) $$ for every $\varphi\in \Phi$ and $\theta \in C^\infty(M,E_-)$. \begin{equation}nd{enumerate} In \cite{Wlucione} the authors proved that for every $\varphi_0\in \Phi$ the evolution problem \begin{equation}\label{problem} \partialartial_t\varphi_t=Q(\varphi_t)\,,\quad \varphi_{|t=0}=\varphi_0\,,\quad \varphi_t\in U\,, \begin{equation}nd{equation} is always well-posed, where $$ U=\{\varphi_0+D\gamma \,\,:\,\,\gamma\in C^\infty(M,E_-)\}\,\cap C^{\infty}(M,\mathcal E)\,. $$ The main result of the present paper is the following \begin{theorem} \label{teoremone} In the situation described above, let $\bar \varphi\in\Phi$ be such that \begin{enumerate} \item[1.] $Q(\bar\varphi)=0$; \item[2.] the restriction to $DC^\infty(M,E_-)$ of $L_{\bar \varphi}$ is symmetric and negative definite with respect to the $L^2$ inner product induced by $g$. \begin{equation}nd{enumerate} Then for every $\begin{equation}psilon>0$ there exist $\delta>0$ and $C>0$ such that if $$ \|\varphi_0-\bar\varphi\|_{C^{\infty}}<\delta $$ then \begin{equation}qref{problem} has a unique long-time solution $\{\varphi_t\}_{t\in [0,\infty)}$ such that $$ \|\varphi_t-\bar \varphi\|_{C^{\infty}}<\begin{equation}psilon\,, \mbox{ and } \quad \|Q(\varphi_t)\|_{C^\infty}\leq C \|Q(\varphi_0)\|_{L^2} \,{\rm e}^{-\lambda t} $$ for every $t\in [0,\infty)$, where $\lambda$ is half the first positive eigenvalue of $-L_{\bar\varphi}$. Moreover, $\varphi_t$ converges exponentially fast in $C^{\infty}$ topology to a $\varphi_\infty \in U$ such that $Q(\varphi_\infty)=0$ as $t\to \infty$. \begin{equation}nd{theorem} Whenever we write $\|f\|_{C^{\infty}}<\begin{equation}psilon$, we mean that $\|f\|_{C^k}<\begin{equation}psilon$ for every $k\in \mathbb N$. In the statement above the $C^k$-norms and the $L^2$-norm are with respect to the background metric $g$. Conditions 1. and 2. in theorem \ref{teoremone} say that $\bar \varphi$ is a linearly stable fixed point of the flow. Hence roughly speaking the theorem says that linearly stable fixed points of the class of flows we are considering are indeed dynamically stable. \section{From theorem \ref{teoremone} to the stability of the modified Laplacian coflow}\label{coflow} Let $(M,\varphi_0)$ be a compact manifold with a fixed ${\rm G}_2$-structure. The {\begin{equation}m Laplacian flow} is defined as \begin{equation} \label{G2flow} \partialartial_t \varphi_t=\Delta_{\varphi_t}\varphi_t\,,\quad d\varphi_t=0\,,\quad \varphi_t\in C^{\infty}(M,\Lambda^3_+)\,, \quad \varphi_{|t=0}=\varphi_0\,, \begin{equation}nd{equation} where $\Lambda^3_+$ is the fiber bundle whose sections are ${\rm G}_2$-forms on $M$ and for any $\varphi \in C^\infty(M,\Lambda^3_+)$ the Laplacian operator induced by ${\varphi}$ is denoted by $\Delta_{\varphi}$. The well-posedness of the flow was proved by Bryant and Xu in \cite{BryantXu} applying Nash-Moser inverse function theorem to the gauge fixing of the flow given by the following proposition. 
\begin{prop}[Bryant-Xu]\label{BXpre} There exists a smooth map $V\colon C^{\infty}(M,\Lambda^3_+)\to C^{\infty}(M,TM)$ such that the operator $$ Q\colon C^{\infty}(M,\Lambda^3_+)\to\Omega^3(M),\quad Q(\varphi)=\Delta_{\varphi}\varphi+\mathcal{L}_{V(\varphi)}\varphi $$ satisfies $$ Q_{*|{\varphi}}(\sigma)=-\Delta_{\varphi}\sigma+d\Psi(\sigma) $$ for every closed ${\rm G}_2$-structure $\varphi$ and $\sigma \in d\Omega^2(M)$, where $\mathcal L$ is the Lie derivative and $\Psi$ is an algebraic linear operator on $\sigma$ with coefficients depending on the torsion of $\varphi$ in a universal way. \end{prop} We briefly recall the definition of the map $V$ since we need it for studying the modified Laplacian coflow. Let $\nabla^0$ be a fixed background torsion-free connection on $M$. For any $\f \in C^{\infty}(M,\Lambda^3_+)$ let $\nabla^\f$ be the Levi-Civita connection of the metric induced by $\f$. Let $$ T^{\f}=\nabla^{\f}-\nabla^0. $$ We can locally write $T^{\f}=\frac12 T_{jk}^i\partial_{x^i}\otimes dx^j\circ dx^k$. Then $V$ is locally defined as \begin{equation}\label{V} V(\f)^i=c_1\, g^{pq}T^{i}_{pq}+c_2\,g^{ki}T^{j}_{jk}\,, \end{equation} where $c_1$ and $c_2$ are universal constants. The \lq\lq modified\rq\rq\,\, Laplacian flow \begin{equation}\label{modifiedflow} \partial_t \varphi_t=\Delta_{\varphi_t}\varphi_t+\mathcal{L}_{V(\varphi_t)}\varphi_t\,,\quad d \varphi_t=0\,,\quad \varphi_t\in C^{\infty}(M,\Lambda^3_+)\,, \quad \varphi_{|t=0}=\varphi_0\,, \end{equation} is often called the {\em Laplacian-DeTurck} flow. In \cite{L2} Lotay and Wei proved the following stability result about the Laplacian flow. \begin{theorem}[Lotay-Wei \cite{L2}]\label{Lotay} Let $\bar{\varphi}$ be a torsion-free ${\rm G}_2$-structure on a compact $7$-manifold $M$. There exists $\delta>0$ such that for any closed ${\rm G}_2$-structure $\varphi_0$ cohomologous to $\bar \varphi$ and satisfying $\|\varphi_0-\bar\varphi\|_{C^{\infty}} < \delta$, the Laplacian flow \eqref{G2flow} with initial value $\varphi_0$ exists for all $t\in [0,\infty)$ and converges in $C^{\infty}$-topology to $\varphi_\infty \in {\rm Diff}^0\cdot \bar{\varphi}$ as $t\to \infty$. \end{theorem} In the statement above the $C^k$-norms are meant with respect to the metric induced by $\bar \varphi$. The proof of the Lotay-Wei theorem in \cite{L2} can be subdivided into two steps: in the first step the stability of the Laplacian-DeTurck flow is proved, and in the second step the stability of the Laplacian flow is recovered. The stability of the Laplacian-DeTurck flow can be deduced from our theorem \ref{teoremone} by taking into account lemma 4.2 in \cite{L2}. Indeed, according to our setting we put $$ E_-=\Lambda^2M\,,\quad E=\Lambda^3M\,,\quad E_+=\Lambda^4M\,,\quad \mathcal{E}=\Lambda^3_+\,, $$ $$ D=d\colon \Omega^2(M)\to \Omega^3(M)\,,\quad D_+=d\colon \Omega^3(M)\to \Omega^4(M)\,, $$ and we take $\Delta_D\colon \Omega^3(M)\to \Omega^3(M)$ to be the Laplacian induced by a fixed background Riemannian metric. Furthermore $\Phi$ is the space of closed ${\rm G}_2$-forms on $M$ and for $\varphi\in \Phi$ we take $$ \begin{aligned} L_{\varphi}&=-\Delta_{\varphi}+d\Psi\,,\mbox{ on $3$-forms};\\ l_{\varphi}&=-\Delta_{\varphi}+\Psi\,, \,\,\,\mbox{ on $2$-forms}\,, \end{aligned} $$ where $\Psi$ is defined in proposition \ref{BXpre}.
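Let us also point out, for the reader's convenience, why condition 1.\ of section \ref{2} holds in this setting: if $\varphi\in \Phi$ is a closed ${\rm G}_2$-structure, then $\Delta_{\varphi}\varphi=dd^*_{\varphi}\varphi$ and, by Cartan's formula, $\mathcal{L}_{V(\varphi)}\varphi=d\,\iota_{V(\varphi)}\varphi$, so that $$ Q(\varphi)=d\big(d^*_{\varphi}\varphi+\iota_{V(\varphi)}\varphi\big)\in {\rm Im}\, D\,; $$ the remaining conditions 2.\ and 3.\ are essentially provided by proposition \ref{BXpre}.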
If $\bar \varphi$ is a torsion free ${\rm G}_2$-structure, then $Q(\bar \varphi)=0$ and the restriction of $L_{\bar \varphi}$ to $d\Omegaega^2(M)$ is $-\Delta_{\bar \varphi}$. Therefore theorem \ref{teoremone} implies that for every $\begin{equation}psilon>0$ there exists $\delta>0$ such that if $\varphi_0$ is a closed ${\rm G}_2$-structure cohomologous to $\bar \varphi$ satisfying $\|\varphi_0-\bar\varphi\|_{C^{\infty}}<\delta$, then flow \begin{equation}qref{modifiedflow} with initial value $\varphi_0$ has a long-time solution $\tilde\varphi_t$ defined for $t\in [0,\infty)$ such that $\|\tilde \varphi_t - \bar \varphi\|_{C^\infty} < \begin{equation}psilon$ and $\tilde\varphi_t$ converges in $C^{\infty}$-topology to some torsion-free ${\rm G}_2$-structure $\tilde \varphi_{\infty}$ in $[\bar \varphi]\in H^3(M,\R)$. Now lemma 4.2 of \cite{L2} implies that $\tilde \varphi_\infty=\bar \varphi$ if $\begin{equation}psilon$ is taken small enough. Indeed lemma 4.2 of \cite{L2} implies that for $\begin{equation}psilon$ small enough the $L^2$-norm of $\tilde \varphi_t-\bar\varphi$ decays exponentially and consequently $\tilde \varphi_{\infty}=\bar\f$. The long-time existence of the Laplacian flow easily follows. Indeed, let $\partialhi_t$ be the curve of diffeomorphisms solving $$ \partialartial_t\partialhi_t=-V(\tilde \varphi_t)_{|\partialhi_t}\,\quad \partialhi_{0}={\rm Id}_M\,, $$ then $\varphi_t:=\partialhi_t^*(\tilde \varphi_t)$, $t\in [0,\infty)$, is a long-time solution of the Laplacian flow with initial condition $\varphi_{|t=0}=\varphi_0$. The convergence of the flow in theorem \ref{Lotay} is proved in \cite{L2} by using the Shi-type estimates for the Laplacian flow proved in \cite{L1}. Next we focus on the Laplacian coflow. The {\begin{equation}m Laplacian coflow} is the analogue of the Laplacian flow where the initial ${\rm G}_2$-structure is assumed to be coclosed instead of closed. Indeed a ${\rm G}_2$-structure $\varphi$ on a smooth manifold $M$ can be alternatively given by the $4$-form $\partialsi=*_{\varphi}\varphi$ and the fixed orientation. In \cite{K} it is defined the Laplacian coflow $$ \partialartial_t\partialsi_t=\Delta_{\partialsi_t}\partialsi_t\,,\quad d\partialsi_t=0\,,\quad \quad \partialsi_t\in C^{\infty}(M,\Lambda^4_+)\,,\quad \partialsi_{|t=0}=\partialsi_0\,, $$ where here $\Lambda^4_+$ is the fiber bundle whose sections are ${\rm G}_2$-$4$-forms and $\Delta_{\partialsi_t}$ is the Laplacian operator induced by ${\partialsi_t}$. Unlike the Laplacian flow, there is no known gauge fixing that makes the Laplacian coflow parabolic, for this reason in \cite{Gri} Grigorian proposed the following modification \begin{equation}\label{modified coflow} \partialartial_t\partialsi_t=\Delta_{\partialsi_t}\partialsi_t+ 2d((A -{\rm tr}\, T_{\partialsi_t})*_{\partialsi_t}\partialsi_t)\,,\quad d\partialsi_t=0\,,\quad \quad \partialsi_t\in C^{\infty}(M,\Lambda^4_+)\,,\quad \partialsi_{|t=0}=\partialsi_0\,, \begin{equation}nd{equation} where $A$ is a constant and for a ${\rm G_2}$ $4$-form $\partialsi\in C^{\infty}(M,\Lambda^4_+)$ the function ${\rm tr} \,T_{\partialsi}$ is the trace with respect to the metric $g$ induced by $\partialsi$ of the torsion tensor $$ T_{\partialsi}(X,Y):=\frac{1}{24}g(\noindentabla_X *_{\partialsi}\partialsi, \iota_Y \partialsi) $$ for $X,Y$ in $C^{\infty}(M,TM)$, where $\noindentabla$ is the Levi-Civita connection of $g$. 
It is not difficult to see that $$ {\rm tr} \,T_\psi=\frac14 *_{\psi} (d*_{\psi}\psi\wedge*_{\psi}\psi)\,. $$ Under this modification the flow is still not parabolic, but it can be further modified by using a DeTurck trick. In this section we revisit Grigorian's proof of the well-posedness of the modified Laplacian coflow in order to show how the flow fits in our setting in the case $A=0$. We first recall how the space of smooth forms on a ${\rm G}_2$-manifold splits into a direct sum of irreducible modules (we refer to \cite{Bryant} for details). Given a ${\rm G}_2$-manifold $(M,\f)$, the spaces of $2$-forms and $3$-forms split into irreducible ${\rm G}_2$-modules (whose pointwise dimensions are indicated by the subscripts) as $$ \Omega^2(M)=\Omega^{2}_{14} (M)\oplus\Omega^2_{7}(M)\,,\quad\quad \Omega^3(M)=\Omega^{3}_{27}(M)\oplus \Omega^{3}_{7}(M)\oplus \Omega^{3}_{1}(M)\,, $$ where $$ \begin{aligned} \Omega^2_{7}(M)&=\{*_{\f}(\alpha \wedge *_{\f}\f)\,\,:\,\,\alpha\in \Omega^1(M)\}\,,\\ \Omega^{2}_{14}(M)&=\{\alpha\in \Omega^2(M)\,\,:\,\,\alpha\wedge \f=-*_\f\alpha \} \end{aligned} $$ and \begin{eqnarray*} &\Omega^{3}_{27}(M)&=\{\alpha\in \Omega^3(M)\,\,:\,\,\alpha\wedge \f=\alpha\wedge *_\f\f=0\}\,,\\ &\Omega^{3}_{7}(M)&=\{*_\f(\alpha\wedge \f)\,\,:\,\, \alpha\in \Omega^1(M)\}\,,\\ &\Omega^{3}_{1}(M)&=\{f\,\f\,:\,f\in C^\infty(M)\}\,. \end{eqnarray*} The space of symmetric $2$-tensors $S^2(M)$ on $M$ is isomorphic to $\Omega^{3}_{1}(M)\oplus \Omega^{3}_{27}(M)$ via the map $\mathsf{i}_\f\colon S^2(M)\to \Omega^{3}_{1}(M)\oplus \Omega^{3}_{27}(M)$ locally defined as $$ \mathsf{i}_\f(h)=h_r^l\f_{lsk}dx^r\wedge dx^s\wedge dx^k $$ for every $h=h_{rs}dx^r \circ dx^s$, where $\f_{lsk}$ are the components of $\f$ in the coordinates $\{x^1,\dots,x^7\}$. Although the following lemma arises from \cite{Gri}, we prefer to give a proof of it in order to frame the Laplacian coflow in our setting and to point out that the vector field needed to apply the DeTurck trick is the same as the one used in the Laplacian flow. \begin{lemma}\label{lemmaG} Let $Q\colon C^{\infty}(M,\Lambda^4_+)\to \Omega^{4}(M)$ be defined as $$ Q(\psi)=\Delta_{\psi}\psi+2d((A-{\rm tr}\, T_{\psi})*_\psi \psi)+\mathcal{L}_{V(*_{\psi}\psi)}\psi\,, $$ where $V(*_{\psi}\psi)$ is defined in \eqref{V}. Let $\{\psi_t\}_{t\in (-\epsilon,\epsilon)}$ be a smooth curve in $C^{\infty}(M,\Lambda^4_+)$ and $$ \f_t=*_{\psi_t}\psi_t\,, \quad \dot\psi=\partial_{t|t=0}\psi_t \,,\quad \dot\f=\partial_{t|t=0}\f_t\,. $$ Then $$ \partial_{t|t=0}Q(\psi_t)=-\Delta_{\psi_0}\dot \psi+2Ad\dot \f+d\Psi(\dot \psi)\,, $$ where $\Psi$ is an algebraic linear operator on $\dot \psi$ with coefficients depending on the torsion of $\psi_0$ in a universal way. \end{lemma} \begin{proof} In this proof we use the same notation as in \cite{Bryant,BryantXu}, denoting by $d_{r}^s$ the projection of $d$ onto $\Omega_r^{s}(M)$. We also set $\psi=\psi_0$ and $\varphi=\varphi_0$ to simplify notation.
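Roughly speaking, the strategy of the proof, modelled on the computations of \cite{BryantXu} for the Laplacian flow, is to linearize separately the three terms $\Delta_{\psi}\psi$, $2d((A-{\rm tr}\, T_{\psi})*_\psi \psi)$ and $\mathcal{L}_{V(*_{\psi}\psi)}\psi$ defining $Q$, keeping track of the highest-order part of each variation; the remaining contributions are algebraic in $\dot\psi$, with coefficients depending on the torsion of $\psi$, and are collected in the symbol \lq\lq${\rm l.o.t.}$\rq\rq.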
From \cite{JJDG} it follows that $\dot \f$ and $\dot \partialsi $ are related as follows $$ \dot \f=3f^0\f+*_{\f}(f^1\wedge \f)+f^3\,,\quad \dot \partialsi=4f^0\partialsi+f^1\wedge\f-*_{\f}f^3\,, $$ where $f^0$ is a smooth function, $f^1\in \Omegaega^1(M)$ and $f^3\in \Omegaega^3_{27}(M)$. A direct computation via the formulas in \cite{BryantXu} yields $$ \partialartial_{t|t=0}\Delta_{\partialsi_t}\partialsi_t=dp(\dot\partialsi)\,,\quad \partialartial_{t|t=0}\mathcal{L}_{V(*_{\partialsi}\partialsi)}\partialsi=dq(\dot\partialsi) $$ where $$ \begin{aligned} p(\dot \partialsi)&=3*_{\f}d(f^0\wedge \f)+*_{\f}df^3+*_\f d*_\f(f^1\wedge \varphi)+{\rm l.o.t.}\,,\\ q(\dot \partialsi)&=5 *_\f(df^0\wedge \f)+*_\f(d^{27}_7 f^3\wedge \f)+{\rm l.o.t.}\,, \begin{equation}nd{aligned} $$ and by \lq\lq${\rm l.o.t.}$\rq\rq\, we mean \lq\lq lower order terms\rq\rq. Moreover if we denote by $\partiali_7(d\dot \partialsi)$ the component of $d\dot \partialsi$ in $*_{\f}\Omegaega^2_{7}(M)=\{\alpha \wedge \partialsi\,\,:\,\,\alpha\in \Omegaega^1(M)\}$ we have $$ \partiali_7(d\dot \partialsi)=4df^0\wedge \partialsi+\frac13d^{27}_7f^3\wedge \partialsi+\frac23 d^7_7f^1\wedge \partialsi+{\rm l.o.t.} $$ and from $d\dot\partialsi=0$ we deduce $$ d^{27}_7f^3=-12 df^0-2d_7^7f^1+{\rm l.o.t.} $$ which implies $$ q(\dot \partialsi)=-7 *_{\f}(df^0\wedge \f)-2*_\f(d^{7}_7 f^1\wedge \f)+{\rm l.o.t.} $$ Therefore $$ p(\dot \partialsi)+q(\dot \partialsi)+*_\f d*_\f \dot \partialsi=2*_\f d*_\f (f^1\wedge \f)-2*_\f(d^{7}_7 f^1\wedge \f)+{\rm l.o.t.} $$ Now $$ *_\f d*_\f (f^1\wedge \f)=\frac47 d^7_1f^1\f+\frac12 *_\f(d_7^7f^1\wedge \f)+ d_{27}^7f^1 $$ and so $$ \begin{aligned} p(\dot \partialsi)+q(\dot \partialsi)+*_\f d*_\f \dot \partialsi&\,=2*_\f d*_\f (f^1\wedge \f)-2*_\f(d^{7}_7 f^1\wedge \f)+{\rm l.o.t.}\\ &\,= \frac87 d^7_1f^1\f+*_\f(d_7^7f^1\wedge \f)+2d_{27}^7f^1-2*_\f(d^{7}_7 f^1\wedge \f)+{\rm l.o.t.}\\ &\,= \frac87 d^7_1f^1\f-*_\f(d_7^7f^1\wedge \f)+2d_{27}^7f^1+{\rm l.o.t.} \begin{equation}nd{aligned} $$ From $$ d(*_\f(f^1\wedge \partialsi))=-\frac37 d^7_1f^1\f-\frac12 *_\f (d_7^7f^1\wedge \f)+d_{27}^7f^1 $$ we deduce $$ p(\dot \partialsi)+q(\dot \partialsi)+*_\f d*_\f \dot \partialsi=2d(*_\f(f^1\wedge \partialsi))+2 d^7_1f^1\f+{\rm l.o.t.} $$ Finally $$ {\rm Tr}\, T_{\partialsi}=\frac14 *_\f (d\f\wedge\f)=d_1^7f^1+{\rm l.o.t.} $$ and consequently $$ p(\dot \partialsi)+q(\dot \partialsi)+2(A-{\rm Tr}\, T_{\partialsi})\f+*_\f d*_\f \dot \partialsi=2d(*_\f(f^1\wedge \partialsi))+2A\f+{\rm l.o.t.} $$ Therefore $$ \partialartial_{t|t=0}Q(\partialsi_t)=dp(\dot \partialsi)+dq(\dot \partialsi)+2d(A-{\rm Tr}\, T_{\partialsi})=-\Delta_{\partialsi}\dot \partialsi+2Ad\dot \f+{\rm l.o.t.} $$ and the claim follows. \begin{equation}nd{proof} Now we note that torsion-free $\rm G_2$-structures are critical points of the functional $Q$ regardless of the value of $A$. We concentrate on the case $A=0$. The main result of this section is the following \begin{theorem}\label{coflow} Let $\bar \partialsi\in C^\infty(M,\Lambda^4_+)$ be a torsion-free ${\rm G}_2$-structure on a compact 7-manifold $M$. 
There exists $\delta>0$ such that if $\psi_0 \in C^\infty(M,\Lambda^4_+)$ is closed and satisfies $$ \|\psi_0-\bar\psi\|_{C^{\infty}}<\delta\,,\quad [\psi_0]=[\bar \psi]\,, $$ then the evolution equation \begin{equation}\label{A=0} \partial_t\psi_t=\Delta_{\psi_t}\psi_t-2d(({\rm tr}\, T_t)*_{\psi_t}\psi_t)\,,\quad \psi_{|t=0}=\psi_0\,, \end{equation} has a unique long-time solution $\{\psi_t\}_{t\in [0,\infty)}$ which converges in $C^{\infty}$-topology to $\psi_\infty \in {\rm Diff}^0\cdot \bar{\psi}$ as $t\to \infty$. \end{theorem} In the statement above and in the following proof the $C^k$-norms and the $L^2$-norm, where not specified, are meant with respect to the metric $\bar g$ induced by $\bar \psi$. \begin{proof} Our approach mixes the use of theorem \ref{teoremone} with some techniques used in \cite{L2} to prove the stability of the Laplacian flow. We first apply theorem \ref{teoremone} to show the stability of the gauge fixing of the flow and then use the Shi-type estimates of \cite{shi-type} to recover the stability of the original flow. The proof is subdivided into the following three steps: \begin{enumerate} \item[1.] We prove that for $\delta$ small enough a gauge fixing of \eqref{A=0} has a long-time solution $\tilde\psi_t$ which converges in $C^{\infty}$-topology to a torsion-free ${\rm G}_2$-structure $\tilde \psi_{\infty}\in [\bar\psi]$ and stays $C^\infty$-close to $\bar \psi$. \item[2.] We show that for a suitable choice of $\delta$, $\tilde\psi_t$ converges in $L^2$-norm to $\bar\psi$, which implies that $\tilde\psi_{\infty}=\bar\psi$; \item[3.] We recover the stability of the original flow. \end{enumerate} The proofs of steps $2$ and $3$ are close to the case of the Laplacian flow. \noindent {\em Step $1$.} If we choose as background metric the metric $\bar g$ induced by the torsion-free ${\rm G}_2$-structure $\bar \psi$, then lemma \ref{lemmaG} together with theorem \ref{teoremone} implies that for every $\epsilon>0$ there exist $\delta >0$ and $\kappa>0$ such that if $\psi_0 \in C^{\infty}(M,\Lambda^4_+)$ is closed and satisfies $$ \|\psi_0-\bar\psi\|_{C^{\infty}}<\delta\,,\quad [\psi_0]=[\bar \psi]\,, $$ then the evolution problem \begin{equation}\label{gauge fixing A=0} \partial_t \tilde \psi_t=\Delta_{\tilde\psi_t}\tilde\psi_t-2d(({\rm tr}\, \tilde T_t)*_{\tilde\psi_t} \tilde\psi_t)+\mathcal{L}_{V(\tilde\psi_t)}\tilde\psi_t\,,\quad \tilde\psi_{|t=0}=\psi_0 \end{equation} has a long-time solution $\{\tilde \psi_t\}_{t\in [0,\infty)}$ such that $$ \|\tilde\psi_t-\bar\psi\|_{C^{\infty}}< \epsilon\,, \quad \mbox{for every } t \in [0,+\infty)\,, $$ and \begin{equation}\label{exp} \|\Delta_{\tilde \psi_t}\tilde \psi_t-2d(({\rm tr}\, \tilde T_t)*_{\tilde \psi_t}\tilde \psi_t)+\mathcal{L}_{V(\tilde\psi_t)}\tilde\psi_t\|_{C^{\infty}} \leq \kappa {\rm e}^{-\lambda t} \end{equation} for every $t\in[0,\infty)$, where $\lambda$ is half the first positive eigenvalue of $\Delta_{\bar \psi}$. Furthermore, $\tilde \psi_t$ converges exponentially fast to a torsion-free ${\rm G}_2$-structure $\tilde\psi_{\infty}\in[\bar\psi]$.
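Let us briefly justify the application of theorem \ref{teoremone} here: by lemma \ref{lemmaG} with $A=0$, the linearization at $\bar\psi$ of the right-hand side of \eqref{gauge fixing A=0}, restricted to $d\Omega^3(M)$, is $-\Delta_{\bar\psi}$, since, as in the case of the Laplacian flow, the lower-order operator $\Psi$ has coefficients depending on the torsion of $\bar\psi$, which vanishes. The operator $-\Delta_{\bar\psi}$ is symmetric and negative definite on exact forms, so that hypothesis 2.\ of theorem \ref{teoremone} is satisfied and $\lambda$ is accordingly half the first positive eigenvalue of $\Delta_{\bar\psi}$.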
\noindentoindent {\begin{equation}m Step $2.$} We show that if we choose $\begin{equation}psilon$ small enough in the previous step, then the $L^2$-norm of $\theta_t=\tilde\partialsi_t-\bar\partialsi$ decays exponentially. Let $Q\colon C^{\infty}(M,\Lambda^4_+)\to \Omegaega^{4}(M)$ be defined as $$ Q(\partialsi)=\Delta_{\partialsi}\partialsi-2d(({\rm tr}\, T_{\partialsi})*_\partialsi \partialsi)+\mathcal{L}_{V(*_{\partialsi}\partialsi)}\partialsi\,. $$ Since $Q$ sends closed forms to exact forms we can write \begin{equation} \label{ev_estimate} \partialartial_t\theta_t=Q(\tilde \partialsi_t)=Q_{*|\bar\partialsi}(\theta_t)+dF(\tilde\partialsi_t)=-\Delta_{\bar\partialsi} \tilde\partialsi_t+dF(\theta_t)\,, \begin{equation}nd{equation} where we used also lemma \ref{lemmaG}. Next we consider $\Theta(\partialsi)=*_{\partialsi}\partialsi$ and write $$ \Theta(\tilde \partialsi_t)=*_{\bar \partialsi} \bar \partialsi +S_1(\theta_t)+S_2(\theta_t) $$ where $S_1(\theta_t)=\Theta_{*|\bar\partialsi}(\theta_t)$. Arguing as in \cite{JJDG} we can observe that both $dS_1(\theta_t)$ and $dS_2(\theta_t)$ are dominated by $\theta_t\bar\noindentabla \theta_t$ for $\|\theta_t\|_{C^\infty_{\bar g}}$ small, where $\bar \noindentabla$ is the Levi-Civita connection of $\bar g$. Note also that $dS_1(\theta_t)$ is linear in $\bar \noindentabla \theta_t$. \\ Let $Q^1(\partialsi):=\Delta_{\partialsi}\partialsi$. Then $$ Q^1_{*|\bar\partialsi}(\theta_t)=d*_{\bar \partialsi}d S_1(\theta_t) $$ and $$ \begin{aligned} Q^1(\tilde\partialsi_t)-Q^1(\bar\partialsi)-Q^1_{*|\bar\partialsi}(\theta_t)=&\,d*_{\tilde\partialsi_t}d *_{\tilde \partialsi_t} \tilde \partialsi_t - d*_{\bar \partialsi}d S_1(\theta_t)\\ =&\,d*_{\tilde\partialsi_t}d *_{\tilde \partialsi_t} {\tilde \partialsi_t} - d*_{\bar\partialsi}d *_{\tilde \partialsi_t} {\tilde \partialsi_t} + d*_{\bar\partialsi}d *_{\tilde \partialsi_t} {\tilde \partialsi_t} - d*_{\bar \partialsi}d S_1(\theta_t)\\ =&\,d(*_{\tilde\partialsi_t}-*_{\bar\partialsi})d *_{\tilde \partialsi_t} {\tilde \partialsi_t} + d*_{\bar\partialsi}d *_{\tilde \partialsi_t} {\tilde \partialsi_t} - d*_{\bar \partialsi}d S_1(\theta_t)\\ =&\,d(*_{\tilde\partialsi_t}-*_{\bar\partialsi})d *_{\tilde \partialsi_t} {\tilde \partialsi_t} + d*_{\bar\partialsi}d S_2(\theta_t)\,.\\ \begin{equation}nd{aligned} $$ Thus we can write $\Delta_{\tilde \partialsi_t}\tilde \partialsi_t = Q^1_{*|\bar\partialsi}(\theta_t) + dF^1(\theta_t)$ with $F^1(\theta_t)$ dominated by $\theta_t\bar\noindentabla \theta_t$, for $\|\theta_t\|_{C^\infty_{\bar g}}$ small, since $(*_{\tilde\partialsi_t}-*_{\bar \partialsi})$ is a $0^{th}$-order operator depending on $\theta_t$ polynomially. Now let $q^2(\partialsi)=(*_{\partialsi}(d^*_\partialsi \partialsi\wedge\partialsi))*_\partialsi\partialsi$ and $Q^2(\partialsi)=dq^2(\partialsi)$, then $$ q^2_{*|\bar\partialsi}(\theta_t)=(*_{\bar \partialsi}(*_{\bar \partialsi} dS_1(\theta_t)\wedge\bar \partialsi))*_{\bar \partialsi}\bar\partialsi\,. 
$$ Moreover $$ \begin{aligned} &\,q^2(\tilde\partialsi_t)-q^2(\bar\partialsi)-q^2_{*|\bar\partialsi}(\theta_t)= \,(*_{\tilde\partialsi_t}(d^*_{\tilde\partialsi_t} \tilde\partialsi_t\wedge\tilde\partialsi_t))*_{\tilde\partialsi_t}\tilde\partialsi_t -(*_{\bar \partialsi}(*_{\bar \partialsi} dS_1(\theta_t)\wedge\bar \partialsi))*_{\bar\partialsi}\bar\partialsi\,\\ & = \,((*_{\tilde\partialsi_t}-*_{\bar\partialsi})(d^*_{\tilde\partialsi_t} \tilde\partialsi_t\wedge(\tilde\partialsi_t-\bar\partialsi)))(*_{\tilde\partialsi_t}\tilde\partialsi_t-*_{\bar\partialsi}\bar\partialsi) +(*_{\tilde\partialsi_t}(d^*_{\tilde\partialsi_t} \tilde\partialsi_t\wedge(\tilde\partialsi_t-\bar\partialsi)))*_{\bar\partialsi}\bar\partialsi \\ & +(*_{\tilde\partialsi_t}(d^*_{\tilde\partialsi_t} \tilde\partialsi_t\wedge\bar\partialsi))*_{\bar\partialsi}\bar\partialsi +((*_{\tilde\partialsi_t}-*_{\bar\partialsi})(d^*_{\tilde\partialsi_t} \tilde\partialsi_t\wedge \bar\partialsi))(*_{\tilde\partialsi_t}\tilde\partialsi_t-*_{\bar\partialsi}\bar\partialsi) +(*_{\bar\partialsi}(d^*_{\tilde\partialsi_t} \tilde\partialsi_t\wedge \bar\partialsi))(*_{\tilde\partialsi_t}\tilde\partialsi_t-*_{\bar\partialsi}\bar\partialsi) \\ & +(*_{\bar\partialsi}(d^*_{\tilde\partialsi_t} \tilde\partialsi_t\wedge(\tilde\partialsi_t-\bar\partialsi)))(*_{\tilde\partialsi_t}\tilde\partialsi_t-*_{\bar\partialsi}\bar\partialsi) -(*_{\bar \partialsi}(*_{\bar \partialsi} dS_1(\theta_t)\wedge\bar \partialsi))*_{\bar\partialsi}\bar\partialsi\,,\\ \begin{equation}nd{aligned} $$ and consequently we have that $Q^2(\tilde \partialsi_t) = Q^2_{*|\bar\partialsi}(\theta_t) + dF^2(\theta_t)$, with $F^2(\theta_t)$ dominated by $\theta_t\bar \noindentabla\theta_t$, for $\|\theta_t\|_{C^\infty_{\bar g}}$ small. Finally, if $Q^3(\partialsi)=\mathcal{L}_{V(*_{\partialsi}\partialsi)}\partialsi$, then \cite[formula (4.13)]{L2} (once regarded on $4$-forms) yields $$ Q^3(\tilde\partialsi_t) = Q^3_{*|\bar\partialsi}(\theta_t) + dF^3(\theta_t)\,, $$ with $F^3(\theta_t)$ again dominated by $\theta_t\bar \noindentabla\theta_t$, for $\|\theta_t\|_{C^\infty_{\bar g}}$ small. Therefore in \begin{equation}qref{ev_estimate} we can put $F(\theta_t)=F^1(\theta_t)-\frac{7}{2}F^2(\theta_t)+F^3(\theta_t)$ and we have that there exists a constant $C' >0$ such that the pointwise estimate $|F(\theta_t)|_{\bar g} \leq C' |\theta_t|_{\bar g} |\bar \noindentabla\theta_t|_{\bar g}$ holds for $\|\theta_t\|_{C^\infty_{\bar g}}$ small. Now $$ \begin{aligned} \frac{d}{dt}\|\theta_t\|^2_{L^2}=&\,\frac{d}{dt}\int_M|\theta_t|_{\bar g}^2\,{\rm Vol}_{\bar g} =2\int_M\bar g(\theta_t,\partialartial_t\theta_t)\,{\rm Vol}_{\bar g} =2\int_M \bar g(\theta_t,-\Delta_{\bar{\partialsi}}\theta_t+dF(\theta_t))\,{\rm Vol}_{\bar g}\\ =&\,-2\|d^*\theta_t\|^2_{L^2}+2\int_M\bar g(d^*\theta_t,F(\theta_t))\,{\rm Vol}_{\bar g} \leq-2\|d^*\theta_t\|^2_{L^2}+2\int_M|d^*\theta_t|_{\bar g} |F(\theta_t)|_{\bar g}\,{\rm Vol}_{\bar g}\\ \leq& \,-2\|d^*\theta_t\|^2_{L^2}+2C'\int_M|d^*\theta_t|_{\bar g} |\theta_t|_{\bar g}|\bar \noindentabla\theta_t|_{\bar g}\,{\rm Vol}_{\bar g} \leq -2\|d^*\theta_t\|^2_{L^2}+2C'\begin{equation}psilon \int_M|d^*\theta_t|_{\bar g} |\bar \noindentabla\theta_t|_{\bar g}\,{\rm Vol}_{\bar g} \\ \leq& \,-2\|d^*\theta_t\|^2_{L^2}+C'\begin{equation}psilon (\|d^*\theta_t\|^2_{L^2}+\|\bar \noindentabla\theta_t\|^2_{L^2})\,. 
\begin{equation}nd{aligned} $$ Weitzenb\"ock formula yields that there exists a constant $C''>0$ depending only on bounds of the curvature of $\bar g$ such that $$ \|\bar\noindentabla\theta_t\|^2_{L^2}\leq \|d^*\theta_t\|^2_{L^2}+C''\|\theta_t\|^2_{L^2}\,. $$ Since $$ \|d^*\theta_t\|^2_{L^2}\geq 2 \lambda\|\theta_t\|^2_{L^2} $$ we get $$ \frac{d}{dt}\|\theta_t\|^2_{L^2} \leq 4\lambda(C'\begin{equation}psilon-1)\|\theta_t\|^2_{L^2}+C'C''\begin{equation}psilon \|\theta_t\|^2_{L^2}=(-4\lambda+C'''\begin{equation}psilon)\|\theta_t\|^2_{L^2}\,, $$ with $C'''=C'(4\lambda +C'')$. Therefore for $\begin{equation}psilon$ small enough, Gronwall lemma implies that $\|\tilde \partialsi_t-\bar\partialsi\|_{L^2}$ decays exponentially. \noindentoindent {\begin{equation}m Step $3$.} We recover from $\tilde \partialsi_t$ a long-time solution $\partialsi_t$ to \begin{equation}qref{A=0} and we show that $\partialsi_t$ converges exponentially fast to a torsion-free ${\rm G}_2$-structure in $C^{\infty}$-topology. Let $\{\partialhi_t\}$ be the family of diffeomorphisms solving $$ \partialartial_t\partialhi_t=-V(*_{\tilde\partialsi_t}\tilde\partialsi_t)_{|\partialhi_t}\,,\quad \partialhi_{|t=0}={\rm Id}\,, $$ and correspondingly let us set $$ \partialsi_t = \partialhi_t^* \tilde \partialsi_t\,. $$ From the convergence of $\tilde \partialsi_t$ in $C^{\infty}_{\bar g}$ topology it follows that $\partialhi_t$ converges to a limit map $\partialhi_\infty$. Arguing as in \cite{L2} we get that $\partialhi_{\infty}$ is in fact a diffeomorphism. Indeed, if $X$ is a vector field on $M$, we have $$ \frac12 \d_t |\partialhi_{t*}(X)|_{\bar g}^2=\bar g\left( \d_t\ \partialhi_{t*}(X), \partialhi_{t*}(X)\right)\geq -\left| \d_t \partialhi_{t*}(X)\right|_{\bar g}\, \left|\partialhi_{t*}(X)\right|_{\bar g}\geq -\,\|V(*_{\tilde\partialsi_t}\tilde\partialsi_t )\|_{C^{1}}\left|\partialhi_{t*}(X)\right|^2_{\bar g}\,. $$ Hence $$ \d_t \log\,|\partialhi_{t*}(X)|_{\bar g}\geq -\,\|V(*_{\tilde\partialsi_t}\tilde\partialsi_t )\|_{C^{1}} $$ and integrating we deduce $$ |\partialhi_{t*}(X)|_{\bar g}\geq |X|_{\bar g}\, {\rm e}^{- \,\int_{0}^t\|V(*_{\tilde\partialsi_s}\tilde\partialsi_s )\|_{C^{1}}\,ds}\,. $$ Since $\partialsi_t$ converges exponentially to $\bar \partialsi$ in $C^{\infty}$ topology, we have that $\|V(*_{\tilde\partialsi_t}\tilde\partialsi_t )\|_{C^{1}}$ decays exponentially so that \begin{equation}\label{<>} |\partialhi_{t*}(X)|_{\bar g}\geq C\,|X|_{\bar g}\,, \begin{equation}nd{equation} where $C$ is a positive constant which does not depend on $X$ and $t$. This last inequality holds true for $\partialhi_{\infty}$ and it follows that $\partialhi_\infty$ is a local diffeomorphism homotopic to the identity and hence a diffeomorphism. Since $\tilde\partialsi_t$ stays close to $\bar \partialsi$ in $C^\infty$-topology, up to choosing a smaller $\begin{equation}psilon$, \begin{equation}qref{exp} yields $$ \|\Delta_{\tilde \partialsi_t}\tilde \partialsi_t-2d(({\rm tr}\, \tilde T_t)*_{\tilde \partialsi_t}\tilde \partialsi_t))+\mathcal{L}_{V(\tilde\partialsi_t)}\tilde\partialsi_t\|_{C^{\infty}_{\tilde g_t}}\leq 2\|\Delta_{\tilde \partialsi_t}\tilde \partialsi_t-2d(({\rm tr}\, \tilde T_t) *_{\tilde \partialsi_t}\tilde \partialsi_t))+\mathcal{L}_{V(\tilde\partialsi_t)}\tilde\partialsi_t\|_{C^{\infty}_{\bar g}}\leq 2\kappa {\rm e}^{-\lambda t} $$ where $\tilde g_t$ is the metric induced by $\tilde \partialsi_t$. 
By diffeomorphism invariance it follows that \begin{equation} \label{stima} \|\partialartial_t \partialsi_t\|_{C^{\infty}_{g_t}}=\|\Delta_{\partialsi_t}\partialsi_t-2d(({\rm tr}\, T_t)*_{\partialsi_t}\partialsi_t))\|_{C^{\infty}_{g_t}} \leq 2\kappa {\rm e}^{-\lambda t} \begin{equation}nd{equation} where $g_t$ is the metric induced by $\partialsi_t$. Now if we write $$ \partialartial_t\partialsi_t=\alpha_t\wedge*\partialsi_t+3*_{\partialsi_t}\mathsf{i}_{\partialsi_t}(h_t) $$ we have in particular $$ \|\partialartial_t\partialsi_t\|_{C^{0}_{g_t}}^2=\|\alpha_t\wedge*_{\partialsi_t}\partialsi_t\|_{C^{0}_{g_t}}^2+\|3*_{\partialsi_t}\mathsf{i}_{\partialsi_t}(h_t)\|_{C^{0}_{g_t}}^2\,. $$ Thus \begin{equation}qref{stima} implies $\|h_t\|_{C^{0}_{g_t}}^2\leq C\kappa {\rm e}^{-\lambda t}\,,$ where $C$ is a positive universal constant. Moreover by \cite[Proposition 3.1]{Gri} $$ \partialartial_tg_t=\frac12 ({\rm tr}_{g_t}\, h_t)\,g_t-2h_t\,, $$ so that for a vector field $X\noindenteq 0$ on $M$ we have $$ |\partialartial_tg_t(X,X)|\leq \left(\tfrac12|{\rm tr}_{g_t}\, h_t|+2\|h_t\|_{C^0_{g_t}}\right) \,g_t(X,X) \leq C\|h_t\|_{C^0_{g_t}}\,g_t(X,X) $$ for some constant $C>0$ and integrating $\frac{\partialartial_tg_t(X,X)}{g_t(X,X)}$ we deduce $$ {\rm e}^{-\frac{C}{\lambda}}g_0\leq g_t\leq {\rm e}^{\frac{C}{\lambda}}g_0 $$ so that the metrics $g_t$ and $g_0$ are uniformly equivalent. It follows that $g_t$ is uniformly equivalent to $\bar g$ and so $\|\partialartial_t\partialsi_t\|_{C^0_{\bar g}}\leq C {\rm e}^{-{\lambda t}}$ for some constant $C>0$. Thus $\partialsi_t$ converges in $C^0_{\bar g}$-norm to some $4$-form $\partialsi_\infty$. On the other hand, by \begin{equation}qref{<>} we have $$ \begin{aligned} |\partialsi_{\infty}-\partialhi_{\infty}^*\bar \partialsi|_{\bar g} & \leq\lim_{t\to \infty}\left( |\partialsi_{\infty}-\partialsi_t|_{\bar g}+|\partialsi_{t}-\partialhi_t^*\bar \partialsi|_{\bar g}+|\partialhi_t^*\bar \partialsi-\partialhi_{\infty}^*\bar \partialsi |_{\bar g}\right)\\ & \leq\lim_{t\to \infty}\left( |\partialsi_{\infty}-\partialsi_t|_{\bar g}+C|\tilde\partialsi_{t}-\bar \partialsi|_{\bar g}+|(\partialhi_t^*-\partialhi_\infty^*)\bar\partialsi|_{\bar g}\right)=0\,, \begin{equation}nd{aligned} $$ so that $\partialsi_\infty = \partialhi_{\infty}^*\bar \partialsi$. The last part consists in showing that $\partialsi_t$ converges to $\partialsi_{\infty}$ in $C^{\infty}$-topology. We just describe the procedure and refer to \cite{L2} for details. First we have exponential estimates for the $\bar g$-norm of the curvature $\tilde R_t$ of $\tilde g_t$ and the first covariant derivatives of the torsion $\tilde T_t$ of $\tilde \partialsi_t$. Then for $t$ large enough we deduce corresponding estimates with respect to the $g_t$-norms (since $g_t$ is uniformly equivalent to $\bar g$) and finally by diffeomorphism invariance we have uniform bounds for the $g_t$-norm of the curvature $R_t$ of $g_t$ and the covariant derivative of the torsion $T_t$ of $\partialsi_t$. This allows us to use the a-priori Shi-type estimates for the Laplacian co-flow \cite[Theorem 2.1]{shi-type} (Note that the flow \begin{equation}qref{A=0} is a {\begin{equation}m reasonable} flow of ${\rm G}_2$-structures in the sense of \cite{shi-type}). Now the lower bound on the injectivity radius of $g_t$ (again by uniform equivalence) and the compactness theorem for $ {\rm G}_2$-structures \cite[Theorem 7.1]{L1} gives us the convergence of $\partialsi_t$ to $\partialsi_\infty$ in $C^{\infty}$-topology. 
\end{proof}

\subsection{Examples and Remarks}
It is known that in the case $A>0$ the modified Laplacian coflow may have stationary points which are not torsion-free. Here we observe that such stationary points are not stable in general. A class of examples is provided by {\em nearly parallel} ${\rm G}_2$-structures, which are characterized by the equations
\begin{equation}\label{NP}
d\psi=0 \quad \mbox{and} \quad d(*_\psi\psi)=\tau_0\psi\,,
\end{equation}
where $\tau_0$ is a constant. For instance, the standard ${\rm G}_2$-structure on the $7$-sphere ${\rm Spin}(7)/{\rm G}_2$ is nearly parallel. Let us study the evolution of a nearly parallel ${\rm G}_2$-structure $\bar\psi$ under equation \eqref{modified coflow} with $A \geq 0$. For $\psi$ nearly parallel one has
$$
\Delta_{\psi}\psi+2d((A-{\rm tr}\, T_{\psi})*_\psi \psi)= \tau_0 \left(2A-\tfrac{5}{2}\tau_0\right)\psi\,.
$$
It is immediate to note that if the torsion form $\tau_0$ equals $\frac45 A$, then $\bar\psi$ is stationary for \eqref{modified coflow} (see also \cite{Lotaysurvey}). In general, the modified Laplacian coflow starting from a nearly parallel ${\rm G}_2$-structure $\psi_0$ acts by rescaling, $\psi_t=c_t\psi_0$, where $c_t$ solves the ODE
\begin{equation}\label{nearly}
\tfrac{d}{dt}c_t=c_t^{3/4}\tau_0\left(2A-\tfrac{5}{2}c_t^{-1/4}\tau_0\right)\,,
\end{equation}
where $\tau_0$ is the torsion form of $\psi_0$ defined by \eqref{NP}. Now consider $A > 0$ fixed and take $\bar\psi$ nearly parallel and stationary. Take $\psi_0 = \mu \bar\psi$ with $\mu$ a positive constant. If $\mu > 1$ then $c_t$ is increasing, while if $\mu<1$ then $c_t$ is decreasing. In both cases the flow moves away from the stationary solution, and $\bar \psi$ is unstable. \\
We can use the same computation to illustrate another phenomenon. Grigorian noted in \cite{Gri} that the volume is increasing along the modified Laplacian coflow \eqref{modified coflow} if and only if the following inequality is satisfied for every $t$:
$$
|T_t|^2+{\rm tr}\, T_t \,(4A-3\, {\rm tr}\,T_t)>0\,.
$$
In the case $A=0$ the volume may decrease. Indeed, in the situation above we have
$$
\tfrac{d}{dt}c_t=-\tfrac{5}{2}c_t^{1/2}\tau_0^2\,,
$$
i.e.
$$
c_t=\left(1-\tfrac{5}{4}\tau_0^2\,t\right)^2\,.
$$
Since
$$
{\rm Vol}_{\psi_t}=c_t^{7/4}\, {\rm Vol}_{\psi_0}\,,
$$
the volume decreases.

Next we focus on examples of static solutions to the modified Laplacian coflow which are not torsion-free. To construct such examples we consider nilpotent Lie groups, and we work in their Lie algebras in an algebraic fashion.

\begin{ex}\label{EE1}{\em
Let $M=\mathbb T^{4}\times \mathbb{H}^3/\Gamma$, where $\mathbb T^{4}$ is the $4$-dimensional torus, $\mathbb H^3$ is the $3$-dimensional Heisenberg Lie group
$$
\mathbb{H}^3=\left\{\left[\begin{smallmatrix}1&x&z\\0&1&y\\0&0&1\end{smallmatrix}\right]\mid x,y,z \in \mathbb R\right\}
$$
and $\Gamma$ is the co-compact lattice of matrices in $\mathbb{H}^3$ with integral entries. Notice that $M$ can be regarded as the product of the Kodaira--Thurston manifold with the $3$-dimensional torus; it is a $2$-step nilmanifold admitting a global coframe $\{e^1,\dots, e^7\}$ which satisfies
$$
de^i=0, \quad i=1,2,3,4,5,7\,, \qquad de^6=e^1 \wedge e^7.
$$
Let
$$
\bar\varphi= e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}
$$
be the \lq\lq standard\rq\rq\ ${\rm G}_2$-structure with respect to the coframe we have fixed. (As usual we denote by $e^{ijk\ldots}$ the form $e^i\wedge e^j\wedge e^k\wedge \ldots$) An easy computation shows that $\bar\varphi$ is co-closed and that
$$
\bar \psi=*_{\bar\varphi}\bar\varphi=e^{4567}+e^{2367}+e^{2345}+e^{1357}-e^{1346}-e^{1256}-e^{1247}
$$
is a static solution to \eqref{A=0}. More generally, it can be noticed that every left-invariant ${\rm G}_2$-structure on $M$ of the form
$$
\varphi=c_1e^{123}+c_2e^{145}+c_3e^{167}+c_4e^{246}-c_5e^{257}-c_6e^{347}-c_7e^{356}
$$
gives a static solution to \eqref{A=0}, and that every left-invariant co-closed ${\rm G}_2$-structure on $M$ is static.
}
\end{ex}

\begin{ex}\label{EE2}{\em
Let $\mathfrak g$ be the nilpotent Lie algebra admitting a coframe $\{e^1,\dots,e^7\}$ satisfying
$$
d e^i=0,\quad i=1,3,5,7\,,\quad de^2=-e^{13}\,,\quad de^4=e^{15}\,,\quad de^6=e^{17}\,,
$$
and let $G$ be the simply connected Lie group having $\mathfrak g$ as Lie algebra. Then $G$ has a co-compact lattice $\Gamma$ and we set $M=G/\Gamma$. A direct computation gives that the standard ${\rm G}_2$-structure
$$
\varphi=e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}
$$
is coclosed and static with respect to the modified Laplacian coflow with $A=0$. However, in contrast with the previous example, on $M$ there are left-invariant coclosed ${\rm G}_2$-structures which are not static. For instance, if we consider
$$
\varphi=c_1^2e^{123}+c_2^2e^{145}+c_3^2e^{167}+c_4^2e^{246}-c_5^2e^{257}-c_6^2e^{347}-c_7^2e^{356}
$$
for constants $c_i$, then $\varphi$ is coclosed and the corresponding $\psi$ satisfies
$$
\Delta_{\psi}\psi-2d(({\rm tr}\, T_{\psi})*_\psi \psi)= \frac{2(c_{2}c_{4}c_{7}+c_{2}c_{5}c_{6}-c_{3}c_{4}c_{6})}{c_1^2c_{3}c_{5}c_{7}}\,e^{1357}\,.
$$
}
\end{ex}

\begin{rem}{\em
Note that the ${\rm G}_2$-structures in Example \ref{EE1} and Example \ref{EE2} cannot be torsion-free, since a (compact) nilmanifold $M$ cannot admit a left-invariant torsion-free ${\rm G}_2$-structure unless it is a torus.
}
\end{rem}

\section{From Theorem \ref{teoremone} to the stability of the balanced flow}\label{3}
We recall that, given a K\"ahler manifold $(M,\omega_0)$, the Calabi flow starting from $\omega_0$ is the geometric flow of K\"ahler forms governed by the equation
\begin{equation}\label{CF}
\partial_t\omega_t=i\partial \bar\partial s_{\omega_t}\,,\quad \omega_{|t=0}=\omega_0\,,
\end{equation}
where $s_{\omega_t}$ is the Riemannian scalar curvature of $\omega_t$. Many properties of the flow were proved in \cite{chen}. Here we recall the theorem of Chen and He about the stability of the Calabi flow.

\begin{theorem}[Chen--He]\label{Chen-He}
Let $(M,\bar \omega)$ be a compact K\"ahler manifold with constant scalar curvature. Then there exists $\delta>0$ such that if $\omega_0$ is a K\"ahler metric satisfying
$$
\|\omega_0-\bar\omega\|_{C^{\infty}}<\delta\,,
$$
then the Calabi flow starting from $\omega_0$ is immortal and converges in the $C^{\infty}$ topology to a constant scalar curvature K\"ahler metric in $[\bar \omega]$.
\end{theorem}

The Calabi flow was generalized to the context of balanced geometry by the authors in \cite{Wlucione} (see also \cite{Wlucione2} for a generalization in a different direction).
A Hermitian metric on a complex manifold is called {\em balanced} if its fundamental form is co-closed (instead of closed, as in the K\"ahler case). Given a compact balanced manifold $(M,\omega_0)$ of complex dimension $n$, the {\em balanced flow} consists in evolving $\omega_0$ by
\begin{equation}\label{bflow}
\begin{cases}
& \partial_t*_t\omega_t=i\partial\bar{\partial} *_{t}(\rho_t \wedge \omega_t)+(n-1)\Delta_{BC}^t*_{t}\omega_t\\
&d\omega_t^{n-1}=0\\
& \omega_{|t=0}=\omega_0\,,
\end{cases}
\end{equation}
where $*_t$, $\rho_t$ and $\Delta_{BC}^t$ are the Hodge star operator, the Chern--Ricci form and the modified Bott--Chern Laplacian of $\omega_t$, respectively (see \cite{Wlucione} for details). Also this flow fits in the class of flows of Theorem \ref{teoremone}, when we consider the following Hodge system:
$$
\begin{CD}
\Omega^{n-2,n-2} @>D=i\partial \bar\partial >> \Omega^{n-1,n-1} \\
@VV\Delta_D=\Delta_A V @VV{\rm Id}V\\
\Omega^{n-2,n-2} @<D^*<< \Omega^{n-1,n-1}
\end{CD}
$$
where $\Delta_A$ is the modified Aeppli Laplacian
$$
\Delta_{A}:=\overline{\partial}^*\partial^*\partial\overline{\partial}+\partial\overline{\partial}\,\overline{\partial}^*\partial^*+\partial\overline{\partial}^*\overline{\partial}\partial^*+\overline{\partial}\partial^*\partial\overline{\partial}^*+\partial\partial^*+\overline{\partial}\,\overline{\partial}^*\,.
$$

\begin{theorem}\label{stabbalanced}
Let $(M,\bar \omega)$ be a compact Ricci-flat K\"ahler manifold of complex dimension $n$. Then there exists $\delta>0$ such that if $\omega_0$ is a balanced metric on $M$ satisfying $\|\omega_0-\bar \omega\|_{C^{\infty}}<\delta$, then the flow \eqref{bflow} with initial datum $\omega_{|t=0}=\omega_0$ exists for all $t\in [0,\infty)$ and, as $t\to \infty$, it converges in the $C^{\infty}$ topology to a balanced form $\omega_{\infty}$ satisfying
$$
i\partial\bar{\partial} *_{\omega_{\infty}}(\rho_{\omega_{\infty}} \wedge \omega_{\infty})+(n-1)\Delta_{BC}^{\omega_\infty}*_{\omega_{\infty}}\omega_{\infty}=0\,.
$$
\end{theorem}

\begin{proof}
A Hermitian form $\omega$ on a complex manifold $M$ is determined by an $(n-1,n-1)$-form $\varphi$ which is positive in the sense that
\begin{equation}\label{balancedform}
\varphi(Z_1,\dots, Z_{n-1},\bar Z_1,\dots, \bar Z_{n-1})>0
\end{equation}
for every set $\{Z_1,\dots,Z_{n-1}\}$ of linearly independent vector fields of type $(1,0)$ on $M$ (here $n$ is the complex dimension of $M$). Indeed, once such a form $\varphi$ is given, there exists a unique Hermitian form $\omega$ such that $*_{\omega}\omega=\varphi$. We denote by $\mathcal E\subseteq \Lambda^{n-1,n-1}_\R$ the bundle whose sections are the real $(n\!-\!1,n\!-\!1)$-forms satisfying \eqref{balancedform}. The flow \eqref{bflow} can alternatively be written in terms of $\varphi$ as
\begin{equation}\label{theflow}
\begin{cases}
& \partial_t\varphi_t=i\partial\bar{\partial} *_t(\rho_t \wedge *_t\varphi_t)+(n-1)\Delta_{BC}^t\varphi_t\\
&d\varphi_t=0\\
& \varphi_{|t=0}=\varphi_0\,,
\end{cases}
\end{equation}
and we denote by $Q\colon C^{\infty}(M,\mathcal E)\to \Omega^{n-1,n-1}_\R$ the operator
$$
Q(\varphi)=i\partial\bar{\partial} *_{\varphi}(\rho_{\varphi} \wedge *_\varphi\varphi)+(n-1)\Delta_{BC}^\varphi \varphi\,.
$$
In order to apply Theorem \ref{teoremone}, we show that for a Ricci-flat K\"ahler form $\bar\omega$ on $M$ the corresponding $\bar\varphi=*_{\bar\omega} \bar \omega$ is such that
\begin{enumerate}
\item[1.] $Q(\bar \varphi)=0$;
\item[2.] the restriction of $L_{\bar \varphi}$ to $D\,\Omega^{n-2,n-2}$ is symmetric and negative definite with respect to the $L^2$ inner product induced by $\bar \omega$.
\end{enumerate}
Item 1 is trivial, and item 2 can be deduced from \cite[Section 5]{Wlucione}, but we prove it for the sake of completeness. Let $\{\omega_t\}_{t\in (-\epsilon,\epsilon)}$ be a smooth curve of balanced forms which is $\bar \omega$ at $t=0$ and such that the corresponding $(n\!-\!1,n\!-\!1)$-forms $\varphi_t$ are in the Bott--Chern cohomology class of $\bar\varphi$, and let
$$
\chi=\partial_{t|t=0}*_{t}\omega_t\,.
$$
Then we can write
$$
\chi=h_1\bar\varphi+*_{\bar \omega} h_0
$$
for a smooth function $h_1$ and a $(1,1)$-form $h_0$ such that $h_0\wedge\bar\omega^{n-1}=0$. In this way
$$
\partial_{t|t=0}\omega_t=\frac{h_1}{n-1}\bar \omega-h_0\,,
$$
see \cite[Lemma 2.5]{Wlucione}. Since $\bar \rho=0$ we have
$$
\partial_{t|t=0}\, i\partial\bar{\partial} *_{t}(\rho_t \wedge \omega_t)= i\partial\bar{\partial} *_{\bar \omega}(\dot \rho \wedge \bar \omega)\,,
$$
where we have set $\dot \rho= \partial_{t|t=0} \rho_{t}$. In view of \cite[Lemma 5.1]{Wlucione}, $\dot\rho=-in\partial \bar\partial h_1$, and so
$$
\partial_{t|t=0}\, i\partial\bar{\partial} *_t(\rho_t \wedge \omega_t)=n\,\partial\bar{\partial} *_{\bar \omega}(\partial\bar\partial h_1\wedge \bar\omega)\,.
$$
On the other hand, it is clear that for a curve of Bott--Chern-cohomologous $(n\!-\!1,n\!-\!1)$-forms $\varphi_t$ starting at $\bar \varphi$ we have
$$
\partial_{t|t=0}\Delta^{\varphi_t}_{BC}\varphi_t=- \,\partial \bar\partial *_{\bar \omega} \partial \bar\partial\, \dot \omega=- \,\partial \bar\partial *_{\bar \omega} \partial \bar\partial\,\left(\frac{h_1}{n-1}\bar \omega-h_0\right)\,.
$$
We then obtain
\begin{eqnarray*}
L_{\bar\varphi}(\psi) & = &\partial_{t|t=0}\,i\partial\bar{\partial} *_t(\rho_t \wedge \omega_t)+(n-1)\,\partial_{t|t=0}\Delta^{\varphi_t}_{BC}\varphi_t=(n-1)\,\partial \bar\partial *_{\bar \omega} \partial \bar\partial\,\left(h_1\bar \omega-h_0\right)\,.
\end{eqnarray*}
Moreover, since $\bar\omega$ is K\"ahler, we have
$$
\begin{aligned}
\Delta^{\bar\omega}_{BC}(\psi) & = - \partial\bar\partial*_{\bar\omega}\partial\bar\partial*_{\bar\omega} \psi = - \partial\bar\partial*_{\bar\omega}\partial\bar\partial*_{\bar\omega}(h_1\bar\varphi+*_{\bar \omega} h_0) \\
& = - \partial\bar\partial*_{\bar\omega}\partial\bar\partial(h_1\bar\omega) + \partial\bar\partial*_{\bar\omega}\partial\bar\partial h_0 = - \partial\bar\partial*_{\bar\omega}\partial\bar\partial( h_1 \bar\omega -h_0)
\end{aligned}
$$
and so
$$
L_{\bar\varphi}(\psi) =-(n-1)\Delta^{\bar\omega}_{BC}(\psi)\,,
$$
which implies the statement.
\end{proof}

\begin{rem} {\em
It is quite natural to wonder whether Theorem \ref{stabbalanced} can be improved by showing that the limit balanced metric is actually Calabi--Yau, or by proving stability around more general static solutions of the flow, such as constant scalar curvature K\"ahler metrics, or even balanced metrics $\omega$ satisfying
\begin{equation}\label{GB}
i\partial\bar{\partial} *(\rho \wedge \omega)+(n-1)\Delta_{BC}*\omega=0\,.
\end{equation}
These improvements cannot be easily deduced from our Theorem \ref{teoremone} and will be the subject of future study. On the other hand, Theorem \ref{teoremone} suggests that balanced metrics satisfying \eqref{GB} should be regarded as natural generalizations of extremal K\"ahler metrics to the context of balanced geometry (for a generalization in another direction see \cite{fei}), and the problem of the existence and uniqueness of such metrics in a fixed Bott--Chern cohomology class arises. More generally, it would be interesting to compare the geometry of extremal K\"ahler metrics with that of balanced metrics satisfying \eqref{GB}.
}
\end{rem}

\section{Proof of Theorem \ref{teoremone}}
In this last section we prove Theorem \ref{teoremone}. The scheme of the proof resembles that of the main theorem of \cite{witt1} and of \cite{Wlucione2}. First we need to recall some basic facts about the category of tame Fr\'echet spaces and tame maps (see \cite{HamiltonNash} for the relevant details).

A {\em tame Fr\'echet space} is a vector space $\mathcal V$ endowed with a topology given by an increasing countable family of seminorms $\{|\cdot|_n\}$. Thus a sequence $\{x_n\}\subseteq \mathcal V$ is {\em convergent} if it converges with respect to each seminorm. A continuous map $F\colon (\mathcal V,|\cdot|_n)\to (\mathcal W,|\cdot|'_n)$ between two tame Fr\'echet spaces is called {\em tame} if for every $x\in \mathcal V$ there are a neighborhood $U_x$ of $x$, a natural number $r$ and positive numbers $b,C_n$ such that
$$
|F(y)|'_n\leq C_n(1+|y|_{n+r})
$$
for every $y\in U_x$ and $n>b$. A differentiable map between tame Fr\'echet spaces is called {\em smooth tame} if all its derivatives are tame maps. The main relevant result is the celebrated {\em Nash--Moser theorem}.

\begin{theorem}[\bf{Nash--Moser}]\label{Nash-Moser}
Let $\mathcal{V}$, $\mathcal{W}$ be tame Fr\'echet spaces and let $\mathcal{U}$ be an open subset of $\mathcal{V}$. Let $F\colon \mathcal U\to \mathcal W$ be a smooth tame map. If the differential of $F$, $F_{*|x}\colon \mathcal V\to \mathcal W$, is an isomorphism for every $x\in\mathcal{U}$ and the map $(x,y) \mapsto F_{*|x}^{-1}y$ is smooth tame, then $F$ is locally invertible with smooth tame local inverses.
\end{theorem}

Let $\pi\colon E\to M$ be a vector bundle over a compact oriented Riemannian manifold $(M,g)$ with a metric $h$ along its fibres. Once a connection $\nabla$ on $E$ preserving $h$ is fixed, the space $C^{\infty}(M,E)$ of global smooth sections of $E$ has a natural structure of tame Fr\'echet space given by the Sobolev norms $\|\cdot\|_{H^n}$ induced by $h$, $\nabla$ and the volume form of $g$.

Fix now a closed interval $[a,b]$ and consider the space of time-dependent partial differential operators $P\colon C^{\infty}(M \times [a,b],E)\to C^{\infty}(M \times [a,b],E)$ of degree at most $r$. This space is tame Fr\'echet with respect to the family of seminorms
$$
|[P]|_n=\sum_{jr\leq n} [\partial_t^{j}P]_{n-jr}\,,
$$
where $[P]_n$ is the supremum of the norm of $P$ and of its spatial covariant derivatives up to order $n$. We can now focus on the setting described in the introduction, considering a Hodge system $(E_-,E,D,\Delta_D)$ on $M$, with $\mathcal E, D_+,Q$ as in Section \ref{2}, and studying the flow \eqref{problem} under the assumptions 1, 2, 3.
Let us fix a connection $\nabla$ on $E$ and define the spaces
$$
\mathcal{F}[a,b]=DC^{\infty}(M\times [a,b],E_{-})\,,\quad \mathcal{G}[a,b]=\mathcal{F}[a,b]\times DC^{\infty}( M,E_-)\,.
$$
Both spaces have a structure of tame Fr\'echet space given by the gradings
$$
\|\beta\|_{\mathcal{F}^n[a,b]}=\sum_{2rj\leq n}\int_{a}^{b}\|\partial_t^{j}\beta_t \|_{H^{n-2rj}}\, dt
$$
and
$$
\|(\beta,\sigma)\|_{\mathcal{G}^n[a,b]}=\|\beta \|_{\mathcal{F}^n[a,b]}+\|\sigma\|_{H^{n}}\,,
$$
respectively. If we fix $\bar\eta \in \Phi=\ker D_+\cap C^{\infty}(M,\mathcal E)$, then we can set
$$
\mathcal{U}=\{\beta\in \mathcal{F}[a,b]\,\,:\,\, \bar\eta+\beta_t \in U \mbox{ for every }t\in [a,b]\}\,,
$$
where, according to Section \ref{2}, $U=\{\bar\eta+D\gamma \,\,:\,\,\gamma\in C^\infty(M,E_-)\}\,\cap\, C^{\infty}(M,\mathcal E)$. Note that $\mathcal U$ is open in $\mathcal{F}[a,b]$.

The following theorem is proved in \cite{positive} for second order operators and extended in \cite{Wlucione} to operators of arbitrary degree.

\begin{theorem}\label{CRELLE}
Let
$$
F\colon \mathcal{U}\to\mathcal G[a,b]\,,\quad F(\beta)=\left(\partial_t \beta-Q(\bar\eta+\beta),\beta_a \right)\,.
$$
Then
\begin{enumerate}
\item[1.] $F$ is smooth tame;
\item[2.] $F_{*|\beta}$ is an isomorphism for every $\beta\in\mathcal{U}$;
\item[3.] the map $\mathcal{U}\times \mathcal{G}[a,b]\to \mathcal{F}[a,b]$, $(\beta,\psi)\mapsto F_{*|\beta}^{-1}\psi$, is smooth tame.
\end{enumerate}
\end{theorem}

The starting point of the proof of Theorem \ref{teoremone} is the following weak stability result, which is a consequence of Theorem \ref{CRELLE}.

\begin{prop} \label{weakstability}
Let $\bar\varphi\in Q^{-1}(0)$. For every $T>0$ and $\varepsilon>0$ there exists $\delta>0$ such that if $\varphi_0\in \Phi$ satisfies
$$
\varphi_0-\bar \varphi\in DC^{\infty}(M,E_{-})\,,\quad \|\varphi_0-\bar\varphi\|_{C^\infty}\leq \delta\,,
$$
then there exists a smooth solution $\{\varphi_t\}_{t \in [0,T]}$ to \eqref{problem} such that
$$
\|\varphi-\bar \varphi\|_{\mathcal F^{n}[0,T]}\leq \varepsilon \quad\mbox{ for every }n\in \mathbb N\,.
$$
\end{prop}

\begin{proof}
We use Theorem \ref{CRELLE} with $\bar\eta=\bar\varphi$. Since $F(0)=(0,0)$, Theorem \ref{CRELLE} together with the Nash--Moser Theorem \ref{Nash-Moser} implies that there exist an open neighborhood $\mathcal U'$ of $0$ in $\mathcal U$ and an open neighborhood $\mathcal V'$ of $(0,0)$ in $\mathcal G[0,T]$ such that $F\colon \mathcal U'\to \mathcal V'$ is invertible with smooth tame inverse. By choosing $\delta$ small enough we may assume that $(0,\varphi_0-\bar\varphi) \in \mathcal V'$, so we can take $\beta \in \mathcal U'$ such that $F(\beta)=(0,\varphi_0-\bar\varphi)$. Hence $\varphi_t=\bar\varphi+\beta_t$ satisfies
$$
\partial_t\varphi_t=Q(\varphi_t)\,,\quad \varphi_{|t=0}=\varphi_0\,.
$$
Since $F^{-1}$ is continuous, if we fix $\varepsilon>0$ and choose $\delta$ small enough, we have $\|\varphi-\bar\varphi\|_{\mathcal F^n[0,T]}\leq \varepsilon$ for every $n\in \mathbb N$, and the claim follows.
\end{proof}

The next step is the following interior estimate.

\begin{lemma}[Interior estimate] \label{intest_gen}
For every $n,T>0$ and $\epsilon \in (0,T)$ there exist $\delta , C> 0$ and $l=l(n) \in \mathbb{N}$, with $C$ depending on $T$, $\epsilon$ and an upper bound on $\delta$, such that if $\{\varphi_t\}_{t\in[0,T ]}$ is a smooth curve in $U$ with
$$
\| \varphi-\bar\varphi\|_{\mathcal F^{l}[0,T]} \leq \delta
$$
and $\sigma\in \mathcal F[0,T]$ satisfies
$$
\partial_t\sigma=L_{\varphi}\sigma\,,
$$
then
\begin{equation}\label{induction}
\|\sigma\|_{\mathcal{F}^{2nr}[t_0+\epsilon ,T]}\leq C\|\sigma\|_{\mathcal{F}^0[t_0 ,T]}
\end{equation}
for every $t_0 \in [0,T-\epsilon]$.
\end{lemma}

\begin{proof}
We prove the statement by induction on $n$. For $n=0$ the claim is trivial, so we assume the statement true up to $N$. By \cite[Lemma 4.6]{Wlucione} there exist $\delta, C \in \R^+$, with $C$ depending on $T$ and an upper bound on $\delta$, such that for $t\in [0,T)$ and $\sigma\in \mathcal F[0,T]$ we have
\begin{eqnarray*}
\|\sigma\|_{ \mathcal{F}^{N+2r}[t,T]} & \leq & C\left( \|\partial_t\sigma-L_{\varphi}(\sigma)\|_{ \mathcal{F}^{N}[0,T]} +\|\sigma_{0}\|_{H^{N+r}}\right)\\
& & +\,C|[L_{\varphi}]|_N \left( \|\partial_t\sigma-L_{\varphi}(\sigma)\|_{ \mathcal{F}^{0}[0,T]}+\|\sigma_{0}\|_{H^{N}}\right) \\
& \leq & (1+C)|[L_{\varphi}]|_N \left( \|\partial_t\sigma-L_{\varphi}(\sigma)\|_{ \mathcal{F}^{N}[0,T]} +\|\sigma_{0}\|_{H^{N+r}}\right)\,.
\end{eqnarray*}
If $\| \varphi-\bar\varphi\|_{\mathcal F^{l}[0,T]}< \delta$ for $l$ big enough, then $|[L_{\varphi}]|_N \leq 1+|[L_{\bar\varphi}]|_N$, so we have
\begin{equation}\label{est1}
\|\sigma\|_{ \mathcal{F}^{N+2r}[t,T]}\leq (1+C)(1+ |[L_{\bar\varphi}]|_N)\left( \|\partial_t\sigma-L_{\varphi}(\sigma)\|_{ \mathcal{F}^{N}[0,T]}+\|\sigma_{0}\|_{H^{N+r}}\right)
\end{equation}
for every $\sigma \in \mathcal F[0,T]$.

Now take $\sigma\in \mathcal F[0,T]$ a solution of the linear equation $\partial_t\sigma=L_{\varphi}\sigma$ and fix $\epsilon \in (0,T)$. Choose a smooth function $\chi\colon \R \to [0,1]$ such that
$$
\chi(t)=0 \quad \mbox{ for } t \leq t_0+\epsilon/2\,,\quad \chi(t)=1\quad \mbox{ for } t \geq t_0+\epsilon\,.
$$
Set $\tilde \sigma=\chi \sigma$. Then $\partial_t\tilde \sigma =\dot \chi \sigma+\chi \partial_t \sigma$ and
$$
\partial_t \tilde \sigma-L_\varphi(\tilde \sigma)=\dot \chi \sigma\,.
$$
Hence, using \eqref{est1}, we find $C'>0$ depending only on $\epsilon$ such that
\begin{multline*}
\|\sigma\|_{\mathcal F^{2r(N+1)}[t_0+\epsilon,T]}\leq\|\tilde \sigma\|_{\mathcal F^{2r(N+1)}[t_0+\epsilon/2,T]}\leq (1+C)(1+ |[L_{\bar\varphi}]|_N) \|\dot \chi \sigma\|_{\mathcal F^{2rN}[t_0+\epsilon/2,T]} \\
\leq C'(1+C)(1+ |[L_{\bar\varphi}]|_N)\,\| \sigma\|_{\mathcal F^{2rN}[t_0+\epsilon/2,T]}\,,
\end{multline*}
and the induction assumption implies the statement.
\end{proof}

Now we need a general lemma for families of symmetric operators on Hilbert spaces.
Here we will say that, given a Hilbert space $\mathcal{H}_1$ continuously embedded in a Hilbert space $\mathcal{H}_2$, an operator $L\colon \mathcal{H}_1 \to \mathcal{H}_2$ is symmetric if $\langle L z_1,z_2\rangle_{\mathcal{H}_2} = \langle z_1,L z_2 \rangle_{\mathcal{H}_2}$ for every $z_1, z_2 \in \mathcal{H}_1$. Analogously, we will say that $L$ is negative semidefinite if $\langle L z,z\rangle_{\mathcal{H}_2}\leq 0$ for every $z \in {\mathcal{H}_1}$.

\begin{lemma} \label{Hilbert}
Let $(X,\bar x)$ be a pointed metric space and let $\mathcal{H}_1$ and $\mathcal{H}_2$ be two Hilbert spaces with $\mathcal{H}_1$ continuously embedded in $\mathcal{H}_2$. Let $\{L_x\}_{x \in X}$ be a continuous family of bounded symmetric operators $L_x \colon \mathcal{H}_1 \to \mathcal{H}_2$. Assume that $L_{\bar x}$ is negative semidefinite and that there exists $C>0$ such that
\begin{equation} \label{boh}
\|z_0\|_{\mathcal{H}_1} \leq C \| z_0\|_{\mathcal{H}_2} \quad \mbox{for every $z_0 \in \ker L_{\bar x}$}
\end{equation}
and
\begin{equation} \label{inversa}
\|z_1\|_{\mathcal{H}_1} \leq C \|L_{\bar x} z_1\|_{\mathcal{H}_2} \quad \mbox{for every $z_1 \in (\ker L_{\bar x})^{\perp}$.}
\end{equation}
Then for every $\epsilon>0$ there exists $\delta>0$ such that if $x \in X$ satisfies $d(x,\bar x)<\delta$, then
\begin{equation}
\langle L_{x} z ,z\rangle_{\mathcal{H}_2} \leq (1-\epsilon) \langle L_{\bar x} z ,z\rangle_{\mathcal{H}_2}+\epsilon \|z\|^2_{\mathcal{H}_2} \quad \mbox{for every $z\in \mathcal{H}_1$.}
\end{equation}
\end{lemma}

\begin{proof}
Fix $\epsilon >0$. Let $T:=-\epsilon L_{\bar x}$ and $V_x:=L_{\bar x}-L_x$ for every $x \in X$. Then $T$ is symmetric and positive semidefinite. Let us write $z=z_0+z_1$ according to the decomposition $\mathcal{H}_1=\ker L_{\bar x} \oplus (\ker L_{\bar x})^\perp$. Thus, for $b>0$ arbitrarily small, using also \eqref{inversa} we can find $\delta>0$ such that if $d(x,\bar x)\leq \delta$ we have
$$
\|V_x z_1\|_{\mathcal{H}_2}\leq b \epsilon C^{-1} \|z_1\|_{\mathcal{H}_1} \leq b \|T z\|_{\mathcal{H}_2}
$$
for every $z\in \mathcal{H}_1$. Consequently, using \eqref{boh} and up to shrinking $\delta$, we have
$$
\|V_x z\|_{\mathcal{H}_2}\leq \|L_x z_0 \|_{\mathcal{H}_2} + \|V_x z_1\|_{\mathcal{H}_2} \leq a\|z\|_{\mathcal{H}_2} + b \|T z \|_{\mathcal{H}_2}
$$
with $a>0$ arbitrarily small. Taking $a=\frac{\epsilon}{2}$ and $b=\frac12$, and using \cite[Theorem 9.1]{weidmann}, we obtain $\langle (T +V_x) z, z \rangle_{\mathcal{H}_2} \geq -\epsilon \|z\|_{\mathcal{H}_2}^{2}$ for every $z \in {\mathcal{H}_1}$, and the claim follows.
\end{proof}

We now apply the previous lemma to the family of operators $L_\varphi$ in the following situation: $\mathcal{H}_1 = H^{2r}(M,E)$, {\em i.e.}\ the space of sections of $E$ whose local components have square integrable derivatives up to order $2r$; $\mathcal{H}_2 = L^{2}(M,E)$; $(X,\bar x) = (\Phi,\bar \varphi)$, where on $\Phi$ we consider the distance induced by $H^{\bar n}(M,E)$ with $\bar n=\frac{\dim M}{2}+2r+1$. The choice of $\bar n$ ensures, via the Sobolev embedding theorem, that $\{L_{\varphi}\}_{\varphi\in \Phi}$ is a {\em continuous} family of bounded operators.
Since inequality \eqref{inversa} follows from the Fredholm alternative and inequality \eqref{boh} holds by elliptic regularity of $L_{\bar \varphi}$, we get the following corollary.

\begin{cor} \label{corollario_a}
For every $a>0$ there exists $\delta>0$ such that if $\varphi\in C^\infty(M,\mathcal E)$ satisfies $\|\varphi-\bar\varphi\|_{H^{\bar n}}<\delta$, then
\begin{equation}
\langle L_{\varphi}(z),z\rangle_{L^2} \leq (1-a) \langle L_{\bar\varphi}(z),z\rangle_{L^2}+a \|z\|^2_{L^2}
\end{equation}
for every $z\in H^{2r}(M,E)$.
\end{cor}

Next we deduce the following trace-type result in $C^\infty(M \times [0,T],E_-)$.

\begin{prop}\label{mante}
For every $n\in \mathbb N$ and $\ell \in \R_+$ there exist a positive constant $C$ and $m \in {\mathbb N}$ such that
$$
\|\beta_t\|_{H^n}\leq C\|\beta\|_{\mathcal F^{m}I}
$$
for every $\beta\in C^\infty(M \times I,E_-)$ and every $t \in I$, where $I\subseteq \R$ is a closed interval of length $\ell$.
\end{prop}

\begin{proof}
Arguing exactly as in \cite[Proposition 4.1]{MM}, for every $s \in \mathbb N$ we get the inequality
$$
\|\nabla^s\beta\|_{C^0(M\times I,E_-)} \leq C\|\beta\|_{\mathcal F^{m}I}
$$
for $m>\max\left\{\frac{s+\dim M+2r}{4r},\frac{s}{2r}\right\}$ and $C$ independent of $\beta$. (Here $\nabla^s$ denotes spatial derivatives only.)
\end{proof}

\begin{cor}\label{cormante}
For every $T>0$, $n\in \mathbb N$ and $\epsilon'>0$ there exist $C>0$ and $m \in {\mathbb N}$ such that every $\beta \in C^\infty(M \times [0,T+\epsilon'],E_-)$ satisfies
$$
\|\beta_t\|_{H^n}\leq C\|\beta\|_{\mathcal F^{m}[t,T+\epsilon']}
$$
for every $t\in [0,T]$.
\end{cor}

\begin{proof}
It is enough to apply Proposition \ref{mante} with $I=[t,t+\epsilon']$. In this way
$$
\|\beta_t\|_{H^n}\leq C\|\beta\|_{\mathcal F^{m}[t,t+\epsilon']} \leq C\|\beta\|_{\mathcal F^{m}[t,T+\epsilon']}
$$
with $C$ independent of $\beta$ and $t$, and the claim follows.
\end{proof}

\begin{lemma}[Exponential decay]\label{Wexp}
Let $\epsilon>0$ and $T>\epsilon$. There exists $\delta>0$ such that if $\varphi_0\in \Phi$ satisfies
\begin{equation}\label{phi0}
\varphi_0-\bar \varphi\in DC^{\infty}(M,E_{-})\,,\quad \|\varphi_0-\bar\varphi\|_{C^\infty}\leq \delta\,,
\end{equation}
then the solution $\varphi_t$ to \eqref{problem} is defined on $M\times [0,T]$ and satisfies
$$
\|Q(\varphi_t)\|_{H^n}\leq C \|Q(\varphi_0)\|_{L^2} \,{\rm e}^{-\lambda t} \quad\mbox{ for every }t\in [\epsilon,T]\,,
$$
where $\lambda$ is half the first positive eigenvalue of $-L_{\bar\varphi}$ and $C$ is a constant depending on $n$, $\epsilon$, $T$ and an upper bound on $\delta$.
\end{lemma}

\begin{proof}
Fix an arbitrary small time $\epsilon'>0$. Proposition \ref{weakstability} implies that there exists $\delta>0$ such that if $\varphi_0$ satisfies \eqref{phi0}, then problem \eqref{problem} has a solution $\varphi\in C^{\infty}(M\times [0,T+2\epsilon'],E)$ with $\|\varphi-\bar\varphi\|_{\mathcal F^l[0,T+2\epsilon']}$ arbitrarily small for every $l$.
Differentiating in time the flow equation $\partial_t\varphi_t=Q(\varphi_t)$ gives
$$
\partial_t^2\varphi_t=\partial_tQ(\varphi_t)=L_{\varphi_t}(\partial_t\varphi_t)=L_{\varphi_t}Q(\varphi_t)\,,
$$
and hence
$$
\partial_t\|Q(\varphi_t)\|^2_{L^2}=2 \langle \partial_tQ(\varphi_t),Q(\varphi_t)\rangle_{L^2}=2 \langle L_{\varphi_t}Q(\varphi_t),Q(\varphi_t)\rangle_{L^2}
$$
for every $t\in [0 ,T+2\epsilon']$. In view of Corollary \ref{cormante} and Corollary \ref{corollario_a} we can choose the initial $\delta$ so small that for every $t\in [0,T+\epsilon']$ we have
$$
\langle L_{\varphi_t}Q(\varphi_t),Q(\varphi_t)\rangle_{L^2}\leq (1-a) \langle L_{\bar \varphi}Q(\varphi_t),Q(\varphi_t)\rangle_{L^2}+ a \|Q(\varphi_t)\|^2_{L^2}
$$
with $a=\frac{\lambda}{2\lambda+1}$. Taking into account that $\langle L_{\bar \varphi} Q(\varphi_t),Q(\varphi_t) \rangle_{L^2} \leq -2\lambda \|Q(\varphi_t)\|^2_{L^2}$, we obtain
$$
\langle L_{\varphi_t}Q(\varphi_t),Q(\varphi_t)\rangle_{L^2}\leq -{\lambda}\, \|Q(\varphi_t)\|^2_{L^2}\,.
$$
So
$$
\partial_t\|Q(\varphi_t)\|^2_{L^2} \leq -2\lambda \|Q(\varphi_t)\|^2_{L^2}
$$
and by Gronwall's lemma we get
$$
\|Q(\varphi_t)\|^2_{L^2}\leq {\rm e}^{- 2\lambda t} \|Q(\varphi_0)\|^2_{L^2}
$$
for every $t \in [0,T+\epsilon']$. Consequently,
\begin{equation} \label{exp2}
\|Q(\varphi)\|^2_{\mathcal F^{0}[t,T+\epsilon']}=\int_{t}^{T+\epsilon'}\|Q(\varphi_s)\|_{L^2}^2\,ds\leq \|Q(\varphi_0)\|^2_{L^2}\int_{t}^{T+\epsilon'}{\rm e}^{-2\lambda s}\, ds\leq \|Q(\varphi_0)\|^2_{L^2}\, \frac{{\rm e}^{-2\lambda t}}{2\lambda}\,.
\end{equation}
By Corollary \ref{cormante} we find $m$ such that for every $t\in [0,T]$
$$
\|Q(\varphi_t)\|_{H^{n}} \leq C \|Q(\varphi)\|_{\mathcal{F}^{m}[t,T+\epsilon']}\,.
$$
By Lemma \ref{intest_gen} we can take $l$ big enough so that, if $\|\varphi-\bar\varphi\|_{\mathcal{F}^l[0,T+2\epsilon']} \leq\delta$, we have
$$
\|Q(\varphi)\|_{\mathcal{F}^{m}[t,T+\epsilon']} \leq C\|Q(\varphi)\|_{\mathcal{F}^{0}[t-\epsilon,T+\epsilon']}
$$
for every $t\in [\epsilon,T+\epsilon']$. Putting these estimates together with \eqref{exp2}, we have
$$
\|Q(\varphi_t)\|_{H^{n}} \leq C \|Q(\varphi_0)\|_{L^2}\,{\rm e}^{{-\lambda} t}
$$
for $t \in [\epsilon,T]$, as required.
\end{proof}

Now we are ready to prove the main theorem.

\begin{proof}[Proof of Theorem $\ref{teoremone}$]
Let $T>0$ and $\epsilon \in (0,\frac{T}{2})$ be fixed. By Lemma \ref{Wexp}, there exists $\delta'>0$ such that if $\|\varphi_0-\bar\varphi\|_{C^\infty} \leq \delta'$, then the solution $\varphi_t$ to the geometric flow \eqref{problem} exists on $[0,T]$ and, for every $n\in \mathbb N$,
\begin{equation}
\|Q(\varphi_t)\|_{H^{n}}\leq C\|Q(\varphi_0)\|_{L^2}{\rm e}^{-\lambda t} \quad\mbox{ for every }t\in [\epsilon,T]\,,
\end{equation}
for some $C>0$ depending on $n$, $\epsilon$, $T$ and an upper bound on $\delta'$. Now we choose $\delta \leq \delta'$ such that if $\|\varphi_0-\bar\varphi\|_{C^{\infty}}\leq \delta$ then
\begin{equation}\label{condizione}
C\|Q(\varphi_0)\|_{L^2}\frac{{\rm e}^{-\lambda \epsilon}}{\lambda}\sum_{j=0}^{\infty}{\rm e}^{-\lambda j(T-\epsilon)}+\|\varphi_{\epsilon}-\bar\varphi\|_{H^{n}}\leq \delta'\,.
\end{equation}
We show that $\varphi$ can be extended to $M\times [0,\infty)$ and converges, as $t\to \infty$, to an element of $U$ lying in the zero level set of $Q$. We have
$$
\begin{aligned}
\|\varphi_{t}-\bar\varphi\|_{H^{n}}=&\left\|\int_{\epsilon}^{t}Q(\varphi_\tau)\,d\tau+\varphi_\epsilon-\bar\varphi\right\|_{H^{n}} \leq \int_{\epsilon}^{t}\|Q(\varphi_\tau)\|_{H^{n}}\,d\tau+\|\varphi_\epsilon-\bar\varphi\|_{H^{n}}\\
& \leq C\|Q(\varphi_0)\|_{L^2}\frac{{\rm e}^{-\lambda\epsilon}}{\lambda}+\|\varphi_\epsilon-\bar\varphi\|_{H^{n}}\,,\quad t \in [\epsilon,T]\,,
\end{aligned}
$$
and condition \eqref{condizione} implies $\|\varphi_{T-\epsilon}-\bar\varphi\|_{H^{n}}\leq \delta'$; therefore $\varphi$ can be extended to $M\times [0,2T-\epsilon]$. Moreover,
$$
\|Q(\varphi_t)\|_{H^{n}}\leq C \|Q(\varphi_0)\|_{L^2} \,{\rm e}^{-\lambda t} \quad\mbox{ for every }t\in [T,2T-\epsilon]\,.
$$
Now
$$
\begin{aligned}
\|\varphi_{t}-\bar\varphi\|_{H^{n}}=&\left\|\int_{T}^{t}Q(\varphi_\tau)\,d\tau+\varphi_{T}-\bar\varphi \right\|_{H^{n}} \leq \int_{T}^{t}\|Q(\varphi_\tau)\|_{H^{n}}\,d\tau+\|\varphi_{T}-\bar\varphi\|_{H^{n}}\\
&\leq C\|Q(\varphi_0)\|_{L^2}\frac{{\rm e}^{-\lambda T}}{\lambda}+\|\varphi_{T}-\bar\varphi\|_{H^{n}}\\
&\leq C\|Q(\varphi_0)\|_{L^2}\left(\frac{{\rm e}^{-\lambda T}}{\lambda}+\frac{{\rm e}^{-\lambda\epsilon}}{\lambda}\right)+\|\varphi_{\epsilon}-\bar\varphi\|_{H^{n}}\leq \delta'\,,\quad t\in [T,2T-\epsilon]\,,
\end{aligned}
$$
and therefore the flow can be extended to $M\times [0,3T-2\epsilon ]$, with exponential decay on $[2T-\epsilon,3T-2\epsilon]$. Analogously,
$$
\begin{aligned}
\|\varphi_{t}-\bar\varphi\|_{H^{n}}=&\left\|\int_{2T-\epsilon}^{t}Q(\varphi_\tau)\,d\tau+\varphi_{2T-\epsilon}-\bar\varphi \right\|_{H^{n}} \leq \int_{2T-\epsilon}^{t}\|Q(\varphi_\tau)\|_{H^{n}}\,d\tau+\|\varphi_{2T-\epsilon}-\bar\varphi\|_{H^{n}}\\
&\leq C\|Q(\varphi_0)\|_{L^2}\frac{{\rm e}^{-\lambda (2T-\epsilon)}}{\lambda}+\|\varphi_{2T-\epsilon}-\bar\varphi\|_{H^{n}}\\
&\leq C\|Q(\varphi_0)\|_{L^2}\left(\frac{{\rm e}^{-\lambda (2T-\epsilon)}}{\lambda}+\frac{{\rm e}^{-\lambda T}}{\lambda}+\frac{{\rm e}^{-\lambda\epsilon}}{\lambda}\right)+\|\varphi_{\epsilon}-\bar\varphi\|_{H^{n}}\leq \delta'
\end{aligned}
$$
for $t\in [2T-\epsilon,3T-2\epsilon]$, and the flow can be extended to $M\times [0,4T-3\epsilon]$, with exponential decay on $[3T-2\epsilon,4T-3\epsilon]$. Iterating, for any $t\in [NT-(N-1)\epsilon,(N+1)T-N\epsilon]$ we have
$$
\|\varphi_t-\bar \varphi\|_{H^{n}}\leq C\|Q(\varphi_0)\|_{L^2}\frac{{\rm e}^{-\lambda \epsilon}}{\lambda}\sum_{j=0}^{N}{\rm e}^{-\lambda j(T-\epsilon)}+\|\varphi_{\epsilon}-\bar\varphi\|_{H^{n}}\leq \delta'\,,
$$
and the solution $\varphi$ is defined on $M\times [0,\infty)$.
Now let $\varphi_\infty:=\varphi_0+\int_{0}^\infty Q(\varphi_s)\,ds\in C^{\infty}(M,E)$; since, for $n$ large enough,
$$
\lim_{t\to \infty}\|\varphi_t-\varphi_{\infty}\|_{H^n}\leq \lim_{t\to \infty}C\|Q(\varphi_0)\|_{L^2}{\rm e}^{-\lambda t}=0\,,
$$
$\varphi_t$ converges to $\varphi_\infty$ in the $C^\infty$-topology. We clearly have $D_+\varphi_\infty=0$, since $D_+\varphi_t=0$ for every $t\in [0,\infty)$, and by construction
$$
\|\varphi_t-\bar\varphi\|_{C^0}\leq C'\delta' \quad\mbox{ for every }t\in [0,\infty)\,,
$$
where $C'$ does not depend on $\delta'$. So, up to taking $\delta'$ smaller, we have $\varphi_\infty\in C^{\infty}(M,\mathcal E)$. Finally,
$$
Q(\varphi_\infty)=\lim_{t\to \infty} Q(\varphi_t)=0
$$
and the claim follows.
\end{proof}

\begin{thebibliography}{12}

\bibitem{Anna1} {\sc L. Bagaglini, M. Fern\'andez, A. Fino}, Laplacian coflow on the $7$-dimensional Heisenberg group. To appear in {\em Asian J. Math.}, {\tt arXiv:1704.00295}.

\bibitem{Wlucione} {\sc L. Bedulli and L. Vezzoni}, A parabolic flow of balanced metrics. {\em J. Reine Angew. Math.} {\bf 723} (2017), 79--99.

\bibitem{Wlucione2} {\sc L. Bedulli and L. Vezzoni}, A scalar Calabi-type flow in Hermitian geometry: short-time existence and stability. To appear in {\em Ann. Sc. Norm. Super. Pisa Cl. Sci.}, {\tt arXiv:1703.05068}.

\bibitem{Bryant} {\sc R. Bryant}, Some remarks on ${\rm G}_2$-structures. Proceedings of G\"okova Geometry/Topology Conference, G\"okova, 2006, 75--109.

\bibitem{BryantXu} {\sc R. Bryant, F. Xu}, Laplacian flow for closed ${\rm G}_2$-structures: short time behavior. {\tt arXiv:1101.2004}.

\bibitem{shi-type} {\sc G. Chen}, Shi-type estimates and finite time singularities of flows of ${\rm G}_2$ structures. {\em Q. J. Math.} {\bf 69} (2018), no. 3, 779--797.

\bibitem{chen} {\sc X. X. Chen, W. Y. He}, On the Calabi flow. {\em Amer. J. Math.} {\bf 130} (2008), no. 2, 539--570.

\bibitem{fei} {\sc T. Fei}, Some torsional local models of heterotic strings. {\em Comm. Anal. Geom.} {\bf 25} (2017), no. 5, 941--968.

\bibitem{Gri} {\sc S. Grigorian}, Short-time behaviour of a modified Laplacian coflow of ${\rm G}_2$-structures. {\em Adv. Math.} {\bf 248} (2013), 378--415.

\bibitem{HamiltonNash} {\sc R. S. Hamilton}, The inverse function theorem of Nash and Moser. {\em Bull. Amer. Math. Soc. (N.S.)} {\bf 7} (1982), no. 1, 65--222.

\bibitem{positive} {\sc R. S. Hamilton}, Three-manifolds with positive Ricci curvature. {\em J. Differential Geom.} {\bf 17} (1982), no. 2, 255--306.

\bibitem{JJDG} {\sc D. Joyce}, Compact Riemannian $7$-manifolds with holonomy ${\rm G}_2$ (I, II). {\em J. Differential Geom.} {\bf 43} (1996), 291--328, 329--375.

\bibitem{K} {\sc S. Karigiannis, B. McKay, M.-P. Tsui}, Soliton solutions for the Laplacian coflow of some ${\rm G}_2$ structures with symmetry. {\em Differential Geom. Appl.} {\bf 30} (2012), 318--333.

\bibitem{Lotaysurvey} {\sc J. D. Lotay}, Geometric flows of ${\rm G}_2$-structures. To appear in {\em Lectures and Surveys on ${\rm G}_2$ Manifolds and Related Topics}, {\tt arXiv:1810.13417}.

\bibitem{L1} {\sc J. D. Lotay and Y. Wei}, Laplacian flow for closed ${\rm G}_2$ structures: Shi-type estimates, uniqueness and compactness. {\em Geom. Funct. Anal.} {\bf 27} (2017), 165--233.

\bibitem{L2} {\sc J. D. Lotay and Y. Wei}, Stability of torsion-free ${\rm G}_2$ structures along the Laplacian flow. {\em J. Differential Geom.} {\bf 111} (2019), no. 3, 495--526.

\bibitem{MM} {\sc C. Mantegazza and L. Martinazzi}, A note on quasilinear parabolic equations on manifolds. {\em Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)} {\bf 11} (2012), no. 4, 857--874.

\bibitem{weidmann} {\sc J. Weidmann}, {\em Linear Operators in Hilbert Spaces}. Springer, Berlin, 1980.

\bibitem{witt1} {\sc H. Weiss, F. Witt}, A heat flow for special metrics. {\em Adv. Math.} {\bf 231} (2012), no. 6, 3288--3322.

\end{thebibliography}

\end{document}
\begin{document}
\begin{abstract}
In \cite{small-generators} Belabas, Diaz y Diaz and Friedman show a way to determine, assuming the Generalized Riemann Hypothesis, a set of prime ideals that generates the class group of a number field. Their method is efficient because it produces a set of ideals smaller than the ones given by previously proved bounds. Here we show how to use their main result to algorithmically produce a bound that is lower than the one they prove.
\end{abstract}
\maketitle

\section{Introduction}
We refer the reader to the paper \cite{small-generators} for an outline of Buchmann's algorithm. Let \KM be a number field of degree $n_\KM$, with $r_1$ real embeddings and $r_2$ pairs of complex embeddings. We denote by $\Delta_\KM$ the absolute value of its discriminant.
\begin{defi}
Let \WC be the set of functions $F\colon[0,{+\infty})\to\RM$ such that
\begin{itemize}
\item $F$ is continuous;
\item there exists $\varepsilon>0$ such that the function $F(x)e^{(\frac12+\varepsilon)x}$ is integrable and of bounded variation;
\item $F(0)>0$;
\item $(F(0)-F(x))/x$ is of bounded variation.
\end{itemize}
For $T>1$, let $\WC(T)$ be the subset of \WC of functions $F$ such that
\begin{itemize}
\item $F$ has support in $[0,\log T]$;
\item the Fourier cosine transform of $F$ is non-negative.
\end{itemize}
\end{defi}
The main result of \cite{small-generators} is, up to a minor reformulation, the following.
\begin{theo}[\textbf{Belabas, Diaz y Diaz, Friedman}]\label{theoKB}
Let \KM be a number field satisfying the Riemann Hypothesis for all L-functions attached to non-trivial characters of its ideal class group ${\mathcal C}\!\ell_\KM$, and suppose there exists, for some $T>1$, an $F\in\WC(T)$ with $F(0)=1$ and such that
\begin{multline}\label{theoeq}
2\sum_\pG\log\mathrm N\pG\sum_{m=1}^{+\infty}\frac{F(m\log\mathrm N\pG)}{\mathrm N\pG^{m/2}} > \log\Delta_\KM-n_\KM\gamma- n_\KM\log(8\pi)-\frac{r_1\pi}2\\
+ r_1\int_0^{+\infty}\frac{1-F(x)}{2\cosh(x/2)}\,\mathrm d x+n_\KM\int_0^{+\infty}\frac{1-F(x)}{2\sinh(x/2)}\,\mathrm d x\ .
\end{multline}
Then the ideal class group of \KM is generated by the prime ideals of \KM having norm less than $T$.
\end{theo}
The authors apply this result to the function $\frac1LC_L\ast C_L$, where $L=\log T$, $\ast$ is the convolution operator and $C_L$ is the characteristic function of $({-\frac L2},\frac L2)$, to get the following.
\begin{coro}[\textbf{Belabas, Diaz y Diaz, Friedman}]\label{coroKB}
Suppose \KM is a number field satisfying the Riemann Hypothesis for all L-functions attached to non-trivial characters of its ideal class group ${\mathcal C}\!\ell_\KM$, and for some $T>1$ we have
\begin{multline}\label{coroeq}
2\sum_{\substack{\pG,m\\ \mathrm N\pG^m<T}}\frac{\log\mathrm N\pG}{\mathrm N\pG^{m/2}}\left(1-\frac{\log\mathrm N\pG^m}{\log T}\right) > \log\Delta_\KM-n_\KM\left(\gamma+\log(8\pi)-\frac{c_1}{\log T}\right)\\
- r_1\left(\frac\pi2-\frac{c_2}{\log T}\right)\ ,
\end{multline}
where
$$c_1=\frac{\pi^2}2\ ,\quad c_2=4C\ .$$
(Here $C=\sum_{k\geq0}(-1)^k(2k+1)^{-2}=0.915965\cdots$ is Catalan's constant.)
Then the ideal class group of \KM is generated by the prime ideals of \KM having norm less than $T$.
\end{coro}
Our aim is to find a good $T$ for the number field \KM as fast as possible, exploiting the bilinearity of the convolution product.

\section{Setup}
We use the following definition to simplify the language a little.
\begin{defi}
A \emph{bound} for \KM is an $L=\log T$ with $T$ as in Theorem~\ref{theoKB}.
\end{defi}
\subsection{Rewriting the theorem}
We begin by homogenizing Equation~\eqref{theoeq} and relaxing the requirement $F(0)=1$ to $F(0)>0$, so that now the condition on the function is
\begin{multline}\label{homoeq}
2\sum_\pG\log\mathrm N\pG\sum_{m=1}^{+\infty}\frac{F(m\log\mathrm N\pG)}{\mathrm N\pG^{m/2}} > F(0)\left(\log\Delta_\KM-n_\KM\gamma- n_\KM\log(8\pi)-\frac{r_1\pi}2\right)\\
+ r_1\int_0^{+\infty}\frac{F(0)-F(x)}{2\cosh(x/2)}\,\mathrm d x+n_\KM\int_0^{+\infty}\frac{F(0)-F(x)}{2\sinh(x/2)}\,\mathrm d x\ .
\end{multline}
\begin{defi}
Let \SC be the real vector space of even, compactly supported step functions and, for $T>1$, let $\SC(T)$ be the subspace of \SC of functions supported in $\left[{-\frac{\log T}2},\frac{\log T}2\right]$.
\end{defi}
\begin{defi}
For any integer $N\geq1$ and positive real $\delta$ we define the subspace $\SC(N,\delta)$ of $\SC(e^{2N\delta})$ made of the functions which are constant on $[k\delta,(k+1)\delta)$ for every $k\in\mathbb N$.
\end{defi}
The elements of $\SC(N,\delta)$ are thus step functions with fixed step width $\delta$. If $N\geq1$, $\delta>0$ and $T=e^{2N\delta}$, we have
\begin{subequations}
\begin{alignat}{1}
\SC(N,\delta) & \subset\SC(T)\subset\SC\,, \\
\forall\Phi\in\SC(T),\ \ \frac1{\|\Phi\|_2^{\,2}}\Phi\ast\Phi & \in\WC(T)\,,\\
\SC(N,\delta) &\subset\SC(N+1,\delta)\,,\label{incnext}\\
\forall k\geq1,\ \ \SC(N,\delta) & \subseteq\SC\left(kN,\frac\delta k\right)\label{incmul}\ .
\end{alignat}
\end{subequations}
If, for some $T>1$, $\Phi\in\SC(T)$ and $F=\Phi\ast\Phi$ satisfies \eqref{homoeq}, then, according to Theorem \ref{theoKB}, ${\mathcal C}\!\ell_\KM$ is generated by the prime ideals $\pG$ such that $\mathrm N\pG<T$. This leads us to define the linear form $\ell_\KM$ on $\SC\ast\SC$ by
\begin{multline*}
\ell_\KM(F)=-2\sum_\pG\log\mathrm N\pG\sum_{m=1}^{+\infty}\frac{F(m\log\mathrm N\pG)}{\mathrm N\pG^{m/2}} + F(0)\left(\log\Delta_\KM-n_\KM\gamma- n_\KM\log(8\pi)-\frac{r_1\pi}2\right)\\
+ r_1\int_0^{+\infty}\frac{F(0)-F(x)}{2\cosh(x/2)}\,\mathrm d x+n_\KM\int_0^{+\infty}\frac{F(0)-F(x)}{2\sinh(x/2)}\,\mathrm d x
\end{multline*}
and the quadratic form $q_\KM$ on \SC by $q_\KM(\Phi)=\ell_\KM(\Phi\ast\Phi)$. We can at this point state the following weaker version of Theorem~\ref{theoKB}.
\begin{coro}\label{coroKBL}
Let \KM be a number field satisfying GRH and let $T>1$.
If the restriction of $q_\KM$ to $\SC(T)$ has a negative eigenvalue, then ${\mathcal C}\!\ell_\KM$ is generated by the prime ideals $\pG$ such that $\mathrm N\pG<T$.
\end{coro}
Note that $q_\KM$ is continuous as a function from $(\SC(T),\|\cdot\|_1)$ to $\RM$. Therefore, if $\log T$ is a bound for \KM, then there exists an $L'<\log T$ such that $L'$ is a bound for \KM. Note also that, in terms of $T$, only the norms of prime ideals are relevant, which means that we do not need the smallest possible $T$ to get the best result.
\begin{rem*}
If $T>1$ and $\Phi\in\SC(T)$, then for any $\varepsilon>0$ there exist $N\geq1$, $\delta>0$ and $\Phi_\delta\in\SC(N,\delta)$ such that $\|\Phi\ast\Phi-\Phi_\delta\ast\Phi_\delta\|_\infty\leq\varepsilon$ and $e^{2N\delta}\leq T$. Hence we do not lose anything in terms of bounds if we consider only the subspaces of the form $\SC(N,\delta)$.
\end{rem*}
\subsection{Computing the integrals}
Let $T>1$ be a real number, $L=\log T$ and $F_L=C_L\ast C_L$, where, as above, $C_L$ is the characteristic function of $\left[-\frac L2,\frac L2\right]$. We readily see that $F_L(x)=(L-x)C_{2L}(x)$ for any $x\geq0$. We easily compute
\begin{align*}
\int_0^{+\infty}\frac{F_L(0)-F_L(x)}{2\cosh(x/2)}\,\mathrm d x &= 4C-4\Imm\operatorname{dilog}\left(\frac i{\sqrt T}\right)
\intertext{and}
\int_0^{+\infty}\frac{F_L(0)-F_L(x)}{2\sinh(x/2)}\,\mathrm d x &= \frac{\pi^2}2-4\operatorname{dilog}\left(\frac1{\sqrt T}\right)+\operatorname{dilog}\left(\frac1T\right)\,,
\end{align*}
where $C$ is Catalan's constant and $\operatorname{dilog}(x)$ is the dilogarithm function, normalized to be the primitive of $-\frac{\log(1-x)}x$ vanishing at $0$ (this is the normalization of \cite{PARI}).
\subsection{A remark on the restriction of quadratic forms}\label{rkqf}
Let $q$ be a quadratic form on an $n$-dimensional vector space $V$ of signature $(z,p,m)$. We can interpret $p$ (resp.\ $m$) as the dimension of a maximal subspace on which $q$ is positive (resp.\ negative) definite, while the kernel of $q$ has dimension $z=n-p-m$. Let $H$ be a hyperplane of $V$ and $q'$ the restriction of $q$ to $H$. A subspace of $H$ on which $q'$ is definite is also a subspace of $V$ on which $q$ is definite, and intersecting with $H$ a maximal subspace on which $q$ is definite loses at most one dimension. This means that the signature $(z',p',m')$ of $q'$ satisfies $p'\leq p\leq p'+1$ and $m'\leq m\leq m'+1$. The cases $p=p'+1$, $m=m'+1$ and $p=p'$, $m=m'$ are both possible, with $z=n-p-m=z'-1$ and $z=z'+1$ respectively.
\section{Improving the result}
\subsection{Basic bound}
We restate \cite[Section 3, p.~1191]{small-generators}, which determines an optimal bound for Corollary~\ref{coroKB}. Let $\textrm{GRHcheck}(\KM,\log T)$ be the function that returns the right-hand side of \eqref{coroeq} minus its left-hand side, and let $\BDyDF(\KM)$ be the function which computes the optimal bound, by dichotomy for instance. The computation of $\BDyDF(\KM)$ is very fast because the only arithmetic information we need on $\KM\simeq\quot{\QM[x]}{(P)}$ is the splitting of the primes $p<T$, and this is easily determined for nearly all $p$: indeed, if $p$ does not divide the index of $\quot{\ZM[x]}{(P)}$ in $\OC_\KM$, then the splitting of $p$ in \KM is determined by the factorization of $P\bmod p$.
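For illustration, here is a minimal Python sketch of this dichotomy; the names \texttt{grh\_check} and \texttt{bdydf} are ours and do not refer to any actual implementation, and the splitting data is assumed to be precomputed and passed as the list \texttt{prime\_norms} of norms of the prime ideals of \KM (one entry per ideal).
\begin{verbatim}
from math import exp, log, pi

EULER_GAMMA = 0.5772156649015329
CATALAN     = 0.9159655941772190
C1, C2 = pi**2 / 2, 4 * CATALAN

def grh_check(log_T, log_disc, n_K, r1, prime_norms):
    # Right-hand side minus left-hand side of the corollary;
    # a negative value means that log_T is a bound for K.
    T, lhs = exp(log_T), 0.0
    for q in prime_norms:          # q = N(p), one entry per prime ideal
        qm = q
        while qm < T:              # sum over the prime powers N(p)^m < T
            lhs += 2 * log(q) / qm**0.5 * (1 - log(qm) / log_T)
            qm *= q
    rhs = (log_disc - n_K * (EULER_GAMMA + log(8 * pi) - C1 / log_T)
           - r1 * (pi / 2 - C2 / log_T))
    return rhs - lhs

def bdydf(log_disc, n_K, r1, prime_norms, T_max=10.0**7):
    # Optimal T of the corollary, found by bisection on log T
    # (assuming T_max is large enough for the check to be negative there).
    lo, hi = log(2.0), log(T_max)
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if grh_check(mid, log_disc, n_K, r1, prime_norms) < 0:
            hi = mid
        else:
            lo = mid
    return exp(hi)
\end{verbatim}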
We can also store the splitting information for all the primes $p$ that we consider, and avoid recomputing it each time we test whether a given bound $\log T$ is sufficient.
\subsection{Improving the bound}
We fix a number field \KM. We denote by $q_{\KM,N,\delta}$ the restriction of $q_\KM$ to $\SC(N,\delta)$. According to Corollary~\ref{coroKBL}, if $q_{\KM,N,\delta}$ has a negative eigenvalue then $2N\delta$ is a bound for \KM. This justifies the following definition.
\begin{defi}
The pair $(N,\delta)$ is $K$-good when $q_{\KM,N,\delta}$ has a negative eigenvalue.
\end{defi}
We can reinterpret Functions~\textrm{GRHcheck} and \BDyDF by saying that if $\textrm{GRHcheck}(\KM,2\delta)$ is negative then $(1,\delta)$ is $K$-good, and that $\left(1,\frac12\log\BDyDF(\KM)\right)$ is $K$-good.

As a first step to improve on Corollary~\ref{coroKB}, given $\delta>0$ we look for the smallest $N$ such that $(N,\delta)$ is $K$-good. Looking for such an $N$ can be done fairly easily with this setup. For any $i\geq1$, let $\Phi_i$ be the characteristic function of $({-i\delta},i\delta)$. Then $(\Phi_i)_{1\leq i\leq N}$ is a basis of $\SC(N,\delta)$. We have $\Phi_i\ast\Phi_i=F_{2i\delta}=(2i\delta-|x|)C_{4i\delta}(|x|)$; observe also that the function considered in Corollary~\ref{coroKB} is $\frac1{\log T}F_{\log T}$. We further observe that
$$\Phi_i\ast\Phi_j=F_{(i+j)\delta}-F_{|i-j|\delta}\ .$$
This means that the matrix $A_N$ of $q_{\KM,N,\delta}$ can be obtained by computing only the values of $\ell_\KM(F_{i\delta})$ for $1\leq i\leq 2N$ and subtracting the appropriate values. We then stop when the determinant of $A_N$ is negative or when $2N\delta\geq\log\BDyDF(\KM)$. This does not guarantee that we stop as soon as there is a negative eigenvalue. Indeed, consider the following sequence of signatures:
$$(0,p,0)\to(1,p,0)\to(1,p,1)\to(0,p+1,2)\to\cdots$$
We should have stopped when the signature was $(1,p,1)$; however, the determinant was zero there. Our algorithm stops as soon as there is an odd number of negative eigenvalues (and no zero eigenvalue) or we go above $\BDyDF(\KM)$. Such an unfavorable sequence of signatures is however very unlikely and can be ignored in practice. The corresponding algorithm is presented in Function~\textrm{NDelta}; a simplified sketch is also given below. We have added a limit $N_{\max}$ for $N$, which is not needed right now but will be used later. In Function~\textrm{NDelta} we need to slightly change \textrm{GRHcheck} so that it returns the difference of the two sides of Equation~\eqref{homoeq} instead of~\eqref{coroeq}. Note that $(\Phi_i)$ is a basis adapted to the inclusion~\eqref{incnext}, so that at each step we only need to compute the new last row and column of $A_N$. The test $\det A<0$ in line~\ref{line:detA<0} can be implemented using an incremental Cholesky $LDL^*$ decomposition. One way to use this function is to compute $T=\BDyDF(\KM)$, take some $N_{\max}\geq2$, and let $\delta=\frac{\log T}{2N_{\max}}$ and $N=\textrm{NDelta}(\KM,\delta,N_{\max})$. Using the inclusion~\eqref{incmul}, we see that $(N,\delta)$ is $K$-good and that $N\leq N_{\max}$, so that we have improved the bound.
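As an illustration of this step, the following Python sketch grows $N$ and tests the sign of $\det A_N$; the helper \texttt{ell\_F}, assumed to return $\ell_\KM(F_{k\delta})$, is hypothetical, and for clarity we recompute the determinant instead of updating an $LDL^*$ decomposition incrementally.
\begin{verbatim}
import numpy as np

def n_delta(ell_F, n_max):
    # Smallest N <= n_max with det(A_N) < 0, i.e. with an odd number
    # of negative eigenvalues; returns 0 if no such N is found.
    ell = {0: 0.0}                        # ell_K(F_0) = 0
    for N in range(1, n_max + 1):
        for k in (2 * N - 1, 2 * N):      # new values needed at this step
            ell.setdefault(k, ell_F(k))
        A = np.array([[ell[i + j] - ell[abs(i - j)]   # ell_K(Phi_i * Phi_j)
                       for j in range(1, N + 1)]
                      for i in range(1, N + 1)])
        if np.linalg.det(A) < 0:
            return N                      # (N, delta) is K-good
    return 0
\end{verbatim}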
\subsection{Adaptive steps}
Unfortunately, Function~\textrm{NDelta} is not very efficient, mainly for two reasons. To explain them, and to improve the function, we introduce some extra notation.\\
For any $\delta>0$, let $N_\delta$ be the minimal $N$ such that $(N,\delta)$ is $K$-good. Observe that Function~\textrm{NDelta} computes $N_\delta$, as long as $N_\delta\leq N_{\max}$ and no zero eigenvalue prevents success. Obviously, using~\eqref{incnext}, we see that $(N,\delta)$ is $K$-good for any $N\geq N_\delta$. We have observed numerically that the sequence $N\delta_N$ is roughly decreasing, i.e.\ for most values of $N$ we have $N\delta_N\geq(N+1)\delta_{N+1}$.\\
For any $N\geq1$, let $\delta_N$ be the infimum of the $\delta$'s such that $(N,\delta)$ is $K$-good. It is not necessarily true that $(N,\delta)$ is $K$-good whenever $\delta\geq\delta_N$; however, we have never found a counterexample. The function $\delta\mapsto\delta N_\delta$ is piecewise linear, with discontinuities at the points where $N_\delta$ changes; it is increasing on the linear pieces and decreasing at the discontinuities. This means that if we take $0<\delta_2<\delta_1$ with $N_{\delta_2}>N_{\delta_1}$, then we may have $N_{\delta_2}\delta_2>N_{\delta_1}\delta_1$, so the bound we get for $\delta_2$ is not necessarily as good as the one for $\delta_1$.\\
The resolution of Function~\textrm{NDelta} is not very good: going from $N-1$ to $N$, the bound for the norm of the prime ideals is multiplied by $e^{2\delta}$. This is the first reason reducing the efficiency of the function. The second one is that if $N_{\max}$ is above $20$ or so, the number $\delta=\frac{\log\BDyDF(\KM)}{2N_{\max}}$ has no specific reason to be near $\delta_{N_\delta}$; as discussed above, this means that we can get a better bound for \KM by choosing $\delta$ to be just above either $\delta_{N_\delta}$ or $\delta_{1+N_\delta}$. Both reasons derive from the same facts and give a bound for \KM that can be overestimated by at most $2\delta$ for the considered $N=\textrm{NDelta}(\KM,\delta,N_{\max})$.\\
To improve the result, we use once again the inclusion~\eqref{incmul} and determine a good approximation of $\delta_N$ for $N=2^n$. We first determine by dichotomy a $\delta_0$ such that $(N_0,\delta_0)$ is $K$-good for some $N_0\geq1$ (we use $N_0=8$ in our computations). For any $k\geq0$, we take $N_{k+1}=2N_k$ and determine by dichotomy a $\delta_{k+1}$ such that $(N_{k+1},\delta_{k+1})$ is $K$-good; we already know that $\frac{\delta_k}2$ is an upper bound for $\delta_{k+1}$, and we can either use $0$ as a lower bound or try to find a lower bound not too far from the upper bound, because the upper bound is probably not too bad. The algorithm is described in Function~\Bound; a sketch of the doubling strategy follows this paragraph. It uses a subfunction $\OptimalT(\KM,N,T_l,T_h)$ which returns the smallest integer $T\in[T_l,T_h]$ such that $\textrm{NDelta}(\KM,\log T/(2N),N)>0$.
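The doubling strategy can be sketched in Python as follows; \texttt{is\_K\_good(N, delta)} is a hypothetical predicate (it could call \texttt{n\_delta} above), and, as in the discussion of $\delta_N$, monotonicity of $K$-goodness in $\delta$ is assumed so that the dichotomy makes sense. The final refinement by \OptimalT is omitted.
\begin{verbatim}
from math import exp, log

def bound(is_K_good, T_bdydf, n0=8, doublings=4, tol=1e-3):
    # Returns exp of the best 2*N*delta found; T_bdydf is the bound of BDyDF.
    N, delta = n0, log(T_bdydf) / (2 * n0)   # K-good by the inclusion (incmul)
    best = 2 * N * delta                     # current bound for log T
    for _ in range(doublings):
        lo, hi = 0.0, delta
        while hi - lo > tol:                 # dichotomy for delta at this N
            mid = (lo + hi) / 2
            if is_K_good(N, mid):
                hi = mid
            else:
                lo = mid
        best = min(best, 2 * N * hi)
        N, delta = 2 * N, hi / 2             # delta/2 is an upper bound at 2N
    return exp(best)
\end{verbatim}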
The algorithm never returns a bound worse than those proved in Theorem~\ref{theo:Phieasynt} and Corollary~\ref{coro:Bach4.01}.
\subsection{Further refinements}
To reduce the time spent computing the determinants, we tried using steps of width $4\delta$ in $\left[-\frac12\log T,\frac12\log T\right]$ and of width $2\delta$ in the rest of $\left[-\frac34\log T,\frac34\log T\right]$, so as to halve the dimension of $\SC(N,\delta)$. It worked, in the sense that we found essentially the same $T$ faster. However, the total running time of the algorithm is small enough that the increase in code complexity did not seem justified.
\section{Examples}
In this section we denote by $T(\KM)$ the result of Function~\BDyDF and by $T_1(\KM)$ the result of Function~\Bound.
\subsection{Various fields}
We tested the algorithm on several fields. Let first $\KM=\quot{\QM[x]}{(P)}$, where
$$P=x^3 + 559752270111028720\,x + 55137512477462689.$$
The polynomial $P$ has been chosen so that for all primes $2\leq p\leq53$ there are two prime ideals of norms $p$ and $p^2$. This ensures that there are many prime ideals of small norm. We have $T(\KM)=19162$, and there are $2148$ non-zero prime ideals of norm up to $T(\KM)$. We found that $T_1(\KM)=11071$ and that there are $1343$ non-zero prime ideals of norm up to $T_1(\KM)$. The time used by Function~\BDyDF was $58$\,ms on our test computer, while the time used by our algorithm was an \emph{additional} $36$\,ms. The test was designed in such a way that our algorithm reused the decomposition information of Function~\BDyDF, which saved a little time.
We also tested the algorithm on the set of $4686$ fields of degree $2$ to $27$ and small discriminant coming from a benchmark of~\cite{PARI}. The mean value of $\frac{T_1(\KM)}{T(\KM)}$ over those fields is below $\frac12$. For cyclotomic fields, the new algorithm does not give results significantly better than those of Belabas, Diaz y Diaz and Friedman. This might be because the discriminant of a cyclotomic field is not large with respect to its degree.
\subsection{Pure fields}
We computed $T(\KM)$ and $T_1(\KM)$ for fields of the form $\quot{\QM[x]}{(P)}$ with $P=x^n\pm p$, where $p$ is the first prime after $10^a$, for a certain family of integers $n$ and $a$. For each fixed degree, the ratios $\frac{T_1(\KM)}{T(\KM)}$ decrease with the discriminant. The graph of $\frac{T_1(\KM)}{T(\KM)}({\ensuremath{\log\lDK}}\xspace)^2$ is much more regular and appears to have a non-zero limit, see Figure~\ref{fig1} below. We computed the mean of $\frac{T_1(\KM)}{T(\KM)}({\ensuremath{\log\lDK}}\xspace)^2$ for each fixed degree. The results are summarized below:
\[
\begin{array}{l|r|r|l}
P & a\leq{} & {\ensuremath{\log\DK}}\xspace\leq{} & \text{mean} \\
\hline
x^2-p & 3999 & 9212 & 13.19 \\
x^6+p & 1199 & 13818 & 13.38 \\
x^{21}-p& 328 & 15169 & 13.68
\end{array}
\]
Fields of small discriminant benefit (obviously) much less from the new algorithm. We therefore also restricted the range of each series to ${\ensuremath{\log\DK}}\xspace\leq500$.
The results are as follows:
\[
\begin{array}{l|r|l}
P & a\leq{} & \text{mean} \\
\hline
x^2-p & 218 & 12.35 \\
x^6+p & 43 & 13.66 \\
x^{21}-p& 10 & 17.19
\end{array}
\]
\subsection{Biquadratic fields}
We repeated the computations above for biquadratic fields $\QM[\sqrt{p_1},\sqrt{p_2}]$, where each $p_i$ is the first prime after $10^{a_i}$, for a certain family of integers $a_i$. We found that the mean of $\frac{T_1(\KM)}{T(\KM)}({\ensuremath{\log\lDK}}\xspace)^2$ is $13.63$ for the $7119$ fields computed, and $13.88$ if we restrict the family to the $1537$ fields with ${\ensuremath{\log\DK}}\xspace\leq500$.
\subsection*{Final remarks}
In~\cite[Th.~4.3]{small-generators} the authors prove that, for fixed degree, $T(\KM)\gg({\ensuremath{\log\DK}}\xspace{\ensuremath{\log\lDK}}\xspace)^2$ and conjecture that $T(\KM)\sim\frac1{16}({\ensuremath{\log\DK}}\xspace{\ensuremath{\log\lDK}}\xspace)^2$, while our computations suggest that $T_1(\KM)$ has smaller order. We will prove in a subsequent article~\cite{quality} that $T(\KM)\asymp({\ensuremath{\log\DK}}\xspace{\ensuremath{\log\lDK}}\xspace)^2$ and that $T_1(\KM)\ll(\log{\ensuremath{\Delta_\KM}}\xspace)^2$.
\centerline{\begin{tabular}{c}
\includegraphics{T-small-fields}\\
\textsc{Figure 1}: $\frac{T_1(\KM)}{T(\KM)}({\ensuremath{\log\lDK}}\xspace)^2$ for some pure fields (quadratic, degree $6$, degree $21$) and for biquadratic fields; in abscissa {\ensuremath{\log\DK}}\xspace.
{\expandafter\def\csname @currentlabel\endcsname{1}\label{fig1}}
\end{tabular}}
\begin{function}
\relax
\KwIn{a number field \KM}
\KwIn{a positive real $\delta$}
\KwIn{a positive integer $N_{\max}$}
\KwOut{an $N\leqslant N_{\max}$ such that $(N,\delta)$ is $K$-good\xspace, or $0$}
$tab\leftarrow\text{$(2N_{\max}+1)$-dimensional array}$\;
$tab[0]\leftarrow0$\;
$A\leftarrow\text{$N_{\max}\times N_{\max}$ identity matrix}$\;
$N\leftarrow 0$\;
\While{$N<N_{\max}$}{
$N\leftarrow N+1$\;
$tab[2N-1]\leftarrow(2N-1)\textrm{\upshape GRH}\xspacecheck(\KM,(2N-1)\delta)$\;
$tab[2N]\leftarrow2N\textrm{\upshape GRH}\xspacecheck(\KM,2N\delta)$\;
\For{$i\leftarrow1$ \KwTo $N$}{
$A[N,i]\leftarrow tab[N+i]-tab[N-i]$\;
$A[i,N]\leftarrow A[N,i]$\;
}
\If{$\det A<0$}{\label{line:detA<0}
\KwRet{$N$}\;
}
}
\KwRet{$0$}\;
\caption{NDelta(\KM,$\delta$,$N_{\max}$)}
\end{function}
\begin{function}
\relax
\KwIn{a number field \KM}
\KwOut{a bound for the norm of a system of generators of ${\ensuremath{\CC\!\ell}}\xspace_\KM$}
$T_0\leftarrow 4\left({\ensuremath{\log\DK}}\xspace+{\ensuremath{\log\lDK}}\xspace-(\gamma+\log 2\pi){\ensuremath{n_\K}}\xspace+1+({\ensuremath{n_\K}}\xspace+1)\frac{\log(7{\ensuremath{\log\DK}}\xspace)}{\ensuremath{\log\DK}}\xspace\right)^2$\;
$T_0\leftarrow \min\left(T_0,4.01\log^2{\ensuremath{\Delta_\KM}}\xspace\right)$\;
$N\leftarrow8$; $\delta\leftarrow0.0625$\;
\While{${\ensuremath{\mathrm N}}\xspaceDelta(\KM,\delta,N)=0$}{
$\delta\leftarrow\delta+0.0625$\;
}
$T_h\leftarrow \OptimalT(\KM,N,e^{2N\,(\delta-0.0625)},e^{2N\,\delta})$\;
$T\leftarrow T_h+1$\;
\While{$T_h<T\mathop{||}T>T_0$}{
$T\leftarrow T_h$; $N\leftarrow2N$\;
$T_h\leftarrow \OptimalT(\KM,N,1,T_h)$\;
}
\KwRet{$T$}\;
\caption{Bound(\KM)}
\end{function}
\begin{thebibliography}{BDF08}
\bibitem[PARI15]{PARI} The PARI~Group, Bordeaux, \emph{PARI/GP, version 2.8}, 1985--2015, available from \url{http://pari.math.u-bordeaux.fr/}.
\bibitem[BDF08]{small-generators} Karim Belabas, Francisco Diaz y Diaz, and Eduardo Friedman, \emph{Small generators of the ideal class group}, Math.\ Comp.\ \textbf{77} (2008), no.~262, 1185--1197.
\bibitem[GM15]{quality} Lo\"{\i}c Greni\'e and Giuseppe Molteni, \emph{Explicit bounds for algorithms computing class field generators}, preprint.
\end{thebibliography}
\end{document}
\begin{document} \author[Ikshu Neithalath]{Ikshu Neithalath} \thanks {The author was supported by NSF grants DMS-1708320 and DMS-2003488.} \address {Centre for Quantum Mathematics, Syddansk Universitet, Campusvej 55, Odense M, Denmark 5230} \email {[email protected]} \begin{abstract} We establish a relationship between $\mathit{HP}(Y)$, the Abouzaid-Manolescu sheaf-theoretic $\operatorname{SL}(2, \mathbb{C})$ Floer cohomology, for $Y$ a surgery on a small knot in $S^3$ and Curtis' $\operatorname{SL}(2, \mathbb{C})$ Casson invariant. We use this to compute $\mathit{HP}$ for most surgeries on two-bridge knots. We also compute $\mathit{HP}$ for surgeries on two non-small knots, the granny and square knots. We provide a partial calculation of the framed sheaf-theoretic $\operatorname{SL}(2, \mathbb{C})$ Floer cohomology, $\mathit{HP}_{\#}(Y)$, for surgeries on two-bridge knots and apply the data we obtain to show the non-existence of a surgery exact triangle for $\mathit{HP}_{\#}$. \end{abstract} \title{$\SLC$ Floer cohomology for surgeries on some knots} \section{Introduction} In \cite{AM}, the authors defined a new invariant of closed, connected, orientable 3-manifolds $Y$ called sheaf-theoretic $\operatorname{SL}(2, \mathbb{C})$ Floer cohomology, denoted $\mathit{HP}(Y)$. It is defined as the hypercohomology of a certain perverse sheaf on the character scheme $\mathscr{X}_{\text{irr}}(Y)$. The perverse sheaf comes from a description of this space as a complex Lagrangian intersection. In this paper, we compute this invariant for surgeries on two-bridge knots, the granny knot, and the square knot. Let $K$ be a knot in $S^3$ and $S^3_{p/q}(K)$ its $p/q$ Dehn surgery. When $S^3\backslash K$ contains no closed, essential surfaces, we say that $K$ is a small knot. The calculation of $\mathit{HP}(S^3_{p/q}(K))$ for $K$ a small knot and for most values of $p/q$ reduces to the $\operatornameeratorname{SL}(2,\mathbb{C})$ Casson invariant $\lambda_{SL(2,\mathbb{C})}$ as defined by Curtis \cite{curtis} and explored in her joint work with Boden \cite{BC}. Specifically, we have \begin{customthm}{1}\label{smallknot} Let $K\subset S^3$ be a small knot, and let $Y=S^3_{p/q}(K)$ denote $p/q$ surgery on $K$. If $p/q$ is not one of the finitely many boundary slopes of $K$, then $HP(Y)\cong \mathbb{Z}_{(0)}^{\lambda_{\operatorname{SL}(2, \mathbb{C})}(Y)}$. \end{customthm} \begin{remark} We will often use the notation $A_{(k)}$ to denote a graded abelian group with $A$ in degree $k$. A more common notation for this is $A[-k]$. \end{remark} We combine this theorem with calculations of $\lambda_{\operatorname{SL}(2, \mathbb{C})}(Y)$ in the literature to produce explicit formulae for $\mathit{HP}(Y)$ when $Y$ is a surgery on a two-bridge knot. In \cite{AM}, the authors also define a framed version of sheaf-theoretic Floer cohomology denoted $\mathit{HP}_{\#}(Y)$. It is defined as the hypercohomology of a certain perverse sheaf on the representation scheme of $Y$, $\operatornameeratorname{Hom}(\pi_1(Y),\operatorname{SL}(2, \mathbb{C}))$. We would like to compute the framed sheaf-theoretic Floer cohomology, $\mathit{HP}_{\#}(S^3_{p/q}(K))$, for surgeries on knots. However, the representation schemes are usually not zero-dimensional and are often singular. So, we only give a formula for $\mathit{HP}_{\#}$ of surgeries for which the character scheme is zero-dimensional, smooth, and does not contain non-abelian reducible representations. 
\begin{customthm}{2}\label{repcalc} Let $K$ be a knot and let $Y=S^3_{p/q}(K)$ denote the 3-manifold obtained from $p/q$ Dehn surgery on $K$. Let $p'=p$ for $p$ odd and $p'=\frac{p}{2}$ for $p$ even. Assume that the character scheme $\mathscr{X}_{\text{irr}}(Y)$ is zero-dimensional and smooth and no $p'$-th root of unity is a root of the Alexander polynomial of $K$. Then, \begin{equation*} \mathit{HP}^*_{\#}(Y)= H^*(\text{pt})^{\operatornamelus 2-\sigma(p)} \operatornamelus H^{*+2}(\mathbb{CP}^1)^{\operatornamelus \frac{1}{2}(|p|-2+\sigma(p))} \operatornamelus H^{*+3} (\operatorname{PSL}(2,\mathbb{C}))^{\operatornamelus \lambda_{\operatorname{SL}(2, \mathbb{C})}(Y)}, \end{equation*} where $\sigma(p)\in\{0,1\}$ is the parity of $p$. \end{customthm} We use this theorem in conjunction with the calculation of $\lambda_{\operatorname{SL}(2, \mathbb{C})}(Y)$ for surgeries on two-bridge knots in \cite{2bridge} to show that there do not exist exact triangles relating $\mathit{HP}_{\#}$ for surgeries on two-bridge knots. In light of Theorem \ref{smallknot}, we are interested in computing $\mathit{HP}(S^3_{p/q}(K))$ when $K$ is not a small knot. The character schemes of such manifolds may have positive dimensional components, in which case the calculation of the $\operatorname{SL}(2, \mathbb{C})$ Casson invariant is insufficient to determine $\mathit{HP}$. In fact, when $K=K_1\# K_2$ is a composite knot, we are guaranteed to have positive dimensional components. We provide a calculation of $\mathit{HP}$ with $\mathbb{F}=\mathbb{Z}/2\mathbb{Z}$ coefficients for surgeries on the square and granny knots. Recall that the granny knot is the connected sum of two right-handed trefoils, whereas the square knot is a composite of a trefoil with its mirror. \begin{customthm}{3}\label{HPgranny} Let $S^3_{p/q}(3_1\# 3_1)$ denote the 3-manifold obtained from $p/q$ Dehn surgery on the granny knot, $3_1\# 3_1$. Then we have the following formula for the sheaf-theoretic Floer cohomology: \begin{equation*} \mathit{HP}(S^3_{p/q}(3_1\# 3_1);\mathbb{F})= \begin{cases} \mathbb{F}_{(0)}^{|6q-p|+\frac{1}{2}|12q-p|-\frac{3}{2}}\operatornamelus \mathbb{F}_{(-1)}^{\frac{1}{2}|12q-p|-\frac{1}{2}} &\text{if $p$ is odd,}\ \\[10 pt] \mathbb{F}_{(0)}^{|6q-p|+\frac{1}{2}|12q-p|-1}\operatornamelus \mathbb{F}_{(-1)}^{\frac{1}{2}|12q-p|-1} &\text{if $p$ is even, $p\neq 12k$,}\ \\[10 pt] \mathbb{F}_{(0)}^{|6q-p|+\frac{1}{2}|12q-p|-5}\operatornamelus \mathbb{F}_{(-1)}^{\frac{1}{2}|12q-p|+1}&\text{if $p=12k, p/q\neq 12$,}\ \\[10 pt] \mathbb{F}_{(1)}^4 \operatornamelus\mathbb{F}_{(0)}^4 \operatornamelus \mathbb{F}_{(-2)} &\text{if $p/q=12$.}\ \\ \end{cases} \end{equation*} \end{customthm} \begin{customthm}{4}\label{HPsquare} Let $S^3_{p/q}(3_1\# 3_1^*)$ denote the 3-manifold obtained from $p/q$ Dehn surgery on the square knot, $3_1\# 3_1^*$ (where $3_1^*$ is the left-handed trefoil). 
Then we have the following formula for the sheaf-theoretic Floer cohomology: \begin{equation*} \mathit{HP}(S^3_{p/q}(3_1\# 3_1^*);\mathbb{F})= \begin{cases} \mathbb{F}_{(0)}^{\frac{1}{2}|6q-p|+\frac{1}{2}|6q+p|+\frac{1}2{}|p|-\frac{3}{2}}\operatornamelus \mathbb{F}_{(-1)}^{\frac{1}{2}|p|-\frac{1}{2}} &\text{if $p$ is odd,}\ \\[10 pt] \mathbb{F}_{(0)}^{\frac{1}{2}|6q-p|+\frac{1}{2}|6q+p|+\frac{1}{2}|p|-1}\operatornamelus \mathbb{F}_{(-1)}^{\frac{1}{2}|p|-1} &\text{if $p$ is even, $p\neq 12k$,}\ \\[10 pt] \mathbb{F}_{(0)}^{\frac{1}{2}|6q-p|+\frac{1}{2}|6q+p|+\frac{1}{2}|p|-5}\operatornamelus \mathbb{F}_{(-1)}^{\frac{1}{2}|p|+3} &\text{if $p=12k, p\neq 0$,}\ \\[10 pt] \mathbb{F}_{(1)}^4 \operatornamelus\mathbb{F}_{(0)}^4 \operatornamelus \mathbb{F}_{(-2)} &\text{if $p=0$.}\ \\ \end{cases} \end{equation*} \end{customthm} The organization of this paper is as follows. In Section 2 we provide some background on character varieties and the invariants $\mathit{HP}$, $\mathit{HP}_{\#}$, and $\lambda_{\operatorname{SL}(2, \mathbb{C})}$. In Section 3, we prove Theorem \ref{smallknot} and explain how it gives explicit formulae for the sheaf-theoretic $\operatorname{SL}(2, \mathbb{C})$ Floer cohomology of knot surgeries. In Section 4, we prove Theorem \ref{repcalc} and compute $\mathit{HP}_{\#}$ for most surgeries on two-bridge knots. In Section 5, we determine the character variety of the composite knot $3_1\# 3_1$, allowing us to compute the A-polynomials of the square and granny knots in Section 6. In Section 7, we consider surgeries on composite knots and establish Theorems \ref{HPgranny} and \ref{HPsquare}. In Section 8, we apply our calculation of $\mathit{HP}_{\#}$ of two-bridge knot surgeries to demonstrate the non-existence of a surgery exact triangle. \textbf{Acknowledgements}. We have benefited from discussions with Laurent C\^{o}t\'{e}, Matt Kerr, Mohan Kumar, Jack Petok, Vivek Shende, and Burt Totaro. We also thank the two referees who provided detailed comments on the initial versions of this paper. We are particularly indebted to Ciprian Manolescu for his advice, support, and encouragement at all stages in the writing of this paper. \section{Background} For a topological space $X$, let $\mathscr{R}(X)$ denote the $\operatorname{SL}(2, \mathbb{C})$ representation scheme of $\pi_1(X)$, defined as \begin{align*} \mathscr{R}(X)=\operatornameeratorname{Hom}(\pi_1(X),\operatorname{SL}(2, \mathbb{C})). \end{align*} Assuming $\pi_1(X)$ is finitely generated, this set is naturally identified as the $\mathbb{C}$ points of an affine scheme. The character scheme $\mathscr{X}(X)$ is the GIT quotient of $\mathscr{R}(X)$ by the conjugation action of $\operatorname{SL}(2, \mathbb{C})$. A representation $\rho\in\mathscr{R}(X)$ is irreducible if the image of $\rho$ is not contained in any proper Borel subgroup. The irreducible representations comprise the stable locus for the GIT action. Let $\mathscr{R}_{\text{irr}}(X)\subset\mathscr{R}(X)$ denote the open subscheme corresponding to irreducible representations, and similarly $\mathscr{X}_{\text{irr}}(X)\subset\mathscr{X}(X)$. When $X$ is a closed surface of genus $g>1$, $\mathscr{X}_{\text{irr}}(X)$ is a holomorphic symplectic manifold of dimension $6g-6$ \cite{goldman}. To investigate character schemes of 3-manifolds, we take the perspective of \cite{AM} using Heegaard splittings. Let $Y=U_0\cup_{\Sigma}U_1$ be a Heegaard splitting of a closed, orientable, 3-manifold $Y$ into two handlebodies $U_0$ and $U_1$ with Heegaard surface $\Sigma$. 
Then $\mathscr{X}_{\text{irr}}(U_i)$ is a complex Lagrangian in $\mathscr{X}_{\text{irr}}(\Sigma)$ and $\mathscr{X}_{\text{irr}}(Y)=\mathscr{X}_{\text{irr}}(U_0)\cap\mathscr{X}_{\text{irr}}(U_1)$ is a Lagrangian intersection \cite{AM}. In \cite{bussi}, the author applies the work of \cite{joyce} to define a perverse sheaf of vanishing cycles associated to any Lagrangian intersection in a holomorphic symplectic manifold. A perverse sheaf on a scheme $X$ is a certain type of object in $D_c^b(X)$, the bounded derived category of complexes of constructible sheaves on $X$. The category of perverse sheaves, $\operatornameeratorname{Perv}(X)$, is an abelian subcategory of $D_c^b(X)$. Perverse sheaves have wide application in algebraic geometry and are often used to study the topology of complex varieties. Given a function $f:U\to\mathbb{C}$ on a smooth scheme $U$, we can define a perverse sheaf of vanishing cycles, $\mathcal{PV}_f\in\operatornameeratorname{Perv}(U)$, with the property that the cohomology of the stalk of $\mathcal{PV}_f$ at a point $x$ is the cohomology of the Milnor fiber of $f$ at $x$ (up to a degree shift). The perverse sheaf associated to a Lagrangian intersection in \cite{bussi} is modeled on perverse sheaves of vanishing cycles. In \cite{AM}, the authors use Bussi's construction to associate a perverse sheaf to a Heegaard splitting of a 3-manifold. Moreover, they show that the perverse sheaf is independent of the Heegaard splitting: \begin{theorem}[\cite{AM}] Let $Y$ be a closed, connected, oriented 3-manifold with a Heegaard splitting $Y=U_0\cup_{\Sigma} U_1$. Define the Lagrangians $L_i=\mathscr{X}_{\text{irr}}(U_i)\subset \mathscr{X}_{\text{irr}}(\Sigma)$. Apply the construction of \cite{bussi} to obtain a perverse sheaf $P_{L_0,L_1}\in\operatornameeratorname{Perv}(\mathscr{X}_{\text{irr}}(Y))$ associated to the Lagrangian intersection $\mathscr{X}_{\text{irr}}(Y)=L_0\cap L_1$. Then $P(Y):=P_{L_0,L_1}$ is an invariant of the 3-manifold $Y$ up to canonical isomorphism in $\operatornameeratorname{Perv}(\mathscr{X}_{\text{irr}}(Y))$. \end{theorem} We call its hypercohomology $\mathit{HP}^*(Y)=\mathbb{H}^*(P(Y))$ the sheaf-theoretic $\operatorname{SL}(2, \mathbb{C})$ Floer cohomology of $Y$. They also define an invariant using the representation scheme that takes into account the reducibles, called the framed sheaf-theoretic $\operatorname{SL}(2, \mathbb{C})$ Floer cohomology of $Y$, $\mathit{HP}_{\#}(Y)$. To define this invariant, we use the notion of the twisted character variety. \begin{defn} Let $\Sigma$ be a closed surface with a basepoint $w$. Let $D$ be a small disc neighborhood of $w$. We define the \emph{twisted character variety} as \begin{align*} \mathscr{X}_{\text{tw}}(\Sigma,w)=\{\rho\in\operatornameeratorname{Hom}(\pi_1(\Sigma-\{w\}),\operatorname{SL}(2, \mathbb{C}))\mid\rho(\partial D)=-I\}//\operatorname{SL}(2, \mathbb{C}). \end{align*} \end{defn} The twisted character variety is a smooth, holomorphic symplectic manifold. Given a Heegaard splitting $Y=U_0\cup_{\Sigma} U_1$ and a base point $z\in\Sigma$, we define $Y^{\#}=Y\#(T^2\times[0,1])$, where the connected sum is performed in a neighborhood of $z$, arranged so that $T^2\times [0,1/2]$ is attached to $U_0$ and $T^2\times [1/2,1]$ is attached to $U_1$. Let $\Sigma^{\#}=\Sigma\#(T^2\times[1/2])$ be the new splitting surface and $U_i^{\#}$ be the resulting compression bodies. Then, choose a basepoint $w\in T^2\times\{1/2\}$ away from the connected sum region. 
Let $\ell_0=w\times[0,1/2]$ and $\ell_1=w\times[1/2,1]$ be lines in each compression body. In the holomorphic symplectic manifold $\mathscr{X}_{\text{tw}}(\Sigma^{\#},w)$, the subspaces $L_i^{\#}$ consisting of twisted representations that factor through $\pi_1(U_i^{\#}-\ell_i)$ are complex Lagrangian submanifolds. Furthermore, their intersection $L_0^{\#}\cap L_1^{\#}$ can be identified with the representation variety $\mathscr{R}(Y)$ \cite{AM}. Analogously to the previous situation, this leads to a perverse sheaf invariant of the 3-manifold, $\mathcal{P}_{\#}(Y)\in\operatorname{Perv}(\mathscr{R}(Y))$. We denote its hypercohomology by $\mathit{HP}_{\#}(Y)$; this is the framed sheaf-theoretic $\operatorname{SL}(2, \mathbb{C})$ Floer cohomology of $Y$. To compute the invariants $\mathit{HP}(Y)$ and $\mathit{HP}_{\#}(Y)$, we can use the following proposition: \begin{proposition}\label{smooth} Let $X\subset \mathscr{X}_{\text{irr}}(Y)$ (resp. $X\subset \mathscr{R}_{\text{irr}}(Y)$) be a smooth topological component of the character scheme (resp. representation scheme) of complex dimension $d$. Then the restriction of the perverse sheaf $\mathcal{P}(Y)$ (resp. $\mathcal{P}_{\#}(Y)$) to $X$ is a local system with stalks isomorphic to $\mathbb{Z}[d]$. In particular, if $X$ is simply connected, then $\mathit{HP}(Y)$ (resp. $\mathit{HP}_{\#}(Y)$) contains $H^*(X)[d]$ as a direct summand. Furthermore, if $[\rho]$ is an isolated irreducible character and $X\cong \operatorname{PSL}(2,\mathbb{C})$ is the orbit of $[\rho]$ in the representation scheme, then the local system $P_{\#}(Y)\vert_X$ is trivial. \end{proposition} \begin{proof} The first part is Proposition 6.2 in \cite{AM}. The second part is Lemma 8.3 of \cite{AM}. \end{proof} When $X$ is smooth but not simply connected, there is some ambiguity about the local system $P(Y)\vert_X$. This can be circumvented by using $\mathbb{Z}/2\mathbb{Z}$ coefficients. \begin{corollary}\label{modtwo} Assume $\mathscr{X}_{\text{irr}}(Y)$ is smooth with topological components $X_i$ of complex dimensions $d_i$. Then $\mathit{HP}(Y;\mathbb{Z}/2\mathbb{Z})=\bigoplus\limits_{i} H^*(X_i;\mathbb{Z}/2\mathbb{Z})[d_i]$. \end{corollary} \begin{proof} This follows from the fact that all rank-one local systems with $\mathbb{Z}/2\mathbb{Z}$ coefficients are trivial, since $\operatorname{Aut}(\mathbb{Z}/2\mathbb{Z})$ is trivial. \end{proof} Morally, $\mathit{HP}(Y)$ should be a version of (the dual of) instanton Floer homology using the gauge group $\operatorname{SL}(2, \mathbb{C})$ instead of $\operatorname{SU}(2)$. Pursuing this analogy, the Euler characteristic of $\mathit{HP}(Y)$, denoted $\lambda^P(Y)$, should be a type of Casson invariant, just as the Euler characteristic of instanton Floer homology is related to the original Casson invariant, which is a count of irreducible $\operatorname{SU}(2)$ characters. There is another invariant called the $\operatorname{SL}(2, \mathbb{C})$ Casson invariant defined in \cite{curtis} that counts isolated, irreducible $\operatorname{SL}(2, \mathbb{C})$ characters. To distinguish $\lambda^P(Y)$ from the latter, it is called the full Casson invariant, since it takes into account the positive-dimensional components of the character scheme. When $\mathscr{X}_{\text{irr}}(Y)$ is zero-dimensional, $\lambda^P$ and $\lambda_{\operatorname{SL}(2, \mathbb{C})}$ agree.
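To illustrate how Corollary~\ref{modtwo} is applied in the computations below, consider a hypothetical (purely illustrative) situation in which $\mathscr{X}_{\text{irr}}(Y)$ is smooth and consists of one isolated point together with one component isomorphic to $\mathbb{CP}^1$. The corollary would then give
\begin{align*}
\mathit{HP}(Y;\mathbb{Z}/2\mathbb{Z})&=H^*(\text{pt};\mathbb{Z}/2\mathbb{Z})[0]\oplus H^*(\mathbb{CP}^1;\mathbb{Z}/2\mathbb{Z})[1]\\
&=(\mathbb{Z}/2\mathbb{Z})_{(0)}\oplus(\mathbb{Z}/2\mathbb{Z})_{(-1)}\oplus(\mathbb{Z}/2\mathbb{Z})_{(1)},
\end{align*}
the point contributing in degree $0$ and the one-dimensional component contributing its cohomology shifted down by $1$. Computations of exactly this kind, for the actual character schemes of surgeries on the granny and square knots, are carried out in Section 7. We now return to the comparison between $\lambda^P$ and $\lambda_{\operatorname{SL}(2, \mathbb{C})}$ in the zero-dimensional case.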
In fact, we have \begin{theorem}\label{thm:zerodim} Let $Y$ be a closed, orientable 3-manifold such that $\mathscr{X}_{\text{irr}}(Y)$ is zero-dimensional. Then $\mathit{HP}(Y)\cong \mathbb{Z}_{(0)}^{\lambda_{SL(2,\mathbb{C})}(Y)}$, where $\lambda_{SL(2,\mathbb{C})}(Y)$ is the $SL(2,\mathbb{C})$ Casson invariant as defined in \cite{curtis}. \end{theorem} \begin{proof} The definition of $\mathit{HP}(Y)$ uses the characterization of $\mathscr{X}_{\text{irr}}(Y)$ as a complex Lagrangian intersection $L_0\cap L_1$ in the character scheme of a Heegaard surface for $Y$. The stalk of the perverse sheaf $P^{\bullet}(Y)$ at a point $p\in \mathscr{X}_{\text{irr}}(Y)$ is the degree-shifted cohomology of the Milnor fiber of some function $f:U\to\mathbb{C}$, for $U$ an open neighborhood in one of the Lagrangians, such that the graph $\Gamma_{df}\subset T^*U$ is identified with $L_1$ in an appropriate polarization of the symplectic manifold near $p$. Since $\mathscr{X}_{\text{irr}}(Y)$ is zero-dimensional, we know that $f$ has an isolated singularity at $p$. Thus, the Milnor fiber has the homotopy type of a bouquet of spheres. The number of spheres in the bouquet is the Milnor number, denoted $\mu_p$. Then, the stalk is given by $(P^{\bullet}(Y))_p\cong \mathbb{Z}_{(0)}^{\mu_p}$. The hypercohomology is $\mathit{HP}(Y)\cong \mathbb{Z}_{(0)}^{\sum\mu_p}$, where the sum is over all components of $\mathscr{X}_{\text{irr}}(Y)$. The definition of the Casson invariant in terms of intersection cycles given in \cite{curtis} is $\lambda_{SL(2,\mathbb{C})}(Y)=\sum_{p} n_p$, where the sum is over all zero-dimensional components of $X_{\text{irr}}(Y)$, and $n_p$ is the intersection multiplicity of $L_0$ with $L_1$. But the Milnor number $\mu_p$ is equal to the intersection multiplicity of $\Gamma_{df}$ with $L_0$, hence the result follows. \end{proof} Theorem \ref{thm:zerodim} is useful when we can guarantee that $\mathscr{X}_{\text{irr}}(Y)$ is zero-dimensional. One case in which this holds is when $Y$ has no incompressible surfaces. \begin{corollary}\label{nsl} Let $Y$ be a closed, orientable, not sufficiently large 3-manifold. Then $\mathit{HP}(Y)\cong \mathbb{Z}_{(0)}^{\lambda_{SL(2,\mathbb{C})}(Y)}$. \end{corollary} \begin{proof} By the main result of \cite{culler-shalen}, the character variety of a NSL 3-manifold is zero-dimensional. So, Theorem \ref{thm:zerodim} applies. \end{proof} \section{Surgeries on Small Knots and the $\lambda_{SL(2,\mathbb{C})}$ Casson Invariant} \subsection{Surgeries on small knots} By applying Theorem \ref{thm:zerodim}, we can establish the connection between $\mathit{HP}(Y)$ for $Y$ a surgery on a small knot in $S^3$ and the $\operatorname{SL}(2, \mathbb{C})$ Casson invariant, $\lambda_{\operatorname{SL}(2, \mathbb{C})}(Y)$ as given in Theorem \ref{smallknot}. \begin{proof}[Proof of Theorem \ref{smallknot}] For any knot $K$, an incompressible surface in $Y=S^3_{p/q}(K)$ either comes from a closed essential surface in $S^3\backslash K$ or from an essential surface in $S^3\backslash K$ with boundary slope $p/q$ \cite{dehnknots}. When $K$ is small, the first case is ruled out. So, Corollary \ref{nsl} shows that $\mathit{HP}(Y)\cong \mathbb{Z}_{(0)}^{\lambda_{SL(2,\mathbb{C})}(Y)}$ whenever $p/q$ is not a boundary slope. Note that there are only finitely many boundary slopes by \cite{hatcherboundary}. 
\end{proof} The invariant $\lambda_{SL(2,\mathbb{C})}$ has been computed for a range of 3-manifolds, including surgeries on many families of knots \cite{curtis,2bridge,seifertfiber}. We provide a few examples of how those results yield formulae for the sheaf-theoretic Floer cohomology of surgeries on knots. First we review the results of \cite{curtis} in order to consider the case of a general small knot. Let $M=S^3\backslash N(K)$ be a knot exterior. Let $i:\partial M\to M$ denote the inclusion and $r:\mathscr{X}(M)\to \mathscr{X}(\partial M)$ denote the restriction map. \begin{defn} A slope $\gamma\in\partial M$ is \emph{irregular} if there exists an irreducible representation $\rho$ of $\pi_1(M)$ such that: (i) the character $[\rho]$ is in a one-dimensional component $\mathcal{X}_i$ of $\mathscr{X}_{\text{irr}}(M)$ such that $r(\mathcal{X}_i)$ is also one-dimensional; (ii) $\operatorname{tr}(\rho(\alpha))=\pm 2$ for all $\alpha\in \partial M$; (iii) $\ker(\rho\circ i_*)$ is cyclic, generated by $[\gamma]$. \end{defn} \begin{defn}\label{admissible} A slope $p/q$ is \emph{admissible} if: (i) it is regular and not a strict boundary slope; (ii) no $p'$-th root of unity is a root of the Alexander polynomial of $K$, where $p'=p$ for $p$ odd and $p'=\frac{p}{2}$ for $p$ even. \end{defn} With these definitions, we can state Theorem 4.8 of \cite{curtis}: \begin{theorem}[\cite{curtis}]\label{thm:curtis} Let $K$ be a small knot in $S^3$ with complement $M$. Let $\{\mathcal{X}_i\}$ be the collection of one-dimensional components of $\mathscr{X}(M)$ such that $r(\mathcal{X}_i)$ is one-dimensional and such that $\mathcal{X}_i$ contains the character of an irreducible representation. Then there exist integral weights $m_i>0$ depending only on $\mathcal{X}_i$ and non-negative $E_0,E_1\in\frac{1}{2}\mathbb{Z}$ depending only on $K$ such that for every admissible $\frac{p}{q}$ we have \begin{align*} \lambda_{\operatorname{SL}(2,\mathbb{C})}(S^3_{p/q}(K))=\frac{1}{2}\sum\limits_{i} m_i ||p\mathscr{M}+q\mathscr{L}||_i-E_{\sigma(p)}, \end{align*} where $\sigma(p)\in\{0,1\}$ is the parity of $p$, $||-||_i$ is the Culler-Shalen seminorm associated to $\mathcal{X}_i$ and $\sum\limits_{i} m_i ||p\mathscr{M}+q\mathscr{L}||_i:=||p/q||_T$ is the total Culler-Shalen seminorm. \end{theorem} There are only finitely many irregular slopes and only finitely many boundary slopes \cite{curtis}. Provided $p$ is chosen so that no $p'$-th root of unity is a root of the Alexander polynomial, where $p'$ is as in Definition \ref{admissible}(ii), the above theorem only excludes finitely many slopes $p/q$. Thus, by combining Theorem \ref{smallknot} with Theorem \ref{thm:curtis} we obtain a formula for the sheaf-theoretic Floer cohomology for most surgeries on small knots. \subsection{$\mathit{HP}$ for surgeries on two-bridge knots} Let $K(\alpha,\beta)$ be the two-bridge knot as defined by the notation in \cite{burdezieschang}. The $\operatorname{SL}(2, \mathbb{C})$ Casson invariants of surgeries on these small knots were computed in Theorem 2.5 of \cite{2bridge}. Applying their result and Theorem \ref{smallknot}, we obtain \begin{proposition}\label{HP2bridge} Let $S^3_{p/q}(K(\alpha,\beta))$ denote $p/q$ surgery on the two-bridge knot $K(\alpha,\beta)$. Assume $p/q$ is not a boundary slope and no $p'$-th root of unity is a root of the Alexander polynomial of $K(\alpha,\beta)$, where $p'=p$ for $p$ odd and $p'=p/2$ for $p$ even. Let $||p/q||_T$ denote the total Culler-Shalen seminorm of $p/q$.
Then, $$\mathit{HP}(S^3_{p/q}(K(\alpha,\beta)))= \begin{cases} \mathbb{Z}_{(0)}^{\frac{1}{2}||p/q||_T} &\text{if $p$ is even,}\ \\[10 pt] \mathbb{Z}_{(0)}^{\frac{1}{2}||p/q||_T-\frac{1}{4}(\alpha-1)} &\text{if $p$ is odd.}\ \\ \end{cases}$$ \end{proposition} \section{$\mathit{HP}_{\#}$ for surgeries on two-bridge knots} In this section, we prove Theorem \ref{repcalc} computing $\mathit{HP}_{\#}(Y)$ when $Y=S^3_{p/q}(K)$ is a surgery on a knot $K$ (under some strong restrictions). \begin{proof}[Proof of Theorem \ref{repcalc}] We are assuming that no $p'$-th root of unity is a root of the Alexander polynomial of $K$, where $p'=p$ for $p$ odd and $p'=\frac{p}{2}$ for $p$ even. By Lemma \ref{nars}, this condition ensures that there are no non-abelian reducibles. We first consider the abelian representations. These representations are those which factor through $H_1(Y;\mathbb{Z})\cong\mathbb{Z}/p\mathbb{Z}$. So, we have \begin{align*} \mathscr{R}_{\text{ab}}(Y)\cong \operatorname{Hom}(\mathbb{Z}/p\mathbb{Z},\operatorname{SL}(2, \mathbb{C})). \end{align*} Letting $a$ denote the generator of $\mathbb{Z}/p\mathbb{Z}$, we see that $\rho(a)$ can be any $\operatorname{SL}(2, \mathbb{C})$ matrix with eigenvalues $p$-th roots of unity. There are $|p|$ such roots. When $p$ is even, two of these roots are $\pm 1$, and $\pm I$ are the unique matrices with those eigenvalues. The other $|p|-2$ roots come in pairs $\zeta,\zeta^{-1}$ yielding conjugate $\operatorname{SL}(2, \mathbb{C})$ matrices. This gives $\frac{1}{2}(|p|-2)$ distinct conjugacy classes of matrices, each of which contributes a full conjugation orbit of representations; for an abelian, non-central representation this orbit is a copy of $T\mathbb{CP}^1$. Thus, we obtain $2$ points and $\frac{1}{2}(|p|-2)$ copies of $T\mathbb{CP}^1$ in the representation variety. Similarly, for $p$ odd, there is only one central representation and $\frac{1}{2}(|p|-1)$ copies of $T\mathbb{CP}^1$. For the irreducible representations, the Casson invariant $\lambda_{\operatorname{SL}(2, \mathbb{C})}(Y)$ gives the count of isolated points with multiplicity in the character variety. Since we are assuming that $\mathscr{X}_{\text{irr}}(Y)$ is zero-dimensional, the isolated points account for all irreducibles. Since we assume the scheme is smooth, the multiplicities of the points are all $1$ and the Casson invariant gives an honest count of points. The conjugation orbit of each isolated irreducible representation is a copy of $\operatorname{PSL}(2,\mathbb{C})$. For an irreducible representation $\rho$, the character scheme is smooth at $[\rho]$ if and only if the representation scheme is smooth at $\rho$, by Lemma 2.4 in \cite{AM}. Since we are assuming that the character scheme is smooth, we conclude that the representation scheme is smooth. Hence, we can apply Proposition \ref{smooth} to compute $\mathit{HP}_{\#}$. \end{proof} One situation in which we can apply Theorem \ref{repcalc} is when $K$ is a two-bridge knot. \begin{proposition}\label{HPframed2bridge} Let $K(\alpha,\beta)$ be a two-bridge knot and let $Y=S^3_{p/q}(K(\alpha,\beta))$ denote the 3-manifold obtained from $p/q$ Dehn surgery on $K$. Let $p'=p$ for $p$ odd and $p'=\frac{p}{2}$ for $p$ even. Assume that $p/q$ is not a boundary slope and that no $p'$-th root of unity is a root of the Alexander polynomial of $K$.
Then, \begin{equation*} \mathit{HP}^*_{\#}(Y)= H^*(\text{pt})^{\operatornamelus 2-\sigma(p)} \operatornamelus H^{*+2}(\mathbb{CP}^1)^{\operatornamelus \frac{1}{2}(|p|-2+\sigma(p))} \operatornamelus H^{*+3} (\operatorname{PSL}(2,\mathbb{C}))^{\operatornamelus (\frac{1}{2}||p/q||_T-\sigma(p)\frac{\alpha-1}{4})}, \end{equation*} where $\sigma(p)\in\{0,1\}$ is the parity of $p$. \end{proposition} \begin{proof} According to Proposition 2.2 of \cite{2bridge}, the zero-dimensional components of $\mathscr{X}_{\text{irr}}(S^3_{p/q}(K))$ are smooth under these hypotheses. Since $p/q$ is not a boundary slope, we know that $\mathscr{X}_{\text{irr}}(S^3_{p/q}(K))$ is entirely zero-dimensional by Culler-Shalen theory. Thus, Theorem \ref{repcalc} implies the result. \end{proof} In Section 8, we use this calculation to show that there does not exist an exact triangle relating $\mathit{HP}_{\#}$ for surgeries on two-bridge knots. \begin{remark}The hypothesis that the character scheme is zero-dimensional allows us to identify $\mathscr{R}_{\text{irr}}$ as some copies of $\operatornameeratorname{PSL}(2,\mathbb{C})$. In general, assuming $\mathscr{X}_{\text{irr}}$ is smooth, $\mathscr{R}_{\text{irr}}$ would be a fibration over $\mathscr{X}_{\text{irr}}$ with fibers diffeomorphic to $\operatornameeratorname{PSL}(2,\mathbb{C})$. Determining the precise structure of this fibration is not immediate. In Propositions \ref{grannyvar} and \ref{squarevar}, we compute the character schemes for the granny and square knots and show they are not a disjoint union of contractible components. So, one cannot immediately deduce the cohomology groups of $\mathscr{R}_{\text{irr}}$ (which give the interesting part of $\mathit{HP}_{\#}$) from those of $\mathscr{X}_{\text{irr}}$.\end{remark} \begin{section}{The character variety of $S^3\backslash (3_1\# 3_1)$} The knot group of the trefoil has the presentations \begin{align*} \pi_1(S^3\backslash 3_1)=& \langle a,b\mid a^3=b^2\rangle, \\ \cong & \langle r,s \mid rsr=srs\rangle. \end{align*} In the first presentation, the meridian is given by $a^2b^{-1}$ and the longitude is given by $ba(b^{-1}a)^5$. In the second presentation, the meridian is $r$ and the longitude is $sr^2sr^{-4}$. The character scheme of $3_1$ is \begin{align*} \mathscr{X}(S^3\backslash 3_1)\cong \{(y-2)(x^2-y-1)=0\}\subset\mathbb{C}^2, \end{align*} where $x=\operatorname{tr}\rho(r)$ and $y=\operatorname{tr}(rs^{-1})$. The line $\{y=2\}$ is $\mathscr{X}_{\text{red}}$ and $\{x^2-y=1,y\neq 2\}$ is $\mathscr{X}_{\text{irr}}$. The fundamental group of the complement of the knot $3_1\#3_1$ (which is isomorphic to the knot group of $3_1\#3_1^*$) has the presentation \begin{align*} \Gamma=\langle a,b,c,d\mid a^3=b^2, c^3=d^2, d=ba^{-2}c^2\rangle, \end{align*} where the subgroup $\Gamma_0$ generated by $a$ and $b$ corresponds to a copy of $\pi_1(S^3\backslash 3_1)$ and similarly the subgroup $\Gamma_1$ generated by $c,d$ corresponds to the knot group of the other $3_1$ summand. The relation $a^2b^{-1}=c^2d^{-1}$ comes from setting the meridian in $\Gamma_0$ equal to the meridian in $\Gamma_1$. Consider the following closed subsets of $\mathscr{X}(\Gamma)$, \begin{align*} \mathscr{X}_{\operatornameeratorname{red}}= &\{[\rho]\mid \rho \text{ is abelian}\}\text{ and } \\ \mathscr{X}_{i}= &\{[\rho]\mid \rho\vert_{\Gamma_{|1-i|}} \text{ is abelian}\}, \end{align*} where clearly $\mathscr{X}_{\operatornameeratorname{red}}\subset \mathscr{X}_{i}$. 
Since the abelianization of the knot group is generated by the meridian, we have that $\mathscr{X}_{\operatornameeratorname{red}}=\mathscr{X}(\mathbb{Z})\cong\mathbb{C}$, where the meridional trace is a coordinate for $\mathbb{C}$. \begin{lemma} Let $\mathscr{X}(\Gamma)\overset{r}{\rightarrow} \mathscr{X}(\Gamma_{i})$ denote the natural restriction map. Then the composite $\mathscr{X}_i\hookrightarrow\mathscr{X}(\Gamma)\overset{r}{\rightarrow} \mathscr{X}(\Gamma_{i})$ is an isomorphism $\mathscr{X}_i\cong \mathscr{X}(\Gamma_{i})$. \end{lemma} \begin{proof} If $\rho\vert_{\Gamma_{|1-i|}}$ is abelian, then it is determined by its value on the meridian. But the value of $\rho$ on the meridian is determined by its restriction to $\Gamma_{i}$, since the meridian lies in the intersection $\Gamma_0\cap\Gamma_1$, establishing injectivity. For surjectivity, we observe that for any representation $\rho\in \mathscr{X}(\Gamma_i)$, there exists an extension of $\rho$ to a representation of $\Gamma$ given by setting $\rho\vert_{\Gamma_{|1-i|}}$ to be the abelian representation of $\Gamma_{|1-i|}$ with the required meridional value. This lies in $\mathscr{X}_i$ by construction. \end{proof} Recall the following general fact: \begin{lemma}\cite{CCGSL}\label{nars} Let $\rho$ be a representation of $\pi_1(S^3\backslash K)$ with $[\rho]\in \mathscr{X}_{\operatornameeratorname{red}}\cap\overline{\mathscr{X}_{\operatornameeratorname{irr}}}$. Then the following equivalent conditions hold: \begin{itemize} \item $\Delta(\mu^2)=0$, where $\Delta$ is the Alexander polynomial of $K$ and $\mu$ is an eigenvalue of $\rho(m)$, for $m$ the meridian of the knot. \item There exists a non-abelian reducible representation $\rho'$ with the same character as $\rho$. \end{itemize} \end{lemma} The Alexander polynomial of the trefoil is the sixth cyclotomic polynomial, $\Delta_{3_1}(t)=t^2-t+1$. Thus, the above lemma guarantees non-abelian reducibles at meridional trace $\pm\sqrt{3}$. The same holds for $3_1\# 3_1$ since $\Delta_{3_1\# 3_1}=(\Delta_{3_1})^2$. This allows us to establish the following proposition: \begin{proposition}\label{components} Let $\mathscr{X}_{i,\operatornameeratorname{irr}}=\mathscr{X}_{i}\cap \mathscr{X}_{\operatornameeratorname{irr}}$ and let $S=\mathscr{X}(\Gamma)\backslash (\mathscr{X}_0\cup\mathscr{X}_1)$. Then the four irreducible components of $\mathscr{X}(\Gamma)$ are $\mathscr{X}_{\operatornameeratorname{red}}, \overline{\mathscr{X}}_{0,\operatornameeratorname{irr}}, \overline{\mathscr{X}}_{1,\operatornameeratorname{irr}}$ and $\overline{S}$. Moreover, these four components pairwise intersect in the same two points, corresponding to characters of non-abelian reducibles. \end{proposition} \begin{proof} That none of the four closed sets share any irreducible components follows from the description of the intersections. If $[\rho]\in \overline{S}\cap \overline{\mathscr{X}}_{0,\operatornameeratorname{irr}}$, then by restricting to $\mathscr{X}(\Gamma_{1})$, we see that $[\rho\vert_{\Gamma_1}]\in \mathscr{X}_{\operatornameeratorname{red}}(\Gamma_1)\cap\overline{\mathscr{X}_{\operatornameeratorname{irr}}(\Gamma_1)}$. Thus, Lemma \ref{nars} implies that $[\rho]$ is one of two points in $\mathscr{X}_{\operatornameeratorname{red}}$ corresponding to non-abelian reducibles. The other intersections follow similarly. It only remains to check that each of the four pieces is in fact irreducible. 
From the coordinate description of $\mathscr{X}(S^3\backslash 3_1)$, we see that $\mathscr{X}_{\operatornameeratorname{red}}=\{y=2\}$ and the $\overline{\mathscr{X}}_{i,\operatornameeratorname{irr}}$ are equal to $\{x^2-y=1\}$. In either case, they are isomorphic to $\mathbb{C}$. The irreducibility of $\overline{S}$ follows from Proposition \ref{cubic} below. \end{proof} \begin{proposition}\label{cubic} $\overline{S}$ is an affine cubic surface with precisely two $A_1$ singularities at the points $S_{\operatornameeratorname{sing}}=\overline{S}\backslash S=\mathscr{X}_{\text{nar}}$, the two characters of non-abelian reducible representations. \end{proposition} \begin{proof} Let $A=\rho(a)$, $B=\rho(b)$, etc. If $[\rho]\in S$, then $\rho\vert_{\Gamma_i}$ is non-abelian. But since $a^3=b^2$ is a central element of $\Gamma_0$, we must have $A^3=B^2=\pm I$. However, if $B^2=I$, then $B=\pm I$ and $\rho\vert_{\Gamma_0}$ would be abelian. Thus, we must have $A^3=B^2=-I$, and similarly $C^3=D^2=-I$ and $A,C\neq -I$. These equations are equivalent to $\operatornameeratorname{tr}(A)=\operatornameeratorname{tr}(C)=1$ and $\operatornameeratorname{tr}(B)=\operatornameeratorname{tr}(D)=0$. Now, since $d=ba^{-2}c^2$, we see that $D=BAC^{-1}$. Thus, we have the inclusion \begin{align*} S\subset\mathscr{S}=\{[\rho]\in\mathscr{X}(F_3)\mid \operatornameeratorname{tr}(A)=\operatornameeratorname{tr}(C)=1, \operatornameeratorname{tr}(B)=\operatornameeratorname{tr}(BAC^{-1})=0\}, \end{align*} where $F_3$ is the free group generated by $a,b,c$. Also, any representation of $F_3$ that lies in $\mathscr{S}$ is a representation of $\Gamma$, so that $\mathscr{S}\subset\mathscr{X}(\Gamma)$. Since $S$ is open in $\mathscr{X}(\Gamma)$, $\overline{S}$ is a union of the irreducible components meeting $S$. Thus, $\overline{S}=\mathscr{S}$ provided $\mathscr{S}$ is irreducible. So, we now turn to describing the algebraic set $\mathscr{S}$. Regarding $\mathscr{X}(F_3)$ as the character variety of the four-holed sphere, we see that $\mathscr{S}$ is a relative character variety; $\mathscr{S}$ is the locus of characters of $\pi_1(S^2-\{p_0,p_2,p_3,p_4\})$ with fixed traces along the four boundary circles. This relative character variety can be computed \cite{frickeklein} to be the affine cubic hypersurface in $\mathbb{C}^3$ given by the equation \begin{align*} f=x^2+y^2+z^2+xyz-z-2=0, \end{align*} where $x=\operatornameeratorname{tr}(AB), y=\operatornameeratorname{tr}(B^{-1}C)$ and $z=\operatornameeratorname{tr}(A^{-1}C)$. Furthermore, the reducible representations, which are the points in $\overline{S}\backslash S$, correspond to $(x,y,z)=(\pm \sqrt{3}, \mp\sqrt{3}, 2)$. These are precisely the singular points of the affine cubic surface $\overline{S}$. Since the Tjurina number, $\dim\widehat{\mathcal{O}}_{(x,y,z)}/(f,\partial_x f,\partial_y f,\partial_z f)$, is equal to 1 at the singularities, they are $A_1$ singularities. \end{proof} We record a calculation of the singular cohomology groups of $S$ for use in Section 7. \begin{proposition}\label{cohomology} The singular cohomology groups of $S$ are \begin{equation*} H^*(S;\mathbb{Z})= \begin{cases} \mathbb{Z} & i=0,\ \\ 0 & i=1,\ \\ \mathbb{Z}^2 & i=2,\ \\ \mathbb{Z}^4 & i=3,\ \\ 0 & i\geq 4.\ \end{cases} \end{equation*} \end{proposition} \begin{proof} Let $Q$ denote the projective closure of $\overline{S}$ inside of $\mathbb{P}^3$. 
One can check that $Q$ is smooth at infinity, meaning that $Q_{\operatornameeratorname{sm}}$, the smooth locus of $Q$, is the complement of the two singularities at $\overline{S}\backslash S$. By Theorem 4.3 in \cite{dimcasingularities}, the homology groups of $Q$ are \begin{equation*} H_*(Q;\mathbb{Z})= \begin{cases} \mathbb{Z} & i=0,\ \\ 0 & i=1,\ \\ \mathbb{Z}^5 & i=2,\ \\ 0 & i=3,\ \\ \mathbb{Z} & i=4.\ \end{cases} \end{equation*} By Poincar\' {e} duality, \begin{align*} H_n(Q_{\operatornameeratorname{sm}};\mathbb{Z})\cong H_c^{4-n}(Q_{\operatornameeratorname{sm}};\mathbb{Z}). \end{align*} We can equate the compactly supported cohomology with a relative cohomology group, \begin{align*} H_c^{n}(Q_{\operatornameeratorname{sm}};\mathbb{Z})\cong H^{n}(Q,Q_{\operatornameeratorname{sing}};\mathbb{Z}), \end{align*} which can be determined from the long exact sequence \begin{align*} \dots \to H^n(Q,Q_{\operatornameeratorname{sing}};\mathbb{Z})\to H^n(Q;\mathbb{Z})\to H^n(Q_{\operatornameeratorname{sing}};\mathbb{Z})\to\dots \end{align*} In particular, since $Q_{\operatornameeratorname{sing}}$ is zero-dimensional, we see that $H_n(Q_{\operatornameeratorname{sm}};\mathbb{Z})\cong H^{4-n}(Q;\mathbb{Z})$ for $n\leq 2$. And \begin{align*} \operatornameeratorname{rk} H_3(Q_{\operatornameeratorname{sm}};\mathbb{Z})= & \operatornameeratorname{rk} H^1(Q,Q_{\operatornameeratorname{sing}};\mathbb{Z}),\\ =& \operatornameeratorname{rk} H^1(Q;\mathbb{Z})+|Q_{\operatornameeratorname{sing}}|-1. \end{align*} So, the homology groups of $Q_{\operatornameeratorname{sm}}$ are \begin{equation*} H_*(Q_{\operatornameeratorname{sm}}; \mathbb{Z})= \begin{cases} \mathbb{Z} & i=0,\ \\ 0 & i=1,\ \\ \mathbb{Z}^5 & i=2,\ \\ \mathbb{Z} & i=3,\ \\ 0 & i\geq 4.\ \end{cases} \end{equation*} Let $Q_{\infty}=Q\backslash \overline{S}$. Then $S=Q_{\operatornameeratorname{sm}}\backslash Q_{\infty}$. We have $Q_{\infty}=\{xyz=0\}\subset\mathbb{P}^2$, which is a triangular arrangement of three lines. The normal bundle of each of these three copies of $\mathbb{P}^1$ has degree $-1$. So, a neighborhood of each sphere inside of $S$ is diffeomorphic to the $D^2$ bundle over $S^2$ with Euler number $-1$. The boundary of this neighborhood is diffeomorphic to $S^3$. Hence, the boundary of a neighborhood of $Q_{\infty}$, $\partial N(Q_{\infty})$, is a necklace of three copies of $S^3$. We can then apply the Mayer-Vietoris sequence \begin{align*} \dots \longrightarrow H_*(\partial N(Q_{\infty})) \longrightarrow H_*(Q_{\infty})\operatornamelus H_*(S) \longrightarrow H_*(Q_{\operatornameeratorname{sm}}) \longrightarrow \dots \end{align*} to compute the stated cohomology groups. \end{proof} \end{section} \begin{section}{The A-polynomials of the square and granny knots} We wish to describe the image of the natural map $r: \mathscr{X}(\Gamma)\to \mathscr{X}(\partial(S^3\backslash (3_1 \# 3_1^\circ)))$ given by restriction to the boundary torus. We use the notation $3_1 \# 3_1^\circ$ to collectively refer to either $3_1 \# 3_1$ or $3_1 \# 3_1^*$. Coordinates on $\mathscr{X}(\partial S^3\backslash N(3_1 \# 3_1^\circ))=\mathscr{X}(T^2)$ are given by the traces of the meridian and longitude. One may consider the double branched cover $d:\mathbb{C}^*\times\mathbb{C}^*\to \mathscr{X}(T^2)$ where the coordinates on the cover are given by the eigenvalues of the meridian and longitude, $M$ and $L$. The definining polynomial for the closure of the pull-back of the image of $r$ to $\mathbb{C}^*\times\mathbb{C}^*$ is called the A-polynomial \cite{CCGSL}. 
For the right-handed trefoil, the A-polynomial is $M^{-6}+L=0$, whereas for the left-handed trefoil it is $M^6+L=0$ \cite{CCGSL}. These equations define the image under $r$ of the components $\overline{\mathscr{X}}_{i,\operatorname{irr}}$. The subvariety $\mathscr{X}_{\text{red}}$ is mapped to the line $L=1$. \begin{lemma}\label{compapoly} Let $S$ be as in Proposition \ref{components}. Then the defining equation of the algebraic set $d^{-1}(\overline{r(S)})$ in eigenvalue coordinates is $L-M^{-12}=0$ for the granny knot (the composite of two right-handed trefoils) and $L=1$ for the square knot (the composite of a right-handed and a left-handed trefoil). \end{lemma} \begin{proof} Let $\ell_i$ denote the longitude of the $i$-th summand of $3_1\# 3_1^\circ$. Then, the longitude of $3_1\# 3_1^\circ$ is $\ell=\ell_0\ell_1$. Also, each of the $\ell_i$ commutes with the meridian $\mu$ in $\Gamma$. Since $\rho(\mu)$ is non-central, this means that $\rho(\ell_0)$ and $\rho(\ell_1)$ must commute with each other. In fact, for the irreducible representations of the right-handed trefoil, we have $\rho(\ell_i)=-\rho(m)^{-6}$ and similarly $\rho(\ell_i)=-\rho(m)^{6}$ for the left-handed trefoil. For $\rho\in S$, we have that $\rho$ restricted to either summand is irreducible. So for the granny knot, we then have $\rho(\ell)=\left(-\rho(m)^{-6}\right)^2=\rho(m)^{-12}$ and for the square knot we obtain $\rho(\ell)=1$. These matrix equations give the desired eigenvalue equations. \end{proof} \begin{proposition} The A-polynomial of the granny knot, $3_1\# 3_1$, is \begin{align*} A_{3_1\# 3_1}=(L-1)(L+M^{-6})(L-M^{-12}). \end{align*} The A-polynomial of the square knot, $3_1\# 3_1^*$, is \begin{align*} A_{3_1\# 3_1^*}=(L-1)(L+M^{-6})(L+M^{6}). \end{align*} \end{proposition} \begin{proof} The A-polynomial is a product (omitting repeated factors) of the defining polynomials for the images of the four components of $\mathscr{X}(\Gamma)$. Two of the components are copies of $\mathscr{X}(3_1)$, and therefore contribute factors corresponding to the A-polynomial of the right- or left-handed trefoil. The reducibles give the factor of $L-1$. The factor coming from the two-dimensional component $\overline{S}$ was determined in Lemma \ref{compapoly}. \end{proof} We will also be interested in the defining equation for the image of the map $r: \mathscr{X}_{\text{irr}}(\Gamma)\to \mathscr{X}(T^2)$, where we only consider the irreducibles. Let us call the defining polynomial for this curve $A_{K}^{\text{irr}}(M,L)$. Then, by the above discussion, we find: \begin{align*} A_{3_1\# 3_1}^{\text{irr}}= &(L+M^{-6})(L-M^{-12}),\\ A_{3_1\# 3_1^*}^{\text{irr}}= &(L+M^{-6})(L+M^{6})(L-1). \end{align*} \end{section} \section{Surgeries on the Granny and Square knots} In this section, we prove Theorems \ref{HPgranny} and \ref{HPsquare}. We proceed by calculating the relevant character schemes, showing they are smooth, and then computing their singular cohomology groups, so that we can apply Corollary~\ref{modtwo} to write $\mathit{HP}$ as the (degree-shifted) singular cohomology of the character scheme. \subsection{Character scheme of a composite knot} First, we establish a general procedure for computing the (set-theoretic) characters of the exterior of a composite knot. Although the character variety of $3_1\# 3_1$ was computed in Section 5, the description given here will be particularly amenable to computing the character varieties of the surgeries. The description from Section 5 will also be useful.
Let $K_1$ and $K_2$ be two oriented knots in $S^3$ and set $K=K_1\# K_2$, $M_i=S^3\backslash K_i$, $M=S^3\backslash K$. We have the following pushout diagram of spaces: \[ \begin{tikzcd} M \arrow[hookleftarrow]{r}\arrow[hookleftarrow]{d} & M_1 \arrow[hookleftarrow]{d}{i_1} \\ M_2 \arrow[hookleftarrow]{r}{i_2} & S^1 \end{tikzcd} \] where $i_j(S^1)= m_j$, a meridian for $K_j$, $j=1,2$. By the van Kampen theorem, we have the pushout diagram of groups: \[ \begin{tikzcd} \pi_1(M) \arrow[hookleftarrow]{r}\arrow[hookleftarrow]{d} & \pi_1(M_1) \arrow[hookleftarrow]{d}\\ \pi_1(M_2) \arrow[hookleftarrow]{r} & \pi_1(S^1) \end{tikzcd} \] That is, $\pi_1(M)\cong \pi_1(M_1)*\pi_1(M_2)/\langle m_1= m_2 \rangle$. We have a pullback diagram of representation spaces: \[ \begin{tikzcd} \mathscr{R}(M) \arrow[twoheadrightarrow ]{r}\arrow[twoheadrightarrow]{d} & \mathscr{R}(M_1) \arrow[twoheadrightarrow]{d}{r_1}\\ \mathscr{R}(M_2) \arrow[twoheadrightarrow]{r}{r_2} & \mathscr{R}(S^1) \end{tikzcd} \] To analyze $\mathscr{X}(M)=\mathscr{R}(M)//\operatorname{SL}(2, \mathbb{C})$, we can compare it to a simpler object: the fiber product of the character schemes $\mathscr{X}(M_1)\times_{\mathscr{X}(S^1)} \mathscr{X}(M_2)$. We have the diagram \[ \begin{tikzcd} \mathscr{X}(M) \arrow[bend left]{drr} \arrow[bend right,swap]{ddr} \arrow{dr}[description]{\varphi} & & \\ & \mathscr{X}(M_1)\times_{\mathscr{X}(S^1)} \mathscr{X}(M_2) \arrow[twoheadrightarrow ]{r}\arrow[twoheadrightarrow]{d} & \mathscr{X}(M_1) \arrow[twoheadrightarrow]{d}{\overline{r}_1}\\ & \mathscr{X}(M_2) \arrow[twoheadrightarrow]{r}{\overline{r}_2} & \mathscr{X}(S^1) \end{tikzcd} \] where $\overline{r}_1([\rho_1])=\operatorname{tr}(\rho_1( m_1))$. \subsubsection*{Pullbacks and quotients} In order to understand the character scheme of $M$ from the fiber product of the character schemes of $M_1$ and $M_2$, we must determine the pre-images of points under $\varphi$. We establish the following lemma: \begin{lemma}\label{pullbackquot} Let $\varphi: \mathscr{X}(M)\to \mathscr{X}(M_1)\times_{\mathscr{X}(S^1)} \mathscr{X}(M_2)$ denote the natural map as above. Then for any $p=([\rho_1],[\rho_2])\in \mathscr{X}(M_1)\times_{\mathscr{X}(S^1)} \mathscr{X}(M_2)$, we have \begin{align*} \varphi^{-1}(p)\cong \operatorname{Stab}(m)/\langle \operatorname{Stab}(\rho_1), \operatorname{Stab}(\rho_2)\rangle, \end{align*} where $m=r_1(\rho_1)=r_2(\rho_2)$. \end{lemma} \begin{proof} The pre-image of $p$ in $\mathscr{R}(M_1)\times \mathscr{R}(M_2)$ is $\operatorname{Orb}(\rho_1)\times \operatorname{Orb}(\rho_2)$. The pair $(\rho_1,\rho_2)$ is a point here that is also in $\mathscr{R}(M_1)\times_{\mathscr{R}(S^1)} \mathscr{R}(M_2)$. All other such points can be obtained by using the action of $\operatorname{Stab}(m)$ on each factor, or else using the diagonal action of $\operatorname{SL}(2, \mathbb{C})$. This gives the set \begin{align*} (\mathscr{R}(M_1)\times_{\mathscr{R}(S^1)} \mathscr{R}(M_2))\cap(\operatorname{Orb}(\rho_1)\times \operatorname{Orb}(\rho_2)) = & \operatorname{SL}(2, \mathbb{C})\cdot (\operatorname{Stab}(m)\cdot \rho_1\times \operatorname{Stab}(m)\cdot\rho_2),\\ = & \operatorname{SL}(2, \mathbb{C})\cdot (\operatorname{Stab}(m)\cdot \rho_1\times \rho_2).
\end{align*} Reducing modulo the diagonal action of $\operatorname{SL}(2, \mathbb{C})$, \begin{align*} & \operatorname{SL}(2, \mathbb{C})\cdot (\operatorname{Stab}(m)\cdot \rho_1\times \rho_2)/G,\\ = & \operatorname{Stab}(m)/\langle\operatorname{Stab}(\rho_1), \operatorname{Stab}(\rho_2)\rangle. \end{align*} Thus, $\varphi^{-1}(p)\cong \operatorname{Stab}(m)/\langle \operatorname{Stab}(\rho_1), \operatorname{Stab}(\rho_2)\rangle$. \end{proof} \subsection{Irreducible representations in the character scheme of a composite knot} To determine the locus of irreducible representations $\mathscr{X}_{\text{irr}}(M)$, we first describe $\mathscr{X}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}(M_2)$ and then use Lemma \ref{pullbackquot} to understand the fibers of $\varphi$ over the various components. Recall that $\mathscr{X}(M)$ has a stratification $\mathscr{X}_{\text{nar}}\subset \mathscr{X}_{\text{red}}\subset \mathscr{X}$, where $\mathscr{X}_{\text{nar}}$ is the locus of characters of non-abelian reducible representations. The complement $\mathscr{X}_{\text{irr}}=\mathscr{X}\backslash \mathscr{X}_{\text{red}}$ is the locus of irreducibles. The scheme $\mathscr{X}_{\text{nar}}$ can be identified from Lemma 4.2. The characters of non-abelian reducibles are also the characters of abelian reducibles. That is, every reducible character has an associated orbit of abelian representations, but for those characters in $\mathscr{X}_{\text{nar}}$, there is an additional orbit corresponding to non-abelian reducible representations. Taking the product stratification on $\mathscr{X}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}(M_2)$ gives nine different strata of six essentially different types. The following proposition states which strata intersect the image $\varphi(\mathscr{X}_{\text{irr}}(M))$ and also identifies the set of irreducible representations in the fiber of $\varphi$ over a point in a given stratum. \begin{proposition}\label{compositechar} Using the previously established notation, $\varphi(\mathscr{X}_{\text{irr}}(M))$ consists of the following pieces: \begin{itemize} \item $\mathscr{X}_{\text{irr}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{irr}}(M_2),$ \item $\mathscr{X}_{\text{irr}}(M_i),$ \item $\mathscr{X}_{\text{nar}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{nar}}(M_2).$ \end{itemize} The fibers of $\varphi$ are copies of: \begin{itemize} \item $\mathbb{C}^*$ over points in $\mathscr{X}_{\text{irr}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{irr}}(M_2)$ with meridional eigenvalue $\mu\neq \pm 1$. \item $\mathbb{C}$ over points in $\mathscr{X}_{\text{irr}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{irr}}(M_2)$ with meridional eigenvalue $\mu=\pm 1$. \item A single point over points in $\mathscr{X}_{\text{irr}}(M_i)$ with $\Delta(\mu^2)\neq 0$. \item $\mathbb{C}$ over points in $\mathscr{X}_{\text{irr}}(M_i)$ with $\Delta(\mu^2)= 0$. \item $\mathbb{C}^*\backslash\{1\}$ over points in $\mathscr{X}_{\text{nar}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{nar}}(M_2)$. \end{itemize} \end{proposition} \begin{proof} First, we identify the copy of $\mathscr{X}_{\text{irr}}(M_1)$ that appears in $\mathscr{X}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}(M_2)$. A reducible character is the character of an abelian representation, and the meridian generates the abelianization of the knot group.
Thus, the isomorphism $H_1(M_1)\cong \pi_1(S^1)$, where $S^1$ is a meridional circle, yields an isomorphism $\mathscr{X}_{\text{red}}(M_1)\cong \mathscr{X}(S^1)$. Taking fiber products, $\mathscr{X}_{\text{irr}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{red}}(M_2)\cong \mathscr{X}_{\text{irr}}(M_1)$. Now, we show that the image of $\varphi$ consists of the stated pieces. Indeed, the only strata not included in the list are contained in $(\mathscr{X}_{\text{red}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{red}}(M_2))\backslash (\mathscr{X}_{\text{nar}}(M_1)\times_{\mathscr{X}(S^1)}\mathscr{X}_{\text{nar}}(M_2))$. These correspond to representations of the form $\rho_1*\rho_2$ where (e.g.) $\rho_1$ is abelian and $\rho_2$ is reducible. However, for an abelian representation, $\operatorname{im}(\rho_1)=\operatorname{im}(\rho_1\vert_{m_1})$ since the meridian $m_1$ generates the abelianization of $\pi_1(M_1)$. Thus, since the $\rho_i$ agree on $m_i$, we see that $\operatorname{im}(\rho_1*\rho_2)=\operatorname{im}(\rho_2)$, so that the composite representation is also reducible. Thus, none of these pairings provide irreducible representations. For $p=([\rho_1],[\rho_2])\in \mathscr{X}(M_1)\times_{\mathscr{X}(S^1)} \mathscr{X}(M_2)$, if both $[\rho_1],[\rho_2] \in \mathscr{X}_{\text{irr}}$, then $\operatorname{Stab}(\rho_i)=\{\pm 1\}$. Furthermore, $r_1(\rho_1)$ is an abelian, non-central representation (if $\rho_1( m)=\pm I$, then the entire representation is central because $ m_1$ normally generates $\pi_1(M_1)$). Thus, $\operatorname{Stab}(r_1(\rho_1))\cong\mathbb{C}^*$ for meridional trace not $\pm 2$, and $\operatorname{Stab}(r_1(\rho_1))\cong\mathbb{C}\times\mathbb{Z}/2$ otherwise. So, $\varphi^{-1}(p)\cong\mathbb{C}^*/\{\pm 1\}\cong\mathbb{C}^*$ or $\varphi^{-1}(p)\cong\mathbb{C}$ by Lemma \ref{pullbackquot}. If $[\rho_1]$ is irreducible but $[\rho_2]$ is reducible, then we can find an abelian lift $\rho_2$, so that $\operatorname{Stab}(\rho_2)=\operatorname{Stab}(r_2(\rho_2))$, and the fiber $\varphi^{-1}(p)$ is a point. For a non-abelian lift of $\rho_2$, $\operatorname{Stab}(\rho_2)$ is trivial. Moreover, the trace of the meridian cannot be $\pm 2$ for a non-abelian reducible because $\Delta(1)\neq 0$. Therefore, the stabilizer of the meridian must be $\mathbb{C}^*$. The abelian representation lies in the closure of the orbit of the non-abelian reducibles, so that $\varphi^{-1}(p)\cong\mathbb{C}$ for such a point. If both are reducible and at least one is abelian, then the overall representation is reducible. If both are non-abelian reducibles, then the stabilizers of each representation are trivial and the stabilizer of the meridian is $\mathbb{C}^*$, giving that the fiber of $\varphi$ is $\mathbb{C}^*$. However, not all of these representations are irreducible. We have that $\operatorname{im}(\rho_i)\subset B_i$, for $B_1,B_2$ Borel subgroups. For some $d\in \operatorname{Stab}(r_1(\rho_1))$, the composite representation corresponding to $d$ has image generated by $\langle \operatorname{im}(\rho_1), d^{-1}\operatorname{im}(\rho_2)d\rangle$. If this image were contained in some Borel subgroup $B$, then $\operatorname{im}(\rho_1)$ would be contained in two Borel subgroups, so either it is contained in a diagonal subgroup (but then $\rho_1$ is abelian), or else $B=B_1$.
Then, we have $d^{-1}\operatorname{im}(\rho_2)d\subset B_1$, and so by the same argument we conclude $B_1=d^{-1}B_2d$. Thus, $d\in\operatorname{Stab}(B_2)$, which is trivial in $G^{\operatorname{ad}}$. Hence, precisely one point in $\operatorname{Stab}(r_1(\rho_1))$ corresponds to a reducible --- the rest are irreducible. So, the irreducibles in $\varphi^{-1}(p)$ form a copy of $\mathbb{C}^*\backslash\{1\}$. \end{proof} \subsection{Character scheme of a connected sum of two trefoils} We now focus on the case when $K_1=K_2=3_1$. The character scheme of the trefoil can be described as a plane curve: \begin{align*} \mathscr{X}(3_1)\cong \{(y-2)(x^2-y-1)=0\}\subset\mathbb{C}^2, \end{align*} where $x$ is the trace of the meridian. In terms of the Wirtinger presentation, we have $x=\operatorname{tr}(\rho(r))=\operatorname{tr}(\rho(s))$ and $y=\operatorname{tr}(\rho(rs^{-1}))$. The line $\{y=2\}$ is $\mathscr{X}_{\text{red}}$ and $\{x^2-y=1, y\neq 2\}$ is $\mathscr{X}_{\text{irr}}$. The map $\overline{r}_1$ is projection onto the $x$ coordinate. The longitude for $3_1$ is $\ell=sr^2sr^{-4}$, and its trace in the $x,y$ coordinates is given by the polynomial \begin{align*} L(x,y) = x^6y-2x^6-x^4y^2-2x^4y+8x^4+2x^2y^2+x^2y-10x^2+2. \end{align*} The restriction of $L(x,y)$ to $y=2$ is the constant function $2$, as expected. On this component, $\rho(\ell)=I$. The restriction of $L(x,y)$ to $y=x^2-1$ is $L=-x^6+6x^4-9x^2+2$, which can be deduced from the fact that for the irreducible representations, we have $\rho(\ell)=-\rho( m)^{-6}$. The Alexander polynomial has roots that are primitive 6th roots of unity. So, non-abelian reducibles occur at the points $(\pm \sqrt{3}, 2)\in \mathscr{X}_{\text{red}}$. Observe that this is precisely $\overline{\mathscr{X}_{\text{irr}}}\backslash\mathscr{X}_{\text{irr}}$. The fiber product of the character varieties over the meridional trace map is \begin{align*} \mathscr{X}(3_1)\times_{\mathbb{C}} \mathscr{X}(3_1)\cong \{(y-2)(x^2-y-1)=0,(z-2)(x^2-z-1)=0\}\subset\mathbb{C}^3. \end{align*} Applying Proposition \ref{compositechar}, we have the following explicit descriptions of the fibers of $\varphi$ over points in the various strata of $\varphi(\mathscr{X}_{\text{irr}}(K_1\# K_2))$: \begin{itemize} \item $\mathscr{X}_{\text{irr}}\times_{\mathbb{C}} \mathscr{X}_{\text{irr}}= \{x^2-y-1=0,x^2-z-1=0,y\neq 2,z\neq 2\}$. The fibers of $\varphi$ are $\mathbb{C}^*$ unless $x=\pm 2$, in which case they are $\mathbb{C}$. \item $\mathscr{X}_{\text{irr}}(M_1)=\{z=2,x^2-y-1=0,y\neq 2\}$. Note that since $y\neq 2$, we have $x\neq \pm\sqrt{3}$ and $\Delta(\mu^2)\neq 0$. So, the fibers of $\varphi$ are just points. The same holds for $\mathscr{X}_{\text{irr}}(M_2)=\{y=2,x^2-z-1=0,z\neq 2\}$. \item $\mathscr{X}_{\text{nar}}\times_{\mathbb{C}}\mathscr{X}_{\text{nar}}=\{(\pm \sqrt{3}, 2,2)\}$. The fibers of $\varphi$ are $\mathbb{C}^*\backslash\{1\}$. \end{itemize} \begin{remark} To compare this description with that of Proposition \ref{components}, we see that \begin{itemize} \item $\varphi^{-1}(\mathscr{X}_{\text{irr}}\times_{\mathbb{C}} \mathscr{X}_{\text{irr}}\cup \mathscr{X}_{\text{nar}}\times_{\mathbb{C}}\mathscr{X}_{\text{nar}})=S$, \item $\mathscr{X}_{\text{irr}}(M_i)=\mathscr{X}_{i,irr}$. \end{itemize} \end{remark} Since $\pi_1(3_1\# 3_1)$ and $\pi_1(3_1\# 3_1^*)$ are isomorphic, the same description applies to $\mathscr{X}_{\text{irr}}(3_1\# 3_1^*)$.
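As a quick sanity check, the restrictions of $L(x,y)$ stated above can be verified symbolically; a minimal \texttt{sympy} sketch follows (the variable names are ours, and $M$ denotes a meridional eigenvalue, so that $x=M+M^{-1}$).
\begin{verbatim}
# Minimal sympy check of the longitude-trace polynomial L(x, y) for the trefoil.
import sympy as sp

x, y, M = sp.symbols('x y M')
L = (x**6*y - 2*x**6 - x**4*y**2 - 2*x**4*y + 8*x**4
     + 2*x**2*y**2 + x**2*y - 10*x**2 + 2)

print(sp.expand(L.subs(y, 2)))           # reducible component {y = 2}: constant 2
print(sp.expand(L.subs(y, x**2 - 1)))    # irreducible component: -x**6 + 6*x**4 - 9*x**2 + 2

# On the irreducible component, with x = M + 1/M, the trace of the longitude
# should equal -(M**6 + M**-6), matching rho(ell) = -rho(m)**(-6).
diff = L.subs(y, x**2 - 1).subs(x, M + 1/M) + (M**6 + M**-6)
print(sp.simplify(diff))                 # prints 0
\end{verbatim}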
\subsection{Character scheme for granny knot surgeries} Let $S^3_{p/q}(3_1\# 3_1)$ denote the $p/q$ surgery on the granny knot. We have the following description of $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1))$. \begin{proposition}\label{grannyvar} $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1))$ consists of $2\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/q}(3_1))$ points and \begin{itemize} \item $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/2q}(3_1))$ copies of $\mathbb{C}^*$ when $p$ is odd. \item $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/2q}(3_1))-1$ copies of $\mathbb{C}^*$ when $p$ is even, $p\neq 12k$. \item $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/2q}(3_1))-3$ copies of $\mathbb{C}^*$ and 2 copies of $\mathbb{C}^*\backslash\{1\}$ when $p=12k, p/q\neq 12$. \item $S=\varphi^{-1}(\mathscr{X}_{\text{nar}}\times_{\mathbb{C}}\mathscr{X}_{\text{nar}}\cup\mathscr{X}_{\text{irr}}\times_{\mathbb{C}}\mathscr{X}_{\text{irr}})$ when $p/q=12$. \end{itemize} \end{proposition} We will describe $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1))$ as a closed subscheme of $\mathscr{X}_{\text{irr}}(S^3\backslash (3_1\# 3_1))$. First, we have the following lemma. \begin{lemma} Let $\varphi: \mathscr{X}_{\text{irr}}(S^3\backslash (3_1\# 3_1))\to \mathscr{X}(S^3\backslash 3_1)\times_{\mathbb{C}}\mathscr{X}(S^3\backslash 3_1)$ denote the map to the fiber product over the meridional trace. Then \begin{align*} \mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1)) = \varphi^{-1}(\varphi( \mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1)))). \end{align*} \end{lemma} \begin{proof} A character $[\rho]=[(\rho_1,\rho_2)]\in \mathscr{X}_{\text{irr}}(S^3\backslash (3_1\# 3_1))$ is in the character scheme for the $p/q$ surgery if the surgery equation $\rho( m^p\ell^q)=I$ is satisfied. For a composite knot, the longitude $\ell$ is the product of the two longitudes for the constituent knots. Thus, the surgery equation is \begin{align*} \rho_1( m)^p\left(\rho_1(\ell_1)\rho_2(\ell_2)\right)^q=I. \end{align*} If $[\rho']\in\varphi^{-1}(\varphi([\rho]))$, then it is of the form $[\rho']=[(\rho_1,g^{-1}\rho_2 g)]$ for some $g\in\operatorname{Stab}(\rho(m))$. For an irreducible representation, we cannot have $\rho( m)=\pm I$. Thus, $\operatorname{Stab}(\rho( m))$ is one-dimensional. Furthermore, since $\ell_2$ and $ m$ commute, we must have $\operatorname{Stab}(\rho( m))\subset \operatorname{Stab}(\rho(\ell_2))$. Therefore, $g^{-1}\rho_2(\ell_2) g=\rho_2(\ell_2)$, verifying the surgery equation for $[\rho']$. \end{proof} Thanks to this lemma, it suffices to describe $\varphi(\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1)))$. We consider each of the three different types of points in $\varphi(\mathscr{X}_{\text{irr}}(S^3\backslash (3_1\# 3_1)))$ separately. \begin{lemma} The locus of characters of $\pi_1(S^3_{p/q}(3_1\# 3_1))$ that restrict to an irreducible in $\pi_1(S^3\backslash K_1)$ and an abelian in $\pi_1(S^3\backslash K_2)$ is $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1))\cap \varphi^{-1}(\mathscr{X}_{\text{irr}}(M_i))$. This space consists of $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/q}(3_1))$ points. \end{lemma} \begin{proof} For $([\rho_1],[\rho_2])\in \mathscr{X}_{\text{irr}}(M_1)\subset \mathscr{X}(3_1)\times_{\mathbb{C}} \mathscr{X}(3_1)$, $\rho_2$ is an abelian representation. Thus, $\rho_2(\ell_2)=I$. The surgery equation then reduces to $\rho( m^p\ell_1^q)=I$, which is just the condition for $p/q$ surgery on the trefoil.
So, \begin{align*} |\varphi(\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1)))\cap \mathscr{X}_{\text{irr}}(M_1)|=\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/q}(3_1)). \end{align*} Since the fibers of $\varphi$ over these types of characters are just points, we obtain the result. \end{proof} \begin{lemma} The set of characters that restrict to an irreducible representation on both factors is given by $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1))\cap \varphi^{-1}(\mathscr{X}_{\text{irr}}\times_{\mathbb{C}} \mathscr{X}_{\text{irr}})$, which consists of \begin{equation*} \begin{cases} \lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/2q}(3_1))=\frac{1}{2}|p-12q|-\frac{1}{2} \text{ copies of }\mathbb{C}^* & \text{ if } p \text{ is odd}, \ \\ \frac{1}{2}|p-12q|-1\text{ copies of }\mathbb{C}^* & \text{ if } p \text{ is even}, p\neq 12k, \ \\ \frac{1}{2}|p-12q|-3\text{ copies of }\mathbb{C}^* & \text{ if } p=12k, p/q\neq 12,\ \\ \varphi^{-1}(\mathscr{X}_{\text{irr}}\times_{\mathbb{C}} \mathscr{X}_{\text{irr}}) & \text{ if } p/q=12. \end{cases} \end{equation*} \end{lemma} \begin{proof} For irreducible representations of $\pi_1(S^3\backslash 3_1)$, $\rho(\ell)$ is determined by $\rho( m)$. In fact, we have $\rho(\ell)=-\rho( m)^{-6}$. For a point $\varphi([\rho])=([\rho_1],[\rho_2])\in \mathscr{X}_{\text{irr}}\times_{\mathbb{C}} \mathscr{X}_{\text{irr}}$, we have $\rho_1( m_1)=\rho_2( m_2)$, so that $\rho_1(\ell_1)=\rho_2(\ell_2)$. Thus, \begin{align*} \rho( m^p\ell^q)=\rho_1( m^p\ell_1^{2q}). \end{align*} For $p$ odd, the equation $\rho_1( m^p\ell_1^{2q})=I$ is just the defining equation for $p/2q$ surgery on the trefoil. Thus, we obtain $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/2q}(3_1))$ points. None of these occur at meridional trace $\pm 2$, so that the fiber of $\varphi$ is a copy of $\mathbb{C}^*$ for all of these points. For $p$ even, $p\neq 12k$, the surgery equation \begin{align*} \rho( m)^{p-12q}=I \end{align*} has an even exponent. Thus, we obtain \begin{align*} \frac{1}{2}(|12q-p|-2) \end{align*} distinct characters, where the $-2$ term serves to discount the roots at $\rho( m)=\pm I$. For $p=12k, p/q\neq 12$, two of the characters in this count occur at meridional trace $\pm\sqrt{3}$, so we subtract $2$ in this case. Again, all of the fibers of $\varphi$ are $\mathbb{C}^*$. For $p/q=12$, the surgery equation is trivial, so that every representation of this form provides a representation of the surgery. \end{proof} \begin{lemma}\label{grannynonab} The set of irreducible representations formed from a composite of non-abelian reducible representations is \begin{equation*} \mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1))\cap \varphi^{-1}(\mathscr{X}_{\text{nar}}\times_{\mathbb{C}} \mathscr{X}_{\text{nar}}) = \begin{cases} 2 \text{ copies of }\mathbb{C}^*\backslash\{1\} & \text{ if } p=12k, \ \\ \emptyset & \text{ else. } \ \\ \end{cases} \end{equation*} \end{lemma} \begin{proof} For $([\rho_1],[\rho_2])\in \mathscr{X}_{\text{nar}}\times_{\mathbb{C}} \mathscr{X}_{\text{nar}}$, we have $\operatorname{tr}(\rho_i( m))=\pm\sqrt{3}$ and $\rho_i(\ell_i)=I$. Thus, the surgery equation becomes $\rho( m)^p=I$. This holds if and only if $p=12k$. 
\end{proof} For the remaining case of $p/q=12$, we have found that the character scheme of 12 surgery on the granny knot, $\mathscr{X}_{\text{irr}}(S^3_{12}(3_1\# 3_1))$, consists of 2 points coming from the irreducible representation in each of the two copies of $\mathscr{X}_{\text{irr}}(S^3_{12}(3_1))$ and the surface \begin{align*} S=\varphi^{-1}(\mathscr{X}_{\text{nar}}\times_{\mathbb{C}}\mathscr{X}_{\text{nar}}\cup\mathscr{X}_{\text{irr}}\times_{\mathbb{C}}\mathscr{X}_{\text{irr}}). \end{align*} Putting this and the preceding lemmas together, we obtain Proposition \ref{grannyvar}. \begin{remark} 12 surgery on the granny knot yields a Seifert fiber space fibered over the orbifold base $S^2(2,2,3,3)$ \cite{kalliongis}. Thus, \begin{align*} \pi_1(S_{12}^3(3_1\# 3_1))\cong \langle a,b,c \mid a^3=b^3=c^2=(abc)^{-2}\rangle. \end{align*} \end{remark} \subsection{Character scheme for square knot surgeries} Let $3_1\# 3_1^*$ denote the square knot, a connected sum of two mirror trefoils, and $S^3_{p/q}(3_1\# 3_1^*)$ the $p/q$ surgery. We have the following description of $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1^*))$ \begin{proposition}\label{squarevar} The character scheme $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1^*))$ consists of $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/q}(3_1))+\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{-p/q}(3_1))$ points and \begin{itemize} \item $\frac{1}{2}|p|-\frac{1}{2}$ copies of $\mathbb{C}^*$ when $p$ is odd, \item $\frac{1}{2}|p|-1$ copies of $\mathbb{C}^*$ when $p$ is even, $p\neq 12k$, \item $\frac{1}{2}|p|-3$ copies of $\mathbb{C}^*$ and 2 copies of $\mathbb{C}^*\backslash\{1\}$ when $p=12k\neq 0$, \item $S=\varphi^{-1}(\mathscr{X}_{\text{nar}}\times_{\mathbb{C}}\mathscr{X}_{\text{nar}}\cup\mathscr{X}_{\text{irr}}\times_{\mathbb{C}}\mathscr{X}_{\text{irr}})$ when $p=0$. \end{itemize} \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{grannyvar}. The essential difference is that we also need to consider the representations of the left-handed trefoil. Since $S^3_{p/q}(3_1)\cong S^3_{-p/q}(3_1^*)$, we can relate the Casson invariants by \begin{align*} \lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/q}(3_1))= \lambda_{\operatorname{SL}(2, \mathbb{C})} (S^3_{-p/q}(3_1^*)). \end{align*} Thus, the intersection of $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1^*))$ with the two copies of $\mathscr{X}_{\text{irr}}(3_1)$ give contributions of $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/q}(3_1))$ and $\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{-p/q}(3_1))$ points, depending on whether the copy of $\mathscr{X}_{\text{irr}}(3_1)$ corresponds to the right or left-handed trefoil. For irreducible representations of the right-handed trefoil, we have $\rho_1(\ell_1)=-\rho_1(m)^{-6}$, whereas for the left-handed trefoil we have $\rho_2(\ell_2)=-\rho_2(m)^6$. So, for a representation of the composite that restricts to irreducibles on either factor, we find that $\rho(\ell)=\rho(\ell_1\ell_2)=I$. The equation for $p/q$ surgery reduces to \begin{align*} \rho(m)^p=I. \end{align*} Throwing away the solutions $\rho(m)=\pm I$ and counting solutions up to conjugacy (i.e. dividing by the equivalence $\rho(m)\sim\rho(m)^{-1}$), we find $\frac{1}{2}|p|-\frac{1}{2}$ solutions for $p$ odd, and $\frac{1}{2}|p|-1$ solutions for $p$ even, $p\neq 12k$. For $p=12k\neq 0$, we omit the two solutions with $\operatorname{tr}(\rho(m))=\pm\sqrt{3}$, as these correspond to non-abelian reducible representations rather than irreducibles. 
The case of irreducibles formed from the composite of non-abelian reducible representations, which only occurs when $p=12k$, is the same as in Lemma \ref{grannynonab}. When $p=0$, the surgery equation is trivial, and we have the same situation as for $p=12$ for the granny knot. \end{proof} \begin{remark} 0 surgery on the square knot yields a Seifert fiber space fibered over the orbifold base $S^2(-2,2,3,3)$ \cite{kalliongis}. Thus, \begin{align*} \pi_1(S_{0}^3(3_1\# 3_1^*))\cong \langle a,b,c \mid a^3=b^3=c^2=(abc)^{2}\rangle. \end{align*} \end{remark} \subsection{Smoothness of the Character Schemes} \begin{proposition} Let $3_1\# 3_1$ and $3_1\# 3_1^*$ denote the granny and square knots, respectively. The schemes $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1))$ and $\mathscr{X}_{\text{irr}}(S^3_{p/q}(3_1\# 3_1^*))$ are smooth schemes for all $p$ and $q$. \end{proposition} \begin{proof} The sets of complex points of these schemes were computed in the previous section. They consisted of components of dimensions zero, one, and, in the cases of $S^3_{12}(3_1\# 3_1)$ and $S^3_0(3_1\# 3_1^*)$, two. To establish the smoothness of the character scheme near some irreducible representation $\rho$, we must show that the local dimension of the set of complex points at $\rho$ equals the dimension of the tangent space to the scheme at $\rho$. Recall that for an irreducible representation $\rho$ the tangent space is computed by $T_{[\rho]}\mathscr{X}_{\text{irr}}(\Gamma)=H^1(\Gamma;\operatorname{ad}\rho)$. Thus, the proposition follows from the calculation of these $H^1$ groups in Lemma \ref{h1surgery} below, which we prove after two preliminary lemmas. \end{proof} \begin{lemma}\label{h1exterior} Let $\rho$ be an irreducible representation of $\pi_1(S^3\backslash(3_1\#3_1^\circ))$ (where $3_1\#3_1^\circ$ is either the square or granny knot, which have isomorphic fundamental groups). Let $\rho_1$ and $\rho_2$ be the restrictions of $\rho$ to each of the two copies of $\pi_1(S^3\backslash 3_1)$. Then, \begin{equation*} \dim H^1(\pi_1(S^3\backslash(3_1\# 3_1^\circ));\operatorname{ad}\rho)= \begin{cases} 2 & \text{ if neither of the } \rho_i \text{ is abelian,} \\ 1 & \text{ if either of the } \rho_i \text{ is abelian.} \end{cases} \end{equation*} \end{lemma} \begin{proof} We can compute $H^1(\pi_1(S^3\backslash (3_1\#3_1^\circ));\operatorname{ad}\rho)$ (we will suppress the $\pi_1$ from this notation without confusion, as all spaces in consideration are aspherical) from the following portion of the Mayer-Vietoris sequence: \begin{align}\label{MV1} \begin{split} 0&\to H^0(S^3\backslash 3_1;\operatorname{ad}\rho_1)\oplus H^0(S^3\backslash 3_1;\operatorname{ad}\rho_2)\to H^0(S^1;\operatorname{ad}\rho)\to H^1(S^3\backslash (3_1\#3_1^\circ);\operatorname{ad}\rho) \to \\ & \to H^1(S^3\backslash 3_1;\operatorname{ad}\rho_1)\oplus H^1(S^3\backslash 3_1;\operatorname{ad}\rho_2) \to H^1(S^1;\operatorname{ad}\rho)\to\dots \end{split} \end{align} The $\rho_i$ are the restrictions of $\rho$ to the two copies of $S^3\backslash 3_1$, and the $S^1$ refers to the meridional annulus along which the connected sum operation is performed.
Technically, $\rho$ restricts to the complement of the meridional annulus inside of $S^3\backslash 3_1$, but since removing a subset of the boundary of a manifold does not change its homotopy type, this is homotopy equivalent to $S^3\backslash 3_1$, so we ignore the distinction. Observe that $H^1(S^1;\operatorname{ad}\rho)\cong H^0(S^1;\operatorname{ad}\rho)\cong \mathbb{C}$. The first isomorphism follows from Poincar\'e duality. The second follows from the fact that since $\rho$ is an irreducible representation of $\pi_1(S^3\backslash (3_1\#3_1^\circ))$, it restricts to a non-central abelian representation on the meridian and the invariants of such a representation are a one-dimensional subspace of $\operatorname{ad}\rho$. The last map in \eqref{MV1} is the sum of two maps, each of the form $H^1(S^3\backslash 3_1;\operatorname{ad}\rho_i) \to H^1(S^1;\operatorname{ad}\rho)$. When $\rho_i$ is irreducible, this is the derivative at $[\rho_i]$ of the natural map $\mathscr{X}_{\text{irr}}(S^3\backslash 3_1)\to \mathscr{X}(S^1)$, where $S^1$ refers to the meridional circle. From our description of $\mathscr{X}_{\text{irr}}(S^3\backslash 3_1)$ as a plane curve, we observe that the meridional trace map is non-singular at all points. Thus, the map on tangent spaces is surjective. We now consider the case when the $\rho_i$ are both irreducible or both non-abelian reducibles. In this case, $H^0(S^3\backslash 3_1;\operatorname{ad}\rho_i)=0$. When $\rho_i$ is an irreducible representation, we observe that $\dim H^1(S^3\backslash 3_1;\operatorname{ad}\rho_i)=1$ because the character scheme is smooth of dimension 1. When $\rho_i$ is a non-abelian reducible, we can compute $\dim H^1(S^3\backslash 3_1;\operatorname{ad}\rho_i)=1$ directly, as there are only finitely many non-abelian reducible representations up to conjugacy. From this data, \eqref{MV1} yields $\dim H^1(S^3\backslash (3_1\#3_1^\circ);\operatorname{ad}\rho) =2$. When $\rho_1$ is abelian and $\rho_2$ is irreducible, $H^0(S^3\backslash 3_1;\operatorname{ad}\rho_2)=0$ and the map $H^0(S^3\backslash 3_1;\operatorname{ad}\rho_1)\to H^0(S^1;\operatorname{ad}\rho)$ at the start of \eqref{MV1} is an isomorphism. For an abelian representation, $\dim H^1(S^3\backslash 3_1;\operatorname{ad}\rho_1)=1$. Thus, we compute $\dim H^1(S^3\backslash (3_1\#3_1^\circ);\operatorname{ad}\rho)=1$. \end{proof} \begin{lemma}\label{h1surgery} Let $3_1\# 3_1$ and $3_1\# 3_1^*$ denote the granny and square knots (and let $3_1\#3_1^{\circ}$ denote either). Let $\rho$ be an irreducible representation of $\pi_1(S_{p/q}^3(3_1\#3_1^{\circ}))$. Let $\rho_1$ and $\rho_2$ be the restrictions of $\rho$ to each of the two copies of $\pi_1(S^3\backslash 3_1)$.
Then, \begin{equation*} \dim H^1(\pi_1(S_{p/q}^3(3_1\#3_1^{\circ}));\operatorname{ad}\rho)= \begin{cases} 2 & \text{ if both of the } \rho_i \text{ are non-abelian and } p/q=12 \text{ for the granny knot or } \\ & p/q=0 \text{ for the square knot,} \\ 1 & \text{ if both of the } \rho_i \text{ are irreducible and we are not in the above case,} \\ 0 & \text{ if either of the } \rho_i \text{ is abelian.} \end{cases} \end{equation*} \end{lemma} \begin{proof} We can compute $H^1(S^3_{p/q}(3_1\#3_1^{\circ});\operatorname{ad}\rho)$ from the following Mayer-Vietoris sequence: \begin{align}\label{MV2} \dots \overset{0}\rightarrow H^1(S^3_{p/q}(3_1\#3_1^{\circ});\operatorname{ad}\rho)\to H^1(S^3\backslash (3_1\#3_1^{\circ});\operatorname{ad}\rho)\oplus H^1(D^2\times S^1; \operatorname{ad}\rho) \overset{f}\rightarrow H^1(T^2;\operatorname{ad}\rho)\to\dots \end{align} Since $\rho$ must restrict to a non-central abelian representation on the boundary torus, we have $H^2(T^2;\operatorname{ad}\rho)\cong H^0(T^2;\operatorname{ad}\rho)\cong\mathbb{C}$. From the Euler characteristic, we compute $\dim H^1(T^2;\operatorname{ad}\rho)=2$. Similarly, $\rho$ restricts to a non-central abelian representation on the solid torus (if it sent the core of the solid torus to a central element, then in fact $\rho$ would be central on the entire boundary torus, and in particular on the meridian). So, $\dim H^1(D^2\times S^1;\operatorname{ad}\rho)=1$. We claim that $f$ has rank 1 when neither of the $\rho_i$ is abelian and $p/q=12$ for the granny knot or $p/q=0$ for the square knot, and that in all other cases $f$ has rank 2. Let $s:\mathscr{X}(D^2\times S^1)\to \mathscr{X}(T^2)$ be the restriction map. The map on cohomology groups $H^1(D^2\times S^1; \operatorname{ad}\rho)\to H^1(T^2;\operatorname{ad}\rho)$ can be identified with $ds_{[\rho]}$, the derivative of $s$ at $[\rho]$. Similarly, we can identify the map $H^1(S^3\backslash (3_1\#3_1^{\circ});\operatorname{ad}\rho)\to H^1(T^2;\operatorname{ad}\rho)$ with the derivative at $[\rho]$ of the restriction map $r: \mathscr{X}_{\text{irr}}(S^3\backslash (3_1\#3_1^{\circ}))\to \mathscr{X}(T^2)$. Thus, we can write $f$ as $f=(dr\oplus ds)_{[\rho]}$. By a standard application of Lefschetz duality and the long exact sequence of the pair $(Y,\partial Y)$, where here $Y=D^2\times S^1$ or $S^3\backslash(3_1\#3_1^{\circ})$, we know that $\operatorname{rank}(dr)=\operatorname{rank}(ds)=1$ \cite{sik}. Thus, the rank of $f$ is $2$ unless the images of $r$ and $s$ have the same tangent spaces at the image of $[\rho]$, in which case the rank of $f$ is $1$. We claim that this equality of tangent spaces occurs only when $p/q=12$ for the granny knot or $p/q=0$ for the square knot and neither of the $\rho_i$ is abelian. Let $t:\mathbb{C}^*\times\mathbb{C}^*\to \mathscr{X}(T^2)$ be the map from the eigenvalue variety to the character variety. With the coordinates $(M,L)$ on $\mathbb{C}^*\times\mathbb{C}^*$ for the meridional and longitudinal eigenvalues, $t(M,L)$ is the class of a representation with $\rho(m)=\operatorname{diag}(M,M^{-1})$ and $\rho(\ell)=\operatorname{diag}(L,L^{-1})$. Away from the central representations, $t$ is a degree two covering map.
Thus, we can consider the tangent spaces to $t^{-1}(\operatorname{im}(s))$ and $t^{-1}(\operatorname{im}(r))$ in order to prove the claim. The curve $t^{-1}(\operatorname{im}(s))$ is the surgery curve $\{M^pL^q=1\}$. The closure of the curve $t^{-1}(\operatorname{im}(r))$ is the vanishing locus of the $A$-polynomial of the knot (ignoring the factor coming from reducibles). Recall our calculation of the $A$-polynomials from Section 6, \begin{align*} A_{3_1\# 3_1}^{\text{irr}}(M,L)= & (L+M^{-6})(L-M^{-12}), \\ A_{3_1\# 3_1^*}^{\text{irr}}(M,L)= & (L+M^{-6})(L+M^{6})(L-1). \end{align*} The factor of $L+M^{-6}$ (which is the $A$-polynomial of the right-handed trefoil) comes from representations that are irreducible on a $3_1$ summand and abelian on the other summand. Similarly, $L+M^{6}$ is the $A$-polynomial of the left-handed trefoil. The last factors come from the composites of two non-abelian representations. For such representations of the granny knot, we have $L_1=L_2=-M^{-6}$ and $L=L_1L_2$, so that $L=M^{-12}$. For the square knot, $L_1=L_2^{-1}$, so that this component is mapped to the line $L=1$. Now we see that the only situations in which the tangent space to the vanishing locus of the $A$-polynomial coincides with the tangent space to the surgery curve are when $p=12,q=1$ for the granny knot or $p=0,q=1$ for the square knot and $\rho$ is a composite of two non-abelian representations $\rho_i$. This proves the claim. From \eqref{MV2}, we see that \begin{align*} & \dim H^1(S^3_{p/q}(3_1\#3_1^{\circ});\operatorname{ad}\rho) = \dim H^1(S^3\backslash (3_1\#3_1^{\circ});\operatorname{ad}\rho)+1-\operatorname{rank}(f). \end{align*} The result follows from combining the above formula, our computations of the rank of $f$, and Lemma \ref{h1exterior}. \end{proof} Theorems \ref{HPgranny} and \ref{HPsquare} now follow from applying Corollary \ref{modtwo} to the calculation of the respective character varieties in Propositions \ref{grannyvar} and \ref{squarevar} and the determination of the singular cohomology of these character schemes from Proposition \ref{cohomology}. \begin{remark} We use $\mathit{HP}$ with $\mathbb{Z}/2\mathbb{Z}$ coefficients in Theorems \ref{HPgranny} and \ref{HPsquare} only to avoid determining the relevant local system. Indeed, the character schemes of surgeries on $3_1\# 3_1$ include some components isomorphic to $\mathbb{C}^*$ and $\mathbb{C}^*\backslash\{1\}$, while the other topological types of components that appear are simply connected. We conjecture that the local systems are in fact trivial on all of the components and that Theorems \ref{HPgranny} and \ref{HPsquare} hold over $\mathbb{Z}$. \end{remark} \section{Further Discussion} \subsection{Exact triangles} In analogy with other Floer theories \cite{OS, scaduto, floer}, one may conjecture the existence of a surgery exact triangle for $\mathit{HP}_{\#}$. That is, one may hope that there exists a long exact sequence \begin{align*} \mathit{HP}_{\#}(S^3)[1]\to \mathit{HP}_{\#}(S^3_{p+1}(K)) \to \mathit{HP}_{\#}(S^3_{p}(K))\to\mathit{HP}_{\#}(S^3). \end{align*} However, since $\mathit{HP}_{\#}(S^3)$ is supported in degree zero, such a long exact sequence would imply that $\mathit{HP}_{\#}(S^3_{p}(K))$ and $\mathit{HP}_{\#}(S^3_{p+1}(K))$ are isomorphic except possibly in degree zero. Yet the data from Proposition \ref{HPframed2bridge} shows that this is not the case for surgeries on two-bridge knots.
For example, if $p=2k$ and $p+1=2k+1$ both satisfy the hypotheses of Proposition \ref{HPframed2bridge}, then $\mathit{HP}_{\#}(S^3_{p}(K))$ has rank $k-1$ in degree $-2$, whereas $\mathit{HP}_{\#}(S^3_{p+1}(K))$ has rank $k$ in degree $-2$. One can also ask whether a surgery exact triangle exists for $\mathit{HP}$. The data in Proposition \ref{HP2bridge} can be used to show that such a triangle cannot exist for two-bridge knots. However, one would not even expect such a surgery exact triangle for $\mathit{HP}$ since exact triangles in Floer theories are not usually formulated for the versions that exclude reducibles. For example, there is no surgery exact triangle for $\mathit{HF}_{\operatorname{red}}^{\circ}$ in Heegaard Floer homology. \subsection{A conjecture} In \cite{BC}, the authors define an $\operatorname{SL}(2, \mathbb{C})$ Casson knot invariant by \begin{align*} \lambda_{\operatorname{SL}(2, \mathbb{C})}'(K)=\lim\limits_{q\to\infty} \frac{1}{q}\lambda_{\operatorname{SL}(2, \mathbb{C})}(S^3_{p/q}(K)), \end{align*} where $p$ is fixed and the limit is taken over all $q$ relatively prime to $p$. In particular, this quantity is independent of $p$. We can make the analogous conjecture for $\mathit{HP}$ and $\mathit{HP}_{\#}$. \begin{conjecture} Let $K\subset S^3$ be a knot and $S^3_{p/q}(K)$ its $p/q$ surgery. Then the quantities \begin{align*} \lim\limits_{q\to\infty} \frac{1}{q}\operatorname{rk}(\mathit{HP}^n(S^3_{p/q}(K))) \end{align*} and \begin{align*} \lim\limits_{q\to\infty} \frac{1}{q}\operatorname{rk}(\mathit{HP}_{\#}^n(S^3_{p/q}(K))) \end{align*} are well-defined invariants of the knot $K$. \end{conjecture} For example, by Theorems \ref{HPgranny} and \ref{HPsquare} we can verify this conjecture for $\mathit{HP}$ of surgeries on the granny and square knots. We obtain the numerical data \begin{align*} \lim\limits_{q\to\infty} \frac{1}{q}\operatorname{rk}(\mathit{HP}^0(S^3_{p/q}(3_1\# 3_1)))=12,\\ \lim\limits_{q\to\infty} \frac{1}{q}\operatorname{rk}(\mathit{HP}^{-1}(S^3_{p/q}(3_1\# 3_1)))=6, \end{align*} and \begin{align*} \lim\limits_{q\to\infty} \frac{1}{q}\operatorname{rk}(\mathit{HP}^0(S^3_{p/q}(3_1\# 3_1^*)))=6,\\ \lim\limits_{q\to\infty} \frac{1}{q}\operatorname{rk}(\mathit{HP}^{-1}(S^3_{p/q}(3_1\# 3_1^*)))=0. \end{align*} \begin{bibdiv} \begin{biblist}*{labels={alphabetic}} \bibselect{biblio} \end{biblist} \end{bibdiv} \end{document}
\begin{document} \title{Navier-Stokes and stochastic Navier-Stokes equations via Lagrange multipliers} \makeatletter \renewcommand{\theequation}{\thesection.\arabic{equation}} \@addtoreset{equation}{section} \makeatother \begin{abstract} \it We show that the Navier-Stokes equation, as well as a random perturbation of it, can be derived from a stochastic variational principle where the pressure is introduced as a Lagrange multiplier. Moreover we describe how to obtain corresponding constants of the motion.\rm \end{abstract} \vskip 7mm \section{ \bf Introduction}\label{section1.} \quad The Navier-Stokes equation describes the velocity of incompressible viscous fluids. We consider this equation with periodic boundary conditions; namely, if $v=v(t,x), ~ x\in \mathbb{T}$, denotes this velocity at time $t$, with $\mathbb{T}$ being the $d$-dimensional flat torus that we identify with $[0,2\pi ]^d$, it reads \begin{equation}\label{eq1.1} \frac{\partial v}{\partial t}+ (v\cdot\nabla)v =\nu \Delta v-\nabla p,\quad \textup{div}(v )=0, \end{equation} where $\nu$ is a positive constant (the viscosity coefficient) and $t\in [0,T]$. The function $p=p(t,x)$ denotes the pressure and is also an unknown in the equation. \quad Lagrange's point of view consists in describing the positions of particles: it concerns the flows driven by the velocity fields. Lagrangian trajectories for the Euler equation (the case where there is no viscosity term) were identified by V. Arnold in \cite{Arnold} as minimisers of the kinetic energy functional defined on the space of diffeomorphisms. In other words, they are geodesics for an $L^2$ metric on this space. This geometric approach to the Euler equation was developed in the fundamental paper by D. Ebin and J. Marsden (\cite{EM}) and gave rise to many subsequent works. It is well known that the pressure in an incompressible fluid acts like a Lagrange multiplier and one can, indeed, derive the Euler equation from a variational principle with such a multiplier (cf. for example \cite{Cons, H, S}). \quad The Navier-Stokes equation, which describes a dissipative physical system, does not correspond to analogous deterministic variational principles. Nevertheless, by replacing the Lagrangian flows by stochastic ones, we may still derive this equation from a (stochastic) variational principle associated with the energy. Then the velocity field is identified with the drift of the Lagrangian diffusion process, which is obtained as a time derivative of the paths after taking conditional expectation. Inspired by \cite{NYZ} and \cite{Y}, such a stochastic variational principle was proved in \cite{CC}. More recently it was generalised in the context of Lie groups in \cite{ACC}, and many other dissipative systems can be derived using the same kind of ideas (cf. also \cite{CCR}). Moreover, stochastic partial differential equations were also obtained by variational principles, corresponding to random perturbations of the action functionals, in \cite{CCR}. We refer to \cite{H1} and subsequent works by the same author, where a different variational approach to stochastic fluid dynamics is developed (to derive stochastic partial differential equations). \quad In this paper we show that it is possible to derive the Navier-Stokes equation from a (stochastic) variational principle with a Lagrange multiplier expressed in terms of the pressure. Although we consider here the flat case, the principle can be extended to general manifolds following the construction in \cite{AC}.
For the general theory of stochastic differential equations on manifolds we refer for example to \cite{IW}. \quad Stochastic Noether's theorem was introduced in \cite{TZ1}, \cite{TZ2} in the context of stochastic processes associated with the heat equation. A conserved quantity corresponds there to a martingale. In the spirit of this theorem as well as of \cite{CL}, we present a result about conserved quantities associated to our stochastic variational principle. The main difference with the one of \cite{CL} is that we consider here the Lagrangian motion as a stochastic flow (with respect to its initial values $x$) and in the notion of symmetry we integrate with respect to the variable $x$. \quad It should be stressed, here, that in our derivation of the Navier-Stokes equation no random perturbation is added. What we advocate is an approach where the presence of the Laplacian in the Navier-Stokes equation is interpreted as the underlying presence of diffusion processes, used afterwards for studying (1.1) in probabilistic terms. In the last section we show how to derive a variational approach to a randomly perturbed Navier-Stokes equation (see (4.3)). \vskip 7mm \section{ \bf A stochastic variational principle}\label{section2.} \quad On a fixed standard probability space $(\Omega, \mathbb{P}, P)$ endowed with an increasing filtration $\mathbb{P}_t$ that satisfies the standard assumptions, we consider $\xi$ to be a semimartingale with values in $\mathbb{T}$, namely \begin{equation} d\xi_t (x)=dM_t (x) +D_t \xi (x) dt,\qquad \xi_0 (x)=x, \end{equation} where $x\in \mathbb{T}$, $M_t$ is the martingale part in the decomposition of $\xi_t$ and $D_t \xi$ its drift (for simplicity we do not write the probability parameter $\omega \in \Omega$ in the formulae). \quad Recall the definition of the generalised derivative, which we denote by $D_t$: for $F$ defined in $[0,T] \times \mathbb{T} $, \begin{equation} D_t F(t, \xi_t (x))=\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}E_t [F(t+\epsilon , \xi_{t+\epsilon}(x)) - F(t ,\xi_t (x))] \end{equation} when such a limit exists (a.s.), and where $E_t$ denotes the conditional expectation with respect to $\mathbb{P}_t$. This definition justifies in particular the notation used in (2.1), since the generalised derivative corresponds to the semimartingale's drift. \quad If $W_t$ is a $\mathbb{P}_t$-adapted Wiener process, we denote by $g_t (\cdot )$ diffusions on the torus $\mathbb{T}$ of the form \begin{equation} dg_t (x)=\sqrt{2\nu} dW_t +v(t, g_t (x))dt,\qquad g_0 (x)=x \end{equation} with $x\in \mathbb{T}$, $dW_t $ the It\^o differential. The drift function $v$ is assumed to be regular enough so that $g_t (\cdot )$ are diffeomorphisms (cf. \cite{K}). \quad Note that we do not require, a priori, the vector field $v$ to be divergence free. \quad For the particular cases $F(t,x) =x$ and $F(t,x) =v(t,x)$, we have, respectively, $$ D_t g_t (x)= v(t, g_t (x))$$ and, using It\^o's formula, \begin{equation} D_t v(t,g_t (x))=D_t D_t g_t (x)=\big( \frac{\partial}{\partial t} v +(v.\nabla )v +\nu \Delta v \big) (t, g_t (x)). \end{equation} \vskip 5mm Let $\mathbb{H}$ be a linear subspace dense in $L^2 ([0,T]\times \mathbb{T})$. Define the action functional \begin{align} S(g,p)&=\frac{1}{2} E\int_0^T \int |D_t g_t (x)|^2 dt dx +E\int_0^T \int p(t, g_t (x)) (\det \nabla g_t (x) -1)dtdx \\ &:=S^1 (g,p) +S^2 (g,p) \end{align} for $p\in \mathbb{H}$ and where $E$ denotes expectation (with respect to $P$).
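\vskip 3mm \quad For later use we record the computation behind (2.4), under the stated regularity assumptions on $v$. For a smooth function $F$ and $g_t$ as in (2.3), It\^o's formula gives $$ dF(t,g_t (x))=\big( \partial_t F +(v.\nabla )F +\nu \Delta F \big) (t, g_t (x))\, dt +\sqrt{2\nu}\, \nabla F (t, g_t (x)). dW_t ;$$ taking the conditional expectation $E_t$ in the definition (2.2) removes the martingale part, so that $D_t F(t,g_t (x))=\big( \partial_t F +(v.\nabla )F +\nu \Delta F \big) (t, g_t (x))$, and the choice $F=v$ (componentwise) yields (2.4). The same formula shows that the It\^o contraction of $dD_t g_t (x)=dv(t,g_t (x))$ and $dh(t,g_t (x))$, for $h$ smooth, equals $2\nu (\nabla v.\nabla h)(t,g_t (x))\, dt$, a fact used in the proof below.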
\vskip 5mm We consider variations $$ g_t (\cdot )\rightarrow g_t^\epsilon (\cdot ) =g_t (\cdot ) +\epsilon h (t,g_t (\cdot )) $$ $$ p(t, \cdot )\rightarrow p^\epsilon (t,\cdot )= p(t, \cdot ) +\epsilon \varphi (t,g_t (\cdot )) $$ with $h(t,x)$ and $\varphi (t,x )$ deterministic and smooth in $x$, $\varphi \in \mathbb{H}$. We also assume that $h(T,\cdot )=h(0,\cdot )=0$. \vskip 5mm \quad We have the following \vskip 3mm \bf Theorem. \rm A diffusion $g_t $ of the form (2.3) and a function $p\in \mathbb{H}$ are critical for the action functional (2.5) iff the drift $v(t,\cdot )$ of $g_t$ satisfies the Navier-Stokes equation (without external force) \begin{equation} \label{NS} \partial_ t v +(v.\nabla )v =\nu \Delta v -\nabla p,\qquad \hbox{div} ~v(t,\cdot)=0, \end{equation} with $x\in \mathbb{T}, t\in [0,T]$. \vskip 5mm \bf Proof. \rm Using the notation $\delta S (g,p)= \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0} S(g^\epsilon ,p^\epsilon )$, the variation of the first term in the action gives: $$\delta S^1 (g,p)=E\int_0^T \int (D_t g_t (x).D_t h (t,g_t (x)))dt dx.$$ The notation $<\cdot ,\cdot >$ stands below for the $L^2 (\mathbb{T} )$ scalar product. By It\^o's formula, the expression $$ d<D_t g_t ,h> - <D_t D_t g_t ,h> dt-<D_t g_t , D_t h>dt- <dD_t g_t , dh> $$ where the last term denotes the It\^o contraction, is the differential of a martingale (whose expectation vanishes); therefore \begin{equation} D_t < D_t g_t , h> =<D_t D_t g_t ,h>+<D_t g_t ,D_t h> +<dD_t g_t , dh>. \end{equation} We deduce that $$\delta S^1 =E <Dg_T,h(T, g_T )>-E<Dg_0,h(0, g_0 )> - E\int_0^T \int (D_t D_t g_t (x). h(t,g_t (x) ))dtdx$$ $$-E\int_0^T \int (dD_t g_t (x). dh(t, g_t (x))) dx$$ $$=- E\int_0^T \int (D_t D_t g_t (x). h(t,g_t (x) ))dtdx -2\nu E\int_0^T \int (\nabla v. \nabla h) (t, g_t (x))dt dx$$ $$= -E \int_0^T \int ((\partial_t v +(v.\nabla )v-\nu \Delta v). h)(t, g_t (x))dtdx,$$ where, for the last equality, we have used the equality $D_t D_t g_t (x) =D_t v(t, g_t (x))= (\partial_t v +(v.\nabla )v+\nu \Delta v)(t, g_t (x))$ and integration by parts. Concerning the second part of the action functional, we have \begin{align} \delta S^2 &= E\int_0^T \int \varphi (t, g_t (x)) (\det \nabla g_t (x) -1)dtdx \\ & + E\int_0^T \int (\nabla p (t,g_t (x)) .h(t,g_t (x))) (\det \nabla g_t (x) -1)dtdx\\ &+ E\int_0^T \int p(t,g_t (x)) \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0} \det \nabla (g_t (x)+\epsilon h(t,g_t (x))) dt dx \end{align} \vskip 5mm Since $\varphi$ is arbitrary we conclude from (2.9) that critical points of the action are volume-preserving diffeomorphisms ($\det \nabla g_t (x) =1$) and therefore have divergence-free drifts. It follows immediately that (2.10) vanishes, so we only have to compute (2.11).
We have, $$ \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0} \det \nabla (g_t (x)+\epsilon h(t,g_t (x))) =\det \nabla g_t (x)~ \hbox{tr} \Big( (\nabla g_t (x))^{-1} \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\nabla g_t^\epsilon (x) \Big)$$ $$=\det (\nabla g_t (x)) ~\hbox{tr} \big( (\nabla g_t (x))^{-1} \nabla (h(t,g_t (x))) \big).$$ Since $$\partial_i (p(t,g_t ) (\nabla g_t )^{-1}_{ij} h(t,g_t )^j )= \partial_i (p(t,g_t ))(\nabla g_t )^{-1}_{ij} h(t,g_t )^j +p(t,g_t )(\nabla g_t )^{-1}_{ij} \partial_i (h^j (t,g_t ))$$ $$+p(t,g_t ) h^j (t,g_t ) \partial_i (\nabla g_t )^{-1}_{ij}, $$ and we are in the periodic case, $$(2.11)= -E\int_0^T \int [\partial_i (p(t,g_t ))(\nabla g_t )^{-1}_{ij} +p(t,g_t )\partial_i ((\nabla g_t )^{-1}_{ij} )] h^j (t,g_t ) \det \nabla g_t ~dtdx.$$ Notice that we already concluded that $\det \nabla g_t =1$. On the other hand, $$\sum_i \partial_i (\nabla g_t )^{-1}_{ij} =0.$$ Indeed, differentiating the equality $\det \nabla g_t =1$, we get $$\partial_k \det (\nabla g_t )= ~\hbox{tr}\big( (\nabla g_t )^{-1}\partial_k (\nabla g_t )\big) =\sum_{i,j} (\nabla g_t )^{-1}_{ij}\partial_k \partial_i g_t^j =0.$$ Also, differentiating the equality $$(\nabla g_t )^{-1}_{ij}\partial_k g_t^j =\delta_{ik}$$ we obtain $$\sum_i \partial_i (\nabla g_t )^{-1}_{ij}\partial_k g_t^j +(\nabla g_t )^{-1}_{ij}\partial_i \partial_k g_t^j =0;$$ therefore $$\sum_i \partial_i (\nabla g_t )^{-1}_{ik}=-\big( (\nabla g_t )^{-1}_{ij}\partial_k \partial_i g_t^j \big)(\nabla g_t )^{-1}_{jk}=0$$ and $$(2.11)= -E\int_0^T \int \partial_i (p(t,g_t (x)))(\nabla g_t (x))^{-1}_{ij}\, h^j (t,g_t (x)) \det \nabla g_t (x) \,dtdx $$ $$=-E\int_0^T \int (\nabla p (t,g_t (x)).h(t,g_t (x)))dtdx.$$ Putting together the expressions for $\delta S^1$ and $\delta S^2$, we conclude that $\delta S=0$, in the class of variations considered, is equivalent to the condition $$E \int_0^T \int \big(\partial_ t v +(v.\nabla )v -\nu \Delta v+\nabla p \big)(t,g_t (x)). h(t, g_t (x)) dt dx=0$$ for every test function $h$, together with the incompressibility condition $\det \nabla g_t (x)=1$. \vskip 7mm \bf Remark 1. \rm Comparing with \cite{CC, ACC}, the variations we have used here are defined by shifts, since we do not have to work a priori in the class of measure-preserving flows. \vskip 2mm \bf Remark 2. \rm It is possible to consider subspaces $\mathbb{H}$ which are not dense in $L^2 ([0,T]\times \mathbb{T})$. In this case the resulting equation of motion is the projection of the Navier-Stokes equation onto the corresponding space. \vskip 7mm \section{ \bf On conserved quantities}\label{section3.} \quad In this section we present a Noether-type result where only transformations in the space variable are considered. A more general study of symmetries for equations obtained by stochastic variational principles will be considered in a forthcoming work. \quad Let us consider transformations of the following form: $$ g_t (\cdot )\rightarrow g_t^\alpha (\cdot ) =g_t (\cdot ) +\alpha \eta (t, g_t (\cdot )) $$ with $\eta$ smooth, $\eta (0,\cdot )=\eta (T,\cdot )=0$.
We say that the Lagrangian \begin{align} L (g,p) &= \frac{1}{2}|D_t g_t (x)|^2 + p(t, g_t (x)) (\det \nabla g_t (x) -1)\\ &:=L^1 (g,p) +L^2 (g,p) \end{align} used in the definition of the action functional (2.5), is invariant under the transformation associated with $\eta$ if there exists a function $G: [0,T]\times \mathbb{T} \rightarrow \mathbb R$ such that for every $t$, $P$-a.e., $$\left.\frac{d}{d\alpha}\right|_{\alpha =0} \int L(g_t^{\alpha} ,p) dx= \int D_t G (t, g_t (x) )dx.$$ \vskip 3mm \bf Theorem. \rm If $L$ is invariant under the transformation associated with $\eta$ then, denoting $\mathcal L_t = \frac{\partial}{\partial t} +(v\cdot \nabla ) +\nu \Delta $ where $v(t,\cdot )$ is the solution of the Navier-Stokes equation considered above, the following identity $$\int \Big( \mathcal L_t (v \cdot\eta -G )\Big) (t,x) dx =0 $$ holds for all $t\in [0,T]$. \vskip 5mm \bf Proof. \rm Considering the first term in the Lagrangian, we have $$\left.\frac{d}{d\alpha}\right|_{\alpha =0} L^1 (g_t^{\alpha} ,p) =(D_t g_t (x). D_t \eta (t, g_t (x)))$$ and, by the arguments in the proof of the previous section's Theorem, $$ \left.\frac{d}{d\alpha}\right|_{\alpha =0} L^2 (g_t^{\alpha} ,p) = (\nabla p (t, g_t (x))\cdot \eta (t, g_t (x))) (\det \nabla g_t (x)-1) $$ \begin{equation} +\partial_i (p (t, g_t (x))(\nabla g_t (x))^{-1}_{ij} \eta^j (t, g_t (x)))-(\nabla p (t, g_t (x)). \eta (t, g_t (x))). \end{equation} We know that $ (\det \nabla g_t (x)-1)=0$ on the critical points of the action functional, therefore the first term in the r.h.s. of the last equality vanishes. The second one also vanishes after integration in $x$, as we consider periodic boundary conditions. We are therefore left with the equality, valid for $g_t$ critical for the action functional, $$\int \Big( (D_t g_t (x).D_t \eta (t, g_t (x))) - (\nabla p (t, g_t (x)). \eta (t, g_t (x)))\Big) dx = \int D_t G (t, g_t (x) )dx,$$ $P$-a.e. We also use the identity $$ D_t ( D_t g_t (x).\eta (t, g_t (x))) =(D_t D_t g_t (x).\eta (t, g_t (x))) +(D_t g_t (x). D_t \eta (t, g_t (x)))$$ \begin{equation} +(dD_t g_t (x). d\eta (t, g_t (x))). \end{equation} From the two last equalities we deduce that $$\int D_t \Big( ( D_t g_t (x). \eta (t, g_t (x))) - G (t, g_t (x) )\Big) dx =0.$$ We have $D_t g_t (x)=v(t, g_t (x))$. By the incompressibility condition, the flow $g_t (\cdot )$ keeps the measure $dx$ invariant (a.s.) and the result follows from the expression of the operator $D_t$. \vskip 3mm \quad Comparing with the finite-dimensional Noether's theorem of \cite{TZ1}, \cite{TZ2}, here we have an extra integration with respect to the space variable $x$ in the derived notion of conserved quantities. \vskip 7mm \section{ \bf A stochastic Navier-Stokes equation}\label{section4.} In this section we show that it is also possible to derive random perturbations of the Navier-Stokes equation from a stochastic variational principle. \quad Let $\xi$ be a semimartingale with values in $\mathbb{T}$ of the form (2.1). We consider the random action functional $$ \tilde S(\xi ,p)=\frac{1}{2} \int_0^T \int |D_t \xi_t (x)|^2 dt dx +\int_0^T \int (D_t \xi_t (x). dM_t (x))dx - \sqrt{2\nu} \int_0^T \int (D_t \xi_t (x). dW_t )dx $$ \begin{equation} +\int_0^T \int p(t, \xi_t (x)) (\det \nabla \xi_t (x) -1)dtdx, \end{equation} with $p \in \mathbb{H} \subset L^2 ([0,T]\times \mathbb{T})$.
Variations of $\xi $ and $p$ are taken as in Section 2, namely $$ g_t (\cdot )\rightarrow g_t^\epsilon (\cdot ) =g_t (\cdot ) +\epsilon h (t,g_t (\cdot )) $$ $$ p(t, \cdot )\rightarrow p^\epsilon (t,\cdot )= p(t, \cdot ) +\epsilon \varphi (t,g_t (\cdot )) $$ except that here we allow $h$ and $\varphi$ to be random. \quad We want to characterise critical points of $\tilde S$ of the form $$dg_t (x)=\sqrt{2\nu} dW_t +v(t, g_t (x))dt,\qquad g_0 (x)=x$$ now considering the vector field $v$ to be random. We proceed as in the theorem of Section 2. The computations are analogous and we have to add, in the variations of $\tilde S$, those of the second and third new terms of this functional. These terms give $$\int_0^T \int [(h(t, g_t ). \sqrt{2\nu} dW_t )+ (D_t g_t .\sqrt{2\nu}\,(\nabla h (t, g_t ).dW_t ))-(h(t, g_t ). \sqrt{2\nu} dW_t )]dx$$ which reduces to $$\sqrt{2\nu}\int_0^T \int (v(t, g_t (x)). (\nabla h (t, g_t (x)).dW_t ))dx= \sqrt{2\nu}\int_0^T \int (v(t,x).(\nabla h(t,x).dW_t ))dx$$ $$=- \sqrt{2\nu}\int_0^T \int ((\nabla v(t,x).h(t,x)).dW_t )dx,$$ an equality which holds $P$-almost surely. \quad We therefore conclude that a diffusion process of the form $$dg_t (x)=\sqrt{2\nu} dW_t +v(t, g_t (x))dt, \quad g_0 (x)=x$$ is critical for the action functional $\tilde S$ iff its (random) drift $v(t,\cdot )$ satisfies the following Navier-Stokes stochastic partial differential equation: \begin{equation} d v +\big((v.\nabla )v -\nu \Delta v +\nabla p\big)dt =\sqrt{2\nu}\, \nabla v .dW_t ,\qquad \textup{div} ~v(t,\cdot)=0, \end{equation} with $x\in \mathbb{T}, t\in [0,T]$. \quad This stochastic equation can be also regarded as a (Stratonovich) perturbation of the Euler one. Indeed, denoting by $\circ dW$ the Stratonovich differential, it can be written as \begin{equation} d v +\big((v.\nabla )v +\nabla p\big)dt =\sqrt{2\nu}\, \nabla v \circ dW_t , \qquad \textup{div} ~v(t,\cdot)=0. \end{equation} \vskip 10mm \noindent{\bf Acknowledgements}\\ The author was supported by FCT Portuguese grant PTDC/MAT-STA/0975/2014. She wishes to thank the anonymous referees for a careful reading of the first manuscript of this paper. \vskip 5mm \end{document}
\begin{document} \title{Uncertainty Quantification in Neural Differential Equations} \begin{abstract} Uncertainty quantification (UQ) helps to make trustworthy predictions based on collected observations and uncertain domain knowledge. With increased usage of deep learning in various applications, the need for efficient UQ methods that can make deep models more reliable has increased as well. Among applications that can benefit from effective handling of uncertainty are the deep learning based differential equation (DE) solvers. We adapt several state-of-the-art UQ methods to get the predictive uncertainty for DE solutions and show the results on four different DE types. \end{abstract} \section{Introduction}\label{intro} Driven by the growing popularity of deep learning, several areas of research have obtained state-of-the-art performances with deep neural networks (NNs). Among other applications, deep NNs have been applied for solving differential equations (DEs) \citep{Lagaris_1998} — a fundamental tool for mathematical modeling in engineering, finance, and the natural sciences. Deep learning based solutions of DEs have recently appeared in, e.g., \citep{RAISSI2019686}, \citep{Piscopo_2019}, \citep{raissi2018forwardbackward}, \citep{Sirignano_2018}, \citep{hagge2017solving}, \citep{mattheakis2021unsupervised}, \citep{paticchio2020semi}, \citep{flamant2020solving}, \citep{randle2020unsupervised}, \citep{Giovanni2020FindingMS}. Typically, the NN itself approximates the solution of a DE. Thanks to that, parallelization is natural and, in contrast to classical numerical methods, the solution at any time can be computed without the burden of having to compute all previous time steps. Furthermore, NNs are continuous and differentiable. Until recently, the focus of deep learning was on achieving better accuracy in the NN predictions, but now it is increasingly being shifted to measuring the prediction's uncertainty, especially if the task at hand is safety critical. Uncertainty quantification (UQ) has been considered for deep models in computer vision, medical image analysis, bioinformatics, etc. \citep{Abdar_2021}. Likewise, UQ is important for deep models that solve DEs. The uncertainty here stems from the fact that we cannot train a NN on an infinite time and/or space domain. Therefore, we seek to estimate the solution's uncertainty in the regions where the model was not trained. Moreover, another source of uncertainty comes from the model's limitations such as its architecture. To the best of our knowledge, this is the first work to discuss UQ for deep models that solve DEs. Contrary to the common deep learning setup, we solve DEs without any observed data, relying only on the samples of time and/or space and on the mathematical statement that relates functions and their derivatives. This makes the application of existing UQ methods not so straightforward. In this paper, we make the following contributions: \begin{enumerate} \item We propose an adaptation of four state-of-the-art UQ methods in deep learning — \emph{Bayes By Backprop} \citep{blundell2015weight}, \emph{Flipout} \citep{wen2018flipout}, \emph{Neural Linear Model} \citep{Snoek2015, Ober2019}, and \emph{Deep Evidential Regression} \citep{Amini2019, meinert2021multivariate} — to the case of solving DEs. \item We test the above-mentioned methods on four different DE types: a linear ordinary DE (ODE), a non-linear ODE, a system of non-linear ODEs, and a partial DE (PDE).
\end{enumerate} \section{Preliminaries}\label{prelim} \subsection{Solving differential equations with neural networks}\label{neuralde} A DE can be expressed as $\mathcal{L}u - f = 0 $, where $ \mathcal{L}$ is the differential operator, $u(\mathbf{x})$ is the solution that we wish to find on some (possibly multidimensional) domain $\mathbf{x}$, and $f$ is a known forcing function. We denote the NN approximation of the true solution by $u_N$. To solve the DE, we minimize the square loss of the residual function $\mathcal{R}(u_N):=\mathcal{L}u_N-f$, i.e., the optimization objective is \begin{equation}\label{eq:loss} \min_{\mathbf{w}}(\mathcal{R}^2(u_N)), \end{equation} where $\mathbf{w}$ are the NN parameters. It is also necessary to inform the NN about any initial and/or boundary conditions, $u_{\mathrm{c}}=u(\mathbf{x}_{\mathrm{c}})$. One can achieve that in a straightforward way by adding a penalizing term to the loss function. However, the exact satisfaction of initial/boundary conditions is not possible in this case, causing problems in case of high sensitivity to initial conditions, and also yielding unnecessary local uncertainty at $\mathbf{x}_{\mathrm{c}}$. Therefore, we employ an alternative approach and consider a transformation of $u_N$ which enforces the initial/boundary conditions and satisfies them by construction. E.g., in one-dimensional case, given an initial condition $u_0=u(t_0)$, we consider a transformation $\tilde u_N(t)=u_0 + (1-e^{-(t-t_0)}) u_N(t)$. In general, the transformation has the form $\tilde u_N(\mathbf{x})=A(\mathbf{x},\mathbf{x}_{\mathrm{c}}, u_{\mathrm{c}})+ B(\mathbf{x},\mathbf{x}_{\mathrm{c}})u_N(\mathbf{x})$. Hereinafter, $\tilde u_N$ will denote the enforced solution. Accordingly, we replace $u_N$ by $\tilde u_N$ in the optimization objective \eqref{eq:loss}. Besides the advantage of satisfying the initial/boundary conditions exactly, the latter approach can also reduce the effort required during training \citep{mcfall2009}. \subsection{Uncertainty quantification under Bayesian framework}\label{bayes} While classical learning considers \emph{deterministic model parameters} $\mathbf{\theta}$, the Bayesian framework introduces uncertainty by considering a \emph{posterior distribution over the model parameters}, $p(\mathbf{\theta} | \mathcal{D})$, obtained after observing some data $\mathcal{D}$. The posterior distribution is given by Bayes' theorem, $p(\mathbf{\theta} | \mathcal{D}) = p(\mathcal{D} | \mathbf{\theta})\cdot p(\mathbf{\theta}) \hspace{2pt}/\hspace{2pt} p(\mathcal{D})$, where $p(\mathcal{D} | \mathbf{\theta})$ is the likelihood, $p(\mathbf{\theta})$ is the prior distribution over the parameters, and $p(\mathcal{D})$ is the evidence. The predictions $y$ at a new test point $\mathbf{x}$ are given by the posterior predictive distribution, \begin{equation}\label{eq:postpred} p(y | \mathbf{x}, \mathcal{D}) = \textstyle{\int}{p(y| \mathbf{x}, \mathbf{\theta}) \cdot p(\mathbf{\theta} | \mathcal{D}) d\mathbf{\theta}}. \end{equation} For probabilistic deep models, there are two main strategies of estimating \eqref{eq:postpred}. \textbf{Inference through the posterior distribution of model parameters}. As stated in \eqref{eq:postpred}, the posterior predictive is obtained by averaging over the posterior uncertainty in the model parameters. Thus, we can start with estimating the posterior distribution of the NN weights. 
In this case, well-suited is Bayesian NN \citep{Neal2012} which places a prior distribution on all the weights (and biases) $\mathbf{w}$. Since an analytical solution for the posterior is intractable for Bayesian NNs, we have to use numerical approximation methods such as MCMC or variational methods. Despite the need for sampling in both cases, variational methods are computationally less expensive for high-dimensional parameter spaces and also provide an analytical approximation. \emph{Bayes By Backprop} (\emph{BBB}) is a variational, backpropagation-compatible method for training a Bayesian NN. Its optimization objective seeks to minimize the Kullback-Leibler divergence between the true posterior and the variational posterior which is re-parametrized as $\mathcal{N}(\mu, \sigma=\log(1+\exp(\rho)))$ to allow for backpropagation. At each optimization step, weights $\mathbf{w}=\mu+\sigma\circ\epsilon$, where $\epsilon\sim\mathcal{N}(0,I)$ and $\circ$ is pointwise multiplication, are obtained by sampling from the variational posterior. \emph{BBB} is followed by \emph{Flipout} which adds a pseudo-independent perturbation to the weights at each training point $\mathbf{x}_n$ in the mini-batch, namely, $\mathbf{w}_n=\mu+(\sigma\circ\epsilon)R_n$, where $R_n$ is the random sign matrix. Intuitively, the weights get flipped symmetrically around the mean with probability $0.5$. \emph{Neural Linear Model} (\emph{NLM}) is an alternative to Bayesian NN. It places a prior distribution only on the last layer's weights, and learns point estimates for the remaining layers. One can interpret the output of these layers as a basis defined by the feature embedding of the data. The last layer of \emph{NLM} performs Bayesian linear regression on this feature basis. \emph{NLM} provides tractable inference under the Gaussian assumption on likelihood; we get analytical solution for the posterior distribution. \textbf{Inference through the higher-order evidential distribution}. It is also possible to infer parameters of the posterior predictive directly, using Bayesian hierarchical modeling \citep{Gelman2006, Gelman2008}. In \emph{Deep Evidential Regression} (\emph{DER}), the higher-order, evidential prior is placed over the Gaussian likelihood function. Choosing Normal Inverse-Gamma (NIG) prior yields an analytical solution for the model evidence which is maximized by the optimization objective with respect to the NIG hyperparameters. \emph{DER} also proposes an evidence regularizer which minimizes evidence on incorrect predictions. The posterior predictive mean and variance are computed analytically using the learned hyperparameters. \section{Uncertainty quantification in neural differential equations}\label{uq} Instead of learning a deterministic solution $\tilde u_N$, we now aim to learn a probabilistic solution $u_{\theta}$, characterized by a posterior predictive distribution. We estimate it using some probabilistic model $g_{\theta}$ parametrized by $\theta$. \subsection{Proposed approach}\label{propose} UQ methods described in Section \ref{bayes} rely upon the assumption that the likelihood function is Gaussian, centered at the model's prediction and evaluated at the observed data points. Namely, \emph{DER} and \emph{NLM} use it to derive the analytical form of a loss function and a posterior distribution, respectively. \emph{BBB} and \emph{Flipout} can in principle use any likelihood in the loss function, but it has to be of known analytical form. 
In case of DEs, a natural way of computing likelihood is to evaluate it at the \emph{residual} $\mathcal{R}$ \emph{on the training domain} $\mathbf{x}_{\mathrm{t}}$, which can be seen as a counterpart to the observed data points in the classical setting. However, we are left with an open problem of choosing the underlying distribution for the likelihood function. It makes sense to assume that the probability density is high enough for values close to zero, but no further assumptions immediately follow. E.g., it may happen that the limitations of NN architecture do not allow for the perfect fit, i.e., the distribution of residuals is not centered around zero. To circumvent this problem, we propose an alternative way of computing likelihood. In this first work on UQ for neural DE solvers, we will focus on comparing predictions outside of the training domain given by different UQ methods, leaving the detailed treatment of the model fit and its associated uncertainty for future work. Although in Bayesian framework this uncertainty also affects the uncertainty outside of the training domain, we hypothesize that even a simplified treatment, i.e., without using residuals' distribution, gives reasonable uncertainty estimates. We propose a two-stage training procedure: 1) We first train a classical NN on the training domain $\mathbf{x}_{\mathrm{t}}$ to find a \emph{deterministic solution} $\tilde u_N$, 2) We use $\tilde u_N$ as \emph{observed data} for our probabilistic model $g_{\theta}$ and define the likelihood using a Gaussian assumption, $p(\tilde u_N | \mathbf{\theta}) = \prod_{\mathbf{x}_{\mathrm{t}}}\mathcal{N}(\tilde u_N; u_{\theta},\varepsilon)$. Now the optimization objective will be trying to align the probabilistic model with the given reference $\tilde u_N$ rather than trying to minimize the residuals at all costs. We note that despite interpreting $\tilde u_N$ in stage two as observed data rather than a function that solves DE, its associated variance $\varepsilon$ is not of aleatoric nature (i.e. irreducible variance that comes from the noise inherent to the data), as it would be in the classical regression problem. It can be still interpreted as a source of epistemic (reducible) uncertainty coming from the NN model limitations. Here, we consider a simplified treatment of $\varepsilon$. In case of \emph{BBB}, \emph{Flipout}, and \emph{NLM}, we pre-define $\varepsilon$ with some small number. In case of \emph{DER}, we learn $\varepsilon$ along with posterior predictive distribution, but the result is not particularly useful since \emph{DER} does not have direct access to residuals during learning. Eventually, the probabilistic model allows us to find the posterior predictive distribution of $u_{\theta}$. In case of \emph{BBB}, \emph{Flipout}, and \emph{NLM}, we have $\tilde g_{\theta}(\mathbf{x}_{\mathrm{t}}, \tilde u_N)=u_{\theta}$, i.e., the model outputs a single instance $u_{\theta}$. For \emph{BBB} and \emph{Flipout}, the posterior predictive distribution $p(u_{\theta} | \mathbf{x}_{\mathrm{t}}, \tilde u_N )$ is computed as an approximation of integral \eqref{eq:postpred} using sampling; for \emph{NLM}, an analytical form is available. In case of \emph{DER}, we have $g_{\theta}(\mathbf{x}_{\mathrm{t}}, \tilde u_N)=(\gamma, \nu, \alpha, \beta)$, where $(\gamma, \nu, \alpha, \beta)$ are the NIG hyperparameters. The mean of the posterior predictive distribution is equal to $\tilde\gamma$ and the variance is computed using the remaining hyperparameters. 
We note that the predictive uncertainty also requires the initial and/or boundary condition enforcement; this way we are able to eliminate unnecessary uncertainties at $\mathbf{x}_{\mathrm{c}}$. Main drawbacks of the current approach are the double computational burden and the not so useful uncertainty for NN approximation of the true solution in the traning region; both of them are subject to further improvement. \section{Experiments and discussion}\label{discuss} We corroborate our theory with experimental results on four equations: 1. Linear ODE for squared exponential, $\frac{du}{dt}=-2tu$; 2. Non-linear ODE for Duffing-type oscillator, $\ddot u+\omega^2 u + \epsilon u^3=0$; 3. Lotka-Volterra equations (system of non-linear ODEs), $\dot u=\alpha u-\beta uv \hspace{4pt} \wedge \hspace{4pt} \dot v=-\delta u+\gamma uv$; 4. Burgers' equation (non-linear PDE), $\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^2 u}{\partial x^2}$. Our implementation is based on a DE solver provided by \emph{neurodiffeq} \citep{neurodiffeq}, a Python package built with PyTorch \citep{paszke2019pytorch}. Since we are considering relatively simple DEs, we use networks with one to three fully-connected hidden layers. For the prior distribution of the weights in \emph{BBB}, \emph{Flipout}, and \emph{NLM}, we use flat Gaussian priors with mean zero. In \emph{BBB} and \emph{Flipout}, we estimate the posterior predictive from 1000 samples. In \emph{DER}, there is no need for choosing a weight prior, but an appropriate regularization parameter has to be chosen instead. Here, we tune the regularization parameter manually. \begin{figure} \caption{\textbf{Uncertainty estimation for neural ODEs} \label{fig:results} \end{figure} We demonstrate the UQ results for ODEs and for PDE in Figure 1 and Figure 2, accordingly. For all ODEs, the deterministic solution is able to approximate the true solution well; we incorporate this fact in our Bayesian inference by choosing small $\varepsilon$ which yields that there is almost no uncertainty in the training domain. We observe that the epistemic uncertainty away from the training domain is high enough for all methods, which is our main desired result in this paper. For Burgers' equation, however, we see that the NN is not able to learn the true solution, and our probabilistic model is underestimating the epistemic uncertainty in the training domain and outside of it. In this case, either a better deterministic model or a better UQ methodology is needed. Nevertheless, even for a non-perfect fit, the uncertainty starts inflating outside of the training domain which proves our initial hypothesis. We have witnessed comparable performance in sampling-dependent (\emph{BBB}, \emph{Flipout}) and sampling-free (\emph{NLM}, \emph{DER}) methods. Given the computational expense of sampling during Bayesian NN training, the latter two methods could be preferable in the case of complex DEs on a multidimentional domain. We believe that further enhancement in terms of diversifying experiments (e.g., considering more complex high-dimensional DEs) and developing theory (e.g., calibrating $\varepsilon$ with residuals at each optimization step) will help the deep learning based DE solutions to outperform classical ones and lead to their increased presence in applications. \begin{figure} \caption{\textbf{Uncertainty estimation for NN based solution of Burgers' equation.} \end{figure} { \small } \appendix \end{document}
\begin{document} \maketitle \section{Introduction} \label{sec:introduction} It is always interesting to know whether a generating function is D-finite, because D-finiteness gives easy access to a lot of useful information about the series. It is also interesting to know whether a D-finite series is algebraic, because algebraicity gives access to even more useful information or makes more efficient algorithms applicable. Every algebraic series is D-finite but not vice versa, and it is a notoriously difficult problem to decide for a given D-finite power series whether it is algebraic or not~\cite[\S4(g)]{Stanley80}. There exists an algorithm for deciding whether a given linear differential equation has only algebraic solutions~\cite{Singer80}. This algorithm can be generalized in order to compute, for a given linear differential operator~$L$, another differential operator $L^{\text{alg}}$, whose solution space is spanned by the algebraic solutions of $L$~\cite{Singer14}. The operator $L^{\text{alg}}$ can then be used to decide whether a specific solution~$y$ of $L(y)=0$ is algebraic or transcendental. However, the algorithm for computing $L^{\text{alg}}$ is very expensive, and to our knowledge it was never implemented. A popular and simple check is to inspect the asymptotic behaviour of the coefficient sequence: if it is not of the form $c\phi^n n^\alpha$ with $\alpha\in\set Q\setminus\set Z_{<0}$, then the series is transcendental~\cite{Flajolet87}. However, this condition is only necessary but not sufficient. The purpose of this paper is to highlight a less popular condition which is also necessary but not sufficient, and which can be tried in many cases where the asymptotic test fails. The method consists in finding (using computer algebra) a closed form representation of the D-finite function in question and to prove (also using computer algebra) that the function has a logarithmic singularity. {Our method is an illustration of the \emph{guess-and-prove} paradigm, which is classically used to prove algebraicity~\cite{BoKa10}: one guesses an algebraic equation, then post-certifies it. Transcendence is a more difficult task, as one needs to prove that no algebraic equation exists. However, if one can still guess a differential equation, and solve it in explicit form, then the explicit solution can lead to transcendence proofs. This is the methodology promoted here. In order to facilitate its application to other examples, we include a detailed description of the required computer algebra calculations for Maple and Mathematica\footnote{Also available online: \url{http://www.algebra.uni-linz.ac.at/research/kreweras-interacting}}. } As a concrete example, we consider a power series that appears in a recent study of restricted lattice walk models with interacting boundaries. A model is determined by a step set $S\subseteq\{-1,0,1\}^2\setminus\{(0,0)\}$ and consists of walks in the quarter plane $\set N^2$ starting at~$(0,0)$. For each step set $S$, Beaton et al. \cite{Beaton_2019,Beaton_2020} are interested in the generating functions $Q(a,b,c,x,y,t)\in\set Q[[a,b,c,x,y,t]]$, where $[a^hb^vc^ux^i y^j t^n]Q$ is the number of walks of length $n$ starting at $(0,0)$ ending at $(i,j)$, with $h$ visits of the horizontal axis, $v$~visits of the vertical axis, and $u$~visits of the origin. 
Among other things, they show that the generating function is algebraic for the step set $\{\step{-1,-1},\step{1,0},\step{0,1}\}$ (known as reverse Kreweras), and that the generating function is D-finite for the step set $\{\step{1,1},\step{-1,0},\step{0,-1}\}$ (known as Kreweras). They conjecture that this latter series is not algebraic, and this is what we will prove here. More precisely, we will show the following: \begin{thm} \label{prop:main-thm} Let $Q(a,b,c;x,y;t) \in \QQ[a,b,c][[x,y,t]]$ be the generating function counting Kreweras walks with interacting boundaries, restricted to the quarter plane. The generating function $Q(a,b,c;x,y;t)$ is not algebraic over $\QQ(a,b,c,x,y,t)$. Furthermore, for values $a,b,c \in \QQ$ with $c \neq 0$, the generating function $Q(a,b,c;x,y;t) \in \QQ[[x,y,t]]$ is algebraic over $\QQ(x,y,t)$ if and only if $a=b$. \end{thm} \section{Notations and the kernel equation} \label{sec:notations} We first recall some notations used in this paper. Whenever possible, we follow the notations used in \cite{Beaton_2020}. Let $R$ be an integral domain with fraction field $K$. We denote: \begin{itemize} \item $R[t]$ the ring of polynomials in $t$ with coefficients in $R$; \item $K(t)$ the field of rational functions in $t$ with coefficients in $K$, which is the fraction field of $R[t]$; \item $R[t,1/t]$ the ring of Laurent polynomials in $t$ with coefficients in $R$; \item $R[[t]]$ the ring of formal power series in $t$ with coefficients in $R$. \end{itemize} Given $f(t) \in K((t))$, we denote by $[t^{n}] f$ the coefficient of $t^n$ in $f(t)$, so that $f(t) = \sum_{n \in \ZZ} ([t^{n}]f) t^n$. We denote by $[t^{>}]f$ the sum of the terms of $f$ with positive exponents, that is, $[t^{>}]f = \sum_{n \in \ZZ_{>0}} ([t^{n}]f) t^n$. We denote by $R[t]\langle \partial_{t}\rangle$ the Ore algebra of differential operators in $t$ with polynomial coefficients. It is a non-commutative ring, and it has a left-action on the rings above, given by $\partial_{t}(f) = \pddt{f}$. Those definitions can be iterated to extend them to multiple variables, and we group together the brackets when applicable: for example $R[[x,y]]$ is the ring of formal power series in $x,y$ with coefficients in $R$, and given $f \in R[[x,y]]$, we denote by $[x^{>}y^{0}] f$ the sum of terms of positive degree in $x$ and degree $0$ in $y$ in~$f$. \paragraph{} For $n,k,l,h,v,u \in \ZZ$, we denote by $q_{h,v,u;k,l;n}$ the number of walks of length $n$ which: \begin{itemize} \item start at $(0,0)$ and end at $(k,l)$;\par\kern- amount \item never leave the upper-right quadrant $\{(x,y) \in \ZZ^{2} : x \geq 0, y \geq 0\}$; \item visit the horizontal boundary (excluding the origin) $\{(x,y) \in \ZZ^{2} : x > 0, y = 0\}$ exactly $h$ times; \item visit the vertical boundary (excluding the origin) $\{(x,y) \in \ZZ^{2} : x = 0, y > 0\}$ exactly $v$ times; \item visit the origin $u$ times (not counting the starting point). \end{itemize} The associated generating function $Q(a,b,c;x,y;t)$ is defined as \begin{equation*} \label{eq:1} Q(a,b,c;x,y;t) = \sum_{n} t^{n} \sum_{k,l} x^{k}y^{l} \sum_{h,v,u} q_{h,v,u;k,l;n} a^{h}b^{v}c^{u}. \end{equation*} Note that, since there are only finitely many walks of a given length, for each $n$, the two innermost sums define a polynomial. Hence $Q(a,b,c;x,y;t)$ lives in $\QQ[a,b,c,x,y][[t]] \subset \QQ[a,b,c][[x,y,t]]$. For shortness, we shall write $Q(x,y) := Q(a,b,c;x,y;t)$. 
In particular, $Q(0,0)$ is the generating function counting interacting walks ending at $(0,0)$, $Q(x,0)$ is the generating function counting interacting walks ending on the horizontal axis and $Q(0,y)$ is the generating function counting interacting walks ending on the vertical axis. Finally, we denote by $Q_{i,j} = Q_{i,j}(a,b,c;t) := [x^{i}y^{j}]Q(x,y) \in \QQ[a,b,c][[t]]$ the generating function counting interacting walks ending at point $(i,j)$. The coefficient of $t^{n}$ in $Q_{i,j}$ is a polynomial in $a,b,c$, and its coefficient for the monomial $a^{h}b^{v}c^{u}$ is exactly $q_{h,v,u;k,l;n}$. The elements $a,b,c$ are called weights associated respectively to the horizontal boundary (excluding the origin), the vertical boundary (excluding the origin) and the origin. \paragraph{} Given a step set $\mathcal{S} \subseteq \{-1,0,1\}^{2} \setminus \{(0,0)\}$, the step generator $S$ is \begin{equation*} \label{eq:2} S(x,y) = \sum_{(i,j) \in \mathcal{S}} x^{i}y^{j} \in \QQ[x,1/x,y,1/y]. \end{equation*} We denote by \begin{itemize} \item $A(x,y) = \sum_{(i,-1) \in \mathcal{S}} x^{i}y^{-1}$ the step generator for the steps going southwards; \item $B(x,y) = \sum_{(-1,j) \in \mathcal{S}} x^{-1}y^{j}$ the step generator for the steps going westwards; \item $G(x,y) = x^{-1}y^{-1}$ if $(-1,-1) \in \mathcal{S}$ and $0$ otherwise, the step generator for the steps going south-westwards. \end{itemize} The \emph{kernel} of the step set is $K(x,y) = 1 - tS(x,y) \in \QQ[t,x,1/x,y,1/y]$. The kernel equation is a functional equation satisfied by the generating function counting walks restricted to the quarter plane. \begin{thm}[{\cite[Theorem 1]{Beaton_2020}}] For a lattice walk restricted to the quarter-plane, starting at the origin, with weights $a$ (resp. $b$) associated with vertices on the $x$-axis excluding the origin (resp. the $y$ axis excluding the origin), and weight $c$ associated with the origin, the generating function $Q(x,y)$ satisfies the following functional equation \begin{multline} \label{eq:kernel} K(x,y)Q(x,y) = \frac{1}{c} + \frac{1}{a}\left( a-1-taA(x,y)\right)Q(x,0) + \frac{1}{b}\left( b-1-tbB(x,y) \right)Q(0,y)\\ + \left( \frac{1}{abc}(ac + bc - ab - abc) + t G(x,y) \right)Q(0,0). \end{multline} \end{thm} \section{Main result and the power series $\Theta$} \label{sec:kern-equat-seri} We consider specifically the Kreweras step set $\mathcal{S} = \{(1,1),(-1,0),(0,-1)\}$. By exhaustive enumeration, the generating function $Q(a,b,c;x,y;t)\in \QQ[a,b,c,x,y][[t]]$ starts \[ 1+xy \, t+ \left( {x}^{2}{y}^{2}+ax+by \right) {t}^{2}+ \left( {x}^{3}{y}^{3}+ (a+1){x}^{2}y+(b+1)x{y}^{2}+ac+bc \right) {t}^{3}+\cdots. \] For instance, at length 3, the walk $(0,0) \rightarrow (1,1) \rightarrow (2,2) \rightarrow (3,3)$ corresponds to the term $a^0 b^0 c^0 x^3 y^3 t^3 = x^3 y^3 t^3$, as it does not touch any of the axes after leaving the origin, while the walk $(0,0) \rightarrow (1,1) \rightarrow (1,0) \rightarrow (0,0)$, corresponds to $a^1 b^0 c^1 t^3 x^0 y^0 = ac t^3$, as after leaving the origin it touches the positive horizontal axis once, it returns to the origin once, but does not touch the positive vertical axis. The main result of this paper is that $Q$ is not algebraic (Theorem~\ref{prop:main-thm}). In order to prove it, we define \begin{equation} \label{eq:3} \Theta = [x^{>}y^{0}]\left( \frac{(x-y)(x^{2}y-1)(xy^{2}-1)}{xyK(x,y)} \right), \end{equation} where $K(x,y) = 1-t \left( xy+{x}^{-1}+{y}^{-1} \right)$. We will prove that $\Theta$ is not algebraic. 
The connection between $\Theta$ and $Q$ comes from the following lemma. \begin{lem}[{\cite[Lemma 10]{Beaton_2020}}] There exist Laurent polynomials $\beta, \beta_{x,0}, \beta_{0,x}, \beta_{0,0}, \beta_{1,0}, \beta_{2,0}, \beta_{3,0} \in \QQ[t,1/t,x,1/x]$, such that \begin{multline} \label{eq:4} \beta + \beta_{x,0}Q(x,0) + \beta_{0,x}Q(0,x) + \beta_{0,0}Q(0,0) + \beta_{1,0}Q_{1,0} + \beta_{2,0}Q_{2,0} + \beta_{3,0}Q_{3,0} \\ = t^{3} \frac{a-b}{c}(ab - (ab-ac-bc+abc)Q(0,0)) \Theta. \end{multline} \end{lem} \begin{proof} It is a straightforward transposition of~\cite[Lemma 10]{Beaton_2020}, by observing that with the notations therein, $\theta = t^{3}\frac{a-b}{c} ab \Theta$ and $\theta_{0,0} = t^{3}\frac{a-b}{c}(ab-ac-bc+abc) \Theta$. \end{proof} \section{Transcendence of $\Theta$} \label{sec:non-algebr-theta} Recall that for $\alpha,\beta,\gamma\in\set Q$ and $-\gamma \notin \mathbb{N}$, the Gaussian hypergeometric series $\pFqText{\alpha,\,\beta}{\gamma}{t}$ is defined as \begin{equation*} \label{eq:11} \pFq{\alpha, \beta}{\gamma}{t} := \sum_{n=0}^{\infty} \frac{(\alpha)_n(\beta)_n}{(\gamma)_n} \, \frac{t^n}{n!} \in \QQ[[t]], \end{equation*} where $(u)_n$ denotes the Pochhammer symbol $(u)_n=u(u+1)\cdots(u+n-1)$ for $n\in\mathbb{N}$. It satisfies the differential equation $ \left( {t}^{2}-t \right) y'' \left( t \right) + \left( (\alpha+\beta+1)t - \gamma \right) y' \left( t \right) + \alpha \beta \, y \left( t \right) = 0. $ \begin{thm} \label{thm:theta-non-algebraic} The power series $\Theta$ defined in~\eqref{eq:3} admits the following closed form representation: \begin{equation*} \label{eq:6} \Theta(t;x) = A_{1}(t;x) + A_{2}(t;x) \int_{0}^{t} A_{3}(s;x) T(s;x) ds, \end{equation*} where \begin{equation*} \begin{aligned} A_{0} &= \sqrt{1-\frac{2t}{x}-(4x^3-1)\frac{t^{2}}{x^2}}, \\[5pt] A_{1} &= \frac{1}{6xt^{3}} - \frac{x^{3}-1}{2x^{2}t^{2}} + \frac{2-3x^{3}}{6x^{3}t} + \frac{tx^{3}+2t-x}{6t^{3}x^{2}} A_{0}, \\[5pt] A_{2} &= \frac{x^{2}(x-tx^{3}-2t)}{3t^{3}} A_{0}, \\[5pt] A_{3} &= \frac{1}{(tx^{3}+2t-x)^{2}(4t^{2}x^{3}-(x-t)^{2})A_{0}}, \\[5pt] T &= (3t-x)x \;\pFq{-1/3, -2/3}{1}{27t^{3}} + 4t(2tx^{3}+t-x)\, \pFq{-1/3,1/3}{2}{27t^{3}}. \end{aligned} \end{equation*} The power series $A_{0}, A_{1}, A_{2}$ and $A_{3}$ are algebraic, and the power series $T$ is transcendental. In particular, $\Theta$ is transcendental. \end{thm} \begin{proof} We prove that $\Theta$ is equal to $C:=A_{1}+A_{2}\int A_{3}T$, in four steps: \begin{enumerate} \item use creative telescoping~\cite{Chyzak00,Koutschan10} to obtain a differential operator $L_{ct} \in \QQ(x,t)\langle \partial_{t}\rangle$ annihilating $\Theta$; \item verify that $L_{ct}$ annihilates~$C$; \item find $r\in\mathbb N$ such that if $s \in\set Q(x)[[t]]$ is a power series solution of $L_{ct}$ and $s = 0 \mod{t^{r}}$, then $s=0$; \item verify that $\Theta$ and $C$, both power series solution of $L_{ct}$, are equal modulo $t^{r}$ and so they are actually equal. \end{enumerate} For the first step, define \begin{equation*} \label{eq:8} \Theta_{0}(t;x,y) = \frac{(x-y)(x^{2}y-1)(xy^{2}-1)}{xyK(x,y)} \in \QQ(x,y,t) \end{equation*} so that $\Theta = [x^{>}y^{0}]\Theta_{0}$. In order to bring the problem to a form suitable to creative telescoping algorithms, we encode the coefficient extractions as residues. Extracting the constant coefficient in~$y$ is immediate: for any $F \in \QQ[x,1/x,y,1/y][[t]]$, as by definition, \begin{equation*} \label{eq:9} [y^{0}]F(t;x,y) = \Res_{y=0} \left(\frac{F(t;x,y)}{y}\right) \in \QQ[x,1/x][[t]]. 
\end{equation*} For extracting the positive part, we follow~\cite[Theorem~3]{BCHKP17}: for any $F \in \QQ[x,1/x][[t]]$, \begin{equation*} \label{eq:10} [x^{>}]F(t;x) = \Res_{z=0}\left[ \frac{1}{z} F(t;z) \frac{\frac{x}{z}}{1 - \frac{x}{z}} \right] \in \QQ[x][[t]]. \end{equation*} So composing the two, we get \begin{equation} \label{eq:7} \Theta(t;x) = \Res_{z=0}\Res_{y=0} \left[ \frac{1}{yz} \Theta_{0}(t;z,y) \frac{\frac{x}{z}}{1 - \frac{x}{z}} \right] . \end{equation} An annihilator for this can now be computed using creative telescoping, for example, using the Mathematica package \texttt{HolonomicFunctions}~\cite{Koutschan09}, with the following: \begin{verbatim} (* In Mathematica *) << "HolonomicFunctions.m" Theta0 = (x-y)*(x^2*y-1)*(x*y^2-1)/(x*y*(1-t*(1/x+1/y+x*y))) Theta0z = Theta0 /. x -> z Lct = First[First[CreativeTelescoping[ First[CreativeTelescoping[Theta0z/z/y * (x/z)/(1-x/z), Der[y], {Der[z],Der[t]}]], Der[z], {Der[t]}]]] \end{verbatim} This yields an operator $L_{ct} \in\set Q(x,t)\langle \partial_{t}\rangle$ of order~6, which annihilates~$\Theta$. Checking that $L_{ct}$ annihilates $C$ is a straightforward computation with a computer algebra software. For instance, in Maple, the following command evaluates to 0: \begin{verbatim} # In Maple with(DEtools); simplify(eval(diffop2de(Lct, [Dt,t], y(t)), y(t) = C)); \end{verbatim} For the last step, we need to look at a basis of power series solutions of~$L_{ct}$. Computer algebra software can again be used to compute (truncations of) elements in such a basis. For instance, this can be done with the following lines in Maple. \begin{verbatim} # In Maple Order := 8; sols := formal_sol(Lct,[Dt,t]); # Keep only the power series solutions sols := select(s -> type(series(s,t=0),'taylor'), sols); \end{verbatim} The output shows that the set of power series solutions is a $\QQ(x)$-vector space of dimension $2$ spanned, after a change of basis bringing the first terms to echelon form, by \begin{align*} s_{0} &= 1 + xt^{2} + t^{3} + O(t^{4}), \\ s_{1} &= t + \frac{1-x^{3}}{x} t^{2} + \frac{1}{x^{2}}t^{3} + O(t^{4}). \end{align*} So a power series solution of $L_{ct}$ is entirely determined by its coefficients of degree $0$ and~$1$, and in particular knowing a power series modulo $t^{2}$ is enough. Finally, checking that the first two coefficients of $C$ and $\Theta$ are equal is again a straightforward computation. For instance, again using Maple: \begin{verbatim} # In Maple map(normal,series(C,t,5)); series(Theta,t,2); \end{verbatim} returns the same result $- x^{2} + O(t^{2})$. This allows to conclude that $\Theta = C$. For the second statement of the theorem, note that $A_{0}$, $A_{1}$, $A_{2}$ and $A_{3}$ are algebraic by closure properties of algebraic functions. The proof of the fact that $T$ is transcendental combines human observation and computer algebra. First, observe that if $T(t;x)$ was algebraic, then by closure properties so would be $T(t;3t)$, which has only one hypergeometric term, and thus $H(t) = \pFqText{-1/3,1/3}{2}{t}$ would be algebraic. But it is straightforward to verify that this cannot be the case, either by a lookup in Schwarz's classification of algebraic~${}_{2}F_{1}$'s~\cite{Schwarz1873}, or simply by observing that the minimal-order linear differential equation $ \left( 9 {t}^{2} - 9t \right) H'' \left( t \right) + \left( 9t - 18 \right) H' \left( t \right) - \, H \left( t \right) = 0 $ of $H$ has solutions which cannot be algebraic because one of them has logarithms in its local expansion at~$0$. 
Finally, it follows that $\Theta$ is also transcendental, again by closure properties: if $\Theta$ was algebraic, so would be $\int_{0}^{t} A_{3}(s;x) T(s;x) ds$, and so would be its derivative $A_{3}T$, and so would be~$T$. \end{proof} We now give some more explanation on the process we followed to find the closed form proved above. The first step is to compute a small-order differential operator annihilating~$\Theta$. It is possible, given the data of the series coefficients, to guess such an operator $L_{g}$ of order $4$ (hence smaller in order than $L_{ct}$), by using for instance the guesser~\cite{kauers09a}: \begin{verbatim} (* In Mathematica, continuation of the previous calculations *) << "Guess.m" Theta = Expand[x*Expand[1/x CoefficientList[Series[Theta0,{t,0,70}],t]] /. (x^i_ /; i<0) -> 0 /. (y^i_ /; i!=0) -> 0]; Lg = GuessMinDE[Theta,TT[t]] \end{verbatim} The output is an operator $L_{g} \in\set Q(x,t)\langle \partial_{t}\rangle$ which is likely to annihilate $\Theta$. We can increase our trust in this operator by verifying that $L_{g}$ right-divides $L_{ct}$: \begin{verbatim} (* In Mathematica, continuation of the previous calculations *) OreReduce[Lct,{ToOrePolynomial[Lg,TT[t]]}] \end{verbatim} This line returns the right-remainder of $L_{ct}$ modulo $L_{g}$, which is $0$ as expected. At this point, we could also compute the quotient, and, examining its solutions similarly to what was done in the proof of Theorem~\ref{thm:theta-non-algebraic}, prove that $L_{g}$ annihilates~$\Theta$. But this is not necessary: the constructed closed form will (by design) be annihilated by $L_{g}$, so the fact that~$L_{g}$ is an annihilator of $\Theta$ is a consequence of the theorem. As a next step, we compute a closed form solution $C$ of $L_{g}$. The starting point is to decompose $L_{g}$ as the least common left multiple (LCLM) of two operators of smaller order~\cite{Hoeij96}: \begin{verbatim} # In Maple L1, L3 := op(DFactorLCLM(Lg, [Dt,t])); \end{verbatim} The output is a pair of two operators $L_1$ of order 1, and $L_3$ of order~3, such that $L_{g} = \text{LCLM}(L_1, L_3)$. Equivalently, in terms of solution spaces, this means that a basis of solutions of $L_g(y)=0$ is obtained by the union of the bases of $L_1$ and $L_3$, respectively. The operator $L_1$ admits a simple solution; this can be seen using the Maple command \begin{verbatim} dsolve(diffop2de(L1, [Dt,t], y(t)), y(t)); \end{verbatim} which outputs \[ {\frac {3\,{x}^{3}}{t}}+{\frac {3\,{x}^{4}}{{t}^{2}}}-\frac{2}{t}+{\frac {3\,x}{{ t}^{2}}}-{\frac {{x}^{2}}{{t}^{3}}} . \] It remains to treat the operator $L_3$. The starting point is to decompose it as the product of two operators of smaller order~\cite{Hoeij97}: \begin{verbatim} # In Maple fac := DFactor(L3, [Dt,t]); \end{verbatim} The output is a pair $\texttt{fac} = [L_2, S_1]$ of two operators of order 2, respectively~1, such that $L_{3} = L_2 \, S_1$. Now the differential equation $L_3(z)=0$ is equivalent to $L_2(y)=0$ and $S_1(z) = y$. Hence, it remains to solve $L_2(y)=0$. This can be done by using the algorithm in~\cite{KuHo13} and its Maple implementation provided by the authors\footnote{\url{https://www.math.fsu.edu/~vkunwar/hypergeomdeg3/hypergeomdeg3}}. 
Using the command \texttt{hypergeomdeg3}, one gets a solution in terms of hypergeometric $_2F_1$ functions: \begin{verbatim} SOL:=x/t^3/(t*x^3+2*t-x)/(4*t^2*x^3-t^2+2*t*x-x^2)* ((x-3*t)*x*hypergeom([-1/3, -2/3],[1],27*t^3) -4*t*(2*t*x^3+t-x)*hypergeom([-1/3, 1/3],[2],27*t^3)); \end{verbatim} One can check that this is indeed a solution of $L_2$; indeed, the simplification command \begin{verbatim} simplify(eval(diffop2de(fac[1], [Dt,t], y(t)), y(t) = SOL)); \end{verbatim} return~0. Moreover, one can show that this solution coincides (locally at $t=0$) with the unique power series solution of~$L_2$. Finally, the solution of $L_3(z)=0$ can be found using \begin{verbatim} simplify(dsolve( diffop2de(fac[2], [Dt,t], z(t) ) = SOL, z(t))); \end{verbatim} which yields \[ {\frac {t{x}^{3}+2t-x}{{t}^{3}}\sqrt { \left( 4{x}^{3}-1 \right) {t}^{2}+2tx-{x}^{2}} \left( \int \!{\frac {{\rm SOL} \cdot {t}^{3}}{t{x}^{3}+2t-x}{\frac {1}{\sqrt { \left( 4{x}^{3}-1 \right) {t}^{2}+2tx-{x}^{2}}}}} \,dt+c \right) }, \] where ${\rm SOL}$ is the hypergeometric expression found above and $c=c(x)$ is a constant function in~$t$, that is found by fitting initial terms of the power series expansions. Putting pieces together yields the expression in the statement of Theorem~\ref{thm:theta-non-algebraic}. Note that the method sketched above is rigorous in the sense that the closed form solution of $L_{ct}$ found in this way is correct by construction. The alternative correctness argument in the proof of the theorem is independent of how the closed form was found. It is also worth noting that providing a closed form for~$\Theta$ is somewhat more than just proving its transcendence. If we were not interested in the hypergeometric expression, we could prove the transcendence of $\Theta$ directly on the level of operators. For, the lclm decomposition quoted above translates into a decomposition $\Theta=\Theta_1+\Theta_3$ where $\Theta_1$ is a solution of $L_1$ and $\Theta_3$ is a solution of~$L_3$. Since $\Theta_1$ (stated above) is rational, transcendence of $\Theta$ is equivalent to transcendence of~$\Theta_3$. Now assume that $\Theta_3$ is algebraic. Then the factorization $L_3=L_2S_1$ implies that $y:=S_1(\Theta_3)$ is algebraic as well, and that it is a solution of~$L_2$. To conclude the argument, it suffices to observe that $y\neq0$, that $S_1$ is irreducible, and that $S_1$ has a logarithmic singularity. \section{Transcendence of $Q$} \label{sec:non-algebraicity-q} \begin{thm} Assume that $a\neq b$ and $c \neq 0$. In particular, this is the case if $a,b,c$ are variables in the polynomial ring $\QQ[a,b,c]$. Then the power series $Q(x,y)$, $Q(x,0)$ and $Q(0,y)$ are transcendental over $\QQ(a,b,c,x,y,t)$. \end{thm} \begin{proof} First note that the algebraicity of the three series is equivalent: if $Q(x,y)$ is algebraic, then so are its specializations $Q(0,y)$ and $Q(x,0)$; and conversely, if, say, $Q(0,y)$ is algebraic, then by symmetry of the step set so is $Q(x,0)$, and by the kernel equation, so is $Q(x,y)$. To reach a contradiction, assume that $Q(x,y)$ is algebraic. Then, by taking the derivative along $x$ and taking the value at $x=y=0$, the power series $Q_{1,0}$ is also algebraic. Repeating the same process, $Q_{2,0}$ and $Q_{3,0}$ are algebraic. Recall that $Q(0,0)$ is algebraic~\cite[Corollary~3]{Beaton_2020}. So all in all, the left-hand side~$L$ of Equation~\eqref{eq:4} is algebraic. 
If $(a-b)\Big(ab - (ab - ac - bc + abc)Q(0,0)\Big) \neq 0$, this would imply that \begin{equation*} \label{eq:5} \Theta = \frac{c\,L}{(a-b) \Big(ab - (ab - ac - bc + abc)Q(0,0)\Big) t^{3}} \end{equation*} is also algebraic, which is a contradiction with Theorem~\ref{thm:theta-non-algebraic}. Thus, $(a-b)\big(ab - (ab - ac - bc + abc)Q(0,0)\big) = 0$. By assumption, $a \neq b$, so the second factor has to be zero. Since $Q(0,0) = 1 + (a+b)c t^3 + \cdots$, extracting coefficients of $t^0$ and $t^3$ in this second factor yields $abc=ac+bc$ and $0 = ab(a+b)c$. Since $c\neq 0$, these relations imply $ab=a+b$ and $0=ab(a+b)$, thus $ab=a+b=0$, and finally $a=b=0$, which contradicts the assumption $a\neq b$. \end{proof} \section{Particular cases and additional remarks} \label{sec:part-cases-addit} If $a=b$, as observed in~\cite[Section~5.5]{Beaton_2020}, the right-hand side of Equation~\eqref{eq:4} vanishes, and then the series $Q(x,y)$ is algebraic. If $c=0$, then in particular $Q(0,0)=1$, and both sides of Equations~\eqref{eq:4} and~\eqref{eq:kernel} (after clearing out the denominator $c$) vanish. We do not know if the power series $Q(x,y)$ is algebraic or even D-finite in that case. With sample values of $a,b,x,y$ and $c=0$, we were not able to guess any algebraic, differential or recurrence relation with the first \num{10000} coefficients in $t$ of the series $Q(a,b,c;x,y;t)$. The generating function $Q(1,1)$, which counts interacting walks regardless of their ending point, is also of interest, besides $Q(0,0)$ and $Q(x,y)$. Experimentally, this generating function appears to be algebraic: we could guess a polynomial \footnote{Available online: \url{http://www.algebra.uni-linz.ac.at/research/kreweras-interacting/equations_Q11}} $P(a,b,c;t,u) \in \mathbf{F}_{45007}[a,b,c,t,u]$, with $\mathbb{F}_{45007}$ the finite field with $45007$ elements, such that, for a large number of values of $(a,b,c) \in \mathbb{F}_{45007}^{3}$, one has $P(a,b,c;t,Q(a,b,c;1,1;t)) = 0 \bmod t^{2350}$. The polynomial $P$ has degree $92$ in $t$, degree $24$ in $u$, degree $60$ in $a$ and in $b$, and degree $24$ in $c$. In dense monomial form it has a size of more than 1GB. The next step would be to \emph{lift} the result to obtain a polynomial $P \in \QQ[a,b,c,t,u]$, and to \emph{prove} that $P(a,b,c;t,Q(a,b,c;1,1;t)) = 0$. In principle, this is doable using (a variant of) the approach in~\cite{BoKa10}. \acknowledgements{ We warmly thank Mark van Hoeij for his advice, and for his precious help with the package \href{{https://www.math.fsu.edu/~vkunwar/hypergeomdeg3/hypergeomdeg3}}{hypergeomdeg3}, especially in the parametric case. } \end{document}
\begin{document} \begin{abstract} We study a combination of the refracted and reflected L\'{e}vy processes. Given a spectrally negative L\'{e}vy process and two boundaries, it is reflected at the lower boundary while, whenever it is above the upper boundary, a linear drift at a constant rate is subtracted from the increments of the process. Using the scale functions, we compute the resolvent measure, the Laplace transform of the occupation times as well as other fluctuation identities that will be useful in applied probability including insurance, queues, and inventory management. \underline{n}oindent \small{\textbf{Key words:} L\'{e}vy processes; fluctuation theory; scale functions; insurance risk. \\ \underline{n}oindent AMS 2010 Subject Classifications: 60G51, 91B30, 90B22}\\ \mathbb{E}nd{abstract} \maketitle \section{Introduction} In this paper, we study what we call the \mathbb{E}mph{refracted-reflected} spectrally negative L\'{e}vy process. Its dynamics can be understood as follows: given a spectrally negative L\'{e}vy process and two boundaries, we reflect the path of the process at the lower boundary while, whenever it is above the upper boundary, a linear drift at a constant rate is subtracted from the increments of the process. This process can be seen as a combination of a L\'evy process reflected at the lower boundary and the refracted L\'{e}vy process, introduced by Kyprianou and Loeffen \cite{KL}, with the upper boundary being the refraction trigger level. The former is well-studied and is known to be expressed as the difference between the underlying L\'{e}vy process and its running infimum process. The latter moves like the original process below a fixed level while it behaves like a drift-changed process above it. Various fluctuation identities have been developed for reflected and refracted L\'evy processes via the use of scale functions. We refer the reader to \cite{APP2007,P2004,P2007} and \cite{KL, KPP} for a review on the study of the former and the latter, respectively. Our study of the refracted-reflected L\'evy process is motivated by its potential applications in applied probability, as exemplified by insurance mathematics and queueing theory. In insurance, the classical Cram\'er-Lundberg model uses a compound Poisson process with negative jumps as the surplus of an insurance company. Nevertheless, very recent studies motivated by insurance risk have seen preference in working with a general spectrally negative L\'{e}vy process, partly thanks to the development of the fluctuation theory of L\'evy processes and scale functions. See for example \cite{APP2007,F1998,HPSV2004a,HPSV2004b,KKM2004,KK2006,KP2007,R2014,RZ2007,SV2007}. In particular, in \mathbb{E}mph{de Finetti's optimal dividend problem}, an insurance company aims to maximize the expected net present value (NPV) of the total dividends paid until ruin. It is shown, under a certain condition (see \cite{Loeffen}), that the barrier strategy is optimal and the resulting controlled surplus process becomes the L\'{e}vy process reflected at an upper boundary; see, among others, \cite{APP2007} and \cite{Loeffen}. To this classical setting, there are two existing extensions. First, in the \mathbb{E}mph{bail-out} model, it is assumed that the capital must be injected to prevent the surplus process from going below zero. In this setting, Avram et al. 
\cite{APP2007} showed that it is optimal to reflect at zero and at some upper boundary, with the optimally controlled process being the doubly reflected L\'{e}vy process of \cite{Pistorius_2003}. Second, under the restriction that the rate at which the dividends are paid is bounded and, instead, absolutely continuous, Kyprianou et al.\ \cite{KLP} showed that, if the L\'evy measure has a completely monotone density, it is optimal to pay dividends at the maximal rate as long as the surplus is above some fixed level. The optimally controlled process becomes then the refracted L\'{e}vy process. Naturally, it is of great interest to think of a joint problem with both capital injection and an absolutely continuous assumption; a refracted-reflected L\'{e}vy process is an obvious candidate for the optimally controlled process. In queues and inventory management, a L\'{e}vy process is also a main tool in modeling. As an important characteristic of queues and inventories, they have a lower bound at zero, and hence it is common to model them as those reflected at zero. As a more realistic model, \mathbb{E}mph{queues with abandonments} take into account the well-studied phenomenon that a customer is not patient enough to line up if the queue is too long; consequently the rate of increments of a queue decreases when the current level is high. On the other hand in inventory management, the rate of replenishment must be decreased when the inventory level is high; this is due to the fact that there is a limited capacity of inventory and it is necessary to reduce future unsalable stock. In these settings, refracted-reflected L\'evy processes are again a natural choice to model these processes. In this paper, we construct and study the refracted-reflected process when the underlying process is a spectrally negative L\'{e}vy process. Our objective is to compute several fluctuation identities including the resolvent measure, the expected NPV of capital injection (in the insurance context described above) and the Laplace transform of the occupation time above and below the level of refraction. Given that this type of processes constitute an extension of the refracted case, we apply and adopt several techniques used in \cite{KL}. We shall first consider the bounded variation case, and then by an approximation scheme we study the unbounded variation case. In order to avoid complicated expressions in terms of the L\'{e}vy measure, we obtain several formulae to simplify the expressions of the obtained fluctuation identities. Our challenges are mainly caused by the negative jumps. There are essentially three regions that determine the movement of the process: (1) the refraction region above the upper boundary, (2) the reflection region below the lower boundary, and (3) the waiting region between these. Due to negative jumps, the process can potentially jump from the refraction region to any of the other remaining regions. We shall show, nevertheless, that the fluctuation identities can be obtained and can be simplified by the formulae discussed above. The rest of the paper is organized as follows. Section \ref{section_process} gives a construction of the refracted-reflected process. Section \ref{section_scale_functions} reviews the theory of scale functions and develops new simplifying formulae that will be useful in the sequel and are potentially beneficial in future work. 
Section \ref{section_resolvents} computes the resolvent measure and, as special cases, we obtain the one-sided exit identity as well as the expected NPV of dividends in the insurance context. In Section \ref{section_capital_injection}, we obtain the expected NPV of capital injection. Finally, in Section \ref{section_occupation_time}, we study the occupation time of the process above and below the refraction level. \section{Refracted-reflected spectrally one-sided L\'{e}vy processes} \label{section_process} \subsection{Spectrally negative L\'{e}vy processes and their reflected/refracted processes} Let $X=(X_t; t\geq 0)$ be a L\'evy process defined on a probability space $(\Omega, \mathcal{F}, \p)$. For $x\in \R$, we denote by $\p_x$ the law of $X$ when it starts at $x$ and write for convenience $\p$ in place of $\p_0$. Accordingly, we shall write $\mathbb{E}_x$ and $\mathbb{E}$ for the associated expectation operators. We let $(\mathcal{F}_t)_{t \geq 0}$ be the filtration generated by $X$. In this paper, we shall assume throughout that $X$ is \textit{spectrally negative}, meaning here that it has no positive jumps and that it is not the negative of a subordinator. This allows us to define the Laplace exponent $\psi(\theta):[0,\infty) \to \R$, i.e. \[ \mathbb{E}\big[{\rm e}^{\theta X_t}\big]=:{\rm e}^{\psi(\theta)t}, \qquad t, \theta\ge 0, \] given by the \mathbb{E}mph{L\'evy-Khintchine formula} \begin{equation} \psi(\theta):=\gamma\theta+\frac{\sigma^2}{2}\theta^2+\int_{(-\infty,0)}\big({\rm e}^{\theta x}-1-\theta x\mathbf{1}_{\{x>-1\}}\big)\Pi({\rm d} x), \quad \theta \geq 0,\underline{n}otag \mathbb{E}nd{equation} where $\gamma\in \R$, $\sigma\ge 0$, and $\Pi$ is a measure on $(-\infty,0)$ called the L\'evy measure of $X$ that satisfies \[ \int_{(-\infty,0)}(1\land x^2)\Pi({\rm d} x)<\infty. \] It is well-known that $X$ has paths of bounded variation if and only if $\sigma=0$ and $\int_{(-1, 0)} x\Pi(\mathrm{d}x)$ is finite. In this case $X$ can be written as \begin{equation} X_t=ct-S_t, \,\,\qquad t\geq 0,\underline{n}otag \mathbb{E}nd{equation} where \begin{align} c:=\gamma-\int_{(-1,0)} x\Pi(\mathrm{d}x) \label{def_drift_finite_var} \mathbb{E}nd{align} and $(S_t; t\geq0)$ is a driftless subordinator. Note that necessarily $c>0$, since we have ruled out the case that $X$ has monotone paths; its Laplace exponent is given by \begin{equation*} \psi(\theta) = c \theta+\int_{(-\infty,0)}\big( {\rm e}^{\theta x}-1\big)\Pi({\rm d} x), \quad \theta \geq 0. \mathbb{E}nd{equation*} The \mathbb{E}mph{L\'{e}vy process reflected at the lower boundary} $0$ is a strong Markov process written concisely by \begin{align} U_t := X_t+\sup_{0 \leq s\leq t}(-X_s)\vee0,\qquad t\geq0. \label{reflected_levy} \mathbb{E}nd{align} The supremum term pushes the process upward whenever it attempts to down-cross the level $0$; as a result the process only takes values on $[0, \infty)$. The \mathbb{E}mph{refracted L\'{e}vy process} is a variant of the reflected L\'{e}vy process, first introduced by Kyprianou and Loeffen \cite{KL}. Informally speaking, a linear drift at rate $\delta>0$ is subtracted from the increments of the underlying L\'{e}vy process $X$ whenever it exceeds a pre-specified positive level $b>0$. More formally, it is the unique strong solution to the stochastic differential equation given by \begin{equation}\label{SDE} {\rm d} A_t={\rm d} X_t-\delta\mathbf{1}_{\{A_t>b\}}{\rm d} t, \qquad t\geq 0. 
\mathbb{E}nd{equation} In deriving the fluctuation identities, it is important that the \mathbb{E}mph{drift-changed process} \begin{align} Y_t := X_t - \delta t, \quad t \geq 0, \label{def_Y} \mathbb{E}nd{align} is again a spectrally negative L\'{e}vy process that is not the negative of a subordinator. Hence, in \cite{KL} and \cite{KLP}, the standing assumption (with $c$ defined as in \mathbb{E}qref{def_drift_finite_var}) \begin{equation} \mathrm{({\bf H})}\qquad\delta< c,\qquad\text{if $X$ has paths of bounded variation}, \underline{n}otag \mathbb{E}nd{equation} is imposed. The reader is referred to Bertoin \cite{B} and Kyprianou \cite{K} for a complete introduction to the theory of L\'evy processes and their reflected processes. For refracted L\'{e}vy processes, see \cite{KL}, \cite{KLP}, \cite{KPP}, and \cite{R2014}. \subsection{Refracted-reflected L\'{e}vy processes}\label{RRLPD} For the rest of the paper, we fix $b > 0$ and $\delta \geq 0$ such that condition $\mathrm{({\bf H})}$ above holds. We define the \mathbb{E}mph{refracted-reflected L\'{e}vy process} as follows. While the process is above $b$ a linear drift at rate $\delta$ is subtracted from the increments of process. On the other hand, when it attempts to down-cross $0$, it is pushed upward so that it will not go below $0$. The process can be formally constructed by the recursive algorithm given below: \begin{center} \line(1,0){300} \mathbb{E}nd{center} \textbf{Construction of the refracted-reflected L\'{e}vy process $V$ under $\p_x$} \begin{description} \item[Step 0] Set $V_{0-}=x$. If $x \geq 0$, then set $\underline{\tau} := 0$ and go to \textbf{Step 1}. Otherwise, set $\overline{\tau} := 0$ and go to \textbf{Step 2}. \item[Step 1] Let $\{ \widetilde{A}_t; t \geq \underline{\tau} \}$ be the refracted L\'{e}vy process (with refraction level $b$) that starts at the time $\underline{\tau}$ at the level $x$, and $\overline{\tau} := \inf \{ t > \underline{\tau}: \widetilde{A}_t < 0\}$. Set $V_t=\widetilde{A}_t$ for all $\underline{\tau} \leq t < \overline{\tau}$. Then go to \textbf{Step 2}. \item[Step 2] Let $\{ \widetilde{U}_t; t \geq \overline{\tau}\}$ be the L\'{e}vy process reflected at the lower boundary $0$ that starts at time $\overline{\tau}$ at $0$, and $\underline{\tau} := \inf \{ t > \overline{\tau}: \widetilde{U}_t > b\}$. Set $V_t=\widetilde{U}_t$ for all $\overline{\tau} \leq t < \underline{\tau}$ and $x = b$. Then go to \textbf{Step 1}.\mathbb{E}nd{description} \begin{center} \line(1,0){300} \mathbb{E}nd{center} Let us define the nondecreasing and right-continuous processes $R_t$ and $L_t$ that represent, respectively, the cumulative amounts of modification up to $t$ that pushes the process upward when it attempts to go below $0$ and that pushes downward when it is above $b$. Then, we have a decomposition \begin{align*} V_t = X_t + R_t - L_t, \quad t \geq 0, \mathbb{E}nd{align*} where we can write \begin{align*} L_t = \delta \int_0^t 1_{\{ V_s > b \}} {\rm d} s, \quad t \geq 0. \mathbb{E}nd{align*} In particular, \mathbb{E}mph{for the case of bounded variation,} \begin{align*} R_t = \sum_{t\geq0: V_{t-} + \Delta X_t < 0} |V_{t-} + \Delta X_t | \quad t \geq 0. \mathbb{E}nd{align*} Here and for the rest of the paper, we define $\Delta \xi_t := \xi_t - \xi_{t-}$, $t \geq 0$, for any right-continuous process $\xi$. 
\subsection{Applications} \label{subsection_dividends} As discussed in the introduction, the process $V$ can model the surplus of a dividend-paying company under the bail-out setting: they receive an injection of capital so as to prevent the process from going below the default level $0$, while they also pay dividends at a fixed rate $\delta > 0$ whenever the surplus is above the level $b$. Under this scenario, it is clear from the construction above that $L$ is the cumulative amount of dividend payments whereas $R$ is that of injected capital. As we have reviewed in the introduction, $V$ can also model queues with abandonments as well as inventories. In the former, $L$ models the number of customers who decide not to line up because it is too long. In the latter, $R$ models the lost sales due to the lack of stock. Motivated by these applications, it is thus of great interest to compute the expected NPV's: \begin{align} \label{net_present_value} \mathbb{E}_x \Big(\int_0^\infty e^{-qt}{\rm d} L_t \Big)= \delta \mathbb{E}_x \Big(\int_0^\infty e^{-qt}1_{\{V_s>b\}}{\rm d} s \Big) \quad \text{ and } \quad \mathbb{E}_x \Big(\int_{[0, \infty)} e^{-qt}{\rm d} R_t \Big). \mathbb{E}nd{align} We compute these values as corollaries to the fluctuation identities of $V$; see Sections \ref{section_resolvents} and \ref{section_capital_injection}. In addition, it is worth studying further the length of the time $\{ s\geq 0: V_s > b \}$. In particular, in the queues with abandonments, it is important to compute the distribution of the occupation times $\int 1_{\{ V_s > b \}} {\rm d} s$ and $\int 1_{\{ V_s < b \}} {\rm d} s$ where the former is the length of the busy period where customers may decide not to line up and the latter is that of the non-busy period; we obtain their Laplace transforms in Section \ref{section_occupation_time}. \subsection{Approximation of the case of unbounded variation} As has been done in \cite{KL}, our approach first obtains identities for the case in which $X$ is of bounded variation and then extends the results to the unbounded variation case. In order to carry this out, we need to show that the refracted-reflected L\'{e}vy process $V$ associated with the L\'{e}vy process $X$ of unbounded variation can be approximated by those of bounded variation. Following Definition 11 of \cite{KL}, given a stochastic process $(\xi_s; s \geq 0)$, a sequence of processes $\{(\xi_s^{(n)})_{s \geq 0};n\geq1\}$ is strongly approximating for $\xi$, if $\lim_{n \uparrow \infty}\sup_{0 \leq s \leq t} |\xi_s - \xi^{(n)}_{s}| =0$ for any $t > 0$ a.s. As is well-known (see Definition 11 of \cite{KL} and page 210 of \cite{B}), for any spectrally negative L\'{e}vy process $X$, there exists a strongly approximating sequence $X^{(n)}$ with paths of bounded variation. We shall obtain a similar result for the refracted-reflected L\'{e}vy process. \begin{proposition} \label{prop_approximation} Suppose $X$ is of unbounded variation and $(X^{(n)}; n \geq 1)$ is a strongly approximating sequence for $X$. In addition, define $V$ and $V^{(n)}$ as the refracted-reflected processes associated with $X$ and $X^{(n)}$, respectively. Then $V^{(n)}$ is a strongly approximating sequence of $V$. \mathbb{E}nd{proposition} \begin{proof} See Appendix \ref{proof_prop_approximation}. 
\end{proof}

\section{Review of scale functions and some new identities} \label{section_scale_functions}
In this section, we review the scale function of spectrally negative L\'{e}vy processes and develop some new identities that will be used to simplify the expression of the fluctuation identities in our later arguments.

Fix $q \geq 0$. Following the same notation as in \cite{KL}, we use $W^{(q)}$ and $\mathbb{W}^{(q)}$ for the scale functions of $X$ and $Y$ (see \eqref{def_Y} for the latter), respectively. Namely, these are the mappings from $\R$ to $[0, \infty)$ that take value zero on the negative half-line, while on the positive half-line they are strictly increasing functions that are defined by their Laplace transforms:
\begin{align} \label{scale_function_laplace}
\begin{split}
\int_0^\infty \mathrm{e}^{-\theta x} W^{(q)}(x) {\rm d} x &= \frac 1 {\psi(\theta)-q}, \quad \theta > \Phi(q), \\
\int_0^\infty \mathrm{e}^{-\theta x} \mathbb{W}^{(q)}(x) {\rm d} x &= \frac 1 {\psi_Y(\theta) -q}, \quad \theta > \varphi(q),
\end{split}
\end{align}
where $\psi_Y(\theta) := \psi(\theta) - \delta \theta$, $\theta \geq 0$, is the Laplace exponent of $Y$ and
\begin{align}
\begin{split}
\Phi(q) := \sup \{ \lambda \geq 0: \psi(\lambda) = q\} \quad \textrm{and} \quad \varphi(q) := \sup \{ \lambda \geq 0: \psi_Y(\lambda) = q\}. \notag
\end{split}
\end{align}
In particular, when $q=0$, we shall drop the superscript. By the strict convexity of $\psi$, we derive the inequality $\varphi(q) > \Phi(q) > 0$ for $q > 0$ and $\varphi(0) \geq \Phi(0) \geq 0$. We also define, for $x \in \R$,
\begin{align*}
\overline{W}^{(q)}(x) &:= \int_0^x W^{(q)}(y) {\rm d} y, \\
Z^{(q)}(x) &:= 1 + q \overline{W}^{(q)}(x), \\
\overline{Z}^{(q)}(x) &:= \int_0^x Z^{(q)} (z) {\rm d} z = x + q \int_0^x \int_0^z W^{(q)} (w) {\rm d} w {\rm d} z.
\end{align*}
Noting that $W^{(q)}(x) = 0$ for $-\infty < x < 0$, we have
\begin{align}
\overline{W}^{(q)}(x) = 0, \quad Z^{(q)}(x) = 1 \quad \textrm{and} \quad \overline{Z}^{(q)}(x) = x, \quad x \leq 0. \label{z_below_zero}
\end{align}
In addition, we define $\overline{\mathbb{W}}^{(q)}$, $\mathbb{Z}^{(q)}$ and $\overline{\mathbb{Z}}^{(q)}$ analogously for $Y$. The scale functions of $X$ and $Y$ are related, for $x \in \R$ and $p, q \geq 0$, by the following equalities
\begin{align}\label{RLqp}
&\int_0^x\mathbb{W}^{(p)}(x-y)\big[\delta W^{(q)}(y)-(q-p)\overline{W}^{(q)}(y)\big]{\rm d} y=\overline{\mathbb{W}}^{(p)}(x)-\overline{W}^{(q)}(x), \quad \\
&\int_0^x\mathbb{W}^{(p)}(x-y)\big[\delta Z^{(q)}(y)-(q-p)\overline{Z}^{(q)}(y)\big]{\rm d} y=\overline{\mathbb{Z}}^{(p)}(x)-\overline{Z}^{(q)}(x)+\delta\overline{\mathbb{W}}^{(p)}(x),\label{RLqp2}
\end{align}
which can be proven by showing that the Laplace transforms on both sides are equal. Regarding their asymptotic values as $x \downarrow 0$ we have, as in Lemma 3.1 of \cite{KKR},
\begin{align*}
\begin{split}
W^{(q)} (0) &= \left\{ \begin{array}{ll} 0 & \textrm{if $X$ is of unbounded variation,} \\ c^{-1} & \textrm{if $X$ is of bounded variation,} \end{array} \right. \\
\mathbb{W}^{(q)} (0) &= \left\{ \begin{array}{ll} 0 & \textrm{if $Y$ is of unbounded variation,} \\ (c-\delta)^{-1} & \textrm{if $Y$ is of bounded variation,} \end{array} \right.
\end{split}
\end{align*}
and, as in Lemma 3.3 of \cite{KKR},
\begin{align}
\begin{split}
e^{-\Phi(q) x}W^{(q)} (x) \nearrow \psi'(\Phi(q))^{-1} \quad \textrm{and} \quad e^{-\varphi(q) x}\mathbb{W}^{(q)} (x) \nearrow \psi_Y'(\varphi(q))^{-1}, \quad \textrm{as } x \rightarrow \infty,
\end{split} \label{W_q_limit}
\end{align}
where in the case $\psi'(0+) = 0$ or $\psi'_Y(0+) = 0$, the right hand side, when $q=0$, is understood to be infinity.

The increments of the refracted-reflected L\'{e}vy process can be decomposed into those of $Y$ and $U$ that are defined in \eqref{def_Y} and \eqref{reflected_levy}, respectively. Here, we summarize a few known identities of these processes in terms of the scale function that will be used later in the paper.

For the drift-changed process $Y$, let us define the first down- and up-crossing times, respectively, by
\begin{align} \label{first_passage_time}
\tau_a^- := \inf \left\{ t > 0: Y_t < a \right\} \quad \textrm{and} \quad \tau_a^+ := \inf \left\{ t > 0: Y_t > a \right\}, \quad a \in \R;
\end{align}
here and throughout, let $\inf \varnothing = \infty$. Then, for any $a > b$ and $x \leq a$,
\begin{align}
\begin{split}
\E_x \left( e^{-q \tau_a^+} 1_{\left\{ \tau_a^+ < \tau_b^- \right\}}\right) &= \frac {\mathbb{W}^{(q)}(x-b)} {\mathbb{W}^{(q)}(a-b)}, \\
\E_x \left( e^{-q \tau_b^-} 1_{\left\{ \tau_a^+ > \tau_b^- \right\}}\right) &= \mathbb{Z}^{(q)}(x-b) - \mathbb{Z}^{(q)}(a-b) \frac {\mathbb{W}^{(q)}(x-b)} {\mathbb{W}^{(q)}(a-b)}.
\end{split} \label{laplace_in_terms_of_z}
\end{align}
In addition, the \emph{$q$-resolvent measure} is known to have a density written as
\begin{align} \label{resolvent_density}
\E_x \Big( \int_0^{\tau_{b}^- \wedge \tau^+_a} e^{-qt} 1_{\left\{ Y_t \in {\rm d} y \right\}} {\rm d} t\Big) &= \Big[ \frac {\mathbb{W}^{(q)}(x-b) \mathbb{W}^{(q)} (a-y)} {\mathbb{W}^{(q)}(a-b)} -\mathbb{W}^{(q)} (x-y) \Big] {\rm d} y, \quad b < x < a;
\end{align}
see Theorem 8.7 of \cite{K}. It is known that a spectrally negative L\'{e}vy process creeps downwards (i.e.\ $\p_x \{ Y_{\tau_b^-}= b, \tau_b^- < \infty \} > 0$ for $x > b$) if and only if $\sigma > 0$ (see Exercise 7.6 of \cite{K}). Hence, \emph{for the case of bounded variation}, the above identity \eqref{resolvent_density} together with the compensation formula (see Theorem 4.4 of \cite{K}) gives, for any $b < a$ and a nonnegative measurable function $l$,
\begin{multline} \label{undershoot_expectation}
\E_x\left(e^{-q\tau_b^-}l(Y_{\tau_b^-})1_{\{\tau_b^-<\tau_a^+\}}\right)\\
=\int_0^{a-b}\int_{(-\infty,-y)} l(y+u+b)\left\{\frac{\mathbb{W}^{(q)}(x-b) \mathbb{W}^{(q)}(a-b-y)}{ \mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-b-y)\right\}\Pi({\rm d} u){\rm d} y, \quad x \in \R.
\end{multline}
Similar identities can also be computed for the reflected L\'{e}vy process $U$ as in \eqref{reflected_levy}. Let
\begin{align}
\kappa^+_b:=\inf\{t>0:U_t>b\}. \label{def_kappa_time}
\end{align}
By Theorem 1 (i) of \cite{P2004}, for any Borel set $B$,
\begin{align}\label{rsr}
\mathbb{E}_x\bigg(\int_0^{\kappa_b^+}e^{-qt}1_{\{ U_t \in B \}}{\rm d} t \bigg)=\frac{Z^{(q)}(x)}{Z^{(q)}(b)}\rho^{(q)}(b;B)-\rho^{(q)}(x;B), \quad x \leq b,
\end{align}
where
\[
\rho^{(q)}(x;B) :=\int_0^xW^{(q)}(x-y)1_{\{y\in B\}}{\rm d} y.
\]
In particular,
\begin{align} \label{upcrossing_time_reflected}
\mathbb{E}_x\big(e^{-q\kappa_b^+}\big)=\frac{Z^{(q)}(x)}{Z^{(q)}(b)}, \quad x \leq b.
\end{align}
In addition, if we define, for $t \geq 0$, $\tilde{R}_t := \sup_{s\leq t}(-X_s)\vee0$ so that $U_t = X_t + \tilde{R}_t$, then
\begin{align}
\mathbb{E}_x\Big(\int_{[0,\kappa_b^+]}e^{-qt}{\rm d} \tilde{R}_t\Big)&=- \Big( \overline{Z}^{(q)}(x) + \frac {\psi'(0+)} q\Big) +\Big( \overline{Z}^{(q)}(b) + \frac {\psi'(0+)} q \Big) \frac{Z^{(q)}(x)} {Z^{(q)}(b)}, \quad x \leq b; \label{capital_injection_identity_SN}
\end{align}
see page 167 in the proof of Theorem 1 of \cite{APP2007}. It is noted that \eqref{upcrossing_time_reflected} and \eqref{capital_injection_identity_SN} hold even for $x < 0$ by \eqref{z_below_zero}.
\begin{remark} \label{remark_strongly_approximating}
Suppose $(X^{(n)}; n \geq 1)$ is a strongly approximating sequence for $X$ and $(W^{(q)}_n; n \geq 1)$ and $(\mathbb{W}^{(q)}_n; n \geq 1)$ are the corresponding scale functions of $X^{(n)}$ and $Y^{(n)} :=( X^{(n)}_t - \delta t, t \geq 0)$, respectively. Then, by the continuity theorem, as is discussed in Lemma 20 of \cite{KL}, $W^{(q)}_n(x)$ (resp.\ $\mathbb{W}^{(q)}_n(x)$) converges to $W^{(q)}(x)$ (resp.\ $\mathbb{W}^{(q)}(x)$) for every $x \in \R$. In addition, as recently shown in Remark 3.2 of \cite{PY_astin}, (if $X$ is of unbounded variation) $W^{(q)\prime}_n(x+)$ (resp.\ $\mathbb{W}^{(q)\prime}_n(x+)$) converges to $W^{(q)\prime}(x)$ (resp.\ $\mathbb{W}^{(q)\prime}(x)$) for all $x > 0$. This observation together with Proposition \ref{prop_approximation} is used to extend the result from the case of bounded variation to that of unbounded variation.
\end{remark}
We conclude this section with some new identities that will simplify the expressions of our results and will help us avoid the use of the L\'{e}vy measure. In particular, the first item below is a generalization of (4.17) in \cite{KL}.
\begin{lemma} \label{lemma_useful_identity}
Suppose $X$ is of bounded variation. For any $p,q\geq0$ and $v \leq b \leq x$, we have
\begin{itemize}
\item[(i)]
\begin{align} \label{useful_identity_W}
\begin{split}
\int_{0}^{\infty}&\int_{(-\infty,-y)}W^{(q)}(y+u+b-v)\mathbb{W}^{(p)}(x-b-y)\Pi({\rm d} u){\rm d} y \\
&=(c-\delta){W^{(q)}(b-v)} \mathbb{W}^{(p)}(x-b)-W^{(q)}(x-v)-\delta\int_b^x\mathbb{W}^{(p)}(x-y)W^{(q)\prime}(y-v){\rm d} y \\
&+(q-p)\int_b^x \mathbb{W}^{(p)} (x-z) W^{(q)} (z-v) {\rm d} z,
\end{split}
\end{align}
\item[(ii)]
\begin{align} \label{lemma_useful_identity_c}
\begin{split}
\int_{0}^{\infty}\int_{(-\infty,-y)} &Z^{(q)}(y+u+b-v)\mathbb{W}^{(p)}(x-b-y)\Pi({\rm d} u){\rm d} y \\
&= (c-\delta) Z^{(q)}(b-v) \mathbb{W}^{(p)}(x-b)-Z^{(q)}(x-v)-(p-q)\overline{\mathbb{W}}^{(p)}(x-b) \\
&+q\int_b^x\mathbb{W}^{(p)}(x-y)\left((q-p)\overline{W}^{(q)}(y-v)-\delta W^{(q)}(y-v)\right){\rm d} y,
\end{split}
\end{align}
\item[(iii)]
\begin{align} \label{lemma_useful_identity_Z_bar}
\begin{split}
\int_0^{\infty}&\int_{(-\infty,-y)}\overline{Z}^{(q)}(y+u+b-v)\mathbb{W}^{(p)}(x-b-y)\Pi({\rm d} u){\rm d} y \\
&=(c-\delta)\overline{Z}^{(q)}(b-v) \mathbb{W}^{(p)}(x-b)-\overline{Z}^{(q)}(x-v)-\delta\int_b^x\mathbb{W}^{(p)}(x-y)Z^{(q)}(y-v){\rm d} y \\
&+(q-p)\int_b^x\mathbb{W}^{(p)}(x-y)\overline{Z}^{(q)}(y-v) {\rm d} y +\psi'(0+)\overline{\mathbb{W}}^{(p)}(x-b).
\end{split}
\end{align}
\end{itemize}
\end{lemma}
\begin{proof}
The formulae (i) and (ii) can be derived directly using \eqref{undershoot_expectation} and Lemma 1 of Renaud \cite{R2014}, which obtains \eqref{cc0_general} below and the same expression with $Z^{(p+q)}$ replaced with $W^{(p+q)}$. For the proof of (iii), see Appendix \ref{proof_lemma_useful_identit}.
\end{proof}
These expressions will be used frequently in later arguments. In particular, by \eqref{undershoot_expectation} and (\ref{lemma_useful_identity_c}), as obtained in Lemma 1 of Renaud \cite{R2014}, for $x \geq b$ and $p, p+q \geq 0$,
\begin{align} \label{cc0_general}
\begin{split}
&\mathbb{E}_x\left(e^{-p\tau_b^-}Z^{(p+q)}(Y_{\tau_b^-})1_{\{\tau_b^-<\tau_a^+\}}\right) \\
&= \int_0^{a-b}\int_{(-\infty,-y)} Z^{(p+q)}(y+u+b)\left\{\frac{\mathbb{W}^{(p)}(x-b)\mathbb{W}^{(p)}(a-b-y)}{\mathbb{W}^{(p)}(a-b)}-\mathbb{W}^{(p)}(x-b-y)\right\}\Pi({\rm d} u){\rm d} y\\
&=\mathcal{R}^{(p,q)}(x)-\mathcal{R}^{(p,q)}(a)\frac{\mathbb{W}^{(p)}(x-b)}{\mathbb{W}^{(p)}(a-b)},
\end{split}
\end{align}
where
\begin{multline} \label{mathcal_R_def}
\mathcal{R}^{(p,q)}(x):=Z^{(p+q)}(x)-q\overline{\mathbb{W}}^{(p)}(x-b)\\-(p+q)\int_b^x\mathbb{W}^{(p)}(x-y)\left(q\overline{W}^{(p+q)}(y)-\delta W^{(p+q)}(y)\right){\rm d} y, \quad x \in \R, \; p, p+q \geq 0.
\end{multline}
As its special case, we have
\begin{align} \label{cc0}
\begin{split}
\mathbb{E}_x&\left(e^{-q\tau_b^-}Z^{(q)}(Y_{\tau_b^-})1_{\{\tau_b^-<\tau_a^+\}}\right)\\
&= \int_0^{a-b}\int_{(-\infty,-y)} Z^{(q)}(y+u+b)\left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-b-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-b-y)\right\}\Pi({\rm d} u){\rm d} y\\
&=r^{(q)}(x) -\frac{\mathbb{W}^{(q)}(x-b)}{\mathbb{W}^{(q)}(a-b)} r^{(q)}(a),
\end{split}
\end{align}
where
\begin{align*}
r^{(q)}(x)&:= \mathcal{R}^{(q,0)}(x) = Z^{(q)}(x)+q\delta\int_b^x\mathbb{W}^{(q)}(x-y)W^{(q)}(y){\rm d} y, \quad x \in \R, \; q \geq 0.
\end{align*}
Note that similar expressions to \eqref{cc0_general} and \eqref{cc0} with $Z$ replaced with $W$ and $\overline{Z}$ can be computed using \eqref{useful_identity_W} and \eqref{lemma_useful_identity_Z_bar}, respectively.

\section{Resolvent Measures} \label{section_resolvents}
In this section, we study the resolvent measure and, as its byproducts, we also obtain the Laplace transform of the up-crossing time and the expected NPV of $L$ as defined in \eqref{net_present_value}. Let us define the following stopping times
\[
T^+_a:=\inf\{t>0:V_t>a\} \quad \textrm{and} \quad T ^-_a:=\inf\{t>0:V_t<a\}, \quad a >0.
\]
Our derivation of the results relies on the following remark on the connection with the drift-changed process $Y$ and the reflected process $U$.
\begin{remark} \label{remark_connection_Y_U}
Recall the hitting times $\tau_b^-$ of $Y$ and $\kappa_b^+$ of $U$ as in \eqref{first_passage_time} and \eqref{def_kappa_time}, respectively. Almost surely, under $\p_x$ for any $x \in \R$, we have $T_b^+ =\kappa_b^+$ and $V_t = U_t$ on $0 \leq t \leq T_b^+$; similarly, we have $T_b^- =\tau_b^-$ and $V_t = Y_t$ on $0 \leq t < T_b^-$ and $V_{T_b^--} + \Delta X_{T_b^-} = Y_{T_b^-}$ on $\{ T_b^- < \infty\}$.
\end{remark}
\begin{theorem}[Resolvent]\label{resol}
For $q \geq 0$, $x \leq a$ and a Borel set $B$ on $[0,a]$,
\begin{align}\label{resol1}
\begin{split}
\mathbb{E}_x\bigg(\int_0^{T_a^+}e^{-qt}1_{\{ V_t \in B \}}{\rm d} t\bigg) = \int_{B} \Big( w^{(q)}(a,z)\frac{r^{(q)}(x)}{r^{(q)}(a)} - w^{(q)}(x,z) \Big) {\rm d} z,
\end{split}
\end{align}
where, for all $0 \leq z \leq a$,
\begin{align*}
w^{(q)}(x, z) &:= 1_{\{ 0 < z < b \}} \Big(W^{(q)}(x-z) +\delta \int_{b}^x\mathbb{W}^{(q)}(x-y)W^{(q)\prime}(y-z){\rm d} y \Big)+ 1_{\{ b < z < x\}} \mathbb{W}^{(q)}(x-z).
\end{align*}
\end{theorem}
\begin{proof}
We shall prove the result for $q > 0$ because the case $q = 0$ can be obtained by monotone convergence and the continuity of the scale function in $q$ as in Lemma 8.3 of \cite{K}.

(i) Suppose $X$ is of bounded variation, and let $f^{(q)}(x,a;B)$ be the left hand side of \eqref{resol1}. For $x< b$, using the strong Markov property, Remark \ref{remark_connection_Y_U}, \eqref{rsr}, and \eqref{upcrossing_time_reflected},
\begin{align}\label{iden6}
f^{(q)}(x,a;B)&=\mathbb{E}_x\bigg(\int_0^{\kappa_b^+}e^{-qt}1_{\{U_t\in B\}}{\rm d} t\bigg)+\mathbb{E}_x\big(e^{-q\kappa_b^+}\big) f^{(q)}(b,a;B)\notag\\
&=\frac{Z^{(q)}(x)}{Z^{(q)}(b)} [\rho^{(q)}(b;B)+ f^{(q)}(b,a;B)] -\rho^{(q)}(x;B).
\end{align}
On the other hand, for $x \geq b$, again by the strong Markov property and Remark \ref{remark_connection_Y_U}, together with \eqref{resolvent_density} and \eqref{undershoot_expectation},
\begin{align}\label{potentiala}
\begin{split}
f^{(q)}&(x,a;B)=\E_x\Big(\int_0^{\tau_a^+\wedge\tau_b^-}e^{-qt}1_{\{Y_t\in B\}}{\rm d} t\Big)+\E_x\left(e^{-q\tau_b^-}f^{(q)}(Y_{\tau_b^-},a;B)1_{\{\tau_b^-<\tau_a^+\}}\right)\\
&=\int_{b}^{a}1_{\{y\in B\}}\left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-y)\right\}{\rm d} y \\
&+\int_0^{a-b}\int_{(-\infty,-y)} f^{(q)}(y+u+b,a;B)\left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-b-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-b-y)\right\}\Pi({\rm d} u) {\rm d} y.
\end{split}
\end{align}
Here, by setting $x=b$, dividing both sides by $\mathbb{W}^{(q)}(0)/\mathbb{W}^{(q)}(a-b)= [(c-\delta) \mathbb{W}^{(q)}(a-b)]^{-1}$, and substituting (\ref{iden6}),
\begin{align} \label{recursion_W_time_v}
\begin{split}
&(c-\delta) {\mathbb{W}^{(q)}(a-b)} f^{(q)}(b,a;B)\\
&=\int_{b}^{a}1_{\{y\in B\}} \mathbb{W}^{(q)}(a-y) {\rm d} y \\
&+\int_0^{a-b}\int_{(-\infty,-y)}\Big(\frac{Z^{(q)}(y+u+b)}{Z^{(q)}(b)} [\rho^{(q)}(b;B) + f^{(q)}(b,a; B)]-\rho^{(q)}(y+u+b;B) \Big) {\mathbb{W}^{(q)}(a-b-y)}\Pi({\rm d} u) {\rm d} y.
\end{split}
\end{align}
In particular, by Fubini's theorem and \eqref{useful_identity_W},
\begin{align} \label{varphi_time_W_formula}
\begin{split}
&\int_0^{a-b} \int_{(-\infty,-y)}\rho^{(q)}(y+u+b;B)\mathbb{W}^{(q)}(a-b-y) \Pi ({\rm d} u) {\rm d} y\\
&=\int_0^{\infty}\int_{(-\infty,-y)}\int_0^{\infty}W^{(q)}(y+u+b-z)\mathbb{W}^{(q)}(a-b-y)1_{\{z\in B\}}{\rm d} z\Pi({\rm d} u){\rm d} y\\
&=\int_0^b 1_{\{z\in B\}}\int_0^{\infty}\int_{(-\infty,-y)}W^{(q)}(y+u+b-z)\mathbb{W}^{(q)}(a-b-y)\Pi({\rm d} u){\rm d} y {\rm d} z \\
&=\int_0^b 1_{\{z\in B\}}\left((c-\delta) W^{(q)}(b-z)\mathbb{W}^{(q)}(a-b)-W^{(q)}(a-z)-\delta\int_{b}^{a}\mathbb{W}^{(q)}(a-y)W^{(q)\prime}(y-z){\rm d} y\right) {\rm d} z.
\end{split}
\end{align}
By substituting \eqref{lemma_useful_identity_c} and \eqref{varphi_time_W_formula} in \eqref{recursion_W_time_v} and solving for $f^{(q)}(b,a;B)$, we obtain
\begin{align}\label{potentialb}
f^{(q)}(b,a;B)&=Z^{(q)}(b)\frac{\int_{B} w^{(q)}(a,z) {\rm d} z}{r^{(q)}(a)}-\rho^{(q)}(b;B).
\end{align}
Substituting (\ref{potentialb}) in (\ref{iden6}), the claim holds for $x < b$.
\par For $x > b$, the equation (\ref{potentiala}) gives
\begin{align} \label{f_above_b}
\begin{split}
f^{(q)}(x,a;B) &=\int_b^a 1_{\{y\in B\}}\left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-y)\right\}{\rm d} y \\
&+\int_0^{a-b}\int_{(-\infty,-y)}\int_B \Big[w^{(q)}(a,z)\frac{Z^{(q)}(y+u+b)}{r^{(q)}(a)} - W^{(q)}(y+u+b-z)\Big] {\rm d} z \\
&\qquad \times \left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-b-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-b-y)\right\}\Pi({\rm d} u) {\rm d} y,
\end{split}
\end{align}
where the second integral equals, by \eqref{useful_identity_W}, \eqref{cc0}, and \eqref{varphi_time_W_formula},
\begin{align*}
&\frac{\int_B w^{(q)}(a,z){\rm d} z}{r^{(q)}(a)}\int_0^{a-b}\int_{(-\infty,-y)}Z^{(q)}(y+u+b)\left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-b-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-b-y)\right\}\Pi({\rm d} u) {\rm d} y\notag\\
&- \int_0^{a-b}\int_{(-\infty,-y)}\rho^{(q)}(y+u+b;B)\left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-b-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-b-y)\right\}\Pi({\rm d} u) {\rm d} y\notag\\
&= \frac{\int_B w^{(q)}(a,z){\rm d} z}{r^{(q)}(a)}\left(r^{(q)}(x)-\frac{\mathbb{W}^{(q)}(x-b)}{\mathbb{W}^{(q)}(a-b)}r^{(q)}(a)\right)\notag\\
&+ \frac{\mathbb{W}^{(q)}(x-b)}{\mathbb{W}^{(q)}(a-b)} \int_0^b 1_{\{ z \in B\}}\left(W^{(q)}(a-z)+\delta\int_{b}^{a}\mathbb{W}^{(q)}(a-y)W^{(q)\prime}(y-z){\rm d} y\right) {\rm d} z \notag\\
&- \int_0^b 1_{\{z \in B\}}\left(W^{(q)}(x-z)+\delta\int_{b}^{x}\mathbb{W}^{(q)}(x-y)W^{(q)\prime}(y-z){\rm d} y\right){\rm d} z.
\end{align*}
Substituting these identities in \eqref{f_above_b}, after some computations we find that the term involving ${\mathbb{W}^{(q)}(x-b)} / {\mathbb{W}^{(q)}(a-b)}$ vanishes, and the claim follows.

(ii) We now extend the result to the case of unbounded variation. Let $(X^{(n)}; n \geq 1)$ be a strongly approximating sequence (of bounded variation) for $X$ as in Proposition \ref{prop_approximation}, let $V^{(n)}$ be the refracted-reflected process associated with $X^{(n)}$, and let $T_{a,n}^+$ be its first passage time above $a$. First, it can easily be shown that $T_{a,n}^+$ and $T_a^+$ are both finite a.s.\ because they are bounded from above by the upcrossing times at $a$ for the reflected process of the drift-changed process $Y$ (without refraction), as in \eqref{reflected_levy} with $X$ replaced with $Y$ (which are known to be finite a.s.); this will be confirmed in Corollary \ref{corollary_one_sided} below. Now, in order to show that $T_{a,n}^+ \xrightarrow{n \uparrow \infty} T_a^+$ holds a.s., as in page 212 of \cite{B}, it suffices to show
\begin{align}
T_{a-}^+ = T_{a}^+ \quad a.s.
\label{T_a_cont}
\end{align}
Indeed, for $0 < \varepsilon < a-b$, because $V$ evolves as the spectrally negative L\'{e}vy process $Y$ while it takes values in $(b,a)$, we have
\begin{align}
0 \leq T_{a}^+ - T_{a-\varepsilon}^+ \leq (\tau_a^+ 1_{\{ \tau_b^- > \tau_a^+\}}) \circ \theta_{T^+_{a-\varepsilon}} + (1_{\{ \tau_b^- < \tau_a^+ \}} \circ \theta_{T^+_{a-\varepsilon}}) T_a^+,\label{diff_T_a}
\end{align}
where $\theta$ is the time-shift operator. The process $\{ \tau_s^+; s \geq a- \varepsilon \}$ is a subordinator (that is a.s.\ continuous at time $a$) and hence $(\tau_a^+ 1_{\{ \tau_b^- > \tau_a^+\}}) \circ \theta_{T^+_{a-\varepsilon}}$ vanishes in the limit as $\varepsilon \downarrow 0$. On the other hand, by the regularity of the spectrally negative L\'{e}vy process $Y$, we have $1_{\{ \tau_b^- < \tau_a^+ \}} \circ \theta_{T^+_{a-\varepsilon}} \rightarrow 0$ as $\varepsilon \downarrow 0$, showing (together with the finiteness of $T_a^+$) that the second term on the right hand side of \eqref{diff_T_a} vanishes as well. In sum, we have \eqref{T_a_cont} and the convergence $T_{a,n}^+ \xrightarrow{n \uparrow \infty} T_a^+$ holds.

Now, by Remark \ref{remark_strongly_approximating} and dominated convergence, it is sufficient to show that $\p_x ( V_t \in \partial B)$ and $\p_x( \sup_{0\leq s \leq t}V_s = a )$ vanish for Lebesgue a.e.\ $t > 0$; similar results have been obtained in the proof of Theorem 6 of \cite{KL} for the refracted L\'{e}vy process. For the former, we give the following result.
\begin{lemma} \label{lemma_mass_zero}
We have $\p_x ( V_t = y)=0$ for $y \in [0, \infty)$ and Lebesgue a.e.\ $t > 0$.
\end{lemma}
\begin{proof}
We will show that the result holds for the case $x < b$; the case $x \geq b$ can be shown similarly.
\par Let us define a sequence of stopping times $0 =: S_b^-(0) < S_b^+(1) < S_b^- (1) < S_b^+(2) < S_b^-(2) < \cdots < S_b^+(n) < S_b^- (n) < \cdots$ as follows:
\begin{align*}
S_b^+(n)&:=\inf\{t > S_b^-(n-1):V_t > b\} \text{ and } S_b^-(n):=\inf\{t > S_b^+(n):V_t =0\}, \quad n \geq 1.
\end{align*}
Then we have
\begin{align*}
\mathbb{P}_x(V_t=y)&= \p_x \left(V_t=y, t\in[0,S_b^+(1))\right) \\
&+\sum_{n\geq 1}\Big[ \mathbb{P}_x\left(V_t=y, t\in[S_b^+(n),S_b^-(n))\right) + \mathbb{P}_x\left(V_t=y, t\in[S_b^-(n),S_b^+(n+1))\right) \Big].
\end{align*}
By Remark \ref{remark_connection_Y_U} and \eqref{rsr}, we have $\p_x \left(V_t=y, t\in[0,S_b^+(1))\right) = \p_x \left(U_t=y, t \in[0,\kappa_b^+)\right) = 0$ for a.e. $t > 0$. In addition,
\begin{align*}
\mathbb{P}_x\left(V_t=y, t\in[S_b^+(n),S_b^-(n))\right) = \mathbb{P}_x\left[ \p_x (V_t=y, t\in[S_b^+(n),S_b^-(n)) | S_b^+(n) ) \right].
\end{align*}
With $\tilde{A}_{t}$ being a refracted process starting at $b$ that is independent of $\mathcal{F}_{S_b^+(n)}$,
\begin{align*}
\p_x (V_t=y, t\in[S_b^+(n),S_b^-(n)) |S_b^+(n) ) &\leq \p_x (\tilde{A}_{t -S_b^+(n)}=y, t\in[S_b^+(n),\infty) | S_b^+(n) ) \\&= \p_b ( A_{t-s} = y ) |_{s =S_b^+(n)}.
\end{align*}
Hence,
\begin{align*}
\mathbb{P}_x\left(V_t=y, t\in[S_b^+(n),S_b^-(n))\right) \leq \int_0^t \p_x ( S_{b}^+(n) \in {\rm d} s ) \p_b (A_{t-s} = y) = 0.
\end{align*}
Similarly, we have $\mathbb{P}_x\left(V_t=y, t\in[S_b^-(n),S_b^+(n+1))\right) = 0$. Summing up these, we have that $\p_x (V_t = y) = 0$ for a.e. $t > 0$.
\end{proof}
By this lemma, $\p_x \{ V_t \in \partial B\} = 0$ for a.e.\ $t > 0$.
The proof of $\p_x\{ \sup_{0 \leq s \leq t}V_s= a \} = 0$ can be done similarly using the fact that $\p \{ \sup_{0 \leq s \leq t}A_s = a\}=0$ for a.e. $t > 0$, as proved in Theorem 6 of \cite{KL}.
\end{proof}
\begin{remark}
In particular, when $\delta = 0$ or $a=b$ in Theorem \ref{resol}, we recover \eqref{rsr}.
\end{remark}
By taking $a$ to $\infty$ in the previous result, we obtain the following.
\begin{corollary}\label{corollaryresol1}
Fix $x \in \R$ and any Borel set $B$ on $[0,\infty)$.
(i) For $q > 0$,
\begin{align*}
\mathbb{E}_x\bigg(\int_0^{\infty}e^{-qt}1_{\{ V_t \in B \}}{\rm d} t\bigg) &= \int_B \Big( \frac { e^{-\varphi(q)z}1_{\{ b < z \}} +\delta 1_{\{ 0 < z < b \}}\int_{b}^{\infty}e^{-\varphi(q)u}W^{(q)\prime}(u-z){\rm d} u } {\delta q \int_b^\infty e^{-\varphi(q)y}W^{(q)}(y){\rm d} y}{r^{(q)}(x)} - w^{(q)}(x,z) \Big) {\rm d} z.
\end{align*}
(ii) For $q = 0$ and $\psi_Y'(0+) > 0$ (equivalently, $Y_t \xrightarrow{t \uparrow \infty} \infty$ a.s.),
\begin{align*}
\mathbb{E}_x\bigg(\int_0^{\infty} 1_{\{ V_t \in B \}}{\rm d} t\bigg) &= \int_B \Big( \frac { 1_{\{ b < z \}} + 1_{\{ 0 < z < b \}} \big( 1- \delta^{-1}W (b-z) \big) } {\psi_Y'(0+)} - w^{(0)}(x,z) \Big){\rm d} z.
\end{align*}
For $q = 0$ and $\psi_Y'(0+) \leq 0$, it becomes infinity whenever ${\rm Leb}(B) > 0$.
\end{corollary}
\begin{proof}
(i) We have
\begin{align*}
\frac {w^{(q)}(a, z)} {r^{(q)}(a)} &= \frac {\mathbb{W}^{(q)}(a)^{-1} w^{(q)}(a, z)} {\mathbb{W}^{(q)}(a)^{-1} r^{(q)}(a)} \xrightarrow{a \uparrow \infty} \frac { e^{-\varphi(q)z}1_{\{ b < z \}} +\delta 1_{\{ 0 < z < b \}}\int_{b}^{\infty}e^{-\varphi(q)u}W^{(q)\prime}(u-z){\rm d} u } {\delta q \int_b^\infty e^{-\varphi(q)y}W^{(q)}(y){\rm d} y}.
\end{align*}
Here the convergence follows by dominated convergence, thanks to the bound $\mathbb{W}^{(q)}(a-y)/\mathbb{W}^{(q)}(a-b) \leq e^{-\varphi(q)(y-b)}$ from \eqref{W_q_limit}, together with the facts, from Exercise 8.5 (i) in \cite{K} and \eqref{W_q_limit}, that for all $z \in \R$,
\[
\lim_{a\to\infty}\frac{Z^{(q)}(a)}{W^{(q)}(a)}=\frac{q}{\Phi(q)}, \quad\lim_{a\to\infty}\frac{\mathbb{W}^{(q)}(a-z)}{\mathbb{W}^{(q)}(a)}=e^{-\varphi(q)z}, \quad \lim_{a\to\infty}\frac{W^{(q)}(a-z)}{\mathbb{W}^{(q)}(a)}=\lim_{a\to\infty}\frac{Z^{(q)}(a-z)}{\mathbb{W}^{(q)}(a)}=0.
\]
(ii) We take $q \rightarrow 0$ in (i) by monotone convergence. We have
\begin{align*}
\delta q \int_b^\infty e^{-\varphi(q)y}W^{(q)}(y){\rm d} y = \delta q \Bigg( \frac 1 {\psi(\varphi(q))-q }- \int_0^b e^{-\varphi(q)y}W^{(q)}(y){\rm d} y \Bigg) = \frac q {\varphi(q)} - \delta q \int_0^b e^{-\varphi(q)y}W^{(q)}(y){\rm d} y.
\end{align*}
When $\psi_Y'(0+) \geq 0$, then $\varphi(0) = 0$ and hence $\delta q \int_b^\infty e^{-\varphi(q)y}W^{(q)}(y){\rm d} y \xrightarrow{q \downarrow 0} \psi_Y'(0+)$; when $\psi_Y'(0+) < 0$, then $\varphi(0) > 0$ and $\delta q \int_b^\infty e^{-\varphi(q)y}W^{(q)}(y){\rm d} y \xrightarrow{q \downarrow 0} 0$.
On the other hand, using (8.20) of \cite{K},
\begin{align*}
\int_{b}^{\infty}e^{-\varphi(q)u}W^{(q)\prime}(u-z){\rm d} u &= e^{-\varphi(q) z}\int_{b-z}^{\infty}e^{-\varphi(q)y}W^{(q)\prime}(y){\rm d} y \\
&= e^{-\varphi(q) z} \Big( \frac {\varphi(q)} {\psi(\varphi(q))-q} - W^{(q)} (0) - \int^{b-z}_0e^{-\varphi(q)y}W^{(q)\prime}(y){\rm d} y \Big) \\
&= e^{-\varphi(q) z} \Big( \delta^{-1} - W^{(q)} (0) - \int^{b-z}_0e^{-\varphi(q)y}W^{(q)\prime}(y){\rm d} y \Big) \\
&\xrightarrow{q \downarrow 0} e^{-\varphi(0) z} \Big( \delta^{-1} - W (0) - \int^{b-z}_0e^{-\varphi(0)y}W^{\prime}(y){\rm d} y \Big),
\end{align*}
which simplifies to $\delta^{-1} - W (b-z)$ when $\varphi(0) = 0$. Combining these, we have the result for $q = 0$.
\end{proof}
Another corollary can be obtained by setting $B = [0,a]$ in \eqref{resol1}.
\begin{corollary}[One-sided exit] \label{corollary_one_sided}
For any $q\geq0$ and $x \leq a$, we have
\begin{align}
\mathbb{E}_x\left(e^{-qT_a^+} \right)=\frac{r^{(q)}(x)}{r^{(q)}(a)}.\notag
\end{align}
In particular, $T_a^+ < \infty$ $\p_x$-a.s.
\end{corollary}
\begin{proof}
(i) By \eqref{RLqp}, we get
\begin{align*}
\delta\int_b^a\int_0^b\mathbb{W}^{(q)}(a-u)W^{(q)\prime}(u-z){\rm d} z {\rm d} u &= -\delta\int_b^a\mathbb{W}^{(q)}(a-u)W^{(q)}(u-b){\rm d} u+\delta\int_b^a\mathbb{W}^{(q)}(a-u)W^{(q)}(u){\rm d} u \\
&= -\overline{\mathbb{W}}^{(q)}(a-b) + \overline{W}^{(q)}(a-b) +\delta\int_b^a\mathbb{W}^{(q)}(a-u)W^{(q)}(u){\rm d} u.
\end{align*}
Hence,
\begin{align*}
\int_0^a w^{(q)}(a, z) {\rm d} z &=\overline{W}^{(q)}(a) + \delta \int_b^a\mathbb{W}^{(q)}(a-u)W^{(q)}(u){\rm d} u = \frac {r^{(q)}(a)-1} q.
\end{align*}
Therefore, substituting the above expression in \eqref{resol1} with $B = [0,a]$, we get
\begin{multline*}
\mathbb{E}_x\bigg(\int_0^{T_a^+}e^{-qt}{\rm d} t\bigg) = \bigg(\int_0^a w^{(q)}(a,z) {\rm d} z \bigg) \frac{r^{(q)}(x)}{r^{(q)}(a)} - \int_0^x w^{(q)}(x,z) {\rm d} z \\
=\left( \frac {r^{(q)}(a)-1} q\right) \frac{r^{(q)}(x)}{r^{(q)}(a)} - \frac {r^{(q)}(x)-1} q = q^{-1} \Big( 1- \frac {r^{(q)}(x)} {r^{(q)}(a)}\Big).
\end{multline*}
The result follows by noting that $\mathbb{E}_x\big(e^{-qT_a^+}\big)=1-q\mathbb{E}_x (\int_0^{T_a^+} e^{-qt} {\rm d} t )$.

(ii) The finiteness of $T_a^+$ holds by setting $q = 0$ and noting that $r^{(0)}(x)=r^{(0)}(a)=1$.
\end{proof}
\begin{remark}
In particular, when $\delta = 0$ in Corollary \ref{corollary_one_sided}, we recover \eqref{upcrossing_time_reflected}.
\end{remark}
We conclude this section with the expression of the expected NPV of $L$ as discussed in Section \ref{subsection_dividends}; the following are immediate by Theorem \ref{resol} and Corollary \ref{corollaryresol1}.
\begin{corollary}
For any $q\geq0$ and $x\leq a$, we have
\begin{align*}
\mathbb{E}_x\left(\int_0^{T_a^+}e^{-qt}{\rm d} L_t\right) &= \delta \overline{\mathbb{W}}^{(q)}(a-b) \frac{r^{(q)}(x)}{r^{(q)}(a)}- \delta \overline{\mathbb{W}}^{(q)}(x-b).
\end{align*}
\end{corollary}
\begin{corollary} \label{cor_dividend_infty}
Fix $x \in \R$. For $q > 0$, we have
\begin{align}
\mathbb{E}_x&\left(\int_0^{\infty}e^{-qt}{\rm d} L_t\right)={e^{-\varphi(q)b}} \frac { r^{(q)}(x)} {\displaystyle \varphi(q)q\int_b^{\infty}e^{-\varphi(q)y}W^{(q)}(y){\rm d} y} - \delta \overline{\mathbb{W}}^{(q)}(x-b).\notag
\end{align}
For $q = 0$, it becomes infinity.
\end{corollary}

\section{Costs of capital injection} \label{section_capital_injection}
In this section, we derive the expression for the second item of \eqref{net_present_value}, which corresponds to the NPV of capital injection in the insurance context.
\begin{proposition} \label{prop_capital_injection}
Suppose $\psi'(0+) > -\infty$ and $q > 0$. For any $x\leq a$, we have
\begin{align}\label{capital cost}
\begin{split}
\mathbb{E}_x&\left(\int_{[0,T_a^+]}e^{-qt}{\rm d} R_t\right)=\tilde{r}^{(q)}(a) \frac{r^{(q)}(x)} {r^{(q)}(a)}- \tilde{r}^{(q)}(x),
\end{split}
\end{align}
where
\begin{align*}
\tilde{r}^{(q)}(x) &:= \overline{Z}^{(q)}(x) + \frac {\psi'(0+)} q +\delta\int_b^x\mathbb{W}^{(q)}(x-y)Z^{(q)}(y){\rm d} y, \quad x \in \R.
\end{align*}
\end{proposition}
\begin{proof}
(i) Assume that $X$ is of bounded variation, and let $g^{(q)}(x,a)$ be the left hand side of \eqref{capital cost}. For $x<b$, by an application of the strong Markov property, Remark \ref{remark_connection_Y_U}, \eqref{upcrossing_time_reflected}, and \eqref{capital_injection_identity_SN},
\begin{align}\label{12}
g^{(q)}(x,a)&=\mathbb{E}_x\Big(\int_{[0,T_b^+]} e^{-qt}{\rm d} R_t\Big)+\mathbb{E}_x(e^{-qT_b^+})g^{(q)}(b,a)\notag\\
&=-\Big( \overline{Z}^{(q)}(x)+\frac{\psi'(0+)}{q} \Big)+\frac{Z^{(q)}(x)}{Z^{(q)}(b)}\left(\overline{Z}^{(q)}(b)+\frac{\psi'(0+)}{q}+g^{(q)}(b,a)\right).
\end{align}
\par Now in the case where $x\geq b$, we obtain, using (\ref{12}), Remark \ref{remark_connection_Y_U}, and the fact that $R$ stays constant on $[0, T_b^-)$,
\begin{align} \label{g_x_a_probabilistic}
\begin{split}
g^{(q)}(x,a) &=\mathbb{E}_x \big(e^{-q\tau_b^-}g^{(q)}(Y_{\tau_b^-},a) 1_{\{\tau_b^-<\tau_a^+\}} \big)\\
&=\mathbb{E}_x\left(e^{-q\tau_b^-}\left\{-\overline{Z}^{(q)}(Y_{\tau_b^-})-\frac{\psi'(0+)}{q}+\frac{Z^{(q)}(Y_{\tau_b^-})}{Z^{(q)}(b)}\left(\overline{Z}^{(q)}(b)+\frac{\psi'(0+)}{q}+g^{(q)}(b,a)\right)\right\} 1_{\{\tau_b^-<\tau_a^+\}}\right).
\end{split}
\end{align}
Here, by using \eqref{undershoot_expectation} and \eqref{lemma_useful_identity_Z_bar}, we obtain
\begin{align}\label{cc1}
\begin{split}
\mathbb{E}_x&\left(e^{-q\tau_b^-}\overline{Z}^{(q)}(Y_{\tau_b^-}) 1_{\{\tau_b^-<\tau_a^+ \}}\right) \\
&=\int_0^{a-b}\int_{(-\infty,-y)}\overline{Z}^{(q)}(y+u+b)\left\{\frac{\mathbb{W}^{(q)}(x-b)\mathbb{W}^{(q)}(a-b-y)}{\mathbb{W}^{(q)}(a-b)}-\mathbb{W}^{(q)}(x-b-y)\right\}\Pi({\rm d} u){\rm d} y \\
&=\overline{Z}^{(q)}(x)+\delta\int_b^x\mathbb{W}^{(q)}(x-y)Z^{(q)}(y){\rm d} y-\psi'(0+)\overline{\mathbb{W}}^{(q)}(x-b)\\
&-\frac{\mathbb{W}^{(q)}(x-b)}{\mathbb{W}^{(q)}(a-b)}\Big(\overline{Z}^{(q)}(a)+\delta\int_b^a\mathbb{W}^{(q)}(a-y)Z^{(q)}(y){\rm d} y- \psi'(0+)\overline{\mathbb{W}}^{(q)}(a-b)\Big).
\end{split}
\end{align}
Substituting \eqref{laplace_in_terms_of_z}, \eqref{cc0}, and \eqref{cc1} in \eqref{g_x_a_probabilistic}, we get, for $x\geq b$,
\begin{align} \label{g_x_a_above_b}
g^{(q)}(x,a) &=-\tilde{r}^{(q)}(x)+\frac{\mathbb{W}^{(q)}(x-b)}{\mathbb{W}^{(q)}(a-b)}\tilde{r}^{(q)}(a) +\frac {\tilde{r}^{(q)}(b)+g^{(q)}(b,a)} {Z^{(q)}(b)} \left(r^{(q)}(x)-\frac{\mathbb{W}^{(q)}(x-b)}{\mathbb{W}^{(q)}(a-b)} r^{(q)}(a)\right).
\end{align}
Setting $x=b$, we obtain
\begin{align*}
g^{(q)}(b,a) &=-\tilde{r}^{(q)}(b)+\frac {\tilde{r}^{(q)}(a)} { (c-\delta) \mathbb{W}^{(q)}(a-b)} +\frac {\tilde{r}^{(q)}(b)+g^{(q)}(b,a) } {Z^{(q)}(b)} \left(Z^{(q)}(b)-\frac {r^{(q)}(a)} {(c-\delta)\mathbb{W}^{(q)}(a-b)} \right).
\end{align*}
Hence we have
\begin{align}\label{above expression}
g^{(q)}(b,a)=\frac{\tilde{r}^{(q)}(a)}{r^{(q)}(a)}Z^{(q)}(b)-\tilde{r}^{(q)}(b).
\end{align}
By substituting (\ref{above expression}) in \eqref{12}, we obtain \eqref{capital cost} for $x< b$; on the other hand, by using (\ref{above expression}) in \eqref{g_x_a_above_b}, we get \eqref{capital cost} for $x \geq b$.

(ii) We now extend this result to the case in which $X$ is of unbounded variation. Let $V^{(n)}$, $Y^{(n)}$, $R^{(n)}$, $L^{(n)}$, and $T_{a,n}^+$ be the corresponding objects for $X^{(n)}$ (of bounded variation), where $(X^{(n)}; n \geq 1)$ is a strongly approximating sequence for $X$. Without loss of generality, we can choose such a sequence so that the Poisson random measure $N^{(n)}({\rm d} s, {\rm d} u)$ of $X^{(n)}$, for all $n \in \mathbb{N}$, coincides with the Poisson random measure $N({\rm d} s, {\rm d} u)$ of $X$ for $u < -1$ a.s. Recall also, as in the proof of Theorem \ref{resol}, that $T_{a,n}^+ \xrightarrow{n \uparrow \infty} T_a^+$ a.s. Define, for $n \in \mathbb{N}$,
\begin{align*}
g_n^{(q)}(x,a) := \mathbb{E}_x\Big(\int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)}\Big).
\end{align*}
In view of the expression \eqref{capital cost} for the bounded variation case and by Remark \ref{remark_strongly_approximating}, we have
\begin{align*}
\|g \| := \sup_{n\in\mathbb{N}} \sup_{0 \leq y \leq a}g^{(q)}_n(y,a) < \infty.
\end{align*}
We define $\tau_{-M,n}^{-} := \inf \{ t > 0: Y_t^{(n)} < - M\}$ and consider the decomposition, for $M > 0$,
\begin{align} \label{g_decomposition}
g^{(q)}_n(x,a) &=\E_x \Big( \int_{[0, T_{a,n}^+ \wedge \tau_{-M,n}^-]} e^{-qs} {\rm d} R_s^{(n)} \Big)+\E_x \Big( 1_{\{T_{a,n}^+ > \tau_{-M,n}^-\}} \int_{(\tau_{-M,n}^-, T_{a,n}^+]} e^{-qs} {\rm d} R_s^{(n)}\Big).
\end{align}
(1) We shall first show that the second expectation can be made arbitrarily small uniformly in $n \geq N$ by choosing $M$ and $N$ sufficiently large. We have, with $(\mathcal{F}_t^{(n)}; t \geq 0)$ being the natural filtration of $X^{(n)}$,
\begin{align} \label{bound_R_tail}
\begin{split}
\E_x \Big( 1_{\{T_{a,n}^+ > \tau_{-M,n}^-\}} \int_{(\tau_{-M,n}^-, T_{a,n}^+]} e^{-qs} {\rm d} R_s^{(n)} \Big) &= \E_x \Big[ \E_x \Big( 1_{\{T_{a,n}^+ > \tau_{-M,n}^-\}} \int_{(\tau_{-M,n}^-, T_{a,n}^+]} e^{-qs} {\rm d} R_s^{(n)} \Big| \mathcal{F}_{\tau_{-M,n}^-}^{(n)}\Big)\Big] \\
&\leq \E_x\big( e^{-q \tau_{-M,n}^-}1_{\{ \tau_{-M,n}^-<T_{a,n}^+\}} \big) \|g \|.
\end{split}
\end{align}
Bounded convergence gives
\begin{align*}
\E_x \big( e^{-q \tau_{-M,n}^-}1_{\{ \tau_{-M,n}^-<T_{a,n}^+\}} \big) &\xrightarrow{n \uparrow \infty}\E_x \big( e^{-q \tau_{-M}^-}1_{\{ \tau_{-M}^-<T_a^+\}} \big), \\
\E_x \big( e^{-q \tau_{-M}^-}1_{\{ \tau_{-M}^-<T_a^+\}} \big) \leq \E_x \big( e^{-q \tau_{-M}^-}1_{\{ \tau_{-M}^-< \infty\}} \big) &\xrightarrow{M \uparrow \infty} 0.
\end{align*}
This means that, for any $\varepsilon > 0$, we can choose sufficiently large $N \in \mathbb{N}$ and $M > 0$ such that
\[
\sup_{n \geq N} \E_x \big( e^{-q \tau_{-M,n}^-}1_{\{ \tau_{-M,n}^-<T_{a,n}^+\}} \big) < \varepsilon / \|g \|.
\]
Namely, the left hand side of \eqref{bound_R_tail} can be made arbitrarily small by choosing sufficiently large $N$ and $M$.

(2) Consider now the first expectation in \eqref{g_decomposition}. On $[0, T_{a,n}^+]$, we have $V_t^{(n)} = X_t^{(n)} + R_t^{(n)} - L_t^{(n)} \leq a$ and hence
\begin{align}
R_t^{(n)} \leq a + L_t^{(n)} - X_t^{(n)} \leq a - Y_t^{(n)} \leq a - \inf_{0 \leq s \leq t}Y_s^{(n)}. \label{R_bound}
\end{align}
On $[0, T_{a,n}^+ \wedge \tau_{-M,n}^{-})$, $R_t^{(n)} \leq a + M$ and therefore $\int_{[0, T_{a,n}^+ \wedge \tau_{-M,n}^{-})} e^{-qs} {\rm d} R_s^{(n)}$ is bounded. Furthermore, by noting that $\Delta R_{T^+_{a,n}}^{(n)} = 0$ and by \eqref{R_bound},
\begin{multline*}
\int_{\{ T_{a,n}^+ \wedge \tau_{-M,n}^{-} \}} e^{-qs} {\rm d} R_s^{(n)} =\int_{\{\tau_{-M,n}^{-}\}} e^{-qs} {\rm d} R_s^{(n)} 1_{\{\tau_{-M,n}^{-} < T_{a,n}^+\}} \\
\leq e^{-q \tau_{-M,n}^-} \big|R_{\tau_{-M,n}^-}^{(n)} \big| 1_{\{\tau_{-M,n}^{-} < T_{a,n}^+\}} \leq e^{-q \tau_{-M,n}^-} (\big|Y_{\tau_{-M,n}^-}^{(n)} \big| + a).
\end{multline*}
By $N({\rm d} s, {\rm d} u) = N^{(n)}({\rm d} s, {\rm d} u)$ for $u < -1$ and with $\underline{Y}^{(n)}$ the running infimum process of $Y^{(n)}$, we have, uniformly in $n \in \mathbb{N}$,
\begin{align*}
e^{-q\tau_{-M,n}^-} \big|Y_{\tau_{-M,n}^-}^{(n)} \big|1_{\big\{Y_{\tau_{-M,n}^-}^{(n)} < -M-1\big\}} &= \int_0^\infty \int_{(-\infty,0)} e^{-qs} (-u-Y_{s-}^{(n)} ) 1_{ \{\underline{Y}_{s-}^{(n)} > -M, Y_{s-}^{(n)} + u < - M-1 \} } N^{(n)}({\rm d} s, {\rm d} u) \\
&\leq \int_0^\infty \int_{(-\infty, -1)} e^{-qs} (|u|+M) 1_{ \{\underline{Y}_{s-}^{(n)} > -M, Y_{s-}^{(n)} + u < - M-1 \} } N^{(n)}({\rm d} s, {\rm d} u) \\
&= \int_0^\infty \int_{(-\infty,-1)} e^{-qs} (|u|+M) 1_{ \{\underline{Y}_{s-}^{(n)} > -M, Y_{s-}^{(n)} + u < - M-1 \} } N({\rm d} s, {\rm d} u) \\
&\leq \int_0^\infty \int_{(-\infty,-1)} e^{-qs} (|u|+M) N({\rm d} s, {\rm d} u),
\end{align*}
which is integrable because, by the assumption that $\psi'(0+) > -\infty$,
\begin{align*}
\E_x \Big( \int_0^\infty \int_{(-\infty,-1)} e^{-qs} |u| N({\rm d} s, {\rm d} u) \Big) = \E_x \Big( \int_0^\infty \int_{(-\infty,-1)} e^{-qs} |u| \Pi({\rm d} u) {\rm d} s \Big) = q^{-1} \int_{(-\infty,-1)} |u| \Pi({\rm d} u) < \infty.
\end{align*}
In sum, $\int_{[0, T_{a,n}^+ \wedge \tau_{-M,n}^{-}]} e^{-qs} {\rm d} R_s^{(n)}$ is bounded in $n$ by an integrable random variable. Hence, Fatou's lemma gives
\begin{align*}
\limsup_{n \rightarrow \infty}\E_x \Big( \int_{[0, T_{a,n}^+ \wedge \tau_{-M,n}^{-}]} e^{-qs} {\rm d} R_s^{(n)} \Big) \leq \E_x \Big( \limsup_{n \rightarrow \infty} \int_{[0, T_{a,n}^+ \wedge \tau_{-M,n}^{-}]} e^{-qs} {\rm d} R_s^{(n)} \Big), \quad M > 0.
\end{align*}
Combining with (1) and Fatou's lemma, we have that
\begin{multline} \label{bound_fatou}
\mathbb{E}_x\Big( \liminf_{n \rightarrow \infty}\int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)}\Big) \leq \liminf_{n \rightarrow \infty} \mathbb{E}_x\Big( \int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)}\Big) \\
\leq \limsup_{n \rightarrow \infty} \mathbb{E}_x\Big( \int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)}\Big) \leq \mathbb{E}_x\Big( \limsup_{n \rightarrow \infty}\int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)}\Big).
\end{multline}
To see how the last inequality holds, by Fatou's lemma, for any $\varepsilon > 0$, we can choose sufficiently large $M$ such that
\begin{align*}
&\limsup_{n \rightarrow \infty} \mathbb{E}_x\Big( \int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)}\Big) \leq \limsup_{n \rightarrow \infty} \mathbb{E}_x\Big( \int_{[0,T_{a,n}^{+} \wedge \tau_{-M,n}^{-} ]}e^{-qt}{\rm d} R_t^{(n)}\Big) + \varepsilon \\
&\leq \mathbb{E}_x\Big( \limsup_{n \rightarrow \infty}\int_{[0,T_{a,n}^{+} \wedge \tau_{-M,n}^{-} ]}e^{-qt}{\rm d} R_t^{(n)}\Big) + \varepsilon \leq \mathbb{E}_x\Big( \limsup_{n \rightarrow \infty}\int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)}\Big) + \varepsilon.
\end{align*}
(3) In order to finish the proof, in view of \eqref{bound_fatou}, it remains to show that almost surely
\begin{align*}
\lim_{n \rightarrow \infty }\int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)} = \int_{[0,T_{a}^{+}]}e^{-qt}{\rm d} R_t.
\end{align*}
Integration by parts gives
\begin{align*}
\int_{[0,T_{a,n}^{+}]}e^{-qt}{\rm d} R_t^{(n)} = e^{-q T_{a,n}^+} R_{T_{a,n}^+}^{(n)} + q \int_0^{T_{a,n}^{+}}e^{-qt}R_t^{(n)} {\rm d} t.
\end{align*}
In view of Proposition \ref{prop_approximation}, we have $R_t^{(n)} \xrightarrow{n \uparrow \infty} R_t$, $t \geq 0$, and, as in the proof of Theorem \ref{resol}, $T_{a,n}^+ \xrightarrow{n \uparrow \infty} T_a^+$ a.s.\ (recall that $T_a^+ < \infty$ a.s.\ by Corollary \ref{corollary_one_sided}). In addition, the triangle inequality gives
\begin{align*}
\big|e^{-q T_{a,n}^+} R_{T_{a,n}^+}^{(n)} - e^{-q T_{a}^+} R_{T_{a}^+} \big| \leq \big|e^{-q T_{a,n}^+} R_{T_{a,n}^+}^{(n)} - e^{-q T_{a,n}^+} R_{T_{a,n}^+} \big| + \big|e^{-q T_{a,n}^+} R_{T_{a,n}^+} - e^{-q T_{a}^+} R_{T_{a}^+} \big|.
\end{align*}
The first term on the right hand side vanishes by the convergence of $R_t^{(n)}$ and $T_{a,n}^+$. To see how the latter also vanishes, set $\underline{\sigma} := \sup \{ t < T_a^+: V_t =0\}$ (with $\sup \varnothing = 0$) and $\overline{\sigma} := \inf \{ t > T_a^+: V_t =0\}$; then $R_t$ does not increase in the interval $(\underline{\sigma}, \overline{\sigma})$ (clearly $\underline{\sigma} < T_a^+ < \overline{\sigma}$) when $T^+_{a} > 0$; in the case when $T^+_{a} =0$, it does not increase in the interval $[0, \overline{\sigma})$ (clearly $0= T_a^+ < \overline{\sigma}$). This together with the convergence of $T^+_{a,n}$ to $T^+_a$ shows that it indeed vanishes.
\end{proof}
\begin{remark}
In Proposition \ref{prop_capital_injection}, when $\delta = 0$ or $a=b$, we have $r^{(q)}(x)=Z^{(q)}(x)$ and $\tilde{r}^{(q)}(x) =\overline{Z}^{(q)}(x) + {\psi'(0+)} / q$, and hence we recover \eqref{capital_injection_identity_SN}.
\end{remark}
By taking $a$ to $\infty$ in identity (\ref{capital cost}), we obtain the following result.
\begin{corollary}
Assume $\psi'(0+) > -\infty$ and $q > 0$. For any $x \in \R$, we have
\begin{align*}
\mathbb{E}_x\left(\int_{[0,\infty)} e^{-qt}{\rm d} R_t\right) &=-\tilde{r}^{(q)}(x) +\left(\int_b^\infty e^{-\varphi(q) (y-b)}Z^{(q)}(y){\rm d} y \right)\frac{r^{(q)}(x)} {q \int_b^\infty e^{-\varphi(q) (y-b)}W^{(q)}(y){\rm d} y}.
\end{align*}
\end{corollary}
\begin{proof}
We have, by \eqref{W_q_limit},
\begin{align*}
\lim_{a \rightarrow \infty} \frac{\tilde{r}^{(q)}(a)} {r^{(q)}(a)} &=\lim_{a \rightarrow \infty}\frac {\int_b^a\mathbb{W}^{(q)}(a-y)Z^{(q)}(y){\rm d} y } {q \int_b^a\mathbb{W}^{(q)}(a-y)W^{(q)}(y){\rm d} y}.
\end{align*}
Here, because $\mathbb{W}^{(q)}(a-y)/\mathbb{W}^{(q)}(a-b) \leq e^{-\varphi(q)(y-b)}$ by \eqref{W_q_limit}, dominated convergence gives
\begin{align*}
\lim_{a\to\infty}\mathbb{W}^{(q)}(a-b)^{-1}\int_b^a\mathbb{W}^{(q)}(a-y)Z^{(q)}(y){\rm d} y &=\int_b^{\infty}e^{-\varphi(q)(y-b)}Z^{(q)}(y){\rm d} y, \\
\lim_{a\to\infty}\mathbb{W}^{(q)}(a-b)^{-1}\int_b^a\mathbb{W}^{(q)}(a-y)W^{(q)}(y){\rm d} y &=\int_b^{\infty}e^{-\varphi(q)(y-b)}W^{(q)}(y){\rm d} y.
\end{align*}
This shows the claim.
\end{proof}

\section{Occupation times} \label{section_occupation_time}
In this section, we are interested in computing the occupation time of the process $V$ above and below the level of refraction $b$. Namely, for $a > 0$, we compute the joint Laplace transform of the stopping time $T_a^+$ and the following quantities:
\[
\int_0^{T_a^+}1_{\{V_s<b\}}{\rm d} s\qquad\text{and}\qquad \int_0^{T_a^+}1_{\{V_s> b\}}{\rm d} s.
\]
Recall, as in Corollary \ref{corollary_one_sided}, that $T_a^+ < \infty$ a.s.
\begin{proposition}\label{occupation_time_below}
For any $p\geq0$, $q\geq -p$, $a > 0$ and $x \leq a$,
\begin{align}
\E_x\left(e^{-pT_a^+-q\int_0^{T_a^+}1_{\{V_s<b\}}{\rm d} s} \right)=\frac{\mathcal{R}^{(p,q)}(x)}{\mathcal{R}^{(p,q)}(a)}, \label{occupation_result_1} \\
\E_x\left(e^{-pT_a^+-q\int_0^{T_a^+}1_{\{V_s> b\}}{\rm d} s} \right)=\frac{\mathcal{L}^{(p,q)}(x)}{\mathcal{L}^{(p,q)}(a)}, \label{occupation_result_2}
\end{align}
where $\mathcal{R}^{(p,q)}$ is defined as in \eqref{mathcal_R_def} and
\begin{align*}
\mathcal{L}^{(p,q)}(x)&:= \mathcal{R}^{(p+q,-q)}(x) \\
&= Z^{(p)}(x)+q\overline{\mathbb{W}}^{(p+q)}(x-b) +p\int_b^x\mathbb{W}^{(p+q)}(x-y)\left(q\overline{W}^{(p)}(y)+\delta W^{(p)}(y)\right){\rm d} y, \quad x \in \R.
\end{align*}
In particular, for $x \leq 0$, $\mathcal{R}^{(p,q)}(x) = \mathcal{L}^{(p,q)}(x) = 1$.
\end{proposition}
\begin{proof}
(i) We will show the result for the case in which $X$ is of bounded variation; the case of unbounded variation can be obtained by Remark \ref{remark_strongly_approximating} and dominated convergence. Let us define, for each $x \leq a$, $h^{(p,q)}(x,a)$ as the left hand side of \eqref{occupation_result_1}. First, consider the case $x<b$. By an application of the strong Markov property, Remark \ref{remark_connection_Y_U}, and \eqref{upcrossing_time_reflected},
\begin{align} \label{p_x_delta_below_b}
h^{(p,q)}(x,a)=\E_x\left[e^{-(p+q)\kappa_b^+} \right] \E_b\left(e^{-pT_a^+-q\int_0^{T_a^+}1_{\{V_s<b\}}{\rm d} s} \right) =h^{(p,q)}(b,a)\frac{Z^{(p+q)}(x)}{Z^{(p+q)}(b)}.
\end{align}
Second, for the case $x\geq b$, we obtain, again by the strong Markov property and Remark \ref{remark_connection_Y_U},
\begin{align} \label{p_x_delta_above_b}
\begin{split}
h^{(p,q)}(x,a)&=\E_x\left(e^{-p\tau_a^+} 1_{\{\tau_a^+< \tau_b^- \}}\right)+\E_x\left[e^{-p\tau_b^-}\E_{Y_{\tau_b^-}}\left(e^{-p T_a^+-q\int_0^{T_a^+}1_{\{V_s<b\}}{\rm d} s} \right) 1_{\{\tau_b^-<\tau_a^+\}}\right]\\
&=\frac{\mathbb{W}^{(p)}(x-b)}{\mathbb{W}^{(p)}(a-b)}+\E_x\left(e^{-p\tau_b^-}h^{(p,q)}(Y_{\tau_b^-},a) 1_{\{\tau_b^-<\tau_a^+ \}}\right).
\end{split}
\end{align}
In order to compute the expectation on the right hand side, by \eqref{cc0_general} and \eqref{p_x_delta_below_b},
\begin{align*}
\begin{split}
\E_x\left(e^{-p\tau_b^-}h^{(p,q)}(Y_{\tau_b^-},a) 1_{\{\tau_b^-<\tau_a^+ \}}\right) &=\frac{h^{(p,q)}(b,a)}{Z^{(p+q)}(b)} \mathbb{E}_x\left(e^{-p\tau_b^-}Z^{(p+q)}(Y_{\tau_b^-})1_{\{\tau_b^-<\tau_a^+\}}\right) \\
&= \frac{h^{(p,q)}(b,a)}{Z^{(p+q)}(b)} \Big( \mathcal{R}^{(p,q)}(x)-\mathcal{R}^{(p,q)}(a)\frac{\mathbb{W}^{(p)}(x-b)}{\mathbb{W}^{(p)}(a-b)} \Big).
\end{split}
\end{align*}
Substituting this in \eqref{p_x_delta_above_b} gives
\begin{align} \label{p_x_total}
\begin{split}
h^{(p,q)}(x,a) &=\frac{\mathbb{W}^{(p)}(x-b)}{\mathbb{W}^{(p)}(a-b)}+\frac{h^{(p,q)}(b,a)}{Z^{(p+q)}(b)} \Big( \mathcal{R}^{(p,q)}(x)-\mathcal{R}^{(p,q)}(a)\frac{\mathbb{W}^{(p)}(x-b)}{\mathbb{W}^{(p)}(a-b)} \Big).
\end{split}
\end{align}
Setting $x=b$ and solving for $h^{(p,q)}(b,a)$, we have
\begin{align*}
h^{(p,q)}(b,a)=\frac{Z^{(p+q)}(b)}{\mathcal{R}^{(p,q)}(a)}.
\end{align*}
This together with \eqref{p_x_delta_below_b} and \eqref{p_x_total} completes the proof of \eqref{occupation_result_1}.

(ii) By Lemma \ref{lemma_mass_zero}, it is clear that $\int_0^{T_a^+}1_{\{V_s= b\}}{\rm d} s = 0$ a.s. Therefore
\begin{align}
\E_x\left(e^{-pT_a^+-q\int_0^{T_a^+}1_{\{V_s>b\}}{\rm d} s}\right)=\E_x\left(e^{-(p+q)T_a^++q\int_0^{T_a^+}1_{\{V_s< b\}}{\rm d} s} \right), \notag
\end{align}
and hence the equation \eqref{occupation_result_2} follows directly from (\ref{occupation_result_1}).
\end{proof}
By setting $\delta = 0$, we can obtain the corresponding identities for the reflected process $U$.
\begin{corollary}
For any $p\geq0$, $q\geq -p$, $a, b > 0$ and $x \leq a$,
\begin{align*}
\E_x\left(e^{-p\kappa_a^+-q\int_0^{\kappa_a^+}1_{\{U_s<b\}}{\rm d} s} \right)&=\frac{Z^{(p+q)}(x)-q\overline{W}^{(p)}(x-b)-(p+q) q\int_b^xW^{(p)}(x-y)\overline{W}^{(p+q)}(y){\rm d} y}{Z^{(p+q)}(a)-q\overline{W}^{(p)}(a-b)-(p+q) q\int_b^aW^{(p)}(a-y)\overline{W}^{(p+q)}(y){\rm d} y}, \\
\E_x\left(e^{-p\kappa_a^+-q\int_0^{\kappa_a^+}1_{\{U_s> b\}}{\rm d} s} \right)&=\frac{Z^{(p)}(x)+q\overline{W}^{(p+q)}(x-b) +pq\int_b^x W^{(p+q)}(x-y)\overline{W}^{(p)}(y) {\rm d} y}{Z^{(p)}(a)+q\overline{W}^{(p+q)}(a-b) +pq\int_b^a W^{(p+q)}(a-y)\overline{W}^{(p)}(y) {\rm d} y}.
\end{align*}
\end{corollary}

\appendix
\section{Proofs}
\subsection{Proof of Proposition \ref{prop_approximation}} \label{proof_prop_approximation}
We shall first show the following lemma.
\begin{lemma} \label{lemma_piecewise_bound}
Fix $t > 0$. Let $(x_s; 0 \leq s \leq t )$ and $(\tilde{x}_s; 0 \leq s \leq t )$ be the paths of two different L\'{e}vy processes such that, for some $\varepsilon > 0$,
\begin{align*}
\sup_{0 \leq s \leq t} |x_s - \tilde{x}_s| < \varepsilon.
\end{align*}
Fix $z, \tilde{z} \in \R$ and $0 \leq t_0 < t$.

(i) Define reflected paths $u_s(z, t_0)$ and $\tilde{u}_s(\tilde{z}, t_0)$ on $[t_0, t]$ of the shifted paths $(z + (x_s - x_{t_0}); t_0 \leq s \leq t )$ and $(\tilde{z} + (\tilde{x}_{s} - \tilde{x}_{t_0}); t_0 \leq s \leq t )$, respectively.
In other words, for all $t_0 \leq s \leq t$, let
\begin{align*}
u_s(z, t_0) &:= z + (x_s - x_{t_0}) + ( - \inf_{t_0 \leq u \leq s} [z + (x_u - x_{t_0})]) \vee 0, \\
\tilde{u}_s(\tilde{z}, t_0) &:= \tilde{z} + (\tilde{x}_s - \tilde{x}_{t_0}) + ( - \inf_{t_0 \leq u \leq s} [\tilde{z} + (\tilde{x}_u - \tilde{x}_{t_0})]) \vee 0.
\end{align*}
Then,
\begin{align*}
\sup_{t_0 \leq s \leq t}|u_s(z, t_0) - \tilde{u}_s(\tilde{z}, t_0)| < 2|z - \tilde{z} | + 4 \varepsilon.
\end{align*}
(ii) Similarly, define refracted paths $a_s (z, t_0)$ and $\tilde{a}_s (\tilde{z}, t_0)$ on $[t_0, t]$ that solve, for all $t_0 \leq s \leq t$,
\begin{align*}
a_s (z, t_0) &= z + (x_s - x_{t_0}) - \delta \int_{t_0}^{s} 1_{\{a_u (z, t_0)> b\}} {\rm d} u, \\
\tilde{a}_s (\tilde{z}, t_0) &= \tilde{z} + (\tilde{x}_s - \tilde{x}_{t_0}) - \delta \int_{t_0}^{s} 1_{\{\tilde{a}_u (\tilde{z}, t_0)> b\}} {\rm d} u.
\end{align*}
Then,
\begin{align*}
\sup_{t_0 \leq s \leq t}|a_s(z, t_0) - \tilde{a}_s(\tilde{z}, t_0)| < 2|z - \tilde{z} | + 4 \varepsilon.
\end{align*}
\end{lemma}
\begin{proof}
(i) Regarding the distance between the shifted paths, the triangle inequality gives
\begin{align} \label{bound_with_tilde}
\sup_{t_0 \leq s \leq t}|[z + (x_{s } - x_{t_0})] - [\tilde{z} + (\tilde{x}_{s } - \tilde{x}_{t_0})]| \leq |z - \tilde{z} | + \sup_{t_0 \leq s \leq t}|x_s - \tilde{x}_s| + |x_{t_0} - \tilde{x}_{t_0}| < |z - \tilde{z} | + 2 \varepsilon.
\end{align}
It can be easily verified that
\begin{multline*}
\Big| \Big( - \inf_{t_0 \leq u \leq s} [z + (x_u - x_{t_0})] \Big) \vee 0 - \Big( - \inf_{t_0 \leq u \leq s} [\tilde{z} + (\tilde{x}_u - \tilde{x}_{t_0})] \Big) \vee 0 \Big| \leq \sup_{t_0 \leq s \leq t}|[z + (x_s - x_{t_0})] - [\tilde{z} + (\tilde{x}_s - \tilde{x}_{t_0})]|.
\end{multline*}
Hence \eqref{bound_with_tilde} and the triangle inequality give the result.

(ii) As in the proof of Lemma 12 of \cite{KL} (in particular, equation (3.8)),
\begin{align*}
\sup_{t_0 \leq s \leq t} |a_s(z, t_0) - \tilde{a}_s(\tilde{z}, t_0)| \leq 2 \sup_{t_0 \leq s \leq t}|[z + (x_{s } - x_{t_0})] - [\tilde{z} + (\tilde{x}_{s } - \tilde{x}_{t_0})]|.
\end{align*}
Hence, \eqref{bound_with_tilde} shows the result.
\end{proof}
We shall now use this lemma to prove the proposition. With $\beta := b/2 > 0$, let us define a sequence of increasing random times $(\underline{T}_1, \overline{T}_1, \underline{T}_2, \overline{T}_2, \ldots )$ as follows:
\begin{align*}
&\underline{T}_1 := \inf \{ s > 0: V_s > \beta\}, \quad \overline{T}_1 := \sup \{ s < \sigma_1: V_s > \beta\}, \quad \textrm{ with } \sigma_1 := \inf \{ s > \underline{T}_1: V_s = 0\},
\end{align*}
and, for all $k \geq 2$,
\begin{align*}
&\underline{T}_k := \inf \{ s > \sigma_{k-1}: V_s > \beta \}, \quad \overline{T}_k := \sup \{ s < \sigma_k: V_s > \beta \},\quad \textrm{ with } \sigma_k := \inf \{ s > \underline{T}_k: V_s = 0\}.
\end{align*}
For convenience, we also let $\overline{T}_0 := 0$.
Let
\begin{align*}
K := K_1 + K_2 + 1, \quad \textrm{with } \; K_1 := \sup \{ k \geq 0: \underline{T}_k < t\} \textrm{ and } K_2 := \sup \{ k \geq 0: \overline{T}_k < t\},
\end{align*}
be the number of times switching has occurred until time $t$ (plus one), and define
\begin{align}
\underline{\beta} := \min_{1 \leq k \leq K_1} \inf_{s\in[\underline{T}_k,\overline{T}_k \wedge t)} V_s.\label{beta_underline}
\end{align}
Then it is clear from the definitions of $\underline{T}$ and $\overline{T}$ (and $\sigma$) that $\underline{\beta} > 0$.

For the rest of the proof, fix $\omega \in \Omega \backslash \Omega_0$ where $\Omega_0 := \{ \omega' : \sup_{0 \leq s \leq t} |X_s ( \omega' ) - X_s^{(n)} ( \omega' ) | \nrightarrow 0 \textrm{ or } K ( \omega' )= \infty \}$ is a null set. It is sufficient to show that there exist a finite number $C$ and $\underline{n} \in \mathbb{N}$ such that
\begin{align}
\sup_{0 \leq s \leq t} |V^{(n)}_s (\omega) - V_s (\omega)| \leq C \sup_{0 \leq s \leq t} |X^{(n)}_s (\omega) - X_s (\omega)|, \quad n \geq \underline{n}. \label{bound_with_C}
\end{align}
In this proof, we choose $\underline{n}$ large enough so that
\begin{align}
\sup_{m \geq \underline{n}}\sup_{0 \leq s \leq t} |X^{(m)}_s (\omega) - X_s (\omega)| < [4(2^{K(\omega)}-1)]^{-1} \underline{\beta} (\omega). \label{bound_sup_sup}
\end{align}
We will see in later arguments that, for all $n \geq \underline{n}$, this bound guarantees that $\underline{T} (\omega)$ and $\overline{T}(\omega)$ can act as switching times for both $V(\omega)$ and $V^{(n)}(\omega)$ (so that, on each interval $[\underline{T}_k(\omega), \overline{T}_k(\omega)]$ and $[\overline{T}_k(\omega), \underline{T}_{k+1}(\omega)]$, both $V(\omega)$ and $V^{(n)}(\omega)$ are refracted and reflected paths, respectively). Let us fix $n \geq \underline{n}$ and
\begin{align*}
\varepsilon := \sup_{0 \leq s \leq t} |X^{(n)}_s (\omega) - X_s (\omega)|.
\end{align*}
Now we define a sequence $(\eta_k; 0 \leq k \leq K(\omega))$ such that $\eta_0 = 0$ and
\begin{align*}
\eta_{k+1} = 2 \eta_{k} + 4 \varepsilon, \quad k \geq 0.
\end{align*}
The latter gives $\eta_{k+1} - \eta_k = 2 (\eta_k - \eta_{k-1})$, and hence $\eta_{k} - \eta_{k-1} = 2^{k-1} (\eta_1-\eta_0) = 2^{k+1} \varepsilon$. Therefore, $\eta_k = (\eta_1-\eta_0) + \cdots + (\eta_k-\eta_{k-1}) = 4(2^k-1) \varepsilon$, and by \eqref{bound_sup_sup}
\begin{align}
\eta_1 <\eta_2 < \cdots < \eta_{K (\omega)} \leq 4(2^{K(\omega)}-1) \varepsilon < \underline{\beta}(\omega) \leq b/2. \label{bound_eta}
\end{align}
We will show that
\begin{align} \label{diff_interval_bound}
\begin{split}
\sup_{\overline{T}_k (\omega)\leq s \leq \underline{T}_{k+1} (\omega) \wedge t } |V_s (\omega)- V^{(n)}_s (\omega)| &\leq \eta_{2k+1}, \quad 0 \leq k \leq K_2(\omega), \\
\sup_{\underline{T}_k (\omega)\leq s \leq \overline{T}_k (\omega) \wedge t} |V_s (\omega)- V^{(n)}_s (\omega)| &\leq \eta_{2k}, \quad 1 \leq k \leq K_1(\omega),
\end{split}
\end{align}
and hence \eqref{bound_with_C} holds with $C = 4(2^{K(\omega)}-1)$. Toward this end, we first show the following claims.
\begin{claim} \label{claim_induction}
(i) Fix $k \geq 0$.
Suppose $\overline{T}_k (\omega) < t$ (or $k \leq K_2(\omega)$) and
\begin{align*}
|(V_{\overline{T}_k-} (\omega) + \Delta X_{\overline{T}_k} (\omega)) - (V_{\overline{T}_k-}^{(n)} (\omega)+ \Delta X_{\overline{T}_k}^{(n)} (\omega)) | &\leq \eta,
\end{align*}
and
\begin{align}
\tilde{\eta} := 2\eta + 4 \varepsilon < \underline{\beta} < b/2. \label{bound_eta_lemma}
\end{align}
Then,
\begin{align*}
|V_s (\omega)- V^{(n)}_s (\omega)| \leq \tilde{\eta}, \quad \overline{T}_k (\omega)\leq s \leq \underline{T}_{k+1} (\omega) \wedge t.
\end{align*}
(ii) Fix $k \geq 1$. Suppose $\underline{T}_k (\omega) < t$ (or $k \leq K_1(\omega)$) and
\begin{align}
|V_{\underline{T}_k} (\omega) - V^{(n)}_{\underline{T}_k} (\omega) | \leq \eta, \label{v_diff_previous_under}
\end{align}
such that \eqref{bound_eta_lemma} holds. Then,
\begin{align*}
|V_s (\omega) - V_s^{(n)} (\omega) | &\leq \tilde{\eta}, \quad \underline{T}_k(\omega) \leq s < \overline{T}_k(\omega) \wedge t, \\
|(V_{\overline{T}_k-} (\omega) + \Delta X_{\overline{T}_k} (\omega)) - (V_{\overline{T}_k-}^{(n)} (\omega)+ \Delta X_{\overline{T}_k}^{(n)} (\omega)) | &\leq \tilde{\eta}, \quad \textrm{if } \overline{T}_k (\omega) < t.
\end{align*}
\end{claim}
\begin{proof}
(i) Consider the reflected paths on $[\overline{T}_k (\omega), \underline{T}_{k+1} (\omega) \wedge t]$:
\begin{multline} \label{U_time_space_shifted}
U_s (V_{\overline{T}_k-}+ \Delta X_{\overline{T}_k}, \overline{T}_k) (\omega) := V_{\overline{T}_k-}(\omega) + \Delta X_{\overline{T}_k} (\omega)+ (X_s (\omega) - X_{\overline{T}_k} (\omega)) \\
+ \Big( - \inf_{\overline{T}_k (\omega) \leq u \leq s} [V_{\overline{T}_k-} (\omega)+ \Delta X_{\overline{T}_k}(\omega)+ (X_u (\omega)- X_{\overline{T}_k} (\omega))] \Big) \vee 0,
\end{multline}
and
\begin{multline} \label{U_n_time_space_shifted}
U_s^{(n)} (V_{\overline{T}_k-}^{(n)}+ \Delta X_{\overline{T}_k}^{(n)}, \overline{T}_k) (\omega) := V_{\overline{T}_k-}^{(n)} (\omega)+ \Delta X_{\overline{T}_k}^{(n)}(\omega)+ (X^{(n)}_s (\omega)- X^{(n)}_{\overline{T}_k} (\omega)) \\
+ \Big( - \inf_{\overline{T}_k (\omega) \leq u \leq s} [V_{\overline{T}_k-}^{(n)}(\omega) + \Delta X_{\overline{T}_k}^{(n)}(\omega)+ (X^{(n)}_u(\omega) - X^{(n)}_{\overline{T}_k} (\omega))] \Big) \vee 0.
\end{multline}
By an application of Lemma \ref{lemma_piecewise_bound} (i) with $z = V_{\overline{T}_k-} (\omega)+ \Delta X_{\overline{T}_k} (\omega)$ and $\tilde{z} = V_{\overline{T}_k-}^{(n)} (\omega) + \Delta X_{\overline{T}_k}^{(n)} (\omega)$ and $t_0=\overline{T}_k (\omega)$, we have
\begin{multline*}
\big|U_s (V_{\overline{T}_k-}+ \Delta X_{\overline{T}_k}, \overline{T}_k) (\omega)- U_s^{(n)} (V_{\overline{T}_k-}^{(n)}+ \Delta X_{\overline{T}_k}^{(n)}, \overline{T}_k) (\omega) \big| \\
< 2 \big|[V_{\overline{T}_k-} (\omega)+ \Delta X_{\overline{T}_k} (\omega) ] - [V_{\overline{T}_k-}^{(n)} (\omega) + \Delta X_{\overline{T}_k}^{(n)} (\omega)] \big| + 4 \varepsilon \leq 2 \eta + 4\varepsilon = \tilde{\eta}, \quad \overline{T}_k (\omega) \leq s \leq \underline{T}_{k+1} (\omega) \wedge t.
\end{multline*}
Using that $V_s (\omega) \leq \beta$ on $[\overline{T}_k (\omega), \underline{T}_{k+1} (\omega)]$ and \eqref{bound_eta_lemma}, we can conclude that there is no refraction for the path $V^{(n)}(\omega)$ on $[\overline{T}_k (\omega), \underline{T}_{k+1} (\omega) \wedge t)$.
Therefore $V (\omega)$ and $V^{(n)}(\omega)$ coincide with their associated reflected paths, defined in \eqref{U_time_space_shifted} and \eqref{U_n_time_space_shifted}, respectively, on $[\overline{T}_k (\omega), \underline{T}_{k+1} (\omega) \wedge t]$, and hence the claim holds. (ii) Consider the refracted paths $A_s (V_{\underline{T}_k}, \underline{T}_k) (\omega)$ and $A_s^{(n)} (V_{\underline{T}_k}^{(n)}, \underline{T}_k) (\omega)$ on $[\underline{T}_k (\omega) , \overline{T}_k (\omega) \wedge t]$ that solve, for $\underline{T}_k(\omega) \leq s \leq \overline{T}_k (\omega) \wedge t$, \begin{align*} A_s (V_{\underline{T}_k}, \underline{T}_k) (\omega) &= V_{\underline{T}_k}(\omega) + (X_s (\omega)- X_{\underline{T}_k}(\omega)) - \delta \int_{\underline{T}_k(\omega)}^{s} 1_{\{A_u (V_{\underline{T}_k}, \underline{T}_k)(\omega) > b\}} {\rm d} u, \\ A_s^{(n)} (V^{(n)}_{\underline{T}_k}, \underline{T}_k) (\omega) &= V_{\underline{T}_k}^{(n)} (\omega) + (X_s^{(n)} (\omega)- X_{\underline{T}_k}^{(n)}(\omega)) - \delta \int_{\underline{T}_k(\omega)}^{s} 1_{\{A_u^{(n)} (V_{\underline{T}_k}^{(n)}, \underline{T}_k)(\omega) > b\}} {\rm d} u. \end{align*} By Lemma \ref{lemma_piecewise_bound} (ii) and \eqref{v_diff_previous_under}, \begin{align*} |A_s(V_{\underline{T}_k}, \underline{T}_k) (\omega) - A^{(n)}_s(V_{\underline{T}_k }^{(n)}, \underline{T}_k ) (\omega) | < 2|V_{\underline{T}_k} (\omega) - V_{\underline{T}_k}^{(n)} (\omega) | + 4 \varepsilon \leq 2 \eta + 4\varepsilon = \tilde{\eta}, \quad \underline{T}_k (\omega) \leq s \leq \overline{T}_k (\omega) \wedge t. \end{align*} By the above inequality, \eqref{beta_underline} and \eqref{bound_eta_lemma}, neither $V$ nor $V^{(n)}$ is reflected on $[\underline{T}_k(\omega), \overline{T}_k(\omega) \wedge t)$, and we have $A_s(V_{\underline{T}_k}, \underline{T}_k) (\omega) = V_{s-} (\omega) + \Delta X_s (\omega)$ and $A_s^{(n)}(V_{\underline{T}_k}^{(n)}, \underline{T}_k) (\omega) = V^{(n)}_{s-} (\omega) + \Delta X^{(n)}_s (\omega) $ for all $\underline{T}_k (\omega) \leq s \leq \overline{T}_k (\omega) \wedge t$. This completes the proof. \end{proof} We are now ready to show \eqref{diff_interval_bound} by mathematical induction. The base case is clear by Claim \ref{claim_induction} (i) with $k=0$ and \begin{align*} |(V_{0-} (\omega) + \Delta X_0 (\omega)) - (V_{0-}^{(n)} (\omega)+ \Delta X_0^{(n)} (\omega)) | = |x-x| = 0 =\eta_0. \end{align*} By applying Claim \ref{claim_induction} (i) and (ii) alternately ($K(\omega)$ times in total), we see that \eqref{diff_interval_bound} holds, as desired. 
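For instance, the first steps of this iteration read as follows. Claim \ref{claim_induction} (i) with $k=0$ and $\eta = \eta_0 = 0$ gives the bound $\tilde{\eta} = \eta_1$ on $[\overline{T}_0(\omega), \underline{T}_1(\omega) \wedge t]$; in particular, if $\underline{T}_1(\omega) < t$, then $|V_{\underline{T}_1}(\omega) - V^{(n)}_{\underline{T}_1}(\omega)| \leq \eta_1$. Claim \ref{claim_induction} (ii) with $\eta = \eta_1$ (note that $2\eta_1 + 4\varepsilon = \eta_2 \leq \eta_{K(\omega)} < \underline{\beta}(\omega)$ by \eqref{bound_eta}) then gives the bound $\eta_2$ on $[\underline{T}_1(\omega), \overline{T}_1(\omega) \wedge t)$, together with the bound $\eta_2$ on $|(V_{\overline{T}_1-}(\omega) + \Delta X_{\overline{T}_1}(\omega)) - (V^{(n)}_{\overline{T}_1-}(\omega) + \Delta X^{(n)}_{\overline{T}_1}(\omega))|$ whenever $\overline{T}_1(\omega) < t$, and Claim \ref{claim_induction} (i) with $\eta = \eta_2$ gives the bound $\eta_3$ on $[\overline{T}_1(\omega), \underline{T}_2(\omega) \wedge t]$, and so on.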
\subsection{Proof of Lemma \ref{lemma_useful_identity} (iii).} \label{proof_lemma_useful_identit} \par Using the fact that $\overline{Z}^{(q)}(x)=x$ for $x\leq0$, we obtain \begin{align} \int_0^{\infty}&\int_{(-\infty,-y)}\left(\overline{Z}^{(q)}(y+u+b-v) - (y+u)\right)\mathbb{W}^{(p)}(x-b-y)\Pi({\rm d} u){\rm d} y\notag\\ &=\int_0^{\infty}\int_{(-\infty,-y)}\left(\overline{Z}^{(q)}(y+u+b-v)- \overline{Z}^{(q)}(y+u+0)\right)\mathbb{W}^{(p)}(x-b-y)\Pi({\rm d} u){\rm d} y\notag\\ &=\int_v^b \int_0^{\infty}\int_{(-\infty,-y)}Z^{(q)}(y+u+b-z)\mathbb{W}^{(p)}(x-b-y)\Pi({\rm d} u){\rm d} y {\rm d} z\notag \end{align} which, by \eqref{RLqp2} and \eqref{lemma_useful_identity_c}, equals \begin{align}\label{Z_bar_y_u_W_integral} \begin{split} &\int_v^b \Big[ (c-\delta) {Z^{(q)}(b-z)}\mathbb{W}^{(p)}(x-b)-Z^{(q)}(x-z)-(p-q)\overline{\mathbb{W}}^{(p)}(x-b) \\ &\qquad +\int_b^x\mathbb{W}^{(p)}(x-y)\big((q-p)(Z^{(q)}(y-z)-1)-\delta Z^{(q)\prime}(y-z)\big) {\rm d} y\Big]{\rm d} z\\ &=(c-\delta)\overline{Z}^{(q)}(b-v){\mathbb{W}^{(p)}(x-b)}+\overline{Z}^{(q)}(x-b)-\overline{Z}^{(q)}(x-v) \\ &-\delta\int_b^x\mathbb{W}^{(p)}(x-y)\left(-Z^{(q)}(y-b)+Z^{(q)}(y-v)\right) {\rm d} y\\ &+(q-p)\int_b^x\mathbb{W}^{(p)}(x-y)\left(-\overline{Z}^{(q)}(y-b)+\overline{Z}^{(q)}(y-v)\right) {\rm d} y \\ &=(c-\delta)\overline{Z}^{(q)}(b-v){\mathbb{W}^{(p)}(x-b)}- \overline{Z}^{(q)}(x-v) + \overline{\mathbb{Z}}^{(p)}(x-b)+\delta\overline{\mathbb{W}}^{(p)}(x-b)\\ &-\delta\int_b^x\mathbb{W}^{(p)}(x-y) Z^{(q)}(y-v) {\rm d} y +(q-p)\int_b^x\mathbb{W}^{(p)}(x-y) \overline{Z}^{(q)}(y-v) {\rm d} y. \end{split} \end{align} Now we will compute the first term in the right hand side of the previous identity. To this end, an application of Lemma 3.1 in \cite{BKY} gives \begin{align} \E\left(e^{-p\tau_0^{-}}Y_{\tau_0^-} 1_{\{\tau_0^-<\tau_{x-b}^+ \}}\right)= \frac {\psi'_{Y}(0+)\overline{\mathbb{W}}^{(p)}(x-b)-\overline{\mathbb{Z}}^{(p)}(x-b) }{(c-\delta)\mathbb{W}^{(p)}(x-b)}; \notag \end{align} therefore by \eqref{undershoot_expectation} we obtain \begin{align} \label{y_u_W_integral} \begin{split} \int_0^{\infty}\int_{(-\infty,-y)}(y+u)\mathbb{W}^{(p)}(x-b-y)\Pi({\rm d} u){\rm d} y&=(c-\delta)\mathbb{W}^{(p)}(x-b)\E\left(e^{-p\tau_0^{-}}Y_{\tau_0^-}1_{\{\tau_0^-<\tau_{x-b}^+\}}\right) \\ &=\psi'_Y(0+)\overline{\mathbb{W}}^{(p)}(x-b)-\overline{\mathbb{Z}}^{(p)}(x-b). \end{split} \end{align} By summing up \eqref{Z_bar_y_u_W_integral} and \eqref{y_u_W_integral} and recalling that $\psi'(0+) = \psi_Y'(0+) + \delta$, we have \eqref{lemma_useful_identity_Z_bar}. \begin{thebibliography}{99} \bibitem{APP2007}\sc Avram, F., Palmowski, Z. and Pistorius, M.R. \rm On the optimal dividend problem for a spectrally negative L\'evy process. {\it Ann. Appl. Probab.} {\bf 17}, 156--180, (2007). \bibitem{BKY}\sc Bayraktar, E., Kyprianou, A.E., Yamazaki, K. \rm Optimal dividends in the dual model under transaction costs. {\it Insur. Math. Econ.} {\bf 54}, 133--143, (2014). \bibitem{B} \sc Bertoin, J. {\it L\'evy processes. }\rm Cambridge University Press, Cambridge, (1996). \bibitem{F1998} \sc Furrer, H. \rm Risk processes perturbed by $\alpha$-stable L\'evy motion. {\it Scand. Actuar. J.} 59--74, (1998). \bibitem{HPSV2004a}\sc Huzak, M., Perman, M., \v{S}iki\'c, H. and Vondra\v cek, Z. \rm Ruin probabilities and decompositions for general perturbed risk processes. {\it Ann. Appl. Probab.} {\bf 14}, 1378--1397, (2004). 
\bibitem{HPSV2004b} \sc Huzak, M., Perman, M., \v{S}iki\'c, H. and Vondra\v cek, Z. \rm Ruin probabilities for competing claim processes. {\it J. Appl. Probab.} {\bf 41}, 679--690, (2004). \bibitem{KKR}\sc Kuznetsov, A., Kyprianou, A.E., Rivero, V. \rm The theory of scale functions for spectrally negative L\'evy processes. {\it L\'evy Matters II, Springer Lecture Notes in Mathematics}, (2013). \bibitem{KKM2004} \sc Kl\"uppelberg, C., Kyprianou, A.E. and Maller, R.A. \rm Ruin probabilities and overshoots for general L\'evy insurance risk processes. {\it Ann. Appl. Probab.} {\bf 14}, 1766--1801, (2004). \bibitem{KK2006} \sc Kl\"uppelberg, C. and Kyprianou, A.E. \rm On extreme ruinous behaviour of L\'{e}vy insurance risk processes. {\it J. Appl. Probab.} {\bf 43} (2), 594--598, (2006). \bibitem{K} \sc Kyprianou, A.E. {\it Fluctuations of L\'evy processes with applications.} \rm Springer, Berlin, second edition, (2006). \bibitem{KL}\sc Kyprianou, A.E., Loeffen, R. \rm Refracted L\'evy processes. {\it Ann. Inst. H. Poincar\'e} {\bf 46} (1), 24--44, (2010). \bibitem{KLP} \sc Kyprianou, A.E., Loeffen, R., and P\'erez, J. L. \rm Optimal control with absolutely continuous strategies for spectrally negative L\'evy processes. {\it J. Appl. Probab.} {\bf 49} (1), 150--166, (2012). \bibitem{KP2007} \sc Kyprianou, A.E. and Palmowski, Z. \rm Distributional study of de Finetti's dividend problem for a general L\'evy insurance risk process. {\it J. Appl. Probab.} {\bf 44}, 349--365, (2007). \bibitem{KPP} \sc Kyprianou, A.E., Pardo, J. C., and P\'erez, J. L. \rm Occupation times of refracted L\'evy processes. {\it J. Theoret. Probab.} {\bf 27} (4), 1292--1315, (2014). \bibitem{Loeffen} \sc Loeffen, R.L. \rm On optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative L\'evy processes. {\it Ann. Appl. Probab.} {\bf 18} (5), 1669--1680, (2008). \bibitem{LoRZ} \sc Loeffen, R.L., Renaud, J-F. and Zhou, X. \rm Occupation times of intervals until first passage times for spectrally negative L\'evy processes with applications. {\it Stochastic Process. Appl.} {\bf 124} (3), 1408--1435, (2014). \bibitem{PY_astin} \sc P\'erez, J.L. and Yamazaki, K. \rm Refraction-reflection strategies in the dual model. {\it ASTIN Bulletin}, (forthcoming). \bibitem{Pistorius_2003} \sc Pistorius, M.R. \rm On doubly reflected completely asymmetric {L}\'evy processes. {\it Stochastic Process. Appl.} {\bf 107} (1), 131--143, (2003). \bibitem{P2004}\sc Pistorius, M.R. \rm On exit and ergodicity of the spectrally one-sided L\'evy process reflected at its infimum. {\it J. Theoret. Probab.} {\bf 17} (1), 183--220, (2004). \bibitem{P2007}\sc Pistorius, M.R. \rm An excursion-theoretical approach to some boundary crossing problems and the Skorokhod embedding for reflected L\'evy processes. {\it Seminaire de Probabilit\'es XL}, 287--307, (2007). \bibitem{R2014} \sc Renaud, J-F. \rm On the time spent in the red by a refracted L\'evy risk process. {\it J. Appl. Probab.} {\bf 51} (4), 1171--1188, (2014). \bibitem{RZ2007} \sc Renaud, J-F. and Zhou, X. \rm Distribution of the dividend payments in a general L\'evy risk model. {\it J. Appl. Probab.} {\bf 44}, 420--427, (2007). \bibitem{SV2007} \sc Song, R. and Vondra\v{c}ek, Z. \rm On suprema of L\'evy processes and application in risk theory. {\it Ann. Inst. H. Poincar\'e} {\bf 44}, 977--986, (2008). \end{thebibliography} \end{document}
\begin{document} \begin{abstract} We study birational transformations $\varphi:{\mathbb{P}}^n\dashrightarrow\overline{\varphi({\mathbb{P}}^n)}\subseteq{\mathbb{P}}^N$ defined by linear systems of quadrics whose base locus is smooth and irreducible of dimension $\leq3$ and whose image $\overline{\varphi({\mathbb{P}}^n)}$ is sufficiently regular. \end{abstract} \maketitle \setcounter{secnumdepth}{1} \setcounter{tocdepth}{1} \tableofcontents \section*{Introduction}\label{sec: introduction} In this note we continue the study of special quadratic birational transformations $\varphi:{\mathbb{P}}^n\dashrightarrow{\mathbf{S}}:=\overline{\varphi({\mathbb{P}}^n)}\subseteq{\mathbb{P}}^{N}$ started in \cite{note}, by reinterpreting techniques and well-known results on special Cremona transformations (see \cite{crauder-katz-1989}, \cite{crauder-katz-1991}, \cite{ein-shepherdbarron} and \cite{hulek-katz-schreyer}). While in \cite{note} we required that ${\mathbf{S}}$ was a hypersurface, here we allow more freedom in the choice of ${\mathbf{S}}$, but we only treat the case in which the dimension of the base locus ${\mathfrak{B}}$ is $r=\dim({\mathfrak{B}})\leq3$. In the last section, we shall also obtain partial results in the case $r=4$. Note that for every closed subscheme $X\subset{\mathbb{P}}^{n-1}$ cut out by the quadrics containing it, we can consider ${\mathbb{P}}^{n-1}$ as a hyperplane in ${\mathbb{P}}^n$ and hence $X$ as a subscheme of ${\mathbb{P}}^n$. So the linear system $|{\mathcal{I}}_{X,{\mathbb{P}}^n}(2)|$ of all quadrics in ${\mathbb{P}}^n$ containing $X$ defines a quadratic rational map $\psi:{\mathbb{P}}^n\dashrightarrow{\mathbb{P}}^N$ ($N=h^0({\mathcal{I}}_{X,{\mathbb{P}}^n}(2))-1=n+h^0({\mathcal{I}}_{X,{\mathbb{P}}^{n-1}}(2))$), which is birational onto the image and whose inverse is defined by linear forms, i.e. $\psi$ is of type $(2,1)$. Conversely, every birational transformation $\psi:{\mathbb{P}}^n\dashrightarrow\overline{\psi({\mathbb{P}}^n)}\subseteq{\mathbb{P}}^N$ of type $(2,1)$ whose image is nondegenerate, normal and linearly normal arises in this way. From this it follows that there are many (special) quadratic transformations. However, when the image ${\mathbf{S}}$ of the transformation $\varphi$ is sufficiently regular, by a straightforward generalization of \cite[Proposition~2.3]{ein-shepherdbarron}, we obtain strong numerical and geometric restrictions on the base locus ${\mathfrak{B}}$. For example, as soon as ${\mathbf{S}}$ is not too singular, the secant variety ${\mathrm{Sec}}({\mathfrak{B}})\subset{\mathbb{P}}^n$ has to be a hypersurface and ${\mathfrak{B}}$ has to be a $QEL$-variety of type $\delta=\delta({\mathfrak{B}})=2\dim({\mathfrak{B}})+2-n$; in particular $n\leq 2\dim({\mathfrak{B}})+2$ and ${\mathrm{Sec}}({\mathfrak{B}})$ is a hyperplane if and only if $\varphi$ is of type $(2,1)$. So the classification of transformations $\varphi$ of type $(2,1)$ whose base locus has dimension $\leq 3$ essentially follows from classification results on $QEL$-manifolds: \cite[Propositions~1.3 and 3.4]{russo-qel1}, \cite[Theorem~2.2]{ionescu-russo-conicconnected} and \cite[Theorems~4.10 and 7.1]{ciliberto-mella-russo}. When $\varphi$ is of type $(2,d)$ with $d\geq2$, ${\mathrm{Sec}}({\mathfrak{B}})$ is a nonlinear hypersurface and it is not so easy to exhibit examples. The most difficult cases of this kind are those for which $n=2r+2$, i.e. $\delta=0$. 
In order to classify these transformations, we first determine the Hilbert polynomial of ${\mathfrak{B}}$ in Lemmas \ref{lemma: r=2 B nondegenerate} and \ref{lemma: r=3 B nondegenerate}, by using the usual Castelnuovo argument, Castelnuovo's bound and a refinement of Castelnuovo's bound, see \cite{ciliberto-hilbertfunctions} and \cite{mella-russo-baselocusleq3}. Consequently we deduce Propositions \ref{prop: r=2 B nondegenerate} and \ref{prop: r=3 B nondegenerate} by applying the classification of smooth varieties of low degree: \cite{ionescu-smallinvariants}, \cite{ionescu-smallinvariantsII}, \cite{ionescu-smallinvariantsIII}, \cite{fania-livorni-nine}, \cite{fania-livorni-ten}, \cite{besana-biancofiore-deg11}, \cite{ionescu-degsmallrespectcodim}. We also apply the double point formula in Lemmas \ref{lemma: double point formula r=2}, \ref{lemma: double point formula}, \ref{lemma: quadric fibration}, \ref{lemma: scroll over surface} and \ref{lemma: scroll over curve}, in order to obtain additional information on $d$ and $\Delta=\deg({\mathbf{S}})$. We summarize our classification results in Table \ref{tabella: all cases 3-fold}. In particular, we provide an answer to a question left open in the recent preprint \cite{alzati-sierra}. \section{Notation and general results}\label{sec: notation} Throughout the paper we work over ${\mathbb{C}}$ and keep the following setting. \begin{assumption} Let $\varphi:{\mathbb{P}}^n\dashrightarrow{\mathbf{S}}:=\overline{\varphi({\mathbb{P}}^n)}\subseteq{\mathbb{P}}^{n+a}$ be a quadratic birational transformation with smooth connected base locus ${\mathfrak{B}}$ and with ${\mathbf{S}}$ nondegenerate, linearly normal and factorial. \end{assumption} Recall that we can resolve the indeterminacies of $\varphi$ with the diagram \begin{equation}\label{eq: diagram resolving map} \xymatrix{ & \widetilde{{\mathbb{P}}^n} \ar[dl]_{\pi} \ar[dr]^{\pi'}\\ {\mathbb{P}}^n\ar@{-->}[rr]^{\varphi}& & {\mathbf{S}} } \end{equation} where $\pi:\widetilde{{\mathbb{P}}^n}={\mathfrak{B}}l_{{\mathfrak{B}}}({\mathbb{P}}^n)\rightarrow{\mathbb{P}}^n$ is the blow-up of ${\mathbb{P}}^n$ along ${\mathfrak{B}}$ and $\pi'=\varphi\circ\pi:\widetilde{{\mathbb{P}}^n}\rightarrow{\mathbf{S}}$. Denote by ${\mathfrak{B}}'$ the base locus of $\varphi^{-1}$, $E$ the exceptional divisor of $\pi$, $E'=\pi'^{-1}({\mathfrak{B}}')$, $H=\pi^{\ast}(H_{{\mathbb{P}}^n})$, $H'={\pi'}^{\ast}(H_{{\mathbf{S}}})$, and note that, since $\pi'|_{\widetilde{{\mathbb{P}}^n}\setminus E'}:\widetilde{{\mathbb{P}}^n}\setminus E'\rightarrow {\mathbf{S}}\setminus{\mathfrak{B}}'$ is an isomorphism, we have $({\mathrm{sing}}({\mathbf{S}}))_{\mathrm{red}}\subseteq ({\mathfrak{B}}')_{\mathrm{red}}$. We also put $r=\dim({\mathfrak{B}})$, $r'=\dim({\mathfrak{B}}')$, $\lambda=\deg({\mathfrak{B}})$, $g=g({\mathfrak{B}})$ the sectional genus of ${\mathfrak{B}}$, $c_j=c_j({\mathcal{T}}_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}}^{r-j}$ (resp. $s_j=s_j({\mathcal{N}}_{{\mathfrak{B}},{\mathbb{P}}^n})\cdot H_{{\mathfrak{B}}}^{r-j}$) the degree of the $j$-th Chern class (resp. Segre class) of ${\mathfrak{B}}$, $\Delta=\deg({\mathbf{S}})$, $c=c({\mathbf{S}})$ the \emph{coindex} of ${\mathbf{S}}$ (the last of which is defined by $-K_{{\mathrm{reg}}({\mathbf{S}})}\sim (n+1-c)H_{{\mathrm{reg}}({\mathbf{S}})}$, whenever ${\mathrm{Pic}}({\mathbf{S}})={\mathbb{Z}}\langle H_{{\mathbf{S}}}\rangle$). 
\begin{assumption}\label{assumption: liftable} We suppose that there exists a rational map $\widehat{\varphi}:{\mathbb{P}}^{n+a}\dashrightarrow{\mathbb{P}}^n$ defined by a sublinear system of $|{\mathcal{O}}_{{\mathbb{P}}^{n+a}}(d)|$ and having base locus $\widehat{{\mathfrak{B}}}$ such that $\varphi^{-1}=\widehat{\varphi}|_{{\mathbf{S}}}$ and ${\mathfrak{B}}'=\widehat{{\mathfrak{B}}}\cap{\mathbf{S}}$. We will then say that $\varphi^{-1}$ is \emph{liftable} and that $\varphi$ is \emph{of type} $(2,d)$. \end{assumption} The above assumption yields the relations: \begin{equation}\label{eq: lift} \begin{array}{ll} H' \sim 2H-E, & H \sim dH'-E', \\ E'\sim (2d-1)H-dE, & E \sim (2d-1)H'-2E' , \end{array} \end{equation} and hence also $ {\mathrm{Pic}}(\widetilde{{\mathbb{P}}^n})\simeq {\mathbb{Z}}\langle H \rangle\oplus {\mathbb{Z}}\langle E \rangle \simeq {\mathbb{Z}}\langle H'\rangle\oplus {\mathbb{Z}}\langle E'\rangle $. Note that, by the proofs of \cite[Propositions~1.3 and 2.1(a)]{ein-shepherdbarron} and by factoriality of ${\mathbf{S}}$, we obtain that $E'$ is a reduced and irreducible divisor. Moreover we have ${\mathrm{Pic}}({\mathbf{S}})\simeq {\mathrm{Pic}}({\mathbf{S}}\setminus{\mathfrak{B}}')\simeq {\mathrm{Pic}}(\widetilde{{\mathbb{P}}^n}\setminus E') \simeq {\mathbb{Z}}\langle H'\rangle\simeq {\mathbb{Z}}\langle H_{{\mathbf{S}}}\rangle$. Finally, we require the following:\footnote{See Example \ref{example: B2=singSred} and \cite[Example~4.6]{note} for explicit examples of special quadratic birational transformations for which Assumption \ref{assumption: ipotesi} is not satisfied.} \begin{assumption}\label{assumption: ipotesi} $({\mathrm{sing}}({\mathbf{S}}))_{\mathrm{red}}\neq ({\mathfrak{B}}')_{\mathrm{red}}$. \end{assumption} Now we point out that, since $E'$ is irreducible, by Assumption \ref{assumption: ipotesi} and \cite[Theorem~1.1]{ein-shepherdbarron}, we deduce that $\pi'|_V:V\rightarrow U$ coincides with the blow-up of $U$ along $Z$, where $U={\mathrm{reg}}({\mathbf{S}})\setminus{\mathrm{sing}}(({\mathfrak{B}}')_{\mathrm{red}})$, $V=\pi'^{-1}(U)$ and $Z=U\cap ({\mathfrak{B}}')_{\mathrm{red}}$. It follows that $K_{\widetilde{{\mathbb{P}}^n}} \sim (-n-1)H+(n-r-1)E \sim (c-n-1)H'+(n-r'-1)E'$, from which, together with (\ref{eq: lift}), we obtain $2r+3-n=n-r'-1$ and $c=\left( 1-2d\right) r+dn-3d+2$. One can also easily see that, for the general point $x\in{\mathrm{Sec}}({\mathfrak{B}})\setminus {\mathfrak{B}}$, $\overline{\varphi^{-1}\left(\varphi\left(x\right)\right)}$ is a linear space of dimension $n-r'-1$ and $\overline{\varphi^{-1}\left(\varphi\left(x\right)\right)}\cap {\mathfrak{B}}$ is a quadric hypersurface, which coincides with the entry locus $\Sigma_{x}({\mathfrak{B}})$ of ${\mathfrak{B}}$ with respect to $x$. For more details we refer the reader to \cite[Proposition~2.3]{ein-shepherdbarron} and \cite[Proposition~3.1]{note}. So we can establish one of the main results useful for our purposes: \begin{proposition}\label{prop: B is QEL} ${\mathrm{Sec}}({\mathfrak{B}})\subset{\mathbb{P}}^n$ is a hypersurface of degree $2d-1$ and ${\mathfrak{B}}$ is a $QEL$-variety of type $\delta=2r+2-n$. \end{proposition} In many cases, ${\mathfrak{B}}$ enjoys a much stronger property than that of being a $QEL$-variety. Recall that a subscheme $X\subset{\mathbb{P}}^n$ is said to have the $K_2$ property if $X$ is cut out by quadratic forms $F_0,\ldots,F_N$ such that the Koszul relations among the $F_i$ are generated by linear syzygies. 
We have the following fact (see \cite{vermeire} and \cite{alzati-syz}): \begin{fact}\label{fact: K2 property} Let $X\subset{\mathbb{P}}^n$ be a smooth variety cut out by quadratic forms $F_0,\ldots,F_N$ satisfying the $K_2$ property and let $F=[F_0,\ldots,F_N]:{\mathbb{P}}^n\dashrightarrow{\mathbb{P}}^N$ be the induced rational map. Then for every $x\in{\mathbb{P}}^n\setminus X$, $\overline{F^{-1}\left(F\left(x\right)\right)}$ is a linear space of dimension $n+1-\mathrm{rank}\left(\left({\partial F_i}/{\partial x_j}(x)\right)_{i,j}\right)$; moreover, $\dim(\overline{F^{-1}\left(F\left(x\right)\right)})>0$ if and only if $x\in{\mathrm{Sec}}(X)\setminus X$ and in this case $\overline{F^{-1}\left(F\left(x\right)\right)}\cap X$ is a quadric hypersurface, which coincides with the entry locus $\Sigma_{x}(X)$ of $X$ with respect to $x$. \end{fact} We have a simple sufficient condition for the $K_2$ property (see \cite[Proposition~2]{alzati-russo-subhomaloidal}): \begin{fact}\label{fact: test K2} Let $X\subset{\mathbb{P}}^n$ be a smooth linearly normal variety and suppose $h^1({\mathcal{O}}_X)=0$ if $\dim(X)\geq2$. Putting $\lambda=\deg(X)$ and $s=\mathrm{codim}_{{\mathbb{P}}^n}(X)$ we have: \begin{itemize} \item if $\lambda\leq 2s+1$, then $X$ is arithmetically Cohen-Macaulay; \item if $\lambda\leq 2s$, then the homogeneous ideal of $X$ is generated by quadratic forms; \item if $\lambda\leq2s-1$, then the syzygies of the generators of the homogeneous ideal of $X$ are generated by the linear ones. \end{itemize} \end{fact} \begin{remark} Let $\psi:{\mathbb{P}}^n\dashrightarrow\mathbf{Z}:=\overline{\psi({\mathbb{P}}^n)}\subseteq{\mathbb{P}}^{n+a}$ be a birational transformation ($n\geq3$). We point out that, from Grothendieck's Theorem on parafactoriality (Samuel's Conjecture) \cite[\Rmnum{11} Corollaire~3.14]{sga2} it follows that $\mathbf{Z}$ is factorial whenever it is a local complete intersection with $\dim({\mathrm{sing}}(\mathbf{Z}))<\dim(\mathbf{Z})-3$. Of course, every complete intersection in a smooth variety is a local complete intersection. Moreover, $\psi^{-1}$ is liftable whenever ${\mathrm{Pic}}(\mathbf{Z})={\mathbb{Z}}\langle H_{\mathbf{Z}}\rangle$ and $\mathbf{Z}$ is factorial and projectively normal. So, from \cite{larsen-coomology} and \cite[\Rmnum{4} Corollary~3.2]{hartshorne-ample}, $\psi^{-1}$ is liftable whenever $\mathbf{Z}$ is either smooth and projectively normal with $n\geq a+2$ or a factorial complete intersection. \end{remark} \section{Numerical restrictions} Proposition \ref{prop: B is QEL} already provides a restriction on the invariants of the transformation $\varphi$; here we give further restrictions of this kind. \begin{proposition}\label{prop: hilbert polynomial} Let $\epsilon=0$ if $\langle {\mathfrak{B}} \rangle ={\mathbb{P}}^n$ and let $\epsilon=1$ otherwise. \begin{itemize} \item If $r=1$ we have: \begin{eqnarray*} \lambda &=& (n^2-n+2\epsilon-2a-2)/2 , \\ g &=& (n^2-3n+4\epsilon-2a-2)/2 . \end{eqnarray*} \item If $r=2$ we have: \begin{eqnarray*} \chi({\mathcal{O}}_{{\mathfrak{B}}}) &=& (2a-n^2+5n+2g-6\epsilon+4)/4 , \\ \lambda &=& (n^2-n+2g+2\epsilon-2a-4)/4 . \\ \end{eqnarray*} \item If $r=3$ we have: \begin{eqnarray*} \chi({\mathcal{O}}_{{\mathfrak{B}}}) &=& (4\lambda-n^2+3n-2g-4\epsilon+2a+6)/2 . \end{eqnarray*} \end{itemize} \end{proposition} \begin{proof} By Proposition \ref{prop: B is QEL} we have $h^0({\mathbb{P}}^n,{\mathcal{I}}_{{\mathfrak{B}}}(1))=\epsilon$. 
Since ${\mathbf{S}}$ is normal and linearly normal, we have $h^0({\mathbb{P}}^n,{\mathcal{I}}_{{\mathfrak{B}}}(2))=n+1+a$ (see \cite[Lemma~2.2]{note}). Moreover, since $n\leq 2r+2$ (being $\delta\geq0$), proceeding as in \cite[Lemma~3.3]{note} (or applying \cite[Proposition~1.8]{mella-russo-baselocusleq3}), we obtain $h^j({\mathbb{P}}^n,{\mathcal{I}}_{{\mathfrak{B}}}(k))=0$ for every $j,k\geq1$. So we obtain $\chi({\mathcal{O}}_{{\mathfrak{B}}}(1))=n+1-\epsilon$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}}(2))= (n+1)(n+2)/2 - (n+1+a)$. \end{proof} \begin{proposition}\label{prop: segre and chern classes} \hspace{1pt} \begin{itemize} \item If $r=1$ we have: \begin{eqnarray*} {c}_{1} &=& 2-2\,g, \\ {s}_{1} &=& \left( -n-1\right) \,\lambda-2\,g+2, \\ d &=& \left(2\,\lambda-{2}^{n}\right)/\left(\left( 2\,n-2\right) \,\lambda-{2}^{n+1}-4\,g+4\right), \\ \Delta &=& \left( 1-n\right) \,\lambda+{2}^{n}+2\,g-2. \end{eqnarray*} \item If $r=2$ we have: \begin{eqnarray*} {c}_{1} &=& \lambda-2\,g+2, \\ {c}_{2} &=& -\left(\left( {n}^{2}-3\,n\right) \,\lambda-{2}^{n+1}+\left( 4-4\,g\right) \,n+4\,g+2\,\Delta-4\right)/2, \\ {s}_{1} &=& -n\,\lambda-2\,g+2, \\ {s}_{2} &=& 2\,n\,\lambda+{2}^{n}+\left( 4\,g-4\right) \,n-\Delta, \\ d\,\Delta &=& \left( 2-n\right) \,\lambda+{2}^{n-1}+2\,g-2. \\ \end{eqnarray*} \item If $r=3$ we have: \begin{eqnarray*} {c}_{1} &=& 2\,\lambda-2\,g+2, \\ {c}_{2} &=& -\left(\left( {n}^{2}-5\,n+2\right) \,\lambda-{2}^{n}+\left( 4-4\,g\right) \,n+12\,g+2\,d\,\Delta-12\right)/2, \\ {c}_{3} &=& \left(\left( 2\,{n}^{3}-12\,{n}^{2}+22\,n-12\right) \,\lambda+9\,{2}^{n}+n\,\left( -3\,{2}^{n}+18\,g+6\,d\,\Delta-18\right) \right.\\ && \left. +\left( 6-6\,g\right) \,{n}^{2}-24\,g+\left( -6\,d-6\right) \,\Delta+24\right)/6, \\ {s}_{1} &=& \left( 1-n\right) \,\lambda-2\,g+2, \\ {s}_{2} &=& \left(\left( 4\,n-4\right) \,\lambda+{2}^{n}+\left( 8\,g-8\right) \,n-8\,g-2\,d\,\Delta+8\right)/2, \\ {s}_{3} &=& \left(\left( 2\,{n}^{3}-12\,{n}^{2}+10\,n\right) \,\lambda+3\,{2}^{n}+n\,\left( -3\,{2}^{n}+12\,g+6\,d\,\Delta-12\right) \right. \\ && \left. +\left( 12-12\,g\right) \,{n}^{2}-3\,\Delta\right)/3 . \end{eqnarray*} \end{itemize} \end{proposition} \begin{proof} See also \cite{crauder-katz-1989} and \cite{crauder-katz-1991}. By \cite[page~291]{crauder-katz-1989} we see that \begin{displaymath} H^j\cdot E^{n-j}= \left\{ \begin{array}{ll} 1, & \mbox{if } j= n ; \\ 0, & \mbox{if } r+1\leq j \leq n-1 ; \\ (-1)^{n-j-1} s_{r-j}, & \mbox{if } j \leq r . \end{array} \right. \end{displaymath} Since $H'=2H-E$ and $H=dH'-E'$ we have \begin{eqnarray} \Delta &=& {H'}^n=(2H-E)^n, \\ d \Delta &=& d{H'}^{n}={H'}^{n-1}\cdot(dH'-E')=(2H-E)^{n-1}\cdot H. \end{eqnarray} From the exact sequence $0\rightarrow\mathcal{T}_{{\mathfrak{B}}}\rightarrow \mathcal{T}_{{\mathbb{P}}^n}|_{{\mathfrak{B}}}\rightarrow\mathcal{N}_{{\mathfrak{B}},{\mathbb{P}}^n}\rightarrow0$ we get: \begin{eqnarray} s_1 &=& - \lambda \left( n+1\right) + {c}_{1} ,\\ s_2 &=& \lambda \begin{pmatrix}n+2\cr 2\end{pmatrix}-{c}_{1} \left( n+1\right) +{c}_{2} ,\\ s_3 &=& -\lambda \begin{pmatrix}n+3\cr 3\end{pmatrix}+{c}_{1} \begin{pmatrix}n+2\cr 2\end{pmatrix}-{c}_{2} \left( n+1\right) +{c}_{3} ,\\ & \vdots & \nonumber \end{eqnarray} Moreover $c_1=-K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^{r-1}$ and it can be expressed as a function of $\lambda$ and $g$. Thus we found $r+3$ independent equations on the $2r+5$ variables: $c_1,\ldots,c_r,s_1,\ldots,s_r,d,\Delta,\lambda,g,n$. 
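For instance, in the case $r=1$ the table of intersection numbers gives \begin{eqnarray*} \Delta &=& (2H-E)^n \;=\; 2^n-2n\lambda-s_1, \\ d\,\Delta &=& (2H-E)^{n-1}\cdot H \;=\; 2^{n-1}-\lambda, \end{eqnarray*} and, substituting $s_1=-(n+1)\lambda+c_1$ and $c_1=2-2g$, one recovers the expressions for $\Delta$ and $d$ in the statement.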
\end{proof} \begin{remark} Proposition \ref{prop: segre and chern classes} holds under less restrictive assumptions, as shown in the above proof. Here we treat the special case: let $\psi:{\mathbb{P}}^8\dashrightarrow\mathbf{Z}:=\overline{\psi({\mathbb{P}}^8)}\subseteq{\mathbb{P}}^{8+a}$ be a quadratic rational map whose base locus is a smooth irreducible $3$-dimensional variety $X$. Without any other restriction on $\psi$, denoting with $\pi:{\mathfrak{B}}l_X({\mathbb{P}}^8)\rightarrow {\mathbb{P}}^8$ the blow-up of ${\mathbb{P}}^8$ along $X$ and with $s_i(X)=s_i({\mathcal{N}}_{X,{\mathbb{P}}^8})$, we have \begin{equation}\label{eq: grado mappa razionale} \deg(\psi)\deg(\mathbf{Z}) = (2\pi^{\ast}(H_{{\mathbb{P}}^8})-E_X)^8 = -s_3(X)-16s_2(X)-112s_1(X)-448\deg(X)+256. \end{equation} Moreover, if $\psi$ is birational with liftable inverse and $\dim({\mathrm{sing}}(\mathbf{Z}))\leq 6$, we also have \begin{equation}\label{eq: sollevabile} d \deg(\mathbf{Z}) =(2\pi^{\ast}(H_{{\mathbb{P}}^8})-E_X)^7\cdot \pi^{\ast}(H_{{\mathbb{P}}^8}) = -s_2(X) -14 s_1(X) -84 \deg(X) +128, \end{equation} where $d$ denotes the degree of the linear system defining $\psi^{-1}$. \end{remark} Proposition \ref{prop: double point formula} is a translation of the well-known \emph{double point formula} (see for example \cite{peters-simonis} and \cite{laksov}), taking into account Proposition \ref{prop: B is QEL}. \begin{proposition}\label{prop: double point formula} If $\delta=0$ then $$ 2(2d-1)= \lambda^2 - \sum_{j=0}^{r}\begin{pmatrix} 2r+1 \cr j \end{pmatrix} s_{r-j}({\mathcal{T}}_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}}^{j}. $$ \end{proposition} \section{Case of dimension 1}\label{sec: dim 1} Lemma \ref{lemma: numerical 1-fold} directly follows from Propositions \ref{prop: hilbert polynomial} and \ref{prop: segre and chern classes}. \begin{lemma}\label{lemma: numerical 1-fold} If $r=1$, then one of the following cases holds: \begin{enumerate}[(A)] \item $n=3$, $a=1$, $\lambda=2$, $g=0$, $d=1$, $\Delta=2$; \item $n=4$, $a=0$, $\lambda=5$, $g=1$, $d=3$, $\Delta=1$; \item $n=4$, $a=1$, $\lambda=4$, $g=0$, $d=2$, $\Delta=2$; \item\label{case: escluso 1-fold} $n=4$, $a=2$, $\lambda=4$, $g=1$, $d=1$, $\Delta=4$; \item $n=4$, $a=3$, $\lambda=3$, $g=0$, $d=1$, $\Delta=5$. \end{enumerate} \end{lemma} \begin{proposition}\label{prop: possibili casi 1-fold} If $r=1$, then one of the following cases holds: \begin{enumerate}[(I)] \item $n=3$, $a=1$, ${\mathfrak{B}}$ is a conic; \item $n=4$, $a=0$, ${\mathfrak{B}}$ is an elliptic curve of degree $5$; \item $n=4$, $a=1$, ${\mathfrak{B}}$ is the rational normal quartic curve; \item $n=4$, $a=3$, ${\mathfrak{B}}$ is the twisted cubic curve. \end{enumerate} \end{proposition} \begin{proof} From Lemma \ref{lemma: numerical 1-fold} it remains only to exclude case (\ref{case: escluso 1-fold}). In this case ${\mathfrak{B}}$ is a complete intersection of two quadrics in ${\mathbb{P}}^3$ and also it is an $OADP$-curve. This is absurd because the only $OADP$-curve is the twisted cubic curve. \end{proof} \section{Case of dimension 2}\label{sec: dim 2} Proposition \ref{prop: possibili casi 2-fold} follows from \cite[Propositions~1.3 and 3.4]{russo-qel1} and \cite[Theorem~4.10]{ciliberto-mella-russo}. 
\begin{proposition}\label{prop: possibili casi 2-fold} If $r=2$, then either $n=6$, $d\geq2$, $\langle {\mathfrak{B}} \rangle = {\mathbb{P}}^6$, or one of the following cases holds: \begin{enumerate}[(I)] \setcounter{enumi}{4} \item\label{case 2-fold a} $n=4$, $d=1$, $\delta=2$, ${\mathfrak{B}}={\mathbb{P}}^1\times{\mathbb{P}}^1\subset{\mathbb{P}}^3\subset{\mathbb{P}}^4$; \item\label{case 2-fold b} $n=5$, $d=1$, $\delta=1$, ${\mathfrak{B}}$ is a hyperplane section of ${\mathbb{P}}^1\times{\mathbb{P}}^2\subset{\mathbb{P}}^5$; \item\label{case 2-fold c} $n=5$, $d=2$, $\delta=1$, ${\mathfrak{B}}=\nu_2({\mathbb{P}}^2)\subset{\mathbb{P}}^5$ is the Veronese surface; \item\label{case 2-fold d} $n=6$, $d=1$, $\delta=0$, ${\mathfrak{B}}\subset{\mathbb{P}}^5$ is an $OADP$-surface, i.e. ${\mathfrak{B}}$ is as in one of the following cases: \begin{enumerate}[($\ref{case 2-fold d}_1$)] \item\label{case 2-fold d1} ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(3))$ or ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(2)\oplus{\mathcal{O}}(2))$; \item\label{case 2-fold d2} del Pezzo surface of degree $5$ (hence the blow-up of ${\mathbb{P}}^2$ at $4$ points $p_1,\ldots,p_4$ and $|H_{{\mathfrak{B}}}|=|3H_{{\mathbb{P}}^2}-p_1-\cdots-p_4|$). \end{enumerate} \end{enumerate} \end{proposition} \begin{lemma}\label{lemma: r=2 B nondegenerate} If $r=2$, $n=6$ and $\langle {\mathfrak{B}} \rangle = {\mathbb{P}}^6$, then one of the following cases holds: \begin{enumerate}[(A)] \item \label{case 2-fold a=0 lambda=7} $a=0$, $\lambda=7$, $g=1$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=0$; \item \label{case 2-fold a leq 3} $0\leq a \leq 3$, $\lambda=8-a$, $g=3-a$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$. \end{enumerate} \end{lemma} \begin{proof} By Proposition \ref{prop: hilbert polynomial} it follows that $g=2\lambda+a-13$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=\lambda+a-7$. By \cite[Lemma~6.1]{note} and using that $g\geq0$ (proceeding as in \cite[Proposition~6.2]{note}), we obtain $(13-a)/2 \leq \lambda \leq 8-a$. \end{proof} \begin{lemma}\label{lemma: double point formula r=2} If $r=2$, $n=6$ and $\langle {\mathfrak{B}} \rangle = {\mathbb{P}}^6$, then one of the following cases holds: \begin{itemize} \item $a=0$, $d=4$, $\Delta=1$; \item $a=1$, $d=3$, $\Delta=2$; \item $a=2$, $d=2$, $\Delta=4$; \item $a=3$, $d=2$, $\Delta=5$. \end{itemize} \end{lemma} \begin{proof} We have $s_1({\mathcal{T}}_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}} = -c_1 $ and $ s_2({\mathcal{T}}_{{\mathfrak{B}}}) = c_1^2-c_2=12\chi({\mathcal{O}}_{{\mathfrak{B}}}) -2c_2 $. So, by Proposition \ref{prop: double point formula}, we obtain \begin{equation} 2(2d-1) = \lambda^2-10\lambda-12\chi({\mathcal{O}}_{{\mathfrak{B}}})+2c_2+5c_1 . \end{equation} Now, by Propositions \ref{prop: hilbert polynomial} and \ref{prop: segre and chern classes}, we obtain \begin{equation} d\Delta = 2a+4,\quad \Delta = (g^2+(-2a-4)g-16d+a^2-4a+75)/8 , \end{equation} and then we conclude by Lemma \ref{lemma: r=2 B nondegenerate}. 
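For instance, for $a=1$ Lemma \ref{lemma: r=2 B nondegenerate} gives $g=2$, so that $d\Delta=6$ and $\Delta=(4-12-16d+72)/8=8-2d$; hence $d^2-4d+3=0$, i.e. $d\in\{1,3\}$, and since $d\geq2$ (Proposition \ref{prop: possibili casi 2-fold}) we obtain $d=3$ and $\Delta=2$. The remaining values of $a$ are treated in the same way.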
\end{proof} \begin{proposition}\label{prop: r=2 B nondegenerate} If $r=2$, $n=6$ and $\langle {\mathfrak{B}}\rangle={\mathbb{P}}^6$ then one of the following cases holds: \begin{enumerate}[(I)] \setcounter{enumi}{8} \item $a=0$, $\lambda=7$, $g=1$, ${\mathfrak{B}}$ is an elliptic scroll ${\mathbb{P}}_{C}({\mathcal{E}})$ with $e({\mathcal{E}})=-1$; \item $a=0$, $\lambda=8$, $g=3$, ${\mathfrak{B}}$ is the blow-up of ${\mathbb{P}}^2$ at $8$ points $p_1\ldots,p_8$, $|H_{{\mathfrak{B}}}|=|4H_{{\mathbb{P}}^2}-p_1-\cdots-p_8|$; \item $a=1$, $\lambda=7$, $g=2$, ${\mathfrak{B}}$ is the blow-up of ${\mathbb{P}}^2$ at $6$ points $p_0\ldots,p_5$, $|H_{{\mathfrak{B}}}|=|4H_{{\mathbb{P}}^2}-2p_0-p_1-\cdots-p_5|$; \item $a=2$, $\lambda=6$, $g=1$, ${\mathfrak{B}}$ is the blow-up of ${\mathbb{P}}^2$ at $3$ points $p_1,p_2,p_3$, $|H_{{\mathfrak{B}}}|=|3H_{{\mathbb{P}}^2}-p_1-p_2-p_3|$; \item $a=3$, $\lambda=5$, $g=0$, ${\mathfrak{B}}$ is a rational normal scroll. \end{enumerate} \end{proposition} \begin{proof} For $a=0$, $a=1$ and $a\in\{2,3\}$ the statement follows, respectively, from \cite{crauder-katz-1989}, \cite[Proposition~6.2]{note} and \cite{ionescu-smallinvariants}. \end{proof} \section{Case of dimension 3}\label{sec: dim 3} Proposition \ref{prop: possibili casi C1} follows from: \cite[Proposition~1.3 and 3.4]{russo-qel1}, \cite{fujita-3-fold}, \cite{ionescu-russo-conicconnected}, \cite[page~62]{fujita-polarizedvarieties} and \cite{ciliberto-mella-russo}. \begin{proposition}\label{prop: possibili casi C1} If $r=3$, then either $n=8$, $d\geq2$, $\langle {\mathfrak{B}} \rangle = {\mathbb{P}}^8$, or one of the following cases holds: \begin{enumerate}[(I)] \setcounter{enumi}{13} \item\label{case C1 a} $n=5$, $d=1$, $\delta=3$, ${\mathfrak{B}}=Q^3\subset{\mathbb{P}}^4\subset{\mathbb{P}}^5$ is a quadric; \item\label{case C1 b} $n=6$, $d=1$, $\delta=2$, ${\mathfrak{B}}={\mathbb{P}}^1\times{\mathbb{P}}^2\subset{\mathbb{P}}^5\subset{\mathbb{P}}^6$; \item\label{case C1 c} $n=7$, $d=1$, $\delta=1$, ${\mathfrak{B}}\subset{\mathbb{P}}^6$ is as in one of the following cases: \begin{enumerate}[($\ref{case C1 c}_1$)] \item\label{case C1 c1} ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(1)\oplus{\mathcal{O}}(2))$; \item\label{case C1 c2} linear section of ${\mathbb{G}}(1,4)\subset{\mathbb{P}}^9$; \end{enumerate} \item\label{case C1 d} $n=7$, $d=2$, $\delta=1$, ${\mathfrak{B}}$ is a hyperplane section of ${\mathbb{P}}^2\times{\mathbb{P}}^2\subset{\mathbb{P}}^8$; \item\label{case C1 e} $n=8$, $d=1$, $\delta=0$, ${\mathfrak{B}}\subset{\mathbb{P}}^7$ is an $OADP$-variety, i.e. ${\mathfrak{B}}$ is as in one of the following cases: \begin{enumerate}[($\ref{case C1 e}_1$)] \item\label{case C1 e1} ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(1)\oplus{\mathcal{O}}(3))$ or ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(2)\oplus{\mathcal{O}}(2))$; \item\label{case C1 e2} Edge variety of degree $6$ (i.e. ${\mathbb{P}}^1\times{\mathbb{P}}^1\times{\mathbb{P}}^1$) or Edge variety of degree $7$; \item\label{case C1 e3} ${\mathbb{P}}_{{\mathbb{P}}^2}({\mathcal{E}})$, where ${\mathcal{E}}$ is a vector bundle with $c_1({\mathcal{E}})=4$ and $c_2({\mathcal{E}})=8$, given as an extension by the following exact sequence $0\rightarrow{\mathcal{O}}_{{\mathbb{P}}^2}\rightarrow{\mathcal{E}}\rightarrow {\mathcal{I}}_{\{p_1,\ldots,p_8\},{\mathbb{P}}^2}(4)\rightarrow0$. 
\end{enumerate} \end{enumerate} \end{proposition} In the following we denote by $\Lambda\subsetneq C\subsetneq S\subsetneq {\mathfrak{B}}$ a sequence of general linear sections of ${\mathfrak{B}}$. \begin{lemma}\label{lemma: r=3 B nondegenerate} If $r=3$, $n=8$ and $\langle {\mathfrak{B}} \rangle = {\mathbb{P}}^8$, then one of the following cases holds: \begin{enumerate}[(A)] \item \label{a=0,lambda=13} $a=0$, $\lambda=13$, $g=8$, $K_S\cdot H_S=1$, $K_S^2=-1$; \item \label{a=1,lambda=12} $a=1$, $\lambda=12$, $g=7$, $K_S\cdot H_S=0$, $K_S^2=0$; \item \label{a geq2} $0\leq a\leq6$, $\lambda=12-a$, $g=6-a$, $K_S\cdot H_S=-2-a$. \end{enumerate} \end{lemma} \begin{proof} Firstly we note that, from the exact sequence $0\rightarrow\mathcal{T}_{S}\rightarrow\mathcal{T}_{{\mathfrak{B}}}|_{S}\rightarrow{\mathcal{O}}_{S}(1)\rightarrow0$, we deduce $c_2=c_2(S)+c_1(S)=12\chi({\mathcal{O}}_{S})-K_{S}^2-K_{S}\cdot H_{S}$ and hence \begin{equation} K_S^2=14\lambda+12\chi({\mathcal{O}}_S)-12g+d\Delta-116 = -22\lambda+12g+d\Delta-12a+184. \end{equation} Secondly we note that (see \cite[Lemma~6.1]{note}), putting $h_{\Lambda}(2):=h^0({\mathbb{P}}^5,{\mathcal{O}}(2))-h^0({\mathbb{P}}^5,{\mathcal{I}}_{\Lambda}(2))$, we have \begin{equation}\label{hilbert-function} \mathrm{min}\{\lambda,11\} \leq h_{\Lambda}(2)\leq 21-h^0({\mathbb{P}}^8,{\mathcal{I}}_{{\mathfrak{B}}}(2))=12-a. \end{equation} Now we establish the following: \begin{claim}\label{claim: KsHs<0} If $K_S\cdot H_S\leq0$ and $K_S\nsim 0$, then $\lambda=12-a$ and $g=6-a$. \end{claim} \begin{proof}[Proof of the Claim] Similarly to \cite[Case~6.1]{note}, we obtain that $P_{{\mathfrak{B}}}(-1)=0$ and $P_{{\mathfrak{B}}}(0)=1-q$, where $q:=h^1(S,{\mathcal{O}}_S)=h^1({\mathfrak{B}},{\mathcal{O}}_{{\mathfrak{B}}})$; in particular $g=-5q-a+6$ and $\lambda=-3q-a+12$. Since $g\geq0$ we have $5q\leq 6-a$ and the possibilities are: if $a\leq1$ then $q\leq1$; if $a\geq 2$ then $q=0$. If $(a,q)=(0,1)$ then $(g,\lambda)=(1,9)$ and the case is excluded by \cite[Theorem~12.3]{fujita-polarizedvarieties}\footnote{Note that ${\mathfrak{B}}$ cannot be a scroll over a curve (this follows from (\ref{eq: relation scroll}) and (\ref{eq: second relation scroll}) below and also it follows from \cite[Proposition~3.2(i)]{mella-russo-baselocusleq3}).}; if $(a,q)=(1,1)$ then $(g,\lambda)=(0,8)$ and the case is excluded by \cite[Theorem~12.1]{fujita-polarizedvarieties}. Thus we have $q=0$ and hence $g=6-a$ and $\lambda=12-a$; in particular we have $a\leq 6$. \end{proof} Now we discuss the cases according to the value of $a$. \begin{case}[$a=0$] It is clear that $\varphi$ must be of type $(2,5)$ and hence $K_S^2=-22\lambda+12g+189$. By Claim \ref{claim: KsHs<0}, if $K_S\cdot H_S=2g-2-\lambda<0$, we fall into case (\ref{a geq2}). So we suppose that $K_S\cdot H_S\geq0$, namely that $g\geq\lambda/2+1$. From Castelnuovo's bound it follows that $\lambda\geq12$ and if $\lambda=12$ then $K_S\cdot H_S=0$, $g=7$ and hence $K_S^2=9$. Since this is impossible by Claim \ref{claim: KsHs<0}, we conclude that $\lambda\geq 13$. Now by (\ref{hilbert-function}) it follows that $11\leq h_{\Lambda}(2)\leq12$, but if $h_{\Lambda}(2)=11$ from Castelnuovo Lemma \cite[Lemma~1.10]{ciliberto-hilbertfunctions} we obtain a contradiction. Thus we have $h_{\Lambda}(2)=12$ and $h^0({\mathbb{P}}^5,{\mathcal{I}}_{\Lambda}(2))=h^0({\mathbb{P}}^8,{\mathcal{I}}_{{\mathfrak{B}}}(2))=9$. 
So from \cite[Theorem~3.1]{ciliberto-hilbertfunctions} we deduce that $\lambda\leq 14$ and furthermore, by the refinement of Castelnuovo's bound contained in \cite[Theorem~2.5]{ciliberto-hilbertfunctions}, we obtain $g\leq 2\lambda-18$. In summary we have the following possibilities: \begin{enumerate}[(i)] \item\label{case: T1} $\lambda=13$, $g=8$, $K_S\cdot H_S=1$, $\chi({\mathcal{O}}_S)=2$, $K_S^2=-1$; \item\label{case: T2} $\lambda=14$, $g=8$, $K_S\cdot H_S=0$, $\chi({\mathcal{O}}_S)=-1$, $K_S^2=-23$; \item\label{case: T3} $\lambda=14$, $g=9$, $K_S\cdot H_S=2$, $\chi({\mathcal{O}}_S)=1$, $K_S^2=-11$; \item\label{case: T4} $\lambda=14$, $g=10$, $K_S\cdot H_S=4$, $\chi({\mathcal{O}}_S)=3$, $K_S^2=1$. \end{enumerate} Case (\ref{case: T1}) coincides with case (\ref{a=0,lambda=13}). Case (\ref{case: T2}) is excluded by Claim \ref{claim: KsHs<0}. In the circumstances of case (\ref{case: T3}), we have $h^1(S,{\mathcal{O}}_S)=h^2(S,{\mathcal{O}}_S)=h^0(S,K_S)$. If $h^1(S,{\mathcal{O}}_S)>0$, since $(K_{{\mathfrak{B}}}+4H_{{\mathfrak{B}}})\cdot K_S=K_S^2+3 K_S\cdot H_S=-5<0$, we see that $K_{{\mathfrak{B}}}+4H_{{\mathfrak{B}}}$ is not nef and then we obtain a contradiction by \cite{ionescu-adjunction}. If $h^1(S,{\mathcal{O}}_S)=0$, then we also have $h^1({\mathfrak{B}},{\mathcal{O}}_{{\mathfrak{B}}})=h^2({\mathfrak{B}},{\mathcal{O}}_{{\mathfrak{B}}})=0$ and hence $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1-h^3({\mathfrak{B}},{\mathcal{O}}_{{\mathfrak{B}}})\leq 1$, against the fact that $\chi({\mathcal{O}}_{{\mathfrak{B}}})=2\lambda-g-17=2$. Thus case (\ref{case: T3}) does not occur. Finally, in the circumstances of case (\ref{case: T4}), note that $h^0(S,K_S)=2+h^1(S,{\mathcal{O}}_S)\geq2$ and we write $|K_S|=|M|+F$, where $|M|$ is the mobile part of the linear system $|K_S|$ and $F$ is the fixed part. If $M_1=M$ is a general member of $|M|$, there exists $M_2\in|M|$ having no common irreducible components with $M_1$ and so $M^2=M_1\cdot M_2=\sum_{p}\left(M_1\cdot M_2\right)_{p}\geq0$; furthermore, by using Bertini Theorem, we see that ${\mathrm{sing}}(M_1)$ consists of points $p$ such that the intersection multiplicity $\left(M_1\cdot M_2\right)_{p}$ of $M_1$ and $M_2$ in $p$ is at least $2$. By definition, we also have $M\cdot F\geq0$ and so we deduce $2p_a(M)-2=M\cdot (M+K_S)= 2 M^2+ M\cdot F\geq 0$, from which $p_a(M)\geq 1$ and $p_a(M)=2$ if $F=0$. On the other hand, we have $M\cdot H_S\leq K_S\cdot H_S=4$ and, since $S$ is cut out by quadrics, $M$ does not contain planar curves of degree $\geq3$. If $M\cdot H_S=4$, then $F=0$, $M^2=1$ and $M$ is a (possibly disconnected) smooth curve; since $p_a(M)=2$, $M$ is actually disconnected and so it is a disjoint union of twisted cubics, conics and lines. But then we obtain the contradiction that $p_a(M)=1-\#\{\mbox{connected components of }M\}<0$. If $M\cdot H_S\leq3$, then $M$ must be either a twisted cubic or a union of conics and lines. In all these cases we again obtain the contradiction that $p_a(M)=1-\#\{\mbox{connected components of }M\}\leq 0$. Thus case (\ref{case: T4}) does not occur. \end{case} \begin{case}[$a=1$] By \cite[Proposition~6.4]{note} we fall into case (\ref{a=1,lambda=12}) or (\ref{a geq2}). \end{case} \begin{case}[$a\geq2$] By (\ref{hilbert-function}) it follows that $\lambda\leq 10$ and by Castelnuovo's bound it follows that $K_S\cdot H_S\leq -4<0$. Thus, by Claim \ref{claim: KsHs<0} we fall into case (\ref{a geq2}). 
\end{case} \end{proof} Now we apply the double point formula (Proposition \ref{prop: double point formula}) in order to obtain additional numerical restrictions under the hypothesis of Lemma \ref{lemma: r=3 B nondegenerate}. \begin{lemma}\label{lemma: double point formula} If $r=3$, $n=8$ and $\langle {\mathfrak{B}}\rangle={\mathbb{P}}^8$, then $$ K_{{\mathfrak{B}}}^3=\lambda^2+23\lambda-24g-(7d+1)\Delta-4d+36a-226 . $$ \end{lemma} \begin{proof} We have (see \cite[App. A, Exercise~6.7]{hartshorne-ag}): \begin{eqnarray*} s_1({\mathcal{T}}_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}}^2 &=& -c_1({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}^2=K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2 , \\ s_2({\mathcal{T}}_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}} &=& c_1({\mathfrak{B}})^2\cdot H_{{\mathfrak{B}}}-c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}} = K_{{\mathfrak{B}}}^2\cdot H_{{\mathfrak{B}}}-c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}} \\ &=& 3K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2-2H_{{\mathfrak{B}}}^3-2c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}+12\left(\chi({\mathcal{O}}_{{\mathfrak{B}}}(H_{{\mathfrak{B}}}))-\chi({\mathcal{O}}_{{\mathfrak{B}}})\right), \\ s_3({\mathcal{T}}_{{\mathfrak{B}}}) &=& -c_1({\mathfrak{B}})^3+2c_1({\mathfrak{B}})\cdot c_2({\mathfrak{B}})-c_3({\mathfrak{B}}) =K_{{\mathfrak{B}}}^3+48\chi({\mathcal{O}}_{{\mathfrak{B}}})-c_3({\mathfrak{B}}). \end{eqnarray*} Hence, applying the double point formula and using the relations $\chi({\mathcal{O}}_{{\mathfrak{B}}})=2\lambda-g+a-17$, $\chi({\mathcal{O}}_{{\mathfrak{B}}}(H_{{\mathfrak{B}}}))=9$, we obtain: \begin{eqnarray*} 4d-2 &=& 2\,\deg({\mathrm{Sec}}({\mathfrak{B}}))\\ &=& \deg({\mathfrak{B}})^2-s_3({\mathcal{T}}_{{\mathfrak{B}}})-7\,s_2({\mathcal{T}}_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}}-21\,s_1({\mathcal{T}}_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}}^2-35\,H_{{\mathfrak{B}}}^3 \\ &=& \deg({\mathfrak{B}})^2-21\,\deg({\mathfrak{B}})-42\,K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2+14\,c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}-K_{{\mathfrak{B}}}^3 \\ && +c_3({\mathfrak{B}})-84\,\chi({\mathcal{O}}_{{\mathfrak{B}}}(H_{{\mathfrak{B}}}))+36\,\chi({\mathcal{O}}_{{\mathfrak{B}}}) \\ &=& -K_{{\mathfrak{B}}}^3+\lambda^2+23\lambda-24g-(7d+1)\Delta+36a-228. \end{eqnarray*} \end{proof} \begin{lemma}\label{lemma: quadric fibration} If $r=3$, $n=8$, $\langle {\mathfrak{B}}\rangle={\mathbb{P}}^8$ and ${\mathfrak{B}}$ is a quadric fibration over a curve, then one of the following cases holds: \begin{itemize} \item $a=3$, $\lambda=9$, $g=3$, $d=3$, $\Delta=5$; \item $a=4$, $\lambda=8$, $g=2$, $d=2$, $\Delta=10$. \end{itemize} \end{lemma} \begin{proof} Denote by $\beta:({\mathfrak{B}},H_{{\mathfrak{B}}})\rightarrow (Y,H_Y)$ the projection over the curve $Y$ such that $\beta^{\ast}(H_Y)=K_{{\mathfrak{B}}}+2H_{{\mathfrak{B}}}$. 
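Note that, by the definition of the sectional genus, $(K_{{\mathfrak{B}}}+2H_{{\mathfrak{B}}})\cdot H_{{\mathfrak{B}}}^2=2g-2$, i.e. $K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2=2g-2-2\lambda$; this relation is used repeatedly in the computations below.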
We have \begin{eqnarray*} 0&=&\beta^{\ast}(H_Y)^2\cdot H_{{\mathfrak{B}}} = K_{{\mathfrak{B}}}^2\cdot H_{{\mathfrak{B}}}+4K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2+4H_{{\mathfrak{B}}}^3, \\ 0&=& \beta^{\ast}(H_Y)^3= K_{{\mathfrak{B}}}^3+6K_{{\mathfrak{B}}}^2\cdot H_{{\mathfrak{B}}}+12 K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2+8 H_{{\mathfrak{B}}}^3, \\ \chi({\mathcal{O}}_{{\mathfrak{B}}}(H_{{\mathfrak{B}}})) &=& \frac{1}{12} K_{{\mathfrak{B}}}^2\cdot H_{{\mathfrak{B}}}-\frac{1}{4}K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2+\frac{1}{6}H_{{\mathfrak{B}}}^3+\frac{1}{12}c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}+\chi({\mathcal{O}}_{{\mathfrak{B}}}), \\ \end{eqnarray*} from which it follows that \begin{eqnarray} K_{{\mathfrak{B}}}^3 &=& -8\lambda+24g-24, \\ c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}} &=& -36\lambda+26g-12a+298. \end{eqnarray} Hence, by Lemma \ref{lemma: double point formula} and Proposition \ref{prop: segre and chern classes}, we obtain \begin{eqnarray} d\Delta&=& 23\lambda-16g+12a-180 , \\ \Delta+4d&=&\lambda^2-130\lambda+64g-48a+1058 . \end{eqnarray} Now the conclusion follows from Lemma \ref{lemma: r=3 B nondegenerate}, by observing that the case $a=6$ cannot occur. In fact, if $a=6$, by \cite{ionescu-smallinvariants} it follows that ${\mathfrak{B}}$ is a rational normal scroll and by a direct calculation (or by Lemma \ref{lemma: scroll over curve}) we see that $d=2$ and $\Delta=14$. \end{proof} \begin{lemma}\label{lemma: scroll over surface} If $r=3$, $n=8$, $\langle {\mathfrak{B}}\rangle={\mathbb{P}}^8$ and ${\mathfrak{B}}$ is a scroll over a smooth surface $Y$, then we have: \begin{eqnarray*} c_2\left(Y\right) &=& \left(\left(7d-1\right)\lambda^2+\left(177-679d\right)\lambda+\left(292d-92\right)g-28d^2 \right. \\ && \left. +\left(5554-252a\right)d+36a-1474\right)/\left(2d+2\right), \\ \Delta &=& \left(\lambda^2-107\lambda+48g-4d-36a+878\right)/\left(d+1\right) . \end{eqnarray*} \end{lemma} \begin{proof} Similarly to Lemma \ref{lemma: quadric fibration}, denote by $\beta:({\mathfrak{B}},H_{{\mathfrak{B}}})\rightarrow (Y,H_Y)$ the projection over the surface $Y$ such that $\beta^{\ast}(H_Y)=K_{{\mathfrak{B}}}+2H_{{\mathfrak{B}}}$. Since $\beta^{\ast}(H_Y)^3=0$ we obtain \begin{eqnarray*} K_{{\mathfrak{B}}}^3 &=&-8H_{{\mathfrak{B}}}^3-12K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2-6K_{{\mathfrak{B}}}^2\cdot H_{{\mathfrak{B}}} \\ &=& -30K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^2+4H_{{\mathfrak{B}}}^3+6c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}-72\chi({\mathcal{O}}_{{\mathfrak{B}}}(H_{{\mathfrak{B}}}))+72\chi({\mathcal{O}}_{{\mathfrak{B}}}) \\ &=& 130\lambda-72g-6d\Delta+72a-1104. \end{eqnarray*} Now we conclude comparing the last formula with Lemma \ref{lemma: double point formula} and using the relation \begin{equation} 70\lambda-44g+(7d-1)\Delta-596=c_3({\mathfrak{B}})=c_1({\mathbb{P}}^1)c_2(Y)=2c_2(Y). \end{equation} \end{proof} \begin{lemma}\label{lemma: scroll over curve} If $r=3$, $n=8$, $\langle {\mathfrak{B}}\rangle={\mathbb{P}}^8$ and ${\mathfrak{B}}$ is a scroll over a smooth curve, then we have: $a=6$, $\lambda=6$, $g=0$, $d=2$, $\Delta=14$. \end{lemma} \begin{proof} We have a projection $\beta:({\mathfrak{B}},H_{{\mathfrak{B}}})\rightarrow (Y,H_Y)$ over a curve $Y$ such that $\beta^{\ast}(H_Y)=K_{{\mathfrak{B}}}+3H_{{\mathfrak{B}}}$. 
By expanding the expressions $\beta^{\ast}(H_Y)^2\cdot H_{{\mathfrak{B}}}=0$ and $\beta^{\ast}(H_Y)^3=0$ we obtain $K_{{\mathfrak{B}}}^2\cdot H_{{\mathfrak{B}}}=3\lambda-12g+12$ and $K_{{\mathfrak{B}}}^3=54(g-1)$, and hence by Lemma \ref{lemma: double point formula} we get \begin{equation}\label{eq: relation scroll} \lambda^2+23\lambda-78g-(7d+1)\Delta-4d+36a-172 = 0. \end{equation} Also, by expanding the expression $\chi({\mathcal{O}}_{{\mathfrak{B}}}(H_{{\mathfrak{B}}}))=9$ we obtain $c_2=-35\lambda+30g-12a+294 $ and hence by Proposition \ref{prop: segre and chern classes} we get \begin{equation}\label{eq: second relation scroll} 22\lambda-20g-d\Delta+12a-176 = 0. \end{equation} Now the conclusion follows from Lemma \ref{lemma: r=3 B nondegenerate}. \end{proof} Finally we conclude our discussion about classification with the following: \begin{proposition}\label{prop: r=3 B nondegenerate} If $r=3$, $n=8$ and $\langle {\mathfrak{B}}\rangle={\mathbb{P}}^8$, then one of the following cases holds: \begin{enumerate}[(I)] \setcounter{enumi}{18} \item $a=0$, $\lambda=12$, $g=6$, ${\mathfrak{B}}$ is a scroll ${\mathbb{P}}_{Y}({\mathcal{E}})$ over a birationally ruled surface $Y$ with $K_Y^2=5$, $c_2({\mathcal{E}})=8$ and $c_1^2({\mathcal{E}})=20$; \item $a=0$, $\lambda=13$, $g=8$, ${\mathfrak{B}}$ is obtained as the blow-up of a Fano variety $X$ at a point $p\in X$, $|H_{{\mathfrak{B}}}|=|H_{X}-p|$; \item\label{case: cubic hypersurface} $a=1$, $\lambda=11$, $g=5$, ${\mathfrak{B}}$ is the blow-up of $Q^3$ at $5$ points $p_1,\ldots,p_5$, $|H_{{\mathfrak{B}}}|=|2H_{Q^3}-p_1-\cdots-p_5|$; \item $a=1$, $\lambda=11$, $g=5$, ${\mathfrak{B}}$ is a scroll over ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}\oplus{\mathcal{O}}(-1))$; \item $a=1$, $\lambda=12$, $g=7$, ${\mathfrak{B}}$ is a linear section of $S^{10}\subset{\mathbb{P}}^{15}$; \item\label{case: scroll over Q2 or quadric fibration} $a=2$, $\lambda=10$, $g=4$, ${\mathfrak{B}}$ is a scroll over $Q^2$; \item $a=3$, $\lambda=9$, $g=3$, ${\mathfrak{B}}$ is a scroll over ${\mathbb{P}}^2$ or a quadric fibration over ${\mathbb{P}}^1$; \item $a=4$, $\lambda=8$, $g=2$, ${\mathfrak{B}}$ is a hyperplane section of ${\mathbb{P}}^1\times Q^3$; \item $a=6$, $\lambda=6$, $g=0$, ${\mathfrak{B}}$ is a rational normal scroll. \end{enumerate} \end{proposition} \begin{proof} For $a=6$ the statement follows from \cite{ionescu-smallinvariants}. The case with $a=5$ is excluded by \cite{ionescu-smallinvariants} and Example \ref{example: a=5}. For $a=4$ the statement follows from \cite{ionescu-smallinvariantsIII}. For $a\in\{2,3\}$, by \cite{fania-livorni-nine}, \cite{fania-livorni-ten} and \cite{ionescu-smallinvariantsII} it follows that the abstract structure of ${\mathfrak{B}}$ is as asserted, or $a=2$ and ${\mathfrak{B}}$ is a quadric fibration over ${\mathbb{P}}^1$; the last case is excluded by Lemma \ref{lemma: quadric fibration}. For $a=1$ the statement is just \cite[Proposition~6.6]{note}. Now we treat the cases with $a=0$. \begin{case}[$a=0, \lambda=12$] Since $\deg({\mathfrak{B}})\leq 2\mathrm{codim}_{{\mathbb{P}}^8}({\mathfrak{B}})+2$, it follows that $({\mathfrak{B}},H_{{\mathfrak{B}}})$ must be as in one of the cases (a),\ldots,(h) of \cite[Theorem~1]{ionescu-degsmallrespectcodim}. Cases (a), (d), (e), (g), (h) are of course impossible and case (c) is excluded by Lemma \ref{lemma: quadric fibration}. If ${\mathfrak{B}}$ is as in case (b), by Lemma \ref{lemma: scroll over curve} we obtain that ${\mathfrak{B}}$ is a scroll over a birationally ruled surface. 
Now suppose that $({\mathfrak{B}},H_{{\mathfrak{B}}})$ is as in case (f). Thus there is a reduction $(X,H_X)$ as in one of the cases: \begin{enumerate}[(f1)] \item\label{case1} $X={\mathbb{P}}^3$, $H_X\in|{\mathcal{O}}(3)|$; \item\label{case2} $X=Q^3$, $H_X\in|{\mathcal{O}}(2)|$; \item\label{case3} $X$ is a ${\mathbb{P}}^2$-bundle over a smooth curve such that ${\mathcal{O}}_X(H_X)$ induces ${\mathcal{O}}(2)$ on each fiber. \end{enumerate} By definition of reduction we have $X\subset{\mathbb{P}}^{N}$, where $N=8+s$, $\deg(X)=\lambda+s=12+s$ and $s$ is the number of points blown up on $X$ to get ${\mathfrak{B}}$. Case (f\ref{case1}) and (f\ref{case2}) are impossible because they force $\lambda$ to be respectively $16$ and $11$. In case (f\ref{case3}), we have a projection $\beta:(X,H_{X})\rightarrow (Y,H_Y)$ over a curve $Y$ such that $\beta^{\ast}(H_Y)=2K_{X}+3H_{X}$. Hence we get \begin{displaymath} K_X H_X^2= (2K_X+3H_X)^2\cdot H_X/12 -K_X^2\cdot H_X/3 - 3H_X^3/4 = -K_X^2\cdot H_X/3 - 3H_X^3/4 , \end{displaymath} from which we deduce that \begin{eqnarray*} 0&=&(2K_X+3H_X)^3 = 8K_X^3+36K_X^2\cdot H_X+54K_X\cdot H_X^2+27H_X^3 \\ &=& 8K_X^3+18K_X^2\cdot H_X-27 H_X^3/2 \\ &=& 8( K_{{\mathfrak{B}}}^3 - 8s )+18K_X^2\cdot H_X-27 (\deg({\mathfrak{B}})+s)/2 \\ &=& 18 K_X^2\cdot H_X-155s/2-210. \end{eqnarray*} Since $s\leq 12$ (see \cite[Lemma~8.1]{besana-biancofiore-numerical}), we conclude that case (f) does not occur. Thus, ${\mathfrak{B}}={\mathbb{P}}_{Y}({\mathcal{E}})$ is a scroll over a surface $Y$; moreover, by Lemma \ref{lemma: scroll over surface} and \cite[Theorem~11.1.2]{beltrametti-sommese}, we obtain $K_Y^2=5$, $c_2({\mathcal{E}})=K_Y^2-K_S^2=8$ and $c_1^2({\mathcal{E}})=\lambda+c_2({\mathcal{E}})=20$. \end{case} \begin{case}[$a=0, \lambda=13$] The proof is located in \cite[page~16]{mella-russo-baselocusleq3}, but we sketch it for the reader's convenience. By Lemma \ref{lemma: r=3 B nondegenerate} we know that $\chi({\mathcal{O}}_S)=2$ and $K_S$ is an exceptional curve of the first kind. Thus, if we blow-down the divisor $K_S$, we obtain a $K3$-surface. By using adjunction theory (see for instance \cite{beltrametti-sommese} or Ionescu's papers cited in the references) and by Lemmas \ref{lemma: quadric fibration}, \ref{lemma: scroll over surface} and \ref{lemma: scroll over curve} it follows that the adjunction map $\phi_{|K_{{\mathfrak{B}}}+2H_{{\mathfrak{B}}}|}$ is a generically finite morphism; moreover, since $(K_{{\mathfrak{B}}}+2H_{{\mathfrak{B}}})\cdot K_S=0$, we see that $\phi_{|K_{{\mathfrak{B}}}+2H_{{\mathfrak{B}}}|}$ is not a finite morphism. So, we deduce that there is a $({\mathbb{P}}^2,{\mathcal{O}}_{{\mathbb{P}}^2}(-1))$ inside ${\mathfrak{B}}$ and, after the blow-down of this divisor, we get a smooth Fano $3$-fold $X\subset{\mathbb{P}}^9$ of sectional genus $8$ and degree $14$. \end{case} \end{proof} \section{Examples}\label{sec: examples} The calculations in the following examples can be verified with the aid of the computer algebra system \cite{macaulay2}. \begin{example}[$r=1,2,3; n=3,4,5; a=1; d=1$]\label{example: 1} See also \cite[\S 2]{note}. If $Q\subset{\mathbb{P}}^{n-1}\subset{\mathbb{P}}^n$ is a smooth quadric, then the linear system $|{\mathcal{I}}_{Q,{\mathbb{P}}^n}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^n\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{n+1}$ of type $(2,1)$ whose image is a smooth quadric. \end{example} \begin{example}[$r=1; n=4; a=0; d=3$]\label{example: 2} See also \cite{crauder-katz-1989}. 
If $X\subset{\mathbb{P}}^4$ is a nondegenerate curve of genus $1$ and degree $5$, then $X$ is the scheme-theoretic intersection of the quadrics (of rank $3$) containing $X$ and $|{\mathcal{I}}_{X,{\mathbb{P}}^4}(2)|$ defines a Cremona transformation ${\mathbb{P}}^4\dashrightarrow{\mathbb{P}}^4$ of type $(2,3)$. \end{example} \begin{example}[$r=1,2,3; n=4,5,7; a=1,0,1; d=2$]\label{example: 3} See also \cite{ein-shepherdbarron} and \cite[Example~4.1]{note}. If $X\subset{\mathbb{P}}^n$ is a Severi variety, then $|{\mathcal{I}}_{X,{\mathbb{P}}^n}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^n\dashrightarrow{\mathbb{P}}^n$ of type $(2,2)$ whose base locus is $X$. The restriction of $\psi$ to a general hyperplane is a birational transformation ${\mathbb{P}}^{n-1}\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^n$ of type $(2,2)$ and ${\mathbf{S}}$ is a smooth quadric. \end{example} \begin{example}[$r=1; n=4; a=2; d=1$ - not satisfying \ref{assumption: ipotesi}]\label{example: B2=singSred} We have a special birational transformation $\psi:{\mathbb{P}}^4\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^6$ of type $(2,1)$ with base locus $X$, image ${\mathbf{S}}$ and base locus of the inverse $Y$, as follows: \begin{eqnarray*} X &=& V(x_0x_1-x_2^2-x_3^2,-x_0^2-x_1^2+x_2x_3,x_4), \\ {\mathbf{S}} &= & V(y_2y_3-y_4^2-y_5^2-y_0y_6,y_2^2+y_3^2-y_4y_5+y_1y_6), \\ P_{{\mathbf{S}}}(t) &=& (4t^4+24t^3+56t^2+60t+24)/4!, \\ {\mathrm{sing}}({\mathbf{S}}) &=& V(y_6,y_5^2,y_4y_5,y_3y_5,y_2y_5,y_4^2,y_3y_4,y_2y_4,2y_1y_4+y_0y_5, \\ && y_0y_4+2y_1y_5,y_3^2,y_2y_3,y_2^2,y_1y_2+2y_0y_3,2y_0y_2+y_1y_3), \\ P_{{\mathrm{sing}}({\mathbf{S}})}(t) &=& t + 5, \\ ({\mathrm{sing}}({\mathbf{S}}))_{\mathrm{red}} &=& V(y_6,y_5,y_4,y_3,y_2), \\ Y=(Y)_{\mathrm{red}}&=&({\mathrm{sing}}({\mathbf{S}}))_{\mathrm{red}}= V(y_6,y_5,y_4,y_3,y_2). \end{eqnarray*} See also \cite[Example~4.6]{note} for another example in which \ref{assumption: ipotesi} is not satisfied. \end{example} \begin{example}[$r=1,2,3; n=4,5,6; a=3; d=1$]\label{example: 5} See also \cite{russo-simis} and \cite{semple}. If $X={\mathbb{P}}^1\times{\mathbb{P}}^2\subset{\mathbb{P}}^5\subset{\mathbb{P}}^{6}$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^6}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{6}\dashrightarrow {\mathbf{S}}\subset{\mathbb{P}}^{9}$ of type $(2,1)$ whose base locus is $X$ and whose image is ${\mathbf{S}}={\mathbb{G}}(1,4)$. Restricting $\psi$ to a general ${\mathbb{P}}^5\subset{\mathbb{P}}^{6}$ (resp. ${\mathbb{P}}^4\subset{\mathbb{P}}^{6}$) we obtain a birational transformation ${\mathbb{P}}^5\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{8}$ (resp. ${\mathbb{P}}^4\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{7}$) whose image is a smooth linear section of ${\mathbb{G}}(1,4)\subset{\mathbb{P}}^{9}$. \end{example} \begin{example}[$r=2; n=6; a=0; d=4$]\label{example: 6} See also \cite{crauder-katz-1989} and \cite{hulek-katz-schreyer}. Let $Z=\{p_1,\ldots,p_8\}\subset{\mathbb{P}}^2$ be such that no $4$ of the $p_i$ are collinear and no $7$ of the $p_i$ lie on a conic and consider the blow-up $X={\mathfrak{B}}l_Z({\mathbb{P}}^2)$ embedded in ${\mathbb{P}}^6$ by $|4H_{{\mathbb{P}}^2}-p_1-\cdots-p_8|$. Then the homogeneous ideal of $X$ is generated by quadrics and $|{\mathcal{I}}_{X,{\mathbb{P}}^6}(2)|$ defines a Cremona transformation ${\mathbb{P}}^6\dashrightarrow{\mathbb{P}}^6$ of type $(2,4)$. The same happens when $X\subset{\mathbb{P}}^6$ is a septic elliptic scroll with $e=-1$. 
\end{example} \begin{example}[$r=2; n=6; a=1; d=3$]\label{example: 7} See also \cite[Examples~4.2 and 4.3]{note}. If $X\subset{\mathbb{P}}^6$ is a general hyperplane section of an Edge variety of dimension $3$ and degree $7$ in ${\mathbb{P}}^7$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^6}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^6\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^7$ of type $(2,3)$ whose base locus is $X$ and whose image is a rank $6$ quadric. \end{example} \begin{example}[$r=2; n=6;a=2;d=2$]\label{example: 8} If $X\subset{\mathbb{P}}^6$ is the blow-up of ${\mathbb{P}}^2$ at $3$ general points $p_1,p_2,p_3$ with $|H_{X}|=|3H_{{\mathbb{P}}^2}-p_1-p_2-p_3|$, then ${\mathrm{Sec}}(X)$ is a cubic hypersurface. By Fact \ref{fact: K2 property} and \ref{fact: test K2} we deduce that $|{\mathcal{I}}_{X,{\mathbb{P}}^6}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^6\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^8$ and its type is $(2,2)$. The image ${\mathbf{S}}$ is a complete intersection of two quadrics, $\dim({\mathrm{sing}}({\mathbf{S}}))=1$ and the base locus of the inverse is ${\mathbb{P}}^2\times{\mathbb{P}}^2\subset{\mathbb{P}}^8$. Alternatively, we can obtain the transformation $\psi:{\mathbb{P}}^6\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^8$ by restriction to a general ${\mathbb{P}}^6\subset{\mathbb{P}}^8$ of the special Cremona transformation ${\mathbb{P}}^8\dashrightarrow{\mathbb{P}}^8$ of type $(2,2)$. \end{example} \begin{example}[$r=2; n=6; a=3; d=2$]\label{example: 9} See also \cite{russo-simis} and \cite{semple}. If $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(4))$ or $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(2)\oplus{\mathcal{O}}(3))$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^6}(2)|$ defines a birational transformations $\psi:{\mathbb{P}}^6\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^9$ of type $(2,2)$ whose base locus is $X$ and whose image is ${\mathbf{S}}={\mathbb{G}}(1,4)$. \end{example} \begin{example}[$r=2,3;n=6,7;a=5; d=1$]\label{example: 10} See also \cite[\Rmnum{3} Theorem~3.8]{zak-tangent}. If $X={\mathbb{G}}(1,4)\subset{\mathbb{P}}^9\subset{\mathbb{P}}^{10}$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^{10}}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{10}\dashrightarrow {\mathbf{S}}\subset{\mathbb{P}}^{15}$ of type $(2,1)$ whose base locus is $X$ and whose image is the spinorial variety ${\mathbf{S}}=S^{10}\subset{\mathbb{P}}^{15}$. Restricting $\psi$ to a general ${\mathbb{P}}^7\subset{\mathbb{P}}^{10}$ (resp. ${\mathbb{P}}^6\subset{\mathbb{P}}^{10}$) we obtain a special birational transformation ${\mathbb{P}}^7\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{12}$ (resp. ${\mathbb{P}}^6\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{11}$) whose dimension of the base locus is $r=3$ (resp. $r=2$) and whose image is a linear section of $S^{10}\subset{\mathbb{P}}^{15}$. In the first case ${\mathbf{S}}=\overline{\psi({\mathbb{P}}^7)}$ is smooth while in the second case the singular locus of ${\mathbf{S}}=\overline{\psi({\mathbb{P}}^6)}$ consists of $5$ lines, image of the $5$ Segre $3$-folds containing del Pezzo surface of degree $5$ and spanned by its pencils of conics. \end{example} \begin{example}[$r=2,3; n=6,7; a=6; d=1$]\label{example: 11} See also \cite{russo-simis}, \cite{semple} and \cite[\Rmnum{3} Theorem~3.8]{zak-tangent}. 
We have a birational transformation $\psi:{\mathbb{P}}^{8}\dashrightarrow{\mathbb{G}}(1,5)\subset{\mathbb{P}}^{14}$ of type $(2,1)$ whose base locus is ${\mathbb{P}}^1\times{\mathbb{P}}^3\subset{\mathbb{P}}^7\subset{\mathbb{P}}^{8}$ and whose image is ${\mathbb{G}}(1,5)$. Restricting $\psi$ to a general ${\mathbb{P}}^7\subset{\mathbb{P}}^{8}$ we obtain a birational transformation ${\mathbb{P}}^7\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{13}$ whose base locus $X$ is a rational normal scroll and whose image ${\mathbf{S}}$ is a smooth linear section of ${\mathbb{G}}(1,5)\subset{\mathbb{P}}^{14}$. Restricting $\psi$ to a general ${\mathbb{P}}^6\subset{\mathbb{P}}^{8}$ we obtain a birational transformation $\psi=\psi|_{{\mathbb{P}}^6}:{\mathbb{P}}^6\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{12}$ whose base locus $X$ is a rational normal scroll (hence either $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(3))$ or $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(2)\oplus{\mathcal{O}}(2))$) and whose image ${\mathbf{S}}$ is a singular linear section of ${\mathbb{G}}(1,5)\subset{\mathbb{P}}^{14}$. In this case, we denote by $Y\subset{\mathbf{S}}$ the base locus of the inverse of $\psi$ and by $F=(F_0,\ldots,F_5):{\mathbb{P}}^5\dashrightarrow{\mathbb{P}}^5$ the restriction of $\psi$ to ${\mathbb{P}}^5={\mathrm{Sec}}(X)$. We have \begin{eqnarray*} Y&=&\overline{\psi({\mathbb{P}}^5)}=\overline{F({\mathbb{P}}^5)}={\mathbb{G}}(1,3)\subset{\mathbb{P}}^5\subset{\mathbb{P}}^{12} , \\ J_4&:=&\left\{x=[x_0,...,x_5]\in{\mathbb{P}}^5\setminus X: \mathrm{rank}\left(\left({\partial F_i}/{\partial x_j}(x)\right)_{i,j}\right)\leq 4 \right\}_{\mathrm{red}}\\ &=& \left\{x=[x_0,...,x_5]\in{\mathbb{P}}^5\setminus X: \dim\left(\overline{F^{-1}\left(F(x)\right)}\right)\geq2 \right\}_{\mathrm{red}}\mbox{ and }\dim\left(J_4\right) = 3,\\ \overline{\psi\left(J_4\right)} &=& \left({\mathrm{sing}}\left({\mathbf{S}}\right)\right)_{\mathrm{red}} ={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(2)) \subset Y. \\ \end{eqnarray*} \end{example} \begin{example}[$r=3;n=8;a=0;d=5$]\label{example: 12} See also \cite{hulek-katz-schreyer}. If $\mathcal{X}\subset{\mathbb{P}}^9$ is a general $3$-dimensional linear section of ${\mathbb{G}}(1,5)\subset {\mathbb{P}}^{14}$, $p\in \mathcal{X}$ is a general point and $X\subset{\mathbb{P}}^8$ is the image of $\mathcal{X}$ under the projection from $p$, then the homogeneous ideal of $X$ is generated by quadrics and $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a Cremona transformation ${\mathbb{P}}^8\dashrightarrow{\mathbb{P}}^8$ of type $(2,5)$. \end{example} \begin{example}[$r=3;n=8;a=1;d=3$]\label{example: 13} See also \cite[Example~4.5]{note}. If $X\subset{\mathbb{P}}^8$ is the blow-up of the smooth quadric $Q^3\subset{\mathbb{P}}^4$ at $5$ general points $p_1,\ldots,p_5$ with $|H_{X}|=|2H_{Q^3}-p_1-\cdots-p_5|$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^9$ of type $(2,3)$ whose base locus is $X$ and whose image is a cubic hypersurface with singular locus of dimension $3$. 
\end{example} \begin{example}[$r=3;n=8;a=1;d=4$ - incomplete]\label{example: 14} By \cite{alzati-fania-ruled} (see also \cite{besana-fania-flamini-f1}) there exists a smooth irreducible nondegenerate linearly normal $3$-dimensional variety $X\subset{\mathbb{P}}^8$ with $h^1(X,{\mathcal{O}}_X)=0$, degree $\lambda=11$, sectional genus $g=5$, having the structure of a scroll ${\mathbb{P}}_{\mathbb{F}^1}({\mathcal{E}})$ with $c_1({\mathcal{E}})=3C_0+5f$ and $c_2({\mathcal{E}})=10$ and hence having degrees of the Segre classes $s_1(X)=-85$, $s_2(X)=386$, $s_3(X)=-1330$. Now, by Fact \ref{fact: test K2}, $X\subset{\mathbb{P}}^8$ is arithmetically Cohen-Macaulay and by Riemann-Roch, denoting with $C$ a general curve section of $X$, we obtain \begin{equation}\label{eq: riemann-rock} h^0({\mathbb{P}}^8,{\mathcal{I}}_X(2))=h^0({\mathbb{P}}^6,{\mathcal{I}}_C(2)) = h^0({\mathbb{P}}^6,{\mathcal{O}}_{{\mathbb{P}}^6}(2))-h^0(C,{\mathcal{O}}_C(2)) =28-(2\lambda+1-g), \end{equation} hence $h^0({\mathbb{P}}^8,{\mathcal{I}}_X(2))=10$. If the homogeneous ideal of $X$ is generated by quadratic forms or at least if $X=V(H^0({\mathcal{I}}_X(2)))$, the linear system $|{\mathcal{I}}_X(2)|$ defines a rational map $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}=\overline{\psi({\mathbb{P}}^8)}\subset{\mathbb{P}}^{9}$ whose base locus is $X$ and whose image ${\mathbf{S}}$ is nondegenerate. Now, by (\ref{eq: grado mappa razionale}) we deduce $\deg(\psi)\deg({\mathbf{S}})=2$, from which $\deg(\psi)=1$ and $\deg({\mathbf{S}})=2$. \end{example} \begin{example}[$r=3;n=8;a=1;d=4$]\label{example: 15} See also \cite[\S 4]{ein-shepherdbarron} and \cite[Example~4.4]{note}. If $X\subset{\mathbb{P}}^8$ is a general linear $3$-dimensional section of the spinorial variety $S^{10}\subset{\mathbb{P}}^{15}$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^9$ of type $(2,4)$ whose base locus is $X$ and whose image is a smooth quadric. \end{example} \begin{example}[$r=3;n=8;a=2;d=3$]\label{example: 16} By \cite{fania-livorni-ten} (see also \cite{besana-fania-threefolds}) there exists a smooth irreducible nondegenerate linearly normal $3$-dimensional variety $X\subset{\mathbb{P}}^8$ with $h^1(X,{\mathcal{O}}_X)=0$, degree $\lambda=10$, sectional genus $g=4$, having the structure of a scroll ${\mathbb{P}}_{Q^2}({\mathcal{E}})$ with $c_1({\mathcal{E}})={\mathcal{O}}_Q(3,3)$ and $c_2({\mathcal{E}})=8$ and hence having degrees of the Segre classes $s_1(X)=-76$, $s_2(X)=340$, $s_3(X)=-1156$. By Fact \ref{fact: test K2}, $X\subset{\mathbb{P}}^8$ is arithmetically Cohen-Macaulay and its homogeneous ideal is generated by quadratic forms. So by (\ref{eq: riemann-rock}) we have $h^0({\mathbb{P}}^8,{\mathcal{I}}_X(2))=11$ and the linear system $|{\mathcal{I}}_X(2)|$ defines a rational map $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{10}$ whose base locus is $X$ and whose image ${\mathbf{S}}$ is nondegenerate. By (\ref{eq: grado mappa razionale}) it follows that $\deg(\psi)\deg({\mathbf{S}})=4$ and hence $\deg(\psi)=1$ and $\deg({\mathbf{S}})=4$. 
\end{example} \begin{example}[$r=3;n=8;a=3;d=2,3$]\label{example: 17} By \cite{fania-livorni-nine} (see also \cite{besana-fania-threefolds}) there exists a smooth irreducible nondegenerate linearly normal $3$-dimensional variety $X\subset{\mathbb{P}}^8$ with $h^1(X,{\mathcal{O}}_X)=0$, degree $\lambda=9$, sectional genus $g=3$, having the structure of a scroll ${\mathbb{P}}_{{\mathbb{P}}^2}({\mathcal{E}})$ with $c_1({\mathcal{E}})=4$ and $c_2({\mathcal{E}})=7$ (resp. of a quadric fibration over ${\mathbb{P}}^1$) and hence having degrees of the Segre classes $s_1(X)=-67$, $s_2(X)=294$, $s_3(X)=-984$ (resp. $s_1(X)=-67$, $s_2(X)=295$, $s_3(X)=-997$). By Fact \ref{fact: test K2}, $X\subset{\mathbb{P}}^8$ is arithmetically Cohen-Macaulay and its homogeneous ideal is generated by quadratic forms. So by (\ref{eq: riemann-rock}) we have $h^0({\mathbb{P}}^8,{\mathcal{I}}_X(2))=12$ and the linear system $|{\mathcal{I}}_X(2)|$ defines a rational map $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{11}$ whose base locus is $X$ and whose image ${\mathbf{S}}$ is nondegenerate. By (\ref{eq: grado mappa razionale}) it follows that $\deg(\psi)\deg({\mathbf{S}})=8$ (resp. $\deg(\psi)\deg({\mathbf{S}})=5$) and in particular $\deg(\psi)\neq 0$ i.e. $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}$ is generically quasi-finite. Again by Fact \ref{fact: test K2} and Fact \ref{fact: K2 property} it follows that $\psi$ is birational and hence $\deg({\mathbf{S}})=8$ (resp. $\deg({\mathbf{S}})=5$). \end{example} \begin{example}[$r=3;n=8;a=4;d=2$]\label{example: 18} Consider the composition $$ f:{\mathbb{P}}^1\times{\mathbb{P}}^3\longrightarrow{\mathbb{P}}^1\times Q^3\subset {\mathbb{P}}^1\times{\mathbb{P}}^4\longrightarrow{\mathbb{P}}^9, $$ where the first map is the identity of ${\mathbb{P}}^1$ multiplied by $[z_0,z_1,z_2,z_3]\mapsto [z_0^2,z_0z_1,z_0z_2,z_0z_3,z_1^2+z_2^2+z_3^2]$, and the last map is $([t_0,t_1],[y_0,\ldots,y_4])\mapsto [t_0y_0,\ldots,t_0y_4,t_1y_0,\ldots,t_1y_4] =[x_0,\ldots,x_9]$. In the equations defining $\overline{f({\mathbb{P}}^1\times{\mathbb{P}}^3)}\subset{\mathbb{P}}^{9}$, by replacing $x_9$ with $x_0$, we obtain the quadrics: \begin{equation}\label{equazioni-di-B-a=4} \begin{array}{c} -x_0x_3 + x_4x_8, \ -x_0x_2 + x_4x_7, \ x_3x_7 - x_2x_8, \ -x_0x_5 + x_6^2 + x_7^2 + x_8^2, \ -x_0x_1 + x_4x_6, \\ x_3x_6 - x_1x_8, \ x_2x_6 - x_1x_7, \ -x_0^2 + x_1x_6 + x_2x_7 + x_3x_8, \ -x_0^2 + x_4x_5, \ x_3x_5 - x_0x_8, \\ x_2x_5 - x_0x_7, \ x_1x_5 - x_0x_6, \ x_1^2 + x_2^2 + x_3^2 - x_0x_4. \end{array} \end{equation} Denoting with $I$ the ideal generated by quadrics (\ref{equazioni-di-B-a=4}) and $X=V(I)$, we have that $I$ is saturated (in particular $I_2=H^0({\mathcal{I}}_{X,{\mathbb{P}}^8}(2))$) and $X$ is smooth. The linear system $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset {\mathbb{P}}^{12}$ whose base locus is $X$ and whose image is the variety ${\mathbf{S}}$ with homogeneous ideal generated by: \begin{equation} \begin{array}{c} y_6y_9-y_5y_{10}+y_2y_{11}, \ y_6y_8-y_4y_{10}+y_1y_{11}, \ y_5y_8-y_4y_9+y_0y_{11}, \ y_2y_8-y_1y_9+y_0y_{10}, \\ y_2y_4-y_1y_5+y_0y_6, \ y_2^2+y_5^2+y_6^2+y_7^2-y_7y_8+y_0y_9+y_1y_{10}+y_4y_{11}-y_3y_{12}. \end{array} \end{equation} We have $\deg({\mathbf{S}})=10$ and $\dim({\mathrm{sing}}({\mathbf{S}}))=3$. 
The inverse of $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}$ is defined by: \begin{equation} \begin{array}{c} -y_7y_8+y_0y_9+y_1y_{10}+y_4y_{11}, \ y_0y_5+y_1y_6-y_4y_7-y_{11}y_{12}, \ y_0y_2-y_4y_6-y_1y_7-y_{10}y_{12}, \\ -y_1y_2-y_4y_5-y_0y_7-y_9y_{12}, \ -y_0^2-y_1^2-y_4^2-y_8y_{12}, \ -y_3y_8-y_9^2-y_{10}^2-y_{11}^2, \\ -y_3y_4-y_5y_9-y_6y_{10}-y_7y_{11}, \ -y_1y_3-y_2y_9-y_7y_{10}+y_6y_{11}, \ -y_0y_3-y_7y_9+y_2y_{10}+y_5y_{11}. \end{array} \end{equation} Note that ${\mathbf{S}}\subset{\mathbb{P}}^{12}$ is the intersection of a quadric hypersurface in ${\mathbb{P}}^{12}$ with the cone over ${\mathbb{G}}(1,4)\subset{\mathbb{P}}^9\subset{\mathbb{P}}^{12}$. \end{example} \begin{example}[$r=3;n=8;a=5$ - with non liftable inverse]\label{example: a=5} If $X\subset{\mathbb{P}}^8$ is the blow-up of ${\mathbb{P}}^3$ at a point $p$ with $|H_{X}|=|2H_{{\mathbb{P}}^3}-p|$, then (modulo a change of coordinates) the homogeneous ideal of $X$ is generated by the quadrics: \begin{equation} \begin{array}{c} x_6x_7-x_5x_8,\ x_3x_7-x_2x_8,\ x_5x_6-x_4x_8,\ x_2x_6-x_1x_8,\ x_5^2-x_4x_7,\ x_3x_5-x_1x_8,\ x_2x_5-x_1x_7,\\ x_3x_4-x_1x_6,\ x_2x_4-x_1x_5,\ x_2x_3-x_0x_8,\ x_1x_3-x_0x_6,\ x_2^2-x_0x_7,\ x_1x_2-x_0x_5,\ x_1^2-x_0x_4 . \end{array} \end{equation} The linear system $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{8}\dashrightarrow{\mathbb{P}}^{13}$ whose base locus is $X$ and whose image is the variety ${\mathbf{S}}$ with homogeneous ideal generated by: \begin{equation} \begin{array}{c} y_8y_{10}-y_7y_{12}-y_3y_{13}+y_5y_{13},\ y_8y_9+y_6y_{10}-y_7y_{11}-y_3y_{12}+y_1y_{13},\ y_6y_9-y_5y_{11}+y_1y_{12},\\ y_6y_7-y_5y_8-y_4y_{10}+y_2y_{12}-y_0y_{13},\ y_3y_6-y_5y_6+y_1y_8+y_4y_9-y_2y_{11}+y_0y_{12},\\ y_3y_4-y_2y_6+y_0y_8,\ y_3^2y_5-y_3y_5^2+y_1y_3y_7-y_2y_3y_9+y_2y_5y_9-y_0y_7y_9-y_1y_2y_{10}+y_0y_5y_{10}. \end{array} \end{equation} We have $\deg({\mathbf{S}})=19$, $\dim({\mathrm{sing}}({\mathbf{S}}))=4$ and the degrees of Segre classes of $X$ are: $s_1=-49$, $s_2=201$, $s_3=-627$. So, by (\ref{eq: sollevabile}), we deduce that the inverse of $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}$ is not liftable; however, a representative of the equivalence class of $\psi^{-1}$ is defined by: \begin{equation} \begin{array}{c} y_{12}^2-y_{11}y_{13},\ y_8y_{12}-y_6y_{13},\ y_8y_{11}-y_6y_{12},\ -y_6y_{10}+y_7y_{11}+y_3y_{12}-y_5y_{12},\ y_8^2-y_4y_{13},\\ y_6y_8-y_4y_{12},\ y_3y_8-y_2y_{12}+y_0y_{13},\ y_6^2-y_4y_{11},\ y_5y_6-y_1y_8-y_4y_9. \end{array} \end{equation} We also point out that ${\mathrm{Sec}}(X)$ has dimension $6$ and degree $6$ (against Proposition \ref{prop: B is QEL}). \end{example} \begin{example}[$r=3;n=8;a=6;d=2$]\label{example: 20} See also \cite{russo-simis} and \cite{semple}. If $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(1)\oplus{\mathcal{O}}(4))$ or $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(2)\oplus{\mathcal{O}}(3))$ or $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(2)\oplus{\mathcal{O}}(2)\oplus{\mathcal{O}}(2))$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a birational transformation ${\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{14}$ of type $(2,2)$ whose base locus is $X$ and whose image is ${\mathbf{S}}={\mathbb{G}}(1,5)$. \end{example} \begin{example}[$r=3; n=8; a=7; d=1$]\label{example: oadpDegree8} See also \cite[Example~2.7]{ciliberto-mella-russo} and \cite{ionescu-smallinvariantsIII}. 
Let $Z=\{p_1,\ldots,p_8\}\subset{\mathbb{P}}^2$ be such that no $4$ of the $p_i$ are collinear and no $7$ of the $p_i$ lie on a conic and consider the scroll ${\mathbb{P}}_{{\mathbb{P}}^2}(\mathcal{E})\subset{\mathbb{P}}^7$ associated to the very ample vector bundle $\mathcal{E}$ of rank $2$, given as an extension by the following exact sequence $0\rightarrow{\mathcal{O}}_{{\mathbb{P}}^2}\rightarrow{\mathcal{E}}\rightarrow {\mathcal{I}}_{Z,{\mathbb{P}}^2}(4)\rightarrow0.$ The homogeneous ideal of $X\subset{\mathbb{P}}^7$ is generated by $7$ quadrics and so the linear system $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{15}$ of type $(2,1)$. Since we have $c_1(X)=12$, $c_2(X)=15$, $c_3(X)=6$, we deduce $s_1(\mathcal{N}_{X,{\mathbb{P}}^8})=-60$, $s_2(\mathcal{N}_{X,{\mathbb{P}}^8})=267$, $s_3(\mathcal{N}_{X,{\mathbb{P}}^8})=-909$, and hence $\deg({\mathbf{S}})=29$, by (\ref{eq: grado mappa razionale}). The base locus of the inverse of $\psi$ is $\psi({\mathbb{P}}^7)\simeq {\mathbb{P}}^6\subset{\mathbf{S}}\subset{\mathbb{P}}^{15}$. We also observe that the restriction of $\psi|_{{\mathbb{P}}^7}:{\mathbb{P}}^7\dashrightarrow{\mathbb{P}}^6$ to a general hyperplane $H\simeq{\mathbb{P}}^6\subset{\mathbb{P}}^7$ gives rise to a transformation as in Example \ref{example: 6}. \end{example} \begin{example}[$r=3; n=8; a=8,9; d=1$]\label{example: edge} If $X\subset{\mathbb{P}}^7\subset{\mathbb{P}}^8$ is a $3$-dimensional Edge variety of degree $7$ (resp. degree $6$), then $|{\mathcal{I}}_{X,{\mathbb{P}}^8}(2)|$ defines a birational transformation ${\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{16}$ (resp. ${\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{17}$) of type $(2,1)$ whose base locus is $X$ and whose degree of the image is $\deg({\mathbf{S}})=33$ (resp. $\deg({\mathbf{S}})=38$). For memory overflow problems, we were not able to calculate the scheme ${\mathrm{sing}}({\mathbf{S}})$; however, it is easy to obtain that $1\leq \dim({\mathrm{sing}}({\mathbf{S}}))<\dim(Y)=6$ and $\dim({\mathrm{sing}}(Y))=1$, where $Y$ denotes the base locus of the inverse. \end{example} \begin{example}[$r=3; n=8; a=10; d=1$]\label{example: oadp10} See also \cite{russo-simis}, \cite{semple} and \cite[\Rmnum{3} Theorem~3.8]{zak-tangent}. We have a birational transformation ${\mathbb{P}}^{10}\dashrightarrow{\mathbb{G}}(1,6)\subset{\mathbb{P}}^{20}$ of type $(2,1)$ whose base locus is ${\mathbb{P}}^1\times{\mathbb{P}}^4\subset{\mathbb{P}}^9\subset{\mathbb{P}}^{10}$ and whose image is ${\mathbb{G}}(1,6)$. Restricting it to a general ${\mathbb{P}}^8\subset{\mathbb{P}}^{10}$ we obtain a birational transformation $\psi:{\mathbb{P}}^8\dashrightarrow{\mathbf{S}}\subset{\mathbb{P}}^{18}$ whose base locus $X$ is a rational normal scroll (hence either $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(1)\oplus{\mathcal{O}}(3))$ or $X={\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(2)\oplus{\mathcal{O}}(2))$) and whose image ${\mathbf{S}}$ is a linear section of ${\mathbb{G}}(1,6)\subset{\mathbb{P}}^{20}$. We denote by $Y\subset{\mathbf{S}}$ the base locus of the inverse of $\psi$ and by $F=(F_0,\ldots,F_9):{\mathbb{P}}^7\dashrightarrow{\mathbb{P}}^9$ the restriction of $\psi$ to ${\mathbb{P}}^7={\mathrm{Sec}}(X)$. 
We have \begin{eqnarray*} Y&=&\overline{\psi({\mathbb{P}}^7)}=\overline{F({\mathbb{P}}^7)}={\mathbb{G}}(1,4)\subset{\mathbb{P}}^9\subset{\mathbb{P}}^{18} , \\ J_6&:=&\left\{x=[x_0,...,x_7]\in{\mathbb{P}}^7\setminus X: \mathrm{rank}\left(\left({\partial F_i}/{\partial x_j}(x)\right)_{i,j}\right)\leq 6 \right\}_{\mathrm{red}}\\ &=& \left\{x=[x_0,...,x_7]\in{\mathbb{P}}^7\setminus X: \dim\left(\overline{F^{-1}\left(F(x)\right)}\right)\geq2 \right\}_{\mathrm{red}}\mbox{ and }\dim\left(J_6\right) = 5,\\ \overline{\psi\left(J_6\right)} &=& \left({\mathrm{sing}}\left({\mathbf{S}}\right)\right)_{\mathrm{red}} \subset Y\mbox{ and }\dim\left(\overline{\psi\left(J_6\right)}\right) = 3. \\ \end{eqnarray*} \end{example} \section{Summary results}\label{sec: table} \begin{theorem}\label{theorem: classification} Table \ref{tabella: all cases 3-fold} classifies all special quadratic transformations $\varphi$ as in \S \ref{sec: notation} and with $r\leq3$. \end{theorem} As a consequence, we generalize \cite[Corollary~6.8]{note}. \begin{corollary}\label{corollary: coindex 2} Let $\varphi:{\mathbb{P}}^n\dashrightarrow{\mathbf{S}}\subseteq{\mathbb{P}}^{n+a}$ be as in \S \ref{sec: notation}. If $\varphi$ is of type $(2,3)$ and ${\mathbf{S}}$ has coindex $c=2$, then $n=8$, $r=3$ and one of the following cases holds: \begin{itemize} \item $\Delta=3$, $a=1$, $\lambda=11$, $g=5$, ${\mathfrak{B}}$ is the blow-up of $Q^3$ at $5$ points; \item $\Delta=4$, $a=2$, $\lambda=10$, $g=4$, ${\mathfrak{B}}$ is a scroll over $Q^2$; \item $\Delta=5$, $a=3$, $\lambda=9$, $g=3$, ${\mathfrak{B}}$ is a quadric fibration over ${\mathbb{P}}^1$. \end{itemize} \end{corollary} \begin{proof} We have that ${\mathfrak{B}}\subset{\mathbb{P}}^n$ is a $QEL$-variety of type $\delta=(r-d-c+2)/d=(r-3)/3$ and $n=((2d-1)r+3d+c-2)/d=(5r+9)/3$. From Divisibility Theorem \cite[Theorem~2.8]{russo-qel1}, we deduce $(r,n,\delta)\in\{(3,8,0),(6,13,1),(9,18,2)\}$ and from the classification of $CC$-manifolds \cite[Theorem~2.2]{ionescu-russo-conicconnected}, we obtain $(r,n,\delta)=(3,8,0)$. Now we apply the results in \S \ref{sec: dim 3}. \end{proof} We can also regard Corollary \ref{corollary: coindex 2} in the same spirit of \cite[Theorem~5.1]{note}, where we have classified the transformations $\varphi$ of type $(2,2)$, when ${\mathbf{S}}$ has coindex $1$. Moreover, in the same fashion, one can prove the following: \begin{proposition} Let $\varphi$ be as in \S \ref{sec: notation} and of type $(2,1)$. If $c=2$, then $r\geq1$ and ${\mathfrak{B}}$ is ${\mathbb{P}}^1\times{\mathbb{P}}^2\subset{\mathbb{P}}^5$ or one of its linear sections. If $c=3$, then $r\geq2$ and ${\mathfrak{B}}$ is either ${\mathbb{P}}^1\times{\mathbb{P}}^3\subset{\mathbb{P}}^7$ or ${\mathbb{G}}(1,4)\subset{\mathbb{P}}^9$ or one of their linear sections. If $c=4$, then $r\geq3$ and ${\mathfrak{B}}$ is either an $OADP$ $3$-fold in ${\mathbb{P}}^7$ or ${\mathbb{P}}^1\times{\mathbb{P}}^4\subset{\mathbb{P}}^{9}$ or one of its hyperplane sections. 
\end{proposition} In Table \ref{tabella: all cases 3-fold} we use the following shortcuts: \begin{description} \item[$\exists^{\ast}$] flags cases for which is known a transformation $\varphi$ with base locus ${\mathfrak{B}}$ as required, but we do not know if the image ${\mathbf{S}}$ satisfies all the assumptions in \S \ref{sec: notation}; \item[$\exists^{\ast\ast}$] flags cases for which is known that there is a smooth irreducible variety $X\subset{\mathbb{P}}^n$ such that, if $X=V(H^0({\mathcal{I}}_X(2)))$, then the linear system $|{\mathcal{I}}_X(2)|$ defines a birational transformation $\varphi:{\mathbb{P}}^n\dashrightarrow{\mathbf{S}}=\overline{\varphi({\mathbb{P}}^n)}\subset{\mathbb{P}}^{n+a}$ as stated; \item[$?$] flags cases for which we do not know if there exists at least an abstract variety ${\mathfrak{B}}$ having the structure and the invariants required; \item[$\exists$] flags cases for which everything works fine. \end{description} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c||ll|} \hline $r$ & $n$ & $a$ & $\lambda$ & $g$ & Abstract structure of ${\mathfrak{B}}$ & $d$ & $\Delta$ & $c$ & \multicolumn{2}{|c|}{Existence} \\ \hline \hline \multirow{4}{*}{$1$} & $3$ & $1$ & $2$ & $0$ & $\nu_2({\mathbb{P}}^1)\subset{\mathbb{P}}^2$ & $1$ & $2$ & $1$ & $\exists$ & Ex. \ref{example: 1} \\ \cline{2-11} & $4$ & $0$ & $5$ & $1$ & Elliptic curve & $3$ & $1$ & $0$ & $\exists$ & Ex. \ref{example: 2} \\ \cline{2-11} & $4$ & $1$ & $4$ & $0$ & $\nu_4({\mathbb{P}}^1)\subset{\mathbb{P}}^4$ & $2$ & $2$ & $1$ & $\exists$ & Ex. \ref{example: 3} \\ \cline{2-11} & $4$ & $3$ & $3$ & $0$ & $\nu_3({\mathbb{P}}^1)\subset{\mathbb{P}}^3$ & $1$ & $5$ & $2$ & $\exists$ & Ex. \ref{example: 5} \\ \hline \hline \multirow{14}{*}{$2$} & $4$ & $1$ & $2$ & $0$ & ${\mathbb{P}}^1\times{\mathbb{P}}^1\subset{\mathbb{P}}^3$ & $1$ & $2$ & $1$ & $\exists$ & Ex. \ref{example: 1} \\ \cline{2-11} & $5$ & $0$ & $4$ & $0$ & $\nu_2({\mathbb{P}}^2)\subset{\mathbb{P}}^5$ & $2$ & $1$ & $0$ & $\exists$ & Ex. \ref{example: 3} \\ \cline{2-11} & $5$ & $3$ & $3$ & $0$ & Hyperplane section of ${\mathbb{P}}^1\times{\mathbb{P}}^2\subset{\mathbb{P}}^5$ & $1$ & $5$ & $2$ & $\exists$ & Ex. \ref{example: 5} \\ \cline{2-11} & $6$ & $0$ & $7$ & $1$ & Elliptic scroll ${\mathbb{P}}_{C}({\mathcal{E}})$ with $e({\mathcal{E}})=-1$ & $4$ & $1$ & $0$& $\exists$ & Ex. \ref{example: 6} \\ \cline{2-11} & $6$ & $0$ & $8$ & $3$ & \begin{tabular}{c} Blow-up of ${\mathbb{P}}^2$ at $8$ points $p_1,\ldots,p_8$,\\ $|H_{{\mathfrak{B}}}|=|4H_{{\mathbb{P}}^2}-p_1-\cdots-p_8|$ \end{tabular} & $4$ & $1$ & $0$&$\exists$ & Ex. \ref{example: 6} \\ \cline{2-11} & $6$ & $1$ & $7$ & $2$ & \begin{tabular}{c} Blow-up of ${\mathbb{P}}^2$ at $6$ points $p_0,\ldots,p_5$,\\ $|H_{{\mathfrak{B}}}|=|4H_{{\mathbb{P}}^2}-2p_0-p_1-\cdots-p_5|$ \end{tabular} & $3$ & $2$ & $1$& $\exists$ & Ex. \ref{example: 7} \\ \cline{2-11} & $6$ & $2$ & $6$ & $1$ & \begin{tabular}{c} Blow-up of ${\mathbb{P}}^2$ at $3$ points $p_1,p_2,p_3$,\\ $|H_{{\mathfrak{B}}}|=|3H_{{\mathbb{P}}^2}-p_1-p_2-p_3|$ \end{tabular} & $2$ & $4$ & $2$& $\exists$ & Ex. \ref{example: 8} \\ \cline{2-11} & $6$ & $3$ & $5$ & $0$ & ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(4))$ or ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(2)\oplus{\mathcal{O}}(3))$ & $2$ & $5$ & $2$ & $\exists$ & Ex. 
\ref{example: 9} \\ \cline{2-11} & $6$ & $5$ & $5$ & $1$ & \begin{tabular}{c} Blow-up of ${\mathbb{P}}^2$ at $4$ points $p_1\ldots,p_4$,\\ $|H_{{\mathfrak{B}}}|=|3H_{{\mathbb{P}}^2}-p_1-\cdots-p_4|$ \end{tabular} & $1$ & $12$ & $3$ & $\exists$ & Ex. \ref{example: 10} \\ \cline{2-11} & $6$ & $6$ & $4$ & $0$ & ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(3))$ or ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(2)\oplus{\mathcal{O}}(2))$ & $1$ & $14$ & $3$ & $\exists$ & Ex. \ref{example: 11} \\ \hline \hline \multirow{23}{*}{$3$} & $5$ & $1$ & $2$ & $0$ & $Q^3\subset{\mathbb{P}}^4$ & $1$ & $2$ & $1$&$\exists$ & Ex. \ref{example: 1} \\ \cline{2-11} & $6$ & $3$ & $3$ & $0$ & ${\mathbb{P}}^1\times{\mathbb{P}}^2\subset{\mathbb{P}}^5$ & $1$ & $5$ & $2$& $\exists$ & Ex. \ref{example: 5} \\ \cline{2-11} & $7$ & $1$ & $6$ & $1$ & Hyperplane section of ${\mathbb{P}}^2\times{\mathbb{P}}^2\subset{\mathbb{P}}^8$ & $2$ & $2$ & $1$& $\exists$ & Ex. \ref{example: 3} \\ \cline{2-11} & $7$ & $5$ & $5$ & $1$ & Linear section of ${\mathbb{G}}(1,4)\subset{\mathbb{P}}^9$ & $1$ & $12$ &$3$ & $\exists$ & Ex. \ref{example: 10} \\ \cline{2-11} & $7$ & $6$ & $4$ & $0$ & ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}(1)\oplus{\mathcal{O}}(1)\oplus{\mathcal{O}}(2))$ & $1$ & $14$ &$3$ & $\exists$ & Ex. \ref{example: 11} \\ \cline{2-11} & $8$ & $0$ & $12$ & $6$ & \begin{tabular}{c} Scroll ${\mathbb{P}}_{Y}({\mathcal{E}})$, $Y$ birat. ruled surface, \\ $K_Y^2=5$, $c_2({\mathcal{E}})=8$, $c_1^2({\mathcal{E}})=20$ \end{tabular} & $5$ & $1$ & $0$ & $?$ & \\ \cline{2-11} & $8$ & $0$ & $13$ & $8$ & \begin{tabular}{c} Variety obtained as the projection \\ of a Fano variety $X$ from a point $p\in X$ \end{tabular} & $5$ & $1$ & $0$ & $\exists$ & Ex. \ref{example: 12} \\ \cline{2-11} & $8$ & $1$ & $11$ & $5$ & \begin{tabular}{c} Blow-up of $Q^3$ at $5$ points $p_1,\ldots,p_5$, \\ $|H_{{\mathfrak{B}}}|=|2H_{Q^3}-p_1-\cdots-p_5|$ \end{tabular} & $3$ & $3$ & $2$ & $\exists$ & Ex. \ref{example: 13} \\ \cline{2-11} & $8$ & $1$ & $11$ & $5$ & Scroll over ${\mathbb{P}}_{{\mathbb{P}}^1}({\mathcal{O}}\oplus{\mathcal{O}}(-1))$ & $4$ & $2$& $1$ & $\exists^{\ast\ast}$ & Ex. \ref{example: 14} \\ \cline{2-11} & $8$ & $1$ & $12$ & $7$ & Linear section of $S^{10}\subset{\mathbb{P}}^{15}$ & $4$ & $2$& $1$ &$\exists$ & Ex. \ref{example: 15} \\ \cline{2-11} & $8$ & $2$ & $10$ & $4$ & Scroll over $Q^2$ & $3$ & $4$ & $2$ & $\exists^{\ast}$ & Ex. \ref{example: 16} \\ \cline{2-11} & $8$ & $3$ & $9$ & $3$ & Scroll over ${\mathbb{P}}^2$ & $2$ & $8$ & $3$ & $\exists^{\ast}$ & Ex. \ref{example: 17} \\ \cline{2-11} & $8$ & $3$ & $9$ & $3$ & Quadric fibration over ${\mathbb{P}}^1$ & $3$ & $5$ & $2$ & $\exists^{\ast}$ & Ex. \ref{example: 17} \\ \cline{2-11} & $8$ &$4$ & $8$ & $2$ & Hyperplane section of ${\mathbb{P}}^1\times Q^3$ & $2$ & $10$ & $3$ & $\exists^{\ast}$ & Ex. \ref{example: 18} \\ \cline{2-11} & $8$ &$6$ & $6$ & $0$ & Rational normal scroll & $2$ & $14$ & $3$ & $\exists$ & Ex. \ref{example: 20} \\ \cline{2-11} & $8$ & $7$ & $8$ & $3$ & \begin{tabular}{c} ${\mathbb{P}}_{{\mathbb{P}}^2}({\mathcal{E}})$, where $0\rightarrow{\mathcal{O}}_{{\mathbb{P}}^2}\rightarrow$ \\$\rightarrow{\mathcal{E}}\rightarrow {\mathcal{I}}_{\{p_1,\ldots,p_8\},{\mathbb{P}}^2}(4)\rightarrow0$ \end{tabular} & $1$ & $29$ & $4$ & $\exists^{\ast}$ & Ex. \ref{example: oadpDegree8} \\ \cline{2-11} & $8$ & $8$ & $7$ & $2$ & Edge variety & $1$ & $33$ & $4$ & $\exists^{\ast}$ & Ex. 
\ref{example: edge} \\ \cline{2-11} & $8$ & $9$ & $6$ & $1$ & ${\mathbb{P}}^1\times{\mathbb{P}}^1\times{\mathbb{P}}^1\subset{\mathbb{P}}^7$ & $1$ & $38$ & $4$ & $\exists^{\ast}$ & Ex. \ref{example: edge} \\ \cline{2-11} & $8$ & $10$ & $5$ & $0$ & Rational normal scroll & $1$ & $42$ & $4$ & $\exists$ & Ex. \ref{example: oadp10} \\ \hline \end{tabular} \end{center} \caption{All transformations $\varphi$ as in \S \ref{sec: notation} and with $r\leq3$} \label{tabella: all cases 3-fold} \end{table} \section{Towards the case of dimension 4}\label{sec: dim 4} In this section we treat the case in which $r=4$. However, when $\delta=0$, we are well away from having an exhaustive classification. Proposition \ref{prop: delta mag0 4fold} follows from \cite[Propositions~1.3, 3.4, Corollary~3.2]{russo-qel1} and \cite[Theorem~2.2]{ionescu-russo-conicconnected}. \begin{proposition}\label{prop: delta mag0 4fold} If $r=4$, then either $n=10$, $d\geq2$, $\langle {\mathfrak{B}} \rangle = {\mathbb{P}}^{10}$, or one of the following cases holds: \begin{itemize} \item $n=6$, $d=1$, $\delta=4$, ${\mathfrak{B}}=Q^4\subset{\mathbb{P}}^5$ is a quadric; \item $n=8$, $d=1$, $\delta=2$, ${\mathfrak{B}}\subset{\mathbb{P}}^7$ is either ${\mathbb{P}}^1\times{\mathbb{P}}^3\subset{\mathbb{P}}^7$ or a linear section of ${\mathbb{G}}(1,4)\subset{\mathbb{P}}^9$; \item $n=8$, $d=2$, $\delta=2$, ${\mathfrak{B}}$ is ${\mathbb{P}}^2\times{\mathbb{P}}^2\subset{\mathbb{P}}^8$; \item $n=9$, $d=1$, $\delta=1$, ${\mathfrak{B}}$ is a hyperplane section of ${\mathbb{P}}^1\times{\mathbb{P}}^4\subset{\mathbb{P}}^9$; \item $n=10$, $d=1$, $\delta=0$, ${\mathfrak{B}}\subset{\mathbb{P}}^9$ is an $OADP$-variety. \end{itemize} \end{proposition} In Proposition \ref{prop: 4foldnondegenerate}, we more generally assume that the image ${\mathbf{S}}$ is nondegenerate, normal and linearly normal (not necessarily factorial) and furthermore we do not assume Assumptions \ref{assumption: liftable} and \ref{assumption: ipotesi}. As noted earlier, we have $P_{{\mathfrak{B}}}(1)=11$ and $P_{{\mathfrak{B}}}(2)=55-a$ and hence \begin{eqnarray*} P_{{\mathfrak{B}}}(t)&=& \lambda \begin{pmatrix}t+3\cr 4\end{pmatrix}+\left( 1-g\right) \begin{pmatrix}t+2\cr 3\end{pmatrix}+\left(2g-3\lambda+\chi({\mathcal{O}}_{{\mathfrak{B}}})-a+31 \right) \begin{pmatrix}t+1\cr 2\end{pmatrix} \\ && +\left(-g+2\lambda-2\chi({\mathcal{O}}_{{\mathfrak{B}}})+a-21\right) t+\chi({\mathcal{O}}_{{\mathfrak{B}}}) . 
\end{eqnarray*} \begin{proposition}\label{prop: 4foldnondegenerate} If $r=4$, $n=10$ and $\langle {\mathfrak{B}}\rangle={\mathbb{P}}^{10}$, then one of the following cases holds: \begin{itemize} \item $a=10$, $\lambda=7$, $g=0$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$, ${\mathfrak{B}}$ is a rational normal scroll; \item $a=7$, $\lambda=10$, $g=3$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$, ${\mathfrak{B}}$ is either \begin{itemize} \item a hyperplane section of ${\mathbb{P}}^1\times Q^4\subset{\mathbb{P}}^{11}$ or \item ${\mathbb{P}}({\mathcal{T}}_{{\mathbb{P}}^2}\oplus {\mathcal{O}}_{{\mathbb{P}}^2}(1))\subset{\mathbb{P}}^{10}$; \end{itemize} \item $a=6$, $\lambda=11$, $g=4$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$, ${\mathfrak{B}}$ is a quadric fibration over ${\mathbb{P}}^1$; \item $a=5$, $\lambda=12$, $g=5$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$, ${\mathfrak{B}}$ is one of the following: \begin{itemize} \item ${\mathbb{P}}^4$ blown up at $4$ points $p_1\ldots,p_4$ embedded by $|2H_{{\mathbb{P}}^4}-p_1-\cdots-p_4|$, \item a scroll over a ruled surface, \item a quadric fibration over ${\mathbb{P}}^1$; \end{itemize} \item $a=4$, $\lambda=14$, $g=8$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$, ${\mathfrak{B}}$ is either \begin{itemize} \item a linear section of ${\mathbb{G}}(1,5)\subset{\mathbb{P}}^{14}$ or \item the product of ${\mathbb{P}}^1$ with a Fano variety of even index; \end{itemize} \item $a=4$, $\lambda=13$, $g=6$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$, ${\mathfrak{B}}$ is either \begin{itemize} \item a scroll over a birationally ruled surface or \item a quadric fibration over ${\mathbb{P}}^1$; \end{itemize} \item $a=3$, $14\leq\lambda\leq 16$, $g\leq 11$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(-g+2\lambda-18)/3$; \item $a=2$, $15\leq \lambda\leq 18$, $g\leq 14$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(-g+2\lambda-19)/3$; \item $a=1$, $15\leq\lambda\leq 20$, $g\leq17$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(-g+2\lambda-20)/3$; \item $a=0$, $15\leq\lambda$. \end{itemize} \end{proposition} \begin{proof} Denote by ${\mathcal{L}}ambda\subsetneq C\subsetneq S\subsetneq X\subsetneq {\mathfrak{B}}$ a sequence of general linear sections of ${\mathfrak{B}}$ and put $h_{{\mathcal{L}}ambda}(2):=h^0({\mathbb{P}}^6,{\mathcal{O}}(2))-h^0({\mathbb{P}}^6,{\mathcal{I}}_{{\mathcal{L}}ambda}(2))$. Since $C$ is a nondegenerate curve in ${\mathbb{P}}^7$, we have $\lambda\geq 7$. By Castelnuovo's argument \cite[Lemma~6.1]{note}, it follows that \begin{equation}\label{eq: castelnuovo-argument-4fold} 7\leq \min\{\lambda,13\}\leq h_{{\mathcal{L}}ambda}(2)\leq 28 - h^0({\mathbb{P}}^{10},{\mathcal{I}}_{{\mathfrak{B}}}(2))=17-a \end{equation} and in particular we have $a\leq 10$. Moreover \begin{itemize} \item if $\lambda\geq 13$, then $h_{{\mathcal{L}}ambda}(2)\geq 13$ and $a\leq4$, by (\ref{eq: castelnuovo-argument-4fold}); \item if $\lambda\geq 15$, then $h_{{\mathcal{L}}ambda}(2)\geq 14$ and $a\leq3$, by Castelnuovo Lemma \cite[Lemma~1.10]{ciliberto-hilbertfunctions}; \item if $\lambda\geq 17$, then $h_{{\mathcal{L}}ambda}(2)\geq 15$ and $a\leq2$, by \cite[Theorem~3.1]{ciliberto-hilbertfunctions}; \item if $\lambda\geq 19$, then $h_{{\mathcal{L}}ambda}(2)\geq 16$ and $a\leq1$, by \cite[Theorem~3.8]{ciliberto-hilbertfunctions}; \item if $\lambda\geq 21$, then $h_{{\mathcal{L}}ambda}(2)\geq 17$ and $a=0$, by \cite[Theorem~2.17(b)]{petrakiev}. 
\end{itemize} According to the above statements, we consider the refinement $\theta=\theta(\lambda)$ of Castelnuovo's bound $\rho=\rho(\lambda)$, contained in \cite[Theorem~2.5]{ciliberto-hilbertfunctions}. So, we have \begin{equation}\label{eq: KBHB3} K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^3=2g-2-3\lambda \leq 2\theta(\lambda)-2-3\lambda\leq 2\rho(\lambda)-2-3\lambda . \end{equation} Now, if $t\geq1$, by the Kodaira Vanishing Theorem and Serre Duality, it follows that $P_{{\mathfrak{B}}}(-t)=h^4({\mathfrak{B}},{\mathcal{O}}_{{\mathfrak{B}}}(-t))=h^0({\mathfrak{B}},K_{{\mathfrak{B}}}+tH_{{\mathfrak{B}}})$; hence, if $P_{{\mathfrak{B}}}(-t)\neq0$, then $K_{{\mathfrak{B}}}+tH_{{\mathfrak{B}}}$ is an effective divisor and we have either $K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^3 > -tH_{{\mathfrak{B}}}^4=-t \lambda$ or $K_{{\mathfrak{B}}}\sim -tH_{{\mathfrak{B}}}$. Thus, by (\ref{eq: KBHB3}) and straightforward calculation, we deduce (see Figure \ref{fig: upperbounds}): \begin{figure} \caption{Upper bounds of $K_{{\mathfrak{B}}}\cdot H_{{\mathfrak{B}}}^3$.} \label{fig: upperbounds} \end{figure} \begin{enumerate}[(\ref{prop: 4foldnondegenerate}.a)] \item\label{case: lambda8} if $\lambda\leq8$, then either $P_{{\mathfrak{B}}}(-3)=P_{{\mathfrak{B}}}(-2)=P_{{\mathfrak{B}}}(-1)=0$ or $\lambda=8$ and $K_{{\mathfrak{B}}}\sim -3H_{{\mathfrak{B}}}$; \item\label{case: lambda14} if $\lambda\leq14$, then either $P_{{\mathfrak{B}}}(-2)=P_{{\mathfrak{B}}}(-1)=0$ or $\lambda=14$ and $K_{{\mathfrak{B}}}\sim -2H_{{\mathfrak{B}}}$; \item\label{case: lambda20} if $\lambda\leq24$, then either $P_{{\mathfrak{B}}}(-1)=0$ or $\lambda=24$ and $K_{{\mathfrak{B}}}\sim -H_{{\mathfrak{B}}}$. \end{enumerate} In the same way, one also sees that $h^4({\mathfrak{B}},{\mathcal{O}}_{{\mathfrak{B}}})=0$ whenever $\lambda\leq 31$. Now we discuss the cases according to the value of $a$. \begin{case}[$9\leq a\leq 10$] We have $\lambda\leq 8$. From the classification of del Pezzo varieties in \cite[\Rmnum{1} \S 8]{fujita-polarizedvarieties}, we see that the case $\lambda=8$ with $K_{{\mathfrak{B}}}\sim -3H_{{\mathfrak{B}}}$ is impossible and so we obtain $\lambda=11-2a/5$, $g=1-a/10$, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda8}). Hence $a=10$, $\lambda=7$, $g=0$ and ${\mathfrak{B}}$ is a rational normal scroll. \end{case} \begin{case}[$5\leq a\leq 8$] We have $\lambda\leq 12$. By (\ref{prop: 4foldnondegenerate}.\ref{case: lambda14}) we obtain $g=(3\lambda+a-31)/2$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(\lambda+a-11)/6$ and, since $\chi({\mathcal{O}}_{{\mathfrak{B}}})\in {\mathbb{Z}}$, we obtain $\lambda=17-a$, $g=10-a$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$. So, we can determine the abstract structure of ${\mathfrak{B}}$ by \cite{fania-livorni-ten}, \cite{besana-biancofiore-deg11}, \cite[Theorem~2]{ionescu-smallinvariantsII}, \cite[Lemmas~4.1 and 6.1]{besana-biancofiore-numerical} and we also deduce that the case $a=8$ does not occur, by \cite{fania-livorni-nine}. \end{case} \begin{case}[$a = 4$] We have $\lambda\leq 14$. Again by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda14}), we deduce that either $g=(3\lambda-27)/2$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(\lambda-7)/6$ or ${\mathfrak{B}}$ is a Mukai variety with $\lambda=14$ ($g=8$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$).
In the first case, since $\chi({\mathcal{O}}_{{\mathfrak{B}}})\in {\mathbb{Z}}$ and $g\geq 0$, we obtain $\lambda=13$, $g=6$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$ and then we can determine the abstract structure of ${\mathfrak{B}}$ by \cite[Theorem~1]{ionescu-degsmallrespectcodim} and \cite[Lemmas~4.1 and 6.1]{besana-biancofiore-numerical}. In the second case, if $b_2=b_2({\mathfrak{B}})=1$ then ${\mathfrak{B}}$ is a linear section of ${\mathbb{G}}(1,5)\subset{\mathbb{P}}^{14}$, otherwise ${\mathfrak{B}}$ is a Fano variety of product type, see \cite[Theorems~2 and 7]{mukai-biregularclassification}. \end{case} \begin{case}[$a=3$] We have $\lambda\leq 16$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(-g+2\lambda-18)/3$, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda20}). Moreover, if $\lambda\leq14$, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda14}) it follows that $\lambda=14$, $g=7$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$. \end{case} \begin{case}[$a=2$] We have $\lambda\leq 18$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(-g+2\lambda-19)/3$, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda20}). Moreover, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda14}) it follows that $\lambda\geq 15$. \end{case} \begin{case}[$a=1$] We have $\lambda\leq 20$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(-g+2\lambda-20)/3$, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda20}). Moreover, if $\lambda\leq14$, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda14}) it follows that $\lambda=10$, $g=0$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=0$, which is of course impossible. \end{case} \begin{case}[$a=0$] If $\lambda\leq 14$, by (\ref{prop: 4foldnondegenerate}.\ref{case: lambda14}) and (\ref{prop: 4foldnondegenerate}.\ref{case: lambda20}) it follows that $\lambda=11$, $g=1$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=0$. Thus, ${\mathfrak{B}}$ must be an elliptic scroll and $\varphi$ must be of type $(2,6)$; so, by (\ref{eq: c2 4-fold}) we obtain the contradiction $c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}^2=(990+c_4({\mathfrak{B}}))/37=990/37\notin{\mathbb{Z}}$. \end{case} \end{proof} \begin{remark} Under the hypothesis of Proposition \ref{prop: 4foldnondegenerate}, reasoning as in Proposition \ref{prop: segre and chern classes}, we obtain that if $\varphi$ is of type $(2,d)$, then \begin{eqnarray} \label{eq: c2 4-fold} 37c_2({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}^2-c_4({\mathfrak{B}}) &=& -231\lambda+188g+(1-9d)\Delta+3396 ,\\ 37c_3({\mathfrak{B}})\cdot H_{{\mathfrak{B}}}+7c_4({\mathfrak{B}}) &=& 655\lambda-428g+(26d-7)\Delta-5716 . \end{eqnarray} \end{remark} \begin{remark} If Eisenbud-Green-Harris Conjecture $I_{11,6}$ holds (see \cite{eisenbud-green-harris}), then we have that $\lambda\leq 24$, even in the case with $a=0$. If $a=0$ and $\lambda\leq 24$, we have $g\leq \theta(24)=25$ and one of the following cases holds: \begin{itemize} \item $\lambda=24$, $g=25$, $\chi({\mathcal{O}}_{{\mathfrak{B}}})=1$ and ${\mathfrak{B}}$ is a Fano variety of coindex $4$; \item $g\leq 24$ and $\chi({\mathcal{O}}_{{\mathfrak{B}}})=(-g+2\lambda-21)/3$. \end{itemize} \end{remark} \begin{example} Note that in Proposition \ref{prop: delta mag0 4fold}, all cases with $\delta>0$ really occur (see \S \ref{sec: examples}); when $\delta=0$, an example is obtained by taking a general $4$-dimensional linear section of ${\mathbb{P}}^1\times{\mathbb{P}}^5\subset{\mathbb{P}}^{11}\subset{\mathbb{P}}^{12}$. 
Below we collect some examples of special quadratic birational transformations appearing in Proposition \ref{prop: 4foldnondegenerate}. \begin{itemize} \item If $X\subset{\mathbb{P}}^{10}$ is a (smooth) $4$-dimensional rational normal scroll, then $|{\mathcal{I}}_{X,{\mathbb{P}}^{10}}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{10}\dashrightarrow{\mathbb{G}}(1,6)\subset{\mathbb{P}}^{20}$ of type $(2,2)$. \item If $X\subset{\mathbb{P}}^{10}$ is a general hyperplane section of ${\mathbb{P}}^1\times Q^4\subset{\mathbb{P}}^{11}$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^{10}}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{10}\dashrightarrow \overline{\psi({\mathbb{P}}^{10})}\subset{\mathbb{P}}^{17}$ of type $(2,2)$ whose image has degree $28$. \item If $X={\mathbb{P}}({\mathcal{T}}_{{\mathbb{P}}^2}\oplus{\mathcal{O}}_{{\mathbb{P}}^2}(1))\subset{\mathbb{P}}^{10}$, since $h^1(X,{\mathcal{O}}_X)=h^1({\mathbb{P}}^2,{\mathcal{O}}_{{\mathbb{P}}^2})=0$, $|{\mathcal{I}}_{X,{\mathbb{P}}^{10}}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{10}\dashrightarrow \overline{\psi({\mathbb{P}}^{10})}\subset{\mathbb{P}}^{17}$ (see Facts \ref{fact: test K2} and \ref{fact: K2 property}). \item There exists a smooth linearly normal $4$-dimensional variety $X\subset{\mathbb{P}}^{10}$ with $h^1(X,{\mathcal{O}}_X)=0$, degree $11$, sectional genus $4$, having the structure of a quadric fibration over ${\mathbb{P}}^1$ (see \cite[Remark~3.2.5]{besana-biancofiore-deg11}); thus $|{\mathcal{I}}_{X,{\mathbb{P}}^{10}}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{10}\dashrightarrow \overline{\psi({\mathbb{P}}^{10})}\subset{\mathbb{P}}^{16}$ (see Facts \ref{fact: test K2} and \ref{fact: K2 property}). \item If $X\subset{\mathbb{P}}^{10}$ is the blow-up of ${\mathbb{P}}^4$ at $4$ general points $p_1,\ldots,p_4$, embedded by $|2H_{{\mathbb{P}}^4}-p_1-\cdots-p_4|$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^{10}}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{10}\dashrightarrow \overline{\psi({\mathbb{P}}^{10})}\subset{\mathbb{P}}^{15}$ whose image has degree $29$; in this case ${\mathrm{Sec}}(X)$ is a complete intersection of two cubics. \item If $X\subset{\mathbb{P}}^{10}$ is a general $4$-dimensional linear section of ${\mathbb{G}}(1,5)\subset{\mathbb{P}}^{14}$, then $|{\mathcal{I}}_{X,{\mathbb{P}}^{10}}(2)|$ defines a birational transformation $\psi:{\mathbb{P}}^{10}\dashrightarrow \overline{\psi({\mathbb{P}}^{10})}\subset{\mathbb{P}}^{14}$ of type $(2,2)$ whose image is a complete intersection of quadrics. \end{itemize} \end{example} \end{document}
\begin{document} \begin{titlepage} \title{Cluster-weighted latent class modeling} \author[1]{Roberto Di Mari\thanks{[email protected]}} \author[1]{Antonio Punzo} \author[2]{Zsuzsa Bakk} \affil[1]{Department of Economics and Business, University of Catania, Italy} \affil[2]{Leiden University, Institute of Psychology, Methodology \& Statistics Unit} \date{\today} \maketitle \begin{abstract} Usually in Latent Class Analysis (LCA), external predictors are taken to be cluster conditional probability predictors (LC models with covariates), and/or score conditional probability predictors (LC regression models). In such cases, their distribution is not of interest. The class-specific distribution is of interest in the distal outcome model, where the distribution of the external variable(s) is assumed to depend on LC membership. In this paper, we consider a more general formulation, typical of cluster-weighted models, which embeds both the latent class regression and the distal outcome models. This allows us to test simultaneously both whether the distribution of the covariate(s) differs across classes, and whether there are significant direct effects of the covariate(s) on the indicators, by including most of the information about the covariate(s) - latent variable relationship. We show the advantages of the proposed modeling approach through a set of population studies and an empirical application on assets ownership of Italian households. \end{abstract} \textsc{Key-Words}: latent class analysis, latent class regression models, continuous distal outcomes, direct effects, cluster-weighted models, household wealth, assets ownership \end{titlepage} \section{Introduction}\label{sec:intro} Latent class analysis \citep{mccutcheon1985latent} is widely used in the social and behavioral sciences to locate subgroups of observations in the sample based on a set of $J$ observed response variables $\mathbf{Y}$. Examples of applications include identification of types of mobile internet usage in travel planning and execution \citep{okazaki15}, types of political involvement \citep{hagenaars1989}, classes of treatment engagement in adolescents with psychiatric problems \citep{roedelof13}, a typology of infant temperament \citep{loken2004}, modeling phases in the development of transitive reasoning \citep{bouwmeester}, or classes of self disclosure \citep{henk}. In many empirical studies, interest lies in investigating which external variables $\mathbf{Z}$ predict latent class membership $X$. Latent class models with covariates \citep{dayton} are a well-known extension of the baseline model, in which external variables are included in the latent class modeling framework as predictors of class membership \citep{collins2010latent}. \cite{stegmann2017}, for instance, discuss the inclusion of covariates in more complicated LC models. However, recent methodological development has shifted attention towards modeling the effect in the opposite direction, that is, predicting a - possibly continuous - distal outcome based on the latent class membership \citep{bakk:12, lanza:13}, as depicted in Figure \ref{fig:distalmod}. Although there can be more than one external variable available, for the sake of exposition here we describe the models with only one external variable\footnote{All considered modeling scenarios can be straightforwardly extended to the multiple external variables case. See, for instance, \cite{bakk:12}.}.
\begin{figure} \caption{\footnotesize The latent class model with distal outcome.} \label{fig:distalmod} \end{figure} For instance, \cite{roberts2011} predict distal pain outcomes based on class memberships defined by patterns of barriers to pain management and \cite{mulder2012} compared average measures of recidivism in clusters of juvenile offenders. Typically, in distal outcome models, the distal outcome and the $J$ response variables $\mathbf{Y}$ are assumed to be conditionally independent given the latent variable $X$ \citep{bakk:12, lanza:13}. A direct effect of $Z$ on $\mathbf{Y}$ is therefore not allowed for, nor is its presence tested. In latent variable modeling, it is well known that Maximum Likelihood (ML) estimation is subject to severe bias when direct effects are present and not accounted for, in LC and latent trait models \citep{asparouhov2012auxiliary}, regression mixture models \citep{leepaper,nylund16}, and latent Markov models \citep{robzsuzsa17}. Given the restrictiveness of the conditional independence assumption and the possible severity of its violation, we propose a more general model that can account for complex interdependencies between the external variable, LC membership, and the indicators of the LC model. In regression mixtures, a ``circular'' relation among $\mathbf{Y}$-$X$-$Z$ is commonly considered in the cluster-weighted modeling approach \citep{ingrassia2012,Ingr:Mino:Punz:Mode:2014,Ingr:Punz:Vitt:TheG:2015,Punz:Flex:2014,Dang:Punz:McNi:Ingr:Brow:Mult:2017}. That is, a more general model is specified, where, next to modeling the class-specific distribution of $Z$ (the distal outcome situation), the direct effect of $Z$ on $\mathbf{Y}$ is also modeled (latent class regression). If $\mathbf{Y}$ are indicators of assets ownership, and $Z$ is a measure (in euro) of net (of liabilities) wealth, the cluster-weighted modeling approach allows net wealth also to directly affect a household's decision to own assets. With standard inference, the statistical significance of each effect can then be tested to see whether intermediate model specifications are more appropriate. In LC regression models \citep{kamakura1989,wedel94}, although the assumption of conditional independence of $\mathbf{Y}$ and $Z$ can be relaxed (see Figure \ref{fig:latregmod}), the distal outcome's distribution is not of interest and hence not modeled. Therefore, in the traditional LCA approach, an external variable enters the model either as a covariate (latent class regression) or as a distal outcome, but never as both at the same time. We extend the idea of cluster-weighted modeling to the context of latent class analysis, proposing a generalized version of the models in Figures \ref{fig:distalmod} and \ref{fig:latregmod}, as depicted in Figure \ref{fig:cwmlca}, which embeds them both. \begin{figure} \caption{\footnotesize The latent class regression model.} \label{fig:latregmod} \end{figure} \begin{figure} \caption{\footnotesize The cluster-weighted latent class model.} \label{fig:cwmlca} \end{figure} By starting from the most general model, the user can proceed backwards, testing the model assumptions of both the distal outcome and the latent class regression models.
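As a rough sketch (the notation is ours, and the precise parameterizations are given in Section~\ref{sec:models}), with $S$ latent classes, conditionally independent indicators $Y_1,\dots,Y_J$ and a single continuous external variable $Z$, the three specifications compared in this paper can be written as
\begin{eqnarray*}
\mbox{LC regression:} && P(\mathbf{Y}\mid Z)=\sum_{x=1}^{S} P(X=x)\prod_{j=1}^{J} P(Y_j\mid X=x,Z),\\
\mbox{distal outcome:} && f(\mathbf{Y},Z)=\sum_{x=1}^{S} P(X=x)\,f(Z\mid X=x)\prod_{j=1}^{J} P(Y_j\mid X=x),\\
\mbox{cluster-weighted:} && f(\mathbf{Y},Z)=\sum_{x=1}^{S} P(X=x)\,f(Z\mid X=x)\prod_{j=1}^{J} P(Y_j\mid X=x,Z).
\end{eqnarray*}
The cluster-weighted specification reduces to the distal outcome model when the direct effects of $Z$ on the indicators are set to zero, and it implies the LC regression structure for $\mathbf{Y}$ given $Z$ when the distribution of $Z$ is the same in all classes.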
In particular, in this paper we will show evidence, based on a set of population studies and an empirical application, that 1) if direct effects are present, our approach, contrary to the distal outcome model, yields unbiased estimates of the distal outcome cluster specific means and variances; and 2) if the most suitable model is one between the distal outcome model or the latent class regression model, the relative class sizes and compositions will be the same as the ones delivered under the proposed modeling approach. The paper proceeds as follows. In Section~\ref{sec:popstudy}, we illustrate the proposed modeling approach through three population studies, in comparison with the LC regression and the distal outcome models. We give model definitions and details on the parameterizations in Section~\ref{sec:models}. In Section~\ref{sec:empirical}, we analyze data from the Household Finance and Consumption Survey, and conclude with some final remarks in Section~\ref{sec:concl}. \section{Population studies} \label{sec:popstudy} This Section is devoted to showing very simple and intuitive evidence, obtained by analyzing three large data sets (30000 sample units) - each drawn from the three models in Figures \ref{fig:distalmod}, \ref{fig:latregmod} and \ref{fig:cwmlca} - in order to motivate the application of the cluster-weighted modeling approach in LCA (see Table \ref{table:legend}). We set the number of latent classes $S = 2,$ and to begin with we fit all three models assuming this value to be known. At the end of the Section, we will also show results on estimation of the number of latent classes based on BIC. \begin{table}[!h] \centering \begin{tabular}{lccc} \hline \hline & & Acronym & Data \\ \hline Latent Class regression & & LCreg & \emph{LCreg} \\ Latent Class with distal outcome & & LCdist & \emph{LCdist} \\ Latent Class cluster-weighted & & LCcw & \emph{LCcw} \\ \hline \hline \end{tabular} \caption{\footnotesize Legend of acronyms used for the population models and for the generated data.\label{table:legend}} \end{table} To get approximately equal (realistic) conditions on class separation, we generated the data such that the entropy-based $R^2$ \citep{magidson81} for the correctly specified model is about 0.7 in all the three data sets - which is the minimum class separation to get a good LC model \citep{vermunt:10,asparouhov2012auxiliary}. The data were generated in R \citep{rcore}, and parameter estimation was carried out with Latent GOLD 5.1 \citep{latentG5.1}. \subsection{\emph{LCreg} data}\label{sec:lcrdata} The \emph{LCreg} data set was generated from a two-class LCreg model, with class memberships of 0.7 and 0.3, six dichotomous indicators $(J=6)$ and one continuous $Z$ - drawn from a standard normal distribution - loaded on all six indicators. The external variable $Z$ is loaded on the indicators with a coefficient of -0.5, if the most likely response is on the first class, or 1, if the most likely response is on the second class, giving a large effect size. \begin{table}[!h] \centering \begin{tabular}{lcccccc} \hline \hline & \multicolumn{2}{c}{Class proportions} & & Entr. $R^2$ & & \#par \\ \hline LCreg & {\bf 0.7010}& {\bf 0.2990} & &{\bf 0.7675} & &{\bf 25}\\ LCdist & 0.7357 & 0.2643 & & 0.8639 & & 17 \\ LCcw & 0.7018 & 0.2982 & & 0.7681 & & 29 \\ \hline \hline \end{tabular} \caption{\footnotesize \emph{LCreg} data. Estimated class proportions, entropy-based $R^2$ and number of parameters for each of the three estimated models. 
Results from correctly specified model in bold font.\label{table:lcregclassprop}} \end{table} We observe in Table \ref{table:lcregclassprop} that the LCdist model overinflates the mixing proportion on the bigger class, whereas the LCcw model yields nearly the same class proportions as the correctly specified model. This comes at the cost of four more parameters to be estimated. Table \ref{table:lcrmeanvar} reports estimated means and variances for the variable $Z$ based on the LCdist and LCcw models, along with standard errors and $p$-values of the Wald tests of equality of the means and the variances. Nothing is reported for LCreg, as $Z$ is not modeled. In the LCdist model, both means are wrongly estimated to be statistically different from zero. Moreover, based on the reported Wald tests, we reject the nulls of equal means and equal variances (with $p$-values smaller than 0.01). These findings for the LCdist model can be explained by the fact that it wrongly predicts a clustered distribution on $Z$ in order to accommodate a direct effect of $Z$ on the indicators that is not accounted for. This creates an additional source of entropy in the class solution (as displayed by the relatively higher value of the entropy-based $R^2$). \begin{table}[!h] \centering \begin{tabular}{lllcllc} \hline \hline & \multicolumn{2}{c}{Means} & Wald(=) $p$ & \multicolumn{2}{c}{Variances} & Wald(=) $p$ \\ \hline LCdist& 0.0525*** & -0.1640*** & 0.0000 & 1.0301 & 0.8846 & 0.0000 \\ & (0.0071) & (0.0122) & & (0.0100) & (0.0158) & \\ & & & & & & \\ LCcw & -0.0010 & -0.0134 & 0.9000 & 0.9966 & 1.0105 & 0.6100 \\ & (0.0086) & (0.0163)& & (0.0114) & (0.0208) & \\ \hline \hline \end{tabular} \caption{\footnotesize \emph{LCreg} data. Estimated means (*** $p$-value$<$0.01, ** $p$-value$<$0.05, * $p$-value$<$0.1) and variances, and $p$-values from the Wald test of equality of component means and variances for the LCdist model and the LCcw model. Standard errors in parentheses.\label{table:lcrmeanvar}} \end{table} \subsection{\emph{LCdist} data}\label{sec:disdata} The \emph{LCdist} data set was generated from a two-class LCdist model, with class memberships of 0.7 and 0.3, six dichotomous indicators ($J=6$) and one continuous $Z$, drawn from a mixture of two normal distributions with means of -1 and 1 and common variance of 1. \begin{table}[!h] \centering \begin{tabular}{lcccccc} \hline \hline & \multicolumn{2}{c}{Class proportions} & & Entr. $R^2$ & & \#par \\ \hline LCreg & 0.5850 & 0.4150 & & 0.2781 & & 25 \\ LCdist& {\bf 0.7006}& {\bf 0.2994} & &{\bf 0.7274} & &{\bf 17}\\ LCcw & 0.7026 & 0.2974 & & 0.7320 & & 29 \\ \hline \hline \end{tabular} \caption{\footnotesize \emph{LCdist} data. Estimated class proportions, entropy-based $R^2$ and number of parameters for each of the three estimated models. Results from correctly specified model in bold font.\label{table:LCdistclassprop}} \end{table} The LCreg model yields a completely distorted class solution, whereas both the LCdist and LCcw models yield almost identical (correct) solutions (Table \ref{table:LCdistclassprop}). Interestingly, the misspecified response-$Z$ relation in the LCreg model yields a solution with relatively smaller class separation (as measured by the entropy-based $R^2$). Next, we compare estimates of class-specific means and variances of $Z$ as obtained by the LCdist and LCcw models.
\begin{table}[!h] \centering \begin{tabular}{lcccccc} \hline \hline & \multicolumn{2}{c}{Means} & Wald(=) $p$ & \multicolumn{2}{c}{Variances} & Wald(=) $p$ \\ \hline LCdist& {\bf -0.9911}***& {\bf 1.0156}*** & {\bf 0.0000} & {\bf 1.0072} & {\bf 0.9886} & 0.4000 \\ & (0.0084) & (0.0145) & & (0.0119) & (0.0196) & \\ & & & & & & \\ LCcw & -0.9889*** & 1.0242*** & 0.0000 & 1.0075 & 0.9810 & 0.2700 \\ & (0.0096) & (0.0176)& & (0.0125) & (0.0207) & \\ \hline \hline \end{tabular} \caption{\footnotesize \emph{LCdist} data. Estimated means (*** $p$-value$<$0.01, ** $p$-value$<$0.05, * $p$-value$<$0.1) and variances, and $p$-values from the Wald tests of equality of component means and variances for the LCdist and LCcw models. Standard errors in parentheses. Results from correctly specified model in bold font. \label{table:distmeanvar}} \end{table} The LCdist and LCcw models estimate almost identical means and variances of $Z$, and both correctly do not reject the null of a common variance across latent classes. We observe that the SEs of the less parsimonious LCcw model are systematically larger than those of the correctly specified model: this is not surprising, as fewer degrees of freedom correspond, all else equal, to slightly more variable estimates. \subsection{\emph{LCcw} data}\label{sec:cwmdata} The \emph{LCcw} data set was generated from a two-class LCcw model, with class proportions of 0.7 and 0.3, six dichotomous indicators ($J=6$) and one continuous $Z$, drawn from a mixture of two normal distributions with means of -1 and 1 and common variance of 1, and with direct effects of $Z$ on the indicators. \begin{table}[!h] \centering \begin{tabular}{lcccccc} \hline \hline & \multicolumn{2}{c}{Class proportions} & & Entr. $R^2$ & & \#par \\ \hline LCreg & 0.8899 & 0.1101 & & 0.5611 & & 25 \\ LCdist & 0.4373 & 0.5627 & & 0.6441 & & 17 \\ LCcw& {\bf 0.6993}& {\bf 0.3007} & &{\bf 0.7045} & &{\bf 29}\\ \hline \hline \end{tabular} \caption{\footnotesize \emph{LCcw} data. Estimated class proportions, entropy-based $R^2$ and number of parameters for each of the three estimated models. Results from correctly specified model in bold font.\label{table:lccwmclassprop}} \end{table} Both the LCreg and the LCdist models deliver distorted class solutions (Table \ref{table:lccwmclassprop}). Despite its higher entropy-based $R^2$, the LCdist model is more severely distorted than the LCreg model, owing to the residual dependence among the indicators left by the omitted direct effect. We also observe (Table \ref{table:lccwmmeanvar}) that both the means and the variances of $Z$ are biased in the LCdist model. Moreover, contrary to what happens under the correctly specified model, the Wald test in LCdist rejects the (true) hypothesis of equal variances, whereas under LCcw it cannot be rejected at the 1\% level. \begin{table}[!h] \centering \begin{tabular}{lcccccc} \hline \hline & \multicolumn{2}{c}{Means} & Wald(=) $p$ & \multicolumn{2}{c}{Variances} & Wald(=) $p$ \\ \hline LCdist& -1.4544***& 0.4159*** & 0.0000 & 0.7044 & 1.2096 & 0.0000 \\ & (0.0102) & (0.0120) & & (0.0111) & (0.0159) & \\ & & & & & & \\ LCcw & {\bf -1.0029}*** & {\bf 0.9955}*** & {\bf 0.0000} & {\bf 1.0215} & {\bf 0.9818} & {\bf 0.0380} \\ & (0.0104) & (0.0122)& & (0.0146) & (0.0160) & \\ \hline \hline \end{tabular} \caption{\footnotesize \emph{LCcw} data. Estimated means (*** $p$-value$<$0.01, ** $p$-value$<$0.05, * $p$-value$<$0.1) and variances, and $p$-values from the Wald tests of equality of component means and variances for the LCdist and LCcw models. Standard errors in parentheses.
Results from correctly specified model in bold font. \label{table:lccwmmeanvar}} \end{table} Table \ref{table:ARIcomp} reports adjusted Rand indexes (ARI) \citep{hubertARI}, arranged in a three-by-three table, comparing the hard partitions obtained with each fitted model under the three data generating scenarios. The results are in line with what was observed above. When the data are generated with an LCreg model, the LCcw model delivers an almost identical partition to that of the correctly specified model, followed closely by the LCdist model, with only about a 3\% difference. In the \emph{LCdist} data set as well, the LCcw partition is nearly identical to that of the correctly specified model (ARI $\approx 0.99$), whereas the ARI drops to $\approx 0.21$ when the comparison is with the LCreg partition. In the last scenario, the \emph{LCcw} data set, both the LCreg and the LCdist models deliver partitions quite different from that of the correctly specified model (ARIs of $\approx 0.16$ and $\approx 0.31$, respectively). \begin{table}[!h] \centering \begin{tabular}{lclccc} \hline \hline & & &\multicolumn{3}{c}{Fitted model}\\ \cmidrule{4-6} Data & &Correct model & LCreg & LCdist & LCcw \\ \hline \emph{LCreg} & &LCreg & 1 & 0.9732 & 0.9997 \\ \emph{LCdist} & &LCdist & 0.2125 & 1 & 0.9889 \\ \emph{LCcw} & &LCcw & 0.1604 & 0.3101 & 1 \\ \hline \hline \end{tabular} \caption{\footnotesize {\bf Adjusted Rand indexes}, computed between the clustering obtained with the correctly specified model - LCreg, LCdist and LCcw, respectively - and the clusterings obtained with the other two models.\label{table:ARIcomp}} \end{table} Based on the same data sets, in Table \ref{table:BICcomp} we also report BIC values for the three models in all three scenarios, for $S=1,\dots,5$. Although BIC values can be compared between LCdist and LCcw, BIC cannot be used to select among all three models, since $Z$ is not modeled in LCreg and the likelihoods are therefore not comparable. In both the \emph{LCreg} and the \emph{LCdist} data sets, the BIC of the LCcw model selects the correct number of classes, as does that of the correctly specified model. In the \emph{LCcw} data set, however, misspecifying the indicators-$Z$ relation causes a severe overstatement of the number of classes in both the LCreg and the LCdist models.
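The BIC values in Table \ref{table:BICcomp} are assumed here to follow the standard definition $-2\log L + \#\mathrm{par}\cdot\log N$. A minimal R sketch of the corresponding selection of $S$ for the LCcw model is given below; the log-likelihood values are hypothetical placeholders, while the parameter counts follow the LCcw parameterization described in Section~\ref{sec:models}, with $J=6$ indicators and $N=30000$ units.
\begin{verbatim}
# BIC-based selection of the number of latent classes for the LCcw model
J <- 6; N <- 30000; S <- 1:5
npar   <- 2 * (J * S) + 2 * S + (S - 1)       # 14, 29, 44, 59, 74 parameters
loglik <- c(-160550, -138200, -138130, -138080, -138030)   # hypothetical values
BIC    <- -2 * loglik + npar * log(N)
S[which.min(BIC)]                             # number of classes with smallest BIC
\end{verbatim}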
\begin{table}[!h] \centering \begin{tabular}{lclccccc} \hline \hline &&& \multicolumn{5}{c}{Number of components}\\ \cmidrule{4-8} Data && & $S=1$ & $S=2$ & $S=3$ & $S=4$ & $S=5$\\ \hline \multirow{3}{*}{\emph{LCreg} $\begin{dcases*} \\ \\ \end{dcases*}$}&&{\bf LCreg} & 235957.23 &{\bf 191309.94} & 191411.62 & 191539.73 & 191642.90 \\ &&LCdist & 321109.39 & 283926.34 & 278042.32 & 277453.71 & {\bf 277117.53}\\ &&LCcw & 321137.31 &{\bf 276509.33} & 276636.54 & 276766.45 & 276888.98\\ \cmidrule{2-8} \multirow{3}{*}{\emph{LCdist} $\begin{dcases*} \\ \\ \end{dcases*}$}&&LCreg & 233652.06 & 231508.46 & 231251.75 & 231041.07 & {\bf 230893.17}\\ &&{\bf LCdist} & 348714.25 & {\bf 331953.90} & 332037.14 & 332112.24 & 332189.69\\ &&LCcw & 337204.58 & {\bf 332067.91} & 332198.43 & 332329.23 & 332449.70\\ \cmidrule{2-8} \multirow{3}{*}{\emph{LCcw} $\begin{dcases*} \\ \\ \end{dcases*}$} && LCreg & 213395.81 &207054.91 & 206149.45 & 205996.43 & {\bf 205922.50} \\ && LCdist & 321792.35 & 310231.51 & 306431.47 & 305466.02 & {\bf 304613.18}\\ && {\bf LCcw} & 316998.91 &{\bf 303623.67} & 303759.81 & 303910.04 & 304025.80 \\ \hline \hline \end{tabular} \caption{\footnotesize {\bf Model selection with BIC}, computed for each model under each data generating model - LCreg, LCdist and LCcw - for $S=1,\dots,5$. The data generating model and the minimum BIC value of each model in each scenario are in bold. \label{table:BICcomp}} \end{table} \FloatBarrier \section{The different modeling approaches in detail}\label{sec:models} \subsection{The latent class cluster-weighted model} Let $\Y = (Y_1,\dots,Y_J)'$ be the vector of the full response pattern and $\y$ its realization. Let us also assume that one continuous external variable $Z$ is available, and let us denote by $z$ its realization. Let us denote by $X$ the categorical latent variable, with latent classes $s=1,\dots,S.$ A general form of association between $\Y,$ $X$ and $Z$ involves modeling the following joint probability \begin{equation}\label{eq:fulljoint} P(Z=z,X=s,\Y=\y) = P(Z=z,X=s) P(\Y=\y | Z=z,X=s), \end{equation} where the common assumption in LCA of $\Y$ and $Z$ being conditionally independent given the latent process is relaxed. From Equation \eqref{eq:fulljoint}, several submodels can be specified (covariate model, distal outcome model, etc.). If substantive theoretical arguments postulate the latent variable to be a predictor of the external variable $Z,$ the latent class cluster-weighted model specifies the joint probability of observing a response pattern $\y$ and a value $z$ of the external variable as \begin{equation}\label{eq:cwmod} P(\Y = \y , Z=z) = \sum_{s=1}^S \underbrace{P(X=s)}_{\text{a}} \underbrace{P(Z=z | X=s)}_{\text{b}} \underbrace{P(\Y=\y | Z=z,X=s)}_{\text{c}}, \end{equation} which is defined by three components: the structural component (a), which describes the latent class variable; the external variable model (b), which models the latent class specific distribution of $Z$; and the measurement component (c), which connects the latent class to the observed responses and allows for a direct effect of $Z$. Under the assumption of local independence of the response variables given the class membership and $Z$, the conditional distribution of the responses can be written as \begin{equation}\label{eq:locind} P(\Y=\y | Z=z,X=s) = \prod_{j=1}^J P(Y_j=y_j | Z=z,X=s).
\end{equation} For estimating the model in Equation \eqref{eq:cwmod}, we assume each $Y_j$ to be conditionally Bernoulli distributed, with success probability $\pi_{sj},$ and parametrize the conditional response probabilities through the following log-odds \begin{equation}\label{eq:condrprobcwm} \log \left( \frac{\pi_{js}}{1-\pi_{js}} \right) = \beta_{0,js} + Z\beta_{js}, \end{equation} whereby $Z$ is assumed to be conditionally Gaussian with mean $\mu_s$ and variance $\sigma^2_s,$ for $1 \leq s \leq S.$ The model of Equation \eqref{eq:cwmod} can be used to assign observations to clusters based on the posterior membership probabilities \begin{equation}\label{eq:postcwm} P(\text{X} = s | \Y = \mathbf{y},Z=z) = \frac{P(X=s) P(Z=z | X=s) P(\Y=\y | Z=z, X=s)}{P(\Y = \y , Z=z)}, \end{equation} according to, for instance, modal or proportional assignment rules. The latent class unconditional probabilities can as well be parametrized using logistic regressions. We opt for the following parametrization \begin{equation}\label{eq:mixprop} \log \left( \frac{P(X=s)}{P(X=1)} \right) = \theta_{s}, \end{equation} for $1 < s \leq S,$ where we take the first category as reference, and we set to zero the related parameter. The total number of free parameters to be estimated is therefore $2(J \times S)$ for the measurement model, $2S$ for the external variable model, and $S-1$ for the structural model. Notice that, by setting the $\beta_{js}$'s of Equation \eqref{eq:condrprobcwm} to zero, the LC cluster-weighted model reduces to a standard LC with distal outcome model. By contrast, given that the external variable component is completely missing, the LC regression is not formally nested in the LC cluster-weighted model, although it can be thought of as a sub-model in which the conditional distribution of $\mathbf{Y} | Z$ is modeled, and $Z$ is taken as fixed-value rather than a random variable to be modeled as well. \subsection{The LC with distal outcome model} It is common, in LCA, to consider a less general version of the joint distribution of Equation \eqref{eq:fulljoint}, by assuming the responses and $Z$ to be conditionally independent given the latent process. If, again, the latent class variable is taken to be a predictor of the external variable $Z,$ this yields the following latent class with distal outcome model \begin{equation}\label{eq:LCdisteq} P(\Y = \y , Z=z) = \sum_{s=1}^S P(X=s) P(Z=z | X=s) P(\Y=\y | X=s). \end{equation} Under the local independence assumption of the items given the latent class variable, the response conditional probabilities can be written, similarly to Equation \eqref{eq:locind}, as \begin{equation}\label{eq:locindDist} P(\Y=\y | X=s) = \prod_{j=1}^J P(Y_j=y_j | X=s), \end{equation} and parametrized through the following log-odds \begin{equation}\label{eq:condrprobDist} \log \left( \frac{\pi_{js}}{1-\pi_{js}} \right) = \beta_{0,js}. \end{equation} The model of Equation \eqref{eq:LCdisteq} can be used to cluster observations, according to modal or proportional assignment rules, based on the following posterior membership probabilities \begin{equation}\label{eq:postdist} P(\text{X} = s | \Y = \mathbf{y},Z=z) = \frac{P(X=s) P(Z=z | X=s) P(\Y=\y | X=s)}{P(\Y = \y , Z=z)}. \end{equation} The external variable $Z$ is assumed, conditional to the latent class, to be Gaussian with mean $\mu_s$ and variance $\sigma^2_s,$ for $1 \leq s \leq S,$ whereby the latent class unconditional probabilities are parametrized as in Equation \eqref{eq:mixprop}. 
This yields $(J \times S) + 2S + (S-1)$ free parameters to be estimated. The only difference from the model of Equation \eqref{eq:cwmod} is that $\mathbf{Y},$ in the measurement component, is conditioned only on $X$, not on $Z$. That is, $\mathbf{Y}$ is assumed to be independent of $Z$ given $X$, which is a standard, and rather strong, assumption of LCA. \subsection{The LC regression model} Rather than modeling the joint distribution $P(Z,X,\Y)$ of Equation \eqref{eq:fulljoint}, the latent class regression models the conditional distribution of $\Y$ given $Z$ and the latent class variable, specifying the following model for $\Y$: \begin{equation}\label{eq:LCregeq} P(\Y = \y | Z=z) = \sum_{s=1}^S P(X=s) P(\Y=\y | Z=z,X=s). \end{equation} In this case, the conditional response probabilities depend on the external variable $Z$ and, under local independence of the responses given the latent variable, the measurement model can be written as in Equation \eqref{eq:locind} and parametrized as in Equation \eqref{eq:condrprobcwm}. The posterior membership probabilities, computed from the model in Equation \eqref{eq:LCregeq}, are \begin{equation}\label{eq:postLCreg} P(\text{X} = s | \Y = \mathbf{y},Z=z) = \frac{P(X=s) P(\Y=\y | Z=z, X=s)}{P(\Y = \y | Z=z)}. \end{equation} With the latent class unconditional probabilities parametrized as in Equation \eqref{eq:mixprop}, the total number of free parameters to be estimated is $2(J \times S) + (S-1).$ \FloatBarrier \begin{table}[!h] \centering \begin{tabular}{lcccc} \hline \hline & & Dir. Eff. & $Z$ modeled & \#par \\ \hline Latent Class regression & & $\checkmark$ & $\times$ & $2(J \times S) + (S-1)$\\ Latent Class with distal outcome & & $\times$ & $\checkmark$ & $(J \times S) + 2S + (S-1)$\\ Latent Class cluster-weighted & & $\checkmark$ & $\checkmark$ & $2(J \times S) + 2S + (S-1)$ \\ \hline \hline \end{tabular} \caption{Summary of the different modeling assumptions and of the number of free parameters to be estimated.\label{table:modsumassumptions}} \end{table} \FloatBarrier Table \ref{table:modsumassumptions} summarizes how $Z$ enters each of the three models and the total number of free parameters to be estimated. Intuitively, this shows that the first two models can be seen as special cases of the third, which therefore models the relationship among the three sets of variables in the most comprehensive manner. \section{A latent class model of households' asset ownership to predict wealth}\label{sec:empirical} Household wealth cannot be directly observed. Nonetheless, measuring it is a crucial issue for any policy maker. Notably, survey measures of income - or expenditure, if available - are affected by substantial measurement error and systematic reporting bias \citep{ferguson2003,moore2000}. In addition, if wealth as a measure of permanent income \citep{friedman1957} is of interest, current income, even if measured without error, is likely to be a poor approximation. More recent surveys (like the Household Finance and Consumption Survey of the European Central Bank) provide a measure of net wealth, the value of total household assets minus the value of total liabilities. However, since it relies on each household's subjective evaluation of the current value of each asset it owns, such a measure is prone to considerable measurement error as well. Furthermore, such a complex measure is difficult to understand for a more general audience, to whom a simpler classification or index would appeal.
Latent Trait Analysis (LTA) and Latent Class Analysis have been used to model wealth - or, conversely, deprivation - from observed asset ownership. Whereas in the LTA framework wealth is modeled as a continuous trait \citep{szeles2013,vandemoortele2014}, LCA has been used \citep{moisio2004,perez2005} based on the idea that wealth (or poverty) can be seen as a multidimensional latent construct. Although determining which ownership indicators to use can be a problem, arguably they all attempt to identify subgroups in the population based on the same multidimensional phenomenon \citep{moisio2004}: different dimensions of wealth are measured by different (sets of) indicators. In addition, within an LCA framework, the response probabilities can be used to evaluate \emph{ex post} each indicator's validity as a measure of the latent wealth variable. We analyze data from the first wave of the Household Finance and Consumption Survey, conducted by the European Central Bank. We focus on a sample of Italian households, for which we have information on real and financial assets, liabilities, different income measures, consumption expenditures, and a measure of total wealth in euro. The latter is defined as the total value of household assets, excluding public and occupational pension wealth, minus total outstanding household liabilities \citep{ecb2013}. The value of each asset is obtained by asking the interviewees how much they think the asset is worth. For instance, for the item ``owning any car" (HB4300, Table \ref{table:varlist}), the interviewer asks ``For the cars that you/your household own, if you sold them now, about how much do you think you could get?" (HFCS Core Variables Catalogue, 2013). In fact, different households might have very different (and possibly wrong) perceptions of the value of the assets they own. This is why we cannot rely only on DN3001 as a measure of household wealth, and instead use a model known for its strength in correcting for measurement error by exploiting multiple indicators. We selected 10 items related to financial wealth and also included 4 items related to a broader notion of wealth, for a total of 14 items. The variable on household main residence tenure status (HB0300) was recoded by merging ``entirely owned main residence" and ``partially owned main residence" into one category. The resulting tenure-status variable (hometen) has 3 categories and enters all models with dummy coding. We restrict ourselves to analyzing only households with positive wealth. Doing so allows us to set up a model for log-wealth rather than for wealth, as is commonly done in the economic literature on elasticities (see, for instance, \citealp{charles2003}). Imposing this restriction leads us to drop only 188 sample units (about 2\% of the total sample). The LCreg, LCdist and LCcw models are estimated using Latent GOLD 5.1 \citep{latentG5.1}; sample syntaxes for each model are reported in the Appendix. The models are compared with the purpose of investigating how well cluster membership predicts classes of (log-)wealth. The cluster-weighted modeling approach does so by relaxing the conditional independence assumption and allowing for possible direct effects of log-wealth on the indicators.
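A minimal R sketch of this data preparation, assuming the HFCS variables are stored in a data frame \texttt{hfcs} with the names listed in Table~\ref{table:varlist} (the data frame itself is an assumption made for illustration), is the following.
\begin{verbatim}
# Recode tenure status (HB0300): merge "entirely owned" and "partially owned"
# into one category; keep "rented" and "free use" as separate categories.
hfcs$hometen <- with(hfcs, ifelse(HB0300 %in% c(1, 2), 1,
                           ifelse(HB0300 == 3, 2, 3)))

# Keep households with positive net wealth and model its logarithm
hfcs <- subset(hfcs, DN3001 > 0)
hfcs$logwealth <- log(hfcs$DN3001)
\end{verbatim}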
\begin{table}[!h] \centering \begin{tabular}{lll} \hline \hline Name & Description & Type\\ \hline HB0300 & Household main residence - Tenure status & Nominal \\ & (1 if entirely owned) & \\ & (2 if partially owned) & \\ & (3 if rented) & \\ & (4 if for free use) & \\ hometen & Household main residence - Tenure status & Nominal \\ & (1 if entirely or partially owned) & \\ & (2 if rented) & \\ & (3 if for free use) & \\ HB2400 & Household owns other properties & Dichotomous \\ HB4300 & Household owns any car & Dichotomous \\ HB4700 & Ownership of other valuables & Dichotomous \\ HC0200 & Household has a credit line or overdraft & Dichotomous \\ HC0300 & Household has a credit card & Dichotomous \\ HC0400 & Household has a non collaterized loan & Dichotomous \\ HD0100 & Household has any investment in business & Dichotomous \\ HD1100 & Household owns a sight account & Dichotomous \\ HD1200 & Household owns a savings account & Dichotomous \\ HD1300 & Household has any investment in mutual fund& Dichotomous \\ HD1400 & Household owns bonds & Dichotomous \\ HD1500 & Household owns managed accounts & Dichotomous \\ DN3001 & Net Wealth & Continuous \\ \hline \hline \end{tabular} \caption{\footnotesize Variables list, with description and type. hometen obtained by recoding HB0300 into 3 categories, where ``partially owned" and ``entirely owned" are merged into one. DN3001 is obtained as total household assets, excluding public and occupational pension wealth, minus total outstanding household's liabilities. We focus on a sample of observations with positive net wealth, and work with its logarithmic transformation. Further details on variables definitions are available from the ECB website.\label{table:varlist}} \end{table} In Table \ref{table:modselval} we display number of components, BIC, number of parameters, expected classification error, and entropy-based $R^2$ for each of the three models, from 1 to 5 components. Expected classification error is a standard output in Latent GOLD, and is obtained by cross-tabulating the modal classes (as estimated by the modal assignment rule) with probabilistic classes (as estimated by the class proportions), then taking the ratio between the number of missclassified observations and the total number of units. \begin{table}[!h] \centering \begin{tabular}{lccccc} \hline \hline & $S$ & BIC & \#par & Class. Err. & Entr. $R^2$ \\ \hline \multirow{5}{*}{LCreg} & 1 & 84730.90 & 27 & - & - \\ & 2 & 80709.36 & 55 & 0.12 & 0.59 \\ & 3 & 79157.86 & 83 & 0.18 & 0.58 \\ & 4 & 78647.04 & 111 & 0.24 & 0.57 \\ & 5 & 78680.09 & 139 & 0.25 & 0.57 \\ \cmidrule{2-6} \multirow{5}{*}{LCdist} & 1 & 125830.86 & 16 & - & - \\ & 2 & 114860.57 & 33 & 0.06 & 0.77 \\ & 3 & 109951.62 & 50 & 0.08 & 0.83 \\ & 4 & 108642.01 & 67 & 0.11 & 0.81 \\ & 5 & 107201.39 & 84 & 0.14 & 0.79 \\ \cmidrule{2-6} \multirow{5}{*}{LCcw} & 1 & 115507.90 & 29 & - & - \\ & 2 & 108412.34 & 59 & 0.01 & 0.95 \\ & 3 & 106477.23 & 89 & 0.09 & 0.80 \\ & 4 & 105996.46 & 119 & 0.17 & 0.72 \\ & 5 & 105731.14 & 149 & 0.19 & 0.73 \\ \hline \hline \end{tabular} \caption{\footnotesize Number of components (Ncomp), BIC, number of parameters (\#par), expected classification error (Class. Err.), and entropy-based (Entr.) $R^2$ for each of the three models, from 1 to 5 components.\label{table:modselval}} \end{table} Based on BIC only, both LCdist and LCcw models would select the highest possible number of components ($S=5$), whereas LCreg would select four components. 
In fact, consistently with the previous literature on LCA for measuring wealth \citep{moisio2004,perez2005}, with four or five components the latent classes would be hardly interpretable. $S=2$ seems to be the best trade-off solution according to BIC and the two fit criteria for the LCreg and LCcw models, while LCdist seems to favor $S=3$. We therefore select 2 classes for LCreg and LCcw, and 3 classes for LCdist; for comparability, we also show results for LCdist with $S=2$. Interestingly, however, the BIC values of LCcw are always smaller than those of LCdist, indicating a better overall fit to the data. We report (Figure \ref{fig:profiles}) the probability profile plots for LCreg with $S=2$ (Figure \ref{fig:LCreg2profile}), LCdist with $S=2$ and $S=3$ (Figures \ref{fig:LCdist2profile} and \ref{fig:LCdist3profile}, respectively), and LCcw with $S=2$ (Figure \ref{fig:LCcw2profile}). \begin{figure} \caption{\footnotesize Probability profile plot for LCreg $(S=2)$, LCdist ($S=2$ and $S=3$) and LCcw $(S=2)$. For the variable ``hometen", the levels indicate the tenure-status category probabilities within each wealth class. Similarly, for the remaining (dichotomous) items, the levels refer to the average item ownership probabilities within each wealth class.} \label{fig:LCreg2profile} \label{fig:LCcw2profile} \label{fig:LCdist2profile} \label{fig:LCdist3profile} \label{fig:profiles} \end{figure} We observe that both the LCreg and the LCcw models, contrary to the two-class LCdist model, predict a wealthier class, with higher ownership probabilities for all items and a higher probability of owning (partially or entirely) the household main residence, relative to the ``free use" and ``rent" categories. In the three-class LCdist model, whose first and second clusters seem to arise from a split of the first cluster of the two-class model, the first wealth class shows higher ownership probabilities for all items and a higher probability of owning (partially or entirely) the household main residence, as in the two-class LCreg and LCcw models. One possible explanation is that the LCdist model needs one additional class to predict a class profile reasonably corresponding to the highest wealth level. Interestingly, the profiles of all three models ($S = 2$) are comparable, though each shows specific features. The LCreg classes have a similar composition in terms of profiles, while both LCdist ($S = 2$ and $S = 3$) and LCcw deliver better separated classes. However, wealthier households are predicted by LCcw to own more assets, whereas less wealthy households have a relatively larger probability of, for instance, living in a rented (or free-use) residence. Table \ref{table:ARIempirical} reports the ARI for pairwise comparisons of the three models: LCdist (with both $S=2$ and $S=3$) and LCcw deliver clusterings with moderate agreement. Figure \ref{fig:logwealth} shows how the class composition of LCdist (with $S=2$ and $S=3$) and LCcw predicts classes of log-wealth.
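The adjusted Rand indexes reported here, as in Table~\ref{table:ARIcomp} above, compare the hard partitions obtained from the modal assignments. For reference, a self-contained R sketch of the adjusted Rand index \citep{hubertARI} is given below; the two class-assignment vectors are hypothetical, and packages such as \texttt{mclust} provide equivalent implementations.
\begin{verbatim}
# Adjusted Rand index between two hard partitions a and b (vectors of labels)
ari <- function(a, b) {
  tab    <- table(a, b)                     # contingency table of the partitions
  n      <- sum(tab)
  sum_ij <- sum(choose(tab, 2))
  sum_a  <- sum(choose(rowSums(tab), 2))
  sum_b  <- sum(choose(colSums(tab), 2))
  expected <- sum_a * sum_b / choose(n, 2)
  (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)
}

# Toy usage with two hypothetical two-class assignments
set.seed(1)
cl1 <- sample(1:2, 100, replace = TRUE)
cl2 <- ifelse(runif(100) < 0.9, cl1, 3 - cl1)   # mostly agrees with cl1
ari(cl1, cl2)
\end{verbatim}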
\begin{table}[!h] \centering \begin{tabular}{lcccc} \hline \hline & LCreg & LCdist & LCdist & LCcw \\ & & $(S = 2)$ & $(S = 3)$ & \\ \hline LCreg & 1 & & &\\ LCdist $(S = 2)$& 0.0869 & 1 & & \\ LCdist $(S = 3)$& 0.3369 & 0.4446 & 1 & \\ LCcw & 0.0661 & 0.4676 & 0.4371 & 1 \\ \hline \hline \end{tabular} \caption{\footnotesize {\bf Adjusted Rand indexes} comparing the clusterings in classes of wealth of the LCreg, LCdist and LCcw models.\label{table:ARIempirical}} \end{table} The first class in LCdist with $S = 2$ (Figure \ref{fig:LCdist2logwealth}) seems to capture observations with lower log-wealth, although its relatively fat tails allocate non-zero density also to the wealthiest observations. With $S = 3$ (Figure \ref{fig:LCdist3logwealth}), the second class is split into two (see also Figure \ref{fig:class_prop_bar}), with the new third class made up of households in the right tail of the log-wealth distribution. Consistently with the findings from the probability profiles, the LCcw classes discriminate households better in terms of predicting the (log-)wealth distribution (Figure \ref{fig:LCcwlogwealth}). \begin{figure} \caption{\footnotesize Density plots of log-wealth as predicted by LCdist ($S=2$, panel a, and $S=3$, panel b) and LCcw (panel c). Mixture density in green, component marginal densities in other colors. Class proportions in panel (d).} \label{fig:LCdist2logwealth} \label{fig:LCdist3logwealth} \label{fig:LCcwlogwealth} \label{fig:class_prop_bar} \label{fig:logwealth} \end{figure} To gain further insight into the estimated wealth distribution, we report (Table \ref{table:logwealthdist}) the estimated means and variances of log-wealth for LCdist with $S=2$ and for LCcw. For comparability, we do not report the corresponding values for LCdist with $S = 3$. \begin{table}[!h] \centering \begin{tabular}{lcccccc} \hline \hline & \multicolumn{2}{c}{Means} & Wald(=) $p$ & \multicolumn{2}{c}{Variances} & Wald(=) $p$ \\ \hline LCdist& 10.0835***& 12.6173*** & 0.0000 & 3.0910 & 0.5573 & 0.0000 \\ & (0.0451) & (0.0129) & & (0.0843) & (0.0137) & \\ & & & & & & \\ LCcw & 9.4489*** & 12.4928*** & 0.0000 & 2.5960 & 0.6304 & 0.0380 \\ & (0.0391) & (0.0112)& & (0.0865) & (0.0131) & \\ \hline \hline \end{tabular} \caption{\footnotesize Estimated means (*** $p$-value$<$0.01, ** $p$-value$<$0.05, * $p$-value$<$0.1) and variances of log-wealth, and $p$-values from the Wald tests of equality of component means and variances for the LCdist $(S = 2)$ and LCcw models. Standard errors in parentheses. \label{table:logwealthdist}} \end{table} Inference on the estimated log-wealth parameters (all Wald tests reject at the 5\% level) validates the assumption of a clustered distribution of log-wealth, with heteroscedastic components, in both LCdist and LCcw. The mean of the first wealth class is larger for LCdist than for LCcw, and so is the corresponding variance. As observed above, this first class in LCdist also absorbs households in the right tail of the distribution, which complicates the interpretation of the wealth classes compared to LCcw. That is, LCcw, whose first class has a smaller mean and variance, allows for a more natural substantive interpretation of the two classes as wealthier versus less wealthy households. This is also consistent with the findings of the related literature on poverty and deprivation \citep{moisio2004,perez2005}.
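As an illustration, the class-specific and mixture densities of log-wealth implied by the LCcw estimates in Table~\ref{table:logwealthdist} can be drawn directly from the reported means and variances. In the R sketch below, the mixing weights are placeholders rather than the estimated class proportions (which are displayed in Figure~\ref{fig:logwealth}), so the plot is only indicative of panel (c).
\begin{verbatim}
# Class-specific and mixture densities of log-wealth implied by the LCcw fit
mu    <- c(9.4489, 12.4928)        # estimated class-specific means
sigma <- sqrt(c(2.5960, 0.6304))   # estimated class-specific standard deviations
w     <- c(0.3, 0.7)               # hypothetical class proportions
x  <- seq(4, 17, length.out = 500)
f1 <- w[1] * dnorm(x, mu[1], sigma[1])
f2 <- w[2] * dnorm(x, mu[2], sigma[2])
plot(x, f1 + f2, type = "l", xlab = "log-wealth", ylab = "density")  # mixture
lines(x, f1, lty = 2)   # first (less wealthy) class
lines(x, f2, lty = 3)   # second (wealthier) class
\end{verbatim}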
Finally, Table \ref{table:directeff} demonstrates that LCcw provides an easy way to test whether there are significant direct effects of log-wealth on the response variables. Interestingly, inference on such effects points out that there are variables for which we have no significant direct effects of log-wealth (hometen), as well as indicating that some effects are the same across latent classes (HB2400, HD1300, HD1400 and HD1500). In the logic of an applied researcher, this suggests intermediate and more parsimonious modeling options, where effects on some variables can be constrained to be zero, or to be the same across classes, which can easily be discovered with a cluster-weighted modeling approach. \begin{table}[!h] \centering \resizebox{0.68\textheight}{!}{\begin{tabular}{lcccc} \hline \hline Log-wealth on & \multicolumn{2}{c}{Coefficients} & Wald(0) $p$ & Wald(=) $p$ \\ & Class 1 & Class 2 & & \\ \hline hometen & 0.0184 & 0.1532 & 0.6800 & 0.4800 \\ (Main res. tenure stat) & (0.0392) & (0.1930) & & \\ & & & & \\ HB2400 & -1.6242*** & -1.5421*** & 0.0000 & 0.4600 \\ (HH owns other properties) & (0.0957) & (0.0564) & & \\ & & & & \\ HB4300 & -0.5784*** & -1.1729*** & 0.0000 & 0.0000 \\ (HH owns any car) & (0.0426) & (0.0626) & & \\ & & & & \\ HB4700 & -0.3866*** & -0.7445*** & 0.0000 & 0.0000 \\ (HH owns other valuables) & (0.0396) & (0.0639) & & \\ & & & & \\ HC0200 & -0.5217*** & -0.7899*** & 0.0000 & 0.0000 \\ (HH has a credit line) & (0.0431) & (0.0433) & & \\ & & & & \\ HC0300 & -0.8741*** & -1.1398*** & 0.0000 & 0.0001 \\ (HH has a credit card) & (0.0517) & (0.0480) & & \\ & & & & \\ HC0400 & -0.2026*** & -0.2205*** & 0.0000 & 0.7800 \\ (HH has a non-coll. loan) & (0.0415) & (0.0515) & & \\ & & & & \\ HD0100 & -0.7896*** & -1.0633*** & 0.0000 & 0.0002 \\ (HH has any invest. in business) & (0.0536) & (0.0527) & & \\ & & & & \\ HD1100 & -0.8060*** & -1.6133*** & 0.0000 & 0.0000 \\ (HH owns a sight account) & (0.0463) & (0.0768) & & \\ & & & & \\ HD1200 & -0.2216*** & -0.0630* & 0.0000 & 0.0025 \\ (HH owns a savings account) & (0.0364) & (0.0381) & & \\ & & & & \\ HD1300 & -1.0827*** & -1.1224*** & 0.0000 & 0.7500 \\ (HH has invest. in mutual funds) & (0.1061) & (0.0636) & & \\ & & & & \\ HD1400 & -0.9244*** & -0.9835*** & 0.0000 & 0.5100 \\ (HH owns bonds) & (0.0766) & (0.0478) & & \\ & & & & \\ HD1500 & -1.0942*** & -1.1664*** & 0.0000 & 0.6100 \\ (HH owns managed accounts) & (0.1236) & (0.0688) & & \\ \\ \hline \hline \end{tabular}} \caption{\footnotesize Estimated direct effects of log-wealth on each variable per latent class (*** $p$-value$<$0.01, ** $p$-value$<$0.05, * $p$-value$<$0.1), $p$-values from Wald test of joint equality of each variable's direct effect to zero - Wald(0) - and from Wald test of equality of effects across latent classes - Wald(=) - for the LCcw model. Short description below variable names. HH stands for Household. Standard errors in parentheses. \label{table:directeff}} \end{table} \FloatBarrier \section{Conclusion}\label{sec:concl} In this paper we have brought modeling ideas from the regression mixture literature into latent class analysis. Our focus has been to motivate the use of the cluster-weighted modeling approach as a general specification for the joint relationship of the response variables, the external variable, and the latent variable. Three population studies have been used to illustrate this idea, and the actual advantage of the proposed approach was showed through an application on Italian household asset ownership data. 
The cluster-weighted modeling approach, contrary to the simpler distal outcome model, was able to exploit the joint relationship among the ownership indicators, the underlying true wealth, and wealth as measured by the survey to predict two interpretable classes of wealth, whose ownership profiles correspond, respectively, to less wealthy (first class) and wealthier (second class) households on the measured log-wealth distribution. From the applied researcher's perspective, the proposed approach has several advantages. First, it makes it possible to deal with direct effects both when they are of substantive interest and when they represent a source of noise to be handled. That is, if direct effects are present, our approach, contrary to the distal outcome model, yields unbiased estimates of the cluster-specific means and variances of the distal outcome. Second, it provides a safe and flexible strategy in which one starts from the most general model; proceeding backwards, the user can then test the assumptions of both the distal outcome and the latent class regression models. The approach we suggest has some limitations as well. First, it can be unstable if some of the observed response patterns are missing, or are simply too rare for the model parameters to be estimated (see \citealp{bandeenroche:97}, for standard sufficient conditions for identifiability in LCA). In such cases, especially at an exploratory stage of the analysis, simpler models can be more attractive. Second, depending on the goal of the analysis, the cluster-weighted modeling approach can produce a final output that is harder to interpret than that of simpler models. In other words, the interpretability of the final model depends on the goal of the analysis, which must be kept clearly in mind in this as in any other modeling approach.
\appendix \section{Latent GOLD syntax for the three models} \subsection{LCreg model syntax} \small{options maxthreads=4; algorithm tolerance=1e-008 emtolerance=0.01 emiterations=500 nriterations=100 ; startvalues seed=0 sets=50 tolerance=1e-005 iterations=50; bayes categorical=1 variances=1 latent=1 poisson=1; montecarlo seed=0 sets=0 replicates=500 tolerance=1e-008; quadrature nodes=10; missing excludeall; output parameters=first betaopts=wl standarderrors profile probmeans=posterior bivariateresiduals estimatedvalues=model; variables dependent hometen nominal, HB2400, HB4300, HB4700, HC0200, HC0300, HC0400, HD0100, HD1100, HD1200, HD1300, HD1400, HD1500; independent logwealth; latent Cluster nominal 2; equations Cluster $<-$ 1; hometen $<-$ 1$|$Cluster + logwealth$|$Cluster; HB2400 $<-$ 1$|$Cluster + logwealth$|$Cluster; HB4300 $<-$ 1$|$Cluster + logwealth$|$Cluster; HB4700 $<-$ 1$|$Cluster + logwealth$|$Cluster; HC0200 $<-$ 1$|$Cluster + logwealth$|$Cluster; HC0300 $<-$ 1$|$Cluster + logwealth$|$Cluster; HC0400 $<-$ 1$|$Cluster + logwealth$|$Cluster; HD0100 $<-$ 1$|$Cluster + logwealth$|$Cluster; HD1100 $<-$ 1$|$Cluster + logwealth$|$Cluster; HD1200 $<-$ 1$|$Cluster + logwealth$|$Cluster; HD1300 $<-$ 1$|$Cluster + logwealth$|$Cluster; HD1400 $<-$ 1$|$Cluster + logwealth$|$Cluster; HD1500 $<-$ 1$|$Cluster + logwealth$|$Cluster; } \subsection{LCdist model syntax} \small{options maxthreads=4; algorithm tolerance=1e-008 emtolerance=0.01 emiterations=500 nriterations=50 ; startvalues seed=0 sets=50 tolerance=1e-005 iterations=50; bayes categorical=1 variances=1 latent=1 poisson=1; montecarlo seed=0 sets=0 replicates=500 tolerance=1e-008; quadrature nodes=10; missing excludeall; output parameters=first betaopts=wl standarderrors profile probmeans=posterior bivariateresiduals estimatedvalues=model; variables dependent hometen nominal, HB2400, HB4300, HB4700, HC0200, HC0300, HC0400, HD0100, HD1100, HD1200, HD1300, HD1400, HD1500, logwealth continuous; latent Cluster nominal 2; equations Cluster $<-$ 1; hometen $<-$ 1$|$Cluster; HB2400 $<-$ 1$|$Cluster; HB4300 $<-$ 1$|$Cluster; HB4700 $<-$ 1$|$Cluster; HC0200 $<-$ 1$|$Cluster; HC0300 $<-$ 1$|$Cluster; HC0400 $<-$ 1$|$Cluster; HD0100 $<-$ 1$|$Cluster; HD1100 $<-$ 1$|$Cluster; HD1200 $<-$ 1$|$Cluster; HD1300 $<-$ 1$|$Cluster; HD1400 $<-$ 1$|$Cluster; HD1500 $<-$ 1$|$Cluster; logwealth $<-$ 1$|$Cluster; logwealth$|$Cluster;} \subsection{LCCw model syntax} Notice that ``logwealthshadow" is a duplicate of ``logwealth", as Latent GOLD does not allow the same variable to be both dependent and independent variable at the same time. 
\small{options maxthreads=4; algorithm tolerance=1e-008 emtolerance=0.01 emiterations=500 nriterations=50 ; startvalues seed=0 sets=50 tolerance=1e-005 iterations=50; bayes categorical=1 variances=1 latent=1 poisson=1; montecarlo seed=0 sets=0 replicates=500 tolerance=1e-008; quadrature nodes=10; missing excludeall; output parameters=first betaopts=wl standarderrors profile probmeans=posterior bivariateresiduals estimatedvalues=model; variables dependent hometen nominal, HB2400, HB4300, HB4700, HC0200, HC0300, HC0400, HD0100, HD1100, HD1200, HD1300, HD1400, HD1500, logwealth continuous; independent logwealthshadow; latent Cluster nominal 2; equations Cluster $<-$ 1; hometen $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HB2400 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HB4300 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HB4700 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HC0200 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HC0300 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HC0400 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HD0100 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HD1100 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HD1200 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HD1300 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HD1400 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; HD1500 $<-$ 1$|$Cluster + logwealthshadow$|$Cluster; logwealth $<-$ 1$|$Cluster; logwealth$|$Cluster;} \end{document}
\begin{document} \begin{abstract} We provide a categorical framework for recent results of Markus Perling on combinatorics of exceptional collections on numerically rational surfaces. Using it we simplify and generalize some of Perling's results as well as Vial's criterion for existence of a numerical exceptional collection. \end{abstract} \maketitle \section{Introduction} A beautiful recent paper of Markus Perling~\cite{Pe} proves that any numerically exceptional collection of maximal length in the derived category of a numerically rational surface (i.e., a surface with zero irregularity and geometric genus) can be transformed by mutations into an exceptional collection consisting of objects of rank 1. In this note we provide a categorical framework that allows to extend and simplify Perling's result. For this we introduce a notion of a surface-like category. Roughly speaking, it is a triangulated category~$\cT$ whose \emph{numerical Grothendieck group} $\NGr(\cT)$, considered as an abelian group with a bilinear form (\emph{Euler form}), behaves similarly to the numerical Grothendieck group of a smooth projective surface, see Definition~\ref{def:surface-like}. Of course, for any smooth projective surface $X$ its derived category $\bD(X)$ is surface-like. However, there are surface-like categories of different nature, for instance derived categories of noncommutative surfaces and of even-dimensional Calabi--Yau varieties turn out to be surface-like (Example~\ref{example:cy}). Also, some subcategories of surface-like categories are surface-like. Thus, the notion of a surface-like category is indeed more general. In fact, all results of this paper have a numerical nature, so instead of considering categories, we pass directly to their numerical Grothendieck groups. These are free abelian groups $\mathrm{G}$ (we assume them to be of finite rank), equipped with a bilinear form $\chi$ that is neither symmetric, nor skew-symmetric in general. We call such pair $(\mathrm{G},\chi)$ a \emph{pseudolattice} (since this is a non-symmetric version of a lattice). We define and investigate a notion of a \emph{surface-like} pseudolattice (Definition~\ref{def:surface-like}), and show that it has many features similar to numerical Grothendieck groups of surfaces. For instance, one can define the rank function on such~$\mathrm{G}$, define the Neron--Severi lattice $\NS(\mathrm{G})$ of $\mathrm{G}$ (that is isomorphic to the Neron--Severi lattice of the surface~$X$ when $\mathrm{G} =\NGr(\bD(X))$), and construct the canonical class $K_\mathrm{G} \in \NS(\mathrm{G}) \otimes \QQ$ (in general, it is only rational). We also introduce some important properties of surface-like pseudolattices: \emph{geometricity} (Definition~\ref{def:geometricity}), \emph{minimality} (Definition~\ref{def:minimlaity}), and define their \emph{defects} (Definition~\ref{def:defect}). The main result of this paper is Theorem~\ref{theorem-ranks-1}, saying that if a geometric surface-like pseudolattice with zero defect has an exceptional basis (Definition~\ref{def:exceptional}), then this basis can be transformed by mutations to a basis consisting of elements of rank~1. To prove Theorem~\ref{theorem-ranks-1} we first classify all minimal geometric pseudolattices $\mathrm{G}$ with an exceptional basis --- it turns out that minimality implies that the pseudolattice is isometric either to~$\NGr(\bD(\PP^2))$ or to $\NGr(\bD(\PP^1 \times \PP^1))$, see Theorem~\ref{theorem:minimal-cats}. In particular, its rank is 3 or 4, and its defect is 0. 
To get the general case from this we investigate a kind of \emph{the minimal model program} for surface-like pseudolattices: we introduce the notion of a \emph{contraction} of a pseudolattice (with respect to an exceptional element of zero rank), and show that one can always pass from a general surface-like pseudolattice to a minimal one by a finite number of contractions. We verify that geometricity is preserved under contractions, and that defect does not decrease. In particular, if we start with a geometric pseudolattice of zero defect with an exceptional basis, then defect does not change under these contractions. This allows to deduce Theorem~\ref{theorem-ranks-1} from Theorem~\ref{theorem:minimal-cats}. In most of the proofs we follow the original arguments of Perling. The main new feature that we introduce is the notion of a surface-like pseudolattice that allows to define the contraction operation and gives more flexibility. In particular, this allows to get rid easily from exceptional objects of zero rank that appear as a headache in Perling's approach. Also, the general categorical perspective we take simplifies some of the computations, especially those related to use of Riemann--Roch theorem. Besides Perling's results, we also apply this technique to prove a generalization of a criterion of Charles Vial \cite[Theorem~3.1]{V} for existence of a numerically exceptional collection in the derived category of a surface, see Theorem~\ref{theorem:criterion}. In fact, in the proof we use a lattice-theoretic result~\cite[Proposition~A.12]{V}, but besides that, the proof is an elementary consequence of the minimal model program for surface-like pseudolattices. Of course, it would be very interesting to find higher-dimensional analogues of this technique. For this, we need to understand well the relation between the (numerical) Grothendieck group of higher dimensional varieties and their (numerical) Chow groups. An important result in this direction is proved in a recent paper by Sergey Gorchinskiy~\cite{G}. The paper is organized as follows. In Section~\ref{section:pseudolattices} we discuss numerical Grothendieck groups of triangulated categories and define pseudolattices. We also discuss here exceptional bases of pseudolattices and their mutations. In Section~\ref{section:surface-like} we define surface-like categories and pseudolattices, provide some examples, and discuss their basic properties. In particular, we explain how one defines the rank function, the Neron--Severi lattice, and the canonical class of a surface-like pseudolattice. In Section~\ref{section:minimal} we define minimality and geometricity, and classify minimal geometric surface-like pseudolattices with an exceptional basis. In course of classification we associate with an exceptional basis a toric system and construct from it a fan in a rank 2 lattice, giving rise to a toric surface. Finally, in section~\ref{section:mmp} we define a contraction of a pseudolattice with respect to a zero rank exceptional vector, and via a minimal model program deduce the main results of the paper from the classification results of the previous section. Besides that, we define the defect of a pseudolattice and investigate its behavior under contractions. After the first version of this paper was published, I was informed about a paper of Louis de Thanhoffer de Volcsey and Michel Van den Bergh~\cite{dTVdB}, where a very similar categorical framework was introduced. 
In particular, in~\cite{dTVdB} a notion of a \emph{Serre lattice of surface type} was defined, which is almost equivalent to the notion of a surface-like pseudolattice, see Remark~\ref{remark:surface-type-like}, and some numerical notions of algebraic geometry were developed on this basis. So, the content of Section~\ref{section:surface-like} of this paper is very close to the content of~\cite[Section~3]{dTVdB}. {\bf Acknowledgements.} It should be clear from the above that the paper owes its existence to the work of Markus Perling. I would also like to thank Sergey Gorchinskiy for very useful discussions. I am very grateful to Pieter Belmans and Michel Van den Bergh for informing me about the paper~\cite{dTVdB}. \section{Numerical Grothendieck groups and pseudolattices}\label{section:pseudolattices} Let $\cT$ be a saturated (i.e., smooth and proper) $\kk$-linear triangulated category, where $\kk$ is a field. Let $\KGr(\cT)$ be the Grothendieck group of $\cT$ and $\chi:\KGr(\cT) \otimes \KGr(\cT) \to \ZZ$ the Euler bilinear form: \begin{equation*} \chi(F_1,F_2) = \sum (-1)^i \dim \Hom(F_1,F_2[i]). \end{equation*} In general the form $\chi$ is neither symmetric, nor skew-symmetric; however, it is symmetrized by the Serre functor $\bS \colon \cT \to \cT$ of $\cT$, i.e., we have \begin{equation*} \chi(F_1,F_2) = \chi(F_2,\bS(F_1)). \end{equation*} Since the Serre functor is an autoequivalence, it follows that the left kernel of $\chi$ coincides with its right kernel. We denote the quotient by \begin{equation*} \NGr(\cT) := \KGr(\cT)/\Ker\chi; \end{equation*} it is called the {\sf numerical Grothendieck group} of $\cT$. The numerical Grothendieck group $\NGr(\cT)$ is torsion-free (any torsion element would be in the kernel of $\chi$) and finitely generated \cite{E}, hence a free abelian group of finite rank. The form $\chi$ induces a nondegenerate bilinear form on~$\NGr(\cT)$ which we also denote by $\chi$. This form $\chi$ is still neither symmetric, nor skew-symmetric. \subsection{Pseudolattices} For the purposes of this paper we want to axiomatize the above situation. \begin{definition} A {\sf pseudolattice} is a finitely generated free abelian group $\mathrm{G}$ equipped with a nondegenerate bilinear form $\chi \colon \mathrm{G} \otimes \mathrm{G} \to \ZZ$. A pseudolattice $(\mathrm{G},\chi)$ is {\sf unimodular} if the form $\chi$ induces an isomorphism $\mathrm{G} \to \mathrm{G}^\vee$. An {\sf isometry} of pseudolattices $(\mathrm{G},\chi)$ and $(\mathrm{G}',\chi')$ is an isomorphism of abelian groups $f \colon \mathrm{G} \to \mathrm{G}'$ such that $\chi'(f(v_1), f(v_2)) = \chi(v_1,v_2)$ for all $v_1,v_2 \in \mathrm{G}$. \end{definition} For any $v_0 \in \mathrm{G}$ we define \begin{equation*} v_0^\perp = \{ v \in \mathrm{G} \mid \chi (v_0,v) = 0 \}, \qquad {}^\perp v_0 = \{ v \in \mathrm{G} \mid \chi (v,v_0) = 0 \}, \end{equation*} the {\sf right} and the {\sf left orthogonal} complements of $v_0$ in $\mathrm{G}$. We say that a pseudolattice $(\mathrm{G},\chi)$ {\sf has a Serre operator} if there is an automorphism $\rS_\mathrm{G} \colon \mathrm{G} \to \mathrm{G}$ such that \begin{equation*} \chi(v_1,v_2) = \chi(v_2,\rS_\mathrm{G}(v_1)) \qquad\text{for all $v_1,v_2 \in \mathrm{G}$}. \end{equation*} Of course, a Serre operator is unique, and if $\mathrm{G}$ is unimodular then $\rS_\mathrm{G} = (\chi^{-1})^T \circ \chi$ is the Serre operator. Also it is clear that if $\mathrm{G} = \NGr(\cT)$ and $\cT$ admits a Serre functor then the induced operator on $\mathrm{G}$ is the Serre operator.
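In coordinates (with the convention, used only for this illustration, that elements of $\mathrm{G}$ are written as column vectors and $A$ denotes the Gram matrix $A_{ij} = \chi(e_i,e_j)$ in a chosen basis), the defining relation reads $v_1^T A v_2 = v_2^T A\, \rS_\mathrm{G} v_1$, i.e. $A = \rS_\mathrm{G}^T A^T$, so that for a unimodular pseudolattice
\begin{equation*}
\rS_\mathrm{G} = A^{-1} A^T.
\end{equation*}
For instance, for $\mathrm{G} = \NGr(\bD(\PP^2))$ with the Gram matrix $\chi_{\PP^2}$ of Example~\ref{example:p2-p1p1} below one gets
\begin{equation*}
\rS_\mathrm{G} =
\begin{pmatrix} 1 & -3 & 3 \\ 0 & 1 & -3 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 6 & 3 & 1 \end{pmatrix}
= \begin{pmatrix} 10 & 6 & 3 \\ -15 & -8 & -3 \\ 6 & 3 & 1 \end{pmatrix},
\end{equation*}
which sends $[\cO_{\PP^2}]$ to $10[\cO_{\PP^2}] - 15[\cO_{\PP^2}(1)] + 6[\cO_{\PP^2}(2)] = [\cO_{\PP^2}(-3)]$, as expected from the Serre functor $- \otimes \omega_{\PP^2}[2]$.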
The notion of a pseudolattice with a Serre operator is equivalent to the notion of Serre lattice of~\cite{dTVdB}. We denote by \begin{equation*} \chi_+ := \chi + \chi^T, \qquad \chi_- := \chi - \chi^T \end{equation*} the symmetrization and skew-symmetrization of the form $\chi$ on $\mathrm{G}$. If $\cT$ has a Serre operator, these forms can be written as \begin{equation}\label{eq:serre-symmetrize} \chi_+(v_1,v_2) = \chi(v_1, (1 + \rS_\mathrm{G})v_2), \qquad \chi_-(v_1,v_2) = \chi(v_1, (1 - \rS_\mathrm{G})v_2). \end{equation} \subsection{Exceptional collections and mutations} In this section we discuss a pseudolattice version of exceptional collections and mutations. \begin{definition}\label{def:exceptional} An element $\ce \in \mathrm{G}$ is {\sf exceptional} if $\chi(\ce,\ce) = 1$. A sequence of elements $(\ce_1,\ce_2,\dots,\ce_n)$ is {\sf exceptional} if each $\ce_i$ is exceptional and $\chi(\ce_i,\ce_j) = 0$ for all $i > j$ ({\sf semiorthogonality}). An {\sf exceptional basis} is an exceptional sequence in $\mathrm{G}$ that is its basis. \end{definition} Of course, when $\mathrm{G} = \NGr(\cT)$, the class of an exceptional object in $\cT$ is exceptional, and the classes of elements of an exceptional collection in $\cT$ form an exceptional sequence in $\mathrm{G}$, which is an exceptional basis if the collection is full. \begin{lemma}\label{lemma:chi-unimodular} If $\mathrm{G}$ has an exceptional sequence $\ce_1,\dots,\ce_n$ of length $n = \rk \mathrm{G}$ then $\ce_1,\dots,\ce_n$ is an exceptional basis and $\mathrm{G}$ is unimodular. \end{lemma} \begin{proof} Consider the composition of maps \begin{equation*} \ZZ^n \xrightarrow{\ (\ce_1,\dots,\ce_n)\ } \mathrm{G} \xrightarrow{\quad\chi\quad} \mathrm{G}^\vee \xrightarrow{\ (\ce_1,\dots,\ce_n)^T\ } \ZZ^n. \end{equation*} The composition is given by the Gram matrix of the form $\chi$ on the set of vectors $\ce_1,\dots,\ce_n$ which by definition of an exceptional sequence is upper-triangular with units on the diagonal, hence is an isomorphism. It follows that the first map is injective and the last is surjective. Since $\rk(\mathrm{G}^\vee) = \rk(\mathrm{G}) = n$ and $\mathrm{G}^\vee$ is torsion-free, the last map is an isomorphism, hence so is the first map. Thus $\ce_1,\dots,\ce_n$ is an exceptional basis in $\mathrm{G}$. Moreover, it follows that $\chi$ is an isomorphism, hence $\mathrm{G}$ is unimodular. \end{proof} Assume $\ce \in \mathrm{G}$ is exceptional. \begin{definition}\label{def:mutation} The {\sf left} (resp.\ {\sf right}) {\sf mutation} with respect to $\ce$ is an endomorphism of $\mathrm{G}$ defined by \begin{equation}\label{eq:mutations} \LL_\ce(v) := v - \chi(\ce,v)\ce, \qquad \RR_\ce(v) := v - \chi(v,\ce)\ce. \end{equation} \end{definition} In fact, $\LL_\ce$ is just the projection onto the right orthogonal $\ce^\perp$ (and similarly for $\RR_\ce$). In particular, the mutations kill $\ce$ and define mutually inverse isomorphisms of the orthogonals \begin{equation*} \xymatrix@1@C=5em{ {}^\perp\ce\ \ar@<.5ex>[r]^{\LL_\ce} & \ \ce^\perp \ar@<.5ex>[l]^{\RR_\ce}. 
} \end{equation*} Moreover, it is easy to see that given an exceptional sequence $\ce_\bullet = (\ce_1,\dots,\ce_n)$ in $\mathrm{G}$, the sequences \begin{equation*} \begin{aligned} \LL_{i,i+1}(\ce_\bullet) &:= (\ce_1, \dots, \ce_{i-1}, && \LL_{\ce_i}(\ce_{i+1}),\ \ \, \ce_i, && \ce_{i+2}, \dots, \ce_n),\\ \RR_{i,i+1}(\ce_\bullet) &:= (\ce_1, \dots, \ce_{i-1}, && \ce_{i+1}, \RR_{\ce_{i+1}}(\ce_{i}), && \ce_{i+2}, \dots, \ce_n) \end{aligned} \end{equation*} are exceptional, and these two operations are mutually inverse. If $\mathrm{G}$ has a Serre operator, every exceptional sequence $\ce_1,\dots,\ce_n$ can be extended to an infinite sequence $\{\ce_i\}_{i \in \ZZ}$ by the rule \begin{equation*} \ce_i = \rS_\mathrm{G}(\ce_{i+n}). \end{equation*} This sequence is called a {\sf helix}. Its main property is that for any $k \in \ZZ$ the sequence $\ce_{k+1},\dots,\ce_{k+n}$ is an exceptional sequence, generating the same helix (up to an index shift). If $\ce_1,\dots,\ce_n$ is an exceptional basis then \begin{equation*} \ce_0 = (\LL_{\ce_1} \circ \dots \circ \LL_{\ce_{n-1}})(\ce_n), \qquad \ce_{n+1} = (\RR_{\ce_n} \circ \dots \circ \RR_{\ce_{2}})(\ce_1), \end{equation*} and it follows that every element of the helix can be obtained from the original basis by mutations. Mutations $\LL_{i,i+1}$ and $\RR_{i,i+1}$ (with $i$ now being an arbitrary integer) can be also defined for helices. \section{Surface-like categories and pseudolattices}\label{section:surface-like} \subsection{Definition and examples} The next is the main definition of the paper. \begin{definition}[\protect{cf.~\cite[Definition~3.2.1]{dTVdB}}] \label{def:surface-like} We say that a pseudolattice $(\mathrm{G},\chi)$ is {\sf surface-like} if there is a primitive element $\bp \in \mathrm{G}$ such that \begin{enumerate} \item $\chi(\bp,\bp) = 0$, \item $\chi_-(\bp,-) = 0$ (i.e., $\chi(\bp,v) = \chi(v,\bp)$ for any $v \in \mathrm{G}$), \item the form $\chi_-$ vanishes on the subgroup $\bp^\perp = {}^\perp\bp \subset \mathrm{G}$ (i.e., $\chi$ is symmetric on $\bp^\perp$). \end{enumerate} An element $\bp$ as above is called a {\sf point-like element} in $\mathrm{G}$. We say that a smooth and proper triangulated category $\cT$ is {\sf surface-like} if its numerical Grothendieck group $(\NGr(\cT),\chi)$ is surface-like (with some choice of a point-like element). \end{definition} \begin{remark}\label{remark:surface-type-like} A Serre operator $\rS_\mathrm{G}$ of a surface-like pseudolattice, if exists, is unipotent by Corollary~\ref{corollary:serre-unipotent}, and by~\eqref{eq:serre-symmetrize} and nondegeneracy of~$\chi$ the rank of $\rS_\mathrm{G} - 1$ does not exceed 2, hence a surface-like pseudolattice with a Serre operator is a Serre lattice of surface type as defined in~\cite[Definition~3.2.1]{dTVdB}. Conversely, as a combination of Lemma~\ref{lemma:surface-like-criterion} below and~\cite[Lemma~3.3.2]{dTVdB} shows, a Serre lattice of surface* type is a surface-like pseudolattice (with a point-like element being a primitive generator of the smallest piece of the numerical codimension filtration of~\cite[Section~3.3]{dTVdB}). However, again by Lemma~\ref{lemma:surface-like-criterion}, a Serre lattice of surface type which is not of surface* type is surface-like only if $\chi$ has an isotropic vector. 
\end{remark} If we choose a basis $v_0,\dots,v_{n-1}$ in $\mathrm{G}$ such that $v_{n-1} = \bp$ and $\bp^\perp = \langle v_1,\dots,v_{n-1} \rangle$ then the above definition is equivalent to the fact that the Gram matrix of the bilinear form $\chi$ takes the following form: \begin{equation}\label{eq:chi-matrix} \chi(v_i,v_j) = \begin{pmatrix} a & b_1 & \dots & b_{n-2} & d \\ b'_1 & c_{11} & \dots & c_{1,n-2} & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ b'_{n-2} & c_{n-2,1} & \dots & c_{n-2,n-2} & 0 \\ d & 0 & \dots & 0 & 0 \end{pmatrix} \end{equation} with the submatrix $(c_{ij})$ being symmetric. The following is a useful reformulation of Definition~\ref{def:surface-like}. \begin{lemma}\label{lemma:surface-like-criterion} A pseudolattice $\mathrm{G}$ is surface-like if and only if one of the following two cases takes place: \begin{enumerate}\renewcommand{\theenumi}{\alph{enumi}} \item\label{item:cy} either $\chi_- = 0$ (i.e., the form $\chi$ is symmetric), and $\chi$ has an isotropic vector; \item\label{item:r2} or the rank of $\chi_-$ equals $2$, and the restriction $\chi\vert_{\Ker \chi_-}$ is degenerate. \end{enumerate} In case~\eqref{item:cy} an element $\bp \in \mathrm{G}$ is point-like if and only if it is isotropic, i.e., $\chi(\bp,\bp) = 0$. In case~\eqref{item:r2} an element $\bp \in \mathrm{G}$ is point-like if and only if $\bp \in \Ker(\chi\vert_{\Ker \chi_-})$. \end{lemma} \begin{proof} Assume $\mathrm{G}$ is surface-like. If $\chi_- = 0$, we are in case~\eqref{item:cy}; then $\bp$ is isotropic by Definition~\ref{def:surface-like}(1). Otherwise the rank of $\chi_-$ equals~2, since $\chi_-$ vanishes on a hyperplane $\bp^\perp \subset \mathrm{G}$ by Definition~\ref{def:surface-like}(3), and moreover $\Ker\chi_- \subset \bp^\perp$. Furthermore, $\bp \in \Ker \chi_-$ by Definition~\ref{def:surface-like}(2), and since $\Ker \chi_- \subset \bp^\perp$ we have $\chi(\Ker \chi_-, \bp) = 0$, hence $\bp$ is in the kernel of the restriction $\chi\vert_{\Ker\chi_-}$. Conversely, if~\eqref{item:cy} holds and $\bp$ is isotropic, then $\chi_- = 0$ and clearly Definition~\ref{def:surface-like} holds. Similarly, if~\eqref{item:r2} holds and $\bp \in \Ker (\chi\vert_{\Ker \chi_-})$, then parts (1) and (2) of Definition~\ref{def:surface-like} hold, and since $\bp^\perp$ contains $\Ker\chi_-$ as a hyperplane, part (3) also holds. \end{proof} The above argument also shows that when $\chi_- \ne 0$, part (1) of Definition~\ref{def:surface-like} follows from parts (2) and (3). Note that the restriction on the rank of $\chi_-$ also appeared recently in~\cite{BR1}. Let us give some examples. \begin{example}\label{example:cy} Let $\cT$ be a smooth and proper Calabi--Yau category of even dimension (i.e., its Serre functor is a shift $\bS_\cT \cong [k]$ with even $k$), and let $P \in \cT$ be a point object, i.e., an object such that $\Ext^\bullet(P,P)$ is an exterior algebra on $\Ext^1(P,P)$. Then $\cT$ is a surface-like category and $\mathrm{G} = \NGr(\cT)$ is a surface-like pseudolattice with $[P]$~being a point-like element, since the Euler form is symmetric and so we are in case~\eqref{item:cy} of Lemma~\ref{lemma:surface-like-criterion}. \end{example} For us, however, the main example is the next one. \begin{example}\label{example:standard} Let $X$ be a smooth projective surface, set $\cT = \bD(X)$ to be the bounded derived category of coherent sheaves on $X$, and let $\mathrm{G} = \NGr(\cT) = \NGr(X)$.
The topological filtration $0 \subset F^2\Gro(X) \subset F^1\Gro(X) \subset F^0\Gro(X) = \Gro(X)$ of the Grothendieck group $\Gro(X)$ (by codimension of support) induces a filtration $0 \subset \mathrm{G}_2 \subset \mathrm{G}_1 \subset \mathrm{G}_0 = \mathrm{G}$ on the numerical Grothendieck group. Consider the maps \begin{equation*}\arraycolsep=.1em \begin{array}{rll} \rr & \colon \mathrm{G}_0 / \mathrm{G}_1 & \xrightarrow{\ \ } \ZZ, \\ \rc_1 & \colon \mathrm{G}_1 / \mathrm{G}_2 & \xrightarrow{\ \ \ } \NS(X), \\ \rs & \colon \mathrm{G}_2 & \xrightarrow{\ \ \ } \ZZ, \end{array} \end{equation*} given by the rank, the first Chern class, and the Euler characteristic, where $\NS(X)$ is the numerical Neron--Severi group of $X$ (in particular, we quotient out the torsion in $\CH^1(X)$). The rank map is clearly an isomorphism, and so is $\rc_1$ (the surjectivity of $\rc_1$ follows from the surjectivity of $F^1\!K_0(X) \to \CH^1(X)$, and for injectivity it is enough to note that if $\cL$ is a line bundle such that $\rc_1(\cL)$ is numerically equivalent to zero, then by Riemann--Roch $[\cL] = [\cO_X]$ in $\NGr(\bD(X))$). It is also clear that $\rs$ is injective, so if we normalize it by dividing by the minimal degree of a $0$-cycle on $X$, the obtained map is an isomorphism. Considering~$\rc_1$ as a linear map $\mathrm{G} \to \NS(X)$ in a standard way, and extending linearly the normalized Euler characteristic map to a map~$\tilde{\rs} \colon \mathrm{G} \to \ZZ$ (if $X$ has a 0-cycle of degree 1, we can take $\tilde\rs$ to be the Euler characteristic map), we obtain an isomorphism \begin{equation*} \mathrm{G} \xrightarrow{\ \sim\ } \ZZ \oplus \NS(X) \oplus \ZZ, \qquad v \mapsto (\rr(v), \rc_1(v), \tilde{\rs}(v)). \end{equation*} A simple Riemann--Roch computation shows that the form $\chi_-$ is given by \begin{equation*} \chi_-((r_1,D_1,s_1),(r_2,D_2,s_2)) = r_1 (K_X\cdot D_2) - r_2(K_X \cdot D_1), \end{equation*} where $K_X \in \NS(X)$ is the canonical class of $X$ and $\cdot$ stands for the intersection pairing on $\NS(X)$. The kernel of $\chi_-$ is spanned by all $(0,D,s) \in \ZZ \oplus \NS(X) \oplus \ZZ$ such that $K_X \cdot D = 0$, in particular, the rank of $\chi_-$ is 2. Furthermore, by Riemann--Roch we have \begin{equation}\label{eq:chi-surface} \chi((0,D_1,s_1),(0,D_2,s_2)) = - D_1 \cdot D_2. \end{equation} Therefore, $\bp_X := (0,0,1)$ is contained in the kernel of $\chi\vert_{\Ker\chi_-}$, and since the intersection pairing on the numerical Neron--Severi group $\NS(X)$ is nondegenerate, $\bp_X$ generates this kernel unless $K_X^2 = 0$. In the latter case, the kernel of $\chi\vert_{\Ker\chi_-}$ is generated by $\bp_X$ and $K_X$. In particular, Lemma~\ref{lemma:surface-like-criterion}\eqref{item:r2} shows that $\mathrm{G}$ is surface-like with $\bp_X$ a point-like class, and that $\bp_X$ is the unique point-like class unless $K_X^2 = 0$. \end{example} We say that a surface-like category $\cT$ (resp.\ pseudolattice $\mathrm{G}$) is {\sf standard}, if $(\cT,\bp) = (\bD(X),\bp_X)$ for a surface $X$ (resp.\ $(\mathrm{G},\bp) = (\NGr(\bD(X)),\bp_X)$). Next example shows that even a standard surface-like pseudolattice of Example~\ref{example:standard} may have sometimes a non-standard surface-like structure. \begin{example} Assume again $X$ is a smooth projective surface, $\cT = \bD(X)$, and $\mathrm{G} = \NGr(\cT)$. In the notation of Example~\ref{example:standard} set $\bp_K := (0,K_X,0)$. 
If $K_X^2 = 0$ then $\bp_K$ is a point-like element, that gives a different surface-like structure on the pseudolattice $\mathrm{G}$ and the category $\bD(X)$. \end{example} Further we will need an explicit form of a standard pseudolattice for surfaces $X = \PP^2$ and $X = \PP^1 \times \PP^1$, and $X = \FF_1$ (the Hirzebruch ruled surface). \begin{example}\label{example:p2-p1p1} If $X = \PP^2$ then the classes of the sheaves $(\cO_{\PP^2},\cO_{\PP^2}(1),\cO_{\PP^2}(2))$ form an exceptional basis in which the Gram matrix of the Euler form looks like \begin{equation*} \chi_{\PP^2} = \left(\begin{smallmatrix} 1 & 3 & 6 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{smallmatrix}\right). \end{equation*} If $X = \PP^1 \times \PP^1$ for each $c \in \ZZ$ the classes of the sheaves $(\cO_{\PP^1 \times \PP^1},\cO_{\PP^1 \times \PP^1}(1,0),\cO_{\PP^1 \times \PP^1}(c,1),\cO_{\PP^1 \times \PP^1}(c+1,1))$ form an exceptional basis in which the Gram matrix of the Euler form looks like \begin{equation*} \chi_{\PP^1 \times \PP^1} = \left( \begin{smallmatrix} 1 & 2 & 2c + 2 & 2c + 4 \\ 0 & 1 & 2c & 2c + 2 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{smallmatrix} \right). \end{equation*} If $X = \FF_1$ for each $c \in \ZZ$ the classes of the sheaves $(\cO_{\FF^1},\cO_{\FF_1}(f),\cO_{\FF_1}(s+cf),\cO_{\FF_1}(s+(c+1)f))$ (where $f$ is the class of a fiber and $s$ is the class of the $(-1)$-section) form an exceptional basis in which the Gram matrix of the Euler form looks like \begin{equation*} \chi_{\FF_1} = \left( \begin{smallmatrix} 1 & 2 & 2c + 1 & 2c + 3 \\ 0 & 1 & 2c-1 & 2c + 1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{smallmatrix} \right). \end{equation*} Note that the left mutation of $\cO_{\FF_1}(s+(c+1)f)$ through $\cO_{\FF_1}(s+cf)$ is isomorphic to $\cO_{\FF_1}(s+(c-1)f)$ (up to a shift, hence all such exceptional collections are mutation-equivalent. Note also, that in case $c = 1$ the left mutation of $\cO_{\FF_1}(s+f)$ through $\cO_{\FF_1}(f)$ is isomorphic to the structure sheaf of the $(-1)$-section, in particular its rank is zero. \end{example} The next examples come from non-commutative geometry. \begin{example} Let $\cT$ be the derived category of a non-commutative projective plane (see, for instance, \cite{KKO}). Then the numerical Grothendieck group $\NGr(\cT)$ is isometric to $\NGr(\PP^2)$ (the numerical Grothendieck group of a commutative plane), hence $\cT$ is a surface-like category. The same applies to noncommutative $\PP^1 \times \PP^1$ and other noncommutative deformations of surfaces. \end{example} From this perspective it would be interesting to classify directed quivers whose derived categories of representations are surface-like (for quivers with at most four vertices this is done in~\cite{dTVdB}). Note that each of these categories can be realized as a semiorthogonal component of the derived category of a smooth projective variety \cite{Or2}, see also~\cite{BR2}. More generally, one can ask when a gluing \cite{KL,Or1,Or3} of two triangulated categories is surface-like. \begin{example}\label{example:brauer} Let $X$ be a smooth projective complex K3 surface and $\beta \in \Br(X)$ an element in the Brauer group. Let $\cT = \bD(X,\beta)$ be the twisted derived category. Its numerical Grothendieck group $\NGr(X,\beta)$ can be described as follows (see \cite[Section~1]{HS} for details). Analyzing the cohomology in the exponential sequence, one can write the Brauer group as an extension \begin{equation*} 0 \to \frac{H^2(X,\QQ)}{\NS(X)_\QQ + H^2(X,\ZZ)} \to \Br(X) \to H^3(X,\ZZ)_{\mathrm{torsion}} \to 0. 
\end{equation*} For a K3 surface the right term vanishes, hence a Brauer class $\beta$ can be represented by a rational cohomology class $B \in H^2(X,\QQ)$ (called a B-field). Then \begin{equation*} \NGr(X,\beta) = \{ (r,D,s) \in \QQ \oplus \NS(X)_\QQ \oplus \QQ \mid r \in \ZZ,\ D + r B \in \NS(X),\ s + D \cdot B + r B^2/2 \in \ZZ \}, \end{equation*} and the Euler form is given by the Mukai pairing $\chi((r_1,D_1,s_1),(r_2,D_2,s_2)) = r_1s_2 - D_1\cdot D_2 + s_1r_2$. In this case $\bp_X = (0,0,1)$ is still a point-like element. \end{example} See other examples of surface-like categories in~\cite{BP,dTP}. \subsection{Rank function and Neron--Severi lattice} Assume $(\mathrm{G},\bp)$ is a surface-like pseudolattice and let~$\bp \in \mathrm{G}$ be its point-like element. The linear functions $\chi(\bp,-)$ and $\chi(-,\bp)$ on $\mathrm{G}$ coincide (by Definition~\ref{def:surface-like}(2)). We define the {\sf rank function} associated with the point-like element $\bp$ by \begin{equation*} \br(-) := \chi(\bp,-) = \chi(-,\bp). \end{equation*} Note that $\bp^\perp = {}^\perp\bp = \Ker\br$. \begin{lemma}\label{lemma:ns} If $\mathrm{G}$ is a surface-like pseudolattice and $\bp$ is its point-like element, there is a complex \begin{equation}\label{eq:rp-p} \ZZ \xrightarrow{\ \bp\ } \mathrm{G} \xrightarrow{\ \br\ } \ZZ \end{equation} with injective $\bp$ and, if $\mathrm{G}$ is unimodular, surjective $\br$. Its middle cohomology \begin{equation}\label{eq:def-ns} \NS(\mathrm{G}) := \bp^\perp/\bp \end{equation} is a finitely generated free abelian group of rank $\rk(\mathrm{G}) - 2$. \end{lemma} \begin{proof} We have $\br(\bp) = \chi(\bp,\bp) = 0$ by Definition~\ref{def:surface-like}(1), hence~\eqref{eq:rp-p} is a complex. The first map in~\eqref{eq:rp-p} is injective since $\mathrm{G}$ is torsion-free. The second map is nonzero, since $\chi$ is nondegenerate on~$\mathrm{G}$. If its image is $d\ZZ \subset \ZZ$ with $d \ge 2$ (this, up to a sign, is the same $d$ as in~\eqref{eq:chi-matrix}), then $\frac1d \br$ is a well defined element of~$\mathrm{G}^\vee$. If $\mathrm{G}$ is unimodular, then there is an element $v \in \mathrm{G}$ such that $d \cdot \chi(v,-) = \chi(\bp,-)$. Therefore $\bp - d \cdot v$ is in the kernel of $\chi$, hence is zero, hence $\bp$ is not primitive. This contradiction shows that $\br$ is surjective for unimodular $\mathrm{G}$. The group $\bp^\perp$ is torsion-free of rank $\rk(\mathrm{G}) - 1$ since $\br \ne 0$, and the group $\NS(\mathrm{G})$ is torsion-free of rank $\rk(\mathrm{G}) - 2$ since $\bp$ is primitive. \end{proof} The unimodularity assumption is not necessary for the surjectivity of $\br$, however $\br$ is not surjective in general. Indeed, if $(\mathrm{G},\bp_X)$ is the surface-like pseudolattice of Example~\ref{example:brauer}, the index of $\br(\mathrm{G})$ in $\ZZ$ equals the order of $\beta$ in the Brauer group $\Br(X)$. The group $\NS(\mathrm{G})$ defined in~\eqref{eq:def-ns} is called {\sf Neron--Severi} group of a surface-like pseudolattice $\mathrm{G}$. \begin{lemma}\label{lemma:chi-ker-r} If $\mathrm{G}$ is a surface-like pseudolattice and $\bp$ is its point-like element, then the form $\chi$ induces a nondegenerate symmetric bilinear form on $\NS(\mathrm{G})$. 
In other words, there is a unique nondegenerate symmetric form $q \colon \Sym^2 (\NS(\mathrm{G})) \to \ZZ$ such that the diagram \begin{equation*} \xymatrix{ \bp^\perp \otimes \bp^\perp \ar@{->>}[rr] \ar[dr]_\chi && \NS(\mathrm{G}) \otimes \NS(\mathrm{G}) \ar[dl]^{-q} \\ & \ZZ } \end{equation*} is commutative. The lattice $\mathrm{G}$ is unimodular if and only if $\br$ is surjective and $q$ is unimodular. \end{lemma} Note the sign convention chosen in order to make $q$ equal to the intersection pairing in the standard example (see~\eqref{eq:chi-surface}). In terms of~\eqref{eq:chi-matrix}, the middle symmetric submatrix $(c_{ij})$ of $\chi$ is the Gram matrix of $-q$. \begin{proof} Recall that $\chi_-$ vanishes on $\bp^\perp$ by Definition~\ref{def:surface-like}(3), hence $\chi$ is symmetric on $\bp^\perp$. Furthermore, if $v \in \bp^\perp$ then $\chi(\bp,v) = \br(v) = 0$, hence $\bp$ is in the kernel of $\chi$ on $\bp^\perp$, and thus $\chi$ induces a symmetric form $q$ on $\NS(\mathrm{G})$ such that the above diagram commutes. Finally, consider the diagram \begin{equation*} \xymatrix{ \ZZ \ar@{=}[d] \ar[r]^-\bp & \mathrm{G} \ar[r]^-{\br} \ar[d]^\chi & \ZZ \ar@{=}[d] \\ \ZZ \ar[r]^-{\br} & \mathrm{G}^\vee \ar[r]^-{\bp} & \ZZ } \end{equation*} which is commutative by definition of $\br$, hence is a morphism of complexes. The map induced on the middle cohomologies of complexes $\NS(\mathrm{G}) \to \NS(\mathrm{G})^\vee$ coincides with the map $-q$ by the definition of the latter. Nondegeneracy of $\chi$ implies injectivity of that map, hence $q$ is nondegenerate. If~$\mathrm{G}$ is unimodular then the middle vertical arrow is an isomorphism, hence the induced map $-q$ on the cohomologies is an isomorphism, hence $q$ is unimodular. On the other hand, $\br$ is surjective by Lemma~\ref{lemma:ns}. Conversely, if $\br$ is surjective, then the Snake Lemma implies that the cokernels of $\chi \colon \mathrm{G} \to \mathrm{G}^\vee$ and $q \colon \NS(\mathrm{G}) \to \NS(\mathrm{G}^\vee)$ are isomorphic, so if $q$ is unimodular, so is $\chi$. \end{proof} The lattice $(\NS(\mathrm{G}),q)$ is called {\sf Neron--Severi lattice} of a surface-like pseudolattice $\mathrm{G}$. The filtration $0 \subset \ZZ\bp \subset \bp^\perp \subset \mathrm{G}$ can be thought of as an analog of the topological filtration on $\mathrm{G}$. In Corollary~\ref{corollary:serre-unipotent} below we show that the Serre operator of $\mathrm{G}$ (if exists) preserves this filtration and acts on its factors as the identity. \subsection{Canonical class}\label{subsection:canonical-class} In this section we show how one can define the canonical class of a surface-like pseudolattice. It is always well defined as a linear function on $\NS(\mathrm{G})$, or as an element of $\NS(\mathrm{G}) \otimes \QQ$, and under unimodularity assumption, also as an element of $\NS(\mathrm{G})$. The rank map induces a map \begin{equation}\label{eq:def-lambda} \bl \colon \bw2 \mathrm{G} \to \bp^\perp, \qquad F_1 \wedge F_2 \mapsto \br(F_1)F_2 - \br(F_2)F_1. \end{equation} Its kernel is $\bw2(\bp^\perp)$, and if the rank map is surjective, the map $\bl$ is surjective too. \begin{lemma}\label{lemma:canonical-class} There is a unique element $K_\mathrm{G} \in \NS(\mathrm{G}) \otimes \QQ$ such that for all $v_1,v_2 \in \mathrm{G}$ we have \begin{equation}\label{eq:chi-minus-k} \chi_-(v_1,v_2) = \chi(v_1,v_2) - \chi(v_2,v_1) = -q(K_\mathrm{G},\bl(v_1,v_2)). \end{equation} If $\mathrm{G}$ is unimodular, then $K_\mathrm{G} \in \NS(\mathrm{G})$ is integral. 
\end{lemma} \begin{proof} Denote by $\bar\bl \colon \bw2\mathrm{G} \to \NS(\mathrm{G})$ the composition of $\bl$ with the projection $\bp^\perp \to \NS(\mathrm{G})$. Consider the diagram \begin{equation*} \xymatrix{ \bw2\mathrm{G} \ar@{->>}[r]^-\bl \ar[d]_{\chi_-} & \Ima (\bl) \ar@{->>}[r] \ar@{-->}[dl] & \Ima(\bar\bl) \ar@{^{(}->}[r] \ar@{..>}[dll] & \NS(\mathrm{G}) \\ \ZZ } \end{equation*} Our goal is to extend the map $\chi_-$ to a linear map from $\NS(\mathrm{G})$. First, let us show that it extends to a dashed arrow. For this it is enough to note that $\chi_-$ vanishes on $\Ker(\bl) = \bw2(\bp^\perp)$ by Definition~\ref{def:surface-like}(3). Next, to construct a dotted arrow it is enough to check that the dashed arrow vanishes on the kernel of the map $\Ima(\bl) \twoheadrightarrow \Ima(\bar\bl)$. Clearly, this kernel is generated by an appropriate multiple of $\bp$. If $v \in \mathrm{G}$ is such that $\br(v) \ne 0$, then $\bl(v \wedge \bp) = \br(v)\bp$, and $\chi_-(v,\bp) = 0$ by Definition~\ref{def:surface-like}(2), hence the dashed arrow vanishes on $\br(v)\bp$. Finally, $\Ima(\bl) \subset \bp^\perp$ is a subgroup of finite index, hence $\Ima(\bar\bl) \subset \NS(\mathrm{G})$ is a subgroup of finite index, hence a linear map from $\Ima(\bar\bl)$ to $\ZZ$ extends uniquely to a linear map $\NS(\mathrm{G}) \to \QQ$, and since the form $q$ on $\NS(\mathrm{G})$ is nondegenerate, there is a unique element $K_\mathrm{G}$ such that~\eqref{eq:chi-minus-k} holds. If $\mathrm{G}$ is unimodular, then $\Ima(\bl) = \bp^\perp$, $\Ima(\bar\bl) = \NS(\mathrm{G})$, hence the dashed arrow is a map $\NS(\mathrm{G}) \to \ZZ$. By unimodularity of $q$, it is given by a scalar product with an integral vector $K_\mathrm{G} \in \NS(\mathrm{G})$. \end{proof} The sign convention in~\eqref{eq:chi-minus-k} is chosen in order to agree with the standard example. We call the element $K_\mathrm{G} \in \NS(\mathrm{G})$ {\sf the canonical class of $\mathrm{G}$}. If $K_\mathrm{G} \in \NS(\mathrm{G}) \subset \NS(\mathrm{G}) \otimes \QQ$, we say that the canonical class of~$\mathrm{G}$ is {\sf integral}. In terms of~\eqref{eq:chi-matrix} the canonical class is given by the formula $K_\mathrm{G}\cdot v_i = (b'_i - b_i)/d$. \subsection{Serre operator} Assume a surface-like pseudolattice has a Serre operator $\rS_\mathrm{G}$. \begin{lemma}\label{lemma:serre-p} Any point-like element in a surface-like pseudolattice is fixed by the Serre operator, i.e., $\rS_\mathrm{G} (\bp) = \bp$. Analogously, $\rS_\mathrm{G}$ fixes the corresponding rank function, i.e. $\br (\rS_\mathrm{G}(-)) = \br(-)$. \end{lemma} \begin{proof} Since $\chi$ is nondegenerate, \eqref{eq:serre-symmetrize} implies $\Ker \chi_- = \Ker(1 - \rS_\mathrm{G})$. Therefore it follows from Definition~\ref{def:surface-like}(2) that $\rS_\mathrm{G}(\bp) = \bp$. \end{proof} It follows that the Serre operator induces an automorphism of complex~\eqref{eq:rp-p}, hence an automorphism of the Neron--Severi lattice $\NS(\mathrm{G})$. \begin{lemma}\label{lemma:serre-identity} The automorphism of the group $\NS(\mathrm{G})$ induced by the Serre operator is the identity map. \end{lemma} \begin{proof} We have to check that $1-\rS_\mathrm{G}$ acts by zero on $\NS(\mathrm{G})$, or equivalently, that it takes $\bp^\perp$ to $\ZZ\bp$. For this note that if $v_1,v_2 \in \bp^\perp$ then \begin{equation*} \chi(v_1,(1-\rS_\mathrm{G})v_2) = \chi_-(v_1,v_2) = 0 \end{equation*} by~\eqref{eq:serre-symmetrize} and Definition~\ref{def:surface-like}(3). 
On the other hand, since $\chi$ is nondegenerate, its kernel on~$\bp^\perp$ is generated by $\bp$. \end{proof} \begin{corollary}\label{corollary:serre-unipotent} If $\mathrm{G}$ is a surface-like pseudolattice then $\rS_\mathrm{G}$ is unipotent, and moreover $(1 - \rS_\mathrm{G})^3 = 0$. \end{corollary} \begin{proof} Indeed, by Lemma~\ref{lemma:serre-p} the three-step filtration $0 \subset \ZZ\bp \subset \bp^\perp \subset \mathrm{G}$ is fixed by $\rS_\mathrm{G}$, and the induced action on its factors is the identity by Lemma~\ref{lemma:serre-identity}. \end{proof} The relation of the canonical class and the Serre operator is given by the following \begin{lemma}\label{lemma:lambda-k} For any $v \in \mathrm{G}$ we have $\bl(v,\rS_\mathrm{G}(v)) = \br(v)^2K_\mathrm{G} \pmod \bp$. \end{lemma} \begin{proof} If $\br(v) = 0$ then $\br(\rS_\mathrm{G} (v)) = 0$ as well (by Lemma~\ref{lemma:serre-p}), hence both sides are zero. So we may assume $\br(v) \ne 0$. Take arbitrary $v' \in \mathrm{G}$ and let $r = \br(v)$, $r' = \br(v')$. Then \begin{multline*} q(\bl(v,v'),\bl(v,\rS_\mathrm{G} (v))) = -\chi(\bl(v,v'),\bl(v,\rS_\mathrm{G} (v))) = -\chi(rv' - r'v,r \, \rS_\mathrm{G} (v) - rv)) = \\ = r^2(\chi(v',v) - \chi(v',\rS_\mathrm{G} (v))) + rr'(\chi(v,\rS_\mathrm{G} (v)) - \chi(v,v)) = r^2(\chi(v',v) - \chi(v,v')) = r^2q(K_\mathrm{G},\bl(v,v')). \end{multline*} It follows that $\bl(v,\rS_\mathrm{G} (v)) - r^2K_\mathrm{G}$ is orthogonal to all elements of the form $\bl(v,v')$. If $v'' \in \bp^\perp$ then $\bl(v,v' + v'') = \bl(v,v') + rv''$, hence elements of this form span all $\bp^\perp$. It follows that $\bl(v,\rS_\mathrm{G} (v)) - r^2K_\mathrm{G}$ is in the kernel of $q$ on $\bp^\perp$, hence in $\QQ\bp$. \end{proof} \begin{corollary} For any $v \in \mathrm{G}$ we have $\chi(\rS_\mathrm{G} (v),v) - \chi(v,v) = \br(v)^2 q(K_\mathrm{G},K_\mathrm{G})$. \end{corollary} \begin{proof} Substitute $v_1 = \rS_\mathrm{G} v$, $v_2 = v$ into~\eqref{eq:chi-minus-k}, take into account $\chi(v,\rS_\mathrm{G} (v)) = \chi(v,v)$, use Lemma~\ref{lemma:lambda-k} and the fact that the point-like element $\bp$ is in the kernel of $q$. \end{proof} The (rational) number $q(K_\mathrm{G},K_\mathrm{G})$ should be thought of as the canonical degree of a surface-like pseudolattice. \subsection{Exceptional sequences in surface-like pseudolattices} The computations of this section are analogues of those in \cite[Section~3]{Pe}. However, the categorical approach allows to simplify them considerably. Assume $\mathrm{G}$ is a surface-like pseudolattice, $\bp$ is its point-like element, $\br$ is the corresponding rank function, $q$ is the induced quadratic form on $\NS(\mathrm{G})$, and $\bl$ is the map defined in~\eqref{eq:def-lambda}. \begin{lemma}[cf.~\protect{\cite[3.5(ii)]{Pe}}]\label{lemma:ne-pair} Assume $\ce_1, \ce_2 \in \mathrm{G}$ are exceptional with nonzero ranks. Then \begin{equation} \chi(\ce_1,\ce_2) + \chi(\ce_2,\ce_1) = \frac1{\br(\ce_1)\br(\ce_2)}(q(\bl(\ce_1,\ce_2)) + \br(\ce_1)^2 + \br(\ce_2)^2). \end{equation} \end{lemma} \begin{proof} To abbreviate the notation write $r_i = \br(\ce_i)$. Then $\bl(\ce_1,\ce_2) = r_1\ce_2 - r_2\ce_1$, hence \begin{multline*} -q(\bl(\ce_1,\ce_2)) = \chi(\bl(\ce_1,\ce_2),\bl(\ce_1,\ce_2)) = \chi(r_1\ce_2 - r_2\ce_1,r_1\ce_2 - r_2\ce_1) = \\ = r_1^2\chi(\ce_2,\ce_2) + r_2^2\chi(\ce_1,\ce_1) - r_1r_2(\chi(\ce_1,\ce_2) + \chi(\ce_2,\ce_1)) \end{multline*} (the first is by Lemma~\ref{lemma:chi-ker-r}, the second is by definition of $\bl$, and the third is by bilinearity of $\chi$). 
Using exceptionality $\chi(\ce_1,\ce_1) = \chi(\ce_2,\ce_2) = 1$, we easily deduce the required formula. \end{proof} \begin{lemma}[cf.~\protect{\cite[Proposition~3.8]{Pe}}]\label{lemma:ne-triple} Assume $(\ce_1,\ce_2)$ and $(\ce_2,\ce_3)$ are exceptional pairs in $\mathrm{G}$ and $\br(\ce_2) \ne 0$. Then $(\ce_1,\ce_3)$ is an exceptional pair if and only if $q(\bl(\ce_1,\ce_2),\bl(\ce_2,\ce_3)) = \br(\ce_1)\br(\ce_3)$. \end{lemma} \begin{proof} Set $r_i = \br(\ce_i)$. As in the previous lemma, we have \begin{multline*} q(\bl(\ce_1,\ce_2),\bl(\ce_2,\ce_3)) = q(\bl(\ce_2,\ce_3),\bl(\ce_1,\ce_2)) = -\chi(\bl(\ce_2,\ce_3),\bl(\ce_1,\ce_2)) = \\ = -\chi(r_2\ce_3 - r_3\ce_2, r_1\ce_2 - r_2\ce_1) = -r_1r_2\chi(\ce_3,\ce_2) - r_3r_2\chi(\ce_2,\ce_1) + r_3r_1\chi(\ce_2,\ce_2) + r_2^2\chi(\ce_3,\ce_1). \end{multline*} By the assumptions the first two terms in the right hand side vanish, and the third term equals $r_1r_3$. Hence $r_2^2\chi(\ce_3,\ce_1) = q(\bl(\ce_1,\ce_2),\bl(\ce_2,\ce_3)) - r_1r_3$. In particular, since $r_2$ is nonzero, $\chi(\ce_3,\ce_1)$ vanishes if and only if $q(\bl(\ce_1,\ce_2),\bl(\ce_2,\ce_3)) = r_1r_3$. \end{proof} \begin{lemma}\label{lemma:ne-quadruple} If $(\ce_1,\ce_2,\ce_3,\ce_4)$ is an exceptional sequence in $\mathrm{G}$ then $q(\bl(\ce_1,\ce_2),\bl(\ce_3,\ce_4)) = 0$. \end{lemma} \begin{proof} Set $r_i = \br(\ce_i)$. Then as before \begin{multline*} q(\bl(\ce_1,\ce_2),\bl(\ce_3,\ce_4)) = q(\bl(\ce_3,\ce_4),\bl(\ce_1,\ce_2)) = -\chi(\bl(\ce_3,\ce_4),\bl(\ce_1,\ce_2)) = \\ = -\chi(r_3\ce_4 - r_4\ce_3, r_1\ce_2 - r_2\ce_1) = -r_1r_3\chi(\ce_4,\ce_2) - r_2r_4\chi(\ce_3,\ce_1) + r_2r_3\chi(\ce_4,\ce_1) + r_1r_4\chi(\ce_3,\ce_2). \end{multline*} By the assumption, all the terms in the right hand side vanish. \end{proof} \begin{lemma}[cf.~\protect{\cite[3.3(i) and~3.4(iv)]{Pe}}]\label{lemma:ne-zero-rank} If $\br(\ce) = 0$ then $\ce$ is exceptional in $\mathrm{G}$ if and only if $q(\ce) = -1$. If $\br(\ce_1) = \br(\ce_2) = 0$ then $\chi(\ce_1,\ce_2) = 0$ if and only if $q(\ce_1,\ce_2) = 0$. \end{lemma} \begin{proof} This follows immediately from Lemma~\ref{lemma:chi-ker-r}. \end{proof} \section{Minimal surface-like pseudolattices}\label{section:minimal} Let $\mathrm{G}$ be a surface-like pseudolattice. We use the theory developed in the previous section keeping the point-like element $\bp$ implicit. Also, to simplify notation we write $D_1 \cdot D_2$ instead of $q(D_1,D_2)$ for any $D_1,D_2 \in \NS(\mathrm{G})$ and call this pairing the intersection form. We always denote the rank of $\mathrm{G}$ by $n$. \subsection{Minimality, geometricity, and norm-minimality} We start by introducing some useful notions. \begin{definition}\label{def:minimlaity} A surface-like pseudolattice $\mathrm{G}$ is {\sf minimal} if it has no exceptional elements of zero rank. \end{definition} There is a simple characterization of minimality in terms of Neron--Severi lattices. \begin{lemma}\label{lemma:minimality-criterion} A surface-like pseudolattice $\mathrm{G}$ is minimal if and only if the intersection form on its Neron--Severi lattice $\NS(\mathrm{G})$ does not represent $-1$. \end{lemma} \begin{proof} Follows immediately from Lemma~\ref{lemma:ne-zero-rank}. \end{proof} \begin{definition}\label{def:geometricity} We say that a lattice $\rL$ with a vector $K \in \rL$ is {\sf geometric} if $\rL$ has signature $(1,\rk \rL -1)$ and $K$ is {\sf characteristic}, i.e., $D^2 \equiv K\cdot D \pmod 2$ for any $D \in \rL$. 
A surface-like pseudolattice $\mathrm{G}$ is {\sf geometric} if its canonical class is integral and $(\NS(\mathrm{G}),K_\mathrm{G})$ is a geometric lattice. \end{definition} \begin{remark}\label{remark:matrix} In terms of matrix presentation~\eqref{eq:chi-matrix}, a unimodular geometric pseudolattice can be represented by a matrix with $b'_1 = \dots = b'_{n-2} = 0$, $d = 1$, the symmetric matrix $(c_{ij})$ representing (up to a sign) the unimodular intersection pairing of its Neron--Severi lattice, and $(b_1,\dots,b_{n-2})$ being the intersection products of the basis vectors with the anticanonical class. Indeed, $d = \pm 1$ by the proof of Lemma~\ref{lemma:ns}, and changing the sign of $v_0$ if necessary we can ensure that $d = 1$. Similarly, using unimodularity of $(c_{ij})$ and adding to $v_0$ an appropriate linear combination of $v_1,\dots,v_{n-2}$ we can make $b'_i = 0$. Then $b_i = -K_\mathrm{G} \cdot v_i$ by an observation at the end of Subsection~\ref{subsection:canonical-class}. \end{remark} The name geometric is motivated by the next result. \begin{lemma} A standard surface-like pseudolattice $(\mathrm{G},\bp) = (\NGr(\bD(X)),\bp_X)$ is geometric. \end{lemma} \begin{proof} The canonical class in the Neron--Severi lattice $\NS(X)$ of a surface $X$ is characteristic since by Riemann--Roch we have $D^2 - K_X \cdot D = 2 (\chi(\cO_X(D)) - \chi(\cO_X))$ and the signature of $\NS(X)$ is equal to $(1,\rk \NS(X) - 1)$ by Hodge index Theorem. We conclude by $\NS(\mathrm{G}) = \NS(X)$. \end{proof} In this paper we are mostly interested in pseudolattices with an exceptional basis. Let $\mathrm{G}$ be a surface-like pseudolattice, and $\ce_\bullet = (\ce_1,\dots,\ce_n)$ an exceptional basis in $\mathrm{G}$. We define its norm as \begin{equation}\label{eq:norm} ||\ce_\bullet|| = \sum_{i=1}^n \br(\ce_i)^2. \end{equation} We say that a basis is {\sf norm-minimal}, if the norm of any exceptional basis obtained from $\ce_\bullet$ by a sequence of mutations is greater or equal than the norm of $\ce_\bullet$. Clearly, any exceptional basis can be transformed by mutations to a norm-minimal basis. The main result of this section is the following \begin{theorem}\label{theorem:minimal-cats} Assume $\mathrm{G}$ is a geometric surface-like pseudolattice of rank $n$ and $\ce_\bullet$ is a norm-minimal exceptional basis in $\mathrm{G}$. If\/ $\br(\ce_i) \ne 0$ for all $1 \le i \le n$ \textup{(}for instance if $\mathrm{G}$ is minimal\textup{)}, then $\mathrm{G}$ is isometric either to~$\NGr(\bD(\PP^2))$ or to~$\NGr(\bD(\PP^1 \times \PP^1))$, and the basis $\ce_\bullet$ corresponds to one of the standard exceptional collections of line bundles in these categories \textup{(}see Example~\textup{\ref{example:p2-p1p1}}\textup{)}. In particular, $n = 3$ or~$n = 4$, $K_\mathrm{G}^2 = 12 - n$, and $\br(\ce_i) = \pm 1$ for all $i$. \end{theorem} The proof takes the rest of this section, and uses essentially Perling's arguments. It is obtained as a combination of Corollaries~\ref{corollary:minimal-n}, \ref{corollary:minimal-3}, and~\ref{corollary:minimal-4}. Note that $n \ge 3$ by geometricity, since the rank of $\NS(\mathrm{G})$ is at least 1 as its signature is equal to $(1,n-3)$. We will implicitly assume this inequality from now on. \subsection{Toric system associated with an exceptional collection} We start with a general important definition, which is a small modification of Perling's definition. 
The history, in fact, goes back to the work~\cite{HP} where the authors associated a fan of a smooth toric surface to an exceptional collection of line bundles on any rational surface. The notion below is used to generalize this construction to exceptional collections consisting of objects of arbitrary non-zero ranks, and is a small modification of the one, introduced by Perling in~\cite{Pe}. \begin{definition}[cf.~\protect{\cite[Definition~5.5]{Pe}}] \label{def:toric} Let $(\rL,K)$ be a lattice of rank $n - 2$ with a vector $K \in \rL$. Let $\lambda_{i,i+1} \in \rL \otimes \QQ$, $1 \le i \le n$, be a collection of $n$ vectors. Put $\lambda_{i+n,i+n+1} = \lambda_{i,i + 1}$ for all $i \in \ZZ$ and \begin{equation*} \lambda_{i,j} = \lambda_{i,i+1} + \dots + \lambda_{j-1,j} \qquad\text{for all $i < j < i + n$}. \end{equation*} We say that the collection $\lambda_{\bullet,\bullet}$ is a {\sf toric system} in $\rL$, if \begin{enumerate} \item there exist integers $r_i \in \ZZ$ such that for all $i \in \ZZ$ we have \begin{equation*} \lambda_{i-1,i} \cdot \lambda_{i,i+1} = \frac{1}{r_i^2}; \end{equation*} \item for all $i < j < i + n - 2$ we have $\lambda_{i-1,i} \cdot \lambda_{j,j+1} = 0$; \item for all $i < j < i + n$ the vectors $r_i r_j \lambda_{i,j}$ are integral, hence their squares are integers, i.e., \begin{equation*} r_i r_j \lambda_{i,j} \in \rL \qquad\text{and}\qquad a_{i,j} := (r_ir_j \lambda_{i,j})^2 \in \ZZ; \end{equation*} \item for all $i < j < i + n$ the fraction \begin{equation*} n_{i,j} := \frac{a_{i,j} + r_i^2 + r_j^2}{r_ir_j} \in \ZZ \end{equation*} is an integer; \item $\lambda_{1,2} + \dots + \lambda_{n,n+1} = -K$; and finally \item $\gcd(r_1,\dots,r_n) = 1$. \end{enumerate} \end{definition} Note that if $\lambda_{\bullet,\bullet}$ is a toric system, the integers $r_i$ are determined up to a sign, and whether the other conditions hold or not does not depend on the choice of these signs. Furthermore, if the signs of $r_i$ are chosen, the other integers $a_{i,j}$ and $n_{i,j}$ are determined unambiguously. \begin{remark} The main difference between Perling's definition and Definition~\ref{def:toric} is that we demand integrality of $r_ir_j\lambda_{i,j}$ and of $n_{i,j}$ for all $i < j < i + n$, while Perling does that only for $j = i + 1$. \end{remark} For reader's convenience we write down the Gram matrix of the scalar product on $\rL \otimes \QQ$ on the set~$\lambda_{i,i+1}$ for $0 \le i \le n-1$ (note, however, that this set is not a basis in $\NS(X)_\QQ$, since $n > \dim \rL \otimes \QQ$, so the matrix below is degenerate): \begin{equation}\label{eq:matrix-lambda} (\lambda_{i,i+1} \cdot \lambda_{j,j+1}) = \begin{pmatrix} \frac{a_{0,1}}{r_0^2r_1^2} & \frac1{r_1^2} & 0 & 0 & \dots & 0 & \frac1{r_0^2} \\ \frac1{r_1^2} & \frac{a_{1,2}}{r_1^2r_2^2} & \frac1{r_2^2} & 0 & \dots & 0 & 0 \\ 0 & \frac1{r_2^2} & \frac{a_{2,3}}{r_2^2r_3^2} & \frac1{r_3^2} & \dots & 0 & 0 \\ 0 & 0 & \frac1{r_3^2} & \frac{a_{2,3}}{r_2^2r_3^2} & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \dots & \frac{a_{n-2,n-1}}{r_{n-2}^2r_{n-1}^2} & \frac1{r_{n-1}^2} \\ \frac1{r_n^2} & 0 & 0 & 0 & \dots & \frac1{r_{n-1}^2} & \frac{a_{n-1,n}}{r_{n-1}^2r_n^2} \end{pmatrix} \end{equation} Note that the matrix is cyclically tridiagonal and symmetric (since $r_n = r_0$ by periodicity of $\lambda_{\bullet,\bullet}$). The submatrix of~\eqref{eq:matrix-lambda} obtained by deleting the last row and column is used extensively in~\cite{V}. 
Recall the map $\bl \colon \bw2\mathrm{G} \to \bp^\perp$ defined in~\eqref{eq:def-lambda}. We implicitly compose it with $\bp^\perp \to \bp^\perp/\bp = \NS(\mathrm{G})$. \begin{proposition}\label{proposition:lambda-toric} Let $\ce_1,\dots,\ce_n$ be an exceptional basis in a surface-like pseudolattice $\mathrm{G}$ with $\br(\ce_i) \ne 0$ for all $i$. Extend it to an infinite sequence of vectors in $\mathrm{G}$ by setting $\ce_{i + n} = \rS_\mathrm{G}^{-1}(\ce_i)$ for all $i \in \ZZ$, where~$\rS_\mathrm{G}$ is the Serre operator. Then the collection of vectors \begin{equation}\label{eq:def-r-lambda} \lambda_{i,i+1} := \frac1{\br(\ce_i)\br(\ce_{i+1})} \bl(\ce_i,\ce_{i+1}) = \frac{\ce_{i+1}}{\br(\ce_{i+1})} - \frac{\ce_{i}}{\br(\ce_{i})} \in \NS(\mathrm{G}) \otimes \QQ, \qquad\qquad\text{$1 \le i \le n$,} \end{equation} is a toric system in $(\NS(\mathrm{G}),K_\mathrm{G})$ with $r_i = \br(\ce_i)$. \end{proposition} Note that the equality $r_i = \br(\ce_i)$ is one of the results of the theorem --- we claim that the integers $r_i$ defined from the toric sequence agree (of course, up to a sign) with the ranks of the original vectors $\ce_i$. \begin{proof} Set $r_i := \br(\ce_i)$. Then equations in Definition~\ref{def:toric}(1) follow from Lemma~\ref{lemma:ne-triple}, and those in Definition~\ref{def:toric}(2) from Lemma~\ref{lemma:ne-quadruple}. Furthermore, we have \begin{equation*} \lambda_{i,j} = \lambda_{i,i+1} + \dots + \lambda_{j-1,j} = \left(\frac{\ce_{i+1}}{r_{i+1}} - \frac{\ce_{i}}{r_{i}}\right) + \dots + \left(\frac{\ce_{j}}{r_{j}} - \frac{\ce_{j-1}}{r_{j-1}}\right) = \frac{\ce_{j}}{r_{j}} - \frac{\ce_{i}}{r_{i}} = \frac1{r_ir_j} \bl(\ce_i,\ce_j), \end{equation*} hence $r_ir_j\lambda_{i,j} = \bl(\ce_i,\ce_j)$ is integral; this proves Definition~\ref{def:toric}(3). By Lemma~\ref{lemma:ne-pair} we have \begin{equation*} n_{i,j} = \chi(\ce_i,\ce_j) \end{equation*} as soon as $i < j < i + n$; this proves Definition~\ref{def:toric}(4). Furthermore, \begin{equation*} \sum_{i=1}^n \lambda_{i,i+1} = \bl(\ce_1,\ce_{n+1})/r_1r_{n+1} = \bl(\rS_\mathrm{G}(\ce_{n+1}),\ce_{n+1})/r_{n+1}^2, \end{equation*} and by Lemma~\ref{lemma:lambda-k} in $\NS(\mathrm{G})$ this is equal to $-K_\mathrm{G}$; this proves Definition~\ref{def:toric}(5). Finally, Definition~\ref{def:toric}(6) follows from the fact that $\ce_i$ form a basis of $\mathrm{G}$, and the rank map is surjective since $\mathrm{G}$ is unimodular. \end{proof} \begin{remark}\label{remark:toric-mutations} One can define mutations of toric systems in a way compatible with mutations of exceptional collections. However, we will not need this in our paper, so we skip the construction. \end{remark} Below we discuss some properties of toric systems. We always denote by $r_i$, $a_{i,j}$, and~$n_{i,j}$ the integers determined by the toric system (with some choice of signs of $r_i$). We consider the index set of a toric system as $\ZZ/n\ZZ$ with its natural cyclic order. We say that indices $i$ and $j$ are {\sf cyclically distinct} if $i - j \ne 0 \pmod n$, and {\sf adjacent} if $i - j = \pm 1 \pmod n$. \begin{lemma}\label{lemma:chains-independent} If $\lambda_{\bullet,\bullet}$ is a toric system, then for any $i \in \ZZ$ and for any $0 \le k \le n-2$ the sequence of~$k$ vectors $(\lambda_{i,i+1},\dots,\lambda_{i+k-1,i+k})$ is linearly independent. In particular, $\lambda_{i,i+1} \ne 0$ for all $i$. \end{lemma} \begin{proof} We prove by induction on $k$. When $k = 0$ there is nothing to prove. Assume now that $k \ge 1$ and the claim for $k - 1$ is proved. 
Assume that $\lambda_{i+k,i+k+1} = x_i\lambda_{i,i+1} + \dots + x_{i+k-1}\lambda_{i+k-1,i+k}$. Consider the scalar product of this equality with $\lambda_{i+k+1,i+k+2}$. The left hand side equals $1/r_{i+k+1}^2 \ne 0$ by Definition~\ref{def:toric}(1), while the right hand side is zero by Definition~\ref{def:toric}(2). This contradiction proves the step. \end{proof} The next lemma uses the signature assumption on the Neron--Severi lattice. \begin{lemma}\label{lemma:extremal-pair} Let $\lambda_{\bullet,\bullet}$ be a toric system in a lattice $\rL$ of signature $(1,n-3)$. If both $a_{i,i+1} \ge 0$ and~$a_{j,j+1} \ge 0$ \textup{(}i.e., $\lambda_{i,i+1}^2 \ge 0$ and $\lambda_{j,j+1}^2 \ge 0$\textup{)} for cyclically distinct $i$ and $j$ then \begin{itemize} \item either $i$ and $j$ are adjacent, \item or $n = 4$, $j = i + 2 \pmod n$, $\lambda_{i,i+1}$ and $\lambda_{j,j+1}$ are proportional, and $a_{i,i+1} = a_{j,j+1} = 0$. \end{itemize} \end{lemma} \begin{proof} We may assume that $i < j < i + n$. Assume $i$ and $j$ are not adjacent. Then the intersection form on the sublattice of $\rL_\QQ$ spanned by $\lambda_{i,i+1}$ and $\lambda_{j,j+1}$ looks as \begin{equation*} \left( \begin{smallmatrix} a_{i,i+1} & 0 \\ 0 & a_{j,j+1} \end{smallmatrix} \right), \end{equation*} hence is non-negatively determined. But by the signature assumption $\rL_\QQ$ does not contain non-negatively determined sublattices of rank 2, hence the vectors $\lambda_{i,i+1}$ and $\lambda_{j,j+1}$ are proportional. By Lemma~\ref{lemma:chains-independent} it follows that $j -i \ge n - 2$ and $(i + n) - j \ge n - 2$. Summing up, we deduce $n \ge 2n - 4$, hence $n \le 4$. Since for $n = 3$ any two cyclically distinct $i$ and $j$ are adjacent, we conclude that $n = 4$, and non-adjacency means $j = i + 2$. Finally, $\lambda_{i,i+1} \cdot \lambda_{j,j+1}= 0$ by Definition~\ref{def:toric}(2), and since the vectors are proportional and nonzero, we also have $\lambda_{i,i+1}^2 = \lambda_{j,j+1}^2 = 0$, hence $a_{i,i+1} = a_{j,j+1} = 0$. \end{proof} \subsection{Fan associated with a toric system} Following Perling, we associate with a toric system $\lambda_{\bullet,\bullet}$ in the Neron--Severi lattice $\NS(\mathrm{G})$ of a surface-like pseudolattice $\mathrm{G}$ a fan in~$\ZZ^2$. As it is customary in toric geometry, we consider a pair of mutually dual free abelian groups \begin{equation*} M \cong \ZZ^2 \qquad\text{and}\qquad N := M^\vee. \end{equation*} We define a map $M \to \ZZ^n$ as the kernel of the map $\ZZ^n \to \NS(\mathrm{G})_\QQ$ defined by taking the base vectors to~$\lambda_{i,i+1}$. Then we consider the dual map $(\ZZ^n)^\vee \to N$ and denote the images of the base vectors by~$\ell_{i,i+1} \in N$. The definition of $\ell_{i,i+1}$ implies that \begin{equation}\label{eq:ell-relation-tensor} \sum_{i=1}^n \ell_{i,i+1} \otimes \lambda_{i,i+1} = 0 \qquad\text{in $N \otimes \NS(\mathrm{G})_\QQ$}, \qquad\text{and}\qquad \text{$N$ is generated by $\ell_{i,i+1}$} \end{equation} (the toric system $(\lambda_{i,i+1})$ defines an element of $\NS(\mathrm{G})_\QQ \otimes (\ZZ^n)^\vee \cong \Hom(\ZZ^n, \NS(\mathrm{G})_\QQ)$, while the collection of vectors~$(\ell_{i,i+1})$ defines an element of $\ZZ^n \otimes N \cong \Hom(M,\ZZ^n)$, and the sum in~\eqref{eq:ell-relation-tensor} is an expression for the composition of these maps). 
\begin{remark} Perling considers the fan in $N_\RR$ generated by the vectors $\ell_{i,i+1}$, proves that it defines a projective toric surface (\cite[Proposition~10.6]{Pe}), and shows some nice results about it. For instance, he proves that its singularities are T-singularities (which gives a nice connection to Hacking's results~\cite{Ha}), and relates mutations of exceptional collections and their toric systems (see Remark~\ref{remark:toric-mutations}) to degenerations of toric surfaces. We, however, will not need this material and suggest the interested reader to look into~\cite{Pe}. So, toric geometry will not be explicitly discussed below, but an experienced reader will notice it always lurking at the background. \end{remark} The general tensor relation~\eqref{eq:ell-relation-tensor} implies many linear relations. \begin{proposition}\label{proposition:ell-relation} For every $i \in \ZZ$ we have \begin{equation}\label{eq:ell-relation} r_{i+1}^2\ell_{i-1,i} + a_{i,i+1}\ell_{i,i+1} + r_{i}^2\ell_{i+1,i+2} = 0. \end{equation} \end{proposition} \begin{proof} Take the scalar product of~\eqref{eq:ell-relation-tensor} with $r_i^2r_{i+1}^2\lambda_{i,i+1}$ and use Definition~\ref{def:toric}. \end{proof} \begin{corollary}\label{corollary:l-independent} For every $i \in \ZZ$ the vectors $\ell_{i-1,i}$ and $\ell_{i,i+1}$ are linearly independent. \end{corollary} \begin{proof} Assume $\ell_{i-1,i}$ and $\ell_{i,i+1}$ are linearly dependent. Then there is a nonzero element $m \in M$ such that $m(\ell_{i-1,i}) = m(\ell_{i,i+1}) = 0$. Evaluating it on~\eqref{eq:ell-relation-tensor}, we see that \begin{equation*} m(\ell_{i+1,i+2})\lambda_{i+1,i+2} + \dots + m(\ell_{i+n-2,i+n-1})\lambda_{i+n-2,i+n-1} = 0. \end{equation*} But by Lemma~\ref{lemma:chains-independent} the vectors $\lambda_{i+1,i+2},\dots,\lambda_{i+n-2,i+n-1}$ are linearly independent. It follows that $m(\ell_{j,j+1}) = 0$ for all $j$. This contradicts with the fact that $\ell_{j,j+1}$ generate $N$. \end{proof} On the other hand, we have relations of a completely different sort. Denote by $\det \colon N \times N \to \ZZ$ a skew-symmetric bilinear form on $N$ which induces an isomorphism $\bw2N \xrightarrow{\ \sim\ } \ZZ$ (i.e., a volume form). \begin{proposition}\label{proposition:ell-relation-det} For every $i \in \ZZ$ we have \begin{equation}\label{eq:ell-relation-dets} \det(\ell_{i,i+1},\ell_{i+1,i+2}) \ell_{i-1,i} + \det(\ell_{i+1,i+2},\ell_{i-1,i}) \ell_{i,i+1} + \det(\ell_{i-1,i},\ell_{i,i+1}) \ell_{i+1,i+2} = 0. \end{equation} \end{proposition} \begin{proof} This is standard linear algebra. \end{proof} With appropriate choice of the volume form, relations~\eqref{eq:ell-relation} almost coincide with relations~\eqref{eq:ell-relation-dets}. \begin{proposition}\label{proposition:r-a-det} There is a choice of a volume form $\det$ on $N$ and a positive integer $h \in \ZZ$ such that \begin{equation}\label{eq:r-dets} \det(\ell_{i-1,i},\ell_{i,i+1}) = hr_i^2, \quad\text{and}\quad \det(\ell_{i+1,i+2},\ell_{i-1,i}) = ha_{i,i+1} \quad\text{for all $i \in \ZZ$.} \end{equation} With this choice we have $\det(\ell_{i-1,i},\ell_{i,i+1}) > 0$ for all $i \in \ZZ$. \end{proposition} \begin{proof} Indeed, by Corollary~\ref{corollary:l-independent} the space of relations between $\ell_{i-1,i}$, $\ell_{i,i+1}$, $\ell_{i+1,i+2}$ is one-dimensional. 
Since both relations~\eqref{eq:ell-relation} and~\eqref{eq:ell-relation-dets} are nontrivial (the first because $r_i \ne 0$ and the second because $\det(\ell_{i-1,i},\ell_{i,i+1}) \ne 0$), they are proportional, hence for every $i \in \ZZ$ there is unique $h_i \in \QQ \setminus 0$ such that \begin{equation*} \det(\ell_{i-1,i},\ell_{i,i+1}) = h_ir_i^2, \qquad \det(\ell_{i,i+1},\ell_{i+1,i+2}) = h_ir_{i+1}^2, \quad\text{and}\quad \det(\ell_{i+1,i+2},\ell_{i-1,i}) = h_ia_{i,i+1}. \end{equation*} Comparing the relations for $i$ and $i+1$, we see that $h_i = h_{i+1}$. Hence $h_i = h$ for one non-zero rational number $h$, so that~\eqref{eq:r-dets} holds. Furthermore, since all $r_i$ are mutually coprime (by Definition~\ref{def:toric}(6)), it follows that $h$ is a non-zero integer. Finally, changing the volume form on $N$ if necessary, we can assume that $h > 0$. \end{proof} \begin{remark} In fact, Perling claims that $h = 1$ (see~\cite[Proposition~8.2]{Pe}), however, his proof of this fact is unclear. On the other hand, this is not necessary for the proof of the main result. \end{remark} Now let us deduce some consequences about the geometry of vectors $\ell_{i,i+1}$ on the plane $N_\RR \cong \RR^2$. Consider the polygon defined as the convex hull of the vectors $\ell_{i,i+1}$: \begin{equation*} \bP := \Conv(\ell_{1,2},\ell_{2,3},\dots,\ell_{n,n+1}) \subset N_\RR. \end{equation*} \begin{lemma}\label{lemma:polygon-zero} The point $0 \in N$ is contained in the interior of the polygon $\bP$. \end{lemma} \begin{proof} Indeed, otherwise all $\ell_{i,i+1}$ are contained in a closed half-plane of $N_\RR$. On the other hand, by Proposition~\ref{proposition:r-a-det} we have $\det(\ell_{i-1,i},\ell_{i,i+1}) > 0$ for all $i \in \ZZ$, hence the oriented angle between the vectors $\ell_{i-1,i}$ and $\ell_{i,i+1}$ is contained in the interval $(0,\pi)$. Evidently, a periodic sequence of vectors with this property cannot be contained in a half-plane. This contradiction proves the lemma. \end{proof} We say that a vector $\ell_{i,i+1}$ is {\sf extremal} if it is a vertex of the polygon $\bP$. In other words, if it does not lie in the convex hull of the other vectors. \begin{corollary}\label{corollary:three-extremal} There are at least three extremal vectors among $\ell_{i,i+1}$. \end{corollary} \begin{proof} Indeed, the polygon $\bP$ has a nonempty interior by Lemma~\ref{lemma:polygon-zero}, hence it has at least three vertices. \end{proof} \subsection{Fan of a norm-minimal basis} Throughout this section we assume $\lambda_{\bullet,\bullet}$ is the toric system of a norm-minimal exceptional basis $\ce_\bullet$ in a surface-like pseudolattice $\mathrm{G}$, i.e., the sum $\sum_{i=1}^n r_i^2$ of the squares of the ranks of the basis vectors is minimal possible among all mutations of the basis. \begin{lemma}\label{lemma:a-r} Assume the basis is norm-minimal. Then for every $i \in \ZZ$ we have \begin{itemize} \item if $a_{i,i+1} \ge 0$ then $a_{i,i+1} \ge |r_i^2 - r_{i+1}^2|$, and \item if $a_{i,i+1} < 0$ then $a_{i,i+1} \le -(r_i^2 + r_{i+1}^2)$. \end{itemize} \end{lemma} \begin{proof} We have $\chi(\ce_i,\ce_{i+1}) = n_{i,i+1}$ (see the proof of Proposition~\ref{proposition:lambda-toric}), hence by~\eqref{eq:mutations} the rank of the left mutation $\LL_{\ce_i}(\ce_{i+1})$ of $\ce_{i+1}$ through $\ce_i$ equals \begin{equation*} |r'| = |n_{i,i+1}r_i - r_{i+1}| = \frac{|a_{i,i+1}+r_i^2|}{|r_{i+1}|}. \end{equation*} Norm-minimality implies $|r'| \ge |r_{i+1}|$, i.e. \begin{equation*} |a_{i,i+1} + r_i^2| \ge r_{i+1}^2. 
\end{equation*} Considering analogously the right mutation $\RR_{\ce_{i+1}}(\ce_{i})$, we deduce \begin{equation*} |a_{i,i+1} + r_{i+1}^2| \ge r_i^2. \end{equation*} Analyzing the cases of nonnegative and negative $a_{i,i+1}$, we easily deduce the required inequalities. \end{proof} Note that in the proof we only use a local norm-minimality of the basis, i.e., that the norm of the basis does not decrease under elementary mutations only. \begin{lemma}\label{lemma:negative-not-extremal} Assume the basis is norm-minimal. If $a_{i,i+1} < 0$ then $\ell_{i,i+1} \in \Conv(0,\ell_{i-1,i},\ell_{i+1,i+2})$. In particular, $\ell_{i,i+1}$ is not extremal. \end{lemma} \begin{proof} If $a_{i,i+1} < 0$ then the relation~\eqref{eq:ell-relation} can be rewritten as \begin{equation*} \ell_{i,i+1} = \frac{r_i^2}{|a_{i,i+1}|}\ell_{i-1,i} + \frac{r_{i+1}^2}{|a_{i,i+1}|}\ell_{i+1,i+2} \end{equation*} By Lemma~\ref{lemma:a-r} we have $|a_{i,i+1}| \ge r_i^2 + r_{i+1}^2$, hence the coefficients in the right hand side are nonnegative and their sum does not exceed 1, hence the claim about the convex hull. Now, since $0$ is in the interior of $\bP$ (Lemma~\ref{lemma:polygon-zero}), we see that $\Conv(0,\ell_{i-1,i},\ell_{i+1,i+2}) \subset \bP$, so the only possibility for~$\ell_{i,i+1}$ to be extremal is if it coincides with one of the vertices of the triangle $\Conv(0,\ell_{i-1,i},\ell_{i+1,i+2})$. But this is impossible by Corollary~\ref{corollary:l-independent}. \end{proof} Combining the last two lemmas we see that there are at least three indices $i$ such that $a_{i,i+1} \ge 0$. This already gives the required restriction on $n$. \begin{corollary}\label{corollary:minimal-n} If a norm-minimal exceptional basis in a geometric surface-like pseudolattice $\mathrm{G}$ consists of elements of non-zero rank, then $n = 3$ or $n = 4$. \end{corollary} \begin{proof} Each vertex of the polygon $\bP$ corresponds to an extremal vector~$\ell_{i,i+1}$, hence the corresponding integer $a_{i,i+1}$ is nonnegative by Lemma~\ref{lemma:negative-not-extremal}. Thus, if there is a pair of non-adjacent $i$ and $j$ such that the vectors $\ell_{i,i+1}$ and $\ell_{j,j+1}$ are extremal, then $n = 4$ by Lemma~\ref{lemma:extremal-pair}. On the other hand, if all $i$ such that $\ell_{i,i+1}$ is extremal are pairwise adjacent, then clearly $n = 3$. \end{proof} \subsection{Norm-minimal bases for $n = 3$ and $n = 4$} It remains to consider two cases. As before we assume that~$\ce_\bullet$ is a norm-minimal exceptional basis of a geometric pseudolattice $\mathrm{G}$ of rank $n$ consisting of elements of non-zero rank and $\lambda_{\bullet,\bullet}$ is its toric system constructed in Proposition~\ref{proposition:lambda-toric}. \begin{lemma}\label{lemma:n=3} If $n = 3$, then $r_i = \pm 1$, $a_{i,i+1} = 1$, and $K_\mathrm{G}^2 = 9$. \end{lemma} \begin{proof} Replacing $\ce_i$ by $-\ce_i$, we may assume that all the ranks are positive. Let $H$ denote a generator of~$\NS(\mathrm{G})$, by unimodularity (Lemma~\ref{lemma:chi-unimodular}) and geometricity of $\mathrm{G}$, we have $H^2 = 1$. By definition of a toric system $\lambda_{ij} = c_{ij}H$, $c_{ij} \in \QQ$. Accordingly, we have by~Definition~\ref{def:toric}(1) \begin{equation*} c_{12}c_{31} = \frac1{r_1^2}, \qquad c_{12}c_{23} = \frac1{r_2^2}, \qquad c_{31}c_{23} = \frac1{r_3^2}. 
\end{equation*} Solving this system for $c_{ij}$, we see that \begin{equation*} c_{12} = \frac{r_3}{r_1r_2},\qquad c_{31} = \frac{r_2}{r_3r_1},\qquad c_{23} = \frac{r_1}{r_2r_3} \end{equation*} up to a common sign, which can be fixed by replacing $H$ with $-H$ if necessary. By Definition~\ref{def:toric}(5) \begin{equation*} -K_\mathrm{G} = \lambda_{12} + \lambda_{23} + \lambda_{31} = \left(\frac{r_3}{r_1r_2} + \frac{r_2}{r_3r_1} + \frac{r_1}{r_2r_3}\right)H = \frac{r_1^2 + r_2^2 + r_3^2}{r_1r_2r_3}H. \end{equation*} Since $\mathrm{G}$ is unimodular, $K_\mathrm{G}$ is integral by Lemma~\ref{lemma:canonical-class}, i.e., there is $\gamma \in \ZZ$ such that $-K_\mathrm{G} = \gamma H$ and hence \begin{equation}\label{eq:markov} r_1^2 +r_2^2 + r_3^2 = \gamma r_1r_2r_3. \end{equation} Since all $r_i$ are positive, so is $\gamma$. Moreover, $\gcd(r_1,r_2,r_3) = 1$ by Definition~\ref{def:toric}(6) and Proposition~\ref{proposition:lambda-toric}. But the only positive integer~$\gamma$ for which the above equation has an integral indivisible solution is $\gamma = 3$ (\cite[\S2.1]{A}). Therefore, we have $K_\mathrm{G}^2 = \gamma^2 = 9$. Furthermore, in case $\gamma = 3$, equation~\eqref{eq:markov} is the Markov equation, and its positive norm-minimal solution (with respect to the standard braid group action) is $r_1 = r_2 = r_3 = 1$. We obtain $c_{i,i+1} = 1$ and $a_{i,i+1} = r_i^2r_{i+1}^2c_{i,i+1}^2 = 1$. \end{proof} Thus, the Neron--Severi lattice of a geometric surface-like pseudolattice with an exceptional basis of length $n = 3$ can be written as \begin{equation}\label{eq:ns-3} \NS(\mathrm{G}) = \ZZ H,\qquad H^2 = 1,\qquad K_\mathrm{G} = -3H, \end{equation} the toric system of a norm-minimal exceptional collection in $\mathrm{G}$ is $\lambda_{1,2} = \lambda_{2,3} = \lambda_{1,3} = H$. Since $\chi(\ce_i,\ce_j) = n_{i,j} = (a_{i,j} + r_i^2 + r_j^2)/r_ir_j$ and $a_{i,j} = (r_ir_j\lambda_{i,j})^2$, we see that the Gram matrix of the form $\chi$ in the basis $\ce_i$ is equal to the form $\chi_{\PP^2}$ from Example~\ref{example:p2-p1p1}. \begin{corollary}\label{corollary:minimal-3} If a geometric surface-like pseudolattice $\mathrm{G}$ of rank $n = 3$ has an exceptional basis, then~$\mathrm{G}$ is isometric to~$\NGr(\bD(\PP^2))$, and its norm-minimal basis corresponds to the exceptional collection $(\cO,\cO(1),\cO(2))$ in $\bD(\PP^2)$. \end{corollary} Now we pass to the case $n = 4$. The results in this case are quite close to the results of~\cite{dTVdB}. \begin{lemma}\label{lemma:n=4} If $n = 4$, then $r_i = \pm 1$ and $K_\mathrm{G}^2 = 8$. \end{lemma} \begin{proof} Again, we may assume that all the ranks are positive. By Corollary~\ref{corollary:three-extremal} there are at least three extremal vectors $\ell_{i,i+1}$, and by Lemma~\ref{lemma:negative-not-extremal} the corresponding integers $a_{i,i+1}$ are non-negative. After a cyclic shift of indices we may assume \begin{equation*} a_{12}, a_{23}, a_{34} \ge 0. \end{equation*} Applying Lemma~\ref{lemma:extremal-pair} we conclude that $a_{12} = a_{34} = 0$, $\lambda_{12}$ and $\lambda_{34}$ are proportional, and $\lambda_{12}^2 = \lambda_{34}^2 = 0$. By Lemma~\ref{lemma:a-r} we have \begin{equation*} r_1^2 = r_2^2, \qquad r_3^2 = r_4^2. \end{equation*} Since a multiple of $\lambda_{12}$ is integral, we have \begin{equation*} \lambda_{12} = c_{12}f, \qquad \lambda_{34} = c_{34}f, \qquad \text{where $f \in \NS(\mathrm{G})$ is primitive with $f^2 = 0$} \end{equation*} with $c_{12}, c_{34} \in \QQ$. 
Since $\NS(\mathrm{G})$ is unimodular, there is an element $s \in \NS(\mathrm{G})$ such that $(f,s)$ is a basis of $\NS(\mathrm{G})$ and \begin{equation*} f \cdot s = 1, \qquad d := s^2 \in \{0, -1 \}. \end{equation*} We have \begin{equation*} \lambda_{23} = c_{23}s + c'_{23}f, \qquad \lambda_{41} = c_{41}s + c'_{41}f. \end{equation*} Here all $c$ and $c'$ are rational numbers, and by Lemma~\ref{lemma:chains-independent} all $c$ are nonzero. We have \begin{equation*} c_{23}c_{34} = \lambda_{23} \cdot \lambda_{34} = \frac1{r_3^2} = \frac1{r_4^2} = \lambda_{34} \cdot \lambda_{41} = c_{34}c_{41}, \end{equation*} hence $c_{23} = c_{41}$. Further, \begin{equation*} -K_\mathrm{G} = \lambda_{12} + \lambda_{23} + \lambda_{34} + \lambda_{41} = (c_{12} + c'_{23} + c_{34} + c'_{41})f + 2c_{23}s. \end{equation*} Since $K_\mathrm{G}$ is characteristic and $f^2 = 0$, we have $K_\mathrm{G} \cdot f = 2c_{23}$ is even, hence $c_{23} \in \ZZ$. On the other hand, $r_1r_2\lambda_{12} = r_2^2c_{12}f$ and $r_3r_4\lambda_{34} = r_3^2c_{34}f$ are integral, hence $r_2^2c_{12}$ and $r_3^2c_{34}$ are both integral. But \begin{equation*} \frac1{r_2^2} = \lambda_{12}\lambda_{23} = c_{12}c_{23} \qquad\text{and}\qquad \frac1{r_3^2} = \lambda_{23}\lambda_{34} = c_{23}c_{34}, \end{equation*} hence $(r_2^2c_{12})c_{23} = (r_3^2c_{34})c_{23} = 1$, hence (changing the signs of $f$ and $s$ if necessary) we can write \begin{equation*} c_{12} = \frac1{r_2^2}, \qquad c_{34} = \frac1{r_3^2}, \qquad c_{23} = c_{41} = 1. \end{equation*} Finally, from $0 = \lambda_{23}\cdot \lambda_{41} = d + c'_{23} + c'_{41}$ we deduce that $c'_{23} + c'_{41} = -d \in \ZZ$, hence from integrality of~$K_\mathrm{G}$ we obtain \begin{equation*} \frac1{r_2^2} + \frac1{r_3^2} = c_{12} + c_{34} \in \ZZ, \end{equation*} and from this it easily follows that $r_2^2 = r_3^2 = 1$. We finally see that all $r_i$ are equal to $1$, all $c_{i,i+1} = 1$, and $-K_\mathrm{G} = (2-d)f + 2s$, hence $K_\mathrm{G}^2 = 4(2-d) + 4d = 8$. \end{proof} Thus, the Neron--Severi lattice of a geometric surface-like pseudolattice of rank $n = 4$ with an exceptional basis can be written as \begin{equation}\label{eq:ns-4} \NS(\mathrm{G}) = \ZZ f \oplus \ZZ s,\qquad f^2 = 0,\quad f\cdot s = 1, \quad s^2 = d,\qquad K_\mathrm{G} = (2-d)f - 2s \end{equation} and $d \in \{0, -1\}$. Assume first $d = 0$. As it was shown in the proof of Lemma~\ref{lemma:n=4}, the toric system of a norm-minimal exceptional basis in $\mathrm{G}$ has form $\lambda_{12} = f$, $\lambda_{23} = s + c'f$, $\lambda_{34} = f$ for some $c' \in \ZZ$. This allows to compute all $a_{i,j}$ and~$n_{i,j}$, and to show that the Gram matrix of the form $\chi$ is equal to the form $\chi_{\PP^1 \times \PP^1}$ from Example~\ref{example:p2-p1p1} with $c = c' + 1$. Similarly, if $d = -1$ the toric system of a norm-minimal exceptional basis in $\mathrm{G}$ should have form $\lambda_{12} = f$, $\lambda_{23} = s + c'f$, $\lambda_{34} = f$ for some $c' \in \ZZ$, and computing integers $n_{i,j}$ one checks that $\mathrm{G}$ is isometric to $\NGr(\bD(\mathbb{F}_1))$, where $\mathbb{F}_1$ is the Hirzebruch surface, see Example~\ref{example:p2-p1p1}. On the other hand, again by Example~\ref{example:p2-p1p1} the corresponding exceptional collection in $\bD(\FF_1)$ can be transformed by mutations to a collection of objects of ranks $(1,1,1,0)$, therefore the original exceptional basis is not norm-minimal, and this case does not fit into our assumptions. 
\begin{corollary}\label{corollary:minimal-4} If a geometric surface-like pseudolattice $\mathrm{G}$ with $n = 4$ has a norm-minimal exceptional basis with all ranks being non-zero, then $\mathrm{G}$ is isometric to~$\NGr(\bD(\PP^1 \times \PP^1))$. A norm-minimal exceptional basis in such $\mathrm{G}$ corresponds to one of the collections \begin{equation*} (\cO, \cO(1,0), \cO(c,1), \cO(c+1,1)) \end{equation*} in $\bD(\PP^1 \times \PP^1)$ for some $c \in \ZZ$. \end{corollary} A combination of Corollary~\ref{corollary:minimal-n}, Corollary~\ref{corollary:minimal-3} and Corollary~\ref{corollary:minimal-4} proves Theorem~\ref{theorem:minimal-cats}. \section{Minimal model program}\label{section:mmp} Assume $\mathrm{G}$ is a surface-like pseudolattice. \subsection{Contraction} Let $\ce \in \mathrm{G}$ be an exceptional element of zero rank. We consider the right orthogonal \begin{equation*} \mathrm{G}_\ce := \ce^\perp = \{ v \in \mathrm{G} \mid \chi(\ce,v) = 0 \} \subset \mathrm{G}. \end{equation*} Since $\chi(\ce,\ce) = 1$, we have a direct sum decomposition \begin{equation}\label{eq:g-split} \mathrm{G} = \mathrm{G}_\ce \oplus \ZZ\ce. \end{equation} Note that $\br(\ce) = 0$ means $\chi(\ce,\bp) = 0$, hence $\bp \in \mathrm{G}_\ce$ and also~$\ce \in \bp^\perp$. Abusing notation, we denote the projection of $\ce$ to $\NS(\mathrm{G}) = \bp^\perp/\bp$ also by $\ce$. \begin{lemma}\label{lemma:contraction} The pseudolattice $\mathrm{G}_\ce$ is surface-like and the element $\bp \in \mathrm{G}_\ce$ is point-like. Moreover, the rank function on $\mathrm{G}_\ce$ is the restriction of the rank function on $\mathrm{G}$; we have an orthogonal direct sum decomposition \begin{equation}\label{eq:contraction-ns} \NS(\mathrm{G}) = \NS(\mathrm{G}_\ce) \mathbin{\mathop{\oplus}\limits^\perp} \ZZ\ce, \end{equation} and a relation between the canonical classes \begin{equation}\label{eq:contraction-k} K_\mathrm{G} = K_{\mathrm{G}_\ce} + (-K_\mathrm{G} \cdot \ce)\ce, \end{equation} If $\mathrm{G}$ is unimodular or geometric, then so is $\mathrm{G}_\ce$. \end{lemma} \begin{proof} Clearly, $\bp$ is primitive in $\mathrm{G}_\ce$, $\chi(\bp,\bp) = 0$, and $\chi_-(\bp,-)$ is zero on $\mathrm{G}_\ce$. Moreover, it is clear that the orthogonal of $\bp$ in $\mathrm{G}_\ce$ is the intersection $\bp^\perp \cap \mathrm{G}_\ce \subset \mathrm{G}$, hence the form $\chi_-$ vanishes on it. This means that $\mathrm{G}_\ce$ is surface-like with point-like element $\bp$. Further, the direct sum decomposition~\eqref{eq:g-split} gives by restriction a direct sum \begin{equation*} \bp^\perp = (\bp^\perp \cap \mathrm{G}_\ce) \oplus \ZZ\ce, \end{equation*} and then by taking the quotient with respect to $\ZZ\bp$ a direct sum~\eqref{eq:contraction-ns}. Its summands are mutually orthogonal by definition. Furthermore, equality~\eqref{eq:chi-minus-k} shows that the orthogonal projection of~$K_\mathrm{G}$ to $\NS(\mathrm{G}_\ce)$ is the canonical class for $\mathrm{G}_\ce$. It fits into~\eqref{eq:contraction-k} since by Lemma~\ref{lemma:ne-zero-rank} we have $\ce^2 = -1$ as~$\ce$ is exceptional of zero rank. Finally, unimodularity of $\mathrm{G}_\ce$ is clear, and geometricity follows from~\eqref{eq:contraction-ns} and~\eqref{eq:contraction-k}. \end{proof} \begin{lemma}\label{lemma:mmp} If $\mathrm{G}$ is a surface-like pseudolattice then there is an exceptional sequence $\ce_1,\dots,\ce_k$ of rank zero elements such that the iterated contraction $\mathrm{G}_{\ce_1,\dots,\ce_k}$ is a minimal surface-like pseudolattice. 
\end{lemma} \begin{proof} The claim follows by induction on the rank of $\mathrm{G}$. \end{proof} The pseudolattice obtained from $\mathrm{G}$ by an iterated contraction is called {\sf a minimal model} of $\mathrm{G}$. As we have seen, minimal geometric surface-like pseudolattices admitting an exceptional basis are isometric to numerical Grothendieck groups of $\PP^2$ or $\PP^1\times \PP^1$, so Lemma~\ref{lemma:mmp} can be thought of as a categorical minimal model program for surface-like pseudolattices of that kind. \subsection{Defect} In this section we define the defect of a lattice and of a surface-like pseudolattice. \begin{definition}\label{def:defect} Let $\rL$ be a lattice with a vector $K$. The {\sf defect} of $(\rL,K)$ is defined as the integer \begin{equation*} \delta(\rL,K) := K^2 + \rk \rL - 10. \end{equation*} If $\mathrm{G}$ is a unimodular surface-like pseudolattice, we define its defect as $\delta(\mathrm{G}) := \delta(\NS(\mathrm{G}),K_\mathrm{G})$. \end{definition} It is easy to see that the defect is zero for numerical Grothendieck groups of surfaces with zero irregularity and geometric genus. \begin{lemma} Let $X$ be a smooth projective surface over an algebraically closed field of characteristic zero with $q(X) = p_g(X) = 0$. The corresponding pseudolattice $\mathrm{G} = \NGr(\bD(X))$ has zero defect, $\delta(\mathrm{G}) = 0$. \end{lemma} \begin{proof} Since $H^1(X,\cO_X) = H^2(X,\cO_X) = 0$, we have $\rk \NS(X) = e - 2$, where $e$ is the topological Euler characteristic of $X$, and the holomorphic Euler characteristic $\chi(\cO_X)$ equals 1. By the Noether formula we have $e = 12\chi(\cO_X) - K_X^2$, hence $\rk \NS(\mathrm{G}) = 10 - K_\mathrm{G}^2$, hence $\delta(\mathrm{G}) = 0$. \end{proof} In general, the defect of a surface-like pseudolattice can be both positive and negative. \begin{example} Let $X \to C$ be a ruled surface over a curve $C$ of genus $g$ (still under the assumption that the base field is algebraically closed of characteristic zero) and let $\mathrm{G} = \NGr(\bD(X))$. Then $\rk \NS(X) = 2$, while $K_X^2 = 8(1-g)$, hence $\delta(\mathrm{G}) = 8(1-g) + 2 - 10 = -8g$. Assume further that $X$ has a section $i \colon C \to X$ with normal bundle of degree $-1$ (for instance, $X$ can be the projectivization of a direct sum $\cO_C \oplus \cL$, where $\deg \cL = -1$). Set $\ce$ to be the class of the sheaf~$i_*\cO_C$ in the numerical Grothendieck group $\NGr(\bD(X))$. Clearly, it is exceptional of rank~0, and $K_X \cdot \ce = 2g - 1$. Consequently, by Lemma~\ref{lemma:contraction-defect} below, the contraction $\mathrm{G}_\ce$ has defect \begin{equation*} \delta(\mathrm{G}_\ce) = \delta(\mathrm{G}) - (1 - (K_X \cdot \ce)^2) = -8g - (1 - (2g - 1)^2) = 4g(g - 3). \end{equation*} In particular, it is negative for $g = 1$ and $g = 2$, zero for $g = 0$ and $g = 3$, and positive for $g > 3$. \end{example} As a result of the classification of Theorem~\ref{theorem:minimal-cats} (more precisely, see Lemma~\ref{lemma:n=3} and Lemma~\ref{lemma:n=4}), we have \begin{corollary} If $\mathrm{G}$ is a minimal geometric surface-like pseudolattice admitting an exceptional basis, then $\delta(\mathrm{G}) = 0$. \end{corollary} An important property is that the defect does not decrease under contractions of geometric categories. \begin{lemma}\label{lemma:contraction-defect} Assume $\mathrm{G}$ is a surface-like pseudolattice and $\ce \in \mathrm{G}$ is exceptional of zero rank. 
Then \begin{equation}\label{eq:contraction-defect} \delta(\mathrm{G}) = \delta(\mathrm{G}_\ce) + (1 - (K_\mathrm{G} \cdot \ce)^2). \end{equation} In particular, if $\mathrm{G}$ is geometric, then $\delta(\mathrm{G}) \le \delta(\mathrm{G}_\ce)$ and this becomes an equality if and only if $K_\mathrm{G} \cdot \ce = \pm 1$. \end{lemma} \begin{proof} Equality~\eqref{eq:contraction-defect} easily follows from Lemma~\ref{lemma:contraction}. Further, note that $K_\mathrm{G} \cdot \ce \equiv \ce^2 \pmod 2$ since $K_\mathrm{G}$ is characteristic, hence $K_\mathrm{G} \cdot \ce$ is an odd integer, hence the second summand in the right hand side of~\eqref{eq:contraction-defect} is non-positive. This proves the required inequality of the defects. \end{proof} \subsection{Exceptional bases in geometric surface-like pseudolattices} Combining the minimal model program with the classification result of Theorem~\ref{theorem:minimal-cats} we get the following results. \begin{theorem}[cf.~\protect{\cite[Corollary~9.12 and Corollary~10.7]{Pe}}] \label{theorem-ranks-0-1} Let $\mathrm{G}$ be a geometric surface-like pseudolattice. Any exceptional basis in $\mathrm{G}$ can be transformed by mutations into an exceptional basis consisting of~$3$ or $4$ elements of rank~$1$ and all other elements of rank $0$. \end{theorem} \begin{proof} First, we mutate the exceptional basis to a norm-minimal basis $\ce_1,\dots,\ce_n$. Next, we apply a sequence of right mutations to this collection according to the following rule: if a pair $(\ce_i,\ce_{i+1})$ is such that $\br(\ce_i) \ne 0$ and $\br(\ce_{i+1}) = 0$, we mutate it to $(\ce_{i+1},\RR_{\ce_{i+1}}\ce_i)$. Then $\br(\RR_{\ce_{i+1}}\ce_i) = \br(\ce_i)$ by~\eqref{eq:mutations}, so the new collection is still norm-minimal. It is also clear that after a number of such operations we will have the following property: there is $1 \le k \le n$ such that \begin{equation}\label{eq:ranks-0-1} \br(\ce_1) = \dots = \br(\ce_k) = 0, \qquad \text{and}\qquad \br(\ce_i) \ne 0\quad\text{for $i > k$}. \end{equation} We consider the iterated contraction $\mathrm{G}' = \mathrm{G}_{\ce_1,\dots,\ce_k}$ of $\mathrm{G}$. Clearly, $\ce_{k+1},\dots,\ce_n$ is an exceptional basis in $\mathrm{G}'$. Furthermore, it is norm-minimal. Indeed, if there is a mutation in $\mathrm{G}'$ decreasing the norm of the collection, then the same mutation in $\mathrm{G}$ would also decrease the norm in the same way. Since the ranks of all elements in the collection of $\mathrm{G}'$ are non-zero, and $\mathrm{G}'$ is geometric by Lemma~\ref{lemma:contraction}, we conclude by Theorem~\ref{theorem:minimal-cats} that the ranks of elements $\ce_{k+1}, \dots, \ce_n$ are $\pm1$ and $n - k$ is equal to $3$ or $4$. Changing the signs of elements with negative ranks, we deduce that the obtained collection in $\mathrm{G}$ consists of elements of rank 0 and 1 only. \end{proof} \begin{theorem}[cf.~\protect{\cite[Theorem~10.8]{Pe}}] \label{theorem-ranks-1} Let $\mathrm{G}$ be a geometric surface-like pseudolattice with \hbox{$\delta(\mathrm{G}) = 0$}. Any exceptional basis in~$\mathrm{G}$ can be transformed by mutations into an exceptional basis consisting of elements of rank $1$. \end{theorem} \begin{proof} By Theorem~\ref{theorem-ranks-0-1} there is an exceptional basis in $\mathrm{G}$ satisfying~\eqref{eq:ranks-0-1} with $\br(\ce_i) = 1$ for $i > k$. Note also that the category $\mathrm{G}' = \mathrm{G}_{\ce_1,\dots,\ce_k}$ has zero defect by Theorem~\ref{theorem:minimal-cats}. 
Hence, by Lemma~\ref{lemma:contraction-defect} we have $K_\mathrm{G} \cdot \ce_i = \pm 1$ for all $1 \le i \le k$. We apply to this basis a sequence of right mutations according to the following rule: if a pair $(\ce_i,\ce_{i+1})$ is such that $\br(\ce_i) = 0$ and $\br(\ce_{i+1}) = 1$, we mutate it to $(\ce_{i+1},\RR_{\ce_{i+1}}\ce_i)$. Let us show that $\br(\RR_{\ce_{i+1}}\ce_i) = \pm1$. Indeed, by~\eqref{eq:chi-minus-k} we have \begin{equation*} \chi(\ce_i,\ce_{i+1}) = \chi_-(\ce_i,\ce_{i+1}) = - K_\mathrm{G} \cdot \bl(\ce_i,\ce_{i+1}). \end{equation*} By rank assumptions and~\eqref{eq:def-lambda} we have $\bl(\ce_i,\ce_{i+1}) = -\ce_i$, hence $\chi(\ce_i,\ce_{i+1}) = K_\mathrm{G} \cdot \ce_i = \pm 1$. Therefore by~\eqref{eq:mutations} we have $\br(\RR_{\ce_{i+1}}\ce_i) = \chi(\ce_i,\ce_{i+1})\br(\ce_{i+1}) - \br(\ce_i) = \pm1$. If the rank is $-1$, we change the sign of the element to make the rank equal~1. Clearly, after a finite number of such mutations we will get an exceptional basis consisting of rank~1 elements only. \end{proof} \subsection{Criterion for existence of an exceptional basis} It is easy to show that the criterion for existence of a numerical exceptional collection in the derived category of a surface proved by Charles Vial in~\cite{V} also works for surface-like pseudolattices. We start with a simple lemma. \begin{lemma}\label{lemma:chi-1} Let $(\mathrm{G},\chi)$ be a pseudolattice with $K_\mathrm{G}$ integral and characteristic. If $\br(v_1) = \br(v_2) = 1$ then $\chi(v_1,v_1) \equiv \chi(v_2,v_2) \pmod 2$. Moreover, if $\chi(v,v)$ is odd for some $v \in \mathrm{G}$ with $\br(v) = 1$, then for an appropriate choice of~$v$ one has $\chi(v,v) = 1$. \end{lemma} \begin{proof} We have $v_2 = v_1 + D$, where $\br(D) = 0$. Therefore \begin{equation*} \chi(v_2,v_2) = \chi(v_1,v_1) + \chi(v_1,D) + \chi(D,v_1) + \chi(D,D). \end{equation*} Furthermore, by~\eqref{eq:chi-minus-k} we have \begin{equation*} \chi(v_1,D) + \chi(D,v_1) \equiv K_\mathrm{G} \cdot \bl(v_1,D) = K_\mathrm{G} \cdot D \pmod 2, \end{equation*} and by Lemma~\ref{lemma:chi-ker-r} we have $\chi(D,D) \equiv D^2 \pmod 2$. Therefore \begin{equation*} \chi(v_2,v_2) - \chi(v_1,v_1) \equiv K_\mathrm{G} \cdot D + D^2 \equiv 0 \pmod 2, \end{equation*} since $K_\mathrm{G}$ is characteristic. For the second part note that \begin{equation*} \chi(v+t\bp,v+t\bp) = \chi(v,v) + 2t, \end{equation*} so if $\chi(v,v)$ is odd, an appropriate choice of $t$ ensures that $\chi(v+t\bp,v+t\bp) = 1$. \end{proof} The condition that $\chi(v,v)$ is odd for a rank 1 vector $v \in \mathrm{G}$ is thus independent of the choice of $v$ and is equivalent to existence of a rank 1 vector $v$ with $\chi(v,v) = 1$. If it holds we will say that {\sf $(\mathrm{G},\chi)$ represents~$1$ by a rank~$1$ vector}. This is a pseudolattice analogue of Vial's condition $\chi(\cO_X) = 1$ for a surface $X$. In terms of the matrix representation~\eqref{eq:chi-matrix} of a pseudolattice this condition can be rephrased as $a \equiv 1 \pmod 2$. \begin{theorem}[\protect{cf.~\cite[Theorem~3.1]{V}}]\label{theorem:criterion} Let $(\mathrm{G},\chi)$ be a unimodular geometric pseudolattice of rank $n \ge 3$ and zero defect such that $(\mathrm{G},\chi)$ represents $1$ by a rank~$1$ vector. 
Then we have the following equivalences: \begin{enumerate} \item $n = 3$ and $K_\mathrm{G} = -3H$ for some $H \in \NS(\mathrm{G})$ if and only if $\mathrm{G}$ is isometric to $\NGr(\bD(\PP^2))$; \item $n = 4$, $\NS(\mathrm{G})$ is even and $K_\mathrm{G} = -2H$ for some $H \in \NS(\mathrm{G})$ if and only if $\mathrm{G}$ is isometric to $\NGr(\bD(\PP^1 \times \PP^1))$; \item $n \ge 4$, $\NS(\mathrm{G})$ is odd and $K_\mathrm{G}$ is primitive if and only if $\mathrm{G}$ is isometric to $\NGr(\bD(X_{n-3}))$, where $X_{n-3}$ is the blowup of $\PP^2$ in $n-3$ points. \end{enumerate} Furthermore, $\mathrm{G}$ has an exceptional basis if and only if one of the three possibilities listed above is satisfied. \end{theorem} \begin{proof} The fact that for the surfaces $\PP^2$, $\PP^1 \times \PP^1$ and $X_{n-3}$ (with the standard surface-like structure) the numerical Grothendieck group has the properties listed in (1), (2), and (3) is evident. Let us check the converse. Assume $n = 3$ and $K_\mathrm{G} = -3H$. By the zero defect assumption we have $9H^2 = K_\mathrm{G}^2 = 12 - 3 = 9$, hence, using the signature assumption, we have $H^2 = 1$ and $H$ is primitive. By Lemma~\ref{lemma:chi-1} and Remark~\ref{remark:matrix}, with an appropriate choice of a vector $v_0 \in \mathrm{G}$ the matrix of $\chi$ in the basis $(v_0,H,\bp)$ has the form \begin{equation*} \chi = \left( \begin{smallmatrix} 1 & 3 & 1 \\ 0 & -1 & 0 \\ 1 & 0 & 0 \end{smallmatrix} \right). \end{equation*} The above matrix is equal to the matrix of $\chi$ in $\NGr(\bD(\PP^2))$ in the basis $(\cO(-2H),\cO_L,\cO_P)$, where $L$ is a line and $P$ is a point, hence $\mathrm{G}$ is isometric to $\NGr(\bD(\PP^2))$. Assume $n = 4$, $\NS(\mathrm{G})$ is even, and $K_\mathrm{G} = -2H$. By the zero defect assumption we have $4H^2 = K_\mathrm{G}^2 = 12 - 4 = 8$, hence $H^2 = 2$ and $H$ is primitive. By the parity and signature assumptions, $\NS(\mathrm{G})$ is the hyperbolic lattice, hence we may write $H = H_1 + H_2$, where $(H_1,H_2)$ is the standard hyperbolic basis. By Lemma~\ref{lemma:chi-1} and Remark~\ref{remark:matrix}, with an appropriate choice of a vector $v_0 \in \mathrm{G}$ the matrix of $\chi$ in the basis $(v_0,H_1,H_2,\bp)$ has the form \begin{equation*} \chi = \left( \begin{smallmatrix} 1 & 2 & 2 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{smallmatrix} \right). \end{equation*} The above matrix is equal to the matrix of $\chi$ in $\NGr(\bD(\PP^1 \times \PP^1))$ in the basis $(\cO(-H_1 - H_2),\cO_{L_1}, \cO_{L_2}, \cO_P)$, where $L_1$ and~$L_2$ are the two rulings and $P$ is a point, hence $\mathrm{G}$ is isometric to $\NGr(\bD(\PP^1 \times \PP^1))$. Finally, assume $n \ge 4$, $\NS(\mathrm{G})$ is odd and $K_\mathrm{G}$ is primitive. By~\cite[Proposition~A.12]{V} there is a basis $\ce_1,\dots,\ce_{n-3},H$ in $\NS(\mathrm{G})$ such that \begin{equation*} \ce_i \cdot \ce_j = - \delta_{ij}, \qquad H^2 = 1, \qquad \ce_i \cdot H = 0, \qquad\text{and}\qquad K_\mathrm{G} = -3H + \sum \ce_i. 
\end{equation*} By Lemma~\ref{lemma:chi-1} and Remark~\ref{remark:matrix} with appropriate choice of a vector $v_0 \in \mathrm{G}$ the matrix of $\chi$ in the basis $(v_0,H,\ce_1,\dots,\ce_{n-3},\bp)$ has form \begin{equation*} \chi = \left( \begin{smallmatrix} 1 & 3 & 1 & \dots & 1 & 1 \\ 0 & -1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 0 \\ 1 & 0 & 0 & \dots & 0 & 0 \end{smallmatrix} \right) \end{equation*} This matrix is equal to the matrix of $\chi$ in $\NGr(\bD(X_{n-3}))$ in the basis $(\cO(-2H),\cO_{L}, \cO_{E_1}, \dots, \cO_{E_{n-3}}, \cO_P)$, where $L$ is a general line on $\PP^2$ and $E_1$, \dots, $E_{n-3}$ are the exceptional divisors, hence $\mathrm{G}$ is isometric to~$\NGr(\bD(X_{n-3}))$. Now let us prove the second part of the theorem. Assume $\mathrm{G}$ has an exceptional basis. If $\mathrm{G}$ is minimal, then by Theorem~\ref{theorem:minimal-cats} we know that $\mathrm{G}$ is isometric to $\bD(\PP^2)$ or $\bD(\PP^1 \times \PP^1)$, hence either (1) or (2) holds. If $\mathrm{G}$ is not minimal, by Theorem~\ref{theorem-ranks-0-1} the exceptional basis can be transformed by mutations into a norm-minimal exceptional collection such that $\br(\ce_i) = 0$ for~$i \le k$ and $\br(\ce_i) = 1$ for $i \ge k + 1$ and $n - k \le 4$, $k \ge 1$. We have $\chi(\ce_n,\ce_n) = 1$, hence $\chi$ represents~$1$ by a rank 1 vector. Moreover, $\ce_1^2 = -1$, hence $\NS(\mathrm{G})$ is odd, and as it was shown in the proof of Theorem~\ref{theorem-ranks-1}, we have $K_\mathrm{G} \cdot \ce_1 = \pm 1$, hence $K_\mathrm{G}$ is primitive. Thus (3) holds. \end{proof} \end{document}
\begin{document} \title{Quantum parameter estimation in a dissipative environment} \author{Wei Wu} \email{[email protected]} \affiliation{School of Physical Science and Technology, Lanzhou University,\\ Lanzhou 730000, People's Republic of China} \author{Chuan Shi} \affiliation{School of Physical Science and Technology, Lanzhou University,\\ Lanzhou 730000, People's Republic of China} \begin{abstract} We investigate the performance of quantum parameter estimation based on a qubit-probe in a dissipative bosonic environment beyond the traditional paradigm of weak-coupling and rotating wave approximations. By making use of a numerically exact hierarchical equations of motion method, we analyze the influence of the non-Markovian memory effect induced by the environment and of the form of the probe-environment interaction on the estimation precision. It is found that (i) the non-Markovianity can effectively boost the estimation performance; and (ii) the estimation precision can be improved by introducing a perpendicular probe-environment interaction. Our results indicate that the scheme of parameter estimation in a noisy environment can be optimized via engineering the decoherence mechanism. \end{abstract} \maketitle \section{Introduction}\label{sec:sec1} Ultra-sensitive parameter estimation plays an important role in both theoretical and practical research. It has a wide range of applications, from gravitational wave detection~\cite{PhysRevLett.123.231107,PhysRevLett.123.231108} and atomic clock synchronization~\cite{PhysRevLett.114.103601,PhysRevLett.113.154101} to various high-accuracy thermometries~\cite{PhysRevLett.114.220405,PhysRevB.98.045101,PhysRevX.10.011018} and magnetometers~\cite{PhysRevLett.115.200501,PhysRevLett.116.240801,Bhattacharjee_2020}. Many previous studies have revealed that certain quantum resources, for example, entanglement~\cite{Nagata726,Zou6381,PhysRevLett.121.160502,PhysRevA.92.032317} and quantum squeezing~\cite{PhysRevD.23.1693,PhysRevLett.119.193601}, can substantially improve the estimation precision and beat the shot-noise limit (standard quantum limit), which is set by the law of classical statistics. Thus, using quantum technology to attain a higher estimation accuracy has become a hot topic in the last few decades, and the theory of quantum parameter estimation has been established correspondingly~\cite{RevModPhys.89.035002,RevModPhys.90.035005}. Quantum Fisher information (QFI) lies at the heart of quantum parameter estimation~\cite{683779922,683779921,Liu_2019}. Roughly speaking, it characterizes the statistical information which is extractable from a quantum state carrying the parameter of interest. In this sense, the QFI theoretically determines the minimal estimation error, which is independent of specific measurement schemes. Moreover, going beyond the scope of quantum estimation theory, it has been revealed that the QFI can also be used to detect quantum phase transitions of many-body systems~\cite{PhysRevA.80.012318,PhysRevA.82.022306,Wu2016,Wang_2014}, quantify the smallest evolution time for a quantum process~\cite{PhysRevA.85.052127,PhysRevLett.110.050402,Deffner_2017}, and measure the non-Markovian information flow in an open quantum system~\cite{PhysRevA.82.042103,PhysRevA.91.042110,Li_2019}. In any actual parameter estimation scheme, the quantum probe, carrying the parameter of interest, unavoidably interacts with its surrounding environment, which generally degrades the quantum resources carried by the probe and induces the deterioration of quantum coherence. 
In this sense, the probe and its surrounding environment form an open quantum system~\cite{RevModPhys.59.1,RevModPhys.88.021002,RevModPhys.89.015001}, which implies that the estimation performance can be severely influenced by the environment. To gain a global view and more physical insight into the quantum parameter estimation problem, the estimation scheme should be investigated within the framework of quantum dissipative dynamics, and the question of how to mitigate the impact of the noise should be taken into account~\cite{Demkowicz2012,PhysRevLett.113.250801,PhysRevLett.112.120405,1912.04675}. Almost all the existing studies of parameter estimation in a noisy environment have restricted their attention to exactly solvable situations. For example, they usually assume that the probe suffers a pure dephasing decoherence channel~\cite{PhysRevLett.109.233601,Razavian2019} or certain special amplitude-damping decoherence channels~\cite{PhysRevA.88.035806,PhysRevA.87.032102,Wang_2017,PhysRevA.97.012126,SALARISEHDARAN2019126006}. Very few studies focus on the more general case, where both the dephasing mechanism and quantum relaxation are considered. Considering the fact that the real decoherence process is intricate, generalizing the study of noisy parameter estimation to a more general dissipative environment is highly desirable from both theoretical and experimental perspectives. To address the above concern, one needs to overcome the difficulty of achieving an accurate dynamical description of the quantum probe, which is coupled to a general dissipative environment. Therefore, an efficient and reliable approach is typically required. In this paper, we adopt the hierarchical equations of motion (HEOM) approach~\cite{doi:10.1143/JPSJ.58.101,YAN2004216,PhysRevE.75.031107,doi:10.1063/1.2938087,PhysRevA.85.062323} to handle this problem. The HEOM is a set of time-local differential equations for the reduced density matrix of the probe, which can provide a completely equivalent description of the exact Schr\"{o}dinger equation (or the quantum von Neumann equation). This method is beyond the usual Markovian approximation, the rotating-wave approximation (RWA), and the perturbative approximation. Thus, the HEOM can be viewed as a numerically exact treatment of the quantum dissipative dynamics. In recent years, the HEOM approach has been successfully used to study the anomalous decoherence phenomenon in a nonlinear spin-boson model~\cite{PhysRevA.94.062116}, the quantum Zeno and anti-Zeno phenomena in a noisy environment~\cite{PhysRevA.95.042132}, as well as the influence of counter-rotating-wave terms on the measure of non-Markovianity~\cite{PhysRevA.96.032125}. In this paper, we employ the HEOM method to study the quantum parameter estimation problem in a general dissipative environment. In Sec.~\ref{sec:sec2}, we briefly outline some basic concepts as well as the general formalism of quantum parameter estimation. In Sec.~\ref{sec:sec3}, we present the three different methods employed in this paper in detail, including the HEOM approach, the general Bloch equation (GBE) technique~\cite{PhysRevB.71.035318,PhysRevB.79.125317} and the RWA treatment. Compared with the RWA approach, the effect of counter-rotating-wave terms is considered in the GBE method. Thus, it can be employed as a benchmark for the purely numerical HEOM approach. The main results and the conclusions of this paper are presented in Sec.~\ref{sec:sec4} and Sec.~\ref{sec:sec5}, respectively. 
Throughout the paper, we set $\hbar=k_{\mathrm{B}}=1$ for the sake of simplicity, and all other quantities are expressed in dimensionless units as well. \section{Noisy quantum parameter estimation}\label{sec:sec2} In the theory of quantum parameter estimation, the parameter's information is commonly encoded into the state of the quantum probe via a unitary~\cite{PhysRevLett.79.3865,Hauke2016,McCormick2019} or non-unitary dynamics~\cite{PhysRevLett.109.233601,Razavian2019,PhysRevA.88.035806,PhysRevA.87.032102,Wang_2017,PhysRevA.97.012126,SALARISEHDARAN2019126006,Haase_2018,PhysRevLett.123.040402,tamascelli2020quantum}. Then, one can extract the information about the parameter $\theta$ from the output state of the probe $\rho_{\theta}$ via repeated quantum measurements. In such a quantum parameter estimation process, one cannot completely eliminate all errors and estimate $\theta$ exactly. There exists a minimal estimation error, which cannot be removed by optimizing the estimation scheme and is given by the famous quantum Cram\'{e}r-Rao bound~\cite{683779922,683779921,Liu_2019} \begin{equation}\label{eq:eq1} \delta \theta\geq\frac{1}{\sqrt{\upsilon F(\theta)}}, \end{equation} where $\delta \theta$ is the root-mean-square error of the estimate of $\theta$, $\upsilon$ is the number of repeated measurements (in this paper, we set $\upsilon=1$ for the sake of convenience), and $F(\theta)\equiv\mathrm{Tr}(\rho_{\theta}\hat{L}_{\theta}^{2})$ with $\hat{L}_{\theta}$ determined by $\partial_{\theta}\rho_{\theta}=\frac{1}{2}(\hat{L}_{\theta}\rho_{\theta}+\rho_{\theta}\hat{L}_{\theta})$ is the QFI with respect to the output state $\rho_{\theta}$. From Eq.~(\ref{eq:eq1}), one can immediately find that the optimal estimation precision is completely determined by the value of the QFI: the larger the QFI, the smaller the estimation error is. How to saturate the smallest theoretical error (or boost the QFI) is the most crucial problem in the field of quantum parameter estimation. To compute the QFI from the $\theta$-dependent density operator $\rho_{\theta}$, one first needs to diagonalize $\rho_{\theta}$ as $\rho_{\theta}=\sum_{\ell}\xi_\ell|\xi_\ell\rangle\langle\xi_\ell|$, where $\xi_{\ell}\equiv\xi_{\ell}(\theta)$ and $|\xi_{\ell}\rangle\equiv|\xi_{\ell}(\theta)\rangle$ are eigenvalues and eigenvectors of $\rho_{\theta}$, respectively. Then, the QFI can be computed as~\cite{Liu_2019} \begin{equation}\label{eq:eq2} \begin{split} F(\theta)=&\sum_{\ell}\frac{(\partial_{\theta}\xi_{\ell})^{2}}{\xi_{\ell}}+\sum_{\ell}4\xi_{\ell}\langle\partial_{\theta}\xi_{\ell}|\partial_{\theta}\xi_{\ell}\rangle\\ &-\sum_{\ell,\ell'}\frac{8\xi_{\ell}\xi_{\ell'}}{\xi_{\ell}+\xi_{\ell'}}|\langle\partial_{\theta}\xi_{\ell}|\xi_{\ell'}\rangle|^{2}. 
\end{split} \end{equation} Specifically, for a two-dimensional density operator described in the Bloch representation, namely, $\rho_{\theta}=\frac{1}{2}(\mathbf{1}_{2}+\langle\pmb{\hat{\underline{\sigma}}}\rangle\cdot\pmb{\hat{\sigma}})$ with $\langle\pmb{\hat{\underline{\sigma}}}\rangle\equiv(\langle\hat{\sigma}_{x}\rangle,\langle\hat{\sigma}_{y}\rangle,\langle\hat{\sigma}_{z}\rangle)^{\mathbb{T}}$ being the Bloch vector and $\pmb{\hat{\sigma}}\equiv(\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z})$ being the vector of Pauli matrices, Eq.~(\ref{eq:eq2}) can be further simplified to~\cite{Liu_2019} \begin{equation}\label{eq:eq3} F(\theta)=|\partial_{\theta}\langle\pmb{\hat{\underline{\sigma}}}\rangle|^{2}+\frac{(\langle\pmb{\hat{\underline{\sigma}}}\rangle\cdot\partial_{\theta}\langle\pmb{\hat{\underline{\sigma}}}\rangle)^{2}}{1-|\langle\pmb{\hat{\underline{\sigma}}}\rangle|^{2}}. \end{equation} For the pure-state case, the above equation reduces to $F(\theta)=|\partial_{\theta}\langle\pmb{\hat{\underline{\sigma}}}\rangle|^{2}$. Compared with Eq.~(\ref{eq:eq2}), Eq.~(\ref{eq:eq3}) is easier to compute in practice, because it avoids the operation of diagonalization. In this work, we assume that a qubit, acting as the probe and carrying the parameter of interest, is linearly coupled to a dissipative environment. The Hamiltonian of the quantum probe is described by $\hat{H}_{\mathrm{s}}=\frac{1}{2}\Delta\hat{\sigma}_{x}$, where $\Delta$ represents the frequency of tunneling between the two levels of the qubit and \emph{is the encoded parameter to be estimated in this paper}. We assume the dissipative environment is modeled by a set of harmonic oscillators, i.e., $\hat{H}_{\mathrm{b}}=\sum_{k}\omega_{k}\hat{b}_{k}^{\dagger}\hat{b}_{k}$, where $\hat{b}_{k}^{\dagger}$ and $\hat{b}_{k}$ are the creation and annihilation operators of the $k$th harmonic oscillator with corresponding frequency $\omega_{k}$, respectively. Thus, the Hamiltonian of the whole qubit-probe plus the environment is given by $\hat{H}=\hat{H}_{\mathrm{s}}+\hat{H}_{\mathrm{b}}+\hat{H}_{\mathrm{i}}$. Here, we assume that the probe-environment interaction can be described in the following linear form \begin{equation}\label{eq:eq4} \hat{H}_{\mathrm{i}}=\mathcal{\hat{S}}\otimes\mathcal{\hat{B}}, \end{equation} where $\hat{\mathcal{S}}$ denotes the probe's operator coupled to its surrounding environment, and $\mathcal{\hat{B}}\equiv\sum_{k}g_{k}(\hat{b}_{k}^{\dagger}+\hat{b}_{k})$ with $g_{k}$ being the coupling strength between the probe and the $k$th environmental mode. After a period of non-unitary dynamics, the information about $\Delta$ is then encoded in the reduced density operator of the probe, namely, $\varrho_{\mathrm{s}}(t)\equiv \mathrm{Tr}_{\mathrm{b}}[e^{-i\hat{H}t}\varrho_{\mathrm{sb}}(0)e^{i\hat{H}t}]$. Here, $\varrho_{\mathrm{sb}}(0)$ is the initial state of the whole probe-environment system. Generally speaking, the ultimate estimation precision associated with $\varrho_{\mathrm{s}}(t)$ depends on a number of factors. In this paper, we concentrate on the following two elements: \emph{the characteristics of the environment} and \emph{the form of the probe-environment coupling operator}. 
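As a minimal numerical illustration of Eq.~(\ref{eq:eq3}), the Bloch-vector expression for the QFI can be evaluated in a few lines of Python. The function name \texttt{qfi\_bloch} and the toy example below are purely illustrative and are not part of the numerical code used later in the paper.
\begin{verbatim}
import numpy as np

def qfi_bloch(r, dr):
    """QFI from Eq. (3): r is the Bloch vector of the output state and
    dr is its derivative with respect to the parameter theta."""
    r, dr = np.asarray(r, dtype=float), np.asarray(dr, dtype=float)
    F = dr @ dr
    if not np.isclose(r @ r, 1.0):      # mixed state: include the second term
        F += (r @ dr) ** 2 / (1.0 - r @ r)
    return F

# toy example: a pure state rotating in the x-y plane, r = (cos t, sin t, 0)
theta = 0.3
r = [np.cos(theta), np.sin(theta), 0.0]
dr = [-np.sin(theta), np.cos(theta), 0.0]
print(qfi_bloch(r, dr))                 # prints 1.0, as expected for a pure state
\end{verbatim}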
The property of the environment is mainly reflected by the environmental auto-correlation function, which is defined by \begin{equation}\label{eq:eq5} \alpha(t)\equiv\mathrm{Tr}_{\mathrm{b}}\Big{(}e^{it\hat{H}_{\mathrm{b}}}\hat{\mathcal{B}}e^{-it\hat{H}_{\mathrm{b}}}\hat{\mathcal{B}}\varrho_{\mathrm{b}}\Big{)}, \end{equation} with $\varrho_{\mathrm{b}}$ being the initial state of the environment. Thus, we shall discuss the effect of $\alpha(t)$ and $\mathcal{\hat{S}}$ on the QFI with respect to $\varrho_{\mathrm{s}}(t)$. The determination of the QFI requires the knowledge of the reduced density operator $\varrho_{\mathrm{s}}(t)$. Unfortunately, except in a few special situations, the exact expression of $\varrho_{\mathrm{s}}(t)$ is generally difficult to obtain. To overcome this difficulty, we adopt the following three different methods to evaluate $F(\Delta)$. \section{Methodology}\label{sec:sec3} In this section, we introduce the dynamical formulations employed in our study. The first one is the HEOM method, which can provide rigorous numerical results. For comparison, we also present two analytical methods: the GBE and the RWA approaches. In this paper, we assume that the initial state of the whole probe-environment system has a factorized form, i.e., $\varrho_{\mathrm{sb}}(0)=\varrho_{\mathrm{s}}(0)\otimes\varrho_{\mathrm{b}}$, where $\varrho_{\mathrm{b}}=|\mathbf{0}_{k}\rangle\langle \mathbf{0}_{k}|$ with $|\mathbf{0}_{k}\rangle\equiv\bigotimes_{k}|0_{k}\rangle$ being the Fock vacuum state of the environment. \subsection{HEOM} The HEOM can be viewed as a bridge linking the well-known Schr\"{o}dinger equation, which is exact but generally difficult to solve straightforwardly, and a set of ordinary differential equations, which can be handled numerically by using the Runge-Kutta method. Establishing such a connection, which must be carefully designed so as not to lose any important dynamical feature of the quantum probe, is the most important step in the HEOM treatment~\cite{PhysRevA.98.012110,PhysRevA.98.032116}. In many previous references, the HEOM algorithm is realized by making use of the path-integral influence functional approach~\cite{PhysRevE.75.031107,doi:10.1063/1.2938087}. In this paper, we establish the HEOM in an alternative way: within the framework of the non-Markovian quantum state diffusion approach~\cite{PhysRevA.98.012110,PhysRevLett.113.150403,Suess2015}. The dynamics of $\hat{H}$ is determined by the Schr\"{o}dinger equation $\partial_{t}|\Psi_{\mathrm{sb}}(t)\rangle=-i\hat{H}|\Psi_{\mathrm{sb}}(t)\rangle$, where $|\Psi_{\mathrm{sb}}(t)\rangle$ is the pure-state wave function of the whole probe-environment system. Any straightforward treatment of the above Schr\"{o}dinger equation can be rather troublesome, because of the large number of degrees of freedom. 
However, by employing the bosonic coherent state, which is defined by $|\mathbf{z}\rangle=\bigotimes_{k}|z_{k}\rangle$ with $|z_{k}\rangle\equiv e^{z_{k}\hat{b}_{k}^{\dagger}}|0_{k}\rangle$, one can recast the original Schr\"{o}dinger equation into the following stochastic quantum state diffusion equation (see the Appendix for more details) \begin{equation}\label{eq:eq6} \begin{split} \frac{\partial}{\partial t}|\psi_{t}(\textbf{z}^{*})\rangle=&-i\hat{H}_{\mathrm{s}}|\psi_{t}(\textbf{z}^{*})\rangle+\hat{\mathcal{S}}\textbf{z}_{t}^{*}|\psi_{t}(\textbf{z}^{*})\rangle\\ &-\hat{\mathcal{S}}\int_{0}^{t}d\tau\alpha(t-\tau)\frac{\delta}{\delta \textbf{z}_{\tau}^{*}}|\psi_{t}(\textbf{z}^{*})\rangle, \end{split} \end{equation} where $|\psi_{t}(\textbf{z}^{*})\rangle\equiv\langle \textbf{z}|\Psi_{\mathrm{sb}}(t)\rangle$ is the total pure-state wave function in the coherent-state representation, and the variable $\textbf{z}_{t}\equiv i\sum_{k}g_{k}z_{k}e^{-i\omega_{k}t}$ can be regarded as a stochastic Gaussian colored noise satisfying $\mathcal{M}\{\textbf{z}_{t}\}=\mathcal{M}\{\textbf{z}_{t}^{*}\}=0$ and $\mathcal{M}\{\textbf{z}_{t}\textbf{z}_{\tau}^{*}\}=\alpha(t-\tau)$. Here $\mathcal{M}\{...\}$ denotes the statistical mean over all the possible quantum trajectories, and $\alpha(t)=\sum_{k}g_{k}^{2}e^{-i\omega_{k}t}$ is the auto-correlation function at zero temperature. In this paper, we concentrate on an Ornstein-Uhlenbeck-type auto-correlation function, namely \begin{equation}\label{eq:eq7} \alpha(t)=\frac{1}{2}\Gamma\gamma e^{-\gamma t}, \end{equation} where $\Gamma$ can be viewed as the probe-environment coupling strength and $\gamma$ is connected to the memory time of the environment. Notice that the auto-correlation function has an exponential form in time, which means $\partial_{t}\alpha(t)=-\gamma\alpha(t)$. Using this property, we can replace the stochastic quantum state diffusion equation in Eq.~(\ref{eq:eq6}) with a set of hierarchical equations for the pure-state wave function $|\psi_{t}(\textbf{z}^{*})\rangle$ as follows~\cite{PhysRevA.98.012110,PhysRevLett.113.150403,Suess2015} \begin{equation}\label{eq:eq8} \begin{split} \frac{\partial}{\partial t}|\psi_{t}^{(m)}\rangle=&(-i\hat{H}_{\mathrm{s}}-m\gamma+\hat{\mathcal{S}}\textbf{z}_{t}^{*})|\psi_{t}^{(m)}\rangle\\ &+\frac{1}{2}m\Gamma\gamma\hat{\mathcal{S}}|\psi_{t}^{(m-1)}\rangle-\hat{\mathcal{S}}|\psi_{t}^{(m+1)}\rangle, \end{split} \end{equation} where \begin{equation*} |\psi_{t}^{(m)}\rangle\equiv\Bigg{[}\int_{0}^{t}d\tau \alpha(t-\tau)\frac{\delta}{\delta \textbf{z}_{\tau}^{*}}\Bigg{]}^{m}|\psi_{t}(\textbf{z}^{*})\rangle \end{equation*} are auxiliary pure-state wave functions. The hierarchy equations for $|\psi_{t}(\textbf{z}^{*})\rangle$ in Eq.~(\ref{eq:eq8}) no longer contain functional derivatives, but they still contain stochastic noise terms, which hinder the efficiency of the numerical simulation. To extract a deterministic equation of motion for the reduced density operator, one needs to trace out the degrees of freedom of the environment by taking the statistical mean over all the possible quantum trajectories~\cite{PhysRevA.98.012110,PhysRevA.98.032116}. The expression of the reduced density operator is then given by $\varrho_{\mathrm{s}}(t)=\varrho_{t}\equiv\mathcal{M}\big{\{}|\psi_{t}(\textbf{z}^{*})\rangle\langle\psi_{t}(\textbf{z}^{*})|\big{\}}$. 
As shown in Ref.~\cite{PhysRevA.98.012110}, the equation of motion for $\varrho_{t}$ can be derived from Eq.~(\ref{eq:eq8}), and reads \begin{equation}\label{eq:eq9} \begin{split} \frac{d}{dt}\varrho_{t}^{(m,n)}=&-i\Big{[}\hat{H}_{\mathrm{s}},\varrho_{t}^{(m,n)}\Big{]}-\gamma(m+n)\varrho_{t}^{(m,n)}\\ &+\frac{1}{2}\Gamma\gamma\Big{[}m\hat{\mathcal{S}}\varrho_{t}^{(m-1,n)}+n\varrho_{t}^{(m,n-1)}\hat{\mathcal{S}}\Big{]}\\ &-\Big{[}\hat{\mathcal{S}},\varrho_{t}^{(m+1,n)}\Big{]}+\Big{[}\hat{\mathcal{S}},\varrho_{t}^{(m,n+1)}\Big{]}, \end{split} \end{equation} where $\varrho_{t}^{(m,n)}\equiv\mathcal{M}\big{\{}|\psi_{t}^{(m)}(\textbf{z}^{*})\rangle\langle\psi_{t}^{(n)}(\textbf{z}^{*})|\big{\}}$ are auxiliary reduced density operators. Eq.~(\ref{eq:eq9}) is nothing but a set of ordinary differential equations, which will be solved in our numerical simulations. The initial conditions of the auxiliary operators are $\varrho^{(0,0)}_{t=0}=\varrho_{\mathrm{s}}(0)$ and $\varrho^{(m,n)}_{t=0}=0$ for $m+n>0$. In numerical simulations, we need to truncate the hierarchical equations by choosing a sufficiently large integer $N$. All the terms of $\varrho^{(m,n)}_{t}$ with $m+n>N$ are set to zero, while the terms of $\varrho^{(m,n)}_{t}$ with $m+n\leq N$ form a closed set of ordinary differential equations which can be solved directly by using the fourth-order Runge-Kutta method. It is necessary to emphasize that no approximation is invoked in the above derivation from Eq.~(\ref{eq:eq6}) to Eq.~(\ref{eq:eq9}), which means the mapping from the original Schr\"{o}dinger equation to the hierarchy equations given by Eq.~(\ref{eq:eq9}) is exact. In this sense, the numerics obtained from Eq.~(\ref{eq:eq9}) should be viewed as rigorous results. \subsection{GBE} If $\hat{\mathcal{S}}=\hat{\sigma}_{z}$, Eq.~(\ref{eq:eq4}) describes a purely transversal or perpendicular interaction (recalling that $\hat{H}_{\mathrm{s}}=\frac{1}{2}\Delta\hat{\sigma}_{x}$). Thus, the probe-environment system has the same structure as the famous spin-boson model, which leads to both the loss of information and the dissipation of energy. The equation of motion of the spin-boson model is governed by the quantum von Neumann equation $\partial_{t}\varrho_{\mathrm{sb}}(t)=-i[\hat{H},\varrho_{\mathrm{sb}}(t)]$, which provides an exact dynamical prediction. Applying Zwanzig's projection technique with the Born approximation, the quantum von Neumann equation can be transformed into the well-known Nakajima--Zwanzig master equation~\cite{10.1143/PTP.20.948,doi:10.1063/1.1731409} \begin{equation}\label{eq:eq10} \frac{\partial}{\partial t}\varrho_{\mathrm{s}}(t)=-i\mathcal{\hat{L}}_{\mathrm{s}}\varrho_{\mathrm{s}}(t)-\int_{0}^{t}d\tau\hat{\Sigma}(t-\tau)\varrho_{\mathrm{s}}(\tau), \end{equation} where $\hat{\Sigma}(t)$ is the self-energy super-operator \begin{equation}\label{eq:eq11} \hat{\Sigma}(t)=\mathrm{Tr}_{\mathrm{b}}\Big{[}\mathcal{\hat{L}}_{\mathrm{i}}e^{-it\mathcal{\hat{Q}}(\mathcal{\hat{L}}_{\mathrm{s}}+\mathcal{\hat{L}}_{\mathrm{b}}+\mathcal{\hat{L}}_{\mathrm{i}})}\mathcal{\hat{L}}_{\mathrm{i}}\varrho_{\mathrm{b}}\Big{]}, \end{equation} where $\mathcal{\hat{L}}_{\mathrm{x}}$ with $\mathrm{x}=\mathrm{s},\mathrm{b},\mathrm{i}$ is the Liouvillian superoperator satisfying $\mathcal{\hat{L}}_{\mathrm{x}}\mathcal{\hat{O}}=[\hat{H}_{\mathrm{x}},\mathcal{\hat{O}}]$, and $\mathcal{\hat{Q}}\equiv 1-\varrho_{\mathrm{b}}\mathrm{Tr}_{\mathrm{b}}$ is Zwanzig's projection superoperator. 
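Before continuing with the GBE, it may help to record how the truncated hierarchy of Eq.~(\ref{eq:eq9}) can be propagated in practice. The following Python sketch is a minimal illustration only (it is not the code used to produce the figures of this paper): it transcribes Eq.~(\ref{eq:eq9}) literally, sets all terms with $m+n>N$ to zero, and integrates with a fourth-order Runge--Kutta step, as described above. All parameter values below are arbitrary and purely illustrative.
\begin{verbatim}
import numpy as np

# Pauli matrices; probe Hamiltonian H_s = (Delta/2) sigma_x, coupling S = sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def heom_rhs(rho, Hs, S, Gamma, gamma, N):
    """Right-hand side of Eq. (9); rho maps (m, n) -> 2x2 auxiliary operator."""
    out = {}
    for (m, n), r in rho.items():
        d = -1j * (Hs @ r - r @ Hs) - gamma * (m + n) * r
        if m > 0:
            d += 0.5 * Gamma * gamma * m * (S @ rho[(m - 1, n)])
        if n > 0:
            d += 0.5 * Gamma * gamma * n * (rho[(m, n - 1)] @ S)
        if m + n < N:  # operators above the truncation level are set to zero
            a, b = rho[(m + 1, n)], rho[(m, n + 1)]
            d += -(S @ a - a @ S) + (S @ b - b @ S)
        out[(m, n)] = d
    return out

def rk4_step(rho, dt, *args):
    lin = lambda r, k, c: {key: r[key] + c * k[key] for key in r}
    k1 = heom_rhs(rho, *args)
    k2 = heom_rhs(lin(rho, k1, dt / 2), *args)
    k3 = heom_rhs(lin(rho, k2, dt / 2), *args)
    k4 = heom_rhs(lin(rho, k3, dt), *args)
    return {key: rho[key] + dt / 6 * (k1[key] + 2 * k2[key] + 2 * k3[key] + k4[key])
            for key in rho}

# Illustrative parameters (not those used in the figures of this paper)
Delta, Gamma, gamma, N, dt, steps = 1.0, 0.3, 0.09, 15, 0.01, 2000
Hs, S = 0.5 * Delta * sx, sz
rho = {(m, n): np.zeros((2, 2), dtype=complex)
       for m in range(N + 1) for n in range(N + 1 - m)}
rho[(0, 0)] = np.array([[1, 0], [0, 0]], dtype=complex)  # illustrative initial state

sz_expect = []
for _ in range(steps):
    sz_expect.append(np.real(np.trace(sz @ rho[(0, 0)])))  # population difference
    rho = rk4_step(rho, dt, Hs, S, Gamma, gamma, N)
\end{verbatim}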
From Eq.~(\ref{eq:eq10}), one can notice that the evolution of $\varrho_{\mathrm{s}}(t)$ depends on $\varrho_{\mathrm{s}}(\tau)$ at all earlier times $0<\tau<t$, implying that the memory effect from the environment has been taken into account and is incorporated into the self-energy super-operator $\hat{\Sigma}(t-\tau)$. Thus, the result obtained from Eq.~(\ref{eq:eq10}) is non-Markovian. The exact treatment of the above Nakajima--Zwanzig master equation in Eq.~(\ref{eq:eq10}) is challenging. Fortunately, the self-energy super-operator of Eq.~(\ref{eq:eq11}) can be expanded in powers of the interaction Liouvillian $\mathcal{\hat{L}}_{\mathrm{i}}$. Retaining only the lowest-order term in the series, $\hat{\Sigma}(t)$ can be approximated as~\cite{PhysRevB.71.035318,PhysRevB.79.125317,WU2020168203} \begin{equation}\label{eq:eq12} \hat{\Sigma}(t)\simeq\mathrm{Tr}_{\mathrm{b}}\Big{[}\mathcal{\hat{L}}_{\mathrm{i}}e^{-it(\mathcal{\hat{L}}_{\mathrm{s}}+\mathcal{\hat{L}}_{\mathrm{b}})}\mathcal{\hat{L}}_{\mathrm{i}}\varrho_{\mathrm{b}}\Big{]}. \end{equation} Eq.~(\ref{eq:eq10}) together with the approximate $\hat{\Sigma}(t)$ in Eq.~(\ref{eq:eq12}) constitutes a general non-Markovian quantum master equation, which has been widely used in many previous studies~\cite{PhysRevB.71.035318,PhysRevB.79.125317,PhysRevA.89.062113}. By introducing the time-dependent Bloch vector $\langle\pmb{\hat{\underline{\sigma}}}(t)\rangle$ with $\langle\hat{\sigma}_{i}(t)\rangle\equiv\mathrm{Tr}_{\mathrm{s}}[\hat{\sigma}_{i}\varrho_{\mathrm{s}}(t)]$, one can rewrite the above general quantum master equation as the following GBE~\cite{PhysRevB.71.035318,PhysRevB.79.125317} \begin{equation}\label{eq:eq13} \frac{d}{dt}\langle\pmb{\hat{\underline{\sigma}}}(t)\rangle=\mathfrak{\hat{T}}(t)\diamond\langle\pmb{\hat{\underline{\sigma}}}(t)\rangle, \end{equation} where $\diamond$ denotes the convolution and \begin{equation*} \mathfrak{\hat{T}}(t)=\left[ \begin{array}{ccc} -\mathfrak{A}(t) & 0 & 0 \\ 0 & -\mathfrak{B}(t) & -\Delta\delta(t) \\ 0 & \Delta\delta(t) & 0 \\ \end{array} \right], \end{equation*} with $\mathfrak{A}(t)=4\cos(\Delta t)\alpha(t)$, $\mathfrak{B}(t)=4\alpha(t)$. By means of the Laplace transform, one can find \begin{equation}\label{eq:eq14} \begin{split} \langle\hat{\sigma}_{i}(\lambda)\rangle\equiv&\int_{0}^{\infty}dt\langle\hat{\sigma}_{i}(t)\rangle e^{-\lambda t}\\ =&\sum_{j}\mathfrak{F}_{ij}(\lambda)\langle\hat{\sigma}_{j}(0)\rangle, \end{split} \end{equation} where $i,j=x,y,z$. For the perpendicular probe-environment interaction case, the non-vanishing terms of $\mathfrak{F}_{ij}(\lambda)$ are \begin{equation*} \mathfrak{F}_{xx}(\lambda)=[\lambda+\mathfrak{A}(\lambda)]^{-1}, \end{equation*} \begin{equation*} \mathfrak{F}_{yy}(\lambda)=\bigg{[}\lambda+\mathfrak{B}(\lambda)+\frac{\Delta^{2}}{\lambda}\bigg{]}^{-1}, \end{equation*} \begin{equation*} \mathfrak{F}_{zz}(\lambda)=\lambda^{-1}[\lambda+\mathfrak{B}(\lambda)]\mathfrak{F}_{yy}(\lambda), \end{equation*} \begin{equation*} \mathfrak{F}_{yz}(\lambda)=-\mathfrak{F}_{zy}(\lambda)=-\Delta\lambda^{-1}\mathfrak{F}_{yy}(\lambda). \end{equation*} Then, for an arbitrary given initial state $\langle\pmb{\hat{\underline{\sigma}}}(0)\rangle$, the dynamics of $\langle\pmb{\hat{\underline{\sigma}}}(t)\rangle$ can be completely determined by the GBE method in Eq.~(\ref{eq:eq14}) with the help of the inverse Laplace transform. \subsection{RWA} For the purely perpendicular interaction case, one can use an alternative method, the RWA approach, to obtain the dynamical behavior of the probe. 
The RWA removes the counter-rotating-wave terms in $\hat{H}$ and yields the following approximate Hamiltonian \begin{equation}\label{eq:eq15} \hat{H}_{\mathrm{RWA}}=\frac{\Delta}{2}\hat{\sigma}_{x}+\sum_{k}\omega_{k}\hat{b}_{k}^{\dagger}\hat{b}_{k}+\sum_{k}g_{k}\big{(}\hat{\sigma}_{-}\hat{b}_{k}^{\dagger}+\hat{\sigma}_{+}\hat{b}_{k}\big{)}, \end{equation} where $\hat{\sigma}_{+}\equiv|+\rangle\langle -|$ and $\hat{\sigma}_{-}\equiv|-\rangle\langle +|$ with $|\pm\rangle$ being the eigenvectors of $\hat{\sigma}_{x}$, i.e., $\hat{\sigma}_{x}|\pm\rangle=\pm|\pm\rangle$. The Hamiltonian $\hat{H}_{\mathrm{RWA}}$ commutes with the total excitation number operator $\hat{\mathcal{N}}=\hat{\sigma}_{+}\hat{\sigma}_{-}+\sum_{k}\hat{b}_{k}^{\dagger}\hat{b}_{k}$, which is thus a constant of motion and greatly simplifies the reduced dynamics of the probe in this situation. At zero temperature, the reduced dynamics of the probe is exactly solvable in the RWA case and can be conveniently expressed in the basis of $\{|+\rangle,|-\rangle\}$ as follows \begin{equation}\label{eq:eq16} \varrho_{\mathrm{s}}(t)=\left[ \begin{array}{cc} \varrho_{++}(0)\mathcal{G}_{t}^{2} & \varrho_{+-}(0)\mathcal{G}_{t}e^{-i\Delta t} \\ \varrho_{-+}(0)\mathcal{G}_{t}e^{i\Delta t} & 1-\varrho_{++}(0)\mathcal{G}_{t}^{2} \\ \end{array} \right], \end{equation} where $\mathcal{G}_{t}$ is the decay factor. For the Ornstein-Uhlenbeck-type auto-correlation function considered in this paper, the exact expression of $\mathcal{G}_{t}$ is given by~\cite{PhysRevA.96.032125} \begin{equation}\label{eq:eq17} \begin{split} \mathcal{G}_{t}=&\exp\Big{(}-\frac{1}{2}\gamma t\Big{)}\bigg{[}\cosh\Big{(}\frac{1}{2}\Omega t\Big{)}+\frac{\gamma}{\Omega}\sinh\Big{(}\frac{1}{2}\Omega t\Big{)}\bigg{]}, \end{split} \end{equation} with $\Omega\equiv\sqrt{\gamma^{2}-2\gamma\Gamma}$. As shown in many previous studies~\cite{PhysRevA.85.062323,PhysRevA.96.032125}, the RWA is acceptable in the weak probe-environment coupling regime; we thus expect it to provide a reasonable prediction in that regime. \subsection{Comparison}\label{subsec:subsec3d} In Fig.~\ref{fig:fig1}, we display the dynamics of the population difference $\langle\hat{\sigma}_{z}(t)\rangle$ of the qubit-probe, which is a very common quantity of interest in experiments. For the RWA case, the exact expression of the population difference $\langle\hat{\sigma}_{z}(t)\rangle$ is given by $\langle\hat{\sigma}_{z}(t)\rangle_{\mathrm{RWA}}=\mathcal{G}_{t}\cos(\Delta t)$. For the Ornstein-Uhlenbeck-type correlation function considered in this paper, the boundary between Markovian and non-Markovian regimes can be approximately specified by the ratio $\gamma/\Gamma$~\cite{PhysRevA.95.042132,PhysRevLett.99.160502}. When $\gamma/\Gamma$ is large, the auto-correlation function reduces to a delta-correlated form, i.e., $\alpha(t-\tau)\simeq\Gamma\delta(t-\tau)$, which means the environment is memoryless and the decoherence dynamics is Markovian. On the contrary, if $\gamma/\Gamma$ is small, the environmental memory effect cannot be neglected and the corresponding decoherence is then non-Markovian. In fact, when $\gamma/\Gamma\rightarrow\infty$, one can demonstrate that the hierarchical equations in Eq.~(\ref{eq:eq8}) reduce to the common Markovian Lindblad-type master equation by only considering the zeroth order of the terminator~\cite{PhysRevLett.113.150403}. 
The relation between $\gamma/\Gamma$ and the degree of non-Markovianity has been studied by making use of trace distance~\cite{PhysRevA.81.062124,Wu2018} and dynamical divisibility~\cite{PhysRevA.83.062115}; these studies are consistent with our analysis above. We first consider the Markovian case, say $\gamma/\Gamma=10$ in Fig.~\ref{fig:fig1}(a). Good agreement is found between results from the HEOM and the GBE, while the prediction from the RWA exhibits a small deviation from the above two approaches. This deviation disappears if the probe-environment coupling becomes even weaker. Thus, the three different approaches give consistent results in the Markovian and weak-coupling regime. In the non-Markovian regime, the result from the GBE can still be in qualitative agreement with that of the numerical HEOM method if the coupling strength is weak; see Fig.~\ref{fig:fig1}(c). However, when the coupling becomes stronger, such as for the parameters chosen in Fig.~\ref{fig:fig1}(d), the GBE exhibits a relatively large deviation compared with the result from the HEOM, probably because it neglects the higher-order terms of the probe-environment coupling. On the contrary, the result calculated with the RWA gives a qualitatively incorrect conclusion in the entire non-Markovian regime, unless one only focuses on the short-time behavior of the population difference. \begin{figure} \caption{The dynamics of the population difference $\langle\hat{\sigma}_{z}(t)\rangle$ of the qubit-probe.} \label{fig:fig1} \end{figure} \begin{figure*} \caption{(a) The QFI $F(\Delta)$ is plotted as a function of time in the RWA case. Parameters are chosen as $\gamma=0.25\Gamma$ (red solid line), $\gamma=0.5\Gamma$ (purple dashed line) and $\gamma=5\Gamma$ (blue dot-dashed line) with $\Gamma=0.1$ and $\Delta=1$. (b) The QFI $F(\Delta)$ from the GBE method. Parameters are chosen as $\gamma=0.25\Gamma$ (red circles), $\gamma=0.35\Gamma$ (purple rhombuses) and $\gamma=0.5\Gamma$ (blue rectangles) with $\Gamma=0.2$ and $\Delta=0.2$. (c) The QFI obtained by the numerical HEOM method. Parameters are chosen as $\gamma=0.25\Gamma$ (red circles), $\gamma=0.4\Gamma$ (purple rhombuses) and $\gamma=0.6\Gamma$ (blue rectangles) with $\Gamma=0.15$ and $\Delta=0.2$.} \label{fig:fig2} \end{figure*} \section{Results}\label{sec:sec4} In this section, we study the influence of $\alpha(t)$ and $\hat{\mathcal{S}}$ on the estimation precision of $\Delta$ in a dissipative environment. During the numerical calculation of the exact QFI using the HEOM method, one needs to handle the first-order derivative with respect to the parameter $\Delta$, namely $\partial_{\Delta}\langle\hat{\sigma}_{i}(t)\rangle$ (see Eq.~(\ref{eq:eq3})). In this paper, the derivative of an arbitrary $\theta$-dependent function $f_{\theta}$ is computed numerically by adopting the following finite-difference formula \begin{equation}\label{eq:eq18} \frac{\partial f_{\theta}}{\partial\theta}\simeq\frac{-f_{\theta+2\epsilon}+8f_{\theta+\epsilon}-8f_{\theta-\epsilon}+f_{\theta-2\epsilon}}{12\epsilon}. \end{equation} In our numerical simulations, we set $\epsilon/\theta=10^{-5}$, which provides very good accuracy for the finite-difference approximation. In this section, we assume that the initial state of the quantum probe is given by $\frac{1}{\sqrt{2}}(|+\rangle+|-\rangle)$. \subsection{Effect of non-Markovianity} We first study the environmental memory effect on the noisy estimation precision. 
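Before doing so, we note that the finite-difference rule of Eq.~(\ref{eq:eq18}) introduced above is straightforward to implement; in the HEOM case it simply amounts to four additional propagations of the hierarchy at shifted values of the parameter. A minimal Python sketch (the helper names below are hypothetical, not part of our actual numerical code) reads:
\begin{verbatim}
def central_diff(f, theta, eps_rel=1e-5):
    """Four-point central difference of Eq. (18); f maps theta to a number/array."""
    eps = eps_rel * theta
    return (-f(theta + 2 * eps) + 8 * f(theta + eps)
            - 8 * f(theta - eps) + f(theta - 2 * eps)) / (12 * eps)

# e.g. d_sigma = central_diff(propagate_heom, Delta), where propagate_heom is a
# hypothetical wrapper that runs the truncated hierarchy of Eq. (9) for a given Delta
# and returns the Bloch vector of the probe at the chosen time.
\end{verbatim}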
As discussed in Sec.~\ref{subsec:subsec3d}, by manipulating the ratio $\gamma/\Gamma$ one can change the degree of non-Markovianity of the decoherence channel drastically. This feature allows us to explore the connection between the non-Markovianity and the estimation precision in a dissipative environment. In the RWA case, one can derive a very simple expression for the QFI with respect to the parameter $\Delta$, namely $F_{\mathrm{RWA}}(\Delta)=t^{2}\mathcal{G}_{t}^{2}$. With this expression at hand, it is very easy to check that the value of the QFI can be boosted by decreasing the ratio $\gamma/\Gamma$. This result implies that the non-Markovianity may increase the estimation precision, which is in agreement with the results of Refs.~\cite{PhysRevLett.109.233601,PhysRevA.88.035806}. Moreover, in the non-Markovian regime, we observe that the QFI oscillates with time and exhibits a collapse-and-revival phenomenon before it completely disappears. The same result is also reported in Ref.~\cite{PhysRevA.88.035806}, and can be regarded as evidence of a reversed information flow from the environment back to the probe. Going beyond the RWA, the numerical results from the GBE and the HEOM lead to the same conclusion; see Figs.~\ref{fig:fig2}(b) and (c). Thus, one can conclude that the environmental non-Markovian effect can effectively improve the estimation precision regardless of whether the counter-rotating-wave terms are taken into account. In this sense, our result is a non-trivial generalization of Ref.~\cite{PhysRevA.88.035806} in which only the RWA case is considered. \subsection{Dephasing versus relaxation} \begin{figure} \caption{(a) The QFI $F(\Delta)$ is plotted as a function of $t$ by making use of the HEOM method with different values of $\chi$: $\chi=0$ (blue rectangles), $\chi=0.75$ (purple rhombuses), $\chi=2$ (green triangles) and $\chi=3$ (red circles). (b) The maximum QFI with respect to time versus $\chi$. Parameters are chosen as $\gamma=10\Gamma$, $\Gamma=0.2\Delta$ and $\Delta=1$. (c) The QFI $F(\Delta)$ is displayed as a function of $t$ in the non-Markovian regime with different values of $\chi$: $\chi=0$ (blue rectangles), $\chi=1.5$ (purple rhombuses), $\chi=1$ (green triangles) and $\chi=0.5$ (red circles). (d) The maximum QFI versus $\chi$ in the non-Markovian regime. Parameters are chosen as $\gamma=0.3\Gamma$, $\Gamma=0.3\Delta$ and $\Delta=0.25$.} \label{fig:fig3} \end{figure} Generally speaking, the specific form of the probe-environment operator $\hat{\mathcal{S}}$ fully determines the decoherence channel. When $[\hat{\mathcal{S}},\hat{H}_{\mathrm{s}}]=0$, the probe suffers a pure dephasing decoherence mechanism and only off-diagonal elements of $\varrho_{\mathrm{s}}(t)$ decay during the time evolution. If $[\hat{\mathcal{S}},\hat{H}_{\mathrm{s}}]\neq 0$, the decoherence channel of the probe is relaxation, which results in the dissipation of the qubit's energy. An interesting question arises here: what is the influence of the type of decoherence channel on the estimation performance? To address this problem, we generalize our discussion to the situation $\hat{\mathcal{S}}=\hat{\sigma}_{x}+\chi\hat{\sigma}_{z}$, where $\chi$ is a tunable real parameter~\cite{doi:10.1063/1.4950888}. Here, both the parallel interaction case ($\chi=0$) and the perpendicular interaction case ($\chi\neq0$) are included in the above expression of $\hat{\mathcal{S}}$, which can give rise to a much richer decoherence phenomenon. 
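In terms of the illustrative HEOM sketch of Sec.~\ref{sec:sec3} (reusing the Pauli matrices \texttt{sx} and \texttt{sz} defined there), probing this more general coupling only requires replacing the coupling operator; for example (the value of $\chi$ below is arbitrary):
\begin{verbatim}
chi = 0.75            # arbitrary illustrative weight of the perpendicular part
S = sx + chi * sz     # dephasing (sigma_x, parallel to H_s) plus relaxation (sigma_z)
\end{verbatim}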
Such an $\mathcal{\hat{S}}$ can be physically realized in atomic gases or quantum dot systems, in which the atoms or electron spins are relaxed by their surrounding phonons (say, via a spontaneous emission process), while the dephasing process is generated by the random fluctuations of an external electromagnetic field~\cite{PhysRevLett.96.097009,doi:10.1063/1.5039891}. Making use of the HEOM approach, which is independent of the specific form of the operator $\mathcal{\hat{S}}$, we can numerically obtain the value of the QFI. From Fig.~\ref{fig:fig3}, we find that the influence of $\chi$ on the QFI is not evident in the short-time regime. However, as time increases, the effect of $\chi$ is no longer negligible. Maximizing the QFI over time, one can see that the maximum QFI $F_{\max}(\Delta)$ is quite sensitive to the value of $\chi$: when $\chi$ is small, the introduction of the perpendicular interaction is favorable for obtaining a larger $F_{\max}(\Delta)$; after reaching a local maximum value, $F_{\max}(\Delta)$ gradually decreases as $\chi$ further increases. This result implies that the pure dephasing decoherence mechanism is \emph{not} the best choice for obtaining the maximum estimation precision, which is consistent with the result reported in Ref.~\cite{tamascelli2020quantum}. From Figs.~\ref{fig:fig3}(b) and (d), one can observe that $F_{\max}(\Delta)$ can be smaller than that of the pure dephasing case in the large-$\chi$ regime, which suggests that there exists an optimal $\chi$ maximizing the value of the QFI. Thus, we conclude that the performance of noisy parameter estimation can be enhanced by engineering the form of the probe-environment coupling. \section{Summary}\label{sec:sec5} In summary, by employing the HEOM method, we have investigated the ultimate achievable limit of a qubit-probe's frequency estimation in a dissipative bosonic environment. By comparison with two other approaches, it is found that the non-Markovian memory effect induced by the environment can remarkably boost the estimation precision, in both the RWA and non-RWA cases. This is good news for a practical quantum sensing protocol, because the actual noisy environment is complicated and non-Markovian, compared with the over-simplified memoryless approximation used in certain theoretical treatments. We also reveal that pure dephasing is not the optimal decoherence mechanism to obtain the maximum estimation precision. By introducing a perpendicular qubit-environment interaction, the estimation performance can be improved. Furthermore, by adjusting the value of $\chi$ to change the weight of the perpendicular interaction in the $\hat{\mathcal{S}}$ operator, one can attain a larger value of the QFI. Due to the fact that both the specific forms of $\alpha(t)$ and $\mathcal{\hat{S}}$ play important roles in determining the reduced dynamical behavior of the qubit-probe, our result implies that the noisy parameter estimation precision can be optimized by controlling the decoherence mechanism. Though these results are obtained for the Ornstein-Uhlenbeck auto-correlation function, thanks to the rapid development of the HEOM method, our analysis of noisy parameter estimation can be generalized to other auto-correlation functions. For example, as reported in Refs.~\cite{PhysRevA.98.012110,PhysRevA.98.032116,doi:10.1063/1.4936924,PhysRevB.95.214308}, the HEOM method has been extended to arbitrary spectral density functions as well as to finite-temperature environments. 
Moreover, it has been reported that the HEOM method can be extended to simulate the dissipative dynamics of a few-level system embedded in a fermionic environment~\cite{PhysRevA.96.032125,Suess2015,doi:10.1063/1.5136093} or a spin environment~\cite{doi:10.1063/1.5018725,doi:10.1063/1.5018726}. It would be very interesting to extend our study to these more general situations. Finally, due to the widespread use of qubit-based quantum sensors, our study provides a means of designing an optimal estimation scheme to characterize a parameter of interest in a noisy environment. The strategy explored in this paper might have certain potential applications in quantum metrology and quantum sensing research. \section{Acknowledgments} W. Wu wishes to thank Dr. S.-Y. Bai, Prof. H.-G. Luo and Prof. J.-H. An for many useful discussions. This work is supported by the National Natural Science Foundation (Grant No. 11704025). \section{Appendix} In this appendix, we would like to show how to derive Eq.~(\ref{eq:eq6}) from the standard Schr\"{o}dinger equation $\partial_{t}|\Psi_{\mathrm{sb}}(t)\rangle=-i\hat{H}|\Psi_{\mathrm{sb}}(t)\rangle$. The whole probe-environment Hamiltonian in the interaction picture with respect to the environment reads \begin{equation}\label{eq:eq19} \hat{H}(t)=\hat{H}_{\mathrm{s}}+\mathcal{\hat{S}}\sum_{k}\Big{(}g_{k}\hat{b}_{k}^{\dagger}e^{i\omega_{k}t}+g_{k}\hat{b}_{k}e^{-i\omega_{k}t}\Big{)}. \end{equation} Substituting $\hat{H}(t)$ into the standard Schr\"{o}dinger equation, we have \begin{equation}\label{eq:eq20} \partial_{t}|\Psi_{\mathrm{sb}}(t)\rangle=-i\bigg{[}\hat{H}_{\mathrm{s}}+\mathcal{\hat{S}}\sum_{k}\Big{(}g_{k}\hat{b}_{k}^{\dagger}e^{i\omega_{k}t}+g_{k}\hat{b}_{k}e^{-i\omega_{k}t}\Big{)}\bigg{]}|\Psi_{\mathrm{sb}}(t)\rangle. \end{equation} Then, we employ the Bargmann coherent state $|\mathbf{z}\rangle=\bigotimes_{k}|z_{k}\rangle$ with $|z_{k}\rangle\equiv e^{z_{k}\hat{b}_{k}^{\dagger}}|0_{k}\rangle$ to re-express Eq.~(\ref{eq:eq20}). By left-multiplying the Bargmann coherent state $\langle \mathbf{z}|$ on both sides of Eq.~(\ref{eq:eq20}), one can find \begin{equation}\label{eq:eq21} \begin{split} \partial_{t}&\langle \mathbf{z}|\Psi_{\mathrm{sb}}(t)\rangle=-i\hat{H}_{\mathrm{s}}\langle \mathbf{z}|\Psi_{\mathrm{sb}}(t)\rangle\\ &-i\mathcal{\hat{S}}\langle \mathbf{z}|\bigg{[}\sum_{k}\Big{(}g_{k}\hat{b}_{k}^{\dagger}e^{i\omega_{k}t}+g_{k}\hat{b}_{k}e^{-i\omega_{k}t}\Big{)}\bigg{]}|\Psi_{\mathrm{sb}}(t)\rangle. \end{split} \end{equation} Next, using the following properties of the Bargmann coherent state \begin{equation}\label{eq:eq22} \hat{b}_{k}|z_{k}\rangle=z_{k}|z_{k}\rangle,~~\hat{b}_{k}^{\dagger}|z_{k}\rangle=\frac{\partial}{\partial z_{k}}|z_{k}\rangle, \end{equation} Eq.~(\ref{eq:eq21}) can be simplified to \begin{equation*}\label{eq:eq23} \begin{split} \partial_{t}|\psi_{t}(\mathbf{z}^{*})\rangle=\bigg{[}-i\hat{H}_{\mathrm{s}}+\mathcal{\hat{S}}\mathbf{z}_{t}^{*}-i\mathcal{\hat{S}}\sum_{k}g_{k}e^{-i\omega_{k}t}\frac{\partial}{\partial z_{k}^{*}}\bigg{]}|\psi_{t}(\mathbf{z}^{*})\rangle, \end{split} \end{equation*} where $\textbf{z}_{t}\equiv i\sum_{k}g_{k}z_{k}e^{-i\omega_{k}t}$. 
The term $\frac{\partial}{\partial z_{k}^{*}}|\psi_{t}(\mathbf{z}^{*})\rangle$ can be cast as a functional derivative by making use of the functional chain rule~\cite{DIOSI1997569,STRUNZ2001237} \begin{equation}\label{eq:eq24} \begin{split} \frac{\partial}{\partial z_{k}^{*}}|\psi_{t}(\mathbf{z}^{*})\rangle=\int_{0}^{t}d\tau\frac{\partial \mathbf{z}_{\tau}^{*}}{\partial z_{k}^{*}}\frac{\delta}{\delta \mathbf{z}_{\tau}^{*}}|\psi_{t}(\mathbf{z}^{*})\rangle. \end{split} \end{equation} Finally, we have \begin{equation*}\label{eq:eq25} \begin{split} \frac{\partial}{\partial t}|\psi_{t}(\textbf{z}^{*})\rangle=&-i\hat{H}_{\mathrm{s}}|\psi_{t}(\textbf{z}^{*})\rangle+\hat{\mathcal{S}}\textbf{z}_{t}^{*}|\psi_{t}(\textbf{z}^{*})\rangle\\ &-\hat{\mathcal{S}}\int_{0}^{t}d\tau\sum_{k}g_{k}^{2}e^{-i\omega_{k}(t-\tau)}\frac{\delta}{\delta \textbf{z}_{\tau}^{*}}|\psi_{t}(\textbf{z}^{*})\rangle, \end{split} \end{equation*} which reproduces Eq.~(\ref{eq:eq6}) in the main text. Therefore, by introducing the stochastic process $\mathbf{z}_{t}$, which originates from the environmental degrees of freedom, the standard Schr\"{o}dinger equation can be converted into the stochastic quantum state diffusion equation. \end{document}
\begin{document} \title{Sensitivity Analysis for matched pair analysis of binary data: From worst case to average case analysis} \author{Raiden B. Hasegawa\textsf{\footnote{\textit{Address for correspondence}: Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104-6340 US, Email: [email protected].}} \ and \ Dylan S. Small} \date{May 16, 2018} \maketitle \begin{abstract} In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. \newline \textbf{keywords:} attributable effects; binary data; causal inference; cellphone; majorization; sensitivity analysis; traffic collision. \end{abstract} \label{firstpage} \thispagestyle{fancy} \section{INTRODUCTION}\label{sec:intro} \subsection{Sensitivity analysis as causal evidence}\label{subsec:evidence} In matched-pair observational studies, causal conclusions based on usual inferential methods (e.g., McNemar's test for binary data) rest on the assumption that matching on observed covariates has the same effect as randomization (i.e., that there are no unmeasured confounders). In other words, it is assumed that there are no unobserved covariates relevant to both treatment assignment and outcome. A sensitivity analysis assesses the sensitivity of results to violations of this assumption. \cite{cornfield1959} introduced a model for sensitivity analysis that was a major conceptual advance in the field of observational studies. A modern approach to sensitivity analysis is introduced in \cite{rosenbaum1987}; Rosenbaum's approach builds on Cornfield's model (\cite{cornfield1959}) but incorporates uncertainty due to sampling variance. There are other contemporary sensitivity analysis models, see for example \cite{mccandless2007} for a Bayesian approach, but we restrict our focus to Rosenbaum's approach. Rosenbaum's sensitivity analysis yields an upper limit on the magnitude of bias to which the result of the researcher's test of no treatment effect is insensitive for a given significance level $\alpha$. More specifically, \cite{rosenbaum1987} derives bounds on the p-value of this test given an upper bound, $\Gamma$, on the odds ratio of treatment assignment for a pair of subjects matched on observed covariates. 
$\Gamma$ can be thought of as a measure of ``worst case'' bias in the sense that treatment assignment probabilities in matched pairs are allowed to vary arbitrarily as long as the odds ratio of treatment assignment for a pair of subjects is no greater than $\Gamma$. The largest $\Gamma$ for which the p-value is less than 0.05 is denoted by $\Gamma_{sens}$. We will use $\Gamma_{truth}$ to distinguish the true unknown worst case bias. $\Gamma_{sens}$ is interpreted in Rosenbaum's sensitivity analysis as the largest value of the worst case bias across matched pairs that does not invalidate the finding of evidence for a treatment effect. We refer to this as a \textit{worst case calibrated} sensitivity analysis. A classic example of this type of analysis is given in Chapter 4 of \cite{rosenbaum2002}. Applying the worst case sensitivity analysis to a study of the effects of heavy smoking on lung cancer mortality (\cite{hammond1964}), Rosenbaum finds that $\Gamma_{sens} \approx 6$ and interprets this result cogently: \begin{quote} To attribute the higher rate of death from lung cancer to an unobserved covariate rather than to an effect of smoking, that unobserved covariate would need to produce a sixfold increase in the odds of smoking, and it would need to be a near perfect predictor of lung cancer. \end{quote} A brief, more formal review of Rosenbaum's sensitivity analysis framework is in Section \ref{ss-review}. The worst case calibrated sensitivity analysis raises several potential questions. If we are convinced that there is no pair in Hammond's smoking study such that one unit is more than six times as likely to smoke as the other (i.e., $\Gamma_{truth}\le \Gamma_{sens}$), then we would conclude that our study provides convincing evidence that heavy smoking increases the rate of lung cancer mortality. However, what if, on average, unmeasured confounders do not alter the odds of smoking greatly but there are some subjects for whom the unmeasured confounders make them almost certain to smoke, e.g., a subject who experiences huge peer pressure to smoke. If such a subject ends up in our sample of matched pairs, and we condition on matched pairs in which only one unit receives treatment, a standard practice when conducting matched pair randomization tests, then the odds ratio of treatment assignment in the matched pair containing that subject, and consequently $\Gamma_{truth}$, will be infinite. In such a case, since $\Gamma_{sens}$ is generally finite, we'd expect it to be smaller than $\Gamma_{truth}$. Now, suppose that there are such pairs in the Hammond study but that for most pairs the odds ratio of smoking between the units is much smaller than six. Using the worst case calibrated sensitivity analysis, we would conclude that the study is sensitive to bias. Is there potentially some natural quantification of average bias over the sample of matched pairs, say, $\Gamma'_{truth}$, that isn't infinite and perhaps is smaller than six? And if we calibrate our sensitivity analysis to this measure of bias rather than the worst case measure, will the sensitivity analysis be valid in the sense that the inference is conservative at level $\alpha$ for any $\Gamma \ge \Gamma'_{truth}$? If it is valid, are there other advantages to using the \textit{average case calibrated} sensitivity analysis over the worst case calibrated sensitivity analysis? 
In what follows, we attempt to answer these motivating questions in the context of a matched pair analysis of the association between cellphone use and car accidents. \subsection{Outline} In this paper we demonstrate that interpreting sensitivity analysis results in terms of average case rather than worst case hidden bias is both valid and conceptually more natural in many common scenarios. To illustrate our claim that the average case analysis is more natural we will perform a causal analysis of a study by \cite{tibshirani1997} that asks if there is an association between cellphone use and motor-vehicle collisions. The study is described in the following section. In section \ref{review} we review the model for sensitivity analysis of tests of no treatment effect and sensitivity intervals for attributable effects for binary data. In section \ref{worst-avg} we discuss the theory behind the validity of average case sensitivity analysis. Finally, the \cite{tibshirani1997} study is examined in this new light in section \ref{examples}. In particular, we see how the average case sensitivity analysis makes it possible to use additional information from the problem to empirically calibrate our sensitivity analysis in Section \ref{subsec:inter} and we extend the average case sensitivity analysis to the study of sensitivity intervals for attributable effects in Section \ref{attr-sec}. \subsection{Motivating Example: Effects of cellphone use on the incidence of motor-vehicle collisions} \cite{tibshirani1997} conducted a case-crossover study of the effects of cellphone use on the incidence of car collisions. In a case-crossover study each subject acts as her own control which has the benefit of controlling for potential confounders that are time-invariant, even if they are unobserved. Data collection took place at a collision reporting center in Toronto between July 1, 1994 and August 31, 1995 during weekday peak hours (10 AM to 6 PM). Consenting drivers who reported having been in a collision with substantial property damage and who owned a cellphone were included in the study. Drivers involved in collisions that involved injury, criminal activity, or transport of dangerous goods were excluded. The resulting study population included 699 individuals who gave permission to review their cellphone records and filled out a brief questionnaire about their personal characteristics and the features of the collision. The matched pair analysis compared cellphone usage in the 10-minute hazard window prior to the crash with a 10-minute control window on a chosen day prior to the crash. We will denote the time of the crash as $t$ and the hazard window as $t-10$ to $t-1$ minutes. The authors examined several different control windows: \begin{enumerate}[leftmargin=20pt] \item{} \textit{Previous day}: time $t-10$ to $t-1$ minutes on the previous day. \item{} \textit{Previous weekday/weekend}: time $t-10$ to $t-1$ minutes on the previous weekday if the crash took place on a weekday and similarly if the crash took place on a weekend. \item{} \textit{One week prior}: time $t-10$ to $t-1$ minutes one week prior to the collision. \item{} \textit{Busiest cellphone day of previous three days}: time $t-10$ to $t-1$ minutes on the one day among the prior three to the collision with the most cellphone calls. \end{enumerate} For each choice of control window, \cite{tibshirani1997} found that there was a significant positive association between cellphone usage and traffic collision incidence. 
The $2\times 2$ contingency tables shown in Table \ref{tbl-cw} summarize the data using the four different control windows. \begin{table}[h!] \centering \begin{tabular}{lrrr} &&\multicolumn{2}{c}{Control} \\ && On phone & Not on phone \\ \hline &&\multicolumn{2}{c}{\textit{Previous Weekday/end}} \\ \hline \multirow{2}{*}{Hazard} & On phone & 12 & 158 \\ & Not on phone & 23 & 506 \\ \hline &&\multicolumn{2}{c}{\textit{One Week Prior}} \\ \hline \multirow{2}{*}{Hazard} & On phone & 6 & 164 \\ & Not on phone & 21 & 508 \\ \hline &&\multicolumn{2}{c}{\textit{Previous Driving Day}} \\ \hline \multirow{2}{*}{Hazard} & On phone & 18 & 119 \\ & Not on phone & 20 & 171 \\ \hline &&\multicolumn{2}{c}{\textit{Most Active Cellphone Day}} \\ \hline \multirow{2}{*}{Hazard} & On phone & 17 & 135 \\ & Not on phone & 43 & 504 \end{tabular} \vspace{20pt} \caption{\textbf{Previous Weekday/end}: results for previous weekday/weekend control window versus hazard window; \textbf{One Week Prior}: results for one week prior control window versus hazard window; \textbf{Previous Driving Day}: results for previous driving day control window versus hazard window; \textbf{Most Active Cellphone Day}: results for most active cellphone day in previous 3 days control window versus hazard window.}\label{tbl-cw} \end{table} \subsection{Sensitivity of results to hidden bias}\label{sens-examp} As this was an observational study, the associations cannot be assumed to be causal. We would like to quantify how large a hidden bias would have to be to explain the observed association between cellphone use and car accidents without it being causal. A sensitivity analysis seems appropriate and is a straightforward exercise (see Chapter 4, \cite{rosenbaum2002} for example). Table \ref{sens-table} shows the results of a standard worst case sensitivity analysis for each control window. Here, $\Gamma_{sens}$ is the largest value of $\Gamma$ such that the results are still significant at the $\alpha=0.05$ level. In our analysis of the case-crossover study from \cite{tibshirani1997} we condition on subjects who were on a cellphone in exactly one of the control and hazard windows (i.e., discordant case-crossover pairs). Thus, the odds ratio of treatment assignment for the two windows observed for any case-crossover subject can be viewed as the conditional odds that treatment occurs in a particular window. Hence, we can interpret $\Gamma$ as the maximum (and $1/\Gamma$ as the minimum) over all study subjects of the odds that a driver is using a cellphone during the hazard window and not during the control window. \begin{table}[ht] \centering \begin{tabular}{lr} Control Window & $\Gamma_{sens}$ \\ \hline previous weekday/weekend & 4.92 \\ one week prior & 5.53 \\ previous driving day & 4.15 \\ most active cellphone day & 2.40 \end{tabular} \vspace{20pt} \caption{Sensitivity analysis for (marginal) $\alpha=0.05$.}\label{sens-table} \end{table} The sensitivity analysis suggests that the most active cellphone day control window was the most conservative analysis. This is unsurprising since we would expect the treatment assignment (cellphone use) to be biased toward the control window on a day when the subject used a cellphone relatively often. 
We can interpret these results as follows: \textit{the observed ostensible effect is insensitive to hidden bias that increases the odds that a driver was on a cellphone in the hazard window and not the control window on the most active cellphone day by at most a factor of 2.4.} In many observational studies this type of statement is very useful. However, it may be plausible that some study participants are exposed to infinite (or at least very large) hidden bias. For example, this happens if a subject was not driving during the control window and (almost) always uses her landline rather than her cellphone when she is not driving. When we condition on case-crossover pairs where the treatment is received in exactly one of the windows -- a standard practice when conducting a matched pair randomization test -- such a driver is always on a cellphone during the hazard window. When this happens, the observed ostensible effect is (almost) always sensitive to hidden bias, no matter how strong the observed association. Implicitly, in the worst case sensitivity analysis, the investigator is supremely skeptical; she assumes that it could be that all study participants suffer from the worst case hidden bias which, when it is possible that some study participant suffers from unbounded hidden bias, renders sensitivity analysis under the standard worst case interpretation uninformative. Yet in many studies where unbounded hidden bias in some matched pairs is plausible, as in our motivating example, we still want to examine the sensitivity of our results to potential hidden bias. If we could perform a valid, average case calibrated sensitivity analysis then we could (1) make sensitivity analysis informative even in the presence of pairs subject to unbounded hidden bias and (2) make the interpretation of sensitivity analysis results far less conservative. It turns out that there is a measure of the sample average bias that is generally finite in the presence of pairs subject to unbounded bias for data with binary treatment and outcome. Moreover, the sensitivity analysis calibrated to this measure of average bias is valid when using McNemar's statistic to test the null hypothesis of no treatment effect against the alternative of a positive treatment effect (i.e., that talking on a cellphone while driving increases the rate of automobile accidents). \section{NOTATION AND REVIEW}\label{review} \subsection{Notation} Our study sample consists of $S$ matched pairs where each pair $s=1,2,\dots,S$ is matched on a set of observed relevant covariates $\x_{s1}=\x_{s2}=\x_s$. Units in each pair are indexed by $i=1,2$. We let $Z_{si}$ and $R_{si}$ denote the treatment assignment and outcome, respectively, of the $i$-th unit of the $s$-th pair. The potential outcomes under treatment and control are denoted as $r_{Tsi}$ and $r_{Csi}$, respectively. Hence, we can write $R_{si}=Z_{si}r_{Tsi} + (1-Z_{si})r_{Csi}$. Under Fisher's sharp null hypothesis of no treatment effect, i.e., $r_{Tsi}=r_{Csi}$ for all $i$, we have that $R_{si}=r_{Csi}$. Hereafter, we will work under the null hypothesis and under the assumption that each pair was matched on some set of observed covariates $\x_s$. Additionally, we assume that there is some unobserved covariate $U_{si}$ that is associated with both treatment assignment and outcome and let $u_{si}$ be the realization of $U_{si}$ for the $i$-th unit of the $s$-th pair. Within pair differences in treatment and outcome will be denoted as $V_s = Z_{s1}-Z_{s2}$ and $y_s = r_{Cs1}-r_{Cs2}$. 
It will be convenient to define the following vector quantities: ${\mathbf{Z}} = (Z_{11},Z_{12},\dots,Z_{S2})^T$, $\mathbf{r} = (r_{C11},r_{C12},\dots,r_{CS2})^T$,$\U = (U_{11},U_{12},\dots,U_{S2})^T$, and $\A = (|y_1|,|y_2|,\dots,|y_S|)^T$. To be very clear about the information on which we are conditioning we will define some important information sets. Let $\mathcal{F} = \{(\x_s,u_{si},r_{Csi},r_{Tsi}):\;s=1,2\dots,S,\, i=1,2\}$ be the set of \textit{fixed} observed and unobserved covariates for all units. Let $\mathcal{Z} = \{{\mathbf{Z}}:\; |V_s|=1,\, s=1,\dots,S\}$ be the set of matched pairs such that only one unit receives treatment. We assume that $\R$ is binary and we define $\mathcal{A}_{{\mathbf{1}}}=\{\A:\; |y_s|=1,\, s=1,\dots,S\}$. So $\mathcal{Z}\cap\mathcal{A}_{{\mathbf{1}}}$ is the set of discordant matched pairs. In the analysis that follows, we will condition on $\mathcal{F},\,\mathcal{Z}\cap\mathcal{A}_{{\mathbf{1}}}$. \subsection{Review: sensitivity analysis for binary data}\label{ss-review} Under the assumption that all variables that confound treatment assignment are observed, \[ Z_{si} {\perp\!\!\!\perp} (r_{Csi},r_{Tsi})\,|\,\X_s \quad \text{(Ignorability)}\] our matched observational study should closely resemble a randomized study and thus $\Pr{{\mathbf{Z}}=\mathbf{z}|\,\mathcal{F},\,\mathcal{Z}\cap\mathcal{A}_{{\mathbf{1}}}} = 1/2^S$ for $\mathbf{z}\in\mathcal{Z}$. In practice, this assumption is rarely valid and the probability of treatment assignment depends materially on the unobserved covariates $\U$. A second assumption made in the causal framework introduced in \cite{rosenbaum1983} is the \textit{Positivity} assumption -- $0 < \Pr{Z_{si}=1|\,\X_s} <1$ for all $s=1,2,\dots,S$ and $i=1,2$ -- which says that all units have a chance of receiving treatment. In our case-crossover study, however, this may not be an appropriate assumption. We introduce an example of how our case-crossover study might violate the positivity assumption in Section \ref{subsec:inter} and how our average case sensitivity analysis framework is able to handle violations of positivity. When both $Z$ and $r$ are binary it is common to use McNemar's statistic to test for treatment effect: \begin{definition} \label{defn:mcnemars} For a matched pair study with binary treatment and outcome we define \textbf{McNemar's statistic} to be \begin{equation} \label{eqn:mcnemars} T({\mathbf{Z}},\br) = \sum_{s=1}^S \mathbbm{1}\{V_sY_s=1\}\,.\end{equation} \end{definition} Under the null distribution of no treatment effect $T({\mathbf{Z}},\br)$ follows a Poisson-Binomial distribution with probabilities $\{p_1,p_2,\dots,p_S\}$ where $p_s = \Pr{(Z_{s1}-Z_{s2})(r_{s1}-r_{s2})=1}$ is the probability that the unit with positive outcome, i.e., $r=1$, receives treatment in pair $s$. If we consider only discordant pairs and we assume, without loss of generality, that the first unit in each pair is the unit with positive outcome we may write \begin{equation} \label{eq:ps1} p_s = \Pr{Z_{s1}=1|\mathcal{F},\,\mathcal{Z}\cap\mathcal{A}_{\mathbf{1}}}\,. \end{equation} Recall that the Poisson-Binomial distribution is the sum of independent, not necessarily identical Bernoulli trials. If $\X_s$ contains the complete set of relevant covariates then $p_s$ equals $1/2$ for all pairs and we can conduct inference using $\operatorname{B}(1/2,S)$ as our null distribution, effectively treating our data as being the outcome of a randomized study. 
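As a concrete illustration of this null distribution, the minimal Python sketch below (illustrative only; SciPy is assumed to be available, and the counts are hypothetical rather than taken from Table \ref{tbl-cw}) computes McNemar's statistic from discordant-pair counts and its exact one-sided p-value under $\operatorname{B}(1/2,S)$.
\begin{verbatim}
from scipy.stats import binom

def mcnemar_exact_pvalue(n_pos_treated, n_pos_control):
    """Exact one-sided p-value of McNemar's statistic under the randomization
    null, where T ~ Binomial(S, 1/2) over the S discordant pairs.
    n_pos_treated: discordant pairs in which the positive-outcome unit was treated;
    n_pos_control: discordant pairs in which it was not."""
    T = n_pos_treated
    S = n_pos_treated + n_pos_control
    return binom.sf(T - 1, S, 0.5)       # P(T >= observed value)

print(mcnemar_exact_pvalue(150, 30))     # hypothetical counts
\end{verbatim}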
As we mentioned earlier in this section, if there is some unobserved characteristic $U$ that is relevant to treatment assignment and outcome then $\{p_1,\dots,p_S\}$ are unknown and consequently the exact null distribution is no longer available to the investigator. When this is the case, a sensitivity analysis like the one conducted informally in Section \ref{sens-examp} can be used to determine how sensitive the investigator's conclusions are to departures from the ideal randomized design. Following Chapter 4 of \cite{rosenbaum2002} we can formalize the notion of a sensitivity analysis introduced in Sections \ref{subsec:evidence} and \ref{sens-examp} with a simple sensitivity model where \begin{equation} \label{eq:sens-gamma} \frac{1}{1+\Gamma} \le \Pr{Z_{s1}=1|\mathcal{F},\,\mathcal{Z}\cap\mathcal{A}_{\mathbf{1}}} \le \frac{\Gamma}{1+\Gamma} \end{equation} for all $s=1,\dots,S$ and where $\Gamma\ge 1$ is the sensitivity parameter that bounds the extent of departure from a randomized study. Proposition 12 in Chapter 4 of \cite{rosenbaum2002} states that \eqref{eq:sens-gamma} is equivalent to the existence of the following model \begin{equation} \label{eq:sens-equiv} \log\left(\frac{p_s}{1-p_s}\right) = \gamma\left(u_{s1}-u_{s2}\right)\,,\;s=1,\dots,S \end{equation} where $\exp(\gamma) = \Gamma$, $\gamma \ge 0$, and $u_{si} \in [0,1]$ for $s=1,\dots,S$ and $i=1,2$. The restriction of the unobserved confounder to the unit interval in this equivalent representation preserves the non-technical interpretation of $\Gamma$ used in section \ref{sens-examp} as a bound on the odds that the driver was talking on a cellphone in the hazard window. Henceforth, we assume that $U_{si}$ and its realization $u_{si}$ belongs to the unit interval for $s=1,\dots,S$ and $i=1,2$. However, the distribution of $U_{si}$ on the unit interval may be arbitrary. Under this sensitivity model, if we let $T^+$ be binomial with success probability $\Gamma/(1+\Gamma)$ and $T^-$ be binomial with success probability $1/(1+\Gamma)$ it follows from Theorem 2 of \cite{rosenbaum1987} that \begin{equation} \label{eq:stoch-order} \Pr{T^- \ge k} \le \Pr{T \ge k|\mathcal{F},\,\mathcal{Z}\cap\mathcal{A}_{\mathbf{1}}} \le \Pr{T^+ \ge k} \end{equation} for all $k = 1,\dots,S$. This inequality is tight in the sense that it holds for any realization $\mathbf{u}$ of $\mathbf{U}$. For conducting a hypothesis test, the stochastic ordering in \eqref{eq:stoch-order} gives us bounds on the p-value of our test for a given magnitude of bias $\Gamma$. If $\Gamma\ge \Gamma_{truth}$, then $T^+$ yields a valid, albeit conservative, reference distribution for testing the null hypothesis of no treatment effect against the alternative of a positive treatment effect. \subsection{Attributable effects for binary outcomes: hypothesis tests and confidence intervals} \label{subsec:attr-binary} Attributable effects are a way to measure the magnitude of a treatment effect on a binary outcome. The number of attributable effects is the number of positive outcomes among treated subjects that would not have occurred if the subject was not exposed to treatment. In this section, we review \cite{rosenbaum2002a}'s procedure to construct one-sided confidence statements about attributable effects in the context of the cellphone case-crossover study. Let $\widetilde{S}$ be the number of \textit{all} pairs in the study, discordant or not, and let the first $S$ be the discordant pairs. 
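To make the bound in \eqref{eq:stoch-order} concrete, the sketch below (again Python with SciPy assumed; the grid and the counts are illustrative choices, not the code used for Table \ref{sens-table}) evaluates the worst case upper bound on the one-sided p-value for a given $\Gamma$ and searches for the largest $\Gamma$ at which it remains below $\alpha$, in the spirit of $\Gamma_{sens}$.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def worst_case_pvalue(T, S, gamma):
    """Upper bound P(T+ >= T), with T+ ~ Binomial(S, Gamma/(1+Gamma))."""
    return binom.sf(T - 1, S, gamma / (1.0 + gamma))

def gamma_sens(T, S, alpha=0.05, grid=np.arange(1.0, 20.01, 0.01)):
    """Largest Gamma on the grid whose worst-case p-value is still below alpha."""
    feasible = [g for g in grid if worst_case_pvalue(T, S, g) < alpha]
    return max(feasible) if feasible else None

# Hypothetical discordant counts: 150 pairs on the phone only in the hazard
# window and 30 pairs on the phone only in the control window.
print(gamma_sens(T=150, S=180))
\end{verbatim}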
If we assume that $r_{Tsi} \ge r_{Csi}$, i.e., that talking on a cellphone cannot prevent an accident, then we can write the attributable effect as \begin{equation} A = \sum_{s=1}^{\widetilde{S}}\sum_{i=1}^2 Z_{si}(r_{Tsi}-r_{Csi}) = \sum_{s=1}^{\widetilde S} Z_{s1}(r_{Ts1}-r_{Cs1}) \end{equation} where the first unit of the $s$-th pair is the observation from the hazard window. Why does the second equality hold? If the subject was talking on a cellphone in the control window, that is $Z_{s2}=1$, then we observe $r_{Ts2}=0$, which by our assumption that talking on a cellphone cannot prevent an accident implies that $r_{Cs2}=0$. So attributable effects can only occur among discordant pairs where the subject was talking on a cellphone in the hazard window or concordant pairs where the subject was talking on a cellphone in both windows. The following table characterizes the four types of possible pairs in our case-crossover study. \begin{table}[h!] \centering \begin{tabular}{lrrrrrr} & $Z_{s1}$ & $Z_{s2}$ & $R_{s1}$ & $R_{s2}$ & $r_{Ts1}$ & $r_{Cs1}$ \\ \hline $D(+,-)$ & 1 & 0 & 1 & 0 & 1 & - \\ $D(-,+)$ & 0 & 1 & 1 & 0 & 1 & 1 \\ $C(-,-)$ & 0 & 0 & 1 & 0 & 1 & 1 \\ $C(+,+)$ & 1 & 1 & 1 & 0 & 1 & - \\ \end{tabular} \vspace{20pt} \caption{The four possible types of pairs in our case-crossover study. $D$ and $C$ indicate discordant and concordant pairs, respectively, and the $+$ and $-$ indicate whether a unit in the pair was treated or not, respectively.}\label{tbl:attr} \end{table} $D$ and $C$ indicate discordant and concordant pairs, respectively. $D(+,-)$ is the set of discordant pairs where the subject was on a cellphone in the hazard window, $D(-,+)$ is the set of discordant pairs where the subject was on a cellphone in the control window, $C(+,+)$ is the set of concordant pairs where the subject was on a cellphone in both hazard and control windows, and $C(-,-)$ is the set of concordant pairs where the subject was not on a cellphone in either window. If there are no attributable effects then we know that $r_{Cs1}=1$ in $D(+,-)$ and $C(+,+)$ and we have that $R_{s1}=r_{Cs1}$ for all pairs $s$, concordant or discordant. We can write the probability that the subject was talking on a cellphone at the time of the accident for each type of pair as (1) $\Pr{Z_{s1}R_{s1}=1 | D(+,-)\cup D(-,+)} = p_s$, where $p_s$ here is equivalent to the $p_s$ defined in Section \ref{ss-review} when there are no attributable effects; (2) $\Pr{Z_{s1}R_{s1}=1 | C(-,-)} = 0$; and (3) $\Pr{Z_{s1}R_{s1}=1 | C(+,+)} = 1$. Now let $c^+=|C(+,+)|$ denote the cardinality of the set of concordant pairs where the subject was on a cellphone in both windows and let $s=S+1,\dots,S+c^+$ index the pairs belonging to $C(+,+)$. Then if $A=0$ we can define the standardized deviate for McNemar's statistic $T$ as \begin{align}\label{eq:norm-dev} \widetilde{T} &= \frac{\sum_{s=1}^{S} Z_{s1}r_{Cs1} - \sum_{s=1}^S p_s}{\left\{\sum_{s=1}^Sp_s(1-p_s)\right\}^{1/2}} \notag \\ & = \frac{\sum_{s=1}^{S+c^+} Z_{s1}R_{s1} - \left(\sum_{s=1}^S p_s + c^+\right)}{\left\{\sum_{s=1}^Sp_s(1-p_s)\right\}^{1/2}}\,. \end{align} $\widetilde{T}$ defines a normal reference distribution for $\sum_{s=1}^{S}Z_{s1}r_{Cs1}$ that we can use to conduct approximate inference. If $A=a>0$, then $Z_{\widetilde{s}1}R_{\widetilde{s}1} = Z_{\widetilde{s}1}r_{T\widetilde{s}1} = Z_{\widetilde{s}1}(r_{C\widetilde{s}1}+1)$ for each pair $\widetilde{s}$ belonging to the set of $a$ pairs with attributable accidents, and the second equality in \eqref{eq:norm-dev} no longer holds. 
When this equality fails to hold, the standard normal deviate $\widetilde T$ cannot be computed from the observed data conditional on $\mathcal F$. How then can we adjust $\widetilde{T}$ for attributable accidents so that it can be computed from the observed data? Because we have assumed that talking on a cellphone cannot prevent an accident, we only need to consider two cases. If pair $\widetilde{s}$ belongs to $D(+,-)$ then we subtract $Z_{\widetilde{s}1}(r_{T\widetilde{s}1}-r_{C\widetilde{s}1})=1$ from $\sum_{s=1}^{\widetilde{S}} Z_{s1}R_{s1}$, $p_{\widetilde{s}}$ from the expectation, and $p_{\widetilde{s}}(1-p_{\widetilde{s}})$ from the variance term. If $\widetilde{s}$ belongs to $C(+,+)$ we again subtract $1$ from $\sum_{s=1}^{\widetilde{S}} Z_{s1}R_{s1}$ and subtract $1$ from the $c^{+}$ term in the expectation, while leaving the variance term unchanged. Let $\boldsymbol{\delta} = (\delta_{11},\delta_{12},\dots,\delta_{\widetilde{S}1},\delta_{\widetilde{S}2})^T$ be defined as $\delta_{sj} = r_{Tsj}-r_{Csj}$. We say that $\boldsymbol{\delta}$ is \textit{compatible} if $\delta_{sj}=0$ whenever $Z_{sj}=1$ and $R_{sj}=0$ or $Z_{sj}=0$ and $R_{sj}=1$. Under this definition, we can express the number of attributable effects as $A={\mathbf{Z}}^T\boldsymbol{\delta}$. For a compatible $\boldsymbol{\delta}$ such that ${\mathbf{Z}}^T\boldsymbol{\delta}= a$ we let $\widetilde{T}_{-\boldsymbol{\delta}}$ denote $\widetilde{T}$ adjusted for the $a$ attributable effects. $\widetilde{T}_{-\boldsymbol{\delta}}$ defines a new reference distribution for $\sum_{s=1}^{S}Z_{s1}r_{Cs1}$ under the null hypothesis that the potential accidents indicated by $\boldsymbol{\delta}$ are attributable to talking on a cellphone while driving. We can write $\widetilde{T}_{-\boldsymbol{\delta}}$ as \begin{equation}\label{eq:T-delt} \widetilde{T}_{-\boldsymbol{\delta}} = \frac{\sum_{s=1}^{S+c^+}Z_{s1}R_{s1}(1-\delta_{s1}) - \left(\sum_{s=1}^S(1-\delta_{s1})p_s+ \sum_{s=S+1}^{S+c^+}(1-\delta_{s1})\right)}{\left\{\sum_{s=1}^S(1-\delta_{s1})p_s(1-p_s)\right\}^{1/2}}\,. \end{equation} Using the notion of asymptotic separability (\cite{gastwirth2000}), \cite{rosenbaum2002a} show that choosing a compatible $\boldsymbol{\delta}^*\equiv\boldsymbol{\delta}^*(a)$ with ${\mathbf{Z}}^T\boldsymbol{\delta}^*(a)=a$ that maximizes the expectation term, and, in the case of ties, the variance term, yields a reference distribution that, asymptotically, has the largest upper tail area among compatible $\boldsymbol{\delta}(a)$. Thus, we can use $\widetilde{T}_{-\boldsymbol{\delta}^*}$ to test the plausibility that there are at most $a$ attributable effects. Since $A$ is a random variable we refrain from calling this a hypothesis test, a term usually reserved for unknown parameters. From equation \eqref{eq:T-delt} we see that $\boldsymbol{\delta}^*(a)$ includes the $a$ pairs in $D(+,-)$ with the smallest values of $p_s$. It is possible to invert the one-sided ``plausibility tests'' based on $\widetilde{T}_{-\boldsymbol{\delta}^*}$ in order to construct a confidence interval for attributable effects of the form $\{A:\,A> a\}$. It turns out that if it is plausible that there are $a$ attributable effects then it is also plausible that there are $a+1$ attributable effects (\cite{rosenbaum2002}). This monotonicity property leads to a very simple procedure to construct a one-sided confidence interval in the absence of hidden bias. 
First, if $p_s=1/2$ for all $s=1,2,\dots,\widetilde{S}$ then for any $a\ge 0$ we can compute $\widetilde{T}_{-\boldsymbol{\delta}^*}=\{T - a - (S-a)/2\}/\{(S-a)^{1/2}/2\}$. Next, starting with $a=0$ we check whether $\widetilde{T}_{-\boldsymbol{\delta}^*} < \Phi^{-1}(1-\alpha)$, incrementing $a$ by one if it is not and stopping if it is. Finally, let $a^*$ be equal to one less than the value of $a$ at which we terminate the procedure. Using the monotonicity result above we have that $\{A:\,A > a^*\}$ is a one-sided $100\times(1-\alpha)\%$ confidence interval. If we bound the worst case calibrated bias above by $\Gamma$ then we can construct a one-sided $100\times(1-\alpha)\%$ confidence interval following the same procedure but instead using $\widetilde{T}_{-\boldsymbol{\delta}^*,\Gamma} = \{T - a - (S-a)p_\gamma\}/\{(S-a)p_\gamma(1-p_\gamma)\}^{1/2}$ as our standardized deviate, where $p_\gamma = \Gamma/(1+\Gamma)$. The resulting one-sided $100\times(1-\alpha)\%$ confidence interval is referred to as a \textit{sensitivity interval} (see Chapter 4, \cite{rosenbaum2002}). For a detailed illustration of these procedures we refer the reader to Sections 3-6 of \cite{rosenbaum2002a}. \section{FROM WORST CASE TO AVERAGE CASE SENSITIVITY ANALYSIS}\label{worst-avg} \subsection{Valid average case analysis: binary outcome}\label{avg-bin} An investigator conducting a sensitivity analysis tries to determine a test statistic whose null distribution is known conditional on the presence of hypothetical bias $\Gamma$. Since the distribution of $U_{si}$ is unknown, traditionally, the investigator assumes the worst. That is, the null distribution is constructed assuming that in each pair $u_{s1} = 1$ and $u_{s2}=0$. As noted in Section \ref{ss-review}, $T^+$ yields a valid reference distribution for testing the null of no treatment effect when $\Gamma \ge \Gamma_{truth}$. However, such a test is inherently conservative because it is designed to be valid for any realization of $\U$, since $\U$, and thus $\p = (p_1,\dots,p_S)^T$, is generally unknown. This is why we resort to a sensitivity analysis where we allow $p_s$ to vary arbitrarily as long as $p_s/(1-p_s) \le \Gamma$. In Section~\ref{subsec:evidence} we asked whether there was some natural quantification of average bias to which we could calibrate our sensitivity analysis which would lead to a less conservative analysis than the worst case calibration. One such quantification is $\Gamma'_{truth} = \overline{\p}/(1-\overline{\p})$ where $\overline{\p}$ is the sample average of the $p_s$. In what follows, we show that if we calibrate our sensitivity analysis to $\Gamma'_{truth}$ it will be valid and less conservative than the worst case calibration. To prove this, we show that $T' \sim \operatorname{B}(\Gamma'_{truth}/(1+\Gamma'_{truth}),S)$ yields a valid reference distribution for testing the null of no treatment effect against the alternative of a positive treatment effect. In Theorem \ref{thm:avgGam} below, we prove that the upper tail probability for McNemar's statistic $T$ is bounded above by the upper tail probability for $T'$. \begin{theorem} \label{thm:avgGam} Set $\overline{\p}= \left(\sum_{s=1}^S p_s\right)/S$ and $\Gamma'_{truth} = \overline{\p}/(1-\overline{\p})$ and let $V_s \stackrel{iid}{\sim} \operatorname{Bern}(\Gamma'_{truth}/(1+\Gamma'_{truth}))$ for all $s=1,2,\dots,S$. Define $T' = V_1 + \dots + V_S$. 
Then \[ \Prob{T \ge a |\mathcal{F},\,\mathcal{Z}\cap \mathcal{A}_{\mathbf{1}}}\le \Prob{T' \ge a}\;\text{for all}\;\; a \ge S\overline{\p}\,.\] \end{theorem} \begin{proof} Observe that $\p$ majorizes $\overline{\p}\cdot{\mathbf{1}}$ and note that if a function $f(\p)$ is Schur-convex in $\p$ then $f(\p) \ge f(\overline{\p}{\mathbf{1}})$. What remains to be shown is that the upper tail probability of a Poisson-Binomial random variable is Schur-convex in $\p$. See \cite{gleser1975} for this approach and \cite{hoeffding1956} for the original proof. The theorem as stated is an immediate corollary of Theorem 4 in \cite{hoeffding1956}. \cite{gleser1975} presents a more general version of this result which holds when the success probabilities of $T$ majorize those of $T'$. \end{proof} \begin{remark}\label{rmk:avgGam} Theorem \ref{thm:avgGam} is a finite-sample result, and the proofs we refer to are both rather technical. An analogous asymptotic result follows from much simpler arguments. The variance of a Bernoulli random variable with success probability $p$ can be written as $f(p)=p(1-p)$. $f$ is clearly concave and thus, by Jensen's inequality, $\Var{T} \le \Var{T'}$. Since the expectations of $T$ and $T'$ are equal, using a normal approximation to the exact permutation test will asymptotically yield the same stochastic ordering as in Theorem \ref{thm:avgGam}. \end{remark} \begin{remark}\label{rmk:T-plus-bnd} It is important to note that $\Gamma'_{truth}\le\Gamma_{truth}$ since $p_s/(1-p_s) \le \Gamma_{truth}$ for $s=1,\dots,S$. Consequently, we have that $\Prob{T'\ge a} \le \Prob{T^+\ge a}$, which implies that the sensitivity analysis with respect to $\Gamma'$, the average case calibrated sensitivity analysis, is less conservative than the worst case calibrated sensitivity analysis with respect to $\Gamma$. \end{remark} The implication of this theorem is that it is safe to interpret a sensitivity analysis in terms of $\Gamma'$, an upper bound on the sample average hidden bias ($\overline{\p}/(1-\overline{\p})$). For example, when using the \textit{most active cellphone day} control window we have $\Gamma_{sens}=2.4$. Previously, we would say that if no case-crossover pair was subject to hidden bias larger than 2.4, then the data would still provide evidence that talking on a cellphone increases the risk of getting in a car accident. Now, some case-crossover pairs may be subject to hidden bias (much) larger than 2.4, as long as the sample average hidden bias is no larger than 2.4. It is important to note that this interpretation is only valid for binary outcomes. The proof relies on Schur-convexity of the upper tail probability of our test statistic with respect to $\p$, which requires that it be symmetric in $\p$. For more general tests, such as the sign-rank test, this is not the case. Some additional applications of Theorem~\ref{thm:avgGam} can be found in the Web Appendices. Web Appendix A considers the case when $U_{s1}$ and $U_{s2}$ measure some time-varying propensity of subject $s$ to use his cellphone. Using Theorem~\ref{thm:avgGam} we develop a little theory and a numerical example. Web Appendix B provides details on how Theorem \ref{thm:avgGam} can be applied when $U$ is not restricted to the unit interval. 
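The ordering in Theorem \ref{thm:avgGam} is easy to probe numerically. The Python sketch below (illustrative only, with a randomly drawn $\p$; SciPy assumed) builds the Poisson-Binomial upper tail by convolution and compares it with the $\operatorname{B}(\overline{\p},S)$ tail above the mean, in line with the variance argument of Remark \ref{rmk:avgGam}.
\begin{verbatim}
import math
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
S = 60
p = rng.uniform(0.3, 0.95, size=S)          # heterogeneous pair probabilities p_s
p_bar = p.mean()

# Poisson-Binomial pmf of T = sum of independent Bernoulli(p_s), via convolution
pmf = np.array([1.0])
for ps in p:
    pmf = np.convolve(pmf, [1.0 - ps, ps])

a_grid = np.arange(math.ceil(S * p_bar) + 1, S + 1)   # values strictly above the mean
tail_T = np.array([pmf[a:].sum() for a in a_grid])    # P(T >= a)
tail_Tprime = binom.sf(a_grid - 1, S, p_bar)          # P(T' >= a), T' ~ B(p_bar, S)

print(np.all(tail_T <= tail_Tprime + 1e-12))          # expected: True
\end{verbatim}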
\section{THE EFFECT OF CELLPHONE USE ON MOTOR-VEHICLE COLLISIONS}\label{examples} In this section we return to our motivating example to see how our average case theory can provide interpretive assistance to the standard sensitivity analysis we carried out in Section \ref{sens-examp} and allow us to incorporate additional information to empirically calibrate our average case sensitivity analysis. \subsection{Driving intermittency}\label{subsec:inter} The study conducted in \cite{tibshirani1997} did not have access to direct information on whether an individual was driving during the control window. The authors examine the effect of driving intermittency during the control window on their relative-risk estimate by bootstrapping the estimate using an intermittency rate of $\widehat{\rho} = 0.65$. In other words, they correct for bias due to the possibility that a subject was not driving during the control window. The intermittency rate was estimated using a survey asking 100 people who reported car crashes whether they were driving at the same time the previous day. Alternatively, one may ask a related question in the context of a sensitivity analysis: does the bias due to driving intermittency explain the observed association between cellphone usage and traffic incidents? Given that the study took place in the early 1990s when, for some, cellphones and carphones were synonymous, it would not be surprising if many study participants (almost) always used their landlines rather than their cellphones when not driving, violating the positivity assumption. Therefore, the only plausible $\Gamma_{truth}$ is infinite (or at least very large) when conditioning on case-crossover pairs where the subject is on her cellphone in only one of the two windows. This renders the worst case sensitivity analysis uninformative. No magnitude of association between cellphone use and car accidents would convince us that the relationship was causal if we stuck to the worst case calibration of the sensitivity analysis. The average case calibration, on the other hand, still has a chance. We can use our estimate $\widehat{\rho}$ to approximate a plausible value of $\overline{\p}$, $\overline{\p} = (1-\widehat{\rho})\cdot 1 + \widehat{\rho}\cdot 0.5 = 0.675$, and a corresponding plausible value of $\Gamma'_{truth}$, $\Gamma'_{truth} = \overline{\p}/(1-\overline{\p}) \approx 2.1 \,.$ Theorem \ref{thm:avgGam} circumvents the conceptual hurdle of an unbounded $\Gamma_{truth}$ and allows us to confidently use a sensitivity analysis to quantitatively assess the causal evidence. Moreover, it allows us to incorporate information about $\rho$ into our analysis. If the association between cellphone use and motor vehicle collisions is causal in nature, our empirical calibration suggests that our test for treatment effect should be insensitive to unobserved biases with magnitude $\Gamma' \approx 2.1$. \subsection{An alternative approach to handling pairs with unbounded bias} There are other approaches to dealing with the example of infinite bias we just presented. For instance, the investigator may be more confident specifying a finite upper bound on the worst case bias, $\Gamma<\infty$, for a proportion $1-\beta$ of the matched pair sample than working in terms of the average case bias. 
If he has a good sense of what proportion $\beta$ of the pairs is exposed to unbounded bias, he may drop $\beta\times S$ pairs where the treated unit had a positive outcome and perform the standard worst case sensitivity analysis on the remaining $(1-\beta)\times S$ pairs. \cite{rosenbaum1987} proved that this method yields a valid sensitivity analysis. This strategy would be particularly well suited to the example of driving intermittency discussed above. However, this approach assumes that this particular pattern of unmeasured confounding is present, and driving intermittency is just one of many sources of potential bias. On the other hand, the average case analysis accommodates arbitrary patterns of bias that may lead to large differences in average and worst case biases. \subsection{Average case sensitivity analysis for attributable effects}\label{attr-sec} How many of the recorded accidents in our study can be attributed to the driver talking on a cellphone? Recall from Section \ref{subsec:attr-binary} that the set indicated by $\boldsymbol{\delta}^*$ includes the $a$ pairs in $D(+,-)$ with the smallest values of $p_s$. Although we cannot compute $\widetilde{T}_{-\boldsymbol{\delta}^*}$ and thus cannot use it directly to conduct inference, we can compute a lower bound that we will show can be used to perform an average case sensitivity analysis: \begin{align}\label{eq:norm-dev-star} \widetilde{T}_{-\boldsymbol{\delta}^*} & = \frac{\sum_{s=1}^{S}Z_{s1}r_{Cs1} - \sum_{s=1}^S(1-\delta^*_{s1})p_s}{\left\{\sum_{s=1}^S(1-\delta^*_{s1})p_s(1-p_s)\right\}^{1/2}} \notag \\ & = \frac{\sum_{s=1}^S Z_{s1}R_{s1}(1-\delta^*_{s1}) - \sum_{s=1}^S(1-\delta_{s1}^*)p_s }{\left\{\sum_{s=1}^S(1-\delta^*_{s1})p_s(1-p_s)\right\}^{1/2}} \notag \\ & = \frac{T-a - (S-a)\overline{\p}(a) }{\left\{\sum_{s=1}^S(1-\delta^*_{s1})p_s(1-p_s)\right\}^{1/2}} \notag\\ & \ge \frac{T-a - (S-a)\overline{\p}(a) }{\left\{(S-a)\overline{\p}(a)(1-\overline{\p}(a))\right\}^{1/2}} = \widetilde{T}(\overline{\p}(a)) \end{align} where $\overline{\p}(a) = \sum_{s=1}^S(1-\delta_{s1}^*)p_s/(S-a)$. The last inequality follows from Jensen's inequality applied to the variance term in the denominator. Notice that instead of applying Theorem \ref{thm:avgGam} in order to derive a sensitivity analysis in terms of the average bias, we use the simpler argument in Remark \ref{rmk:avgGam}. Now note that if $p_s \ge \underline{p}$ for all $s=1,\dots,S$ then we can relate the trimmed average probability, $\overline{\p}(a)$, to $\overline{\p}$ as follows \begin{equation}\label{p-bar-lb} \overline{\p} \ge \frac{(S-a)\overline{\p}(a)+a\cdot\underline{p}}{S} = q(a)\,.\end{equation} We can use this relationship to construct a simple procedure -- mirroring that of Section \ref{subsec:attr-binary} -- to perform an average case calibrated sensitivity analysis for one-sided confidence intervals of the form $\{A:\,A> a\}$ that yields average case calibrated sensitivity intervals. The procedure can be summarized as follows: \begin{enumerate}[leftmargin=20pt] \item{} Choose a desired average calibrated sensitivity parameter $\Gamma'$. \item{} For $a=0$ solve $q(a) = \Gamma'/(1+\Gamma')$ for $\overline{\p}(a)$ and denote the solution $p(a,\gamma')$. Compute $\widetilde{T}(p(a,\gamma'))$. \item{} If $\widetilde{T}(p(a,\gamma')) < \Phi^{-1}(1-\alpha)$ then conclude that it is plausible that none of the accidents can be attributed to talking on a cellphone. \item{} Else, repeat steps (2) and (3) for $a=1,\dots,S$, stopping when $\widetilde{T}(p(a,\gamma')) < \Phi^{-1}(1-\alpha)$. 
Let $a^* = a-1$. \item{} Return the $100\times(1-\alpha)\%$ sensitivity interval $\{A:\,A > a^*\}$ and conclude that it is plausible that more than $a^*$ of the accidents are attributable to talking on a cellphone when exposed to an average bias of at most $\Gamma'$. \end{enumerate} Just as for the simple test of no treatment effect, the procedure is nearly identical to the worst case sensitivity analysis, but with an average case interpretation of the bias parameter. In fact, the procedure also yields a corresponding worst case calibration for the computed sensitivity interval. Under the worst case calibration, the sensitivity interval from step (5) would correspond to a worst case bias $\Gamma = p(a^*,\gamma')/(1-p(a^*,\gamma'))$. How might we apply this procedure to our example? For a given control window we would like to make confidence statements such as, \textit{at the 95\% level it is plausible that there are $a^*$ or more accidents attributable to talking on a cellphone}. Recall the empirically calibrated average case bias from Section \ref{subsec:inter}, $\Gamma' \approx 2.1$. We may also be interested in making sensitivity statements such as, \textit{if the average probability of talking on a cellphone during the hazard window is at most 2.1 times that of talking on a cellphone in the control window for drivers in our study, $\Gamma'=2.1$, it is plausible at the 95\% level that there are $a^*$ or more accidents attributable to talking on a cellphone}. Table \ref{attr-table} summarizes the plausible range of attributable accidents for each of the four different control windows. For all four control windows we set $\Gamma'=2.1$. The first column is the number of discordant pairs in which the driver was on a cellphone during the hazard window but not the control window. The second column reports the lower bound $a^*$ of the one-sided sensitivity intervals for $\alpha=0.05$. We also report the corresponding worst case calibrated bias in the last column of Table \ref{attr-table}. In the cellphone study we have no convincing reason to believe that $\underline{p} > 0$, but in other examples it may make sense that $p_s$ is bounded from below, which has the effect of making the procedure less conservative. \begin{table}[ht] \begin{center} \begin{tabular}{lcccr} Control Window & $|D(+,-)|$ & $a^*$ & $\Gamma'$ & $\Gamma$\\ \hline previous weekday/weekend &158&28&2.1&4.04\\ one week prior &164&31& 2.1&4.37 \\ previous driving day &119&18&2.1&3.51 \\ most active cellphone day &134&5&2.1&2.3 \end{tabular} \vspace{20pt} \caption{Sensitivity analysis for 95\% one-sided confidence intervals for attributable effects of the form $\{A:\,A> a^*\}$. $\Gamma'$ indicates the average calibration bias that we specify for the procedure and $\Gamma$ is the implied worst case calibration that corresponds to the computed interval. We assume that $\underline{p}=0$.}\label{attr-table} \end{center} \end{table} 
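Steps (1)-(5) above differ from the worst case procedure of Section \ref{subsec:attr-binary} only in the success probability plugged into the standardized deviate. The following minimal Python sketch (illustrative only, with hypothetical counts; SciPy assumed; this is not the supplementary R code) implements both calibrations.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def sensitivity_interval(T, S, prob_at, alpha=0.05):
    """Return a* such that {A : A > a*} is the one-sided 100(1-alpha)% sensitivity
    interval for attributable effects; prob_at(a) is the success probability used
    when testing the plausibility of 'at most a attributable effects'."""
    z = norm.ppf(1 - alpha)
    a_star = S - 1                           # fallback if the deviate never drops below z
    for a in range(S):
        p = prob_at(a)
        if not 0.0 < p < 1.0:                # probability left the unit interval; stop testing
            a_star = a - 1
            break
        deviate = (T - a - (S - a) * p) / np.sqrt((S - a) * p * (1 - p))
        if deviate < z:
            a_star = a - 1                   # one less than the terminating value of a
            break
    return a_star

def worst_case_prob(gamma):                  # constant p_gamma = Gamma/(1+Gamma)
    return lambda a: gamma / (1.0 + gamma)

def avg_case_prob(gamma_prime, S, p_low=0.0):
    target = gamma_prime / (1.0 + gamma_prime)            # set q(a) = Gamma'/(1+Gamma')
    return lambda a: (S * target - a * p_low) / (S - a)   # and solve for p_bar(a)

# Hypothetical counts: T = 150 of S = 180 discordant pairs.
print(sensitivity_interval(150, 180, worst_case_prob(4.0)))
print(sensitivity_interval(150, 180, avg_case_prob(2.1, 180)))
\end{verbatim}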
What this means is that we would arrive at the same conclusion about the number of plausible attributable accidents if we put an upper bound on the worst case bias of $\Gamma=4.37$ and followed the standard confidence interval procedure for attributable effects outlined in Section \ref{subsec:attr-binary} and \cite{gastwirth2000}. Unlike the sensitivity analysis for the simple test of no treatment effect, the average case calibrated sensitivity analysis for attributable effects is not guaranteed to be less conservative than the worst case calibration. For a 95\% sensitivity interval for attributable effects generated by our procedure where $a^* >0$, the corresponding upper bound on the average case bias $\Gamma'$ is less than the corresponding upper bound on the worst case bias $\Gamma$. This occurs because we do not know which pairs contain attributable effects, nor do we know each pair's particular exposure to hidden bias. Without any further assumptions, the best lower bound for $\Gamma'$ assumes that all the $a$ pairs with attributable effects have arbitrarily small probability of being on a cellphone in the hazard window and not the control window. This is expressed mathematically in equation \eqref{p-bar-lb} by setting $\underline{p}=0$. If $\Gamma'_{truth}<\Gamma_{truth}$ -- which is a reasonable assumption in most circumstances -- then using the average case calibration may still result in a less conservative analysis. However, if all case-crossover pairs are exposed to the same magnitude of bias, so that $\Gamma'_{truth}=\Gamma_{truth}$, then we are guaranteed to be less conservative by using the worst case calibration. A reasonable solution would be to simply report both $\Gamma'$ and $\Gamma$ for a sensitivity interval, as we do in Table \ref{attr-table}. The investigator may then present an argument based on subject matter expertise as to which calibration is likely to be less conservative. \section{DISCUSSION} The theorem presented in Section \ref{avg-bin} can be thought of as an interpretive aid: for the same standard sensitivity analysis we now have an additional, often more natural, way to interpret the results. This new average case interpretation may also allow researchers to make use of additional information about the problem to empirically calibrate their sensitivity analysis. As we saw in Section \ref{subsec:inter}, we used the estimate of the driving intermittency rate to determine an approximate lower bound on $\Gamma'_{truth}$, providing us with some empirical guidance when conducting our sensitivity analysis. In the worst case setting, such an empirical calibration would not be possible. The investigator performs a sensitivity analysis in anticipation of critics who might claim that the association is due to some unobserved confounder. The average case analysis makes the protection that the sensitivity analysis provides against such criticism more robust. As the title of the article makes clear, the results we present are for binary data. As we illustrated in Section \ref{attr-sec}, the notion of attributable effects allows us to construct interpretable confidence intervals for binary outcomes. We show that our average case calibration can be extended to the sensitivity analysis of such confidence intervals and in most cases will yield less conservative conclusions. It may then be interesting to apply the results here to the sensitivity analysis of displacement effects, the continuous analog of attributable effects for non-binary outcomes. 
\cite{rosenbaum2002a} show that displacement effects can be analyzed in the attributable effect framework for binary response, providing a potential avenue to extend average case calibrated sensitivity analysis to a study with non-binary outcomes. \section{Supplementary Materials} Web Appendices A and B referenced in Section \ref{worst-avg} and the R code that produced the sensitivity analysis summarized in Table \ref{sens-table} in Section \ref{sens-examp}, the Monte Carlo simulation found in Web Table 1 in Web Appendix A, and the attributable effects analysis in Section \ref{attr-sec} are available with this paper at the Biometrics website on Wiley Online Library. \label{lastpage} \end{document}
\begin{document} \title{SLOCC Convertibility between Two-Qubit States} \author{Yeong-Cherng~Liang}\email{[email protected]} \affiliation{School of Physical Sciences, The University of Queensland, Queensland 4072, Australia.} \author{Llu\'{\i}s~Masanes}\email{[email protected]} \affiliation{Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom.} \author{Andrew~C.~Doherty}\email{[email protected]} \affiliation{School of Physical Sciences, The University of Queensland, Queensland 4072, Australia.} \date{\today} \pacs{03.67.-a,03.67.Mn} \begin{abstract} In this paper we classify the four-qubit states that commute with $U\otimes{U}\otimes{V}\otimes{V}$, where $U$ and $V$ are arbitrary members of the Pauli group. We characterize the set of separable states for this class, in terms of a finite number of entanglement witnesses. Equivalently, we characterize the two-qubit, Bell-diagonal-preserving, completely positive maps that are separable. These separable completely positive maps correspond to protocols that can be implemented with stochastic local operations assisted by classical communication (SLOCC). This allows us to derive a complete set of SLOCC monotones for Bell-diagonal states, which, in turn, provides the necessary and sufficient conditions for converting one two-qubit state to another by SLOCC. \end{abstract} \maketitle \section{Introduction} Entanglement has, unmistakably, played a crucial role in many quantum information processing tasks. Despite the various separability criteria that have been developed, determining whether a general multipartite mixed state is entangled is far from trivial. In fact, computationally, the problem of deciding if a quantum state is separable has been proven to be NP-hard~\cite{NP-hard}. To date, separability of a general bipartite quantum state is fully characterized only for dimensions $2\times2$ and $2\times3$~\cite{PPT}. For higher dimensional quantum systems, there is no single criterion that is both necessary and sufficient for separability. Nevertheless, for quantum states that are invariant under some group of local unitary operators, separability can often be determined relatively easily~\cite{R.F.Werner:PRA:1989,M.Horodecki:PRA:1999, K.G.H.Vollbrecht:PRA:2001,T.Eggeling:PRA:2001}. On the other hand, it is often of interest in quantum information processing to determine if a given state can be transformed to some other desired state by local operations. Indeed, convertibility between two (entangled) states using local quantum operations assisted by classical communication (LOCC) is closely related to the problem of quantifying the entanglement associated to each quantum system. Intuitively, one expects that a (single copy) entangled state can be locally and deterministically transformed to a less entangled one but not the other way round. This intuition was made concrete in Nielsen's work~\cite{M.A.Nielsen:PRL:1999} where he showed that a single copy of a bipartite pure state $\ket{\Psi}$ can be locally and deterministically transformed to another bipartite state $\ket{\Phi}$, if and only if $\ket{\Phi}$ takes equal or lower values for a set of functions known as entanglement monotones~\cite{G.Vidal:JMP:2000}. 
One can, nevertheless, relax the notion of convertibility by only requiring that the conversion succeeds with some nonzero probability. Such transformations are now known as stochastic LOCC (SLOCC)~\cite{W.Dur:PRA:2000}. In this case, it was shown by Vidal~\cite{G.Vidal:PRL:1999} that in the single copy scenario, a pure state $\ket{\Psi}$ can be locally transformed to $\ket{\Phi}$ with nonzero probability if and only if the Schmidt rank of $\ket{\Psi}$ is higher than or equal to that of $\ket{\Phi}$ (see also Ref.~\cite{W.Dur:PRA:2000}). The analogous situation for mixed quantum states is not as well understood even for two-qubit systems. If it were possible to obtain a singlet state by SLOCC from a single copy of any mixed state, it would be possible to convert any mixed state to any other state~\cite{C.H.Bennett:PRA:1996}. However, as was shown by Kent {\em et al.}~\cite{A.Kent:PRL:1999} (see also Ref.~\cite{LX.Cen}), the best that one can do -- in terms of increasing the entanglement of formation~\cite{S.Hill:PRL:1997} -- is to obtain a Bell-diagonal state with higher but generally non-maximal entanglement. In fact, apart from some rank deficient states, this conversion process is known to be invertible (with some probability)~\cite{F.Verstraete:PRA:2001}. Hence, most two-qubit states are known to be SLOCC equivalent to a unique~\cite{fn:unique} Bell-diagonal state of maximal~\cite{fn:maximal} entanglement~\cite{A.Kent:PRL:1999,F.Verstraete:PRA:2001,F.Verstraete:PRA:2002}. In this paper, we will complete the picture of two-qubit convertibility under SLOCC by providing the necessary and sufficient conditions for converting among Bell-diagonal states. This characterization of the separable completely positive maps (CPM) that take Bell diagonal states to Bell diagonal states has other applications. Specifically, it was required in the proof of our recent work~\cite{ANL} which showed that all bipartite entangled states display a certain kind of hidden non-locality~\cite{Hidden.Nonlocality}. (We show that a bipartite quantum state violates the Clauser-Horne-Shimony-Holt (CHSH) inequality~\cite{CHSH} after local pre-processing with some non-CHSH violating ancilla state if and only if the state is entangled.) Thus this paper completes the proof of that result. The structure of this paper is as follows. In Sec.~\ref{Sec:SeparableStates}, we will start by characterizing the set of separable states commuting with $U\otimes{U}\otimes{V}\otimes{V}$, where $U$ and $V$ are arbitrary members of the Pauli group. Then, after reviewing the one-to-one correspondence between separable maps and separable quantum states in Sec.~\ref{Sec:SeparableMap}, we will derive, in Sec.~\ref{Sec:BellMaps}, the full set of Bell-diagonal preserving SLOCC transformations. A {\em complete} set of SLOCC monotones is then derived in Sec.~\ref{Sec:Monotones} to provide the necessary and sufficient conditions for converting a Bell-diagonal state to another. This will then lead us to the necessary and sufficient conditions that can be used to determine if a two-qubit state can be converted to another using SLOCC transformations. Finally, we conclude the paper with a summary of results. Throughout, the $(i,j)$-th entry of a matrix $W$ is denoted as $[W]_{ij}$ (likewise $[\beta]_i$ for the $i$-th component of a vector), whereas a null entry in a matrix will be replaced by $\cdot$ for ease of reading. Moreover, $\mathbb{I}$ is the identity matrix and $\Pi$ is used to denote a projector. 
\section{Four-qubit Separable States with $U\otimes{U}\otimes{V}\otimes{V}$ Symmetry}\label{Sec:SeparableStates}

Let us begin by recalling an important property of two-qubit states which commute with all unitaries of the form $U\otimes{U}$, where $U$ are members of the Pauli group. The Pauli group is generated by the Pauli matrices $\{\sigma_i\}_{i=x,y,z}$, and has 16 elements. The representation $U\otimes{U}$ decomposes into four one-dimensional irreducible representations, each acting on the subspace spanned by one vector of the Bell basis
\begin{eqnarray}\label{Eq:BellBases}
\ket{\Phi_{^1_2}} &\equiv& \frac{1}{\sqrt{2}}\left( \ket{00} \pm \ket{11} \right), \\
\ket{\Phi_{^3_4}} &\equiv& \frac{1}{\sqrt{2}}\left( \ket{01} \pm \ket{10} \right).
\end{eqnarray}
This implies that~\cite{K.G.H.Vollbrecht:PRA:2001} any two-qubit state which commutes with $U\otimes{U}$ can be written as $\rho=\sum_{i=1}^4 [r]_{i} \Pi_i$, where $\Pi_i\equiv\proj{\Phi_i}$.

With this information in mind, we are now ready to discuss the case of interest to us. We would like to characterize the set of four-qubit states which commute with all unitaries $U\otimes{U}\otimes{V}\otimes{V}$, where $U$ and $V$ are members of the Pauli group. Let us denote this set of states by $\varrho$ and the state space of $\rho\in\varrho$ as $\mathcal{H}\simeq \mathcal{H}_{\mathcal{A}'}\otimes\mathcal{H}_{\mathcal{B}'}\otimes\mathcal{H}_{\mathcal{A}''}\otimes\mathcal{H}_{\mathcal{B}''}$, where $\mathcal{H}_{\mathcal{A}'}$, $\mathcal{H}_{\mathcal{B}'}$ etc.\ are the Hilbert spaces of the constituent qubits. In this notation, both the subsystem associated with $\mathcal{H}_{\mathcal{A}'}\otimes\mathcal{H}_{\mathcal{B}'}$ and that with $\mathcal{H}_{\mathcal{A}''}\otimes\mathcal{H}_{\mathcal{B}''}$ have $U\otimes{U}$ symmetry and hence are linear combinations of Bell-diagonal projectors~\cite{K.G.H.Vollbrecht:PRA:2001}. Our aim in this section is to provide a full characterization of the set of $\rho$ that are separable between $\mathcal{H}_\mathcal{A}\equiv\mathcal{H}_{\mathcal{A}'}\otimes\mathcal{H}_{\mathcal{A}''}$ and $\mathcal{H}_\mathcal{B}\equiv\mathcal{H}_{\mathcal{B}'}\otimes\mathcal{H}_{\mathcal{B}''}$ (see Fig.~\ref{Fig:StateSpace}). Throughout this section, a state is said to be {\em separable} if and only if it is separable between $\mathcal{H}_\mathcal{A}$ and $\mathcal{H}_\mathcal{B}$.

\begin{figure}
\caption{The four-qubit state space $\mathcal{H}_{\mathcal{A}'}\otimes\mathcal{H}_{\mathcal{B}'}\otimes\mathcal{H}_{\mathcal{A}''}\otimes\mathcal{H}_{\mathcal{B}''}$ and the $\mathcal{A}|\mathcal{B}$ partitioning considered in the text. (Figure not recovered in this copy.)\label{Fig:StateSpace}}
\end{figure}

The symmetry of $\rho$ allows one to write it as a {\em non-negative} combination of (tensored) Bell projectors:
\begin{gather}\label{Eq:Rep}
\rho=\sum_{i=1}^4\sum_{j=1}^4 [r]_{ij}\Pi_i\otimes\Pi_j,
\end{gather}
where the Bell projector before and after the tensor product, respectively, acts on $\mathcal{H}_{\mathcal{A}'}\otimes\mathcal{H}_{\mathcal{B}'}$ and $\mathcal{H}_{\mathcal{A}''}\otimes\mathcal{H}_{\mathcal{B}''}$ (Fig.~\ref{Fig:StateSpace}). Thus, any state $\rho\in\varrho$ can be represented in a compact manner via the corresponding $4\times4$ matrix $r$. More generally, any operator $\mu$ acting on the same Hilbert space $\mathcal{H}$ and having the same symmetry admits a $4\times4$ matrix representation $M$ via:
\begin{gather}\label{Eq:Rep:General}
\mu=\sum_{i=1}^4\sum_{j=1}^4 [M]_{ij}\Pi_i\otimes\Pi_j,
\end{gather}
where $[M]_{ij}$ is now not necessarily non-negative.
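As a concrete illustration of the representation in Eq.~\eqref{Eq:Rep:General}, the short Python/NumPy sketch below (added here purely as an illustration; it is not part of the original analysis) builds the Bell projectors $\Pi_i$, assembles $\rho=\sum_{ij}[r]_{ij}\,\Pi_i\otimes\Pi_j$ for an arbitrary non-negative matrix $r$, and verifies numerically that the resulting state commutes with $U\otimes U\otimes V\otimes V$ for all Pauli $U$ and $V$:
\begin{verbatim}
import itertools
import numpy as np

# Pauli matrices and computational-basis kets
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
k0, k1 = np.eye(2, dtype=complex)[:, [0]], np.eye(2, dtype=complex)[:, [1]]

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Bell states |Phi_1>, ..., |Phi_4> and the projectors Pi_i
phi = [(kron(k0, k0) + kron(k1, k1)) / np.sqrt(2),
       (kron(k0, k0) - kron(k1, k1)) / np.sqrt(2),
       (kron(k0, k1) + kron(k1, k0)) / np.sqrt(2),
       (kron(k0, k1) - kron(k1, k0)) / np.sqrt(2)]
Pi = [v @ v.conj().T for v in phi]

# rho = sum_ij r_ij Pi_i (x) Pi_j on (A'B') (x) (A''B''), for a random r >= 0
rng = np.random.default_rng(0)
r = rng.random((4, 4)); r /= r.sum()
rho = sum(r[i, j] * kron(Pi[i], Pi[j]) for i in range(4) for j in range(4))

# rho must commute with U (x) U (x) V (x) V for every Pauli U, V
for U, V in itertools.product([I2, sx, sy, sz], repeat=2):
    S = kron(U, U, V, V)
    assert np.allclose(S @ rho @ S.conj().T, rho)
print("U(x)U(x)V(x)V symmetry verified")
\end{verbatim}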
When there is no risk of confusion, we will also refer to $r$ and $M$, respectively, as a state and an operator having the aforementioned symmetry. Evidently, in this representation, an operator $\mu$ is non-negative if and only if all entries in the corresponding $4\times4$ matrix $M$ are non-negative. Notice also that by an appropriate local unitary transformation, one can swap any $\Pi_i$ with any other $\Pi_j$, $j\neq i$, while keeping all the other $\Pi_k$, $k\neq i,j$, unaffected. Here, the term {\em local} is used with respect to the $\mathcal{A}$ and $\mathcal{B}$ partitioning. Specifically, via the local unitary transformation
\begin{equation}
V_{ij}\equiv\left\{
\begin{array}{c@{\quad:\quad}l}
\half(\mathbb{I}_2-{\rm i}\sigma_z)\otimes(\mathbb{I}_2+{\rm i}\sigma_z) & i=1,j=2,\\
\half(\sigma_x+\sigma_z)\otimes(\sigma_x+\sigma_z) & i=2,j=3,\\
\half(\mathbb{I}_2+{\rm i}\sigma_z)\otimes(\mathbb{I}_2+{\rm i}\sigma_z) & i=3,j=4,
\end{array}
\right.
\end{equation}
one can swap $\Pi_i$ and $\Pi_j$ while leaving all the other Bell projectors unaffected. In terms of the corresponding $4\times 4$ matrix representation, the effect of such local unitaries on $\mu$ amounts to a permutation of the rows and/or columns of $M$. For brevity, in what follows, we will say that two matrices $M$ and $M'$ are local unitarily equivalent if we can obtain $M$ by simply permuting the rows and/or columns of $M'$ and {\em vice versa}. A direct consequence of this observation is that if $r$ represents a separable state, then so does any other $r'$ that is obtained from $r$ by independently permuting any of its rows and/or columns.

Before we state the main result of this section, let us introduce one more definition.
\begin{dfn}\label{dfn:rhos}
Let $\mathcal{P}_s\subset\varrho$ be the convex hull of the states
\begin{equation}\label{Eq:D0&G0}
D_0\equiv \frac{1}{4}\left( \begin{array}{cccc} 1 & \cdot & \cdot & \cdot \\ \cdot & 1 & \cdot & \cdot \\ \cdot & \cdot & 1 & \cdot \\ \cdot & \cdot & \cdot & 1 \\ \end{array} \right),\quad G_0\equiv \frac{1}{4}\left( \begin{array}{cccc} 1 & 1 & \cdot & \cdot \\ 1 & 1 & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ \end{array} \right),
\end{equation}
and the states that are local unitarily equivalent to these two.
\end{dfn}
Simple calculations show that, with respect to the $\mathcal{A}$ and $\mathcal{B}$ partitioning, $D_0$ and $G_0$ are separable~\cite{fn:separable}. Hence, $\mathcal{P}_s$ is a separable subset of $\varrho$. The main result of this section consists in showing the converse, and hence the following theorem.
\begin{theorem}\label{Thm:SeparableUUVV}
$\mathcal{P}_s$ is the set of states in $\varrho$ that are separable with respect to the $\mathcal{A},\mathcal{B}$ partitioning.
\end{theorem}
Now, we note that $\mathcal{P}_s$ is a convex polytope. Its boundary is therefore described by a finite number of facets~\cite{B.Grunbaum:polytope}. Hence, to prove the above theorem, it suffices to show that all these facets correspond to valid entanglement witnesses. Denote the set of facets by $\mathcal{W}=\{W_i\}$.
Using the software PORTA~\cite{PORTA}, the {\em nontrivial} facets were found to be equivalent under local unitaries to one of the following:
\begin{widetext}
\begin{gather}\label{Eq:W}
W_1\equiv \left( \begin{array}{rrrr} 1 & 1 & 1 &-1 \\ 1 & 1 & 1 &-1 \\ 1 & 1 & 1 &-1 \\ -1 &-1 &-1 & 1 \\ \end{array} \right),~W_2\equiv \left( \begin{array}{cccc} 1 & 1 & \cdot & -1\\ \cdot & \cdot & 1 & \cdot \\ \cdot & \cdot & 1 & \cdot \\ \cdot & \cdot & 1 & \cdot \\ \end{array} \right),~W_3\equiv \left( \begin{array}{rrrr} 3 & 3 & 1 &-1 \\ 3 &-1 & 1 & 3 \\ 1 & 1 & 3 & 1 \\ -1 &-1 & 1 &-1 \\ \end{array} \right),~W_4\equiv \left( \begin{array}{rrrr} 3 & 3 & 1 &-1 \\ 3 &-1 & 1 & 3 \\ 3 &-1 & 1 &-1 \\ 1 & 1 &-1 & 1 \\ \end{array} \right).
\end{gather}
\end{widetext}
Apart from these, there is also a facet $W_0$ whose only nonzero entry is $[W_0]_{11}=1$. $W_0$ and the operators local unitarily equivalent to it give rise to positive semidefinite matrices [cf.\ Eq.~\eqref{Eq:ZW}], and thus correspond to trivial entanglement witnesses. On the other hand, it is also not difficult to verify that $W_1$ (and the operators equivalent to it under local unitaries) is decomposable and therefore merely demands that a separable $\rho_s$ remain positive semidefinite after partial transposition. These are all the entanglement witnesses that arise from the positive partial transposition (PPT) requirement~\cite{PPT} for separable states.

To complete the proof of Theorem~\ref{Thm:SeparableUUVV}, it remains to show that $W_2$, $W_3$, $W_4$ give rise to Hermitian matrices
\begin{gather}\label{Eq:ZW}
Z_{w,k}=\sum_{i=1}^4\sum_{j=1}^4 [W_k]_{ij}~\left(\Pi_i\otimes\Pi_j\right)
\end{gather}
that are valid entanglement witnesses, i.e., $\text{tr}(\rho_sZ_{w,k})\ge 0$ for any separable $\rho_s\in\varrho$. It turns out that this can be proved with the help of the following lemma from Ref.~\cite{ACD:Extension}.
\begin{widetext}
\begin{lemma}\label{Lem:ProofOfWitnesses}
For a given Hermitian matrix $Z_w$ acting on $\mathcal{H}_\mathcal{A}\otimes\mathcal{H}_\mathcal{B}$, with $\dim(\mathcal{H}_\mathcal{A})=d_\mathcal{A}$ and $\dim(\mathcal{H}_\mathcal{B})=d_\mathcal{B}$, if there exist $m,n\in\mathbb{Z}^+$, a positive semidefinite $\mathcal{Z}$ acting on $\mathcal{H}_\mathcal{A}^{\otimes m}\otimes\mathcal{H}_\mathcal{B}^{\otimes n}$ and a subset $s$ of the $m+n$ tensor factors such that
\begin{equation}\label{Eq:Z:Dfn}
\pi_\mathcal{A}\otimes\pi_\mathcal{B}~\left(\mathbb{I}_{d_\mathcal{A}}^{\otimes m-1}\otimes Z_w\otimes\mathbb{I}_{d_\mathcal{B}}^{\otimes n-1}\right)~\pi_\mathcal{A}\otimes\pi_\mathcal{B} = \pi_\mathcal{A}\otimes\pi_\mathcal{B}~\left(\mathcal{Z}^{{\rm T}_s}\right)~\pi_\mathcal{A}\otimes\pi_\mathcal{B},
\end{equation}
where $\pi_\mathcal{A}$ is the projector onto the symmetric subspace of $\mathcal{H}_\mathcal{A}^{\otimes m}$ (likewise for $\pi_\mathcal{B}$) and $(.)^{{\rm T}_s}$ refers to partial transposition with respect to the subsystems in $s$, then $Z_w$ is a valid entanglement witness across $\mathcal{H}_\mathcal{A}$ and $\mathcal{H}_\mathcal{B}$, i.e., $\text{tr}(Z_w\rho_\text{sep})\ge 0$ for any state $\rho_\text{sep}$ that is separable with respect to the $\mathcal{A}$ and $\mathcal{B}$ partitioning.
\end{lemma}
\end{widetext}
\begin{proof}
Denote by $\mathcal{A}_k$ the subsystem associated with the $k$-th copy of $\mathcal{H}_\mathcal{A}$ in $\mathcal{H}_\mathcal{A}^{\otimes m}$; likewise for $\mathcal{B}_l$.
To prove the above lemma, let $\ket{\alpha}\in\mathcal{H}_\mathcal{A}$ and $\ket{\beta}\in\mathcal{H}_\mathcal{B}$ be (unit) vectors and, for definiteness, let $s=\mathcal{B}_n$. It then follows that
\begin{align*}
&\bra{\alpha}\bra{\beta}~Z_w~\ket{\alpha}\ket{\beta}\\
=&\bra{\alpha}^{\otimes m}\bra{\beta}^{\otimes n} \left(\mathbb{I}_{d_\mathcal{A}}^{\otimes m-1}\otimes Z_w\otimes\mathbb{I}_{d_\mathcal{B}}^{\otimes n-1}\right)\ket{\alpha}^{\otimes m}\ket{\beta}^{\otimes n}\\
=&\bra{\alpha}^{\otimes m}\bra{\beta}^{\otimes n}\left[\pi_\mathcal{A}\otimes \pi_\mathcal{B}\left(\mathcal{Z}^{{\rm T}_s}\right)\pi_\mathcal{A}\otimes\pi_\mathcal{B} \right]\ket{\alpha}^{\otimes m}\ket{\beta}^{\otimes n}\\
=&\bra{\alpha}^{\otimes m}\bra{\beta}^{\otimes n}\left(\mathcal{Z}^{{\rm T}_{\mathcal{B}_n}}\right)\ket{\alpha}^{\otimes m}\ket{\beta}^{\otimes n}\\
=&\bra{\alpha}^{\otimes m} \bra{\beta}^{\otimes n-1} \otimes\bra{\beta^*}~\mathcal{Z}~\ket{\alpha}^{\otimes m}\ket{\beta}^{\otimes n-1}\otimes\ket{\beta^*}\\
\ge &0,
\end{align*}
where $\ket{\beta^*}$ is the complex conjugate of $\ket{\beta}$. We have made use of the identity $\pi_\mathcal{A}\ket{\alpha}^{\otimes m}=\ket{\alpha}^{\otimes m}$ (likewise for $\pi_\mathcal{B}$) in the second and third equalities, of Eq.~\eqref{Eq:Z:Dfn} in the second equality, and of the positive semidefiniteness of $\mathcal{Z}$ in the final inequality. To cater for a general $s$, we just have to modify the second-to-last line of the above computation accordingly (i.e., perform complex conjugation on the states of all subsystems in $s$) and the proof proceeds as before.
\end{proof}

More generally, let us remark that instead of having one $\mathcal{Z}$ on the right hand side of Eq.~\eqref{Eq:Z:Dfn}, one can also have a sum of different $\mathcal{Z}$'s, with each of them partially transposed with respect to different subsystems $s$. Clearly, if the given $Z_w$ admits such a decomposition, it is also an entanglement witness~\cite{ACD:Extension}. For our purposes these more complicated decompositions do not offer any advantage over the simple decomposition given in Eq.~\eqref{Eq:Z:Dfn}.

By solving some appropriate semidefinite programs~\cite{SDP}, we have found that when $m=3$, $n=2$ and $s=\mathcal{B}_2$, there exist some $\mathcal{Z}_k\ge0$ such that Eq.~\eqref{Eq:Z:Dfn} holds true for each $k\in\{1,2,3,4\}$. Due to space limitations, the analytic expressions for these $\mathcal{Z}_k$'s are not reproduced here but are made available online at~\cite{url:z}. For $W_2$, the fact that the corresponding $Z_{w,2}$ is a witness can even be verified by considering $m=2$, $n=1$ and $s=\mathcal{A}_1$. In this case, $d_\mathcal{A}=d_\mathcal{B}=4$. If we label the local basis vectors by $\{\ket{i}\}_{i=0}^3$, the corresponding $\mathcal{Z}$ reads
\begin{gather*}
\mathcal{Z}_2=\frac{1}{2}\sum_{i=1}^4\proj{z_i},\\
\ket{z_1}=\ket{01,0}-\ket{02,3}+\ket{11,1}+\ket{13,3}+\ket{22,1}+\ket{23,0},\\
\ket{z_2}=\ket{10,3}+\ket{11,2}+\ket{20,0}+\ket{22,2}-\ket{31,0}+\ket{32,3},\\
\ket{z_3}=\ket{00,0}+\ket{02,2}+\ket{10,1}-\ket{13,2}+\ket{32,1}+\ket{33,0},\\
\ket{z_4}=\ket{00,3}+\ket{01,2}-\ket{20,1}+\ket{23,2}+\ket{31,1}+\ket{33,3},
\end{gather*}
where we have separated $\mathcal{A}$'s degrees of freedom from $\mathcal{B}$'s by a comma~\cite{fn:Reorder}. This completes the proof of Theorem~\ref{Thm:SeparableUUVV}.
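Since the Bell projectors are mutually orthogonal, $\text{tr}[(\Pi_i\otimes\Pi_j)(\Pi_k\otimes\Pi_l)]=\delta_{ik}\delta_{jl}$, and hence $\text{tr}(Z_{w,k}\,\rho)=\sum_{ij}[W_k]_{ij}[r]_{ij}$ for any $\rho\in\varrho$ with matrix $r$. The easy direction of the argument above, namely that each $W_k$ is non-negative on the vertices of $\mathcal{P}_s$ (the row and column permutations of $D_0$ and $G_0$), can therefore be checked with a few lines of code. The following sketch (an illustration added here, assuming Python with NumPy) performs this check; it is a sanity check only and does not replace the witness proof via Lemma~\ref{Lem:ProofOfWitnesses}:
\begin{verbatim}
from itertools import permutations
import numpy as np

# Facet operators W_0, ..., W_4 of Eq. (Eq:W); dots are zeros
W = [np.diag([1, 0, 0, 0]),
     np.array([[1, 1, 1, -1], [1, 1, 1, -1], [1, 1, 1, -1], [-1, -1, -1, 1]]),
     np.array([[1, 1, 0, -1], [0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]]),
     np.array([[3, 3, 1, -1], [3, -1, 1, 3], [1, 1, 3, 1], [-1, -1, 1, -1]]),
     np.array([[3, 3, 1, -1], [3, -1, 1, 3], [3, -1, 1, -1], [1, 1, -1, 1]])]

# Extreme points D_0 and G_0 of P_s, Eq. (Eq:D0&G0)
D0 = np.eye(4) / 4
G0 = np.zeros((4, 4)); G0[:2, :2] = 1 / 4

def vertices_of_Ps():
    """All row/column permutations of D_0 and G_0, i.e. the vertices of P_s."""
    for base in (D0, G0):
        for p in permutations(range(4)):
            for q in permutations(range(4)):
                yield base[np.ix_(p, q)]

# tr(Z_{w,k} rho) = sum_ij [W_k]_ij [r]_ij must be >= 0 on every vertex of P_s
for k, Wk in enumerate(W):
    assert min(np.sum(Wk * r) for r in vertices_of_Ps()) >= -1e-12
print("every W_k is non-negative on the vertices of P_s")
\end{verbatim}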
\section{SLOCC Convertibility of Bell-Diagonal States}\label{Sec:SLOCC-BDS}

An immediate corollary of the characterization given in Sec.~\ref{Sec:SeparableStates} is that we now know exactly the set of Bell-diagonal preserving transformations that can be performed locally on a Bell-diagonal state. In this section, we will make use of the Choi-Jamio{\l}kowski isomorphism~\cite{Jamiolkowski}, i.e., the one-to-one correspondence between completely positive maps (CPMs) and quantum states, to make these SLOCC transformations explicit. This will allow us to derive a complete set of SLOCC monotones~\cite{G.Vidal:JMP:2000} which, in turn, serves as a set of necessary and sufficient conditions for converting one Bell-diagonal state to another.

\subsection{Separable Maps and SLOCC}\label{Sec:SeparableMap}

Now, let us recall some well-established facts about CPMs. To begin with, a separable CPM, denoted by $\mathcal{E}_s$, takes the following form~\cite{E.M.Rains:9707002,V.Vedral:PRA:1998}
\begin{equation}\label{Eq:SeparableMap}
\mathcal{E}_s:\rho\to\sum_{i=1}^n(A_i\otimes B_i)~\rho~(A_i^\dag\otimes B_i^\dag),
\end{equation}
where $\rho$ acts on $\mathcal{H}_{\mathcal{A}i}\otimes\mathcal{H}_{\mathcal{B}i}$, $A_i$ acts on $\mathcal{H}_{\mathcal{A}i}$, and $B_i$ acts on $\mathcal{H}_{\mathcal{B}i}$~\cite{fn:Kraus}. If, moreover,
\begin{equation}\label{Eq:TracePreserving}
\sum_i \left(A_i\otimes{B_i}\right)^\dag \left(A_i\otimes{B_i}\right)=\mathbb{I},
\end{equation}
the map is trace-preserving, i.e., if $\rho$ is normalized, so is the output of the map $\mathcal{E}_s(\rho)$. Equivalently, the trace-preserving condition demands that the transformation from $\rho$ to $\mathcal{E}_s(\rho)$ can always be achieved with certainty. It is well-known that all LOCC transformations are of the form of Eq.~\eqref{Eq:SeparableMap} but the converse is not true~\cite{C.H.Bennett:PRA:1999}. However, if we allow the map $\rho\to\mathcal{E}_s(\rho)$ to fail with some probability $p<1$, the transformation from $\rho$ to $\mathcal{E}_s(\rho)$ can always be implemented probabilistically via LOCC. In other words, if we do not impose Eq.~\eqref{Eq:TracePreserving}, then Eq.~\eqref{Eq:SeparableMap} represents, up to some normalization constant, the most general LOCC possible on a bipartite quantum system. These are the SLOCC transformations~\cite{W.Dur:PRA:2000}.

To make a connection between the set of SLOCC transformations and the set of states that we have characterized in Sec.~\ref{Sec:SeparableStates}, let us also recall the Choi-Jamio{\l}kowski isomorphism~\cite{Jamiolkowski} between CPMs and quantum states: for every (not necessarily separable) CPM $\mathcal{E}:\mathcal{H}_{\mathcal{A}i}\otimes\mathcal{H}_{\mathcal{B}i}\to\mathcal{H}_{\mathcal{A}o}\otimes\mathcal{H}_{\mathcal{B}o}$ there is a unique -- again, up to some positive constant $\alpha$ -- quantum state $\rho_\mathcal{E}$ corresponding to $\mathcal{E}$:
\begin{equation}\label{Eq:JamiolkowskiState}
\rho_\mathcal{E}=\alpha~\mathcal{E}\otimes \mathcal{I} \left(\ket{\Phi^+}_{\mathcal{A}i}\bra{\Phi^+}\otimes \ket{\Phi^+}_{\mathcal{B}i}\bra{\Phi^+}\right),
\end{equation}
where $\ket{\Phi^+}_{\mathcal{A}i}\equiv\sum_{i=1}^{d_{\mathcal{A}i}}\ket{i}\otimes\ket{i}$ is the unnormalized maximally entangled state of dimension $d_{\mathcal{A}i}$ (likewise for $\ket{\Phi^+}_{\mathcal{B}i}$).
In Eq.~\eqref{Eq:JamiolkowskiState}, it is understood that $\mathcal{E}$ only acts on half of $\ket{\Phi^+}_{\mathcal{A}i}$ and half of $\ket{\Phi^+}_{\mathcal{B}i}$. Clearly, the state $\rho_\mathcal{E}$ acts on a Hilbert space of dimension $d_{\mathcal{A}i}\times d_{\mathcal{A}o}\times d_{\mathcal{B}i}\times d_{\mathcal{B}o}$, where $d_{\mathcal{A}o}\times d_{\mathcal{B}o}$ is the dimension of $\mathcal{H}_{\mathcal{A}o}\otimes\mathcal{H}_{\mathcal{B}o}$. Conversely, given a state $\rho_\mathcal{E}$ acting on $\mathcal{H}_{\mathcal{A}o}\otimes\mathcal{H}_{\mathcal{B}o}\otimes\mathcal{H}_{\mathcal{A}i}\otimes\mathcal{H}_{\mathcal{B}i}$, the corresponding action of the CPM $\mathcal{E}$ on some $\rho$ acting on $\mathcal{H}_{\mathcal{A}i}\otimes\mathcal{H}_{\mathcal{B}i}$ reads:
\begin{equation}\label{Eq:State->CPM}
\mathcal{E}(\rho)=\frac{1}{\alpha}\,\text{tr}_{\mathcal{A}i\mathcal{B}i}\left[\rho_\mathcal{E}\left(\mathbb{I}_{\mathcal{A}o\mathcal{B}o}\otimes\rho^{\rm T}\right)\right],
\end{equation}
where $\rho^{\rm T}$ denotes the transposition of $\rho$ in some local basis of $\mathcal{H}_{\mathcal{A}i}\otimes\mathcal{H}_{\mathcal{B}i}$. For a trace-preserving CPM, it then follows that we must have $\text{tr}_{\mathcal{A}o\mathcal{B}o}(\rho_\mathcal{E})=\alpha\mathbb{I}_{\mathcal{A}i\mathcal{B}i}$.

A point that should be emphasized now is that $\mathcal{E}$ is a separable map [cf.\ Eq.~\eqref{Eq:SeparableMap}] if and only if the corresponding $\rho_\mathcal{E}$ given by Eq.~\eqref{Eq:JamiolkowskiState} is separable across $\mathcal{H}_{\mathcal{A}i}\otimes\mathcal{H}_{\mathcal{A}o}$ and $\mathcal{H}_{\mathcal{B}i}\otimes\mathcal{H}_{\mathcal{B}o}$~\cite{J.I.Cirac:PRL:2001}. Moreover, at the risk of repeating ourselves, the map $\rho\to\mathcal{E}(\rho)$ derived from a separable $\rho_\mathcal{E}$ can always be implemented locally, although it may only succeed with some (nonzero) probability. Hence, if we are only interested in the transformations that can be performed locally, and not in the probability of success in mapping $\rho\to\mathcal{E}(\rho)$, the normalization constant $\alpha$ as well as the normalization of $\rho_\mathcal{E}$ become irrelevant. This is the convention that we will adopt for the rest of this section.

\subsection{Bell-diagonal Preserving SLOCC Transformations}
\label{Sec:BellMaps}

We shall now apply the isomorphism to the class of states $\varrho$ that we have characterized in Sec.~\ref{Sec:SeparableStates}. In particular, if we identify $\mathcal{A}i$, $\mathcal{A}o$, $\mathcal{B}i$ and $\mathcal{B}o$ with, respectively, $\mathcal{A}''$, $\mathcal{A}'$, $\mathcal{B}''$ and $\mathcal{B}'$, it follows from Eq.~\eqref{Eq:Rep} and Eq.~\eqref{Eq:State->CPM} that for any two-qubit state $\rho_{\rm in}$, the action of the CPM derived from $\rho\in\varrho$ reads:
\begin{equation}
\mathcal{E}:\rho_{\rm in}\to\rho_{\rm out}\propto \sum_{i,j}[r]_{ij}\, \text{tr}\left(\rho^{\rm T}_{\rm in}\Pi_j\right)\Pi_i.
\end{equation}
Hence, under the action of $\mathcal{E}$, any $\rho_{\rm in}$ is transformed to another two-qubit state that is diagonal in the Bell basis, i.e., a Bell-diagonal state.
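As a quick numerical illustration of Eq.~\eqref{Eq:State->CPM} (added here for the reader's convenience; Python with NumPy assumed), the sketch below builds $\rho_\mathcal{E}=\sum_{ij}[r]_{ij}\Pi_i\otimes\Pi_j$, applies the induced map to a Bell-diagonal input with weight vector $\beta$, and confirms that the output is again Bell diagonal with weights proportional to $r\beta$, as derived explicitly in the next paragraph:
\begin{verbatim}
import numpy as np

# Bell projectors Pi_1, ..., Pi_4 (same conventions as in Sec. II)
k0, k1 = np.eye(2)[:, [0]], np.eye(2)[:, [1]]
bell = [np.kron(k0, k0) + np.kron(k1, k1), np.kron(k0, k0) - np.kron(k1, k1),
        np.kron(k0, k1) + np.kron(k1, k0), np.kron(k0, k1) - np.kron(k1, k0)]
Pi = [b @ b.T / 2 for b in bell]

def apply_cpm(r, rho_in):
    """E(rho) ~ tr_{AiBi}[ rho_E (I_{AoBo} (x) rho^T) ] with
    rho_E = sum_ij r_ij Pi_i (x) Pi_j (output pair first, input pair second)."""
    rho_E = sum(r[i, j] * np.kron(Pi[i], Pi[j]) for i in range(4) for j in range(4))
    M = rho_E @ np.kron(np.eye(4), rho_in.T)
    return np.einsum('ikjk->ij', M.reshape(4, 4, 4, 4))  # trace out the input pair

rng = np.random.default_rng(1)
r = rng.random((4, 4)); r /= r.sum()          # a generic state of varrho
beta = rng.random(4); beta /= beta.sum()      # Bell-diagonal input weights
rho_in = sum(b * P for b, P in zip(beta, Pi))

rho_out = apply_cpm(r, rho_in)
rho_out /= np.trace(rho_out)

expected = r @ beta; expected /= expected.sum()
assert np.allclose(rho_out, sum(w * P for w, P in zip(expected, Pi)))
print("output weights are proportional to r @ beta")
\end{verbatim}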
In particular, for a Bell-diagonal $\rho_{\rm in}$, i.e.,
\begin{gather}
\rho_{\rm in}=\sum_k[\beta]_k\Pi_k,\nonumber\\
[\beta]_k\ge0,\quad \sum_k[\beta]_k=1,
\end{gather}
the map outputs another Bell-diagonal state
\begin{equation}
\rho_{\rm out}=\mathcal{E}(\rho_{\rm in})\propto\sum_{i,j} [\beta]_j[r]_{ij}\Pi_i.
\end{equation}
It is worth noting that for a general $\rho_\mathcal{E}\in\varrho$, $\text{tr}_{\mathcal{A}'\mathcal{B}'}\rho_\mathcal{E}$ is not proportional to the identity matrix; therefore some of the CPMs derived from $\rho\in\varrho$ are intrinsically non-trace-preserving~\cite{fn:Stochasticity}.

By considering the convex cone~\cite{fn:cone} of separable states $\mathcal{P}_s$ that we have characterized in Sec.~\ref{Sec:SeparableStates}, we therefore obtain the entire set of Bell-diagonal preserving SLOCC transformations. Among them, we note that the extremal maps, i.e., those derived from Eq.~\eqref{Eq:D0&G0}, admit simple physical interpretations and implementations. In particular, the extremal separable map for $D_0$, and the maps that are related to it by local unitaries, correspond to permutations of the input Bell projectors $\Pi_i$ -- which can be implemented by performing appropriate local unitary transformations. The other kind of extremal separable map, derived from $G_0$, corresponds to a measurement that determines whether the initial state lies in the subspace spanned by a given pair of Bell states and, if successful, discards the input state and replaces it by an equal but incoherent mixture of two of the Bell states. This operation can be implemented locally since the equally weighted mixture of two Bell states is a separable state, and hence both the measurement step and the state preparation step can be implemented locally.

\begin{figure}
\caption{The tetrahedron $\mathcal{T}_0$ of normalized Bell-diagonal states and the octahedron $\mathcal{O}_S$ of separable Bell-diagonal states. (Figure not recovered in this copy.)\label{Fig:BDS}}
\end{figure}

\subsection{Complete Set of SLOCC Monotones for Bell-diagonal States}
\label{Sec:Monotones}

Now, let us make use of the above characterization to derive a {\em complete} set of {\em non-increasing} SLOCC monotones for Bell-diagonal states. To begin with, we recall that the set of normalized Bell-diagonal states forms a tetrahedron $\mathcal{T}_0$ in $\mathbb{R}^3$, and the set of separable Bell-diagonal states forms an octahedron $\mathcal{O}_S$ (see Fig.~\ref{Fig:BDS}) that is contained in $\mathcal{T}_0$~\cite{K.G.H.Vollbrecht:PRA:2001}. We will follow Ref.~\cite{K.G.H.Vollbrecht:PRA:2001} and use the expectation values $(-\langle\sigma_x\otimes\sigma_x\rangle,-\langle\sigma_y\otimes\sigma_y\rangle, -\langle\sigma_z\otimes\sigma_z\rangle)$ as the coordinates of this three-dimensional space. The coordinates of the four Bell states $\{\ket{\Phi_i}\}_{i=1}^4$ are then $(-1,1,-1)$, $(1,-1,-1)$, $(-1,-1,1)$ and $(1,1,1)$, respectively. Since Bell-diagonal states are convex mixtures of the four Bell projectors, we may also label any point in the state space of Bell-diagonal states by a four-component weight vector $\vec{\lambda}=(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ such that the corresponding Bell-diagonal state reads
\begin{equation}\label{Eq:BDS}
\rho_\text{BD}(\vec{\lambda})=\sum_{i=1}^4\lambda_i\Pi_i.
\end{equation}
Moreover, as remarked above, we can apply local unitary transformations to swap any two of the Bell projectors while leaving the others unaffected.
Thus, without loss of generality, we will restrict our attention to Bell-diagonal states such that
\begin{equation}\label{Eq:LambdaOrder}
\lambda_1\ge\lambda_2\ge\lambda_3\ge\lambda_4,
\end{equation}
and determine when it is possible to transform between two such states under SLOCC.

\begin{figure}
\caption{The polytope $\mathcal{P}_\lambda$ and the three facets $\mathcal{F}_1$, $\mathcal{F}_2$ and $\mathcal{F}_3$ that contain $\vec{\lambda}$. (Figure not recovered in this copy.)\label{Fig:Plambda}}
\end{figure}

Clearly, any (entangled) Bell-diagonal state can be transformed to any separable Bell-diagonal state via SLOCC -- one can simply discard the original Bell-diagonal state and prepare the separable state using LOCC. Also, separable Bell-diagonal states can only be transformed among themselves under SLOCC. What about transformations among entangled Bell-diagonal states? To answer this question, we shall adopt the following strategy. Firstly, we will identify -- in relation to Fig.~\ref{Fig:BDS} -- the set of entangled Bell-diagonal states satisfying Eq.~\eqref{Eq:LambdaOrder}. Then, we will make use of the characterization obtained in Sec.~\ref{Sec:BellMaps} to determine the set of states that can be obtained from SLOCC transformations when we have an input (entangled) state satisfying Eq.~\eqref{Eq:LambdaOrder}. After that, we will restrict our attention to the subset of these output states satisfying Eq.~\eqref{Eq:LambdaOrder}. Once this is done, a simple set of necessary and sufficient conditions can be derived to determine if an entangled Bell-diagonal state can be converted to another.

We now take a closer look at the set of entangled Bell-diagonal states, in particular those that satisfy Eq.~\eqref{Eq:LambdaOrder}. In Fig.~\ref{Fig:BDS}, the set of entangled Bell-diagonal states is the relative complement of the (blue) octahedron $\mathcal{O}_S$ in the tetrahedron $\mathcal{T}_0$. In this set, those points that satisfy Eq.~\eqref{Eq:LambdaOrder} are a strict {\em subset} contained in the (green) tetrahedron $\mathcal{T}_{\Phi_1}$, which has the Bell state $\ket{\Phi_1}$ and the three mixed separable states
\begin{equation}\label{Eq:BDS:Separable:1+i}
\rho_{1i}=\half\left(\proj{\Phi_1}+\proj{\Phi_i}\right), \quad i=2,3,4,
\end{equation}
as its four vertices. In terms of weight vectors, the three separable vertices read
\begin{equation}\label{Eq:Tphi}
\vec{\lambda}_{12}=\half\left(\begin{array}{c} 1\\1\\ \cdot \\ \cdot\end{array}\right), \vec{\lambda}_{13}=\half\left(\begin{array}{c} 1\\\cdot\\1\\\cdot\end{array}\right), \vec{\lambda}_{14}=\half\left(\begin{array}{c} 1\\\cdot\\\cdot\\1\end{array}\right).
\end{equation}
$\mathcal{T}_{\Phi_1}$ is the set of Bell-diagonal states satisfying $\lambda_1\ge1/2$, which includes both entangled states (denoted by $\mathcal{T}_E$) and separable states (denoted by $\mathcal{F}_0$). For the purpose of the subsequent discussion, it is important to note that every entangled state satisfying Eq.~\eqref{Eq:LambdaOrder} is in $\mathcal{T}_E$, but not every state in $\mathcal{T}_E$ satisfies Eq.~\eqref{Eq:LambdaOrder}.

Now, let us consider an entangled Bell-diagonal state $\rho_\text{BD}(\vec{\lambda})$ with weight vector
\begin{equation}\label{Eq:Dfn:Lambda}
\vec{\lambda}=\left(\begin{array}{c}\lambda_1\\\lambda_2\\\lambda_3\\\lambda_4\end{array}\right)
\end{equation}
satisfying Eq.~\eqref{Eq:LambdaOrder}. Note from the above discussion that $\vec{\lambda}\in\mathcal{T}_E$. Recall that our goal is to determine the set of (entangled) output states -- satisfying Eq.~\eqref{Eq:LambdaOrder} -- which can be obtained from $\vec{\lambda}$ via SLOCC.
To achieve that, we will begin by first determining the set of output weight vectors $\{\vec{\lambda}'\}$ which are in the superset $\mathcal{T}_{\Phi_1}$. In particular, we note that under the extremal SLOCC transformations associated with $G_0$, and the operators local unitarily equivalent to it [cf.\ Eq.~\eqref{Eq:D0&G0} and Sec.~\ref{Sec:BellMaps}], $\vec{\lambda}$ can be brought into any of the separable states $\{\rho_{1i}\}_{i=2}^4$ [cf.\ Eqs.~\eqref{Eq:BDS:Separable:1+i} and \eqref{Eq:Tphi}]. Similarly, under the extremal SLOCC transformations associated with $D_0$, and the operators local unitarily equivalent to it, $\vec{\lambda}$ can be brought into any of the following entangled Bell-diagonal states by permuting the weights associated with some of the Bell projectors:
\begin{gather}
\vec{\lambda}_{(34)}=\left(\begin{array}{c}\lambda_1\\\lambda_2\\\lambda_4\\\lambda_3\end{array}\right), \vec{\lambda}_{(324)}=\left(\begin{array}{c}\lambda_1\\\lambda_3\\\lambda_4\\\lambda_2\end{array}\right), \vec{\lambda}_{(24)}=\left(\begin{array}{c}\lambda_1\\\lambda_4\\\lambda_3\\\lambda_2\end{array}\right),\nonumber\\
\vec{\lambda}_{(234)}=\left(\begin{array}{c}\lambda_1\\\lambda_4\\\lambda_2\\\lambda_3\end{array}\right), \label{Eq:ExtremePoints}\vec{\lambda}_{(23)}=\left(\begin{array}{c}\lambda_1\\\lambda_3\\\lambda_2\\\lambda_4\end{array}\right).
\end{gather}
Evidently, any convex combination of the vectors listed in Eq.~\eqref{Eq:Tphi}, Eq.~\eqref{Eq:Dfn:Lambda} and Eq.~\eqref{Eq:ExtremePoints} is also attainable from $\vec{\lambda}$ using (non-extremal) SLOCC. Moreover, within $\mathcal{T}_{\Phi_1}$, only convex combinations of these states, denoted by $\mathcal{P}_\lambda$, are attainable from $\vec{\lambda}$ using SLOCC. $\mathcal{P}_\lambda$ is thus a convex polytope with vertices given by the union of the vectors listed in Eq.~\eqref{Eq:Tphi}, Eq.~\eqref{Eq:Dfn:Lambda} and Eq.~\eqref{Eq:ExtremePoints}. Determining whether $\vec{\lambda}$ can be transformed to another $\vec{\lambda}'\in\mathcal{T}_{\Phi_1}$ then amounts to deciding whether $\vec{\lambda}'\in\mathcal{P}_\lambda$.

It is a well known fact that a convex polytope can also be described by a finite set of inequalities that are associated with each of the facets of the polytope~\cite{B.Grunbaum:polytope}. Therefore, the above task can be done, for example, by checking if $\vec{\lambda}'$ satisfies all the linear inequalities defining the polytope $\mathcal{P}_\lambda$. Our real interest, however, is in the set of entangled Bell-diagonal states satisfying Eq.~\eqref{Eq:LambdaOrder}. With some thought, it should be clear that this simplifies the problem at hand so that we only need to check that $\vec{\lambda}'$ satisfies all the inequalities (facets) that contain $\vec{\lambda}$. From Fig.~\ref{Fig:Plambda}, it can be seen that only three facets of $\mathcal{P}_\lambda$ contain $\vec{\lambda}$. These are $\mathcal{F}_1=\text{conv}\{\vec{\lambda}, \vec{\lambda}_{(34)}, \vec{\lambda}_{(324)}, \vec{\lambda}_{(24)}, \vec{\lambda}_{(234)}, \vec{\lambda}_{(23)}\}$, $\mathcal{F}_2=\text{conv}\{\vec{\lambda},\vec{\lambda}_{12},\vec{\lambda}_{(34)}\}$ and $\mathcal{F}_3=\text{conv}\{\vec{\lambda},\vec{\lambda}_{12},\vec{\lambda}_{13},\vec{\lambda}_{(23)}\}$, where $\text{conv}\{.\}$ represents the convex hull formed by the set of points in $\{.\}$~\cite{B.Grunbaum:polytope}.
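In practice, deciding whether $\vec{\lambda}'\in\mathcal{P}_\lambda$ can also be done directly by linear programming over the nine vertices listed above. The following sketch (an added illustration, assuming NumPy and SciPy; the example weight vectors are arbitrary) performs this membership test:
\begin{verbatim}
from itertools import permutations
import numpy as np
from scipy.optimize import linprog

def vertices_of_P_lambda(lam):
    """Vertices of P_lambda: the separable corners lambda_12, lambda_13, lambda_14
    together with all permutations of the last three components of lambda."""
    lam = np.asarray(lam, dtype=float)
    V = [np.array([.5, .5, 0, 0]), np.array([.5, 0, .5, 0]), np.array([.5, 0, 0, .5])]
    V += [np.concatenate(([lam[0]], p)) for p in permutations(lam[1:])]
    return np.array(V).T                      # shape (4, 9)

def in_P_lambda(lam, lam_prime):
    """True iff lam_prime is a convex combination of the vertices of P_lambda."""
    V = vertices_of_P_lambda(lam)
    n = V.shape[1]
    res = linprog(np.zeros(n),
                  A_eq=np.vstack([V, np.ones((1, n))]),
                  b_eq=np.concatenate([lam_prime, [1.0]]),
                  bounds=[(0, None)] * n)
    return res.success

lam = np.array([0.70, 0.15, 0.10, 0.05])      # ordered as in Eq. (Eq:LambdaOrder)
print(in_P_lambda(lam, np.array([0.65, 0.20, 0.10, 0.05])))   # True
print(in_P_lambda(lam, np.array([0.80, 0.10, 0.06, 0.04])))   # False: lambda_1 grew
\end{verbatim}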
Recall that each vector $\vec{\lambda}_{(.)}$ listed in Eq.~\eqref{Eq:ExtremePoints} is obtained by performing the appropriate permutation $(.)$ on all but the first component of $\vec{\lambda}$. Hence $\mathcal{F}_1$ is a facet of constant $\lambda_1$. After some simple algebra, the inequalities associated with $\mathcal{F}_2$~\cite{fn:F2} and $\mathcal{F}_3$~\cite{fn:F3} can be shown to be, respectively,
\begin{gather}\label{Eq:F2}
\mathcal{F}_2:\frac{\lambda_3+\lambda_4}{\lambda_1-\lambda_2} \left(\langle\sigma_x\otimes\sigma_x\rangle-\langle\sigma_y\otimes\sigma_y\rangle\right) +\langle\sigma_z\otimes\sigma_z\rangle\le1,\\
\label{Eq:F3}\mathcal{F}_3:\langle\sigma_x\otimes\sigma_x\rangle+\langle\sigma_z\otimes\sigma_z\rangle -\frac{1-2\lambda_1+2\lambda_4}{1-2\lambda_2-2\lambda_3}\langle\sigma_y\otimes\sigma_y\rangle\le1.
\end{gather}
Imposing the requirement that $\vec{\lambda}'$ satisfies these inequalities gives, respectively,
\begin{gather*}
\frac{1-2\lambda_2}{\lambda_3+\lambda_4} \ge \frac{1-2\lambda'_2}{\lambda'_3+\lambda'_4},
\end{gather*}
and
\begin{gather*}
\frac{1-2\lambda_2-2\lambda_3}{\lambda_4}\ge \frac{1-2\lambda'_2-2\lambda'_3}{\lambda'_4}.
\end{gather*}
Together with the requirement imposed by $\mathcal{F}_1$, we see that by defining
\begin{align}
E_1(\vec{\lambda})&\equiv\lambda_1, \\
E_2(\vec{\lambda})&\equiv\frac{1-2\lambda_2}{\lambda_3+\lambda_4},\\
E_3(\vec{\lambda})&\equiv\frac{1-2\lambda_2-2\lambda_3}{\lambda_4},
\end{align}
the interconvertibility between two entangled Bell-diagonal states can be succinctly summarized in the following theorem.
\begin{theorem}\label{Thm:Monotones}
Let $\rho$ and $\rho'$ be two entangled Bell-diagonal states with, respectively, weight vectors $\vec{\lambda}$ and $\vec{\lambda}'$ satisfying Eq.~\eqref{Eq:LambdaOrder}. Transformation from $\rho$ to $\rho'$ via SLOCC is possible if and only if
\begin{align}
E_1(\vec{\lambda})\ge E_1(\vec{\lambda}'),\label{Eq:E1}\\
E_2(\vec{\lambda})\ge E_2(\vec{\lambda}'),\label{Eq:E2}\\
E_3(\vec{\lambda})\ge E_3(\vec{\lambda}')\label{Eq:E3}.
\end{align}
In other words, $\{E_i(\vec{\lambda})\}_{i=1}^3$ is a complete set of SLOCC monotones for entangled Bell-diagonal states satisfying Eq.~\eqref{Eq:LambdaOrder}.
\end{theorem}
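Theorem~\ref{Thm:Monotones} translates directly into a few lines of code. The sketch below (Python, added only as an illustration) computes the monotones $E_1$, $E_2$, $E_3$ and decides convertibility between two entangled Bell-diagonal states; it assumes $\lambda_4>0$:
\begin{verbatim}
import numpy as np

def slocc_monotones(lam):
    """E_1, E_2, E_3 for a weight vector; components are first sorted into the
    order of Eq. (Eq:LambdaOrder).  Assumes lambda_4 > 0."""
    l1, l2, l3, l4 = np.sort(np.asarray(lam, dtype=float))[::-1]
    return l1, (1 - 2 * l2) / (l3 + l4), (1 - 2 * l2 - 2 * l3) / l4

def slocc_convertible(lam, lam_prime):
    """True iff the entangled Bell-diagonal state with weights lam can be taken
    to the entangled one with weights lam_prime by SLOCC."""
    return all(e >= ep for e, ep in zip(slocc_monotones(lam),
                                        slocc_monotones(lam_prime)))

lam, lamp = [0.70, 0.15, 0.10, 0.05], [0.65, 0.20, 0.10, 0.05]
print(slocc_convertible(lam, lamp))   # True:  none of the monotones increases
print(slocc_convertible(lamp, lam))   # False: E_1 would have to increase
\end{verbatim}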
\section{SLOCC Convertibility of Two-Qubit States}

With Theorem~\ref{Thm:Monotones}, it is just another small step to determine if a two-qubit state $\rho$ can be converted to another, say $\rho'$, using SLOCC. To this end, let us first recall the following definition from Ref.~\cite{W.Dur:PRA:2000}.
\begin{dfn}
Two states $\rho$ and $\rho'$ are said to be SLOCC equivalent if $\rho$ can be converted to $\rho'$ via SLOCC with nonzero probability and vice versa.
\end{dfn}
Next, we recall the following theorem, which can be deduced from Theorem 1 in Ref.~\cite{F.Verstraete:PRA:2002} (see also Theorems 1--3 in Ref.~\cite{F.Verstraete:PRA:2001}).
\begin{theorem}
A two-qubit state $\rho$ is SLOCC equivalent to either (1) a unique Bell-diagonal state satisfying Eq.~\eqref{Eq:LambdaOrder}, (2) a separable state, or (3) a (normalized) non-Bell-diagonal state of the form:
\begin{equation} \label{Eq:quasi}
\rho_{\rm ND}=\frac{1}{4}\left(\begin{array}{cccc} 2 & \cdot & \cdot & \cdot \\ \cdot & 1 & 2b & \cdot \\ \cdot & 2b & 1 & \cdot \\ \cdot & \cdot & \cdot & \cdot \end{array}\right),
\end{equation}
where $\rho_{\rm ND}$ is expressed in the standard product basis and $b\le\half$ is unique.
\end{theorem}
Moreover, as was shown in Ref.~\cite{F.Verstraete:PRA:2001}, the unique Bell-diagonal state in case (1) is the state with maximal entanglement that can be obtained from the original two-qubit state using SLOCC. The two-qubit state associated with case (2) is clearly a separable one, since a separable state is, and can only be, SLOCC equivalent to another separable state. The situation for case (3) is somewhat more complicated, and the original two-qubit states associated with this case are either of rank 3 or of rank 2 (in the case of $b=1/2$)~\cite{LX.Cen,F.Verstraete:PRA:2001,F.Verstraete:PRA:2002}. By very inefficient SLOCC transformations -- quasi-distillation~\cite{MPR.Horodecki:PRA:1999} -- the entanglement in the equivalent state $\rho_\text{ND}$ can be maximized by converting it into the following Bell-diagonal state:
\begin{equation}
\rho_{\rm ND}'=\half\left(\begin{array}{cccc} \cdot & \cdot & \cdot & \cdot \\ \cdot & 1 & -2b & \cdot \\ \cdot & -2b & 1 & \cdot \\ \cdot & \cdot & \cdot & \cdot \end{array}\right).
\end{equation}
However, it remains unclear from existing results~\cite{LX.Cen,MPR.Horodecki:PRA:1999, F.Verstraete:PRA:2001,F.Verstraete:PRA:2002} if this process is reversible~\cite{fn:reverse}. In this regard, we have found that the reverse process can indeed be carried out via a separable map with two terms involved in the Kraus decomposition. In particular, a possible form of the Kraus operators associated with this separable map reads [Eq.~\eqref{Eq:SeparableMap}]:
\begin{gather*}
A_1=\left(\begin{array}{cc} -2b+\sqrt{1+4b^2} & -1/2\\ 1 & \cdot \end{array}\right),\quad B_1=\left(\begin{array}{cc} 1 & 1/2\\ 1 & \cdot \end{array}\right),\\
A_2=\left(\begin{array}{cc} 2b-\sqrt{1+4b^2} & 1/2\\ 1 & \cdot \end{array}\right),\quad B_2=\left(\begin{array}{cc} 1 & 1/2\\ -1 & \cdot \end{array}\right).
\end{gather*}
Thus, a two-qubit state that is SLOCC equivalent to $\rho_\text{ND}$ is also SLOCC equivalent to a unique Bell-diagonal state $\rho_\text{ND}'$.
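The claim that this separable map inverts the quasi-distillation step is straightforward to check numerically; a minimal sketch (Python with NumPy, added only as an illustration) is:
\begin{verbatim}
import numpy as np

b = 0.3                                   # any 0 < b <= 1/2
s = np.sqrt(1 + 4 * b ** 2)
A1 = np.array([[-2 * b + s, -0.5], [1, 0]]); B1 = np.array([[1, 0.5], [1, 0]])
A2 = np.array([[ 2 * b - s,  0.5], [1, 0]]); B2 = np.array([[1, 0.5], [-1, 0]])

rho_ND = 0.25 * np.array([[2, 0, 0, 0], [0, 1, 2 * b, 0],
                          [0, 2 * b, 1, 0], [0, 0, 0, 0]])
rho_NDp = 0.5 * np.array([[0, 0, 0, 0], [0, 1, -2 * b, 0],
                          [0, -2 * b, 1, 0], [0, 0, 0, 0]])

out = sum(np.kron(A, B) @ rho_NDp @ np.kron(A, B).conj().T
          for A, B in [(A1, B1), (A2, B2)])
out /= np.trace(out)                      # the map need not be trace preserving
assert np.allclose(out, rho_ND)
print("the separable map takes rho_ND' back to rho_ND")
\end{verbatim}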
By further local unitary transformation, we can bring $\rho_\text{ND}'$ into a form that satisfies Eq.~\eqref{Eq:LambdaOrder}. Hence, this leads us to the following theorem.
\begin{theorem}
All entangled two-qubit states are SLOCC equivalent to a unique Bell-diagonal state satisfying Eq.~\eqref{Eq:LambdaOrder}.
\end{theorem}
With this theorem, one can now readily determine if an {\em entangled} two-qubit state $\rho$ can be converted to another, say, $\rho'$, using SLOCC. For that matter, let $\rho_\text{BD}(\vec{\lambda})$ and $\rho_\text{BD}(\vec{\lambda}')$ be the unique Bell-diagonal states satisfying Eq.~\eqref{Eq:LambdaOrder} that are SLOCC equivalent to $\rho$ and $\rho'$, respectively. Then, it follows from Theorem~\ref{Thm:Monotones} that $\rho$ can be transformed to $\rho'$ using SLOCC if and only if the corresponding weight vectors of the associated Bell-diagonal states, $\vec{\lambda}$ and $\vec{\lambda}'$, satisfy Eqs.~\eqref{Eq:E1}-\eqref{Eq:E3}. In other words, the SLOCC convertibility of two two-qubit states can be decided via the following theorem.
\begin{theorem}
Let $\rho_\text{BD}(\vec{\lambda})$ and $\rho_\text{BD}(\vec{\lambda}')$ be the Bell-diagonal states satisfying Eq.~\eqref{Eq:LambdaOrder} that are SLOCC equivalent to $\rho$ and $\rho'$, respectively. Then $\rho$ can be locally transformed into $\rho'$ with nonzero probability if and only if (1) $\rho'$ is separable, or (2) $\rho$ is entangled and the associated weight vectors $\vec{\lambda}$ and $\vec{\lambda}'$ satisfy Eqs.~\eqref{Eq:E1}-\eqref{Eq:E3}.
\end{theorem}
Schematically, if neither $\rho$ nor $\rho'$ is separable and if Eqs.~\eqref{Eq:E1}-\eqref{Eq:E3} are satisfied, then one possible way of transforming $\rho$ to $\rho'$ via SLOCC is by performing the following chain of conversions:
\begin{equation*}
\rho\to\rho_\text{BD}(\vec{\lambda})\to\rho_\text{BD}(\vec{\lambda}')\to\rho',
\end{equation*}
whereas if any one of Eqs.~\eqref{Eq:E1}-\eqref{Eq:E3} is not satisfied, then
\begin{equation*}
\rho\not\to\rho'.
\end{equation*}

\section{Discussion and Conclusion}

In this paper, we have investigated the bi-separability of the set of four-qubit states commuting with $U\otimes{U}\otimes{V}\otimes{V}$, where $U$ and $V$ are arbitrary members of the Pauli group. These are essentially convex combinations of tensor products of pairs of (not necessarily identical) Bell states. Evidently, these states are all separable across the two copies. For the other bi-partitioning, we have found that the separable subset is a convex polytope and hence can be described by a finite set of entanglement witnesses. Equivalently, this characterization has also given us the complete set of separable, Bell-diagonal preserving, completely positive maps. This has enabled us to derive a complete set of SLOCC monotones for Bell-diagonal states, which can be used to determine if a Bell-diagonal state can be converted to another using SLOCC. We have then supplemented the result on SLOCC equivalence presented in Refs.~\cite{F.Verstraete:PRA:2001,F.Verstraete:PRA:2002} to arrive at the conclusion that all entangled two-qubit states are SLOCC equivalent to a unique Bell-diagonal state. Combining this with the SLOCC monotones that we have derived immediately leads us to simple necessary and sufficient criteria for the SLOCC convertibility between two-qubit states.
\end{document}
\begin{document} \title{A construction of almost Steiner systems\global\long\def\mathrm{Pr}{\mathrm{Pr}} \global\long\def\mathbb{E}{\mathbb{E}} \global\long\def\mathrm{Bin}{\mathrm{Bin}} \global\long\def\mathrm{Po}{\mathrm{Po}} } \author{Asaf Ferber \thanks{School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, 69978, Israel. Email: [email protected]. } \and Rani Hod \thanks{School of Computer Science, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, 69978, Israel. Email: [email protected]. Research supported by an ERC advanced grant. } \and Michael Krivelevich \thanks{School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, 69978, Israel. Email: [email protected]. Research supported in part by USA-Israel BSF Grant 2010115 and by grant 912/12 from the Israel Science Foundation. } \and Benny Sudakov \thanks{Department of Mathematics, University of California, Los Angeles, CA, USA. Email: [email protected]. Research supported in part by NSF grant DMS-1101185, by AFOSR MURI grant FA9550-10-1-0569 and by a USA-Israel BSF grant. }} \maketitle \begin{abstract} Let $n$, $k$, and $t$ be integers satisfying $n>k>t\ge2$. A Steiner system with parameters $t$, $k$, and $n$ is a $k$-uniform hypergraph on $n$ vertices in which every set of $t$ distinct vertices is contained in exactly one edge. An outstanding problem in Design Theory is to determine whether a nontrivial Steiner system exists for $t\geq6$. In this note we prove that for every $k>t\ge2$ and sufficiently large $n$, there exists an almost Steiner system with parameters $t$, $k$, and $n$; that is, there exists a $k$-uniform hypergraph on $n$ vertices such that every set of $t$ distinct vertices is covered by either one or two edges. \end{abstract} \section{Introduction} Let $n$, $k$, $t$, and $\lambda$ be positive integers satisfying $n>k>t\geq2$. A \emph{$t$-$\left(n,k,\lambda\right)$-design} is a $k$-uniform hypergraph $\mathcal{H}=\left(X,\mathcal{F}\right)$ on $n$ vertices with the following property: every $t$-set of vertices $A\subset X$ is contained in exactly $\lambda$ edges $F\in\mathcal{F}$. The special case $\lambda=1$ is known as a \emph{Steiner system} with parameters $t$, $k$, and $n$, named after Jakob Steiner who pondered the existence of such systems in 1853. Steiner systems, $t$-designs \footnote{That is, $t$-$\left(n,k,\lambda\right)$-designs for some parameters $n$,$k$, and $\lambda$. } and other combinatorial designs turn out to be useful in a multitude of applications, e.g., in coding theory, storage systems design, and wireless communication. For a survey of the subject, the reader is referred to~\cite{handbook-of-combinatorial-designs}. A counting argument shows that a Steiner triple system --- that is, a $2$-$\left(n,3,1\right)$-design --- can only exist when $n\equiv1\textrm{ or }n\equiv3\pmod6$. For every such $n$, this is achieved via constructions based on symmetric idempotent quasigroups. Geometric constructions over finite fields give rise to some further infinite families of Steiner systems with $t=2$ and $t=3$. For instance, for a prime power $q$ and an integer $m\ge2$, affine geometries yield $2$-$\left(q^{m},q,1\right)$-designs, projective geometries yield $2$-$\left(q^{m}+\cdots+q^{2}+q+1,q+1,1\right)$-designs and spherical geometries yield $3$-$\left(q^{m}+1,q,1\right)$-designs. 
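As an aside, the quasigroup-based constructions for $n\equiv3\pmod6$ mentioned above are simple enough to be illustrated in a few lines of code. The following sketch (added purely as an illustration and playing no role in what follows; Python assumed) implements what is commonly known as the Bose construction and verifies that every pair of points is covered exactly once:
\begin{verbatim}
from itertools import combinations

def steiner_triple_system(n):
    """Bose construction of a Steiner triple system on n = 6m + 3 points."""
    assert n % 6 == 3
    q = n // 3                                   # q = 2m + 1 is odd
    half = (q + 1) // 2                          # inverse of 2 modulo q
    mul = lambda a, b: ((a + b) * half) % q      # idempotent commutative quasigroup
    blocks = [frozenset(((a, 0), (a, 1), (a, 2))) for a in range(q)]
    blocks += [frozenset(((a, i), (b, i), (mul(a, b), (i + 1) % 3)))
               for i in range(3) for a, b in combinations(range(q), 2)]
    return [(a, i) for i in range(3) for a in range(q)], blocks

pts, blocks = steiner_triple_system(15)
cover = {pair: 0 for pair in combinations(sorted(pts), 2)}
for B in blocks:
    for pair in combinations(sorted(B), 2):
        cover[pair] += 1
assert set(cover.values()) == {1}                # every pair covered exactly once
print(len(blocks), "blocks on", len(pts), "points")   # 35 blocks on 15 points
\end{verbatim}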
For $t=4$ and $t=5$, only finitely many nontrivial constructions of Steiner systems are known; for $t\ge6$, no constructions are known at all. {} Before stating our result, let us extend the definition of $t$-designs as follows. Let $n$, $k$, and $t$ be positive integers satisfying $n>k>t\geq2$ and let $\Lambda$ be a set of positive integers. A $t$-$\left(n,k,\Lambda\right)$-design is a $k$-uniform hypergraph $\mathcal{H}=\left(X,\mathcal{F}\right)$ on $n$ vertices with the following property: for every $t$-set of vertices $A\subset X$, the number of edges $F\in\mathcal{F}$ that contain $A$ belongs to $\Lambda$. Clearly, when $\Lambda=\left\{ \lambda\right\} $ is a singleton, a $t$-$\left(n,k,\left\{ \lambda\right\} \right)$-design coincides with a $t$-$\left(n,k,\lambda\right)$-design as defined above. Not able to construct Steiner systems for large $t$, Erd\H{o}s and Hanani~\cite{EH63} aimed for large partial Steiner systems; that is, $t$-$\left(n,k,\left\{ 0,1\right\} \right)$-designs with as many edges as possible. Since a Steiner system has exactly ${n \choose t}/{k \choose t}$ edges, they conjectured the existence of partial Steiner systems with $\left(1-o\left(1\right)\right){n \choose t}/{k \choose t}$ edges. This was first proved by Rödl~\cite{Rodl85} in 1985, with further refinements~\cite{Grable99,Kim01,KR98} of the $o\left(1\right)$ term, as stated in the following theorem: \begin{thm}[Rödl] \label{thm:partial-steiner}Let $k$ and $t$ be integers such that $k>t\ge2$ . Then there exists a partial Steiner system with parameters $t$, $k$, and $n$ covering all but $o\left(n^{t}\right)$ of the $t$-sets. \end{thm} Theorem~\ref{thm:partial-steiner} can also be rephrased in terms of a covering rather than a packing; that is, it asserts the existence of a system with $\left(1+o\left(1\right)\right){n \choose t}/{k \choose t}$ edges such that every $t$-set is covered at least once (see, e.g.,~\cite[page 56]{the-probabilistic-method}). Nevertheless, some $t$-sets might be covered multiple times (perhaps even $\omega\left(1\right)$ times). It is therefore natural to ask for $t$-$\left(n,k,\left\{ 1,\ldots,r\right\} \right)$-designs, where $r$ is as small as possible. The main aim of this short note is to show how to extend Theorem~\ref{thm:partial-steiner} to cover all $t$-sets at least once but at most twice. \begin{thm} \label{thm:main}Let $k$ and $t$ be integers such that $k>t\ge2$. Then, for sufficiently large $n$, there exists a $t$-$\left(n,k,\left\{ 1,2\right\} \right)$-design. \end{thm} Our proof actually gives a stronger result: there exists a $t$-$\left(n,k,\left\{ 1,2\right\} \right)$-design with $\left(1+o\left(1\right)\right){n \choose t}/{k \choose t}$ edges. \section{Preliminaries} In this section we present results needed for the proof of Theorem \ref{thm:main}. {} Given a $t$-$\left(n,k,\left\{ 0,1\right\} \right)$-design $\mathcal{H}=\left(X,\mathcal{F}\right)$, we define the \emph{leave hypergraph} $\left(X,\mathcal{L}_{\mathcal{H}}\right)$ to be the $t$-uniform hypergraph whose edges are the $t$-sets $A\subset X$ not covered by any edge $F\in\mathcal{F}$. Following closely the proof of Theorem~\ref{thm:partial-steiner} appearing in~\cite{Grable99}, we recover an extended form of the theorem, which is a key ingredient in the proof of our main result. \begin{thm} \label{thm:partial-steiner-ext}Let $k$ and $t$ be integers such that $k>t\ge2$. 
There exists a constant $\varepsilon=\varepsilon\left(k,t\right)>0$ such that for sufficiently large $n$, there exists a partial Steiner system $\mathcal{H}=\left(X,\mathcal{F}\right)$ with parameters $t$, $k$, and $n$ satisfying the following property:
\begin{itemize}
\item [$\left(\clubsuit\right)$] For every $0\le\ell <t$, every set $X'\subset X$ of size $\left|X'\right|=\ell$ is contained in $O\left(n^{t-\ell-\varepsilon}\right)$ edges of the leave hypergraph.
\end{itemize}
\end{thm}
We also make use of the following probabilistic tool.

\paragraph{Talagrand's inequality.}
In its general form, Talagrand's inequality is an isoperimetric-type inequality for product probability spaces. We use the following formulation from~\cite[pages 232--233]{graph-colouring-and-the-probabilistic-method}, suitable for showing that a random variable in a product space is unlikely to overshoot its expectation under two conditions:
\begin{thm}[Talagrand]
\label{thm:talagrand-ineq}Let $Z\ge0$ be a non-trivial random variable, which is determined by $n$ independent trials $T_{1},\ldots,T_{n}$. Let $c>0$ and suppose that the following properties hold:
\begin{enumerate}[label=\roman*.]
\item ($c$-Lipschitz) changing the outcome of one trial can affect $Z$ by at most $c$, and
\item (Certifiable) for any $s$, if $Z\ge s$ then there is a set of at most $s$ trials whose outcomes certify that $Z\ge s$.
\end{enumerate}
Then $\mathrm{Pr}\left[Z>t\right]<2\exp\left(-t/16c^{2}\right)$ for any $t\ge2\mathbb{E}\left[Z\right]+80c\sqrt{\mathbb{E}\left[Z\right]}$.
\end{thm}

\section{Proof of the main result}

In this section we prove Theorem~\ref{thm:main}.

\subsection{Outline}

The construction is done in two phases:
\begin{enumerate}[label=\Roman*.]
\item \label{enu:phase-1}Apply Theorem~\ref{thm:partial-steiner-ext} to get a $t$-$\left(n,k,\left\{ 0,1\right\} \right)$-design $\mathcal{H}=\left(X,\mathcal{F}\right)$ with property~$\left(\clubsuit\right)$ with respect to some $0<\varepsilon<1$.
\item \label{enu:phase-2}Build another $t$-$\left(n,k,\left\{ 0,1\right\} \right)$-design $\mathcal{H}'=\left(X,\mathcal{F}'\right)$ that covers the uncovered $t$-sets $\mathcal{L}_{\mathcal{H}}$.
\end{enumerate}
Combining both designs, we get that every $t$-set is covered at least once but no more than twice; namely $\left(X,\mathcal{F}\cup\mathcal{F}'\right)$ is a $t$-$\left(n,k,\left\{ 1,2\right\} \right)$-design, as required.

We now describe how to build $\mathcal{H}'$. For a set $A\subset X$, denote by $\mathcal{T}_{A}=\left\{ C\subseteq X:\left|C\right|=k\mbox{ and }A\subseteq C\right\} $ the family of possible continuations of $A$ to a subset of $X$ of cardinality $k$. Note that $\mathcal{T}_{A}=\varnothing$ when $\left|A\right|>k$. Consider the leave hypergraph $\left(X,\mathcal{L}_{\mathcal{H}}\right)$. Our goal is to choose, for every uncovered $t$-set $A\in\mathcal{L}_{\mathcal{H}}$, a $k$-set $A'\in\mathcal{T}_{A}$ such that $\left|A'\cap B'\right|<t$ for every two distinct $A,B\in\mathcal{L}_{\mathcal{H}}$. This ensures that the obtained hypergraph $\mathcal{H}'=\left(X,\left\{ A':A\in\mathcal{L}_{\mathcal{H}}\right\} \right)$ is indeed a $t$-$\left(n,k,\left\{ 0,1\right\} \right)$-design.
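Before describing the actual (randomized) choice of the sets $A'$, it may help to see the two-phase scheme in action at a toy scale. The sketch below (an added illustration in Python; it is not the selection procedure used in the proof) starts from the $2$-$\left(9,3,1\right)$-design given by the lines of the affine plane $AG(2,3)$, removes one block to create a small leave hypergraph, picks the sets $A'$ greedily subject to the intersection requirement above, and checks that the union is a $2$-$\left(9,3,\left\{ 1,2\right\} \right)$-design:
\begin{verbatim}
from itertools import combinations
from random import Random

# A 2-(9,3,1) design: the 12 lines of the affine plane AG(2,3) over Z_3
pts = [(x, y) for x in range(3) for y in range(3)]
lines = [frozenset((c, y) for y in range(3)) for c in range(3)]
lines += [frozenset((x, (m * x + c) % 3) for x in range(3))
          for m in range(3) for c in range(3)]

# Remove one block; its three pairs form the leave hypergraph L_H
partial = lines[:-1]
covered = {P for F in partial for P in combinations(sorted(F), 2)}
leave = [A for A in combinations(sorted(pts), 2) if A not in covered]

rng, chosen = Random(0), []
for A in leave:
    # S_A: continuations of A that do not contain any other leave edge
    S_A = [frozenset(A) | {x} for x in pts if x not in A
           and all(not set(B) <= set(A) | {x} for B in leave if B != A)]
    # pick a member of S_A meeting every previously chosen set in < t = 2 points
    A_prime = next(C for C in rng.sample(S_A, len(S_A))
                   if all(len(C & Cp) < 2 for Cp in chosen))
    chosen.append(A_prime)

# The union is a 2-(9,3,{1,2})-design: every pair is covered once or twice
count = {P: 0 for P in combinations(sorted(pts), 2)}
for F in partial + chosen:
    for P in combinations(sorted(F), 2):
        count[P] += 1
assert set(count.values()) <= {1, 2}
print(sum(c == 2 for c in count.values()), "pairs are covered twice")
\end{verbatim}
At the scale of the theorem this naive greedy rule is not guaranteed to succeed; the proof below instead selects the $A'$ through the random sub-lists $\mathcal{R}_A$ introduced next.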
To this aim, for every $A\in\mathcal{L}_{\mathcal{H}}$ we introduce intermediate lists $\mathcal{R}_{A}\subseteq\mathcal{S}_{A}\subseteq\mathcal{T}_{A}$ that will help us control the cardinalities of pairwise intersections when choosing $A'\in\mathcal{R}_{A}$. First note that we surely cannot afford to consider continuations that fully contain some other $B\in\mathcal{L}_{\mathcal{H}}$, so we restrict ourselves to the list \[ \mathcal{S}_{A}=\mathcal{T}_{A}\setminus\bigcup\left\{ \mathcal{T}_{B}:B\in\mathcal{L}_{H}\setminus\left\{ A\right\} \right\} =\left\{ C\in\mathcal{T}_{A}:B\not\subseteq C\mbox{ for all }B\in\mathcal{L}_{H},B\ne A\right\} . \] Note that, by definition, the lists $\mathcal{S}_{A}$ for different $A$ are disjoint. Next, choose a much smaller sub-list $\mathcal{R}_{A}\subseteq\mathcal{S}_{A}$ by picking each $C\in\mathcal{S}_{A}$ to $\mathcal{R}_{A}$ independently at random with probability $p=n^{t-k+\varepsilon/2}$ (we can of course assume here and later that $\varepsilon<1\le k-t$, and thus $0<p<1$). Finally, select $A'\in\mathcal{R}_{A}$ that has no intersection of size at least $t$ with any $C\in\mathcal{R}_{B}$ for any other $B\in\mathcal{L}_{\mathcal{H}}$. If there is such a choice for every $A\in\mathcal{L}_{\mathcal{H}}$, we get $\left|A'\cap B'\right|<t$ for distinct $A,B\in\mathcal{L}_{\mathcal{H}}$, as requested. {} \subsection{Details} We start by showing that the lists $\mathcal{S}_{A}$ are large enough. \begin{claim} \label{clm:many-potential-candidates}For every $A\in\mathcal{L}_{\mathcal{H}}$ we have $\left|\mathcal{S}_{A}\right|=\Theta\left(n^{k-t}\right)$.\end{claim} \begin{proof} Fix $A\in\mathcal{L}_{\mathcal{H}}$. Obviously $\left|\mathcal{T}_{A}\right|=\binom{n-t}{k-t}=\Theta\left(n^{k-t}\right)$. Since $\mathcal{S}_{A}\subseteq\mathcal{T}_{A}$, it suffices to show that $\left|\mathcal{T}_{A}\setminus\mathcal{S}_{A}\right|=o\left(n^{k-t}\right)$. Writing $\mathcal{L}_{H}\setminus\left\{ A\right\} $ as the disjoint union $\bigcup_{\ell=0}^{t-1}\mathcal{B}_{\ell}$, where $\mathcal{B}_{\ell}=\left\{ B\in\mathcal{L}_{H}:\left|A\cap B\right|=\ell\right\} $, we have \begin{align*} \mathcal{T}_{A}\setminus\mathcal{S}_{A} & =\left\{ C\in\mathcal{T}_{A}:\exists B\in\mathcal{L}_{H}\setminus\left\{ A\right\} \mbox{ such that }C\in\mathcal{T}_{B}\right\} \\ & =\bigcup_{\ell=0}^{t-1}\left\{ C\in\mathcal{T}_{A}:\exists B\in\mathcal{B}_{\ell}\mbox{ such that }C\in\mathcal{T}_{B}\right\} \\ & =\bigcup_{\ell=0}^{t-1}\bigcup_{B\in\mathcal{B}_{\ell}}\mathcal{T}_{A\cup B}. \end{align*} Note that for all $0\le\ell<t$ and for all $B\in\mathcal{B}_{\ell}$, $\left|A\cup B\right|=2t-\ell$ and thus $\left|\mathcal{T}_{A\cup B}\right|=\binom{n-2t+\ell}{k-2t+\ell}\le n^{k-2t+\ell}$. Moreover, $\left|\mathcal{B}_{\ell}\right|=\binom{t}{\ell}\cdot O\left(n^{t-\ell-\varepsilon}\right)=O\left(n^{t-\ell-\varepsilon}\right)$ by Property $\left(\clubsuit\right)$. Thus, \[ \left|\mathcal{T}_{A}\setminus\mathcal{S}_{A}\right|\le\sum_{\ell=0}^{t-1}\left|\mathcal{B}_{\ell}\right|n^{k-2t+\ell}=\ell\cdot O\left(n^{k-t-\varepsilon}\right)=o\left(n^{k-t}\right), \] establishing the claim. \end{proof} Recall that the sub-list $\mathcal{R}_{A}\subseteq\mathcal{S}_{A}$ was obtained by picking each $C\in\mathcal{S}_{A}$ to $\mathcal{R}_{A}$ independently at random with probability $p=n^{t-k+\varepsilon/2}$. The next claim shows that $\mathcal{R}_{A}$ typically contains many $k$-sets whose pairwise intersections are exactly $A$. This will be used in the proof of Claim~\ref{clm:one-good-set}. 
\begin{claim}
\label{clm:many-disjoint-candidates}Almost surely (i.e., with probability tending to $1$ as $n$ tends to infinity), for every $A\in\mathcal{L}_{H}$, the family $\mathcal{R}_{A}$ contains a subset $\mathcal{Q}_{A}\subseteq\mathcal{R}_{A}$ of size $\Theta\left(n^{\varepsilon/3}\right)$ such that $C_{1}\cap C_{2}=A$ for every two distinct $C_{1},C_{2}\in\mathcal{Q}_{A}$.\end{claim}
\begin{proof}
Fix $A\in\mathcal{L}_{H}$. Construct $\mathcal{Q}_{A}$ greedily as follows: start with $\mathcal{Q}_{A}=\varnothing$; as long as $\left|\mathcal{Q}_{A}\right|<n^{\varepsilon/3}$ and there exists $C\in\mathcal{R}_{A}\setminus\mathcal{Q}_{A}$ such that $C\cap C'=A$ for all $C'\in\mathcal{Q}_{A}$, add $C$ to $\mathcal{Q}_{A}$. It suffices to show that this process continues for $\left\lfloor n^{\varepsilon/3}\right\rfloor$ steps. If the process halts after $s<\left\lfloor n^{\varepsilon/3}\right\rfloor$ steps, then every $k$-set in $\mathcal{R}_{A}$ intersects one of the $s$ previously chosen sets in some vertex outside $A$. This means that there exists a subset $X_A\subset X$ of cardinality $|X_A|=kn^{\varepsilon/3}$ ($X_A$ contains the union of these $s$ previously picked sets) such that none of the edges $C$ of $\mathcal{S}_{A}$ satisfying $C\cap X_A=A$ is chosen into $\mathcal{R}_{A}$. To bound the number of such edges in $\mathcal S_A$ from below, we subtract from $|\mathcal S_{A}|$ the number of edges $C\in \mathcal T_{A}$ with $C\cap X_A\neq A$. The latter can be bounded (from above) by $\sum_{i=1}^{k-t}\binom{|X_A|}{i}\binom{n-|X_A|}{k-t-i}$ (choose $1\leq i\leq k-t$ vertices from $X_A\setminus A$ to be in $C$, and then choose the remaining $k-t-i$ vertices from $X\setminus X_A$). Since $\left|\mathcal{S}_{A}\right|=\Theta\left(n^{k-t}\right)$, and since
$$\sum_{i=1}^{k-t}\binom{|X_A|}{i}\binom{n-|X_A|}{k-t-i}\leq \sum_{i=1}^{k-t}\Theta(n^{i \varepsilon/3})\cdot n^{k-t-i}=o(n^{k-t}),$$
we obtain that the number of such edges in $\mathcal{S}_{A}$ is at least $\left|\mathcal{S}_{A}\right|-\sum_{i=1}^{k-t}\binom{|X_A|}{i}\binom{n-|X_A|}{k-t-i}=\Theta\left(n^{k-t}\right)$. It thus follows that the probability that the latter event happens for a given $A$ is at most
$$ \binom{n}{kn^{\varepsilon/3}}\,(1-p)^{\Theta\left(n^{k-t}\right)} $$
(choose $X_A$ first, and then require all edges of $\mathcal{S}_{A}$ intersecting $X_A$ only at $A$ to be absent from $\mathcal{R}_{A}$). The above estimate is clearly at most
$$ n^{kn^{\varepsilon/3}}\cdot e^{-\Theta\left(pn^{k-t}\right)}=\exp\left\{kn^{\varepsilon/3}\ln n-\Theta\left(n^{\varepsilon/2}\right)\right\}< \exp\left\{-n^{\varepsilon/3}\right\}\,. $$
Taking the union bound over all ($\leq {n \choose t}$) choices of $A$ establishes the claim.
\end{proof}
The last step is to select a well-behaved set $A'\in\mathcal{R}_{A}$. The next claim shows this is indeed possible.
\begin{claim}
\label{clm:one-good-set}Almost surely for every $A\in\mathcal{L}_{H}$ we can select $A'\in\mathcal{R}_{A}$ such that $\left|A'\cap C\right|<t$ for all $C\in\bigcup\left\{ \mathcal{R}_{B}:B\in\mathcal{L}_{H}\setminus\left\{ A\right\} \right\} $.\end{claim}
\begin{proof}
Fix $A\in\mathcal{L}_{H}$ and fix all the random choices which determine the list $\mathcal{R}_{A}$ such that it satisfies Claim~\ref{clm:many-disjoint-candidates}.
Let $\mathcal{Q}_{A}\subseteq\mathcal{R}_{A}$ be as provided by Claim~\ref{clm:many-disjoint-candidates} and let $\mathcal{R}=\bigcup\left\{ \mathcal{R}_{B}:B\in\mathcal{L}_{H}\setminus\left\{ A\right\} \right\} $ be the random family of all obstacle sets. Define the random variable $Z$ to be the number of sets $A'\in\mathcal{Q}_{A}$ for which $\left|A'\cap C\right|\ge t$ for some $C\in\mathcal{R}$. Since $\mathcal{S}_{A}$ is disjoint from $\mathcal{S}=\bigcup\left\{ \mathcal{S}_{B}:B\in\mathcal{L}_{H}\setminus\left\{ A\right\} \right\} $, we can view $\mathcal{R}$ as a random subset of $\mathcal{S}$, with each element selected to $\mathcal{R}$ independently with probability $p=n^{t-k+\varepsilon/2}$. Thus $Z$ is determined by $\left|\mathcal{S}\right|$ independent trials. We wish to show that $Z$ is not too large via Theorem~\ref{thm:talagrand-ineq}; for this, $Z$ has to satisfy the two conditions therein.
\begin{enumerate}
\item If $C\in\mathcal{S}$ satisfies $\left|A'\cap C\right|\ge t$ for some $A'\in\mathcal{Q}_{A}$ then $C\setminus A$ must intersect $A'\setminus A$ (since $A\not\subseteq C$). However, the $\left(k-t\right)$-sets $\left\{ A'\setminus A:A'\in\mathcal{Q}_{A}\right\} $ are pairwise disjoint (by the definition of $\mathcal{Q}_{A}$), so each $C$ cannot rule out more than $k$ different sets $A'\in\mathcal{Q}_{A}$. Thus $Z$ is $k$-Lipschitz.
\item Assume that $Z\ge s$. Then, by definition, there exist distinct sets $A_{1}',\ldots,A_{s}'\in\mathcal{Q}_{A}$ and (not necessarily distinct) sets $C_{1},\ldots,C_{s}\in\mathcal{R}$ such that $\left|A_{i}'\cap C_{i}\right|\ge t$ for $i=1,\ldots,s$. These are at most $s$ trials whose outcomes ensure that $Z\ge s$; i.e., $Z$ is certifiable.
\end{enumerate}
Let us now calculate $\mathbb{E}\left[Z\right]$. Fix $A'\in\mathcal{Q}_{A}$ and let $Z_{A'}$ be the indicator random variable of the event $E_{A'}=\left\{ \exists C\in\mathcal{R}:\left|A'\cap C\right|\ge t\right\} $. The only set in $\mathcal{L}_{H}$ fully contained in $A'$ is $A$, so we can write $\mathcal{L}_{H}\setminus\left\{ A\right\} $ as the disjoint union $\bigcup_{\ell=0}^{t-1}\mathcal{B}_{\ell}'$, where $\mathcal{B}_{\ell}'=\left\{ B\in\mathcal{L}_{H}:\left|A'\cap B\right|=\ell\right\} $. For any $0\le\ell<t$ and $B\in\mathcal{B}_{\ell}'$, the number of bad sets (i.e., sets that will trigger $E_{A'}$) in $\mathcal{S}_{B}$ is
\begin{align*}
\left|\left\{ C\in\mathcal{S}_{B}:\left|A'\cap C\right|\ge t\right\} \right| & \le\left|\left\{ C\in\mathcal{T}_{B}:\left|A'\cap C\right|\ge t\right\} \right|\\
 & =\left|\left\{ C\in\mathcal{T}_{B}:\left|\left(A'\cap C\right)\setminus B\right|\ge t-\ell\right\} \right|\\
 & =\sum_{i=t-\ell}^{k-t}\binom{k-\ell}{i}\binom{n-k-t+\ell}{k-t-i}=O\left(n^{k-2t+\ell}\right),
\end{align*}
since such a $C$ contains $B$ (with $\left|B\right|=t$), together with $i\ge t-\ell$ elements from $A'\setminus B$ and the rest from $X\setminus\left(A'\cup B\right)$. Each such bad set ends up in $\mathcal{R}_{B}$ with probability $p=n^{\varepsilon/2+t-k}$, so the expected number of bad sets in $\mathcal{R}_{B}$ is $O\left(n^{\varepsilon/2-t+\ell}\right)$. By Property~$\left(\clubsuit\right)$ we have $\left|\mathcal{B}_{\ell}'\right|=O\left(n^{t-\ell-\varepsilon}\right)$ and thus the total expected number of bad sets in $\mathcal{R}$ is $t\cdot O\left(n^{\varepsilon/2-\varepsilon}\right)=O\left(n^{-\varepsilon/2}\right)$. By Markov's inequality we have
\[
\mathrm{Pr}\left[E_{A'}\right]=O\left(n^{-\varepsilon/2}\right).
\] Now, $Z$ is a sum of $\left|\mathcal{Q}_{A}\right|=\Theta\left(n^{\varepsilon/3}\right)$ random variables $Z_{A'}$ and thus \[ \mathbb{E}\left[Z\right]=\left|\mathcal{Q}_{A}\right|\cdot\mathbb{E}\left[Z_{A'}\right]=\left|\mathcal{Q}_{A}\right|\cdot\mathrm{Pr}\left[E_{A'}\right]=O\left(n^{-\varepsilon/6}\right)=o\left(1\right). \] Applying Theorem~\ref{thm:talagrand-ineq} with $t=\left|\mathcal{Q}_{A}\right|=\Theta\left(n^{\varepsilon/3}\right)$ and $c=k=O\left(1\right)$, we get that \[ \mathrm{Pr}\left[\mbox{all \ensuremath{A'\in\mathcal{R}_{A}} are ruled out}\right]\le\mathrm{Pr}\left[Z\ge\left|\mathcal{Q}_{A}\right|\right]<2\exp\left(-\Omega\left(n^{\varepsilon/3}\right)\right). \] Taking the union bound over all $\left|\mathcal{L}_{H}\right|\le\binom{n}{t}$ choices of $A\in\mathcal{L}_{H}$ establishes the claim.\end{proof} \end{document}
\begin{document}
\title{Efficient multi-relational network representation using primes}
\begin{abstract} In this work, we propose a novel representation of complex multi-relational networks, which is compact and allows very efficient network analysis. Multi-relational networks capture complex data relationships and have a variety of applications, ranging from biomedical to financial and social domains. As they are used with ever larger quantities of data, it is crucial to find efficient ways to represent and analyse such networks. This paper introduces the concept of Prime Adjacency Matrices (PAMs), which utilize prime numbers to represent the relations of the network. Due to the fundamental theorem of arithmetic, this allows for a lossless, compact representation of a complete multi-relational graph using a single adjacency matrix. Moreover, this representation enables the fast computation of multi-hop adjacency matrices, which can be useful for a variety of downstream tasks. We illustrate the benefits of using the proposed approach through various simple and complex network analysis tasks. \end{abstract}
\section{Introduction}
In recent years, research on complex networks has matured and they have been the focus of study in multiple domains, such as biological, social, financial, and others~\cite{boccaletti2006complex}. This is because they allow us to model arbitrarily complex relationships between the data, thus making them very useful in real-world scenarios where complex structures arise. The observation that entities (e.g. nodes) in a complex network may be connected through multiple types of links has resulted in the study of \textit{multi-relational} networks and their variants such as multi-layer, multi-dimensional or multiplex networks. A good overview of these naming conventions, their definitions, and differences can be found in~\cite{kivela2014multilayer}. Knowledge graphs~\cite{zou2020survey} are also multi-relational networks, defining relations between entities in the form of $(s, r, o)$, where $s$ and $o$ correspond to the subject and object entities, and $r$ is the relation connecting them. In this work, we will use the term multi-relational graph/network as an umbrella term to express all kinds of complex networks that can be represented through such triples. The goal when analyzing such networks is to generate insights by aggregating the information expressed through each relation. There are many approaches to analyzing networks for different downstream tasks, such as those generating embeddings for the nodes and the relations in the graph~\cite{wang2017knowledge}, tensor decompositions~\cite{kolda2009tensor}, symbolic methodologies~\cite{ji2021survey} and more recently graph neural networks~\cite{zhou2020graph}. However, many of these approaches make use only of the direct relations between entities, without being able to capture relations that are expressed through multiple hops in the graph~\cite{sato2020survey}. In many domains~\cite{edwards2021explainable,liu2014assessment}, the paths connecting entities are useful for identifying the true nature of their relationship and the role of each entity, and they ultimately help with the task at hand. For this purpose, there is a need for a framework that will facilitate easy and fast calculations of representations that capture the rich multi-hop information of the network. To this end, we propose the \textit{Prime Adjacency Matrix} (PAM) representation for multi-relational networks.
This representation compacts, in a lossless manner, all one-hop relations of the original network in a single adjacency matrix. To do that, we take advantage of the fact that each integer can be uniquely decomposed into a collection of prime factors. By mapping each relation type to a distinct prime, we can construct the PAM in a manner that allows us to express all the information of the original graph without loss. Then, having at our disposal one adjacency matrix for the whole graph, we can easily calculate the powers of this matrix and generate multi-hop adjacency matrices for the graph. This process is very fast and can scale easily to large, complex networks that cover many real-world applications. These higher-order PAMs contain multi-hop information about the graph that can be very easily accessed, simply by looking up the values of the matrices. Hence, the rich structural information that is encapsulated in these PAMs can be used in a wide range of tasks. Specifically, we motivate multiple scenarios where this representation can be used to generate structurally rich representations for graphs, nodes, pairs of nodes, subgraphs, etc. In this first exposition of the new representation, we design simple processes and present experimental results on tasks such as graph classification and relation prediction. In summary, the main contributions of this work are the following:
\begin{itemize}
\item We introduce a new paradigm for representing multi-relational networks in a single adjacency matrix using primes. To the best of our knowledge, this is the first work to model the full multi-relational graph in a single adjacency matrix in a lossless fashion.
\item We use this compact representation for the fast calculation of multi-hop adjacency matrices for the whole complex graph, emphasizing its value for network analysis.
\item We demonstrate the usefulness of the representation by conducting experiments that utilize it in downstream tasks.
\end{itemize}
The rest of the paper is structured as follows: Section~\ref{sec:methodology} introduces the PAM framework in detail. Then we present its application to different downstream tasks in Section~\ref{sec:applications}. In Section~\ref{sec:discussion}, we comment on the current challenges of the framework and motivate possible solutions. Finally, in Section~\ref{sec:conclusions}, we summarise the main aspects of the novel method and propose future work.\footnote{The code and related scripts can be found in the supplementary material and will also be made publicly available.}
\section{Methodology}\label{sec:methodology}
In this section, we introduce the proposed framework and highlight its main features.
\subsection{Definition}
Let us start with an unweighted, directed, multi-relational graph $G$, with $N$ nodes and $R$ unique relation types. We can represent all possible edges between the different nodes with an adjacency tensor $A$ of shape $N \times N \times R$:
\begin{equation}\label{eq:tensor_A} A[i,j,r]= \begin{cases} 1 & \text{if $r$ connects nodes $i,j$}\\ 0 & \text{otherwise} \end{cases} \end{equation}
We now associate each unique relation type $r \in R$ with a distinct prime number $p_r$, through a mapping function $\varphi$ such that $\forall r \in R: \varphi(r) = p_r$, where $p_r$ is prime and $p_i = p_j \iff i = j$. This mapping function is a design choice and simply allocates distinct prime numbers to each $r \in R$.
In its simplest form, we would randomly order the relations and allocate the first prime to the first relation, the second prime to the second one and so forth. With this mapping in place, we can construct the \textit{Prime Adjacency Matrix} (PAM) $P$ of shape $N \times N$ in the following form:
\begin{equation}\label{eq:PAM} P[i,j]= \begin{cases} \displaystyle \prod_{r:A[i,j,r]=1} p_r & \text{if } \exists r : A[i,j,r]=1 \\ 0 & \text{if } \forall r : A[i,j,r]=0 \end{cases} \end{equation}
As we can see in \eqref{eq:PAM}, each non-zero element $P[i,j]$ is the product of the primes $p_r$ for all relations $r$ that connect node $i$ to $j$. Due to the Fundamental Theorem of Arithmetic (FTA), we can decompose each product into the original primes that constitute it (i.e. the distinct relations that connect the two nodes), thus preserving the full structure of $G$ in $P$ without any loss. We will also define here $P_{+}$, a variant of the above matrix, which aggregates the relations between two nodes through their sum instead of their product, as shown in \eqref{eq:PAM_plus}:
\begin{equation}\label{eq:PAM_plus} P_{+}[i,j]= \begin{cases} \displaystyle \sum_{r:A[i,j,r]=1} p_r & \text{if } \exists r : A[i,j,r]=1 \\ 0 & \text{if } \forall r : A[i,j,r]=0 \end{cases} \end{equation}
This variant is not lossless, but it is convenient for several applications, as will be explained in the following sections. As a note here, if each pair of nodes $i,j$ exhibits at most one relation between them, we can see that $P = P_{+}$, from \eqref{eq:PAM} and \eqref{eq:PAM_plus}.
\subsection{A simple example}
First, let us consider the case where each pair of nodes is connected by at most one relation. In such cases, the values in $P[i,j]$ are simply the corresponding $p_r$ numbers that connect $(i,j)$. Such a graph is shown in Fig.~\ref{fig:PAM_rolling_example}, where we have 5 nodes and 3 types of relation mapped to $3$ (green), $5$ (blue) and $7$ (magenta), respectively.
\begin{figure} \caption{An example multi-relational graph with 5 nodes and 3 types of relation.} \label{fig:PAM_rolling_example} \end{figure}
The resulting PAM would be (with node A corresponding to index 0, node B to index 1, and so forth): $P= (\begin{smallmatrix} 0 & 3 & 5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 \\ 0 & 7 & 0 & 0 & 0 \\ 3 & 7 & 3 & 0 & 0 \\ 0 & 0 & 0 & 7 & 0 \\ \end{smallmatrix})$. Hence, for the edge $A\xrightarrow{3}B$ we have $P[0,1] = 3$, for $A\xrightarrow{5}C$ we have $P[0,2] = 5$ and so on, expressing all the edges in the graph. Even in this toy graph, the compact PAM representation facilitates interesting observations. For example, we can see all the incoming/outgoing edges and their types by simply looking at the corresponding columns/rows of $P$. So, looking at $P[0,:]$ and $P[:,0]$ we see that node A has two outgoing edges (i.e. non-zero elements) of types $3$ and $5$, and one incoming edge of type $3$. Another graph property that can be easily inferred is the frequency of different relations. If we simply count the occurrences of the non-zero elements of $P$, we get the distribution of edges per relation type, which is $\{3:3, 5:2, 7:3\}$.
\subsection{Moving to multi-hop relationships}
Having a single adjacency matrix for the whole $G$ allows us to utilize tools from classical network analysis. Most importantly, we can easily obtain the powers of the adjacency matrix.
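To make the construction concrete, the following minimal sketch (our own illustration, not an official implementation; the triple list and variable names are chosen purely for this example) builds $P$ and $P_{+}$ of \eqref{eq:PAM} and \eqref{eq:PAM_plus} for the toy graph of Fig.~\ref{fig:PAM_rolling_example} with plain NumPy:
\begin{verbatim}
import numpy as np

# Toy graph of the example: nodes A..E -> indices 0..4; relation types are
# already mapped to the primes 3 (green), 5 (blue) and 7 (magenta).
triples = [
    (0, 3, 1), (0, 5, 2),             # A -3-> B, A -5-> C
    (1, 5, 4),                        # B -5-> E
    (2, 7, 1),                        # C -7-> B
    (3, 3, 0), (3, 7, 1), (3, 3, 2),  # D -3-> A, D -7-> B, D -3-> C
    (4, 7, 3),                        # E -7-> D
]

N = 5
P = np.zeros((N, N), dtype=np.int64)       # lossless PAM
P_plus = np.zeros((N, N), dtype=np.int64)  # additive variant
for s, p_r, o in triples:
    P[s, o] = p_r if P[s, o] == 0 else P[s, o] * p_r
    P_plus[s, o] += p_r

print(P)                 # reproduces the matrix shown above
print(P_plus @ P_plus)   # the 2-hop matrix discussed next
\end{verbatim}
Since every pair of nodes in this toy graph carries at most one relation, $P$ and $P_{+}$ coincide here, as noted above.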
In a single-relational network, the element $(i, j)$ of the power $k$ of an adjacency matrix contains the number of paths of length $k$ from node $i$ to node $j$. Generalizing this property to the PAM representation, where each value in the matrix also represents a specific type of relation, the values of $P^k[i,j]$ allow us to keep track of the relational chain linking two nodes. For instance, the second-order PAM for the example graph of Fig.~\ref{fig:PAM_rolling_example} will be:\newline $P^{2}= P\times P = (\begin{smallmatrix} 0 & 35 & 0 & 0 & 15 \\ 0 & 0 & 0 & 35 & 0 \\ 0 & 0 & 0 & 0 & 35 \\ 0 & 30 & 15 & 0 & 35 \\ 21 & 49 & 21 & 0 & 0 \\ \end{smallmatrix})$.
\noindent Let us examine the values of this matrix, starting with the node pair (A, B) for which we have $P^2[0,1] = P^2[A, B] = 35$. We can see from Fig.~\ref{fig:PAM_rolling_example} that we can get from node A to node B in two hops only through node C, by following the directed path $A\xrightarrow{5}C\xrightarrow{7}B$. The relations $5$ and $7$ that are exhibited along this 2-hop path are directly expressed in the value of $P^2[A,B]=35$, through its prime factors, as $35=5*7$. The same goes for the rest of the matrix: $P^2[A,E] = 15 = 3*5$, corresponding to $A\xrightarrow{3}B\xrightarrow{5}E$, $P^2[E,A] = 21 = 7*3$, corresponding to $E\xrightarrow{7}B\xrightarrow{3}A$, and so on. Hence, using this representation, the products in $P^k$ express the relational $k$-chains linking two nodes in the graph. It is also important to note the case of $P^2[D, B] = 30$, which is the sum of the two possible paths $30 = 9 + 21 = 3*3 + 3*7$, corresponding to paths $D\xrightarrow{3}A\xrightarrow{3}B$ and $D\xrightarrow{3}C\xrightarrow{7}B$ respectively. This case shows that each cell $(i, j)$ in $P^k$ aggregates all ``path-products'' of $k$-hops that lead from $i$ to $j$. This is aligned with the notion of adjacency matrix powers in classical graph theory, with the added benefit of encoding the types of relations in the value of the cell. Moreover, we can easily extract structural characteristics for nodes, pairs, subgraphs, and the whole graph, by looking up the PAM. For instance, we can calculate the frequency of the 2-hop paths as in the one-hop case, by simply counting the occurrences of non-zero values in $P^2$, which in this case are: $\{15:2, 21:2, 30:1, 35:4, 49:1\}$. These can be used for further analysis according to the task at hand. For example, if this were the graph of a molecule with atoms as nodes and bonds as relational edges, the observed frequent pattern $35$, corresponding to bonds denoted by $\{5, 7\}$, could be of importance for characterizing the molecule in terms of toxicity, solubility, or permeability~\cite{sharma2017toxim}. This procedure can be iterated for as many hops as we are interested in, by simply calculating the corresponding $P^k$. The values in each of these matrices will contain aggregated information regarding the relational chains of length $k$ that connect the corresponding nodes. Interesting characteristics of these graphs and their components can then be easily extracted through simple operations.
\subsection{The generic case}
Now, let us consider the more general case where multiple relations exist between a pair of nodes. These multi-layer/plex networks are of interest to many domains~\cite{battiston2017multilayer}. The small graph shown in Fig.~\ref{fig:PAM_2hop_example}~(a) has 3 nodes and 2 different types of relation, green and blue, mapped to the numbers $3$ and $5$ respectively.
Moreover, the nodes $(0,1)$ are connected with relations $3$ and $5$ simultaneously.
\begin{figure} \caption{(a) Simple graph with multiple relations between two nodes. (b) The same graph with node 1 split into 2 nodes, 1a and 1b respectively.} \label{fig:PAM_2hop_example} \end{figure}
Now let us focus on the node pair $(0,2)$. The corresponding values of the 2-hop adjacency matrix should express the sum of the 2-hop paths between these two nodes, and we would expect the 2-hop value of $P^2$ to be $P^2[0,2]= 3*3 + 5*3 = 24$, aggregating the paths $0\xrightarrow{3}1\xrightarrow{3}2$ and $0\xrightarrow{5}1\xrightarrow{3}2$. To help visualise these two paths, one could split node 1 into two distinct nodes (1a) and (1b) as shown in Fig.~\ref{fig:PAM_2hop_example}~(b). However, the PAM for this graph according to \eqref{eq:PAM} would be: $P = \small(\begin{smallmatrix} 0 & 15 & 0\\ 0 & 0 & 3 \\ 0 & 0 & 0 \\ \end{smallmatrix}\small)$ and the corresponding $P^2 = P \times P = \small(\begin{smallmatrix} 0 & 0 & 45\\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{smallmatrix}\small)$. We can see that $P[0,1] = 3*5 = 15$, representing the product of the relations between the nodes. This leads to $P^2[0,2] = 45 = 15*3 = P[0,1]*P[1,2]$, which is not the expected value of $24$. The expected value can be achieved if summation were used in $P$ instead of multiplication, i.e., $P_{+}[0,1] = 3+5 = 8$ instead of $P[0,1] = 3*5 = 15$. Then the corresponding PAM would be $P_{+}= \small(\begin{smallmatrix} 0 & 8 & 0\\ 0 & 0 & 3 \\ 0 & 0 & 0 \\ \end{smallmatrix}\small)$ and $P_{+}^2 = P_{+} \times P_{+} = \small(\begin{smallmatrix} 0 & 0 & 24\\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{smallmatrix}\small)$. Thus, in order to achieve a consistent representation of path aggregates, we will be using $P_{+}$ as introduced in \eqref{eq:PAM_plus}, where we simply use the sum of the primes instead of their product. This representation is not lossless, as the sum of the primes cannot be uniquely decomposed back to the original primes. So, if we want to represent the full graph $G$ without loss in the $1$-hop matrix, we will need to use eq.~\eqref{eq:PAM}. When we are interested in calculating the $k$-hop PAMs, it is better to start directly with $P_{+}$ from eq.~\eqref{eq:PAM_plus}. In the rest of the paper, when a power of a PAM is used and presented as $P^{k}$, it is calculated using $P_{+}$. It is worth mentioning here that $k$-hop PAMs are lossy by design, regardless of whether we use $P$ or $P_{+}$. As a final note, the PAM framework can easily be generalised to heterogeneous (more than one node type) and undirected graphs. In the case of undirected graphs, we are essentially working with the upper/lower triangular PAMs. These extensions increase the applicability of the framework, as will be illustrated in the following section.
\section{Applications}\label{sec:applications}
In the following subsections, we will present some applications using the PAM representation. All experiments were run on an Ubuntu server with an Intel Core i7 Quad-Core @ $2.30$GHz. At most $8$ threads and $8$~GB of RAM were reserved for the experiments.
\subsection{Calculating Prime Adjacency Matrices}
To showcase the usability of the PAM representation and the simplicity of the calculations needed, we used some of the most common benchmark knowledge graphs and generated their $P^k$ matrices.
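As a concrete illustration of how such higher-order matrices can be obtained, the sketch below (our own, with hypothetical function and variable names, and not the exact code used for the timings reported later) builds $P_{+}$ from a list of $(s, r, o)$ triples and raises it to successive powers with SciPy sparse matrices:
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from sympy import prime

def k_hop_pams(triples, num_nodes, num_rels, k=5):
    """Build the additive PAM P_+ from (subject, relation, object) triples
    and return [P_+, P_+^2, ..., P_+^k] as CSR matrices."""
    rel2prime = {r: prime(r + 1) for r in range(num_rels)}  # 0 -> 2, 1 -> 3, ...
    rows, cols, vals = [], [], []
    for s, r, o in triples:
        rows.append(s); cols.append(o); vals.append(rel2prime[r])
    # duplicate (row, col) entries are summed by the CSR constructor,
    # which matches the P_+ aggregation
    P = csr_matrix((np.array(vals, dtype=np.float64), (rows, cols)),
                   shape=(num_nodes, num_nodes))
    pams, current = [P], P
    for _ in range(k - 1):
        current = current @ P
        pams.append(current)
    return pams
\end{verbatim}
Handling of very large cell values (the products and sums of primes grow quickly with $k$) is deliberately left out of this sketch and would need attention in practice.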
Specifically, we experimented on WN18RR~\cite{dettmers2018convolutional}, YAGO3-10~\cite{DBLP:conf/cidr/MahdisoltaniBS15}, FB15k-237~\cite{toutanova2015representing}, CoDEx-S~\cite{safavi2020codex}, HetioNet~\cite{himmelstein2017systematic} and ogbl-wikikg2~\cite{hu2020open}. The first three are some of the most well-known and commonly used datasets for link prediction in knowledge graphs, each with different structural characteristics. The other three datasets were selected to demonstrate the scalability of the proposed method. CoDEx-S is the smallest of three variants of the CoDEx dataset presented in~\cite{safavi2020codex} and was selected to showcase results in a small use case. On the other hand, HetioNet and ogbl-wikikg2 are much larger, in terms of the number of edges, than all other datasets used here. HetioNet is a biomedical network with multiple node and relation types, also exhibiting many hub nodes. The ogbl-wikikg2 dataset comes from the Open Graph Benchmark, which focuses on large-scale and challenging graphs, and was added to show the use of the approach on a large-scale dataset.
\begin{table}[htbp] \caption{Main characteristics of datasets and time needed to calculate PAMs up to $k=5$.} \begin{center} \adjustbox{max width=1\linewidth}{ \begin{tabular}{lcccc} \toprule Dataset & $N$ & $R$ & $\#$ Edges & Time for $P^5$\\ \midrule CoDEx-S & 2,034 & 42 & 32,888 & 0.2 sec. \\ WN18RR & 40,493 & 11 & 86,835 & 0.3 sec. \\ FB15k-237 & 14,541 & 237 & 272,115 & 39.0 sec. \\ YAGO3-10 & 123,182 & 37 & 1,079,040 & 23.9 sec. \\ HetioNet & 45,158 & 24 & 2,250,197 & 3.5 min. \\ ogbl-wikikg2 & 2,500,604 & 535 & 17,137,181 & $\approx$ 40 min. \\ \bottomrule \end{tabular} } \label{tab:scalability} \end{center} \end{table}
The basic characteristics of the six datasets can be seen in Table~\ref{tab:scalability}, where the number of nodes, the number of unique types of relations, the total number of edges (in the training set), and the total time needed to set up the PAM and calculate up to $P^5$ are presented. The datasets are sorted from small to large, based on the total number of edges in each graph. We can see that for small and medium-scale KGs the whole process takes less than a minute. Interestingly, the time needed to calculate $P^5$ for HetioNet is disproportionately longer than for YAGO3-10, which is of comparable size, and this is mainly due to the structure of the dataset. Specifically, it is due to the density of the graph: HetioNet has twice as many edges as YAGO3-10, with less than half the nodes. Hence, it is 5 times denser, leading to denser PAMs, which in turn takes a toll on the time needed. It is important to note that we have not optimised the calculations of the PAMs, opting for simple sparse matrix multiplications based on the Compressed Sparse Row format. More efficient ways can be used to handle large-scale datasets~\cite{7013051}\footnote{For all datasets, 8GB of RAM were allocated for calculating $P^5$, except for ogbl-wikikg2, which needed more than 200GB of RAM due to its size.}. It is also worth noting that this procedure needs to be executed only once to calculate the needed $P^k$, which can then be used for multiple downstream tasks. This means that with a few minutes of calculation we obtain the higher-order associations between graph nodes, which reveal valuable patterns. For example, using $P^5$ we can check all shortest paths of up to 5 hops, retaining some information on the relations that are exhibited along the path.
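A lookup of this kind requires only indexing into the precomputed matrices; a minimal sketch (using the list of matrices returned by the previous snippet, with names that are again our own) is:
\begin{verbatim}
def min_hop_and_value(pams, i, j):
    """Return the smallest k with P^k[i, j] != 0 (i.e. the shortest path
    length between i and j, up to len(pams) hops) together with the
    aggregated path value stored in that cell."""
    for k, Pk in enumerate(pams, start=1):
        v = Pk[i, j]
        if v != 0:
            return k, v
    return None, 0
\end{verbatim}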
This information can, in turn, be useful for various other tasks~\cite{ghariblou2017shortest,wu2020traffic}.
\subsection{Relation Prediction}
One task where we can utilize the complex relational patterns captured through higher-order $P^k$ is the \textit{relation prediction} task. This task consists of predicting the most probable relation that should connect two existing nodes in a graph. Essentially, we need to complete the triple (h, ?, t), where \textit{h} is the head entity and \textit{t} is the tail entity, by connecting them with a relation \textit{r}. Our idea is that we can use the rich PAM representations of the nodes and their pairs to construct expressive feature vectors for each pair, which can be useful for prediction, even without training a prediction model (lazy prediction). To this end, we devised a nearest neighbor scheme, where for each training sample (h, r, t), the pair (h, t) is embedded in a feature space as a sample and r is used as a label for that sample. This feature space is constructed from the PAMs, as shown next. At inference time, given a query pair ($h_q$, $t_q$), we also embed it in the same space, and the missing relation is inferred via the labels/relations of its nearest neighbors. Because of the simplicity of this approach (no trainable parameters), the representation of the pairs must be rich enough to capture the semantics needed to make the correct prediction. To create such a representation for a given pair of nodes (h, t), we designed a simple procedure that utilizes information about both the nodes $h$, $t$ and the paths that connect them. Specifically, the feature vector for a pair (h, t) is simply the concatenation of the representations for the head node, the tail node, and the paths that connect them. More formally, we express this as:
\begin{equation}\label{eq:rp} { R(h, t) = [Path(h, t) \| Path(t, h) \| R(h) \| R(t)] } \end{equation}
where $Path(u,v)$ denotes the feature vector of the path connecting $u$ to $v$, $R(x)$ denotes the feature vector of $x$, and the symbol $\|$ denotes the concatenation of vectors.
\begin{figure} \caption{The construction of the feature vector of the pair (0, 1) for the graph of Fig.~\ref{fig:PAM_rolling_example}.} \label{fig:PAM_RelationPrediction} \end{figure}
This procedure is highlighted in Fig.~\ref{fig:PAM_RelationPrediction} for the pair (0,1) of the small graph in Fig.~\ref{fig:PAM_rolling_example}. First, we create the feature vectors for the paths that connect the head to the tail and vice versa. The feature vector $Path(h, t)$ is simply a vector with one cell per hop, where the $k$-th cell contains the corresponding value $P^k[h, t]$. That is: $$Path(h, t) = [P[h, t], P^2[h, t], \cdots].$$ We can see in the top-right of Fig.~\ref{fig:PAM_RelationPrediction} that creating these path feature vectors is very easy, essentially accessing the values of the appropriate $P^k$ matrix cells. In the example shown, the green ones correspond to the original path from $0\rightarrow1$, while the blue ones correspond to the inverse path from $1\rightarrow0$. For the feature vectors of the head (tail) entity, we simply keep track of the products of the non-zero elements of the corresponding row (column), which essentially expresses the outgoing (incoming) relations and metapaths that the node exhibits.
For the head entity, this simply is: $$R(h) = [\prod P[h, :], \prod P^2[h, :], \cdots]$$ and similarly for the tail: $$R(t) = [\prod P[:, t], \prod P^2[:, t], \cdots]$$ where the products run over the non-zero elements of the corresponding rows and columns. The idea of using different representations for the head and the tail (i.e. using the rows that represent the outgoing paths for the head, versus using the columns that represent the incoming paths for the tail) accentuates the different roles these entities play in a relational triple. We can see this process in the middle-right of Fig.~\ref{fig:PAM_RelationPrediction}. Here the orange feature vector corresponds to the feature vector of the head $0$ and is constructed by calculating the product of the non-zero elements of the orange-annotated rows of the matrices on the left. The same process is followed for the tail of the pair, but now using the purple-annotated columns to create the purple feature vector. Finally, using the above representations for the paths and the entities themselves, we simply concatenate them to create the final feature vector for the pair as per eq.~\eqref{eq:rp}. This feature vector can optionally be one-hot encoded, because the values in PAMs, and therefore the values of the vectors, express structural properties which are closer to categorical features than to numerical ones. In the example of Fig.~\ref{fig:PAM_RelationPrediction}, the outcome can be seen in the bottom right (without the one-hot encoding procedure). In order to evaluate this model, which we name \textit{PAM-knn}, we will follow the experimental setup presented in~\cite{10.1145/3447548.3467247}. There, the authors experiment on this task with 6 KG datasets, but we will focus on the 3 most difficult ones: NELL995~\cite{DBLP:journals/corr/XiongHW17}, WN18RR~\cite{dettmers2018convolutional} and DDB14, which was created by the authors and is based on the Disease Database\footnote{\url{http://www.diseasedatabase.com/}}, a medical database containing biomedical entities and their relationships.
\begin{table}[htbp] \caption{Statistics of the datasets on relation prediction. $E_{x}$ denotes the number of edges in the corresponding split.} \begin{center} \begin{tabular}{lccccc} \toprule \textbf{Dataset} & $N$ & $R$ & $E_{train}$ & $E_{val}$ & $E_{test}$ \\ \midrule NELL995 & 63,917 & 198 & 137,465 & 5,000 & 5,000\\ WN18RR & 40,493 & 11 & 86,835 & 3,034 & 3,134\\ DDB14 & 9,203 & 14 & 36,561 & 4,000 & 4,000\\ \bottomrule \end{tabular} \label{tab:datasets_rp} \end{center} \end{table}
The characteristics of these datasets are summarized in Table~\ref{tab:datasets_rp}. We compare \textit{PAM-knn} to several widely used graph embedding models, namely TransE~\cite{bordes2013translating}, ComplEx~\cite{trouillon2016complex}, DistMult~\cite{yang2014embedding}, RotatE~\cite{sun2019rotate} and finally DRUM~\cite{sadeghian2019drum}, a model which focuses on relational paths to make a prediction. For a given entity pair $(h, t)$ in the test set, we rank the ground-truth relation type $r$ against all other candidate relation types according to each model. We use \textit{MRR} (mean reciprocal rank) and \textit{Hit@3} (hit ratio with a cut-off value of 3) as evaluation metrics, as in the original work. The performance of the competing models is reported as found in the article. Details on hyperparameters for \textit{PAM-knn} and the experimental setup can be found in the supplementary material.
\begin{table}[htbp] \caption{Results of relation prediction on all datasets.
The best results are highlighted in bold, and the best results of the competing models are highlighted with underlines.} \adjustbox{max width=1\linewidth}{ \begin{tabular}{l|rr|rr|rr} \toprule &\multicolumn{2}{c|}{NELL995} &\multicolumn{2}{c|}{WN18RR} &\multicolumn{2}{c}{DDB14} \\ & MRR & H@3 & MRR & H@3 & MRR & H@3 \\ \midrule TransE & 0.784 & 0.870 & \underline{0.841} & \underline{0.889} & \textbf{\underline{0.966}} & 0.980 \\ ComplEx & 0.840 & 0.880 & 0.703 & 0.765 & 0.953 & 0.968 \\ DistMult & 0.847 & 0.891 & 0.634 & 0.720 & 0.927 & 0.961 \\ RotatE & 0.799 & 0.823 & 0.729 & 0.756 & 0.953 & 0.964 \\ DRUM & \textbf{\underline{0.854}} & \textbf{\underline{0.912}} & 0.715 & 0.740 & 0.958 & \textbf{\underline{0.987}} \\ \hline PAM-knn & 0.740 & 0.843 & \textbf{0.852} & \textbf{0.957} & 0.915 & 0.961 \\ \bottomrule \end{tabular} } \label{tab:rp_results} \end{table}
The results of the experiments are presented in Table~\ref{tab:rp_results}. Once again, our goal is not to do extensive experimentation on relation prediction and propose a new state-of-the-art algorithm, but rather to highlight the usefulness of the PAM framework as a simple model that heavily relies on expressive representations and performs comparably well. We can see from Table~\ref{tab:rp_results} that in WN18RR \textit{PAM-knn} outperformed the competing models, while in the other 2 datasets its performance was not far from the competition. It is important to note here that \textit{PAM-knn} has no trainable parameters and that the whole procedure takes a few minutes on all datasets using a CPU, while the competing models were trained for hours using a GPU. Table~\ref{tab:rp_params} presents the number of trainable parameters of the embedding models on the smallest dataset, DDB14, which are on the order of millions. Moreover, they increase further as the number of nodes in the graphs grows.
\begin{table}[htbp] \centering \caption{Number of trainable parameters of all models on DDB14.} \begin{tabular}{ccccc} \toprule TransE & ComplEx & DistMult & RotatE & PAM-knn\\ \midrule 3.7M & 7.4M & 3.7M & 7.4M & 0 \\ \bottomrule \end{tabular} \label{tab:rp_params} \end{table}
To sum up, we designed a simple model for relation prediction that relies on expressive representations of node pairs, which can be naturally constructed using the PAMs, and, as shown in the results, performs on par with many widely used graph embedding methodologies. The procedure is very fast, has no trainable parameters, and can be used as a strong baseline for relation prediction. We could devise more complex representations for the pairs or train a model using the same feature vectors in a supervised setting, but this simple approach highlights the inherent expressiveness and efficiency of the PAM framework.
\subsection{Graph Classification}
Another application in which the structural properties of the graph play an important role is graph classification. We can use the higher-order PAMs to capture complex relational patterns and describe a graph in its entirety. Such a rich and compact representation of a graph can be very useful for this task~\cite{lee2018graph}. To create a representation for a given multi-relational graph using PAMs, we devised the procedure highlighted in Fig.~\ref{fig:PAM_GraphClass} for the small graph of Fig.~\ref{fig:PAM_rolling_example}. First, we calculate all the PAMs up to a pre-defined $k$ (usually up to 5 is sufficient).
Then, from each matrix $P^k$ we calculate the product of its non-zero values, $g_k = \displaystyle \prod_{P^k[i,j] > 0} P^k[i,j]$, as a single representative feature for this matrix.
\begin{figure} \caption{Graph representation for the graph of Fig.~\ref{fig:PAM_rolling_example}.} \label{fig:PAM_GraphClass} \end{figure}
The intuition behind this lies in the fact that these non-zero values express the paths found at that hop count. For example, as we can see in the top matrix of Fig.~\ref{fig:PAM_GraphClass}, the resulting $g_1=231525$ is the product of all the primes in the graph. By design, this number can be uniquely decomposed back into the original collection of primes (i.e. $\{3,3,3,5,5,7,7,7\}$). Therefore, it captures the information of the distribution of different relations in the graph in a single number, which acts as a ``fingerprint'' for the structure of the graph. Having generated all individual $g_k$ values, the final feature vector that represents the graph is simply $F(G) = [g_1, g_2, ..., g_k]$. In the example, the final feature vector has only 3 values, as we calculated only up to $P^3$, but this can be extended. We experimented with this graph feature representation in the task of graph classification, utilizing the benchmark datasets from~\cite{Morris+2020}. We use the multi-relational ones, which are mainly small-molecule datasets, with graphs being molecules exhibiting specific biological activities. The main characteristics of the 11 datasets used are shown in Table~\ref{tab:gc_datasets}. It is also worth noting that all the nodes have labels in these experiments (i.e. the type of the atom). Hence, the graphs are heterogeneous.
\begin{table}[htbp] \centering \caption{Graph classification dataset characteristics. The reported number of nodes and edges is the average per graph. Each dataset has 2 classes.} \begin{tabularx}{0.95\linewidth}{lCCC} \toprule Dataset & $\#$ Graphs & $\#$ Nodes & $\#$ Edges \\ \midrule AIDS & 2,000 & 15.69 & 16.20 \\ BZR$\_$MD & 306 & 21.30 & 225.06 \\ COX2$\_$MD & 303 & 26.28 & 335.12 \\ DHFR$\_$MD & 393 & 23.87 & 283.01 \\ ER$\_$MD & 446 & 21.33 & 234.85 \\ MUTAG & 188 & 17.93 & 19.79 \\ Mutagenicity & 4337 & 30.32 & 30.77 \\ PTC$\_$FM & 349 & 14.11 & 14.48 \\ PTC$\_$FR & 351 & 14.56 & 15.00 \\ PTC$\_$MM & 336 & 13.97 & 14.32 \\ PTC$\_$MR & 344 & 14.29 & 14.69 \\ \bottomrule \end{tabularx} \label{tab:gc_datasets} \end{table}
We compared our approach, denoted as Power Products (PP), in terms of time and accuracy versus one of the best-performing graph kernels~\cite{DBLP:journals/corr/abs-1903-11835}, the Weisfeiler-Lehman Optimal Assignment (WL-OA) kernel~\cite{kriege2016valid}. We used a Radial Basis Function (RBF) kernel on top of our graph feature vectors to calculate their similarity. Moreover, we created a variant which takes into account the node labels (PP-VH), using a simple Vertex Histogram (VH) kernel~\cite{sugiyama2015halting}. This will allow us to check the impact of utilizing the node information, which is not otherwise used in our framework. As this is a classification task, we use a Support Vector Machine (SVM) (with the similarity kernel precomputed by the underlying model). We use a nested cross-validation (cv) scheme, with an outer 5-fold cv for evaluation and an inner 3-fold cv for parameter tuning. For more details on the experimental set-up and the different hyperparameters used, please refer to the supplementary material. The results are shown in Table~\ref{tab:gc_results}.
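Before turning to the results, we note that the graph-level feature extraction just described amounts to only a few lines of code. The sketch below is our own illustration (the log-space encoding is our guard against numerical overflow and is a monotone re-encoding of the fingerprint, not necessarily the exact implementation behind the reported numbers):
\begin{verbatim}
import numpy as np
from scipy import sparse

def graph_feature_vector(pams):
    """F(G) = [g_1, ..., g_k], where g_k summarises the product of the
    non-zero entries of P^k; computed here as a sum of logarithms."""
    feats = []
    for Pk in pams:
        if sparse.issparse(Pk):
            vals = np.asarray(Pk.data, dtype=np.float64)
        else:
            Pk = np.asarray(Pk, dtype=np.float64)
            vals = Pk[Pk != 0]
        vals = vals[vals > 0]
        feats.append(float(np.log(vals).sum()) if vals.size else 0.0)
    return np.array(feats)
\end{verbatim}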
For brevity, we report the percent change in accuracy \textit{$\Delta Acc \%$} and time \textit{$\Delta Time \%$} of PP and PP-VH over the results of WL-OA. We can see from the table that, in terms of accuracy, the WL-OA kernel is better than PP in almost all cases. However, versus PP-VH it is better only in $4/11$ cases. The impact of the addition of node information indicates that for many datasets in the small-molecule classification task, structure alone is not sufficient. For example, in the PTC$\_$FR dataset, we have an improvement of $21$~p.p. of PP-VH over the PP variant.
\begin{table}[htbp] \begin{center} \caption{Results on graph classification.} \begin{tabularx}{\linewidth}{lCCCC} \toprule &\multicolumn{2}{c}{PP} &\multicolumn{2}{c}{PP-VH} \\ Dataset & \textit{$\Delta Acc \%$}& \textit{$\Delta Time \%$}& \textit{$\Delta Acc \%$} & \textit{$\Delta Time \%$} \\ \midrule AIDS & -1.45 & -99.65 & +0.40 & -99.44 \\ BZR$\_$MD & +5.66 & -94.78 & +12.22 & -88.42\\ COX2$\_$MD & -20.20 & -95.79 & +0.15 & -93.26\\ DHFR$\_$MD & -16.99 & -93.78 & -48.90 & -18.50\\ ER$\_$MD & -0.38 & -92.10 & +7.45 & -89.90\\ MUTAG & -8.69 & -89.88 & +2.21 & -82.93\\ Mutagenicity & -28.53 & -99.75 & -20.23 & -95.72\\ PTC$\_$FM & -2.99 & -95.37 & -18.06 & -67.87\\ PTC$\_$FR & -10.25 & -94.16 & +10.97 & +234.68\\ PTC$\_$MM & -18.35 & -93.99 & -7.73 & -27.64\\ PTC$\_$MR & -9.83 & -95.45 & -14.39 & -72.10\\ \bottomrule \end{tabularx} \label{tab:gc_results} \end{center} \end{table}
In terms of running time, PP offers an improvement of more than $90\%$ over WL-OA across all datasets, thus being orders of magnitude faster. Even when used with the VH kernel, the time needed is in most cases less than half the time needed by WL-OA. Thus, using the PP-VH variant we typically obtain more than $50\%$ improvement in time, while also improving the performance in $7/11$ datasets. This indicates that PP-VH could be used as a strong baseline that is also very fast in most cases. Overall, we have proposed a methodology that capitalizes on the information captured by PAMs and generates representations for graphs that can be useful for downstream tasks. The aforementioned procedure is extremely fast and, if paired with a method capable of handling node labels, can be used as a strong and efficient baseline for graph classification. Moreover, this is but one straightforward way to utilize the $P^k$ matrices to generate a representation for each graph/molecule. More complex models could be devised, since in our case we use only a single feature $g_k$ to describe the whole $P^k$ matrix.
\section{Discussion}\label{sec:discussion}
Having presented the methodology in detail and showcased possible applications, we examine here some open challenges:\newline \noindent\textbf{Order of relations.} Due to the commutative property of multiplication, we lose the order of the relations in a path. For example, knowing that $P^{2}[A, B] = 35$ from Fig.~\ref{fig:PAM_rolling_example}, which is factorized into $35 = 5*7$, does not explicitly indicate whether the actual path is $A\xrightarrow{5}C\xrightarrow{7}B$ or $A\xrightarrow{7}C\xrightarrow{5}B$. A possible solution to this would be to keep an index of incoming/outgoing relations per node and create a heuristic to find the sequence of relations in $k$-hop paths, according to the characteristics of start/end nodes.
For instance, knowing that node A has outgoing relations of type $3$ and $5$ only (Fig.~\ref{fig:PAM_rolling_example}), we can be sure that the only possible path is $A\xrightarrow{5}C\xrightarrow{7}B$.\newline \noindent\textbf{Exact decomposition of a sum of products.} In the small graph of Fig.~\ref{fig:PAM_rolling_example} we had $P^2[D,B] = 30$, which we decomposed into two paths: $30 = 9 + 21 = 3*3 + 3*7$. In higher-order matrices, it is impractical to keep track of such compositions. Although in this work we do not focus on the exact decomposition of PAM cell values, there are ways to handle such problems. For instance, one could use an Integer Linear Programming solver to find the exact decomposition of a complex path (i.e. sum of products), given the possible simple paths (i.e. products). As a closing note, despite the open research issues, the PAMs proposed in this paper still encapsulate structural characteristics that can be very useful in a plethora of downstream tasks, as demonstrated by the performance of the corresponding models.
\section{Conclusions}\label{sec:conclusions}
In this work, we presented the Prime Adjacency Matrix (PAM) representation for dealing with multi-relational networks. It is a compact representation that allows representing the one-hop relations of a network losslessly through a single adjacency matrix. This, in turn, leads to efficient ways of generating higher-order adjacency matrices that encapsulate rich structural information. We showcased that the representations created are rich enough to be useful in different downstream tasks. Overall, this is a new paradigm for representing multi-relational networks, which is very efficient and enables the use of tools from classical network theory. In the immediate future, we aim to further strengthen the methodology by addressing the challenges discussed in Section~\ref{sec:discussion}. We are also interested in measuring the effect of selecting different $\varphi$ mapping functions. Finally, it would be very interesting to see which ideas from single-relational analysis can be transferred to PAMs, such as spectral analysis~\cite{spielman2012spectral} or topological graph theory~\cite{gross2001topological}.
\end{document}
\begin{document}
\title{Memory effects in quantum channel discrimination}
\author{Giulio Chiribella}\affiliation{QUIT - Quantum Information Theory Group, Dipartimento di Fisica ``A. Volta'' Universit\`a di Pavia, via A. Bassi 6, I-27100 Pavia, Italy.}
\author{Giacomo M. D'Ariano}\affiliation{QUIT - Quantum Information Theory Group, Dipartimento di Fisica ``A. Volta'' Universit\`a di Pavia, via A. Bassi 6, I-27100 Pavia, Italy.}
\author{Paolo Perinotti}\affiliation{QUIT - Quantum Information Theory Group, Dipartimento di Fisica ``A. Volta'' Universit\`a di Pavia, via A. Bassi 6, I-27100 Pavia, Italy.}
\date{\today}
\begin{abstract} We consider quantum-memory-assisted protocols for discriminating quantum channels. We show that for optimal discrimination of memory channels, memory-assisted protocols are needed. This leads to a new notion of distance for channels with memory. For optimal discrimination and estimation of sets of independent unitary channels memory-assisted protocols are not required. \end{abstract}
\maketitle
The problem of discrimination between quantum channels has been recently considered in quantum information \cite{darloppar,acin,sacchi1,sacchi2,feng,chefles}. For example, in Ref. \cite{chefles} an application of discrimination of unitary channels as oracles in quantum algorithms is suggested. The optimal discrimination is achieved by applying the unknown channel locally on some bipartite input state of the system with an ancilla, and then performing some measurement at the output. A natural extension to multiple uses is obtained by applying the uses in {\em parallel} to a global input state. However, more generally, one can apply the uses partly in parallel and partly in series, even intercalated with other fixed transformations, as in Ref. \cite{combs}. Indeed, due to its intrinsic causally ordered structure, the memory channel can be used either in parallel or in a causal fashion (see Fig. \ref{uno}). In this Letter we show that this {\em causal} scheme is necessary when the multiple uses are correlated---i.e. for memory channels---whereas it is not needed for independent uses of unitary channels (the case of non-unitary channels remains an open problem).
\begin{figure} \caption{\label{uno}} \end{figure}
Memory channels \cite{palmacc,manc,kretwer,nila,plenio} have attracted increasing attention in recent years. They are quantum channels whose action on the input state at the $n$-th use can depend on the previous $n-1$ uses through a quantum ancilla. The problem of optimal discriminability of two memory channels is relevant for assessing that a cryptographic protocol is concealing \cite{bitcomm} and for the minimization of oracle calls in quantum algorithms.\par We will provide an example showing that a pair of memory channels can be perfectly discriminable, even though they never provide orthogonal output states when applied to the same global input state. This new causal setup provides the most general discrimination scheme for multiple quantum channels, and this fact leads to a new notion of distance between channels.\par In the case of two unitary channels, optimal parallel discrimination with $N$ uses was derived in Refs. \cite{darloppar,acin}, and in Ref. \cite{feng} a causal scheme without entanglement was proved to be equivalently optimal. In the following, we will prove the optimality of both schemes for discrimination of unitaries.
We will generalize this result to discrimination of sequences of unitaries, and to estimation with multiple copies. In contrast to the case of memory channels, we will prove that for all these examples causal schemes are not necessary.\par It is convenient to represent a channel $\map C$ by means of its Choi operator $C$ defined as follows
\begin{equation} C:=\map C\otimes\map I(|I\rangle\!\rangle\langle\!\langle I|), \end{equation}
for a channel $\map C$ with input/output states in $\mathcal H_{\mathrm{in}/\mathrm{out}}$, respectively, where $|I\rangle\!\rangle:=\sum_{n}|n\rangle|n\rangle\in\mathcal H_\mathrm{in}^{\otimes2}$, $\{|n\rangle\}$ being an orthonormal basis for $\mathcal H_\mathrm{in}$. In this representation complete positivity of $\map C$ is simply $C\geq0$, and the trace-preserving constraint is $\operatorname{Tr}_\mathrm{out}[C]=I_\mathrm{in}$. In a memory channel with $N$ inputs and $N$ outputs labeled as in Fig. \ref{uno}, the causal independence of output $2n+1$ on input $2m$ with $m>n$ is translated into the following recursive property \cite{combs} of the Choi operator $C=:C^{(N)}$
\begin{equation} \operatorname{Tr}_{2n-1}[C^{(n)}]=I_{2n-2}\otimes C^{(n-1)},\quad\forall 1\leq n\leq N, \label{caus} \end{equation}
where conventionally $C^{(0)}=1$. A {\em tester} is a set of positive operators $P_i\geq0$ such that the probability of outcome $i$ while testing the channel $\map C$ is provided by the generalized Born rule
\begin{equation} p(i|\map C):=\operatorname{Tr}[P_i C]. \end{equation}
The notion of tester is an extension of that of POVM, which describes the statistics of customary measurements on quantum states. The normalization of probabilities for testers on memory channels with $N$ input-output systems is equivalent to the following recursive property, analogous to that in Eq.~\eqref{caus}
\begin{equation} \begin{split} &\sum_iP_i=I_{2N-1}\otimes \Xi^{(N)},\\ &\operatorname{Tr}_{2n-2}[\Xi^{(n)}]=I_{2n-3}\otimes \Xi^{(n-1)},\quad\forall 2\leq n\leq N,\\ &\operatorname{Tr}[\Xi^{(1)}]=1. \end{split} \label{tester} \end{equation}
One can prove \cite{combs} that any tester can be realized by a concrete measurement scheme of the class represented in Fig. \ref{genersch}.
\begin{figure} \caption{\label{genersch}} \end{figure}
Mathematical structures analogous to Eqs. \eqref{caus} and \eqref{tester} have been introduced in Ref. \cite{watgut} to describe strategies in a quantum game.\par Every tester $\{P_i\}$ can be written in terms of a usual POVM $\{\tilde P_i\}$ as follows
\begin{equation} P_i=(I\otimes\Xi^{(N)\frac12})\tilde P_i (I\otimes\Xi^{(N)\frac12}), \label{povm} \end{equation}
and for every memory channel $\map C$ the generalized Born rule can be rewritten as the usual one in terms of the state
\begin{equation}\label{C} \tilde C:=(I\otimes\Xi^{(N)\frac12})C (I\otimes\Xi^{(N)\frac12}). \end{equation}
The state $\tilde C$ corresponds to the output system-ancilla state in Fig.~\ref{genersch} after the evolution through all boxes of both the tester and the memory channel, on which the final POVM $\{\tilde P_i\}$ is performed \cite{delayed}.\par The standard discriminability criterion for channels is the following.
Two channels $\map C_0$ and $\map C_1$ on a $d$-dimensional system are perfectly discriminable if there exists a pure state $|\Psi\rangle\!\rangle$ in dimension $d^2$ such that $\map C_i\otimes\map I(|\Psi\rangle\!\rangle\langle\!\langle \Psi|)$ with $i=0,1$ are orthogonal (every joint mixed state with an ancilla of any dimension can be purified with an ancilla of dimension $d$). Here we use the notation $|\Psi\rangle\!\rangle:=\sum_{m,n}\Psi_{mn}|m\rangle|n\rangle$, which associates an operator $\Psi$ with a bipartite vector. It is easy to see that the orthogonality between the two output states is equivalent to the following condition \cite{noteort}
\begin{equation} C_0(I\otimes \rho)C_1=0, \label{discchan} \end{equation}
where $\rho:=\Psi^*\Psi^T$, with $\Psi^*$ and $\Psi^T$ denoting the complex conjugate and the transpose of $\Psi$ in the canonical basis $\{|n\rangle\}$, respectively. The criterion in Eq.~\eqref{discchan} is too restrictive for memory channels. Indeed, the correct condition for perfect discriminability of two memory channels $\map C_i$ with $i=0,1$ is the existence of a tester $\{P_i\}$ with $i=0,1$, such that
\begin{equation}\label{PC} \operatorname{Tr}[P_i C_j]=\delta_{ij}, \end{equation}
which means that the two channels can be perfectly discriminated by a measurement scheme such as that of Fig. \ref{genersch}. Using Eqs. (\ref{povm}) and (\ref{C}), Eq. (\ref{PC}) becomes $\operatorname{Tr}[\tilde P_i \tilde C_j]=\delta_{ij}$, whence the states $\tilde C_i$ with $i=0,1$ are orthogonal, and the same derivation as for Eq. (\ref{discchan}) leads to
\begin{equation} C_0\left(I\otimes\Xi^{(N)}\right)C_1=0, \label{condiscr} \end{equation}
with $\Xi^{(N)}$ as in Eq.~\eqref{tester}. In Eq.~\eqref{condiscr} the identity operator acts only on space $2N-1$, differently from Eq.~\eqref{discchan}, where it acts on all output spaces. It is interesting to analyze the special case of memory channels made of sequences of independent channels $\{\map C_{ij}\}_{1\leq j\leq N}$ with $i=0,1$ (in Fig. \ref{genersch}, the memory channel is replaced by an array of channels without the ancillas $A_1$ and $A_2$). The condition for perfect discriminability is the same as Eq.~\eqref{condiscr} with $C_0$ and $C_1$ replaced by $\bigotimes_j C_{ij}$ for $i=0,1$, respectively. In terms of a Kraus form $\map C_i=\sum_jK_{ij}\cdot K_{ij}^\dag$, Eq. (\ref{condiscr}) becomes the orthogonality condition $\langle\!\langle K_{0j}|\left(I\otimes\Xi^{(N)}\right)|K_{1k}\rangle\!\rangle=0$, which for the sequences of maps becomes
\begin{equation} \bigotimes_{l=1}^N\langle\!\langle K^{l}_{0j_l}|\left(I\otimes\Xi^{(N)}\right)\bigotimes_{m=1}^N |K^{m}_{1k_m}\rangle\!\rangle=0 \end{equation}
for all choices of indices $(j),(k)$, where $K^m_{ij}$ are the Kraus operators for the channel $\map C_{im}$. For sets composed of single channels $\map C_i$ with $i=0,1$, the condition becomes simply the existence of a state $\rho$ such that
\begin{equation} \operatorname{Tr}[\rho K^\dag_{0j} K_{1k}]=0,\quad\forall j,k, \end{equation}
and the minimum rank of such a state $\rho$ determines the amount of entanglement required for discrimination.\par We now provide an example of memory channels that cannot be discriminated by a parallel scheme, but can be discriminated with a tester.
Each memory channel has two uses, and is denoted as $\map C_i=\map W_i\circ\map Z_i$ for $i=0,1$, where the two uses $\map W_i$ and $\map Z_i$ are connected only through the ancilla $A$, and $\map W_i$ has input $0$ and output $A$ and $1$, and $\map Z_i$ has input $A$ and $2$ and output $3$. The first use $\map W_0$ of $\map C_0$ is the channel with $d$-dimensional input and fixed output
\begin{equation} \map W_0(\rho)=\frac{1}{d^2}\sum_{p,q=0}^{d-1}|p,q\rangle\langle p,q|\otimes |p,q\rangle\langle p,q|, \end{equation}
$|p,q\rangle$ being an orthonormal basis in a $d^2$-dimensional Hilbert space. The second use $\map Z_0$ of $\map C_0$ is given by
\begin{equation} \map Z_0(\rho)=\sum_{p,q=1}^{d-1}W_{p,q}\operatorname{Tr}_{A}[\rho(I_2\otimes |p,q\rangle\langle p,q|)]W^\dag_{p,q}, \end{equation}
where the unitaries $W_{p,q}:=Z^p U^q$ are the customary shift-and-multiply operators, with $Z|n\rangle=|n+1\rangle$ (addition modulo $d$) and $U|n\rangle=e^{\frac{2\pi i}d n}|n\rangle$. The second channel $\map C_1$ is given by
\begin{equation} \map W_1(\rho)=\frac I{d^2},\quad\map Z_1(\rho)=|0\rangle\langle0|. \end{equation}
We will now show that the two channels are discriminable with a causal setup and not with a parallel one. Their Choi operators are
\begin{equation} \begin{split} C_0&=\frac{1}{d^2}\sum_{p,q=1}^{d-1}|p,q\rangle\langle p,q|_{1}\otimes|W_{p,q}\rangle\!\rangle\langle\!\langle W_{p,q}|_{32}\otimes I_0,\\ C_1&=\frac{1}{d^2}\;I^{\otimes 2}_{1}\otimes |0\rangle\langle0|_3\otimes I_{02}, \end{split} \label{chois} \end{equation}
where the output spaces $1,3$ have dimension $d^2$ and $d$, respectively. Suppose that the channels are perfectly discriminable in the parallel scheme; then, by Eq. (\ref{discchan}), there exists $\rho$ such that
\begin{equation} C_0 (I_{13}\otimes\rho_{02}) C_1=C_0 C_1(I_{13}\otimes\rho_{02}) =0, \end{equation}
where the second equality comes from the expression of $C_1$ in Eq.~\eqref{chois}. Tracing both sides over the output spaces 1 and 3, one has $\operatorname{Tr}_{13}[C_0C_1]\rho=0$. However,
\begin{equation} \operatorname{Tr}_{13}[C_0C_1]=\frac{I}{d^2} \end{equation}
whence $\rho=0$. This proves by contradiction that the criterion in Eq.~\eqref{discchan}---corresponding to parallel discrimination schemes---is not satisfied by the channels $\map C_0$ and $\map C_1$. We will now show a simple causal scheme which allows perfect discrimination of the same channels. The first use of the channel is applied to any state $|\psi\rangle\langle\psi|$; then the measurement with POVM $\{|p,q\rangle\langle p,q|\}$ is performed on the output system 1. Depending on the outcome $\bar p,\bar q$, the second use of the channel is applied to the state $W^\dag_{\bar p,\bar q}|1\rangle\langle1|W_{\bar p,\bar q}$. It is clear that the output of channel $\map Z_0$ is the state $|1\rangle\langle1|$, whereas the output of $\map Z_1$ is $|0\rangle\langle0|$.\par This example highlights the need for a causal scheme in order to discriminate between memory channels. The causal discriminability criterion \eqref{condiscr} implies a notion of distance between memory channels different from the usual distance between channels. Indeed, the discriminability criterion (\ref{discchan}) between channels corresponds to the cb-norm distance \cite{paulsen,kita,notanorm}. The latter can be rewritten as follows (see e.g.
Ref.~\cite{sacchi1}) \begin{equation} \begin{split} &D_{cb}(\map C_0,\map C_1)=\max_{\rho}\left\|\left(I\otimes\rho^{\frac12}\right)\Delta \left(I\otimes\rho^{\frac12}\right)\right\|_1,\\ &\Delta:=C_0-C_1, \end{split} \label{distcb} \end{equation} where the maximum is over all states $\rho$, and $\|X\|_1:=\operatorname{Tr}[\sqrt{X^\dag X}]$ denotes the trace norm. One has $D_{cb}(\map C_0,\map C_1)\leq 2$, with equality for perfectly discriminable channels. For memory channels the discriminability criterion (\ref{condiscr}) corresponds to the new distance \begin{equation} D(\map C_0,\map C_1):=\max_{\Xi^{(N)}}\left\|\left(I\otimes\Xi^{(N)\frac12}\right)\Delta \left(I\otimes\Xi^{(N)\frac12}\right)\right\|_1, \end{equation} where the maximum is over all $\Xi^{(N)}$ satisfying the conditions \eqref{tester}. For $N=1$ this notion reduces to the usual distance in Eq.~\eqref{distcb}.\par The easiest application of testers is the discrimination of sequences of unitary channels $(T_j)$ and $(V_j)$, with $j=1,\dots,N$. Without loss of generality we can always reduce to the discrimination of the sequence $(U_j):=(T^\dag_jV_j)$ from the constant sequence $(I)$. Let us first consider the case of sequences of two unitaries. By referring to the scheme in Fig. \ref{genersch}, we can restate the problem as the discrimination of $W^\dag(U_1\otimes I)W(U_2\otimes I)$ from $I$ on a bipartite system, where $W$ describes the interaction with an ancillary system. It is well known that optimal discriminability of a unitary $X$ from the identity is related to the angular spread $\Theta(X)$, defined as the maximum relative phase between two eigenvalues of $X$ \cite{darloppar,acin}. Apart from the degenerate case in which $X$ has only two different eigenvalues, the discriminability of $X$ from $I$ is governed by the quantity $\max\{0,\cos(\Theta(X)/2)\}\geq0$, which is zero for $\Theta(X)\geq\pi$, corresponding to perfect discriminability. Since unitary conjugation preserves $\Theta(X)$, since the angular spread of the product of two unitaries $X,Y$ satisfies the bound \cite{presk} \begin{equation} \Theta(XY)\leq\Theta(X)+\Theta(Y), \label{spredis} \end{equation} and since $\Theta(X\otimes Y)=\Theta(X)+\Theta(Y)$, one has $\Theta[W^\dag(U_1\otimes I)W(U_2\otimes I)]\leq\Theta(U_1\otimes U_2)$, so that no causal scheme can outperform the parallel one. By induction, one can prove that this is true for sequences of any length $N$. Indeed, defining $X_{N-1}$ as the product of the tester unitaries alternated with $U_j\otimes I$ for $1\leq j<N$, if $\Theta(X_{N-1})\leq\Theta(\bigotimes_{j=1}^{N-1}U_j)$ holds true, then the analogous bound holds for $N$ as well, due to Eq.~\eqref{spredis}. By the same argument, one can also prove that the sequential scheme of Ref. \cite{feng} matches the performance of the parallel scheme, since there always exists $T$ such that $\Theta(UTVT^\dag)=\Theta(U\otimes V)$ (indeed, it is sufficient that $T$ transforms the eigenbasis of $V$ into that of $U$, suitably matching the eigenvalues). Therefore, the schemes of Refs. \cite{darloppar,acin,feng} are optimal also for discriminating sequences of unitaries. Notice that this also includes the case of discrimination of two different permutations of a sequence of unitary transformations. Another situation in which a parallel scheme already performs optimally is the estimation of unitary transformations $U_g$, $g\in G$, which form a unitary representation of the group $G$.
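Before discussing that case, we add a small numerical illustration (ours, not taken from Refs.~\cite{darloppar,acin,presk}) of the angular-spread quantities used in the argument above: $\Theta$ is computed as the smallest arc of the unit circle containing all eigenphases, additivity under tensor products is checked on two hypothetical unitaries with prescribed spectra, and the subadditivity bound of Eq.~\eqref{spredis} is verified for their product.
\begin{verbatim}
# Illustrative sketch: angular spread of a unitary and its (sub)additivity.
import numpy as np

def angular_spread(U):
    # smallest arc of the unit circle containing all eigenphases of U
    phases = np.sort(np.angle(np.linalg.eigvals(U)))
    gaps = np.diff(np.concatenate([phases, [phases[0] + 2*np.pi]]))
    return 2*np.pi - gaps.max()

rng = np.random.default_rng(1)

def random_unitary(d):
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def unitary_with_phases(phases):
    v = random_unitary(len(phases))
    return v @ np.diag(np.exp(1j*np.array(phases))) @ v.conj().T

U = unitary_with_phases([0.0, 0.3, 0.7])   # Theta(U) = 0.7 by construction
V = unitary_with_phases([0.0, 0.5, 0.9])   # Theta(V) = 0.9 by construction
print(np.isclose(angular_spread(np.kron(U, V)),
                 angular_spread(U) + angular_spread(V)))  # additivity under tensor
print(angular_spread(U @ V)
      <= angular_spread(U) + angular_spread(V) + 1e-9)    # subadditivity (spredis)
\end{verbatim}
Since the spread adds exactly under tensor products but can only grow subadditively under sequential composition, no interleaving of tester unitaries can push it beyond the parallel value, which is the content of the induction above.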
For $N$ uses of the unitary $U_g$ the Choi operator in this case is \begin{equation} R_g^{(N)}=R_g^{\otimes N},\qquad R_g=(U_g\otimes I)|I\rangle\!\rangle\langle\!\langle I|(U_g^\dag\otimes I). \end{equation} The probability density of estimating $h$ when the actual element is $g$ is \begin{equation} p(h|g)=\operatorname{Tr}[P_h R_g^{(N)}]. \end{equation} As a figure of merit for estimation one typically considers a cost function $c(h,g)$ averaged over $h$, with $c(h,g)=c(fh,fg)$ for all $f\in G$ (the cost depends only on the relative position of $h$ and $g$, not on their specific location): \begin{equation} C_g(p)=\int_G\mu(\operatorname{d} h)c(h,g)p(h|g), \end{equation} where $\mu(\operatorname{d} g)$ is the invariant Haar measure on $G$. The optimal density $p$ is the one minimizing $\hat C(p):=\max_{g\in G}C_g(p)$. For every density $p(h|g)$ there exists a {\em covariant} one, $p_c(h|g)=p_c(fh|fg)$ for all $f\in G$, which can be obtained as the average $p_c(h|g):=\overline{p(fh|fg)}$ over $f\in G$ (practically, this corresponds to randomly transforming the input before measuring and processing the output accordingly). Since $\hat C(p_c)=\overline{C}(p)\leq\hat C(p)$, where $\overline{C}(p)$ denotes the Haar average of $C_g(p)$ over $g\in G$, the optimal density minimizing both costs $\hat C$ and $\overline{C}$ can be chosen to be covariant. Now, since $p_c(h|g)=p_c(e|gh^{-1})$ ($e$ denoting the identity element in $G$), this means that the optimal tester can be taken of the covariant form \begin{equation} P_h=(U_h\otimes I)^{\otimes N}P_e(U_h^\dag \otimes I)^{\otimes N}. \end{equation} For such $P_h$, the normalization $\int_G\mu(\operatorname{d} h)P_h=I\otimes\Xi^{(N)}$ implies the commutation relation $[I\otimes \Xi^{(N)},(U_h\otimes I)^{\otimes N}]=0$, whence the POVM $\tilde P_h$ in Eq.~\eqref{povm} is itself covariant. The optimal tester problem is then equivalent to optimal state estimation in the orbit $(I\otimes \Xi^{(N)\frac12})R^{(N)}_g(I\otimes \Xi^{(N)\frac12})$. This proves that the optimal estimation of $U_g$, with $g\in G$ a compact group, can be reduced to a covariant state estimation problem, and the parallel scheme of Ref. \cite{entest} is optimal. Whether the same optimal estimation can be achieved with a sequential scheme as in Ref. \cite{feng} remains an open problem, as does, more generally, the possibility of minimizing the amount of entanglement used by the tester.\par In conclusion, we considered the role of memory effects in the discrimination of memory channels and of customary channels with multiple uses. We used the new notion of {\em tester} \cite{combs}, which describes any possible scheme with parallel, sequential, or combined setup of the tested channels. We provided an example of discrimination of memory channels that cannot be accomplished by a parallel scheme, and for which perfect discrimination is achieved by a sequential scheme. The new testing of memory channels corresponds to a new notion of distance between channels. Finally, we showed that for the purpose of unitary channel discrimination and estimation with multiple uses, memory effects are not needed. \acknowledgments This work has been supported by the EC through the project SECOQC. \end{document}
\begin{document} \title{Atypical Exit Events Near a Repelling Equilibrium} \author{Yuri Bakhtin} \author{Hong-Bin Chen} \address{Courant Institute of Mathematical Sciences\\ New York University \\ 251~Mercer~St, New York, NY 10012 } \email{[email protected], [email protected] } \keywords{Vanishing noise limit, unstable equilibrium, exit problem, polynomial decay, equidistribution, Malliavin calculus} \subjclass[2010]{60H07, 60H10, 60J60} \begin{abstract} We consider exit problems for small white noise perturbations of a dynamical system generated by a vector field, and a domain containing a critical point with all positive eigenvalues of linearization. We prove that, in the vanishing noise limit, the probability of exit through a generic set on the boundary is asymptotically polynomial in the noise strength, with exponent depending on the mutual position of the set and the flag of the invariant manifolds associated with the top eigenvalues. Furthermore, we compute the limiting exit distributions conditioned on atypical exit events of polynomially small probability and show that the limits are Radon--Nikodym equivalent to volume measures on certain manifolds that we construct. This situation is in sharp contrast with the large deviation picture where the limiting conditional distributions are point masses. \end{abstract} \tableofcontents \section{Introduction} This paper is a part of our program on the long-term behavior of dynamical systems with multiple unstable equilibria organized into heteroclinic networks, under small noisy perturbations. The existing work in this direction (see \cite{Stone-Holmes:MR1050910}, \cite{Stone-Armbruster:doi:10.1063/1.166423}, \cite{Armbruster-Stone-Kirk:MR1964965} for early analysis with elements of heuristics and \cite{Bakhtin2010:MR2731621}, \cite{Bakhtin2011}, \cite{Almada-Bakhtn:MR2802310}, \cite{long_exit_time}, \cite{long_exit_time-1d-part-2}, \cite{exit_time} for rigorous analysis) is a departure from the classical Freidlin--Wentzell (FW) theory of metastability. In the FW theory, rare transitions can be described via large deviations theory and happen at rates that are exponentially small in $\epsilon^{-2}$, where $\epsilon$ is the strength of the perturbation: \begin{align}\label{eq:SDE_X} dX^\epsilon_t = b(X^\epsilon_t)dt + \epsilon \sigma(X^\epsilon_t)dW_t. \end{align} In \cite{Kifer1981}, it was shown that the exit from a neighborhood of an unstable critical point of $b$ happens in time of the order of $\log \epsilon^{-1}$, in the most unstable direction, along the invariant manifold associated with the top eigenvalue of the linearization of the vector field~$b$. In the case where the top eigenvalue is not simple, the limit of the exit location distribution was studied in \cite{Eizenberg:MR749377}. In the case of a simple top eigenvalue, the results of~\cite{Kifer1981} were strengthened in \cite{Bakhtin2010:MR2731621}, \cite{Bakhtin2011}, \cite{Almada-Bakhtn:MR2802310}, where scaling limits for the distributions of exit locations were obtained and used to compute asymptotic probabilities of various pathways through the network. In particular, it turned out that there are interesting memory effects, and in general the typical limiting behavior at logarithmic timescales is not simply a random walk on the graph of heteroclinic connections.
To study the dynamics over longer times, one has to study the rare events realizing unusual transitions that are improbable over logarithmic time scales, see the discussion of heteroclinic networks in~\cite{long_exit_time-1d-part-2}. It was also understood in \cite{long_exit_time} and~\cite{long_exit_time-1d-part-2} that the leading contribution to these rare events is due to abnormally long stays in the neighborhood of the critical point. Asymptotic results on the decay of probabilities of these events were obtained for repelling equilibria in these papers for dimension~$1$ and in~\cite{exit_time} for higher dimensions. The general results of \cite{exit_time} can be briefly summarized as follows. If all the eigenvalues of the linearization at the critical point are positive and simple and the leading one is $\lambda>0$, then, for all $\alpha>1/\lambda$ and initial conditions at distance of the order of $\epsilon$ to the critical point, it was shown that \[\mathbb{P}\{\tau>\alpha \log \epsilon^{-1}\}=c\epsilon^{\beta}(1+o(1)), \quad \epsilon\to0,\] where $c$ and $\beta$ were explicitly computed. Note that this is a more precise estimate than $\log \mathbb{P}\{\tau>\alpha \log \epsilon^{-1}\}=(\beta \log \epsilon) (1+o(1))$ conjectured in~\cite{mikami1995}. In the present paper, we extend the study of atypical exit times from~\cite{exit_time} to the study of atypical exit locations in the same setting. We assume that the dynamics near the critical point (which we place at the origin in $\mathbb{R}^d$) admits a smooth conjugacy to the linear dynamics with simple characteristic exponents $\lambda_1>\dots>\lambda_d>0$ and consider a neighborhood $\mathbf{D}$ of the origin, with smooth boundary $\partial\mathbf{D}$. For any subset $A$ of $\partial\mathbf{D}$ possessing a certain regularity property (most relatively open subsets of $\partial\mathbf{D}$ fall into this category), we prove that \begin{equation} \mathbb{P}\{X_\tau\in A\}=\epsilon^{\rho(A)}\mu(A)(1+o(1)),\quad \epsilon\to0, \label{eq:power-asymptotics} \end{equation} where $\rho(A)$ and $\mu(A)$ are constants. The values that the exponent $\rho(A)$ can take belong to a discrete set of values $(\rho_i)_{i=1}^d$: \begin{equation} \label{eq:rho_i} \rho_i= \sum_{j<i} \left(\frac{\lambda_j}{\lambda_i} -1\right), \quad i=1,2,\dots,d. \end{equation} Here and throughout this paper, the sum over an empty set is understood to be $0$. The relevant index $i=i(A)$ to be used in \eqref{eq:rho_i}, i.e., such that $\rho(A)=\rho_{i(A)}$ in \eqref{eq:power-asymptotics}, is defined in the following way. For each $i=1,\dots,d$, there is a uniquely defined $i$-dimensional manifold $M^i$ invariant under the flow generated by the drift vector field $b$, with tangent space at the origin spanned by eigenvectors associated with the eigenvalues $\lambda_1,\dots,\lambda_i$. These manifolds form a flag, i.e., $M^1\subset M^2\subset\dots \subset M^d$, their traces on~$\partial\mathbf{D}$ defined by $N^i=M^i\cap \partial\mathbf{D}$ also satisfy $N^1\subset N^2\subset\dots \subset N^d$ and, additionally, $N^d=\partial\mathbf{D}$, and thus $i(A)=\min\{i\in\{1,\dots,d\}:\ N^i \cap A\ne\emptyset\}$ is always well-defined, see Figure~\ref{fig:3d_example}. Our results mean that exits along manifolds of various dimensions have probabilities of different polynomial decay rates.
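For concreteness, here is a tiny numerical sketch of the exponents in \eqref{eq:rho_i}; the eigenvalues $3,2,1$ are made up for illustration and do not come from the paper. It shows that $\rho_1=0$ and that the exponents grow with $i$.
\begin{verbatim}
# Exponents rho_i from (eq:rho_i) for illustrative eigenvalues.
lambdas = [3.0, 2.0, 1.0]   # lambda_1 > lambda_2 > lambda_3 > 0 (example values)
rho = [sum(lj / li - 1.0 for lj in lambdas[:i]) for i, li in enumerate(lambdas)]
print(rho)                  # [0, 0.5, 3.0]: rho_1 = 0 < rho_2 < rho_3
\end{verbatim}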
Since $0=\rho_1<\rho_2<\dots<\rho_d$, these probabilities are of the order of $\epsilon^{\rho_1}(=\epsilon^0=1)\gg \epsilon^{\rho_2}\gg \dots\gg\epsilon^{\rho_d}$. The differences in the order of magnitude for these probabilities are due to a drastic distortion caused by exponential expansion with different rates in different eigendirections. One can say that the exit direction of the system is largely determined by its behavior in infinitesimal time, which is then amplified by exponential growth with different rates in different directions. In agreement with the results of~\cite{Kifer1981}, exiting in a neighborhood of the two-point set $N^1$ ($M^1$ is a $1$-dimensional manifold, i.e., a curve, associated with the most unstable direction) is a typical event which has asymptotic probability $1$. Exiting away from it happens with probability of the order of $\epsilon^{\rho_2}$ and, conditioned on this polynomially rare event, the exit distribution concentrates on $N^2$. In general, exiting away from $N^k$ is a rare event of probability of the order of $\epsilon^{\rho_{k+1}}$ and, conditioned on this rare event, the exit distribution concentrates on $N^{k+1}$. Moreover, these conditional distributions converge weakly, as $\epsilon\to 0$, to a limiting measure that is Radon--Nikodym equivalent to the $k$-dimensional volume on $N^{k+1}$, with a density that can be described explicitly. The basic case where the domain~$\mathbf{D}$ is a cube and the vector field $b$ is linear is at the heart of the analysis. It turns out that the limiting distributions of exit locations conditioned on exits through various faces of the cube show equidistribution properties that cannot be obtained through large deviation estimates and are surprising if one is used to the FW mindset. We discuss the simple situation described above and build our intuition in Section~\ref{sec:heuristic}. In Section~\ref{section:setting} we give the general setting and our main results in detail. The proofs are given in Sections \ref{sec:derivation-main-result}--\ref{section:extension}. The techniques that we use are primarily probabilistic. Most of them are based on classical stochastic calculus tools, and the key estimate is based on Malliavin calculus. In principle, exit problems can be addressed using PDE tools. For exits near unstable critical points, some elements of PDE-based analysis can be found in~\cite{Kifer1981}, \cite{Day95}, and \cite{CGLM:MR3083930}. So it would be interesting to find a PDE approach to the problem solved in the present paper, but we follow the path of probabilistic analysis using the basic approach similar to \cite{Eizenberg:MR749377}, \cite{Day95}, \cite{Bakhtin2010:MR2731621}, \cite{Bakhtin2011}, \cite{Almada-Bakhtn:MR2802310}, \cite{OnGumbel}, \cite{long_exit_time}, \cite{long_exit_time-1d-part-2}, \cite{exit_time}. Concluding the introduction, let us briefly discuss two directions that will be natural continuations of the present work. Although our new results and those on exit times from~\cite{exit_time} are based on the same density estimates, we do not develop that connection further in this paper. In particular, the detailed asymptotic analysis of the joint distribution of exit locations and exit times seems possible but harder, and we postpone it to another publication. A more important question is the asymptotic behavior of exit distributions near hyperbolic critical points (saddles) of the driving vector field.
Atypical events described in terms of the exit location are responsible for atypical transitions in heteroclinic networks. Similarly to the situation in this paper, their probability is expected to decay polynomially in $\epsilon$, leading to a hierarchy of transitions observable at various polynomial time scales, see the heuristic analysis in~\cite{long_exit_time-1d-part-2}. The approach of the present paper, based on the Malliavin calculus density estimates from~\cite{exit_time}, will be an important ingredient in making this analysis rigorous in another forthcoming publication. {\bf Acknowledgments.} The conditional asymptotic equidistribution first emerged in discussions with Zsolt Pajor-Gyulai in connection with our project on noisy heteroclinic networks. YB thanks NSF for the partial support via award DMS-1811444. \begin{figure} \caption{The dashed line segment and the shaded surface are the portions of $M^1$ and $M^2$ inside $\mathbf{D}$.} \label{fig:3d_example} \end{figure} \section{A heuristic computation for a simple case} \label{sec:heuristic} Let us give a heuristic analysis of the simplest situation with exit distribution behavior that is counterintuitive from the point of view of the FW theory. Suppose the diffusion $X=X^\epsilon$ in question is two-dimensional: \begin{align*} dX^1_t&=\lambda_1 X^1_t dt +\epsilon dW^1_t,\\ dX^2_t&=\lambda_2 X^2_t dt +\epsilon dW^2_t, \end{align*} where $\lambda_1>\lambda_2>0$ and $X^1_0=X^2_0=0$, and $W^1,W^2$ are independent standard Wiener processes. We define $\tau=\inf\{t\ge 0:\ X_t\in\partial \mathbf{D}\}$, where $\mathbf{D}=(-1,1)^2$ is a square, and study the distribution of $X_\tau$, the location of exit from $\mathbf{D}$. When $\epsilon$ is small, it takes a long time to exit, and for large times $t$, the Duhamel principle gives \begin{equation} \label{eq:duhamel-in-simple-ex} X^k_t=\epsilon e^{\lambda_k t}\int_0^{t}e^{-\lambda_k s}dW^k_s \approx \epsilon e^{\lambda_k t}N_k, \end{equation} where $N_k=\int_0^{\infty}e^{-\lambda_k s}dW^k_s$ is a centered Gaussian random variable with variance $1/(2\lambda_k)$. Denoting $\tau_k=\inf\{t\ge 0: |X^k_t|=1\}$, $k=1,2$, we obtain from \eqref{eq:duhamel-in-simple-ex} that \begin{equation} \label{eq:tau-in-simple-ex} \tau_k\approx \frac{1}{\lambda_k}\log\frac{1}{\epsilon}+\frac{1}{\lambda_k}\log\frac{1}{|N_k|}. \end{equation} Therefore, for small $\epsilon$, typically we have $\tau_1<\tau_2$. Moreover, plugging \eqref{eq:tau-in-simple-ex} with $k=1$ into \eqref{eq:duhamel-in-simple-ex} with $k=2$, we obtain \[ X_{\tau_1}^2=\epsilon^{1-\frac{\lambda_2}{\lambda_1}}|N_1|^{-\frac{\lambda_2}{\lambda_1}}N_2\to 0, \quad \epsilon\to 0, \] so the typical random locations of exit $X_\tau$ will concentrate near the points $q_\pm=(\pm1, 0)$ where the invariant manifold associated with the leading eigenvalue $\lambda_1$ (i.e., the first axis) intersects $\partial\mathbf{D}$. Let us now prohibit exits through the sides of $\mathbf{D}$ that contain $q_\pm$ and study the unlikely event $B$ of exiting $\mathbf{D}$ through $[-1,1]\times\{-1,1\}$, i.e., we define $B=\{|X^2_\tau|=1\}$. It turns out that $\mathbb{P}(B)=c\epsilon^{\frac{\lambda_1}{\lambda_2}-1}(1+o(1))$ and the exit distribution conditioned on $B$ is, somewhat surprisingly, asymptotically uniform on $[-1,1]\times\{-1,1\}$. Let us present a heuristic argument for this.
Introducing the events \[ A_r=\left\{|N_1|<r \epsilon^{\frac{\lambda_1}{\lambda_2}-1}|N_2|^{\frac{\lambda_1}{\lambda_2}}\right\},\quad r>0, \] we obtain from~\eqref{eq:tau-in-simple-ex} that \[ B=\{\tau_2<\tau_1\}\approx A_1, \] and, plugging \eqref{eq:tau-in-simple-ex} with $k=2$ for $t$ into \eqref{eq:duhamel-in-simple-ex} with $k=1$, we obtain that \[ \{|X_{\tau_2}^1|\le r\}\approx A_r,\quad r>0. \] Next, \[ \mathbb{P}(A_r)=\int_{\Sigma_{r,\epsilon}}g(x_1,x_2) dx_1 dx_2, \] where $\Sigma_{r,\epsilon}=\{(x_1,x_2):\ |x_1|<r \epsilon^{\frac{\lambda_1}{\lambda_2}-1}|x_2|^{\frac{\lambda_1}{\lambda_2}} \}$ and $g$ is the joint Gaussian density of~$N_1$ and $N_2$. As $\epsilon\to0$, the domain $\Sigma_{r,\epsilon}$ shrinks to the axis $\{x_1=0\}$, so we can approximate $g(x_1,x_2)$ by $g(0,x_2)$ and conclude that \[ \mathbb{P}(A_r)=c\epsilon^{\frac{\lambda_1}{\lambda_2}-1} r(1+o(1)),\quad \epsilon\to 0, \] where \[ c=2\int_\mathbb{R} g(0,x)|x|^{\frac{\lambda_1}{\lambda_2}}dx. \] Therefore, \[ \mathbb{P}(B)=\mathbb{P}\{\tau_2<\tau_1\}=c\epsilon^{\frac{\lambda_1}{\lambda_2}-1}(1+o(1)),\quad \epsilon \to 0, \] and \[ \frac{\mathbb{P}\{|X_{\tau_2}^1|< r\}}{\mathbb{P}(B)}\to r,\quad \epsilon\to 0, \] which, due to the symmetry of this example, implies that the limiting distribution is uniform. Another implication of this calculation is that to realize $B$ one needs typical values of $N_2$ and atypically small values of $N_1$. Due to~\eqref{eq:tau-in-simple-ex}, this translates into typical values of $\tau_2$ and atypically large values of $\tau_1$. One can say that the main effect of conditioning on $B$ is conditioning $X^1$ to stay within $[-1,1]$ for abnormally long times, with only a moderate effect on the evolution of $X^2$. Let us now expand this example and consider a third coordinate evolving independently according to \[ dX^3_t=\lambda_3 X^3_t dt +\epsilon dW^3_t, \] with $0<\lambda_3<\lambda_2$ and $X^3_0=0$. Now, the unlikely event of interest $B=\{|X^2_\tau|=1\}$ corresponds to the exit through the union of two faces of the cube $\mathbf{D}=(-1,1)^3$ given by $[-1,1]\times\{-1,1\}\times[-1,1]$. From the analysis above we know that the exit will happen at time~$\tau_2$ corresponding to moderate values of $N_2$, i.e., near $\frac{1}{\lambda_2}\log\frac{1}{\epsilon}$. Plugging the expression for~$\tau_2$ from \eqref{eq:tau-in-simple-ex} into \eqref{eq:duhamel-in-simple-ex} for $k=3$, we obtain that $X^3_{\tau_2}=\epsilon^{1-\frac{\lambda_3}{\lambda_2}}|N_2|^{-\frac{\lambda_3}{\lambda_2}}N_3$. Hence, under conditioning on $B$, $X^3_{\tau_2}$ converges to $0$. Combining this with our analysis of the two-dimensional situation above, we conclude that the exit distribution converges to the uniform distribution on $[-1,1]\times \{-1,1\}\times\{0\}$. This union of two one-dimensional segments should be viewed as the intersection of $\partial\mathbf{D}$ with the two-dimensional invariant manifold associated with $\lambda_1$ and $\lambda_2$, i.e., the $x_1x_2$-plane. The goal of this paper may be described as giving a rigorous treatment of this example and its generalizations to higher-dimensional nonlinear situations with space-dependent diffusion matrix and general domains.
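Before moving on to the general setting, we remark that the equidistribution in the two-dimensional example is easy to observe numerically. The following time-discretized Monte Carlo sketch is our own illustration; the values $\lambda_1=2$, $\lambda_2=1$, $\epsilon=0.05$ and the step size are arbitrary choices, not taken from the text. It samples exits from the square using the exact one-step law of each linear coordinate, keeps the rare paths realizing $B$, and reports the empirical mean and standard deviation of the conditional exit coordinate, which should be close to those of the uniform law on $[-1,1]$ (mean $0$, standard deviation $1/\sqrt 3\approx 0.577$).
\begin{verbatim}
# Monte Carlo sketch of the 2D example (illustrative parameters only).
import numpy as np

lam1, lam2, eps = 2.0, 1.0, 0.05
h, n_paths = 0.01, 100_000
rng = np.random.default_rng(0)

def step(x, lam):
    # exact update of dX = lam*X dt + eps*dW over a time step h
    sd = eps * np.sqrt((np.exp(2*lam*h) - 1.0) / (2*lam))
    return np.exp(lam*h) * x + sd * rng.normal(size=x.shape)

x1, x2 = np.zeros(n_paths), np.zeros(n_paths)
active = np.ones(n_paths, dtype=bool)
exit_x1 = np.zeros(n_paths)
via_B = np.zeros(n_paths, dtype=bool)

while active.any():
    x1[active] = step(x1[active], lam1)
    x2[active] = step(x2[active], lam2)
    out = active & ((np.abs(x1) >= 1.0) | (np.abs(x2) >= 1.0))
    via_B[out] = np.abs(x2[out]) >= 1.0     # exit through the top/bottom sides
    exit_x1[out] = x1[out]
    active &= ~out

cond = exit_x1[via_B]
print("P(B) approx:", via_B.mean())          # roughly of order eps^{lam1/lam2 - 1}
print("conditional mean/std of X^1_tau:", cond.mean(), cond.std())
\end{verbatim}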
\section{Setting and the main result}\label{section:setting} In $\mathbb{R}^d$, we consider an open simply connected set $\mathbf{D}$, a bounded vector field $b:\mathbb{R}^d \to \mathbb{R}^d$ and the flow $(S_t)_{t\in\mathbb{R}}$ associated with $b$ via the ODE \begin{align}\label{eq:ODE} \begin{split} \tfrac{d}{dt}S_tx & = b(S_tx),\\ S_0x &= x, \end{split} \end{align} satisfying the following conditions: \begin{itemize} \item the origin $0\in \mathbf{D}$; \item $b(0)=0$; \item for all $x\in\mathbf{D}\setminus\{0\}$, the deterministic exit time \begin{equation} \label{eq:deterministic-exit-time} t(x)=\inf\{t\ge0:\ S_tx\in\partial\mathbf{D}\} \end{equation} satisfies $t(x)<\infty$. In particular, $x=0$ is the only critical point of $b$ in $\mathbf{D}$; \item $b$ is $C^5$ and $b(x)=\mathbf{a}x+q(x)$, where \begin{itemize} \item $|q(x)|\leq C_q|x|^2$ for some $C_q>0$, \item $\mathbf{a}$ is a $d\times d$ diagonal matrix with real entries $\lambda_1>\lambda_2>...>\lambda_d>0$; \end{itemize} \item $\partial \mathbf{D}$ is $C^1$; \item $b$ is transversal to $\partial\mathbf{D}$, i.e., $ \langle\mathbf{n}(x), b(x)\rangle> 0$ for all $x \in \partial \mathbf{D}$, where $\mathbf{n}$ denotes the outer normal of $\partial \mathbf{D}$. \end{itemize} A more general situation where $\mathbf{a}$ is only assumed to have eigenvalues $\lambda_1>\lambda_2>...>\lambda_d>0$ can be reduced to this one by a diagonalizing linear transformation. Our results also hold for a broad class of domains with piecewise smooth boundaries, but we restrict ourselves to domains with smooth boundaries for simplicity. By the Hartman--Grobman Theorem (see, e.g., \cite[Theorem 6.3.1]{Dynamics}), there is an open neighborhood $\mathbf{U}$ of $0$ and a homeomorphism $f: \mathbf{U} \to f(\mathbf{U})$ conjugating the flow~$S$ to the linear flow $\bar S$ generated by the vector field $x\mapsto \mathbf{a}x$ and given by $\bar S_tx= e^{\mathbf{a}t}x= (x^je^{\lambda_j t})_{j=1}^d$, namely, \begin{align*} \frac{d}{dt}f(S_tx)=\mathbf{a}f(S_tx). \end{align*} \begin{itemize} \item in addition, we assume that $f$ is a $C^5$ diffeomorphism. \end{itemize} \begin{Rem} Due to \cite{Sternberg-1957:MR96853}, for this $C^5$ conjugacy condition to hold in our setting, it suffices to require (i)~a smoothness condition: $b$ is $C^k$ for some $k\ge 5 \vee( \lambda_1 / \lambda_d)$, and (ii)~a no-resonance condition: \[\lambda_k\ne m_1\lambda_1+\dots+m_d \lambda_d\] for all $k=1,\dots,d$ and all nonnegative integer coefficients $m_1,\dots, m_d$ satisfying $m_1+\dots+m_d\ge 2$. \end{Rem} The vector field $x\mapsto \mathbf{a}x$ is the pushforward of $b$ under $f$, and since $\mathbf{a}$ is diagonal, $f$ can be chosen to satisfy $f(0)=0$ and $Df(0)= I$, the identity matrix. We are interested in random perturbations of \eqref{eq:ODE} given by~\eqref{eq:SDE_X}, where \begin{itemize} \item $\epsilon \in (0,1)$ is the noise amplitude parameter; \item $(W_t,\mathcal{F}_t)$ is a standard $n$-dimensional Wiener process with $n\geq d$; \item $\sigma=(\sigma^i_j)_{i=1,\dots,d;\ j=1,\dots,n} $ is a map from $\mathbb{R}^d$ into the space of $d\times n$ matrices satisfying \begin{itemize} \item $\sigma$ is $C^3$ (and, by adjustments outside $\mathbf{D}$, we may assume that $\sigma$ has bounded derivatives in $\mathbb{R}^d$), \item $\sigma(0):\mathbb{R}^n \to \mathbb{R}^d$ is surjective.
\end{itemize} \end{itemize} We will study the solutions of~\eqref{eq:SDE_X} with initial data $X^\epsilon_0 = \epsilon \xi_\epsilon\in\mathcal{F}_0$, where \begin{itemize} \item $\xi_\epsilon$ converges to some $\xi_0\in \mathcal{F}_0$ in distribution as $\epsilon \to 0$; \item there are constants $C, c>0$ independent of $\epsilon$ such that \begin{align}\label{eq:xi_exponential_tail} \mathbb{P}\{|\xi_\epsilon|> x\}\leq Ce^{-|x|^c} \text{ for all }x\geq 0,\ \epsilon \in [0,1). \end{align} \end{itemize} To simplify notation, we often suppress the dependence on $\epsilon$. In particular, we write~$X_t$ instead of $X^\epsilon_t$. We introduce the first time for $X_t$ to exit $\mathbf{D}$ as \begin{align}\label{eq:def_tau} \tau=\tau_\epsilon=\inf\{t>0:X_t\not\in \mathbf{D}\}. \end{align} Our main results concern the asymptotic properties of the distribution of $X_\tau=X^\epsilon_{\tau_\epsilon}$, the location of exit of $X$ from $\mathbf{D}$. To state them, we need to introduce more definitions and notations. \begin{itemize} \item $\mathcal{B}_L = [-L,L]^d$, \quad $L>0$; \item $\mathbf{F}^i_{L\pm}=\mathcal{B}_L\cap \{x\in \mathbb{R}^d:x^i=\pm L \}$ is a face of $\mathcal{B}_L$, and $\mathbf{F}^i_L=\mathbf{F}^i_{L+}\cup\mathbf{F}^i_{L-}$; \item for $A\subset B\subset \mathbb{R}^d$, $\partial_{B}A$ and $\mathop{\mathrm{int}}_{B}A$ denote the boundary and the interior of~$A$ relative to $B$; \item $\partial=\partial_{\mathbb{R}^d}$, $\partial_L=\partial_{\partial \mathcal{B}_L}$, $\mathop{\mathrm{int}}_L=\mathop{\mathrm{int}}_{\partial \mathcal{B}_L}$; \item $\mathcal{H}^s$ denotes the $s$-dimensional Hausdorff measure. \end{itemize} \smallskip For $k=1,\dots,d$, we define the sets $\Lambda^k = \bigoplus_{i=1}^k \mathbb{R} e_i$, where $(e_i)_{i=1}^d$ is the standard basis for~$\mathbb{R}^d$. The sets $\Lambda^k$ are invariant manifolds for the linear flow $(\bar S_t)$ associated with the top~$k$ exponents $\lambda_1,\dots,\lambda_k$. Therefore, the sets \begin{align}\label{eq:def_M^k} M^k=\{x\in\mathbb{R}^d: S_{t}x\in f^{-1}(\Lambda^k)\ \text{for some}\ t\in\mathbb{R}\},\quad k=1,\dots,d, \end{align} are the $k$-dimensional invariant manifolds associated with the top $k$ exponents for the flow~$(S_t)$. Let us define the traces of these manifolds on the boundary by $N^k=M^k\cap \partial\mathbf{D}$ and note that, due to our transversality assumptions, $N^k$ is a $(k-1)$-dimensional $C^1$-manifold. In particular, $N^1$ consists of two points, $N^2$ is a closed curve in $\partial \mathbf{D}$, and $N^d$ coincides with~$\partial\mathbf{D}$. For any set $A\subset\partial \mathbf{D}$ we define the index of $A$ to be \begin{equation*} i(A)=\min \big\{ k \in\{ 1,\dots, d\}: \overline A\cap N^k \neq \emptyset \big\}, \end{equation*} see Figure~\ref{fig:3d_example}. This notion is going to be useful because we will show that, due to the presence of different exponential growth rates in different directions, the probabilities for the system to exit~$\mathbf{D}$ near $N^k$ have different orders of magnitude for different values of $k$. Thus, the index of~$A$ picks the manifold with the dominating contribution. However, this notion becomes truly meaningful and helps in computing the asymptotics of exit probabilities only for sets with an additional regularity property, which is compatible with the notion of weak convergence of probability measures, holds true for most relatively open subsets of~$\partial\mathbf{D}$, and which we proceed to define.
Assuming $d\ge 2$, we say that a set $A\subset\partial \mathbf{D}$ is $N$-regular if it is Borel and satisfies \begin{equation} \label{eq:regularity-def} \mathcal{H}^{i(A)-1}\{\partial_{\partial \mathbf{D}} A\cap N^{i(A)}\} = 0. \end{equation} In the case of $d=1$, all subsets of $\partial\mathbf{D}$ are considered to be $N$-regular. We still need a few more elements of our construction. There is a Euclidean ball $O$ centered at $0$, satisfying $\overline{f^{-1}(O)}\subset \mathbf{U}$, and such that the vector field $x\mapsto\mathbf{a}x$ is transversal to $\partial O$. Let us fix~$O$ and define \begin{align}\label{eq:def_L(O)} L(O)=\sup\{L>0: \mathcal{B}_L\subset O\}. \end{align} For every $L\in(0,L(O))$, we can define $\psi_L:f^{-1}(\partial\mathcal{B}_L) \to\partial \mathbf{D}$ as the Poincar\'e map along the flow $(S_t)$: \begin{equation} \label{eq:psi_L} \psi_L(x)=S_{t(x)}x,\quad x\in f^{-1}(\partial\mathcal{B}_L), \end{equation} where $t(\cdot)$ was introduced in~\eqref{eq:deterministic-exit-time}. We can now define \begin{equation} \zeta_L=f\circ \psi_L^{-1}:\partial\mathbf{D}\to\partial\mathcal{B}_L. \label{eq:def_of-zeta} \end{equation} For $x,y\in\mathbb{R}^d$, we define \begin{gather*} \tilde{\chi}^{i}(x^i,y)= \frac{1}{\sqrt{(2\pi)^d\det\mathcal{C}}}|x^i+y^i|^{\sum_{j<i}\frac{\lambda_j}{\lambda_i}} \int_{ \mathbb{R}^{d-i}}e^{-\frac{1}{2}x^\intercal\mathcal{C}^{-1}_0 x}\Big|_{(x^1,\dots,x^{i-1})=-(y^1,\dots,y^{i-1})}dx^{i+1}\dots dx^d,\\ \chi^i_+(y)=\int_{[-y^i,\infty)}\tilde{\chi}^i(x^i,y)dx^i \quad \text{and}\quad \chi^i_-(y)=\int_{(-\infty, -y^i]}\tilde{\chi}^i(x^i,y)dx^i, \end{gather*} where \begin{align}\label{eq:def_C_0_Czero} \mathcal{C}^{jk}= \sum_{l=1}^n\frac{\sigma^j_l(0)\sigma^k_l(0)}{\lambda_j + \lambda_k}. \end{align} For $L<L(O)$ and $i=1,2,\dots,d$, we define the following measure on $\partial \mathbf{D}$: \begin{align}\label{eq:def_mu_L} \mu^i_L(A)=L^{-\sum_{j<i}\frac{\lambda_j}{\lambda_{i}}}\sum_{\bullet \in \{+,-\}} \mathbb{E}\chi^i_\bullet(\xi_0)\cdot \mathcal{H}^{i-1}(\zeta_L(A) \cap \mathbf{F}^i_{L\bullet}\cap \Lambda^{i} ) , \quad A\subset\partial \mathbf{D}. \end{align} In this definition, the set $\mathbf{F}^i_{L}\cap \Lambda^{i}$ is the union of two $(i-1)$-dimensional rectangles: \[ \mathbf{F}^i_{L}\cap \Lambda^{i}=[-L,L]^{i-1}\times\{-L,L\}\times\{0\}^{d-i}, \] and $\mathcal{H}^{i-1}(\,\cdot\, \cap \mathbf{F}^i_{L\bullet}\cap \Lambda^{i})$ is simply the $(i-1)$-dimensional Euclidean volume (Lebesgue measure), so the measure $\mu^i_L$ is Radon--Nikodym equivalent to the volume measure on~$N^i\cap \zeta_L^{-1}(\mathbf{F}^i_{L})$. Recalling the definition of $\rho_i$ in~\eqref{eq:rho_i}, we can now state our main result. \begin{Th} \label{thm:main} If $A$ is an $N$-regular set with index $i$, then there is $L_A\in(0,L(O))$ such that for all $L\in (0,L_A)$ \begin{align}\label{eq:in_the_main_result} \lim_{\epsilon\to 0}\epsilon^{-\rho_{i}}\mathbb{P}\{X_\tau\in A\} =\mu^{i}_L(A). \end{align} \end{Th} \begin{Rem}\label{rem:LA-ordered} The proof of this theorem also implies that the family of numbers $(L_A)$ indexed by $N$-regular sets $A$ can be chosen to satisfy $L_{A'}\geq L_A$ for $A'\subset A$. \end{Rem} \begin{Rem} Note that the scaling exponent~$\rho_i$ and the limiting constant $\mu^{i}_L(A)$ in~\eqref{eq:in_the_main_result} are defined explicitly. Thus, \eqref{eq:in_the_main_result} provides a very precise approximation.
Although the right-hand side of (\ref{eq:in_the_main_result}) seemingly involves $L$, in fact it does not depend on $L\in(0,L_A)$. It is easy to see that $L^{-\sum_{j<i}\frac{\lambda_j}{\lambda_i}}$ in the definition of~$\mu^i_L(A)$ is the correct scaling factor compensating for distortions in directions $1,\dots, i-1$ introduced by the linear flow that is a part of the definition of $\zeta_L(A)$. We also note that $N$-regular sets $A$ such that $\mu^i_L(A)>0$ (so Theorem~\ref{thm:main} provides the truly leading term in the asymptotics) form a large class that includes, for example, $\zeta_L$-preimages of small open balls with centers in $\mathbf{F}^i_{L\bullet}\cap \Lambda^{i}$. \end{Rem} According to Theorem~\ref{thm:main}, the decay rate of the probability of exit is the same for all $N$-regular sets of the same index $i$. This, along with the fact that $N$-regular sets are specifically defined to be continuity sets for $\mathcal{H}^{i-1}$, allows us to state a corollary on the limiting behavior of conditional exit distributions. If $\mathbb{P}\{X_\tau \in A\}\neq 0$, let $\nu_A^\epsilon$ be the exit distribution of $X$ conditioned on exiting from $A$: \begin{align*} \nu_A^\epsilon(\cdot)= \frac{\mathbb{P}\{X_\tau \in \,\cdot\,\cap A\}}{\mathbb{P}\{X_\tau \in A\}}. \end{align*} We denote the weak convergence of finite positive Radon measures by ``$\rightharpoonup$''. \begin{Th}\label{Thm:2} Let $A$ be an $N$-regular set of index $i$ and suppose that $\mu^{i}_L( A)> 0$. Then for all $L< L_A$ the following weak convergence holds as $\epsilon \to 0$: \begin{align*} \nu_A^\epsilon \rightharpoonup \frac{\mu^{i}_L(\,\cdot\, \cap A)}{\mu^{i}_L( A)}. \end{align*} \end{Th} \begin{Rem} The definition of $\mu^i_L$ in \eqref{eq:def_mu_L}, together with the bi-Lipschitzness of $\zeta_L$ and~\eqref{eq:zeta_L_map}, implies that $\mu^i_L(\,\cdot\, \cap A)$ is equivalent to the $(i-1)$-dimensional Hausdorff measure restricted to $\overline{ A}\cap N^{i}$. \end{Rem} In the special case where $b(x)\equiv \mathbf{a}x$, $\mathbf{D}=\mathop{\mathrm{int}} \mathcal{B}_L$, $A=\mathbf{F}^i_{L\pm}$ for some $L>0$ and $i=1,\dots,d$, the limiting measure in Theorem~\ref{Thm:2} is the uniform distribution on $\mathbf{F}^i_{L\pm} \cap \Lambda^i$. Thus Theorems~\ref{thm:main} and~\ref{Thm:2} are natural generalizations of the simple $2$- and $3$-dimensional equidistribution examples discussed in Section~\ref{sec:heuristic}. It is important to stress that Theorem~\ref{Thm:2}, where the limiting conditional distribution is equivalent to the volume measure on the manifold $N^i$, paints a picture drastically different from the typical large deviations picture, where the limiting conditional distributions are often point masses concentrated at the minimizers of the large deviation rate function. The unconditioned exit distribution was also shown to converge to an explicitly computed limit equivalent to the volume on a manifold of smaller dimension in~\cite{Eizenberg:MR749377}. In that paper, the eigenvalues of the linearization are not required to be simple, but the assumptions on the nonlinearity are fairly restrictive. At the core of the results of \cite{Eizenberg:MR749377} and ours is the fact that the transition probability over a small time interval is approximately Gaussian and this distribution is carried to the boundary almost deterministically by the flow, different directions being stretched with different rates.
However, our results are more delicate, since we have to zoom into the transition distribution, studying its regularity at small scales with Malliavin calculus tools. The plan of the proof is the following. We are going to decompose the dynamics into two stages: (i) the evolution in the transformed coordinates until the exit from a small cube $\mathcal{B}_L$ (or, equivalently, from $f^{-1}(\mathcal{B}_L)$ in the original coordinates) and (ii) the evolution between exiting from $f^{-1}(\mathcal{B}_L)$ and exiting from $\mathbf{D}$. In the second stage, the process essentially follows the deterministic flow trajectory $(S_t)$ and the associated Poincar\'e map $\psi_L$, with an error controlled by a FW large deviation estimate, so it is stage (i) that is central to the analysis. During stage~(i), the evolution is well approximated by a Gaussian process due to the approximate linearity of the drift, so to obtain the desired asymptotics we combine direct computations for this Gaussian process with estimates on the error of the Gaussian approximation based on Malliavin calculus bounds previously obtained in~\cite{exit_time}. \section{Proof of the main result}\label{sec:derivation-main-result} Theorem~\ref{thm:main} will follow from two results that we give first. The first result helps to reduce the problem to considering only sets $A$ with $\zeta_L(A)$ being a subset of the union of the two faces of $\mathcal{B}_L$ associated with the coordinate $i(A)$, and the second one computes the asymptotic probability of exit through such a set. \begin{Prop} \label{Prop:geometry_of_pull_back} Let $A\subset\partial \mathbf{D}$ satisfy $i(A)=i$. The number~$L'_A$ defined by \begin{align*}L'_A=\sup\{L\in(0,L(O)):\ \overline{\zeta_L(A)}\cap \mathbf{F}^j_L=\emptyset, \forall j<i;\ \mathcal{B}_L\subset O\} \end{align*} is positive (i.e., the set under the supremum is nonempty), and for all $L<L'_A$ we have \begin{enumerate} \item $\overline{\zeta_L(A)}\cap \mathbf{F}^j_L=\emptyset$ for all $j<i$; \label{item:1_of_prop} \item if $i<d$ and $A$ is $N$-regular, then there are $N$-regular $A_0, A_1\subset \partial \mathbf{D}$ such that \label{item:2_of_prop} \begin{enumerate} \item $A_0 \subset A \subset A_0\cup A_1$;\label{item:a_of_prop} \item $i(A_0)=i$; \quad $\zeta_L(A)\cap \Lambda^i = \zeta_L(A_0)\cap \Lambda^i $; \quad$\zeta_L(A_0)\subset \mathop{\mathrm{int}}_{\partial\mathcal{B}_L}{\mathbf{F}^i_L}$; \label{item:b_of_prop} \item $i(A_1)= i+1$.\label{item:c_of_prop} \end{enumerate} \end{enumerate} \end{Prop} \begin{Prop} \label{Prop:measure-theoretical_prop_of_pull-back} There is $L_0>0$ such that the following holds. Let $A\subset\partial \mathbf{D}$ be an arbitrary $N$-regular set with $i(A)=i$. If $L<L_0$ satisfies $\overline{\zeta_L(A)}\subset \mathop{\mathrm{int}}_{\partial\mathcal{B}_L}{\mathbf{F}^i_L}$, then \begin{align*} \lim_{\epsilon\to 0}\epsilon^{-\rho_i}\mathbb{P}\{X_\tau \in A\} = \mu^i_L(A) . \end{align*} \end{Prop} Proposition \ref{Prop:geometry_of_pull_back} and Proposition \ref{Prop:measure-theoretical_prop_of_pull-back} will be proved in Sections \ref{subsection:geometry_pullback} and \ref{subsection:error_of_pullback}, respectively. \begin{proof} [Proof of Theorem~\ref{thm:main}] Let $L_A=L'_A\wedge L_0$, where $L_0$ and $L'_A$ are defined in the propositions above. We immediately see that if $A' \subset A$, then $L_{A'}\geq L_A$, so Remark~\ref{rem:LA-ordered} is automatically justified.
The idea of the proof is to use Proposition~\ref{Prop:geometry_of_pull_back} in order to approximate $A$ by a union of regular sets of various indices such that Proposition~\ref{Prop:measure-theoretical_prop_of_pull-back} can be applied to each of them. More formally, we will use induction on $i(A)$, starting with the case $i(A) = d$. Note that, by (\ref{item:1_of_prop}) of Proposition \ref{Prop:geometry_of_pull_back}, $i(A) = d$ implies $\zeta_L(A) \subset \mathop{\mathrm{int}}_{\partial \mathcal{B}_L}(\mathbf{F}^d_L)$ for all $L< L_A$. Therefore, we can apply Proposition \ref{Prop:measure-theoretical_prop_of_pull-back} to obtain $$\lim_{\epsilon \to 0} \epsilon^{-\rho_d}\mathbb{P}\{X_\tau\in A\} = \mu^d_L(A), \quad\text{ for all }L< L_A, $$ which completes the proof of the induction basis. For the induction step, let us assume that the desired result holds for all $A$ with $i(A) = k$ for $i+1 \leq k \leq d$. Let us show it is also true for $A$ with $i(A) = i$. Let us fix $L\in(0,L_A)$ arbitrarily. Since $i<d$ now, we can define $A_0$ and $A_1$ according to part (\ref{item:2_of_prop}) of Proposition \ref{Prop:geometry_of_pull_back}. Since $A_0\subset A$, Remark \ref{rem:LA-ordered} implies $L<L_A\leq L_{A_0}$. Then, using part~(\ref{item:b_of_prop}) of Proposition \ref{Prop:geometry_of_pull_back}, Proposition \ref{Prop:measure-theoretical_prop_of_pull-back}, and the definition of $\mu^i_L$ in \eqref{eq:def_mu_L}, we obtain \begin{align} \lim_{\epsilon \to 0} \epsilon^{-\rho_i}\mathbb{P}\{X_\tau\in A_0\} =\mu^i_L(A_0)=\mu^i_L(A). \label{eq:asymptotics-for-A_0} \end{align} By (\ref{item:c_of_prop}) of Proposition \ref{Prop:geometry_of_pull_back}, $i(A_1)=i+1$, so by the induction hypothesis, for each $L'\leq L_{A_1}$, \begin{align*} \lim_{\epsilon \to 0} \epsilon^{-\rho_{i+1}}\mathbb{P}\{X_\tau\in A_1\} = \mu^{i+1}_{L'}(A_1), \end{align*} which implies that $\mathbb{P}\{X_\tau\in A_1\} = \mathcal{O}(\epsilon^{\rho_{i+1}}) = o(\epsilon^{\rho_i})$. Due to (\ref{item:a_of_prop}), \begin{align*} |\mathbb{P}\{X_\tau\in A\} - \mathbb{P}\{X_\tau\in A_0\}| \leq \mathbb{P}\{X_\tau\in A_1\} = o(\epsilon^{\rho_i}). \end{align*} Combining this with~\eqref{eq:asymptotics-for-A_0}, we complete the induction step and the entire proof. \end{proof} To prove Theorem \ref{Thm:2}, we need the following basic result. \begin{Lemma} \label{lemma:Hyperbolic_Pull_Back} Let $A\subset \partial \mathbf{D}$ be arbitrary. Then the following holds: \begin{enumerate} \item \label{item:equiv-index} $i(A) = \min\big\{k\in \{1,\dots,d\}:\overline{\zeta_L(A)}\cap \Lambda^k\neq \emptyset \big\}$ for each $L<L(O)$; \item \label{item:equiv-Lambda-regular} if $A$ is Borel, then the following statements are equivalent: \begin{enumerate} \item $A$ is $N$-regular, \item $\mathcal{H}^{i(A)-1}\{\partial_L(\zeta_L(A))\cap \Lambda^{i(A)}\}=0$ for some $L<L(O)$, \item $\mathcal{H}^{i(A)-1}\{\partial_L(\zeta_L(A))\cap \Lambda^{i(A)}\}=0$ for all $L<L(O)$. \end{enumerate} \end{enumerate} \end{Lemma} \begin{proof} First, $\zeta_L$ is a bi-Lipschitz homeomorphism, since it is a composition of a diffeomorphism $f$ and the Poincar\'e map $\psi_L^{-1}$ constructed in~\eqref{eq:psi_L} from smooth flows transversal to locally smooth sections.
Secondly, due to \eqref{eq:def_M^k}, the definition $N^k=M^k\cap \partial \mathbf{D}$, and the invariance of $\Lambda^k$ under the linear flow $\bar S$, one can see that \begin{align}\label{eq:zeta_L_map} \zeta_L(\overline{A}\cap N^k)=\overline{\zeta_L(A)}\cap \Lambda^k \quad\text{and}\quad \zeta_L(\partial_{\partial \mathbf{D}}A\cap N^k)= \partial_L(\zeta_L(A))\cap \Lambda^k, \end{align} which implies both parts \eqref{item:equiv-index} and \eqref{item:equiv-Lambda-regular} straightforwardly. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm:2}] We need to prove that \begin{equation} \nu_A^\epsilon (B)\to \frac{\mu^{i}_L(B \cap A)}{\mu^{i}_L( A)},\quad \epsilon\to 0, \label{eq:conv-on-continuity-sets} \end{equation} for every continuity set $B$ of the measure $\mu_L^i(\,\cdot\, \cap A)$ or, equivalently, by the definition~\eqref{eq:def_mu_L}, of $\mathcal{H}^{i-1}\big(\zeta_{L}(\,\cdot\, \cap A)\cap\mathbf{F}^{i}_{L\pm}\cap \Lambda^{i} \big)$, which is equal to $\mathcal{H}^{i-1}\big(\zeta_{L}(\,\cdot\, \cap A)\cap \Lambda^{i} \big)$ due to $L<L_A$. Using the inclusion $\partial_\mathbf{D}(B\cap A)\subset (\partial_\mathbf{D} B \cap \overline A)\cup (\overline B\cap \partial_\mathbf{D} A)$, the $N$-regularity of~$A$ (see~\eqref{eq:regularity-def}) and \eqref{item:equiv-Lambda-regular} of Lemma \ref{lemma:Hyperbolic_Pull_Back}, we conclude that the continuity property of~$B$ implies that of $B\cap A$. Combining this with the fact that $\nu_A^\epsilon(B\cap A^c)=0$ for all~$\epsilon$, we obtain that it is sufficient to check~\eqref{eq:conv-on-continuity-sets} for Borel subsets $B$ of $A$ with the continuity property. For such a set $B$, either $i(B)=i(A)$ or $i(B)>i(A)$. In the first case, writing \[ \partial_\mathbf{D} B = \partial_\mathbf{D}(B\cap A)\subset (\partial_\mathbf{D} B\cap \overline A) \cup (\overline B \cap \partial_\mathbf{D} A), \] using the continuity of $B$, part~\eqref{item:equiv-Lambda-regular} of Lemma~\ref{lemma:Hyperbolic_Pull_Back}, and the $N$-regularity of $A$, we conclude that~$B$ is also $N$-regular, so \eqref{eq:conv-on-continuity-sets} follows from Theorem~\ref{thm:main}. In the second case, part~\eqref{item:equiv-index} of Lemma \ref{lemma:Hyperbolic_Pull_Back} implies $\overline{\zeta_L(B)}\cap \Lambda^{i} =\emptyset$. Therefore, $r=\mathrm{dist}(\zeta_L(B),\Lambda^i)>0$, where \begin{equation} \label{eq:dist-def} \mathrm{dist}(C,D)=\inf\{|x-y|:\ x\in C,\ y\in D\}\wedge 1,\quad C,D\subset \mathbb{R}^d. \end{equation} Since $\mathcal{H}^{i-1}\big(\zeta_{L}(B\cap A)\cap\mathbf{F}^{i}_{L\pm}\cap \Lambda^{i} \big)=0$, it suffices to prove $\nu^\epsilon_A(B)\to0$ to ensure~\eqref{eq:conv-on-continuity-sets}. Let us define $B_L^i(r)=\{x\in\mathcal{B}_L: \mathrm{dist}(\{x\},\Lambda^i)\ge r\}$. Since $\zeta_L(B)\subset B_L^i(r)$, and $\zeta^{-1}_L(B_L^i(r))$ is an $N$-regular set of index $i+1$ due to parts \eqref{item:equiv-index} and \eqref{item:equiv-Lambda-regular} of Lemma \ref{lemma:Hyperbolic_Pull_Back}, we can apply Theorem~\ref{thm:main} to $\zeta^{-1}_L(B_L^i(r))$ and conclude that $\mathbb{P}\{X_\tau \in B\}=o(\epsilon^{\rho_{i(A)}})=o(\mathbb{P}\{X_\tau \in A\})$, so~\eqref{eq:conv-on-continuity-sets} holds in this case as well. The proof is complete. \end{proof} \section{Exit from a box} Recall the definitions of $\mathcal{B}_L=[-L,L]^d$ and $\mathbf{F}^i_{L\pm}$ in Section \ref{section:setting}.
Let \begin{align*}\begin{split} \mathbf{F}^i_{L\pm,\delta} & = \{x \in \mathbf{F}^i_{L\pm}: |x^j| \leq L - \delta \text{ for $j \neq i$} \}, \\ \mathbf{F}^i_{L,\delta} &=\mathbf{F}^i_{L+,\delta}\cup \mathbf{F}^i_{L-,\delta}. \end{split} \end{align*} Set \begin{align}\label{eq:def_tau_L} \tau_L = \inf \{t>0: X_t \notin f^{-1}(\mathcal{B}_L) \}, \end{align} where $f$ is the linearizing conjugacy. The main result of this section gives the exit probability asymptotics for sets whose images under $f$ are rectangles: \begin{Prop} \label{Prop:Box_Case} There exists $L_0 >0$ such that for all positive $L\leq L_0$, we have \begin{align*}\lim_{\epsilon\to 0}\epsilon^{-\rho_i}\mathbb{P}\{X_{\tau_L}\in f^{-1}(A )\} = & L^{-\sum_{j<i}\frac{\lambda_j}{\lambda_i}}\sum_{\bullet \in\{ +,-\}}\mathbb{E}\chi^i_\bullet(\xi_0) \mathcal{H}^{i-1}(A \cap \mathbf{F}^i_{L\bullet}\cap \Lambda^i ) \end{align*} for all $i\in\{1,2,\dots,d\}$, and for all sets $A$ with the following properties: \begin{itemize} \item $A$ is a product of intervals which can be open, closed, or half-open. \item $\overline{A}=[a^1,b^1]\times \dots \times [a^{i-1},b^{i-1}]\times \{\pm L\}\times[a^{i+1},b^{i+1}]\times\dots\times[a^d,b^d]$, where $[a^j,b^j] \subset (-L,L)$ for all $j\neq i$ and $a^j,b^j\ne 0$ for $j>i$. \end{itemize} \end{Prop} \subsection{Derivation of Proposition~\ref{Prop:Box_Case} from auxiliary results} From now on, we use the standard summation convention over matching upper and lower indices. Let $Y_t = f(X_t)$, where $f$ is the linearizing conjugacy. Using It\^o's formula, we obtain \begin{align} \label{eq:Y_SDE_before_Duhamel} \begin{split} dY^i_t & = \mathbf{a}^i_j Y^j_tdt + \epsilon \partial_k f^i(f^{-1}(Y_t))\sigma^k_j(f^{-1}(Y_t))dW^j_t\\ &+ \frac{\epsilon^2}{2}\sum_{j,k=1}^d\partial_{j,k}^2f^i(f^{-1}(Y_t))\langle \sigma^j(f^{-1}(Y_t)),\sigma^k(f^{-1}(Y_t)) \rangle dt \\ & = \lambda^i Y^i_t dt + \epsilon F^i_j(Y_t) dW^j_t + \epsilon^2G^i(Y_t)dt, \end{split} \end{align} where \begin{itemize} \item $\lambda^i = \lambda_i$ to avoid summation in $i$; \item $F$ and $G$ are $C^3$ (since $f$ is $C^5$ and $\sigma$ is $C^3$); \item since $f(0)=0$ and $Df(0)=I$, we have $F(0)= \sigma(0)$. \end{itemize} Since $F(0)=\sigma(0)$ is $d\times n$ with full rank and $F$ is continuous, we can find $L_0>0$ small enough that there is $c_0>0$ such that $\min_{|u|=1,u\in\mathbb{R}^d}|u^\intercal F(x)|^2\geq c_0$ for all $x\in [-L_0,L_0]^d$. We shrink $L_0$ further, if necessary, to ensure $L_0\leq L(O)$ as in \eqref{eq:def_L(O)}. Since we will only care about exiting from a subset of $[-L_0,L_0]^d$, we modify $F, G$ outside $[-L_0,L_0]^d$ so that \begin{align}\label{eq:modified_F} \begin{split} \min_{|u|=1,u\in\mathbb{R}^d}|u^\intercal F(x)|^2\geq c_0, \text{ for all }x\in \mathbb{R}^d; \\ F,G \text{ and their derivatives are bounded}. \end{split} \end{align} With this $L_0$ chosen, we will consider, for the rest of this section, the following representation obtained by applying Duhamel's principle to (\ref{eq:Y_SDE_before_Duhamel}) and setting $Y_0 = \epsilon y$: \begin{align}\label{eq:Y_after_Duhamel} \begin{split} Y^j_t&=\epsilon e^{\lambda_j t} y^j+\epsilon e^{\lambda_j t}\Big(\int_0^t e^{-\lambda_j s}F^j_l(Y_s)dW^l_s+\epsilon\int^t_0e^{-\lambda_j s}G^j(Y_s)ds \Big)\\ & = \epsilon e^{\lambda_j t}(y^j+ M^j_t + \epsilon V^j_t)=\epsilon e^{\lambda_j t}(y^j+ U^j_t ), \end{split} \end{align} where $F,G$ are modified to ensure \eqref{eq:modified_F}.
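For the reader's convenience (this verification is ours and is not spelled out in the text), the representation \eqref{eq:Y_after_Duhamel} follows by applying It\^o's formula to $e^{-\lambda_j t}Y^j_t$, which removes the linear drift:
\begin{align*}
d\big(e^{-\lambda_j t}Y^j_t\big)=e^{-\lambda_j t}\big(dY^j_t-\lambda_j Y^j_t\,dt\big)
=\epsilon e^{-\lambda_j t}F^j_l(Y_t)\,dW^l_t+\epsilon^2 e^{-\lambda_j t}G^j(Y_t)\,dt,
\end{align*}
so integrating over $[0,t]$ with $Y^j_0=\epsilon y^j$ and multiplying by $e^{\lambda_j t}$ gives \eqref{eq:Y_after_Duhamel}, with
\[
M^j_t=\int_0^t e^{-\lambda_j s}F^j_l(Y_s)\,dW^l_s,\qquad V^j_t=\int_0^t e^{-\lambda_j s}G^j(Y_s)\,ds.
\]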
We emphasize that $M_t$, $V_t$ and $U_t$ all depend on $y$ and $\epsilon$. We define $\mathbb{P}^{\epsilon y}= \mathbb{P}\{\,\cdot\, |Y_0=\epsilon y\}$. \smallskip Let $C_f$ be the Lipschitz constant of $f$ and $c_f=C_f^{-1}$. Since $f(0)=0$, we have $|\epsilon^{-1}f(\epsilon x)|\leq C_f|x|$ for all $x$ and $\epsilon$. In view of \eqref{eq:xi_exponential_tail}, we choose $\kappa>0$ large enough so that for \begin{align}\label{eq:def_K(eps)} K(\epsilon)= (\log\epsilon^{-1})^{\kappa} \end{align} we have \begin{align}\label{eq:xi>K(eps)_is_eps^rho_d} \mathbb{P}\{|\xi_\epsilon|>c_fK(\epsilon)\}\leq \epsilon^{\rho_d+\delta} \text{ for all }\epsilon \in (0,1]\text{, for some }\delta >0. \end{align} By our definition of $c_f$, we have \begin{align*} \text{if }|x|\leq c_f K(\epsilon), \quad\text{ then }|\epsilon^{-1}f(\epsilon x)|\leq K(\epsilon). \end{align*} \begin{Rem}\label{remark:eta_and_kappa} Later, when needed, $\kappa$ in \eqref{eq:def_K(eps)} will be adjusted to be even larger. This will not affect our results. \end{Rem} According to~\eqref{eq:def_tau_L}, $\tau_L = \inf\{t>0: Y_t\notin \mathcal{B}_L \}$. To prove Proposition \ref{Prop:Box_Case}, we first obtain asymptotics for $Y_t$ exiting rectangular sets, uniformly in $Y_0 = \epsilon y$ with $|y|\leq K(\epsilon)$. Recall that $\rho_i$ is given in \eqref{eq:rho_i}. \begin{Prop}\label{prop:Rectangle_set_on_box} Consider $Y_t$ defined by \eqref{eq:Y_after_Duhamel}. If $L< L_0$, $i=1,\dots,d$, and $A$ is a rectangle described in Proposition \ref{Prop:Box_Case}, then \begin{align}\label{eq:lim_sup_K(eps)} \lim_{\epsilon\to 0}\sup_{|y|\leq K(\epsilon)}\bigg|&\epsilon^{-\rho_i}\mathbb{P}^{\epsilon y}\{Y_{\tau_L}\in A\}- \chi^i_\pm\big(\epsilon^{-1}f^{-1}(\epsilon y)\big) c_A \bigg|= 0, \end{align} where \begin{align}\label{eq:def_c_A} c_A=L^{-\sum_{j<i}\frac{\lambda_j}{\lambda_i}}\prod_{j<i}(b^j-a^j)\prod_{j>i}\Ind{0\in(a^j,b^j)}. \end{align} \end{Prop} Here and throughout the paper, the product over an empty set is understood to equal~$1$. \begin{proof}[Derivation of Proposition \ref{Prop:Box_Case} from Proposition~\ref{prop:Rectangle_set_on_box}] Consider \eqref{eq:SDE_X} with $X_0=\epsilon \xi_\epsilon$ as described in \eqref{eq:xi_exponential_tail} and observe that, by \eqref{eq:xi>K(eps)_is_eps^rho_d} and the above proposition, \begin{align} \label{eq:provingProp:Box_Case-1} \epsilon^{-\rho_i}\mathbb{P}\{X_{\tau_L} \in f^{-1}(A)\} &= \mathbb{E}\, \epsilon^{-\rho_i}\mathbb{P}^{f(\epsilon\xi_\epsilon)}\{Y_{\tau_L} \in A\}\Ind{|\xi_\epsilon|\leq c_fK(\epsilon)} + o(1)\\ \notag & = c_A\mathbb{E}\, \chi^i_\pm(\xi_\epsilon)\Ind{|\xi_\epsilon|\leq c_fK(\epsilon)}+o(1). \end{align} There is $C>0$ such that \begin{align}\label{eq:polynomial_bound_of_chi} |\chi^i_\pm(y)|\leq C(1+|y|^{\sum_{j<i}\frac{\lambda_j}{\lambda_i}}),\quad y\in\mathbb{R}^d. \end{align} Due to the fast decay of the tail of $\xi_\epsilon$ imposed by \eqref{eq:xi_exponential_tail}, all positive moments of $|\chi^i_\pm(\xi_\epsilon)|$ are bounded uniformly in $\epsilon$. Therefore, due to \eqref{eq:xi>K(eps)_is_eps^rho_d} and H\"older's inequality, we have $\mathbb{E}\chi^i_\pm(\xi_\epsilon)\Ind{|\xi_\epsilon|>c_fK(\epsilon)} = o(1)$, which implies \begin{align} \label{eq:provingProp:Box_Case-2} c_A\mathbb{E}\, \chi^i_\pm(\xi_\epsilon)\Ind{|\xi_\epsilon|\leq c_fK(\epsilon)}= c_A\mathbb{E}\, \chi^i_\pm(\xi_\epsilon)+o(1).
\end{align} This, along with the uniform tail bound on $\xi_\epsilon$ in~\eqref{eq:xi_exponential_tail}, the polynomial bound on $\chi^i_{\pm}$ in \eqref{eq:polynomial_bound_of_chi}, and the continuity of $\chi^i_\pm$, implies \begin{align} \label{eq:provingProp:Box_Case-3} \lim_{\epsilon\to 0}c_A\mathbb{E}\, \chi^i_\pm(\xi_\epsilon) = c_A\mathbb{E}\chi^i_\pm(\xi_0). \end{align} Combining~\eqref{eq:provingProp:Box_Case-1}, \eqref{eq:provingProp:Box_Case-2}, and \eqref{eq:provingProp:Box_Case-3}, we obtain \begin{align*} \lim_{\epsilon\to 0} \epsilon^{-\rho_i}\mathbb{P}\{X_{\tau_L} \in f^{-1}(A)\} = L^{-\sum_{j<i}\frac{\lambda_j}{\lambda_i}}\mathbb{E}\chi^i_\pm(\xi_0)\mathcal{H}^{i-1}\{A\cap \Lambda^i\}, \end{align*} completing the proof. \end{proof} \subsection{Proof of Proposition \ref{prop:Rectangle_set_on_box}} Let us fix any $L<L_0$ and $i\in\{1,2,\dots,d\}$. First, we remark that it suffices to consider $A$ satisfying \begin{align}\label{eq:rectangle_main_type} 0\in(a^j,\ b^j),\quad \forall j >i. \end{align} In fact, if $A$ does not satisfy \eqref{eq:rectangle_main_type}, then we can find two rectangles $A'$ and $A''$ such that \begin{enumerate}[(i)] \item $\pi^{\leq i}(A)=\pi^{\leq i}(A')=\pi^{\leq i}(A'')$, where $\pi^{\leq i}$ is the projection onto the first $i$ coordinates; \item $A'\subset A''$ and $A\subset A''\setminus A'$; \item $A'$ and $A''$ satisfy \eqref{eq:rectangle_main_type}. \end{enumerate} By \eqref{eq:def_c_A}, we have $c_A=0$ since $A$ does not satisfy \eqref{eq:rectangle_main_type}, and $c_{A'}=c_{A''}$ due to (i). These facts together with (ii) imply (we write $\eta_\pm =\chi^i_\pm(\epsilon^{-1}f^{-1}(\epsilon y))$): \begin{align*} &\big|\epsilon^{-\rho_i}\mathbb{P}^{\epsilon y}\{Y_{\tau_L}\in A\}-\eta_\pm c_A\big|=\big|\epsilon^{-\rho_i}\mathbb{P}^{\epsilon y}\{Y_{\tau_L}\in A\}\big|\\ &\leq \big|\epsilon^{-\rho_i}\mathbb{P}^{\epsilon y}\{Y_{\tau_L}\in A'\}-\eta_\pm c_{A'}\big|+\big|\epsilon^{-\rho_i}\mathbb{P}^{\epsilon y}\{Y_{\tau_L}\in A''\}-\eta_\pm c_{A''}\big|. \end{align*} Finally, (iii) allows us to apply \eqref{eq:lim_sup_K(eps)} to $A'$ and $A''$, and thus \eqref{eq:lim_sup_K(eps)} holds for $A$. To avoid heavy notation, we also assume that $A$ is closed. It can be readily checked that all our arguments are still valid if $A$ is not closed. Recall $\tau_L$ given in \eqref{eq:def_tau_L}. Since $L$ is fixed, for brevity we write $\tau=\tau_L$ for the rest of the section. Here, we only study the case where $A\subset\mathbf{F}^i_{L+}$, which corresponds to $Y^i_\tau=L$. The case where $A\subset\mathbf{F}^i_{L-}$ (corresponding to $Y^i_\tau=-L$) can be considered in the same way. We will need the following two statements. \begin{Lemma} \label{lemma:rough_est} Assume $0\in (a^j,b^j)$ for all $j>i$. Let \begin{align}\label{eq:def_T_0} T_0=T_0(\epsilon)= \frac{1}{\lambda_i}\log\frac{L}{\epsilon(\log\epsilon^{-1})^{\kappa+1}}.
\end{align}
There are $\gamma_j$, $j=1,\dots,d$, satisfying
\begin{align} \label{eq:gamma_j_condition}
0\vee \Big(\fl-1\Big)<{\gamma_j} <\fl, \quad j=1,2,\dots,d,
\end{align}
such that
$$ \mathbb{P}b{y+U_{T_0}\in B_-} - o(\epsilon^{\rho_i}) \leq \mathbb{P}b{Y^j_\tau\in [a^j,b^j],\forall j \neq i; Y^i_\tau = L} \leq \mathbb{P}b{y+U_{T_0}\in B_+} + o(\epsilon^{\rho_i})$$
holds uniformly in $|y| \leq K(\epsilon)$, where
\begin{align} \label{eq:defOfB_pm}
\begin{split}
B_\pm &=\cup_{x^i \in I_{\pm}}\big(B_{\pm,<i}^{(x^i)} \times \{x^i\} \times B^{(x^i)}_{\pm,>i}\big) \\&= \cup_{x^i \in I_{\pm}}\big((J_{\pm,1}^{(x^i)}\times\dots\times J_{\pm,i-1}^{(x^i)})\times \{x^i\}\times (J_{\pm,i+1}^{(x^i)}\times \dots \times J_{\pm,d}^{(x^i)})\big)
\end{split}
\end{align}
with $I_\pm = \big(\mp\epsilon^{{\gamma_i}}, ( \log\epsilon^{-1})^{\kappa + 1}\pm\epsilon^{{\gamma_i}}\big]$, and for $j\neq i$
\begin{align*}
J_{\pm,j}^{(x^i)} &= \Big[a^jL^{-\fl}\epsilon^{\fl - 1}(|x^i|\pm \epsilon^{{\gamma_i}})^\fl \mp \epsilon^{{\gamma_j}},b^jL^{-\fl}\epsilon^{\fl - 1}(|x^i|\pm\epsilon^{{\gamma_i}})^\fl\pm\epsilon^{{\gamma_j}}\Big].
\end{align*}
\end{Lemma}
Note that due to~\eqref{eq:gamma_j_condition}, for small $\epsilon>0$, the terms $\epsilon^{\gamma_j}$ are small compared to the leading order terms in the definition of $J_{\pm,j}^{(x^i)}$.
\begin{Lemma}\label{lemma:iteration}
Let $T_0$ be given in \eqref{eq:def_T_0} and $\mathcal{Z}$ be a centered Gaussian vector with covariance matrix $\mathcal{C}$ given in \eqref{eq:def_C_0_Czero}. Then for each $\upsilon \in(0,1)$, there is $\delta>0$ such that
\begin{align*}
\sup_{|y|\leq \epsilon^{\upsilon-1}} \big|\mathbb{P}b{y+U_{T_0}\in B_\pm}-\mathbb{P}b{y+\mathcal{Z}\in B_\pm}\big| = o(\epsilon^{\rho_i+\delta}).
\end{align*}
\end{Lemma}
Let us define
\begin{align}
h_\epsilon(y)&= \epsilon^{-\rho_i}\mathbb{P}b{y+\mathcal{Z}\in B_\pm },\label{eq:def_h_eps_initial} \\
h_0(y) & =L^{-\sum_{j<i}\fl}\chi^i_\pm(y)\prod_{j<i}(b^j-a^j) \label{eq:def_h_0_initial}.
\end{align}
Note the dependence on $\pm$, which is suppressed to avoid heavy notation. Proposition \ref{prop:Rectangle_set_on_box} follows from Lemmas \ref{lemma:rough_est}, \ref{lemma:iteration} and the following estimate:
\begin{align}\label{eq:est_h_eps(y) - h_0(eps^-1_f^-1(eps y ))}
\sup_{|y|\leq K(\epsilon)}\big|h_\epsilon(y) - h_0(\epsilon^{-1}f^{-1}(\epsilon y ))\big| = o(1).
\end{align}
Our plan is to derive \eqref{eq:est_h_eps(y) - h_0(eps^-1_f^-1(eps y ))} in the remainder of this subsection and then prove Lemmas~\ref{lemma:rough_est} and \ref{lemma:iteration} in Subsection~\ref{sec:proofs-of-auxiliary}.
\subsubsection{Proof of \eqref{eq:est_h_eps(y) - h_0(eps^-1_f^-1(eps y ))}}
We split \eqref{eq:est_h_eps(y) - h_0(eps^-1_f^-1(eps y ))} into estimating $|h_\epsilon(y) - h_0(y)|$ and $|h_0(y) - h_0(\epsilon^{-1}f^{-1}(\epsilon y ))|$ separately. The techniques involved are elementary but the proof is tedious. We proceed in steps.
\smallskip
Step 1. We express $h_\epsilon$ and $h_0$ explicitly in the form of Gaussian integrals over certain sets. For each $x\in \mathbb{R}^d$, let
\begin{gather}
x^{<i} = (x^1,...,x^{i-1}), \quad x^{\geq i} = (x^{i},...,x^d), \quad x^{>i} = (x^{i+1},...,x^d), \quad \hat{x}=(x^{<i},x^{>i});\nonumber\\
\hat{B}^{(x^i)}_\pm = B_{\pm,<i}^{(x^i)}\times B_{\pm,>i}^{(x^i)}\label{eq:def_hatB}
\end{gather}
where $B_{\pm,<i}^{(x^i)}$ and $B_{\pm,>i}^{(x^i)}$ are given in \eqref{eq:defOfB_pm}.
When $y$ is fixed as in $Y_0=\epsilon y$, we write $\tilde{x}=(-y^{<i},x^{\geq i})\in \mathbb{R}^d$ for each $x\in\mathbb{R}^d$. Now, let us introduce
\begin{align}
g_\epsilon(x^i,y) &=\frac{\epsilon^{-\rho_i}}{\sqrt{(2\pi)^d\det \mathcal{C}}}\int_{\hat{B}^{(x^i+y^i)}_\pm -\hat{y}}e^{-\frac{1}{2}x^\intercal\mathcal{C}^{-1} x}d\hat{x},\label{eq:def_g_eps}\\
g_0(x^i,y)&=\frac{\prod_{j<i}(b^j-a^j)L^{-\fl}|x^i+y^i|^\fl}{\sqrt{(2\pi)^d\det\mathcal{C}}}\int_{ \mathbb{R}^{d-i}} e^{-\frac{1}{2}\tilde{x}^\intercal\mathcal{C}^{-1}_0 \tilde{x}}dx^{>i}.\label{eq:def_g_0}
\end{align}
Recall the definition of $I_\pm$ below \eqref{eq:defOfB_pm}. Additionally, we set
\begin{align}\label{def_interval_II}
\mathcal{I}_+(y^i)=[-y^i,\infty) \quad \text{and}\quad \mathcal{I}_-(y^i)=(-\infty,-y^i].
\end{align}
Using the definitions \eqref{eq:def_h_eps_initial}--\eqref{eq:def_h_0_initial}, we can see
\begin{align*}
h_\epsilon(y)&= \int_{I_{\pm}-y^i} g_\epsilon(x^i,y)dx^i,\\
h_0(y) &= \int_{\mathcal{I}_\pm(y^i)} g_0(x^i,y)dx^i.
\end{align*}
\smallskip
Step 2. We record some useful estimates, which will be proved later. For simplicity of notation, we write
\begin{align}\label{eq:def_z_eps}
z_\epsilon(y)=\epsilon^{-1}f^{-1}(\epsilon y).
\end{align}
For convenience, we set $z_0(y)=y$. The following estimates hold for all $\epsilon\in[0,1]$ and $x^i\in\mathbb{R}$:
\begin{align}
& \sup_{|y|\leq K(\epsilon)}|g_\epsilon(x^i,y)|\leq C\big(1+K(\epsilon)\big)^p e^{-c|x^i|^2}, \label{eq:boundedness_of_g_eps}\\
&\sup_{|y|\leq K(\epsilon)}|g_\epsilon(x^i,y)-g_0(x^i,y)|\leq C \epsilon^{q} e^{-c|x^i|^2}, \label{eq:est_g_eps_-_g_0}\\
&\sup_{|y|\leq K(\epsilon)}|y-z_\epsilon(y)|\leq C\epsilon^q,\label{eq:y^i-z^i_eps_est}\\
&\sup_{|y|\leq K(\epsilon)}|g_0(x^i,y)-g_0(x^i,z_\epsilon(y))| \leq C \epsilon^{q} e^{-c|x^i|^2}, \label{eq:prep_4_h_0_est}
\end{align}
for some $C,c,p,q>0$.
Step 3. We estimate $|h_\epsilon(y)-h_0(y)|$ for $|y|\leq K(\epsilon)$. We shall only treat the case where $\pm$ is $+$ and $\mp$ is $-$. The other case is similar. We start by writing
\begin{align*}
|h_\epsilon(y) - h_0(y)|\leq &\int_{I_+ - y^i}|g_\epsilon(x^i,y)-g_0(x^i,y)|dx^i+ \int_{-\epsilon^{{\gamma_i}}-y^i}^{-y^i}|g_\epsilon(x^i,y)| dx^i \\
&+ \int_{(\log\epsilon^{-1})^{\kappa+1}+\epsilon^{\gamma_i}-y^i}^\infty|g_0(x^i,y)|dx^i.
\end{align*}
Using \eqref{eq:boundedness_of_g_eps} and \eqref{eq:est_g_eps_-_g_0}, we have, for some $q'>0$,
\begin{align}\label{eq:est_h_eps(y) - h_0(y)}
\begin{split}
\sup_{|y|\leq K(\epsilon)}|h_\epsilon(y) - h_0(y)| \leq & \int_\mathbb{R} C \epsilon^{q} e^{-c|x^i|^2}dx^i \quad +C\epsilon^{\gamma_i} (1+K(\epsilon))^p\\
&+\int^\infty_{(\log\epsilon^{-1})^{\kappa+1}+\epsilon^{\gamma_i}-K(\epsilon)}C (1+K(\epsilon))^pe^{-c|x^i|^2}dx^i = o(\epsilon^{q'}).
\end{split}
\end{align}
\smallskip
Step 4. We estimate $|h_0(y)-h_0(z_\epsilon(y))|$ for $|y|\leq K(\epsilon)$. Recall the definition of $\mathcal{I}_\pm(y^i)$ in \eqref{def_interval_II} and note that, due to \eqref{eq:y^i-z^i_eps_est},
\begin{align}\label{eq:symmetric_difference_est}
\big|\mathcal{I}_\pm(y^i)\triangle\mathcal{I}_\pm(z_\epsilon^i(y))\big|\leq |y^i-z^i_\epsilon(y)|\leq C\epsilon^q.
\end{align}
Here $\triangle$ denotes the symmetric difference of sets.
By the formula for $h_0(y)$ in Step~1, we first write
\begin{align*}
&|h_0(y)-h_0(z_\epsilon(y))|\\
&\leq \int_{\mathcal{I}_\pm(z_\epsilon^i(y))}|g_0(x^i,y)-g_0(x^i,z_\epsilon(y))|dx^i+\int_{\mathcal{I}_\pm(y^i)\triangle\mathcal{I}_\pm(z_\epsilon^i(y))}|g_0(x^i,y)|+|g_0(x^i,z_\epsilon(y))| dx^i.
\end{align*}
We can bound $|g_0(x^i,z_\epsilon(y))|$ by using \eqref{eq:boundedness_of_g_eps} and \eqref{eq:prep_4_h_0_est}. Apply this, \eqref{eq:prep_4_h_0_est} and \eqref{eq:symmetric_difference_est} to see
\begin{align}\label{eq:est_h_0(y) - h_0(z)}
\sup_{|y|\leq K(\epsilon)}|h_0(y)-h_0(z_\epsilon(y))|\leq \int_\mathbb{R} C\epsilon^{q}e^{-c|x^i|^2}dx^i + C\epsilon^q(1+K(\epsilon))^p=o(\epsilon^{q''})
\end{align}
for some $q''>0$.
\smallskip
In conclusion, \eqref{eq:est_h_eps(y) - h_0(eps^-1_f^-1(eps y ))} follows from \eqref{eq:est_h_eps(y) - h_0(y)} and \eqref{eq:est_h_0(y) - h_0(z)}. It remains to prove the estimates listed in Step~2. We prove them in the following order: \eqref{eq:y^i-z^i_eps_est}, \eqref{eq:boundedness_of_g_eps}, \eqref{eq:prep_4_h_0_est}, \eqref{eq:est_g_eps_-_g_0}.
\begin{proof}[Proof of \eqref{eq:y^i-z^i_eps_est}]
Recall the definition of $z_\epsilon(y)$ in \eqref{eq:def_z_eps} and that the local diffeomorphism~$f$ satisfies $f(0)=0$ and $Df(0)=I$. Since $|y|\leq K(\epsilon)$, if $\epsilon$ is small, then $|\epsilon y|$ is uniformly close to $0$. By expanding $f^{-1}$ at $0$, we can see that there are $C,q>0$ such that
\begin{align*}
|y-z_\epsilon(y)|=|y - \epsilon^{-1}(f^{-1}(\epsilon y))|\leq C\epsilon|y|^2\leq C\epsilon^q,\quad \forall|y|\leq K(\epsilon).
\end{align*}
This gives \eqref{eq:y^i-z^i_eps_est}.
\end{proof}
\begin{proof}[Proof of \eqref{eq:boundedness_of_g_eps}]
Since $\sigma(0)$ has full rank, by definition of $\mathcal{C}$ in \eqref{eq:def_C_0_Czero}, there is $c>0$ such that
\begin{align}\label{eq:gaussian_density_bound}
e^{-\frac{1}{2}x^\intercal\mathcal{C}^{-1}x }\leq e^{-c|x|^2},\qquad \forall x\in\mathbb{R}^d.
\end{align}
Hence, there is $C>0$ such that
\begin{align*}
|g_0(x^i, y)|&\leq C |x^i+y^i|^{\sum_{j<i}\fl}e^{-c|x^i|^2}.
\end{align*}
Absorb polynomials of $x^i$ into the exponential to see, for some $C,c,p>0$,
\begin{align*}
\sup_{|y|\leq K(\epsilon)}|g_0(x^i, y)|\leq C\big(1+K(\epsilon)\big)^p e^{-c|x^i|^2}.
\end{align*}
From this and \eqref{eq:est_g_eps_-_g_0}, we obtain \eqref{eq:boundedness_of_g_eps}.
\end{proof}
\begin{proof}[Proof of \eqref{eq:prep_4_h_0_est}]
We simplify the expression \eqref{eq:def_g_0} into
\begin{align*}
g_0(x^i,y)=C|x^i+y^i|^{p_0}\int_{\mathbb{R}^{d-i}}e^{-\frac{1}{2}x^\intercal \mathcal{C}^{-1}x}\big|_{x^{<i}=-y^{<i}}dx^{>i},
\end{align*}
for some $C,p_0>0$. Then, we have
\begin{align}\label{eq:prep_6_h_0_est}
\begin{split}
&|g_0(x^i,y)-g_0(x^i,z_\epsilon(y))|\\
&\quad\leq C\Big||x^i+y^i|^{p_0}-|x^i+z^i_\epsilon(y)|^{p_0}\Big|\int_{\mathbb{R}^{d-i}}e^{-\frac{1}{2}x^\intercal \mathcal{C}^{-1}x}\big|_{x^{<i}=-z_\epsilon^{<i}(y)}dx^{>i}\\
&\qquad + C|x^i+y^i|^{p_0}\int_{\mathbb{R}^{d-i}}\bigg|e^{-\frac{1}{2}x^\intercal \mathcal{C}^{-1}x}\big|_{x^{<i}=-y^{<i}}-e^{-\frac{1}{2}x^\intercal \mathcal{C}^{-1}x}\big|_{x^{<i}=-z_\epsilon^{<i}(y)}\bigg|dx^{>i}.
\end{split}
\end{align}
Let us estimate the terms on the right of \eqref{eq:prep_6_h_0_est}. Using \eqref{eq:y^i-z^i_eps_est}, we have, for some $C,p,q,q'>0$,
\begin{align}\label{eq:x^i+y^i_est}
\Big| |x^i+y^i|^{p_0}-|x^i+z^i_\epsilon(y)|^{p_0}\Big| \leq C (|x^i|^p+|y|^p)\epsilon^{q'} \leq C\epsilon^q(|x^i|^p+1), \quad\forall|y|\leq K(\epsilon).
\end{align}
By \eqref{eq:gaussian_density_bound}, there are $C,c>0$ such that
\begin{align}\label{eq:g_0_est_prep_2}
\int_{\mathbb{R}^{d-i}}e^{-\frac{1}{2}x^\intercal \mathcal{C}^{-1}x}\big|_{x^{<i}=-y^{<i}}dx^{>i}\leq Ce^{-c|x^i|^2}.
\end{align}
To estimate the integrand of the last integral in \eqref{eq:prep_6_h_0_est}, we need the following observation. Since $\mathcal{C}$ is symmetric and positive definite, there are $C,C',c'>0$ such that, for all $w,z\in\mathbb{R}^d$,
\begin{align}
&\big| e^{-\frac{1}{2}w^\intercal\mathcal{C}^{-1}w}-e^{-\frac{1}{2}z^\intercal\mathcal{C}^{-1}z}\big| \leq C (e^{-c|w|^2}\vee e^{-c|z|^2})|w+z||w-z|\nonumber\\
&\quad \leq C e^{-c|w|^2}(|2w|+|w-z|)|w-z|\Ind{|w|\leq |z|}\nonumber\\
&\qquad\qquad+C e^{-c|z|^2}(|2z|+|w-z|)|w-z|\Ind{|w|> |z|}\nonumber\\
&\quad \leq C'(e^{-c'|w|^2}+e^{-c'|z|^2})(|w-z|+|w-z|^2).\label{eq:difference_gaussian_density}
\end{align}
Here, the first inequality follows from the mean value theorem applied to $t\mapsto e^{-t}$, the bound $|w^\intercal\mathcal{C}^{-1}w-z^\intercal\mathcal{C}^{-1}z|\leq \|\mathcal{C}^{-1}\|\,|w+z|\,|w-z|$, and \eqref{eq:gaussian_density_bound}, with $c$ as in \eqref{eq:gaussian_density_bound}. Using this estimate and \eqref{eq:y^i-z^i_eps_est}, we can obtain
\begin{align}
&\,\Big|e^{-\frac{1}{2}x^\intercal \mathcal{C}^{-1}x}\big|_{x^{<i}=-y^{<i}}-e^{-\frac{1}{2}x^\intercal \mathcal{C}^{-1}x}\big|_{x^{<i}=-z_\epsilon^{<i}(y)}\Big|\nonumber\\
\leq&\, e^{-c|x^{\geq i}|^2}\big(|y^{<i}-z_\epsilon^{<i}(y)|+|y^{<i}-z_\epsilon^{<i}(y)|^2\big)\nonumber\\
\leq&\, C\epsilon^qe^{-c|x^{\geq i}|^2},\qquad \forall |y|\leq K(\epsilon).\label{eq:g_0_est_prep_3}
\end{align}
Inserting \eqref{eq:x^i+y^i_est}, \eqref{eq:g_0_est_prep_2}, and~\eqref{eq:g_0_est_prep_3} into the right-hand side of \eqref{eq:prep_6_h_0_est}, we see
\begin{align*}
&|g_0(x^i,y)-g_0(x^i,z_\epsilon(y))|\\
&\leq C\epsilon^q(|x^i|^p+1)e^{-c|x^i|^2}+C(|x^i|^{p_0}+|K(\epsilon)|^{p_0})\epsilon^qe^{-c|x^i|^2}\\
&\leq C\epsilon^{q'}e^{-c'|x^i|^2}, \qquad \forall|y|\leq K(\epsilon),
\end{align*}
for some $c',q'>0$. This completes the proof.
\end{proof}
\begin{proof}[Proof of \eqref{eq:est_g_eps_-_g_0}]
Again, the techniques involved are elementary while the proof is tedious. Recall $g_\epsilon$ and $g_0$ in \eqref{eq:def_g_eps}--\eqref{eq:def_g_0}, and the notation in \eqref{eq:def_hatB}. To estimate the difference between $g_\epsilon$ and $g_0$, we introduce
\begin{align*}\begin{split}
&\mathtt{I}= \frac{\epsilon^{-\rho_i}}{\sqrt{(2\pi)^d\det \mathcal{C}}}\int_{\hat{B}_\pm^{(x^i+y^i )}-\hat{y} }e^{-\frac{1}{2}\tilde{x}^\intercal \mathcal{C}^{-1}\tilde{x}}d\hat{x},\\
&\mathtt{II}=\frac{\prod_{j<i}(b^j-a^j)L^{-\fl}|x^i+y^i|^{\fl}}{\sqrt{(2\pi)^d \det \mathcal{C}}}\int_{B_{\pm,>i}^{(x^i+y^i )}-y^{>i}} e^{-\frac{1}{2}\tilde{x}^\intercal \mathcal{C}^{-1}\tilde{x}}dx^{> i}.
\end{split}
\end{align*}
Then, we write
\begin{align}\label{eq:split_difference_between_g_eps-g_0}
\begin{split}
&\quad\quad\big|g_\epsilon(x^i,y)-g_0(x^i,y)\big| \leq \big|g_\epsilon(x^i,y)-\mathtt{I}\big|+\big|\mathtt{I}-\mathtt{II}\big|+\big|\mathtt{II}-g_0(x^i,y)\big|.
\end{split}
\end{align}
We proceed in steps. In each step, we estimate one term on the right of the above display.
Step 1. We estimate $|g_\epsilon(x^i,y)-\mathtt{I}|$ for $|y|\leq K(\epsilon)$. We start by writing
\begin{align*}
|g_\epsilon(x^i,y)-\mathtt{I}| \leq C\epsilon^{-\rho_i}\int_{\hat{B}_\pm^{(x^i+y^i )}-\hat{y} }\big| e^{-\frac{1}{2}x^\intercal\mathcal{C}^{-1}x}-e^{-\frac{1}{2}\tilde{x}^\intercal\mathcal{C}^{-1}\tilde{x}}\big| d\hat{x}.
\end{align*}
Let us estimate the integrand. Recall the estimate \eqref{eq:difference_gaussian_density}.
Using $\fl-1>0$ for $j<i$, $|y|\leq K(\epsilon)$, and $e^{-c|x|^2}$ to absorb powers of $|x^i|$, we obtain that, for all $x,y$ satisfying $\hat{x}\in \hat{B}_\pm^{(x^i+y^i )}-\hat{y}$ and $|y|\leq K(\epsilon)$,
\begin{align*}\begin{split}
&\big| e^{-\frac{1}{2}x^\intercal\mathcal{C}^{-1}x}-e^{-\frac{1}{2}\tilde{x}^\intercal\mathcal{C}^{-1}\tilde{x}}\big| \leq Ce^{-c_1|x^{\geq i}|^2}\big(|x^{<i}+y^{<i}|+|x^{<i}+y^{<i}|^2\big)\\
&\leq Ce^{-c_1|x^{\geq i}|^2}\sum_{j<i}(|v^j|+|v^j|^2)\Big|_{v^j=\big|\epsilon^{\fl -1}(|x^i+y^i |+ \epsilon^{\gamma_i})^\fl+ \epsilon^{\gamma_j} \big|}\leq C \epsilon^{q_1}e^{-c_2|x^{\geq i}|^2},
\end{split}
\end{align*}
for some $C, c_1,c_2, q_1>0$. This, along with the definitions of $\hat{B}^{(x^i+y^i)}_{\pm}$ in \eqref{eq:def_hatB}, $B^{(x^i+y^i)}_{\pm,<i}$ in \eqref{eq:defOfB_pm} and $\rho_i$ in \eqref{eq:rho_i}, implies
\begin{align}\label{eq:est_1st_difference}
\begin{split}
&|g_\epsilon(x^i,y)-\mathtt{I}| \leq C\epsilon^{-\rho_i}\int_{\hat{B}_\pm^{(x^i+y^i )}-\hat{y} } \epsilon^{q_1}e^{-c_2|x^{\geq i}|^2}d\hat{x}\\
&= C\epsilon^{q_1-\rho_i}e^{-c_2|x^i|^2}\bigg(\int_{B_{\pm,<i}^{(x^i+y^i )}-y^{<i} }dx^{<i} \bigg)\bigg(\int_{B_{\pm,>i}^{(x^i+y^i )}-y^{>i} }e^{-c_2|x^{> i}|^2} dx^{>i}\bigg)\\
&\leq C\epsilon^{q_1-\rho_i}\big|B_{\pm,<i}^{(x^i+y^i)}\big|\, e^{-c_2|x^i|^2}\\
&\leq C\epsilon^{q_1-\rho_i}\epsilon^{\sum_{j<i}\fl-1}(|x^i|+1)^{\sum_{j<i}\fl}e^{-c_2|x^i|^2}\leq C\epsilon^{q_1}e^{-c_3|x^i|^2}.
\end{split}
\end{align}
Here and henceforth we use $|B|$ to denote the Lebesgue measure of a set $B$.
Step 2. We estimate $|\mathtt{I}-\mathtt{II}|$. First note that, by integrating over the first $i-1$ coordinates in~$\mathtt{I}$ and using the definition of $B^{(x^i+y^i)}_{\pm,<i}$ in \eqref{eq:defOfB_pm}, we have
\begin{align*}
\mathtt{I} = \frac{\prod_{j<i}\big((b^j-a^j)L^{-\fl}(|x^i+y^i |\pm \epsilon^{\gamma_i})^\fl\pm 2\epsilon^{{\gamma_j}-(\fl - 1)}\big)}{\prod_{j<i}(b^j-a^j)L^{-\fl}|x^i+y^i|^\fl}\mathtt{II}.
\end{align*}
Also, $|\mathtt{II}|\leq C \prod_{j<i}|x^i+y^i|^\fl e^{-c|x^i|^2}$ for some $C>0$. Hence, using $|y|\leq K(\epsilon)$ and $e^{-c|x^i|^2}$ to absorb powers of $|x^i|$, we can obtain, for some $C,c, q_2>0$,
\begin{align}\label{eq:est_2nd_difference}
\begin{split}
\big|\mathtt{I}-\mathtt{II}\big|\leq C\epsilon^{ q_2}e^{-c|x^i|^2}.
\end{split}
\end{align}
Step 3. We estimate $|\mathtt{II}-g_0(x^i, y)|$. Note that
\begin{align*}
|\mathtt{II}-g_0(x^i,y)|&\leq C\int_{\mathbb{R}^{d-i}\setminus (B_{\pm,>i}^{(x^i+y^i )}-y^{>i})}|x^i+y^i|^{\sum_{j<i}\fl}e^{-c|x^{\geq i}|^2}dx^{> i}\\
&\leq Ce^{-c|x^i|^2}\sum_{j> i}\int_{\mathbb{R} \setminus (J^{(x^i+y^i )}_{\pm,j}-y^j )} |x^i+y^i|^{\sum_{j<i}\fl}e^{-c|x^j|^2}dx^j.
\end{align*}
We split the integrals after the last inequality into
\begin{align*}
&\int_{\mathbb{R} \setminus (J^{(x^i+y^i )}_{\pm,j}-y^j )} |x^i+y^i|^{\sum_{j<i}\fl}e^{-c|x^j|^2}dx^j \\
&= \int_{b^jL^{-\fl}\epsilon^{\fl-1}(|x^i+y^i |\pm\epsilon^{\gamma_i})^\fl \pm \epsilon^{\gamma_j}-y^j }^\infty |x^i+y^i|^{\sum_{j<i}\fl}e^{-c|x^j|^2}dx^j\\
&+\int^{a^jL^{-\fl}\epsilon^{\fl-1}(|x^i+y^i |\pm\epsilon^{\gamma_i})^\fl \mp \epsilon^{\gamma_j}-y^j }_{-\infty} |x^i+y^i|^{\sum_{j<i}\fl}e^{-c|x^j|^2}dx^j.
\end{align*}
Choosing $q'>0$ sufficiently small, we consider two cases. If $|x^i+y^i|\leq \epsilon^{q'}$, then, for some $q_3>0$, the above display is bounded by $C\epsilon^{q_3}$. For the case where $|x^i+y^i|> \epsilon^{q'}$, let us set $c_j= (|a^j|\wedge|b^j|)L^{-\fl}$ and recall that $\fl -1 <0$ for $j>i$.
For some $q_4,q_5,p>0$, the above display can be bounded by
\begin{align*}
&2\int^\infty_{c_j\epsilon^{\fl -1 }(\epsilon^{q'}-\epsilon^{\gamma_i})^{\fl} - \epsilon^{\gamma_j} - K(\epsilon)}|x^i+y^i|^{\sum_{j<i}\fl}e^{-c|x^j|^2}dx^j\\
&\leq C\epsilon^{q_4}|x^i+y^i|^{\sum_{j<i}\fl}\leq C\epsilon^{q_5}(|x^i|^p+1)\qquad \forall |y|\leq K(\epsilon).
\end{align*}
In deriving the above inequality, we have used $\fl -1 <0$ and chosen $q'<{\gamma_i}$. Combining the above, we have
\begin{align}\label{eq:est_5th_difference_case_1}
|\mathtt{II}-g_0(x^i,y)|&\leq C\big(\epsilon^{q_3}+\epsilon^{q_5}(1+|x^i|^p)\big)e^{-c|x^i|^2}\leq C\epsilon^{q_6}e^{-c'|x^i|^2}.
\end{align}
To conclude, we insert \eqref{eq:est_1st_difference}, \eqref{eq:est_2nd_difference}, and \eqref{eq:est_5th_difference_case_1} into \eqref{eq:split_difference_between_g_eps-g_0}. As a consequence, we obtain that, for some constants $q', C, c'>0$, the following holds for $\epsilon$ sufficiently small:
\begin{align*}
\begin{split}
\sup_{|y|\leq K(\epsilon)}&|g_\epsilon(x^i,y)-g_0(x^i,y)|\leq C \epsilon^{q'} e^{-c'|x^i|^2},
\end{split}
\end{align*}
as desired.
\end{proof}
\subsection{Proofs of Lemmas \ref{lemma:rough_est} and \ref{lemma:iteration}}
\label{sec:proofs-of-auxiliary}
\begin{proof}[Proof of Lemma \ref{lemma:rough_est}]
Let $\tau_j = \inf\{t> 0: |Y^j_t| = L \}$. We recall \eqref{eq:def_tau_L} and the notation $\tau=\tau_L$. Hence, we have $\tau = \min_{j=1,2,...,d}\{\tau_j \}$. First, we show the following.
\begin{Lemma} \label{lemma:drop_tau_i}
If $[a^j,b^j]\subset(-L,L)$ for all $j\neq i$, then, with $\rho_i$ defined in \eqref{eq:rho_i},
\begin{align*}
\sup_{|y|\leq K(\epsilon)}\big|\mathbb{P}b{Y^j_\tau \in [a^j, b^j], \forall j \neq i;Y^i_\tau =L } - \mathbb{P}b{Y^j_{\tau_i} \in [a^j, b^j], \forall j \neq i;Y^i_{\tau_i} =L } \big | = o(\epsilon^{\rho_i}).
\end{align*}
\end{Lemma}
\begin{proof}
Since
\[ \mathbb{P}b{Y^j_\tau \in [a^j, b^j], \forall j \neq i;Y^i_\tau =L } = \mathbb{P}b{Y^j_{\tau_i} \in [a^j, b^j], \forall j \neq i;Y^i_{\tau_i} =L ; \tau ={\tau_i} }, \]
it remains to estimate the right-hand side of
\begin{multline*}
\mathbb{P}b{Y^j_{\tau_i} \in [a^j, b^j], \forall j \neq i;Y^i_{\tau_i} =L} - \mathbb{P}b{Y^j_{\tau_i} \in [a^j, b^j], \forall j \neq i;Y^i_{\tau_i} = L;\tau = {\tau_i} }\\
= \mathbb{P}b{Y^j_{\tau_i} \in [a^j, b^j], \forall j \neq i;Y^i_{\tau_i} = L;{\tau_i} > \tau }.
\end{multline*}
Using the strong Markov property and setting $c_j=|a^j|\vee|b^j|$, we can bound it by
\begin{align*}
& \sum_{j\neq i} \mathbb{P}b{|Y^j_{\tau_i}|\leq c_j, {\tau_i} > {\tau_j}} \\
&\leq \sum_{j\neq i} \EBig{\sum_{l = \pm L}\Ind{Y^j_{\tau_j} = l}\mathbb{P}bx{Y_{\tau_j}}{ |Y^j_{\tau_i}|\leq c_j}} \leq \sum_{j\neq i,\ l = \pm L} \EBig{\Ind{Y^j_{\tau_j} = l}\mathbb{P}bx{Y_{\tau_j}}{ e^{{\lambda_j} {\tau_i}}|l+\epsilon U^j_{{\tau_i}}|\leq c_j}}\\
&\leq \sum_{j\neq i} \EBig{\Ind{Y^j_{\tau_j} = l}\mathbb{P}bx{Y_{\tau_j}}{ L-\epsilon |U^j_{\tau_i}|\leq c_j}} \leq\sum_{j\neq i} \epsilon^p(L-c_j)^{-p}\E{|U^j_{\tau_i}|^p}
\end{align*}
for any $p>0$. Let $p >\rho_i$. By \eqref{eq:modified_F}, there is $C>0$ such that, for all $j$, almost surely,
\begin{align}\label{eq:<M>_and_|V|_bound}
\sup_{t\in[0,\infty)}\langle M^j \rangle_t\le C\quad \text{ and} \quad\sup_{t\in[0,\infty)}|V^j_t| \leq C.
\end{align}
This, along with the BDG inequality, implies that $\E{|U^j_{\tau_i}|^p}$ is bounded uniformly in $\epsilon$, and completes the proof.
\end{proof}
We will approximate $U_{\tau_i}$ by $U_{T_0}$, where $T_0$ is given in \eqref{eq:def_T_0}. By \eqref{eq:Y_after_Duhamel},
\begin{align} \label{eq:TimeGaussianRelation}
L=|Y^i_{\tau_i}|=\epsilon e^{{\lambda_i} {\tau_i}}|y^i+U^i_{\tau_i}|, \quad\text{ or } \quad{\tau_i} = \frac{1}{{\lambda_i}}\log\frac{L}{\epsilon |y^i+U^i_{\tau_i}|}.
\end{align}
Now \eqref{eq:Y_after_Duhamel} and (\ref{eq:TimeGaussianRelation}) give
\begin{align*}
Y^j_{\tau_i} = L^\fl \epsilon^{1-\fl}(y^j+U^j_{\tau_i} )|y^i+U^i_{\tau_i}|^{-\fl},
\end{align*}
which implies
\begin{multline}\label{eq:MainProbability}
\mathbb{P}b{Y^j_{\tau_i}\in [a^j,b^j],\forall j \neq i;\ Y^i_{\tau_i} = L} = \mathbb{P}b{Y^j_{\tau_i}\in [a^j,b^j],\forall j \neq i;\ Y^i_{\tau_i} > 0} \\
= \mathbb{P}bBig{y^j+U^j_{\tau_i} \in L^{-\fl}\epsilon^{\fl - 1}|y^i+U^i_{\tau_i}|^\fl[a^j,b^j],\forall j \neq i;\ y^i+U^i_{\tau_i}>0}.
\end{multline}
Then, we compare $\tau_i$ with $T_0$ by showing that, for an appropriate choice of $\kappa$,
\begin{align}\label{eq:ti>t_0_whp}
\mathbb{P}b{{\tau_i}<T_0} = \mathbb{P}b{|y^i+U^i_{\tau_i}| >(\log\epsilon^{-1})^{\kappa+1}} = o(\epsilon^{\rho_i}).
\end{align}
By \eqref{eq:<M>_and_|V|_bound} and the exponential martingale inequality (see Problem 12.10 in \cite{Bass}), the following holds uniformly in $|y|\leq K(\epsilon)$ for $\epsilon$ sufficiently small:
\begin{align*}
\mathbb{P}b{{\tau_i} <T_0}& = \mathbb{P}b{|y^i+U^i_{\tau_i}| >(\log\epsilon^{-1})^{\kappa+1}; {\tau_i}<T_0} \leq \mathbb{P}b{|y^i+U^i_{ {\tau_i}\wedge T_0}|>(\log\epsilon^{-1})^{\kappa+1}}\\
& \leq \mathbb{P}b{|M^i_{{\tau_i} \wedge T_0}|>( \log \epsilon^{-1})^{\kappa+1}-( \log \epsilon^{-1})^{\kappa}-C\epsilon}\\
&\leq \mathbb{P}b{|M^i_{{\tau_i} \wedge T_0}|>\tfrac{1}{2}( \log \epsilon^{-1})^\kappa}\\
& \leq 2\exp\big(-(8C)^{-1}( \log \epsilon^{-1})^{2\kappa}\big).
\end{align*}
Therefore, it suffices to choose $\kappa$ large enough (see Remark \ref{remark:eta_and_kappa}) to guarantee \eqref{eq:ti>t_0_whp}. So, with high probability, ${\tau_i} \geq T_0$.
Let us choose $\delta$ to satisfy
\begin{align*}
0< \delta < 2\tfrac{\lambda_d}{\lambda_i}= 2\min_{j}\{ \tfl \} < 2.
\end{align*}
Using the boundedness of $F$ and $G$, we can write, for some $C_\delta>0$,
\begin{multline} \label{eq:quad-var--M}
\langle M^j \rangle_{{\tau_i}\vee T_0} - \langle M^j \rangle_{T_0} = \int_{T_0}^{{\tau_i} \vee T_0}e^{-2{\lambda_j} s}|F^j (Y_{s\wedge\tau})|^2 ds\\
\leq Ce^{-2{\lambda_j} T_0} \leq C\epsilon^{2\fl}( \log \epsilon^{-1})^{2\fl(\kappa +1)} \leq C_\delta \epsilon^{2\fl-\delta}
\end{multline}
and
\begin{align} \label{eq:bound-on-V}
| V^j_{{\tau_i}\vee T_0} - V^j_{T_0} | \leq Ce^{-{\lambda_j} T_0}\leq C_\delta \epsilon^{\fl-\frac{1}{2}\delta}.
\end{align}
Then we can choose $\gamma_j>0$, $j\in\{1,2,\dots,d\}$, to satisfy, as anticipated in \eqref{eq:gamma_j_condition},
\begin{align} \label{eq:cond-on-gamma}
0\vee \Big(\fl-1\Big)<{\gamma_j} <\fl - \frac{1}{2}\delta.
\end{align}
By the exponential martingale inequality, estimates \eqref{eq:quad-var--M}, \eqref{eq:bound-on-V}, and the second inequality in~\eqref{eq:cond-on-gamma}, we have
\begin{align}\label{eq:U^j_ti_U^j_t_0_difference_est}
\begin{split}
&\mathbb{P}b{|U^j_{{\tau_i} \vee T_0} - U^j_{T_0}|>\epsilon^{\gamma_j}} \leq \mathbb{P}b{|M^j_{{\tau_i} \vee T_0} - M^j_{T_0}|>\tfrac{1}{2}\epsilon^{\gamma_j}}+ \mathbb{P}b{\epsilon|V^j_{{\tau_i} \vee T_0} - V^j_{T_0}|>\tfrac{1}{2}\epsilon^{\gamma_j}}\\
&\leq 2\exp\big(-\tfrac{1}{2C_\delta}\epsilon^{2{\gamma_j} - 2\fl +\delta}\big)+ \mathbb{P}b{C_\delta \epsilon^{\fl-\frac{1}{2}\delta+1}>\tfrac{1}{2}\epsilon^{\gamma_j}}= o(\epsilon^{\rho_i})\text{, for all }j.
\end{split}
\end{align}
To see the upper bound in Lemma \ref{lemma:rough_est}, observe that
\begin{align*}
&\mathbb{P}b{Y^j_\tau\in [a^j,b^j],\forall j \neq i; Y^i_\tau = L} \leq \mathbb{P}b{Y^j_{\tau_i} \in [a^j, b^j], \forall j \neq i;Y^i_{\tau_i} =L } + o(\epsilon^{\rho_i})\\
&= \mathbb{P}bBig{y^j+U^j_{\tau_i} \in L^{-\fl}\epsilon^{\fl - 1}|y^i+U^i_{\tau_i}|^\fl[a^j,b^j],\forall j \neq i;\ y^i+U^i_{\tau_i}>0}+ o(\epsilon^{\rho_i})\\
&\leq \mathbb{P} \Big\{ y^j+U^j_{{\tau_i}\vee T_0} \in L^{-\fl}\epsilon^{\fl - 1}|y^i+U^i_{{\tau_i}\vee T_0}|^\fl[a^j,b^j],\forall j \neq i;\\
&\hspace{7cm} y^i+U^i_{{\tau_i}\vee T_0}\in\big(0,(\log\epsilon^{-1})^{\kappa+1}\big]\Big\}+ o(\epsilon^{\rho_i})\\
&\leq \mathbb{P}b{y+U_{T_0}\in B_+}+o(\epsilon^{\rho_i}),
\end{align*}
where we used Lemma \ref{lemma:drop_tau_i} in the first inequality, \eqref{eq:MainProbability} in the identity, \eqref{eq:ti>t_0_whp} in the second inequality, and \eqref{eq:U^j_ti_U^j_t_0_difference_est} in the third inequality. For the lower bound, we have
\begin{align*}
&\mathbb{P}b{Y^j_\tau\in [a^j,b^j],\forall j \neq i; Y^i_\tau = L}\\
&\geq \mathbb{P}bBig{y^j+U^j_{\tau_i} \in L^{-\fl}\epsilon^{\fl - 1}|y^i+U^i_{\tau_i}|^\fl[a^j,b^j],\forall j \neq i;\quad y^i+U^i_{\tau_i}>0}- o(\epsilon^{\rho_i})\\
&\geq \mathbb{P} \Big\{ y^j+U^j_{{\tau_i}\vee T_0} \in L^{-\fl}\epsilon^{\fl - 1}|y^i+U^i_{{\tau_i}\vee T_0}|^\fl[a^j,b^j],\forall j \neq i;\\
&\hspace{7cm} y^i+U^i_{{\tau_i}\vee T_0}\in\big(0,(\log\epsilon^{-1})^{\kappa+1}\big]\Big\}- o(\epsilon^{\rho_i})\\
&\geq \mathbb{P} \Big\{ y^j+U^j_{{\tau_i}\vee T_0} \in L^{-\fl}\epsilon^{\fl - 1}\Big(|y^i+U^i_{ T_0}|-\epsilon^{\gamma_i}\Big)^\fl[a^j,b^j],\forall j \neq i;\\
&\hspace{7cm} y^i+U^i_{ T_0}\in\big(\epsilon^{\gamma_i},(\log\epsilon^{-1})^{\kappa+1}-\epsilon^{\gamma_i}\big]\Big\}- o(\epsilon^{\rho_i})\\
&\geq \mathbb{P}b{y+U_{T_0}\in B_-}-o(\epsilon^{\rho_i}),
\end{align*}
where we used Lemma \ref{lemma:drop_tau_i} and \eqref{eq:MainProbability} for the first inequality, \eqref{eq:ti>t_0_whp} for the second inequality, and \eqref{eq:U^j_ti_U^j_t_0_difference_est} for the last two inequalities. We remark that in the penultimate inequality the factor $(|y^i+U^i_{ T_0}|-\epsilon^{\gamma_i})^\fl$ is well defined on the event we consider. This completes our proof of Lemma \ref{lemma:rough_est}.
\end{proof}
In order to prove Lemma \ref{lemma:iteration}, we recall the density estimates obtained in~\cite[Lemma~4.1]{exit_time} for the same setup and assumptions as in the present paper. For a random variable $\mathcal{X}$ with values in $\mathbb{R}^d$, its density, if it exists, is denoted by $\rho_\mathcal{X}$. Since $U_t$ depends on $y$, we denote its density by $\rho^y_{U_t}$.
\begin{Lemma}\label{Lemma:density_est}
Consider \eqref{eq:Y_after_Duhamel} with $Y_0=\epsilon y$. Let $\mathtt{p}(x)=\sum_{j,k=1}^dx^\frac{{\lambda_j}}{{\lambda_k}}$ for $x\geq 0$ and
\begin{align}\label{eq:def_Z}
Z^j_t = \int_0^t e^{-{\lambda_j} s}F^j_l(0)dW^l_s.
\end{align}
Then
\begin{enumerate}
\item there is a constant $\theta>0$ such that for each $\upsilon \in (0,1)$ there are $C,c,\delta>0$ such that, for $\epsilon$ sufficiently small,
\begin{align*}
|\rho^y_{U_{T(\epsilon)}}(x)-\rho_{Z_{T(\epsilon)}}(x)|\leq C\epsilon^\delta\big(1+\mathtt{p}(\epsilon^{1-\upsilon}|y|)\big) e^{-c|x|^2},\quad x, y\in \mathbb{R}^d,
\end{align*}
holds for all deterministic $T(\epsilon)$ with $1\leq T(\epsilon)\leq \theta\log\epsilon^{-1}$;
\item for each $\theta'>0$, there are constants $C',c',\delta'>0$ such that, for $\epsilon$ sufficiently small,
\begin{align*}
|\rho_{Z_{T(\epsilon)}}(x)-\rho_{Z_\infty}(x)|\leq C'\epsilon^{\delta'} e^{-c'|x|^2},\quad x\in \mathbb{R}^d,
\end{align*}
holds for all deterministic $T(\epsilon)$ with $T(\epsilon)\geq \theta'\log\epsilon^{-1}$.
\end{enumerate}
\end{Lemma}
We will derive the following result from Lemma~\ref{Lemma:density_est} and use it to prove Lemma~\ref{lemma:iteration}.
\begin{Lemma}\label{lemma:terminal-iteration}
For each $\upsilon\in(0,1)$, there is $\delta>0$ such that
\begin{align*}
\sup_{|y|\leq \epsilon^{\upsilon-1}}\big|\mathbb{P}b{y+U_{T_0}\in B_\pm} - \mathbb{P}b{y+Z_{T_0}\in B_\pm}\big|=o(\epsilon^{\rho_i+\delta}).
\end{align*}
\end{Lemma}
Let us derive Lemma~\ref{lemma:iteration} from these lemmas first and prove Lemma~\ref{lemma:terminal-iteration} after that.
\begin{proof}[Proof of Lemma \ref{lemma:iteration}]
The definition~\eqref{eq:def_Z} implies that $Z_\infty$ is well-defined and has the same distribution as $\mathcal{Z}$. The definition of $B_\pm$ in \eqref{eq:defOfB_pm} implies that there is $p>0$ such that for small $\epsilon$,
\begin{align}\label{eq:contain_B_pm}
B_\pm \subset \prod_{j=1}^d \Big(\epsilon^{\fl -1}(\log \epsilon^{-1})^p[-1,1]\Big).
\end{align}
Since there is some $\theta'>0$ such that $T_0\geq \theta'\log\epsilon^{-1}$, by part (2) of Lemma \ref{Lemma:density_est} and~\eqref{eq:contain_B_pm}, we obtain that, for $\epsilon$ sufficiently small,
\begin{align*}
\big|\mathbb{P}b{y+Z_{T_0}\in B_\pm} - \mathbb{P}b{y+\mathcal{Z}\in B_\pm}\big|=o(\epsilon^{\rho_i+\delta'} (\log \epsilon^{-1})^{pd}),\quad\forall y\in\mathbb{R}^d.
\end{align*}
The above display and Lemma \ref{lemma:terminal-iteration} together imply the result of Lemma \ref{lemma:iteration}.
\end{proof}
To prove Lemma~\ref{lemma:terminal-iteration}, we need some notation. For $\mathbf{v}\in \mathbb{R}^d$, $A\subset \mathbb{R}^d$ and $t\in \mathbb{R}$, we write $e^{\lambda t}\mathbf{v} = (e^{{\lambda_j} t}\mathbf{v}^j)_{j=1}^d\in\mathbb{R}^d$ and $e^{\lambda t}A=\{e^{\lambda t}x: x\in A \}\subset \mathbb{R}^d$. Recalling $T_0=T_0(\epsilon)$ from \eqref{eq:def_T_0} and $\theta$ from the statement of Lemma~\ref{Lemma:density_est}, we set $N=\min\{n\in\mathbb{N}:\frac{T_0}{n}\leq \theta\log\epsilon^{-1}, \forall \epsilon \in(0,1/2]\}$ and $t_k=\frac{k}{N}T_0$. (Such an $N$ exists because, by \eqref{eq:def_T_0}, $T_0(\epsilon)\leq C\log\epsilon^{-1}$ for all $\epsilon\in(0,1/2]$, with a constant $C$ depending only on $L$, $\kappa$ and ${\lambda_i}$.)
Lemma~\ref{lemma:terminal-iteration} is a special case of the following result with $k=N$ and $w=0$:
\begin{Lemma}\label{lemma:iterative_est}
For each $\upsilon\in(0,1)$, there exist a constant $\upsilon'$ and constants $\epsilon_k,C_k,\delta_k>0$, $k=1,2,...,N$, such that
\begin{align}\label{eq:induction_step_conclusion}
\sup_{|y|\leq \epsilon^{\upsilon-1}}\sup_{|w|\leq \epsilon^{\upsilon'-1}}\big|\mathbb{P}bx{\epsilon y}{y+U_{t_k}+e^{-\lambda t_k}w\in B_\pm} - \mathbb{P}b{y+Z_{t_k}+e^{-\lambda t_k}w\in B_\pm}\big|\leq C_k \epsilon^{\rho_i+\delta_k},
\end{align}
holds for all $k=1,2,..., N$ and $\epsilon\in(0,\epsilon_k]$.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lemma:iterative_est}]
First, let us choose $\upsilon'\in(0,1)$ to satisfy
\begin{align}\label{eq:condition_upsilon'}
\frac{1}{N}\fl \geq \frac{1}{N}\frac{\lambda_d}{{\lambda_i}} >\upsilon', \quad \text{ for all }j=1,2,...,d.
\end{align}
For the case $k=1$, Lemma \ref{Lemma:density_est} and \eqref{eq:contain_B_pm} imply that
\begin{align*}
&\sup_{|y|\leq \epsilon^{\upsilon-1}}\sup_{|w|\leq \epsilon^{\upsilon'-1}}\big|\mathbb{P}bx{\epsilon y}{y+U_{t_1}+e^{-\lambda t_1}w\in B_\pm} - \mathbb{P}b{y+Z_{t_1}+e^{-\lambda t_1}w\in B_\pm}\big|\\
&\leq \sup_{|y|\leq \epsilon^{\upsilon-1}}\sup_{|w|\leq \epsilon^{\upsilon'-1}} \int_{\{x\in\mathbb{R}^d:y+x+e^{-\lambda t_1}w\in B_\pm\}}C\epsilon^\delta\big(1+\mathtt{p}(\epsilon^{1-\upsilon}|y|)\big)e^{-c|x|^2}dx\leq C \epsilon^\delta |B_\pm|\leq C_1\epsilon^{\rho_i+\delta_1}
\end{align*}
for some $C_1, \delta_1>0$. We proceed by induction. Let $k\leq N$ and let us assume that \eqref{eq:induction_step_conclusion} holds for $k-1$. Set $z(u)= e^{\lambda t_{k-1}}(y+u)$. The Markov property of $Y_t$ implies that
\begin{align*}
\begin{split}
&\mathbb{P}bx{\epsilon y}{y+U_{t_k}+e^{-\lambda t_k}w\in B_\pm} = \mathbb{P}bx{\epsilon y}{Y_{t_k}+\epsilon w\in \epsilon e^{\lambda t_k}B_\pm} \\
&= \EX{\epsilon y}{\mathbb{P}bx{ Y_{t_{k-1}}}{Y_{t_1}+\epsilon w\in \epsilon e^{\lambda t_k}B_\pm}} \\
& = \EXBig{\epsilon y}{\mathbb{P}bx{ \epsilon z(u)}{z(u)+U_{t_1}+e^{-\lambda t_1}w\in e^{\lambda t_{k-1}}B_\pm}\big|_{u=U_{t_{k-1}}}}.
\end{split}
\end{align*}
To check~\eqref{eq:induction_step_conclusion} for $k$ and complete the induction step, we must show that the error caused by replacing $U_{t_1}$ by $Z_{t_1}$ and $U_{t_{k-1}}$ by $Z_{t_{k-1}}$ in this expression is small. More precisely, \eqref{eq:induction_step_conclusion} for $k$ will follow immediately once we prove that there are $\epsilon_k,\delta',\delta''>0$ such that the following relations hold uniformly in $|y|\leq\epsilon^{\upsilon-1}$, $|w|\leq \epsilon^{\upsilon'-1}$ and $\epsilon \in(0,\epsilon_k]$:
\begin{equation} \label{eq:replacement-1}
|\E^{\epsilon y}A_\epsilon(U_{t_{k-1}},w)- \E^{\epsilon y} B_\epsilon(U_{t_{k-1}},w)|=o(\epsilon^{\rho_i+\delta'})
\end{equation}
and
\begin{equation} \label{eq:replacement-2}
\big|\E^{\epsilon y}{ B_\epsilon(U_{t_{k-1}},w)}-C_\epsilon(y,w)\big| =o(\epsilon^{\rho_i+\delta''}),
\end{equation}
where
\begin{align*}
A_\epsilon(u,w)&=\mathbb{P}bx{ \epsilon z(u)}{z(u)+U_{t_1}+e^{-\lambda t_1}w\in e^{\lambda t_{k-1}}B_\pm}, \\
B_\epsilon(u,w)&=\mathbb{P}b{z(u)+Z_{t_1}+e^{-\lambda t_1}w\in e^{\lambda t_{k-1}}B_\pm}, \\
C_\epsilon(y,w)&=\mathbb{P}b{y+Z_{t_k}+e^{-\lambda t_k}w\in B_\pm}.
\end{align*}
Let us derive \eqref{eq:replacement-1}.
Due to part (1) of Lemma \ref{Lemma:density_est}, there are $\delta', C',c'>0$ such that
\begin{align}\label{eq:|A_eps(u,w)-B_eps(u,w)|_est}
|A_\epsilon(u,w)-B_\epsilon(u,w)|\leq \int_{\{x\in\mathbb{R}^d:z(u)+x+e^{-\lambda t_1}w\in e^{\lambda t_{k-1}}B_\pm\}}C'\epsilon^{\delta'}\big(1+\mathtt{p}(\epsilon^{1-\upsilon'}|z(u)|)\big)e^{-c'|x|^2}dx.
\end{align}
By \eqref{eq:condition_upsilon'}, we have, for $\epsilon$ sufficiently small,
\begin{align*}
e^{{\lambda_j} t_{k-1}}\epsilon^{\fl -1}(\log\epsilon^{-1})^p\leq e^{{\lambda_j} t_{N-1}}\epsilon^{\fl -1}(\log\epsilon^{-1})^p \leq \epsilon^{\frac{1}{N}\fl -1 }(\log\epsilon^{-1})^p < \epsilon^{\upsilon'-1}.
\end{align*}
This, together with \eqref{eq:contain_B_pm}, implies that there is a constant $C>0$ such that if $z(u)+x+e^{-\lambda t_1}w\in e^{\lambda t_{k-1}}B_\pm$ and $|w|\leq \epsilon^{\upsilon'-1}$, then
\begin{align}\label{eq:|z(u)|_est}
\epsilon^{1-\upsilon'}|z(u)| \leq C+\epsilon^{1-\upsilon'}|x|.
\end{align}
Using $e^{-c'|x|^2}$ to absorb polynomials of $|x|$, from \eqref{eq:|A_eps(u,w)-B_eps(u,w)|_est} and \eqref{eq:|z(u)|_est} we obtain, for some $C,c>0$,
\begin{align*}
|A_\epsilon(u,w)-B_\epsilon(u,w)|\leq \epsilon^{\delta'}\int_{\{x\in\mathbb{R}^d:z(u)+x+e^{-\lambda t_1}w\in e^{\lambda t_{k-1}}B_\pm\}}Ce^{-c|x|^2}dx, \qquad |w|\leq \epsilon^{\upsilon'-1}.
\end{align*}
Let $\mathcal{N}$ be a centered Gaussian vector with density proportional to $e^{-c|x|^2}$ and independent of~$\mathcal{F}_{t_{k-1}}$. The above display implies that if $|w|\leq \epsilon^{\upsilon'-1}$, then
\begin{align*}
|\E^{\epsilon y}A_\epsilon(U_{t_{k-1}},w)- \E^{\epsilon y} B_\epsilon(U_{t_{k-1}},w)| \leq C\epsilon^{\delta'}\mathbb{P}bx{\epsilon y}{y+U_{t_{k-1}}+e^{-\lambda t_k}w+e^{-\lambda t_{k-1}}\mathcal{N} \in B_\pm}.
\end{align*}
Each entry of $e^{-\lambda t_1}$ decays like a small positive power of $\epsilon$. So, for small $\epsilon$,
\begin{align}\label{eq:induction_assumption_satisfied}
|w| \leq \epsilon^{\upsilon'-1} \quad\text{ implies }\quad |e^{-\lambda t_1}w|+ \log\epsilon^{-1}\leq \epsilon^{\upsilon'-1}.
\end{align}
Therefore,
\begin{align*}\begin{split}
&|\E^{\epsilon y}A_\epsilon(U_{t_{k-1}},w)- \E^{\epsilon y} B_\epsilon(U_{t_{k-1}},w)| \\
&\leq C\epsilon^{\delta'}\mathbb{P}bx{\epsilon y}{y+U_{t_{k-1}}+e^{-\lambda t_{k-1}}(e^{-\lambda t_1}w+\mathcal{N}) \in B_\pm;\ |\mathcal{N}|\leq \log\epsilon^{-1}} + o(\epsilon^{\rho_i+\delta'})\\
&\leq C\epsilon^{\delta'}\mathbb{P}b{y+Z_{t_{k-1}}+e^{-\lambda t_{k-1}}(e^{-\lambda t_1}w+\mathcal{N})\in B_\pm}+o(\epsilon^{\rho_i+\delta_{k-1}+\delta'})+o(\epsilon^{\rho_i+\delta'})\\
& = o(\epsilon^{\rho_i+\delta'}),
\end{split}
\end{align*}
uniformly in $|y|\leq \epsilon^{\upsilon-1}$ and $|w|\leq \epsilon^{\upsilon'-1}$. Here, in the second inequality we used the induction assumption allowed by \eqref{eq:induction_assumption_satisfied}, independence of $\mathcal{N}$, Fubini's theorem, and the superpolynomial decay of $\mathbb{P}b{|\mathcal{N}|> \log\epsilon^{-1}}$. In the last line we used \eqref{eq:contain_B_pm}, the uniform boundedness of the density of $Z_{t_{k-1}}$, independence of $\mathcal{N}$ and Fubini's theorem. This completes the proof of \eqref{eq:replacement-1}. Let us now prove \eqref{eq:replacement-2}. Let $\widetilde{Z}_{t_1}$ be a copy of $Z_{t_1}$ independent of $\mathcal{F}_{t_{k-1}}$.
The following holds uniformly in $|y|\leq \epsilon^{\upsilon-1}$ and $|w|\leq\epsilon^{\upsilon'-1}$:
\begin{align*}
&\E^{\epsilon y} B_\epsilon(U_{t_{k-1}},w) \\
&= \mathbb{P}bx{\epsilon y}{y+U_{t_{k-1}}+e^{-\lambda t_{k-1}}(e^{-\lambda t_1}w+\widetilde{Z}_{t_1})\in B_\pm}\\
& = \mathbb{P}bx{\epsilon y}{y+U_{t_{k-1}}+e^{-\lambda t_{k-1}}(e^{-\lambda t_1}w+\widetilde{Z}_{t_1})\in B_\pm;\ |\widetilde{Z}_{t_1}|\leq \log\epsilon^{-1}}+o(\epsilon^{\rho_i+\delta'})\\
&= \mathbb{P}b{y+Z_{t_{k-1}}+e^{-\lambda t_{k-1}}(e^{-\lambda t_1}w+\widetilde{Z}_{t_1})\in B_\pm;\ |\widetilde{Z}_{t_1}|\leq \log\epsilon^{-1}}+o(\epsilon^{\rho_i+\delta_{k-1}})+o(\epsilon^{\rho_i+\delta'})\\
&= \mathbb{P}b{y+Z_{t_{k-1}}+e^{-\lambda t_{k-1}}\widetilde{Z}_{t_1}+e^{-\lambda t_k}w\in B_\pm}+o(\epsilon^{\rho_i+\delta_{k-1}\wedge \delta'}) \\
&=C_\epsilon(y,w) +o(\epsilon^{\rho_i+\delta_{k-1}\wedge \delta'}),
\end{align*}
where, in the third identity, we used the induction assumption (which applies thanks to \eqref{eq:induction_assumption_satisfied}), independence of $\widetilde{Z}_{t_1}$ and Fubini's theorem. In the last line, we used the fact that $Z_{t_{k-1}}+e^{-\lambda t_{k-1}}\widetilde{Z}_{t_1}$ and $Z_{t_k}$ have the same distribution. This proves~\eqref{eq:replacement-2} with $\delta''=\delta_{k-1}\wedge \delta'$, completing the induction step and the entire proof.
\end{proof}
\section{Extension to a general domain}\label{section:extension}
\subsection{Proof of Proposition \ref{Prop:geometry_of_pull_back} }\label{subsection:geometry_pullback}
We use the notation introduced in \eqref{eq:def_M^k}--\eqref{eq:def_of-zeta}. Let us first prove the inequality $L'_A>0$ and part (1). The assumption $i(A)=i$ together with definitions \eqref{eq:def_M^k} and \eqref{eq:def_of-zeta} implies that for any $L<L(O)$, $\overline{\zeta_L(A)}\cap \Lambda^{i-1}=\emptyset$. Let us fix any $L_0<L(O)$. Since the set $\overline{\zeta_{L_0}(A)}\subset \partial \mathcal{B}_{L_0}$ is compact, we can find $r>0$ such that $\max\{|y^m|: m>i-1\}/|y|\ge r$ for all $y\in \overline{\zeta_{L_0}(A)}$. Let us choose $t_0>0$ such that for all $t>t_0$ the following holds: if $j\le i-1\le m$ and $|x^m|\le |x^j|$, then $|e^{\lambda_m t}x^m|/|e^{\lambda_j t}x^j|<r$. Now if $L$ is small enough to ensure that $\bar S_{t_0}\mathcal{B}_L \subset \mathcal{B}_{L_0}$, then for every $j\le i-1$ and every $x\in \mathbf{F}_L^j$, we are guaranteed that the orbit of $x$ under $\bar S$ intersects~$\partial \mathcal{B}_{L_0}$ at a point $y$ satisfying $\max\{|y^m|: m>i-1\}/|y|\le \max\{|y^m|: m>i-1\}/|y^j|< r$, so $y\notin\overline{\zeta_{L_0}(A)}$ and thus $x\notin \overline{\zeta_L(A)}$, which completes the proof of $L'_A>0$ and part~(1). To prove part (\ref{item:2_of_prop}), we fix $L<L'_A$ arbitrarily and recall that $i(A)=i$. It suffices to define
\begin{align*}
B&=\{x\in \partial\mathcal{B}_L:\ |x_j|\le L/2 \text{\rm\ for all\ } j>i\},\\
C&=\{x\in \partial\mathcal{B}_L:\ |x_j|> L/2 \text{\rm\ for some\ } j>i\}=\partial\mathcal{B}_L\setminus B,\\
A_0& = \zeta_L^{-1}(\zeta_L(A)\cap B)=A\cap \zeta_L^{-1}(B),\\
A_1&=\zeta_L^{-1}(C).
\end{align*}
Property (2a) is obvious from the construction. The first of (2b) and (2c) hold due to \eqref{item:equiv-index} of Lemma \ref{lemma:Hyperbolic_Pull_Back} and the construction. The second item of (2b) follows from the construction, and the last one follows from part (1) and the definition of $B$.
Lastly, the $N$-regularity of $A_0$ and $A_1$ can be verified through the construction and \eqref{item:equiv-Lambda-regular} of Lemma~\ref{lemma:Hyperbolic_Pull_Back}.
\subsection{Proof of Proposition \ref{Prop:measure-theoretical_prop_of_pull-back}}\label{subsection:error_of_pullback}
We recall the definitions of $\tau$ and $\tau_L$ given in~\eqref{eq:def_tau} and \eqref{eq:def_tau_L}. The goal is to show that the asymptotics of $\mathbb{P}b{X_\tau \in A}$ is exactly captured by that of $\mathbb{P}b{X_{\tau_L} \in f^{-1}\circ\zeta_L(A)}$ for suitable $A\subset \partial \mathbf{D}$, by which we are able to prove Proposition \ref{Prop:measure-theoretical_prop_of_pull-back}. To this end, we need to approximate $\zeta_L(A)$. Simply taking a small neighborhood of that set makes it difficult to verify the continuity with respect to the measure $\mathcal{H}^{i-1}(\,\cdot\,\cap \mathbf{F}^i_{L,\delta}\cap\Lambda^i )$, which is required in Proposition \ref{Prop:Box_Case}. Hence, the following lemma is needed. We recall the definition of $\mathrm{dist}(\cdot,\cdot)$ in~\eqref{eq:dist-def}.
\begin{Lemma}\label{lemma:existence_of_delta_set}
Let $d\ge 2$. Let $A\subset \partial \mathbf{D}$ be $N$-regular with $i(A)=i$. For $L< L_0$ defined in Proposition~\ref{Prop:Box_Case}, let $B=\zeta_L(A)$ and assume that $\overline{B}\subset \mathop{\mathrm{int}}_{L}{\mathbf{F}^i_L}$. Then, there are two families of Borel sets $(B_\delta)_{\delta>0}$ and $(B_{-\delta})_{\delta>0}$ with the following properties:
\begin{enumerate}
\item $B_{-\delta} \subset B \subset B_\delta\subset \mathbf{F}^i_{L,\delta_1}$ for some $\delta_1 >0$ and all $\delta>0$; \label{item:one}
\item $\lim_{\delta \to 0}\mathcal{H}^{i-1}(B_{\pm \delta}\cap \Lambda^i)=\mathcal{H}^{i-1}(B\cap \Lambda^i)$; \label{item:two}
\item $B_{\pm \delta}$ are finite unions of rectangles described in Proposition~\ref{Prop:Box_Case}, whose interiors are pairwise disjoint;\label{item:three}
\item $\mathrm{dist}\big(\partial\mathcal{B}_L \setminus B_\delta, B\big)>0$ and $\mathrm{dist}\big(\partial \mathcal{B}_L \setminus B, B_{-\delta}\big)>0$ for all $\delta>0$.\label{item:four}
\end{enumerate}
\end{Lemma}
The next result shows that $\mathbb{P}b{f(X_{\tau_L})\in (\zeta_L(A))_{\pm\delta}}$ is a very good approximation for~$\mathbb{P}b{X_\tau \in A}$.
\begin{Lemma} \label{lemma:Est_Pull_Back}
Let $L<L_0$. For each $A\subset \partial \mathbf{D}$ as in Lemma \ref{lemma:existence_of_delta_set}, there is $\delta_0>0$ depending on $A$ such that for each $\delta\in(0,\delta_0)$
\begin{align}\label{eq:in_lemma:Est_Pull_Back}
\mathcal{O}(e^{-C_\delta\epsilon^{-2}}) + \mathbb{P}b{f(X_{\tau_L}) \in (\zeta_L(A))_{-\delta}} \leq \mathbb{P}b{X_\tau \in A} \leq \mathbb{P}b{f(X_{\tau_L}) \in (\zeta_L(A))_\delta} + \mathcal{O}(e^{-C_\delta\epsilon^{-2}}),
\end{align}
as $\epsilon \to 0$, for some $C_\delta>0$ depending on $\delta$.
\end{Lemma}
To prove this lemma, we will need an estimate on the discrepancy between the deterministic path $S_tx$ and the perturbed one, i.e., the process $X_t$ under $\mathbb{P}^x=\mathbb{P}b{\,\cdot\,|X_0=x}$. To that end, we will use the following consequence of Proposition~2.3 and Theorem~2.4 in \cite[Chapter~III]{Azencott:MR590626}, which is an extension of the standard Freidlin--Wentzell (FW) large deviation bound that does not require uniform ellipticity of $\sigma$. We state it here because we can use it directly for the case $d=1$ in the proof of Proposition \ref{Prop:measure-theoretical_prop_of_pull-back}.
\begin{Lemma} \label{lemma:FW_exponentialDecay}
Let $b$ and $\sigma$ be Lipschitz and bounded. For all $\epsilon>0$, let $(X_t^\epsilon)_{t\ge0}$ be a solution of the It\^o equation \eqref{eq:SDE_X} with initial condition $X^\epsilon_0=x$ under a probability measure $\mathbb{P}$, and recall the definition of the flow $(S_t)$ from~\eqref{eq:ODE}. For each deterministic $T>0$ and $\eta>0$,
$$\mathbb{P}bBig{\sup_{0\leq t \leq T}|X^x_t - S_tx|> \eta}=\mathcal{O}(e^{-C\epsilon^{-2}})$$
holds uniformly in $x$, where $C$ depends only on $T$, $\eta$ and the Lipschitz constant of $b$.
\end{Lemma}
\begin{proof}[Proof of Proposition \ref{Prop:measure-theoretical_prop_of_pull-back}]
First, we consider $d\geq 2$. Splitting $A$ into two sets if necessary, we can assume that $\overline{\zeta_L(A)}\subset \mathop{\mathrm{int}}_{L}\mathbf{F}^i_{L+}$ without loss of generality. Let $\delta_1$ be defined by part~\eqref{item:one} of Lemma \ref{lemma:existence_of_delta_set}. By compactness of $\overline{\zeta_L(A)}$, there is $\Delta\in(0,L_0\wedge \delta_1)$ such that $\zeta_L(A)\subset \mathbf{F}^i_{L+, \Delta}$. We use (\ref{item:three}) of Lemma \ref{lemma:existence_of_delta_set} to represent $(\zeta_L(A))_{\pm\delta}$ as a finite union of rectangles with disjoint interiors. Applying Proposition~\ref{Prop:Box_Case} to these rectangles and noting that the contribution from (perhaps overlapping) boundaries of these rectangles is~$0$, we obtain
\begin{align*}
\lim_{\epsilon\to 0}\epsilon^{-\rho_i}\mathbb{P}b{f(X_{\tau_L}) \in (\zeta_L(A))_{\pm\delta}}=L^{-\sum_{j<i}\fl}\mathbb{E}\chi^i_+(\xi_0) \mathcal{H}^{i-1}\{(\zeta_L(A))_{\pm\delta} \cap \Lambda^i \}.
\end{align*}
Therefore, due to (\ref{item:two}) of Lemma~\ref{lemma:existence_of_delta_set},
\begin{align*}
\lim_{\delta\to0}\lim_{\epsilon\to 0}\epsilon^{-\rho_i}\mathbb{P}b{f(X_{\tau_L}) \in (\zeta_L(A))_{\pm\delta}}=L^{-\sum_{j<i}\fl}\mathbb{E}\chi^i_+(\xi_0) \mathcal{H}^{i-1}\{\zeta_L(A) \cap \Lambda^i \}.
\end{align*}
Applying Lemma~\ref{lemma:Est_Pull_Back}, we complete the proof for $d\ge 2$. In the special case $d=1$, we have $\partial \mathbf{D}=\{q_-,q_+\}$ with $q_-<0<q_+$, and $i=1$. It suffices to study $\mathbb{P}b{X_\tau = q_\pm}$. Note that $f^{-1}(\mathbf{F}^1_{L\pm,\delta})=\{p_\pm\}$, where $p_\pm = f^{-1}(\pm L)$ satisfy $q_-<p_-<0<p_+<q_+$. Proposition \ref{Prop:Box_Case} implies that
\begin{align*}
\lim_{\epsilon\to 0}\mathbb{P}b{X_{\tau_L}=p_\pm} = \mathbb{E} \chi_\pm^1(\xi_0).
\end{align*}
Using Lemma~\ref{lemma:FW_exponentialDecay}, we conclude that $ \mathbb{P}b{X_{\tau_L}=p_\pm;\ X_\tau \neq q_\pm}=\mathcal{O}\big(e^{-C\epsilon^{-2}}\big),$ which immediately implies that $ \lim_{\epsilon\to 0}\mathbb{P}b{X_\tau=q_\pm} = \mathbb{E} \chi_\pm^1(\xi_0)$ and completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:existence_of_delta_set}]
Without loss of generality we may assume that $\overline{B}\subset \mathop{\mathrm{int}}_{L}{\mathbf{F}^i_{L+}}$. Let us choose $\delta_1>0$ such that
\begin{align*}\overline{B}\subset \mathop{\mathrm{int}}_{L}{\mathbf{F}}^i_{L+,\delta_1}.
\end{align*}
If $i=1$, then ${\mathbf{F}}^i_{L+,\delta_1}\cap \Lambda^1= B\cap \Lambda^1=\{p\}$, where $p=(L,0,\dots,0)$. Part~\eqref{item:equiv-Lambda-regular} of Lemma~\ref{lemma:Hyperbolic_Pull_Back} implies $\mathcal{H}^0(\partial_L B \cap \Lambda^1)=0$, so $p\not\in \partial_L B $ and thus $p\in \mathop{\mathrm{int}}_L B$.
Hence, we can pick a closed rectangle $R$ on ${\mathbf{F}}^i_{L+,\delta_1}$ such that $p\in R \subset \mathop{\mathrm{int}}_L B$. Setting $B_{-\delta}= R$ for all $\delta>0$, we ensure properties (1)--(4) for $B_{-\delta}$. Choosing $\delta_1$ sufficiently small and setting $B_\delta= \mathbf{F}^i_{L,\delta_1}$, for all $\delta>0$, we ensure properties (1)--(4) for $B_{\delta}$.
If $i=d$, then ${\mathbf{F}}^i_{L+,\delta_1}\cap\Lambda^d = {\mathbf{F}}^i_{L+,\delta_1}$. Since ${\mathbf{F}}^i_{L+,\delta_1}$ is $(d-1)$-dimensional and flat, the measure $\mathcal{H}^{d-1}(\,\cdot\, \cap {\mathbf{F}}^i_{L+,\delta_1}\cap \Lambda^d)=\mathcal{H}^{d-1}(\,\cdot\, \cap {\mathbf{F}}^i_{L+,\delta_1})$ can be viewed as the $(d-1)$-dimensional Lebesgue measure restricted to ${\mathbf{F}}^i_{L+,\delta_1}$. By standard approximation arguments, we can choose $B_{-\delta}$ and $B_\delta$ to be two unions of finitely many rectangles which satisfy (1) and (2). Slightly adjusting the rectangles, we can ensure (4).
For $1<i<d$, we need an extended version of this construction. We construct the family $(B_\delta)_{\delta>0}$ first. Let us define a closed $(i-1)$-dimensional rectangle
\begin{align*}Q=\mathbf{F}^i_{L+,\delta_1}\cap \Lambda^i=\{x\in\mathbb{R}^d:\ |x_1|,\dots,|x_{i-1}|\le L-\delta_1;\ x_i=L;\ x_{i+1}=\dots=x_d=0\}.
\end{align*}
For every $\delta>0$, using the compactness of $\overline B \cap Q$ and the fact that
\begin{equation} \label{eq:regularity-for-B}
\mathcal{H}^{i-1}(\partial_{L}B\cap Q) =0
\end{equation}
(which follows from the regularity of $A$ and Lemma \ref{lemma:Hyperbolic_Pull_Back}), we can find a set $G_\delta$ satisfying the following:
\begin{gather}
\text{$G_\delta$ is a finite union of open $(i-1)$-dimensional rectangles};\label{eq:G_delta_property_rect}\\
\overline{B}\cap Q\subset G_\delta\subset Q; \label{eq:G_delta_property_containment}\\
\mathcal{H}^{i-1}(G_\delta \setminus(B\cap Q)) = \mathcal{H}^{i-1}(G_\delta \setminus(\overline{B}\cap Q)) < \delta.\label{eq:G_delta_property_approximation}
\end{gather}
Since $Q\setminus G_\delta$ and $\overline{B}$ are compact, we can adjust $G_\delta$ to additionally ensure that
\begin{align}\label{eq:QG_delta_dist_to_B}
\mathrm{dist}(Q\setminus G_\delta, \overline{B})>0.
\end{align}
Let $\pi$ be the orthogonal projection onto $\Lambda^i$, namely
\begin{align*}
\pi: x\in\mathbb{R}^d \mapsto (x^1,x^2,...,x^{i-1},0,\dots,0)\in\mathbb{R}^d.
\end{align*}
Since $\mathbf{F}^i_{L+,\delta_1}\setminus\overline{B}$ is open and $Q\setminus G_\delta$ is closed in the relative topology of $\mathbf{F}^i_{L,\delta_1}$, \eqref{eq:QG_delta_dist_to_B} implies that there is some ``thickness'' $h(\delta)\in(0,\delta)$ such that
\begin{align}\label{eq:K_delta}
K_\delta = \{x\in \mathbf{F}^i_{L,\delta_1}: \pi(x)\in Q\setminus G_\delta; \quad |x^j|< h(\delta),\ \forall j> i \}
\end{align}
satisfies
\begin{equation}
\mathrm{dist}(K_\delta, B)>0.\label{eq:dist_K_B}
\end{equation}
Let us define $B_\delta=\mathbf{F}^i_{L+,\delta_1} \setminus K_\delta.$ Parts~(\ref{item:one}) and~(\ref{item:four}) of the lemma now follow from~\eqref{eq:dist_K_B}. Using~\eqref{eq:G_delta_property_rect} and subdividing rectangles if needed, we can represent $\overline{G_\delta}$ as a finite union of $(i-1)$-dimensional closed rectangles with disjoint interiors. Part~\eqref{item:three} follows now from~\eqref{eq:K_delta} and the definition of $B_\delta$.
Since
\begin{equation*}
B_\delta \cap Q=B_\delta \cap \Lambda^i= G_\delta,
\end{equation*}
we have $\mathcal{H}^{i-1}(B_\delta \cap \Lambda^i)= \mathcal{H}^{i-1}(G_\delta)$. Thus,
\[ 0\le \mathcal{H}^{i-1}(B_\delta \cap \Lambda^i)-\mathcal{H}^{i-1}(B \cap \Lambda^i) =\mathcal{H}^{i-1}(G_\delta\setminus (B\cap Q))<\delta, \]
by \eqref{eq:G_delta_property_containment} and~\eqref{eq:G_delta_property_approximation}, so part~(\ref{item:two}) also follows.
To construct $B_{-\delta}$, we apply the same approach to the set $B_-=\mathbf{F}^i_{L+,\delta_1}\setminus B$ and note that, due to the regularity of $A$, the set $B_-$ satisfies a version of \eqref{eq:regularity-for-B}, namely,
\begin{equation*}
\mathcal{H}^{i-1}(\partial_{L}B_-\cap Q) =0,
\end{equation*}
so we can find a cover $G_{-\delta}$ of $\overline{B_-}\cap Q$ satisfying the versions of requirements \eqref{eq:G_delta_property_rect}--\eqref{eq:QG_delta_dist_to_B} with $B,G_\delta$ replaced by $B_-,G_{-\delta}$. We can now define $K_{-\delta}$ via $B_-$ and $G_{-\delta}$ similarly to~\eqref{eq:K_delta}--\eqref{eq:dist_K_B}, and check that properties (1)--(4) hold if we set $B_{-\delta}=K_{-\delta}$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:Est_Pull_Back}]
To derive the upper bound in~\eqref{eq:in_lemma:Est_Pull_Back}, we write
\begin{align*}
\mathbb{P}b{X_\tau \in A}\le \mathbb{P}b{f(X_{\tau_L}) \in (\zeta_L(A))_\delta} + \mathtt{I},
\end{align*}
where $\mathtt{I}=\mathbb{P}b{f(X_{\tau_L}) \notin (\zeta_L(A))_\delta,\ X_\tau \in A}$ is the term we need to estimate. Let $\Gamma = f^{-1}(\partial \mathcal{B}_{L})$. Since $b$ is transversal to both~$\Gamma$ and $\partial \mathbf{D}$, the inverse of the map~$\psi_L$ defined in~\eqref{eq:psi_L} is Lipschitz on~$\partial\mathbf{D}$. Let us introduce $F_\delta=f^{-1}((\zeta_L(A))_\delta)$ and notice that $\gamma=\mathrm{dist}(\partial \mathbf{D}\setminus\psi_L(F_\delta), A)>0$ due to the Lipschitz property of $\psi_L^{-1}$ and (4) of Lemma \ref{lemma:existence_of_delta_set}. Let $T_0=\sup\{t(x):\ x\in\Gamma\}$, where $t(\cdot)$ was defined in~\eqref{eq:deterministic-exit-time}, and $T_1=T_0+1$. Due to the same transversality properties, by time $T_1$, all orbits under $S$ originating from $\Gamma$ exit $\mathbf{D}$ and end up at a distance from $\partial \mathbf{D}$ that is bounded away from~$0$. Therefore, there is $\eta>0$ such that, for every $x\in\Gamma$ and every continuous path $y:[0,T_1]\to\mathbb{R}^d$ such that $\sup_{t\in[0,T_1]}|y(t)-S_tx|\le \eta$, the point $y_\mathbf{D}$ of the first intersection of the path $y$ with $\partial\mathbf{D}$ is well-defined and satisfies $|y_\mathbf{D}-\psi_L(x)|<\gamma$. We can now apply this statement along with Lemma~\ref{lemma:FW_exponentialDecay} to see that
\begin{align*}
\mathtt{I}&= \int_{\Gamma\setminus F_\delta}\mathbb{P}b{X_{\tau_L}\in dx}\mathbb{P}^x\{X_\tau \in A\} \\
&\le \int_{\Gamma\setminus F_\delta}\mathbb{P}b{X_{\tau_L}\in dx}\mathbb{P}^x\left\{\sup_{0\le t \le T_1} |X^x_t-S_tx| > \eta \right\}=\mathcal{O}(e^{-C\epsilon^{-2}})
\end{align*}
for some $C=C(\delta)>0$, which completes the proof of the upper bound in~\eqref{eq:in_lemma:Est_Pull_Back}. The lower bound in~\eqref{eq:in_lemma:Est_Pull_Back} is derived similarly.
\end{proof}
\end{document}
\begin{document}
\title{On the almost-principal minors of a symmetric matrix}
\author{ Shaun M. Fallat\thanks{Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2, Canada ([email protected]).} \and Xavier Mart\'inez-Rivera\thanks{Department of Mathematics and Statistics, Auburn University, Auburn, AL 36849, USA ([email protected]). } }
\maketitle
\begin{abstract}
The almost-principal rank characteristic sequence (apr-sequence) of an $n\times n$ symmetric matrix is introduced, which is defined to be the string $a_1 a_2 \cdots a_{n-1}$, where $a_k$ is either {\tt A}, {\tt S}, or {\tt N}, according as all, some but not all, or none of its almost-principal minors of order $k$ are nonzero. In contrast to the other principal rank characteristic sequences in the literature, the apr-sequence of a matrix does not depend on principal minors. The almost-principal rank of a symmetric matrix $B$, denoted by $\aprank(B)$, is defined as the size of a largest nonsingular almost-principal submatrix of $B$. A complete characterization of the sequences not containing an $\tt A$ that can be realized as the apr-sequence of a symmetric matrix over a field $\mathbb{F}$ is provided. A necessary condition for a sequence to be the apr-sequence of a symmetric matrix over a field $\mathbb{F}$ is presented. It is shown that if $B \in \mathbb{F}^{n\times n}$ is symmetric and non-diagonal, then $\rank(B)-1 \leq \aprank(B) \leq \rank(B)$, with both bounds being sharp. Moreover, it is shown that if $B$ is symmetric, non-diagonal and singular, and does not contain a zero row, then $\rank(B) = \aprank(B)$.
\end{abstract}
\noindent{\bf Keywords.} Almost-principal minor; almost-principal rank characteristic sequence; enhanced principal rank characteristic sequence; symmetric matrix; rank; ap-rank.
\noindent{\bf AMS subject classifications.} 15B57, 15A15, 15A03.
\section{Introduction}\label{s: intro}
$\null$ \indent Motivated by work of Brualdi et al.\ \cite{P} on the principal rank characteristic sequence (pr-sequence), Butler et al.\ \cite{EPR} introduced the enhanced principal rank characteristic sequence (epr-sequence), which they defined as follows: For a given symmetric matrix $B \in \mathbb{F}^{n\times n}$, where $\mathbb{F}$ is a field, the \textit{enhanced principal rank characteristic sequence} (epr-sequence) of $B$ is $\epr(B) = \ell_1\ell_2 \cdots \ell_n$, where
\begin{equation*}
\ell_k = \begin{cases}
\tt{A} &\text{if all of the principal minors of order $k$ are nonzero;}\\
\tt{S} &\text{if some but not all of the principal minors of order $k$ are nonzero;}\\
\tt{N} &\text{if none of the principal minors of order $k$ are nonzero (i.e., all are zero),}
\end{cases}
\end{equation*}
where a minor of {\em order} $k$ is the determinant of a $k \times k$ submatrix of $B$. (The definition of ``principal minor'' appears below.) After subsequent work on epr-sequences (see \cite{{EPR-Hermitian}, {skew}, {XMR-Classif}, {XMR-Char 2}}), another sequence, one that refines the epr-sequence, called the signed enhanced principal rank characteristic sequence (sepr-sequence), was introduced by Mart\'inez-Rivera in \cite{XMR-sepr}.
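For a concrete illustration of the epr-sequence (a simple example, included here only for orientation), let $J_3$ denote the $3\times 3$ all-ones matrix over a field $\mathbb{F}$: every principal minor of order $1$ equals $1$, while every principal minor of order $2$ or $3$ equals $0$ (any two rows of $J_3$ coincide); hence $\epr(J_3)={\tt ANN}$.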
Recently, Fallat and Mart\'inez-Rivera \cite{qpr} extended the definition of the epr-sequence by also taking into consideration the almost-principal minors of the matrix, leading them to a new sequence, which we will define after introducing some terminology: For $B \in \mathbb{F}nn$ and $\alpha, {\bf e}ta \subseteq \{1,2, \dots, n\}$, $B[\alpha, {\bf e}ta]$ will denote the submatrix lying in rows indexed by $\alpha$ and columns indexed by ${\bf e}ta$; $B[\alpha, {\bf e}ta]$ is a {\em principal} submatrix of $B$ if $\alpha = {\bf e}ta$; the minor $\det B[\alpha,{\bf e}ta]$ is a {\em principal} minor of $B$ if $B[\alpha, {\bf e}ta]$ is a principal submatrix of $B$; $B[\alpha, {\bf e}ta]$ is an {\em almost-principal} submatrix of $B$ if $|\alpha| = |{\bf e}ta|$ and $|\alpha \cap {\bf e}ta| = |\alpha|-1$; the minor $\det B[\alpha,{\bf e}ta]$ is an {\em almost-principal} minor of $B$ if $B[\alpha, {\bf e}ta]$ is an almost-principal submatrix of $B$; the minor $\det B[\alpha,{\bf e}ta]$ is a {\em quasi-principal} minor of $B$ if $B[\alpha, {\bf e}ta]$ is a principal or an almost-principal submatrix of $B$; we will say that an $n \times n$ matrix has {\em order} $n$; a sequence $t_1t_2 \cdots t_{k}$ from $\{\tt A,N,S\}$ is said to have {\em length} $k$. As introduced in \cite{qpr}, for a given symmetric matrix $B \in \mathbb{F}nn$, where $\mathbb{F}$ is a field, the \textit{quasi principal rank characteristic sequence} (qpr-sequence) of $B$ is $\qpr(B)=q_1q_2\cdots q_n$, where \[q_k=\left\{{\bf e}gin{array}{ll} {\tt A} & \mbox{ if all of the quasi-principal minors of order $k$ are nonzero;}\\ {\tt S} & \mbox{ if some but not all of the quasi-principal minors of order $k$ are nonzero;}\\ {\tt N} & \mbox{ if none of the quasi-principal minors of order $k$ are nonzero (i.e., all are zero).}\end{array}\right.\] A necessary condition for a sequence to be the qpr-sequence of a symmetric matrix over a field $\mathbb{F}$ was found in \cite{qpr}: {\bf e}gin{thm}\label{qpr-necessary condition} {\rm \cite[Corollary 2.7]{qpr}} Let $\mathbb{F}$ be a field and $q_1q_2 \cdots q_{n}$ be a sequence from $\{\tt A,N,S\}$. If $q_1q_2 \cdots q_{n}$ is the qpr-sequence of a symmetric matrix $B \in \mathbb{F}nn$, then the following statements hold: {\bf e}n \item[(i)] $q_n \neq \tt S$. \item[(ii)] Neither $\tt NA$ nor $\tt NS$ is a subsequence of $q_1q_2 \cdots q_{n}$. \end{enumerate} \end{thm} The necessary condition in Theorem \ref{qpr-necessary condition} was shown to be sufficient if $\mathbb{F}$ is of characteristic $0$: {\bf e}gin{thm}\label{qpr-char 0} {\rm \cite[Theorem 3.7]{qpr}} Let $\mathbb{F}$ be a field of characteristic $0$. A sequence $q_1q_2 \cdots q_n$ from $\{\tt A,N,S\}$ is the qpr-sequence of a symmetric matrix $B \in \mathbb{F}nn$ if and only if the following statements hold: {\bf e}n \item[(i)] $q_n \neq \tt S$. \item[(ii)] Neither $\tt NA$ nor $\tt NS$ is a subsequence of $q_1q_2 \cdots q_n$. \end{enumerate} \end{thm} Theorem \ref{qpr-char 0} establishes a contrast between the epr-sequences and qpr-sequences of symmetric matrices, since a complete characterization such as the one in Theorem \ref{qpr-char 0} for epr-sequences when the field $\mathbb{F}$ is not the field of order $2$ is not yet known (see \cite{XMR-Char 2}). The absence of such a characterization for epr-sequences is due to the difficulty in understanding epr-sequences containing $\tt NA$ or $\tt NS$ as subsequences. 
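For a concrete illustration of these families of minors, consider the symmetric matrix
\[
B=\mtx{1&1&0\\ 1&1&1\\ 0&1&1} \in \mathbb{F}^{3 \times 3}.
\]
Its almost-principal minors of order $1$ are the off-diagonal entries $b_{12}=b_{23}=1$ and $b_{13}=0$, its principal minors of order $2$ are $0$, $1$ and $0$, and each of its six almost-principal minors of order $2$ (for instance, $\det B[\{1,2\},\{1,3\}]$) equals $1$; moreover, $\det B = -1 \neq 0$. Consequently, $\epr(B)={\tt ASA}$ and $\qpr(B)={\tt SSA}$, in agreement with Theorem \ref{qpr-necessary condition}.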
However, in the case of qpr-sequences, this difficulty was overcome, since Theorem \ref{qpr-necessary condition} states that neither $\tt NA$ nor $\tt NS$ can occur as a subsequence of the qpr-sequence of a symmetric matrix \cite{qpr}, regardless of the field; this raises a question: {\bf e}gin{quest}\label{qpr question} \rm Should we attribute the fact that neither $\tt NA$ nor $\tt NS$ can occur as a subsequence of the qpr-sequence of a symmetric matrix entirely to the dependence of qpr-sequences on almost-principal minors? \end{quest} Question \ref{qpr question}, together with the applications that almost-principal minors find in numerous areas, which include algebraic geometry, statistics, theoretical physics and matrix theory \cite{qpr} (see, for example, \cite{{Stu17}, {Vanishing Minor Conditions}, {InvM3}, {KP14}, {Stu09}, {Stu16}, {InvM1}}), is motivation for introducing the almost-principal rank and the almost-principal rank characteristic sequence of a symmetric matrix, which is the focus of this paper: {\bf e}gin{defn}{\rm Let $B \in \mathbb{F}nn$ be symmetric. The \textit{almost-principal rank} of $B$, denoted by $\aprank(B)$, is \[\aprank(B) := \max \{ |\alpha| : \det (B[\alpha, {\bf e}ta]) \neq 0, \ |\alpha| = |{\bf e}ta| \mbox{ \ and \ } |\alpha \cap {\bf e}ta| = |\alpha|-1\} \] (where the maximum over the empty set is defined to be 0). }\end{defn} We note that, by definition, the ap-rank of a $1 \times 1$ matrix is $0$. {\bf e}gin{defn}{\rm For $n \geq 2$, the {\em almost-principal rank characteristic sequence} of a symmetric matrix $B\in\mathbb{F}nn$ is the sequence (apr-sequence) $\apr(B)=a_1a_2\cdots a_{n-1}$, where \[a_k=\left\{{\bf e}gin{array}{ll} {\tt A} & \mbox{ if all of the almost-principal minors of order $k$ are nonzero;}\\ {\tt S} & \mbox{ if some but not all of the almost-principal minors of order $k$ are nonzero;}\\ {\tt N} & \mbox{ if none of the almost-principal minors of order $k$ are nonzero (i.e., all are zero).}\end{array}\right.\] }\end{defn} Some observations highlighting the contrast between apr-sequences and pr-, epr-, sepr- and qpr-sequences are now in order: Unlike the other sequences, by definition, apr-sequences do not depend on principal minors; moreover, whether or not a matrix is nonsingular is not revealed by its apr-sequence; the apr-sequence of a symmetric matrix $B \in \mathbb{F}nn$ has length $n-1$ ---while the epr-, sepr- and qpr-sequence each has length $n$; furthermore, unlike epr- and qpr-sequences, apr-sequences may end with $\tt S$. Another observation is that the apr-sequence of a $1 \times 1$ matrix is simply the empty list (or word), and, therefore, wherever the apr-sequence of an $n \times n$ matrix is involved, we assume that $n \geq 2$. In the remainder of the present section, some of the terminology we adopted is introduced, known results that are used frequently are listed, and facts about apr-sequences that will serve as tools in subsequent sections are established. In Section \ref{s: apr-sequence}, in particular, we establish a result analogous to Theorem \ref{NN Thm for epr} below (the $\tt NN$ Theorem for epr-sequences from \cite{EPR}), as well as a necessary condition for a sequence not containing an $\tt A$ to be the apr-sequence of a symmetric matrix (over an arbitrary field). 
Section \ref{s: No As} is devoted mostly to apr-sequences not containing an $\tt A$, which are completely characterized (for an arbitrary field) in Theorem \ref{No As}, and concludes by providing a necessary condition for a sequence (from $\{\tt A,N,S\}$) to be the apr-sequence of a symmetric matrix (over an arbitrary field). Section \ref{s: ap-rank} is focused on the ap-rank of a symmetric matrix (over an arbitrary field), where it is shown, in particular, that for a symmetric non-diagonal singular matrix $B$ not containing a zero row, $\rank(B) = \aprank(B)$. Section \ref{s: final} has concluding remarks, including an answer to Question \ref{qpr question}. In what follows, unless otherwise stated, $\mathbb{F}$ is used to denote an arbitrary field. Given a vector $x$ of length $n$, $x[\alpha]$ denotes the subvector of $x$ with entries indexed by $\alpha \subseteq \{1,2, \dots, n\}$. If the sequence $a_1a_2 \cdots a_{n-1}$ from $\{\tt A,N,S\}$ is the apr-sequence of a symmetric matrix over $\mathbb{F}$, then we will say that the sequence is {\em attainable} over $\mathbb{F}$ (or simply that the sequence is {\em attainable}, if what is meant is clear from the context). Given a sequence $t_{i_1}t_{i_2} \cdots t_{i_k}$, $\overline{t_{i_1}t_{i_2} \cdots t_{i_k}}$ indicates that the sequence may be repeated as many times as desired (or it may be omitted entirely). The matrices $B$ and $C$ are said to be {\em permutationally similar} if there exists a permutation matrix $P$ such that $C=P^TBP$. If replacing each of the nonzero entries of a matrix $P$ with a $1$ results in a permutation matrix, then we will say that $P$ is a \textit{generalized permutation} matrix. The zero matrix, identity matrix and all-$1$s matrix of order $n$ is denoted with $O_n$, $I_n$ and $J_n$, respectively; moreover, $O_0$, $I_0$ and $J_0$ are understood to be vacuous. The block diagonal matrix with the matrices $B$ and $C$ on the diagonal (in that order) is denoted by $B \oplus C$. \subsection{Known results} $\null$ \indent In this section, known results that are used frequently are listed, of which some have been assigned abbreviated nomenclature. We start with a well-known fact (see \cite{BIRS13}, for example), which states that the rank of a symmetric matrix $B$ is equal to the order of a largest nonsingular principal submatrix of $B$; because of this, we will call the rank of a symmetric matrix \textit{principal}. {\bf e}gin{thm} \label{thm: rank of a symm mtx} {\rm \cite[Theorem 1.1]{BIRS13}} Let $B \in \mathbb{F}nn$ be symmetric. Then $\rank(B) = \max \{ |\gamma| : \det (B[\gamma]) \neq 0 \}$. \end{thm} For a given matrix $B$ having a nonsingular principal submatrix $B[\gamma]$, we denote by $B/B[\gamma]$ the Schur complement of $B[\gamma]$ in $B$ (see \cite{Schur}). The following result is also a well-known fact (see \cite{Brualdi & Schneider}). {\bf e}gin{thm} \label{schur} {\rm (Schur Complement Theorem.)} Let $B \in \mathbb{F}nn$ be symmetric with $\rank(B)=r$. Let $B[\gamma]$ be a nonsingular principal submatrix of $B$ with $|\gamma| = k \leq r$, and let $C = B/B[\gamma]$. Then the following statements hold: {\bf e}gin{enumerate} \item [$(i)$]\label{p1SC} $C$ is an $(n-k)\times (n-k)$ symmetric matrix. \item [$(ii)$]\label{p2SC} Assuming the indexing of $C$ is inherited from $B$, any minor of $C$ is given by \[ \det C[\alpha, {\bf e}ta] = \det B[\alpha \cup \gamma, {\bf e}ta \cup \gamma]/ \det B[\gamma].\] \item [$(iii)$]\label{p3SC} $\rank(C) = r-k$. 
\end{enumerate} \end{thm} Some necessary results about epr-sequences are listed now. The following theorem, which appears in \cite{EPR}, follows readily from Jacobi's determinantal identity. {\bf e}gin{thm}\label{Inverse Thm} {\rm \cite[Theorem 2.4]{EPR}} {\rm (Inverse Theorem for epr-Sequences.)} Let $B \in \mathbb{F}nn$ be symmetric and nonsingular. If $\epr(B) = \ell_1 \ell_2 \cdots \ell_{n-1}\tt A$, then $\epr(B^{-1}) = \ell_{n-1}\ell_{n-2} \cdots \ell_1 \tt A$. \end{thm} {\bf e}gin{thm} \label{NN Thm for epr} {\rm \cite[Theorem 2.3]{EPR}} {\rm ($\tt NN$ Theorem for epr-Sequences.)} Let $B \in \mathbb{F}nn$ be symmetric. Suppose that $\epr(B) = \ell_1 \ell_2 \cdots \ell_{n}$ and $\ell_{k} = \ell_{k+1} = \tt N$ for some $k$. Then $\ell_j = \tt N$ for all $j \geq k$. \end{thm} We now state some facts about qpr-sequences. {\bf e}gin{obs} {\rm \cite[Observation 2.1]{qpr}} \label{qpr rank} Let $B \in \mathbb{F}nn$ be symmetric. Then $\rank(B)$ is equal to the index of the last {\tt A} or {\tt S} in $\qpr(B)$. \end{obs} Since the rank of a symmetric matrix is principal, it is not hard to show that a statement analogous to Theorem \ref{NN Thm for epr} must hold for qpr-sequences; however, something stronger does hold: The next result from \cite{qpr} shows that the presence of a single $\tt N$ in the qpr-sequence of a symmetric matrix $B \in \mathbb{F}nn$ implies that the sequence has $\tt N$s from that point forward. This result is of particular relevance later, when we show that an analogous statement does not hold for apr-sequences. {\bf e}gin{thm}\label{qpr: N theorem} {\rm \cite[Theorem 2.6]{qpr}} Let $B \in \mathbb{F}nn$ be symmetric. Suppose that $\qpr(B) = q_1 q_2 \cdots q_n$ and $q_k = \tt N$ for some $k$. Then $q_j = \tt N$ for all $j \geq k$. \end{thm} We conclude this section with a lemma that is used repeatedly, which is immediate from the fact that appending a row and column to a matrix of rank $r$ results in a matrix whose rank is at most $r+2$. {\bf e}gin{lem} \label{rank when deleting} Let $B \in \mathbb{F}nn$ be nonsingular. Then the rank of any $(n-1)\times (n-1)$ submatrix of $B$ is at least $n-2$. \end{lem} \subsection{Tools for apr-sequences} $\null$ \indent Some results that will serve as tools to establish results in subsequent sections are provided in this section. The first is an immediate consequence of the Schur Complement Theorem, and it is, therefore, stated as a corollary. {\bf e}gin{cor}\label{schurAN} {\rm (Schur Complement Corollary.)} Let $B \in \mathbb{F}nn$ be symmetric, $\apr(B)=a_1 a_2 \cdots a_{n-1}$ and $B[\gamma]$ a nonsingular principal submatrix of $B$, with $|\gamma| = k$. Let $C = B/B[\gamma]$ and $\apr(C)=a'_{1} a'_2 \cdots a'_{n-k-1}$. Then, for $j=1, \dots, n-k-1$, $a'_j=a_{j+k}$ if $a_{j+k} \in \{{\tt A,N}\}$. \end{cor} A result analogous to the Inverse Theorem for epr-Sequences can be established for apr-sequences, by applying Jacobi's determinantal identity: {\bf e}gin{thm}\label{Inverse Thm} {\rm (Inverse Theorem.)} Let $B \in \mathbb{F}nn$ be symmetric and nonsingular. If $\apr(B) = a_1a_2 \cdots a_{n-1}$, then $\apr(B^{-1}) = a_{n-1}a_{n-2} \cdots a_1$. \end{thm} Appending a zero row and a zero column to a matrix is a useful operation, since we can easily determine the apr-sequence of the resulting matrix if we have the apr-sequence of the original matrix, which leads us to the next observation, that is useful when dealing with sequences that do not contain an $\tt A$. 
\begin{obs}\label{append} Let $B \in \mathbb{F}^{n \times n}$ be symmetric, $\apr(B) = a_1 a_2 \cdots a_{n-1}$ and $B' = B \oplus O_1$. Then $\apr(B') = a'_1 a'_2 \cdots a'_{n-1} \tt N$, with $a'_j = a_j$ if $a_j = \tt N$, and with $a'_j = \tt S$ if $a_j \neq \tt N$, for all $j \in \{1,2, \dots, n-1\}$. \end{obs} Another useful tool for working with apr-sequences is the following fact, which is analogous to \cite[Theorem 2.6]{EPR} (Inheritance Theorem for epr-sequences). \begin{thm} {\rm (Inheritance Theorem.)} Let $B\in\mathbb{F}^{n \times n}$ be symmetric, $m \leq n$, and $1\le k \le m-1$. \begin{enumerate} \item If $[\apr(B)]_k={\tt N}$, then $[\apr(C)]_k={\tt N}$ for all $m\times m$ principal submatrices $C$ of $B$. \item If $[\apr(B)]_k={\tt A}$, then $[\apr(C)]_k={\tt A}$ for all $m\times m$ principal submatrices $C$ of $B$. \item If $m \geq 6$, $k \leq m-5$ and $[\apr(B)]_k = {\tt S}$, then there exists an $m \times m$ principal submatrix $C_S$ of $B$ such that $[\apr(C_S)]_k ={\tt S}$. \end{enumerate} \end{thm} \begin{proof} Statements (1) and (2) follow from the fact that an almost-principal submatrix of a principal submatrix of $B$ is also an almost-principal submatrix of $B$. We now establish the final statement. Suppose that $m \geq 6$, $k \leq m-5$ and $[\apr(B)]_k = {\tt S}$. Let $p_1,p_2,\dots,p_{k-1}, q_1,q_2,\dots,q_{k-1}, i, j ,r ,s \in \{1,2, \dots, n\}$ be indices such that the following are almost-principal submatrices of order $k$: \[ B[\{p_1,p_2,\dots,p_{k-1}, i\},\{p_1,p_2,\dots,p_{k-1}, j\}] \mbox{ \ and \ } B[\{q_1,q_2,\dots,q_{k-1}, r\},\{q_1,q_2,\dots,q_{k-1}, s\}]; \] moreover, assume that the former submatrix is nonsingular and the latter is singular. Without loss of generality, we may assume that any common indices between the lists $p_1,p_2,\dots,p_{k-1}$ and $q_1,q_2,\dots,q_{k-1}$ occur in the same position in each list; moreover, we may assume that these common indices (if any) appear consecutively at the beginning of each list. If \[ |\{q_1,q_2,\dots,q_{k-1}\} \cap \{i,j\}| = 2, \] then, without loss of generality, assume that $\{q_{k-2}, q_{k-1}\} = \{i,j\}$. If \[ |\{q_1,q_2,\dots,q_{k-1}\} \cap \{i,j\}| = 1, \] then, without loss of generality, assume that $q_{k-1} \in \{i,j\}$. Consider the following list of almost-principal submatrices of order $k$: \[ \begin{array}{c} B[\{p_1,p_2,p_3,\ldots,p_{k-3},p_{k-2},p_{k-1},i\}, \{p_1,p_2,p_3,\ldots,p_{k-3},p_{k-2},p_{k-1}, j\}],\\ B[\{q_1,p_2,p_3,\ldots,p_{k-3},p_{k-2},p_{k-1},i\}, \{q_1,p_2,p_3,\ldots,p_{k-3},p_{k-2},p_{k-1}, j\}],\\ B[\{q_1,q_2,p_3,\ldots,p_{k-3},p_{k-2},p_{k-1},i\}, \{q_1,q_2,p_3,\ldots,p_{k-3},p_{k-2},p_{k-1}, j\}],\\ B[\{q_1,q_2,q_3,\ldots,p_{k-3},p_{k-2},p_{k-1},i\}, \{q_1,q_2,q_3,\ldots,p_{k-3},p_{k-2},p_{k-1}, j\}], \\ \cdots\\ B[\{q_1,q_2,q_3,\ldots,q_{k-3},p_{k-2},p_{k-1},i\}, \{q_1,q_2,q_3,\ldots,q_{k-3},p_{k-2},p_{k-1}, j\}], \\ B[\{q_1,q_2,q_3,\ldots,q_{k-3},q_{k-2},q_{k-1},r\}, \{q_1,q_2,q_3,\ldots,q_{k-3},q_{k-2},q_{k-1}, s\}]. \\ \end{array} \] Since the first submatrix in the above list is nonsingular and the last is singular, this list of submatrices must contain a nonsingular submatrix and a singular submatrix appearing consecutively, say, $B[\alpha, \beta]$ and $B[\gamma, \mu]$. Note that $|\alpha \cup \beta \cup \gamma \cup \mu| \leq k+5 \leq m$. Then by letting $C_S$ be an $m \times m$ principal submatrix of $B$ containing $B[\alpha \cup \beta \cup \gamma \cup \mu]$, the desired conclusion follows.
\end{proof} \section{The almost-principal rank characteristic sequence}\label{s: apr-sequence} $\null$ \indent We begin with a simple but useful observation. {\bf e}gin{obs} \label{N...} Let $B \in \mathbb{F}nn$ be symmetric. Suppose that $\apr(B) = a_1 a_2 \cdots a_{n-1}$ and $a_1 = \tt N$. Then $B$ is a diagonal matrix and $a_j = \tt N$ for all $j \geq 1$. \end{obs} {\bf e}gin{prop} \label{A...N and S...N} Let $B \in \mathbb{F}nn$ be symmetric. Suppose that $\apr(B) = {\tt A} a_2 a_3 \cdots a_{n-2}{\tt N}$ or $\apr(B) = {\tt S} a_2 a_3 \cdots a_{n-2}{\tt N}$. Then $B$ is singular. \end{prop} \begin{proof} Since $\apr(B)$ does not begin with $\tt N$, $B$ is not a diagonal matrix. If $B$ was nonsingular, then, since each of its almost-principal minors of order $n-1$ is zero, $B^{-1}$ is a diagonal matrix, which would imply that $B$ itself is a diagonal matrix, leading to a contradiction. \end{proof} The $\tt NN$ Theorem for epr-sequences states that if the epr-sequence of a symmetric matrix $B \in \mathbb{F}nn$ contains two consecutive $\tt N$s, then it must contain $\tt N$s from that point forward; the same statement holds for apr-sequences: {\bf e}gin{thm} \label{thm:N implies all N} {\rm ({\tt NN} Theorem.)} Let $B \in \mathbb{F}nn$ be symmetric. Suppose that $\apr(B) = a_1 a_2 \cdots a_{n-1}$ and $a_{k} = a_{k+1} = \tt N$ for some $k$. Then $a_j = \tt N$ for all $j \geq k$. \end{thm} \begin{proof} If $k=n-2$, then there is nothing to prove; thus, assume that $k \leq n-3$. It suffices to show that $a_{k+2} = \tt N$. Suppose to the contrary that $a_{k+2} \neq \tt N$. Let $B[\alpha \cup\{i\}, \alpha \cup\{j\}]$ be a nonsingular almost-principal submatrix of $B$ with $|\alpha| = k+1$ (note that $i \neq j$). We now show that $B$ has a nonsingular $k \times k$ principal submatrix contained in the $(k+1)\times (k+1)$ submatrix $B[\alpha]$. There are two cases: \noindentdent {\bf Case 1}: $B[\alpha]$ is nonsingular. \noindentdent Since $a_{k} = \tt N$, the Inheritance Theorem implies that every almost-principal minor of $B[\alpha]$ of order $k$ is zero. Then, as $B[\alpha]$ is a $(k+1) \times (k+1)$ nonsingular matrix, its inverse is a diagonal matrix, which implies that $B[\alpha]$ is a (nonsingular) diagonal matrix. It now follows immediately that $B[\alpha]$ contains a nonsingular $k \times k$ principal submatrix, as desired. \noindentdent {\bf Case 2}: $B[\alpha]$ is singular. \noindentdent Since $B[\alpha \cup\{i\}, \alpha \cup\{j\}]$ is a nonsingular $(k+2)\times (k+2)$ matrix, Lemma \ref{rank when deleting} implies that $\rank(B[\alpha]) \geq (k+2)-2 = k$. Then, as $B[\alpha]$ is $(k+1) \times (k+1)$ singular matrix, $\rank(B[\alpha]) = k$. Since the rank of a symmetric matrix is principal, $B[\alpha]$ contains a nonsingular $k \times k$ principal submatrix, as desired. Let $B[\gamma]$ be a nonsingular $k \times k$ principal submatrix (of $B[\alpha]$) with $\gamma \subseteq \alpha$ and $|\gamma| = k$. Then, as $|\gamma|=|\alpha|-1$, $\alpha = \gamma \cup \{p\}$ for some $p$ (note that $p \neq i,j$). Let $C=B/B[\gamma]$ and assume that $C$ inherits the indexing from $B$. Observe that $C$ is an $(n-k) \times (n-k)$ matrix (by the Schur Complement Theorem), and that $n-k \geq 3$. Suppose that $\apr(C)=a'_{1} a'_2 \cdots a'_{n-k-1}$ Then, as $a_{k+1} = \tt N$, the Schur Complement Corollary implies that $a'_{1} = \tt N$. It follows from Observation \ref{N...} that $\apr(C)=\tt NN\overline{N}$. Hence, $\det C[\{p,i\}, \{p,j\}] = 0$. 
However, by the Schur Complement Theorem, {\bf e}gin{align*} \det C[\{p,i\}, \{p,j\}] &= \frac{\det B[\gamma \cup \{p,i\}, \gamma \cup\{p,j\}]}{\det B[\gamma]} \\ &=\frac{\det B[\alpha \cup \{i\}, \alpha \cup\{j\}]}{\det B[\gamma]} \\ & \neq 0. \end{align*} Hence, we have a contradiction. \end{proof} Now that we have the $\tt NN$ Theorem (for apr-sequences), a question arises: Does a statement analogous to Theorem \ref{qpr: N theorem} hold for apr-sequences? It does not: {\bf e}gin{ex} \rm For the matrix \[ B = \mtx{0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0}, \] $\apr(B) = \tt SNS$. \end{ex} The next result is a corollary to the $\tt NN$ Theorem. {\bf e}gin{cor} Let $B\in\mathbb{F}nn$ be symmetric. Suppose that $\apr(B)$ contains $\tt NN$. If $\apr(B) \neq \tt NN\overline{N}$, then $B$ is singular. \end{cor} \begin{proof} We will establish the contrapositive. Suppose that $B$ is nonsingular. Let $\apr(B) = a_1a_2 \cdots a_{n-1}$, and suppose that $a_k a_{k+1}= \tt NN$ for some $k$. By the $\tt NN$ Theorem, $a_j = \tt N$ for all $j \geq k$. Hence, $\apr(B) = a_1a_2 \cdots a_{k-1} \tt NN\overline{N}$. By the Inverse Theorem, $\apr(B^{-1}) = {\tt \overline{N} NN}a_{k-1} \cdots a_2a_1$. Then, by the $\tt NN$ Theorem, $a_j = \tt N$ for all $j \leq k-1$, implying that $\apr(B) = \tt NN \overline{N}$, as desired. \end{proof} We now show that $\tt NA$ does not occur as a subsequence of the apr-sequence of a symmetric matrix $B \in \mathbb{F}nn$. But, to do so, we need a lemma: {\bf e}gin{lem} \label{zero diagonal} Let $B\in\mathbb{F}nn$ be symmetric. Suppose that $\apr(B) = a_1{\tt N}a_3 \cdots a_{n-1}$ and $\epr(B) = {\tt N}\ell_2\ell_3 \cdots \ell_n$. Then $\apr(B)$ does not contain $\tt A$. \end{lem} \begin{proof} If $B=O_n$, then the desired conclusion follows by noting that $\apr(B) = \tt NN\overline{N}$. Suppose that $B \neq O_n$, and let $B = [b_{ij}]$. By hypothesis, $b_{ii} = 0$ for all $i$, implying that $B$ must contain a nonzero off-diagonal entry. Without loss of generality, we may assume that $b_{12} \neq 0$. Since every order-2 almost-principal minor of $B$ is zero, $\det B[\{1,2\}, \{1,j\}] = 0$ and $\det B[\{1,2\}, \{2,j\}] = 0$ for all $3 \leq j \leq n$. Then, as $\det B[\{1,2\}, \{1,j\}] = - b_{1j}b_{21} = - b_{1j}b_{12}$ and $\det B[\{1,2\}, \{2,j\}] = b_{12}b_{2j}$, $b_{1j} = b_{2j} = 0$ for all $3 \leq j \leq n$. Since $B$ is symmetric, $b_{j1} = b_{j2} = 0$ for all $3 \leq j \leq n$, implying that $B$ is a block-diagonal matrix with a $2 \times 2$ block. It follows that $a_1 \neq \tt A$. Now, note that, for all $k \in \{2, 3, \dots, n-1\}$, the $k \times k$ almost-principal submatrix $B[[k], [k+1] \setminus \{2\}]$ (where $[p] := \{1,2, \dots , p\}$) is singular, since its first row is zero, as $b_{11}=0$. Hence, $a_{k} \neq \tt A$ for all $k \geq 2$. \end{proof} {\bf e}gin{thm} \label{NA} The sequence $\tt NA$ does not occur as a subsequence of the apr-sequence of a symmetric matrix over a field $\mathbb{F}$. \end{thm} \begin{proof} Let $B\in\mathbb{F}nn$ be symmetric, $\apr(B) = a_1a_2 \cdots a_{n-1}$ and $\epr(B) = \ell_1\ell_2 \cdots \ell_{n}$. Suppose to the contrary that $a_ka_{k+1} = \tt NA$ for some $k$. By Observation \ref{N...}, $k \geq 2$. We now show that $\ell_{k-1}= \tt N$. Suppose to the contrary that $\ell_{k-1} \neq \tt N$. Let $B[\gamma]$ be nonsingular with $|\gamma| = k-1$. By the Schur Complement Corollary, $\apr(B/B[\gamma]) = \tt NA \cdots $, which contradicts Observation \ref{N...}. Hence, $\ell_{k-1}= \tt N$. By Lemma \ref{zero diagonal}, $k-1 \geq 2$. 
Since $a_{k+1} = \tt A$, $\rank(B) \geq k+1$. Then, as the rank of $B$ is principal, the $\tt NN$ Theorem for epr-Sequences implies that $\ell_{k-2} \neq \tt N$. Let $B[\mu]$ be nonsingular with $|\mu| = k-2$. Since $a_ka_{k+1} = \tt NA$, the Schur Complement Corollary implies that $\apr(B/B[\mu]) = \tt YNA \cdots$ for some $\tt Y \in \{\tt A,N,S\}$. Since $\ell_{k-1} = \tt N$, the Schur Complement Theorem implies that $\epr(B/B[\mu]) = \tt N \cdots$, which contradicts Lemma \ref{zero diagonal}. \end{proof} The fact that an analogous version of Theorem \ref{qpr: N theorem} does not hold in general for apr-sequences raises a natural question: What restrictions (if any) can be added to the hypothesis of Theorem \ref{qpr: N theorem} in order to have its conclusion hold for apr-sequences? Requiring the apr-sequence to contain $\tt A$ as a subsequence is one such restriction: {\bf e}gin{thm}\label{...A...} Let $B \in \mathbb{F}nn$ be symmetric. Suppose that $\apr(B) = a_1 a_2 \cdots a_{n-1}$ and $a_{k} = \tt A$ for some $k$. Then neither $\tt NA$ nor $\tt NS$ is a subsequence of $\apr(B)$. Equivalently, if $a_t = \tt N$ for some $t$, then $a_j = \tt N$ for all $j \geq t$. \end{thm} \begin{proof} By Theorem \ref{NA}, $\apr(B)$ does not contain $\tt NA$. Suppose to the contrary that $a_{p}a_{p+1} = \tt NS$ for some $p$. Obviously, $p \neq k$ and $p \neq k-1$. Thus, $p \leq k-2$ or $p \geq k+1$. Let $\epr(B) = \ell_1\ell_2 \cdots \ell_n$. We now examine all possibilities in two cases. \noindentdent {\bf Case 1}: $p \leq k-2$. \noindentdent By Observation \ref{N...}, $p \geq 2$. We now show that $\ell_{p-1} = \tt N$. If $\ell_{p-1} \neq \tt N$, then $B$ has a nonsingular $(p-1) \times (p-1)$ principal submatrix, say, $B[\gamma]$, which would imply that $\apr(B/B[\gamma]) = \tt N \cdots A \cdots$ (by the Schur Complement Corollary), contradicting Observation \ref{N...}. Hence, $\ell_{p-1} = \tt N$. By Lemma \ref{zero diagonal}, $p \geq 3$. Since $a_k = \tt A$, $\rank(B) \geq k > p$. Then, as $\ell_{p-1} = \tt N$ and the rank of $B$ is principal, $\ell_{p-2} \neq \tt N$ (see the $\tt NN$ Theorem for epr-Sequences). Let $B[\mu]$ be a nonsingular $(p-2) \times (p-2)$ (principal) submatrix. Then, by the Schur Complement Corollary and Schur Complement Theorem, $\apr(B/B[\mu]) = \tt XN \cdots A \cdots$ and $\epr(B/B[\mu]) = \tt N \cdots$, for some $\tt X \in \{A,N,S\}$, contradicting Lemma \ref{zero diagonal}. \noindentdent {\bf Case 2}: $p \geq k+1$. \noindentdent Since $a_{p+1} = \tt S$, $\rank(B) \geq p+1$. We proceed by considering two cases. \noindentdent {\bf Subcase A}: $B$ contains a nonsingular $(p+1) \times (p+1)$ principal submatrix. \noindentdent Let $B[\alpha]$ be nonsingular with $|\alpha| = p+1$. By the Inheritance Theorem, $\apr(B[\alpha]) = \cdots \tt A \cdots N$. Then, by the Inverse Theorem, $\apr((B[\alpha])^{-1}) = \tt N \cdots A \cdots$, which contradicts Observation \ref{N...}. \noindentdent {\bf Subcase B}: $B$ does not contain a nonsingular $(p+1) \times (p+1)$ principal submatrix. \noindentdent Clearly, $\ell_{p+1} = \tt N$. Since $\rank(B) \geq p+1$, and because the rank of $B$ is principal, the $\tt NN$ Theorem for epr-Sequences implies that $B$ contains a nonsingular $(p+2) \times (p+2)$ principal submatrix, say, $C$. Let $\apr(C) = a'_1a'_2 \cdots a'_{p+1}$ and $\epr(C) = \ell'_1 \ell'_2 \cdots \ell'_{p+1} \tt A$. By the Inheritance Theorem, $a'_k = \tt A$ and $a'_p = \tt N$. Since every principal submatrix of $C$ is also a principal submatrix of $B$, $\ell'_{p+1} = \tt N$. 
Thus far, we have that $\apr(C) = a'_1a'_2 \cdots a'_{k-1}{\tt A} \cdots {\tt N}a'_{p+1}$ and $\epr(C) = \ell'_1 \ell'_2 \cdots \ell'_{p} \tt NA$. By the Inverse Theorem and the Inverse Theorem for epr-Sequences, $\apr(C^{-1}) = a'_{p+1}{\tt N} \cdots {\tt A}a'_{k-1} \cdots a'_{2}a'_{1}$ and $\epr(C^{-1}) = {\tt N}\ell'_p \cdots \ell'_2 \ell'_{1} {\tt A}$, which contradicts Lemma \ref{zero diagonal}. \end{proof} \section{Sequences not containing an $\tt A$}\label{s: No As} $\null$ \indent In this section, we confine our attention to the apr-sequences not containing $\tt A$ as a subsequence, for which a complete characterization will be provided (see Theorem \ref{No As}). This characterization is then used to obtain a necessary condition for a sequence to be the apr-sequence of a symmetric matrix over an arbitrary field $\mathbb{F}$. We will start by focusing on sequences that begin with $\tt SN$. We introduce two matrices that are central to this section: \[ L_2(a) := \mtx{1&1\\1&a} \mbox{ \ and \ \ } A(K_2) := \mtx{0&1\\1&0}, \] where $a \in \mathbb{F}$. {\bf e}gin{lem}\label{SN lemma} Let $B \in \mathbb{F}nn$ and $n \geq 3$. Suppose that $\apr(B) = a_1 a_2 \cdots a_{n-1}$. Then the following statements hold: {\bf e}n \item If $B = J_{n-k} \oplus O _k$ for some integer $k$ with $1 \leq k \leq n-1$, then $\apr(B) = \tt SN\overline{N}$. \item If $B = L_2(a) \oplus O _{n-2}$ for some $a \in \mathbb{F}$, then $\apr(B) = \tt SN\overline{N}$. \item If $B=A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2) \oplus O _k$ for some integer $k$ with $0 \leq k \leq n-2$, then $\apr(B) = \tt S\overline{NS} \hspace{0.4mm} \overline{N}$, with $\tt \overline{N}$ containing $k$ copies of $\tt N$. \end{enumerate} \end{lem} \begin{proof} The verification of Statements (1) and (2) is omitted, since it is trivial. Statement (3) is established by examining two cases. First, consider the case when $B=A(K_2) \oplus O_k$ with $k \geq 1$. In that case, obviously, $a_1 = \tt S$, and, since every almost-principal submatrix of $B$ of order 2 or larger would contain a zero row or a zero column, we would have $\apr(B) = {\tt SN} \overline{\tt N}$. Finally, to establish the remaining cases of Statements (3), by Observation \ref{append}, it suffices to show that the matrix \[ C=A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2), \] with at least two copies of $A(K_2)$, has apr-sequence $\tt SNS\overline{NS}$. Let $\apr(C) = a'_1a'_2 \cdots a'_{m-1}$, where $m \geq 4$ (thus, $C$ is an $m\times m$ matrix and $m$ is even). Clearly, $a'_1 = \tt S$. Let $p$ be an odd integer with $3 \leq p \leq m-1$. We now show that $a'_p = \tt S$. Notice that \[ B[\{1,2, \dots, p\}, \{1,2, \dots, p+1\}\setminus \{p\}] = A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2) \oplus J_1 \] (with $\frac{p-1}{2}$ copies of $A(K_2)$), which is nonsingular, and that $B[\{1,2, \dots, p\}, \{2, 3,\dots, p+1\}]$ is singular, since its second row consists entirely of zeros (i.e., it is a zero row). Hence, $a'_p = \tt S$, as desired. We now show that $a'_q = \tt N$ if $q$ is even. First, observe that any principal submatrix of $B$ of odd order contains a zero row and a zero column. Let $q$ be an even integer with $2 \leq q \leq m-2$, and suppose that $B[\alpha \cup \{i\}, \alpha \cup \{j\}]$ is a $q \times q$ almost-principal submatrix; thus, $i \neq j$ and $|\alpha| = q-1$. Hence, $B[\alpha \cup \{i,j\}]$ is a $(q+1) \times (q+1)$ (principal) submatrix of odd order, implying that $B[\alpha \cup \{i,j\}]$ contains a zero row and a zero column. 
Hence, any $q \times q$ almost-principal submatrix of $B[\alpha \cup \{i,j\}]$ contains either a zero row or a zero column. Then, as $B[\alpha \cup \{i\}, \alpha \cup \{j\}]$ is a submatrix of $B[\alpha \cup \{i,j\}]$, $B[\alpha \cup \{i\}, \alpha \cup \{j\}]$ is singular. It follows that $a'_q= \tt N$. We conclude that $\apr(C)=\tt SNS\overline{NS}$, as desired. \end{proof} \begin{prop}\label{SN} Let $B \in \mathbb{F}^{n \times n}$ be symmetric. Suppose that $\apr(B) = {\tt SN}a_3 a_4 a_5 \cdots a_{n-1}$. Then one of the following statements holds: \begin{enumerate} \item There exists a generalized permutation matrix $P$ and a nonzero constant $c \in \mathbb{F}$ such that $cP^{T}BP = J_{n-k} \oplus O_k$ for some integer $k$ with $1 \leq k \leq n-1$. Moreover, $\apr(B) = \tt SN\overline{N}$. \item There exists a generalized permutation matrix $P$, a nonzero constant $c \in \mathbb{F}$ and $a \in \mathbb{F}$ such that $cP^{T}BP = L_2(a) \oplus O_{n-2}$. Moreover, $\apr(B) = \tt SN\overline{N}$. \item There exists a generalized permutation matrix $P$ such that $P^TBP= A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2) \oplus O_k$ for some integer $k$ with $0 \leq k \leq n-2$. Moreover, $\apr(B) = \tt S\overline{NS} \hspace{0.4mm} \overline{N}$, with $\tt \overline{N}$ containing $k$ copies of $\tt N$. \end{enumerate} \end{prop} \begin{proof} Suppose that $B=[b_{ij}]$. Since $\apr(B)$ begins with $\tt S$, $B$ contains at least one nonzero off-diagonal entry. It suffices to show that the first sentence of one of Statements (1), (2) and (3) holds, since the remaining part of the statements follows immediately from Lemma \ref{SN lemma}. We proceed by examining two cases. \noindent {\bf Case 1}: $B$ contains a row with more than one nonzero off-diagonal entry. \noindent We now show that Statement (1) holds. Since a simultaneous permutation of the rows and columns of $B$ leaves $\apr(B)$ invariant, we may assume that the first row of $B$ contains more than one nonzero off-diagonal entry. Furthermore, we may assume that $b_{1j} \neq 0$ for $j \in \{2,3, \dots,p\}$ for some $p \geq 3$, and that $b_{1j} = b_{j1} = 0$ for $j \in \{p+1, p+2, \dots,n\}$ (note that $\{p+1, p+2, \dots,n\}$ is empty if $n=3$). Let $\alpha = \{1, 2, \dots, p\}$ and $\beta = \{p+1, p+2, \dots, n\}$. Since $a_2 = \tt N$, $b_{11}b_{23} - b_{12}b_{13} = \det B[\{1,2\}, \{1,3\}] = 0$. Then, as $b_{12}b_{13} \neq 0$, $b_{11} \neq 0$. Since multiplying $B$ by a nonzero constant leaves $\apr(B)$ invariant, we may assume that $b_{11} = 1$. Furthermore, we may assume that $b_{1j} = b_{j1} = 1$ for all $j \in \alpha$, as multiplying a row and column of $B$ by a nonzero constant leaves $\apr(B)$ invariant. Thus far, we have that $b_{11}=1$, that $b_{1j} = b_{j1} = 1$ for all $j \in \alpha$, and that $b_{1j} = b_{j1} = 0$ for all $j \in \beta$. Now, note that for all $i,j \in\{1,2, \dots, n\}$, with $i \neq j$, and all $t \in\{1,2, \dots, n\} \setminus \{i,j\}$, \[ \det B[\{t,i\}, \{t,j\}] = b_{tt}b_{ij} - b_{ti}b_{tj}. \] Since $a_2 = \tt N$, we have that, for $i \neq j$ and $t \in\{1,2, \dots, n\} \setminus \{i,j\}$, \[ b_{tt}b_{ij} = b_{ti}b_{tj}. \] Since $b_{11} = 1$, $b_{ij} = b_{1i}b_{1j}$ for all $i,j \in \{2,3, \dots, n\}$ with $i \neq j$. Thus, if $i,j \in \alpha$ and $i \neq j$, then $b_{ij}=1$, implying that every off-diagonal entry of $B[\alpha]$ is 1. Moreover, if $i \in \beta$ or $j \in \beta$, with $i \neq j$, then $b_{ij}=0$. Hence, $B[\beta]$ is a diagonal matrix and $B=B[\alpha] \oplus B[\beta]$.
We now show that $B[\alpha] = J_p$. Observe that if $t \in \alpha$ and $i,j \in \{1,2, \dots, p\} \setminus\{t\}$ with $i \neq j$, then $b_{tt} = b_{ti}b_{tj}/b_{ij} =1$. Hence, $B[\alpha] = J_p$. We now show that $B[\beta]=O_{n-p}$. Since $a_1 = \tt S$, it follows that $n>p$, as otherwise we would have $n=p$, which would imply that $B=B[\alpha] = J_p$, whose apr-sequence is $\tt AN\overline{N}$, which is a contradiction. It follows that $\beta$ is nonempty. If $t \in \beta$, then $b_{tt} =b_{t1}b_{t2}/b_{12} = 0$. Thus, $B[\beta]=O_{n-p}$. Then, with $k:=n-p$, we have that $B=J_{n-k} \oplus O_k$. It is easy to see that the operations performed on $B$ that resulted in the matrix $J_{n-k} \oplus O_k$ can be accomplished by finding an appropriate generalized permutation matrix $P$ and a nonzero constant $c$ such that $cP^{T}BP = J_{n-k} \oplus O_k$. Moreover, observe that $1 \leq k \leq n-3 \leq n-1$. Hence, Statement (1) holds. \noindent {\bf Case 2}: Each row of $B$ contains at most one nonzero off-diagonal entry. \noindent Since a simultaneous permutation of the rows and columns of $B$ leaves $\apr(B)$ invariant, we may assume that $b_{12}\neq 0$ and $b_{1j} =0$ for $j \in \{3,4, \dots, n\}$. Since $B$ does not contain a row with more than one nonzero off-diagonal entry, and because $B$ is symmetric, $B=B[\{1,2\}] \oplus B[\{3,4, \dots, n\}]$. Moreover, since multiplying a row and column of $B$ by a nonzero constant leaves $\apr(B)$ invariant, we may assume that $b_{12}=b_{21}=1$. Then, as $a_2= \tt N$, $0=\det B[\{1,j\},\{2,j\}] = b_{jj}$ for $j \in \{3,4, \dots, n\}$. Hence, $B[\{3,4, \dots, n\}]$ has zero diagonal. \noindent {\bf Subcase A}: $B[\{3,4, \dots, n\}] = O_{n-2}$. \noindent If $b_{11}=b_{22}=0$, then $B=A(K_2) \oplus O_{n-2}$, implying that Statement (3) holds. Now, suppose that $b_{11} \neq 0$ or $b_{22} \neq 0$. We may assume that $b_{11} \neq 0$. Then by multiplying $B$ by $\frac{1}{b_{11}}$, and then multiplying row 2 and column 2 of $B$ by $b_{11}$, we obtain the matrix $L_2(a) \oplus O_{n-2}$ for some $a$. Without loss of generality, we may assume that $B=L_2(a) \oplus O_{n-2}$. It is easy to verify that the operations performed on $B$ that led to the matrix $L_2(a) \oplus O_{n-2}$ can be accomplished by finding an appropriate generalized permutation matrix $P$ and a nonzero constant $c$ such that $cP^{T}BP = L_2(a) \oplus O_{n-2}$. Hence, Statement (2) holds. \noindent {\bf Subcase B}: $B[\{3,4, \dots, n\}] \neq O_{n-2}$. \noindent Since $B[\{3,4, \dots, n\}]$ has zero diagonal, $B[\{3,4, \dots, n\}]$ must have a nonzero off-diagonal entry. Without loss of generality, we may assume that $b_{34} \neq 0$ and $b_{3j} = 0$ for $j \in \{5,6, \dots, n\}$. Then, as $a_2 = \tt N$, $0=\det B[\{1,3\},\{1,4\}]=b_{11}b_{34}$ and $0=\det B[\{2,3\},\{2,4\}]=b_{22}b_{34}$. Since $b_{34} \neq 0$, $b_{11} =b_{22}= 0$. Hence, $B=A(K_2) \oplus B[\{3,4, \dots, n\}]$. Since every almost-principal minor of $B[\{3,4, \dots, n\}]$ is an almost-principal minor of $B$, $\apr(B[\{3,4, \dots, n\}])$ begins with $\tt SN$. By our assumption in the present case (Case 2), $B[\{3,4, \dots, n\}]$ does not contain a row with more than one nonzero off-diagonal entry.
Thus, we can apply our findings in Subcase A and Subcase B of the present case (Case 2) to the matrix $B[\{3,4, \dots, n\}]$: Since $B[\{3,4, \dots, n\}]$ has zero diagonal, we conclude that we may assume that either $B[\{3,4, \dots, n\}] = A(K_2) \oplus B[\{5,6, \dots, n\}]$ (if $n \geq 5$) or $B[\{3,4, \dots, n\}] = A(K_2)$ (if $n=4$). Hence, we must have $B=A(K_2) \oplus A(K_2)$ if $n=4$, and $B = A(K_2) \oplus A(K_2) \oplus B[\{5,6, \dots, n\}]$ if $n \geq 5$. It is not hard to see now that continuing this process will allow us to assume, without loss of generality, that $B=A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2) \oplus O _k$ for some integer $k$ with $0 \leq k \leq n-2$ (where the parity of $k$ is the same as that of $n$); hence, Statement (3) holds. \end{proof} We now turn our attention to sequences that begin with $\tt SS$. {\bf e}gin{prop}\label{SS...NS} A sequence of the form ${\tt SS}a_3a_4 \cdots a_{n-3}{\tt NS}$ is not the apr-sequence of a symmetric matrix over $\mathbb{F}$. \end{prop} \begin{proof} Suppose to the contrary that there exists a symmetric matrix $B \in \mathbb{F}nn$ such that $\apr(B) = {\tt SS}a_3a_4 \cdots a_{n-3}{\tt NS}$. Observe that $B$ is singular (otherwise, the Inverse Theorem would imply that $B^{-1}$ has apr-sequence ${\tt SN} a_{n-3} \cdots a_4a_3 \tt SS$, which would contradict Proposition \ref{SN}). Hence, $\rank(B) \leq n-1$. Since $\apr(B)$ ends with $\tt S$, $B$ contains a nonsingular $(n-1) \times (n-1)$ (almost-principal) submatrix, implying that $\rank(B) = n-1$. Since the rank of $B$ is principal, $B$ contains a nonsingular $(n-1) \times (n-1)$ principal submatrix, say, $B'$. Without loss of generality, we may assume that $B'=B[\{1,2, \dots, n-1\}]$. By the Inheritance Theorem, $\apr(B')$ ends with $\tt N$. Then, as $B'$ is nonsingular, $(B')^{-1}$ is a diagonal matrix, implying that $B'$ is also a diagonal matrix. Then, as $B'$ is nonsingular, $b_{jj} \neq 0$ for all $j \in \{1,2, \dots, n-1\}$. Since $\apr(B)$ begins with $\tt S$, the last row (and last column) of $B$ must contain a nonzero off-diagonal entry. Without loss of generality, we may assume that $b_{1n} \neq 0$. Now, note that \[ \det(B[\{1,2, \dots, n-2\},\{1,2, \dots, n\} \setminus \{1,n-1\}]) = (-1)^{n-1}b_{1n} \prod_{j=2}^{n-2}b_{jj} \neq 0. \] Hence, $B$ contains an $(n-2) \times (n-2)$ nonsingular almost-principal submatrix, which contradicts the fact that $\apr(B) = {\tt SS}a_3a_4 \cdots a_{n-3}{\tt NS}$. \end{proof} {\bf e}gin{lem}\label{NS lemma} Let $B \in \mathbb{F}nn$ be symmetric, and let $k$ be an even integer. Suppose that $\apr(B) = a_1 a_2 \cdots a_{n-1}$ and $a_{k}a_{k+1} = \tt NS$. Let $B'$ be a $(k+2) \times (k+2)$ principal submatrix of $B$. If $B' = A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2)$, then $a_1 a_2 = \tt SN$. \end{lem} \begin{proof} Suppose that $B' = A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2)$. Thus, $a_1 = \tt S$. If $k=2$, then there is nothing to prove, and if $n=k+2$, then the desired conclusion follows from Lemma \ref{SN lemma}; thus, we assume that $k \geq 4$, and that $n \geq k+3$. Without loss of generality, we may assume that $B' = B[\{1,2, \dots, k+2\}]$. We now show that $B[\{k+3,k+4, \dots,n\}]$ has zero diagonal, and that \[ B = B' \oplus B[\{k+3,k+4, \dots,n\}]. \] Suppose that $B=[b_{ij}]$. To see that $B[\{k+3,k+4, \dots,n\}]$ has zero diagonal, let $j \in \{k+3, k+4, \dots,n\}$ and $\alpha =\{1,2,\dots, k\} \cup\{j\}$. 
Then, as $a_k=\tt N$, \[ 0= \det B[\alpha \setminus \{1\}, \alpha \setminus \{2\}]= (-1)^{\frac{k-2}{2}}b_{jj}\det(M), \] where $M$ is the $(k-1) \times (k-1)$ matrix \[ M=J_1 \oplus A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2). \] Then, as $\det(M) \neq 0$, $b_{jj} = 0$. It follows that $B[\{k+3,k+4, \dots,n\}]$ has zero diagonal, as desired. Now, to show that \[ B = B' \oplus B[\{k+3,k+4, \dots,n\}], \] let $p,q \in \{1,2, \dots, k+2\}$ and $j \in \{k+3, k+4, \dots,n\}$, where $p$ is odd and $q$ is even. We now show that $b_{pj}=0$ and $b_{qj}=0$. \noindentdent {\bf Case 1}: $p \leq k-2$ and $q \leq k-2$. \noindentdent Let $\alpha =\{1,2,\dots, k\}$. It is easy to see that \[ \det B[\alpha, (\alpha \setminus \{p+1\}) \cup \{j\}]= (-1)^{p+k} b_{pj}\det(F) \] and \[ \det B[\alpha, (\alpha \setminus \{q-1\}) \cup \{j\}]= (-1)^{q+k} b_{qj}\det(G), \] for some matrices $F$ and $G$ that are permutationally similar to the $(k-1) \times (k-1)$ matrix \[ J_1 \oplus A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2). \] Since $a_k = \tt N$, $\det B[\alpha, (\alpha \setminus \{p+1\}) \cup \{j\}]=0$ and $\det B[\alpha, (\alpha \setminus \{q-1\}) \cup \{j\}]=0$. Then, as $F$ and $G$ are nonsingular, $b_{pj}=0$ and $b_{qj}=0$, as desired. \noindentdent {\bf Case 2}: $p > k-2$ and $q > k-2$. \noindentdent Let ${\bf e}ta =\{1,2,\dots, k-2\}$. It is easy to see that \[ \det B[{\bf e}ta \cup \{p, p+1\}, {\bf e}ta \cup \{p,j\}]= -b_{pj}\det(H) \] and \[ \det B[{\bf e}ta \cup \{q-1,q\}, {\bf e}ta \cup \{q,j\}]= b_{qj}\det(H), \] where $H$ is the $(k-1) \times (k-1)$ matrix \[ H=A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2) \oplus J_1. \] Since $a_k = \tt N$, $\det B[{\bf e}ta \cup \{p, p+1\}, {\bf e}ta \cup \{p,j\}] =0$ and $\det B[{\bf e}ta \cup \{q-1,q\}, {\bf e}ta \cup \{q,j\}]=0$. Then, as $H$ is nonsingular, $b_{pj}=0$ and $b_{qj}=0$, as desired. It follows from Case 1 and Case 2 that \[ B = B' \oplus B[\{k+3,k+4, \dots,n\}], \] as desired. To conclude the proof, we show that $a_2 = \tt N$. If $B[\{k+3,k+4, \dots,n\}]$ is the zero matrix, then the desired conclusion follows from Lemma \ref{SN lemma}. Thus, assume that $B[\{k+3,k+4, \dots,n\}]$ is not the zero matrix. Since $B' = A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2)$, it suffices to show that there exists a generalized permutation matrix $P$ such that \[ P^TB[\{k+3,k+4, \dots,n\}]P= A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2) \oplus O _t \] for some integer $t$ with $t \geq 0$, since that would imply that there exists a generalized permutation matrix $Q$ such that \[ Q^TBQ= A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2) \oplus O _t, \] and, therefore, that $a_2 = \tt N$ (see Lemma \ref{SN lemma}). Since $B[\{k+3,k+4, \dots,n\}]$ is a nonzero matrix with zero diagonal, $n \geq k+4$. If $n=k+4$, then the fact that $B[\{k+3,k+4, \dots,n\}]$ has zero diagonal immediately implies that there exists a generalized permutation matrix $P$ such that $P^TB[\{k+3,k+4, \dots,n\}]P = A(K_2)$, as desired. Thus, we assume that $n \geq k+5$ (implying that the order of $B[\{k+3,k+4, \dots,n\}]$ is greater than or equal to $3$). Since $B[\{k+3,k+4, \dots,n\}]$ has zero diagonal, Proposition \ref{SN} implies that it suffices to show that $\apr(B[\{k+3,k+4, \dots,n\}])$ begins with $\tt SN$. We start by showing that $\apr(B[\{k+3,k+4, \dots,n\}])$ begins with $\tt S$. Since $B[\{k+3,k+4, \dots,n\}]$ is a nonzero matrix with zero diagonal, $\apr(B[\{k+3,k+4, \dots,n\}])$ does not begin with $\tt N$. 
Suppose to the contrary that $\apr(B[\{k+3,k+4, \dots,n\}])$ begins with $\tt A$. Since $B[\{k+3,k+4, \dots,n\}]$ has zero diagonal, $B[\{k+3,k+4, \dots,n\}] = J_{n-k-2} - I_{n-k-2}$. Let $\theta = \{1,2, \dots, k-2\}$. Note that $B[\theta] = A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2)$, and that \[ B[\theta \cup \{k+3,k+4\}, \theta \cup \{k+3, k+5\}] = B[\theta] \oplus B[\{k+3,k+4\}, \{k+3, k+5\}]. \] Then, as \[ B[\{k+3,k+4\}, \{k+3, k+5\} = \mtx{0&1 \\ 1&1} \] is nonsingular, $B[\theta] \oplus B[\{k+3,k+4\}, \{k+3, k+5\}]$ is nonsingular, implying that $B[\theta \cup \{k+3,k+4\}, \theta \cup \{k+3, k+5\}]$ is nonsingular, a contradiction to the fact that $a_k = \tt N$. Hence, it follows that $\apr(B[\{k+3,k+4, \dots,n\}])$ begins with $\tt S$. It now remains to show that the second letter in $\apr(B[\{k+3,k+4, \dots,n\}])$ is $\tt N$. Let $p,q,r \in \{k+3, k+4, \dots, n\}$ be distinct integers, and let $\mu = \{1,2, \dots, k+2\}$. We now show that $\det(B[\{p,q\},\{p,r\}])=0$. Notice that $B[\mu]=B'$, and that \[ B[\mu \cup \{p,q\}, \mu \cup\{p,r\}] = B' \oplus B[\{p,q\},\{p,r\}]. \] Since $a_k = \tt N$, $B[\mu \cup \{p,q\}, \mu \cup\{p,r\}]$ is singular. Then, as $B'$ is nonsingular, $B[\{p,q\},\{p,r\}]$ is singular, implying that $\det(B[\{p,q\},\{p,r\}])=0$, as desired. Hence, every $2 \times 2$ almost-principal submatrix of $B[\{k+3,k+4, \dots,n\}]$ is singular, implying that $\apr(B[\{k+3,k+4, \dots,n\}])$ begins with $\tt SN$, as desired. \end{proof} {\bf e}gin{thm}\label{NS} Let $B \in \mathbb{F}nn$ be symmetric. Suppose that $\apr(B) = a_1 a_2 \cdots a_{n-1}$ and $a_{k}a_{k+1} = \tt NS$ for some $k$. Then $a_1 a_2 = \tt SN$. \end{thm} \begin{proof} Suppose to the contrary that $a_1 a_2 \neq \tt SN$. Since $\apr(B)$ contains $\tt NS$, Observation \ref{N...} implies that $a_1 \neq \tt N$, and Theorem \ref{...A...} implies that $a_1a_2$ does not contain an $\tt A$. It follows that $a_1a_2 = \tt SS$. Since $a_{k+1} = \tt S$, $B$ contains a nonsingular $(k+1) \times (k+1)$ almost-principal submatrix, say, $B[\alpha \cup \{i\}, \alpha \cup \{j\}]$ (thus, $i \neq j$ and $|\alpha| = k$). Let $B' = B[\alpha \cup \{i,j\}]$ and $\apr(B') = a'_1a'_2 \cdots a'_{k+1}$. Since $a_k = \tt N$, the Inheritance Theorem implies that $a'_k= \tt N$. Since $B'$ contains the nonsingular $(k+1) \times (k+1)$ almost-principal submatrix $B[\alpha \cup \{i\}, \alpha \cup \{j\}]$ as a submatrix, $a'_{k+1} \in \{\tt A, \tt S\}$. Thus, $a'_{k}a'_{k+1} \in \{\tt NA, NS \}$. By Theorem \ref{NA}, $a'_{k}a'_{k+1} = \tt NS$. Thus, $\apr(B') = a'_1a'_2 \cdots a'_{k-1} \tt NS$. Since $\apr(B')$ contains $\tt NS$, Observation \ref{N...} implies that $a'_1 \neq \tt N$, Theorem \ref{...A...} implies that $a'_1a'_2$ does not contain an $\tt A$, and Proposition \ref{SS...NS} implies that $a'_1a'_2 \neq \tt SS$. Thus, $a'_1a'_2 = \tt SN$, and, therefore, $\apr(B') = {\tt SN}a'_3a'_4 \cdots a'_{k-1} \tt NS$. It follows from Proposition \ref{SN} that there exists a generalized permutation matrix $P$ such that \[ P^TB'P=A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2), \] and that $k$ is even. Without loss of generality, we may assume that \[ B' = A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2). \] By Lemma \ref{NS lemma}, $a_1 a_2 = \tt SN$, a contradiction to the fact that $a_1a_2 = \tt SS$. 
\end{proof} The sequences not containing an $\tt A$ that are realized as the apr-sequence of a symmetric matrix over a field $\mathbb{F}$ are now characterized: \begin{thm}\label{No As} Let $a_1 a_2 \cdots a_{n-1}$ be a sequence from $\{\tt S, N\}$ and $\mathbb{F}$ be a field. Then $a_1 a_2 \cdots a_{n-1}$ is the apr-sequence of a symmetric matrix $B \in \mathbb{F}^{n \times n}$ if and only if $a_1 a_2 \cdots a_{n-1}$ has one of the following forms: \begin{enumerate} \item $\tt N\overline{N}$. \item $\tt SN\overline{N}$. \item $\tt SNS\overline{NS} \hspace{0.4mm} \overline{N}$. \item $\tt SS\overline{S} \hspace{0.4mm} \overline{N}$. \end{enumerate} \end{thm} \begin{proof} Let $\sigma = a_1 a_2 \cdots a_{n-1}$. Suppose that $\sigma$ is the apr-sequence of some symmetric matrix $B \in \mathbb{F}^{n \times n}$. If $a_1 = \tt N$, then $\sigma$ has form (1) (see Observation \ref{N...}). Thus, assume that $a_1 = \tt S$. Since the apr-sequence $\tt S$ is not attainable (by a $2 \times 2$ matrix), $n \geq 3$. If $a_2= \tt N$, then Proposition \ref{SN} implies that $\sigma$ must have one of the forms (2) or (3). Finally, assume that $a_2 = \tt S$. Then, by Theorem \ref{NS}, $\sigma$ does not contain $\tt NS$. It follows that $\sigma$ must have form (4). For the other direction, suppose that $\sigma$ has one of the forms (1)--(4). If $\sigma = \tt N\overline{N}$, then $\apr(O_{n}) = \sigma$. If $\sigma$ has the form (2) or (3), then the desired conclusion follows from Lemma \ref{SN lemma}. Finally, suppose that $\sigma$ has the form (4); thus, $n \geq 3$. Because of Observation \ref{append}, it suffices to reach the desired conclusion in the case when $\sigma$ does not contain an $\tt N$; thus, we assume that $\sigma = \tt SS \overline{S}$. Let $B' = J_2 \oplus I_{n-2}$ and $\apr(B') = a'_1a'_2 \dots a'_{n-1}$. We now show that $\apr(B') = \sigma$. It is obvious that $a'_1 = \tt S$. Let $k \in \{2,3, \dots, n-1\}$ and $\alpha = \{1,2, \dots, k+1\}$. Now, observe that $B'[\alpha \setminus \{1\}, \alpha \setminus \{k+1\}]$ has a zero row, and that $B'[\alpha \setminus \{1\}, \alpha \setminus \{2\}] = I_{k}$. Hence, $B'$ contains both a singular and a nonsingular $k \times k$ almost-principal submatrix, implying that $\apr(B') = \tt SS\overline{S} = \sigma$, as desired. \end{proof} Combining Theorem \ref{No As} with Theorem \ref{...A...} leads to a necessary condition for a sequence to be the apr-sequence of a symmetric matrix over an arbitrary field $\mathbb{F}$: \begin{thm}\label{necessary condition} Let $\mathbb{F}$ be a field. Let $n\geq 3$ and $\sigma = a_1a_2 \cdots a_{n-1}$ be a sequence from $\{\tt A,N,S\}$. If $\sigma$ is the apr-sequence of a symmetric matrix $B \in \mathbb{F}^{n \times n}$, then one of the following statements holds: \begin{enumerate} \item $\sigma = \tt SNS\overline{NS} \hspace{0.4mm} \overline{N}$. \item Neither $\tt NA$ nor $\tt NS$ is a subsequence of $\sigma$.
\end{enumerate} \end{thm} If $\mathbb{F}$ is the field of order $2$, then the converse of Theorem \ref{necessary condition} does not hold: An exhaustive inspection reveals that the only sequences starting with $\tt A$ that are realized as the apr-sequence of a $4 \times 4$ symmetric matrix over the field of order $2$ are $\tt AAA$, $\tt ASS$, $\tt ASN$ and $\tt ANN$ (since a simultaneous permutation of the rows and columns of a matrix leaves its apr-sequence invariant, this inspection is reduced to checking a total of five matrices); as this list does not include the sequences $\tt AAN$, $\tt AAS$ and $\tt ASA$, the converse of Theorem \ref{necessary condition} does not hold if $\mathbb{F}$ is the field of order $2$. For fields of characteristic $0$, we suspect that the converse of Theorem \ref{necessary condition} holds. Since Lemma \ref{SN lemma} implies that $\sigma = \tt SNS\overline{NS} \hspace{0.4mm} \overline{N}$ is realized as the apr-sequence of a symmetric matrix (over any field), the converse of Theorem \ref{necessary condition} holds if the following statement holds: If $\mathbb{F}$ is a field and $\sigma$ is a sequence from $\{\tt A,N,S\}$ of length $n-1$, with $n\geq 3$, that contains neither $\tt NA$ nor $\tt NS$ as a subsequence, then $\sigma$ is the apr-sequence of a symmetric matrix in $\mathbb{F}nn$; if $\mathbb{F}$ is a field of characteristic $0$, this statement is reminiscent of, and closely related to, Theorem \ref{qpr-char 0}, which is partly why we speculate that the converse of Theorem \ref{necessary condition} may hold if $\mathbb{F}$ is a field of characteristic $0$. To establish this statement for fields of characteristic $0$, it would be natural to resort to probabilistic techniques akin to those used in both \cite{EPR} and \cite{qpr}. In some instances, these techniques consist of taking an $(n-1) \times (n-1)$ symmetric matrix $B$ with $\apr(B)=a_1a_2 \cdots a_{n-2}$ and verifying the existence of a bordering strategy to produce an $n \times n$ symmetric matrix $B'$ such that $\apr(B'):= a'_1a'_2\cdots a'_{n-1} = a_1a_2 \cdots a_{n-2}a'_{n-1}$, where $a'_{n-1}$ is prescribed. Roughly speaking, the existence of such a $B'$ depends upon sufficient available choice of possible vectors in $\mathbb{F}^{n-1}$. If $B$ is an \textit{arbitrary} $(n-1) \times (n-1)$ symmetric matrix with $\epr(B) = \tt AA \cdots A$ (i.e., the sequence each of whose terms is equal to $\tt A$), then applying the aforementioned probabilistic techniques to $B$ yield an $n \times n$ (symmetric) matrix $B'$ with $\epr(B')=\tt AA \cdots AN$ (see \cite[Proposition 4.1]{EPR}). Similarly, if $C$ is an \textit{arbitrary} $(n-1) \times (n-1)$ symmetric matrix with $\qpr(C) = \tt AA \cdots A$, then applying the aforementioned probabilistic techniques to $C$ yield an $n \times n$ (symmetric) matrix $C'$ with $\qpr(C')=\tt AA \cdots AN$ (see \cite[Lemma 3.1]{qpr}). The next example shows that something similar does not hold for apr-sequences, i.e., that if $B$ is an \textit{arbitrary} $(n-1) \times (n-1)$ symmetric matrix with $\apr(B) = \tt AA \cdots A$, then applying the aforementioned probabilistic techniques to $B$ need not yield an $n \times n$ (symmetric) matrix $B'$ with $\apr(B')=\tt AA \cdots AN$. {\bf e}gin{ex}\label{Bordering Example 1}\normalfont Let \[ B= \mtx{ -1 & 1 & 1\\ 1 & -1 & 1\\ 1 & 1 & -1} \in \mathbb{R}^{3\times3} \mbox{\quad and \quad} B'= \mtx{ B & \vec{y} \\ \vec{y} & t} \in \mathbb{R}^{4\times4}, \] where $\vec{y}$ and $t$ are arbitrary. 
Suppose that $\vec{y}=[y_1, y_2, y_3]^T$. Observe that $\apr(B)=\tt AA$. We now show that there is no $\vec{y} \in \mathbb{R}^3$ such that $\apr(B')=\tt AAN$. Observe that $\det(B'[\{1,2,3\},\{2,3,4\}])= 2(y_2+y_3)= -2\det(B'[\{2,3\},\{2,4\}])$. It follows that if all of the order-$3$ almost-principal minors of $B'$ are zero, then some order-$2$ almost-principal minor of $B'$ is zero. Thus, there is no $\vec{y} \in \mathbb{R}^3$ such that $\apr(B')=\tt AAN$. \end{ex} The next example shows that if $B$ is an \textit{arbitrary} $(n-1) \times (n-1)$ symmetric matrix with $\apr(B) = \tt SS \cdots S$ (i.e., the sequence each of whose terms is equal to $\tt S$), then applying the aforementioned probabilistic techniques to $B$ need not yield an $n \times n$ (symmetric) matrix $B'$ with $\apr(B')=\tt SS \cdots SA$. {\bf e}gin{ex}\label{Bordering Example 2}\normalfont Let \[ B= \mtx{ 1 & 1 & 0 & 0\\ 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1\\ 0 & 1 & 1 & 1} \in \mathbb{F}^{4\times4} \mbox{\quad and \quad} B'= \mtx{ B & \vec{y} \\ \vec{y} & t} \in \mathbb{F}^{5 \times 5}, \] where $\mathbb{F}$ is an arbitrary field and $\vec{y}$ and $t$ are arbitrary. Observe that $\apr(B)=\tt SSS$. We now show that there is no $\vec{y} \in \mathbb{F}^4$ such that $\apr(B')=\tt SSSA$. This is readily seen by noting that $\det(B'[\{1,2,3,4\},\{2,3,4,5\}])=0$ for all $\vec{y} \in \mathbb{F}^{4}$ (two of the columns of $B'[\{1,2,3,4\},\{2,3,4,5\}]$ are the same). \end{ex} \section{The ap-rank of a symmetric matrix}\label{s: ap-rank} $\null$ \indent This section is devoted to studying the ap-rank of a symmetric matrix over an arbitrary field $\mathbb{F}$. We start with basic observations. {\bf e}gin{obs} Let $n \geq 2$ and $B \in \mathbb{F}nn$ be symmetric. Then $\aprank(B)$ is equal to the index of the last $\tt A$ or $\tt S$ in $\apr(B)$. \end{obs} {\bf e}gin{obs} Let $B \in \mathbb{F}nn$ be symmetric. Then $\aprank(B) \leq \rank(B)$. \end{obs} {\bf e}gin{obs}\label{aprank=0} Let $B \in \mathbb{F}nn$ be symmetric. Then $\aprank(B) = 0$ if and only if $B$ is a diagonal matrix. \end{obs} Since the inverse of a nonsingular non-diagonal matrix is non-diagonal, the following fact is deduced easily from the relationship between a matrix and its adjoint. {\bf e}gin{prop}\label{aprank nonsingular} Let $B \in \mathbb{F}nn$ be symmetric, non-diagonal and non-singular. Then $\aprank(B) = n-1$. \end{prop} As we saw earlier (in Theorem \ref{thm: rank of a symm mtx}), the rank of a symmetric matrix is equal to the order of a largest nonsingular principal submatrix ---which led us to call the rank of such a matrix ``principal.'' A natural question one should ask is whether an analogous connection exists between the rank of a symmetric matrix and the order of a largest nonsingular almost-principal submatrix; that is, is it the case that the rank and ap-rank of a symmetric matrix is the same? Obviously, the answer is negative, since, for example, for a nonzero diagonal matrix $B$, $\aprank(B) = 0$, while $\rank(B)>0$. Moreover, since for an $n \times n$ matrix $B$ we must have $\aprank(B) \leq n-1$, $\aprank(B) \neq \rank(B)$ if $B$ is nonsingular. But what can we say if $B$ is non-diagonal and singular? After establishing the following two lemmas, we show that if $B$ is symmetric, non-diagonal and singular, and does not contain a zero row, then $\aprank(B) = \rank(B)$. {\bf e}gin{lem}\label{zero row 1} Let $B \in \mathbb{F}nn$ be symmetric and singular. Suppose that $\rank(B) = r$ and $B$ does not contain a zero row. 
Let $B[\alpha]$ be an $r \times r$ nonsingular (principal) submatrix of $B$. Then there exists $p \in \{1,2, \dots, n\} \setminus \alpha$ such that $B[\alpha \cup \{p\}]$ does not contain a zero row. \end{lem} \begin{proof} Without loss of generality, we may assume that $\alpha = \{1,2, \dots, r\}$. Suppose to the contrary that the matrix $B[\alpha \cup \{p\}]$ contains a zero row for all $p \in \{1,2, \dots, n\} \setminus \alpha$ (since $B$ is singular, $\{1,2, \dots, n\} \setminus \alpha$ is nonempty). Since $B[\alpha]$ is nonsingular, the zero row of $B[\alpha \cup \{p\}]$ must be the row indexed by $p$; hence, for each $p \notin \alpha$, the entries of $B$ in row $p$ and columns $\alpha \cup \{p\}$ are zero. It follows that $B=B[\alpha] \oplus C$, where $C$ is an $(n-r) \times (n-r)$ matrix with zero diagonal. Now, observe that $\rank(B) = \rank(B[\alpha]) + \rank(C)$. Then, as $\rank(B[\alpha]) = \rank(B)$, it follows that $\rank(C) = 0$, implying that $C= O_{n-r}$, which contradicts the fact that $B$ does not contain a zero row. \end{proof} \begin{lem}\label{zero row 2} Let $B \in \mathbb{F}^{n \times n}$ be symmetric. Suppose that $\apr(B) = a_1 a_2 \cdots a_{n-2}\tt N$ and $\rank(B) = n-1$. Then $B$ contains a zero row. \end{lem} \begin{proof} Suppose that $B=[b_{ij}]$. Since $\apr(B)$ ends with $\tt N$, the adjoint of $B$ is a diagonal matrix, and it is of rank one, since $\rank(B)=n-1$. Since the product of $B$ with its adjoint equals $\det(B) I_n$, which is the zero matrix, and since some column of the adjoint is a nonzero multiple of a standard basis vector $e_i$, it follows that $Be_i=0$. Thus, the $i$th column (and hence row) of $B$ is zero. \end{proof} \begin{thm}\label{aprank singular} Let $B \in \mathbb{F}^{n \times n}$ be a symmetric, non-diagonal, singular matrix not containing a zero row. Then $\aprank(B) = \rank(B)$. \end{thm} \begin{proof} Since $B$ is a non-diagonal matrix, $n \geq 2$. Let $r=\rank(B)$. Since $\aprank(B) \leq r$, it suffices to show that $B$ contains a nonsingular $r \times r$ almost-principal submatrix. Since the rank of $B$ is principal, $B$ contains a nonsingular $r \times r$ principal submatrix, say, $B[\alpha]$. By Lemma \ref{zero row 1}, there exists $p \in \{1,2, \dots, n\} \setminus \alpha$ such that $B' := B[\alpha \cup \{p\}]$ does not contain a zero row. Since $\rank(B) = r$, and because $B'$ contains the nonsingular $r \times r$ matrix $B[\alpha]$, $\rank(B') = r$. Let $\apr(B') = a'_1a'_2 \cdots a'_r$. Since $B'$ is a singular $(r+1) \times (r+1)$ (symmetric) matrix with $\rank(B') = r$, and because $B'$ does not contain a zero row, Lemma \ref{zero row 2} implies that $a'_r \neq \tt N$. Hence, $B'$ contains a nonsingular, $r \times r$, almost-principal submatrix. Then, as every almost-principal submatrix of $B'$ is also an almost-principal submatrix of $B$, the desired conclusion follows. \end{proof} Although the rank and ap-rank of a symmetric matrix $B$ are not always the same, the rank cannot exceed the ap-rank by more than one if $B$ is a non-diagonal matrix: \begin{thm}\label{ap-rank} Let $B \in \mathbb{F}^{n \times n}$ be symmetric and non-diagonal. Define the parameter $t:=\max\{|\alpha|: B[\alpha] \mbox{ does not contain a zero row}\}$, and let $B'$ be the $t\times t$ principal submatrix of $B$ not containing a zero row. Then $\rank(B) - 1 \leq \aprank(B) \leq \rank(B)$. Moreover, $\aprank(B) = \rank(B)$ if and only if $B'$ is singular. Equivalently, $\aprank(B) = \rank(B)-1$ if and only if $B'$ is nonsingular. \end{thm} \begin{proof} Since $B$ is symmetric and non-diagonal, it is immediate that $n \geq 2$ and $t \geq 2$. Without loss of generality, we may assume that $B' = B[1,2, \dots,t]$. Thus, $B=B' \oplus O_{n-t}$.
Then, as $\rank(B) = \rank(B')$ and $\aprank(B) = \aprank(B')$, it suffices to show that the desired conclusions hold for the case with $B=B'$ (that is, the case with $t=n$); thus, we assume that $B=B'$. If $B$ is nonsingular, then, by Proposition \ref{aprank nonsingular}, $\aprank(B) = n-1 = \rank(B)-1$. If $B$ is singular, then, by Theorem \ref{aprank singular}, $\aprank(B)=\rank(B)$. It follows that $\aprank(B)= \rank(B)-1$ or $\aprank(B)=\rank(B)$, implying that $\rank(B) - 1 \leq \aprank(B) \leq \rank(B)$, as desired. The remaining two statements, and their equivalence, are established easily using the above arguments and the fact that $\rank(B) - 1 \leq \aprank(B) \leq \rank(B)$. \end{proof} Although for a given symmetric matrix $B \in \mathbb{F}^{n \times n}$ we must have $0\leq \rank(B) - \aprank(B) \leq 1$ if $B$ is non-diagonal, $\rank(B) - \aprank(B)$ can attain any integer value in the interval $[0,n]$ if $B$ is a diagonal matrix, since, for example, $\rank(B) - \aprank(B) = r$ if $B$ is a diagonal matrix with $\rank(B) = r$. \section{Concluding remarks}\label{s: final} $\null$ \indent Given that the only difference between the epr- and apr-sequence is that the former depends on principal minors, while the latter depends on almost-principal minors, it is worthwhile to compare the state of affairs for epr- and apr-sequences. Although the apr-sequence was just introduced (in the present paper), we already have a better understanding of this sequence than of the epr-sequence: The epr-sequences of symmetric matrices over the field of order $2$ were completely characterized in \cite{XMR-Char 2}; however, for any other field, no such characterization exists. In Section \ref{s: No As}, the sequences not containing any $\tt A$s that are realized as the apr-sequence of a symmetric matrix over an arbitrary field $\mathbb{F}$ were completely characterized. Moreover, in Section \ref{s: No As}, a necessary condition for a sequence to be the apr-sequence of a symmetric matrix over a field $\mathbb{F}$ was presented. It is clear, then, that our understanding of apr-sequences is already better than that of epr-sequences. As stated in Section \ref{s: intro}, one of our motivations for introducing the ap-rank and apr-sequence of a symmetric matrix was answering Question \ref{qpr question}, which asks if we should attribute the fact that neither $\tt NA$ nor $\tt NS$ can occur as a subsequence of the qpr-sequence of a symmetric matrix $B \in \mathbb{F}^{n \times n}$ entirely to the dependence of qpr-sequences on almost-principal minors. The following remark answers Question \ref{qpr question}, under the assumption that $n \geq 3$ (the question is trivial when $n \leq 2$). \begin{rem} \normalfont Let $n \geq 3$, $\mathbb{F}$ be a field and $B \in \mathbb{F}^{n \times n}$ be symmetric. Then the fact that neither $\tt NA$ nor $\tt NS$ can occur as a subsequence of $\qpr(B)$ is attributed entirely to the dependence of $\qpr(B)$ on almost-principal minors if and only if $B$ is a non-diagonal matrix for which there does not exist a generalized permutation matrix $P$ such that $P^TBP = T^p_q$ with $p \geq 2$, where $T^p_q$ is the $n \times n$ matrix \[ T^p_q:=\underbrace{A(K_2) \oplus A(K_2) \oplus \cdots \oplus A(K_2)}_{\mbox{$p$ times}} \oplus O_{q} \in \mathbb{F}^{n \times n}. \] We now establish the previous statement. First, suppose that $B$ is a diagonal matrix with $\rank(B)=r$. If $B$ is nonsingular, then $\epr(B) = \tt AAA\overline{A}$.
If $B$ is singular, then $\epr(B) = \tt \overline{S} \hspace{0.4mm}\overline{N}$, with $\tt S$ occurring $r$ times and $\tt N$ occurring $n-r$ times. Since $r$ is equal to the index of the last $\tt A$ or $\tt S$ in $\qpr(B)$ (see Observation \ref{qpr rank}), neither $\tt NA$ nor $\tt NS$ is a subsequence of $\qpr(B)$, regardless of what $\apr(B)$ is; hence, the fact that neither $\tt NA$ nor $\tt NS$ is a subsequence of $\qpr(B)$ is not attributed entirely to the dependence of $\qpr(B)$ on almost-principal minors. Now, suppose that $B$ is a non-diagonal matrix for which there exists a generalized permutation matrix $P$ such that $P^TBP=T^p_q$ for some $p \geq 2$. Then either $\epr(B)=\tt NS\overline{NS}NA$ (if $q=0$) or $\epr(B)=\tt NS\overline{NS}\hspace{0.4mm}\overline{N}$ (if $q \geq 1$), with $\tt \overline{N}$ containing $q$ copies of $\tt N$. Moreover, $\apr(B) = \tt SNS\overline{NS} \hspace{0.4mm} \overline{N}$, with $\tt \overline{N}$ containing $q$ copies of $\tt N$ (see Proposition \ref{SN}). Then, as $\rank(B)=n-q$, and because $\rank(B)$ is equal to the index of the last $\tt A$ or $\tt S$ in $\qpr(B)$, $\qpr(B) = \tt SSS\overline{S} \hspace{0.4mm} \overline{N}$. It is easy to see that the fact that neither $\tt NA$ nor $\tt NS$ is a subsequence of $\qpr(B)$ is attributed to {\em both} the principal and the almost-principal minors of $B$. Finally, suppose that $B$ is a non-diagonal matrix for which there does not exist a generalized permutation matrix $P$ such that $P^TBP = T^p_q$ with $p \geq 2$. Let $\qpr(B)=q_1q_2 \cdots q_n$. It suffices to present an argument based solely on almost-principal minors for the fact that if $q_k= \tt N$ for some $k$, then $q_j = \tt N$ for all $j \geq k$; we present one based on the ap-rank and apr-sequence of $B$: Suppose that $q_k= \tt N$ for some $k$. Let $\apr(B)=a_1a_2 \cdots a_{n-1}$. If $k =n$, then there is nothing to prove; thus, assume that $k \leq n-1$. Obviously, $a_k = \tt N$. We now show that $\apr(B)$ contains neither $\tt NA$ nor $\tt NS$ as a subsequence. If it were the case that $\apr(B)$ contained $\tt NA$ or $\tt NS$ as a subsequence, then Theorem \ref{necessary condition} would imply that $\apr(B) = \tt SNS\overline{NS} \hspace{0.4mm} \overline{N}$, and, then, Proposition \ref{SN} would imply that there exists a generalized permutation matrix $P$ such that $P^TBP=T^p_q$ for some $p \geq 2$, leading to a contradiction. Hence, neither $\tt NA$ nor $\tt NS$ is a subsequence of $\apr(B)$. It follows that $a_j = \tt N$ for all $j \geq k$, and, therefore, that $\aprank(B) \leq k-1$. Then, as $B$ is non-diagonal, Theorem \ref{ap-rank} implies that $\rank(B) \leq k$. Since $\rank(B)$ is equal to the index of the last $\tt A$ or $\tt S$ in $\qpr(B)$ (see Observation \ref{qpr rank}), $q_j = \tt N$ for all $j \geq k+1$. Then, as $q_k = \tt N$, the desired conclusion follows. \end{rem} \subsection*{Acknowledgments} $\null$ \indent The research of the first author was supported in part by an NSERC Discovery Research Grant RGPIN-2014-06036. The authors express their gratitude to a referee for their careful review, and for many helpful and constructive comments on a previous version, which greatly improved the presentation of the paper. Moreover, they thank a referee for bringing Example \ref{Bordering Example 1} to their attention. \begin{thebibliography}{99} \bibitem{BIRS13} W. Barrett, S. Butler, M. Catral, S. M. Fallat, H. T. Hall, L. Hogben, P. van den Driessche, M. Young. The principal rank characteristic sequence over various fields.
\textit{Linear Algebra and its Applications} \textbf{459} (2014), 222--236. \bibitem{Stu17} T. Boege, A. D'Al\`{i}, T. Kahle, B. Sturmfels. The Geometry of Gaussoids. \textit{Foundations of Computational Mathematics}, to appear. \href{https://arxiv.org/abs/1710.07175} {arXiv:1710.07175}. \bibitem{P} R. A. Brualdi, L. Deaett, D. D. Olesky, P. van den Driessche. The principal rank characteristic sequence of a real symmetric matrix. \textit{Linear Algebra and its Applications} \textbf{436} (2012), 2137--2155. \bibitem{Brualdi & Schneider} R. A. Brualdi, H. Schneider. Determinantal identities: Gauss, Schur, Cauchy, Sylvester, Kronecker, Jacobi, Binet, Laplace, Muir, and Cayley. \textit{Linear Algebra and its Applications} \textbf{52/53} (1983), 769--791. \bibitem{EPR} S. Butler, M. Catral, S. M. Fallat, H. T. Hall, L. Hogben, P. van den Driessche, M. Young. The enhanced principal rank characteristic sequence. \textit{Linear Algebra and its Applications} \textbf{498} (2016), 181--200. \bibitem{EPR-Hermitian} S. Butler, M. Catral, H. T. Hall, L. Hogben, X. Mart\'{i}nez-Rivera, B. Shader, P. van den Driessche. The enhanced principal rank characteristic sequence for Hermitian matrices. \textit{Electronic Journal of Linear Algebra} \textbf{32} (2017), 58--75. \bibitem{qpr} S. M. Fallat, X. Mart\'{i}nez-Rivera. The quasi principal rank characteristic sequence. \textit{Linear Algebra and its Applications} \textbf{548} (2018), 42--56. \bibitem{skew} S. M. Fallat, D. D. Olesky, P. van den Driessche. The enhanced principal rank characteristic sequence for skew-symmetric matrices. \textit{Linear Algebra and its Applications} \textbf{498} (2016), 366--377. \bibitem{Vanishing Minor Conditions} C. R. Johnson, J. S. Maybee. Vanishing minor conditions for inverse zero patterns. \textit{Linear Algebra and its Applications} \textbf{178} (1993), 1--15. \bibitem{InvM3} C. R. Johnson, R. L. Smith. Inverse $M$-matrices, II. \textit{Linear Algebra and its Applications} \textbf{435} (2011), 953--983. \bibitem{KP14} R. Kenyon, R. Pemantle. Principal minors and rhombus tilings. \textit{Journal of Physics A: Mathematical and Theoretical} \textbf{47} (2014), 474010. \bibitem{XMR-Classif} X. Mart\'{i}nez-Rivera. Classification of families of pr- and epr-sequences. \textit{Linear and Multilinear Algebra} \textbf{65} (2017), 1581--1599. \bibitem{XMR-Char 2} X. Mart\'{i}nez-Rivera. The enhanced principal rank characteristic sequence over a field of characteristic $2$. \textit{Electronic Journal of Linear Algebra} \textbf{32} (2017), 273--290. \bibitem{XMR-sepr} X. Mart\'{i}nez-Rivera. The signed enhanced principal rank characteristic sequence. \textit{Linear and Multilinear Algebra} \textbf{66} (2018), 1484--1503. \bibitem{Stu09} B. Sturmfels. Open problems in algebraic statistics. In {\em Emerging Applications of Algebraic Geometry}, M. Putinar and S. Sullivant, Editors. I.M.A. Volumes in Mathematics and its Applications, \textbf{149}, Springer, New York, 2009. \bibitem{Stu16} B. Sturmfels, E. Tsukerman, L. Williams. Symmetric matrices, Catalan paths, and correlations. \textit{Journal of Combinatorial Theory, Series A}. \textbf{144} (2016), 496--510. \bibitem{InvM1} R. A. Willoughby. The inverse $M$-matrix problem. \textit{Linear Algebra and its Applications} \textbf{18} (1977), 75--94. \bibitem{Schur} F. Zhang (editor). \textit{The Schur Complement and its Applications.} Springer-Verlag, New York, New York, 2005. \end{thebibliography} \end{document}
\begin{document} \title{\scshape On the dimension of spline spaces on planar T-meshes} \author[Bernard Mourrain]{Bernard Mourrain,\\ GALAAD, INRIA M\'editerran\'ee\\ \texttt{[email protected]}} \begin{abstract} We analyze the space $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$ of bivariate functions that are piecewise polynomial of bi-degree $\leqslant (m, m')$ and of smoothness $\mathbf{r}$ along the interior edges of a planar T-mesh $\mathcal{T}$. We give new combinatorial lower and upper bounds for the dimension of this space by exploiting homological techniques. We relate this dimension to the weight of the maximal interior segments of the T-mesh, defined for an ordering of these maximal interior segments. We show that the lower and upper bounds coincide, for high enough degrees or for hierarchical T-meshes which are enough regular. We give a rule of subdivision to construct hierarchical T-meshes for which these lower and upper bounds coincide. Finally, we illustrate these results by analyzing spline spaces of small degrees and smoothness. \end{abstract} \maketitle \section*{Introduction} Standard parametrisations of surfaces in Computer Aided Geometric Design are based on tensor product B-spline functions, defined from a grid of nodes over a rectangular domain \cite{farin:book}. These representations are easy to control but their refinement has some drawback. Inserting a node in one direction of the parameter domain implies the insertion of several control points in the other directions. If for instance, regions along the diagonal of the parameter domain should be refined, this would create a fine grid in some regions where it is not needed. To avoid this problem, while extending the standard tensor product representation of CAGD, spline functions associated to a subdivision with T-junctions instead of a grid, have been studied. Such a T-mesh is a partition of a domain $\Omega$ into axis-aligned boxes, called the cells of the T-mesh. The first type of T-splines introduced in \cite{Sederberg03,Sederberg04}, are defined by blending functions which are products of univariate B-spline basis functions associated to some nodes of the subdivision. They are piecewise polynomial functions, but the pieces where these functions are polynomial do not match with the cells of the T-subdivision. Moreover, there is no proof that these piecewise polynomial functions are linearly independent. Indeed, \cite{BuChSa10} shows that in some cases, these blending T-spline functions are not linearly independent. Another issue related to this construction is that there is no characterization of the vector space spanned by these functions. For this reason, the partition of unity property which is useful in CAGD is not available directly in this space. The spline functions have to be replaced by piecewise rational functions, so that these piecewise rational functions sum to $1$. However, this construction complexifies the practical use of such T-splines. Being able to describe a basis of the vector space of piecewise polynomials of a given smoothness on a T-mesh is an important but non-trivial issue. It yields a construction of piecewise polynomial functions on the T-subdivision which form a partition of unity so that the use of piecewise rational functions is not required. It has also a direct impact in approximation problems such as surface reconstruction \cite{deBoor01} or isogeometric analysis \cite{hughes:CMAME2005}, where controlling the space of functions used to approximate a solution is critical. 
In CAGD, it also provides more degrees of freedom to control a shape. This explains why further works have been developed to understand better the space of piecewise polynomial functions with given smoothness on a T-subdivision. To tackle these issues special families of splines on T-meshes have been studied. In {\cite{deng06}}, {\cite{deng08}}, these splines are piecewise polynomial functions on a hierarchical T-subdivision. They are called PHT-splines (Polynomial Hierarchical T-splines). Dimension formulae of the spline space on such a subdivision have been proposed when the degree is high enough compared to the smoothness {\cite{deng06}, \cite{HuDeFeChe06}, \cite{LiWaZha06}} and in some cases for biquadratic $C^1$ piecewise polynomial functions {\cite{Deng08TSPL22}}. The construction of a basis is described for bicubic $C^1$ spline spaces in terms of the coefficients of the polynomials in the Bernstein basis attached to a cell. When a cell is subdivided into 4 subcells, the Bernstein coefficients of the basis functions of the old level are modified and new linearly independent functions are introduced, using Bernstein bases on the cells at the new level. In this paper, we analyse the dimension of the space $\mathcal{S}_{m, m'}^{\mathbf{r}}(\mathcal{T})$ of bivariate functions that are piecewise polynomial of bi-degree $\leqslant (m, m')$ of smoothness $\mathbf{r}$ along the interior edges of a general planar T-mesh $\mathcal{T}$, where $\mathbf{r}$ is a smoothness distribution on $\mathcal{T}$. As we will see, computing this dimension reduces to compute the dimension of the kernel of a certain linear map (namely the map $\tilde{\partial}_{2}$ introduced in Section \ref{sec:2}). Thus for a given T-mesh, a given smoothness distribution $\mathbf{r}$ and a given bi-degree $(m,m')$, it is possible to compute the dimension of $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$ by linear algebra tools (see eg. a software implementation developed by P. Alfed\footnote{\texttt{http://www.math.utah.edu/$\sim$pa/MDS/index.html}} for such computations). We would like to avoid a case-by-case treatment and to describe this dimension in terms of combinatorial quantities attached to $\mathcal{T}$ and easy to evaluate. As shown in \cite{Li:2011:IDS:2038067.2038142} or \cite{Berdinsky:2012}, the dimension may also depend on the geometry of the T-mesh and not just on its topology. This explains why it is not always possible to provide a purely combinatorial formula for the dimension of $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$. The main results in this paper are \begin{itemize} \item a description of the dimension in terms of a {\em combinatorial} part that depends only on the topology of the T-mesh and an {\em homological} part that takes into account the fact that the dimension may depend on the geometry of the T-mesh (Theorem \ref{thm:dim}); \item {\em combinatorial} lower and upper bounds on the dimension that are easy to evaluate (Theorem \ref{thm:dim:bound}); \item sufficient conditions under which the lower and upper bounds coincide so that the dimension depends only on the topology of the T-mesh (Theorem \ref{thm:weighted}). \end{itemize} We proceed as follows. By extending homological techniques developed in \cite{b-htssg-88} and \cite{Schenck1997535}, we obtain combinatorial lower and upper bounds on the dimension of these spline spaces for general T-subdivisions. 
We relate the upper bound to the maximal interior segments and their weights and show that the lower and upper bounds coincide for $T$-meshes which are sufficiently regular. Namely, if a T-mesh is $(m+1, m'+1)$-weighted, the dimension depends directly on the number of faces, interior edges and interior points. In particular, we obtain the dimension formula for a constant smoothness distribution $\mathbf{r}= (r,r')$ with $m \geq 2 r + 1$ and $m' \geq 2 r' + 1$, providing a new proof of a result also available in \cite{deng06}, \cite{HuDeFeChe06}, \cite{LiWaZha06} for a hierarchical T-mesh. The algebraic approach gives an homological interpretation of the Smoothing Cofactor-Conformality method of \cite{Wang01}. It allows us to generalize the dimension formulae obtained by this technique \cite{LiWaZha06}, \cite{HuDeFeChe06}. We also give a rule of subdivision to construct hierarchical T-meshes for which the lower and upper bounds coincide. As a consequence, we can recover the dimension of the space of Locally Refined splines described in \cite{DoLyPe10}. We do not consider the problem of constructing explicit bases for these spline spaces, which will be analyzed separately. In the first section, we recall the notations and the polynomial properties which are needed in the following. Section \ref{sec:2} describes the chain complex associated to the spline space and analyzes its homology. In Section \ref{sec:3}, we give lower and upper bounds on the dimension of the spline space and analyze cases where these bounds coincide. Section \ref{sec:4} deals with the properties of hierarchical $T$-meshes, obtained by recursive subdivisions of cells. In the last section, we analyse some examples for small degree and smoothness. \section{Planar T-splines}\label{sec:1} In the following, we will deal with notions which are of topological and algebraic nature. We start with the topological definitions. \subsection{T-meshes} For any set $S\subset \mathbbm{R}^{2}$, $\overline{S}$ is its closure for the usual topology, $S^{\circ}$ its relative interior, and $\partial S$ its boundary. We define a T-mesh $\mathcal{T}$ of $\mathbbm{R}^{2}$ as: \begin{itemize} \item a finite set $\mathcal{T}_{2}$ of closed axis-aligned rectangles of $\mathbbm{R}^{2}$, \item a finite set $\mathcal{T}_{1}$ of closed axis-aligned segments included in $\cup_{\sigma\in \mathcal{T}_{2}} \partial \sigma$, \item a finite set of points $\mathcal{T}_{0} = \cup_{\tau\in \mathcal{T}_{1}} \partial \tau$, \end{itemize} such that \begin{itemize} \item For $\sigma\in \mathcal{T}_{2}$, $\partial \sigma$ is the finite union of elements of $\mathcal{T}_{1}$. \item For $\sigma,\sigma'\in \mathcal{T}_{2}$ with $\sigma\neq \sigma'$, $\sigma\cap \sigma'=\partial \sigma\cap \partial \sigma'$ is the finite union of elements of $\mathcal{T}_{1}\cup \mathcal{T}_{0}$. \item For $\tau,\tau'\in \mathcal{T}_{1}$ with $\tau\neq \tau'$, $\tau\cap \tau' =\partial \tau \cap \partial\tau' \subset \mathcal{T}_{0}$. \end{itemize} We set $\Omega =\cup_{\sigma\in \mathcal{T}_{2}} \sigma \subset \mathbbm{R}^2$ and call it the domain of the T-mesh $\mathcal{T}$. The elements of $\mathcal{T}_{2}$ are called $2$-faces or cells and their number is denoted $f_2$. The elements of $\mathcal{T}_{1}$ are called $1$-faces or edges. An element of $\mathcal{T}_{1}$ is called an interior edge if it intersects $\Omega^{\circ}$. It is called a boundary edge otherwise. The set of interior edges is denoted by $\mathcal{T}_1^o$.
The number of edges in $\mathcal{T}_1$ is $f_1$ and the number of interior edges is $f_{1}^{o}$. An edge parallel to the first (resp. second) axis of $\mathbbm{R}^{2}$ is called horizontal (resp. vertical). Let $\mathcal{T}_{1}^{o,h}$ (resp. $\mathcal{T}_{1}^{o,v}$) be the set of horizontal (resp. vertical) interior edges and $f_1^h$ (resp. $f_1^v$) the number of interior horizontal (resp. vertical) edges. Then, the number of interior edges is $f_1^o = f_1^h + f_1^v$. The elements of $\mathcal{T}_{0}$ are called $0$-faces or vertices. A vertex is interior if it is in $\Omega^{\circ}$. It is a boundary vertex otherwise. The set of interior vertices is denoted $\mathcal{T}_0^o$. We denote by $f_0$ the number of vertices of $\mathcal{T}_0$ and by $f_0^o$ the number of interior vertices. A vertex is a crossing vertex if it is an interior vertex and belongs to $4$ distinct elements of $\mathcal{T}_{1}$. A vertex is a T-vertex if it is an interior vertex and belongs to exactly $3$ distinct elements of $\mathcal{T}_{1}$. Let $f_{0}^{+}$ (resp. $f_0^T$) be the number of crossing (resp. T) vertices. A boundary vertex is a vertex in $\mathcal{T}_{0}\cap \partial \Omega$. The number of boundary vertices is $f_{0}^{b}$. A vertex is a corner vertex if it belongs to $\partial \Omega$ and to a vertical and a horizontal boundary edge. To simplify the definitions and remove redundant edges, we will assume that {\em any vertex $\gamma\in \mathcal{T}_{0}$ belongs at least to one horizontal edge $\tau_{h} (\gamma) \in \mathcal{T}_{1}$ and one vertical edge $\tau_{v} (\gamma) \in \mathcal{T}_{1}$}. We denote by $\nu_{h} (\mathcal{T}) =\{s_{1}, \ldots, s_{l}\}\subset \mathbbm{R}$ (resp. $\nu_{v}(\mathcal{T}) =\{t_{1}, \ldots, t_{m}\} \subset \mathbbm{R}$) the set of first (resp. second) coordinates of the points of the vertical (resp. horizontal) edges, whose sets we denote by $\mathcal{T}_{1}^{v}$ (resp. $\mathcal{T}_{1}^{h}$). The elements of $\nu_{h} (\mathcal{T})$ (resp. $\nu_{v} (\mathcal{T})$) are called the horizontal (resp. vertical) nodes of the T-mesh $\mathcal{T}$. \begin{example}\label{ex:1} Let us illustrate the previous definitions on the following T-mesh: \begin{center} \includegraphics[height=4.5cm]{fig-1.eps} \end{center} In this example, there are $f_{2}= 7$ rectangles and $f_{1}^{o}=9$ interior edges, of which $f_{1}^{h}=4$ are horizontal and $f_{1}^{v}=5$ are vertical. There are $f_{0}^{o}=3$ interior points $\gamma_{1}, \gamma_{2},\gamma_{3}$; $\gamma_{1}, \gamma_{3}$ are T-vertices and $\gamma_{2}$ is a crossing vertex. There are $f_{0}^{b}=15$ boundary vertices and $12$ corner vertices. The horizontal nodes are $\nu_{h} (\mathcal{T}) =\{0, \ldots, 5\}$ and the vertical nodes are $\nu_{v} (\mathcal{T}) =\{0, \ldots, 4\}$. \end{example} For our analysis of spline spaces on T-meshes, we assume the following: \noindent{}\textbf{Assumption:} {\em The domain $\Omega$ is simply connected and $\Omega^{o}$ is connected.} This implies that $\Omega$ has one connected component with no ``hole'' and that the number of boundary edges through a boundary vertex is $2$. \subsection{T-splines} We now define the spaces of piecewise polynomial functions on a T-mesh, with bounded degrees and given smoothness. An element in such a space is called a spline function. \begin{definition} A {\em smoothness distribution} on a T-mesh $\mathcal{T}$ is a pair of maps $(\mathbf{r}_{h},\mathbf{r}_{v})$, with $\mathbf{r}_{h}\colon \nu_{h} (\mathcal{T}) \to \mathbbm{N}$ and $\mathbf{r}_{v}\colon \nu_{v} (\mathcal{T}) \to \mathbbm{N}$.
\end{definition} For convenience, we will define the smoothness map $\mathbf{r}$ on $\mathcal{T}_{1}$ as follows: for any $\tau \in \mathcal{T}_{1}^{v}$ (resp. $\tau \in \mathcal{T}_{1}^{h}$), $\mathbf{r} (\tau)= \mathbf{r}_{h} (s)$ (resp. $\mathbf{r} (\tau)= \mathbf{r}_{v} (t)$) where $s \in \nu_{h} (\mathcal{T})$ (resp. $t\in \nu_{v} (\mathcal{T})$) is the first (resp. second) coordinate of any point of $\tau$. We will also define the horizontal and vertical smoothness on $\mathcal{T}_{0}$ as follows: for any $\gamma = (s,t) \in \mathcal{T}_{0}$, $\mathbf{r}_{h} (\gamma) = \mathbf{r}_{h} (s)$ and $\mathbf{r}_{v} (\gamma) = \mathbf{r}_{v} (t)$. For $r,r'\in \mathbbm{N}$, we say that $\mathbf{r}$ is a constant smoothness distribution equal to $(r,r')$ if $\mathbf{r}_{h} (s)=r$ for all $s \in \nu_{h} (\mathcal{T})$ and $\mathbf{r}_{v} (t)=r'$ for all $t \in \nu_{v} (\mathcal{T})$. Let $R=\mathbbm{R}[s,t]$ be the polynomial ring with coefficients in $\mathbbm{R}$. For $m,m'\in \mathbbm{N}$, we denote by $R_{m,m'}=R_{(m,m')}$ the vector space of polynomials in $R$ of degree $\le m$ in $s$ and $\le m'$ in $t$. An element of $R_{m,m'}$ is of bi-degree $\leqslant (m,m')$. The goal of this paper is to analyse the dimension of the space of splines of bi-degree $\le (m,m')$ and of smoothness $\mathbf{r}$ on a T-mesh $\mathcal{T}$, that we define now. \begin{definition} Let $\mathcal{T}$ be a T-mesh and $\mathbf{r}$ a node smoothness distribution. We denote by $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$ the vector space of functions which are polynomials in $R_{m, m'}$ on each cell $\sigma \in \mathcal{T}_2$ and of class $C^{\mathbf{r} (\tau)}$ in $s$ (resp. in $t$) at any point of $\tau \cap \Omega^{o}$ if $\tau$ is a vertical (resp. horizontal) interior edge. \end{definition} We will say that $f \in \mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$ is of (continuity) class $C^{\mathbf{r}}$ on $\mathcal{T}$. We notice that the boundary edges and their smoothness are not involved in the characterization of a spline function. \begin{example} We consider again the T-mesh of Example \ref{ex:1}. If we take the node smoothness distribution $\mathbf{r}_{h} (1)= 1$, $\mathbf{r}_{h} (2)= 0$, $\mathbf{r}_{h} (3)= \mathbf{r}_{h} (4)= \mathbf{r}_{h} (5)= 1$, and $\mathbf{r}_{v}$ constant equal to $1$, then $\mathcal{S}_{3,3}^{\mathbf{r}} (\mathcal{T})$ is the vector space of bicubic piecewise polynomial functions on $\mathcal{T}$ which are $C^{1}$ in $s$ for $s<2$ and $s>2$, continuous for $s=2$ and $C^{1}$ in $t$ in $\Omega^{o}$. \end{example} \subsection{Polynomial properties} We recall here basic results on the dimension of the vector spaces involved in the analysis of $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$: For any $\tau \in \mathcal{T}_{1}$, let $l_{\tau} \in R$ be a non-zero polynomial (of degree $1$) defining the line supporting the edge $\tau$. Let $\Delta^{\mathbf{r}} (\tau) = l_{\tau}^{\mathbf{r} (\tau) + 1}$. We denote by $\mathfrak{I}^{\mathbf{r}} (\tau) = (\Delta^{\mathbf{r}} (\tau))$ the ideal generated by the polynomial $\Delta^{\mathbf{r}} (\tau)\in R$ and by $\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau) =\mathfrak{I}^{\mathbf{r}} (\tau) \cap R_{m, m'}$ its part of bi-degree $\leq (m,m')$. Notice that $\mathfrak{I}^{\mathbf{r}}(\tau)$ defines the line supporting the edge $\tau$ with multiplicity $(\mathbf{r} (\tau) + 1)$. By definition, two horizontal (resp. vertical) edges $\tau_1, \tau_2$ which share a point define the same ideal $\mathfrak{I}^{\mathbf{r}} (\tau_1) =\mathfrak{I}^{\mathbf{r}} (\tau_2)$.
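To fix ideas, here is a small worked instance of these definitions; the numerical values are chosen purely for illustration and do not refer to the examples above. Suppose that $\tau$ is a vertical interior edge supported on the line $s=2$, so that we may take $l_{\tau}= s-2$, and that $\mathbf{r}(\tau)=1$. Then $\Delta^{\mathbf{r}}(\tau) = (s-2)^{2}$ and $\mathfrak{I}^{\mathbf{r}}(\tau) = ((s-2)^{2})$ is the ideal of multiples of $(s-2)^{2}$. For $(m,m')=(3,3)$ one has \[ \mathfrak{I}^{\mathbf{r}}_{3,3}(\tau) \;=\; (s-2)^{2}\, R_{1,3}, \] a space of dimension $2 \times 4 = 8$, so that $\dim\big( R_{3,3}/\mathfrak{I}^{\mathbf{r}}_{3,3}(\tau)\big) = 16-8 = 8$, in agreement with the formula of Lemma \ref{lem:dim} below.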
We define the bi-degree $\delta$ for any edge $\tau \in \mathcal{T}_{1}$ as follows: \begin{itemize} \item $\delta (\tau) = (\mathbf{r} (\tau) + 1, 0)$ if $\tau$ is vertical, \item $\delta (\tau) = (0, \mathbf{r} (\tau) + 1)$ if $\tau$ is horizontal. \end{itemize} Let $\mathfrak{I}^{\mathbf{r}} (\gamma) =\mathfrak{I}^{\mathbf{r}} (\tau_v) +\mathfrak{I}^{\mathbf{r}} (\tau_h) = (\Delta^{\mathbf{r}} (\tau_v), \Delta^{\mathbf{r}} (\tau_h))$ where $\tau_{v}, \tau_{h}\in \mathcal{T}_{1}$ are vertical and horizontal edges such that $\tau_{v}\cap \tau_{h}= \{\gamma\}$. The ideal $\mathfrak{I}^{\mathbf{r}}(\gamma)$ defines the point $\gamma$ with multiplicity $(\mathbf{r}_{h} (\gamma) + 1) \times (\mathbf{r}_{v} (\gamma) + 1)$. We denote by $\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) =\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau_v) +\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau_h)$. Notice that these definitions are independent of the choice of the vertical edge $\tau_v$ and horizontal edge $\tau_h$ which contain $\gamma$. The bi-degree of a vertex $\gamma\in \mathcal{T}_{0}$ is $\delta (\gamma)= (\mathbf{r}_{h} (\gamma) + 1, \mathbf{r}_{v} (\gamma) + 1)$. Here are the basic dimension relations that we will use to analyse the spline functions on a T-mesh. \begin{lemma} \label{lem:dim}{$\mbox{}$} \begin{itemizedot} \item $\dim R_{m, m'} = (m + 1) \times (m' + 1)$. \item $\dim \left( R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau) \right) = \left\{ \begin{array}{ll} (m + 1) \times (\min (\mathbf{r}(\tau),m') + 1) & \tmop{if} \tau \in \mathcal{T}_{1}^{h}\\ (\min (\mathbf{r}(\tau),m) + 1) \times (m' + 1) & \tmop{if} \tau \in \mathcal{T}_{1}^{v} \end{array} \right.$ \item $\dim \left( R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) \right) = (\min (\mathbf{r}_{h}(\gamma),m) + 1) \times (\min (\mathbf{r}_{v} (\gamma),m') + 1)$ for all $\gamma\in \mathcal{T}_{0}$. \end{itemizedot} \end{lemma} \begin{proof} To obtain these formulae, we directly check that \begin{itemize} \item a basis of $R_{m,m'}$ is the set of monomials $s^{i} t^{j}$ with $0 \le i \le m$, $0 \le j\le m'$; \item a basis of $R_{m,m'}/\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau)$ is the set of monomials $s^{i} t^{j}$ with $0 \le i \le m$ and $0 \le j\le \min (\mathbf{r} (\tau),m')$ if $\tau \in \mathcal{T}_{1}^{h}$ (resp. $0 \le i \le \min (\mathbf{r} (\tau),m)$, $0 \le j\le m'$ if $\tau \in \mathcal{T}_{1}^{v}$); \item a basis of $R_{m,m'}/\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma)$ is the set of monomials $s^{i} t^{j}$ with $0 \le i \le \min (\mathbf{r}_{h} (\gamma),m)$, $0 \le j\le \min (\mathbf{r}_{v} (\gamma),m')$; \end{itemize} these descriptions hold since the ideal of an edge $\tau \in \mathcal{T}_{1}$ is, up to a translation, $(s^{\mathbf{r}(\tau)+1})$ or $(t^{\mathbf{r}(\tau)+1})$. \end{proof} An algebraic way to characterise the $C^{\mathbf{r}}$-smoothness is given by the next lemma: \begin{lemma}[\cite{b-htssg-88}] \label{lem:reg}Let $\tau \in \mathcal{T}_1$ and let $p_1, p_2$ be two polynomials. Their derivatives coincide on $\tau$ up to order $\mathbf{r} (\tau)$ iff $p_1 - p_2 \in \mathfrak{I}^{\mathbf{r}} (\tau)$. \end{lemma} In the following, we will need algebraic properties of univariate polynomials. We denote by $U=\mathbbm{R}[u]$ the space of univariate polynomials in the variable $u$ with coefficients in $\mathbbm{R}$. Let $U_{n}$ denote the space of polynomials of $U$ of degree $\leq n$. For a polynomial $g\in U$ of degree $d$ and an integer $n\geq d$, $g\, U_{n-d}$ is the vector space of multiples of $g$ which are of degree $\leq n$.
For polynomials $g_{1}, \ldots, g_{k}\in U$ respectively of degree $d_{1},\ldots, d_{k}$ and an integer $n\geq \max_{i=1,\ldots,k} d_{i}$, $\sum_{i=1}^{k}\, g_{i}\, U_{n-d_{i}}$ is the vector space of sums of multiples of $g_{i}$ of degree $\leq n$. We will use the apolar product defined on $U_{n}$ by: \[ \langle f, g \rangle_n = \sum_{i = 0}^n \binom{n}{i} f_i g_i \] where $f = \sum_{i = 0}^n f_i u^i $, $g = \sum_{i = 0}^n g_i u^i \in \mathbbm{R}[u]_n$. One of the properties that we will need is the following \cite{Salmon1885}, \cite{KungRota84}, \cite{EhrRota93}: \begin{lemma} \label{lem::polar}Let $g \in U_n$, $d < n$ and $a \in \mathbbm{R}$. Then $g$ is orthogonal to $(u - a)^d U_{n - d}$ for the apolar product iff \[ \partial^k g (a) = 0, \quad k = 0, \ldots, n - d. \] \end{lemma} \begin{proposition} \label{dim:apolar}Let $a_1, \ldots, a_l \in \mathbbm{R}$ be $l$ distinct points and $d_{1}, \ldots, d_{l} \in \mathbbm{N}$. Then \[ \dim \left( \sum_{i = 1}^l (u - a_i)^{d_{i}} U_{n - d_{i}} \right) = \min \Big(n + 1, \sum_{i=1}^{l} (n - d_{i} + 1)\Big) . \] \end{proposition} \begin{proof} In order to compute the dimension of $V := \sum_{i = 1}^l (u - a_i)^{d_{i}} U_{n - d_{i}} \subset U_n$, we compute the dimension of the orthogonal $V^{\bot}$ of $V$ in $U_n$ for the apolar product. Let $g \in U_n$ be an element of $V^{\bot}$. By Lemma \ref{lem::polar}, $\partial^k g (a_i) = 0$ for $k = 0, \ldots, n - d_{i}$ and $i = 1, \ldots, l$. In other words, $g$ is divisible by $(u - a_i)^{n - d_{i} + 1}$ for $i = 1, \ldots, l$. As the points $a_i$ are distinct, $g$ is divisible by \[ \Pi := \prod_{i = 1}^l (u - a_i)^{n - d_{i} + 1} . \] Conversely, any multiple of $\Pi$ of degree $\leqslant n$ is in $V^{\bot}$. Thus $V^{\bot} = (\Pi)\cap U_n$. This vector space $V^{\bot}$ of multiples of $\Pi$ in degree $n$ is of dimension $\max (0, n + 1 - \deg (\Pi))$, so $V$ is of dimension \[ n + 1 - \max (0, n + 1 - \deg (\Pi)) = \min (n + 1, \deg (\Pi)) = \min \Big(n + 1, \sum_{i=1}^{l} (n - d_{i} + 1)\Big). \] \end{proof} We are going to use an equivalent formulation of this result: \begin{equation} \dim \left( U_n / \sum_{i = 1}^l (u - a_i)^{d_{i}} U_{n - d_{i}} \right) = \Big(n + 1 - \sum_{i=1}^{l} (n - d_{i} + 1)\Big)_+ \end{equation} where for any $a \in \mathbb{Z}$, $a_{+}=\max (0, a)$. A similar result is proved in \cite[Lemma 2]{LiWaZha06} when all $d_{i}$ are equal to $d$, by analyzing the coefficient matrix of generators of $\sum_{i = 1}^l (u - a_i)^d U_{n - d}$. \subsection{Maximal interior segments} In order to simplify the analysis of $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$, we introduce the following definitions: For any interior edge $\tau \in \mathcal{T}_1^o$, we define $\rho(\tau)$ as the {\tmem{maximal segment}} made of edges $\in \mathcal{T}_1^o$ of the same direction as $\tau$, which contains $\tau$ and such that their union is connected. We say that the maximal segment $\rho(\tau)$ is interior if it does not intersect the boundary of $\Omega$. As all the edges belonging to a maximal segment $\rho$ have the same supporting line, we can define $\Delta^{\mathbf{r}} (\rho) = \Delta^{\mathbf{r}} (\tau)$ for any edge $\tau$ belonging to $\rho= \rho(\tau)$. The set of all maximal interior segments is denoted by $\tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$. The set of horizontal (resp. vertical) maximal interior segments of $\mathcal{T}$ is denoted by $\tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$ (resp. $\tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$).
The degree of $\rho \in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$ is by definition $\delta ( \rho) = \delta (\tau)$ for any $\tau \subset \rho$. For each interior vertex $\gamma \in \mathcal{T}_0^o$, which is the intersection of an horizontal edge $\tau_h \in \mathcal{T}_1^o$ with a vertical edge $\tau_v \in \mathcal{T}_1^o$, let $\rho_h (\gamma)$ (resp. $\rho_v (\gamma)$) be the corresponding horizontal (resp. vertical) maximal segment. We denote by $\Delta_h^{\mathbf{r}} (\gamma) = \Delta^{\mathbf{r}} (\tau_h)$ (resp. $\Delta_v^{\mathbf{r}} (\gamma) = \Delta^{\mathbf{r}} (\tau_v)$) the equation of the corresponding supporting line raised to the power $\mathbf{r}_{v} (\gamma) + 1$ (resp. $\mathbf{r}_{h} (\gamma) + 1$). Notice that the intersection of two distinct maximal interior segments is either a T-vertex or a crossing vertex. We say that $\rho \in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$ is {\tmem{blocking}} $\rho' \in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$ if one of the end points of $\rho'$ is in the interior of $\rho$. \begin{example}\label{ex:1.9} In the figure below, the maximal interior segments are indicated by solid segments. \begin{center} \includegraphics{tspline-3.eps} \end{center} In this example, $\rho_1$ is blocking $\rho_4$ and $\rho_2$ is blocking $\rho_3$. \end{example} \section{Topological chain complexes}\label{sec:2} In this section, we describe the tools from algebraic topology that we will use. For more details, see e.g. \cite{Spanier66}, \cite{Hatcher02}. \subsection{Definitions} We consider the following complexes: {\small \[ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \begin{array}{lllllllll} & & & & 0 & & 0 & & \\ & & & & \downarrow & & \downarrow & & \\ \mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o) : & & 0 & \rightarrow & \bigoplus_{\tau \in \mathcal{T}_1^o} [\tau]\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau) & \rightarrow & \bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma]\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) & \rightarrow & 0\\ & & \downarrow & & \downarrow & & \downarrow & & \\ \mathfrak{R}_{m, m'} (\mathcal{T}^o) : & & \bigoplus_{\sigma \in \mathcal{T}_2} [\sigma] R_{m, m'} & \rightarrow & \bigoplus_{\tau \in \mathcal{T}_1^o} [\tau] R_{m, m'} & \rightarrow & \bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma] R_{m, m'} & \rightarrow & 0\\ & & \downarrow & & \downarrow & & \downarrow & & \\ \mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o) : & & \bigoplus_{\sigma \in \mathcal{T}_2} [\sigma] R_{m, m'} & \rightarrow & \bigoplus_{\tau \in \mathcal{T}_1^o} [\tau] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau) & \rightarrow & \bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) & \rightarrow & 0\\ & & \downarrow & & \downarrow & & \downarrow & & \\ & & 0 & & 0 & & 0 & & \end{array} \] } The different vector spaces of these complexes are obtained as the components in bi-degree $\leqslant (m, m')$ of $R$-modules generated by (formal) independent elements $[\sigma], [\tau], [\gamma]$ indexed respectively by the faces, the interior edges and the interior points of $\mathcal{T}$. An oriented edge $\tau \in \mathcal{T}_1$ is represented as $[\tau] = [a b]$, where $a, b \in \mathcal{T}_0$ are its end points. The opposite edge is represented by $[b a]$. By convention, $[b a] = - [a b]$.
The maps of the complex $\mathfrak{R}_{m, m'} (\mathcal{T}^o)$ are defined as follows: \begin{itemizedot} \item for each face $\sigma \in \mathcal{T}_2$ with its counter-clockwise boundary formed by edges $\tau_1 = a_1 a_2, \ldots, \tau_l = a_l a_1$, $\partial_2 (\sigma) = [\tau_1] \oplus \cdots \oplus [\tau_l] = [a_1 a_2] \oplus \cdots \oplus [a_l a_1]$; \item for each interior edge $\tau \in \mathcal{T}_1^o$ from $\gamma_1$ to $\gamma_2 \in \mathcal{T}_0$, $\partial_1 ([\tau]) = [\gamma_2] - [\gamma_1]$ where $[\gamma] = 0$ if $\gamma \not\in \mathcal{T}_0^o$; \item for each interior point $\gamma \in \mathcal{T}_{0}^{o}$, $\partial_0 ([\gamma]) = 0$. \end{itemizedot} By construction, we have $\partial_i \circ \partial_{i + 1} = 0$ for $i = 0, 1$. The maps of the complex $\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)$ are obtained from those of $\mathfrak{R}_{m, m'} (\mathcal{T}^o)$ by restriction, those of the complex $\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)$ are obtained by taking the quotient by the corresponding vector spaces of $\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)$. The corresponding differentials of the complex are denoted $\bar{\partial}_i$. For each column of the diagram, the vertical maps are respectively the inclusion map and the quotient map. The complex $\mathfrak{R}_{m, m'} (\mathcal{T}^o)$ is also known as the chain complex of $\mathcal{T}$ relative to its boundary $\partial \mathcal{T}$. \begin{example} We consider the following subdivision $\mathcal{T}$ of a rectangle $\Omega$: \begin{center} \includegraphics{tspline-5.eps} \end{center} We have \begin{itemizedot} \item $\partial_2 ([\sigma_1]) = [\gamma_1 \beta_1] + [\beta_3 \gamma_1], \partial_2 ([\sigma_2]) = [\gamma_1 \beta_2] + [\beta_1 \gamma_1], \partial_2 ([\sigma_3]) = [\gamma_1 \beta_3] + [\beta_2 \gamma_1]$, \item $\partial_1 ([\beta_1 \gamma_1]) = [\gamma_1], \partial_1 ([\beta_2 \gamma_1]) = [\gamma_1], \partial_1 ([\beta_3 \gamma_1] = [\gamma_1]$, \item $\partial_0 ([\gamma_1]) = 0$. \end{itemizedot} This defines the following complex: \[ \begin{array}{lllllllll} \mathfrak{R}_{m, m'} (\mathcal{T}) : & & \bigoplus_{i = 1}^3 [\sigma_i] R_{m, m'} & \rightarrow & \bigoplus_{i = 1}^3 [\beta_i \gamma_1] R_{m, m'} & \rightarrow & [\gamma_1] R_{m, m'} & \rightarrow & 0 \end{array} \] The matrices of these maps in the canonical (monomial) bases are \[ [\partial_2] = \left(\begin{array}{ccc} - I & I & 0\\ 0 & - I & I\\ I & 0 & - I \end{array}\right), [\partial_1] = \left(\begin{array}{ccc} I & I & I \end{array}\right) \] where $I$ is the identity matrix of size $(m + 1) \times (m' + 1)$ (ie. the dimension of $R_{m, m'}$). Let us consider the case where $\gamma_{1}= (0,0)$, $(m,m')= (2,2)$ and $\mathbf{r}$ is the constant distribution $(1,1)$ on $\mathcal{T}$. The matrices of the complex $\mathfrak{S}_{2,2}^{1,1} (\mathcal{T})$ are \[ [ \bar{\partial}_2] = \left(\begin{array}{ccc} - [\Pi_1] & [\Pi_1] & 0\\ 0 & - [\Pi_2] & [\Pi_2]\;\\ \;[\Pi_3] & 0 & - [\Pi_3] \end{array}\right), [\bar{\partial}_1] = \left(\begin{array}{ccc} [P_1] & [P_2] & [P_3] \end{array}\right) \] where $[\Pi_i]$ (resp. 
$[P_i]$) are the matrices of the projections $$ \begin{array}{rclrcl} \Pi_{1}=\Pi_{3}: R_{2,2}&\rightarrow & R_{2,2} /(s^{2}) & \Pi_{2}: R_{2,2}&\rightarrow & R_{2,2} /(t^{2}) \\ p& \mapsto & p \mod s^{2} & p& \mapsto & p \mod t^{2}\\ \\ P_{1}=P_{3} : R_{2,2}/ (s^{2})&\rightarrow & R_{2,2} /(s^{2}, t^{2}) & P_{2} : R_{2,2}/(t^{2})&\rightarrow & R_{2,2} /(s^{2}, t^{2}) \\ p \mod s^{2}& \mapsto & p \mod (s^{2},t^{2}) & p\mod t^{2}& \mapsto & p \mod (s^{2},t^{2}). \end{array} $$ The matrices $\Pi_{i}$ are of size $6\times 9$ and the matrices $P_{i}$ are of size $4\times 6$ (here $\dim R_{2,2}=9$, $\dim R_{2,2}/(s^{2}) = \dim R_{2,2}/(t^{2})=6$ and $\dim R_{2,2}/(s^{2},t^{2})=4$). \end{example} \subsection{Their homology} In this section, we analyse the homology of the different complexes. The homology of a chain complex of a triangulation of a (planar) domain is well known \cite[Chap.~4]{Spanier66}, \cite[Chap.~2]{Hatcher02}. Since it is not explicit in the literature, we give in the appendix a simple proof of the exactness of $\mathfrak{R}_{m, m'} (\mathcal{T}^o)$. \subsubsection{The 0-homology} We start by analysing the homology on the vertices. \begin{lemma} $H_0 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = H_0 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = 0.$ \end{lemma} \begin{proof} By Proposition \ref{proph0r} in the appendix, we have $H_0 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$. Taking the quotient by $\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau)$, we still get that $\bar{\partial}_1$ is surjective so that $H_0 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = 0$. \end{proof} Let us describe in more detail $$ H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = \bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma]\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) / \partial_1 ( \bigoplus_{\tau \in \mathcal{T}_1^o} [\tau]\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau)). $$ We consider the free $R$-module generated by the {\em half-edge} elements $[\gamma| \tau]$, for all interior edges $\tau \in \mathcal{T}_1^o$ and all vertices $\gamma \in \tau$. By convention $[\gamma| \tau] \equiv 0$ if $\gamma \in \partial \Omega$. For $\gamma \in \mathcal{T}_0^o$, let $E_{h} (\gamma)$ (resp. $E_{v} (\gamma)$) be the set of horizontal (resp. vertical) interior edges that contain $\gamma$ and let $E (\gamma) = E_{h} (\gamma) \cup E_{v} (\gamma)$. We consider first the map \begin{eqnarray*} \varphi_{\gamma} : \bigoplus_{\tau \in E (\gamma)} [\gamma| \tau]\, R_{(m, m') - \delta (\tau)} & \rightarrow & [\gamma]\,\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma)\\\; [\gamma| \tau] & \mapsto & [\gamma]\, \Delta^{\mathbf{r}} (\tau) \end{eqnarray*} By definition of $\mathfrak{I}^{\mathbf{r}}_{m, m'}(\gamma)$, this map is surjective. Its kernel is denoted $\mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma)$. Let $P_h (\gamma)$ (resp. $P_v (\gamma)$) be the set of pairs $(\tau,\tau')$ of distinct horizontal (resp. vertical) interior edges which contain $\gamma$ (with $(\tau,\tau')$ identified with $(\tau',\tau)$). We denote by $P (\gamma) = P_{h} (\gamma) \cup P_{v} (\gamma)$. If $\gamma$ is a T-vertex, one of the two sets is empty and the other is a singleton containing one pair. If $\gamma$ is a crossing vertex, each set is a singleton. The following proposition gives an explicit description of the kernel $\mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma)$, which we will exploit hereafter.
\begin{proposition} \label{prop:2.3} \begin{eqnarray*} \mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma) & = & \sum_{(\tau, \tau') \in P (\gamma)} ([\gamma| \tau] - [\gamma| \tau']) R_{(m, m') - \delta (\tau)} \\ & + & \sum_{\tau \in E_h (\gamma), \tau' \in E_v (\gamma)} ([\gamma| \tau]\, \Delta^{\mathbf{r}} (\tau') - [\gamma| \tau']\, \Delta^{\mathbf{r}} (\tau)) R_{(m, m') - \delta (\gamma)} \end{eqnarray*} \end{proposition} \begin{proof} Let us suppose first that $\gamma$ is a crossing vertex. We denote by $\tau_1, \tau_2$ the horizontal edges, $\tau_3, \tau_4$ the vertical edges at $\gamma$. The matrix of the map $\varphi_{\gamma}$ in the basis $[\gamma| \tau_i]$ is \[ [\varphi_{\gamma}] = \left(\begin{array}{cccc} \Delta & \Delta & \Delta' & \Delta' \end{array}\right) \] where $\Delta = \Delta^{\mathbf{r}} (\tau_1) = \Delta^{\mathbf{r}} (\tau_{2})$, \ $\Delta' = \Delta^{\mathbf{r}} (\tau_3) = \Delta^{\mathbf{r}} (\tau_4) $. Since $\Delta$ and $\Delta'$ have no common factor, the kernel of the matrix $[\varphi_{\gamma}]$ is generated by the elements $[\gamma| \tau_1] - [\gamma| \tau_2]$, $[\gamma| \tau_2]\, \Delta' - [\gamma| \tau_3]\, \Delta $, $[\gamma| \tau_3] - [\gamma| \tau_4]$, which give the description of $\mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma)$ in bi-degree $\leq (m,m')$. A similar proof applies when there is no horizontal or vertical pair of distinct edges at $\gamma$. This proves the result. \end{proof} We use the maps $(\varphi_{\gamma})_{\gamma \in \mathcal{T}_0^o}$ to define \begin{eqnarray*} \varphi: \bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E (\gamma)} [\gamma| \tau]\, R_{(m, m') - \delta (\tau)} & \rightarrow & \bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma]\,\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma), \end{eqnarray*} so that we have the following exact sequence: \[ 0 \rightarrow \bigoplus_{\gamma \in \mathcal{T}_0^o} \mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma) \rightarrow \bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E (\gamma)} [\gamma| \tau]\, R_{(m, m') - \delta (\tau)} \rightarrow \bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma]\,\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) \rightarrow 0 \] Using this exact sequence, we can now identify $\bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma]\,\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma)$ with the quotient \[ \bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E (\gamma)} [\gamma| \tau]\, R_{(m, m') - \delta (\tau)} / \sum_{\gamma \in \mathcal{T}_0^o} \mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma) . \] The next proposition uses this identification and Proposition \ref{prop:2.3} to describe more explicitly $H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))$: \begin{proposition} We have \begin{eqnarray*} H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) & = & \bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E (\gamma)} [\gamma| \tau]\, R_{(m, m') - \delta (\tau)}\\ & \Bigg \slash & \left( \sum_{\gamma \in \mathcal{T}_0^o} \sum_{(\tau, \tau') \in P (\gamma)} ([\gamma| \tau] - [\gamma| \tau']) R_{(m, m') - \delta (\tau)} \right.\\ && + \sum_{\tau = (\gamma, \gamma') \in \mathcal{T}_1^o} ([\gamma| \tau] - [\gamma'| \tau]) R_{(m, m') - \delta (\tau)}\\ && \left. + \sum_{\gamma \in \mathcal{T}_0^o} \sum_{\tau \in E_{h} (\gamma), \tau' \in E_{v} (\gamma)} ([\gamma| \tau] \Delta^{\mathbf{r}} (\tau') - [\gamma| \tau'] \Delta^{\mathbf{r}} (\tau)) R_{(m, m') - \delta (\gamma)} \right) .
\end{eqnarray*} \end{proposition} \begin{proof} The map \begin{eqnarray*} \partial_1 : \bigoplus_{\tau \in \mathcal{T}_1^o} [\tau]\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau) &\rightarrow&\bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma]\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) \end{eqnarray*} lifts to a map \begin{eqnarray*} \tilde{\partial}_1 : \bigoplus_{\tau \in \mathcal{T}_1^o} [\tau] R_{(m, m') - \delta (\tau)} & \rightarrow & \bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E ({\gamma})} [\gamma| \tau] R_{(m, m') - \delta (\tau)} \\ \;[\tau] & \mapsto & [\gamma| \tau] - [\gamma'| \tau] \end{eqnarray*} for $\tau \in \mathcal{T}_1^o$ with end points $\gamma, \gamma'$, so that the image of $\partial_1$ lifts in $\bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E ({\gamma})} [\gamma| \tau] R_{(m, m') - \delta (\tau)} $ to \begin{eqnarray*} \tmop{im} \tilde{\partial}_1 & = & \sum_{\tau \in \mathcal{T}_1^o} ([\gamma| \tau] - [\gamma'| \tau]) R_{(m, m') - \delta (\tau)} . \end{eqnarray*} Consequently, \begin{eqnarray*} H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) & = & \bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E (\gamma)} [\gamma| \tau]\, R_{(m, m') - \delta (\tau)} \Bigg \slash \left( \tmop{im} \tilde{\partial}_1 + \sum_{\gamma \in \mathcal{T}_0^o} \mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma) \right), \end{eqnarray*} which yields the desired description of $H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))$. \end{proof} In the next proposition, we simplify further the description of $H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))$: \begin{proposition}\label{prop:3.5} $$ \begin{array}{lll} H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) & = & \oplus_{\rho \in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})} \, [\rho]\, R_{(m, m') - \delta (\rho)}\\ & \big \slash & \left( \sum_{\gamma \in \mathcal{T}_0^o} ( [\rho_v(\gamma)]\, \Delta^{\mathbf{r}}_h (\gamma) - [\rho_h (\gamma)]\, \Delta^{\mathbf{r}}_v (\gamma) ) R_{(m , m')-\delta (\gamma)} \right) . \end{array} $$ \end{proposition} \begin{proof} Let $B = \bigoplus_{\gamma \in \mathcal{T}_0^o} \bigoplus_{\tau \in E ({\gamma})} [\gamma| \tau]\, R_{(m, m') - \delta (\tau)}$, $K = \tmop{im} \tilde{\partial}_1 + \sum_{\gamma \in \mathcal{T}_0^o} \mathfrak{K}^{\mathbf{r}}_{m, m'} (\gamma)$ and $$ \begin{array}{lll} K' & = & \left( \sum_{\gamma \in \mathcal{T}_0^o} \sum_{(\tau, \tau') \in P (\gamma)} ([\gamma| \tau] - [\gamma| \tau']) R_{(m, m') - \delta (\tau)}\right.\\ & & \left. + \sum_{\tau = (\gamma, \gamma') \in \mathcal{T}_1^o} ([\gamma| \tau] - [\gamma'| \tau]) R_{(m, m') - \delta (\tau)} \right). \end{array}$$ As $K' \subset K \subset B$, we have $B / K \equiv (B / K') / (K / K')$. Taking the quotient by $K'$ means that we identify the horizontal (resp. vertical) edges which share a vertex. Thus all horizontal (resp. vertical) edges which are contained in a maximal segment $\rho$ of $\mathcal{T}$ are identified with a single element, which we denote $[\rho]$. As $[\gamma| \tau] = 0$ if $\gamma \in \partial \Omega$, we also have $[\rho] = 0$ if the maximal segment $\rho$ intersects the boundary $\partial\Omega$. Under this identification, the remaining generators $([\gamma| \tau]\, \Delta^{\mathbf{r}} (\tau') - [\gamma| \tau']\, \Delta^{\mathbf{r}} (\tau))$ of $K/K'$ become, up to sign, the elements $[\rho_v(\gamma)]\, \Delta^{\mathbf{r}}_h (\gamma) - [\rho_h (\gamma)]\, \Delta^{\mathbf{r}}_v (\gamma)$, which yields the desired description of $H_0(\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))$. \end{proof} \begin{definition} Let $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = \dim H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) .$ \end{definition} \subsubsection{The 1-homology} We consider now the homology on the edges. We use the property that $H_1 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$ (see Proposition \ref{proph1r} in the appendix).
\begin{proposition} $H_1 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))$. \end{proposition} \begin{proof} As $H_0 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$ and $H_1 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$, we deduce from the long exact sequence (see Appendix \ref{sec:homology}) \[ \cdots \rightarrow H_1 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) \rightarrow H_1 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) \rightarrow H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) \rightarrow H_0 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) \rightarrow \cdots \] that $H_1 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) \simeq H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))$. \end{proof} \subsubsection{The 2-homology} Finally, the homology on the 2-faces will give us information on the spline space $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$. We have the following result (proved in Proposition \ref{proph2r} in the appendix): \begin{proposition} $H_2 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = R_{m, m'}.$ \end{proposition} The following proposition relates the spline space $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$ to an homology module. \begin{proposition} $H_2 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = \ker \bar{\partial}_2 =\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}).$ \end{proposition} \begin{proof} An element of $H_2 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = \ker \overline{\partial}_2$ is a collection of polynomials $(p_{\sigma})_{\sigma \in \mathcal{T}_2}$ with $p_{\sigma} \in R_{m, m'}$ and $p_{\sigma} \equiv p_{\sigma'} \tmop{mod} \mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau)$ if $\sigma$ and $\sigma'$ share the (interior) edge $\tau$. By Lemma \ref{lem:reg}, this implies that the piecewise polynomial function which is $p_{\sigma}$ on $\sigma$ and $p_{\sigma'}$ on $\sigma'$ is of class $C^{\mathbf{r}}$ across the edge $\tau$. As this is true for all interior edges, $(p_{\sigma})_{\sigma \in \mathcal{T}_2} \in \ker \overline{\partial}_2 $ is a piecewise polynomial function of $R_{m, m'}$ which is of class $C^{\mathbf{r}}$, that is, an element of $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$. \end{proof} \section{Lower and upper bounds on the dimension}\label{sec:3} In this section, we present the main results on the dimension of the spline space $\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$. \begin{theorem} \label{thm:dim} Let $\mathcal{T}$ be a T-mesh and let $\mathbf{r}$ be a smoothness distribution on $\mathcal{T}$. Then \begin{eqnarray}\label{eq:3.1} {\dim \mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})} & = & \sum_{\sigma\in \mathcal{T}_{2}} (m + 1) (m' + 1) \label{eq:dim} \\ & - & \sum_{\tau\in \mathcal{T}_{1}^{o,h}} (m + 1) (\mathbf{r} (\tau) + 1) - \sum_{\tau\in \mathcal{T}_{1}^{o,v}} (m' + 1) (\mathbf{r} (\tau) + 1) \nonumber\\ &+& \sum_{\gamma\in \mathcal{T}_{0}^{o}}(\mathbf{r}_{h} (\gamma) + 1) ( \mathbf{r}_{v} (\gamma) + 1) \nonumber\\ & + & h_{m, m'}^{\mathbf{r}} (\mathcal{T}). \nonumber \end{eqnarray} where $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = \dim H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))$.
\end{theorem} \begin{proof} The complex {\small \[ \mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o) : \oplus_{\sigma \in \mathcal{T}_2} [\sigma] R_{m, m'} \longrightarrow \oplus_{\tau \in \mathcal{T}_1^o} [\tau] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau) \longrightarrow \oplus_{\gamma \in \mathcal{T}_0^o} [\gamma] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma) \longrightarrow 0 \] } induces the following relation \[ \dim (\oplus_{\sigma \in \mathcal{T}_2} [\sigma] R_{m, m'}) - \dim (\oplus_{\tau \in \mathcal{T}_1^o} [\tau] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau)) + \dim (\oplus_{\gamma \in \mathcal{T}_0^o} [\gamma] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma)) \] \[ = \dim (H_2 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))) - \dim (H_1 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))) + \dim (H_0 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o))). \] As $H_2 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) =\mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$, $H_0 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = 0$ and $H_1 (\mathfrak{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)) = H_0 (\mathfrak{I}^{\mathbf{r}}_{m, m'} (\mathcal{T}^o))$, we deduce that $$ \begin{array}{lll} \dim \mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T}) & = & \dim (\oplus_{\sigma \in \mathcal{T}_2} [\sigma] R_{m, m'}) - \dim (\oplus_{\tau \in \mathcal{T}_1^o} [\tau] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\tau))\\ & + & \dim (\oplus_{\gamma \in \mathcal{T}_0^o} [\gamma] R_{m, m'} /\mathfrak{I}^{\mathbf{r}}_{m, m'} (\gamma)) + \dim (H_0 (\mathfrak{I}^{\mathbf{r}}_{m, m'} (\mathcal{T}^o))), \end{array} $$ which yields the dimension formula \eqref{eq:3.1} using Lemma \ref{lem:dim}. \end{proof} As an immediate corollary of this theorem and of Proposition \ref{prop:3.5}, we deduce the following result: \begin{corollary} \label{cor:dim:exact} If the T-mesh $\mathcal{T}$ has no maximal interior segments then $h_{m, m'}^{\mathbf{r}} (\mathcal{T})=0$. \end{corollary} In the case of a constant smoothness distribution, Theorem \ref{thm:dim} reads as follows: \begin{theorem} \label{thm:dim:cst} Let $\mathcal{T}$ be a T-mesh and let $\mathbf{r}=(r,r')$ be a constant smoothness distribution on $\mathcal{T}$. Then \begin{eqnarray} {\dim \mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})} & = & (m + 1) (m' + 1) f_2 \label{eq:dim:cst} \\ & - & \left((m + 1) (r' + 1) f_1^h + (m' + 1) (r + 1) f_1^v \right) \nonumber\\ &+& (r + 1) (r' + 1) f_0 \nonumber\\ & + & h_{m, m'}^{\mathbf{r}} (\mathcal{T}), \nonumber \end{eqnarray} where \begin{itemizedot} \item $f_2$ is the number of 2-faces in $\mathcal{T}_2$, \item $f_1^h$ (resp. $f_1^v$) is the number of horizontal (resp. vertical) interior edges in $\mathcal{T}_1^o$, \item $f_0$ is the number of interior vertices in $\mathcal{T}_0^o$, \item $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = \dim H_0 (\mathfrak{I}_{m, m'}^{\mathbf{r}} (\mathcal{T}^o)).$ \end{itemizedot} \end{theorem} We are now going to bound $h_{m, m'}^{\mathbf{r}} (\mathcal{T})$ for general T-meshes. \begin{definition} Let $\iota$ be an ordering of $\tmop{\mathrm{\textsc{mis}}}(\mathcal{T})$, that is, a map from $\tmop{\mathrm{\textsc{mis}}}(\mathcal{T})$ to $\mathbb{N}$. For $\rho \in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$, let $\Gamma_{\iota}(\rho)$ be the set of vertices $\gamma$ of $\rho$ which are not on a maximal interior segment $\rho'\in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$ with $\iota (\rho') > \iota (\rho)$.
The number of elements of $\Gamma_{\iota}(\rho)$ is denoted $\lambda_{\iota} (\rho)$. \end{definition} We now define the weight of a maximal interior segment. \begin{definition} For $\rho\in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$, let \begin{itemize} \item $\omega_{\iota} (\rho) = \sum_{\gamma\in \Gamma_{\iota}(\rho)} (m-\mathbf{r}_{v} (\gamma))$ if $\rho\in \tmop{\mathrm{\textsc{mis}}}_{h} (\mathcal{T})$; \item $\omega_{\iota} (\rho) = \sum_{\gamma\in \Gamma_{\iota}(\rho)} (m'-\mathbf{r}_{h} (\gamma))$ if $\rho\in \tmop{\mathrm{\textsc{mis}}}_{v} (\mathcal{T})$. \end{itemize} We call it the {\em weight} of $\rho$. \end{definition} As in the usual spline terminology, for an interior point $\gamma\in \mathcal{T}_{0}^{o}$, we call $m'-\mathbf{r}_{h} (\gamma)$ (resp. $m-\mathbf{r}_{v} (\gamma)$) the horizontal (resp. vertical) multiplicity of $\gamma$. If $\rho$ is horizontal (resp. vertical), the weight of $\rho$ is the sum of the vertical (resp. horizontal) multiplicities of the vertices $\gamma \in \Gamma_{\iota} (\rho)$. Notice that if $\mathbf{r}= (r,r')$ is a constant smoothness distribution on $\mathcal{T}$, then $\omega_{\iota} (\rho)= (m -r) \lambda_{\iota} (\rho)$ for $\rho \in \tmop{\mathrm{\textsc{mis}}}_{h} (\mathcal{T})$ (resp. $\omega_{\iota} (\rho)= (m' -r') \lambda_{\iota} (\rho)$ for $\rho \in \tmop{\mathrm{\textsc{mis}}}_{v} (\mathcal{T})$). \begin{example} We consider the T-mesh of Example \ref{ex:1.9} with $(m,m') = (2,2)$, the constant smoothness distribution $\mathbf{r}= (1,1)$ and the ordering of the maximal interior segments $\iota (\rho_{i})=i$ for $i=1, \ldots, 4$. Then we have \begin{itemize} \item $\omega_{\iota} (\rho_{1}) = (2-1) \times 2 =2$, \item $\omega_{\iota} (\rho_{2}) = (2-1) \times 2 =2$, \item $\omega_{\iota} (\rho_{3}) = (2-1) \times 3 =3$, \item $\omega_{\iota} (\rho_{4}) = (2-1) \times 3 =3$, \end{itemize} since the multiplicity of a vertex is $2-1=1$ and the interior points of $\rho_{1}, \rho_{2}$ are not in $\Gamma_{\iota} (\rho_{1})$ or $\Gamma_{\iota} (\rho_{2})$. \end{example} In the following, we will drop the index $\iota$ to simplify the notation, assuming that the ordering $\iota$ is fixed. Then we have the following theorem: \begin{theorem}\label{thm:h}\label{thm:dim:bound} Let $\mathcal{T}$ be a T-mesh and let $\mathbf{r}$ be a smoothness distribution on $\mathcal{T}$. Then \begin{eqnarray*} 0\ \leq\ h_{m, m'}^{\mathbf{r}} (\mathcal{T}) & \leq & \sum_{\rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})} (m + 1 - \omega (\rho))_+ \times (m' - \mathbf{r}(\rho))\\ & + & \sum_{\rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})} (m - \mathbf{r} (\rho)) \times \left( m' + 1 - \omega (\rho) \right)_+. \end{eqnarray*} \end{theorem} \begin{proof} Let $\rho_{1},\ldots, \rho_{l}$ be the maximal interior segments of $\mathcal{T}$. By Proposition \ref{prop:3.5}, $h_{m, m'}^{\mathbf{r}} (\mathcal{T})$ is the dimension of the quotient in bi-degree $\leq (m, m')$ of the module $M := \oplus_{i= 1}^l [\rho_i]\, R$ by the module $K$ generated by the following relations: for each vertex $\gamma \in \mathcal{T}_0^{o}$ which is on a maximal interior segment, \begin{itemizedot} \item $\Delta^{\mathbf{r}} (\rho_j) [\rho_{i}] - \Delta^{\mathbf{r}} (\rho_i) [\rho_j]$ if $\gamma$ is the intersection of the maximal interior segments $\rho_i$ and $\rho_j$, \item $\Delta^{\mathbf{r}} (\rho) [\rho_{i}]$ if $\gamma$ is the intersection of the maximal interior segment $\rho_i$ with another maximal segment $\rho$ intersecting $\partial \Omega$.
\end{itemizedot} To compute the dimension of $M/K$ in bi-degree $\leqslant (m,m')$, we use a grading on $M$ given by the indices of the segments. For $r:=\sum_i p_i [\rho_i] \in M$ (with $p_i \in R_{(m, m') - \delta (\rho_i)}$), let $\tmop{In} (r)$ be the element $p_{i_0} [\rho_{i_0}]$, where $i_0$ is the maximal index such that $p_{i_0} \neq 0$; we call $\tmop{In} (r)$ the initial of $r$. Let $\tmop{In} (K)=\{ \tmop{In} (k)\mid k\in K \}$. The dimension $h_{m, m'}^{\mathbf{r}} (\mathcal{T})$ is then \[ h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = \dim (M / K) = \dim (M / \tmop{In} (K)). \] Notice that $\tmop{In} (K)$ contains the multiples in bi-degree $\leqslant (m, m')$ of \begin{itemizedot} \item $\Delta^{\mathbf{r}} (\rho_j) [\rho_{i}]$ if $\gamma$ is the intersection of two maximal interior segments $\rho_i$ and $\rho_j$ with $i > j$, \item $\Delta^{\mathbf{r}} (\rho) [\rho_{i}]$ if $\gamma$ is the intersection of the maximal interior segment $\rho_i$ with a maximal segment $\rho$ intersecting $\partial \Omega$. \end{itemizedot} Let $L_i$ be the vector space spanned by those of these initials in bi-degree $\leqslant (m, m')$ which are multiples of $[\rho_{i}]$. By definition, for each $\gamma\in \Gamma (\rho_{i})$, we have a generator $\Delta^{\mathbf{r}} (\rho)[\rho_{i}]$ in $L_{i}$, where $\{\gamma\}=\rho_{i}\cap \rho$. By Proposition \ref{dim:apolar}, $L_i$ is of dimension \begin{itemizedot} \item $\min (m + 1, \omega (\rho_{i})) \times (m' - \mathbf{r} (\rho_{i}))$ if $\rho_i \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$, \item $\min (m' + 1, \omega (\rho_{i})) \times (m - \mathbf{r} (\rho_{i}))$ if $\rho_i \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$. \end{itemizedot} Thus the dimension of $[\rho_{i}] R_{(m, m') - \delta (\rho_i)} / L_i$ is \begin{itemizedot} \item $(m + 1 - \omega (\rho_{i}))_+ \times (m' - \mathbf{r}(\rho_{i}))$ if $\rho_i \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$, \item $(m - \mathbf{r} (\rho_{i})) \times ( m' + 1 - \omega (\rho_{i}))_+$ if $\rho_i \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$. \end{itemizedot} As $\tmop{In} (K) \supset \sum_i L_i$, we have \begin{eqnarray*} h_{m, m'}^{\mathbf{r}} (\mathcal{T}) &=& \dim (M / \tmop{In} (K))\\ &\leq& \dim \Big( \oplus_i\, [\rho_{i}] \, R_{(m, m') - \delta (\rho_i)} \Big/ \sum_i L_i\Big) = \sum_i \dim \left( [\rho_{i}] \,R_{(m, m') - \delta (\rho_i)}/ L_i\right). \end{eqnarray*} This gives the announced bound on $h_{m, m'}^{\mathbf{r}} (\mathcal{T})$, using the previous computation of $\dim( [\rho_{i}]\, R_{(m, m') - \delta (\rho_i)}/ L_i)$. \end{proof} \begin{definition} The T-mesh $\mathcal{T}$ with a smoothness distribution $\mathbf{r}$ is $(k,k')$-weighted if \begin{itemize} \item $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$, $\omega (\rho)\geq k$; \item $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$, $\omega (\rho) \geq k'$. \end{itemize} \end{definition} \begin{theorem}\label{thm:weighted} Let $\mathcal{T}$ be a T-mesh and let $\mathbf{r}$ be a smoothness distribution on $\mathcal{T}$. If $\mathcal{T}$ is $(m+1,m'+1)$-weighted, then $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = 0$. \end{theorem} \begin{proof} By definition, $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$, $ \omega (\rho) \geq m + 1$ (i.e. $(m+1 - \omega (\rho))_{+}=0$) and $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$, $\omega (\rho) \geq m' + 1$ (i.e. $(m'+1 - \omega (\rho))_{+}=0$). By Theorem \ref{thm:h}, we directly deduce that $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = 0$.
\end{proof} Here is a direct corollary which generalizes a result in \cite{LiWaZha06}: \begin{corollary} Suppose that the end points of a maximal interior segment $\rho\in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$ are in $\Gamma (\rho)$. If for each horizontal (resp. vertical) maximal interior segment $\rho\in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$, the sum of the vertical (resp. horizontal) multiplicities of the end points and of the vertices of $\rho$ on a maximal segment connected to the boundary is greater than or equal to $m+1$ (resp. $m'+1$), then $h_{m, m'}^{\mathbf{r}} (\mathcal{T})=0$. \end{corollary} \begin{proof} Let $\rho$ be a maximal interior segment of $\mathcal{T}$. By hypothesis, the end points of $\rho$ are in $\Gamma (\rho)$. As any point $\gamma \in \rho$ which is also on a maximal segment connected to the boundary is in $\Gamma (\rho)$, the hypothesis implies that $\omega (\rho)\ge m+1$ if $\rho$ is horizontal and $\omega (\rho)\ge m'+1$ if $\rho$ is vertical. We deduce by Theorem \ref{thm:dim:bound} that $h_{m, m'}^{\mathbf{r}} (\mathcal{T})=0$. \end{proof} Another case where $h_{m, m'}^{\mathbf{r}} (\mathcal{T})$ is known is described in the next proposition. \begin{proposition}\label{prop:upper} If $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$, $\omega (\rho) \leq m + 1$ and $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$, $\omega (\rho) \leq m' + 1$, then \begin{eqnarray}\label{eq:upper} h_{m, m'}^{\mathbf{r}} (\mathcal{T}) & = & \sum_{\rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})} (m + 1 - \omega (\rho))_+ \times (m' - \mathbf{r}(\rho))\\ & + & \sum_{\rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})} (m - \mathbf{r} (\rho)) \times \left( m' + 1 - \omega (\rho) \right)_+.\nonumber \end{eqnarray} \end{proposition} \begin{proof} In the case where $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$, $\omega (\rho) \leq m + 1$ and $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$, $\omega (\rho) \leq m' + 1$, Proposition \ref{dim:apolar} implies that there are no relations in bi-degree $\leqslant (m, m')$ among the monomial multiples of $\Delta^{\mathbf{r}} (\rho_j) [\rho_{i}]$, $\Delta^{\mathbf{r}} (\rho) [\rho_{i}]$ for $i = 1, \ldots, l$, $j<i$, using the same notation as in the proof of Theorem \ref{thm:dim:bound}. This implies that $\tmop{In} (K) = \oplus_i L_i$, which shows that $$ h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = \dim (M / \tmop{In} (K)) = \sum_i \dim ([\rho_{i}]\, R_{(m, m') - \delta (\rho_i)} / L_i). $$ Thus the equality \eqref{eq:upper} holds. \end{proof} As a corollary, we have the following result for a constant smoothness distribution: \begin{theorem}\label{thm:h:cst} \label{thm:dim:bound:cst} Let $\mathcal{T}$ be a T-mesh and let $\mathbf{r}= (r,r')$ be a constant smoothness distribution on $\mathcal{T}$. Then \begin{eqnarray*} 0 \leq h_{m, m'}^{\mathbf{r}} (\mathcal{T}) & \leq & \sum_{\rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})} (m + 1 - (m - r) \lambda (\rho))_+ \times (m' - r')\\ & + & \sum_{\rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})} (m - r) \times \left( m' + 1 - (m' - r') \lambda (\rho) \right)_+ .
\end{eqnarray*} Moreover, equality holds in the following cases: \begin{itemizedot} \item $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T})$, $(m - r) \lambda (\rho) \geq m + 1$ and $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$, $(m' - r') \lambda (\rho) \geq m' + 1$; \item $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_h (\mathcal{T}), (m - r) \lambda (\rho) \leq m + 1$ and $\forall \rho \in \tmop{\mathrm{\textsc{mis}}}_v (\mathcal{T})$, $(m' - r') \lambda (\rho) \leq m' + 1$. \end{itemizedot} \end{theorem} \section{Hierarchical T-meshes} \label{sec:4} We consider now a special family of T-meshes, which can be defined by recursive subdivision from an initial rectangular domain $\Omega$. Their study is motivated by practical applications, where local refinement of tensor-product spline spaces are considered eg. in isogeometric analysis \cite{hughes:CMAME2005}. \begin{definition} A hierarchical T-mesh is either the initial axis-aligned rectangle $\Omega$ or obtained from a hierarchical T-mesh by splitting a cell along a vertical or horizontal line. \end{definition} A hierarchical T-mesh will also be called a T-subdivision. It can be represented by a subdivision tree where the nodes are the cells obtained during the subdivision and the children of a cell $\sigma$ are the cells obtained by subdividing $\sigma$. In a hierarchical T-mesh $\mathcal{T}$, the maximal interior segments are naturally ordered in the way they appear during the subdivision process. This is the ordering $\iota$ that we will consider hereafter. Notice that a maximal interior segment $\rho$ is transformed by a split either into a maximal segment which intersects $\partial\Omega$ or into a larger maximal interior segment with a larger weight. We remark that in a hierarchical T-mesh, if $\rho_{i}$ is blocking $\rho_{j}$ then $i< j$. \begin{example} Here are an hierarchical T-mesh (case a) and a non-hierarchical T-mesh (case b): \begin{center} \includegraphics[height=5cm]{tspline-1.eps} \includegraphics[height=5cm]{tspline-2.eps}\\ (a) \ \hspace{3cm}\ \ \ \ \ \ \ \ \ (b) \end{center} \end{example} \subsection{Dimension formula for hierarchical T-meshes} As a corollary of Theorem \ref{thm:dim:bound}, we deduce the following result, also proved in {\cite{deng06}, \cite{LiWaZha06}, \cite{HuDeFeChe06}}: \begin{proposition} \label{prop:dim:exact} Let $\mathcal{T}$ be a hierarchical T-mesh and let $\mathbf{r}= (r,r')$ be a constant smoothness distribution on $\mathcal{T}$. For $m \geq 2 r + 1$ and $m' \geq 2 r' + 1$, we have $h_{m, m'}^{r,r'} (\mathcal{T}) = 0.$ \end{proposition} \begin{proof} We order the maximal interior segments in the way they appear during the subdivision. If a segment $\rho_i \in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$ is blocking $\rho_j \in \tmop{\mathrm{\textsc{mis}}} (\mathcal{T})$, we must have $i < j$. This shows that the end points of $\rho_i$ are in $\Gamma (\rho_i)$. Thus, $\lambda (\rho_i) \geq 2$. As \ $m \geq 2 r + 1$, we have \[ (m - r) \lambda (\rho_i) \geq 2 (m - r) \geq m + (m - 2 r) \geq m + 1. \] Thus, $(m + 1 - (m - r) \lambda (\rho_i))_+ = 0$. Similarly $(m' + 1 - (m' - r') \lambda (\rho_i))_+ = 0$ holds since $m' \geq 2 r' + 1$. By Theorem \ref{thm:h}, we deduce that $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = 0$. 
\end{proof} Theorem \ref{thm:weighted} leads us to the following construction rule for a T-subdivision $\mathcal{T}$ with $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = 0$. \begin{algorithm}[$(k,k')$-weighted subdivision rule]\ \\ For each 2-face $\sigma$ of a T-mesh to be subdivided, \begin{enumerate} \item Split $\sigma$ with the new edge $\tau$; \item If the edge $\tau$ does not extend an existing segment, extend $\tau$ (on one side and/or the other) so that the maximal segment containing $\tau$ is either intersecting $\partial \Omega$ or horizontal (resp. vertical) and of weight $\geq k$ (resp. $\geq k'$). \end{enumerate} \end{algorithm} If such a rule is applied in the construction of a T-subdivision, \begin{itemize} \item either a new maximal interior segment is constructed so that its weight is $\ge k$ if it is a horizontal maximal interior segment (resp. $\ge k'$ if it is a vertical maximal interior segment), \item or an existing maximal interior segment is extended and its weight is also increased, \item or a maximal segment intersecting $\partial \Omega$ is constructed. \end{itemize} In all cases, if we start with a $(k,k')$-weighted T-mesh, we obtain a new T-mesh, which is also $(k,k')$-weighted. By Theorem \ref{thm:weighted}, if $k \ge m+1$ and $k'\ge m'+1$ then $h_{m, m'}^{\mathbf{r}} (\mathcal{T}) = 0$ and the dimension $\dim \mathcal{S}_{m, m'}^{\mathbf{r}} (\mathcal{T})$, given by formula \eqref{eq:dim}, depends only on the number of cells, interior edges and interior vertices of $\mathcal{T}$. From this analysis, we deduce the dimension formula of the space of Locally Refined splines described in \cite{DoLyPe10}. \section{Examples} In this section, we analyse the dimension formula for spline spaces of small bi-degree and small constant smoothness distribution $\mathbf{r}= (r,r')$ on a T-mesh $\mathcal{T}$. \subsection{Bilinear $C^{0,0}$ T-splines} We consider first piecewise bilinear polynomials on $\mathcal{T}$, which are continuous, that is, $m = m' = 1$ and $r = r' = 0$. By Proposition \ref{prop:dim:exact}, we have $h_{1, 1}^{0, 0} (\mathcal{T}) = 0$. Using Theorem \ref{thm:dim} and Lemma \ref{lem:nbf} in the appendix, we obtain: \begin{equation} \dim \mathcal{S}_{1, 1}^{0, 0} (\mathcal{T}) = 4 f_2 - 2 f_1^o + f_0^o = f_0^+ + f_0^b. \label{eq:dim:c0} \end{equation} \subsection{Biquadratic $C^{1,1}$ T-splines} Let us consider now the set of piecewise biquadratic functions on a T-mesh $\mathcal{T}$, which are $C^{1}$. For $m = m' = 2$ and $r = r' = 1$, Theorem \ref{thm:dim} and Lemma \ref{lem:nbf} again yield \begin{equation} \dim \mathcal{S}_{2, 2}^{1, 1} (\mathcal{T}) = 9 f_2 - 6 f_1^o + 4 f_0^o + h_{2, 2}^{1, 1} (\mathcal{T}) = f_0^+ - \frac{1}{2} f_0^T + \frac{3}{2} f_0^b + 3 + h_{2, 2}^{1, 1} (\mathcal{T}) . \end{equation} If the T-mesh $\mathcal{T}$ is $(3,3)$-weighted, then by Theorem \ref{thm:weighted}, we have $h_{2, 2}^{1, 1}(\mathcal{T})=0$, but this is not always the case. \begin{example}\label{ex:5.1} Here is an example where $h_{2, 2}^{1, 1} (\mathcal{T}) = 1$ by Proposition \ref{prop:upper}, since there is one maximal interior segment $\rho$ with $\omega (\rho) = (2-1) + (2-1) = 2$: \begin{center} \includegraphics{tspline-6.eps} \end{center} The dimension of $\mathcal{S}_{2, 2}^{1, 1} (\mathcal{T})$ is $9 \times 4 - 6 \times 5 + 4 \times 2 + h_{2, 2}^{1, 1} (\mathcal{T}) = 14 + 1 = 15$. Notice that the dimension is the same without the (horizontal) interior segment.
Thus a basis of $\mathcal{S}_{2, 2}^{1, 1} (\mathcal{T})$ is the tensor-product B-spline basis corresponding to the nodes $s_0, s_0, s_0, s_1, s_2, s_3, s_3, s_3$ in the horizontal direction and the nodes $t_0, t_0, t_0, t_1, t_1, t_1$ in the vertical direction. \end{example} \begin{example}\label{ex:5.2} Here is another example. We subdivide the T-mesh $\mathcal{T}_1$ to obtain the second T-mesh $\mathcal{T}_2$: \begin{center} \includegraphics{tspline-7.eps} \ \includegraphics{tspline-8.eps} \end{center} Doing this, we increase the number of cells by $9 - 1 = 8$, the number of interior edges by $24 - 4 = 20$, and the number of interior points by $16 - 4 = 12$. The dimension of the spline space increases by $9 \times 8 - 6 \times 20 + 4 \times 12 + h_{2, 2}^{1, 1} (\mathcal{T}_2) - h_{2, 2}^{1, 1} (\mathcal{T}_1)= h_{2, 2}^{1, 1} (\mathcal{T}_2) - h_{2, 2}^{1, 1} (\mathcal{T}_1)$. Since there is no maximal interior segment in $\mathcal{T}_{1}$, by Corollary \ref{cor:dim:exact} we have $h_{2, 2}^{1, 1} (\mathcal{T}_1) = 0$. Choosing a proper ordering of the maximal interior segments, we deduce by Theorem \ref{thm:dim:bound} that $h_{2, 2}^{1, 1} (\mathcal{T}_2) \leq 1$. Suppose that $\sigma_1 = [a_0, a_3] \times [b_0, b_3]$ and $\sigma_2 = [a_1, a_2] \times [b_1, b_2]$. For $u_{0}\le u_{1}\le u_{2}\le u_{3}$ in $\mathbb{R}$, let $N (u;u_{0}, u_{1},u_{2},u_{3})$ be the B-spline basis function in the variable $u$ of degree $2$ associated to the nodes $u_{0}, \ldots, u_{3}$ (see \cite{deBoor01}). Then the piecewise polynomial function \[ N (s ; a_0, a_1, a_2, a_3) \times N (t ; b_0, b_1, b_2, b_3) \] is an element of $\mathcal{S}_{2, 2}^{1, 1} (\mathcal{T}_2)$, with support in $\sigma_1$. It is not in $\mathcal{S}_{2, 2}^{1, 1} (\mathcal{T}_1)$, since the function is not polynomial on $\sigma_1$. Thus we have $\dim \mathcal{S}_{2, 2}^{1, 1} (\mathcal{T}_2) = \dim \mathcal{S}_{2, 2}^{1, 1} (\mathcal{T}_1) + 1$. Notice that $\mathcal{T}_2$ is $(2,2)$-weighted but not $(3,3)$-weighted, since any new maximal segment intersects two of the other new maximal segments. \end{example} For a general hierarchical T-mesh, we consider a sequence of T-meshes $\mathcal{T}_{0}, \ldots, \mathcal{T}_{l}$ where $\mathcal{T}_{0}$ has one cell, $\mathcal{T}_{l}=\mathcal{T}$ and such that $\mathcal{T}_{i+1}$ refines $\mathcal{T}_{i}$ by inserting new edges. We can assume that at each level $i\neq 0$ a new maximal interior segment $\rho_{i}$ appears and that we number the maximal interior segments of $\mathcal{T}$ in the order they appear during this subdivision. Notice that any maximal interior segment of $\mathcal{T}$ extends one of the maximal interior segments $\rho_{i}$ and thus its weight is at least as large. Notice also that the maximal interior segment $\rho_{i}$ introduced at level $i$ extends to a maximal segment of $\mathcal{T}$, which may intersect the boundary. In this case, it is not involved in the dimension upper bound. Then, we have the following result: \begin{proposition} Let $\mathcal{T}$ be a hierarchical T-mesh. \begin{equation}\label{eq:5.3} 9 f_2 - 6 f_1^o + 4 f_0^o \le \dim \mathcal{S}_{2, 2}^{1, 1} (\mathcal{T}) \le 9 f_2 - 6 f_1^o + 4 f_0^o + \sigma \end{equation} where $\sigma$ is the number of levels of the subdivision where a maximal interior segment with no interior point is introduced. \end{proposition} \begin{proof} Consider the new maximal interior segment $\rho_{i}$ of $\mathcal{T}_{i}$ appearing at level $i$. By construction, the end points of $\rho_{i}$ are not on maximal interior segments of larger index.
Thus $\omega (\rho_{i})\ge 2$. If $\rho_{i}$ contains an interior vertex then $\omega (\rho_{i})\ge 3$ and $(3 - \omega (\rho_{i}))_{+}=0$. Otherwise $(3 - \omega (\rho_{i}))_{+}=1$. As $\rho_{i}$ extends to a maximal segment $\tilde{\rho}_{i}$ which is either interior or intersecting the boundary, we have $(3 - \omega (\tilde{\rho}_{i}))_{+} \le (3- \omega (\rho_{i}))_{+}$ with the convention that $\omega (\tilde{\rho}_{i})=3$ if $\tilde{\rho}_{i}$ is intersecting the boundary. By Theorem \ref{thm:dim:bound}, we have $$ 0 \le h_{2,2}^{1,1} (\mathcal{T}) \le \sum_{i=1}^{l} (3 - \omega (\tilde{\rho}_{i}))_{+} \le \sum_{i=1}^{l} (3 - \omega (\rho_{i}))_{+} $$ where $l$ is the number of levels in the subdivision, $\rho_{i}$ is the maximal interior segment introduced at level $i$ and $\tilde{\rho}_{i}$ is its extension in $\mathcal{T}$. By the previous remarks, $\sum_{i=1}^{l} (3 - \omega (\rho_{i}))_{+}=\sigma$ is the number of levels of the subdivision where a maximal interior segment with no interior point is introduced. Using Theorem \ref{thm:dim}, this proves the bound on the dimension of $\mathcal{S}_{2, 2}^{1, 1} (\mathcal{T})$. \end{proof} Examples \ref{ex:5.1} and \ref{ex:5.2} show that the dimension can be given by the upper bound. On the other hand, for any $(3,3)$-weighted hierarchical subdivision, the lower bound is reached. This shows that the inequalities \eqref{eq:5.3} are optimal for $\dim \mathcal{S}_{2, 2}^{1, 1} (\mathcal{T})$. \begin{remark} In the case of a hierarchical subdivision where some cells of a given level are subdivided (as $\sigma_{1}$ in Example \ref{ex:5.2}) into 9 sub-cells which have the same length and height, it can be proved that the dimension is in fact: $$ \dim \mathcal{S}_{2, 2}^{1, 1} (\mathcal{T}) = 9 f_2 - 6 f_1^o + 4 f_0^o + \sigma $$ where $\sigma$ is the number of isolated subdivided cells (i.e., the cell is subdivided, does not touch the boundary, and the adjacent cells sharing an edge with it are not subdivided) at some level of the subdivision. Indeed, any maximal interior segment subdividing a non-isolated cell contains an interior point and is not involved in the upper bound. As in Example \ref{ex:5.2}, only the isolated cells have a maximal interior segment with no interior points. This example also shows that a new $C^{1,1}$ biquadratic basis element can be constructed for each isolated cell, proving that the upper bound is reached. This gives a dimension formula similar to the one in \cite{Deng08TSPL22}, for a slightly different subdivision strategy. \end{remark} \subsection{Bicubic $C^{1,1}$ T-splines} For $m = m' = 3$ and $r = r' = 1$, that is, for piecewise bicubic polynomial functions which are $C^{1}$, Proposition \ref{prop:dim:exact} yields $h_{3, 3}^{1, 1} (\mathcal{T}) = 0$. Using Theorem \ref{thm:dim} and Lemma \ref{lem:nbf} in the appendix, we obtain: \begin{equation} \dim \mathcal{S}_{3, 3}^{1, 1} (\mathcal{T}) = 16 f_2 - 8 f_1^o + 4 f_0^o = 4 (f_0^+ + f_0^b). \label{eq:dim:d3c1} \end{equation} \subsection{Bicubic $C^{2,2}$ T-splines} For $m = m' = 3$ and $r = r' = 2$, by Theorem \ref{thm:dim} and Lemma \ref{lem:nbf}, we have: \begin{equation} \dim \mathcal{S}_{3, 3}^{2, 2} (\mathcal{T}) = 16 f_2 - 12 f_1^o + 9 f_0^o + h^{2, 2}_{3, 3} (\mathcal{T}) = f_0^+ - f_0^T + 2 f_0^b + 8 + h^{2, 2}_{3, 3} (\mathcal{T}). \end{equation} If $\mathcal{T}$ is a hierarchical $(4,4)$-weighted subdivision, then by Theorem \ref{thm:weighted}, we have $h_{3, 3}^{2, 2}(\mathcal{T})=0$.
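The dimension formula \eqref{eq:dim:cst} is straightforward to evaluate from the face, edge and vertex counts of a T-mesh. The following short Python sketch (our own illustration; the function \texttt{dim\_spline\_space} is not part of the paper) evaluates the right-hand side of \eqref{eq:dim:cst} for a constant smoothness distribution and checks it against the count $9\times 4-6\times 5+4\times 2+1=15$ of Example \ref{ex:5.1}. Since $m=m'$ and $r=r'$ in that example, the split of the five interior edges into horizontal and vertical ones does not affect the result, and it is chosen arbitrarily below.
\begin{verbatim}
def dim_spline_space(m, mp, r, rp, f2, f1h, f1v, f0, h=0):
    # Right-hand side of the dimension formula for a constant
    # smoothness distribution (r, r') on a T-mesh:
    #   (m+1)(m'+1) f2 - [(m+1)(r'+1) f1h + (m'+1)(r+1) f1v]
    #     + (r+1)(r'+1) f0 + h.
    return ((m + 1) * (mp + 1) * f2
            - ((m + 1) * (rp + 1) * f1h + (mp + 1) * (r + 1) * f1v)
            + (r + 1) * (rp + 1) * f0
            + h)

# Example ex:5.1 (biquadratic C^{1,1}): 4 cells, 5 interior edges,
# 2 interior vertices, h = 1; the h/v split of the edges is immaterial
# here because m = m' and r = r'.
assert dim_spline_space(2, 2, 1, 1, f2=4, f1h=3, f1v=2, f0=2, h=1) == 15

# The bilinear C^{0,0} and bicubic cases correspond to
# (m, m', r, r') = (1, 1, 0, 0), (3, 3, 1, 1) and (3, 3, 2, 2).
\end{verbatim}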
\appendix \section{Combinatorial properties} We recall some well-known enumeration results for a T-mesh of an axis-aligned rectangular domain $\Omega$. \begin{lemma} \label{lem:nbf}{$\mbox{}$} \begin{itemizedot} \item $f_2 = f_0^+ + \frac{1}{2} f_0^T + \frac{1}{2} f_0^b - 1$ \item $f_1^o = 2 f_0^+ + \frac{3}{2} f_0^T + \frac{1}{2} f_0^b - 2$ \item $f_0^o = f_0^+ + f_0^T $ \end{itemizedot} \end{lemma} \begin{proof} Each face $\sigma \in \mathcal{T}_2$ is a rectangle with 4 corners. If we count these corners for all cells in $\mathcal{T}_2$, we count the crossing vertices 4 times, the $T$-vertices (interior or on the boundary) 2 times, and the corner vertices of $\Omega$ once. This yields the relation \[ 4 f_2 = 4 f_0^+ + 2 (f_0^T + (f_0^b - 4)) + 4. \] Each interior edge $\tau \in \mathcal{T}_1^o$ has two end points. Counting these end points for all interior edges, we count the crossing vertices 4 times, the interior $T$-vertices 3 times, and the $T$-vertices on the boundary once: \[ 2 f_1^o = 4 f_0^+ + 3 f_0^T + (f_0^b - 4) . \] Finally, as an interior vertex is a crossing vertex or a $T$-vertex, we have \[ f_0^o = f_0^+ + f_0^T . \] \end{proof} \section{Complexes and homology\label{sec:homology}} Let us recall here the basic properties that we will need on complexes of vector spaces. Given a sequence of $\mathbb{K}$-vector spaces $A_i$, $i = 0, \ldots, l$, and linear maps $\partial_i : A_i \rightarrow A_{i - 1}$, we say that we have a complex \[ \mathcal{A}: A_l \rightarrow A_{l - 1} \rightarrow \cdots A_i \rightarrow A_{i - 1} \rightarrow \cdots A_1 \rightarrow A_0 \] if $\tmop{im} \partial_i \subset \ker \partial_{i - 1}$ for $i = 2, \ldots, l$. \begin{definition} The $i^{\tmop{th}}$ homology $H_i (\mathcal{A})$ of $\mathcal{A}$ is $\ker \partial_{i} / \tmop{im} \partial_{i + 1}$ for $i = 0, \ldots, l$, with the conventions $\partial_0 = 0$ and $\partial_{l + 1} = 0$. \end{definition} The complex $\mathcal{A}$ is called {\tmem{exact}} (or an exact sequence) if $H_i (\mathcal{A}) = 0$ (i.e. $\tmop{im} \partial_{i + 1} = \ker \partial_i$) for $i = 0, \ldots, l$. If the complex is exact and $A_l = A_0 = 0$, we have \[ \sum_{i = 1}^{l - 1} (- 1)^i \dim A_i = 0. \] Given complexes $\mathcal{A}= (A_i)_{i = 0, \ldots, l}$, $\mathcal{B}= (B_i)_{i = 0, \ldots, l}$, $\mathcal{C}= (C_i)_{i = 0, \ldots, l}$ and exact sequences \[ 0 \rightarrow A_i \rightarrow B_i \rightarrow C_i \rightarrow 0 \] for $i = 0, \ldots, l$, commuting with the maps of the complexes, we have a long exact sequence \cite{CarEil99}, \cite[p.~182]{Spanier66}: \[ \cdots \rightarrow H_{i + 1} (\mathcal{C}) \rightarrow H_i (\mathcal{A}) \rightarrow H_i (\mathcal{B}) \rightarrow H_i (\mathcal{C}) \rightarrow H_{i - 1} (\mathcal{A}) \rightarrow \cdots \] \section{Dual topological complex} The dual complex $\mathcal{T}^{\star}$ of the subdivision $\mathcal{T}$ is defined by the following properties. \begin{itemizedot} \item a face $\sigma \in \mathcal{T}_2$ is a vertex of the dual complex $\mathcal{T}^{\star}$; \item an edge of $\mathcal{T}^{\star}$ connects two elements $\sigma, \sigma' \in \mathcal{T}_2$ if they share a common (interior) edge $\tau \in \mathcal{T}_1^o$. Thus it is identified with the edge $\tau$ of $\mathcal{T}$ between $\sigma$ and $\sigma'$; \item a face of $\mathcal{T}^{\star}$ corresponds to an element $\gamma \in \mathcal{T}_0^o$. It is either a triangle if $\gamma$ is a $T$-junction or a quadrangle if $\gamma$ is a crossing vertex. \end{itemizedot} Notice that the boundary cells of $\mathcal{T}$ correspond to boundary vertices of $\mathcal{T}^{\star}$.
They are connected by boundary edges which belong to a single face of $\mathcal{T}^{\star}$. \section{Topological chain complex} In this appendix section, we recall the main properties of the topological chain complex {\small \[ \begin{array}{lllllllll} \mathfrak{R}_{m, m'} (\mathcal{T}^o) : & & \bigoplus_{\sigma \in \mathcal{T}_2} [\sigma] R_{m, m'} & \rightarrow & \bigoplus_{\tau \in \mathcal{T}_1^o} [\tau] R_{m, m'} & \rightarrow & \bigoplus_{\gamma \in \mathcal{T}_0^o} [\gamma] R_{m, m'} & \rightarrow & 0 \end{array} \] } where \begin{itemize} \item $\forall \gamma \in \mathcal{T}_0^o$, $\partial_0 ([\gamma]) = 0$; \item $\forall \tau=[\gamma_{1},\gamma_{2}] \in \mathcal{T}_1^o$, $\partial_1 ([\gamma_{1},\gamma_{2}]) = [\gamma_{1}]-[\gamma_{2}]$, with $[\gamma] \equiv 0$ if $\gamma\in\partial \Omega$; and \item $\forall \sigma \in \mathcal{T}_2$ with counter-clockwise boundary formed by the edges $[\gamma_{1},\gamma_{2}], \ldots, [\gamma_{l},\gamma_{1}]$, $\partial_2 ([\sigma]) = [\gamma_{1},\gamma_{2}] + \cdots + [\gamma_{l},\gamma_{1}]$, with $[\gamma,\gamma'] \equiv 0$ if the edge $[\gamma,\gamma']$ lies on $\partial \Omega$. \end{itemize} We assume that $\Omega$ is simply connected and $\Omega^{o}$ is connected. We compute the homology of $\mathfrak{R}_{m, m'} (\mathcal{T}^o)$ for a $T$-mesh of $\Omega$. \begin{proposition}\label{proph0r} $H_0 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$. \end{proposition} \begin{proof} Let $\gamma \in \mathcal{T}_0^o$. There is a sequence of edges $\tau_0 = \gamma_0 \gamma_1$, $\tau_1 = \gamma_1 \gamma_2$, $\ldots$, $\tau_l = \gamma_l \gamma_{l + 1}$, such that $\tau_i \in \mathcal{T}_1^o$, $\gamma_0 \not\in \mathcal{T}_0^o$ and $\gamma_{l + 1} = \gamma$. Then \[ \partial_1 ([\tau_0] + \cdots + [\tau_l]) = [\gamma_1] - [\gamma_0] + \cdots + [\gamma_{l + 1}] - [\gamma_l] = [\gamma] \] since $[\gamma_0] = 0$ and $[\gamma_{l + 1}] = [\gamma]$. Multiplying by any element in $R_{m, m'}$, we get that $[\gamma] R_{m, m'} \subset \tmop{im} \partial_1 $ and thus $H_0 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$. \end{proof} \begin{proposition}\label{proph1r} $H_1 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$. \end{proposition} \begin{proof} Let $p = \sum_{\tau \in \mathcal{T}_1^o} [\tau] p_{\tau} \in \ker \partial_1$ with $p_{\tau} \in R_{m, m'}$. Let us prove that $p$ is in the image of $\partial_2$. For each $\gamma \in \mathcal{T}_0^o$, we have $\sum_{\tau \in \mathcal{T}_1^o} \varepsilon_{\tau} p_{\tau} = 0$, where $\varepsilon_{\tau} = 1$ if $\tau$ ends at $\gamma$, $\varepsilon_{\tau} = - 1$ if $\tau$ starts at $\gamma$ and $\varepsilon_{\tau} = 0$ otherwise. For any $\sigma \in \mathcal{T}_2$ and $\tau \in \mathcal{T}_1^o$, we define $\varepsilon_{\sigma, \tau} = 1$ if $\tau$ is oriented counter-clockwise on the boundary of $\sigma$, $\varepsilon_{\sigma, \tau} = - 1$ if $\tau$ is oriented clockwise on the boundary of $\sigma$, and $\varepsilon_{\sigma, \tau} = 0$ otherwise. For any oriented edge of the dual graph $\mathcal{T}^{\star}$ from $\sigma'$ to $\sigma$, let us define $\partial_1^{\star} ([\sigma', \sigma]) = \varepsilon_{\sigma, \tau} p_{\tau}$, where $\tau$ is the common edge of $\sigma$ and $\sigma'$. Notice that $\partial_1^{\star} ([\sigma, \sigma']) = \varepsilon_{\sigma', \tau} p_{\tau} = - \varepsilon_{\sigma, \tau} p_{\tau} = - \partial_1^{\star} ([\sigma', \sigma])$, since the orientations of $\tau$ on the boundaries of $\sigma$ and $\sigma'$ are opposite. Let $\sigma_0$ be the 2-face of $\mathcal{T}$ whose lower left corner is minimal for the lexicographic ordering.
We order the cells $\sigma \in \mathcal{T}_2$ according to their distance to $\sigma_0$ in this dual graph $\mathcal{T}^{\star}$. We define an element $q = \sum_{\sigma \in \mathcal{T}_2} q_{\sigma} [\sigma]$, where $q_{\sigma} \in R_{m, m'}$, by induction using this order, as follows: \begin{itemizedot} \item $q_{\sigma_0} = 0$; \item For any $\sigma, \sigma' \in \mathcal{T}_2$, if $\sigma > \sigma'$ and $\sigma$ and $\sigma'$ share a common edge $\tau$, then $q_{\sigma} = q_{\sigma'} + \partial_1^{\star} ([\sigma', \sigma])$. \end{itemizedot} Thus, if $[\sigma_0, \sigma_1], [\sigma_1, \sigma_2], \ldots, [\sigma_{k - 1}, \sigma_k]$ is a path of $\mathcal{T}^{\star}$ connecting $\sigma_0$ to $\sigma_k = \sigma$ with $\sigma_{i + 1} > \sigma_i$, then $q_{\sigma} = \sum_{i = 0}^{k - 1} \partial_1^{\star} ([\sigma_i, \sigma_{i + 1}])$. Let us prove that this definition does not depend on the chosen path between $\sigma_0$ and $\sigma$. We first show that for any face {\tmem{$\gamma^{\star}$}} of $\mathcal{T}^{\star}$ attached to a vertex $\gamma$, if its counter-clockwise boundary is formed by the edges $[\sigma, \sigma'], [\sigma', \sigma''], \ldots, [\sigma''', \sigma]$ corresponding to the edges $\tau, \tau', \tau'', \ldots$ of $\mathcal{T}$ containing $\gamma$, then \[ \partial_1^{\star} ([\sigma, \sigma']) + \partial_1^{\star} ([\sigma', \sigma'']) + \cdots + \partial_1^{\star} ([\sigma''', \sigma]) = \varepsilon_{\sigma, \tau} p_{\tau} + \varepsilon_{\sigma', \tau'} p_{\tau'} + \varepsilon_{\sigma'', \tau''} p_{\tau''} + \cdots = 0. \] By changing the orientation of an edge $\tau$, we replace $p_{\tau}$ by $- p_{\tau}$ and $\varepsilon_{\sigma, \tau}$ by $- \varepsilon_{\sigma, \tau}$, so that the quantity $\varepsilon_{\sigma, \tau} p_{\tau}$ is not changed. Thus we can assume that all the edges $\tau, \tau', \tau'', \ldots$ are pointing to $\gamma$. As $p \in \ker \partial_1$, we have $p_{\tau} + p_{\tau'} + p_{\tau''} + \cdots = 0$. Now as the cells $\sigma, \sigma', \sigma'', \ldots, \sigma$ are ordered counter-clockwise around $\gamma$ and as the edges are pointing to $\gamma$, we have $\varepsilon_{\sigma, \tau} = \varepsilon_{\sigma', \tau'} = \varepsilon_{\sigma'', \tau''} = \cdots = 1$, so that the sum $\partial_1^{\star} ([\sigma, \sigma']) + \partial_1^{\star} ([\sigma', \sigma'']) + \cdots + \partial_1^{\star} ([\sigma''', \sigma])$ over the boundary of a face {\tmem{$\gamma^{\star}$}} of $\mathcal{T}^{\star}$ is 0. As $\Omega$ is simply connected, any loop of $\mathcal{T}^{\star}$ decomposes into such face boundaries, so that for any loop of $\mathcal{T}^{\star}$ the sum over the corresponding oriented edges is $0$. This shows that the definition of $q_{\sigma}$ does not depend on the oriented path from $\sigma_0$ to $\sigma$. By construction, we have \[ \partial_2 (q) = \sum_{\sigma \in \mathcal{T}_2} ( \sum_{\tau \in \mathcal{T}_1^o} \varepsilon_{\sigma, \tau} q_{\sigma} [\tau]) = \sum_{\tau \in \mathcal{T}_1^o} ( \sum_{\sigma \in \mathcal{T}_2} \varepsilon_{\sigma, \tau} q_{\sigma}) [\tau] . \] For each interior edge $\tau \in \mathcal{T}_1^o$, there are two faces $\sigma_1 > \sigma_2$ which are adjacent to $\tau$. Thus, we have $\varepsilon_{\sigma_1, \tau} = - \varepsilon_{\sigma_2, \tau} $ and $q_{\sigma_1} = q_{\sigma_2} + \varepsilon_{\sigma_1, \tau} p_{\tau}$.
We deduce that \[ ( \sum_{\sigma} \varepsilon_{\sigma, \tau} q_{\sigma}) = \varepsilon_{\sigma_1, \tau} q_{\sigma_1} + \varepsilon_{\sigma_2, \tau} q_{\sigma_2} = \varepsilon_{\sigma_1, \tau} (q_{\sigma_2} + \varepsilon_{\sigma_1, \tau} p_{\tau}) + \varepsilon_{\sigma_2, \tau} q_{\sigma_2} = p_{\tau} . \] This shows that $\partial_2 (q) = p$. In other words, $\tmop{im} \partial_2 = \ker \partial_1$ and $H_1 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = 0$. \end{proof} \begin{proposition}\label{proph2r} If $\Omega$ is connected, then $H_2 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = R_{m, m'}.$ \end{proposition} \begin{proof} An element of $H_2 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = \ker \partial_2$ is a collection of polynomials $(p_{\sigma})_{\sigma \in \mathcal{T}_2}$ such that $p_{\sigma} \in R_{m, m'}$ and $p_{\sigma} = p_{\sigma'}$ if $\sigma$ and $\sigma'$ share an (interior) edge. As $\mathcal{T}$ is a subdivision of the rectangle $\Omega$, any two faces of $\mathcal{T}_2$ can be joined by a chain of faces in which two consecutive faces share an edge. Thus $p_{\sigma} = p_{\sigma'}$ for all $\sigma, \sigma' \in \mathcal{T}_2$ and $H_2 (\mathfrak{R}_{m, m'} (\mathcal{T}^o)) = R_{m, m'}$. \end{proof} Notice that by counting dimensions in the complex $\mathfrak{R}_{m, m'} (\mathcal{T}^o)$ and its homology, we recover the well-known Euler formula $f_2 - f_1^o + f_0^o = 1$ (for a connected and simply connected domain $\Omega$). \end{document}
\begin{document} \title{Functional limit theorems for the P\'olya and $q$-P\'olya urns} \gdef\@thefnmark{}\@footnotetext{2010 Mathematics Subject Classification: 60F17; 60K99; 60C05. } \gdef\@thefnmark{}\@footnotetext{Keywords: P\'olya urn, $q$-P\'olya urn, $q$-calculus, functional limit theorems.} \begin{abstract} For the plain P\'olya urn with two colors, black and white, we prove a functional central limit theorem for the number of white balls assuming that the initial number of black balls is large. Depending on the initial number of white balls, the limit is either a pure birth process or a diffusion. We also prove analogous results for the $q$-P\'olya urn, which is an urn where, when picking a ball, the balls of one color have priority over those of the other. \end{abstract} \section{Introduction and results} \subsection{The models} \label{Models} \hspace{2.5ex} \textbf{The P\'olya urn.} This is the model where in an urn that has initially $r$ white and $s$ black balls we draw, successively, uniformly, and at random, a ball from it and then we return the ball back together with $k$ balls of the same color as the one drawn. The number $k\in{\mathbb N}p$ is fixed. Call $A_n$ and $B_n$ the number of white and black balls respectively after $n$ drawings. The most notable result regarding its asymptotic behavior is that the proportion of white balls in the urn after $n$ drawings, $A_n/(A_n+B_n)$, converges almost surely as $n\to\infty$ to a random variable with distribution Beta$(r/k, s/k)$. Our aim in this work is to examine whether the entire path $(A_n)_{n\ge 0}$ after appropriate natural transformations converges to a stochastic process. Standard references for the theory and the applications of P\'olya urn and related models are \cite{JoKo} and \cite{Mah}. \textbf{The $q$-P\'olya urn}. This is a $q$-analog of the P\'olya urn (see \cite{Gas}, \cite{KacCheung} for more on $q$-analogs) introduced in \cite{Kup} and studied further in \cite{Char12} (see also \cite{Char16}). A $q$-analog of a mathematical object $A$ is another object $A(q)$ so that when $q\to1$, $A(q)$ ``tends'' to $A$. Take $q\in(0, \infty){\raise0.3ex\hbox{$\scriptstyle \setminus$}}\{1\}$. The $q$-analog of any $x\in \mathbb{C}$ is defined as \begin{equation} \label{qNumber} [x]_q:=\frac{q^x-1}{q-1}. \end{equation} Note that $\lim_{q\to1} [x]_q=x$. Now consider an urn that has initially $r$ white and $s$ black balls, where $r, s\in{\mathbb N}, r+s>0$. We perform a sequence of additions of balls in the urn according to the following rule. If at a given time the urn contains $w$ white and $b$ black balls ($w, b\in {\mathbb N}, w+b>0$), then we add $k$ white balls with probability \begin{align} \label{qPWhite} \mbox{\bf P}_q(\text{white})&=\frac{[w]_q}{[w+b]_q}. \\ \intertext{Otherwise, we add $k$ black balls, and this has probability} \mbox{\bf P}_q(\text{black})&=1-\mbox{\bf P}_q(\text{white})=q^w \frac{[b]_q}{[w+b]_q}. \label{qPBlack} \end{align} To understand how the $q$-P\'olya urn works, it helps to realize the probabilities $\mbox{\bf P}_q(\text{white}), \mbox{\bf P}_q(\text{black})$ through a natural experiment. If $q\in (0, 1)$, then we put the balls in a line with the $w$ white coming first and the $b$ black following. To pick a ball, we go through the line, starting from the beginning and picking each ball with probability $1-q$ independently of what happened with the previous balls. If we finish the line without picking a ball, we start from the beginning. 
Once we pick a ball, we return it to its position together with $k$ balls of the same color. Given these rules, the probability of picking a white ball is \begin{equation} \label{LinePick} (1-q^w)\sum_{j=0}^\infty (q^{w+b})^j=\frac{1-q^w}{1-q^{w+b}}=\frac{[w]_q}{[w+b]_q}, \end{equation} which is \eqref{qPWhite}, because before picking a white ball, we will go through the entire list a random number of times, say $j$, without picking any ball and then, going through the white balls, we pick one (probability $1-q^w$). If $q>1$, we place in the line first the black balls and we go through the list picking each ball with probability $1-q^{-1}$. According to the above computation, the probability of picking a black ball is $$\frac{[b]_{q^{-1}}}{[w+b]_{q^{-1}}}=q^w \frac{[b]_q}{[w+b]_q},$$ which is \eqref{qPBlack}. We extend the notion of drawing a ball from a $q$-P\'olya urn to the case where exactly one of $w, b$ is infinity. Then the probability to pick a white (resp. black) ball is determined again by \eqref{qPWhite} (resp. \eqref{qPBlack}), where this is understood as the limit of the right hand side as $w$ or $b$ goes to $\infty$. For example, assuming that $w=\infty$ and $b\in {\mathbb N}$, we have $\mbox{\bf P}_q(\text{white})=1$ if $q<1$ and $\mbox{\bf P}_q(\text{white})=q^{-b}$ if $q>1$. Again these probabilities are realized through the experiment described above. Thus, we can run the process even if we start with an infinite number of balls from one color and finite from the other. \subsection{P\'olya urn. Scaling limits} \label{Polya_Scaling_Sec} For the results of this section, we consider an urn whose initial composition depends on $m\in{\mathbb N}p$. It is $A_0^{(m)}$ and $B_0^{(m)}$ white and black balls respectively. After $n$ drawings, the composition is $A_n^{(m)}, B_n^{(m)}$. To see a new process arising out of the path of $(A_n^{(m)})_{n\ge0}$ we start with an initial number of balls that tends to infinity as $m\to\infty$. We assume then that $B_0^{(m)}$ grows linearly with $m$. Regarding $A_0^{(m)}$, we study three regimes: \begin{itemize} \item[a)] $A_0^{(m)}$ stays fixed with $m$. \item[b)] $A_0^{(m)}$ grows to infinity but sublinearly with $m$. \item[c)] $A_0^{(m)}$ grows linearly with $m$. \end{itemize} The regime where $A_0^{(m)}$ grows superlinearly with $m$ follows by regime b) by changing the roles of the two colors. In the regimes a) and b), the scarcity of white balls has as a result that the time between two consecutive drawings of a white ball is of order $m/A_0^{(m)}$ (the probability of picking a white ball in the first few drawings is approximately $A_0^{(m)}/m$, which is small). We expect then that speeding up time by this factor we will see a birth process. And indeed this is the case as our first two theorems show. All processes appearing in this work with index set $[0, \infty)$ and values in some Euclidean space ${\mathbb R}^d$ are elements of $D_{{\mathbb R}^d}[0, \infty)$, the space of functions $f:[0, \infty)\to{\mathbb R}^d$ that are right continuous and have limits from the left of each point of $[0, \infty)$. This space is endowed with the Skorokhod topology, and convergence in distribution of processes with values on that space is defined through that topology. 
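Before stating the limit theorems, it may help to see the rescaling in a simulation. The Python sketch below (our own illustration, not part of the proofs; the function name \texttt{polya\_path} and the chosen parameters are ours) simulates the plain P\'olya urn and prints the rescaled count of white drawings $t\mapsto k^{-1}(A^{(m)}_{[mt]}-A^{(m)}_0)$ in regime a), which for large $m$ should resemble the pure birth process appearing in the first theorem below.
\begin{verbatim}
import random

def polya_path(A0, B0, k, n_draws, seed=0):
    # Plain Polya urn: a white ball is drawn with probability A/(A+B),
    # and k balls of the drawn color are added.  Returns the list
    # [A_0, A_1, ..., A_{n_draws}] of white-ball counts.
    rng = random.Random(seed)
    A, B = A0, B0
    path = [A]
    for _ in range(n_draws):
        if rng.random() < A / (A + B):
            A += k
        else:
            B += k
        path.append(A)
    return path

# Regime a): A_0^{(m)} = w0 fixed, B_0^{(m)} close to b0*m, time sped
# up by the factor m.
m, w0, b0, k, T = 10_000, 2, 1.0, 1, 3.0
path = polya_path(w0, int(b0 * m), k, int(m * T))
ts = [i / 10 for i in range(int(10 * T) + 1)]
print([(t, (path[int(m * t)] - w0) / k) for t in ts])
\end{verbatim}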
We remind the reader that the negative binomial distribution with parameters $\nu\in(0, \infty)$ and $p\in(0, 1)$ is the distribution with support in ${\mathbb N}$ and probability mass function \begin{equation}\label{NBDensity} f(x)={x+\nu-1 \choose x}p^\nu(1-p)^x \end{equation} for all $x\in{\mathbb N}$. When $\nu\in{\mathbb N}p$, this is the distribution of the number of failures until we see the $\nu$-th success in a sequence of independent trials, each having probability of success $p$. For a random variable $X$ with this distribution, we write $X\sim NB(\nu, p)$. \begin{theorem} \label{PolyaPathRegime1} Fix $w_0\in{\mathbb N}p$ and $b_0\ge0$. If $A_0^{(m)}=w_0$ and $\lim_{m\to\infty}B_0^{(m)}/m=b_0$, then the process $(k^{-1}\{A^{(m)}_{[mt]}-A_0^{(m)}\})_{t\ge0}$ converges in distribution, as $m\to\infty$, to a time-inhomogeneous pure birth process $Z=(Z_t)_{t\ge0}$ such that for all $0\le t_1<t_2, j\in{\mathbb N}$, the random variable $Z(t_2)-Z(t_1)|Z(t_1)=j$ has distribution $NB\big(\frac{w_0}{k}+j, \frac{t_1+(b_0/k)}{t_2+(b_0/k)}\big)$. Equivalently, $Z$ has rates $\lambda_{t, j}=(k j+w_0)/(k t+b_0)$ for all $(t, j)\in [0, \infty)\times {\mathbb N}$. \end{theorem} \begin{theorem} \label{PolyaPathRegime2} If $A_0^{(m)}=:g_m$ with $g_m\to\infty, g_m=o(m)$ and $\lim_{m\to\infty}B_0^{(m)}/m=b_0$ with $b_0>0$ constant, then the process $(k^{-1}\{A_{[tm/g_m]}^{(m)}-A_0^{(m)}\})_{t\ge0}$, as $m\to\infty$, converges in distribution to the Poisson process on $[0, \infty)$ with rate $1/b_0$. \end{theorem} Next, we look at regime c), i.e., the case that at time 0 both black and white balls are of order $m$. In this case, the normalized process of the number of white balls has a non-random limit, which we determine, and then we study the fluctuations of the process around this limit. \begin{theorem} \label{PolyaPathLinear} Assume that $A_{0}^{(m)}, B_{0}^{(m)}$ are such that $\lim_{m\to\infty} \frac{A_{0}^{(m)}}{m}=a, \lim_{m\to\infty}\frac{B_{0}^{(m)}}{m}=b$, where $a, b\in[0, \infty)$ are not both zero. Then the process $(A_{[mt]}^{(m)}/m)_{t\ge0}$, as $m\to\infty$, converges in distribution to the deterministic process $X_t=\frac{a}{a+b}(a+b+k t), t\ge0$. \end{theorem} The limit $X$ is the same as in an urn in which we add at each step $k$ white or black balls with corresponding probabilities $a/(a+b), b/(a+b)$, that is, irrespective of the composition of the urn at that time. To determine the fluctuations of the process $(A_{[mt]}^{(m)}/m)_{t\ge0}$ around its $m\to\infty$ limit, $X$, we let $$C_t^{(m)}=\sqrt{m}\bigg( \frac{A_{[mt]}^{(m)}}{m}-X_t\bigg)$$ for all $m\in{\mathbb N}p$ and $t\ge0$. \begin{theorem} \label{PolyaDiffusionLimit} Let $a, b\in[0, \infty)$, not both zero, $\theta_1, \theta_2\in {\mathbb R}$, and assume that $A_0^{(m)}:=[am+\theta_1\sqrt{m}], B_0^{(m)}=[bm+\theta_2\sqrt{m}]$ for all large $m\in{\mathbb N}$. Then the process $(C_t^{(m)})_{t\ge0}$ converges in distribution, as $m\to\infty$, to the unique strong solution of the stochastic differential equation \begin{align} Y_0&=\theta_1, \\ dY_t&=\frac{k}{a+b+k t} \bigg\{Y_t-\frac{a}{a+b}(\theta_1+\theta_2)\bigg\} \,dt+k\frac{\sqrt{ab}}{a+b}\,dW_t, \end{align} which is \begin{equation} \label{PolyaDiffLimit} Y_t=\theta_1+\frac{b\theta_1-a\theta_2}{(a+b)^2 } kt+k\frac{\sqrt{ab}}{a+b} (a+b+k t) \int_0^t \frac{1}{a+b+ks}\, dW_s, \end{equation} where $W$ is a standard Brownian motion. \end{theorem} \noindent \textbf{Remark}.
Functional central limit theorems for P\'olya type urns have been proven with increasing generality in the works \cite{Go93}, \cite{BaHu02}, \cite{Ja}. The major difference with our results is that in theirs, the initial number of balls, $A_0^{(m)}, B_0^{(m)}$, is fixed. More specifically: 1) Gouet (\cite{Go93}) studies urns with two colors (black and white) in the setting of Bagchi and Pal (\cite{BaPa85}). According to that, when a white ball is drawn, we return it in the urn together with $a$ white and $b$ black balls, while if a black ball is drawn, we return it together with $c$ white and $d$ black. The numbers $a, b, c, d$ are fixed integers (possibly negative), the number of balls added to the urn is fixed (that is $a+b=c+d$), and balls are drawn uniformly form the urn. The plain P\'olya urn is not studied in that work because, according to the author, it has been studied by Heyde in \cite{He77}. However, for the P\'olya urn, \cite{He77} discusses the central limit theorem and the law of the iterated logarithm. In any case, following the techniques of Heyde and Gouet one can prove the following. Assume for simplicity that $k=1$ and let $L=:\lim_{n\to\infty}\frac{A_n}{n}$. The limit exists with probability one because of the martingale convergence theorem. Then $$\left\{\sqrt{n}\left(t\frac{A_{n/t}}{n}-L\right)\right\}_{t\ge0} \overset{d}{\to} \{W_{L' (1-L') t}\}_{t\ge0}$$ as $n\to\infty$. $W$ is a standard Brownian motion and $L'$ is a random variable independent of $W$ and having the same distribution as $L$. On the other hand, de-Finetti's theorem gives easily the more or less equivalent statement that, as $n\to\infty$, $$\left\{\sqrt{n}\left(\frac{A_{nt}}{nt}-L\right)\right\}_{t\ge0} \overset{d}{\to} \{W_{L' (1-L')/t}\}_{t\ge0}$$ with $W, L'$ as before. 2) Bai, Hu, and Zhang (\cite{BaHu02}) work again in the setting of Bagchi and Pal, but now the numbers $a, b, c, d$ depend on the order of the drawing and are random. The requirement that each time we add the same number of balls is relaxed. 3) Janson (\cite{Ja}) considers urns with many colors, labeled $1, 2, \ldots, l$, where after each drawing, if we pick a ball of color $i$, we place in the urn balls of every color according to a random vector $(\xi_{i, 1}, \ldots, \xi_{i, l})$ whose distribution depends on $i$ ($\xi_{i, j}$ is the number of balls of color $j$ that we add in the urn). Also, each ball is assigned a certain nonrandom activity that depends only on its color, and then the probability to pick a certain color at a drawing equals the ratio of the total of the activities of all balls of that color to the total of the activities of all balls present in the urn at that time. A restriction in that work is that there is a color $i_0$ so that starting the urn with just one ball and this ball has this color, there is positive probability to see in the future every other color. This excludes the classical P\'olya urn that we study. \subsection{$q$-P\'olya urn. Basic results} \label{qPolyaBasic} We recall some notation from $q$-calculus (see \cite{Char16}, \cite{KacCheung}). 
For $q\in(0, \infty){\raise0.3ex\hbox{$\scriptstyle \setminus$}}\{1\}, x\in \mathbb{C}, k\in {\mathbb N}p$, we define \begin{align} [x]_q&:=\frac{q^x-1}{q-1} & \text{the $q$-number of } x,\\ [k]_q!&:=[k]_q [k-1]_q \cdots [1]_q & \text{the $q$-factorial},\\ [x]_{k, q}&:=[x]_q [x-1]_q \cdots [x-k+1]_q & \text{ the $q$-factorial of order } k,\\ {x\brack k}_{q}&:=\frac{[x]_{k, q}}{[k]_q!} & \text{ the $q$-binomial coefficient}\\ (x; q)_\infty&:=\prod_{i=0}^\infty(1-xq^i) \text{ when }q\in[0, 1) & \text{the $q$-Pochhammer symbol} \label{qPoch}, \end{align} We extend these definitions in the case $k=0$ by letting $[0]_q!=1, [x]_{0, q}=1$. Now consider a $q$-P\'olya urn that has initially $r$ white and $s$ black balls, where $r\in{\mathbb N}\cup\{\infty\}$ and $s\in{\mathbb N}$. Call $X_n$ the number of drawings that give white ball in the first $n$ drawings. Its distribution is specified by the following. \noindent \textbf{Fact 1:} Let $a:=r/k$ and $b:=s/k$. \noindent (i) If $r\in {\mathbb N}$, then the probability mass function of $X_n$ is \begin{align} \label{QPolyaWhiteDistr} \mbox{\bf P}\left(X_{n}=x\right)&=q^{k(n-x)(a+x)} \frac{{-a \brack x}_{q^{-k}} {-b \brack n-x}_{q^{-k}}}{{-a-b\brack n}_{q^{-k}}}=q^{-sx} \frac{{a+x-1 \brack x}_{q^{-k}} {b+n-x-1\brack n-x}_{q^{-k}}}{{a+b+n-1\brack n}_{q^{-k}}} \\ &=q^{-kx(b+n-x)} \frac{{-a \brack x}_{q^{k}} {-b \brack n-x}_{q^{k}}}{{-a-b\brack n}_{q^{k}}} \label{QPWDQ} \end{align} for all $x\in{\mathbb N}$. \noindent (ii) If $r=\infty$ and $q>1$, then the probability mass function of $X_n$ is \begin{align} \label{QPolyaWhiteDistrRInfty} \mbox{\bf P}\left(X_{n}=x\right)&=q^{-sx}(1-q^{-k})^{n-x} {b+n-x-1\brack n-x}_{q^{-k}} \frac{[n]_{q^{-k}}!}{[x]_{q^{-k}}!} \end{align} for all $x\in{\mathbb N}$. Relation \eqref{QPolyaWhiteDistr} is (3.1) in \cite{Char12} where it is proved through recursion. In Section \ref{qPolyaBasicProofs} we give an alternative proof. According to the experiment described in Section \ref{Models}, the balls that are placed first in the line have an advantage to be picked (the white if $q\in(0, 1)$, the black if $q>1$). In fact, this leads to the extinction of drawings from the balls of the other color; there is a point after which the number of balls in the urn of that color stays fixed to a random number. In the next theorem, we identify the distribution of this number. We treat the case $q>1$. \begin{theorem}[Extinction of the second color] \label{LimitOfWeakColor} Assume that $q>1, r\in {\mathbb N}\cup\{\infty\}, s\in {\mathbb N}$. As $n\to\infty$, with probability one, $(X_n)_{n\ge1}$ converges to a random variable $X$ with values in ${\mathbb N}$ and probability mass function \noindent (i) \begin{equation} \label{LimOfWeakColor} f(x)= q^{-sx}{\frac{r}{k}+x-1 \brack x}_{q^{-k}}\frac{(q^{-s}; q^{-k})_\infty}{(q^{-r-s}; q^{-k})_\infty} \end{equation} for all $x\in{\mathbb N}$ in the case $r\in {\mathbb N}$ and \noindent (ii) \begin{equation} f(x)=\left(\frac{q^{-s}}{1-q^{-k}}\right)^x \frac{1}{[x]_{q^{-k}}!} (q^{-s}; q^{-k})_\infty \end{equation} for all $x\in{\mathbb N}$ in the case $r=\infty$. \end{theorem} When $r\in{\mathbb N}$ and $k|r$, $X$ has the negative $q$-binomial distribution of the second kind with parameters $r/k, q^{-s}, q^{-k}$ (see \S 3.1 in \cite{Char16} for its definition). When $r=\infty$, $X$ has the Euler distribution with parameters $q^{-s}/(1-q^{-k}), q^{-k}$ (see \S 3.3 in \cite{Char16} again). \subsection{$q$-P\'olya urn. 
Scaling limits} As in Section \ref{Polya_Scaling_Sec}, we consider an urn whose composition after $n$ drawings is $A_n^{(m)}$ white and $B_n^{(m)}$ black balls. Here $m\in {\mathbb N}p$ is a parameter. Our objective is to find limits of the entire path of the process $(A_n^{(m)})_{n\in {\mathbb N}}$ analogous to the ones of Section \ref{Polya_Scaling_Sec} for the P\'olya urn. Assume that $q>1$. If we keep $q$ fixed, nothing new appears because: (a) If $A_0^{(m)}, B_0^{(m)}$ are fixed for all $m$, then after some point we pick only black balls (Theorem \ref{LimitOfWeakColor}(i)). (b) If $\lim_{m\to\infty} B_0^{(m)}=\infty$ then the process converges to the one where we pick only black balls. (c) If $B_0^{(m)}$ is fixed for all $m$ and $\lim_{m\to\infty} A_0^{(m)}=\infty$ then the process converges to the one where $r=\infty$ and again, after some point, we pick only black balls (Theorem \ref{LimitOfWeakColor}(ii)). Interesting limits appear once we take $q=q_m$ to depend on $m$ and approach 1 as $m\to\infty$. We study two regimes for $q_m$. In the first, the distance of $q_m$ from 1 is $\Theta(1/m)$ while in the second, the distance is $o(1/m)$. \subsubsection{The regime $q=1+\Theta(m^{-1})$} Assume that $q_m=c^{1/m}$ with $c>1$. \begin{theorem} \label{QPolyaPathRegime1} Fix $w_0\in {\mathbb N}p$ and $b_0\ge 0$. If $A^{(m)}_0=w_0$ and $\lim_{m\to\infty} B^{(m)}_0/m=b_0$, then the process $(k^{-1}(A^{(m)}_{[m t]}-A_0^{(m)}))_{t\ge0}$ converges in distribution as $m\to\infty$ to a time-inhomogeneous pure birth process $Z$ with starting value $0$ and such that for all $0\le t_1<t_2, j\in{\mathbb N}$, the random variable $Z(t_2)-Z(t_1)|Z(t_1)=j$ has distribution $NB\big(\frac{w_0}{k}+j, \frac{1-c^{-b_0-kt_1}}{1-c^{-b_0-kt_2}}\big)$. Equivalently, $Z$ has rates \begin{equation} \label{Qrates} \lambda_{t, j}=\frac{w_0+j k}{c^{b_0+kt}-1}\log c \end{equation} for all $(t, j)\in[0, \infty)\times {\mathbb N}$. \end{theorem} \begin{theorem} \label{QPolyaPathRegime2} Assume that $A^{(m)}_0=g_m$ and $\lim_{m\to\infty} B^{(m)}_0/m=b_0$, where $b_0\in(0, \infty)$ and $g_m\in {\mathbb N}p, g_m\to\infty, g_m=o(m)$ as $m\to\infty$. Then the process $(k^{-1}(A^{(m)}_{[t m/g_m]}-A^{(m)}_0))_{t\ge0}$ converges in distribution, as $m\to\infty$, to the Poisson process on $[0, \infty)$ with rate \begin{equation} \label{QPrates} \frac{\log c}{c^{b_0}-1}. \end{equation} \end{theorem} \begin{theorem} \label{QPolyaPathODE} Assume that $A_{0}^{(m)}, B_{0}^{(m)}$ are such that $\lim_{m\to\infty} A_{0}^{(m)}/m=a, \lim_{m\to\infty}B_{0}^{(m)}/m=b$, where $a, b\in[0, \infty)$ are not both zero. Then the process $\big(A_{[mt]}^{(m)}/m\big)_{t\geq 0}$ converges in distribution, as $m\rightarrow+\infty$, to the unique solution of the differential equation \begin{align} \hat X_{0}&=a, \\ d\hat X_{t}&=k\frac{1-c^{\hat X_{t}}}{1-c^{a+b+k t}}dt, \label{ODEqPolya} \end{align} which is \begin{equation} \label{qPolyaDetLimit} \hat X_t:=a-\frac{1}{\log c}\log\left(\frac{c^b-1+c^{-kt}(1-c^{-a})}{c^b-c^{-a}} \right). \end{equation} \end{theorem} As for the P\'olya urn, we determine the fluctuations of the process $(A_{[mt]}^{(m)}/m)_{t\ge0}$ around its $m\to\infty$ limit, $\hat X$. Let $$\hat C_t^{(m)}=\sqrt{m}\bigg( \frac{A_{[mt]}^{(m)}}{m}-\hat X_t\bigg)$$ for all $m\in{\mathbb N}p$ and $t\ge0$. \begin{theorem} \label{QPolyaDiffusionLimit} Let $a, b\in[0, \infty)$, not both zero, $\theta_1, \theta_2\in {\mathbb R}$, and assume that $A_0^{(m)}:=[am+\theta_1\sqrt{m}], B_0^{(m)}=[bm+\theta_2\sqrt{m}]$ for all large $m\in{\mathbb N}$.
Then the process $(\hat C_t^{(m)})_{t\ge0}$ converges in distribution, as $m\to\infty$, to the unique solution of the stochastic differential equation \begin{equation} \label{QPolyaNoiseSDE} \begin{aligned} \hat Y_0&=\theta_1, \\ d\hat Y_t&=\frac{k\log c}{c^{a+b+k t}-1} \bigg\{\frac{(c^{a+b}-1)\hat Y_t-c^b(c^a-1)(\theta_1+\theta_2)}{c^b-1+c^{-kt}(1-c^{-a})}\bigg\} \,dt\\&+k\sqrt{(c^a-1)(c^b-1)} \frac{c^{(a+kt)/2}}{c^{a+b+kt}-c^{a+kt}+c^a-1}\,dW_t, \end{aligned} \end{equation} which is \begin{equation} \label{QPolyaDiffLimit} \begin{aligned} \hat Y_t=\frac{c^{a+b+kt}-1}{c^{a+b+kt}-c^{a+kt}+c^a-1}\bigg(&\theta_1-(\theta_1+\theta_2)\frac{c^{a+b}(c^a-1)}{c^{a+b}-1}\frac{c^{kt}-1}{c^{a+b+kt}-1}\\&+k\sqrt{(c^a-1)(c^b-1)} \int_0^t\frac{c^{(a+ks)/2}}{c^{a+b+ks}-1}\, dW_s \bigg). \end{aligned} \end{equation} Here $W$ is a standard Brownian motion. \end{theorem}
\subsubsection{The regime $q=1+o(m^{-1})$} In this regime, we let $q=q(m):=c^{\varepsilon_m/m}$ where $c>1$ and $\varepsilon_m\to0^+$ as $m\to\infty$. With computations analogous to those of the results of the previous subsection, it is easy to see that Theorems \ref{PolyaPathRegime1}, \ref{PolyaPathRegime2}, \ref{PolyaPathLinear}, \ref{PolyaDiffusionLimit} hold verbatim for the $q$-P\'olya urn in this regime.
\subsection{$q$-P\'olya urn with many colors} In this subsection, we give a $q$-analog of the P\'olya urn with more than two colors. The generalization is inspired by the experiment we used to explain relation \eqref{qPWhite}. Let $l\in{\mathbb N}, l\ge 2$, and $q\in(0,1)$. Assume that we have an urn containing $w_i$ balls of color $i$ for each $i\in\{1, 2, \ldots, l\}$. To draw a ball from the urn, we do the following. We order the balls in a line, first those of color 1, then those of color 2, and so on. Then we visit the balls, one after the other, in the order that they have been placed, and we select each with probability $1-q$ independently of what happened with the previous balls. If we go through all balls without picking any, we repeat the same procedure starting from the beginning of the line. Once a ball is selected, the drawing is completed. We return the ball to its position together with another $k$ balls of the same color. For each $i=0, 1, \ldots, l$, let $s_i=\sum_{1\le j\le i} w_j$. Notice that $s_l$ is the total number of balls in the urn. Then, working as for \eqref{LinePick}, we see that \begin{equation} \label{problcolors} \mbox{\bf P}(\text{color $i$ is drawn})=q^{s_{i-1}}\frac{1-q^{w_i}}{1-q^{s_l}}=\frac{q^{s_{i-1}}-q^{s_i}}{1-q^{s_l}}=q^{s_{i-1}}\frac{[w_i]_q}{[s_l]_q}. \end{equation} Call $p_i$ the number in the last display for all $i=1, 2, \ldots, l$. Note that when $q\to1$, $p_i$ converges to $w_i/s_l$, which is the probability for the usual P\'olya urn with $l$ colors. It is clear that for any given $q\in(0, \infty){\raise0.3ex\hbox{$\scriptstyle \setminus$}}\{1\}$, the numbers $p_1, p_2, \ldots, p_l$ are non-negative and add to 1 (the second fraction in \eqref{problcolors} shows this). We then define, for this $q$, the $q$-P\'olya urn with colors $1, 2, \ldots, l$ to be the sequential procedure in which, at each step, we add $k$ balls of a color picked randomly among $\{1, 2, \ldots, l\}$ so that the probability that this color is $i$ is $p_i$.
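The drawing rule just described is straightforward to simulate. The following minimal sketch in Python (the function names \verb|draw_probabilities| and \verb|simulate_q_polya| are ours and purely illustrative) computes the selection probabilities of \eqref{problcolors} directly and runs the urn for a prescribed number of drawings.
\begin{verbatim}
import random

def draw_probabilities(w, q):
    # p_i = q^{s_{i-1}} (1 - q^{w_i}) / (1 - q^{s_l}), q != 1,
    # where s_i = w_1 + ... + w_i; see the display above.
    s, total = 0, sum(w)
    probs = []
    for wi in w:
        probs.append(q**s * (1 - q**wi) / (1 - q**total))
        s += wi
    return probs

def simulate_q_polya(w, q, k, n_steps, seed=0):
    # Perform n_steps drawings; after each drawing return the ball
    # and add k balls of the drawn color.  Returns, for each color,
    # the number of times it was drawn.
    rng = random.Random(seed)
    w = list(w)
    counts = [0] * len(w)
    for _ in range(n_steps):
        p = draw_probabilities(w, q)
        i = rng.choices(range(len(w)), weights=p)[0]
        counts[i] += 1
        w[i] += k
    return counts
\end{verbatim}
As a quick sanity check, for $q$ close to 1 the probabilities returned by \verb|draw_probabilities| are close to $w_i/s_l$, in accordance with the remark following \eqref{problcolors}.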
When $q>1$, these probabilities come out of the experiment described above, except that we place the balls in reverse order (that is, first those of color $l$, then those of color $l-1$, and so on) and we go through the list selecting each ball with probability $1-q^{-1}$. It is then easy to see that the probability of picking a ball of color $i$ is $p_i$. \begin{theorem} \label{PMFOfWeakColors} Assume that $q\in(0, 1)$ and that we start with $a_1, a_2,\ldots, a_l$ balls of colors $1, 2, \ldots, l$, respectively, where $a_1, a_2, \ldots, a_l\in{\mathbb N}$ are not all zero. Call $X_{n, i}$ the number of times in the first $n$ drawings that we picked color $i$. The probability mass function for the vector $\left(X_{n,2},X_{n,3},\ldots,X_{n,l}\right)$ is \begin{align} \label{probmassforvectorAig} \mbox{\bf P}\left(X_{n,2}=x_{2},\ldots,X_{n,l}=x_{l}\right)&=q^{\sum_{i=2}^{l}x_i\sum_{j=1}^{i-1}\left(a_{j}+k x_{j}\right)}\frac{\prod_{i=1}^{l}{-\frac{a_{i}}{k} \brack x_{i}}_{q^{-k}}}{{-\frac{a_1+a_2+\ldots +a_l}{k} \brack n}_{q^{-k}}} \\ \label{probmassforvector} &={n \brack x_{1},x_{2},\ldots,x_{l}}_{q^{-k}} \frac{q^{\sum_{i=2}^{l}x_i\sum_{j=1}^{i-1}\left(a_{j}+k x_{j}\right)}\prod_{i=1}^{l}\left[-\frac{a_{i}}{k}\right]_{x_{i},q^{-k}}}{\left[-\frac{a_{1}+a_{2}+\ldots+a_{l}}{k}\right]_{n,q^{-k}}} \end{align} for all $x_2, \ldots, x_l\in\{0,1,2,\ldots,n\}$ with $x_2+\cdots+x_l\le n$, where $x_{1}:=n-\sum_{i=2}^{l}x_{i}$ and ${n \brack x_{1},x_{2},\ldots,x_{l}}_{q^{-k}}:=\frac{[n]_{q^{-k}}!}{[x_{1}]_{q^{-k}}!\cdot \ldots \cdot [x_{l}]_{q^{-k}}!}$ is the $q$-multinomial coefficient. \end{theorem} It follows from Theorem \ref{LimitOfWeakColor} that when $q\in(0, 1)$, after some random time, we will be picking only balls of color 1. Hence the number of times that we pick each of the other colors $2, 3, \ldots, l$, say $X_2, X_3, \ldots, X_l$, is finite. We determine the joint distribution of these numbers. \begin{theorem}\label{LimitOfWeakColors} Under the assumptions of Theorem \ref{PMFOfWeakColors}, as $n\rightarrow +\infty,$ with probability one, the vector $\left(X_{n,2},X_{n,3},\ldots,X_{n,l}\right)$ converges to a random vector $\left(X_{2},X_{3},\ldots,X_{l}\right)$ with values in $\mathbb{N}^{l-1}$ and probability mass function \begin{equation} f\left(x_{2},x_{3},\ldots,x_{l}\right)=q^{\sum_{i=2}^{l}x_{i}\sum_{j=1}^{i-1}a_{j}}\prod_{i=2}^{l}{x_{i}+\frac{a_{i}}{k}-1 \brack x_{i}}_{q^{k}} \frac{(q^{a_1}; q^k)_\infty}{(q^{a_1+\cdots+a_l}; q^k)_\infty} \end{equation} for all $x_2, \ldots, x_l\in {\mathbb N}$. \end{theorem} \noindent Note that the random variables $X_2, \ldots, X_l$ are independent although the components of $\left(X_{n,2},X_{n,3},\ldots,X_{n,l}\right)$ are dependent. Next, we look for a scaling limit for the path of the process. Assume that $c\in(0, 1)$ and $q_{m}=c^{1/m}$. Let $A_{j,i}^{(m)}$ be the number of balls of color $i$ in this urn after $j$ drawings. \begin{theorem} \label{QPolyaPathODEWColors} Assume that in the $q$-P\'olya urn with $l$ different colors of balls we have $\frac{1}{m}\left(A_{0,1}^{(m)},A_{0,2}^{(m)}, \ldots , A_{0,l}^{(m)}\right)\overset{m\to\infty}{\to}\left(a_{1}, a_{2} ,\dots, a_{l}\right)$, where $a_{1},\ldots,a_{l}\in\left[0,\infty\right)$ are not all zero. Set $\sigma_0=0$ and $\sigma_i:=\sum_{j\le i}a_j$ for all $i=1, 2, \ldots, l$.
Then the process $\left(\frac{1}{m}A_{[mt],1}^{(m)},\frac{1}{m}A_{[mt],2}^{(m)},\ldots,\frac{1}{m}A_{[mt],l}^{(m)}\right)_{t\geq 0} $ converges in distribution, as $m\rightarrow +\infty$, to $(X_{t, 1}, X_{t, 2}, \ldots, X_{t, l})_{t\ge0}$ with \begin{equation} \label{SystemSolution} X_{t, i}=a_i+\frac{1}{\log c} \log \frac{(1-c^{\sigma_l+kt})-c^{\sigma_{i-1}}(1-c^{k t})}{(1-c^{\sigma_l+kt})-c^{\sigma_i}(1-c^{k t})} \end{equation} for all $i=1, 2, \ldots, l$. \end{theorem} As in the case of two colors, we study the regime where $q_{m}=c^{\epsilon_{m}/m}$, with $c\in(0,1)$ and $\epsilon_{m}\rightarrow 0^{+}.$ \begin{theorem} \label{QPolyaPathODEWColors2} Let $m$ be a positive integer and assume that in the q-P\'olya urn with $l$ different colors of balls that $\frac{1}{m}\left(A_{0,1}^{(m)},A_{0,2}^{(m)}, \ldots , A_{0,l}^{(m)}\right)\overset{m\to\infty}{\to}\left(a_{1}, a_{2} ,\dots, a_{l}\right)$, where $a_{1},\ldots,a_{l}\in\left[0,\infty\right)$ are not all zero. Then the process $\left(\frac{1}{m}A_{[mt],1}^{(m)},\frac{1}{m}A_{[mt],2}^{(m)},\ldots,\frac{1}{m}A_{[mt],l}^{(m)}\right)_{t\geq 0}$ converges in distribution, as $m\rightarrow +\infty$, to $(X_t)_{t\ge0}$ with \begin{equation}\label{SystemSolutionReg2} X_t=\left(1+\frac{kt}{a_{1}+\dots+a_{l}}\right)(a_1, a_2, \ldots, a_l) \end{equation} for all $t\ge0$. \end{theorem} \noindent \textbf{Orientation}. In Section \ref{qPolyaBasicProofs}, we prove Fact 1 and Theorem 1.5, which are basic results for the $q$-P\'olya urn. Section \ref{JumpLimits} (Section \ref{DetermDiffLimits}) contains the proofs of the theorems for the P\'olya and $q$-P\'olya urns that give convergence to a jump process (to a continuous process). Finally, Section \ref{ProofMoreColors} contains the proofs for the results that refer to the $q$-P\'olya urn with arbitrary, finite number of colors. \section{$q$-P\'olya urn. Prevalence of a single color} \label{qPolyaBasicProofs} In this section, we prove the claims of Section \ref{qPolyaBasic}. Before doing so, we mention three properties of the $q$-binomial coefficient. For all $q\in (0, \infty){\raise0.3ex\hbox{$\scriptstyle \setminus$}}\{1\}, x\in \mathbb{C}, n, k\in {\mathbb N}$ with $k\le n$ it holds \begin{align} \label{NegToPosa} [-x]_q&=-q^{-x}[x]_q,\\ \label{NegToPos} {-x \brack k}_q&=(-1)^k q^{-k(k+2x-1)/2}{x+k-1 \brack k}_q,\\ {x \brack k}_{q^{-1}}&=q^{-k(x-k)}{x \brack k}_{q} \label{qMinusOneToq},\\ \sum_{1\le i_1<i_2<\cdots<i_k\le n} q^{i_1+i_2+\cdots+i_k}&=q^{{k+1 \choose 2}}{n \brack k }_{q}. \label{QBinCoeffCount} \end{align} The first is trivial, the second follows from the first, the third is easily shown, while the last is Theorem 6.1 in \cite{KacCheung}. \begin{proof}[\textbf{Proof of Fact 1}] (i) The probability to get black balls exactly at the drawings $i_1<i_2<\cdots<i_{n-x}$ is \begin{equation} \label{PnWBalls} g(i_1, i_2, \ldots, i_{n-x})=\frac{\prod_{j=0}^{x-1} [r+j k]_q \prod_{j=0}^{n-x-1} [s+j k]_q}{\prod_{j=0}^{n-1} [r+s+j k]_q} q^{\sum_{\nu=1}^{n-x}r+(i_\nu-\nu) k}. \end{equation} To see this, note that, due to \eqref{qPWhite} and \eqref{qPBlack}, the required probability would be equal to the above fraction if in \eqref{qPBlack} the term $q^w$ were absent. This term appears whenever we draw a black ball. Now, when we draw the $\nu$-th black ball, there are $r+(i_\nu-\nu) k$ white balls in the urn, and this explains the exponent of $q$ in \eqref{PnWBalls}. 
Since $[x+j k]_q=\frac{1-q^{x+jk}}{1-q}=[-\frac{x}{k}-j]_{q^{-k}}[-k]_q$ for all $x, j\in {\mathbb R}$, the fraction in \eqref{PnWBalls} equals \begin{equation} \frac{[-a]_{x, q^{-k}} [-b]_{n-x, q^{-k}}}{[-a-b]_{n , q^{-k}}}. \end{equation} Then \begin{align}\sum_{1\le i_1<i_2<\cdots<i_{n-x}\le n} q^{\sum_{\nu=1}^{n-x}r+(i_\nu-\nu)k}&=q^{(n-x) r-k(n-x)(n-x+1)/2}\sum_{1\le i_1<i_2<\cdots<i_{n-x}\le n} (q^k)^{i_1+i_2+\cdots+i_{n-x}}\\ &=q^{(n-x) r-k(n-x)(n-x+1)/2} q^{k {n-x+1 \choose 2 }} {n \brack x }_{q^k} \\ &=q^{(n-x) r} q^{kx(n-x)}{n \brack x }_{q^{-k}}=q^{k (n-x)(a+x)}{n \brack x }_{q^{-k}}. \end{align} The second equality follows from \eqref{QBinCoeffCount} and the equality ${n \brack x }_{q^k}={n \brack n-x }_{q^k}$. The third, from \eqref{qMinusOneToq}. Thus, the sum $\sum_{1\le i_1<i_2<\cdots<i_{n-x}\le n}g(i_1, i_2, \ldots, i_{n-x})$ equals the first expression in \eqref{QPolyaWhiteDistr}. The second expression in \eqref{QPolyaWhiteDistr} and \eqref{QPWDQ} follow by using \eqref{NegToPos} and \eqref{qMinusOneToq} respectively. (ii) In this scenario, we take $r\to\infty$ in the last expression in \eqref{QPolyaWhiteDistr}. We will explain shortly why this gives the probability we want. Since $q^{-k}\in(0, 1)$, we have $\lim_{t\to\infty}[t]_{q^{-k}}=(1-q^{-k})^{-1}$ and thus, for each $\nu\in{\mathbb N}$, it holds \begin{equation} \lim_{t\to\infty}{t+\nu-1 \brack \nu}_{q^{-k}}=\frac{1}{[\nu]_{q^{-k}}!} \frac{1}{(1-q^{-k})^\nu}. \end{equation} Applying this twice in the last expression in \eqref{QPolyaWhiteDistr} (there $a=r/k\to\infty$), we get as limit the right hand side of \eqref{QPolyaWhiteDistrRInfty}. Now, to justify that passage to the limit $r\to\infty$ in \eqref{QPolyaWhiteDistr} gives the required result, we argue as follows. For clarity, denote the probability $\mbox{\bf P}_q(\text{white})$ when there are $w$ white and $b$ black balls in the urn by $\mbox{\bf P}_q^{w, b}(\text{white})$. And when there are $r$ white and $s$ black balls in the urn in the beginning of the procedure, denote the probability of the event $X_n=x$ by $\mbox{\bf P}^{r, s}(X_n=x)$. It is clear that the probability $\mbox{\bf P}^{r, s}(X_n=x)$ is a continuous function (in fact, a polynomial) of the quantities $$\mbox{\bf P}_q^{r+ki, s+k j}(\text{white}):i=0, 1, \ldots, x-1, j=0, 1, \ldots, n-x-1,$$ for all values of $r\in {\mathbb N}\cup\{\infty\}, s\in {\mathbb N}$. In $\mbox{\bf P}^{\infty, s}(X_n=x)$, each such quantity, $\mbox{\bf P}_q^{\infty, m}(\text{white})$, equals $\lim_{r\to\infty}\mbox{\bf P}^{r, m}(\text{white})$. Thus, $\mbox{\bf P}^{\infty, s}(X_n=x)=\lim_{r\to\infty}\mbox{\bf P}^{r, s}(X_n=x)$. \end{proof} Before proving Theorem \ref{LimitOfWeakColor}, we give a simple argument that shows that eventually we will be picking only black balls. That is, the number $X:=\lim_{n\to\infty} X_n$ of white balls drawn in an infinite sequence of drawings is finite. It is enough to show it in the case that $r=\infty$ and $s=1$ since, by the experiment that realizes the $q$-P\'olya urn, we have (using the notation from the proof of Fact 1 (ii)) $$\mbox{\bf P}^{r, s}(X=\infty)\le \mbox{\bf P}^{\infty, 1}(X=\infty).$$ For each $n\in{\mathbb N}p$, call $E_n$ the event that at the $n$-th drawing we pick a white ball, $B_n$ the number of black balls present in the urn after that drawing (also, $B_0:=1$), and write $\hat q:=1/q$. Then $\mbox{\bf P}(E_n)=\textbf{E}(\mbox{\bf P}(E_n| B_{n-1}))=\textbf{E}(\hat q^{B_{n-1}})$. We will show that this decays exponentially with $n$. 
Indeed, since at every drawing there is probability at least $1-\hat q$ to pick a black ball, we can construct in the same probability space the random variables $(B_n)_{n\ge1}$ and $(Y_i)_{i\ge1}$ so that the $Y_i$ are i.i.d. with $Y_1\sim$ Bernoulli$(1-\hat q)$ and $B_n\ge 1+k(Y_1+\cdots+Y_n)$ for all $n\in{\mathbb N}^+$. Consequently, $$\mbox{\bf P}(E_n)\le \textbf{E}(\hat q^{1+k(Y_1+\cdots+Y_{n-1})})=\hat q \{\textbf{E}(\hat q^{kY_1})\}^{n-1}.$$ This implies that $\sum_{n=1}^\infty \mbox{\bf P}(E_n)<\infty$, and the first Borel-Cantelli lemma gives that $\mbox{\bf P}^{\infty, 1}(X=\infty)=0$. \begin{proof}[\textbf{Proof of Theorem \ref{LimitOfWeakColor}}] Since $(X_n)_{n\ge1}$ is increasing, it converges to a random variable $X$ with values in ${\mathbb N}\cup\{\infty\}$. In particular, it converges to this variable in distribution. Our aim is to take the limit as $n\to\infty$ in the last expression in \eqref{QPolyaWhiteDistr} and in \eqref{QPolyaWhiteDistrRInfty} in order to determine the distribution of $X$. Note that for $a\in {\mathbb R}$ and $\theta\in[0, 1)$ it is immediate that (recall \eqref{qPoch} for the notation) \begin{equation} \label{QBinomialAsymptotics} \lim_{n\to\infty}{a+n \brack n }_{\theta}=\frac{(\theta^{a+1}; \theta)_\infty}{(\theta; \theta)_\infty}. \end{equation} \noindent (i) Taking $n\to\infty$ in the last expression in \eqref{QPolyaWhiteDistr} and using \eqref{QBinomialAsymptotics}, we get the required expression, \eqref{LimOfWeakColor}, for $f$. Then relation (2.2) in \cite{Char12} (or (8.1) in \cite{KacCheung}) shows that $\sum_{x\in{\mathbb N}} f(x)=1$, so that it is a probability mass function of a random variable $X$ with values in ${\mathbb N}$. \noindent (ii) This follows after taking the limit in \eqref{QPolyaWhiteDistrRInfty} and using \eqref{QBinomialAsymptotics} and $\lim_{n\to\infty} (1-q^{-k})^n [n]_{q^{-k}}!=(q^{-k}; q^{-k})_\infty$. \end{proof}
\section{Jump process limits. Proof of Theorems \ref{PolyaPathRegime1}, \ref{PolyaPathRegime2}, \ref{QPolyaPathRegime1}, \ref{QPolyaPathRegime2}} \label{JumpLimits} In the case of Theorems \ref{PolyaPathRegime1}, \ref{QPolyaPathRegime1}, we let $g_m:=1$ for all $m\in {\mathbb N}^+$, and in all four theorems we let $v:=v_m:=m/g_m$. Our interest is in the sequence of the processes $(Z^{(m)})_{m\ge1}$ with \begin{equation} Z^{(m)}(t)=\frac{1}{k}(A^{(m)}_{[v t]}-A^{(m)}_0) \end{equation} for all $t\ge0$. We apply Theorem 7.8 in \cite{EtKu}, that is, we show that the sequence $(Z^{(m)})_{m\ge1}$ is tight and its finite dimensional distributions converge. Tightness gives that there is a subsequence of this sequence that converges in distribution to a process $Z=(Z_t)_{t\ge0}$ with paths in the space $D_{\mathbb R}[0, \infty)$ of real-valued functions on $[0, \infty)$ that are right continuous with left limits. Then tightness together with convergence of finite dimensional distributions shows that the whole sequence $(Z^{(m)})_{m\ge1}$ converges in distribution to $Z$. \noindent \textbf{Notation:} (i) For sequences $(a_n)_{n\in{\mathbb N}}, (b_n)_{n\in{\mathbb N}}$ with values in ${\mathbb R}$, we will say that they are asymptotically equivalent, and will write $a_n\sim b_n$ as $n\to\infty$, if $\lim_{n\to\infty} a_n/b_n=1$. We use the same expressions for functions $f, g$ that are defined in a neighborhood of $\infty$ and satisfy $\lim_{x\to\infty} f(x)/g(x)=1$.
\noindent (ii) For $a\in \mathbb{C}$ and $k\in {\mathbb N}p$, let \begin{align} (a)_k&:=a(a-1)\cdots (a-k+1),\\ a^{(k)}&:=a(a+1)\cdots(a+k-1), \end{align} the falling and rising factorial respectively. Also let $(a)_0:=a^{(0)}:=1$. \subsection{Convergence of finite dimensional distributions} \label{FddConvergence} Since for each $m\ge1$ the process $Z^{(m)}$ is Markov taking values in ${\mathbb N}$ and increasing in time, it is enough to show that the conditional probability \begin{equation}\label{ConditionalTransition} \mbox{\bf P}(Z^{(m)}(t_2)=k_2 | Z^{(m)}(t_1)=k_1) \end{equation} converges as $m\to\infty$ for each $0\le t_1<t_2$ and nonnegative integers $k_1\le k_2$. \noindent Consider first the case of P\'olya urn and define \begin{align} n&:=[v t_2]-[v t_1], \label{nDef}\\ x&:=k_2-k_1,\\ \sigma&:=\frac{A_0^{(m)}+kk_1}{k},\\ \tau&:=\frac{k[v t_1]-kk_1+B_0^{(m)}}{k}. \label{tauDef} \end{align} Then, the above probability equals \begin{align} \notag &\mbox{\bf P}(A_{[v t_2]}^{(m)}=kk_2+w_0 | A_{[v t_1]}^{(m)}=kk_1+w_0)\\ &={n \choose x}\frac{k\sigma (k\sigma+k)(k\sigma+2k)\cdots (k\sigma+(x-1)k) k\tau(k\tau+k)(k\tau+2k)\cdots (k\tau+(n-x-1)k)}{(k\sigma+k\tau)(k\sigma+k\tau+k)(k\sigma+k\tau+2k)\cdots (k\sigma+k\tau+(n-1)k)}\\ &=\frac{(n)_x}{x!}\frac{\sigma^{(x)} \tau^{(n-x)} }{(\sigma+\tau)^{(n)}}=\frac{(n)_x}{x!} \sigma^{(x)} \frac{\Gamma\left(\tau+n-x\right)}{\Gamma\left(\tau\right)}\frac{\Gamma\left(\sigma+\tau\right)}{\Gamma\left(\sigma+\tau+n\right)}. \label{TransitionProbab} \end{align} To compute the limit as $m\to\infty$ of \eqref{TransitionProbab}, we will use Stirling's approximation for the Gamma function, \begin{equation} \label{GammaStirling} \Gamma(y)\sim\left(\frac{y}{e}\right)^y\sqrt{\frac{2\pi}{y}} \end{equation} as $y\to\infty$, and its consequence \begin{equation} \label{GammaAsymptotics} \Gamma(y+a)\sim\Gamma(y) y^a \end{equation} as $y\to\infty$ for all $a\in\mathbb{R}$. \begin{proof}[\textbf{Theorem \ref{PolyaPathRegime1}}] Recall that $v=m$ in this theorem. Using \eqref{GammaAsymptotics} twice, with the role of $a$ played by $-x$ and $\sigma$, we see that the last quantity in \eqref{TransitionProbab}, for $m\to\infty$, is asymptotically equivalent to \begin{align*} &\frac{(m(t_2-t_1))^x}{x!}\sigma^{(x)}\tau^\sigma \frac{\left(\tau+n\right)^{-x}}{\left(\tau+n\right)^{\sigma}}\sim \frac{(m(t_2-t_1))^x}{x!}\sigma^{(x)} \frac{\{m(t_1+(b_0/k))\}^\sigma}{\{m(t_2+(b_0/k))\}^{\sigma+x}} \\ &=\frac{(t_2-t_1)^x}{x!} \sigma^{(x)} \frac{\{t_1+(b_0/k)\}^\sigma}{\{t_2+(b_0/k)\}^{\sigma+x}} ={\sigma+x-1 \choose x} \left(\frac{t_2-t_1}{t_2+(b_0/k)}\right)^x \left(1-\frac{t_2-t_1}{t_2+(b_0/k)}\right)^\sigma. \end{align*} Thus, as $m\to\infty$, the distribution of $\{Z^{(m)}(t_2)-Z^{(m)}(t_1)\}| Z^{(m)}(t_1)=k_1$ converges to the negative binomial distribution with parameters $\sigma, \frac{t_1+(b_0/k)}{t_2+(b_0/k)}$ [recall \eqref{NBDensity}]. \end{proof} \begin{proof}[\textbf{Theorem \ref{PolyaPathRegime2}}] Using \eqref{GammaStirling}, we see that the last quantity in \eqref{TransitionProbab}, for $m\to\infty$, is asymptotically equivalent to \begin{align*} &\frac{(m(t_2-t_1))^x}{x! g_m^x}\frac{g_m^x}{k^x} e^x \frac{(\tau+n-x)^{\tau+n-x}}{\tau^\tau}\frac{(\sigma+\tau)^{\sigma+\tau}}{(\sigma+\tau+n)^{\sigma+\tau+n}} \\ &\sim \frac{m^x (t_2-t_1)^x}{x! k^x} e^x (\tau+n-x)^{-x} \left(\frac{\tau+n-x}{\sigma+\tau+n}\right)^n \left(\frac{\sigma+\tau}{\sigma+\tau+n}\right)^\sigma \left(\frac{(\tau+n-x)(\sigma+\tau)}{\tau(\sigma+\tau+n)}\right)^{\tau} \\ &\sim \frac{m^x (t_2-t_1)^x}{x! 
k^x} e^x \tau^{-x} e^{-(t_2-t_1)/b_0} e^{-(t_2-t_1)/b_0} e^{-x+(t_2-t_1)/b_0}\sim \frac{1}{x!}\left(\frac{t_2-t_1}{b_0}\right)^x e^{-(t_2-t_1)/b_0} . \end{align*} Here it was crucial that $b_0>0$. Thus, as $m\to\infty$, the distribution of $\{Z^{(m)}(t_2)-Z^{(m)}(t_1)\}| Z^{(m)}(t_1)=k_1$ converges to the Poisson distribution with parameter $(t_2-t_1)/b_0$. \end{proof} \noindent Now we treat the cases of Theorems \ref{QPolyaPathRegime1}, \ref{QPolyaPathRegime2}, which concern the $q$-P\'olya urn. Define again $n, x, \sigma, \tau$ as in \eqref{nDef}-\eqref{tauDef}, and $r:=q_m^{-k}=c^{-k/m}$. Then, the probability in \eqref{ConditionalTransition}, with the help of the last expression in \eqref{QPolyaWhiteDistr}, is computed as \begin{equation} \label{QPolyaTransition} r^{\tau x} {\sigma+x-1 \brack x }_{r} \frac{{\tau+n-x-1 \brack n-x }_{r} }{{\sigma+\tau+n-1 \brack n }_{r} }=r^{\tau x} {\sigma+x-1 \brack x }_{r} \Big(\prod_{i=n-x+1}^{n}(1-r^i)\Big) \frac{1}{\prod_{i=n-x}^{n-1}(1-r^{\tau+i})} \frac{[\tau+n-1]_{n, r}}{[\sigma+\tau+n-1]_{n, r}}. \end{equation} The last ratio is \begin{align} \prod_{i=0}^{n-1} \frac{1-r^{\tau+i}}{1-r^{\sigma+\tau+i}}=\prod_{i=0}^{n-1} \left(1-(1-r^\sigma) r^\tau \frac{r^i}{1-r^{\sigma+\tau+i}}\right). \end{align} Denote by $1-a_{m, i}$ the $i$-th term of the product. The logarithm of the product equals \begin{equation} \label{RiemannSum} -(1-r^\sigma) r^\tau \sum_{i=0}^{n-1} \frac{r^i}{1-r^{\sigma+\tau+i}}+o(1) \end{equation} as $m\to\infty$. To justify this, note that $1-r^\sigma\sim \frac{1}{m}(A_0^{(m)}+k k_1) \log c$ and $r^{\tau+i}/(1-r^{\sigma+\tau+i})\le 1/(1-c^{-b_0})$ for all $i\in {\mathbb N}$. Thus, for all large $m$, $|a_{m, i}|<1/2$ for all $i=0, 1, \ldots, n-1$, and the error in approximating the logarithm of $1-a_{m, i}$ by $-a_{m, i}$ is at most $|a_{m, i}|^2$ (by Taylor's expansion, we have $|\log(1-y)+y|\le |y|^2$ for all $y$ with $|y|\le1/2$). The sum of all errors is at most $n \max_{0\le i<n}|a_{m, i}|^2$, which goes to zero as $m\to\infty$ because $1-r^\sigma\sim C/n$ for some appropriate constant $C>0$. We will compute the limit of the right hand side of \eqref{QPolyaTransition} as $m\to\infty$ under the assumptions of Theorems \ref{QPolyaPathRegime1}, \ref{QPolyaPathRegime2}. \begin{proof}[\textbf{Theorem \ref{QPolyaPathRegime1}}] As $m\to\infty$, the first term of the product in \eqref{QPolyaTransition} converges to $c^{-x(b_0+k t_1)}$. The $q$-binomial coefficient converges to ${k^{-1}w_0+k_2-1 \choose k_2-k_1}$. The third term converges to $(1-c^{-k(t_2-t_1)})^x$, while the denominator of the fourth term converges to $(1-\rho_2)^x$, where we set $\rho_i:=c^{-b_0-kt_i}$ for $i=1, 2$. The expression preceding $o(1)$ in \eqref{RiemannSum} is asymptotically equivalent to \begin{align}&-\frac{k}{m} \sigma (\log c) \rho_1 \sum_{i=0}^{n-1} \frac{c^{-ki/m}}{1-r^{\sigma+\tau} c^{-ki/m}} = -\rho_1k \sigma (\log c) \frac{1}{m}\sum_{i=0}^{n-1} \frac{c^{-ki/m}}{1-\rho_1 c^{-ki/m}}+o(1) \\&=-\rho_1 k \sigma \log c \int_0^{t_2-t_1} \frac{1}{c^{k y}-\rho_1}\, dy+o(1) =\sigma \log \frac{1-\rho_1}{1-\rho_2}+o(1). \end{align} The equality in the first line is true because $\lim_{m\to\infty}r^{\sigma+\tau}=\rho_1$ and the function $x\mapsto c^{-k i/m}/(1-x c^{-ki/m})$ has derivative bounded uniformly in $i, m$ when $x$ is confined to a compact subset of $[0, 1)$. 
Thus, the limit of \eqref{QPolyaTransition}, as $m\to\infty$, is \begin{equation} {\sigma+x-1 \choose x} \left(\frac{\rho_1-\rho_2}{1-\rho_2}\right)^x \left(\frac{1-\rho_1}{1-\rho_2}\right)^\sigma, \end{equation} which means that, as $m\to\infty$, the distribution of $\{Z^{(m)}(t_2)-Z^{(m)}(t_1)\}| Z^{(m)}(t_1)=k_1$ converges to the negative binomial distribution with parameters $\sigma, (1-\rho_1)/(1-\rho_2)$. \end{proof} \begin{proof}[\textbf{Theorem \ref{QPolyaPathRegime2}}] Now the term $r^{\tau x}$ converges to $c^{-x b_0}$, while \begin{align} &{\sigma+x-1 \brack x }_{r}\Big(\prod_{i=n-x+1}^{n}(1-r^i)\Big)=\frac{\prod_{i=0}^{x-1}(1-r^{\sigma+i})}{\prod_{i=1}^x (1-r^i)} \Big(\prod_{i=n-x+1}^{n}(1-r^i)\Big)\\ &\sim \frac{\prod_{i=0}^{x-1}(\sigma+i)}{\prod_{i=1}^x i} \frac{((t_2-t_1)k\log c)^x}{g_m^x}\sim\frac{1}{x!}((t_2-t_1)\log c)^x. \end{align} The denominator of the fourth term in \eqref{QPolyaTransition} converges to $(1-c^{-b_0})^x$. The expression in \eqref{RiemannSum} is asymptotically equivalent to \begin{equation} -r^\tau(1-r^\sigma)\sum_{i=0}^{n-1} \frac{r^i}{1-r^{\sigma+\tau+i}}\sim -c^{-b_0} \frac{g_m}{m}\log c \frac{n}{1-c^{-b_0}} \sim -\frac{\log c}{c^{b_0}-1}(t_2-t_1). \end{equation} In the first $\sim$, we used the fact that the terms of the sum, as $m\to\infty$, converge uniformly in $i$ to $(1-c^{-b_0})^{-1}$. Thus, the limit of \eqref{QPolyaTransition}, as $m\to\infty$, is \begin{equation} \frac{1}{x!} \left(\frac{\log c}{c^{b_0}-1}(t_2-t_1)\right)^x e^{-\frac{\log c}{c^{b_0}-1}(t_2-t_1) }, \end{equation} which means that, as $m\to\infty$, the distribution of $\{Z^{(m)}(t_2)-Z^{(m)}(t_1)\}| Z^{(m)}(t_1)=k_1$ converges to the Poisson distribution with parameter $\frac{t_2-t_1}{c^{b_0}-1}\log c$. \end{proof} For use in the following section, we define \begin{equation} \label{transitionProb} U(t_1, t_2, k_1, x):=\lim_{m\to\infty}\mbox{\bf P}(Z^{(m)}(t_2)=k_1+x | Z^{(m)}(t_1)=k_1) \end{equation} for all $0\le t_1\le t_2, k_1\in {\mathbb N}, x\in {\mathbb N}$. The results of this section show that $U$ as a function of $x\in {\mathbb N}$ is a probability mass function of an appropriate random variable with values in ${\mathbb N}$. \subsection{Tightness} We apply Corollary 7.4 of Chapter 3 in \cite{EtKu}. According to it, it is enough to show that \begin{itemize} \item[(i)] For each $t\ge0$, it holds $\lim_{R\to\infty} \varlimsup_{m\to\infty}\mbox{\bf P}(|Z^{(m)}(t)|\ge R)=0$. \item[(ii)] For each $T, \varepsilon>0$, it holds $\lim_{\delta\to0} \varlimsup_{m\to\infty} \mbox{\bf P}(w'(Z^{(m)}, \delta, T)\ge \varepsilon)=0.$ \end{itemize} Here, for any function $f:[0, \infty)\to{\mathbb R}$, we define $$w'(f, \delta, T):=\inf_{\{t_i\}}\max_{i}\sup_{s, t\in[t_{i-1}, t_i)}|f(s)-f(t)|,$$ where the infimum is over all partitions of the form $0=t_0<t_1<\cdots t_{n-1}<T\le t_n$ with $t_i-t_{i-1}>\delta$ for all $i=1, 2, \ldots, n$. The first requirement holds because $Z^{(m)}(t)$ converges in distribution as we showed in the previous subsection. The second requirement, since $Z^{(m)}$ is a jump process with jump sizes only 1, is equivalent to \begin{equation} \label{TightnessJumps} \lim_{\delta\to0^+} \varlimsup_{m\to\infty} \mbox{\bf P}(\text{There are at least two jump times of $Z^{(m)}$ in $[0, T]$ with distance}\le \delta)=0. \end{equation} Call $A_{m, \delta}$ the event inside the probability and for $j=1, 2, \ldots, [T/\delta]$ define $I_j:=((j-1)\delta, (j+1)\delta]$. 
Then, for each $\ell\in {\mathbb N}$, the probability $\mbox{\bf P}(A_{m, \delta}\cap \{Z^{(m)}(T)\le \ell\})$ is bounded above by \begin{align}&\sum_{j=1}^{[T/\delta]}\mbox{\bf P}( \{Z^{(m)}(T)\le \ell\}\cap\{\text{There are at least two jump times of $Z^{(m)}$ in } I_j\}) \\ &\le \sum_{j=1}^{[T/\delta]}\mbox{\bf P}( \{Z^{(m)}(T)\le \ell\}\cap\{Z^{(m)}((j+1)\delta)-Z^{(m)}((j-1)\delta)\ge 2\})\\ &\le \sum_{j=1}^{[T/\delta]} \max_{0\le \mu \le \ell} \mbox{\bf P}(Z^{(m)}((j+1)\delta)-Z^{(m)}((j-1)\delta)\ge 2 | Z^{(m)}((j-1)\delta)=\mu). \end{align} The limit of the last quantity as $m\to\infty$, with the use of the function $U$ of \eqref{transitionProb}, is written as \begin{equation} \label{JumpMaxBound} \sum_{j=1}^{[T/\delta]} \max_{0\le \mu \le \ell} \sum_{x=2}^\infty U((j-1)\delta, (j+1)\delta, \mu, x) \le\frac{T}{\delta} \max_{\substack{0\le \mu\le \ell\\ 1\le j\le [T/\delta]}} \sum_{x=2}^\infty U((j-1)\delta, (j+1)\delta, \mu, x). \end{equation} \textsc{Claim:} The max in \eqref{JumpMaxBound} is bounded above by $\delta^2 C(\ell, T)$ for an appropriate constant $C(\ell, T)\in (0, \infty)$ that does not depend on $m$ or $\delta$. Assuming the claim and taking $m\to\infty$ in $\mbox{\bf P}(A_{m, \delta})=\mbox{\bf P}(A_{m, \delta}\cap \{Z^{(m)}(T)\le \ell\})+\mbox{\bf P}(A_{m, \delta}\cap \{Z^{(m)}(T)>\ell\})$, we get $$\varlimsup_{m\to\infty} \mbox{\bf P}(A_{m, \delta})\le \delta C(\ell, T)+\varlimsup_{m\to\infty}\mbox{\bf P}(\{Z^{(m)}(T)>\ell\}).$$ Now let $\varepsilon>0$. Because of the validity of (i) in the tightness requirements, there is $\ell$ large enough so that the second term is $<\varepsilon$. Fixing this $\ell$ and taking $\delta\to0$ in the inequality, we get \eqref{TightnessJumps}. \textsc{Proof of the claim:} We establish the above claim for each of the Theorems \ref{PolyaPathRegime1}, \ref{PolyaPathRegime2}, \ref{QPolyaPathRegime1}, \ref{QPolyaPathRegime2}. We use the following bounds. If $X, Y$ are random variables with $X\sim$ Poisson($\lambda$) and $Y\sim NB(\nu, p)$, then \begin{align} \mbox{\bf P}(X\ge 2)&\le \lambda^2, \label{PoissonTail}\\ \mbox{\bf P}(Y\ge 2)&\le \frac{\nu (\nu+1)}{2} (1-p)^2. \label{NBTail} \end{align} The first inequality is elementary, while the second is true because the difference of the two sides $$\mbox{\bf P}(Y\ge 2)-\frac{\nu(\nu+1)}{2} (1-p)^2=1-p^\nu-\nu p^\nu(1-p)-\frac{\nu(\nu+1)}{2} (1-p)^2$$ is an increasing function of $p$ in $[0, 1]$ with value $0$ at $p=1$. According to the results of Section \ref{FddConvergence}, the sum after the max in \eqref{JumpMaxBound} equals $\mbox{\bf P}(X\ge2)$ where \begin{equation} X\sim \begin{cases} NB\big(\frac{w_0}{k}+\mu, \frac{t_1+(b_0/k)}{t_2+(b_0/k)}\big) & \text{for Theorem \ref{PolyaPathRegime1}},\\ \text{Poisson}\big(\frac{2\delta}{b_0}\big) & \text{for Theorem \ref{PolyaPathRegime2}}, \\ NB\big(\frac{w_0}{k}+\mu, \frac{1-c^{-b_0-kt_1}}{1-c^{-b_0-kt_2}}\big) & \text{for Theorem \ref{QPolyaPathRegime1}},\\ \text{Poisson}\big(2\delta \frac{\log c}{c^{b_0}-1}\big) & \text{for Theorem \ref{QPolyaPathRegime2}}, \\ \end{cases} \end{equation} and $t_1:=(j-1)\delta, t_2:=(j+1)\delta$. The claim then follows from \eqref{PoissonTail} and \eqref{NBTail}. \subsection{Conclusion} It is clear from the form of the finite dimensional distributions that in all Theorems \ref{PolyaPathRegime1}, \ref{PolyaPathRegime2}, \ref{QPolyaPathRegime1}, \ref{QPolyaPathRegime2} the limiting process $Z$ is a pure birth process that does not explode in finite time.
Its rate at the point $(t, j)\in [0, \infty)\times {\mathbb N}$ is $$\lambda_{t, j}=\lim_{h\to0^+}\frac{1}{h} \mbox{\bf P}(Z(t+h)=j+1| Z(t)=j)$$ and is found as stated in the statement of each theorem. \section{Deterministic and diffusion limits. Proof of Theorems \ref{PolyaPathLinear}, \ref{PolyaDiffusionLimit}, \ref{QPolyaPathODE}, \ref{QPolyaDiffusionLimit} } \label{DetermDiffLimits} These theorems are proved with the use of Theorem 7.1 in Chapter 8 of \cite{Du96}, which is concerned with convergence of time-homogeneous Markov processes to diffusions. Since our basic Markov chain, $(A_n^{(m)})_{n\in{\mathbb N}}$, is not time-homogeneous, we do the standard trick of considering the chain $\{(A_n^{(m)}, n)\}_{n\in{\mathbb N}}$ which is time-homogeneous. \subsection{Proof of Theorems \ref{PolyaPathLinear}, \ref{QPolyaPathODE} } For each $m\in{\mathbb N}^+$, consider the discrete time-homogeneous Markov chain $$Z_{n}^{(m)}=\Big(\frac{A_n^{(m)}}{m}, \frac{n}{m}\Big).$$ From any given state $(x_1, x_2)$ of $Z_n^{(m)}$, the chain moves to either of $(x_1+k/m, x_2+m^{-1}), (x_1, x_2+m^{-1})$ with corresponding probabilities $p(x_1, x_2, m), 1-p(x_1, x_2, m)$, where $$p(x_1, x_2, m):=\begin{cases} \frac{mx_1}{A_0^{(m)}+B_0^{(m)}+kmx_2} & \text{ in the case of Theorem \ref{PolyaPathLinear}},\\ \frac{1-q_m^{mx_1}}{1-q_m^{A_0^{(m)}+B_0^{(m)}+kmx_2}} & \text{ in the case of Theorem \ref{QPolyaPathODE}}. \end{cases} $$ This is true because when the chain is at the point $(x_1, x_2)$, then the time $n$ is $n=m x_2$ and $A_{n}^{(m)}+B_{n}^{(m)}=A_0^{(m)}+B_0^{(m)}+kn$. Define also \begin{equation} p(x_1, x_2):=\lim_{m\to\infty} p(x_1, x_2, m)=\begin{cases} \frac{x_1}{a+b+kx_2} & \text{ in the case of Theorem \ref{PolyaPathLinear}},\\ \frac{1-c^{x_1}}{1-c^{a+b+kx_2}} & \text{ in the case of Theorem \ref{QPolyaPathODE}}. \end{cases} \end{equation} We compute the mean and the covariance matrix for the one step change of $Z^{(m)}=(Z^{(m), 1}, Z^{(m), 2})$ conditioned on its current position. \begin{align} \textbf{E}\left[Z_{n+1}^{(m), 1}-Z_n^{(m), 1}|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{k}{m}p(x_1, x_2, m),\\ \textbf{E}\left[Z_{n+1}^{(m), 2}-Z_n^{(m), 2}|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{1}{m},\\ \textbf{E}\left[(Z_{n+1}^{(m), 1}-Z_n^{(m), 1})^2|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{k^2}{m^2}p(x_1, x_2, m),\\ \textbf{E}\left[(Z_{n+1}^{(m), 1}-Z_n^{(m), 1})(Z_{n+1}^{(m), 2}-Z_n^{(m), 2})|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{k}{m^{2}}p(x_1, x_2, m),\\ \textbf{E}\left[(Z_{n+1}^{(m), 2}-Z_n^{(m), 2})^2|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{1}{m^{2}}. \end{align} For each $m\in{\mathbb N}p$, we consider the process $\Lambda_t^{(m)}:=Z_{[mt]}^{(m)}, t\ge0$. According to Theorem 7.1 in Chapter 8 of \cite{Du96}, the sequence $(\Lambda^{(m)})_{m\ge1}$ converges weakly to the solution, $(S_{t})_{t\ge0}$, of the differential equation \begin{equation} \label{VectorODE} \begin{aligned} dS_{t}&=b(S_{t})dt,\\ S_0&=\Big(\begin{array}{c} a \\ 0\end{array}\Big), \end{aligned} \end{equation} where \begin{equation} S_{t}=\left(\begin{array}{c}S_{t}^{(1)} \\S_{t}^{(2)}\end{array}\right), \, b\Big(\begin{array}{c} x \\ y\end{array}\Big)=\Big(\begin{array}{c} kp(x, y) \\1\end{array}\Big). \end{equation} To apply the theorem, we need to check that the martingale problem MP$(b, \mathbb{O})$ has a unique solution. Here, $\mathbb{O}$ is the $2\times2$ zero matrix. See \cite{Du96}, \S 5.4, for details on the martingale problem. 
The problem indeed has a unique solution because the differential equation \eqref{VectorODE} has a unique solution, and by well-known results, this implies the claim for the martingale problem. It follows that the process $(A^{(m)}_{[mt]}/m)_{t\ge0}$ converges, as $m\to\infty$, to the solution of the differential equation \begin{align}X_{0}&=a,\\ dX_{t}&=k p(X_t, t)dt. \end{align} For both theorems, \ref{PolyaPathLinear} and \ref{QPolyaPathODE}, this ordinary differential equation is separable and its unique solution is the one stated. \subsection{Proof of Theorems \ref{PolyaDiffusionLimit}, \ref{QPolyaDiffusionLimit}} \begin{proof}[\textbf{Proof of Theorem \ref{PolyaDiffusionLimit}}] Call $\lambda:=a/(a+b)$. For each $m\in{\mathbb N}^+$, consider the discrete time-homogeneous Markov chain $$Z_{n}^{(m)}=\Big(\sqrt{m}\Big(\frac{A_n^{(m)}}{m}-a-\lambda k \frac{n}{m}\Big), \frac{n}{m}\Big), n\in {\mathbb N}.$$ From any given state $(x_1, x_2)$ of $Z_n^{(m)}$, the chain moves to either of $(x_1-k m^{-1/2}\lambda, x_2+m^{-1}), (x_1+k m^{-1/2}(1-\lambda), x_2+m^{-1})$ with corresponding probabilities \begin{align} \Pi^{(m)}\left[\left(x_{1},x_{2}\right),\left(x_1-\frac{k}{\sqrt{m}}\lambda, x_{2}+\frac{1}{m}\right)\right]&=\frac{B_n^{(m)}}{A_n^{(m)}+B_n^{(m)}},\\ \Pi^{(m)}\left[\left(x_{1},x_{2}\right),\left(x_1+\frac{k}{\sqrt{m}}(1-\lambda), x_{2}+\frac{1}{m}\right)\right]&=\frac{A_n^{(m)}}{A_n^{(m)}+B_n^{(m)}}, \end{align} with \begin{align} A_n^{(m)}&=ma+\lambda k m x_2+x_1\sqrt{m},\\ B_n^{(m)}&=A_0^{(m)}+B_0^{(m)}+k m x_2-A_n^{(m)}. \end{align} We used the fact that when the chain is at the point $(x_1, x_2)$, then the time $n$ is $n=m x_2$. We compute the mean and the covariance matrix for the one step change of $Z^{(m)}=(Z^{(m), 1}, Z^{(m), 2})$ conditioned on its current position. \begin{align} \textbf{E}\left[Z_{n+1}^{(m), 1}-Z_n^{(m), 1}|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{k}{\sqrt{m}} \frac{(1-\lambda) A_n^{(m)}-\lambda B_n^{(m)}}{A_n^{(m)}+B_n^{(m)}} \notag \\ &\sim \frac{1}{m}\frac{k \{x_{1}-(\theta_1+\theta_2)\lambda\}}{a+b+k x_{2}},\\ \textbf{E}\left[Z_{n+1}^{(m), 2}-Z_n^{(m), 2}|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{1}{m},\\ \textbf{E}\left[(Z_{n+1}^{(m), 1}-Z_n^{(m), 1})^2|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{k^2}{m}\left(\lambda^2 \frac{B_n^{(m)}}{A_n^{(m)}+B_n^{(m)}}+(1-\lambda)^2 \frac{A_n^{(m)}}{A_n^{(m)}+B_n^{(m)}}\right) \notag \\ &\sim \frac{1}{m} \frac{k^2 a b}{(a+b)^2},\\ \textbf{E}\left[(Z_{n+1}^{(m), 1}-Z_n^{(m), 1})(Z_{n+1}^{(m), 2}-Z_n^{(m), 2})|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&\sim\frac{1}{m^{2}}\frac{k\{x_{1}-(\theta_1+\theta_2)\lambda\}}{a+b+kx_{2}},\\ \textbf{E}\left[(Z_{n+1}^{(m), 2}-Z_n^{(m), 2})^2|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{1}{m^{2}}. \end{align} Then, for each $m\in{\mathbb N}^+$, we consider the process $\Lambda_t^{(m)}:=Z_{[mt]}^{(m)}, t\ge0$.
According to Theorem 7.1 in Chapter 8 of \cite{Du96}, the sequence $(\Lambda^{(m)})_{m\ge1}$ converges in distribution to the solution, $(S_{t})_{t\ge0}$, of the stochastic differential equation \begin{align} dS_{t}&=b(S_{t})dt+\sigma(S_{t})dB_{t},\\ S_0&=\left(\begin{array}{c}\theta_1 \\0\end{array}\right), \end{align} where $$\begin{array}{lll} &S_{t}=\left(\begin{array}{c}S_{t}^{(1)} \\S_{t}^{(2)}\end{array}\right), & B_{t}=\left(\begin{array}{c}B_{t}^{(1)} \\B_{t}^{(2)}\end{array}\right),\\ &b\Big(\begin{array}{c} x \\ y\end{array}\Big)=\Big(\begin{array}{c} \frac{k \{x-(\theta_1+\theta_2)\lambda\}}{a+b+k y} \\1\end{array}\Big), & \sigma\Big(\begin{array}{c} x \\ y\end{array}\Big)=\left(\begin{array}{ccc} k\frac{\sqrt{ab}}{a+b}&0\\ 0&0\\ \end{array}\right). \end{array}$$ $B$ is a two dimensional standard Brownian motion. Again, to apply that theorem, we need to check that the martingale problem MP$(b, \sigma)$ has a unique solution. This follows from the existence and uniqueness of strong solution for the above stochastic differential equation as the coefficients $b, \sigma$ are Lipschitz and grow at most linearly at infinity. Thus, the process $(Z^{(m), 1}_{[mt]})_{t\ge0}$ converges in distribution, as $m\to\infty$, to the solution of \begin{align} Y_0&=\theta_1, \\ dY_t&=\frac{k\{Y_t-(\theta_1+\theta_2)\lambda\}}{a+b+kt}\,dt+k\frac{\sqrt{ab}}{a+b}dB_t^{(1)}. \end{align} The same is true for $(C_t^{(m)})_{t\ge0}$ because $\sup_{t\ge0}|C_t^{(m)}-Z^{(m), 1}_{[mt]}|\le k/\sqrt{m}$. To solve the last SDE, we set $U_{t}:=\{Y_{t}-(\theta_1+\theta_2)\lambda\}/(a+b+kt)$. Ito's lemma gives that $$dU_{t}=k\frac{\sqrt{ab}}{(a+b)}\frac{1}{a+b+kt}dB_{t}^{(1)},$$ and since $U_{0}=(b\theta_1-a\theta_2)/(a+b)^2$, we get $$U_{t}=\frac{b\theta_1-a\theta_2}{(a+b)^2}+k\frac{\sqrt{ab}}{a+b} \int_{0}^{t}\frac{1}{a+b+ks}dB_{s}^{(1)}.$$ This gives \eqref{PolyaDiffLimit}. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{QPolyaDiffusionLimit}}] The proof is analogous to that of Theorem \ref{PolyaDiffusionLimit}, only the algebra is a little more involved. For each $m\in{\mathbb N}^+$, consider the discrete time-homogeneous Markov chain $$Z_{n}^{(m)}=\Big(\sqrt{m}\Big(\frac{A_n^{(m)}}{m}-X_{n/m}\Big), \frac{n}{m}\Big), n\in {\mathbb N}.$$ From any given state $(x_1, x_2)$ of $Z_n^{(m)}$, the chain moves to either of \begin{align}&(x_1, x_2)+(k m^{-1/2}+\sqrt{m}(X_{x_2}-X_{x_2+m^{-1}}) , m^{-1}), \\ & (x_1, x_2)+(\sqrt{m}(X_{x_2}-X_{x_2+m^{-1}}) , m^{-1}) \end{align} with corresponding probabilities $p(x_1, x_2, m), 1-p(x_1, x_2, m)$, where \begin{equation}p(x_1, x_2, m)= \frac{[A_n^{(m)}]_{q_m}}{[A_0^{(m)}+B_0^{(m)}+kmx_2]_{q_m}} \end{equation} and \begin{align} A_n^{(m)}&=mX_{x_2}+x_1\sqrt{m},\\ B_n^{(m)}&=A_0^{(m)}+B_0^{(m)}+k m x_2-A_n^{(m)}. \end{align} We used the fact that when the chain is at the point $(x_1, x_2)$, then the time $n$ is $n=m x_2$. For convenience, let $\Delta X_{x_2}=X_{x_2+m^{-1}}-X_{x_2}$. We compute the mean and the covariance matrix for the one step change of $Z^{(m)}=(Z^{(m), 1}, Z^{(m), 2})$ conditioned on its current position. Of the following relations, the first four are immediate, the fifth follows from part (a) of the claim that follows and the fact that $Z_{n+1}^{(m), 2}-Z_n^{(m), 2}=1/m$. 
\begin{align} \textbf{E}\left[Z_{n+1}^{(m), 1}-Z_n^{(m), 1}|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{k}{\sqrt{m}} p(x_1, x_2, m)-\sqrt{m} \Delta X_{x_2} \label{Asymptotics1} \\ \textbf{E}\left[(Z_{n+1}^{(m), 1}-Z_n^{(m), 1})^2|Z^{(m)}_n=(x_1, x_2)\right]&=\left(\frac{k^2}{m}-2k \Delta X_{x_2}\right)p(x_1, x_2, m)+m (\Delta X_{x_2})^2 \label{Asymptotics2} \\ \textbf{E}\left[Z_{n+1}^{(m), 2}-Z_n^{(m), 2}|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{1}{m},\\ \textbf{E}\left[(Z_{n+1}^{(m), 2}-Z_n^{(m), 2})^2|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&=\frac{1}{m^{2}},\\ \textbf{E}\left[(Z_{n+1}^{(m), 1}-Z_n^{(m), 1})(Z_{n+1}^{(m), 2}-Z_n^{(m), 2})|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]&= O(m^{-2}) \end{align} We examine now the asymptotics of the first two expectations. \textsc{Claim}: \begin{equation*} \begin{array}{lrl} (a) & \textbf{E}\left[Z_{n+1}^{(m), 1}-Z_n^{(m), 1}|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right]\sim& \dfrac{1}{m} \dfrac{k\log c}{c^{a+b+k x_2}-1} \left(c^{X_{x_2}}x_1-\dfrac{(c^{X_{x_2}}-1)c^{a+b+k x_2}}{c^{a+b+k x_2}-1} (\theta_1+\theta_2)\right)\\ & & +O(\dfrac{1}{m^{3/2}}) \\ \phantom{adgagf}\\ (b) & \textbf{E}\left[\{Z_{n+1}^{(m), 1}-Z_n^{(m), 1}\}^2|Z^{(m)}_n=\left(x_{1},x_{2}\right)\right] \sim & \dfrac{1}{m} k^2 g(x_2)\{1-g(x_2)\}+O(\dfrac{1}{m^{3/2}}) \end{array} \end{equation*} where $g(x_2):=\lim_{m\to\infty} p(x_1, x_2, m)=\frac{c^{X_{x_2}}-1}{c^{a+b+kx_2}-1}$. \textsc{Proof of the claim}. We examine the asymptotics of $p(x_1, x_2, m)$ and $\Delta X_{x_2}$. We have \begin{align} p(x_1, x_2, m)&=\frac{c^{X_{x_2}+\frac{1}{\sqrt{m}x_1}}-1}{c^{\frac{A_0^{(m)}+B_0^{(m)}}{m}+kx_2}-1}=\frac{c^{X_{x_2}+\frac{1}{\sqrt{m}x_1}}-1}{c^{a+b+kx_2+\frac{\theta_1+\theta_2}{\sqrt{m}}+O(\frac1m)}-1}\\ &=g(x_2)+\frac{\log c}{c^{a+b+k x_2}-1} \left(c^{X_{x_2}}x_1-\frac{(c^{X_{x_2}}-1) c^{a+b+k x_2}}{c^{a+b+kx_2}-1}(\theta_1+\theta_2) \right)\frac{1}{\sqrt{m}}+O(\frac1m). \end{align} The second equality follows from a Taylor development. Also \begin{equation}\Delta X_{x_2}=X'_{x_2}\frac{1}{m}+O(m^{-2})=k g(x_2)\frac{1}{m}+O(m^{-2}). \end{equation} For $X'$ we used the differential equation, \eqref{ODEqPolya}, that $X$ satisfies instead of the explicit expression for it. Substituting these expressions in \eqref{Asymptotics1}, \eqref{Asymptotics2}, we get the claim. Relation \eqref{qPolyaDetLimit} implies that $c^{X_{x_2}}=(c^{a+b}-1)/\{c^b-1+c^{-k x_2}(1-c^{-a})\}$, and this gives that the parenthesis following $\frac{1}{m}$ in equation (a) of the claim above equals \begin{equation} \frac{(c^{a+b}-1) x_1-c^b(c^a-1)(\theta_1+\theta_2)}{c^b-1+c^{-kx_2}(1-c^{-a})} \end{equation} and also that \begin{equation}g(x_2)\{1-g(x_2)\}=\frac{(c^a-1)(c^b-1) c^{a+kx_2}}{(c^{a+b+kx_2}-c^{a+kx_2}+c^a-1)^2}. 
\end{equation} It follows as before that the process $(Z^{(m)}_{[mt]})_{t\ge0}$ converges, as $m\to\infty$, to the solution of the stochastic differential equation \begin{align} dS_{t}&=b(S_{t})dt+\sigma(S_{t})dB_{t},\\ S_0&=\left(\begin{array}{c}\theta_1 \\0\end{array}\right), \end{align} where $$\begin{array}{lll} &S_{t}=\left(\begin{array}{c}S_{t}^{(1)} \\S_{t}^{(2)}\end{array}\right), \, B_{t}=\left(\begin{array}{c}B_{t}^{(1)} \\B_{t}^{(2)}\end{array}\right), \phantom{} \\ &b\Big(\begin{array}{c} x \\ y\end{array}\Big)=\Big(\begin{array}{c} \frac{k\log c}{c^{a+b+k y}-1} \bigg\{\frac{(c^{a+b}-1) x-c^b(c^a-1)(\theta_1+\theta_2)}{c^b-1+c^{-ky}(1-c^{-a})}\bigg\} \\1\end{array}\Big), & \phantom{} \\ & \sigma\Big(\begin{array}{c} x \\ y\end{array}\Big)=\left(\begin{array}{ccc} k\sqrt{(c^a-1)(c^b-1)} \frac{c^{(a+ky)/2}}{c^{a+b+ky}-c^{a+ky}+c^a-1}&0\\ 0&0\\ \end{array}\right). &\phantom{} \end{array}$$ $B$ is a two dimensional standard Brownian motion. Again, the martingale problem MP$(b, \sigma)$ has a unique solution due to the form of the functions $b, \sigma$. And with analogous arguments as in Theorem \ref{PolyaDiffusionLimit}, we get that the process $(\hat C_t^{(m)})_{t\ge0}$ converges to the unique solution of the stochastic differential equation \eqref{QPolyaNoiseSDE}. To solve that, we remark that a solution of a stochastic differential equation of the form \begin{equation} \label{LinearSDE} dY_t=(\alpha (t) Y_t+\beta(t))dt+\gamma(t) dW_t \end{equation} with $\alpha, \beta, \gamma:[0, \infty)\to{\mathbb R}$ continuous functions is given by \begin{equation} Y_t=e^{\int_0^t \alpha(s)\, ds}\left(Y_0+\int_0^t \beta(s) e^{-\int_0^s \alpha (r)\, dr}\, ds+\int_0^t \gamma(s) e^{-\int_0^s \alpha (r)\, dr}\, dW_s\right). \end{equation} [To discover the formula, we apply It\'o's rule to $Y_t \exp\{-\int_0^t \alpha(s)\, ds\}$ and use \eqref{LinearSDE}.] Applying this formula for the values of $\alpha, \beta, \gamma$ dictated by \eqref{QPolyaNoiseSDE} we arrive at \eqref{QPolyaDiffLimit}. \end{proof} \section{Proofs for the $q$-P\'olya urn with many colors} \label{ProofMoreColors} \begin{proof}[\textbf{Proof of Theorem \ref{PMFOfWeakColors}}] First, the equality of the expressions in \eqref{probmassforvectorAig}, \eqref{probmassforvector} follows from the definition of the $q$-multinomial coefficient. We will prove \eqref{probmassforvectorAig} by induction on $l$. When $l=2$, \eqref{probmassforvectorAig} holds because of \eqref{QPolyaWhiteDistr}. In that relation, we have $x_1=x, x_2=n-x$. Assuming that \eqref{probmassforvectorAig} holds for $l\ge2$ we will prove the case $l+1$. The probability $\mbox{\bf P}\left(X_{n,2}=x_{2},\ldots,X_{n,l+1}=x_{l+1}\right)$ equals \begin{align} &\mbox{\bf P}\left(X_{n,3}=x_3,\ldots,X_{n,l+1}=x_{l+1}\right)\mbox{\bf P}(X_{n,2}=x_2 \mid X_{n,3}=x_3,\ldots,X_{n,l+1}=x_{l+1}) \label{PTimesConditional}\\ &=q^{\sum_{i=3}^{l+1}x_i\sum_{j=1}^{i-1}\left(a_{j}+kx_{j}\right)}\frac{ {-\frac{a_1+a_2}{k} \brack x_1+x_2}_{q^{-k}} \prod_{i=3}^{l+1}{-\frac{a_{i}}{k} \brack x_{i}}_{q^{-k}}}{{-\frac{a_{1}+\ldots a_{l+1}}{k} \brack n}_{q^{-k}}} \cdot q^{ x_2\left(a_1+k x_1\right)}\frac{ {-\frac{a_1}{k} \brack x_1}_{q^{-k}} {-\frac{a_2}{k} \brack x_2}_{q^{-k}} }{{-\frac{a_1+a_2}{k} \brack x_1+x_2}_{q^{-k}}} \\ &=q^{\sum_{i=2}^{l+1}x_{i}\sum_{j=1}^{i-1}\left(a_{j}+k x_{j}\right)}\frac{\prod_{i=1}^{l+1}{-\frac{a_{i}}{k} \brack x_{i}}_{q^{-k}}}{{-\frac{a_{1}+\ldots a_{l+1}}{k} \brack n}_{q^{-k}}}. 
\end{align} This finishes the induction provided that we can justify these two equalities. The second is obvious, so we turn to the first. The first probability in \eqref{PTimesConditional} is specified by the inductive hypothesis. That is, given the description of the experiment, in computing this probability it is as if we merge colors 1 and 2 into one color which is placed in the line before the remaining $l-1$ colors. This color has initially $a_1+a_2$ balls and we require that in the first $n$ drawings we choose it $x_1+x_2$ times. The second probability in \eqref{PTimesConditional} is specified by the $l=2$ case of \eqref{probmassforvectorAig}, which we know. More specifically, since the number of drawings from colors $3, 4, \ldots, l+1$ is given, it is as if we have an urn with just two colors $1, 2$ that have initially $a_1$ and $a_2$ balls respectively. We do $x_1+x_2$ drawings with the usual rules for a $q$-P\'olya urn, placing in a line all balls of color 1 before all balls of color 2, and we want to pick $x_1$ times color 1 and $x_2$ times color 2. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{LimitOfWeakColors}}] The components of $(X_{n, 2}, X_{n, 3}, \ldots, X_{n, l})$ are increasing in $n$, and from Theorem \ref{LimitOfWeakColor} we have that each of them has finite limit (we treat all colors $2, \ldots, l$ as one color). Thus the convergence of the vector with probability one to a random vector with values is ${\mathbb N}^{l-1}$ follows. In particular, we also have convergence in distribution, and it remains to compute the distribution of the limit. Let $x_1:=n-(x_2+\cdots+x_l)$. Then the probability in \eqref{probmassforvectorAig} equals \begin{align} \mbox{\bf P}\left(X_{n,2}=x_{2},\ldots,X_{n,l}=x_{l}\right)&=q^{-\sum_{1\le i<j \le l } a_j x_i }\frac{\prod_{i=1}^{l}{\frac{a_{i}}{k}+x_i-1 \brack x_{i}}_{q^{-k}}}{{\frac{\sum_{i=1}^{l}a_{i}}{k}+n-1 \brack n}_{q^{-k}}}\\&=q^{\sum_{1\le j< i \le l}x_{i} a_j}\frac{\prod_{i=1}^{l}{\frac{a_{i}}{k}+x_i-1 \brack x_{i}}_{q^{k}}}{{n+\frac{\sum_{i=1}^{l}a_{i}}{k}-1 \brack n}_{q^{k}}} \\ &=q^{\sum_{i=2}^{l}\left(x_{i}\sum_{j=1}^{i-1}a_{j}\right)}\left\{\prod_{i=2}^{l}{\frac{a_{i}}{k}+x_i-1 \brack x_{i} }_{q^{k}}\right\}\frac{{x_1+\frac{a_{1}}{k}-1 \brack x_{1}}_{q^{k}}}{{n+\frac{\sum_{i=1}^{l}a_{i}}{k}-1 \brack n}_{q^{k}}}. \label{FDDManyColors} \end{align} In the first equality, we used \eqref{NegToPos} while in the second we used \eqref{qMinusOneToq}. When we take $n\to\infty$ in \eqref{FDDManyColors}, the only terms involving $n$ are those of the last fraction, and \eqref{QBinomialAsymptotics} determines their limit. Thus, the limit of \eqref{FDDManyColors} is found to be the function $f(x_{2},\ldots,x_{l})$ in the statement of the theorem. 
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{QPolyaPathODEWColors}}] For each $m\in {\mathbb N}^+$, we consider the discrete time-homogeneous Markov chain $$Z_n^{(m)}:=\left(\frac{n}{m}, \frac{A_{n,2}^{(m)}}{m}, \frac{A_{n,3}^{(m)}}{m}, \ldots, \frac{A_{n,l}^{(m)}}{m}\right), n\in {\mathbb N}.$$ From any given state $(t, x):=(t,x_2, x_3, \ldots, x_l)$ in which $Z^{(m)}$ finds itself, it moves to one of \begin{align*} &\left( t+\frac{1}{m}, x_2, \ldots, x_i+\frac{1}{m}, \ldots, x_l\right), i=2, \ldots, l,\\ &\left( t+\frac{1}{m}, x_2, \ldots, x_i, \ldots, x_l\right) \end{align*} with corresponding probabilities \begin{align} p_i(x_2, \ldots, x_l, t, m)&=q^{m s_{i-1}(t)}\frac{[m x_i]_q}{[m s_l(t)]_q}, i=2, \ldots, l,\\ p_1(x_2, \ldots, x_l, t, m)&=\frac{ [m x_1(t)]_q}{[m s_l(t)]_q}, \end{align} where $s_i(t)=x_1(t)+\sum_{1<j\le i} x_j$ for $i\in \{1, 2, \ldots, l\}$ and $x_1(t):=m^{-1}\sum_{j=1}^l A_{0, j}^{(m)}+kt-\sum_{2\le j \le l} x_j$. These follow from \eqref{problcolors} once we count the number of balls of each color present at the state $(t, x)$. To do this, we note that $Z_n^{(m)}=(t, x)$ implies that $n=mt$ drawings have taken place so far, the total number of balls is $A^{(m)}_{0, 1}+\cdots+A_{0, l}^{(m)}+k mt$, and the number of balls of color $i$, for $2\le i \le l$, is $m x_i$. Thus, the number of balls of color 1 is $A^{(m)}_{0, 1}+\cdots+A_{0, l}^{(m)}+k mt-m \sum_{2\le j\le l} x_j=mx_1(t)$. The required relations follow. Let $x_1:=\lim_{m\to\infty} x_1(t)=\sigma_l+kt-\sum_{2\le j\le l} x_j$ and $s_i:=\lim_{m\to\infty} s_i(t)=\sum_{1\le j \le i}x_j$ for all $i\in \{1, 2, \ldots, l\}$. Then, since $q=c^{1/m}$, for fixed $(t, x_2, \ldots, x_l)\in [0, \infty)^l$ with $(x_2, \ldots, x_l)\ne 0$, we have \begin{equation} \lim_{m\to\infty}p_i(x_2, \ldots, x_l, t, m)=c^{s_{i-1}} \frac{ [x_i]_c}{[s_l]_c} \end{equation} for all $i=2, \ldots, l$. We also note the following. \begin{align} Z_{{n+1}, 1}^{(m)}-Z_{n, 1}^{(m)}&=\frac{1}{m},\\ \textbf{E}\left[Z_{{n+1}, i}^{(m)}-Z_{n, i}^{(m)}|Z_n^{(m)}=\left(t, x_{2},\ldots,x_{l}\right)\right]&=\frac{k}{m} p_i(x_2, \ldots, x_l, t, m),\\ \textbf{E}\left[(Z_{{n+1}, i}^{(m)}-Z_{n, i}^{(m)})^2|Z_n^{(m)}=\left(t, x_{2},\ldots,x_{l}\right)\right]&=\frac{k^2}{m^2} p_i(x_2, \ldots, x_l, t, m),\\ \textbf{E}\left[(Z_{{n+1}, i}^{(m)}-Z_{n, i}^{(m)})(Z_{{n+1}, j}^{(m)}-Z_{n, j}^{(m)})|Z_n^{(m)}=\left(t, x_{2},\ldots,x_{l}\right)\right]&=0 \end{align} for $i,j=2,3,\ldots,l$ with $i\neq j$. Therefore, with arguments similar to those in the proof of Theorem \ref{PolyaPathLinear}, as $m\rightarrow+\infty$, $(Z_{[mt]}^{(m)})_{t\ge0}$ converges in distribution to $Y$, the solution of the ordinary differential equation \begin{equation} \label{ODEColors} \begin{aligned} dY_{t}&=b(Y_{t})dt,\\ Y_0&=(0, a_2, \ldots, a_l), \end{aligned} \end{equation} where $b(t, x_2, \ldots, x_l)=\left(1, b^{(2)}(t, x),b^{(3)}(t, x),\ldots,b^{(l)}(t, x)\right)$ with $$b^{(i)}(t, x)=k c^{s_{i-1}} \frac{ [x_i]_c}{[s_l]_c}$$ for $i=2,3,\ldots,l.$ Note that $s_l=\sigma_l+kt$ does not depend on $x$.
Since $A^{(m)}_{[mt], 1}+A^{(m)}_{[mt], 2}+\cdots+A^{(m)}_{[mt], l}=kmt+A^{(m)}_{0, 1}+A^{(m)}_{0, 2}+\cdots+A^{(m)}_{0, l}$, we get that the process $(A^{(m)}_{[mt], 1}/m, A^{(m)}_{[mt], 2}/m, \ldots, A^{(m)}_{[mt], l}/m)_{t\ge0}$ converges in distribution to a process $(X_{t, 1}, X_{t, 2}, \ldots, X_{t, l})_{t\ge0}$ such that $X_{t, 1}+\cdots+X_{t, l}=a_1+a_2+\cdots+a_l+kt$, while the $X_{t, i}, i=2, \ldots, l$, satisfy the system \begin{align} X_{t, i}'&=kc^{\sigma_l+k t-\sum_{j=i}^l X_{t,j}}\frac{1-c^{X_{t,i}}}{1-c^{\sigma_l+kt}} \qquad \text{ for all } t>0,\\ X_{0, i}&=a_i, \end{align} with $i=2, 3,\ldots,l$. Letting $Z_{r, i}=c^{X_{\frac{1}{k\log c}\log r, i}}$ for all $r\in(0, 1]$ and $i\in\{1, 2, \ldots, l\}$, we have for the $Z_{r, i}, i\in\{2, 3, \ldots, l\}$ the system \begin{align} \frac{Z'_{r, i}}{1-Z_{r, i}}&=\frac{c^{\sigma_l}}{1-c^{\sigma_l} r}\frac{1}{\prod_{i<j\le l} Z_{r, j}},\\ Z_{1, i}&=c^{a_i}. \end{align} In the case $i=l$, the empty product equals 1. It is now easy to prove by induction (starting from $i=l$ and going down to $i=2$) that \begin{equation} \label{ZSystemSol} Z_{r, i}=\frac{c^{\sigma_l-\sigma_{i-1}}(1-c^{\sigma_l} r)-c^{\sigma_l}(1-r)}{c^{\sigma_l-\sigma_i} (1-c^{\sigma_l}r)-c^{\sigma_l}(1-r)} \end{equation} for all $r\in(0, 1]$. Since $Z_{r, 1}Z_{r, 2}\cdots Z_{r, l}=c^{\sigma_l} r$, we can check that \eqref{ZSystemSol} holds for $i=1$ too. The fraction in \eqref{ZSystemSol} equals \begin{equation}c^{a_i} \frac{(1-c^{\sigma_l} r)-c^{\sigma_{i-1}}(1-r)}{(1-c^{\sigma_l}r)-c^{\sigma_i}(1-r)}. \end{equation} Recalling that $X_{t, i}=(\log c)^{-1} \log Z_{c^{kt}, i}$, we get \eqref{SystemSolution} for all $i\in\{1, 2, \ldots, l\}$. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{QPolyaPathODEWColors2}}] This is proved in the same way as Theorem \ref{QPolyaPathODEWColors}. We keep the same notation as there. The only difference now is that $\lim_{m\to\infty} p_i(t, x_2, \ldots, x_l, m)=x_i/s_l$. As a consequence, the system of ordinary differential equations for the limit process $Y_t:=(t, X_{t, 2}, \ldots, X_{t, l})$ is \eqref{ODEColors} but with $$b^{(i)}(t, x)=\frac{kx_i}{s_l}.$$ Recall that $s_l=\sigma_l+kt$. Thus, for $i=2, 3, \ldots, l$, the process $X_{t, i}$ satisfies $X_{t, i}'=k X_{t, i}/(\sigma_l+kt), X_{0, i}=a_i$, which immediately gives the last $l-1$ coordinates of \eqref{SystemSolutionReg2}. The formula for the first coordinate follows from $X_{t, 1}+X_{t, 2}+\cdots+X_{t, l}=kt+\sigma_l$. \end{proof} \end{document}
\begin{document} \large \vspace*{.1in} \begin{center} \rm\Large Position-momentum local realism violation of the Hardy type \end{center} \vspace*{1ex} \begin{center} {Bernard Yurke$^1$, Mark Hillery$^2$, and David Stoler$^1$} \end{center} \begin{center} {\large \em $^1$Bell Laboratories, Lucent Technologies, Murray Hill, NJ 07974} \end{center} \begin{center} {\large \em $^2$ Hunter College of the City University of New York, 695 Park Ave. New York, NY 10021} \end{center} \begin{center} \today \end{center} \begin{flushleft} We show that it is, in principle, possible to perform local realism violating experiments of the Hardy type in which only position and momentum measurements are made on two particles emanating from a common source. In the optical domain, homodyne detection of the in-phase and out-of-phase amplitude components of an electromagnetic field is analogous to position and momentum measurement. Hence, local realism violations of the Hardy type are possible in optical systems employing only homodyne detection. \end{flushleft} \vspace*{2ex} PACS: {03.65.Bz} \vspace*{1ex} file:{\small stoler/Local-realism/xpprl971106.tex} As an example to support their contention that quantum mechanics is incomplete Einstein, Podolsky, and Rosen (EPR) \cite{einstein35} exhibited a quantum mechanical wave function, describing two particles emitted from a common source, in which the positions and the momenta of the two particles were strongly correlated. This wave function described the situation in which the measurement of the position of one of the particles would allow one to predict with complete certainty the position of the other particle and the measurement of the momentum of one of the particles would allow one to predict with complete certainty the momentum of the other particle. Because of these strong correlations even when the particles were well-separated it was argued that each of the particles must have a definite position and a definite momentum even though a quantum mechanical wave function does not simultaneously ascribe a definite position and a definite momentum to a particle. Therefore, it was argued that quantum mechanics is incomplete. It was hoped that in the future a complete theory could be devised in which a definite position and definite momentum would be ascribed to each particle. In 1992 the EPR Gedanken experiment was actually carried out \cite{ou92} as a quantum optics experiment in which electromagnetic field analogues of position and momentum were measured on correlated photon states generated by parametric down-conversion \cite{reid88,reid89}. The analogues of the position and momentum were the two quadrature amplitudes of the electromagnetic field measured via homodyne detectors \cite{yuen80,schumaker84,yurke87}. A quantum mechanical state having the properties of the state employed by EPR had been realized. However, since the work of Bell \cite{bell65} it has been known that a complete theory, of the type EPR hoped for, capable of making the same predictions as quantum mechanics, does not exist \cite{clauser78}. A variety of experiments, referred to as local realism violating experiments, have been proposed and performed, demonstrating that quantum mechanics is inherently at odds with classical notions about how effects propagate. Most striking among the proposals are the ``one event'' local realism violating experiments devised by Greenberger Horne and Zeilinger (GHZ) \cite{greenberger89,mermin90,yurke92} and by Hardy \cite{hardy92,clifton92,yurke93}. 
The Bell, GHZ, and Hardy experiments that have been proposed generally measure spin components or count particles, i.e., they employ observables that have a discrete spectrum. There are, however, some examples in which continuous observables or a mixture of discrete and continuous observables have been employed \cite{bell87,cetto85,yurke97,gilchrist98}. In fact, Bell \cite{bell87} showed that position and momentum measurements on a pair of particles in a state for which the Wigner function has negative regions can give rise to local realism violating effects of the Clauser, Holt, Horne, and Shimony type \cite{clauser69}. Here we show that local realism violating effects of the Hardy type can be obtained through position and momentum measurements on a pair of particles prepared in the appropriate state. Given that homodyne detection measurements of the two quadrature amplitude components of an electromagnetic field provide an optical analogue to position and momentum measurements, an optical experiment exhibiting local realism violations of the Hardy type can be devised, provided the appropriate entangled state can be generated. A local realism constraint on the positions and momenta measured for a pair of particles emitted from a common source can be arrived at by regarding the detectors as responding to messages emitted by the source \cite{mermin90,yurke97}. The source does not know, ahead of time, whether a position or a momentum measurement will be performed by a given detector. Hence, the instruction set emitted by the source must tell the detectors what to do in either case. The instruction sets are conveniently labeled via the array $(\alpha_{x1}, \alpha_{x2}; \alpha_{p1}, \alpha_{p2})$ where $\alpha_{xi}$ and $\alpha_{pi}$ are members of the set $\{+,-\}$. Here $\alpha_{xi}$ denotes whether detector $i$, measuring the position ${x_i}$ of particle $i$, will report the position to be positive ($\alpha_{xi} = +$) or negative ($\alpha_{xi} = -$). Similarly, $\alpha_{pi}$ denotes whether detector $i$, measuring the momentum ${p_i}$ of particle $i$, will report the momentum to be positive ($\alpha_{pi} = +$) or negative ($\alpha_{pi} = -$). The probability that a message of the form $(\alpha_{x1}, \alpha_{x2}; \alpha_{p1}, \alpha_{p2})$ is emitted will be denoted by $P(\alpha_{x1}, \alpha_{x2}; \alpha_{p1}, \alpha_{p2})$. Let $P_{\beta_1 \beta_2}(\alpha_{\beta_1 1}, \alpha_{\beta_2 2})$, where $\beta_i \in \{x,p\}$, be the probability that detector $1$ measuring $\beta_1$ reports $\alpha_{\beta_1 1}$ while detector $2$ measuring $\beta_2$ reports $\alpha_{\beta_2 2}$. For example, $P_{xp}(+,-)$ denotes the joint probability that detector 1 measuring position will report a positive position while detector 2 measuring momentum will report a negative momentum. In terms of the message probabilities, $P_{pp}(-,-)$ is given by \begin{eqnarray} P_{pp}(-,-) & = & P(+,+;-,-) + P(+,-;-,-) \nonumber \\ & & + \ P(-,+;-,-) + P(-,-;-,-) \ . \label{eq:2.1} \end{eqnarray} The joint probabilities provide the following bounds on the message probabilities: \begin{eqnarray} P(+,+;-,-) & \leq & P_{xx}(+,+) \label{eq:2.2} \ ,\\ P(+,-;-,-) & \leq & P_{px}(-,-) \label{eq:2.3} \ , \\ P(-,+;-,-) & \leq & P_{xp}(-,-) \label{eq:2.4} \ , \end{eqnarray} and \begin{equation} P(-,-;-,-) \leq \min\{P_{xp}(-,-),P_{px}(-,-)\} \ . 
\label{eq:2.5} \end{equation} Applying these inequalities to Eq.~(\ref{eq:2.1}) yields the following local realism constraint on the joint probabilities: \begin{eqnarray} P_{pp}(-,-) & \leq & P_{xx}(+,+) + P_{px}(-,-) + P_{xp}(-,-) \nonumber \\ & & + \ \min\{P_{xp}(-,-),P_{px}(-,-)\} \ . \label{eq:2.6} \end{eqnarray} If it is rigorously known that the probabilities on the right-hand side of the inequality (\ref{eq:2.6}) are all zero, \begin{equation} P_{xx}(+,+) = P_{xp}(-,-) = P_{px}(-,-) = 0 \ , \label{eq:3.1} \end{equation} then it follows, according to local realism, that $P_{pp}(-,-)$ is rigorously zero. Thus, the appearance of a single event in which both particles have negative momentum would violate local realism. This situation, referred to as ``one event'' local realism violation, of course, cannot be achieved in practice because with a finite amount of data or in the presence of spurious events it is impossible to rigorously demonstrate, experimentally, that Eq.~(\ref{eq:3.1}) holds for a given physical system. Nevertheless, if the spurious event rate is sufficiently small, it is possible to demonstrate to a high degree of certainty with a finite amount of data that the inequality Eq.~(\ref{eq:2.6}) is violated. It is shown here how a wave function can be constructed that satisfies Eq.~(\ref{eq:3.1}) and for which the joint probability on the left-hand side of (\ref{eq:2.6}) is nonzero, \begin{equation} P_{pp}(-,-) > 0 \ . \label{eq:3.2} \end{equation} Let the wave function be denoted by $\psi_{\beta_1\beta_2}$, depending on the representation. For example, $\psi_{xp}$ is the wave function in the representation in which the position coordinate of particle 1 and the momentum coordinate of particle 2 are employed. Eq.~(\ref{eq:3.1}) imposes the following conditions on the wave function: \begin{eqnarray} \psi_{xx}(x_1,x_2) & = & 0 \ {\rm when} \ x_1 \geq 0 \ {\rm and} \ x_2 \geq 0 \ , \label{eq:3.3} \\ \psi_{px}(p_1,x_2) & = & 0 \ {\rm when} \ p_1 \leq 0 \ {\rm and} \ x_2 \leq 0 \ , \label{eq:3.4} \end{eqnarray} and \begin{equation} \psi_{xp}(x_1,p_2) = 0 \ {\rm when} \ x_1 \leq 0 \ {\rm and} \ p_2 \leq 0 \ . \label{eq:3.5} \end{equation} A wave function satisfying these conditions can be constructed as follows: Let $g(p_1,p_2)$ be a function that is nonzero only when $p_1$ and $p_2$ are positive, i.e., \begin{equation} g(p_1,p_2) = 0 \ {\rm if} \ p_1 \leq 0 \ {\rm or} \ p_2 \leq 0 \ . \label{eq:3.6} \end{equation} Its Fourier transform, denoted by $f(x_1,x_2)$, is \begin{equation} f(x_1,x_2) = \frac{1}{2\pi} \int_{-\infty}^\infty \int_{-\infty}^\infty e^{i(p_1x_1 + p_2x_2)} g(p_1,p_2) \ dp_1 dp_2 \ . \label{eq:3.7} \end{equation} The wave function $\psi_{xx}$ is then given by \begin{equation} \psi_{xx}(x_1,x_2) = N[1 - \theta(x_1)\theta(x_2)] f(x_1,x_2) \label{eq:3.8} \end{equation} where $\theta(x)$ is the Heaviside function defined by \begin{equation} \theta(x) = \left\{ \begin{array}{cc} 1 & \ {\rm if} \ x \geq 0 \\ 0 & \ {\rm if} \ x < 0 \end{array} \right. \label{eq:3.9} \end{equation} and $N$ is the normalization coefficient chosen so that $\psi_{xx}(x_1,x_2)$ is normalized: \begin{equation} \int_{-\infty}^\infty \int_{-\infty}^\infty |\psi_{xx}(x_1,x_2)|^2 \ dx_1 dx_2 = 1 \ . \label{eq:3.9b} \end{equation} Eq.~(\ref{eq:3.3}) is enforced by the factor in square brackets appearing in Eq.~(\ref{eq:3.8}). That Eq.~(\ref{eq:3.4}) is also satisfied is now demonstrated. 
$\psi_{px}$ is a Fourier transform of $\psi_{xx}$: \begin{equation} \psi_{px}(p_1,x_2) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{-ip_1x_1} \psi_{xx}(x_1,x_2) \ dx_1 \ . \label{eq:3.10} \end{equation} But, from Eq.~(\ref{eq:3.8}) this reduces to \begin{equation} \psi_{px}(p_1,x_2) = \frac{N}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{-ip_1x_1} f(x_1,x_2) \ dx_1 \label{eq:3.11} \end{equation} when $x_2 \leq 0$. Substituting Eq.~(\ref{eq:3.7}) into this and carrying out the $x_1$ integration followed by a momentum integration yields \begin{equation} \psi_{px}(p_1,x_2) = \frac{N}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{ip_2 x_2} g(p_1,p_2) dp_2 \label{eq:3.12} \end{equation} when $ x_2 \leq 0 $. It is evident from Eq.~(\ref{eq:3.6}) that the right-hand side of Eq.~(\ref{eq:3.12}) is zero when $ p_1 \leq 0 $, that is, Eq.~(\ref{eq:3.4}) is satisfied. A similar argument shows that the wave function of Eq.~(\ref{eq:3.8}) also satisfies Eq.~(\ref{eq:3.5}). Transforming Eq.~(\ref{eq:3.8}) into the momentum representation for both particles yields, keeping Eq.~(\ref{eq:3.6}) in mind, \begin{eqnarray} \psi_{pp}(p_1,p_2) & = & - \frac{N}{2\pi} \int_0^\infty \int_0^\infty e^{-i(p_1x_1 + p_2 x_2)}f(x_1,x_2) \ dx_1 dx_2 \nonumber \\ & & \ \ \ \ \ {\rm when} \ p_1 \leq 0 \ {\rm and} \ p_2 \leq 0 \ . \label{eq:3.13} \end{eqnarray} $\psi_{pp}(p_1,p_2)$ evaluated over this range is what is needed to compute $P_{pp}(-,-)$: \begin{equation} P_{pp}(-,-) = \int_{-\infty}^0 \int_{-\infty}^0 | \psi_{pp}(p_1,p_2)|^2 dp_1 dp_2 \ . \label{eq:3.14} \end{equation} If $\psi_{pp}(p_1,p_2) \neq 0$ over some region in the domain ($p_1 < 0 \ {\rm and} \ p_2 < 0$), then a wave function has been constructed that violates the local realism condition Eq.~(\ref{eq:2.6}). We now specialize to the case when $g(p_1,p_2)$ factorizes as follows: \begin{equation} g(p_1,p_2) = g(p_1)g(p_2) \label{eq:4.1} \end{equation} where \begin{equation} g(p) = 0 \ {\rm for} \ p \leq 0 \ . \label{eq:4.2} \end{equation} Then $f(x_1,x_2)$ factorizes, \begin{equation} f(x_1,x_2) = f(x_1)f(x_2) \ , \label{eq:4.3} \end{equation} where \begin{equation} f(x) = \frac{1}{\sqrt{2 \pi}} \int_0^\infty e^{ipx} g(p) dp \ . \label{eq:4.4} \end{equation} Also, Eq.~(\ref{eq:3.13}) reduces to \begin{equation} \psi_{pp}(p_1,p_2) = -N\psi_p(p_1)\psi_p(p_2) \ {\rm when} \ p_1 \leq 0 \ {\rm and} \ p_2 \leq 0 \label{eq:4.5} \end{equation} where \begin{equation} \psi_p(p) = \frac{1}{\sqrt{2\pi}} \int_0^\infty e^{-ipx} f(x) dx \ {\rm when} \ p \leq 0 \ . \label{eq:4.6} \end{equation} Substituting Eq.~(\ref{eq:4.5}) into Eq.~(\ref{eq:3.14}) yields \begin{equation} P_{pp}(-,-) = N^2\left[\int_{-\infty}^0 | \psi_p(p) |^2 dp \right]^2 \ . \label{eq:4.7} \end{equation} As a specific example, let $g(p)$ be given by \begin{equation} g(p) = \left\{ \begin{array}{cc} \sqrt{2\lambda} e^{-\lambda p} & \ {\rm for} \ p > 0 \\ 0 & \ {\rm for } \ p \leq 0 \end{array} \right. \ . \label{eq:6.1} \end{equation} From this, using Eq.~(\ref{eq:4.4}), one obtains \begin{equation} f(x) = i \sqrt{\frac{\lambda}{\pi}} \frac{1}{x + i\lambda} \ . \label{eq:6.2} \end{equation} Substituting this into Eq.~(\ref{eq:3.8}), using Eq.~(\ref{eq:4.3}), and computing the norm, one obtains \begin{equation} N = \frac{2}{\sqrt{3}} \ . 
\label{eq:6.3} \end{equation} Substituting Eq.~(\ref{eq:6.2}) into Eq.~(\ref{eq:4.6}) yields, for $p \leq 0$, \begin{equation} \psi_p(p) = \frac{i}{\pi}\sqrt{\frac{\lambda}{2}} \left[ \int_0^\infty \frac{\cos(|p|x)}{x + i\lambda} dx + i\int_0^\infty \frac{\sin(|p|x)}{x + i\lambda} dx \right] \ . \label{eq:6.5} \end{equation} By breaking the right-hand side of this equation into real and imaginary parts and by making use of formulas given by Gradshteyn and Ryzhik \cite{gradshteyn} (section 3.723, Eqs. 1 through 4), this equation simplifies to \begin{equation} \psi_p(p) = - \frac{i}{\pi}\sqrt{\frac{\lambda}{2}} e^{\lambda|p|}{\rm Ei}(-\lambda|p|) \ . \label{eq:6.7} \end{equation} From this one finds \begin{equation} \int_{-\infty}^0 |\psi_p(p)|^2 dp = \frac{1}{2\pi^2} \int_0^\infty e^{2x}{\rm Ei}^2(-x)dx \ . \label{eq:6.8} \end{equation} By performing a numerical integration of this equation we have found that \begin{equation} \int_{-\infty}^0 |\psi_p(p)|^2 dp = \frac{1}{8} \label{eq:6.13} \end{equation} to one part in $10^8$. From Eqs.~(\ref{eq:4.7}) and (\ref{eq:6.3}) one thus obtains \begin{equation} P_{pp}(-,-) = \frac{1}{48} \ . \label{eq:6.14} \end{equation} Thus, for a system possessing the wave function described here, a local realism violating event in which the momenta of both particles are negative occurs at a rate of one event in 48 events. It has been shown here that local realism violating experiments of the Hardy type are, in principle, possible in which only position and momentum measurements are performed. A means of experimentally generating the appropriate states has not been offered, so it remains to be seen whether such states can be realized in practice. In this regard we derive hope from the fact that the experiment proposed by EPR became realizable 57 years later as an optical analogue (through the development of parametric down-converters and homodyne detectors) and we take heart in the fact that state synthesis is an active topic of research \cite{vogel93}. \begin{references} \bibitem{einstein35} A.~Einstein, B.~Podolsky, and N.~Rosen, Phys. Rev. {\bf 47}, 777 (1935). \bibitem{ou92} Z.~Y.~Ou, S.~F.~Pereira, H.~J.~Kimble, and K.~C.~Peng, Phys. Rev. Lett. {\bf 68}, 3663 (1992). \bibitem{reid88} M.~D.~Reid and P.~D.~Drummond, Phys. Rev. Lett. {\bf 60}, 2731 (1988); Phys. Rev. A {\bf 41}, 3930 (1990). \bibitem{reid89} M.~D.~Reid, Phys. Rev. A {\bf 40}, 913 (1989). \bibitem{yuen80} H.~P.~Yuen and J.~H.~Shapiro, IEEE Trans. Inf. Theory {\bf 26}, 78 (1980). H.~P.~Yuen and V.~W.~S.~Chan, Opt. Lett. {\bf 8}, 177 (1983). \bibitem{schumaker84} B.~L.~Schumaker, Opt. Lett. {\bf 9}, 189 (1984). \bibitem{yurke87} B.~Yurke and D.~Stoler, Phys. Rev. A {\bf 36}, 1955 (1987). \bibitem{bell65} J.~S.~Bell, Physics (Long Island City, N.Y.) {\bf 1}, 195 (1965). \bibitem{clauser78} J.~F.~Clauser and A.~Schimony, Rep. Prog. Phys. {\bf 41}, 1881 (1978). \bibitem{greenberger89} D.~M.~Greenberger, M.~A.~Horne, and A.~Zeilinger, in {\large \it Bell's Theorem, Quantum Theory and Conceptions of the Universe}, edited by M.~Kafatos (Kluwer Academic, Dordrecht, 1989) pp. 69-72. D.~M.~Greenberger, M.~A.~Horne, A.~Shimony, and A.~Zeilinger, Am. J. Phys. {\bf 58}, 731 (1990). \bibitem{mermin90} N.~D.~Mermin, Am. J. Phys. {\bf 58}, 731 (1990). \bibitem{yurke92} B.~Yurke and D.~Stoler, Phys. Rev. Lett. {\bf 68}, 1251 (1992). \bibitem{hardy92} L.~Hardy, Phys. Rev. Lett. {\bf 68}, 2981 (1992); Phys. Lett. {\bf 167A}, 17 (1992). \bibitem{clifton92} R.~Clifton and P.~Niemann, Phys. Lett. 
A {\bf 162}, 15 (1992). C.~Pagonis and R.~Clifton, Phys. Lett. A {\bf 168}, 100 (1992). \bibitem{yurke93} B. Yurke and D. Stoler, Phys. Rev. Lett. {\bf 47}, 1704 (1993). \bibitem{bell87} J.~S.~Bell, {\large \it Speakable and Unspeakable in Quantum Mechanics} (Cambridge Univ. Press, Cambridge, 1987) page 196. \bibitem{cetto85} A.~M.~Cetto, L.~De~La~Pena, and E.~Santos, Phys. Lett. {\bf 113A}, 304 (1985). \bibitem{yurke97} B.~Yurke and D.~Stoler, Phys. Rev. Lett. {\bf 79}, 4941 (1997). \bibitem{gilchrist98} A.~Gilchrist, P.~Deuar, and M.~D.~Reid, Phys. Rev. Lett. {\bf 80}, 3169 (1998). \bibitem{clauser69} J.~F.~Clauser, R.~A.~Holt, M.~A.~Horne, and A.~Shimony, Phys. Rev. Lett. {\bf 23}, 880 (1969). \bibitem{gradshteyn} I.~S.~Gradshteyn and I.~M.~Ryzhik, {\it Table of Integrals, Series, and Products} (Academic Press, New York, 1980) \bibitem{vogel93} K.~Vogel, V.~M.~Akulin, and W.~P.~Schleich, Phys. Rev. Lett. {\bf 71}, 1816 (1993). \end{references} \end{document}
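As an independent sanity check of the numerical integration quoted above, the integral $\frac{1}{2\pi^2}\int_0^\infty e^{2x}{\rm Ei}^2(-x)\,dx$ and the resulting value of $P_{pp}(-,-)$ can be re-evaluated with arbitrary-precision arithmetic (a sketch, not the authors' code; for $x>0$ one has ${\rm Ei}(-x)=-E_1(x)$):
\begin{verbatim}
from mpmath import mp, quad, exp, e1, pi, mpf, inf

mp.dps = 25
# integral appearing in Eq. (6.8): int_0^inf exp(2x) E1(x)^2 dx
integral = quad(lambda x: exp(2 * x) * e1(x) ** 2, [0, 1, 10, inf])
lhs = integral / (2 * pi ** 2)
print(lhs)                    # 0.125000..., consistent with Eq. (6.13)
print(mpf(4) / 3 * lhs ** 2)  # 0.0208333... = 1/48, the rate of Eq. (6.14)
\end{verbatim}
The result agrees with the value $1/8$ quoted above and hence with $P_{pp}(-,-)=1/48$.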
\begin{document} \title{Interfacing light and single atoms with a lens} \author{Meng Khoon Tey$^1$, Gleb Maslennikov$^1$, Timothy C. H. Liew$^1$, Syed Abdullah Aljunid$^1$, Florian Huber$^2$, Brenda Chng$^1$, Zilong Chen$^3$, Valerio Scarani$^{1,4}$ and Christian Kurtsiefer$^{1,4}$} \address{$^1$ Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore, 117543} \address{$^2$ Department of Physics, Technical University of Munich, James Franck Street, 85748, Germany} \address{$^3$ Institute of Materials Research and Engineering, 3 Research Link, Singapore, 117602} \address{$^4$ Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore, 117542} \ead{[email protected], [email protected]} \begin{abstract} We characterize the interaction between a single atom or similar microscopic system and a light field via the scattering ratio. For that, we first derive the electrical field in a strongly focused Gaussian light beam, and then consider the atomic response. In a simple scattering model, the fraction of scattered optical power for a weak coherent probe field can exceed 1 in the strong focusing regime, which appears unphysical at first sight. A refined model that takes into account the interference between the exciting and the scattered field at finite-sized detectors or optical fibers is presented and compared to experimental extinction measurements for various focusing strengths. \end{abstract} \pacs{42.50Ct, 42.50Ex, 42.25Bs} \maketitle \section{Introduction\label{intro}} Atom-light interaction at the single quantum level plays an important role in many quantum communication and computation protocols. While spontaneous emission can provide a natural transfer of atomic states into photonic qubits, strong interaction of light with an atom is needed to transfer a photonic qubit into internal atomic degrees of freedom as a stationary qubit. This process is essential to implement quantum light-matter interfaces \cite{cirac:1997,duan:2001,walls:1990}, unless post-selection techniques are used \cite{rosenfeld:07}. The approach commonly pursued for a long time to achieve this strong interaction is to use a high finesse cavity around the atom, in which the electrical field strength of a single photon is enhanced by multiple reflections between two highly reflective mirrors, resulting in a high probability of absorption. Another approach to increase the interaction between an atom and a single photon is simply to focus the light field of a single photon down to a diffraction limited area, motivated by the fact that the absorption cross section of an atom is on the order of the square of the optical wavelength. Recent theoretical research on this matter predicts that the absorption probability may reach the maximal value of 100\% for dedicated focusing geometries \cite{leuchs:2007}. In this paper, we study the interaction strength between a two-level system and a tightly focused, weak coherent Gaussian light beam, which is simpler to prepare. Such a system was theoretically investigated by van Enk and Kimble \cite{van_Enk:2001}, who concluded that one can expect only a weak interaction. An experiment on single atom absorption was carried out long ago in the weak focusing regime \cite{wineland:87}, but recent experimental results with single molecules \cite{molecule1} and atoms \cite{our_paper} showed an interaction strength that exceeded these theoretical predictions by far. 
In this paper we extend the original theoretical model such that it is applicable in the strong focusing regime and provide experimental data on the extinction by a single atom for various focusing parameters. Extrapolating from there, we find that the interaction strength between light and an atom can indeed be very strong for realistic focusing geometries. The paper is organized as follows: In Section~\ref{theory}, we explain how we quantify the interaction strength between an atom and a weak coherent light field, and set out the basic problem. In Section~\ref{sec:focusfield}, we calculate the field strength at the focus of an ideal lens by considering a Gaussian incident beam for the strong focusing regime. Using this `ideal' focusing field developed in this section, we then obtain an expression for the scattering ratio in Section~\ref{scatt_theory}, and for the extinction of a focused light beam by a two-level system in various geometries in Section~\ref{sec:extinction}. The theoretical prediction is compared with our experimental results in Section~\ref{sec:experiment}. \section{Basic problem\label{theory}} The system that we investigate is a single two-level atom localized in free space and illuminated by a focused weak monochromatic light field (probe) with an incident power $P_\mathrm{in}$. The interaction strength of the probe with the atom is directly related to the fraction of power scattered by the atom. Therefore, it seems reasonable to quantify the interaction strength with the ratio of the scattered light power $P_\mathrm{sc}$ to the total incident power $P_\mathrm{in}$, i.e. \begin{equation}\label{definition} R_\mathrm{sc}:=\frac{P_\mathrm{sc}}{P_\mathrm{in}}\,. \end{equation} \begin{figure} \caption{Geometry and notation for the focusing of the probe beam by an ideal lens (schematic).} \label{fig:geometry} \end{figure} To prepare the atom as a clean two-level system, it is convenient to use optical pumping with circularly polarized light - the optical transition therefore will also be driven by circularly polarized light. The light field itself should have a well-defined spatial profile before it is focused onto the atom with a lens. A circularly polarized, collimated Gaussian beam propagating along the $z$ axis (see figure~\ref{fig:geometry}) will therefore be the starting point for our work. Its electrical field strength before the lens is given by \begin{equation} \label{EFS_beforelens} \vec{E}(\rho, t)=\frac{E_\mathrm{L}}{\sqrt{2}}\left[\cos(\omega t)\hat{x}+\sin(\omega t)\hat{y}\right]e^{-\rho^2/w_\mathrm{L}^2}\,, \end{equation} where $\rho$ is the radial distance from the lens axis, $w_\mathrm{L}$ the waist of the beam, $\hat{x}, \hat{y}$ are the unit vectors in transverse directions, and $E_\mathrm{L}$ is the field amplitude. The beam carries a total power of \begin{equation} \label{incidentPower} P_\mathrm{in}=\frac{1}{4}\epsilon_0\pi cE_\mathrm{L}^2w_\mathrm{L}^2\,, \end{equation} where $\epsilon_0$ is the electric permittivity of vacuum, and $c$ the speed of light in vacuum. Due to the rotational symmetry, the field on the lens axis is always circularly polarized. For an atom that is stationary at the focal point of the lens, the electric field can thus be written as \begin{equation} \label{field_at_atom} \vec{E}(t)=\frac{E_\mathrm{A}}{\sqrt{2}}\left[\cos(\omega t)\hat{x}+\sin(\omega t)\hat{y}\right]\,, \end{equation} where $E_\mathrm{A}$ denotes the amplitude of the field at the focus. In the long wavelength limit, the atom interacts only with the field at its own location. 
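As a small worked example of equation (\ref{incidentPower}) (a sketch with an assumed probe power; the 1\,nW value is illustrative only and not taken from the experiment), the field amplitude of the collimated probe follows directly from the incident power:
\begin{verbatim}
import numpy as np

eps0, c = 8.8541878128e-12, 2.99792458e8   # SI values
P_in = 1e-9        # assumed probe power of 1 nW (illustrative only)
w_L = 1.1e-3       # input beam waist of 1.1 mm
E_L = np.sqrt(4 * P_in / (eps0 * np.pi * c * w_L**2))
print(f"E_L = {E_L:.2f} V/m")   # about 0.63 V/m for these numbers
\end{verbatim}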
For a field that is resonant with the atomic transition and with an intensity much below saturation, the power scattered by a two-level atom is~\cite{cohen-tann:1992} (see~\ref{App1} for more details) \begin{equation} \label{scattered_power2} P_\mathrm{sc}=\frac{3\epsilon_0c\lambda^2E_\mathrm{A}^2}{4\pi}, \end{equation} leading to a scattering ratio of \begin{equation} \label{scatt_ratio} R_\mathrm{sc}=\frac{P_\mathrm{sc}}{P_\mathrm{in}}=\frac{3\lambda^2}{\pi^2w_\mathrm{L}^2}\left(\frac{E_\mathrm{A}}{E_\mathrm{L}}\right)^2\,, \end{equation} which is exact under weak and on-resonant excitation. To evaluate the scattering ratio $R_\mathrm{sc}$ and therefore the interaction strength, one needs to know $(E_\mathrm{A}/E_\mathrm{L})^2$. For a weakly focused field where the paraxial approximation holds, one finds that \begin{equation} \label{paraxial} \left(\frac{E_\mathrm{A}}{E_\mathrm{L}}\right)^2 \simeq \left(\frac{w_\mathrm{L}}{w_\mathrm{f}}\right)^2\,, \end{equation} where $w_\mathrm{f}$ is the Gaussian beam waist at the focus. This leads to \begin{equation} \label{scatt_ratio_parax} R_\mathrm{sc}\simeq\frac{3\lambda^2}{\pi^2 w_\mathrm{f}^2}=3u^2\,, \end{equation} with the focusing strength \begin{equation}\label{eqdefu} u:=w_L/f \end{equation} used to fix the focal waist, $w_f=\lambda/(\pi u)$. With a Gaussian focal spot area $A=\pi w_\mathrm{f}^2/2$, the scattering ratio can also be expressed as $R_\mathrm{sc} \simeq \sigma_\mathrm{max}/A$, where $\sigma_\mathrm{max}=3\lambda^2/2\pi$ is the absorption cross section of a two-level system exposed to a resonant plane wave. However, for strongly focused light, the paraxial approximation breaks down, and we need other methods to find $(E_\mathrm{A}/E_\mathrm{L})^2$. \section{Electrical field in a tight focus\label{sec:focusfield}} The paraxial approximation breaks down for strongly focused beams both in the expression of the electric field just behind the lens, and in the propagation of this field to the focus. An approach to overcome the propagation problem was reported by van Enk and Kimble in~\cite{van_Enk:2001}. Their lens model, however, applies only to the weak focusing regime. In the following, we present a lens model that overcomes this limitation, propagate the optical field behind the ideal lens into the focal region, and investigate the focal field numerically using their technique. We obtain a closed expression for the electrical field in the focus using the Green theorem for the propagation. To simplify the expressions in this section, we express the electrical field in dimensionless units, so the electrical field strength of the collimated Gaussian beam entering the focusing lens is given by \begin{equation}\label{input_gaussian_field} \vec{F}_\mathrm{in}=\hat{\epsilon}_+e^{-\rho^2/w_\mathrm{L}^2}\,, \end{equation} where $\hat{\epsilon}_+$ is one of the circular polarization vectors $\hat{\epsilon}_\pm=(\hat{x}\pm i \hat{y})/\sqrt2$. \subsection{Model of an ideal lens\label{field}} An ideal converging lens converts a beam with a plane wave front into one with a spherical wave front which converges towards the focal point $F$. Therefore, it can be modeled as a phase plate modifying an incoming field $F_\mathrm{in}$ with a radially dependent phase factor $\varphi(\rho)$ into \begin{equation}\label{unphysical_field} \vec{F}_\mathrm{F} = \varphi(\rho)\vec{F}_\mathrm{in}\,. 
\end{equation} In paraxial optics, a convenient analytical treatment of Gaussian beams can be obtained assuming a parabolic phase factor, \begin{equation}\label{parabolic_phase_factor} \varphi_\mathrm{pb}(\rho)=e^{-ik\rho^2/2f}\,; \end{equation} this was adopted in \cite{van_Enk:2001}. However, the conversion of a plane into a spherical wave front corresponds to a phase factor of \begin{equation}\label{spherical_phase_factor} \varphi_\mathrm{sp}(\rho)=e^{-ik\sqrt{\rho^2+f^2}}\,, \end{equation} which is only approximated by equation (\ref{parabolic_phase_factor}). On top of this, multiplication of a incoming field with such a phase factor leads to an electrical field which is not compatible with Maxwell equations, since the polarization vector for $\rho>0$ is not tangential to the wave front anymore. In view of this, we have to change the local polarization with three requirements in mind \cite{wolf:1959}: (i) A rotationally symmetric lens does not alter the local azimuthal field component, but tilts the local radial polarization component of the incoming field towards the axis; (ii) The polarization at point P (see Figure~\ref{fig:geometry}) after transformation by the lens is orthogonal to the line FP; (iii) The power flowing into and out of an arbitrarily small area on the thin ideal lens is the same. These requirements determine completely the focusing field right after the lens. With the input field in equation~(\ref{input_gaussian_field}), one finds (see~\ref{App2} for details) \begin{eqnarray}\nonumber \fl \vec{F}_\mathrm{F}(\rho,\phi,z=-f)=&\frac{1}{\sqrt{\cos\theta}}\textrm{ }\left(\frac{1+\cos\theta}{2}\textrm{}\hat{\epsilon}_+ + \frac{\sin\theta e^{i\phi}}{\sqrt{2}}\textrm{}\hat{z}+\frac{\cos\theta-1}{2}e^{2i\phi}\textrm{}\hat{\epsilon}_-\right)\\& \times \exp\left(-\rho^2/w_\mathrm{L}^2\right)\textrm{ }\exp\left[-ik\sqrt{\rho^2+f^2}\right]\,, \label{physical_focused_field} \end{eqnarray} with $\theta=\mathrm{arctan} (\rho/f)$. In particular, the factor $1/\sqrt{\cos\theta}$ is needed in order to meet requirement (iii). \subsection{Numerical propagation of the field to the focus} The optical field with a converging wave front directly behind the lens needs to be propagated into the focal region to arrive at a field strength of the light interacting with the microscopic system. Various methods can be applied for this purpose. The one implemented in \cite{van_Enk:2001} projects the focusing field $\vec{F}_\mathrm{F}$ on an orthogonal set of modes $\vec{F}_\mu$, $\mu=(k_t,s,m)$ with cylindrical symmetry (see \ref{sec:cylmodes} for details). This decomposition reads \begin{equation} \vec{F}_\mathrm{F} =\sum\limits_\mu\kappa_\mu\vec{F}_\mu \end{equation} where the expansion coefficients $\kappa_\mu$ are given by \begin{eqnarray}\nonumber \fl \kappa_\mu=\delta_{m1}\pi k_t \int_0^\infty d\rho\textrm{ }\rho\frac{1}{\sqrt{\cos\theta}}\textrm{ }\textrm{{\Large \{}} \frac{sk+k_z}{k}\left(\frac{1+\cos\theta}{2}\right)J_0(k_t \rho)+i \frac{\sqrt{2} k_t}{k}\left(\frac{\sin\theta}{\sqrt{2}}\right) J_1(k_t \rho)\\&\fl\fl\fl\fl\fl+\frac{sk-k_z}{k}\left(\frac{\cos\theta-1}{2}\right) J_2(k_t\rho)\textrm{{\Large \}}}\exp\left[-ik\sqrt{\rho^2+f^2}\,-\frac{\rho^2}{w_\mathrm{L}^2}\right]\,, \end{eqnarray} with $\theta=\tan^{-1}(\rho/f)$. The Kronecker symbol $\delta_{m1}$ reflects conservation of angular momentum under the lens transformation \cite{vanenk:1992,beijersbergen:1993}. The projection integral has no analytic solution, so the coefficients have to be evaluated numerically. 
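Since the projection integral must be computed numerically, the following sketch (in Python; the parameters $f=4.5$\,mm, $\lambda=780$\,nm and $w_\mathrm{L}=1.1$\,mm are the ones used below, and $k_z=\sqrt{k^2-k_t^2}$ is the longitudinal wave number of the cylindrical mode) illustrates one way to evaluate a single coefficient $\kappa_\mu$: the rapid phase oscillation of $\exp[-ik\sqrt{\rho^2+f^2}]$ is simply resolved on a dense radial grid, which is crude but adequate for a sketch.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import jv

lam, f, w_L = 780e-9, 4.5e-3, 1.1e-3
k = 2 * np.pi / lam

def kappa(k_t, s, n_grid=400_000, rho_max=4.0):
    """Projection coefficient kappa_{(k_t, s, m=1)} of the focusing field."""
    rho = np.linspace(0.0, rho_max * w_L, n_grid)   # Gaussian cutoff at 4 w_L
    theta = np.arctan(rho / f)
    k_z = np.sqrt(k**2 - k_t**2)
    bracket = ((s * k + k_z) / k * (1 + np.cos(theta)) / 2 * jv(0, k_t * rho)
               + 1j * np.sqrt(2) * k_t / k * np.sin(theta) / np.sqrt(2)
                 * jv(1, k_t * rho)
               + (s * k - k_z) / k * (np.cos(theta) - 1) / 2 * jv(2, k_t * rho))
    integrand = (rho / np.sqrt(np.cos(theta)) * bracket
                 * np.exp(-1j * k * np.sqrt(rho**2 + f**2) - rho**2 / w_L**2))
    return np.pi * k_t * trapezoid(integrand, rho)

print(kappa(0.3 * k, +1))   # one expansion coefficient of the focused field
\end{verbatim}
The field anywhere behind the lens then follows by summing such coefficients according to equations (\ref{physical_F+}) to (\ref{physical_F-}) below.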
The (dimensionless) field components in the three polarization components $\hat\epsilon_\pm, \hat{z}$ at any point behind the lens are superpositions of contributions from different modes, \begin{eqnarray}\label{physical_F+} F_+(\rho, \phi, z)&=&\sum_{s=\pm1}\int_{0}^{k}dk_t\textrm{ }\frac{1}{4\pi}\frac{sk+k_z}{k}J_0(k_t\rho)e^{i k_z z} \kappa_{\mu},\\\label{physical_Fz} F_z(\rho, \phi, z)&=&\sum_{s=\pm1}\int_{0}^{k}dk_t\textrm{ }(-i)\frac{\sqrt{2}}{4\pi}\frac{k_t}{k} J_1(k_t\rho)e^{i k_z z}e^{i\phi} \kappa_{\mu},\\\label{physical_F-} F_-(\rho, \phi, z)&=&\sum_{s=\pm1}\int_{0}^{k}dk_t\textrm{ }\frac{1}{4\pi}\frac{sk-k_z}{k}J_2(k_t\rho)e^{i k_z z}e^{2i\phi} \kappa_{\mu}.\end{eqnarray} We now evaluate the field components for different regions with this method: first directly behind the lens, then on the optical axis near the focus, and finally in the focal plane near the focus. \subsubsection{Focusing field reconstruction} As a consistency check, we first evaluate the field components right after the lens (i.e., for $z=-f$) for a reasonably strong focusing field with $u=1.56$ corresponding to $w_\mathrm{L}=7$\,mm for $f=4.5$\,mm. The relative difference between the reconstructed and original field is less than $10^{-3}$, a bound limited by our numerical accuracy. A linear combination of the field modes $\mu$ in the form of equations (\ref{physical_F+}) to (\ref{physical_F-}) is compatible with the Maxwell equations. Since there is no significant difference between the original and the reconstructed field, the choice in equation (\ref{physical_focused_field}) for the focusing field is therefore compatible with the Maxwell equations even for strong focusing parameters. \subsubsection{Field along the optical axis} The solid line in figure~\ref{fig:field_comparison} shows the dimensionless intensity $|F_+|^2$ for $f=4.5$\,mm, $\lambda=780$\,nm and $w_\mathrm{L}=1.1$\,mm ($u=0.244$), with a clearly peaked distribution centered at the focus $\Delta z=0$ and a depth of field, defined as the full width at half maximum (FWHM), of about 9.5\,$\mu$m. This result is still very close to the much simpler paraxial approximation of a Gaussian beam, with a depth of field of $2\lambda/(\pi u^2)=8.31\,\mu$m. For comparison, we show the result for a focusing field using a parabolic phase factor $\varphi_\mathrm{pb}$ only, following \cite{van_Enk:2001}. The spherical aberration there displaces and spreads the focus, and significantly reduces the maximal intensity in the focal point $F$. This problem becomes even more serious for a larger input waist $w_\mathrm{L}$. \begin{figure} \caption{Dimensionless intensity $|F_+|^2$ along the optical axis for the full focusing field model according to equation (\ref{physical_focused_field}), compared with the result for the parabolic phase factor model.} \label{fig:field_comparison} \end{figure} \subsubsection{Field in the focal plane\label{field_at_focus}} We now examine the field near the focus in more detail. Figure \ref{fig:field_at_focus_4graphs} shows the field in the focal plane for different focusing strengths. For this, we choose different input waists $w_\mathrm{L}$, but keep $f$=4.5\,mm and $\lambda$=780\,nm fixed. For comparison, we also show the result for a focusing field according to the paraxial approximation, \begin{equation}\label{focal_gaussian_field} \vec{F}_\mathrm{parax}=\frac{w_\mathrm{L}}{w_\mathrm{f}}\hat{\epsilon}_+e^{-\rho^2/w_\mathrm{f}^2}, \end{equation} with a paraxial focal waist $w_\mathrm{f}=f\lambda/\pi w_\mathrm{L}$. \begin{figure} \caption{Field amplitudes at the focus for different focusing strengths. 
All plots are for a focal length of 4.5\,mm and wavelength of 780\,nm.} \label{fig:field_at_focus_4graphs} \end{figure} For weak focusing ($u=0.022, w_\mathrm{L}=0.1\,$mm) in figure~\ref{fig:field_at_focus_4graphs}a, $|F_+|$ overlaps completely with the paraxial prediction with negligible $|F_z|$ and $|F_-|$. For an initial waist $w_\mathrm{L}=0.3$\,mm corresponding to $u=0.067$ and $w_\mathrm{f}\simeq \textrm{3.7} \mu\textrm{m}$ (about 5$\lambda$), discrepancies between the paraxial approximation and the extended model start to appear (Figure~\ref{fig:field_at_focus_4graphs}b). With increasing $w_\mathrm{L}$, the $\hat{z}$- and $\hat{\epsilon}_-$ polarized field components become stronger for $\rho>0$, but an atom localized on the optical axis still only experiences a $\hat{\epsilon}_+$-polarized field. Figure~\ref{fig:field_at_focus_4graphs}d shows the focused field that maximizes $|F_+|$ for the parameters in our model. It is obtained with an incident waist $w_\mathrm{L}=10$\,mm ($u=2.22$). An increase of the incident waist beyond that does not reduce the focal spot size any further due to the diffraction limit. Instead, more energy is transferred to the other polarization components, thus decreasing the magnitude of the $F_+$. \subsection{Analytical expression for the field in the focal point} An alternative method for propagating the focusing field directly behind the lens is offered by the Green theorem. For given electrical and magnetic fields $\vec{E}(\vec{r}\,')$ and $\vec{B}(\vec{r}\,')$ on an arbitrary closed surface $S'$ that encloses a point $\vec{r}$, the electrical field at this point is determined by \cite{jackson:CE} \begin{eqnarray}\nonumber \vec{E}(\vec{r})=\oint_{S'}\mathrm{ }dA'\textrm{ }&&\textrm{{\Large \{}}ikc\left[\vec{n}\,'\times\vec{B}(\vec{r}\,')\right]G(\vec{r},\vec{r}\,')+\left[\vec{n}\,'\times\vec{E}(\vec{r}\,')\right] \times\nabla'G(\vec{r},\vec{r}\,')\\ && +\left[\vec{n}\,'\cdot\vec{E}(\vec{r}\,')\right]\nabla'G(\vec{r},\vec{r}\,')\textrm{{\Large \}}},\label{green_theorem} \end{eqnarray} where $\vec{n}\,'$ is the unit vector normal to a differential surface element $dA'$ and points into the volume enclosed by $S'$, and $G(\vec{r},\vec{r}\,')$ is the Green function given by \begin{equation} G(\vec{r},\vec{r}\,')=\frac{e^{ik|\vec{r}-\vec{r}\,'|}}{4\pi |\vec{r}-\vec{r}\,'|}\,. \end{equation} If point $\vec{r}$ is the focus of an aplanatic focusing field, then the local field propagation wave vector $\vec{k}\,'$ at any point $\vec{r}\,'$ always points towards (away from) point $\vec{r}$ for the incoming (outgoing) field in the far field limit, i.e. when $|\vec{r}-\vec{r}\,'|\gg\lambda$. In this limit, one has \begin{eqnarray} B(\vec{r}\,')\rightarrow \frac{\vec{k}\,'}{c|\vec{k}\,'|}\times E(\vec{r}\,'),\label{BE_relation} \\ \nabla'G\rightarrow -i\vec{k}\,'G\,\textrm{ before the focus}\,, \nabla'G\rightarrow i\vec{k}\,'G\,\textrm{ after the focus}\,.\label{gradientapprox} \end{eqnarray} In the far field limit, \Eref{green_theorem} reduces to \begin{eqnarray}\nonumber \vec{E}(\vec{r}_{\mathrm{\,\,focus}})&=&-2i\int_{S_\mathrm{bf}} dA'\textrm{ }\left[\vec{n}\,'\cdot\vec{k}\,'\right]\vec{E}(\vec{r}\,')G(\vec{r},\vec{r}\,')\\&&+2i\int_{S_\mathrm{af}} dA'\textrm{ }\left[\vec{n}\,'\cdot\vec{E}(\vec{r}\,')\right]\vec{k}\,'G(\vec{r},\vec{r}\,')\,.\label{simplified_Green} \end{eqnarray} Here the surface $S'$ is divided into two parts, where $S_\mathrm{bf}$ is the one before the focal plane, and $S_\mathrm{af}$ is the surface after the focal plane. 
The second term in \Eref{simplified_Green} vanishes if we choose $S_\mathrm{af}$ to be an infinitely large hemisphere centered at the focus, since in this case $\vec{n}\,'$ is perpendicular to $\vec{E}(\vec{r}\,')$ at all points on $S_\mathrm{af}$ for an aplanatic field. If we choose $S_\mathrm{bf}$ as an infinitely large plane that coincides with the ideal lens and adopt the dimensionless focusing field in \Eref{physical_focused_field}, we get \begin{equation}\label{focus_inte} \vec{F}(0,0,z=0)=\frac{-ik\sqrt{f}}{2}\int_0^\infty d\rho\,\frac{\rho(f+\sqrt{f^2+\rho^2})}{(f^2+\rho^2)^{5/4}}\exp(-\frac{\rho^2}{w_L^2})\,\hat{\epsilon}_+\, \end{equation} which has an analytical solution \begin{equation}\label{focus_solution} \vec{F}(0,0,z=0)=-\frac{1}{4}{ikw_L\over u} e^{1/u^2}\left[\sqrt{1\over u}\Gamma(-{1\over4},{1\over u^2})+\sqrt{u}\Gamma({1\over4},{1\over u^2})\right]\,\hat{\epsilon}_+\,, \end{equation} with the incomplete gamma function $\Gamma(a,b)=\int_b^\infty\,t^{a-1}e^{-t}\,dt\,$, and $u=w_L/f$ as in \Eref{eqdefu}. The results obtained with the mode decomposition method agree with this expression within computational errors of about 0.1\%. The $-i$ reflects a Gouy phase of $-\pi/2$ \cite{born:PO}. We now restore the field dimensions by multiplication with the amplitude in the center of the collimated Gaussian beam, which can be expressed by the optical power according to equation (\ref{incidentPower}), \begin{equation} E_L={1\over w_L}\sqrt{4P_\mathrm{in}\over\epsilon_0\pi c}\,, \end{equation} resulting in an electrical field amplitude in the focus of \begin{equation} \left|E_A\right|=\sqrt{\pi P_\mathrm{in}\over\epsilon_0 c\lambda^2}\,\cdot\,{1\over u} e^{1/u^2}\left[\sqrt{1\over u}\Gamma(-{1\over4},{1\over u^2})+\sqrt{u}\Gamma({1\over4},{1\over u^2})\right]\label{eq:absolute_focus_field} \end{equation} with purely circular polarization. The focal field thus only depends on the input power, the optical wavelength and the focusing strength $u$. \section{Study of the scattering ratio\label{scatt_theory}} \begin{figure} \caption{Scattering ratio $R_\mathrm{sc}$ as a function of the focusing strength $u$ for the different focusing field models.} \label{fig:scat_prob_4models} \end{figure} The electrical field amplitude $E_A$ at the focus for a given excitation power now allows us to determine the fraction $R_\mathrm{sc}$ of the optical power scattered away by a two-level atom or similar microscopic object according to equation (\ref{scattered_power2}). With \Eref{eq:absolute_focus_field}, we arrive at \begin{equation} R_\mathrm{sc}={3\over4u^3}\, e^{2/u^2}\left[\Gamma(-{1\over4},{1\over u^2})+u\Gamma({1\over4},{1\over u^2})\right]^2\,.\label{eq:absolute_scatteringrate} \end{equation} In Figure~\ref{fig:scat_prob_4models} we show this quantity as a function of the focusing strength $u$. A striking feature of this plot is that, for $u$ large enough, $R_\mathrm{sc}$ exceeds the value of 1, as if more light were scattered than was incident. However, $R_\mathrm{sc}$ cannot be interpreted as a scattering ratio anymore if the solid angle subtended by the excitation field is not negligible: in the strong focusing regime, interference between the exciting field and the scattered field must be taken into account. In fact, the physical bound (Bassett limit) is $R_\mathrm{sc}\leq 2$~\cite{bassett:1986}. For our focusing model of the Gaussian beam, we predict a maximal value of $R_\mathrm{sc}=1.456$ for a focusing strength $u=2.239$. For reference, we also show $R_\mathrm{sc}$ for focal fields derived under the paraxial approximation and for a parabolic wave front model.
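Both the closed form \Eref{focus_solution} and the maximum of \Eref{eq:absolute_scatteringrate} are easy to reproduce numerically; the following sketch (not the code used for the figure) checks the closed-form focal amplitude against a direct integration of \Eref{focus_inte} and locates the maximum of $R_\mathrm{sc}(u)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from mpmath import gammainc, inf, mpf

def focal_amplitude(u):
    """|F(0,0,0)| / (k w_L) from the closed-form solution."""
    b = 1.0 / u**2
    g_m = float(gammainc(mpf('-0.25'), b, inf))   # Gamma(-1/4, 1/u^2)
    g_p = float(gammainc(mpf('0.25'), b, inf))    # Gamma(+1/4, 1/u^2)
    return np.exp(b) / (4 * u) * (np.sqrt(1 / u) * g_m + np.sqrt(u) * g_p)

def focal_amplitude_direct(u, f=1.0):
    """Same quantity from the Green theorem integral (k set to 1)."""
    w_L = u * f
    integrand = lambda r: (r * (f + np.sqrt(f**2 + r**2))
                           / (f**2 + r**2)**1.25 * np.exp(-r**2 / w_L**2))
    val, _ = quad(integrand, 0.0, np.inf)
    return np.sqrt(f) / 2 * val / w_L

def R_sc(u):
    """Scattering ratio as a function of the focusing strength u."""
    b = 1.0 / u**2
    g_m = float(gammainc(mpf('-0.25'), b, inf))
    g_p = float(gammainc(mpf('0.25'), b, inf))
    return 3 / (4 * u**3) * np.exp(2 * b) * (g_m + u * g_p)**2

for u in (0.3, 1.0, 2.239):
    assert abs(focal_amplitude(u) - focal_amplitude_direct(u)) < 1e-6

best = minimize_scalar(lambda u: -R_sc(u), bounds=(0.5, 5.0), method='bounded')
print(f"max R_sc = {-best.fun:.3f} at u = {best.x:.3f}")   # ~1.456 at u ~ 2.24
\end{verbatim}
This reproduces the maximal value quoted above.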
All models agree in the weak focusing regime, and $R_\mathrm{sc}$ can reach values on the order of 1, which indicates that, by strong focusing, one can accomplish a strong interaction between a light field and a single atom. \section{Extinction as a measurement of scattering\label{sec:extinction}} The parameter $R_\mathrm{sc}$ we have just discussed can be used as a figure-of-merit for scattering experiments. In this section, we go beyond this one-parameter description to provide a detailed model for the experiment we performed \cite{our_paper}. In this experiment, we measured the extinction of the transmitted light beam due to scattering. \begin{figure} \caption{A transmission measurement setup with an atom at the focus of a lens. The transmitted power is a result of interference between the scattered light and the probe for coherent scattering.} \label{img:idea} \end{figure} Figure~\ref{img:idea} illustrates a simple transmission setup with an atom located at the focus of two confocal lenses, where the second lens collects all the excitation power if no atom is present at the focus. The actual measured transmission $T$ depends on the area covered by the power detector after the second lens, and can be obtained by considering the interference between the incident probe field and the field coherently scattered by the atom. For a given focusing and collection geometry, the extinction \begin{equation} \epsilon=1-T={P_\mathrm{in}-P_\mathrm{out}\over P_\mathrm{in}} \end{equation} is maximal for a weak incident probe field in resonance with the transition in the two-level system. The total field at any point is a superposition of the focusing field exciting the atom, and the scattered field: \begin{equation}\label{totalfield} \vec{E}_\mathrm{t}(\vec{r})= \vec{E}_\mathrm{F}(\vec{r})+\vec{E}_\mathrm{sc}(\vec{r})\,. \end{equation} The spatial dependence of the scattered field $\vec{E}_\mathrm{sc}$ is that of a rotating electrical dipole, with an amplitude proportional to the exciting electrical field amplitude $E_A$, and the total power contained in this dipole radiation must match \Eref{scattered_power2}. Far away from the dipole ($r\gg\lambda$), this scattered field takes the form \begin{equation}\label{dipole_field} \vec{E}_\mathrm{sc}(\vec{r})=\frac{3E_Ae^{i(kr+\pi/2)}}{2kr} \left[\hat{\epsilon}_+-(\hat{\epsilon}_+\cdot\hat{r})\,\hat{r}\right]\,, \end{equation} where $\hat{r}$ is the radial unit vector pointing away from the scatterer~\cite{cohen-tann:1992}. The $\pi/2$ phase reflects the fact that the dipole moment of the atom lags the field $E_A$ by $\pi/2$ at resonance. The focusing field $\vec{E}_\mathrm{F}$ close to the lenses at $z=\pm f$ takes the form \begin{eqnarray} \fl \vec{E}_\mathrm{F}(\rho,\phi,z=\pm f)=& \frac{E_L}{\sqrt{|\cos\theta|}}\textrm{} \left(\frac{1\mp\cos\theta}{2}\textrm{}\hat{\epsilon}_+ \mp \frac{\sin\theta e^{i\phi}}{\sqrt{2}}\textrm{}\hat{z}+ \frac{\mp\cos\theta-1}{2}e^{2i\phi}\textrm{}\hat{\epsilon}_-\right)\nonumber\\& \times \exp\left(-\rho^2/w_\mathrm{L}^2\right)\textrm{ } \exp\left[\pm i(k\sqrt{\rho^2+f^2}-\pi/2)\right]\,, \label{physical_focused_field2} \end{eqnarray} where $\theta\in[0,\pi]$ is the polar angle between the $-z$ direction and a point $(\rho, \phi, z)$ as in figure~\ref{fig:geometry}. The phase is adjusted such that the electric field amplitude $E_A$ at the focus is real. 
The excitation field and the forward scattered field interfere destructively, as was shown first for the case of an incident plane wave~\cite{bohren:1983,paul:1983,davis:2001}, and more recently for arbitrary incident fields \cite{zumofen:2008} with the help of vectorial multipole expansions \cite{bohren:ASLSP, sheppard:1997,mojarad:2008}. \subsection{Energy flux through transverse planes} The optical power $P_\mathrm{out}$ arriving at a detector behind the atom or a collection lens can be evaluated from the superposition of fields via the time averaged energy flux through a plane $S$ with a fixed $z$, \begin{equation} P_S = \frac{\epsilon_0 c^2}{2}\int_S \Re\left\{\vec{E}_\mathrm{t}\times\vec{B}^*_\mathrm{t}\right\}\cdot d\vec{A} \end{equation} where $d\vec{A}$ is a differential area element of the surface $S$ and $\Re(x)$ denotes the real part of $x$. Far away from the focus, the electromagnetic field can be locally approximated by a plane wave such that $\vec{B}=\hat{k}\times\vec{E}/c$, where $\hat{k}=\hat{k}_\mathrm{sc},\,\hat{k}_\mathrm{F}$ is a dimensionless unit vector parallel to the local field propagation direction. Both $\vec{E}_\mathrm{F}$ and $\vec{E}_\mathrm{sc}$ have spherical wave fronts, i.e., the local propagation directions are parallel. Before the focus we have $\hat{k}_\mathrm{sc}=-\hat{k}_\mathrm{F}$, while after the focus we have $\hat{k}_\mathrm{sc}=\hat{k}_\mathrm{F}$. With these field properties, and with the local transversality, $\hat{k}_\mathrm{F}\cdot\vec{E}_\mathrm{F}=0$ and $\hat{k}_\mathrm{F}\cdot\vec{E}_\mathrm{sc}=0$, the power through the two planes can be expressed with electrical fields only, \begin{equation} \fl \textrm{ }P_\mathrm{z=\pm f}=\frac{\epsilon_0 c}{2} \int_\mathrm{z=\pm f}\Re\left\{\vec{E}_\mathrm{F}\cdot\vec{E}_\mathrm{F}^*\pm\vec{E}_\mathrm{sc}\cdot\vec{E}_\mathrm{sc}^*+\vec{E}_\mathrm{sc}\cdot\vec{E}_\mathrm{F}^*\pm\vec{E}_\mathrm{F}\cdot\vec{E}_\mathrm{sc}^*\right\}\hat{k}_\mathrm{F}\cdot\hat{z}dA\label{flux_at_pmf}\,. \end{equation} The two first terms represent (i) the power of the excitation field, (ii) the power of the scattered field, while the third and fourth term represent (iii) the interference term. The contribution (i) to $P_\mathrm{z=\pm f}$ is simply the input power, \begin{equation}\label{Uf_in} P_\mathrm{z=\pm f,\,in}:=\frac{\epsilon_0 c}{2} \int_\mathrm{z=\pm f}\Re\left\{\vec{E}_\mathrm{F}\cdot\vec{E}_\mathrm{F}^*\right\}\hat{k}_\mathrm{F}\cdot\hat{z}dA = \frac{1}{4}\epsilon_0\pi cE_\mathrm{L}^2w_\mathrm{L}^2=P_\mathrm{in}\,, \end{equation} and the contribution (ii) in these planes, \begin{equation} \left|P_\mathrm{z=\pm f,\,sc}\right| :=\frac{\epsilon_0 c}{2} \int_\mathrm{z=\pm f}\Re\left\{\vec{E}_\mathrm{sc}\cdot\vec{E}_\mathrm{sc}^*\right\}\hat{k}_\mathrm{F}\cdot\hat{z}dA = \frac{3\epsilon_0c\lambda^2E_\mathrm{A}^2}{8\pi}=\frac{P_\mathrm{sc}}{2}\,,\label{Uf_sc} \end{equation} where $P_\mathrm{sc}$ is the scattered power as defined previously in \Eref{scattered_power2}. 
The interference contribution (iii) vanishes for $z=-f$ because $\left(\vec{E}_\mathrm{sc}\cdot\vec{E}_\mathrm{F}^*-\vec{E}_\mathrm{F}\cdot\vec{E}_\mathrm{sc}^*\right)$ is purely imaginary, whereas for $z=+f$ we get \begin{eqnarray}\label{Uf_int} P_\mathrm{z=+f,\,int}&:=\frac{\epsilon_0 c}{2} \int_\mathrm{z=\pm f}\Re\left\{\vec{E}_\mathrm{sc}\cdot\vec{E}_\mathrm{F}^*+\vec{E}_\mathrm{F}\cdot\vec{E}_\mathrm{sc}^*\right\}\hat{k}_\mathrm{F}\cdot\hat{z}dA\\ &=-\frac{3\pi\epsilon_0 c E_A E_L \sqrt{f}}{2k} \int_0^\infty\frac{\rho(f+\sqrt{f^2+\rho^2})}{(f^2+\rho^2)^{5/4}}\exp(-\frac{\rho^2}{w_L^2}) d\rho\,. \end{eqnarray} The negative sign, which comes from both the Gouy phase in the incident field (\Eref{physical_focused_field2}) and the phase difference between the dipole and local field (\Eref{dipole_field}), reveals that the scattered light and the incident light interfere destructively after the focus \cite{zumofen:2008}. This integral can be solved in the same way as \Eref{focus_inte}, leading to \begin{equation}\label{Uf_int_result} P_\mathrm{z=+f,\,int} = -\frac{3\epsilon_0c\lambda^2E_\mathrm{A}^2}{4\pi}=-P_\mathrm{sc}\,. \end{equation} Thus, the power flowing through both planes $z=\pm f$ is the same, \begin{equation} P_\mathrm{z=\pm f}=P_\mathrm{in}-\frac{P_\mathrm{sc}}{2}\,. \end{equation} This indicates what we mentioned above: a value $R_\mathrm{sc}>1$ does not violate energy conservation, the physical bound being rather $R_\mathrm{sc}\leq 2$. For later on, we should define a measurable extinction as the difference of the transmitted power with and without the atom, divided by the power transmitted without the atom at a location behind the atom, e.g. at $z=+f$: \begin{eqnarray} \epsilon&=\frac{P_\mathrm{z=+f,\,in}-P_\mathrm{z=+f}}{P_\mathrm{z=+f,\,in}} \nonumber\\ &=\frac{-P_\mathrm{z=+f,\,int}-P_\mathrm{z=+f,\,sc}}{P_\mathrm{z=+f,\,in}}\,.\label{extinction} \end{eqnarray} If the collection lens at $z=+f$ is infinitely large, it takes a value of \begin{equation} \epsilon=\frac{P_\mathrm{sc}/2}{P_\mathrm{in}}=\frac{R_\mathrm{sc}}{2}\,. \end{equation} \subsection{Extinction observed with a detector/lenses with finite diameter} Realistic lenses will have a finite size, thus only partly transmit the excitation and scattered light. We now estimate how this obstruction affects the relation between an observed extinction and the inferred scattering ratio $R_\mathrm{sc}$. The first effect of a finite lens aperture radius $\rho_0$ of the first lens is a reduction of the field at the focus. The Green theorem method for evaluating the focal field via equation (\ref{focus_inte}) still has a closed solution for a finite radius, \begin{eqnarray} \fl\frac{E_A^{\rho_\mathrm{0}}}{E_L}= kf\sqrt{u}e^{1/ u^2} \left\{{1\over4}\left[\Gamma\left({1\over4},{1\over u^2}\right) -\Gamma\left({1\over4},{1+v^2\over u^2}\right)\right] +{1\over u}\left[\Gamma\left(\frac{3}{4},{1+v^2\over u^2}\right) -\Gamma\left(\frac{3}{4},{1\over u^2}\right)\right]\right.\nonumber\\ \left. +\sqrt{1\over u}e^{-{1/ u^2}} \left[1-\frac{e^{-v^2/ u^2}}{\left(1+v^2\right)^{1/4}}\right]\right\}\label{finite_lens_focus_solution} \end{eqnarray} with $u=w_L/f$ as in \Eref{eqdefu}, and similarly $v:=\rho_\mathrm{0}/f$ as half the f-number of the lens which is related to the numerical aperture NA of the lens via $\mathrm{NA}^2=v^2/(1+v^2)$. This obstruction reduces the scattered power $P_\mathrm{sc}$. For realistic lens sizes, however, this is a very small effect. 
For instance, for $\rho_\mathrm{0}=2w_\mathrm{L}$ we find $P_\mathrm{sc}^\mathrm{\rho_\mathrm{0}}\simeq 0.97P_\mathrm{sc}^\mathrm{\,\infty}$. Without the atom, the transmitted power after the collection lens is given by \begin{equation} P_\mathrm{z=+f,\,in}^{\rho_\mathrm{0}}=P_\mathrm{in}\left[1-\exp(-\frac{2\rho^2_\mathrm{0}}{w_\mathrm{L}^2})\right]\,. \end{equation} The transmitted power in the absence of an atom is thus very close to $P_\mathrm{in}$ for $\rho_\mathrm{0}>2w_\mathrm{L}$. The contribution (iii) of the interference terms to the forward power is now given by \begin{equation} P_\mathrm{z=+f,\,int}^{\rho_\mathrm{0}}=-P_\mathrm{sc}^\mathrm{\rho_\mathrm{0}}\,, \end{equation} and the contribution (ii) of the scattered light by \begin{eqnarray}\label{collected_scattered_light} P_\mathrm{z=+f\,,sc}^\mathrm{\rho_\mathrm{0}} &=& {1-\alpha\over2}\cdot P_\mathrm{sc}^{\rho_\mathrm{0}}\\ \mathrm{with}\quad \alpha&=&\frac{1+3v^2/4}{\left(1+v^2\right)^{3/2}} = (1-{\mathrm{NA}^2\over4})\sqrt{1-\mathrm{NA}^2}\,. \label{pickup_factor} \end{eqnarray} The extinction measurable with a finite aperture lens thus is \begin{equation} \epsilon={1+\alpha\over2} \cdot \frac{P_\mathrm{sc}^{\rho_\mathrm{0}}}{P_\mathrm{in}} \cdot {1\over1-e^{-2\rho^2_\mathrm{0}/{w_\mathrm{L}^2}}} \,.\label{exact_extinction} \end{equation} If the lenses fully accommodate the Gaussian incident beam, say $\rho_0>2w_L$, then this can very well be approximated by \begin{equation} \epsilon={1+\alpha\over2}\cdot R_\mathrm{sc}^{\rho_\mathrm{0}}\,, \end{equation} where $R_\mathrm{sc}^{\rho_\mathrm{0}}=P_\mathrm{sc}^{\rho_\mathrm{0}}/P_\mathrm{in}$. We can also quantitatively evaluate the reflectivity of a single atom in this strong focusing regime, which was recently found to be possibly very large \cite{zumofen:2008}. The scattered power recollected by the input lens is also given by \Eref{collected_scattered_light}, thus the `single atom reflectivity' with a Gaussian beam profile is \begin{equation}\label{reflectivity} R= {1-\alpha\over2}\cdot R_\mathrm{sc}^{\rho_\mathrm{0}}\,. \end{equation} We conclude that the measured extinction presents a lower bound to the scattering ratio if the collection lens fully collects the probe after the focus. For a small numerical aperture of the collection lens, $\epsilon\simeq R_\mathrm{sc}$ as expected, whereas for a large collection aperture (corresponding to a small loss factor $\alpha$), a reduced extinction $\epsilon\rightarrow R_\mathrm{sc}/2$ should be observed. \subsection{Extinction observed with a detector behind a single mode fiber} The symmetrical arrangement of the focusing and collection lens, and the typical preparation of a Gaussian excitation beam by an optical fiber suggest that the light could also be collected by a single mode optical fiber. The confinement of the light field into waveguides with well-defined mode functions makes the atom-focusing arrangement an independent building block for `processing' electromagnetic fields. The amplitude $a_c$ of the light field collected into the optical fiber is again given by the sum of the excitation and scattered field, picked up by the optical fiber. This amplitude $a_c$ can be obtained by projecting the field $\vec{E}_\mathrm{t}$ onto the field mode $\vec{g}$ of the optical fiber. 
This projection can be carried out with the scalar product \begin{equation} a_c=\left \langle \vec{g}, \vec{E}_\mathrm{t}\right \rangle:={\epsilon_0 c\over2}\int_{\vec{x}\in S}\left\{ \vec{g}^*(\vec{x})\cdot\vec{E}_\mathrm{t}(\vec{x})\right\} (\hat{k}_{\vec{g}}\cdot\hat{n})\, dA\,, \label{eq:scalprod} \end{equation} where the integration plane $S$ is chosen such that both $\vec{g}$ and $\vec{E}$ are far away from a focus, $\hat{k}_{\vec{g}}$ is the local propagation direction of the mode function $\vec{g}$, and $\hat{n}$ the normal vector on the plane $S$. This integration can be carried out at any convenient location as long as it captures the mode function. The scalar product in \Eref{eq:scalprod} is written such that it resembles the form of the power integral in planes $z=\pm f$ in \Eref{flux_at_pmf}, so we can conveniently use the integrations carried out earlier. Thus, the integration plane $S$ is chosen at $z=+f$, directly before the collection lens. In the experiment, the excitation mode is matched to the collecting single mode fiber. Correspondingly, we define the target mode function $\vec{g}(x,y,z)$ to be the same as that of the excitation mode of $\vec{E}_\mathrm{F}$ in equation (\ref{physical_focused_field2}). With the normalization condition $\left \langle \vec{g}, \vec{g}\right\rangle=1$ we simply can set \begin{equation} \vec{g}=\vec{E}_\mathrm{F}/\sqrt{P_\mathrm{in}}\,. \end{equation} With this normalization, the square of the projection coefficient $a_c$ has the dimension of a power. Thus, the optical power of the field coupled into the fiber with a scattering atom present is given by \begin{eqnarray} P_\mathrm{out}&=&\left| \left \langle \vec{g}, \vec{E}_\mathrm{t} \right \rangle \right|^2= \left| \left \langle \vec{g}, \vec{E}_\mathrm{F}+ \vec{E}_\mathrm{sc} \right \rangle \right|^2 \nonumber \\ &=&\left|\left \langle \vec{g}, \vec{E}_\mathrm{F} \right \rangle +\left \langle \vec{g}, \vec{E}_\mathrm{sc} \right \rangle \right|^2\,.\label{eq:fiber_transmitted_power} \end{eqnarray} The first scalar product is determined by the mode normalization. The second one, $\left \langle \vec{g}, \vec{E}_\mathrm{sc} \right \rangle$, represents the projection of the scattered field onto the collection mode. Modulo the normalization constant $\sqrt{P_\mathrm{in}}$, it is identical to half the interference contribution in \Eref{Uf_int}, whose explicit expression was given in \Eref{Uf_int_result}. Therefore we find \begin{equation} 1-\epsilon={P_\mathrm{out}\over P_\mathrm{in}}={1\over P_\mathrm{in}}\left| \sqrt{P_\mathrm{in}}-{P_\mathrm{sc}/2\over\sqrt{P_\mathrm{in}}}\right|^2 = \left|1-{R_\mathrm{sc}\over2}\right|^2\,. \label{eq:exact_extinction2} \end{equation} In the weak focusing regime where $R_\mathrm{sc}\ll1$, this translates again into an extinction $\epsilon\approx R_\mathrm{sc}\,.$ For a focusing parameter $u=2.239$, we get a maximal extinction of $\epsilon_\mathrm{max}=0.926$. For the light scattered back into the excitation mode, we do not have to consider the field $\vec{E}_\mathrm{F}$, and arrive similarly at $P_\mathrm{back}=\left| \left \langle \vec{g}, \vec{E}_\mathrm{sc} \right \rangle \right|^2=P_\mathrm{in}R_\mathrm{sc}^2/4$, or a reflectivity of \begin{equation} R=R_\mathrm{sc}^2/4\,. \end{equation} \section{Experiment}\label{sec:experiment} In this section, we consider the results of our experiment where we measured the extinction of a Gaussian beam by a single $^{87}$Rb atom with different focusing strengths, and compare the results to the above theoretical model. 
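For orientation, the theory columns ($R_\mathrm{sc}$ and $\epsilon_\mathrm{theo}$) of Table~\ref{table:results_summary} below follow directly from \Eref{eq:absolute_scatteringrate} and \Eref{eq:exact_extinction2}; a minimal sketch (not the analysis code of the experiment) that reproduces them, up to rounding, from the incident waists reads:
\begin{verbatim}
from mpmath import gammainc, exp, inf

def R_sc(u):
    b = 1 / u**2
    return float(3 / (4 * u**3) * exp(2 * b)
                 * (gammainc(-0.25, b, inf) + u * gammainc(0.25, b, inf))**2)

f = 4.5                                   # focal length in mm
for w_L in (0.5, 1.1, 1.3, 1.4):          # incident waists in mm
    u = w_L / f
    r = R_sc(u)
    eps = 1 - (1 - r / 2)**2              # fiber-coupled extinction
    print(f"w_L = {w_L} mm: R_sc = {r:.4f}, eps_theo = {100 * eps:.2f} %")
\end{verbatim}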
A detailed description of the experimental setup is reported in~\cite{our_paper} and shown in Figure~\ref{experimental_setup}. \begin{figure} \caption{Schematic of the experimental setup.} \label{experimental_setup} \end{figure} Two aspheric lenses ($f=4.5$\,mm) are mounted in a UHV chamber in a confocal arrangement. A single $^{87}$Rb atom is localized in a far-off resonant dipole trap (FORT) that is formed by 980\,nm light at the focus of the lens pair. A probe beam is delivered from a single-mode fiber and focused onto the atom by one lens, and picked up by the other one. The confocal arrangement ensures that all of the incident probe power is collected in the absence of an atom, thus implementing the scheme discussed in the previous section. We use a circularly polarized probe to optically pump the atom into a closed cycling transition. After allowing some time for optical pumping, we measure the transmission of the probe beam, defined as the ratio of the count rate at detector D1 when the atom is present in the trap to the count rate when the atom is absent. Such a measurement is carried out for different probe frequencies to obtain the transmission spectrum of a single Rb atom. The spectrum is fitted to a Lorentzian with the resonant frequency, the full width at half maximum (FWHM) of the spectrum and its minimum value $T_\mathrm{min}$ on resonance as parameters. We obtained spectra for four different input waists of the probe, thus measuring extinctions for different focusing strengths. The observed FWHM never exceeds 7.7 MHz, which is close to the natural linewidth of the optical transition (6 MHz), so we conclude that the atom was successfully kept in a two-level system. For each probe frequency, the probe power was adjusted such that the atom scatters $\approx$ 2500 photons per second, well below saturation. The properties of various transmission spectra obtained with different probe incident waists are summarized in Table~\ref{table:results_summary}. \begin{table}[h!b!p!] \caption{Summary of transmission spectra of the probe for different focusing strengths $u$. Listed are $w_\mathrm{L}$: incident waist of the probe; $w_\mathrm{f}$ and $w_\mathrm{D}$: estimated paraxial focal waists of the probe beam and FORT, respectively; $\epsilon_\mathrm{max}$ and W: maximum observed extinction value and FWHM of the transmission spectrum; $R_\mathrm{sc}$: scattering ratio for the focusing parameter; $\epsilon_\mathrm{theo}$: expected extinction according to \Eref{eq:exact_extinction2}.} \label{table:results_summary} \begin{indented} \item[]\begin{tabular}{@{}llllllll} \br $w_\mathrm{L}$(mm) &$u$& $w_\mathrm{f}$($\mu$m) & $w_\mathrm{D}$($\mu$m) & $\epsilon_\mathrm{max}$ (\%) & W (MHz) &$R_\mathrm{sc}$&$\epsilon_\mathrm{theo}(\%)$\\ \mr 0.5 &0.11& 2.23 & 2.0 & 2.38 $\pm$ 0.03 & 7.1 $\pm$ 0.2 &0.0362& 3.58\\ 1.1 &0.24& 1.01 & 2.0 & 7.2 $\pm$ 0.1 & 7.4 $\pm$ 0.2 &0.1606& 15.41\\ 1.3 &0.29& 0.86 & 1.4 & 9.8 $\pm$ 0.2 & 7.5 $\pm$ 0.2 &0.2157& 20.40\\ 1.4 &0.31& 0.80 & 1.4 & 10.4 $\pm$ 0.1 & 7.7 $\pm$ 0.2 &0.2449& 22.99\\ \br \end{tabular} \end{indented} \end{table} We also carefully characterized the losses of the probe beam in its optical path to ensure that our measured extinctions are not exaggerated by interference artefacts, which can occur when certain elements in the transmission path attenuate the probe more strongly than the scattered light \cite{our_paper}. From point A to point B in Figure~\ref{experimental_setup} we measured 53--60\% transmission without the atom in the trap. 
The losses are mostly determined by a 21\% loss through the uncoated UHV chamber walls and a 17--24\% loss due to the coupling into the single-mode fiber at the transmission measurement channel. The coupling loss into the fiber increases as the input waist of the probe beam $w_\mathrm{L}$ increases. Almost all losses can be ascribed to reflections at optical surfaces, except for a 9--16\% re-coupling loss into a single-mode fiber due to mode mismatch. We are thus reasonably confident that our measurement is free from artefacts that may arise due to incomplete collection of the probe. \begin{figure} \caption{Experimentally measured extinction (black symbols) for several focusing strengths $u$ and predicted values for the extinction $\epsilon$ and reflectivity $R$ for coupling into single-mode optical fibers. The inset shows the prediction for much stronger focusing parameters.\label{fig:extinction_th+expt}} \end{figure} In Figure~\ref{fig:extinction_th+expt} we compare the extinctions obtained from the experiment with the values predicted by \Eref{eq:exact_extinction2}. Since the probe is recoupled into a single-mode fiber for every experimental point using a different lens matched to the respective probe waist, and the coupling lens has an NA of 0.55 (corresponding to $v=0.66$), we can safely neglect any clipping and use \Eref{eq:exact_extinction2} to estimate the extinction. The measured extinctions are clearly smaller than the predicted ones, especially for larger focusing strengths. We see a few possible reasons for this discrepancy. Firstly, the lenses we used in the experiment may not be sufficiently close to an ideal lens, since they were designed for a situation with an additional window in the focusing part, which we did not have in our experiment. Secondly, the interaction strength is significantly affected by the motion of the atom in the dipole trap. While we do not have an independent measure of the position fluctuations of the atom in the trap, measurements in a similar trap showed an atom temperature of around 100\,$\mu$K due to the overlapping MOT \cite{weber:2006}, slightly below the Doppler temperature of rubidium (143\,$\mu$K). With our trap frequencies of $\nu_\rho\approx70$\,kHz in the transverse and $\nu_z\approx20$\,kHz in the longitudinal direction, this results in position uncertainties of $\sigma_\rho\approx220$\,nm and $\sigma_z\approx780$\,nm, respectively. The scattering ratio $R_\mathrm{sc}$ is reduced due to the presence of the atom in regions with a reduced excitation field, and due to the spatially dependent detuning in the optical dipole trap. In the paraxial approximation, we find an approximate reduction of the scattering ratio due to the lower average field of \begin{equation} R_\mathrm{sc}'\approx R_\mathrm{sc}\left(1-2\sigma_\rho^2/w_\mathrm{f}^2\right)^2 \left(1-\sigma_z^2\lambda^2/(\pi^2w_\mathrm{f}^4)\right)\,, \end{equation} which results in a reduction of 2\% for $w_\mathrm{L}=0.5$\,mm to 23\% for $w_\mathrm{L}=1.4$\,mm. The reduction in $R_\mathrm{sc}$ and therefore in $\epsilon$ in our regime is approximately proportional to the temperature, so a doubling of the temperature alone would already explain the discrepancy between theory and experiment. On the other hand, the contribution due to a spatial variation of the resonance frequency is less than 1\%. Additionally, the presence of the atom away from the focal point reduces the efficiency of the optical pumping.
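The quoted position uncertainties can be reproduced by an elementary estimate (our own consistency check, assuming the atom thermalizes in an approximately harmonic trap at $T\approx100\,\mu$K): equipartition gives
\begin{equation}
\sigma_i=\frac{1}{2\pi\nu_i}\sqrt{\frac{k_\mathrm{B}T}{m_\mathrm{Rb}}}\,,
\end{equation}
with $m_\mathrm{Rb}$ the mass of $^{87}$Rb, which yields $\sigma_\rho\approx220$\,nm for $\nu_\rho\approx70$\,kHz and $\sigma_z\approx780$\,nm for $\nu_z\approx20$\,kHz, in agreement with the values used above.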
However, the observed extinction ratios still exceed the prediction of the parabolic wave front model in \cite{van_Enk:2001}. The inset of Figure~\ref{fig:extinction_th+expt} extrapolates the extinction we could expect for much stronger focusing; as mentioned earlier, the maximal extinction should reach 92.6\%; whether such lenses can be manufactured, however, remains an open question. We also show the reflectivity of the fiber-atom-fiber system, which should reach values large enough to be detected in an experimental setup. Extinction and reflectivity do not match in this configuration, which means that there is still a significant amount of light which is neither transmitted nor reflected back into the optical fiber. Such losses are unavoidable for the described coupling scheme, which still places the fiber-atom-fiber system at a disadvantage compared to an atom-cavity system in terms of the success probability of scattering into known modes. If it were possible to achieve a better overlap of the photonic modes with the dipole transition, such losses should be reduced. \section{Conclusion} We have demonstrated both theoretically and experimentally that a substantial coupling efficiency of a light beam to a single atom can be achieved by focusing the beam with a lens. By modifying the model given in \cite{van_Enk:2001}, we have constructed a focusing field compatible with the Maxwell equations that is suitable for the strong focusing regime. High values for the extinction of light (up to 92\%) by a two-level atom at rest at the focus are predicted under the assumption of a weak, resonant, coherent probe. Within the limitations of our current trap, our experimental results confirm the possibility of observing a substantial extinction already for relatively weak focusing. The measured extinction depends on the particular collection configuration of an experimental setup. It is thus not a fixed quantity for a given incident field. As such, the scattering ratio as defined by \Eref{scatt_ratio} is a better quantity to characterize the interaction strength between a weak coherent field and an atom in free space, even though it loses a simple physical interpretation in the strong focusing regime. These results may also be of interest for experiments with single molecules \cite{molecule1,molecule2} and quantum dots \cite{quantumdots}. \section{Power scattered by the atom in a coherent light field \label{App1}} We briefly repeat the results for light scattered by a two-level atom following \cite{cohen-tann:1992}. The steady-state population $\rho_{22}$ of the excited state in a two-level atom exposed to a monochromatic field can be obtained from the optical Bloch equations: \begin{equation} \rho_{22}=\frac{|\Omega|^2/4}{\delta^2+|\Omega|^2/2+\Gamma^2/4} \label{population} \end{equation} Therein, $\Gamma$ is the radiative decay rate of the excited state, \begin{equation} \Gamma=\frac{\omega_{12}^3|d_{12}|^2}{3\pi\epsilon_0\hbar c^3}\,, \label{decay} \end{equation} $\Omega=E_\mathrm{A}|d_{12}|/\hbar$ is the Rabi frequency, and $\delta=\omega-\omega_{12}$ is the detuning of the driving field with respect to the atomic transition frequency $\omega_{12}$. Therein, $E_\mathrm{A}$ is the excitation field amplitude at the location of the atom and $|d_{12}|$ is the electric dipole moment of the atom.
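For the parameter regime considered in this work it is helpful to record the relevant limit of \Eref{population} explicitly (a short intermediate step, spelled out here for convenience): for resonant excitation ($\delta=0$) and weak driving ($|\Omega|\ll\Gamma$),
\begin{equation}
\rho_{22}\approx\frac{|\Omega|^2}{\Gamma^2}\,,
\end{equation}
which is the approximation underlying the expressions for the scattered power given below.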
The optical power scattered by this atom is simply the product of energy splitting, decay rate and population of the excited state: \begin{equation} \label{scattered_power} P_\mathrm{sc}=\rho_{22}\Gamma\hbar\omega_{12} \end{equation} For weak ($\Omega\ll\Gamma$) on resonant ($\delta=0$) excitation, the scattered power becomes \begin{equation} \label{scatteredPower} P_\mathrm{sc}=\frac{3\epsilon_0c\lambda^2E_\mathrm{A}^2}{4\pi}. \end{equation} for an excitation field amplitude $E_A$ at the location of the atom. \section{Transformation of local polarization by the lens \label{App2}} To obtain the local polarization of the focusing field in~\Eref{physical_focused_field}, we consider a point P$(\rho,\phi,z)$ before the lens and an incident light field with polarization \begin{equation} \hat{\epsilon}_{in}=\hat{\epsilon}_+=\frac{\hat{x}+i\hat{y}}{\sqrt2}\,, \end{equation} or in the cylindrical basis, \begin{equation} \hat{\epsilon}_{in}=\frac{e^{i\phi}}{\sqrt2}\textrm{ }\hat{\rho}+\frac{ie^{i\phi}}{\sqrt2}\textrm{ }\hat{\phi}\,, \end{equation} where $\hat{\rho}=\cos{\phi}\textrm{ }\hat{x}+\sin{\phi}\textrm{ }\hat{y}$ and $\hat{\phi}=-\sin{\phi}\textrm{ }\hat{x}+\cos{\phi}\textrm{ }\hat{y}$ are two unit vectors along the radial and azimuthal directions respectively. The ideal lens leaves the azimuthal component unchanged but tilts the radial component such that the local polarization of the field right after the lens is perpendicular to the line FP in Figure~\ref{fig:geometry} (F is the focus point), that is: \begin{eqnarray} \hat{\epsilon}_{F}&=&\left(\frac{\cos{\theta}\,e^{i\phi}}{\sqrt2}\,\hat{\rho} + \frac{\sin{\theta}\,e^{i\phi}}{\sqrt2}\,\hat{z}\right) + \frac{ie^{i\phi}}{\sqrt2}\,\hat{\phi}\\ &=&\frac{1+\cos\theta}{2}\,\hat{\epsilon}_+ + \frac{\sin\theta e^{i\phi}}{\sqrt{2}}\,\hat{z} + \frac{\cos\theta-1}{2}e^{2i\phi}\,\hat{\epsilon}_-\,,\nonumber \end{eqnarray} where $\theta=\arctan(\rho/f)$ and $\hat{\epsilon}_-=(\hat{x}-i\hat{y})/\sqrt2$. \section{Decomposition of a field into modes with cylindrical symmetry\label{sec:cylmodes}} For completeness, directly following \cite{van_Enk:2001}, we outline the main properties of the cylindrical modes $\vec{F_\nu}$, which form a complete orthogonal set to compose an electric field that satisfies the source-free Maxwell equations, \begin{equation} \label{field_expansion} \vec{E}(t)=2\Re\left[\sum_\nu a_\nu\vec{F}_\nu e^{i\omega t}\right]\,, \end{equation} where the summation over $\nu$ is a short-hand notation for \begin{equation} \sum_\nu:=\int dk \int dk_z \sum_s \sum_m, \end{equation} and $a_\nu$ are arbitrary complex amplitudes. The modes are characterized by four indices $\nu:=(k,k_z,m,s)$, where $k=\frac{2\pi}{\lambda}$ is the wave vector modulus, $k_z=\vec{k}\cdot\hat{z}$ the wave vector component in $z$-direction, $m$ an integer-valued angular momentum index, and $s=\pm1$ the helicity. 
The dimensionless mode functions $\vec{F}_\nu$ in cylindrical coordinates $(\rho,z,\phi)$ given in \cite{vanenk:1994} are \begin{eqnarray} \fl \vec{F}_\nu (\rho,z,\phi)=&\frac{1}{4\pi}\frac{sk-k_z}{k}G(k,k_z,m+1)\hat{\epsilon}_- + \frac{1}{4\pi}\frac{sk+k_z}{k} G(k,k_z,m-1)\hat{\epsilon}_+ \nonumber\\ &- i\frac{\sqrt2}{4\pi}\frac{k_t}{k}G(k,k_z,m)\hat{z}\,, \end{eqnarray} where $k_t=\sqrt{k^2-k_z^2}$ is the transverse part of the wave vector, $\hat{\epsilon}_\pm=(\hat{x}\pm i \hat{y})/\sqrt2$ are the two circular polarization vectors, and \begin{equation} G(k,k_z,m)=J_m(k_t \rho)e^{ik_z z}e^{im\phi}, \end{equation} with $J_m$ the $m$-th order Bessel function. As we are interested in a monochromatic beam with a fixed value of $k=2\pi/\lambda$ propagating in the positive $z$ direction ($k_z>0$), the set of mode indices is reduced to $\mu:=(k_t,m,s)$ where, for convenience, $k_t$ is taken as a mode index instead of $k_z$. Now, we introduce the notation \begin{equation} \sum_\mu:= \int dk_t \sum_s \sum_m \end{equation} for a complete summation over all possible modes. For a fixed $k$ the modes $\vec{F}_\mu$ are orthogonal with respect to the scalar product \begin{equation}\label{orthogonalrelation1} \left\langle\vec{F}_\mu,\vec{F}_{\mu'}\right\rangle = \int_S\textrm{ }\vec{F}_\mu^\ast (\vec{r})\cdot\vec{F}_{\mu '}(\vec{r})\,dS = \delta(k_t-k_t')\delta_{mm'}\delta_{ss'}/(2\pi k_t)\,, \end{equation} where $S$ is a plane perpendicular to the $z$ axis. This scalar product can thus be used to find the amplitudes of the modes $\mu, \mu'$ in an arbitrary electric field compatible with the Maxwell equations. \section*{References} \end{document}
\begin{document} \title[Arcs and caps from cubic curves]{Complete arcs and complete caps from cubics with an isolated double point} \thanks{This research was supported by the Italian Ministry MIUR, Geometrie di Galois e strutture di incidenza, PRIN 2009--2010, by INdAM, and by Tubitak Proj. Nr. 111T234.} \author[N.Anbar, D.Bartoli, M. Giulietti, I.Platoni]{Nurdag\"ul Anbar \and Daniele Bartoli \and Massimo Giulietti \and Irene Platoni } \thanks{Nurdag\"ul Anbar - Faculty of Engineering and Natural Sciences - Sabanci University\\ Orhanli-Tuzla - 34956 Istanbul - Turkey. \email{[email protected]}} \thanks{Daniele Bartoli - Dipartimento di Matematica e Informatica - University of Perugia\\ Via Vanvitelli 1 - 06123 Perugia - Italy. \email{[email protected]}} \thanks{ Massimo Giulietti - Dipartimento di Matematica e Informatica - University of Perugia\\ Via Vanvitelli 1 - 06123 Perugia - Italy. \email{[email protected]} } \thanks{Irene Platoni - Dipartimento di Matematica - University of Trento\\ Via Sommarive, 14 - 38123, Povo (TN) - Italy. \email{[email protected]}} \begin{abstract} Small complete arcs and caps in Galois spaces over finite fields $\mathbb{F}_q$ with characteristic greater than $3$ are constructed from cubic curves with an isolated double point. For $m$ a divisor of $q+1$, complete plane arcs of size approximately $q/m$ are obtained, provided that $(m,6)=1$ and $m<\frac{1}{4}q^{1/4}$. If in addition $m=m_1m_2$ with $(m_1,m_2)=1$, then complete caps of size approximately $\frac{m_1+m_2}{m}q^{N/2}$ in affine spaces of dimension $N\equiv 0 \pmod 4$ are constructed. \end{abstract} \maketitle \section{Introduction} In an (affine or projective) space over a finite field, a cap is a set of points no three of which are collinear. A cap is said to be complete if it is maximal with respect to set-theoretical inclusion. Plane caps are usually called arcs. Arcs and caps have played an important role in Finite Geometry since the pioneering work by B. Segre \cite{MR0238846}. These objects are also relevant in Coding Theory, being the geometrical counterpart of distinguished types of error-correcting and covering linear codes. In this direction, an important issue is to provide explicit constructions of small complete caps in Galois spaces. In fact, complete caps correspond to quasi-perfect linear codes with covering radius $2$, so that the smaller the size of the cap, the better the density of the covering code. The trivial lower bound for the size of a complete cap in a Galois space of dimension $N$ and order $q$ is \begin{equation}\label{TrivialLB} \sqrt 2 q^{(N-1)/2}. \end{equation} If $q$ is even and $N$ is odd, this bound is substantially sharp; see \cite{MR1395760}. Otherwise, all known infinite families of complete caps have size far from \eqref{TrivialLB}; see the survey papers \cite{HS1,HirschfeldStorme2001} and the more recent works \cite{ABGP,Anb-Giu,BarGiuPrep,DAOS,MR2656392,Gjac,Gjcd,GPIEE}. For $q=p^s$ with $p>3$ a prime, the smallest explicitly described complete plane caps are due to Sz\H onyi, who constructed complete arcs with roughly $q/m$ points for any divisor $m$ of $q-1$ satisfying $m< \frac{1}{C} q^{1/4}$, where $C>1$ is a constant independent of $q$ \cite{MR1221589,TamasRoma}.\footnote{The condition of $m$ being a divisor of $q-1$ was not originally required in \cite{TamasRoma}, but is actually needed in order for the proof of a key lemma by Voloch to be correct; see Remark 4 in \cite{Anb-Giu}.}
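For the reader's convenience, we also recall the standard counting argument behind \eqref{TrivialLB} (a sketch only, with no attempt at optimal constants, stated for the affine case): a complete cap of size $k$ in $AG(N,q)$ has at most $\frac{k(k-1)}{2}$ secants, each containing $q$ points, and by completeness every point off the cap lies on at least one of them; hence
$$
\frac{k(k-1)}{2}\,q\ge q^N-k,
$$
which forces $k$ to be at least of the order of $\sqrt 2 q^{(N-1)/2}$.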
From Sz\H onyi's arcs, by using some lifting methods, complete caps of size roughly $q^{N/2}/\sqrt{m}$ in $AG(N,q)$, $N\equiv 0 \pmod 4$, are obtained in \cite{ABGPnodal,Anb-Giu}. The aim of this paper is to obtain similar results for the case where $m$ is a divisor of $q+1$, in order to significantly widen the range of $q$'s for which complete arcs in $AG(2,q)$ of size about $q^{3/4}$, as well as complete caps in $AG(N,q)$ with roughly $q^{(4N-1)/8}$ points, can actually be constructed. To this end, plane cubics with an isolated double point are considered. Let $G$ denote the abelian group of the non-singular $\mathbb{F}_q$-rational points of an irreducible plane cubic $\mathcal X$ defined over $\mathbb{F}_q$. It was already noted by Zirilli \cite{ZIR} that no three points in a coset $A$ of a subgroup $K$ of $G$ can be collinear, provided that the index $m$ of $K$ in $G$ is not divisible by $3$. Since then, arcs in cubics have been thoroughly investigated, as well as caps arising from these arcs by recursive constructions; see \cite{ABGP,ABGPnodal,Anb-Giu,FaPaSc,GFFA,HV,MR792577,MR1221589,TamasRoma,VOL,MR1075538}. However, no results about arcs and caps from cubics with an isolated double point have appeared so far. One of the problems that come up when dealing with these cubics is that the natural parametrization of the points of $A$, arising from the natural isomorphism between $G$ and the subgroup of order $q+1$ of the multiplicative group of $\mathbb F_{q^2}$, involves polynomial functions defined over $\mathbb F_{q^2}$ but not over $\mathbb{F}_q$. This rules out a straightforward application of the classical method by Segre \cite{MR0149361} and Lombardo Radice \cite{LombRad} for proving that a point $P$ off $\mathcal X$ is collinear with two points in $A$; in fact, such a method requires the algebraic curve $\mathcal C$ describing the collinearity with $P$ and two generic points in $A$ to be defined over $\mathbb{F}_q$. A key point of the paper is to overcome such a difficulty by finding a curve which is birationally equivalent to $\mathcal C$, but is defined over $\mathbb{F}_q$; see Lemmas \ref{irrmodificato} and \ref{troppe}. The main achievements here are Theorems \ref{unouno} and \ref{duedue}. For a divisor $m$ of $q+1$ such that $(m,6)=1$ and $m\le \sqrt[4]{q}/4$, we explicitly describe a complete arc of size approximately $m+\frac{q+1}{m}$; if in addition $m$ admits a non-trivial factorization $m=m_1m_2$ with $(m_1,m_2)=1$, we also provide complete caps of size approximately $\frac{m_1+m_2}{m}q^{N/2}$ in affine spaces $AG(N,q)$ with dimension $N\equiv 0 \pmod 4$. The paper is organized as follows. In Section 2 we review some of the standard facts on curves and algebraic function fields. We also briefly sketch a recursive construction from \cite{Gjac} of complete caps from bicovering arcs, that is, arcs for which completeness holds in a stronger sense; see Definition \ref{bico}. Section 3 presents some preliminary results on the algebraic curve describing the collinearity with $P$ and two generic points in $A$. The proof that under our assumptions on $m$ almost every point $P$ not on $\mathcal X$ is bicovered by the secants of $A$ is the main object of Section 4; see Propositions \ref{archinuovi}, \ref{archinuoviVAR}, and \ref{archibishop}. The case where $P$ lies on $\mathcal X$ is dealt with in Proposition \ref{bicointerni}. Finally, the proof of our main results is completed in Section 5.
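Before going into the details, it is perhaps worth making the observation behind Zirilli's remark explicit, in the form in which it will be used (our paraphrase): if the neutral element of $G$ is chosen at an inflection point of $\mathcal X$, then three distinct non-singular points of $\mathcal X$ are collinear if and only if they sum to the neutral element. Now, for points $P_i=T+k_i$, $i=1,2,3$, of a coset $A=K+T$ with $T\notin K$, one has
$$
P_1+P_2+P_3=3T+(k_1+k_2+k_3)\in 3T+K;
$$
if $3T$ belonged to $K$, the order of $T$ in $G/K$ would divide both $3$ and the index $m$, forcing $T\in K$. Hence, when $(m,3)=1$, the coset $3T+K$ is different from $K$, the sum cannot be the neutral element, and no three points of $A$ are collinear.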
\section{Preliminaries} Let $q$ be an odd prime power, and let $\mathbb{F}_q$ denote the finite field with $q$ elements. Throughout the paper, $\mathbb{K}$ will denote the algebraic closure of $\mathbb{F}_q$. \subsection{Curves and function fields} Let $\mathcal C$ be a projective absolutely irreducible algebraic curve, defined over the algebraic closure $\mathbb{K}$ of $\mathbb{F}_q$. An \textit{algebraic function field $F$ over $\mathbb{K}$} is an extension $F$ of $\mathbb{K}$ such that $F$ is a finite algebraic extension of $\mathbb{K}(x)$, for some element $x\in F$ transcendental over $\mathbb{K}$. If $F=\mathbb{K}(x)$, then $F$ is called the \textit{rational function field over $\mathbb{K}$}. For basic definitions on function fields we refer to \cite{STI}. It is well known that to any curve $\mathcal C$ defined over $\mathbb{K}$ one can associate a function field $\mathbb{K}(\mathcal C)$ over $\mathbb{K}$, namely the field of the rational functions of $\mathcal C$. Conversely, to a function field $F$ over $\mathbb{K}$ one can associate a curve $\mathcal C$, defined over $\mathbb{K}$, such that $\mathbb{K}(\mathcal C)$ is $\mathbb{K}$-isomorphic to $F$. The genus of $F$ as a function field coincides with the genus of $\mathcal C$. A place $\alphamma$ of $\mathbb{K}(\mathcal C)$ can be associated to a single point of $\mathcal C$ called the \textit{center} of $\alphamma$, but not vice versa. A bijection between places of $\mathbb{K}(\mathcal C)$ and points of $\mathcal C$ holds provided that the curve $\mathcal C$ is non-singular. Let $F$ be a function field over $\mathbb{K}$. If $F'$ is a finite extension of $F$, then a place $\alphamma '$ of $F'$ is said to be \textit{lying over} a place $\alphamma$ of $F$ , if $\alphamma\subset \alphamma '$. This holds precisely when $\alphamma = \alphamma ' \cap F$. In this paper $e\left(\alphamma ' | \alphamma \right)$ will denote the \textit{ramification index} of $\alphamma '$ over $\alphamma$. A finite extension $F'$ of a function field $F$ is said to be {\em unramified} if $e(\alphamma'|\alphamma)=1$ for every $\alphamma'$ place of $F'$ and every $\alphamma$ place of $F$ with $\alphamma'$ lying over $\alphamma$. \begin{proposition}[Proposition 3.7.3 in \cite{STI}]\label{teo1} Let $F$ be an algebraic function field over $\mathbb{K}$, and let $m>1$ be an integer relatively prime to the characteristic of $\mathbb{K}$. Suppose that $u\in F$ is an element satisfying $ u \neq \omega^e\mbox{ for all }\omega \in F \mbox{ and } e|m\mbox{, }e>1. $ Let \begin{equation} F'=F(y)\mbox{ with }y^m=u. \end{equation} Then \begin{itemize} \item[(i)] for $\alphamma'$ a place of $F'$ lying over a place $\alphamma$ of $F$, we have $ e(\alphamma'| \alphamma)=\frac{m}{r_\alphamma} $ where \begin{equation}\label{eq55} r_\alphamma:=(m,v_{\alphamma}(u))>0 \end{equation} is the greatest common divisor of $m$ and $v_\alphamma(u)$; \item[(ii)] if $g$ (resp. $g'$) denotes the genus of $F$ (resp. $F'$) as a function field over $\mathbb{K}$, then $$ g'=1+m\left( g-1+\frac{1}{2}\displaystyle\sum_{\alphamma} \left(1-\frac{r_\alphamma}{m}\right) \right), $$ where $\alphamma$ ranges over the places of $F$ and $r_\alphamma$ is defined by \eqref{eq55}. \end{itemize} \end{proposition} An extension such as $F'$ in Proposition \ref{teo1} is said to be a {\em Kummer extension} of $F$. A curve $\mathcal C$ is said to be defined over $\mathbb{F}_q$ if the ideal of $\mathcal C$ is generated by polynomials with coefficients in $\mathbb{F}_q$. 
In this case, $\mathbb{F}_q(\mathcal C)$ denotes the subfield of $\mathbb{K}(\mathcal C)$ consisting of the rational functions defined over $\mathbb{F}_q$. A place of $\mathbb{K}(\mathcal C)$ is said to be $\mathbb{F}_q$-rational if it is fixed by the Frobenius map on $\mathbb{K}(\mathcal C)$. The center of an $\mathbb{F}_q$-rational place is an $\mathbb{F}_q$-rational point of $\mathcal C$; conversely, if $P$ is a simple $\mathbb{F}_q$-rational point of $\mathcal C$, then the only place centered at $P$ is $\mathbb{F}_q$-rational. The following result is a corollary to Proposition \ref{teo1}. \begin{proposition}\label{teo1cor} Let $\mathcal C$ be an irreducible plane curve of genus $g$ defined over $\mathbb{F}_q$. Let $u\in \mathbb{F}_q(\mathcal C)$ be a non-square in $\mathbb{K}(\mathcal C)$. Then the Kummer extension $\mathbb{K}(\mathcal C)(w)$, with $w^2=u$, is the function field of some irreducible curve defined over $\mathbb{F}_q$ of genus $$ g'=2g-1+\frac{M}{2}, $$ where $M$ is the number of places of $\mathbb{K}(\mathcal C)$ with odd valuation of $u$. \end{proposition} The function field $\mathbb{K}(\mathcal C)(w)$ as in Proposition \ref{teo1cor} is said to be a {\em double cover} of $\mathbb{K}(\mathcal C)$ (and similarly the corresponding irreducible curve defined over $\mathbb{F}_q$ is called a double cover of $\mathcal C$). Finally, we recall the Hasse-Weil bound, which will play a crucial role in our proofs. \begin{proposition}[Hasse-Weil Bound - Theorem 5.2.3 in \cite{STI}]\label{HaWe} The number $N_q$ of $\mathbb{F}_q$-rational places of the function field $\mathbb{K}(\mathcal C)$ of a curve $\mathcal C$ defined over $\mathbb{F}_q$ with genus $g$ satisfies $$ |N_q-(q+1)|\le 2g \sqrt q. $$ \end{proposition} \subsection{Complete caps from bicovering arcs} Throughout this section, $N$ is assumed to be a positive integer divisible by $4$. Let $q'=q^\frac{N-2}{2}$. Fix a basis of $\mathbb{F}_qpr$ as a linear space over $\mathbb{F}_q$, and identify points in $AG(N,q)$ with vectors of $\mathbb{F}_qpr\times \mathbb{F}_qpr \times \mathbb{F}_q \times \mathbb{F}_q$. For an arc ${\mathcal A}$ in $AG(2,q)$, let $$C_{\mathcal A}=\{(\alpha,\alpha^2,u,v)\in AG(N,q)\mid \alpha \in \mathbb{F}_qpr\,,\,\,\,(u,v)\in {\mathcal A}\}\,.$$ As noticed in \cite{Gjac}, the set $C_{\mathcal A}$ is a cap whose completeness in $AG(N,q)$ depends on the bicovering properties of ${\mathcal A}$ in $AG(2,q)$, defined as follows. According to Segre \cite{MR0362023}, given three pairwise distinct points $P,P_1,P_2$ on a line $\ell$ in $AG(2,q)$, $P$ is external or internal to the segment $P_1P_2$ depending on whether \begin{equation}\label{exto} (x-x_1)(x-x_2)\quad \text{is a non-zero square or a non-square in }\mathbb{F}_q, \end{equation} where $x$, $x_1$ and $x_2$ are the coordinates of $P$, $P_1$ and $P_2$ with respect to any affine frame of $\ell$. \begin{definition}\label{bico} Let ${\mathcal A}$ be a complete arc in $AG(2,q)$. A point $P\in AG(2,q)\setminus {\mathcal A}$ is said to be bicovered by ${\mathcal A}$ if there exist $P_1,P_2,P_3,P_4\in {\mathcal A}$ such that $P$ is both external to the segment $P_1P_2$ and internal to the segment $P_3P_4$. If every $P\in AG(2,q)\setminus {\mathcal A}$ is bicovered by ${\mathcal A}$, then ${\mathcal A}$ is said to be a bicovering arc. If there exists precisely one point $Q\in AG(2,q)\setminus {\mathcal A}$ which is not bicovered by ${\mathcal A}$, then ${\mathcal A}$ is said to be almost bicovering, and $Q$ is called the center of ${\mathcal A}$. 
\end{definition} A key tool in this paper is the following result from \cite{Gjac}. \begin{proposition}\label{mainP} Let $\tau$ be a non-square in $\mathbb{F}_q$. If ${\mathcal A}$ is a bicovering $k$-arc, then $C_{\mathcal A}$ is a complete cap in $AG(N,q)$ of size $kq^{(N-2)/2}$. If ${\mathcal A}$ is almost bicovering with center $Q=(x_0,y_0)$, then either $$C=C_{\mathcal A} \cup \{(\alpha,\alpha^2-\tau,x_0,y_0)\mid \alpha \in \mathbb{F}_qpr\}$$ or $$C=C_{\mathcal A} \cup \{(\alpha,\alpha^2-\tau^2,x_0,y_0)\mid \alpha \in \mathbb{F}_qpr\}$$ is a complete cap in $AG(N,q)$ of size $(k+1)q^{(N-2)/2}$. The former case occurs precisely when $Q$ is external to every secant of ${\mathcal A}$ through $Q$. \end{proposition} \section{A family of curves defined over $\mathbb{F}_q$}\label{secfam} Throughout this section $q=p^h$ for some prime $p>3$, and $m$ is a proper divisor of $q+1$ with $(m,6)=1$. Also, $\bar t$ is a non-zero element in $\mathbb{F}_{q^2}$ which is not an $m$-th power in $\mathbb{F}_{q^2}$. Let $A,B\in \mathbb{F}_{q^2}$ with $AB\neq (A-1)^3$. An important role for the present investigation is played by the curve \begin{equation}\label{curva2bis} \mathcal C_{A,B,\bar t,m}: f_{A,B,\bar t,m}(X,Y)=0, \end{equation} where \begin{equation}\label{curva2} \begin{array}{rcl} f_{A,B,\bar t,m}(X,Y)& = &A(\bar t^3X^{2m}Y^m+\bar t^3X^mY^{2m}-3\bar t^2X^{m}Y^m+1) -B\bar t^2X^mY^m\\ & & -\bar t^4X^{2m}Y^{2m}+3\bar t^2X^mY^m-\bar tX^m-\bar tY^m. \end{array} \end{equation} The curve $\mathcal C_{A,B,\bar t,m}$ was thoroughly investigated in \cite{ABGPnodal}. \begin{proposition}[Case 2 of Proposition 9 in \cite{ABGPnodal}]\label{P26apr13} Let $A,B$ be such that \begin{itemize} \item $AB\neq (A-1)^3$; \item $A\neq 0$; \item either $A^3\neq -1$ or $B\neq 1-(A-1)^3$. \end{itemize} Then the curve $\mathcal C_{A,B,\bar t,m}$ is absolutely irreducible of genus $g\le 3m^2-3m+1$. \end{proposition} Under the assumptions of Proposition \ref{P26apr13}, let $\bar x$ and $\bar y$ denote the rational functions associated to the affine coordinates $X$ and $Y$, respectively. Then $\mathbb{K}(\mathcal C_{A,B,\bar t,m})=\mathbb{K}(\bar{x},\bar{y})$ with $f_{A,B,\bar t,m}(\bar x,\bar y)=0$. Let $\bar u=\bar x^m$ and $\bar z=\bar y^m$. The following results from \cite{ABGPnodal} about the function field extension $\mathbb{K}(\bar x,\bar y):\mathbb{K}(\bar u,\bar z)$ will be needed. \begin{proposition}[Lemma 4 in \cite{ABGPnodal}]\label{clear} In the function field $\mathbb{K}(\bar u,\bar z)$, there exist six places $\alphamma_j$, $j=1,\ldots,6$, such that $$ {\rm div}(\bar u)=\alphamma_4+\alphamma_5-\alphamma_1-\alphamma_2, \qquad {\rm div}(\bar z)=\alphamma_2+\alphamma_6-\alphamma_3-\alphamma_4. $$ \end{proposition} \begin{proposition}[Case 2 of Proposition 9 in \cite{ABGPnodal}]\label{clear2} For each $j=1,\ldots, 6$, the ramification index of $\alphamma_j$ in the extension $\mathbb{K}(\bar x,\bar y)$ over $\mathbb{K}(\bar u,\bar z)$ is equal to $m$, and no other place of $\mathbb{K}(\bar u,\bar z)$ is ramified. \end{proposition} According to \cite{ABGPnodal}, for $j=1,\ldots,6$, let ${\bar {\bar{\alphamma}}}_j^1, \ldots,{\bar {\bar{\alphamma}}}_j^m$ denote the places of $\mathbb{K}(\bar x, \bar y)$ lying over the place $\alphamma_j$ of $\mathbb{K}(\bar u, \bar z)$. 
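We also note in passing, as a consistency check which will not be needed in the sequel: since each of the six places of $\mathbb{K}(\bar u,\bar z)$ singled out in Proposition \ref{clear} has exactly $m$ places of $\mathbb{K}(\bar x,\bar y)$ lying over it, each with ramification index $m$ and residue degree $1$, the fundamental equality $\sum e\cdot f=[\mathbb{K}(\bar x,\bar y):\mathbb{K}(\bar u,\bar z)]$ gives
$$
[\mathbb{K}(\bar x,\bar y):\mathbb{K}(\bar u,\bar z)]=m^2,
$$
as expected for the extension obtained by adjoining to $\mathbb{K}(\bar u,\bar z)$ the $m$-th roots $\bar x$ of $\bar u$ and $\bar y$ of $\bar z$.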
\begin{proposition}[Case 2 of Proposition 9 in \cite{ABGPnodal}]\label{clear3} In $\mathbb{K}(\bar x,\bar y)$, \begin{equation}\label{26apr} {\rm div}\big((A-\bar t\bar x^m)(A-\bar t\bar y^m)\big)=m\betaig(\sum_{i=1}^m({\bar{\bar {\alphamma}}}_5^i+{\bar{\bar {\alphamma}}}_6^i-{\bar{\bar {\alphamma}}}_4^i-{\bar{\bar {\alphamma}}}_2^i)\betaig). \end{equation} \end{proposition} In order to investigate the bicovering properties of a coset of index $m$ in the abelian group of the non-singular $\mathbb{F}_q$-rational points of a cubic with an isolated double point we need to establish whether $$\frac{(A-\bar t\bar x^m)(A-\bar t\bar y^m)}{(1-\bar t\bar x^m)(1-\bar t\bar y^m)}$$ is a square in $\mathbb{K}(\bar x, \bar y)$. \begin{proposition}\label{exiisolated} Assume that $A$ and $B$ satisfy the conditions of Proposition {\rm{\ref{P26apr13}}}. For $d\in \mathbb{K}$, $d\neq 0$, let $$ \eta=d\frac{(A-\bar t\bar x^m)(A-\bar t\bar y^m)}{(1-\bar t\bar x^m)(1-\bar t\bar y^m)}. $$ If $A\neq 1$, then \begin{itemize} \item [{\rm{(i)}}] the divisor of $\eta$ is $$ m\sum_{i=1}^m({\bar{\bar \alphamma}}_5^i+{\bar{\bar \alphamma}}_6^i+{\bar{\bar \alphamma}}_1^i+{\bar{\bar \alphamma}}_3^i)-{\bar{\bar{D}}}, $$ where ${\bar{\bar{D}}}$ is a divisor of degree $4m^2$ whose support consists of places not lying over any place in $\{\alphamma_j\mid j=1,\ldots,6\}$; \item [{\rm{(ii)}}] the function field $\mathbb{K}(\bar x,\bar y,\bar w)$ with $\bar w^2=\eta$ is a Kummer extension of $\mathbb{K}(\bar x,\bar y)$; \item [{\rm{(iii)}}] the genus of the function field $\mathbb{K}(\bar x,\bar y,\bar w)$ is less than or equal to $8m^2-4m+1$. \end{itemize} \end{proposition} \begin{proof} By Propositions \ref{clear}, from $A\neq 1$ it is easy to deduce that the divisor of $1- \bar t \bar u$ in $\mathbb{K}(\bar u,\bar z)$ is $$ -\alphamma_1-\alphamma_2+ D_1, $$ where $D_1$ is the degree-$2$ divisor of the zeros of $1- \bar t \bar u$. Similarly, $$ {\rm div}(1- \bar t \bar z)=-\alphamma_3-\alphamma_4+ D_2, $$ and hence in $\mathbb{K}(\bar u,\bar z)$ we have $$ {\rm div}((1- \bar t\bar u)(1- \bar t\bar z))=-\alphamma_1-\alphamma_2-\alphamma_3-\alphamma_4+D, $$ where $D$ is a divisor of degree $4$ whose support is disjoint from $\{\alphamma_i\mid i=1,\ldots,6\}$. Therefore, by Proposition \ref{clear2}, $$ {\rm div}((1- \bar t \bar x^m)( 1- \bar t \bar y^m))=m\sum_{i=1}^m(-{\bar{\bar \alphamma}}_1^i-{\bar{\bar \alphamma}}_2^i-{\bar{\bar \alphamma}}_3^i-{\bar{\bar \alphamma}}_4^i)+{\bar{\bar{D}}}, $$ where ${\bar{\bar{D}}}$ is a divisor of degree $4m^2$ whose support is disjoint from the set of places lying over $\{\alphamma_i\mid i=1,\ldots,6\}$. Then by Proposition \ref{clear3} the divisor of $\eta$ is $$ m\sum_{i=1}^m({\bar{\bar \alphamma}}_5^i+{\bar{\bar \alphamma}}_6^i+{\bar{\bar \alphamma}}_1^i+{\bar{\bar \alphamma}}_3^i)-{\bar{\bar{D}}}. $$ This proves (i). As $\eta$ is not a square in $\mathbb{K}(\bar x, \bar y)$, assertion (ii) holds as well. Finally, Proposition \ref{teo1} yields (iii). \end{proof} \section{Covering properties of certain subsets of $\mathcal X$}\label{covvi} Throughout this section we fix an element $\beta$ in $\mathbb{F}_{q^2}\setminus \mathbb{F}_q$ such that $\beta^2\in \mathbb{F}_q$. Let $\mathcal X$ be the plane cubic with equation $$ Y(X^2-\beta^2)=1. $$ The point $Y_\infty$ is an isolated double point with tangents $X=\pm \beta$, and $X_\infty$ is an inflection point with tangent $Y=0$. 
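It may be worth spelling out why the double point is isolated (a direct verification, included for completeness): homogenizing the equation of $\mathcal X$ as $YX^2-\beta^2YZ^2=Z^3$ and passing to the affine chart $Y=1$, the point $Y_\infty$ becomes the origin of the affine curve
$$
X^2-\beta^2Z^2-Z^3=0,
$$
whose tangent cone $X^2-\beta^2Z^2=(X-\beta Z)(X+\beta Z)$ consists of two distinct lines which are defined over $\mathbb{F}_{q^2}$ but not over $\mathbb{F}_q$, and which are interchanged by the Frobenius map since $\beta^q=-\beta$; that is, $Y_\infty$ is a double point with no $\mathbb{F}_q$-rational tangent.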
We choose $X_\infty$ as the neutral element of the abelian group $(\mathcal X\setminus \{Y_\infty\},\oplus)$ of the non-singular points of $\mathcal X$. For $v \in \mathbb{K} \setminus \{0,1\}$, let $Q_v$ be the point on $\mathcal X$ with affine coordinates $\big(\frac{v+1}{v-1}\beta, \frac{(v-1)^2}{4v\beta^2}\big)$. Also, let $Q_0=Y_\infty$ and $Q_1=X_\infty$. Such a parametrization actually defines an isomorphism between $(\mathcal X\setminus \{Y_\infty\},\oplus)$ and the multiplicative group of $\mathbb{K}$. In fact, it is straightforward to check that for $v,w \in \mathbb{K}^*$, \begin{equation}\label{8feb} Q_v\oplus Q_w=Q_{vw}. \end{equation} The $(q+1)$ non-singular $\mathbb{F}_q$-rational points of $\mathcal X$ form a cyclic subgroup $G$ of $(\mathcal X\setminus \{Y_\infty\},\oplus)$. It is easily seen that $$ G=\{Q_{\frac{u+\beta}{u-\beta}}\mid u \in \mathbb{F}_q \} \cup \{X_\infty\}. $$ For a divisor $m$ of $q+1$, the group $G$ has precisely one subgroup $K$ of index $m$, consisting of the $m$-th powers in $G$. By \eqref{8feb}, $$ K=\big\{Q_{(\frac{u+\beta}{u-\beta})^m}\mid u \in \mathbb{F}_q \big\} \cup \{X_\infty\}. $$ Let $T=Q_{\bar t}$ be a point in $G\setminus K$ and let $K_T$ be the coset $K\oplus T$. Then \begin{equation}\label{descri} K_T=\big\{Q_{\bar t (\frac{u+\beta}{u-\beta})^m}\mid u \in \mathbb{F}_q\big\} \cup \{Q_{\bar t}\}. \end{equation} Throughout this section $a,b$ are elements in $\mathbb{F}_q$ with $b(a^2-\beta^2)\neq 1$, and $P$ is the point in $AG(2,q)\setminus \mathcal X$ with affine coordinates $(a,b)$. We also assume that $(m,6)=1$. Let $$ g_{a,b}(X,Y):=bX^2Y^2 - (b\beta^2+1) (X^2+Y^2) - XY +a(X + Y) + \beta^2(b\beta^2 +1), $$ and $$ L_{a,b,\bar t, m}(X,Y)=(\bar tX^m-1)^2(\bar t Y^m-1)^2g_{a,b}(\beta\frac{\bar tX^m+1}{\bar tX^m-1},\beta\frac{\bar tY^m+1}{\bar tY^m-1}). $$ \begin{lemma}\label{alldebolenuova} Let $(x,y)$ be an affine point of the curve $L_{a,b,\bar t, m}(X,Y)=0$. If $$(\bar tx^m-1)(\bar t y^m-1)(x^m-y^m)\neq 0,$$ then $P$ is collinear with $Q_{\bar t x^m}$ and $Q_{\bar t y^m}$. \end{lemma} \begin{proof} We first note that for $u,v$ distinct elements in $\mathbb{K}\setminus \{\pm \beta\}$, the point $P$ is collinear with $(u,\frac{1}{u^2-\beta^2})$ and $(v,\frac{1}{v^2-\beta^2})$ if and only if $g_{a,b}(u,v)=0$. In fact, $$\det\left( \begin{array}{ccc} u&\frac{1}{u^2-\beta^2}&1\\ v&\frac{1}{v^2-\beta^2}&1\\ a&b&1\\ \end{array}\right) $$ is equal to $$\frac{1}{(u^{2}-\beta^2)(v^{2}-\beta^2)}(v-u)[bu^{2}v^{2}-(b\beta^{2}+1)(u^{2}+v^{2})-uv+a(u+v)+b\beta^4+\beta^{2}].$$ It is straightforward to check that $Q_{\bar tx^m}$ coincides with $(u,\frac{1}{u^2-\beta^2})$ precisely when $u=\beta\frac{\bar t x^m+1}{\bar t x^m-1}$. Then the claim follows by the definition of $L_{a,b,\bar t, m}$. \end{proof} The curve with equation $L_{a,b,\bar t,m}(X,Y)=0$ actually belongs to the family described in Section \ref{secfam}. \begin{lemma}\label{l55} Let $$ A=\frac{a+\beta}{a-\beta}, \qquad B=\frac{8b\beta^3}{a-\beta}. $$ Then $$ L_{a,b,\bar t, m}(X,Y)=-2\beta(a-\beta)f_{A,B,\bar t,m}(X,Y) $$ where $f_{A,B,\bar t,m}$ is defined as in \eqref{curva2}. \end{lemma} \begin{proof} The proof is a straightforward computation. \end{proof} Henceforth, $\sqrt{-3}$ will denote a fixed square root of $-3$ in $\mathbb{F}_{q^2}$. 
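Before stating the next lemma, we record explicitly how the standing assumption $b(a^2-\beta^2)\neq 1$ enters (a one-line computation, with $A$ and $B$ as in Lemma \ref{l55}):
$$
AB-(A-1)^3=\frac{8\beta^3}{(a-\beta)^3}\big(b(a^2-\beta^2)-1\big),
$$
so that $AB=(A-1)^3$ would hold precisely when $b(a^2-\beta^2)=1$, which is excluded throughout this section.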
\begin{lemma} If \begin{equation}\label{condizione} (a,b)\notin \betaig\{(0,-\frac{9}{8\beta^2}), (\beta \sqrt{-3},0), (-\beta \sqrt{-3},0) \betaig\} \end{equation} then $L_{a,b,\bar t, m}(X,Y)=0$ is an absolutely irreducible curve with genus less than or equal to $3m^2-3m+1$. \end{lemma} \begin{proof} For $A,B$ as in Lemma \ref{l55}, let $\mathcal C_{A,B,\bar t,m}$ be as in \eqref{curva2bis}. By Lemma \ref{l55}, the curve $L_{a,b,\bar t, m}(X,Y)=0$ is actually $\mathcal C_{A,B,\bar t,m}$. Note that $m$ divides $q^2-1$ and that each coefficient of $f_{A,B,\bar t, m}(X,Y)$ lies in $\mathbb{F}_{q^2}$. Then by Proposition \ref{P26apr13} the curve $\mathcal C_{A,B,\bar t,m}$ is absolutely irreducible of genus $g\le 3m^2-3m+1$, provided that none of the following holds: \begin{itemize} \item[(1)] $AB=(A-1)^3$; \item[(2)] $A=0$; \item[(3)] $A^3=-1$ and $B=1-(A-1)^3$. \end{itemize} Case (1) cannot occur as $b(a^2-\beta^2)\neq 1$. Also, $a \in \mathbb{F}_q$ implies $a+\beta \neq 0$, which rules out (2). Assume then that (3) holds. Then $A^3=-1$ implies $a(a^2+3\beta^2)=0$. From $B=1-(A-1)^3$ we deduce $$b=3\frac{a^2+3\beta^2}{8\beta^{2}(\beta a-\beta^2)}.$$ Then either $(a,b)=(0,-\frac{9}{8\beta^2})$ or $(a,b)=(\pm\beta \sqrt{-3},0)$, a contradiction. \end{proof} \begin{remark}\label{quattordici} Let $q=p^s$ with $p>3$ a prime. Then $-3$ is a non-square in $\mathbb{F}_q$ if and only if $s$ is odd and $p\equiv 2 \pmod 3$; see e.g. {\rm \cite[Lemma 4.5]{GFFA}}. \end{remark} In order to show that if \eqref{condizione} holds then $P$ is collinear with two points in $K_T$, we need to ensure the existence of a point $(x,y)$ of the curve $L_{a,b,\bar t, m}(X,Y)=0$ such that $Q_{\bar tx^m}$ and $Q_{\bar t y^m}$ are distinct points in $K_T$. To this end, it is useful to consider a curve which is birationally equivalent to $L_{a,b,\bar t, m}(X,Y)=0$, but, unlike $L_{a,b,\bar t, m}(X,Y)=0$, is defined over $\mathbb{F}_q$. Let $$ M_{a,b,\bar t, m}(R,V):=(R-\beta)^{2m}(V-\beta)^{2m}L_{a,b,\bar t,m}\betaig(\frac{R+\beta}{R-\beta},\frac{V+\beta}{V-\beta}\betaig)=0. $$ \begin{lemma}\label{irrmodificato} If \eqref{condizione} holds, then $M_{a,b,\bar t,m}(R,V)=0$ is an absolutely irreducible curve birationally equivalent to $L_{a,b,\bar t,m}(X,Y)=0$. \end{lemma} \begin{proof} Let $\mathbb{K}(\bar x,\bar y)$ be the function field of $L_{a,b,\bar t,m}(X,Y)=0$, so that $L_{a,b,\bar t,m}(\bar x,\bar y)=0$. Both the degrees of the extensions $\mathbb{K}(\bar x,\bar y):\mathbb{K}(\bar x)$ and $\mathbb{K}(\bar x,\bar y):\mathbb{K}(\bar y)$ are equal to $2m$. Let $$ \bar r:=\beta \frac{\bar x+1}{\bar x-1},\qquad \bar v:=\beta\frac{\bar y+1}{\bar y-1}. $$ Then $M_{a,b,\bar t,m}(\bar r,\bar v)=0$. As $$ \bar x= \frac{\bar r+\beta}{\bar r-\beta},\qquad \bar y=\frac{\bar v+\beta}{\bar v-\beta} $$ we have $$ \mathbb{K}(\bar x,\bar y)=\mathbb{K}(\bar r,\bar v),\qquad \mathbb{K}(\bar x)=\mathbb{K}(\bar r),\qquad \mathbb{K}(\bar y)=\mathbb{K}(\bar v). $$ Therefore, both the degrees of the extensions $\mathbb{K}(\bar r,\bar v):\mathbb{K}(\bar r)$ and $\mathbb{K}(\bar r,\bar v):\mathbb{K}(\bar v)$ are equal to $2m$. As the degrees of $M_{a,b,\bar t,m}(R,V)$ in both $R$ and $V$ are also equal to $2m$, the polynomial $M_{a,b,\bar t, m}(R,V)$ cannot be reducible. \end{proof} \begin{lemma}\label{troppe} The curve with equation $M_{a,b,\bar t,m}(R,V)=0$ is defined over $\mathbb{F}_q$. 
\end{lemma} \begin{proof} We are going to show that up to a scalar factor in $\mathbb{K}^*$ the coefficients of $M_{a,b,\bar t,m}(R,V)$ lie in $\mathbb{F}_q$. Consider the following polynomials in $\mathbb{F}_{q^2}[Z]$: $$ \theta_1(Z)=(Z+\beta)^m+(Z-\beta)^m,\qquad \theta_2(Z)=\frac{1}{\beta}((Z+\beta)^m-(Z-\beta)^m), $$ Let $$ t=\beta \frac{\bar t+1}{\bar t-1}, $$ As both $t$ and $\beta^2$ belong to $\mathbb{F}_q$, the polynomials \begin{equation}\label{hl} h(Z)=t\theta_1(Z)+\beta^2\theta_2(Z),\qquad l(Z)=\theta_1(Z)+t\theta_2(Z) \end{equation} actually lie in $\mathbb{F}_q[Z]$. Taking into account that $ t=\beta\frac{\bar t+1}{\bar t-1}$, a straightforward computation gives \begin{equation}\label{keykey} \bar t\betaig(\frac{Z+\beta}{Z-\beta}\betaig)^m=\frac{\frac{h(Z)}{l(Z)}+\beta}{\frac{h(Z)}{l(Z)}-\beta}. \end{equation} Whence, $$ \bar t\betaig(\frac{Z+\beta}{Z-\beta}\betaig)^m+1=\frac{2h(Z)}{h(Z)-\beta l(Z)}\quad \text{and} \quad \bar t\betaig(\frac{Z+\beta}{Z-\beta}\betaig)^m-1=\frac{2\beta l(Z)}{h(Z)-\beta l(Z)}. $$ We then have that $ M_{a,b,\bar t,m}(R,V)$ coincides with $$ (R-\beta)^{2m}(V-\beta)^{2m} \betaig(\frac{2\beta l(R)}{h(R)-\beta l(R)}\betaig)^2 \betaig(\frac{2\beta l(V)}{h(V)-\beta l(V)}\betaig)^2 g_{a,b}\left(\frac{h(R)}{l(R)},\frac{h(V)}{l(V)}\right). $$ From $$ h(Z)-\beta l(Z)=2(t-\beta)(Z-\beta)^m $$ we obtain $$ M_{a,b,\bar t,m}(R,V)=\frac{\beta^4}{(t-\beta)^4} l(R)^2l(V)^2 g_{a,b}\left(\frac{h(R)}{l(R)},\frac{h(V)}{l(V)}\right), $$ whence the assertion. \end{proof} \begin{remark}\label{coordinate} By the proof of Lemma {\rm{\ref{alldebolenuova}}}, for any $z\in \mathbb{F}_q$, the $X$-coordinate of the point $ Q_{\bar t(\frac{z+\beta}{z-\beta})^m } $ is $ u=\beta(\bar t(\frac{z+\beta}{z-\beta})^m +1)/(\bar t(\frac{z+\beta}{z-\beta})^m -1). $ Then, by \eqref{keykey}, $u=\frac{h(z)}{l(z)}$ holds, with $h(Z)$ and $l(Z)$ as in \eqref{hl}. \end{remark} \begin{remark}\label{notanodale} If $(r,v)$ is an $\mathbb{F}_q$-rational affine point of the curve $M_{a,b,\bar t,m}(R,V)=0$ with $$ \betaig(\frac{r+\beta}{r-\beta}\betaig)^m\neq \betaig(\frac{v+\beta}{v-\beta}\betaig)^m $$ then $P=(a,b)$ is collinear with $Q_{\bar t (\frac{r+\beta}{r-\beta})^m}$ and $Q_{\bar t (\frac{v+\beta}{v-\beta})^m}$, which are two distinct points in $K_T$ by \eqref{descri}. \end{remark} \begin{proposition}\label{archinuovi} Let $P=(a,b)$ be a point in $AG(2,q)$ off $\mathcal X$. Assume that \eqref{condizione} holds. If $$ q+1-(6m^2-6m+2)\sqrt q \gammae 4m^2+8m+1 $$ then $P$ is collinear with two distinct points of $K_T$. \end{proposition} \begin{proof} Let $\mathbb{K}(\bar r, \bar v)$ be the function field of $M_{a,b,\bar t, m}(R,V)=0$, so that $M_{a,b,\bar t,m}(\bar r,\bar v)=0$ holds. Let $E$ be the set of places $\alphamma$ of $\mathbb{K}(\bar r,\bar v)$ for which at least one of the following holds: \begin{itemize} \item[(1)] $\alphamma$ is a pole of either $\bar r$ or $\bar v$; \item[(2)] $\alphamma$ is a pole of either $\betaig(\frac{\bar r+\beta}{\bar r-\beta}\betaig)$ or $\betaig(\frac{\bar v+\beta}{\bar v-\beta}\betaig)$; \item[(3)] $\alphamma$ is a zero of $\betaig(\frac{\bar r+\beta}{\bar r-\beta}\betaig)^m- \betaig(\frac{\bar v+\beta}{\bar v-\beta}\betaig)^m$. \end{itemize} As both degrees of the extensions $\mathbb{K}(\bar r,\bar v):\mathbb{K}(\bar r)$ and $\mathbb{K}(\bar r,\bar v):\mathbb{K}(\bar v)$ are equal to $2m$, the number of places satisfying (1) is at most $4m$. 
According to the proof of Lemma \ref{irrmodificato}, we have that $$ \bar x= \frac{\bar r+\beta}{\bar r-\beta},\qquad \bar y=\frac{\bar v+\beta}{\bar v-\beta} $$ satisfy $f_{A,B,\bar t, m}(\bar x,\bar y)=0$. Therefore, by Propositions \ref{clear} and \ref{clear2} the number of places satisfying (2) is $4m$. It is easily seen that in $\mathbb{K}(\bar u, \bar z)$ the rational function $\bar u-\bar z$ has at most $4$ distinct zeros; hence, the set of zeros of $\bar x^m-\bar y^m$ in $\mathbb{K}(\bar x,\bar y)$ has size less than or equal to $4m^2$. This shows that $E$ comprises at most $4m^2+8m$ places. Our assumption on $q$ and $m$, together with the Hasse-Weil bound, ensures the existence of at least $4m^2+8m+1$ $\mathbb{F}_q$-rational places of $\mathbb{K}(\bar r,\bar v)$; hence, there exists at least one $\mathbb{F}_q$-rational place $\gamma_0$ of $\mathbb{K}(\bar r,\bar v)$ not in $E$. Let $\tilde r=\bar r(\gamma_0)$ and $ \tilde v=\bar v(\gamma_0).$ By Remark \ref{notanodale}, $P=(a,b)$ is collinear with $Q_{\bar t (\frac{\tilde r+\beta}{\tilde r-\beta})^m}$ and $Q_{\bar t (\frac{\tilde v+\beta}{\tilde v-\beta})^m}$, which are two distinct points in $K_T$. \end{proof} The following technical variant of Proposition \ref{archinuovi} will also be needed. \begin{proposition}\label{archinuoviVAR} Let $P=(a,b)$ be a point in $AG(2,q)$ off $\mathcal X$. Assume that \eqref{condizione} holds. If \begin{equation}\label{aritmeticaVAR} q+1-(6m^2-6m+2)\sqrt q \ge 8m^2+8m+1 \end{equation} then $P$ is collinear with two distinct points of $K_T\setminus \{T\}$. \end{proposition} \begin{proof} One can argue as in the proof of Proposition \ref{archinuovi}. We need to ensure that neither $Q_{\bar t (\frac{\tilde r+\beta}{\tilde r-\beta})^m}$ nor $Q_{\bar t (\frac{\tilde v+\beta}{\tilde v-\beta})^m}$ coincides with $T$. As $T=Q_{\bar t}$, this is equivalent to $\gamma_0$ not being a zero of either $(\frac{\bar r+\beta}{\bar r-\beta})^m-1$ or $(\frac{\bar v+\beta}{\bar v-\beta})^m-1$ in the function field $\mathbb{K}(\bar r,\bar v)$. By Proposition \ref{clear}, in $\mathbb{K}(\bar u,\bar z)$ both rational functions $\bar u-1$ and $\bar z-1$ have at most two distinct zeros. Therefore, there are at most $4m^2$ places $\gamma_0$ that need to be ruled out. \end{proof} If \eqref{condizione} is not satisfied, then $P$ is not collinear with any two points of $K_T$. Actually, a stronger statement holds. \begin{proposition}\label{fuori} Let $a,b \in \mathbb{F}_q$ be such that $$ (a,b)\in \big\{(0,-\frac{9}{8\beta^2}),(\beta\sqrt{-3},0),(-\beta\sqrt{-3},0)\big\}. $$ Then the point $P=(a,b)$ is not collinear with any two $\mathbb{F}_q$-rational affine points of $\mathcal X$. \end{proposition} \begin{proof} We recall that by the proof of Lemma \ref{alldebolenuova}, the point $P$ is collinear with $(x,\frac{1}{x^2-\beta^2})$ and $(y,\frac{1}{y^2-\beta^2})$, with $x,y\in \mathbb{F}_q$, if and only if $g_{a,b}(x,y)=0$. If $ (a,b)=(0,-\frac{9}{8\beta^2}) $ then $$g_{a,b}(X,Y)=-\frac{1}{8\beta^2}(9X^2Y^2-\beta^2(X^2+Y^2)+8\beta^2XY+\beta^4)=$$ $$=-\frac{1}{8\beta^2}(3XY - X\beta + Y\beta + \beta^2)(3XY + X\beta -Y\beta + \beta^2). $$ If $g_{a,b}(x,y)=0$, then either \begin{equation}\label{9feb} 3xy - x\beta + y\beta + \beta^2=0\quad \text{ or }\quad 3xy + x\beta -y\beta + \beta^2=0. \end{equation} If $x,y\in \mathbb{F}_q$, then both $x$ and $y$ are fixed by the Frobenius map over $\mathbb{F}_q$, and hence both equalities in \eqref{9feb} hold. This easily implies $x=y$.
Then no two distinct $\mathbb{F}_q$-rational affine points of $\mathcal X$ can be collinear with $(a,b)$. Note that $(a,b)=(\pm \beta \sqrt{-3},0)$ can only occur when $-3$ is a non-square in $\mathbb{F}_q$, otherwise $\pm\beta\sqrt{-3}\notin \mathbb{F}_q$. In this case, $(\sqrt{-3})^q=-\sqrt{-3}$ holds; also, $$ g_{\beta\sqrt{-3},0}(X,Y)=-(X^2+Y^2) - XY +\beta\sqrt{-3}(X + Y) +\beta^2= $$ $$ =-\betaig(X+\frac{1+\sqrt{-3}}{2}Y+\frac{-\beta\sqrt{-3}+\beta}{2}\betaig)\betaig(X+\frac{1-\sqrt{-3}}{2}Y+\frac{-\beta\sqrt{-3}-\beta}{2}\betaig) $$ and $$ g_{-\beta\sqrt{-3},0}(X,Y)=-(X^2+Y^2) - XY -\beta\sqrt{-3}(X + Y) +\beta^2= $$ $$ =-\betaig(X+\frac{1-\sqrt{-3}}{2}Y+\frac{\beta\sqrt{-3}+\beta}{2}\betaig)\betaig(X+\frac{1+\sqrt{-3}}{2}Y+\frac{\beta\sqrt{-3}-\beta}{2}\betaig) $$ The assertion for $(a,b)=(\pm \beta \sqrt{-3},0)$ then follows by the same arguments used for $(a,b)= (0,-\frac{9}{8\beta^2})$. \end{proof} In order to investigate the bicovering properties of the arc $K_t$, according to Remark \ref{coordinate} we need to consider the rational function $ \big(a-\frac{h(\bar r)}{l(\bar r)}\big)\big(a-\frac{h(\bar v)}{l(\bar v)}\big) $ in the function field of $M_{a,b,\bar t,m}(R,V)=0$. \begin{lemma}\label{exitre} Let $P=(a,b)$ be a point in $AG(2,q)$ off $\mathcal X$ satisfying \eqref{condizione}. Let $\mathbb{K}(\bar r, \bar v)$ be the function field of $M_{a,b,\bar t, m}(R,V)=0$, so that $M_{a,b,\bar t,m}(\bar r,\bar v)=0$. Then the rational function $ \big(a-\frac{h(\bar r)}{l(\bar r)}\big)\big(a-\frac{h(\bar v)}{l(\bar v)}\big) $ is not a square in $\mathbb{K}(\bar r,\bar v)$. \end{lemma} \begin{proof} Let $\bar x$ and $\bar y$ be as in the proof of Proposition \ref{archinuovi}, so that $\mathbb{K}(\bar r,\bar v)=\mathbb{K}(\bar x,\bar y)$ with $f_{A,B,\bar t, m}(\bar x,\bar y)=0$. By straightforward computation, $$ \betaig(a-\frac{h(\bar r)}{l(\bar r)}\betaig)\betaig(a-\frac{h(\bar v)}{l(\bar v)}\betaig) =\frac{4\beta^2(\bar t \bar x^m-A)(\bar t \bar y^m-A)}{(A-1)^2(\bar t \bar x^m-1)(\bar t \bar y^m-1)} . $$ Then the assertion follows from Proposition \ref{exiisolated}. \end{proof} \begin{proposition}\label{archibishop} Let $P=(a,b)$ be a point in $AG(2,q)$ off $\mathcal X$. Assume that \eqref{condizione} holds. If \begin{equation}\label{aritmetica2} q+1-(16m^2-8m+2)\sqrt q \gammae 16m^2+24m+1 \end{equation} then $P$ is bicovered by the points of $K_T$. \end{proposition} \begin{proof} Let $\mathbb{K}(\bar r, \bar v)$ be the function field of $M_{a,b,\bar t, m}(R,V)=0$, so that $M_{a,b,\bar t,m}(\bar r,\bar v)=0$. By Proposition \ref{exiisolated} and Lemma \ref{exitre}, for every $c\in \mathbb{F}_q^*$ the equation $$ \bar w^2=c \betaig(a-\frac{h(\bar r)}{l(\bar r)}\betaig)\betaig(a-\frac{h(\bar v)}{l(\bar v)}\betaig) $$ defines a Kummer extension $\mathbb{K}(\bar r,\bar v,\bar w)$ of $\mathbb{K}(\bar r, \bar v)$ with genus less than or equal to $8m^2-4m+1$. Let $E$ be as in the proof of Proposition \ref{archinuovi}, and let $E'$ be the set of places of $\mathbb{K}(\bar r,\bar v,\bar w)$ that either lie over a place in $E$ or over a zero or a pole of $\big(a-\frac{h(\bar r)}{l(\bar r)}\big)\big(a-\frac{h(\bar v)}{l(\bar v)}\big)$. By Proposition \ref{exiisolated}, together with the proof of Proposition \ref{archinuovi}, an upper bound for the size of $E'$ is $16m^2+24m$. 
Our assumption on $q$ and $m$, together with Proposition \ref{HaWe}, ensures the existence of at least $16m^2+24m+1$ $\mathbb{F}_q$-rational places of $\mathbb{K}(\bar r,\bar v,\bar w)$; hence, there exists at least one $\mathbb{F}_q$-rational place $\alphamma_c$ of $\mathbb{K}(\bar r,\bar v,\bar w)$ not in $E'$. Let $$ \tilde r=\bar r(\alphamma_c),\quad \tilde v=\bar v(\alphamma_c), \quad \tilde w=\bar w(\alphamma_c). $$ Note that $P_c=(\tilde r,\tilde v)$ is an $\mathbb{F}_q$-rational affine point of the curve with equation $M_{a,b,\bar t,m}(R,V)=0$. Therefore, by Remark \ref{notanodale}, $P$ is collinear with two distinct points $$ P_{1,c}=Q_{\bar t (\frac{\tilde r+\beta}{\tilde r-\beta})^m},\,P_{2,c}=Q_{\bar t (\frac{\tilde v+\beta}{\tilde v-\beta})^m}\in K_T. $$ If $c$ is chosen to be a square, then $P$ is external to $P_{1,c}P_{2,c}$; on the other hand, if $c$ is not a square, then $P$ is internal to $P_{1,c}P_{2,c}$. This proves the assertion. \end{proof} In the final part of this section we deal with points in $\mathcal X$. \begin{proposition}\label{bicointerni} Let $K_{T'}$ be a coset of $K$ such that $K_T\cup K_{T'}$ is an arc. For $u\in \mathbb{F}_q$, let $P_u=(u,\frac{1}{u^2-\beta^2})$ be an $\mathbb{F}_q$-rational affine point of $\mathcal X$ not belonging to $K_T\cup K_{T'}$ but collinear with a point of $K_T$ and a point of $K_{T'}$. \begin{itemize} \item[{\rm{(i)}}] If $u\neq 0$ and \eqref{aritmetica2} holds, then $P_u$ is bicovered by $K_T\cup K_{T'}$. \item[{\rm{(ii)}}] The point $P_0=(0,-\frac{1}{\beta^2})$ is not bicovered by $K_T\cup K_{T'}$. It is internal (resp. external) to every segment cut out on $K_T\cup K_{T'}$ by a line through $P_0$ when $q\equiv 1\pmod 4$ (resp. $q\equiv 3\pmod 4$). \end{itemize} \end{proposition} \begin{proof} Note that when $P$ ranges over $K_T$, then the point $Q=\ominus (P_u\oplus P)$ ranges over $K_{T'}$ and is collinear with $P_u$ and $P$. Recall that $P$ belongs to $K_T$ if and only if $P=(e,\frac{1}{e^2-\beta^2})$ with $$ e=\beta\frac{\bar t(\frac{x+\beta}{x-\beta})^m+1}{\bar t(\frac{x+\beta}{x-\beta})^m-1} $$ for some $x\in \mathbb{F}_q$. In this case, $Q=(s(e),\frac{1}{s(e)^2-\beta^2})$ with $ s(e)=-\frac{ue+\beta^2}{u+e}. $ For an element $\bar x$ transcendental over $\mathbb{K}$ let $$ e(\bar x)=\beta\frac{\bar t(\frac{\bar x+\beta}{\bar x-\beta})^m+1}{\bar t (\frac{\bar x+\beta}{\bar x-\beta})^m-1}=\frac{\beta\bar t (\bar x+\beta)^m+\beta(\bar x-\beta)^m}{\bar t (\bar x+\beta)^m-(\bar x-\beta)^m}\in \mathbb{K}(\bar x). $$ Note that $e(\bar x)$ is defined over $\mathbb{F}_q$. In order to determine whether $P_u$ is bicovered by $K_T\cup K_{T'}$ we need to investigate whether the following rational function is a square in $\mathbb{K}(\bar x)$: $$ \eta(\bar x)=(u-e(\bar x))(u-s(e(\bar x)))= \frac{u-e(\bar x)}{u+e(\bar x)}(u^2+2ue(\bar x)+\beta^2) $$ Let $\alphamma $ be a zero of $\bar t(\frac{\bar x+\beta}{\bar x-\beta})^m-1$ in $\mathbb{K}(\bar x)$. Note that since $(m,p)=1$, the polynomial $tZ^m-1$ has no multiple roots in $\mathbb{K}[Z]$. Then the valuation $v_\alphamma(e(\bar x))$ of $e(\bar x)$ at $\alphamma$ is $-1$. If in addition $u\neq 0$, then $ v_\alphamma(\eta(\bar x))=v_\alphamma(e(\bar x))=-1, $ whence $\eta(\bar x)$ is not a square in $\mathbb{K}(\bar x)$ and Proposition \ref{teo1cor} applies to $c\eta(\bar x)$ for each $c\in \mathbb{F}_q^*$. 
Since the number of poles of $\eta(\bar x)$ is at most $2m$, the genus of the Kummer extension $\mathbb{K}(\bar x,\bar w)$ of $\mathbb{K}(\bar x)$ with $\bar w^2=c\eta(\bar x)$ is at most $2m-1$. Our assumption on $q$, together with the Hasse-Weil bound, yield the existence of an $\mathbb{F}_q$-rational place $\alphamma_c$ of $\mathbb{K}(\bar x,\bar w)$ which is not a zero nor a pole of $\bar w$. Let $ \tilde x=\bar x(\alphamma_c)$, $ \tilde w=\bar w(\alphamma_c) $, $$ \tilde e =\beta\frac{\bar t(\frac{\tilde x+\beta}{\tilde x-\beta})^m+1}{\bar t(\frac{\tilde x+\beta}{\tilde x-\beta})^m-1}\quad \text{and}\quad s(\tilde e)=-\frac{u\tilde e+\beta^2}{u+\tilde e}. $$ Therefore, if $u\neq 0$, then $P_u$ is collinear with two distinct points $$ P(c)=\betaig(\tilde e,\frac{1}{\tilde e^2-\beta^2}\betaig)\in K_T\qquad Q(c)=\betaig(s(\tilde e),\frac{1}{s(\tilde e)^2-\beta^2}\betaig)\in K_{T'}. $$ If $c$ is chosen to be a square, then $P_u$ is external to $P(c)Q(c)$; on the other hand, if $c$ is not a square, then $P_u$ is internal to $P(c)Q(c)$. Assume now that $u=0$. First note that $P_0$ coincides with $Q_{-1}$, and hence belongs to $K$. Therefore, as $m$ is odd, $P_0$ cannot be collinear with any two points from the same coset of $K$. Assume then that $P_0$ is collinear with $P=(e,\frac{1}{e^2-\beta^2})\in K_T$ and $Q=\big(s(e),\frac{1}{s(e)^2-\beta^2}\big)\in K_{T'}$. It is straightforward to check that $(u-e)(u-s(e))=e\cdot s(e)=-\beta^2$. Since $\beta^2$ is not a square in $\mathbb{F}_q$, the assertion follows from the well-known fact that $-1$ is a square in $\mathbb{F}_q$ precisely when $q\equiv 1 \pmod 4$. \end{proof} \section{Complete arcs and complete caps from cubics with an isolated double point} Throughout this section $q=p^s$ with $p$ a prime, $p>3$. Also, $\mathcal X$, $G$, $m$, $K$ and $K_T$ are as in Section \ref{covvi}. We recall the notion of a maximal-$3$-independent subset of a finite abelian group $\mathcal G$, as given in \cite{MR1075538}. A subset $M$ of $\mathcal G$ is said to be {\em maximal }$3$-{\em independent} if \begin{itemize} \item[ (a)] $x_1+x_2+x_3\neq 0$ for all $x_1,x_2,x_3\in M$, and \item[(b)] for each $y\in \mathcal G\setminus M$ there exist $x_1,x_2\in M$ with $x_1+x_2+y=0$. \end{itemize} If in (b) $x_1\neq x_2$ can be assumed, then $M$ is said to be {\em good}. Assume that $S$ is a good maximal $3$-independent subset of $G$. Since three points in $G$ are collinear if and only if their sum is equal to the neutral element, $S$ is an arc whose secants cover all the points in $G$. For direct products of abelian groups of order at least $4$, an explicit construction of good maximal $3$-independent subsets was provided by Sz\H onyi; see e.g. \cite[Example 1.2]{MR1221589}. If $m$ and $(q+1)/m$ are coprime, such a construction applies to $G$. \begin{proposition}\label{mazzi3i} Assume that $m$ and $(q+1)/m$ are coprime. Let $H$ be the subgroup of $G$ of order $m$, so that $G$ is the direct product of $K$ and $H$. Fix two elements $R\in K$ and $R'\in H$ of order greater than $3$, and let $T=R'\ominus 2R$. Then $$ \mathcal A=K_T\setminus \{T\} \quad \bigcup \quad (H\oplus R) \setminus \{\ominus 2T\oplus R\} $$ is a good maximal $3$-independent subset of $G$. \end{proposition} Let $\mathcal E$ denote the set of points $P$ in $AG(2,q)\setminus \mathcal X$ whose affine coordinates $(a,b)$ do not satisfy \eqref{condizione}. 
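Before analysing $\mathcal E$, we illustrate the notion just recalled with a small example of our own (for illustration only, unrelated to the groups occurring in this paper): in the cyclic group $\mathbb{Z}_5$ the subset $M=\{2,3\}$ is maximal $3$-independent, since no sum of three of its elements vanishes ($2+2+2=1$, $2+2+3=2$, $2+3+3=3$, $3+3+3=4$), while $0=2+3+0$, $0=2+2+1$ and $0=3+3+4$ show that condition (b) holds for every $y\in\{0,1,4\}$; it is, however, not good, because for $y=1$ the only solution is $x_1=x_2=2$.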
By Remark \ref{quattordici}, the size of $\mathcal E$ is $3$ precisely when $s$ is odd and $p\equiv 2 \pmod 3$; otherwise, $\mathcal E$ consists of the point with coordinates $(0,-\frac{9}{8\beta^2})$. \subsection{Small complete arcs in $AG(2,q)$} Let $\mathcal A$ be as in Proposition \ref{mazzi3i}. We use Propositions \ref{archinuoviVAR}, \ref{fuori}, and \ref{mazzi3i} in order to construct small complete arcs in Galois planes. Note that \eqref{aritmeticaVAR} is implied by $ m\le \frac{\sqrt[4]q}{\sqrt 6}. $ \begin{theorem}\label{unouno} Let $q=p^s$ with $p>3$ a prime. Let $m$ be a divisor of $q+1$ such that $(m,6)=1$ and $(m,\frac{q+1}{m})=1$. If $m\le \frac{\sqrt[4]q}{\sqrt 6}$, then \begin{itemize} \item if either $s$ is even or $p\equiv 1 \pmod 3$, the set $\mathcal A\cup \mathcal E$ is a complete arc in $AG(2,q)$ of size $m+\frac{q+1}{m}-2$; \item if $s$ is odd and $p\equiv 2 \pmod 3$, the set $\mathcal A\cup \mathcal E$ contains a complete arc in $AG(2,q)$ of size at most $m+\frac{q+1}{m}$. \end{itemize} \end{theorem} \subsection{Small complete caps in $AG(N,q)$, $N\equiv 0\pmod 4$} Let $M$ be a maximal $3$-independent subset of the factor group $G/K$ containing $K_T$. Then the union $S$ of the cosets of $K$ corresponding to $M$ is a good maximal $3$-independent subset of $G$; see \cite{MR1075538}, Lemma 1, together with Remark 5(5). It has already been noticed that $S$ is an arc whose secants cover all the points in $G$. Note also that $K$ is disjoint from $S$, and hence the point $P_0=(0,-\frac{1}{\beta^2})$ does not belong to $S$. If either $s$ is even or $p\equiv 1 \pmod 3$, by Propositions \ref{fuori}, \ref{archibishop}, and \ref{bicointerni}, then $S\cup \{(0,-\frac{9}{8\beta^2})\}$ is an almost bicovering arc with center $P_0$, provided that $m$ is small enough with respect to $q$. \begin{theorem}\label{pre14mar} Let $q=p^s$ with $p>3$ a prime, and assume that either $s$ is even or $p\equiv 1 \pmod 3$. Let $m$ be a proper divisor of $q+1$ such that $(m,6)=1$ and \eqref{aritmetica2} holds. Let $K$ be the subgroup of $G$ of index $m$. For $M$ a maximal $3$-independent subset of the factor group $G/K$, the point set \begin{equation}\label{13maggio} \mathcal B=\betaig(\bigcup_{K_{T_i}\in M}K_{T_i}\betaig)\bigcup \mathcal E \end{equation} is an almost bicovering arc in $AG(2,q)$ with center $P_0=(0,-\frac{1}{\beta^2})$. The size of $\mathcal B$ is $\#M\cdot \frac{q+1}{m}+1$. \end{theorem} When $s$ is odd and $p\equiv 2 \pmod 3$ a further condition on $M$ is needed in order to ensure that $\mathcal B$ as in \eqref{13maggio} is an almost bicovering arc. Note that by Proposition \ref{fuori} there is precisely one point in $G$ collinear with any two points in $\mathcal E$. \begin{theorem}\label{pre14marBIS} Let $q=p^s$ with $p>3$ a prime. Assume that $s$ is odd and $p\equiv 2 \pmod 3$. Let $m$ be a proper divisor of $q+1$ such that $(m,6)=1$ and \eqref{aritmetica2} holds. Let $K$ be the subgroup of $G$ of index $m$. Let $Q_1$ denote the only point in $G$ collinear with $(0,-\frac{9}{8\beta^2})$ and $(\beta\sqrt{-3},0)$; similarly, let $Q_2\in G$ be collinear with $(0,-\frac{9}{8\beta^2})$ and $(-\beta\sqrt{-3},0)$. For $M$ a maximal $3$-independent subset of the factor group $G/K$ not containing $K\oplus Q_1$ nor $K\oplus Q_2$, the point set $$ \mathcal B=\betaig(\bigcup_{K_{T_i}\in M}K_{T_i}\betaig)\bigcup \mathcal E $$ is an almost bicovering arc in $AG(2,q)$ with center $P_0=(0,-\frac{1}{\beta^2})$. The size of $\mathcal B$ is $\#M\cdot \frac{q+1}{m}+3$. 
\end{theorem}
We use Theorems \ref{pre14mar} and \ref{pre14marBIS}, together with Proposition \ref{mainP}, in order to construct small complete caps in affine spaces $AG(N,q)$. Assume that $m=m_1m_2$ with $(m_1,m_2)=1$. Then the factor group $G/K$ is the direct product of two subgroups of orders $m_1>4$ and $m_2>4$, and the aforementioned construction by Sz\H onyi \cite[Example 1.2]{MR1221589} of a maximal $3$-independent set $M$ of size $m_1+m_2-3$ applies. It is easily seen that $M$ can be chosen so as to avoid any two prescribed cosets of $K$. As \eqref{aritmetica2} is implied by $m\le \frac{\sqrt[4]q}{4}$, the following result holds.
\begin{theorem}\label{duedue}
Let $q=p^s$ with $p>3$ a prime, and let $m$ be a proper divisor of $q+1$ such that $(m,6)=1$ and $m\le \frac{\sqrt[4]q}{4}$. Assume that $m=m_1m_2$ with $(m_1,m_2)=1$. Then for $N\equiv 0 \pmod 4$, $N\ge 4$, there exists a complete cap in $AG(N,q)$ of size less than or equal to
$$
\big((m_1+m_2-3)\cdot \frac{q+1}{m}+3\big)q^{\frac{N-2}{2}}.
$$
\end{theorem}
\end{document}
\begin{document}
\setcounter{page}{0}
\thispagestyle{empty}
\title{On the operator norm of a Hermitian random matrix with correlated entries}
\author{Jana Reker, IST Austria}
\maketitle
\begin{abstract}
We consider a correlated $N\times N$ Hermitian random matrix with a polynomially decaying metric correlation structure. By calculating the trace of the moments of the matrix and using the summable decay of the cumulants, we show that its operator norm is stochastically dominated by one.
\end{abstract}
\AMSclass{60B20}\\
\keywords{Correlated random matrix, operator norm, polynomially decaying metric correlation structure}
\setcounter{page}{1}
\section{Introduction}
Let $H$ be a Hermitian $N\times N$ random matrix such that $H=\smash{\frac{1}{\sqrt{N}}}W$, where $W\in\mathds{C}^{N\times N}$ has matrix elements of order one. For Wigner matrices, i.e., when the entries of $W$ are identically distributed and independent (up to the Hermitian symmetry) with some mild moment condition, it is well known that $\| H\|$ is bounded uniformly in $N$ with very high probability. In fact, it even converges to 2 under the normalization $\mathds{E} |W_{ij}|^2=1$ (see~\cite{BaiYin1988} and~\cite[Thm~2.1.22]{AndersonGuionnetZeitouni2009}, as well as~\cite{FuerediKomlos1981},~\cite{Juhasz1981},~\cite{Vu2007} for more quantitative bounds under stronger moment conditions). In contrast, if the entries of $W$ are very strongly correlated, the norm of $H$ may be as large as~$\sqrt{N}$. In this paper, we assume the entries of $W$ to be correlated following a \emph{polynomially decaying metric correlation structure} as, e.g., considered in~\cite{ErdosKruegerSchroeder2017}, but with a weaker, summable correlation decay. This dependence structure is characterized by the 2-cumulants of the matrix elements $W_{ij}$ decaying at least as an inverse $2+{\varepsilon}$ power of the distance with respect to a natural metric on the index pairs $(i,j)$. The higher cumulants follow a similar pattern (see Assumption~\eqref{A3} below). Under these mild decay conditions, we show that $\| H\|$ is essentially bounded with very high probability. This result was already stated in~\cite{ErdosKruegerSchroeder2017}, indicating that an extension of Wigner's moment method applies. In the current paper, we carry out this task, which turns out to be rather involved. In~\cite{ErdosKruegerSchroeder2017}, this bound was used as an a priori control on $\| H\|$ for the resolvent method, leading to optimal local laws for~$H$. We remark that it is possible to modify the proof in~\cite{ErdosKruegerSchroeder2017} to obtain the bound on $\| H \|$ directly, i.e., without relying on the current paper. However, an independent proof via the moment method has several advantages. First, it is conceptually much simpler and less technical than the resolvent approach in \cite{ErdosKruegerSchroeder2017}. Further, it only requires the summability of the 2-cumulants (see exponent $s>2$ in~\eqref{A3-2-cumu} below), while \cite{ErdosKruegerSchroeder2017} assumed a faster decay ($s>12$, see~\cite[Eq.~(3a)]{ErdosKruegerSchroeder2017}). Lastly, the current method can be generalized to even weaker correlation decays, resulting in correlated random matrix ensembles whose norm grows with~$N$ but slower than the trivial~$\sqrt{N}$ bound. We start by giving some general notation and the precise assumptions on the matrix $W$ in Section~\ref{sect-notation} below.
The bound on the operator norm of $H$ is then formulated in Theorem~\ref{main} and its proof is given in Sections~\ref{sect-2-cumu-estimates} and~\ref{sect-hi-cumu-estimates}. For simplicity, the argument is carried out only for symmetric $H\in\mathds{R}^{N\times N}$. The Hermitian case follows analogously and is hence omitted.
\textbf{Acknowledgment:} I am very grateful to László Erdős for suggesting the topic and supervising my work on this project.
\subsection{Notation and Assumptions on the Model}\label{sect-notation}
Throughout the paper, boldface indicates vectors $\mathbf{x}\in\mathds{C}^N$ and their Euclidean norm is denoted by $\|\mathbf{x}\|_2$. Further, the operator norm of a matrix~$A\in\mathds{C}^{N\times N}$ is denoted by~$\|A\|$. In the estimates, $C$ (without subscript) denotes a generic constant whose value may change from line to line. We note the following assumptions on the matrix $W$.
\begin{assumption}[A1]\labelA{A1}
$\mathds{E} W_{i,j}=0$ for all $i,j=1,\dots,N$.
\end{assumption}
\begin{assumption}[A2]\labelA{A2}
For all $q\in\mathds{N}$ there exists a constant $\mu_q$ such that $\mathds{E}|W_{i,j}|^q\leq\mu_q$ for all~$i,j=1,\dots,N$.
\end{assumption}
The assumption on the correlation decay is given in terms of the multivariate cumulants~$\kap{k}$ of the matrix elements.
\begin{definition}[Cumulants]
Let $\mathbf{w}=(w_1,\dots,w_n)$ be a random vector taking values in $\mathds{R}^n$. The \textbf{cumulants}~$\kappa_m$ of $\mathbf{w}$ are defined as the Taylor coefficients of the log-characteristic function of~$\mathbf{w}$, i.e.,
\begin{displaymath}
\ln\mathds{E}[\mathrm{e}^{i\mathbf{t}\cdot\mathbf{w}}]=\sum_m\kappa_m\frac{(i\mathbf{t})^m}{m!},
\end{displaymath}
where the sum is taken over all multi-indices $m=(m_1,\dots,m_n)\in\mathds{N}^n$ and $m!=\smash{\prod_{j=1}^n(m_j!)}$. For a multiset $B\subset\{1,\dots,n\}$ with~${|B|=k}$, we also write~$\kap{k}(w_j|j\in B)$ instead of $\kappa_m$, where $m_i$ is the multiplicity of $i$ in $B$.
\end{definition}
The complex cumulants arising whenever $\mathbf{w}$ takes values in~$\mathds{C}^n$ are defined by considering the real and imaginary parts of the random variables separately. To keep the notation short, we usually view the cumulants as a function of the indices of the matrix elements by identifying~$\kap{k}(W_{a_1,a_2},\dots)$ with $\kap{k}(a_1a_2,\dots)$ or~$\kap{k}(\alpha_1,\dots,\alpha_k)$, using $\alpha_1,\dots,\alpha_k\in\{1,\dots,N\}^2$. Further, $d$ denotes the distance on $\{1,\dots,N\}^2$ modulo the (Hermitian) symmetry,~i.e.,
\begin{displaymath}
d(a_1a_2,a_3a_4):=\min\{|a_1-a_3|+|a_2-a_4|,|a_1-a_4|+|a_2-a_3|\}.
\end{displaymath}
In this notation, the conditions from the polynomially decaying metric correlation structure can be formulated as follows.
\begin{assumption}[A3]\labelA{A3}
The $k$-cumulants $\kap{k}$ of the matrix elements of $W$ satisfy
\begin{align}
|\kap{2}(a_1a_2,a_3a_4)|&\leq\frac{C_{\kappa}}{1+d(a_1a_2,a_3a_4)^s},\label{A3-2-cumu}\\
|\kap{k}(\alpha_1,\dots,\alpha_k)|&\leq C(k)\prod_{e\in T_{min}}|\kap{2}(e)|,\ k\geq3,\label{A3-k-cumu}
\end{align}
for $s>2$ and some constants $C_{\kappa},C(k)>0$. Here, $T_{min}$ is the minimal spanning tree on the complete graph on $k$ vertices labelled by $\alpha_1,\dots,\alpha_k$ with edge weights $d(\alpha_i,\alpha_j)$, i.e., the spanning tree for which the sum of the edge weights is minimal.
\end{assumption}
Note that a correlation decay of the form~\eqref{A3-k-cumu} arises in various statistical physics models~(see~\cite{DuneauIagolnitzerSouillard1973}).
\subsection{Statement of the Main Result}
With the notation established, the bound on the operator norm of $H$ reads as follows.
\begin{theorem}\label{main}
Under the assumptions \eqref{A1}-\eqref{A3}, we have that for all~${{\varepsilon}>0}$,~${D>0}$ there exists a suitable constant~$C({\varepsilon},D)$ such that, for all $N\in\mathds{N}$,
\begin{equation*}
\mathds{P}\big(\|H\|>N^{{\varepsilon}}\big)\leq C({\varepsilon},D)N^{-D}.
\end{equation*}
\end{theorem}
\subsection{Setup for the Moment Method}
We show that the assumptions \eqref{A1}-\eqref{A3} imply that
\begin{equation}\label{est-by-const}
\mathds{E}[\tfrac{1}{N}\mathrm{tr}(H^k)]\leq C(k),\quad \forall k\in\mathds{N},
\end{equation}
from which the statement of Theorem~\ref{main} follows by an application of Chebyshev's inequality. As $H$ is Hermitian, we have $\|H\|^k\leq\mathrm{tr}(H^k)$ for all even $k\in\mathds{N}$. This implies
\begin{align*}
\mathds{P}(\|H\|>N^{{\varepsilon}})\leq\frac{\mathds{E}[\|H\|^k]}{N^{k{\varepsilon}}}\leq\frac{N\mathds{E}[\tfrac{1}{N}\mathrm{tr}(H^k)]}{N^{k{\varepsilon}}}\leq\frac{NC(k)}{N^{k{\varepsilon}}},
\end{align*}
which gives the desired bound if $k$ is chosen large enough, e.g., for any even $k\geq (D+1)/{\varepsilon}$. Writing out the term on the left-hand side of~\eqref{est-by-const} using a cumulant expansion yields
\begin{align}
\mathds{E}[\tfrac{1}{N}\mathrm{tr}(H^k)]&=\frac{1}{N}\sum_{a_1,\dots,a_k}\mathds{E}[H_{a_1,a_2}H_{a_2,a_3}\dots H_{a_k,a_1}]\nonumber\\
&=N^{-k/2-1}\sum_{\pi\in\Pi_k}\sum_{a_1,\dots,a_k}\prod_{B\in\pi}\kap{|B|}(a_ja_{j+1}|j\in B),\label{cumu-expansion}
\end{align}
where $\Pi_k$ denotes the set of partitions of $\{1,\dots,k\}$ and the index $j+1$ is to be interpreted $\mathrm{mod}\ k$, i.e., if $j=k$, then $a_ja_{j+1}=a_ka_1$. Observe that all terms involving~1-cumulants vanish due to \eqref{A1}. Hence, one can restrict the sum to partitions $\pi$ without singleton sets. The cumulant expansion~\eqref{cumu-expansion} is the main difference between the real symmetric and the complex Hermitian case, as considering $H\in\mathds{C}^{N\times N}$ requires replacing the cumulants by their complex counterparts. However, one can always reduce the argument to the real case by considering the real and imaginary parts of the random variables separately. Thus, from now on we assume $H$ to be real. We develop the estimates by first deriving suitable bounds for the products that only involve 2-cumulants, which include the leading terms, and then successively incorporating higher-order cumulants. The fast correlation decay from Assumption~\eqref{A3} implies that roughly half of the summations over the indices~$a_1,\dots,a_k$ yield a factor $N$, while the other half can be summed up with an~$N$-independent bound. Therefore, we introduce the following counting rule.
\begin{crule}[CR]\labelR{CR}
Every independent summation over an index $a_1,\dots,a_k$ in~\eqref{cumu-expansion} yields a contribution of order~$\sqrt{N}$.
\end{crule}
Note that bounding the leading terms requires an extra power of $N$ (see the proof of~\eqref{2-cumu-term} below) such that the factor $N^{-k/2-1}$ in~\eqref{cumu-expansion} is canceled out completely.
\section{Proof of~\eqref{est-by-const} for Terms Involving Only 2-Cumulants}\label{sect-2-cumu-estimates}
Throughout this section, assume that $k$ is even and $|B|=2$ for all $B\in\pi$.
We aim to show that, for all such $\pi\in\mathds{P}i_k$, \begin{equation}\label{2-cumu-term} N^{-k/2-1}\Big|\sum_{a_1,\dots,a_k}\prod_{B\in\pi}\kap{2}(a_ja_{j+1}|j\in B)\Big|\leq C(k). \end{equation} The terms can be visualized by considering a $k$-gon, where the vertices are labeled by the indices $a_1,\dots,a_k$ and the edges by the successive double indices~$(a_1,a_2),\dots,(a_k,a_1)$. We denote the corresponding graph by $\Gamma_k$. In this picture, every 2-cumulant combines two edges such that each edge belongs to exactly one 2-cumulant. We say that a vertex \emph{belongs to a~2-cumulant}, if it is adjacent to one of the edges associated with it. A vertex~(resp. the corresponding index) which only occurs in a single 2-cumulant is referred to as \emph{internal vertex} (resp. \emph{internal index}). We give an example for a particular $\pi\in\mathds{P}i_8$ in Fig.~\mathrm{e}f{fig1} below, where different linestyles indicate the edges associated with the same 2-cumulant. Further examples that include internal indices are given in Fig.~\mathrm{e}f{fig2}. \begin{center} \begin{tikzpicture}[scale=1.25] \draw (-0.3827,0.9239) node[above=1pt] {$a_1$}; \draw (-0.9239,0.3827) node[left=1pt] {$a_2$}; \draw (-0.9239,-0.3827) node[left=1pt] {$a_3$}; \draw (-0.3827,-0.9239) node[below=1pt] {$a_4$}; \draw (0.3827,-0.9239) node[below=1pt] {$a_5$}; \draw (0.9239,-0.3827) node[right=1pt] {$a_6$}; \draw (0.9239,0.3827) node[right=1pt] {$a_7$}; \draw (0.3827,0.9239) node[above=1pt] {$a_8$}; \draw[black,dotted,very thick] (-0.3827,0.9239) -- (-0.9239,0.3827); \draw[black,dashed,very thick] (-0.9239,0.3827) -- (-0.9239,-0.3827); \draw[black] (-0.9239,-0.3827) -- (-0.3827,-0.9239); \draw[black,densely dotted] (-0.3827,-0.9239) -- (0.3827,-0.9239); \draw[black,dotted,very thick] (0.3827,-0.9239) -- (0.9239,-0.3827); \draw[black,dashed,very thick] (0.9239,-0.3827) -- (0.9239,0.3827); \draw[black] (0.9239,0.3827) -- (0.3827,0.9239); \draw[black,densely dotted] (0.3827,0.9239) -- (-0.3827,0.9239); \filldraw [black] (-0.3827,0.9239) circle (1pt)(-0.9239,0.3827) circle (1pt)(-0.9239,-0.3827) circle (1pt)(-0.3827,-0.9239) circle (1pt); \filldraw [black] (0.3827,-0.9239) circle (1pt)(0.9239,-0.3827) circle (1pt)(0.9239,0.3827) circle (1pt)(0.3827,0.9239) circle (1pt); \end{tikzpicture} \captionof{figure}{Visualization of $\kap{2}(a_1a_2,a_5a_6)\kap{2}(a_2a_3,a_6a_7)\kap{2}(a_3a_4,a_7a_8)\kap{2}(a_4a_5,a_8a_1)$}\label{fig1} \end{center} Assume first that $\pi$ is chosen such that the 2-cumulants occurring in the term do not involve internal indices. We note the following general estimates whose proofs are elementary from~\eqref{A3-2-cumu} and the fact that $s>2$. \begin{lemma}\label{red-rule-2-cumu} Assume that \eqref{A3} holds. Then \begin{align*} \sum_{a_1}|\kap{2}(a_1a_2,a_3a_4)|&\leq C,\quad \sum_{a_1,a_2}|\kap{2}(a_1a_2,a_3a_4)|\leq C,\quad \sum_{a_1,a_3}|\kap{2}(a_1a_2,a_3a_4)|\leq CN,\\ \sum_{a_1,a_2,a_3}|\kap{2}(a_1a_2,a_3a_4)|&\leq CN,\quad \sum_{a_1,\dots,a_4}|\kap{2}(a_1a_2,a_3a_4)|\leq CN^2 \end{align*} uniformly for any choice of the unsummed indices. In particular, the estimates follow~\eqref{CR}. \end{lemma} Note that summation over an internal index would, in general, not obey this counting rule, since~$\smash{\sum_{a_2=1}^N}|\kap{2}(a_1a_2,a_2a_3)|$ may be of order $N$ if $a_1=a_3$. This is the reason why internal indices are treated separately. The key to estimating~\eqref{2-cumu-term} in the given case is a recursive summation procedure. We demonstrate the approach for the term visualized in Fig. 
\ref{fig1}.
\begin{example}\label{ex-nointindices}
To start the summation, estimate
\begin{align}
S&:=N^{-5}\sum_{a_1,\dots,a_8}|\kap{2}(a_1a_2,a_5a_6)\kap{2}(a_2a_3,a_6a_7)\kap{2}(a_3a_4,a_7a_8)\kap{2}(a_4a_5,a_8a_1)|\nonumber\\
&\leq N^{-5}\sum_{a_1,\dots,a_8,a_1'}|\kap{2}(a_1a_2,a_5a_6)\kap{2}(a_2a_3,a_6a_7)\kap{2}(a_3a_4,a_7a_8)\kap{2}(a_4a_5,a_8a_1')|.\label{start-sum}
\end{align}
Adding the extra summation label $a_1'$ may appear to be an unnecessary overestimate, but it simplifies the following steps by breaking the cyclic structure of the graph. Next, isolate the~2-cumulant involving~$a_1$ by taking the maximum over the remaining indices~$a_2,a_5,a_6$ in the other factors. Together with $a_1$, we sum over all labels appearing in $\kap{2}(a_1a_2,a_5a_6)$. Applying the last bound in Lemma~\ref{red-rule-2-cumu} yields a factor $(\sqrt{N})^4=N^2$ and the estimate
\begin{align*}
S\leq N^{-5}CN^2\max_{a_2,a_5,a_6}\Big(\sum_{a_3,a_4,a_7,a_8,a_1'}|\kap{2}(a_2a_3,a_6a_7)\kap{2}(a_3a_4,a_7a_8)\kap{2}(a_4a_5,a_8a_1')|\Big).
\end{align*}
From $a_1$, continue counter-clockwise along the octagon to find the next index to sum over, i.e., $a_3$. Identifying and isolating $\kap{2}(a_2a_3,a_6a_7)$, sum over all remaining indices in the factor, i.e., $a_3$, $a_6$ and $a_7$, to obtain a contribution of $(\sqrt{N})^3=N^{3/2}$ and the estimate
\begin{align*}
S\leq CN^{-3/2}\max_{a_3,a_4,a_5,a_7}\Big(\sum_{a_8,a_1'}|\kap{2}(a_3a_4,a_7a_8)\kap{2}(a_4a_5,a_8a_1')|\Big).
\end{align*}
Repeating the previous step, continuing along the octagon yields~$a_8$ as the next index. Isolating $\kap{2}(a_4a_5,a_8a_1')$ and performing the last two summations yields a factor of ${(\sqrt{N})^2=N}$ and the final estimate
\begin{align*}
S\leq CN^{-1/2}\max_{a_3,a_4,a_7,a_8}|\kap{2}(a_3a_4,a_7a_8)|,
\end{align*}
which can be bounded by a constant as claimed in~\eqref{2-cumu-term}. Note that the pairing in Fig.~\ref{fig1} gives a subleading contribution to~\eqref{cumu-expansion}.
\end{example}
As this recursive summation procedure relies on Lemma~\ref{red-rule-2-cumu}, it cannot be applied directly if summations over internal indices are present. To prepare for the general case, we first extend Lemma~\ref{red-rule-2-cumu} to estimates for 2-cumulants where the unsummed indices are replaced by arbitrary fixed vectors~${\mathbf{x}\in\mathds{R}^N}$ in the sense that~${\smash{\kap{2}(\mathbf{x}a_2,a_3a_4):=\sum_{a_1=1}^N\kap{2}(a_1a_2,a_3a_4)x_{a_1}}}$, and terms such as $\kap{2}(\mathbf{x}a_2,a_3\mathbf{y})$ are defined similarly. We collect some of the estimates below, where we follow the convention that vectors only occur in place of the first index of an index pair. However, the same bounds hold if the second index of the respective pair is replaced instead. As in Lemma~\ref{red-rule-2-cumu}, no internal index is summed~up.
\begin{lemma}\label{ext-red-rule-2-cumu}
Assume that \eqref{A3} holds and let $\mathbf{x},\mathbf{y}\in\mathds{R}^N$. Then we have
\begin{align*}
\sum_{a_2}|\kap{2}(\mathbf{x}a_2,a_3a_4)|&\leq C\|\mathbf{x}\|_2,\quad \sum_{a_3}|\kap{2}(\mathbf{x}a_2,a_3a_4)|\leq CN^{1/2}\|\mathbf{x}\|_2,\\
\sum_{a_2,a_3}|\kap{2}(\mathbf{x}a_2,a_3a_4)|&\leq CN\|\mathbf{x}\|_2,\quad \sum_{a_3,a_4}|\kap{2}(\mathbf{x}a_2,a_3a_4)|\leq CN\|\mathbf{x}\|_2,\\
\sum_{a_2,a_3,a_4}|\kap{2}(\mathbf{x}a_2,a_3a_4)|&\leq CN^{3/2}\|\mathbf{x}\|_2
\end{align*}
uniformly for any choice of unsummed indices.
Similar bounds hold with two vectors, i.e., \begin{align*} |\kap{2}(\mathbf{x}a_2,\mathbf{y}a_4)|&\leq C\|\mathbf{x}\|_2\|\mathbf{y}\|_2,\quad \sum_{a_2}|\kap{2}(\mathbf{x}a_2,\mathbf{y}a_4)|\leq CN^{1/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2,\\ \sum_{a_2,a_4}|\kap{2}(\mathbf{x}a_2,\mathbf{y}a_4)|&\leq CN\|\mathbf{x}\|_2\|\mathbf{y}\|_2. \end{align*} In particular, the estimates follow~\eqref{CR}. \end{lemma} \begin{proof} Noting that $\max_j|x_j|\leq\|\mathbf{x}\|_2$, the bounds in Lemma~\mathrm{e}f{red-rule-2-cumu} imply that \begin{align*} \sum_{a_2}|\kap{2}(\mathbf{x}a_2,a_3a_4)|\leq \|\mathbf{x}\|_2 \sum_{a_1,a_2}|\kap{2}(a_1a_2,a_3a_4)|\leq C\|\mathbf{x}\|_2. \end{align*} If the summation over $a_2$ is replaced by a summation over $a_3$ or $a_4$, we obtain a bound of order $N$ instead. Further, an application of the Cauchy-Schwarz inequality leads to \begin{align} \sum_{a_3}|\kap{2}(\mathbf{x}a_2,a_3a_4)|&\leq\|\mathbf{x}\|_2\sqrt{\sum_{a_1}\Big(\sum_{a_3}|\kap{2}(a_1a_2,a_3a_4)|\Big)^2}\leq CN^{1/2}\|\mathbf{x}\|_2.\label{1sum-1vector} \end{align} Recalling that the summation over three indices gives a factor $N$, the bound for summation over all three indices in $|\kap{2}(\mathbf{x}a_2,a_3a_4)|$ follows analogously. For 2-cumulants that involve two vectors, applying~\eqref{A3-2-cumu} yields \begin{align} |\kap{2}(\mathbf{x}a_2,\mathbf{y}a_4)|&\leq\sum_{a_1,a_3}\Big(\frac{ C_{\kappa}|x_{a_1}y_{a_3}|}{1+|a_1-a_3|^s+|a_2-a_4|^s}+\frac{ C_{\kappa}|x_{a_1}y_{a_3}|}{1+|a_1-a_4|^s+|a_2-a_3|^s}\Big).\label{0sums-2vectors} \end{align} Set~${\varepsilon=\|\mathbf{y}\|_2/\|\mathbf{x}\|_2}$ and estimate the first term as \begin{align*} \sum_{a_1,a_3}\frac{|x_{a_1}y_{a_3}|}{1+|a_1-a_3|^s+|a_2-a_4|^s}&\leq \sum_{a_1,a_3}\frac{{\varepsilon}|x_{a_1}|^2+{\varepsilon}^{-1}|y_{a_3}|^2}{1+|a_1-a_3|^s}\leq C({\varepsilon}\|\mathbf{x}\|_2^2+{\varepsilon}^{-1}\|\mathbf{y}\|_2^2) \end{align*} to obtain an $N$-independent bound. The estimate of the second term is similar. Adding a summation over one index, e.g., $a_2$, two applications of the Cauchy-Schwarz inequality and the third estimate of Lemma~\mathrm{e}f{red-rule-2-cumu} yield \begin{align*} \sum_{a_2}|\kap{2}(\mathbf{x}a_2,\mathbf{y}a_4)|&\leq\|\mathbf{x}\|_2\|\mathbf{y}\|_2\sqrt{\sum_{a_3}\Big(\sum_{a_1}\sum_{a_2}|\kap{2}(a_1a_2,a_3a_4)|\Big)^2}\leq CN^{1/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2. \end{align*} Finally, it follows \begin{align*} &\sum_{a_2,a_4}|\kap{2}(\mathbf{x}a_2,\mathbf{y}a_4)|\leq \sum_{a_1,\dots,a_4}\Big(\frac{C_{\kappa}|x_{a_1}y_{a_3}|}{1+|a_1-a_3|^s+|a_2-a_4|^s}+\frac{C_{\kappa}|x_{a_1}y_{a_3}|}{1+|a_1-a_4|^s+|a_2-a_3|^s}\Big). \end{align*} Here, we obtain \begin{align*} \sum_{a_1,\dots,a_4=1}^N\frac{|x_{a_1}y_{a_3}|}{1+|a_1-a_3|^s+|a_2-a_4|^s}&\leq CN\sum_{a_1,a_3=1}^N\frac{|x_{a_1}y_{a_3}|}{1+|a_1-a_3|^s}\leq CN\|\mathbf{x}\|_2\|\mathbf{y}\|_2, \end{align*} and the estimate for the second term is similar. \end{proof} Lemma~\mathrm{e}f{ext-red-rule-2-cumu} is the key tool for estimating terms that include summation over internal indices. 
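Although not needed in the sequel, the basic estimates of Lemma~\ref{red-rule-2-cumu} can be checked numerically for a toy kernel satisfying the decay~\eqref{A3-2-cumu}. The following sketch is illustrative only; the kernel and the exponent $s=2.5$ are arbitrary choices and not the model studied here. It exhibits the three qualitatively different behaviours: summing a single index or a full index pair stays bounded in $N$, while summing one index from each pair grows linearly in~$N$.
\begin{verbatim}
def kappa2(a1, a2, a3, a4, s=2.5):
    # toy 2-cumulant with the decay (A3): 1 / (1 + d^s)
    d = min(abs(a1 - a3) + abs(a2 - a4), abs(a1 - a4) + abs(a2 - a3))
    return 1.0 / (1.0 + d ** s)

def check(N):
    r = range(N)
    s1 = sum(kappa2(a1, 0, 0, 0) for a1 in r)                   # bounded
    s2 = sum(kappa2(a1, a2, 0, 0) for a1 in r for a2 in r)      # bounded
    s3 = sum(kappa2(a1, 0, a3, 0) for a1 in r for a3 in r)      # of order N
    return s1, s2, s3

for N in (100, 200, 400):
    print(N, check(N))
\end{verbatim}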
Consider the matrix~${T\in\mathds{R}^{N\times N}}$ defined by its matrix elements \begin{equation}\label{def-T} T_{a_1,a_3}:=\sum_{a_2}T^{(a_2)}_{a_1,a_3}:=\sum_{a_2}\kap{2}(a_1a_2,a_2a_3) \end{equation} and observe that $|T_{a_1,a_3}|\leq C_{\kappa}N$ by~\eqref{A3-2-cumu}, but also $\|T\|\leq CN$, since \begin{align*} \|T^{(a_2)}\mathbf{x}\|_2^2\leq\sum_{a_1}\Big(\sum_{a_3}|\kap{2}(a_1a_2,a_2a_3)x_{a_3}|\Big)^2\leq\Big(\sum_{a_1}\sum_{a_3}|\kap{2}(a_1a_2,a_2a_3)|^2\Big)\|\mathbf{x}\|_2^2\leq C\|\mathbf{x}\|_2^2 \end{align*} for $\mathbf{x}\in\mathds{R}^N$, and $\|T\|\leq\sum_{a_2}\|T^{(a_2)}\|$. Next, we derive similar estimates for the matrix~${T^{[j]}\in\mathds{R}^{N\times N}}$ defined for ${2\leq j\leq k-1}$ by \begin{equation}\label{def-B} T^{[j]}_{a_1,a_{2j+1}}:=\sum_{a_2,a_{2j}}\kap{2}(a_1a_2,a_{2j}a_{2j+1})T_{a_2,a_{2j}}^{j-1}. \end{equation} Note that the superscript corresponds to the total number of 2-cumulants that are rewritten to obtain $T^{[j]}_{a_1,a_{2j+1}}$. Again, we have $\smash{|T^{[j]}_{a_1,a_{2j+1}}|}\leq CN^j$ by a direct estimate, but also \begin{equation}\label{norm-Tj} \|T^{[j]}\|\leq CN^j\quad \forall j\in\{2,\dots, k-1\}. \end{equation} As the argument is the same in the general case, consider only $j=2$. Define the vector $\mathbf{y}^{a_4}$ through $(\mathbf{y}^{a_4})_{a_2}:=N^{-1}T_{a_2,a_4}$, $a_2=1,\dots,N$. Observing that~${\|\mathbf{y}^{a_4}\|_2\leq C}$ uniformly in~$a_4$, it follows for~${T^{[2],a_4}:=\sum_{a_2}\kap{2}(a_1a_2,a_4a_5)T_{a_2,a_4}}$ that \begin{align} \|T^{[2],a_4}\mathbf{x}\|_2^2&\leq\Big(N|\kap{2}(a_1\mathbf{y}^{a_4},a_4\mathbf{x})|\Big)^2\leq CN^2\|\mathbf{y}^{a_4}\|_2^2\|\mathbf{x}\|_2^2\leq CN^2\|\mathbf{x}\|_2^2\label{norm-B} \end{align} for $\mathbf{x}\in\mathds{R}^N$, which implies $\|T^{[2]}\|\leq N\max_{a_4}\|T^{[2],a_4}\|\leq CN^2$. Note that the structure of the matrix-vector multiplication in~\eqref{norm-B} does not allow for the usual convention of the vector occurring only as the first index of an index pair. We demonstrate the approach for treating general products of 2-cumulants for the terms visualized in Fig. \mathrm{e}f{fig2} below. \begin{figure} \caption{A crossing (left) and a non-crossing pairing (right) for $k=10$.} \label{fig2} \end{figure} \begin{example}\label{ex-intindices} First, consider the term on the left of Fig.~2. Recalling the definition of the matrix $T$ from~\eqref{def-T}, rewrite the summation over the internal indices $a_2$, $a_4$ and~$a_7$~as \begin{align} &N^{-6}\sum_{a_1,\dots,a_{10}}\kap{2}(a_1a_2,a_2a_3)\kap{2}(a_3a_4,a_4a_5)\kap{2}(a_5a_6,a_9a_{10})\kap{2}(a_6a_7,a_{10}a_1)\kap{2}(a_7a_8,a_8a_9)\nonumber\\ &=N^{-6}\sum_{a_1,a_5,a_6,a_7,a_9,a_{10}}T^2_{a_1,a_5}\kap{2}(a_5a_6,a_9a_{10})\kap{2}(a_6a_7,a_{10}a_1)T_{a_7,a_9}\nonumber\\ &=N^{-3}\sum_{a_1,a_6,a_7,a_{10}}\kap{2}(\mathbf{x}^{a_1}a_6,\mathbf{y}^{a_7}a_{10})\kap{2}(a_6a_7,a_{10}a_1),\label{rewritten-ex} \end{align} where the vectors $\mathbf{x}^{a_1},\mathbf{y}^{a_7}\in\mathds{R}^N$ in the last step are given by \begin{align*} x^{a_1}_{a_5}&=\frac{1}{N^2}T^2_{a_1,a_5},\ a_5=1,\dots,N,\quad y^{a_7}_{a_9}=\frac{1}{N}T_{a_7,a_9},\ a_9=1,\dots,N. \end{align*} To keep the notation consistent with the proof in the general case and Lemma~\mathrm{e}f{ext-red-rule-2-cumu}, we introduce the convention that vectors obtained from matrix elements are always defined via the rows of the respective matrix. 
Recalling that~$\|T\|\leq CN$, we have $\|\mathbf{x}^{a_1}\|_2,\|\mathbf{y}^{a_7}\|_2\leq C$ uniformly for any choice of $a_1$ and $a_7$, respectively. Note that one factor of $N$ per power of~$T$ is written in front of the sum and that the convention chosen for the vectors ensures that we always replace the first index of an index pair. Further, the sum obtained from~\eqref{rewritten-ex} does not involve summation over internal indices. Modifying the recursive summation procedure from Example~\ref{ex-nointindices} by also taking the maximum over $a_1$ in the 2-cumulant involving $\mathbf{x}^{a_1}$ instead of introducing the additional summation label $a_1'$, and using the bounds from Lemmas~\ref{red-rule-2-cumu} and~\ref{ext-red-rule-2-cumu} yields
\begin{align}
&N^{-3}\sum_{a_1,a_6,a_7,a_{10}}|\kap{2}(\mathbf{x}^{a_1}a_6,\mathbf{y}^{a_7}a_{10})\kap{2}(a_6a_7,a_{10}a_1)|\nonumber\\
&\leq N^{-3} \max_{a_1,a_6,a_7,a_{10}}|\kap{2}(\mathbf{x}^{a_1}a_6,\mathbf{y}^{a_7}a_{10})|\sum_{a_1,a_6,a_7,a_{10}}|\kap{2}(a_6a_7,a_{10}a_1)|\nonumber\\
&\leq CN^{-3}N^2\max_{a_1,a_6,a_7,a_{10}}|\kap{2}(\mathbf{x}^{a_1}a_6,\mathbf{y}^{a_7}a_{10})|\leq CN^{-1}\leq C,\label{eq-ex-withvectors}
\end{align}
showing that the pairing on the left of Fig.~\ref{fig2} gives a sub-leading contribution to~\eqref{cumu-expansion}. In contrast, observe that the non-crossing pairing on the right of Fig.~\ref{fig2} needs to be handled differently, since treating $T$ as before yields~$\kap{2}(\mathbf{x}^{a_1}a_6,a_{10}a_1)$ and~$\kap{2}(a_6a_7,\mathbf{y}^{a_7}a_{10})$, which cannot be estimated using Lemma~\ref{ext-red-rule-2-cumu} due to the indices $a_1$ and~$a_7$ appearing twice in the respective 2-cumulants. However, the terms can be rewritten using the matrices~$T^{[j]}$ defined in~\eqref{def-B}. Recalling that $\|T^{[j]}\|\leq CN^j$ from~\eqref{norm-Tj}, we obtain
\begin{align*}
&N^{-6}\Big|\sum_{a_1,a_5,a_6,a_7,a_9,a_{10}}T^2_{a_1,a_5}\kap{2}(a_5a_6,a_{10}a_1)\kap{2}(a_6a_7,a_9a_{10})T_{a_7,a_9}\Big|\\
&=N^{-6}\Big|\sum_{a_6,a_{10}}T^{[3]}_{a_{10},a_6}T^{[2]}_{a_6,a_{10}}\Big|=N^{-6}\big|\mathrm{tr} \big(T^{[3]}T^{[2]}\big)\big|\leq CN^{-6}NN^3N^2=C.
\end{align*}
Hence, the term on the right of Fig.~\ref{fig2} yields a leading contribution to~\eqref{cumu-expansion}.
\end{example}
After all these preparations, we can give a complete proof of~\eqref{2-cumu-term}.
\subsection{Proof of~\eqref{2-cumu-term}}
Let $k\geq2$ be even and $\pi\in\Pi_k$ such that $|B|=2$ for all $B\in\pi$. Recalling that the product on the left-hand side of~\eqref{2-cumu-term} can be visualized on the graph $\Gamma_k$ by marking the two edges belonging to the same 2-cumulant in the same color, consider the (ordered) set of edges~$E_k=\{(a_1,a_2),\dots,(a_k,a_1)\}$ and the mapping ${\varphi}:E_k\rightarrow E_k$ that maps every edge to the other one associated with the same~2-cumulant. Starting with $e_1=(a_1,a_2)$, go through the elements of $E_k$ and note the pairing~$(e_n,{\varphi}(e_n))$ whenever $e_n\neq{\varphi}(e_i)$ for all~$i<n$ to obtain~$k/2$ pairs of edges $(e_n,{\varphi}(e_n))$. We denote these pairings as an~(ordered) set~$C(\pi):=\{(e_1,{\varphi}(e_1)),\dots,(e_{k/2},{\varphi}(e_{k/2}))\}$ and refer to the edges in $C(\pi)$ as paired edges. Note that, by construction, every element of $E_k$ occurs exactly once in $C(\pi)$. The graph in which the pairings of~$\pi\in\Pi_k$ are marked is denoted by the tuple~$(\Gamma_k,C(\pi))$.
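As a small illustration of this bookkeeping (a hypothetical helper, not part of the argument), the ordered set $C(\pi)$ can be read off from the map ${\varphi}$ as follows; the example reproduces the pairing of Fig.~\ref{fig1} with $0$-based edge labels.
\begin{verbatim}
def ordered_pairings(phi):
    # phi: dict sending each edge index 0..k-1 to its partner under the pairing
    seen, C = set(), []
    for e in range(len(phi)):
        if e not in seen:            # record (e_n, phi(e_n)) on first visit
            C.append((e, phi[e]))
            seen.update({e, phi[e]})
    return C

# pairing of Fig. 1: edge j is paired with edge j + 4 (mod 8)
print(ordered_pairings({j: (j + 4) % 8 for j in range(8)}))
# -> [(0, 4), (1, 5), (2, 6), (3, 7)]
\end{verbatim}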
The proof of~\eqref{2-cumu-term} is structured into two main steps. In the first step, we generalize the strategies from Example~\mathrm{e}f{ex-intindices} to rewrite the term on the left-hand side to a form that is tractable by a recursive summation procedure. We then carry out the required estimates in the second step, showing the validity of~\eqref{CR} up to an extra factor of $N$, i.e., that the bound is indeed of the order claimed in~\eqref{2-cumu-term}. We formulate the rewriting procedure in terms of the graph $(\Gamma_k,C(\pi))$ first and then give the corresponding formulas for the~2-cumulants that are involved. Note that rewriting is not required if the term does not involve summation over internal indices (see Example~\mathrm{e}f{ex-nointindices}). \underline{\smash{Step 1: Rewriting}}\\ Consider $(\Gamma_k,C(\pi))$ as defined above. The rewriting procedure corresponds to carrying out the following reduction algorithm on the graph (see Fig.~\mathrm{e}f{fig3}). \begin{itemize} \item[Step I] Check the set $C(\pi)$ for pairings that involve adjacent edges. If there are any, go through them in the order they appear in $C(\pi)$, replace the corresponding edges of~$\Gamma_k$ and their common vertex by a single edge and remove the pairing from $C(\pi)$. To distinguish the new edges from the edges of the original graph, we assign them the weight one, while any remaining paired edges are assigned the weight zero. The graph resulting from this step is characterized by the property that adjacent edges belong to different pairings. \item[Step II] Check the new graph for any edges of nonzero weight that are adjacent. If there are any, identify the edges that involve the vertex $a_l$ for the smallest value $l\in\{1,\dots,k\}$, replace the two edges and their common vertex by a single edge and assign it the sum of the weights of the edges that were replaced. This step is then repeated until any edge of the resulting graph with nonzero weight is only adjacent to paired edges, i.e., edges that were assigned the weight zero. \item[Step III] Check the new graph for any edges assigned a nonzero weight, say $w_j>0$, that are adjacent to two edges belonging to the same pairing, say $(e_j,{\varphi}(e_j))$. If there are any, go through them in increasing order of the corresponding $j$ and replace the three edges and the two vertices between them by a single edge. After going around the graph once, remove the pairings that were replaced from $C(\pi)$ and assign the new edges the respective weights $w_j+1$. As this may generate new subgraphs of the same structure, the step is repeated until every edge assigned a nonzero weight is either adjacent to two edges belonging to different pairings or another edge of nonzero weight. \item[Step IV] Repeat Steps II and III until neither can be carried out any further. The graph resulting from this procedure is characterized by the property that whenever two edges are adjacent to an edge assigned a nonzero weight, they are paired edges that belong to different pairings. \end{itemize} Note that the procedure may remove all pairings from $C(\pi)$, which happens if the pairing~$\pi$ was non-crossing. In this case only one vertex connected to itself by an edge (loop) of weight~$k/2$ remains, as the weight assigned to an edge reflects the number of pairings removed from $C(\pi)$ in obtaining it. 
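For instance, for the nearest-neighbour pairing, in which the edge $(a_{2j-1},a_{2j})$ is paired with $(a_{2j},a_{2j+1})$ for $j=1,\dots,k/2$, the algorithm terminates with such a loop, and, with $T$ as in \eqref{def-T}, the reduction amounts algebraically to
\begin{displaymath}
N^{-\frac{k}{2}-1}\sum_{a_1,\dots,a_k}\kap{2}(a_1a_2,a_2a_3)\kap{2}(a_3a_4,a_4a_5)\cdots\kap{2}(a_{k-1}a_k,a_ka_1)=N^{-\frac{k}{2}-1}\,\mathrm{tr}\big(T^{k/2}\big),
\end{displaymath}
which is bounded by a constant since $\big|\mathrm{tr}\big(T^{k/2}\big)\big|\leq N\|T\|^{k/2}\leq C(k)N^{k/2+1}$ by the norm bound $\|T\|\leq CN$ established above.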
In the general case, we denote the new weighted graph on $k'\leq k$ edges by $\smash{\widetilde{\Gamma}_{k'}(\pi)}$ and the set of pairings remaining after the above algorithm has been carried out by $\smash{\widetilde{C}(\pi)}\subseteq C(\pi)$ to obtain a tuple $(\smash{\widetilde{\Gamma}_{k'}(\pi)},\smash{\widetilde{C}(\pi)})$. Observe that in the simplification process, connected subgraphs of $\Gamma_k$ for which the pairings in~$C(\pi)$ do not cross are replaced by edges with nonzero weight, while the crossing pairings of the original graph remain in~$\smash{\widetilde{C}(\pi)}$. We demonstrate the algorithm for the example given on the right of Fig.~\ref{fig2}. For simplicity, the vertex labels and edge weights that are equal to zero are left out.
\begin{figure}
\caption{The steps of the reduction algorithm with arrows indicating the steps.}
\label{fig3}
\end{figure}
Next, we explain the algebraic counterpart to the graph reduction on the left-hand side of~\eqref{2-cumu-term}. Step I corresponds to carrying out all summations over internal indices in the 2-cumulants explicitly and introducing matrix elements of~$T$ as defined in~\eqref{def-T}. Hence, every new edge of weight one corresponds to one matrix element of $T$ introduced in the rewriting. The vertex that is removed from the original graph matches the index over which the summation was carried out. Recalling that $\|T\|\leq CN$, the weight of the edge equals the power of $N$ in the bound of the corresponding matrix. This rule will hold more generally along the rewriting procedure, as every edge with nonzero weight $w$ will be represented by a matrix $M$ with $\|M\|\leq CN^w$. Step~II corresponds to evaluating summations of the form $\sum_{b}(M_1)_{a,b}(M_2)_{b,c}$ by introducing elements of the product of the respective matrices. Here, $M_1$ and $M_2$ denote any two matrices corresponding to edges of nonzero weight that have been obtained in the rewriting procedure previously, and $a,b,c\in\{a_1,\dots,a_k\}$ with $b\neq a$ and~$b\neq c$ denote three summation indices. Moreover, $M_1M_2$ represents the matrix corresponding to the new edge and, as~${\|M_1M_2\|\leq\|M_1\|\ \|M_2\|}$, we obtain an explicit norm bound. Hence, adding the weights of the two edges representing~$M_1$ and $M_2$ on the graph matches the power of~$N$ obtained when the bounds for the respective matrix norms are multiplied. Step III corresponds to carrying out two more summations on the left-hand side of~\eqref{2-cumu-term} to replace one matrix element and one~2-cumulant by a suitable new matrix element. Let $M$ denote the matrix corresponding to the edge of nonzero weight connecting the $(l+1)$-th and $m$-th vertices,~$l,m\in\{1,\dots,k\}$; then
\begin{align}\label{new-matrix}
\sum_{a_{l+1},a_m}M_{a_{l+1},a_m}\kap{2}(a_la_{l+1},a_ma_{m+1})=:\widetilde{M}_{a_l,a_{m+1}},
\end{align}
where the norm of the new matrix can be estimated by $\|\smash{\widetilde{M}}\|\leq CN\|M\|$ following an argument similar to~\eqref{norm-B}. The additional $N$-power in the bound of the matrix norm again matches the weight of the corresponding new edge assigned in Step III of the algorithm. Therefore, the power of $N$ in the bound for the norm of any matrix $M$ obtained in the procedure coincides with the respective weight assigned to the corresponding edge introduced in Steps~II and III. As a result of the rewriting, the term on the left-hand side of~\eqref{2-cumu-term} is reduced to a summation over a product of 2-cumulants and matrix elements.
The power of $N$ in the norm bounds of the respective matrices is readily obtained from~$\smash{\widetilde{\Gamma}_{k'}(\pi)}$, while any vertices remaining in the weighted graph correspond to a summation over the respective index that has not been removed. Further, the pairings of the remaining 2-cumulants have no non-crossing subset. \underline{\smash{Step 2: Estimates via summing-in steps}}\\ After Step 1, summations over a certain number $k'\leq k$ of indices $a_{l_1},\dots,a_{l_{k'}}$ satisfying~${l_1<\dots<l_{k'}}$, remain. We rename them as~${b_1=a_{l_1}, b_2=a_{l_2}, \dots, b_{k'}=a_{l_{k'}}}$. Recall that Step IV of the reduction algorithm terminates in one of two possible states: either~$\smash{\widetilde{C}(\pi)}=\emptyset$ or in $(\smash{\widetilde{\Gamma}_{k'}(\pi)},\smash{\widetilde{C}(\pi)})$ any two edges adjacent to an edge with nonzero weight belong to different pairings. Assume first that Step 1 reduces the graph $(\Gamma_k,C(\pi))$ to a single vertex $b_1$ connected to itself by an edge of weight $k/2$. Recall that this full reduction occurs whenever $\pi$ is non-crossing. The corresponding term involving 2-cumulants thus equals \begin{displaymath} \sum_{b_1}M_{b_1,b_1}=\mathrm{tr}(M), \end{displaymath} where~$M$ denotes the matrix corresponding to the loop in $\smash{\widetilde{\Gamma}_{k'}}$. By Step~1,~${\|M\|\leq CN^{k/2}}$, giving $|\mathrm{tr}(M)|\leq CN^{k/2+1}$. Hence, this term gives a leading contribution to~\eqref{cumu-expansion}. In the second case, i.e., whenever $\pi$ is crossing, at least two pairings remain in $\smash{\widetilde{C}(\pi)}$. If $M^{[j]}$ denotes a matrix that corresponds to an edge of weight $j$ in $\smash{\widetilde{\Gamma}_{k'}}$, then $\|\smash{M^{[j]}}\|\leq CN^j$. Hence, by defining \begin{equation}\label{def-vector-t} \Big(\mathbf{x}^{[M,j]b_l}\Big)_{b_{l+1}}:=\frac{1}{N^j}M_{b_l,b_{l+1}}^{[j]},\ b_{l+1}=1,\dots,N \end{equation} with $l\in\{1,\dots,k'\}$ and $k'+1=1$, we obtain a vector that satisfies \begin{equation}\label{uni-bound-vector} \|\mathbf{x}^{[M,j]b_l}\|_2\leq C \end{equation} uniformly for any choice of~$b_l$. Next, we sum in the vectors using the notation of Lemma~\mathrm{e}f{ext-red-rule-2-cumu}. Going through the newly introduced vectors~\eqref{def-vector-t} in increasing order of the corresponding~$l$, we can replace some of the indices in the remaining 2-cumulants by vectors by carrying out the summations over the corresponding index~$b_{l+1}$ explicitly. Since all vectors are defined from the rows of the respective matrices in~\eqref{def-vector-t}, summing over the corresponding $b_{l+1}$ implies that the sum-in procedure always involves the first index of an index pair, i.e., at most two indices in each~2-cumulant are replaced by a vector. Moreover, the result neither involves summation over internal indices nor, as a consequence of Step~III, an index that occurs in the same~2-cumulant both as a superscript of a vector and an argument. In Fig.~\mathrm{e}f{fig4} we visualize the rewriting and summing in procedure for the term on the left of Fig.~\mathrm{e}f{fig2}. The small arrows in Fig.~\mathrm{e}f{fig4} point to the index that will be summed in. Again, edge weights that are equal to zero are left out. \begin{figure} \caption{Visualization of the rewriting and summing in for a crossing partition.} \label{fig4} \end{figure} The term obtained after the sum-in procedure is now tractable by recursive summation as illustrated in Examples~\mathrm{e}f{ex-nointindices} and~\mathrm{e}f{ex-intindices}. 
We give the algorithm below.
\begin{itemize}
\item[Step i] Identify the index $b_l$ in the sum with the smallest subscript~$l$. Whenever the index occurs both as a superscript of a vector and as an argument, start the procedure with the 2-cumulant that involves $b_l$ as an argument. Otherwise, use an estimate similar to~\eqref{start-sum} with $a_1$ replaced by $b_l$ and start the procedure with the~2-cumulant that involves~$b_l$.
\item[Step ii] Isolate the chosen~2-cumulant and perform the summation over the indices it involves similarly to~\eqref{eq-ex-withvectors}. Whenever an index occurs both as a superscript of a vector and as an argument in this step, carry out the summation for the 2-cumulant that involves the index as an argument and use the uniform bound~\eqref{uni-bound-vector} for the~vector.
\item[Step iii] Repeat Step ii, i.e., identify the index $b_l$ in the remaining sum with the smallest $l$ and carry out Step ii for the 2-cumulant that involves $b_l$ as an argument, until all summations have been performed.
\end{itemize}
Note in particular that the bounds obtained from Lemmas~\ref{red-rule-2-cumu} and~\ref{ext-red-rule-2-cumu} in Step ii follow~\eqref{CR}. It remains to collect the factors $N^j$ resulting from the normalization~\eqref{def-vector-t}. To check the validity of~\eqref{CR}, we compare the number of summations that were carried out with the order of the bound that is obtained in exchange. First, observe that introducing each matrix element of~$T^j$ requires carrying out summations over~${2j-1}$ consecutive indices while~${\|T^j\|\leq CN^j}$. Hence, these matrix elements yield a factor $\sqrt{N}$ more than prescribed by~\eqref{CR} if estimated trivially. Moreover, carrying out a rewriting step corresponding to Step II or III on the graph does not change this fact. Indeed, if the matrix elements of $M_1$ and~$M_2$ are obtained from carrying out~${2j_1-1}$ and~${2j_2-1}$ summations and the matrices are bounded by $CN^{j_1}$ and~$CN^{j_2}$ in norm, respectively, then the matrix elements of $M_1M_2$ account for~$2(j_1+j_2)-1$ summations while the matrix itself obeys a norm bound of order $N^{j_1+j_2}$. Similarly, if the matrix elements of~$M$ are obtained from carrying out $2j-1$ summations and $\|M\|\leq CN^j$, then the matrix elements of $\smash{\widetilde{M}}$ in~\eqref{new-matrix} account for two more summations, i.e., $2(j+1)-1$ in total, while the power of~$N$ in the norm bound is increased by one. Since along the sum-in procedure one additional summation is carried out per matrix to replace an index by a vector of the form~\eqref{def-vector-t} after the rewriting of the term, the additional factor of $\sqrt{N}$ per matrix is balanced out, so that all estimates obtained comply with~\eqref{CR}. Hence, at most $k+1$ powers of $\sqrt{N}$ are collected from bounding the~$k$ summations, with the extra factor~$\sqrt{N}$ arising whenever an estimate similar to~\eqref{start-sum} is used to start the recursive summation procedure. Thus, the bound is of order $N^{-1/2}$ here, i.e., stronger than~\eqref{2-cumu-term}, showing that all crossing pairings are subleading. This completes the proof of~\eqref{2-cumu-term}.
\section{Proof of~\eqref{est-by-const} in the General Case}\label{sect-hi-cumu-estimates}
Let now $k\geq1$ be arbitrary. We aim to show that
\begin{equation}\label{k-cumu-term}
N^{-k/2-1}\Big|\sum_{a_1,\dots,a_k}\prod_{B\in\pi}\kap{|B|}(a_ja_{j+1}|j\in B)\Big|\leq C(k)
\end{equation}
for all $\pi\in\Pi_k$.
This would complete the proof of~\eqref{est-by-const} via~\eqref{cumu-expansion}. We start by proving the necessary bounds for $j$-cumulants with~${j\geq3}$ and give the proof of~\eqref{k-cumu-term} in Section~\mathrm{e}f{sect-put-together}. Note that no internal index is summed~up below. \begin{lemma}\label{red-rule-3-cumu} Assume that \eqref{A3} holds. Then \begin{align} \sum_{a_1,a_2,a_3,a_4}|\kap{3}(a_1a_2,a_3a_4,a_5a_6)|&\leq C,\label{eq-rr3c-1}\\ \sum_{a_1,\dots,a_5}|\kap{3}(a_1a_2,a_3a_4,a_5a_6)|&\leq CN,\label{eq-rr3c-2}\\ \sum_{a_1,\dots,a_6}|\kap{3}(a_1a_2,a_3a_4,a_5a_6)|&\leq CN^2\label{eq-rr3c-3} \end{align} uniformly for any choice of the unsummed indices. In particular, the estimates follow~\eqref{CR}. \end{lemma} \begin{proof} Applying~\eqref{A3-k-cumu} to the 3-cumulant in~\eqref{eq-rr3c-1} and taking the maximum in a suitable way in the resulting terms, we obtain, e.g., \begin{align*} &\sum_{a_1,\dots,a_4}|\kap{2}(a_1a_2,a_3a_4)\kap{2}(a_3a_4,a_5a_6)|\leq \Big(\max_{a_3,a_4}\sum_{a_1,a_2}|\kap{2}(a_1a_2,a_3a_4)|\Big)\sum_{a_3,a_4}|\kap{2}(a_3a_4,a_5a_6)|, \end{align*} as well as similar bounds for the terms corresponding to the other possible spanning trees. Hence, the summation over up to two index pairs is bounded by a constant, giving~\eqref{eq-rr3c-1}. The remaining estimates~\eqref{eq-rr3c-2} and~\eqref{eq-rr3c-3} readily follow, as the additional summations yield a factor of $N$ or $N^2$, respectively. \end{proof} Observe that the bounds given in~\eqref{eq-rr3c-1} and~\eqref{eq-rr3c-2} imply that, e.g., \begin{align} \sum_{a_2}|\kap{3}(a_1a_2,a_2a_3,a_4a_5)|\leq\sum_{a_2,a_2'}|\kap{3}(a_1a_2,a_2'a_3,a_4a_5)|\leq C\leq C\sqrt{N}\label{dummy-ind} \end{align} uniformly for any choice of the unsummed indices. In particular, the estimates comply with~\eqref{CR} and there is no need to perform summations over internal indices separately as in Section~\mathrm{e}f{sect-2-cumu-estimates}. The only exception occurs for $k=3$, where one obtains \begin{equation}\label{k3-term} \sum_{a_1,a_2,a_3}|\kap{3}(a_1a_2,a_2a_3,a_3a_1)|\leq\sum_{a_1,a_2,a_3,a_1',a_2',a_3'}|\kap{3}(a_1a_2,a_2'a_3,a_3'a_1')|\leq CN^2 \end{equation} instead of the bound of order $N^{3/2}$ prescribed by the counting rule. As~\eqref{k3-term} is the only term for $k=3$ due to \eqref{A1}, and $k/2+1=5/2$, it follows that~\eqref{est-by-const} holds for $k=3$. Hence, we can exclude the $k=3$ case from the following analysis and assume that all estimates for~3-cumulants with or without internal indices comply with~\eqref{CR}. Similarly to Lemma~\mathrm{e}f{ext-red-rule-2-cumu}, we define \begin{equation}\label{3-cumu-vector} \kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6):=\sum_{a_1}\kap{3}(a_1a_2,a_3a_4,a_5a_6)x_{a_1} \end{equation} for~$\mathbf{x}\in\mathds{R}^N$ with terms such as $\kap{3}(\mathbf{x}a_2,a_3\mathbf{y},a_5a_6)$ being defined in the same way. Again, we follow the convention that vectors only occur as the first index of every index pair. \begin{lemma}\label{ext-red-rule-3-cumu} Assume that \eqref{A3} holds and let $\mathbf{x},\mathbf{y},\mathbf{z}\in\mathds{R}^N$. Then the summation over any number of indices of~$|\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)|$, $|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,a_5a_6)|$, or~$|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,\mathbf{z}a_6)|$ satisfies~\eqref{CR}. 
Thus, for one vector we, e.g., have \begin{align} \sum_{a_2}|\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)|&\leq CN^{1/2}\|\mathbf{x}\|_2,\label{eq-err3c-1}\\ \sum_{a_2,a_4}|\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)|&\leq CN\|\mathbf{x}\|_2\label{eq-err3c-2} \end{align} uniformly for any choice of the unsummed indices. Similar bounds hold if two or three vectors are involved, respectively, e.g., \begin{align} \sum_{a_5}|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,a_5a_6)|&\leq CN^{1/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2,\label{eq-err3c-5}\\ \sum_{a_2}|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,\mathbf{z}a_6)|&\leq CN^{1/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2\|\mathbf{z}\|_2.\nonumber \end{align} Moreover, we have bounds that are stronger than~\eqref{CR} by a factor of~$\sqrt{N}$ for every summation that is carried out over two consecutive indices belonging to different index pairs,~e.g., \begin{align} \sum_{a_4,a_5}|\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)|&\leq CN^{1/2}\|\mathbf{x}\|_2.\label{eq-err3c-4}\\ \sum_{a_2,\dots,a_6}|\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)|&\leq CN^{3/2}\|\mathbf{x}\|_2,\label{eq-err3c-3}\\ \sum_{a_4,a_5}|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,a_5a_6)|&\leq CN^{1/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2.\label{eq-err3c-6} \end{align} In particular, for 3-cumulants, a summation over an internal index also contributes at most a factor of~$\sqrt{N}$. \end{lemma} \begin{proof} The proof is divided into three general arguments. First, as the bounds obtained in Lemma~\mathrm{e}f{red-rule-3-cumu} are stronger than~\eqref{CR}, estimating the vector elements by $\max_j|v_j|\leq\|\mathbf{v}\|_2$ for~${\mathbf{v}=\mathbf{x},\mathbf{y},\mathbf{z}}$, respectively, and performing the summation directly yields the desired estimate in most cases. Whenever this argument does not yield a sufficiently strong bound, we apply the Cauchy-Schwarz inequality similar to the proof of Lemma~\mathrm{e}f{ext-red-rule-2-cumu}. Lastly, it remains to check that the estimates obtained are strong enough to satisfy the second part of the lemma and also comply with~\eqref{CR} if summation over internal indices is included. As the latter can be treated by introducing additional summation labels similar to~\eqref{dummy-ind}, it is enough to derive suitable bounds for distinct $a_1,\dots,a_6$. Assume first that the 3-cumulant involves only one vector. Here, estimating $|x_j|\leq\|\mathbf{x}\|_2$ and applying~\eqref{eq-rr3c-1} yields \begin{align} \sum_{a_2,a_3,a_4}|\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)|&\leq\|\mathbf{x}\|_2\sum_{a_1,\dots,a_4}|\kap{3}(a_1a_2,a_3a_4,a_5a_6)|\leq C\|\mathbf{x}\|_2,\label{eq-err3cp-1} \end{align} which immediately implies~\eqref{eq-err3c-1} and~\eqref{eq-err3c-2}. The same $N$-independent bound as~\eqref{eq-err3cp-1} holds if one sums over~$a_2,a_5,a_6$ instead. Whenever all three index pairs are involved in the summation, Lemma~\mathrm{e}f{red-rule-3-cumu} implies a bound of order~$N$,~i.e., \begin{align} \sum_{a_2,\dots,a_5}|\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)|&\leq\|\mathbf{x}\|_2\sum_{a_1,\dots,a_5}|\kap{3}(a_1a_2,a_3a_4,a_5a_6)|\leq CN\|\mathbf{x}\|_2,\label{eq-err3cp-2} \end{align} and the same estimate holds if the summation over some other index $a_2,\dots,a_5$ is left out instead of $a_6$. In particular, all summations over two, three and four distinct indices yield a bound of at most order $N$, which complies with~\eqref{CR}. The validity of~\eqref{CR} for summation over five distinct indices follows from~\eqref{eq-rr3c-3}. 
Now we prove~\eqref{eq-err3c-4} and~\eqref{eq-err3c-3} by applying~\eqref{A3-k-cumu} and estimating one 2-cumulant in the bound similar to~\eqref{1sum-1vector} in the proof of Lemma~\mathrm{e}f{ext-red-rule-2-cumu}. This leads to \begin{align*} &\sum_{a_1,a_3,a_5}|x_{a_1}\kap{2}(a_1a_2,a_3a_4)\kap{2}(a_3a_4,a_5a_6)|\\ &\leq\Big(\max_{a_3}\sum_{a_5}|\kap{2}(a_3a_4,a_5a_6)|\Big)\sum_{a_1,a_3}|x_{a_1}\kap{2}(a_1a_2,a_3a_4)|\leq CN^{1/2}\|\mathbf{x}\|_2, \end{align*} which, together with similar estimates obtained for the other possible spanning trees on the complete graph on three vertices, proves~\eqref{eq-err3c-4}. The estimate in~\eqref{eq-err3c-3} follows similarly. Applying~\eqref{dummy-ind} and~\eqref{eq-err3cp-2} implies the remaining stronger bounds for the second part of the lemma, which completes the proof for $\kap{3}(\mathbf{x}a_2,a_3a_4,a_5a_6)$. Next, assume that two vectors are involved in~$\kap{3}$. Here, estimating $|x_j|\leq\|\mathbf{x}\|_2$ and~${|y_j|\leq\|\mathbf{y}\|_2}$, applying~\eqref{eq-rr3c-1} yields \begin{align} \sum_{a_2,a_4}|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,a_5a_6)|&\leq\|\mathbf{x}\|_2\|\mathbf{y}\|_2\sum_{a_1,\dots,a_4}|\kap{3}(a_1a_2,a_3a_4,a_5a_6)|\leq C\|\mathbf{x}\|_2\|\mathbf{y}\|_2.\label{eq-err3cp-3} \end{align} Further, arguing similarly to~\eqref{1sum-1vector} gives \begin{align*} &\sum_{a_1,a_3,a_5}|x_{a_1}y_{a_3}\kap{2}(a_3a_4,a_5a_6)\kap{2}(a_5a_6,a_1a_2)|\leq CN^{1/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2, \end{align*} which, together with bounds of at most order $N^{1/2}$ for the terms corresponding to the other possible spanning trees, implies~\eqref{eq-err3c-5}. An analogous estimate holds for summation over $a_6$, showing the validity of~\eqref{CR} for a single summation. The remaining cases for summation over distinct indices follow similar to~\eqref{eq-err3cp-3} by applying~\eqref{eq-rr3c-2} and~\eqref{eq-rr3c-3}. Note, however, that the above bounds are again not sufficient to imply~\eqref{CR} for summation over internal indices. Arguing as in the proof of Lemma~\mathrm{e}f{ext-red-rule-2-cumu} yields the stronger estimate \begin{align*} &\sum_{a_1,a_3,a_4,a_5}|x_{a_1}y_{a_3}\kap{2}(a_1a_2,a_3a_4)\kap{2}(a_3a_4,a_5a_6)|\leq CN^{1/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2. \end{align*} Again, one can estimate similarly for the terms corresponding to the other possible spanning trees, which implies~\eqref{eq-err3c-6}. Lastly, one needs to check that also \begin{align*} \sum_{a_2,a_4,a_5}|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,a_4a_5)|&\leq\sum_{a_2,a_4,a_5,a_4'}|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,a_4'a_5)|\leq CN^{3/2}\|\mathbf{x}\|_2\|\mathbf{y}\|_2, \end{align*} which follows from a similar argument. This completes the proof for~$\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,a_5a_6)$. For 3-cumulants that involve three vectors, note that writing out the term similar to~\eqref{3-cumu-vector} already involves all three index pairs. Whenever two summations are carried out, the estimates obtained from Lemma~\mathrm{e}f{red-rule-3-cumu} comply with~\eqref{CR},~e.g., \begin{align*} \sum_{a_2,a_4}|\kap{3}(\mathbf{x}a_2,\mathbf{y}a_4,\mathbf{z}a_6)|&\leq\|\mathbf{x}\|_2\|\mathbf{y}\|_2\|\mathbf{z}\|_2\sum_{a_1,\dots,a_5}|\kap{3}(a_1a_2,a_3a_4,a_5a_6)|\leq CN\|\mathbf{x}\|_2\|\mathbf{y}\|_2\|\mathbf{z}\|_2, \end{align*} but a different argument is needed for estimating zero, one, or three summations. 
Here, arguing as in the proof of Lemma~\mathrm{e}f{ext-red-rule-2-cumu} gives \begin{align*} &\sum_{a_2}\sum_{a_1,a_3,a_5}|x_{a_1}y_{a_3}z_{a_5}\kap{2}(a_1a_2,a_3a_4)\kap{2}(a_3a_4,a_5a_6)|\\ &\leq\|\mathbf{z}\|_2\Big(\max_{a_3}\sum_{a_5}|\kap{2}(a_3a_4,a_5a_6)|\Big)\sum_{a_1,a_2,a_3}|x_{a_1}y_{a_3}\kap{2}(a_1a_2,a_3a_4)|\leq C\|\mathbf{x}\|_2\|\mathbf{y}\|_2\|\mathbf{z}\|_2 \end{align*} together with similar $N$-independent bounds for the terms corresponding to the other possible spanning trees. By a similar argument, we obtain a bound of order $N$ for three summations. As internal indices cannot occur here, the proof of the lemma is complete. \end{proof} Lastly, we derive the necessary estimates to incorporate cumulants of order four and higher. As no exceptions similar to~\eqref{k3-term} occur, we directly include summations over internal indices into the bound. \begin{lemma}\label{est-high-cumu} Assume that \eqref{A3} holds and let $j\geq4$. Then \begin{align*} \sum_{a_1,\dots,a_{2j}}|\kap{j}(a_1a_2,\dots,a_{2j-1}a_{2j})|\leq CN^2. \end{align*} \end{lemma} \begin{proof} After applying~\eqref{A3-k-cumu}, bound the term by considering all possible spanning trees on the complete graph on $j$ vertices. Starting the summation at the leaves of the respective tree, sum up one index pair of each 2-cumulant. Continuing the procedure until the root of the tree is reached, an $N$-independent bound is obtained in every step. Finally, a summation over all four indices of the last 2-cumulant remains, which yields the claimed bound of order $N^2$ by applying the last estimate of Lemma~\mathrm{e}f{red-rule-2-cumu}. \end{proof} Note that Lemma~\mathrm{e}f{est-high-cumu} in particular implies that \begin{displaymath} \sum_{a_1,\dots,a_j}|\kap{j}(a_1a_2,a_2a_3,a_3a_4,\dots,a_ja_1)|\leq\sum_{a_1,\dots,a_j,a_1',\dots,a_j'}|\kap{j}(a_1a_2,a_2'a_3,\dots,a_j'a_1')|\leq CN^2 \end{displaymath} for any $j\geq4$ such that no special treatment is needed for~\eqref{CR}. Moreover, whenever $j\geq5$, the bound is stronger than~\eqref{CR}. However, Lemma~\mathrm{e}f{est-high-cumu} only gives a bound of $N^2$ independent of the number of summations carried out. \subsection{Including 3-cumulants}\label{sect-put-together} Building on the proof of~\eqref{2-cumu-term}, assume next that $\pi\in\mathds{P}i_k$ is chosen such that $|B|\in\{2,3\}$ for all $B\in\pi$. Here, we obtain the following bound. \begin{lemma}\label{2-3-cumu-bound} Under assumptions \eqref{A1}-\eqref{A3}, let $k\geq1$ and choose a partition $\pi\in\mathds{P}i_k$ with~${|B|\in\{2,3\}}$ for all $B\in\pi$. Then \begin{equation}\label{2-3-cumu-term} N^{-k/2-1}\sum_{a_1,\dots,a_k}\prod_{B\in\pi}|\kap{|B|}(a_ja_{j+1}|j\in B)|\leq C(k). \end{equation} \end{lemma} \begin{proof} Excluding the case where $|B|=2$ for all $B\in\pi$, i.e.,~\eqref{2-cumu-term}, assume that the partition involves at least one set with three elements. To estimate the left-hand side of~\eqref{2-3-cumu-term}, we extend the argument used in the proof of~\eqref{2-cumu-term} for crossing partitions. The first step is to sum up internal indices. Since the summation over internal indices in 3-cumulants does not interfere with~\eqref{CR}, only the 2-cumulants that involve internal indices are considered here. Next, we sum in the matrix elements that were obtained from the rewriting and, in the third step, estimate the summation that remains. \underline{\smash{Step 1: Rewriting}}\\ Consider the graph~$\Gamma_k$ and the~(ordered) set $E_k=\{(a_1,a_2),\dots,(a_k,a_1)\}$. 
Similarly to the proof of~\eqref{2-cumu-term}, we go through its elements one by one to extract an (ordered) set $C(\pi)$ of groupings in which each element of $E_k$ occurs exactly once. The elements of~$C(\pi)$ are pairs and~3-tuples from~$E_k$ corresponding to the 2- and~3-cumulants. By restricting the algorithm from the proof of~\eqref{2-cumu-term} to the pairs in $C(\pi)$, extract a weighted graph~$(\smash{\widetilde{\Gamma}_{k'}}(\pi),\smash{\widetilde{C}}(\pi))$ on~$k'\leq k$ vertices that corresponds to the term obtained from rewriting all summations over internal indices in the~2-cumulants. Recall that the norm bounds for any matrix resulting from the rewriting procedure are recorded in $\smash{\widetilde{\Gamma}_{k'}}(\pi)$ as the weights assigned to the edges. We rename the remaining~$k'$ summation indices as~$b_1,\dots,b_{k'}$. The resulting graph $(\smash{\widetilde{\Gamma}_{k'}}(\pi),\smash{\widetilde{C}}(\pi))$ has the property that any two edges adjacent to an edge with nonzero weight do not belong to the same pair. However, they may still belong to the same 3-tuple, as 3-tuples directly transfer from~$C(\pi)$ to $\smash{\widetilde{C}}(\pi)$. \underline{\smash{Step 2: Sum-In}}\\ Similar to the proof of~\eqref{2-cumu-term}, we aim to rewrite the matrix elements obtained in Step 1 into vectors of the form~\eqref{def-vector-t} and perform a sum-in procedure to replace some of the indices with vectors in the remaining 2- and 3-cumulants. Consider the possible subgraphs in which an edge with nonzero weight is adjacent to two edges that belong to the same~3-tuple. Here, rewriting and estimating the term using the vectors in~\eqref{def-vector-t} fails similarly to the second part of Example~\mathrm{e}f{ex-intindices} such that the corresponding terms have to be treated differently. We visualize the respective subgraphs in Fig.~\mathrm{e}f{fig5} below, where edges belonging to the same~3-tuple are indicated by the same color and~$w_j$~(resp.~$w_{j_1},w_{j_2}$) denotes some nonzero edge weight. For simplicity, the labels of the individual vertices and edge weights that are equal to zero are left out and the remainder of the graph is indicated by horizontal~dots. \begin{figure} \caption{The three types of subgraphs to be considered separately.} \label{fig5} \end{figure} Next, we gather the 3-cumulant and the matrix element(s) corresponding to the respective subgraphs in Fig.~\mathrm{e}f{fig5} into one term that we refer to as type~1~(solid), type~2~(dashed) or type~3~(dotted), respectively. Note that the two solid and two dashed graphs look very similar, but are still not identical since the cyclic ordering breaks the symmetry. As a result of Step~III of the algorithm used in the proof of~\eqref{2-cumu-term}, any matrix element that does not occur in a term of type~1,~2 or~3 can be treated using the same sum-in procedure as in the proof of~\eqref{2-cumu-term}, i.e., by applying~\eqref{def-vector-t} and performing the summation over the corresponding $b_{l+1}$ explicitly. Again, defining the vectors by the rows of the respective matrices ensures that only the first index of every index pair in the remaining 2- and~3-cumulants may be replaced by a vector. At this point, any separate matrix element occurs in a term of type 1, 2 or~3. Note that also up to one~(types~1 and 3) or two (type 2) indices of the~3-cumulant in the respective terms may have been replaced by a vector along the sum-in procedure. 
\underline{\smash{Step 3: Estimates}}\\
To estimate the remaining summations, we apply a recursive summation procedure similar to the one in the proof of~\eqref{2-cumu-term}. Recall that the summation over any number of indices of a 2- or~3-cumulant follows~\eqref{CR} by Lemmas~\ref{red-rule-2-cumu},~\ref{ext-red-rule-2-cumu},~\ref{red-rule-3-cumu}, and~\ref{ext-red-rule-3-cumu}. This includes in particular summations over internal indices in the 3-cumulants and any summation over internal indices of 2-cumulants resulting in the sum-in of the vectors in~\eqref{def-vector-t}. Hence, it remains to show that summations over terms of type 1,~2, and~3, respectively, comply with~\eqref{CR}. Here, we estimate the matrix element or elements trivially by the norm of the corresponding matrix. This yields a factor of~$\sqrt{N}$ more than prescribed by the counting rule, which is compensated by applying the stronger bounds from Lemma~\ref{red-rule-3-cumu} or~\ref{ext-red-rule-3-cumu}, respectively, for the 3-cumulant. We give the details below.

Assume first that the summation on the left-hand side of~\eqref{k-cumu-term} reduces to one single term of type~1, with the internal index occurring, e.g., at $b_4$, or of type~3. We obtain from Lemma~\ref{red-rule-3-cumu} and~\eqref{dummy-ind} that
\begin{align*}
\sum_{b_1,\dots,b_5}|\kap{3}(b_1b_2,b_3b_4,b_4b_5)|\leq CN^2,\quad \sum_{b_1,\dots,b_6}|\kap{3}(b_1b_2,b_3b_4,b_5b_6)|\leq CN^2.
\end{align*}
Observe that these estimates are better than the bound prescribed by~\eqref{CR} by one or two factors of $\sqrt{N}$, respectively. Hence, the final bound for the term follows~\eqref{CR} again. Whenever a term of type~1,~2, or~3 is encountered along the recursive summation procedure, we argue similarly. Recall that Step ii used in the proof of~\eqref{2-cumu-term} requires summing over all remaining indices for every 2-cumulant that is encountered. For terms of type~1,~2, or~3, we modify this step and sum up all but one (types~1 and~3) or two (type~2) indices instead, always leaving the index $b_l$ for the largest value of $l$ in each connected subgraph unsummed. The index or indices that are left out always belong to another 2- or~3-cumulant or term of type~1,~2, or~3 and are thus considered in later steps of the procedure. The summation rule is visualized in Fig.~\ref{fig6} below, where the large dots denote the indices that will be summed up after estimating the matrix element(s) trivially.
\begin{figure}
\caption{The summation rule for terms of type 1,~2~and 3.}
\label{fig6}
\end{figure}
For the terms of type 1 or~3, leaving one index of the respective~3-cumulant unsummed always allows for a bound of order $N$ by Lemma~\ref{red-rule-3-cumu} or~\eqref{eq-err3cp-2}, i.e., at least $(\sqrt{N})^2$ better than~\eqref{CR} would give, which compensates for the additional factor of~$\sqrt{N}$ or~$N$ from estimating the matrix elements, respectively. The same holds for terms of type 2. Note, however, that due to the dashed subgraphs in Fig.~\ref{fig6} having two connected components, we may also encounter the case that only two summations remain. Here, we apply~\eqref{eq-rr3c-1},~\eqref{eq-err3c-4} or~\eqref{eq-err3c-6} whenever the 3-cumulant involves zero, one or two vectors, respectively. This allows us to estimate the remaining two summations by a bound of at most order $N^{1/2}$, showing that~\eqref{CR} holds again.
Hence, we can apply a recursive summation procedure similar to the one used in the proof of~\eqref{2-cumu-term}, given by the following modified algorithm.
\begin{itemize}
\item[Step i] Identify the summation index $b_l$ occurring in a term of type 1, 2 or 3 that corresponds to the smallest subscript $l$. If there are no such terms, carry out Step i as in the proof of~\eqref{2-cumu-term}, possibly choosing an index occurring in a~3-cumulant.
\item[Step ii] If the chosen index occurs in a term of type~1,~2 or~3, estimate the corresponding matrix element(s) using the bound for the matrix norm, then isolate the corresponding~3-cumulant and perform the summation according to Fig.~\ref{fig6}. Otherwise, carry out Step~ii as in the proof of~\eqref{2-cumu-term}, possibly applying Lemma~\ref{red-rule-3-cumu} or~\ref{ext-red-rule-3-cumu} if the index chosen in Step i occurs in a 3-cumulant.
\item[Step iii] Identify the index $b_l$ in the remaining sum that corresponds to the smallest subscript~$l$ and carry out Step ii until all summations have been evaluated.
\end{itemize}
Note that starting the summation with estimating a matrix element breaks the cyclic structure of the graph similarly to~\eqref{start-sum}. Further, we have shown that all estimates obtained in Step ii follow~\eqref{CR}. The final bound for the sum obtained at the end of the procedure is thus at most of order $N^{k/2+1/2}$, showing that we obtain a sub-leading contribution to~\eqref{cumu-expansion} if the term on the left-hand side of~\eqref{k-cumu-term} involves~3-cumulants.
\end{proof}
\subsection{Including Cumulants of Order Four and Higher}
With the necessary tools established, we proceed to estimating the summation~\eqref{k-cumu-term} in the general case, i.e., when the cumulants (tuples) may have arbitrary order.
\begin{proof}[Proof of~\eqref{k-cumu-term}]
Let $k\in\mathds{N}$ and $\pi\in\Pi_k$ be arbitrary. Excluding the cases already considered, we can assume that $k\geq4$ and that the partition includes at least one set of four or more elements. As the term on the left-hand side of~\eqref{k-cumu-term} may involve summations over internal indices in 2-cumulants, we first follow Step 1 of the proof of Lemma~\ref{2-3-cumu-bound}, treating~$j$-cumulants for $j>3$ similarly to the 3-cumulants. The term resulting from this procedure is a product of cumulants of order three and higher, 2-cumulants that do not involve internal indices, and the matrix elements corresponding to the edges of nonzero weight in $(\smash{\widetilde{\Gamma}_{k'}}(\pi),\smash{\widetilde{C}}(\pi))$. Again, we rename the indices remaining after the rewriting step as~${b_1,\dots,b_{k'}}$. Next, we follow Step 2 of the proof of Lemma~\ref{2-3-cumu-bound} to incorporate some of the matrix elements into the remaining 2- and 3-cumulants by applying the sum-in procedure. Further, any matrix elements occurring in a term of type 1, 2 or 3 (see Fig.~\ref{fig5}) are gathered together with the corresponding 3-cumulant. Assume first that the partition~$\pi$ is chosen such that all matrix elements remaining after the sum-in procedure are associated with a term of type~1,~2 or~3. Recall that the counting rule established for 2- and 3-cumulants prescribes that every individual summation yields a factor of~$\sqrt{N}$, while the estimate for higher-order cumulants yields a contribution of $N^2$ when summing over all indices in the respective cumulant.
To apply both rules simultaneously, we divide the terms obtained from the rewriting procedure into two factors $F$ and $G$, where $G$ collects the 2- and 3-cumulants, as well as the terms of type 1, 2 and 3 and the additional factors of $N$ obtained from the $N^{-j}$ normalization in~\eqref{def-vector-t}, while the higher-order cumulants constitute $F$. Whenever no rewriting is required, we can split the left-hand side of~\eqref{k-cumu-term} directly into
\begin{align*}
F(b_1,\dots,b_k)&:=\prod_{\substack{B\in\pi\\|B|\geq4}}|\kap{|B|}(b_jb_{j+1}|j\in B)|,\quad G(b_1,\dots,b_k):=\prod_{\substack{B\in\pi\\2\leq|B|\leq3}}|\kap{|B|}(b_jb_{j+1}|j\in B)|.
\end{align*}
Next, consider the set $X:=\{j: b_j\text{ appears in a cumulant that belongs to }F\}$. Abbreviating~$b_X=\{b_j: j\in X\}$ and $b_{X^c}=\{b_1,\dots,b_{k'}\}\setminus b_X$, it follows that
\begin{align*}
\sum_{b_1,\dots,b_{k'}}F(b_X)G(b_1,\dots,b_{k'})&\leq\Big(\sum_{b_X}F(b_X)\Big)\Big(\max_{b_X}\sum_{b_{X^c}}G(b_1,\dots,b_{k'})\Big)
\end{align*}
by taking the maximum over all indices in $b_X$ in the factors that occur in $G$. Let $n\geq1$ be the number of factors in~$F$. Since every cumulant included in $F$ is of order four or higher, each factor involves at least eight indices as arguments. As every index belongs to two edges and, hence, has to appear exactly twice in the product of~$F$ and $G$, there are at least $4n$ distinct indices in~$F$. By definition, the total number of distinct indices occurring in $F$ is equal to~$|X|$, so that $|X|\geq 4n$. Introducing additional summation labels $b_1',\dots,b_l'$ to sum over all indices involved in the respective factors and applying Lemma~\ref{est-high-cumu} thus implies
\begin{align}\label{bound-part1}
\sum_{b_X}F(b_X)\leq\sum_{b_X,b_1',\dots,b_l'}F(b_X,b_1',\dots,b_l')\leq C(N^2)^n\leq CN^{|X|/2}.
\end{align}
For the summation involving $G$, we follow Step 3 from the proof of Lemma~\ref{2-3-cumu-bound}. This yields
\begin{align}\label{bound-part2}
\max_{b_X}\sum_{b_{X^c}}G(b_1,\dots,b_{k'})\leq CN^{(k-|X|)/2+1/2}
\end{align}
from applying the recursive summation procedure, as all terms in the sum follow~\eqref{CR}. Hence, we obtain a contribution of order~$\sqrt{N}$ for every summation, including the ones that were carried out during the rewriting and sum-in steps, and possibly an additional factor $\sqrt{N}$ from an estimate similar to~\eqref{start-sum}. Combining~\eqref{bound-part1} and~\eqref{bound-part2} thus yields
\begin{align*}
N^{-k/2-1}\Big(\sum_{b_X}F(b_X)\Big)\Big(\max_{b_X}\sum_{b_{X^c}}G(b_1,\dots,b_{k'})\Big)\leq CN^{-1/2},
\end{align*}
since the exponents combine to $-k/2-1+|X|/2+(k-|X|)/2+1/2=-1/2$.

In the previously excluded cases, there either is a term similar to the terms of type~1,~2 or~3 with the 3-cumulant replaced by a cumulant of order four or higher, or a matrix element for which introducing vectors as in~\eqref{def-vector-t} would require summing it into a higher-order cumulant. In both cases, we estimate the matrix element trivially by the norm of the corresponding matrix, which yields a factor of~$\sqrt{N}$ more than prescribed by~\eqref{CR}. However, the occurrence of the matrix element implies that the $j$-cumulant must involve at least $j+1$ distinct indices. Hence, one more summation than in the minimal case is carried out when the summation over all indices in the cumulant is evaluated, and the additional factor~$\sqrt{N}$ is balanced out. This implies that the estimates again follow~\eqref{CR}, giving the claim in the general case.
In particular, we obtain a sub-leading contribution to~\eqref{cumu-expansion} whenever the term on the left-hand side of~\eqref{k-cumu-term} involves cumulants of order four or higher.
\end{proof}
\renewcommand*{\bibname}{References}
\addcontentsline{toc}{chapter}{References}
\end{document}
\begin{document}
\title{Gaussian Mixture Graphical Lasso with \\ Application to Edge Detection in Brain Networks}
\author{\IEEEauthorblockN{1\textsuperscript{st} Hang Yin} \IEEEauthorblockA{\textit{Data Science} \\ \textit{Worcester Polytechnic Institute}\\ Worcester, USA \\ [email protected]} \and \IEEEauthorblockN{2\textsuperscript{nd} Xinyue Liu} \IEEEauthorblockA{\textit{Alexa AI} \\ \textit{Amazon}\\ Boston, USA \\ [email protected] } \and \IEEEauthorblockN{3\textsuperscript{rd} Xiangnan Kong} \IEEEauthorblockA{\textit{Computer Science} \\ \textit{Worcester Polytechnic Institute}\\ Worcester, USA \\ [email protected]} }
\maketitle
\begin{abstract}
\small\baselineskip=9pt Sparse inverse covariance estimation (\textit{i.e.}, edge detection) has been an important research problem in recent years, where the goal is to discover the direct connections between a set of nodes in a networked system based upon the observed node activities. Existing works mainly focus on unimodal distributions, where it is usually assumed that the observed activities are generated from a \emph{single} Gaussian distribution (\textit{i.e.}, one graph). However, this assumption is too strong for many real-world applications (\textit{e.g.}, brain networks), where the node activities usually exhibit much more complex patterns that are difficult to capture with one single Gaussian distribution. In this work, we are inspired by Latent Dirichlet Allocation (LDA) \cite{blei2003latent} and model the edge detection problem as estimating a mixture of \emph{multiple} Gaussian distributions, where each component corresponds to a separate sub-network. To address this problem, we propose a novel model called Gaussian Mixture Graphical Lasso (MGL). It learns the proportions of signals generated by each mixture component and their parameters iteratively via an EM framework. To obtain more interpretable networks, MGL imposes a special regularization, called Mutual Exclusivity Regularization (MER), to minimize the overlap between different sub-networks. MER also addresses the common issues in real-world data sets, \textit{i.e.}, noisy observations and small sample size. Extensive experiments on synthetic and real brain data sets demonstrate that MGL can effectively discover multiple connectivity structures from the observed node activities.
\end{abstract}
\section{Introduction}
Edge detection in brain networks \cite{fabijanska2008edge} aims at identifying the edges between nodes (\textit{i.e.}, functionally coherent brain regions) of a brain mapping \cite{belliveau1995fmri} from a temporal sequence of observed activities (\textit{e.g.}, fMRI scans). Since a well-constructed connectivity network serves as the prerequisite for many graph mining algorithms for brain disorder diagnosis and brain functionality analysis \cite{ahmadlou2012graph}, it is important to design more effective and accurate edge detection methods.
\begin{figure}
\caption{The problem of Gaussian mixture sparse inverse covariance estimation.
The brain activities over time may originate from a mixture of multiple latent cognitive brain modes (\emph{i.e.}, multiple latent sub-networks).}
\label{fig:butterfly}
\end{figure}
Existing edge detection methods usually rely on the assumption that all nodes' activities obey a multivariate Gaussian distribution, and that the connections between nodes can be depicted by their inverse covariance matrix (\textit{a.k.a.} precision matrix). A widely used method in this line of work is known as Graphical Lasso (GLasso) \cite{friedman2008sparse}, which additionally imposes sparseness on the precision matrix. However, in many neurology studies such as \cite{diez2018neurogenetic}, human brains usually exhibit dramatically different activity modes when they perform different tasks. Based on these studies, we believe that the cognitive structure of the human mind can be partitioned into several sub-graphs according to different cognitive control processes and behaviors. Cognitive control refers to a set of dynamic processes that engage and disengage different nodes of the brain to modulate attention and switch between tasks. Applying GLasso without considering different latent cognitive modes is equivalent to deriving an ``average'' network representation. Since the behavior of different brain modes varies significantly, the derived ``average'' network may lose crucial information. In this context, as illustrated in Figure~\ref{fig:butterfly}, it is natural to investigate whether and how one could extend the edge detection methods applied to brain networks to capture the connectivity structures of multiple underlying cognitive brain modes.
\begin{figure}
\caption{Latent Dirichlet Allocation (LDA) \cite{blei2003latent}}
\caption{Graphical Lasso \cite{friedman2008sparse}}
\caption{Mixture Graphical Lasso (this paper)}
\caption{Comparison of Latent Dirichlet Allocation (LDA), Graphical Lasso (GLasso) and our model in this paper. In each sub-graph, the boxes are ``plates'' representing replicates, which are repeated entities. The outer plate represents a document in LDA or an observed subject in the brain network study, while the inner plate represents the generative process of a word ($W$) in a given document or a brain node activity ($A$) in a given subject; each word or scan is associated with a choice of topic ($T$) or mode ($M$) and with the parameter of the corresponding word or node activity distribution ($\phi$ or $\Sigma$). $\pi$ is the topic or mode distribution. $N$ denotes the number of words or scans.}
\label{fig:family}
\end{figure}
To incorporate the concept of multiple connectivity structures into edge detection, we follow the idea of latent Dirichlet allocation (LDA) \cite{blei2003latent} and adopt a Gaussian mixture model for this problem. LDA views a document as a mixture of various topics, and it assumes that the generation of a document follows some topic-word distributions which can be found by sampling. Similarly, we could view brain scans as mixtures of latent modes, where each mode is characterized by a Gaussian distribution with a different covariance $\Sigma_{\text{M}}$. Each covariance matrix $\Sigma_{\text{M}}$ corresponds to a specific connectivity among brain nodes. In the generation of each brain node activity, our model chooses a mode $\text{M}$ based on the mode distribution $\pi$ (as LDA chooses a topic), and then it generates a brain node activity $A_{i} \sim \mathcal{N}(\mathbf{0},\Sigma_{\text{M}})$ (as LDA generates a word based on the topic chosen).
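To make this generative view concrete, it can be sketched in a few lines of code. The snippet below is only an illustration with a hypothetical helper name (\texttt{sample\_activities}); the mode distribution $\pi$ and the per-mode covariances $\Sigma_{\text{M}}$ are the quantities introduced above.
\begin{verbatim}
import numpy as np

def sample_activities(pi, Sigmas, n_scans, seed=None):
    """Draw n_scans activity vectors: first pick a latent mode from pi,
    then sample a zero-mean Gaussian with that mode's covariance."""
    rng = np.random.default_rng(seed)
    K, D = len(pi), Sigmas[0].shape[0]
    modes = rng.choice(K, size=n_scans, p=pi)   # latent mode per scan
    X = np.empty((n_scans, D))
    for i, m in enumerate(modes):
        X[i] = rng.multivariate_normal(np.zeros(D), Sigmas[m])
    return X, modes
\end{verbatim}
Edge detection in our setting amounts to recovering the inverse of each $\Sigma_{\text{M}}$ from such samples without observing the latent modes.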
Figure~\ref{fig:family} illustrates the relations and differences between our proposal and LDA; we also compare with the traditional edge detection method Graphical Lasso~\cite{friedman2008sparse}, where all brain node activities are assumed to be produced by a single zero-mean multivariate Gaussian distribution with covariance $\Sigma$. In this paper, our goal is to reveal the structures of the underlying sub-networks from the observed activities simultaneously. The main challenges are as follows:
\begin{itemize}
\item \textbf{Mixture of multiple connectivity networks}: In real-world cases, the proportions and assignments of each mode are not observable. Without prior knowledge of them, standard GLasso only discovers a single graph for the whole data set. In contrast, our problem setting requires estimating the proportions and assignments of multiple latent cognitive modes as well as the parameters of the network for each mode, with the same input as GLasso, which is much more challenging.
\item \textbf{Direct connectivity among the nodes}: The finite Gaussian Mixture Model (GMM) \cite{pearson1894contributions} seems a straightforward solution to our problem, as it incorporates a heterogeneous structure into the graphical model. It fits multivariate normal distributions and treats proportions and assignments as prior and posterior probabilities (estimators) in the Bayesian setting, respectively. However, it estimates the covariance of each distribution rather than the inverse covariance, which means that the discovered connections could be indirect and make the network unnecessarily complicated. Hence, GMM is inappropriate for distinguishing the direct relationships between each pair of nodes.
\item \textbf{Noisy Observations and Small Sample}: It is already a challenging task to discover a single network given noisy observations and a small data sample. GLasso employs a simple $\ell_1$-norm regularization to alleviate the sensitivity to noise, but this is not sufficient in our case. Based on cognitive studies of the human brain \cite{lin2017dynamic}, each brain sub-network is not only sparse but also has limited overlap with the other sub-networks; simply adopting the $\ell_1$-norm regularization as in GLasso may make the derived sub-networks highly intertwined and hard to interpret. Therefore, we want to design a new regularization for the model that enforces the discovery of a set of distinct sub-graphs, regardless of small sample size or noisy data.
\end{itemize}
To tackle the above challenges, we propose a new model, namely MGL, to discover such mixture connectivity structures of the brain network. Similar to GMM, MGL learns the proportions and assignments of each latent cognitive mode iteratively via an EM framework, with the emphasis on inferring the inverse covariance matrix of each latent distribution. A novel regularization approach called Mutual Exclusivity Regularization (MER) is also proposed to differentiate the inverse covariance matrices, reflecting that sub-networks of different brain regions are activated under different cognitive modes.
\section{Preliminary}
\subsection{Notation}
Throughout this paper, $\mathbb{R}$ denotes the set of all real numbers and $\mathbb{R}^n$ stands for the $n$-dimensional Euclidean space. The set of all $m \times n$ matrices with real entries is denoted by $\mathbb{R}^{m \times n}$. All matrices are written in boldface. We write $\mathbf{X} \succ 0$ to denote that the matrix $\mathbf{X}$ is positive definite.
We write $\text{tr}(\cdot)$ for the trace of a matrix, which is defined to be the sum of the elements on the main diagonal of the matrix. We use $\vert \mathbf{X} \vert$ to denote the determinant of a real square matrix $\mathbf{X}$. We define a special matrix associated with $\mathbf{X}$ as follows:
\begin{equation}
\begin{aligned}
\bar{\mathbf{X}} = \begin{bmatrix} 0 & |X_{12}| &\cdots &|X_{1N}|\\ |X_{21}|&0&\cdots&|X_{2N}| \\ \vdots & \vdots & \ddots & \vdots \\ |X_{N1}| &|X_{N2}|&\cdots &0\end{bmatrix}
\end{aligned}
\end{equation}
That is, $\bar{\mathbf{X}}$ is the non-negative copy of $\mathbf{X}$ with all diagonal elements removed, i.e., the entrywise absolute value of $\mathbf{X}$ with a zero diagonal.
\subsection{Graphical Lasso}
Graphical Lasso (GLasso), or the Gaussian Graphical Model (GGM), is usually formulated as the following optimization problem,
\begin{equation}
\begin{aligned}
\label{eq:glasso}
\min_{\mathbf{\Theta} \succ 0} -\text{log} \vert \mathbf{\Theta} \vert + \text{tr} (\mathbf{S} \mathbf{\Theta}) + \lambda ||\mathbf{\Theta}||_1
\end{aligned}
\end{equation}
where $\mathbf{S} = \frac1n \mathbf{X}^{\top} \mathbf{X}$ is the empirical covariance matrix. $\mathbf\Theta = \mathbf\Sigma^{-1}$ is the inverse covariance matrix, whose off-diagonal support encodes the direct links between nodes. $||\mathbf{\Theta}||_1$ is the $\ell_1$-norm regularization that encourages sparse solutions, and $\lambda$ is a positive parameter that controls the strength of the regularization.
\section{MGL Method}
\subsection{Gaussian Mixture Graphical Lasso}
Given the number of base distributions $K$ and the number of observations $N$, we assume that each observed sample is drawn from a mixture of the $K$ distributions. Thus, the joint probability of all observations $\mathbf{X} = (\boldsymbol{x}_1^\top, \cdots, \boldsymbol{x}_N^\top) \in \mathbb{R}^{N \times D}$ is given by
\begin{align*}
\label{eq:mggl}
p(\mathbf{X} \vert \mathbf{\Theta}_k, \boldsymbol{\mu}_k, \phi_k) &= \prod_{i=1}^{N}\sum_{k=1}^{K} \phi_k \mathcal{N}(\boldsymbol{x}_i \vert \boldsymbol{\mu}_k, \mathbf{\Sigma}_k)
\end{align*}
We may assume $\boldsymbol\mu_k = \boldsymbol{0}$ without loss of generality, so the negative log likelihood (NLL) in terms of $\{\mathbf{\Theta}_k\}$ is given by
\begin{equation}
\label{eq:mgglsimple}
\begin{aligned}
\text{NLL}(\boldsymbol\theta) = - \sum_{i=1}^{N}\text{log}\Big(\sum_{k=1}^{K} \phi_k\mathcal{N}(\boldsymbol{x}_i \vert \boldsymbol{0}, \mathbf{\Theta}_k^{-1})\Big)
\end{aligned}
\end{equation}
where $\boldsymbol\theta = \{\phi_1, \cdots, \phi_K, \mathbf{\Theta}_1, \cdots, \mathbf{\Theta}_K\}$ denotes the model parameters.
\subsection{The Mutual Exclusivity Regularization}
Similar to the adaptive lasso in \cite{zou2006adaptive}, we also need to impose regularization on our mixture model to obtain interpretable results, which here means that the estimated precision matrices should have non-overlapping edges. However, different from the adaptive lasso or the fused lasso, the intuition is two-fold: (1) we want each $\mathbf{\Theta}_k$ to be sparse; (2) we want each $\mathbf{\Theta}_k$ to be fairly different from the other $\mathbf{\Theta}_{k'}$. Towards this end, we propose the mutual exclusivity regularization as follows,
\begin{align}
\ell_{\lambda_1, \lambda_2}(\{\mathbf{\Theta}_k\}) = \lambda_1 \sum_{k=1}^{K} \Vert \mathbf{\Theta}_k \Vert_1 + \lambda_2 \sum_{i\ne j} \text{tr}(\bar{\mathbf{\Theta}}_i \bar{\mathbf{\Theta}}_j)
\end{align}
where $\bar{\mathbf{\Theta}}_k$ is the non-negative copy of $\mathbf{\Theta}_k$ with all diagonal elements removed.
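For concreteness, the penalty $\ell_{\lambda_1, \lambda_2}$ can be evaluated directly from a list of candidate precision matrices. The snippet below is only a sketch with hypothetical helper names (\texttt{offdiag\_abs}, \texttt{mer\_penalty}) and assumes NumPy arrays; it is not meant as a reference implementation.
\begin{verbatim}
import numpy as np

def offdiag_abs(Theta):
    """Non-negative copy of Theta with the diagonal removed (the bar operator)."""
    B = np.abs(Theta)
    np.fill_diagonal(B, 0.0)
    return B

def mer_penalty(Thetas, lam1, lam2):
    """Sparsity term plus the pairwise overlap term sum_{i != j} tr(bar_i bar_j)."""
    sparsity = lam1 * sum(np.abs(T).sum() for T in Thetas)
    bars = [offdiag_abs(T) for T in Thetas]
    overlap = lam2 * sum(np.trace(bars[i] @ bars[j])
                         for i in range(len(bars))
                         for j in range(len(bars)) if i != j)
    return sparsity + overlap
\end{verbatim}
Since each $\bar{\mathbf{\Theta}}_k$ is symmetric, the overlap term could equivalently be computed over pairs $i<j$ and doubled.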
In this penalty, the first term is identical to that of graphical lasso, which imposes sparsity controlled by $\lambda_1 > 0$ on each $\mathbf{\Theta}_k$. The second term is the summation of an approximate divergence measure between each pair $(\mathbf{\Theta}_i,\mathbf{\Theta}_j)$. It is easy to see that when there are no overlapping non-zero entries among the $\mathbf{\Theta}_k$, this term reaches its minimal value $0$. $\lambda_2 > 0$ is employed to tune the strength of the second regularization. Hence, we can use this term to force the estimates of the $\mathbf{\Theta}_k$ to have as few overlapping elements as possible. We formally present the objective of our MGL as follows,
\begin{align}
\min_{\{\mathbf{\Theta}_k \succ 0\}} \text{NLL}(\{\mathbf{\Theta}_k\}) + \ell_{\lambda_1, \lambda_2}(\{\mathbf{\Theta}_k\})
\end{align}
\subsection{The Latent States}
Since there are $K$ separate latent distributions, each data sample $\boldsymbol{x}_i$ could come from one of the $K$ distributions; we denote the corresponding state by $z_{i} \in \{1,\cdots, K\}$. Thus, the NLL function can be rewritten as follows,
\begin{equation}
\begin{aligned}
\label{latent}
\text{NLL}(\boldsymbol\theta) &= - \sum_{i=1}^{N}\log\sum_{k=1}^{K}\Big(\frac{\mathbf{Q}(z_{ik}) p\big(\boldsymbol{x}_i \vert \mathbf{\Theta}_k, \phi_k\big)}{\mathbf{Q}(z_{ik})}\Big)\\
&= - \sum_{i=1}^{N}\log\sum_{k=1}^{K}\Big(\frac{p\big(\boldsymbol{x}_i,z_{ik} \vert \mathbf{\Theta}_k, \phi_k\big)}{\mathbf{Q}(z_{ik})}\Big)
\end{aligned}
\end{equation}
Here $\mathbf{Q}(z_{ik})$ is the distribution of the latent variable and $\sum_{k=1}^{K} \mathbf{Q}(z_{ik}) = 1$. The expression in Equation~(\ref{latent}) cannot be minimized directly because the expression inside the $\log$ is a sum. We therefore use the Expectation Maximization (EM) algorithm to optimize the above NLL \emph{w.r.t.} $\{\mathbf{\Theta}_k\}$. We summarize the MGL algorithm in Algorithm~(\ref{mggl}); a small numerical sketch of the E-step and the $\phi_k$ update is given below.
\begin{algorithm}[t]
\caption{Algorithm for \texttt{MGL}}
\label{mggl}
\begin{algorithmic}[1]
\Require i: $\mathbf{X}$: the observations of the $D$-variate Gaussian distributions ii: $k$: the number of Gaussian distributions iii: $\lambda_1$: the Lagrangian multiplier of the sparsity constraint iv: $\lambda_2$: the Lagrangian multiplier of the mutual exclusivity constraint v: $\text{iter}_{max}$: the maximum number of iterations Output: $\hat{\mathbf{\Theta}}_k$, $\hat{\phi}_k$
\State Initialization: initialize $\phi_k^{(0)}$, $\mathbf{\Theta}_k^{(0)}$ and $r_{ik}^{(0)}$
\Repeat
\State E step: Update the latent variable $r_{ik}^{(t)}$ given $\phi_k^{(t-1)}$ and $\mathbf{\Theta}_k^{(t-1)}$
\State M step: Update $\phi_k^{(t)}$, $\mathbf{\Theta}_k^{(t)}$ with $r_{ik}^{(t)}$
\Until{$iter={iter}_{max} \text{ or convergence}$}
\end{algorithmic}
\end{algorithm}
\begin{figure}
\caption{Low Dimension - Sample Size (Scenario 1)}
\caption{Low Dimension - Noise (Scenario 2)}
\caption{High Dimension - Sample Size (Scenario 3)}
\caption{High Dimension - Noise (Scenario 4)}
\caption{Comparison of each model on edge detection. Each figure shows the results in terms of F1-score. The dark blue line indicates GLasso + Spectral; the light blue one indicates $k$-means + GLasso; the orange one shows the result of MGL without the Mutual Exclusivity Regularization; and the green one shows the result of MGL.}
\label{fig:f1-score}
\end{figure}
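As a minimal sketch of one EM pass (hypothetical helper names, NumPy assumed), the E-step responsibilities and the closed-form update of the mixture proportions can be written as follows; the M-step update of each $\mathbf{\Theta}_k$ additionally requires solving the MER-penalized graphical-lasso subproblem and is therefore not spelled out here.
\begin{verbatim}
import numpy as np

def log_gauss(X, Theta):
    """Row-wise log N(x | 0, Theta^{-1}) evaluated from the precision matrix."""
    D = Theta.shape[0]
    _, logdet = np.linalg.slogdet(Theta)
    quad = np.einsum('nd,de,ne->n', X, Theta, X)   # x_i^T Theta x_i per row
    return 0.5 * logdet - 0.5 * quad - 0.5 * D * np.log(2 * np.pi)

def e_step(X, phis, Thetas):
    """Responsibilities r_ik = Q(z_ik) under the current parameters."""
    logp = np.stack([np.log(p) + log_gauss(X, T)
                     for p, T in zip(phis, Thetas)], axis=1)
    logp -= logp.max(axis=1, keepdims=True)        # numerical stability
    R = np.exp(logp)
    return R / R.sum(axis=1, keepdims=True)

def m_step_phi(R):
    """Closed-form update of the mixture proportions phi_k."""
    return R.mean(axis=0)
\end{verbatim}
With the responsibilities in hand, the weighted empirical covariances $\mathbf{S}_k=\sum_i r_{ik}\boldsymbol{x}_i\boldsymbol{x}_i^\top/\sum_i r_{ik}$ would then feed the penalized update of each $\mathbf{\Theta}_k$ in the M step.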
\subsection{Initialization}
As we know from Algorithm~(\ref{mggl}), we need to provide starting values for each estimator. In the course of our comparative experiments, we found that the initialization of the parameters largely affects the performance of our model. Empirically, the following scheme works well in our experiments. For each observation $i = 1,\dots,N$, we randomly assign it to a class $k \in \{1,\dots,K\}$. Then we assign a weight $\hat{r}_{ik} = 0.9$ for this observation $i$ and distribution $k$, and $\hat{r}_{ij} = \frac{0.1}{K-1}$ for all other distributions. In the M-step, we update $\mathbf{\Theta}_k$ from the initial values $\hat{\mathbf{\Theta}}_k^{(0)}$ computed by GLasso on the full sample, and $\phi_k$ from the initial values $\hat{\phi}_k = \frac{1}{K}$.
\section{Empirical Study}
In this section, we demonstrate the performance of our proposed model through extensive comparative experiments. We first evaluate our proposed model on synthetic datasets. To comprehensively evaluate the proposed model, we conduct experiments to answer the following research questions:
\begin{itemize}
\item{\textbf{RQ 1}:} How does MGL perform compared with state-of-the-art models when the effect of sample size is taken into account?
\item{\textbf{RQ 2}:} Does our model remain robust under noise? Does the MER regularization term have a positive influence on the performance under noise?
\item{\textbf{RQ 3}:} How do the hyper-parameters in the comparative experiments impact each model's performance?
\item{\textbf{RQ 4}:} Is there evidence of a mixture brain network structure in the real ADHD-200 dataset?
\end{itemize}
\subsection{Compared Baselines}
To demonstrate the effectiveness of our proposed method, we test against several variations of the state-of-the-art method Graphical Lasso:
\begin{itemize}
\item \textbf{GLasso + Spectral Clustering}: the GLasso algorithm that assumes all data samples are drawn from the same Gaussian distribution, followed by spectral clustering to divide the whole network into several sub-graphs.
\item \textbf{\emph{k}-means + GLasso}: a pipeline method that first employs \emph{k}-means to assign each $\boldsymbol{x}_i$ to a group, then applies GLasso to each group to obtain the final $\mathbf{\Theta}_k$.
\item \textbf{JGL \cite{gao2016estimation}}: the Joint Graphical Model with fused lasso proposed in \cite{gao2016estimation}. It is equivalent to our proposed model without the MER term, so it serves as the comparative method for assessing the contribution of MER.
\end{itemize}
\subsection{Synthetic Simulations}
Due to the lack of ground truth in many real-world data sets, we first compare our proposed method against the other competitors on several carefully designed synthetic data sets.
\subsubsection{Data Set}
In this sub-section, we purposefully design several synthetic data sets. Firstly, we generate $k$ diagonal matrices ($k$ is the number of distributions, which is given in advance) and divide each of them into several equal-scale blocks. This construction serves two purposes: we need to control each sub-graph $\mathbf{\Theta}_k$ so that its off-diagonal edges do not overlap with the others, and making the edges of each sub-graph more concentrated is conducive to visualizing the results. Secondly, we choose different off-diagonal blocks for each $\mathbf{\Theta}_k$, giving the chosen blocks a high edge density. Following the above steps, we generate each $\mathbf{\Theta}_k$ without overlapping edges in the off-diagonal areas, as sketched below.
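One possible instantiation of this construction (a toy sketch with a hypothetical helper \texttt{make\_disjoint\_precisions}; the block sizes and values used in our experiments may differ) is:
\begin{verbatim}
import numpy as np

def make_disjoint_precisions(p, K, block_value=0.4):
    """Build K precision matrices whose off-diagonal supports are
    disjoint equal-sized blocks, one block pair per component."""
    size = p // (2 * K)
    Thetas = []
    for k in range(K):
        Theta = np.eye(p)
        rows = slice(2 * k * size, (2 * k + 1) * size)
        cols = slice((2 * k + 1) * size, (2 * k + 2) * size)
        Theta[rows, cols] = block_value
        Theta[cols, rows] = block_value          # keep the matrix symmetric
        w = np.linalg.eigvalsh(Theta).min()
        if w <= 0:                               # shift the diagonal if needed
            Theta += (abs(w) + 0.1) * np.eye(p)  # so Theta stays positive definite
        Thetas.append(Theta)
    return Thetas
\end{verbatim}
Each component receives its own block, so the off-diagonal supports are disjoint by construction.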
Based on each $\mathbf{\Theta}_k$, we compute the corresponding $\mathbf{\Sigma}_k$ and then randomly select $N_k$ samples ($\sum_{k=1}^{K} N_k = N$) from each Gaussian distribution. In order to evaluate the stability of our model, we also add noise to the samples in some of the scenarios below. To exclude the effect of randomness, we repeat the sampling 10 times for all experiments and report the average over the runs, so that we can evaluate the precision and the stability of our model at the same time.
\subsubsection{Experimental Settings}
We simulate four scenarios by controlling one parameter and holding the others fixed. In these scenarios, we select the sample size $N$ and the standard deviation of the noise $\sigma$ as the controlled parameters.
\begin{itemize}
\item \textbf{Scenario 1}: We fix $p=8$ (the number of variables), $k=2$ (the number of Gaussian distributions), and $\sigma=0$ (the standard deviation of the noise), and then vary the sample size $N$ from 100 to 520.
\item \textbf{Scenario 2}: We fix $p=8$, $k=2$, and $N=500$, and then vary the noise level $\sigma$ from 0.1 to 0.8.
\item \textbf{Scenario 3}: We fix $p=20$, $k=2$, and $\sigma=0$, and then vary the sample size $N$ from 200 to 1000.
\item \textbf{Scenario 4}: We fix $p=20$, $k=2$, and $N=1000$, and then vary the noise level $\sigma$ from 0.1 to 0.8.
\end{itemize}
\subsubsection{Evaluation}
To evaluate the quality of each sub-graph, we define the F1-score of edge detection as $F1 = \frac{2 N_d^2}{N_a N_d + N_g N_d}$, where $N_d$ is the number of true edges detected by the model, $N_g$ is the number of true edges, and $N_a$ is the total number of edges detected; this is the harmonic mean of the precision $N_d/N_a$ and the recall $N_d/N_g$. A higher F1-score indicates a better quality of edge detection. Figure \ref{fig:f1-score} shows the comparison between MGL and the other baseline models. The results in the figure answer the first three \textbf{RQ}s mentioned at the beginning of this section. The first column shows the results when we vary the sample size $N$ and hold the other parameters fixed, which corresponds to \textbf{RQ1}. It is clear that the $k$-means and Spectral pipelines are of little use when the ground-truth data sets are drawn from a mixture of Gaussian distributions. Meanwhile, when the sample size is not large enough, the precision of JGL is lower than that of MGL. The second column shows the results when we vary the noise, which corresponds to \textbf{RQ2}. We fix the sample size $N$ at 500, so when $\sigma=0$, JGL is as good as MGL. According to the results, the larger the noise, the worse JGL performs, which shows that it is sensitive to noise. The result therefore demonstrates that the MER regularization improves the performance of our proposed model; compared to the others, MGL remains robust in this scenario. To answer \textbf{RQ3}, we can read the answer from both columns of the figure. Since our experiments are set in low-dimensional and high-dimensional spaces separately, we can see from all comparison results that the choice of hyper-parameters does not affect the performance of MGL. In contrast, the performance of JGL in the high-dimensional space is not as good as in the low-dimensional space, regardless of whether the sample size or the noise is varied. In summary, in the comparative experiments on synthetic datasets with ground truth, our proposed method MGL shows better accuracy and robustness than the other comparison methods.
\subsection{Real fMRI Data}
In this subsection, we evaluate our proposed method on the fMRI dataset from the ADHD-200 project\footnote{http://fcon\_1000.projects.nitrc.org/indi/adhd200}.
In this paper, we perform network discovery on a collection of fMRI scans, in which each sample corresponds to a 4D brain image (a sequence of 3D images) of a subject. Our real-world dataset is distributed by nilearn\footnote{http://nilearn.github.io/}. Specifically, there are 40 subjects in total. Among them, 20 subjects are labeled as ADHD, and the others are labeled as TDC. The fMRI scan of each subject in the dataset is a series of snapshots of 3D brain images of size $61 \times 76 \times 61$ over $\sim$176 time steps. In our experiment, we only use the subjects labeled as ADHD. We focus on the multiple connectivity structures among the same subjects, in order to provide evidence for feature selection between different subjects in further studies. Rather than discovering the brain network on the level of voxels, we extract the signal on regions defined via a probabilistic atlas to construct the data set, which is more convenient for visualizing the results. The resulting data set is a $1899 \times 39$ matrix, and we assume that the samples are drawn from a mixture of Gaussian distributions. However, the number of components $k$ is unknown and needs to be given in advance. Through repeated experimental observations, we found that $k=4$ provides the most reasonable results on this data set. Because real fMRI data lacks ground truth as a reference to measure the accuracy and robustness of the model, we are more concerned with the interpretability and rationality of the results. Specifically for our proposed model, we are more concerned about whether it can mine different connectivity structures among nodes from the fMRI dataset.
\begin{figure}
\caption{Sub-graphs discovered by $k$-means}
\caption{Sub-graphs discovered by JGL}
\caption{Sub-graphs discovered by MGL}
\caption{Comparison of $k$-means + GLasso, JGL and MGL on the ADHD dataset. The results show how a mixture connectivity structure on a group of subjects is estimated using different group sparse inverse covariance estimation models from the real fMRI data set. The closer the color of an off-diagonal element is to blue, the higher the probability of a direct edge between the corresponding nodes.}
\label{fig:adhd}
\end{figure}
\begin{figure}
\caption{We turn the results of Fig.~\ref{fig:adhd} into the corresponding brain connectome visualizations.}
\label{fig:connectome}
\end{figure}
According to Figure \ref{fig:adhd}, we can find that there are almost no differences among the four sub-graphs discovered by $k$-means plus GLasso. This indicates that the method is of little use for mining sub-graphs in the ADHD data set. JGL shows four different sub-graphs; however, there are many overlapping areas among them. The resulting matrices do not appear to be sparse, which indicates that the corresponding connectivity structure is not very clear with this method. Compared to it, the sub-graphs discovered by MGL are clearer and the number of overlapping areas is smaller. Therefore, although the ground truth is lacking for the ADHD data, we can still believe that the inferred results of MGL are consistent with the problem defined in this paper, especially in view of the assumption of a mixture of Gaussian distributions with non-overlapping areas among their precision matrices. Figure \ref{fig:connectome} shows the corresponding connectivity structure of the results discovered by MGL. Here we only show the axial direction of the cuts. The closer the color is to red, the stronger the direct relationship between the corresponding nodes. We highlight the stronger edges by adjusting the threshold of the colorbar.
According to the visualization of the results, we can see that different sub-graphs highlight the relationships of different nodes, which means that the subjects present different network structures along the time-line. This phenomenon is most obvious among the nodes related to the DMN (default mode network), which includes the Parietal and Occipital Lobes, the Cingulum Region Posterior and the Frontal Cortex. Although the hypothesis of non-overlapping areas among the connectivity structures may not hold exactly for real ADHD subjects, we believe that MGL with the MER regularization can more prominently show the differences between the discovered connectivity structures, so that we can have a better understanding of the association between cognitive networks and human activities. According to the analysis above, despite the lack of ground truth, we believe that the existing results are consistent with the problem defined in this paper. The results therefore show that there is a mixture connectivity structure among the nodes in the fMRI dataset, and that our proposed model MGL can effectively mine this mixture connectivity structure.
\section{Related Work}
Edge detection in brain networks has two major branches: effective connectivity estimation and functional connectivity estimation. For the first branch, researchers focus on obtaining a directed network from fMRI data through structure learning methods for Bayesian networks \cite{huang2011brain}. In contrast, the second branch focuses on approaches such as hierarchical clustering, pairwise correlations and independent component analysis, which can be found in \cite{friston2011functional} in more detail. \cite{friedman2008sparse} proposed sparse Gaussian graphical models, which are very useful for discovering the direct links of a brain network from large-scale datasets by using sparse inverse covariance estimation. However, in the task of edge detection, these methods focus on unimodal distributions, where it is usually assumed that the observed samples are drawn from a single Gaussian distribution, which is contradicted by some recent studies \cite{anderson2018developmental}. The Joint Graphical Model with fused lasso proposed in \cite{gao2016estimation} is in the framework of multivariate Gaussian mixture modeling. However, this method has been shown to be sensitive to noise and to a small data sample.
\section{Conclusion}
To address the problem of mixture connectivity substructures between nodes in brain network discovery, we propose embedding one of the current methods for estimating multiple Gaussian graphical models in the framework of Gaussian mixture modeling, and we design a new regularization term, called mutual exclusivity regularization, to make the sub-graphs non-overlapping with each other. Through extensive controlled experiments, we demonstrate that our proposed model MGL is more effective than the other baseline models; moreover, MGL is more robust than JGL, especially in the presence of small samples or noisy data sets. This conclusion is also supported by the experiments on the real fMRI brain scanning dataset from ADHD subjects. We therefore have reason to believe that our method can also be applied in other domains where the network connectivity structure is very complex.
\footnotesize{
\balance
}
\end{document}
\begin{document} \begin{asciiabstract} In a lens space X of order r a knot K representing an element of the fundamental group \pi_1 X \cong \Z/r\Z of order s \leq r contains a connected orientable surface S properly embedded in its exterior X-N(K) such that the boundary of S intersects the meridian of K minimally s times. Assume S has just one boundary component. Let g be the minimal genus of such surfaces for K, and assume s \geq 4g-1. Then with respect to the genus one Heegaard splitting of X, K has bridge number at most 1. \end{asciiabstract} \begin{htmlabstract} In a lens space X of order r a knot K representing an element of the fundamental group &pi;<sub>1</sub> X &asymp; <b>Z</b>/r<b>Z</b> of order s &le; r contains a connected orientable surface S properly embedded in its exterior X-N(K) such that &part; S intersects the meridian of K minimally s times. Assume S has just one boundary component. Let g be the minimal genus of such surfaces for K, and assume s &ge; 4g-1. Then with respect to the genus one Heegaard splitting of X, K has bridge number at most 1. \end{htmlabstract} \begin{abstract} In a lens space $X$ of order $r$ a knot $K$ representing an element of the fundamental group $\pi_1 X \cong \mathbb Z/r\mathbb Z$ of order $s \leq r$ contains a connected orientable surface $S$ properly embedded in its exterior $X-N(K)$ such that $\partial S$ intersects the meridian of $K$ minimally $s$ times. Assume $S$ has just one boundary component. Let $g$ be the minimal genus of such surfaces for $K$, and assume $s \geq 4g-1$. Then with respect to the genus one Heegaard splitting of $X$, $K$ has bridge number at most $1$. \end{abstract} \title{Small genus knots in lens spaces have small bridge number} \section{Statement of results} Any knot $K$ in a lens space $X = L(r, q)$, $r>0$, is rationally nullhomologous, ie\ $[K] = 0 \in H_1(X;\Q) \cong 0$. We say $r$ is the {\em order\/} of the lens space $X$, and we say the smallest positive integer $s$ such that $s[K] = 0 \in H_1(X;\Z) \cong \Z/r\Z$ is the {\em order\/} of the knot $K$. Note $s \leq r$. The exterior $X-N(K)$ of $K$ thus contains a connected properly embedded orientable surface $S$ such that when $S$ is oriented $\ensuremath{\partial} S$ is coherently oriented on $\ensuremath{\partial} \bar{N}(K)$ and intersects the meridian $\mu \subseteq \ensuremath{\partial} \bar{N}(K)$ of $K$ minimally $s$ times, ie\ $|\mu \cdot \ensuremath{\partial} S| = s$. Such a surface $S$ is an analogue of a Seifert surface for a knot in $S^3$. We refer to the genus of a knot $K$ in $X$ as the minimal genus of these ``rational'' Seifert surfaces for $K$. For this article we will restrict our attention to knots with rational Seifert surfaces that have just one boundary component. In this paper we prove the following theorem. \begin{thm}\label{main} Let $K$ be a genus $g$ knot of order $s$ in a lens space $X$ whose Seifert surfaces have one boundary component. If $s \geq 4g-1$ then, with respect to the Heegaard torus of $X$, $K$ has bridge number at most $1$. \end{thm} \fullref{main} may be curiously rephrased as saying small genus knots in lens spaces have small bridge number. In \cite{berge:skwsyls} Berge shows that double-primitive knots (ie\ simple closed curves that lie on a genus $2$ Heegaard surface in $S^3$ and represent a generator of $\pi_1$ for each handlebody) admit lens space surgeries. We refer to these knots as {\em Berge knots\/}. Berge further shows that the corresponding knot in the resulting lens space is $1$--bridge. 
\fullref{main} may be used to show the following theorem which one may care to compare with \fullref{conj:berge}.
\begin{thm}\label{thm:smallgenusbergeknots} Let $K'$ be a genus $g$ knot in $S^3$. If $K'$ admits a lens space surgery of order $r \geq 4g-1$ then $K'$ is a Berge knot. \end{thm}
\begin{conj}[Berge~\cite{berge:skwsyls}] \label{conj:berge} If $K$ is a knot in a lens space $X$ with an $S^3$ surgery, then with respect to the genus one Heegaard splitting of $X$, $K$ has bridge number at most $1$. In particular, the corresponding knot $K' \subseteq S^3$ is a Berge knot. \end{conj}
\subsection{A quick overview of knots with lens space surgeries} For coprime integers $p$ and $q$, a $p/q$ Dehn surgery on a knot in $S^3$ is the process of removing a solid torus neighborhood of the knot and attaching a new solid torus so that on the torus boundary of the knot exterior the new meridian is homologous to $p$ times the old meridian and $q$ times the old longitude. The rational number $p/q$ (including $1/0$) is called the {\em slope\/} of the surgery. We say a knot in $S^3$ {\em admits a lens space surgery\/} if some $p/q$ Dehn surgery with $q \neq 0$ yields a lens space. As a consequence of Thurston's geometrization \cite{thurston:gt3m,thurston:survey}, a knot in $S^3$ is either a torus knot, a satellite knot, or a hyperbolic knot. For any given torus knot, Moser describes all $p/q$ Dehn surgeries that produce lens spaces \cite{moser:esaatk}. Satellite knots and the slopes along which they admit lens space surgeries are classified by Bleiler and Litherland \cite{bleilerlitherland:lsads}. Hyperbolic knots admitting lens space surgeries have yet to be classified though they are conjectured to be Berge knots. Because the nonhyperbolic knots that admit lens space surgeries are also Berge knots, \fullref{conj:berge} contains this conjecture, albeit in another guise. Furthermore, by the Cyclic Surgery Theorem \cite{cgls:dsok}, if $p/q$ Dehn surgery on any nontorus knot produces a lens space, then $q=\pm1$, ie\ the surgery slope is {\em integral\/}.
\subsection{Berge's list of double-primitive knots} Berge also gives a conjecturally complete list of double-primitive knots \cite{berge:skwsyls}. Based on calculations from this list, Goda and Teragaito propose the following conjecture.
\begin{conj}[Goda--Teragaito~\cite{gt:dsokwylsagok}]\label{conj:gt} If a hyperbolic knot in $S^3$ of genus $g$ admits a lens space surgery of order $r$, then $2g+8 \leq r \leq 4g-1$. \end{conj}
If Berge's list is complete, then \fullref{thm:smallgenusbergeknots} not only would confirm this upper bound but also would reprove the following theorem of Rasmussen.
\begin{thm}[Rasmussen~\cite{rasmussen:lssaacogat}]\label{cor:rasmussenbound} If a nontrivial knot in $S^3$ of genus $g$ admits a lens space surgery of order $r$, then $r \leq 4g+3$. \end{thm}
\subsection{Towards a conjecture of Bleiler and Litherland} Ozsv\'ath and Szab\'o have shown that the Alexander polynomial $\Delta_K(T)$ of a knot $K$ with a lens space surgery determines $\smash{\widehat{HFK}}$ \cite{os:kfh+lss}. This then implies that the degree of $\Delta_K(T)$ equals twice the genus of $K$ \cite{os:hdagb}. At the end of \cite{os:agfh} for each lens space of order at most $26$ they list polynomials among which must be the Alexander polynomial of any knot that yields the lens space by positive integer surgery. (Indeed they present an algorithm for listing such polynomials associated to any given lens space.)
From the results of \cite{os:kfh+lss} these lists may be refined by removing any polynomial whose nonzero coefficients do not alternate between $+1$ and $-1$. For each polynomial $A(T)$ listed for a lens space of order $r < 18$ with the exception of the polynomial listed for $L(14,11)$ we have $4(\frac{1}{2} \deg A(T)) - 1 \leq r$. Thus by \fullref{thm:smallgenusbergeknots} if $K'$ is a nontrivial knot in $S^3$ admitting a lens space surgery of order $r < 18$ other than $L(14,11)$, then $K'$ is a Berge knot. Computations discussed by Berge \cite{berge:skwsyls} confirm that $1$--bridge knots in lens spaces of order less than $1000$ with surgery yielding $S^3$ correspond to knots in Berge's list. Thus we conclude the following theorem. \begin{thm} If a hyperbolic knot in $S^3$ admits a lens space surgery of order $r$, then $r=14$ or $r \geq 18$. \end{thm} This nearly obtains the following conjecture. \begin{conj}[Bleiler--Litherland \cite{bleilerlitherland:lsads}] \label{conj:lensspaceorder} If a hyperbolic knot in $S^3$ admits a lens space surgery of order $r$, then $r \geq 18$. \end{conj} The $(-2,3,7)$--pretzel knot which is genus $5$ and has a lens space surgery of order $18$ (and a second of order $19$) is hyperbolic and realizes this conjectured bound. Note that $L(14, 11)$ is obtained by integral surgery on $T_{(3,5)}$, the $(3,5)$ torus knot, which has genus $4$. To resolve \fullref{conj:lensspaceorder}, one must address whether $\pm 14$ surgery on a genus $4$ hyperbolic knot $K$ in $S^3$ with $\Delta_K(T) = \Delta_{T_{(3, 5)}}(T)$ can yield $L(14, 11)$. Note that if for every polynomial $A(T)$ that Ozsv\'ath and Szab\'o's algorithm lists for $L(r,q)$ we have $4(\frac{1}{2} \deg A(T)) - 1 \leq r$, then the above method permits us to determine every knot in $S^3$ that has a surgery producing $L(r,q)$. Since such knots must be Berge knots, one appeals to the last paragraph of \cite{berge:skwsyls} to determine the knots if $r < 1000$. If $r \geq 1000$ and there exists a nontrivial $A(T)$ in Ozsv\'ath and Szab\'o's list for $L(r,q)$, then since Berge's list is not yet known to be complete one must also examine all $1$--bridge knots in $L(r,q)$ for which an integer surgery may yield $S^3$. This may be done in the manner in which the list at the end of \cite{berge:skwsyls} is produced. Berge shows that there are finitely many such knots to consider \cite[Theorem~3]{berge:skwsyls}. Since deciding whether a genus $2$ Heegaard diagram represents $S^3$ is algorithmic \cite{hot:aafrS3}, this is a finite process. Further note that Ozsv\'ath and Szab\'o do not (necessarily) list polynomials for $L(r,q)$ that may be Alexander polynomials of knots for which nonintegral surgery yields $L(r,q)$. As mentioned previously, such knots are known to be only torus knots \cite[Cyclic Surgery Theorem]{cgls:dsok}, and Moser's classification \cite{moser:esaatk} gives a means of checking for these. For example, for the lens space $L(19,11)$ Ozsv\'ath and Szab\'o list only the polynomial $T^{-5}-T^{-4}+T^{-2}-T^{-1}+1-T+T^2-T^4+T^5$. Since $4 \cdot 5 - 1 =19$, we examine Berge's list to see that this corresponds only to the $(-2,3,7)$--pretzel knot. In fact, this is the only knot in $S^3$ for which positive surgery yields $L(19,11)$, and moreover it is hyperbolic. Indeed, $-L(19,11) = L(19,8)$ may be only obtained by $19/2$ surgery on $T_{(-2,5)}$ and by $19/3$ surgery on $T_{(-2,3)}$, a trefoil. Thus $L(19,11)$ may also be obtained by $-19/2$ surgery on $T_{(2, 5)}$ and by $-19/3$ surgery on $T_{(2, 3)}$. 
We further examine Ozsv\'ath and Szab\'o's list for lens spaces of order $19$. Because it has genus $4$ we may conclude that $L(19,5)$ is obtained only by $19$ surgery on $T_{(-2,9)}$. Although $L(19,16)$ may be obtained by $19$ surgery on $T_{(-4,5)}$, its genus is $6$ which is too large for our methods to determine whether there are other knots for which surgery may yield this lens space. Aside from reflections of the above mentioned torus knots with the corresponding sign change for surgery slope, no other torus knots have lens space surgeries of order $19$.
\subsection[Proof of Theorem 1.2]{Proof of \fullref{thm:smallgenusbergeknots}}
\begin{proof} If $K'$ admits a lens space surgery $X$ of order $r$, then the surgery slope crosses the $0$--slope minimally $r$ times. Thus if $K$ is the corresponding knot in the lens space $X$, $K$ has order $r$. If $r \geq 4g-1$ then by \fullref{main}, $K$ is at most a $1$--bridge knot in $X$. If $K$ is a $0$--bridge knot, then $K'$ is a torus knot and hence a Berge knot. If $K$ is not $0$--bridge then Theorem~2 of \cite{berge:skwsyls} shows that $K'$ is a double-primitive knot in $S^3$ and hence a Berge knot. \end{proof}
\subsection{Further questions} The proof of \fullref{main} is dependent upon $K$ having a Seifert surface with just one boundary component.
\begin{question} Do similar results hold if the Seifert surfaces for $K$ have more than one boundary component? \end{question}
One wonders whether there are other relationships among knot order, genus, and bridge number (or width) for knots in lens spaces. In a given lens space for any specified order and bridge number one may construct knots of arbitrarily large genus. Indeed the collection of torus knots of a given order in a lens space ought to contain knots of arbitrarily large genus. Thus we ask questions akin to \fullref{main}. In the remainder of this section we restrict ourselves to considering knots in lens spaces whose Seifert surfaces have just one boundary component.
\begin{question} If $K$ is a genus $g$ knot of order $s$ in a lens space such that $s=4g-2$, is the bridge number of $K$ bounded? \end{question}
If we instead ask that $s=4g-3$ then nullhomologous genus $1$ knots satisfy the hypothesis. Presumably Whitehead doubles of knots in lens spaces can be concocted to have arbitrarily large bridge number. Further alterations to the question are then required such as restricting attention to knots whose order equals that of the lens space. In another direction, \fullref{main} shows that any genus $1$ knot of order at least $3$ is $1$--bridge, any genus $2$ knot of order at least $7$ is $1$--bridge, etc. Clearly genus $1$ knots of order $1$ contained in a ball in a lens space correspond to genus $1$ knots in $S^3$. Indeed there are many other nullhomologous genus $1$ knots in a lens space.
\begin{question} Is there a characterization of genus $1$ knots of order $2$ in lens spaces? \end{question}
\begin{problem} Classify genus $1$ knots of order at least $3$ in lens spaces. \end{problem}
\begin{question} If $K$ is a knot of order $s$ in a lens space, then for each integer $b \geq 2$ what is the minimal possible genus of $K$ with bridge number $b$? \end{question}
\section{Preliminaries}\label{sec:prelims}
The notion of thin position for knots in $S^3$ was first introduced by Gabai and employed in his proof of Property~R \cite{gabai:fatto3mIII}. The theory of thin position for knots has since been greatly developed; see Scharlemann's survey \cite{scharlemann:tpittock}.
As one may observe, the idea of thin position naturally generalizes to knots in three manifolds with a given Heegaard splitting. We discuss this below in \fullref{sec:thinpositioninlensspaces} for genus 1 Heegaard splittings. The graphs of intersection arising from two intersecting properly embedded surfaces in the complement of a knot have a history of usefulness in questions about Dehn surgery. The power of this idea was first demonstrated by Litherland~\cite{litherland:sokistII} and most notably used to great success in Gordon and Luecke's portion of the proof of the Cyclic Surgery Theorem \cite{cgls:dsok} and their solution to the knot complement problem \cite{gl:knotcomplement}. If one of the two intersecting surfaces becomes a Heegaard surface in some Dehn filling of the knot complement, the theory of thin position may be used to establish some nice properties of the resulting graphs of intersection. Gabai does this for the standard Heegaard splitting of $S^3$ \cite{gabai:fatto3mIII}, which Rieck promotes to Heegaard splittings in general \cite{rieck:hsomitdfs}. See Gordon's survey \cite{gordon:cmids}. We construct such graphs of intersection from a genus 1 Heegaard splitting of our lens space and a Seifert surface for our knot in \fullref{sec:graphsofintersection}. \subsection{A few words about notation} By $A-B$ we denote the usual set difference, the set of points in $A$ that are not in $B$. If $A$ and $B$ are two intersecting manifolds then by $A \backslash B$ we denote $A$ ``cut along'' $B$, the closure of $A-B$ in the path metric on $A$. A regular open neighborhood of $A$ is denoted $N(A)$. Its closure is denoted $\bar{N}(A)$. \subsection{Thin position in lens spaces}\label{sec:thinpositioninlensspaces} Let $h \co X \to \R \cup \{\pm \infty\}$ be the height function induced by the genus one Heegaard splitting of the lens space $X = L(r,q)$ so that $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_z = h^{-1} (z)$ is a torus for $z \in \R$ and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{\pm \infty} = h^{-1}(\pm \infty)$ are circles. The tori $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_z$, $z \in \R$, are {\em level tori\/}, and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{\pm \infty}$ are the {\em circles at infinity\/}. Each level torus separates $X$ into two solid torus components; the component $X^+$ {\em above\/} containing $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{+\infty}$ and the component $X^-$ {\em below\/} containing $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{-\infty}$. Let $K$ be a knot in $X$. By an isotopy of $K$ we may assume $K \subseteq X - \{\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{\pm \infty} \}$ and that $h|K$ is a Morse function so that $h|K$ has finitely many critical points, all of which are nondegenerate and have distinct critical values. Given such a Morse presentation of $K$, let $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_1}, \dots, \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_n}$ be level tori so that exactly one is between each consecutive pair of critical levels. Define the {\em width\/} of the Morse presentation to be $\sum_{i=1}^n |\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_i} \cap K|$. The {\em width of $K$\/} is the minimum of all widths of Morse presentations of $K$. 
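For instance, if $h|K$ has exactly one maximum and one minimum, then there is a single intermediate level torus, it meets $K$ in exactly two points, and the width of this presentation is $2$. Since every intermediate level torus of every Morse presentation meets $K$ in at least two points, $2$ is the smallest possible width for a knot that cannot be isotoped into a level torus, and such a presentation exhibits $K$ as a $1$--bridge knot in the terminology introduced below.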
A Morse presentation of $K$ that realizes the width of $K$ is said to be a {\em thin presentation of $K$\/}, and $K$ itself is said to be in a {\em thin position\/}. If the first critical value of $h|K$ above the level torus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_i}$ is a maximum and the first critical value below is a minimum, then $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_i}$ is a {\em thick level (torus)\/}. Similarly, if the first critical value of $h|K$ above the level torus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_i}$ is a minimum and the first critical value below is a maximum, then $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_i}$ is a {\em thin level (torus)\/}. Every Morse presentation of $K$ must have a thick level torus, but not all Morse presentations of $K$ have a thin level torus. Over all Morse presentations of $K$ that have no thin level tori, one that minimizes the width of $K$ is said to be a {\em bridge presentation of $K$\/}, and $K$ itself is said to be in a {\em bridge position\/}. If $K$ is in bridge position and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_0$ is the thick level torus, then the {\em bridge number of $K$\/} is $\frac{1}{2}|K \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_0|$. We say a knot with bridge number $n$ is an {\em $n$--bridge knot\/}. In the event that $K$ may further be isotoped to lie as an embedded curve in a level torus, then either $K$ is a trivial knot (and hence bounds a disk in $X$) or $K$ is a {\em lens space torus knot\/}. For both of these cases we will say that $K$ has width $0$ and bridge number $0$. Let $K$ be in Morse position, and let $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ be a noncritical level torus. Suppose that $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ contains an arc $\alpha$ with interior disjoint from $K$ that together with an arc $\beta$ of $K - \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounds an embedded disk $\Delta$ with interior disjoint from $K$. If $\beta$ lies above (resp.\ below) $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ then we say that $\Delta$ is a {\em high (resp.\ low) disk\/} for $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ or for the arc $K \cap \Delta$. \begin{Lemma}\label{highdisklowdisk} Assume $K$ is not $0$--bridge. If $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is a noncritical level torus in a thin presentation of $K$, then $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ cannot have a high disk and a low disk whose boundaries have empty intersection in the complement of $K$. \end{Lemma} \begin{proof} This is standard in the theory of thin position (see eg\ \cite{gabai:fatto3mIII} and \cite{scharlemann:tpittock}). Assume $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ admits a high disk $\Delta^+$ and a low disk $\Delta^-$ such that $\ensuremath{\partial} \Delta^+ \cap \ensuremath{\partial} \Delta^- - K = \emptyset$; see \fullref{fig:thinningdisks}(a). Let $\alpha^+ = \ensuremath{\partial} \Delta^+ \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and $\alpha^- = \ensuremath{\partial} \Delta^- \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Then $|\alpha^+ \cap \alpha^-| \leq 2$. Use $\Delta^+$ and $\Delta^-$ to isotop $\Delta^+ \cap K$ and $\Delta^- \cap K$ onto $\alpha^+$ and $\alpha^-$ respectively. 
If $|\alpha^+ \cap \alpha^-| = 2$ then $K$ is $0$--bridge. If $|\alpha^+ \cap \alpha^-| \leq 1$ then a further slight isotopy of $K$ will produce a Morse presentation of $K$ with reduced width; see \fullref{fig:thinningdisks}(b). Both situations contradict our hypothesis. \end{proof} \begin{figure} \caption{(a) A high disk $\Delta_+$ and a low disk $\Delta_-$\qua (b) The result of a thinning isotopy associated to the disks $\Delta_+$ and $\Delta_-$ \qua (c) A long disk $\Delta$ \qquad (d) The result of a thinning isotopy associated to the long disk $\Delta$} \label{fig:thinningdisks} \end{figure} Let $K$ be in Morse position, and let $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ be a noncritical level torus. Suppose that $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ contains an arc $\alpha$ with interior disjoint from $K$ that together with an arc $\beta$ of $K$ bounds an embedded disk $\Delta$ with interior disjoint from $K$. If $\Delta$ is not a high or low disk (and hence the interior of $\beta$ intersects $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$) then we say that $\Delta$ is a {\em long disk\/} for $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ or for the arc $K \cap \Delta$. \begin{Lemma}\label{longdisk} Assume $K$ is not $0$--bridge. If $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is a noncritical level torus in a thin presentation of $K$, then $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ cannot have a long disk. \end{Lemma} Note that the existence of a long disk does not necessarily imply the existence of a pair of high and low disks satisfying the hypotheses of \fullref{highdisklowdisk}. \begin{proof} Let $\mathbb{T} = \{\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_1}, \dots, \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_m}\}$ be a collection of level tori used to calculate the width of $K$. Assume $\Delta$ is a long disk for $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$; see \fullref{fig:thinningdisks}(c). Furthermore, without loss of generality, assume $N(\alpha) \cap \Delta$ is above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. We may use $\Delta$ to isotop the arc $\beta = \Delta \cap K$ onto $\alpha = \ensuremath{\partial} \Delta - \Int \beta \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Perform a slight isotopy of the interior of this arc upwards so that it has just one critical point, a maximum. We may then pull this critical point up to the height of the lowest critical point above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ of the former arc $\beta$; see \fullref{fig:thinningdisks}(d). Hence $K$ will again be in a Morse position. Such an isotopy necessarily reduces the number of critical points of $K$. Moreover, all the critical points are at the heights of former critical points. Therefore there exists a proper subset of $\mathbb{T}$ that forms a suitable collection of level tori with which to calculate the width of $K$ after the isotopy. Furthermore, the isotopy does not increase $|\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{z_i} \cap K|$ for any $i$. Therefore the isotopy decreases the width of $K$, contradicting that $K$ is in thin position. \end{proof} \subsection{Graphs of intersection}\label{sec:graphsofintersection} Let $K$ be a knot of genus $g$ and order $s$ in the lens space $X$ of order $r$ whose Seifert surfaces have just one boundary component. 
Let $S \subseteq X - N(K)$ be a Seifert surface for $K$ of genus $g$. Note that $S$ is incompressible and $\ensuremath{\partial}$--incompressible in the exterior of $K$. Let $\what{S}$ be the surface $S$ with its boundary (abstractly) capped off by a disk. Assume that $K$ is in thin position and that $K$ is not $0$--bridge. Let $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ be a thick level for $K$ with $|\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap K|=t$, and set $T = \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} - N(K)$. By an isotopy of $S$ we may assume that $S$ and $T$ intersect transversely and each component of $\ensuremath{\partial} T$ intersects $\ensuremath{\partial} S$ exactly $s$ times. Gabai \cite{gabai:fatto3mIII} shows that we may assume $S$ has been isotoped so that each arc component of $S \cap T$ is essential in $S$ and in $T$ (cf\ Gordon's comment in \cite{gordon:cmids} or \cite[Proposition~2.1]{gordon:tdsokils}). We may further assume that $S$ has been chosen among all such Seifert surfaces for $K$ to minimize the number of intersections with $T$. Note that every closed component of $S \cap T$ is nontrivial in $T$. If a component $\gamma$ of $S \cap T$ were trivial on $T$, then since $S$ is incompressible it must also be trivial on $S$. Therefore a disk exchange would produce another Seifert surface for $K$ that has fewer intersections with $T$. The arc components of $S \cap T$ define fat vertexed graphs $G_S$ and $G_T$ in $\what{S}$ and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ respectively. The {\em (fat) vertices\/}, {\em edges\/}, and {\em faces\/} of these graphs are defined as follows. The fat vertex of $G_S$ is the single disk $\what{S} - \Int S$. The fat vertices of $G_T$ are the disks of intersection $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap \bar{N}(K) = \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} - \Int T$. The edges of $G_S$ are the arc components of $S \cap T$ as they lie on $S$, and the edges of $G_T$ are the arc components of $S \cap T$ as they lie on $T$. The faces of $G_S$ are the connected components of $S$ cut along the edges of $G_S$, ie\ the path metric closure of the complement of the edges of $G_S$ in $S$. Similarly, the faces of $G_T$ are the connected components of $T$ cut along the edges of $G_T$. Observe that simple closed curves of $S \cap T$ are not recorded in the graphs $G_S$ and $G_T$. For example, the interior of a face of $G_S$ may intersect $T$. Therefore each component of $S \backslash T$ is contained in a face of $G_S$, though often a component of $S \backslash T$ actually is a face of $G_S$. To make the distinction from faces, we refer to the components of $S \backslash T$ and $T \backslash S$ as {\em regions\/}. Orient $K$ and sequentially label the intersections of $K \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ from $1$ to $t$. We may refer to the $i$--th intersection as $K_i$. This induces a numbering of the components of $\ensuremath{\partial} T$ as $1, 2, \dots, t$ in the order in which they appear on $\ensuremath{\partial} \bar{N}(K)$, and hence numbers the vertices of $G_T$. Denote these vertices $U_1, \dots, U_t$. As there is only one vertex of $G_S$, we leave it unnumbered. Since $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is separating and $|\ensuremath{\partial} S| = 1$, the parity rule implies that the edges of $G_T$ connect vertices of opposite parity. 
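To make the parity rule concrete in the present setting (an observation recorded only for the reader's convenience): since $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ separates $X$ and $K$ meets $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ transversely, $t$ is even, and the parity rule says that each edge of $G_T$ joins an odd-labeled vertex $U_i$ to an even-labeled vertex $U_j$.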
An edge of $G_T$ with end points on $U_i$ and $U_j$ is an arc of $S \cap T$ with endpoints on the intersection of $\ensuremath{\partial} S$ with the $i$--th and $j$--th components of $\ensuremath{\partial} T$ and hence it is also an edge of $G_S$ with end points on the single vertex of $G_S$. On the boundary of the fat vertex of $G_S$ we label the end points of this arc with $i$ and $j$. Around the vertex of $G_S$, the labels $1, 2, \dots, t$ appear sequentially and repeat $s$ times. Since each arc component of $S \cap T$ is essential in $S$ and in $T$, neither graph $G_S$ nor $G_T$ contains a trivial loop. Denote by $\mathbf{t}$ the set of edge-endpoint labels $\{1, 2, \dots, t\}$ of $G_S$ for which we have the associated $\mathbf{t}$--{\em intervals\/} $(1,2), (2,3), \dots, (t{-}1, t), (t,1)$. We may index the arc of $K \backslash \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ running from $K_i$ to $K_{i+1}$ by the $\mathbf{t}$--interval $(i,i{+}1)$ as $K_{(i,\,i+1)}$. Similarly $H_{(i,\,i+1)}$ denotes the $1$--handle of $\bar{N}(K) \backslash \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ running from vertex $U_i$ to vertex $U_{i+1}$. Concatenations of consecutive $\mathbf{t}$--intervals such as $(i-1,i+1)$ may be used to index longer arcs of $K$ and longer $1$--handles, eg\ $K_{(i-1,\,i+1)} = K_{(i-1,\,i)} \cup K_{(i,\,i+1)}$ and $H_{(i-1,\,i+1)} = H_{(i-1,\,i)} \cup H_{(i,\,i+1)}$. For each label $x$ in $\mathbf{t}$, the subgraph $\smash{G_S^x}$ of $G_S$ is the graph in $\what{S}$ consisting of the single vertex of $G_S$ and every edge of $G_S$ that has an endpoint labeled $x$. Due to the parity rule, each graph $\smash{G_S^x}$ has exactly $s$ edges. The faces of $\smash{G_S^x}$ are the connected components of $S$ cut along the edges of $\smash{G_S^x}$. If $f$ is a face of $G_S$ (resp.\ $\smash{G_S^x}$ for some $x \in \mathbf{t}$), then each component of $\ensuremath{\partial} f$ consists of an alternating sequence of {\em edges\/} and {\em corners\/} where the edges are edges of $G_S$ (resp.\ $\smash{G_S^x}$) and the corners are identified with $\mathbf{t}$--intervals $(i, i{+}1)$ (resp.\ concatenated $\mathbf{t}$--intervals $(i,j)$) as they are arcs of $\ensuremath{\partial} S$ between the labeled components $i$ and $i+1$ (resp.\ $i$ and $j$) of $\ensuremath{\partial} T$. The edges and corners of a region $R$ of $S \backslash T$ contained in a face $f$ of $G_S$ are the edges and corners of $f$ contained in $R$. If $R$ is a proper subset of $f$, then $f$ may have edges and corners not contained in $R$ and $R$ may have a boundary component that is not comprised of edges and corners. We may similarly define the edges and corners of faces of $G_T$ and regions of $T \backslash S$; here corners are arcs of $\ensuremath{\partial} T$ between edges of $G_T$. A disk face of a graph $G_S$, $\smash{G_S^x}$, or $G_T$ with $n$ edges in its boundary is an {\em $n$--gon\/}. We commonly refer to a $2$--gon as a {\em bigon\/}, a $3$--gon as a {\em trigon\/}, a $4$--gon as a {\em tetragon\/}, and a $5$--gon as a {\em pentagon\/}. Given an $n$--gon $f$ of $\smash{G_S^x}$, if for every $y \in \mathbf{t}$ every $n$--gon $f'$ of $G_S^y$ with $f' \subseteq f \subseteq S$ satisfies $f' = f$, then $f$ is an {\em innermost $n$--gon\/}. To each edge of $G_S$ with endpoints labeled $i$ and $j$, we associate its {\em label pair\/} $\{i,j\}$.
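As a purely illustrative example of this bookkeeping, with hypothetical values of the parameters not tied to any particular knot: if $t = 4$ and $s = 3$, then the boundary of the fat vertex of $G_S$ carries $st = 12$ edge endpoints labeled $1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4$ in order, $G_S$ has $st/2 = 6$ edges, each subgraph $\smash{G_S^x}$ has exactly $s = 3$ edges, the $\mathbf{t}$--intervals are $(1,2), (2,3), (3,4), (4,1)$, and, for instance, $K_{(4,\,2)} = K_{(4,\,1)} \cup K_{(1,\,2)}$.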
A {\em Scharlemann cycle of length $n$\/} ($Sn$ cycle, for short) is a set of $n$ edges of $G_S$ each with label pair $\{i, i{+}1\}$ for some $i$ in $\mathbf{t}$ that bounds an $n$--gon of $G_S$ (with corners all $(i, i{+}1)$). An {\em extended Scharlemann cycle of length $n$\/} is a set of $n$ edges of $\smash{G_S^x}$ each with label pair $\{x, y\}$ for some $x$ and $y$ in $\mathbf{t}$ that bounds an $n$--gon of $\smash{G_S^x}$ with corners all $(x,y)$ or all $(y,x)$ and contains a Scharlemann cycle of length $n$. Notice that the Scharlemann cycle of length $n$ contained in the $n$--gon of an extended Scharlemann cycle of length $n$ with label pair $\{x,y\}$ has label pair $\{(x+y-1)/2, (x+y+1)/2 \}$ or $\{(x+y-1)/2 + t/2, (x+y+1)/2 +t/2\}$. If $n$ is $2$ or $3$ we will often abbreviate these terms as {\em (extended) $S2$ cycle\/} and {\em (extended) $S3$ cycle\/}. By a {\em forked extended $S2$ cycle\/} we mean a set of three edges of some $\smash{G_S^x}$ that bounds a trigon composed of a bigon $B$ bounded by an (extended) $S2$ cycle of either $G_S^{\smash{x+1}}$ or $G_S^{\smash{x-1}}$ together with a bigon and a trigon of $G_S$ adjoined to the edges of $B$; see \fullref{fig:forkedbigona+1}. \subsection[Outline of proof of Theorem 1.1]{Outline of proof of \fullref{main}} \begin{proof} Throughout the subsequent sections, unless noted otherwise, we assume that we have the following: \begin{itemize} \item a lens space $X$ of order $r$, \item a height function $h$ on $X$, \item a knot $K$ of order $s$ in thin position with respect to $h$, \item a thick level $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ splitting $X$ into the two solid tori $X^+$ and $X^-$, \item the surface $S$ of genus $g \geq 1$ with $|\ensuremath{\partial} S|=1$, $\what{S}$, and $T$ as defined above, \item the graphs $G_S$, $\smash{G_S^x}$ for all $x \in \mathbf{t}$, and $G_T$ and \item $s \geq 4g-1$. \end{itemize} Since $g \geq 1$ and $s \geq 4g-1$, $s \geq 3$. Since $r \geq s$, $r \geq 3$ as well. For technical reasons, we defer the case $r = 3$ to \fullref{sec:genus1case} at the end. Accordingly, we will work under the assumption that $r \geq 4$ until then. The main body of work encompasses showing that $t \leq 6$. In \fullref{sec:bigonsandtrigons} we distill the hypothesis of our theorem into the existence of bigons and trigons in the graphs $\smash{G_S^x}$ for each $x \in \mathbf{t}$. We adapt some fundamental lemmas from Goda and Teragaito \cite{gt:dsokwylsagok} which are then employed to understand what the bigons and trigons may look like. In \fullref{sec:annuliandtrees} we use the existence of bigons and trigons to construct annuli that weave back and forth through $X^+$ and $X^-$ crossing $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Arcs of $K$ lie on these annuli. The thinness of $K$ then gives constraints on ``how much'' of $K$ may lie on such an annulus. With these constraints, the bigons and trigons imply the existence of a second such annulus. In \fullref{sec:twoschcycles}, to have the requisite bigons and trigons, we reckon with two disjoint annuli containing most of the knot $K$. The techniques of \fullref{sec:annuliandtrees} are then re-employed to obtain contradictions to the thinness of $K$. In \fullref{doubleunfurl} we conclude that for $r \geq 4$, we have $t \leq 6$. In \fullref{sec:tis6} we fix $K$ with $t=6$ and isotop the interior of $S$ to gain a better grasp on the faces of $G_S$. 
Then we use Euler characteristic estimates and further thin position arguments to refine our understanding of the faces of $G_S$. From this \fullref{prop:tnot6} concludes $t \neq 6$ and hence $t \leq 4$. In \fullref{sec:bridgeposition} we consider multiple thick levels for $K$ in thin position. Using the result that $K$ may intersect a thick level at most $4$ times, we promote the thin position of $K$ to a bridge position. Once again using the existence of bigons and trigons, we find a thinning isotopy of $K$. Thus we conclude in \fullref{thm:t=2} that for $r \geq 4$, we have $t = 2$. \fullref{lem:t=2} quickly shows that $K$ is at most $1$--bridge. Finally, in \fullref{sec:genus1case} we treat the case that $r=3$ in which case $s=3$ and $g=1$. \fullref{thm:r=3} concludes that $K$ is at most $1$--bridge. \end{proof} \section{Bigons and trigons}\label{sec:bigonsandtrigons} Our proof of \fullref{main} is set in motion by the following lemma. \begin{lemma}\label{musthavebigons} For each $x \in \mathbf{t}$, $\smash{G_S^x}$ must have a bigon or trigon face. \end{lemma} \begin{proof} Recall that $\smash{G_S^x}$ has $s$ edges and we are assuming $s \geq 4g-1$.
\begin{align*} \chi(S) = 1 - 2g &= -s + \sum_{\mbox{{\scriptsize disk faces of} } \smash{G_S^x}} \chi(\mbox{disk}) + \sum_{\mbox{{\scriptsize nondisk faces of} } \smash{G_S^x}} \chi(\mbox{nondisk}) \\ &\leq -s + \#(\mbox{disk faces}) \end{align*}
Assume $\smash{G_S^x}$ has no bigons or trigons. So each disk face has at least four edges. Thus
\begin{align*}
s &\geq \frac{1}{2} \cdot 4 \cdot \#(\mbox{disk faces}). \\
\intertext{Hence}
1-2g &\leq -s + \#(\mbox{disk faces}) \leq -s + \frac{1}{2} s,
\end{align*}
and so $s/2 \leq 2g-1$ or $s \leq 4g-2$, a contradiction. \end{proof} We thus study how bigons and trigons of $\smash{G_S^x}$ for each $x \in \mathbf{t}$ arise within $G_S$. Often a bigon or trigon of one graph $\smash{G_S^x}$ will contain a bigon or trigon of another graph $G_S^y$. We say a face $f$ of $\smash{G_S^x}$ {\em accounts for the label $y$\/} if $f$ contains a bigon or trigon of $G_S^y$. \subsection[Fundamental lemmas about G sub S]{Fundamental lemmas about $G_S$} We adapt and build on some useful lemmas about order $2$ and order $3$ Scharlemann cycles and the faces of $G_S$ they bound from the work of Goda and Teragaito in \cite{gt:dsokwylsagok}. They work with the case that $s=r$, but our generalization of this presents no problem here. The main difficulty to overcome in our work that is not present in \cite{gt:dsokwylsagok} is the potential presence of simple closed curves $\gamma \subseteq S \cap T$ that are essential in $T$ and yet trivial in both $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and $S$. Goda and Teragaito avoid such occurrences by choosing $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ among all Heegaard tori. In our situation, we require $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ to be a level Heegaard torus. Nevertheless \fullref{musthavebigons} will permit us to conclude in \fullref{circlesofintersection} that no such curve $\gamma$ exists. We begin with some lemmas about how edges of Scharlemann cycles and extended Scharlemann cycles may lie on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Let $\sigma$ be a set of edges of $G_S$. Let $\Gamma$ be the subgraph of $G_T$ consisting of the edges of $\sigma$ and the vertices of $G_T$ to which the edges are incident.
If $\Gamma$ is contained in a disk in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, then we say the edges of $\sigma$ {\em lie in a disk\/} in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If $\Gamma$ is contained in an annulus in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ but does not lie in a disk, then we say the edges of $\sigma$ {\em lie in an essential annulus\/} in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Let $f$ be a face of $G_S$ or $\smash{G_S^x}$, and let $\sigma$ be the edges of $\ensuremath{\partial} f$. Let $Q$ be a two-sided surface in $X$ with product neighborhood $Q \times [-\epsilon, \epsilon]$ for small $\epsilon > 0$ so that $Q$ is identified with $Q \times \{0\}$. If $\ensuremath{\partial} f \cap Q = \sigma$ and $N(\sigma) \cap f$ is contained in either $Q \times [0, \epsilon]$ or $Q \times [-\epsilon, 0]$ then we say $f$ {\em lies on one side of $Q$\/} even if $\Int f$ does not. \begin{lemma}{\rm (cf\ \cite[Lemma 2.3]{gt:dsokwylsagok})}\qua\label{GT:L2.3} Let $\sigma$ be a Scharlemann cycle in $G_S$ of length $p$ with label pair $\{x,x{+}1\}$ and let $f$ be the face of $G_S$ bounded by $\sigma$. Suppose that $p \neq r$. Then $f$ cannot lie on one side of a disk. In particular, the edges of $\sigma$ cannot lie in a disk in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \end{lemma} \begin{proof} Assume the edges of $\sigma$ lie in a disk $D$ and $f$ lies on one side of $D$. (If the edges of $\sigma$ lie in a disk in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, then necessarily $f$ lies to one side of that disk.) If $\Int f \cap D \neq \emptyset$ then by choosing $D$ smaller, we may assume $\Int f \cap D$ is a collection of simple closed curves. Let $\xi \in \Int f \cap D$ be an innermost simple closed curve on $f$. Alter $D$ by a disk exchange with the disk on $f$ that $\xi$ bounds. In this manner we may produce a disk $D' \subseteq X$ of which $f$ lies to one side (containing the vertices $U_x$ and $U_{x+1}$ and the edges of $\sigma$) such that $\Int f \cap D' = \emptyset$. Then $N(D' \cup H_{(x,\, x{+}1)} \cup f)$ is a punctured lens space of order $p$. Since a lens space is irreducible, $X$ is a lens space of order $p$. This is contrary to the assumption that $p \neq r$. \end{proof} \begin{lemma}{\rm (cf\ \cite[Lemma 2.1]{gt:dsokwylsagok})}\qua\label{GT:L2.1} Let $\sigma$ be an $Sp$ cycle of $G_S$, $p = 2$ or $3$, with label pair $\{x, x{+}1\}$. Let $f$ be the face of $G_S$ bounded by $\sigma$. Then the edges of $\sigma$ lie in an essential annulus $A$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Furthermore, the core of $A$ does not bound a disk in $X$. Indeed, the core of $A$ runs $p$ times in the longitudinal direction of the solid torus on the side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ on which $f$ lies. \end{lemma} \begin{proof} Because we are assuming $r \geq 4$, \fullref{GT:L2.3} implies the edges of $\sigma$ do not lie in a disk in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. The proof of Lemma~2.1 of \cite{gt:dsokwylsagok} then applies to show that the edges of $\sigma$ must lie in an essential annulus $A$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Let $W = X^+$ or $X^-$ be the solid torus on the side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in which $f$ lies.
If the core of $A$ bounds a disk in $X$, then it must bound a meridional disk of a solid torus on one side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. By \fullref{GT:L2.3}, the core of $A$ cannot bound a disk in the solid torus $X \backslash W$. As we will show, the core of $A$ cannot bound a disk in $W$ since it runs $p$ times in the longitudinal direction of $W$. If $\Int f \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} = \emptyset$, then the remainder of this proof follows from Lemma~2.1 of \cite{gt:dsokwylsagok}. Because the core of $A$ runs $p$ times in the longitudinal direction of the solid torus $M = \bar{N}(A \cup H_{(x,\, x{+}1)} \cup f) \subseteq W$, the space $W \backslash M$ must be a solid torus whose meridional disk crosses $\ensuremath{\partial} M \backslash A$ once. Thus the core of $A$ must also run $p$ times in the longitudinal direction on $W$. If $\Int f \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \neq \emptyset$, then by choosing $A$ smaller we may assume that $\Int f \cap \ensuremath{\partial} A = \emptyset$. Let $\xi \subseteq \Int f \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ be an innermost simple closed curve on $f$. Alter $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and $A$ by a disk exchange with the disk on $f$ that $\xi$ bounds. (Note that $\xi$ might be essential in $T$ and yet contained in $A$.) Continuing in this manner we may produce a Heegaard torus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ (isotopic to $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in $X$) and an essential annulus $A'$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ such that $\sigma$ lies in the essential annulus $A'$, $f$ lies on the same sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, and $\Int f \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' = \emptyset$. Let $W'$ be the solid torus on the side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ containing $f$. As in \cite{gt:dsokwylsagok}, $M = \bar{N}(A' \cup H_{\smash{(x,\,x{+}1)}} \cup f)$ is a solid torus such that $A'$ runs $p$ times in the longitudinal direction of $M$. Thus the core of $A'$ and each component of $\ensuremath{\partial} A'$ runs $p$ times in the longitudinal direction of $W'$. Since $\ensuremath{\partial} A' = \ensuremath{\partial} A$ and $W'$ is isotopic to $W$, it follows that the core of $A$ runs $p$ times in the longitudinal direction of $W$. \end{proof} \begin{lemma}\label{lieinanannulus} Let $\sigma'$ be an extended $Sp$ cycle for $p=2$ or $3$. Assume $\sigma$ is the $Sp$ cycle contained in the face of $S$ bounded by $\sigma'$. Then the edges of $\sigma$ and $\sigma'$ each lie in essential annuli $A$ and $A'$ respectively in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ so that $A \cap A' = \emptyset$. \end{lemma} \begin{proof} Assume $\sigma$ has label pair $\{x, x{+}1\}$. Then $\sigma'$ has label pair $\{x-n, x+1+n\}$ (taken mod $t$) for some positive integer $n$. We proceed by induction. By \fullref{GT:L2.1}, the edges of $\sigma$ lie in an essential annulus $A = A_0$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Let $\sigma_{i-1}$ be the extended $Sp$ cycle with label pair $\{x-(i-1), x+1+(i-1)\}$ contained in the extended $Sp$ cycle $\sigma_i$ with label pair $\{x-i, x+1+i\}$. 
Assume the edges of $\sigma_{i-1}$ lie in an essential annulus $A_{i-1}$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. There are $p$ bigons of $G_S$ whose edges give a one-to-one correspondence between the edges of $\sigma_{i-1}$ and the edges of $\sigma_i$. Each bigon has the two corners $(x-i, x-i+1)$ and $(x+i, x+i+1)$. Of these bigons, take two whose edges in $\sigma_{i-1}$ lie in the essential annulus $A_{i-1}$ (and not in a disk). Extend their corners radially into $H_{(x-i,\, x-i+1)}$ and $H_{(x+i,\, x+i+1)}$, and join them together to form an annulus $B$ with $\ensuremath{\partial} B \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Since one component of $\ensuremath{\partial} B$ is an essential curve contained in $A_{i-1}$, the other is either parallel on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ to the core of $A_{i-1}$ or bounds a disk on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If one component of $\ensuremath{\partial} B$ bounds a disk $D$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, then the other component bounds the disk $D \cup B$. Since this other component is parallel to the core of $A_{i-1}$ and hence to the core of $A$, the core of $A$ bounds a disk in $X$ contrary to \fullref{GT:L2.1}. Thus the component of $\ensuremath{\partial} B$ not in $A_{i-1}$ is an essential curve on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ parallel to the core of $A$. Hence the edges of $\sigma_i$ lie in an essential annulus $A_i$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and $A_{i-1} \cap A_{i} = \emptyset$. When $i=n$ we obtain that the edges of $\sigma'$ lie in the annulus $A_n = A'$. Furthermore, since the label pairs of $\sigma_i$ and $\sigma_j$ for $0\leq i < j \leq n$ are distinct, $A_i \cap A_j = \emptyset$. \end{proof} \begin{lemma}\label{lem:twoS2cycleswithsamelabelpair} If $\sigma_1$ and $\sigma_2$ are two $S2$ cycles with the same label pair then the edges of $\sigma_1 \cup \sigma_2$ lie in an essential annulus. Furthermore, within the annulus, the edges of $\sigma_1$ separate the edges of $\sigma_2$. \end{lemma} This lemma implies that each edge of $\sigma_1$ bounds a bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ with an edge of $\sigma_2$. \begin{proof} Assume the edges of $\sigma_1 \cup \sigma_2$ do not lie in an essential annulus. Nevertheless, by \fullref{GT:L2.3} the edges of $\sigma_i$ lie in an essential annulus $A_i$ for each $i=1,2$. Let $f$ be the bigon of $G_S$ bounded by $\sigma_1$. Let $g$ be the bigon of $G_S$ bounded by $\sigma_2$. Let $\{x, x{+}1\}$ be the label pair of $\sigma_1$ and $\sigma_2$. Note that the corners of $f$ and $g$ are all on $H_{(x,\, x{+}1)}$. If $\Int(f \cup g) \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \neq \emptyset$, then alter $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ by disk exchanges to obtain the Heegaard torus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ so that $\Int(f \cup g) \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' = \emptyset$ while $\ensuremath{\partial}(f \cup g) \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' = \ensuremath{\partial}(f \cup g) \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Similarly, we obtain the essential annuli $A_i'$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ in which the edges of $\sigma_i$ lie.
Let $V$ be the solid torus on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ as $f$ and $g$. Consider the solid torus $V_f = V \backslash (f \cup H_{(x,\, x{+}1)})$. Note that the core of the annulus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' \backslash A_1'$ is a longitudinal curve on $\ensuremath{\partial} V_f$. Since $\sigma_2$ does not lie in $A_1'$, $\sigma_2$ must have an edge that is not parallel to an edge of $\sigma_1$. Thus both corners of $g$ lie on the same rectangle of $\ensuremath{\partial} H_{(x,\, x{+}1)} \backslash (U_x \cup U_{x+1} \cup f)$ which is contained in $\ensuremath{\partial} V_f$. Hence the other edge of $\sigma_2$ is not parallel to an edge of $\sigma_1$. Then the simple closed curve $\ensuremath{\partial} g$ on $\ensuremath{\partial} V_f$ must intersect the core of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' \backslash A_1'$ twice. Therefore $\ensuremath{\partial} g$ cannot be nullhomologous in $H_1(V_f)$ contradicting that it bounds a disk in $V_f$. Since the corners of $g$ cannot be on the same rectangle of $\ensuremath{\partial} H_{(x,\, x{+}1)} \cap \ensuremath{\partial} V_f$, the edges of $g$ must be incident to $U_x$ and $U_{x+1}$ on opposite sides of the edges of $\sigma_1$. Since the edges of $\sigma_1$ and $\sigma_2$ lie in an essential annulus, the edges of $\sigma_1$ separate the edges of $\sigma_2$ within this annulus. \end{proof} \begin{lemma}{\rm (cf\ \cite[Lemma~2.4]{gt:dsokwylsagok})}\qua\label{oppositesides} Let $\sigma_1$ and $\sigma_2$ be $S2$ or $S3$ cycles of $G_S$ with disjoint label pairs, and let $f_1$ and $f_2$ be the faces of $G_S$ bounded by $\sigma_1$ and $\sigma_2$ respectively. Then the faces $f_1$ and $f_2$ lie on opposite sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \end{lemma} \begin{proof} By \fullref{lieinanannulus}, the edges of $\sigma_i$ lie in an annulus $A_i$ for $i = 1, 2$. Since the label pairs of $\sigma_1$ and $\sigma_2$ are disjoint, $A_1 \cap A_2 = \emptyset$. We may assume $A_1$ and $A_2$ have been chosen so that $(A_1 \cup A_2) \cap (\Int f_1 \cup \Int f_2)$ is only a collection of simple closed curves. By the minimality assumption on $|S \cap T|$ and \fullref{GT:L2.1}, $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap (\Int f_1 \cup \Int f_2)$ may contain only simple closed curves that are essential on $T$ and yet trivial on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Let $\xi \in \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap (\Int f_1 \cup \Int f_2)$ be an innermost simple closed curve on $f_1 \cup f_2$. Alter $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ (and both $A_1$ and $A_2$) by a disk exchange with the disk on $f_1 \cup f_2$ bounded by $\xi$. In this manner we may produce a Heegaard torus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ (isotopic to $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in $X$) with annuli $A_1'$ and $A_2'$ such that $\sigma_i \subseteq A_i'$ for $i=1,2$, $f_i$ lies on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ as $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ for $i=1,2$, and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' \cap (\Int f_1 \cup \Int f_2) = \emptyset$. Let $\{x_i, x_i{+}1\}$ be the label pair of $\sigma_i$.
Construct the solid torus $$V_i = \bar{N}(A_i' \cup H_{(x_i,\, x_i+1)} \cup f_i).$$ The meridian of $V_i$ intersects the core of the annulus $A_i$ algebraically $2$ or $3$ times depending on the order of the cycle $\sigma_i$. If $f_1$ and $f_2$ lie on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, then $f_1$ and $f_2$ lie on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$. Then the solid tori $V_1$ and $V_2$ are both contained in a single solid torus on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$. Let $A_3$ be an annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' \backslash (A_1' \cup A_2')$. The manifold $V_3 = V_1 \cup \bar{N}(A_3) \cup V_2$ has toroidal boundary which intersects $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ in the annulus $A_1' \cup A_2' \cup A_3$. Thus $V_3$ must be a solid torus. On the other hand, since neither the meridian of $V_1$ nor that of $V_2$ intersects a component of $\ensuremath{\partial} A_3$ algebraically once, $V_3$ cannot be a solid torus. Therefore $f_1$ and $f_2$ cannot lie on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \end{proof} \begin{figure} \caption{(a) The trigon $F$\qua (b) The edges of $F$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$} \label{fig:funnytrigon} \end{figure} \begin{lemma} \label{lem:funnytrigon} Let $F$ be a trigon with corners $(x, x{+}1)$ and edges as in \fullref{fig:funnytrigon}(a) such that its edges lie in an essential annulus $A$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as in \fullref{fig:funnytrigon}(b). Let $V$ be the solid torus on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as $F$. Then the core of $A$ is a meridional curve for $V$. \end{lemma} \begin{proof} Note that $F$ itself is not a face of $G_S$ or any of its subgraphs since it has edges with both endpoints on the same vertex. Nevertheless we will continue to use the language of edges and corners when talking about $F$. Consider $M = \bar{N}(A \cup H_{(x,\, x{+}1)} \cup F)$ formed by attaching the $2$--handle $\bar{N}(F)$ to the genus $2$ handlebody $\bar{N}(A \cup H_{(x,\, x{+}1)})$. Let $D_z$ be a meridional disk of $\bar{N}(A)$ whose boundary intersects the edges of $F$ only twice and misses the vertices. Let $D_H$ be the cocore of the $1$--handle $H_{(x,\, x{+}1)}$. The fundamental group of $\bar{N}(A \cup H_{(x,\, x{+}1)})$ is then generated by curves $\zeta$ and $\eta$ dual to $D_z$ and $D_H$ respectively. Thus we have the presentation \begin{align*} \pi_1(M) &= \langle \zeta, \eta | \eta \zeta \eta^{-1} \zeta \eta = 1 \rangle \\ &= \langle \zeta, \eta | (\zeta \eta^2)^2 = \eta^3 \rangle \\ &= \langle a, b | a^2 = b^3 \rangle \end{align*} which is the trefoil group. Therefore $\pi_1(M)$ is not $\Z$, and $M$ is not a solid torus. We may consider $M$ as contained in $V$ such that $\ensuremath{\partial} M \cap \ensuremath{\partial} V = A$. Then $A' = \ensuremath{\partial} M \backslash A$ is a properly embedded annulus in $V$. Assume the core of $A$ is not a meridional curve for $V$. Then the components of $\ensuremath{\partial} A' = \ensuremath{\partial} A$ are not meridional curves for $V$. Thus the two components of $V \backslash A'$ must be solid tori. This is contrary to $M$ not being a solid torus.
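(For the reader's convenience, here is a short check of the second equality in the presentation of $\pi_1(M)$ above: the relation $\eta \zeta \eta^{-1} \zeta \eta = 1$ gives $\zeta \eta^{-1} \zeta = \eta^{-2}$, and inverting yields $\zeta^{-1} \eta \zeta^{-1} = \eta^{2}$, so $\zeta \eta^{2} \zeta = \eta$. Hence
\begin{align*}
(\zeta \eta^{2})^{2} = (\zeta \eta^{2} \zeta)\, \eta^{2} = \eta^{3},
\end{align*}
and the substitution $a = \zeta \eta^{2}$, $b = \eta$ produces $\langle a, b | a^2 = b^3 \rangle$.)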
\end{proof} \subsection{Bigons} \begin{lemma}\label{bigonS2} If $\sigma = \{e', e''\} \subseteq \smash{G_S^x}$ bounds a bigon $f$ of $\smash{G_S^x}$, then $\sigma$ is either an $S2$ cycle or an extended $S2$ cycle. \end{lemma} \begin{proof} The edges of $G_S$ on $f$ must be mutually parallel. By relabeling if necessary, we may assume $x=1$ and the edges of $G_S$ on $f$ are $\{e'=e_1, e_2, \dots, e_n = e'' \}$ labeled successively so that, along a chosen corner of $f$, $e_i$ has label $i$ taken mod $t$. We claim that $n \leq t$. If $n > t+1$, then the edge $e_{t+1}$ is contained in $\smash{G_S^1}$ and yet is not on the boundary of $f$. This contradicts that $f$ is a bigon of $\smash{G_S^1}$. If $n=t+1$, then the label $1$ at the endpoints of $e'$ and $e''$ occurs on the same corner of $f$. Since there are $t+1$ edges, the label $1$ must occur for the end point of some edge $e_i$ on the other corner of $f$. Hence there is an edge of $\smash{G_S^1}$ contained in the interior of $f$ contradicting that $f$ is a bigon of $\smash{G_S^1}$. Since $1 < n \leq t$, $\sigma$ has label pair $\{1,n\}$. The label $1$ endpoints of the edges $e'$ and $e''$ occur on opposite corners of $f$. If $n=2$, then $\sigma$ is an $S2$ cycle. If $n > 2$, then $\sigma$ is an extended $S2$ cycle. \end{proof} \subsection{Trigons} The structure of trigons of $\smash{G_S^x}$ takes a bit more work to determine. We will first classify innermost trigons of $\smash{G_S^x}$ and then determine how trigons of another graph $G_S^y$ may contain them. By relabeling, we may assume a given innermost trigon is a trigon of $\smash{G_S^1}$. If $f$ is a trigon of $\smash{G_S^1}$, then $G_S$ on $f$ appears as one of the four types shown in \fullref{fig:trigontypes}. The label $1$ appears only where shown since otherwise $f$ would not be a face of $\smash{G_S^1}$. Types I$'$ and II$'$ may be obtained from types I and II respectively by suitable changes of orientations and labeling. Therefore we may further assume an innermost trigon is a trigon of $\smash{G_S^1}$ of type I or type II. \begin{figure} \caption{The four possible types of trigons} \label{fig:trigontypes} \end{figure} \begin{prop} \label{innermosttrigons} An innermost trigon of $\smash{G_S^1}$ is bounded by either an $S3$ cycle or a forked extended $S2$ cycle. \end{prop} \begin{proof} To highlight the overall structure of this proof we relegate two of the subcases to \fullref{lem:ruleoutcaseIa<b} and \fullref{lem:ruleoutcaseIa>b}. Let $f$ be an innermost trigon of $\smash{G_S^1}$. Note that we are not concerned with whether or not $f \cap T$ contains any circle components. {\bf Case I}\qua The trigon $f$ is of type I. For $f$ to be innermost, one of $a$, $b$, or $c$ must equal $1$. Otherwise there would be a trigon of $G_S^2$ contained in $f$. Say $c = 1$. The trigon appears as in \fullref{fig:trigonIa}. \begin{figure} \caption{The trigon of type I with $c=1$} \label{fig:trigonIa} \end{figure} Since $f$ is a face of $\smash{G_S^1}$, we have $a+b \leq t$. Notice that $a$ and $b$ are both odd. One may readily check that in both of the cases $b=c=1\neq a$ and $a=b=c=1$ the trigon is innermost, corresponding to a forked extended $S2$ cycle and an $S3$ cycle respectively. Thus assume $a \neq 1 \neq b$. {\bf Subcase $a<b$}\qua This is ruled out by \fullref{lem:ruleoutcaseIa<b}. {\bf Subcase $a=b$}\qua In this situation there is a trigon of $G_S^{a+1}$ within $f$ contradicting that $f$ is innermost. {\bf Subcase $a>b$}\qua This is ruled out by \fullref{lem:ruleoutcaseIa>b}.
{\bf Case II}\qua The trigon $f$ is of type II. For $f$ to be a face of $\smash{G_S^1}$, we must have $c+b >t$, $c+a \leq t$, and $a+b \leq t$. These conditions imply $a < b$ and $a < c$. The trigon appears as in \fullref{fig:trigonIIa}. We have three main cases. \begin{figure} \caption{The trigon of type II} \label{fig:trigonIIa} \end{figure} {\bf Subcase $c>b$}\qua With some relabeling, part of the trigon appears as in Case I, $a<b$. This gives a contradiction to \fullref{lem:ruleoutcaseIa<b}. {\bf Subcase $c=b$}\qua In this situation there is a trigon of $G_S^{b+1}$ inside $f$, contradicting that $f$ is innermost. {\bf Subcase $c<b$}\qua Here, $a<c<b$. With some relabeling, the arguments of Case I, $a>b$ apply. Thus there are no innermost trigons of type II. \begin{figure} \caption{The two forms of innermost trigons of $\smash{G_S^1}$} \label{fig:trigonIe} \end{figure} Hence, up to relabeling and changes in orientation, any innermost trigon of $\smash{G_S^1}$ appears as in \fullref{fig:trigonIe}. Such trigons are bounded by either an $S3$ cycle or a forked extended $S2$ cycle. \end{proof} \begin{remark} Note that with \fullref{innermosttrigons} in hand, it follows from the definitions of $S3$ cycles and forked extended $S2$ cycles that all innermost trigons have one or two sets of label pairs for their edges. None have three distinct label pairs. \end{remark} \begin{lemma}\label{lem:ruleoutcaseIa<b} There cannot exist an innermost trigon of $\smash{G_S^1}$ of type I with $c=1$ and $a < b$. \end{lemma} \begin{proof} Assume a trigon $f$ of $\smash{G_S^1}$ of type I with $c=1$ and $a < b$ does exist. On $f$, $G_S$ has a bigon $\kreis{A}$ with edges $e_1$ and $e_2$ having label pairs $\{2,a\}$ and $\{1,a{+}1\}$ respectively and corners $(1,2)$ and $(a,a{+}1)$, another bigon $\kreis{B}$ with edges $e_3$ and $e_4$ having label pairs $\{a, b{+}1\}$ and $\{a{+}1, b\}$ respectively and corners $(a,a{+}1)$ and $(b,b{+}1)$, and a trigon $\kreis{C}$ with edges $e_5$, $e_6$, and $e_7$ having label pairs $\{1, b{+}1\}$, $\{2,a\}$, and $\{a{+}1,b\}$ respectively and corners $(1,2)$, $(a,a{+}1)$, $(b,b{+}1)$. See \fullref{fig:trigonalessthanb}(a). Note that $\kreis{A}$, $\kreis{B}$, and $\kreis{C}$ all lie on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \begin{figure} \caption{(a) The trigon of type I with $c=1$ and $a<b$\qua (b) The seven edges on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$} \label{fig:trigonalessthanb} \end{figure} There are two extended $S2$ cycles contained in $f$: $\sigma_1 = \{e_1, e_6\}$ with label pair $\{2,a\}$ and $\sigma_2 = \{e_3,e_7\}$ with label pair $\{a{+}1, b\}$. Since $\sigma_1$ is an extended $S2$ cycle, by \fullref{lieinanannulus} the edges $e_1$ and $e_6$ lie in an essential annulus $A_1$. Similarly, since $\sigma_2$ is an extended $S2$ cycle, the edges $e_3$ and $e_7$ lie in an essential annulus $A_2$. The three edges $e_2$, $e_3$, and $e_5$ connect the vertices $U_a$ and $U_{a+1}$ (via the vertices $U_1$ and $U_{b+1}$) and thus lie in an annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ between the cores of $A_1$ and $A_2$. These seven edges must lie on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as in \fullref{fig:trigonalessthanb}(b). Choose rectangles $\rho_{(1, 2)}$ and $\rho_{(b,\,b+1)}$ on $H_{(1,2)}$ and $H_{(b,\,b+1)}$ respectively that connect the edges of the bigons and trigon on them.
Form a larger trigon $F$ by connecting the two bigons $\kreis{A}$ and $\kreis{B}$ to the trigon $\kreis{C}$ with the rectangles $\rho_{(1,2)}$ and $\rho_{(b,\,b+1)}$. Let $\ensuremath{\partial}_1 F$ be the edge of $F$ with label pair $\{a,a\}$, $\ensuremath{\partial}_2 F$ be the edge with label pair $\{a{+}1,a{+}1\}$, and $\ensuremath{\partial}_3 F$ be the edge with label pair $\{a, a{+}1\}$. Note that $\ensuremath{\partial}_1 F$ and $\ensuremath{\partial}_2 F$ each lie in an essential annulus. See \fullref{fig:trigonF}(a) for the labeling of $\ensuremath{\partial} F$ and \fullref{fig:trigonF}(b) for the placement of $\ensuremath{\partial} F$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \begin{figure} \caption{(a) The trigon $F$\qua (b) $\ensuremath{\partial} F$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$} \label{fig:trigonF} \end{figure} \fullref{lem:funnytrigon} implies that the core of the annulus in which the edges of $F$ lie bounds a meridional disk of the solid torus on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as $F$. Hence the cores of $A_1$ and $A_2$ each bound meridional disks. Let $\sigma_i'$ be the $S2$ cycle of $G_S$ in the face of $G_S$ bounded by $\sigma_i$ for $i=1,2$. By \fullref{lieinanannulus}, the cores of the annuli in which the edges of $\sigma_i'$ lie bound meridional disks for $i=1,2$. This contradicts \fullref{GT:L2.1}. \end{proof} \begin{lemma}\label{lem:ruleoutcaseIa>b} There cannot exist a trigon of $\smash{G_S^1}$ of type I with $c=1$ and $a > b$. \end{lemma} \begin{proof} The lower left branch of \fullref{fig:trigonIa} appears now (rotated upwards) as in \fullref{fig:trigonIb}. \begin{figure} \caption{Part of the trigon of type I with $c=1$ and $a > b$} \label{fig:trigonIb} \end{figure} Also, since there is an extended $S2$ cycle with label pair $\{2, a\}$ contained in $f$, by \fullref{lieinanannulus} the two edges with label pair $\{2+i, a-i\}$ for each $0\leq i \leq (a+1)/2$ lie in disjoint essential annuli. The $S2$ cycle has label pair $\{(a+1)/2, (a+3)/2\}$. If $a-b+1 = b$ so that the $S2$ cycle has label pair $\{b, b{+}1\}$ then we may view the edges $e_0, e_1, \dots, e_5$ on $T$ as in \fullref{fig:trigonIc}. The endpoints labeled $b+1$ of edges $e_4$, $e_0$, and $e_3$ are immediately preceded by the endpoints labeled $b$ of edges $e_3$, $e_1$, and $e_4$ respectively. Due to orientations, given that the edges $e_4$, $e_0$, and $e_3$ appear in that order clockwise around the vertex $U_{b+1}$, the edges $e_3$, $e_1$, and $e_4$ must appear in the order counterclockwise around the vertex $U_b$ as shown. Hence the edges $e_0$ and $e_1$ must emanate from opposite sides of the annulus in which the edges $e_3$ and $e_4$ lie. This however leaves no possible position for $e_6$. \begin{figure} \caption{Edges of the trigon of type I with $c=1$ and $a = 2b-1$ on $T$} \label{fig:trigonIc} \end{figure} If the $S2$ cycle does not have label pair $\{b, b{+}1\}$, then let $\smash{e_3'}$ and $\smash{e_4'}$ be the other edges on $f$ that have the same labels as $e_3$ and $e_4$ respectively. Since $\{e_3, \smash{e_3'}\}$ and $\{e_4, \smash{e_4'}\}$ are each extended $S2$ cycles, they each lie in an essential annulus. Furthermore each pair of edges $\{e_3, e_4\}$ and $\{\smash{e_3'}, \smash{e_4'}\}$ bounds a bigon $B$ and $B'$ respectively on $G_S$. We may thus form an ``annulus'' $$A = B \cup H_{(b,\,b+1)} \cup B' \cup H_{(a-b+1,\,a-b+2)}.$$ Then the positions of the edges $e_3$ and $\smash{e_3'}$ in $G_T$ dictate the placement of the edges $e_4$ and $\smash{e_4'}$.
If $e_3$ and $\smash{e_3'}$ are swapped, then $e_4$ and $\smash{e_4'}$ must also be swapped. Otherwise $A$ would be a properly embedded nonseparating annulus in one of the Heegaard solid tori, which cannot occur. We may thus view the edges $e_0, e_1, e_3, \smash{e_3'}, e_4, \smash{e_4'}$ and $e_6$ on $T$ as in \fullref{fig:trigonId}. Since the edges $e_2$ and $e_5$ also form an extended $S2$ cycle, they lie in an essential annulus disjoint from the previous edges. These two edges appear either as shown in \fullref{fig:trigonId} or perhaps swapped with one another (depending on whether $e_3$ or $\smash{e_3'}$ is closer to $e_2$ on $f$). \begin{figure} \caption{Edges of the trigon of type I with $c=1$, $a > b$, and $a \neq 2b-1$ on $T$} \label{fig:trigonId} \end{figure} Observe that the annulus $A$ separates the vertices $U_1$ and $U_2$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Since $1$ and $b$ are both odd, $H_{(1, 2)}$ lies on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as $A$. Though the interiors of $B$ and $B'$ may intersect $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in simple closed curves, $H_{(1, 2)}$ must intersect $A$. This cannot occur. \end{proof} \begin{lemma} \label{forkededges} If an innermost trigon $g$ of $\smash{G_S^1}$ is bounded by a forked extended $S2$ cycle with edges as in \fullref{fig:forkedbigon}(a), then the edges $e_1, \dots, e_5$ appear on $G_T$ as in \fullref{fig:forkedbigon}(b) (up to symmetries). \end{lemma} \begin{figure} \caption{(a) A trigon bounded by a forked extended $S2$ cycle of $\smash{G_S^1}$\qua (b) The edges $e_1, \dots, e_5$ on $G_T$} \label{fig:forkedbigon} \end{figure} \begin{proof} To prove this, we examine the possible configurations of these five edges on $G_T$ and see what the bigon $\kreis{A}$ and the trigon $\kreis{D}$ bounded by them on $g$ imply. The edges $e_2$ and $e_3$ form an (extended) $S2$ cycle $\sigma$. By \fullref{lieinanannulus} they lie in an essential annulus $A$. Thus we may begin by assuming these two edges are as shown in \fullref{fig:forkedbigon}(b). The edge $e_4$ connects vertices $U_1$ and $U_2$, and the edge $e_1$ connects the vertices $U_1$ and $U_{a+1}$. Without loss of generality, we may assume $e_4$ and $e_1$ are also as shown in \fullref{fig:forkedbigon}(b). It remains for us to determine the position of $e_5$ (relative to the first four edges). The endpoints labeled $2$ of edges $e_3$, $e_4$, and $e_2$ are immediately preceded by the endpoints labeled $1$ of edges $e_4$, $e_5$, and $e_1$ respectively. Due to orientations, given that the edges $e_3$, $e_4$, and $e_2$ appear in that order clockwise around the vertex $U_2$, the edges $e_4$, $e_5$, and $e_1$ must appear in that order counterclockwise around the vertex $U_1$. Therefore the edge $e_5$ may appear either as in \fullref{fig:forkedbigon}(b) or as in \fullref{fig:forkededges2}. \begin{figure} \caption{Another possible placement of the edge $e_5$} \label{fig:forkededges2} \end{figure} Assume the five edges appear as in \fullref{fig:forkededges2}. Extend the $(a, a{+}1)$ corner of $\kreis{A}$ and $\kreis{D}$ radially inward through $H_{\smash{(a,\,a{+}1)}}$ to its core $K_{\smash{(a,\,a{+}1)}}$. Join $\kreis{A}$ to $\kreis{D}$ along this common corner to form the trigon $F$.
By \fullref{lem:funnytrigon}, the core of the essential annulus $A \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in which the edges of $F$ lie bounds a meridional disk of the solid torus on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as $F$. Thus the core of $A$ bounds a meridional disk. By \fullref{lieinanannulus} the core of the annulus in which the edges of the $S2$ cycle that lies in between the edges of $\sigma$ lie must also bound a meridional disk. This however contradicts \fullref{GT:L2.1}. Thus the edges of $\kreis{A}$ and $\kreis{D}$ must appear as in \fullref{fig:forkedbigon}(b). \end{proof} \begin{lemma}\label{thinningforkplusbigon} Let $g$ be a trigon of $\smash{G_S^1}$ with edges as in \fullref{fig:forkedbigon}(a). If the interior of the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by edges $e_1$ and $e_5$ (and the vertices $U_1$ and $U_{a+1}$) is disjoint from $K$, then there cannot be a bigon of $G_S$ attached to $e_4$, the edge with label pair $\{1,2\}$. \end{lemma} \begin{proof} Note that since $g$ is bounded by a forked extended $S2$ cycle, $t \geq 4$. Assume there is a bigon of $G_S$ attached to $e_4$. Let $g'$ be $g$ with this bigon attached. Let $\Delta$ be the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by the edges $e_1$ and $e_5$. Assume $\Int \Delta \cap K = \emptyset$. Then, by the minimality assumption on $|S \cap T|$, we have $\Int \Delta \cap g' = \emptyset$. The corners of $\Delta$ are arcs $u_1$ of $\ensuremath{\partial} U_1$ and $u_{a+1}$ of $\ensuremath{\partial} U_{a+1}$. We first consider the case that $a+1 \geq 6$ (and hence $t \geq 6$). \begin{figure} \caption{The bigons and trigon of $g'$} \label{fig:forkplusbigon} \end{figure} Label the bigons and the trigon of $G_S$ on $g'$ as $\kreis{A}, \kreis{B}, \dots, \kreis{E}$ as shown in \fullref{fig:forkplusbigon}. Let $\rho_{(1,2)}$ be the rectangle on $\ensuremath{\partial} H_{(1,2)} \backslash (U_1 \cup U_2)$ and $u_2$ be the arc of $\ensuremath{\partial} U_2$ so that $\rho_{(1,2)}$ is bounded by $u_1$, the $(1,2)$ corner of $\kreis{A}$, $u_2$, and the corner $(1,2)$ of $\kreis{D}$. Let $\rho_{\smash{(a,\,a{+}1)}}$ be the rectangle on $\ensuremath{\partial} H_{\smash{(a,\,a{+}1)}} \backslash (U_a \cup U_{a+1})$ and $u_a$ be the arc of $\ensuremath{\partial} U_a$ so that $\rho_{\smash{(a,\,a{+}1)}}$ is bounded by $u_{a+1}$, the $(a, a{+}1)$ corner of $\kreis{A}$, $u_{a}$, and an $(a, a{+}1)$ corner of $\kreis{D}$. Let $\rho_{(2,3)}$ be the rectangle on $\ensuremath{\partial} H_{(2,3)} \backslash (U_2 \cup U_3)$ and $u_3$ be the arc of $\ensuremath{\partial} U_3$ so that $\rho_{(2,3)}$ is bounded by $u_2$, the $(2,3)$ corner of $\kreis{B}$, $u_3$, and the $(2,3)$ corner of $\kreis{E}$. Let $\rho_{(a-1,\,a)}$ be the rectangle on $\ensuremath{\partial} H_{(a-1,\,a)} \backslash (U_{a-1} \cup U_a)$ and $u_{a-1}$ be the arc of $\ensuremath{\partial} U_{a-1}$ so that $\rho_{(a-1,\,a)}$ is bounded by $u_{a}$, the $(a{-}1, a)$ corner of $\kreis{B}$, $u_{a-1}$, and the $(a{-}1, a)$ corner of $\kreis{C}$. \begin{figure} \caption{The bigons and trigon of $g'$ assembled to form a long disk $D$} \label{fig:thinforkplusbigon} \end{figure} Assemble $\kreis{A}, \kreis{B}, \dots, \kreis{E}$, $\Delta$, $\rho_{(1,2)}$, $\rho_{(2,3)}$, $\rho_{(a-1,\,a)}$, and $\rho_{\smash{(a,\,a{+}1)}}$ to form the embedded disk $D$ as shown in \fullref{fig:thinforkplusbigon}.
The boundary of $D$ is composed of an arc $\alpha = e_7 \cup u_{a-1} \cup e_6 \cup u_3 \cup e_8$ and an arc $\beta = (H_{(t,1)} \cap \kreis{E}) \cup (H_{(1,2)} \cap \kreis{D}) \cup (H_{(2,3)} \cap \kreis{C})$. The arc $\alpha$ lies on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. The arc $\beta$ lies on $\ensuremath{\partial} H_{(t,3)}$ (recall $H_{(t,3)} = H_{(t,1)} \cup H_{(1,2)} \cup H_{(2,3)}$) and can be radially extended into $H_{(t,3)}$ to lie on the arc $K_{(t,3)}$. Hence we may take $\beta$ to lie on $K$. Therefore $D$ is a long disk. By \fullref{longdisk}, such a disk cannot exist. If $a+1 =4$, then $\kreis{B}$ and $\kreis{C}$ are the same bigon of $G_S$ on $g$. The two corners of $\kreis{B} = \kreis{C}$ however are distinct arcs on $H_{(2,3)}$. Also, notationally $\rho_{(a-1,\,a)}$ would coincide with $\rho_{(2,3)}$. In this case, let us refer to the former as $\rho_{\smash{(2,3)}}'$. Further notice that the interiors of $\rho_{(2,3)}$ and $\rho_{\smash{(2,3)}}'$ do not intersect. We may assemble the disk $D$ using two copies of $\kreis{B}$. A slight isotopy which fixes $\beta$ and keeps $\alpha$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ makes $D$ embedded. (Alternatively, in this construction of $D$, one may regard $\kreis{B}$ and $\kreis{C}$ as slight pushoffs of the same bigon to opposite sides.) Again, $D$ is a long disk. \end{proof} \begin{figure} \caption{The two possible trigons bounded by a forked extended $S2$ cycle of $G_S^{a+1}$} \label{fig:forkedbigona+1} \end{figure} \begin{lemma}\label{thinningtwoforks} Let $g$ be the trigon bounded by a forked extended $S2$ cycle of $\smash{G_S^1}$ with edges as in \fullref{fig:forkedbigon}(a). Let $f$ be the trigon bounded by a forked extended $S2$ cycle of $G_S^{a+1}$. Its edges must be as in \fullref{fig:forkedbigona+1} (a) or (b). Let $\Delta_g$ be the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by the edges $e_1$ and $e_5$ of $g$, and let $\Delta_f$ be the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by the edges $\smash{e_1'}$ and $e_5'$ of $f$. If the interiors of $\Delta_g$ and $\Delta_f$ are disjoint from $K$, then the edge $\smash{e_4'}$ has label pair $\{a, a{+}1\}$. \end{lemma} \begin{proof} Assume the interiors of $\Delta_f$ and $\Delta_g$ are disjoint from $K$. By the minimality assumption on $|S \cap T|$, we have $\Int (\Delta_f \cup \Delta_g) \cap (f \cup g) = \emptyset$. Assume the edge $\smash{e_4'}$ of $f$ does not have label pair $\{a, a{+}1\}$. Therefore it must have label pair $\{a{+}1, a{+}2\}$ and $f$ is as in \fullref{fig:forkedbigona+1}(b). Assemble disks $D_g$ and $D_f$ in a manner similar to what is done in the proof of \fullref{thinningforkplusbigon}. The disk $D_g$ is comprised of $\kreis{A}$, $\kreis{D}$, $\Delta_g$, a rectangle $\rho_{(a,\,a+1)}$ on $\ensuremath{\partial} H_{\smash{(a,\,a{+}1)}}$, and a rectangle $\rho_{(1, 2)}$ on $\ensuremath{\partial} H_{(1,2)}$. The disk $D_f$ is comprised of $\kreis{A}'$, $\kreis{D}'$, $\Delta_f$, a rectangle $\rho_{(a+1,\,a+2)}$ on $\ensuremath{\partial} H_{(a+1,\,a+2)}$, and a rectangle $\rho_{(b,\,b+1)}$ on $\ensuremath{\partial} H_{(b,\,b+1)}$. These two disks are shown in \fullref{fig:twoforkshighlowdisks}. \begin{figure} \caption{The construction of a high disk and a low disk} \label{fig:twoforkshighlowdisks} \end{figure} Note that the arcs $K_{(1,2)}$ and $K_{(a+1,\,a+2)}$ lie on opposite sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$.
We may extend arcs of the boundaries of the two disks $D_g$ and $D_f$ radially into $H_{(1,2)}$ and $H_{(a+1,\,a+2)}$ so that $\ensuremath{\partial} D_g = K_{(1,2)} \cup (D_g \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu})$ and $\ensuremath{\partial} D_f = K_{(a+1,\,a+2)} \cup (D_f \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu})$. Even if $b=2$, one may check that the arcs $\ensuremath{\partial} D_g \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and $\ensuremath{\partial} D_f \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ are disjoint. Therefore the pair of disks $D_g$ and $D_f$ are a pair of disjoint high and low disks for $K$. This is a contradiction to \fullref{highdisklowdisk}. Therefore $f$ is as in \fullref{fig:forkedbigona+1}(a), and the edge $\smash{e_4'}$ has label pair $\{a, a{+}1\}$. \end{proof} \begin{lemma} \label{lem:shortpairsofforks} Assume for $i \neq j$ that $\sigma_i \subseteq G_S^i$ and $\sigma_j \subseteq G_S^j$ are two forked extended $S2$ cycles such that for each (extended) $S2$ cycle contained in the face bounded by one of $\sigma_i$ or $\sigma_j$ there is an (extended) $S2$ cycle with the same label pair contained in the face bounded by the other forked extended $S2$ cycle. Then the faces bounded by $\sigma_i$ and $\sigma_j$ each contain only an $S2$ cycle and no extended $S2$ cycles. \end{lemma} \begin{proof} Assume $\smash{G_S^1}$ has a forked extended $S2$ cycle $\sigma_1$ that bounds a trigon $g$ on $\smash{G_S^1}$ as in \fullref{fig:forkedbigon}(a). By \fullref{forkededges} the five edges $e_1$, $e_2$, $e_3$, $e_4$, and $e_5$ lie on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as in \fullref{fig:forkedbigon}(b). For each $n = 0, \dots, (a-3)/2$ there is an (extended) $S2$ cycle contained in $g$ with label pair $\{2+n,a-n\}$. Assume there is a forked extended $S2$ cycle $\sigma_j$ of $G_S^j$ bounding a trigon $f$ such that for each $n = 0, \dots, (a-3)/2$ there is an (extended) $S2$ cycle contained in $f$ with label pair $\{2+n,a-n\}$. By hypothesis $j \neq 1$. Hence $j = a+1$. Therefore $f$ appears as in \fullref{fig:forkedbigona+1}(a) with $b=1$. We now consider how the edges of $\kreis{A}'$ and $\kreis{D}'$ lie with respect to the edges of $g$ as shown in \fullref{fig:forkedbigon}(b). Note that since $\kreis{A} \neq \kreis{D}'$ and $\kreis{D} \neq \kreis{A}'$ the edges $\smash{e_1'}, \dots, e_5'$ each must be distinct from the edges $e_1, \dots, e_5$. Also, by \fullref{lem:twoS2cycleswithsamelabelpair} and \fullref{lieinanannulus}, the edges $e_2$, $e_3$, $\smash{e_2'}$, and $\smash{e_3'}$ together lie in an essential annulus. The edges $e_1$ and $\smash{e_1'}$ have the same label pair $\{1, a{+}1\}$. Either they lie in a disk or they lie in an essential annulus in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. {\bf Case 1}\qua If $e_1$ and $\smash{e_1'}$ lie in an essential annulus, then either (a) $\smash{e_1'}$ is incident to $U_1$ in the arc of $\ensuremath{\partial} U_1$ between $e_4$ and $e_5$ that does not intersect $e_1$ or (b) $\smash{e_1'}$ is incident to $U_1$ in the arc of $\ensuremath{\partial} U_1$ between $e_4$ and $e_1$ that does not intersect $e_5$. For (a), the corners of $\kreis{A}'$ force $\smash{e_2'}$ to be incident to the vertices $U_2$ and $U_a$ on opposite sides of $e_2 \cup e_3 \cup U_2 \cup U_a$. 
Thus $\smash{e_2'}$ cannot lie in the essential annulus which contains $e_2$ and $e_3$, contrary to \fullref{lem:twoS2cycleswithsamelabelpair}. For (b), the corners of $\kreis{A}'$ force $\smash{e_2'}$ to be incident to the vertices $U_2$ and $U_a$ on the side of $e_2 \cup e_3 \cup U_2 \cup U_a$ to which $e_4$ is not incident. By \fullref{lem:twoS2cycleswithsamelabelpair} the edge $\smash{e_3'}$ must lie on the other side of $e_2 \cup e_3 \cup U_2 \cup U_a$. Following the $(a, a{+}1)$ corner of $\kreis{D}'$ from $\smash{e_3'}$ to $\smash{e_4'}$ implies that $\smash{e_4'}$ lies in the disk of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ between $e_1$ and $e_5$. This, however, contradicts that $\smash{e_4'}$ has label pair $\{a, a{+}1\}$. Therefore the edges $e_1$ and $\smash{e_1'}$ cannot lie in an essential annulus. {\bf Case 2}\qua The edges $e_1$ and $\smash{e_1'}$ lie in a disk. There are three possibilities for the placement of $\smash{e_1'}$ with respect to the edges of $g$. Either (a) $e_5$ lies in the disk between $e_1$ and $\smash{e_1'}$, (b) $e_1$ lies in the disk between $\smash{e_1'}$ and $e_5$, or (c) $\smash{e_1'}$ lies in the disk between $e_1$ and $e_5$. For (a), edge $\smash{e_1'}$ is incident to $U_1$ and $U_{a+1}$ in the same arcs as in Case 1(a). Thus again edge $\smash{e_2'}$ cannot lie in the essential annulus which contains $e_2$ and $e_3$, contrary to \fullref{lem:twoS2cycleswithsamelabelpair}. For (b), edge $\smash{e_1'}$ is incident to $U_1$ and $U_{a+1}$ in the same arcs as in Case 1(b). Thus again edge $\smash{e_4'}$ is forced to have the wrong label pair. For (c), the corners of $\kreis{A}'$ force $\smash{e_2'}$ to be incident to the vertices $U_2$ and $U_a$ on the side of $e_2 \cup e_3 \cup U_2 \cup U_a$ to which $e_4$ is incident. The edges $e_2$ and $\smash{e_2'}$ must lie in a disk. Otherwise $e_2$ and $\smash{e_2'}$ lie in an essential annulus with core parallel to the core of the essential annulus in which the edges $e_2$ and $e_3$ lie. Then on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, $e_1$ and $\smash{e_1'}$ bound a bigon $\Delta$. With the appropriate rectangles $\rho_{(1,2)}$ and $\rho_{(a,\,a+1)}$ on $\ensuremath{\partial} H_{(1,2)}$ and $\ensuremath{\partial} H_{(a,\,a+1)}$ respectively, we may form the disk $D = \kreis{A} \cup \kreis{A}' \cup \rho_{(1, 2)} \cup \rho_{(a,\,a+1)} \cup \Delta$ whose boundary is parallel to the core of the essential annulus in which the edges $e_2$ and $\smash{e_2'}$ lie. Then by \fullref{lieinanannulus} $\ensuremath{\partial} D$ is parallel to the core of the annulus in which an $S2$ cycle lies. This contradicts \fullref{GT:L2.1}. Since the edges $e_2$ and $\smash{e_2'}$ lie in a disk, $\smash{e_2'}$ is incident to $U_2$ in the arc of $\ensuremath{\partial} U_2$ between $e_2$ and $e_4$ that does not contain $e_3$. Therefore $\smash{e_3'}$ is incident to $U_2$ and $U_a$ in the arcs of $\ensuremath{\partial} U_2$ and $\ensuremath{\partial} U_a$ between $e_2$ and $e_3$ in which $\smash{e_2'}$ is not incident. Following the corners of $\kreis{D}'$ puts edge $\smash{e_4'}$ incident to $U_a$ on the same side of $e_2$ and $e_3$ as $\smash{e_3'}$, and puts edge $e_5'$ so that $e_1$ lies in the disk between $\smash{e_1'}$ and $e_5'$. This configuration is shown in \fullref{fig:oneconfiguration}. Let $\Gamma$ be this subgraph of $G_T$ consisting of the edges $e_1, \dots, e_5$ and $\smash{e_1'}, \dots, e_5'$ and the vertices $U_1$, $U_2$, $U_a$, and $U_{a+1}$. 
\begin{figure} \caption{(a) The trigons $g$ and $f$\qua (b) The subgraph $\Gamma$ consisting of the edges $e_1, \dots, e_5, \smash{e_1'}, \dots, e_5'$} \label{fig:oneconfiguration} \end{figure} If $a > 3$ then there is an (extended) $S2$ cycle with label pair $\{3, a{-}1\}$, which by \fullref{lieinanannulus} lies in an essential annulus. Thus $\Gamma$ must lie in an essential annulus. This is a contradiction. Thus $a=3$. This proves the lemma. \end{proof} \begin{prop}\label{lieinannulusorboundbigon} If a trigon $f$ of $\smash{G_S^x}$ is not innermost, then there exist two edges of $f$ that have the same label pair. These edges either lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ or bound a bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ whose interior intersects $K$. Moreover, if the edges of $f$ are not an extended $S3$ cycle, then $f$ contains a forked extended $S2$ cycle whose edge with label pair distinct from the other two is not an edge of $f$. \end{prop} \begin{proof} Assume $f$ is a trigon of $\smash{G_S^x}$ that is not an innermost trigon. Let $g$ be the innermost trigon contained in $f$. Therefore $f$ is obtained from $g$ by attaching bigon faces of $G_S$. By \fullref{innermosttrigons}, the edges of $g$ are an $S3$ cycle or a forked extended $S2$ cycle. If the edges of $g$ are an $S3$ cycle, then to obtain $f$ an equal number of bigons of $G_S$ must be attached to all three edges of $g$. Hence the edges of $f$ are an extended $S3$ cycle and lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If the edges of $g$ are a forked extended $S2$ cycle, then relabel so that $g$ is a trigon of $\smash{G_S^1}$ as in \fullref{fig:forkedbigon}(a) with one edge $e_4$ having label pair $\{1, 2\}$ and the other two edges $e_1$ and $e_5$ having label pair $\{1, a{+}1\}$. Note $a \neq 1$. Then $f$ is obtained by attaching bigon faces of $G_S$ to one, two, or three edges of $g$. Let $\Delta_g$ be the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by $e_1$ and $e_5$ (and the vertices $U_1$ and $U_{a+1}$). \fullref{thinningforkplusbigon} implies that the interior of $\Delta_g$ must intersect $K$ if a bigon of $G_S$ is incident to $e_4$. {\bf Case I}\qua If $f$ is obtained by attaching bigon faces of $G_S$ to just one edge of $g$, then the bigons are attached to $e_4$. Otherwise edges with label pairs $\{1,2\}$ and $\{1, a{+}1\}$ must be on the boundary of $f$. Since $a \neq 1$, this implies that $f$ is a trigon of $\smash{G_S^1}$, contradicting that $f$ properly contains the trigon $g$ of $\smash{G_S^1}$. Attaching bigons to the edge $e_4$ of $g$ with label pair $\{1, 2\}$ implies that $f$ must be a trigon of $G_S^{a+1}$. Furthermore, $e_1$ and $e_5$ are still edges of $f$. These edges bound $\Delta_g$. According to \fullref{thinningforkplusbigon} the interior of $\Delta_g$ must intersect $K$. This satisfies the conclusion of the proposition. {\bf Case II}\qua If $f$ is obtained from $g$ by attaching bigons of $G_S$ to the two edges with label pair $\{1, a{+}1\}$, then $f$ must be a trigon of either $\smash{G_S^1}$ or $G_S^2$. This cannot occur since $g$ is a trigon of $\smash{G_S^1}$ and its interior contains edges of $G_S^2$. Thus $f$ must be obtained from $g$ by attaching bigons of $G_S$ to the edge with label pair $\{1,2\}$ and one of the edges with label pair $\{1, a{+}1\}$. 
Because the third edge has label pair $\{1, a{+}1\}$, $f$ must be a trigon of either $\smash{G_S^1}$ or $\smash{G_S^{a+1}}$. It can be neither, since the interior of $f$ would contain an edge of $g$ with label pair $\{1, a{+}1\}$. Thus this case cannot occur. {\bf Case III}\qua Assume $f$ is obtained from $g$ by attaching bigons of $G_S$ to all three edges of $g$. The number of bigons attached to the three edges need not be equal. \begin{figure} \caption{(a) A forked extended $S2$ cycle with $b$ bigons attached to each edge of the face $g$ it bounds \qua (b) The construction of a long disk} \label{fig:3extforkedbigon} \end{figure} If $b$ bigons are attached to each edge of $g$, then $f$ is a trigon of $G_S^{t-b+1}$. Then $f$ has two edges with label pair $\{a+b+1,t-b+1\}$ and one edge with label pair $\{t-b+1,b+2\}$. The trigon $f$ appears as in \fullref{fig:3extforkedbigon}(a). Note that $b+2 < a+b+1$ since $a > 1$, and $a+b+1 < t-b+1$ since otherwise there would be an edge of $G_S^{t-b+1}$ in the interior of $f$. Either the two edges of $f$ with label pair $\{a+b+1, t-b+1\}$ lie in an essential annulus (satisfying the proposition) or they bound a bigon $\Delta_f$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If $\Int \Delta_f \cap K = \emptyset$ then we may construct a long disk in a manner similar to the constructions for $a+1 \geq 6$ and $a+1=4$ in \fullref{thinningforkplusbigon}. See \fullref{fig:3extforkedbigon}(b). Thus $\Int \Delta_f \cap K \neq \emptyset$, satisfying the proposition. If the edges of $g$ do not all have the same number of bigons attached to form $f$, then let $b$ be the minimum number of bigons attached to an edge of $g$. Hence all edges of $g$ have $b$ bigons attached, and either one or two edges have more than $b$ bigons attached. Let $g'$ be the trigon formed by attaching $b$ bigons to each of the edges of $g$. The trigon $g'$ appears as in \fullref{fig:3extforkedbigon}(a). If $f$ is obtained from $g'$ by attaching bigons to two of the edges of $g'$, then one may check that the arguments of Case II apply. If $f$ is obtained from $g'$ by attaching bigons to just one of the edges, then one may check that the initial arguments of Case I also apply to imply that extra bigons are attached to the edge of $g'$ with label pair $\{b+2, t-b+1\}$. Thus $f$ is a trigon of $G_S^{a+b+1}$. Either the two edges of $f$ with label pair $\{a+b+1, t-b+1\}$ lie in an essential annulus (satisfying the proposition) or they bound a bigon $\Delta_f$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If $\Int \Delta_f \cap K = \emptyset$ then we may construct a long disk in a manner similar to the construction in \fullref{thinningforkplusbigon}. Again, see \fullref{fig:3extforkedbigon}(b). Thus $\Int \Delta_f \cap K \neq \emptyset$, satisfying the proposition. \end{proof} \begin{lemma}\label{lem:emptybigon} If two edges of a trigon of some $\smash{G_S^x}$ bound a bigon $B$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, then $\Int B \cap K = \emptyset$. \end{lemma} \begin{proof} Assume otherwise. Then for some $x$ there is a trigon $F$ of $\smash{G_S^x}$ with two edges that bound a bigon $B$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ with $\Int B \cap K \neq \emptyset$ such that (*) any other bigon in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by two edges of a trigon of some $G_S^y$ and contained in $B$ has interior disjoint from $K$. Let $U_1$ be a vertex of $G_T$ in $\Int B$. 
By \fullref{musthavebigons} the graph $\smash{G_S^1}$ must have a bigon or trigon face $g$. Because $U_1$ is contained in the interior of $B$, no pair of edges bounding $g$ may lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. By \fullref{GT:L2.1} and \fullref{lieinanannulus}, $g$ must be a trigon that is not bounded by an $S3$ cycle or an extended $S3$ cycle. If $g$ is not an innermost trigon then by \fullref{lieinannulusorboundbigon} a pair of its edges must bound a bigon $B'$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ whose interior intersects $K$. This bigon $B'$, however, must be contained in $B$, contradicting (*). Therefore $g$ must be an innermost trigon. Since it cannot be bounded by an $S3$ cycle, \fullref{innermosttrigons} implies that it must be bounded by a forked extended $S2$ cycle. We may assume $g$ has edges as in \fullref{fig:forkedbigon}(a), which appear on $G_T$ as in \fullref{fig:forkedbigon}(b) as described by \fullref{forkededges}. Because $U_1$ is in the interior of $B$ and the edges $e_2$ and $e_3$ lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, the edges of $G_T$ bounding $B$ must both be incident to $U_2$. Because edges of $G_T$ connect vertices of opposite parity, the edges of $G_T$ bounding $B$ cannot also be incident to $U_{a+1}$. Therefore they must be incident to some other vertex, say $U_z$, which may be $U_a$. (Note that this means $x=2$ or $x=z$.) One such configuration with $z \neq a$ is depicted in \fullref{fig:bigonconfig}. If $z = a$ then it may be the case that either $e_2$ or $e_3$ bounds a bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ with an edge of $B$. \begin{figure} \caption{For $z \neq a$ the bigon $B$ is shown in grey with the edges of $g$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$} \label{fig:bigonconfig} \end{figure} Observe that the vertex $U_{a+1}$ must be contained in the interior of $B$. Therefore the bigon $\Delta_g$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by the two edges $e_1$ and $e_5$ of $g$ is contained in $B$. By (*) the interior of $\Delta_g$ must be disjoint from $K$. Furthermore, by the same arguments used above for the trigon $g$, the graph $G_S^{a+1}$ must have a trigon face $f$ bounded by a forked extended $S2$ cycle. The edges of $f$ then appear as in \fullref{fig:forkedbigona+1}(a) or (b). Because the bigon $\Delta_f$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by the two edges $\smash{e_1'}$ and $e_5'$ of $f$ is contained in $B$, (*) implies its interior must be disjoint from $K$. Since the interiors of both $\Delta_g$ and $\Delta_f$ are disjoint from $K$, \fullref{thinningtwoforks} implies that the edges of $G_S$ on $f$ must appear as in \fullref{fig:forkedbigona+1}(a). Moreover, since the edge $\smash{e_4'}$ has label pair $\{a, a{+}1\}$, it must be the case that $z=a$. Note that the two edges $\smash{e_2'}$ and $\smash{e_3'}$ both have label pair $\{a, b{+}1\}$ and lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. {\bf Claim}\qua $b=1$. Observe that the vertex $U_b$ is contained in $B$. Note that $b \neq 2, a+1$ since edges of $G_T$ connect vertices of opposite parity, and $b \neq a$ since the edges of $f$ would not form a forked extended $S2$ cycle otherwise. 
If $b \neq 1$ then the above arguments apply once again to show that $G_S^b$ contains a trigon $h$ bounded by a forked extended $S2$ cycle. As with the trigons $g$ and $f$, two edges of $h$ bound a bigon $\Delta_h$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, which is contained in $B$ and thus has interior disjoint from $K$. With $f$ playing the role of $g$, $h$ playing the role of $f$, and adjusting the labeling accordingly, \fullref{thinningtwoforks} implies that $h$ has an edge, say $\smash{e_4''}$, with label pair $\{b, b{+}1\}$. Since $b \neq 1$, $b+1 \neq 2$. Therefore in order for $\smash{e_2'}$ and $\smash{e_3'}$ to lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, the vertex $U_{b+1}$ must be disjoint from $B$. Yet this contradicts that the edge $\smash{e_4''}$ connects the vertices $U_b$ and $U_{b+1}$. Hence $b = 1$. Since $b=1$, the two forked extended $S2$ cycles bounding $g$ and $f$ fit the hypotheses of \fullref{lem:shortpairsofforks}. Indeed, the notation that we are currently using agrees with the notation used in the proof of \fullref{lem:shortpairsofforks}. As a consequence, we have that $a=3$ and the subgraph $\Gamma$ of $G_T$ consisting of the edges $e_1, \dots, e_5$ and $\smash{e_1'}, \dots, e_5'$ and the vertices $U_1, \dots, U_4$ does not lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Since $z=a=3$, the edges of $B$ have label pair $\{2, 3\}$. Therefore $F$ is a trigon of either $G_S^2$ or $G_S^3$. The subgraph $\Gamma$ and the two edges of $B$ are shown in \fullref{fig:twoshortforkedextendedS2}. Because of the symmetry between the edges of $f$ and $g$, we may assume $F$ is a trigon of $G_S^2$. \begin{figure} \caption{(a) The two forked extended $S2$ cycles $g$ and $f$ \qua (b) The bigon $B$ in grey and the edges of $g$ and $f$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$} \label{fig:twoshortforkedextendedS2} \end{figure} If $F$ is not an innermost trigon, then by \fullref{lieinannulusorboundbigon} either $F$ is bounded by an extended $S3$ cycle or contains a forked extended $S2$ cycle. In either case, $F$ must contain an (extended) $S3$ cycle or an (extended) $S2$ cycle $\sigma$ whose label pair contains neither $2$ nor $3$. By \fullref{lieinanannulus}, the edges of $\sigma$ must lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. But this essential annulus must be disjoint from $\Gamma$, which contradicts that $\Gamma$ does not itself lie in an essential annulus. Since $F$ must be an innermost trigon, by \fullref{innermosttrigons} it is bounded by an $S3$ cycle or a forked extended $S2$ cycle. If $F$ is bounded by an $S3$ cycle, then it has label pair $\{2,3\}$. By \fullref{GT:L2.1} its third edge must lie in an essential annulus $A$ with the two edges that bound $B$. Beginning with the two edges of $B$, by following the endpoints of the edges along the corners of $F$ (along $\ensuremath{\partial} H_{(2,3)}$), we find that all three edges of $F$ must encounter the vertex $U_2$ from the same side of the essential annulus $A'$ in which edges $e_2$ and $e_3$ lie. Following along the corners of $F$ to the vertex $U_3$, we find that the edges of $F$ must encounter the vertex $U_3$ from the other side of the annulus $A'$. Therefore the core curves of $A$ and $A'$ are isotopic. 
But since $e_2$ and $e_3$ bound an $S2$ cycle in the solid torus on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as $F$, \fullref{GT:L2.1} implies that the cores of $A$ and $A'$ run $3$ and $2$ times respectively in the longitudinal direction of this solid torus. Because these cores are isotopic on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, this is a contradiction. If $F$ is bounded by a forked extended $S2$ cycle of $G_S^2$, then by \fullref{forkededges} the edge with label pair distinct from the other two has label pair $\{2, 1\}$ or label pair $\{2, 3\}$. But since the two of its edges that bound $B$ have the label pair $\{2,3\}$, the third edge must have label pair $\{2, 1\}$. Therefore there must be an (extended) $S2$ cycle with label pair $\{1, 4\}$ on $F$. By \fullref{lieinanannulus} the two edges of this $S2$ cycle must lie in an essential annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Yet since the vertices $U_1$ and $U_4$ are both contained in the interior of the bigon $B$, this cannot occur. \end{proof} \begin{prop}\label{prop:threetrigontypes} A trigon of $\smash{G_S^x}$ is bounded by either an $S3$ cycle, an extended $S3$ cycle, or a forked extended $S2$ cycle. \end{prop} \begin{proof} Let $f$ be a trigon of $\smash{G_S^x}$. By \fullref{innermosttrigons}, $f$ is innermost if and only if it is bounded by either an $S3$ cycle or a forked extended $S2$ cycle. If $f$ is not innermost, then by \fullref{lieinannulusorboundbigon} either it is bounded by an extended $S3$ cycle or $f$ contains a forked extended $S2$ cycle whose edge with label pair distinct from the other two is not an edge of $f$. Let us assume $f$ is as in this last situation since it is the only one we must rule out. Let $g$ be the forked extended $S2$ cycle in $f$ and assume it is labeled as in \fullref{fig:forkedbigon}(a). By \fullref{lem:emptybigon} the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by edges $e_1$ and $e_5$ of $g$ has interior disjoint from $K$. Since edge $e_4$ is not an edge on the boundary of $f$, within $f$ there must be a bigon of $G_S$ with $e_4$ as an edge. This contradicts \fullref{thinningforkplusbigon}. \end{proof} \begin{lemma}\label{lem:bigonsandtrigons} Each graph $\smash{G_S^x}$ contains at least one of the following: an $S2$ cycle, an extended $S2$ cycle, an $S3$ cycle, an extended $S3$ cycle, a forked extended $S2$ cycle. \end{lemma} \begin{proof} Since \fullref{musthavebigons} implies that each graph $\smash{G_S^x}$ must contain a bigon or a trigon, this lemma follows from \fullref{bigonS2} and \fullref{prop:threetrigontypes}. \end{proof} \begin{lemma}\label{circlesofintersection} $S \cap T$ contains no simple closed curves that bound disks in $X$. \end{lemma} \begin{proof} A simple closed curve of $S \cap T$ is either essential on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ or bounds a disk on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Assume there is a simple closed curve $\gamma \in S \cap T$ that is an essential curve on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and bounds a disk in $X$. Via \fullref{lem:bigonsandtrigons} $G_S$ must have an $S2$ or $S3$ cycle $\sigma$. By \fullref{GT:L2.1}, the edges of $\sigma$ lie in an essential annulus $A$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Since $\gamma$ must be disjoint from $A$, it must be parallel to the core of $A$. 
Thus the core of $A$ bounds a disk in $X$. This contradicts \fullref{GT:L2.1}. Assume there is a simple closed curve $\gamma \in S \cap T$ that bounds a disk $D \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Assume there is a vertex, say $U_x$, of $G_T$ in $D$. By \fullref{lem:bigonsandtrigons}, $\smash{G_S^x}$ contains either an $S2$ cycle, an extended $S2$ cycle, an $S3$ cycle, an extended $S3$ cycle, or a forked extended $S2$ cycle. By \fullref{GT:L2.1} and \fullref{lieinanannulus} only the edges of a forked extended $S2$ cycle may not lie in an essential annulus. Hence $\smash{G_S^x}$ must contain a forked extended $S2$ cycle $\sigma$ that bounds a face of $\smash{G_S^x}$ with edges as in \fullref{fig:forkedbigon}. Since an edge of $\sigma$ must be incident to a vertex that has an (extended) $S2$ cycle also incident to it, $\sigma$ cannot lie in $D$ by \fullref{lieinanannulus}. Thus there are no vertices of $G_T$ in $D$. Hence $D \cap K = \emptyset$. Since lens spaces are irreducible, $\gamma$ must also bound a disk $D' \subseteq S$ that is isotopic rel--$\ensuremath{\partial}$ in $X - N(K)$ to $D$. This contradicts the minimality assumption on $|S \cap T|$. \end{proof} \subsection[Similar forked extended S2 cycles]{Similar forked extended $S2$ cycles} With \fullref{lem:bigonsandtrigons} in hand, we may refine our understanding of \fullref{lem:shortpairsofforks}. In particular, as we shall soon see, the hypotheses of \fullref{lem:shortpairsofforks} hold true only if $t=4$. The following proposition will be relevant in \fullref{sec:twoschcycles} for the proof of \fullref{labelaccount}. \begin{prop} \label{prop:nosaladforks} Assume $\sigma_i \subseteq G_S^i$ and $\sigma_j \subseteq G_S^j$ are two forked extended $S2$ cycles such that for each (extended) $S2$ cycle contained in the face bounded by one of $\sigma_i$ or $\sigma_j$ there is an (extended) $S2$ cycle with the same label pair contained in the face bounded by the other forked extended $S2$ cycle. If $t \geq 6$ then $i=j$. \end{prop} \begin{proof} Let us return to the notation set up in \fullref{lem:shortpairsofforks}. Specifically, note that for $f$, $g$, and $\Gamma$ we set $a = 3$. One may care to refer to \fullref{fig:twoshortforkedextendedS2} (disregarding the bigon $B$) instead of \fullref{fig:oneconfiguration} for a depiction of the arrangement of their edges. \fullref{lem:bigonsandtrigons} implies that for each $x \in \mathbf{t}$ $\smash{G_S^x}$ contains a face $F$ bounded by an (extended) $S2$ cycle, an (extended) $S3$ cycle, or a forked extended $S2$ cycle. In any of these cases, there exists an $S2$ cycle or an $S3$ cycle $\sigma$ of $G_S$ on $F$. By \fullref{GT:L2.1} the edges of an $S2$ or $S3$ cycle lie in an essential annulus. Since the subgraph $\Gamma$ does not lie in an essential annulus (see the proof of \fullref{lem:shortpairsofforks}), the edges of $\sigma$ must be incident to at least one vertex in $\Gamma$. Therefore $\sigma$ must have label pair $\{t, 1\}, \{1, 2\}, \{2, 3\}, \{3, 4\}$, or $\{4, 5\}$. By \fullref{oppositesides} the faces of $G_S$ bounded by Scharlemann cycles of order $2$ or $3$ with disjoint label pairs must lie on opposite sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Since each $f$ and $g$ contain an $S2$ cycle with label pair $\{2, 3\}$, $\sigma$ may only have label pair $\{1, 2\}$, $\{2, 3\}$, or $\{3, 4\}$. {\em Claim:\/} $\sigma$ cannot have label pair $\{1, 2\}$ or $\{3, 4\}$. Assume $\sigma$ has label pair $\{1, 2\}$. 
Note that $e_4$ cannot belong to $\sigma$. Let $c$ be a corner of the face of $G_S$ bounded by $\sigma$. Let $E_1$ and $E_2$ be the edges incident to $c$ at the vertices $U_1$ and $U_2$ respectively. The edge $E_1$ is incident to $U_1$ either in the arc of $\ensuremath{\partial} U_1$ between $e_4$ and $e_5'$ or in the arc between $e_4$ and $e_5$ in which no other edge of $\Gamma$ is incident. In the former case, the edge $E_2$ is incident to $U_2$ in the arc of $\ensuremath{\partial} U_2$ between $e_3$ and $\smash{e_3'}$ in which no other edge of $\Gamma$ is incident. Thus the other end of $E_2$ is incident to $U_3$. This contradicts that the label pair of $E_2$ is $\{1, 2\}$. In the latter case, the edge $E_2$ is incident to $U_2$ in the arc of $\ensuremath{\partial} U_2$ between $e_3$ and $e_4$ in which no other edge of $\Gamma$ is incident. Thus $E_1$ and $E_2$ lie in the same disk of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash \Gamma$. Therefore every edge of $\sigma$ lies in a disk. This contradicts \fullref{GT:L2.1}. Hence $\sigma$ cannot have label pair $\{1, 2\}$. Due to the symmetry of $f$, $g$, and $\Gamma$, the same argument prohibits $\sigma$ from having label pair $\{3, 4\}$. This proves the claim. Due to the above claim, the $S2$ or $S3$ cycle of $G_S$ in any bigon or trigon of $\smash{G_S^x}$ for any $x \in \mathbf{t}$ must have label pair $\{2, 3\}$. If for $5 \leq x \leq t$ the graph $\smash{G_S^x}$ has an extended $S2$ cycle or extended $S3$ cycle $\sigma'$, then the label pair of $\sigma'$ cannot include any of the labels $1$, $2$, $3$, or $4$. Thus $\sigma'$ must lie in a disk in the complement of $\Gamma$. This contradicts \fullref{lieinanannulus}. Thus $G_S^5$ and $G_S^t$ both have forked extended $S2$ cycles which bound trigons containing extended $S2$ cycles with label pair $\{1, 4\}$. This contradicts \fullref{lem:shortpairsofforks}. \end{proof} \section{Annuli and trees}\label{sec:annuliandtrees} \subsection{Construction of annuli, related complexes, and trees} Assume the labeling of the vertices of $G_T$ has been chosen so that $K_{(t, 1)}$ is contained in $X^-$. Recall that by \fullref{circlesofintersection} the interior of each face of $G_S$ is disjoint from $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Assume $1 < k \leq t/2$ and $\smash{G_S^k}$ contains an extended $Sp$ cycle $\sigma$ for $p=2$ or $3$ whose innermost Scharlemann cycle has label pair $\{1, t\}$. The edges of $\sigma$ thus have label pair $\{k, t-k+1\}$. Let $B$ denote the face of $\smash{G_S^k}$ bounded by $\sigma$. The edges of $G_S$ divide $B$ into $p(k-1)$ bigons plus the $p$--gon bounded by the Scharlemann cycle. Let $B_1$ be the $p$--gon bounded by the Scharlemann cycle. If $p=2$, for $i \geq 2$ let $B_i'$ and $B_i''$ be the bigons bounded by the edges with label pairs $\{i-1, t-i+2\}$ and $\{i, t-i+1\}$. If $p=3$, for $i \geq 2$ let $B_i'$, $B_i''$, and $B_i'''$ be the bigons bounded by the edges with label pairs $\{i-1, t-i+2\}$ and $\{i, t-i+1\}$ chosen so that \begin{itemize} \item $\ensuremath{\partial} B_i' \cap \ensuremath{\partial} B_{i+1}' \neq \emptyset$, $\ensuremath{\partial} B_i'' \cap \ensuremath{\partial} B_{i+1}'' \neq \emptyset$, $\ensuremath{\partial} B_i''' \cap \ensuremath{\partial} B_{i+1}''' \neq \emptyset$, and \item the edges $\ensuremath{\partial} B_i' \cap \ensuremath{\partial} B_{i+1}'$ and $\ensuremath{\partial} B_i'' \cap \ensuremath{\partial} B_{i+1}''$ do not lie in a disk; see \fullref{lieinanannulus}. 
\end{itemize} For each $2 \leq i \leq k$, form the annulus $A_i$ from $B_i' \cup B_i'' \cup H_{(i-1,\,i)} \cup H_{(t-i+1,\, t-i+2)}$ by shrinking each $H_{(i-1,\,i)}$ and $H_{(t-i+1,\, t-i+2)}$ radially to their cores $K_{(i-1,\,i)}$ and $K_{(t-i+1,\, t-i+2)}$. The annulus $A_i$ is contained in $X^+$ if and only if $i$ is even. Let $a_i$ be the curve of $\ensuremath{\partial} A_i$ that contains the point $K_i$, and set $a_1$ to be the curve of $\ensuremath{\partial} A_2$ that contains $K_1$. \begin{figure} \caption{(a) The solid torus $X^-$ with $H_{(t, 1)}$\qua (b) How $B_1$ sits in $X^-$} \label{fig:S2cycle} \end{figure} \begin{figure} \caption{(a) The trigon $B_1$ for $p=3$ \qua (b) The complex $A_1$ for $p=3$} \label{fig:S3cycle} \end{figure} \label{$A_1$} If $p=2$, let $A_1$ be the M\"obius band formed from $B_1 \cup H_{(t, 1)}$ by shrinking $H_{(t, 1)}$ to its core radially. \fullref{fig:S2cycle} shows (a) the solid torus $X^-$ with $H_{(t, 1)}$ and (b) how $B_1$ sits in $X^-$. Such a construction of a M\"obius band is done in the proof of Lemma~2.5 of \cite{gt:dsokwylsagok}. If $p=3$, let $A_1$ be the complex formed from $B_1 \cup H_{(t, 1)}$ by first isotoping the edge $B_1 \cap B_2'''$ across the disk component of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (B_1 \cup H_{(t, 1)})$ (which has interior disjoint from $K$ by \fullref{lem:emptybigon}) keeping the corners of $B_1$ on $H_{(t, 1)}$. See \fullref{fig:S3cycle}(a) for an example of the placement of $B_1$ in $X^-$; see also \fullref{fig:S2cycle}(a). Next identify a small collar neighborhood in $B_1$ of the two edges that have been isotoped together. Then after shrinking $H_{(t, 1)}$ to its core radially, the resulting complex $A_1$ intersects $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as the curve $a_1$. See \fullref{fig:S3cycle}(b). Note that after chopping along a suitable meridional disk $D$, $(X^- \backslash D, \bar{N}(A_1) \backslash D)$ is homeomorphic to $(I \times D^2, I \times \bar{N}(Y))$ where $Y$ is the complex in the standard disk $D^2$ formed by three radii; see \fullref{GT:L2.1}. For both of these cases, $\ensuremath{\partial} \bar{N}(A_1) \cap X^-$ is an annulus that double covers $A_1$ (except along $K_{(t, 1)}$ if $p=3$, where it is triple covered by the annulus). Define the annulus $A = \bigcup_{i=2}^k A_i$. By \fullref{lieinanannulus} $\bigcup_{i=2}^{k} \ensuremath{\partial} A_i = \{a_1, \dots, a_k\}$ is a collection of $k$ essential simple closed curves on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ that are mutually disjoint and parallel. Furthermore, by \fullref{GT:L2.1} these curves are not meridional in either $X^+$ or $X^-$. These curves then divide $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ into $k$ annuli $T_j$ so that $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} - A = \bigcup_{j=1}^k \Int T_j$. Recall that the union of two $3$--manifolds with torus boundary, glued together along an annulus that is incompressible in each, is a solid torus if and only if each of the two manifolds is itself a solid torus and one of their meridians crosses the annulus exactly once. Each annulus $A_i$ (for $i \neq 1$) thus separates $X^\pm$ (where $\pm = +$ or $-$ depending on the parity of $i$) into two solid tori where at least one of the meridians of the solid tori crosses the core curve of $A_i$ just once. This implies that $A_i$ is isotopic in $X^\pm$ rel--$\ensuremath{\partial}$ onto $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. 
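To fix ideas, here is one small instance of the construction just given; the specific values $t = 10$, $k = 3$, $p = 2$ are chosen only for illustration and play no role elsewhere. In this instance $\sigma$ is an extended $S2$ cycle of $G_S^3$ whose edges have label pair $\{3, 8\}$ and whose innermost $S2$ cycle has label pair $\{1, 10\}$. The face $B$ is divided into the bigon $B_1$ together with the $p(k-1) = 4$ bigons $B_2'$, $B_2''$ (bounded by edges with label pairs $\{1, 10\}$ and $\{2, 9\}$) and $B_3'$, $B_3''$ (bounded by edges with label pairs $\{2, 9\}$ and $\{3, 8\}$), and, with each $H$ shrunk radially to its core,
\[
A_2 = B_2' \cup B_2'' \cup H_{(1,2)} \cup H_{(9,10)} \subseteq X^+, \qquad
A_3 = B_3' \cup B_3'' \cup H_{(2,3)} \cup H_{(8,9)} \subseteq X^-.
\]
The boundary curves $a_1$, $a_2$, $a_3$ then cut $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ into $k = 3$ annuli $T_j$, and $A_1$ is the M\"obius band obtained from $B_1 \cup H_{(10,1)}$.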
Let $V_i$ be the solid torus of $X^\pm \backslash A_i$ through which this isotopy occurs. In the event that both solid tori of $X^\pm \backslash A_i$ would work, choose the $V_i$ so that if $\Int V_i \cap \Int V_j$ is nonempty, then it is either all of $\Int V_i$ or all of $\Int V_j$. Note that none of the $V_i$ contain $A_1$. \begin{figure} \caption{(a) A schematic of the annulus $A$\qua (b) The corresponding graph $\ensuremath{\mathcal{T} \label{fig:crosssection} \end{figure} Consider the collection of solid tori $\mathcal{X} = X \backslash A \backslash \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ formed by chopping $X$ along both $A$ and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. The boundaries of all but at most two $X_l \in \mathcal{X}$ are alternately comprised of the annuli $A_i$ and the annuli $T_j$. Let $X_1 \subseteq X^-$ be the solid torus of $\mathcal{X}$ that contains $A_1$. The boundary of $X_1$ intersects the curve $a_1$ but not $\Int A_2$. Let $X_* \in \mathcal{X}$ be the solid torus whose boundary intersects $a_k$ but not $\Int A_k$. Since $k>1$, each of $\ensuremath{\partial} X_*$ and $\ensuremath{\partial} X_1$ contains more than one of the annuli $T_j$. The meridian of each $X_l \in \mathcal{X}$ intersects the core curve of an annulus $T_j$ on its boundary exactly once except for at most two, one of which is necessarily $X_1 \subseteq X^-$ and the other we shall denote $X_0 \subseteq X^+$. Note that $X_0 \cap \Int V_i = \emptyset$. It may be the case that $X_* = X_0$ or $X_* = X_1$. See \fullref{fig:crosssection}(a) for a schematic example of the annulus $A$, the complex $A_1$, $X_1$, and $X_*$ for both $p = 2$ and $3$. We will continue draw schematics of the annulus $A$ et al.\ in the quotiented manner indicated in \fullref{fig:crosssection}(a). Form a graph $\ensuremath{\mathcal{T}}$ with vertices $x_l$ corresponding to solid tori $X_l \in \mathcal{X}$ and edges $t_j$ corresponding to annuli $T_j$ so that an edge $t_j$ connects vertices $x_l$ and $x_m$ if the annulus $T_j$ is contained in the boundary of each $X_l$ and $X_m$. \fullref{fig:crosssection}(b) shows the graph $\ensuremath{\mathcal{T}}$ corresponding to the annulus $A$ in \fullref{fig:crosssection}(a). The vertices $x_l$ may be marked as $+$ or $-$ according to whether the corresponding $X_l$ is contained in $X^+$ or $X^-$. Adjacent vertices must have opposite signs. \begin{lemma} The graph $\ensuremath{\mathcal{T}}$ is a tree. \end{lemma} \begin{proof} If $\ensuremath{\mathcal{T}}$ is not a tree, then there is a cycle of distinct edges $t_{j_1}, t_{j_2}, \dots, t_{j_w}$ such that $\ensuremath{\partial} t_{j_{i}} \cap \ensuremath{\partial} t_{j_{i+1}} = x_{l_i}$ (unless $s = w$ in which case $\ensuremath{\partial} t_{j_1} = \ensuremath{\partial} t_{j_2}$) and $j_i = j_{i'}$ only if $i = i'$. Let $R_{l_i}$ be an annulus properly contained in the solid torus $X_{l_i}$ corresponding to the vertex $x_{l_i}$ whose boundary components are the core curves of the annuli $T_{j_i}$ and $T_{j_{i+1}}$. Since the annuli are connected in a cycle, $R = \bigcup_{i=1}^w R_{l_i}$ is a torus that is disjoint from $A$. Since $R$ is contained in the lens space $X$, it must be separating. 
On the boundary of $X_{l_i}$ the annuli $T_{j_i}$ and $T_{j_{i+1}}$ are necessarily separated by at least two of the curves $\{a_1, \dots, a_k\}$ since $\ensuremath{\partial} T_{j_i}$ and $\ensuremath{\partial} T_{j_{i+1}}$ are each a pair of curves in $\{a_1, \dots, a_k\}$ and $\Int T_{j_i} \cap \Int T_{j_{i+1}} = \emptyset$. Specifically, $X_{l_i} - R_{l_i}$ has two components each of which has nontrivial intersection with $A$. This, however, contradicts the fact that $A$ is connected and $R$ is separating. \end{proof} Since $\ensuremath{\mathcal{T}}$ is a tree and there are $k$ annuli $T_j$, $\ensuremath{\mathcal{T}}$ has $k+1$ vertices. Hence there are $k+1$ solid tori $X_l$. Furthermore, since $T$ divides $K$ into $t$ arcs (the arcs $K_{(1, 2)}, K_{(2, 3)}, \dots, K_{(t, 1)}$) of which $2(k-1)$ (the arcs $K_{(t-k+1,\, t-k+2)}, \dots, K_{(t-1,\,t)}$ and $K_{(1, 2)}, \dots, K_{(k-1,\,k)}$) are contained in $A$, the remaining $t-2(k-1)$ arcs (the arcs $K_{(k,\,k+1)}, \dots, K_{(t-k,\, t-k+1)}$ and $K_{(t,1)}$) are disjoint from $A$ (except at the four points $K_1$, $K_t$, $K_k$, and $K_{t-k+1}$). Aside from $K_{(t,1)}$, these remaining arcs together form the arc $K_{(k,\,t-k+1)}$, which runs from $\ensuremath{\partial} A$ inside $X_*$ through the interiors of the solid tori $X_l$, crossing between them by passing through the annuli $T_j$, and eventually returns to $\ensuremath{\partial} A$ in $X_*$. Since $\ensuremath{\mathcal{T}}$ is a tree, $K_{(k,\,t-k+1)}$ must eventually return through the same $T_j$ through which it entered an $X_l$. Therefore $|K_{(k,\,t-k+1)} \cap \Int T_j|$ is even for every $j$. Consider the subtree $\ensuremath{\mathcal{T}}_* \subseteq \ensuremath{\mathcal{T}}$ consisting of the vertices and edges corresponding to the $X_l$ and $T_j$ with which $K_{(k,\,t-k+1)}$ has nonempty intersection. Since $K_{(k,\,t-k+1)}$ intersects the union $\smash{\bigcup_{j=1}^k} \Int T_j$ a total of $(t-k+1) - k - 1 = t-2k$ times (its interior meets $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ precisely in the points $K_{k+1}, \dots, K_{t-k}$) and $K_{(k,\,t-k+1)}$ intersects each $\Int T_j$ an even number of times, $\ensuremath{\mathcal{T}}_*$ has at most $t/2 - k$ edges. Root both trees $\ensuremath{\mathcal{T}}$ and $\ensuremath{\mathcal{T}}_*$ at the vertex $x_*$. \begin{remark} \label{evenintersections} Indeed, for each edge $t_j$ of $\ensuremath{\mathcal{T}}_*$ there is a positive even number of intersections of the interior of $K_{(k,\,t-k+1)}$ with the corresponding annulus $T_j \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$. Notice that if $t/2 - k < k$ (ie\ $t/4 < k$) then there exists at least one $X_l$ (and hence some $T_j$) which $K_{(k,\,t-k+1)}$ does not intersect. To rephrase, if $K_{(k,\,t-k+1)}$ intersects every $X_l \in \mathcal{X}$, then $t/4 \geq k$ and the face $B$ contains (extended) $S2$ or $S3$ cycles for the $2k (\leq t/2)$ graphs $G_S^i$ with $t-k+1 \leq i \leq t$ or $1 \leq i \leq k$. \end{remark} For either $\ensuremath{\mathcal{T}}$ or $\ensuremath{\mathcal{T}}_*$ we say a vertex other than $x_*$ of valency $1$ is a {\em leaf\/}, and a (nonleaf) vertex other than $x_*$, all of whose adjacent vertices except at most one are leaves, is a {\em penultimate leaf\/}. \subsection{Initial constraints on the trees due to thinness} \begin{lemma} \label{leafisotopy} Let $x_m$ be a leaf of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$ which is not $x_0$ and corresponds to the solid torus $X_m$. If $X_m \subseteq X^+$ (resp.\ $X^-$), then there is a high disk (resp.\ low disk) in $X_m$ for each of the two arcs of $K \cap \ensuremath{\partial} X_m$. 
\end{lemma} \begin{proof} We first remark that $x_1$ cannot be a leaf unless $k=1$; recall that we are assuming $k > 1$. Since $x_m \not \in \ensuremath{\mathcal{T}}_*$, we have $x_m \neq x_*$. Therefore the boundary of $X_m$ is formed by two annuli, say $T_m$ and $A_m$. Since $X_m \neq X_0$, the meridian of $X_m$ crosses the core curves of $T_m$ and $A_m$ each once. In particular, there are two disjoint meridional disks each with boundary consisting of one of the transverse arcs $K_{(m-1,\,m)}$ and $K_{(t-m+1,\, t-m+2)}$ on $A_m$ and an arc on $T_m$. Since $K \cap \Int X_m = \emptyset$, these are high (or low) disks. \end{proof} \begin{lemma}\label{leafcancel} Any two leaves of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$ neither of which is $x_0$ must have the same sign. \end{lemma} \begin{proof} Assume otherwise. Let $x_p$ and $x_n$ be two leaves of $\ensuremath{\mathcal{T}}$ of signs $+$ and $-$ respectively that are not in $\ensuremath{\mathcal{T}}_*$. Assume neither is $x_0$. The two vertices cannot be joined by an edge since $\ensuremath{\mathcal{T}}$ is a tree containing the vertex $x_0$. Therefore the edges $t_p$ and $t_n$ incident to $x_p$ and $x_n$ respectively are distinct. Let $X_p, X_n, T_p,$ and $T_n$ be the corresponding solid tori and annuli. Hence the annuli $T_n \subseteq X_n$ and $T_p \subseteq X_p$ are distinct. By \fullref{leafisotopy}, there is a high disk $D_p$ such that $D_p \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \subseteq T_p$ and a low disk $D_n$ such that $D_n \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \subseteq T_n$. For example, see \fullref{fig:oppleaves}. Since $K$ is in thin position, this contradicts \fullref{highdisklowdisk}. \end{proof} \begin{figure} \caption{(a) Parts of $\ensuremath{\mathcal{T}}$} \label{fig:oppleaves} \end{figure} \begin{lemma} \label{penultimate} If $\ensuremath{\mathcal{T}}$ has a penultimate leaf not in $\ensuremath{\mathcal{T}}_*$, then the vertex $x_0$ must be among the penultimate leaf and the leaves adjacent to it. \end{lemma} \begin{proof} Let $x_p$ be a penultimate leaf of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$ and let $x_{l_1}, \dots, x_{l_{n}}$ be the leaves adjacent to it. Assume $x_0$ is none of these. Also recall that $x_1$ cannot be a leaf unless $k=1$. Let $T_{l_i}$, for $i = 1, \dots, n$, be the annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ corresponding to the edge of $\ensuremath{\mathcal{T}}$ which connects $x_{l_i}$ to $x_p$. Let $T_p$ be the remaining annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap X_p$. Note that $T_p$ corresponds to the edge $t_p$, which separates $\{x_p, x_{l_1}, \dots, x_{l_n} \}$ from the rest of the vertices of $\ensuremath{\mathcal{T}}$, and from $\ensuremath{\mathcal{T}}_*$ in particular. Since meridional disks of the $X_{l_i}$ may be isotoped to intersect the cores of the $T_{l_i}$ once, the manifold $X_p'$ obtained by joining each $X_{l_i}$ to $X_p$ along $T_{l_i}$ is itself a solid torus. {\bf Case 1}\qua Assume $x_p \neq x_1$. In this case $\ensuremath{\partial} X_p'$ is composed of the annulus $T_p$ and a subannulus $A'= \bigcup_{i=m}^{m+2n} A_i$ of $A$ for some $m \geq 2$. This annulus contains the arc $K_{(m-1,\, m+2n)}$ and the arc $K_{(t-m-2n+1,\, t-m+2)}$. See \fullref{fig:penultimatepnot1}. 
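To make the index bookkeeping in Case 1 concrete, consider one small instance; the values $t = 12$, $m = 2$, and $n = 1$ below are chosen only for illustration. In that instance
\[
A' = A_2 \cup A_3 \cup A_4 \qquad \text{and} \qquad K \cap A' = K_{(1,4)} \cup K_{(9,12)},
\]
which are exactly the arcs $K_{(m-1,\, m+2n)}$ and $K_{(t-m-2n+1,\, t-m+2)}$ above.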
\begin{figure} \caption{(a) Part of $\ensuremath{\mathcal{T}}$} \label{fig:penultimatepnot1} \end{figure} Since $X_p$ is neither $X_0$ nor $X_1$, a meridional disk of $X_p$ may be isotoped to intersect the core of each annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap X_p$ exactly once. Thus a meridional disk of $X_p'$ may be isotoped to intersect the core of $T_p$ once and intersect $A'$ in the arc $K_{(m-1,\, m+2n)}$ or $K_{(t-m-2n+1,\, t-m+2)}$. This meridional disk is a long disk. Its existence contradicts \fullref{longdisk}. {\bf Case 2}\qua Assume $x_p = x_1$. A meridional disk of $X_p$ may be isotoped to intersect the core of each annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap X_p$ two or three times depending on the order of the Scharlemann cycle at hand, so the method of Case 1 does not apply. See \fullref{fig:penultimatepis1}(a) and (b). Nevertheless, in this case the boundary $\ensuremath{\partial} X_p'$ is composed of the annulus $T_p$ and a subannulus $A'= \smash{\bigcup_{i=2}^{2n+1}} A_i \subseteq A$. This annulus contains the arcs $K_{(1, \,2n+1)}$ and $K_{(t-2n,\,t)}$. Because $x_1 = x_p \in \ensuremath{\mathcal{T}} \backslash \ensuremath{\mathcal{T}}_*$, the only arc of $K$ in the interior of $X_p'$ is $K_{(t,1)}$. Recall that $K_{(t,1)} \subseteq A_1$. Let $D'$ be a meridional disk of $X_p'$. See \fullref{fig:penultimatepis1}(b) and (c) for illustrations. Isotop $K_{(t-2n,\, 2n+1)}$ within $A' \cup A_1$ to lie on $D'$. This puts $K_{(t-2n,\,t)} \cup K_{(1, \,2n+1)} \subseteq \ensuremath{\partial} D'$ and $K_{(t, 1)}$ as a properly embedded arc on $D'$. Let $D$ be one of the two components of $D' \backslash K_{(t, 1)}$ so that $\ensuremath{\partial} D$ is the union of one arc on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and an arc of $K$. See \fullref{fig:penultimatepis1}(d). The disk $D$ is a long disk. Since $K$ is in thin position, its existence contradicts \fullref{longdisk}. \begin{figure} \caption{(a) Part of $\ensuremath{\mathcal{T}}$} \label{fig:penultimatepis1} \end{figure} \end{proof} \subsection{The unfurling isotopy and further constraints on the trees} Let $R$ be a torus in a $3$--manifold $Y$ that bounds a solid torus $V$. Because $R$ bounds the solid torus $V$, any Dehn twist along $R$ is isotopic to the identity. Assume $R$ is divided into two annuli $T_R$ and $A_R$ by the two parallel curves $a_{\rm out}$ and $a_{\rm in}$. Let $K$ be a knot in $Y$ so that $K \cap R$ is a collection of spanning arcs of $A_R$ with $K \cap N(a_{\rm out}) \subseteq V$ and $K \cap N(a_{\rm in}) \subseteq Y - \Int V$. \begin{Lemma} \label{main-unfurl} There is an ambient isotopy $\Theta_u \co Y \times [0,2\pi] \to Y$ so that \begin{enumerate} \item $\Theta_0$ is the identity, \item $\Theta_{2\pi} (K-R) = \Theta_0 (K-R)$, and \item $\Theta_{2\pi}(K \cap R)$ is a collection of transverse arcs of $T_R$. \end{enumerate} \end{Lemma} We refer to an ambient isotopy of $K$ via such a Dehn twist as an {\em unfurling\/}. This unfurling isotopy is shown schematically in \fullref{fig:unfurlingschematic}. Note that $a_{\rm out}$ and $a_{\rm in}$ need not be longitudinal curves on $R =\ensuremath{\partial} V$. 
\begin{figure} \caption{(a) A (not necessarily meridional) ``cross section'' of the solid torus $V$ \qua (b) The cross section shown with $K$ at $u=0$\qua (c) The isotopy at $u=\pi/2$\qua (d) The isotopy at $u=\pi$\qua (e) The isotopy at $u=3\pi/2$ \qua (f) The finished isotopy at $u=2\pi$} \label{fig:unfurlingschematic} \end{figure} \begin{proof} Let $\gamma_A$ be an arc of $A_R \cap K$, $\gamma_T$ be an arc of $T_R$ connecting the endpoints of $\gamma_A$, and $\gamma=\gamma_A \cup \gamma_T$ be the simple closed curve oriented so that $\gamma_A$ runs from $a_{\rm out}$ to $a_{\rm in}$. Take $\Theta_u$ to be the Dehn twist along $R$ in the direction of $\gamma$. We may view \fullref{fig:unfurlingschematic} as a ``cross section'' of $V$ along $\gamma$. The conclusions of the lemma follow immediately from the definition of the Dehn twist. \end{proof} Let $T_R$ be an annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$. Then we have $\ensuremath{\partial} T_R = a_m \cup a_n$ where $m<n$. Let $A_R = \smash{\bigcup_{i=m+1}^n A_{i}}$. \begin{lemma} \label{dividingtorus} The torus $R = A_R \cup T_R$ bounds a solid torus in $X$ that does not contain $X_0$. \end{lemma} \begin{proof} Since $R$ is a torus in a lens space, it separates $X$ into two pieces, say $V$ and $W$. Deleting the edge $t_R$ corresponding to the annulus $T_R$ from $\ensuremath{\mathcal{T}}$ yields two trees $\ensuremath{\mathcal{T}}_V$ and $\ensuremath{\mathcal{T}}_W$, each consisting of vertices $x_i$ corresponding to solid tori $X_i \in \mathcal{X}$ in $V$ and $W$ respectively. Since only one of $V$ and $W$ may contain $X_0$, only one of $\ensuremath{\mathcal{T}}_V$ and $\ensuremath{\mathcal{T}}_W$ may contain $x_0$. Without loss of generality, assume the tree $\ensuremath{\mathcal{T}}_V$ does not contain the vertex $x_0$. Though $\ensuremath{\mathcal{T}}_V$ may contain $x_1$, all other vertices of $\ensuremath{\mathcal{T}}_V$ then correspond to solid tori $X_i$, distinct from $X_0$ and $X_1$, whose meridians traverse each annulus of $X_i \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ (and each annulus of $X_i \cap A$) exactly once. Therefore the union $X_V$ of the $X_i$ corresponding to $x_i \in \ensuremath{\mathcal{T}}_V$ along their common annuli $X_i \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is a solid torus. Note that $\ensuremath{\partial} X_V = R$, and the solid torus $X_V$ is indeed $V$. \end{proof} \begin{lemma} \label{unfurling} $x_1 \in \ensuremath{\mathcal{T}}_*$ \end{lemma} \begin{proof} Assume $x_1 \not \in \ensuremath{\mathcal{T}}_*$. We may assume $k \geq 3$ since if $k \leq 2$ then either $x_1 \in \ensuremath{\mathcal{T}}_*$ or $t=2$. Put a transverse direction on $A$ respecting the ordering of the $A_i$. For odd (resp.\ even) $i$, the transverse direction at $a_i$ points above (resp.\ below) $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Consider the collection $\ensuremath{\mathcal{A}}$ of curves $a_i$ on the boundary of $X_1$. If $k$ is even, then $a_k \not \in \ensuremath{\mathcal{A}}$ since otherwise $x_1 \in \ensuremath{\mathcal{T}}_*$. For each $a_i \in \ensuremath{\mathcal{A}}$ with $i$ even, the annulus $A_{i+1}$ must be contained in $\ensuremath{\partial} X_1$. Since $a_1 \in \ensuremath{\mathcal{A}}$, there are more odd indexed curves than even indexed curves in $\ensuremath{\mathcal{A}}$. 
Hence there are two curves $a_m$ and $a_n$ in $\ensuremath{\mathcal{A}}$ cobounding an annulus $T_R \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$ in $\ensuremath{\partial} X_1$ with $m<n$ and both $m$ and $n$ odd. Consider the annulus $A_R = \bigcup_{i=m+1}^n A_i$ and the torus $R = A_R \cup T_R$. See \fullref{fig:unfurl1}(a). By \fullref{dividingtorus}, $R$ bounds a solid torus, say $V$, that does not contain $X_0$. The assumption that $x_1 \not \in \ensuremath{\mathcal{T}}_*$ is equivalent to the statement that the solid torus $X_1$ is disjoint from the arc $K_{(k,\,t-k+1)}$ of $K$. Hence $T_R$ and moreover $R$ are disjoint from this arc. Therefore $K$ only intersects $R$ as two transverse arcs of $A_R$. At one of the curves $a_m$ and $a_n$, $K$ continues into $V$; at the other curve, $K$ continues away from $V$. We may apply \fullref{main-unfurl} to obtain an isotopy of $K$. See \fullref{fig:unfurl1}(b). \begin{figure} \caption{(a) The construction of the unfurling torus\qua (b) After the unfurling isotopy} \label{fig:unfurl1} \end{figure} After this unfurling isotopy, the arcs that were once arcs of $K \cap A_R$ are now arcs of $K \cap T_R$ and may be nudged to be transverse to the height function. The new set of critical levels is a strict subset of the former set of critical levels. Furthermore, the intersection number of $K$ with any of the remaining critical levels has not increased. Therefore the unfurling isotopy has decreased the width of $K$, contradicting the thinness of $K$. \end{proof} \begin{Lemma} \label{unfurling2} The vertices of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$ are leaves of $\ensuremath{\mathcal{T}}$ of the same sign. \end{Lemma} \begin{figure} \caption{(a) The annuli $A_{1,\,m}$ and $A_{m,\,n}$} \label{fig:unfurl2} \end{figure} \begin{proof} By \fullref{unfurling}, $x_1 \in \ensuremath{\mathcal{T}}_*$. If $x_0 \in \ensuremath{\mathcal{T}}_*$, then \fullref{leafcancel} and \fullref{penultimate} imply the conclusion. Thus we may assume $x_0 \not \in \ensuremath{\mathcal{T}}_*$. Let $t_0$ be the edge of $\ensuremath{\mathcal{T}}$ incident to $x_0$ that separates $x_0$ from $x_*$, let $T_0$ be the annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$ corresponding to $t_0$, and write $\ensuremath{\partial} T_0 = a_m \cup a_n$ with $m < n$. Form the two annuli $A_{1,\,m} = \smash{\bigcup_{i=2}^m} A_i$ and $A_{m,\,n} = \smash{\bigcup_{i=m+1}^n} A_i$. See \fullref{fig:unfurl2}(a). Let $\smash{A_{1,\,m}'}$ be a slight pushoff of $A_{1,\,m}$ with boundaries $\smash{a_1'}$ and $\smash{a_m'}$ so that $\smash{a_m'} \subseteq T_0$. Also let $\smash{A_1'}$ be $(\ensuremath{\partial} \smash{\bar{N}}(A_1) \cap X_1) \backslash \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, where one component of $\ensuremath{\partial} \smash{A_1'}$ is $\smash{a_1'}$ and the other, say $\smash{a_1''}$, is a slight pushoff of $a_1$ to its other side. Let $\smash{T_1'}$ be the annulus between $a_1$ and $\smash{a_1'}$ and $\smash{T_1''}$ be the annulus between $a_1$ and $\smash{a_1''}$ so that $\smash{T_1'} \cap \smash{T_1''} = a_1$. Let $\smash{T_0'}$ be the annulus in $T_0$ bounded by $\smash{a_m'}$ and $a_n$. See \fullref{fig:unfurl2}(b). We now form the torus \[R = A_{1,\,m} \cup A_{m,\,n} \cup T_0' \cup \smash{A_{1,\,m}'} \cup A_1' \cup T_1''\] as shown in \fullref{fig:unfurl2}(c). By construction $A_1$ and $X_0$ both lie on the same side of $R$. 
It follows (as in \fullref{dividingtorus}) that $R$ bounds a solid torus, say $V$, on its other side. Divide $R$ into the two annuli $A_R = A_{1,\,m} \cup A_{m,\,n}$ (which is a subannulus of $A$) and $T_R = \smash{T_0'} \cup \smash{A_{1,\,m}'} \cup \smash{A_1'} \cup \smash{T_1''}$. Notice that $K \cap R = K_{(1,\,n)} \cup K_{(t-n+1,\,t)}$ consists of two spanning arcs of $A_R$, that $K \cap \Int T_R = \emptyset$, and that $K \backslash V = K_{(t, 1)}$ with $\ensuremath{\partial} K_{(t, 1)} \subseteq a_1$. \fullref{main-unfurl} applies. The result of this unfurling isotopy is depicted in \fullref{fig:unfurl2}(d). Form the annuli $A_Q = \smash{A_{1,\,m}'} \cup A_1' \cup T_1''$ and $T_Q = A_{1,\,m} \cup (T_0 \backslash T_0')$. See \fullref{fig:unfurl3}(a). The torus $Q = A_Q \cup T_Q$ bounds a solid torus that contains $A_1$. Notice that $A_Q$ is a subannulus of $T_R$. Hence after unfurling along $R$, $A_Q$ is a subannulus of the isotoped $A_R$, $K \cap T_Q = \emptyset$, and \fullref{main-unfurl} applies again. \begin{figure} \caption{(a) The annuli $A_Q$ and $T_Q$ together bound a solid torus\qua (b) The result of unfurling along $Q = A_Q \cup T_Q$\qua (c) The result of a further final isotopy} \label{fig:unfurl3} \end{figure} After unfurling along $R$ and then along $Q$, the annulus $A_{m,\,n}$ has been repositioned as $T_0$, and the arcs $K_{(m,\,n)}$ and $K_{(t-n+1,\, t-m+1)}$ now lie as transverse arcs of $T_0$. See \fullref{fig:unfurl3}(b). The rest of $K$ is as it was (up to perhaps height-preserving isotopies near $a_1$, $a_m$, and $a_n$). By a slight further isotopy of the arcs $K_{(m,\,n)}$ and $K_{(t-n+1,\, t-m+1)}$ into $X_0$, we obtain another Morse presentation of $K$ of width not greater than previously. See \fullref{fig:unfurl3}(c). Since $K$ was in thin position before these isotopies, it must be in another thin position now. Hence $n = m+1$ and $x_0$ is now a leaf of $\ensuremath{\mathcal{T}}_*$. Furthermore, this implies that $x_0$ must have been a leaf of $\ensuremath{\mathcal{T}}$ connected to $x_1 \in \ensuremath{\mathcal{T}}_*$ before the isotopies. If $\ensuremath{\mathcal{T}}$ had a leaf not in $\ensuremath{\mathcal{T}}_*$ of opposite sign from $x_0$, then after these isotopies there would be a high disk and low disk pair, contradicting \fullref{highdisklowdisk}. If $\ensuremath{\mathcal{T}}$ had a nonleaf vertex that was not in $\ensuremath{\mathcal{T}}_*$, then $\ensuremath{\mathcal{T}}$ has a penultimate leaf not in $\ensuremath{\mathcal{T}}_*$. Since $x_0$ was a leaf of $\ensuremath{\mathcal{T}}$ connected to $\ensuremath{\mathcal{T}}_*$ by an edge, it could not be a leaf adjoined to this penultimate leaf. Thus there would have been a long disk, contradicting \fullref{longdisk}. The conclusion of this lemma follows. \end{proof} \begin{lemma}\label{lem:movingx0} If $x_0 \not \in \ensuremath{\mathcal{T}}_*$, then there is an isotopy of $K$ into another thin position which takes each annulus $A_i$ to itself for $i < k$ and $A_k$ to another annulus such that the resulting trees have $x_0 \in \ensuremath{\mathcal{T}}_*$. \end{lemma} \begin{proof} The proof of this lemma is essentially contained in the proof of \fullref{unfurling2}. Assume $x_0 \not \in \ensuremath{\mathcal{T}}_*$. By \fullref{unfurling2}, $\ensuremath{\partial} X_0$ consists of the annulus $A_k$ and the annulus, say $T_k$, of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$ bounded by the curves $a_k$ and $a_{k-1}$. Let $X_2 \in \mathcal{X}$ be the other solid torus that contains $A_k$ in its boundary. 
Since $k \geq 3$, $X_2$ cannot correspond to a leaf of $\ensuremath{\mathcal{T}}$. \fullref{unfurling2} implies that the interior of $X_2$ must intersect $K$. The isotopy employed in the proof of \fullref{unfurling2} has the effect of rearranging $S$ in $N(\bigcup_{i=1}^k A_i \cup T_k)$ so that $A_k$ is moved from one side of $X_0$ to a slight push off $T_k$ on the other side and each $A_i$ for $1\leq i <k$ is taken to itself. For $\ensuremath{\mathcal{T}}$, this is tantamount to moving the label $x_0$ to the vertex in $\ensuremath{\mathcal{T}}_*$ that previously corresponded to $X_2$. \end{proof} \subsection{The least extreme critical points} Recall that in the beginning of this \fullref{sec:annuliandtrees} we let $\{A_2, \dots, A_k\}$ be the annuli and $A_1$ be the M\"obius band or complex associated to the face of an extended Scharlemann cycle of order $2$ or $3$. Let $A_{k+1}$ be the complex formed from identifying the common corners of the bigon and trigon at the ``ends'' of the face bounded by a forked extended Scharlemann cycle that contains the extended Scharlemann cycle of order $2$ provided such a forked extended Scharlemann cycle exists. Otherwise, set $A_{k+1} = \emptyset$. \begin{Lemma}\label{highestmin} After perhaps a width-preserving isotopy, the highest minimum (resp.\ lowest maximum) below (resp.\ above) $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ lies on an arc of $K$ that together with an arc on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounds a low disk (resp.\ high disk) with interior disjoint from $ \bigcup_{i=1}^{k+1} A_i$ and from $K$. Furthermore, if the boundary of this low disk (resp.\ high disk) intersects $\bigcup_{i=1}^{k+1} \ensuremath{\partial} A_i$ then either it intersects only $\ensuremath{\partial} A_{k+1} - a_k$ if $A_{k+1} \neq \emptyset$ or it intersects only $a_k$ if $A_{k+1} = \emptyset$. \end{Lemma} \begin{proof} We work with the highest minimum below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. The other case follows in the same manner. Let $m$ be the first critical value of $h(K)$ below $0$, and let $p_m$ be the corresponding critical point. Since $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} = h^{-1}(0)$ is a thick level, $p_m$ is a minimum. For a suitably small $\epsilon >0$, $K \cap h^{-1}[m-\epsilon,0]$ is a collection of arcs of $K$ transverse to the induced product structure on $h^{-1}[m-\epsilon,0]$ together with the arc $\kappa$ containing $p_m$ that has both endpoints on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Thus there exists a low disk for $\kappa$. Let $D$ be a low disk for $\kappa$ so that its interior is disjoint from $K$ and $\ensuremath{\partial} D$ consists of $\kappa$ and an arc on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If $\kappa$ is contained in $A_i$ for some $i$, we may assume that $N(\ensuremath{\partial} \kappa) \cap (D-\kappa) \cap A_i = \emptyset$ at the expense of perhaps increasing $|(\ensuremath{\partial} D - \kappa) \cap A_i|$. Let us further assume $D$ is transverse to $\smash{\bigcup_{i=1}^{k+1}} A_i$ and that $D$ has been chosen among all such disks (with $N(\ensuremath{\partial} \kappa) \cap (D-\kappa) \cap A_i = \emptyset$) so that $|D \cap \smash{\bigcup_{i=1}^{k+1}} A_i|$ is minimized. Assume $(D-\kappa) \cap \Int A_i \neq \emptyset$ for some $i \in \{1, \dots, k+1\}$. Since $D$ is disjoint from $K$ except along $\kappa$, $(D-\kappa) \cap A_i = D \cap (A_i \backslash K)$. 
If the intersection contained any simple closed curves (that do not intersect $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in the case of $A_1$), then standard innermost disk arguments would imply a contradiction either to the minimality of $|\Int D \cap \bigcup_{i=1}^{k+1} A_i|$ or to the incompressibility of $S$. Therefore $D \cap A_i$ is a collection of arcs for each $i$. Since each component of $A_i \backslash K$ is a bigon or trigon of $G_S$, each arc of $D \cap A_i$ either bounds a subdisk of $A_i \backslash K$ with an arc of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ or bounds a rectangle of $A_i$ with two arcs of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and an arc of $K$. Standard outermost arc arguments show that we may assume that only arcs of the second type occur. Let $\alpha$ be an arc of $D \cap (A_i \backslash K)$, outermost on $D$. Thus $\alpha$ cuts off a subdisk $D_\alpha \subseteq D$ disjoint from $\kappa$. Since $\alpha$ must be an arc of the second type, let $R \subseteq A_i$ be a rectangle with boundary composed of $\alpha$, two arcs of $\ensuremath{\partial} A_i \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, and an arc, say $\kappa'$, of $K \cap A_i$. Since the interior of $D_\alpha$ is disjoint from $A_i$, $D_\alpha$ intersects $R$ only along $\alpha$. Thus by a slight isotopy of the disk $R \cup D_\alpha$, we may form a low disk $D'$ with boundary composed of $\kappa'$ and an arc on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ such that $|\Int D' \cap \bigcup_{i=1}^{k+1} A_i| = 0$. If $\kappa' = \kappa$ then the existence of $D'$ contradicts the minimality of $|\Int D \cap \smash{\bigcup_{i=1}^{k+1}} A_i|$. Thus either $|\Int D \cap \smash{\bigcup_{i=1}^{k+1}} A_i| = 0$ satisfying the lemma or $\kappa' \neq \kappa$. If $\kappa' \neq \kappa$ then $D'$ guides a width-preserving isotopy of $K$ so that the minimum of $\kappa'$ is higher than the minimum of $\kappa$. Given the disk $D$ with $|\Int D \cap \bigcup_{i=1}^{k+1} A_i| = 0$, the final conclusion of the lemma is immediate. \end{proof} If $A_{k+1} \neq \emptyset$, then each arc of $A_{k+1} \cap K$ bounds a disk with an arc of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ contained in $N(\ensuremath{\partial} A_{k+1})$ whose interior is disjoint from $K$ and $a_k$. These disks may be constructed similarly to the construction of the disk $D_g$ in \fullref{thinningtwoforks}. \begin{Lemma}\label{uniqueleaf} If a leaf of $\ensuremath{\mathcal{T}}$ is not in $\ensuremath{\mathcal{T}}_*$, then it is the only such leaf. Furthermore $A_k$ is contained in the boundary of the solid torus corresponding to this leaf. \end{Lemma} \begin{proof} Assume $k=2$. If there are two leaves of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$, then $x_* = x_1 = \ensuremath{\mathcal{T}}_*$. Hence $t=4$ contradicting that $t \geq 6$. Thus we assume $k \geq 3$. Let $x_m$ be a leaf of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$. Without loss of generality we may assume its sign is positive. By \fullref{unfurling}, $x_1 \in \ensuremath{\mathcal{T}}_*$. \fullref{lem:movingx0} implies that we may assume $x_0 \in \ensuremath{\mathcal{T}}_*$. Thus $x_m$ is neither $x_0$ nor $x_1$. 
Let $A_m$ be the annulus of $A \backslash \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ which together with an annulus $T_m \in \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$ cuts off the solid torus $X_m \in \mathcal{X}$ corresponding to the leaf $x_m$. The arcs of $K \cap A_m = K_{(m-1,\,m)} \cup K_{(t-m+1,\,t-m+2)}$ each with an arc $\tau_{(m-1,\,m)}$ or $\tau_{(t-m+1,\,t-m+2)}$ of $T_m$ bound a meridional disk of $X_m$. These two meridional disks are high disks and may be assumed to be disjoint. By \fullref{highestmin} the arc $\kappa$ of $K \backslash \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ containing the highest minimum below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ together with an arc $\tau_\kappa$ of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounds a low disk $D$ with interior disjoint from $\bigcup_{i=1}^k A_i$. If the interior of $\tau_\kappa$ is disjoint from the interior of either $\tau_{(m-1,\,m)}$ or $\tau_{(t-m+1,\,t-m+2)}$, then by \fullref{highdisklowdisk} $K$ is not in thin position. Hence $\tau_\kappa$ must intersect the interior of both. Thus $\tau_\kappa \cap \Int T_m \neq \emptyset$. For this to occur, either $\ensuremath{\partial} \kappa \cap \Int T_m \neq \emptyset$, $\ensuremath{\partial} \kappa \subseteq \ensuremath{\partial} T_m$, or $\Int \tau_\kappa \cap \ensuremath{\partial} T_m \neq \emptyset$. Since the interior of $X_m$ is disjoint from $K$, $\ensuremath{\partial} \kappa \cap \Int T_m = \emptyset$. If $\ensuremath{\partial} \kappa \subseteq \ensuremath{\partial} T_m$ then $\kappa$ must join the two arcs of $K \cap A_m$. Hence $a_k \subseteq \ensuremath{\partial} T_m$. If $\Int \tau_\kappa \cap \ensuremath{\partial} T_m \neq \emptyset$ then, as in \fullref{highestmin}, $\Int \tau_\kappa$ may only intersect $a_k$. Hence $a_k \subseteq \ensuremath{\partial} T_m$. Thus $A_k \subseteq \ensuremath{\partial} X_m$. Assume $x_n$ is another leaf of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$. \fullref{unfurling2} implies that $x_m$ and $x_n$ have the same sign. We may also assume $x_n$ is neither $x_0$ nor $x_1$. The same argument above thus applies for the leaf $x_n$. Hence if $X_n \in \mathcal{X}$ is the solid torus corresponding to $x_n$, then $A_k \subseteq \ensuremath{\partial} X_n$. Therefore $A_k$ separates $X^+$ into the solid tori $X_m$ and $X_n$ of $\mathcal{X}$. In other words, $X^+ = X_m \cup A_k \cup X_n$. Since $k \geq 3$, $A_{k-2}$ exists. Furthermore $A_{k-2}$ must be contained in $X^+$ since $A_k$ is. Yet since $A_k$ is the only subannulus of $A = \bigcup_{i=2}^k A_i$ contained in $X^+$, it must be the case that $k=3$ and $A_1 \subseteq X^+$. But because neither $x_m = x_1$ nor $x_n = x_1$, $A_1 \not \subseteq X^+$. This is a contradiction. \end{proof} \begin{lemma}\label{lem:lonevertex} There may be at most one vertex of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$. \end{lemma} \begin{proof} This follows from \fullref{unfurling2} and \fullref{uniqueleaf}. \end{proof} \subsection{Bounds on extended Scharlemann cycles} We may now get good estimates on how many labels are accounted for by the face of an extended $S2$ or $S3$ cycle. Recall that we are assuming $t \geq 6$. \begin{Lemma}\label{bounded} For any $x \in \mathbf{t}$ the face bounded by an extended $S2$ or $S3$ cycle on $\smash{G_S^x}$ may account for at most $t/2 +1$ of the labels if $4 {\nmid} t$ and at most $t/2$ of the labels if $4|t$. 
\end{Lemma} \begin{proof} Let $\sigma'$ be an extended $S2$ or $S3$ cycle of $\smash{G_S^x}$ for some $x \in \mathbf{t}$ bounding the disk face $f'$. Let $\sigma$ be the Scharlemann cycle contained in $f'$. Assume we have relabeled so that $\sigma$ has label pair $\{t,1\}$ and $\sigma'$ has label pair $\{t-k+1, k\}$. Thus $f'$ accounts for $2k$ labels. If $k=1$ then $\sigma = \sigma'$ and the lemma holds. Therefore we may assume $k \geq 2$. From $f'$ form the complex $A_1$, the annuli $A_i$ for $i=2, \dots, k$, the annulus $A=\smash{\bigcup_{i=2}^{k}} A_i$, and the corresponding trees. \fullref{lem:lonevertex} implies that there can be at most one vertex of $\ensuremath{\mathcal{T}}$ not contained in $\ensuremath{\mathcal{T}}_*$. Since $\ensuremath{\mathcal{T}}$ has $k$ edges, $\ensuremath{\mathcal{T}}_*$ has at least $k-1$ edges. By \fullref{evenintersections}, the interior of the arc $K_{(k,\,t-k+1)}$ intersects $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ at least $2(k-1)$ times. Hence at least $2(k-1)$ labels are not accounted for by $f'$. Therefore $t \geq 2k + 2(k-1) = 4k - 2$, ie\ $t/2 +1 \geq 2k$. Since $k$ must be an integer, if $t$ is divisible by $4$ then $t/2 \geq 2k$. \end{proof} \begin{lemma}\label{boundedforked} For any $x \in \mathbf{t}$ the trigon bounded by a forked extended $S2$ cycle on $\smash{G_S^x}$ may account for at most $t/2$ of the labels if $4 {\nmid} t$ and at most $t/2-1$ of the labels if $4|t$. \end{lemma} \begin{proof} Let $\sigma''$ be a forked extended $S2$ cycle bounding the disk face $f''$. Let $\sigma'$ be the outermost extended $S2$ cycle contained in $f''$, and let $f'$ be the disk face it bounds. As in the above proof of \fullref{bounded}, let $\sigma$ be the $S2$ cycle contained in $f'$. Assume we have relabeled so that $\sigma$ has label pair $\{t,1\}$ and $\sigma'$ has label pair $\{t-k+1, k\}$. Since $f'$ accounts for $2k$ labels, $f''$ accounts for $2k+1$ labels. If $k=1$ then $\sigma = \sigma'$, $f''$ accounts for $3$ labels, and the lemma holds. Therefore we may assume $k\geq 2$. Again form the complex $A_1$, the annuli $A_i$ for $i=2, \dots,k$, the annulus $A$, and the corresponding trees; and again note that $\ensuremath{\mathcal{T}}$ can have at most one vertex not in $\ensuremath{\mathcal{T}}_*$. Also form the complex $A_{k+1}$ from the bigon and trigon of $f'' \backslash f'$. Note that $A_{k+1} \cap A = a_k$. Let $T_k$ and $T_k'$ be the two annuli of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$ with $a_k$ as a boundary component. Since $A_{k+1} \cap K = K_{(k,\,k+1)} \cup K_{(t-k,\,t-k+1)}$, either $T_k$ or $T_k'$ contains both the end points $K_{k+1}$ and $K_{t-k}$ (because $a_k$ contains the end points $K_k$ and $K_{t-k+1}$). Say $T_k$ contains these end points. Since by \fullref{lem:lonevertex} at most one vertex of $\ensuremath{\mathcal{T}}$ is not in $\ensuremath{\mathcal{T}}_*$, either (a) $T_k'$ corresponds to the edge $t_k'$ of $\ensuremath{\mathcal{T}}$ incident to the leaf of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$ or (b) the interior of $K_{(k+1,\,t-k)}$ intersects $T_k'$ nontrivially. {\bf Case (a)}\qua Assume $T_k'$ corresponds to an edge $t_k'$ of $\ensuremath{\mathcal{T}}$ incident to the leaf of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$. It must be the annulus $A_k$ that together with $T_k'$ cuts off the solid torus corresponding to this leaf. By \fullref{lem:movingx0} we may assume this leaf is not $x_0$. 
Furthermore, by paying attention to the isotopy of \fullref{lem:movingx0} (described in the proof of \fullref{unfurling2}) one may note that $A_{k+1}$ is preserved. Since we may assume this leaf is not $x_0$ so that $A_k$ and $T_k'$ are parallel, the arcs of $K \cap A_k$ together with arcs contained in $T_k'$ then bound high (or low) disks. Furthermore, as in the construction of the high disks of \fullref{thinningtwoforks} shown in \fullref{fig:twoforkshighlowdisks}, the arcs of $K \cap A_{k+1}$ together with arcs of $T_k$ bound low (or high) disks. By \fullref{highdisklowdisk}, this contradicts the thinness of $K$. {\bf Case (b)}\qua Assume the interior of $K_{(k+1,\,t-k)}$ intersects $T_k'$ nontrivially. If $\ensuremath{\mathcal{T}} \neq \ensuremath{\mathcal{T}}_*$, then there is a leaf of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$. By \fullref{uniqueleaf} the solid torus corresponding to this leaf contains $A_k$ and hence $a_k$ in its boundary. Thus it must contain $T_k$ or $T_k'$. This contradicts our assumption that neither $T_k$ nor $T_k'$ has interior disjoint from $K$. Thus $\ensuremath{\mathcal{T}} = \ensuremath{\mathcal{T}}_*$. Since $\ensuremath{\mathcal{T}}$ has $k$ edges, $\ensuremath{\mathcal{T}}_*$ has $k$ edges. By \fullref{evenintersections}, each edge of $\ensuremath{\mathcal{T}}_*$ corresponds to at least two intersections of the interior of the arc $K_{(k,\,t-k+1)}$ with each of the $k$ annuli $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A$. Since $\ensuremath{\mathcal{T}}$ is a tree and the points $K_{k+1}$ and $K_{t-k}$ are contained in $T_k$, the interior of the arc $K_{(k+1,\,t-k)}$ must intersect $T_k$ twice more so that $K$ may intersect the interior of $T_k'$. Hence $K$ intersects the interior of $T_k$ at least $4$ times. It follows that the interior of the arc $K_{(k,\,t-k+1)}$ intersects $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ at least $2k+2$ times. Therefore $t \geq 2k + 2k+2$, ie\ $t/2 -1 \geq 2k$. Since $k$ must be an integer, if $t$ is divisible by $4$ then $t/2-2 \geq 2k$. Since $f''$ accounts for one more label than $f'$, it may account for at most $t/2$ labels if $4{\nmid}t$ and at most $t/2-1$ labels if $4|t$. \end{proof} \section{Two Scharlemann cycles}\label{sec:twoschcycles} \subsection{Accounting for labels} Let $\mathcal{F}_{\{x,\, x{+}1\}}$ be the collection of bigons and trigons of $G_S^y$ for every $y \in \mathbf{t}$ that contain a Scharlemann cycle (of order $2$ or $3$) of $G_S$ with label pair $\{x, x{+}1\}$. Let $\Lambda_{\{x,\, x{+}1\}}$ be the subset of labels in $\mathbf{t}$ accounted by the faces of $\mathcal{F}_{\{x,\, x{+}1\}}$. \begin{lemma} \label{labelaccount} There exists an $F \in \mathcal{F}_{\{x,\, x{+}1\}}$ such that $F$ accounts for every label in $\Lambda_{\{x,\, x{+}1\}}$. \end{lemma} \begin{proof} For $F_1, F_2 \in \mathcal{F}_{\{x,\, x{+}1\}}$, say $F_1 \leq F_2$ if the set of labels for which $F_1$ accounts is a subset of the set of labels for which $F_2$ accounts. It is clear that $\leq$ is a partial ordering. We claim that $\leq$ indeed totally orders the finite set $\mathcal{F}_{\{x,\, x{+}1\}}$. A maximal element of $\mathcal{F}_{\{x,\, x{+}1\}}$ then accounts for every label in $\Lambda_{\{x,\, x{+}1\}}$. Assume neither $F_1$ nor $F_2 \in \mathcal{F}_{\{x,\, x{+}1\}}$ are bounded by forked extended $S2$ cycles. 
Now $F_i$ accounts for the labels $\{x-k_i, x-k_i+1, \dots, x, x+1, \dots, x+k_i, x+k_i+1\}$ for some integer $k_i \geq 0$ for each $i=1,2$, since $F_i$ is bounded by an extended $S2$ or $S3$ cycle. Thus either $F_1 \leq F_2$ or $F_2 \leq F_1$ since either $k_1 \leq k_2$ or $k_2 \leq k_1$ respectively. By \fullref{prop:nosaladforks}, if $F_1, F_2 \in \mathcal{F}_{\{x,\, x{+}1\}}$ are two trigons bounded by forked extended $S2$ cycles containing outermost extended $S2$ cycles with the same label pair, then they must account for the same set of labels. Thus $F_1 = F_2$ with respect to the ordering $\leq$. Since a trigon bounded by a forked extended $S2$ cycle accounts for an odd number of labels whereas a face bounded by an extended $S2$ or $S3$ cycle accounts for an even number of labels, it follows that for any $F_1, F_2 \in \mathcal{F}_{\{x,\, x{+}1\}}$, either $F_1 \leq F_2$ or $F_2 \leq F_1$. Thus $\leq$ is a total ordering on $\mathcal{F}_{\{x,\, x{+}1\}}$. \end{proof} \begin{prop}\label{disjointlabelpairs} There exist two Scharlemann cycles on $G_S$ each of order $2$ or $3$ with disjoint label pairs. Furthermore, there cannot be a third Scharlemann cycle of order $2$ or $3$ with label pair distinct from the other two. \end{prop} \begin{proof} Assume every Scharlemann cycle of $G_S$ of order $2$ or $3$ has label pair $\{x, x{+}1\}$. By \fullref{labelaccount} there exists an $F \in \mathcal{F}_{\{x,\, x{+}1\}}$ that accounts for the labels $\Lambda_{\{x,\, x{+}1\}}$. Then by \fullref{bounded} and \fullref{boundedforked}, $|\Lambda_{\{x,\, x{+}1\}}| \leq t/2 +1$. Since $t \geq 6$, we have $\Lambda_{\{x,\, x{+}1\}} \neq \mathbf{t}$. Therefore there must be a second $S2$ or $S3$ cycle with label pair distinct from $\{x, x{+}1\}$. Assume there are two Scharlemann cycles $\sigma$ and $\sigma'$ of $G_S$ of order $2$ or $3$ with label pairs $\{x-1, x\}$ and $\{x, x{+}1\}$ respectively. Again, by \fullref{labelaccount} there exists $F \in \mathcal{F}_{\{x-1,\, x\}}$ that accounts for the labels $\Lambda_{\{x-1,\, x\}}$ and $F' \in \mathcal{F}_{\{x,\, x{+}1\}}$ that accounts for the labels $\Lambda_{\{x,\, x{+}1\}}$. Also, by \fullref{bounded} and \fullref{boundedforked}, $|\Lambda_{\lambda}| \leq t/2 +1$ for each $\lambda = \{x{-}1, x\}, \{x, x{+}1\}$. Because for each $\lambda = \{x{-}1, x\}, \{x, x{+}1\}$ the label pair $\lambda$ contains the middle of the set $\Lambda_{\lambda}$ (when its labels are ordered sequentially), $|\Lambda_{\{x-1,\, x\}} \cup \Lambda_{\{x,\, x{+}1\}}| \leq t/2 + 1 + 1$. Since $t\geq 6$, $\Lambda_{\{x-1,\, x\}} \cup \Lambda_{\{x,\, x{+}1\}} \neq \mathbf{t}$. Therefore there must be a third $S2$ or $S3$ cycle $\sigma''$ in $G_S$ with label pair distinct from the label pairs of both $\sigma$ and $\sigma'$. Assume there are three Scharlemann cycles of order $2$ or $3$ with mutually distinct label pairs. Then two of them have disjoint label pairs and must bound faces on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. This contradicts \fullref{oppositesides}. Consequently there are exactly two label pairs of Scharlemann cycles of order $2$ or $3$; and since two overlapping label pairs would, as above, force a third distinct label pair, these two label pairs must be disjoint. \end{proof} Pulling \fullref{labelaccount} and the proof of \fullref{disjointlabelpairs} together, there exist two bigons or trigons, say $F^-$ and $F^+$, of the subgraphs $\smash{G_S^x}$ and $G_S^y$ respectively for some $x, y \in \mathbf{t}$, that account for all the labels $\mathbf{t}$. Interior to $F^-$ and $F^+$ are order $2$ or $3$ Scharlemann cycles $\sigma^-$ and $\sigma^+$ respectively with disjoint label pairs. These Scharlemann cycles bound faces $f^-$ and $f^+$ respectively of $G_S$.
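As a purely arithmetic aside (an illustration of the count used in the proof of \fullref{disjointlabelpairs}, not an additional hypothesis): each set $\Lambda_\lambda$ is a run of at most $t/2+1$ consecutive labels whose middle lies in the pair $\lambda$, so two such runs whose defining pairs share a label cover at most
\[
\Bigl(\tfrac{t}{2}+1\Bigr)+1 \;=\; \tfrac{t}{2}+2 \;<\; t \qquad \text{whenever } t>4,
\]
and the standing assumption $t \geq 6$ therefore always leaves at least one label of $\mathbf{t}$ unaccounted for.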
Let us assume we have labeled the intersections of $K \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ so that the Scharlemann cycle $\sigma^-$ within $F^-$ has label pair $\{t, 1\}$. \begin{Lemma} \label{tis61014} $4{\nmid} t$ \end{Lemma} \begin{proof} Assume $4|t$. Then by \fullref{bounded} and \fullref{boundedforked} the maximum number of labels for which each $F^-$ and $F^+$ may account is $t/2$. In order for $F^-$ and $F^+$ to account for all $t$ labels together, their label sets must each realize this maximum and must be disjoint from one another. Since $\sigma^-$ has label pair $\{t, 1\}$, $F^-$ accounts for the labels $\{t-t/4+1, \dots, t, 1, \dots, t/4\}$. Therefore $F^+$ accounts for the labels $\{t/4+1, \dots, t/2, t/2+1, \dots, t-t/4\}$. This however implies that $\sigma^+$ has label pair $\{t/2, t/2+1\}$. Thus $f^-$ and $f^+$ lie on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, contradicting \fullref{oppositesides}. \end{proof} \begin{Lemma} \label{oneconfig} $F^+$ and $F^-$ are each bounded by extended Scharlemann cycles of order $2$ or $3$. \end{Lemma} \begin{proof} Given that $t \geq 6$ by assumption and $4{\nmid} t$ by \fullref{tis61014}, it is straightforward to check that the bounds of \fullref{bounded} and \fullref{boundedforked} leave only two cases to consider. In both cases it turns out that $\sigma^+$ has label pair $\{t/2, t/2+1\}$. Without loss of generality, these two cases are: \begin{enumerate} \item \label{twoschcycles} \begin{itemize} \item $F^-$ is bounded by an extended Scharlemann cycle of order $2$ or $3$ that accounts for the labels $\{t-(t+2)/4+1, \dots, t, 1, \dots, (t+2)/4\}$, and \item $F^+$ is bounded by an extended Scharlemann cycle of order $2$ or $3$ that accounts for the labels $\{(t+2)/4+1, \dots, t/2, t/2+1, \dots, t-(t+2)/4\}$. \end{itemize} \item \label{twoforks} \begin{itemize} \item $F^-$ is bounded by a forked extended $S2$ cycle that accounts for the labels $\{t-(t+2)/4+2, \dots, t, 1, \dots, (t+2)/4-1, (t+2)/4\}$, and \item $F^+$ is bounded by a forked extended $S2$ cycle that accounts for the labels $\{(t+2)/4+1, \dots, t/2, t/2+1, \dots, t-(t+2)/4, t-(t+2)/4+1\}$. \end{itemize} \end{enumerate} Case~\eqref{twoschcycles} satisfies the lemma. Note also that since $F^+$ only accounts for $t/2 - 1$ labels it could be interior to the face of another extended Scharlemann cycle or a forked extended $S2$ cycle. In Case~\eqref{twoforks}, notice that $F^-$ is a trigon of $G_{\smash{S}}^{(t+2)/4}$ bounded by a forked extended $S2$ cycle with the arc $K_{\smash{((t+2)/4-1, (t+2)/4)}}$ on its boundary. Similarly, $F^+$ is a trigon of $G_S^{\smash{t-(t+2)/4+1}}$ which is bounded by a forked extended $S2$ cycle with the arc $K_{\smash{(t-(t+2)/4, t-(t+2)/4+1)}}$ on its boundary. Furthermore, since $(t+2)/4-1$ and $t-(t+2)/4$ have opposite parity, these two arcs of $K$ lie on opposite sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. A high disk and a low disk may then be constructed from $F^+$ and $F^-$ as in \fullref{thinningtwoforks}. By construction these disks may be isotoped to be disjoint. This contradicts \fullref{highdisklowdisk}. \end{proof} We will henceforth assume $F^+$ and $F^-$ are as in Case~\eqref{twoschcycles} in the proof of \fullref{oneconfig}.
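For concreteness, the smallest value of $t$ beyond $6$ that is compatible with \fullref{tis61014} is $t=10$; specializing the formulas of Case~\eqref{twoschcycles} to this value (purely as an illustration) gives $(t+2)/4 = 3$ and
\[
F^- \text{ accounts for } \{8,9,10,1,2,3\}, \qquad F^+ \text{ accounts for } \{4,5,6,7\},
\]
so that $F^-$ realizes the maximum of $t/2+1 = 6$ labels allowed by \fullref{bounded}, $F^+$ accounts for the remaining $t/2-1 = 4$ labels, and $\sigma^+$ has label pair $\{t/2, t/2+1\} = \{5,6\}$.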
\subsection{Two trees} As in \fullref{sec:annuliandtrees}, from each $F^-$ and $F^+$ we may construct corresponding complexes $$A^- = \textstyle\bigcup_{i=1}^{(t+2)/4-1} A_i^- \quad \text{and} \quad A^+ = \textstyle\bigcup_{i=1}^{(t+2)/4-1} A_i^+$$ respectively, where $A_i^\pm$ are annuli for $i = 2, \dots, (t+2)/4-1$ and $A_1^\pm$ are M\"obius bands or complexes. Notice that $A^-$ and $A^+$ are disjoint. Define the curves $\smash{a_i^\pm} = \ensuremath{\partial} \smash{A_i^\pm} - \ensuremath{\partial} \smash{A_{i-1}^\pm}$ for $2 \leq i \leq (t+2)/4-1$ and $\smash{a_1^\pm = \ensuremath{\partial} A_1^\pm}$. We may further define the annulus $\smash{A_{(t+2)/4}^-}$ and the curve $\smash{a_{(t+2)/4}^-} = \ensuremath{\partial} \smash{A_{(t+2)/4}^- - a_{(t+2)/4-1}^-}$. Note that $\smash{A_{(t+2)/4}^-}$ is disjoint from $A^+$ too. We may also define trees $\ensuremath{\mathcal{T}}^+$, $\smash{\ensuremath{\mathcal{T}}_*^+}$, $\ensuremath{\mathcal{T}}^-$, and $\smash{\ensuremath{\mathcal{T}}_*^-}$ associated to $A^+$ and $A^-$, as well as the corresponding labelings of vertices $\smash{x_1^+}$, $\smash{x_*^+}$, $\smash{x_0^+}$, $\smash{x_1^-}$, $\smash{x_*^-}$, and $\smash{x_0^-}$. Since the solid torus corresponding to $\smash{x_0^+}$ must contain $\smash{A_1^-}$ and the solid torus corresponding to $\smash{x_0^-}$ must contain $\smash{A_1^+}$, $\smash{x_0^- \in \ensuremath{\mathcal{T}}_*^-}$ and $\smash{x_0^+ \in \ensuremath{\mathcal{T}}_*^+}$. \begin{Lemma}\label{highestmin2} After perhaps a width-preserving isotopy, the highest minimum (resp.\ lowest maximum) below (resp.\ above) $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ lies on an arc of $K$ that together with an arc on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounds a low (resp.\ high) disk with interior disjoint from $A^- \cup A_{(t+2)/4}^- \cup A^+$. \end{Lemma} \begin{proof} Because $A^- \cup A_{(t+2)/4}^-$ and $A^+$ are disjoint, this lemma quickly follows from the proof of \fullref{highestmin}. \end{proof} \begin{Lemma}\label{oneannulusintersectingK} The interiors of the arcs of $K \backslash (A^- \cup A^+)$ intersect $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in a single annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^- \cup A^+)$. \end{Lemma} \begin{proof} The interiors of the arcs of $K \backslash (A^- \cup A^+)$ intersect $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in only two points, $K_{(t+2)/4}$ and $K_{t-(t+2)/4 +1}$. Since $A_{(t+2)/4}^-$ is disjoint from $A^+$, $a_{(t+2)/4}^-$ is contained in a component of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^- \cup A^+)$. The points $K_{(t+2)/4}$ and $K_{t-(t+2)/4+1}$ are contained in $a_{(t+2)/4}^-$. \end{proof} Define $T_{(t+2)/4}$ to be this annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^- \cup A^+)$ containing $a_{(t+2)/4}^-$. \begin{Lemma}\label{differentboundarycurves} Assume $t \geq 10$ and $4 {\nmid} t$. For any annulus $R \in \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^- \cup A^+)$, we have $\ensuremath{\partial} R = a_i^- \cup a_j^+$ for some $i, j \in \{1, \dots, (t+2)/4-1\}$. \end{Lemma} \begin{proof} Assume otherwise.
Since $|A^- \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}| = |A^+ \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}| = (t+2)/4-1 \geq 2$, there must be two annuli, say $R^+$ and $R^-$, of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^- \cup A^+)$ such that $\ensuremath{\partial} R^+ = a_i^+ \cup a_j^+$ and $\ensuremath{\partial} R^- = a_k^- \cup a_l^-$ for integers $1 \leq i < j \leq (t+2)/4-1$ and $1 \leq k < l \leq (t+2)/4-1$. By \fullref{oneannulusintersectingK}, at least one of these two annuli has its interior disjoint from $K$ and is thus not $T_{(t+2)/4}$. Consider the annulus $\bigcup_{i=2}^k A_i^-$ where $k=(t+2)/4$ and its associated trees $\ensuremath{\mathcal{T}}$ and $\ensuremath{\mathcal{T}}_*$. As noted in \fullref{evenintersections}, since $k > t/4$ there exists a vertex of $\ensuremath{\mathcal{T}}$ not in $\ensuremath{\mathcal{T}}_*$. By \fullref{uniqueleaf} there is a single leaf of $\ensuremath{\mathcal{T}}$ not contained in $\ensuremath{\mathcal{T}}_*$. Furthermore, $A_k^- = A_{(t+2)/4}^-$ is contained in the boundary of the solid torus $X_{(t+2)/4}$ corresponding to this leaf. Thus $\ensuremath{\partial} X_{(t+2)/4} \backslash A_{(t+2)/4}^-$ is one component of $T_{(t+2)/4} \backslash a_{(t+2)/4}^-$. Therefore one component of $\ensuremath{\partial} T_{(t+2)/4}$ is $a_{(t+2)/4-1}^-$. Since the other component of $T_{(t+2)/4} \backslash a_{(t+2)/4}^-$ has interior disjoint from $K$, the other component of $\ensuremath{\partial} T_{(t+2)/4}$ cannot be $a_n^+$ for any $1 \leq n \leq (t+2)/4-2$ without contradicting \fullref{uniqueleaf}. Thus neither $R^+$ nor $R^-$ is $T_{(t+2)/4}$. Now consider the annuli $A^+$ and $A^-$ and their corresponding trees. If $i$ and $j$ have the same parity, then the torus $R^+ \cup \smash{\bigcup_{s=i+1}^j A_s^+}$ separates $\smash{A_1^+}$ from $K \backslash A^+$. Thus the vertex $x_1^+ \in \ensuremath{\mathcal{T}}^+$ is not contained in $\ensuremath{\mathcal{T}}_*^+$. \fullref{unfurling} then applies, contradicting the thinness of $K$. Similarly $k$ and $l$ cannot have the same parity. Since $i$ and $j$ have opposite parity, $R^+$ corresponds to an edge of $\ensuremath{\mathcal{T}}^+$ that separates vertices of $\ensuremath{\mathcal{T}}^+$ from $\ensuremath{\mathcal{T}}_*^+$. By \fullref{uniqueleaf} $R^+$ separates a single leaf of $\ensuremath{\mathcal{T}}^+$ from $\ensuremath{\mathcal{T}}_*^+$. Thus $i+1 = j = (t+2)/4-1$. This leaf corresponds to a solid torus $V^+$ with interior disjoint from $K$ that contains $\smash{A_{(t+2)/4-1}^+}$ in its boundary. Since $V^+$ contains neither $\smash{A_1^+}$ nor $\smash{A_1^-}$, the core of $R^+$ is a longitudinal curve of $V^+$. It follows that there are meridional disks of $V^+$ which form low (or high) disks for the arcs of $A_{(t+2)/4-1}^+ \cap K$. Let $D^+$ be one of these disks. Similarly, since $k$ and $l$ have opposite parity, $k+1 = l = (t+2)/4-1$ and there exists a solid torus $V^-$ such that $\ensuremath{\partial} V^- = R^- \cup A_{(t+2)/4-1}^-$. Furthermore there are meridional disks of $V^-$ which form high (or low) disks for the arcs of $A_{(t+2)/4-1}^- \cap K$. Let $D^-$ be one of these disks. Note that the annuli $A_{\smash[b]{(t+2)/4-1}}^+$ and $A_{\smash[b]{(t+2)/4-1}}^-$ lie on opposite sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Thus $V^+$ and $V^-$ lie on opposite sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$.
Therefore $D^+$ and $D^-$ form a pair of disjoint high and low disks for $K$. By \fullref{highdisklowdisk} this contradicts the thinness of $K$. \end{proof} \begin{lemma}\label{alternatingcurves} We have $\ensuremath{\partial} T_{\smash[b]{(t+2)/4}} = a_{\smash[b]{(t+2)/4-1}}^+ \cup a_{\smash[b]{(t+2)/4-1}}^-$. Furthermore, except for $T_{(t+2)/4}$ and the annulus (other than $T_{(t+2)/4}$ if $t=6$) bounded by $a_1^+ \cup a_1^-$, all of the other annuli of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^+ \cup A^-)$ are bounded by either $a_{i-1}^+ \cup a_{i}^-$ or $a_{i-1}^- \cup a_{i}^+$ for $i \in \{2, \dots, (t+2)/4-1\}$. \end{lemma} \begin{proof} \fullref{tis61014} implies that $4{\nmid}t$. Recall that we are assuming $t \geq 6$. The lemma follows immediately for $t=6$ since in this case $(t+2)/4-1 = 1$. We have the complexes $A^- = \smash{A_1^-}$ and $A^+ = \smash{A_1^+}$ and the annulus $\smash{A_2^-}$ which give the three curves $\smash{a_1^-}$, $\smash{a_1^+}$, and $\smash{a_2^-}$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. There are only two annuli of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^- \cup A^+)$---one of which is $T_{(t+2)/4}=T_2$---and both have boundary $a_1^+ \cup a_1^-$. \fullref{differentboundarycurves}, which applies for $t \geq 10$, implies that each annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A^+ \cup A^-)$ has boundary $a_i^+ \cup a_j^-$ for some $i$ and $j$ in $\{1, \dots, (t+2)/4-1\}$. Consider the edges of the trees $\ensuremath{\mathcal{T}}^+$ and $\ensuremath{\mathcal{T}}^-$. Since each edge of $\ensuremath{\mathcal{T}}^+$ (resp.\ $\ensuremath{\mathcal{T}}^-$) corresponds to an annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A^+$ (resp.\ $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A^-$), such an annulus must intersect $A^-$ (resp.\ $A^+$) exactly once. Therefore the annulus corresponding to an edge incident to a leaf of $\ensuremath{\mathcal{T}}^+$ (resp.\ $\ensuremath{\mathcal{T}}^-$) must intersect $A^-$ (resp.\ $A^+$) either in $a_1^-$ or $\smash{a_{(t+2)/4-1}^-}$ (resp.\ $\smash{a_1^+}$ or $\smash{a_{(t+2)/4-1}^+}$). Thus each of the trees $\ensuremath{\mathcal{T}}^+$ and $\ensuremath{\mathcal{T}}^-$ can have only two leaves and is thus homeomorphic to a line segment. The lemma now follows for $t=10$; see \fullref{fig:tis10A+-}(a). Hence we may assume $t\geq14$. \begin{figure} \caption{(a) $A^+$ and $A^-$ for $t=10$\qua (b) The ends of $A^+$ and $A^-$ for $t > 10$} \label{fig:tis10A+-} \end{figure} Since each tree is homeomorphic to a line segment, the two annuli of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A^-$ (resp.\ $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A^+$) with boundary containing the curve $\smash{a_{(t+2)/4-1}^-}$ (resp.\ $\smash{a_{(t+2)/4-1}^+}$) also contain either $\smash{a_{(t+2)/4-2}^-}$ or $\smash{a_{(t+2)/4-3}^-}$ (resp.\ $\smash{a_{(t+2)/4-2}^+}$ or $\smash{a_{(t+2)/4-3}^+}$). Note that one of these two annuli must contain $T_{(t+2)/4}$. By \fullref{differentboundarycurves} each of these two annuli must intersect $A^+$ (resp.\ $A^-$) in exactly one curve.
Indeed, since the annulus with boundary $a_{(t+2)/4-2}^- \cup a_{(t+2)/4-1}^-$ (resp.\ $\smash{a_{(t+2)/4-2}^+ \cup a_{(t+2)/4-1}^+}$) corresponds to an edge of $\ensuremath{\mathcal{T}}^-$ (resp.\ $\ensuremath{\mathcal{T}}^+$) which is incident to a leaf, it must intersect either $\smash{a_1^+}$ or $\smash{a_{(t+2)/4-1}^+}$ (resp.\ $\smash{a_1^-}$ or $\smash{a_{(t+2)/4-1}^-}$). If it intersects $a_1^+$ (resp.\ $a_1^-$), then the other annulus must contain $T_{(t+2)/4}$. In this case, $$\ensuremath{\partial} T_{(t+2)/4} = a_{(t+2)/4-1}^- \cup a_2^+ \quad \text{(resp. }a_{(t+2)/4-1}^+ \cup a_2^-).$$ Moreover, $T_{(t+2)/4}$ must thus be contained in the annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A^+$ (resp.\ $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash A^-$) with boundary $a_1^- \cup a_2^-$ (resp.\ $a_1^+ \cup a_2^+$). This however forces $a_2^- = a_{(t+2)/4-1}^-$ (resp.\ $\smash{a_2^+ = a_{(t+2)/4-1}^+}$). Hence $t = 10$, contrary to our assumption. This implies $$\ensuremath{\partial} T_{(t+2)/4} = a_{(t+2)/4-1}^+ \cup a_{(t+2)/4-1}^-.$$ Because each of $\ensuremath{\mathcal{T}}^+$ and $\ensuremath{\mathcal{T}}^-$ is homeomorphic to a line segment, $A^+$ and $A^-$ spiral around one another as in \fullref{fig:tis10A+-}(b). The remainder of the lemma then follows. \end{proof} \begin{thm}\label{doubleunfurl} $t \leq 6$ \end{thm} \begin{proof} This proof is phrased and its figures drawn so that $A_{(t+2)/4-1}^-$ is above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \fullref{tis61014} implies that $4{\nmid}t$. Hence we assume $t \geq 10$. If $t \equiv 2 \pmod 8$, this coincides with our convention that $K_{(t, 1)}$ is below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If $t \equiv 6 \pmod 8$, then one ought to flip the figures and make the appropriate corresponding changes to the language. \begin{figure} \caption{(a) The ``ends'' of $A^+$ and $A^-$, the annulus $A_{(t+2)/4}^-$, and the solid torus $V$\qua (b) The result of the twist $\Theta_{2\pi}$\qua (c) The annuli $Q$ and $T_Q$ and the solid torus they cobound\qua (d) The result of the subsequent isotopies} \label{fig:doubleunfurl} \end{figure} By \fullref{alternatingcurves}, the ``ends'' of $A^+$ and $A^-$ are as in \fullref{fig:tis10A+-}. Let $T_R$ be the annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by $a_{(t+2)/4}^-$ and $a_{(t+2)/4-2}^-$ that contains $a_{(t+2)/4-1}^+$. Let $A_R$ be the union $\smash{A_{(t+2)/4-1}^- \cup A_{(t+2)/4}^-}$. By \fullref{dividingtorus}, the torus $R = T_R \cup A_R$ bounds a solid torus $V$. Note that $V$ contains the two arcs $K - (A^+ \cup A^- \cup \smash{A_{(t+2)/4}^-})$ in its interior, the arcs $K \cap (\smash{A_{(t+2)/4-1}^- \cup A_{(t+2)/4}^-})$ on its boundary, as well as the points $K \cap a_{(t+2)/4-1}^+$ on its boundary. See \fullref{fig:doubleunfurl}(a). Let $\delta$ and $\delta'$ be parallel simple closed curves on $R$ that each transversely cross $T_R$ and $A_R$ just once so that $(\delta \cup \delta') \cap A_R = K \cap A_R$ and $(\delta \cup \delta') \cap a_{(t+2)/4-1}^+ \subseteq K \cap a_{(t+2)/4-1}^+$. Orient $\delta$ so that it crosses from $T_R$ into $A_{(t+2)/4}^-$. Although $K$ intersects the interior of $T_R$ in two points, we may perform a Dehn twist $\Theta_u$ along the torus $R$ in the direction of $\delta$ analogous to the unfurling isotopy of \fullref{main-unfurl}. We must mind the effect of the isotopy on the arcs $K \cap N(a_{(t+2)/4-1}^+)$. The annulus $\Theta_{2 \pi}(A_R)$ lies as $T_R$. A slight further isotopy makes this annulus and the arcs of $\Theta_{2 \pi}(K)$ on it transverse to the height function.
These arcs of $K$ now have no critical points; at least four fewer than before the isotopy. The critical points of $K$ that were on $A_{(t+2)/4-1}^-$ and $A_{(t+2)/4}^-$ have been removed. See \fullref{fig:doubleunfurl}(b). The annulus $A_{(t+2)/4-1}^+$ (slightly extended through $N(R)$) near $a_{(t+2)/4-1}^+$ is spun once around the meridian of $R$ by $\Theta_u$. The resulting annulus may be regarded as the double-curve sum of $A_{(t+2)/4-1}^+$ with $R$ (suitably oriented). The arcs of $\Theta_{2\pi}(K)$ on this annulus may similarly be regarded as resulting from this double-curve sum of $K \cap N(A_{(t+2)/4-1}^+)$ with the two curves $\delta$ and $\delta'$ on $R$. A further slight isotopy with support in $N(T_R)$ makes this resulting annulus and arcs of $\Theta_{2\pi}(K)$ on it transverse to the height function. We may further assume that after these isotopies the arcs of $\Theta_{2\pi}(K \cap N(a_{(t+2)/4-1}^+))$ in $N(T_R)$ each have just one critical point. See again \fullref{fig:doubleunfurl}(b). Indeed, outside of $N(T_R)$, we have $K = \Theta_{2\pi}(K)$ and $A^- \cup A_{(t+2)/4}^- \cup A^+ = \Theta_{2\pi}(A^- \cup A_{(t+2)/4}^- \cup A^+)$. After these isotopies there are four ``new'' critical points of $\Theta_{2\pi}(K)$: two minima just above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and two maxima just below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Due to these critical points, the width of $\Theta_{2\pi}(K)$ is currently greater than the width of $K$. Consider the subannulus $Q$ of $\Theta_{2 \pi}(A^+)$ lying between the curves formerly labeled $a_{\smash{(t+2)/4-1}}^-$ and $a_{(t+2)/4-2}^+$. Let $T_Q$ be the subannulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by $\ensuremath{\partial} Q$ with interior disjoint from $\Theta_{2\pi}(K)$. Note that $Q$ is parallel to $T_Q$ through the solid torus that they together bound. This solid torus and the annuli $Q$ and $T_Q$ are shown in \fullref{fig:doubleunfurl}(c). We may isotop $Q$ along with the arcs of $\Theta_{2\pi}(K) \cap Q$ towards $T_Q$ so that the arcs $\Theta_{2\pi}(K) \cap Q$ after this isotopy each have only one minimum just below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. We may also slightly isotop the other annuli and arcs of the knot in $N(T_R)$ downwards so that the minima that were just above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ are now just below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. The result of these isotopies may be seen in \fullref{fig:doubleunfurl}(d). Notice the collection of annuli and arcs of $K$ now above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is indistinguishable from the collection of annuli and arcs of $K$ above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ prior to the isotopies. Furthermore, notice that (mainly due to the isotopy of $Q$) the width of the resulting position of the knot is at most the width of $K$. Since $K$ was originally in thin position, these widths must be equal. Let us continue to refer to the annuli $A_i^\pm$ that are unaffected by these isotopies by their former labels.
The annulus $A_{(t+2)/4-2}^-$ has been elongated (in a height-preserving manner) with $\Theta_{2\pi}(A_R)$; we shall also refer to this elongated annulus by $\smash{A_{(t+2)/4-2}^-}$. After the isotopies, $\Theta_{2\pi}(\smash{A_{(t+2)/4-1}^+})$ is cut into three annuli by $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Let us use $\smash{A_{(t+2)/4-1}^+}$ to refer to the isotoped annulus $Q$, and refer to the subsequent two annuli as $\smash{A_{(t+2)/4}^+}$ and $\smash{A_{(t+2)/4+1}^+}$. Note that the post-isotopies annulus $\smash{A_{(t+2)/4}^+}$ coincides with the pre-isotopies annulus $\smash{A_{(t+2)/4-1}^-}$. Again, see \fullref{fig:doubleunfurl}(d). Note also that $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ remains a thick level. Prior to any isotopies in this proof, \fullref{highestmin2} implies that the lowest maximum of $K$ above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ lies on an arc $\kappa$ of $K \backslash \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ that together with an arc $\tau$ of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounds a high disk $\Delta$ with interior disjoint from $A^+ \cup A^-$. Since the interior of $\Delta$ is disjoint from $A_{(t+2)/4-1}^-$, either $\tau \subseteq T_R \cup T_{(t+2)/4}$ or $\tau \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} - \Int(T_R \cup T_{(t+2)/4})$. Because the annuli and arcs of $K$ above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ before the isotopies are indistinguishable from the annuli and arcs of $K$ above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ after the isotopies, the high disk $\Delta$ exists after these isotopies too. After the isotopies, an arc of $K$ on the annulus $\smash{A_{(t+2)/4+1}^+}$ bounds a low disk $\Delta_{+1}$ with an arc of $T_R \backslash T_{(t+2)/4}$, and an arc of $K$ on the annulus $A_{(t+2)/4-1}^+$ bounds a low disk $\Delta_{-1}$ with an arc of $T_Q$. Note that the interior of $\tau$ is disjoint from at least one of $\Delta_{+1} \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ or $\Delta_{-1} \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Since the position of $K$ after the isotopies has width at most that of $K$ before the isotopies, $\Delta$ together with either $\Delta_{+1}$ or $\Delta_{-1}$ forms a pair of high and low disks that, by \fullref{highdisklowdisk}, contradicts the thinness of the original position of $K$. \end{proof} \section[The case t=6]{The case $t=6$}\label{sec:tis6} In this section we show that if $K$ is in thin position, then $t \neq 6$. Assuming $t=6$, our approach is to isotop $\Int S$ fixing $K$ so that we may gain an understanding of the resulting pieces of $S \backslash T$ on each side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Then since there are $3s$ arcs of $S \cap T$ and $s \geq 4g-1$, we obtain conflicting estimates on the Euler characteristic of $S$. \subsection[Isotopies of S]{Isotopies of $S$} By \fullref{oneconfig} we may assume that among the graphs $\smash{G_S^x}$ for $x \in \mathbf{t}$ there is a face $F$ bounded by an extended $S2$ cycle or an extended $S3$ cycle accounting for the labels $\{1, 2, 5, 6\}$ and a face $g$ bounded by an $S2$ cycle or an $S3$ cycle accounting for the labels $\{3, 4\}$. Let $f$ be the face of $G_S$ in $F$ bounded by the Scharlemann cycle.
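For orientation, this is Case~\eqref{twoschcycles} of \fullref{oneconfig} specialized to $t=6$ (a purely illustrative restatement): there $(t+2)/4 = 2$, so the two faces account for the label sets
\[
\{5,6,1,2\} \qquad \text{and} \qquad \{3,4\} = \{t/2,\, t/2+1\},
\]
which are exactly the sets $\{1,2,5,6\}$ and $\{3,4\}$ above; in particular, the face $g$ accounting for $\{3,4\}$ is bounded by a Scharlemann cycle whose label pair is $\{3,4\}$ itself.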
\begin{prop} \label{prop:isotopS} After perhaps reassigning labels of $\mathbf{t}$ by $x \mapsto 7-x$, one may isotop the interior of $S$ so that for some integer $n \geq 0$ the following holds: \begin{enumerate} \item \begin{itemize} \item The resulting arcs of $S \cap T$ are essential in $S$ and in $T$, and \item either $|S \cap T|$ remains minimized over isotopies of $\Int S$ or there exists annuli in $S \backslash T$. \end{itemize} \item \begin{itemize} \item For each label pair $\lambda = \{1, 6\}, \{2, 5\}, \{3, 4\}$ there are $s-n$ edges of $G_S$ with label pair $\lambda$, and they lie in an essential annulus in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, \item for each label pair $\lambda = \{1, 4\}, \{2, 3\}, \{5, 6\}$ there are $n$ edges of $G_S$ with label pair $\lambda$ and \item for each label pair $\lambda = \{1, 2\}, \{3, 6\}, \{4, 5\}$ there are no edges of $G_S$ with label pair $\lambda$. \end{itemize} \item \begin{itemize} \item If $f$ is a bigon, then there are $(s-n)/2$ faces of $S \cap X^-$ parallel to $f$, and \item if $f$ is a trigon, then there are either $(s-n)/3$ or $(s-2n)/3$ faces of $S \cap X^-$ parallel to $f$. \end{itemize} \item \begin{itemize} \item If $g$ is a bigon then $S \cap X^+$ is a collection of $3(s-n)/2$ bigons and $n$ trigons, and \item if $g$ is a trigon then $S \cap X^+$ is either a collection of $s-2n$ bigons and $(s+4n)/3$ trigons or a collection of $s-n$ bigons and $(s+2n)/3$ trigons. \end{itemize} \end{enumerate} \end{prop} Let $V$ be the solid torus component of $X^+ \backslash (\bar{N}(K) \cup F)$. Let $W$ be the solid torus $X^+ \backslash (V \cup \bar{N}(K) \cup g)$. Let $T_V$ be the annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap V$ so that $T_V$ is bounded by edges of $G_T$ with label pairs $\{1,6\}$ and $\{2, 5\}$. Observe that $\ensuremath{\partial} V \backslash T_V$ is parallel through $V$ onto $T_V$. Let $T_W$ and $T_W'$ be the two annuli of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \cap W$ where $T_W$ is bounded by edges of $G_T$ with label pairs $\{3, 4\}$ and $\{1, 6\}$ and $T_W'$ is bounded by edges of $G_T$ with label pairs $\{3, 4\}$ and $\{2, 5\}$. See \fullref{fig:toriVandW}. \def\small{\small} \begin{figure} \caption{The solid tori $V$, $W$, and $X^-\backslash f$ schematically} \label{fig:toriVandW} \end{figure} To prove \fullref{prop:isotopS} we begin in $V$ and push what we can of $S$ along $F$ through $X^-$ into $W$ and back into $X^-$. When in $W$ we also use $g$ to guide the isotopy. This puts $S \cap X^+$ into a position that we can understand and count. The unknown parts of $S \cap X^-$ are dealt with in subsequent subsections. \subsubsection{Pushing $S$ along bigons and trigons} We say two edges $e$ and $e'$ of $G_T$ are {\em parallel\/} if they bound an embedded bigon on $T$. We say two faces $R$ and $R'$ of $S \backslash T$ are {\em parallel\/} if $R$ and $R'$ cobound a product region in $(X-N(K)) \backslash T$. Similarly we say two edges of $G_T$ or two faces of $S \backslash T$ are {\em adjacent\/} if they are parallel and no other edge or face respectively lies between them. \begin{lemma}\label{lem:paralleldisks} Let $R$ and $R'$ be two disks of $S \backslash T$ such that $\ensuremath{\partial} R$ is parallel to $\ensuremath{\partial} R'$ on $\ensuremath{\partial} (X^\pm - N(K))$. Then $R$ is parallel to $R'$. Furthermore, every face of $S \backslash T$ in the product region between $R$ and $R'$ is a disk parallel to $R$ and $R'$. 
\end{lemma} \begin{proof} Since $\ensuremath{\partial} R$ is parallel to $\ensuremath{\partial} R'$, there is an annulus $A$ on $\ensuremath{\partial} (X^\pm - N(K))$ connecting them. Thus $R \cup A \cup R' \cong S^2$. Since $X^\pm - N(K)$ is irreducible, $R \cup A \cup R'$ is the boundary of a solid ball. This ball is the requisite product region $R \times [0,1]$. If $P \in S \backslash T$ is contained in the ball bounded by $R \cup A \cup R'$, then each component of $\ensuremath{\partial} P$ is an essential curve in $A$. Since $P$ must be incompressible, $P$ is a disk. It follows that $P$ is parallel to both $R$ and $R'$. \end{proof} \begin{lemma} \label{lem:parallelbigons} If $B$ and $B'$ are two bigons of $G_S$ on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ with edges $e_1 \subseteq B$ and $\smash{e_1'} \subseteq B'$ that are parallel as edges of $G_T$, then $B$ and $B'$ are parallel in $X$. \end{lemma} \begin{proof} We have the edges $e_1$ and $\smash{e_1'}$ of $B$ and $B'$ respectively which are parallel on $T$. Let $e_2$ and $\smash{e_2'}$ be the other edges of $B$ and $B'$ respectively. Note that $e_2$ and $\smash{e_2'}$ have the same label pair. Assume $B$ and $B'$ are not parallel. By \fullref{lem:paralleldisks}, the edges $e_2$ and $\smash{e_2'}$ are then not parallel on $T$ and so lie in an essential annulus $A$. Due to the existence of $f$ and $g$, \fullref{GT:L2.1} implies that the core of $A$ does not bound a disk in $X$. Join $B$ and $B'$ together along their corners and their parallel edges $e_1$ and $\smash{e_1'}$. This forms a disk in $X$ whose boundary is an essential curve in $A$, giving a contradiction. \end{proof} \begin{lemma}\label{lem:isotopacrossdisk} Let $R$ be a region of $S \backslash T$ in $X^+$. Let $D$ be a boundary compressing disk for $R$ in $X^+ - N(K)$. Let $D \cap R = \alpha$ and $D \cap T = \tau$. Assume that $\alpha$ is not parallel on $R$ to a corner of $R$. Then there exists an isotopy of $\Int S$ with support in $N(D)$ such that after the isotopy \begin{itemize} \item in $X^+$, $R$ is replaced by the result of boundary compression of $R$ along $D$, \item in $X^-$, the regions of $S \cap X^-$ incident to $\tau$ are joined by surgery (restricted to $X^-$) along $\tau$, \item each arc of $S \cap T$ is essential in both $S$ and $T$, and \item if $\tau$ connects distinct components of $R \cap T$ then such components are arcs and $|S \cap T|$ remains minimized. \end{itemize} The same holds with $X^+$ and $X^-$ interchanged. \end{lemma} \begin{remark} This is the type $A$ isotopy described by Jaco \cite{jaco3mfldtop}. \end{remark} \begin{proof} Consider the isotopy of $S$ with support in $N(D)$ that pushes the arc $\alpha \subseteq R$ through $D$. The lemma follows directly. In $X^+$, this isotopy is effectively a boundary compression of $R$ along $D$. Such a compression could only create a monogon if $\alpha$ were parallel to a corner of $R$. We however explicitly avoid this case. In $X^-$, this isotopy is effectively surgery of $S$ along the arc $\tau$ restricted to $X^-$. Such a surgery cannot create a monogon. See \fullref{fig:isotopyalongdisk}. \begin{figure} \caption{The isotopy of $R$ through $D$} \label{fig:isotopyalongdisk} \end{figure} Indeed, restricting our view to just $S \cap T$ on $T$, this isotopy has the effect of surgering $R \cap T$ along $\tau$. If $\tau$ has both end points on the same component $\gamma$ of $R \cap T$, then there is an arc $\gamma_0 \subseteq \gamma$ connecting the end points of $\tau$.
The isotopy will then render $\gamma$ into the two components $\gamma_0 \cup \tau'$ and $(\gamma - \gamma_0) \cup \tau''$ where $\tau'$ and $\tau''$ are suitable pushoffs of $\tau$. If $\tau$ has its endpoints on distinct arc components of $R \cap T$, then $|S \cap T|$ remains unchanged after the isotopy. However, if a simple closed curve of $R \cap T$ contains just one end point of $\tau$, then such an isotopy would reduce $|S \cap T|$ contradicting the assumed minimality of $|S \cap T|$. \end{proof} \begin{lemma}\label{lem:pushalongbigon} Let $B$ and $R$ be regions of $S \backslash T$ on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ such that $B$ is a bigon and there is an edge of $B$ adjacent to an edge of $R$. Then either \begin{itemize} \item $R$ is a bigon parallel to $B$, \item $R$ is a trigon or \item there is an isotopy of $\Int S$ supported in $N(B \cup R)$ such that after the isotopy $R$ is replaced by a bigon $B'$ parallel to $B$ and a region $R'$ with two fewer corners than $R$ and two edges of regions (or of a single region) of $S \backslash T$ on the opposite side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ are joined by surgery along an arc on $T$. \end{itemize} Furthermore in the last case the isotopy preserves the property that each arc of $S \cap T$ is essential in both $S$ and $T$. If the two edges are distinct, then $|S \cap T|$ remains minimized over all isotopies of $\Int S$. If the two edges are not distinct, then $R'$ is not a disk. \end{lemma} \begin{proof} We will construct a disk $D$ that gives a boundary compression for $R$ in cut exterior $(X-N(K)) \backslash T$ and then apply \fullref{lem:isotopacrossdisk}. The desired disk $D$ is a pushoff of $B$ with two corners and an edge on $R$ and remaining edge on $T$. Let $e_B$ and $e_R$ be the adjacent edges of $B$ and $R$. Together they bound a bigon $\delta$ of $T \backslash S$. Let $e_B'$ be the other edge of $B$. Assume $e_B$ and $e_R$ have label pairs $\{x, y{+}1\}$ so that, by the parity rule, the corners of $B$ are on $\ensuremath{\partial} H_{(x,\, x{+}1)}$ and $\ensuremath{\partial} H_{(y,\,y+1)}$. It may be the case that $x=y$. Note that the corners of $R$ adjacent to $e_R$ bound rectangles $\rho_{(x,\, x{+}1)} \subseteq \ensuremath{\partial} H_{(x,\, x{+}1)}$ and $\rho_{(y,\,y+1)} \subseteq \ensuremath{\partial} H_{(y,\,y+1)}$ with the corners of $\delta$, the corners of $B$, and arcs on the vertices $U_{x+1}$ and $U_{y}$. Since $\rho_{(x,\, x{+}1)}$ and $\rho_{(y,\,y+1)}$ have their interiors disjoint from $S$, we form a disk $D$ that is a slight pushoff of the disk $B \cup \delta \cup \rho_{(x,\, x{+}1)} \cup \rho_{(y,\,y+1)}$. Notice that $\ensuremath{\partial} D$ is composed of an arc $\alpha$ of $R$ and an arc $\tau$ of $T \backslash S$. The arc $\alpha$ is a slight pushoff of the edge $e_R$ and the two corners of $R$ to which it is incident. Assume $R$ is neither a bigon nor a trigon. Then $\alpha$ is not parallel on $R$ either to a corner or into an edge of $R$. Thus we may apply the isotopy of \fullref{lem:isotopacrossdisk}. If the endpoints of $\tau$ lie on a single component of $R \cap T$ then the component of $\ensuremath{\partial} R$ containing $e_R$ has just two edges. Since $R$ is not a bigon, it cannot be a disk. Thus after the isotopy $R'$ is not a disk. If $R$ is a bigon, then \fullref{lem:parallelbigons} implies that $R$ is parallel to $B$. \end{proof} The above lemma applies directly if $B$ is a face of $G_S$ bounded by an $S2$ cycle. 
If $B$ is a face of $G_S$ bounded by an $S3$ cycle, we may obtain a similar statement. \begin{lemma}\label{lem:pushalongS3cycle} Let $B$ be a face of $G_S$ bounded by an $S3$ cycle. Let $R$ be a region of $S \backslash T$ on the same side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ as $B$ such that there is an edge of $B$ adjacent to an edge of $R$. Further assume this edge of $R$ is not between the two parallel edges of $B$. Then either \begin{itemize} \item $R$ is a trigon parallel to $B$, \item $R$ is a tetragon or \item there is an isotopy of $\Int S$ supported in $N(B \cup R)$ such that after the isotopy $R$ is replaced by a trigon $B'$ parallel to $B$ and a region $R'$ with three fewer corners than $R$ and two edges of regions (or of a single region) of $S \backslash T$ on the opposite side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ are joined by surgery along an arc on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \end{itemize} Furthermore in the last case the isotopy preserves the property that each arc of $S \cap T$ is essential in both $S$ and $T$. If the two edges are distinct, then $|S \cap T|$ remains minimized over all isotopies of $\Int S$. If the two edges are not distinct, then $R'$ is not a disk. \end{lemma} \begin{proof} Again, we will construct a disk $D$ that gives a boundary compression for $R$ in $(X-N(K)) \backslash T$ and then apply \fullref{lem:isotopacrossdisk}. The desired disk $D$ is a pushoff of $B$ with three corners and two edges on $R$ and its remaining edge on $T$. Let $e_B$ and $e_R$ be the adjacent edges of $B$ and $R$. Together they bound a bigon $\delta$ of $T \backslash S$. Assume $e_B$ and $e_R$ have label pair $\{x, x{+}1\}$ so that the corners of $B$ are on $\ensuremath{\partial} H_{(x,\, x{+}1)}$. Note that the corners of $R$ adjacent to $e_R$ bound rectangles $\rho, \rho' \subseteq \ensuremath{\partial} H_{(x,\, x{+}1)}$ with the corners of $\delta$, two corners of $B$, and arcs on the vertices $U_{x}$ and $U_{x+1}$. Two of the three edges of $B$ are parallel on $T$. These two edges bound a bigon $\delta_B$ on $T$. By assumption, $e_R \not \subseteq \delta_B$. Nevertheless, following around the component of $\ensuremath{\partial} R$ containing $e_R$, one of the edges before or after $e_R$ is contained in $\delta_B$ and is adjacent to an edge of $B$ other than $e_B$. Let $e_B'$ and $e_R'$ be these edges of $B$ and $R$ respectively. Let $\delta'$ be the bigon of $T \backslash S$ bounded by $e_B'$ and $e_R'$. We may assume that the rectangle $\rho'$ is bounded by the corner of $B$ that connects the edge $e_B$ to $e_B'$, the corner of $R$ that connects the edge $e_R$ to $e_R'$, and a corner of each of $\delta$ and $\delta'$. The next corner of $R$ then bounds a rectangle $\rho'' \subseteq \ensuremath{\partial} H_{(x,\, x{+}1)}$ with the next corner of $B$, a corner of $\delta'$ and an arc on one of the vertices $U_{x}$ or $U_{x+1}$. Since $\rho$, $\rho'$, and $\rho''$ have their interiors disjoint from $S$, we form a disk $D$ that is a slight pushoff of the disk $B \cup \delta \cup \delta' \cup \rho \cup \rho' \cup \rho''$. Notice that $\ensuremath{\partial} D$ is composed of an arc $\alpha$ of $R$ and an arc $\tau$ of $T \backslash S$. The arc $\alpha$ is a slight pushoff of the edges $e_R$ and $e_R'$, the corner between them and the two corners surrounding them. The remainder of this proof follows completely analogously to the above proof of \fullref{lem:pushalongbigon}.
Assume $R$ is neither a trigon nor a tetragon. Then $\alpha$ is not parallel on $R$ either to a corner or into an edge of $R$. Thus we may apply the isotopy of \fullref{lem:isotopacrossdisk}. If the endpoints of $\tau$ lie on a single component of $R \cap T$ then the component of $\ensuremath{\partial} R$ containing $e_R$ has just three edges. Since $R$ is not a trigon, it cannot be a disk. Thus after the isotopy $R'$ is not a disk. If $R$ is a trigon, then a proof analogous to that of \fullref{lem:parallelbigons} implies that $R$ is parallel to $B$. \end{proof} \subsubsection{Arranging $S$ in the solid torus $V$} Before performing any isotopies of $S$ we need to see how it may presently be positioned in $V$. \begin{lemma} \label{lem:bigonsdisksandannuliinsolidtorus} Each region $R \in S \cap V$ is either a bigon parallel to a bigon of $F \backslash f$, a meridional disk of $V$ with an odd number of at least $3$ corners, or an annulus with two corners. \end{lemma} \begin{proof} Choose an orientation on $S$. Such a choice induces an orientation on each $R \in S \cap V$ which in turn induces an orientation on each boundary component of $R$. Let us then consider the collection of oriented simple closed curves $\mathcal{C} = \{ \ensuremath{\partial} R | R \in S \cap V\}$ as they lie on the torus $\ensuremath{\partial} V$. The collection $\mathcal{C}$ may contain both trivial and essential curves on $\ensuremath{\partial} V$. Let $\gamma$ be an (arbitrarily oriented) essential circle on $\ensuremath{\partial} V$ disjoint from $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ intersecting each corner of each $R \in S \cap V$ (ie\ each arc of $C \cap \ensuremath{\partial} \bar{N}(K)$ for each $C \in \mathcal{C}$) exactly once. Since a corner of an $R \in S \cap V$ is an arc on either $\ensuremath{\partial} H_{(1, 2)}$ or $\ensuremath{\partial} H_{(5, 6)}$, $\gamma$ may be divided into two arcs, say $\gamma_+$ and $\gamma_-$, according to the direction in which a curve $C \in \mathcal{C}$ crosses the arc. If $C \in \mathcal{C}$ is a trivial curve on $\ensuremath{\partial} V$, then $C$ must be the boundary of a disk of $S \cap V$. If $C \cap \gamma = \emptyset$, then $C$ is a trivial curve on $T$. Since $C$ bounds a disk in $S$ and a disk in $T$, these disks together form a sphere in $V$ which bounds a solid ball in $V$. This implies that there exists an isotopy of $S$ that reduces $|S \cap T|$ contradicting the assumption that $|S \cap T|$ is minimized. Thus $C \cap \gamma \neq \emptyset$. Since $C$ bounds a disk on $\ensuremath{\partial} V$, $\gamma$ must alternately cross in and out of the disk that $C$ bounds. Therefore the direction in which $C$ crosses $\gamma$ alternates around $\gamma$. Because $\gamma$ is divided into the two arcs $\gamma_+$ and $\gamma_-$ that dictate the direction in which $C$ may cross, $C$ must intersect $\gamma$ only twice. Hence $C$ is the boundary of a bigon. Such a bigon has one corner on $K_{(1, 2)}$ and one corner on $K_{(5, 6)}$. Since $C$ is a trivial curve on $\ensuremath{\partial} V$, the bigon it bounds must be parallel to one of the bigons of $F \backslash f$. If $C \in \mathcal{C}$ is a meridional curve, then it intersects $\gamma$ algebraically once. Hence $|C \cap \gamma|$ is odd. Furthermore, $C$ must bound a disk region in $S$ since otherwise there would be a compression of $S$. Since there are no monogons of $S \backslash T$, we know that $|C \cap \gamma| > 1$. 
Thus any meridional disk of $S \cap V$ has an odd number of at least three corners. Assume $C \in \mathcal{C}$ is an essential nonmeridional curve on $\ensuremath{\partial} V$. Since $C \subseteq \ensuremath{\partial} R$ for some $R \in S \cap V$, $\ensuremath{\partial} R \subseteq \mathcal{C}$ is a collection of essential nonmeridional curves on $\ensuremath{\partial} V$ that is nullhomologous in $V$. Since $R$ is incompressible, $R$ must be an annulus. Let $C'$ be the other component of $\ensuremath{\partial} R$. Let $A$ be an annulus on $\ensuremath{\partial} V$ between $C$ and $C'$ oriented so that the orientations of $C$ and $C'$ respect the induced boundary orientation. In order for the curves $C$ and $C'$ to intersect each of $\gamma_+$ and $\gamma_-$ in the prescribed directions, either $\gamma$ crosses $A$ transversely once, $\gamma$ intersects $A$ once but is disjoint from one component of $\ensuremath{\partial} A$, or $\ensuremath{\partial} A$ is disjoint from $\gamma$. Thus the annulus $R$ has respectively one corner and one edge on each boundary component, one boundary component in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and one with two corners and two edges, or both boundary components in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. However, if both boundary components of $R$ are in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, then $R$ is isotopic onto $A$ within $V$ (since the annulus $\ensuremath{\partial} V \backslash T_V$ is parallel to $A$ through $V$). Thus there is an isotopy of $S$ that reduces $|S \cap T|$ contradicting the assumption that $|S \cap T|$ is minimized. Therefore an annulus of $S \cap V$ must have exactly two corners. \end{proof} \begin{lemma}\label{lem:meridionaldisks} Assume $P \in S \cap V$ is a trigon and $R \in S \cap V$ is a meridional disk of $V$ distinct from $P$. Then $R$ has an edge parallel on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ to an edge of a bigon of $S \cap V$ such that no edge of $P$ lies between these edges. Furthermore, if $R$ is also a trigon, it is parallel to $P$. \end{lemma} \begin{proof} We continue with the setup of the proof of \fullref{lem:bigonsdisksandannuliinsolidtorus}. As $\ensuremath{\partial} P \subseteq \ensuremath{\partial} V$ must cross $\gamma$ geometrically three times and algebraically once, the cut torus $\ensuremath{\partial} V \backslash (\ensuremath{\partial} P \cup \gamma)$ consists of three components. Two of the components are disks, say $D_1$ and $D_2$, each with boundary composed of an arc of $\ensuremath{\partial} P$ and an arc of $\gamma$. The third component is a disk with boundary composed alternately of four arcs of $\ensuremath{\partial} P$ and four arcs of $\gamma$. Note that each disk $D_1$ and $D_2$ corresponds to a parallelism on $T$ between an edge of $P$ and an edge of a bigon of $S \cap V$. The two disks $D_1$ and $D_2$ meet at a point $p$ of $\ensuremath{\partial} P \cap \gamma$ where $\ensuremath{\partial} P$ crosses $\gamma$ in a direction opposite that of the other two points. Without loss of generality, we may assume $p \in \gamma_+$ and $(\ensuremath{\partial} P \cap \gamma)-p \subseteq \gamma_-$. Thus $\gamma_+ \subseteq (D_1 \cup D_2) \cap \gamma$. Since $\ensuremath{\partial} R$ must intersect $\gamma_+$, $\ensuremath{\partial} R \cap D_i \neq \emptyset$ for either $i=1$ or $2$. 
An arc of $\ensuremath{\partial} R \cap D_i$ then cuts off a disk disjoint from $\ensuremath{\partial} P$ that corresponds to a parallelism between an edge of $R$ and an edge of a bigon of $S \cap V$. This parallelism does not contain an edge of $\ensuremath{\partial} P$. Furthermore, if $R$ is a trigon then $|\ensuremath{\partial} R \cap \gamma|=3$. The end points of an arc of $\ensuremath{\partial} R \cap D_i$ for $i=1$ or $2$ account for two of these intersections. In the complement of $\ensuremath{\partial} P$, there is only one way to complete this arc into an essential simple closed curve that crosses $\gamma$ just once more. The rectangle on $D_i$ between $\ensuremath{\partial} P \cap D_i$ and $\ensuremath{\partial} R \cap D_i$ extends to an annulus between $\ensuremath{\partial} P$ and $\ensuremath{\partial} R$. This annulus is cut into three rectangles by $\gamma$. These rectangles imply each edge of $R$ is parallel to an edge of $P$ and hence $\ensuremath{\partial} R$ is parallel to $\ensuremath{\partial} P$ on $\ensuremath{\partial} (X^+ - N(K))$. Thus by \fullref{lem:paralleldisks} $R$ is parallel to $P$. \end{proof} \begin{prop}\label{prop:groomingV} Let $B_1$ and $B_2$ be the two bigons of $F \backslash f$ on $\ensuremath{\partial} V$. Then one may isotop $\Int S$ so that \begin{itemize} \item $S \cap V$ is a collection of bigons parallel to either $B_1$ or $B_2$ and a collection of mutually parallel trigons, \item each arc of $S \cap T$ is essential in both $S$ and $T$ and \item $|S \cap T|$ remains minimized. \end{itemize} \end{prop} \begin{proof} By \fullref{lem:bigonsdisksandannuliinsolidtorus}, a region $R \in S \cap V$ is either a bigon parallel to either $B_1$ or $B_2$, a meridional disk of $V$ with an odd number of edges, or an annulus with two corners. We first note that $S \cap V$ cannot simultaneously contain meridional disks of $V$ and annuli. If an annulus $Q$ of $S \cap V$ were to exist with a meridional disk of $S \cap V$, then the curves $\ensuremath{\partial} Q$ and its core curve must all be meridional curves. Thus the core curve of $Q$ must bound a disk in $V$. This disk implies $S$ is compressible contradicting that it is incompressible. Hence we consider annuli and meridional disks of $S \cap V$ separately. {\bf Case 1}\qua $S \cap V$ contains annuli. Let $\mathcal{Q}$ be the collection of annuli of $S \cap V$. By the proof of \fullref{lem:bigonsdisksandannuliinsolidtorus}, all the annuli in $\mathcal{Q}$ either (a) have one boundary component contained in $T_V$ or (b) have each boundary component intersecting the annulus $T_V$ in a single spanning arc. We will refer to such annuli as type (a) or type (b) accordingly. Since properly embedded annuli in a solid torus are parallel into the boundary torus, each $Q \in \mathcal{Q}$ is isotopic to an embedded annulus $Q'$ in $\ensuremath{\partial} V$ bounded by $\ensuremath{\partial} Q$. Let $V_Q \subseteq V$ be the solid torus through which $Q$ is parallel to $Q'$. If $Q_1 \in \mathcal{Q}$ is contained in $V_Q$, then $Q_1$ is parallel to an annulus $Q_1' \subseteq Q'$. If $Q_2 \in \mathcal{Q}$ is not contained in $V_Q$, then it is parallel to an annulus $Q_2' \subseteq \ensuremath{\partial} V$ such that either $Q_2' \cap Q' = \emptyset$ or $Q' \subseteq Q_2'$. Let $\mathcal{Q}'$ be a collection of annuli in $\ensuremath{\partial} V$ to which the annuli of $\mathcal{Q}$ are parallel. We may assume $\mathcal{Q}'$ has been chosen so that it is partially ordered by nesting. 
Choose a meridional curve $m \subseteq \ensuremath{\partial} V$ for $V$ such that for each $Q' \in \mathcal{Q}'$ $m \cap Q' \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is a collection of spanning arcs of $Q'$. Thus there exists a meridional disk $D$ with $\ensuremath{\partial} D = m$ such that $D$ intersects each annulus $Q \in \mathcal{Q}$ in transverse arcs that are isotopic through a subdisk of $D$ onto $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Though there may be several, let us choose such a subdisk $D_Q$ of $D$ for each $Q \in \mathcal{Q}$ (and corresponding $Q' \in \mathcal{Q}'$). Since the $Q' \in \mathcal{Q}'$ are nested on $\ensuremath{\partial} V$, the disks $D_Q$ are nested on $D$. Consider a disk $D_Q$ outermost on $D$ and its corresponding annulus $Q$. Note that $\Int D_Q \cap S = \emptyset$ and $\ensuremath{\partial} D_Q \cap S \subseteq Q$ is not parallel to any corner of $Q$. Thus there is an isotopy of $S$ through $D_Q$ with support in $N(D_Q)$ as in \fullref{lem:isotopacrossdisk}. Assume $Q \in \mathcal{Q}$ is an annulus of type (a). Since one component of $\ensuremath{\partial} Q$ before the isotopy is wholly contained in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$, after the isotopy the components of $\ensuremath{\partial} Q \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ that contained $\ensuremath{\partial} \delta$ are joined. Thus $|S \cap T|$ is reduced contradicting the original minimality assumption on $|S \cap T|$. Hence $\mathcal{Q}$ contains no annuli of type (a). Assume $Q \in \mathcal{Q}$ is an annulus of type (b). Since $\ensuremath{\partial} Q \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is composed of two arcs before the isotopy, the isotopy exchanges them for two new arcs. Moreover, viewing the isotopy of $Q$ as a boundary compression along $D_Q$, it follows that $Q$ becomes a bigon parallel to a bigon of $F \backslash f$. A similar isotopy may be then performed for each remaining annulus of $\mathcal{Q}$. {\bf Case 2}\qua $S \cap V$ contains meridional disks. A meridional disk $R \in S \cap V$ must have an edge parallel to an edge of $B_1$ or $B_2$. If not, each edge of $\ensuremath{\partial} R$ would necessarily have the same label pair and must lie in the annulus $T_V$. Thus if $R$ is an $n$--gon, $\ensuremath{\partial} R$ would intersect a longitude of $V$ minimally $n$ times on $\ensuremath{\partial} V$. Since $R$ cannot be a monogon, this contradicts that $R$ is a meridional disk. Recall that by \fullref{lem:bigonsdisksandannuliinsolidtorus} each bigon of $S \cap V$ is parallel to either $B_1$ or $B_2$. Let $R$ be a nonbigon face of $S \cap V$ such that $R$ has an edge $e_R$ adjacent to an edge of a bigon $B$ of $S \cap V$. If $R$ is not a trigon, then by \fullref{lem:pushalongbigon} there exists an isotopy of $\Int S$ in $N(B \cup R)$ yielding in $V$ a bigon parallel to either $B_1$ or $B_2$ and a meridional disk of $V$ with two fewer corners than $R$. Perform such isotopies until each nonbigon face of $S \cap V$ with an edge adjacent to an edge of a bigon of $S \cap V$ is a trigon. Now after all the isotopies thus far, assume there exists a nonbigon, nontrigon face $R \in S \cap V$. Then since by assumption $S \cap V$ contains no annuli, \fullref{lem:bigonsdisksandannuliinsolidtorus} implies that $R$ is a meridional disk of $V$ with at least $5$ corners. By \fullref{lem:meridionaldisks} the trigons of $S \cap V$ are mutually parallel. 
The same lemma then also implies that $R$ has an edge $e_R$ parallel to an edge $e_B$ of $B_1$ or $B_2$ such that no edge of a trigon of $S \cap V$ lies between $e_R$ and $e_B$. But then in between $e_R$ and $e_B$ there must be an edge of a nonbigon, nontrigon face of $S \cap V$ that is adjacent to an edge of a bigon of $S \cap V$. This contradicts that we have isotoped $S$ so that each nonbigon face of $S \cap V$ with an edge adjacent to an edge of a bigon of $S \cap V$ is a trigon. \end{proof} \subsubsection{Arranging $S$ near $f$ in $X^-$} Perform the isotopy of \fullref{prop:groomingV}. By relabeling if necessary, we may assume that the edges of the trigons of $S \cap V$ that are not parallel to edges of bigons of $S \cap V$ connect the vertices $U_5$ and $U_6$. Thus because the edges of $g$ (which is bounded by an $S2$ cycle or an $S3$ cycle with label pair $\{3, 4\}$) obstruct them, there are no edges of $G_T$ connecting $U_1$ to $U_2$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Also, since the trigons of $S \cap V$ are all mutually parallel, there are at most three parallelism edge classes of $G_T$ in $T_V$ incident to $U_6$. \begin{prop}\label{prop:groomingf} Fixing $S \cap V$, one may isotop $\Int S$ so that \begin{itemize} \item if $R \in S \cap X^-$ has an edge with label pair $\{1, 6\}$ that does not lie between two parallel edges of $f$, then $R$ is parallel to $f$, \item if $R \in S \cap X^-$ is not parallel to $f$ but has an edge with label pair $\{1, 6\}$, then the corners of $R$ incident to this edge are also incident to edges with label pairs $\{1, 4\}$ and $\{5, 6\}$, \item there are no edges of $G_T$ with label pairs $\{1, 2\}$ or $\{3, 6\}$, \item each arc of $S \cap T$ is essential in both $S$ and $T$ and \item either $|S \cap T|$ remains minimized or $S \cap T$ contains essential simple closed curves in the annulus on $T$ between edges with label pairs $\{1, 6\}$ and $\{3, 4\}$. \end{itemize} \end{prop} \begin{proof} Recall that $f$ is either a bigon or a trigon bounded by an $S2$ cycle or an $S3$ cycle respectively. Note that $f$ has two parallel edges only if $f$ is a trigon. Let $\mathcal{E}$ be the collection of edges of $G_T$ with label pair $\{1, 6\}$ that lie in $T_V$, do not lie between two parallel edges of $f$, and are not an edge of a face of $S \cap X^-$ parallel to $f$. If $\mathcal{E} \neq \emptyset$ then let $R$ be a face of $S \cap X^-$ that has an edge $e_R \in \mathcal{E}$ adjacent to an edge of a face of $S \cap X^-$ that is parallel to $f$. Since $f$ is a $p$--gon bounded by an $Sp$ cycle for $p = 2$ or $3$, we claim that $R$ cannot be a $(p{+}1)$--gon. To the contrary, assume $R$ is a $(p{+}1)$--gon. The corners of $f$ divide $\ensuremath{\partial} H_{(6, 1)}$ into $p$ rectangles. These rectangles are joined cyclically by edges of $G_T$ with label pair $\{1, 6\}$. Since $R$ is a $(p{+}1)$--gon, not every edge of $R$ can have label pair $\{1, 6\}$. Therefore $R$ must have an edge with label pair $\{1, 4\}$, $\{5, 6\}$ or $\{3, 6\}$. Thus it must have a second edge with one of these three label pairs. Moving in both directions along $\ensuremath{\partial} R$ from $e_R$, counting a second edge if $f$ is a trigon, the final two edges of $R$ must emanate from $U_1$ and $U_6$ and lie in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} - T_V$. Thus the final two edges of $R$ must have label pairs $\{1, 4\}$ and $\{3, 6\}$. Since $H_{(3, 4)} \subseteq X^+$, $R$ cannot have a corner connecting these edges. 
Thus $R$ is not a $(p{+}1)$--gon. Since $R$ is not a $(p{+}1)$--gon, by either \fullref{lem:pushalongbigon} or \fullref{lem:pushalongS3cycle}, one may isotop $R$ to produce a new face $B'$ of $S \cap X^-$ which has $e_R$ as an edge that is parallel in $X-N(K)$ to $f$ and a new face $R'$ of $S \cap X^-$ with two fewer corners than $R$ while joining faces of $S \cap X^+$ along an arc $\tau \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash S$. By construction, $\tau$ connects an edge of $R$ incident to $U_1$ not in $T_V$ to an edge of $R$ incident to $U_6$ not in $T_V$. Therefore any arc of $S \cap T$ created by this isotopy is not contained in $T_V$. Recall that because edges of $F$ and $f$ delineate the boundary of $T_V$, there may be edges of $G_T$ parallel on $T$ to edges of $f$ which are not contained in $T_V$. Furthermore, since $e_R$ is now an edge of the face $B'$ which is parallel to $f$, it is no longer included in $\mathcal{E}$. Thus the isotopy strictly decreases $|\mathcal{E}|$. Since such an isotopy creates no monogons, each arc of $S \cap T$ is essential in both $S$ and $T$. The isotopy may only increase $|S \cap T|$ if the arc $\tau$ connects an edge of $R$ to itself. In this case neither $R$ nor $R'$ is a disk. The region $R'$ has a boundary component that lies as an essential simple closed curve in the annulus of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ between the edges of $G_T$ with label pairs $\{1, 6\}$ and those with label pairs $\{3, 4\}$. Repeatedly perform such isotopies until $\mathcal{E} = \emptyset$. If some $R \in S \cap X^-$ not parallel to $f$ has an edge $e_R$ with label pair $\{1, 6\}$, then $e_R$ must lie between two parallel edges of $f$. Otherwise, since $e_R$ is not contained in $T_V$, moving in both directions along $\ensuremath{\partial} R$ away from $e_R$, picking up a second edge that lies between two parallel edges of $f$ if $f$ is a trigon, we arrive at two edges of $R$ that lie in $T_V$. One edge has an end point on $U_6$ and the other has an end point on $U_1$. Since the edges of $G_T$ in $T_V$ must have label pair $\{1, 6\}$ or $\{5, 6\}$, the latter edge must be in $\mathcal{E}$. This contradicts that $\mathcal{E} = \emptyset$. Assume $e_R$ is an edge of a face $R \in S \cap X^-$ not parallel to $f$ such that $e_R$ lies between two parallel edges of $f$. Then the two corners of $R$ incident to $e_R$ are incident to edges each with an endpoint incident to either $U_1$ or $U_6$. Since these two edges cannot lie between two parallel edges of $f$, they cannot have label pair $\{1, 6\}$. Thus they have the label pairs $\{1, 4\}$ and $\{5, 6\}$. It similarly follows that there are no edges with label pair $\{3, 6\}$ due to the fact that there are no edges with label pair $\{1, 2\}$. \end{proof} \subsubsection{Arranging $S$ in $X^+ \backslash V$} Recall that $g$ is a $p$--gon of $S \cap X^+$ bounded by an $Sp$ cycle for $p = 2$ or $3$. Also recall that $W$ is the solid torus $X^+ \backslash (V \cup \bar{N}(K) \cup g)$. \begin{prop}\label{prop:groomingW} One may isotop $S$ so that each face of $S \cap W$ is either \begin{itemize} \item a bigon parallel to a bigon of $F \backslash f$, \item a $p$--gon parallel to the $p$--gon $g$, \item a meridional disk of $W$ with $2$ or $3$ corners or \item an annulus with a boundary component in each component $T_W$ and $T_W'$ of $T \cap W$. 
\end{itemize} Furthermore, this isotopy creates no monogons and preserves the results of the isotopies of \fullref{prop:groomingV} and \fullref{prop:groomingf}. \end{prop} \begin{proof} Let $\mathcal{E}$ be the collection of edges of $G_T$ with label pair either $\{3, 4\}$ or $\{1, 6\}$ that lie in $T_W$, do not lie between two parallel edges of $g$, and are not an edge of a face of $S \cap W$ parallel to either $g$ or a bigon of $F \backslash f$. Let $\mathcal{E}'$ be the collection of edges of $G_T$ with label pair either $\{3, 4\}$ or $\{2, 5\}$ that lie in $T_W'$, do not lie between two parallel edges of $g$, and are not an edge of a face of $S \cap W$ parallel to either $g$ or a bigon of $F \backslash f$. If $\mathcal{E} \neq \emptyset$ then let $R$ be a face of $S \cap W$ that has an edge $e_R \in \mathcal{E}$ adjacent to an edge of a face of $S \cap W$ that is parallel to either $g$ or a bigon of $F \backslash f$. If $e_R$ has label pair $\{1, 6\}$ then $R$ cannot be a trigon. If $g$ is a $p$--gon and $e_R$ has label pair $\{3, 4\}$ then $R$ cannot be a $(p{+}1)$--gon. This may be seen by attempting to connect a set of $3$ or $4$ corners in $\ensuremath{\partial} W$ with the edges of $R$. Therefore, by either \fullref{lem:pushalongbigon} or \fullref{lem:pushalongS3cycle}, one may isotop $R$ to produce a new face $B'$ of $S \cap W$ which has $e_R$ as an edge that is parallel to either $g$ or a bigon of $F \backslash f$ and a new face $R'$ of $S \cap W$ with two fewer corners than $R$ while joining faces of $S \cap X^-$ along an arc $\tau \subseteq T_W'$. Since $e_R$ is now an edge of the face $B'$ which is parallel to either $g$ or a bigon of $F$, it is no longer included in $\mathcal{E}$. Thus the isotopy strictly decreases $|\mathcal{E}|$. Such an isotopy creates no monogons, and hence each arc of $S \cap T$ is essential in both $S$ and $T$. Repeatedly perform such isotopies until $\mathcal{E} = \emptyset$. Because the only arcs of $S \cap T$ affected are ones in $T_W'$, the results of these isotopies do not have any effect upon those of \fullref{prop:groomingV} and \fullref{prop:groomingf}. By \fullref{prop:groomingf} there are no edges with label pair $\{3, 6\}$. Thus $\mathcal{E} = \emptyset$ implies $\mathcal{E}' = \emptyset$. Suppose otherwise: then there exists an edge $e_R \in \mathcal{E}'$ of a region $R \in S \cap W$ that is not parallel to either $g$ or a bigon of $F \backslash f$. If $e_R$ has label pair $\{2, 5\}$, then it is incident to a corner of $R$ on $\ensuremath{\partial} H_{(5, 6)}$. Therefore the next edge of $R$ must lie in $T_W$ and have an end point labeled $6$. Since there are no edges of $G_T$ with label pair $\{3, 6\}$, this edge must have label pair $\{1, 6\}$ and is thus an edge of $\mathcal{E}$ contradicting that $\mathcal{E} = \emptyset$. If $e_R$ has label pair $\{3, 4\}$ then, after the corner of $R$ incident to the end point of $e_R$ labeled $4$, the next edge must have label pair $\{3, 4\}$. If this edge is between two parallel edges of $g$, then the next edge too must have label pair $\{3, 4\}$. Regardless, one of these edges lies in $T_W$ and is thus an edge of $\mathcal{E}$ contradicting that $\mathcal{E} = \emptyset$. Similarly, it follows that there can be no edges with label pair $\{4, 5\}$. An edge with label pair $\{4, 5\}$ must be an edge of a face of $S \cap W$ with a corner on $\ensuremath{\partial} H_{(5, 6)}$ followed by an edge incident to $U_6$. 
This edge must then have label pair $\{1, 6\}$ and thus be an edge of $\mathcal{E}$ contradicting that $\mathcal{E} = \emptyset$. Assume a face $R$ of $S \cap W$ has an edge $e_R$ with label pair $\{1, 4\}$. Since there are no edges with label pair $\{3, 6\}$, no edge of $S \cap W$ can transversely cross $T_W$ in the direction opposite from $e_R$. Thus $R$ must be a meridional disk with one edge in $T_W$ and one edge in $T_W'$. If $g$ is a trigon, $R$ may have a third edge between two parallel edges of $g$. Thus $R$ is a meridional disk of $W$ with either two or three edges and two or three corners respectively. Note that because there may be two distinct edge classes of $G_T$ with label pair $\{1, 4\}$, there may be two parallelism classes of disks of $S \cap W$ that are meridional disks of $W$. Nevertheless, all such disks have the same number of corners. Furthermore, the number of edges with label pair $\{1, 4\}$ equals the number of edges with label pair $\{2, 5\}$. If $S \cap T_W$ or $S \cap T_W'$ contains a simple closed curve component $c$, then there can be no edges with label pair $\{1, 4\}$ or $\{2, 5\}$. Thus an edge of $G_T$ in $T_W$ or $T_W'$ must be an edge of a face of $S \cap W$ parallel to either $g$ or a bigon of $F \backslash f$. Let $R \in S \cap W$ be the face with $c \subseteq \ensuremath{\partial} R$. Since $c$ is a longitudinal curve of $W$, $R$ must be an annulus with its other boundary component contained in whichever of $T_W$ or $T_W'$ that does not contain $c$. Otherwise, if these two boundary components are contained in the same annulus, then $R$ is parallel into $T_W$ or $T_W'$. An isotopy of $S$ would push $R$ (and any other annuli of $S \cap W$ between $R$ and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$) into $X^-$. This would violate the minimality of $|S \cap T|$. Hence we may assume $S$ has been isotoped so that any annulus of $S \cap W$ has one boundary component in $T_W$ and its other one in $T_W'$. \end{proof} \subsubsection[Proof of \ref{prop:isotopS}]{Proof of \fullref{prop:isotopS}}\label{sec:proofofpropisotopS} \begin{proof} Perform the isotopies of $\Int S$ first in \fullref{prop:groomingV}, then in \fullref{prop:groomingf}, and finally in \fullref{prop:groomingW}. This arranges the edges of $G_T$ into the edge classes described in the proposition. The isotopies also maintain that the arcs of $S \cap T$ are essential in both $S$ and $T$. Recall that on $G_T$ there are $s$ edges around each vertex. Since there are $n$ edges with each label pair $\{1, 4\}$, $\{2, 3\}$, and $\{5, 6\}$ and no edges with label pair $\{1, 2\}$, $\{3, 6\}$, or $\{4, 5\}$, there must be a total of $s-n$ edges with each label pair $\{1, 6\}$, $\{2, 5\}$, and $\{3, 4\}$. Each collection of $s-n$ edges with label pair $\{1, 6\}$, $\{2, 5\}$, and $\{3, 4\}$ lies in an essential annulus. {\bf The faces of $S \cap X^+$}\qua The proof of Case 2 of \fullref{prop:groomingV} implies each edge of $G_T$ with label pair $\{5, 6\}$ is an edge of a trigon in $V$ whose other two edges have label pairs $\{1, 6\}$ and $\{2, 5\}$. Since there are $n$ edges with label pair $\{5, 6\}$, there must be $n$ trigons contained in $V$. Together the proofs of \fullref{prop:groomingV} and \fullref{prop:groomingW} imply that an edge with label pair $\{1, 6\}$ or $\{2, 5\}$ that is not an edge of a trigon in $V$ is an edge of a bigon that is parallel to a bigon of $F \backslash f$. 
Since there are $s-n$ edges with label pair $\{1, 6\}$, $n$ of which are edges of trigons in $V$, there are $s-2n$ edges with label pair $\{1, 6\}$ that are edges of bigons parallel to bigons of $F \backslash f$. Since a bigon of $F \backslash f$ has only one edge with label pair $\{1, 6\}$, each edge with label pair $\{1, 6\}$ that is not an edge of a trigon is an edge of a distinct bigon. Thus there are $s-2n$ bigons parallel to a bigon of $F \backslash f$. Furthermore, of the $s-n$ edges with label pair $\{2, 5\}$, $n$ belong to a trigon in $V$ and $s-2n$ belong to a bigon parallel to a bigon of $F \backslash f$. By \fullref{prop:groomingW} an edge with label pair $\{2, 3\}$ is an edge of a meridional disk of $W$ with $2$ or $3$ corners and hence $2$ or $3$ edges respectively. One corner runs along $\ensuremath{\partial} H_{(1, 2)}$ from the edge with label pair $\{2, 3\}$ to an edge with label pair $\{1, 4\}$. If $g$ is a bigon, then the meridional disk of $W$ is a bigon itself with its final corner on $\ensuremath{\partial} H_{(3, 4)}$. If $g$ is a trigon, then two edges of $g$ are parallel. Also, the corners of $g$ divide $\ensuremath{\partial} H_{(3, 4)}$ into three rectangles $\rho$, $\rho'$, and $\rho''$. The bigon $\delta$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ between the two parallel edges of $g$ connects two of these rectangles, say $\rho$ and $\rho'$. Either the edges with label pair $\{2, 3\}$ and $\{1, 4\}$ are incident to $\rho \cup \delta \cup \rho'$ or they are incident to $\rho''$. If they are incident to $\rho \cup \delta \cup \rho'$, then a meridional disk of $W$ is a trigon with a corner on each of $\rho$ and $\rho'$ and an edge that lies in $\delta$. If the edges are incident to $\rho''$, then a meridional disk of $W$ is a bigon with its final corner on $\rho''$. Since there are just $n$ edges with label pair $\{2, 3\}$, there are $n$ such meridional disks of $W$. They are either all bigons or all trigons. In either case, each has one edge with label pair $\{2, 3\}$ and one edge with label pair $\{1, 4\}$. If the disks are trigons, then their third edge has label pair $\{3, 4\}$. If $g$ is a bigon, then each edge with label pair $\{3, 4\}$ is an edge of a bigon parallel to $g$. Since there are $s-n$ edges with label pair $\{3, 4\}$, there are $(s-n)/2$ bigons parallel to $g$. If $g$ is a trigon, then each edge with label pair $\{3, 4\}$ that is not an edge of a meridional disk of $W$ is an edge of a trigon parallel to $g$. If the meridional disks of $W$ are bigons, then they have no edge with label pair $\{3, 4\}$. Hence there are $(s-n)/3$ trigons parallel to $g$. If the meridional disks of $W$ are trigons, then they each have one edge with label pair $\{3, 4\}$. Hence there are $(s-2n)/3$ trigons parallel to $g$. If $S \cap X^+$ contains no meridional disks of $W$, then $n=0$. Similarly, if there are annuli of $S \cap W$ then $n=0$. Let us now summarize the collection $S \cap X^+$. If $g$ is a bigon, then $S \cap X^+$ is composed of $n$ trigons contained in $V$, $s-2n$ bigons parallel to bigons of $F \backslash f$, $n$ bigons that are meridional disks of $W$, $(s-n)/2$ bigons parallel to $g$, and annuli if $n = 0$. Thus if $g$ is a bigon, then $S \cap X^+$ contains $(s-2n) + n + (s-n)/2 = 3(s-n)/2$ bigons, $n$ trigons, and possibly annuli if $n=0$. 
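As a quick consistency check on these counts, note that when $n \neq 0$ there are no annuli, so every edge of a face of $S \cap X^+$ is an edge of one of the bigons or trigons listed above, and indeed
\[
2 \cdot \frac{3(s-n)}{2} + 3 \cdot n = 3(s-n) + 3n = 3s,
\]
which matches the total of $3s$ edges of $G_S$ counted in the next subsection.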
If $g$ is a trigon, then $S \cap X^+$ is composed of $n$ trigons contained in $V$, $s-2n$ bigons parallel to bigons of $F \backslash f$, either $n$ bigons that are meridional disks of $W$ and $(s-n)/3$ trigons parallel to $g$ or $n$ trigons that are meridional disks of $W$ and $(s-2n)/3$ trigons parallel to $g$, and annuli if $n=0$. Thus if $g$ is a trigon, then $S \cap X^+$ contains either $(s-2n) + n = s-n$ bigons and $n + (s-n)/3 = (s+2n)/3$ trigons or $s-2n$ bigons and $n + n + (s-2n)/3 = (s+4n)/3$ trigons and possibly annuli if $n=0$. {\bf The faces of $S \cap X^-$}\qua By \fullref{prop:groomingf}, an edge with label pair $\{1, 6\}$ that does not lie between two parallel edges of $f$ is an edge of a face parallel to $f$. If $f$ is a bigon, then no edges of $f$ are parallel. Hence all $s-n$ edges with label pair $\{1, 6\}$ are edges of bigons parallel to $f$. Thus there are $(s-n)/2$ bigons parallel to $f$. If $f$ is a trigon, then each of the $n$ edges with label pair $\{5, 6\}$ is incident to a corner on $\ensuremath{\partial} H_{(6, 1)}$ which is in turn either incident to an edge with label pair $\{1, 4\}$ or incident to an edge with label pair $\{1, 6\}$ that lies between the parallel edges of $f$. In the former case the $s-n$ edges with label pair $\{1, 6\}$ are all edges of trigons parallel to $f$; hence there are $(s-n)/3$ trigons of $S \cap X^-$ parallel to $f$. In the latter case the remaining $s-2n$ edges with label pair $\{1, 6\}$ are edges of trigons parallel to $f$; hence there are $(s-2n)/3$ trigons of $S \cap X^-$ parallel to $f$. \end{proof} \subsection{Euler characteristic estimates} The arcs and simple closed curves of $S \cap T$ break $S$ into faces which lie either in $X^+$ or $X^-$. The arcs of $S \cap T$ form the edges of $G_S$. Because we are assuming $t = 6$, $G_S$ has a total of $3s$ edges. \begin{lemma}\label{lem:upperbound} Both $\displaystyle\sum_{R \in S \cap X^+} \chi (R) \leq 3s/2\ \ $ and $\displaystyle\sum_{R \in S \cap X^-} \chi (R) \leq 3s/2$. \end{lemma} \begin{proof} Since $\displaystyle \sum_{R \in S \cap X^+} \chi (R) \leq \#(\mbox{disks in } S \cap X^+)$ \begin{align*} 2 \cdot \#(\mbox{disks in } S \cap X^+) &\leq \#(\mbox{edges of disks in } S \cap X^+) \tag*{\hbox{and}}\\ &\leq \#(\mbox{edges of faces in } S \cap X^+) = 3s,\\ \sum_{R \in S \cap X^+} \chi (R) &\leq 3s/2. \tag*{\hbox{we have}}\end{align*} Replacing $X^+$ with $X^-$ achieves the other case. \end{proof} Note that equality is realized in \fullref{lem:upperbound} if $S \cap X^+$ (or $S \cap X^-$) is a collection of bigons. Utilizing \fullref{prop:isotopS} we can obtain an exact count for $\displaystyle\sum_{R \in S \cap X^+} \chi (R)$. \begin{lemma}\label{lem:betterupperbound} Assume $S$ has been isotoped in accordance with \fullref{prop:isotopS}. If $g$ is a bigon then $\displaystyle \sum_{R \in S \cap X^+} \chi (R) = 3s/2 - n/2$. If $g$ is a trigon then $\displaystyle\sum_{R \in S \cap X^+} \chi (R) = \begin{cases} 4s/3 - 2n/3 \\ \mbox{or} \\ 4s/3 - n/3. \end{cases} $ \end{lemma} \begin{proof} If $g$ is a bigon then $S \cap X^+$ is a collection of $3(s-n)/2$ bigons and $n$ trigons. \[ \sum_{R \in S \cap X^+} \chi (R) = 3(s-n)/2 + n = 3s/2 - n/2.\leqno{\hbox{Thus}}\] If $g$ is a trigon then $S \cap X^+$ is either a collection of $s-2n$ bigons and $(s+4n)/3$ trigons or a collection of $s-n$ bigons and $(s+2n)/3$ trigons. \[ \sum_{R \in S \cap X^+} \chi (R) = \begin{cases} s-2n + (s+4n)/3 = 4s/3 - 2n/3 \\ \mbox{or} \\ s-n + (s+2n)/3 = 4s/3 - n/3. 
\end{cases}\]\vskip-32pt\proved \end{proof} \begin{lemma} \label{lem:lowerfacesbound} If $g$ is a bigon then \[s + 1/2 + n/2 \leq \sum_{R \in S \cap X^-} \chi (R).\] If $g$ is a trigon then either \begin{align*}7s/6 + 1/2 + 2n/3 &\leq \sum_{R \in S \cap X^-} \chi (R)\\ 7s/6 + 1/2 + n/3 &\leq \sum_{R \in S \cap X^-} \chi (R).\tag*{\hbox{or}}\end{align*} In any case, $s < \displaystyle\sum_{R \in S \cap X^-} \chi (R)$. \end{lemma} \begin{proof} Since $S$ is a once punctured orientable surface of genus $g$, we have \[ \chi(S) = 1 - 2g = -3s + \sum_{R \in S \cap X^-} \chi (R) + \sum_{R \in S \cap X^+} \chi (R). \] Because $s \geq 4g-1$, $(s+1)/2 \geq 2g$. Thus \begin{align*}1 - (s+1)/2 \leq -3s + &\sum_{R \in S \cap X^-} \chi (R) + \sum_{R \in S \cap X^+} \chi (R)\qquad\\[-1ex] 5s/2 +1/2 - \sum_{R \in S \cap X^+} \chi (R) \leq &\sum_{R \in S \cap X^-} \chi (R).\tag*{\hbox{and so}}\end{align*} Due to \fullref{lem:betterupperbound}, if $g$ is a bigon \[5s/2 + 1/2 - (3s/2 - n/2) = s + 1/2 + n/2 \leq \sum_{R \in S \cap X^-} \chi (R).\] If $g$ is a trigon then one of the following occurs: \[\eqalignbot{5s/2 + 1/2 - (4s/3 - 2n/3) &= 7s/6 +1/2 + 2n/3 \leq \sum_{R \in S \cap X^-} \chi (R),\cr 5s/2 + 1/2 - (4s/3 - n/3) &= 7s/6 + 1/2 + n/3 \leq \sum_{R \in S \cap X^-} \chi (R).}\proved\] \end{proof} \subsubsection{The disks of $S \cap X^-$} The remainder of this section is devoted to showing $$\displaystyle \sum_{R \in S \cap X^-} \chi (R) \leq s.$$ To do so we must better understand the disks of $S \cap X^-$. \begin{lemma}\label{lem:nobigonortrigonon1456} There are no bigons or trigons of $S \cap X^-$ that have an edge with label pair $\{1, 4\}$ or $\{5, 6\}$. If there exist edges of $G_T$ with label pair $\{1, 6\}$ that are not edges of faces of $S \cap X^-$ parallel to $f$, then there are also no tetragons of $S \cap X^-$ with an edge having label pair $\{1, 4\}$ or $\{5, 6\}$. \end{lemma} \begin{proof} In light of \fullref{prop:groomingf}, if a face $\kreis{E} \in S \cap X^-$ has an edge with one of the label pairs $\{1, 4\}$ and $\{5, 6\}$, then it also has an edge with the other label pair on the same component of $\ensuremath{\partial} \kreis{E}$ connected either (a) by a single $(6, 1)$ corner or (b) by a sequence of a $(6, 1)$ corner, an edge with label pair $\{1, 6\}$, and another $(6, 1)$ corner. If $\kreis{E} \in S \cap X^-$ is not parallel to $f$ but has an edge with label pair $\{1,6\}$, then this edge must appear both on $\ensuremath{\partial} \kreis{E}$ as in situation (b) and on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ between two parallel edges of $f$ as arranged by \fullref{prop:groomingf}. Note that in either situation, in the other arc of $\ensuremath{\partial} \kreis{E}$ between the two edges with label pairs $\{1, 4\}$ and $\{5, 6\}$ there cannot be just one other edge. This is because such an edge would necessarily have label pair $\{4,5\}$ which is prohibited by \fullref{prop:isotopS}. Thus in situation (a) $\kreis{E}$ cannot be a trigon, and in situation (b) $\kreis{E}$ cannot be a tetragon (nor a bigon). Therefore we consider situation (a) where $\kreis{E}$ is a bigon and situation (b) where $\kreis{E}$ is a trigon. If $n=0$, then there are no edges with label pair $\{1, 4\}$ or $\{5, 6\}$, and the lemma is trivial. Therefore assume $n > 0$. By \fullref{prop:groomingV} there exists a trigon $\kreis{D}$ of $S \cap V$ that shares an edge with $\kreis{E}$. The three edges of $\kreis{D}$ have label pairs $\{1, 6\}$, $\{2, 5\}$, and $\{5, 6\}$. 
Let $\kreis{A}$ be a bigon of $S \cap V$ with an edge parallel and closest to the edge of $\kreis{D}$ with label pair $\{2, 5\}$. The other edge of $\kreis{A}$ has label pair $\{1, 6\}$. Let $\Delta$ be the bigon on $T$ between these edges of $\kreis{A}$ and $\kreis{D}$ with label pair $\{2, 5\}$. Let $\rho_{(1, 2)}$ and $\rho_{(5, 6)}$ be the rectangles on $\ensuremath{\partial} H_{(1, 2)}$ and $\ensuremath{\partial} H_{(5, 6)}$ respectively that contain the corners of $\kreis{A}$, $\kreis{D}$, and $\Delta$. We may assemble $\kreis{A}$, $\kreis{D}$, $\Delta$, $\rho_{(1, 2)}$, and $\rho_{(5, 6)}$ into a high disk for $K_{(5, 6)}$. See \fullref{fig:startinghighdisk}. \begin{figure} \caption{The construction of a high disk for $K_{(5, 6)}$} \label{fig:startinghighdisk} \end{figure} {\bf Situation (a)}\qua $\kreis{E}$ is a bigon. In this situation, $f$ may be either a bigon or a trigon. Let $\kreis{B}$ and $\kreis{C}$ be two bigons or trigons in $X^- - N(K)$ parallel to $f$ and disjoint from both $f$ and $\kreis{E}$ so that $\kreis{A} \cap \kreis{B}$ and $\kreis{C} \cap \kreis{D}$ are each edges with label pair $\{1, 6\}$. Note that $\kreis{B}$ and $\kreis{C}$ may actually be faces of $S \cap X^-$. Let $\rho_{(6, 1)}$ be the rectangle on $\ensuremath{\partial} H_{(6, 1)}$ with boundary containing corners of $\kreis{B}$, $\kreis{E}$, and $\rho_{(5, 6)}$. If $f$ is a bigon, then let $\smash{\rho_{(6, 1)}'}$ be the rectangle on $\ensuremath{\partial} H_{(6, 1)}$ with boundary containing corners of $\kreis{B}$, $\kreis{C}$, and $\rho_{(1, 2)}$. If $f$ is a trigon, then there is an edge of $\kreis{B}$ and an edge of $\kreis{C}$ that lie between the two parallel edges of $f$. Let $\delta$ be the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by these two edges of $\kreis{B}$ and $\kreis{C}$. Then let $\smash{\rho_{(6, 1)}'}$ be the rectangle on $\ensuremath{\partial} H_{(6, 1)}$ bounded by corners of $\kreis{B}$, $\kreis{C}$, $\rho_{(1, 2)}$, and $\delta \cap U_6$. Let $\rho_{(6, 1)}''$ be the rectangle on $\ensuremath{\partial} H_{(6, 1)}$ with boundary containing corners of $\kreis{B}$, $\kreis{C}$, and $\delta \cap U_1$. Assemble $\kreis{A}, \kreis{B}, \kreis{C}, \kreis{D}, \kreis{E}, \Delta$, $\rho_{(1, 2)}$, $\rho_{(5, 6)}$, $\rho_{(6, 1)}$, and $\smash{\rho_{(6, 1)}'}$ (and $\delta$ and $\smash{\rho_{(6, 1)}''}$ if $f$ is a trigon) to form the embedded disk $D$ as shown in \fullref{fig:twolongdisks}(a) and (b). The boundary of $D$ may be slightly extended into $H_{(4, 1)}$ so that it is composed of the arc $K_{(4, 1)}$ and an arc on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Thus $D$ is a long disk. By \fullref{longdisk}, such a disk cannot exist. \begin{figure} \caption{The construction of a long disk if $f$ is (a) a bigon and (b) a trigon} \label{fig:twolongdisks} \end{figure} {\bf Situation (b)}\qua $\kreis{E}$ is a trigon. Here $f$ is necessarily a trigon. Again, let $\kreis{B}$ and $\kreis{C}$ be two bigons or trigons in $X^- - N(K)$ parallel to $f$ and disjoint from both $f$ and $\kreis{E}$ so that $\kreis{A} \cap \kreis{B}$ and $\kreis{C} \cap \kreis{D}$ are each edges with label pair $\{1, 6\}$. Note that $\kreis{B}$ and $\kreis{C}$ may actually be faces of $S \cap X^-$. Let $\rho_{(6, 1)}$ be the rectangle on $\ensuremath{\partial} H_{(6, 1)}$ bounded by corners of $\kreis{B}$, $\kreis{C}$, and $\rho_{(1, 2)}$. 
Since $f$ is a trigon, there is an edge of $\kreis{B}$ and an edge of $\kreis{E}$ that lie between the two parallel edges of $f$. Let $\delta$ be the bigon on $T$ bounded by these two edges of $\kreis{B}$ and $\kreis{E}$. Note that there is an edge of $\kreis{C}$ between the two parallel edges of $f$, but it is not contained in $\delta$. Let $\smash{\rho_{(6, 1)}'}$ be the rectangle on $\ensuremath{\partial} H_{(6, 1)}$ bounded by corners of $\kreis{B}$, $\kreis{E}$, $\rho_{(5, 6)}$, and $\delta \cap U_1$. Let $\rho_{(6, 1)}''$ be the rectangle on $\ensuremath{\partial} H_{(6, 1)}$ with boundary containing corners of $\kreis{B}$, $\kreis{E}$, and $\delta \cap U_6$. Assemble $\kreis{A}, \kreis{B}, \kreis{C}, \kreis{D}, \kreis{E}, \Delta$, $\rho_{(1, 2)}$, $\rho_{(5, 6)}$, $\rho_{(6, 1)}$, $\smash{\rho_{(6, 1)}'}$, and $\smash{\rho_{(6, 1)}''}$ to form the embedded disk $D$, a ``lopsided bigon,'' as shown in \fullref{fig:lopsidedbigon}. The boundary of $D$ may be slightly extended into $H_{(4, 1)}$ so that it is composed of the arc $K_{(4, 1)}$, the arc $K_{(6, 1)}$, and two arcs, say $\tau_1$ and $\tau_2$, on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. \begin{figure} \caption{The construction of the ``lopsided bigon''} \label{fig:lopsidedbigon} \end{figure} Isotop the arc $K_{(4, 1)}$ across $D$ so that the decomposition $K = K_{(1, 4)} \cup K_{(4, 1)}$ becomes $K = K_{(1, 4)} \cup \tau_1 \cup K_{(6, 1)} \cup \tau_2$. Since the ends of $K_{(6, 1)}$ are in $X^-$ while the ends of $K_{(1, 4)}$ are in $X^+$, a further slight isotopy of $K$ in $N(\tau_1 \cup \tau_2)$ puts $K$ in Morse position without introducing new critical points. Since the extrema of $K$ are in a $1-1$ correspondence with a proper subset of the former extrema of $K$ (with the correspondence taking maxima to maxima and minima to minima), the width of $K$ has been reduced. In other words, the width of the two arcs $K_{(1, 4)} \cup K_{(6, 1)}$ is less than the width of the knot $K_{(1, 4)} \cup K_{(4, 1)}$. This contradicts that $K$ is in thin position. \end{proof} \begin{lemma}\label{lem:bigonlabels} A bigon of $S \cap X^-$ that is not parallel to $f$ must have edges with label pairs $\{2, 5\}$ and $\{3, 4\}$. \end{lemma} \begin{proof} Due to \fullref{lem:nobigonortrigonon1456}, a bigon of $S \cap X^-$ cannot have an edge with label pair $\{1, 4\}$ or $\{5, 6\}$. Thus an edge of a bigon of $S \cap X^-$ not parallel to $f$ may have label pair $\{2, 5\}$, $\{3, 4\}$, or $\{2, 3\}$. If a bigon of $S \cap X^-$ has an edge with label pair $\{2, 3\}$, then both edges must have this label pair. Such a bigon is thus bounded by an $S2$ cycle. Since $f$ is in $X^-$ too, this contradicts \fullref{oppositesides}. Because $H_{(3, 4)} \subseteq X^+$, the conclusion of the lemma follows. \end{proof} \begin{lemma}\label{lem:existenceofbigon} There must exist a bigon of $S \cap X^-$ whose edges have label pairs $\{2, 5\}$ and $\{3, 4\}$. \end{lemma} \begin{proof} Assume every bigon of $S \cap X^-$ is parallel to $f$. {\bf Case 1}\qua $f$ is a bigon.\qua By \fullref{prop:isotopS}, $(s-n)/2$ faces of $S \cap X^-$ are bigons parallel to $f$. Since by \fullref{lem:nobigonortrigonon1456} the edges with label pair $\{1, 4\}$ and $\{5, 6\}$ cannot be edges of bigons or trigons, at worst they are edges of tetragons. 
Since each edge with label pair $\{1, 4\}$ is connected to an edge with label pair $\{5, 6\}$ by a corner on $\ensuremath{\partial} H_{(6, 1)}$ (due to \fullref{lem:nobigonortrigonon1456}, because the edges of the bigon $f$ are not parallel on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$) and each such edge class has $n$ edges, together with $2n$ edges that have label pairs $\{2, 5\}$, $\{3, 4\}$, or $\{2, 3\}$, these may bound at most $n$ tetragons. The remaining $3s - 2 \cdot (s-n)/2 - 4 \cdot n = 2s - 3n$ edges all have label pair $\{2, 5\}$, $\{3, 4\}$, or $\{2, 3\}$. Any trigon with all its edges among these must have one edge with each of these three label pairs. Since there are only $n$ edges with label pair $\{2, 3\}$, there may be at most $n$ such trigons. The remaining $2s-3n - 3 \cdot n = 2s-6n$ edges may at worst bound tetragons. Thus \[ \sum_{R \in S \cap X^-} \chi (R) \leq (s-n)/2 + n + n + (2s-6n)/4 = s.\] {\bf Case 2}\qua $f$ is a trigon.\qua By \fullref{prop:isotopS}, either $(s-n)/3$ or $(s-2n)/3$ of the faces of $S \cap X^-$ are trigons parallel to $f$. {\bf Case 2a}\qua If there are $(s-n)/3$ trigons parallel to $f$, then each edge with label pair $\{1, 6\}$ is an edge of a trigon parallel to $f$. Then by \fullref{lem:nobigonortrigonon1456} the edges with label pair $\{1, 4\}$ and $\{5, 6\}$ cannot be edges of bigons or trigons. At worst they are edges of tetragons. Since each edge with label pair $\{1, 4\}$ is connected to an edge with label pair $\{5, 6\}$ by a corner on $\ensuremath{\partial} H_{(6, 1)}$ (because in order to have $(s-n)/3$ trigons parallel to $f$, each edge of $G_T$ between the parallel edges of $f$ must belong to one of these trigons---see the last paragraph of the proof of \fullref{prop:isotopS} in \fullref{sec:proofofpropisotopS}) and each such edge class has $n$ edges, together with $2n$ of the edges that have label pairs $\{2, 5\}$, $\{3, 4\}$, or $\{2, 3\}$, these may bound at most $n$ tetragons. The remaining $3s - 3 \cdot (s-n)/3 - 4 \cdot n = 2s - 3n$ edges all have label pair $\{2, 5\}$, $\{3, 4\}$, or $\{2, 3\}$. Any trigon with all its edges among these must have one edge with each of these three label pairs. Since there are only $n$ edges with label pair $\{2, 3\}$, there may be at most $n$ such trigons. The remaining $2s-3n - 3 \cdot n = 2s-6n$ edges may at worst bound $(2s-6n)/4$ tetragons. \[ \sum_{R \in S \cap X^-} \chi (R) \leq (s-n)/3 + n + n + (2s-6n)/4 = 5s/6 + n/6.\leqno{\hbox{Thus}}\] {\bf Case 2b}\qua If there are $(s-2n)/3$ trigons parallel to $f$ (and $n \neq 0$), then there exists an edge with label pair $\{1, 6\}$ that is not an edge of a trigon parallel to $f$ (indeed in $G_T$ between the parallel edges of $f$ there are $n$ edges that do not belong to faces of $S \cap X^-$ that are trigons parallel to $f$---again, see the last paragraph of the proof of \fullref{prop:isotopS} in \fullref{sec:proofofpropisotopS}). Then by \fullref{lem:nobigonortrigonon1456} the edges with label pair $\{1, 4\}$ and $\{5, 6\}$ cannot be edges of bigons, trigons, or tetragons. At worst they are edges of pentagons. Since each edge with label pair $\{1, 4\}$ is connected to an edge with label pair $\{5, 6\}$ by two corners on $\ensuremath{\partial} H_{(6, 1)}$ and an edge with label pair $\{1, 6\}$, and each such edge class has $n$ edges, together with $2n$ edges that have label pairs $\{2, 5\}$, $\{3, 4\}$, or $\{2, 3\}$, these may bound at most $n$ pentagons. 
The remaining $3s - 3 \cdot (s-2n)/3 - 5 \cdot n = 2s - 3n$ edges all have label pair $\{2, 5\}$, $\{3, 4\}$, or $\{2, 3\}$. Any trigon with all its edges among these, must have one edge with each of these three label pairs. Since there are only $n$ edges with label pair $\{2, 3\}$, there may be at most $n$ such trigons. The remaining $2s-3n - 3 \cdot n = 2s-6n$ edges may at worst bound $(2s-6n)/4$ tetragons. \[ \sum_{R \in S \cap X^-} \chi (R) \leq (s-2n)/3 + n + n + (2s-6n)/4 = 5s/6 - n/6.\leqno{\hbox{Thus}}\] Since necessarily $n<s$, we have $5s/6 - n/6 < 5s/6 + n/6 < s$. Thus \[ \sum_{R \in S \cap X^-} \chi (R) \leq s.\] However, by \fullref{lem:lowerfacesbound} \[s < \sum_{R \in S \cap X^-} \chi (R)\] regardless of whether $g$ is a bigon or trigon. This, in all cases, is a contradiction. Thus there must be a bigon of $S \cap X^-$ that is not parallel to $f$. By \fullref{lem:bigonlabels}, the edges of such a bigon must have label pairs $\{2, 5\}$ and $\{3, 4\}$. \end{proof} \begin{lemma}\label{lem:arrangingedges2345} Let $B$ be a bigon of $S \cap X^-$ whose edges $b_0$ and $b_1$ have label pairs $\{2, 5\}$ and $\{3, 4\}$ respectively. Let $R$ be an $m$--gon of $S \cap X^-$ whose edges have label pairs among $\{2, 5\}$, $\{3, 4\}$, and $\{2, 3\}$. For each $i = 0,1$ let $p_i$ be the number of edges of $R$ in the same edge class of $G_T$ as $b_i$, and let $\bar{p}_i$ be the number of edges of $R$ with the same label pair as $b_i$ but not in the same edge class as $b_i$. Assume that the edges of $B$ and $R$ together lie in an essential annulus $A$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If $M = \bar{N}(A \cup H_{(2, 3)} \cup H_{(4, 5)} \cup B \cup R)$ is a solid torus such that the core of $A$ is a longitude, then $R$ has exactly one edge with label pair $\{2, 3\}$. Thus for each $i = 0, 1$ we have $p_i + \bar{p}_i = (m-1)/2$. Furthermore either \begin{itemize} \item $\bar{p}_i = 0$ for $i = 0,1$, or \item $p_0 = \bar{p}_1$ and $p_1 = \bar{p}_0$. \end{itemize} \end{lemma} One may care to compare this lemma in the case that $R$ is a trigon with \fullref{lem:funnytrigon}. \begin{proof} Consider the genus $3$ handlebody $\bar{N}(A \cup H_{(2, 3)} \cup H_{(4, 5)})$ which we view as the torus $A \times [0,1]$ with the $1$--handles $H_{(2, 3)}$ and $H_{(4, 5)}$ attached to $A \times \{1\}$. We then obtain $M$ by attaching to $A \times [0,1] \cup \ensuremath{\partial} H_{(2, 3)} \cup \ensuremath{\partial} H_{(4, 5)}$ the $2$--handles $\smash{\bar{N}(B)}$ and $\smash{\bar{N}(R)}$. Assume $M$ is a solid torus such that the core of $A$ is a longitude. Let $E$ be a disk. By attaching the $2$--handle $\bar{N}(E)$ to $A \times [0,1]$ along the core of $A \times \{0\}$ we form a $3$--ball $W = \smash{\bar{N}(A \cup E)}$. Let $E^+$ and $E^-$ be the two sides of $\bar{N}(E)$ on $\ensuremath{\partial} W$. Notice that $\ensuremath{\partial} W - \Int(E^+ \cup E^-)$ may be identified with the annulus $A$. To the ball $W$ we attach the ``$1$--handle'' $H=\smash{H_{(2, 3)} \cup \bar{N}(B) \cup H_{(4, 5)}}$ forming the solid torus $M' = \smash{\bar{N}(A \cup H_{(2, 3)} \cup H_{(4, 5)} \cup B \cup E)}$. Note $M' \cup \smash{\bar{N}(R) = M \cup \bar{N}(E)}$ and $M \cup \smash{\bar{N}(E)} \cong B^3$. Therefore on $\ensuremath{\partial} M'$ the curve $\ensuremath{\partial} R$ must be isotopic to a curve that algebraically intersects the cocore of $H$ just once. Because $R$ has no edges with label pair $\{4, 5\}$, it must have exactly one edge with label pair $\{2, 3\}$. 
Since $R$ has exactly one edge with label pair $\{2, 3\}$ and no edges with label pair $\{4, 5\}$, its edges with label pairs $\{2, 5\}$ and $\{3, 4\}$ must alternate around $\ensuremath{\partial} R$ away from the $\{2, 3\}$ edge; it then follows that $p_i + \bar{p}_i = (m-1)/2$ for each $i = 0, 1$. Let $\Gamma$ be the subgraph of $G_T$ consisting of the edges of $B$ and $R$ and the vertices $U_2$, $U_3$, $U_4$, and $U_5$. Since $\Gamma \subseteq A$, $\Gamma \subseteq \ensuremath{\partial} W$. The positions of $E^+$ and $E^-$ relative to $\Gamma$ define how $\Gamma$ is contained in $A$. Unless $E^+$ and $E^-$ are contained in distinct bigons of $\ensuremath{\partial} W \backslash \Gamma$ whose edges lie in two distinct edge classes of $\Gamma$ on $\ensuremath{\partial} W$, either $\bar{p}_0=0$ or $\bar{p}_1=0$. Thus we assume both $\bar{p}_0 \neq 0$ and $\bar{p}_1 \neq 0$. This forces $\Gamma$ to appear on $A$ as in \fullref{fig:edgesofGammaonA}. It is then clear (eg\ by sliding over $B$ each of the $p_i$ edges of $R$ that are parallel to $b_i$) that in order for $M$ to be a solid torus, $p_0 = \bar{p}_1$ and $p_1 = \bar{p}_0$. \end{proof} \begin{figure} \caption{The edges of $\Gamma$ on $A$ when $\bar{p}_0 \neq 0$ and $\bar{p}_1 \neq 0$} \label{fig:edgesofGammaonA} \end{figure} \begin{lemma} \label{lem:onlybigons} The only disks of $S \cap X^-$ whose edges have label pairs among $\{2, 5\}$, $\{3, 4\}$, and $\{2, 3\}$ are bigons. Furthermore all such bigons are parallel. \end{lemma} \begin{proof} \fullref{lem:existenceofbigon} implies there exists a bigon $B$ of $S \cap X^-$ whose edges have label pairs $\{2, 5\}$ and $\{3, 4\}$. By \fullref{lem:parallelbigons} any bigon with edges having label pairs among $\{2, 5\}$, $\{3, 4\}$, and $\{2, 3\}$ is parallel to $B$. Assume $n=0$. Then there are no edges with label pair $\{2,3\}$. Thus, due to the existence of $B$, any boundary component of a face of $S \cap X^-$ with edges having all its label pairs among $\{2, 5\}$ and $\{3, 4\}$ may only have two edges. To see this, let $A_{\{2,5\}}$ and $A_{\{3,4\}}$ be narrow essential annuli in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ which contain all edges of $G_T$ with label pair $\{2,5\}$ and $\{3, 4\}$ respectively and the pairs of fat vertices $\{U_2, U_5\}$ and $\{U_3, U_4\}$ respectively. Abstractly cap off the boundary components of $A_{\{2,5\}}$ and $A_{\{3,4\}}$ and delete the interiors of the fat vertices to form annuli $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{2,5\}}$ and $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{3,4\}}$ in which the edges of $G_T$ they contain run from one boundary component to the other. Then joining $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{2,5\}}$ and $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{3,4\}}$ with the annuli $\ensuremath{\partial} H_{(2,3)} \backslash (U_2 \cup U_3)$ and $\ensuremath{\partial} H_{(4,5)} \backslash (U_4 \cup U_5)$ (with the corners of faces of $S \cap X^-$ on them) forms a torus. The edges and corners of $S \cap X^-$ on this torus form a $1$--manifold in which the closed components all meet each $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{2,5\}}$, $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{3,4\}}$, $\ensuremath{\partial} H_{(2,3)}$, and $\ensuremath{\partial} H_{(4,5)}$ the same number of times. Since $\ensuremath{\partial} B$ is one such component, any other such component must meet $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{2,5\}}$ and $\smash{\mskip3.5mu\what{\mskip-3.5mu A\mskip1mu}\mskip-1mu}_{\{3,4\}}$ each just once as well, ie\ must have just two edges. 
Therefore any boundary component of a face of $S \cap X^-$ with edges having all its label pairs among $\{2, 5\}$ and $\{3, 4\}$ is such a closed component of this torus and hence must have just two edges. Therefore a disk of $S \cap X^-$ whose edges have label pairs among $\{2, 5\}$ and $\{3, 4\}$ must be a bigon. Since \fullref{lem:parallelbigons} implies this bigon is parallel to $B$, the lemma at hand is satisfied. Assume $n>0$. Construct a high disk $D^+$ for the arc $K_{(5,6)}$ as in \fullref{lem:nobigonortrigonon1456}. See \fullref{fig:startinghighdisk}. Assume $R$ is an $m$--gon of $S \cap X^-$ for $m>2$ with edges having label pairs among $\{2, 3\}$, $\{2, 5\}$, and $\{3, 4\}$. {\bf Case 1}\qua The edges of $B$ and $R$ lie in an essential annulus $A$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. By taking $A$ smaller if necessary we may assume the interior of the arc $D^+ \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is disjoint from $A$. Let $M = \bar{N}(A \cup H_{(2, 3)} \cup H_{(4, 5)} \cup B \cup R)$. Due to the existence of $B$, we may assume $R$ has at least one edge with label pair $\{2, 3\}$ or else the argument assuming $n=0$ applies. Therefore $\ensuremath{\partial} R$ is a nonseparating curve on $\ensuremath{\partial} \bar{N}(A \cup H_{(2, 3)} \cup H_{(4, 5)} \cup B)$, and $\ensuremath{\partial} M$ is a torus. Since $M$ is contained in the solid torus $X^- - N(f \cup H_{(1,6)})$ in which the core of $A$ is longitudinal, $M$ must also be a solid torus such that the core of $A$ is a longitude in $M$. Let $b_i$, $p_i$, and $\bar{p}_i$ for $i = 0,1$ be as in \fullref{lem:arrangingedges2345}. Then by \fullref{lem:arrangingedges2345} $R$ has exactly one edge with label pair $\{2, 3\}$ and either $\bar{p}_0=0$, $\bar{p}_1=0$, or both $p_0 = \bar{p}_1$ and $p_1 = \bar{p}_0$. In particular, either $p_0 \neq 0$ or $p_1 \neq 0$ and always $\bar{p}_0 \leq p_1$. Note that the edges of $R$ parallel to $b_0$ lie on the opposite side of $B$ from the edges parallel to $b_1$. Beginning with the edge of $R$ adjacent to $b_0$ label the $p_0$ edges of $R$ parallel to $b_0$ as $e_j$ for $j = 1, \dots, p_0$. Then beginning with the edge of $R$ adjacent to $b_1$ label the first $\bar{p}_0$ edges that are parallel to $b_1$ as $e_j$ for $j = p_0 +1, \dots, (m-1)/2$. If either $p_0 = 0$ or $p_1 = 0$ then all edges $e_j$ for $j = 1, \dots, (m-1)/2$ are of the second type or first type respectively. Also for each $j$ let $B_j$ be a copy of $B$. For $j = 1, \dots, p_0$, let $\Delta_j$ be the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by $e_j$ and $b_0$. Let $\rho_j'$ and $\rho_j''$ be the rectangles on $\ensuremath{\partial} H_{(2, 3)}$ and $\ensuremath{\partial} H_{(4, 5)}$ respectively bounded by the corners of $R$ incident to $e_j$, the corners of $\Delta_j$, the corners of $B_j$, and the appropriate final arcs of $\ensuremath{\partial} U_3$ and $\ensuremath{\partial} U_4$. Name these final arcs $\gamma_j'$ and $\gamma_j''$ of $\rho_j'$ and $\rho_j''$ respectively. For $j = p_0+1, \dots, (m-1)/2$, let $\Delta_j$ be the bigon on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by $e_j$ and $b_1$. 
Let $\rho_j'$ and $\rho_j''$ be the rectangles on $\ensuremath{\partial} H_{(2, 3)}$ and $\ensuremath{\partial} H_{(4, 5)}$ respectively bounded by the corners of $R$ incident to $e_j$, the corners of $\Delta_j$, the corners of $B_j$, and the appropriate final arcs of $\ensuremath{\partial} U_2$ and $\ensuremath{\partial} U_5$. Name these final arcs $\gamma_j'$ and $\gamma_j''$ of $\rho_j'$ and $\rho_j''$ respectively. Form the disks $D_j = \Delta_j \cup B_j \cup \rho_j' \cup \rho_j''$ for each $j = 1, \dots, (m-1)/2$. Sequentially attach each disk $D_j$ to $R$ along $e_j$ and the corners incident to it and slightly isotop the current attached disk off of the remaining disks. The resulting complex is a low disk $D^-$ for $K_{(2, 3)}$. (Alternatively, one may conceive of $D^-$ as the result of ``sliding'' each edge $e_j$ over the bigon $B$.) By construction, the arcs $\gamma_j''$ are disjoint from the edges of $G_T$ with label pair $\{5, 6\}$. It follows that $D^+ \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and $D^- \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ have disjoint interiors. Thus by \fullref{highdisklowdisk} the existence of the pair of disks $D^+$ and $D^-$ contradicts the thinness of $K$. {\bf Case 2}\qua The edges of $B$ and $R$ lie in a disk $D$ in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Consider the solid torus $M = \bar{N}(D \cup H_{(2, 3)} \cup H_{(4, 5)} \cup B)$. If $R$ has more than one edge with label pair $\{2, 3\}$ then $M \cup \bar{N}(R)$ is a punctured lens space contained in a solid torus which cannot occur. If $R$ has no edges with label pair $\{2, 3\}$ then the argument assuming $n=0$ applies. Thus we may assume $R$ has just one edge with label pair $\{2, 3\}$. The procedure of Case 1 of constructing a low disk $D^-$ that is disjoint from $D^+$ now applies with $p_0 = p_1 = (m-1)/2$ and $\bar{p}_0 = \bar{p}_1 = 0$. \end{proof} \begin{prop}\label{prop:tnot6} $t \neq 6$ \end{prop} \begin{proof} Assuming $t=6$, we shall find contradictions to \fullref{lem:lowerfacesbound}. If a disk of $S \cap X^-$ is neither a bigon nor parallel to $f$, then by \fullref{lem:onlybigons} it must have an edge with label pair $\{1, 4\}$ and an edge with label pair $\{5, 6\}$. Since there are only $n$ edges with label pair $\{1, 4\}$ (and only $n$ edges with label pair $\{5, 6\}$), there may be at most $n$ disks that are neither bigons nor parallel to $f$. If a disk of $S \cap X^-$ is a bigon yet not parallel to $f$, then again by \fullref{lem:onlybigons} its edges must have label pairs $\{2, 5\}$ and $\{3, 4\}$, and all such bigons are parallel. Thus there may be at most as many bigons of $S \cap X^-$ not parallel to $f$ as the number of edges in an edge (parallelism) class with label pair $\{3, 4\}$. As a result of the isotopies of \fullref{prop:isotopS}, if $g$ is a bigon then each edge class with label pair $\{3,4\}$ contains $(s-n)/2$ edges. If $g$ is a trigon then either there are $(s-n)/3$ trigons parallel to $g$ whose edges account for all edges with label pair $\{3,4\}$ or there are $(s-2n)/3$ trigons parallel to $g$ whose edges along with an edge from each of the $n$ trigons that are meridional disks of $W$ account for all edges with label pair $\{3, 4\}$. In the former case, there are at most $2(s-n)/3$ edges in an edge class with label pair $\{3,4\}$. In the latter case, there are at most $2(s-2n)/3+n$ edges in an edge class with label pair $\{3,4\}$. 
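Note that the second of these two bounds dominates the first, since
\[
\Big(\frac{2(s-2n)}{3}+n\Big)-\frac{2(s-n)}{3}=\frac{n}{3}\ \geq\ 0,
\]
so when $g$ is a trigon the bound $2(s-2n)/3+n$ may be used below regardless of which of the two situations occurs.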
Therefore if $g$ is a bigon there are at most $(s-n)/2$ bigons of $S \cap X^-$ not parallel to $f$, and if $g$ is a trigon there are at most $2(s-2n)/3+n$ bigons of $S \cap X^-$ not parallel to $f$. Any other disk of $S \cap X^-$ must be parallel to $f$. {\bf Case 1}\qua $g$ is a bigon.\qua By \fullref{lem:lowerfacesbound}, \[s+ \frac{1}{2} + \frac{n}{2} \leq \sum_{R \in S \cap X^-} \chi(R). \] {\bf Case 1a}\qua $f$ is a bigon.\qua Since $f$ is a bigon, by \fullref{prop:isotopS} there are $(s-n)/2$ faces of $S \cap X^-$ parallel to $f$. Since $g$ is a bigon, there are at most $(s-n)/2$ bigons of $S \cap X^-$ not parallel to $f$. Since there are at most $n$ disks of $S \cap X^-$ that are not bigons (parallel to $f$ or otherwise), \begin{align*} \sum_{R \in S \cap X^-} \chi(R) &= \sum_{\mbox{{\scriptsize disks} }R \in S \cap X^-} \chi(R) + \sum_{\mbox{{\scriptsize nondisks} }R \in S \cap X^-} \chi(R) \\ &\leq \Big(\frac{s-n}{2} + \frac{s-n}{2} + n\Big) + 0\\ & = s. \end{align*} But $s+ \frac{1}{2} + \frac{n}{2} \leq s$ is a contradiction. {\bf Case 1b}\qua $f$ is a trigon.\qua Since $f$ is a trigon, by \fullref{prop:isotopS} there are at most $(s-n)/3$ faces of $S \cap X^-$ parallel to $f$. Since $g$ is a bigon, there are at most $(s-n)/2$ bigons of $S \cap X^-$ not parallel to $f$. Since there are at most $n$ disks of $S \cap X^-$ that are neither bigons nor trigons parallel to $f$, \begin{align*} \sum_{R \in S \cap X^-} \chi(R) &= \sum_{\mbox{{\scriptsize disks} }R \in S \cap X^-} \chi(R) + \sum_{\mbox{{\scriptsize nondisks} }R \in S \cap X^-} \chi(R) \\ &\leq \Big(\frac{s-n}{3} + \frac{s-n}{2} + n\Big) + 0\\ & = \frac{5}{6}s+\frac{n}{6}. \end{align*} But $s+ \frac{1}{2} + \frac{n}{2} \leq \frac{5}{6}s+\frac{n}{6}$ is a contradiction. {\bf Case 2}\qua $g$ is a trigon.\qua By \fullref{lem:lowerfacesbound}, \[\frac{7}{6}s+ \frac{1}{2} + \frac{n}{3} \leq \sum_{R \in S \cap X^-} \chi(R). \] {\bf Case 2a}\qua $f$ is a bigon.\qua Since $f$ is a bigon, by \fullref{prop:isotopS} there are $(s-n)/2$ faces of $S \cap X^-$ parallel to $f$. Since $g$ is a trigon, there are at most $2(s-2n)/3+n$ bigons of $S \cap X^-$ not parallel to $f$. Since there are at most $n$ disks of $S \cap X^-$ that are not bigons (parallel to $f$ or otherwise), \begin{align*} \sum_{R \in S \cap X^-} \chi(R) &= \sum_{\mbox{{\scriptsize disks} }R \in S \cap X^-} \chi(R) + \sum_{\mbox{{\scriptsize nondisks} }R \in S \cap X^-} \chi(R) \\ &\leq \Big(\frac{s-n}{2} + 2\frac{s-2n}{3}+n + n\Big) + 0\\ & = \frac{7}{6}s+\frac{n}{6}. \end{align*} But $\frac{7}{6}s+ \frac{1}{2} + \frac{n}{3} \leq \frac{7}{6}s+\frac{n}{6}$ is a contradiction. {\bf Case 2b}\qua $f$ is a trigon.\qua Since $f$ is a trigon, by \fullref{prop:isotopS} there are at most $(s-n)/3$ faces of $S \cap X^-$ parallel to $f$. Since $g$ is a trigon, there are at most $2(s-2n)/3+n$ bigons of $S \cap X^-$ not parallel to $f$. Since there are at most $n$ disks of $S \cap X^-$ that are not bigons (parallel to $f$ or otherwise), \begin{align*} \sum_{R \in S \cap X^-} \chi(R) &= \sum_{\mbox{{\scriptsize disks} }R \in S \cap X^-} \chi(R) + \sum_{\mbox{{\scriptsize nondisks} }R \in S \cap X^-} \chi(R) \\ &\leq \Big(\frac{s-n}{3} + 2\frac{s-2n}{3}+n + n\Big) + 0\\ & = s+\frac{n}{3}. \end{align*} But $\frac{7}{6}s+ \frac{1}{2} + \frac{n}{3} \leq s+\frac{n}{3}$ is a contradiction. \end{proof} \section[The case t=4]{The case $t=4$}\label{sec:bridgeposition} We continue to assume that $K$ is in thin position and that $r \geq 4$.
In light of \fullref{prop:tnot6}, we have that $|K \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}| = t \leq 4$. If $t=4$ we will show that $K$ must also be in bridge position. Then since each graph $\smash{G_S^x}$ for $x = 1, 2, 3, 4$ must have a bigon or a trigon by \fullref{musthavebigons}, we will exhibit a width reducing isotopy of $K$ that contradicts its supposed thinness. The following lemma relating bridge position and thin position is an immediate consequence of definitions. \begin{lemma}\label{thintobridge} If a knot $K$ in thin position has no thin levels, then it is in bridge position. \end{lemma} To show that thin position of $K$ is bridge position, we must consider the existence of a second thick level. If $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ is another thick level, let $T' = \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' - N(K)$. We may assume that $S$ has been isotoped (with support outside a neighborhood of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$) so that $S$ and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ intersect transversely and every arc of $S \cap T'$ is essential in both $S$ and $T'$. We then form the fat vertexed graphs $G_T'$ and $G_S'$. Since $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ are disjoint, the edges of the graphs $G_S$ and $G_S'$ are disjoint. Hence the graphs $G_S$ and $G_S'$ may be considered simultaneously on $\what{S}$. \begin{lemma}\label{nextthicklevel} Assume $f$ is the face of an $Sp$ cycle for $p=2$ or $3$ with label pair $\{x, x{+}1\}$ so that $\ensuremath{\partial} f \cap K = K_{(x,\,x+1)}$. Further assume that $f$ is below (resp.\ above) $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. If $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ is another thick level below (resp.\ above) $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ then $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' \cap K_{(x,\,x+1)} \neq \emptyset$. In particular $f$ contains the face of an $Sp$ cycle of $G_S'$. \end{lemma} \begin{proof} We show the case that $f$ is below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. By \fullref{GT:L2.1} the edges of $f$ lie in an essential annulus $A$ on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and the core of $A$ runs $p$ times longitudinally in $X^-$. If $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ is disjoint from $f$, then $f$ is disjoint from $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{-\infty}$. Thus the curve $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{-\infty}$ must be isotopic to a curve on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ that is disjoint from $A$. This contradicts that $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}_{-\infty}$ runs just once in the longitudinal direction of $X^-$. Therefore $f \cap \what{T'} \neq \emptyset$. If $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ is disjoint from $K_{(x,\, x{+}1)}$, then $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ intersects $f$ in simple closed curves. A simple closed curve of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' \cap f$ innermost on $f$ bounds a disk. This contradicts \fullref{circlesofintersection}. 
Thus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' \cap K_{(x,\, x{+}1)} \neq \emptyset$. Since the each arc of $S \cap T'$ is essential in $S$, each edge of $G_S'$ on $f$ must be parallel on $f$ to one of the edges of $\ensuremath{\partial} f$. Note that $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ intersects $K_{(x,\, x{+}1)}$ an even number of times. Each of these intersections appear on each corner of $f$. Since $p = 2$ or $3$, it is clear that $f$ contains an $Sp$ cycle of $G_S'$. \end{proof} \begin{thm}\label{thm:t=2} $t=2$ \end{thm} \begin{proof} By \fullref{prop:tnot6}, $t \leq 4$. Assume $t=4$. By \fullref{musthavebigons} for each of the labels $y \in \mathbf{t} = \{1, 2, 3, 4\}$ the graph $G_S^y$ must have a bigon or a trigon. Due to \fullref{oppositesides}, since $t=4$ any two distinct label pairs of $S2$ and $S3$ cycles in $G_S$ must intersect. Therefore the set of label pairs of $S2$ and $S3$ cycles in $G_S$ has cardinality at most two. Hence the $S2$ and $S3$ cycles in $G_S$ account for at most three of the four labels. Thus for some $y \in \mathbf{t}$ the graph $G_S^y$ contains an extended $S2$ cycle, an extended $S3$ cycle, or a forked extended $S2$ cycle. Let $F$ be the face of this extended $S2$ cycle, extended $S3$ cycle, or forked extended $S2$ cycle. Let $\sigma$ be the Scharlemann cycle contained in the interior of $F$ and let $f$ be the face of $G_S$ that $\sigma$ bounds. By relabeling we may assume that $f$ lies below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and that $\sigma$ has label pair $\{1, 4\}$. Since $r \geq 4$, by \fullref{GT:L2.1} the edges of $\sigma$ lie in an essential annulus in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. The arc $K_{(2, 3)}$ is the only arc of $K$ not appearing among the corners of $F$. If $F$ is bounded by an extended $S2$ cycle or extended $S3$ cycle, then the arcs $K_{(1, 2)}$ and $K_{(3, 4)}$ lie on an annulus formed from bigons of $F \backslash f$. This annulus is necessarily parallel onto $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ through one of the two solid tori it cuts off from $X^+$. Since $K$ is in thin position, the arcs $K_{(1, 2)}$ and $K_{(3, 4)}$ each have only one critical point (a maximum) in their interiors. If $F$ is bounded by a forked extended $S2$ cycle, then the arcs $K_{(1, 2)}$ and $K_{(3, 4)}$ lie on the complex formed from the bigon and trigon of $F \backslash f$. As in \fullref{thinningtwoforks}, we may construct a high disk $D_g$ for one of the two arcs of $K$. Since the two arcs of $K \cap X^+$ are joined by a bigon of $G_S$, it follows that they each have only one critical point in their interiors. Regardless of whether $F$ is bounded by an extended Scharlemann cycle or a forked extended Scharlemann cycle, the two arcs $K_{(1, 2)}$ and $K_{(3, 4)}$ each have high disks that intersect $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in an annulus $R$ that contains edges of $f$ in just one of its boundaries. \fullref{highestmin} applies to show that the arc of $K$ containing the highest minimum below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounds a low disk $D$ with an arc of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ such that the interior of $D$ is disjoint from $F$ and $K$. By \fullref{nextthicklevel}, if $K_{(4, 1)}$ contains the highest minimum then there cannot be another thick level below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. 
Thus $K$ is in bridge position. Therefore the arc $K_{(2, 3)}$ has a low disk $D$ with interior disjoint from $f$. Alternatively, if we assume the arc $K_{(2, 3)}$ contains the highest minimum, then by \fullref{highestmin} $K_{(2, 3)}$ again has a low disk $D$ with interior disjoint from $f$. Since the high disks of the arcs $K_{(1, 2)}$ and $K_{(3, 4)}$ intersect $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in $R$ and the interior of the low disk $D$ for the arc $K_{(2, 3)}$ is disjoint from $f$, these three arcs are contained within a solid torus $R' \times [-\epsilon, \epsilon]$ for some annulus $R' \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ that extends $R$ to contain $D \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ and some $\epsilon > 0$. As $R$ contains edges of $f$ in just one of its boundaries, so does $R'$. Hence $R' \times [-\epsilon, \epsilon]$ may be assumed to have its interior disjoint from $f$. Via the two high disks, isotop the arcs $K_{(1, 2)}$ and $K_{(3, 4)}$ onto $R$. Then, along the component of $\ensuremath{\partial} R$ in $\Int R'$, pivot $D$ $180^\circ$ in $R' \times [-\epsilon, \epsilon]$ through $R' \backslash R$ so that it is contained in $X^+$. Finally, a further small isotopy to make $K_{(1, 2)}$ and $K_{(3, 4)}$ transverse to the height function will reduce the width of $K$. See \fullref{fig:2bridgeisotopy}. This contradicts the thinness of $K$. Hence $t \neq 4$. \end{proof} \begin{figure} \caption{The isotopy of $K$ from a $2$--bridge position to a $1$--bridge position} \label{fig:2bridgeisotopy} \end{figure} \begin{lemma}\label{lem:t=2} If $t=2$ then $K$ is at most $1$--bridge. \end{lemma} \begin{proof} Since the first critical point of $K$ above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is a maximum, there can be no other critical points of $K$ above $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Similarly, the arc of $K$ below $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ may have no critical points other than a minimum. Hence there are no thin levels. By \fullref{thintobridge} $K$ is in bridge position. Since $t=2$, $b(K) \leq 1$. \end{proof} \section[The case r=3]{The case $r=3$}\label{sec:genus1case} In this section we assume $r=3$. Since $r \geq s \geq 4g-1$, either $g=0$ or $r = s$ and $g=1$. If $g=0$ then $N(S) \cup N(K)$ is a punctured lens space of order $3$ and hence $K$ is $0$--bridge. Thus we assume $g=1$. Goda and Teragaito have shown the following. \begin{thm}[Goda--Teragaito~\cite{gt:dsokwylsagok}] No Dehn surgery on a genus one, hyperbolic knot in $S^3$ gives a lens space. \end{thm} We adapt their proof of this theorem to our case, in which $r=3$ and $g=1$. \begin{thm} \label{thm:r=3} If $r=3$ then $b(K) \leq 1$. \end{thm} To prove this, we follow Section~5 of \cite{gt:dsokwylsagok}. First we need to adapt some lemmas from Section~3 of \cite{gt:dsokwylsagok}. \begin{lemma}[cf {{\cite[Lemma~3.1]{gt:dsokwylsagok}}}] \label{lem:GT3.1} Let $\{e_1, e_2, \dots, e_t\}$ be mutually parallel edges in $G_S$ numbered successively. If $r$ is odd, then $\{e_{t/2}, e_{t/2+1}\}$ is an $S2$ cycle. \end{lemma} \begin{proof} As in the first paragraph of the proof of Lemma~3.1 of \cite{gt:dsokwylsagok} we assume $e_i$ has the label $i$ at one endpoint for $1 \leq i \leq t$ and that $e_{2j}$ has the label $1$ at its other endpoint for some $j < t/2$. Thus we obtain two $S2$ cycles $\sigma_1$ and $\sigma_2$ with disjoint label pairs.
Similarly we let $f_i$ be the face of $G_S$ bounded by $\sigma_i$ for $i=1,2$. Since $r \neq 2$, by \fullref{GT:L2.3} the edges of each $\sigma_1$ and $\sigma_2$ do not lie in a disk. Thus they lie in an essential annulus. The proof of \fullref{oppositesides} then carries through to show that $f_1$ and $f_2$ lie on opposite sides of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Furthermore, by disk exchanges outside of $N(\sigma_1 \cup \sigma_2)$, we may construct a Heegaard torus $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ from $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ so that the edges of $\sigma_i$ lie in an essential annulus in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ and $\Int f_i \cap \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}' = \emptyset$ for $i=1,2$. We then construct two disjoint M\"obius bands $B_1$ and $B_2$ on either side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ as in the proof of Lemma~2.5 of \cite{gt:dsokwylsagok} and the beginning of our \fullref{sec:annuliandtrees}. Since $\ensuremath{\partial} B_1$ and $\ensuremath{\partial} B_2$ are parallel on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$, they divide $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}'$ into two annuli. Let $A$ be one of these annuli. Then $B_1 \cup A \cup B_2$ is an embedded Klein bottle in our lens space $X$. Thus the order of $X$ is even. This contradicts the fact that $r$ is odd. Therefore the edge $e_t$ has the label $1$ at its other endpoint and $\{e_{t/2}, e_{t/2+1}\}$ is an $S2$ cycle. \end{proof} \begin{lemma}{\rm (cf\ \cite[Lemma~3.2]{gt:dsokwylsagok})}\qua\label{lem:GT3.2} If $r$ is odd, then $G_S$ does not contain more than $t$ mutually parallel edges. \end{lemma} \begin{proof} In the proof of Lemma~3.2 of \cite{gt:dsokwylsagok}, replace each occurrence of Lemma~3.1 with our \fullref{lem:GT3.1}. \end{proof} \begin{lemma}[cf {{\cite[Lemma~5.2]{gt:dsokwylsagok}}}] \label{lem:GT5.2} If $r$ is odd, then $G_S$ cannot have more than $t/2$ mutually parallel edges. \end{lemma} \begin{proof} In the proof of Lemma~5.2 of \cite{gt:dsokwylsagok}, replace Lemma~3.1 with our \fullref{lem:GT3.1} and Lemma~3.2 with our \fullref{lem:GT3.2}. \end{proof} \begin{proof}[Proof of \fullref{thm:r=3}] If $t=2$ then \fullref{lem:t=2} implies $b(K) \leq 1$. Thus we assume $t \geq 4$. Also assume the interior of $S$ has been isotoped to minimize $|S \cap T|$. Following the proof of Lemma~5.3 of \cite{gt:dsokwylsagok}: The vertex of $G_S$ has valency $3t$, and there are a total of $3t/2$ edges of $G_S$. By \fullref{lem:GT5.2}, $G_S$ consists of three families of mutually parallel edges each containing exactly $t/2$ edges. Thus there is no $S2$ cycle in $G_S$, but there are two $S3$ cycles $\sigma_1$ and $\sigma_2$ which we may assume to have label pairs $\{t, 1\}$ and $\{t/2, t/2+1\}$ respectively by an appropriate choice of relabeling. Let $g_i$ be the face of $G_S$ bounded by $\sigma_i$ for $i=1,2$. Note that each graph $\smash{G_S^x}$ for $x \in \mathbf{t}$ consists of three edges with label pair $\{x, t-x+1\}$. Each such triple of edges $\smash{G_S^x}$ is a ``twice'' (extended) $S3$ cycle: to one side it bounds a trigon containing $g_1$ and to the other side it bounds a trigon containing $g_2$.
The proof of Claim~5.5 of \cite{gt:dsokwylsagok} carries through without alteration to show each of these extended $S3$ cycles lie in essential annuli in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Thus $S \cap T$ cannot contain a simple closed curve that bounds a disk in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ without bounding a disk in $T$. Because every face of $G_S$ is a disk, every simple closed curve of $S \cap T$ must bound a disk on $S$. Thus each simple closed curve of $S \cap T$ that is trivial on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ is also trivial on $S$. Let $D_T$ be an disk on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ bounded by a simple closed curve of $S \cap T$ that is both trivial and innermost on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Let $D_S$ be the disk in $S$ bounded by $\ensuremath{\partial} D_T$. Since $D_T$ must be disjoint from $K$, $D_S \cup D_T$ forms a $2$--sphere that is disjoint from $K$. Since lens spaces are irreducible, $D_S \cup D_T$ bounds a ball $B$. Because $K$ is not nullhomologous in our lens space $X$, $K \not \subseteq B$. Thus there is an isotopy of $\Int S$ with support in a neighborhood of $B$ pushing $D_S$ past $D_T$ thereby reducing $|S \cap T|$. This contradicts our minimality assumption. Thus any simple closed curve of $S \cap T$ is essential on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Furthermore, any simple closed curve of $S \cap T$ innermost on $S$ must bound a meridional disk of the same solid torus on one side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. For each $i=1,2$, let $A_i$ be a narrow annulus in $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ in which $\sigma_i$ lies that does not contain any simple closed curves of $S \cap T$. If $S \cap T$ does indeed contain simple closed curves, then let $\delta$ be an innermost simple closed curve on $S$ bounding the disk $D$. The cores of the $A_i$ are parallel on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ to $\delta$, and $\delta \subseteq \smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu} \backslash (A_1 \cup A_2)$. For each $i=1,2$, let $A_i'$ be the annulus on $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$ between $A_i$ and $\delta$, and let $D_i' = A_i \cup A_i' \cup D$ slightly pushed off $D$ so that $D_1'$ and $D_2'$ are disjoint from each other and from $D$. If $\Int(g_1 \cup g_2) \cap (A_1' \cup A_2') \neq \emptyset$ then perform disk exchanges on $g_1 \cup g_2$ with $D_1' \cup D_2'$ to produce trigons $g_i'$ from $g_i$ for each $i=1,2$ so that $\Int(g_1' \cup g_2') \cap (A_1' \cup A_2') = \emptyset$. Note that $\ensuremath{\partial} g_i' = \ensuremath{\partial} g_i$. Then $\bar{N}(D_1' \cup H_{(t,1)} \cup g_1')$ and $\bar{N}(D_2' \cup H_{(t/2, t/2+1)} \cup g_2')$ gives two disjoint punctured lens spaces in $X$, which is absurd. Thus $S \cap T$ contains no simple closed curves. The proof of Claim~5.4 of \cite{gt:dsokwylsagok} (from which the preceding paragraph takes inspiration) may now be used to show that $g_1$ and $g_2$ lie on opposite sides. \fullref{GT:L2.1} then implies that the cores of the $A_i$ are not meridional curves for the solid tori on either side of $\smash{\mskip2mu\what{\mskip-2mu T\mskip-1.5mu}\mskip.5mu}$. Thus $t/2$ is odd, and so $t \geq 6$. 
\fullref{sec:annuliandtrees} now applies to the extended $S3$ cycle $\sigma_2$ considered as bounding the trigon $F$ containing $g_1$. \fullref{bounded} implies that $F$ may account for at most $t/2+1$ labels. Since $t \geq 6$, this contradicts the fact that $F$ accounts for all $t$ labels. \end{proof} \end{document}
\begin{document} \centerline{\bf TWO GENERATOR SUBALGEBRAS OF LIE ALGEBRAS} \centerline {Kevin Bowman} \centerline {Department of Physics, Astronomy and Mathematics} \centerline {University of Central Lancashire} \centerline {Preston PR1 2HE, England} \centerline {David A. Towers} \centerline {Department of Mathematics, Lancaster University} \centerline {Lancaster LA1 4YF, England} \centerline {and} \centerline {Vicente R. Varea \footnote[1]{Supported by DGI Grant BFM2000-1049-C02-01}} \centerline {Department of Mathematics, University of Zaragoza} \centerline {Zaragoza, 50009 Spain} {\sl Abstract} In \cite{tho} Thompson showed that a finite group $G$ is solvable if and only if every two-generated subgroup is solvable (Corollary 2, p. 388). Recently, Grunewald et al. \cite{gknp} have shown that the analogue holds for finite-dimensional Lie algebras over infinite fields of characteristic greater than $5$. It is a natural question to ask to what extent the two-generated subalgebras determine the structure of the algebra. It is this question that this paper addresses. Here, we consider the classes of strongly-solvable and of supersolvable Lie algebras, and the property of triangulability. \par {\sl Keywords} Lie algebra, two generator, solvable, supersolvable, triangulable. \par AMS 2000 {\sl Mathematics subject classification} 17B05, 17B30 \section {Introduction.} Let $\mathcal {P}$ be a certain property that a subalgebra of a Lie algebra may possess. Our task is to obtain information on the structure of a Lie algebra $L$ all of whose two-generated {\em proper} subalgebras possess the property $\mathcal{P}$. Given a subalgebra $S$ of $L$, we distinguish two types of properties that $S$ may possess: one type is that $S$ belong to a certain class $\mathcal{C}$ of Lie algebras, in which case the class $\mathcal{C}$ will be identified with the property; the other one is that $S$ be immersed in $L$ in a certain way. In this paper we will consider properties of these two types. Of the first type, we will consider the classes of strongly-solvable and supersolvable Lie algebras, and of the other type the property that $S$ be triangulable on $L$. In section 2, we give some preliminary results and we collect some known ones on important classes of Lie algebras. Most of them will be used in the sequel. In section 3, we consider the classes of strongly-solvable and of supersolvable Lie algebras. We prove that if $L$ is a solvable Lie algebra all of whose two-generated proper subalgebras are strongly solvable (resp. supersolvable) then either $L$ is two-generated and every proper subalgebra of $L$ is strongly solvable (resp. supersolvable) or else $L$ is strongly solvable (resp. supersolvable). In the strongly-solvable case, we also prove that if $L$ is non-solvable, $F$ is infinite and $\mathrm{char}(F)>5$, then $L$ is two-generated, every proper subalgebra of $L$ is strongly solvable and $L/{\phi(L)}$ is simple (where ${\phi(L)}$ denotes the largest ideal of $L$ contained in every maximal subalgebra of $L$). In section 4, we prove that a solvable Lie algebra $L$ is triangulable whenever every two-generated {\em proper} subalgebra of $L$ is triangulable on $L$. Also, we prove that if every proper subalgebra of a Lie algebra $L$ is triangulable on $L$ but $L$ itself is not, then $L$ is two-generated and $L/\phi(L)$ is simple. Moreover, we give some information on the structure of a simple Lie algebra $L$ all of whose proper subalgebras are triangulable on $L$.
In particular we obtain that such a Lie algebra is two-generated. Throughout, $L$ will denote a finite-dimensional Lie algebra over a field $F$. There will be no assumptions on $F$ other than those specified in individual results. We shall call $L$ {\em supersolvable} if there is a chain $0 = L_{0} \subset L_{1} \subset \ldots \subset L_{n-1} \subset L_{n} = L$, where $L_{i}$ is an $i$-dimensional ideal of $L$. We shall call $L$ {\em strongly solvable} if its derived subalgebra $L^2$ is nilpotent. It is well known that $$ \mbox{supersolvable}\,\Longrightarrow\, \mbox{strongly solvable}\,\Longrightarrow\,\mbox{solvable} $$ For algebraically closed fields of characteristic zero, these three classes of Lie algebras coincide (Lie's theorem). For fields of characteristic zero, every solvable Lie algebra is strongly solvable. There are well-known examples of solvable Lie algebras over algebraically closed fields of non-zero characteristic which are not supersolvable (see for instance \cite[p.53]{J} or \cite{bn}). Nevertheless, if the ground field $F$ is algebraically closed then every strongly-solvable Lie algebra is supersolvable (see \cite[Lemma 2.4] {bn}). A subalgebra $S$ of $L$ is said to be {\em triangulable} on $L$ if ${\rm ad}_{L} S = \{{\rm ad}_{L} x \mid x \in S\}$ is a Lie algebra of linear transformations of $L$ which is triangulable over the algebraic closure of $F$ (equivalently, if every element of $S^2$ acts nilpotently on $L$, see \cite{w}). Some questions regarding triangulability of linear Lie algebras have been considered in \cite{gu}. The symbol $\dot{+}$ will denote a vector space direct sum. We say that $L$ is {\em almost abelian} if $L = L^{2}\dot{+} Fx$ with ${\rm ad}\,x$ acting as the identity map on the abelian ideal $L^{2}$; $L$ is {\em quasi-abelian} if it is abelian or almost abelian. Quasi-abelian Lie algebras are precisely those in which every subspace is a subalgebra. \section{Preliminary Results} The {\em Frattini subalgebra} of $L$, denoted by $F(L)$, is the intersection of all maximal subalgebras of $L$. It is known that $F(L)$ need not be an ideal of $L$, even for algebraically closed fields (see \cite{v3}); the {\em Frattini ideal} of $L$, denoted by $\phi (L)$, is the largest ideal of $L$ contained in $F(L)$. We say that $L$ is {\em $\phi$-free} if $\phi (L) = 0$. Clearly, $L/\phi(L)$ is $\phi$-free. The following is straightforward. \begin{lemma}\label{l:lemma2} A Lie algebra $L$ is two-generated if and only if $L/\phi(L)$ is two-generated. \end{lemma} A class $\mathcal{C}$ of Lie algebras is said to be {\em saturated} if $L\in\mathcal{C}$ whenever $L/\phi(L)\in\mathcal{C}$. It is well-known that the classes of solvable, strongly-solvable, supersolvable and nilpotent Lie algebras are saturated (see \cite{bn} and \cite{to}). For short, we will say that the property $\mathcal{P}$ satisfies condition (*) if, whenever $L$ is a Lie algebra all of whose two-generated {\it proper} subalgebras possess the property $\mathcal{P}$, either $L$ itself possesses the property $\mathcal{P}$ or $L$ is two-generated. Next, we collect some known results on classes of Lie algebras which satisfy this condition. {\bf Theorem 0} \begin{enumerate} \item The classes of abelian, nilpotent and quasi-abelian Lie algebras satisfy condition (*). \item The class of simple (including the one-dimensional) Lie algebras satisfies condition (*).
If ${\mathrm char}(F)\neq 2,3$ and every two-generated proper subalgebra of $L$ is either simple or one-dimensional, then every subalgebra of $L$ of dimension $>1$ is simple and $L$ is two-generated. \item If $F$ is infinite and ${\mathrm char}(F)>5$, then the class of solvable Lie algebras satisfies condition (*) \end{enumerate} {\bf Proof.} (1): Clearly, the class of abelian Lie algebras satisfies condition (*). Now, suppose that every two-generated subalgebra of $L$ is nilpotent. Let $x \in L$ and choose any $y \in L$. Then $<x, y>$ is nilpotent, so $y({\rm ad}\,x)^{n} = 0$ for some $n$. Hence $y({\rm ad}\,x)^{d} = 0$, where $d =$ dim $L$. This holds for all $y \in L$, so ${\rm ad}\,x$ is nilpotent for all $x \in L$. It follows from Engel's Theorem that $L$ is nilpotent. To prove the last assertion in (1), suppose that every two-generated subalgebra of $L$ is quasi-abelian. Then every two-dimensional subspace of $L$ is a subalgebra of $L$, from which it follows that every subspace of $L$ is a subalgebra of $L$, and hence that $L$ is quasi-abelian. (2): This is proved in \cite[Proposition 3.2]{v} and \cite[Theorem 4]{v2}. (3): It is an immediate consequence of a result of Grunewald, Kunyavskii, Nikolova and Plotkin in \cite{gknp}. The structure of solvable minimal non-abelian Lie algebras has been fully described by Stitzinger in \cite[Theorem 1]{stit}. If $L$ is such a Lie algebra, then $L=A\dot{+} Fx$, where $A$ is an abelian ideal of $L$ and ${\rm ad}\,x$ acts irreducibly on $A$. Non-solvable minimal non-abelian Lie algebras have been studied by Farnsteiner in \cite{F} and Gein in \cite{gmin} (see also Elduque \cite{e}). We finish this section by collecting some known results on the structure of a minimal non-$\mathcal{C}$ Lie algebra for several classes $\mathcal{C}$. Minimal non-quasi-abelian Lie algebras over a field of characteristic different from 2 and 3 have been studied by Gein in \cite[Proposition 3]{gmod} and Varea in \cite[Theorem 2.2 and Corollary 2.4] {vhirosh}. If $L$ is such a Lie algebra, then one of the following occurs: (a) $L$ is solvable minimal non-abelian; (b) $L\cong\rm{sl}(2)$; (c) $L$ is simple minimal non-abelian; or (d) $L$ has a basis $a_1,a_2,x$ with product given by one of the following rules: \begin{enumerate}\item $[a_1,a_2]=0, [a_1,x]=a_1, [a_2,x]=\alpha a_2$, $1\neq \alpha\in F$\item $[a_1,a_2]=0, [a_1,x]=a_1, [a_2,x]=a_1+ a_2$. \end{enumerate} Minimal non-solvable Lie algebras over an algebraically closed field of prime characteristic have been studied by Varea in \cite{vmin} Minimal non-nilpotent Lie algebras have been studied by Stitzinger in \cite{stit}, Gein and Kuznecov \cite{gk}, Towers \cite{tomin} and Farnsteiner \cite{F}. In Gein \cite{Gbook}, it is proved that if $L$ is simple and minimal non-nilpotent then the intersection of two distinct maximal subalgebras of $L$ is zero and $L$ has no non-zero ad-nilpotent elements. For the readers' convenience we include here a proof of Gein's result. (Our proof is different from the one given in \cite{Gbook}.) \begin{theor}{\bf{(Gein \cite{Gbook})}} \label{t:G} Let $L$ be a simple Lie algebra over an arbitrary field $F$. Assume that every proper subalgebra is nilpotent. Then the following hold: \begin{enumerate} \item $M_{1} \cap M_{2} = 0$ for every pair of different maximal subalgebras $M_{1}$ and $M_{2}$ of $L$; \item $L$ has no non-zero ad-nilpotent elements; and \item $L$ is two-generated. \end{enumerate} \end{theor} {\em Proof.} (1): Let $M$ be a maximal subalgebra of $L$. 
Assume that there exists a proper subalgebra $S$ of $L$ not contained in $M$ such that $M \cap S \neq 0$. Choose $S$ such that $\dim M \cap S$ is maximal. Nilpotency of $M$ implies that $N_{M}(M \cap S) \neq M \cap S$. Nilpotency of $S$ implies that $N_{S}(M \cap S) \neq M \cap S$. Let $T$ be the subalgebra of $L$ generated by $N_{M}(M \cap S)$ and $N_{S}(M \cap S)$. Since $M \cap S$ is a non-zero ideal of $T$, it follows from the simplicity of $L$ that $T \neq L$. Moreover we have $S \cap M < T \cap M$, which contradicts our choice of $S$. (2): Let $0 \neq x \in L$ be ad-nilpotent. Let $H$ be a maximal subalgebra of $L$ containing $x$. Let $L = L_0\dot{+} L_{1}$ be the Fitting decomposition of $L$ relative to $H$. Since $H$ is a maximal subalgebra of $L$, we have $H=L_0$. Since $x$ acts nilpotently on $L_{1}$, there exists $0 \neq y \in L_{1}$ such that $[x, y] = 0$. Therefore $C_{L}(x)$ is not contained in $H$. Take a maximal subalgebra $M$ of $L$ containing $C_{L}(x)$. We find that $H \cap M \neq 0$ and $H \neq M$, which contradicts (1). (3): Assume $L \neq <x,y>$ for every $x$, $y \in L$. Let $M$ be a maximal subalgebra of $L$. Take $0 \neq x \in M$ and $y\in L$, $y\not\in M$. There exists a maximal subalgebra $S$ of $L$ containing $<x,y>$. We have $S \cap M \neq 0$, which contradicts (1). {\it Conjecture:} Every simple minimal non-solvable Lie algebra is two-generated. From \cite{gknp} (see Theorem 0(3)) it follows that the conjecture is true for infinite fields of characteristic greater than $5$. In section 4, we will prove that a Lie algebra $L$ all of whose proper subalgebras are triangulable on $L$ is two-generated. \section{The classes of strongly-solvable and supersolvable Lie algebras} In this section we consider the classes of strongly-solvable and supersolvable Lie algebras. We recall that the Lie algebra with basis $a,b,c$ and product given by $[a,b]=c$, $[a,c]=[b,c]=0$ is called the {\em three-dimensional Heisenberg algebra}. The structure of solvable $\phi$-free minimal non-strongly-solvable Lie algebras was determined in \cite{bt} and is given below. \begin{theor}\label{t:bt} {\sl Let $L$ be a solvable $\phi$-free minimal non-strongly-solvable Lie algebra. Then $F$ has characteristic $p > 0$ and $L = A \dot{+} B$ is a semidirect sum, where $A$ is the unique minimal ideal of $L$, dim $A \geq 2$, $A^{2} = 0$, and either $B = M \dot{+} Fx$, where $M$ is an abelian minimal ideal of $B$ (type I), or $B$ is the three-dimensional Heisenberg algebra (type II).} \end{theor} By using Theorem \ref{t:bt} we prove the following. \begin{propo}\label{p:1} Let $L$ be solvable $\phi$-free minimal non-strongly-solvable Lie algebra. Then $L$ is two-generated. \end{propo} {\em Proof.} The structure of $L$ is described in Theorem \ref{t:bt} above. Suppose first that $L$ is of type I. Clearly, $[A,B]$ is an ideal of $L$. So that either $[A,B]=A$ or $[A,B]=0$. In the latter case, we have that $B$ is also an ideal of $L$, which is a contradiction. Therefore $[A,B]=A$. On the other hand, we have that $[x,M]$ is an ideal of $B$ contained in $M$. So, either $[x,M]=M$ or $[x,M]=0$. In the latter case, we have that $B$ is abelian. This yields that $L^2\leq A$ and hence $L^2$ is abelian, a contradiction. Therefore $[x,M]=M$. Then we have that $B^2=M$ and $L^2=A+B^2=A+M$. If $A ({\rm ad}\, m)^{2} = 0$ for all $m \in M$ it is easy to see that $({\rm ad}\, n)^{3} = 0$ for all $n \in L^2$. But then $L^2$ is nil and hence nilpotent, by Engel's theorem, a contradiction. 
Choose $m \in M$ such that $A ({\rm ad}\, m)^{2} \neq 0$, and then $a \in A$ such that $[[a,m],m] \neq 0$. Now $B = <m, x>$. Put $D = <a+x, m>$. Then $[[a+x,m],m] = [[a,m],m] \in D$, whence $D \cap A \neq 0$. Clearly $L = A + D$ so $D \cap A$ is an ideal of $L$. It follows that $A \subseteq D$, giving $D = A + <x, m> = L$ and $L$ is two-generated. So suppose that $L$ is of type II. Then $B$ has a basis $c,s,x$ with product given by $[s,x]=c$, $[c,x]=[c,s]=0$. Put $C = C_{A}(c)$. Then $C$ is an ideal of $L$ and so $C = A$ or $C = 0$. The former implies that $L^{2} = A + Fc$ is abelian, a contradiction, so $C = 0$. Let $D = <a + s, x>$. Then $[a,x] + c \in D$. If $[a,x] = 0$, then $c \in D$. But this implies that $0 \neq [a,c] \in D \cap A$, whence $L = D$ as above. So suppose that $[a,x] \neq 0$. This yields that $[[a,x],x] \in D$, from which $[[a,x],x] \neq 0$ would give $L = D$ again. So we have that $[a,x] \neq 0$ but $[[a,x],x] = 0$. Put $E = <[a,x] + s, x>$. Then $c = [[a,x] + s,x] \in E$, whence $[[a,x],c] \in E$. But $[[a,x],c] \neq 0$ since $[a,x] \neq 0$, giving $E \cap A \neq 0$ from which $L = E$. The proof is complete. Now we are able to prove the following result. \begin{theor}\label{t:theor1} Let $L$ be a Lie algebra such that every two-generated proper subalgebra is strongly solvable. \begin{enumerate} \item Assume that $L$ is solvable but not strongly solvable. Then every proper subalgebra of $L$ is strongly solvable and $L$ is two-generated. \item Assume that $L$ is non-solvable, $F$ is infinite and $\mathrm{char}(F)>5$. Then $L$ is two-generated, every proper subalgebra of $L$ is strongly solvable and $L/{\phi(L)}$ is simple. \end{enumerate} \end{theor} {\em Proof.} (1): Let $S$ be a non-strongly-solvable subalgebra of $L$ of minimal dimension. Clearly, $S$ is minimal non-strongly-solvable. We have that $S/\phi(S)$ is $\phi$-free, solvable and minimal non-strongly-solvable. By Proposition \ref{p:1} it follows that $S/\phi(S)$ is two-generated, whence $S$ is also two-generated, by Lemma \ref{l:lemma2}. By our hypothesis it follows that $S=L$. This yields that $L$ is minimal non-strongly-solvable and two-generated. The proof of (1) is complete. (2): By using Theorem 0(3) we obtain that every non-solvable subalgebra of $L$ (including $L$ itself) is two-generated. From this and our hypothesis it follows that every proper subalgebra of $L$ is solvable. Now let $S$ be a proper subalgebra of $L$. Assume that $S$ is not strongly solvable. Then, by (1) it follows that $S$ is two-generated. But then, by our hypothesis $S$ is strongly solvable, a contradiction. Put $\bar{L}=L/\phi(L)$. We have that $\bar{L}$ is non-solvable but every proper subalgebra of $\bar{L}$ is solvable. On the other hand, since $\bar{L}$ is $\phi$-free we have that $\bar{L}=N+S$, $N\cap S=0$, where $N$ is the largest nilpotent ideal of $\bar{L}$ and $S$ is a non-solvable subalgebra of $\bar{L}$. This yields that $S=\bar{L}$ and hence $N=0$. Let $A$ be a minimal ideal of $\bar{L}$. We have that either $A^2=A$ or $A^2=0$. In the latter case, we have $A\leq N=0$, a contradiction. This yields that $A=A^2$ and hence $A=\bar{L}$. Therefore $\bar{L}$ is simple. This completes the proof. Next we consider the class of supersolvable Lie algebras The structure of strongly-solvable minimal non-supersolvable Lie algebras as well as that of $\phi$-free, non-strongly-solvable and minimal non-supersolvable Lie algebras were determined in \cite{ev}. 
For the readers' convenience, we give them below. \begin{theor}\cite[Theorems 1.1 and 1.2]{ev}\label{t:ev} Let $L$ be minimal non-supersolvable. \begin{enumerate} \item If $L$ is strongly solvable, then $L=\phi(L)\dot{+}A\dot{+}Fx$, where $A$ is a subspace of $L$, $A^2\leq \phi(L)$, with ${\rm ad}\,x$ acting irreducibly on $A$ and ${\rm ad}\,x\mid_{\phi(L)}$ is split. \item If $L$ is $\phi$-free and non-strongly-solvable, then ${\rm char}(F)=p>0$ and one of the following holds: \begin{enumerate} \item $L=((x,y,e_0,e_1,\ldots,e_{p-1}))$ with $[e_i,y]=(\alpha+i)e_i$ where $\alpha$ is a fixed scalar in $F$, $[e_i,x]=e_{i+1}$ (indices mod.$p$), $[x,y]=x$, $[e_i,e_j]=0$ and $F=\{t^p-t\mid t\in F\}$. \item $L=((x,y,z,e_0,e_1,\ldots,e_{p-1}))$ with $[e_i,z]=e_i$ for every $i$ and $[e_i,x]=e_{i+1}$ $(i=0,\ldots,p-2)$ (non-specified products are zero) and $F$ is perfect whenever $p=2$. \end{enumerate} \end{enumerate} \end{theor} Now we can prove the following. \begin{theor}\label{t:theor3} Let $L$ be a solvable Lie algebra. \begin{enumerate} \item If $L$ is minimal non-supersolvable, then $L$ is two-generated. \item If every two-generated proper subalgebra of $L$ is supersolvable but $L$ is not supersolvable, then every proper subalgebra of $L$ is supersolvable. So, either $L$ has the structure given in (1) of Theorem \ref{t:ev} or $L/\phi(L)$ is isomorphic to one of the Lie algebras described in (2) of Theorem \ref{t:ev}. \end{enumerate} \end{theor} {\em Proof.} (1): If $L^2$ is not nilpotent, then $L$ is minimal non-strongly-solvable. So, by Theorem \ref{t:theor1} it follows that $L$ is two-generated. Now assume that $L^{2}$ is nilpotent. Put $\bar{L}=L/\phi(L)$. We have that $\bar{L}$ is $\phi$-free, strongly solvable and minimal non-supersolvable. By Theorem \ref{t:ev}, we have that $\bar{L}=A\dot{+} Fx$ where $A$ is a nonzero abelian ideal of $\bar{L}$ and ${\rm ad}\,x$ acts irreducibly on $A$. Pick $0\neq a\in A$. We have that $\bar{L}$ is generated by $a$ and $x$. From Lemma \ref{l:lemma2} it follows that $L$ is two-generated. The proof of (1) is complete. (2): Clearly, $L$ has a subalgebra $S$ which is minimal non-supersolvable. From (1) it follows that $S$ is two-generated. Then, by our hypothesis, we have that $S=L$. \section{The property of triangulability} A subalgebra $S$ of $L$ is said to be {\em triangulable} on $L$ if ${\rm ad}_{L} S = \{{\rm ad}_{L} x \mid x \in S\}$ is a Lie algebra of linear transformations of $L$ which is triangulable over the algebraic closure of $F$. A subalgebra $S$ of $L$ is said to be {\em nil} on $L$ if ${\rm ad}\, x$ is nilpotent for every $x\in S$. It is well-known that for every subalgebra $S$ of $L$, there is a unique maximal ideal ${\rm nil}(S)$ of $S$ consisting of ad-nilpotent elements of $L$ (see \cite[Proposition 2.1]{w}). Also, it is known that $S$ is triangulable on $L$ if and only if $S/{\rm nil}(S)$ is abelian (see \cite[Theorem 2.2]{w}). Note that every subalgebra of $L$ which is triangulable on $L$ is strongly solvable. First, we give the following easy lemma which will be used in the sequel. \begin{lemma}\label{l:nil} Let $L$ be a Lie algebra and $S$ and $T$ subalgebras of $L$ which are nil on $L$. Assume that $[S,T] \subset T$. Then $S + T$ is nil on $L$. \end{lemma} {\em Proof.} We see that the set ${\rm ad}\,S\cup{\rm ad}\,T$ is weakly closed in the sense of Jacobson \cite{J}. Therefore, by \cite[Theorem 1,p.33]{J}, every element of $S + T$ acts nilpotently on $L$. \begin{propo}\label{p:semi} Let $S$ be a triangulable subalgebra of $L$.
Then ${\rm nil}(S)$ is precisely the set of all elements of $S$ which act nilpotently on $L$. \end{propo} {\em Proof.} Let $\Omega$ be an algebraic closure of $F$ and let $L_{\Omega}=L\otimes_F\Omega$. Put $A={\rm ad}_LS$. We have that $A\leq gl(L)$ and that $A_{\Omega}$ is a subalgebra of $ gl(L_{\Omega})$ and a set of simultaneously triangulable linear maps. Then it is obvious that the set $N(A_{\Omega})$ of nilpotent elements of $A_{\Omega}$ is closed under linear combinations and that $(A_{\Omega})^2<N(A_{\Omega})$. Since $(A^2)_{\Omega}= (A_{\Omega})^2$, it follows that $A^2$ is contained in the set $N$ of nilpotent elements in $A$. Also, we have that $N$ is closed under linear combinations. So $N$ is actually an ideal of $A$. Then ${\rm nil}(S)$ is contained in the inverse image $N^{\prime}$ of $N$ under the adjoint representation ${\rm ad}_L$. This is a Lie-homomorphism, whence $N^{\prime}$ is an ideal of $S$. It follows that $N^{\prime}={\rm nil}(S)$. This completes the proof. \begin{lemma}\label{l:bige} Let $L$ be a Lie algebra which is not two-generated. Then for each maximal subalgebra $M$ of $L$ and each non-zero element $x$ in $M$ there exists a maximal subalgebra $S$ of $L$ different from $M$ containing $x$. If the ground field $F$ is infinite, then each element of $L$ lies in infinitely many maximal subalgebras of $L$. \end{lemma} {\em Proof.} Let $M$ be a maximal subalgebra of $L$ and $0 \neq x \in M$. Pick $y \in L \setminus M$. By hypothesis, $<x,y> \neq L$. So, there exists a maximal subalgebra $S$ of $L$ containing $<x,y>$. We see that $S \neq M$ since $y \in S$ and $y \not\in M$. Now suppose that $F$ is infinite. Let $x \in L$ and assume that $M_{1},\cdots, M_{r}$ are the only maximal subalgebras of $L$ which contain $x$. Since $F$ is infinite, $M_{1} \cup \cdots \cup M_{r} \neq L$. Pick $y \in L$, $y \not\in M_{1} \cup \cdots \cup M_{r}$. Then, by hypothesis $L \neq <x,y>$. Take a maximal subalgebra $S$ of $L$ containing $<x,y>$. We see that $x \in S$ and $S \neq M_{i}$ for every $i$, which is a contradiction. Now we give some information on the structure of a simple Lie algebra all of whose proper subalgebras are triangulable. Clearly, a simple minimal non-abelian Lie algebra satisfies this condition. However, we do not know any other example. \begin{theor}\label{t:simple-triang} Let $L$ be a simple Lie algebra such that every proper subalgebra of $L$ is triangulable on $L$. Then the following hold: \begin{enumerate} \item if $M$ is a maximal subalgebra of $L$, then either $M$ is abelian and none of the non-zero elements of $M$ acts nilpotently on $L$ or ${\rm nil}(M)$ is a non-zero maximal nil subalgebra of $L$; \item if $K$ is a non-zero maximal nil subagebra of $L$, then the normalizer of $K$ in $L$ is a maximal subalgebra of $L$; \item for each two different maximal subalgebras $M_{1}$ and $M_{2}$ of $L$, none of the non-zero elements of $M_{1} \cap M_{2}$ acts nilpotently on $L$ (in particular, $M_1\cap M_2$ is abelian); and \item $L$ is two-generated. \end{enumerate} \end{theor} {\em Proof.} (1): Let $M$ be any maximal subalgebra of $L$. First assume that ${\rm nil}(M) = 0$. Then we have $M^2 \leq {\rm nil}(M) = 0$. So, $M$ is abelian. By Proposition \ref{p:semi} it follows that $M$ does not contain any non-zero ad-nilpotent element of $L$. Now assume ${\rm nil}(M) \neq 0$. Let $K$ be a maximal nil subalgebra of $L$ containing ${\rm nil}(M)$. Assume $K \neq {\rm nil}(M)$. By Proposition \ref{p:semi} again, we have $M \cap K = {\rm nil}(M)$. 
Nilpotency of $K$ implies that ${\rm nil}(M)$ is properly contained in the normalizer $N_{K}({\rm nil}(M))$ of ${\rm nil}(M)$ in $K$. This yields that ${\rm nil}(M)$ is an ideal of $L$, since $L=M+N_{K}({\rm nil}(M))$ by maximality of $M$, a contradiction. Therefore $K={\rm nil}(M)$ and so ${\rm nil}(M)$ is maximal nil on $L$. The proof of (1) is complete. (2): Let $0 \neq K$ be a maximal nil subalgebra of $L$. Let $M$ be a maximal subalgebra of $L$ containing $K$. Then, by Proposition \ref{p:semi} it follows that $K = {\rm nil}(M)$. This yields that $M =N_ {L}(K)$ and hence $N_{L}(K)$ is a maximal subalgebra. (3): Let $M_1, M_2$ be distinct maximal subalgebras of $L$. By Proposition \ref{p:semi} we have that $${\rm nil}(M_1\cap M_2)={\rm nil}(M_1)\cap {\rm nil}(M_2)$$We need to prove that ${\rm nil}(M_1\cap M_2)=0$. Assume that ${\rm nil}(M_1\cap M_2)\neq 0$. Choose $M_1$ and $M_2$ such that the dimension of ${\rm nil}(M_1\cap M_2)$ is maximal. Suppose that ${\rm nil}(M_1\cap M_2)={\rm nil}(M_2)$. Then we have $0\neq {\rm nil}(M_2)\leq {\rm nil}(M_1)$. By (1) it follows that ${\rm nil}(M_2)={\rm nil}(M_1)$. This yields that ${\rm nil}(M_2)$ is an ideal of $M_1$ and $M_2$ and therefore it is an ideal of $L$. Simplicity of $L$ implies that ${\rm nil}(M_2)=0$, a contradiction. Hence ${\rm nil}(M_1\cap M_2)\neq {\rm nil}(M_2)$. Analogously, we have ${\rm nil}(M_1\cap M_2)\neq {\rm nil}(M_1)$. Now, by using Engel's Theorem we obtain subalgebras $S_1$ and $S_2$ of $L$ strictly containing ${\rm nil}(M_1\cap M_2)$ and such that $${\rm nil}(M_1\cap M_2)\lhd S_i\leq{\rm nil}(M_i)\,\,{\rm for}\,\, i=1,2$$ Let $S$ be the subalgebra of $L$ generated by $S_1$ and $S_2$. We find that ${\rm nil}(M_1\cap M_2)$ is an ideal of $S$. Simplicity of $L$ implies that $S\neq L$. Take a maximal subalgebra $M_3$ of $L$ containing $S$. We have $${\rm nil}(M_1\cap M_2)\subset S_1\subseteq M_3\cap{\rm nil}(M_1)={\rm nil}(M_1\cap M_3)$$ which contradicts the maximality of ${\rm nil}(M_1\cap M_2)$. This completes the proof of (3). (4): Assume that $L \neq <a,b>$ for every $a$, $b \in L$. Let $K$ be a nil subalgebra of $L$ of maximal dimension. If $K = 0$, then by (1) we have that every maximal subalgebra of $L$ is abelian. Let $M_1,M_2$ be distinct maximal subalgebras of $L$. We have that $M_1\cap M_2=0$; since, otherwise, we would have that $M_1\cap M_2$ is a nonzero ideal of $L$, which is a contradiction. Pick $0\neq a_1\in M_1$ and $0\neq a_2\in M_2$. Since $ <a_1,a_2>\neq L$, we can take a maximal subalgebra $M_3$ of $L$ containing $a_1$ and $a_2$. But then we have $0\neq a_1\in M_1\cap M_3$ and $M_1\neq M_3$, a contradiction. Therefore $K \neq 0$. Pick $x \in K$, $x \neq 0$. By (3), there exists only one maximal subalgebra of $L$ containing $x$, which contradicts Lemma \ref{l:bige}. Therefore $L$ can be generated by two elements. The proof of the theorem is complete. Now we can prove the following \begin{lemma}\label{l:t} \begin{enumerate} \item If $\phi(L)\leq S\leq L$, then ${\rm nil}(S/\phi(L))={\rm nil}(S)/\phi(L)$. \item If every two-generated {\em proper} subalgebra of $L$ is triangulable on $L$, then the same holds in $L/\phi(L)$. \end{enumerate} \end{lemma} {\em Proof.} (1): Assume $\phi(L)\leq S\leq L$. Let ${\rm nil}(S/\phi(L))=K/\phi(L)$. Since $\phi(L)$ is a nilpotent ideal of $L$ (see \cite{to}), it is nil on $L$. So, $\phi(L)\leq {\rm nil}(S)$. Clearly, ${\rm nil}(S)\leq K$. Now let $x\in K$. We have that $x$ acts nilpotently on $L/\phi(L)$. 
This yields that $L=\phi(L)+L_0(x)$, where $L_0(x)$ is the Fitting null-component of $L$ relative to ${\rm ad}\,x$. As $L_0(x)$ is a subalgebra of $L$, it follows that $L=L_0(x)$. So that $x$ acts nilpotently on $L$. This yields that $K\leq{\rm nil}(S)$. (2): Assume that every two-generated {\em proper} subalgebra of $L$ is triangulable on $L$. Let $S/\phi(L)$ be a two-generated proper subalgebra of $L/\phi(L)$. Take elements $a,b\in L$ such that $S/\phi(L)=<a+\phi(L),b+\phi(L)>$. We have that $S=<a,b>+\phi(L)$ and that $<a,b>$ is triangulable on $L$. On the other hand, from Lemma \ref{l:nil} it follows that $\phi(L)+{\rm nil}(<a,b>)$ is nil on $L$. Since $\phi(L)+{\rm nil}(<a,b>)$ is an ideal of $S$, we have that $\phi(L)+{\rm nil}(<a,b>)\leq {\rm nil}(S)$. Then, $$S^2\leq \phi(L)+<a,b>^2\leq \phi(L)+{\rm nil}(<a,b>)\leq {\rm nil}(S)$$ So, we have that $$(S/\phi(L))^2=S^2+\phi(L)/\phi(L)\leq{\rm nil}(S)/\phi(L)={\rm nil}(S/\phi(L))$$ by (1). Therefore $S/\phi(L)$ is triangulable on $L/\phi(L)$. The proof is complete. \begin{theor}\label{t:2} \begin{enumerate} \item If $L$ is solvable and every two-generated proper subalgebra of $L$ is triangulable on $L$, then $L$ is triangulable. \item If every proper subalgebra of $L$ is triangulable on $L$ but $L$ itself is not, then $L$ is two-generated and $L/{\phi(L)}$ is simple (so, $L/\phi(L)$ satisfies (1)-(4) in Theorem \ref{t:simple-triang}). \end{enumerate} \end{theor} {\em Proof.} (1): Let $L$ be solvable and every two-generated proper subalgebra of $L$ be triangulable on $L$. In particular, we have that every two-generated proper subalgebra of $L$ is strongly solvable. Put $\bar{L}=L/\phi(L)$. Then, by Theorem \ref{t:theor1}(1) it follows that every proper subalgebra of $\bar{L}$ is strongly solvable. Moreover, by Lemma \ref{l:t} we have that every two-generated proper subalgebra of $\bar{L}$ is triangulable on $\bar{L}$. Now suppose that $L$ is not triangulable. So that $L$ is not strongly solvable. Thus, $\bar{L}$ is not strongly solvable either. By using Theorem \ref{t:bt} we obtain that $\bar{L}=A+ B$, where $A$ is the unique minimal ideal of $\bar{L}$, $A^2=0$, $B<\bar{L}$, $A\cap B=0$, and either $B=M\dot{+} Fx$, where $M$ is the unique minimal ideal of $B$ or $B$ is the three-dimensional Heisenberg algebra. We have that $\bar{L}^2=A+B^2$. On the other hand, we see that $B$ is two-generated. So, $B$ is triangulable on $\bar{L}$. This yields that $B^2$ acts nilpotently on $A$ and therefore $\bar{L}^2$ is nilpotent. So that $\bar{L}$ is strongly solvable. This contradiction completes the proof of (1). (2): Assume that every proper subalgebra of $L$ is triangulable on $L$ but $L$ itself is not. By (1) we have that $L$ is not solvable. By Lemmas \ref{l:t}(1), \ref{l:lemma2} and since the class of solvable Lie algebras is saturated, we may suppose without loss of generality that $L$ is $\phi$-free. By Towers \cite{to} we have that $L=N+S$ and $N\cap S=0$, where $N$ is the largest nilpotent ideal of $L$ and $S$ is a subalgebra of $L$. If $S\neq L$, then we have that $S$ is solvable. So, $L$ is solvable, a contradiction. It follows that $N=0$ and so $L$ is simple. Hence, $L$ satisfies (1)-(4) in Theorem \ref{t:simple-triang}. The proof is complete. ACKNOWLEDGMENT The authors are grateful to the referee for his/her useful comments; in particular, for providing a shorter and clearer proof of Proposition 4.2. \end{document}
\begin{document} \title{Secure communication with choice of measurement} \author{Dong Xie} \email{[email protected]} \author{An Min Wang} \email{[email protected]} \affiliation{Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui, China.} \begin{abstract} It has been found that the signal can be encoded in the choice of the measurement basis of one of the communicating parties, while the outcomes of the measurement are irrelevant for the communication and therefore may be discarded. The proposed protocol is novel and interesting, but it is not secure: an eavesdropper can obtain the information without being detected. We utilize the outcomes of the measurement, together with a splitting of the Hilbert space, to propose a secure communication protocol. An error-correcting code is used to increase the fault tolerance in the presence of noise. {\bf Keywords: measurement; secure protocol; fault tolerance } \end{abstract} \pacs{03.67.Hk, 03.65.Ta} \maketitle \section{Introduction} A quantum measurement unavoidably disturbs the system. Recently, Amir Kalev et al. \cite{lab1} showed that this disturbance can be used for communication. Nonselective measurements on one part of a quantum system in a maximally entangled state are used to encode and communicate information, where the outcomes are not recorded. Obviously, in classical theory, nonselective measurements cannot carry information \cite{lab2,lab3}. Hence, the process of quantum measurement can be decomposed into two stages: the first stage is the nonselective measurement, which elevates a particular set of dynamical variables (those labeling the basis in our case) to reality; the second stage involves the determination of the value of the dynamical variable (this stage is the same as in a classical measurement). Compared with classical communication, the biggest advantage of quantum communication is its unconditional security. However, the protocol proposed in Ref.~\cite{lab1} is easily eavesdropped. So it is necessary and interesting to find a secure protocol for communication. We find that although the outcomes of measurements on one part are irrelevant for the communication, they can be used to guarantee security. Noise from the environment inevitably disturbs the quantum system. The main damage to the system comes from the decoherence induced by the noise. Our protocol can filter out the incorrect information created by decoherence noise. For noise that can change the phase and the message, a repetition code is used to correct the errors. The rest of the article is arranged as follows. In Section II, we briefly review and correct the nonselective-measurement protocol. A secure communication protocol is proposed in Section III. In Section IV, we adapt the communication protocol to account for noise. Finally, we give a conclusion and outlook in Section V. \section{A Simple Review of nonselective measurement} In order to introduce a secure communication protocol, we first review the nonselective-measurement protocol of Ref.~\cite{lab1}. There are $d+1$ mutually unbiased bases, which are given in terms of the computational basis by \begin{equation} |m;b\rangle=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}|n\rangle w^{bn^2-2nm};b,m=\ddot{0},0,1,...,d-1, \end{equation} where $|m;\ddot{0}\rangle=|m\rangle$, $w=e^{i2\pi/d}$, and $d$ is an odd prime number.
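For later reference, recall what mutual unbiasedness means for these bases: kets drawn from bases with different values of $b$ have overlaps of constant modulus,
\[
\big|\langle m;b|m';b'\rangle\big|^2=\frac{1}{d}\qquad \text{for all } m,m' \text{ and } b\neq b',
\]
so the outcome of a measurement in one basis reveals nothing about the outcome of a measurement in any other of the bases.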
Let Alice prepare one of the following $d^3$ two-qudit maximally entangled states \cite{lab4,lab5}: \begin{equation} |c,r;s\rangle_{1,2}=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}|n\rangle_1|c-n\rangle_2w^{sn^2-2rn}, \end{equation} in which $c, r, s=0,1,...,d-1$. Alice then sends the qudit labeled 1 to Bob. To communicate a message to Alice, Bob measures his qudit in the basis $|m; b\rangle$; different values of $b$ correspond to different signals. After the measurement, Bob sends the qudit back to Alice. In order to retrieve the message, Alice measures the two qudits in the basis of preparation, $\{{|c',r';s\rangle_{1,2}}\}_{c', r'=0}^{d-1}$ of Eq.(2). The decoding table is given by \begin{equation} \begin{split} c\neq c'&\rightarrow b=s+\frac{r-r'}{c'-c},\\ r\neq r', c=c'&\rightarrow b= \ddot{0},\\ r=r', c=c'&\rightarrow {\rm inconclusive}. \end{split} \end{equation} The probability of an inconclusive outcome is $1/d$, so for large dimension $d$ Bob can communicate messages to Alice reliably. According to Eq.(3), however, the value of $b$ may fall outside the range $0,1,2,...,d-1$. We correct this as follows, \begin{equation} c\neq c'\rightarrow b=s+\frac{r-r'+Nd}{c'-c}, \end{equation} where $N$ is an integer chosen so that the value of $b$ lies in the range $0,1,2,...,d-1$. \section{Secure communication protocol} The above protocol is not secure: Eve can steal the information without being detected, and security is the most important requirement in communication \cite{lab6}. Eve can eavesdrop in the following way, as shown in Fig.~1. First, Eve intercepts qudit $1$, which is sent from Alice to Bob, prepares one of the $d^3$ two-qudit maximally entangled states herself, and sends its qudit labeled 3 to Bob. Next, Eve intercepts the qudit sent back by Bob. After measuring the two qudits in the basis $\{{|c',r';s\rangle_{3,4}}\}_{c', r'=0}^{d-1}$, she obtains the message, i.e.\ the value of $b$. Then she measures qudit $1$ in the basis $\{|m;b\rangle\}_{m=0}^{d-1}$. Alice and Bob cannot detect that the communication has been eavesdropped, because Alice still obtains the correct message from Bob. \begin{figure} \caption{Schematic of Eve's intercept-and-resend attack on the protocol of Ref.~\cite{lab1}.} \label{fig.1} \end{figure} In order to guarantee security, it is necessary to be able to detect Eve. We take the dimension of the Hilbert space to be $\frac{3d-1}{2}$, so that the computational basis is $\{|n\rangle\}_{n=0}^{\frac{3d -3}{2}}$. The kets composing the other bases of interest are given by \begin{equation} \begin{split} &|m,x;b\rangle=\frac{1}{\sqrt{d}}\sum_{n=\frac{x-1}{2}(d-1)}^{\frac{x+1}{2}(d-1)}|n\rangle w^{bn^2-2nm};\\ &b,m=0,1,...,d-1;\ x=1,2;\\ &|m,x;b=\ddot{0}\rangle=|m,x\rangle, \end{split} \end{equation} where $x$ labels the subspace and $w=e^{i2\pi/d}$. Alice prepares one of the following two-qudit maximally entangled states in the subspace $x$: \begin{equation} \begin{split} &|c,r,x;s\rangle_{1,2}=\frac{1}{\sqrt{d}}\sum_{n=\frac{x-1}{2}(d-1)}^{\frac{x+1}{2}(d-1)}|n\rangle_1|f(c-n)\rangle_2w^{sn^2-2rn},\\ &c,r,s=0,1,...,d-1, \end{split} \end{equation} in which \begin{equation}f(c-n)= \begin{cases} c-n & c\geq n,\\c-n+\frac{x+1}{2}(d-1)+1 & c<n, \end{cases} \end{equation} and sends particle 1 to Bob. Bob receives the qudit and measures it in the basis $\{|m,x;b\rangle_1\}_{m=0,x=1}^{d-1, 2}$; different values of $b$ represent different signals.
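Since the decoding table of Eq.(3), with the correction of Eq.(4), is used again below, we note that in our reading choosing the integer $N$ in Eq.(4) so that $b$ becomes an allowed label is equivalent to evaluating $b=s+(r-r')(c'-c)^{-1}$ modulo $d$, because $b$ must lie in $\{0,1,...,d-1\}$ and $d$ is prime. A minimal sketch of this decoding rule follows; it is our own illustration, and the function name and the convention of returning \texttt{None} in the inconclusive case are ours.
\begin{verbatim}
def decode(c, r, s, cp, rp, d):
    # Recover Bob's basis label b from Alice's preparation (c, r, s)
    # and her measurement outcome (c', r'), following Eqs. (3)-(4).
    if c == cp and r == rp:
        return None                 # inconclusive outcome
    if c == cp:
        return "0-double-dot"       # b is the computational-basis label
    inv = pow(cp - c, -1, d)        # modular inverse of (c'-c); d is prime
    return (s + (r - rp) * inv) % d
\end{verbatim}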
After Bob's measurement, the two-qudit state is described by \begin{equation} \begin{split} \rho_{1,2}=&\sum_{m=0,x'=1}^{d-1,2}|m,x';b\rangle_1\langle m,x';b|\\ &|c,r,x;s\rangle_{1,2}\langle c,r,x;s||m,x';b\rangle_1\langle m,x';b|. \end{split} \end{equation} In the protocol proposed in Ref.~\cite{lab1}, Bob need not record the measurement outcome. For security, however, the measured value of $x$ is useful. After the measurement, Alice and Bob check the value of $x$: if the value recorded by Bob differs from the one used by Alice, they abort the protocol; if the values agree, Bob sends particle 1 back to Alice. Finally, Alice measures the two qudits in the basis $\{|c',r',x;s\rangle_{1,2}\}_{c',r'=0}^{d-1}$ to retrieve the message from Bob. The decoding table is the same as Eq.(3). In the course of the communication there are only two ways in which Eve can eavesdrop: she either does not measure particle 1 before Bob, or she does, in order to learn the value of $x$. In the first case, Eve steals the message as in Fig.~1, but she does not know the correct value of $x$ in the state sent by Alice. If she simply chooses a value of $x$ at random from $\{1, 2\}$, she is detected with probability $3/4$ for a single message when Alice and Bob compare their values of $x$. The probability of successful eavesdropping decreases exponentially with the number of messages $N$, as $(3/4)^N$. Moreover, if a larger Hilbert space is chosen (so that $x$ has more possible values), this probability is already very small for a single message. In the second case, Eve must perform a measurement on particle 1 in order to learn the value of $x$. Since the two subspaces labeled by $x$ are not orthogonal, a measurement that does not disturb the state of particle 1 gives Eve the correct value of $x$ only with probability $1/2$; for $N$ messages this probability is $(\frac{1}{2})^N$. Alternatively, Eve might try to obtain the correct value of $x$ by performing the same measurement on particle 1 as Bob and then sending particle 1 on to Bob; however, Eve does not know Bob's measurement beforehand, so the probability of success is $\frac{1}{d+1}$ for a single message. Whichever strategy Eve uses, the probability of successful eavesdropping tends to zero as the number of messages grows; hence the proposed protocol is secure against this kind of eavesdropping. The whole secure communication protocol can be summarized as follows.\\ 1. Alice randomly chooses one two-qudit state from Eq.(6) and sends particle 1 to Bob.\\ 2. Bob receives particle 1, measures it in the basis $\{|m,x;b\rangle_1\}_{m=0,x=1}^{d-1, 2}$, and records the value of $x$.\\ 3. After Bob has obtained his value of $x$, Alice tells Bob her initial value of $x$ by classical communication. If the two values differ, they abort the communication.\\ 4. Bob sends particle 1 back to Alice. Alice then measures the two qudits and retrieves the message $b$.\\ 5. Bob sends $2n$ messages to Alice by repeating the above procedure.\\ 6. Alice randomly selects $n$ messages to check with Bob; they keep the remaining $n$ messages. If more than an acceptable number disagree, they abort the protocol.\\ 7. Alice and Bob perform information reconciliation and privacy amplification on the remaining $n$ messages to obtain $l$ shared messages \cite{lab7,lab8}. \section{Secure protocol in the presence of noise} In practical communication, noise from the environment is inevitable.
For secure and successful communication we have to take the influence of noise into account. Noise degrades the quantum state and reduces the fidelity, so an error correction code is necessary to increase the fault tolerance. At the same time, we must avoid treating Eve as mere noise, which would be intractable. First, we consider the case in which the noise only induces decoherence of the quantum state. Decoherence is the main obstacle in quantum communication and computation \cite{lab7,lab9,lab10}. We find that the effect of decoherence on the maximally entangled state (Eq.(6)) is the same as that of a measurement by Bob on particle 1 in the basis $\{|m,x;b=\ddot{0}\rangle_1\}_{m=0,x=1}^{d-1,2}$. Hence Alice will obtain the message $b=\ddot{0}$, no matter what Bob has sent. In order to identify the errors created by decoherence noise, Bob never sends the message $b=\ddot{0}$; once Alice obtains the message $b=\ddot{0}$, she immediately knows that a decoherence error has occurred. A simple repetition code for correcting this error could, however, easily be exploited by Eve, because Eve can also perform a measurement on particle 1 in the basis $\{|m,x;b=\ddot{0}\rangle_1\}_{m=0,x=1}^{d-1,2}$, and we cannot distinguish her from noise. Eve would only need to measure a few of the repeated messages to obtain the value of $x$ and could then send a particle of her own preparation to Bob; as in the scheme of Fig.~1, she could steal the message successfully. There is a simple way to deal with decoherence noise, and it does not require a repetition code: after the quantum communication, Alice tells Bob which messages are wrong, and these are discarded; the remaining messages are used as secret keys. Against pure decoherence noise this approach performs very well. However, some types of noise can change the phase of the quantum state and thereby create a wrong message $b$. The above approach is then not effective, because Alice does not know which messages are incorrect until Bob tells her, which makes the problem much harder. We propose a scheme to correct such errors as follows. A repetition code is now needed to detect the errors created by noise that can change the phase and hence the message. The secure protocol reads as follows.\\ The first step: Bob sends $2n\times m$ messages to Alice, where each logical message is repeated $m$ times for error correction. The $m$ copies of a message need not be adjacent; they are randomly arranged in the transmitted sequence, as shown in Fig.~2. \\ The second step: after Alice has received the $2n\times m$ messages, Alice and Bob compare $n\times m$ randomly selected messages. If more than $t$ of these disagree, they abort the protocol. The value of $t$ has to be chosen suitably: if $t$ is too small, the protocol aborts too easily and its success probability is low; if $t$ is too large, Eve can steal too much information (how to choose the optimal value of $t$ is left as an open question). \\ The third step: Bob tells Alice the positions of the repeated copies in the sequence. Alice then corrects the errors according to the principle that the minority is subordinate to the majority, i.e.\ by majority vote among the $m$ copies of each message. \begin{figure} \caption{Random arrangement of the $m$ repeated copies of each message in the transmitted sequence.} \label{fig.2} \end{figure} \section{Conclusion and outlook} We have proposed a secure communication protocol based on treating the choice of measurement basis as the signal. It is suitable for some interesting cryptographic tasks, such as superdense coding \cite{lab11}, which achieves $2\log_2 d$ bits per qudit sent from Bob to Alice.
In this article, the first stage of the quantum measurement is used for communication, and the second stage is used to guarantee security. We conjecture that a secure communication protocol must involve the second stage, because classical information is required to detect the eavesdropper Eve (a formal proof is left as an open question). Other secure communication protocols along these lines deserve further study. We have also considered noise. Errors created by decoherence noise are easily identified, so it suffices to discard the wrong messages. For general noise a more careful check is required, and we propose a repetition code to correct the errors, with the transmitted sequence randomly arranged to guard against eavesdropping. \end{document}
\begin{document} \title{Some local wellposedness results for nonlinear Schr\"odinger equations below $L^2$} \author{Axel Gr\"unrock \\mbox{${\cal F}$}achbereich Mathematik\\ Bergische Universit\"at - Gesamthochschule Wuppertal \\ Gau{\ss}stra{\ss}e 20 \\ D-42097 Wuppertal \\ Germany \\ e-mail [email protected]} \date{} \maketitle \newcommand{\mbox{${\longrightarrow}$}}{\mbox{${\longrightarrow}$}} \newcommand{\mbox{${\Rightarrow}$}}{\mbox{${\Rightarrow}$}} \newcommand{\mbox{${\Leftarrow}$}}{\mbox{${\Leftarrow}$}} \newcommand{\iso}{\mbox{${\stackrel{\sim}{\longrightarrow}}$}} \newcommand{\mbox{${\cal F}$}}{\mbox{${\cal F}$}} \newcommand{\mbox{${\cal F}$}x}{\mbox{${\cal F}_x$}} \newcommand{\mbox{${\cal F}$}t}{\mbox{${\cal F}_t$}} \newcommand{\mbox{${\cal S} ({\bf R}^{n+1})$}}{\mbox{${\cal S} ({\bf R}^{n+1})$}} \newcommand{\mbox{${\cal S} ({\bf R})$}}{\mbox{${\cal S} ({\bf R})$}} \newcommand{\mbox{${\cal Y} ({\bf R} \times {\bf T}^n)$}}{\mbox{${\cal Y} ({\bf R} \times {\bf T}^n)$}} \newcommand{\mbox{$ U_{\phi}$}}{\mbox{$ U_{\phi}$}} \newcommand{\mbox{$ X^+_{s,b} $}}{\mbox{$ X^+_{s,b} $}} \newcommand{\mbox{$ Y_{s}$}}{\mbox{$ Y_{s}$}} \newcommand{\mbox{$ X^+_{s,b} $}m}{\mbox{$ X^-_{s,b} $}} \newcommand{\mbox{$ Y_{s}$}m}{\mbox{$ Y^-_{s}$}} \newcommand{\mbox{$ X^+_{s,b} $}pm}{\mbox{$ X^{\pm}_{s,b} $}} \newcommand{\mbox{$ Y_{s}$}pm}{\mbox{$ Y^{\pm}_{s}$}} \newcommand{\mbox{$ Y_{s}$}y}[1]{\mbox{$ Y_{#1} $}} \newcommand{\X}[1]{\mbox{$ X_{s,#1}$}} \newcommand{\Xpm}[1]{\mbox{$ {X^{\pm}}_{s,#1}$}} \newcommand{\XX}[2]{\mbox{$ X^+_{#1,#2} $}} \newcommand{\XXm}[2]{\mbox{$ X^-_{#1,#2} $}} \newcommand{\XXpm}[2]{\mbox{$ X^{\pm}_{#1,#2} $}} \newcommand{\XXX}[3]{\mbox{$ X_{#1,#2}(#3) $}} \newcommand{\mbox{$ \| f \| _{ X_{s,b}} $}}{\mbox{$ \| f \| _{ X_{s,b}} $}} \newcommand{\mbox{$ {\| f \|}^2_{ X_{s,b}} $}}{\mbox{$ {\| f \|}^2_{ X_{s,b}} $}} \newcommand{\nf}[1]{\mbox{$ \| f \| _{ #1} $}} \newcommand{\qf}[1]{\mbox{$ {\| f \|}^2_{ #1} $}} \newcommand{\nx}[1]{\mbox{$ \| #1 \| _{ X_{s,b}} $}} \newcommand{\qx}[1]{\mbox{$ {\| #1 \|}^2_{X_{s,b} } $}} \newcommand{\n}[2]{\mbox{$ \| #1 \| _{ #2} $}} \newcommand{\nk}[3]{\mbox{$ \| #1 \| _{ #2} ^{#3}$}} \newcommand{\q}[2]{\mbox{$ {\| #1 \|}^2_{#2} $}} \newcommand{\conv}[2]{\mbox{${\mbox{\Huge{$\ast$}}}_{_{\!\!\!\!\!\!\!\!\!{#1}}}^{^{\!\!\!\!\!\!{#2}}}$}} \pagestyle{plain} \mbox{${\longrightarrow}$}ule{\textwidth}{0.5pt} \newtheorem{lemma}{Lemma}[section] \newtheorem{bem}{Remark}[section] \newtheorem{bsp}{Example}[section] \newtheorem{definition}{Definition}[section] \newtheorem{kor}{Corollary}[section] \newtheorem{satz}{Theorem}[section] \newtheorem{prop}{Proposition}[section] \begin{abstract} In this paper we prove some local (in time) wellposedness results for nonlinear Schr\"odinger equations \[u_t - i \Delta u = N(u,\overline{u}) , \hspace{2cm}u(0)=u_0\] with rough data, that is, the initial value $u_0$ belongs to some Sobolev space of negative index. 
We obtain positive results for the following nonlinearities and data: \begin{itemize} \item $N(u,\overline{u})=\overline{u}^{2}$, \hspace{2,5cm} $u_0 \in H^{s}_x({\bf T}^2)$, \hspace{2.02cm} $s>-\frac{1}{2}$, \item $N(u,\overline{u})=\overline{u}^{3}$, \hspace{2,6cm}$u_0 \in H^{s}_x({\bf T})$, \hspace{2,27cm}$s>-\frac{1}{3}$, \item $N(u,\overline{u})=\overline{u}^{2}$, \hspace{2,5cm} $u_0 \in H^{s}_x({\bf T}^3)$, \hspace{1.88cm} $s>-\frac{3}{10}$, \item $N(u,\overline{u})=u^3$ or $N(u,\overline{u})=\overline{u}^{3}$, \hspace{0,25cm}$u_0 \in H^{s}_x({\bf R})$,\hspace{2,13cm} $s>-\frac{5}{12}$, \item $N(u,\overline{u})=u\overline{u}^{2}$, \hspace{2,3cm} $u_0 \in H^{s}_x({\bf R})$,\hspace{2,26cm} $s>-\frac{2}{5}$, \item $N(u,\overline{u})=\overline{u}^{4}$, \hspace{2,6cm}$u_0 \in H^{s}_x({\bf T})$ or $u_0 \in H^{s}_x({\bf R})$,\hspace{0,05cm} $s>-\frac{1}{6}$, \item $N(u,\overline{u})=|u|^{4}$, \hspace{2,41cm}$u_0 \in H^{s}_x({\bf R})$,\hspace{2,25cm} $s>-\frac{1}{8}$, \item $N(u,\overline{u})=u^4,\,\,\,u^3\overline{u}\,\,\,$ or $u\overline{u}^3$,\,\,\,\hspace{0,54cm}$u_0 \in H^{s}_x({\bf R})$,\hspace{2,25cm} $s>-\frac{1}{6}$. \end{itemize} The proof uses the Fourier restriction norm method. \end{abstract} \section{Introduction and main results} The first local (in time) wellposedness results below $L^2$ for the initial value problem for nonlinear Schr\"odinger equations (NLS) \[u_t - i \Delta u = N(u,\overline{u}) , \hspace{2cm}u(0)=u_0\] were published in 1996 by Kenig, Ponce and Vega in \cite{KPV96}. (Here the initial value $u_0$ is assumed to belong to some Sobolev space $H^{s}_x = H^{s}_x({\bf T}^n)$ or $H^{s}_x = H^{s}_x({\bf R}^n)$ with $s<0$.) These authors considered the nonlinearities \[N_1(u,\overline{u})=u^2,\hspace{1cm}N_2(u,\overline{u})=u\overline{u},\hspace{1cm}N_3(u,\overline{u})=\overline{u}^2\] in one space dimension. They obtained wellposedness for $N_1$ and $N_3$ under the assumptions $u_0 \in H^{s}_x({\bf R}),\hspace{0.1cm}s>-\frac{3}{4}$ or $u_0 \in H^{s}_x({\bf T}),\hspace{0.1cm}s>-\frac{1}{2}$ and for $N_2$, provided that $u_0 \in H^{s}_x({\bf R}),\hspace{0.1cm}s>-\frac{1}{4}$. This was followed in 1997 by Staffilani's paper \cite{St97}, where wellposedness for NLS with $N=N_3$ and $u_0 \in H^{s}_x({\bf R}^2),\hspace{0.1cm}s>-\frac{1}{2}$ was shown. A standard scaling argument suggests that there are even more possible candidates for the nonlinearity to allow local wellposedness below $L^2$: The critical Sobolevexponent for NLS with $N(u,\overline{u})=|u|^{\alpha}u$ obtained by scaling is $s_c=\frac{n}{2}-\frac{2}{\alpha}$. So, for $N_i$, $1\le i \le 3$, there might be local wellposedness for some $s<0$ even for space dimension $n=3$, and in one space dimension also for cubic and quartic nonlinearities positive results seem to be possible. Recently new theorems concerning this question were presented: In \cite{CDKS} Colliander, Delort, Kenig and Staffilani could prove that in the nonperiodic setting all the results on $N_i$, $1\le i \le 3$, carry over from the one- to the twodimensional case (with the same restrictions on $s$). Concerning the threedimensional nonperiodic case, Tao has shown wellposedness for NLS with the nonlinearities $N_1$ and $N_3$ for $s>-\frac{1}{2}$ and with $N_2$ for $s>-\frac{1}{4}$ (see \cite{T00}, section 11). So concerning the quadratic nonlinearities in the nonperiodic setting the question is meanwhile completely answered. 
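For orientation we record the values of this scaling exponent for the nonlinearities treated in the present paper; this is nothing but a direct evaluation of the formula $s_c=\frac{n}{2}-\frac{2}{\alpha}$ stated above, with quadratic nonlinearities corresponding to $\alpha =1$, cubic ones to $\alpha =2$ and quartic ones to $\alpha =3$:
\[ s_c=-\frac{3}{2},\,-1,\,-\frac{1}{2} \quad (\alpha =1,\; n=1,2,3), \qquad s_c=-\frac{1}{2} \quad (\alpha =2,\; n=1), \qquad s_c=-\frac{1}{6} \quad (\alpha =3,\; n=1). \]
These are the values referred to in the optimality remarks below.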
In this paper the remaining cases are considered; we obtain positive results for the following nonlinearities and data: \begin{itemize} \item $N(u,\overline{u})=\overline{u}^{2}$, \hspace{2,5cm} $u_0 \in H^{s}_x({\bf T}^2)$, \hspace{2.02cm} $s>-\frac{1}{2}$, \item $N(u,\overline{u})=\overline{u}^{3}$, \hspace{2,6cm}$u_0 \in H^{s}_x({\bf T})$, \hspace{2,32cm}$s>-\frac{1}{3}$, \item $N(u,\overline{u})=\overline{u}^{2}$, \hspace{2,5cm} $u_0 \in H^{s}_x({\bf T}^3)$, \hspace{1.9cm} $s>-\frac{3}{10}$, \item $N(u,\overline{u})=u^3$ or $N(u,\overline{u})=\overline{u}^{3}$, \hspace{0,1cm}$u_0 \in H^{s}_x({\bf R})$,\hspace{2,15cm} $s>-\frac{5}{12}$, \item $N(u,\overline{u})=u\overline{u}^{2}$, \hspace{2,3cm} $u_0 \in H^{s}_x({\bf R})$,\hspace{2,27cm} $s>-\frac{2}{5}$, \item $N(u,\overline{u})=\overline{u}^{4}$, \hspace{2,6cm}$u_0 \in H^{s}_x({\bf T})$ or $u_0 \in H^{s}_x({\bf R})$, $s>\!-\frac{1}{6}$, \item $N(u,\overline{u})=|u|^{4}$, \hspace{2,41cm}$u_0 \in H^{s}_x({\bf R})$,\hspace{2,3cm} $s>-\frac{1}{8}$, \item $N(u,\overline{u})=u^4,\,\,\,u^3\overline{u}\,\,\,$ or $u\overline{u}^3$,\,\,\,\hspace{0,4cm}$u_0 \in H^{s}_x({\bf R})$,\hspace{2,3cm} $s>-\frac{1}{6}$. \end{itemize} To obtain our results, we use the Fourier restriction norm method as it was introduced in \cite{B93} and further developed in \cite{KPV96} and \cite{GTV97}. (In order to concentrate on the crucial multilinear estimates we shall assume this method to be known; for an instructive description thereof we refer to \cite{G96}.) In particular, we will use the function spaces $\mbox{$ X^{\pm}_{s,b} $} = \exp{(\pm it\Delta)} H^b_t(H^s_x)$ equipped with the norms \begin{eqnarray*} \n{f}{\mbox{$ X^{\pm}_{s,b} $}}= \n{\exp{(\mp it\Delta)}f}{H^b_t(H^s_x)}=\n{<\xi>^s <\tau \pm |\xi|^2>^b \mbox{${\cal F}$} f}{L^2_{\xi \tau}} \\ = (\int \mu (d \xi) d \tau <\xi>^{2s}<\tau \pm |\xi|^2>^{2b}|\mbox{${\cal F}$} f (\xi,\tau)|^2)^{\frac{1}{2}} \hspace{1,5cm}. \end{eqnarray*} Here $\mbox{${\cal F}$}$ denotes the Fourier transform in space and time, $\mu$ is the Lebesgue measure on ${\bf R}^n$ in the nonperiodic case and the counting measure on ${\bf Z}^n$ in the periodic case, and we use the notation $<x>=(1+|x|^2)^{\frac{1}{2}}$. Observe that $\n{\overline{f}}{\mbox{$ X^+_{s,b} $}}=\n{f}{\mbox{$ X^-_{s,b} $}}$. Our proofs rely heavily on the following interpolation property of the $\mbox{$ X^{\pm}_{s,b} $}$-spaces: we have \[(\XXpm{s_0}{b_0},\XXpm{s_1}{b_1})_{[\theta]} = \mbox{$ X^{\pm}_{s,b} $} \,\,\,,\] whenever for $\theta \in [0,1]$ it holds that $s=(1-\theta )s_0 + \theta s_1$ and $b=(1-\theta )b_0 + \theta b_1$. Here $[\theta]$ denotes the complex interpolation method. Moreover, we will make extensive use of the fact that, with respect to the inner product on $L^2_{xt}$, the dual space of $\mbox{$ X^{\pm}_{s,b} $}$ is given by $\XXpm{-s}{-b}$.
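As an elementary illustration of how this interpolation property will be used (the particular endpoints here are chosen only for illustration): interpolating between $(s_0,b_0)=(0,\frac{1}{4})$ and $(s_1,b_1)=(\frac{1}{2},\frac{1}{2})$ with parameter $\theta=2s$, $0\le s\le\frac{1}{2}$, produces the pair
\[ \Big(s,\;(1-\theta)\tfrac{1}{4}+\theta\tfrac{1}{2}\Big)=\Big(s,\;\tfrac{1}{4}+\tfrac{s}{2}\Big), \]
which is exactly the relation between $s$ and $b$ occurring in Lemma \ref{l23} below.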
To give a precise formulation of our results, we also need the restriction norm spaces $\mbox{$ X^+_{s,b} $}pm (I) = \exp{(\pm it\Delta)} H^b_t(I, H^s_x)$ with norms \[\n{f}{\mbox{$ X^+_{s,b} $}pm (I)}= \inf \{\n{\tilde{f}}{\mbox{$ X^+_{s,b} $}pm} : \tilde{f} \in \mbox{$ X^+_{s,b} $}pm \,\,\, ,\,\,\,\tilde{f}|_I=f\} .\] Now our results read as follows: \begin{satz}Assume \begin{itemize} \item[i)] $n=1$, \hspace{3cm} $m=3$, \hspace{3cm}$s>-\frac{1}{3}$ ,\,\,or \item[ii)] $n=1$, \hspace{3cm} $m=4$,\hspace{3cm} $s>-\frac{1}{6}$ ,\,\,or \item[iii)] $n=2$, \hspace{3cm} $m=2$,\hspace{3cm} $s>-\frac{1}{2}$ ,\,\,or \item[iv)] $n=3$,\hspace{3cm} $m=2$, \hspace{3cm}$s>-\frac{3}{10}$ . \end{itemize} Then there exist $b>\frac{1}{2}$ and $T=T(\n{u_0}{H^s_x({\bf T}^n)})>0$, so that there is a unique solution $u \in \mbox{$ X^+_{s,b} $} ([-T,T])$ of the periodic boundary value problem \[u_t - i \Delta u = \overline{u}^{m} , \hspace{2cm}u(0)=u_0 \in H^{s}_x({\bf T}^n) .\] This solution satisfies $u \in C^0([-T,T],H^s_x({\bf T}^n))$ and for any $T' < T$ the mapping \\ $f: H^s_x({\bf T}^n) \mbox{${\longrightarrow}$} \mbox{$ X^+_{s,b} $} ([-T',T'])\,,\,u_0 \mapsto u$ (Data upon solution) is locally Lipschitz continuous. \end{satz} The nonlinear estimates leading to this result are contained in Theorems \mbox{${\longrightarrow}$}ef{t1}, \mbox{${\longrightarrow}$}ef{t11} and \mbox{${\longrightarrow}$}ef{t2}, see sections 4 and 5 below. For i) and iii) our results are optimal in the framework of the method and up to the endpoint, in fact there are counterexamples showing that the corresponding multilinear estimates fail for lower values of $s$, see the discussion in section 4. For ii) the scaling argument suggests the optimality of our result. The restriction on $s$ in iv) can possibly be lowered down to $-\frac{1}{2}$, cf. the remark below Thm. \mbox{${\longrightarrow}$}ef{t11}. All the following results are restricted to the onedimensional nonperiodic case: \begin{satz}Assume \begin{itemize} \item[i)] $s>-\frac{5}{12}$ \hspace{1cm}and \hspace{1cm}$N(u,\overline{u})=u^3$ or $N(u,\overline{u})=\overline{u}^3$, or \item[ii)] $s>-\frac{2}{5}$ \hspace{2cm}and \hspace{2cm}$N(u,\overline{u})=u\overline{u}^2$. \end{itemize} Then there exist $b>\frac{1}{2}$ and $T=T(\n{u_0}{H^s_x})>0$, so that there is a unique solution $u \in \mbox{$ X^+_{s,b} $} ([-T,T])$ of the initial value problem \[u_t - i \partial_x^2 u = N(u,\overline{u}) , \hspace{2cm}u(0)=u_0 \in H^{s}_x ({\bf R}).\] This solution satisfies $u \in C^0([-T,T],H^s_x)$ and for any $T' < T$ the mapping \\ $f: H^s_x \mbox{${\longrightarrow}$} \mbox{$ X^+_{s,b} $} ([-T',T'])\,,\,u_0 \mapsto u$ (Data upon solution) is locally Lipschitz continuous. \end{satz} For the corresponding trilinear estimates see Theorems \mbox{${\longrightarrow}$}ef{t111} and \mbox{${\longrightarrow}$}ef{t1111} (and the remark below) in section 4. We must leave open the question, whether or not the bound on $s$ in the above Theorem can be lowered down to $-\frac{1}{2}$, which is the scaling exponent in this case. This question is closely related to the problem concerning certain trilinear refinements of Strichartz' estimate posed in section 3. \begin{satz} Let $s > - \frac{1}{6}$ and $N(u,\overline{u})\in \{u^4, u^3\overline{u},u\overline{u}^3,\overline{u}^4\}$. 
Then there exist $b>\frac{1}{2}$ and $T=T(\n{u_0}{H^s_x({\bf R})})>0$, so that there is a unique solution $u \in \mbox{$ X^+_{s,b} $} ([-T,T])$ of the initial value problem \[u_t - i \partial_x^2 u = N(u,\overline{u}), \hspace{2cm}u(0)=u_0 \in H^{s}_x ({\bf R}).\] This solution satisfies $u \in C^0([-T,T],H^s_x({\bf R}))$ and for any $T' < T$ the mapping $f: H^s_x({\bf R}) \mbox{${\longrightarrow}$} \mbox{$ X^+_{s,b} $} ([-T',T'])\,,\,u_0 \mapsto u$ (Data upon solution) is locally Lipschitz continuous. The same statement holds true for $s > - \frac{1}{8}$ and $N(u,\overline{u})=|u|^4$. \end{satz} See Theorems \mbox{${\longrightarrow}$}ef{t2} and \mbox{${\longrightarrow}$}ef{t22} as well as proposition \mbox{${\longrightarrow}$}ef{p51} in section 5 for the crucial nonlinear estimates. The $-\frac{1}{6}$-results should be optimal by scaling, while for the $|u|^4$-nonlinearity the corresponding estimate fails for $s<-\frac{1}{8}$, cf. example \mbox{${\longrightarrow}$}ef{ex53}. Further counterexamples concerned with the periodic case are also given in section 5. {\bf{Acknowledgement:}} I want to thank Professor Hartmut Pecher for numerous helpful conversations. \section{Preparatory lemmas} \subsection{The periodic case} To prove our results concerning the space-periodic problems, we need the following Strichartz type estimates due to Bourgain: \begin{lemma}\label{l21} Let $n=1$. Then for all $\epsilon>0$ and $b>\frac{1}{2}$ there exists a constant $c = c(\epsilon,b)$, so that the following estimate holds: \[\nf{L^6_t({\bf R},L^6_x ({\bf T}))} \leq c \nf{\XX{\epsilon}{b}}\,\,.\] \end{lemma} This is essentially Prop. 2.36 in \cite{B93}. For a proof in the form given here, see \cite{G00}, Lemma 2.2. \begin{kor}\label{k21} Let $n=1$. Then for all $\epsilon>0$ and $b>\frac{1}{2}$ there exists a constant $c = c(\epsilon,b)$, so that the following estimate holds: \[\nf{L^8_t({\bf R},L^4_x ({\bf T}))} \leq c \nf{\XX{\epsilon}{b}}\,\,.\] \end{kor} Proof: This follows by interpolation between the above lemma and the Sobolev embedding theorem in the time variable. \begin{lemma}\label{l22} i) Let $n=2$. Then for all $\epsilon>0$ and $b>\frac{1}{2}$ there exists a constant $c = c(\epsilon,b)$, so that the following estimate holds: \[\nf{L^4_t({\bf R},L^4_x ({\bf T}^2))} \leq c \nf{\XX{\epsilon}{b}}\,\,.\] ii) Let $n=3$. Then for all $s>\frac{1}{4}$ and $b>\frac{1}{2}$ there exists a constant $c = c(s,b)$, so that the following estimate holds: \[\nf{L^4_t({\bf R},L^4_x ({\bf T}^3))} \leq c \nf{\XX{s}{b}}\,\,.\] \end{lemma} This is essentially the two- respectively the threedimensional case of Prop. 3.6 in \cite{B93}, see also \cite{G00}, Lemma 2.3. \begin{kor}\label{k22} Let $n=3$. Then for all $s>\frac{1}{5}$ and $b>\frac{9}{20}$ there exists a constant $c = c(s,b)$, so that the following estimate holds: \[\nf{L^4_t({\bf R},L^{\frac{10}{3}}_x ({\bf T}^3))} \leq c \nf{\XX{s}{b}}\,\,.\] \end{kor} Proof: This follows by interpolation between part ii) of the above lemma and the embedding $\XX{0}{\frac{1}{4}} \subset L^4_t({\bf R},L^2_x ({\bf T}^3))$. $Remark:$ Because of $\n{f}{L^p_t(L^q_x)}=\n{\overline{f}}{L^p_t(L^q_x)}$ and $\n{f}{\mbox{$ X^+_{s,b} $}m}=\n{\overline{f}}{\mbox{$ X^+_{s,b} $}}$ the estimates stated in this subsection hold for $\mbox{$ X^+_{s,b} $}m$ instead of $\mbox{$ X^+_{s,b} $}$. Moreover they are also valid (with $\epsilon =0$) for the corresponding spaces of nonperiodic functions: This is a direct consequence of the Strichartz estimates and \cite{GTV97}, Lemma 2.3. 
\subsection{The onedimensional nonperiodic case} \begin{lemma}\label{l23} Let $n=1$. Then for all $b_0>\frac{1}{2} \ge s \ge 0$, the following estimates are valid: \begin{itemize} \item[i)] $\n{u \overline{v}}{L^2_t(H_x^s)}\le c\n{v}{\XX{0}{b_0}} \n{u}{\XX{0}{b}}$, provided $b>\frac{1}{4}+\frac{s}{2}$, \item[ii)] $\n{u \overline{v}}{L^p_t(H_x^s)}\le c\n{v}{\XX{0}{b_0}} \n{u}{\XX{0}{b_0}}$, provided $\frac{1}{p}=\frac{1}{4}+\frac{s}{2}$, \item[iii)] $\n{vw}{\XX{\sigma}{b'}} \le c \n{v}{\XX{\sigma}{b_0}}\n{w}{L^2_t(H_x^{-s-\sigma})}$, provided $\sigma \le 0$, $b'<-\frac{1}{4}-\frac{s}{2}$. \end{itemize} \end{lemma} Proof: We start from the following estimate due to Bekiranov, Ogawa and Ponce \[\n{u \overline{v}}{L^2_t(\dot{H}_x^{\frac{1}{2}})}\le c\n{u}{\XX{0}{b}} \n{v}{\XX{0}{b}},\hspace{1cm}b>\frac{1}{2}\](see \cite{BOP98}, Lemma 3.2). Combined with\[\n{u \overline{v}}{L^2_{xt}}\le c\n{u}{\XX{0}{b}}\n{v}{\XX{0}{b}},\hspace{1cm}b>\frac{3}{8},\] which follows from Strichartz' estimate, this gives \begin{equation}\label{21} \n{u \overline{v}}{L^2_t(H_x^{\frac{1}{2}})}\le c\n{v}{\XX{0}{b_0}} \n{u}{\XX{0}{b}},\hspace{1cm}b_0,b>\frac{1}{2}. \end{equation} On the other hand, by H\"older and again by Strichartz' estimate we have \begin{equation}\label{22} \n{u \overline{v}}{L^2_{xt}}\le c\n{v}{L^6_{xt}}\n{u}{L^3_{xt}}\le c\n{v}{\XX{0}{b_0}} \n{u}{\XX{0}{b}},\hspace{1cm}b>\frac{1}{4},b_0>\frac{1}{2}. \end{equation} Now, by interpolation between (\mbox{${\longrightarrow}$}ef{21}) and (\mbox{${\longrightarrow}$}ef{22}), we obtain part i). To see part ii), we interpolate (\mbox{${\longrightarrow}$}ef{21}) with \[\n{u \overline{v}}{L_t^4(L_x^2)}\le \n{v}{L_t^8(L_x^4)}\n{u}{L_t^8(L_x^4)} \le c\n{v}{\XX{0}{b_0}} \n{u}{\XX{0}{b_0}}, \hspace{1cm}b_0>\frac{1}{2},\] which follows from the $L_t^8(L_x^4)$-Strichartz-estimate. Next we dualize part i) to obtain part iii) for $\sigma =0$. For $\sigma < 0$, because of $<\mbox{$ X^+_{s,b} $}i_1> \le c <\mbox{$ X^+_{s,b} $}i><\mbox{$ X^+_{s,b} $}i_2>$, we then have \[\n{vw}{\XX{\sigma}{b'}} \le c\n{(J^{\sigma}v)(J^{-\sigma}w)}{\XX{0}{b'}} \le c\n{v}{\XX{\sigma}{b_0}}\n{w}{L^2_t(H_x^{-s-\sigma})}.\footnotemark\] \footnotetext{Here and in the sequel $J^{\sigma}$ ($I^{\sigma}$) denotes the Bessel (Riesz) potential of order $-\sigma$.} $ \Box$ In order to formulate and prove an analogue for Lemma \mbox{${\longrightarrow}$}ef{l23} in the case of two unbared factors, we introduce some bilinear pseudodifferential operators: \begin{definition} We define $I_-^s (f,g)$ by its Fourier-transform (in the space variable) \[\mbox{${\cal F}$}_x I_-^s (f,g) (\mbox{$ X^+_{s,b} $}i) := \int_{\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2=\mbox{$ X^+_{s,b} $}i}d\mbox{$ X^+_{s,b} $}i_1|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2|^s \mbox{${\cal F}$}_xf(\mbox{$ X^+_{s,b} $}i_1)\mbox{${\cal F}$}_xg(\mbox{$ X^+_{s,b} $}i_2).\] If the expression $|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2|^s$ in the integral is replaced by $<\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2>^s$, the corresponding operator will be called $J_-^s (f,g)$. 
Similarly we define $I_+^s (f,g)$ and $J_+^s (f,g)$ by \[\mbox{${\cal F}$}_x I_+^s (f,g) (\mbox{$ X^+_{s,b} $}i) := \int_{\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2=\mbox{$ X^+_{s,b} $}i}d\mbox{$ X^+_{s,b} $}i_1|\mbox{$ X^+_{s,b} $}i_1+2\mbox{$ X^+_{s,b} $}i_2|^s \mbox{${\cal F}$}_xf(\mbox{$ X^+_{s,b} $}i_1)\mbox{${\cal F}$}_xg(\mbox{$ X^+_{s,b} $}i_2).\] \end{definition} $Remark$ $(simple \,\,\,properties):$ \begin{itemize} \item[i)] For functions $u$, $v$ depending on space- and time-variables we have \[\mbox{${\cal F}$} I_-^s (u,v) (\mbox{$ X^+_{s,b} $}i,\tau) := \int_{\stackrel{\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2=\mbox{$ X^+_{s,b} $}i}{\tau_1+\tau_2=\tau}}d\mbox{$ X^+_{s,b} $}i_1 d\tau_1|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2|^s \mbox{${\cal F}$} u(\mbox{$ X^+_{s,b} $}i_1,\tau_1)\mbox{${\cal F}$} v(\mbox{$ X^+_{s,b} $}i_2,\tau_2)\] and similar Integrals for the other operators. \item[ii)] $I_-^s (f,g)$ always coincides with $I_-^s (g,f)$ (and $J_-^s (f,g)$ with $J_-^s (g,f)$), since we can exchange $\mbox{$ X^+_{s,b} $}i_1$ and $\mbox{$ X^+_{s,b} $}i_2$ in the corresponding integral, while in general we will have $I_+^s (f,g) \neq I_+^s (g,f)$ (and $J_+^s (f,g) \neq J_+^s (g,f)$). \item[iii)]Fixing $u$ and $s$ we define the linear operators $M$ and $N$ by \[Mv:=J_-^s (u,v)\hspace{1cm} \mbox{and}\hspace{1cm}Nw:=J_+^s (w,\overline{u}).\] Then it is easily checked that $M$ and $N$ are formally adjoint with respect to the inner product on $L^2_{xt}$. \end{itemize} Now we have the following bilinear Strichartz-type estimate: \begin{lemma}\label{l24} \[\n{I_-^{\frac{1}{2}}(e^{it\partial ^2}u_1,e^{it\partial ^2}u_2)}{L^2_{xt}} \le c \n{u_1}{L^2_{x}}\n{u_2}{L^2_{x}}\] \end{lemma} Proof: We will write for short $\hat{u}$ instead of $\mbox{${\cal F}$}_x u$ and $\int_* d\mbox{$ X^+_{s,b} $}i_1$ for $\int_{\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2=\mbox{$ X^+_{s,b} $}i}d\mbox{$ X^+_{s,b} $}i_1$. Then, using Fourier-Plancherel in the space variable we obtain: \begin{eqnarray*} && \q{I_-^{\frac{1}{2}}(e^{it\partial ^2}u_1,e^{it\partial ^2}u_2)}{L^2_{xt}} \\ &=& c\int d\mbox{$ X^+_{s,b} $}i dt |\int_* d\mbox{$ X^+_{s,b} $}i_1 |\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2|^{\frac{1}{2}} e^{-it(\mbox{$ X^+_{s,b} $}i_1^2 + \mbox{$ X^+_{s,b} $}i_2^2)}\hat{u}_1(\mbox{$ X^+_{s,b} $}i_1)\hat{u}_2(\mbox{$ X^+_{s,b} $}i_2)|^2 \\ &=& c\int d\mbox{$ X^+_{s,b} $}i dt\int_* d\mbox{$ X^+_{s,b} $}i_1 d \eta_1 e^{-it(\mbox{$ X^+_{s,b} $}i_1^2 + \mbox{$ X^+_{s,b} $}i_2^2- \eta_1^2 - \eta_2^2)}(|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2||\eta_1-\eta_2|)^{\frac{1}{2}}\prod_{i=1}^2 \hat{u_i}(\mbox{$ X^+_{s,b} $}i_i) \overline{\hat{u_i}(\eta_i)} \\ &=& c\int d\mbox{$ X^+_{s,b} $}i \int_* d\mbox{$ X^+_{s,b} $}i_1 d \eta_1 \delta (\eta_1^2 + \eta_2^2 - \mbox{$ X^+_{s,b} $}i_1^2 - \mbox{$ X^+_{s,b} $}i_2^2)(|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2||\eta_1-\eta_2|)^{\frac{1}{2}}\prod_{i=1}^2 \hat{u_i}(\mbox{$ X^+_{s,b} $}i_i) \overline{\hat{u_i}(\eta_i)} \\ &=& c\int d\mbox{$ X^+_{s,b} $}i \int_* d\mbox{$ X^+_{s,b} $}i_1 d \eta_1 \delta (2(\eta_1^2 - \mbox{$ X^+_{s,b} $}i_1^2 + \mbox{$ X^+_{s,b} $}i(\mbox{$ X^+_{s,b} $}i_1-\eta_1)))(|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2||\eta_1-\eta_2|)^{\frac{1}{2}}\prod_{i=1}^2 \hat{u_i}(\mbox{$ X^+_{s,b} $}i_i) \overline{\hat{u_i}(\eta_i)}. 
\end{eqnarray*} Now we use $\delta(g(x)) = \sum_n \frac{1}{|g'(x_n)|} \delta(x-x_n)$, where the sum is taken over all simple zeros of $g$, in our case: \[g(x)= 2(x^2 + \mbox{$ X^+_{s,b} $}i (\mbox{$ X^+_{s,b} $}i_1-x)-\mbox{$ X^+_{s,b} $}i_1^2)\] with the zeros $x_1 =\mbox{$ X^+_{s,b} $}i_1$ and $x_2 =\mbox{$ X^+_{s,b} $}i-\mbox{$ X^+_{s,b} $}i_1$, hence $g'(x_1)=2(2\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i)$ respectively $g'(x_2)=2(\mbox{$ X^+_{s,b} $}i-2\mbox{$ X^+_{s,b} $}i_1)$. So the last expression is equal to \begin{eqnarray*} &&c\int d\mbox{$ X^+_{s,b} $}i \int_* d\mbox{$ X^+_{s,b} $}i_1 d \eta_1\frac{1}{|2\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i|}\delta(\eta_1-\mbox{$ X^+_{s,b} $}i_1)(|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2||\eta_1-\eta_2|)^{\frac{1}{2}}\prod_{i=1}^2 \hat{u_i}(\mbox{$ X^+_{s,b} $}i_i) \overline{\hat{u_i}(\eta_i)}\\ &+& c\int d\mbox{$ X^+_{s,b} $}i \int_* d\mbox{$ X^+_{s,b} $}i_1 d \eta_1\frac{1}{|2\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i|}\delta(\eta_1-(\mbox{$ X^+_{s,b} $}i-\mbox{$ X^+_{s,b} $}i_1))(|\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2||\eta_1-\eta_2|)^{\frac{1}{2}}\prod_{i=1}^2 \hat{u_i}(\mbox{$ X^+_{s,b} $}i_i) \overline{\hat{u_i}(\eta_i)}\\ &=&c\int d\mbox{$ X^+_{s,b} $}i \int_* d\mbox{$ X^+_{s,b} $}i_1 \prod_{i=1}^2|\hat{u_i}(\mbox{$ X^+_{s,b} $}i_i)|^2 + c\int d\mbox{$ X^+_{s,b} $}i \int_* d\mbox{$ X^+_{s,b} $}i_1\hat{u}_1(\mbox{$ X^+_{s,b} $}i_1)\overline{\hat{u}_1}(\mbox{$ X^+_{s,b} $}i_2)\hat{u}_2(\mbox{$ X^+_{s,b} $}i_2)\overline{\hat{u}_2}(\mbox{$ X^+_{s,b} $}i_1)\\ &\le& c (\prod_{i=1}^2 \q{u_i}{L^2_x} + \q{\hat{u}_1\hat{u}_2}{L^1_{\mbox{$ X^+_{s,b} $}i}}) \le c \prod_{i=1}^2 \q{u_i}{L^2_x}. \end{eqnarray*} $ \Box$ \begin{kor}\label{k23} Let $b_0 > \frac{1}{2}$ and $0 \le s \le \frac{1}{2}$. Then the following estimates hold true: \begin{itemize} \item[i)] $\n{J^s_- (u,v)}{L^2_{xt}}\le c \n{u}{\XX{0}{b_0}}\n{v}{\XX{0}{b}}$, provided $b>\frac{1}{4}+\frac{s}{2}$, \item[ii)] $\n{J^s_+ (v,\overline{u})}{\XX{0}{b'}}\le c \n{u}{\XX{0}{b_0}}\n{v}{L^2_{xt}}$, provided $b'>-\frac{1}{4}-\frac{s}{2}$. \end{itemize} \end{kor} $Remark:$ In i) we may replace $J^s_- (u,v)$ by $J^s_- (\overline{u},\overline{v})$, in fact a short computation shows that $J^s_- (\overline{u},\overline{v})=\overline{J^s_- (u,v)}$. Proof: Arguing as in the proof of Lemma 2.3 in \cite{GTV97}, we obtain from the above Lemma \[\n{I^{\frac{1}{2}}_-(u,v)}{L^2_{xt}} \le c \n{u}{\XX{0}{b_0}}\n{v}{\XX{0}{b}},\hspace{1cm}b,b_0 >\frac{1}{2}.\] Combining this with \[\n{uv}{L^2_{xt}}\le \n{u}{L^6_{xt}}\n{v}{L^3_{xt}}\le c\n{u}{\XX{0}{b_0}} \n{v}{\XX{0}{b}},\hspace{1cm}b>\frac{1}{4},b_0>\frac{1}{2},\] we obtain i) for $s=\frac{1}{2}$ and $s=0$. To see i) for $0<s<\frac{1}{2}$, $b>\frac{1}{4}+\frac{s}{2}$, we write $w=\Lambda^b v$, where $\Lambda^b$ is defined by $\mbox{${\cal F}$} \Lambda^b v(\mbox{$ X^+_{s,b} $}i,\tau)=<\tau+\mbox{$ X^+_{s,b} $}i^2>^b \mbox{${\cal F}$} v(\mbox{$ X^+_{s,b} $}i,\tau)$. 
Then we have to show that \begin{equation}\label{23} \n{J^s_- (u,\Lambda^{-b}w)}{L^2_{xt}}\le c \n{u}{\XX{0}{b_0}}\n{w}{L^2_{xt}}, \end{equation} where \[\n{J^s_- (u,\Lambda^{-b}w)}{L^2_{xt}}=\n{\int_{\stackrel{\tau_1+\tau_2=\tau}{\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2=\mbox{$ X^+_{s,b} $}i}}<\!\!\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2\!\!>^s\mbox{${\cal F}$} u(\mbox{$ X^+_{s,b} $}i_1,\tau_1)<\!\!\tau_2+\mbox{$ X^+_{s,b} $}i_2^2\!\!>^{-b}\mbox{${\cal F}$} w(\mbox{$ X^+_{s,b} $}i_2,\tau_2)}{L^2_{\mbox{$ X^+_{s,b} $}i \tau}}.\] Notice that, by the preceding, (\mbox{${\longrightarrow}$}ef{23}) is already known in the limiting cases $(s,b)=(0,\frac{1}{4}+\epsilon))$ and $(s,b)=(\frac{1}{2},\frac{1}{2}+\epsilon)$, $\epsilon>0$. Choosing $\epsilon=b-\frac{1}{4}-\frac{s}{2}$ we have \[<\!\!\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2\!\!>^s <\!\!\tau_2+\mbox{$ X^+_{s,b} $}i_2^2\!\!>^{-b} \le <\!\!\tau_2+\mbox{$ X^+_{s,b} $}i_2^2\!\!>^{-\frac{1}{4}-\epsilon} + <\!\!\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2\!\!>^{\frac{1}{2}}<\!\!\tau_2+\mbox{$ X^+_{s,b} $}i_2^2\!\!>^{-\frac{1}{2}-\epsilon}\] and hence \[\n{J^s_- (u,\Lambda^{-b}w)}{L^2_{xt}}\le \n{ u(\Lambda^{-\frac{1}{4}-\epsilon}w)}{L^2_{xt}}+\n{J^{\frac{1}{2}}_- (u,\Lambda^{-\frac{1}{2}-\epsilon}w)}{L^2_{xt}}\le c\n{u}{\XX{0}{b_0}}\n{w}{L^2_{xt}}.\] Finally, ii) follows from i) by duality (cf. part iii) of the remark on simple properties of $J^s_-$). $ \Box$ \section{Trilinear refinements of the onedimensional $L^6$-Strichartz-estimate in the nonperiodic case} In \cite{B98} Bourgain showed the following bilinear refinement of the $L^4_{xt}$-Strichartz-estimate in two space dimensions \[\n{u_1u_2}{L^2_t(H^s_x)}\le c \n{u_1}{\XX{s+\epsilon}{b}}\n{u_2}{\XX{0}{b}},\] provided $0 \le s < \frac{1}{2}<b$, $\epsilon>0$. The exponent in the onedimensional Strichartz estimate is $6$, so the question for trilinear refinements of this estimate comes up naturally. In this section we shall give a partial answer to this question, starting with the following fairly easy application of Kato's smoothing effect: \begin{lemma}\label{l31} Let $0\le s\le \frac{1}{4}$, $b>\frac{1}{2}$. Then the estimate \[\n{u_1 u_2 u_3}{L^2_{xt}} \le c \n{u_1}{\XX{s}{b}}\n{u_2}{\XX{-s}{b}}\n{u_3}{\XX{0}{b}}\] holds true. \end{lemma} Proof: For $s=0$ this follows from standard Strichartz' estimate, for $s=\frac{1}{4}$ we argue as follows: Interpolation between the $L^6$-estimate and the Kato smoothing effect \[\n{e^{it\partial ^2}u_0}{L_x^{\infty}(L_t^2)}\le c \n{u_0}{\dot{H}_x^{-\frac{1}{2}}}\] (see Thm. 4.1 in \cite{KPV91}) with $\theta =\frac{1}{2}$ yields \[\n{I^{\frac{1}{4}} e^{it\partial ^2}u_0}{L_x^{12}(L_t^3)}\le c\n{u_0}{L^2_{x}}.\] Now Lemma 2.3 in \cite{GTV97} gives \begin{equation}\label{31} \n{I^{\frac{1}{4}}u}{L_x^{12}(L_t^3)} \le c \n{u}{\XX{0}{b}}, \,\,\,\,b>\frac{1}{2}. \end{equation} On the other hand by Thm. 2.5 in \cite{KPV91} we get \[\n{e^{it\partial ^2}u_0}{L_x^4(L_t^{\infty})}\le c \n{u_0}{\dot{H}_x^{\frac{1}{4}}}\] and thus \begin{equation}\label{32} \n{u}{L_x^4(L_t^{\infty})}\le c \n{I^{\frac{1}{4}}u}{\XX{0}{b}}\le c \n{u}{\XX{\frac{1}{4}}{b}}, \,\,\,\,b>\frac{1}{2}. 
\end{equation} Using the projections $p$ and $P$ defined by $p=\mbox{${\cal F}$}^{-1}\chi_{\{|\mbox{$ X^+_{s,b} $}i| \le 1\}}\mbox{${\cal F}$}$ and $P= Id-p$, we now have \[\n{u_1u_2}{L^3_{xt}}\le \n{u_1pu_2}{L^3_{xt}}+\n{u_1Pu_2}{L^3_{xt}}=:N_1+N_2\] with \[N_1 \le \n{u_1}{L^6_{xt}}\n{pu_2}{L^6_{xt}}\le c\n{u_1}{\XX{0}{b}}\n{pu_2}{\XX{0}{b}}\le c\n{u_1}{\XX{\frac{1}{4}}{b}}\n{u_2}{\XX{-\frac{1}{4}}{b}}.\] For $N_2$ we use (\mbox{${\longrightarrow}$}ef{31}) and (\mbox{${\longrightarrow}$}ef{32}) to obtain \begin{eqnarray*} \n{u_1Pu_2}{L^3_{xt}}&\le &\n{u_1}{L_x^4(L_t^{\infty})}\n{Pu_2}{L_x^{12}(L_t^3)}\\ &\le & c \n{u_1}{\XX{\frac{1}{4}}{b}}\n{I^{-\frac{1}{4}}Pu_2}{\XX{0}{b}}\le c\n{u_1}{\XX{\frac{1}{4}}{b}}\n{u_2}{\XX{-\frac{1}{4}}{b}}. \end{eqnarray*} Now, using H\"older and standard Strichartz again, from this we obtain the claim for $s=\frac{1}{4}$. For $0<s<\frac{1}{4}$ the result then follows by multilinear interpolation, see Thm. 4.4.1 in \cite{BL}. $ \Box$ {\bf Problem:} Does the above estimate hold for $\frac{1}{4}<s<\frac{1}{2}$ ? \begin{kor}\label{k31} Assume $0\le s\le \frac{1}{4}$ and $b>\frac{1}{2}$. Let $\tilde{u}$ denote $u$ or $\overline{u}$. Then the following estimates are valid: \begin{itemize} \item[i)] $\n{\tilde{u}_1 \tilde{u}_2 \tilde{u}_3}{L^2_{xt}} \le c \n{u_1}{\XX{s}{b}}\n{u_2}{\XX{-s}{b}}\n{u_3}{\XX{0}{b}}$, \item[ii)] $\n{\tilde{u}_1 \tilde{u}_2 \tilde{u}_3}{\XX{-s}{-b}} \le c \n{u_1}{L^2_{xt}}\n{u_2}{\XX{-s}{b}}\n{u_3}{\XX{0}{b}}$, \item[iii)] $\n{\tilde{u}_1 \tilde{u}_2 \tilde{u}_3}{L^2_t (H^s_x)} \le c \n{u_1}{\XX{s}{b}}\n{u_2}{\XX{0}{b}}\n{u_3}{\XX{0}{b}}$, \item[iv)] $\n{\tilde{u}_1 \tilde{u}_2 \tilde{u}_3}{\XX{-s}{-b}} \le c \n{u_1}{L^2_t (H^{-s}_x)}\n{u_2}{\XX{0}{b}}\n{u_3}{\XX{0}{b}}$. \end{itemize} \end{kor} Proof: Clearly, in $\n{u_1 u_2 u_3}{L^2_{xt}}$ any factor $u_i$ may be replaced by $\overline{u}_i$. This gives i). From this we obtain ii) by duality. Writing $<\mbox{$ X^+_{s,b} $}i> \le <\mbox{$ X^+_{s,b} $}i_1>+<\mbox{$ X^+_{s,b} $}i_2>+<\mbox{$ X^+_{s,b} $}i_3>$ and applying i) twice (plus standard Strichartz), part iii) can be seen. Dualizing again, part iv) follows. $ \Box$ In some cases, using the bilinear estimates of the previous section, we can prove better $L^2_t(H^s_x)$-estimates: \begin{lemma}\label{l32} \begin{itemize} \item[i)] For $|s|<\frac{1}{2}<b$ the following estimate holds: \[\n{u_1 \overline{u}_2u_3}{L_t^2(H_x^s)} \le c\n{u_1}{\XX{0}{b}}\n{u_2}{\XX{0}{b}}\n{u_3}{\XX{s}{b}}\] \item[ii)] For $-\frac{1}{2}<s \le 0$, $b>\frac{1}{2}$ the following is valid: \[\n{u_1 \overline{u}_2u_3}{L_t^2(H^{s} _x)} \le c\n{u_1}{\XX{0}{b}}\n{u_2}{\XX{s}{b}}\n{u_3}{\XX{0}{b}}\] \end{itemize} \end{lemma} $Remark:$ Using multilinear interpolation (Thm. 4.4.1 in \cite{BL}) we obtain \[\n{u_1 \overline{u}_2u_3}{L_t^2(H^{s} _x)} \le c\n{u_1}{\XX{s_1}{b}}\n{u_2}{\XX{s_2}{b}}\n{u_3}{\XX{s_3}{b}},\] provided $-\frac{1}{2}<s \le 0$, $b>\frac{1}{2}$, $s_{1,2,3} \le 0$ and $s_1+s_2+s_3=s$. Moreover, we may replace $u_1 \overline{u}_2u_3$ on the left hand side by $\overline{u}_1 u_2 \overline{u}_3$. Proof: First we show i) for $s > 0$. From $<\mbox{$ X^+_{s,b} $}i>\le c (<\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2>+<\mbox{$ X^+_{s,b} $}i_3>)$ it follows that \[\n{u_1 \overline{u}_2u_3}{L_t^2(H_x^s)} \le c\n{J^s(u_1 \overline{u}_2)u_3}{L_{xt}^2} + \n{u_1 \overline{u}_2J^su_3}{L_{xt}^2} =: c (N_1+N_2).\] Using the standard $L^6_{xt}$-Strichartz-estimate we see that $N_2$ is bounded by the right hand side of i). 
For $N_1$ we have with $s=\frac{1}{p}$, $\frac{1}{2}-s=\frac{1}{q}$ ($\mbox{${\Rightarrow}$} H^s \subset L^q,\,\,\,H^{\frac{1}{2}} \subset H^{s,p}$): \begin{eqnarray*} N_1 &\le& c \n{J^s(u_1 \overline{u}_2)}{L_t^2(L_x^p)}\n{u_3}{L_t^{\infty}(L_x^q)}\\ &\le& c \n{u_1 \overline{u}_2}{L_t^2(H^{\frac{1}{2}}_x)}\n{u_3}{L_t^{\infty}(H^s_x)}\\ &\le& c \n{u_1}{\XX{0}{b}}\n{u_2}{\XX{0}{b}}\n{u_3}{\XX{s}{b}} \end{eqnarray*} by Lemma \mbox{${\longrightarrow}$}ef{l23}, i), and the Sobolev embedding in the time variable. Next we consider i) for $s<0$. Writing $<\mbox{$ X^+_{s,b} $}i_3>\le c(<\mbox{$ X^+_{s,b} $}i>+<\mbox{$ X^+_{s,b} $}i_1 +\mbox{$ X^+_{s,b} $}i_2>)$, we obtain \[\n{u_1 \overline{u}_2u_3}{L_t^2(H_x^s)} \le c\n{u_1 \overline{u}_2J^s u_3}{L_{xt}^2} + \n{J^{-s}(u_1 \overline{u}_2)J^su_3}{L_t^2(H^s_x)} =: c (N_1+N_2).\] To estimate $N_1$ we use again the standard $L^6_{xt}$-Strichartz estimate. For $N_2$ we use the embedding $L^q \subset H^s,\,\,s-\frac{1}{2}=-\frac{1}{q}$ and H\"older's inequality: \begin{eqnarray*} N_2 &\le& c \n{J^{-s}(u_1 \overline{u}_2)J^su_3}{L_t^2(L^q_x)}\\ &\le& c \n{J^{-s}(u_1 \overline{u}_2)}{L_t^2(L^p_x)}\n{u_3}{L_t^{\infty}(H^s_x)}, \end{eqnarray*} where $\frac{1}{q}=\frac{1}{2}+\frac{1}{p}$. The second factor is bounded by $c\n{u_3}{\XX{s}{b}}$ because of Sobolev's embedding Theorem in the time variable. For the first factor we use the embedding $H^{\frac{1}{2}}\subset H^{-s,p}$ (observe that $s=-\frac{1}{p}$) and again Lemma \mbox{${\longrightarrow}$}ef{l23}, i). We conclude the proof by showing ii): Here we have $\mbox{$ X^+_{s,b} $}i =(\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2)+(\mbox{$ X^+_{s,b} $}i_3+\mbox{$ X^+_{s,b} $}i_2)-\mbox{$ X^+_{s,b} $}i_2$ respectively $<\mbox{$ X^+_{s,b} $}i_2>\le c(<\mbox{$ X^+_{s,b} $}i>+<\mbox{$ X^+_{s,b} $}i_1 +\mbox{$ X^+_{s,b} $}i_2>+<\mbox{$ X^+_{s,b} $}i_3 +\mbox{$ X^+_{s,b} $}i_2>)$ and thus \[\n{u_1 \overline{u}_2u_3}{L_t^2(H_x^s)} \le c(N_1 +N_2 +N_3)\] with \[N_1 =\n{u_1 (J^s\overline{u}_2) u_3}{L_{xt}^2} \le c\n{u_1}{\XX{0}{b}}\n{u_2}{\XX{s}{b}}\n{u_3}{\XX{0}{b}}\] (by standard Strichartz) and \[N_2= \n{J^{-s}(u_1 J^s\overline{u}_2)u_3}{L_t^2(H^s_x)},\,\,\,\,\,N_3=\n{u_1 J^{-s}((J^s\overline{u}_2)u_3)}{L_t^2(H^s_x)}.\] By symmetry between $u_1$ and $u_3$ it is now sufficient to estimate $N_2$: Using the embedding $L^q \subset H^s,\,\,\,s-\frac{1}{2}=-\frac{1}{q}$, H\"older's inequality and the embedding $H^{\frac{1}{2}}\subset H^{-s,p},\,\,\,-s=\frac{1}{p}$ we obtain \begin{eqnarray*} N_2 &\le& c \n{J^{-s}(u_1 J^s\overline{u}_2)u_3}{L_t^2(L^q_x)}\\ &\le & c \n{J^{-s}(u_1 J^s\overline{u}_2)}{L_t^2(L^p_x)}\n{u_3}{L_t^{\infty}(L^2_x)}\\ &\le & c \n{J^{\frac{1}{2}}(u_1 J^s\overline{u}_2)}{L^2_{xt}}\n{u_3}{L_t^{\infty}(L^2_x)}. \end{eqnarray*} Again, Lemma \mbox{${\longrightarrow}$}ef{l23}, i), and the Sobolev embedding in $t$ give the desired bound. $ \Box$ \begin{lemma}\label{l33} For $-\frac{1}{2}<s \le 0$, $b>\frac{1}{2}$ the following holds true: \[\n{u_1 u_2 u_3}{L_t^2(H^{s} _x)} \le c\n{u_1}{\XX{s}{b}}\n{u_2}{\XX{0}{b}}\n{u_3}{\XX{0}{b}}\] \end{lemma} $Remark:$ Again we may use multilinear interpolation to get \[\n{u_1 u_2u_3}{L_t^2(H^{s} _x)} \le c\n{u_1}{\XX{s_1}{b}}\n{u_2}{\XX{s_2}{b}}\n{u_3}{\XX{s_3}{b}}\] for $-\frac{1}{2}<s \le 0$, $b>\frac{1}{2}$, $s_{1,2,3} \le 0$ and $s_1+s_2+s_3=s$. The same holds true with $u_1 u_2u_3$ replaced by $\overline{u}_1 \overline{u}_2 \overline{u}_3$. 
Proof: It is easily checked that for $\mbox{${\longrightarrow}$}ho, \lambda \ge 0$ the inequality \[<\mbox{$ X^+_{s,b} $}i_1>^{\mbox{${\longrightarrow}$}ho} \le c(<\mbox{$ X^+_{s,b} $}i>^{\mbox{${\longrightarrow}$}ho}+\frac{<\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_2>^{\mbox{${\longrightarrow}$}ho+\lambda}}{<\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2>^{\lambda}}+\frac{<\mbox{$ X^+_{s,b} $}i_1-\mbox{$ X^+_{s,b} $}i_3>^{\mbox{${\longrightarrow}$}ho+\lambda}}{<\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_3>^{\lambda}})\] is valid, if $\mbox{$ X^+_{s,b} $}i=\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3$. Choosing $\mbox{${\longrightarrow}$}ho = -s$ and $\lambda = s+\frac{1}{2}$ it follows, that \[\n{u_1 u_2 u_3}{L_t^2(H^{s} _x)} \le c (N_1+N_2+N_3),\] where \[N_1=\n{(J^su_1) u_2 u_3}{L^2_{xt}} \le c \n{u_1}{\XX{s}{b}}\n{u_2}{\XX{0}{b}}\n{u_3}{\XX{0}{b}}\] (by standard Strichartz) and \[N_2= \n{(J^{-\lambda}J_-^{\frac{1}{2}}(J^su_1, u_2))u_3}{L_t^2(H^s_x)},\,\,\,\,\,N_3=\n{(J^{-\lambda}J_-^{\frac{1}{2}}(J^su_1, u_3))u_2}{L_t^2(H^s_x)}.\] Now, by symmetry between $u_2$ and $u_3$, it is sufficient to estimate $N_2$. Using the embedding $L^q \subset H^s$, ($s-\frac{1}{2}=-\frac{1}{q}$) and H\"older we get \begin{eqnarray*} N_2 &\le& c \n{J^{-\lambda}J_-^{\frac{1}{2}}(J^su_1, u_2)u_3}{L_t^2(L^q_x)}\\ &\le & c \n{J^{-\lambda}J_-^{\frac{1}{2}}(J^su_1, u_2)}{L_t^2(L^p_x)}\n{u_3}{L_t^{\infty}(L^2_x)} \end{eqnarray*} with $\frac{1}{q}=\frac{1}{2}+\frac{1}{p}$. The second factor is bounded by $c\n{u_3}{\XX{0}{b}}$. For the first factor we observe that $L^2 \subset H^{-\lambda,p}$, so it can be estimated by \[\n{J_-^{\frac{1}{2}}(J^su_1, u_2)}{L^2_{xt}}\le c \n{u_1}{\XX{s}{b}}\n{u_2}{\XX{0}{b}},\] where in the last step we have used Corollary \mbox{${\longrightarrow}$}ef{k23}, i). $ \Box$ \section{Estimates on quadratic and cubic nonlinearities} \begin{satz}\label{t1} Let $n=1, m=3$ or $n=2, m=2$. Assume $ 0 \geq s > - \frac{1}{m}$ and $ - \frac{1}{2} < b' < \frac{ms}{2}$. Then in the periodic and nonperiodic case for all $b>\frac{1}{2}$ the estimate \[\n{ \prod_{i=1}^m\overline{u}_i}{\XX{0}{b'}} \leq c \prod_{i=1}^m \n{u_i}{\XX{s}{b}}\] holds true. \end{satz} Proof: Defining $f_i(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau - |\mbox{$ X^+_{s,b} $}i|^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} \overline{u}_i(\mbox{$ X^+_{s,b} $}i, \tau)$, $1 \leq i \leq m$, we have \[\n{ \prod_{i=1}^m \overline{u}_i}{\XX{0}{b'}}=c \n{<\!\!\tau + |\mbox{$ X^+_{s,b} $}i|^2\!\!>^{b'} \int d \nu \prod_{i=1}^m <\!\!\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2\!\!>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}},\] where $d \nu = \mu(d\mbox{$ X^+_{s,b} $}i_1..d\mbox{$ X^+_{s,b} $}i_{m-1}) d\tau_1.. d \tau_{m-1}$ and $\sum_{i=1}^m (\mbox{$ X^+_{s,b} $}i_i,\tau_i) = (\mbox{$ X^+_{s,b} $}i, \tau)$. 
\footnote{In the sequel we shall make repeated use of this convention.} Because of \[\tau + |\mbox{$ X^+_{s,b} $}i|^2 - \sum_{i=1}^m\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2 =|\mbox{$ X^+_{s,b} $}i|^2 + \sum_{i=1}^m |\mbox{$ X^+_{s,b} $}i_i|^2\] there is the inequality \begin{eqnarray}\label{41} <\mbox{$ X^+_{s,b} $}i>^2 + \sum_{i=1}^m <\mbox{$ X^+_{s,b} $}i_i>^2 & \le & <\tau + |\mbox{$ X^+_{s,b} $}i|^2 > + \sum_{i=1}^m <\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2> \nonumber \\ & \le & c (<\tau + |\mbox{$ X^+_{s,b} $}i|^2 > + \sum_{i=1}^m <\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2> \chi_{A_i}), \end{eqnarray} where in $A_i$ we have $<\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2> \,\,\ge \,\, <\tau + |\mbox{$ X^+_{s,b} $}i|^2 >$. Since $b' < \frac{ms}{2}$ is assumed, it follows \[<\mbox{$ X^+_{s,b} $}i>^{\epsilon}\prod_{i=1}^m<\mbox{$ X^+_{s,b} $}i_i>^{-s+\epsilon} \leq c (<\tau + |\mbox{$ X^+_{s,b} $}i|^2 >^{-b'} + \sum_{i=1}^m <\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2>^{-b'} \chi_{A_i})\] for some $\epsilon > 0$. From this we conclude that \[\n{ \prod_{i=1}^m \overline{u}_i}{\XX{0}{b'}} \le c \sum_{j=0}^m \n{I_j}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}},\] with \[I_0(\mbox{$ X^+_{s,b} $}i,\tau)=<\mbox{$ X^+_{s,b} $}i>^{-\epsilon}\int d \nu \prod_{i=1}^m <\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-\epsilon}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)\] and, for $1 \le j \le m$, \begin{eqnarray*} I_j(\mbox{$ X^+_{s,b} $}i,\tau)\!\!\!\! & = & \!\!\!\!<\!\!\mbox{$ X^+_{s,b} $}i\!\!>\!\!^{-\epsilon}<\!\!\tau \!\!+ \!\!|\mbox{$ X^+_{s,b} $}i|^2 \!\!>^{b'}\!\!\int\!\! d \nu <\!\!\tau_j \!\!- \!\!|\mbox{$ X^+_{s,b} $}i_j|^2 \!\! >\!\!\!\!^{-b'} \prod_{i=1}^m <\!\!\tau_i \!\!- \!\!|\mbox{$ X^+_{s,b} $}i_i|^2\!\!>\!\!\!\!^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!^{-\epsilon}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)\chi_{A_i} \\ \!\!\!\!& \le &\!\!\!\! <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{-\epsilon}<\!\!\tau \!\!+ \!\!|\mbox{$ X^+_{s,b} $}i|^2 \!\!>^{-b}\!\!\int \!\! d \nu <\!\!\tau_j\!\! - \!\!|\mbox{$ X^+_{s,b} $}i_j|^2\!\! >^{b} \prod_{i=1}^m <\!\!\tau_i\!\! -\!\! |\mbox{$ X^+_{s,b} $}i_i|^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-\epsilon} f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i). \end{eqnarray*} To estimate $I_0$ we use H\"olders inequality and Lemma \mbox{${\longrightarrow}$}ef{l21} respectively \mbox{${\longrightarrow}$}ef{l22}: \begin{eqnarray*} \n{I_0}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} & \le & \n{\int d \nu \prod_{i=1}^m <\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-\epsilon}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ &=& c \n{\prod_{i=1}^m J^{s-\epsilon} \overline{u}_i}{L^2_{x,t}} \le c\prod_{i=1}^m\n{ J^{s-\epsilon} \overline{u}_i}{L^{2m}_{x,t}}\\ & \le &c\prod_{i=1}^m\n{ J^{s} \overline{u}_i}{\XXm{0}{b}}=c\prod_{i=1}^m\n{\overline{u}_i}{\XXm{s}{b}}. \end{eqnarray*} To estimate $I_j$, $1\le j \le m$, we define $p=2m$ and $p'$ by $\frac{1}{p}+\frac{1}{p'}=1$. 
Then we use the dual versions of Lemma \mbox{${\longrightarrow}$}ef{l21} respectively \mbox{${\longrightarrow}$}ef{l22}, H\"olders inequality and the Lemmas themselves to obtain: \begin{eqnarray*} \n{I_j}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} & \le & c\n{(\prod_{\stackrel{i=1}{i \ne j}}^m J^{s-\epsilon} \overline{u}_i)(J^{-\epsilon}\mbox{${\cal F}$}^{-1}f_j)}{\XX{-\epsilon}{-b}}\\ & \le & c\n{(\prod_{\stackrel{i=1}{i \ne j}}^m J^{s-\epsilon} \overline{u}_i)(J^{-\epsilon}\mbox{${\cal F}$}^{-1}f_j)}{L^{p'}_{x,t}}\\ & \le & c\n{J^{-\epsilon}\mbox{${\cal F}$}^{-1}f_j}{L^2_{x,t}}\prod_{\stackrel{i=1}{i \ne j}}^m\n{ J^{s-\epsilon} \overline{u}_i}{L^{p}_{x,t}}\\ & \le & c\n{f_j}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\prod_{\stackrel{i=1}{i \ne j}}^m\n{ J^{s} \overline{u}_i}{\XXm{0}{b}} = c \prod_{i=1}^m\n{\overline{u}_i}{\XXm{s}{b}} \end{eqnarray*} $ \Box$ $Remark:$ The above theorem with $n=m=2$ can be inserted into the proof of Theorem 2.5 in \cite{St97}, thus showing that the statement of that theorem also holds in the periodic case. So we can answer this question left open in \cite{St97} affirmatively (cf. the remark on top of p. 81 in \cite{St97}). Moreover it is a straightforward application of Sobolev's embedding theorem, to prove the complementary estimate \[\n{\prod_{i=1}^m u_i}{\XX{0}{-b'}} \le c \prod_{i=1}^m \n{u_i}{\XXpm{1}{b}},\,\,\,-b'< \frac{1}{2}<b,\] $m \ge 2$ arbitrary. (Observe that Thm. \mbox{${\longrightarrow}$}ef{t1} holds with $s=0$ on the left hand side.) So we obtain the bound \[\n{u(t)}{H^s_x ({\bf T}^2)} \le c <t>^{s-1+ \epsilon} ,\,\,\,s>1,\,\,\,\epsilon>0,\] whenever $u$ is a global solution of \[iu_t + \Delta u + \lambda |u|^{2l} u =0, \hspace{1cm}u(0)=u_0 \in H^s_x ({\bf T}^2),\] ($l \in {\bf N}$) and $\n{u(t)}{H^1_x ({\bf T}^2)}$ is controlled by the conserved energy. \begin{satz}\label{t11} Let $n=3$ and assume $ 0 \geq s > - \frac{3}{10}$, $ - \frac{1}{2} < b' < \frac{s}{2} - \frac{7}{20}$ and $b>\frac{1}{2}$. Then in the periodic case the estimate \[\n{ \prod_{i=1}^2\overline{u}_i}{\XX{s}{b'}} \leq c \prod_{i=1}^2 \n{u_i}{\XX{s}{b}}\] holds true. \end{satz} Proof: Writing $f_i(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau - |\mbox{$ X^+_{s,b} $}i|^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} \overline{u}_i(\mbox{$ X^+_{s,b} $}i, \tau)$, $1 \leq i \leq 2$, we have \begin{eqnarray*} \n{ \prod_{i=1}^2\overline{u}_i}{\XX{s}{b'}} \hspace{4cm}\\ = c \n{<\mbox{$ X^+_{s,b} $}i>^s<\tau + |\mbox{$ X^+_{s,b} $}i|^2>^{b'} \int d \nu \prod_{i=1}^2 <\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}. \end{eqnarray*} By the expressions $<\tau + |\mbox{$ X^+_{s,b} $}i|^2>$ and $<\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2>$, $i=1,2$, the quantity \\ $<\mbox{$ X^+_{s,b} $}i>^2 +<\mbox{$ X^+_{s,b} $}i_1>^2+<\mbox{$ X^+_{s,b} $}i_2>^2$ can be controlled. So we split the domain of integration into $A_0 +A_1 +A_2$, where in $A_0$ we have \\ $<\tau + |\mbox{$ X^+_{s,b} $}i|^2> = \max{(<\tau + |\mbox{$ X^+_{s,b} $}i|^2>, <\tau_{1,2} - |\mbox{$ X^+_{s,b} $}i_{1,2}|^2>)}$ and in $A_j$, $j=1,2$, it should hold that $<\tau_j - |\mbox{$ X^+_{s,b} $}i_j|^2> = \max{(<\tau + |\mbox{$ X^+_{s,b} $}i|^2>, <\tau_{1,2} - |\mbox{$ X^+_{s,b} $}i_{1,2}|^2>)}$. 
First we consider the region $A_0$: Here we use that for $\epsilon > 0$ sufficiently small \[<\mbox{$ X^+_{s,b} $}i>^{\frac{3}{10} + s} \prod_{i=1}^2<\mbox{$ X^+_{s,b} $}i_i>^{-s + \frac{1}{5}+ \epsilon}\le c <\tau + |\mbox{$ X^+_{s,b} $}i|^2>^{-b'}.\] This gives the upper bound \begin{eqnarray*} \n{<\mbox{$ X^+_{s,b} $}i>^{-\frac{3}{10}}\int d \nu \prod_{i=1}^2 <\tau_i - |\mbox{$ X^+_{s,b} $}i_i|^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-\frac{1}{5}- \epsilon}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \\ =c \n{\prod_{i=1}^2 J^{s-\frac{1}{5}- \epsilon}\overline{u}_i}{L_t^2(H_x^{-\frac{3}{10}})}.\hspace{3cm} \end{eqnarray*} Now, using the embedding $L_x^q \subset H_x^{-\frac{3}{10}}$, $\frac{1}{q}=\frac{3}{5}$, H\"older's inequality and Corollary \mbox{${\longrightarrow}$}ef{k22}, we get the following chain of inequalities: \begin{eqnarray*} \n{\prod_{i=1}^2 J^{s-\frac{1}{5}- \epsilon}\overline{u}_i}{L_t^2(H_x^{-\frac{3}{10}})} &\le & c \n{\prod_{i=1}^2 J^{s-\frac{1}{5}- \epsilon}\overline{u}_i}{L_t^2(L_x^q)} \\ &\le & c \n{J^{s-\frac{1}{5}- \epsilon}u_1}{L_t^4(L_x^{2q})}\n{J^{s-\frac{1}{5}- \epsilon}u_2}{L_t^4(L_x^{2q})} \\ & \le & c \prod_{i=1}^2 \n{u_i}{\XX{s}{b}}. \end{eqnarray*} Now, by symmetry, it only remains to show the estimate for the region $A_1$: Here we use \[<\mbox{$ X^+_{s,b} $}i>^s<\tau + |\mbox{$ X^+_{s,b} $}i|^2>^{b+b'}<\mbox{$ X^+_{s,b} $}i_1>^{-s}<\mbox{$ X^+_{s,b} $}i_2>^{-s + \frac{1}{4} + \epsilon} \le c <\mbox{$ X^+_{s,b} $}i>^{-\frac{1}{4}-\epsilon}<\tau_1 - |\mbox{$ X^+_{s,b} $}i_1|^2>^{b}\] to obtain the upper bound \begin{eqnarray*} \n{<\mbox{$ X^+_{s,b} $}i>^{-\frac{1}{4}-\epsilon} <\tau + |\mbox{$ X^+_{s,b} $}i|^2>^{-b}\int d \nu f_1(\mbox{$ X^+_{s,b} $}i_1, \tau_1)<\mbox{$ X^+_{s,b} $}i_2>^{- \frac{1}{4} - \epsilon}<\tau_2 - |\mbox{$ X^+_{s,b} $}i_2|^2>^{-b}f_2(\mbox{$ X^+_{s,b} $}i_2, \tau_2)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \\ =c \n{(\mbox{${\cal F}$} ^{-1}f_1)( J^{s-\frac{1}{4}- \epsilon}u_2)}{\XX{-\frac{1}{4}-\epsilon}{-b}},\hspace{4cm} \end{eqnarray*} where $\n{f_1}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}=\n{\mbox{${\cal F}$}^{-1}f_1}{L^2_{x,t}}=\n{u_1}{\XX{s}{b}}$. Now we use the dual form of Lemma \mbox{${\longrightarrow}$}ef{l22}, ii), H\"older's inequality and the Lemma itself to obtain \begin{eqnarray*} \n{\mbox{${\cal F}$} ^{-1}f_1 J^{s-\frac{1}{4}- \epsilon}u_2}{\XX{-\frac{1}{4}-\epsilon}{-b}} &\le & c\n{\mbox{${\cal F}$} ^{-1}f_1 J^{s-\frac{1}{4}- \epsilon}u_2}{L_{xt}^{\frac{4}{3}}}\\ &\le & c\n{\mbox{${\cal F}$} ^{-1}f_1}{L^2_{xt}}\n{J^{s-\frac{1}{4}- \epsilon}u_2}{L^4_{xt}}\\ &\le & c \prod_{i=1}^2 \n{u_i}{\XX{s}{b}} \end{eqnarray*} $ \Box$ $Remark:$ In the nonperiodic case we can combine the argument given above with the $L_t^4(L_x^3)$-Strichartz-estimate to obtain the estimate in question whenever $s>-\frac{1}{2}$, $b'< \frac{s}{2}-\frac{1}{4}$, $b>\frac{1}{2}$. (This result was already indicated by Tao, see the remark below Prop. 11.3 in \cite{T00}.) As far as I know, it is still an open question, whether or not the analogue of this Strichartz-estimate, that is \[\XX{\epsilon}{b} \subset L_t^4({\bf R}, L_x^3({\bf T}^3)),\hspace{1cm}b>\frac{1}{2},\,\,\epsilon>0\] holds in the periodic case. This, of course, could be used to lower the bound on $s$ in the above theorem down to $-\frac{1}{2}+\epsilon$. 
Before we turn to the cubic nonlinearities in the continuous case, let us briefly discuss some counterexamples concerning the periodic case: The examples given by Kenig, Ponce and Vega connected with the onedimensional periodic case (see the proof of Thm 1.10, parts (ii) and (iii) in \cite{KPV96}) show that the estimate \[\n{u_1 \overline{u}_2}{\XX{s}{b'}}\le c \n{u_1}{\XX{s}{b}}\n{u_2}{\XX{s}{b}}\] fails for all $s<0$, $b,b' \in {\bf R}$, and that the estimate \[\n{\overline{u}_1 \overline{u}_2}{\XX{s}{b'}}\le c \n{u_1}{\XX{s}{b}}\n{u_2}{\XX{s}{b}}\] fails for all $s<- \frac{1}{2}$, if $b-b' \le 1$. From this we can conclude by the method of descent, that these estimates also fail in higher dimensions. So our estimate on $\overline{u}_1 \overline{u}_2$ is sharp (up to the endpoint), while in three dimensions the estimate might be improved (as indicated above), and for $u_1 \overline{u}_2$ no results with $s<0$ can be achieved by the method. For the bilinear form $B(u_1,u_2)=u_1 u_2$ in the two- and threedimensional periodic setting we have the following counterexample exhibiting a significant difference between the periodic and nonperiodic case (cf. the results in \cite{CDKS} and \cite{T00} mentioned in the introduction): \begin{bsp}\label{ex41} In the periodic case in space dimension $d\ge 2$ the estimate \[\n{\prod_{i=1}^2 u_i}{\XX{s}{b'}} \le c \prod_{i=1}^2 \n{u_i}{\XX{s}{b}}\] fails for all $s<0$, $b, b' \in {\bf R}$. \end{bsp} Proof: The above estimate implies \[\n{<\!\!\tau \!\!+\!\! |\mbox{$ X^+_{s,b} $}i|^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^2 <\!\!\tau_i\!\! + \!|\mbox{$ X^+_{s,b} $}i_i|^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c \prod_{i=1}^2 \n{f_i}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] Choosing two orthonormal vectors $e_1$ and $e_2$ in ${\bf R}^d$ and defining for $n \in {\bf N}$ \[f^{(n)}_{1}(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,ne_1}\chi(\tau + n^2),\hspace{0.3cm}f^{(n)}_2(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,ne_2}\chi(\tau + n^2),\] where $\chi$ is the characteristic function of $[-1,1]$, we have $\n{f^{(n)}_i}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}=c$ and it would follow that \begin{equation}\label{411} n^{-2s} \n{<\!\!\tau \!\!+\!\! |\mbox{$ X^+_{s,b} $}i|^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^2 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c. \end{equation} Now a simple computation shows that \[\int d \nu \prod_{i=1}^2 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i) \ge \delta_{\mbox{$ X^+_{s,b} $}i,n(e_1 +e_2)}\chi(\tau + 2 n^2),\] which inserted into (\mbox{${\longrightarrow}$}ef{411}) gives $n^{-s} \le c$. This is a contradiction for all $s<0$. $ \Box$ The next example shows that our estimate on $\overline{u}_1\overline{u}_2\overline{u}_3$ is essentially sharp: \begin{bsp}\label{ex42} In the periodic case in one space dimension the estimate \[\n{\prod_{i=1}^3 \overline{u}_i}{\XX{s}{b'}} \le c \prod_{i=1}^3\n{u_i}{\XX{s}{b}}\] fails for all $s< -\frac{1}{3}$, if $b- b' \le 1$. \end{bsp} Proof: From the above estimate we obtain \[\n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^3 <\!\!\tau_i\!\! 
- \!\mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c \prod_{i=1}^3 \n{f_i}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] Then for $n \in {\bf N}$ we define \[f^{(n)}_{1,2}(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,n}\chi(\tau - n^2),\hspace{0.3cm}f^{(n)}_3(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,-2n}\chi(\tau - 4n^2),\] with $\chi$ as in the previous example. Again we have $\n{f^{(n)}_i}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}=c$ and \begin{equation}\label{412} n^{-3s} \n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^3 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c. \end{equation} Now it can be easily checked that \[\int d \nu \prod_{i=1}^3 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i) \ge \delta_{\mbox{$ X^+_{s,b} $}i,0}\chi (\tau - 6n^2).\] This leads to $n^{-3s+2b'} \le c$ respectively to $\frac{2}{3}b' \le s$. Consider next the following sequences of functions \[g^{(n)}_{1}(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,n}\chi(\tau + 5n^2),\hspace{0.3cm}g^{(n)}_2(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,n}\chi(\tau - n^2),\hspace{0.3cm}g^{(n)}_3(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,-2n}\chi(\tau - 4n^2).\] Arguing as before we are lead to the restriction $-\frac{2}{3}b \le s$. Adding up these two restrictions and taking into account that $b-b' \le 1$ we arrive at $s\ge -\frac{1}{3}$. $ \Box$ For all the other cubic nonlinearities the corresponding estimates fail for $s<0$, $b, b' \in {\bf R}$, see the examples \mbox{${\longrightarrow}$}ef{ex51} and \mbox{${\longrightarrow}$}ef{ex52} in the next section as well as the remarks below. Next we consider the cubic nonlinearities in the continuous case: \begin{satz}\label{t111} In the nonperiodic case in one space dimension the estimates \begin{equation}\label{42} \n{ \prod_{i=1}^3 \overline{u}_i}{\XX{\sigma}{b'}} \leq c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}} \end{equation} and \begin{equation}\label{43} \n{\prod_{i=1}^3 u_i}{\XX{\sigma}{b'}} \leq c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}} \end{equation} hold, provided $0\ge s>-\frac{5}{12}$, $-\frac{1}{2} < b' < \frac{1}{2}(\frac{1}{4}+ 3s)$, $\sigma <\min{(0,3s-2b')}$ and $b>\frac{1}{2}$. \end{satz} Proof: 1. To show (\mbox{${\longrightarrow}$}ef{42}), we write $f_i(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau +\mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} u_i(\mbox{$ X^+_{s,b} $}i, \tau)$, $1\le i\le 3$. Then we have \begin{eqnarray*} && \n{ \prod_{i=1}^3 \overline{u}_i}{\XX{\sigma}{b'}} =\n{\prod_{i=1}^3 u_i}{\XXm{\sigma}{b'}}\\ & = & \n{<\tau - \mbox{$ X^+_{s,b} $}i^2>^{b'} <\mbox{$ X^+_{s,b} $}i>^{\sigma}\int d \nu \prod_{i=1}^3 <\tau_i +\mbox{$ X^+_{s,b} $}i_i^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}. \end{eqnarray*} For $0 \le \alpha, \beta, \gamma$ with $\alpha + \beta + \gamma = 2$ we have the inequality \[<\!\mbox{$ X^+_{s,b} $}i_1\!>^{\alpha}<\!\mbox{$ X^+_{s,b} $}i_2\!>^{\beta}<\!\mbox{$ X^+_{s,b} $}i_3\!>^{\gamma} \le <\!\mbox{$ X^+_{s,b} $}i\!>^{2}+ \sum_{i=1}^3 <\!\mbox{$ X^+_{s,b} $}i_i\!>^{2}\le c(<\!\tau - \mbox{$ X^+_{s,b} $}i^2\!>+\sum_{i=1}^3<\!\tau_i +\mbox{$ X^+_{s,b} $}i_i^2\!> \chi_{A_i}),\] where in $A_i$ the expression $<\tau_i +\mbox{$ X^+_{s,b} $}i_i^2>$ is dominant. 
Hence \[\n{ \prod_{i=1}^3 \overline{u}_i}{\XX{\sigma}{b'}} \leq c\sum_{k=0}^3 N_k\] with \begin{eqnarray*} N_0 &=& \n{ <\mbox{$ X^+_{s,b} $}i>^{\sigma}\int d \nu \prod_{i=1}^3 <\tau_i +\mbox{$ X^+_{s,b} $}i_i^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{\frac{2b'}{3}-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ &=& c\n{\prod_{i=1}^3 J^\frac{2b'}{3}u_i}{L_t^2(H_x^{\sigma})} \le c\prod_{i=1}^3 \n{J^\frac{2b'}{3}u_i}{\XX{\frac{\sigma}{3}}{b}} \le c\prod_{i=1}^3 \n{u_i}{\XX{s}{b}}, \end{eqnarray*} where we have used Lemma \mbox{${\longrightarrow}$}ef{l33} and the assumption $\sigma \le 3s-2b' $. Next we estimate $N_1$ by \begin{eqnarray*} \n{<\tau - \mbox{$ X^+_{s,b} $}i^2>^{b'} <\mbox{$ X^+_{s,b} $}i>^{\sigma}\int d \nu \prod_{i=1}^3 <\tau_i +\mbox{$ X^+_{s,b} $}i_i^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)\chi_{A_1}}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ \le c \n{\!\!<\!\!\tau \!\!-\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{\sigma}\int d \nu <\!\!\mbox{$ X^+_{s,b} $}i_1\!\!>^{2b'-3s} f_1(\mbox{$ X^+_{s,b} $}i_1,\tau_1)\prod_{i=2}^3 <\!\!\tau_i \!\!+\!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ = c \n{(\Lambda ^b J^{2b'-2s}u_1)(J^su_2)(J^su_3)}{\XXm{\sigma}{-b}},\hspace{3cm} \end{eqnarray*} where $\Lambda ^b = \mbox{${\cal F}$} ^{-1}<\tau + \mbox{$ X^+_{s,b} $}i^2>^b \mbox{${\cal F}$}$. By part iv) of Corollary \mbox{${\longrightarrow}$}ef{k31} this is bounded by \begin{eqnarray*} && c\n{\Lambda ^b J^{2b'-2s}u_1}{L^2_t(H_x^{\sigma})}\n{u_2}{\XX{s}{b}}\n{u_3}{\XX{s}{b}}\\ &=&c \n{u_1}{\XX{2b'-2s + \sigma}{b}}\n{u_2}{\XX{s}{b}}\n{u_3}{\XX{s}{b}} \leq c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}, \end{eqnarray*} since $2b'-2s + \sigma \le s$. To estimate $N_k$ for $k=2,3$ one only has to exchange the indices $1$ and $k$. Now (\mbox{${\longrightarrow}$}ef{42}) is shown. 2. Now we prove the second estimate: With $f_i$ as above we have \[\n{\prod_{i=1}^3 u_i}{\XX{\sigma}{b'}}=c\n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{\sigma}\int d \nu \prod_{i=1}^3 <\!\!\tau_i\!\! +\!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] Here the quantity, which can be controlled by the expressions $<\tau + \mbox{$ X^+_{s,b} $}i^2>$, $<\tau_i + \mbox{$ X^+_{s,b} $}i_i^2>$, $1\le i \le 3$, is \[c.q.:= |\mbox{$ X^+_{s,b} $}i_1^2 + \mbox{$ X^+_{s,b} $}i_2^2+ \mbox{$ X^+_{s,b} $}i_3^2 - \mbox{$ X^+_{s,b} $}i^2|.\] So we devide the domain of integration into two parts $A$ and $A^c$, where in $A$ it should hold that \[\mbox{$ X^+_{s,b} $}i_1^2 + \mbox{$ X^+_{s,b} $}i_2^2+ \mbox{$ X^+_{s,b} $}i_3^2 + \mbox{$ X^+_{s,b} $}i^2 \le c\,\,\,\, c.q.\] Then concerning this region we can argue precisely as in the first part of this proof. For the region $A^c$ we may assume by symmetry that $\mbox{$ X^+_{s,b} $}i_1^2 \ge \mbox{$ X^+_{s,b} $}i_2^2 \ge \mbox{$ X^+_{s,b} $}i_3^2$. 
Then it is easily checked that in $A^c$ we have \[1.\,\,\,\mbox{$ X^+_{s,b} $}i^2 \ge \frac{1}{2}\mbox{$ X^+_{s,b} $}i_1^2 \ge \frac{1}{2}\mbox{$ X^+_{s,b} $}i_2^2\,\,\,\hspace{1,5cm}\mbox{and}\hspace{1,5cm}\,\,\,2.\,\,\,\mbox{$ X^+_{s,b} $}i_3^2 \le \mbox{$ X^+_{s,b} $}i_1^2 \le c (\mbox{$ X^+_{s,b} $}i_1 \pm \mbox{$ X^+_{s,b} $}i_3)^2.\] From this it follows \[\prod_{i=1}^3<\mbox{$ X^+_{s,b} $}i_i>^{-s} \le c <\mbox{$ X^+_{s,b} $}i>^{-\sigma}<\mbox{$ X^+_{s,b} $}i_1 + \mbox{$ X^+_{s,b} $}i_3>^{-s_0}<\mbox{$ X^+_{s,b} $}i_1 - \mbox{$ X^+_{s,b} $}i_3>^{\frac{1}{2}}\] for $s_0 = \frac{1}{2}+2b' +\epsilon$, so that $-3s \le -\sigma -s_0 + \frac{1}{2} = -\sigma -2b'-\epsilon$ for $\epsilon$ sufficiently small. Hence \begin{eqnarray*} &&\n{<\tau + \mbox{$ X^+_{s,b} $}i^2>^{b'} <\mbox{$ X^+_{s,b} $}i>^{\sigma}\int d \nu \prod_{i=1}^3 <\tau_i +\mbox{$ X^+_{s,b} $}i_i^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)\chi_{A^c}}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ & \le & c \n{<\!\!\tau\!\! +\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} \int d \nu <\!\!\mbox{$ X^+_{s,b} $}i_1\!\!+\!\!\mbox{$ X^+_{s,b} $}i_3\!\!>^{-s_0}<\!\!\mbox{$ X^+_{s,b} $}i_1\!\!-\!\!\mbox{$ X^+_{s,b} $}i_3\!\!>^{\frac{1}{2}} \prod_{i=1}^3 <\!\!\tau_i\!\! +\!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ &=& c \n{(J^su_2) J^{-s_0}J_-^{\frac{1}{2}}(J^su_1,J^su_3)}{\XX{0}{b'}} \end{eqnarray*} Using part iii) of Lemma \mbox{${\longrightarrow}$}ef{l23} (observe that $b'<-\frac{1}{4}+\frac{s_0}{2}$) and part i) of Corollary \mbox{${\longrightarrow}$}ef{k23} this can be estimated by \begin{eqnarray*} && c\n{J^su_2}{\XX{0}{b}}\n{J^{-s_0}J_-^{\frac{1}{2}}(J^su_1,J^su_3)}{L^2_t(H_x^{s_0})}\\ & \le & c \n{u_2}{\XX{s}{b}}\n{J_-^{\frac{1}{2}}(J^su_1,J^su_3)}{L^2_{xt}} \le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}. \end{eqnarray*} $ \Box$ \begin{satz}\label{t1111} In the nonperiodic case in one space dimension the estimate \begin{equation}\label{11} \n{ u_1\prod_{i=2}^3 \overline{u}_i}{\XX{s}{b'}} \leq c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}} \nonumber \end{equation} holds, provided $-\frac{1}{4} \ge s>-\frac{2}{5}$, $-\frac{1}{2} < b' < \min{(s-\frac{1}{10},-\frac{1}{4}+\frac{s}{2} )}$ and $b>\frac{1}{2}$. \end{satz} Proof: We write $f_1(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau +\mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} u_1(\mbox{$ X^+_{s,b} $}i, \tau)$ and \\ $f_{2,3}(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau -\mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} \overline{u}_{2,3}(\mbox{$ X^+_{s,b} $}i, \tau)$. Then, using the abbreviations $\sigma_0 = \tau +\mbox{$ X^+_{s,b} $}i^2$, $\sigma_1 = \tau_1 +\mbox{$ X^+_{s,b} $}i_1^2$ and $\sigma_{2,3} = \tau_{2,3} -\mbox{$ X^+_{s,b} $}i_{2,3}^2$, we have \[\n{ u_1\prod_{i=2}^3 \overline{u}_i}{\XX{s}{b'}}=c\n{<\!\!\sigma_0\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^3 <\!\!\sigma_i\!\!>\!\!^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] Here the quantity \[c.q.:= |\mbox{$ X^+_{s,b} $}i^2 + \mbox{$ X^+_{s,b} $}i_2^2+ \mbox{$ X^+_{s,b} $}i_3^2 - \mbox{$ X^+_{s,b} $}i_1^2|=2|\mbox{$ X^+_{s,b} $}i_2 \mbox{$ X^+_{s,b} $}i_3 - \mbox{$ X^+_{s,b} $}i (\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3)|\] can be controlled by the expressions $<\sigma_i>$, $0\le i \le 3$. Thus we devide the domain of integration into $A + A^c$, where in $A$ it should hold that $c.q. 
\ge c <\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>$. First we consider the region $A^c$. In this region it holds that \[1. <\mbox{$ X^+_{s,b} $}i_2> \le c<\mbox{$ X^+_{s,b} $}i>\hspace{1cm} \mbox{or}\hspace{1cm}<\mbox{$ X^+_{s,b} $}i_3> \le c<\mbox{$ X^+_{s,b} $}i>\] \[\mbox{and} \hspace{1cm}2.<\mbox{$ X^+_{s,b} $}i_{2,3}> \le c<\mbox{$ X^+_{s,b} $}i_2 \pm \mbox{$ X^+_{s,b} $}i_3>\hspace{0.5cm} \mbox{or}\hspace{0.5cm}<\mbox{$ X^+_{s,b} $}i_{2,3}>\le c<\mbox{$ X^+_{s,b} $}i \pm \mbox{$ X^+_{s,b} $}i_{2,3}>.\] Writing $A^c=B_1 + B_2$, where in $B_1$ we assume $<\mbox{$ X^+_{s,b} $}i_2> \le <\mbox{$ X^+_{s,b} $}i_3>$ and in $B_2$, consequently, $<\mbox{$ X^+_{s,b} $}i_2> \ge <\mbox{$ X^+_{s,b} $}i_3>$, it will be sufficient by symmetry to consider the subregion $B_1$. Now $B_1$ is splitted again into $B_{11}$ and $B_{12}$, where in $B_{11}$ we assume $<\mbox{$ X^+_{s,b} $}i_{2,3}> \le c<\mbox{$ X^+_{s,b} $}i_2 \pm \mbox{$ X^+_{s,b} $}i_3>$ and in $B_{12}$ it should hold that $<\mbox{$ X^+_{s,b} $}i_{2,3}>\le c<\mbox{$ X^+_{s,b} $}i \pm \mbox{$ X^+_{s,b} $}i_{2,3}>$. Subregion $B_{11}$: Here it holds that \\ $<\mbox{$ X^+_{s,b} $}i_1><\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>\le c <\mbox{$ X^+_{s,b} $}i><\mbox{$ X^+_{s,b} $}i_2-\mbox{$ X^+_{s,b} $}i_3><\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3>$, giving the upper bound \begin{eqnarray*} &&\n{<\sigma_0>^{b'} \int d \nu <\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3>^{-s} <\mbox{$ X^+_{s,b} $}i_2-\mbox{$ X^+_{s,b} $}i_3>^{-s}\prod_{i=1}^3 <\sigma_i>^{-b} f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ &=& c \n{(J^s u_1) J^{-s}J_-^{-s}(J^s \overline{u}_2 ,J^s \overline{u}_3) }{\XX{0}{b'}}\\ &\le & c \n{u_1}{\XX{s}{b}}\n{J_-^{-s}(J^s \overline{u}_2 ,J^s \overline{u}_3)}{L^2_{xt}} \le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}, \end{eqnarray*} where we have used part iii) of Lemma \mbox{${\longrightarrow}$}ef{l23} (demanding for $b' < -\frac{1}{4}+ \frac{s}{2}$) and part i) of Corollary \mbox{${\longrightarrow}$}ef{k23}. Subregion $B_{12}$: Here we have \\ $<\mbox{$ X^+_{s,b} $}i_1><\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>\le c <\mbox{$ X^+_{s,b} $}i><\mbox{$ X^+_{s,b} $}i-\mbox{$ X^+_{s,b} $}i_3><\mbox{$ X^+_{s,b} $}i+\mbox{$ X^+_{s,b} $}i_3>$, leading to the upper bound \begin{eqnarray*} &&\n{<\!\sigma_0\!>^{b'} \!\int \!d \nu <\!\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2+2\mbox{$ X^+_{s,b} $}i_3\!>^{-s} <\!\mbox{$ X^+_{s,b} $}i_1+\mbox{$ X^+_{s,b} $}i_2\!>^{-s}\!\prod_{i=1}^3 \!<\!\sigma_i\!>^{-b} \!f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ &=& c \n{J_+^{-s}(J^{-s}((J^s u_1) (J^s \overline{u}_2)) ,J^s \overline{u}_3) }{\XX{0}{b'}}\\ &\le & c \n{u_3}{\XX{s}{b}}\n{J^{-s}((J^s u_1)(J^s \overline{u}_2))}{L^2_{xt}} \le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}. \end{eqnarray*} Here we have used part ii) of Corollary \mbox{${\longrightarrow}$}ef{k23} (leading again to the restriction \\ $b' < -\frac{1}{4}+ \frac{s}{2}$) and part i) of Lemma \mbox{${\longrightarrow}$}ef{l23}. By this the discussion for the region $A^c$ is completed. Next we consider the region $A=\sum_{j=0}^3A_j$, where in $A_j$ the expression $<\!\sigma_j\!>$ is assumed to be dominant. By symmetry between the second and third factor (also in the exceptional region $A^c$) it will be sufficient to show the estimate for the subregions $A_0$, $A_1$ and $A_2$. 
Subregion $A_0$: Here we can use $<\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>\le c<\sigma_0>$ to obtain the upper bound \begin{eqnarray*} && \n{ <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu <\!\!\sigma_1\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_1\!\!>^{-s}f_1(\mbox{$ X^+_{s,b} $}i_1,\tau_1) \prod_{i=2}^3 <\!\!\sigma_i\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{b'-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ &=& c \n{u_1 J^{b'}\overline{u}_2 J^{b'}\overline{u}_3}{L^2_t(H^s_x)}\le c\prod_{i=1}^3 \n{u_i}{\XX{s}{b}} \end{eqnarray*} by part ii) of Lemma \mbox{${\longrightarrow}$}ef{l32}, provided $s>-\frac{1}{2}$ (in the last step we have also used $s\ge b'$). Subregion $A_1$: Here we have $<\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>\le c<\sigma_1>$ and $<\sigma_0> \le <\sigma_1>$. Subdevide $A_1$ again into $A_{11}$ and $A_{12}$ with $<\mbox{$ X^+_{s,b} $}i_1>\le c<\mbox{$ X^+_{s,b} $}i>$ in $A_{11}$ and, consequently, $<\mbox{$ X^+_{s,b} $}i_1>\approx<\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3>$ in $A_{12}$. Then for $A_{11}$ we have the upper bound \begin{eqnarray*} \n{<\sigma_0>^{-b}\int d \nu f_1(\mbox{$ X^+_{s,b} $}i_1,\tau_1) \prod_{i=2}^3 <\sigma_i>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{b'-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ =c \n{(\mbox{${\cal F}$}^{-1}f_1)(J^{b'}\overline{u}_2)(J^{b'}\overline{u}_3)}{\XX{0}{-b}}\le c\n{(\mbox{${\cal F}$}^{-1}f_1)(J^{b'}\overline{u}_2)(J^{b'}\overline{u}_3)}{L^1_t(L_x^2)} \end{eqnarray*} by Sobolev's embedding theorem (plus duality) in the time variable. Now using H\"older's inequality and the $L_t^4(L_x^{\infty})$-Strichartz estimate this can be controlled by \[\n{\mbox{${\cal F}$}^{-1}f_1}{L^2_{xt}}\n{J^{b'}u_2}{L_t^4(L_x^{\infty})}\n{J^{b'}u_3}{L_t^4(L_x^{\infty})}\le c\prod_{i=1}^3 \n{u_i}{\XX{s}{b}},\] provided $b'\le s$. Now $A_{12}$ is splitted again into $A_{121}$, where we assume $<\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3>\le c <\mbox{$ X^+_{s,b} $}i_2-\mbox{$ X^+_{s,b} $}i_3>$, implying that also $<\mbox{$ X^+_{s,b} $}i_1>\le c <\mbox{$ X^+_{s,b} $}i_2-\mbox{$ X^+_{s,b} $}i_3>$, and $A_{122}$, where $<\mbox{$ X^+_{s,b} $}i_2>\approx<\mbox{$ X^+_{s,b} $}i_3>$. Consider the subregion $A_{121}$ first: Using $<\mbox{$ X^+_{s,b} $}i_1>^{-s}\le c <\mbox{$ X^+_{s,b} $}i_2-\mbox{$ X^+_{s,b} $}i_3>^{\frac{1}{2}}<\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3>^{-s-\frac{1}{2}}$, for this region we obtain the upper bound \begin{eqnarray*} \n{\!\!<\!\!\sigma_0\!\!>\!\!\!\!^{-b}\!\!<\!\!\mbox{$ X^+_{s,b} $}i\!\!>\!\!\!^{s} \!\! \int \!\! 
d \nu f_1(\mbox{$ X^+_{s,b} $}i_1,\tau_1)<\!\!\mbox{$ X^+_{s,b} $}i_2\!\!-\!\!\mbox{$ X^+_{s,b} $}i_3\!\!>\!\!^{\frac{1}{2}} <\!\!\mbox{$ X^+_{s,b} $}i_2\!\!+\!\!\mbox{$ X^+_{s,b} $}i_3\!\!>\!\!\!\!^{-s-\frac{1}{2}} \!\!\prod_{i=2}^3\!\!<\!\!\sigma_i\!\!>\!\!\!\!^{-b} \!\!<\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!\!^{b'-s}\!f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}&&\\ = c\n{(\mbox{${\cal F}$}^{-1}f_1) J^{-s-\frac{1}{2}}J_-^{\frac{1}{2}}(J^{b'}\overline{u}_2,J^{b'}\overline{u}_3)}{\XX{s}{-b}}\hspace{4cm}&&\\ \le c\n{(\mbox{${\cal F}$}^{-1}f_1) J^{-s-\frac{1}{2}}J_-^{\frac{1}{2}}(J^{b'}\overline{u}_2,J^{b'}\overline{u}_3)}{L_t^1(L_x^p)}\hspace{1cm}(s-\frac{1}{2}=-\frac{1}{p})\hspace{1.5cm}&&\\ \le c \n{\mbox{${\cal F}$}^{-1}f_1}{L^2_{xt}}\n{J^{-s-\frac{1}{2}}J_-^{\frac{1}{2}}(J^{b'}\overline{u}_2,J^{b'}\overline{u}_3)}{L_t^2(L_x^q)}\hspace{1cm}(\frac{1}{p}=\frac{1}{2}+\frac{1}{q})\hspace{1.2cm}&&\\ \le c \n{u_1}{\XX{s}{b}}\n{J_-^{\frac{1}{2}}(J^{b'}\overline{u}_2,J^{b'}\overline{u}_3)}{L^2_{xt}}\leq c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}.\hspace{3.5cm}&& \end{eqnarray*} Next we consider the subregion $A_{122}$, where $<\mbox{$ X^+_{s,b} $}i_2>\approx<\mbox{$ X^+_{s,b} $}i_3> \ge c <\mbox{$ X^+_{s,b} $}i_1>$. Here we get the upper bound \begin{eqnarray*} && \n{\!\!<\!\!\sigma_0\!\!>\!\!\!\!^{-b}\!\!<\!\!\mbox{$ X^+_{s,b} $}i\!\!>\!\!\!^{s} \!\! \int \!\! d \nu f_1(\mbox{$ X^+_{s,b} $}i_1,\tau_1) <\!\!\mbox{$ X^+_{s,b} $}i_1\!\!>\!\!\!\!^{-s-\frac{1}{6}}\!\!\prod_{i=2}^3 \!\!<\!\!\sigma_i\!\!>\!\!\!\!^{-b}\!\!<\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!\!^{b'-s+\frac{1}{12}} \!f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ & = & c \n{(\Lambda ^b J ^{-\frac{1}{6}}u_1)(J^{b'+\frac{1}{12}}\overline{u}_2)(J^{b'+\frac{1}{12}}\overline{u}_3)}{\XX{s}{-b}},\hspace{1cm}(\Lambda ^b = \mbox{${\cal F}$}^{-1}<\tau + \mbox{$ X^+_{s,b} $}i^2>^b\mbox{${\cal F}$})\\ & \le & c \n{\Lambda ^b u_1}{L^2_t(H_x^{-\frac{1}{4}-\frac{1}{6}})}\n{J^{b'+\frac{1}{12}}u_2}{\XX{0}{b}}\n{J^{b'+\frac{1}{12}}u_3}{\XX{0}{b}}, \end{eqnarray*} where we have used $s \le -\frac{1}{4}$ and part iv) of Corollary \mbox{${\longrightarrow}$}ef{k31}. The latter is bounded by $c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}$, provided $s \ge -\frac{5}{12}$ and $s \ge b' +\frac{1}{12}$. Thus the discussion for the region $A_1$ is complete. Subregion $A_2$: First we write $A_2=A_{21}+A_{22}$, where in $A_{21}$ it should hold that $<\mbox{$ X^+_{s,b} $}i_1> \le c <\mbox{$ X^+_{s,b} $}i>$. Then this subregion can be treated precisely as the subregion $A_{11}$, leading to the bound $s > - \frac{1}{2}$. For the remaining subregion $A_{22}$ it holds that \[<\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>\le c <\sigma_2>\hspace{1cm}\mbox{and}\hspace{1cm} <\mbox{$ X^+_{s,b} $}i_1>\le c <\mbox{$ X^+_{s,b} $}i_2 + \mbox{$ X^+_{s,b} $}i_3>.\] Now $A_{22}$ is splitted again into $A_{221}$, where we assume $<\mbox{$ X^+_{s,b} $}i_1>\,\,\,\le c \,\,\,<\mbox{$ X^+_{s,b} $}i_2>$, and into $A_{222}$, where we then have $<\mbox{$ X^+_{s,b} $}i_2> \,\,\,<< \,\,\,<\mbox{$ X^+_{s,b} $}i_1>$. The upper bound for $A_{221}$ is \begin{eqnarray*} && \n{\!\!<\!\!\sigma_0\!\!>\!\!\!\!^{-b}\!\!<\!\!\mbox{$ X^+_{s,b} $}i\!\!>\!\!\!^{s} \!\! \int \!\! 
d \nu f_2(\mbox{$ X^+_{s,b} $}i_2,\tau_2) <\!\!\mbox{$ X^+_{s,b} $}i_2\!\!>\!\!\!\!^{-s}\!\prod_{i\ne 2} \!\!<\!\!\sigma_i\!\!>\!\!\!\!^{-b}\!\!<\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!\!^{b'-s} \!f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ & \le & c \n{(\Lambda ^b_-\overline{u}_2)(J^{b'}u_1)(J^{b'}\overline{u}_3)}{\XX{s}{-b}} \hspace{1cm}(\Lambda ^b_- = \mbox{${\cal F}$} ^{-1} <\tau - \mbox{$ X^+_{s,b} $}i ^2>^b \mbox{${\cal F}$})\\ & \le & c \n{\Lambda ^b_-\overline{u}_2}{L_t^2(H_x^s)}\n{u_1}{\XX{b'}{b}}\n{u_3}{\XX{b'}{b}}\le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}. \end{eqnarray*} Here we have used part i) of Lemma \mbox{${\longrightarrow}$}ef{l32} (dualized version) and the assumption $s \ge b'$. For the subregion $A_{222}$ the argument is a bit more complicated and it is here, where the strongest restrictions on $s$ occur: Subdevide $A_{222}$ again into $A_{2221}$ and $A_{2222}$ with $<\mbox{$ X^+_{s,b} $}i_2>^2 \le <\mbox{$ X^+_{s,b} $}i_1>$ in $A_{2221}$. Then in $A_{2221}$ it holds that \[(<\mbox{$ X^+_{s,b} $}i_1><\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>)^{\frac{2}{5}} \le c <\mbox{$ X^+_{s,b} $}i_1> \le c <\mbox{$ X^+_{s,b} $}i_3>\le c<\mbox{$ X^+_{s,b} $}i_2 \pm \mbox{$ X^+_{s,b} $}i_3>,\] hence, for $\epsilon = 1 + \frac{5}{2}s\,\,\,(>0)$, \[\prod_{i=1}^3<\mbox{$ X^+_{s,b} $}i_i>^{-s} \le c <\mbox{$ X^+_{s,b} $}i_2-\mbox{$ X^+_{s,b} $}i_3>^{\frac{1}{2}}<\mbox{$ X^+_{s,b} $}i_2+\mbox{$ X^+_{s,b} $}i_3>^{\frac{1}{2}-\epsilon}.\] Then, throwing away the $<\mbox{$ X^+_{s,b} $}i>^s$-factor, we obtain the upper bound \begin{eqnarray*} && \n{\!\!<\!\!\sigma_0\!\!>\!\!\!^{b'}\!\!<\!\!\mbox{$ X^+_{s,b} $}i_2\!\!-\!\!\mbox{$ X^+_{s,b} $}i_3\!\!>\!\!^{\frac{1}{2}}<\!\!\mbox{$ X^+_{s,b} $}i_2\!\!+\!\!\mbox{$ X^+_{s,b} $}i_3\!\!>^{\frac{1}{2}-\epsilon}\prod_{i=1}^3\!\!<\!\!\sigma_i\!\!>\!\!\!\!^{-b}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ & = & c \n{(J^su_1)J^{\frac{1}{2}-\epsilon}J_-^{\frac{1}{2}}(J^{s}\overline{u}_2,J^{s}\overline{u}_3)}{\XX{0}{b'}}\\ & \le & c \n{u_1}{\XX{s}{b}} \n{J_-^{\frac{1}{2}}(J^{s}\overline{u}_2,J^{s}\overline{u}_3)}{L^2_{xt}} \le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}, \end{eqnarray*} by Lemma \mbox{${\longrightarrow}$}ef{l23}, part iii), and Corollary \mbox{${\longrightarrow}$}ef{k23}, part i) (and the remark below), leading to the restriction $b' < \frac{5}{4}s$, which - in the allowed range for $s$ - is in fact weaker than $b' < s - \frac{1}{10}$. Finally we consider the subregion $A_{2222}$, where we have $<\mbox{$ X^+_{s,b} $}i_1>^{\frac{1}{2}}\le <\mbox{$ X^+_{s,b} $}i_2>\,\,\,<<\,\,\,<\mbox{$ X^+_{s,b} $}i_1>\approx<\mbox{$ X^+_{s,b} $}i_3>$, implying that \[<\mbox{$ X^+_{s,b} $}i_1>^{\frac{3}{20}} \le c (<\mbox{$ X^+_{s,b} $}i_2><\mbox{$ X^+_{s,b} $}i_3>)^{\frac{1}{10}}.\] This gives the upper bound \begin{eqnarray*} \n{\!\!<\!\!\sigma_0\!\!>\!\!\!\!^{-b}\!\!<\!\!\mbox{$ X^+_{s,b} $}i\!\!>\!\!\!^{s} \!\! \int \!\! 
d \nu <\!\!\mbox{$ X^+_{s,b} $}i_1\!\!>\!\!\!\!^{-s-\frac{3}{20}}\!<\!\!\sigma_1\!\!>\!\!\!\!^{-b}f_1(\mbox{$ X^+_{s,b} $}i_1,\tau_1)\!\!\prod_{i=2}^3 \!\!<\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!\!^{b'-s+\frac{1}{10}}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)\!\!<\!\!\sigma_3\!\!>\!\!\!\!^{-b}}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\\ \le c \n{(J^{-\frac{3}{20}}u_1)(\Lambda ^b_- J^{b'+\frac{1}{10}}\overline{u}_2)(J^{b'+\frac{1}{10}}\overline{u}_3)}{\XX{s}{b}}.\hspace{3cm} \end{eqnarray*} Now using $s \le -\frac{1}{4}$ again and part ii) of Corollary \mbox{${\longrightarrow}$}ef{k31} this can be estimated by \[c\n{u_1}{\XX{-\frac{3}{20}-\frac{1}{4}}{b}}\n{\Lambda ^b_- J^{b'+\frac{1}{10}}\overline{u}_2}{L^2_{xt}}\n{u_3}{\XX{b'+\frac{1}{10}}{b}}\le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}},\] since $s>-\frac{2}{5}$ and $s>b'+\frac{1}{10}$ as assumed. $ \Box$ $Remark$: The estimate (\mbox{${\longrightarrow}$}ef{11}) also holds under the assumption $s \ge -\frac{1}{4}$, $b'<-\frac{3}{8}$ and $b>\frac{1}{2}$. For $s = -\frac{1}{4}$ this is contained in the above theorem, and for $s > -\frac{1}{4}$ this follows from $<\mbox{$ X^+_{s,b} $}i > \le c \prod_{i=1}^3<\mbox{$ X^+_{s,b} $}i_i >$. \section{Estimates on quartic nonlinearities} \begin{satz}\label{t2} Let $n=1$. Assume $ 0 \geq s > - \frac{1}{6}$ and $ - \frac{1}{2} < b' < \frac{3s}{2} - \frac{1}{4}$. Then in the periodic and nonperiodic case for all $b>\frac{1}{2}$ the estimate \[\n{ \prod_{i=1}^4\overline{u}_i}{\XX{s}{b'}} \leq c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] holds true. \end{satz} Proof: Again we write $f_i(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau - \mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} \overline{u}_i(\mbox{$ X^+_{s,b} $}i, \tau)$, so that \[\n{ \prod_{i=1}^4 \overline{u}_i}{\XX{s}{b'}}=c \n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^4 <\!\!\tau_i\!\! - \!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] Now we can use the inequality (\mbox{${\longrightarrow}$}ef{41}) with $m=4$ and the assumption $b'<\frac{3 s}{2} - \frac{1}{4}$ to obtain \[<\mbox{$ X^+_{s,b} $}i>^{s + \frac{1}{2}-\epsilon}\prod_{i=1}^4<\mbox{$ X^+_{s,b} $}i_i>^{-s+\epsilon} \leq c (<\tau + \mbox{$ X^+_{s,b} $}i^2 >^{-b'} + \sum_{i=1}^4 <\tau_i - \mbox{$ X^+_{s,b} $}i_i^2>^{-b'} \chi_{A_i})\] for some $\epsilon>0$. (Again in $A_i$ we assume $<\tau_i - \mbox{$ X^+_{s,b} $}i_i^2> \,\,\ge \,\, <\tau + \mbox{$ X^+_{s,b} $}i^2 >$.) From this it follows, that \[\n{ \prod_{i=1}^4 \overline{u}_i}{\XX{s}{b'}} \le c \sum_{j=0}^4 \n{I_j}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}},\] with \[I_0(\mbox{$ X^+_{s,b} $}i,\tau)=<\mbox{$ X^+_{s,b} $}i>^{-\frac{1}{2}+\epsilon}\int d \nu \prod_{i=1}^4<\tau_i - \mbox{$ X^+_{s,b} $}i_i^2>^{-b} <\mbox{$ X^+_{s,b} $}i_i>^{-\epsilon}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)\] and, for $1 \le j \le m$, \begin{eqnarray*} I_j(\mbox{$ X^+_{s,b} $}i,\tau)\!\!\!\! & = & \!\!\!\!<\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{-\frac{1}{2}+\epsilon}<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2 \!\!>^{b'}\!\!\int\!\! d \nu <\!\!\tau_j \!\!- \!\!\mbox{$ X^+_{s,b} $}i_j^2 \!\! >^{-b'} \prod_{i=1}^4 <\!\!\tau_i\!\! -\!\! \mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-\epsilon}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)\chi_{A_i} \\ \!\!\!\!& \le &\!\!\!\! <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{-\frac{1}{2}+\epsilon}<\!\!\tau + \mbox{$ X^+_{s,b} $}i^2 \!\!>^{-b}\!\!\int \!\! 
d \nu <\!\!\tau_j - \mbox{$ X^+_{s,b} $}i_j^2\!\! >^{b} \prod_{i=1}^4 <\!\!\tau_i - \mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-\epsilon} f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i). \end{eqnarray*} Next we estimate $I_0$ using first Sobolev's embedding theorem, then H\"older's inequality, again Sobolev and finally Corollary \mbox{${\longrightarrow}$}ef{k21}. Here $\epsilon'$, $\epsilon''$ denote suitable small, positive numbers. \begin{eqnarray*} \n{I_0}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} &=&\n{\prod_{i=1}^4 J^{s-\epsilon} \overline{u}_i}{L^2_t (H_x^{-\frac{1}{2}+\epsilon})} \le c\n{\prod_{i=1}^4 J^{s-\epsilon} \overline{u}_i}{L^2_t (L_x^{1+\epsilon'})}\\ & \le &c\prod_{i=1}^4\n{ J^{s-\epsilon} \overline{u}_i}{L_t^{8}(L_x^{4+4\epsilon'})} \le c\prod_{i=1}^4\n{ J^{s-\epsilon''} \overline{u}_i}{L_t^{8}(L_x^{4})}\\ & \le &c\prod_{i=1}^m\n{ J^{s}\overline{u}_i}{\XXm{0}{b}}=c\prod_{i=1}^m\n{\overline{u}_i}{\XXm{s}{b}} \end{eqnarray*} To estimate $I_j$, $1\le j \le m$, we use Sobolev (in both variables) plus duality, H\"older, again Sobolev (in the space variable) and Lemma \mbox{${\longrightarrow}$}ef{l21}. Again we need suitable small, positive numbers $\epsilon'$, $\epsilon''$ and $\epsilon'''$. \begin{eqnarray*} \n{I_j}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} & \le & c\n{(\prod_{\stackrel{i=1}{i \ne j}}^4 J^{s-\epsilon} \overline{u}_i)(J^{-\epsilon}\mbox{${\cal F}$}^{-1}f_j)}{\XX{-\frac{1}{2}+\epsilon}{-b}}\\ & \le & c\n{(\prod_{\stackrel{i=1}{i \ne j}}^4 J^{s-\epsilon} \overline{u}_i)(J^{-\epsilon}\mbox{${\cal F}$}^{-1}f_j)}{L^{1}_t(L_x^{1+\epsilon'})}\\ & \le & c\n{J^{-\epsilon}\mbox{${\cal F}$}^{-1}f_j}{L^2_{x,t}}\prod_{\stackrel{i=1}{i \ne j}}^4\n{ J^{s-\epsilon} \overline{u}_i}{L^{6}_t(L_x^{6+\epsilon''})}\\ & \le & c\n{J^{-\epsilon}\mbox{${\cal F}$}^{-1}f_j}{L^2_{x,t}}\prod_{\stackrel{i=1}{i \ne j}}^4\n{ J^{s-\epsilon'''} \overline{u}_i}{L^{6}_{xt}}\\ & \le & c\n{f_j}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}\prod_{\stackrel{i=1}{i \ne j}}^4\n{ J^{s} \overline{u}_i}{\XXm{0}{b}} = c \prod_{i=1}^4\n{\overline{u}_i}{\XXm{s}{b}} \end{eqnarray*} $ \Box$ In the periodic case the following examples show, that for all the other quartic nonlinearities ($u^4,\,\,u^3\overline{u},\,\,...,\,\,u\overline{u}^3$) the corresponding estimates fail for all $s<0$. The argument is essentially that given in the proof of Thm 1.10 in \cite{KPV96}. \begin{bsp}\label{ex51} In the periodic case in one space dimension the estimate \[\n{\prod_{i=1}^4 u_i}{\XX{s}{b'}} \le c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] fails for all $s<0$, $b, b' \in {\bf R}$. \end{bsp} Proof: The above estimate implies \[\n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^4 <\!\!\tau_i\!\! 
+ \!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c \prod_{i=1}^4 \n{f_i}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] Defining for $n \in {\bf N}$ \[f^{(n)}_{1,2}(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,2n}\chi(\tau + \mbox{$ X^+_{s,b} $}i^2),\hspace{0.3cm}f^{(n)}_3(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,-n}\chi(\tau + \mbox{$ X^+_{s,b} $}i^2),\hspace{0.3cm}f^{(n)}_4(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,0}\chi(\tau + \mbox{$ X^+_{s,b} $}i^2),\] where $\chi$ is the characteristic function of $[-1,1]$, we have $\n{f^{(n)}_i}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}=c$ and it would follow that \begin{equation}\label{51} n^{-3s} \n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^4 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c. \end{equation} Now a simple computation shows that \[\int d \nu \prod_{i=1}^4 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i) \ge \delta_{\mbox{$ X^+_{s,b} $}i,3n}\chi(\tau + \mbox{$ X^+_{s,b} $}i^2).\] Inserting this into (\mbox{${\longrightarrow}$}ef{51}) we obtain $n^{-2s} \le c$, which is a contradiction for any $s<0$. $ \Box$ $Remark:$ Using only the sequences $f^{(n)}_{1,2,3}$ from the above proof, the same calculation shows that in the periodic case the estimate \[\n{\prod_{i=1}^3 u_i}{\XX{s}{b'}} \le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}\] fails for all $s<0$, $b, b' \in {\bf R}$. \begin{bsp}\label{ex52} In the periodic case in one space dimension the estimates \[\n{ u_1 \overline{u}_2 \tilde{u}_3 \tilde{u}_4}{\XX{s}{b'}} \le c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}},\] where $\tilde{u}=u$ or $\tilde{u}=\overline{u}$, fail for all $s<0$, $b, b' \in {\bf R}$. \end{bsp} Proof: We define \begin{eqnarray*} f^{(n)}_{1}(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,n}\chi(\tau + \mbox{$ X^+_{s,b} $}i^2)&,&f^{(n)}_{2}(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,-n}\chi(\tau - \mbox{$ X^+_{s,b} $}i^2)\\ f^{(n)}_{3,4}(\mbox{$ X^+_{s,b} $}i,\tau)=\delta_{\mbox{$ X^+_{s,b} $}i,0}\chi(\tau \pm \mbox{$ X^+_{s,b} $}i^2)&&(+ \,\,\,\,\mbox{for}\,\,\,\,\tilde{u}_{3,4}=u_{3,4},\,\,\,- \,\,\,\,\mbox{for}\,\,\,\,\tilde{u}_{3,4}=\overline{u}_{3,4}). \end{eqnarray*} Then the above estimate would imply \begin{equation}\label{52} n^{-2s} \n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^4 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c. \end{equation} Now \[\int d \nu \prod_{i=1}^4 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i) \ge \delta_{\mbox{$ X^+_{s,b} $}i,0}\chi(\tau ),\] which inserted into (\mbox{${\longrightarrow}$}ef{52}) again leads to $n^{-2s} \le c$. $ \Box$ $Remark:$ Using only the sequences $f^{(n)}_{1,2,3}$ from the above proof, we see that in the periodic case the estimates \[\n{u_1 \overline{u}_2 \tilde{u}_3}{\XX{s}{b'}} \le c \prod_{i=1}^3 \n{u_i}{\XX{s}{b}}\] fail for all $s<0$, $b, b' \in {\bf R}$. Now we turn to discuss the continuous case, where we can use the bi- and trilinear inequalities of section 2.2 respectively 3 in order to prove the relevant estimates for some $s<0$. We start with the following \begin{prop}\label{p51} Let $0\ge s > -\frac{1}{8}$, $-\frac{1}{2}<b'<-\frac{1}{4}+2s$. 
Then in the continuous case in one space dimension for any $b>\frac{1}{2}$ the estimate \[\n{u_1u_2\overline{u}_3\overline{u}_4}{\XX{s}{b'}}\leq c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] holds true. \end{prop} Proof: Apply part iii) of Lemma \mbox{${\longrightarrow}$}ef{l23} to obtain \[\n{u_1u_2\overline{u}_3\overline{u}_4}{\XX{s}{b'}}\leq c \n{u_1}{\XX{s}{b}}\n{u_2\overline{u}_3\overline{u}_4}{L_t^2(H^{\sigma - s})},\] provided that $s\le 0$, $-\frac{1}{2}\le \sigma \le 0$, $b'<-\frac{1}{4}+\frac{\sigma}{2}$. This is fulfilled for $\sigma = 4s$ and the second factor is equal to \[\n{u_2\overline{u}_3\overline{u}_4}{L_t^2(H^{3 s})} \le c \prod_{i=2}^4 \n{u_i}{\XX{s}{b}}\] by Lemma \mbox{${\longrightarrow}$}ef{l32} and the remark below. $ \Box$ To show that this proposition is essentially (up to the endpoint) sharp, we present the following counterexample (cf. Thm 1.4 in \cite{KPV96}): \begin{bsp}\label{ex53} In the nonperiodic case in one space dimension the estimate \[\n{ u_1 u_2 \overline{u}_3 \overline{u}_4}{\XX{s}{b'}} \le c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] fails for all $s<-\frac{1}{8}$, $b, b' \in {\bf R}$. \end{bsp} Proof: The above estimate implies \[\n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^4 <\!\!\sigma_i\!\! >^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c \prod_{i=1}^4 \n{f_i}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}},\] where $<\sigma_{1,2}>=<\tau_{1,2}+\mbox{$ X^+_{s,b} $}i^2_{1,2}>$ and $<\sigma_{3,4}>=<\tau_{3,4}-\mbox{$ X^+_{s,b} $}i^2_{3,4}>$. Choosing \[f^{(n)}_{1,2}(\mbox{$ X^+_{s,b} $}i,\tau)=\chi(\mbox{$ X^+_{s,b} $}i - n)\chi(\tau + \mbox{$ X^+_{s,b} $}i^2),\,\,\,\,\,\,\,\,f^{(n)}_{3,4}(\mbox{$ X^+_{s,b} $}i,\tau)=\chi(\mbox{$ X^+_{s,b} $}i + n)\chi(\tau - \mbox{$ X^+_{s,b} $}i^2)\] we arrive at \begin{equation}\label{53} n^{-4s} \n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^4 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}} \le c. \end{equation} Now an elementary computation gives \[\int d \nu \prod_{i=1}^4 f^{(n)}_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i) \ge c\chi_c(2n\mbox{$ X^+_{s,b} $}i )\chi_c(\tau ),\] where $\chi_c$ is the characteristic function of $[-c,c]$. Inserting this into (\mbox{${\longrightarrow}$}ef{53}) we get $n^{-4s-\frac{1}{2}} \le c$, which is a contradiction for any $s<-\frac{1}{8}$. $ \Box$ Finally we consider the remaining nonlinearities $u^4$, $u^3\overline{u}$ and $u\overline{u}^3$, for which we can lower the bound on $s$ down to $-\frac{1}{6}+ \epsilon$ : \begin{satz}\label{t22} Let $n=1$. Assume $ 0 \geq s > - \frac{1}{6}$, $ - \frac{1}{2} < b' < \frac{3s}{2} - \frac{1}{4}$ and $b>\frac{1}{2}$. Then in the nonperiodic case the estimates \[\n{N(u_1,u_2,u_3,u_4) }{\XX{s}{b'}} \leq c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] hold true for $N(u_1,u_2,u_3,u_4)=\prod_{i=1}^4u_i$, $=(\prod_{i=1}^3u_i)\overline{u}_4$ or $=(\prod_{i=1}^3\overline{u}_i)u_4$. \end{satz} Proof: 1. We begin with the nonlinearity $N(u_1,u_2,u_3,u_4)=\prod_{i=1}^4u_i$. Writing $f_i(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau + \mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} u_i(\mbox{$ X^+_{s,b} $}i, \tau)$ we have \[\n{ \prod_{i=1}^4 u_i}{\XX{s}{b'}}=c \n{<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\int d \nu \prod_{i=1}^4 <\!\!\tau_i\!\! 
+\!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>^{-b} <\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] The quantity controlled by the expression $<\tau + \mbox{$ X^+_{s,b} $}i^2>$, $<\tau_i+\mbox{$ X^+_{s,b} $}i_i^2>$, $1\le i\le 4$ is $|\sum_{i=1}^4 \mbox{$ X^+_{s,b} $}i_i^2 - \mbox{$ X^+_{s,b} $}i^2|$. We devide the domain of integration into $A$ and $A^c$, where in $A$ we assume $\mbox{$ X^+_{s,b} $}i^2 \le \frac{\mbox{$ X^+_{s,b} $}i_1^2}{2}$ and thus \[|\sum_{i=1}^4 \mbox{$ X^+_{s,b} $}i_i^2 - \mbox{$ X^+_{s,b} $}i^2| \ge c (\sum_{i=1}^4 \mbox{$ X^+_{s,b} $}i_i^2 + \mbox{$ X^+_{s,b} $}i^2).\] So concerning this region we may refer to the proof of Theorem \mbox{${\longrightarrow}$}ef{t2}. For the region $A^c$, where $\mbox{$ X^+_{s,b} $}i_1^2 \le 2 \mbox{$ X^+_{s,b} $}i^2$ we have the upper bound \[c\n{(J^su_1)\prod_{i=2}^4u_i}{\XX{0}{b'}} \le c \n{u_1}{\XX{s}{b}}\n{\prod_{i=2}^4u_i}{L_t^2(H_x^{3s})}\le c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] by Lemma \mbox{${\longrightarrow}$}ef{l23}, part iii), which requires $b' < \frac{3s}{2} - \frac{1}{4}$, $s\ge - \frac{1}{6}$, and by Lemma \mbox{${\longrightarrow}$}ef{l33} (and the remark below), which demands $s> - \frac{1}{6}$. 2. Next we consider $N(u_1,u_2,u_3,u_4)=(\prod_{i=1}^3u_i)\overline{u}_4$. For $1\le i \le 3$ we choose the $f_i$ as in the first part of this proof and $f_4(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau - \mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} \overline{u}_4(\mbox{$ X^+_{s,b} $}i, \tau)$. Then the left hand side of the claimed estimate is equal to \[c \n{\!\!<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\!\!\int\!\! d \nu \prod_{i=1}^3\!\! <\!\!\tau_i\!\! +\!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>\!\!\!\!^{-b} \!\!<\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!\!\!^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)<\!\!\tau_4\!\! -\!\!\mbox{$ X^+_{s,b} $}i_4^2\!\!>\!\!\!\!^{-b} \!\!<\!\!\mbox{$ X^+_{s,b} $}i_4\!\!>\!\!\!\!^{-s}f_4(\mbox{$ X^+_{s,b} $}i_4,\tau_4)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] Now the quantity controlled by the expressions $<\tau + \mbox{$ X^+_{s,b} $}i^2>$, $<\tau_i + \mbox{$ X^+_{s,b} $}i_i^2>$, $1\le i \le 3$, and $<\tau_4 - \mbox{$ X^+_{s,b} $}i_4^2>$ is \[c.q.:=|\mbox{$ X^+_{s,b} $}i_1^2 + \mbox{$ X^+_{s,b} $}i_2^2 + \mbox{$ X^+_{s,b} $}i_3^2 - \mbox{$ X^+_{s,b} $}i_4^2 - \mbox{$ X^+_{s,b} $}i^2 |.\] We devide the domain of integration into the regions $A$, $B$ and $C =(A+B)^c$, where in $A$ it should hold that \[c.q.\,\,\,\ge c (\sum_{i=1}^4 \mbox{$ X^+_{s,b} $}i_i^2 + \mbox{$ X^+_{s,b} $}i^2).\] Again, concerning this region we may refer to the proof of Thm. \mbox{${\longrightarrow}$}ef{t2}. Next we write $B = \bigcup_{i=1}^3 B_i$, where in $B_i$ we assume $\mbox{$ X^+_{s,b} $}i_i^2 \le c \mbox{$ X^+_{s,b} $}i^2$ for some large constant $c$. By symmetry it is sufficient to consider the subregion $B_1$, where we obtain the upper bound \[c \n{(J^su_1)u_2 u_3 \overline{u}_4}{\XX{0}{b'}} \le c \n{u_1}{\XX{s}{b}}\n{u_2 u_3 \overline{u}_4}{L^2_t(H_x^{3s})}\le c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] by Lemma \mbox{${\longrightarrow}$}ef{l23}, part iii), demanding for $b' < \frac{3s}{2} - \frac{1}{4}$, $3s \ge -\frac{1}{2}$ and Lemma \mbox{${\longrightarrow}$}ef{l32} (resp. the remark below), where $s>-\frac{1}{6}$ is required. Considering the region $C$ we may assume by symmetry between the first three factors that $\mbox{$ X^+_{s,b} $}i_1^2\ge \mbox{$ X^+_{s,b} $}i_2^2 \ge \mbox{$ X^+_{s,b} $}i_3^2$. 
Then for this region it is easily checked that 1. $\mbox{$ X^+_{s,b} $}i^2 << \mbox{$ X^+_{s,b} $}i_3^2$, \hspace{0.4cm} 2. $\mbox{$ X^+_{s,b} $}i_4^2 \ge \frac{3}{2} \mbox{$ X^+_{s,b} $}i_2^2$, hence $\mbox{$ X^+_{s,b} $}i_4^2 \le c (\mbox{$ X^+_{s,b} $}i_4 + \mbox{$ X^+_{s,b} $}i_2)^2$, and \hspace{0.4cm}3. $\mbox{$ X^+_{s,b} $}i_1^2 \le c (\mbox{$ X^+_{s,b} $}i_1 - \mbox{$ X^+_{s,b} $}i_3)^2$. This implies \begin{itemize} \item[1.] $<\mbox{$ X^+_{s,b} $}i>^{-2s}<\mbox{$ X^+_{s,b} $}i_4>^{-s} \le c <\mbox{$ X^+_{s,b} $}i_4 + \mbox{$ X^+_{s,b} $}i_2>^{-3s}$\hspace{0.5cm}and \item[2.] $<\mbox{$ X^+_{s,b} $}i>^{\frac{1}{2}+3s}<\mbox{$ X^+_{s,b} $}i_1>^{-s}<\mbox{$ X^+_{s,b} $}i_2>^{-s}<\mbox{$ X^+_{s,b} $}i_3>^{-s}\le c <\mbox{$ X^+_{s,b} $}i_1 - \mbox{$ X^+_{s,b} $}i_3>^{\frac{1}{2}}$, \end{itemize} leading to the upper bound \begin{eqnarray*} && \n{J_-^{\frac{1}{2}}(J^su_1,J^su_3) J^{-3s}(J^su_2J^s\overline{u}_4)}{\XX{-\frac{1}{2}}{b'}}\\ & \le & c \n{J_-^{\frac{1}{2}}(J^su_1,J^su_3) J^{-3s}(J^su_2J^s\overline{u}_4)}{L_t^p(L_x^{1+})}\hspace{2cm}(b'-\frac{1}{2}=-\frac{1}{p})\\ & \le & c \n{J_-^{\frac{1}{2}}(J^su_1,J^su_3)}{L^2_{xt}}\n{J^{-3s}(J^su_2J^s\overline{u}_4)}{L_t^q(L_x^{2+})}\hspace{1cm}(\frac{1}{q}=\frac{1}{p}-\frac{1}{2}=-b'). \end{eqnarray*} Using Corollary \mbox{${\longrightarrow}$}ef{k23} the first factor can be estimated by \[c \n{u_1}{\XX{s}{b}}\n{u_3}{\XX{s}{b}},\] while for the second we can use Sobolev's embedding Theorem and part ii) of Lemma \mbox{${\longrightarrow}$}ef{l23} to obtain the bound \[c \n{J^su_2J^s\overline{u}_4}{L_t^q(H_x^{-3s+})} \le c \n{u_2}{\XX{s}{b}}\n{u_4}{\XX{s}{b}}.\] Here the restriction $b' < \frac{3s}{2} - \frac{1}{4}$ is required again. 3. Finally we consider $N(u_1,u_2,u_3,u_4)=(\prod_{i=1}^3\overline{u}_i)u_4$. With $f_i(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau - \mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} \overline{u}_i(\mbox{$ X^+_{s,b} $}i, \tau)$, $1\le i \le 3$ and $f_4(\mbox{$ X^+_{s,b} $}i, \tau)= <\tau + \mbox{$ X^+_{s,b} $}i^2>^{b}<\mbox{$ X^+_{s,b} $}i>^s \mbox{${\cal F}$} u_4(\mbox{$ X^+_{s,b} $}i, \tau)$ the norm on the left hand side is equal to \[c \n{\!\!<\!\!\tau \!\!+\!\! \mbox{$ X^+_{s,b} $}i^2\!\!>^{b'} <\!\!\mbox{$ X^+_{s,b} $}i\!\!>^{s}\!\!\int\!\! d \nu \prod_{i=1}^3\!\! <\!\!\tau_i\!\! -\!\!\mbox{$ X^+_{s,b} $}i_i^2\!\!>\!\!\!\!^{-b} \!\!<\!\!\mbox{$ X^+_{s,b} $}i_i\!\!>\!\!\!\!^{-s}f_i(\mbox{$ X^+_{s,b} $}i_i,\tau_i)<\!\!\tau_4\!\! +\!\!\mbox{$ X^+_{s,b} $}i_4^2\!\!>\!\!\!\!^{-b} \!\!<\!\!\mbox{$ X^+_{s,b} $}i_4\!\!>\!\!\!\!^{-s}f_4(\mbox{$ X^+_{s,b} $}i_4,\tau_4)}{L^2_{\mbox{$ X^+_{s,b} $}i,\tau}}.\] The controlled quantity here is \[c.q.:=|\mbox{$ X^+_{s,b} $}i_1^2 + \mbox{$ X^+_{s,b} $}i_2^2 + \mbox{$ X^+_{s,b} $}i_3^2 - \mbox{$ X^+_{s,b} $}i_4^2 + \mbox{$ X^+_{s,b} $}i^2 |.\] Devide the area of integration into $A$, $B$ and $C =(A+B)^c$, where in $A$ we assume again \[c.q.\,\,\,\ge c (\sum_{i=1}^4 \mbox{$ X^+_{s,b} $}i_i^2 + \mbox{$ X^+_{s,b} $}i^2)\] in order to refer to the proof of Thm. \mbox{${\longrightarrow}$}ef{t2}. In $B$ we assume $\mbox{$ X^+_{s,b} $}i_4^2 \le c \mbox{$ X^+_{s,b} $}i^2$, so that for this region we have the bound \[c \n{\overline{u}_1 \overline{u}_2 \overline{u}_3(J^su_4)}{\XX{0}{b'}} \le c \n{u_4}{\XX{s}{b}}\n{u_2 u_3 u_4}{L^2_t(H_x^{3s})}\le c \prod_{i=1}^4 \n{u_i}{\XX{s}{b}}\] by Lemma \mbox{${\longrightarrow}$}ef{l23}, part iii), and Lemma \mbox{${\longrightarrow}$}ef{l33} and the remark below. 
Here $b' < \frac{3s}{2} - \frac{1}{4}$ and $s>-\frac{1}{6}$ is required. For the region $C$ we shall assume again that $\mbox{$ X^+_{s,b} $}i_1^2\ge \mbox{$ X^+_{s,b} $}i_2^2 \ge \mbox{$ X^+_{s,b} $}i_3^2$. Then it is easily checked that in $C$ 1. $\mbox{$ X^+_{s,b} $}i^2 << \mbox{$ X^+_{s,b} $}i_4^2$, \hspace{0.4cm} 2. $\mbox{$ X^+_{s,b} $}i_4^2 \ge \frac{3}{2} \mbox{$ X^+_{s,b} $}i_2^2$, hence $\mbox{$ X^+_{s,b} $}i_4^2 \le c (\mbox{$ X^+_{s,b} $}i_4 + \mbox{$ X^+_{s,b} $}i_2)^2$, and \hspace{0.4cm}3. $\mbox{$ X^+_{s,b} $}i_1^2 \le c (\mbox{$ X^+_{s,b} $}i_1 - \mbox{$ X^+_{s,b} $}i_3)^2$. Then for $C$ we have the estimate \begin{eqnarray*} && \n{J_-^{\frac{1}{2}}(J^s\overline{u}_1,J^s\overline{u}_3) J^{-3s}(J^su_4J^s\overline{u}_2)}{\XX{-\frac{1}{2}}{b'}}\\ & \le & c \n{J_-^{\frac{1}{2}}(J^su_1,J^su_3)}{L^2_{xt}}\n{J^{-3s}(J^su_2J^s\overline{u}_4)}{L_t^q(L_x^{2+})} \end{eqnarray*} with $\frac{1}{q}=-b'$, cf. the corresponding part of step 2. of this proof. Again we can use Corollary \mbox{${\longrightarrow}$}ef{k23} and part ii) of Lemma \mbox{${\longrightarrow}$}ef{l23} to obtain the desired bound. $ \Box$ \end{document}
\begin{document} \maketitle \begin{abstract} Using the theory of Frobenius singularities, we show that $13mK_X + 45mA$ is very ample for an ample Cartier divisor $A$ on a Kawamata log terminal surface $X$ with Gorenstein index $m$, defined over an algebraically closed field of characteristic $p>5$. \end{abstract} \section{Introduction} The positivity of line bundles is a fundamental topic of research in algebraic geometry. Showing the base point freeness or very ampleness of line bundles allows for the description of the geometry of algebraic varieties. The motivation for this paper centers around two questions. The first one is the following: given an ample Cartier divisor $A$, find an effective $n \in \mathbb{N}$ for which $nA$ is very ample. A famous theorem of Matsusaka states that one can find such $n \in \mathbb{N}$ which depends only on the Hilbert polynomial of $A$, when the variety is smooth and the characteristic of the field if equal to zero (\cite{Matsusaka}). This theorem plays a fundamental role in constructing moduli spaces of polarized varieties. In positive characteristic, Koll\'{a}r proved the same statement for normal surfaces (\cite[Theorem 2.1.2]{Kollar85}). The second question motivating the results of this paper is the famous Fujita conjecture, which, in characteristic zero, is proved only for curves and surfaces. \begin{conj}[Fujita conjecture] Let $X$ be a smooth projective variety of dimension $n$, and let $A$ be an ample Cartier divisor on $X$. Then $K_X + (n+2)A$ is very ample. \end{conj} Fujita-type results play a vital role in understanding the geometry of algebraic varieties. In positive characteristic, the conjecture is known only for curves, and those surfaces which are neither of general type, nor quasi-elliptic. This follows from a result of Shepherd-Barron which says that on such surfaces, rank two vector bundles which do not satisfy Bogomolov inequality are unstable (\cite[Theorem 7]{Shepherd-Barron}). Indeed, the celebrated proof by Reider of the Fujita conjecture for characteristic zero surfaces, can be, in such a case, applied without any modifications (see \cite{Terakawa}, \cite{Reider}). Given lack of any progress for positive characteristic surfaces of general type, Di Cerbo and Fanelli undertook a different approach to the problem (\cite{DiCerboFanelli}). They proved among other things that $2K_X + 4A$ is very ample, where $A$ is ample, and $X$ is a smooth surface of general type in characteristic $p\geq 3$. In this paper, we consider the aforementioned questions for singular surfaces. As far as we know, no effective bounds for singular surfaces in positive characteristic have been obtained before. The main theorem is the following. \begin{theorem} \label{theorem:main-mini} Let $X$ be a projective surface with Kawamata log terminal singularities defined over an algebraically closed field of characteristic $p>5$. Assume that $mK_X$ is Cartier for some $m \in \mathbb{N}$. Then, for an ample Cartier divisor $A$: \begin{itemize} \item $4mK_X+14mA$ is base point free, and \item $13mK_X+45mA$ is very ample. \end{itemize} \end{theorem} In particular, Kawamata log terminal surfaces with $K_X$ ample, and defined in characteristic $p>5$, satisfy that $58mK_X$ is very ample, where $m$ is the Gorenstein index. The bounds are not sharp. See Theorem \ref{theorem:main} for a slightly more general statement. Instead of assuming that $X$ is Kawamata log terminal and $p>5$, it is enough to assume that $X$ is $F$-pure and $\mathbb{Q}$-factorial. 
The theorem also holds in characteristic zero, but in such a case the existence of effective bounds follows in an easier way by Koll\'{a}r's effective base point free theorem (\cite{kollareffective}) and the Kodaira vanishing theorem. The proof consists of three main ingredients. First, we apply the result of Di Cerbo and Fanelli on a desingularization of $X$. This shows that the base locus of $2mK_X + 7mA$ is zero dimensional. Then, we apply the technique of Cascini, Tanaka and Xu (see \cite[Theorem 3.8]{CasciniTanakaXu}) to show that the base locus of $2(2mK_X + 7mA)$ is empty. Therewith, the very ampleness of $13mK_X + 45mA$ follows from a generalization of a result of Keeler (\cite[Theorem 1.1]{Keeler}) in the case of $F$-pure varieties. As far as we know, after the paper of Cascini, Tanaka and Xu had been announced, no one has yet applied their technique of constructing $F$-pure centers. We believe that down-to-earth examples provided in our paper may be suitable as a gentle introduction to some parts of their prolific paper, \cite{CasciniTanakaXu}.\\ As a corollary to the main theorem, we obtain the following Matsusaka-type bounds. \begin{corollary} \label{corollary:Matsusaka-mini}Let $A$ and $N$ be, respectively, an ample and a nef Cartier divisor on a Kawamata log terminal projective surface defined over an algebraically closed field of characteristic $p>5$. Let $m\in \mathbb{N}$ be such that $mK_X$ is Cartier. Then $kA - N$ is very ample for any \[ k > \frac{2A \cdot (H+N)}{A^2}((K_X +3A) \cdot A + 1), \] where $H \defeq 13mK_X+45mA$. \end{corollary} One of the fundamental conjectures in birational geometry is Borisov-Alexeev Boundedness conjecture, which says that $\epsilon$-klt log Fano varieties are bounded. In dimension two it was proved by Alexeev (\cite[Theorem 6.9]{Alexeev}). One of the ingredients of the proof is the beforementioned result about boundedness of polarized surfaces by Koll\'{a}r (\cite[Theorem 2.1.2]{Kollar85}). Further, explicit bounds on the volume have been obtained by Lai (\cite[Theorem 4.3]{Lai}) and Jiang (\cite[Theorem 1.3]{Jiang}). For characteristic $p>5$, in the proof of the boundedness of $\epsilon$-klt log del Pezzo pairs, we can replace Koll\'{a}r's result by Theorem \ref{theorem:main-mini}, and hence obtain rough, but explicit bounds on the size of the bounded family. \begin{corollary}\label{corollary:boundedness_of_log_del_Pezzo} For any $\epsilon \in \mathbb{R}_{>0}$ and a finite set $I \subseteq [0,1] \cap \mathbb{Q}$, there exist effectively computable natural numbers $b(\epsilon,I)$ and $n(\epsilon,I)$ satisfying the following property. Let $(X,\Delta)$ be an $\epsilon$-klt log del Pezzo surface with the coefficients of $\Delta$ contained in $I$, defined over an algebraically closed field of characteristic $p>5$. Then, there exists a very ample divisor $H$ on $X$ such that $H^i(X,H) = 0$ for $i>0$ and \[ |H^2|, |H \cdot K_X|, |H\cdot \Delta|, |K_X \cdot \Delta|, |\Delta^2| \] are smaller than $b(\epsilon,I)$. Further, $H$ embeds $X$ into $\mathbb{P}^k$, where $k \leq b(\epsilon, I)$, and $n(\epsilon,I)\Delta$ is Cartier. \end{corollary} \begin{comment} \begin{corollary} \label{corollary:boundedness_of_log_del_Pezzo} Let $(X,\Delta)$ be a klt log del Pezzo surface such that $m(K_X+\Delta)$ and $mK_X$ are Cartier for some $m\in \mathbb{N}$. 
Then, there exists a very ample divisor $H$ on $X$ such that $H^i(X,H) = 0$ for $i>0$ and \begin{align*} H^2 &\leq 128m^5(2m-1)a^2 + \max\Big(9,2m + 4 + \frac{2}{m}\Big)b^2, \\ |H \cdot K_X| &\leq 128m^5(2m-1)a + 3m\max\Big(9,2m + 4 + \frac{2}{m}\Big)b, \\ H \cdot \Delta & \leq 128m^5(2m-1)(a+b) + 3m\max\Big(9,2m + 4 + \frac{2}{m}\Big)(a+b), \end{align*} where $a = 18m$ and $b=62m^2$. In particular, $(X,\Delta)$ embeds into $\mathbb{P}^k$, where \[ k \leq 128m^5(2m-1)a^2 + \max\Big(9,2m + 4 + \frac{2}{m}\Big)b^2. \] \end{corollary} Note that $\epsilon$-klt log del Pezzo surfaces have the $\mathbb{Q}$-factorial index at each point bounded by $2(2/\epsilon)^{128/\epsilon^5}$ and the number of singular points bounded by $128/\epsilon^5$ (see Proposition \ref{proposition:log_del_pezzo_first_bounds}). \end{comment} The paper is organized as follows. The second section consists of basic preliminaries. In the third section we give a proof of the base point free part of the main theorem. In the fourth section, we prove a technical generalization of the main theorem to arbitrary characteristic. In the fifth section we show Matsusaka-type bounds (Corollary \ref{corollary:Matsusaka-mini}). In the sixth section we derive effective bounds on log del Pezzo pairs.\\ As far as we are concerned, the best source of knowledge about Frobenius singularities are unpublished notes of Schwede \cite{SchwedeUtah}. We also recommend \cite{SchwedeTucker}. The readers interested in Matsusaka theorem and Fujita-type theorems are encouraged to consult \cite[Section 10.2 and 10.4]{Laz2}. A proof of Reider's theorem for normal surfaces in characteristic zero may be found in \cite{Sakai}. Certain other Fujita-type bounds for singular surfaces in characteristic zero are obtained in \cite{LangerAdjoint}. The effective base point free theorem in characteristic zero is proved in \cite{kollareffective}. Various results on the base point free theorem for surfaces in positive characteristic may be found in \cite{tanakaxmethod} and \cite{mnw}. \section{Preliminaries} We always work over an algebraically closed field $k$ of positive characteristic $p>0$. We refer to \cite{KollarMori} for basic results and basic definitions in birational geometry like log discrepancy or Kawamata log terminal singularities. We say that a pair $(X,\Delta)$ is a \emph{log Fano} pair if $-(K_X + \Delta)$ is ample. In the case when $\dim(X)=2$, we say that $(X,\Delta)$ is a \emph{log del Pezzo} pair. A pair $(X,B)$ is \emph{$\epsilon$-klt}, if the log discrepancy along any divisor is greater than $\epsilon$. Note that the notion of being $0$-klt is equivalent to klt. The \emph{Cartier index} of $(X,\Delta)$ is the minimal number $m \in \mathbb{N}$ such that $m(K_X +\Delta)$ is Cartier. If $(X,\Delta)$ is klt, then it must be $1/m$-klt. Recall that any Kawamata log terminal surface has rational singularities, and is, in particular, $\mathbb{Q}$-factorial (see \cite{Tanaka2dim}). We denote the base locus of a line bundle $\mathcal{L}$ by $\mathrm{Bs}(\mathcal{L})$. Note, that by abuse of notation, we use the notation for line bundles and the notation for divisors interchangeably. We will repeatedly use, without mentioning directly, that $K_X + 3A$ is nef, where $X$ is a projective normal surface and $A$ is an ample Cartier divisor. This is a direct consequence of the cone theorem (\cite[Proposition 3.15]{Tanaka2dim}). \\ The following facts are used in the proofs in this paper. 
\begin{theorem}[{Mumford regularity, \cite[Theorem 1.8.5]{Laz1}}] \label{theorem:fujita} Let $X$ be a projective variety, and let $M$ be a globally generated ample line bundle on $X$. Let $\mathcal{F}$ be a coherent sheaf on $X$ such that $H^i(X, \mathcal{F} \otimes M^{-i})=0$ for $i>0$. Then $\mathcal{F}$ is globally generated. \end{theorem} \begin{theorem}[{Fujita vanishing, \cite[Theorem 1.4.35]{Laz1} and \cite[Remark 1.4.36]{Laz1}}] \label{theorem:fujita_vanishing} Let $X$ be a projective variety, and let $H$ be an ample divisor on $X$. Given any coherent sheaf $\mathcal{F}$ on $X$, there exists an integer $m(\mathcal{F},H)$ such that \[ H^i\big(X, \mathcal{F} \otimes \mathcal{O}_X(mH + D)\big) = 0, \] for all $i>0$, $m \geq m(\mathcal{F},H)$ and any nef Cartier divisor $D$ on $X$. \end{theorem} \begin{theorem}[Log-concavity of volume] \label{theorem:volume} For any two big Cartier divisors $D_1$ and $D_2$ on a normal variety $X$ of dimension $n$ we have \[ \vol(D_1 +D_2)^{\frac{1}{n}} \geq \vol(D_1)^{\frac{1}{n}} + \vol(D_2)^{\frac{1}{n}}. \] \end{theorem} \noindent Recall that \[ \vol(D) \defeq \limsup_{m \to \infty} \frac{H^0(X,mD)}{m^n / n!}. \] \begin{proof} See \cite[Theorem 2.2]{DiCerboFanelli} (cf.\ \cite[Theorem 11.4.9]{Laz2} and \cite{TakagiFujita}). \end{proof} \subsection{Frobenius singularities} All the rings in this section are assumed to be geometric and of positive characteristic, that is finitely generated over an algebraically closed field of characteristic $p>0$. One of the most amazing discoveries of singularity theory is that properties of the Frobenius map may reflect how singular a variety is. This observation is based on the fact that for a smooth local ring $R$, the $e$-times iterated Frobenius map $R \to F^e_* R$ splits. Further, the splitting does not need to hold when $R$ is singular. This leads to a definition of \emph{$F$-split rings}, rings $R$ such that for divisible enough $e \gg 0$ the $e$-times iterated Frobenius map $F^e \colon R \to F^e_* R$ splits. For log pairs, we have the following definition. \begin{definition} We say that a log pair $(X, \Delta)$ is \emph{$F$-pure} if for any close point $x \in X$ and any natural number $e > 0 $ there exists a map \[ \phi \in \mathrm{Hom}_{\mathcal{O}_{X,x}}(F^e_*\mathcal{O}_{X,x}(\lfloor(p^e-1)\Delta \rfloor), \mathcal{O}_{X,x}) \] such that $1 \in \phi(F^e_*\mathcal{O}_{X,x})$, where $\mathcal{O}_{X,x}$ it the stalk at $x \in X$. \end{definition} As a consequence of Grothendieck duality (see \cite[Lemma 2.9]{HaconXu}), we have \[ \Hom(F^e_* \mathcal{O}_X, \mathcal{O}_X) \simeq H^0(X, \omega_X^{1-p^e}). \] This explains the following crucial proposition. \begin{proposition}[{\cite[Theorem 3.11, 3.13]{SchwedeFAdjunction}}] \label{proposition:bijection_frobenius_divisors} Let $X$ be a normal variety. Then, there is a natural bijection. \[ \left\{ \begin{array}{c} \text{Non-zero }\mathcal{O}_X\text{-linear maps} \\ \phi \colon F^e_* \mathcal{O}_X \to \mathcal{O}_X \text{ up to} \\ \text{pre-multiplication by units} \end{array} \right\} \longleftrightarrow \left\{ \begin{array}{c} \text{Effective }\mathbb{Q}\text{-divisors }\Delta \text{ such that } \\ (1-p^e)\Delta \sim -(1-p^e)K_X \end{array} \right\} \] \end{proposition} The $\mathbb{Q}$-divisor corresponding to a splitting $\phi \colon F^e_* \mathcal{O}_X \to\mathcal{O}_X$ will be denoted by $\Delta_{\phi}$. The morphism extends to $\phi \colon F^e_* \mathcal{O}_X((p^e-1)\Delta_{\phi}) \to \mathcal{O}_X$, which gives that $(X,\Delta_{\phi})$ is $F$-pure. 
Note that we will apply the above proposition mainly for $X$ replaced by $\Spec \mathcal{O}_{X, x}$, where $x \in X$. \\ Frobenius singularities are alleged to be the correct counterparts of birational singularities in positive characteristic. This supposition is propped up by the following theorems. \begin{theorem}[{see \cite{HaraWatanabe}}] \label{theorem:fsplit_and_lc} Let $(X,B)$ be a log pair. If $(X,B)$ is $F$-pure, then $(X, B)$ is log canonical. \end{theorem} \begin{theorem}[{\cite{Hara2dimSing}}] \label{theorem:2dimp>5} Let $X$ be a Kawamata log terminal projective surface defined over an algebraically closed field of characteristic $p>5$. Then $X$ is $F$-pure. \end{theorem} The above theorem implies, even more, that $X$ is strongly $F$-regular. In this paper, however, we do not need the notion of $F$-regularity.\\ A key tool in the theory of Frobenius splittings is a trace map (see \cite{TanakaTrace} and \cite{SchwedeAdjoint}). For an integral divisor $D$ on a normal variety $X$, there is an isomorphism derived from the Grothendieck duality (see \cite[Lemma 2.9]{HaconXu}): \begin{equation} \label{equation:Grothendieck} \mathcal{H} om_{\mathcal{O}_X}(F^e_*\mathcal{O}_X(D), \mathcal{O}_X) \simeq \mathcal{O}_X(-(p^e-1)K_X-D). \end{equation} \begin{definition} Let $B$ be a $\mathbb{Q}$-divisor such that $(p^e-1)(K_X+B)$ is Cartier. We call \[ \Tr^e_{X,B} \colon F^e_*\mathcal{O}_X(-(p^e-1)(K_X + B)) \rightarrow \mathcal{O}_X, \] the \emph{trace map}. It is constructed by applying the above isomorphism (\ref{equation:Grothendieck}) to the map \begin{equation} \label{equation:trace_evaluation} \mathcal{H} om_{\mathcal{O}_X}(F^e_*\mathcal{O}_X((p^e-1)B), \mathcal{O}_X) \xrightarrow{\mathrm{ev}} \mathcal{H} om_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{O}_X) \end{equation} being the dual of the composition $\mathcal{O}_X \xrightarrow{F^e} F^e_* \mathcal{O}_X \xhookrightarrow{\hphantom{F^e}} F^e_* \mathcal{O}_X((p^e-1)B)$. \end{definition} The rank one sheaves in question are not necessary line bundles, but since $X$ is normal, we can always restrict ourselves to the smooth locus. The trace map can be also defined when the index of $K_X+B$ is divisible by $p$, but we will not need it in this paper. The following proposition reveals the significance of the trace map. \begin{proposition}[{\cite[Proposition 2.10]{HaconXu}}] \label{proposition:trace_and_fsplit} Let $(X,B)$ be a normal log pair such that the Cartier index of $K_X+B$ is not divisible by $p$. Then $(X,B)$ is $F$-pure at a point $x \in X$ if and only if the trace map $\Tr^e_{X,B}$ is surjective at $x$ for all enough divisible $e \gg 0$. \end{proposition} It is easy to see that $\Tr^e_{X,B}$ is surjective at $x$ for all divisible enough $e \gg 0$ if and only if it is surjective for just one $e>0$ satisfying that $(p^e-1)B$ is Weil. For the convenience of the reader we give a proof of the proposition. \begin{proof} The key point is that $\Tr^e_{X,B}$ is induced by the evaluation map (\ref{equation:trace_evaluation}). Replace $X$ by $\Spec \mathcal{O}_{X,x}$. For $\phi \in \mathrm{Hom}_{\mathcal{O}_X}(F^e_*\mathcal{O}_X((p^e-1)B), \mathcal{O}_X)$, the image $\mathrm{ev}(\phi)$ is defined by the commutativity of the following diagram: \begin{center} \begin{tikzcd} \mathcal{O}_X \arrow{r}{F^e} \arrow[bend right = 25]{rr}{\mathrm{ev}(\phi)} & F^e_* \mathcal{O}_X((p^e-1)B) \arrow{r}{\phi} & \mathcal{O}_X. 
\end{tikzcd} \end{center} Note that $\mathrm{Hom}(\mathcal{O}_X,\mathcal{O}_X) \simeq \mathcal{O}_X$ is generated by the identity morphism $\mathrm{id}$. In particular, $\mathrm{ev}$ is surjective if and only if there exists $\phi$ such that $\mathrm{ev}(\phi)=\mathrm{id}$, which is equivalent to $\phi$ being a splitting. \end{proof} Further, we consider another version of the trace map. Let $D$ be a $\mathbb{Q}$-divisor such that $K_X + B + D$ is Cartier. Tensoring the trace map $\Tr^e_{X,B}$ by it, we obtain: \[ \Tr^e_{X,B}(D) \colon F^e_*\mathcal{O}_X(K_X + B + p^eD) \longrightarrow \mathcal{O}_X(K_X + B + D). \] By abuse of notation, both versions of the trace map are denoted in the same way. Later, we will need the following lemma. \begin{lemma} \label{lemma:index_divisible_by_p} Let $X$ be a $\mathbb{Q}$-factorial variety, which is $F$-pure at a point $x \in X$. Then, there exists an effective $\mathbb{Q}$-divisor $B$ such that \[ (p^e-1)(K_X + B) \] is Cartier for enough divisible $e \gg 0$, and $(X,B)$ is $F$-pure at $x$. One can take the coefficients of $B$ to be as small as possible. \end{lemma} More precisely, if we can find $B$ as above, then there exists a sequence $\lim_{j \to \infty} a_j = 0$ such that $a_j B$ satisfies the conditions of this lemma. Note that, if the Gorenstein index of $X$ is not divisible by $p$, then one can just take $B=0$. \begin{proof} By Proposition \ref{proposition:bijection_frobenius_divisors}, we know that there exists an effective $\mathbb{Q}$-divisor $\Delta$ such that $(X,\Delta)$ is $F$-pure at $x \in X$, and the Cartier index of $K_X + \Delta$ at $x$ is not divisible by $p$. In particular, for enough divisible $e \gg 0$, there exists a $\mathbb{Q}$-divisor $D \sim (p^e-1)(K_X+\Delta)$ such that $x \not \in \Supp D$. Further, we can find a Cartier divisor $E$ disjoint with $x$ for which $E + D$ is effective. Notice that $K_X + \Delta + D + E \sim p^e(K_X+\Delta) + E$ has Cartier index indivisible by $p$ for $e \gg 0$. Take \[ B = \frac{1}{p^n+1}(\Delta + D + E) \] for $n \gg 0$. Then \[ K_X + B \sim K_X + \Delta + D + E - \frac{p^n}{p^n+1}(\Delta + D + E), \] where both $K_X + \Delta + D + E$ and $p^n(\Delta + D + E)$ have Cartier indices not divisible by $p$. \end{proof} \subsection{Reider's analysis} Reider's analysis is a method of showing that divisors of the form $K_X + L$ are globally generated or very ample, where $L$ is a big and nef divisor on a smooth surface $X$. The idea is that a base point of $K_X + L$ provides us with a rank two vector bundle $\mathcal{E}$ which does not satisfy Bogomolov inequality $c_1(\mathcal{E})^2 \leq 4c_2(\mathcal{E})$. In characteristic zero such vector bundles are unstable. Using the instability, one can deduce a contradiction to the existence of a base point, when $L$ is ``numerically-ample enough''. In positive characteristic, the aforementioned fact about unstable vector bundles on smooth surfaces is not true in general. However, Shepherd-Barron proved it for surfaces which are neither of general type nor quasi-elliptic of Kodaira dimension one (\cite{Shepherd-Barron}). This leads to the following. \begin{proposition}[{\cite[Theorem 2.4]{Terakawa}}] \label{proposition:terakawa} Let $X$ be a smooth projective surface neither of general type nor quasi-elliptic with $\kappa(X)=1$, and let $D$ be a nef divisor such that $D^2 > 4$. Assume that $q \in X$ is a base-point of $K_X + D$. Then, there exists an integral curve $C$ containing $q$, such that $D \cdot C \leq 1$. 
\end{proposition} In particular, for such surfaces, $K_X + 3A$ is base point free for an ample divisor $A$. The goal of this subsection is to prove the following proposition, which covers all types of surfaces. \begin{proposition} \label{proposition:andrea_general} Let $X$ be a smooth projective surface, and let $D$ be a nef and big divisor on it. Assume that $q \in S$ is a base point of $K_X + D$. Further, suppose that \begin{enumerate} \item $D^2 > 4$, if $X$ is quasi-elliptic with $\kappa(X)=1$, \item $D^2 > \vol(K_X) + 4$, if $X$ is of general type and $p \geq 3$, \item $D^2 > \vol(K_X) + 6$, if $X$ is of general type and $p=2$, or \item $D^2 > 4$, otherwise. \end{enumerate} Then, there exists a curve $C$ containing $q$ such that \begin{enumerate} \item[(1a)] $D \cdot C \leq 5$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=3$, \item[(1b)] $D \cdot C \leq 7$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=2$, \item[(2)] $D \cdot C \leq 1$, if $X$ is of general type and $p\geq 3$, \item[(3)] $D \cdot C \leq 7$, if $X$ is of general type and $p=2$, or \item[(4)] $D \cdot C \leq 1$, otherwise. \end{enumerate} \end{proposition} Note that the case (4) is nothing else but Proposition \ref{proposition:terakawa}. The proof follows step-by-step the proof by Di Cerbo and Fanelli (\cite{DiCerboFanelli}). The only addition is that the curve $C$ must contain $q$. The idea that this must hold has been established in (\cite{Sakai}) based on (\cite{Serrano}), but, for the convenience of the reader, we present the full proof below. The following is crucial in the proof of Proposition \ref{proposition:andrea_general}. \begin{proposition}[{\cite{DiCerboFanelli}}] \label{proposition:key_for_andrea} Consider a birational morphism $\pi \colon Y \to X$ between smooth projective surfaces $X$ and $Y$ which are either of general type, or quasi-elliptic with $\kappa(X)=1$. Let $\overline{D}$ be a big divisor on $Y$ such that $H^1(Y, \mathcal{O}_Y(-\overline{D})) \neq 0$ and $\overline{D}^2>0$. Further, suppose that $D \defeq \pi_* \overline{D}$ is nef, and \begin{enumerate} \item $\overline{D}^2 > \vol(K_X)$, if $X$ is of general type and $p\geq 3$, or \item $\overline{D}^2 > \vol(K_X) + 2$, if $X$ is of general type and $p=2$. \end{enumerate} Then, there exists a non-zero non-exceptional effective divisor $\overline{E}$ on $Y$, such that \begin{align*} &k\overline{D} - 2\overline{E} \text{ is big,} \\ &(k\overline{D} - \overline{E}) \cdot \overline{E} \leq 0, \text{ and } \\ &0 \leq D \cdot E \leq \frac{k\alpha}{2}-1, \end{align*} where $E \defeq \pi_* \overline{E}$, $\alpha \defeq D^2 - \overline{D}^2$, and \begin{itemize} \item $k=3$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=3$, \item $k=4$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=2$, \item $k=1$, if $X$ is of general type and $p\geq 3$, or \item $k=1$ or $k=4$, if $X$ is of general type and $p=2$. \end{itemize} \end{proposition} \begin{proof} It follows directly from \cite[Proposition 4.3]{DiCerboFanelli}, \cite[Theorem 4.4]{DiCerboFanelli}, \cite[Proposition 4.6]{DiCerboFanelli} and \cite[Corollary 4.8]{DiCerboFanelli}. \end{proof} Further, we need the following lemma. \begin{lemma}[{\cite[Lemma 2]{Sakai}}] \label{lemma:sakai} Let $D$ be a nef and big divisor on a smooth surface $S$. If \[ D \equiv D_1 + D_2 \] for numerically non-trivial pseudo-effective divisors $D_1$ and $D_2$, then $D_1 \cdot D_2 > 0$. \end{lemma} Now, we can proceed with the proof of the main proposition in this subsection. 
\begin{proof}[Proof of Proposition \ref{proposition:andrea_general}] The first case is covered by Proposition \ref{proposition:terakawa}, so we may assume that $X$ is of general type or quasi-elliptic with $\kappa(X)=1$. Let $\pi \colon Y \to X$ be a blow-up at $q \in X$ with the exceptional curve $F$. Given that $q$ is a base point of $K_X + D$, from the exact sequence \[ 0 \rightarrow \mathcal{O}_Y(\underbrace{\pi^*(K_X + D) -F}_{K_Y + \pi^*D - 2F}) \longrightarrow \mathcal{O}_Y(\pi^*(K_X + D)) \longrightarrow \mathcal{O}_F(\pi^*(K_X+D)) \rightarrow 0 \] we obtain that \[ H^1(Y, \mathcal{O}_Y(-\pi^*D + 2F)) = H^1(Y, \mathcal{O}_Y(K_Y + \pi^*D -2F)) \neq 0. \] Set $\overline{D} \defeq \pi^*D - 2F$. Since \[ \overline{D}^2 = D^2 - 4, \] we have $\overline{D}^2>0$, and the assertions $(1)$ and $(2)$ in Proposition \ref{proposition:key_for_andrea} are satisfied. Further, using that $H^0(Y, \overline{D}) = H^0(X, \mathcal{O}_X(D)\otimes m_q^2)$ and $\vol(D)>4$, one can easily check that $\overline{D}$ is big. Hence, by Proposition \ref{proposition:key_for_andrea}, there exists a non-zero non-exceptional effective divisor $\overline{E}$ on $Y$, such that \begin{align*} &k\overline{D} - 2\overline{E} \text{ is big, and} \\ &(k\overline{D} - \overline{E}) \cdot \overline{E} \leq 0, \text{ and} \\ &0 \leq D \cdot E \leq 2k-1, \\ \end{align*} where $E = \pi_* \overline{E}$. To finish the proof, it is enough to show that $\overline{E}$ contains a component, which intersects $F$ properly. Its pushforward onto $X$ would be the sought-for curve $C$. Assume that the claim is not true, that is $\overline{E} = \mu^*E + aF$ for $a\geq 0$. We have that \[ 0 \geq (k\overline{D} - \overline{E})\cdot \overline{E} = (kD-E) \cdot E + (2k+a)a. \] This implies $kD\cdot E \leq E^2$. Since $D\cdot E \geq 0$, it holds that $E^2 \geq 0$. Given $kD - 2E$ is big, we may apply Lemma \ref{lemma:sakai} with $kD = (kD-2E) + 2E$, and obtain $kD \cdot E > 2E^2$. This is a contradiction with the other inequalities in this paragraph. \end{proof} \section{Base point freeness} The goal of this section is to prove the base point free part of Theorem \ref{theorem:main-mini}. \begin{lemma} \label{lemma:cone_corollary} Let $L$ be an ample Cartier divisor on a normal projective surface $X$. Let $\pi \colon \widetilde{X} \to X$ be the minimal resolution of singularities. Then $K_{\widetilde{X}} + 3\pi^*L$ is nef, and $K_{\widetilde{X}} + n\pi^*L$ is nef and big for $n\geq 4$. \end{lemma} \begin{proof} Take a curve $C$. We need to show that $(K_{\widetilde{X}} + 3\pi^*L) \cdot C \geq 0$. If $K_{\widetilde{X}} \cdot C \geq 0$, then the inequality clearly holds. Thus, by cone theorem (\cite[Theorem 3.13]{Tanaka2dim} and \cite[Remark 3.14]{Tanaka2dim}), we need to prove it, when $C$ is an extremal ray satisfying $K_{\widetilde{X}} \cdot C < 0$. In such a case, we have that $K_{\widetilde{X}} \cdot C \geq -3$. If $C$ is not an exceptional curve, then $3\pi^*L \cdot C \geq 3$, and so the inequality holds. But $C$ cannot be exceptional, because then its contraction would give a smooth surface (see \cite[Theorem 1.28]{KollarMori}), and so $\widetilde{X}$ would not be a minimal resolution. This concludes the first part of the lemma. As for the second part, $K_{\widetilde{X}} + n\pi^*L$ is big and nef for $n \geq 4$, since adding a nef divisor to a big and nef divisor gives a big and nef divisor. \end{proof} The following proposition yields the first step in the proof of Theorem \ref{theorem:main-mini}. 
\begin{proposition} \label{proposition:part_one} Let $X$ be a normal projective surface defined over an algebraically closed field of characteristic $p>3$. Assume that $mK_X$ is Cartier for some $m \in \mathbb{N}$. Let $A$ be an ample Cartier divisor on $X$. Then \[ \mathrm{Bs}(m(aK_{X} + bA) + N) \subseteq \Sing(X), \] for any nef Cartier divisor $N$, where $a=2$ and $b=7$. \end{proposition} The proposition is even true for $a=2$ and $b=6$, but in this case, $aK_X + bA$ need not be ample. \begin{proof} Let $\pi \colon \overline{X} \to X$ be the minimal resolution of singularities with the exceptional locus $E$. First, we claim that \[ \mathrm{Bs}(2K_{\overline{X}} + 7\pi^*A + M) \subseteq E. \] for any nef Cartier divisor $M$ on $\overline{X}$. Assume this is true. If $m=1$, then the proposition follows automatically. In general, we note that $2K_{\overline{X}} + 7\pi^*A$ is nef by Lemma \ref{lemma:cone_corollary}, and so setting $M = (m-1)(2K_{\overline{X}} + 7\pi^*A)+\pi^*N$ yields \[ \mathrm{Bs}(m(2K_{\overline{X}} + 7\pi^*A)+\pi^*N) \subseteq E. \] In particular, $\mathrm{Bs}(m(2K_{X} + 7A)+N) \subseteq \pi(E)$, which concludes the proof. Hence, we are left to show the claim. Assume by contradiction that there exists a base point $q \in \overline{X}$ of $2K_{\overline{X}} + 7\pi^*A + M$ such that $q \not \in E$. We apply Proposition \ref{proposition:andrea_general} for $D = K_{\overline{X}} + 7\pi^*A +M$. The assumptions are satisfied, because, by Lemma \ref{lemma:cone_corollary}, $D$ is big and nef, and, by Theorem \ref{theorem:volume}, \begin{align*} \vol(D) &\geq \vol(K_{\overline{X}}) + 49, \text{ if }X \text{ is of general type, and }\\ \vol(D) &\geq \vol(K_{\overline{X}}+4\pi^*A) + \vol(3\pi^*A) > 9 \text{ in general}. \end{align*} Here, we used that $K_{\overline{X}} + 4\pi^*A$ is nef and big by Lemma \ref{lemma:cone_corollary}. Therefore, there exists a curve $C$ containing $q$ such that \[ C \cdot D \leq 1. \] We can write $D = (K_{\overline{X}} + 3\pi^*A+M) + 4\pi^*A$. As $C$ is not exceptional, $C \cdot \pi^*A > 0$. Thus, we obtain a contradiction. \end{proof} Applying above Proposition \ref{proposition:part_one} and Theorem \ref{theorem:2dimp>5}, the base point free part of Theorem \ref{theorem:main-mini} follows from the following proposition by taking $L \defeq m(aK_X + bA) - K_X$. \begin{proposition} \label{proposition:part_two} Let $X$ be an $F$-pure $\mathbb{Q}$-factorial projective surface defined over an algebraically closed field of characteristic $p>0$. Let $L$ be an ample $\mathbb{Q}$-divisor on $X$ such that $K_X + L$ is an ample Cartier divisor and \[ \mathrm{Bs}(K_X + L + M) \subseteq \Sing(X), \] for every nef Cartier divisor $M$. Then $2(K_X + L)+N$ is base point free for every nef Cartier divisor $N$. \end{proposition} If we assume that $\dim \mathrm{Bs}(K_X+L+M) = 0$, then the same proof will give us that $3(K_X+L)+N$ is base point free. Before proceeding with the proof, we would like to give an example explaining an idea of how to show the proposition if we worked in characteristic zero. \begin{remark} \label{example:char0} Here, $X$ is a smooth surface defined over an algebraically closed field $k$ of characteristic zero, and $L$ is an ample Cartier divisor on it. The goal of this remark is to prove the following statement by applying a well-known strategy: \begin{quote}\textit{if $K_X + L$ is ample and $\dim \mathrm{Bs}(K_X + L) = 0$, then $3(K_X + L)$ is base point free.} \end{quote} Take any point $q \in \mathrm{Bs}(K_X + L)$. 
It is enough to show that $3(K_X+L)$ is base point free at $q$. By assumptions, $K_X+L$ defines a finite map outside of its zero dimensional base locus, and so there exist divisors $D_1, D_2, D_3 \in |K_X+L|$ without common components such that the multiplier ideal sheaf $\mathcal{J}(X,\Delta)$ for $\Delta = \frac{2}{3}(D_1 + D_2+ D_3)$ satisfies \begin{align*} \dim\, &\mathcal{J}(X,\Delta) = 0, \text{ and }\\ q \in \ &\mathcal{J}(X,\Delta) . \end{align*} Note that $\Delta \sim_{\mathbb{Q}} 2(K_X +L)$. Let $W$ be a zero-dimensional subscheme defined by $\mathcal{J}(X,\Delta)$. We have the following exact sequence \[ 0 \to \mathcal{O}_X(\hspace{0.5pt}3(K_X+L)) \otimes \mathcal{J}(X,\Delta) \to \mathcal{O}_X(\hspace{0.5pt}3(K_X+L)) \to \mathcal{O}_W(\hspace{0.5pt}3(K_X + L)) \to 0. \] Since $3(K_X+L) \sim_{\mathbb{Q}} K_X + \Delta + L$, by Nadel vanishing theorem (\cite[Theorem 9.4.17]{Laz2}) \[ H^1(X, \mathcal{O}_X(\hspace{0.5pt}3(K_X + L)) \otimes \mathcal{J}(X,\Delta)) = 0, \] and so \[ H^0(X, \mathcal{O}_X(\hspace{0.5pt}3(K_X+ L))) \longrightarrow H^0(W, \mathcal{O}_W(\hspace{0.5pt}3(K_X + L))) \] is surjective. Since $\dim W = 0$, we get that $3(K_X + L)$ is base point free along $W$, and so it is base point free at $q$. \end{remark} \begin{proof}[Proof of Proposition \ref{proposition:part_two}] Take an arbitrary closed point $q \in X$. We need to show that $q \not \in \mathrm{Bs}(2(K_X +L)+N)$. By taking $M = K_X + L + N$ in the assumption, we get that $\mathrm{Bs}(2(K_X + L) + N) \subseteq \Sing(X)$. Hence, we can assume $q \in \Sing(X)$. By assumptions, $K_X+L$ defines a finite map outside of its zero dimensional base locus, so there exist divisors $D_1, D_2 \in |K_X+L|$ such that $\dim (D_1 \cap D_2) = 0$ and $q \in D_1\cap D_2$. Let $W$ be the scheme defined by the interesection of $D_1$ and $D_2$. By definition, $I_W = I_{D_1} + I_{D_2}$. By Theorem \ref{theorem:fujita_vanishing}, we can choose $e>0$ such that \[ H^1\Big(X, \mathcal{O}_X\Big(\frac{p^e-1}{2}L +M\Big) \otimes I_W\Big) = 0 \] for any nef Cartier divisor $M$. By Lemma \ref{lemma:index_divisible_by_p}, we know that there exists an effective $\mathbb{Q}$-divisor $B$ such that \[ (p^e-1)(K_X + B) \] is Cartier, and \[ \mathrm{Tr}_{X,B} \colon F^e_*\mathcal{O}_X(-(p^e-1)(K_X+B)) \longrightarrow \mathcal{O}_X \] is surjective at $q$, for enough divisible $e \gg 0$. If the Gorenstein index of $X$ is not divisible by $p$, then we can take $B=0$. Further, we may assume that $\frac{1}{2}L - B$ is ample. Now, take maximal $\lambda_1, \lambda_2 \in \mathbb{Z}_{\geq 0}$ such that \[ \mathrm{Tr}_{X,\Delta} \colon F^e_* \mathcal{L} \to \mathcal{O}_X \] is surjective at the stalk $\mathcal{O}_{X,q}$, where \begin{align*} \mathcal{L} &\defeq \mathcal{O}_X(-(p^e-1)(K_X + B) - \lambda_1D_1 - \lambda_2 D_2)\text{, and} \\ \Delta &\defeq B+\frac{\lambda_1}{p^e-1}D_1 + \frac{\lambda_2}{p^e-1} D_2. \end{align*} The pair $(X,\Delta)$ is $F$-pure by Proposition {\ref{proposition:trace_and_fsplit}}. We want to show the existence of the following diagram: \begin{center}\begin{tikzcd} F^e_* \mathcal{L} \arrow{r} \arrow{d}{\Tr_{X,\Delta}} & F^e_* \big(\mathcal{L}|_{W}\big) \arrow{d} \\ \mathcal{O}_X \arrow{r} & \mathcal{O}_{X,q}/m_q \end{tikzcd}\end{center} To show that such a diagram exists we need to prove that the image of $F^e_* (\mathcal{L} \otimes I_W)$ under $\mathrm{Tr}_{X,\Delta}$ is contained in $m_q$. This follows from the fact that $I_W = \mathcal{O}(-D_1) + \mathcal{O}(-D_2)$ and from the maximality of $\lambda_1, \lambda_2$. 
More precisely the image of \[ F^e_* \mathcal{O}_X(-(p^e-1)(K_X + B) - (\lambda_1+1)D_1 - \lambda_2 D_2) \] must be contained in $m_q$, and analogously for $\lambda_2$ replaced by $\lambda_2+1$. So, we tensor this diagram by the line bundle $\mathcal{O}_X(K_X + \Delta + H)$, where \[ H \defeq 2(K_X + L) - (K_X + \Delta) + N, \] and take $H^0$ to obtain the diagram \begin{center}\begin{tikzcd} H^0\big(X, F^e_*\mathcal{O}_X(K_X + \Delta + p^eH)\big) \arrow{d}{} \arrow{r} & H^0\big(W, \mathcal{O}_W\big) \arrow{d} \\ H^0\big(X,\mathcal{O}_X(K_X + \Delta + H)\big) \arrow{r} & H^0\big(q,\mathcal{O}_{X,q}/m_q \big), \end{tikzcd} \end{center} Note that $K_X + \Delta + H = 2(K_X + L) + N$. Further, by Theorem \ref{theorem:fsplit_and_lc}, $(X, \Delta)$ is log canonical at $q \in X$, and so by Lemma \ref{lemma:log_canonical_coefficients} we get \[ \frac{\lambda_1}{p^e-1} + \frac{\lambda_2}{p^e-1} \leq 1. \] Therefore, $H$ is ample and \[K_X + \Delta +p^eH \sim \frac{p^e{-}1}{2}L + \underbrace{(p^e{-}1)\Big(\frac{1}{2}L - B\Big) {+} (p^e{+}1 {-} \lambda_1 {-} \lambda_2)(K_X + L) {+} p^eN}_{\mathrm{nef}}.\] The right vertical arrow is surjective, since $\mathrm{Tr}_{X,\Delta} \colon F^e_* \mathcal{L} \to \mathcal{O}_X$ is surjective, and $\dim W = 0$. The upper horizontal arrow is surjective as \[H^1\Big(X, \mathcal{O}_X\Big(\frac{p^e-1}{2}L + M\Big) \otimes I_W\Big)=0\] for any nef Cartier divisor $M$. Thus, the lower horizontal arrow is surjective, and so the proof of the base point freeness is completed. \end{proof} The following lemma was used in the proof. \begin{lemma} \label{lemma:log_canonical_coefficients} Let $(X,B + a_1D_1 +a_2D_2)$ be a log canonical two dimensional pair, such that $B$ is an effective $\mathbb{Q}$-divisor, $a_1,a_2 \in \mathbb{R}_{\geq 0}$, and $D_1$ together with $D_2$ are Cartier divisors intersecting at a singular point $x \in X$. Then $a_1 + a_2 \leq 1$. \end{lemma} Of course, the lemma is not true, when $x$ is a smooth point. \begin{proof} Consider a minimal resolution of singularities $\pi \colon \widetilde{X} \to X$. Write \[ K_{\widetilde{X}} + \Delta_{\widetilde{X}} + \pi^*(B + a_1D_1 + a_2D_2) = \pi^*(K_X + B + a_1D_1 + a_2D_2). \] Since $\pi$ is a minimal resolution, we have that $\Delta_{\widetilde{X}} \geq 0$. Take an exceptional curve $C$ over $x$. Since $D_1$ and $D_2$ are Cartier, the coefficient of $C$ in $\Delta_{\widetilde{X}} + \pi^*(B + a_1D_1 + a_2D_2)$ is greater or equal $a_1 + a_2$. Since $(\widetilde{X},\Delta_{\widetilde{X}} + \pi^*(B + a_1D_1 + a_2D_2))$ is log canonical, this concludes the proof of the lemma. \end{proof} \subsection{Very ampleness} The goal of this subsection is to show the following proposition and finish off the proof of Theorem \ref{theorem:main-mini}. \begin{proposition}[{cf.\ \cite[Corollary 4.5]{SchwedeAdjoint}, \cite[Theorem 1.1]{Keeler}}] \label{proposition:very_ampleness} Let $X$ be an $F$-pure projective variety of dimension $n$. Let $D$ be an ample $\mathbb{Q}$-Cartier divisor such that $K_X+D$ is Cartier, and let $L$ be an ample globally generated Cartier divisor. Then $K_X + (n+1)L + D$ is very ample. \end{proposition} The proof follows very closely the strategy described in \cite{Keeler}. Theorems of a similar flavour have been obtained by Schwede (\cite{SchwedeAdjoint}), Smith (\cite{Smith1} and \cite{Smith2}) and Hara (\cite{Hara}). First, we need a slight generalization of \cite[Example 1.8.22]{Laz1}. 
\begin{lemma}[{c.f.\ \cite[Examples 1.8.18 and 1.8.22]{Laz1}}] \label{lemma:very_ampleness} Let $X$ be a normal projective variety of dimension $n$. Consider a coherent sheaf $\mathcal{F}$ and a point $x \in X$. Let $B$ be a globally generated ample line bundle. If \[ H^{i+k-1}\big(X, \mathcal{F} \otimes B^{-(i+k)}\big) = 0, \] for $1 \leq i \leq n$ and $1 \leq k \leq n$, then $\mathcal{F} \otimes m_x$ is globally generated. \end{lemma} \begin{proof} Set $\mathcal{F}(-i) \defeq F \otimes B^{-i}$. Our goal is to prove that \[H^i(X, \mathcal{F}(-i) \otimes m_x) = 0\] for all $i>0$. Then, Theorem \ref{theorem:fujita} would imply the global generatedness of $F \otimes m_x$. Since $B$ is ample and globally generated, it defines a finite map and so there exist sections $s_1, s_2, \ldots, s_n \in H^0(X,B)$ intersecting in a zero dimensional scheme $W$ containing $x$. By the same argument as in \cite[Example 1.8.22]{Laz1}, using \cite[Proposition B.1.2(ii)]{Laz1} we get that \[ H^i(X, \mathcal{F}(-i) \otimes I_W) = 0, \text{ for } i>0. \] To conclude the proof of the lemma, we consider the following short exact sequence \[ 0 \longrightarrow I_W \longrightarrow m_x \longrightarrow m_x/ I_W \longrightarrow 0, \] and tensor it by $\mathcal{F}(-i)$, to get a short exact sequence \[ 0 \longrightarrow \mathcal{G} \longrightarrow \mathcal{F}(-i) \otimes I_W \longrightarrow \mathcal{F}(-i) \otimes m_x \longrightarrow \mathcal{F}(-i) \otimes \big(m_x/ I_W \big) \longrightarrow 0, \] where the term \[ \mathcal{G} \defeq \ker \big(\mathcal{F}(-i) \otimes I_W \longrightarrow \mathcal{F}(-i) \otimes m_x \big) \] comes from the fact that $\mathcal{F}$ may not be flat. Since $m_x/ I_W$ is flat off $W$, we have that \[ \dim \Supp \big( \mathrm{Tor}^1 \big(\mathcal{F} (-i), m_x/ I_W\big)\big) = 0,\] and so $H^i(X, \mathcal{G}) = 0$ for $i>0$. A simple diagram chasing shows that $H^i(X, \mathcal{F}(-i) \otimes m_x )=0$ for $i>0$, and so we are done. \end{proof} \begin{proof}[Proof of Proposition \ref{proposition:very_ampleness}] Choose a point $q \in X$. To prove the theorem, it is enough to show that $\mathcal{O}_X\big(K_X + (n+1)L + D\big) \otimes m_x$ is globally generated at $q$ for all $x \in X$. Set $\mathcal{L} \defeq \mathcal{O}_X(L)$. Since $X$ is F-pure, Proposition \ref{proposition:trace_and_fsplit} and Lemma \ref{lemma:index_divisible_by_p} imply that there exists an effective $\mathbb{Q}$-divisor $B$ such that \[ (p^e-1)(K_X + B) \] is Cartier for divisible enough $e>0$ and $\mathrm{Tr}^e_{X,B}$ is surjective at $q$. If the Gorenstein index of $X$ is not divisible by $p$, then we can take $B=0$. Further, we may assume that $D-B$ is ample. Tensoring $\mathrm{Tr}^e_{X,B}((n+1)L + D-B)$ by $m_x$, we obtain a morphism \[ F^e_* \mathcal{O}_X\big(K_X + B + p^e\big((n+1)L + D-B\big)\big) \otimes m_x \longrightarrow \mathcal{O}_X\big(K_X + (n+1)L + D\big) \otimes m_x, \] which is surjective at $q$, and so it is enough to show that $F^e_* \mathcal{O}_X\big(K_X + B + p^e\big((n+1)L + D-B\big)\big) \otimes m_x$ is globally generated for divisible enough $e>0$. However, this follows from Lemma \ref{lemma:very_ampleness} as \begin{align*} H^{i+k-1}\big(X&, F^e_* \mathcal{O}_X\big(K_X + B + p^e\big((n+1)L + D-B\big)\big) \otimes \mathcal{L}^{-(i+k)}\big) \\ &= H^{i+k-1}\big(X, \mathcal{O}_X\big(K_X + B + p^e\big((n+1 - i-k)L + D-B\big)\big)\big) = 0 \end{align*} for $e \gg 0$ and $1 \leq i+k -1 \leq n$, by Serre vanishing. \end{proof} Now, the proof of the main theorem is straightforward. 
\begin{proof}[Proof of Theorem \ref{theorem:main-mini}] It follows directly from Theorem \ref{theorem:2dimp>5}, Proposition \ref{proposition:part_one}, Proposition \ref{proposition:part_two} and Proposition \ref{proposition:very_ampleness}. \end{proof} \section{Generalizations of the main theorem} In this section we present a technical generalisation of Theorem \ref{theorem:main-mini}. \begin{theorem} \label{theorem:main} Let $X$ be an $F$-pure $\mathbb{Q}$-factorial projective surface defined over an algebraically closed field of characteristic $p>0$. Assume that $mK_X$ is Cartier for some $m \in \mathbb{N}$. Let $L$ be an ample Cartier divisor on $X$ and let $N$ be any nef Cartier divisor. The following holds. \begin{itemize} \item If $X$ is neither of general type nor quasi-elliptic with $\kappa(X)=1$, then \begin{itemize} \item[] $2mK_X+8mL+N$ is base point free, and \item[] $7mK_X+27mL+N$ is very ample. \end{itemize} \item If $p=3$ and $X$ is quasi-elliptic with $\kappa(X)=1$, then \begin{itemize} \item[] $2mK_X+12mL+N$ is base point free, and \item[] $7mK_X+39mL+N$ is very ample. \end{itemize} \item If $p=2$ and $X$ is quasi-elliptic with $\kappa(X)=1$, then \begin{itemize} \item[] $2mK_X+16mL+N$ is base point free, and \item[] $7mK_X+51mL+N$ is very ample. \end{itemize} \item If $p \geq 3$ and $X$ is of general type, then \begin{itemize} \item[] $4mK_X+14mL+N$ is base point free, and \item[] $13mK_X+45mL+N$ is very ample. \end{itemize} \item If $p =2 $ and $X$ is of general type, then \begin{itemize} \item[] $4mK_X+22mL+N$ is base point free, and \item[] $13mK_X+69mL+N$ is very ample. \end{itemize} \end{itemize} \end{theorem} The bounds are rough. The theorem is a direct consequence of the following proposition. \begin{proposition} \label{proposition:part_one_upgraded} Let $X$ be a normal projective surface defined over an algebraically closed field of characteristic $p>0$. Assume that $mK_X$ is Cartier for some $m\in \mathbb{N}$. Let $A$ be an ample Cartier divisor on $X$. Then \[ \mathrm{Bs}(m(aK_{X} + bA)+N) \subseteq \Sing(X), \] for any nef Cartier divisor $N$, where \begin{itemize} \item $a=1, b=4$, if $X$ is neither of general type nor quasi-elliptic with $\kappa(X)=1$, \item $a=1, b=6$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=3$, \item $a=1, b=8$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=2$, \item $a=2, b=7$, if $X$ is of general type and $p\geq 3$, \item $a=2, b=11$, if $X$ is of general type and $p =2$. \end{itemize} \end{proposition} \begin{proof} This follows from Proposition \ref{proposition:andrea_general}, by exactly the same proof as of Proposition \ref{proposition:part_one}. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:main}] This follows directly from Theorem \ref{theorem:2dimp>5}, Proposition \ref{proposition:part_one_upgraded}, Proposition \ref{proposition:part_two} and Proposition \ref{proposition:very_ampleness}. \end{proof} \section{Matsusaka-type bounds} The goal of this section is to prove Corollary \ref{corollary:Matsusaka-mini}. The key part of the proof is the following proposition. \begin{proposition} Let $A$ be an ample Cartier divisor and let $N$ be a nef Cartier divisor on a normal projective surface $X$. Then $kA-N$ is nef for any \[ k \geq \frac{2A \cdot N}{A^2}((K_X + 3A) \cdot A +1) + 1. \] \end{proposition} \begin{proof} The proof is exactly the same as \cite[Theorem 3.3]{DiCerboFanelli}. 
The only difference is that for singular surfaces, the cone theorem is weaker, so we have $K_X+3D$ in the statement, instead of $K_X+2D$. \end{proof} The following proof is exactly the same as of \cite[Theorem 1.2]{DiCerboFanelli}. \begin{proof}[Proof of Proposition \ref{corollary:Matsusaka-mini}] By Theorem \ref{theorem:main-mini}, we know that $H$ is very ample. By the above proposition, we know that $kA - (H+N)$ is a nef Cartier divisor. Thus, by the proof of Theorem \ref{theorem:main-mini} \[ H + \underbrace{(kA - (H+N))}_{\text{nef}} = kA - N \] is very ample. \end{proof} Applying Theorem \ref{theorem:main}, we obtain the following. \begin{corollary} \label{corollary:Matsusaka}Let $A$ and $N$ be respectively an ample and a nef Cartier divisor on an $F$-pure $\mathbb{Q}$-factorial projective surface defined over an algebraically closed field of characteristic $p>0$. Let $m\in \mathbb{N}$ be such that $mK_X$ is Cartier. Then $kA - N$ is very ample for any \[ k > \frac{2A \cdot (H+N)}{A^2}((K_X +3A) \cdot A + 1), \] where \begin{itemize} \item $H \defeq 7mK_X + 27mA$, if $X$ is neither quasi-elliptic with $\kappa(X)=1$, nor of general type, \item $H \defeq 7mK_X+39mA$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=3$, \item $H \defeq 7mK_X+51mA$, if $X$ is quasi-elliptic with $\kappa(X)=1$ and $p=2$, \item $H \defeq 13mK_X+45mA$, if $X$ is of general type and $p \geq 3$, \item $H \defeq 13mK_X+69mA$, if $X$ is of general type and $p = 2$. \end{itemize} \end{corollary} \section{Bounds on log del Pezzo pairs} The goal of this section is to prove Corollary \ref{corollary:boundedness_of_log_del_Pezzo}. We need the following facts. \begin{proposition} \label{proposition:log_del_pezzo_first_bounds} Let $(X,\Delta)$ be an $\epsilon$-klt log del Pezzo pair for $0< \epsilon < 3^{-1/2}$. Let $\pi \colon \overline{X} \to X$ be the minimal resolution. Then \begin{itemize} \item[(a)] $0 \leq (K_X + \Delta)^2 \leq \max\Big(9, \lfloor 2/\epsilon \rfloor + 4 + \frac{4}{\lfloor 2/\epsilon\rfloor}\Big)$, \item[(b)] $\mathrm{rk} \Pic(\overline{X}) \leq 128(1/\epsilon)^5$, \item[(c)] $2 \leq -E^2 \leq 2 / \epsilon$ for any exceptional curve $E$ of $\pi \colon \overline{X} \to X$, \item[(d)] if $m$ is the $\mathbb{Q}$-factorial index at some point $x \in X$, then \[ m \leq 2(2/\epsilon)^{128/\epsilon^5}. \] \end{itemize} \end{proposition} \begin{proof} Point (a) follows from \cite[Theorem 1.3]{Jiang}. Points (b) and (c) follow from \cite[Corollary 1.10]{MoriAlexeev} and \cite[Lemma 1.2]{MoriAlexeev}, respectively. Last, (d) follows from the fact that the Cartier index of a divisor divides the determinant of the intersection matrix of the minimal resolution of a singularity (see also the paragraph below \cite[Theorem A]{Lai}). \end{proof} Further, we need to prove the following: \begin{lemma} \label{lemma:log_del_pezzo_second_bounds} Let $(X,\Delta)$ be a klt log del Pezzo pair such that $m(K_X+\Delta)$ is Cartier for some natural number $m \geq 2$. Then \begin{enumerate} \item $0 \leq (K_X + \Delta) \cdot K_X \leq 3m\max\big(9, 2m + 4 + \frac{2}{m}\big)$, and \item $|K_X^2| \leq 128m^5(2m-1)$. \end{enumerate} \end{lemma} Recall that if a log del Pezzo pair $(X,\Delta)$, with Cartier index $m$, is klt, then it must be $1/m$-klt. \begin{proof} The non-negativity in $(1)$ is clear, since \[ \big(K_X + \Delta\big) \cdot K_X = \big(K_X+\Delta\big)^2 - \big(K_X+\Delta\big)\cdot \Delta \geq 0. 
\] Further, by cone theorem, $K_{X} - 3m\big(K_{X} + \Delta\big)$ is nef, and thus \[ (K_{X} + \Delta) \cdot \big(K_X - 3m(K_{X} + \Delta)) \leq 0. \] This, together with (a) in Proposition \ref{proposition:log_del_pezzo_first_bounds}, implies $(1)$. To prove $(2)$, we proceed as follows. Let $\pi \colon \overline{X} \to X$ be the minimal resolution of singularities of $X$. By (b) in Proposition \ref{proposition:log_del_pezzo_first_bounds}, we have $\mathrm{rk} \Pic(\overline{X}) \leq 128m^5$, and so $-9 \leq -K_{\overline{X}}^2 \leq 128m^5$. Indeed, the self intesection of the canonical bundle on a minimal model of a rational surface is $8$ or $9$, and each blow-up decreases it by one. Write \[ K_{\overline{X}} + \sum a_i E_i = \pi^*K_X, \] where $E_i$ are the exceptional divisors of $\pi$. Notice, that since $\overline{X} \to X$ is minimal and $X$ is klt, we have $0 \leq a_i < 1$. By applying (b) and (c) from Proposition \ref{proposition:log_del_pezzo_first_bounds}, we obtain \begin{align*} |K_X^2| &= \big|\big(K_{\overline{X}} + \sum a_i E_i\big) \cdot K_{\overline{X}} \big|\\ &\leq \big|K_{\overline{X}}\big|^2 + 128m^5(2m-2)\\ &\leq 128m^5(2m-1). \end{align*} \end{proof} \begin{proof}[Proof of Corollary \ref{corollary:boundedness_of_log_del_Pezzo}] By Proposition \ref{proposition:log_del_pezzo_first_bounds}, the $\mathbb{Q}$-factorial index of $X$ is bounded with respect to $\epsilon$. Indeed, the $\mathbb{Q}$-factorial index is bounded at each point by (d), and the number of singular points is bounded by (b). Hence, we can assume that there exists $m\in \mathbb{N}$ bounded with respect to $\epsilon$ and $I$, such that $mK_X$, $m\Delta$, and $m(K_X+\Delta)$ are Cartier. Set $a = 13m$ and $b=45m^2$. By Theorem \ref{theorem:main-mini} and Remark \ref{remark:effective_vanishing}, the divisor $H \defeq aK_X - b(K_X + \Delta)$ is very ample, and $H^i(X, H)=0$ for $i>0$. Proposition \ref{proposition:log_del_pezzo_first_bounds}(a) and Lemma \ref{lemma:log_del_pezzo_second_bounds} imply that $H^2$, $|H \cdot K_X|$, $H \cdot \Delta$, $|K_X \cdot \Delta|$, and $|\Delta^2|$ are bounded with respect to $m$. The ample divisor $H$ embeds $X$ into a projective space of dimension $\chi(\mathcal{O}_X(H)) = H^0(X,\mathcal{O}_X(H))$, which is bounded with respect to $m$ by the Riemann-Roch theorem. \end{proof} If $mK_X$ and $m(K_X+\Delta)$ are Cartier for $m>1$, then one can easily calculate, that is enough to take \[ b(\epsilon, I) = \Big(a^2+b^2\Big)\Big(128m^5(2m-1) + \max\Big(9,2m + 4 + \frac{2}{m}\Big)\Big), \] where $a=13m$ and $b=45m^2$. \begin{remark}Corollary \ref{corollary:boundedness_of_log_del_Pezzo} and the Riemann-Roch theorem for surfaces with rational singularities imply that the absolute values of the coefficients of the Hilbert polynomial of $X$ with respect to $H$ are bounded with respect to $\epsilon$ and $I$. Further, let $n \in \mathbb{N}$ be such that $n\Delta$ is Cartier. Then the absolute values of the coefficients of the Hilbert polynomial of $n\Delta$ with respect to $H|_{n\Delta}$ are bounded with respect to $\epsilon$, $I$, and $n$. Indeed, \[ \chi(\mathcal{O}_{n\Delta}(mH)) = mn\Delta \cdot H - \frac{1}{2}n\Delta \cdot (n\Delta + K_X) \] for $m \in \mathbb{Z}$, by the Riemann-Roch theorem and the adjunction formula. \end{remark} \begin{remark} One of the reasons for our interest in the above corollary is that it provides bounds on $\epsilon$-klt log del Pezzo surfaces which are independent of the characteristic. 
In particular, it shows the existence of a bounded family of $\epsilon$-klt log del Pezzo surfaces over $\Spec \mathbb{Z}$ (see \cite[Lemma 3.1]{CTW15b}). We were not able to verify whether Koll\'{a}r's bounds depend on the characteristic or not. We believe that stating explicit bounds might ease the life of future researchers, wanting to handle questions related to the behavior of log del Pezzo surfaces in mix characteristic or for big enough characteristic. \end{remark} \subsection{Effective vanishing of $H^1$} The goal of this subsection is to give a proof of the following proposition. \begin{proposition} \label{proposition:effective_vanishing} Let $X$ be a normal projective surface. Then \[ H^i(X, \mathcal{O}_X(D)) = 0 \text{ for } i>0, \] where $D = 3K_X + 14A + N$ is Cartier for an ample Cartier divisor $A$ and a nef $\mathbb{Q}$-Cartier divisor $N$. \end{proposition} It was pointed to us by the anonymous referee that the proposition follows from a result of Koll{\'a}r. We present this approach below. First, let us recall the aforementioned result. \begin{theorem}[{\cite[Theorem II.6.2 and Remark II.6.7.2]{kollarcurves}}] \label{theorem:kollar_surfaces} Let $X$ be a normal, projective variety defined over a field of characteristic $p$. Let $L$ be an ample $\mathbb{Q}$-Cartier Weil divisor on $X$ satisfying $H^1(X, \mathcal{O}_X(-L)) \neq 0$. Assume that $X$ is covered by a family of curves $\{D_t\}$ such that $X$ is smooth along a general $D_t$ and \[ ((p-1)L - K_X) \cdot D_t > 0. \] Then, through every point $x \in X$ there is a rational curve $C \subseteq X$ such that \[ L \cdot C \leq 2\dim X \frac{L \cdot D_t}{((p-1)L - K_X)\cdot D_t}. \] \end{theorem} \begin{proof}[{Proof of Proposition \ref{proposition:effective_vanishing}}] Set $L = 2K_X + 14A+N$. Since $D$ is Cartier and $\omega_X$ is reflexive, we get that $\omega_X \otimes \mathcal{O}_X(-D) = \mathcal{O}_X(-L)$. Thus, by Serre duality, we need to show $H^i(X,\mathcal{O}_X(-L)) = 0$ for $i<2$. However, by cone theorem, we have that $K_X+3A$ is nef, and so $L$ is ample, giving in particular that $H^0(X, \mathcal{O}_X(-L)) = 0$. Hence, we are left to show $H^1(X,\mathcal{O}_X(-L))=0$. To this end, we suppose by contradiction that $H^1(X,\mathcal{O}_X(-L))\neq 0$ and apply Theorem \ref{theorem:kollar_surfaces} for a general pencil of curves $\{D_t\}$ in some very ample linear system. Since $(p-1)L-K_X$ is ample, the assumptions of the theorem are satisfied. In particular, we get a curve $C$ such that \[ L \cdot C \leq 4 \frac{L \cdot D_t}{((p-1)L-K_X) \cdot D_t}, \] and, as $L$ is ample, this in turn gives \[ L \cdot C \leq 4 \frac{L \cdot D_t}{(L - K_X) \cdot D_t}. \] Since $K_X + 3A$ is nef, we have $L\cdot C \geq 8$. Further, $K_X \cdot D_t < (L-K_X)\cdot D_t$, as $L-K_X = K_X + 14A+N$. Therefore, \[ 8 \leq L \cdot C < 4\Bigg(\frac{2(L-K_X)\cdot D_t}{(L-K_X)\cdot D_t}\Bigg) = 8, \] which is a contradiction. \end{proof} \begin{remark} \label{remark:effective_vanishing} Proposition \ref{proposition:effective_vanishing} shows, under assumptions and notation of Theorem \ref{theorem:main-mini}, that the very ample divisor $H \coloneq 13mK_X + 45mA$ satisfies $H^i(X,H) = 0$ for $i>0$. Similar statements hold for very ample divisors considered in Theorem \ref{theorem:main}. 
\end{remark} \begin{comment} \bib{Kollar85}{article}{ author={Koll{\'a}r, J{\'a}nos}, title={Toward moduli of singular varieties}, journal={Compositio Math.}, volume={56}, date={1985}, number={3}, pages={369--398}, issn={0010-437X}, review={\MR{814554 (87e:14009)}}, } \bib{Alexeev}{article}{ author={Alexeev, Valery}, title={Boundedness and $K^2$ for log surfaces}, journal={Internat. J. Math.}, volume={5}, date={1994}, number={6}, pages={779--810}, issn={0129-167X}, review={\MR{1298994 (95k:14048)}}, doi={10.1142/S0129167X94000395}, } \bib{TanakaTrace}{article}{ author={Tanaka, Hiromu}, title={The trace map of Frobenius and extending sections for threefols}, eprint={arXiv:1302.3134v1} } \bib{CasciniTanakaXu}{article}{ author={Cascini, Paolo}, author={Tanaka, Hiromu}, author={Xu, Chenyang}, title={On base point freeness in positive characteristic}, eprint={arXiv:1305.3502v2} } \bib{SchwedeUtah}{article}{ author={Schwede, Karl}, title={F-singularities and Frobenius splitting notes}, eprint={http://www.math.utah.edu/~schwede/frob/RunningTotal.pdf} } \bib{SchwedeTucker}{article}{ author={Schwede, Karl}, author={Tucker, Kevin}, title={A survey of test ideals}, conference={ title={Progress in commutative algebra 2}, }, book={ publisher={Walter de Gruyter, Berlin}, }, date={2012}, pages={39--99}, review={\MR{2932591}}, } \bib{Lai}{article}{ author={Lai, Ching-Jui}, title={Bounding the volumes of singular Fano threefolds}, journal = {ArXiv e-prints}, eprint = {http://arxiv.org/abs/1204.2593v1}, year = {2012}, } \bib{DiCerboFanelli}{article}{ author={Di Cerbo, Gabriele}, author={Fanelli, Andrea}, title={Effective Matsusaka's Theorem for surfaces in characteristic p}, journal = {ArXiv e-prints}, eprint = {arXiv:1501.07299v2}, year = {2015}, } \bib{Matsusaka}{article}{ author={Matsusaka, T.}, title={Polarized varieties with a given Hilbert polynomial}, journal={Amer. J. Math.}, volume={94}, date={1972}, pages={1027--1077}, issn={0002-9327}, review={\MR{0337960 (49 \#2729)}}, } \bib{Shepherd-Barron}{article}{ author={Shepherd-Barron, N. I.}, title={Unstable vector bundles and linear systems on surfaces in characteristic $p$}, journal={Invent. Math.}, volume={106}, date={1991}, number={2}, pages={243--262}, issn={0020-9910}, review={\MR{1128214 (92h:14027)}}, doi={10.1007/BF01243912}, } \bib{Reider}{article}{ author={Reider, Igor}, title={Vector bundles of rank $2$ and linear systems on algebraic surfaces}, journal={Ann. of Math. (2)}, volume={127}, date={1988}, number={2}, pages={309--316}, issn={0003-486X}, review={\MR{932299 (89e:14038)}}, doi={10.2307/2007055}, } \bib{Terakawa}{article}{ author={Terakawa, Hiroyuki}, title={The $d$-very ampleness on a projective surface in positive characteristic}, journal={Pacific J. Math.}, volume={187}, date={1999}, number={1}, pages={187--199}, issn={0030-8730}, review={\MR{1674325 (99m:14014)}}, doi={10.2140/pjm.1999.187.187}, } \bib{Keeler}{article}{ author={Keeler, Dennis S.}, title={Fujita's conjecture and Frobenius amplitude}, journal={Amer. J. Math.}, volume={130}, date={2008}, number={5}, pages={1327--1336}, issn={0002-9327}, review={\MR{2450210 (2009i:14006)}}, doi={10.1353/ajm.0.0015}, } \bib{Smith1}{article}{ author={Smith, Karen E.}, title={Fujita's freeness conjecture in terms of local cohomology}, journal={J. 
Algebraic Geom.}, volume={6}, date={1997}, number={3}, pages={417--429}, issn={1056-3911}, review={\MR{1487221 (98m:14002)}}, } \bib{Smith2}{article}{ author={Smith, Karen E.}, title={A tight closure proof of Fujita's freeness conjecture for very ample line bundles}, journal={Math. Ann.}, volume={317}, date={2000}, number={2}, pages={285--293}, issn={0025-5831}, review={\MR{1764238 (2001e:14006)}}, doi={10.1007/s002080000094}, } \bib{SchwedeAdjoint}{article}{ author={Schwede, Karl}, title={A canonical linear system associated to adjoint divisors in characteristic $p>0$}, journal={J. Reine Angew. Math.}, volume={696}, date={2014}, pages={69--87}, issn={0075-4102}, review={\MR{3276163}}, doi={10.1515/crelle-2012-0087}, } \bib{Hara}{article}{ author={Hara, Nobuo}, title={A characteristic $p$ analog of multiplier ideals and applications}, journal={Comm. Algebra}, volume={33}, date={2005}, number={10}, pages={3375--3388}, issn={0092-7872}, review={\MR{2175438 (2006f:13006)}}, doi={10.1080/AGB-200060022}, } \bib{LangerAdjoint}{article}{ author={Langer, Adrian}, title={Adjoint linear systems on normal log surfaces}, journal={Compositio Math.}, volume={129}, date={2001}, number={1}, pages={47--66}, issn={0010-437X}, review={\MR{1856022 (2002h:14008)}}, doi={10.1023/A:1013137101524}, } \bib{Laz1}{book}{ author={Lazarsfeld, Robert}, title={Positivity in algebraic geometry. I}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]}, volume={48}, note={Classical setting: line bundles and linear series}, publisher={Springer-Verlag, Berlin}, date={2004}, pages={xviii+387}, isbn={3-540-22533-1}, review={\MR{2095471 (2005k:14001a)}}, doi={10.1007/978-3-642-18808-4}, } \bib{Laz2}{book}{ author={Lazarsfeld, Robert}, title={Positivity in algebraic geometry. II}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]}, volume={49}, note={Positivity for vector bundles, and multiplier ideals}, publisher={Springer-Verlag, Berlin}, date={2004}, pages={xviii+385}, isbn={3-540-22534-X}, review={\MR{2095472 (2005k:14001b)}}, doi={10.1007/978-3-642-18808-4}, } \bib{KollarMori}{book}{ author={Koll{\'a}r, J{\'a}nos}, author={Mori, Shigefumi}, title={Birational geometry of algebraic varieties}, series={Cambridge Tracts in Mathematics}, volume={134}, note={With the collaboration of C. H. Clemens and A. Corti; Translated from the 1998 Japanese original}, publisher={Cambridge University Press, Cambridge}, date={1998}, pages={viii+254}, isbn={0-521-63277-3}, review={\MR{1658959 (2000b:14018)}}, doi={10.1017/CBO9780511662560}, } \bib{HaconXu}{article}{ author={Hacon, Christopher D.}, author={Xu, Chenyang}, title={On the three dimensional minimal model program in positive characteristic}, journal={J. Amer. Math. 
Soc.}, volume={28}, date={2015}, number={3}, pages={711--744}, issn={0894-0347}, review={\MR{3327534}}, doi={10.1090/S0894-0347-2014-00809-2}, } \bib{Sakai}{article}{ author={Sakai, Fumio}, title={Reider-Serrano's method on normal surfaces}, conference={ title={Algebraic geometry}, address={L'Aquila}, date={1988}, }, book={ series={Lecture Notes in Math.}, volume={1417}, publisher={Springer, Berlin}, }, date={1990}, pages={301--319}, review={\MR{1040564 (91d:14001)}}, doi={10.1007/BFb0083346}, } \bib{TakagiFujita}{article}{ author={Takagi, Satoshi}, title={Fujita's approximation theorem in positive characteristics}, journal={J. Math. Kyoto Univ.}, volume={47}, date={2007}, number={1}, pages={179--202}, issn={0023-608X}, review={\MR{2359108 (2008i:14014)}}, } \bib{Hara2dimSing}{article}{ author={Hara, Nobuo}, title={Classification of two-dimensional $F$-regular and $F$-pure singularities}, journal={Adv. Math.}, volume={133}, date={1998}, number={1}, pages={33--53}, issn={0001-8708}, review={\MR{1492785 (99a:14048)}}, doi={10.1006/aima.1997.1682}, } \bib{Tanaka2dim}{article}{ author={Tanaka, Hiromu}, title={Minimal models and abundance for positive characteristic log surfaces}, journal={Nagoya Math. J.}, volume={216}, date={2014}, pages={1--70}, issn={0027-7630}, review={\MR{3319838}}, doi={10.1215/00277630-2801646}, } \bib{HaraWatanabe}{article}{ author={Hara, Nobuo}, author={Watanabe, Kei-Ichi}, title={F-regular and F-pure rings vs. log terminal and log canonical singularities}, journal={J. Algebraic Geom.}, volume={11}, date={2002}, number={2}, pages={363--392}, issn={1056-3911}, review={\MR{1874118 (2002k:13009)}}, doi={10.1090/S1056-3911-01-00306-X}, } \bib{MoriAlexeev}{article}{ author={Alexeev, Valery}, author={Mori, Shigefumi}, title={Bounding singular surfaces of general type}, conference={ title={Algebra, arithmetic and geometry with applications (West Lafayette, IN, 2000)}, }, book={ publisher={Springer, Berlin}, }, date={2004}, pages={143--174}, review={\MR{2037085 (2005f:14077)}}, } \bib{TanakaTrace}{article}{ author={Tanaka, Hiromu}, title={The trace map of Frobenius and extending sections for threefolds}, journal={Michigan Math. J.}, volume={64}, date={2015}, number={2}, pages={227--261}, issn={0026-2285}, review={\MR{3359024}}, doi={10.1307/mmj/1434731922}, } \bib{Tango}{article}{ author={Tango, Hiroshi}, title={On the behavior of extensions of vector bundles under the Frobenius map}, journal={Nagoya Math. J.}, volume={48}, date={1972}, pages={73--89}, issn={0027-7630}, review={\MR{0314851 (47 \#3401)}}, } \bib{Illusie}{book}{ author={Bertin, Jos{\'e}}, author={Demailly, Jean-Pierre}, author={Illusie, Luc}, author={Peters, Chris}, title={Introduction to Hodge theory}, series={SMF/AMS Texts and Monographs}, volume={8}, note={Translated from the 1996 French original by James Lewis and Peters}, publisher={American Mathematical Society, Providence, RI; Soci\'et\'e Math\'ematique de France, Paris}, date={2002}, pages={x+232}, isbn={0-8218-2040-0}, review={\MR{1924513 (2003g:14009)}}, } \bib{Katz}{article}{ author={Katz, Nicholas M.}, title={Nilpotent connections and the monodromy theorem: Applications of a result of Turrittin}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={39}, date={1970}, pages={175--232}, issn={0073-8301}, review={\MR{0291177 (45 \#271)}}, } \bib{Hara2}{article}{ author={Hara, Nobuo}, title={A characterization of rational singularities in terms of injectivity of Frobenius maps}, journal={Amer. J. 
Math.}, volume={120}, date={1998}, number={5}, pages={981--996}, issn={0002-9327}, review={\MR{1646049 (99h:13005)}}, } \bib{SchwedeFAdjunction}{article}{ author={Schwede, Karl}, title={$F$-adjunction}, journal={Algebra Number Theory}, volume={3}, date={2009}, number={8}, pages={907--950}, issn={1937-0652}, review={\MR{2587408 (2011b:14006)}}, doi={10.2140/ant.2009.3.907}, } \bib{Jiang}{article}{ author={Jiang, Chen}, title={Bounding the volumes of singular weak log del Pezzo surfaces}, journal={Internat. J. Math.}, volume={24}, date={2013}, number={13}, pages={1350110, 27}, issn={0129-167X}, review={\MR{3158579}}, doi={10.1142/S0129167X13501103}, } \end{comment} \end{document}
\begin{document} \maketitle \begin{abstract} We consider a proper flat fibration with real base and complex fibers. First we construct odd characteristic classes for such fibrations by a method that generalizes constructions of Bismut-Lott \cite{bl}. Then we consider the direct image of a fiberwise holomorphic vector bundle, which is a flat vector bundle on the base. We give a Riemann-Roch-Grothendieck theorem calculating the odd real characteristic classes of this flat vector bundle. \varepsilonnd{abstract} \tableofcontents \section{Introduction} We consider a compact real manifold $X$ equipped with a flat vector bundle $F$. We equip $X$ with a Riemannian metric $g^{TX}$. We equip $F$ with a Hermitian metric $g^F$. The real analytic torsion \cite{rs} is a spectral invariant of the Hodge Laplacian associated with the de Rham complex $\big(\Omega^\cdot(X,F),d^F\big)$. Let $\det H^\cdot(X,F)$ be the determinant line of the de Rham cohomology $H^\cdot(X,F)$. The Ray-Singer metric is the product of the real analytic torsion and of the $L^2$-metric on $\det H^\cdot(X,F)$. Cheeger \cite{c-cm} and M{\"u}ller \cite{m-cm} proved independently that, if $g^F$ is flat, then the Ray-Singer metric is independent of $g^{TX}$. M{\"u}ller \cite{m-cm-2} also extended this result to the unimodular case, i.e., the induced metric on $\det F$ is flat. In the general case, the dependence of the Ray-Singer metric on the metrics was calculated by Bismut-Zhang \cite{bz}. They also established an extension of the Cheeger-M\"{u}ller theorem in the general case. Now let $\pi : M \rightarrow S$ be a real smooth fibration with compact fiber $X$. Let $F$ be a complex flat vector bundle over $M$. Bismut and Lott \cite{bl} established Riemann-Roch-Grothendieck formulas, which calculate the odd Chern classes of the direct image $R^\cdot\pi_{*}F$ in terms of the Euler class of the relative tangent bundle $TX$ and the corresponding odd Chern classes of $F$. When equipping the considered vector bundles with metrics, these classes can be represented by explicit differential forms. By transgressing the equality of cohomology classes at the level of differential forms, they also obtained even analytic torsion forms on $S$, whose coboundary is equal to the difference between the differential forms appearing on the left and right hand side of the R.R.G. formula. In complex geometry, the objects parallel to the real analytic torsion and the Ray-Singer metric are known as the holomorphic analytic torsion and the Quillen metric, which were also introduced by Ray-Singer \cite{rs2}. These notions were extended to holomorphic fibrations by Bismut-Gillet-Soul\'{e} \cite{bgs2} and Bismut-K{\"o}hler \cite{bk}. In this paper, we consider a flat fibration $q:\mathcal{N}\rightarrow M$ with complex fiber $N$. We equip $\mathcal{N}$ with a complex vector bundle $E$, which is holomorphic along $N$ and flat along horizontal directions in $\mathcal{N}$. Then $R^\cdot\pi_{*}E$ is a flat vector bundle over $M$. We give a R.R.G. formula for the odd Chern classes of $R^\cdot\pi_{*}E$ in terms of the Todd class of the relative tangent bundle and of the Chern classes of $E$. Moreover, by equipping the various vector bundles with Hermitian metrics, we construct even analytic torsion forms on $M$, which transgress the equality of the corresponding cohomology classes. The results contained in this paper were announced in \cite{yzhang}. Let us now give more detail on the content of this paper. 
\subsection{Chern-Weil theory and its extensions} \ Let $M$ be a smooth manifold. Let $E$ be a complex vector bundle of rank $r$ over $M$. Let $\nabla^E$ be a connection on $E$. Let $P$ be an invariant polynomial on $\mathfrak{gl}(r,\mathbb{C})$. Chern-Weil theory assigns a closed differential form \begin{equation} P(E,\nabla^E) \in \Omega^\mathrm{even}(M) \;. \varepsilonnd{equation} Its cohomology class $\big[P(E,\nabla^E)\big]\in H^\mathrm{even}(M)$ is independent of $\nablaabla^{E}$, and will be denoted by $P(E)$. This theory will be referred to as the even Chern-Weil theory. If $\nabla^E$ is a flat connection, i.e., $\nabla^{E,2}=0$, $P(E,\nabla^E)$ is just a constant function on $M$. A Chern-Weil theory for flat vector bundles was developed by Bismut-Lott \cite[\textsection 1]{bl}. Let $(E,\nabla^E)$ be a flat complex vector bundle over $M$. Let $g^E$ be a Hermitian metric on $E$. Let $f$ be an odd polynomial. Bismut and Lott assigned a closed differential form \begin{equation} f(E,\nabla^E,g^E) \in \Omega^\mathrm{odd}(M) \;. \varepsilonnd{equation} Its cohomology class $\big[f(E,\nabla^E,g^E)\big]\in H^\mathrm{odd}(M)$ is independent of $g^E$, and will be denoted by $f(E,\nabla^E)$. This theory will be referred to as the odd Chern-Weil theory. In this paper, we will construct characteristic classes for flat fibrations with complex fibers. Our construction is a mixture of the even and odd Chern-Weil theory. Now we construct our flat fibration. Let $G$ be a Lie group. Let $p : P_G\rightarrow M$ be a flat $G$-principal bundle. Let $N$ be a compact complex manifold. We assume that $G$ acts holomorphically on $N$. Set \begin{equation} \mathcal{N} = P_G \times_G N \;. \varepsilonnd{equation} Let \begin{equation} q : \mathcal{N} \rightarrow M \varepsilonnd{equation} be the canonical projection. Then $q$ defines a flat fibration with canonical fiber $N$. Let $E_0$ be a holomorphic vector bundle over $N$. We assume that the action of $G$ lifts holomorphically to $E_0$. Set \begin{equation} E = P_G \times_G E_0 \;. \varepsilonnd{equation} Then $E$ is a complex vector bundle over $\mathcal{N}$. In \textsection \ref{dilab-sec-charac}, we construct odd characteristic forms as follows. We denote \begin{align} \begin{split} \Omega^\cdot(M) \; & = {\mathscr{C}^\infty}\big(M,\Lambda^\cdot(T^*M)\big) \;,\\ \Omega^\cdot(\mathcal{N},E) \; & = {\mathscr{C}^\infty}\big(\mathcal{N},\Lambda^\cdot(T^*\mathcal{N})\otimes E\big) \;. \varepsilonnd{split} \varepsilonnd{align} Let $d_M$ be the de Rham operator on $\Omega^\cdot(M)$. Let $d^E_M$ be the lift of $d_M$ to $\Omega^\cdot(\mathcal{N},E)$. Let $g^E$ be a Hermitian metric on $E$. Set \begin{equation} \omega^E = \big(g^E\big)^{-1} d^E_M\,g^E\in \Omega^1(\mathcal{N},\mathrm{End}(E)) \;. \varepsilonnd{equation} Let $\nabla^E_N$ be the fiberwise Chern connection on $(E,g^E)$. Let $A^E$ be the unitary connection on $E$ defined by \begin{equation} A^E = \nabla^E_N + d^E_M + \frac{1}{2}\omega^E \;. \varepsilonnd{equation} Let $r$ be the rank of $E$. Let $\mathfrak{gl}(r,\mathbb{C})$ be the Lie algebra of $GL(r,\mathbb{C})$. Let $P$ be an invariant polynomial on $\mathfrak{gl}(r,\mathbb{C})$. For $a,b\in\mathfrak{gl}(r,\mathbb{C})$, we use the following notation \begin{equation} \label{dilab-eq-intro-def-Pder} \Big\langle P'(a),b \Big\rangle = {\frac{\partial}{\partial t} P(a+tb)}_{t=0} \;. 
\varepsilonnd{equation} Let $N^{\Lambda^\cdot (T^*\mathcal{N})}$ be the number operator of $\Lambda^\cdot (T^*\mathcal{N})$, i.e., for $\alpha\in\Lambda^k (T^*\mathcal{N})$, $N^{\Lambda^\cdot (T^*\mathcal{N})}\alpha = k\alpha$. Put \begin{align} \label{dilab-eq-intro-P-tildeP} \begin{split} P(E,g^E) \; & = (2\pi i)^{-\frac{1}{2}N^{\Lambda^\cdot (T^*\mathcal{N})}} P(-A^{E,2}) \in \Omega^\mathrm{even}(\mathcal{N}) \;,\\ \widetilde{P}(E,g^E) \; & = (2\pi i)^{\frac{1}{2}-\frac{1}{2}N^{\Lambda^\cdot (T^*\mathcal{N})}} \Big\langle P'(-A^{E,2}),\frac{\omega^E}{2} \Big\rangle \in \Omega^\mathrm{odd}(\mathcal{N}) \;. \varepsilonnd{split} \varepsilonnd{align} \begin{thm} \label{dilab-thm-intro-ch-class} The differential form \begin{equation} q_*\big[P(E,g^E)\big] \in \Omega^\mathrm{even}(M) \varepsilonnd{equation} is a constant function. The differential form \begin{equation} q_*\big[\widetilde{P}(E,g^E)\big] \in \Omega^\mathrm{odd}(M) \varepsilonnd{equation} is closed. Its cohomology class \begin{equation} \left[q_*\big[\widetilde{P}(E,g^E)\big]\right]\in H^\mathrm{odd}(M) \varepsilonnd{equation} is independent of $g^E$. \varepsilonnd{thm} In the sequel, we use the notation \begin{equation} q_*\big[\widetilde{P}(E)\big] = \left[q_*\big[\widetilde{P}(E,g^E)\big]\right]\in H^\mathrm{odd}(M) \;. \varepsilonnd{equation} Now let $F$ be another vector bundle (of rank $r'$) over $\mathcal{N}$ satisfying the same properties as $E$. Let $g^F$ be a Hermitian metric on $F$. Let $Q$ be an invariant polynomial on $\mathfrak{gl}(r',\mathbb{C})$. The natural product on the forms $\widetilde{P}(E,g^E)$ and $\widetilde{Q}(F,g^F)$ is given by \begin{equation} \label{dilab-eq-intro-tildePQ} \widetilde{P}(E,g^E) * \widetilde{Q}(F,g^F) = \widetilde{P}(E,g^E)Q(F,g^F) + P(E,g^E)\widetilde{Q}(F,g^F) \;. \varepsilonnd{equation} \subsection{A R.R.G. theorem for flat fibrations with complex fibers} \label{dilab-subsect-intro-rrg} \ In the rest of the introduction, we suppose that $N$ is a K{\"a}hler manifold. Let $H^\cdot(N,E)$ be the fiberwise Dolbeault cohomology group of $E$ along $N$. Then $H^\cdot(N,E)$ is a graded flat vector bundle over $M$. Let $\nabla^{H^\cdot(N,E)}$ be its flat connection. Set $f(x) = x\varepsilonxp(x^2)$. Let \begin{equation} f\big(H^\cdot(N,E),\nabla^{H^\cdot(N,E)}\big) \in H^\mathrm{odd}(M,\mathbb{R}) \varepsilonnd{equation} be the Bismut-Lott odd characteristic class \cite[\textsection 1]{bl}. \begin{thm} \label{dilab-intro-thm-riemann-roch-grothendieck} We have \begin{equation} f\big(H^\cdot(N,E), \nablaabla^{H^\cdot(N,E)}\big) \\ = q_*\big[\widetilde{\mathrm{Td}}(TN)*\widetilde{\mathrm{ch}}(E)\big] \in H^\mathrm{odd}(M,\mathbb{R}) \;. \varepsilonnd{equation} \varepsilonnd{thm} Here $\widetilde{\mathrm{Td}}(TN)*\widetilde{\mathrm{ch}}(E)$ is defined by \varepsilonqref{dilab-eq-intro-P-tildeP} and \varepsilonqref{dilab-eq-intro-tildePQ}. Now we explain the idea of the proof. We will use the superconnection formalism \cite[\textsection 2]{bl}. Put \begin{equation} \mathscr{E} = \mathscr{C}^\infty(N,\Lambda^\cdot (\overline{T^*N})\otimes E ) \;, \varepsilonnd{equation} which is an infinite dimensional flat vector bundle over $M$. Let $d^\mathscr{E}_M$ be its flat connection. Let $\overline{\partial}^E_N$ be the Dolbeault operator on $\mathscr{E}$. Set \begin{equation} A^\mathscr{E} = \overline{\partial}^E_N + d^\mathscr{E}_M \;, \varepsilonnd{equation} which acts on $\Omega^\cdot(M,\mathscr{E})$. 
Here $A^\mathscr{E}$ is a flat superconnection on $\mathscr{E}$ in the sense of Bismut-Lott \cite[Definition 1.1]{bl}. Let $g^{TN}$ be a fiberwise K{\"a}hler metric on $TN$. Let $g^E$ be a Hermitian metric on $E$. Let $g^\mathscr{E}$ be the induced $L^2$-metric on $\mathscr{E}$. Let $A^{\mathscr{E},*}$ be the adjoint superconnection of $A^\mathscr{E}$ in the sense of Bismut-Lott \cite[Definition 1.6]{bl}. Let $N^{\Lambda^\cdot(T^*M)}$ be the number operator of $\Lambda^\cdot(T^*M)$. Set \begin{equation} D^\mathscr{E} = 2^{-N^{\Lambda^\cdot(T^*M)}} \big(A^{\mathscr{E},*} - A^\mathscr{E}\big) 2^{N^{\Lambda^\cdot(T^*M)}} \in\Omega^\cdot(M,\mathrm{End}(\mathscr{E})) \;. \varepsilonnd{equation} For $t>0$, let $D^\mathscr{E}_t$ be the operator $D^\mathscr{E}$ associated with the rescaled metric $\frac{1}{t}g^{TN}$. Following Bismut-Lott \cite[(2.22),(2,23)]{bl}, we define \begin{align} \label{dilab-eq-intro-def-alpha-beta-t} \begin{split} \alpha_t & = (2\pi i)^{\frac{1}{2}-\frac{1}{2}N^{\Lambda^\cdot(T^*M)}} \tr_\mathrm{s}\Big[D^\mathscr{E}_t\varepsilonxp(D^{\mathscr{E},2}_t)\Big] \;,\\ \beta_t & = (2\pi i)^{-\frac{1}{2}N^{\Lambda^\cdot(T^*M)}} \tr_\mathrm{s}\Big[ \frac{N^{\Lambda^\cdot(\overline{T^*N})}}{2}(1+2D^{\mathscr{E},2}_t) \varepsilonxp(D^{\mathscr{E},2}_t) \Big] \;. \varepsilonnd{split} \varepsilonnd{align} We have \begin{equation} \label{dilab-eq-intro-alpha-t-beta-t} d_M \alpha_t = 0 \;,\hspace{5mm} \frac{\partial}{\partial t} \alpha_t = \frac{1}{t}d_M \beta_t \;. \varepsilonnd{equation} Let $g^{H^\cdot(N,E)}$ be the metric on $H^\cdot(N,E)$ induced by the $L^2$-metric on $\mathscr{E}$ via the Hodge theorem. Let \begin{equation} f\big(H^\cdot(N,E),\nabla^{H^\cdot(N,E)},g^{H^\cdot(N,E)}\big) \in \Omega^\mathrm{odd}(M) \varepsilonnd{equation} be the Bismut-Lott odd characteristic form \cite[Definition 1.7]{bl}. Theorem \ref{dilab-intro-thm-riemann-roch-grothendieck} is a consequence of the following theorem. \begin{thm} \label{dilab-label-intro-asymptotics-alpha-t} As $t\rightarrow\infty$, \begin{equation} \label{dilab-intro-eq-asymptotics-alpha-t-1} \alpha_t = f\big(H^\cdot(N,E), \nablaabla^{H^\cdot(N,E)}, g^{H^\cdot(N,E)}\big) + \mathscr{O}\big(\frac{1}{\sqrt{t}}\big) \;. \varepsilonnd{equation} As $t\rightarrow 0$, \begin{equation} \label{dilab-intro-eq-asymptotics-alpha-t-2} \alpha_t = q_*\big[\widetilde{\mathrm{Td}}(TN,g^{TN})*\widetilde{\mathrm{ch}}(E,g^E)\big] + \frac{\text{a fixed exact form }}{t} + \mathscr{O}\big(\sqrt{t}\big) \;. \varepsilonnd{equation} \varepsilonnd{thm} \subsection{Analytic torsion forms} \label{dilab-subsect-intro-torsion-form} \ In the same way as in \varepsilonqref{dilab-intro-eq-asymptotics-alpha-t-1} and \varepsilonqref{dilab-intro-eq-asymptotics-alpha-t-2} , we also obtain an asymptotic estimate for $\beta_t$ as $t\rightarrow\infty$ and $t\rightarrow 0$. We construct explicitly an analytic torsion form \begin{equation} \mathscr{T}(g^{TN},g^E) \in \Omega^\mathrm{even}(M) \;, \varepsilonnd{equation} which is defined by subtracting the singularities of the following integral \begin{equation} - \int_0^\infty \beta_t \frac{dt}{t} \;. \varepsilonnd{equation} By \varepsilonqref{dilab-eq-intro-alpha-t-beta-t}, \varepsilonqref{dilab-intro-eq-asymptotics-alpha-t-1} and \varepsilonqref{dilab-intro-eq-asymptotics-alpha-t-2}, we have \begin{align} \begin{split} & d_M \mathscr{T}(g^{TN},g^E) \\ = \; & q_*\big[\widetilde{\mathrm{Td}}(TN,g^{TN})*\widetilde{\mathrm{ch}}(E,g^E)\big] - f\big(H^\cdot(N,E), \nabla^{H^\cdot(N,E)}, g^{H^\cdot(N,E)}\big) \;. 
\end{split} \end{align} Moreover, we show that the degree zero component of $\mathscr{T}(g^{TN},g^E)$ is the Ray-Singer holomorphic torsion \cite{rs2} associated with $(N,g^{TN},E,g^E)$. This paper is organized as follows. In \textsection \ref{dilab-sect-pre}, we recall several standard constructions and known results. Most of them can be found in \cite{bgv} and \cite[\textsection 1]{bl}. In \textsection \ref{dilab-sec-charac}, we construct characteristic classes for flat fibrations and prove Theorem \ref{dilab-thm-intro-ch-class}. In \textsection \ref{dilab-section-rrg}, we prove Theorem \ref{dilab-label-intro-asymptotics-alpha-t}. As a consequence, we establish Theorem \ref{dilab-intro-thm-riemann-roch-grothendieck}. We also construct the analytic torsion form $\mathscr{T}(g^{TN},g^E)$. \ \textbf{Acknowledgment.} This paper is part of the author's PhD thesis. The author would like to thank his advisor Professor Jean-Michel Bismut for his guidance. The research leading to the results contained in this paper has received funding from the European Research Council (E.R.C.) under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement No. 291060.
\section{Preliminaries} \label{dilab-sect-pre} \ This section is organized as follows. In \textsection \ref{dilab-subsec-superalg}, we introduce the superalgebra formalism. In \textsection \ref{dilab-subsect-clifford-pre}, we introduce Clifford algebras. In \textsection \ref{dilab-subsec-evenodd-char}, we recall the even and odd Chern-Weil theories. In \textsection \ref{dilab-subsec-fibration-connection-metric}, we introduce several objects associated with a smooth fibration. The constructions and results contained in this section can be found in \cite[\textsection 1]{bgv}, \cite[\textsection 1]{b}, \cite[\textsection 1]{bl}. \subsection{Superalgebras} \label{dilab-subsec-superalg} \ In the sequel, all the algebras will be over $\mathbb{R}$ or $\mathbb{C}$. \begin{defn} A superalgebra is an algebra $A$ equipped with a $\mathbb{Z}_2$-grading $A = A^{+} \oplus A^{-}$ such that \begin{equation} A^+A^\pm \subseteq A^\pm \;,\hspace{5mm} A^-A^\pm \subseteq A^\mp \;. \end{equation} \end{defn} Let $A$ be a superalgebra. An element $a\in A$ is said to be homogeneous if $a\in A^\pm$. We denote $\deg a = 0$ (resp. $\deg a = 1$) if $a\in A^+$ (resp. $a\in A^-$). The supercommutator of two homogeneous elements $a,b\in A$ is defined by \begin{equation} [a,b] = ab - (-1)^{\deg a \deg b}ba \;. \end{equation} The supercommutator $[\cdot,\cdot]$ extends bilinearly to the whole superalgebra $A$. \begin{defn} Let $A$ and $B$ be two superalgebras. The $\mathbb{Z}_2$-graded tensor product $A\widehat{\otimes}B$ is identified with $A\otimes B$ as vector spaces, and the multiplication is given by \begin{equation} (a_1\otimes b_1)\cdot(a_2\otimes b_2) = (-1)^{\deg a_2 \deg b_1} a_1a_2 \otimes b_1b_2 \;. \end{equation} \end{defn} \begin{defn} Let $A$ be a superalgebra. A super $A$-module is a $\mathbb{Z}_2$-graded vector space $V = V^+ \oplus V^-$ equipped with an action of $A$ such that \begin{equation} A^+ V^\pm \subseteq V^\pm \;,\hspace{5mm} A^- V^\pm \subseteq V^\mp \;. \end{equation} \end{defn} Let $V = V^+ \oplus V^-$ be a $\mathbb{Z}_2$-graded vector space. Set \begin{equation} \tau = \mathrm{id}_{V^+} - \mathrm{id}_{V^-} \in \mathrm{End}(V) \;, \end{equation} and \begin{equation} \mathrm{End}^\pm(V) = \Big\{a\in\mathrm{End}(V)\;:\; \tau a = \pm a \tau \Big\} \;.
\varepsilonnd{equation} Then $\mathrm{End}(V)= \mathrm{End}^+(V)\oplus\mathrm{End}^-(V)$ is a superalgebra, and $V$ is a super $\mathrm{End}(V)$-module. For $a\in\mathrm{End}(V)$, its supertrace is defined by \begin{equation} \tr_\mathrm{s}\big[a\big] = \tr\big[\tau a\big] \index{t rs@$\tr_\mathrm{s}$} \;. \varepsilonnd{equation} For $a,b\in\mathrm{End}(V)$, we have \begin{equation} \label{dilab-eq-trs-commutator-zero} \tr_\mathrm{s}\big[[a,b]\big] = 0 \;. \varepsilonnd{equation} In this paper, we will apply the superalgebra formalism to the following setting. Let $M$ be a smooth manifold. We denote by $\Omega^\cdot(M)$ be the algebra of differential forms on $M$. We always equip $\Omega^\cdot(M)$ with the $\mathbb{Z}_2$-grading $\Omega^\mathrm{even/odd}(M)$. Then $\Omega^\cdot(M)$ is a supercommutative superalgebra, i.e., $[\alpha_1,\alpha_2]=0$ for $\alpha_1,\alpha_2\in\Omega^\cdot(M)$. Let $F$ be a complex vector bundle over $M$. We denote by $\Omega^\cdot(M,F)$ the vector space of differential forms on $M$ with values in $F$. We equip $\Omega^\cdot(M,F)$ with the $\mathbb{Z}_2$-grading $\Omega^\mathrm{even/odd}(M,F)$. Then $\Omega^\cdot(M,F)$ is a super $\Omega^\cdot(M)$-module. \subsection{Clifford algebras} \label{dilab-subsect-clifford-pre} \ Let $V$ be a real vector space. Let $g^{V}$ be an Euclidean metric on $V$. Let \begin{equation} \bigotimes V := \bigoplus_{j=0}^\infty V^{\otimes j} \varepsilonnd{equation} be the tensor algebra of $V$. \begin{defn} Let $I\subseteq\bigotimes V$ be the bi-ideal generated by \begin{equation} u \otimes v + v \otimes u + 2 g^V(u,v) \;,\hspace{5mm} u,v\in V \;. \varepsilonnd{equation} Set \begin{equation} C(V,g^V) = \big( \bigotimes V \big) / I \;, \varepsilonnd{equation} called the Clifford algebra associated with $(V,g^V)$. \varepsilonnd{defn} Let \begin{equation} c : V \rightarrow C(V,g^V) \varepsilonnd{equation} be the map induced by the canonical injection $V\rightarrow \bigotimes V$. For $u,v\in V$, we have \begin{equation} c(u)c(v) + c(v)c(u) = -2g^V(u,v) \;. \varepsilonnd{equation} Let $e_1,\cdots,e_n\in V$ be an orthogonal basis of $V$. Then \begin{equation} \label{dilab-eq-basis-clifford-alg} c(e_{j_1})c(e_{j_2})\cdots c(e_{j_r}) \;,\hspace{5mm} 0 \leqslant r \leqslant n \;,\; j_1<j_2<\cdots<j_r \;, \varepsilonnd{equation} is a basis of $C(V,g^V)$. Let $C^\pm(V,g^V)\subseteq C(V,g^V)$ be the vector subspace spanned by the terms in \varepsilonqref{dilab-eq-basis-clifford-alg} with $r$ even/odd. Then $C(V,g^V)$ becomes a superalgebra. Now we suppose that $V$ is equipped with a complex structure $J\in\mathrm{End}(V)$ and that $g^V$ is $J$-invariant, i.e., $g^V(\cdot,\cdot) = g^V(J\cdot,J\cdot)$. Set \begin{equation} V_\mathbb{C} = V\otimes_\mathbb{R}\mathbb{C} \;. \varepsilonnd{equation} The action of $J$ extends $\mathbb{C}$-linearly to $V_\mathbb{C}$. The Euclidean metric $g^V$ extends to a $\mathbb{C}$-bilinear form on $V_\mathbb{C}$. Set \begin{equation} V_\mathbb{C}^{1,0} = \Big\{ v\in V_\mathbb{C}\;:\;Jv=iv\Big\} \;,\hspace{5mm} V_\mathbb{C}^{0,1} = \Big\{ v\in V_\mathbb{C}\;:\;Jv=-iv\Big\} \;. \varepsilonnd{equation} We have \begin{equation} V_\mathbb{C} = V_\mathbb{C}^{1,0} \oplus V_\mathbb{C}^{0,1} \;. \varepsilonnd{equation} For $v\in\mathbb{V}_\mathbb{C}$, let $v^{(1,0)}$ (resp. $v^{(0,1)}$) be its component in $V_\mathbb{C}^{1,0}$ (resp. $V_\mathbb{C}^{0,1}$). Let $V_\mathbb{C}^*$ be the vector space of $\mathbb{R}$-linear forms on $V_\mathbb{C}$. 
For $v\in V_\mathbb{C}$, let $v^*\in V_\mathbb{C}^*$ be its dual (with respect to $g^V$). Set \begin{equation} V_\mathbb{C}^{*,1,0} = \Big\{ f\in V_\mathbb{C}^*\;:\;f\circ J=if\Big\} \;,\hspace{5mm} V_\mathbb{C}^{*,0,1} = \Big\{ f\in V_\mathbb{C}^*\;:\;f\circ J=-if\Big\} \;. \varepsilonnd{equation} For $v\in V_\mathbb{C}^{1,0}$ (resp. $v\in V_\mathbb{C}^{0,1}$), we have $v^*\in V_\mathbb{C}^{*,0,1}$ (resp. $v^*\in V_\mathbb{C}^{*,1,0}$). For $v\in V_\mathbb{C}$, we define the product operator \begin{align} \begin{split} v^*\wedge : \; \Lambda^k V_\mathbb{C}^* & \rightarrow \Lambda^{k+1} V_\mathbb{C}^* \\ \alpha & \mapsto v^*\wedge\alpha\;, \varepsilonnd{split} \varepsilonnd{align} and the contraction operator \begin{align} \begin{split} i_v : \;\Lambda^k V_\mathbb{C}^* & \rightarrow \Lambda^{k-1} V_\mathbb{C}^* \\ \alpha & \mapsto \big((u_1,\cdots,u_{k-1}) \mapsto \alpha(v,u_1,\cdots,u_{k-1})\big) \;. \varepsilonnd{split} \varepsilonnd{align} Set \begin{align} \label{dilab-eq-pre-clifford-complex-rep} \begin{split} c : V & \rightarrow \mathrm{End}\big(\Lambda^\cdot (V_\mathbb{C}^{*,0,1}) \big) \\ v & \mapsto v^{(1,0),*}\!\wedge - i_{v^{(0,1)}} \;. \varepsilonnd{split} \varepsilonnd{align} For $u,v\in V$, we have \begin{equation} c(u)c(v)+c(v)c(u)+g^V(u,v) = 0 \;. \varepsilonnd{equation} Thus $c$ extends to a representation \begin{equation} c \; : \; C\big(V,\frac{1}{2}g^V\big) \rightarrow \mathrm{End}\big(\Lambda^\cdot (V_\mathbb{C}^{*,0,1}) \big) \;. \varepsilonnd{equation} \subsection{Even/odd characteristic classes} \label{dilab-subsec-evenodd-char} \ Let $M$ be a smooth manifold. Let $F$ be a complex vector bundle over $M$ of rank $r$. Let $\nabla^{F}$ be a connection on $F$. Then $\nabla^{F}$ induces a differential operator \begin{equation} \nabla^F : \Omega^{\cdot}(M,F)\rightarrow \Omega^{\cdot+1}(M,F) \;. \varepsilonnd{equation} Let \begin{equation} \nabla^{F,2} \in \Omega^2(M,\mathrm{End}(F)) \varepsilonnd{equation} be the curvature of $\nabla^F$. For $\omega\in\Omega^k(M)$, put \begin{equation} \varphi\omega = (2\pi i)^{-k/2} \omega \;. \varepsilonnd{equation} Let $\tr\big[\cdot\big] : \mathrm{End}(F) \rightarrow \mathbb{C}$ be the trace map, which extends to \begin{equation} \tr\big[\cdot\big] : \Omega^\cdot(M,\mathrm{End}(F)) \rightarrow \Omega^\cdot(M) \varepsilonnd{equation} such that for $\alpha\in\Omega^\cdot(M)$, $A\in{\mathscr{C}^\infty}(M,\mathrm{End}(F))$, \begin{equation} \tr\big[\omega A\big] = \omega \tr\big[ A \big]. \varepsilonnd{equation} Let $P$ be an invariant polynomial on $\mathfrak{gl}(r,\mathbb{C})$. \begin{thm}[Chern-Weil] \label{dilab-intro-thm-chern-weil} The differential form \begin{equation} \varphi P(-\nablaabla^{F,2})\in\Omega^\mathrm{even}(M) \varepsilonnd{equation} is closed. The cohomology class \begin{equation} P(F) := \left[ \varphi P(-\nablaabla^{F,2}) \right] \in H^\mathrm{even}(M) \varepsilonnd{equation} is independent of $\nabla^F$. \varepsilonnd{thm} Now we assume that $\nabla^F$ is a flat connection, i.e., $\nabla^{F,2} = 0$. Then \begin{equation} \varphi P(-\nablaabla^{F,2}) = P(0) \varepsilonnd{equation} is just a constant function on $M$. For flat vector bundles, there are non trivial characteristic classes of odd degree. We will follow the construction of Bismut-Lott \cite[\textsection 1]{bl}. Let $g^F$ be a Hermitian metric on $F$. 
Let $\nabla^{F,*}$ be the adjoint connection, i.e., for $\sigma_1,\sigma_2\in{\mathscr{C}^\infty}(M,F)$ and $U\in{\mathscr{C}^\infty}(M,TM)$, we have \begin{equation} g^F(\nabla^F_U \sigma_1,\sigma_2) + g^F(\sigma_1,\nabla^{F,*}_U \sigma_2) = U g^F(\sigma_1,\sigma_2) \;. \varepsilonnd{equation} Then $\nabla^{F,*,2} = 0$. Set \begin{equation} \omega^F = \nabla^{F,*} - \nabla^F \in \Omega^1(M,\mathrm{End}(F)) \;. \varepsilonnd{equation} Let $f$ be an odd polynomial in one variable with complex coefficients. Set \begin{equation} f(F,\nabla^F,g^F) = \sqrt{2\pi i} \varphi \tr \big[f(\omega^F/2)\big] \in \Omega^\mathrm{odd}(M) \;. \varepsilonnd{equation} The following theorem was established by Bismut-Lott \cite[Theorem 1.8]{bl}. \begin{thm} The differential form \begin{equation} f(F,\nabla^F,g^F) \in \Omega^\mathrm{odd}(M) \varepsilonnd{equation} is closed. The cohomology class \begin{equation} f(F,\nabla^F) := \left[ f(F,\nabla^F,g^F) \right] \in H^\mathrm{odd}(M) \varepsilonnd{equation} is independent of $g^F$. \varepsilonnd{thm} \begin{rem} If $f$ is an even polynomial, by \cite[Proposition 1.3]{bl}, we have \begin{equation} \tr \big[f(\omega^F)\big] = f(0) r \;. \varepsilonnd{equation} \varepsilonnd{rem} \subsection{Fibrations equipped with a connection and a fiberwise metric} \label{dilab-subsec-fibration-connection-metric} \ Let $\pi : \mathcal{N} \rightarrow M$ be a smooth fibration with compact fiber $N$. Let $TN$ be the relative tangent bundle of the fibration. We equip the fibration with a connection, i.e., a smooth splitting \begin{equation} \label{dilab-eq-pre-connection-fibration} T\mathcal{N} = T^H\mathcal{N} \oplus TN \;. \varepsilonnd{equation} Then $T^H\mathcal{N}\simeq\pi^*TM$. Let \begin{equation} P^{TN} : T\mathcal{N} \rightarrow TN \;,\hspace{5mm} P^{T^H\mathcal{N}} : T\mathcal{N} \rightarrow T^H\mathcal{N} \varepsilonnd{equation} be the projections. For $U\in TM$, let $U^H\in T^H\mathcal{N}$ be the lift of $U$, i.e., $\pi_*U^H = U$. For $U,V$ vector fields on $M$, set \begin{equation} \label{dilab-eq-pre-def-T} T(U,V) = [U,V]^H - [U^H,V^H] \;. \varepsilonnd{equation} We have $T\in\Omega^2(M,{\mathscr{C}^\infty}(N,TN))$. We call $T$ the curvature of the fibration. We equip $TM$ and $TN$ with Riemannian metrics $g^{TM}$ and $g^{TN}$. Let $\pi^* g^{TM}$ be the induced metric on $T^H\mathcal{N}$. Set \begin{equation} g^{T\mathcal{N}} = \pi^* g^{TM} \oplus g^{TN} \;, \varepsilonnd{equation} which is a Riemannian metric on $g^{T\mathcal{N}}$. Let $\left\langle\cdot,\cdot\right\rangle$ be the corresponding scalar product. Let $\nabla^{T\mathcal{N}}$ be the Levi-Civita connection on $T\mathcal{N}$ associated with $g^{T\mathcal{N}}$. \begin{defn} Let $\nablaabla^{TN}$ be the connection on $TN$ defined by \begin{equation} \nabla^{TN} = P^{TN} \nabla^{T\mathcal{N}} P^{TN}. \varepsilonnd{equation} \varepsilonnd{defn} Then $\nabla^{TN}$ is independent of $g^{TM}$ (cf. \cite[\textsection 1(c)]{b}). Let $L_\cdot$ be the Lie derivative. For $U$ a vector field on $M$, set \begin{equation} \omega^{TN}(U) = (g^{TN})^{-1} L_{U^H} g^{TN} \in {\mathscr{C}^\infty}(\mathcal{N},\mathrm{End}(TN)) \;. \varepsilonnd{equation} If $V\in TN$, then $\nabla^{TN}_V$ coincides with the usual Levi-Civita connection along the fiber $N$. If $U\in TM$, then (cf. \cite[\textsection 1(c)]{b}) \begin{equation}\label{eq:corr1} \nabla^{TN}_{U^H} = L_{U^H} + \frac{1}{2}\omega^{TN}(U) \;. 
\varepsilonnd{equation} Put \begin{equation} \nabla^{T\mathcal{N},\oplus} = P^{TN} \nabla^{T\mathcal{N}} P^{TN} \oplus P^{T^H\mathcal{N}} \nabla^{T\mathcal{N}} P^{T^H\mathcal{N}}. \varepsilonnd{equation} \begin{defn}\label{defSTX} For $U\in T\mathcal{N}$, set \begin{equation} \label{dilab-eq-pre-second-form} S^{TN}(U) = \nablaabla^{T\mathcal{N}}_U - \nablaabla^{T\mathcal{N},\oplus}_U \in {\mathscr{C}^\infty}(\mathcal{N}, \mathrm{End}(T\mathcal{N})) \;. \varepsilonnd{equation} \varepsilonnd{defn} Then $\left\langle S^{TN}\left(\cdot\right)\cdot,\cdot\right\rangle$ is independent of $g^{TM}$ (cf. \cite[\textsection 1(c)]{b}). \section{The Chern-Weil theory of a flat fibration} \label{dilab-sec-charac} The purpose of this section is to construct characteristic classes and characteristic forms on the total space of a flat fibration with compact complex fibers. This section is organized as follows. In \textsection \ref{dilab-subsec-conseq-chern-weil}, we state a consequence of the Chern-Weil theory, which will be of constant use in the rest of this section. In \textsection \ref{dilab-subsection-a-flat-fibration}, we define a flat fibration with complex fibers. In \textsection \ref{dilab-subsection-a-vector-and-superconnection}, we construct a complex vector bundle $E$ over the total space of fibration. In \textsection \ref{dilab-subsection-the-connections-and-supperconnections-to-metric}, we construct connections on $E$. In particular, given a Hermitian metric on $E$, we construct a unitary connection on $E$ and show that the integral along the fiber of the usual Chern-Weil forms associated with this connection vanishes in positive degree. In \textsection \ref{dilab-subsection-chern-simons-char}, we construct odd characteristic forms. These characteristic forms will appear on the right-hand side of the Riemann-Roch-Grothendieck formula in \textsection \ref{dilab-section-rrg}. In \textsection \ref{dilab-subsect-mul-odd-ch}, we construct a natural multiplication of the odd characteristic forms. \subsection{A consequence of Chern-Weil theory} \label{dilab-subsec-conseq-chern-weil} \ Let $N$ be a smooth compact oriented manifold. Let $\big(\Omega^{\cdot}(N),d_N\big)$ be the de Rham complex of smooth differential forms on $N$. We denote by $H^{\cdot}(N)$ its cohomology. Let $V$ be a finite dimensional real vector space. We will replace the de Rham complex $\big(\Omega^{\cdot}(N),d_N\big)$ by the twisted de Rham complex $\big(\Omega^{\cdot}(N,\Lambda^\cdot (V^*)),d_N\big)$. Its cohomology is equal to $H^{\cdot}(N)\widehat{\otimes}\Lambda^{\cdot}(V^*)$. Let $\big(\Omega^{\cdot}(N\times V), d_{N\times V}\big) $ be the de Rham complex of $N\times V$. Then $\big(\Omega^{\cdot}(N,\Lambda^\cdot (V^*)),d_N\big)$ can be identified with the subcomplex of $\big(\Omega^{\cdot}(N\times V), d_{N\times V}\big) $ that consists of forms which are constant along $V$. Let $p : N\times V \rightarrow N$ and $q : N\times V \rightarrow V$ be the natural projections. Let $q_{*}$ denote the integral along the fiber $N$, i.e., for $\alpha\in\Omega^\cdot(V)$ and $\beta\in\Omega^\cdot(N)$, \begin{equation} q_*[\alpha\wedge\beta] = \alpha\int_N\beta \;, \varepsilonnd{equation} By restricting $q_*$ to forms which are constant along $V$, we get a map \begin{equation} q_* : \Omega^{\cdot}(N,\Lambda^\cdot (V^*)) \rightarrow \Lambda^\cdot (V^*) \;. \varepsilonnd{equation} Let $E$ be a complex vector bundle of rank $r$ over $N$. Let $\nablaabla^{E}$ be a connection on $E$. 
Its curvature $\nabla^{E,2}$ is a smooth section of $\Lambda^2(T^*N) \otimes \text{End}(E)$. The vector bundle $E$ lifts to the vector bundle $p^*E$ on $N\times V$, and $\nabla^{E}$ lifts to a connection on $p^*E$, which is still denoted by $\nabla^E$. Let $S$ be a smooth section of $V^*\otimes \text{End}(E)$ on $N$. We can view $S$ as a section of $V^{*}\otimes\text{End}(E)$ on $N\times V$, which is constant along $V$. Then $\nabla^{E}+S$ is also a connection on $p^*E$. Its curvature $(\nabla^E+S)^2$ is a smooth section of $\Big(\Lambda^\cdot(T^*N)\widehat{\otimes}\Lambda^{\cdot}(V^*)\Big)^\mathrm{even}\otimes\text{End}(E)$ over $N\times V$, which is constant along $V$. The following proposition is a direct consequence of Chern-Weil theory. \begin{prop} \label{dilab-prop-conseq-chern-weil} For any invariant complex polynomial $P$ on $\mathfrak{gl}(r,\mathbb{C})$, \begin{equation} P\big(-(\nabla^E+S)^2\big) \in \Omega^\cdot(N,\Lambda^\cdot (V^*)) \end{equation} is closed. Its cohomology class \begin{equation} \left[P\big(-(\nabla^E+S)^2\big)\right] \in H^\cdot(N)\widehat{\otimes}\Lambda^\cdot(V^*) \end{equation} is independent of $\nabla^E$ and $S$. In particular, \begin{equation} \left[P\big(-(\nabla^E+S)^2\big)\right] \in H^\cdot(N)\subseteq H^\cdot(N)\widehat{\otimes}\Lambda^\cdot(V^*) \;. \end{equation} \end{prop} \subsection{A flat complex fibration} \label{dilab-subsection-a-flat-fibration} \ Let $G \index{G@$G$}$ be a Lie group. Let $N \index{Nm@$N$}$ be a compact complex manifold of dimension $n$. We assume that $G$ acts holomorphically on $N$. Let $M \index{M@$M$}$ be a real manifold. Let $p : P_G\rightarrow M \index{PG@$P_G$}$ be a principal $G$-bundle equipped with a connection. Set \begin{equation} \mathcal{N} = P_G\times_G N \;. \index{Nmathcal@$\mathcal{N}$} \end{equation} Let $q : \mathcal{N}\rightarrow M \index{q@$q$}$ be the natural projection, which defines a fibration with canonical fiber $N$. Let $T_\mathbb{R}N$ be the real tangent bundle of $N$. Set $T_\mathbb{C}N = T_\mathbb{R}N \otimes_\mathbb{R} \mathbb{C}$. The connection on $P_G$ induces a connection on the fibration $q : \mathcal{N}\rightarrow M$, i.e., a splitting \begin{equation} \label{dilab-eq-splitting-tangent-bundle} T\mathcal{N} = T_{\mathbb{R}}N \oplus T^H\mathcal{N} \index{TNmathcalH@$T^H\mathcal{N}$} \;. \end{equation} Then $T^H\mathcal{N}\simeq q^*TM$. The splitting \eqref{dilab-eq-splitting-tangent-bundle} induces the following identification \begin{equation} \label{dilab-eq-double-splitting-forms} \Lambda^\cdot(T_\mathbb{C}^*\mathcal{N}) = \Lambda^\cdot(T^*_\mathbb{C}N) \widehat{\otimes} q^*\Lambda^\cdot(T^*_\mathbb{C}M) \;. \end{equation} Let $TN$ be the holomorphic tangent bundle of $N$. Using the splitting $T_\mathbb{C}N = TN \oplus \overline{TN}$, we get a further splitting \begin{equation} \label{dilab-eq-triple-splitting-forms} \Lambda^\cdot(T_\mathbb{C}^*\mathcal{N}) = \Lambda^\cdot(T^*N) \widehat{\otimes} \Lambda^\cdot(\overline{T^*N}) \widehat{\otimes} q^*\Lambda^\cdot(T^*_\mathbb{C}M) \;. \end{equation} Put \begin{equation} \label{dilab-eq2-triple-splitting-forms} \Omega^{(p,q,r)}(\mathcal{N}) = {\mathscr{C}^\infty}\big(\mathcal{N}, \Lambda^p(T^*N) \widehat{\otimes} \Lambda^q(\overline{T^*N}) \widehat{\otimes} q^*\Lambda^r(T^*_\mathbb{C}M)\big) \index{OmegaNmathcal@$\Omega^{(p,q,r)}(\mathcal{N})$} \;.
\varepsilonnd{equation} Then \begin{equation} \label{dilab-eq3-triple-splitting-forms} \Omega^k(\mathcal{N}) = \bigoplus_{p+q+r=k} \Omega^{(p,q,r)}(\mathcal{N}) \;. \varepsilonnd{equation} In the sequel, we assume that the connection on $P_G$ is flat. Then $q:\mathcal{N}\rightarrow M$ is a flat fibration, i.e., its curvature $T=0$ (cf. \varepsilonqref{dilab-eq-pre-def-T}). Let $d_N$ be the de Rham operator on $\Omega^\cdot(N)$. Let $d_M$ be the de Rham operator on $\Omega^\cdot(M)$, which lifts to $\Omega^\cdot(\mathcal{N})$ in the following sense : let $(f_\alpha)$ be a basis of $TM$, let $(f^\alpha)$ be the dual basis of $T^*M$. then \begin{equation} d_M = \sum_\alpha (q^*f^\alpha)\wedge L_{f_\alpha^H} \;. \varepsilonnd{equation} Let $d_{\mathcal{N}}$ \index{dNmathcal@$d_{\mathcal{N}}$} be the de Rham operator on $\mathcal{N}$. Since $T=0$, by \cite[Proposition 3.4]{bl}, we get \begin{equation} \label{dilab-eq-double-splitting-derham} d_{\mathcal{N}} = d_N + d_M \index{dN@$d_N$} \index{dM@$d_M$} \;. \varepsilonnd{equation} Let $\partial_N \index{dN partial@$\partial_N$}$ (resp. $\overline{\partial}_N \index{dN partialoverline@$\overline{\partial}_N$}$) be the holomorphic (resp. anti-holomorphic) Dolbeault operator on $N$. We have \begin{equation} \label{dilab-eq-vertical-splitting-derham} d_N = \partial_N + \overline{\partial}_N \;. \varepsilonnd{equation} By (\ref{dilab-eq-double-splitting-derham}) and (\ref{dilab-eq-vertical-splitting-derham}), we get \begin{equation} \label{dilab-eq-triple-splitting-derham} d_{\mathcal{N}} = \partial_N + \overline{\partial}_N + d_M \;. \varepsilonnd{equation} The following relations hold, \begin{align} \begin{split} & d_M^2 = d_N^2 = \partial_N^2 = \overline{\partial}_N^2 = 0 \;,\\ & \big[d_M,d_N\big] = \big[d_M,\partial_N\big] = \big[d_M,\overline{\partial}_N\big] = \big[d_N,\partial_N\big] = \big[d_N,\overline{\partial}_N\big] = \big[\partial_N,\overline{\partial}_N\big] = 0 \;. \varepsilonnd{split} \varepsilonnd{align} \subsection{A fiberwise holomorphic vector bundle} \label{dilab-subsection-a-vector-and-superconnection} \ Let $E_0\index{E0@$E_0$}$ be a holomorphic vector bundle over $N$ of rank $r$. We assume that the action of $G$ on $N$ lifts to a holomorphic action on $E_{0}$. Set \begin{equation} E = P_G \times_G E_0 \index{E@$E$}\;, \varepsilonnd{equation} which is a complex vector bundle over $\mathcal{N}$. Furthermore, $E$ is holomorphic along $N$. Let $\overline{\partial}_N^E \index{dN partialoverline E@$\overline{\partial}_N^E$}$ be the fiberwise holomorphic structure of $E$. Let $d_M^E \index{dM E@$d_M^E$}$ be the lift of the de Rham operator on $M$ to $\Omega^\cdot(\mathcal{N},E)$. We have \begin{equation} \label{dilab-eq-commute-n-E-NM} \overline{\partial}_N^{E,2} = d_M^{E,2} = \big[\overline{\partial}_N^E,d_M^E\big] = 0 \;. \varepsilonnd{equation} \subsection{Connections} \label{dilab-subsection-the-connections-and-supperconnections-to-metric} \ Set \begin{equation} \label{dilab-eq-def-flat-superconnection} {A^E}'' = \overline{\partial}^E_N + d^{E}_M \;, \index{AE prim2@${A^E}''$} \varepsilonnd{equation} which acts on $\Omega^\cdot(\mathcal{N},E)$. By \varepsilonqref{dilab-eq-commute-n-E-NM}, we have \begin{equation} \label{dilab-eq-AEsquare=0} \big({A^E}''\big)^2 = 0 \;. \varepsilonnd{equation} Let $\overline{E}^* $ be the anti-dual vector bundle of $E$. When replacing the complex structure of $N$ by the conjugate complex structure, $\overline{E}^*$ enjoys exactly the same properties as $E$. 
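In other words (we spell this out only to make the last sentence explicit), the anti-dual $\overline{E_0}^*$ is a holomorphic vector bundle over the conjugate complex manifold $\overline{N}$, the action of $G$ lifts holomorphically to it, and
\begin{equation}
\overline{E}^* = P_G\times_G \overline{E_0}^* \;,
\end{equation}
so that the constructions of \textsection \ref{dilab-subsection-a-vector-and-superconnection} apply verbatim to $\overline{E}^*$, with the roles of the holomorphic and anti-holomorphic directions along the fiber exchanged.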
We construct $\partial^{\overline{E}^*}_N$ $d_M^{\overline{E}^*}$ and ${A^{\overline{E}^*}}'$ in the same way as $\overline{\partial}^E_N$, $d_M^E$ and ${A^E}''$. In particular, \begin{equation} \label{dilab-eq-def-flat-superconnection-antidual} {A^{\overline{E}^*}}' = \partial^{\overline{E}^*}_N + d_M^{\overline{E}^*} \;. \varepsilonnd{equation} Proceeding in the same way as in \varepsilonqref{dilab-eq-commute-n-E-NM} and \varepsilonqref{dilab-eq-AEsquare=0}, we have \begin{equation} \label{dilab-eq-commute-n-Edual-NM} \partial_N^{\overline{E}^*,2} = d_M^{\overline{E}^*,2} = \big[\overline{\partial}_N^{\overline{E}^*},d_M^{\overline{E}^*}\big] = 0 \;, \varepsilonnd{equation} and \begin{equation} \label{dilab-eq-AEdualsquare=0} \big({A^{\overline{E}^*}}'\big)^2 = 0 \;. \varepsilonnd{equation} Let $g^E$ be a Hermitian metric on $E$. Then $g^E$ defines an isomorphism $g^E : E \rightarrow \overline{E}^*$. Set \begin{equation} \label{dilab-eq-def-EN-EMu} \partial^E_N = (g^E)^{-1} \partial^{\overline{E}^*}_N g^E \index{dN partial E @$\partial^E_N$}\;,\hspace{5mm} d_M^{E,*} = (g^E)^{-1} d_M^{\overline{E}^*} g^E \index{dM E star@$d_M^{E,*}$}\;, \varepsilonnd{equation} which act on $\Omega^\cdot(\mathcal{N},E)$. By \varepsilonqref{dilab-eq-commute-n-Edual-NM} and \varepsilonqref{dilab-eq-def-EN-EMu}, we have \begin{equation} \label{dilab-eq-commute-n-E-NM-star} \partial_N^{E,2} = d_M^{E,*,2} = \big[ \overline{\partial}^E_N,d_M^{E,*} \big] = 0 \;. \varepsilonnd{equation} Set \begin{equation} \label{dilab-eq-def-flat-superconnection-star} {A^E}' = (g^E)^{-1}{A^{\overline{E}^*}}'g^E= \partial^E_N + d_M^{E,*} \index{AE prim@${A^E}'$}\;. \varepsilonnd{equation} Then, by \varepsilonqref{dilab-eq-commute-n-E-NM-star}, we have \begin{equation} \label{dilab-eq-AEprimsquare=0} \big({A^E}'\big)^2 = 0 \;. \varepsilonnd{equation} Let $N^{\Lambda^\cdot(T^*M)} \index{Nnumber M@$N^{\Lambda^\cdot(T^*M)}$}$ be the number operator of $\Lambda^\cdot(T^*M)$. \begin{defn} Set \begin{align} \label{dilab-eq-def-superconnection} \begin{split} & A^E = 2^{-N^{\Lambda^\cdot(T^*M)}} \big({A^E}' + {A^E}''\big) 2^{N^{\Lambda^\cdot(T^*M)}} \index{AE@$A^E$} \;,\\ & B^E = 2^{-N^{\Lambda^\cdot(T^*M)}} \big({A^E}' - {A^E}''\big) 2^{N^{\Lambda^\cdot(T^*M)}} \index{BE@$B^E$} \;. \varepsilonnd{split} \varepsilonnd{align} \varepsilonnd{defn} By \varepsilonqref{dilab-eq-AEsquare=0} and \varepsilonqref{dilab-eq-AEprimsquare=0}, we have \begin{equation} \label{dilab-eq-relation-A-B} A^{E,2} = 2^{-N^{\Lambda^\cdot(T^*M)}} \big[{A^E}',{A^E}''\big] 2^{N^{\Lambda^\cdot(T^*M)}} = - B^{E,2} \;. \varepsilonnd{equation} Set \begin{equation}\label{eq:corr2} d_N^E = \partial^E_N + \overline{\partial}^E_N \index{dN E@$d_N^E$}\;,\hspace{5mm} d_M^{E,\mathrm{u}} = \frac{1}{2}\big(d_M^E + d_M^{E,*}\big) \index{dM E uni@$d_M^{E,\mathrm{u}}$}\;. \varepsilonnd{equation} Then \begin{equation} \label{dilab-eq-explicite-A} A^E = d_N^E + d_M^{E,\mathrm{u}} \;. \varepsilonnd{equation} Thus $A^E$ is a Hermitian connection on $E$ over $\mathcal{N}$. Set \begin{equation} \label{dilab-eq-def-omegaE} \omega^E = d_M^{E,*} - d_M^E = \big(g^E\big)^{-1} d_M^E g^E \in {\mathscr{C}^\infty}\big(\mathcal{N}, T^*M \otimes _{\mathbb R}\mathrm{End}(E)\big) \index{omegaE@$\omega^E$}\;. \varepsilonnd{equation} Then \begin{equation} \label{dilab-eq-explicite-B} B^E = \partial^E_N - \overline{\partial}^E_N + \frac{1}{2}\omega^E \;. \varepsilonnd{equation} Thus $B^E\in\Omega^\cdot\big(M,\text{End}(\Omega^\cdot(N,E))\big)$. 
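\begin{rem} The following observation is included only as a consistency check and is not used in the sequel. Since $\omega^E$ and $d_M^{E,\mathrm{u}}$ only involve horizontal directions, the restriction of $A^E$ to the fibers is, by \eqref{dilab-eq-explicite-A}, the connection $d_N^E = \partial^E_N + \overline{\partial}^E_N$, i.e., the fiberwise Chern connection of $(E,g^E)$ that appears in the introduction. In particular, if $M$ is reduced to a point, then $d_M^E = 0$, $\omega^E = 0$, and $A^E$ is just the Chern connection of $(E,g^E)$ on $N$. \end{rem}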
\begin{prop} \label{dilab-prop-annulation-curvature} For any invariant polynomial $P$ on $\mathfrak{gl}(r,\mathbb{C})$, we have \begin{equation} \label{dilab-eq1-prop-annulation-curvature} \big(\partial_N-\overline{\partial}_N\big)P\big(-A^{E,2}\big) = 0 \;. \varepsilonnd{equation} Also \begin{equation} \label{dilab-eq2-prop-annulation-curvature} P\big(-A^{E,2}\big) - P\big(-d_N^{E,2}\big) \in \mathrm{Im} \big(\partial_N-\overline{\partial}_N\big) \;. \varepsilonnd{equation} As a consequence, we have \begin{equation} \label{dilab-eq3-prop-annulation-curvature} q_*\big[P\big(-A^{E,2}\big)\big] = q_*\big[P\big(-d_N^{E,2}\big)\big] \;, \varepsilonnd{equation} which is a constant function on $M$. \varepsilonnd{prop} \begin{proof} Let $N^{\Lambda^\cdot(\overline{T^*N})}$ be the number operator of $\Lambda^\cdot(\overline{T^*N})$. Set $U=(-1)^{N^{\Lambda^\cdot(\overline{T^*N})}}$. To establish \varepsilonqref{dilab-eq1-prop-annulation-curvature} and \varepsilonqref{dilab-eq2-prop-annulation-curvature}, we only need to show that \begin{equation} \label{dilab-eq01-prop-annulation-curvature} d_N UP\big(-A^{E,2}\big) = 0 \;, \varepsilonnd{equation} and \begin{equation} \label{dilab-eq02-prop-annulation-curvature} UP\big(-A^{E,2}\big) - UP\big(-d_N^{E,2}\big) \in \mathrm{Im}\big(d_N\big) \;. \varepsilonnd{equation} By \varepsilonqref{dilab-eq-explicite-B}, we have \begin{equation} U^{-1}B^EU = d_N^E + \frac{1}{2}\omega^E \;. \varepsilonnd{equation} Now, applying \varepsilonqref{dilab-eq-relation-A-B}, we get \begin{equation} \label{dilab-eq-prop-annulation-curvature-1} U^{-1}A^{E,2}U = -U^{-1}B^{E,2}U = -\big(d_N^E+\frac{1}{2}\omega^E\big)^2 \;. \varepsilonnd{equation} We may and we will assume that $P$ is homogeneous. By \varepsilonqref{dilab-eq-prop-annulation-curvature-1}, we have \begin{equation} \label{dilab-eq-prop-annulation-curvature-2} U P\big(-A^{E,2}\big) = (-1)^{\deg P} P\Big(-\big(d_N^E+\frac{1}{2}\omega^E\big)^2\Big) \;. \varepsilonnd{equation} Applying Proposition \ref{dilab-prop-conseq-chern-weil} to the right-hand side of \varepsilonqref{dilab-eq-prop-annulation-curvature-2}, we get \varepsilonqref{dilab-eq01-prop-annulation-curvature}. We decompose \varepsilonqref{dilab-eq-prop-annulation-curvature-2} according to \varepsilonqref{dilab-eq3-triple-splitting-forms}. By extracting the components which are of positive degree along $M$, we get \begin{align} \label{dilab-eq-prop-annulation-curvature-3} \begin{split} & U P\big(-A^{E,2}\big) - U P\big(-d^{E,2}_N\big) \\ & = (-1)^{\deg P} P\Big(-\big(d_N^E+\frac{1}{2}\omega^E\big)^2\Big) - (-1)^{\deg P} P\Big(-d_N^{E,2}\Big) \;. \varepsilonnd{split} \varepsilonnd{align} Applying Proposition \ref{dilab-prop-conseq-chern-weil} to the right-hand side of \varepsilonqref{dilab-eq-prop-annulation-curvature-3}, we get \varepsilonqref{dilab-eq02-prop-annulation-curvature}. Taking the integral of \varepsilonqref{dilab-eq2-prop-annulation-curvature} along $N$, we get \varepsilonqref{dilab-eq3-prop-annulation-curvature}. \varepsilonnd{proof} For $t\in\mathbb{R}$, set \begin{equation} A^E_t= d_N^E + td_M^E + (1-t)d_M^{E,*} \index{AEt@$A^E_t$}\;. \varepsilonnd{equation} In particular, \begin{equation} A^E_{1/2}=A^E \;. \varepsilonnd{equation} Set \begin{equation} \label{dilab-eq-def-vt} V_t = (2-2t)^{N^{\Lambda^\cdot(T^*N)}}(2t)^{N^{\Lambda^\cdot(\overline{T^*N})}} \index{Vt@$V_t$}\;. \varepsilonnd{equation} \begin{lemme} For $t \nablaeq 0,1$, we have \begin{equation} \label{dilab-eq-connection-t-rescaling} A^{E,2}_t = 4t(1-t)V_t^{-1}A^{E,2}V_t \;. 
\varepsilonnd{equation} \varepsilonnd{lemme} \begin{proof} By \varepsilonqref{dilab-eq-def-flat-superconnection} and \varepsilonqref{dilab-eq-def-flat-superconnection-star}, we have \begin{align} \label{dilab-prop-rescal-A-t-proof-eq-1} \begin{split} 2tV_t^{-1}2^{-N^{\Lambda^\cdot(T^*M)}}{A^E}''2^{N^{\Lambda^\cdot(T^*M)}}V_t & = \overline{\partial}^E_N + td_M^E \;,\\ (2-2t)V_t^{-1}2^{-N^{\Lambda^\cdot(T^*M)}}{A^E}'2^{N^{\Lambda^\cdot(T^*M)}}V_t & = \partial^E_N + (1-t)d_M^{E,*} \;. \varepsilonnd{split} \varepsilonnd{align} By \varepsilonqref{dilab-eq-commute-n-E-NM}, \varepsilonqref{dilab-eq-commute-n-E-NM-star}, \varepsilonqref{dilab-eq-relation-A-B} and \varepsilonqref{dilab-prop-rescal-A-t-proof-eq-1}, we have \begin{align} \begin{split} & 4t(1-t)V_t^{-1}A^{E,2}V_t \\ = \; & \Big[(2-2t)V_t^{-1}2^{-N^{\Lambda^\cdot(T^*M)}}{A^E}'2^{N^{\Lambda^\cdot(T^*M)}}V_t\,,\,2tV_t^{-1}2^{-N^{\Lambda^\cdot(T^*M)}}{A^E}''2^{N^{\Lambda^\cdot(T^*M)}}V_t\Big] \\ = \; & \Big[\partial^E_N + (1-t)d_M^{E,*}\,,\,\overline{\partial}_N^E + td_M^E\Big] \\ = \; & \Big( \partial^E_N + (1-t)d_M^{E,*} + \overline{\partial}_N^E + td_M^E \Big)^2 = A^{E,2}_t \;. \varepsilonnd{split} \varepsilonnd{align} \varepsilonnd{proof} Now we extend Proposition \ref{dilab-prop-annulation-curvature} by considering the extra parameter $t$. \begin{thm} \label{dilab-thm-gen-flat2trivialclass} For any invariant polynomial $P$ on $\mathfrak{gl}(r,\mathbb{C})$ and $t\in\mathbb{R}$, we have \begin{equation} \label{dilab-eq-thm-gen-flat2trivialclass} q_*\big[P\big(-A^{E,2}_t\big)\big] = q_*\big[P\big(-d_N^{E,2}\big)\big] \;, \varepsilonnd{equation} which is a constant function on $M$. \varepsilonnd{thm} \begin{proof} Since $q_*\big[P\big(-A^{E,2}_t\big)\big]$ is polynomial on $t$, it is sufficient to consider the case $t\nablaeq 0,1$. We may suppose that $P$ is homogeneous. By \varepsilonqref{dilab-eq-connection-t-rescaling}, we have \begin{equation} \label{dilab-eq1-pf-thm-gen-flat2trivialclass} q_*\big[P\big(-A^{E,2}_t\big)\big] = \big(4t(1-t)\big)^{\deg P} q_*\big[V_t^{-1}P\big(-A^{E,2}\big)\big] \;. \varepsilonnd{equation} Applying Proposition \ref{dilab-prop-annulation-curvature} to the right-hand side of \varepsilonqref{dilab-eq1-pf-thm-gen-flat2trivialclass}, we get \begin{equation} \label{dilab-eq2-pf-thm-gen-flat2trivialclass} q_*\big[P\big(-A^{E,2}_t\big)\big] = \big(4t(1-t)\big)^{\deg P} q_*\big[V_t^{-1}P\big(-d_N^{E,2}\big)\big] \;. \varepsilonnd{equation} Since $P\big(-d_N^{E,2}\big)$ is a $(\deg P, \deg P)$-form on $N$, we have \begin{equation} \label{dilab-eq3-pf-thm-gen-flat2trivialclass} V_t^{-1}P\big(-d_N^{E,2}\big) = \big(4t(1-t)\big)^{-\deg P} P\big(-d_N^{E,2}\big) \;. \varepsilonnd{equation} By \varepsilonqref{dilab-eq2-pf-thm-gen-flat2trivialclass} and \varepsilonqref{dilab-eq3-pf-thm-gen-flat2trivialclass}, we get \varepsilonqref{dilab-eq-thm-gen-flat2trivialclass}. \varepsilonnd{proof} \subsection{The odd characteristic forms} \label{dilab-subsection-chern-simons-char} \ Set $\varphi = (2\pi i)^{-\frac{1}{2}N^{\Lambda^\cdot (T^*\mathcal{N})}} \index{p hi var@$\varphi$}$. Let $P$ be an invariant polynomial on $\mathfrak{gl}(r,\mathbb{C})$. \begin{defn} For $t\in\mathbb{R}$, set \begin{equation} \widetilde{P}_t\big(E, g^E\big) = \sqrt{2\pi i} \varphi \; \left\langle P'\big(-A^{E,2}_t\big),\frac{\omega^E}{2} \right\rangle \in\Omega^\mathrm{odd}(\mathcal{N}) \index{Pinvt tilde@$\widetilde{P}_t(\cdot,\cdot)$}\;. 
\varepsilonnd{equation} \varepsilonnd{defn} Here the notation $\big\langle P'(\cdot),\cdot\big\rangle$ was defined in \varepsilonqref{dilab-eq-intro-def-Pder}. \begin{prop} \label{dilab-prop-class-odd} For $t\in\mathbb{R}$, the differential form \begin{equation} q_*\big[\widetilde{P}_t\big(E, g^E\big)\big]\in\Omega^\mathrm{odd}(M) \varepsilonnd{equation} is closed . Its cohomology class \begin{equation} \left[ q_*\big[\widetilde{P}_t\big(E, g^E\big)\big] \right] \in H^\cdot(M) \varepsilonnd{equation} is independent of $g^E$. \varepsilonnd{prop} \begin{proof} We have (cf. \cite[\textsection 1.4]{bgv}) \begin{align} \begin{split} \sqrt{2\pi i} \varphi \frac{\partial}{\partial t} P\big(-A^{E,2}_t\big) = \; & -\sqrt{2\pi i} \varphi \left\langle P'\big(-A^{E,2}_t\big),\big[A^E_t,\frac{\partial}{\partial t}A^E_t\big]\right\rangle \\ = \; & -\sqrt{2\pi i} \varphi \, d_\mathcal{N} \left\langle P'\big(-A^{E,2}_t\big),\frac{\partial}{\partial t}A^E_t\right\rangle \\ = \; & -d_\mathcal{N} \varphi \left\langle P'\big(-A^{E,2}_t\big),\frac{\partial}{\partial t}A^E_t\right\rangle \;. \varepsilonnd{split} \varepsilonnd{align} Since \begin{equation} \frac{\partial}{\partial t}A^E_t = d_M^E - d_M^{E,*} = - \omega^E \;, \varepsilonnd{equation} we have \begin{equation} \label{dilab-eq-closeness-chern-simons} \sqrt{2\pi i} \varphi \;\frac{\partial}{\partial t} P\big(-A^{E,2}_t\big) = 2 d_\mathcal{N}\widetilde{P}_t\big(E, g^E\big) \;. \varepsilonnd{equation} By Proposition \ref{dilab-thm-gen-flat2trivialclass}, we get \begin{equation} \label{dilab-proof-prop-class-odd-eq-1} \frac{\partial}{\partial t} q_*\big[P\big(-A^{E,2}_t\big)\big] = 0 \;. \varepsilonnd{equation} By (\ref{dilab-eq-closeness-chern-simons}) and (\ref{dilab-proof-prop-class-odd-eq-1}), we get \begin{equation} d_M q_*\big[\widetilde{P}_t\big(E, g^E\big)\big] = q_*\big[d_\mathcal{N}\widetilde{P}_t\big(E, g^E\big)\big] = 0 \;. \varepsilonnd{equation} Thus $q_*\big[\widetilde{P}_t\big(E, g^E\big)\big]$ is closed. The fact that $\left[q_*\big[\widetilde{P}_t\big(E, g^E\big)\big]\right]\in H^\cdot(M)$ is independent of $g^E$ comes from the functoriality of our construction (cf. \cite[\textsection 1.5]{bgv}). \varepsilonnd{proof} Now we study the dependence of $\widetilde{P}_t\big(E, g^E\big)$ on $t$. Recall that $V_t$ was defined in (\ref{dilab-eq-def-vt}). \begin{prop} If $P$ is homogeneous, for $t\in\mathbb{R}$, we have \begin{equation} \label{dilab-eq-chern-simons-form-t-rescaling} \widetilde{P}_t\big(E, g^E\big) = \big(4t(1-t)\big)^{\deg P - 1}V_t^{-1} \widetilde{P}_\frac{1}{2}\big(E, g^E\big) \;. \varepsilonnd{equation} In particular, \begin{equation} \label{dilab-eq-chern-simons-class-t-rescaling} q_*\big[\widetilde{P}_t\big(E, g^E\big)\big] = \big(4t(1-t)\big)^{\deg P - n - 1} q_*\big[\widetilde{P}_\frac{1}{2}\big(E, g^E\big)\big] \;. \varepsilonnd{equation} \varepsilonnd{prop} \begin{proof} Since \varepsilonqref{dilab-eq-chern-simons-form-t-rescaling} is a rational function of $t$, it is sufficient to consider the case $t\nablaeq 0,1$. 
By \varepsilonqref{dilab-eq-connection-t-rescaling}, we have \begin{align} \begin{split} \left\langle P'\big(-A^{E,2}_t\big)\,,\,\frac{\omega^E}{2}\right\rangle & = \left\langle P'\big(-4t(1-t)V_t^{-1}A^{E,2}_{\frac{1}{2}}V_t\big)\,,\,\frac{\omega^E}{2}\right\rangle \\ & = \big(4t(1-t)\big)^{\deg P'}V_t^{-1}\left\langle P'\big(-A^{E,2}_{\frac{1}{2}}\big)\,,\,\frac{\omega^E}{2}\right\rangle \\ & = \big(4t(1-t)\big)^{\deg P - 1}V_t^{-1}\left\langle P'\big(-A^{E,2}_{\frac{1}{2}}\big)\,,\,\frac{\omega^E}{2}\right\rangle \;, \varepsilonnd{split} \varepsilonnd{align} which is equivalent to \varepsilonqref{dilab-eq-chern-simons-form-t-rescaling}. \varepsilonnd{proof} In the sequel, we use the convention \begin{equation} \widetilde{P}\big(E, g^E\big) = \widetilde{P}_\frac{1}{2}\big(E, g^E\big) \index{Pinv tilde@$\widetilde{P}(\cdot,\cdot)$} \;. \varepsilonnd{equation} The following proposition is a refinement of Proposition \ref{dilab-prop-class-odd}. \begin{prop} We have \begin{align} \label{dilab-eq-prop-class-odd-close-explicite-1} \begin{split} & d_\mathcal{N} \widetilde{P}\big(E, g^E\big) \\ = \; & \frac{\sqrt{2\pi i}}{2}\varphi\; \big(\frac{\partial}{\partial t}V_t^{-1}\big)_{t=\frac{1}{2}} \big(\partial_N - \overline{\partial}_N\big) \int_0^1 \left\langle P'\Big(\big(\partial^E_N-\overline{\partial}^E_N+\frac{s\omega^E}{2}\big)^2\Big),\frac{\omega^E}{2}\right\rangle ds \;. \varepsilonnd{split} \varepsilonnd{align} In particular, for $p = 0,\cdots,n$, we have \begin{equation} \label{dilab-eq-prop-class-odd-close-explicite-2} \Big\{d_\mathcal{N}\widetilde{P}\big(E, g^E\big)\Big\}^{(p,p,\cdot)} = 0 \;. \varepsilonnd{equation} \varepsilonnd{prop} \begin{proof} By \varepsilonqref{dilab-eq-connection-t-rescaling}, we have \begin{align} \label{dilab-eq-pf-prop-class-odd-close-explicite-0} \begin{split} & \frac{\partial}{\partial t} \Big\{ \sqrt{2\pi i} \varphi\; P\big(-A^{E,2}_t\big) \Big\}_{t=\frac{1}{2}} \\ = \; & \frac{\partial}{\partial t} \Big\{ \sqrt{2\pi i} \varphi\; \big(4t(1-t)\big)^{\deg P}V_t^{-1}P\big(-A^{E,2}\big) \Big\}_{t=\frac{1}{2}} \;. \varepsilonnd{split} \varepsilonnd{align} By \varepsilonqref{dilab-eq3-pf-thm-gen-flat2trivialclass} and \varepsilonqref{dilab-eq-pf-prop-class-odd-close-explicite-0}, we have \begin{align} \label{dilab-eq-pf-prop-class-odd-close-explicite-1} \begin{split} & \frac{\partial}{\partial t} \Big\{ \sqrt{2\pi i} \varphi\; P\big(-A^{E,2}_t\big) \Big\}_{t=\frac{1}{2}} \\ = \; & \frac{\partial}{\partial t} \Big\{ \sqrt{2\pi i} \varphi\; \big(4t(1-t)\big)^{\deg P}V_t^{-1} \Big(P\big(-A^{E,2}\big) - P\big(-d_N^{E,2}\big)\Big) \Big\}_{t=\frac{1}{2}} \varepsilonnd{split} \varepsilonnd{align} By \varepsilonqref{dilab-eq-prop-annulation-curvature-1}, we have \begin{equation} \label{dilab-eq-pf-prop-class-odd-close-explicite-2} P\big(-A^{E,2}\big) - P\big(-d_N^{E,2}\big) = U \left( P\Big(\big(d_N^E+\frac{\omega^E}{2}\big)^2\Big) - P\big(d_N^{E,2}\big) \right) \;. \varepsilonnd{equation} As a consequence of Proposition \ref{dilab-prop-conseq-chern-weil} (cf. \cite[\textsection 1.5]{bgv}). , we get \begin{equation} \label{dilab-eq-pf-prop-class-odd-close-explicite-3} P\Big(\big(d_N^E+\frac{\omega^E}{2}\big)^2\Big) - P\big(d_N^{E,2}\big) = d_N \int_0^1 \left\langle P'\Big(\big(d_N^E+\frac{s\omega^E}{2}\big)^2\Big),\frac{\omega^E}{2} \right\rangle ds \;. 
\varepsilonnd{equation} Then \begin{align} \label{dilab-eq-pf-prop-class-odd-close-explicite-4} \begin{split} & U \left( P\Big(\big(d_N^E+\frac{\omega^E}{2}\big)^2\Big) - P\big(d_N^{E,2}\big) \right) \\ = \; & \big(\partial_N - \overline{\partial}_N\big) \int_0^1 \left\langle P'\Big(\big(\partial^E_N-\overline{\partial}^E_N+\frac{s\omega^E}{2}\big)^2\Big),\frac{\omega^E}{2} \right\rangle ds \;. \varepsilonnd{split} \varepsilonnd{align} By \varepsilonqref{dilab-eq-closeness-chern-simons}, \varepsilonqref{dilab-eq-pf-prop-class-odd-close-explicite-1}, \varepsilonqref{dilab-eq-pf-prop-class-odd-close-explicite-2} and \varepsilonqref{dilab-eq-pf-prop-class-odd-close-explicite-4}, we get (\ref{dilab-eq-prop-class-odd-close-explicite-1}). For $p=0,\cdots,n$, we have \begin{equation} V_t^{-1}|_{\Omega^{(p,p,\cdot)}}=(4t(1-t))^{-p} \;, \varepsilonnd{equation} whose derivative at $t=\frac{1}{2}$ is zero. This proves (\ref{dilab-eq-prop-class-odd-close-explicite-2}). \varepsilonnd{proof} \subsection{Multiplication of odd characteristic forms} \label{dilab-subsect-mul-odd-ch} \ Put \begin{equation} P\big(E, g^E\big) = \varphi P\big(-A^{E,2}_\frac{1}{2}\big) \;. \varepsilonnd{equation} \begin{prop} \label{dilab-prop-multi-cs} Let $P,Q$ be two invariant polynomials. The following identity holds \begin{equation} \label{dilab-eq-prop-multi-cs} \widetilde{PQ}\big(E, g^E\big) = \widetilde{P}\big(E, g^E\big) \wedge Q\big(E, g^E\big) + P\big(E, g^E\big) \wedge \widetilde{Q}\big(E, g^E\big) \;. \varepsilonnd{equation} \varepsilonnd{prop} \begin{proof} We have \begin{align} \label{dilab-proof-prop-multi-cs-eq-1} \begin{split} & \left\langle (PQ)'\big(-A^{E,2}\big),\frac{\omega^E}{2} \right\rangle \\ = \; & \left\langle P'\big(-A^{E,2}\big),\frac{\omega^E}{2} \right\rangle \wedge Q\big(-A^{E,2}\big) + P\big(-A^{E,2}\big) \wedge \left\langle Q'\big(-A^{E,2}\big),\frac{\omega^E}{2} \right\rangle \;, \varepsilonnd{split} \varepsilonnd{align} which implies \varepsilonqref{dilab-eq-prop-multi-cs}. \varepsilonnd{proof} For $(\alpha,\tilde{\alpha}), (\beta,\tilde{\beta})\in\Omega^{\mathrm{even}}(\mathcal{N}) \times\Omega^{\mathrm{odd}}(\mathcal{N})$, put \begin{equation} (\alpha,\tilde{\alpha})\cdot(\beta,\tilde{\beta}) = (\alpha\wedge\beta,\tilde{\alpha}\wedge\beta+\alpha\wedge\tilde{\beta}) \;. \varepsilonnd{equation} Then $\big(\Omega^{\mathrm{even}}(\mathcal{N})\times\Omega^{\mathrm{odd}}(\mathcal{N}),\,+\,,\,\cdot\;\big)$ is a commutative ring. Let $\left( \mathbb{C}\big[\mathfrak{gl}(r,\mathbb{C})\big] \right)^{\mathrm{GL}(r,\mathbb{C})}$ be the ring of invariant polynomials on $\mathfrak{gl}(r,\mathbb{C})$. \begin{prop} \label{dilab-prop-mul-odd-char} The following map is a ring homomorphism. \begin{align} \begin{split} \left( \mathbb{C}\big[\mathfrak{gl}(r,\mathbb{C})\big] \right)^{\mathrm{GL}(r,\mathbb{C})} & \rightarrow \Omega^{\mathrm{even}}(\mathcal{N})\times\Omega^{\mathrm {odd}}(\mathcal{N}) \\ P & \mapsto \left(P\big(E, g^E\big),\widetilde{P}\big(E, g^E\big)\right) \;. \varepsilonnd{split} \varepsilonnd{align} \varepsilonnd{prop} \begin{proof} This is a direct consequence of Proposition \ref{dilab-prop-multi-cs}. \varepsilonnd{proof} Let $F$ be another complex vector bundle over $\mathcal{N}$ satisfying the same properties as $E$. Let $r'$ be the rank of $F$. Let $g^F$ be a Hermitian metric on $F$. Let $Q$ be an invariant polynomial on $\mathfrak{gl}(r',\mathbb{C})$. 
\begin{defn} We define \begin{equation} \widetilde{P}\big(E, g^E\big)*\widetilde{Q}\big(F, g^F\big) = \widetilde{P}\big(E, g^E\big)Q\big(F, g^F\big) + P\big(E, g^E\big)\widetilde{Q}\big(F, g^F\big) \;. \end{equation} \end{defn} \begin{prop} \label{dilab-prop-mul-odd-char-EF} The differential form \begin{equation} q_*\big[\widetilde{P}\big(E, g^E\big)*\widetilde{Q}\big(F, g^F\big)\big] \in \Omega^\mathrm{odd}(M) \end{equation} is closed. Its cohomology class is independent of $g^E$ and $g^F$. \end{prop} \begin{proof} The argument leading to Proposition \ref{dilab-prop-class-odd} still works: the key step is to show that \begin{equation} 2 d_\mathcal{N} \Big(\widetilde{P}\big(E, g^E\big)*\widetilde{Q}\big(F, g^F\big)\Big) = \sqrt{2\pi i}\varphi \frac{\partial}{\partial t} \Big( P(-A^{E,2}_t)Q(-A^{F,2}_t) \Big)_{t=1/2} \;. \end{equation} \end{proof}
\section{A Riemann-Roch-Grothendieck formula} \label{dilab-section-rrg} The purpose of this section is to establish a Riemann-Roch-Grothendieck formula, which expresses the odd Chern classes associated with the flat vector bundle $H^{\cdot}\left(N,E\right)$ in terms of the exotic characteristic classes that were defined in \textsection \ref{dilab-subsection-chern-simons-char}. This section is organized as follows. In \textsection \ref{dilab-subsection-flat-superconnection}, we introduce the infinite dimensional flat vector bundle $\mathscr{E}=\Omega^{(0,\cdot)}(N,E)$. In \textsection \ref{dilab-subsec-vertical-metrics}, we equip $TN$ with a fiberwise K{\"a}hler metric and $E$ with a Hermitian metric. In \textsection \ref{dilab-subsec-levi-civita-supperconnection}, we introduce the Levi-Civita superconnection on $\mathscr{E}$. In \textsection \ref{dilab-subsec-index-bundle}, we define the index bundle, which is the fiberwise Dolbeault cohomology of $E$. We show that the even characteristic form of the index bundle is a constant function on $M$. In \textsection \ref{dilab-subsec-riemann-roch-grothendieck}, we construct differential forms $\alpha_t$, $\beta_t$ in the same way as in \cite[\textsection 3(h)]{bl}. We state explicit formulas calculating their asymptotics as $t\rightarrow\infty$ and $t\rightarrow 0$. As a consequence of these formulas, we obtain a Riemann-Roch-Grothendieck formula. In \textsection \ref{dilab-subsec-intermediate-rrg}, we prove the asymptotics of $\alpha_t$, $\beta_t$ stated in \textsection \ref{dilab-subsec-riemann-roch-grothendieck}. The techniques applied in the proof were initiated by Bismut-Gillet-Soul{\'e} \cite[\textsection 1(h)]{bgs3} and Bismut-K\"{o}hler \cite{bk}. The key idea is a Lichnerowicz formula involving additional Grassmannian variables $da$, $d\bar{a}$. In \textsection \ref{dilab-subsect-torsionform}, following \cite[\textsection 3(j)]{bl}, we construct analytic torsion forms on $M$, which transgress the R.R.G. formula at the level of differential forms. \subsection{A flat superconnection and its dual} \label{dilab-subsection-flat-superconnection} \ Set \begin{equation} \label{dilab-eq-def-inf-E} \mathscr{E}^q = \mathscr{C}^\infty(N,\Lambda^q(\overline{T^*N})\otimes E) \;,\hspace{5mm} \mathscr{E} = \bigoplus_q \mathscr{E}^q \index{Emathscr@$\mathscr{E}$}\;. \end{equation} Then $\mathscr{E}$ is an infinite dimensional flat vector bundle over $M$. By \eqref{dilab-eq2-triple-splitting-forms}, we have the identification \begin{equation} \Omega^\cdot(M,\mathscr{E}) = \Omega^{(0,\cdot,\cdot)}(\mathcal{N},E) \;.
\varepsilonnd{equation} Let $\nabla^\mathscr{E} \index{nablaEmathscr@$\nabla^\mathscr{E}$}$ be the restriction of $d_M^E$ to $\Omega^\cdot(M,\mathscr{E})$. Then $\nabla^\mathscr{E}$ is the canonical flat connection on $\mathscr{E}$. Set \begin{equation} \label{dilab-eq-decomposition-superconnection-infi-dim} A^\mathscr{E} = \overline{\partial}^E_N + \nabla^\mathscr{E} \index{AEmathscr@$A^\mathscr{E}$} \;, \varepsilonnd{equation} which acts on $\Omega^\cdot(M,\mathscr{E})$. Then $A^\mathscr{E}$ is a superconnection on $\mathscr{E}$. Recall that the operator ${A^E}''$ on $\Omega^\cdot(\mathcal{N},E)$ was defined in \varepsilonqref{dilab-eq-def-flat-superconnection}. We have \begin{equation} A^\mathscr{E} = {A^E}''\big|_{\Omega^{(0,\cdot,\cdot)}(\mathcal{N},E)} \;. \varepsilonnd{equation} Then, by \varepsilonqref{dilab-eq-AEsquare=0}, we have \begin{equation} \label{dilab-eq-Asquare=0} A^{\mathscr{E},2} = 0 \;, \varepsilonnd{equation} which is equivalent to the following identities \begin{equation} \overline{\partial}^{E,2}_N = \nabla^{\mathscr{E},2} = \big[ \overline{\partial}^E_N , \nabla^\mathscr{E} \big] = 0 \;. \varepsilonnd{equation} Set \begin{equation} \overline{\mathscr{E}}^* = \mathscr{C}^\infty(N,\Lambda^\cdot(T^*N)\otimes\Lambda^n(\overline{T^*N})\otimes \overline{E}^*) \index{Emathscr dual@$\overline{\mathscr{E}}^*$} \;. \varepsilonnd{equation} Then $\overline{\mathscr{E}}^*$ is an infinite dimensional flat vector bundle over $M$. We have the identification \begin{equation} \Omega^\cdot(M,\overline{\mathscr{E}}^*) = \Omega^{(\cdot,n,\cdot)}(\mathcal{N},\overline{E}^*) \;. \varepsilonnd{equation} Let $\nabla^{\overline{\mathscr{E}}^*}$ be the restriction of $d_M^{\overline{E}^*}$ to $\Omega^\cdot(M,\overline{\mathscr{E}}^*)$. Then $\nabla^{\overline{\mathscr{E}}^*}$ is the canonical flat connection on $\overline{\mathscr{E}}^*$. Set \begin{equation} \label{dilab-eq-decomposition-superconnection-infi-dim-dual} A^{\overline{\mathscr{E}}^*} = \partial^{\overline{E}^*}_N + \nabla^{\overline{\mathscr{E}}^*} \index{AEmathscr dual@$A^{\overline{\mathscr{E}}^*}$}\;, \varepsilonnd{equation} which acts on $\Omega^\cdot(M,\overline{\mathscr{E}}^*)$. Then $A^{\overline{\mathscr{E}}^*}$ is a superconnection on $\overline{\mathscr{E}}^*$. Recall that the operator ${A^{\overline{E}^*}}'$ on $\Omega^\cdot(\mathcal{N},\overline{E}^*)$ was defined in \varepsilonqref{dilab-eq-def-flat-superconnection-antidual}. We have \begin{equation} A^{\overline{\mathscr{E}}^*} = {A^{\overline{E}^*}}'\big|_{\Omega^{(\cdot,n,\cdot)}(\mathcal{N},\overline{E}^*)} \;. \varepsilonnd{equation} Then, by \varepsilonqref{dilab-eq-AEdualsquare=0}, we have \begin{equation} \label{dilab-eq-Adualsquare=0} A^{\overline{\mathscr{E}}^*,2} =0 \;. \varepsilonnd{equation} Let \begin{equation} (\cdot,\cdot)_E : \overline{E}^* \times E \rightarrow \mathbb{C} \varepsilonnd{equation} be the canonical sesquilinear form, which extends to \begin{equation} (\cdot,\cdot)_E : \big( \Lambda^p(T^*N)\otimes\Lambda^n(\overline{T^*N})\otimes \overline{E}^* \big) \times \big( \Lambda^q(\overline{T^*N})\otimes E \big) \rightarrow \Lambda^{p+q}(T^*N)\otimes\Lambda^n(\overline{T^*N}) \;. \varepsilonnd{equation} We define \begin{align} (\cdot,\cdot)_\mathscr{E} : \overline{\mathscr{E}}^* \times \mathscr{E} & \rightarrow \mathbb{C} \nablaonumber\\ (\alpha,\beta) & \mapsto \int_N (\alpha,\beta)_E \;. \varepsilonnd{align} Thus $\overline{\mathscr{E}}^*$ is formally the anti-dual of $\mathscr{E}$. 
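Let us note in passing that, in the definition of $(\cdot,\cdot)_\mathscr{E}$, only the component of $(\alpha,\beta)_E$ of fiberwise bidegree $(n,n)$ contributes to the integral over $N$; in other words, for $\alpha$ of holomorphic degree $p$ and $\beta\in\mathscr{E}^q$, we have
\begin{equation}
(\alpha,\beta)_\mathscr{E} = 0 \hspace{5mm}\text{unless } p+q=n \;.
\end{equation}
This observation may help the reader to keep track of degrees in the duality identities below.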
For $\alpha\in\Omega^\cdot(M,\overline{\mathscr{E}}^*)$ and $\beta\in\Omega^\cdot(M,\mathscr{E})$, the following identities hold: \begin{align} \label{dilab-eq-dualrelation-connectoin-infi-dim} \begin{split} (\partial^{\overline{E}^*}_N\alpha,\beta)_\mathscr{E} + (-1)^{\deg\alpha}(\alpha,\overline{\partial}^E_N\beta)_\mathscr{E} & = 0 \;,\\ (\nabla^{\overline{\mathscr{E}}^*}\alpha,\beta)_\mathscr{E} + (-1)^{\deg\alpha}(\alpha,\nabla^\mathscr{E}\beta)_\mathscr{E} & = d_M(\alpha,\beta)_\mathscr{E} \;. \end{split} \end{align} By (\ref{dilab-eq-decomposition-superconnection-infi-dim}), (\ref{dilab-eq-decomposition-superconnection-infi-dim-dual}) and (\ref{dilab-eq-dualrelation-connectoin-infi-dim}), we get \begin{equation} (A^{\overline{\mathscr{E}}^*}\alpha,\beta)_\mathscr{E} + (-1)^{\deg\alpha}(\alpha,A^\mathscr{E}\beta)_\mathscr{E} = d_M(\alpha,\beta)_\mathscr{E} \;, \end{equation} i.e., $A^{\overline{\mathscr{E}}^*}$ is the dual superconnection of $A^\mathscr{E}$ in the sense of \cite[Definition 1.5]{bl}.
\subsection{Hermitian metrics and connections on $TN$ and $E$} \label{dilab-subsec-vertical-metrics} \ From now on, we will assume that $N$ is a K\"{a}hler manifold. Let $J : T_\mathbb{R}N\rightarrow T_\mathbb{R}N \index{J@$J$}$ be the complex structure of $N$. \begin{prop} There exists a fiberwise K{\"a}hler metric $g^{TN}$ on $TN$, i.e., a Hermitian metric on $TN$ whose restriction to each fiber $N$ is a K{\"a}hler metric. \end{prop} \begin{proof} Let $(U_i)$ be a locally finite cover of $M$ by open balls. Let $(f_i : U_i \rightarrow \mathbb{R})$ be a partition of unity subordinate to $(U_i)$. For each $U_i$, we have the trivialization $\varphi_i : q^{-1}(U_i) \rightarrow N \times U_i$ as a flat fibration. Let $p_i : N \times U_i \rightarrow N$ be the canonical projection. Let $g^{TN}_0$ be a K{\"a}hler metric on $TN_0$. Set \begin{equation} g^{TN} = \sum_i \big(q^*f_i\big)\big(\varphi_i^* p_i^* g^{TN}_0\big) \;. \end{equation} Then $g^{TN}$ satisfies the desired properties. \end{proof} Let $g^{TN}\index{gTN@$g^{TN}$}$ be a fiberwise K{\"a}hler metric on $TN$. Let \begin{equation} \omega\in{\mathscr{C}^\infty}\big(\mathcal{N},T^*N\otimes\overline{T^*N}\big) \index{omega@$\omega$} \end{equation} be the associated fiberwise K{\"a}hler form. Let \begin{equation} dv_N = \frac{\omega^n}{n!} \in{\mathscr{C}^\infty}\big(\mathcal{N},\Lambda^{2n}(T^*_\mathbb{R}N)\big) \end{equation} be the induced fiberwise volume form. Let $g^{\overline{TN}}$ and $g^{\Lambda^\cdot(\overline{T^*N})}$ be the Hermitian metrics on $\overline{TN}$ and $\Lambda^\cdot(\overline{T^*N})$ induced by $g^{TN}$. Let $g^{T_\mathbb{R}N}$ be the Riemannian metric on $T_\mathbb{R}N$ induced by $g^{TN}$. Let $\nabla^{T_\mathbb{R}N}\index{nablaTNR@$\nabla^{T_\mathbb{R}N}$}$ be the connection on $T_{\mathbb R}N$ associated with $\big(g^{T_{\mathbb R}N},T^{H}\mathcal{N}\big)$ in the same way as in \textsection \ref{dilab-subsec-fibration-connection-metric}. Recall that the connection $A^{TN}$ on $TN$ is defined by \eqref{dilab-eq-def-superconnection}. In the sequel, we change the notation as follows: \begin{equation} \nabla^{TN} = A^{TN} \index{nablaTN@$\nabla^{TN}$}\;. \end{equation} Since the metric $g^{TN}$ is fiberwise K\"{a}hler, the connection on $T_{\mathbb{R}}N$ induced by $\nabla^{TN}$ along the fiber $N$ coincides with $\nabla^{T_\mathbb{R}N}$.
Moreover, the complex structure of $T_{\mathbb{R}}N$ is flat with respect to the flat connection on $\mathcal{N}$. By \varepsilonqref{eq:corr1}, \varepsilonqref{eq:corr2} and \varepsilonqref{dilab-eq-def-omegaE}, these two connections also coincide in horizontal directions. The conclusion is that the connection $\nabla^{T_{\mathbb R}N}$ preserves the complex structure $J$, and induces the connection $\nabla^{TN}$ on $TN$. Let $\nabla^{\overline{TN}}\index{nablaTNbar@$\nabla^{\overline{TN}}$}$ and $\nabla^{\Lambda^\cdot(\overline{T^*N})}\index{nablalambdaTN@$\nabla^{\Lambda^\cdot(\overline{T^*N})}$}$ be the connections on $\overline{TN}$ and $\Lambda^\cdot(\overline{T^*N})$ induced by $\nabla^{TN}$. Let $g^E\index{gE@$g^E$}$ be a Hermitian metric of $E$. Let $\nabla^E$ be the connection on $E$ defined by \varepsilonqref{dilab-eq-def-superconnection}. Let $g^{\Lambda^\cdot(T^*_\mathbb{C}N)}$ be the $\mathbb{C}$-bilinear form on $\Lambda^\cdot(T^*_\mathbb{C}N)$ induced by $g^{TN}$. Let \begin{equation} * : \Lambda^\cdot(T^*_\mathbb{C}N) \rightarrow \Lambda^{2n-\cdot}(T^*_\mathbb{C}N) \index{star@$*$} \varepsilonnd{equation} be the usual Hodge operator acting on $\Lambda^\cdot(T^*_\mathbb{C}N)$, i.e., for $\alpha, \beta \in \Lambda^\cdot(T^*_\mathbb{C}N)$, \begin{equation*} g^{\Lambda^\cdot(T^*_\mathbb{C}N)}(\alpha,\beta) dv_N = \alpha \wedge \overline{* \beta} \;. \varepsilonnd{equation*} In particular, $*$ maps $\Lambda^\cdot(\overline{T^*N})$ to $\Lambda^n(T^*N)\otimes\Lambda^{n-\cdot}(\overline{T^*N})$. The Hermitian metric $g^E$ induces an identification $g^E : E \rightarrow \overline{E}^*$. The Hodge operator $*$ extends to \begin{equation} *^E : \Lambda^\cdot(\overline{T^*N}) \otimes E \rightarrow \Lambda^n (T^*N) \otimes \Lambda^{n-\cdot}(\overline{T^*N}) \otimes \overline{E}^* \index{starE@$*^E$} \;. \varepsilonnd{equation} Let $g^\mathscr{E}$ be a Hermitian metric on $\mathscr{E}$, such that for $\alpha,\beta\in\mathscr{E}$, \begin{equation} g^\mathscr{E}(\alpha,\beta) = \frac{1}{(2\pi)^n} \int_N (g^{\Lambda^\cdot(\overline{T^*N})}\otimes g^E)(\alpha,\beta)dv_N = \frac{(-1)^{\deg\alpha \, \deg\beta}}{(2\pi)^n}(*^E\alpha,\beta)_\mathscr{E} \index{gEmathscr@$g^\mathscr{E}$}\;. \varepsilonnd{equation} Set \begin{equation} \label{dilab-eq-def-omega-mathscrE} \omega^{\mathscr{E}} = \big(g^\mathscr{E}\big)^{-1}\nabla^{\overline{\mathscr{E}}^*}g^\mathscr{E} \in \mathscr{C}^\infty(M, T^*M \otimes \mathrm{End}(\mathscr{E})) \index{omegaEmathscr@$\omega^\mathscr{E}$} \varepsilonnd{equation} and \begin{equation} k_N = \big(dv_N\big)^{-1} d_M dv_N \in \mathscr{C}^\infty(\mathcal{N}, T^*M) \index{kN@$k_N$}\;. \varepsilonnd{equation} We define $\omega^{TN}$ in the same way as in \varepsilonqref{dilab-eq-def-omegaE}. Let $\omega^{\Lambda^{\cdot}\left(\overline{T^{*}N}\right)} \index{omegaLambdaTN@$\omega^{\Lambda^{\cdot}\left(\overline{T^{*}N}\right)}$}$ be the induced action of $\omega^{TN}$ on $\Lambda^{\cdot}(\overline{T^{*}N})$. Then $\omega^{\Lambda^{\cdot}\left(\overline{T^{*}N}\right)}$ is just the horizontal variation of the metric $g^{\Lambda^{\cdot}\left(\overline{T^{*}N}\right)}$ on $\Lambda^{\cdot}\left(\overline{T^{*}N}\right)$ with respect to the flat connection. We have \begin{equation} \label{dilab-eq-omega-big-e} \omega^\mathscr{E} = \omega^{\Lambda^\cdot(\overline{T^*N})} + \omega^E + k_N \;. 
\varepsilonnd{equation} \subsection{The Levi-Civita superconnection} \label{dilab-subsec-levi-civita-supperconnection} \ Recall that $A^\mathscr{E}$ and $A^{\overline{\mathscr{E}}^*}$ were defined in \varepsilonqref{dilab-eq-decomposition-superconnection-infi-dim} and \varepsilonqref{dilab-eq-decomposition-superconnection-infi-dim-dual}. Set \begin{equation} A^{\mathscr{E},*} = (*^E)^{-1}A^{\overline{\mathscr{E}}^*}*^E \index{AEmathscradjoint@$A^{\mathscr{E},*}$}\;, \varepsilonnd{equation} which acts on $\Omega^\cdot(M,\mathscr{E})$. Then $A^{\mathscr{E},*}$ is the adjoint superconnection of $A^\mathscr{E}$ (with respect to $g^\mathscr{E}$) in the sense of \cite[Definition 1.6]{bl}. By \varepsilonqref{dilab-eq-Adualsquare=0}, we have \begin{equation} \label{dilab-eq-Astarsquare=0} A^{\mathscr{E},*,2} = 0 \;. \varepsilonnd{equation} Set \begin{align} \label{dilab-eq-def-c-d} \begin{split} C^\mathscr{E} = \; & 2^{-N^{\Lambda^\cdot(T^*M)}} \big(A^{\mathscr{E},*} + A^\mathscr{E}\big) 2^{N^{\Lambda^\cdot(T^*M)}} \index{CEmathscr@$C^\mathscr{E}$}\;,\\ D^\mathscr{E} = \; & 2^{-N^{\Lambda^\cdot(T^*M)}} \big(A^{\mathscr{E},*} - A^\mathscr{E}\big) 2^{N^{\Lambda^\cdot(T^*M)}} \index{DEmathscr@$D^\mathscr{E}$}\;. \varepsilonnd{split} \varepsilonnd{align} By \varepsilonqref{dilab-eq-Asquare=0} and \varepsilonqref{dilab-eq-Astarsquare=0}, we have \begin{equation} \label{dilab-eq-bianchi} C^{\mathscr{E},2} = -D^{\mathscr{E},2} = 2^{-N^{\Lambda^\cdot(T^*M)}} \big[A^\mathscr{E},A^{\mathscr{E},*}\big] 2^{N^{\Lambda^\cdot(T^*M)}}\;, \hspace{5mm} \big[C^\mathscr{E},D^\mathscr{E}\big] = 0 \;. \varepsilonnd{equation} Let $\overline{\partial}^{E,*}_N$ be the formal adjoint of $\overline{\partial}^E_N$ with respect to $g^\mathscr{E}$. Set \begin{equation} D^E_N = {\overline{\partial}}^E_N + {\overline{\partial}}^{E,*}_N \index{DEN@$D^E_N$} \;, \varepsilonnd{equation} which acts on $\mathscr{E}$. Then $D^E_N$ is the fiberwise $\text{spin}^c$-Dirac operator associated with $g^{TN}/2$. We recall that $\nabla^\mathscr{E}$ is defined in \textsection\ref{dilab-subsection-flat-superconnection}. Let $\nabla^{\mathscr{E},*}\index{nablaEmathscradjoint@$\nabla^{\mathscr{E},*}$}$ be the adjoint connection. Then \begin{equation} \nabla^{\mathscr{E},*} = \nabla^\mathscr{E} + \omega^\mathscr{E} \;. \varepsilonnd{equation} Set \begin{equation} \label{dilab-eq-def-nEu-nE} \nabla^{\mathscr{E},\mathrm{u}} = \frac{1}{2}\big(\nabla^{\mathscr{E},*}+\nabla^\mathscr{E}\big) = \nabla^\mathscr{E} + \frac{1}{2}\omega^\mathscr{E} \index{nablaEmathscru@$\nabla^{\mathscr{E},\mathrm{u}}$}\;, \varepsilonnd{equation} which is a unitary connection on $\mathscr{E}$. We have \begin{equation} \label{dilab-eq-lc-superconnection} C^\mathscr{E} = D^E_N + \nabla^{\mathscr{E},\mathrm{u}} \;,\hspace{5mm} D^\mathscr{E} = \overline{\partial}^{E,*}_N - \overline{\partial}^E_N + \frac{1}{2}\omega^\mathscr{E} \;. \varepsilonnd{equation} Recall that the Levi-Civita superconnection was introduced in \cite{b}. \begin{prop} \label{dilab-prop-lc-superconnection} The superconnection $C^\mathscr{E}$ is the Levi-Civita superconnection associated with $\big(T^H\mathcal{N}, g^{T_{\mathbb R}N}, g^E\big)$. \varepsilonnd{prop} \begin{proof} Since the metric $g^{TN}$ is fibrewise K\"{a}hler, up to the constant $\sqrt{2}$, the operator $D^E_N$ is the standard $\text{spin}^c$-Dirac operator along the fiber $N$. 
As we saw in \textsection \ref{dilab-subsec-vertical-metrics}, the connection $\nabla^{T_{\mathbb{R}}N}$ induced by $\nabla^{TN}$ is exactly the connection that was considered in \cite{b}. Finally, since our fibration is flat, the term in the Levi-Civita superconnection that contains the curvature of our fibration vanishes identically. This completes the proof. \varepsilonnd{proof} For $t>0$, let $C^\mathscr{E}_t,D^\mathscr{E}_t\index{CEmathscrt@$C^\mathscr{E}_t$}\index{DEmathscrt@$D^\mathscr{E}_t$}$ be $C^\mathscr{E},D^\mathscr{E}$ associated with the rescaled metric $g^{TN}/t$. By \varepsilonqref{dilab-eq-lc-superconnection}, we have \begin{equation} \label{dilab-eq-lc-superconnection-rescaling} C^\mathscr{E}_t = t\overline{\partial}^{E,*}_N + \overline{\partial}^E_N + \nabla^{\mathscr{E},\mathrm{u}} \;,\hspace{5mm} D^\mathscr{E}_t = t\overline{\partial}^{E,*}_N - \overline{\partial}^E_N + \frac{1}{2}\omega^\mathscr{E} \;. \varepsilonnd{equation} \subsection{The index bundle and its characteristic classes} \label{dilab-subsec-index-bundle} \ Let $H^\cdot(N,E_0)$ be the Dolbeault cohomology of $E_{0}$. The action of $G$ on $E_0$ induces an action of $G$ on $H^\cdot(N,E_0)$. Set \begin{equation} H^\cdot(N,E) = P_G \times_G H^\cdot(N,E_0) \index{HNE@$H^\cdot(N,E)$}\;. \varepsilonnd{equation} Let $\nablaabla^{H^\cdot(N,E)} \index{nablaHNE@$\nablaabla^{H^\cdot(N,E)}$}$ be the flat connection on $H^\cdot(N,E)$ induced by the flat connection on $P_G$. For $s\in\mathscr{C}^\infty(M,\mathscr{E})$ satisfying $\overline{\partial}^E_N s = 0$, let \begin{equation} [s]\in\mathscr{C}^\infty(M,H^\cdot(N,E)) \varepsilonnd{equation} be the corresponding cohomology class. Then \begin{equation} \label{dilab-def-flat-nabla-H} \nablaabla^{H^\cdot(N,E)}[s] = [\nabla^\mathscr{E} s] \in \Omega^1(M,H^\cdot(N,E))\;. \varepsilonnd{equation} By Hodge theory, there is a canonical identification \begin{equation} \label{dilab-eq-hodge-id} H^\cdot(N,E) \simeq \mathrm{ker} D^E_N \subseteq \mathscr{E} \;. \varepsilonnd{equation} Let $g^{H^\cdot(N,E)}$ be the metric on $H^\cdot(N,E)$ induced by $g^\mathscr{E}$ via the identification \varepsilonqref{dilab-eq-hodge-id}. Let $\nablaabla^{H^\cdot(N,E),*} \index{nablaHNEad@$\nablaabla^{H^\cdot(N,E),*}$}$ be the adjoint connection of $\nablaabla^{H^\cdot(N,E)}$ with respect to $g^{H^\cdot(N,E)}$. Set \begin{align} \begin{split} \nablaabla^{H^\cdot(N,E),\mathrm{u}} = \; & \frac{1}{2}\big(\nablaabla^{H^\cdot(N,E),*} + \nablaabla^{H^\cdot(N,E)} \big) \index{nablaHNEu@$\nablaabla^{H^\cdot(N,E),\mathrm{u}}$} \;, \\ \omega^{H^\cdot(N,E)} = \; & \nablaabla^{H^\cdot(N,E),*} - \nablaabla^{H^\cdot(N,E)} \index{omegaHNE@$\omega^{H^\cdot(N,E)}$} \;. \varepsilonnd{split} \varepsilonnd{align} Then $\nablaabla^{H^\cdot(N,E),\mathrm{u}}$ is a unitary connection and $\omega^{H^\cdot(N,E)} \in {\mathscr{C}^\infty}\big(M,\mathrm{End}(H^\cdot(N,E))\big)$. Put \begin{equation} \chi(N,E) = \sum_p (-1)^p \dim H^p(N,E) \index{chiNE@$\chi(N,E)$}\;. \varepsilonnd{equation} \begin{prop} \label{dilab-prop-chern-index-theorem} For $t>0$, we have \begin{equation} \label{dilab-eq-prop-chern-index-theorem} \varphi\tr_\mathrm{s}\big[\varepsilonxp(D^{\mathscr{E},2}_t)\big] = \chi(N,E) \;. 
\end{equation} \end{prop} \begin{proof} By the local families index theorem \cite{b}, as $t\rightarrow 0$, \begin{equation} \label{dilab-eq-1-proof-prop-chern-index-theorem} \varphi\tr_\mathrm{s}\big[\exp(D^{\mathscr{E},2}_t)\big] = q_*\big[\mathrm{Td}(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\big] + \mathscr{O}(\sqrt{t}) \;. \end{equation} Furthermore, \begin{align} \label{dilab-eq-2-proof-prop-chern-index-theorem} \begin{split} \frac{\partial}{\partial t}\tr_\mathrm{s}\big[\exp(D^{\mathscr{E},2}_t)\big] = \; & \tr_\mathrm{s}\big[\big[D^\mathscr{E}_t,\frac{\partial}{\partial t}D^\mathscr{E}_t\big]\exp(D^{\mathscr{E},2}_t)\big] \\ = \; & \tr_\mathrm{s}\big[\big[D^\mathscr{E}_t,(\frac{\partial}{\partial t}D^\mathscr{E}_t)\exp(D^{\mathscr{E},2}_t)\big]\big] = 0 \;. \end{split} \end{align} By Proposition \ref{dilab-thm-gen-flat2trivialclass} and the Riemann-Roch-Hirzebruch formula, we have \begin{equation} \label{dilab-eq-3-proof-prop-chern-index-theorem} q_*\big[\mathrm{Td}(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\big] = \chi(N,E) \;. \end{equation} Then \eqref{dilab-eq-prop-chern-index-theorem} follows from \eqref{dilab-eq-1-proof-prop-chern-index-theorem}--\eqref{dilab-eq-3-proof-prop-chern-index-theorem}. \end{proof}
\subsection{A Riemann-Roch-Grothendieck formula} \label{dilab-subsec-riemann-roch-grothendieck} \ For $t>0$, set \begin{align} \label{dilab-eq-def-alphatbetat} \begin{split} \alpha_t & = \sqrt{2\pi i}\varphi\tr_\mathrm{s}\Big[D^\mathscr{E}_t\exp(D^{\mathscr{E},2}_t)\Big] \in\Omega^\mathrm{odd}(M) \index{a lphat@$\alpha_t$}\;,\\ \beta_t & = \varphi\tr_\mathrm{s}\Big[ \frac{N^{\Lambda^\cdot(\overline{T^*N})}}{2}(1+2D^{\mathscr{E},2}_t) \exp(D^{\mathscr{E},2}_t) \Big] \in\Omega^\mathrm{even}(M) \index{b etat@$\beta_t$}\;. \end{split} \end{align} \begin{prop} \label{dilab-prop-constant-class} For $t>0$, the differential form $\alpha_t$ is closed. Its cohomology class is independent of $g^{TN}$, $g^{E}$ and $t$. \end{prop} \begin{proof} By \eqref{dilab-eq-bianchi}, we have \begin{equation} d_M\sqrt{2\pi i}\varphi\tr_\mathrm{s}\big[D^\mathscr{E}_t\exp(D^{\mathscr{E},2}_t)\big] = \varphi\tr_\mathrm{s}\big[\big[C^\mathscr{E}_t,D^\mathscr{E}_t\exp(D^{\mathscr{E},2}_t)\big]\big] = 0 \;, \end{equation} which proves the closedness of $\alpha_t$. Then, by the functoriality of our constructions, $[\alpha_t]\in H^\cdot(M)$ is independent of the metrics. In particular, it is independent of $t$. \end{proof} \begin{prop} \label{dilab-prop-trans-alpha-beta} For $t>0$, the following identity holds: \begin{equation} \label{dilab-eq-prop-trans-alpha-beta} \frac{\partial}{\partial t}\alpha_t = \frac{1}{t}d_M \beta_t \;. \end{equation} \end{prop} \begin{proof} Set \begin{equation} \mathcal{N}_+ = \mathcal{N}\times\mathbb{R}_+ \;,\hspace{5mm} M_+ = M\times\mathbb{R}_+ \index{Nmathcal+@$\mathcal{N}_+$}\index{M+@$M_+$}\;. \end{equation} Let \begin{equation} q_+ = q \oplus \mathrm{id}_{\mathbb{R}_+} : \mathcal{N}_+\rightarrow M_+ \end{equation} be the obvious projection. Let $t$ be the coordinate on $\mathbb{R}_+$. We equip $TN$ with the metric $\frac{1}{t}g^{TN}$. Let $\mathscr{E}_+\index{Emathscr+@$\mathscr{E}_+$}$, $\omega^{\mathscr{E}_+}$, $C^{\mathscr{E}_+}$, $D^{\mathscr{E}_+}$ be the corresponding objects associated with the new fibration.
The following identities hold (cf. \varepsilonqref{dilab-eq-def-omega-mathscrE}) \begin{align} \label{dilab-proof-prop-trans-alpha-beta-eq-0} \begin{split} d_{M_+} & = d_M + dt\wedge\frac{\partial}{\partial t} \;,\\ \omega^{\mathscr{E}_+} & = \omega^{\mathscr{E}} + \frac{1}{t}dt\wedge\big(N^{\Lambda^\cdot(\overline{T^*N})}-n\big) \;. \varepsilonnd{split} \varepsilonnd{align} Then, by \varepsilonqref{dilab-eq-lc-superconnection} and \varepsilonqref{dilab-eq-lc-superconnection-rescaling}, we get \begin{align} \begin{split} C^{\mathscr{E}_+} & = C^{\mathscr{E}}_t + dt\wedge\frac{\partial}{\partial t} + \frac{1}{2t}dt\wedge\big(N^{\Lambda^\cdot(\overline{T^*N})}-n\big) \;,\\ D^{\mathscr{E}_+} & = D^{\mathscr{E}}_t + \frac{1}{2t}dt\wedge\big(N^{\Lambda^\cdot(\overline{T^*N})}-n\big) \;. \varepsilonnd{split} \varepsilonnd{align} Thus \begin{align} \label{dilab-proof-prop-trans-alpha-beta-eq-1} \begin{split} & \sqrt{2\pi i}\varphi\tr_\mathrm{s}\big[D^{\mathscr{E}_+}\varepsilonxp(D^{\mathscr{E}_+,2})\big] \\ = \; & \sqrt{2\pi i}\varphi\tr_\mathrm{s}\big[D^{\mathscr{E}}\varepsilonxp(D^{\mathscr{E},2})\big] + \frac{1}{2t}dt\wedge\varphi\tr_\mathrm{s}\big[\big(N^{\Lambda^\cdot(\overline{T^*N})}-n\big)\varepsilonxp(D^{\mathscr{E},2})\big] \\ & + \sqrt{2\pi i}\varphi\tr_\mathrm{s}\Big[D^{\mathscr{E}}\varepsilonxp\big(D^{\mathscr{E},2} +\big[D^{\mathscr{E}},\frac{1}{2t}dt\wedge N^{\Lambda^\cdot(\overline{T^*N})}\big]\big)\Big] \\ = \; & \alpha_t + \frac{1}{2t}dt\wedge\varphi\tr_\mathrm{s}\big[N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp(D^{\mathscr{E},2})\big] - \chi(N,E)\frac{n}{2t}dt \\ & + \sqrt{2\pi i}\varphi\tr_\mathrm{s}\Big[D^{\mathscr{E}}\big[D^{\mathscr{E}}, \varepsilonxp\big(D^{\mathscr{E},2}+ \frac{1}{2t}dt\wedge N^{\Lambda^\cdot(\overline{T^*N})}\big)\big]\Big] \\ = \; & \alpha_t + \frac{1}{2t}dt\wedge\varphi\tr_\mathrm{s}\big[N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp(D^{\mathscr{E},2})\big] - \chi(N,E)\frac{n}{2t}dt \\ & + \sqrt{2\pi i}\varphi\tr_\mathrm{s}\Big[\big[D^{\mathscr{E}},D^{\mathscr{E}}\big] \varepsilonxp\big(D^{\mathscr{E},2}+ \frac{1}{2t}dt\wedge N^{\Lambda^\cdot(\overline{T^*N})}\big)\Big] \\ = \; & \alpha_t +\frac{1}{2t}dt\wedge\beta_t - \chi(N,E)\frac{n}{2t}dt \in \Omega^\cdot(M_+) \;. \varepsilonnd{split} \varepsilonnd{align} By Proposition \ref{dilab-prop-constant-class}, we have \begin{equation} \label{dilab-proof-prop-trans-alpha-beta-eq-2} d_{M_+}\sqrt{2\pi i}\varphi\tr_\mathrm{s}\big[D^{\mathscr{E}_+}\varepsilonxp(D^{\mathscr{E}_+,2})\big] = 0 \;. \varepsilonnd{equation} By \varepsilonqref{dilab-proof-prop-trans-alpha-beta-eq-0}, \varepsilonqref{dilab-proof-prop-trans-alpha-beta-eq-1} and \varepsilonqref{dilab-proof-prop-trans-alpha-beta-eq-2}, we get \varepsilonqref{dilab-eq-prop-trans-alpha-beta}. \varepsilonnd{proof} Set $f(x)=xe^{x^2}$. Following \cite[Definition 1.7]{bl}, we define the odd characteristic form \begin{equation} f(H^\cdot(N,E), \nablaabla^{H^\cdot(N,E)}, g^{H^\cdot(N,E)}) = \sqrt{2\pi i}\varphi\tr_\mathrm{s}\big[ f(\omega^{H^\cdot(N,E)}/2) \big] \in\Omega^\mathrm{odd}(M) \;. \varepsilonnd{equation} Put \begin{equation} \chi'(N,E) = \sum_p(-1)^pp\dim H^p(N,E) \index{chiNEprim@$\chi'(N,E)$}\;. \varepsilonnd{equation} Now we state the central result in this section. Its proof will be delayed to \textsection \ref{dilab-subsec-intermediate-rrg}. 
\begin{thm} \label{dilab-prop-large-small-time-convergence} As $t\rightarrow + \infty$, \begin{align} \begin{split} \label{dilab-eq-large-time-convergence} \alpha_t & = f(H^\cdot(N,E), \nablaabla^{H^\cdot(N,E)}, g^{H^\cdot(N,E)})+\mathscr{O}\big(\frac{1}{\sqrt{t}}\big) \;,\\ \beta_t & = \frac{1}{2}{\chi}'(N,E) + \mathscr{O}\big(\frac{1}{\sqrt{t}}\big) \;. \varepsilonnd{split} \varepsilonnd{align} As $t\rightarrow 0$, \begin{align} \begin{split} \label{dilab-eq-small-time-convergence} \alpha_t = \; & q_*\Big[\widetilde{\mathrm{Td}}(TN, g^{TN})*\widetilde{\mathrm{ch}}(E, g^E) \Big] \\ \; & + \frac{1}{2t} d_M q_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nablaabla^{TN})\mathrm{ch}(E, \nablaabla^E)\Big] + \mathscr{O}\big(\sqrt{t}\big) \;,\\ \beta_t = \; & - \frac{1}{2}q_*\Big[\mathrm{Td}'(TN, \nablaabla^{TN})\mathrm{ch}(E, \nablaabla^E)\Big] + \frac{n}{2}\chi(N,E) \\ \; & - \frac{1}{2t} q_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nablaabla^{TN})\mathrm{ch}(E, \nablaabla^E)\Big] + \mathscr{O}\big(\sqrt{t}\big) \;. \varepsilonnd{split} \varepsilonnd{align} \varepsilonnd{thm} \begin{rem} \label{dilab-rem-small-time-convergence} By Proposition \ref{dilab-prop-annulation-curvature}, we have \begin{equation} q_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nablaabla^{TN})\mathrm{ch}(E, \nablaabla^E)\Big] \in {\mathscr{C}^\infty}(M) \;. \varepsilonnd{equation} \varepsilonnd{rem} By Proposition \ref{dilab-prop-constant-class} and Theorem \ref{dilab-prop-large-small-time-convergence}, we get the following R.R.G. formula. \begin{thm} \label{dilab-thm-riemann-roch-grothendieck} We have \begin{align} \label{dilab-thm-riemann-roch-grothendieck-eq} \begin{split} & \Big[f(H^\cdot(N,E), \nablaabla^{H^\cdot(N,E)}, g^{H^\cdot(N,E)})\Big] \\ = \; & \Big[q_*\Big[ \widetilde{\mathrm{Td}}(TN, g^{TN})*\widetilde{\mathrm{ch}}(E, g^E) \Big]\Big] \in H^\mathrm{odd}(M,\mathbb{R}) \;. \varepsilonnd{split} \varepsilonnd{align} \varepsilonnd{thm} \subsection{Several intermediate results and the proof of Theorem \ref{dilab-prop-large-small-time-convergence}} \label{dilab-subsec-intermediate-rrg} \ We will now introduce various new odd Grassmann variables in order to be able to compute exactly the asymptotics of certain superconnection forms as $t\to 0$, and also to overcome the divergence of certain expressions. Our methods are closely related to the methods of \cite{bgs2,bgs3,bk}, where similar difficulties also appeared. Let $a\index{a@$a$}$ be an additional complex coordinate, $\varepsilonpsilon\index{epsilon@$\varepsilonpsilon$}$ be an auxiliary odd Grassmann variable. For \begin{equation} u,v\in \Big\{ 1 \,,\, da \,,\, d\bar{a} \,,\, dad\bar{a} \,,\, \varepsilonpsilon \,,\, \varepsilonpsilon da \,,\, \varepsilonpsilon d\bar{a} \,,\, \varepsilonpsilon dad\bar{a} \Big\} \varepsilonnd{equation} and $\sigma\in\Omega^\cdot(M)$, we denote \begin{align} \begin{split} (v \wedge \sigma)^u = \left\{ \begin{array}{rl} \sigma & \text{ if } u = v \;,\\ 0 & \text{ else } \;. \varepsilonnd{array} \right. 
\varepsilonnd{split} \varepsilonnd{align} \begin{lemme} \label{dilab-prop-a-bar-a-formula} The following identity holds \begin{align} \label{dilab-eq-1-prop-a-bar-a-formula} \begin{split} & \tr_\mathrm{s} \Big[D^\mathscr{E}\varepsilonxp\big(D^{\mathscr{E},2}\big)\Big] \\ = \; & \tr_\mathrm{s} \Big[\varepsilonxp\big(-C^{\mathscr{E},2} - da\,\frac{1}{2}\big(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N\big) \\ & \hspace{20mm} - d\bar{a}\,\big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E}\big] + dad\bar{a}\,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E}\big)\Big]^{\varepsilonpsilon dad\bar{a}} \\ & + d_M \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2} \big)\Big] \;. \varepsilonnd{split} \varepsilonnd{align} \varepsilonnd{lemme} \begin{proof} By \varepsilonqref{dilab-eq-bianchi} and \varepsilonqref{dilab-eq-lc-superconnection}, we have \begin{align} \label{dilab-proof-prop-a-bar-a-formula-eq-2} \begin{split} \big[N^{\Lambda^\cdot(\overline{T^*N})},C^{\mathscr{E},2}\big] = \; & -\big[N^{\Lambda^\cdot(\overline{T^*N})},D^{\mathscr{E},2}\big] \\ = \; & -\big[N^{\Lambda^\cdot(\overline{T^*N})},\big[\overline{\partial}^{E,*}_N-\overline{\partial}^E_N,\frac{1}{2}\omega^\mathscr{E}\big]\big] = \big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{1}{2}\omega^\mathscr{E}\big] \;, \varepsilonnd{split} \varepsilonnd{align} which implies \begin{align} \label{dilab-proof-prop-a-bar-a-formula-eq-3} \begin{split} & \tr_\mathrm{s} \Big[\varepsilonxp\big(-C^{\mathscr{E},2} - da\,\frac{1}{2}\big(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N\big) - d\bar{a}\,\big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E}\big] \big)\Big]^{\varepsilonpsilon dad\bar{a}} \\ = \; & \frac{\partial}{\partial b} \tr_\mathrm{s} \Big[ - \frac{1}{2}(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N) \varepsilonxp\big( -C^{\mathscr{E},2} + b\big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{1}{2}\omega^\mathscr{E}\big] \big)\Big]_{b=0} \\ = \; & \frac{\partial}{\partial b} \tr_\mathrm{s} \Big[ - \frac{1}{2}(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N) \varepsilonxp\big( -C^{\mathscr{E},2} + b\big[N^{\Lambda^\cdot(\overline{T^*N})},C^{\mathscr{E},2}\big] \big)\Big]_{b=0} \\ = \; & \frac{\partial}{\partial b} \tr_\mathrm{s} \Big[ - \frac{1}{2}(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N) \big[N^{\Lambda^\cdot(\overline{T^*N})},\varepsilonxp\big( -C^{\mathscr{E},2} \big)\big]\Big]\\ = \; & \tr_\mathrm{s} \Big[ -\frac{1}{2}\big[N^{\Lambda^\cdot(\overline{T^*N})},\overline{\partial}^E_N+\overline{\partial}^{E,*}_N\big]\varepsilonxp\big( - C^{\mathscr{E},2} \big)\Big] \\ = \; & \tr_\mathrm{s} \Big[\frac{1}{2}\big(\overline{\partial}^{E,*}_N-\overline{\partial}^E_N\big)\varepsilonxp\big( D^{\mathscr{E},2} \big)\Big] \;. 
\varepsilonnd{split} \varepsilonnd{align} Then \begin{align} \begin{split} & \tr_\mathrm{s} \Big[\varepsilonxp\big(-C^{\mathscr{E},2} - da\,\frac{1}{2}\big(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N\big) \\ &\hspace{35mm} - d\bar{a}\,\big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E}\big] + dad\bar{a}\,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E}\big)\Big]^{\varepsilonpsilon dad\bar{a}} \\ = \; & \tr_\mathrm{s} \Big[\varepsilonxp\big(-C^{\mathscr{E},2} - da\,\frac{1}{2}\big(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N\big) - d\bar{a}\,\big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E}\big] \big)\Big]^{\varepsilonpsilon dad\bar{a}} \\ & + \tr_\mathrm{s} \Big[\frac{1}{2}\omega^\mathscr{E}\varepsilonxp\big(D^{\mathscr{E},2}\big)\Big] \\ = \; & \tr_\mathrm{s} \Big[\frac{1}{2}\big(\overline{\partial}^{E,*}_N - \overline{\partial}^E_N + \omega^\mathscr{E}\big)\varepsilonxp\big(D^{\mathscr{E},2}\big)\Big] \\ = \; & \tr_\mathrm{s} \Big[\big(\overline{\partial}^{E,*}_N - \overline{\partial}^E_N + \frac{1}{2}\omega^\mathscr{E}\big)\varepsilonxp\big(D^{\mathscr{E},2}\big)\Big] -\tr_\mathrm{s} \Big[\frac{1}{2} \big(\overline{\partial}^{E,*}_N - \overline{\partial}^E_N \big)\varepsilonxp\big(D^{\mathscr{E},2}\big)\Big] \\ = \; & \tr_\mathrm{s} \Big[D^\mathscr{E}\varepsilonxp\big(D^{\mathscr{E},2}\big)\Big] - \tr_\mathrm{s} \Big[\big[C^\mathscr{E},\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\big]\varepsilonxp\big( D^{\mathscr{E},2} \big)\Big] \\ = \; & \tr_\mathrm{s} \Big[D^\mathscr{E}\varepsilonxp\big(D^{\mathscr{E},2}\big)\Big] - d_M \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2} \big)\Big] \;. \varepsilonnd{split} \varepsilonnd{align} The last equation is just what we needed to prove. \varepsilonnd{proof} Let $\mathcal{N}_+$, $M_+$, $q_+$, $\mathscr{E}_+$, $\omega^{\mathscr{E}_+}$, $C^{\mathscr{E}_+}$ and $D^{\mathscr{E}_+}$ be the same as in the proof of Proposition \ref{dilab-prop-trans-alpha-beta}. \begin{lemme} \label{dilab-prop-a-bar-a-formula-second} For $t>0$, the following identity holds: \begin{align} \label{dilab-eq-prop-a-bar-a-formula-second} \begin{split} & \big(N^{\Lambda^\cdot(T^*M)} + 1 + t\frac{\partial}{\partial t} \big) \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2}_t \big)\Big] \\ = \; & \tr_\mathrm{s} \Big[\varepsilonxp\big(-C^{\mathscr{E}_+,2} - da\,\frac{1}{2}\big(\overline{\partial}^E_N + t\overline{\partial}^{E,*}_N\big) \\ & \hspace{20mm} - d\bar{a}\,\big[\overline{\partial}^E_N + t\overline{\partial}^{E,*}_N , \frac{\varepsilonpsilon t}{2}\omega^{\mathscr{E}_+}\big] + dad\bar{a}\,\frac{\varepsilonpsilon t}{2}\omega^{\mathscr{E}_+}\big)\Big]^{\varepsilonpsilon dad\bar{a}dt} \\ & + \text{closed form} \;. 
\end{split} \end{align} \end{lemme} \begin{proof} By \eqref{dilab-eq-1-prop-a-bar-a-formula}, we get \begin{align} \begin{split} & \tr_\mathrm{s} \Big[D^{\mathscr{E}_+}\exp\big(D^{\mathscr{E}_+,2}\big)\Big] \\ = \; & \tr_\mathrm{s} \Big[\exp\big(-C^{\mathscr{E}_+,2} - da\,\frac{1}{2}\big( \overline{\partial}^E_N + t\overline{\partial}^{E,*}_N \big) \\ & \hspace{20mm} - d\bar{a}\,\big[\overline{\partial}^E_N + t\overline{\partial}^{E,*}_N , \frac{\epsilon}{2}\omega^{\mathscr{E}_+}\big] + dad\bar{a}\,\frac{\epsilon}{2}\omega^{\mathscr{E}_+}\big)\Big]^{\epsilon dad\bar{a}} \\ & + d_{M_+} \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\exp\big( D^{\mathscr{E}_+,2} \big)\Big] \;. \end{split} \end{align} Taking the $dt$ component, we get \begin{align} \label{dilab-proof-prop-a-bar-a-formula-second-eq-0} \begin{split} & \tr_\mathrm{s} \Big[\frac{1}{2t}\big(N^{\Lambda^\cdot(\overline{T^*N})}-n\big)\exp\big(D^{\mathscr{E},2}_t\big)\Big] \\ & + \tr_\mathrm{s} \Big[D^{\mathscr{E}}_t\exp\big(\big( D^{\mathscr{E}}_t + dt\,\frac{1}{2t}N^{\Lambda^\cdot(\overline{T^*N})} - dt\,\frac{n}{2t}\big)^2\big)\Big]^{dt} \\ = \; & \tr_\mathrm{s} \Big[\exp\big(-C^{\mathscr{E}_+,2} - da\,\frac{1}{2}\big( \overline{\partial}^E_N + t\overline{\partial}^{E,*}_N \big) \\ & \hspace{20mm} - d\bar{a}\,\big[\overline{\partial}^E_N + t\overline{\partial}^{E,*}_N , \frac{\epsilon}{2}\omega^{\mathscr{E}_+}\big] + dad\bar{a}\,\frac{\epsilon}{2}\omega^{\mathscr{E}_+}\big)\Big]^{\epsilon dad\bar{a}dt} \\ & - d_M \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\exp\big( \big(D^{\mathscr{E}}_t + \frac{1}{2}dt\, N^{\Lambda^\cdot(\overline{T^*N})}\big)^2 \big)\Big]^{dt} \\ & + \frac{\partial}{\partial t} \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\exp\big( D^{\mathscr{E},2}_t \big)\Big] \;. \end{split} \end{align} We multiply \eqref{dilab-proof-prop-a-bar-a-formula-second-eq-0} by $t$ and subtract the closed forms. Since $dt$ supercommutes with $N^{\Lambda^\cdot(\overline{T^*N})}$ and $D^{\mathscr{E}}_t$, by Propositions \ref{dilab-prop-chern-index-theorem} and \ref{dilab-prop-constant-class} we can delete the terms $\frac{n}{2t}$ and $dt\,\frac{n}{2t}$ on the left-hand side of \eqref{dilab-proof-prop-a-bar-a-formula-second-eq-0}. We obtain \begin{align} \label{dilab-proof-prop-a-bar-a-formula-second-eq-1} \begin{split} & \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\exp\big(D^{\mathscr{E},2}_t\big)\Big] \\ & + \tr_\mathrm{s} \Big[D^{\mathscr{E}}_t\exp\big(\big( D^{\mathscr{E}}_t + dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})} \big)^2\big)\Big]^{dt} \\ = \; & \tr_\mathrm{s} \Big[\exp\big(-C^{\mathscr{E}_+,2} - da\,\frac{1}{2}\big( \overline{\partial}^E_N + t\overline{\partial}^{E,*}_N \big) \\ & \hspace{20mm} - d\bar{a}\,\big[\overline{\partial}^E_N + t\overline{\partial}^{E,*}_N , \frac{\epsilon t}{2}\omega^{\mathscr{E}_+}\big] + dad\bar{a}\,\frac{\epsilon t}{2}\omega^{\mathscr{E}_+}\big)\Big]^{\epsilon dad\bar{a}dt} \\ & + t\frac{\partial}{\partial t} \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\exp\big(D^{\mathscr{E},2}_t\big)\Big] + \text{closed form} \;.
\end{split} \end{align} We have \begin{align} \label{dilab-proof-prop-a-bar-a-formula-second-eq-2} \begin{split} & d_M \tr_\mathrm{s} \Big[D^{\mathscr{E}}_t\exp\big(\big( D^{\mathscr{E}}_t + dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})} \big)^2\big)\Big]^{dt} \\ = \; & \tr_\mathrm{s} \Big[\big[C^{\mathscr{E}}_t,D^{\mathscr{E}}_t\exp\big(\big( D^{\mathscr{E}}_t + dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})} \big)^2\big)\big]\Big]^{dt} \\ = \; & - \tr_\mathrm{s} \Big[D^{\mathscr{E}}_t\exp\big(D^{\mathscr{E},2}_t + \big[C^{\mathscr{E}}_t,\big[D^{\mathscr{E}}_t,dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\big]\big] \big)\Big]^{dt} \\ = \; & \tr_\mathrm{s} \Big[D^{\mathscr{E}}_t\exp\big(D^{\mathscr{E},2}_t + \big[D^{\mathscr{E}}_t,\big[C^{\mathscr{E}}_t,dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\big]\big] \big)\Big]^{dt} \\ = \; & \tr_\mathrm{s} \Big[D^{\mathscr{E}}_t\big[D^{\mathscr{E}}_t,\exp\big(D^{\mathscr{E},2}_t + \big[C^{\mathscr{E}}_t,dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\big]\big)\big]\Big]^{dt} \\ = \; & \tr_\mathrm{s} \Big[2D^{\mathscr{E},2}_t\exp\big(D^{\mathscr{E},2}_t + \big[C^{\mathscr{E}}_t,dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\big]\big)\Big]^{dt} \\ = \; & \left(d_M \tr_\mathrm{s} \Big[2D^{\mathscr{E},2}_t\exp\big(D^{\mathscr{E},2}_t + dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\big)\Big] \right)^{dt} \;. \end{split} \end{align} Thus \begin{align} \label{dilab-proof-prop-a-bar-a-formula-second-eq-3} \begin{split} & \tr_\mathrm{s} \Big[D^{\mathscr{E}}_t\exp\big(\big( D^{\mathscr{E}}_t + dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})} \big)^2\big)\Big]^{dt} \\ = \; & \tr_\mathrm{s} \Big[2D^{\mathscr{E},2}_t\exp\big(D^{\mathscr{E},2}_t + dt\,\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\big)\Big]^{dt} + \text{closed form} \\ = \; & \frac{\partial}{\partial b} \tr_\mathrm{s} \Big[N^{\Lambda^\cdot(\overline{T^*N})}\exp\big((1+b)D^{\mathscr{E},2}_t\big)\Big]_{b=0} + \text{closed form} \\ = \; & \frac{\partial}{\partial b} \tr_\mathrm{s} \Big[N^{\Lambda^\cdot(\overline{T^*N})}\exp\Big( (1+b)^{\frac{1}{2}N^{\Lambda^\cdot(T^*M)}} D^{\mathscr{E},2}_{(1+b)t} (1+b)^{-\frac{1}{2}N^{\Lambda^\cdot(T^*M)}} \Big)\Big]_{b=0} \\ & + \text{closed form} \\ = \; & \frac{\partial}{\partial b} (1+b)^{\frac{1}{2}N^{\Lambda^\cdot(T^*M)}} \tr_\mathrm{s} \Big[N^{\Lambda^\cdot(\overline{T^*N})}\exp\big(D^{\mathscr{E},2}_{(1+b)t}\big)\Big]_{b=0} + \text{closed form} \\ = \; & t\frac{\partial}{\partial t} \tr_\mathrm{s} \Big[N^{\Lambda^\cdot(\overline{T^*N})}\exp\big(D^{\mathscr{E},2}_{t}\big)\Big] \\ & + \frac{1}{2}N^{\Lambda^\cdot(T^*M)} \tr_\mathrm{s} \Big[N^{\Lambda^\cdot(\overline{T^*N})}\exp\big(D^{\mathscr{E},2}_{t}\big)\Big] + \text{closed form} \;. \end{split} \end{align} By \eqref{dilab-proof-prop-a-bar-a-formula-second-eq-1} and \eqref{dilab-proof-prop-a-bar-a-formula-second-eq-3}, we get \eqref{dilab-eq-prop-a-bar-a-formula-second}. \end{proof} Let $r^N\index{rN@$r^N$}$ be the scalar curvature of $(N,g^{TN})$. Let \begin{equation} R^E = \nabla^{E,2} \;,\hspace{5mm} R^{TN} = \nabla^{TN,2} \end{equation} be the curvatures of $\nabla^{E}$ and $\nabla^{TN}$ on $E$ and $TN$ over $\mathcal{N}$. Let $\nabla^{\Lambda^{n}\left(TN\right)}$ be the connection on $\Lambda^{n}\left(TN\right)$ induced by $\nabla^{TN}$.
Then its curvature is $\mathrm{Tr}\left[R^{TN}\right]$. Recall that $S^{T_{\mathbb R}N}$ was defined in Definition \ref{defSTX}. Since our fibration is flat, it follows from \cite[(1.28)]{b} that, for $U\in T_{\mathbb R}N$ and $V,W\in T^{H}\mathcal{N}$, \begin{equation} \label{eq:van1} \left\langle S^{T_{\mathbb R}N}\left(U\right)V,W\right\rangle = \big\langle U,T(V,W)\big\rangle = 0 \;. \end{equation} Let $\nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E} \index{nablaLambdaTNE@$\nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}$}$ be the connection on $\Lambda^\cdot(\overline{T^*N})\otimes E$ induced by $\nabla^{\Lambda^\cdot(\overline{T^*N})}$ and $\nabla^E$. Recall that $\omega$ is the fiberwise K{\"a}hler form, and that $\omega^{TN}$ and $\omega^E$ are the variations of the metrics on $TN$ and $E$. We also recall that $c(\cdot)$ is the Clifford action (cf. \eqref{dilab-eq-pre-clifford-complex-rep}) associated with $g^{TN}/2$. Let $(e_i)_{1 \leqslant i \leqslant 2n}\index{ei@$e_i$}$ be an orthonormal basis of $T_\mathbb{R}N$, and let $(e^i)_{1 \leqslant i \leqslant 2n}\index{eidual@$e^i$}$ be the corresponding dual basis. Let $(f_\alpha)_{1 \leqslant \alpha \leqslant m}\index{falpha@$f_\alpha$}$ be a basis of $TM$. We identify the $f_{\alpha}$ with their horizontal lifts in $T^H\mathcal{N}$. Let $(f^\alpha)_{1 \leqslant \alpha \leqslant m}\index{falphadual@$f^\alpha$}$ be the corresponding dual basis. To interpret the formula that follows properly, we need to extend the basis $(e_{i})$ to a parallel basis of $T_{\mathbb R}N$ near the point $x$ under consideration. Moreover, we may suppose that $\nabla^{T_\mathbb{R}N}_\cdot e_i = 0$ at the point $x$. \begin{prop} \label{dilab-prop-lichnerowicz-a-bar-a} The following identity holds: \begin{align} \label{dilab-eq-lichnerowicz-a-bar-a} \begin{split} & -C^{\mathscr{E},2} - da\,\frac{1}{2}(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N) - d\bar{a}\,\big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{\epsilon}{2}\omega^\mathscr{E}\big] + dad\bar{a}\,\frac{\epsilon}{2}\omega^\mathscr{E} \\ = \; & \frac{1}{2} \Big(\nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_i} + \langle S^{T_\mathbb{R}N}(e_i)e_j,f_\alpha \rangle c(e_j)f^\alpha \\ & \hspace{10mm} - da\,\frac{1}{2}c(e_i) - d\bar{a}\epsilon\,\frac{\sqrt{-1}}{2} (d_M\omega)(e_i,e_j)c(e_j) \Big)^2 \\ & - d\bar{a}\epsilon\,\big[ \nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_i} , \frac{1}{2}\omega^E - \frac{1}{8}(d_M\omega^{TN})(e_j,Je_j) \big] c(e_i) \\ & + dad\bar{a}\epsilon\,\Big(\frac{1}{2}\omega^E - \frac{1}{8}(d_M\omega)(e_j,Je_j)\Big) \\ & - \frac{1}{2}\Big(R^E+\frac{1}{2}\tr[R^{TN}]\Big)(e_i,e_j)c(e_i)c(e_j) - \Big(R^E+\frac{1}{2}\tr[R^{TN}]\Big)(e_i,f_\alpha)c(e_i)f^\alpha \\ & - \frac{1}{2}\Big(R^E+\frac{1}{2}\tr[R^{TN}]\Big)(f_\alpha,f_\beta)f^\alpha f^\beta - \frac{1}{8}r^N \;.
\varepsilonnd{split} \varepsilonnd{align} \varepsilonnd{prop} \begin{proof} Applying \cite[Theorem 3.5]{b} with $t=1/\sqrt{2}$ and \varepsilonqref{eq:van1}, we have \begin{align} \label{dilab-eq-family-lichnerowicz} \begin{split} & -C^{\mathscr{E},2} \\ = \; & \frac{1}{2}\Big(\nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_i} + \langle S^{T_\mathbb{R}N}(e_i)e_j,f_\alpha \rangle c(e_j)f^\alpha \Big)^2 \\ & - \frac{1}{2}\big(R^E+\frac{1}{2}\tr[R^{TN}]\big)(e_i,e_j)c(e_i)c(e_j) - \big(R^E+\frac{1}{2}\tr[R^{TN}]\big)(e_i,f_\alpha)c(e_i)f^\alpha \\ & - \frac{1}{2}\big(R^E+\frac{1}{2}\tr[R^{TN}]\big)(f_\alpha,f_\beta)f^\alpha f^\beta - \frac{1}{8}r^N \;. \varepsilonnd{split} \varepsilonnd{align} Taking the degree $0$ part of (\ref{dilab-eq-family-lichnerowicz}), we get \begin{align} \label{dilab-eq-point-lichnerowicz} \begin{split} & -\big(\overline{\partial}^E_N + \overline{\partial}^{E,*}_N\big)^2 \\ = & \; \frac{1}{2}\Big(\nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_i}\Big)^2 - \frac{1}{2}\big(R^E+\frac{1}{2}\tr[R^{TN}]\big)(e_i,e_j)c(e_i)c(e_j) - \frac{1}{8}r^N \;. \varepsilonnd{split} \varepsilonnd{align} By \cite[Proposition 1.19]{bgs3} and by (\ref{dilab-eq-omega-big-e}), we get \begin{equation} \label{dilab-eq-omega-big-e-explicite} \omega^\mathscr{E} = -\frac{\sqrt{-1}}{2}(d_M\omega)(e_i,e_j)c(e_i)c(e_j) - \frac{1}{4}(d_M\omega)(e_i,Je_i) + \omega^E \;. \varepsilonnd{equation} Since $d_N\omega=0$ and $[d_N,d_M]=0$, we have $d_Nd_M\omega=0$. Therefore \begin{align} \label{dilab-prop-lichnerowicz-a-bar-a-proof-eq-1} \begin{split} & \big[\overline{\partial}^E_N + \overline{\partial}^{E,*}_N,-\frac{\varepsilonpsilon\sqrt{-1}}{4}(d_M\omega)(e_i,e_j)c(e_i)c(e_j)\big] \\ = \; & \frac{\varepsilonpsilon\sqrt{-1}}{4}\big[c(e_k)\nablaabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_k},(d_M\omega)(e_i,e_j)c(e_i)c(e_j)\big] \\ = \; & \frac{\varepsilonpsilon\sqrt{-1}}{4}\Big( \nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_i}(d_M\omega)(e_i,e_j)c(e_j) + (d_M\omega)(e_i,e_j)c(e_j)\nablaabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_i}\Big) \;. \varepsilonnd{split} \varepsilonnd{align} By \varepsilonqref{dilab-eq-point-lichnerowicz}, \varepsilonqref{dilab-eq-omega-big-e-explicite} and \varepsilonqref{dilab-prop-lichnerowicz-a-bar-a-proof-eq-1}, we get \begin{align} \label{dilab-eq-point-a-bar-a} \begin{split} & -\big(\overline{\partial}^E_N + \overline{\partial}^{E,*}_N\big)^2 - da\,\frac{1}{2}\big(\overline{\partial}^E_N+\overline{\partial}^{E,*}_N\big) - d\bar{a}\,\big[\overline{\partial}^E_N+\overline{\partial}^{E,*}_N,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E}\big] + dad\bar{a}\,\frac{\varepsilonpsilon}{2}\omega^\mathscr{E} \\ = \; & \frac{1}{2}\Big( \nabla^{\Lambda^\cdot\overline{T^*N}\otimes E}_{e_i} - da\,\frac{1}{2}c(e_i) - d\bar{a}\varepsilonpsilon\,\frac{\sqrt{-1}}{2}(d_M\omega)(e_i,e_j)c(e_j) \Big)^2 \\ & - d\bar{a}\varepsilonpsilon\,\big[ \nabla^{\Lambda^\cdot(\overline{T^*N})\otimes E}_{e_i} , \frac{1}{2}\omega^E - \frac{1}{8}(d_M\omega^{TN})(e_j,Je_j) \big] c(e_i) \\ & + dad\bar{a}\varepsilonpsilon\,\big(\frac{1}{2}\omega^E - \frac{1}{8}(d_M\omega)(e_j,Je_j)\big) \\ & - \frac{1}{2}\big(R^E+\frac{1}{2}\tr[R^{TN}]\big)(e_i,e_j)c(e_i)c(e_j) - \frac{1}{8}r^N \;. 
\end{split} \end{align} Comparing \eqref{dilab-eq-family-lichnerowicz}, \eqref{dilab-eq-point-lichnerowicz}, \eqref{dilab-eq-point-a-bar-a} with \eqref{dilab-eq-lichnerowicz-a-bar-a}, it only remains to show that \begin{align} \label{dilab-prop-lichnerowicz-a-bar-a-proof-eq-2} \begin{split} \sum_{i\neq j}\langle S^{T_\mathbb{R}N}(e_i)e_j,f_\alpha \rangle f^\alpha c(e_i) c(e_j) = \; & 0 \;,\\ \sum_i\sum_{j\neq k}(d_M\omega)(e_i,e_j) \langle S^{T_\mathbb{R}N}(e_i)e_k,f_\alpha \rangle f^\alpha c(e_j) c(e_k) = \; & 0 \;. \end{split} \end{align} By \cite[\textsection 1(c)]{b}, if $U,V\in T\mathcal{N}$, then $S^{T_{\mathbb R}N}(U)V-S^{T_{\mathbb R}N}(V)U\in T_{\mathbb R}N$. Thus \begin{equation} \label{eq:err1} \langle S^{T_\mathbb{R}N}(e_i)e_j,f_\alpha \rangle = \langle S^{T_\mathbb{R}N}(e_j)e_i,f_\alpha \rangle \;. \end{equation} By \eqref{eq:err1}, we get the first identity in (\ref{dilab-prop-lichnerowicz-a-bar-a-proof-eq-2}). Now we prove the second identity in \eqref{dilab-prop-lichnerowicz-a-bar-a-proof-eq-2}. For simplicity, we introduce the following notation: \begin{equation} \nabla_{f_\alpha} = i_{f_\alpha} d_M \;. \end{equation} By \cite[(1.5)]{b97}, we have \begin{equation} \langle S^{T_\mathbb{R}N}(e_i)e_k,f_\alpha \rangle =-\frac{1}{2} \left\langle \big(g^{T_\mathbb{R}N}\big)^{-1}\nabla_{f_{\alpha}}g^{T_\mathbb{R}N}(e_i)\,,\,e_k \right\rangle = -\frac{1}{2}\left(\nabla_{f_{\alpha}}\omega\right) (e_i,Je_k) \;. \end{equation} Therefore the second identity in \eqref{dilab-prop-lichnerowicz-a-bar-a-proof-eq-2} is equivalent to the following: \begin{equation} \label{dilab-prop-lichnerowicz-a-bar-a-proof-eq-4} \sum_i\sum_{j\neq k}(\nabla_{f_{\alpha}}\omega) (e_i,e_j) (\nabla_{f_{\beta}}\omega)(e_i,Je_k) f^\alpha f^\beta c(e_j) c(e_k) = 0 \;. \end{equation} Since $(Je_i)_{1\leqslant i\leqslant 2n}$ is also an orthonormal basis of $T_\mathbb{R}N$, using the fact that $\omega$ and $d_M\omega$ are $J$-invariant, we get \begin{align} \label{dilab-prop-lichnerowicz-a-bar-a-proof-eq-5} \begin{split} & \sum_i\sum_{j\neq k}\left(\nabla_{f_\alpha}\omega\right) (e_i,e_j) \left(\nabla_{f_\beta}\omega\right) (e_i,Je_k) f^\alpha f^\beta c(e_j) c(e_k) \\ = \; & \frac{1}{2} \sum_i\sum_{j\neq k}\left(\nabla_{f_\alpha}\omega\right) (e_i,e_j) \left(\nabla_{f_\beta}\omega\right) (e_i,Je_k) f^\alpha f^\beta c(e_j) c(e_k) \\ & + \frac{1}{2} \sum_i\sum_{j\neq k}\left(\nabla_{f_\alpha}\omega\right) (Je_i,e_j) \left(\nabla_{f_\beta}\omega\right)(Je_i,Je_k) f^\alpha f^\beta c(e_j) c(e_k) \\ = \; & \frac{1}{2} \sum_i\sum_{j\neq k}\left(\nabla_{f_\alpha}\omega\right) (e_i,e_j) \left(\nabla_{f_\beta}\omega\right) (e_i,Je_k) f^\alpha f^\beta c(e_j) c(e_k) \\ & - \frac{1}{2} \sum_i\sum_{j\neq k}\left(\nabla_{f_\alpha}\omega\right) (e_i,Je_j) \left(\nabla_{f_\beta}\omega\right) (e_i,e_k) f^\alpha f^\beta c(e_j) c(e_k) \;.
\end{split} \end{align} Exchanging the roles of $j,k$ and of $\alpha,\beta$, we obtain \begin{align} \label{dilab-prop-lichnerowicz-a-bar-a-proof-eq-6} \begin{split} & \sum_i\sum_{j\neq k}\left(\nabla_{f_\alpha}\omega\right) (e_i,Je_j) \left(\nabla_{f_\beta}\omega\right) (e_i,e_k) f^\alpha f^\beta c(e_j) c(e_k) \\ = \; & \sum_i\sum_{j\neq k}\left(\nabla_{f_\beta}\omega\right) (e_i,Je_k) \left(\nabla_{f_\alpha}\omega\right) (e_i,e_j) f^\beta f^\alpha c(e_k) c(e_j) \\ = \; & \sum_i\sum_{j\neq k} \left(\nabla_{f_\alpha}\omega\right) (e_i,e_j) \left(\nabla_{f_\beta}\omega\right) (e_i,Je_k) f^\alpha f^\beta c(e_j) c(e_k) \;. \end{split} \end{align} By \eqref{dilab-prop-lichnerowicz-a-bar-a-proof-eq-5} and \eqref{dilab-prop-lichnerowicz-a-bar-a-proof-eq-6}, we get \eqref{dilab-prop-lichnerowicz-a-bar-a-proof-eq-4}. \end{proof} \begin{proof}[Proof of Theorem \ref{dilab-prop-large-small-time-convergence}] The proof of \eqref{dilab-eq-large-time-convergence} follows the same argument as \cite[Theorem 3.16]{bl}. Now we prove the first formula in \eqref{dilab-eq-small-time-convergence}. By Lemma \ref{dilab-prop-a-bar-a-formula}, it is sufficient to establish the asymptotics of the following terms as $t\rightarrow 0$: \begin{align} \label{dilab-eq-a0-proof-prop-small-time-convergence} \begin{split} & \tr_\mathrm{s} \Big[\exp\big(-C^{\mathscr{E},2}_t - da\,\frac{1}{2}\big(\overline{\partial}^E_N+t\overline{\partial}^{E,*}_N\big) \\ & \hspace{35mm} - d\bar{a}\,\big[\overline{\partial}^E_N+t\overline{\partial}^{E,*}_N,\frac{\epsilon}{2}\omega^\mathscr{E}\big] + dad\bar{a}\,\frac{\epsilon}{2}\omega^\mathscr{E}\big) \Big]^{\epsilon dad\bar{a}} \;,\\ & d_M \tr_\mathrm{s} \Big[\frac{1}{2}N^{\Lambda^\cdot(\overline{T^*N})}\exp\big( D^{\mathscr{E},2}_t \big)\Big] \;. \end{split} \end{align} As $t\rightarrow 0$, we claim that we can use equation \eqref{dilab-eq-lichnerowicz-a-bar-a} exactly as in Bismut-K\"{o}hler \cite[Theorem 3.22]{bk}. The main difference is that in \cite{bk}, the space of variations of the metrics is $1$-dimensional, while here it is the whole base $M$. By proceeding as in this reference, we get \begin{align} \label{dilab-eq-8-proof-prop-small-time-convergence} \begin{split} & \sqrt{2\pi i}\varphi \tr_\mathrm{s} \Big[\exp\big(-C^{\mathscr{E},2}_t - da\,\frac{1}{2}\big(\overline{\partial}^E_N+t\overline{\partial}^{E,*}_N\big) \\ & \hspace{35mm} - d\bar{a}\,\big[\overline{\partial}^E_N+t\overline{\partial}^{E,*}_N,\frac{\epsilon}{2}\omega^\mathscr{E}\big] + dad\bar{a}\,\frac{\epsilon}{2}\omega^\mathscr{E}\big) \Big]^{\epsilon dad\bar{a}} \\ = \; & q_*\Big[ \widetilde{\mathrm{Td}}(TN, g^{TN})*\widetilde{\mathrm{ch}}(E, g^E) \Big] + \mathscr{O}(t) \;. \end{split} \end{align} This gives the asymptotics of the first term in \eqref{dilab-eq-a0-proof-prop-small-time-convergence}. We now turn to the second term in \eqref{dilab-eq-a0-proof-prop-small-time-convergence}.
As $t\rightarrow 0$, by the local families index theorem technique \cite{b}, we get \begin{equation} \label{dilab-eq-3-proof-prop-small-time-convergence} \varphi \tr_\mathrm{s} \Big[tN^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2}_t \big)\Big] = q_*\left[\frac{\omega}{2\pi}\text{\rm Td}(TN, \nablaabla^{TN})\text{\rm ch}(E, \nablaabla^E)\right] + \mathscr{O}(\sqrt{t}) \;. \varepsilonnd{equation} Furthermore, by \cite[Theorems 2.11, 2.16]{bgs2}, the asymptotics of $\tr_\mathrm{s} \Big[N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2}_t \big)\Big]$ is given by a Laurent series. By \varepsilonqref{dilab-eq-3-proof-prop-small-time-convergence}, we get \begin{equation} \label{dilab-eq-a1-proof-prop-small-time-convergence} \varphi \tr_\mathrm{s} \Big[N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2}_t \big)\Big] = C_{-1} t^{-1} + C_0 + \mathscr{O}(t) \;, \varepsilonnd{equation} with \begin{equation} \label{dilab-eq-a2-proof-prop-small-time-convergence} C_{-1} = q_*\left[\frac{\omega}{2\pi}\text{\rm Td}(TN, \nablaabla^{TN})\text{\rm ch}(E, \nablaabla^E)\right] \;. \varepsilonnd{equation} Let $C_{-1}^{(p)}$ (resp. $C_0^{(p)}$) be the component of degree $p$ of $C_{-1}$ (resp. $C_0$). By Remark \ref{dilab-rem-small-time-convergence}, for $p>0$, $C_{-1}^{(p)}=0$. Then \begin{equation} \label{dilab-eq-4-proof-prop-small-time-convergence} \big(1 + N^{\Lambda^\cdot(T^*M)} + t\frac{\partial}{\partial t} \big) \tr_\mathrm{s}\Big[N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2}_t \big)\Big] = \sum_p \Big( (p+1) C_0^{(p)} \Big) + \mathscr{O}(t) \;. \varepsilonnd{equation} Applying \varepsilonqref{dilab-eq-8-proof-prop-small-time-convergence} with $\mathscr{E}$ replaced by $\mathscr{E}_+$ (see the proof of Proposition \ref{dilab-prop-trans-alpha-beta}) and taking the $dt$ component, we get \begin{align} \label{dilab-eq-5-proof-prop-small-time-convergence} \begin{split} & \varphi \tr_\mathrm{s} \Big[ \varepsilonxp\big( -C^{\mathscr{E}_+,2} - da\,\frac{1}{2}\big( \overline{\partial}^E_N + t\overline{\partial}^{E,*}_N \big) \\ & \hspace{35mm} - d\bar{a}\,\big[\overline{\partial}^E_N + t\overline{\partial}^{E,*}_N , \frac{\varepsilonpsilon t}{2}\omega^{\mathscr{E}_+}\big] + dad\bar{a}\,\frac{\varepsilonpsilon t}{2}\omega^{\mathscr{E}_+}\big) \Big]^{\varepsilonpsilon dad\bar{a}dt} \\ = \; & - \frac{1}{2} q_*\Big[ \mathrm{Td}'(TN, \nablaabla^{TN})\mathrm{ch}(E, \nablaabla^E)\Big] + \mathscr{O}(t) \;. \varepsilonnd{split} \varepsilonnd{align} By Theorem \ref{dilab-thm-gen-flat2trivialclass}, Lemma \ref{dilab-prop-a-bar-a-formula-second} and \varepsilonqref{dilab-eq-5-proof-prop-small-time-convergence}, we have \begin{equation} \label{dilab-eq-6-proof-prop-small-time-convergence} \big(1 + N^{\Lambda^\cdot(T^*M)} + t\frac{\partial}{\partial t} \big) \tr_\mathrm{s}\Big[N^{\Lambda^\cdot(\overline{T^*N})}\varepsilonxp\big( D^{\mathscr{E},2}_t \big)\Big] = \text{ closed form } + \mathscr{O}(t) \;. \varepsilonnd{equation} By \varepsilonqref{dilab-eq-4-proof-prop-small-time-convergence} and \varepsilonqref{dilab-eq-6-proof-prop-small-time-convergence}, we have \begin{align} \label{dilab-eq-7-proof-prop-small-time-convergence} d_M C_0 = 0 \;. 
\end{align} By \eqref{dilab-eq-a1-proof-prop-small-time-convergence}, \eqref{dilab-eq-a2-proof-prop-small-time-convergence} and \eqref{dilab-eq-7-proof-prop-small-time-convergence}, as $t\rightarrow 0$, we have \begin{align} \label{dilab-eq-10-proof-prop-small-time-convergence} \begin{split} & \sqrt{2\pi i} \varphi \, d_M \tr_\mathrm{s} \Big[ N^{\Lambda^\cdot(\overline{T^*N})}\exp\big( D^{\mathscr{E},2}_t \big)\Big] \\ = \; & d_M \varphi \tr_\mathrm{s} \Big[ N^{\Lambda^\cdot(\overline{T^*N})}\exp\big( D^{\mathscr{E},2}_t \big)\Big] \\ = & \; \frac{1}{t} d_M q_*\left[\frac{\omega}{2\pi}\text{\rm Td}(TN, \nabla^{TN})\text{\rm ch}(E, \nabla^E)\right] + \mathscr{O}(\sqrt{t}) \;. \end{split} \end{align} This gives the asymptotics of the second term in \eqref{dilab-eq-a0-proof-prop-small-time-convergence}. The first formula in \eqref{dilab-eq-small-time-convergence} follows from Lemma \ref{dilab-prop-a-bar-a-formula}, \eqref{dilab-eq-8-proof-prop-small-time-convergence} and \eqref{dilab-eq-10-proof-prop-small-time-convergence}. The second formula in \eqref{dilab-eq-small-time-convergence} may be proved as a consequence of the first one by applying the same technique as in the proof of Proposition \ref{dilab-prop-trans-alpha-beta}. \end{proof}
\subsection{Analytic torsion forms} \label{dilab-subsect-torsionform} \ We choose $g_1,g_2\in{\mathscr{C}^\infty}(\mathbb{R}_+,\mathbb{R})\index{g1@$g_1$}\index{g2@$g_2$}$ satisfying \begin{equation} \label{dilab-eq-condition-inf-g1g2} g_1(t) = 1 + \mathscr{O}\big(t\big) \;,\hspace{5mm} g_2(t) = 1 + \mathscr{O}\big(t^2\big) \;,\hspace{5mm}\text{as }t \rightarrow 0 \;, \end{equation} \begin{equation} \label{dilab-eq-condition-zero-g1g2} g_1(t) = \mathscr{O}\big(e^{-t}\big) \;,\hspace{5mm} g_2(t) = \mathscr{O}\big(e^{-t}\big) \;,\hspace{5mm}\text{as }t \rightarrow +\infty \;, \end{equation} and \begin{align} \label{dilab-eq-condition-g1g2} \begin{split} \int_0^1 \frac{g_1(t) - 1}{t} dt + \int_1^{+\infty} \frac{g_1(t)}{t} dt = \; & \Gamma'(1) - 2 \;,\\ \int_0^1 \frac{g_2(t) - 1}{t^2} dt + \int_1^{+\infty} \frac{g_2(t)}{t^2} dt = \; & 1 \;. \end{split} \end{align} Using the Mellin transform, \eqref{dilab-eq-condition-g1g2} can be reformulated as follows: \begin{align} \label{dilab-eq-condition-g1g2-zeta} \begin{split} \Big( \frac{d}{ds} \frac{1}{\Gamma(s)}\int_0^{+\infty} t^{s-1} g_1(t) dt \Big)_{s=0} = \; & - 2 \;,\\ \Big( \frac{d}{ds} \frac{1}{\Gamma(s)}\int_0^{+\infty} t^{s-2} g_2(t) dt \Big)_{s=0} = \; & 0 \;. \end{split} \end{align} \begin{defn} \label{dilab-def-analytic-torsion-form} The analytic torsion forms $\mathscr{T}(g^{TN},g^E)\in\Omega^\mathrm{even}(M)\index{tf@$\mathscr{T}(g^{TN},g^E)$}$ are defined by \begin{align} \begin{split} \mathscr{T}(g^{TN},g^E) = \; & - \int_{0}^{+\infty} \Big\{ \beta_t + \frac{g_1(t)-1}{2} {\chi}'(N,E) - \frac{g_1(t)}{2} n \chi(N,E) \\ & \hspace{25mm} + \frac{g_1(t)}{2} q_*\Big[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] \\ & \hspace{25mm} + \frac{g_2(t)}{2t} q_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] \Big\} \frac{dt}{t} \;. \end{split} \end{align} \end{defn} By Theorem \ref{dilab-prop-large-small-time-convergence}, $\mathscr{T}(g^{TN},g^E)$ is well-defined. We remark that $\mathscr{T}(g^{TN},g^E)$ does not depend on the choice of $g_1$ and $g_2$.
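Let us briefly sketch how \eqref{dilab-eq-condition-g1g2} translates into \eqref{dilab-eq-condition-g1g2-zeta}. Near $s=0$ we have $1/\Gamma(s)=s-\Gamma'(1)s^{2}+\mathscr{O}(s^{3})$, while
\begin{equation}
\int_0^{+\infty} t^{s-1} g_1(t)\, dt = \frac{1}{s} + \int_0^1 t^{s-1}\big(g_1(t)-1\big)\, dt + \int_1^{+\infty} t^{s-1} g_1(t)\, dt \;,
\end{equation}
the last two integrals being holomorphic near $s=0$ by \eqref{dilab-eq-condition-inf-g1g2} and \eqref{dilab-eq-condition-zero-g1g2}. Hence the derivative at $s=0$ of the first function in \eqref{dilab-eq-condition-g1g2-zeta} equals
\begin{equation}
\int_0^1 \frac{g_1(t)-1}{t}\, dt + \int_1^{+\infty} \frac{g_1(t)}{t}\, dt - \Gamma'(1) = -2 \;.
\end{equation}
The second identity in \eqref{dilab-eq-condition-g1g2-zeta} is obtained in the same way, using $\int_0^1 t^{s-2}\, dt = (s-1)^{-1}$ and $\frac{d}{ds}\frac{1}{\Gamma(s)}\big|_{s=0}=1$.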
\begin{prop} \label{dilab-prop-diff-torsion}
We have
\begin{align} \label{dilab-eq-prop-diff-torsion} \begin{split}
d_M \mathscr{T}(g^{TN},g^E) = \; & q_*\Big[\widetilde{\mathrm{Td}}(TN, g^{TN})*\widetilde{\mathrm{ch}}(E, g^E)\Big] \\
& - f(H^\cdot(N,E), \nabla^{H^\cdot(N,E)}, g^{H^\cdot(N,E)}) \;.
\end{split} \end{align}
\end{prop}

\begin{proof}
By Theorem \ref{dilab-thm-gen-flat2trivialclass}, $q_*\Big[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big]$ is a constant function on $M$. Then, by Proposition \ref{dilab-prop-trans-alpha-beta}, we get
\begin{align} \label{dilab-eq1-pf-prop-diff-torsion} \begin{split}
& d_M \mathscr{T}(g^{TN},g^E) \\
= \; & - \int_{0}^{+\infty} \Big\{ d_M \beta_t + \frac{g_2(t)}{2t} d_Mq_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] \Big\} \frac{dt}{t} \\
= \; & - \int_{0}^{+\infty} \Big\{ \frac{\partial}{\partial t}\alpha_t + \frac{g_2(t)}{2t^2} d_Mq_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] \Big\} dt \;.
\end{split} \end{align}
By Theorem \ref{dilab-prop-large-small-time-convergence}, \eqref{dilab-eq-condition-g1g2} and \eqref{dilab-eq1-pf-prop-diff-torsion}, we get \eqref{dilab-eq-prop-diff-torsion}.
\end{proof}

Proceeding in the same way as in \cite[Theorem 3.16]{bl}, we get
\begin{equation}
\tr_\mathrm{s}\Big[ N^{\Lambda^\cdot(\overline{T^*N})} \exp\big(-tD^{E,2}_N\big) \Big] = \chi'(N,E) + \mathscr{O}\big(t^{-1}\big) \;, \hspace{5mm}\text{as } t\rightarrow +\infty \;.
\end{equation}
For $s\in\mathbb{C}$ with $\mathrm{Re}(s)>n$, we define
\begin{equation} \label{dilab-eq-def-zeta}
\theta(s) = -\frac{1}{\Gamma(s)}\int_0^{+\infty} t^{s-1} \left[ \tr_\mathrm{s} \big[ N^{\Lambda^\cdot(\overline{T^*N})}\exp\big(-tD^{E,2}_N\big) \big] - {\chi}'(N,E) \right] dt \index{theta@$\theta(s)$}\;.
\end{equation}
By \cite{s}, the function $\theta(s)$ admits a meromorphic continuation to the whole complex plane, which is regular at $0\in\mathbb{C}$. Let $\mathscr{T}^{[0]}(g^{TN},g^E)$ be the component of $\mathscr{T}(g^{TN},g^E)$ of degree zero.

\begin{prop} \label{dilab-prop-torsion-degzero}
We have
\begin{equation} \label{dilab-eq-prop-torsion-degzero}
\mathscr{T}^{[0]}(g^{TN},g^E) = \frac{1}{2}{\theta}'(0) \;.
\end{equation}
\end{prop}

\begin{proof}
By \eqref{dilab-eq-lc-superconnection-rescaling} and \eqref{dilab-eq-def-alphatbetat}, we get
\begin{align} \begin{split} \label{dilab-eq1-pf-prop-torsion-degzero}
\beta_t^{[0]} = \; & \tr_\mathrm{s}\Big[ \frac{N^{\Lambda^\cdot(\overline{T^*N})}}{2}\big(1-2tD^{E,2}_N\big) \exp\big(-tD^{E,2}_N\big) \Big] \\
= \; & \frac{1}{2}\Big(1+2t\frac{\partial}{\partial t}\Big)\tr_\mathrm{s}\Big[ N^{\Lambda^\cdot(\overline{T^*N})} \exp\big(-tD^{E,2}_N\big) \Big] \;.
\end{split} \end{align}
By \eqref{dilab-eq-a1-proof-prop-small-time-convergence}, there exist $a_{-1},a_0\in\mathbb{C}$ such that, as $t\rightarrow 0$,
\begin{equation} \label{dilab-eq2-pf-prop-torsion-degzero}
\tr_\mathrm{s}\Big[ N^{\Lambda^\cdot(\overline{T^*N})} \exp\big(-tD^{E,2}_N\big) \Big] = a_{-1}t^{-1} + a_0 + \mathscr{O}\big(\sqrt{t}\big) \;.
\end{equation}
By \eqref{dilab-eq-small-time-convergence}, \eqref{dilab-eq1-pf-prop-torsion-degzero} and \eqref{dilab-eq2-pf-prop-torsion-degzero}, we get
\begin{equation} \label{dilab-eq3-pf-prop-torsion-degzero}
a_0 = - q_*\left[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\right] + n \chi(N,E) \;.
\end{equation}
By \eqref{dilab-eq-def-zeta}, \eqref{dilab-eq2-pf-prop-torsion-degzero} and \eqref{dilab-eq3-pf-prop-torsion-degzero}, we get
\begin{equation} \label{dilab-eq5-pf-prop-torsion-degzero}
{\theta}(0) = q_*\left[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\right] - n \chi(N,E) + {\chi}'(N,E) \;.
\end{equation}
By Definition \ref{dilab-def-analytic-torsion-form}, \eqref{dilab-eq-condition-g1g2-zeta}, \eqref{dilab-eq-def-zeta} and \eqref{dilab-eq1-pf-prop-torsion-degzero}, we have
\begin{align} \label{dilab-eq6-pf-prop-torsion-degzero} \begin{split}
& \mathscr{T}^{[0]}(g^{TN},g^E) \\
= \; & - \int_{0}^{+\infty} \Big\{ \frac{1}{2}\Big(1+2t\frac{\partial}{\partial t}\Big)\tr_\mathrm{s}\Big[ N^{\Lambda^\cdot(\overline{T^*N})} \exp\big(-tD^{E,2}_N\big) \Big] - \frac{1}{2} {\chi}'(N,E) \\
& \hspace{10mm} + \frac{g_1(t)}{2} \left( q_*\Big[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] - n\chi(N,E) + {\chi}'(N,E) \right) \\
& \hspace{10mm} + \frac{g_2(t)}{2t} q_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] \Big\} \frac{dt}{t} \\
= \; & - \frac{1}{2} \frac{d}{ds} \Big|_{s=0} \frac{1}{\Gamma(s)} \int_{0}^{+\infty} t^{s-1}\Big(1+2t\frac{\partial}{\partial t}\Big) \Big\{ \tr_\mathrm{s}\Big[ N^{\Lambda^\cdot(\overline{T^*N})} \exp\big(-tD^{E,2}_N\big) \Big] \\
&\hspace{75mm} - {\chi}'(N,E) \Big\} dt \\
& - \frac{1}{2} \frac{d}{ds}\Big|_{s=0} \frac{1}{\Gamma(s)}\int_0^{+\infty} t^{s-1} g_1(t) dt \, \Big( q_*\Big[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] \\
&\hspace{75mm} - n\chi(N,E) + {\chi}'(N,E) \Big) \\
& - \frac{1}{2} \frac{d}{ds}\Big|_{s=0} \frac{1}{\Gamma(s)}\int_0^{+\infty} t^{s-2} g_2(t) dt \, q_*\Big[\frac{\omega}{2\pi}\mathrm{Td}(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] \\
= \; & \frac{d}{ds}\Big|_{s=0} \frac{1-2s}{2}\theta(s) + q_*\Big[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] -n\chi(N,E) + {\chi}'(N,E) \\
= \; & \frac{1}{2}{\theta}'(0) - \theta(0) + q_*\Big[\mathrm{Td}'(TN, \nabla^{TN})\mathrm{ch}(E, \nabla^E)\Big] -n\chi(N,E) + {\chi}'(N,E) \;.
\end{split} \end{align}
By \eqref{dilab-eq5-pf-prop-torsion-degzero} and \eqref{dilab-eq6-pf-prop-torsion-degzero}, we obtain \eqref{dilab-eq-prop-torsion-degzero}.
\end{proof}

\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}

\begin{thebibliography}{BGS88b}

\bibitem[BeGV04]{bgv} N.~Berline, E.~Getzler, and M.~Vergne, \emph{Heat kernels and {D}irac operators}, Grundlehren Text Editions, Springer-Verlag, Berlin, 2004, Corrected reprint of the 1992 original. \MR{2273508 (2007m:58033)}

\bibitem[BGS88a]{bgs2} J.-M. Bismut, H.~Gillet, and C.~Soul{\'e}, \emph{Analytic torsion and holomorphic determinant bundles. {II}. {D}irect images and {B}ott-{C}hern forms}, Comm. Math. Phys. \textbf{115} (1988), no.~1, 79--126.
\bibitem[BGS88b]{bgs3} \bysame, \emph{Analytic torsion and holomorphic determinant bundles. {III}. {Q}uillen metrics on holomorphic determinants}, Comm. Math. Phys. \textbf{115} (1988), no.~2, 301--351.

\bibitem[B86]{b} J.-M. Bismut, \emph{The {A}tiyah-{S}inger index theorem for families of {D}irac operators: two heat equation proofs}, Invent. Math. \textbf{83} (1986), no.~1, 91--151.

\bibitem[B97]{b97} \bysame, \emph{Holomorphic families of immersions and higher analytic torsion forms}, Ast\'erisque (1997), no.~244, viii+275.

\bibitem[BK92]{bk} J.-M. Bismut and K.~K{\"o}hler, \emph{Higher analytic torsion forms for direct images and anomaly formulas}, J. Algebraic Geom. \textbf{1} (1992), no.~4, 647--684.

\bibitem[BL95]{bl} J.-M. Bismut and J.~Lott, \emph{Flat vector bundles, direct images and higher real analytic torsion}, J. Amer. Math. Soc. \textbf{8} (1995), no.~2, 291--363.

\bibitem[BZ92]{bz} J.-M. Bismut and W.~Zhang, \emph{An extension of a theorem by {C}heeger and {M}\"uller}, Ast\'erisque (1992), no.~205, 235, With an appendix by Fran{\c{c}}ois Laudenbach.

\bibitem[Ch79]{c-cm} J.~Cheeger, \emph{Analytic torsion and the heat equation}, Ann. of Math. (2) \textbf{109} (1979), no.~2, 259--322.

\bibitem[M{\"u}78]{m-cm} W.~M{\"u}ller, \emph{Analytic torsion and {$R$}-torsion of {R}iemannian manifolds}, Adv. in Math. \textbf{28} (1978), no.~3, 233--305.

\bibitem[M{\"u}93]{m-cm-2} \bysame, \emph{Analytic torsion and {$R$}-torsion for unimodular representations}, J. Amer. Math. Soc. \textbf{6} (1993), no.~3, 721--753.

\bibitem[RS71]{rs} D.~B. Ray and I.~M. Singer, \emph{{$R$}-torsion and the {L}aplacian on {R}iemannian manifolds}, Advances in Math. \textbf{7} (1971), 145--210.

\bibitem[RS73]{rs2} \bysame, \emph{Analytic torsion for complex manifolds}, Ann. of Math. (2) \textbf{98} (1973), 154--177.

\bibitem[Se67]{s} R.~T. Seeley, \emph{Complex powers of an elliptic operator}, Singular {I}ntegrals ({P}roc. {S}ympos. {P}ure {M}ath., {C}hicago, {I}ll., 1966), Amer. Math. Soc., Providence, R.I., 1967, pp.~288--307.

\bibitem[Zh16]{yzhang} Y.~Zhang, \emph{A {R}iemann-{R}och-{G}rothendieck theorem for flat fibrations with complex fibers}, C. R. Math. Acad. Sci. Paris \textbf{354} (2016), no.~4, 401--406.

\end{thebibliography}

\end{document}
\begin{document}

\title{Conjugacy and Centralizers in Groups of Piecewise Projective Homeomorphisms}

\author{Francesco Matucci}
\thanks{The first author is a member of the Gruppo Nazionale per le Strutture Algebriche, Geometriche e le loro Applicazioni (GNSAGA) of the Istituto Nazionale di Alta Matematica (INdAM) and gratefully acknowledges the support of the Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP Jovens Pesquisadores em Centros Emergentes grant 2016/12196-5), of the Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq Bolsa de Produtividade em Pesquisa PQ-2 grant 306614/2016-2), and of the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (CEMAT-Ci\^encias FCT projects UID/MULTI/04621/2019 and UIDB/04621/2020) and of the Universit\`a degli Studi di Milano - Bicocca (FA project ATE-2016-0045 ``Strutture Algebriche'').}
\address{Università degli Studi di Milano-Bicocca, Italy}
\email{\href{mailto:[email protected]}{[email protected]}}

\author{Altair Santos de Oliveira-Tosti}
\thanks{This work is part of the second author's PhD thesis at the University of Campinas. The second author gratefully acknowledges support from CNPq (grant 140876/2017-0) and Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES).}
\address{Northern Paraná State University, Brazil}
\email{\href{mailto:[email protected]}{[email protected]}}

\begin{abstract}
Monod introduced in \cite{Mon2013} a family of Thompson-like groups which provides natural counterexamples to the von Neumann-Day conjecture. We give a characterization of conjugacy, construct a conjugacy invariant, and use them to compute centralizers in one group of this family.
\end{abstract}

\maketitle
\markboth{Conjugacy and Centralizers in Groups of Piecewise Projective Homeomorphisms}{Francesco Matucci and Altair Santos de Oliveira-Tosti}

\section{Introduction}

The von Neumann conjecture states that a group is non-amenable if and only if it contains non-abelian free subgroups. It was formulated in 1957 by Mahlon Marsh Day and disproved in 1980 by Alexander Ol'shanskii in \cite{Olshanskii1980} through a non-amenable Tarski monster group without any non-abelian free subgroup. The historically first potential counterexample to this conjecture is Thompson's group $F$ of piecewise-linear homeomorphisms of the real line. The group $F$ does not contain any non-abelian free subgroup, but is still not known to be amenable. Nicolas Monod introduced in \cite{Mon2013} a class of groups $H(A)$, depending on a subring $A$ of $\mathbb{R}$, providing another family of counterexamples to the von Neumann-Day conjecture. Monod's groups are very natural and ``Thompson-like'' as they are described by piecewise projective homeomorphisms of the real line. Later on, Yash Lodha and Justin Moore \cite{LodhaMoore2016} found that \(H\left(\mathbb{Z}[1/\sqrt{2}]\right)\) contains a finitely presented subgroup, thus providing the first torsion-free finitely presented counterexample.

Thompson-like groups have been extensively studied from the point of view of decision problems. Decision problems play an important role in group theory, giving a measure of the complexity of groups. A finitely presented group \(G\) has \textit{solvable conjugacy problem} if there exists an algorithm which, given \(y,z\in G\), determines whether or not there is an element \(g\in G\) such that \(g^{-1}yg=z\). This problem has been studied for many classes of groups and is, in general, unsolvable.
The conjugacy problem has been studied for several Thompson-like groups \cite{BarDunRob2016,BelkMat2014,BurMatVent2016,GillShort2013,GubaSapir1997,Higman1974,KassMat2012, Mat2010,Robertson2019,Salazar2010}. Monod's groups share features exploited in approaches to the conjugacy problem, such as being a topological full group. In this paper we exploit these features to understand conjugacy in Monod's group $H\coloneqq H(\mathbb{R})$ and find a criterion (Corollary \ref{thm:characterize-conjugacy} below) to establish conjugacy within the group.

Matthew Brin and Craig Squier construct in \cite{BrinSquier2001} a conjugacy invariant in the infinitely generated group $\mathrm{PL}_{+}(\mathbb{R})$ of all piecewise-linear homeomorphisms of the real line with finitely many breakpoints and use it to compute element centralizers by adapting techniques developed in \cite{Mather1974}. This invariant has later been revisited in \cite{GillShort2013,Mat2010}, and we adapt it in Theorem \ref{matconj} below to produce our own version of this invariant and compute centralizers:

\begin{thmx}\label{thm:intro-a}
For every \(z\in H\), we have
\[C_{H}\left(z\right)\cong\left(\mathbb{Z},+\right)^{n}\times\left(\mathbb{R},+\right)^{m}\times H^{k},\]
for suitable \(k,m,n\in\mathbb{Z}_{\geq0}\).
\end{thmx}

Several of our results adapt to the general Monod groups $H(A)$ for a subring $A$ of $\mathbb{R}$, but there are some for which the proofs given for $H$ do not immediately apply to the groups $H(A)$. More precisely, the results of Section \ref{sec:stair} can be easily rephrased and proved for $H(A)$, while those from Sections \ref{sec:mather} and \ref{sec:centralizer} may also extend, although our proofs do not directly apply to $H(A)$.

The work is organized as follows: in Section \ref{sec:monod}, we define Monod groups and present some basic properties, some of which are shared with Thompson's group \(F\). In Section \ref{sec:stair}, we discuss a characterization of conjugacy, which is an adaptation of the \textit{Stair Algorithm}, developed by Kassabov and the first author in \cite{KassMat2012}. In Section \ref{sec:mather}, we define a conjugacy invariant (the \textit{Mather invariant}) for a class of elements by adapting techniques developed in \cite{Mather1974}, and we show the relation between the Stair Algorithm and the Mather invariant. In Section \ref{sec:centralizer}, we compute the centralizer subgroups of elements from \(H\) as applications of the preceding tools.

\noindent \textbf{Acknowledgments:} The authors would like to thank James Belk, Martin Kassabov and Slobodan Tanushevski for helpful conversations about the present work. The authors would also like to thank an anonymous referee for helpful suggestions that improved the readability of the paper.
\section{Monod's Groups}\label{sec:monod}

In this section, we introduce the groups of piecewise projective orientation-preserving homeomorphisms of $\mathbb{R}\mathrm{P}^{1}$ which stabilize infinity and discuss some of their properties. These groups are called \textit{Monod's groups} and they were introduced by Nicolas Monod in \cite{Mon2013}.

We now introduce the notation that will be used in the paper. If \(A\) is a subring of \(\mathbb{R}\) with unit, the group of M\"obius transformations \(\mathrm{PSL}_{2}\left(A\right)\), under composition of functions, is the group of transformations of the real projective line $\mathbb{R}\mathrm{P}^{1}=\mathbb{R}\cup \{\infty\}$ of the form $f\colon t \mapsto \dfrac{at+b}{ct+d}$ for $a,b,c,d \in A$, where the determinant of the associated matrix $M_f= \left( \begin{matrix} a & b \\ c & d \end{matrix} \right) $ is equal to $1$. We say that $f$ is \textbf{hyperbolic} if $|\mathrm{tr}(M_f)|>2$.

We consider the group $\mathrm{PPSL}_2(A)$ of piecewise projective homeomorphisms of $\mathbb{R}\mathrm{P}^{1}$ with multiplication given by composition of functions. We say that \(f\in \mathrm{PPSL}_2(A)\) if there are finitely many points \(t_{0},t_{1},\ldots,t_{n}\in\mathbb{R}\mathrm{P}^{1}\) so that on each interval \((-\infty,t_0]\), $[t_{i},t_{i+1}]$, \(i=0,1,\ldots,n-1\) and $[t_n,\infty)$ the map is a M\"obius transformation
\[f\colon t \mapsto \dfrac{a_it+b_i}{c_it+d_i}, \]
\noindent where \(a_id_i-c_ib_i=1\), for suitable \(a_i,b_i,c_i,d_i \in A\). \emph{Monod's group} $H(A)$ is the subgroup of $\mathrm{PPSL}_2(A)$ of the elements $f$ such that \(f(\infty)=\infty\) and the points $t_0, \ldots, t_n$ lie in the set \(\mathcal{P}_{A}\) of fixed points of hyperbolic M\"obius transformations in \(\mathrm{PSL}_{2}\left(A\right)\). In the case $A=\mathbb{R}$ we simply write $H(\mathbb{R})=H$.
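For instance (an illustrative example, with the numbers chosen only for concreteness), the M\"obius transformation associated with
\[ M_f= \left( \begin{matrix} 2 & 1 \\ 1 & 1 \end{matrix} \right), \qquad f\colon t \mapsto \dfrac{2t+1}{t+1}, \]
lies in $\mathrm{PSL}_{2}\left(A\right)$ for every subring $A$ of $\mathbb{R}$ with unit and has $|\mathrm{tr}(M_f)|=3>2$, so it is hyperbolic; its fixed points $\frac{1\pm\sqrt{5}}{2}$ therefore belong to $\mathcal{P}_{A}$. For $A=\mathbb{R}$ one can do better: given any $x\in\mathbb{R}$, the map $t\mapsto 2t-x$, with associated matrix $\left( \begin{matrix} \sqrt{2} & -x/\sqrt{2} \\ 0 & 1/\sqrt{2} \end{matrix} \right)$, is hyperbolic and fixes $x$ (and $\infty$), so that every point of $\mathbb{R}\mathrm{P}^{1}$ lies in $\mathcal{P}_{\mathbb{R}}$ and elements of $H$ may have breakpoints at arbitrary points.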
We say that a point $t_{0}\in \mathcal{P}_{A}$ is a \textbf{breakpoint} of $f\in\mathrm{PPSL}_2(A)$ if there exists an $\varepsilon >0$ such that there do not exist $a,b,c,d \in A$, where $ad-cb=1$ and $f(t)= \dfrac{at+b}{ct+d}$ on $(t_{0}-\varepsilon,t_{0}+\varepsilon)$.

One of the requirements to adapt the Stair Algorithm to this setting is to be able to simultaneously send a tuple of intervals to another such tuple, which means having a form of transitivity. We need \(H\) to act order \(k\)-transitively on \(\mathbb{R}\mathrm{P}^{1}\), and this is a property shared with Thompson's group \(F\). The proof of the following result is analogous to the one for $\mathrm{PL}_+(\mathbb{R})$.

\begin{lem}\label{monod-transitive}
Let \(t_{1}<t_{2}<\ldots<t_{k}\) and \(s_{1}<s_{2}<\ldots<s_{k}\) be elements from \(\mathbb{R}\mathrm{P}^{1}\setminus \{\infty\}\). Then there exists \(f\in H\) such that \(f(t_{i})=s_{i}\), for all \(i=1,2,\ldots,k\).
\end{lem}

\begin{proof}
For all $i\in\left\{1,2,\ldots,k-1\right\}$, let us consider the intervals $[t_{i},t_{i+1}]$ and $[s_{i},s_{i+1}]$. Since $\mathrm{PSL}_2(\mathbb{R})$ is $2$-transitive on $\mathbb{R}\mathrm{P}^{1}$ (see \cite[Theorem 5.2.1 $(ii)$]{JonSin1987}) there exists an element $f_{i}\in \mathrm{PSL}_2(\mathbb{R})$ such that
\begin{displaymath}
f_{i}(t_{i})=s_{i}\;\;\textrm{and}\;\;f_{i}(t_{i+1})=s_{i+1}.
\end{displaymath}
\noindent Thus, it is enough to glue together these maps with two functions $f_{0},f_{k}\in\mathrm{PSL}_2\left(\mathbb{R}\right)$ defined on $\left(-\infty,t_{1}\right]$ and $\left[t_{k},+\infty\right)$, respectively, as
\begin{displaymath}
f_{0}(t)=\dfrac{a_{0}t+b_{0}}{d_{0}}\;\;\textrm{and}\;\;f_{k}(t)=\dfrac{a_{k}t+b_{k}}{d_{k}},
\end{displaymath}
\noindent where $a_{0}d_{0}=a_{k}d_{k}=1$ and $a_{0},b_{0},d_{0},a_{k},b_{k},d_{k}$ are chosen in such a way that $f_{0}(t_{1})=s_{1}$ and $f_{k}(t_{k})=s_{k}$. To finish, we construct the following element from $H$
\[ f(t)\coloneqq\begin{dcases} f_{0}(t),&\;\textrm{if}\;t\in\left(-\infty,t_{1}\right]\\ f_{i}(t),&\;\textrm{if}\;t\in\left[t_{i},t_{i+1}\right]\\ f_{k}(t),&\;\textrm{if}\;t\in\left[t_{k},+\infty\right) \end{dcases} \]
for $i\in\left\{1,2,\ldots,k-1\right\}$, so that $f(t_{i})=s_{i}$, for all $i\in\left\{1,2,\ldots,k\right\}$.
\end{proof}

\begin{rem}
The proof that Lemma \ref{monod-transitive} is true for $H$ does not immediately carry over to $H(A)$, for a subring $A$ of $\mathbb{R}$, as we are not aware of a transitivity result for fixed points of hyperbolic M\"obius transformations. In this paper we sometimes make use of Lemma \ref{monod-transitive} and, in these instances, our proofs do not immediately carry over to $H(A)$, although it is not clear that they cannot be achieved through a different route. Several of the results of this paper carry over to $H(A)$, while for others we cannot immediately say that they do.
\end{rem}

If \(f\in H\left(A\right)\), there are finitely many points \(t_{1},t_{2},\ldots,t_{n} \in \mathcal{P}_{A}\) such that on each interval \(\left(-\infty,t_{1}\right]\), \(\left[t_{i},t_{i+1}\right]\) for $i=1,\ldots,n-1$, and \(\left[t_{n},+\infty\right)\) we have \(f\colon t \mapsto \left(a_it+b_i\right)/\left(c_it+d_i\right)\), where \(a_id_i-c_ib_i=1\), for suitable \(a_i,b_i,c_i,d_i\in A\).
Since $f(\pm \infty)=\pm \infty$, we must have $c_0=c_n=0$ and so \(f\colon t \mapsto (a_0t+b_0)/d_0\quad\textrm{and}\quad f\colon t \mapsto (a_nt+b_n)/d_n\) on \(\left(-\infty,t_{1}\right]\) and \(\left[t_{n},+\infty\right)\), respectively, where \(a_{0}d_{0}=1=a_{n}d_{n}\), for \(a_{0}, a_{n}, b_{0},b_{n}\in A\). Then we can say that elements in \(H\left(A\right)\) have \textbf{affine germs} at \(\pm\infty\). In other words, we can rewrite \(f\) as \(f\left(t\right)=a_{0}^{2}t+a_{0}b_{0}\) for all \(t\in \left(-\infty,t_{1}\right]\), since \(a_{0}d_{0}=1\). Similarly, we can rewrite \(f\) as \(f\left(t\right)=a_{n}^{2}t+a_{n}b_{n}\) for all \(t\in \left[t_{n},+\infty\right)\), since \(a_{n}d_{n}=1\).

\begin{rem}\label{Units-Translations}\cite{BurLodRee2018}
Notice that, for every element of \(H\left(A\right)\), the slopes \(a_{0}^{2}\) and \(a_{n}^{2}\) of the germs at infinity are units of the ring \(A\). Thus, if the only units of \(A\) are \(\pm1\), the first and last parts of maps in \(H\left(A\right)\) are translations. For instance, if \(A = \mathbb{Z}\), the only possibility is that \(a_{0}^{2}=a_{n}^{2}=1\).
\end{rem}

A property that is inherently used while studying the conjugacy problem in the works \cite{KassMat2012,Mat2010}, and which we will adapt to Monod's group \(H\), is that the Thompson-Stein groups $\mathrm{PL}_{A,G}(I)$, defined for a subring $A$ of $\mathbb{R}$ and a subgroup $G$ of the positive units of $A$, are full groups.

\begin{defi}\index{Full group}
Let $G$ be a group of homeomorphisms of some topological space $X$.
\begin{itemize}
\item[(a)] A homeomorphism $h$ of $X$ \textbf{locally agrees} with $G$ if for every point $p\in X$, there exists a neighborhood $U$ of $p$ and an element $g\in G$ such that \[\restr{h}{U}=\restr{g}{U}.\] We denote the set of all homeomorphisms of $X$ which locally agree with $G$ by $\left[G\right]$;
\item[(b)] The group $G$ is \textbf{full} if every homeomorphism of $X$ that locally agrees with $G$ belongs to $G$. In other words, $G$ is a full group if $G=\left[G\right]$.
\end{itemize}
\end{defi}

\begin{lem}
Monod's group $H(A)$ is a full group for any subring $A$ of $\mathbb{R}$.
\end{lem}

\begin{proof}
Given a subring $A$ of $\mathbb{R}$ and $h \in [H(A)]$, compactness of $\mathbb{R}\mathrm{P}^{1}$ implies that $h$ has only finitely many breakpoints, as it locally agrees with maps from $H(A)$. Moreover, $h$ must have affine germs around $\pm \infty$, since it coincides with some element from $H(A)$ near $\pm \infty$, and so $h\in H(A)$. Therefore $[H(A)]\subseteq H(A)$, and so $H(A)$ is a full group.
\end{proof}

We finally recall another property of Monod's group which is shared with Thompson's group $F$ (see \cite{Mon2013}).

\begin{lem} \label{thm:torsion-free}
Monod's group $H(A)$ is torsion-free for any subring $A$ of $\mathbb{R}$.
\end{lem}

For more properties of Monod's groups, we encourage the interested reader to consult the references \cite{BurLodRee2018, Mon2013}.

\section{The Stair Algorithm}\label{sec:stair}

In this section, we adapt the \textit{Stair Algorithm} developed in \cite{BurMatVent2016,KassMat2012}. If there exists a conjugator between two elements, this algorithm allows us to construct such a conjugator from an ``initial germ''. The algorithm constructs the conjugator by looking at necessary conditions it should satisfy and building it piece by piece until we reach the so-called ``final box'', where the construction ends.
We show that, if a conjugator exists, it has to coincide with the homeomorphism we construct. In the following, if \(y,z\in H\) and there is a \(g\in H\) such that \(g^{-1}yg=z\), we will write \(y^{g}=z\).

\subsection{Notations}
Let us fix some notation. Given $h \in H$, we define the \textbf{support} of $h$ to be $\mathrm{supp}(h)=\{t \in \mathbb{R} \mid h(t) \ne t\}$.

\begin{defi}
Let \(G\) be any subset of \(H\). We define \(G^{>}\) as the subset of \(G\) of all maps that lie above the diagonal, that is,
\[G^{>}\coloneqq\left\{g\in G\mid g\left(t\right)>t,\;\forall\; t\in\mathbb{R} \right\}.\]
Similarly, we define \(G^{<}\). A homeomorphism $g \in G^{>} \cup G^{<}$ is called a \textbf{one-bump function}. Moreover, for every \(-\infty\leq p<q \leq+\infty\), we define \(G\left(p,q\right)\) as the set of elements of \(G\) with support contained inside \(\left(p,q\right)\), that is,
\[G(p,q)\coloneqq\left\{g\in G \mid g\left(t\right)=t,\;\forall\; t\notin(p,q) \right\}.\]
We also define the subset
\[G^{>}(p,q)\coloneqq\left\{g\in G \mid g\left(t\right)=t,\;\forall\; t\notin(p,q)\;\text{and}\;g\left(t\right)>t,\;\forall\;t\in(p,q)\right\}.\]
Analogously, we define \(G^{<}\left(p,q\right)\). If $g \in G^{>}\left(p,q\right) \cup G^{<}\left(p,q\right)$, we say that \(g\) is a \textbf{one-bump function on \(\left(p,q\right)\)}.
\end{defi}

\begin{rem}
If \(G\) is a subgroup, then \(g\in G^{>}\) if, and only if, \(g^{-1}\in G^{<}\).
\end{rem}

\indent Since elements \(f\in H\) are defined for all real numbers, we will define suitable ``boxes'' around \(\pm\infty\). In order to work with numbers sufficiently close to \(\pm\infty\), we give the next definition.

\begin{defi}
We say that a property \(\mathcal{P}\) holds \textbf{for \(t\) negative sufficiently large} (respectively, \textbf{for \(t\) positive sufficiently large}) if there exists a real number \(L<0\) such that \(\mathcal{P}\) holds for every \(t\leq L\) (respectively, if there is a positive real number \(R\) so that \(\mathcal{P}\) holds for every \(t\geq R\)).
\end{defi}

\subsection{Necessary Conditions}
In \cite{KassMat2012}, Kassabov and the first author worked with the initial and final slopes of elements from \(\mathrm{PL}_{+}\left(\left[0,1\right]\right)\). If two elements from \(\mathrm{PL}_{+}\left(\left[0,1\right]\right)\) are conjugate, they coincide on suitable ``boxes'' around $0$ and $1$. Let us define similar concepts for elements from \(H\). Given \(y\in H\), let us denote the slope of \(y\) for \(t\) negative sufficiently large as
\[y'\left(-\infty\right)\coloneqq \lim\limits_{t\rightarrow -\infty}y'\left(t\right).\]
Similarly, we denote the slope of \(y\) for \(t\) positive sufficiently large as \(y'\left(+\infty\right)\). However, if two elements from \(H\) have the same slopes for \(t\) negative sufficiently large, they do not necessarily coincide around \(-\infty\). Thus, in order to ensure that two elements coincide for \(t\) negative sufficiently large, we give the following definition.

\begin{defi}\label{def:germs}
We define the \textbf{germ of \(y \in H\) at \(-\infty\)} as the pair
\[y_{-\infty} \coloneqq \left(y'\left(-\infty\right),y\left(L\right)-y'\left(-\infty\right)L\right),\]
\noindent where \(L\) is the largest real number for which \(y\) is affine with slope \(y'\left(-\infty\right)\) on the interval \(\left(-\infty,L\right]\). If $y$ is affine on $\mathbb{R}$, then $L$ can be taken to be any real number. We call \(y_{-\infty}\) the \textbf{initial germ}.
Analogously, we define the \textbf{final germ} \(y_{+\infty}\).
\end{defi}

\indent We remark that, for an element \(y\in H\), the initial germ \(y_{-\infty}\) and the final germ \(y_{+\infty}\) are elements of the \textbf{affine group} \(\mathrm{Aff}\left(\mathbb{R}\right)\), which is defined as the semidirect product \(\mathrm{Aff}\left(\mathbb{R}\right)\coloneqq\mathbb{R}_{>0}\ltimes\mathbb{R}\), where \(\mathbb{R}_{>0}\) denotes the multiplicative group \(\left(\mathbb{R}_{>0},\cdot\right)\) and \(\mathbb{R}\) denotes the additive group \(\left(\mathbb{R},+\right)\). The operation of this group is \((a,b)(c,d)\coloneqq (ac,b+ad)\). The identity element is \(\left(1,0\right)\) and inverses are given by \(\left(a,b\right)^{-1}=\left(a^{-1},-a^{-1}b\right)\).

The following observation on slopes is the first necessary condition we test for conjugacy. Its proof is a straightforward calculation.

\begin{lem}\label{monod-initialslope}
Let \(y,z\in H\) be such that \(y^{g}=z\), for some \(g\in H\). Then we have \(y'\left(-\infty\right)=z'\left(-\infty\right)\) and \(y'\left(+\infty\right)=z'\left(+\infty\right)\).
\end{lem}

The next necessary condition we observe is that if the conjugacy classes of the germs of \(y,z\in H\) at \(-\infty\), or at \(+\infty\), are different, then \(y\) and \(z\) cannot be conjugate.

\begin{lem}\label{monod-germsconj}
Let \(y,z\in H\) be such that \(y^{g}=z\) for some \(g\in H\). Then the conjugacy classes $y_{-\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}$ and $z_{-\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}$ of $y_{-\infty}$ and $z_{-\infty}$ inside $\mathrm{Aff}\left(\mathbb{R}\right)$ coincide. Similarly, we have \(y_{+\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}= z_{+\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}\).
\end{lem}

\begin{proof}
Assume that \(g_{-\infty}=\left(a^{2},ab\right)\) and \(y_{-\infty}=\left(a_{0}^{2},a_{0}b_{0}\right)\). Since $y^g=z$, it is straightforward to see that, for \(t\) negative sufficiently large, we have that
\begin{align*}
z_{-\infty}=(y^g)_{-\infty}=y_{-\infty}^{g_{-\infty}}=(a^{-2},-a^{-1}b)\cdot(a_{0}^{2},a_{0}b_{0})\cdot (a^{2},ab)=\\
\left(a_{0}^{2},a_{0}b_{0}a^{-2}+\left(a_{0}^{2}-1\right)a^{-1}b\right).
\end{align*}
Thus \(y_{-\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}= z_{-\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}\). Similarly, we see \(y_{+\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}= z_{+\infty}^{\mathrm{Aff}\left(\mathbb{R}\right)}\).
\end{proof}

From now on, if \(y_{-\infty}\) and \(z_{-\infty}\) are conjugate in \(\mathrm{Aff}\left(\mathbb{R}\right)\), we will denote this by \(y_{-\infty}\sim_{\mathrm{Aff}\left(\mathbb{R}\right)}z_{-\infty}\).

\subsection{Initial and final boxes}
In this subsection, we see that a possible conjugator between two given elements is determined by its germs inside suitable boxes.

\begin{lem}[Initial and final boxes]\label{monod-initialbox}
Let \(y,z\in H^{>}\left(-\infty,p\right)\) for some \(-\infty<p\leq+\infty\) and let \(g\in H\) be such that \(y^{g}=z\). Then there exists a constant \(L\in\mathbb{R}\) (depending on \(y\) and \(z\)) such that \(g\) is affine on the initial box \(\left(-\infty,L\right]^{2}\). An analogous result holds for \(y,z\in H^{>}\left(p,+\infty\right)\), for some \(-\infty\leq p<+\infty\), and a final box \(\left[R,+\infty\right)^{2}\).
\end{lem}

\begin{proof}
By Lemma \ref{monod-initialslope}, there exists an $L<\min\{0,p\}$ such that \(y'\left(t\right)=z'\left(t\right)\) for \(t\leq L\). Up to replacing $L$ by a suitable $L_1 < L$, we can also assume that both \(y\) and \(z\) are affine on \(\left(-\infty,L\right]\). Assume that \(g_{-\infty}=\left(a^{2},ab\right)\) and \(y_{-\infty}=\left(a_{0}^{2},a_{0}b_{0}\right)\). Then, following the same calculations as in Lemma \ref{monod-germsconj}, we have
\[y\left(t\right)=a_{0}^{2}t+a_{0}b_{0}\;\text{and}\; z\left(t\right)=a_{0}^{2}t+a_{0}b_{0}a^{-2}+a^{-1}b(a_{0}^{2}-1)\]
for all \(t\leq L\) and for suitable \(a,b\in\mathbb{R}\). We can rewrite our goal as follows: if we define
\[\widetilde{L}\coloneqq \sup\left\{r \mid g\;\text{is affine on}\;\left(-\infty,r\right]\right\},\]
\noindent then \(\widetilde{L}\geq\min\left\{L,g^{-1}\left(L\right)\right\}\). Let us assume the opposite, that is, \(\widetilde{L}<\min\left\{L,g^{-1}\left(L\right)\right\}\) and
\[g\left(t\right)=\begin{cases} a^{2}t+ab,&\;\text{if}\;t\in\left(-\infty,\widetilde{L}\right], \\ \dfrac{\bar{a}t+\bar{b}}{\bar{c}t+\bar{d}},&\;\text{if}\;t\in\left[\widetilde{L},L_2\right), \end{cases}\]
\noindent for suitable $\bar{a},\bar{b},\bar{c},\bar{d} \in \mathbb{R}$ and $\widetilde{L} < L_2 \le L$ so that $g$ has a breakpoint at $\widetilde{L}$. Without loss of generality, we can assume that $L_2 = L$. Since \(\widetilde{L}<L<0\) and \(z \in H^{>}\left(-\infty,p\right)\), we have $L<z(L)$ and so there exists a real number \(\sigma>1\) such that \(\sigma\widetilde{L}<\widetilde{L}<L\) and \(\widetilde{L}<z(\sigma\widetilde{L})<L\). On the other hand, \(\widetilde{L}<g^{-1}\left(L\right)\) and so \(\sigma\widetilde{L}<g^{-1}\left(L\right)\). Thus we have \(g(\sigma\widetilde{L})<L\), which means that \(y\) is affine around \(g(\sigma\widetilde{L})\) and
\begin{align}\label{linbox-eq1}
y(g(\sigma\widetilde{L}))&= y(a^{2}\sigma\widetilde{L}+ab) = a^{2}(a_{0}^{2}\sigma\widetilde{L})+a_{0}^{2}ab+a_{0}b_{0}\nonumber\\
&=a^{2}(a_{0}^{2}\sigma\widetilde{L}+a^{-2}a_{0}b_{0}+a^{-1}ba_{0}^{2}-a^{-1}b+a^{-1}b)\\
&=a^{2}z(\sigma\widetilde{L})+ab.\nonumber
\end{align}
Since \(gz\left(t\right)=yg\left(t\right)\) for every real number \(t\), equation \(\left(\ref{linbox-eq1}\right)\) yields
\begin{align}\label{linbox-eq2}
g(z(\sigma\widetilde{L}))=a^{2}(z(\sigma\widetilde{L}))+ab,
\end{align}
for any real number \(\sigma>1\) as above. By definition of $g$ we also have that
\begin{align}\label{linbox-eq3}
g(z(\sigma\widetilde{L}))=\dfrac{\bar{a}(z(\sigma\widetilde{L}))+\bar{b}}{\bar{c}(z(\sigma\widetilde{L}))+\bar{d}}.
\end{align}
Then equating (\ref{linbox-eq2}) and (\ref{linbox-eq3}) we see that
\begin{equation}\label{linbox-eq4}
a^{2}(z(\sigma\widetilde{L}))+ab=\dfrac{\bar{a}(z(\sigma\widetilde{L}))+\bar{b}}{\bar{c}(z(\sigma\widetilde{L}))+\bar{d}}.
\end{equation}
By rewriting equation (\ref{linbox-eq4}), we get
\begin{align}\label{linbox-eq5}
a^{2}\bar{c}(z(\sigma\widetilde{L}))^{2}+(ab\bar{c}+a^{2}\bar{d})z(\sigma\widetilde{L})+ab\bar{d}=\bar{a}z(\sigma\widetilde{L})+\bar{b}.
\end{align}
Equation (\ref{linbox-eq5}) is a polynomial equation that holds for all \(\sigma>1\) such that \(\sigma\widetilde{L}<\widetilde{L}<L\) and so, since there is an interval's worth of such $\sigma$'s, either \(a^{2}=0\) or \(\bar{c}=0\). If \(a^{2}=0\), then \(g\) would not be a homeomorphism for \(t<L\), which is impossible.
If \(\bar{c}=0\), then equation \(\left(\ref{linbox-eq5}\right)\), coupled with the fact that $\bar{a}\bar{d}=1$, implies that
\[ a^{2}z(\sigma\widetilde{L})+ab=\bar{a}^2z(\sigma\widetilde{L})+\bar{a}\bar{b}, \]
and so \(g\left(t\right)=a^{2}t+ab\) for \(t\in\left(-\infty,M\right]\) for some \(M>\widetilde{L}\), a contradiction to the definition of the breakpoint \(\widetilde{L}\). Hence, in all cases we have a contradiction to the assumption that \(\widetilde{L}<\min\left\{L,g^{-1}\left(L\right)\right\}\) and so we have that \(\widetilde{L}\geq\min\left\{L,g^{-1}\left(L\right)\right\}\). The proof for the final box is similar.
\end{proof}

\begin{rem}
We notice that Lemma \ref{monod-initialbox} also holds for \(y,z\in H^{<}\left(-\infty,p\right)\), by just applying its statement to \(y^{-1}\) and \(z^{-1}\).
\end{rem}

\subsection{Building a Candidate Conjugator}
In this subsection, we prove several lemmas which show how to build a conjugator, if it exists. If this is the case, we prove that the conjugator with a given initial germ must be unique. Given two elements \(y,z\in H\), the set of their conjugators is a coset of the centralizer of either \(y\) or \(z\). Thus, it is important to begin by obtaining properties of centralizers, which we will do next. After that, we will identify \(y\) and \(z\) inside a box close to the initial box using a suitable conjugator, as mentioned before. Then we repeat this process and build more pieces of this potential conjugator until we reach the final affinity box. We omit the proofs of some of the lemmas, since they follow word for word from the original ones in \cite{KassMat2012} with a slight adaptation in which we use the initial germs. The proofs of the following two results are the same as those of \cite[Lemma 4.4]{KassMat2012} and \cite[Corollary 4.5]{KassMat2012}.

\begin{lem}\label{monod-centralizer-identity}
Let \(z\in H\) and suppose that there exist real numbers \(\lambda\) and \(\mu\) with \(-\infty<\lambda\leq\mu<+\infty\) such that \(z\left(t\right)\leq \lambda\) for all \(t\in\left(-\infty, \mu\right]\), and that there is \(g\in H\) so that \(g\left(t\right)=t\) for all \(t\in \left(-\infty, \lambda\right]\) and \(g^{-1}zg\left(t\right)=z\left(t\right)\) for each \(t\in \left(-\infty, \mu\right]\). Then \(g\) is the identity map up to \(\mu\).
\end{lem}

In the case \(z\in H^{<}\), the previous lemma yields the following consequence.

\begin{cor}\label{g=id}
Let \(z\in H^{<}\) and \(g\in H\) such that \(g_{-\infty}=\left(1,0\right)\) and \(g^{-1}zg=z\). Then \(g\) is the identity map.
\end{cor}

The preceding two results allow us to construct a group monomorphism from the centralizer of an element of \(H^{<}\) into the group \(\mathrm{Aff}\left(\mathbb{R}\right)\), as well as to show the uniqueness of conjugators.

\begin{lem}\label{monod-centralizers}
Given \(z\in H^{<}\), the following map
\begin{align*}
\varphi_{z}\colon C_{H}\left(z\right) &\rightarrow \mathrm{Aff}\left(\mathbb{R}\right)\\
g &\mapsto g_{-\infty},
\end{align*}
\noindent is a group monomorphism.
\end{lem}

\begin{proof}
First of all, for each \(g_{1},g_{2}\in C_{H}\left(z\right)\), with \((g_1)_{-\infty}=\left(a_{0}^{2},a_{0}b_{0}\right)\) and \((g_2)_{-\infty}=\left(\bar{a}_{0}^{2},\bar{a}_{0}\bar{b}_{0}\right)\), there exists \(L\in \mathbb{R}\) so that \(g_{1}g_{2}\left(t\right)=a_{0}^{2}\bar{a}_{0}^{2}t+a_{0}^{2}\bar{a}_{0}\bar{b}_{0}+a_{0}b_{0}\) on $(-\infty,L]$.
Then
\[ (g_{1}g_{2})_{-\infty}=\left(a_{0}^{2}\bar{a}_{0}^{2},a_{0}b_{0}+a_{0}^{2}\bar{a}_{0}\bar{b}_{0}\right)= \left(a_{0}^{2},a_{0}b_{0}\right) \cdot \left(\bar{a}_{0}^{2},\bar{a}_{0}\bar{b}_{0}\right)= (g_1)_{-\infty} (g_2)_{-\infty}, \]
so that \(\varphi_{z}\) is a well-defined group homomorphism. To show injectivity, suppose that \(\varphi_{z}\left(g_{1}\right)=\varphi_{z}\left(g_{2}\right)\) for $g_1,g_2 \in C_H(z)$. Then \(\left(a_{0}^{2},a_{0}b_{0}\right)=\left(\bar{a}_{0}^{2},\bar{a}_{0}\bar{b}_{0}\right)\). Thus there exists a number \(L\in\mathbb{R}\) so that \(g_{1}\left(t\right)=g_{2}\left(t\right)\) for all \(t\in\left(-\infty,L\right]\). Let us define \(g\coloneqq g_{1}g_{2}^{-1}\). We have \(g\left(t\right)=t\) for each \(t\in\left(-\infty,L\right]\). Moreover, we have \(g^{-1}zg=z\). It follows from Corollary \ref{g=id} that \(g\left(t\right)=t\) for all \(t\in\mathbb{R}\). This implies that \(g_{1}\left(t\right)=g_{2}\left(t\right)\) for each \(t\in\mathbb{R}\). Therefore \(\varphi_{z}\) is a monomorphism.
\end{proof}

\begin{prop}[Uniqueness]\label{monod-uniqueness}
Let \(y,z\in H^{<}\) and \(g\in H\) be maps so that \(y^{g}=z\). Then the conjugator \(g\) is uniquely determined by its initial germ \(g_{-\infty}\).
\end{prop}

\begin{proof}
Let us assume that there are \(g_{1},g_{2}\in H\) so that \(g_{1}^{-1}yg_{1}=z\) and \(g_{2}^{-1}yg_{2}=z\) and with the same initial germ. Then \(\left(g_{1}g_{2}^{-1}\right)^{-1}y\left(g_{1}g_{2}^{-1}\right)=y\). Defining \(g\coloneqq g_{1}g_{2}^{-1}\), we get that \(g\left(t\right)=t\) for all \(t\in\left(-\infty,L\right]\), for a suitable \(L\in\mathbb{R}\), which implies that the initial germ of \(g\) is \(g_{-\infty}=\left(1,0\right)\). By Corollary \ref{g=id}, the unique element of \(C_{H}\left(y\right)\) with initial germ $\left(1,0\right)$ is the identity map. Then \(g\left(t\right)=t\) for all \(t\in\mathbb{R}\). Therefore, \(g_{1}=g_{2}\), which proves the uniqueness of a conjugator with a given initial germ, if it exists.
\end{proof}

The next lemma gives a tool to identify the graphs of \(y\) and \(z\) inside suitable boxes via some candidate conjugator.

\begin{lem}[Identification Lemma]\label{monod-idlemma}
Let \(y,z\in H^{<}\) and \(L\in\mathbb{R}\) be such that \(y\left(t\right)=z\left(t\right)\) for all \(t\in\left(-\infty,L\right]\). Then there exists \(g\in H\) so that \(z\left(t\right)=g^{-1}yg\left(t\right)\) for every \(t\in\left(-\infty,z^{-1}\left(L\right)\right]\) and \(g\left(t\right)=t\) in \(\left(-\infty,L\right]\). Moreover, this element \(g\) is uniquely determined on \(\left(L,z^{-1}\left(L\right)\right]\).
\end{lem}

\begin{proof}
We start by showing that, if such a \(g\in H\) exists, then it is uniquely determined on \(\left(L,z^{-1}\left(L\right)\right]\). In fact, if such a \(g\in H\) exists then, for each \(t\in\left(L,z^{-1}\left(L\right)\right]\), we have \(y\left(g\left(t\right)\right)=g\left(z\left(t\right)\right)=z\left(t\right)\), since \(z\left(t\right)\leq L\). Therefore \(g\left(t\right)=y^{-1}z\left(t\right)\), for every \(t\in\left(L,z^{-1}\left(L\right)\right]\). To show existence, we just define
\[g\left(t\right)\coloneqq\begin{dcases} t,&\;\text{if}\;t\in\left(-\infty,L\right]\\ y^{-1}z\left(t\right),&\;\text{if}\;t\in\left[L,z^{-1}\left(L\right)\right] \end{dcases}\]
and we extend it to the real line from the point $\left(z^{-1}\left(L\right),y^{-1}\left(L\right)\right)$, by gluing some order-preserving affine map defined on $[z^{-1}\left(L\right),+\infty)$. We also define $g(\pm \infty)=\pm \infty$.
\end{proof}

We repeatedly apply Lemma \ref{monod-idlemma} so that, if we iterate it $N$ times, we can build \(g\) on \(\left(-\infty, z^{-N}\left(L\right)\right]\); this will be the key step for the Stair Algorithm in the next subsection. We conclude this subsection with a result whose proof can be obtained word for word from \cite[Lemma 4.13]{KassMat2012}.

\begin{lem} \label{power-conj-lemma}
Let \(y,z\in H^{<}\). Let us consider \(g\in H\) and \(n\in\mathbb{Z}_{>0}\). Then \(y^{g}=z\) if, and only if, \(\left(y^{n}\right)^{g}=z^{n}\).
\end{lem}

\subsection{The Stair Algorithm for \(H\)}
We now adapt to $H$ the Stair Algorithm from \cite{KassMat2012} which constructs the unique candidate conjugator between two elements $y,z \in H$ with a given initial germ $(a^2,ab)\in \mathrm{Aff}\left(\mathbb{R}\right)$, that is, an element $g \in H$ such that, if there exists an $h \in H$ so that $h_{-\infty}=(a^2,ab)$ and $y^h=z$, then $h=g$.

\begin{thm}[Stair Algorithm]\label{monod-stair}
Let \(y,z\in H^{<}\) and let \(\left(-\infty,L\right]^{2}\) be the initial box given by $y$ and $z$. Let us consider \(\left(a^{2},ab\right)\in\mathrm{Aff}\left(\mathbb{R}\right)\) so that $a^2L+ab \le L$. Then there exists \(N\in\mathbb{Z}_{>0}\) such that the unique candidate conjugator \(g \in H\) between \(y\) and \(z\) with initial germ \(g_{-\infty}=\left(a^{2},ab\right)\) is given by
\[g\left(t\right)=y^{-N}g_{0}z^{N}\left(t\right),\;\text{for }\;t\in\left(-\infty,z^{-N}\left(L\right)\right],\]
\noindent and affine otherwise, where \(g_{0}\in H\) is an arbitrary homeomorphism which is affine on \(\left(-\infty,L\right]^{2}\) and so that \((g_0)_{-\infty}=\left(a^{2},ab\right)\).
\end{thm}

\begin{rem}
We observe that the hypothesis on $(a^{2},ab)$ is a mild one. It ensures that $g_0(L)\le L$ and so, up to replacing \(g_{0}\) by \(g_{0}^{-1}\) and switching the roles of \(y\) and \(z\), we can always assume that \(a^2L+ab \le L\).
\end{rem}

Before giving the proof of Theorem \ref{monod-stair}, we observe the following corollary and make a comment about completely characterizing conjugacy in Monod's group $H$.

\begin{cor}\label{thm:characterize-conjugacy}
Let \(y,z\in H^{<}\) and let \(\left(-\infty,L\right]^{2}\) and \(\left[R,+\infty\right)^{2}\) be, respectively, the initial and the final box given by $y$ and $z$. There is a $g \in H$ such that $y^g=z$ if and only if there is some $(a^2,ab) \in\mathrm{Aff}\left(\mathbb{R}\right)$ so that $a^2L+ab \le L$ and
\[ \lim_{N \to \infty}y^{-N}g_{0}z^{N}\left(t\right) \]
is affine inside \(\left[R,+\infty\right)^{2}\), where \(g_{0}\in H\) is an arbitrary homeomorphism which is affine on \(\left(-\infty,L\right]^{2}\) and so that \((g_0)_{-\infty}=\left(a^{2},ab\right)\).
\end{cor}

\begin{rem}
Corollary \ref{thm:characterize-conjugacy} gives a characterization of conjugacy inside Monod's group $H$. However, as it stands, this characterization cannot be used to construct a finite set of candidate conjugators and thus solve the conjugacy problem in Monod's group $H(A)$ in a manner analogous to what was done in \cite{KassMat2012} for the Thompson-Stein groups $\mathrm{PL}_{A,G}([0,1])$, where $A \subseteq \mathbb{R}$ is a suitable subring and $G \subseteq U(A)_{>0}$ is a subgroup of the group of the positive units of $A$. We now explain why. Lemma 5.4 in \cite{KassMat2012} shows that there are only finitely many conjugators between $y$ and $z$ whose initial and final slope lie within a bounded interval.
After having used a suitable isomorphism (Lemma \ref{thm:unbounded-to-bounded} below) and thus considering a version of $H$ over the unit interval $[0,1]$, we prove in Lemma \ref{thm:generalized-KasMat-5.3} below a similar result for centralizers with first and second derivative lying within bounded intervals. Even though Lemma \ref{thm:generalized-KasMat-5.3} can indeed be generalized to study conjugators with bounded first and second derivative (to get a result analogous to Lemma 5.4 in \cite{KassMat2012}), we cannot use the same trick of Lemmas 7.1 and 7.2 in \cite{KassMat2012} to bound both derivatives. By replacing a conjugator $g$ with $y^ng$ we can only bound the first derivative (so that it lives in a bounded interval), but have no available bound on the second derivative appearing in Lemma \ref{thm:generalized-KasMat-5.3} below: in other words, we can bound the $a$ appearing in $g_{-\infty}=(a^2,ab)$, but not the $b$, and so we would have to test continuum many $b$'s (or equivalently, continuum many initial germs) to find candidate conjugators.
\end{rem}

\begin{proof}[Proof of Theorem \ref{monod-stair}]
First of all, we notice that we will consider \(y,z\in H^{<}\) such that their initial germs are in the same conjugacy class in \(\mathrm{Aff}\left(\mathbb{R}\right)\), otherwise \(y\) and \(z\) cannot be conjugate to each other by Lemma \ref{monod-germsconj}. Moreover, we further assume that \(\left(a^{2},ab\right)\in\mathrm{Aff}\left(\mathbb{R}\right)\) conjugates \(y_{-\infty}\) to \(z_{-\infty}\) in \(\mathrm{Aff}\left(\mathbb{R}\right)\), otherwise there cannot be a conjugator \(g\) for \(y\) and \(z\) with initial germ \(g_{-\infty}=\left(a^{2},ab\right)\), again by Lemma \ref{monod-germsconj}.

Now, let \(\left[R,+\infty\right)^{2}\) be the final box and let \(N\in\mathbb{Z}_{>0}\) be sufficiently large so that \(\min\left\{z^{-N}\left(L\right),y^{-N}\left(a^{2}L+ab\right)\right\}>R\). We will now build a candidate conjugator $g$ between $y^{N}$ and $z^{N}$ as the product of two functions \(g_{0}\) and \(g_{1}\) and then use Lemma \ref{power-conj-lemma}. Since $y,z \in H^{<}$, a direct calculation shows that \(y^{N}\) and \(z^{N}\) are affine in the initial and final boxes of $y$ and $z$, so we can take them as the initial and final boxes of \(y^{N}\) and \(z^{N}\). We define \(g_{0}\) as \(g_{0}\left(t\right)\coloneqq a^{2}t+ab\), on \(\left(-\infty,L\right]^2\) and extend it to the real line so that \(g_{0}\in H\). Our assumption on $(a^2,ab)$ ensures that $g_0(L)\le L$. Now we define \(y_{1}\coloneqq g_{0}^{-1}yg_{0}\) and we look for a conjugator \(g_{1}\) between \(y_{1}^{N}\) and \(z^{N}\). We remark that \(y_{1}^{N}\) and \(z^{N}\) coincide on \(\left(-\infty,L\right]\): indeed, on this interval both \(y_{1}^{N}=g_{0}^{-1}y^{N}g_{0}\) and \(z^{N}\) are affine, and \(\left(a^{2},ab\right)\) conjugates \(\left(y^{N}\right)_{-\infty}\) to \(\left(z^{N}\right)_{-\infty}\). Making use of the Identification Lemma \ref{monod-idlemma}, we define a $g_{1}\in H$ so that
\[g_{1}\left(t\right)\coloneqq\begin{dcases} t,&\;\text{if}\;t\in\left(-\infty,L\right]\\ y_{1}^{-N}z^{N}\left(t\right),&\;\text{if}\;t\in\left[L,z^{-N}\left(L\right)\right]. \end{dcases}\]
\noindent By construction we have \(g_{1}^{-1}y_{1}^{N}g_{1}=z^{N}\) on \(\left(-\infty,z^{-N}\left(L\right)\right]\). We now construct a function \(g\) on \(\left(-\infty,z^{-N}\left(L\right)\right]\) by defining \(g\left(t\right)\coloneqq g_{0}g_{1}\left(t\right)\), for \(t\in \left(-\infty,z^{-N}\left(L\right)\right]\).
We observe that the last part of \(g\) is defined inside the final box since \(t=z^{-N}\left(L\right)>R\) and
\[g\left(z^{-N}\left(L\right)\right)=g_{0}g_{1}\left(z^{-N}\left(L\right)\right)>R.\]
\noindent Moreover, by construction, \(g\) is a conjugator for \(y^{N}\) and \(z^{N}\) on \(\left(-\infty,z^{-N}\left(L\right)\right]\), that is, \(g=y^{-N}gz^{N}\) on \(\left(-\infty,z^{-N}\left(L\right)\right]\). Therefore,
\[g\left(t\right)=y^{-N}gz^{N}\left(t\right)=y^{-N}g_{0}g_{1}z^{N}\left(t\right)=y^{-N}g_{0}z^{N}\left(t\right),\]
\noindent since \(g_{1}z^{N}\left(t\right)=z^{N}\left(t\right)\) for every \(t\in \left(-\infty,z^{-N}\left(L\right)\right]\). If \(g\) is not an affine M\"obius function on \(\left[R,z^{-N}(L)\right]\), then \(g\) cannot be extended to a conjugator of \(y^N\) and \(z^N\); the uniqueness of the shape of \(g\) (Proposition \ref{monod-uniqueness}) says that continuing the Stair Algorithm will build a function that cannot be a conjugator, and therefore a conjugator with initial germ \(\left(a^{2},ab\right)\) cannot exist, since it would have to coincide with \(g\) on \(\left(-\infty, z^{-N}\left(L\right)\right]\). In the case that \(g\) is an affine M\"obius function on \(\left[R,z^{-N}\left(L\right)\right]\), we extend \(g\) to the whole real line by extending its affine piece on \(\left[R,z^{-N}\left(L\right)\right]\). The map that we construct (which we still call \(g\)) lies in \(H\). By Lemma \ref{monod-initialbox} and Proposition \ref{monod-uniqueness}, if there exists a conjugator between \(y^{N}\) and \(z^{N}\), with initial germ \(\left(a^{2},ab\right)\), it must be equal to \(g\). Then we just check if \(g\) conjugates \(y^{N}\) to \(z^{N}\). If \(g\) conjugates \(y^{N}\) to \(z^{N}\) then, by Lemma \ref{power-conj-lemma}, \(g\) is a conjugator between \(y\) and \(z\), as desired.
\end{proof}

\begin{rem}\label{thm:switch-<->}
Let us suppose that \(y,z\in H^{<}\cup H^{>}\). In order for them to be conjugate, Lemma \ref{monod-germsconj} says that their initial germs must be in the same conjugacy class in \(\mathrm{Aff}\left(\mathbb{R}\right)\). Similarly, their final germs must be in the same conjugacy class in \(\mathrm{Aff}\left(\mathbb{R}\right)\). In other words, either both \(y\) and \(z\) are in \(H^{<}\) or both are in \(H^{>}\). Furthermore, since \(g^{-1}yg=z\) if and only if \(g^{-1}y^{-1}g=z^{-1}\), we can reduce the study to the case where they are both in \(H^{<}\).
\end{rem}

\begin{rem}
The Stair Algorithm for \(H^{<}\) can be reversed. This means that we can apply it in order to build a candidate for a conjugator between \(y,z\in H^{>}\). Thus, given an element \(\left(a^{2},ab\right)\in\mathrm{Aff}\left(\mathbb{R}\right)\), we can determine whether or not there is a conjugator \(g\) with final germ \(g_{+\infty}=\left(a^{2},ab\right)\). The proof is similar. We just begin to construct \(g\) from the final box.
\end{rem}

We observe that the proof of the Stair Algorithm does not depend on the choice of \(g_{0}\): the only requirement on it is that it be affine on the initial box and that \((g_{0})_{-\infty}=\left(a^{2},ab\right)\). Moreover, it gives a way to find candidate conjugators, if they exist, once an initial germ has been chosen. The following are two examples of construction of candidate conjugators via the Stair Algorithm. In the first example the candidate is indeed a conjugator, while in the second it is not.
\begin{ex}\label{ex:Stair-Working}
Consider the maps $y\left(t\right)=t-1$ and
\[z\left(t\right)=\begin{dcases} \dfrac{2t-2}{\frac{-3}{2}t+2},&\;\text{if}\;t\in\left[0,1\right];\\ \dfrac{-2t+2}{\frac{-3}{2}t+1},&\;\text{if}\;t\in\left[1,2\right];\\ t-1,&\;\text{otherwise}. \end{dcases}\]
Notice that $y,z\in H^{<}$ and that their initial and final germs are equal. Moreover, we have \(L=0\) and \(R=2\). Now we take $\left(1,-1\right)\in\mathrm{Aff}\left(\mathbb{R}\right)$ and construct a candidate conjugator between \(y^{4}\) and \(z^{4}\). We follow the procedure of the proof of the Stair Algorithm and define the maps $g_{0}\left(t\right)\coloneqq t-1$ and
\[g_{1}\left(t\right)\coloneqq \begin{dcases} \dfrac{\frac{1}{2}t}{-\frac{3}{2}t+2},&\;\text{if}\;t\in\left[0,1\right];\\ t,&\;\text{otherwise}. \end{dcases}\]
We then define $g:=g_{0}g_{1}$ and see that
\[g\left(t\right)\coloneqq\begin{dcases} \dfrac{2t-2}{\frac{-3}{2}t+2},&\;\text{if}\;t\in\left[0,1\right];\\ t-1,&\;\text{otherwise}. \end{dcases}\quad\textrm{and}\quad g^{-1}\left(t\right)=\begin{dcases} \dfrac{2t+2}{\frac{3}{2}t+2},&\;\text{if}\;t\in\left[-1,0\right];\\ t+1,&\;\text{otherwise}. \end{dcases}\]
We notice that \(g\in H\). A direct calculation shows that $g$ conjugates \(y^{4}\) to \(z^{4}\). By Lemma \ref{power-conj-lemma}, the element \(g\) is a conjugator between \(y\) and \(z\).
\end{ex}

\begin{ex}\label{Stair-Not-Working}
Consider the maps $y\left(t\right)=t-1$ and
\begin{displaymath}
z\left(t\right)=\begin{dcases} \dfrac{-2t+2}{\frac{-3}{2}t+1},&\;\text{if}\;t\in\left[1,2\right];\\ t-1,&\;\text{otherwise}. \end{dcases}
\end{displaymath}
Notice that $y,z\in H^{<}$ and that their initial and final germs are equal. We observe that \(L=1\) and \(R=2\). Now we take \(\left(1,0\right)\in\mathrm{Aff}\left(\mathbb{R}\right)\) and construct a candidate conjugator between \(y^{3}\) and \(z^{3}\). We follow the procedure of the proof of the Stair Algorithm and define the maps $g_{0}\left(t\right)=t$ and
\[g_{1}\left(t\right)=\begin{dcases} \dfrac{-\frac{7}{2}t+3}{-\frac{3}{2}t+1},&\;\text{if}\;t\in\left[1,2\right];\\ \dfrac{-5t+9}{\frac{-3}{2}t+\frac{5}{2}},&\;\text{if}\;t\in\left[2,3\right];\\ t,&\;\text{otherwise}. \end{dcases}\]
We then define $g:=g_0g_1$ and see that
\[g\left(t\right)=\begin{dcases} \dfrac{-\frac{7}{2}t+3}{-\frac{3}{2}t+1},&\;\text{if}\;t\in\left[1,2\right];\\ \dfrac{-5t+9}{\frac{-3}{2}t+\frac{5}{2}},&\;\text{if}\;t\in\left[2,3\right];\\ t,&\;\text{otherwise}. \end{dcases}\]
\noindent We notice that $g$ is not an affine M\"{o}bius function on $\left[2,3\right]$. Thus, by Theorem \ref{monod-stair}, the element $g$ cannot be a conjugator between $y^{3}$ and $z^{3}$. By Lemma \ref{power-conj-lemma}, $g$ cannot be a conjugator between $y$ and $z$ either.
\end{ex}

\begin{rem}\label{rem:subfied}
Although this section is stated for $H$ for the sake of consistency of the paper, the proofs show that all the results of this section hold for $H(A)$ too, with the following provisions:
\begin{enumerate}
\item The elements $L$ and $R$ defined for the affinity boxes in Lemma \ref{monod-initialbox} must live in $A$. This can always be achieved since, given any initial box $(-\infty,L]^2$, we can take an $L' \le L$ with $L' \in A$ and consider the box $(-\infty,L']^2$. We can proceed similarly for the final box.
\item Lemma \ref{monod-germsconj} needs to be stated by saying that $y_{-\infty}^{\mathrm{Aff}(A)}=z_{-\infty}^{\mathrm{Aff}(A)}$, where the affine group of $A$ is the subgroup of $\mathrm{Aff}(\mathbb{R})$ defined by $\mathrm{Aff}(A)= (U(A)_{>0},\cdot)\ltimes (A,+)$, where $U(A)_{>0}$ is the group of the positive units of $A$. Similarly, we must have $y_{+\infty}^{\mathrm{Aff}(A)}=z_{+\infty}^{\mathrm{Aff}(A)}$. \end{enumerate} \end{rem} \section{The Mather invariant}\label{sec:mather} We now construct a conjugacy invariant for a class of functions, called \emph{Mather invariant}, by adapting ideas from \cite{Mather1974,Mat2010}. While in the previous section we worked with $y,z \in H^{<}$, in this section we will work with $y,z \in H^{>}$ as it helps with the arguments and we can do so without loss of generality because of Remark \ref{thm:switch-<->}. We construct such invariant to deal with the case $y'(\pm \infty)=z'(\pm \infty)=1$ where the point of view of the Stair Algorithm cannot be used to cover all cases when computing element centralizers of elements which will be studied in the next section. In the remainder of this section we assume \(y,z\in H^{>}\) such that \(y\left(t\right)=z\left(t\right)=t+b_{0}\) if \(t\in \left(-\infty,L\right]\) and \(z\left(t\right)=y\left(t\right)=t+b_{1}\) if \(t\in \left[R,+\infty\right)\), for some suitable \(b_{0},b_{1}>0\), where \(L\) and \(R\) are, respectively, sufficiently large negative and positive real numbers. Let \(N\in\mathbb{Z}_{>0}\) be large enough so that \[ y^{N}\left(\left(y^{-1}\left(L\right),L\right)\right) \cup z^{N}\left(\left(z^{-1}\left(L\right),L\right)\right)\subset \left(R,+\infty\right). \] We intend to find a map \(s\in H\) such that \(s\left(y^{k}\left(L\right)\right)=k,\) for every \(k\in\mathbb{Z}\). We thus define the map $s$ as \begin{align*} s\colon \mathbb{R}&\rightarrow \mathbb{R}\\ t&\longmapsto s\left(t\right)\coloneqq \begin{dcases} s_{-1}\left(t\right),\;&\text{if}\;t\in(-\infty,L]\\ s_{j}\left(t\right),\;&\text{if}\;t\in[y^{j}\left(L\right),y^{j+1}\left(L\right)]\\ s_{N-1}\left(t\right),\;&\text{if}\;t\in[y^{N-1}\left(L\right),+\infty) \end{dcases} \end{align*} \noindent where \begin{align*} s_{-1}\colon \left[y^{-1}\left(L\right),L\right]&\rightarrow \left[-1,0\right] &s_{N-1}\colon [y^{N-1}\left(L\right),y^{N}\left(L\right)]&\rightarrow [N-1,N]\\ t &\mapsto \dfrac{t-L}{b_{0}},\qquad &t &\mapsto \dfrac{t-y^{N-1}\left(L\right)}{b_{1}}+N-1 \end{align*} and \begin{align*} s_{j}\colon [y^{j}\left(L\right),y^{j+1}\left(L\right)]&\rightarrow [j,j+1]\\ t &\mapsto \dfrac{t-y^{j}\left(L\right)}{y^{j+1}\left(L\right)-y^{j}\left(L\right)}+j,\;\forall\;j=0,1,,\ldots,N-2. \end{align*} Since \(L\) is a fixed point of some hyperbolic element from \(\mathrm{PSL}_{2}\left(\mathbb{R}\right)\), so is \(y^{j}\left(L\right)\), for every \(j=0,1,\ldots,N-2,N-1\). Also, since all of the $s_i$'s are affine with strictly positive slope, they can all be written as $s_i(t)=a_i^2 t + a_i b_i$ for suitable $a_i,b_i \in \mathbb{R}$ and so \(s\in H\). Moreover, it is clear that \(s\left(y^{k}\left(L\right)\right)=k\), for all \(k\in\mathbb{Z}\). If we define \(\overline{y}\coloneqq sys^{-1}\) and \(\overline{z}\coloneqq szs^{-1}\), we get that both functions are well-defined and lie in \(H\). Now, we observe that \begin{itemize} \item[(i)] If \(t\in \left(-\infty,0\right]\cup\left[N-1,+\infty\right)\), then \(\overline{y}\left(t\right)=\overline{z}\left(t\right)=t+1\); \item[(ii)] \(\overline{y}^{N},\overline{z}^{N}\in H\). 
\end{itemize} We define the circles \[C_{0}=\left(-\infty,0\right]/\{t\sim t+1\}\quad\text{and}\quad C_{1}=\left[N-1,+\infty\right)/\{t\sim t+1\}.\] Let us consider the natural projections \(p_{0}\colon\left(-\infty, 0\right]\rightarrow C_{0}\) and \\ \(p_{1}\colon\left[N-1, +\infty\right)\rightarrow C_{1}\). \noindent Then we restrict \(\overline{y}^{N}\) to the interval \(\left[-1,0\right]\) such that \(p_{0}\) surjects it onto \(C_{0}\). Since \(N\) is sufficiently large so that \(\overline{y}^{N}\left(\left(\overline{y}^{-1}\left(L\right),L\right)\right)\subset \left[R,+\infty\right)\), it follows that \(\overline{y}^{N}\) maps \(\left[-1,0\right]\) to \(\left[R,+\infty\right)\). Passing to quotients, we define \(\overline{y}^{\infty}\colon C_{0}\rightarrow C_{1}\) such as \(\overline{y}^{\infty}\left(\left[t\right]\right)=\left[\overline{y}^{N}\left(t\right)\right]\) making the following diagram commutative \[\xymatrix{ \ar @{} [dr] |{\circlearrowright} \left[-1, 0\right] \ar[d]_-{p_{0}} \ar[r]^-{\overline{y}^{N}} & \left[N-1,+\infty\right) \ar[d]^-{p_{1}} \\C_{0} \ar@{-->}[r]_{\overline{y}^{\infty}} & C_{1} }\] We emphasize that the map \(\overline{y}^{\infty}\) does not depend on the specific chosen value of \(N\), since if \(m\geq N\), \(\overline{y}^{m}\left(t\right)=\overline{y}^{m-N}\left(\overline{y}^{N}\left(t\right)\right)\), where \(\overline{y}^{N}\left(t\right)\in \left(R,+\infty\right)\) and \[\overline{y}^{N}\left(t\right)\sim \overline{y}^{N}\left(t\right)+1=\overline{y}\left(\overline{y}^{N}\left(t\right)\right)\sim\ldots\sim \overline{y}^{m-N}\left(\overline{y}^{N}\left(t\right)\right)=\overline{y}^{m}\left(t\right).\] Similarly, we define the map \(\overline{z}^{\infty}\). We remark that both these maps are piecewise-M\"obius homeomorphisms from the circle \(C_{0}\) to the circle \(C_{1}\). They are called the \textbf{Mather invariants} of \(\overline{y}\) and \(\overline{z}\). Assume now that there exists a $g \in H$ such that $gz=yg$. By conjugating by $s$, we get the equation \(\overline{gz}=\overline{y}\overline{g}\), where $\overline{g} \in H$. Since $\overline{y}$ and $\overline{z}$ are equal to the translation $t \mapsto t+1$ around $\pm \infty$, then the equation \(\overline{gz}=\overline{y}\overline{g}\) implies that $\overline{g} \in H$ is periodic for real numbers that are sufficiently large positive and negative and so, around $-\infty$ where $\overline{g}(t)=a^2t+ab$ is affine, we must have that $a^2=1$ so $\overline{g}$ is a translation, otherwise $\overline{g}$ would not be periodic. Similarly, $\overline{g}$ is a translation around $+\infty$. Thus $g$ itself is a translation around $\pm \infty$. We record this observation for independent later use. \begin{rem} \label{thm:affine-commute-translation} If an affine map $g(t)=a^2t+ab$ commutes with a translation $z(t)=t+k$, then $a^2=1$. 
\end{rem} Now, going back to the argument above, we see that it induces the equation \(\overline{gz}^{N}=\overline{y}^{N}\overline{g}\), where \(\overline{g}\) is periodic on \(\left(-\infty,0\right)\cup\left(N-1,+\infty\right)\) since it commutes with $\overline{y}$ and $\overline{z}$ on such intervals, and so \(\overline{g}\) passes to quotients and becomes \begin{align}\label{rot} v_{1, m}\overline{z}^{\infty}=\overline{y}^{\infty}v_{0,\ell}, \end{align} as done in \cite{BurMatVent2016,Mat2010}, since \(\overline{g},\overline{y},\overline{z}\in H\) and \(v_{0,\ell}\coloneqq p_{0}\overline{g}\) and \(v_{1, m}\coloneqq p_{1}\overline{g}\) are rotations of the circles \(C_{0}\) and \(C_{1}\), respectively, where \(\ell,m\) are the translation terms of \(g\) on \(\left(-\infty,0\right)\) and \(\left(N-1+\infty\right)\), respectively. The proof of the next result shows the relation between the Stair Algorithm and the Mather invariant. \begin{thm}\label{matconj} Let \(y,z\in H^{>}\) be such that \(y\left(t\right)=z\left(t\right)=t+b_{0}\) for \(t\in(-\infty, L]\) and \(y\left(t\right)=z\left(t\right)=t+b_{1}\) for \(t\in [R,+\infty)\) and let \(\overline{y}^{\infty},\overline{z}^{\infty}\colon C_{0}\rightarrow C_{1}\) be the corresponding Mather invariants. Then \(y\) and \(z\) are conjugate in \(H\) if and only if \(\;\overline{y}^{\infty}\) and \(\;\overline{z}^{\infty}\) differ by rotations \(v_{0,\ell}\) and \(v_{1,m}\) of the domain and range circles, for some \(\ell,m\in\mathbb{R}\): \[\xymatrix{\ar @{} [dr] |{\circlearrowright} C_{0} \ar[d]_-{v_{0, \ell}} \ar[r]^-{\overline{z}^{\infty}} & C_{1} \ar[d]^-{v_{1, m}} \\ C_{0} \ar[r]_{\overline{y}^{\infty}} & C_{1} }\] \end{thm} \begin{proof} The calculations above yield that, if \(g\in H\) conjugates \(y\) and \(z\), then equation \(\left(\ref{rot}\right)\) is satisfied, which is equivalent to say that \(\;\overline{y}^{\infty}\) and \(\;\overline{z}^{\infty}\) differ by rotations \(v_{0,\ell}\) and \(v_{1,m}\) of the domain and range circles, for some \(\ell,m\in\mathbb{R}\). Conversely, let us assume that there are \(\ell,m\in\mathbb{R}\) such that equation \(\left(\ref{rot}\right)\) is satisfied. Then, we choose \(g_{0}\in H\) which is affine in the initial box with a initial germ \((g_0)_{-\infty}=\left(1,\ell\right)\). Then we define a map $g$ as the following pointwise limit \(g\left(t\right)\coloneqq \lim_{n\rightarrow+\infty} y^{n}g_{0}z^{-n}\left(t\right)\). By the Stair Algorithm Theorem \ref{monod-stair}, we have \(gz=yg\), where \(g\in \mathrm{Homeo_+}(\mathbb{R})\) and it is such that, on any bounded interval, it coincides with the restriction of some function $\mathrm{PPSL}(\mathbb{R})$ to such interval. We need to show that \(g\in H\). By construction, \(g\) has finitely many breakpoints in \(\left(-\infty,N-1\right]\). Conjugating both sides of the equation \(gz=yg\) by \(s\), we get \[\overline{gz}=\overline{yg}.\] \noindent For all \(t\in\left(-\infty,0\right]\cup\left[N-1,+\infty\right)\), we have \(\overline{y}\left(t\right)=\overline{z}\left(t\right)=t+1\). Thus \(\overline{g}\left(t+1\right)=\overline{g}\left(t\right)+1\), for each \(t\in\left(-\infty,0\right]\cup\left[N-1,+\infty\right)\) and we can pass to quotients. Moreover, as argued above during the definition of the Mather invariants, we have that $\overline{g}$ is a translation $\overline{g}(t)=t+\ell$ on $(-\infty,0]$, while we still need to show that $\overline{g}$ is a translation on $[N-1,+\infty)$. 
Up to switch the role of $\overline{y}$ and $\overline{z}$, we can assume that $\ell \ge 0$. Passing the equation \(\overline{g}\overline{z}^{N}=\overline{y}^{N}\overline{g}\) to quotients, we obtain \[\overline{g}_{ind}\overline{z}^{\infty}\left(\left[t\right]\right)=\overline{y}^{\infty}v_{0, \ell}\left(\left[t\right]\right),\] \noindent for a suitable well-defined \(\overline{g}_{ind}\). By equation \(\left(\ref{rot}\right)\), we have \[\overline{g}_{ind}\overline{z}^{\infty}\left(\left[t\right]\right)=v_{1, m}\overline{z}^{\infty}\left(\left[t\right]\right),\] and so, by the cancellation law, we have \[\overline{g}_{ind}=v_{1, m}\] so that $\overline{g}_{ind}$ is a rotation by \(m\) of the circle \(C_{1}\). We now choose $N_0 \ge N$ large enough so that $d:=\overline{z}^{N_0}(-1-\ell)\ge N-1$ so that \[ \overline{z}^{N_0}([-1-\ell,-\ell])= \overline{z}^{N_0}([-1-\ell,z(-1-\ell)])=[d,z(d)]=[d,d+1], \] and \[\overline{g}(d)=\overline{y}^{N_0}(\overline{g}(-1-\ell))= \overline{y}^{N_0}(-1)\ge N-1. \] Hence the following commutative diagram holds \[\xymatrix{ & & & \left[-1-\ell,-\ell\right] \ar[dd]_(.5){p_{0}}|(.29)\hole \ar[rr]^-{\overline{z}^{N_0}} \ar[dlll]_{\overline{g}} & & [d,+\infty) \ar[dd]^(.5){p_{1}} \ar[dlll]_{\overline{g}}\\ \left[-1,0\right] \ar[dd]_-{p_{0}} \ar[rr]_(.5){\overline{y}^{N_0}} & & \left[N-1,+\infty\right) \ar[dd]^-{p_{1}} \\ & & & C_{0} \ar[rr]^(.5){\overline{z}^{\infty}} \ar[dlll]_{v_{0,\ell}}|(.42)\hole & & C_{1} \ar[dlll]^{\overline{g}_{ind}=v_{1, m}} \\ C_{0} \ar[rr]_{\overline{y}^{\infty}} & & C_{1} }\] To finish the proof, we need to see that \(\overline{g}\) is a affine M\"obius map on \(\left[N-1,+\infty\right)\), which will mean that \(\overline{g} \in H\). From the previous commutative diagram we get \(v_{1, m}p_{1}=p_{1}\overline{g}\), which implies that \(\left[\overline{g}\left(t\right)\right]=\left[t+m\right]\) for \([t]\in C_1\). By definition of the equivalence relation and the fact that \(\overline{g}\) is a periodic continuous function, we have that there exists some \(r\in\mathbb{Z}\) such that, \(\overline{g}\left(t\right)\coloneqq t+m+r\), for all \(t\in\left[N-1,+\infty\right)\). Therefore, \(\overline{g} \in H\). \end{proof} \begin{rem} The results of this section rely on Lemma \ref{monod-transitive} and so, in order to generalize them to $H(A)$, one needs to prove a generalized version of Lemma \ref{monod-transitive} for $H(A)$. In particular, the construction of the homeomorphism $s$ at the beginning of this section requires a version of Lemma \ref{monod-transitive} for $H(A)$ to construct the maps $s_j$ for $j=0,\ldots, N=2$. This is thus true for Section \ref{sec:centralizer} too. \end{rem} \section{Centralizers}\label{sec:centralizer} In this section, we use the conjugacy tools we have just constructed to calculate the centralizers of elements from $H$. We start by performing some calculations for centralizers of elements in \(\mathrm{Aff}\left(\mathbb{R}\right)\) and use this information to classify the centralizers of elements from $H$. \subsection{Centralizers in \(\mathrm{Aff}(\mathbb{R})\)}\label{subsec:centralizers-affine-group} Since Lemma \ref{monod-centralizers} gives a monomorphism $\varphi_{z}\colon C_{H}\left(z\right) \to \mathrm{Aff}\left(\mathbb{R}\right)$, it makes sense to investigate centralizers in \(\mathrm{Aff}\left(\mathbb{R}\right)\). 
If $\left(a,b\right)\in \mathrm{Aff}(\mathbb{R})$ and $a\neq 1$, then $\left(c,d\right)\in C_{\mathrm{Aff}(\mathbb{R})}\left(a,b\right)$ if and only if \[\left(c,d\right)=\left(a^{-1},-a^{-1}b\right)\left(c,d\right)\left(a,b\right)=\left(c,-a^{-1}b+a^{-1}d+a^{-1}bc\right)\] which is equivalent to $d=\dfrac{b(c-1)}{a-1}$, and so \[ C_{\mathrm{Aff}(\mathbb{R})}(a,b)=\left\{\left(c,\dfrac{b(c-1)}{a-1}\right)\in \mathrm{Aff}(\mathbb{R})\; \bigg \vert \; c\in \mathbb{R}_{>0}\right\}\cong \left(\mathbb{R},+\right). \] If $\left(1,b\right)\in \mathrm{Aff}(\mathbb{R})$ and $\left(c,d\right)\in C_{\mathrm{Aff}(\mathbb{R})}\left(a,b\right)$, we get \[(c,d)=(1,-b)(c,d)(1,b)=(c,b(c-1)+d)\] which implies that \[d=b(c-1)+d \Rightarrow b(c-1)=0 \Rightarrow b=0\;\textrm{or}\;c=1.\] If $b=0$, then \[C_{\mathrm{Aff}\left(\mathbb{R}\right)}(1,0)=\mathrm{Aff}\left(\mathbb{R}\right).\] If $b\neq 0$, then \[C_{\mathrm{Aff}\left(\mathbb{R}\right)}(1,b)=\{(1,d)\mid d\in \left(\mathbb{R},+\right)\} \cong \left(\mathbb{R},+\right).\] We collect our calculations in the following result. \begin{lem} \label{thm:centralizer-Aff} If $(1,0)\ne (a,b)\in \mathrm{Aff}\left(\mathbb{R}\right)$, then \(C_{\mathrm{Aff}\left(\mathbb{R}\right)}\left(a,b\right)\cong \left(\mathbb{R},+\right)\). \end{lem} \subsection{Centralizers in \(H\)}\label{subsec:centralizer-h} \iffalse \altair{Podemos remover. Lembro que a frase foi inserida para dizer que os centralizadores são infinitos. Na tese, ela não existe.}A simple observation to start our study is that, since \(H\) is torsion-free by Lemma \ref{thm:torsion-free}, then, for any non-trivial $z\in H$, \(\langle z\rangle \cong (\mathbb{Z},+)\) and so \(C_{H}\left(z\right)\) must be infinite. \francesco{FRA (possível ideia): We start by noticing that, since \(H\) is torsion-free by Lemma \ref{thm:torsion-free}, then \(C_{H}\left(z\right)\) is infinite for any non-trivial $z \in H$.} \fi We start by noticing that, since \(H\) is torsion-free by Lemma \ref{thm:torsion-free}, the subgroup \(C_{H}\left(z\right)\) is infinite for any non-trivial $z \in H$. Next, we will divide the study of centralizers of the elements from \(H\) into several cases. \iffalse First we observe that, if we consider the monomorphism \(\varphi_{z}\) defined in Lemma \ref{monod-centralizers} and recall that \(H\) is torsion-free by Lemma \ref{thm:torsion-free}, then we get \(\langle z\rangle\) is an infinite cyclic subgroup of \(C_{H}\left(z\right)\) for any $z \in H^{<}$. From the injectivity of \(\varphi_{z}\), it follows that \(\mathcal{A}_{z}\) is an infinite group. we relabel its image by \(\mathcal{A}_{z}\coloneqq\varphi_{z}\left(C_{H}\left(z\right)\right)\). Since \(H\) is torsion-free by Lemma \ref{thm:torsion-free}, then we get \(\langle z\rangle\) is an infinite cyclic subgroup of \(C_{H}\left(z\right)\) for any $z \in H^{<}$. From the injectivity of \(\varphi_{z}\), it follows that \(\mathcal{A}_{z}\) is an infinite group. Furthermore, we define the following group isomorphism \begin{align*} \varPsi_{z}\colon \mathcal{A}_{z}&\rightarrow C_{H}\left(z\right)\\ (a^{2},ab)&\mapsto \varPsi_{z}\left(a^{2},ab\right)\coloneqq\varphi_{z}^{-1}\left(\left(a^{2},ab\right)\right). \end{align*} \fi \iffalse We will divide the study of centralizers of the elements from \(H\) in several cases. \fi First, we consider \(z\in H\) without breakpoints, that is, the case where \(z\) is an affine map. Let us consider \(z\left(t\right)=a^{2}t+ab\) for all \(t\in \mathbb{R}\). If $a\neq\pm1$ we have the following result. 
\begin{prop} Let \(z\in H\) be so that \(z\left(t\right)=a^{2}t+ab\), for all \(t\in \mathbb{R}\), with \(a\neq\pm1\). Then \(C_{H}\left(z\right)\cong\left(\mathbb{R},+\right)\). \end{prop} \begin{proof} First, notice that in this case, \(z_{-\infty}=z_{+\infty}=(a^{2},ab)\). A direct calculation shows that \[T\coloneqq\left\{f\in H\mid f\left(t\right)=ct+\dfrac{ab(c-1)}{a^{2}-1},\;\forall\;t\in\mathbb{R},\;c>0\right\}\] is a subgroup of \(C_{H}\left(z\right)\). Using the map \(\varphi_{z}\) from Lemma \ref{monod-centralizers} we have \[\varphi_{z}\left(T\right)=\left\{\left(c,\dfrac{ab(c-1)}{a^{2}-1}\right)\mid c\in \left(\mathbb{R}_{>0},\cdot\right)\right\}=C_{\mathrm{Aff}(\mathbb{R})}(a^{2},ab).\] From \(\varphi_{z}\left(T\right)\leq \varphi_{z}\left(C_{H}\left(z\right)\right)\leq C_{\mathrm{Aff}\left(\mathbb{R}\right)}(a^{2},ab)\), we get \(\varphi_{z}\left(C_{H}\left(z\right)\right)=C_{\mathrm{Aff}(\mathbb{R})}(a^{2},ab)\). \noindent Since $\varphi_{z}$ is a group monomorphism, we have \(C_{H}\left(z\right)\cong C_{\mathrm{Aff}(\mathbb{R})}(a^{2},ab)\). Therefore, \(C_{H}\left(z\right)\cong\left(\mathbb{R},+\right)\) by Lemma \ref{thm:centralizer-Aff}. \end{proof} \indent From the previous result, we have the following. \begin{cor} \label{thm:centralizers-affine} Let \(y\in H\) be an element such that \(y=g^{-1}zg\), where \(z,g\in H\) and \(z\) is an affine map \(z\left(t\right)=a^{2}t+ab\), with $a^2\ne 1$. Then \(C_{H}(y)\cong \left(\mathbb{R},+\right)\). \end{cor} \iffalse {\large \francesco{2020-02-06: the whole next paragraph us \textbf{unchanged}, however it looks confusing. We start talking about affine maps, then we discover something about their slope (true, but why is it useful right now?) and then we finish saying that an element centralizing a translation is periodic (true but we should recall why).}} Before stating the next result, we observe that if an element \(z\in H\) does not have breakpoints, then \(z(t)=a^{2}t+ab\) on the whole real line. If \(a\neq\pm1\), \(z\) crosses the diagonal line. In this case, \(z\notin H^{<}\cup H^{>}\). Given any element \(z\in H^{<}\) so that around \(-\infty\) we have \(z\left(t\right)=a_{0}^{2}t+a_{0}b_{0}\) and \(z(t)=a_{n}^{2}t+a_{n}b_{n}\) around \(+\infty\), we have \(a_{0}^{2}\ge1\) and \(a_{n}^{2}\leq1\). As a consequence, centralizers of a translation \(z\in H^{<}\) are periodic maps. \fi Now we consider the case \(z\left(t\right)=a^{2}t+ab\) for all \(t\in \mathbb{R}\) and with $a = \pm1$, that is, $z$ is a translation. \begin{prop}\label{centralizer-translations} If \(z\in H^{<}\) is a translation, then \(C_{H}\left(z\right)\cong \left(\mathbb{R},+\right)\). \end{prop} \begin{proof} Let \(g\in C_{H}\left(z\right)\). Since $g \in H$ we have \(g\left(t\right)=a_{0}^{2}t+a_{0}b_{0}\), for some \(a_{0},b_{0}\in \mathbb{R}\), \(t\in\left(-\infty,L\right]\) and for suitable \(L\in\mathbb{R}\). The map $g$ commutes with the translation \(z\left(t\right)=t+k\), for some \(k<0\), so $g$ is periodic of period $|k|$ and so, by Remark \ref{thm:affine-commute-translation}, we have \(a_{0}^{2}=1\). Hence $g$ is a translation around $-\infty$ and it is periodic, so we must have that $g(t)=t+b_0$ for every $t \in \mathbb{R}$. Therefore, if $\varphi_z$ is the map of Lemma \ref{monod-centralizers}, we have \[ C_{H}\left(z\right)\cong \varphi_z(C_{H}\left(z\right)) \cong \{(1,b_0) \mid b_0 \in \mathbb{R}\}\cong (\mathbb{R},+). \] \end{proof} The previous proposition implies the following result. 
\begin{cor}\label{cor:centralizers-translations} Let \(y\in H\) be an element such that \(y=g^{-1}zg\), where \(z,g\in H\) and \(z\) is a translation. Then \(C_{H}\left(y\right)\cong \left(\mathbb{R},+\right)\). \end{cor} Let us now consider \(z\in H^{<}\) such that \(z'\left(-\infty\right)\neq z'\left(+\infty\right)\) and \(z\) has breakpoints. We start with the following result. \begin{lem}\label{the-goddamn-lemma-5.3-from-Kas-Mat} Given \(z\in H^{<}\) such that its initial and final affinity boxes with respect to \(z\) and itself are \(\left(-\infty,L\right]^{2}\) and \(\left[R,+\infty\right)^{2}\), respectively, and so that \(z'\left(-\infty\right)\neq z'\left(+\infty\right)\). Let \(s\in\mathbb{Z}_{>0}\) be such that \(z^{s}\left(R\right)<L\). Then either \(z^{-s}\) is not affine on \(\left[z^{s}\left(R\right),L\right]\) or \(z^{-2s}\) is not affine on \(\left[z^{2s}\left(R\right),L\right]\). \end{lem} \begin{proof} First of all, since $z\in H^{<}$, then $z^{-1}\in H^{>}$. Let us suppose that $z\left(t\right)=a_{0}^{2}t+a_{0}b_{0}$ on $(-\infty,L]$ and $z\left(t\right)=a_{n}^{2}t+a_{n}b_{n}$ on $[R,+\infty)$. Then by hypothesis, $a_{0}^{2}=z'\left(-\infty\right)\neq z'\left(+\infty\right) =a_{n}^{2}$. Moreover, $z^{-1}\left(t\right)=a_{0}^{-2}t-a_{0}^{-1}b_{0}$ on $(-\infty,z(L)]$ and $z^{-1}\left(t\right)=a_{n}^{-2}t-a_{n}^{-1}b_{n}$ on $[z(R),+\infty)$. Then since $z^{-1}\in H^{>}$, we have $z^{-s}$ is affine on $(-\infty,z^{s}(L)]$, with initial germ \[(z^{-s})_{-\infty}=\left(a_{0}^{-2s},-\sum_{j=1}^{s}a_{0}^{-2j+1}b_{0}\right).\] \noindent Moreover, $z^{-s}$ is affine on $[z^{s}(R),+\infty)$, which contains $[R,+\infty)$ with final germ \[(z^{-s})_{+\infty}=\left(a_{n}^{-2s},-\sum_{j=1}^{s}a_{n}^{-2j+1}b_{n}\right).\] Let us assume, by contradiction, that both $z^{-s}$ and $z^{-2s}$ are both affine on $[z^{s}(R),L]$ and $[z^{2s}(R),L]$, respectively, and that their germs on these two intervals are $(a,b)$ and $(c,d)$, respectively. Since $z^{-2s}=z^{-s}\circ z^{-s}$, we get $z^{-2s}$ is affine on $[z^{s}(R),L]$, because $z^{-s}$ is affine on $[z^{s}(R),L]$ by our assumption and $z^{-s}$ is affine on $[R,z^{-s}(L)]\subset [R,+\infty)$, with germ \[\left(a_{n}^{-2s},-\sum_{j=1}^{s}a_{n}^{-2j+1}b_{n}\right)(a,b).\] \noindent Moreover, $z^{-2s}$ is also affine on $[z^{2s}(R),z^{s}(L)]$, since $z^{-s}$ is affine on $(-\infty,z^{s}(L)]$ and on $[z^{s}(R),L]$ by our assumption, with germ \[(a,b)\left(a_{0}^{-2s},-\sum_{j=1}^{s}a_{0}^{-2j+1}b_{0}\right).\] \noindent By comparing the germ $(c,d)$ of $z^{-2s}$ on $[z^{2s}(R),L]$ with the germs of the same map $z^{-2s}$ on the two subintervals $[z^{s}(R),L]$ and $[z^{2s}(R),z^{s}(L)]$ of the interval $[z^{2s}(R),L]$, we get \[\left(a_{n}^{-2s},-\sum_{j=1}^{s}a_{n}^{-2j+1}b_{n}\right)(a,b)=(c,d)=(a,b)\left(a_{0}^{-2s},-\sum_{j=1}^{s}a_{0}^{-2j+1}b_{0}\right).\] From this, we must have \[a_{n}^{-2s}a=aa_{0}^{-2s}.\] Since the group $(\mathbb{R}_{>0},\cdot)$ is abelian, we have $a_{0}^{-2s}=a_{n}^{-2s}$. However, we are considering $z\in H^{<}$ such that the $a_{0}^2\neq a_{n}^2$, so that $a_{0}^{-2s} \ne a_{n}^{-2s}$ and we have a contradiction. Therefore, either $z^{-s}$ is not affine on $[z^{s}(R),L]$ or $z^{-2s}$ is not affine on $[z^{2s}(R),L]$. \end{proof} \begin{lem}\label{thm:unbounded-to-bounded} The group $H$ is isomorphic to the group $K$ of all piecewise M\"obius transformations of $[0,1]$ to itself with finitely many breakpoints. \end{lem} \begin{proof} We explicitly construct an isomorphism $\nabla:H \to K$. 
\iffalse Notice that $H|_{[0,1]}$ is the group obtained by taking $\{h \in H \mid h(t)=t, \forall\; t \not \in (0,1) \}$ \altair{Creio que esse conjunto não descreve o \(H|_{[0,1]}\) propriamente, pois não é obrigatório ter somente o intervalo \([0,1]\) como suporte.} \francesco{Acho que não entendo o que queira dizer. Desculpe. Talvez quer dizer que a primeira frase deste paragrafo teria de ser ``Notice that $H|_{[0,1]}$ is \textbf{isomorphic to the} group obtained by taking''? Neste caso, concordaria pois a descrição de $H|_{[0,1]}$ feita no enunciado. Verdadeiramente, se este é o que quiser dizer (e me parece fazer super sentido), é melhor usar o simbolo $H_{[0,1]}$ em lugar de $H|_{[0,1]}$ (no primeiro caso não falamos de que seja uma restrição de $H$)} and restricting its elements to $[0,1]$. Now we construct the isomorphism explicitly. \fi We use Lemma \ref{monod-transitive} to construct an element $h \in H$ such that $h(-3)=\frac{1}{3}$ and $h(-1)=\frac{1}{2}$. Now consider the map \[ f\left(t\right)=\begin{dcases} \frac{-1}{t} & t \in \left(-\infty,-3\right] \\ h\left(t\right) & t \in [-3,-1] \\ \frac{t+2}{t+3} & t \in \left[-1,+\infty\right). \end{dcases} \] \iffalse We now define $\nabla(g)=fgf^{-1}$ and notice that a direct calculation shows $\mathrm{im}(\nabla) \subseteq K$. Thus the map $\nabla:H \to K$ is well-defined and it is clearly a group isomorphism. \fi We now define \[ \nabla(g)(t)=\begin{dcases} fgf^{-1}(t) & t \in (0,1) \\ t & t \in \{0,1\} \end{dcases} \] and notice that a direct calculation shows $\mathrm{im}(\nabla) \subseteq K$. Thus the map $\nabla:H \to K$ is well-defined and it is clearly a group isomorphism with an obvious inverse. \end{proof} \begin{rem}\label{mobius-boxes} The isomorphism $\nabla$ of the proof of Lemma~\ref{thm:unbounded-to-bounded} switches $-\infty$ with $0$ and $+\infty$ with $1$ and allows us to study maps in Monod's group from a bounded point of view which will be useful in the proof of Lemma \ref{thm:generalized-KasMat-5.3}. Moreover, a straightforward calculation shows that, if $y,z \in H$ are such that $y_{-\infty}=z_{-\infty}$ and $y_{+\infty}=z_{+\infty}$, then the initial and final affinity boxes of $y$ and $z$ correspond to initial and final \emph{M\"obius boxes} of $\nabla(y)$ and $\nabla(z)$, where the images coincide and are M\"obius and a conjugator has to be M\"obius. \end{rem} In the next result we will freely use the isomorphism $\nabla:H \to K$ of Lemma~\ref{thm:unbounded-to-bounded}. \begin{lem} \label{thm:generalized-KasMat-5.3} Let \(z\in H^{<}\) be such that \(z\left(t\right)=a^2t+ab\) at \(-\infty\) with \(a^2>1\). Then there exists \(\varepsilon>0\) such that the only \(g\in C_{H}\left(z\right)\) with \(1-\varepsilon<\widetilde{g}~'(0)<1+\varepsilon\) and \(-\varepsilon<\widetilde{g}~''(0)<\varepsilon\), where $\widetilde{g}=\nabla(g)$, is \(g=\mathrm{id}\). \end{lem} \begin{proof} Let us consider $\widetilde{z}$ to be conjugate version of $z$ from the proof of Lemma~\ref{thm:unbounded-to-bounded}, that is, $\widetilde{z}=\nabla(z)$. Let $[0,\alpha]$ and $[\beta,1]$ be, respectively, the initial and final M\"obius boxes of $\widetilde{z}$ (see Remark \ref{mobius-boxes}) for suitable $0<\alpha<\beta<1$. By Lemma \ref{the-goddamn-lemma-5.3-from-Kas-Mat}, there exists an $N_1\in\mathbb{Z}_{>0}$ such that $\widetilde{z}^{-N_1}$ has a breakpoint $\mu_1$ on $[\widetilde{z}^{N_1}\left(\beta\right),\alpha]$. 
We now consider a real number $\alpha'$ such that $0<\alpha'<\mu_1<\alpha$ and we take a new initial (and smaller) M\"obius box $[0,\alpha']$ for $z$, we use Lemma \ref{the-goddamn-lemma-5.3-from-Kas-Mat} again and find that there exists $N_2\in\mathbb{Z}_{>0}$ such that $\widetilde{z}^{-N_2}$ has a breakpoint $\mu_2$ on $[\widetilde{z}^{N_2}\left(\beta\right),\alpha']$. Without loss of generality, assume that $\widetilde{z}^{N_2}\left(\beta\right)\le \widetilde{z}^{N_1}\left(\beta\right)$. Then there exists $\varepsilon>0$ such that $\{\mu_2<\mu_1 \} \subseteq I_{\varepsilon}\coloneqq \left[\widetilde{z}^{N_2}\left(\dfrac{\beta+\varepsilon}{1+\varepsilon}\right),(1-\varepsilon)\alpha\right]$. \begin{claim}\label{ze-plane-ze-plane} Let $0 < \varepsilon <\frac{1}{3}$ and $g\in C_{H}\left(z\right)$ such that $$1-\varepsilon<\widetilde{g}~'(0)<1+\varepsilon\;\;\textrm{and}\;\; -\varepsilon<\widetilde{g}~''(0)<\varepsilon.$$ Then $|\widetilde{g}(t)-\mathrm{id}(t)|<3\varepsilon+2\varepsilon^2$, for all $t \in [0,\alpha]$, so the family of functions $\widetilde{g}$ can be seen as uniformly converging to the identity function $\mathrm{id}$ on the interval $[0,\alpha]$. \end{claim} \begin{proof}[Proof of Claim \ref{ze-plane-ze-plane}] Let us consider \(\widetilde{g}=\nabla\left(g\right)\) so that \(\widetilde{g}\left(t\right)=\dfrac{at+b}{ct+d}\) on \(\left[0,\alpha\right]\), where \(ad-bc=1\). Then \(\widetilde{g}\left(0\right)=0\) and, consequently, \(b=0\) and \(ad=1\). Let us define \(\widetilde{g}'\left(0\right)\coloneqq\lambda\) and \(\widetilde{g}''\left(0\right)=\rho\). Since \[\widetilde{g}'\left(t\right)=\dfrac{1}{\left(ct+d\right)^{2}}\;\;\textrm{and}\;\;\widetilde{g}''\left(t\right)=-\dfrac{2c}{\left(ct+d\right)^{3}},\] we have \(\lambda=\dfrac{1}{d^2}\) and \(\rho=-\dfrac{2c}{d^3}\). Therefore, \(d^2=\dfrac{1}{\lambda}\) and \(c=\dfrac{-\rho d^3}{2}\). Observe that \[ \widetilde{g}\left(t\right)=\dfrac{at}{ct+d}=\dfrac{t}{cdt+d^2}=\dfrac{t}{\frac{-\rho t}{2 \lambda^2}+\frac{1}{\lambda}}= \dfrac{2\lambda^2 t}{-\rho t + 2\lambda} \] and so \begin{align*} |\widetilde{g}\left(t\right)-\mathrm{id}\left(t\right)|&=\left| \dfrac{2\lambda^2 t}{-\rho t + 2\lambda}-t \right|\\ &=\left|\frac{2\lambda^2 t - 2\lambda t+\rho t^2}{-\rho t + 2\lambda}\right|\\ &\le \left|2\lambda^2 t - 2\lambda t+\rho t^2\right|\\ &\le 2\left|\lambda\right|\cdot\left|\lambda-1\right|\cdot\left|t\right|+\left|\rho\right|\cdot\left|t\right|\\ &\le 2\left(1+\varepsilon\right)\varepsilon+\varepsilon\\ &\le 3\varepsilon + 2\varepsilon^2 \end{align*} where at the various steps we have observed that \(\left|t\right|\le1\), \(\left|\lambda\right|\le1+\varepsilon\), \(\left|\lambda -1\right|\le\varepsilon\) and, since $|\rho|<\varepsilon<\frac{1}{3}$, we have \[ |-\rho t+2\lambda | \ge |2\lambda-|\rho t|| \ge |2\lambda - \varepsilon| \ge |2(1-\varepsilon) - \varepsilon|=|2 - 3\varepsilon| \ge 1. \] \end{proof} \begin{claim}\label{again-ze-plane} Let $t_0 \in (0,\alpha)$. Then for any $1-\varepsilon<\widetilde{g}'(0)=\lambda <1+\varepsilon$, there is at most one $g \in C_H(z)$ such that $-\varepsilon<\widetilde{g}''(0)=\rho<\varepsilon$ and such that $\widetilde{g}^{-1}(t_0)=t_0$. \end{claim} \begin{proof}[Proof of Claim \ref{again-ze-plane}] We write $\widetilde{g}$ on the open interval $(0,\alpha)$ using the expression that was computed in the proof of Claim \ref{ze-plane-ze-plane}. 
Assume that $\widetilde{g}(t_0)=t_0$, then \[ t_0=\frac{2\lambda^2t_0}{-\rho t_0 + 2\lambda} \] and so \[ 1=\frac{2\lambda^2}{-\rho t_0 + 2\lambda} \] and so \[ -\rho t_0 + 2\lambda=2\lambda^2 \] and so \[ \rho=\frac{2\lambda-2\lambda^2}{t_0} \] If we assume that $\lambda=1+\tau$ for $-\varepsilon \le \tau \le \varepsilon$, then \[ \rho=\frac{2(1+\tau)-2(1+\tau)^2}{t_0}=\frac{-2\tau-2\tau^2}{t_0}. \] For any $-\varepsilon \le \tau \le \varepsilon$, the expression above returns a unique $\rho$. In case such expression returns $|\rho| \ge \varepsilon$, then $g$ cannot exist. On the other hand, if such expression returns $|\rho|< \varepsilon$, then the pair $(\tau,\rho)$ satisfies the required conditions. Therefore, for each $\lambda$ and we obtain at most one $g$ satisying the requirements. \end{proof} \noindent \emph{End of the Proof of Lemma \ref{thm:generalized-KasMat-5.3}.} Since we know that \begin{itemize} \item[(i)] \(\mu_{i}\) is a breakpoint for \(\widetilde{z}^{-N_{i}}\), \item[(ii)] \(\widetilde{z}^{-N_{i}}\left(\mu_{i}\right)\in\left[0,\alpha\right]\), and \item[(iii)] \(\widetilde{g}\) is a M\"obius transformation on \(\left[0,\alpha\right]\), \end{itemize} \noindent it follows that \(\mu_{i}\) is a breakpoint for \(\widetilde{g}\widetilde{z}^{-N_{i}}\). On the other hand, when we consider \(\widetilde{z}^{-N_i}\widetilde{g}\), the map \(\widetilde{g}\) pushes the breakpoint \(\mu_i\) of \(\widetilde{z}^{-N_i}\) to \(\widetilde{g}^{-1}(\mu_i)\), then \(\widetilde{g}^{-1}\left(\mu_{i}\right)\) is a breakpoint for \(\widetilde{z}^{-N_i}\widetilde{g}\). By construction, the set of breakpoints of $\widetilde{g}\widetilde{z}^{N_i}$ on $I_\varepsilon$ is $B:=\{\delta_1< \ldots < \delta_k \} \supseteq \{\mu_1<\mu_2\}$ and the set of breakpoints of $\widetilde{z}^{N_i}\widetilde{g}$ on $\widetilde{g}^{-1}(I_\varepsilon)$ is $\widetilde{g}^{-1}(B)=\{\widetilde{g}^{-1}(\delta_1)< \ldots < \widetilde{g}^{-1}(\delta_k) \} \{\widetilde{g}^{-1}(\mu_1)<\widetilde{g}^{-1}(\mu_2)\}$. However, since \(g\in C_{H}\left(z\right)\), then \(\widetilde{g}\widetilde{z}^{N_i}\left(t\right)=\widetilde{z}^{N_i}\widetilde{g}\left(t\right)\), for every \(t\in I_{\varepsilon}\) and so \(\widetilde{g}^{-1}\left(\delta_i\right)=\delta_i\) for \(i=1,\ldots,k\) and in particular \(\widetilde{g}^{-1}\left(\mu_i\right)=\mu_i\) for \(i=1,2\). By Claim \ref{again-ze-plane}, there can exist at most one $\mathrm{id}\ne g \in C_H(z)$ fixing $\mu_1$ and, since $\widetilde{g}$ fixes $0$ too, it cannot also fix $\mu_2$, otherwise \(g\) would be the identity map, by \cite[Corollary 2.5.3]{JonSin1987}. Similarly, there can exist at most one $\mathrm{id}\ne g \in C_H(z)$ fixing $\mu_2$ and such map cannot fix $\mu_1$ too. Then the only way to avoid a contradiction and have a \(g \in C_{H}\left(z\right)\) such that \(\widetilde{g}'\left(0\right)\) and \(\widetilde{g}''\left(0\right)\) satisfy the given conditions with respect to the chosen \(\varepsilon>0\) is that \(g=\mathrm{id}\). \end{proof} We now show that in many cases centralizers are infinite cyclic. \begin{prop}\label{monod-centralizer-breakpoints-affine} Let \(z\in H^{<}\) be such that \(z\left(t\right)=a^2t+ab\) around \(-\infty\) and \(a^2>1\). Then \(C_{H}\left(z\right)\) is a discrete subgroup of \(\left(\mathbb{R},+\right)\) and so it is isomorphic to \((\mathbb{Z},+)\). \end{prop} \begin{proof} By Lemma \ref{thm:generalized-KasMat-5.3}, the subgroup \(C_{H}\left(z\right)\) is a discrete set. 
Since \(C_{H}\left(z\right)\cong\varphi_z\left(C_{H}\left(z\right)\right)\le C_{\mathrm{Aff}\left(\mathbb{R}\right)}\left(z\right)\cong \left(\mathbb{R},+\right)\) and the subgroups of \(\left(\mathbb{R},+\right)\) are either discrete (then isomorphic to \(\left(\mathbb{Z},+\right)\)), or dense we get \(C_{H}\left(z\right)\cong\left(\mathbb{Z},+\right)\). \end{proof} \subsubsection{Mather invariant and centralizers} \label{subsec:mather-centralizer} As done is Section \ref{sec:mather}, we consider \(z\in H^{>}\) that is a translation around $\pm \infty$ and we use the Mather invariant of \(z\) in order to understand centralizers. \begin{prop}\label{thm:Centralizer-Last-Case} Consider \(z\in H^{>}\) such that \(z\left(t\right)=t+b_{0}\) for \(t\in\left(-\infty,L\right]\) and \(z\left(t\right)=t+b_{1}\) for \(t\in\left[R,+\infty\right)\). Then either \(C_{H}\left(z\right)\cong\left(\mathbb{Z},+\right)\) or \(C_{H}\left(z\right)\cong\left(\mathbb{R},+\right)\). \end{prop} \begin{proof} We follow notations from Section \ref{sec:mather}. Let $N \in \mathbb{Z}_{>0}$ large enough so that \(z^{N}\left(\left(z^{-1}\left(L\right),L\right)\right)\subset\left(R,+\infty\right)\). Up to conjugating \(z\) with \(s\), we will work with \(z\left(t\right)=t+1\). We define the relation \(t\sim t+1\) and construct the circles \(C_{0}\coloneqq\left(-\infty,0\right]/\sim\) and \(C_{1}\coloneqq\left[N-1,+\infty\right)/\sim\). By Theorem \ref{matconj}, a \(g\in H\) is a centralizer of \(z\) if and only if the following equation is satisfied \begin{align}\label{cent} z^{\infty}v_{0,\ell}=v_{1,m}z^{\infty}. \end{align} We now consider the map $V_{0}\colon\mathbb{R}\longrightarrow\mathbb{R}$ defined by \(V_0(t)=t+\ell\), which is a lift of of $v_{0,\ell}$, that is, it makes the the following diagram commute \[\xymatrix{\ar @{} [dr] |{\circlearrowright} \mathbb{R} \ar[d]_-{p_{0}} \ar[r]^-{V_{0}} & \mathbb{R} \ar[d]^-{p_{0}} \\ C_{0} \ar[r]_{v_{0,\ell}} & C_{0} }\] Similarly \(V_{1}(t)=t+m\) makes the following diagram commute \[\xymatrix{\ar @{} [dr] |{\circlearrowright} \mathbb{R} \ar[d]_-{p_{1}} \ar[r]^-{V_{1}} & \mathbb{R} \ar[d]^-{p_{1}} \\ C_{1} \ar[r]_{v_{1,m}} & C_{1} }\] Let $Z:\mathbb{R}\to\mathbb{R}$ be a lift of \(z^{\infty}\). The previous two commutative diagrams and equation (\ref{cent}) form three faces of a commutative cube analogous to that appearing in the proof of Theorem \ref{matconj} and so they imply that \(ZV_{0}=V_{1}Z\). In other words, for \(t\in\mathbb{R}\), we have \begin{align}\label{cent-lifting} Z(t+\ell)=ZV_{0}\left(t\right)=V_{1}Z\left(t\right)=Z\left(t\right)+m, \end{align} which means that the graph of \(Z\) is shifted back to itself. If the lift of \(z^{\infty}\) does not have breakpoints, the graph of \(Z\) is affine. Thus, there are infinitely many pairs \(\ell,m\in\mathbb{R}\) for which the graph can be shifted back to itself and so, for each \(\ell\in\mathbb{R}\), there exists an \(m\in\mathbb{R}\) so that equation (\ref{cent-lifting}) holds. Consequently, the image of the map \(\varphi_{z}\) from Lemma \ref{monod-centralizers} is so that \(\varphi_{z}\left(C_{H}\left(z\right)\right)\cong\left(\mathbb{R},+\right)\). Otherwise, the lift of \(z^{\infty}\) has breakpoints and the set of candidates for \(\ell\) forms a discrete subgroup of \(\left(\mathbb{R},+\right)\). Then \(\varphi_{z}\left(C_{H}\left(z\right)\right)\cong \left(\mathbb{Z},+\right)\). 
Therefore we have either \(C_{H}\left(z\right)\cong\left(\mathbb{Z},+\right)\) or \(C_{H}\left(z\right)\cong\left(\mathbb{R},+\right)\). \end{proof} We see two examples: in the first one, the subgroup of centralizers is isomorphic to $\left(\mathbb{R},+\right)$, while in the second it is isomorphic to $\left(\mathbb{Z},+\right)$. \begin{ex}\label{Ex-Translation-Conjugated} If we conjugate \(y\left(t\right)=t+1\) by \[g\left(t\right)=\begin{dcases} \dfrac{t-2}{\frac{3}{2}t-2},&\textrm{if}\;t\in[0,1],\\ t+1,&\textrm{otherwise}. \end{dcases} \] \noindent we get \begin{equation*} z\left(t\right)=\begin{dcases} \dfrac{2t+2}{\frac{3}{2}t+2},&\textrm{if}\;t\in[-1, 0],\\ \dfrac{t-2}{\frac{3}{2}t-2},&\textrm{if}\;t\in[0,1],\\ t+1,&\textrm{otherwise.} \end{dcases} \end{equation*} Then \(C_{H}\left(z\right)\cong \left(\mathbb{R},+\right)\), by Corollary \ref{cor:centralizers-translations}. \begin{figure} \caption{Graph of \(z\), from Example \ref{Ex-Translation-Conjugated} \label{GraphEx5} \end{figure} \end{ex} \begin{ex}\label{Ex-Discrete} Let us consider \[z\left(t\right)=\begin{dcases} \dfrac{t-2}{\frac{3}{2}t-2},&\;\textrm{if}\;t\in[0,1];\\ t+1,&\;\textrm{otherwise}. \end{dcases}\] Notice that \(z\in H^{>}\) and that \(L=0\) and \(R=1\). \begin{figure} \caption{Graph of \(z\), from Example \ref{Ex-Discrete} \end{figure} \noindent Its inverse is given by \[z^{-1}\left(t\right)=\begin{dcases} \dfrac{2t-2}{\frac{3}{2}t-1},&\;\textrm{if}\;t\in[1,2];\\ t-1,&\;\textrm{otherwise}. \end{dcases}\] If \(N=2\), we have \[z^{2}((z^{-1}(0),0))\subset \left[1,+\infty\right).\] Notice that we do not need to conjugate by the map \(s\), since it is a translation by one around $\pm \infty$, that is \(\overline{z}\coloneqq z\). Moreover, \[z^{2}\left(t\right)=\begin{dcases} \dfrac{t-1}{\frac{3}{2}t-\frac{1}{2}},&\;\textrm{if}\;t\in[-1,0];\\ \dfrac{\frac{5}{2}t-4}{\frac{3}{2}t-2},&\;\textrm{if}\;t\in[0,1];\\ t+2,&\;\textrm{otherwise}. \end{dcases}\] \begin{figure} \caption{Graph of \(z^{2} \end{figure} Considering the relation \(t\sim t+1\), we define \(C_{0}\coloneqq\left(-\infty,0\right]/t\sim t+1\) and \(C_{1}\coloneqq \left[1,+\infty\right)/t\sim t+1\). \noindent Then we get the Mather invariant \begin{align*} z^{\infty}\colon C_{0} &\longrightarrow C_{1} \\ \left[t\right] &\longmapsto z^{\infty}\left(\left[t\right]\right)=\left[z^{2}\left(t\right)\right]. \end{align*} \noindent The lift of this map making the following diagram commute \[\xymatrix{\ar @{} [dr] |{\circlearrowright} \mathbb{R} \ar[d]_-{p_{0}} \ar[r]^-{Z} & \mathbb{R} \ar[d]^-{p_{1}} \\ C_{0} \ar[r]_-{z^{\infty}} & C_{1}}\] \noindent is given by the periodic extension of the restriction of \(z^{2}\) defined on \(\left[-1,0\right]\) by \[Z\left(t\right) = z^{2}(t-x)+x,\] \noindent if \(x-1\leq t\leq x\), where \(x\in\mathbb{Z}\). Then the centralizer of \(Z\) is \(\left(\mathbb{Z},+\right)\). Moreover, notice that \(Z\notin H\). \begin{figure} \caption{Graph of the lift \(Z\).} \end{figure} \end{ex} \subsubsection{Main result about centralizers} We can now give a structure result for centralizers in $H$ (Theorem A in the introduction). \begin{thm} Given \(z\in H\), then \[C_{H}\left(z\right)\cong\left(\mathbb{Z},+\right)^{n}\times\left(\mathbb{R},+\right)^{m}\times H^{k},\] for suitable \(k,m,n\in\mathbb{Z}_{\geq 0}\). \end{thm} \begin{proof} The element $z$ has finitely many (possibly unbounded) intervals of fixed points, so its boundary \(\partial\mathrm{Fix}\left(z\right)=\left\{t_{0}<t_{1}<\ldots<t_{n}\right\}\) has only finitely points. 
If \(g\in C_{H}\left(z\right)\), then \(g\) fixes \(\partial\mathrm{Fix}\left(z\right)\) setwise. Moreover, since $g$ is order-preserving, it must fix $t_i$ for each $i=1,\ldots,n$. As a consequence, we can restrict to study centralizers in each of the subgroups \[ H\left(\left[t_{i},t_{i+1}\right]\right)=\{h \in H \mid h(t)=t, \forall t \not \in [t_{i},t_{i+1}]\} \cong H, \] where \(i=0,1,\ldots,n-1\). If \(z\left(t\right)=t\) on \(\left[t_{i},t_{i+1}\right]\), then it is easy to see that \(C_{H([t_i,t_{i+1}])}\left(z\right)=H([t_i,t_{i+1}])\cong H\). Otherwise, Corollaries \ref{thm:centralizers-affine} and \ref{cor:centralizers-translations} and Propositions \ref{monod-centralizer-breakpoints-affine} and \ref{thm:Centralizer-Last-Case}, cover the remaining cases (when $z$ is conjugate to an affine map or entirely above or below the diagonal) showing that either \(C_{H([t_i,t_{i+1}])}\left(z\right)\cong \left(\mathbb{R},+\right)\) or \(C_{H([t_i,t_{i+1}])}\left(z\right)\cong \left(\mathbb{Z},+\right)\). \end{proof} \end{document}
\begin{document} \title{Optimal investment and consumption under logarithmic utility and uncertainty model } \begin{abstract} We study a robust utility maximization problem in the case of an incomplete market and logarithmic utility with general stochastic constraints, not necessarily convex. Our problem is equivalent to maximizing of nonlinear expected logarithmic utility. We characterize the optimal solution using quadratic BSDE. \end{abstract} \textbf{Keywords}: G-expectation - G-martingale - BSDE - Robust Utility . \section{Introduction} Utility maximization represents an important problems in financial mathematics. This is an optimal investment problem faced by an economic agent having the possibility of investing in a financial market over a finite period of time with fixed investment horizon $T$. The goal of the agent is to find an optimal portfolio that allows him to maximize his "welfare" at time $T$. The founding work of Von Neumann and Morgenstern \cite{VM44}, made it possible to represent the preferences of the investor by means of a function of utility U and a given probability measure P reflecting his views as follows: $$E_{\mathbb{P}}[U(X_T^{x,\pi})]$$ Where $X_T^{x,\pi}$ is the wealth of investor at time $T$ starting from an initial wealth $x$ and adopting an investment strategy $\pi$. The investor's problem then consists in solving the optimization $$\sup_{\pi}E_{\mathbb{P}}[U(X_T^\pi)]$$ To solve this type of problem there are essentially two approaches:the dual approach \cite{SB09} and the BSDE approach \cite{IH2005} \\ In reality, several scenarios are plausible and it is impossible to precisely identify $\P$. Therefore, we must take into account this ambiguity on the model also known as Knightian uncertainty. Knightian uncertainty studies have undergone enormous developments in both theory and applications. The work of Maccheroni, Marinacci and Rustichini led to a new representation of preferences in the presence of model uncertainty, namely: $$\inf_{Q \in \mathcal{Q}}E_{\mathbb{P}}[U(X_T^{x,\pi})+\gamma(Q)]$$ where $\mathcal{Q}$ the set of plausible senarios and $\gamma (Q)$ is the penalty term.In other words, the investor will decide in the worst case. Thus, it will solve the following optimization problem known as robust utility maximization problem: $$\sup_{\pi}\inf_{Q \in \mathcal{Q}}E_{Q}[U(X_T^{x,\pi})+\gamma(Q)]$$ Several developments have been implemented on this subject, whether for the dual approach or the BSDE approach. In this paper we studies the robust utility maximization problem in incomplete market setting. We characterize the value function of our optimization problem using the quadratic BSDE. This was worked out through the class of nonlinear expectation called $g^*$- expectation. \section{Formulation Problem} We consider a filtred space $(\Omega, \mathcal{F}, \mathbb{F}^W, \mathbb{P})$ over a finite horizon time $T$, where the filtration $\mathbb{F}^W=(\mathcal{F}^W_t)_{t \in [0,T]}$ is generated by $d$- dimensional standard Brownian motion $W=(W^1,\ldots, W^d).$ For every measure $Q\ll P$ on $\mathcal{F}_T$ there is a predictable process $(\eta_t)_{t \in [0,T]}$ such that $\displaystyle E[\int_0^T\parallel \eta_t\parallel^2 dt]<+\infty \;\; Q.a.s$ and the density process of $Q$ with respect to $\mathbb{P}$ is an RCLL martingale $Z^Q =(Z^Q_t)_{t \in [0,T]}$ given by: $$ Z_{t}^{Q}=\mathcal{E}\left(\int_{0}^{t} \eta_{u} d W_{u}\right) Q . a . 
s, \forall t \in[0, T] $$ where $\mathcal{E}(M)_{t}=\exp \left(M_{t}-\frac{1}{2}\langle M\rangle_{t}\right)$ denotes the stochastic exponential of a continuous local martingale $M .$ We introduce a consistent time penalty given by: $$ \gamma_{t}(Q)=E_{Q}\left[\int_{t}^{T} h\left(\eta_{s}\right) ds \mid \mathcal{F}_{t}\right] $$ where $h: \mathbb{R}^{d} \rightarrow[0,+\infty]$ is a convex function, proper and lower semi-continuous function such that $h(0) \equiv 0 .$ We also assume that there are two positive constants $\kappa_{1}$ and $\kappa_{2}$ satisfying: $$ h(x) \geq \kappa_{1}\|x\|^{2}-\kappa_{2} $$ Our optimization problem is written as follows: $$\sup_{(\pi,c) \in\mathscr{A}_{e} } \inf_{Q^\eta \in \mathcal{Q}}\E\Big[\bar{\alpha}U(X_T^{x,\pi,c})+\alpha\int_0^T e^{-\int_0^s \delta_udu} u(c_s)ds +\int_{0}^{T}e^{-\int_0^s \delta_udu} h\left(\eta_{s}\right) ds\Big]$$ Where note that $\mathcal{Q}$ the space of all probability measures $Q^\eta $ on $(\Omega, \mathcal{F})$ such that $Q^\eta \ll \mathbb{P}$ on $\mathcal{F}_T$ and $\gamma_0(Q^\eta)< +\infty$. $U$ and $u$ are the utility functions and $\mathscr{A}_{e}$ is the set of admissible strategies which will be specified later. Our problem is then made up of two optimization subproblems. The first, the infmum, is studied by Faidi Matoussi and Mnif \cite{FMM17}. They have proven under the exponential integrability condition of the random variables $U(X_T^{x,\pi})$ and $\dint_0^Tu(c_s)ds$ that the infimum is reached in a one unique probability measure $\Q^{*}$ which is equivalent to $\P$ and and they characterized the value process of the dynamical optimization problem using the quadratic BSDE. More precisely, if $$\E_{\mathbb{P}}[\exp{\gamma |U(X_T^{x,\pi})|}]< +\infty \;\; \text{and} \;\; \E_{\mathbb{P}}[\exp{\gamma \int_0^T|u(c_s)|ds}]< +\infty \;\; \forall \gamma > 0$$ Then the process $$Y^{x,\pi,c}_t=\inf_{Q^\eta \in \mathcal{Q}}\E\Big[\bar{\alpha}U(X_T^{x,\pi,c})+\alpha\int_t^T e^{-\int_t^s \delta_udu} u(c_s)ds +\int_{t}^{T}e^{-\int_t^s \delta_udu} h\left(\eta_{s}\right) ds\Big]$$ satisfied the following quadratic BSDE: \begin{equation}\label{bsdecaraterization} \left\{\begin{array}{l} d Y_{t}^{x,\pi,c}=\left(\delta_{t} Y_{t}^{x,\pi,c}-\alpha u(c_t)+\beta h^{*} (\frac{1}{\beta}Z_{t})\right) dt-Z_{t} d W_{t}, \\ Y_{T}^{x,\pi,c}=\bar{\alpha} \bar{U}(X_T^{x,\pi,c}) . \end{array}\right. \end{equation} Where $h^*$ the Legendre-Fenchel transform of $h$. \\ In the sequel we are interested in the second problem, the supermum. \\ The financial market consists of one bond with interest rate zero and $d \leq m$ stocks. In case $d<m$, we face an incomplete market. The price process of stock $i$ evolves according to the equation: $$ \begin{aligned} &\frac{d S_{t}^{i}}{S_{t}^{i}}=b_{t}^{i} d t+\sigma_{t}^{i} d W_{t}, \quad i=1, \ldots, d \end{aligned} $$ where $b^{i}$ (resp. $\sigma^{i}$ ) is an $\mathbb{R}$ -valued (resp. $\mathbb{R}^{1 \times m}$ -valued) predictable uniformly bounded stochastic process. The lines of the $d \times m$ -matrix $\sigma$ are given by the vector $\sigma_{t}^{i}, i=1, \ldots, d$. The volatility matrix $\sigma=\left(\sigma^{i}\right)_{i=1, \ldots, d}$ has full rank and we assume that $\sigma \sigma^{t r}$ is uniformly elliptic, that is, $K I_{d} \geq \sigma \sigma^{t r} \geq \varepsilon I_{d}$ $P$ -a.s. 
for constants $K>\varepsilon>0 .$ The predictable $\mathbb{R}^{m}$ -valued process $$ \theta_{t}=\sigma_{t}^{t r}\left(\sigma_{t} \sigma_{t}^{t r}\right)^{-1} b_{t}, \quad t \in[0, T] $$ is then also uniformly bounded. A $d$ -dimensional $\mathbb{F}$ -predictable process $\pi=\left(\pi_{t}\right)_{0 \leq t \leq T}$ is called trading strategy if $\int \pi \frac{d S}{S}$ is well defined, for example, $\int_{0}^{T}\left\|\pi_{t} \sigma_{t}\right\|^{2} d t<\infty P$ -a.s. For $1 \leq i \leq d,$ the process $\pi_{t}^{i}$ describes the amount of money invested in stock $i$ at time $t .$ The number of shares is $\frac{\pi_{t}^{i}}{S_{t}^{2}} .$ The wealth process $X^{\pi}$ of a trading strategy $\pi$ with initial capital $x$ satisfies the equation $$ X_{t}^{x,\pi}=x+\sum_{i=1}^{d} \int_{0}^{t}\pi_{i, u} \frac{d S_{i, u}}{S_{i, u}}-\int_0^t c_u du =x+\int_{0}^{t} \pi_{u} \sigma_{u}\left(d W_{u}+\theta_{u} d u\right)-\int_0^t c_u du, \quad t \in[0, T] $$ In this notation $\pi$ has to be taken as a vector in $\mathbb{R}^{1 \times d}$. Trading strategies are self-financing. The investor uses his initial capital and during the trading interval $[0, T]$, there is no extra money flow out of or into his portfolio. Gains or losses are only obtained by trading with the stock. \\ Based on the above, the problem is equivalent to find: \begin{equation}\label{opt} V_0(x)=\sup_{(\pi,c) \in \mathscr{A}_{e}}Y_0^{x,\pi,c} \end{equation} such that \begin{equation} \left\{\begin{array}{l} d Y_{t}^{x,\pi,c}=\left(\delta_{t} Y_{t}^{x,\pi,c}-\alpha u(c_t)+\beta h^{*} (\frac{1}{\beta}Z_{t})\right) dt-Z_{t} d W_{t}, \\ Y_{T}^{x,\pi,c}=\bar{\alpha} \bar{U}(X_T^{x,\pi,c}) . \end{array}\right. \end{equation} To solve this problem, we use the notion of $g$-expectation introduced by Peng \cite{Peng97} . We then start with the main results on this notion of nonlinear expectation. \section{ $g$-expectation} The notion of $g$-expectations was firstly introduced by Peng in \cite{Peng97} in the case of lipschitz generator. from which most basic material of this section is taken Let $\xi$ a strictly positive random variable such that $\E_\mathbb{P}[\exp(\gamma |\xi|)]<\infty$ forall $\gamma>0$ and $g:\mathbb{R}\times \mathbb{R}^d \rightarrow \mathbb{R}$ satisfies the following conditions: \begin{itemize} \item $\forall t \in [0, T ], g(t,0)=0$ \item $\forall t \in [0, T ], z \mapsto g(t, z)$ is a continuous convex function, \item There are two positives constants $ \beta$ and $\gamma$ such that $$ \forall (t,z)\in \R\times \R^d; |g(t,\ z)|\leq\beta|z|^{2}+\gamma .$$ \end{itemize} then the BSDE $$Y_{t}=\displaystyle \xi+\int_{t}^{T}g(s,\ Z_{s})ds-\int^{T}Z_{s}dW_{s}, $$ admits a unique solution $(Y,\ Z)\in \mathscr{S}_T^{\infty}(\mathbb{R})\times \mathscr{H}_T^p\left(\mathbb{R}^m\right)$ where \begin{itemize} \item[-] For $p \in \mathbb{N}, \mathscr{H}_T^p\left(\mathbb{R}^m\right)$ is the set of all $\mathbb{R}^m$-valued stochastic processes $Z$ which are predictable with respect to $\mathbb{F}$ and satisfy $\E\left[\left(\dint_0^T\left|Z_t\right|^2 d t\right)^{\frac{p}{2}}\right]<\infty$. 
\item[-] $\mathscr{S}_T^{\infty}$ is the set of all continuous bounded $\mathbb{F}$-adapted semi-martingales $Y$ that satisfy $$ \forall \lambda>0, \quad \E\left[e^{\lambda \sup\limits_{t \in[0, T]}\left|Y_t\right|}\right]<\infty, $$ \end{itemize} $Y_{t}$ is called conditional $g$-expectation of $\xi$ under $\mathcal{F}_t$ and and is noted $\mathcal{E}_{g}[\xi|\mathcal{F}_t]$ and $Y_{0}$ is called $g$ -expectation of $\xi$, denoted by $Y_{0}=\mathcal{E}_{g}(\xi)$. \begin{Definition} \begin{enumerate} \item A process $X_{t}$ is called a $\mathrm{g}$-martingale, if for any $s \geq t$ $$\mathcal{E}_{g}[X_{t}|\mathcal{F}_s]=X_{s}$$ \item A process $X_{t}$ is called a $\mathrm{g}$-submartingale, if for any $s \geq t$ $$\mathcal{E}_{g}[X_{t}|\mathcal{F}_s]\geq X_{s}$$ \item A process $X_{t}$ is called a $\mathrm{g}$-supermartingale, if for any $s \geq t$ $$\mathcal{E}_{g}[X_{t}|\mathcal{F}_s]\leq X_{s}$$ \end{enumerate} \end{Definition} The following result is an immediate consequence of the comparison theorem \begin{Proposition} If $g_1 \leq g_2$ then any $\mathrm{g}_1$-supermartingale is $\mathrm{g}_2$-submartingale. \end{Proposition} \begin{Lemma} $$Y_0^{x,\pi,c}=\Ec_g[e^{-\int_0^T \delta_u du}\bar{\alpha} \bar{U}(X_T^{x,\pi,c})+\int_0^T\alpha e^{-\int_0^s \delta_u du} u(c_s)ds]$$ Where $$g(w,t,z)=\beta e^{-\int_0^t \delta_u du} h^{*} \left(\frac{1}{\beta}e^{\int_0^t \delta_u du}z\right)$$ \end{Lemma} \begin{proof} Let the stochastic process $L^{x,\pi,c}$ defined by: $$L^{x,\pi,c}_t:= e^{-\int_0^t \delta_u du}Y^{x,\pi,c}_t+\int_0^t\alpha e^{-\int_0^s \delta_u du} u(c_s)ds$$ The process $L^{x,\pi,c}$ satisfy the following BSDE: \begin{equation}\label{gcaracterization} \left\{\begin{array}{l} d L_{t}^{x,\pi,c}=\beta e^{-\int_0^t \delta_u du} h^{*} \left(\frac{1}{\beta}e^{\int_0^t \delta_u du}Z_{t}\right) dt-Z_{t} d W_{t}, \\ L_{T}^{x,\pi,c}=e^{-\int_0^T \delta_u du}\bar{\alpha} \bar{U}(X_T^{x,\pi,c})+\int_0^T\alpha e^{-\int_0^s \delta_u du} u(c_s)ds . \end{array}\right. \end{equation} In other words $$L_{t}^{x,\pi,c}=\Ec_g\Big[ e^{-\int_0^T \delta_u du}\bar{\alpha} \bar{U}(X_T^{x,\pi,c})+\int_0^T\alpha e^{-\int_0^s \delta_u du} u(c_s)ds |\Fc_t\Big] $$ \end{proof} \begin{Remark} The conditions mentioned above are verified by the BSDE(\ref{gcaracterization}). Indeed: \begin{enumerate} \item Since $h(0)=0$ and $h(x)\geq 0$ forall $x\in \R,$ we have $h^*(0)=\sup\limits_{y \leq 0}-h(y)=0.$ \item $h$ is supposed to be continuous and convex function so $h^*$ is also. \item Since $ \forall x \in \R^d; |h(x)| \geq \kappa_{1}\|x\|^{2}-\kappa_{2} $ then $ \forall x \in \R^d; |h^*(x)| \leq \dfrac{1}{2\kappa_{1}}\|x\|^{2}-\kappa_{2} $ \end{enumerate} \end{Remark} Thus our problem reduces to a utility maximization problem under the nonlinear expectation. 
\section{Logarithme utility} In this section, we consider the logarithm utility, which is given by $$ U(x)=\ln(x) $$ We here suppose that the initial wealth is strictly positive: $x > 0.$ We denote by $\pi_{t}$ the fraction of wealth invested in the risky assets at time $t \in[0, T],$ and $c_t$ the consumption rate at time $t$ that is, the portfolio value at time $t$ is given by $$ \dfrac{dX_t^{\pi,c}}{X_t^{\pi,c}}=\pi_t\sigma_t(dW_t+\theta_tdt)-c_tdt\;\;; X_0^{\pi,c}=x $$ and one can write \begin{equation}\label{eq0} X_t^{\pi,c}=x\Ec(\pi\sigma\textbf{.} W^{\Q})_t \exp(-\int_0^tc_sds)>0 \end{equation} where $\Ec$ is the stochastic exponential and $W^{\Q}_t= W_t +\int_0^t \theta_sds.$ \\ To formulate consumption and investment constraints we introduce non-empty subsets $\Cc \in \Pc$ and $\Ac \in \Pc^{1\times m}$, where $\Pc$ denotes the set of all real-valued predictable processes $(c_t)_{0\ leq t \leq T}$ and $\Pc^{1\times m}$ the set of all predictable processes $(\pi_t)_{0\ leq t \leq T}$ with values in $\R^{1\times m}.$ \\ We assume that $\Cc$ and $\Ac$ are sequentially closed and $\Pc$-stable. \begin{Definition} an admissible strategy is a pair $(\pi,c) \in \Ac \times \Cc$ satisfying \begin{equation} \E[\exp(\gamma (|\ln(X_T^{x,\pi,c})|+\int_0^T |\ln(c_s)|ds)]< \infty ; \forall \gamma>0 \end{equation} \begin{Proposition} The set of admissible strategy is sequentially closed and $\Pc$-stable. \end{Proposition} \begin{proof} let $(\pi^{(n)},c^{(n)})\in \mathscr{A}\times \mathscr{C}$ which converges $\nu \otimes \P$ a.e to $(\pi,c)$ \\ Since $\Cc$ and $\Ac$ are sequentially closed then $(\pi,c) \in \Ac \times \Cc.$ \\ By Fatou lemma, we have $$\E[\exp(\gamma (|\ln(X_T^{x,\pi,c})|+\int_0^T |\ln(c_s)|ds)] \leq \liminf\limits_{n}\E[\exp(\gamma (|\ln(X_T^{x,\pi^{(n)},c^{(n)}}|)+\int_0^T |\ln(c^{(n)}_s)|ds)]< \infty$$ and so $\mathscr{A}\times \mathscr{C}$ is sequentially closed. \\ The $\Pc$-stability arises from convexity of the function $x\mapsto e^{|x|}.$ \end{proof} \begin{Remark} Using the same argument above, if $\Ac$ and $\Cc$ are $\Pc$-convex then $\mathscr{A}\times \mathscr{C}$ is $\Pc$-convex. \end{Remark} \end{Definition} We assume in the sequel that there exists a pair $(\bar{\pi},\bar{c}) \in \mathscr{A}\times \mathscr{C}$ such that $\ln(c)-c$ and $\pi$ are bounded. \\ To solve this problem we adopt the same approach as imkeller hu \cite{IH2005} and Jiang \cite{Jiang16}. \\Let us first start with the following useful lemma \begin{Lemma}\label{lemma1} For any given $(\pi,c) \in \mathscr{A}_{e}$ , if there exists a RCLL adapted process $R^{x,\pi,c}$ such that \begin{itemize} \item $\forall (\pi,c) \in \mathscr{A}\times \mathscr{C} ; R_{T}^{x,\pi,c}:=\bar{\alpha}e^{-\int_0^T \delta_u du }\ln( X_{T}^{x,\pi,c})+\dint_0^T \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi,c})du$ \item $\forall (\pi,c) \in \mathscr{A}\times \mathscr{C} ; R^{x,\pi,c}$ is a $ g$-supermartingale; \item There exists $(\pi^{*},c^{*}) \in \mathscr{A}\times \mathscr{C}$ such that $R^{ \pi^{*},c^*}$ is a $g$ -martingale, then $(\pi^{*},c^{*})$ is an optimal strategy to Problem (\ref{opt}) \end{itemize} \end{Lemma} \begin{proof} Proof. For given $t \in[0, T]$ and $(\hat{\pi},\hat{c}) \in \mathscr{A}_{e}$, we introduce $\tilde{\pi}_{u}=\pi_{u} I_{u \leq t}+\hat{\pi}_{u} I_{u>t}$, and $\tilde{c}_{u}=c_{u} I_{u \leq t}+\hat{c}_{u} I_{u>t}$ one can see that ($\tilde{\pi},\tilde{c}) \in \mathscr{A}_{e}$ and $X_{t}^{x,\tilde{\pi},\tilde{c}}=X_{t}^{x,\pi,c}$. 
Since $R^{e, \tilde{\pi}}$ is a $g$-supermartingale, we have \begin{align*} R_{t}^{x,\pi,c} & \geq \Ec_g[\bar{\alpha}e^{-\int_0^T \delta_u du }\Big(\ln( X_{T}^{x,\tilde{\pi},\tilde{c}})\Big)+\dint_0^T \alpha e^{-\int_0^u \delta_s ds } \ln (\tilde{c}_u X_{u}^{x,\tilde{\pi},\tilde{c}})du|\Fc_t] \\&=\Ec_g\Big[\bar{\alpha}e^{-\int_0^T \delta_u du }\Big(\ln( X_{t}^{x,\pi,c}\dfrac{X_{T}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})\Big)+\dint_0^t \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi,c})du \\& +\dint_t^T \alpha e^{-\int_0^u \delta_s ds } \ln (\hat{c}_u X_{t}^{x,\pi,c}\dfrac{X_{u}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})du|\Fc_t\Big] \end{align*} thus, \begin{align*} R_{t}^{x,\pi,c} & \geq \sup\limits_{(\hat{\pi},\hat{c})}\Ec_g\Big[\bar{\alpha}e^{-\int_0^T \delta_u du }\ln( X_{t}^{x,\pi,c}\dfrac{X_{T}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})+\dint_0^t \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi,c})du \\& +\dint_t^T \alpha e^{-\int_0^u \delta_s ds } \ln (\hat{c}_u X_{t}^{x,\pi,c}\dfrac{X_{u}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})du|\Fc_t\Big] \end{align*} Furthermore, as for $(\pi^*,c^*); $ $R_{t}^{ \pi^{*}}$ is a $g$ -martingale, \begin{align*} R_{t}^{x,\pi^*,c^*} & = \Ec_g\Big[\bar{\alpha}e^{-\int_0^T \delta_u du }\ln( X_{t}^{x,\pi^*,c^*}\dfrac{X_{T}^{x,\pi^*,c^*}}{X_{t}^{x,\pi^*,c^*}})+\dint_0^t \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi^*,c^*})du \\& +\dint_t^T \alpha e^{-\int_0^u \delta_s ds } \ln (c^*_u X_{t}^{x,\pi^*,c^*}\dfrac{X_{u}^{x,\pi^*,c^*}}{X_{t}^{x,\pi^*,c^*}})du|\Fc_t\Big] \\ & \leq \sup\limits_{(\hat{\pi},\hat{c})}\Ec_g\Big[\bar{\alpha}e^{-\int_0^T \delta_u du }\ln( X_{t}^{x,\pi^*,c^*}\dfrac{X_{T}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})+\dint_0^t \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi^*,c^*})du \\& +\dint_t^T \alpha e^{-\int_0^u \delta_s ds } \ln (\hat{c}_u X_{t}^{x,\pi^*,c^*}\dfrac{X_{u}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})du|\Fc_t\Big] \end{align*} Thus \begin{align*} R_{t}^{x,\pi^*,c^*} & = \sup\limits_{(\hat{\pi},\hat{c})}\Ec_g\Big[\bar{\alpha}e^{-\int_0^T \delta_u du }\ln( X_{t}^{x,\pi^*,c^*}\dfrac{X_{T}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})+\dint_0^t \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi^*,c^*})du \\& +\dint_t^T \alpha e^{-\int_0^u \delta_s ds } \ln (\hat{c}_u X_{t}^{x,\pi^*,c^*}\dfrac{X_{u}^{x,\hat{\pi},\hat{c}}}{X_{t}^{x,\hat{\pi},\hat{c}}})du|\Fc_t\Big] \end{align*} holds for each $t \in [0, T]$, and by taking $t = 0$, we know that $(\pi^*,c^*)$ is an optimal strategy \end{proof} We are now trying to build a family of processes $R^{x,\pi,c}$ which satisfies the conditions of the previous lemma. We assume in the sequel that the process $\delta$ is deterministic. \\ We seek $R^{x,\pi,c}$ in the form $$R_{t}^{x,\pi,c}:=\bar{\alpha}h(t)e^{-\int_0^t \delta_u du }\Big(\ln( X_{t}^{x,\pi,c})-Y_t\Big)+\dint_0^t \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi,c})du$$ Where the process $Y$ satisfie a quadratic BSDE of the form $$dY_t=\rho_t Y_t dt + f(t,Z_t)dt -Z_t dW_t\;\;\; Y_T=0$$ and $h$ is the solution of the ordinary Cauchy problem \begin{equation}\label{cauchypb} \forall 0 \leq t \leq T; \bar{\alpha} h'(t)=\bar{\alpha}\delta_t h(t)-\alpha \;\; \text{and} \;\; h(T)=1. 
\end{equation} Hence $$h(t)=e^{-\int_t^T \delta_u du }+\dfrac{\alpha}{\bar{\alpha}} e^{\int_0^t \delta_u du }\int_t^Te^{-\int_0^u \delta_s ds }du.$$ Our goal now is to find the process $\rho$ and the function $f$ such that the process $R$ satisfies the conditions of Lemma \ref{lemma1}. \\ Using It\^o's formula, we have \begin{align*} dR_{t}^{x,\pi,c}&=\bar{\alpha}e^{-\int_0^t \delta_u du }(h'(t)-\delta_t h(t))\Big(\ln( X_{t}^{x,\pi,c})-Y_t\Big)dt \\&+\bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(\pi_t \sigma_{t}dW_{t}-\pi_t \sigma_{t}\theta_{t}dt - c_tdt-\rho_t Y_t dt - f(t,Z_t)dt +Z_t dW_t) \\&+\alpha e^{-\int_0^t \delta_u du } \ln (c_t) dt +\alpha e^{-\int_0^t \delta_u du } \ln (X_{t}^{x,\pi,c}) dt \\&=-\alpha e^{-\int_0^t \delta_u du }\Big(\ln( X_{t}^{x,\pi,c})-Y_t\Big)dt \\&+\bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(\pi_t \sigma_{t}dW_{t}-\pi_t \sigma_{t}\theta_{t}dt - c_tdt-\rho_t Y_t dt - f(t,Z_t)dt +Z_t dW_t) \\&+\alpha e^{-\int_0^t \delta_u du } \ln (c_t) dt +\alpha e^{-\int_0^t \delta_u du } \ln (X_{t}^{x,\pi,c}) dt \end{align*} To get rid of the term $Y_t dt$, we choose $\rho_t=\dfrac{\alpha}{\bar{\alpha}h(t)}$ for all $0\leq t \leq T.$ We then obtain \begin{align*} dR_{t}^{x,\pi,c}&= e^{-\int_0^t \delta_u du }\Big(\alpha \ln (c_t) dt -\bar{\alpha}h(t)(\pi_t \sigma_{t}\theta_{t} + c_t+ f(t,Z_t))dt\Big) +\bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(\pi_t \sigma_{t}+Z_t)dW_{t} \\&=e^{-\int_0^t \delta_u du }\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big)dt -\bar{\alpha}h(t) e^{-\int_0^t \delta_u du }\Big(\pi_t \sigma_{t}\theta_{t} + f(t,\dfrac{e^{\int_0^t \delta_u du }}{\bar{\alpha}h(t)}\tilde{Z}_t-\pi_t\sigma_t)\Big)dt +\tilde{Z}_tdW_{t}, \end{align*} where $\tilde{Z}_t:=\bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(\pi_t \sigma_{t}+Z_t)$. Therefore $R^{x,\pi,c}$ is a solution of the BSDE $$ \left\{\begin{array}{l} d R_{t}^{x,\pi,c}=F(t,Z^{\pi,c}_t)dt+Z^{\pi,c}_tdW_{t}, \\ R_{T}^{x,\pi,c}=\bar{\alpha}e^{-\int_0^T \delta_u du }\ln( X_{T}^{x,\pi,c})+\dint_0^T \alpha e^{-\int_0^u \delta_s ds } \ln (c_u X_{u}^{x,\pi,c})du, \end{array}\right.
$$ where $$F(t,z)=e^{-\int_0^t \delta_u du }\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big) -\bar{\alpha}h(t) e^{-\int_0^t \delta_u du }\Big(\pi_t \sigma_{t}\theta_{t} + f(t,\dfrac{e^{\int_0^t \delta_u du }}{\bar{\alpha}h(t)}z-\pi_t\sigma_t)\Big).$$ We can choose $f$ such that $$ F(t,Z_t^{\pi,c}) \leq g \left(t, Z_t^{\pi,c}\right) \quad \forall (\pi,c),\;\; \P\text{-a.s.,}$$ with equality reached for some $(\pi^*,c^*) \in \mathscr{A} \times \mathscr{C}$. \\ It is sufficient that for all $z\in \R^d$ and all $(\pi,c)\in \mathscr{A}\times \mathscr{C}$ we have \begin{equation}\label{eq1} e^{-\int_0^t \delta_u du }\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big) -\bar{\alpha}h(t) e^{-\int_0^t \delta_u du }\Big(\pi_t \sigma_{t}\theta_{t} + f(t,\dfrac{e^{\int_0^t \delta_u du }}{\bar{\alpha}h(t)}z-\pi_t\sigma_t)\Big) \leq g \left(t, z\right)\;\;\; \P\text{-a.s.,} \end{equation} and that there exists at least one $(\pi^*,c^*)$ such that for all $z\in \R^d$ \begin{equation}\label{eq2} e^{-\int_0^t \delta_u du }\Big(\alpha \ln (c^*_t) -\bar{\alpha}h(t)c^*_t\Big) -\bar{\alpha}h(t) e^{-\int_0^t \delta_u du }\Big(\pi^*_t \sigma_{t}\theta_{t} + f(t,\dfrac{e^{\int_0^t \delta_u du }}{\bar{\alpha}h(t)}z-\pi^*_t\sigma_t)\Big) = g \left(t, z\right)\;\;\; \P\text{-a.s.} \end{equation} Equation (\ref{eq1}) implies that for all $z\in \R^d$ and all $\pi\in \mathscr{A}$ we have \begin{equation}\label{eq3} e^{-\int_0^t \delta_u du }\sup_{c\in \mathscr{C}}\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big) -\bar{\alpha}h(t) e^{-\int_0^t \delta_u du }\Big(\pi_t \sigma_{t}\theta_{t} + f(t,\dfrac{e^{\int_0^t \delta_u du }}{\bar{\alpha}h(t)}z-\pi_t\sigma_t)\Big) \leq g \left(t, z\right)\;\;\; \P\text{-a.s.,} \end{equation} which is equivalent to $$\forall z,\ \forall \pi \in \mathscr{A}:\quad f(t,z) \geq \frac{1}{\bar{\alpha}h(t)} \left(e^{-\int_0^t \delta_u du }\sup_{c\in \mathscr{C}}\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big)- g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi_t\sigma_t)\right)\right)-\pi_t \sigma_{t} \theta_{t}\;\;\; \P\text{-a.s.,}$$ and so $$\forall z:\quad f(t,z) \geq \frac{1}{\bar{\alpha}h(t)} e^{-\int_0^t \delta_u du }\sup_{c\in \mathscr{C}}\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big)- \inf_{\pi \in \mathscr{A}}\left(\frac{1}{\bar{\alpha}h(t)}g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi_t\sigma_t)\right)+\pi_t \sigma_{t} \theta_{t}\right)\;\;\; \P\text{-a.s.}$$ The sets $$\{c \in \mathscr{C};\ \alpha \ln (c) -\bar{\alpha}h(t)c \geq \alpha \ln (\bar{c}) -\bar{\alpha}h(t)\bar{c}\}$$ and $$\{\pi \in \mathscr{A};\ g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi\sigma)\right)+\pi \sigma \theta \leq g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \bar{\pi}\sigma)\right)+\bar{\pi} \sigma \theta\}$$ are $L^0$-bounded.
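For the first of these sets this can be seen directly: since $h(t)>0$, the concave map $c\mapsto \alpha \ln (c) -\bar{\alpha}h(t)c$ tends to $-\infty$ both as $c\to 0^+$ and as $c\to +\infty$, so each of its superlevel sets is a compact subinterval of $(0,+\infty)$ whose endpoints are controlled by the assumed bound on $\ln(\bar{c})-\bar{c}$; $L^0$-boundedness follows.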
Theorem \textbf{4.4} in \cite{cheridito11} then yields the existence of a pair $(\pi^*,c^*) \in \mathscr{A} \times \mathscr{C}$ such that $$\inf_{\pi \in \mathscr{A}}\left(\frac{1}{\bar{\alpha}h(t)}g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi_t\sigma_t)\right)+\pi_t \sigma_{t} \theta_{t}\right)=\frac{1}{\bar{\alpha}h(t)}g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi^*_t\sigma_t)\right)+\pi^*_t \sigma_{t} \theta_{t}$$ and $$\sup_{c\in \mathscr{C}}\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big)=\alpha \ln (c^*_t) -\bar{\alpha}h(t)c^*_t.$$ To achieve equation (\ref{eq2}) we must choose \begin{equation}\label{eq4} \begin{split} f(t,z)& = \frac{1}{\bar{\alpha}h(t)} e^{-\int_0^t \delta_u du }\sup_{c\in \mathscr{C}}\Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big)- \inf_{\pi \in \mathscr{A}}\left(\frac{1}{\bar{\alpha}h(t)}g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi_t\sigma_t)\right)+\pi_t \sigma_{t} \theta_{t}\right) \\ & = \frac{1}{\bar{\alpha}h(t)} \left(e^{-\int_0^t \delta_u du }\Big(\alpha \ln (c^*_t) -\bar{\alpha}h(t)c^*_t\Big)- g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi^*_t\sigma_t)\right)\right)-\pi^*_t \sigma_{t} \theta_{t}\;\;\; \P\text{-a.s.} \end{split} \end{equation} It remains to verify the measurability of the function $f$. We have the following lemma. \begin{Lemma} \begin{enumerate} \item $f$ is a predictable process. \item There exist predictable processes $c^*$ and $\pi^*$ such that \begin{equation*} f(t,z)= \frac{1}{\bar{\alpha}h(t)} \left(e^{-\int_0^t \delta_u du }\Big(\alpha \ln (c^*_t) -\bar{\alpha}h(t)c^*_t\Big)- g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi^*_t\sigma_t)\right)\right)-\pi^*_t \sigma_{t} \theta_{t}\;\;\; \P\text{-a.s.} \end{equation*} \end{enumerate} \end{Lemma} From the above we deduce the following result. \begin{Theorem} The optimal value of Problem (\ref{opt}) is given by $$V_0(x)=\bar{\alpha}h(0)\Big(\ln(x)-Y_0\Big),$$ where $Y$ is the solution of the following BSDE: $$dY_t=\dfrac{\alpha}{\bar{\alpha}h(t)} Y_t dt + f(t,Z_t)dt-Z_t dW_t\;\;;\; Y_T=0.$$ The function $h$ and the generator $f$ are given by (\ref{cauchypb}) and (\ref{eq4}), respectively. \\ Moreover, $(\pi^*,c^*)$ is an optimal admissible strategy if and only if $$c^* \in \argmax_{c\in \Cc} \Big(\alpha \ln (c_t) -\bar{\alpha}h(t)c_t\Big) \;\;\; \text{and} \;\;\;\pi^* \in \argmin_{\pi\in \Ac} \left(\frac{1}{\bar{\alpha}h(t)}g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(Z_t+ \pi_t\sigma_t)\right)+\pi_t \sigma_{t} \theta_{t}\right).$$ \end{Theorem} \section{Example} In this section we suppose that $h(x)=\dfrac{1}{2}\|x\|^2$ for all $x \in \R^d$, which corresponds to the entropic utility case. Then we have $ h^*(x)=\dfrac{1}{2}\|x\|^2$ and therefore $g(w,t,z)=\dfrac{1}{\beta }e^{\int_0^t \delta_u du} \|z\|^2$.
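To see where the first term of the generator in the proposition below comes from, note that if the deterministic process $\frac{\alpha}{\bar{\alpha}h}$ belongs to $\Cc$ (as assumed just below), then the supremum of $c\mapsto \alpha \ln (c) -\bar{\alpha}h(t)c$ over $\Cc$ is attained at the unconstrained maximizer: the first-order condition $\frac{\alpha}{c}-\bar{\alpha}h(t)=0$ gives $c^*_t=\frac{\alpha}{\bar{\alpha}h(t)}$, and $$\sup_{c>0}\Big(\alpha \ln (c) -\bar{\alpha}h(t)c\Big)=\alpha\ln\Big(\frac{\alpha}{\bar{\alpha}h(t)}\Big)-\alpha.$$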
We also assume that $\dfrac{\alpha}{\bar{\alpha}h} \in \Cc$ \begin{Proposition} The generator (\ref{eq4}) is given by \begin{equation*} f(t,z) = \frac{1}{\bar{\alpha}h(t)} e^{-\int_0^t \delta_u du }(\alpha\ln(\dfrac{\alpha}{\bar{\alpha}h(t)})-\alpha)- \dfrac{\bar{\alpha}}{\beta}h(t)e^{-\int_0^t \delta_u du }d^2(z+\dfrac{\beta e^{\int_0^t \delta_u du } }{2\bar{\alpha} h(t) }\theta_t,\sigma \Ac)-z_t\theta_t-\dfrac{\beta e^{\int_0^t \delta_u du } }{4\bar{\alpha} h(t) }\|\theta_t\|^2 \end{equation*} \end{Proposition} \begin{proof} \begin{equation*} \begin{split} &\frac{1}{\bar{\alpha}h(t)}g \left(t, \bar{\alpha}h(t)e^{-\int_0^t \delta_u du }(z+ \pi_t\sigma_t)\right)+\pi_t \sigma_{t} \theta_{t} \\& =\dfrac{\bar{\alpha}}{\beta}h(t)e^{-\int_0^t \delta_u du }\|z+\pi_t\sigma_t\|^2+\pi_t\sigma_t\theta_t \\& =\dfrac{\bar{\alpha}}{\beta}h(t)e^{-\int_0^t \delta_u du }\|z+\pi_t\sigma_t+\dfrac{\beta e^{\int_0^t \delta_u du } }{2\bar{\alpha} h(t) }\theta_t\|^2-z_t\theta_t-\dfrac{\beta e^{\int_0^t \delta_u du } }{4\bar{\alpha} h(t) }\|\theta_t\|^2 \end{split} \end{equation*} \end{proof} \end{document}
\begin{document} \title{Higher order polars of quasi-ordinary singularities \footnotetext{ \begin{minipage}[t]{4in}{\small 2000 {\it Mathematics Subject Classification:\/} Primary 32S05; Secondary 14H20.\\ Key words and phrases: quasi-ordinary polynomial, higher order polar, factorization, P-contact, self-contact.\\ The first-named author was partially supported by the Spanish Project MTM 2016-80659-P.} \end{minipage}}} \author{Evelia R.\ Garc\'{\i}a Barroso and Janusz Gwo\'zdziewicz} \maketitle \begin{abstract} A quasi-ordinary polynomial is a monic polynomial with coefficients in the power series ring such that its discriminant equals a monomial up to a unit. In this paper we study higher derivatives of quasi-ordinary poly\-nomials, also called higher order polars. We find factorizations of these polars. Our research in this paper goes in two directions. We generali\-ze the results of Casas-Alvero and our previous results on higher order polars in the plane to irreducible quasi-ordinary polynomials. We also generalize the factorization of the first polar of a quasi-ordinary polynomial (not necessarily irreducible) given by the first-named author and Gonz\'alez-P\'erez to higher order polars. This is a new result even in the plane case. Our results remain true when we replace quasi-ordinary polynomials by quasi-ordinary power series. \end{abstract} \section{Introduction} \label{intro} In \cite{Merle} Merle gave a decomposition theorem for a generic polar curve of an irreducible plane curve singularity, according to its topological type. The factors of this decomposition are not necessarily irreducible. Merle's decomposition was generalized to reduced plane curve germs by Kuo and Lu \cite{K-L}, Delgado de la Mata \cite{Delgado}, Eggers \cite{Eggers}, Garc\'{\i}a Barroso \cite{GB}, among others. In \cite{GB-GP}, Garc\'{\i}a Barroso and Gonz\'alez P\'erez obtained decompositions of the polar hypersurfaces of quasi-ordinary singularities. On the other hand, Casas-Alvero in \cite{Casas} generalized the results of Merle to higher order polars of an irreducible plane curve. In \cite{Forum} we improved his results by giving a finer decomposition, in such a way that we are able to determine the topological type of some irreducible factors of the polar as well as their number. \noindent Our research in this paper goes in two directions. We generalize the results of \cite{Casas} and \cite{Forum} on higher order polars to irreducible quasi-ordinary singularities (see Theorem \ref{Merle} and Proposition \ref{ppppp}). We also generalize the factorization of the first polar of a quasi-ordinary singularity (not necessarily irreducible) from \cite{GB-GP} to higher order polars (see Theorem \ref{dec-red-qo}). This is a new result even in the plane case. \noindent Our approach is based on Kuo-Lu trees, Eggers trees, Newton polytopes and resultants. As was remarked in \cite{tesis} and \cite{GB-GP}, the irreducible factors of the polar of a quasi-ordinary singularity are not necessarily quasi-ordinary. For that reason, we measure the relative position of these irreducible factors and those of the quasi-ordinary singularity using a new notion called the {\em P-contact}, which plays in our situation the role of the {\em logarithmic distance} introduced by P\l oski in \cite{Ploski}. \noindent The paper is organized as follows.
\noindent In Section \ref{section-Newton-polytopes} we recall the notion of the Newton polytope of a Weierstrass polynomial $f\in \mathbf{K}[[\underline{x}]][y]$ and we use it together with the Rond-Schober irreducibility criterium \cite{R-S}, in order to give sufficient conditions for the reducibility of $f$. The most important result in this section is Corollary \ref{irred}, which allows us to characterize, in Theorem \ref{pack}, the irreducible factors of the higher order polars of the polynomial $f$. \noindent In Section \ref{section-Kuo-Lu-tree} we present the notion of the Kuo-Lu tree of a quasi-ordinary Weierstrass polynomial. Then in Section \ref{section-Compatibility with pseudo-balls} we identify the bars of a Kuo-Lu tree with certain sets of fractional power series called {\em pseudo-balls} and we introduce the notion of {\em compatibility} of a Weierstrass polynomial with a pseudo-ball. Every quasi-ordinary Weierstrass polynomial is compatible with every pseudo-ball associated with its Kuo-Lu tree. Moreover if a Weierstrass polynomial is compatible with a pseudo-ball then any factor of it is compatible too (see Corollary \ref{CCKiel}). In Lemma \ref{derivatives} we prove that, under some conditions, the normalized higher derivatives inherit the compatibility property. In Section \ref{section-Conjugate-pseudo-balls} we introduce, using Galois automorphisms, an equivalence relation in the set of pseudo-balls, called {\em conjugacy}, and we explore the compatibility property for conjugate pseudo-balls. We generalize the Kuo-Lu Lemma \cite[Lemma 3.3]{K-L} to higher derivatives in Section \ref{section-Kuo-Lu Lemma}. In Section \ref{section-Newton-polytopes-resultants} we introduce our main tool, monomial substitutions, that allows us to reduce several questions to the case of two variables. In particular, if $f$ and $g$ are power series in $d+1$ variables such that after generic monomials substitutions we obtain power series $\bar f,\bar g$ in two variables with equal Newton polygons, then the Newton polytopes of $f$ and $g$ are also equal (see Corollary \ref{R1}). In Section \ref{section-Eggers-tree} we extend the notion of Eggers tree introduced in \cite{Eggers}, to quasi-ordinary settings. Remark that the tree we use here is not exactly the Eggers-Wall tree introduced in \cite{tesis} for the quasi-ordinary situation. The main result of Section \ref{section-Irreducible factors} is Theorem \ref{pack}, where we characterize the irreducible factors of higher derivatives of quasi-ordinary Weierstrass polynomials. Theorem \ref{pack} allows us to give factorizations of higher derivatives, in terms of the Eggers tree, in Section \ref{section-Eggers factorization}. Theorem \ref{dec-red-qo} generalizes the factorization from \cite{Casas} on higher order polars to quasi-ordinary singularities (not necessary irreducible) and also the factorization from \cite{GB-GP} to higher order polars. Theorem \ref{Merle} and Proposition \ref{ppppp} extend the statements of \cite[Theorem 6.2]{Forum} to irreducible quasi-ordinary Weierstrass polynomials. Finally in Section \ref{section-Eggers series} we establish that our results also hold for quasi-ordinary power series. 
\section{Newton polytopes} \label{section-Newton-polytopes} \noindent Let $\alpha=\sum \alpha_{\bf i} \underline{x}^{\bf i}\in S[[\underline{x}]]$ be a non zero formal power series with coefficients in a ring $S$, where $\underline{x}=(x_1,\ldots, x_d)$ and $\underline{x}^{\bf i}=x_1^{i_1}\cdots x_d^{i_d}$, with ${\bf i}=(i_1,\ldots,i_d)$. The {\em Newton polytope} $\Delta(\alpha)\subset \mathbf{R}^d$ of $\alpha$ is the convex hull of the set $\bigcup_{\alpha_{\bf i}\neq 0} {\bf i}+\mathbf{R}^d_{\geq 0}$. By convention the Newton polytope of the zero power series is the empty set. \noindent The Newton polytope of a polynomial $f=\sum_{{\bf i},j}a_{{\bf i},j}\underline{x}^{\bf i}y^j\in S[[\underline{x}]][y]$ is the polytope $\Delta(f)\subset \mathbf{R}^d\times \mathbf{R}$ of $f$ viewed as a power series in $x_1,\ldots, x_d,y$. If $\Gamma$ is a compact face of $\Delta(f)$ then $f|_{\Gamma}:=\sum_{({\bf i},j)\in {\Gamma}}a_{{\bf i},j}\underline{x}^{\bf i}y^j\in S[\underline{x}][y]$ is called the {\em symbolic restriction} of $f$ to $\Gamma$. \\ \noindent We say that a subset of $\mathbf{R}^{d+1}$ is a {\em Newton polytope} if it is the Newton polytope of some polynomial in $S[[\underline{x}]][y]$. \noindent Let ${\bf q}=(q_1,\dots,q_d)\in\mathbf{Q}_{\geq0}^d$ and let $k$ be a positive integer. We define the {\em elementary Newton polytope} \[ \Bigl\{\Teissr{{\bf q}}{k}{3}{1.5}\Bigr\}:= \mbox{convex hull}\;\bigl( \{\, (q_1,\dots,q_d,0), (0,\dots,0,k)\,\} +\mathbf{R}_{\geq0}^{d+1}\bigr)\;. \] \noindent Its {\em inclination} is, by definiton, $\frac{1}{k}{\bf q}$. \noindent We denote by $\Bigl\{\Teissr{\infty}{k}{3}{1.5}\Bigr\}$ the Newton polytope $\Delta(y^k)$, which is the first orthant translated by $(0,\dots,0,k)$. By convention we consider it as an elementary polytope. \begin{Example} The elementary Newton polytope $\left\{\Teissr{(4,2)}{8}{5}{2.5}\right\}$ is \begin{center} \begin{tikzpicture}[scale=2.5] \draw [->](0,0,0) -- (2.2,0,0); \draw[->](0,0,0) -- (0,1.5,0) ; \draw(0,0,0) -- (0,0,-1); \draw[thick] (0,0,0) -- (0,1,0) node[left] {$(0,\!0,\!8)$}; \draw[dashed](0,1,0) -- (0,1,-2); \draw[very thick](0,1,0)-- (2,1,0); \draw[very thick](0,1,0)-- (0,1.5,0); \draw[very thick] (0,1,0) -- (0.5,0,-0.25) node[left] {$(4,\!2,\!0)$}; \draw[very thick](0.5,0,-0.25) -- (2,0,-0.25); \draw[dashed](0.5,0,-0.25) -- (0.5,0,-3); \draw[->][dashed](0,0,-1) -- (0,0,-3); \node[draw,circle,inner sep=2pt,fill=black] at (0,1,0) {}; \node[draw,circle,inner sep=2pt,fill=black] at (0.5,0,-0.25) {}; \end{tikzpicture} \end{center} \end{Example} \noindent A Newton polytope is {\em polygonal} if the maximal dimension of its compact faces is one. \noindent Remember that the Minkowski sum of $A,B\subset \mathbf{R}^{d+1}$ is the set $A+B:=\{a+b\;:\;a\in A,b\in B\}$. If a Newton polytope $\Delta$ has a representation of the type \begin{equation} \label{canonical} \Delta= \sum_{i=1}^r \left\{\Teissr{{\bf q}_i}{k_i}{2}{1}\right\}, \end{equation} \noindent then summing all the elementary Newton polytopes of the same inclination in \eqref{canonical} we obtain a unique representation, up to the order of the terms, called {\em canonical representation} of $\Delta$. If the inclinations can be well-ordered then $\Delta$ is polygonal. \subsection{Newton polytopes and factorizations} \noindent Let $\mathbf{K}$ be a field of characteristic zero. 
We denote by $\mathbf{K}[[x_1^{1/k},\dots, x_d^{1/k}]]$ the ring of fractional power series in $d$ variables where all the exponents are nonnegative rational numbers with denominator $k\in \mathbf{N}\backslash\{0\}$. Put $\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]:= \bigcup_{k\in \mathbf{N}\backslash\{0\}}\mathbf{K}[[x_1^{1/k},\dots, x_d^{1/k}]]$. We will denote by \[ \alpha\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]=\{\alpha w: w\in \mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]\} \] \noindent the ideal of $\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ generated by $\alpha \in \mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$. \noindent A {\em Weierstrass polynomial} is a monic polynomial whose coefficients, apart from the leading one, are non-units of the ring of formal power series. Notice that, according to this definition, the constant polynomial 1 is a Weierstrass polynomial. \noindent The next lemma gives sufficient conditions for the reducibility of Weierstrass polynomials. One of the consequences of this lemma is that a Weierstrass polynomial with a polygonal Newton polytope admits a decomposition into coprime factors such that the Newton polytope of each factor is elementary (see Theorem \ref{Th:decomp}, see also \cite[Theorem 3]{GB-GP}). \begin{Lemma} \label{RS} Let $g=y^m+c_1 y^{m-1}+\cdots + c_m\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial. Assume that there exists ${\bf q} \in \mathbf{Q}^d$ such that $c_i\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]\subseteq \underline{x}^{i {\bf q}}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ for all $1\leq i\leq m$, with equality for some $i=i_0$, $1\leq i_0 <m$, and strict inclusion for $i=m$. Then $g$ has at least two coprime factors. \end{Lemma} \begin{proof} We will apply \cite[Theorem 2.4]{R-S}. Without loss of generality we may assume that $i_0$ is the maximal index $i\in\{1,\ldots,m-1\}$ such that $c_i\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]=\underline{x}^{i {\bf q}}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$. Then the segment $\Gamma$ with endpoints $(0,\dots,0, m)$ and $(i_0{\bf q}, m-i_0)$ is an edge of~$\Delta(g)$. The symbolic restriction of $g$ to $\Gamma$ is the product $g|_{\Gamma}=y^{m-i_0}\cdot \tilde g$, where $\tilde g \in \mathbf{K}[\underline{x}][y]$ is coprime with $y$. The associated polyhedron of $g$, in the sense of Rond-Schober (see \cite[page 4732]{R-S}), is $m{\bf q}+ \mathbf{R}_{\geq0}^{d}$. Hence the polynomial $g$ satisfies the hypotheses of \cite[Theorem 2.4]{R-S} and the lemma follows. \end{proof} \noindent \begin{Remark} \label{geomRS} The assumptions of Lemma~\ref{RS} mean geometrically that the Newton polytope $\Delta(g)$ is included in the elementary polytope $\left\{\Teissr{m{\bf q}}{m}{4}{2}\right\}$, and $\Delta(g)$ has an edge $\Gamma$ whose endpoints are $(0,\dots,0, m)$ and $(i_0{\bf q}, m-i_0)$, for some $1\leq i_0<m$.
The next picture illustrates the situation: \begin{center} \begin{tikzpicture}[scale=2.5] \draw [->](0,0,0) -- (1.5,0,0); \draw[->](0,0,0) -- (0,1,0) ; \draw[->](0,0,0) -- (0,0,1.5); \draw[thick] (0,0.7,0) -- (0.7,0,1); \draw[very thick, dashed, color=blue] (0,0.7,0) -- (0,1.3,0); \draw[very thick, dashed, color=blue] (0,0.7,0) -- (0,0.7,1.3); \draw[very thick, dashed, color=blue] (0,0.7,0) -- (1.3,0.7,0); \draw[very thick, color=blue] (0.385,0.315,0.55) -- (0,0.7,0); \draw[very thick, color=blue] (0.385,0.315,0.55) -- (1,0,1); \draw[very thick, color=blue] (0.385,0.315,0.55) -- (0.7,0,1.4); \draw[very thick, color=blue] (1,0,1) -- (0.7,0,1.4); \draw[very thick, dashed, color=blue ] (0.7,0,1.4) -- (0.7,0,2.55); \draw[very thick, dashed, color=blue] (1,0,1) -- (1.9,0,1); \draw[very thick, dashed, color=blue] (0.385,0.315,0.55) -- (1.5,0.315,0.55); \draw[very thick, dashed, color=blue] (0.385,0.315,0.55) -- (0.385,0.315,2); \draw (0,0.8,0) node[right]{$({\bf 0}, m)$}; \node[right,above] at (0.785,0.315,0.55) {$(i_0{\bf q}, m-i_0)$}; \draw (1,0,1) node[below]{$(m{\bf q},0)$}; \node[draw,circle,inner sep=1.4pt,fill=black] at (0,0.7,0) {}; \node[draw,circle,inner sep=1.4pt,fill=black] at (0.385,0.315,0.55) {}; \node[draw,circle,inner sep=1.4pt,fill=black] at (0.7,0,1) {}; \end{tikzpicture} \end{center} \end{Remark} \begin{Theorem}\label{Th:decomp} Let $f\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial. Assume that $\Delta(f)$ is a polygonal Newton polytope with canonical representation $\sum_{i=1}^r \left\{\Teissr{{\bf q}_i}{k_i}{2}{1}\right\}$. Then $f$ admits a factorization $f_1\cdots f_r$, where $f_i\in \mathbf{K}[[\underline{x}]][y]$ are Weierstrass polynomials, not necessarily irreducible, such that $\Delta(f_i)=\left\{\Teissr{{\bf q}_i}{k_i}{2}{1}\right\}$ for $i=1,\dots,r$. \end{Theorem} \noindent \begin{proof} Let $f=g_1\cdots g_s$ be the factorization of $f$ into irreducible Weierstrass polynomials. Since the Newton polytope of a product is the Minkowski sum of the Newton polytopes of the factors, by hypothesis we get $\Delta(g_j)=\sum_{i=1}^r b_{ij} \left\{\Teissr{{\bf q}_i}{k_i}{2}{1}\right\}$ for some $b_{ij}\in \mathbf{Q}_{\geq 0}$. By Remark \ref{geomRS} $\Delta(g_j)$ is elementary, hence for fixed $j$ only one term of the previous sum is nonzero. On the other hand, for fixed $i$, we get $\sum_{j}b_{ij}=1$. Put $f_i:=\prod g_j$, where the product runs over all $g_j$ such that $b_{ij}\neq 0$. Then $f=f_1\cdots f_r,$ where $\Delta(f_i)=\left\{\Teissr{{\bf q}_i}{k_i}{2}{1}\right\}$ for $i=1,\dots,r$. \end{proof} \begin{Theorem}\label{Th:discriminant} Let $f(y),g(y)\in \mathbf{L}[y]$ be monic polynomials, where $\mathbf{L}$ is a field of characteristic zero. If $g(y)$ is irreducible in the ring $\mathbf{L}[y]$ then the polynomial $R(T)=\mathrm{Res}\,_y(T-f(y),g(y))$, where $\mathrm{Res}\,_y(-,-)$ denotes the resultant, is either irreducible in $\mathbf{L}[T]$ or is a power of an irreducible polynomial. \end{Theorem} \noindent \begin{proof} Let $y_1,\dots, y_m$ be the roots of $g(y)$ in the algebraic closure of the field~$\mathbf{L}$. Then $R(T)=\prod_{i=1}^m(T-f(y_i))$. Since $\mathbf{L}$ is a field of characteristic zero and $g(y)$ is irreducible, the Galois group of the field extension $\mathbf{L}\hookrightarrow \mathbf{L}(y_1,\dots, y_m)$ acts transitively on the set $\{y_1,\dots, y_m\}$. It follows that this group acts transitively on the set $\{f(y_1),\dots, f(y_m)\}$. 
Hence if $R=R_1\cdots R_s$ is a factorization of $R=R(T)$ into irreducible monic polynomials in the ring $\mathbf{L}[T]$ then $R_i=R_j$ for $i\neq j$. \end{proof} \noindent Next corollary will be used in the proof of the main result of the decompositions of higher polars, which is Theorem \ref{pack}. \begin{Corollary}\label{irred} Let $f(y)$, $g(y) \in \mathbf{K}[[\underline{x}]][y]$ be Weierstrass polynomials. If the resultant $\mathrm{Res}\,_y(g(y),f(y)-T) \in \mathbf{K}[[\underline{x}]][T]$ satisfies the assumptions of Lemma~\ref{RS}, then $g(y)$ is not irreducible in the ring $\mathbf{K}[[\underline{x}]][y]$. \end{Corollary} \noindent \begin{proof} By Lemma \ref{RS} the polynomial $R(T)$ has at least two coprime factors. By Theorem \ref{Th:discriminant}, $g(y)$, considered as a polynomial in $\mathbf{K}((\underline{x}))[y]$, is not irreducible, thus by Gauss Lemma it is not irreducible as a polynomial in $\mathbf{K}[[\underline{x}]][y]$. \end{proof} \begin{Remark} \label{Beata} Beata Hejmej in \cite{Beata} generalizes Theorem \ref{Th:discriminant} to polynomials with coefficients in a field of any characteristic. Hence the results of this section hold for fields of arbitrary characteristic. \end{Remark} \section{Kuo-Lu tree of a quasi-ordinary polynomial} \label{section-Kuo-Lu-tree} \noindent From now on $\mathbf{K}$ will be an algebraically closed field of characteristic zero. Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial of degree $n$. Such a polynomial is \emph{quasi-ordinary} if its $y$-discriminant equals $\underline{x}^{\bf i}u(\underline{x})$, where $u(\underline{x})$ is a unit in $\mathbf{K}[[\underline{x}]]$ and ${\mathbf i}\in \mathbf{N}^{d}$. After Jung-Abhyankar theorem (see \cite[Theorem 1.3]{Parusinski-Rond}) the roots of $f$ are in the ring $\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ of fractional power series and we may factorize $f(y)$ as $\prod_{i=1}^n(y-\alpha_i)$, where $\alpha_i$ is zero or a fractional power series of nonnegative order. Put $\mathop\mathrm{Zer} f:=\{\alpha_i\,:\;1\leq i\leq n\}$. Since the differences of roots divide the discriminant, for $i\neq j$ we have \begin{equation} \label{contact} \alpha_i-\alpha_j=\underline{x}^{{\bf q}_{ij}}v_{ij}(\underline{x}),\;\;\; \hbox{\rm for some }{\bf q}_{ij} \in \mathbf{Q}^d \;\hbox{\rm and } v_{ij}(0)\neq0. \end{equation} \noindent The {\em contact} of $\alpha_i$ and $\alpha_j$ is by definition $O(\alpha_i,\alpha_j):={\bf q}_{ij}$. By convention $O(\alpha_i,\alpha_i)=+\infty$. \noindent We introduce in $\mathbf{Q}_{\geq 0}^d$ the partial order: ${\bf q} \leq {\bf q}'$ if ${\bf q}'-{\bf q} \in \mathbf{Q}_{\geq 0}^d$. By convention $+\infty$ is bigger than any element of $\mathbf{Q}_{\geq 0}^d$. \noindent After \cite[Lemma 4.7]{B-M}, for every $\alpha_i,\alpha_j,\alpha_k\in \mathop\mathrm{Zer} f$ one has $O(\alpha_i,\alpha_k)\leq O(\alpha_j,\alpha_k)$ or $O(\alpha_i,\alpha_k)\geq O(\alpha_j,\alpha_k)$. \noindent Moreover, we have the {\em strong triangle inequality:} \begin{equation} \label{STI} O(\alpha_i,\alpha_j)\geq \min \{O(\alpha_i,\alpha_k), O(\alpha_j,\alpha_k)\}. \tag{STI} \end{equation} \noindent In general, we say that the {\em contact} between the fractional power series $\alpha$ and $\beta$ is {\em well-defined} if and only if $\alpha-\beta=\underline{x}^{{\bf q}}w(\underline{x})$, for some ${\bf q} \in \mathbf{Q}^d$ and $w\in \mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ such that $w(0)\neq0$. In such a case we put $O(\alpha,\beta)={\bf q}$. 
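\noindent Before constructing the tree, let us note that the quasi-ordinarity condition is easy to test in concrete examples with a computer algebra system. The following short SymPy sketch (an illustrative aside, using the polynomial of Example \ref{ex:KL} below) checks that the $y$-discriminant is a monomial times a unit.
\begin{verbatim}
# Quasi-ordinarity check for f = (y^2 - x1^3 x2^2)(y - x1^5 x2^2):
# the y-discriminant should equal a monomial in x1, x2 times a unit.
import sympy as sp

x1, x2, y = sp.symbols('x1 x2 y')
f = (y**2 - x1**3 * x2**2) * (y - x1**5 * x2**2)

disc = sp.factor(sp.discriminant(sp.expand(f), y))
print(disc)
# expected: 4*x1**9*x2**6*(x1**7*x2**2 - 1)**2
# The factor (x1**7*x2**2 - 1)**2 is a unit in K[[x1, x2]],
# hence f is quasi-ordinary.
\end{verbatim}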
\noindent Now we construct the {\em Kuo-Lu tree} of a quasi-ordinary Weierstrass polynomial~$f$. Given ${\bf q} \in \mathbf{Q}^d_{\geq 0}$ we put $\alpha_i \equiv \alpha_j$ mod ${\bf q}^{+}$ if $O(\alpha_i,\alpha_j)>{\bf q},$ for $\alpha_i,\alpha_j\in \mathop\mathrm{Zer} f$. \noindent Let ${\bf h}_0$ be the minimal contact between the elements of $\mathop\mathrm{Zer} f$. We represent $\mathop\mathrm{Zer} f$ as a horizontal bar $B_0$ and call $h(B_0)$ the {\em height} of $B_0$. The equivalence relation $\equiv$ mod $h(B_0)^+$ divides $B_0=\mathop\mathrm{Zer} f$ into cosets $B_1, \ldots, B_r$. We draw $r$ vertical segments from the bar $B_0$ and at the end of the $j$th vertical segment we draw a horizontal bar which represents $B_j$. The bar $B_j$ is called a {\em postbar} of $B_0$ and in such a situation we write $B_0 \perp B_j$. We repeat this construction recursively for every $B_j$ with at least two elements. The set of bars ordered by the inclusion relation is a tree. Following \cite{K-L} we call this tree the {\em Kuo-Lu tree} of $f$ and denote it $T(f)$. The bar $B_0$ of minimal height is called the {\em root} of $T(f)$. For every bar $B$ of $T(f)$ there exists a unique sequence $B_0\perp B' \perp B'' \perp \cdots \perp B$, starting in $B_0$ and ending in $B$. \noindent In the above construction, we do not draw the bars $\{\alpha_i\}\subset \mathop\mathrm{Zer} f$. These bars are the {\em leaves} of $T(f)$ and they are the only bars of infinite height. \noindent Let $B,B'\in T(f)$ be such that $B\perp B'$. All fractional power series belonging to $B'$ have the same term with the exponent $h(B)$. Let $c$ be the coefficient of such term. We say that $B'$ {\em is supported at the point} $c$ on $B$ and we denote it by $B\perp_c B'$. Observe that different postbars of $B$ are supported at different points. \noindent This construction is adapted from \cite{K-L} to quasi-ordinary case. \begin{Example}\label{ex:KL} Let $f=f_1f_2\in \mathbf{C}[[x_{1},x_{2}]][y]$, where $f_1=y^2-x_1^{3}x_2^2$ and $f_2=y-x_1^{5}x_2^2$. Observe that $f$ is quasi-ordinary since its $y$-discriminant equals $4x_{1}^{9}x_{2}^{6}(-1+x_{1}^{7}x_{2}^{2})^{2}$. The roots of $f$ are $\alpha=x_1^{3/2}x_2$, $\beta=-x_1^{3/2}x_2$ and $\gamma=x_1^{5}x_2^2$. The Kuo-Lu tree of $f$ is: \begin{center} \begin{tikzpicture}[scale=1] \draw [-, thick](0,0) -- (0,1); \draw[-, thick](-1,1) -- (1,1); \draw[-, thick](-1,1) -- (-1,2); \draw[-, thick](1,1) -- (1,2); \draw[-, thick](-0.2,1) -- (-0.2,2); \node[right] at (1,1) {$\left(\frac{3}{2},1\right)$}; \node[above] at (-1,2) {$\alpha$}; \node[above] at (-0.2,2) {$\beta$}; \node[above] at (1,2) {$\gamma$}; \node[below] at (0,0) {$T(f)$}; \end{tikzpicture} \end{center} \end{Example} \noindent In the above picture we draw also a vertical segment supporting $T(f)$ called by Kuo and Lu in \cite{K-L} the {\em main trunk} of the tree. \section{Compatibility with pseudo-balls} \label{section-Compatibility with pseudo-balls} Let $\alpha\in\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ be a fractional power series and ${\bf h}\in \mathbf{Q}^d_{\geq 0}$. The {\em pseudo-ball} centered in $\alpha$ and of height ${\bf h}$ is the set $\alpha+\underline{x}^{\bf h}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$. The {\em pseudo-ball centered in $\alpha$ of infinite height} is the set $\{\alpha\}$. \noindent Let $f$ be a quasi-ordinary polynomial $f$. Consider the bar $B=\{\alpha_{i_1},\dots,\alpha_{i_s} \}$ with finite height ${\bf h}$ of the Kuo-Lu tree $T(f)$. 
Set $\tilde B:=\alpha + \underline{x}^{\bf h}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$, where $\alpha \in B$. As $\alpha_{i_k}-\alpha_{i_l}\in \underline{x}^{\bf h}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ for $1\leq k\leq l \leq s$ the pseudo-ball $\tilde B$ is independent of the choice of $\alpha$. If $B=\{\alpha_i \}$ is a bar of infinite height then we put $\tilde B= B$. The mapping $B\to \tilde B$ is a one-to-one correspondence between $T(f)$ and the set of pseudo-balls $\tilde T(f):=\{\alpha_i+(\alpha_i-\alpha_j)\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]: \alpha_i, \alpha_j\in\mathop\mathrm{Zer} f\}$. For the purposes of this article it is easier to deal with pseudo-balls, hence from now on, we shall identify the elements of $T(f)$ with corresponding pseudo-balls. Such pseudo-balls will be called {\em quasi-ordinary pseudo-balls}. \noindent Let $B=\alpha + \underline{x}^{h(B)}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ be a quasi-ordinary pseudo-ball of finite height. Every $\gamma\in B$ has a form $\gamma=\lambda_B(\underline{x})+c_{\gamma}\underline{x}^{h(B)}+\cdots$, where $\lambda_B(\underline{x})$ is obtained from any $\beta\in B$ by omitting all the terms of order bigger than or equal to $h(B)$ and ellipsis means terms of higher order. We call the number $c_{\gamma}$ the {\em leading coefficient of } $\gamma$ {\em with respect to} $B$ and denote it $\mbox{\rm lc}_B(\gamma)$. Remark that $c_{\gamma}$ can be zero. \noindent Let $\mathbf{L}$ be the field of fractions of $\mathbf{K}[[\underline{x}]]$. It follows from \cite[Remark 2.3]{GP} that any truncation of a root of a quasi-ordinary polynomial is a root of a quasi-ordinary polynomial. Hence the field extensions $\mathbf{L} \hookrightarrow \mathbf{L}(\lambda_B(\underline{x})) \hookrightarrow \mathbf{L}(\lambda_B(\underline{x}), \underline{x}^{h(B)}) $ are algebraic and we can associate with $B$ two numbers: \label{pagen} \begin{itemize} \item the degree of the field extension $\mathbf{L} \hookrightarrow \mathbf{L}(\lambda_B(\underline{x}))$ that we will denote $N(B)$, \item the degree of the field extension $\mathbf{L}(\lambda_B(\underline{x})) \hookrightarrow \mathbf{L}(\lambda_B(\underline{x}), \underline{x}^{h(B)})$ that we will denote $n(B)$. \end{itemize} \noindent In this section we introduce the notion of \emph {compatibility} of a Weierstass polynomial $g$ with a pseudo-ball $B$. We define a polynomial $G_B(z)$ which will play an important role in the sequel. \begin{Definition} \label{compatible} Let $g(y)\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial and $B$ be a pseudo-ball of finite height. If \begin{equation}\label{eq:comp} g(\lambda_B(\underline{x}) + z\underline{x}^{h(B)})=G_B(z)\underline{x}^{q(g,B)}+\cdots \end{equation} for some $G_B(z)\in\mathbf{K}[z]\setminus\{0\}$ and some exponent $q(g,B)\in (\mathbf Q_{\geq 0})^d$ then we will say that $g$~is compatible with $B$. In \eqref{eq:comp} $\cdots$ means terms of higher order. The polynomial $G_B(z)$ will be called the $B$-{characteristic polynomial} of $g$. \end{Definition} \begin{Example} Return to Example \ref{ex:KL}. Let $B=\alpha+x_{1}^{3/2}x_{2}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ be a pseudo-ball of $T(f)$ of height $h(B)=\left(\frac{3}{2},1\right)$. 
Observe that \[ f(\lambda_{B}+z\underline{x}^{h(B)})=f(z\underline{x}^{\left(\frac{3}{2},1\right)})=z(z^{2}-1)x_{1}^{9/2}x_{2}^{3}+\cdots\] Hence the polynomial $f$ is compatible with the pseudo-ball $B$ and its $B$-characteristic polynomial is $F_B(z)=z(z^2-1)$, but for example the polynomial $g(y)=y-x_{1}-x_{2}$ is not compatible with $B$. \end{Example} \noindent Our next goal is to prove in Corollary \ref{CCKiel} that if a Weierstrass polynomial is compatible with a pseudo-ball then any factor of it is also compatible. \begin{Lemma}\label{L:compatibility} Let $g(y)\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial and let $B$ be a pseudo-ball of finite height. Consider $g(\lambda_B(\underline{x})+z\underline{x}^{h(B)})$ as a fractional power series $\tilde g(\underline{x})$ with coefficients in $\mathbf{K}[z]$. Then $g(y)$ is compatible with $B$ if and only if the Newton polytope of $\tilde g(\underline{x})$ equals the Newton polytope of a monomial. \end{Lemma} \noindent \begin{proof} If $g$ is compatible with $B$ then by (\ref{eq:comp}) we get $\Delta\bigl(\tilde g(\underline{x})\bigr)=\Delta\bigl(\underline{x}^{q(g,B)}\bigr)$. Conversely, suppose that the Newton polytope of $\tilde g(\underline{x})$ equals the Newton polytope of the monomial $\underline{x}^{\bf q}$. Then $\tilde g(\underline{x})$ has a form $\underline{x}^{\bf q}\sum_{i=0}^n a_i(\underline{x})z^{n-i}$, where at least one of the values $a_i(0)$ is nonzero. Hence the $B$-characteristic polynomial of $g$ is $G_B(z)= \sum_{i=0}^n a_i(0)z^{n-i}$. \end{proof} \begin{Remark} \label{tt2} From the proof of Lemma \ref{L:compatibility} we get that $\tilde g(\underline{x})$ has the form $G_B(z)\underline{x}^{q(g,B)}+ \sum_{h>q(g,B)}a_h(z)\underline{x}^h$, where $a_h(z)\in\mathbf{K}[z]$. \end{Remark} \begin{Corollary} \label{CCKiel} Let $g\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial compatible with a pseudo-ball $B$. Then any factor of $g$ is compatible with $B$. \end{Corollary} \noindent \begin{proof} The Newton polytope of the product is the Minkowki sum of Newton polytopes of the factors. Hence, if $\Delta(\tilde g)=\Delta(\underline{x}^{\bf q})$ and $\tilde g=\tilde g_1\tilde g_2$ then $\Delta(\tilde g_i)$ have the form $\Delta(\underline{x}^{{\bf q}_i})$ for some ${\bf q}_1, {\bf q}_2$ such that ${\bf q}={\bf q}_1+{\bf q}_2.$ \end{proof} \noindent Next lemma generalizes to $d$ variables \cite[Lemma 3.1]{Forum}. \begin{Lemma}\label{derivatives} Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial of degree $n$ compatible with the pseudo-ball $B$. Then for every $k\in\{1,\dots, \deg F_B(z)\}$ the Weierstrass polynomial $g(y)=\frac{(n-k)!}{n!}\frac{d^k}{dy^k}f(y)$ is also compatible with~$B$ and its $B$-characteristic polynomial is $G_B(z)=\frac{(n-k)!}{n!}\frac{d^k}{dz^k}F_B(z)$. \end{Lemma} \noindent \begin{proof} \noindent Differentiating identity $f(\lambda_B(\underline{x}) + z\underline{x}^{h(B)})=F_B(z)\underline{x}^{q(f,B)}+\cdots$ with respect to $z$ we get $f'(\lambda_B(\underline{x})+z^{h(B)})\underline{x}^{h(B)}=F'_B(z)\underline{x}^{q(f,B)}+\cdots$. Hence $f'(\lambda_B(\underline{x})+z\underline{x}^{h(B)})=F'_B(z)\underline{x}^{q(f,B)-h(B)}+\cdots$, which proves the lemma for $k=1$. The proof for higher derivatives runs by induction on $k$. \end{proof} \noindent Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial of degree $n$. 
The Weierstrass polynomial $\frac{(n-k)!}{n!}\frac{d^k}{dy^k}f(y)$ of Lemma \ref{derivatives} will be called the {\it normalized $k$th derivative} of the Weierstrass polynomial $f(y)\in \mathbf{K}[[\underline{x}]][y]$ and we will denote it by $f^{(k)}(y)$. The variety of equation $f^{(k)}=0$ is called the $k${\em th polar} of $f=0$. Since the normalized $n$th derivative of $f$ is constant, in the rest of the paper we consider normalized $k$th derivatives of $f$ for $1\leq k<\deg f$. \begin{Lemma}\label{subst} Let $f(y)\in\mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial and let $B$ be a pseudo-ball of finite height. \begin{enumerate} \item If $f$ is compatible with $B$, then for any $\gamma \in B$ we have \begin{equation} \label{eq:F,q1xxx} f(\gamma)=F_B(\mbox{\rm lc}_{B}\gamma)\underline{x}^{q(f,B)}+\cdots\, \end{equation} \item If $f(y)=\prod_{i=1}^n(y-\alpha_i)$ and we assume that one of the following holds: $\underline{x}$ is a single variable and $B$ is arbitrary or $f$ is quasi-ordinary and $B\in \tilde{T}(f)$ then $f$ is compatible with $B$ and we have \begin{equation} \label{eq:F,q2} F_B(z)=\mbox{const}\prod_{i:\alpha_i\in B}(z-\mbox{\rm lc}_B\alpha_i) \end{equation} and \begin{equation} \label{eq:F,q3} q(f,B)=\sum_{i=1}^n \min(O(\lambda_B,\alpha_i),h(B)). \end{equation} \end{enumerate} \end{Lemma} \noindent \begin{proof} Since $\gamma \in B$ we can write $\gamma=\lambda_B(\underline{x})+z\underline{x}^{h(B)}$, where $z=\mbox{\rm lc}_B(\gamma)+\cdots$. By Remark \ref{tt2} we have $f(\gamma)=f(\lambda_B(\underline{x})+z\underline{x}^{h(B)})=F_B(z)\underline{x}^{q(f,B)}+\cdots=F_B(\mbox{\rm lc}_{B}\gamma)\underline{x}^{q(f,B)}+\cdots$. This proves (\ref{eq:F,q1xxx}). \noindent Suppose $\gamma=\lambda_B(x)+zx^h,$ where $z$ is a constant. We have $f(\gamma)=\prod_{i=1}^n(\gamma-\alpha_i)$. In order to prove (\ref{eq:F,q2}) and (\ref{eq:F,q3}), it is enough to compute the initial term of every factor $\gamma-\alpha_i$. If $\alpha_i\in B$ then the initial term of $\gamma-\alpha_i$ equals $(\mbox{\rm lc}_B\gamma-\mbox{\rm lc}_{B}\alpha_i)\underline{x}^{h(B)}$. Otherwise the initial terms of $\gamma-\alpha_i$ and $\lambda_B-\alpha_i$ are equal. We finish the proof multiplying the initial terms. \end{proof} \begin{Corollary} Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be a quasi-ordinary Weierstrass polynomial. Then every factor of $f(y)$ is compatible with all bars $B\in T(f)$ of finite height. \end{Corollary} \begin{Lemma} \label{in-chain} Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be a quasi-ordinary Weierstrass polynomial, $p(y)$ be a factor of $f(y)$ and $B,B' $ be bars of finite heights in $T(f)$ such that $B\perp B'$. Then \[ q(p,B')-q(p,B)=\sharp ( \mathop\mathrm{Zer} p \cap B')[h(B')-h(B)]. \] \end{Lemma} \noindent \begin{proof} Put $p(y)=\prod_{\alpha \in \mathop\mathrm{Zer} p}(y-\alpha)$. Let $\gamma \in B, \gamma' \in B'$ be such that $O(\gamma, \alpha)=h(B)$ for all $\alpha \in B\cap \mathop\mathrm{Zer} p$ and $\mathrm{cont}(\gamma', \alpha)=h(B')$ for all $\alpha \in B'\cap \mathop\mathrm{Zer} p$. By the STI we get $O (\gamma',\alpha)=O(\gamma,\alpha)$ for any $\alpha \in \mathop\mathrm{Zer} p\backslash B'$. If $\alpha \in \mathop\mathrm{Zer} p \cap B'$ then $O(\gamma,\alpha)=h(B)$ and $O(\gamma',\alpha)=h(B')$. Hence \begin{eqnarray*} q(p,B')-q(p,B)&=&\sum_{\alpha \in \mathop\mathrm{Zer} p}O (\gamma',\alpha)-\sum_{\alpha \in \mathop\mathrm{Zer} p}O (\gamma,\alpha)\\ &=&\sharp (\mathop\mathrm{Zer} p \cap B')[h(B')-h(B)]. 
\end{eqnarray*} \end{proof} \noindent Lemma \ref{in-chain} is similar in spirit to \cite[Lemma 2.7]{Hungarica}. \begin{Lemma} \label{power} Let $B$ be a quasi-ordinary pseudo-ball and let $g(y)\in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial compatible with $B$. Then \begin{enumerate} \item $G_{B}(z)=z^k\cdot H(z^{n(B)})$, for some $k\in \mathbf{N}$ and $H(z)\in \mathbf{K}[z]$. \item If $g$ is irreducible and quasi-ordinary then $G_{B}(z)=az^{k}$ or $G_{B}(z)=a(z^{n(B)}-c)^{l}$, for some non-zero $a,c\in \mathbf{K}$ and some $l\in \mathbf{N}$. \end{enumerate} \end{Lemma} \noindent \begin{proof} Let $\mathbf{L}$ be the field of quotients of $\mathbf{K}[[\underline{x}]]$. By \cite[Lemma 5.7]{Lipman} and \cite[Remark 2.7]{GP} the algebraic extension $\mathbf{L}(\lambda_B(\underline{x}))\hookrightarrow \mathbf{L}(\lambda_B(\underline{x}),\underline{x}^{h(B)})$ is cyclic. Hence the generator $\varphi$ of the group $Gal(\mathbf{L}(\lambda_B(\underline{x})\hookrightarrow \mathbf{L}(\lambda_B(\underline{x}),\underline{x}^{h(B)})$ acts as follows: $\varphi(\lambda_B(\underline{x}))=\lambda_B(\underline{x})$ and $\varphi(\underline{x}^{h(B)})=\omega \underline{x}^{h(B)},$ where $\omega$ is a primitive $n(B)$th root of the unity. Applying $\varphi$ to~(\ref{eq:comp}) we get \begin{equation} \label{KP1} g(\lambda_B(\underline{x})+z \omega x^{h(B)})=G_B(z)\omega^kx^{q(g,B)}+\cdots \end{equation} for some $0\leq k < n(B)$. Substituting $\omega z$ for $z$ in (\ref{eq:comp}) and comparing with (\ref{KP1}) we get $G_B(z)\omega^k=G_B(\omega z)$. Multiplying this equality by $(\omega z)^{n(B)-k}$ and putting $W(z):=z^{n(B)-k}G_B(z)$ we obtain $W(z)= W(\omega z).$ This implies that $W(z)=\overline W(z^{n(B)})$, for some $\overline W(z)\in \mathbf{K}[z]$. We finish the proof putting $H(z^{n(B)})=z^{-n(B)}\overline W(z^{n(B)})$. This proves the first part of the lemma. \noindent Suppose now that $g$ is irreducible and quasi-ordinary. Let $\gamma=\lambda_B(\underline{x})+ cx^{h(B)}+\cdots\in B\cap \mathop\mathrm{Zer} g$. Since the extension $ \mathbf{L}(\lambda_B(\underline{x}))\hookrightarrow \mathbf{L}(\lambda_B(\underline{x}),\underline{x}^{h(B)})\hookrightarrow \mathbf{L}(\gamma)$ is Galois, any other root of $g$ belonging to $B$ has the form $\lambda_B(\underline{x})+ \omega^{i} cx^{h(B)}+\cdots$, for some $0\leq i < n(B)$. Using the first part of the lemma and the equality \eqref {eq:F,q2} we complete the proof.\end{proof} \section{Conjugate pseudo-balls} \label{section-Conjugate-pseudo-balls} \noindent In this section we define an equivalence relation between pseudo-balls called {\em conjugacy relation}. This will allow us to introduce, in Section \ref{section-Eggers-tree}, the notion of the Eggers tree of a quasi-ordinary Weierstrass polynomial. \noindent Let $\mathbf{L}$ be the field of fractions of $\mathbf{K}[[\underline{x}]]$ and $\mathbf{M}$ be the field of fractions of $\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$. \begin{Lemma} \label{propp:1} Let $\varphi$ be an $\mathbf{L}$-automorphism of $\mathbf{M}$. 
Then \begin{enumerate} \item For any ${\bf q}\in \mathbf{Q}^d$ there exists a root $\omega$ of the unity such that $\varphi(\underline{x}^{\bf q})=\omega \cdot \underline{x}^{\bf q}$, \item $\varphi(\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]) = \mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$, \item If $u$ is a unit of the ring $\in \mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ and ${\bf q}\in (\mathbf{Q}_{\geq 0})^d$ then $\varphi(u\cdot \underline{x}^{\bf q})=\tilde u \cdot \underline{x}^{\bf q}$ for some unit $\tilde u\in \mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$. \end{enumerate} \end{Lemma} \noindent \begin{proof} \noindent Let $k$ be a positive integer. Observe that $x_i=\varphi(x_i)=\varphi\bigl((x_i^{1/k})^k\bigr)=\varphi(x_i^{1/k})^k$. Hence $\varphi(x_i^{1/k})=c\cdot x_i^{1/k}$ for some $c\in \mathbf{K}\backslash\{0\}$ such that $c^k=1$. It follows that for any ${\bf q}\in\mathbf{Q}^d$ there exists $\omega\in \mathbf{K}$ such that $\varphi(\underline{x}^{{\bf q}})=\omega\underline{x}^{\bf q}$ and $\omega^m=1$ for some positive integer $m$. \noindent Every element of the ring $\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$ can be represented as a finite sum $\sum_{\bf q} a_{\bf q} \underline {x}^{\bf q}$ where ${\bf q}=(q_1,\dots, q_d) \in(\mathbf{Q}_{\geq 0})^d$ ($0\leq q_i<1$) and $a_{\bf q }\in \mathbf{K}[[\underline{x}]]$. This together with~{\em 1}. proves items~{\em 2}.\ and~{\em 3}.\ of the lemma. \end{proof} \noindent Let $B$, $B'$ be pseudo-balls. We say that $B$ and $B'$ are {\em conjugate} if there exists an $\mathbf{L}$-automorphism $\varphi$ of $\mathbf{M}$ such that $B'=\varphi(B)$. The conjugacy of pseudo-balls is an equivalence relation. It follows from Lemma~\ref{propp:1} that conjugate pseudo-balls have the same height. Moreover two quasi-ordinary pseudo-balls $B$ and $B'$ of the same height are conjugate if any irreducible quasi-ordinary polynomial which has one of its roots in $B$ has another root in $B'$ (in this way conjugate bars were defined in \cite[Definition 6.1]{K-P}). If $B'=\varphi(B)$ then $\lambda_{B'}=\varphi(\lambda_{B})$. The converse is also true; if $h(B)=h(B')$ and there exists an $\mathbf{L}$-automorphism $\varphi$ of $\mathbf{M}$ such that $\lambda_{B'}=\varphi(\lambda_{B})$ then $B$ and $B'$ are conjugate. \label{card[B]} It follows from the above that the number of pseudo-balls conjugate with $B$ is equal to the degree of the minimal polynomial of $\lambda_B$, which is the degree $N(B)$ of the field extension $\mathbf{L} \hookrightarrow \mathbf{L}(\lambda_B(\underline{x}))$. \begin{Lemma} \label{LL:1} Let $B,B'$ be quasi-ordinary conjugate pseudo-balls. If $p(y)\in \mathbf{K}[[\underline{x}]][y]$ is a Weierstrass polynomial compatible with $B$ then \begin{enumerate} \item $p(y)$ is compatible with $B'$. \item $q(p,B)=q(p,B')$. \item The characteristic polynomials $P_{B'}(z)$ and $P_{B}(z)$ of $p(y)$ verify the equality $P_{B'}(z)=\theta P_B(\omega z)$, for some roots of the unity $\theta$ and $\omega$. \end{enumerate} \end{Lemma} \noindent \begin{proof} Let $\mathbf{L}$ be the field of quotients of $\mathbf{K}[[\underline{x}]]$ and let $\varphi$ be a $\mathbf{L}$-automorphism of $\mathbf{M}$ such that $\varphi(B)=B'$. Then $\varphi(\lambda_B)=\lambda_{B'}$. By Lemma \ref{propp:1} we have $\varphi(\underline{x}^{h(B)})=\omega^{-1} \underline{x}^{h(B)}$ and $\varphi(\underline{x}^{q(p,B)})=\theta \underline{x}^{q(p,B)}$ for some roots of the unity $\theta$ and $\omega$. 
Applying $\varphi$ to~(\ref{eq:comp}), with $g$ replaced by $p$, we get \[ p(\lambda_{B'}+z \omega^{-1} \underline{x}^{h(B)})=P_B(z)\theta \underline{x}^{q(p,B)}+\cdots \] \noindent This gives $q(p,B)=q(p,B')$ and $P_{B'}(\omega^{-1} z)=\theta P_B(z)$. \end{proof} \section{Kuo-Lu Lemma for higher derivatives} \label{section-Kuo-Lu Lemma} \noindent Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be a quasi-ordinary Weierstrass polynomial. We begin with combinatorial results concerning the Kuo-Lu tree $T(f)$. Remember that we identify any bar of $T(f)$ with the corresponding quasi-ordinary pseudo-ball. At the end of the section we apply these results to Newton-Puiseux roots of higher derivatives of $f(y)$. \noindent Take an integer $k$ such that $1\leq k\leq \deg f$. With every bar $B$ of $T(f)$ we a\-sso\-ciate the numbers: \begin{itemize} \item $m(B)$, which is the number of roots of $f(y)$ which belong to $B$, \item $n_k(B)=\max\{m(B)-k,0\}$, and \item $t_k(B)=n_k(B)-\sum_{B\perp B'}n_k(B')$. \end{itemize} \begin{Remark} \label{t=0} \noindent For $1\leq k < m(B)$ we have $n_k(B)>0$ and $t_k(B)>0$, while for $m(B)\leq k \leq \deg f$ we have $n_k(B) =t_k(B)=0$. \end{Remark} \noindent We denote by $T_k(f)$ the sub-tree of $T(f)$ consisting of the bars $B\in T(f)$ such that $m(B)\geq k$. \noindent Let $F\in\mathbf{K}[z]$ be a nonconstant polynomial and let $F^{(k)}$ denote the $k$th derivative of $F$. \begin{Definition} \noindent We will say that $F$ is $k$-{\em regular} if one of the following conditions holds: \begin{enumerate} \item $F^{(k)}$ is zero, or \item $F^{(k)}$ is nonzero and no root of $F$ of multiplicity $\leq k$ is a root of $F^{(k)}$. \end{enumerate} \end{Definition} \noindent Recall that common roots of a polynomial $F$ and its first derivative are multiple roots of $F$. Hence any polynomial is 1-regular. \label{page:F irr} \noindent In general it is not easy to verify the $k$-regularity property. In this paper, polynomials of the form \begin{equation} \label{eq:F irr} F(z)=(z^{n}-c)^{l}\in \mathbf{K}[z], \end{equation} play an important role. Their $k$-regularity, for any $k$, is a consequence of Lemma \ref{AL}. \begin{Remark} \label{r: +-} Let $F(z)=\mbox{const} \prod_{i=1}^r (z-z_i)^{m_i}$, where the $z_i$ are pairwise different, $m_i\geq k$ for $1\leq i \leq s$ and $m_i < k$ for $s < i \leq r$. Each differentiation decreases the multiplicity of every multiple root by one, so after differentiating $k$ times every root $z_i$ with $m_i\geq k$ has multiplicity exactly $m_i-k$ in $F^{(k)}$. Hence, putting ${\cal F}^{\oplus}(z)=\prod_{i=1}^s (z-z_i)^{m_i-k}$, we obtain the decomposition \begin{equation}\label{k-derivative} F^{(k)}(z)={\cal F}^{\oplus}(z){\cal F}^{\ominus}(z) \end{equation} into two coprime polynomials. A polynomial $F$ is $k$-regular if and only if $F$ and ${\cal F}^{\ominus}$ do not have common roots. \end{Remark} \begin{Definition} Let $f\in \mathbf{K}[[\underline x]][y]$ be a quasi-ordinary Weierstrass polynomial. We say that $f$ is Kuo-Lu $k$-regular if for every $B\in T(f)$ of finite height the polynomial $F_B(z)$ is $k$-regular. \end{Definition} \noindent We finish this subsection with some results for Weierstrass polynomials with coefficients in the ring of formal power series in one variable. \noindent Let $f(y)\in\mathbf{K}[[x]][y]$ be a square-free Weierstrass polynomial. Fix $B\in T_k(f)$ and assume that $ \{B_1,\dots,B_s\}$ is the set of post-bars of $B$ in $T_k(f)$. Denote $B^{\circ}=B\setminus(B_1\cup\dots\cup B_s)$. \begin{Theorem}\label{higher-Kuo-Lu} Let $f(y)\in\mathbf{K}[[x]][y]$ be a square-free Weierstrass polynomial over the ring of formal power series in one variable.
Let $f(y)= \prod_{i=1}^n(y-\alpha_i)$ and $f^{(k)}(y)=\prod_{j=1}^{n-k}(y-\beta_j)$ be the Newton-Puiseux factorizations of $f$ and $f^{(k)}$. Then \begin{itemize} \item[(i)] for every $B\in T_k(f)$ the set $\{j: \beta_j\in B\}$ has $n_k(B)$ elements, \item[(ii)] for every $B\in T_k(f)$ the set $\{j: \beta_j\in B^{\circ}\}$ has $t_k(B)$ elements, \item[(iii)] for every $\beta_j$ there exists a unique $B\in T_k(f)$ such that $\beta_j\in B^{\circ}$. \item[(iv)] Let $B\in T_k(f)$. If the polynomial $F_B(z)$ is $k$-regular then for every $\alpha_i\in B$, $\beta_j\in B^{\circ}$ one has $O(\alpha_i,\beta_j)=h(B)$. Otherwise there exist $\alpha_i\in B$, $\beta_j\in B^{\circ}$ such that $O(\alpha_i,\beta_j)>h(B)$. \end{itemize} \end{Theorem} \noindent \begin{proof} \noindent \textit{Proof of (i).} Suppose first that $B\in T_k(f)$ has finite height. Then, by Lemma \ref{subst}, $F_B(z) = \mbox{const} \prod_{i:\alpha_i\in B} (z-\mbox{\rm lc}_B(\alpha_i))$. By equality (\ref{eq:F,q2}) applied to $f^{(k)}$, combined with Lemma \ref{derivatives}, we get $F_B^{(k)}(z) = \mbox{const} \prod_{j:\beta_j\in B} (z-\mbox{\rm lc}_B(\beta_j))$. Hence, the set $\{j:\beta_j\in B\}$ has $\deg F_B-k = n_k(B)$ elements. \noindent If the height of $B$ is infinite then $B=\{\alpha_i \}$ for exactly one Newton-Puiseux root $\alpha_i$ of $f(y)$. Hence, for $k=1$ we have $n_1(B)=0$ and $f'(y)$ has no roots in $B$, while for $k>1$ we have $B\notin T_k(f)$. \noindent \textit{Proof of (ii).} It is enough to count the elements of the set $\{j:\beta_j\in B^{\circ}\}$ using~\textit{(i)}. \noindent \textit{Proof of (iii).} Let $B_0$ be the root of the tree $T(f)$. By~\textit{(i)}, $\{\beta_1,\dots, \beta_{n-k}\}$ is a subset of $B_0$. It is clear that the sets $B^{\circ}$ for $B\in T_k(f)$ are pairwise disjoint and their union is equal to $B_0$. This proves~\textit{(iii)}. \noindent \textit{Proof of (iv).} Assume that $B_1$, \dots, $B_r$ are the post-bars of $B$ supported at points $z_1$, \dots, $z_r$ respectively, and that $m(B_i)\geq k$ for $i\in \{1,\dots, s\}$ and $m(B_i)< k$ for $i\in \{s+1,\dots, r\}$. Then, by Lemma \ref{subst}, $F_B(z)=\mbox{const}\prod_{i=1}^r (z-z_i)^{m(B_i)}$. \noindent By Remark \ref{r: +-} the $k$th derivative of $F_B(z)$ is the product of two coprime polynomials $$F_B^{(k)}(z)={\cal F}^{\oplus}_B(z){\cal F}^{\ominus}_B(z),$$ \noindent where ${\cal F}^{\oplus}_B(z):=\prod_{i=1}^s (z-z_i)^{n_k(B_i)}$. \noindent We get $\deg {\cal F}^{\ominus}_B(z)=t_k(B)$. Hence it follows from~\emph{(ii)} and~\emph{(iii)} that all roots of ${\cal F}^{\ominus}_B(z)$ correspond to those Newton-Puiseux roots of $f^{(k)}(y)$ that belong to $B^{\circ}$. For $\alpha_i\in B$, $\beta_j\in B^{\circ}$ one has $O(\alpha_i,\beta_j)>h(B)$ if and only if $\mbox{\rm lc}_B(\alpha_i)=\mbox{\rm lc}_B(\beta_j)$, which means that the polynomials $F_B(z)$ and ${\cal F}^{\ominus}_B(z)$ have a common root. Since $F_B(z)$ is $k$-regular if and only if ${\cal F}^{\ominus}_B(z)$ and $F_B(z)$ do not have common roots, we get~\textit{(iv)}. \end{proof} \begin{Remark}\label{R:1} Let $f(y)\in\mathbf{K}[[x]][y]$ be a square-free Weierstrass polynomial over the ring of formal power series in one variable. Let $B\in T_k(f)$, $\beta_i \in B\cap \mathop\mathrm{Zer} f^{(k)}$ and put $c=\mbox{\rm lc}_{B}\beta_i$. Then ${\cal F}^{\oplus}_B(c)\neq 0$ if and only if $\beta_i \in B^{\circ}$. If ${\cal F}^{\oplus}_B(c)=0$ then there exists a sequence of postbars $B\perp_c B_1\perp \cdots \perp B_l$ such that $\beta_i \in B_l^{\circ}$ and $ B_l\in T_k(f)$.
\end{Remark} \noindent For quasi-ordinary Weierstrass polynomials which are Kuo-Lu $k$-regular, the counterpart of \cite[Lemma 3.3]{K-L} is true: \begin{Corollary}\label{Kuo-Lu} Let $f(y)\in\mathbf{K}[[x]][y]$ be a square-free Weierstrass polynomial over the ring of formal power series in one variable. Assume that $f$ is Kuo-Lu $k$-regular. Then under assumptions and notations of Theorem~\ref{higher-Kuo-Lu}, for every $\alpha_i\in \mathop\mathrm{Zer} f$, $\beta_s\in \mathop\mathrm{Zer} f^{(k)}$ there exists $\alpha_j\in \mathop\mathrm{Zer} f$ such that $O(\alpha_i,\beta_s)=O(\alpha_i,\alpha_j)$. \end{Corollary} \section{Newton polytopes of resultants} \label{section-Newton-polytopes-resultants} \noindent In this section we give a formula for the Newton polytope of the resultant $\mathrm{Res}\,_y(f^{(k)}(y),p(y)-T)$, where $f(y)$ is a Kuo-Lu $k$-regular quasi-ordinary Weierstrass polynomial, $p(y)$ is a factor of $f(y)$ and $T$ is a new variable. We prove that for irreducible $p(y)$, the Newton polytope of the resultant is polygonal. \subsection{Monomial substitutions} \noindent Let $g(\underline x,y)\in \mathbf{K}[[\underline x,y]]$. For any monomial substitution $x_1=u^{r_1}$,\dots, $x_d=u^{r_d}$, where $r_i$ are positive integers, we put \begin{equation} \label{bar} \bar g^{[{\bf r}]}(u,y):=g(u^{r_1},\ldots,u^{r_d},y). \end{equation} \noindent We will write simply $\bar{g}(u,y)$ when no confusion can arise. \noindent Observe that for $g=\underline{x}^{\bf s}$ we get $\bar g^{[{\bf r}]}=u^{\langle \bf{r},\bf{s}\rangle}$, where $\langle \cdot,\cdot \rangle$ denotes the scalar product. \begin{Lemma}\label{L:2} Let $f(y)\in \mathbf{K}[[\underline x]][y]$ be a quasi-ordinary Weierstrass polynomial. There is a one-to-one correspondence between the bars of $T(f)$ and the bars of $T(\bar f^{[{\bf r}]})$. If $B$ and $\bar B$ are the corresponding bars of $T(f)$ and $T(\bar f^{[{\bf r}]})$ respectively then \begin{enumerate} \item $h(\bar B) = \scalar{{\bf r}}{h(B)}$ and $t_k(\bar B)=t_k(B)$. \item For any factor $g$ of $f$, the $B$-characteristic polynomial of $g$ and the $\bar B$-characteristic polynomial of $\bar g^{[{\bf r}]}$ are equal and $q(\bar g^{[{\bf r}]},\bar B) = \scalar{{\bf r}}{q(g,B)}$. \end{enumerate} \end{Lemma} \noindent \begin{proof} Set $u^{\bf r}=(u^{r_1},\dots,u^{r_d})$. If $\mathop\mathrm{Zer} f=\{\alpha_i(\underline{x})\}_{i=1}^n$ then $\mathop\mathrm{Zer} \bar f^{[{\bf r}]}=\{\alpha_i(u^{\bf r})\}_{i=1}^n$ and $O(\alpha_i(u^{\bf r}),\alpha_j(u^{\bf r}))=\scalar{{\bf r}} {O(\alpha_i(\underline{x}),\alpha_j(\underline{x}))}$ for $i\neq j$. \noindent Hence every bar $B=\{\alpha_{i_j}(\underline{x})\}_{j=1}^k$ of $T(f)$ yields the bar $\bar B=\{\alpha_{i_j}(u^{{\bf r}})\}_{j=1}^k$ of $T(\bar f^{[{\bf r}]})$ of height $\scalar{{\bf r}}{h(B)}$. \noindent Substituting $u^{r_i}$ for $x_i$ in the equation (\ref{eq:comp}) appearing in Definition \ref{compatible}, we get \[ \bar g^{[{\bf r}]}(\lambda_{\bar B}(u) + zu^{h(\bar B)})=G_{B}(z)u^{\scalar{{\bf r}}{q(g,B)}}+\cdots, \] \noindent hence the second part of the lemma follows. \end{proof} \noindent The proof of the next lemma is similar in spirit to the proof of \cite[Theorem 4.1]{IMRN} and the proof of \cite[Theorem 9.2]{IMRN}. The same arguments were used there in special situation. Here we repeat the proof for the convenience of the reader. \begin{Lemma} \label{R2} Let $g(\underline x,y)\in \mathbf{K}[[\underline x,y]]$ and $\Delta\subseteq \mathbf{R}^{d+1}$ be a Newton polytope. 
For any ${\bf r}\in (\mathbf{R}_{>0})^d$ let $\bar\Delta^{[{\bf r}]}$ be the image of $\Delta$ by the linear mapping $\pi_{\bf r}:\mathbf{R}^{d}\times \mathbf{R} \longrightarrow \mathbf{R}^2$ given by $({\bf a}, b)\mapsto (\langle {\bf r}, {\bf a} \rangle, b)$. If $\Delta(\bar g^{[{\bf r}]})=\bar{\Delta}^{[{\bf r}]}$ for every ${\bf r}\in (\mathbf{N}\backslash\{0\})^d$, then $\Delta(g)=\Delta$.
\end{Lemma}
\noindent
\begin{proof}
For every Newton polytope $\Delta \subseteq (\mathbf{R}_{\geq 0})^{d+1}$ and every $v \in (\mathbf{R}_{\geq 0})^{d+1}$ we define the {\em support function} $l(v,\Delta)=\min\{\langle v,\alpha\rangle \;:\;\alpha \in \Delta\}$. To prove the lemma it is enough to show that the support functions $l(\cdot,\Delta(g))$ and $l(\cdot,\Delta)$ are equal. As these functions are continuous it suffices to show the equality on a dense subset of $\mathbf{R}_{\geq0}^{d+1}$.
\noindent Let $\vec r=(r_1,\dots,r_{d+1})=({\bf r},r_{d+1})\in\mathbf{R}_{\geq0}^{d+1}$, where ${\bf r}=(r_1,\dots,r_d)$.\\
\noindent Perturbing $\vec r$ a little we may assume that the hyperplane $\{\,\alpha\in\mathbf{R}^{d+1}:\scalar{\vec r}{\alpha}=l(\vec r,\Delta(g))\,\}$ supports $\Delta(g)$ at exactly one point $\check{\alpha}=(\underline{\check{\alpha}},\check{\alpha}_{d+1})$. Since after a small change of $\vec r$ the support point remains the same, we can assume, perturbing $\vec r$ again if necessary, that all $r_i$ are positive rational numbers.
\noindent We will show that
\begin{equation}\label{Eq:3} l(\vec r,\Delta)=l(\vec r,\Delta(g)). \end{equation}
\noindent Multiplying $\vec r$ by the common denominator of $r_1$, \dots, $r_{d+1}$ we may assume that all $r_i$ are positive integers. At this point of the proof we fix $\vec r$. We claim that $l(\vec r,\Delta)=l\bigl((1,r_{d+1}),\bar{\Delta}^{[{\bf r}]}\bigr)$ and $l(\vec r,\Delta(g))=l\bigl((1,r_{d+1}),\Delta\left (\bar g^{[{\bf r}]}\right )\bigr)$.
\noindent The first equality follows from the definition of $\pi_{\bf r}$ and the identity
\[ \scalar{\vec r}{\alpha}=\scalar{(1,r_{d+1})}{\pi_{\bf r}(\alpha)} \]
\noindent for $\alpha\in\mathbf{R}^{d+1}.$
\noindent Write $\alpha=(\underline \alpha, \alpha_{d+1})\in \mathbf{R}^{d+1}$ and $g(\underline x,y)=\sum_{\alpha}d_{\alpha}\underline {x}^{\underline {\alpha}}y^{\alpha_{d+1}}\in \mathbf{K}[[\underline x,y]]$. Since the hyperplane $\{\,\alpha\in\mathbf{R}^{d+1}:\scalar{\vec r}{\alpha}=l(\vec r,\Delta(g))\,\}$ supports $\Delta(g)$ at $\check{\alpha}$, the term $d_{\check{\alpha}}u^{\scalar{{\bf r}}{\underline{\check{\alpha}}}}y^{\check{\alpha}_{d+1}}$ of $\bar g^{[{\bf r}]}$ satisfies the equality $\scalar{{\bf r}}{\underline{\check{\alpha}}}+r_{d+1}\check{\alpha}_{d+1}=l(\vec r,\Delta(g))$, while for all other terms $d_{\alpha}u^{\scalar{{\bf r}} {\underline{\alpha}}}y^{\alpha_{d+1}}$ with $d_{\alpha}\neq0$ appearing in $\bar g^{[{\bf r}]}$, we have $\scalar{{\bf r}}{\underline{\alpha}}+r_{d+1}\alpha_{d+1}>l(\vec r,\Delta(g))$.
\noindent Hence $l\bigl((1,r_{d+1}),\Delta\left(\bar g^{[{\bf r}]}\right)\bigr)= \scalar{{\bf r}}{\underline{\check{\alpha}}}+r_{d+1}\check{\alpha}_{d+1}= l(\vec r,\Delta(g))$. Combining this with the two claimed equalities and the assumption $\Delta(\bar g^{[{\bf r}]})=\bar{\Delta}^{[{\bf r}]}$ we get (\ref{Eq:3}).
\end{proof}
\begin{Corollary} \label{R1}
Let $g_1(\underline x,y)$, $g_2(\underline x, y)\in \mathbf{K}[[\underline x,y]]$. Suppose that $\Delta(\bar g_1^{[{\bf r}]})=\Delta(\bar g_2^{[{\bf r}]})$ for every ${\bf r}\in (\mathbf{N}\backslash\{0\})^d$. Then $\Delta(g_1)=\Delta(g_2)$.
\end{Corollary} \begin{Theorem}\label{higher-discriminants} Assume that $f\in\mathbf{K}[[\underline{x}]][y]$ is a Kuo-Lu $k$-regular quasi-ordinary Weierstrass polynomial and $p$ is a Weierstrass polynomial which is a factor of $f$ in $\mathbf{K}[[\underline{x}]][y]$. Then the Newton polytope of $R(T):=\mathrm{Res}\,_y(f^{(k)}(y),p(y)-T)\in \mathbf{K}[[\underline{x}]][T]$ is equal to \begin{equation}\label{jac_Nd} \sum_{\genfrac{}{}{0pt}{2}{B\in T(f)}{t_k(B)\neq 0}} \left\{\Teissr{t_k(B)q(p,B)}{\rule{0pt}{2.5ex} t_k(B)}{13}{6.5}\right\}. \end{equation} \end{Theorem} \noindent \begin{proof} \noindent First we will prove the theorem for $d=1$. We use the notation of Theorem \ref{higher-Kuo-Lu}. Let $\prod_{j=1}^{n-k}(y-\beta_j)$ be the Newton-Puiseux factorization of $f^{(k)}(y)$. By the well-known properties of the resultants we have \begin{equation} \label{Ress} \mathrm{Res}\,_y(f^{(k)}(y),p(y)-T)=\pm \prod_{j=1}^{n-k}(p(\beta_j)-T). \end{equation} \noindent By Theorem~\ref{higher-Kuo-Lu}, for every $\beta_j$ there exists a unique bar $B\in T(f)$ such that $\beta_j\in B^{\circ}.$ For such a bar, $h(B)$ is finite and $t_k(B)\neq 0$. By Corollary~\ref{CCKiel} the polynomial $p$ is compatible with $B$ and by (\ref{eq:F,q2}) of Lemma \ref{subst} ${P}_{B}(z)$ is a factor of ${F}_{B}(z)$. By Theorem \ref{higher-Kuo-Lu} $(iv)$ we get that $O(\alpha_i,\beta_j)=h(B)$ for any $\alpha_i\in B$. Hence $\mbox{\rm lc}_B\beta_j$ does not belong to the set $\{\mbox{\rm lc}_B\alpha_i\;:\;\alpha_i\in B\}$. So by the equality (\ref{eq:F,q2}) in Lemma \ref{subst} we have ${ F}_B(\mbox{\rm lc}_B\beta_j)\neq 0$ and consequently ${P}_{B}(\mbox{\rm lc}_B\beta_j)\neq 0$. Now, using equality (\ref{eq:F,q1xxx}) of Lemma \ref{subst} we conclude that the Newton polytope of $p(\beta_j)-T$ is equal to $\left\{\Teissr{q(p,B)}{1}{7}{3.5}\right\}$. \noindent Using the property that the Newton polytope of a product is the Minkowski sum of the Newton polytopes of its factors, and $(ii)$ of Theorem \ref{higher-Kuo-Lu} we finish the proof for $d=1$. \noindent Assume now that $d>1$. \noindent Let $x_1=u^{r_1}$,\dots, $x_d=u^{r_d}$ be a monomial substitution, where $r_i$ are positive integers. By Lemma ~\ref{L:2} $ f^{[{\bf r}]}$ is Kuo-Lu $k$-regular, hence by the first part of the proof ($d=1$) \[ \Delta(\bar R^{[{\bf r}]})= \sum_{\genfrac{}{}{0pt}{2}{B\in T(f)}{t_k(B)\neq 0}} \left\{\Teissr{t_k(\bar B)q(\bar p^{[{\bf r}]},\bar B)}{\rule{0pt}{2.5ex} t_k(\bar B)}{14}{7}\right\}. \] \noindent For any elementary polytope of the above sum, Lemma \ref{L:2} gives \[ \left\{\Teissr{t_k(\bar B)q(\bar p^{[{\bf r}]},\bar B)}{\rule{0pt}{2.5ex} t_k(\bar B)}{14}{7}\right\} = \left\{\Teissr{t_k(B)\langle {\bf r},q(p,B)\rangle}{\rule{0pt}{2.5ex} t_k(B)}{16}{8}\right\} = \pi_{\bf r}\left(\left\{\Teissr{t_k(B)q(p,B)}{\rule{0pt}{2.5ex} t_k(B)}{13}{6.5}\right\}\right). \] Since the image of the Minkowski sum of Newton polytopes is the Minkowski sum of the images, we get $\Delta(\bar R^{[{\bf r}]})=\pi_{\bf r}(\Delta)$, where $\Delta$ denotes the Newton polytope given in (\ref{jac_Nd}). By Lemma~\ref{R2} we get $\Delta(R)=\Delta$. \end{proof} \section{ Eggers tree of a quasi-ordinary Weierstrass polynomial} \label{section-Eggers-tree} \noindent In this section we introduce the {\em Eggers tree} of a quasi-ordinary Weierstrass polynomial $f$, after the conjugacy relation defined in Section \ref{section-Conjugate-pseudo-balls}. Denote by $[B]$ the conjugacy class of the pseudo-ball $B$ of the Kuo-Lu $T(f)$. 
By definition, the {\em Eggers tree} of $f$, denoted by $E(f)$, is the set of conjugacy classes with the natural order induced by the Kuo-Lu tree. This is the natural generalization of the Eggers tree associated with plane curves in \cite{Eggers}. The notion of Eggers tree, for quasi-ordinary singularities, was introduced by Popescu-Pampu in \cite{tesis}. He defined a slightly different notion of the Eggers tree, since he generalized to quasi-ordinary singularities the version of Eggers tree defined for curves in \cite{Wall}.
\noindent The leaves of $E(f)$ correspond to the irreducible factors of $f$. Following Eggers we draw them in white. By definition, the {\em root} of $E(f)$ is its vertex of minimum height. The {\em branches} of $E(f)$ are the smallest sub-trees of $E(f)$ containing the root and one of its leaves. Let $[B]$ be a vertex in the branch of $E(f)$ corresponding to the irreducible component $f_i$ of $f$. Following Eggers, we draw the edge leaving the vertex $[B]$ in this branch dashed if no two roots of $f_{i}$ have contact $h(B)$.
\noindent Recall that the number of pseudo-balls conjugate with a quasi-ordinary pseudo-ball $B$ is $N(B)$ (see page \pageref{card[B]}).
\noindent Let $[B]$ be a vertex of the Eggers tree of a quasi-ordinary polynomial $f$. By Lemma \ref{LL:1}, for any $k\in \{1,\ldots, \deg f\}$, the numbers $n_{k}(B)$ and $t_{k}(B)$ do not depend on the representative of $[B]$. Moreover, if $p(y)\in \mathbf{K}[[\underline{x}]][y]$ is a Weierstrass polynomial compatible with $B$ then the number $q(p,B)$ and the degree of its $B$-characteristic polynomial are also independent of the representative of $[B]$.
\label{Egg ex}
\noindent The Eggers tree of the quasi-ordinary polynomial $f=f_{1}f_{2}$ from Example \ref{ex:KL} is
\begin{center}
\begin{tikzpicture}[scale=0.5]
\draw [thick](0,0) -- (-1.5,2);
\draw[thick, dashed](0,0) -- (1.5,2);
\draw (0,-0.1) node[below]{$[B]$};
\draw (-1.5,2.2) node[above]{$f_{1}$};
\draw (1.5,2.2) node[above]{$f_{2}$};
\node[draw,circle,inner sep=3pt,fill=black] at (0,0) {};
\node[draw,circle,inner sep=3pt,fill=white] at (1.5,2) {};
\node[draw,circle,inner sep=3pt,fill=white] at (-1.5,2) {};
\end{tikzpicture}
\end{center}
\begin{Remark} \label{incr}
If $p$ is an irreducible factor of $f$ then, following Lemma \ref{in-chain}, the sequence $\{q(p,B)\}_{[B]}$ is increasing along the branch ${P}$ of the Eggers tree of $f$ containing the leaf representing $p$. Moreover, if $[B]$ does not belong to $P$ then $q(p,B)=q(p,B_{0})$, where $[B_{0}]$ is the last common vertex of $P$ and the branches of the Eggers tree containing $[B]$. Hence, the set $\{q(p,B)\}_{[B]}$ is well-ordered.
\end{Remark}
\noindent After Remark \ref{incr} we get
\begin{Corollary} \label{coro:polygonal}
Let $f\in\mathbf{K}[[\underline{x}]][y]$ be a Kuo-Lu $k$-regular quasi-ordinary Weierstrass polynomial and $p$ a Weierstrass polynomial which is an irreducible factor of $f$ in $\mathbf{K}[[\underline{x}]][y]$. Then the Newton polytope in (\ref{jac_Nd}) is polygonal.
\end{Corollary}
\section{Irreducible factors of higher derivatives}
\label{section-Irreducible factors}
Let $f$ be a quasi-ordinary Weierstrass polynomial. In this section we study irreducible factors of normalized higher derivatives $f^{(k)}$. We show that every such irreducible factor can be associated with a certain vertex $[B]$ of the Eggers tree of $f$. By definition an {\em Eggers factor} will be the product of all irreducible factors associated with the same vertex of $E(f)$.
The {\em Eggers factorization} of a higher derivative is the product of all its Eggers factors. It generalizes to higher derivatives the factorization of the first polar given in \cite{Eggers} and \cite{GB} for plane curves and in \cite{GB-GP} for quasi-ordinary polynomials.\\ \noindent Let $F_B(z)$ be the $B$-characteristic polynomial of $f$. After Remark \ref{r: +-}, the polynomial $F_B^{(k)}(z)$ is the product of two coprime polynomials ${\cal F}^{\oplus}_B(z)$ and ${\cal F}^{\ominus}_B(z),$ where \[ {\cal F}^{\oplus}_B(z)=\prod_{B\perp_{z_i} B_i} (z-z_i)^{n_k(B_i)}. \] \begin{Theorem} \label{pack} Let $f(y)$ be a quasi-ordinary Weierstrass polynomial and let $g(y) \in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial which is an irreducible factor of $f^{(k)}(y)$. Then there exists $[B]\in E(f)$, with $B\in T_k(f)$, such that: \begin{enumerate} \item If $B'\in T_k(f)\backslash [B]$ then every root of $G_{B'}(z)$ is a root of ${\cal F}^{\oplus}_{B'}(z)$. \item If $B'\in T_k(f)\cap [B]$ then $G_{B'}(z)$ and ${\cal F}^{\oplus}_{B'}(z)$ do not have common roots. Moreover \begin{equation} \label{fact} G_B(z)=az^l\;\; \hbox{\rm or }\;\; G_B(z)=a(z^{n(B)}-c)^l \end{equation} \noindent for some $l\geq 1$ and $a,c\in \mathbf{K}\backslash \{0\}$. If $l=1$ then $g(y)$ is quasi-ordinary. \end{enumerate} \end{Theorem} \noindent \begin{proof} Let ${\cal T}=\{\,B\in T_k(f): G_B(z) \mbox{ has a root which is not a root of } {\cal F}^{\oplus}_B(z)\,\}$. By Remark \ref{R:1}, $B\in {\cal T}$ if and only if for any monomial substitution $\bar g$ has a Newton-Puiseux root that belongs to $\bar B^{\circ}$. \noindent Let ${\cal E}=\{\,[B]\in E(f): B\in {\cal T}\,\}$. We will show that ${\cal E}$ has only one element. Suppose that this is not the case, and let $[B_0]$ be the infimum of ${\cal E}$ in the ordered set $E(f)$ (the infimum exists because $E(f)$ has the structure of a tree). \noindent Let $[B']$ be an element of ${\cal E}$ different from $[B_0]$ and let $p$ be any irreducible factor of $f$ such that one of its roots belongs to $B'$. By definition of the Eggers tree there exists $B_1\in [B_0]$ such that $B'\subsetneq B_1$. Since $B'\in T_k(f)$, one of the roots of $P_{B_1}(z)$ has multiplicity bigger or equal than $k$. Hence, by the second statement of Lemma \ref{power} all the roots of $P_{B_1}(z)$ have this property. By \eqref{k-derivative}, the polynomial $P_{B_1}(z)$ could only share roots with ${\cal F}^{\oplus}_{B_1}(z)$. Hence, by Remark \ref{r: +-} the polynomials $P_{B_1}(z)$ and ${\cal F}^{\ominus}_{B_1}(z)$ are coprime. \noindent Let $\bar g=\prod_{i=1}^m(y-\bar\beta_i)$ be the Newton-Puiseux factorization of $g$ after some monomial substitution. Fix $B\in [B_0]$. By Lemmas \ref{LL:1} and \ref{L:2} we get $q(\bar p,\bar B)=q(\bar p,\bar B_0)$. Let us define two sets of indexes associated with $\bar B$: $$ I_{\bar B}=\{i: \bar\beta_i\in \bar B, P_B(\mbox{\rm lc}_{\bar B}\bar\beta_i)\neq 0\,\}, $$ $$ J_{\bar B}=\{i: \bar\beta_i\in \bar B, P_B(\mbox{\rm lc}_{\bar B}\bar\beta_i)= 0\,\}. $$ \noindent Directly from the definition of $P_B$ we have: if $i\in I_{\bar B}$ then $\mathrm{ord}\, \bar{p}(\bar\beta_i)=q(\bar p,\bar B_0)$, and if $i\in J_{\bar B}$ then $\mathrm{ord}\, \bar{p}(\bar\beta_i)>q(\bar p,\bar B_0)$. \noindent The cardinality of $I_{\bar B}$ is equal to the number of roots of $G_B(z)$ counted with multiplicities which are not the roots of $P_B(z)$. 
Similarly, the cardinality of $J_{\bar B}$ is equal to the number of roots of $G_B(z)$ counted with multiplicities which are the roots of $P_B(z)$. Hence the cardinality of these sets does not depend on the choice of the monomial substitution. Let $I:=\bigcup_{B\in [B_0]} I_{\bar B}$ and $J:=\bigcup_{B\in [B_0]} J_{\bar B}$. Observe that $\mathrm{ord}\, \bar{p}(\bar\beta_i)=q(\bar p,\bar B_0)$ for $i\in I$ and $\mathrm{ord}\, \bar{p}(\bar\beta_i)>q(\bar p,\bar B_0)$ for $i\in J$.
\noindent The sets $I$ and $J$ depend on the choice of the monomial substitution but their cardinality does not. We will show that the set $J$ is nonempty. Since $B'\in {\cal T}$, there exists $\bar\beta_i \in \bar B'$. Any root of $\bar p$ that belongs to $\bar B'$ has the same leading coefficient with respect to $\bar B_1$ as $\bar\beta_i$. Hence $P_{B_1}(\mbox{\rm lc}_{\bar B_1}\bar\beta_i)=0$, which gives $i\in J_{\bar{B}_1}\subset J$.
\noindent Now we will prove that the set $I$ is empty. Suppose this is not the case. Put $R(T):= \mathrm{Res}\,_y(g, p-T)$ and $\bar R(T):= \mathrm{Res}\,_y(\bar g, \bar p-T)$. We can write
\begin{eqnarray*}
R(T) &=& \pm T^m+c_1T^{m-1}+\cdots +c_m, \\
\bar R(T) &=& \pm T^m+\bar{c}_1T^{m-1}+\cdots +\bar{c}_m,
\end{eqnarray*}
\noindent for some $c_i\in \mathbf{K}[[\underline{x}]]$. By a well-known formula for the resultant we have $\bar R(T)=\pm\prod_{i=1}^m(\bar p(\bar\beta_i)-T)$. Since the Newton polygon of a product is the Minkowski sum of the Newton polygons of its factors, $\Delta(\bar R(T))$ has an edge of inclination $q(\bar p,\bar B_{0})$ starting at the point $(0,m)$. The projection of this edge to the vertical axis has length $\sharp I$. This gives
\begin{eqnarray*}
\mathrm{ord}\, \bar{c}_i&\geq& iq(\bar p,\bar B_{0}) \;\;\hbox{\rm for $1\leq i < \sharp I$}, \nonumber \\
\mathrm{ord}\, \bar{c}_i&=& iq(\bar p,\bar B_{0}) \;\;\hbox{\rm for $i=\sharp I$},\\
\mathrm{ord}\, \bar{c}_i&>& iq(\bar p,\bar B_{0}) \;\;\hbox{\rm for $ \sharp I<i\leq m$}. \nonumber
\end{eqnarray*}
\noindent Since the monomial substitution was arbitrary, we have
\begin{eqnarray*}
{c}_i\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]&\subseteq& \underline{x}^{iq (p,B_{0})}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]] \;\;\;\;\;\;\hbox{\rm for $1\leq i < \sharp I$}, \\
{c}_i\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]&=& \underline{x}^{iq (p,B_{0})}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]] \;\;\;\;\;\;\hbox{\rm for $i=\sharp I$}, \\
{c}_i\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]&\subsetneq& \underline{x}^{iq (p,B_{0})}\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]] \;\;\;\;\;\;\hbox{\rm for $ \sharp I<i\leq m$}.
\end{eqnarray*}
\noindent By Corollary~\ref{irred} $g$ is not irreducible and we get a contradiction.
\noindent We conclude that $I=\emptyset$. This means that for every $\bar\beta_i$ there exists $B\in[B_0]$ such that $\bar\beta_i\in\bar B$ and $\mathrm{ord}\, \bar{p}(\bar\beta_i)>q(\bar p,\bar B)$. By Remark \ref{R:1}, $\bar \beta_i$ belongs to a post-bar of $\bar B$, which has a nonempty intersection with $\mathop\mathrm{Zer} \bar p$. All post-bars of $B\in[B_0]$ that have nonempty intersection with $\mathop\mathrm{Zer} p$ are conjugate. They form a vertex of $E(f)$ which is bigger than $[B_0]$ and smaller than or equal to (with respect to the natural order in $E(f)$) any element of ${\cal E}$. Hence $[B_0]$ cannot be the infimum of ${\cal E}$ and we arrive again at a contradiction.
\noindent We have shown that ${\cal E}$ has only one element. Denote it by $[B_0]$.
Hence for any monomial substitution we have $\mathop\mathrm{Zer} \bar g \subset \bigcup_{B\in [B_0]}\bar B^{\circ}$. By Remark \ref{R:1} we get~\textit{1.}\ and the first part of~\textit{2.} \noindent Now we will find the form of $G_B(z)$, for any $B\in [B_0]$. If for every $\bar\beta_i\in B$ the leading coefficient $\mbox{\rm lc}_{\bar B} \bar\beta_i$ is $0$, then obviously $G_B(z)=az^l$. Otherwise by Lemma \ref{power} there exist $c\neq 0$ and a polynomial $G_1(z)$ coprime with $z^{n(B)}-c^{n(B)}$ such that $G_B(z)=G_1(z)(z^{n(B)}-c^{n(B)})^l$. Let $p(y)$ be the minimal Weierstrass polynomial of $\lambda_B(\underline{x})+c\underline{x}^{h(B)}$. Then $P_B(z)=\mbox{const}\cdot(z^{n(B)}-c^{n(B)})$. \noindent Proceeding as in the first part of the proof we define again the sets $I$, $J$ of indexes. By the choice of $p(y)$ the set $J$ is nonempty. If the polynomial $G_1(z)$ has positive degree then the set $I$ is nonempty and we arrive at a contradiction. Hence $G_1(z)$ is a constant which proves the second part of the theorem. \noindent Now we prove that if $l=1$ in (\ref{fact}) then $g(y)$ is quasi-ordinary. Let $p(y)\in \mathbf{K}[[\underline{x}]][y]$ be the minimal polynomial of $\lambda_B(\underline{x})$ if $G_B(z)=az$ or the minimal polynomial of $\lambda_B(\underline{x})+c{\underline{x}}^{h(B)}$ if $G_B(z)=a(z^{n(B)}-c^{n(B)})$. Then $G_B(z)$ is equal to $P_B(z)$ up to multiplication by a constant. By Lemma~\ref{LL:1} for any $B'\in[B_{0}]=[B]$ the characteristic polynomials $G_{B'}(z)$ and $P_{B'}(z)$ have the same form, in particular have the same number of roots and all their roots are simple. Take any monomial substitution and let $\bar\beta' $, $\bar\beta''$ be different roots of $\bar g(y)$. Since $\mathop\mathrm{Zer} \bar g \subset \bigcup_{B\in [B_0]}\bar B^{\circ}$ there exist $B', B''\in [B_0]$ such that $\bar\beta'\in\bar B'$ and $\bar\beta''\in\bar B''$. If $B'=B''$ then $O(\bar\beta',\bar\beta'')=h(\bar B')$ because $\bar\beta'$ and $\bar\beta''$ have different leading coefficients with respect to $\bar B'$. If $B'\neq B''$ then $O(\bar\beta',\bar\beta'')=O(\lambda_{\bar B'},\lambda_{\bar B''})$. In both cases the contact $O(\bar\beta',\bar\beta'')$ depends only on $B'$ and $B''$. The same argument applies to the roots of $\bar p(y)$. As a consequence any bijection $\Phi:\mathop\mathrm{Zer} \bar g\to \mathop\mathrm{Zer} \bar p$ such that $\Phi(\bar B'\cap\mathop\mathrm{Zer} \bar g)=\bar B'\cap\mathop\mathrm{Zer} \bar p$ for $B'\in [B_0]$ preserves contacts. \noindent Since the discriminant of a monic polynomial is the product of differences of its roots, the discriminant of $\bar{g}$ and the discriminant of $\bar{p}$ have the same order. Then by Corollary~\ref{R1} the Newton polytopes of the discriminants of $g(y)$ and $p(y)$ are equal and we conclude that $g(y)$ is quasi-ordinary. \end{proof} \noindent For $k$-regular quasi-ordinary Weierstrass polynomials we can say more. \begin{Corollary} \label{pack1} Let $f(y)$ be a $k$-regular Kuo-Lu quasi-ordinary Weierstrass polynomial and let $g(y) \in \mathbf{K}[[\underline{x}]][y]$ be a Weierstrass polynomial which is an irreducible factor of $f^{(k)}(y)$. Then there exists $[B]\in E(f)$ with $B\in T_k(f)$ such that: \begin{enumerate} \item If $B'\in T(f)\cap [B]$ then $G_{B'}(z)$ and $F_{B'}(z)$ do not have common roots. \item If $B'\in T_{k}(f)\backslash [B]$ then every root of $G_{B'}(z)$ is a root of ${\cal F}^{\oplus}_{B'}(z)$. \item If $B'\in T(f)\setminus T_k(f)$ then $G_{B'}(z)$ is a non-zero constant polynomial. 
\end{enumerate} \end{Corollary} \noindent \begin{proof} Take $B'\in T_k(f)$. Then by Lemma~\ref{derivatives} and the definition of $k$-regularity $G_{B'}(z)$ and ${\cal F}_{B'}^{\ominus}(z)$ do not have common roots. Hence for $B'\in T_k(f)$ it is enough to use Theorem~\ref{pack}. This proves {\em 1.} The second statement is the first item of Theorem~\ref{pack}. \noindent Now let $B'\in T(f)\setminus T_k(f)$. Consider the chain of bars $B_0\perp_c B_1\perp \cdots \perp B_s=B'$ of $T(f)$ such that $B_0\in T_k(f)$ and $B_i\notin T_k(f)$ for $1\leq i \leq s$. By the $k$-regularity of $F_{B_0}(z)$, we get $G_{B_0}(c)\neq 0$. Since $g$ is compatible with $B_0$, after (\ref{eq:F,q1xxx}) of Lemma \ref{subst}, we have \[g(\lambda_{B'}(\underline{x})+z\underline{x}^{h(B')})=g(\lambda_{B_0}(\underline{x})+c\underline{x}^{h(B_0)}+\cdots) = G_{B_0}(c)\underline{x}^{q(g,B_0)}+\cdots,\] which shows that $g$ is also compatible with $B'$ and its $B'$-characteristic polynomial $G_{B'}(z)$ equals $G_{B_0}(c)$.\end{proof} \section{Eggers factorizations of higher derivatives} \label{section-Eggers factorization} \noindent Let $f$ be a quasi-ordinary Weierstrass polynomial. In this section we propose a factorization of the normalized derivative $f^{(k)}$ into factors associated with points of Eggers tree $E(f)$. \begin{Definition} Let $g,p\in \mathbf{K}[[\underline{x}]][y]$ be Weierstrass polynomials. The P-contact between $g$ and $p$ is \[ \mathrm{cont}_{\rm P}(g,p):=\frac{1}{\deg g \deg p}\Delta(\mathrm{Res}\,_y(g,p)). \] \end{Definition} \noindent The notion of P-contact has its counterpart in the theory of plane analytic curves: for $y$-regular plane branches it is related with the {\em logarithmic distance} studied by P\l oski in \cite{Ploski}, since in such case $\Delta(\mathrm{Res}\,_y(g,p))$ equals the Newton polygon of a monomial $x^m$, where $m$ is the {\it intersection multiplicity} of the branches $g=0$ and $p=0$. \noindent If $g$ is compatible with a pseudo-ball $B$ then we put \[ \mathrm{cont}_{\rm P}(g,B):=\frac{1}{\deg g}\Delta(\underline x^{q(g,B)}). \] \begin{Proposition}\label{self-contact} Let $B$ be a quasi-ordinary pseudo-ball of finite height and let $f$ be an irreducible quasi-ordinary Weierstrass polynomial compatible with $B$ such that $\mathop\mathrm{Zer} f \cap B\neq \emptyset$ or equivalently such that $F_B(z)$ has positive degree. Then $\mathrm{cont}_{\rm P}(f,B)$ does not depend on $f$. \end{Proposition} \noindent \begin{proof} Take any $f_1$, $f_2$ satisfying the assumptions of the proposition and let $\alpha_1 \in \mathop\mathrm{Zer} f_1 \cap B$, $\alpha_2 \in \mathop\mathrm{Zer} f_2 \cap B$. Choose a constant $c\in \mathbf{K}$ such that $(F_i)_B(c)\neq 0$, for $i=1,2$ and let $\gamma =\lambda_B+c\underline x^{h(B)}$. Then $O(\gamma,\alpha_1)=O(\gamma,\alpha_2)= h(B)$ and for any $\xi \in \mathop\mathrm{Zer} f_1\cup \mathop\mathrm{Zer} f_2$ we have $O(\xi,\gamma)\leq h(B)$. \noindent Let $G$ be a finite subgroup of $\mathbf{L}$-automorphisms of $\mathbf{M}$ that acts transitively on the sets $\mathop\mathrm{Zer} f_1$ and $\mathop\mathrm{Zer} f_2$. By the orbit stabilizer theorem, for $i\in \{1,2\}$ we get \[ \frac{1}{|G|} \sum_{\sigma\in G} O(\gamma,\sigma(\alpha_i)) = \frac{1}{\deg f_i} \sum_{\alpha\in\mathop\mathrm{Zer} f_i} O(\gamma,\alpha) = \frac{1}{\deg f_i}q(f_i,B). \] \noindent By STI we have $O(\gamma,\sigma(\alpha_1))=O(\gamma,\sigma(\alpha_2) )$ for all $\sigma\in G$. Thus $\frac{1}{\deg f_1}q(f_1,B)= \frac{1}{\deg f_2}q(f_2,B)$. 
\end{proof} \noindent After Proposition~\ref{self-contact} we define the \emph{self-contact} of a pseudo-ball $B$ of finite height as \[ \mbox{self-contact}(B):= \mathrm{cont}_{\rm P}(f,B), \] \noindent for any $f$ satisfying the assumptions of this proposition. \noindent By Lemma~\ref{LL:1} conjugate pseudo-balls have the same self-contact, hence the self-contact of $[B]$ is well-defined for any vertex $[B]$ of ${E}(f)$, where $B$ is of finite height. \noindent In the set of Newton polytopes we define the next partial order: $\Delta_1 \succeq \Delta_2$ if and only if $\Delta_1 \subseteq \Delta_2$. Observe that $\Delta(\underline x^{{\bf q}_1})\succeq \Delta(\underline x^{{\bf q}_2})$ if and only if ${\bf q}_1 \geq {\bf q}_2$. Now we show how the self-contacts of $[B]\in E(f)$ determine the P-contacts between irreducible factors of $f$. \begin{Proposition}\label{s-c1} Let $f$ be a quasi-ordinary Weierstrass polynomial. Then the self contacts of vertices of finite height increase along the branches of $E(f)$. Moreover for any different irreducible factors $f_1$, $f_2$ of $f$ \begin{equation} \mathrm{cont}_{\rm P}(f_1,f_2) = \max \{\mbox{\rm self-contact}([B]) \} \label{s-c}, \end{equation} where the maximum is taken over all $[B]\in E(f)$ such that $\mathop\mathrm{Zer} f_i \cap B\neq \emptyset$ for $i=1,2$. \end{Proposition} \noindent \begin{proof} Let $B$, $B'$ be pseudo-balls of $T(f)$ of finite height such that $B'\subsetneq B$. Choose an irreducible factor $f_i$ of $f$ such that $\mathop\mathrm{Zer} f_i \cap B'\neq \emptyset$. By Lemma \ref{in-chain} we get $q(f_i,B)< q(f_i,B')$, hence $\mbox{self-contact}(B)\prec \mbox{self-contact}(B')$. \noindent Let $[B]\in E(f)$ be the maximum (with the order defined in $E(f)$) of the set of all vertices $[B']\in E(f)$ such that $\mathop\mathrm{Zer} f_i \cap B'\neq \emptyset$ for $i=1,2$. The pseudo-ball $B$ has the form $\gamma+(\gamma-\delta)\mathbf{K}[[\underline{x}^{1/\mathbf{N}}]]$, for some $\gamma \in \mathop\mathrm{Zer} f_1$ and $\delta \in \mathop\mathrm{Zer} f_2$ with maximal possible contact. By the choice of $\gamma$ and $ \delta$, we have $O(\gamma,\delta')\leq h(B)$ for all $\delta'\in\mathop\mathrm{Zer} f_2\cap B$, consequently $(F_2)_{B}(\mbox{\rm lc}_B \gamma)\neq 0$. Then $f_2(\gamma)=(F_2)_B(\mbox{\rm lc}_B \gamma)\underline{x}^{q(f_2,B)}+\cdots$. \noindent Applying the Galois action associated with the irreducible polynomial $f_{2}$ we get $\Delta({f_2(\gamma)})=\Delta({f_2(\gamma')})$, for any $\gamma,\gamma' \in \mathop\mathrm{Zer} f_{1}$. Hence by the definition of the self-contact and the identity $\Delta(\mathrm{Res}\,_y(f_1,f_2))=\sum_{\gamma \in \mathop\mathrm{Zer} f_{1}}\Delta({f_2(\gamma)})$ we have \begin{eqnarray*} \mbox{self-contact}(B)&=&\mathrm{cont}_{\rm P}(f_2,B)=\frac{1}{\deg f_2}\Delta(\underline x^{q(f_2,B)})\\ &=& \frac{1}{\deg f_1 \deg f_2}\deg f_1\;\Delta({f_2(\gamma)})\\ &=&\frac{1}{\deg f_1 \deg f_2}\Delta(\mathrm{Res}\,_y(f_1,f_2))=\mathrm{cont}_{\rm P}(f_1,f_2). \end{eqnarray*} \end{proof} \begin{Theorem} \label{dec-red-qo} Let $f \in \mathbf{K}[[\underline{x}]][y]$ be a quasi-ordinary Weierstrass polynomial. Then \[f^{(k)}=\prod_{[B]\in E(f)}p_{[B]},\] where $p_{[B]}$ are Weierstrass polynomials such that \begin{enumerate} \item The $B$-characteristic polynomial of $p_{[B]}$ equals $F_B^{\ominus}$ up to multiplication by constants and $\deg p_{[B]}=N(B)t_k(B)$. 
\item For every irreducible factor $g$ of $p_{[B]}$ and every irreducible factor $f_i$ of~$f$ we get
\begin{enumerate}
\item[(a)] $\mathrm{cont}_{\rm P}(g,B)=\mbox{\rm self-contact}(B).$
\item[(b)] If $\mathrm{cont}_{\rm P}(f_i,B)\prec \mbox{\rm self-contact}(B)$ then $\mathrm{cont}_{\rm P}(f_i,g)=\mathrm{cont}_{\rm P}(f_i,B)$.
\item[(c)] If $ \mathrm{cont}_{\rm P}(f_i,B)=\mbox{\rm self-contact}(B)$ then $\mathrm{cont}_{\rm P}(f_i,g)\succeq \mathrm{cont}_{\rm P}(f_i,B)$.
\end{enumerate}
\item If $f$ is $k$-regular then the inequalities $\succeq$ in (c) become equalities.
\item For every irreducible factor $g$ of $p_{[B]}$ there is an irreducible factor $f_i$ of $f$ such that $\mathrm{cont}_{\rm P}(f_i,g) = \mathrm{cont}_{\rm P}(f_i,B)= \mbox{\rm self-contact}(B).$
\end{enumerate}
\end{Theorem}
\noindent
\begin{proof}
We define $p_{[B]}$ as the product of all irreducible factors of $f^{(k)}$ having the same $[B]$ in Theorem~\ref{pack} (by convention the product of an empty family is $1$). After some monomial substitution $\bar p_{[B]}$ has $N(B)t_k(B)$ roots and all of them are in $\bigcup_{B'\in[B]}\bar{B'}^{\circ}$. Consequently $\deg p_{[B]}=N(B)t_k(B)$.\\
\noindent Now we will prove the second statement. Since $p_{[B]}$ has positive degree we may assume that $B\in T_{k}(f)$. Let $f_i$ be an irreducible factor of $f$. If $\mathrm{cont}_{\rm P}(f_i,B)\prec \mbox{\rm self-contact}(B)$ then by Proposition \ref{self-contact} $(F_i)_B(z)$ is a non-zero constant polynomial. Hence, for any $\bar\gamma\in \mathop\mathrm{Zer} \bar p_{[B]}$ we have $\mathrm{ord}\,\bar{f_i}(\bar{\gamma})=q(\bar f_i,\bar B)$, which proves {\em 2(b)}. Suppose now that $\mathrm{cont}_{\rm P}(f_i,B)=\mbox{\rm self-contact}(B)$. For every root $\bar\gamma\in \mathop\mathrm{Zer} \bar p_{[B]}$ we have $\mathrm{ord}\,\bar{f_i}(\bar{\gamma})\geq q(\bar f_i,\bar B)$ with equality in the $k$-regular case. Hence if $g$ is an irreducible factor of $p_{[B]}$ then $\mathrm{ord}\, \mathrm{Res}\,_y(\bar f_i, \bar g )\geq (\deg g)\cdot q(\bar f_i,\bar B)$ with equality in the $k$-regular case. This gives {\em 2(c)} and {\em 3}. \\
\noindent If the polynomial $F_{B}(z)$ is as in \eqref{eq:F irr} then it is $k$-regular. In this case for any irreducible factor $f_i$ of $f$, with $(F_i)_B(z)$ of positive degree, the polynomials $(F_i)_B(z)$ and $G_B(z)$ do not have common factors.
\noindent If $F_{B}(z)$ is not as in \eqref{eq:F irr}, then by Lemma \ref{power} there is an irreducible factor $f_i$ of $f$ such that the polynomials $(F_i)_B(z)$ and $G_B(z)$ do not have common factors and $(F_i)_B(z)$ has positive degree. After any monomial substitution, we have $\mathrm{ord}\, \bar f_i(\bar\gamma)=q(\bar f_i,\bar B)$, for every $\bar\gamma \in \mathop\mathrm{Zer} \bar g$. This gives $\mathrm{ord}\, \mathrm{Res}\,(\bar f_i,\bar g)=\deg g \cdot q(\bar f_i,\bar B)$. Since the monomial substitution was arbitrary, the fourth statement of the theorem holds true in all cases.
\noindent It remains to prove {\em 2(a)}. Choose $f_i$ as in the proof of the fourth statement. Then $\Delta(g(\alpha))=\Delta(\underline x^{q(g,B)})$ for any $\alpha \in B\cap \mathop\mathrm{Zer} f_i$.
\noindent Applying the same argument as at the end of the proof of Proposition \ref{s-c1}, we get $\Delta(\mathrm{Res}\,_y(f_i,g))=\deg f_{i}\Delta(g(\alpha))=\deg f_i\cdot \Delta(\underline x^{q(g,B)})$.
After the fourth statement and the definition of the P-contact: $\mbox{self-contact}(B)=\mathrm{cont}_{\rm P}(f_i,g)=\frac{1}{\deg g\deg f_i}\Delta(\mathrm{Res}\,_y(f_i,g))= \frac{1}{\deg g}\Delta(\underline x^{q(g,B)})=\mathrm{cont}_{\rm P}(g,B).$ \end{proof} \begin{Example} We consider the example in \cite[Section 10]{GB-GP}: let $f=f_{1,1}f_{1,2}f_{2,1}f_{2,2}$, where $f_{i,j}=(y^2-ix_1^{3}x_2^2)^2-jx_1^{5}x_2^{4}y$ are irreducible quasi-ordinary polynomials for $i,j\in\{1,2\}$. The Kuo-Lu and the Eggers tree of $f$ are \begin{center} \begin{tikzpicture}[scale=0.8] \draw [-, thick](0,0) -- (0,1); \draw[-, thick](-3,1) -- (3,1); \draw[-, thick](-3,1) -- (-3,2); \draw[-, thick](3,1) -- (3,2); \draw[-, thick](-1.5,1) -- (-1.5,2); \draw[-, thick](1.5,1) -- (1.5,2); \draw[-, thick](-3.5,2) -- (-2.5,2); \draw[-, thick](-2,2) -- (-1,2); \draw[-, thick](1,2) -- (2,2); \draw[-, thick](2.5,2) -- (3.5,2); \draw[-, thick](-3.5,2) -- (-3.5,3); \draw[-, thick](-3.2,2) -- (-3.2,3); \draw[-, thick](-2.8,2) -- (-2.8,3); \draw[-, thick](-2.5,2) -- (-2.5,3); \draw[-, thick](-1,2) -- (-1,3); \draw[-, thick](-1.3,2) -- (-1.3,3); \draw[-, thick](-1.7,2) -- (-1.7,3); \draw[-, thick](-2,2) -- (-2,3); \draw[-, thick](1,2) -- (1,3); \draw[-, thick](1.3,2) -- (1.3,3); \draw[-, thick](1.7,2) -- (1.7,3); \draw[-, thick](2,2) -- (2,3); \draw[-, thick](2.5,2) -- (2.5,3); \draw[-, thick](2.8,2) -- (2.8,3); \draw[-, thick](3.2,2) -- (3.2,3); \draw[-, thick](3.5,2) -- (3.5,3); \begin{scope}[shift={(6.5,1)},scale=1] \draw [thick](0,0) -- (-1.5,2); \draw[thick](0,0) -- (1.5,2); \draw [thick](0.8,1) -- (0.5,2); \draw[thick](-0.8,1) -- (-0.5,2) ; \draw (0,-0.1) node[below]{$[B_1]$}; \draw (-0.9,1) node[left]{$[B_2]$}; \draw (0.9,1) node[right]{$[B_3]$}; \draw (-1.5,2.1) node[above]{$f_{1,1}$}; \draw (-0.5,2.1) node[above]{$f_{1,2}$}; \draw (0.5,2.1) node[above]{$f_{2,1}$}; \draw (1.5,2.1) node[above]{$f_{2,2}$}; \node[draw,circle,inner sep=3pt,fill=black] at (0.8,1) {}; \node[draw,circle,inner sep=3pt,fill=black] at (-0.8,1) {}; \node[draw,circle,inner sep=3pt,fill=black] at (0,0) {}; \node[draw,circle,inner sep=3pt,fill=white] at (1.5,2) {}; \node[draw,circle,inner sep=3pt,fill=white] at (0.5,2) {}; \node[draw,circle,inner sep=3pt,fill=white] at (-0.5,2) {}; \node[draw,circle,inner sep=3pt,fill=white] at (-1.5,2) {}; \end{scope} \end{tikzpicture} \end{center} \noindent The heights of the vertices of the Eggers tree are: $h[B_1]=\left(\frac{3}{2},1\right)$, $h([B_2])=h([B_3])=\left(\frac{7}{4},\frac{3}{2}\right)$; the self-contacts are $\hbox{\rm self-contact}([B_1])=\frac{1}{4}\Delta(\underline x^{(6,4)})$ and $\hbox{\rm self-contact}([B_2])=\hbox{\rm self-contact}([B_3])=\frac{1}{4}\Delta\left(\underline x^{(13,10)}\right)$. \noindent For any $1\leq k \leq 16$, the degrees of polynomials $p_{[B_i]}$ are \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline &$\deg p_{[B_1]}$ & $\deg p_{[B_2]}$ & $\deg p_{[B_3]}$ \\ \hline \hline $f^{(1)}$ & 3 & 6 & 6 \\ \hline $f^{(2)}$ & 6 & 4 & 4 \\ \hline $f^{(3)}$ & 9 & 2 & 2 \\ \hline $f^{(k)}$ & 16-$k$ & 0 & 0 \\ \hline \end{tabular} \end{center} \noindent The characteristic polynomials are $F_{B_1}(z)=(z^2-1)^4(z^2-2)^4$, $F_{B_2}(z)=(4z^2-1)(4z^2-2)$ and $F_{B_3}(Z)=(8z^2-\sqrt{2})(8z^2-2\sqrt{2})$. We can verify that these polynomials are $k$-regular for any $k$. \noindent Theorem \ref {dec-red-qo} allows us to compute the {\rm P-contact} between the irreducible factors of $f$ and the irreducible factors of its higher order polars. 
For any $k$ and any irreducible factor $g$ of $p_{[B_{1}]}$, we have $ \mathrm{cont}_{\rm P}(f_{i,j},g)=\hbox{\rm self-contact}([B_1])$. For any $k$ and any irreducible factor $g$ of $p_{[B_{2}]}$, we have $ \mathrm{cont}_{\rm P}(f_{1,j},g)=\hbox{\rm self-contact}([B_2])$ and $ \mathrm{cont}_{\rm P}(f_{2,j},g)=\hbox{\rm self-contact}([B_1])$, for any $j=1,2$. We have the symmetric situation for the irreducible factors of $p_{[B_{3}]}$. \end{Example} \begin{Example} The second polar of the quasi-ordinary polynomial $f$ from the Example \ref{ex:KL} (see page \pageref{Egg ex} for its Eggers tree) has only one Eggers factor $p_{[B]}=y$, with $\mathrm{cont}_{\rm P}(f_2,y)=\Delta (\underline x^{(5,2)})\succ \mathrm{cont}_{\rm P}(f_1,y)=\Delta (\underline x^{(3/2,1)})= \mbox{\rm self-contact}(B),$ hence in item 2. (c) of Theorem \ref{dec-red-qo} we have equality for $f_{1}$ and strict inequality for $f_{2}$. \end{Example} \noindent Now we study the examples of \cite{Casas}: \begin{Example} (\cite[Example 5.1]{Casas}) Let $f=y^3+x^2y$. The Eggers tree of $f$ has only one vertex $[B]$ of finite height, where $B=x\mathbf{K}[[x^{1/\mathbf{N}}]]$. The $B$-characteristic polynomial of $f$ is as in the previous example, so it is not $2$-regular. We get $f^{(2)}=p_{[B]}=y$. If $f_{1}=y-x$, $f_{2}=y+x$ and $f_{3}=y$ then $\emptyset=\mathrm{cont}_{\rm P}(f_3,y)\succeq \mathrm{cont}_{\rm P}(f_i,y)=\mathrm{cont}_{\rm P}(f_i,B)=\Delta(x)=\mbox{\rm self-contact}(B)$ for $i=1,2$. This illustrates the fourth statement of Theorem \ref{dec-red-qo}. \end{Example} \begin{Example} (\cite[Example 5.2]{Casas}) Let $f_a=y^4+ax^2y^2+x^2y+x^{10}$. We get $f_a=f_{a1}f_{a2}$, where $f_{a1}$ is irreducible and the contact of any two different roots of it is $\frac{2}{3}$, and $f_{a2}=0$ is a smooth curve tangent to $y=0$. The Eggers tree of $f_a$ is: \begin{center} \begin{tikzpicture}[scale=0.5] \draw [thick](0,0) -- (-1.5,2); \draw[thick, dashed](0,0) -- (1.5,2); \draw (0,-0.1) node[below]{$[B]$}; \draw (-1.5,2.2) node[above]{$f_{a1}$}; \draw (1.5,2.2) node[above]{$f_{a2}$}; \node[draw,circle,inner sep=3pt,fill=black] at (0,0) {}; \node[draw,circle,inner sep=3pt,fill=white] at (1.5,2) {}; \node[draw,circle,inner sep=3pt,fill=white] at (-1.5,2) {}; \end{tikzpicture} \end{center} \noindent The characteristic polynomial $F_B(z)$ equals $z^4+z$. Hence $f_a$ is not $2$-regular. For any irreducible factor $g$ of $f^{(2)}_a$ we get $\mathrm{cont}_{\rm P}(f_{a1},g)=\Delta(x^{2/3})=\mbox{ self-contact}(B)$. For $f_{a2}$ the P-contact depends on $a$: \[ \mathrm{cont}_{\rm P}(f_{a2},g)=\left\{\begin{array}{ll} \Delta(x) & \hbox{\rm for $a\neq 0$}\\ \Delta(x^{8}) & \hbox{\rm for $a=0$.} \end{array} \right. \] \end{Example} \subsection{Irreducible case} \noindent Assume that $f(y)\in \mathbf{K}[[\underline{x}]][y]$ is an irreducible quasi-ordinary Weierstrass polynomial of degree $n>1$ and $\mathop\mathrm{Zer} f=\{\alpha_i\}_{i=1}^n$. By \cite{Lipman} the set $\{O(\alpha_i,\alpha_j)\,:\, i\neq j\}:=\{{\bf h}_1,\ldots, {\bf h}_s\}$ is well-ordered, so we may assume that ${\bf h}_1\leq {\bf h}_2 \leq \cdots \leq {\bf h}_s$. These values are the finite heights of the bars of $T(f)$. The sequence ${\bf h}_1,\ldots, {\bf h}_s$ is called the sequence of {\it characteristic exponents} of $f(y)$. Let $B_i$ be any bar in $T(f)$ of height ${\bf h}_i$. 
By \cite[Remark 2.7]{GP} the degree $n(B_i)$ of the field extension $\mathbf{L}(\lambda_{B_i}(\underline{x})) \hookrightarrow \mathbf{L}(\lambda_{B_i}(\underline{x}), \underline{x}^{{\bf h}_i})$ does not depend on the choice of $B_i$ and will be denoted by $n_i$. Put $e_i:=n_{i+1}\cdots n_s$ for $0\leq i \leq s$ (by convention the empty product is one). Observe that $T(f)$ has a special structure: all bars of the same height are conjugate and there are $n_1\cdots n_{i-1}$ conjugate bars of height~${\bf h}_i$ (see \cite[Theorem 6.2]{IMRN}).
\noindent By~(\ref{jac_Nd}) we get
\begin{equation} \label{deltaRes1} \Delta(\mathrm{Res}\,_y (f^{(k)}, f-T))=\sum_{i=1}^s n_1\cdots n_{i-1}t_k(B_i)\left\{\Teissr{q(f,B_i)}{1}{8}{4}\right\}, \end{equation}
\noindent where $B_i$ is any bar of $T(f)$ of height ${\bf h}_i$ and
\[ t_k(B_i)=\left\{ \begin{array}{ll} (n_i-1)k &\hbox{ for $1\leq k\leq e_{i}$},\\ e_{i-1}-k & \hbox{ for $e_{i}\leq k \leq e_{i-1}$},\\ 0 & \hbox{ for $e_{i-1} \leq k < n $}. \end{array} \right . \]
\noindent Let $i_{k}\in \{1,\ldots, s\}$ be such that $e_{i_k}\leq k< e_{i_k-1}$. Then $t_k(B_i)$ is positive if and only if $1\leq i \leq i_{k}$.
\noindent The Newton polytope of (\ref{deltaRes1}) is polygonal (see Corollary \ref{coro:polygonal}) and has $i_{k}$ edges of different inclinations. After Theorem \ref{Th:decomp} we decompose $\mathrm{Res}\,_y(f^{(k)}, f-T)=\prod_{i=1}^{i_{k}} R_i$, where $\deg_T R_i=(n_1\cdots n_{i-1})t_k(B_i)$ and any $R_i$ has an elementary Newton polytope of inclination $q(f,B_i)$.
\noindent Such a decomposition of the resultant can also be obtained from the Eggers factorization of $f^{(k)}$. By Lemma \ref{power} the $B_{i}$-characteristic polynomial of $f$ has the form
\begin{equation} \label{eq:char-irred} F_{B_{i}}(z)=\hbox{\rm constant}\cdot(z^{n_i}-c_{B_{i}})^{e_i}, \end{equation}
\noindent for some $c_{B_i}\in \mathbf{K}\setminus \{0\}$. The properties of such polynomials are described in the following lemma, which was proved in \cite[Lemma~5.3]{Forum} for complex polynomials but by the Lefschetz Principle it holds true for polynomials over any algebraically closed field of characteristic zero.
\begin{Lemma} \label{AL}
Let $\mathbf{K}$ be an algebraically closed field of characteristic zero. If $F(z)=(z^n-c)^{e}\in \mathbf{K}[z]$ with $c\neq 0$ then for $1\leq k<\deg F(z)$ one has $\frac{d^k}{dz^k}F(z)=C z^{a}(z^n-c)^{b}\prod_{i=1}^{d}(z^n-c_i)$, where $C\neq 0$ and
\begin{enumerate}
\item[(1)] $0\leq a <n$ and $a+k\equiv 0 \pmod n$,
\item[(2)] $b=\max\{e-k,0\}$,
\item[(3)] $d=\min\{e,k\}-\left\lceil \frac{k}{n} \right\rceil$, where $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$,
\item [(4)] $c_i\neq c_j$ for $1\leq i<j\leq d$ and $0\neq c_i\neq c$ for $1\leq i\leq d$.
\end{enumerate}
\end{Lemma}
\begin{Corollary} \label{irr-kregular}
Every irreducible quasi-ordinary Weierstrass polynomial is Kuo-Lu $k$-regular for any positive integer $k$.
\end{Corollary}
\begin{Theorem}\label{Merle}
Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be an irreducible quasi-ordinary Weierstrass polynomial of degree $n>1$ and characteristic exponents ${\bf h}_1, \ldots, {\bf h}_s$. Then
\begin{equation} \label{dM} f^{(k)}(y)= \prod_{i=1}^{i_{k}}p_i, \end{equation}
where
\begin{enumerate}
\item $p_i$ is a Weierstrass polynomial in $\mathbf{K}[[\underline{x}]][y]$ of degree $n_1\cdots n_{i-1}t_k(B_i)$.
\item Any irreducible factor $g$ of $p_i$ verifies
\[ \mathrm{cont}_{\rm P}(g,f)=\mbox{self-contact}(B_i).
\] \item The $B_i$-characteristic polynomial of $p_i$ is $(P_i)_{B_i}=const F_{B_i}^{\ominus}.$ \end{enumerate} \end{Theorem} \noindent \begin{proof} \noindent The theorem follows from Corollary \ref{irr-kregular} and the first, second and third part of Theorem~\ref{dec-red-qo}. \end{proof} \begin{Proposition} \label{ppppp} Let $f(y)\in \mathbf{K}[[\underline{x}]][y]$ be an irreducible quasi-ordinary Weierstrass polynomial with characteristic exponents ${\bf h}_1, \ldots, {\bf h}_s$. Let $a$, $d$ be integers such that $0\leq a <n_i$, $a+k\equiv 0 \pmod {n_i}$ and $d=\min\{e_i,k\}-\mbox{\rm lc}eil \frac{k}{n_i} \rceil$. Then every $p_i$ of (\ref{dM}) admits a factorization of the form $p_i=p_{i0}p_{i1}\cdots p_{id},$ where \begin{enumerate} \item the corresponding $B_i$-characteristic polynomials are $P_{i0}(z)=\mbox{const}\cdot z^a$, $P_{ij}(z)=\mbox{const}\cdot (z^{n_i}-c_j)$ with $c_j\neq c_l$ for $1\leq j<l\leq d$ and $c_j\neq0$. \item $p_{i0}$ is a Weierstrass polynomial of degree $a\cdot n_1\cdots n_{i-1}$ not necessarily quasi-ordinary. \item Every $p_{ij}$ for $1\leq j \leq d$ is a quasi-ordinary irreducible Weierstrass polynomial of degree $n_1\cdots n_i$ and characteristic exponents ${\bf h}_1, \ldots, {\bf h}_i$. \end{enumerate} \end{Proposition} \noindent \begin{proof} \noindent After \eqref {eq:char-irred} $F_{B_i}(z)$ has the form $a(z^{n_i}-c)^{e_i}$ for some nonzero $a$ and $c$. \noindent By the first part of Theorem \ref{dec-red-qo} and Lemma \ref{AL} the polynomial $P_{i,B_i}(z)=\hbox{\rm const}\cdot z^{a}\prod_{j=1}^{d}(z^{n_{i}}-c_j)$. This polynomial is the product of the $B_i$-characteristic polynomials of the irreducible factors of $p_{i}$. From the second part of Theorem~\ref{pack}, we know that $p_{i}$ has $d$ irreducible factors $\{p_{ij}\}_{j=1}^{d}$ such that $P_{ij}(z)=\mbox{const}\cdot (z^{n_i}-c_j)$. If $p_{i}$ has other irreducible factors, then $p_{i0}$ is their product. It also follows from Theorem~\ref{pack} that $p_{ij}$ are quasi-ordinary for $1\leq j\leq d$. \\ \noindent By a similar argument as in the first part of the proof of Theorem \ref{dec-red-qo} we get $\deg p_{ij}=N(B_{i})\deg P_{i,jB_i}(z)$. Since $N(B_{i})=n_{1}\cdots n_{i-1}$, we obtain the statements about the degrees of $p_{ij}$.\\ \noindent Fix $p_{ij}$ for $j\in\{1,\ldots,d\}$. The pseudo-ball $B_{i}$ has $n_{1}\cdots n_{i-1}$ conjugate pseudo-balls. Each of these pseudo-balls contains $n_{i}$ roots of $p_{ij}$. Since the roots of $P_{i,j}(z)$ are simple, any two roots of $p_{ij}$ belonging to the same pseudo-ball have different leading coefficients with respect to $B_i$, so their contact equals ${\bf h}_{i}$. Now, if we consider two roots of $p_{ij}$ belonging to different conjugate pseudo-balls, then their contact depends only on these two pseudo-balls, hence it is equal to ${\bf h}_{l}$ for some $l\in\{1,\ldots, i-1\}$. We conclude that the characteristic exponents of $p_{ij}$ are ${\bf h}_1, \ldots, {\bf h}_i$. \end{proof} \noindent In Proposition \ref{ppppp} the integer $a$ can be $0$, in such a case $p_{i0}=1$. If $a=1$ then $p_{i0}$ is quasi-ordinary with characteristic exponents ${\bf h}_1, \ldots, {\bf h}_{i-1}$. Moreover $d$ can be zero and in such a case $p_i=p_{i0}$. \section{Eggers decomposition for power series} \label{section-Eggers series} \noindent In this section we deal with power series in variables $\underline{x}$ and $y$. A power series will be called {\it quasi-ordinary} if it is a product of a unity and a quasi-ordinary Weierstrass polynomial. 
We outline how to generalize the results of previous sections to quasi-ordinary power series. For that we need the next generalization of Lemma~\ref{derivatives}: \begin{Lemma}\label{derivatives-series} Let $f=u f^*$ and $\frac{\partial ^k}{\partial y^k}f=w g^*,$ where $u$, $w\in\mathbf{K}[[\underline{x},y]]$ are unities, $f^*$, $g^*\in \mathbf{K}[[\underline{x}]][y]$ are Weierstrass polynomials and $1\leq k \leq n=\deg f^*$. Assume that $f^{*}$ is compatible with a pseudo-ball $B$. Then $g^{*}$ is compatible with $B$ and $G^*_B(z)=\frac{(n-k)!}{n!}\frac{d^k}{dz^k}F_B^*(z)$. \end{Lemma} \noindent \begin{proof} Substituting $\underline{x}=0$ we get $f(0,y)=u(0,0)y^n+\cdots$. Hence $\frac{\partial ^k f}{\partial y^k}(0,y)=\frac{n!}{(n-k)!}u(0,0)y^{n-k}+\cdots$. On the other hand $\frac{\partial ^k f}{\partial y^k}(0,y)=w(0,y)g^{*}(0,y)$ which implies that \begin{equation} \label{eqw} w(0,0)=\frac{n!}{(n-k)!}u(0,0). \end{equation} \noindent By the assumption of compatibility of $f^{*}$ we have \[ f^*(\underline{x},\lambda_B(\underline{x}) + z\underline{x}^{h(B)})=F^*_B(z)\underline{x}^{q(f^*,B)}+\cdots . \] Hence $f_1(\underline{x},z):=\underline{x}^{-q(f^*,B)}f(\underline{x},\lambda_B(\underline{x}) + z\underline{x}^{h(B)})$ is a fractional power series such that \begin{equation}\label{eq:11.1} f_1(0,z)=u(0,0)F^*_B(z) . \end{equation} By the chain rule of differentiation \begin{equation}\label{eq:11.2} \frac{\partial ^k f_1}{\partial z^k}(\underline{x},z)= \frac{\partial ^k f}{\partial y^k} (\underline{x},\lambda_B(\underline{x})+z\underline{x}^{h(B)}) \cdot\underline{x}^{k h(B)-q(f^*,B)}. \end{equation} Differentiating (\ref{eq:11.1}) yields $\frac{\partial ^k f_1}{\partial z^k}(0,z)=u(0,0)\frac{d^k}{dz^k}F_B^*(z)$. Thus \begin{equation}\label{eq:11.3} \frac{\partial ^k f_1}{\partial z^k}(\underline{x},z)=u(0,0)\frac{d^k}{dz^k}F_B^*(z)+\hbox{\rm terms of positive degree in $\underline{x}$}. \end{equation} Comparing (\ref{eq:11.2}) and (\ref{eq:11.3}) we get \[ \frac{\partial ^k f}{\partial y^k} (\underline{x},\lambda_B(\underline{x})+z\underline{x}^{h(B)}) = u(0,0)\frac{d^k}{dz^k}F_B^*(z)\cdot\underline{x}^{q(f^*,B)-k h(B)} + \cdots \] By the definition of $g^{*}$, the left hand side of the above equality can be written as \[ w(0,0)\,g^*(\underline{x},\lambda_B(\underline{x})+z\underline{x}^{h(B)})+\cdots, \] which gives, after (\ref{eqw}) \[ \frac{n!}{(n-k)!}u(0,0) g^*(\underline{x},\lambda_B(\underline{x})+z\underline{x}^{h(B)}) = u(0,0)\frac{d^k}{dz^k}F_B^*(z)\cdot\underline{x}^{q(f^*,B)-k h(B)} + \cdots \] \noindent and finishes the proof. \end{proof} \noindent Theorem~\ref{higher-Kuo-Lu}, Corollary~\ref{Kuo-Lu}, Theorem~\ref{higher-discriminants}, Theorem~\ref{pack}, Corollary~\ref{pack1}, Theorem~\ref{dec-red-qo}, Theorem~ \ref{Merle} and Proposition \ref{ppppp}, where $f^{(k)}$ stands for the Weierstrass polynomial of $k$th derivative, remain true for quasi-ordinary power series. For the proofs it is enough to replace the power series by their Weierstrass polynomials and use Lemma~\ref{derivatives-series} instead of Lemma~\ref{derivatives} when required. 
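\noindent For the plane-curve case $d=1$ the statement of Lemma~\ref{derivatives-series} can be checked by a direct computation. The following minimal SymPy (Python) sketch is only an illustration under our own choices: we take the Weierstrass polynomial $f^*=y^3+x^2y$ (as in \cite[Example 5.1]{Casas}) and the unit $u=1+y$; the bar $B$ with $\lambda_B=0$ and $h(B)=1$, as well as the substitution $y\mapsto zx$ used below to read off the $B$-characteristic polynomials, are chosen by hand for this particular example.
\begin{verbatim}
from sympy import (symbols, diff, expand, solve, series,
                   factorial, simplify)

x, y, z = symbols('x y z')

# illustrative data (chosen by hand): a unit u and a Weierstrass polynomial f*
fstar = y**3 + x**2*y          # plane-curve case d = 1
u = 1 + y                      # a unit, since u(0,0) != 0
f = expand(u*fstar)
n, k = 3, 2

dkf = diff(f, y, k)            # k-th derivative of f with respect to y
# Weierstrass polynomial g* of dkf: here deg g* = n - k = 1, so g* = y - r(x),
# where r(x) is the unique root of dkf tending to 0 as x -> 0
r = [rt for rt in solve(dkf, y) if rt.subs(x, 0) == 0][0]
gstar = y - r

# B-characteristic polynomials along the bar B with lambda_B = 0, h(B) = 1:
# substitute y -> z*x and read off the coefficient of the lowest power of x
FB = expand(fstar.subs(y, z*x)).coeff(x, 3)  # F*_B(z) = z^3 + z, q(f*,B) = 3
GB = series(gstar.subs(y, z*x), x, 0, 2).removeO().coeff(x, 1)  # G*_B(z)

# expected output: 0
print(simplify(GB - factorial(n - k)/factorial(n)*diff(FB, z, k)))
\end{verbatim}
\noindent The printed difference is $0$, i.e. $G^*_B(z)=\frac{(n-k)!}{n!}\frac{d^k}{dz^k}F^*_B(z)=z$ in this example; consistently with the proof of Lemma~\ref{derivatives-series}, one also reads off $q(g^*,B)=q(f^*,B)-k\,h(B)=1$.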
\noindent {\small Evelia Rosa Garc\'{\i}a Barroso\\ Departamento de Matem\'aticas, Estad\'{\i}stica e I.O.\\ Secci\'on de Matem\'aticas, Universidad de La Laguna\\ Apartado de Correos 456\\ 38200 La Laguna, Tenerife, Espa\~na\\ e-mail: [email protected]} \noindent {\small Janusz Gwo\'zdziewicz\\ Institute of Mathematics\\ Pedagogical University of Krak\'ow\\ Podchor\c a{\accent95 z}ych 2\\ PL-30-084 Cracow, Poland\\ e-mail: [email protected]} \end{document}
\begin{document}
\title{Excluding false negative error in certification of quantum channels}
\begin{abstract}
Certification of quantum channels is based on quantum hypothesis testing and also involves preparation of an input state and choosing the final measurement. This work primarily focuses on the scenario when the false negative error cannot occur, even if it leads to the growth of the probability of false positive error. We establish a condition when it is possible to exclude false negative error after a finite number of queries to the quantum channel in parallel, and we provide an upper bound on the number of queries. On top of that, we find a class of channels which allow for excluding false negative error after a finite number of queries in parallel, but cannot be distinguished unambiguously. Moreover, it will be proved that the parallel certification scheme is always sufficient; however, the number of steps may be decreased by the use of an adaptive scheme. Finally, we consider examples of certification of various classes of quantum channels and measurements.
\end{abstract}
\section{Introduction}
Being deceived is not a nice experience. People have been developing plenty of methods to protect themselves against being cheated, and one of these methods concerns the verification of objects, including quantum ones. The cornerstone for theoretical studies on discrimination of quantum objects was laid by Helstrom~\cite{helstrom1976quantum} a few decades ago. In the era of Noisy Intermediate-Scale Quantum (NISQ) devices \cite{preskill2018quantum,bharti2021noisy}, assuring the correctness of components is undeniably in the spotlight. A broad review of multipronged modern methods of certification as well as benchmarking of quantum states and processes can be found in the recent paper~\cite{eisert2020quantum}. For a more introductory tutorial on the theory of system certification we refer the reader to~\cite{kliesch2021theory}. Verification of quantum processes is often studied in the context of specific elements of quantum information processing tasks. Protocols for efficient certification of quantum processes, such as quantum gates and circuits, were recently studied in~\cite{liu2020efficient,zeng2020quantum,zhu2020efficient}.
Let us introduce the most general problem of verification studied in this work. Assume there are two known quantum channels and one of them is secretly chosen. Then, we are given the secretly chosen channel to verify which of the two channels it is. We are allowed to prepare any input state and apply the given channel on it. Finally, we can prepare any quantum measurement and measure the output state. Based on the measurement's outcome we decide which of the two channels was secretly chosen. In this work we focus on the case when we are promised which of the channels is given. After performing some certification procedure we can either agree that the channel was the promised one or claim that we were cheated. We want to assure that we will always realize when we are cheated. It may happen, though, that we appear to be too suspicious and claim that we were cheated when we were not.
There are three major theoretical approaches towards verification of quantum channels, called minimum error discrimination, unambiguous discrimination and certification. All these three approaches can be generalized to the multiple-shot case, that is, when the given channel can be used multiple times in various configurations.
The most straightforward possibility is the parallel scheme and the most sophisticated is the adaptive scheme (where we are allowed to use any processing between the uses of the given channel). The first approach is called minimum error discrimination (a.k.a. distinguishability or symmetric discrimination) and makes use of the distance between quantum channels expressed by the use of the diamond norm. In this scenario one wants to minimize the probability of making the erroneous decision using the bound on this probability given by the Holevo-Helstrom theorem \cite{helstrom1976quantum,watrous2018theory}. Single-shot discrimination of unitary channels and von Neumann measurements were studied in~\cite{ziman2010single,bae2015discrimination} and \cite{ji2006identification,sedlak2014optimal,puchala2018strategies} respectively. Parallel discrimination of quantum channels was studied eg. in \cite{duan2016parallel,cao2016minimal}. It appeared that parallel discrimination scheme is optimal in the case of distinguishability of unitary channels~\cite{chiribella2008memory} and von Neumann measurements~\cite{puchala2021multiple}. In some cases however, the use of adaptive discrimination scheme can significantly improve the certification~\cite{harrow2010adaptive,krawiec2020discrimination}. Advantages of the use of adaptive discrimination scheme in the asymptotic regime were studied in~\cite{salek2020adaptive}. Fundamental and ultimate limits for quantum channel discrimination were derived in \cite{pirandola2019fundamental,zhuang2020ultimate}. The works \cite{katariya2020evaluating,wang2019resource} address the problem of distinguishability of quantum channels in the context of resource theory. In the second approach, that is unambiguous discrimination, there are three possible outcomes. Two of them designate quantum channels while the third option is the inconclusive result. In this approach, when the result indicated which channel was given, we know it for sure. There is a chance however, that we will obtain an inconclusive answer. Unambiguous discrimination of quantum channels was considered in~\cite{wang2006unambiguous}, while unambiguous discrimination of von Neumann measurements was explored in~\cite{puchala2021multiple}. Studies on unambiguous discrimination of quantum channels took a great advantage of unambiguous discrimination of quantum states, which can be found eg. in~\cite{feng2004unambiguous,zhang2006unambiguous, herzog2005optimum,bergou2006optimal,herzog2009discrimination}. The third approach, known as certification or asymmetric discrimination, is based on hypothesis testing. We are promised to be given one of the two channels and associate this channel with the null hypothesis, $H_0$. The other channel is associated with the alternative hypothesis, $H_1$. When making a decision whether to accept or to reject the null hypothesis, two types of errors may occur, that is we can come across false positive and false negative errors. In this work we consider the situation when we want to assure that false negative error will not occur, even if the probability of false positive error grows. A similar task of minimizing probability of false negative error having fixed bound on the probability of false positive was studied in the case of von Neumann measurements in~\cite{lewandowska2021optimal}. Certification of quantum channels was studied in the asymptotic regime e.g. in~\cite{wilde2020amortized,salek2020adaptive,katariya2020evaluating}. 
It should come as no surprise that in some cases perfect verification is not possible with any finite number of steps. Conditions for perfect minimum error discrimination of quantum operations were derived in~\cite{duan2009perfect}. A similar condition for unambiguous discrimination was proved in~\cite{wang2006unambiguous}. However, no such conditions have been stated for certification. In this work we derive a condition under which we can exclude the false negative error after a finite number of uses in parallel. This condition holds for arbitrary quantum channels and is expressed in terms of the Kraus operators of these channels. We will provide an example of channels which can be certified in a finite number of queries in parallel, but cannot be distinguished unambiguously. Moreover, we will show that, in contrast to discrimination of quantum channels \cite{harrow2010adaptive,krawiec2020discrimination}, the parallel certification scheme is always sufficient for certification, although the number of uses of the certified channel may not be optimal. On top of that, we will consider certification of quantum measurements and focus on the class of measurements with rank-one effects. The detailed derivation of the upper bound for the probability of the false positive error will be presented for SIC POVMs. This work is organized as follows. After preliminaries in Section~\ref{sec:preliminaries}, we present our main result, that is, the condition under which excluding the false negative error is possible in a finite number of uses in parallel, in Theorem~\ref{th:condition_for_parallel_certification} in Section~\ref{sec:condition_parallel_certification}. Certification of quantum measurements is discussed in Section~\ref{sec:certification_of_povms}. Then, in Section~\ref{sec:adaptive_certification} we study adaptive certification and the Stein setting. The condition under which excluding the false negative error is possible in the adaptive scheme is stated therein as Theorem~\ref{th:equivalence_parallel_adaptive}. Finally, conclusions can be found in Section~\ref{sec:conclusions}. \section{Preliminaries}\label{sec:preliminaries} Let $\mathcal{D}_d$ denote the set of quantum states of dimension $d$, that is the set of positive semidefinite operators having trace equal to one. Throughout this paper quantum states will be denoted by lower-case Greek letters, usually $\rho, \sigma, \tau$. For any state $\rho \in \mathcal{D}_d$ we can write its spectral decomposition as $\rho = \sum_i p_i \proj{\lambda_i}$. Having a set of quantum states $\{\rho_1, \ldots, \rho_m\}$ with spectral decompositions $\rho_1 = \sum_{i_1} p_{i_1} \proj{\lambda_{i_1}}, \ldots, \rho_m = \sum_{i_m} p_{i_m} \proj{\lambda_{i_m}}$ respectively, their \emph{support} is defined as $\mathrm{supp}(\rho_1, \ldots, \rho_m) \coloneqq \mathrm{span} \{ \ket{\lambda_{i_j}}: p_{i_j} > 0 \}$. The set of unitary matrices of dimension $d$ will be denoted by $\mathcal{U}_d$. Quantum channels are linear maps which are completely positive and trace preserving. In this work we will often take advantage of the Kraus representations of channels. Let \begin{equation} \Phi_0(X) := \sum_{i=1}^k E_i X E_i^\dagger, \quad \Phi_1(X) := \sum_{j=1}^l F_j X F_j^\dagger \end{equation} be the Kraus representations of the channels that will correspond to the null and the alternative hypothesis, respectively. The sets of operators $\{E_i\}_i$ and $\{F_j\}_j$ are called Kraus operators of the channels $\Phi_0$ and $\Phi_1$, respectively.
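For readers who wish to experiment with the constructions below, the following short sketch (an illustration only, written in Python with NumPy; the qubit dephasing channel used here is a hypothetical example and not an object studied in this paper) shows how a channel given by its Kraus operators, and its extension $\Phi \otimes {\rm 1\hspace{-0.9mm}l}$, act on a state.
\begin{verbatim}
import numpy as np

def apply_channel(kraus_ops, rho):
    # Phi(rho) = sum_i E_i rho E_i^dagger
    return sum(E @ rho @ E.conj().T for E in kraus_ops)

def extend_by_identity(kraus_ops, dim_ancilla):
    # Kraus operators of Phi tensor identity on a compound system
    I = np.eye(dim_ancilla)
    return [np.kron(E, I) for E in kraus_ops]

# hypothetical example: a qubit dephasing channel
p = 0.3
E1 = np.sqrt(1 - p) * np.eye(2)
E2 = np.sqrt(p) * np.diag([1.0, -1.0])

# maximally entangled input state |psi> = (|00> + |11>)/sqrt(2)
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho_in = np.outer(psi, psi.conj())

rho_out = apply_channel(extend_by_identity([E1, E2], 2), rho_in)
print(np.trace(rho_out).real)  # trace preservation: prints 1.0
\end{verbatim}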
We will use the notation $\mathrm{supp}(\Phi_0) \coloneqq \mathrm{span}\{E_i \}_i$, $\mathrm{supp}(\Phi_1) \coloneqq \mathrm{span}\{F_j \}_j$, to denote the supports of quantum channels. Moreover, the notation ${\rm 1\hspace{-0.9mm}l}$ will be used for the identity channel. The most general quantum measurements, also known as POVMs (positive operator valued measures), are defined as collections of positive semidefinite operators $\mathcal{P} = \{ M_1, \ldots, M_m \}$ which fulfill the condition $\sum_{i=1}^m M_i = {\rm 1\hspace{-0.9mm}l}_d$, where ${\rm 1\hspace{-0.9mm}l}_d$ denotes the identity matrix of dimension $d$. When a quantum state $\rho$ is measured by the measurement $\mathcal{P}$, the label $i$ is obtained with probability $\Tr (M_i \rho)$ and the state $\rho$ ceases to exist. A special class of quantum measurements are projective von Neumann measurements. These POVMs have rank-one effects of the form $\{ \proj{u_1}, \ldots, \proj{u_d} \}$, where the vectors $\{\ket{u_i}\}_{i=1}^d$ form an orthonormal basis and therefore they are columns of some unitary matrix $U \in\mathcal{U}_d$. Now we proceed to describing the detailed scheme of certification. There are two quantum channels: $\Phi_0$ and $\Phi_1$. We are promised that we are given $\Phi_0$, but we are not certain of it and we want to verify this promise using hypothesis testing. We associate the channel $\Phi_0$ with the null hypothesis $H_0$ and we associate the other channel $\Phi_1$ with the alternative hypothesis $H_1$. We consider the following scheme. We are allowed to prepare any (possibly entangled) input state and apply the given channel to it. Then, we prepare a binary measurement $\{\Omega_0, {\rm 1\hspace{-0.9mm}l}- \Omega_0\}$ and measure the output state. If we obtain the label associated with the effect $\Omega_0$, then we decide that the certified channel was $\Phi_0$ and we accept the null hypothesis. If we get the label associated with the effect ${\rm 1\hspace{-0.9mm}l}- \Omega_0$, then we decide that the certified channel was $\Phi_1$ and therefore we reject the null hypothesis. The aim of certification is to make a decision whether to accept or to reject $H_0$. While making such a decision one can encounter two types of errors. The false positive error (also known as type I error) happens when we reject the null hypothesis when in fact it was true. The converse situation, that is accepting the null hypothesis when the alternative hypothesis was correct, is known as the false negative (or type II) error. In this work we will focus on the situation when the probability of the false negative error equals zero and we want to minimize the probability of the false positive error. Let us now take a closer look at the scheme of the entanglement-assisted single-shot certification procedure. We begin with preparing an input state $\ket{\psi}$ on the compound space. Then, we apply the certified channel extended by the identity channel to the input state, obtaining as the output either the state $\rho_0^{\ket{\psi}} = \left( \Phi_0 \otimes {\rm 1\hspace{-0.9mm}l} \right) (\proj{\psi})$, if the given channel was $\Phi_0$, or the state $\rho_1^{\ket{\psi}} = \left( \Phi_1 \otimes {\rm 1\hspace{-0.9mm}l} \right) (\proj{\psi})$, if the given channel was $\Phi_1$.
Eventually, we perform the measurement $\{\Omega_0, {\rm 1\hspace{-0.9mm}l} - \Omega_0 \}$, where the effect $\Omega_0$ accepts the hypothesis $H_0$ and the effect ${\rm 1\hspace{-0.9mm}l}-\Omega_0$ accepts the alternative hypothesis $H_1$. Assuming that the input state $\ket{\psi}$ and the measurement effect $\Omega_0$ have been fixed, the probability of making the false positive error is given by \begin{equation}\label{def_of_p1_conditional_single_shot} p_1 \left(\ket{\psi}, \Omega_0 \right) \coloneqq \Tr \left( ({\rm 1\hspace{-0.9mm}l} - \Omega_0) \rho_0^{\ket{\psi}} \right) = 1- \Tr \left( \Omega_0 \rho_0^{\ket{\psi}} \right). \end{equation} In a similar manner, the probability of making the false negative error is \begin{equation} p_2 \left( \ket{\psi}, \Omega_0 \right) \coloneqq \Tr \left( \Omega_0 \rho_1^{\ket{\psi}} \right). \end{equation} We will be interested in the situation when the probability of the false negative error is equal to zero and we want to minimize the probability of the false positive error. Therefore, we introduce the notation \begin{equation}\label{eq:def_of_p1} p_1 \coloneqq \min_{\ket{\psi}, \Omega_0} \left\{p_1 \left(\ket{\psi}, \Omega_0 \right): \ p_2 \left(\ket{\psi}, \Omega_0 \right) = 0 \right\} \end{equation} for the minimized probability of the false positive error in the single-shot scenario. For a given $\epsilon >0$, we say that the quantum channel $\Phi_0$ \emph{can be $\epsilon$-certified against} the channel $\Phi_1$ if there exist an input state $\ket{\psi}$ and a measurement effect $\Omega_0$ such that $p_2 \left(\ket{\psi}, \Omega_0 \right) = 0$ and $p_1 \left(\ket{\psi}, \Omega_0 \right) \leq \epsilon$. In other words, the quantum channel $\Phi_0$ can be $\epsilon$-certified against another channel $\Phi_1$ if we can assure that no false negative error will occur and the probability of the false positive error is no greater than $\epsilon$. When performing the certification of quantum channels, we can use the channels many times in various configurations. Now we proceed to introducing the notation needed for studying parallel and adaptive certification schemes. \subsection{Parallel certification scheme} Let $N$ denote the number of uses of the quantum channel in parallel. A schematic representation of the scenario of parallel certification is depicted in Figure~\ref{fig:parallal_scheme}. In this scheme we consider certifying tensor products of the channels. In other words, parallel certification of the channels $\Phi_0$ and $\Phi_1$ can be seen as certifying the channels $\Phi_0^{\otimes N}$ and $\Phi_1^{\otimes N}$ for some natural number $N$. \begin{figure} \caption{Parallel certification scheme} \label{fig:parallal_scheme} \end{figure} Let $\ket{\psi}$ be the input state to the certification procedure. After applying the channel $\Phi_0$ $N$ times in parallel, we obtain the output state \begin{equation} \sigma_0^{N, \ket{\psi}} = \left(\Phi_0^{\otimes N} \otimes {\rm 1\hspace{-0.9mm}l}\right) (\proj\psi), \end{equation} if the channel was $\Phi_0$, and similarly \begin{equation} \sigma_1^{N, \ket{\psi}} = \left(\Phi_1^{\otimes N} \otimes {\rm 1\hspace{-0.9mm}l}\right) (\proj\psi), \end{equation} if the channel was $\Phi_1$.
In the same spirit, let \begin{equation}\label{eq:def_of_errors_in_parallel_scheme} p_1^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right) = \Tr \left(({\rm 1\hspace{-0.9mm}l} - \Omega_0) \sigma_0^{N, \ket{\psi}}\right) , \quad p_2^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right) = \Tr\left(\Omega_0 \sigma_1^{N, \ket{\psi}} \right) \end{equation} be the probabilities of the false positive and false negative errors, respectively. When $N=1$, we arrive at single-shot certification. Therefore we will omit the upper index and simply write $p_1 \left(\ket{\psi}, \Omega_0 \right)$ and $p_2 \left(\ket{\psi}, \Omega_0 \right)$. We introduce the notation \begin{equation}\label{eq:def_of_p1_parallel} p_1^{\mathbb{P}, N} \coloneqq \min_{\ket{\psi}, \Omega_0} \left\{p_1^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right): \ p_2^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right) = 0 \right\} \end{equation} for the minimized probability of the false positive error in the parallel scheme. We say that a quantum channel $\Phi_0$ \emph{can be certified against} $\Phi_1$ \emph{in the parallel scheme}, if for every $\epsilon >0$ there exist a natural number $N$, an input state $\ket{\psi}$ and a measurement effect $\Omega_0$ such that $p_2^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right) = 0$ and $p_1^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0\right) \leq \epsilon$. Let us now elaborate a bit on the number of steps needed for certification. Assume that we have a fixed upper bound on the probability of the false positive error, $\epsilon >0$. We will be interested in calculating the minimal number of queries, $N_\epsilon$, for which $p_2^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right) = 0$ and $p_1^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0\right) \leq \epsilon$ for some input state $\ket{\psi}$ and measurement effect $\Omega_0$. Such a number, $N_\epsilon$, will be called the \emph{minimal number of steps needed for parallel certification}. \subsection{Adaptive certification scheme} The adaptive certification scheme allows for the use of processing between the uses of the certified channel, therefore this procedure is more complex than parallel certification. However, when the processings only swap the subsystems, the adaptive scheme may reduce to the parallel one. \begin{figure} \caption{Adaptive certification scheme. The processings $\Xi_1, \ldots, \Xi_{N-1}$ are applied between consecutive uses of the certified channel.} \label{fig:adaptive_scheme} \end{figure} Assume, as previously, that $\ket{\psi}$ is the input state to the certification procedure in which the certified channel is used $N$ times and any processing is allowed between the uses of this channel. The scheme of this procedure is presented in Figure~\ref{fig:adaptive_scheme}. Having the input state $\ket\psi$ on the compound register, we apply the certified channel (denoted by the black box with a question mark) to one part of it. Having the output state, we can perform some processing $\Xi_1$ and thereby get prepared for the next use of the certified channel. Then again, we apply the certified channel to one register of the prepared state and again, we can perform a processing $\Xi_2$. We repeat this procedure $N-1$ times. After the $N$-th use of the certified channel we obtain the state either $\tau_0^{N, \ket{\psi}}$, if the channel was $\Phi_0$, or $\tau_1^{N, \ket{\psi}}$, if the channel was $\Phi_1$. Then, we prepare a global measurement $\{\Omega_0, {\rm 1\hspace{-0.9mm}l} - \Omega_0\}$ and apply it to the output state.
Let \begin{equation} p_1^{\mathbb{A}, N} \left(\ket{\psi}, \Omega_0 \right) = \Tr \left(({\rm 1\hspace{-0.9mm}l} - \Omega_0) \tau_0^{N, \ket{\psi}}\right) , \quad p_2^{\mathbb{A}, N} \left(\ket{\psi}, \Omega_0 \right) = \Tr\left(\Omega_0 \tau_1^{N, \ket{\psi}} \right) \end{equation} be the probabilities of the false positive and false negative errors in the adaptive scheme, respectively, for a fixed input state and measurement effect. When $N=1$, we will omit the upper index and simply write $p_1 \left(\ket{\psi}, \Omega_0 \right)$ and $p_2 \left(\ket{\psi}, \Omega_0 \right)$. We say that a quantum channel $\Phi_0$ \emph{can be certified against} $\Phi_1$ \emph{in the adaptive scheme}, if for every $\epsilon >0$ there exist a natural number $N$, an input state $\ket{\psi}$ and a measurement effect $\Omega_0$ such that $p_2^{\mathbb{A}, N} \left(\ket{\psi}, \Omega_0 \right) = 0$ and $p_1^{\mathbb{A}, N} \left(\ket{\psi}, \Omega_0\right) \leq \epsilon$. For a fixed upper bound on the probability of the false positive error, $\epsilon$, we introduce the \emph{minimal number of steps needed for adaptive certification}, $N_\epsilon$, as the minimal number of steps after which $p_2^{\mathbb{A}, N} \left(\ket{\psi}, \Omega_0 \right) = 0$ and $p_1^{\mathbb{A}, N} \left(\ket{\psi}, \Omega_0\right) \leq \epsilon$ for some input state $\ket{\psi}$ and measurement effect $\Omega_0$. \section{Parallel certification}\label{sec:condition_parallel_certification} Not all quantum channels can be discriminated perfectly after a finite number of queries. Conditions for perfect discrimination were stated in the work~\cite{duan2009perfect}. Similar conditions for unambiguous discrimination were proved in~\cite{wang2006unambiguous}. In this section we will complement these results with a condition concerning parallel certification. More specifically, we will prove a simple necessary and sufficient condition under which a quantum channel $\Phi_0$ can be certified against some other channel $\Phi_1$. As the condition utilizes the notion of the support of a quantum channel, recall that it is defined as the span of its Kraus operators. The condition will be stated as Theorem~\ref{th:condition_for_parallel_certification}; its proof will be presented after introducing two technical lemmas. In fact, the statement of Theorem~\ref{th:condition_for_parallel_certification} is a bit more general, that is, it concerns the situation when the alternative hypothesis corresponds to a set of channels $ \left\{ \Phi_1, \ldots , \Phi_m \right\}$ having Kraus operators $\left\{ F^{(1)}_{{j_1}} \right\}_{{j_1}}, \ldots, \left\{ F^{(m)}_{{j_m}} \right\}_{{j_m}}$, respectively. More precisely, the alternative hypothesis corresponds to the situation when every black box contains one of the channels $ \left\{ \Phi_1, \ldots , \Phi_m \right\}$, but not necessarily the same one. We will use the notation $\mathrm{supp} \left(\Phi_1, \ldots , \Phi_m \right) \coloneqq \mathrm{span} \left\{ F^{(1)}_{{j_1}} , \ldots , F^{(m)}_{{j_m}} \right\}_{ {j_1}, \ldots,{j_m}}$. \begin{theorem}\label{th:condition_for_parallel_certification} A quantum channel $\Phi_0$ can be certified against quantum channels $\Phi_1, \ldots, \Phi_m$ in the parallel scheme if and only if $\mathrm{supp}(\Phi_0) \not\subseteq \mathrm{supp} \left(\Phi_1, \ldots, \Phi_m \right)$.
Moreover, to ensure that the probability of the false positive error is no greater than $\epsilon$, it suffices to use $N \geq \left\lceil \frac{\log \epsilon}{\log p_1}\right\rceil$ queries in parallel, so the number of steps needed for parallel certification satisfies $N_\epsilon \leq \left\lceil \frac{\log \epsilon}{\log p_1}\right\rceil$, where $p_1$ is the single-shot probability of the false positive error defined in Eq.~\eqref{eq:def_of_p1}. \end{theorem} Before presenting the proof of this theorem we will introduce two lemmas. The proofs of the lemmas are postponed to Appendix~\ref{app:proofs_of_lemmas}. Lemma~\ref{lm:inclusion_of_supports} states that if the inclusion does not hold for the supports of the quantum channels, then the inclusion also does not hold for the supports of the output states, assuming that the input state has full Schmidt rank. The proof of Lemma~\ref{lm:inclusion_of_supports} is based on the proof of~\cite[Theorem~$1$]{wang2006unambiguous}, which studies unambiguous discrimination among quantum operations. \begin{lemma}\label{lm:inclusion_of_supports} Let $\{\ket{a_t}\}_t$ and $\{\ket{b_t}\}_t$ be two orthonormal bases and $\ket{\psi} \coloneqq \sum_t \lambda_t \ket{a_t} \ket{b_t}$, where $\lambda_t>0$ for every $t$. Let also $\rho_0^{\ket{\psi}} = \left(\Phi_0 \otimes {\rm 1\hspace{-0.9mm}l}\right) (\proj\psi)$ and $\rho_j^{\ket{\psi}} = \left(\Phi_j \otimes {\rm 1\hspace{-0.9mm}l}\right) (\proj\psi)$ for $j=1, \ldots, m$. If $\mathrm{supp}(\Phi_0) \not\subseteq \mathrm{supp}(\Phi_1, \ldots ,\Phi_m)$, then $\mathrm{supp}\left(\rho_0^{\ket{\psi}}\right) \not\subseteq \mathrm{supp}\left(\rho_1^{\ket{\psi}} , \ldots , \rho_m^{\ket{\psi}}\right)$. \end{lemma} Lemma~\ref{lm:inclusion_of_supports_2} also concerns inclusions of supports. It states that if the inclusion of supports does not hold for some output states, then it also does not hold for the supports of the channels. \begin{lemma}\label{lm:inclusion_of_supports_2} With the notation as above, if there exist a natural number $N$ and an input state $\ket{\psi}$ such that $\mathrm{supp}\left( \sigma_0^{N, \ket{\psi}} \right) \not\subseteq \mathrm{supp} \left( \sigma_1^{N, \ket{\psi}}, \ldots, \sigma_m^{N, \ket{\psi}} \right)$, then $\mathrm{supp} (\Phi_0) \not\subseteq \mathrm{supp} \left(\Phi_1, \ldots, \Phi_m\right)$. \end{lemma} Finally, we are in a position to present the proof of Theorem~\ref{th:condition_for_parallel_certification}. \begin{proof}[Proof of Theorem~\ref{th:condition_for_parallel_certification}] ($\impliedby$) Let $\mathrm{supp}(\Phi_0) \not\subseteq \mathrm{supp} \left(\Phi_1, \ldots, \Phi_m \right)$. By Lemma~\ref{lm:inclusion_of_supports} this implies $\mathrm{supp}\left(\rho_0^{\ket{\psi}} \right) \not\subseteq \mathrm{supp}\left(\rho_1^{\ket{\psi}} , \ldots , \rho_m^{\ket{\psi}}\right)$, where the input state is $\ket{\psi}=\sum_t \lambda_t \ket{a_t} \ket{b_t}$ with all $\lambda_t > 0$. Hence we can always find a state $\ket{\phi_0}$ for which \begin{equation} \ket{\phi_0} \not\perp \mathrm{supp}\left(\rho_0^{\ket{\psi}} \right) \quad \textrm{and} \quad \ket{\phi_0} \perp \mathrm{supp}\left(\rho_1^{\ket{\psi}} , \ldots , \rho_m^{\ket{\psi}}\right), \end{equation} and therefore \begin{equation} \bra{\phi_0}\rho_0^{\ket{\psi}} \ket{\phi_0} >0 \quad \textrm{and} \quad \bra{\phi_0}\rho_i^{\ket{\psi}} \ket{\phi_0} = 0 \end{equation} for $i = 1 , \ldots, m$.
Now we consider the certification scheme by taking the measurement with effects $\{\Omega_0, {\rm 1\hspace{-0.9mm}l} - \Omega_0 \}$. In particular, we can take $\Omega_0 \coloneqq \proj{\phi_0}$ to be a rank-one operator. We calculate \begin{equation} \begin{split} &\tr \left(\Omega_0 \rho_0^{\ket{\psi}} \right) = \bra{\phi_0} \rho_0^{\ket{\psi}} \ket{\phi_0} >0, \\ &p_2 \left( \ket{\psi}, \Omega_0 \right) = \sum_{i=1}^{m} \tr \left(\Omega_0 \rho_i^{\ket{\psi}} \right) = \sum_{i=1}^{m} \bra{\phi_0} \rho_i^{\ket{\psi}} \ket{\phi_0} =0, \\ &p_1 \left( \ket{\psi}, \Omega_0 \right) = \tr \left(({\rm 1\hspace{-0.9mm}l} - \Omega_0) \rho_0^{\ket{\psi}} \right) = 1- \bra{\phi_0} \rho_0^{\ket{\psi}} \ket{\phi_0} < 1. \end{split} \end{equation} Hence, taking the input state $\ket{\psi}^{\otimes N}$ and rejecting the null hypothesis only when all $N$ single-copy measurements reject it (that is, taking the rejecting effect $({\rm 1\hspace{-0.9mm}l} - \Omega_0)^{\otimes N}$), the false negative error remains excluded, and after sufficiently many uses, $N$, of the certified channel in parallel (namely when $N \geq \left\lceil \frac{\log \epsilon}{\log p_1 \left( \ket{\psi}, \Omega_0 \right)}\right\rceil$) we obtain that $\tr \left(\left({\rm 1\hspace{-0.9mm}l} - \Omega_0\right)^{\otimes N} {\left(\rho_0^{\ket{\psi}}\right)}^{\otimes N} \right) = p_1 \left( \ket{\psi}, \Omega_0 \right)^N \leq \epsilon$ for any fixed positive $\epsilon$. Therefore after $N$ queries the false negative error is excluded while the probability of the false positive error is no greater than $\epsilon$. ($\implies$) Assume that $\Phi_0$ can be certified against $\Phi_1, \ldots , \Phi_m$ in the parallel scenario. This means that there exist a natural number $N$, an input state $\ket{\psi}$ and a positive operator (measurement effect) $\Omega_0$ on the composite system such that \begin{equation} \begin{split} &p_1^{\mathbb{P}, N} \left( \ket{\psi}, \Omega_0\right) = 1- \tr \left(\Omega_0 \left( \Phi_0^{\otimes N} \otimes {\rm 1\hspace{-0.9mm}l} \right)( \proj{\psi})\right) \leq \epsilon <1, \\ &\tr \left(\Omega_0 \left( \Phi_{i_1} \otimes \ldots \otimes \Phi_{i_N} \otimes {\rm 1\hspace{-0.9mm}l} \right)( \proj{\psi})\right) =0, \quad \forall_{i_1, \ldots , i_N \in \{1, \ldots,m\}}. \end{split} \end{equation} Therefore $\tr \left(\Omega_0 \left( \Phi_0^{\otimes N} \otimes {\rm 1\hspace{-0.9mm}l} \right)( \proj{\psi})\right) >0$ and thus \begin{equation} \begin{split} \Omega_0 &\not\perp \mathrm{supp} \left(\left( \Phi_0^{\otimes N} \otimes {\rm 1\hspace{-0.9mm}l} \right)\left( \proj{\psi}\right)\right) = \mathrm{span} \left\{ \left( E_{i_1} \otimes \ldots \otimes E_{i_N} \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi}\right\}_{i_1, \ldots, i_N}, \\ \Omega_0 &\perp \mathrm{span} \left\{ \left( K_{l_1} \otimes \ldots \otimes K_{l_N} \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi}\right\}_{l_1, \ldots, l_N}, \end{split} \end{equation} where $\{K_l\}_l = \left\{ F^{(1)}_{{j_1}} , \ldots , F^{(m)}_{{j_m}} \right\}_{ {j_1}, \ldots,{j_m}} $ is the set of all Kraus operators of the channels $\Phi_i$ from the alternative hypothesis. Hence \begin{equation}\label{eq:to_be_contradicted} \mathrm{span} \left\{ \left( E_{i_1} \otimes \ldots \otimes E_{i_N} \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi}\right\}_{i_1, \ldots, i_N} \not\subseteq \mathrm{span} \left\{ \left( K_{l_1} \otimes \ldots \otimes K_{l_N} \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi}\right\}_{l_1, \ldots, l_N}. \end{equation} The remainder of the proof follows directly from Lemma~\ref{lm:inclusion_of_supports_2}. \end{proof} It is worth mentioning that in the above proof the measurement effect $\Omega_0$ is a rank-one projection operator. This is sufficient to prove that the quantum channel $\Phi_0$ can be certified against $\Phi_1$ in the parallel scheme, but such a choice is, in most cases, not optimal.
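To make the above construction concrete, the following sketch (an illustrative NumPy computation, not part of the formal argument; the dephasing-versus-identity pair of channels is a hypothetical example) builds the output states $\rho_0^{\ket{\psi}}$ and $\rho_1^{\ket{\psi}}$ from Kraus operators, extracts a vector $\ket{\phi_0}$ which is not orthogonal to $\mathrm{supp}\left(\rho_0^{\ket{\psi}}\right)$ but is orthogonal to $\mathrm{supp}\left(\rho_1^{\ket{\psi}}\right)$, and verifies that the rank-one effect $\Omega_0 = \proj{\phi_0}$ yields $p_2 = 0$ and $p_1 < 1$.
\begin{verbatim}
import numpy as np

def output_vectors(kraus_ops, psi):
    # vectors (E_i x 1)|psi> spanning supp((Phi x 1)(|psi><psi|))
    d_anc = len(psi) // kraus_ops[0].shape[1]
    I = np.eye(d_anc)
    return [np.kron(E, I) @ psi for E in kraus_ops]

def orth_basis(vectors, tol=1e-10):
    # orthonormal basis of the span of the given vectors
    U, s, _ = np.linalg.svd(np.column_stack(vectors), full_matrices=False)
    return U[:, s > tol]

# hypothetical example: Phi_0 = 1/2 id + 1/2 Z.Z (dephasing), Phi_1 = identity
Z = np.diag([1.0, -1.0])
kraus0 = [np.sqrt(0.5) * np.eye(2), np.sqrt(0.5) * Z]
kraus1 = [np.eye(2)]

# full-Schmidt-rank input: the maximally entangled state
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

S0 = orth_basis(output_vectors(kraus0, psi))   # supp(rho_0)
S1 = orth_basis(output_vectors(kraus1, psi))   # supp(rho_1)

# project supp(rho_0) onto the orthogonal complement of supp(rho_1)
P1_perp = np.eye(len(psi)) - S1 @ S1.conj().T
cand = P1_perp @ S0
col = cand[:, np.argmax(np.linalg.norm(cand, axis=0))]
phi0 = col / np.linalg.norm(col)               # plays the role of |phi_0>

Omega0 = np.outer(phi0, phi0.conj())
rho0 = sum(np.outer(v, v.conj()) for v in output_vectors(kraus0, psi))
rho1 = sum(np.outer(v, v.conj()) for v in output_vectors(kraus1, psi))

p2 = np.trace(Omega0 @ rho1).real              # false negative: 0.0
p1 = 1 - np.trace(Omega0 @ rho0).real          # false positive: 0.5 < 1
print(p2, p1)
\end{verbatim}
Repeating this single-shot strategy $N$ times in parallel keeps the false negative error excluded and drives the probability of the false positive error down to $p_1^N$, in accordance with the bound in Theorem~\ref{th:condition_for_parallel_certification}.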
\begin{remark} If we consider the hypothesis testing scenario in which we have to decide whether the unknown channel is either $\Phi_0$ or some fixed $\Phi_i$, with $\Phi_i \in \{ \Phi_1, \ldots, \Phi_m\}$ ({\emph{i.e.\/}} we have either $\Phi_0^{\otimes N}$ or $\Phi_i^{\otimes N}$), then the quantum channel $\Phi_0$ can be certified against $\Phi_{1}, \ldots , \Phi_m$ in the parallel scheme if and only if \begin{equation} \mathrm{supp} \left( \Phi_0 \right) \not\subseteq \mathrm{supp} \left( \Phi_j \right) \end{equation} for every $j \in \{1, \ldots, m\}$. \end{remark} This remark follows directly from considering certification with simple alternative hypotheses. In the remainder of this section we will discuss two examples. The first example shows that the possibility of certifying quantum channels in the parallel scheme does not imply that they can be discriminated unambiguously. We will provide an explicit example of mixed-unitary channels which fulfill the condition from Theorem~\ref{th:condition_for_parallel_certification}, and therefore can be certified in the parallel scheme, but cannot be discriminated unambiguously. In the second example we will consider the situation when the channel associated with the $H_1$ hypothesis is the identity channel and derive an upper bound on the probability of the false positive error. \subsection{Channels which cannot be discriminated unambiguously but still can be certified} In this subsection we will give an example of a class of channels which cannot be discriminated unambiguously, but can be certified with a finite number of uses in the parallel scheme. The work~\cite{wang2006unambiguous} presents a condition under which quantum channels can be unambiguously discriminated with a finite number of uses. More precisely, Theorem~2 therein states that if a set of quantum channels $\mathcal{S} = \{\Phi_i\}_i$ satisfies the condition $\mathrm{supp} (\Phi_i) \not\subseteq \mathrm{supp}(\Phi_j)$ for every pair of distinct channels $\Phi_i, \Phi_j \in \mathcal{S}$, then they can be discriminated unambiguously in a finite number of uses. Now we proceed to presenting our example. Let $\Phi_0$ be a mixed unitary channel of the form \begin{equation} \Phi_0(\rho) = \sum_{i=1}^m p_i U_i \rho U_i^\dagger, \end{equation} where $p = (p_1, \ldots, p_m)$ is a probability vector and $\{ U_1, \ldots, U_m\}$ are unitary matrices. As the second channel we take a unitary channel of the form $\Phi_1(\rho) = \tilde{U} \rho \tilde{U}^\dagger$, where we make the crucial assumption that $\tilde{U} \in \{ U_1, \ldots, U_m\}$. Therefore we have $\mathrm{supp}(\Phi_0) = \mathrm{span} \{\sqrt{p_i}U_i \}_i$, while $\mathrm{supp}(\Phi_1) = \mathrm{span} \{\tilde{U}\}$. In this example it can be easily seen that the condition for unambiguous discrimination is not fulfilled, as $\mathrm{supp} (\Phi_1) \subseteq \mathrm{supp}(\Phi_0)$. Nevertheless, the condition from Theorem~\ref{th:condition_for_parallel_certification} is fulfilled, as $\mathrm{supp} (\Phi_0) \not\subseteq \mathrm{supp}(\Phi_1)$ (provided that not all $U_i$ are proportional to $\tilde{U}$), and hence it is possible to exclude the false negative error after a finite number of queries in parallel.
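The support conditions for this example can also be verified numerically. The sketch below (a hypothetical check, assuming NumPy; the particular mixture of the identity and the Pauli $X$ matrix is chosen only for illustration) compares matrix ranks of the vectorized Kraus operators to decide the two inclusions.
\begin{verbatim}
import numpy as np

def support_contained(kraus_a, kraus_b, tol=1e-10):
    # span{A_i} is contained in span{B_j} iff rank([A B]) == rank(B),
    # where the Kraus operators are stacked as columns after vectorization
    vec = lambda ops: np.column_stack([K.reshape(-1) for K in ops])
    B = vec(kraus_b)
    AB = np.column_stack([vec(kraus_a), B])
    return np.linalg.matrix_rank(AB, tol) == np.linalg.matrix_rank(B, tol)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
kraus0 = [np.sqrt(0.5) * np.eye(2), np.sqrt(0.5) * X]   # mixed-unitary Phi_0
kraus1 = [np.eye(2)]                                    # unitary Phi_1 with U~ = 1

print(support_contained(kraus1, kraus0))  # True:  supp(Phi_1) inside supp(Phi_0)
print(support_contained(kraus0, kraus1))  # False: supp(Phi_0) not inside supp(Phi_1)
\end{verbatim}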
\subsection{Certification of an arbitrary channel against the identity channel} Assume that we want to certify a channel $\Phi_0$, whose Kraus operators are $\{E_i\}_i$, against the identity channel $\Phi_1$ having the single Kraus operator ${\rm 1\hspace{-0.9mm}l}$. We will show that as long as the channel $\Phi_0$ is not the identity channel, it can always be certified against the identity channel in the parallel scheme. \begin{proposition} Every quantum channel (except the identity channel) can be certified against the identity channel in the parallel scheme. \end{proposition} \begin{proof} Let $\ket{\psi}$ be an input state. After applying the certified channel to it, we obtain either the state $\rho_0^{\ket{\psi}} = \left(\Phi_0 \otimes {\rm 1\hspace{-0.9mm}l}\right)\left(\proj{\psi}\right)$, if the channel was $\Phi_0$, or the state $\rho_1^{\ket{\psi}} = \proj{\psi}$, if the channel was $\Phi_1$. As the final measurement effect we can take $\Omega_0 \coloneqq {\rm 1\hspace{-0.9mm}l} - \proj{\psi}$, which is always orthogonal to $\rho_1^{\ket{\psi}}$, hence no false negative error will occur. Having the input state and the final measurement fixed, we calculate the probability of the false positive error in the single-shot scheme \begin{equation}\label{eq:p1_certifying_from_identity} \begin{split} p_1 (\ket{\psi}, \Omega_0) &= 1- \Tr\left(\Omega_0 \rho_0^{\ket{\psi}} \right) = 1-\Tr\left(\left({\rm 1\hspace{-0.9mm}l} - \proj{\psi}\right) \rho_0^{\ket{\psi}} \right) =\Tr\left(\proj{\psi} \rho_0^{\ket{\psi}} \right) \\ &= \bra{\psi} \left(\left(\Phi_0\otimes {\rm 1\hspace{-0.9mm}l}\right) (\proj{\psi}) \right) \ket{\psi} <1, \end{split} \end{equation} where the last inequality holds for a suitably chosen input state, for instance a maximally entangled one, since $\Phi_0$ is not the identity channel. Therefore, after sufficiently many queries in the parallel scheme the probability of the false positive error will be arbitrarily small. \end{proof} Note that the expression for the probability of the false positive error in Eq.~\eqref{eq:p1_certifying_from_identity} is in fact the fidelity between the input state and the output of the channel $\Phi_0$ extended by the identity channel. As we were not imposing any specific assumptions on the input state, we can take the one which minimizes the expression in Eq.~\eqref{eq:p1_certifying_from_identity}. Therefore, the probability of the false positive error in single-shot certification is given by \begin{equation} p_1 = \min_{\ket{\psi}} \bra{\psi} \left(\left(\Phi_0\otimes {\rm 1\hspace{-0.9mm}l}\right) (\proj{\psi}) \right) \ket{\psi}. \end{equation} Eventually, to make sure that the probability of the false positive error will not be greater than $\epsilon$, it suffices to use $\left\lceil \frac{\log \epsilon}{\log p_1} \right\rceil$ steps in the parallel scheme, i.e., $N_\epsilon \leq \left\lceil \frac{\log \epsilon}{\log p_1} \right\rceil$. From the above considerations we can draw a simple conclusion concerning the situation when $\Phi_0(X)= UX U^\dagger$ is a unitary channel. Then, as the unitary channel has only one Kraus operator, it holds that $p_1 =\min_{\ket{\psi}} \left\vert \bra{\psi} \left( U \otimes {\rm 1\hspace{-0.9mm}l} \right) \ket{\psi} \right\vert^2 = \min_{\ket{\psi}} \left\vert \bra{\psi} U \ket{\psi} \right\vert^2 = \nu^2(U)$, where $\nu(U)$ is the distance from zero to the numerical range of the matrix $U$~\cite{puchala2021multiple,lewandowska2021optimal}.
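The quantity $\nu(U)$ can be evaluated directly from the spectrum of $U$: for a normal operator the numerical range is the convex hull of its eigenvalues, which for a unitary matrix lie on the unit circle, so the distance from zero is determined by the largest angular gap between consecutive eigenvalues. The following sketch (an illustration under this standard fact, assuming NumPy; the diagonal unitary is a hypothetical example) computes $\nu(U)$ and the resulting single-shot probability $p_1 = \nu^2(U)$.
\begin{verbatim}
import numpy as np

def nu(U):
    # distance from zero to the numerical range of a unitary U
    angles = np.sort(np.angle(np.linalg.eigvals(U)))
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    # if the largest gap exceeds pi, zero lies outside the hull of the eigenvalues
    return max(0.0, -np.cos(gaps.max() / 2))

theta = np.pi / 3
U = np.diag([1.0, np.exp(1j * theta)])
print(nu(U) ** 2, np.cos(theta / 2) ** 2)  # both approximately 0.75
\end{verbatim}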
Thanks to this geometrical representation (see further~\cite{puchala2021multiple}) one can deduce the connection between the probability of the false positive error, $p_1$, and the probability of making an error in the unambiguous discrimination of unitary channels. More specifically, let $p^u_{\mathrm{error}}$ denote the probability of making an erroneous decision in unambiguous discrimination of unitary channels. Then, it holds that $p_1 = \left(p^u_{\mathrm{error}}\right)^2$. Therefore, in the case of certification of unitary channels the probability of making the false positive error is significantly smaller than the probability of erroneous unambiguous discrimination. \section{Certification of quantum measurements}\label{sec:certification_of_povms} In this section we will take a closer look at the certification of quantum measurements. We will begin with general POVMs and later focus on the class of measurements with rank-one effects. Before stating the results, let us recall that every quantum measurement can be associated with a quantum-classical channel defined as \begin{equation} \mathcal{P}(\rho) = \sum_i \tr (M_i \rho) \proj{i}, \end{equation} where $\{M_i\}_i$ are the measurement's effects and $\tr (M_i \rho)$ is the probability of obtaining the $i$-th label. The following proposition can be seen as a corollary of Theorem~\ref{th:condition_for_parallel_certification}, as it gives a simple condition for certification when we forbid the false negative error. This condition is expressed in terms of inclusions of the supports of the measurements' effects. \begin{proposition}\label{prop:certification_of_POVMs} Let $\mathcal{P}_0$ and $\mathcal{P}_1$ be POVMs with effects $\{M_i\}_{i=1}^m$ and $\{N_i\}_{i=1}^m$, respectively. Then $\mathcal{P}_0$ can be certified against $\mathcal{P}_1$ in the parallel scheme if and only if there exists a pair of effects $M_i$, $N_i$ for which $\mathrm{supp}(M_i) \not\subseteq \mathrm{supp}(N_i)$. \end{proposition} \begin{proof} Let \begin{equation} M_i = \sum_{k_i} \alpha_{k_i}^i \proj{x_{k_i}^i} \end{equation} be the spectral decomposition of $M_i$ (where $\alpha_{k_i}^i>0$ for every $k_i$). Then \begin{equation} \begin{split} \mathcal{P}_0 (\rho) = \sum_i \proj{i} \tr(M_i \rho) = \sum_i \sum_{k_i} \alpha_{k_i}^i \ketbra{i}{x_{k_i}^i} \rho \ketbra{x_{k_i}^i}{i} \end{split} \end{equation} and hence the Kraus operators of $\mathcal{P}_0$ are $\left\{ \sqrt{\alpha_{k_i}^i} \ketbra{i}{x_{k_i}^i} \right\}_{k_i,i}$. Analogously, the Kraus operators of $\mathcal{P}_1$ are $\left\{ \sqrt{\beta_{k_i}^i} \ketbra{i}{y_{k_i}^i} \right\}_{k_i,i}$. Therefore from Theorem~\ref{th:condition_for_parallel_certification} we have that $\mathcal{P}_0$ can be certified against $\mathcal{P}_1$ in the parallel scheme if and only if \begin{equation} \mathrm{span} \left\{ \sqrt{\alpha_{k_i}^i} \ketbra{i}{x_{k_i}^i} \right\}_{k_i,i} \not\subseteq \mathrm{span} \left\{ \sqrt{\beta_{k_i}^i} \ketbra{i}{y_{k_i}^i} \right\}_{k_i,i}, \end{equation} that is, when there exists a pair of effects $M_i$, $N_i$ for which $\mathrm{supp}(M_i) \not\subseteq \mathrm{supp}(N_i)$. \end{proof} The above proposition holds for any pair of quantum measurements. In the case of POVMs with rank-one effects, the above condition can be further simplified to linear independence of vectors. This is stated as the following corollary.
\begin{corollary}\label{corr:certification_POVMs_with_rank_one_effects} Let $\mathcal{P}_0$ and $\mathcal{P}_1$ be measurements with effects $\{ \alpha_i \proj{x_i}\}_{i=1}^m$ and $\{ \beta_i\proj{y_i}\}_{i=1}^m$ for $\alpha_i, \beta_i \in (0, 1]$, respectively. Then $\mathcal{P}_0$ can be certified against $\mathcal{P}_1$ in the parallel scheme if and only if there exists a pair of vectors $\ket{x_i}$, $\ket{y_i}$ which are linearly independent. \end{corollary} While studying the certification of measurements with rank-one effects, one cannot overlook their very important subclass, namely projective von Neumann measurements. These measurements have effects of the form $\{ \proj{u_1}, \ldots , \proj{u_d} \}$, where $\{\ket{u_i}\}_i$ form an orthonormal basis. This class of measurements was studied in~\cite{lewandowska2021optimal}, though in a slightly different context. The main result of that work was an expression for the minimized probability of the false negative error under an assumed bound on the probability of the false positive error. In this work, however, we consider the situation when the false negative error must be equal to zero after sufficiently many uses. Nevertheless, from Corollary~\ref{corr:certification_POVMs_with_rank_one_effects} we can draw the conclusion that any von Neumann measurement can be certified against some other von Neumann measurement if and only if the measurements are not the same. \subsection{SIC POVMs}\label{sec:sic_povms_calculated} Now we proceed to studying the certification of a special class of measurements with rank-one effects, that is symmetric informationally complete (SIC) POVMs \cite{renes2004symmetric,flammia2006sic,zhu2010sic,appleby2009properties}. We will directly calculate the bounds on the false positive error in single-shot and parallel certification. We will be using the following notation. The SIC POVM $\mathcal{P}_0$ with effects $\{\proj{x_i}\}_{i=1}^{d^2}$, where $\proj{x_i} = \frac{1}{d} \proj{\phi_i}$ and $\Vert \ket{\phi_i} \Vert = 1$, will be associated with the $H_0$ hypothesis. The SIC POVM $\mathcal{P}_1$ corresponding to the alternative hypothesis $H_1$ will have effects $\{\proj{y_i}\}_{i=1}^{d^2}$, where $\proj{y_i} = \frac{1}{d} \proj{\phi_{\pi(i)}}$ and $\pi$ is a permutation of $d^2$ elements. Moreover, the SIC condition assures that $| \braket{\phi_i}{\phi_{\pi(i)}}|^2 = \frac{1}{d+1}$ whenever $i \neq \pi(i)$. \begin{remark} From Corollary~\ref{corr:certification_POVMs_with_rank_one_effects} it follows that a SIC POVM $\mathcal{P}_0$ can be certified against a SIC POVM $\mathcal{P}_1$ in the parallel scheme as long as $\mathcal{P}_0 \neq \mathcal{P}_1$. \end{remark} Now we work towards calculating the upper bound on the probability of the false positive error in single-shot certification of SIC POVMs. As the input state we take the maximally entangled state $\ket{\psi} \coloneqq \frac{1}{\sqrt{d}} \ketV{{\rm 1\hspace{-0.9mm}l}}$. If the measurement was $\mathcal{P}_0$, then the output state is \begin{equation} \rho_0^{\ket{\psi}} = \left(\mathcal{P}_0 \otimes {\rm 1\hspace{-0.9mm}l}\right) \left( \proj{\psi}\right) = \sum_{i=1}^{d^2} \proj{i} \otimes \frac{1}{d}(\proj{x_i})^\top = \sum_{i=1}^{d^2} \proj{i} \otimes \frac{1}{d^2}(\proj{\phi_i})^\top, \end{equation} and similarly, if the measurement was $\mathcal{P}_1$, then the output state is \begin{equation} \rho_1^{\ket{\psi}} = \sum_{i=1}^{d^2} \proj{i} \otimes \frac{1}{d^2}(\proj{\phi_{\pi(i)}})^\top.
\end{equation} As the output states have a block-diagonal structure, we take the measurement effect to be in the block-diagonal form, that is \begin{equation}\label{eq:Omega_SIC_construction_single_shot} \Omega_0 \coloneqq \sum_{i=1}^{d^2} \proj{i} \otimes \Omega_i^\top, \end{equation} where for every $i$ we assume $\Omega_i \perp \proj{\phi_{\pi(i)}}$ to ensure that the probability of the false negative error is equal to zero. We calculate \begin{equation} \begin{split} \tr \left( \Omega_0 \rho_0 \right) &= \tr \left(\left( \sum_{i=1}^{d^2} \proj{i} \otimes \Omega_i^\top \right) \left( \sum_{j=1}^{d^2} \proj{j} \otimes \frac{1}{d^2}(\proj{\phi_j})^\top \right)\right) \\ &= \tr \left( \sum_{i=1}^{d^2} \proj{i} \otimes \Omega_i^\top \frac{1}{d^2} (\proj{\phi_i}) ^\top \right) =\frac{1}{d^2} \sum_{i=1}^{d^2} \bra{\phi_i} \Omega_i \ket{\phi_i}. \end{split} \end{equation} Let $k$ be the number of fixed points of the permutation $\pi$. Taking $\Omega_i \coloneqq {\rm 1\hspace{-0.9mm}l} - \proj{\phi_{\pi(i)}}$ we obtain \begin{equation} \begin{split} \tr \left( \Omega_0 \rho_0 \right) &= \frac{1}{d^2} \sum_{i=1}^{d^2} \bra{\phi_i} \Omega_i \ket{\phi_i} = \frac{1}{d^2} \sum_{i=1}^{d^2} \bra{\phi_i} \left({\rm 1\hspace{-0.9mm}l} - \proj{\phi_{\pi(i)}}\right) \ket{\phi_i} \\ &= \frac{1}{d^2} \sum_{i=1}^{d^2} \left( 1- |\braket{\phi_i}{\phi_{\pi(i)}} |^2 \right) = \frac{1}{d^2} \sum_{i:\, i \neq \pi(i)} \left( 1- \frac{1}{d+1} \right) \\ &= \frac{1}{d^2} \left(d^2 - k\right) \left( 1- \frac{1}{d+1} \right) = \frac{d^2 - k}{d^2 + d}. \end{split} \end{equation} So far all the calculations were done for a fixed input state (the maximally entangled state) and a fixed measurement effect $\Omega_0$, which gives us only an upper bound on the probability of the false positive error. The current choice of $\Omega_i = {\rm 1\hspace{-0.9mm}l} - \proj{\phi_{\pi(i)}}$ seems like a good candidate, but we do not know whether it is possible to find a better one. Using the notation for the probability of the false positive error introduced in Eqs.~\eqref{def_of_p1_conditional_single_shot} and \eqref{eq:def_of_p1} we can write our bound as \begin{equation} p_1 \leq p_1 \left(\ket{\psi}, \Omega_0 \right)= 1-\tr( \Omega_0 \rho_0) = \frac{d + k}{d^2+d}. \end{equation} On top of that, if $\pi$ does not have fixed points, that is when $k=0$, we have $p_1 \leq \frac{1}{d+1}$ and the number of steps needed for parallel certification is bounded by $N_\epsilon \leq \left\lceil - \frac{\log \epsilon}{\log (d+1)} \right\rceil$. In the case when the permutation $\pi$ has one fixed point, that is when $k=1$, it holds that $p_1 \leq \frac{1}{d}$ and hence the number of steps needed for parallel certification can be bounded by $N_\epsilon \leq \left\lceil - \frac{\log \epsilon}{\log d} \right\rceil$. \subsection{Parallel certification of SIC POVMs}\label{sec:sic_povms_parallel} Let us consider a generalization of the results from the previous subsection to the parallel scenario. We want to certify the SIC POVMs $\mathcal{P}_0$ and $\mathcal{P}_1$ defined as in Subsection~\ref{sec:sic_povms_calculated}; this time, however, we are allowed to use the certified SIC POVM $N$ times in parallel. In this setup we associate the $H_0$ hypothesis with the measurement $\mathcal{P}_0^{\otimes N}$, and analogously we associate the $H_1$ hypothesis with the measurement $\mathcal{P}_1^{\otimes N}$.
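Before turning to the parallel bound, the single-shot quantity derived above can be checked numerically. The sketch below (a hypothetical verification, assuming NumPy and the standard qubit SIC vectors; it is not part of the derivation) reproduces $p_1\left(\ket{\psi}, \Omega_0 \right) = \frac{d+k}{d^2+d}$ for $d = 2$ and a sample permutation $\pi$ with $k$ fixed points.
\begin{verbatim}
import numpy as np

# standard qubit SIC (d = 2): four unit vectors with |<phi_i|phi_j>|^2 = 1/3, i != j
d = 2
w = np.exp(2j * np.pi / 3)
phis = [np.array([1, 0], dtype=complex),
        np.array([1, np.sqrt(2)], dtype=complex) / np.sqrt(3),
        np.array([1, np.sqrt(2) * w], dtype=complex) / np.sqrt(3),
        np.array([1, np.sqrt(2) * w ** 2], dtype=complex) / np.sqrt(3)]

perm = [1, 0, 2, 3]                                  # swaps the first two labels, k = 2
k = sum(1 for i, p in enumerate(perm) if i == p)

# acceptance probability (1/d^2) * sum_i <phi_i| (1 - |phi_pi(i)><phi_pi(i)|) |phi_i>
accept = sum(1.0 - abs(np.vdot(phis[p], phis[i])) ** 2
             for i, p in enumerate(perm)) / d ** 2

p1_single = 1.0 - accept
print(p1_single, (d + k) / (d ** 2 + d))             # both equal 2/3
# the bound of this subsection is then p1_single ** N for N parallel uses
\end{verbatim}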
It appears that the upper bound on the false positive error is very similar to the upper bound for the single-shot case. Straightforward but lengthy and technical calculations give us \begin{equation}\label{eq:sics_parallel_bound} p_1^{\mathbb{P}, N} \leq \left( \frac{d + k}{d^2+d}\right)^N. \end{equation} The detailed derivation of this bound is relegated to Appendix~\ref{app:sics_parallel}. \section{Adaptive certification and Stein setting}\label{sec:adaptive_certification} So far we have considered only the scheme in which the given channel is used a finite number of times in parallel. In this section we will focus on a more general scheme of certification, that is adaptive certification. In the adaptive scenario, we use the given channel $N$ times and between the uses we can perform some processing. It seems natural that the use of the adaptive scheme instead of the simpler parallel one should improve the certification. Surprisingly, in the case of von Neumann measurements the use of the adaptive scheme gives no advantage over the parallel one~\cite{puchala2021multiple,lewandowska2021optimal}. In other cases it appears that the use of processing is indeed a necessary step towards perfect discrimination~\cite{harrow2010adaptive,krawiec2020discrimination}. Having the adaptive scheme as a generalization of the parallel one, let us take a step further and take a look at the asymptotic setting. In other words, let us discuss the situation when the number of uses of the certified channel tends to infinity. There are various settings known in the literature concerning asymptotic discrimination, such as the Stein and Hoeffding settings for asymmetric discrimination, as well as the Chernoff and Han-Kobayashi settings for symmetric discrimination. In the context of this work we will discuss only the settings concerning asymmetric discrimination; a concise introduction to all of these settings can be found e.g.\ in~\cite{wilde2020amortized}. Arguably the most well-known of these is the Hoeffding setting, which assumes that the bound on the false negative error decreases exponentially, and its area of interest is characterizing the error exponent of the probability of the false positive error. Adaptive strategies for asymptotic discrimination in the Hoeffding setting were recently explored in~\cite{salek2020adaptive}. In the Stein setting, on the other hand, we assume a constraint on the probability of the false positive error and study the error exponent of the false negative error. Let us define the non-asymptotic quantity \begin{equation} \zeta_n (\epsilon) \coloneqq \sup_{\Omega_0, \ket{\psi} } \left\{ -\frac{1}{n} \log p_2^{\mathbb{A}, n} \left( \ket{\psi}, \Omega_0 \right) : p_1^{\mathbb{A}, n} \left( \ket{\psi}, \Omega_0 \right) \leq \epsilon \right\}, \end{equation} which describes the behavior of the error probabilities in the adaptive discrimination scheme. The probability of the false positive error after $n$ queries is upper-bounded by some fixed $\epsilon$, and we are interested in studying how quickly the probability of the false negative error decreases. Therefore we consider the negative logarithm of the probability of the false negative error divided by the number of queries. Finally, a supremum is taken over all possible adaptive strategies, that is, we can choose the best input state and final measurement, as well as the processings between the uses of the certified channel.
Note that in the previous sections we were considering $p_2^{\mathbb{A}, N}$ instead of $p_2^{\mathbb{A}, n}$, which is used in the Stein setting. The aim of this difference is to emphasize that in the Stein setting we study the situation in which the number of uses, $n$, tends to infinity. By contrast, in the previous sections we were interested only in the case when the number of uses, $N$, was finite. Having introduced the non-asymptotic quantity $\zeta_n (\epsilon)$, let us consider the case when the number of queries, $n$, tends to infinity. To do so, we define the upper limit of the Stein exponent as \begin{equation}\label{eq:stein_bounds} \overline{\zeta} (\epsilon) \coloneqq \limsup_{n \to \infty} \zeta_n (\epsilon). \end{equation} Note that when $\overline{\zeta} (\epsilon)$ is finite, then the probability of the false negative error in adaptive certification cannot be equal to zero for any finite number of uses $N$. A very useful Remark~$19$ from \cite{wilde2020amortized} states that $\overline{\zeta}(\epsilon)$ is finite if and only if \begin{equation} \mathrm{supp} \left( (\Phi_0 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right) \subseteq \mathrm{supp} \left( (\Phi_1 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right), \end{equation} where $\ket{\psi_\text{ent}}$ is the maximally entangled state. Finally, we are in a position to state the theorem expressing the relation between adaptive and parallel certification. \begin{theorem}\label{th:equivalence_parallel_adaptive} A quantum channel $\Phi_0$ can be certified against a quantum channel $\Phi_1$ in the parallel scenario if and only if the quantum channel $\Phi_0$ can be certified against the quantum channel $\Phi_1$ in the adaptive scenario. \end{theorem} Before presenting the proof of the theorem we will state a useful lemma, whose proof is postponed to Appendix~\ref{app:proofs_of_lemmas}. \begin{lemma}\label{lm:stein} Let $\overline{\zeta}(\epsilon)$ be as in Eq.~\eqref{eq:stein_bounds}. Then $\overline{\zeta}(\epsilon)$ is finite if and only if $\mathrm{supp} (\Phi_0) \subseteq \mathrm{supp} (\Phi_1)$. \end{lemma} \begin{proof}[Proof of Theorem \ref{th:equivalence_parallel_adaptive}] When the quantum channel $\Phi_0$ can be certified against the channel $\Phi_1$ in the parallel scenario, then naturally $\Phi_0$ can be certified against the channel $\Phi_1$ in the adaptive scenario. Therefore it suffices to prove the reverse implication. Assume that the channel $\Phi_0$ can be certified against $\Phi_1$ in the adaptive scenario. This means that $\overline{\zeta}(\epsilon)$ is infinite. Hence, from Lemma~\ref{lm:stein}, it holds that $\mathrm{supp} (\Phi_0) \not\subseteq \mathrm{supp} (\Phi_1)$. Finally, from Theorem~\ref{th:condition_for_parallel_certification} we obtain that $\Phi_0$ can be certified against $\Phi_1$ in the parallel scheme. \end{proof} Theorem~\ref{th:equivalence_parallel_adaptive} states that if a quantum channel $\Phi_0$ can be certified against $\Phi_1$ in a finite number of queries, then the use of the parallel scheme is always sufficient. Therefore it may appear that adaptive certification is of no value. Nevertheless, in some cases it still may be worth using adaptive certification to reduce the number of uses of the certified channel.
For example, in the case of SIC POVMs the use of the adaptive scheme reduces the number of steps significantly~\cite{krawiec2020discrimination}. A pair of qutrit SIC POVMs can be discriminated perfectly after two queries in the adaptive scenario, and therefore they can also be certified. Nevertheless, they cannot be discriminated perfectly after any finite number of queries in parallel. On the other hand, in the case of von Neumann measurements the number of steps is the same no matter which scheme is used~\cite{puchala2021multiple}. \section{Conclusions}\label{sec:conclusions} As certification of quantum channels is a task of significant importance in the NISQ era, the main aim of this work was to give an insight into this problem from a theoretical perspective. Certification was considered as an extension of quantum hypothesis testing, which also includes the preparation of an input state and the final measurement. We primarily focused on multiple-shot schemes of certification, that is, our main areas of interest were the parallel and adaptive certification schemes. The parallel scheme consists in certifying tensor products of channels, while the adaptive scheme is the most general of all scenarios. We derived a condition under which, after a finite number of queries in the parallel scenario, one can assure that the false negative error will not occur. We pointed out a class of channels which allow for excluding the false negative error after a finite number of uses in parallel but cannot be discriminated unambiguously. On top of that, having a fixed upper bound on the probability of the false positive error, we found a bound on the number of queries needed to make the probability of the false positive error no greater than this fixed bound. Moreover, we took into consideration the most general, adaptive certification scheme and studied whether it can improve the certification. It turned out that the use of the parallel certification scheme is always sufficient to assure that the false negative error will not occur after a finite number of queries. Nevertheless, the number of queries needed to make the probability of the false positive error sufficiently small may be decreased by using the adaptive scheme. \section*{Acknowledgments} This work was supported by the Foundation for Polish Science (FNP) under grant number POIR.04.04.00-00-17C1/18-00. The project ``Near-term quantum computers: Challenges, optimal implementations and applications'' under Grant Number POIR.04.04.00-00-17C1/18-00 is carried out within the Team-Net programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. \section*{Author contributions} All authors participated in proving the theorems constituting the results of this work. {\L}.P. and Z.P. set the research objective and A.K. prepared the figures. \section*{Competing interests} The authors declare no competing interests.
\appendix \section{Proofs of lemmas}\label{app:proofs_of_lemmas} \begin{proof}[Proof of Lemma~\ref{lm:inclusion_of_supports}] Suppose that $\mathrm{supp}\left(\rho_0^{\ket{\psi}}\right) \subseteq \mathrm{supp}\left(\rho_1^{\ket{\psi}} , \ldots , \rho_m^{\ket{\psi}}\right)$, that is \begin{equation} \mathrm{span} \left\{ (E_i \otimes {\rm 1\hspace{-0.9mm}l}) \ket{\psi} \right\}_i \subseteq \mathrm{span} \left\{ \left(F^{(1)}_{j_1} \otimes {\rm 1\hspace{-0.9mm}l}\right) \ket{\psi} , \ldots , \left(F^{(m)}_{j_m} \otimes {\rm 1\hspace{-0.9mm}l}\right) \ket{\psi} \right\}_{ {j_1}, \ldots,{j_m}}. \end{equation} Hence for every $i$ \begin{equation} \begin{split} (E_i \otimes {\rm 1\hspace{-0.9mm}l}) \ket{\psi} &= \sum_{j_1} \beta^{(1)}_{j_1} \left(F^{(1)}_{j_1} \otimes {\rm 1\hspace{-0.9mm}l}\right) \ket{\psi} + \ldots + \sum_{j_m} \beta^{(m)}_{j_m} \left(F^{(m)}_{j_m} \otimes {\rm 1\hspace{-0.9mm}l}\right) \ket{\psi} \\ &= \left(\sum_{j_1} \beta^{(1)}_{j_1} \left(F^{(1)}_{j_1} \otimes {\rm 1\hspace{-0.9mm}l}\right) + \ldots + \sum_{j_m} \beta^{(m)}_{j_m} \left(F^{(m)}_{j_m} \otimes {\rm 1\hspace{-0.9mm}l}\right) \right) \ket{\psi} \\ &= \left(\left(\sum_{j_1} \beta^{(1)}_{j_1} F^{(1)}_{j_1} + \ldots + \sum_{j_m} \beta^{(m)}_{j_m} F^{(m)}_{j_m} \right) \otimes {\rm 1\hspace{-0.9mm}l} \right) \ket{\psi} \end{split} \end{equation} for some coefficients $\beta^{(k)}_{j_k}$. As $\ket{\psi} = \sum_t \lambda_t \ket{a_t} \ket{b_t}$, we have \begin{equation} (E_i \otimes {\rm 1\hspace{-0.9mm}l}) \ket{\psi} = \sum_t \lambda_t \left( E_i \ket{a_t} \otimes\ket{b_t} \right) \end{equation} and \begin{equation} \begin{split} &\left( \left(\sum_{j_1} \beta^{(1)}_{j_1} F^{(1)}_{j_1} + \ldots + \sum_{j_m} \beta^{(m)}_{j_m} F^{(m)}_{j_m} \right) \otimes {\rm 1\hspace{-0.9mm}l} \right) \ket{\psi} \\ &= \sum_t \lambda_t \left(\sum_{j_1} \beta^{(1)}_{j_1} F^{(1)}_{j_1} + \ldots + \sum_{j_m} \beta^{(m)}_{j_m} F^{(m)}_{j_m} \right) \ket{a_t} \otimes \ket{b_t}. \end{split} \end{equation} As $\{\ket{b_t}\}_t$ is an orthonormal basis and $\lambda_t > 0$ for every $t$, we obtain for every $t$ \begin{equation} E_i \ket{a_t} = \left(\sum_{j_1} \beta^{(1)}_{j_1} F^{(1)}_{j_1} + \ldots + \sum_{j_m} \beta^{(m)}_{j_m} F^{(m)}_{j_m} \right) \ket{a_t}, \end{equation} and hence, since $\{\ket{a_t}\}_t$ is an orthonormal basis, \begin{equation} E_i = \sum_{j_1} \beta^{(1)}_{j_1} F^{(1)}_{j_1} + \ldots + \sum_{j_m} \beta^{(m)}_{j_m} F^{(m)}_{j_m}. \end{equation} Therefore \begin{equation} \mathrm{span} \{E_i\}_i \subseteq \mathrm{span} \left\{ F^{(1)}_{{j_1}} , \ldots , F^{(m)}_{{j_m}} \right\}_{ {j_1}, \ldots,{j_m}}, \end{equation} which implies that $\mathrm{supp}(\Phi_0) \subseteq \mathrm{supp} \left(\Phi_1, \ldots , \Phi_m \right)$. Finally, by the law of contraposition we obtain that if $\mathrm{supp}(\Phi_0) \not\subseteq \mathrm{supp} \left(\Phi_1, \ldots , \Phi_m \right)$, then $\mathrm{supp}\left(\rho_0^{\ket{\psi}}\right) \not\subseteq \mathrm{supp}\left(\rho_1^{\ket{\psi}} , \ldots , \rho_m^{\ket{\psi}}\right)$.
\end{proof} \begin{proof}[Proof of Lemma~\ref{lm:inclusion_of_supports_2}] Assume that $\mathrm{supp} (\Phi_0) \subseteq \mathrm{supp} \left(\Phi_1, \ldots, \Phi_m\right)$, that is \begin{equation} \mathrm{span} \{E_i\}_i \subseteq \mathrm{span} \left\{ F^{(1)}_{{j_1}} , \ldots , F^{(m)}_{{j_m}} \right\}_{ {j_1}, \ldots,{j_m}}. \end{equation} To simplify the notation, we write $\mathrm{span} \left\{ F^{(1)}_{{j_1}} , \ldots , F^{(m)}_{{j_m}} \right\}_{ {j_1}, \ldots,{j_m}} \eqqcolon \mathrm{span} \{K_l\}_l$. Hence for every natural number $N$ it also holds that \begin{equation} \mathrm{span} \left\{ E_{i_1} \otimes \ldots \otimes E_{i_N} \right\}_{i_1, \ldots , i_N} \subseteq \mathrm{span} \left\{ K_{l_1} \otimes \ldots \otimes K_{l_N} \right\}_{l_1, \ldots , l_N}. \end{equation} Thus for every $i_1, \ldots , i_N$ we have that \begin{equation} E_{i_1} \otimes \ldots \otimes E_{i_N} = \sum_{l_1, \ldots , l_N} \beta_{l_1, \ldots , l_N} K_{l_1} \otimes \ldots \otimes K_{l_N} \end{equation} for some coefficients $\beta_{l_1, \ldots , l_N}$. Therefore for every $i_1, \ldots , i_N$ and every input state $\ket{\psi}$, it also holds that \begin{equation} \begin{split} \left( \left(E_{i_1} \otimes \ldots \otimes E_{i_N}\right) \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi} &= \left( \left(\sum_{l_1, \ldots , l_N} \beta_{l_1, \ldots , l_N} K_{l_1} \otimes \ldots \otimes K_{l_N} \right) \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi} \\ &= \sum_{l_1, \ldots , l_N} \beta_{l_1, \ldots , l_N} \left( \left(K_{l_1} \otimes \ldots \otimes K_{l_N} \right) \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi}, \end{split} \end{equation} and hence \begin{equation} \mathrm{span} \left\{ \left( \left(E_{i_1} \otimes \ldots \otimes E_{i_N}\right) \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi}\right\}_{i_1, \ldots , i_N} \subseteq \mathrm{span} \left\{ \left( \left(K_{l_1} \otimes \ldots \otimes K_{l_N} \right) \otimes {\rm 1\hspace{-0.9mm}l} \right)\ket{\psi}\right\}_{l_1, \ldots , l_N}. \end{equation} The above can be rewritten as \begin{equation} \mathrm{supp}\left( \sigma_0^{N, \ket{\psi}} \right) \subseteq \mathrm{supp} \left( \sigma_1^{N, \ket{\psi}}, \ldots, \sigma_m^{N, \ket{\psi}} \right), \quad N \in \mathbb{N}. \end{equation} Eventually, by the law of contraposition we obtain that if for some natural number $N$ and an input state $\ket{\psi}$ it holds that \begin{equation} \mathrm{supp}\left( \sigma_0^{N, \ket{\psi}} \right) \not\subseteq \mathrm{supp} \left( \sigma_1^{N, \ket{\psi}}, \ldots, \sigma_m^{N, \ket{\psi}} \right), \end{equation} then $\mathrm{supp} (\Phi_0) \not\subseteq \mathrm{supp} \left(\Phi_1, \ldots, \Phi_m\right)$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:stein}] ($\implies$) Let $\ket{\psi_\text{ent}}$ be the maximally entangled state. When $\overline{\zeta}(\epsilon) < \infty$, then from Remark~$19$ of~\cite{wilde2020amortized} we have that \begin{equation} \mathrm{supp} \left( (\Phi_0 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right) \subseteq \mathrm{supp} \left( (\Phi_1 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right).
\begin{proof}[Proof of Lemma~\ref{lm:stein}]
($\implies$) Let $\ket{\psi_\text{ent}}$ be the maximally entangled state. If $\overline{\zeta}(\epsilon) < \infty$, then by Remark~19 of~\cite{wilde2020amortized} we have that
\begin{equation}
\mathrm{supp} \left( (\Phi_0 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right) \subseteq \mathrm{supp} \left( (\Phi_1 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right).
\end{equation}
By Lemma~\ref{lm:inclusion_of_supports} (as the maximally entangled state has full Schmidt rank), the above implies that $\mathrm{supp} (\Phi_0) \subseteq \mathrm{supp} (\Phi_1)$.

($\impliedby$) Now we assume that $\mathrm{supp} (\Phi_0) \subseteq \mathrm{supp} (\Phi_1)$. By Lemma~\ref{lm:inclusion_of_supports_2}, this implies that for every natural number $N$ and every input state $\ket{\psi}$ it holds that
\begin{equation}
\mathrm{supp} \left( (\Phi_0^{\otimes N} \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi} ) \right) \subseteq \mathrm{supp} \left( (\Phi_1^{\otimes N} \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi} ) \right).
\end{equation}
Taking $N=1$ and $\ket{\psi} = \ket{\psi_\text{ent}}$ we obtain
\begin{equation}
\mathrm{supp} \left( (\Phi_0 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right) \subseteq \mathrm{supp} \left( (\Phi_1 \otimes {\rm 1\hspace{-0.9mm}l})(\proj{\psi_\text{ent}} ) \right).
\end{equation}
Therefore, by Remark~19 of~\cite{wilde2020amortized}, we obtain that $\overline{\zeta}(\epsilon) < \infty$.
\end{proof}

\section{Derivation of Eq.~\eqref{eq:sics_parallel_bound}}\label{app:sics_parallel}
Let $\mathcal{P}_0$ and $\mathcal{P}_1$ be as defined in Subsection~\ref{sec:sic_povms_calculated}. We consider the scenario where the certified measurement is used $N$ times in parallel. To calculate the bound on parallel certification we make particular choices of the input state and of the final measurement. As the input state we take the maximally entangled state, as in the single-shot case. Applying the tensor product of the SIC POVMs to the input state, we obtain the output state
\begin{equation}
\begin{split}
\sigma_0^{N, \ket{\psi}} &= \frac{1}{d^{N}} \left( \mathcal{P}_0 \otimes \ldots \otimes \mathcal{P}_0 \otimes {\rm 1\hspace{-0.9mm}l} \right) \left( \projV{{\rm 1\hspace{-0.9mm}l}} \right) \\
&= \frac{1}{d^N} \sum_{i_1 , \ldots , i_N =1}^{d^2} \proj{i_1 \cdots i_N} \otimes \frac{1}{d^N}\left(\proj{\phi_{i_1} \cdots \phi_{i_N}}\right)^\top \\
&= \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \proj{i_1 \cdots i_N} \otimes \left(\proj{\phi_{i_1} \cdots \phi_{i_N}}\right)^\top
\end{split}
\end{equation}
if the measurement was $\mathcal{P}_0$, or
\begin{equation}
\begin{split}
\sigma_1^{N, \ket{\psi}} = \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \proj{i_1 \cdots i_N} \otimes \left(\proj{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}}\right)^\top
\end{split}
\end{equation}
if the measurement was $\mathcal{P}_1$. As in the single-shot scenario, we take a measurement effect with a block-diagonal structure, that is
\begin{equation}
\Omega_0 \coloneqq \sum_{i_1 , \ldots , i_N =1}^{d^2} \proj{i_1 \cdots i_N} \otimes \Omega_{i_1 \cdots i_N}^\top,
\end{equation}
where we require $\Omega_{i_1 \cdots i_N} \perp \proj{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}}$ to ensure that the false negative error equals zero.
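As an illustrative numerical check (not part of the original derivation), this construction can be verified for the smallest case $d=2$, $N=1$, using the qubit (tetrahedron) SIC states and the particular choice $\Omega_{i} = {\rm 1\hspace{-0.9mm}l} - \proj{\phi_{\pi(i)}}$ adopted below; the permutation $\pi$ in the sketch is a hypothetical choice with $k=2$ fixed points. The snippet confirms that $\tr\left(\Omega_0\sigma_1^{N,\ket{\psi}}\right)=0$ and that $1-\tr\left(\Omega_0\sigma_0^{N,\ket{\psi}}\right)$ reproduces the value $(d+k)/(d^2+d)$ derived below.
\begin{verbatim}
import numpy as np

# Pauli matrices and the qubit (tetrahedron) SIC projectors.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
P = [(I2 + v[0] * sx + v[1] * sy + v[2] * sz) / 2 for v in bloch]

d = 2
pi = [1, 0, 2, 3]     # hypothetical permutation of the d^2 = 4 outcomes, k = 2 fixed points
k = sum(1 for i in range(4) if pi[i] == i)

def ket(i, dim):      # computational-basis column vector
    v = np.zeros((dim, 1), dtype=complex)
    v[i] = 1
    return v

# Output states sigma_0, sigma_1 and the effect Omega_0 for N = 1.
sigma0 = sum(np.kron(ket(i, 4) @ ket(i, 4).conj().T, P[i].T) for i in range(4)) / d**2
sigma1 = sum(np.kron(ket(i, 4) @ ket(i, 4).conj().T, P[pi[i]].T) for i in range(4)) / d**2
Omega0 = sum(np.kron(ket(i, 4) @ ket(i, 4).conj().T, (I2 - P[pi[i]]).T) for i in range(4))

print(np.isclose(np.trace(Omega0 @ sigma1), 0))                         # zero error on sigma_1
print(np.isclose(1 - np.trace(Omega0 @ sigma0), (d + k) / (d**2 + d)))  # (d+k)/(d^2+d) = 2/3
\end{verbatim}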
We calculate
\begin{equation}
\begin{split}
\tr \left(\Omega_0 \sigma_0^{N, \ket{\psi}} \right) &= \tr \left(\left( \sum_{i_1 , \ldots , i_N =1}^{d^2} \proj{i_1 \cdots i_N} \otimes \Omega_{i_1 \cdots i_N}^\top \right) \right. \\
& \quad \quad \left. \left(\frac{1}{d^{2N}} \sum_{k_1 , \ldots , k_N =1}^{d^2} \proj{k_1 \cdots k_N} \otimes \left(\proj{\phi_{k_1} \cdots \phi_{k_N}}\right)^\top\right) \right) \\
&= \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \sum_{k_1 , \ldots , k_N =1}^{d^2} \tr \left( \left(\proj{i_1 \cdots i_N} \otimes \Omega_{i_1 \cdots i_N}^\top \right) \right. \\
& \left. \quad \quad \left( \proj{k_1 \cdots k_N} \otimes \left(\proj{\phi_{k_1} \cdots \phi_{k_N}}\right)^\top\right) \right) \\
&= \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \sum_{k_1 , \ldots , k_N =1}^{d^2} \tr \left(\proj{i_1 \cdots i_N} \proj{k_1 \cdots k_N} \right. \\
& \left. \quad \quad \otimes \Omega_{i_1 \cdots i_N}^\top \left(\proj{\phi_{k_1} \cdots \phi_{k_N}}\right)^\top \right) \\
&= \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \tr \left(\proj{i_1 \cdots i_N} \otimes \Omega_{i_1 \cdots i_N}^\top \left(\proj{\phi_{i_1} \cdots \phi_{i_N}}\right)^\top \right)\\
&= \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \tr \left( \Omega_{i_1 \cdots i_N} \proj{\phi_{i_1} \cdots \phi_{i_N}} \right).
\end{split}
\end{equation}
There are many possible choices of $\Omega_{i_1 \cdots i_N}$ which fulfill the condition $\Omega_{i_1 \cdots i_N} \perp \proj{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}}$, but here we take the one defined as
\begin{equation}
\Omega_{i_1 \cdots i_N} \coloneqq {\rm 1\hspace{-0.9mm}l} - \proj{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}}.
\end{equation}
This choice of measurement effect may not be optimal in general, but its concise form makes it convenient for calculations. Therefore
\begin{equation}
\begin{split}
\tr \left(\Omega_0 \sigma_0^{N, \ket{\psi}} \right) &= \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \tr \left( \left( {\rm 1\hspace{-0.9mm}l} - \proj{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}} \right) \proj{\phi_{i_1} \cdots \phi_{i_N}} \right) \\
&= \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \left( 1- \left\vert \braket{\phi_{i_1} \cdots \phi_{i_N}}{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}} \right\vert^2 \right) \\
&=1- \frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \left\vert \braket{\phi_{i_1} \cdots \phi_{i_N}}{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}} \right\vert^2,
\end{split}
\end{equation}
and hence
\begin{equation}
p_1^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right) = 1-\tr \left(\Omega_0 \sigma_0^{N, \ket{\psi}} \right) =\frac{1}{d^{2N}} \sum_{i_1 , \ldots , i_N =1}^{d^2} \left\vert \braket{\phi_{i_1} \cdots \phi_{i_N}}{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}} \right\vert^2.
\end{equation}
To obtain the exact upper bound we need to evaluate this sum, that is, to show that
\begin{equation}\label{eq:parallel_sics_combinatorics}
\sum_{i_1 , \ldots , i_N =1}^{d^2} \left\vert \braket{\phi_{i_1} \cdots \phi_{i_N}}{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}} \right\vert^2 = \sum_{s=0}^{N} \binom{N}{N-s} k^{N-s} (d^2 - k)^s \frac{1}{(d+1)^s}.
\end{equation}
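Before deriving this identity, we remark that it can be verified numerically by brute force for small parameters. The sketch below is an illustrative addition with hypothetical values of $N$, $d$ and $\pi$; it substitutes $\vert\braket{\phi_i}{\phi_{\pi(i)}}\vert^2 = 1/(d+1)$ for $i \neq \pi(i)$ and $1$ otherwise, and compares both sides of Eq.~\eqref{eq:parallel_sics_combinatorics} as well as the resulting bound.
\begin{verbatim}
import numpy as np
from itertools import product
from math import comb

d, N = 3, 4                      # hypothetical dimension and number of parallel uses
pi = {0: 1, 1: 0}                # permutation of {0,...,d^2-1}; unlisted points are fixed
perm = [pi.get(i, i) for i in range(d**2)]
k = sum(1 for i in range(d**2) if perm[i] == i)

# Left-hand side: sum over all tuples (i_1,...,i_N) of the product of overlaps,
# equal to 1 on fixed points and 1/(d+1) otherwise.
lhs = sum(np.prod([1.0 if perm[i] == i else 1.0 / (d + 1) for i in tup])
          for tup in product(range(d**2), repeat=N))

# Right-hand side of Eq. (eq:parallel_sics_combinatorics).
rhs = sum(comb(N, N - s) * k**(N - s) * (d**2 - k)**s / (d + 1)**s
          for s in range(N + 1))

print(np.isclose(lhs, rhs), lhs / d**(2 * N), ((d + k) / (d**2 + d))**N)
\end{verbatim}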
First, note that
\begin{equation}
\left\vert \braket{\phi_{i_1} \cdots \phi_{i_N}}{\phi_{\pi(i_1)} \cdots \phi_{\pi(i_N)}} \right\vert^2 =\left\vert \braket{\phi_{i_1}}{\phi_{\pi(i_1)}} \cdots \braket{\phi_{i_N}}{\phi_{\pi(i_N)}} \right\vert^2 = \frac{1}{(d+1)^s},
\end{equation}
where $s \coloneqq | \{ i_l :\ i_l \neq \pi(i_l) \}|$. In other words, every time we encounter a fixed point of the permutation we obtain a factor $\braket{\phi_{i_j}}{\phi_{\pi(i_j)}}$ equal to one. Let us now examine the consecutive factors on the right-hand side of Eq.~\eqref{eq:parallel_sics_combinatorics}. The factor $\binom{N}{N-s}$ corresponds to choosing the $N-s$ positions for which $\braket{\phi_{i_j}}{\phi_{\pi(i_j)}} = 1$. Each of these positions can take one of $k$ values (where $k$ stands for the number of fixed points of the permutation $\pi$), which gives the factor $k^{N-s}$. Each of the remaining $s$ positions can take one of the $d^2 - k$ values which are not fixed points of the permutation, hence the factor $(d^2 - k)^s$. Finally, by the binomial theorem,
\begin{equation}
\sum_{s=0}^{N} \binom{N}{N-s} k^{N-s} \left(\frac{d^2 - k}{d+1}\right)^s = \left( k + \frac{d^2 - k}{d+1} \right)^N = \left( \frac{d(d+k)}{d+1} \right)^N,
\end{equation}
which yields the concise expression for the upper bound on the probability of the false negative error, that is
\begin{equation}
p_1^{\mathbb{P}, N} \left(\ket{\psi}, \Omega_0 \right) = \frac{1}{d^{2N}} \sum_{s=0}^{N} \binom{N}{N-s} k^{N-s} (d^2 - k)^s \frac{1}{(d+1)^s} =\left( \frac{d + k}{d^2+d}\right)^N.
\end{equation}
In the case of a permutation $\pi$ with no fixed points, that is when $k=0$, the above bound simplifies to $p_1^{\mathbb{P}, N} \leq \left( \frac{1}{d+1}\right)^N$.
\end{document}
\begin{document} \title{Performance of autonomous quantum thermal machines: \\ Hilbert space dimension as a thermodynamical resource} \author{Ralph Silva}\affiliation{D\'epartement de Physique Th\'eorique, Universit\'e de Gen\`eve, 1211 Gen\`eve, Switzerland} \author{Gonzalo Manzano} \affiliation{Departamento de F\'isica At\'omica, Molecular y Nuclear and GISC, Universidad Complutense Madrid, 28040 Madrid, Spain} \author{Paul Skrzypczyk}\affiliation{H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol, BS8 1TL, United Kingdom} \author{Nicolas Brunner}\affiliation{D\'epartement de Physique Th\'eorique, Universit\'e de Gen\`eve, 1211 Gen\`eve, Switzerland} \begin{abstract} Multilevel autonomous quantum thermal machines are discussed. In particular, we explore the relation between the size of the machine (captured by Hilbert space dimension), and the performance of the machine. Using the concepts of virtual qubits and virtual temperatures, we show that higher dimensional machines can outperform smaller ones. For instance, by considering refrigerators with more levels, lower temperatures can be achieved, as well as higher power. We discuss the optimal design for refrigerators of a given dimension. As a consequence we obtain a statement of the third law in terms of Hilbert space dimension: reaching absolute zero temperature requires infinite dimension. These results demonstrate that Hilbert space dimension should be considered a thermodynamic resource. \end{abstract} \maketitle \section{Introduction} Autonomous quantum thermal machines function via thermal contact to heat baths at different temperatures, powering different thermodynamic operations without any external source of work. For instance, small quantum absorption refrigerators use only two thermal reservoirs, one as a heat source, and the other as a heat sink, in order to cool a system to a temperature lower than that of either of the thermal reservoirs \cite{schulz,palao,linden10,levy,magic}. More generally, autonomous quantum thermal machines represent an ideal platform for exploring quantum thermodynamics \cite{book,review1,review2}, as they allow one to avoid introducing explicitly the concept of work, a notably difficult and controversial issue. The efficiency of these machines has been investigated \cite{schulz,skrzypczyk,correa1,woods}, and quantum effects, such as coherence and entanglement, were shown to enhance their performance \cite{brunner14,correa14, uzdin15,marcus,brask15b,frenzel16}. Also, these machines are of interest from a practical point of view, and several implementations have been proposed \cite{venturelli,chen,mari,bellomo,brask15,mitchison2}. More formally, autonomous thermal machines are modelled by considering a set of quantum levels (the machine), some of which are selectively coupled to different thermal baths as well as to an object to be acted upon. Various models of thermal baths and thermal couplings can be considered and formalized via master equations, which usually involves many different parameters, including coupling factors or bath spectral densities, to precisely characterize the machine and its interaction with the environment (see e.g. \cite{marcus}). Nevertheless, the basic functioning of these machines can be captured in much simpler terms. 
In particular, the notion of `virtual qubits' and `virtual temperatures' \cite{virtual} (see also \cite{janzing}), essentially associating a temperature to a transition via its population ratio, was developed in order to capture the fundamental limitations of the simplest machines. Therefore, some of the main features of the machine can be deduced from simple considerations about its {\it static} configuration, i.e. without requiring any specific knowledge about the dynamics of the thermalization process induced by contact with the baths. In the present work we discuss the performance of general thermal machines, involving an arbitrary number of levels. Exploiting the notions of virtual qubits and virtual temperatures, we characterize fundamental limits of such machines, based on their level structure and the way they are coupled to the reservoirs. This allows us to explore the relation between the size of the machine as given by its Hilbert space dimension (or equivalently the number of its available levels), and its performance. We find that machines with more levels can outperform simpler machines. In particular, considering fixed thermodynamic resources (two heat baths at different temperatures), we show that lower temperatures, as well as higher cooling power, can always be engineered using higher dimensional refrigerators. By characterizing the range of virtual qubits and virtual temperatures that can be reached with fixed resources, we propose optimal designs for single-cycle, multi-cycle and concatenated machines featuring an arbitrary number of levels. Furthermore, our considerations lead to a formulation of the third law in terms of Hilbert space dimension of the machine: reaching absolute zero temperature requires infinite dimension. The paper is organized as follows. We begin in Sec.~\ref{sec:primitive} by discussing the role of the swap operation as the primitive operation for the functioning of autonomous quantum thermal machines, allowing an extremely simple characterization of their performance in terms of virtual qubits and virtual temperatures. Sec.~\ref{sec:qutrit} is devoted to reviewing the basic functioning of a three-level quantum thermal machine, helping us to identify various {\it resources} and {\it limitations} when optimizing its design. Our general results for higher dimensional thermal machines are presented in Sec.~\ref{sec:results}, where we point out the existence of two different strategies for improving performance. The first strategy consists of adding energy levels to the original thermal cycle, and is analyzed in detail in Sec.~\ref{sec:single-cycle}, while the extension to the case of multi-cycle machines is presented in Sec.~\ref{sec:multi-cycle}. The second strategy, based upon concatenating qutrit machines, is analyzed in Sec.~\ref{sec:concatenated}. Furthermore, in Sec.~\ref{sec:thirdlaw} we discuss the third law of thermodynamics in terms of Hilbert space dimension, while Sec.~\ref{sec:dynamics} is devoted to characterizing the trade-off between the power and speed of operation of the thermal machine, given an explicit model of thermalization. Finally, our conclusions are presented in Sec.~\ref{sec:conclusions}.

\section{The primitive operation}
\label{sec:primitive}
Generally speaking, the working of an autonomous quantum thermal machine can be divided into two steps which are continuously repeated. For clarity, we discuss the case of a fridge powered by two thermal baths at different temperatures.
In the first step, a temperature colder than the cold bath is engineered on a subspace of the machine, i.e. on a subset of the levels comprising the machine. This can be done by selectively coupling levels in the machine to the thermal baths. The second step consists in interacting the engineered subspace with an external physical system to be cooled. We will consider a pair of levels of the machine to constitute our engineering subspace, the population ratio of which can be tuned in order to correspond to a cold temperature. Here we shall refer to this pair of levels as the `virtual qubit', and to its associated temperature as its `virtual temperature' \cite{virtual}. Typically the virtual qubit is chosen to be resonant with the system to be cooled in order to avoid non-energy-conserving interactions. Notably, the swap operation between the virtual qubit and the external physical system can thus be considered as the primitive operation of quantum fridges, and more generally of all quantum thermal machines.

Let us consider a machine comprised of $n$ levels, with associated Hilbert space $\mathcal{H}$ such that ${\rm dim}\mathcal{H} = n$, and Hamiltonian $H_{\rm M}$. Within this machine, we will refer to any pair of levels ($\ket{k}$ and $\ket{l}$) as a {\it transition}, denoted $\Gamma_{k,l}$. Among the $n(n-1)/2$ possible transitions, we focus our attention on a particular pair of levels $\ket{i}$ and $\ket{j}$ with populations $\lambda_i$ and $\lambda_j$ and energies $E_i$ and $E_j > E_i$. Assume the transition $\Gamma_{i, j}$ is coupled to the external system to be cooled, hence representing the virtual qubit. Here it will be useful to introduce two quantities to fully characterize the virtual qubit, namely its normalization $N_{\rm v}$ and its (normalized) bias $Z_{\rm v}$ defined by
\begin{align}
N_{\rm v} ~:=~ \lambda_i + \lambda_j \quad \quad Z_{\rm v} ~:=~ \frac{\lambda_i - \lambda_j}{N_{\rm v}}.
\end{align}
As we focus here on the case where the density operator of the machine is diagonal in the energy basis \cite{footnote1}, we may define its temperature, i.e. the virtual temperature, via the Gibbs relation $\lambda_j = \lambda_i e^{- E_{\rm v}/ k_\mathrm{B} T_{\rm v}}$. That is
\begin{align}
T_{\rm v} := \frac{E_{\rm v}}{k_\mathrm{B} \ln\frac{\lambda_i}{\lambda_j}}
\end{align}
where we defined $E_{\rm v} ~:=~ E_j -E_i$ as the energy gap of the virtual qubit. The virtual temperature is then monotonically related to the bias introduced above by
\begin{equation}
Z_{\rm v} = \tanh(\beta_{\rm v} E_{\rm v}/2)
\end{equation}
where $\beta_{\rm v} = 1/k_\mathrm{B} T_{\rm v}$ is the inverse virtual temperature. Notice that $-1 \leq Z_{\rm v} \leq 1$, where the lower bound represents a virtual qubit with complete population inversion ($\beta_{\rm v} \rightarrow - \infty$) and the upper bound corresponds to the virtual qubit in its ground state $\ket{i}$ ($\beta_{\rm v} \rightarrow +\infty$). Next, we interact the virtual qubit with the physical system via the swap operation. For simplicity, the physical system is taken here to be a qubit with energy gap $E_{\rm v}$, hence resonant with the virtual qubit. We denote the levels of the physical system by $\ket{0}$ and $\ket{1}$, with corresponding populations $p_0$ and $p_1$, and hence bias $Z_{\rm s}= p_0 - p_1$ (note that $N_{\rm s}=1$).
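As a quick numerical sanity check of the relations above (an illustrative addition with arbitrary parameter values, not taken from the original text), one can verify that Gibbs-distributed populations of the virtual qubit reproduce $Z_{\rm v} = \tanh(\beta_{\rm v} E_{\rm v}/2)$:
\begin{verbatim}
import numpy as np

kB, Ev, beta_v = 1.0, 1.3, 0.7   # arbitrary units: k_B, virtual-qubit gap, inverse virtual temperature
Nv = 0.6                         # normalization of the virtual qubit

# Populations of the two virtual-qubit levels with Gibbs ratio lambda_j/lambda_i = exp(-beta_v Ev).
lam_i = Nv / (1 + np.exp(-beta_v * Ev))
lam_j = Nv - lam_i

Tv = Ev / (kB * np.log(lam_i / lam_j))     # virtual temperature from the population ratio
Zv = (lam_i - lam_j) / (lam_i + lam_j)     # normalized bias

print(np.isclose(1 / (kB * Tv), beta_v))             # recovers beta_v
print(np.isclose(Zv, np.tanh(beta_v * Ev / 2)))      # Z_v = tanh(beta_v E_v / 2)
\end{verbatim}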
The swap operation is given by
\begin{align}
U = \mathbb{I} &- \ket{i,1}\bra{i,1} - \ket{j,0}\bra{j,0}~ + \nonumber \\
&+ \ket{i,1}\bra{j,0} + \ket{j,0}\bra{i,1}.
\end{align}
The effect of the swap operation is to modify the bias of the physical system, which changes from $Z_{\rm s}$ to
\begin{equation}\label{swapeffect}
Z_{\rm s}^\prime = N_{\rm v} Z_{\rm v} + (1-N_{\rm v})Z_{\rm s}.
\end{equation}
The above equation can be intuitively understood as follows. With probability $N_{\rm v}$, the virtual qubit is available (i.e. the machine is in the subspace of the virtual qubit), and the swap replaces the initial bias of the system with the bias of the virtual qubit. With the complementary probability, $1-N_{\rm v}$, the virtual qubit is not available, hence the swap cannot take place and the bias of the system remains unchanged. Consequently, the virtual temperature fundamentally limits the temperature the external system can reach. A complete derivation of Eq.~\eqref{swapeffect} can be found in Appendix \ref{AppSwap}. Finally, it is worth noticing that the virtual qubit must be refreshed in order to ensure the continuous operation of the machine. Indeed, after interaction with the system, the virtual qubit is left with the initial bias of the system, $Z_{\rm s}$, and must therefore be reset to the desired bias, $Z_{\rm v}$, in order to continue operating.

Given the above perspective on the working of quantum thermal machines, two different directions to improve the performance of a machine \orange{emerge}. The first consists in optimizing the properties of the virtual qubit ($N_{\rm v} $ and $Z_{\rm v} $) \orange{in order to achieve the desired bias $Z_{\rm s}^\prime$ in the external system ($Z_{\rm s}^\prime \rightarrow 1$ in the case of a fridge)}, which represent the {\it statics} of the machine. The second consists in optimizing the dynamics of the machine, in particular the rate of interaction with the \orange{external} system and the rate at which the virtual qubit is refreshed by contact with the thermal baths. Crucially, whereas the dynamics is model dependent, the statics are model independent, and hence universal properties of the machine. In the following sections, we shall see how the performance of thermal machines can be optimized in the presence of natural constraints, \orange{such as limits on the available energy gaps or on the dimension of its Hilbert space}. Our focus will primarily be on the statics: we will see that increasing the number of levels of the machine will allow for increased performance (for instance the ability to cool to lower temperatures). However, in the last sections, we will move beyond purely static considerations, and discuss the interplay between statics and dynamics. Again we find that machines with more levels can lead to enhanced performance.

\section{Warm-up: qutrit machine}
\label{sec:qutrit}
In order to better illustrate the main concepts, we start our analysis with the smallest possible quantum thermal machine, comprising only three energy levels $\ket{1}$, $\ket{2}$ and $\ket{3}$, working between two thermal baths at different temperatures. This machine can be operated as a fridge or as a heat engine depending on which transitions are coupled to the hot and cold baths. For simplicity, our presentation will focus on the former (see Fig.~\ref{qutrit}). In this case, the transition $\Gamma_{1,3}$ is coupled to the cold bath at inverse temperature $\beta_{\rm c}$, while the transition $\Gamma_{2,3}$ is coupled to the hot bath at $\beta_{\rm h} < \beta_{\rm c}$. Finally, the transition $\Gamma_{1,2}$ is chosen to be the virtual qubit.
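Before analyzing the thermal cycle of the qutrit fridge, we note that the swap primitive of Eq.~\eqref{swapeffect} can be verified directly on such a three-level example. The following Python sketch is an illustrative addition (arbitrary machine populations and system bias, not taken from the original text): it applies the swap unitary to a diagonal machine-system state and compares the resulting system bias with $N_{\rm v} Z_{\rm v} + (1-N_{\rm v}) Z_{\rm s}$.
\begin{verbatim}
import numpy as np

# Diagonal states: a qutrit machine (levels 1,2,3) and a resonant system qubit.
q = np.array([0.5, 0.2, 0.3])              # arbitrary machine populations
Zs = 0.1                                   # initial bias of the system qubit
rho_M = np.diag(q)
rho_s = np.diag([(1 + Zs) / 2, (1 - Zs) / 2])

# Virtual qubit = transition between levels 1 (index 0) and 2 (index 1).
i, j = 0, 1
Nv = q[i] + q[j]
Zv = (q[i] - q[j]) / Nv

def idx(m, s):                             # index of |m, s> in the 3x2 product basis
    return 2 * m + s

# Swap unitary on machine (x) system, exchanging |i,1> and |j,0>.
U = np.eye(6)
a, b = idx(i, 1), idx(j, 0)
U[a, a] = U[b, b] = 0.0
U[a, b] = U[b, a] = 1.0

rho = U @ np.kron(rho_M, rho_s) @ U.conj().T
rho_s_final = np.einsum('msmt->st', rho.reshape(3, 2, 3, 2))   # partial trace over the machine
Zs_prime = rho_s_final[0, 0] - rho_s_final[1, 1]

print(np.isclose(Zs_prime, Nv * Zv + (1 - Nv) * Zs))           # Eq. (swapeffect)
\end{verbatim}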
The operation of the qutrit fridge can be understood as a simple thermal cycle:
\begin{equation}
\ket{2} \xrightarrow{\beta_{\rm h}} \ket{3} \xrightarrow{\beta_{\rm c}} \ket{1},
\end{equation}
in which a quantum of energy $\Delta E_{23} \equiv E_3 - E_2$ is absorbed from the hot bath, making the machine jump from state $\ket{2}$ to $\ket{3}$, followed by a jump from $\ket{3}$ to $\ket{1}$ while emitting a quantum of energy $\Delta E_{13}$ to the cold bath. The cycle is closed by a swap of the virtual qubit, $\Gamma_{1,2}$, with the external qubit to be cooled, as described in Sec.~\ref{sec:primitive}. This cycle involves 3 states, and is thus of length 3. It represents the basic building block of the machine.
\begin{figure}
\caption{The smallest possible fridge, comprising three energy levels.}
\label{qutrit}
\end{figure}
The fact that transitions $\Gamma_{1,3}$ and $\Gamma_{2,3}$ are coupled to baths at different temperatures will allow us to control the (inverse) temperature of the virtual qubit, $\beta_{\rm v}$. While there exist many different possible models for representing the coupling to a thermal bath, the only feature that we will consider here is that, after sufficient time, each transition connected to a bath will thermalize. That is, in the steady state of the machine, the population ratio of a transition $\Gamma_{i,j}$ coupled to a thermal bath will be equal to $e^{- \Delta E_{ij} \beta_{\rm bath}}$, where $\Delta E_{ij}$ is the energy gap of the transition, and $\beta_{\rm bath}$ the inverse temperature of the bath. Under such conditions, the inverse temperature of the virtual qubit and its norm are given by
\begin{align}\label{betavfridge0}
\beta_{\rm v} &= \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h}) \left( \frac{ \Delta E_{13}}{E_{\rm v}} -1 \right), \\
\label{normfridge0}
N_{\rm v} &= \frac{1 + e^{-\beta_{\rm v} E_{\rm v}}}{1 + e^{-\beta_{\rm v} E_{\rm v}} + e^{-\beta_{\rm c} \Delta E_{13}}}
\end{align}
where $E_{\rm v} \equiv \Delta E_{12}$ is the virtual qubit energy gap, chosen to match the energy gap of the qubit to be cooled. Note that we have $\beta_{\rm v} > \beta_{\rm c}$ (since $\Delta E_{13} > E_\mathrm{v}$), implying that the machine works as a refrigerator. At this point, one can already identify various {\it resources} for the control of the virtual temperature $\beta_{\rm v}$. The first is the range of available temperatures, captured by $\beta_{\rm c}$ and $\beta_{\rm h}$. The second is the largest energy gap coupled to a thermal bath, $\Delta E_{13}$. Clearly if $\Delta E_{13}$ is unbounded, then we can cool arbitrarily close to absolute zero, i.e. $\beta_{\rm v}\rightarrow \infty$ as $\Delta E_{13} \rightarrow \infty$ while $N_{\rm v} \rightarrow 1$, implying $Z_{\rm s}^\prime \rightarrow 1$, cf. Eq.~\eqref{swapeffect}. However, it is reasonable to impose a bound on this quantity, which we label $E_{\rm max}$. From physical considerations, one expects that thermal effects play a role only up to a certain energy scale. In general, a thermal bath is characterized by a spectral density with a cutoff at high frequencies. This implies the existence of an energy above which there exists only a negligible number of systems in the bath. In any case, the coldest achievable temperature given this maximum energy is then given by
\begin{align}\label{betavfridge1}
\beta_{\rm v} &= \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h}) \left( \frac{ E_{\rm max}}{E_{\rm v}} -1 \right).
\end{align} As mentioned above, the qutrit machine can also work as a heat pump or heat engine, if one switches the hot and cold baths. Imposing again a maximum energy gap coupled to a bath we obtain the following lower bound in the inverse virtual temperature \begin{align}\label{betavengine1} \beta_{\rm v} &= \beta_{\rm h} - (\beta_{\rm c}-\beta_{\rm h}) \left( \frac{ E_{\rm max}}{E_{\rm v}} -1 \right) . \end{align} Notice that in this case $\beta_{\rm v} < \beta_{\rm h}$. Moreover, when $\beta_\mathrm{c}/(\beta_\mathrm{c} - \beta_\mathrm{h}) < E_\mathrm{max}/E_\mathrm{v}$, then $\beta_\mathrm{v} < 0$, and the machine transitions from a heat pump to a heat engine. \section{Summary of results} \label{sec:results} We have seen that imposing a bound on the maximum energy gap the performance of the simplest qutrit machine becomes limited through the range of accessible virtual temperatures. The general question investigated below is whether these limits can be overcome. That is, can we engineer colder temperatures (or hotter ones, as well as achieving population inversion) by using more sophisticated machines? \begin{figure} \caption{Sketch of machines discussed in the present work. We consider several generalizations of the simplest qutrit machine (top left). We first discuss single cycle machine (top right), which can then be extended to multi-cycle machines (bottom right). Second, we study concatenated qutrit machines (bottom left). \label{outlook} \label{outlook} \end{figure} Clearly, in order to optimize the effect the machine has on the physical system, there are two important features the virtual qubit should have following Eq.~\eqref{swapeffect}. First, it should have a high bias $Z_{\rm v}$. Second, the norm $N_\mathrm{v}$ should be as close to one as possible. Below we discuss different classes of multilevel machines, and investigate the range of available virtual qubits as a function of the number of levels $n$ of the machine. First we will see that the range of accessible virtual temperatures (or equivalently bias $Z_{\rm v}$) increases as $n$ increases. Hence machines with more levels allow one to reach lower temperatures, given fixed thermal resources. However, this usually comes at the price of having a relatively low norm $N_{\rm v}$ for the virtual qubit, which is clearly a detrimental feature. Nevertheless we will see that it is always possible to bring the norm back to one by adding extra levels. We discuss two natural ways to generalize the qutrit machines to more levels, sketched in Fig.~\ref{outlook}. The first one consists in adding levels and thermal couplings in order to extend the length of the thermal cycle. In other words, while the qutrit machine represents a machine with one cycle of length three, we now consider machines with a single cycle of length $n$. This will allow us to improve both the bias and the normalization of the virtual qubit. We first characterize the optimal single-cycle machine, which in the limit of large $n$, approaches perfect bias (i.e. zero virtual temperature, or perfect population inversion). However, while the norm $N_{\rm v}$ does not vanish, it is bounded away from one in this case. We then show how the norm can be further increased to one by extending the optimal single-cycle machine to a multi-cycle machine. This procedures requires the addition of $n-2$ levels, while maintaining the same bias. 
In Fig.~\ref{mainres} we show the range of available virtual qubits (as characterized by its norm $N_{\rm v}$ and bias $Z_{\rm v}$) as a function of the number of levels $n$, for single cycle machines (green dots) and multi-cycle machines (blue dots). \begin{figure} \caption{Performance of machines as a function of dimension. The accessible virtual qubit, characterized by the bias $Z_{\rm v} \label{mainres} \end{figure} Next, we follow a second possibility which consists in concatenating $k$ qutrit machines. The main idea is that the hot bath is now effectively replaced by an even hotter bath/source of work, engineered via the use of an additional qutrit heat pump/engine. In the limit of $k$ large, we can also approach perfect bias and the norm tends to one (see red dots on Fig.~\ref{mainres}), similarly to the multi-cycle machine. It is however worth mentioning that in this case the machine has now $n=3^k$ levels, while the multi-cycle machine used only a number of levels linear in $n$. The above results, which are summarized in Fig.~\ref{mainres}, clearly demonstrate that machines with a larger Hilbert space can outperform smaller ones, which implies that the Hilbert space dimension should be considered a thermodynamical resource. Note that, for clarity, results are generally discussed for the case of fridges, but hold also for heat engines {\it mutatis mutandis}. \section{Single-cycle machines}\label{sec:single-cycle} We start by discussing thermal machines featuring an arbitrary number of levels, $n$, but only a single thermal cycle. We define a {\it $n-$level (thermal) cycle machine} as a quantum system with Hilbert space $\mathcal{H}$ of dimension $n$, and Hamiltonian $H= \sum_{j=1}^n E_j \ket{j}\bra{j}$, where every transition $\Gamma_{j,{j+1}}$, is coupled to a thermal bath. It is worth mentioning that the levels $\{ \ket{j}\}$, with $1\leq j\leq n$, are not necessarily ordered with respect to its associated energies $E_j$. We further denote the energy gap of the transition $\Gamma_{j,{j+1}}$ as $\Delta E_{j,{j+1}} = E_{{j+1}} - E_{j}$, and the temperature of the bath coupled to this transition is labelled as $\beta_{j,{j+1}}$. We choose the transition $\Gamma_{1,n}$ to correspond to the virtual qubit of the machine, whose energy gap, $E_{\rm v}$, obeys the following consistency relation \begin{equation}\label{virtualgap} E_{\rm v} = \sum_{j=1}^{n-1} E_{{j+1}} - E_{j} = \sum_{j=1}^{n-1} \Delta E_{j,{j+1}}. \end{equation} In the absence of any additional couplings, the machine approaches a steady state, as each transition tends to equilibrate with the thermal bath to which it is coupled. We notice that each level is involved in at least one thermal coupling. This implies that the density matrix of the steady state must be diagonal in the energy basis, as all off-diagonal elements decay away due to the thermal interactions. Additionally, the populations of the two levels in each transition are given by the Gibbs ratio corresponding to the temperature of the bath. Labeling the population of the $\ket{j}$ state as $p_{j}$, we have \begin{equation} \frac{p_{{j+1}}}{p_{j}} = e^{-\beta_{j,{j+1}} \Delta E_{j,{j+1}}} \quad \text{for } 1 \leq j \leq n-1. \end{equation} The above $n-1$ thermal couplings determine the ratios between all of the populations $\{p_{j}\}$. Together with the normalization condition $\sum_j p_{j}=1$, this completely determines the steady state of the machine \cite{footnote3}. 
The virtual temperature corresponding to the transition $\Gamma_{1, n}$ can hence be obtained from
\begin{align}
e^{-\beta_{\rm v} E_{\rm v}} &= \frac{p_{n}}{p_{1}} = \frac{p_{n}}{p_{n-1}} \frac{p_{n-1}}{p_{n-2}} ~...~ \frac{p_{2}}{p_{1}},
\end{align}
leading to
\begin{align}
\beta_{\rm v} &= \sum_{j=1}^{n-1} \beta_{j,{j+1}} \frac{\Delta E_{j,{j+1}}}{E_{\rm v}}.
\label{cyclebetav}
\end{align}
Similarly one may calculate the norm of the virtual qubit,
\begin{align}\label{cycleNv}
N_{\rm v} &= \left( \frac{1 + e^{-\beta_{\rm v} E_{\rm v}}}{1 + \sum_{j=1}^{n-1} \prod_{k=1}^{k=j} e^{-\beta_{k, {k+1}} \Delta E_{k, {k+1}}}} \right).
\end{align}
We are interested in the best single cycle machine, that is, the one which, using a limited set of {\it resources}, achieves the largest change in the bias of the system acted upon, $Z_{\rm s}' -Z_{\rm s}$, as given in Eq.~\eqref{swapeffect}. This corresponds to the one that achieves the largest possible bias, $Z_{\rm v}$, together with the largest norm, $N_{\rm v}$, given this optimized bias. In what follows we determine the optimal single cycle machine with $n$ levels, given the bath temperatures and a bound $E_{\rm max}$ on the energy of a coupled transition.

\subsection{Optimal single-cycle machine}
\begin{figure}
\caption{Sketch of the optimal single-cycle refrigerator, for an even number of levels $n$.}
\label{optimalcycle}
\end{figure}
The optimal single-cycle fridge, sketched in Fig.~\ref{optimalcycle}, has a rather simple structure. All but one of its transitions are at the maximal allowed energy, $E_{\rm max}$. Roughly, the first half of the transitions (starting from the upper state of the virtual qubit) are all connected to the hot bath, while the second half of the transitions are connected to the cold bath. A complete proof of optimality can be found in Appendix \ref{AppOptTech}. Furthermore, explicit expressions for the inverse virtual temperature and the norm in this case can be easily obtained from Eqs.~(\ref{cyclebetav}) and (\ref{cycleNv}). For the case of a refrigerator with an even number of levels $n$, they read
\begin{eqnarray}\label{betav}
\beta_{\rm v}^{(n)} &=& \beta_{\rm c} + \left( \beta_{\rm c} - \beta_{\rm h} \right) \left( \frac{n}{2} -1 \right) \frac{E_{\rm max}}{E_{\rm v}} \\
\label{Nv}
N_{\rm v}^{(n)} &=& \frac{ 1 + e^{-\beta_{\rm v}^{(n)} E_{\rm v}} }{ \frac{ 1 - e^{-\frac{n}{2} \beta_{\rm c} E_{\rm max}} }{1 - e^{-\beta_{\rm c} E_{\rm max}} } + e^{-\beta_{\rm v}^{(n)} E_{\rm v}} \frac{ 1 - e^{-\frac{n}{2} \beta_{\rm h} E_{\rm max}} }{ 1 - e^{-\beta_{\rm h} E_{\rm max}} } },
\end{eqnarray}
while the complete results for all $n$, and for heat engines, are given, respectively, in Appendices \ref{AppOptTech} and \ref{switching}. Let us now discuss the performance of the optimal machine. As becomes apparent from Eq.~\eqref{betav}, the number of levels $n$ is clearly a thermodynamical resource, as it allows one to reach colder temperatures. Indeed, one finds that the inverse virtual temperature is improved by a fixed amount whenever two extra levels are added,
\begin{equation}
\left( \beta_{\rm v}^{(n+2)} - \beta_{\rm v}^{(n)} \right) E_{\rm v} = \left( \beta_{\rm c} - \beta_{\rm h} \right) E_{\rm max}.
\end{equation}
This relation encapsulates the interplay between the resources involved in constructing a quantum thermal machine: the range of available thermal baths $\{\beta_{\rm c}, \beta_{\rm h}\}$, the range of thermal interactions ($E_{\rm max}$), and the number of levels $n$.
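To illustrate Eqs.~\eqref{betav} and \eqref{Nv}, the short sketch below (an illustrative addition; the bath temperatures and energies are arbitrary) evaluates $\beta_{\rm v}^{(n)}$ and $N_{\rm v}^{(n)}$ for increasing even $n$, showing the linear growth of the inverse virtual temperature and the saturation of the norm at $1-e^{-\beta_{\rm c}E_{\rm max}}$.
\begin{verbatim}
import numpy as np

beta_c, beta_h = 1.0, 0.5       # arbitrary inverse bath temperatures (beta_c > beta_h)
E_max, E_v = 2.0, 0.4           # maximal thermally coupled gap and virtual-qubit gap

def beta_v(n):                  # Eq. (betav), even n
    return beta_c + (beta_c - beta_h) * (n / 2 - 1) * E_max / E_v

def N_v(n):                     # Eq. (Nv), even n
    bv = beta_v(n)
    num = 1 + np.exp(-bv * E_v)
    den = ((1 - np.exp(-n / 2 * beta_c * E_max)) / (1 - np.exp(-beta_c * E_max))
           + np.exp(-bv * E_v) * (1 - np.exp(-n / 2 * beta_h * E_max)) / (1 - np.exp(-beta_h * E_max)))
    return num / den

for n in (4, 6, 10, 20, 50):
    print(n, beta_v(n), N_v(n))
print("asymptotic norm:", 1 - np.exp(-beta_c * E_max))
\end{verbatim}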
Remarkably, as the inverse virtual temperature $\beta_{\rm v}$ increases linearly with $n$, one can engineer a virtual temperature arbitrarily close to absolute zero. Similarly, for a heat engine, one can obtain a virtual qubit arbitrarily close to perfect population inversion. This is possible because as $n$ increases, the norm of the virtual qubit does not decrease arbitrarily, but remains bounded away from zero. Indeed, from Eq.~\eqref{Nv}, the norm asymptotically approaches a finite value
\begin{equation}
\lim_{n\rightarrow\infty} N_{\rm v}^{(n)} = \left( 1 - e^{-\beta_{\rm c} E_{\rm max}} \right),
\end{equation}
which is, interestingly, independent of both $\beta_{\rm h}$ and $E_{\rm v}$.

Finally, we briefly comment on the efficiency (also often referred to as the \emph{coefficient of performance} (COP)) of the optimal single cycle machine. Here we adopt the standard definition of the efficiency of an absorption refrigerator, that is, the ratio between the heat extracted from the object to be cooled and the heat extracted from the hot bath. This can be easily calculated by looking at a single complete cycle of the machine. Imagine that a quantum $E_{\rm v}$ of heat is extracted from the external qubit, in the jump $\ket{1} \rightarrow \ket{n}$ produced by the swap operation. To complete the cycle, the following sequence of jumps must necessarily occur:
\begin{equation}
\ket{n} \xrightarrow{\beta_{\rm h}} ... \xrightarrow{\beta_{\rm h}} \ket{n/2 +1} \xrightarrow{\beta_{\rm c}} \ket{n/2} \xrightarrow{\beta_{\rm c}} ... \xrightarrow{\beta_{\rm c}} \ket{1}
\end{equation}
where $n/2 - 1$ energy quanta $E_{\rm max}$ of heat are absorbed from the hot bath, while $n/2 - 1$ quanta $E_{\rm max}$ and one quantum $E_{\rm v}$ of heat are released to the cold bath. The efficiency is hence given by
\begin{align}\label{efficiency}
\eta_\mathrm{fridge}^{(n)} = \frac{E_{\rm v}}{\left( \frac{n}{2} -1 \right) E_{\rm max}} = \frac{\beta_{\rm c} - \beta_{\rm h}}{\beta_{\rm v}^{(n)} - \beta_{\rm c}},
\end{align}
where the second equality follows by exploiting Eq.~\eqref{betav} (see Appendix D). Crucially, Eq.~\eqref{efficiency} corresponds to the Carnot efficiency for an endoreversible absorption refrigerator that is extracting heat from a bath at inverse temperature $\beta_{\rm v}^{(n)} \geq \beta_{\rm c} \geq \beta_{\rm h}$. That is, if the object to be cooled (now an external bath) is infinitesimally above the temperature of the virtual qubit (such that the virtual qubit cools it down by an infinitesimal amount), then the efficiency (COP) of this process approaches the Carnot limit. Note that such absorption refrigerators have the property that the COP drops as the temperature of the cold reservoir drops. In the present case, the virtual temperature drops as $\beta_\mathrm{v}^{(n)}$ grows linearly with $n$, and so too does the efficiency of the machine. Intuitively, this makes sense, since the amount of heat drawn from the hot bath (per cycle) increases linearly with $n$, while the heat extracted from the object to be cooled remains constant (see Fig.~\ref{optimalcycle}).

\section{Multi-cycle machines}\label{sec:multi-cycle}
We have seen that the optimal single cycle machine can enhance the virtual temperature by increasing the number of levels $n$. Basically, this comes at the price of having the norm $N_{\rm v}$ relatively low, which is clearly a detrimental feature. Hence, it is natural to ask if, by adding levels, the norm can be brought back to unity while keeping the same virtual temperature.
Below we will see that this is always possible, and in fact, requires only (roughly) twice the number of levels. For clarity, we illustrate the method starting from the qutrit fridge, that has a virtual qubit whose norm is strictly smaller than $1$. By adding a fourth level, we will achieve $N_{\rm v}=1$, while maintaining the bias. The fourth level is chosen specifically so that $E_4 = E_{\rm v} + E_{\rm max}$, and the transition $\Gamma_{2,4}$ is coupled to the cold bath (see Fig.~\ref{improveNv}(a)). Hence by design, the new transition $\Gamma_{3,4}$ has the same energy gap $E_{\rm v}$ as the original virtual qubit $\Gamma_{1,2}$. Furthermore, one can verify that both transitions possess the same virtual temperature. In fact one can identify two $3-$level fridge cycles at work in the new system, $\{ \ket{2} \rightarrow \ket{3} \rightarrow \ket{1} \}$ and $\{ \ket{4} \rightarrow \ket{2} \rightarrow \ket{3} \}$. Thus one could also connect $\Gamma_{3,4}$ to the external system that is to be cooled. Since the two transitions can be coupled at the same time to the external system, they both contribute to the virtual qubit. Thus, the norm of the (total) virtual qubit is obtained by summing the populations of each transition (virtual qubit). As the two transitions include all four levels, we find that $N_{\rm v}=1$. \begin{figure} \caption{Starting from the qutrit fridge, and adding a fourth level $\ket{4} \label{improveNv} \end{figure} Alternatively, one could view the four level machine as consisting of two real qubits, see Fig.~\ref{improveNv} (b). As one of these real qubits corresponds to the virtual qubit, it follows that its norm must be $N_{\rm v}=1$. We term this procedure the {\it virtual qubit amplification} of a single cycle machine. Next, we show explicitly how to perform the above construction starting from any $n$ level single cycle machine. This general virtual qubit amplification procedure requires the addition of $n-2$ additional levels. This is the most economical procedure possible, since the original $n$ level cycle contains $n-2$ levels which do not contribute to the virtual qubit. The general construction works as follows. Consider a single $n$-level thermal cycle machine as described in Sec.~\ref{sec:single-cycle} : a set of $n$ levels with corresponding energies $E_{j}$ ($1\leq j \leq n$), subsequent $n-1$ transitions coupled to thermal baths at corresponding inverse temperatures $\beta_{j, {j+1}}$, and virtual qubit $\Gamma_{1, n}$, where $E_{n} - E_{1} = E_{\rm v}$. To amplify the virtual qubit, one now adds $n-2$ energy levels. Each new level is added in order to form a virtual qubit with each level of the original cycle except for the virtual qubit levels $\ket{1}$ and $\ket{n}$ (see Fig.~\ref{qubitrealization1}). The energy of the new levels must be chosen such that \begin{align} E_{{j+n-1}} = E_{j} + E_{\rm v} \end{align} where $j$ runs from $2$ to $n-1$. The corresponding thermal couplings are chosen in such a manner that the structure of the cycle from $j=n$ to $j=2n-2$ is identical to the structure from $j=1$ to $j=n-1$. Specifically, this means choosing \begin{align} \beta_{{j+n-2},{j+n-1}} &= \beta_{{j-1},j}. \end{align} Following this procedure we finish with a final Hilbert space for the machine $\mathcal{H}$ with total dimension $n' \equiv {\rm dim}\mathcal{H} =2(n-1)$. One can verify that all the new virtual qubits ($\Gamma_{{1+j}, {n+j}}$) have the same virtual temperature $\beta_{\rm v}$ as the original virtual qubit $\Gamma_{1, n}$. 
None of these transitions share an energy level, i.e. they are mutually exclusive, and together they comprise all of the $2n-2$ levels present in the system. If every one of these transitions is connected together to the external system, then the effective virtual qubit reaches norm $N_{\rm v}=1$ as required. The inverse virtual temperature of the multi-cycle fridge can hence be expressed in terms of the total number of levels $n'$. For instance in the case of $n$ even, we have: \begin{equation}\label{eq:multibetafin} \beta_{\rm v}^{(n')} = \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h})\left(\frac{n'}{4} - \frac{1}{2} \right) \frac{E_{\rm max}}{E_{\rm v}}. \end{equation} Finally we note that, as in the simple case discussed above, the final machine can be viewed as a tensor product of an $n-1$ level cycle and the virtual qubit (which now becomes a real qubit since $N_{\rm v}=1$). In fact, this procedure also allows one to easily convert a fridge into a heat engine, and vice versa, as discussed in Appendix \ref{switching}. The virtual qubit amplification procedure is schematically depicted for the case of a $5-$level fridge cycle in Fig.~\ref{qubitrealization1}. \begin{figure} \caption{(a) Starting from a 5 level fridge, and adding 3 levels (dashed lines), the norm of the virtual qubit can be boosted to $N_{\rm v} \label{qubitrealization1} \end{figure} \section{Concatenated qutrit machines} \label{sec:concatenated} As we commented previously, a different possibility for generalizing the simplest qutrit machine consists in concatenating several qutrit machines. Here we analyze this possibility by characterizing the virtual qubits achievable by concatenating $k$ qutrit machines (see Sec.~\ref{sec:qutrit}). For simplicity we start with case of concatenating $k=2$ qutrit machines in order to obtain a better fridge. The coupling between the two qutrit machines can be achieved considering a simple swap Hamiltonian coupling the transitions $\Gamma_{2,3}^{(1)}$ and $\Gamma_{2,3}^{(2)}$: \begin{align} H_{\rm int} &= g ( \ket{2,3}\bra{3,2} + {\rm h.c.} ), \end{align} as shown on Fig.~\ref{2qutrit}. Here the first qutrit machine represents the actual fridge while the second one works as a heat engine, replacing the hot bath on the transition $\Gamma_{2,3}^{(1)}$. This corresponds to coupling $\Gamma_{2,3}^{(1)}$ to an effective temperature which is hotter than \orange{the temperature of the hot bath (or equivalently inverse temperature lower than $\beta_{\rm h}$)}, resulting in a fridge \orange{with an improved} bias $Z_{\rm v}$. Indeed the \orange{inverse} virtual temperature achieved by the concatenated qutrit machine is found to be \begin{align}\label{betavfridge2} \beta_{\rm v}^{(2)} = \beta_{\rm c} + (\beta_{\rm c}-\beta_{\rm h}) \frac{E_{\rm max}}{E_{\rm v}}, \end{align} which is colder than the virtual temperature of the simple qutrit fridge (see Eq.~\eqref{betavfridge1}). Importantly, this enhancement has been achieved without modifying the value of $E_{\rm max}$, and considering the same temperatures $\beta_{\rm c}$ and $\beta_{\rm h}$ \orange{for the thermal baths}. Details about calculations are given in Appendix \ref{qutrits}. \begin{figure} \caption{By concatenating two qutrit machines, one obtains a better fridge, outperforming the simple qutrit fridge. Specifically, the new $6$-level machine consists now a qutrit fridge (left) which is boosted via the use of a qutrit heat engines (right). 
The role of this heat engine is to create an effectively hotter temperature (hotter than $T_{\rm h} \label{2qutrit} \end{figure} The process may now be iterated, replacing the coupling of $\Gamma_{2,3}^{(2)}$ to the cold bath $\beta_{\rm c}$ by a coupling to a third qutrit fridge, effectively at a temperature colder than $\beta_{\rm c}$, and so on, as sketched in Fig.~\ref{nqutritlimit}. In this manner one can construct a machine resulting of the concatenation of $k$ qutrit machines. Following calculations given in Appendix \ref{qutrits}, we obtain simple expressions for the virtual temperatures \begin{equation}\label{nqutrit} \beta_{\rm v}^{(k)} = \begin{cases} \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h})\frac{k}{2}\frac{E_{\rm max}}{E_{\rm v}} & \text{ if $k$ is even,} \\ \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h})\left( \frac{k+1}{2}\frac{E_{\rm max}}{E_{\rm v}} - 1 \right) & \text{ if $k$ is odd.} \end{cases} \end{equation} Again, we see that the virtual temperature approaches absolute zero as $k$ becomes large. Similarly for a concatenated heat engine, one can approach perfect inversion \orange{(see details in Appendix \ref{qutrits})}. \begin{figure} \caption{Concatenating many qutrit machines.\label{nqutritlimit} \label{nqutritlimit} \end{figure} \twocolumngrid Note that the above expressions are similar to those obtained for the virtual temperature in the case of the single cycle machine. In particular setting $k=n-2$ we obtain exactly the same result. This correspondence can be intuitively understood via the following observations. First, the single qutrit machine is the same as a $3-$level cycle. Furthermore, the effect of replacing one of the thermal couplings in a qutrit machine by a coupling to an additional qutrit effectively replaces one thermal coupling by two, thus increasing the number of thermal interactions within the working cycle by one. For example, in the two qutrit fridge (Fig.~\ref{2qutrit}), the effective thermal cycle is \begin{equation} \ket{22} \xrightarrow{\beta_{\rm c}} \ket{21} \xrightarrow{\beta_{\rm h}} \ket{23} \xrightarrow{H_{\rm int}} \ket{32} \xrightarrow{\beta_{\rm c}} \ket{12}. \end{equation} Although this is a cycle of length 5, the virtual temperature is only influenced by the 3 thermal couplings, \orange{because} the coupling on the degenerate transition $\ket{23}\leftrightarrow\ket{32}$ \orange{has zero energy gap (}see Eq.~\eqref{cyclebetav}). Since the thermal couplings are the same as those in the optimal $4-$level fridge single cycle, we get the same virtual temperature. By induction, the $k-$qutrit machine has the same $\beta_{\rm v}$ (and indeed the same thermal couplings within its working cycle) as the optimal $(k+2)-$level single cycle. \orange{Finally,} it is also important to discuss the behavior of the norm $N_{\rm v}$ of the virtual qubit in order to characterize the performance of the concatenated machine. Interestingly we find that $N_{\rm v} \rightarrow 1$ in the limit of large $k$. This can be intuitively understood for the case of the concatenated heat engine, \orange{depicted in} Fig.~\ref{nqutritlimit}. As $k$ becomes large, the virtual temperature $\beta_{\rm v}$ approaches $-\infty$. Thus the population ratio $\frac{p_1}{p_2} \rightarrow 0$, implying that $p_1 \rightarrow 0$. However, since $\Gamma_{1,3}^{(1)}$ is coupled to a thermal bath at $\beta_{\rm h}$, the population ratio $\frac{p_3}{p_1}$ equals $e^{-\beta_{\rm h} E_{\rm max}}$, implying that $p_3 \rightarrow 0$. 
Thus in the limit $k \rightarrow \infty$, the state of the first qutrit approaches the pure state $\ket{2}\bra{2}$, and thus $N_{\rm v} = p_1 + p_2 \rightarrow 1$. To understand the case of the fridge, consider in Fig.~\ref{nqutritlimit} that the machine begins with the second qutrit instead of the first one. This is now a fridge, where the virtual qubit is the transition $\Gamma_{2,3}^{(2)}$. By a similar analysis to the above, we find that the state of the qutrit approaches $\ket{2}\bra{2}$ in the limit $k \rightarrow \infty$, and thus $N_{\rm v} \rightarrow 1$. It is instructive to observe that in both cases, the concatenation of qutrit machines takes the state of the original qutrit closer to the state where all of the population is in the middle level $\ket{2}\bra{2}$, which is both the ideal fridge with respect to $\Gamma_{2,3}$, and the ideal machine with respect to $\Gamma_{1,2}$. Therefore we can conclude that, again, the performance increases with the number of levels, or equivalently with the dimension \orange{of the machine Hilbert space, $n \equiv {\rm dim}\mathcal{H} = 3^k$}. Indeed, as $k$ increases, the virtual qubit bias approaches $Z_{\rm v}=1$ (or $Z_{\rm v}=-1$ for a heat engine), while \orange{its} norm becomes maximal, i.e. $N_{\rm v} \rightarrow 1$. However, notice that \orange{in this case} the dimension of the machine grows rapidly. \orange{Indeed the inverse virtual temperature now grows only logarithmically with the total number of levels, $n$. For instance when $k$ is even we have:
\begin{equation}
\beta_{\rm v}^{(n)} = \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h}) \left( \frac{\log_3 n}{2} \right) \frac{E_{\rm max}}{E_{\rm v}}
\end{equation}
to be compared with the multi-cycle fridge case in Eq.~\eqref{eq:multibetafin}.}
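The different scalings can be made explicit numerically; the sketch below (an illustrative addition with arbitrary parameters) compares $\beta_{\rm v}$ for a multi-cycle machine with $n'$ levels, Eq.~\eqref{eq:multibetafin}, and for a concatenated machine with the same total number of levels $n=3^k$ ($k$ even).
\begin{verbatim}
import numpy as np

beta_c, beta_h = 1.0, 0.5        # arbitrary inverse bath temperatures
E_max, E_v = 2.0, 0.4

def beta_v_multicycle(n_levels):     # Eq. (eq:multibetafin)
    return beta_c + (beta_c - beta_h) * (n_levels / 4 - 0.5) * E_max / E_v

def beta_v_concatenated(n_levels):   # k even, n_levels = 3**k
    k = np.log(n_levels) / np.log(3)
    return beta_c + (beta_c - beta_h) * (k / 2) * E_max / E_v

for k in (2, 4, 6, 8):
    n = 3**k
    print(n, beta_v_multicycle(n), beta_v_concatenated(n))
# The multi-cycle inverse temperature grows linearly in n, the concatenated one only as log_3(n).
\end{verbatim}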
\orange{
\section{Third law}
\label{sec:thirdlaw}
The above results show that when the dimension of the Hilbert space of the thermal machine tends to infinity, the virtual temperature can approach absolute zero even though the maximal energy gap which is coupled to a thermal bath is finite. Nevertheless, an important point is that, in all the constructions given, for any finite $n$, the lowest possible temperature is always strictly greater than zero. This can be directly seen from the expressions for the inverse virtual temperature of the optimal single-cycle machines, as given in Eq.~\eqref{betav} and Appendix \ref{AppOptTech}. Therefore any single-cycle fridge requires an infinite number of levels in order to cool to absolute zero. Next, we notice that the lowest temperature that any other multi-cycle machine, with different virtual qubits working in parallel, can achieve is bounded by the lowest temperature achieved in any of these cycles. This follows from the fact that the effect of multiple cycles on the virtual qubit can be decomposed as a sum of the effects of the individual cycles. Thus, the bound on the temperature we derive for single-cycle $n$ level machines holds for general machines with $n$ levels.}
Therefore we obtain a statement of the third law in terms of Hilbert space dimension. In particular, from \eqref{swapeffect} we see that the bias (and therefore the temperature) and the norm of the virtual qubit determine the temperature to which an external object can be brought in a single (or multiple) cycle(s) of a thermal machine. The fact that the virtual temperature only approaches zero as the dimension of the thermal machine approaches infinity shows that bringing an external object to absolute zero requires a machine with an infinite number of levels. This is a static version of the third law, complementary to previous statements \cite{Masanes,amikam}, stated in terms of the number of steps, time, or energy required in order to reach absolute zero. Finally, we note that in the case of the multi-cycle machine, since the norm of the virtual qubit is unity, in a single swap operation the external object is brought to exactly the temperature of the virtual qubit. Thus, using a machine of Hilbert space dimension $n$, we can cool an external object to the inverse temperature \eqref{eq:multibetafin}, which corresponds asymptotically to the scaling
\begin{equation}
T_\mathrm{s} \sim \frac{1}{n},
\end{equation}
\textit{i.e.}~the temperature scales inversely with the Hilbert space dimension.

\section{Statics vs dynamics for single-cycle machines}\label{sec:dynamics}
So far, we have discussed improving the {\it static} configuration of the thermal machine by increasing its dimension. This analysis characterizes the task of cooling (or heating) an external system via a single swap, a so-called {\it single shot} thermodynamic operation. However, more generally we are interested in continuously cooling the external system, as the latter is unavoidably in contact with its own environment, and thus requires repeated swaps with the virtual qubit in order to maintain the cooling (or heating) effect. As we have seen in Sec.~\ref{sec:primitive}, after a single swap between the virtual qubit and the external system, the bias of the virtual qubit is switched with that of the external system. Thus the virtual qubit needs to be ``reset'' before the next interaction is possible, an operation which requires some time to be performed, and hence introduces limitations on the power of the machine. This ``time of reset'' depends in general on the thermalization model, which forces us to go beyond purely static considerations. To illustrate this point we will discuss here the dynamics of single-cycle refrigerators. Intuitively, one may expect that the time of reset of the virtual qubit increases as the number of levels in the cycle increases, i.e. the larger the cycle of the machine, the longer it takes the machine to perform the series of jumps reinitializing it. This introduces the following tradeoff. Previously we saw that machines with longer cycles were able to achieve lower temperatures for a single swap. However, they would also take longer to reset. Therefore, in order to engineer a good fridge, one could consider (i) a high-dimensional fridge (i.e. a long cycle) achieving low temperatures at a slower rate, or (ii) a low-dimensional fridge that does not reach temperatures as low, but operates at a faster rate. In order to find out which regime is better, we consider single-cycle fridges coupled to thermal baths, as modelled by a Markovian master equation. Since the thermalization occurs here only on transitions, the specific details of the model are not crucial, and all models (either simple heuristic ones \cite{linden10} or those obtained from explicit microscopic derivations \cite{breuer}) lead to the same qualitative conclusions. We find that the relevant parameter is the timescale at which the external system interacts with its environment, $\tau_{\rm s}$. If this timescale is short, then the fridge has little time to `reset' the virtual qubit.
Therefore a shorter cycle, that resets quickly, is optimal in this case. If on the contrary the system timescale is long, there is more time available in order to reset the virtual qubit. Thus a longer cycle, providing lower temperatures, is preferable. This trade-off is illustrated in Fig.~\ref{betavsN}. \begin{figure} \caption{Relationship between the steady-state virtual temperature and the length of the cycle. We consider various equilibration timescales, $\tau_{\rm s} \label{betavsN} \end{figure} \twocolumngrid We also observe from Fig.~\ref{betavsN} that, for given timescale $\tau_{\rm s}$, there is an optimal length of the cycle. In Fig.~\ref{optnvst}, we plot the optimal length of the cycle for different timescales. The optimal length appears to be logarithmic with respect to $\tau_{\rm s}$. However, for fast timescales, we observe that the optimal cycle has length 4. This suggests that the simplest qutrit machine is always outperformed in this regime. \begin{figure} \caption{Length of the optimal cycle versus equilibration timescale $\tau_{\rm s} \label{optnvst} \end{figure} \twocolumngrid \section{Discussion/Conclusion} \label{sec:conclusions} We discussed the performance of quantum absorption thermal machines, in particular with respect to the size of the machine. Specifically, we considered several designs of machines with $n$ levels and described the static properties of the machine, in particular the range of available virtual qubits, which characterizes the fundamental limit of the machine. Notably, as $n$ increases, a larger range of virtual temperatures becomes available, showing that a machine with $n+1$ levels can outperform a machine with $n$ levels. Moreover, in order to achieve virtual qubits with perfect bias (i.e. achieving a virtual qubit at zero temperature, or with complete population inversion), the required number of levels $n$ diverges. This can be viewed as a statement of the third law, complementary to previous ones. Usually stated in terms of number of steps, time, or energy required in order to reach absolute zero temperature, we obtain here a statement of the third law in terms of Hilbert space dimension: reaching absolute zero requires infinite dimension. Moreover, we also discussed machines with multiple cycles running in parallel. Here performance is increased, as the norm of the virtual qubit can be brought to one, i.e. the virtual qubit becomes a real one. Finally, similar performance is achieved for a design based on the concatenation of the simplest qutrit machine. While generally suboptimal in terms of performance, this design gives nevertheless a more intuitive picture and may be more amenable to implementations, as the couplings are simpler. An outstanding question left open here concerns the performance of machine where multiple cycles run in parallel. In particular, it would be interesting to understand how to design the most effective machine, given a fixed number of levels (as well as constraints on the energy and temperatures). One may expect that the time necessary to reset the machine is considerably decreased, providing potentially a strong advantage over single-cycle machines. \begin{thebibliography}{99} \bibitem{schulz} H.~E.~D. Scovil and E.~O. Schulz-DuBois, Phys. Rev. Lett. {\bf 2}, 262 (1959). \bibitem{linden10} N.~Linden, S.~Popescu, P.~Skrzypczyk, Phys. Rev. Lett. \textbf{105}, 130401 (2010). \bibitem{palao} J. P. Palao, R. Kosloff, and J. M. Gordon, Phys. Rev. E \textbf{64}, 056130 (2001). \bibitem{levy} A. Levy, R. 
Alicki, and R. Kosloff, Phys. Rev. E {\bf 85}, 061126 (2012). \bibitem{magic} R. Silva, P. Skrzypczyk, N. Brunner, Phys. Rev. E {\bf 92}, 012136 (2015). \bibitem{book} J. Gemmer, M. Michel, G. Mahler, Quantum Thermodynamics, Lecture Notes in Physics, Springer (2009). \bibitem{review1} R. Kosloff, A. Levy, Annual Review of Physical Chemistry {\bf 65}, 365-393 (2014). \bibitem{review2} D. Gelbwaser-Klimovsky, Wolfgang Niedenzu, Gershon Kurizki, Advances In Atomic, Molecular, and Optical Physics {\bf 64}, 329 (2015). \bibitem{skrzypczyk} P.~Skrzypczyk, N.~Brunner, N.~Linden, S.~Popescu, J. Phys. A: Math. Theor. \textbf{44}, 492002 (2011). \bibitem{correa1} L. A. Correa, J. P. Palao, G. Adesso, D. Alonso, Phys. Rev. E {\bf 87}, 042131 (2013). \bibitem{woods} M.P. Woods, N. Ng, S. Wehner, arXiv:1506.02322. \bibitem{brunner14} N. Brunner, M. Huber, N. Linden, S. Popescu, R. Silva, P. Skrzypczyk, Phys. Rev. E {\bf 89}, 032115 (2014). \bibitem{correa14} L. A. Correa, J. P. Palao, D. Alonso, and G. Adesso, Sci. Rep. {\bf 4}, 3949 (2014). \bibitem{uzdin15} R. Uzdin, A. Levy, R. Kosloff, Phys. Rev. X {\bf 5}, 031044 (2015). \bibitem{marcus} M.T. Mitchison, M.P. Woods, J. Prior, M. Huber, New J. Phys. {\bf 17}, 115013 (2015). \bibitem{brask15b} J.B. Brask and N. Brunner, Phys. Rev. E {\bf 92}, 062101 (2015). \bibitem{frenzel16} M. F. Frenzel, D. Jennings, T. Rudolph, New J. Phys. {\bf 18}, 023037 (2016). \bibitem{chen} Y.-X. Chen, S.-W. Li, EPL {\bf 97}, 40003 (2012). \bibitem{mari} A. Mari, J. Eisert, Phys. Rev. Lett. {\bf 108}, 120602 (2012). \bibitem{venturelli} D. Venturelli, R. Fazio, V. Giovannetti, Phys. Rev. Lett. {\bf 110}, 256801 (2013). \bibitem{brask15} J.B. Brask, G. Haack, N. Brunner, and M. Huber, New J. Phys. {\bf 17}, 113029 (2015). \bibitem{bellomo} B. Leggio, B. Bellomo, M. Antezza, Phys. Rev. A {\bf 91}, 012117 (2015). \bibitem{mitchison2} M.T. Mitchison, M. Huber, J. Prior, M.P. Woods, M.B. Plenio, arXiv:1603.02082. \bibitem{virtual} N. Brunner, N. Linden, S. Popescu, P. Skrzypczyk, Phys. Rev. E {\bf 85}, 051117 (2012). \bibitem{janzing} D. Janzing, P. Wocjan, R. Zeier, R. Geiss, and Th. Beth, Int. J. Theor. Phys. {\bf 39}, 2717 (2000). \bibitem{GevKos96} E. Geva and R. Kosloff, J. Chem. Phys. {\bf 104}, 7681 (1996). \bibitem{LevKos12} A. Levy, R. Kosloff, Phys. Rev. Lett. {\bf 108}, 070604 (2012). \bibitem{CorreaNv} L.A. Correa, Phys. Rev. E {\bf 89}, 042128 (2014). \bibitem{footnote1} More generally, one could also consider the case of virtual qubits with coherences. \bibitem{footnote3} If the cycle covers only a subspace of all the machine levels, then the populations are determined with respect to the total population of the subspace. \bibitem{breuer} H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, UK, 2002). \bibitem{Masanes} L. Masanes, J. Oppenheim, arxiv:1412.3828. \bibitem{amikam} A. Levy, R. Alicki, R. Kosloff, Phys.Rev. E. {\bf 85},061126 (2012). \end{thebibliography} \onecolumngrid \section*{APPENDICES} \appendix \twocolumngrid \section{The swap operation as the primitive for thermodynamic operations.}\label{AppSwap} This appendix elaborates on the swap as the primitive operation of quantum thermal machines. Consider a setup involving a real qubit of energy $E_{\rm v}$, the system, with bias $Z_{\rm s}$. In order to modify the bias (e.g. to cool the system), the system now interacts with a virtual qubit (i.e. a pair of levels $\{i,j\}$ within the machine) which has the same energy gap as the system, i.e. 
$E_{\rm v}=E_j - E_i$. The energy-conserving ``swap" interaction is described by a unitary \begin{align} U = \mathbb{I}_{\rm sv} &- \ket{1,i}_{\rm sv}\!\bra{1,i} - \ket{0,j}_{\rm sv}\!\bra{0,j} \nonumber\\ &+ \ket{0,j}_{\rm sv}\!\bra{1,i} + \ket{1,i}_{\rm sv}\!\bra{0,j}, \end{align} where $\ket{0}_{\rm s}$ and $\ket{1}_{\rm s}$ denote the ground and excited states of the system, and $i$ and $j$ the lower and upper levels of the virtual qubit; $U$ exchanges the two degenerate states $\ket{1,i}_{\rm sv}$ and $\ket{0,j}_{\rm sv}$ and acts as the identity elsewhere. The effect of the swap upon two real qubits would be to swap the states of the qubits for one another (assuming the initial state is diagonal and uncorrelated). However, this is not the case for one real and one virtual qubit, as we show presently. We assume that the real qubit begins in a diagonal state. If one labels the populations of the initial state in the ground and excited levels of the system as $p_0$ and $p_1$, then using the definition of the bias, $Z_{\rm s} = p_0 - p_1$, its initial state is \begin{equation} \rho_{\rm s} = \frac{1+Z_{\rm s}}{2} \ket{0}_{\rm s}\!\bra{0} + \frac{1-Z_{\rm s}}{2} \ket{1}_{\rm s}\!\bra{1}. \end{equation} For the virtual qubit, the sum of the populations is not $1$ in general, i.e. $N_{\rm v} = p_i + p_j < 1$. Assuming that the state is block diagonal (w.r.t. the virtual qubit), \begin{align} \rho_{\rm v} = N_{\rm v} \left( \frac{1 + Z_{\rm v}}{2} \ket{i}_{\rm v}\!\bra{i} + \frac{1-Z_{\rm v}}{2} \ket{j}_{\rm v}\!\bra{j} \right) \nonumber \\ + (1 - N_{\rm v}) \rho^\prime_{\rm v}, \end{align} where $\rho^\prime_{\rm v}$ is an arbitrary (normalized) state of the remaining levels in the machine. After applying $U$, the final state of the system and the machine containing the virtual qubit is \onecolumngrid \begin{align} U \rho_{\rm s} \otimes \rho_{\rm v} U^\dagger &= \left( \frac{1+Z_{\rm s}}{2} \right) N_{\rm v} \left( \frac{1+Z_{\rm v}}{2} \right) \ket{00}_{\rm sv}\!\!\bra{00} + \left( \frac{1-Z_{\rm s}}{2} \right) N_{\rm v} \left( \frac{1+Z_{\rm v}}{2} \right) \ket{01}_{\rm sv}\!\!\bra{01} \nonumber\\ &+ \left( \frac{1+Z_{\rm s}}{2} \right) N_{\rm v} \left( \frac{1-Z_{\rm v}}{2} \right) \ket{10}_{\rm sv}\!\!\bra{10} + \left( \frac{1-Z_{\rm s}}{2} \right) N_{\rm v} \left( \frac{1-Z_{\rm v}}{2} \right) \ket{11}_{\rm sv}\!\!\bra{11} + (1-N_{\rm v}) \rho_{\rm s} \otimes \rho^\prime_{\rm v}, \end{align} where the second label in the kets $\ket{\cdot\,\cdot}_{\rm sv}$ refers to the virtual qubit, with $0 \equiv i$ and $1 \equiv j$. The final reduced state of the system is therefore \begin{align} \rho_{\rm s}^f &= \left[ N_{\rm v} \left( \frac{1+Z_{\rm v}}{2} \right) + (1-N_{\rm v}) \left( \frac{1+Z_{\rm s}}{2} \right) \right] \ket{0}_{\rm s}\!\!\bra{0} + \left[ N_{\rm v} \left( \frac{1-Z_{\rm v}}{2} \right) + (1-N_{\rm v}) \left( \frac{1-Z_{\rm s}}{2} \right) \right] \ket{1}_{\rm s}\!\!\bra{1}. \end{align} At the end of the protocol, the bias of the real qubit has been modified to \begin{equation}\label{appbiaschange} Z_{\rm s}^\prime = N_{\rm v} Z_{\rm v} + (1-N_{\rm v}) Z_{\rm s} \;\;\;\;\;\Longrightarrow\;\;\;\;\; \Delta Z_{\rm s} = Z_{\rm s}^\prime - Z_{\rm s} = N_{\rm v} \left( Z_{\rm v} - Z_{\rm s} \right). \end{equation} \section{Optimal single cycle machines}\label{AppOptTech} \twocolumngrid We prove optimality of the single cycle machine discussed in Section \ref{sec:single-cycle} of the main text. While there are several ways in which performance could be discussed, we are mainly concerned here with optimality under the swap operation \eqref{appbiaschange}, that is, which machine achieves the largest change in the bias of the system acted upon. Consider a machine with $n$ levels and a single cycle (of length $n$).
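Before analysing the cycle, we pause to note that the bias-change relation \eqref{appbiaschange} derived in the previous appendix is easy to check numerically. The following minimal sketch (Python; the machine dimension, level indices and populations are arbitrary illustrative choices, not parameters of the model) builds the energy-conserving swap explicitly and compares the resulting change of the system bias with $N_{\rm v}(Z_{\rm v}-Z_{\rm s})$.
\begin{verbatim}
import numpy as np

# System qubit {|0>=ground,|1>=excited}; machine with d levels.
# The virtual qubit is the pair of machine levels (i_l, j_l), i_l the lower one.
i_l, j_l, d = 1, 3, 4

Zs = 0.2                                  # initial bias of the system
rho_s = np.diag([(1 + Zs) / 2, (1 - Zs) / 2])

p = np.array([0.10, 0.35, 0.30, 0.25])    # arbitrary machine populations (sum to 1)
rho_m = np.diag(p)

Nv = p[i_l] + p[j_l]                      # norm of the virtual qubit
Zv = (p[i_l] - p[j_l]) / Nv               # bias of the virtual qubit

# Energy-conserving swap: |1>_s|i_l>_m  <->  |0>_s|j_l>_m, identity elsewhere.
U = np.eye(2 * d)
a = 1 * d + i_l                           # index of |1>_s|i_l>_m (system-major ordering)
b = 0 * d + j_l                           # index of |0>_s|j_l>_m
U[a, a] = U[b, b] = 0.0
U[a, b] = U[b, a] = 1.0

rho_out = U @ np.kron(rho_s, rho_m) @ U.conj().T

# Reduced state of the system and its new bias
rho_s_out = np.trace(rho_out.reshape(2, d, 2, d), axis1=1, axis2=3)
Zs_new = np.real(rho_s_out[0, 0] - rho_s_out[1, 1])

print(Zs_new - Zs, Nv * (Zv - Zs))        # the two numbers coincide
\end{verbatim}
The agreement of the two printed numbers illustrates that only the fraction $N_{\rm v}$ of the machine population lying in the virtual qubit takes part in the swap.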
All transitions must be coupled to available inverse temperatures, namely \begin{equation}\label{constraintbetaapp} \beta_{\rm h} \leq \beta_{j,j+1} \leq \beta_{\rm c}. \end{equation} Note that intermediate inverse temperatures can be obtained by coupling a transition to both baths, at $\beta_{\rm c}$ and $\beta_{\rm h}$. Furthermore, the energy gaps of the transitions are bounded, \begin{equation}\label{constrainttransition} - E_{\rm max} \leq \Delta E_{j,j+1} \leq E_{\rm max}. \end{equation} Here the cycle approaches a diagonal steady state, as every level interacts with a thermal bath. The population ratio across every transition is fixed by the inverse temperature of the bath it is coupled to, \begin{equation} \frac{p_{j+1}}{p_{j}} = e^{-\beta_{j,j+1} \Delta E_{j,j+1}} \quad \text{for } 1 \leq j \leq n-1. \end{equation} Together with the normalization condition $\sum_j p_{j} = 1$, this completely determines the steady state. The virtual temperature $\beta_{\rm v}$ is given by \begin{align} e^{-\beta_{\rm v} E_{\rm v}} &= \frac{p_{n}}{p_{1}} = \frac{p_{n}}{p_{n-1}} \frac{p_{n-1}}{p_{n-2}}...\frac{p_{2}}{p_{1}}, \\ \therefore \beta_{\rm v} E_{\rm v} &= \sum_{j=1}^{n-1} \beta_{j,j+1} \Delta E_{j,j+1}. \label{cyclebetavapp} \end{align} Similarly, the norm $N_{\rm v}$ is found to be \begin{align} N_{\rm v} &= \frac{p_{1}+p_{n}}{p_{1} \left( 1 + \frac{p_{2}}{p_{1}} + \frac{p_{3}}{p_{1}} + ... \right)} \\ &=\left( \frac{1 + e^{-\beta_{\rm v} E_{\rm v}}}{1 + \sum_{j=1}^{n-1} \prod_{k=1}^{k=j} e^{-\beta_{k,k+1} \Delta E_{k,k+1}}} \right).\label{cycleNvapp} \end{align} We proceed to determine the unique $n$ level single cycle that \emph{minimizes the ratios of the population of every level ${j}$ in the cycle with respect to one of the levels of the virtual qubit.} This is then proven to be the optimal cycle. For clarity we detail the proof for the case of the fridge, i.e. we minimize the ratios w.r.t. the ground state of the virtual qubit. The proof for the heat engine is similar. Consider the population ratio \begin{align} \frac{p_{j}}{p_{1}} &= \prod_{k=1}^{j-1} e^{-\beta_{k,k+1} \Delta E_{k,k+1}} \\ &= \exp \left[ -\sum_{k=1}^{j-1} \beta_{k,k+1} \Delta E_{k,k+1} \right]. \end{align} To minimize this ratio, one should maximize the summation above. Regardless of the values of the energy gaps $\Delta E$, maximizing the sum requires picking the largest available inverse temperature, $\beta_{\rm c}$, whenever the energy gap is positive, and the smallest available inverse temperature, $\beta_{\rm h}$, whenever the energy gap is negative. Thus one can collect together the positive and negative energy gaps to simplify the expression. Labeling the sum of the positive energy gaps as $Q_+^j$ and the sum of the negative ones as $Q_-^j$, we obtain \begin{equation}\label{ratiogeneral} \frac{p_{j}}{p_{1}} = \exp \left[ - \left( \beta_{\rm c} Q_+^j + \beta_{\rm h} Q_-^j \right) \right]. \end{equation} In addition, we have the consistency relation \begin{equation}\label{Qconsistency} Q_+^j + Q_-^j = \Delta E_{1,j} = \sum_{k=1}^{j-1} \Delta E_{k,k+1}, \end{equation} which leads to \begin{equation}\label{twostep} \frac{p_{j}}{p_{1}} = \exp \left[ - \beta_{\rm h} \Delta E_{1,j} - \left( \beta_{\rm c} - \beta_{\rm h} \right) Q_+^j \right]. \end{equation} We proceed to minimize the ratio in two steps. First we find the optimum $Q_+^j$ for a fixed $\Delta E_{1,j}$, and then optimize over $\Delta E_{1,j}$. For a fixed energy gap $\Delta E_{1,j}$, the minimum Gibbs ratio is achieved when $Q_+^j$ is as large as possible (since $\beta_{\rm c}-\beta_{\rm h} >0$).
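As an aside, the steady-state relations \eqref{cyclebetavapp} and \eqref{cycleNvapp} above are also straightforward to verify numerically. A minimal sketch (Python; the inverse temperatures and energy gaps below are arbitrary illustrative values, not values used elsewhere in the paper):
\begin{verbatim}
import numpy as np

n = 5
rng = np.random.default_rng(0)
beta = rng.uniform(0.5, 2.0, n - 1)     # beta_{j,j+1}: arbitrary bath inverse temperatures
dE   = rng.uniform(-1.0, 1.0, n - 1)    # Delta E_{j,j+1}: arbitrary gaps in [-E_max, E_max]

# Steady state: p_{j+1}/p_j = exp(-beta_{j,j+1} Delta E_{j,j+1}), normalized
p = np.ones(n)
for j in range(n - 1):
    p[j + 1] = p[j] * np.exp(-beta[j] * dE[j])
p /= p.sum()

bv_Ev = np.sum(beta * dE)               # beta_v E_v, eq. (cyclebetavapp)
print(p[-1] / p[0], np.exp(-bv_Ev))     # agree

Nv = p[0] + p[-1]                       # norm of the virtual qubit (levels 1 and n)
ratios = np.cumprod(np.exp(-beta * dE)) # prod_{k<=j} exp(-beta dE) = p_{j+1}/p_1
print(Nv, (1 + np.exp(-bv_Ev)) / (1 + ratios.sum()))   # eq. (cycleNvapp), agree
\end{verbatim}
Returning to the optimization: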
Recall that $Q_+^j$ is the sum of positive transitions in the cycle from $1$ to $j$, each of which are bounded by $E_{\rm max}$. Also the number of transitions at $E_{\rm max}$ between $1$ to $j$ is limited by the consistency relation \eqref{Qconsistency}. Optimizing for $Q_+^j$ subject to these constraints results in values for the sizes and number of transition in the cycle from $1$ to $j$ as summarized in the Table \ref{transitionsizearbitrary}, for a fixed $\Delta E_{1,j} = m E_{\rm max} + \delta_j$ (where $m = \Delta E_{1,j} \mod E_{\rm max}$). In spite of the dependence on the optimum current $Q_+^j$ upon the relative parities of $j$ and $m$, it is straightforward to verify that the optimum $Q_+^j$ increases monotonically w.r.t. $\Delta E_{1,j}$. Thus to complete the minimization of \eqref{twostep}, one has to maximize $\Delta E_{1,j}$. This proceeds in an analogous manner to the optimization of $Q_+^j$, with the major difference being that $\Delta E_{1,j}$ must be chosen keeping in mind the consistency condition for the energy gap of the virtual qubit \eqref{virtualgap}. The result is summarized in Table \ref{transitionsizeintermediate}, for the $n$ level cycle. This completes the optimization of the ratio $p_{j}/p_{1}$. From Table \ref {transitionsizeintermediate} we see that there is a unique construction of the $n$ level cycle that simultaneously fulfils the optimization criteria for all $j$: for all $j\leq n/2$ fix all of the transitions to be $+E_{\rm max}$, next fix a transition to be $E_{\rm v}$ or $-(E_{\rm max} - E_{\rm v})$, depending on the parity of $n$, and continue with all the remaining transitions fixed to be $-E_{\rm max}$. \onecolumngrid \begin{center} \begin{table}[b] \renewcommand{2.0}{2.0} \begin{tabular}{| c | c | c | c | c || c | c |} \hline No. of transitions & $+E_{\rm max}$ & $+\delta_j$ & $-(E_{\rm max} - \delta_j)$ & $-E_{\rm max}$ & $Q_+$ & $Q_-$ \\ \hline if $j$ and $m$ are both even or odd & $\frac{j+m}{2} - 1$ & $1$ & $0$ & $\frac{j-m}{2} - 1$ & $\left( \frac{j+m}{2} - 1 \right) E_{\rm max} + \delta_j$ & $-\left( \frac{j-m}{2} - 1 \right) E_{\rm max}$ \\ \hline if $j$ and $m$ are of opposite parity & $\frac{j+m-1}{2}$ & $0$ & $1$ & $\frac{j-m-3}{2}$ & $\left( \frac{j+m-1}{2} \right) E_{\rm max}$ & $-\left( \frac{j-m-1}{2} \right) E_{\rm max} + \delta_j$ \\ \hline \end{tabular} \caption{Transition number and size, and heat currents, to maximize the heat current $Q_+^j$ associated to an arbitrary level $j$ w.r.t. the first energy level, within a thermal cycle.} \label{transitionsizearbitrary} \end{table} \end{center} \begin{center} \begin{table}[t] \renewcommand{2.0}{2.0} \begin{tabular}{| c | c | c | c | c || c | c || c |} \hline No. 
of transitions & $+E_{\rm max}$ & $+E_{\rm v}$ & $-(E_{\rm max} - E_{\rm v})$ & $-E_{\rm max}$ & $Q^j_+$ & $Q^j_-$ & $\Delta E_{1,j}$ \\ \hline if $j\leq \frac{n}{2}$ & $j-1$ & $0$ & $0$ & $0$ & $(j-1) E_{\rm max}$ & $0$ & $(j-1) E_{\rm max}$ \\ \hline $j>\frac{n}{2}$, $n$ even & $\frac{n}{2} - 1$ & $1$ & $0$ & $j - \frac{n}{2}$ & $\left( \frac{n}{2} -1 \right) E_{\rm max} + E_{\rm v}$ & $-\left( j- \frac{n}{2} \right) E_{\rm max}$ & $(n-j-1)E_{\rm max} + E_{\rm v}$ \\ \hline $j>\frac{n}{2}$, $n$ odd & $\frac{n-1}{2}$ & $0$ & $1$ & $j - \frac{n+1}{2}$ & $\left( \frac{n-1}{2} \right) E_{\rm max}$ & $-\left( j - \frac{n-1}{2} \right) E_{\rm max} + E_{\rm v}$ & $(n-j-1) E_{\rm max} + E_{\rm v}$ \\ \hline \end{tabular} \caption{Transition number and size, and heat currents, to minimize the Gibbs ratio of an arbitrary level $j$ w.r.t. the first energy level, within a thermal cycle.} \label{transitionsizeintermediate} \end{table} \end{center} \twocolumngrid Finally, connecting all $+ve$ transitions to $\beta_{\rm c}$ and $-ve$ transitions to $\beta_{\rm h}$, one arrives at the optimal $n$ level cycle fridge, schematically depicted in Fig \ref{optimalcycle}. If we instead minimize the ratios of populations to the excited state of the virtual qubit ($p_{j}/p_{n}$), we obtain the optimal $n$ level cycle engine, which has the same arrangement of energy levels as the fridge, with only the temperatures swapped, $\beta_{\rm c} \leftrightarrow \beta_{\rm h}$. For completeness, we present below the virtual temperatures $\beta_{\rm v}^{(n)}$ and norms $N_{\rm v}^{(n)}$ achieved by the optimal $n$ level cycle fridge and engine. \onecolumngrid \begin{center} \begin{table}[h] \renewcommand{2.0}{2.0} \begin{tabular}{| c | c | c |} \hline $\beta_{v}^{(n)} E_{\rm v}$ & $n$ even & $n$ odd \\ \hline Fridge & $\;\;\; \beta_{\rm c} E_{\rm v} + \left( \beta_{\rm c} - \beta_{\rm h} \right) \left( \frac{n}{2} -1 \right) E_{\rm max} \;\;\;$ & $\;\;\; \beta_{\rm c} E_{\rm v} + \left( \beta_{\rm c} - \beta_{\rm h} \right) \left[ \left( \frac{n}{2} - \frac{1}{2} \right) E_{\rm max} - E_{\rm v} \right] \;\;\;$ \\ \hline Engine & $\;\;\; \beta_{\rm h} E_{\rm v} - \left( \beta_{\rm c} - \beta_{\rm h} \right) \left( \frac{n}{2} -1 \right) E_{\rm max} \;\;\;$ & $\;\;\; \beta_{\rm h} E_{\rm v} - \left( \beta_{\rm c} - \beta_{\rm h} \right) \left[ \left( \frac{n}{2} - \frac{1}{2} \right) E_{\rm max} - E_{\rm v} \right] \;\;\;$ \\ \hline \end{tabular} \caption{Optimal virtual temperatures of a thermal cycle of length $n$.} \label{optimumbetav} \end{table} \renewcommand{2.0}{2.0} \begin{table}[h] \begin{tabular}{| c | c |} \hline $N_{\rm v}^{(n)}$ & $n$-level optimal fridge cycle \\ \hline $n_{even}$ & $\;\;\; \left( 1 + e^{-\beta_{\rm v}^{(n)} E_{\rm v}} \right) \left[ \left( 1 - e^{-\beta_{\rm c} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\frac{n}{2} \beta_{\rm c} E_{\rm max}} \right) + e^{-\beta_{\rm v}^{(n)} E_{\rm v}} \left( 1 - e^{-\beta_{\rm h} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\frac{n}{2} \beta_{\rm h} E_{\rm max}} \right) \right]^{-1} \;\;\;$ \\ \hline $n_{odd}$ & $\;\;\; \left( 1 + e^{-\beta_{\rm v}^{(n)} E_{\rm v}} \right) \left[ \left( 1 - e^{-\beta_{\rm c} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\left( \frac{n+1}{2} \right) \beta_{\rm c} E_{\rm max}} \right) + e^{-\beta_{\rm v}^{(n)} E_{\rm v}} \left( 1 - e^{-\beta_{\rm h} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\left( \frac{n-1}{2} \right) \beta_{\rm h} E_{\rm max}} \right) \right]^{-1} \;\;\;$ \\ \hline & $n$-level optimal engine cycle \\ \hline $n_{even}$ & 
$\;\;\; \left( 1 + e^{+\beta_{\rm v}^{(n)} E_{\rm v}} \right) \left[ \left( 1 - e^{-\beta_{\rm c} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\frac{n}{2} \beta_{\rm c} E_{\rm max}} \right) + e^{+\beta_{\rm v}^{(n)} E_{\rm v}} \left( 1 - e^{-\beta_{\rm h} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\frac{n}{2} \beta_{\rm h} E_{\rm max}} \right) \right]^{-1} \;\;\;$ \\ \hline $n_{odd}$ & $\;\;\; \left( 1 + e^{+\beta_{\rm v}^{(n)} E_{\rm v}} \right) \left[ \left( 1 - e^{-\beta_{\rm c} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\left( \frac{n-1}{2} \right) \beta_{\rm c} E_{\rm max}} \right) + e^{+\beta_{\rm v}^{(n)} E_{\rm v}} \left( 1 - e^{-\beta_{\rm h} E_{\rm max}} \right)^{-1} \left( 1 - e^{-\left( \frac{n+1}{2} \right) \beta_{\rm h} E_{\rm max}} \right) \right]^{-1} \;\;\;$ \\ \hline \end{tabular} \caption{Norm $N_{\rm v}$ of the optimal $n-$level thermal cycle, in terms of the virtual temperature $\beta_{\rm v}^{(n)}$.} \label{optimalnorms} \end{table} \end{center} \twocolumngrid \subsection{Characterizations of optimality for single-cycle machines.} Here we demonstrate useful properties of the optimal $n$ level cycle, in particular that it achieves the largest change in the bias of an external qubit under the swap operation. Recall the technical definition in Appendix \ref{AppOptTech}, that the optimal cycle is the unique cycle (fridge) that minimizes the ratios of every single population to the ground state of the virtual qubit $p_{1}$. In particular, this includes the Gibbs ratio of the virtual qubit itself, $p_{n}/p_{1}$, and thus the optimal cycle \emph{maximizes the bias $Z_{\rm v}$}. In addition, using the normalization of the cycle $\sum_j p_{j} = 1$, one can express the norm of the virtual qubit in the useful form \begin{align}\label{appcycleNv} N_{\rm v}&= \left( \frac{1 + e^{-\beta_{\rm v} E_{\rm v}}}{1 + \sum_{j=2}^{n} p_{j}/p_{n}} \right). \end{align} Since the optimal cycle is the unique cycle that minimizes the denominator above, in particular it does so for the case that $\beta_{\rm v}$ is the optimal temperature (corresponding to the optimum bias $Z_{\rm v}$), thus the optimal cycle \emph{achieves the highest norm $N_{\rm v}$ given the maximum bias $Z_{\rm v}$}. Expressing the population of the ground state of the virtual qubit as \begin{equation}\label{optp1} p_{1} = \frac{1}{1 + \sum_{j=2}^{n} p_{j}/p_{n}}, \end{equation} it is clear that the optimal cycle also \emph{maximizes the population $p_{1}$}, which is equivalently the \emph{maximal value of $N_{\rm v} (1+Z_{\rm v})$}. Since the optimal cycle both maximizes $p_{1}$ and minimizes $p_{n}/p_{1}$, we may conclude that it \emph{maximizes the difference between the populations} \begin{equation}\label{optNvZv} p_{1} - p_{n} = N_{\rm v} Z_{\rm v} = p_{1} \left( 1 - \frac{p_{n}}{p_{1}} \right). \end{equation} Equivalently, in the case of the engine, the optimal $n$ level cycle: \begin{itemize} \item minimizes $Z_{\rm v}$, \item maximizes $N_{\rm v}$ given the minimum $Z_{\rm v}$, \item maximizes $p_{n} = N_{\rm v} (1-Z_{\rm v})/2$, and \item maximizes $p_{n} - p_{1} = -N_{\rm v} Z_{\rm v}$. \end{itemize} We may now prove that the optimal cycle achieves the largest change in the bias of an external qubit via the swap operation. Via \eqref{appbiaschange}, the difference in bias at the end of the swap is \begin{equation}\label{swapbiaschange} Z_{\rm s}^\prime - Z_{\rm s} = N_{\rm v} (Z_{\rm v} - Z_{\rm s}). 
\end{equation} Labelling the norm and bias of the optimal $n$ level fridge as $\{N_{\rm v}^+,Z_{\rm v}^+\}$, and that of an arbitrary $n$ level cycle as $\{N_{\rm v}, Z_{\rm v}\}$, \begin{align} Z_{\rm v} &\leq Z_{\rm v}^+ & N_{\rm v} Z_{\rm v} &\leq N_{\rm v}^+ Z_{\rm v}^+. \end{align} Thus for the swap using an arbitrary cycle, \begin{align} Z_{\rm s}^\prime - Z_{\rm s} &< \frac{N_{\rm v}^+ Z_{\rm v}^+}{Z_{\rm v}} \left( Z_{\rm v} - Z_{\rm s} \right) = N_{\rm v}^+ Z_{\rm v}^+ \left( 1 - \frac{Z_{\rm s}}{Z_{\rm v}} \right), \nonumber\\ &< N_{\rm v}^+ Z_{\rm v}^+ \left( 1 - \frac{Z_{\rm s}}{Z_{\rm v}^+} \right) = N_{\rm v}^+ \left( Z_{\rm v}^+ - Z_{\rm s} \right). \end{align} Thus the change in the bias is upper bounded by that achieved by the optimal fridge cycle. One may also prove the analogous result involving the optimal engine cycle, \begin{equation} Z_{\rm s} - Z_{\rm s}^\prime = N_{\rm v} (Z_{\rm s} - Z_{\rm v}) < N_{\rm v}^- \left( Z_{\rm s} - Z_{\rm v}^- \right), \end{equation} where $\{N_{\rm v}^-,Z_{\rm v}^-\}$ are the norm and bias of the optimal engine cycle. \subsection{Efficiency of single cycle machines} Recall the normal definitions of efficiency for absorption thermal machines. For a fridge, this is defined as the ratio between the heat drawn from the object to be cooled to the heat drawn from the hot bath. For an engine, it is the ratio between the work done to the heat drawn from the hot bath. In the case of the thermal cycle, the energy gap of the virtual qubit $E_{\rm v}$ represents both the heat drawn in the case of the fridge, and the work done in the case of the engine. Every time the virtual qubit exchanges $E_{\rm v}$ with an external system, it has to be reset by moving through the entire cycle. From Table \ref{transitionsizeintermediate}, we can calculate the heat currents to/from each bath, identifying $Q_+$ and $Q_-$ from the table with $Q_{\rm c}$ and $Q_{\rm h}$ respectively, in the case of the fridge, and the opposite for the engine. One can thus re-express the virtual temperature of the thermal cycle (Table \ref{optimumbetav}) in terms of the heat currents, \begin{align} \text{(fridge) } \quad \beta_{v}^{(n)} E_{\rm v} &= \beta_{\rm c} \left( Q_{\rm h} + E_{\rm v} \right) - \beta_{\rm h} Q_{\rm h}, \\ \text{(engine) } \quad \beta_{v}^{(n)} E_{\rm v} &= \beta_{\rm h} Q_{\rm h}- \beta_{\rm c} \left( Q_{\rm h} - E_{\rm v} \right). \end{align} Solving for the efficiency $\eta = E_{\rm v}/Q_{\rm h}$, one recovers the efficiencies of the thermal cycle, \begin{align} \eta_{fridge}^{(n)} &= \frac{\beta_{\rm c} - \beta_{\rm h}}{\beta_{\rm v}^{(n)} - \beta_{\rm c}}, & \eta_{engine}^{(n)} &= \frac{\beta_{\rm c} - \beta_{\rm h}}{\beta_{\rm c} - \beta_{\rm v}^{(n)}}. \end{align} In both cases the efficiency falls off with increasing $\beta_{\rm v}$, and thus in the case of the optimal $n$ level cycle, one finds that with an increasing number of levels, as the magnitude of $\beta_{\rm v}^{(n)}$ increases linearly with $n$, so the efficiency $\eta$ falls off inversely with $n$. \onecolumngrid \section{Switching between fridges and engines}\label{switching} \twocolumngrid When viewed in reverse, the amplification of the norm of a virtual qubit (Section \ref{sec:multi-cycle}) presents itself as a novel method to amplify the norm of any thermal cycle to one; simply connect its virtual qubit to a real qubit via a suitable interaction Hamiltonian, and use the real qubit instead to interact with the external system. The real qubit is now our ``virtual qubit". 
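Before making this coupling construction precise, we note in passing that the $1/n$ fall-off of the efficiency derived above is easy to make explicit. A minimal numerical sketch (Python; the inverse temperatures and energies are arbitrary illustrative values, and the even-$n$ fridge entry of Table \ref{optimumbetav} is assumed):
\begin{verbatim}
beta_c, beta_h, E_v, E_max = 2.0, 1.0, 0.3, 1.0      # arbitrary illustrative values

for n in [4, 6, 10, 20, 40]:                         # even-n optimal fridge cycles
    beta_v = beta_c + (beta_c - beta_h) * (n / 2 - 1) * E_max / E_v   # Table (optimumbetav)
    eta = (beta_c - beta_h) / (beta_v - beta_c)      # fridge efficiency
    print(n, eta, E_v / ((n / 2 - 1) * E_max))       # the two columns coincide
\end{verbatim}
The two printed columns coincide, showing explicitly that the even-$n$ optimal fridge cycle has $\eta^{(n)} = E_{\rm v}/[(n/2-1)E_{\rm max}]$, i.e. a $1/n$ decay of the efficiency.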
To be more precise, consider that one has a single $n-$level cycle, whose virtual qubit, labelled by the states $\ket{1}_{cycle}$ and $\ket{n}_{cycle}$, has an energy gap of $E_{\rm v}$ and a virtual temperature of $\beta_{\rm v}$. Couple this transition to a real qubit (labelled by $\ket{g}_{\rm v}$ and $\ket{e_{\rm v}}$) with the same energy gap $E_{\rm v}$ via a swap-like Hamiltonian, such as \begin{equation} H_{int} = g \left( \ket{1}_{cycle}\!\bra{n} \otimes \ket{e}_{\rm v}\!\bra{g } + c.c. \right). \end{equation} This arrangement is depicted in Fig. \ref{realqubitcouplings}(a). In the steady state, the populations of the levels must satisfy \begin{equation} p(\ket{1}_{cycle} \otimes \ket{e}_{\rm v}) = p(\ket{n}_{cycle} \otimes \ket{g}_{\rm v}). \end{equation} But since $p_{n}/p_{1} = e^{-\beta_{\rm v} E_{\rm v}}$ via the thermal cycle, it follows that the real qubit levels exhibit the same population ratio, i.e. \begin{equation} \frac{p_{e_{\rm v}}}{p_{g_{\rm v}}} = e^{-\beta_{\rm v} E_{\rm v}}. \end{equation} This completes the virtual qubit amplification procedure, since $N_{\rm v}=1$ for the real qubit. In fact one can do even more, if the states $\ket{1}_{cycle} \otimes \ket{e}_{\rm v}$ and $\ket{n}_{cycle} \otimes \ket{g}_{\rm v}$ are coupled via a thermal bath rather than an energy conserving interaction. In this case the two states need not be degenerate. If the energy gap of the real qubit is labelled as $E_{\rm v}^\prime$, and the two states above are coupled to $\beta_{bath}$, as in Fig. \ref{realqubitcouplings}(b), then in the steady state, the populations satisfy \begin{align} \frac{p_{1} p_{e_{\rm v}}}{p_{n} p_{g_{\rm v}}} &= e^{-\beta_{bath} (E_{\rm v}^\prime - E_{\rm v})}. \end{align} Once again the virtual temperature of the virtual qubit of the cycle, $p_{n}/p_{1} = e^{-\beta_{\rm v} E_{\rm v}}$, the virtual temperature $\beta_{\rm v}^\prime$ of the real qubit may be determined, \begin{equation} \beta_{\rm v}^\prime E_{\rm v}^\prime = \beta_{\rm v} E_{\rm v} + \beta_{bath} (E_{\rm v}^\prime - E_{\rm v}). \end{equation} Finally, consider that rather than couple the states $\ket{1}_{cycle} \otimes \ket{e}_{\rm v}$ and $\ket{n}_{cycle} \otimes \ket{g}_{\rm v}$, one couples instead $\ket{1}_{cycle} \otimes \ket{g}_{\rm v}$ and $\ket{n}_{cycle} \otimes \ket{e}_{\rm v}$ to a thermal bath, see Fig. \ref{realqubitcouplings}(c). Similarly to the above, one may determine that the real qubit has the virtual temperature \begin{equation} \beta_{\rm v}^\prime E_{\rm v}^\prime = -\beta_{\rm v} E_{\rm v} + \beta_{bath} (E_{\rm v} + E_{\rm v}^\prime). \end{equation} However, in this case the contribution of the original virtual temperature is multiplied by $-1$, effectively switching the machine from a fridge to an engine or vice versa! Thus given a $n-$level fridge cycle, one may switch to an engine and vice-versa, by using the appropriate thermal coupling between the cycle and the real qubit. \onecolumngrid \begin{figure} \caption{Different methods of amplifying the virtual qubit of an arbitrary cycle. (a) Amplification that maintains the energy and bias of the virtual qubit. (b) Amplification that modifies (possibly amplifies) the bias of the virtual qubit. 
(c) Amplification that flips the bias of the virtual qubit.} \label{realqubitcouplings} \end{figure} \twocolumngrid \onecolumngrid \section{Concatenated qutrit machines}\label{qutrits} \begin{figure} \caption{Engine formed out of the concatenation of many qutrit machines.} \label{nqutritappendix} \end{figure} \twocolumngrid In this section we consider the concatenation of qutrit machines, and determine the bias and norm of the virtual qubit in its steady state of operation. To arrive at the steady state, it is simpler to begin from the end of the concatenation, and derive the state inductively. To begin with, consider the final (rightmost) qutrit in Fig. \ref{nqutritappendix}, ignoring its interaction with the penultimate qutrit. It is equivalent to a single qutrit fridge, and its populations are completely determined by the two thermal couplings. One now introduces a swap-like interaction between the uncoupled transition of the final qutrit and the corresponding transition of the penultimate qutrit, \begin{equation} H_{int} = g ( \ket{12}_{n(n-1)}\bra{21} + c.c. ). \end{equation} This interaction induces the transition of the penultimate qutrit $\Gamma_{12}^{(n-1)}$ to have the same Gibbs ratio as that of $\Gamma_{12}^{(n)}$. If one also couples $\Gamma_{13}^{(n-1)}$ to $\beta_{\rm h}$, that fixes a second Gibbs ratio on the penultimate qutrit, so that the populations of the penultimate qutrit are completely determined. The state is still diagonal, and a product state, as the thermal couplings only fix the Gibbs ratios on single qutrits, while the interaction copies the Gibbs ratio of a transition whose ratio is already fixed onto one that is not yet determined. Note that the same state of the penultimate qutrit would have been found if one had simply assumed that, in place of the final qutrit, there was instead a thermal bath at the virtual temperature of $\Gamma_{12}^{(n)}$. One may repeat this process inductively to determine the state of the first qutrit in the sequence, and in turn the virtual temperature of the transition $\Gamma_{01}^{(1)}$, finding, as in the main text \eqref{nqutrit}, \begin{equation}\label{nqutritapp} \beta_{\rm v}^{(k)} = \begin{cases} \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h})\frac{k}{2}\frac{E_{\rm max}}{E_{\rm v}} & \text{ if $k$ is even,} \\ \beta_{\rm c} + (\beta_{\rm c} - \beta_{\rm h})\left( \frac{k+1}{2}\frac{E_{\rm max}}{E_{\rm v}} - 1 \right) & \text{ if $k$ is odd.} \end{cases} \end{equation} The virtual temperatures for the engine are the same as above with $\beta_{\rm c}$ and $\beta_{\rm h}$ switched. Note that the virtual temperature of $k$ concatenated qutrits is identical to that of the optimal $(k+2)$-level thermal cycle, Table \ref{optimumbetav}. We are also interested in calculating the norm $N_{\rm v}$ of the virtual qubit. An interesting freedom in the case of the qutrit machine is the choice of whether to take the virtual qubit as the transition between the lower two levels ($\Gamma_{12}$) or the upper two levels ($\Gamma_{23}$) of the first qutrit (modifying the energies accordingly so that the energy gap is always $E_{\rm v}$). We are especially interested in the behaviour of the norm as the number of concatenated qutrits becomes large (and $\beta_{\rm v}$ approaches $\pm \infty$). While this choice has no bearing on the bias of the virtual qubit, it does affect its norm.
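As an aside, the agreement between \eqref{nqutritapp} and the optimal-cycle virtual temperatures of Table \ref{optimumbetav}, noted above, can be checked in a few lines (Python; the parameters are arbitrary illustrative values):
\begin{verbatim}
beta_c, beta_h, E_v, E_max = 2.0, 1.0, 0.3, 1.0       # arbitrary illustrative values

def beta_v_qutrits(k):       # eq. (nqutritapp): k concatenated qutrits (fridge)
    if k % 2 == 0:
        return beta_c + (beta_c - beta_h) * (k / 2) * E_max / E_v
    return beta_c + (beta_c - beta_h) * ((k + 1) / 2 * E_max / E_v - 1)

def beta_v_cycle(n):         # optimal n-level fridge cycle, Table (optimumbetav)
    if n % 2 == 0:
        return beta_c + (beta_c - beta_h) * (n / 2 - 1) * E_max / E_v
    return beta_c + (beta_c - beta_h) * ((n / 2 - 0.5) * E_max - E_v) / E_v

for k in range(1, 7):
    print(k, beta_v_qutrits(k), beta_v_cycle(k + 2))  # the two columns coincide
\end{verbatim}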
One may calculate that, for the case of the fridge, the norm of the virtual qubit is \begin{align} N^{(23)}_{\rm v} &= \frac{1 + e^{-\beta_{\rm v} W}}{1 + e^{-\beta_{\rm v} W} + e^{-\beta_{\rm v} W} e^{+\beta_{\rm c} E_{\rm max}}} \\ \lim_{\beta_{\rm v} \rightarrow +\infty} N^{(23)}_{\rm v} &= 1, \end{align} in the case that the virtual qubit is $\Gamma_{23}$, and \begin{align} N^{(12)}_{\rm v} &= \frac{1 + e^{-\beta_{\rm v} W}}{1 + e^{-\beta_{\rm v} W} + e^{-\beta_{\rm c} E_{\rm max}}}, \\ \lim_{\beta_{\rm v} \rightarrow +\infty} N^{(12)}_{\rm v} &= \frac{1}{1 + e^{-\beta_{\rm c} E_{\rm max}}} \end{align} in the case that the virtual qubit is $\Gamma_{12}$. Clearly it is advantageous to place the virtual qubit on the upper two levels. The situation is reversed in the case of the engine. We find that the corresponding norms, for the cases where the virtual qubit is $\Gamma_{23}$ and $\Gamma_{12}$, are respectively \begin{align} N^{(23)}_{\rm v} &= \frac{1 + e^{+\beta_{\rm v} W}}{1 + e^{+\beta_{\rm v} W} + e^{+\beta_{\rm h} E_{\rm max}}}, \\ \lim_{\beta_{\rm v} \rightarrow -\infty} N^{(23)}_{\rm v} &= \frac{1}{1 + e^{-\beta_{\rm h} E_{\rm max}}}. \\ N^{(12)}_{\rm v} &= \frac{1 + e^{+\beta_{\rm v} W}}{1 + e^{+\beta_{\rm v} W} + e^{+\beta_{\rm v} W} e^{\beta_{\rm h} E_{\rm max}}}, \\ \lim_{\beta_{\rm v} \rightarrow -\infty} N^{(12)}_{\rm v} &= 1. \end{align} This motivates the choice of $\Gamma_{23}$ as the virtual qubit for the fridge, and $\Gamma_{12}$ as the virtual qubit for the engine. Also note that, via this choice, in the limit $n\rightarrow \infty$ both the fridge and the engine qutrits approach the same state, i.e. a qutrit with all of its population in the middle energy level. \end{document}
\begin{document} \title{The doublet of Dirac fermions in the field of the non-Abelian monopole, isotopic chiral symmetry, and parity selection rules} \date {} \author {V.M.Red'kov \\ Institute of Physics, Belarus Academy of Sciences\\ F Skoryna Avenue 68, Minsk 72, Republic of Belarus\\ e-mail: [email protected]} \maketitle \begin{abstract} The paper concerns the problem of a Dirac fermion doublet in the external monopole potential obtained by embedding the Abelian monopole solution in the non-Abelian $SU(2)$ scheme. In this particular case, the doublet-monopole Hamiltonian is invariant under a set of symmetry operations forming a (complex, one-parameter) Abelian subgroup of the complex rotation group $SO(3.C) : [\hat{H} , \hat{F}(A)]_{-} = 0 , \hat{F}(A) \in SO(3.C)$. This symmetry results in a certain freedom in choosing a discrete operator $\hat{N}_{A}$ ($A$ is a complex number) entering the complete set of quantum variables. The same complex number $A$ appears as an additional parameter in the basis wave functions $\Psi ^{A}_{\epsilon jm\delta \mu }(t,r,\theta ,\phi )$. The {\em generalized} inversion-like operator $\hat{N}_{A}$ implies its own ($A$-dependent) definition of scalar and pseudoscalar quantities, and further affords certain generalized $N_{A}$-parity selection rules. All the different sets of basis functions $\Psi ^{A}_{\epsilon jm\delta \mu }(x)$ determine globally the same Hilbert space. The functions $\Psi ^{A}_{\epsilon jm\delta \mu }(x)$ decompose into linear combinations of $\Psi ^{A=0}_{\epsilon jm\delta \mu }(x)$. However, the bases considered turn out to be nonorthogonal when $A^{*}\neq A$; this correlates with the fact that the operator $\hat{N}_{A}$ is not self-conjugate at $A^{*}\neq A$. The meaning of the possibility of violating the familiar quantum-mechanical requirement of self-conjugacy for the inversion-like operator $\hat{N}_{A}$ is discussed. The question of a possible physical understanding of the complex expectation values of $\hat{N}_{A}$ (at $A^{*}\neq A$) is examined. Also, the problem of the possible physical status of the matrix $\hat{F}(A)$ at $A^{*} = A$ is considered in full detail: the matrix belongs formally to the gauge group $SU(2)^{gauge.}_{loc.}$, but at the same time, being a symmetry operation for the Hamiltonian under consideration, this operator generates linear transformations on the basis wave functions. It is emphasized that an interpretation of the $A$-freedom as exclusively a gauge one is not justified, since this would lead to a logical collision with the quantum superposition principle and, besides, to the conclusion that the two sorts of basis states $\Psi ^{A=0}_{\epsilon jm,+1,\mu}(x)$ and $\Psi^{A=0}_{\epsilon jm,-1,\mu}(x)$ are physically identical. The latter could be interpreted only as a return to the Abelian scheme. PACS number: 0365, 1130, 2110H, 0230 \end{abstract} \subsection*{1. Introduction\footnote{A shorter version of the paper may be found in [101].}} While no definitive successful experiments concerning monopoles exist at present, there is nevertheless a veritable jungle of literature on monopole theories. Moreover, the properties of more general monopoles, associated with larger gauge groups, are now thought to be relevant in physics. As evidenced even by a cursory examination of some popular surveys (see, for example, [1,2]), the whole monopole area covers and touches on quite a variety of fundamental problems.
The~most outstanding of them are: the electric charge quantization [3-10], $P$-violation in purely electromagnetic processes [11-16], scattering on the Dirac string [17-19], spin from monopole and spin from isospin [20-23], bound states in the fermion-monopole system and violation of the Hermiticity property [24-38], fermion-number breaking in the presence of a magnetic monopole and monopole catalysis of baryon decay [39-41]. The tremendous volume of publications on monopole topics (and there is no hint that its growth will stop) attests to the interest which they enjoy among theoretical physicists but, by the same token, clearly indicates the unsettled and problematical nature of those objects: the puzzle of the monopole seems to be one of the still unsolved problems of particle physics\footnote{Very many physicists have contributed to the investigation of monopole-based theories. The wide scope of the field and the prodigious number of investigators associated with its various developments make it all but hopeless to list even the principal contributors. The present study does not pretend to be a survey of this matter, so I give but a few of the most important references, which may be useful to readers who wish for some supplementary material or are interested in more technical developments beyond the scope of the present treatment.}. At the same time, the study of monopoles has now reached a point where further progress depends on a clearer understanding of this object than has been available so far. Apparently, what is needed is neither the search for decisive experiments, which are unlikely to be successful, nor a new solution of some nonlinear system of equations, but rather the analysis and careful criticism of results already considered. With reference to this, leaving aside a major part of the various monopole problems, much more comprehensive in themselves, just some aspects of the $SU(2)$ model will be the subject of the present work. That, of course, seriously restricts the generality of the consideration, but it should be emphasized at once that, though much more involved monopole-like configurations are constantly (and somewhat routinely) invented and reported in the literature, at the same time we should recognize that certain purely Abelian or, most contiguous with them, $SU(2)$-model aspects come to light when considering those generalized systems. In view of that, the particular $SU(2)$-model features considered here might be of reasonable interest for a larger class of non-Abelian models\footnote{Some more discussion of possible extensions to different gauge theories is given in the concluding Sec.~11; that possibility of generalization lends additional interest to the present study.}. Once the non-Abelian monopole had been brought by 't~Hooft and Polyakov [42-44] into scientific usage, its main properties were noted and examined. The background of thinking about the whole (non-Abelian) monopole problem at that time can easily be traced: it was obviously tied up with the most outstanding points of its Abelian counterpart, the Dirac monopole; namely, singularity properties, quantization conditions, and other issues contiguous to them. This reflected the imprint of the old attitude towards the monopole, which had been imbibed by physicists from Dirac's early investigation of this matter and consists in drawing special attention to the known singularity aspects.
Evidently, the~most significant and noticeable achievement of that new theory was the~elimination of the~singularity aspect from the~theory. It should be noted that, from the~very beginning, the~main emphasis had been drawn to just a~spatially radial non-singular structure of that new monopole-like system. Thus, the~required absence of singularity had been achieved and therefore the~clouds over this part of the~subject had been dispersed. Much less attention has been given to a~number of other sides of that non-Abelian construct. For instance: To the~'t Hooft-Polyakov construction has been assigned a~monopole-like status; what is a~distinctive physical feature of it that provides the~principal grounds for such an~assignment? Or, which part of this system represents {\it a~trace} of Abelian monopole and therefore bears this same old quality, and which one is referred to its own and purely non-Abelian nature? So, the~question in issue is the role and status of the~Abelian monopole in the non-Abelian theory. Although a~number of general and rigorous relationships connecting these aspects of these two models have been established to date \footnote{Sometimes, it is considered to be solely a~subsidiary construct, being appropriate to mimic the 't~Hooft-Polyakov potential: at least, it can exactly simulate the latter far away from the region $r=0$ (at spatial infinity).} little work has been done so far in linking them at the~level of a~specifically contrasting examination where all detailed calculations have been carried out. This article will endeavor to supply this work\footnote{Though evidently,ultimate answers have not been found by this work as well, it might hoped that a~certain exploration into and clearing up this matter have been achieved.}. In general, there are several ways of approaching the monopole problems. As known, together with geometrically topological way of exploration into them, another approach to studying such configurations is possible, namely, that concerns any physical manifestations of monopoles when they are considered as external potentials. Moreover, from the~physical standpoint, this latter method can thought of as a~more visualizable one in comparison with less obvious and more direct topological language. So, the~basic frame of our further investigation is the~study of a~particle multiplet in the~external monopole potentials. Much work in studying quantum mechanical particles in the~monopole potentials, in both the~Abelian and non-Abelian cases, has been done in the~literature; see, respectively, in [45-48] and [49-54]. For definiteness, we restrict ourselves to the~simplest doublet case; taking special attention to any manifestations of just the~Abelian monopole on the non-Abelian background. Many properties of this system turn out to be of interest in themselves, producing in their totality an example of the~theory of what may be called `Abelian monopole in non-Abelian embodiment'. Generally speaking, this theory is the~most important aspect of the~present study. Now, for convenience of the readers, some remarks about the~approach and technique used in the~work are to be given. The~primary technical `novelty' is that, in the paper, the~tetrad (generally relativistic) method [55-63] of Tetrode-Weyl-Fock-Ivanenko (TWFI) for describing a~spinor particle will be exploited. 
So, the~matter equation for an~isotopic doublet of spinor particles in the field of the~non-Abelian monopole is taken in the~form $$ [i\; \gamma ^{\alpha }(x) \; ( \partial _{\alpha } \; + \; \Gamma _{\alpha }(x) \; - \; i\; e\; t^{a} \; W^{(a)}_{\alpha } \; ) \; - \; (m \; + \; \kappa \; \Phi ^{(a)} t^{a}) \; ]\; \Psi (x) = 0 \; . \eqno(1.1) $$ \noindent where $\gamma^{\alpha}(x)$ are the~generalized Dirac matrices, $\Gamma_{\alpha}(x)$ stands for the~bispinor connection; $e$ and $\kappa$ are certain constants. The choice of the~formalism to deal with the~monopole-doublet problem has turned out to be of great fruitfulness for examining this system. Taking of just this method is not an~accidental step. It is matter that, as known (but seemingly not very vastly), the~use of a~special spherical tetrad in the~theory of a~spin $1/2$ particle had led Schr\"odinger and Pauli [64, 65] to a~basis of remarkable features. In particular, the~following explicit expression for (spin $1/2$ particle's) momentum operator components had been calculated $$ J_{1}= l_{1} + {{i \sigma ^{12} \cos \phi }\over{ \sin\theta}} ,\qquad J_{2}= l_{2} + {{i \sigma ^{12} \sin \phi }\over{ \sin\theta}} ,\qquad J_{3} = l_{3} \eqno(1.2) $$ \noindent just that kind of structure for $J_{i}$ typifies this frame in bispinor space. This Schr\"{o}dinger's basis had been used with great efficiency by Pauli in his investigation [65] on the~problem of allowed spherically symmetric wave functions in quantum mechanics. For our purposes, just several simple rules extracted from the~much more comprehensive Pauli's analysis will be quite sufficient (those are almost mnemonic working regulations). They can be explained on the~base of $S=1/2$ particle case. To this end, using any representation of $\gamma$ matrices where $\sigma^{12} = {1 \over 2}\;(\sigma_{3} \oplus \sigma_{3})$ (throughout the work, the Weyl's spinor frame is used) and taking into account the~explicit form for $\vec{J}^{2}, J_{3}$ according to (1.2), it is readily verified that the~most general bispinor functions with fixed quantum numbers $j,m$ are to be $$ \Phi _{jm}(t,r,\theta ,\phi ) = \left ( \begin{array}{l} f_{1}(t,r) \; D^{j}_{-m,-1/2}(\phi ,\theta ,0) \\ f_{2}(t,r) \; D^{j}_{-m,+1/2}(\phi ,\theta ,0) \\ f_{3}(t,r) \; D^{j}_{-m,-1/2}(\phi ,\theta ,0) \\ f_{4}(t,r) \; D^{j}_{-m,+1/2}(\phi ,\theta ,0) \end{array} \right ) \eqno(1.3) $$ \noindent where $D^{j}_{mm'}$ designates the Wigner's $D$-functions (the~notation and subsequently required formulas, according to [66], are adopted). One should take notice of the low right indices $-1/2$ and $+1/2$ of $D$-functions in (1.3), which correlate with the~explicit diagonal structure of the~matrix $\sigma^{12} = {1 \over 2}\; ( \sigma_{3} \oplus \sigma_{3})$. The~Pauli criterion allows only half integer values for $j$. So, one may remember some very primary facts of $D$-functions theory and then produce, almost automatically, proper wave functions. It seems rather likely, that there may exist a~generalized analog of such a~representation for $J_{i}$-operators, that might be successfully used whenever in a~linear problem there exists a~spherical symmetry, irrespective of the~concrete embodiment of such a~symmetry. In particular, the~case of electron in the~external Abelian monopole field, together with the~ problem of selecting the~allowed wave functions as well as the~Dirac charge quantization condition, completely come under that Shr\"{o}dinger-Pauli method. 
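(A small orientation check, added here and assuming the $D$-function convention of [66], $D^{j}_{mm'}(\alpha ,\beta ,\gamma ) = e^{-im\alpha }\, d^{j}_{mm'}(\beta )\, e^{-im'\gamma }$: each entry of (1.3) depends on $\phi$ only through $D^{j}_{-m,\pm 1/2}(\phi ,\theta ,0) = e^{im\phi }\, d^{j}_{-m,\pm 1/2}(\theta )$, so that
$$
J_{3}\, \Phi _{jm} \; = \; l_{3}\, \Phi _{jm} \; = \; -i\, {\partial \over \partial \phi }\, \Phi _{jm} \; = \; m\, \Phi _{jm} \; ,
$$
which is precisely why the first lower index of the $D$-functions in (1.3) is $-m$, while the second lower index follows the diagonal entries of $\sigma ^{12}$.)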
In particular, components of the generalized conserved momentum can be expressed as follows (for more detail, see [67]) $$ j^{eg}_{1} = l_{1} + {{(i\sigma ^{12} - eg) \cos \phi} \over { \sin \theta }} , \qquad j^{eg}_{2} = l_{2} + {{(i\sigma ^{12} - eg) \sin\phi} \over {\sin \theta }} , \qquad j^{eg}_{3} = l_{3} \eqno(1.4) $$ \noindent where $e$ and $g$ are an~electrical and magnetic charge, respectively. In accordance with the~above regulations, the~corresponding wave functions are to be built up as (also see (4.3a,b)) $$ \Phi^{eg} _{jm}(t,r,\theta ,\phi ) = \left ( \begin{array}{l} f_{1}(t,r) \; D^{j}_{-m,eg-1/2}(\phi ,\theta ,0) \\ f_{2}(t,r) \; D^{j}_{-m,eg+1/2}(\phi ,\theta ,0) \\ f_{3}(t,r) \; D^{j}_{-m,eg-1/2}(\phi ,\theta ,0) \\ f_{4}(t,r) \; D^{j}_{-m,eg+1/2}(\phi ,\theta ,0) \end{array} \right ) \; . \eqno(1.5) $$ \noindent The~Pauli criterion produces two results: first, $\mid eg \mid = 0, 1/2, 1, 3/2,\ldots $ (what is called the~Dirac charge quantization condition; second, the quantum number $j$ in (1.5) may take the values $\mid eg \mid -1/2 , \mid eg \mid +1/2, \mid eg \mid +3/2, \ldots$ that selects the~proper spinor particle-monopole functions. So, it seems rather a~natural step: to try exploiting some generalized Schr\"{o}dinger's basis at analyzing the~problem of $SU(2)$ multiplet in the~non-Abelian monopole field, if by no reason than curiosity or search of some points of unification. There exists else one line justified the~interest to just the~aforementioned approach: the~Shr\"{o}dinger's tetrad basis and Wigner's $D$-functions are deeply connected with what is called the~formalism of spin-weight harmonics [68-70] developed in the~frame of the~Newman-Penrose method of light (or isotropic)) tetrad. Some relationships between spin-weight and spinor monopole harmonics have been examined in the literature [71-73], the~present work follows the~notation used in [67]. There is an~additional reason for special attention just to the~Scr\"{o}dinger's basis on the~background of non-Abelian monopole matter. As will be seen subsequently, that basis can be associated with the~unitary isotopic gauge in the~non-Abelian monopole problem\footnote{There is something rather enigmatic in such a~relation between those, apparently not-touching each other, matters. In the~same time, those always will be attractive points for theoreticians}; in Sec.2 the latter fact will be discussed in full detail. Thus, the present work is, in its working mathematical language, somewhat reminiscent in contrast to prodigious modern investigations based on the~abstruse geometrical theory underlying the monopole matter; it exploits rather conventional (if not ancient) mathematical and physical methods and intends to trace some unifying points among them (tetrad approach, Wigner's functions, selection of allowable wave function, non-Abelian theories, and monopole area). After all these opening and general statements, some more concrete remarks referring to our further work and designated to delineate its content are to be given. Sec.2 determines explicitly all the technical facts necessary to follow the~subsequent content of the work in full detail; in that sense, it plays a~subsidiary role. Here, in the~first place, the~question of intrinsic structure of the~t'Hooft-Polyakov potentials $(\Phi^{a}(x), W^{a}_{\beta})$ is reexamined (to be more exact, the~dyon ansatz of Julia-Zee [74] is taken initially). 
The~main guideline consists in the~following: It is well-known that the usual Abelian monopole potential generates a~certain non-Abelian potential being a~solution of the~Yang-Mills ($Y-M$) equations. First, such a~specific non-Abelian solution was found out in [75]. A~procedure itself of that embedding the~Abelian 4-vector $A_{\mu }(x)$ in the non-Abelian scheme: $ A_{\mu }(x) \rightarrow A^{(a)}_{\mu }(x) \equiv ( 0 , 0 , A^{(3)}_{\mu }$ = $ A_{\mu }(x) )$ ensures automatically that $A^{(a)}_{\mu }(x)$ will satisfy the free $Y-M$ equations. Thus, it may be readily verified that the vector $A_{\mu }(x) = (0, 0, 0, A_{\phi } = g\cos\theta )$ obeys the Maxwell general covariant equations in every curved space-time with the~spherical symmetry: $$ dS^{2}=[e^{2\nu }(dt)^{2} - e^{2\mu }(dr)^{2} - r^{2} ((d\theta )^{2} +\sin^{2}\theta (d\phi )^{2}) ] \; ; A_{\phi } = g\cos\theta \rightarrow F_{\theta \phi } = - g \sin\theta; $$ \noindent here we will face a~single equation that coincides with the~Abelian one. In turn, the~non-Abelian tensor $F^{(a)}_{\mu \nu }(x)$ defined by $ \; F^{(a)}_{\mu \nu }(x) = \nabla _{\mu } \; A^{(a)}_{\nu } \; - \; \nabla _{\nu }\; A^{(a)}_{\mu } \; +\; e\; \epsilon _{abc} \; A^{(b)}_{\mu } A^{(c)}_{\nu }\;$ and associated with the $A^{(a)}_{\mu }$ above has a~very simple isotopic structure: $ F^{(3)}_{\theta \phi } = - g \sin\theta $ and all other $F^{(a)}_{\nu \mu }$ are equal to zero. So, this substitution $ F^{(a)}_{\nu \mu } = ( 0 , 0, F^{(3)}_{\theta \phi } = - g \sin \theta) $ leads the~$Y-M$ equations to the~single equation of the~Abelian case. Therefore, strictly speaking, we cannot state that $A^{(a)}_{\mu }(x)$ obeys a~certain set of really nonlinear equations (it satisfies linear rather than nonlinear ones). Thus, this potential may be interpreted as a~trivially non-Abelian solution of $Y-M$ equations. Supposing that such a~sub-potential is presented in the well-known monopole solutions: $$ \Phi ^{(a)}(x) = x^{a} \Phi (r) , \qquad W^{(a)}_{0}(x) = x^{a} F(r) , \qquad W^{(a)}_{i}(x) = \epsilon_ {iab} x^{b} K(r) \eqno(1.6) $$ \noindent we can try to establish explicitly that constituent structure. The use of the~spherical coordinates and special gauge transformation enables us to separate the~trivial and non-trivial (in other terms, Abelian and genuinely non-Abelian) parts of the~potentials (1.6) into different isotopic components (see the formula (2.7b) ). In the~process of this rearrangement of the monopole's constituents, heuristically useful concepts of three gauges: Cartesian (associated with the~representation (1.6)), Dirac and Schwinger's (both latter are unitary ones) in isotopic space are defined. In order to avoid possible confusion with all the~different bases we will label the~functions and operators by signs of used gauges. The abbreviations $S., D., C. $ will be associated, respectively, with the~Schwinger, Dirac, and Cartesian gauges in the~isotopic space, whereas the~abbreviation {\it sph.} and {\it Cart.} are referred to the~spherical and Cartesian tetrads, respectively\footnote{Below, for simplicity, those latter are often omitted.}. Also, in Sec.2, we briefly review several, the~most important for our work, facts concerning the~generally relativistic Dirac's equation. Besides, for convenience of the~readers, some information about the~aforementioned Pauli's criterion is given. Sec.3 begins analyzing the~doublet-monopole problem. 
Starting from Schwinger unitary gauge $(\;\Phi_{(a)\beta}^{S.},\; W_{(a)\beta}^{S.}\;)$ and the~spherical tetrad-based matter equation (1.1), the problem of obtaining and partly analyzing the relevant radial equations is studied here all over again. At the~correlated choice of frames in both Lorentzian and isotopic space, an explicit form of the total momentum operator\footnote{Another essential feature of the~given frame is the~appearance of a~very simple expression for the~term which mixes up together two distinct components of the isotopic doublet (see (3.1)). Moreover, it is evident at once that both these features will be retained, with no substantial variations, when generalizing this particular problem to more complex ones with other fixed Lorentzian or isotopic spin.}: $$ J^{S.}_{1} = l_{1} + {{(i \sigma ^{12} + t^{3}) \cos\phi} \over {\sin \theta }} , \qquad J^{S.}_{2} = l_{2} + {{(i \sigma ^{12} + t^{3}) \sin\phi} \over {\sin \theta }} , \qquad J^{S.}_{3}= l_{3} \eqno(1.7) $$ \noindent characterizes this particular composite frame as of Schr\''{o}dinger's type; so, the~above general technique is quite applicable. The $(\theta, \phi)$-dependence in the~relevant wave function is described by $D$-functions of three kinds: $D^{j}_{-m,m'}, \; m' = 0, -1, +1$ (also see in (3.3)): $$ \Psi _{\epsilon jm}(x)\; = \; {{e^{-i\epsilon t} } \over r} \; [\; T_{+1/2} \otimes F(r,\theta,\phi) \; + \; T_{-1/2} \otimes G(r,\theta,\phi)\; ] , $$ $$ F = \left ( \begin{array}{l} f_{1}(r) D_{-1} \\ f_{2}(r) D_{\; 0} \\ f_{3}(r) D_{-1} \\ f_{4}(r) D_{\; 0} \end{array} \right ) , \;\; G = \left ( \begin{array}{l} g_{1}(r) D_{\; 0} \\ g_{2}(r) D_{+1} \\ g_{3}(r) D_{\; 0} \\ g_{4}(r) D_{+1} \end{array} \right ) \; , T_{+1/2}= \pmatrix{1 \cr 0} \qquad ,\;\; T_{-1/2} = \pmatrix{0 \cr 1} \; . \eqno(1.8) $$ \noindent $D_{\sigma} \equiv D_{-m, \sigma}^{j}(\phi, \theta, 0)$. The~Pauli criterion (see sec.2) allows here all positive integer values for $j : j = 0, 1, 2, 3,\ldots$ \noindent The separation of variables in the equation is accomplished by the~conventional $D$-function recursive relation techniques. Moreover, it turns out that only two relationships from the~enormous $D$-function apparatus are really needed in doing this separation [66]: $$ {{\partial} \over {\partial \beta }} \; D^{j}_{mm'}(\alpha ,\beta ,\gamma ) = +{1\over 2} \; \sqrt {(j+m')(j-m'+1)} \; e^{-i\gamma } D^{j}_{m,m'-1} \; - $$ $$ {1\over 2} \; \sqrt {(j-m')(j+m'+1)} \; e^{+i\gamma } D^{j}_{m,m'+1} \;\; ; $$ $$ {{m-m' \cos \theta} \over {\sin \theta}} D^{j}_{mm'}(\alpha ,\beta ,\gamma ) = -{1\over 2} \; \sqrt {(j+m')(j-m'+1)} \; e^{-i\gamma } D^{j}_{m,m'-1} \; - $$ $$ {1\over 2} \sqrt {(j-m')(j+m'+1)}\; e^{+i\gamma } D^{j}_{m,m'+1} \; . \eqno(1.9) $$ As known, an important case in theoretical investigation is the electron-monopole system at the minimal value of the quantum number $j$; so, the~case $j = 0$ should be considered especially carefully, and we do this. In the~chosen frame, it is the~independence on $\theta ,\phi $-variables that sets the~wave functions of minimal $j$ apart from all other particle multiplet states (certainly, functions $f_{1}(r)$ , $ f_{3}(r)$ , $ g_{2}(r) $ , $g_{4}(r)$ in the substitution (1.8) must be equated to zero at once). Correspondingly, the~relevant angular term in the~wave equation will be effectively eliminated. The~system of radial equations found by separation of variables ($4$ and $8$ equations in the cases of $j = 0$ and $j > 0$, respectively) are rather complicated. 
They are simplified by searching a~suitable operator that could be diagonalized simultaneously with $ \vec J ^{2} , J_{3}$. The~usual space reflection ($P$-inversion) operator for a~bispinor doublet field has to be followed by a~certain discrete transformation in the~isotopic space, so that a~required quantity could be constructed. The~solution of this problem which has been established to date (see, fore example, in [1, 11-16, 76-85]) is not general as much as possible. For this reason, the~question of reflection symmetry in the~doublet-monopole system is reexamined here all over again. As a~result we find out\footnote{And this is a~crucial moment in subsequent construction of the~present work.} that there are two different possibilities depending on what type of external monopole potential is taken. So, in case of the~non-trivial potential, the~composite reflection operator with required properties is (apart from an~arbitrary numerical factor) $$ \hat{N}^{S.} \; = \; \hat{\pi } \otimes \hat{P}_{bisp.} \otimes \hat{P} , \qquad \hat{\pi } = + \sigma _{1} \eqno(1.10) $$ \noindent here, the quantities $ \hat{\pi }$ and $\hat{P}_{bisp.}$ represent fixed matrices acting in the isotopic and bispinor space, respectively, and changing simultaneously with any variations of relevant bases (see (3.8a)). A~totally different situation occurs in case of the~simplest monopole potential. Now, a~possible additional operator, suitable for separating the~variables, depends on an~arbitrary complex numerical parameter $A$ ($\Delta = e^{iA}\; , \; e^{iA} \neq 0 , \infty $ ): $$ \hat{N}^{S.}_{A} = \hat{\pi }_{A} \otimes \hat{P}_{bisp.} \otimes \hat{P} , \qquad \hat{\pi }_{A} = e^{iA \sigma _{3}} \sigma _{1} \; . \eqno{1.11a} $$ \noindent The same quantity $A$ appears also in expressions for the~corresponding wave functions $ \Psi ^{A}_{\epsilon jm\delta}(t,r,\theta ,\phi)$ (the eigenvalues $ N_{A} = \delta (-1)^{j+1} ; \delta = \pm 1$) : $$ \Psi ^{A}_{\epsilon jm\delta}(x) = [\; T_{+1/2} \otimes F(x) \; + \; \delta\; e^{iA}\; T_{-1/2} \otimes G(x) \; ] \; \eqno(1.11b) $$ \noindent the additional limitations (3.10a) are imposed on the~radial functions in $F$ and $G$. Further, throughout all the sections 4-11, we look into the~fermion doublet just in this simplest monopole field, and all results and discussion concern only this particular system unless the~inverse is indicated. Sec.4 , in the~first place, finished the~work on searching a~remaining operator from a~supposedly complete set: $\{\; \hat{H}, \;\vec{J}^{2},\; J_{3}, \hat{N}_{A}, \; \hat{K} = ? \}$ . That $\hat{K} $ is determined as a~natural extension of the~well-known (Abelian) Dirac operator to the~non-Abelian case. Correspondingly, the~set of radial equations is eventually reduced to a~set of two ones; the~latter is well known and coincides with that relating to the~Abelian electron-monopole system; which has been studied by many authors. Then, on simple comparing the~non-Abelian doublet functions with the~Abelian ones, we arrive at an~explicit factorization of the~doublet functions by Abelian ones and isotopic basis vectors (see (4.4)). The~relevant decompositions have been found for the~composite states with all values of $J$, including the~minimal one $J_{min.}=0$ too. 
As known, the~case of minimal $J$ in the~Abelian theory supplies some unexpected and rather singular features: in particular, it gives a~candidate for a~possible bound state in the~electron-monopole system, it significantly touches the~Hamiltonian self-conjugacy property and some others. Thus, as evidenced by the~factorization, all those purely Abelian peculiarities, concerning the~$J_{min.}$-state, likewise turn out to be represented, in a~practically unchanged form, in the~non~Abelian theory (it should be remembered that here the~case of special non-Abelian monopole field is meant). Else one fact associated with the~above decomposition should be noted. It is matter that the~$A$-ambiguity in determining~the discrete operator $\hat{N}_{A}$ ranges from zero to infinity: $0 < \mid e^{A} \mid < \infty $. Therefore, the~two distinct isotopic doublet components (see (1.11b)), proportional respectively to $T_{-1/2}$ and $T_{+1/2}$, cannot be eigenfunctions of the $\hat{N}_{A}$ whatever the~values of $A$-parameter may be. In other words, the~above $\hat{N}_{A}$ is to be considered as a~specifically non-Abelian operator, and the~parameter $e^{iA}$ itself may be regarded as a~quantity measuring violation of Abelicity in the~composite non-Abelian wave function. Of course, these two Abelian-like doublet states can formally be obtained from (1.11b) too: it suffices to put $e^{iA} = 0$ or $ \infty$, but those singular cases are not covered by the~above-mentioned complete set of operators. In that sense, the~two bound values $A = 0$ and $\infty$ represent singular transition points between th~Abelian and non-Abelian theories. Sec.~5 concerns distinctions between Abelian and non-Abelian monopoles. We consider the~question: what in the~stated above (Sections 2-4) is dictated solely by presence of the~external field and what is determined, in turn, only by isotopic multiplet's structure. To this end, we compare the~fermion doublet multiplet, being subjected to the~monopole effect, with a~free one. We draw attention to the~fact that these two systems have their spherical symmetry operators $\vec{J}^{2}, J_{3} , \hat{N}$ identically coincided. Correspondingly, the relevant wave functions do not vary at all in their dependence on angular variables $\theta , \phi $; instead, a~single difference appears in one parametric function entering the~systems of radial equations. These non-Abelian wave functions' property sharply contrasts with the Abelian one. Indeed, as well known, particle's wave functions and all spherical symmetry operators undergo substantial changes (see (1.4) and (1.5) as the~external monopole field is in effect. To clarify and spell out all the~significance of such a~`minor' alteration in (1.5) as the simple displacement in a~single index, we look at just one mathematical characteristic of the~$D$-functions involved in the~particle wave functions: namely, their boundary properties at the~points $\theta = 0$ and $\theta = \pi$ (see Tables 1-3 in Sec.4). On comparing those characteristics for $D^{j}_{-m, \pm1/2}(\phi, \theta,0)$ and $D^{j}_{-m, eg \pm1/2}(\phi, \theta,0)$ we can conclude that these sets of $D$-functions provide us with the~bases in different functional spaces $\{ F^{eg=0}(\theta,\phi) \}$ and $\{ F^{eg \neq 0}(\theta,\phi) \}$. Every of those functional spaces is characterized by its own behavior at limiting points, which is irreconcilable with that of any other space. 
So, from the very~beginning, in the~Abelian monopole situation, we face a~fact being crucial one by its further implications: the~space of quantum-mechanical states of a~particle in the~monopole field is quite a~contrasting one to a~free particle's space. This circumstance implies a~lot of hampering implications. In particular, we discuss some relations of them to quantum-mechanical superposition principle. Also, else one awkward question of that kind is: what is the~meaning of the~relevant scattering theory, if even at infinity itself, some manifestation of the~magnetic charge presence does not vanish [17-19,86-94] (just because of the~given $\theta, \phi$-dependence). By contrast, in the non-Abelian theory, there not exist any problems of such a~kind. Even more, we may state that one of the~substantial features characterizing the~non-Abelian monopole is that such a~potential, does not destroy the~isotopic angular structure of the~particle multiplet. From this point of view, this potential represents a~certain analog of a~spherically symmetrical Abelian potential $A_{\mu } = ( A_{0}(r), 0,0,0,)$ rather than of the~Abelian {\it monopole} one. In this connection, one additional remark might be useful: one should give attention to the~fact that the~designation {\it monopole} in the~non-Abelian terminology, anticipates tacitly interpretation of $W^{a}_{\mu }(x)$ as ones carrying, in a~new situation, the~essence of the~well-known Abelian monopole, although really, as evidenced by the~above arguments and some other, the~real degree of their similarity may be probably less than one might expect. Also, the following point should be stressed. Though as was mentioned above, certain close relationships between the~non-Abelian doublet wave functions and Abelian fermion-monopole functions occur (see the formulas (4.4a,b)), the~non-Abelian situation, in reality, is intrinsically non-monopole-like ($=$ non-singular one). The following aspect is meant: in the~non-Abelian case, the~totality of possible transformations (upon the~relevant wave functions) which bear the~gauge status are materially different from ones that there are in effect in the~purely Abelian theory. As a~consequence of this, the~non-Abelian fermion doublet wave functions (1.8) can be readily transformed, by carrying out together the~gauge transformations in Lorentzian and isotopic spaces ($S. \; \rightarrow \; C.\;$ and $\;sph. \; \rightarrow \; Cart.$), into the form (see the formulas (A.9) and (A.10)) where they will be single-valued functions of spatial points. In the~Abelian monopole situation, the~representation for particle-monopole functions can by no means be translated to any single-valued one. So, in a~sense, the~whole multiplicity of Abelian monopole manifestations seems to be much more problematical than non-Abelian monopole's. Furthermore, as it appears to be likely, examination of the non-Abelian case does not lead up to solving some purely Abelian problems. These two mathematical and physical theories should be only associated heuristically. As else one confirmation to this general view, in Sec.5 , we will compare two different situations relating to discrete symmetry problems tied up respectively with the~Abelian and non-Abelian models (see also in [1,11-16,76-85]). We explain carefully how the~Abelian $P$-symmetry problem (to be exact, its violation by a~magnetic charge) is embedded in the~non-Abelian model and the~way Abelian $P$-violation results in the~ discrete composite symmetry in the~non-Abelian theory. 
The~main ideas are as follows: In virtue of the~well-known Abelian monopole $P$-violation, the usual bispinor particle $P$-inversion operator $\hat{P}_{bisp.} \otimes \hat{P}$ does not commute with the Hamiltonian $\hat{H}^{eg}$. The way of how to obtain a~certain formal covariance of the~monopole-containing system with respect to $P$-symmetry there has been a~subject of special interest in the~literature. All the suggestions represent, in the essence, a~single one: the~magnetic charge characteristic is to be considered as a~pseudo scalar quantity\footnote{One should take into account that this, as it is, applies only to the~Schwinger basis; the~use of the~Dirac gauge or any other, except Wu-Yang's, implies quite definite modifications in representation of the~$P$-operation on the monopole 4-potential; see, for example, in [78]}. For the~subject under consideration, this assumption implies that one ought to accompany the~ordinary $P$-transformation with a~formal operator $\hat{\pi}$ changing the~parameter $g$ into $-g$. Correspondingly, the~composite Abelian discrete operator $$ \hat{M} = \hat{\pi}_{Abel.} \otimes \hat{P}_{bisp.} \otimes \hat{P} $$ \noindent will commute with the~relevant Hamiltonian indeed. Besides, this $\hat{M}$ can be diagonalized\footnote{It should be emphasized that some unexpected peculiarities with that procedure, in reality, occur as we turn to the states of minimal values of $j$; for more detail see in [67]} on the~functions $\Psi^{eg}: \; \hat{M} \; \Psi^{eg}_{\epsilon jm \delta} = \delta \; (-1)^{j+1} \; \Psi^{eg}_{\epsilon jm \delta}$. However, as evidenced in Sec.5, this operator $\hat{M}$ does not result in a~basic structural condition $$ \Phi (t,-\vec{x}) \; = \; ( 4\times 4-matrix) \; \Phi (t,\vec{x}) \eqno(1.12a) $$ \noindent which would guarantee indeed the~existence of certain selection rules with respect to a~discrete operator. In place of (1.12a), there exists just the~following one $$ \Phi^{+eg}_{\epsilon jm \delta } (t,-\vec{x}) \; = \; \delta (-1)^{j+1} \; \hat{P}_{bisp.}\; \Phi ^{-eg}_{\epsilon jm \delta }(t,\vec{x}) \eqno(1.12b) $$ \noindent take notice of change in the~sign at $eg$ parameter; this minor alteration\footnote{This observation can be conceptualized in more formal mathematical terms: this inversion-like operator $\hat{M}$ turns out to be a~non-self-adjoint one; therefore it might not follow all the familiar patterns of behavior for self-adjoint quantities.} is completely detrimental to the~possibility of producing any selection rules: those do not exist whatever. Therefore, no discrete symmetry-based selection rules in presence of Abelian monopole are possible; those will be really achieved only if any relation with the~general structure ((1.12a), apart from modification due to the~type of particle, which would influence a~matrix involved) exists. However, a~relation of required structure there occurs in the~non-Abelian model: $$ \Psi_{\epsilon jm \delta } (t,-\vec{x}) \; = \; \delta (-1)^{j+1} (\sigma^{2} \otimes \hat{P}_{bisp.}) \; \Psi_{\epsilon jm \delta }(t,\vec{x}) \eqno(1.12c) $$ \noindent Correspondingly, the~relevant selection rules with respect to that composite $N$-parity can be established, and we did it. 
Sec.6 concerns some technical details related to explicit expressions of the~doublet wave functions and discrete operator $\hat{N}_{A}$ in the two other gauges: Dirac's unitary and Cartesian ones\footnote{Among other thing, this material is designated to ease understanding in full for readers preferring the~use of these gauges.}. The following expression for the~above non-Abelian $\hat{\pi}_{A}$-operation, now referring to the~Cartesian gauge, has been calculated: $\hat{\pi }^{C.}_{A} = (- i ) \; \exp ( i A \vec{\sigma} \vec{n}_{\theta ,\phi } )$ , where $\vec{n}_{\theta ,\phi }$ stands for the ordinary radial unit vector. If $A=0$ then the~form of the operator $N_{A=0}^{C.}$ (it does not involve any isotopic transformation) might be a~source of some speculation about an~extremely significant role of the~Abelian $P$-symmetry in the~non-Abelian model. Some subtle considerations related to this matter are discussed. In particular, it should be remembered that a~genuinely Abelian fermion $P$-symmetry implies both a~definite explicit expression for $P$-operation and definite properties of the~corresponding wave functions. In this connection, the~relevant decomposition of fermion doublet wave functions in terms of Abelian fermion-like functions and unit isotopic vectors is given. From that it is evident that the~usual Abelian fermion wave functions and non-Abelian doublet ones belong to substantially different types (see (6.9) at $A=0$), so that the~$P$-inversion operator plays only a~subsidiary role in forming the~composite functions. Sec.7 turns to the question of how the above complex parameter $A$ can manifests itself physically in matrix elements. On that line, as a~natural illustration, the problem of parity selection rules is looked into again, but now depending on the $A$-background. The~notions of a~composite $N_{A}$-scalar and $N_{A}$-pseudoscalar related to some quantities with non-trivial isotopic structure are given; then the~corresponding selection rules are found. For every fixed $A$, these rules imply their own special limitations on composite scalars and pseudoscalars, which are individualized by this $A$; correspondingly, selection rules arising in sequel for matrix elements (if a quantity belongs to the~scalars or pseudoscalars) differ basically from each other. Sec.8 is interested in the~question: Where does the above $A$-ambiguity come from? As shown, the~origin of such a~freedom lies in the~existence of an~additional (one parametric) operation $U(A)$ that leaves the~doublet-monopole Hamiltinian invariant. Just this operation $U(A)$ changes $\hat{N}_{A=0}$ into $\hat{N}_{A}$. Different values for $A$ lead to the~same whole functional space; each fixed $A$ governs only the basis states $\Psi^{A} _{\epsilon jm\delta }(x)$ of it, and the~symmetry operation acts transitively on those states: $\Psi ^{A'}_{\epsilon jm\delta}(x) = U(A'-A) \Psi ^{A}_{\epsilon jm\delta }(x)$. An analogy between that isotopic symmetry and a~more familiar example of Abelian chiral ($\gamma ^{5}$) symmetry in massless Dirac field theory [95,96] is drawn\footnote{So, the~author suggests the~term `isotopic chiral symmetry'. Besides, to be terminologically exact, one should split the~notion of $\gamma^{5}$ complex chiral symmetry into two ones; properly $\gamma^{5}$-symmetry (when $A$ is real number) and conformal symmetry (when $A$ is purely imaginary number); but for simplicity, we will use the~term `complex chiral symmetry'.}. 
The~role of the~Abelian $\gamma^{5}$-matrix is taken by the~isotopic $\sigma_{3}$-matrix: its form in the $S.$-gauge is $U^{S.}(A) = exp(A/2) \; exp(i {A \over 2} \sigma_{3})$. Some additional technical details touching this operation are given; in particular, we find expressions for $U(A)$ in the~Cartesian gauge. In Cartesian frame, this symmetry transformation takes the~form $$ U_{C.}(A) = e^{+iA/2} \exp [\; -i\; {A\over 2} \;\vec{\sigma }\; \vec{n}_{\theta ,\phi }\; ] \eqno(1.13) $$ \noindent where the~second factor represents a~2-spinor local transformation from the~ 3-dimensional complex rotation group $SO(3.C)$. The~explicit coordinate dependence appearing in Cartesian gauge results from the~non-commutation $\sigma_{3}$ with a~gauge transformation involved into transition from Shwinger's to Cartesian isotopic basis. In the~analogous Abelian situation, the form of the~chiral transformation remains the~same because $\gamma^{5}$ and the~relevant gauge matrix (that belongs to the~bispinor local representation of the group $SL(2.C)$) are commutative with each other. In Sec.9, we look into some qualitative peculiarities of the~$A$-freedom placing special notice to the~division of $A$-s values into the real and complex ones. All those values (complex as well as real) are equally permissible: they only govern bases ot the~same Hilbert space of quantum states, which can be related to each other by the use of the ordinary superposition principle. However, a~material distinction between real and complex $A$-s will appear, if one turns to the~orthogonality properties of those basis states $\Psi^{A}_{\epsilon jm\delta}(x)$. As will be seen, at $A^{*} \neq A$, the~states $\Psi^{A}_{\epsilon jm, -1}(x)$ and $\Psi^{A}_{\epsilon jm, +1}(x)$ are not orthogonal to each other. Such specific (non-orthogonal) bases, though not being of very common use and having a~number of peculiar features, are allowed to be exploited in conventional quantum theory. Else one fact associated with the~above (real-complex) division of $A$-s is that the~discrete operator $\hat{N}_{A}$ represents a~non-self-adjoint quantity as $A^{*} \neq A$. In this point, there are two possibilities to choose from: whether we restrict ourselves to the~real $A$-values (correspondingly, no problems with self-conjugacy there arise) or we exploit the~complex $A$-values as well as the~real ones, and thereby, the non-orthogonal bases and non-self-adjoint character of the~discrete operator, are allowed in the~theory. We have chosen to accept and look into the~second possibility. Further, we consider narrowly such a~specific nature of the~$\hat{N}_{A}$ since, as the~complex $A$-values are allowed, we will violate the~well-known quantum-mechanical regulation about the self-adjointness of measurable physical quantities. The~main guideline ideas in clearing up the~problem faced us here is as follows. One should notice the~fact that the~single relation $(\hat{N}_{A})^2 = I $ is abundantly sufficient one to produce real proper values: $+1$ and $-1$. Furthermore, as it was stressed above, just {\it real values} are not material here whatever; instead, the~only required consequence of this symmetry is the~mere distinction between two different quantum possibilities. In the~light of this, the~automatical incorporation of all discrete operators into class of self-adjoint ones does not seem inevitable. But accepting this, there is a~problem to solve: what is the meaning of complex expectation values of such non-self-adjoint operators. 
We carefully explain how one may interpret all such complex values as being physically measurable ones. Finally, we devote Sec.10 to clearing up else one, and rather important from the~physical viewpoint, peculiarity of the~doublet-monopole system. It is matter that if the~parameter $A$ is real one, then the~matrix translating $\Psi ^{A=0}_{\epsilon jm\delta }(x)$ into $\Psi ^{A}_{\epsilon jm\delta }(x)$ coincides (apart from a~phase factor $e^{iA/2}$) with a~matrix lying in the~group $SU(2)$. However, the~group $SU(2)$ has the~status of gauge one for the~system under consideration. So, the point of view might be brought to light: one could claim that the~two functions $\Psi ^{A}(x)$ and $\Psi ^{A'}(x)$ , referring respectively to the~different values $A$ and $A'$, represent in reality only the transforms of each other in the sense of $SU(2)$ gauge theory. And further, as a~direct consequence, one could insist on the~impossibility in principle to observe indeed any physical distinctions between those wave functions. If the~above transformation gets estimated so, then ultimately one will conclude that the~above $N_{A}$-parity selection rules (explicitly depended on $A$ which is real in that case) are the~mathematical fiction only, since the~transformation $U(A)$ is not physically observable. For answering this question, we carefully follow some interplay between the~quantum-mechanical superposition principle and the~concepts of gauge and non-gauge symmetries. As will be seen, the general outlook prescribing to interpret the transformation $U(A)$ as exclusively a~gauge one contradicts with some basic regulations stemming from the~superposition principle. Even more, as shown, starting from the~exclusively gauge understanding of that transformation, one can arrive at the~requirement of {\it physical identification} of the~two states $\Psi ^{A}_{\epsilon jm\delta =-1}(x)$ and $\Psi ^{A}_{\epsilon jm\delta = +1}(x)$. However, in a~sense, this is equivalent to the effective returning into the~Abelian scheme, that hardly can be desirable effect. Nevertheless, the matrix $U(A)$ belongs to $SU(2)^{gauge}_{loc.}$ (apart from $U(1)$ factor). In order not to reach a~deadlock, in author's opinion, there exists just one and very simple way out of this situation, which consists in the following: The complete symmetry group of the~system under consideration is of the~form $ \hat{F}(A) \otimes SL(2.C)^{loc.}_{gauge} \otimes SU(2)^{loc.}_{gauge} $. This group, in particular, contains the~gauge and non-gauge symmetry operations which both have the~same mathematical form but different physical status. Finally, in Sec.11, we briefly discuss extension of the~present analysis to other situations, with different values of isotopic and Lorentzian spin, and gauge groups. It is argued that some facts discerned in the~present work for $SU(2)$-model might bear upon similar aspects of other gauge group-based theories. In Supplement A, we consider some additional relationships between explicit forms of the~fermion-monopole functions in the~bases of spherical and Cartesian tetrads. \subsection*{2 Dirac and Schwinger gauges in isotopic space} This section deals with some representation of the non-Abelian monopole potential, which will be the most convenient one to formulate and analyze the problem of isotopic multiplet in this field. Let us begin describing in detail this matter. 
The well-known form of the monopole solution introduced by t'Hooft and Polyakov ([42-44]; see also Julia-Zee [74]) may be taken as a starting point. The field $W^{(a)}_{\alpha }$ represents a~covariant vector with the~usual transformation law $W^{(a)}_{\beta } = (\partial x^{i} / \partial x^{\beta }) W^{(a)}_{i})$ and our first step is the~change of variables in 3-space. Thus, the~given potentials $( \Phi ^{(a)}(x), W^{(a)}_{\alpha })$ convert into $ (\Phi ^{(a)}(x), W^{(a)}_{t}, W^{(a)}_{r}$, $W^{(a)}_{\theta }, W^{(a)}_{\phi})$. Our second step is a~special gauge transformation in the isotopic space. The~required gauge matrix can be determined (only partly) by the~condition $\;(O_{ab} \Phi ^{b}(x) ) = ( 0 , 0 , r \Phi (r)\; )$. This equation has a~set of solutions since the~isotopic rotation by every angle about the~third axis $(0, 0, 1)$ will not change the~finishing vector $( 0, 0, r \Phi (r) )$. We fix such an~ambiguity by deciding in favor of the~simplest transformation matrix. It will be convenient to utilize the~known group $SO(3.R)$ parameterization through the Gibbs 3-vector\footnote{The author highly recommends the book [97] for many further details developing the Gibbs approach to groups $SO(3.R), SO(3.C), SO_{0}(3.1)$, etc.} $$ O = O( \vec{c}) = I + 2 \;{ \vec{c}^{\times } + ( \vec{c}^{\times })^{2} \over 1 + \vec{c}^{2}} \;\; , \qquad (\vec{c}^{\times })_{ac} = - \epsilon _{acb} \; c_{b} \; . \eqno(2.1) $$ \noindent According to [98], the simplest rotation above is $\vec{B} = O(\vec{c}) \vec{A} ,\; \vec{c} = [\vec{B} \vec{A} ] / (\vec{A} +\vec{B} ) \vec{A}\;$, therefore, $$ \hbox{if}\;\;\;\; \vec{A} = r \Phi (r) \; \vec{n}_{\theta, \phi} \; , \; \vec{B} = r \Phi (r) ( 0 , 0 , 1 )\; , \; \; \hbox{then} \;\; \vec{c} = {{\sin \theta }\over {1 + \cos \theta }} \; ( + \sin \phi , - \cos \phi , 0 ) \; . \eqno(2.2) $$ \noindent Together with varying the~scalar field $\Phi ^{a}(x)$, the~vector triplet $W^{(a)}_{\beta }(x)$ is to be transformed from one isotopic gauge to another under the~law [99] $$ W'^{(a)}_{\alpha }(x)\; = \; O_{ab}(\vec{c}(x)) \; W^{(b)}_{\alpha }(x) \; + \; {1\over e} \; f_{ab}( \vec{c}(x))\; {{\partial c_{b} } \over { \partial x^{\alpha }}} \;\; , \qquad f(\vec{c}) = - 2\; {{1 + \vec{c}^{\times }} \over {1 + \vec{c}^2}} \;\; . \eqno(2.3) $$ \noindent With the use of (2.3), we obtain the~new representation $$ \Phi^{D.(a)}= r \Phi (r) \left ( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right )\; , \qquad W^{D.(a)}_{\theta } = (r^{2} K + 1/e) \left ( \begin{array}{c} - \sin \phi \\ + \cos \phi \\ 0 \end{array} \right ) \; , \qquad W^{D.(a)}_{r} = \pmatrix{0\cr0\cr0} \; , $$ $$ W^{D.(a)}_{t} = \left ( \begin{array}{c} 0 \\ 0 \\ rF(r) \end{array} \right ) \; , \qquad W^{D.(a)}_{\phi } = \left ( \begin{array}{c} -(r^{2}K + 1/e) \sin\theta \cos\phi \\ -(r^{2}K + 1/e) \sin\theta \sin\phi \\ {1 \over e} (\cos\theta - 1) \end{array} \right ) \; . \eqno(2.4) $$ \noindent It should be noticed that the~factor $(r^{2} K(r) + 1/ e)$ will vanish when $K = -1 / e r^{2}$. Thus, only the~delicate fitting of the~single proportional coefficient (it must be taken as $-1/ e)$ results in the~actual formal simplification of the~non-Abelian monopole potential. 
There exists close connection between $W^{D.(a)}_{\phi }$ from (2.4) and the~Dirac's expression for the~Abelian monopole potential (supposing that $\vec{n} = (0, 0 ,-1)$): $$ A^{\beta }_{D.} = g \;\left ( \; 0 ,\; {{[\; \vec{n} \; \vec{r}\; ]} \over {(r + \vec{r} \; \vec{n} )\; r }}\; \right ) \;\; , \qquad or \qquad A^{D.}_{\phi } = - g \; ( \cos \theta - 1 ) \; . \eqno(2.5) $$ \noindent So, $W^{triv.}_{(a)\alpha }(x)$ from (2.4) (produced by setting $K = - 1 / e r^{2}$) can be thought of as the~result of embedding the~ Abelian potential (2.5) in the~non-Abelian gauge scheme: $W^{(a)D.}_{\alpha }(x) \equiv ( 0 , 0 , A^{D.}_{\alpha }(x))$. The quantity $W^{(a)D.}_{\alpha }(x)$ labelled with symbol $D.$ will be named after its Abelian counterpart; in other words, this potential will be treated as relating to the Dirac's non-Abelian gauge in the~isotopic space. In Abelian case, the Dirac's potential $A^{D.}_{\alpha }(x)$ can be converted into the~Schwinger form $A^{S.}_{\alpha }(x)$ $$ A^{S.}_{\alpha } = \left ( 0 ,\; g \; {{[\; \vec{r}\; \vec{n}\; ] \; ( \vec{r} \; \vec{n})} \over {(r^{2} \; - \; ( \vec{r}\; \vec{n})^{2})r}} \right ) \; , \qquad or \qquad A^{S.}_{\phi } = g \; \cos \theta \eqno(2.6) $$ \noindent by means of the~following transformation $$ A^{S.}_{\alpha }\; = \;A ^{D.}_{\alpha } \; + \; {{\hbar c} \over{ie}} \; S \; {{ \partial} \over { \partial x^{\alpha }}} \; S^{-1}, \qquad S(x) = \exp (-i{eg \over \hbar c} \phi ) \;\; . $$ \noindent It is possible to draw an~analogy between the~Abelian and non-Abelian models. That is, we may introduce the~Schwinger non-Abelian basis in the~isotopic space: $$ ( \Phi ^{D.(a)}, W^{D.(a)}_{\alpha }) \qquad \stackrel{\vec{c}~'} \rightarrow \qquad ( \Phi ^{S.(a)}, W^{S.(a)}_{\alpha } ) , \qquad \vec{c}~' = ( 0 , 0 , - \tan \phi /2 ) \; ; \eqno(2.7a) $$ \noindent where $$ O(\vec{c}~') = \pmatrix{ \cos \phi & \sin \phi & 0 \cr -\sin \phi & \cos \phi & 0 \cr 0 & 0 & 1 } \; . $$ \noindent Now an~explicit form of the~monopole potential is given by $$ W^{S.(a)}_{\theta} = \left ( \begin{array}{c} 0 \\( r^{2} K + 1/e ) \\ 0 \end{array} \right ) \; , \qquad W^{S.(a)}_{\phi } = \left ( \begin{array}{c} -(r^{2} K + 1/e) \\ 0 \\ {1\over e} \cos \theta \end{array} \right )\; , $$ $$ W^{S.(a)}_{r} = \pmatrix{0\cr0\cr0}, \qquad W^{S.(a)}_{t} = \pmatrix{ 0 \cr 0 \cr rF(r)} , \qquad \Phi ^{S.(a)} = \pmatrix{0 \cr 0 \cr r\Phi (r)} \eqno(2.7b) $$ \noindent where the symbol $S.$ stands for the~Schwinger gauge. Both $D.$- and $S.$-gauges (see (2.4) and (2.7b)) are unitary ones in the~isotopic space due to the~respective scalar fields $\Phi ^{D.}_{(a)}(x)$ and $\Phi ^{S.}_{(a)}(x)$ are $x_{3}$-unidirectional, but one of them (Schwinger's) seems simpler than another (Dirac's). For the following it will be convenient to determine the~matrix $0(\vec{c}~'')$ relating the~Cartesian gauge of isotopic space with Schwinger's: $$ O(\vec{c}~'') = O(\vec{c}~') O(\vec{c}) = \pmatrix{ \cos\theta \cos\phi & \cos\theta \sin\phi & -\sin\theta \cr -\sin\phi & \cos\phi & 0 \cr \sin\theta \cos\phi & \sin\theta \sin\phi & \cos\theta } , $$ $$ \vec{c}~'' = (+ \tan\theta /2 \tan\phi /2, - \tan\theta /2, -\tan\phi /2). 
\eqno(2.8) $$ This matrix $O(\vec{c}~'')$ is also well-known in other context as a~matrix linking Cartesian and spherical tetrads in the space-time of special relativity (as well as in a~curved space-time of spherical symmetry) $$ x^{\alpha } = (x^{0}, x^{1}, x^{2}, x^{3}) \; , \;\; dS^{2}= [(dx_{0})^{2} - (dx_{1})^{2} - (dx_{2})^{2} - (dx_{3})^{2}] \; , \;\; e^{\alpha }_{(a)}(x) = \delta ^{\alpha }_{a} \eqno(2.9a) $$ \noindent and $$ x'^{\alpha } = ( t , r , \theta , \phi ) \; , \qquad dS^{2} = [dt^{2} - dr^{2} - r^{2}(d\theta ^{2} + \sin ^{2} \theta d\phi ^{2})] \; , $$ $$ e^{\alpha'}_{(0)} = ( 1 , 0 , 0 , 0 )\; , \qquad e^{\alpha'}_{(1)} = ( 0, 0 , 1/r, 0 )\; , $$ $$ e^{\alpha'}_{(2)} = ( 0 ,0 , 0 ,1/ r \sin \theta)\; , \qquad e^{\alpha'}_{(3)} = ( 0 , 1 , 0 , 0 ) \;\; . \eqno(2.9b) $$ Below we review briefly some relevant facts about the~tetrad formalism. In the~presence of an~external gravitational field, the~starting Dirac equation $(i \gamma ^{a} \partial /\partial x^{a} - m ) \Psi (x) = 0$ is generalized into [55-63] $$ [\; i \gamma ^{\alpha}(x) \; (\partial_{\alpha} \; + \; \Gamma _{\alpha }(x) ) \; - \; m \; ] \; \Psi (x) = 0 \eqno(2.10) $$ \noindent where $\gamma ^{\alpha }(x) = \gamma ^{a} e^{\alpha }_{(a)}(x)$, and $e^{\alpha }_{(a)}(x)$, $\Gamma _{\alpha }(x) = {1\over 2} \sigma ^{ab} e^{\beta }_{(a)} \nabla _{\alpha }(e^{\alpha }_{(b)\beta })$, $\nabla _{\alpha }$ stand for a~tetrad, the~bispinor connection, and the~covariant derivative symbol, respectively. In the~spinor basis: $$ \psi (x)= \left ( \begin{array}{c} \xi (x) \\ \eta (x) \end{array} \right ) , \qquad \gamma ^{a} = \left ( \begin{array}{cc} 0 & \bar{\sigma}^{a} \\ \sigma^{a} & 0 \end{array} \right ) , \qquad \sigma ^{a} = (I ,+ \sigma ^{k}), \qquad \bar{\sigma }^{a} = (I, -\sigma^{k}) $$ \noindent ($\sigma ^{k}$ are the two-row Pauli spin matrices; $k = 1,2,3$) we have two equations $$ i \sigma ^{\alpha }(x) \;[\; \partial_{\alpha} \; + \; \Sigma _{\alpha }(x)\; ] \; \xi (x) = m \;\eta (x), \qquad i \bar{\sigma}^{\alpha }(x)\; [\; \partial_{\alpha } \; + \; \bar{\Sigma}_{\alpha }(x)\; ] \; \eta (x) = m \; \xi (x) \eqno(2.11) $$ \noindent where the~symbols $\sigma ^{\alpha }(x), \bar{\sigma }^{\alpha }(x), \Sigma _{\alpha }(x), \bar{\Sigma }_{\alpha }(x)$ denote respectively $$ \sigma ^{\alpha }(x)= \sigma ^{a} e^{\alpha }_{(a)}(x),\qquad \bar{\sigma}^{\alpha }(x)= \bar{\sigma}^{a} e^{\alpha }_{(a)}(x), $$ $$ \Sigma _{\alpha }(x) = {1\over 2} \Sigma ^{ab} e^{\beta }_{(a)} \nabla _{\alpha }(e_{(b)\beta }) , \qquad \bar{\Sigma}_{\alpha }(x) = {1\over 2} \bar{\Sigma}^{ab} e^{\beta }_{(x)} \nabla _{\alpha }(e_{(b)\beta }) , $$ $$ \Sigma ^{ab} = {1\over 4}(\bar{\sigma}^{a} \sigma^{b} - \bar{\sigma}^{b} \sigma^{a}), \qquad \bar{\Sigma}^{ab} = {1\over 4} (\sigma^{a} \bar{\sigma}^{b} - \sigma^{b} \bar{\sigma}^{a}) \; . $$ \noindent Setting $m$ equal to zero, we obtain Weyl equation for neutrino $\eta (x)$ and anti-neutrino $\xi (x)$, or the~Dirac equation for a~massless particle (the latter will be used further in Sec.8). The form of equations (2.10), (2.11) implies quite definite their symmetry properties. It is common, considering the~Dirac equation in the~same space-time, to use some different tetrads $e^{\beta }_{(a)}(x)$ and $e'^{\beta }_{(b)}(x)$, so that we have the~equation (2.10) and an~analogous one with a~new tetrad mark. 
In other words, together with (2.10) there exists an~equation on $\Psi'(x)$, where quantities $\gamma'^{\alpha }(x)$ and $\Gamma'_{\alpha}(x)$, in contrast with $\gamma^{\alpha }(x)$ and $\Gamma_{\alpha}(x)$, are based on a~new tetrad $e'^{\beta }_{b)}(x)$ related to $e^{\beta }_{(a)}(x)$ through a~certain local Lorentz matrix $$ e'^{\beta }_{(b)}(x) \; = \; L^{\;\;a}_{b}(x) \; e^{\beta }_{(a)}(x) \; . \eqno(2.12a) $$ \noindent It may be shown that these two Dirac equations on functions $\Psi (x)$ and $\Psi'(x)$ are related to each other by a~definite bispinor transformation: $$ \xi'(x) = B(k(x)) \xi (x), \qquad \eta'(x) = B^{+}(\bar{k}(x)) \eta (x) \; . \eqno(2.12b) $$ \noindent Here, $B(k(x)) = \sigma ^{a} k_{a}(x)$ is a~local matrix from the $SL(2.C)$ group; 4-vector $k_{a}$ is the well-known parameter on this group [100]. The~matrix $L^{\;\;a}_{b}(x)$ from (2.12a) may be expressed as a~function of arguments $k_{a}(x)$ and $k^{*}_{a}(x)$ : $$ L^{a}_{b}(k, k^{*}) \; = \; \bar{\delta}^{c}_{b} \; [\; - \delta^{a}_{c} \; k^{n} \; k^{*}_{n} \; + \; k_{c} \; k^{a*}\; + \; k^{*}_{c} \; k^{a} \; + \; i \epsilon ^{\;\;anm}_{c}\; k_{n}\; k^{*}_{m} \;] \eqno(2.12c) $$ \noindent where $\bar{\delta}^{c}_{b}$ is a~special Cronecker symbol: $$ \bar{\delta}^{c}_{b} = 0 \;\;\; if \;\;\;c \neq \; ; \;\;= +1 \;\;\; if \;\; \; c = b = 0 \; ; \;\; = -1 \;\;\; if \;\; \; c = b = 1,2,3 \;\; . $$ By the way, it is normal practice that some different tetrads are used in examining the~Dirac equation on the~same Rimannian space-time background. If there is a~need to analyze some correlation between solutions in those distinct tetrads, then it is important to know what are the~relevant gauge transformations over the~spinor wave functions. In particular, the~matrix relating spinor wave functions in Cartesian and spherical tetrads (see (2.9)) is as follows $$ B = \pm \pmatrix{ \cos\theta /2 \; e^{i\phi /2} & \sin\theta /2 \; e^{-i\phi /2} \cr -\sin\theta /2 \; e^{i\phi /2} & \cos\theta /2 \; e^{-i\phi /2} } \equiv B( \vec{c}~'') = \pm {{ I - i \vec{\sigma } \vec{c}~'' } \over { \sqrt{1 - (\vec{c}~'')^{2} }}} \; . \eqno(2.12d) $$ \noindent The vector matrix $L^{\;\;a}_{b}(\theta ,\phi )$ referring to the spinor's $B(\theta ,\phi )$ is the same as $O(\vec{c}~'')$ from (2.8). It is significant that the~two gauge transformations, arising in quite different contexts, correspond so closely with each other. This basis of spherical tetrad will play a~substantial role in our subsequent work. This Schr\"odinger frame of spherical tetrad [64] was used with great efficiency by Pauli [65] when investigating the~problem of allowed spherically symmetrical wave functions in quantum mechanics. Below, we briefly review some results of this investigation. Let the $J^{\lambda }_{i}$ denote $$ J_{1}= (\; l_{1} + \lambda\; {{\cos \phi }\over {\sin \theta}}\; ),\qquad J_{2}= (\; l_{2} + \lambda\; {{\sin \phi }\over {\sin \theta}}\; ),\qquad J_{3} = l_{3} \; . $$ \noindent At an~arbitrary $\lambda$, as readily verified, those $J_{i}$ satisfy the~commutation rules of the~Lie algebra $SU(2): [ J_{a},\; J_{b} ] = i \; \epsilon _{abc} \; J_{c}$. As known, all irreducible representations of such an~ abstract algebra are determined by a~set of weights $j = 0, 1/2, 1, 3/2,... \; \; ({\em dim} \;j = 2j + 1)$. Given the~explicit expressions of $J_{a}$ above, we will find functions $\Phi ^{\lambda }_{jm}(\theta ,\phi )$ on which the~representation of weight $j$ is realized. 
In agreement with the~generally known method, those solutions are to be established by the~following relations $$ J_{+} \; \Phi ^{\lambda }_{jj} \; = \; 0 \;\; , \qquad \Phi ^{\lambda }_{jm} \; = \; \sqrt{{(j+m)! \over (j-m)! \; (2j)! }} \; J^{(j-m)}_{-} \; \Phi^{\lambda}_{jj} \;\; , \eqno(2.13) $$ $$ J_ {\pm} \; = \; ( J_{1} \pm i J_{2}) \; = \; e^{\pm i\phi }\; [\; \pm { \partial \over \partial \theta } \; + \; i \cot \theta \; { \partial \over \partial \phi} \; + \; { \lambda \over \sin \theta }\; ] \;\; . $$ \noindent From the equations $J_{+} \; \Phi ^{\lambda }_{jj} \; = \; 0 \;$ and $\; J_{3} \; \Phi ^{\lambda }_{jj} \; = \; j \; \Phi ^{\lambda }_{jj}$, it follows that $$ \Phi ^{\lambda }_{jj} = N^{\lambda }_{jj} \; e^{ij\phi} \; \sin^{j}\theta \;\; {( 1 + \cos \theta )^{+\lambda /2} \over ( 1 - \cos \theta )^{\lambda /2}} ,\; N^{\lambda }_{jj} = {1 \over \sqrt{2\pi}} \; { 1 \over 2^{j} } \; \sqrt{{(2j+1) \over \Gamma(j+m+1) \; \Gamma(j-m+1)}} \; . $$ \noindent Further, employing (2.13) we produce the~functions $\Phi ^{\lambda }_{jm}$ $$ \Phi ^{\lambda }_{jm} \; = \; N^{\lambda }_{jm} \; e^{im\phi} \; {1 \over \sin^{m}\theta } \;\; {(1 - \cos \theta)^{\lambda/2} \over (1 + \cos \theta)^{+\lambda/2}} \; \times $$ $$ ({ d \over d \cos \theta})^{j-m} \; [\; (1 + \cos \theta ) ^{j + \lambda } \; (1 - \cos \theta ) ^{j-\lambda } \; ] \eqno(2.14) $$ \noindent where $$ N^{\lambda }_{jm} \; = \; {1 \over \sqrt{2\pi} 2^{j}} \; \sqrt{{(2j+1) \; (j+m)! \over 2(j-m)! \Gamma(j + \lambda +1) \; \Gamma(j- \lambda +1)}} \;\; . $$ \noindent The Pauli criterion tells us that the $(2j + 1)$ functions $ \Phi ^{\lambda }_{jm}(\theta ,\phi ), \; m = - j,... , +j$ so constructed, are guaranteed to be a~basis for a~finite-dimension representation, providing that the~functions $\Phi ^{\lambda }_{j,-j}(\theta ,\phi )$, found by this procedure, obey the~identity $$ J_{-} \;\; \Phi ^{\lambda }_{j,-j} \; = \; 0 \; . \eqno(2.15a) $$ \noindent After substituting the~function $\Phi ^{\lambda }_{j,-j}(\theta ,\phi )$, the~relation (2.15a) reads $$ J_{-} \; \Phi ^{\lambda }_{j,-j} \; = \; N^{\lambda }_{j,-j} \; e^{-i(j+1)\phi }\; (\sin \theta)^{j+1} \; {(1 - \cos \theta )^{\lambda /2} \over (1 + \cos \theta )^{\lambda /2}} \; \times $$ $$ ({d \over d \cos \theta})^{2j+1} \; [\; (1 + \cos \theta )^{j+\lambda } \; (1 - \cos \theta )^{j-\lambda } ) \; ]\; = \; 0 \eqno(2.15b) $$ \noindent which in turn gives the~following restriction on $j$ and $\lambda $ $$ ({d \over d \cos \theta})^{2j+1} \; [\; (1 + \cos \theta )^{j+\lambda } \; (1 - \cos \theta )^{j-\lambda } \; ] \; = \; 0 \; \; . \eqno(2.15c) $$ \noindent But the~relation (2.15c) can be satisfied only if the~factor $P(\theta )$, subjected to the~operation of taking derivative $( d/d \cos \theta ) ^{2j+1}$, is a~polynomial of degree $2j$ in $ \cos \theta$. So, we have (as a~result of the~Pauli criterion) 1. {\em the} $\lambda$ {\em is allowed to take values} $, +1/2,\; -1/2,\; +1,\; -1, \ldots$ \noindent Besides, as the~latter condition is satisfied, $P(\theta )$ takes different forms depending on the $(j , \lambda)$-correlation: $$ P(\theta ) \; = \; (1 + \cos \theta )^{j+\lambda } \; (1 - \cos \theta )^{j - \lambda } \; = \; P^{2j}(\cos \theta ),\qquad if\qquad j = \mid \lambda \mid, \mid \lambda \mid +1,... $$ or $$ P(\theta ) \; = \; { P^{2j+1}(\cos \theta ) \over \sin \theta }, \qquad if \qquad j = \mid \lambda \mid +1/2, \mid \lambda \mid +3/2,... $$ \noindent so that the second necessary condition resulting from the~Pauli criterion is 2. 
{\em given } $\lambda$ {\em according to 1., the number j is allowed to take values} $j = \mid \lambda \mid, \mid \lambda \mid +1,...$ \noindent Hereafter, these two conditions: 1 and 2 will be termed, respectively, as the~first and the~second Pauli consequences\footnote{We draw attention to that the Pauli criterion $ J_{-} \Phi _{j,-j}(t,r,\theta ,\phi )\; =\; 0 $ affords the~condition that is invariant relative to possible gauge transformations. The function $\Phi _{j,m}(t,r,\theta ,\phi )$ may be subjected to any gauge transformation. But if all the~components $J_{i}$ vary in a~corresponding way too, then the Pauli condition provides the~same result on $(j,\lambda)$-quantization. In contrast to this, the common requirement to be a~single-valued function of spatial points, often applied to produce a~criterion on selection of allowable wave functions in quantum mechanics, is not invariant under gauge transformations and can easily be destroyed by a~suitable gauge one.}. Also, it should be noted that the~angular variable $\phi $ is not affected (charged) by the Pauli criterion; instead, a~variable that works above is the~$\theta$. Significantly, in the~contrast to this, the~well-known procedure [3-10] of deriving the~electric charge quantization condition from investigating continuity properties of quantum mechanical wave functions, such a~working variable is the $\phi $. If the~first and second Pauli consequences fail, then we face rather unpleasant mathema\-ti\-cal and physical problems\footnote{Reader is referred to the~Pauli article [65] for more detail about those peculiarities.}. As a~simple illustration, we may indicate the~familiar case when $\lambda= 0$; if the~second Pauli condition is violated, then we will have the~integer and half-integer values of the~orbital angular momentum number $l = 0, 1/2, 1, 3/2,\ldots\;$ As regards the~Dirac electron with the components of the total angular momentum in the~form (1.2), we have to employ the~above Pauli criterion in the~constituent form owing to $\lambda $ changed into $\Sigma _{3}$. Ultimately, we obtain the~allowable set $J = 1/2, 3/2, \ldots$. A~fact of primary practical importance to us is that the~functions $\Phi ^{\lambda }_{jm}(\theta, \phi )$ constructed above relate directly to the~known Wigner $D$-functions [66]: $ \Phi ^{\lambda }_{jm}(\theta , \phi ) \; = \; (-1)^{j-m} \; D^{j}_{-m, \lambda}(\phi, \theta, 0) $. \subsection*{3. Separation of variables and a~composite inversion operator} We will utilize the general relativity covariant formalism when a~fundamental {\em Dirac} equation is (1.1). In the~spherical tetrad basis (2.9b) and the~Schwinger unitary gauge of the~monopole potentials (2.7b), the~matter equation (1.1) takes the form $$ \left [\;\gamma ^{0} \; ( i \; \partial _{t} \; +\; e\; r F(r) \; t^{3})\; + \; i \gamma ^{3}\; ( \partial _{r} \;+ \; {1\over r}) \; +\; {1\over r} \; \Sigma ^{S.}_{\theta ,\phi } \; + \; \right. \eqno(3.1) $$ $$ \left. {{e r^{2}K(r) + 1 } \over r} \; (\gamma ^{1} \otimes t^{2}\; - \; \gamma ^{2} \otimes t^{1})\; - \; ( \; m \; + \; \kappa \; r\; \Phi(r)\; t^{3} )\; \right ] \; \Psi ^{S.} = 0 \;\; , $$ $$ \Sigma ^{S.}_{\theta ,\phi } = \left [ \; i \; \gamma ^{1} \; \partial _{\theta } \;+ \; \gamma ^{2}\; {{ i\partial _{\phi } \;+ \; (i \sigma ^{12} \; + \; t^{3})\cos \theta} \over {\sin \theta }}\; \right ] \eqno(3.2) $$ \noindent here $t^{j} = (1/2)\; \sigma ^{j}$. 
The equation's representation (3.1) itself is remarkable: the~choice of working basis automatically produces a~required rearrangement of its terms. It is useful to look at all the~particular ones in (3.1) and further to trace their respective and rather distinctive contributions; as will be seen, each of them has its practical side in the~subsequent formation of the~doublet-monopole system's properties. In particular, just one term in (3.1), proportional to $(e r^{2} K(r) + 1)$, mixes up together the~components of the~multiplet and this term vanishes in case of the~simplest monopole potential. The~peculiarity of both $ e\; r\; F(r) \; t^{3}$ and $\kappa \; r\; \Phi(r) \;t^{3} $ terms, at least as far as they are really touched in the~present work, will be brought to light when we turn to the~diagonalization of a~composite (isotopic-Lorentzian) discrete operator. In the~given basis, the~components of total conserved momentum are determined by (1.7), and correspondingly, the~starting doublet wave function $\Psi _{\epsilon jm}(x)$ is as in (1.8). An~important case in theoretical investigation is the~electron-monopole system at the~minimal value of quantum number $j$. The~allowed values for $j$ are $0, 1, 2,\ldots$; the~case of $j = 0$ needs a~careful separate consideratiobn. If $j=0$, then the~used symbols $D^{0}_{0,\pm 1}$ (in (1.8)) are meaningless, and the~wave function $\Psi _{\epsilon 0}(x)$ has to be constructed as $$ \Psi _{\epsilon 0} = {e^{-i\epsilon t} \over r} \left [\; T_{+1/2} \otimes \left ( \begin{array}{l} 0 \\ f_{2}(r) \\ 0 \\ f_{4}(r) \end{array} \right ) \; + \; T_{-1/2} \otimes \left ( \begin{array}{l} g_{1}(r) \\ 0 \\ g_{3}(r) \\ 0 \end{array} \right ) \; \right ] \; . \eqno(3.3) $$ Using the~required recursive relations for Wigner functions (see (1.9)) ( $\nu = {\sqrt{j(j + 1)}}, \omega={\sqrt{(j - 1)(j + 2)}}, j \neq 0$) $$ \partial _{\theta } D_{-1} = {1\over 2} (\omega D_{-2} - \nu D_{0}),\qquad {{m - \cos \theta } \over{ \sin \theta }} D_{-1} = {1\over 2} (\omega D_{-2} + \nu D_{0}), $$ $$ \partial _{\theta } D_{0} = {1\over 2} (\nu D_{-1} - \nu D_{+1}), \qquad {m \over {\sin \theta }} D_{0} = {1\over 2} (\nu D_{-1} + \nu D_{0}), $$ $$ \partial _{\theta } D_{+1} = {1\over 2} (\nu D_{0} - \omega D_{+2}), \qquad {{m + \cos \theta } \over{ \sin \theta }} D_{+1} = {1\over 2} (\nu D_{0} + \omega D_{+2}) \eqno(3.4a) $$ \noindent we find $$ \Sigma ^{S.}_{\theta ,\phi } \;\Psi ^{S.}_{jm} = \; \nu \; \left [\; T_{+1/2} \otimes \left ( \begin{array}{l} -i f_{4} \; D_{-1} \\ +i f_{3} \; D_{0} \\ +i f_{2} \; D_{-1} \\ -i f_{1}\; D_{0} \end{array} \right ) \; + \; T_{-1/2} \otimes \left ( \begin{array}{l} -i g_{4} \; D_{0} \\ +i g_{3} \; D_{+1} \\ +i g_{2} \; D_{0} \\ -i g_{1} \; D_{+1} \end{array} \right ) \;\right ] \; . \eqno(3.4b) $$ Further, let us write down the~expression for the term that mixes up the~isotopic components $$ {{e r^{2}K(r) + 1} \over r} \; (\gamma ^{1}\otimes t^{2} \; - \; \gamma ^{2} \otimes t^{1}) \; \Psi _{jm} = \;{{e r^{2}K(r) + 1 } \over 2 r} \; \times $$ $$ \left [ \; T_{+1/2} \otimes \left ( \begin{array}{l} 0 \\ +i g_{3} D_{0} \\ 0 \\ -i g_{1} D_{0} \end{array} \right ) \; + \; T_{-1/2} \otimes \left ( \begin{array}{l} -i f_{4} D_{0} \\ 0 \\ +i f_{2} D_{0} \\ 0 \end{array} \right ) \;\right ] \; . 
\eqno(3.5) $$ After a simple calculation one finds the~system of radial equations (for shortness we set $W\equiv (e\; r^{2}\; K(r)\; +\; 1)/2 \; , \; \tilde{F} \equiv e\; r \; F(r)/2 \; , \; \tilde{\Phi } \equiv \kappa\; r \; \Phi (r)/2\;$) $$ (- i {d\over dr} + \epsilon + \tilde{F} ) f_{3} - i{\nu \over r} f_{4} - ( m + \tilde{\Phi } ) f_{1} = 0 $$ $$ (+ i {d\over dr} + \epsilon + \tilde{F}) f_{4} + i{\nu \over r} f_{3} + i{W \over r} g_{3} - ( m + \tilde{\Phi }) f_{2} = 0 $$ $$ (+ i {d\over dr} + \epsilon + \tilde{F} ) f_{1} + i{\nu \over r} f_{2} - ( m + \tilde{\Phi } ) f_{3} = 0 $$ $$ (- i {d\over dr} + \epsilon + \tilde{F} ) f_{2} - i{\nu \over r} f_{1} - i{W\over r} g_{1} - ( m + \tilde{\Phi } ) f_{4} = 0 $$ $$ (- i{d\over dr} + \epsilon - \tilde{F} ) g_{3} - i{\nu \over r} g_{4} - i{W\over r} f_{4} - ( m - \tilde{\Phi } ) g_{1} = 0 $$ $$ (+ i {d\over dr} + \epsilon - \tilde{F} ) g_{4} + i{\nu \over r} g_{3} - ( m - \tilde{\Phi } ) g_{2} = 0 $$ $$ (+ i{d \over dr} + \epsilon - \tilde{F} ) g_{1} + i{\nu \over r} g_{2} + i{W\over r} f_{2} - ( m - \tilde{\Phi } ) g_{3} = 0 $$ $$ (-i{d\over dr} + \epsilon - \tilde{F} ) g_{2} - i{\nu \over r} g_{1} - ( m - \tilde{\Phi } ) g_{4} = 0 \; . \eqno(3.6) $$ \noindent When $j$ takes on value $0$ (then $\Sigma _{\theta ,\phi } \Psi _{\epsilon 0} \equiv 0$), the~radial system is $$ ( + i {d\over dr} + \epsilon + \tilde{F} ) f_{4} + i{W\over r} g_{3} - ( m + \tilde{\Phi } ) f_{2} = 0 $$ $$ ( - i {d\over dr} + \epsilon + \tilde{F} ) f_{2} - i{W\over r} g_{1} - ( m + \tilde{\Phi } ) f_{4} = 0 $$ $$ ( - i{d\over dr} + \epsilon - \tilde{F} ) g_{3} - i{W\over r} f_{4} - ( m - \tilde{\Phi } ) g_{1} = 0 $$ $$ ( + i {d\over dr} + \epsilon - \tilde{F} ) g_{1} + i{W\over r} f_{2} - ( m - \tilde{\Phi } ) g_{3} = 0 \; . \eqno(3.7) $$ \noindent Both these systems (3.6) and (3.7) are sufficiently complicated. To proceed further in a~situation like that, it is normal practice to search a~suitable operator which could be diagonalized additionally. It is known that the~usual $P$-inversion operator for a~bispinor field cannot be completely appropriate for this purpose and a~required quantity has to be constructed as a~combination of bispinor $P$-inversion operator and a~certain discrete transformation in the~isotopic space. Indeed, considering that the~usual $P$-inversion operator for a~bispinor field (in the~basis of Cartesian tetrad, it is $\hat{P}_{bisp.}^{Cart.} \otimes \hat{P} = i \gamma^{0} \otimes \hat{P}$, where $\hat{P}$ causes the~usual $P$-reflection of space coordinates) is determined in the~given (spherical) basis as $$ \hat{P}_{bisp.}^{sph.} \otimes \hat{P} = \left ( \begin{array}{rrrr} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{array} \right ) \otimes \hat{P} = - ( \gamma ^{5} \gamma ^{1} ) \otimes \hat{P} \eqno(3.8a) $$ \noindent and it acts upon the wave function $\Psi _{jm}(x)$ as follows (the factor $e^{i\epsilon t} /r$ is omitted) $$ (\hat{P}_{bisp.}^{sph.} \otimes \hat{P}) \; \Psi _{\epsilon jm}(x) = (-1)^{j+1} \left [\; T_{+1/2} \otimes \left ( \begin{array}{l} f_{4} \; D_{0} \\ f_{3} \; D_{+1} \\ f_{2} \; D_{0} \\ f_{1} \; D_{+1} \end{array} \right ) \; + \; T_{-1/2} \otimes \left ( \begin{array}{l} g_{4} \; D_{-1} \\ g_{3} \; D_{0} \\ g_{2} \; D_{-1} \\ g_{1} \; D_{0} \end{array} \right )\;\right ] \; . 
\eqno(3.8b) $$ \noindent This relationship points the~way towards the~search for a~required discrete operator: it would have the~structure $$ \hat{N}^{S.}_{sph.} \equiv \hat{\pi}^{S.} \otimes \hat{P}_{bisp.}^{sph.} \otimes \hat{P} \; , \qquad \hat{\pi}^{S.} = ( a \; \sigma^{1} \; + \; b \; \sigma^{2} ) , \qquad \hat{\pi}^{S.} \; T_{\pm 1/2} = ( a \; \pm \; i b ) \; T_{\mp 1/2}\; . \eqno(3.9) $$ \noindent The total multiplier at the~quantity $\hat{\pi }^{S.}$ is not material one for separating the~variables, below one sets $(\hat{\pi }^{S.})^{2} = ( a^{2} \; +\; b^{2} ) = + 1$. From the~equation $\hat{N}^{S.}_{sph.} \Psi _{jm} = N \Psi _{jm}$ one finds two proper values $N$ and corresponding limitation on the~functions $f_{i}(r)$ and $g_{i}(r)$: $$ N = \delta \; (-1)^{j+1} \; , \;\; \delta = \pm \; 1 : \qquad g_{1} = \delta \; (a + i b) \; f_{4} \; , \qquad g_{2} = \delta \; (a + i b) \; f_{3} \; , $$ $$ g_{3} = \delta \; (a + i b) \; f_{2} \; ,\qquad g_{4} = \delta \; (a + i b)\; f_{1} \; . \eqno(3.10a) $$ \noindent Taking into account the relations (3.10a), one produces the~equations ( $\Delta \equiv (a + i b )$ ) $$ (-i{d\over dr} + \epsilon + \tilde{F} ) f_{3} - {\nu \over r} f_{4} - ( m + \tilde{\Phi}) f_{1} = 0 $$ $$ (+i{d\over dr} + \epsilon + \tilde{F} ) f_{4} + {\nu \over r} f_{3} + i{W \over r} - \delta \Delta f_{2} - ( m + \tilde{\Phi} ) f_{2} = 0 $$ $$ (+i{d\over dr} + \epsilon + \tilde{F} ) f_{1} + {\nu \over r} f_{2} - ( m + \tilde {\Phi } ) f_{3} = 0 $$ $$ (-i{d\over dr} + \epsilon + \tilde{F} ) f_{2} - {\nu \over r} f_{1} - i{W\over r} \delta \Delta f_{4} - ( m + \tilde{\Phi } ) f_{4} = 0 $$ $$ (-i{d\over dr} + \epsilon - \tilde{F} ) f_{2} - {\nu \over r} f_{1} - i {W\over r} \Delta ^{-1} \delta f_{4}- ( m - \tilde{\Phi } ) f_{4} = 0 $$ $$ (+i{d\over dr} + \epsilon - \tilde{F} ) f_{1} + {\nu \over r} f_{2} - ( m - \tilde{\Phi } ) f_{3} = 0 $$ $$ (+i{d\over dr} + \epsilon - \tilde{F} ) f_{4} + {\nu \over r} f_{3} + i {W\over r} \Delta ^{-1} \delta f_{2}- ( m - \tilde{\Phi } ) f_{2} = 0 $$ $$ (-i{d\over dr} + \epsilon - \tilde{F} ) f_{3} - {\nu \over r} f_{4} - ( m - \tilde{\Phi } ) f_{1} = 0 \;\; . \eqno(3.10b) $$ \noindent It is evident at once that the~system (3.10b) would be compatible with itself provided that $\tilde{F}(r) = 0$ and $\tilde{\Phi }(r) = 0$. In other words, the above-mentioned operator $\hat{N}^{S.}$ can be diagonalized on the~functions $\Psi _{\epsilon jm}(x)$ if and only if $W^{(a)}_{t}= 0$ and $\kappa = 0$; below we suppose that these requirements will be satisfied. Moreover, given this limitation satisfied, it is necessary to draw distinction between two cases depending on expression for $W(r)$. If $W(r) = 0$, the~difference between $\Delta $ and $\Delta ^{-1}$ in the~equations (3.10b) is not essential in simplifying these equations (because the~relevant terms just vanish). Thus, for the first case, the~system (3.10b) converts into (the symbol $\Delta$ at $\hat{N}$ stands for the~$a$- and $b$-dependence): $$ W(r) = 0 , \qquad \hat{N}^{S.}_{\Delta } = (a \;\sigma ^{1} + b \; \sigma ^{2}) \otimes \hat{P}_{bisp.} \otimes \hat{P}\;\;: $$ $$ ( - i {d\over dr} + \epsilon ) f_{3} - {\nu \over r} f_{4} - m f_{1} = 0 \; , \qquad ( + i {d\over dr} + \epsilon ) f_{4} + {\nu \over r} f_{3} - m f_{2} = 0 \; , $$ $$ ( + i {d\over dr} + \epsilon ) f_{1} + {\nu \over r} f_{2} - m f_{3} = 0 \; , \qquad ( - i {d\over dr} + \epsilon ) f_{2} - {\nu \over r} f_{1} - m f_{4} = 0 \; . \eqno(3.11) $$ There exists sharply distinct situation at $W \neq 0$. 
Here, the~equations are consistent with each other only if $\Delta = \Delta ^{-1}$; therefore $\Delta = (a + i b) = \pm 1$. Combining this relation with the~normalizing condition $(a + i b) (a - i b) = 1$, one gets $a = \pm 1$ and $b = 0$ (for definiteness, let this parameter $a$ be equal $+1$). The~corresponding set of radial equations, obtained from (3.10b), is $$ \hat{N}^{S.}= ( \sigma ^{1} \otimes \hat{P}_{bisp} \otimes \hat{P}), \;\; N = \delta (-1)^{j +1} \;\; : $$ $$ ( -i{d\over dr} + \epsilon ) f_{3} - {\nu \over r} f_{4} - m f_{1} = 0 \; , \qquad ( +i{d\over dr} + \epsilon ) f_{4} + {\nu \over r} f_{3} + i{W\over r} \delta f_{2} - m f_{2} = 0 \; , $$ $$ ( +i {d\over dr} + \epsilon ) f_{1} + {\nu \over r} f_{2} - m f_{3} = 0 \; , \qquad (- i {d\over dr} + \epsilon ) f_{2} - {\nu \over r} f_{1} - i{W\over r} \delta f_{4} - m f_{4} = 0 \; . \eqno(3.12) $$ In the~same way, the~case $j = 0$ can be considered. Here, the~proper values and limitation are $$ N = - \; \delta ,\;\; \delta = \pm 1 \; : \qquad g_{1}(r) \; = \delta \; \Delta \; f_{4}(r) \;,\qquad g_{3}(r) \; = \delta \; \Delta \; f_{2}(r) \; . \eqno(3.13a) $$ \noindent Further, the~quantities $\tilde{F}$ and $\tilde{\Phi }$ are to be equated to zero; again there are two possibilities depending on $W$: $$ W(r) = 0 \; : \qquad ( i {d\over dr} + \epsilon ) f_{4} - m f_{2} = 0 \; , \qquad (-i {d\over dr} + \epsilon ) f_{2} - m f_{4} = 0 \; ; \eqno(3.13b) $$ $$ W(r) \neq 0 \; : \qquad ( i {d\over dr} + \epsilon ) f_{4} - ( m - i {\delta \over r} W ) f_{2} = 0 \; , \;\; (- i {d\over dr} + \epsilon ) f_{2} - ( m + i {\delta\over r} W ) f_{4} = 0 \; . \eqno(3.13c) $$ The explicit forms of the wave functions $\Psi _{\epsilon jm\delta }(x)$ and $\Psi _{\epsilon 0\delta }(x)$ are as follows: The case $\; W(r) \neq 0 , \; j > 0 \; $, $$ \Psi _{\epsilon jm}(x) = {{e^{-i\epsilon}t} \over r } \left [\; T_{+1/2} \otimes \left ( \begin{array}{l} f_{1} \; D_{-1} \\ f_{2} \; D_{0} \\ f_{3} \; D_{-1} \\ f_{4} \; D_{0} \end{array} \right ) \; + \; \delta \; T_{-1/2} \otimes \left ( \begin{array}{l} f_{4} \; D_{0} \\ f_{3} \; D_{+1} \\ f_{2} \; D_{0} \\ f_{1} \; D_{+1} \end{array} \right )\; \right ] \; ; \eqno(3.14a) $$ The case $W(r) \neq 0 , \;\; j = 0 \;$, $$ \Psi _{\epsilon 0} = {e^{-i\epsilon t} \over r} \left [\; T_{+1/2} \otimes \left ( \begin{array}{l} 0 \\ f_{2}(r) \\ 0 \\ f_{4}(r) \end{array} \right ) \; + \; \delta \; T_{-1/2} \otimes \left ( \begin{array}{l} f_{4}(r) \\ 0 \\ f_{2}(r) \\ 0 \end{array} \right ) \; \right ] \eqno(3.14b) $$ \noindent where $\delta \; T_{-1/2}$ is to be changed for $\delta \; \Delta \; T_{-1/2}$ when $W = 0$. These formulas point to the~non-featureless and non-formal union of the~one particle pattern with another (two distinct isotopic components), but a~structural and specific one; at that, the~second term in the~composite doublet wave function is strictly determined up to the~whole phase factor, by the~first term, so that this system is not a~plain sum of two components without any intrinsic structure. \subsection*{4 Analyzis of the particular case of simplest monopole field} Now, some added aspects of the~simplest monopole are examined more closely. The~system of radial equations, specified for this potential, is basically simpler than in general case, so that the~whole problem including the~radial functions can be carried out to its complete conclusion. 
Actually, the equation (3.11) admits of some further simplifications owing to diagonalyzing the operator $\hat{K}_{\theta ,\phi }= - i \gamma ^{0} \gamma ^{5} \Sigma _{\theta ,\phi }$. From the~equation $ \hat{K}_{\theta ,\phi } \Psi _{jm} = \lambda \Psi _{jm}$ , it follows that $\lambda = - \mu \; {\sqrt{j(j + 1}}), \; \mu = \pm 1$ and $$ f_{4} = \mu \; f_{1} , \qquad f_{3} = \mu \; f_{2} ,\qquad g_{4} = \mu \; g_{1} , \qquad g_{3} = \mu \; g_{2} \; . \eqno(4.1) $$ \noindent Correspondingly, the system (3.11) yields $$ (+ i {d\over dr} + \epsilon ) f_{1} + i {\nu \over r} f _{2} - \mu \; m \; f_{2} = 0 \; , \qquad (- i {d\over dr} + \epsilon ) f_{2} - i {\nu \over r} f _{1} - \mu \; m \; f_{1} = 0 \; . \eqno(4.2a) $$ \noindent The wave function with quantum numbers $(\epsilon , j, m, \delta , \mu )$ has the~form $$ \Psi _{\epsilon jm\delta\mu}^{\Delta}(x) = {{e^{-i\epsilon}t} \over r }\; \left [\; T_{+1/2} \otimes \left ( \begin{array}{l} f_{1} \; D_{-1} \\ f_{2} \; D_{0} \\ \mu f_{3} \; D_{-1} \\ \mu f_{4} \; D_{0} \end{array} \right ) \; + \; \Delta \; \mu\; \delta\; T_{-1/2} \otimes \left ( \begin{array}{l} f_{4} \; D_{0} \\ f_{3} \; D_{+1} \\ \mu f_{2} \; D_{0} \\ \mu f_{1} \; D_{+1} \end{array} \right ) \; \right ] \; . \eqno(4.2b) $$ We will not consider these systems of two radial equations; this would represent an~easy problem concerning the~well-known spherical Bessel functions. Instead, we relate these functions (4.2b) (also $\Psi _{\epsilon 0\delta }(x)$) with the~wave functions satisfying the~Dirac equation in the~Abelian monopole potential. Those latter were investigated by many authors; below we will use the~notation according to [67]). At $ j > j_{\min }$ these Abelian functions are described as in (1.5) with taking into account the~additional relation $$ f_{4} = \mu \; f_{1} , \qquad f_{3} = \mu \; f_{2} , \qquad \mu = \pm 1 \; . $$ \noindent For the minimal values $\;j = j_{min.} = \mid eg\mid -1/2 $, they are\footnote{Just these functions can be referred to the solutions of third type in terminology used by Kazama, Yang, and Goldhaber; see in [25].}: $$ eg = + 1/2, + 1, + 3/2, ... \qquad \Phi ^{(eg)}_{\epsilon 0} (t,r,\theta ,\phi )= \left ( \begin{array}{l} f_{1}(t,r) \; D^{j}_{-m,-1/2}(\phi ,\theta ,0) \\ 0 \\ f_{3}(t,r) \; D^{j}_{-m,-1/2}(\phi ,\theta ,0) \\ 0 \end{array} \right ) \; ; \eqno(4.3a) $$ $$ eg = - 1/2, - 1, - 3/2,...\qquad \Phi ^{(eg)}_{\epsilon 0}(t,r,\theta ,\phi ) = \left ( \begin{array}{l} 0 \\ f_{2}(t,r) \; D^{j}_{-m,+1/2}(\phi ,\theta ,0) \\ 0 \\ f_{4}(t,r) \; D^{j}_{-m,+1/2}(\phi ,\theta ,0) \end{array} \right ) . \eqno(4.3b) $$ \noindent On comparing the formulas (3.14a,b) with (1.5) and ((4.3a,b), the~following expansions can be easily found (respectively, for $j > 0$ and $j=0$ cases): $$ \Psi ^{\Delta \delta \mu }_{\epsilon jm}(x) \; = \; \left [ \; T_{+1/2} \otimes \Phi ^{eg=-1/2}_{\epsilon jm\mu }(x) \;\; + \;\; \mu \; \delta \; \Delta\; T_{-1/2} \otimes \Phi ^{eg =+1/2}_{\epsilon jm\mu }(x)\;\right ] \; , \eqno(4.4a) $$ $$ \Psi ^{\Delta }_{\epsilon 0\delta } (x) \;= \; \left [ \; T_{+1/2} \otimes \Phi ^{eg =-1/2}_{\epsilon 0}(x) \; \; + \;\; \delta \; \Delta \;\; T_{-1/2} \otimes \Phi ^{eg =+1/2}_{\epsilon 0}(x) \;\right ]\; . \eqno(4.4b) $$ \noindent In reference with the formulas (4.4a,b), one additional remark should be given. 
Though, as evidenced by (4.4a,b), definite close relationships between the~non-Abelian doublet wave functions and the~Abelian fermion-monopole functions can be explicitly discerned, in reality the~non-Abelian situation is intrinsically non-monopole-like (non-singular). Indeed, in the~non-Abelian case the~totality of possible transformations (upon the~relevant wave functions) which have gauge status is very different from that of the~purely Abelian theory. As a~consequence, the~non-Abelian fermion doublet wave functions (1.8) can readily be transformed, by carrying out the~gauge transformations in the Lorentzian and isotopic spaces together ($S. \; \rightarrow \; C.\;$ and $\;sph. \; \rightarrow \; Cart.$), into a~form (see the formulas (A.9) and (A.10)) in which they are single-valued functions of spatial points. In the~Abelian monopole situation, the~analogous particle-monopole functions can by no means be translated into single-valued ones (see also Supplement A). \subsection*{5 On the distinction between manifestations of the Abelian and non-Abelian monopoles. Some comments about parity selection rules} The problem that we have discussed so far concerned solely an~isotopic doublet affected by the~external monopole field. Let us now pass on to a~somewhat different question, namely, what in everything stated above was dictated by the~presence of the~non-Abelian external field, and what was fixed only by the~isotopic multiplet structure. To this end, it suffices to compare the~doublet-monopole system with a~free doublet. The free wave equation is as follows: $$ \left [\;i \gamma ^{0} \partial _{t} \; + \; i \;\gamma ^{3} ( \partial _{r} \; + \; {1\over r} ) + {{\gamma ^{1}\otimes t^{2} \; - \; \gamma ^{2}\otimes t^{1}} \over r} \;\; +\right. $$ $$ \left. {1\over r} \; (\; i \gamma ^{1} \; + \; \gamma ^{2} \; {{i\partial _{\phi } \; +\; (i\sigma ^{12} \;+\; t^{3}) \cos \theta} \over \sin\theta }\; ) \; - \; m\; \right ]\; \Psi ^{S.}_{sph.} = 0 \; . \eqno(5.1) $$ \noindent We draw attention to the~term $(\gamma ^{1} \otimes t^{2} - \gamma ^{2} \otimes t^{1})/r$ mixing both isotopic components, which somewhat evens out any contrast between these two physical systems, so that everything said above concerning the~case $W \neq 0$ is valid for this particular situation too\footnote{Correspondingly, no $A$-freedom in choosing the~composite inversion-like operator occurs in the case of a free fermion doublet.}. The single distinction is the~explicit form of the~factor at the $( \gamma ^{1} \otimes t^{2} - \gamma ^{2} \otimes t^{1}) /r $-term. The presence of such a~mixing term in the~equation referring to the~free doublet might seem a rather surprising fact. Nevertheless, as can easily be shown, its origin is due to a~gauge transformation. Actually, the~corresponding free equation in the~Cartesian isotopic gauge (compare it with (1.1)) $$ [\; i\; \gamma ^{\alpha }(x) (\partial _{\alpha } \; + \; \Gamma _{\alpha }(x)) \otimes I \; - \; m \;] \Psi ^{0}_{C.} = 0 \eqno(5.2a) $$ \noindent takes on the following explicit form in the basis of the spherical tetrad: $$ \left [ \;i \gamma ^{0} \partial _{t} \; + \; i \gamma ^{3} \; (\partial _{r} \; + \; {1 \over r} ) \; + \; {1\over r} \; (\; i \gamma ^{1}\; +\; \gamma ^{2}\; {{ i\partial _{\phi }\; + \; i\sigma ^{12}\cos \theta } \over \sin \theta }\; ) \; -\; m \;\right ]\; \Psi ^{0}_{C.} = 0 \; .
\eqno(5.2b) $$ \noindent Applying the~isotopic gauge transformation to $\Psi ^{0}_{C.}$ (see (2.12d)): $\Psi ^{0}_{S.}(x) \; = \; B(\theta, \phi)\; \Psi ^{0}_{C.}(x)$, one can bring the equation (5.2b) to the~form $$ \left [ \;i \gamma ^{\alpha }(x) \; ( \partial _{\alpha } \; + \; \Gamma _{\alpha }(x) ) \otimes I\; + \; i\gamma ^{\alpha }(x) \otimes ( B \; {{\partial B^{-1} } \over {\partial x^{\alpha }}} ) \; - \; m \;\right ] \; \Psi ^{0}_{S.} = 0 \eqno(5.3a) $$ \noindent where $$ i\; \gamma ^{\alpha }(x) \otimes ( B \; {{ \partial B^{-1} } \over { \partial x^{\alpha }}} )\; = \; {1\over r} (\gamma ^{1}\otimes t ^{2} - \gamma ^{2}\otimes t ^{1}) + \gamma ^{2} \otimes {{t_{3} \cos \theta} \over {r \sin \theta }} \; . \eqno(5.3b) $$ \noindent The first term in (5.3b) will mix the isotopic components, the second represents a~term being an~essential addition to angular operator $\Sigma _{\theta ,\phi }$; and both of them have arisen out of the~above gauge transformation. Their correlated appearing may be regarded as a~formal mathematical description of efficient linking the radial functions through the~kinematical coupling of two isotopic components by diagonalization of the~total angular momentum operators. In this context, the~above-mentioned simplification of radial equations at $W = 0$ can be interpreted as follows: an efficient cinematical mixing (owing to the ordinary scheme of angular momentum addition) the different isotopic components is destroyed through simple placing that system into the external trivial monopole field, so that the~angular coupling is conserved but the~efficient linking through radial functions no longer obtains. In other words, these two factors cancel out each other. It is significant that in both cases, the~wave functions obeying the~free equation and the~equation with external monopole potentials, respectively, do not vary at all in their $\theta,\phi$-dependence. A~single manifestation of the~external monopole field is the~change in the~single parametric function $W(r)$: the quantity $W^{0}(r) = 1$ is to be replaced with another $W(r) = ( 1 + e r^{2} K(r) )$. The~above correlates with the~fact that the~operators of spherical symmetry $\vec{J}~^{2}, J_{3}, \hat{N}$ of these two different physical systems exactly coincide. Totally different from this is the~situation in the~Abelian problem when the~spherical symmetry operators and wave functions are both basically transformed (see (1.3) and (1.4)) in presence of the~Abelian monopole. The~free basic wave functions (setting $eg = 0$ in (1.3)) $\Phi ^{0}_{\epsilon JM\delta }(t,r,\theta ,\phi )$ and the~monopole ones $\Phi ^{eg}_{\epsilon jm\mu }(t,r,\theta ,\phi )$ vary noticeably in their boundary properties at $\theta = 0, \pi$. Let us consider this question in some more detail. To clarify all the~significance of the~mere displacement in a~single index at the wave functions in (1.5), we are going to look at just one mathematical characteristic of those $D$-functions involved in the~particle wave functions: namely, their boundary properties at the~points $\theta = 0$ and $\theta = \pi$. So, the following Tables can be produced (only some of them are written out): Table $1a \qquad D^{j}_{m,+1/2}\;$: $$ \left. \begin{array}{llcc} & & \qquad \theta = 0 & \theta = \pi \\ j=1/2 & & & \\ & m=-1/2 & \qquad 0 & e^{+i\phi/2} \\ & m=+1/2 & \qquad e^{-i\phi/2} & 0 \\ j=3/2 & & & \\ & m=-1/2 & \qquad 0 & e^{+i\phi/2} \\ & m=+1/2 & \qquad e^{-i\phi/2} & 0 \\ & m=-3/2 & \qquad 0 & 0 \\ & m=+3/2 & \qquad 0 & 0 \\ j=5/2,... 
& & & \end{array} \right. $$ Table $1b \qquad D^{j}_{m,-1/2} \;$: $$ \left. \begin{array}{llcc} & & \qquad \theta = 0 & \theta = \pi \\ \\ j=1/2 & & & \\ & m=-1/2 & \qquad e^{+i\phi/2} & 0 \\ & m=+1/2 & \qquad 0 & e^{-i\phi/2} \\ j=3/2 & & & \\ & m=-1/2 & \qquad e^{+i\phi/2} & 0 \\ & m=+1/2 & \qquad 0 & e^{-i\phi/2} \\ & m=-3/2 & \qquad 0 & 0 \\ & m=+3/2 & \qquad 0 & 0 \\ j=5/2,... & & & \end{array} \right. $$ Table $2a \qquad D^{j}_{m,+1}\;$: $$ \left. \begin{array}{llcc} & & \qquad \theta = 0 & \theta = \pi \\ j=1 & & & \\ & m=0 & \qquad 0 & 0 \\ & m=-1 & \qquad 0 & e^{+i\phi} \\ & m=+1 & \qquad e^{-i\phi} & 0 \\ j=2 & & & \\ & m=0 & \qquad 0 & 0 \\ & m=-1 & \qquad 0 & e^{+i\phi} \\ & m=+1 & \qquad e^{-i\phi} & 0 \\ & m=-2 & \qquad 0 & 0 \\ & m=+2 & \qquad 0 & 0 \\ j=3,... & & & \end{array} \right. $$ Table $2b \qquad D^{j}_{m,-1} \;$: $$ \left. \begin{array}{llcc} & & \qquad \theta = 0 & \theta = \pi \\ \\ j=1 & & & \\ & m=0 & \qquad 0 & 0 \\ & m=-1 & \qquad e^{+i\phi} & 0 \\ & m=+1 & \qquad 0 & e^{-i\phi} \\ j=2 & & & \\ & m=0 & \qquad 0 & 0 \\ & m=-1 & \qquad e^{+i\phi} & 0 \\ & m=+1 & \qquad 0 & e^{-i\phi } \\ & m=-2 & \qquad 0 & 0 \\ & m=+2 & \qquad 0 & 0 \\ j=3,... & & & \end{array} \right. $$ Table $3a \qquad D^{j}_{m,+3/2}\;$: $$ \left. \begin{array}{llcc} & & \qquad \theta = 0 & \theta = \pi \\ j=3/2 & & & \\ & m=-1/2 & \qquad 0 & 0 \\ & m=+1/2 & \qquad 0 & 0 \\ & m=-3/2 & \qquad 0 & e^{+i3\phi/2} \\ & m=+3/2 & \qquad e^{-i3\phi/2} & 0 \\ j=5/2 & & & \\ & m=-1/2 & \qquad 0 & 0 \\ & m=+1/2 & \qquad 0 & 0 \\ & m=-3/2 & \qquad 0 & e^{+i3\phi/2} \\ & m=+3/2 & \qquad e^{-i3\phi/2} & 0 \\ & m=-5/2 & \qquad 0 & 0 \\ & m=+5/2 & \qquad 0 & 0 \\ j=7/2,... & & & \end{array} \right. $$ Table $3b \qquad D^{j}_{m,-3/2} \;$: $$ \left. \begin{array}{llcc} & & \qquad \theta = 0 & \theta = \pi \\ \\ j=3/2 & & & \\ & m=-1/2 & \qquad 0 & 0 \\ & m=+1/2 & \qquad 0 & 0 \\ & m=-3/2 & \qquad e^{+i3\phi/2} & 0 \\ & m=+3/2 & \qquad 0 & e^{-i3\phi/2} \\ j=5/2 & & & \\ & m=-1/2 & \qquad 0 & 0 \\ & m=+1/2 & \qquad 0 & 0 \\ & m=-3/2 & \qquad e^{+i3\phi/2} & 0 \\ & m=+3/2 & \qquad 0 & e^{-i3\phi/2} \\ & m=-5/2 & \qquad 0 & 0 \\ & m=+5/2 & \qquad 0 & 0 \\ j=7/2,... & & & \end{array} \right. $$ \noindent On comparing such characteristics for $D^{j}_{-m, \pm1/2}(\phi, \theta,0)$ and $D^{j}_{-m, eg \pm1/2}(\phi, \theta,0)$ , we can immediately conclude that these sets of $D$-functions provide us with the~bases in different functional spaces $\{ F^{eg=0}(\theta,\phi) \}$ and $\{ F^{eg \neq 0}(\theta,\phi) \}$ (all various values of the~parameter $eg$ lead to different functional spaces as well). Each of those spaces is characterized by its own behavior at limiting points, which is irreconcilable with that of any other space. These peculiarities are rather crucial on their implications. For example, that difference leads to some obvious problems referring to the~basic superposition principle of quantum mechanics. Indeed, it is understandable that any possible series decompositions of particle wave functions $\Phi^{eg\neq0}$ by $\Phi^{eg =0}$ and inversely: $$ \Phi ^{eg}_{\epsilon jm\mu }(t,r,\theta ,\phi ) \; = \; \Sigma \;C^{\epsilon JM\delta }_{\epsilon jm\mu }\; \Phi ^{0}_{\epsilon JM\phi }(t,r,\theta ,\phi ) \; , $$ $$ \Phi ^{0}_{\epsilon JM\delta }(t,r,\theta ,\phi ) \; = \; \Sigma \; C^{\epsilon jn\mu }_{\epsilon JM\delta } \; \Phi ^{eg}_{\epsilon jm\mu }(t,r,\theta ,\phi ) $$ \noindent cannot be correct at the~whole $x_{3}$-axis. 
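The pattern displayed in the~Tables, and hence the~above conclusion, is an immediate consequence of the~elementary boundary behaviour of the~$D$-functions; it may be useful to recall it explicitly (in the~convention adopted here, and with the~$j$-dependent sign factors at $\theta = \pi$ omitted, as in the~Tables): $$ D^{j}_{m,m'}(\phi ,\theta ,0) = e^{-im\phi } \; d^{j}_{m m'}(\theta ) \; , \qquad d^{j}_{m m'}(0) = \delta _{m m'} \; , \qquad d^{j}_{m m'}(\pi ) = \pm \; \delta _{m, -m'} \; , $$ \noindent so that a~non-vanishing boundary value occurs at $\theta = 0$ only for $m = m'$ and at $\theta = \pi$ only for $m = -m'$; a~shift of the~second index by $eg$ therefore displaces the~non-vanishing boundary entries to other values of $m$, which is precisely the~source of the~mismatch between the~spaces $\{ F^{eg=0}(\theta,\phi) \}$ and $\{ F^{eg \neq 0}(\theta,\phi) \}$.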
This circumstance leads to a~very interesting, if not serious, question as to the~physical status of the~monopole potential. The point is that conventional (one-particle-based) quantum mechanics tacitly presupposes that any quantum object remains intrinsically the~same identified entity when this object is placed into an~arbitrary external field. For example, an~isolated free electron and an~electron in the~Coulomb field are represented by the~same single entity, {\it{electron}}, just placed in two different conditions. Mathematically, this tacit assumption is expressed as the~possibility to exploit together the~fundamental superposition principle and the~presupposed identity of the~relevant Hilbert spaces: $\{\Phi^{free}\} \equiv \{\Phi^{ext. field}\}$. So, we can always obtain expansions of any wave function of the~type $\Phi^{ext. field}$ in terms of the functions $\Phi^{free}$, or inversely\footnote{For instance, there exists the~momentum representation $\Phi(p)$ for the~Coulomb wave functions, which may be considered just as an~illustration of the~above.}. It is easily understandable that the limitations imposed by this condition on external fields are quite restrictive, yet it can be shown that many commonly used potentials, referring to real sources, satisfy them. In contrast, the~presupposed existence of a~magnetic charge is evidently inconsistent with this basic proposition of the~theory. Indeed, on the~whole $x_{3}$-axis, some of the particle-monopole functions cannot, in principle, be represented as linear combinations of free particle functions: the latter set $\{ \Phi^{eg=0}(x_{3},0,0) \}$ does not contain the functions required to describe some representatives of $\{ \Phi^{eg \neq 0}(x_{3},0,0) \}$ (see Tables 1-3 and Supplement A). Furthermore, one might regard the~above criterion as, in a~sense, a~superselection principle yielding a~definite separation of all mathematically possible potentials into two classes: real and not real ones (respectively, not changing and changing the relevant Hilbert spaces). In such terms, the~Abelian monopole potential should be thought of as unphysical and forbidden, whereas the~non-Abelian potential may be regarded as quite allowable. In this connection, some more remarks might be added. As a~matter of fact, the~status of monopoles in physics is, in general, rather peculiar. Indeed, a~modern age in understanding electricity and magnetism (EM) was ushered in by Dirac [3], who argued on quantum-mechanical grounds that the electric charge $e$ must be quantized if an~isolated magnetic charge $g$ exists, thereby giving an~appreciably significant piece of EM essentials, now known as Dirac's (electric charge) quantization condition. Certainly, the~electric charge is quantized. However, from the~very beginning this Dirac condition might have been interpreted a~bit differently: one could just as well say that a~newly introduced quantity (the magnetic charge) must be quantized if the~electric charge exists. In any case, the~problem is just one: no experimental observation of a~magnetic charge, unfortunately or maybe luckily, has been registered to date. Therefore, at the~present time we have to count solely on the~theoretical investigation of those constructs.
Furthermore, the~consistently invisible character of a~magnetic charge, at least as far as experiments are concerned, by itself gives grounds for specially seeking some rationalization of such a~strange, if not enigmatic, persistent disinclination of the~monopoles to be seen experimentally. Even more, it may be time to search for some possible formally conceptualized grounds for forbidding, in principle, this charge from existing in nature. Some provisional steps in this direction might be made already today. In reference to this, one more peculiarity of the~above property of the~particle-monopole functions deserves special notice: the~irreconcilable character of the monopole-affected and free functions might be interpreted in physical terms as the~fundamental impossibility of eliminating the~monopole influence on the~particle wave functions over all space, up to infinitely distant points. This property entails a~lot of hampering consequences, in turn giving rise to some awkward questions. For instance, a~question of that kind is: what is the~meaning of the~relevant scattering theory if, even at infinity itself, some manifestation of the~magnetic charge presence does not vanish (just because of the~given $\theta,\phi$-dependence)? That point finds its natural corollary in the~well-known difficulties of the~relevant scattering theory [17-19,86-94]: the particle-monopole functions, regarded at asymptotic infinity (far away from the~region $r=0$), exhibit a kind of behavior that is not appropriate for a~free quantum-mechanical particle. So, we cannot get rid of the~Abelian monopole effects even at infinitely distant points, and such a~property is far removed from what is familiar in less singular situations (for instance, in the presence of an external electric charge). Now, as a~way of further contrasting the~Abelian and non-Abelian models, we pass on to another subject and consider the~question of discrete symmetry (restricting ourselves to $P$-transformation-involving operations) in both these theories (see also [1,11-16,76-85]). In the Abelian case (when $eg \neq 0$), the~monopole wave functions cannot be proper functions of the~usual space reflection operator for a~particle (for definiteness, the~bispinor field case is meant). There exists only the~following relationship (at $ j >j_{min.}$): $$ (\hat{P}_{bisp.} \otimes \hat{P} ) \; \Phi ^{eg}_{\epsilon jm\mu }(x) = {{e^{-i\epsilon t} } \over r } \; \mu \; (-1)^{j+1} \; \left ( \begin{array}{r} f_{1}(r) \; D^{j}_{-m,-eg -1/2} \\ f_{2}(r) \; D^{j}_{-m,-eg+1/2} \\ \mu f_{2}(r) \; D^{j}_{-m,-eg -1/2} \\ \mu f_{1}(r) \; D^{j}_{-m,-eg+1/2} \end{array} \right ) \eqno(5.4a) $$ \noindent note the `minus' sign at the~parameter $eg$. By contrast, for free wave functions the~required relation does hold: $$ (\hat{P}_{bisp.} \otimes \hat{P} ) \; \Phi ^{0}_{\epsilon JM\delta }(x) = \; \delta \; (-1)^{J+1} \; \Phi ^{0}_{\epsilon JM\delta }(x) \; . \eqno(5.4b) $$ \noindent It should be emphasized that a~certain discrete operator, diagonalized on the~functions $\Phi ^{eg}_{\epsilon jm\mu }$, can be obtained by multiplying the~usual $P$-inversion bispinor operator by the~formal operator $\hat{\pi }_{Abel.}$ that affects the~$eg$-parameter in the~wave functions as follows: $\hat{\pi}_{Abel.} \; \Phi ^{+eg}_{\epsilon jm\mu }(x) = \Phi ^{-eg}_{\epsilon jm\mu }(x)\;$.
Thus, we have $$ \hat{M} = \hat{\pi}_{Abel.} \otimes \hat{P}_{bisp.} \otimes \hat{P} \; , \qquad \hat{M} \; \Phi ^{eg}_{\epsilon jm\mu }(x) = \mu \; (-1)^{j+1} \; \Phi ^{eg}_{\epsilon jm\mu }(x) \eqno(5.4c) $$ \noindent but, as will be seen, the~latter fact does not allow us to obtain any $M$-parity selection rules. Actually, a~matrix element of some physical observable $\hat{G}^{0}(x)$ has the form $$ \int \bar{\Phi}^{eg}_{\epsilon jm\mu }(x) \; \hat{G}^{0}(x) \; \Phi ^{eg}_{\epsilon j'm'\mu'}(x) dV \equiv \int r^{2} dr \int f(\vec{x}) \; d \Omega \;\; . $$ \noindent First we examine the case $e g = 0$, in order to compare it with the~situation at $eg \neq 0$. Let us relate $f(-\vec{x})$ to $f(\vec{x})$. Using the~relation (and the~analogous one with $J'M'\delta '$) $$ \Phi ^{0}_{\epsilon JM\delta }(-\vec{x} ) = \delta \; (-1)^{J+1} \; \hat{P}_{bisp.} \; \Phi ^{0}_{\epsilon JM\delta }(\vec{x} ) \eqno(5.5a) $$ \noindent we get $$ f(-\vec{x}) = \; \delta \; \delta'\; (-1)^{J+J'+1} \; \bar{\Phi}_{\epsilon JM\delta }(\vec{x}) \; \left [\; \hat{P}^{+}_{bisp.} \; \hat{G}^{0}(-\vec{x}) \; \hat{P}_{bisp.}\; \right ] \; \Phi _{\epsilon J'M' \delta'}(\vec{x}) \; . \eqno(5.5b) $$ \noindent Thus, if the quantity $\hat{G}^{0}(\vec{x})$ obeys the~equation $$ \hat{P}^{+}_{bisp.} \; \hat{G}^{0}(-\vec{x}) \; \hat{P}_{bisp.} = \; \omega ^{0} \; \hat{G}^{0}(\vec{x}) \eqno(5.5c) $$ \noindent where $\omega ^{0}$, defined to be $+1$ or $-1$, corresponds to a scalar and a pseudoscalar, respectively, then the~relationship (5.5b) becomes $$ f(-\vec{x}) = \; \omega ^{0} \; \delta\; \delta' \; (-1)^{J+J'+1} \; f(\vec{x}) \eqno(5.5d) $$ \noindent which generates the well-known $P$-parity selection rules. In contrast to everything just said, the~situation at $eg \neq 0$ is completely different, because no equality of the form (5.5a) or (5.5b) holds there; instead, only the~relations (5.4a) and (1.12b) occur. So, no $M$-parity selection rules exist in the~presence of the~Abelian monopole. In accordance with this, for instance, the~expectation value of the~usual operator of space coordinates $\vec{x}$ need not equal zero, and indeed it does not (see, for example, [79-85]). Now, let us return to the~non-Abelian problem, where a relationship of the~required form does exist: $$ \Psi_{\epsilon jm\delta }(-\vec{x} ) = ( \sigma ^{2} \otimes \hat{P}_{bisp.}) \; \delta \; (-1)^{j+1} \; \Psi _{\epsilon jm\delta }(\vec{x} ) \eqno(5.6a) $$ \noindent owing to the $N$-reflection symmetry; so that $$ f(-\vec{x} ) = \; \delta \; \delta' \; (-1)^{j+j'} \; \bar{\Psi}_{\epsilon jm\delta }(x) \; \left [\; (\sigma ^{2} \otimes \hat{P}_{bisp.}^{+} ) \; \hat{G}(-\vec{x}) \; (\sigma ^{2} \otimes \hat{P}_{bisp.} )\; \right ] \; \Psi _{\epsilon j'm'\delta'}(\vec{x}) \; . \eqno(5.6b) $$ \noindent If a~certain quantity $\hat{G}(\vec{x})$, depending also on the isotopic coordinates, $$ \hat{G}(\vec{x}) = \pmatrix{\hat{g}_{11}(\vec{x}) & \hat{g}_{12}(\vec{x}) \cr \hat{g}_{21}(\vec{x}) & \hat{g}_{22}(\vec{x})} \otimes \hat{G}^{0}(\vec{x}) \; , $$ \noindent obeys the~following structural condition (which is the~definition of the~composite scalar and pseudoscalar) $$ (\sigma ^{2} \otimes \hat{P}_{bisp.}^{+}) \; \hat{G}(-\vec{x})\; (\sigma ^{2} \otimes \hat{P}_{bisp.}) = \; \Omega \; \hat{G}(\vec{x}) \eqno(5.6c) $$ \noindent where $\Omega $ is defined to be $+1$ or $-1$, then the~relationship (5.6b) converts into $$ f(-\vec{x}) = \; \Omega \; \delta \; \delta' \; (-1)^{j+j'} \; f(\vec{x}) \eqno(5.6d) $$ \noindent which results in the evident $N$-parity selection rules.
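It may be worth recording explicitly the~elementary step behind these rules for a~diagonal matrix element ($j = j'$, $\delta = \delta '$) of a~composite pseudoscalar ($\Omega = -1$): since $\delta \, \delta ' = 1$ and the~values of $j$ are integer for the~doublet, the~relationship (5.6d) gives $$ f(-\vec{x}) \; = \; - \; f(\vec{x}) \; , \qquad \int f(\vec{x}) \; d\Omega \; = \; 0 \; , $$ \noindent so that the~corresponding expectation value vanishes identically.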
For instance, applying these rules to $\hat{G}(\vec{x}) \equiv \vec{x}$, we found out $ <\Psi _{\epsilon jm \delta }(x) \mid \vec{x} \mid \Psi _{\epsilon jm \delta }(x) > \;\; \sim \; [\; 1 \; - \; \delta ^{2} \; (-1)^{2j}\; ] \; \equiv \; 0 \; . $ \subsection*{6. Some additional remarks on $\hat{N}_{A}$ operator in Cartesian gauge} Now we proceed further with studying the~reflection symmetry for the~fermion-monopole system and consider the~question of explicit form of the~discrete operator $\hat{N}_{\Delta }$ in several other gauges. All our calculations so far (in sections 3-5) have been tied with the~Schwinger isotopic frame, now let us turn to the~unitary Dirac and the~Cartesian (both isotopic) gauges. Simple calculations result in $$ \hat{\pi }^{D.}_{\Delta } \; = \; \pmatrix{ 0 & -i\; ( a - i \; b)\; e^{-i \phi } \cr + i \;( a + i\; b) \; e^{+i\phi } & 0 } \; , \eqno(6.1a) $$ $$ \hat{\pi }^{C.}_{\Delta } = \left ( \begin{array}{cc} (-i \;\; {\Delta + \Delta^{-1} \over 2} \; + \; i \; {\Delta - \Delta^{-1} \over 2}\cos\theta)& \; + \; i \sin\theta\; e^{-i\phi} \; {\Delta - \Delta^{-1} \over 2} \\ + \; i\; \sin\theta\; e^{+i\phi} \; {\Delta - \Delta^{-1} \over 2} & (- i \;\; {\Delta + \Delta^{-1} \over 2} \; - \; i{\Delta - \Delta^{-1} \over 2} \; \cos\theta) \end{array} \right ) \eqno(6.1b) $$ \noindent and the~corresponding wave functions $$ \Phi ^{D.\Delta}_{\epsilon jm\delta } = \; {e^{- i \epsilon t} \over r} \; \left [\; e^{+i\phi /2} \; T_{+1/2} \otimes F \; + \; \Delta \; \delta \; e^{-i\phi /2} \; T_{-1/2} \otimes G \;\right ] \; , \eqno(6.2a) $$ $$ \Psi ^{C.\Delta}_{\epsilon jm\delta } (x) = \; {e^{- i \epsilon t} \over r} \; \left [ \; \pmatrix{ \cos\theta /2 & e^{-i\phi /2} \cr \sin\theta /2 & e^{+i\phi /2}} \otimes F\; + \; \; \Delta \; \delta \; \pmatrix{ \- \sin\theta /2 & e^{-i\phi /2} \cr \cos\theta /2 & e^{+i\phi /2}} \otimes G \;\right ] \; . \eqno(6.2b) $$ \noindent It is convenient for our further work to rewrite the expression (6.1b) for the~matrix $\hat{\pi }^{C.}_{\Delta }$ in the~form $$ \hat{\pi }^{C.}_{\Delta } = \left [\; -i \; {\Delta + \Delta ^{-1} \over 2} \; + \; i \; {\Delta - \Delta ^{-1} \over 2} \; ( \vec{\sigma } \; \vec{n}_{\theta ,\phi } )\;\right ] \; . \eqno(6.3) $$ \noindent Setting $\Delta = 1$, it follows from (6.3) that $\hat{\pi }^{C.}_{\Delta } = - i\; I$. Therefore, the~above $N$-reflection operator in the~Cartesian gauge takes on the form (at $\Delta = 1$) $$ \hat{N}^{C.} = \; ( - i \; I) \otimes \hat{P}_{bisp.} \otimes \hat{P} \eqno(6.4) $$ \noindent thus, the $\hat{N}^{C.}$ does not involve any transformation on the~isotopic coordinates. In other words, the~ordinary $P$-reflection operator for a~bispinor field can be diagonalized upon the~composite (doublet) wave functions. But this fact is not of primary or conceptualizable importance; mainly because it is not gauge invariant. Therefore, relying on this relationship, we cannot come to the~conclusion that non-Abelian problem of monopole discrete symmetry amounts to the~Abelian (monopole free) problem of discrete symmetry. Furthermore, that non-Abelian theory's discrete symmetry features have no relationship to the~Abelian monopole case. The clue to understanding this is that the~Abelian fermion-monopole wave functions $F(x)$ and $G(x)$ (see in (6.2b)) are represented in the~non-Abelian functions $\Psi ^{C.\Delta }_{\epsilon jm\delta }(x)$ only as constructing elements $$ \Psi ^{C. 
\Delta }_{\epsilon jm\delta } (x) = \; {e^{-i\epsilon t}\over r}\; \left [\; T_{+1/2} \otimes \left (\; \cos \theta /2\; e^{-i\phi /2} \;F(x) \; + \; \; \Delta \; \delta \;\sin \theta /2 \;e^{-i\phi /2} G(x)\; \right )\;\; + \right. $$ $$ \left. T_{-1/2} \otimes \left ( \;\sin \theta /2 \;e^{+i\phi /2}\; F(x)\; + \; \Delta \; \delta \; \cos \theta /2 \; e^{+i\phi /2} \;G(x)\; \right ) \;\right ] \eqno(6.5) $$ \noindent but the~multiplying functions at $( T_{+1/2} \otimes)$ and $(T_{-1/2} \otimes )$ , in themselves, cannot be obtained by any $U(1)$-gauge transformation from the~real Abelian particle-monopole functions $F(x)$ and $G(x)$, and we should set a~higher value on this than on the~form of $\hat{N}^{C.}$ in (6.4). In other words, at the~price of the~gauge transformation used above ( $S. \rightarrow C.$), we only have carried the non-null action upon isotopic coordinates (generated by $N^{S.}$-inversion) into a~null action upon these coordinates and a~concomitant vanishing of all individual Abelian-like qualities (belonging solely to the $F(x)$ and $G(x)$). The relation (6.5) is interesting from another standpoint: it is convenient to produce some factorizations of the~doublet-fermion functions by the~Abelian fermion functions and the~isotopic vectors $T_{\pm 1/2}$. Indeed, taking into account the~known recursive relations [66] $$ \cos{\beta \over 2} \; e^{i(\alpha + \gamma)/2} \; D ^{j}_{m+1/2,m'+1/2}(\alpha,\beta, \gamma) \; = { \sqrt{(j+m+1/2)(j+m'+1/2)} \over 2j+1} \; D^{j-1/2}_{m,m'}(\alpha,\beta, \gamma)\; + \; $$ $$ { \sqrt{(j-m+1/2)(j-m'+1/2)} \over 2j+1} \; D^{j+1/2}_{m,m'}(\alpha,\beta, \gamma) \; ; $$ $$ \sin{\beta \over 2} \; e^{i(\alpha - \gamma)/2} D ^{j}_{m+1/2,m'-1/2}(\alpha,\beta, \gamma) = -\; { \sqrt{(j+m+1/2)(j-m'+1/2)} \over 2j+1} D^{j-1/2}_{m,m'}(\alpha,\beta, \gamma)\; + \; $$ $$ { \sqrt{(j-m+1/2)(j+m'+1/2)} \over 2j+1} \; D^{j+1/2}_{m,m'}(\alpha,\beta, \gamma) \; $$ \noindent the~representation (6.5) can be transformed into the~form: $$ \Psi ^{C. \Delta }_{\epsilon jm\delta } (x) = \; {e^{-i\epsilon t}\over r}\; \times $$ $$ \left [\; \; T_{+1/2} \otimes \; \; {\sqrt{j+m} \over 2j+1} \; \left ( \begin{array}{r} (\; \sqrt{j+1} \; f_{1} \; + \; \delta \; \Delta \; \sqrt{j}\; f_{4} \; ) \\ \\ (\;\sqrt{j}\; f_{2} \; + \; \delta \; \Delta \; \sqrt{j+1}\; f_{3}\;) \\ + \; \delta \; \Delta \; (\; \sqrt{j} \; f_{2}\; +\; \delta \; \Delta^{-1}\; \sqrt{j+1}\; f_{3}\;) \\ +\; \delta \; \Delta \; (\; \sqrt{j+1} \; f_{1}\; +\; \delta \; \Delta^{-1}\; \sqrt{j}\; f_{4}\;) \end{array} \right ) \; \left ( D^{j-1/2}_{-m+1/2}\right ) ;\; + \right. 
$$ $$ T_{+1/2} \otimes \; {\sqrt{j-m+1} \over 2j+1} \; \left ( \begin{array}{r} (\;\sqrt{j}\; f_{1} \; - \; \delta \; \Delta \; \sqrt{j+1} \; f_{4}\; ) \\ (\; \sqrt{j+1} \; f_{2} \; -\; \delta \; \Delta \; \sqrt{j} \; f_{3}\; ) \\ -\; \delta \; \Delta \; (\; \sqrt{j+1} \; f_{2} \; - \; \delta \; \Delta^{-1} \; \sqrt{j} \; f_{3}\; ) \\ -\; \delta \; \Delta \; (\; \sqrt{j}\; f_{1}\; -\; \delta \; \Delta^{-1}\; \sqrt{j+1} \; f_{4}\; ) \end{array} \right ) \; \left ( D^{j+1/2}_{-m+1/2}\right ) ;\; + $$ $$ T_{-1/2} \otimes \; {\sqrt{j-m} \over 2j+1} \; \left ( \begin{array}{r} (\;- \sqrt{j+1} \; f_{1} \; + \; \delta \; \Delta \;\sqrt{j}\; f_{4}\; ) \\ (\;-\sqrt{j}\; f_{2} \; + \; \delta \; \Delta \; \sqrt{j+1}\; f_{3}) \\ -\; \delta \; \Delta \; (\;- \sqrt{j} \; f_{2} \; + \; \delta \; \Delta^{-1} \; \sqrt{j+1}\; f_{3}\;) \\ -\;\delta \; \Delta \; (\; -\sqrt{j+1} \; f_{1} \; + \; \delta \; \Delta^{-1}\; \sqrt{j}\; f_{4}) \end{array} \right ) \;\left ( D^{j-1/2}_{-m-1/2}\right ) \;\; + $$ $$ \left. T_{-1/2} \otimes \; {\sqrt{j+m+1} \over 2j+1} \; \left ( \begin{array}{r} (\;\sqrt{j}\; f_{1} \; + \; \delta \; \Delta \; \sqrt{j+1} \; f_{4}\; ) \\ (\; \sqrt{j+1} \; f_{2} \; +\; \delta \; \Delta \; \sqrt{j} \; f_{3}\; ) \\ +\; \delta \; \Delta \; (\; \sqrt{j+1} \; f_{2} \; + \; \delta \; \Delta^{-1} \; \sqrt{j} \; f_{3}\; ) \\ +\; \delta \; \Delta \; (\; \sqrt{j}\; f_{1}\; +\; \delta \; \Delta^{-1}\; \sqrt{j+1} \; f_{4}\; ) \end{array} \right ) \;\left ( D^{j+1/2}_{-m-1/2}\right ) \; \right ] \eqno(6.6) $$ \noindent where, for the sake of brevity, the~angular dependence in the~wave functions is described with the~use of the~following symbolic designation (here there are four different possibilities) $$ D^{j \pm 1/2}_{-m \pm 1/2} \; = \; \left ( \begin{array}{c} D^{j \pm 1/2}_{-m \pm 1/2,-1/2} \\ D^{j \pm 1/2}_{-m \pm 1/2,+1/2} \\ D^{j \pm 1/2}_{-m \pm 1/2,-1/2} \\ D^{j \pm 1/2}_{-m \pm 1/2,+1/2} \end{array} \right ) \; . $$ Now, remembering the~equality $(a^{2} + b^{2}) = 1$, let us introduce a~new variable $A$ defined by $\cos A = a$ and $\sin A = b$, so the~above operator $\hat{\pi }^{C.}_{\Delta }$ is expressed as $$ \hat{\pi }^{C.}_{A} = (- i ) \; \exp [ \;i A \; \vec{\sigma } \; \vec{n}_{\theta ,\phi }\; ] \eqno(6.7) $$ \noindent here the $A$ is a~complex parameter due to the~quantities $a$ and $b$ are complex ones. Correspondingly, using the~notation according to $$ \sqrt{j+1}\; f_{1} \; + \; \delta \; e^{iA} \; \sqrt{j}\; f_{4}\; = \; K^{A}_{\delta} \; , \qquad \sqrt{j} \; f_{2} \; + \; \delta \; e^{iA} \; \sqrt{j+1}\; f_{3}\; = \; L^{A}_{\delta} \; , $$ $$ \sqrt{j}\; f_{1} \; - \; \delta \; e^{iA} \; \sqrt{j+1} \; f_{4}\; = \; M^{A}_{-\delta} \; , \qquad \sqrt{j+1} \; f_{2} \; - \; \delta \; e^{iA} \; \sqrt{j} \; f_{3}\; =\; N^{A}_{-\delta} $$ \noindent the~representation (6.6) can be rewritten as follows $$ \Psi ^{C. A}_{\epsilon jm\delta }(x)\; = \; {e^{-i\epsilon t}\over r}\; \times $$ $$ \left [\; T_{+1/2} \otimes \; {\sqrt{j+m} \over 2j+1} \; \left ( \begin{array}{r} K^{+A}_{\delta} \\ L^{+A}_{\delta} \\ \delta \; e^{iA} \; L^{-A}_{\delta} \\ \delta \; e^{iA} \; K^{-A}_{\delta} \end{array} \right )\; \left ( D^{j-1/2}_{-m+1/2}\right ) \;\; + \right. 
$$ $$ T_{+1/2} \otimes \; {\sqrt{j-m+1} \over 2j+1} \; \left ( \begin{array}{r} M^{+A}_{-\delta} \\ N^{+A}_{-\delta} \\ -\; \delta \; e^{iA}\; N^{-A}_{-\delta} \\ -\; \delta \; e^{iA} \; M^{-A}_{-\delta} \end{array} \right )\; \left ( D^{j+1/2}_{-m+1/2}\right ) \; \; + $$ $$ T_{-1/2} \otimes \; {\sqrt{j-m} \over 2j+1} \; \left ( \begin{array}{r} -K^{+A}_{-\delta} \\ -L^{+A}_{-\delta} \\ \delta \; e^{iA} \; L^{-A}_{-\delta} \\ \delta \; e^{iA} \; K^{-A}_{-\delta} \end{array} \right ) \;\left ( D^{j-1/2}_{-m-1/2}\right ) \;\; + $$ $$ \left. T_{-1/2} \otimes \; {\sqrt{j+m+1} \over 2j+1} \; \left ( \begin{array}{r} M^{+A}_{\delta} \\ N^{+A}_{\delta} \\ \delta \; e^{iA} \; N^{-A}_{\delta} \\ \delta \; e^{iA} \; M^{-A}_{\delta} \end{array} \right ) \;\left ( D^{j+1/2}_{-m-1/2}\right ) \; \;\right ] \; . \eqno(6.8) $$ \noindent When $A = 0$, then all the~formulas in (6.8) will be significantly simplified, so that the~familiar sub-structure of electronic wave functions with fixed P-parity can easily be seen (see (8.4)). This is what one might expect because, if $A=0$, then the~operator $N_{A=0}^{C.}$ does not involve any isotopic transformation. The~latter might be a~source of some speculation about an~extremely significant role of the~Abelian $P$-symmetry in the~non-Abelian model. However, one should remember that a~genuinely Abelian fermion $P$-symmetry implies both a~definite explicit expression for $P$-operation and definite properties of the~corresponding wave functions. The~above decomposition (6.8) of fermion doublet wave functions in terms of Abelian fermion functions and unit isotopic vectors shows that the~usual Abelian fermion particle wave functions and non-Abelian doublet ones belong to substantially different classes. (see (6.9) at $A=0$); so that the~Abelian like $P$-operation plays only a~subsidiary role in forming the~whole composite wave functions; besides, as was mentioned above, this role will be completely negated in other isotopic gauges. By the way, an~analogous principle of checking the~property of functional space (in particular, as to whether or not the~relevant functions are single-valued ones, apart from any possible gauge transformations) might serve as a~guideline argument to prevent some serious discussion (if not speculation) on fermion interpretation for bosons as well as boson-like interpretation for fermions [20-23] when, in the~Abelian model, the~monopole is in effect and the~case of half-integer $eg$-values is realized. Of course, such possibilities seem striking and attractive for every physicist, however they are correct and quite satisfactory ones only at first glance. Evidently, such monopole-based fermions or bosons being produced from the usual bosons and fermions respectively, turn out to be tied with functional spaces which are absolutely different from those used in reference with the~usual (fermion or boson) particles. In addition, the~Lorentz group-based transformation characteristics of such new `fermions and boson', in reality, will completely negate their new nature and will be dictated by their old classification assignment. Also (in the~author's opinion), the vastly discussed producing `spin from isospin' [20-23] belongs to the same class of striking but hardly realized possibilities. 
Many arguments against this might be formulated; the~simplest one is as follows: the~correct understanding of the~meaning of Lorentzian spin presupposes quite definite properties of this characteristic under Lorentz group transformations; however, such a~`spin', being produced from isospin, does not obey these regulations. One more theoretical criterion might be given: placing such an~object into the background of a Riemannian (curved) space-time. Evidently, the gravitational field will ignore any boson-fermion inversion based on the~above mechanism. \subsection*{7. Free parameter in the discrete symmetry \\ and $N_{A}$-parity selection rules} Now we proceed with analyzing the~totality of the~discrete operators $\hat{N}_{A}$, all of which are suitable for separation of variables. What is the~meaning of the~parameter $A$? In other words, how can this $A$ manifest itself, and why does such an~unexpected ambiguity exist? The $A$ fixes one member of the~complete set of operators $\{\; i\; \partial _{t} ,\; \vec{J}^{2} , \; J_{3} ,\; \hat{N}_{A} ,\; \hat{K} \;\}$, and correspondingly this~$A$ also labels all basic wave functions. It is obvious that this parameter $A$ can manifest itself in matrix elements of physical quantities. To see this, it suffices to look at the general structure of the~relevant expectation values of those observables\footnote{To be exact, any variation in this $A$ will lead to an alteration of the normalization conditions for the relevant wave function $\Psi ^{A}_{\epsilon jm \delta \mu }$ (see Sec.~9), so that this circumstance should be taken into account; but for simplicity, we pass over those alterations.} (the indices $\epsilon , J, M$ are omitted): $$ \bar{G} \; = \; < \Psi ^{A}_{\delta \mu } \mid \; \hat{G} \; \mid \Psi ^{A}_{\delta \mu } > \; = \; < T_{+1/2} \otimes \Phi^{(+)} _{\mu }(x) \; \mid \hat{G} \mid \; T_{+1/2} \otimes \Phi^{(+)} _{\mu }(x) > \; + $$ $$ \mid e^{iA}\; \mid \;\;< T_{-1/2} \otimes \Phi^{(-)} _{\mu }(x) \mid \; \hat{G} \; \mid T_{-1/2} \otimes \Phi^{(-)} _{\mu }(x) > \;\; + $$ $$ 2 \; \delta \; \mu \;\; Re \; \left [\; e^{iA}\; < T_{+1/2} \otimes \Phi^{(+)} _{\mu}(x) \mid \; \hat{G} \; \mid T_{-1/2} \otimes \Phi^{(-)} _{\mu }(x) > \;\right ] \; . \eqno(7.1) $$ \noindent If such a~$\hat{G}$ has the~diagonal isotopic structure $$ \hat{G}(x)= \left ( \begin{array}{cc} \hat{g}_{11}(x) & 0 \\ 0 & \hat{g}_{22}(x) \end{array} \right ) \otimes \hat{G}^{0}(x) $$ \noindent then the third term in (7.1) vanishes and the~matrix element only depends on $\mid e^{iA}\mid$. As a~simple example, let us consider the~new form of the~above-mentioned selection rules depending on the~$A$-parameter. Now the~matrix element examined is $$ \int \bar{\Psi} ^{A}_{\epsilon JM\delta \mu }(x) \; \hat{G}(x)\; \Psi ^{A}_{\epsilon J'M'\delta'\mu'}(x) \; dV \; \equiv \; \int r^{2} dr \int f^{A}(\vec{x}) \; d \Omega $$ \noindent and then $$ f^{A}(-\vec{x} ) = \; \delta \; \delta'\; (-1)^{J+J'} \; \bar {\Psi}^{A}_{\epsilon JM\delta \mu }(x) \; \times $$ $$ \left [\;(a^{*} \sigma ^{1} + b^{*} \sigma ^{2}) \otimes \hat{P}_{bisp.} \hat{G}(-\vec{x}) \; (a \sigma ^{1} + b \sigma ^{2}) \otimes \hat{P}_{bisp.} \; \right ] \; \Psi ^{A}_{\epsilon J'M'\delta'\mu'} (\vec{x}) \; .
\eqno(7.3a) $$ \noindent If this $\hat{G}$ obeys the condition $$ \left [\;(a^{*} \sigma ^{1} + b^{*} \sigma ^{2}) \otimes \hat{P}_{bisp.}\;] \; \hat{G}(-\vec{x}) \; [\; (a \sigma ^{2} + b \sigma ^{1}) \otimes \hat{P}_{bisp.}\; \right ]\; = \; \Omega ^{A} \; \hat{G}(\vec{x}) \eqno(7.3b) $$ \noindent which is equivalent to $$ \left ( \begin{array}{cc} e^{i(A-A^{*})} \; \hat{g}_{22} (-\vec{x}) & \; e^{-i(A+A^{*})} \; \hat{g}_{21}(-\vec{x}) \\ e^{i(A+A^{*})} \; \hat{g}_{12} (-\vec{x}) & \; e^{-i(A-A^{*})} \; \hat{g}_{11}(-\vec{x}) \end{array} \right ) \otimes $$ $$ \left [ \; \hat{P}_{bisp.} \; \hat{G}^{0}(-\vec{x})\; \hat{P}_{bisp.} \; \right ] \; = \; \Omega^{A} \left ( \begin{array}{cc} \hat{g}_{11}(\vec{x}) & \hat{g}_{12}(\vec{x}) \\ \hat{g}_{21} (\vec{x}) & \hat{g}_{22}(\vec{x}) \end{array} \right ) \otimes \hat{G}(\vec{x}) \eqno(7.3c) $$ \noindent where $\Omega ^{A} = + 1$ or $-1$, then the relationship (7.3a) comes to $$ f^{A}(-\vec{x})\; = \; \Omega ^{A}\; \delta \; \delta'\; (-1)^{J+J'} \; f^{A}(\vec{x}) \; . \eqno(7.3d) $$ \noindent Taking into account (7.3d), we bring the matrix element's integral above to the~form $$ \int \bar{\Psi}^{A}_{\epsilon JM\delta \mu }(x) \; \hat{G}(x) \; \Phi ^{A}_{\epsilon J'M'\delta'\mu'}(x) \;dV \; = \; \left [\; 1\; + \; \Omega ^{A} \; \delta \; \delta'\; (-1)^{J+J'} \; \right ] \; \int_{V_{1/2}} f^{A}( \vec{x} ) \;dV \eqno(7.4a) $$ \noindent where the~integration in the~right-hand side is done on the~half-space. This expansion provides the following selection rules: $$ M E \equiv 0 \qquad \longleftrightarrow \qquad \left [\; 1\; +\; \Omega^{A} \; \delta \; \delta' \; (-1)^{J+J'}\; \right ] \; =\; 0 \;\; . \eqno(7.4b) $$ \noindent It is to be especially emphasized that the~quantity $\Omega ^{A}$, defined to be $+ 1$ or $- 1$, is not the~same as the~analogous that $\omega $ in (5.6d). These $\omega$ and $\Omega^{A}$ involve their own particular limitations on composite scalar or pseudoscalar because they imply respective (and rather specific) configurations of their isotopic parts, obtained by delicate fitting all the~quantities $\hat{g}_{ij}$. Therefore, each of those $A$ will generate its own distinctive selection rules. \subsection*{8. Existence of the parameter $A$ and isotopic chiral symmetry} Where does this $A$-ambiguity come from and what is the meaning of this parameter $A$? To proceed further with this problem, one is to realize that the~all different values for $A$ lead to the same whole functional space; each fixed value for $A$ governs only the~basis states $\Psi ^{A}_{\epsilon JM\delta \mu }(x)$ associated with $A$, but with no change in~the whole space. Connection between any two sets of functions $\{ \Psi(x) \}^{A}$ and $\{ \Psi(x) \}^{A'}$ is characterized by $$ \Psi ^{A'\; S.}_{\epsilon JM\delta \mu } = U_{S.}(A'-A) \;\Psi ^{A \;S.}_{\epsilon JM\delta \mu }(x) \; , \qquad U _{S.}(A'-A) = \; e^{-iA} \; \left ( \begin{array}{cc} e^{iA} & 0 \\ 0 & e ^{iA'} \end{array} \right ) \otimes I \; . \eqno(8.1a) $$ \noindent Besides, it is readily verified that the operator $\hat{N}^{S.}_{A}$ (depending on $A$) can be obtained from the~operator $\hat{N}^{S.}$ as follows $$ \hat{N}^{S.}_{A} \; = \; U_{S.}(A) \;\; \hat{N}^{S.} \; U^{-1}_{S.}(A) \; . 
\eqno(8.1b) $$ \noindent The matrix $U_{S.}(A'-A)$ is so simple only in the~Schwinger basis; after translating that into Cartesian one we will have $$ \Psi ^{A'\;C.}_{\epsilon JM\delta \mu } (x) \; = \; U_{C.} (A'-A) \; \Psi ^{A\; C.}_{\epsilon JM\delta \mu }(x) \; , \eqno(8.1c) $$ $$ S_{C.}= {1 \over \Delta} \left ( \begin{array}{cc} (\Delta \cos ^{2} \theta /2 \; + \; \Delta' \sin ^{2}\theta /2 ) & {1\over 2} \; (\Delta \; - \; \Delta')\; \sin \theta \; e^{-i\phi } \\ {1\over 2} \; (\Delta \; - \; \Delta' ) \; \sin \theta \; e^{+i\phi } & (\Delta' \cos ^{2}\theta /2 \; +\; \Delta \sin ^{2} \theta /2 ) \end{array} \right ) \otimes I $$ \noindent and $S_{C.}(A'-A)$ satisfies the equation $$ \hat{N}^{C.}_{A} \; = \; U_{C.} (A) \;\; \hat{N}^{C.} \; \hat{U}^{-1}_{Cart.} (A) \; . \eqno(8.1d) $$ In connection with everything said above on parity selection rules and just `unexpectedly' established relationship (8.1b) (or (8.1d)), we need to think this over again before finding a~conclusive answer. Let us begin from some generalities. As well known, when analyzing any Lie group problems (or their algebra's) there indeed exists a~concept of equivalent representations: $U \; M_{k} \; U^{-1} \; = \; M'_{k} \; \rightarrow \; M_{k} \; \sim \; M'_{k}$. In this context, the two sets of operators $\{ J^{S.}_{i}, \; \hat{N}^{S.} \}$ and $\{ J^{S.}_{i}, \; \hat{N}^{S.}_{A} \}$ provide basically just the same representation of the $O(3.R)$-algebra $$ \{ J^{S.}_{i}, \; \hat{N}^{S.}_{A} \} \; = \; U_{S.}(A) \; \{ J^{S.}_{i}, \; \hat{N}^{S.} \} \; U^{-1}_{S.}(A) \; . \eqno(8.2a) $$ \noindent The totally different situation occurs in the context of the use of those two operator sets as physical observables concerning the system with the~fixed Hamiltonian $$ \{ \vec{J}^{2}_{S.}, \; J^{S.}_{3}, \; \hat{N}^{S.} \}^{\hat{H}} \qquad and \qquad \{ \vec{J}^{2}_{S.}, \; J^{S.}_{3}, \; \hat{N}^{S.}_{A} \}^{\hat{H}} \; . \eqno(8.2b) $$ \noindent Actually, in this case the two operator sets represent different observables at the same physical system: both of them are followed by the~same Hamiltonian $\hat{H}$ and also lead to the~same functional space, changing only its basis vectors $\{ \Psi _{\epsilon JM\delta \mu }(x) \}^{A}$. Moreover, in the~quantum mechanics it seems always possible to relate two arbitrary complete sets of operators by some unitary transformation: $$ \{ \hat{X}_{\mu } , \; \mu = 1, \ldots \}^{\hat{H}} \;\; \rightarrow \;\; \{ \hat{Y}_{\mu } , \; \mu = 1, \ldots \}^{\hat{H}} \; , \{ \Phi _{x_{1} \ldots x_{s}} \} \;\; \rightarrow \;\; \{ \Phi _{y_{1} \ldots y_{s}} \} \; . $$ \noindent But arbitrary transformations $U$ cannot generate, through converting $U \; \{ \hat{X}_{\mu } \} \; U^{-1} \; = \; \hat{Y}_{\mu }$, a~new complete set of variables; instead, only some Hamiltonian symmetry's operations are suitable for this: $U \; \hat{H} \; U^{-1} = H$. In this connection, we may recall a more familiar situation for Dirac massless field [95,96]. The wave equation for this system was earlier mentioned (see Sec.~2) and that has the~form $$ i \bar{\sigma}^{\alpha}(x)\; (\partial _{\alpha } + \bar{\Sigma}_{\alpha })\; \xi (x) = 0 \;\; , \qquad i \sigma ^{\alpha }(x) \; (\partial _{\alpha } + \Sigma _{\alpha })\; \eta (x) = 0 \; . 
\eqno(8.3a) $$ \noindent If the function $\Phi (x) = ( \xi (x) , \; \eta (x) )$ is subjected to the~transformation $$ \pmatrix{\xi'(x) \cr \eta'(x)} = \pmatrix{I & 0 \cr 0 & z\;I} \pmatrix{\xi (x) \cr \eta (x)} \eqno(8.3b) $$ \noindent where $z$ is an arbitrary complex number, then the new function $\Phi'(x) = ( \xi'(x), \eta'(x) )$ satisfies again the equation in the form (8.3a). This manifests the Dirac massless field's symmetry with respect to the~transformation $$ \hat{H}' = U \; \hat{H} \; U^{-1} = \hat{H} , \qquad \Phi'(x) = U \; \Phi (x) \; . \eqno(8.3c) $$ \noindent The existence of the~symmetry raises the~question as to whether this symmetry affects determination of complete set of diagonalized operators and constructing spherical wave solutions. These solutions, conformed to diagonalizing the~usual bispinor $P$-inversion operator, in addition to $\vec{j}^{2}$ and $j_{3}$, are as in (5.4a) at $eg =0$. In the same time, other spherical solutions, together with corresponding diagonalized discrete operator, can be produced: $$ \Phi^{z}_{\epsilon jm \delta } = {e^{-i\epsilon t}\over r} \left ( \begin{array}{r} f_{1} \; D^{j}_{-m, -1/2} \\ f_{2} \; D^{j}_{-m, +1/2} \\ z \; \delta \; f_{2} \; D^{j}_{-m, -1/2} \\ z \; \delta \; f_{1} \; D^{j}_{-m, +1/2} \end{array} \right ) \; , \eqno(8.4a) $$ $$ U \;\left (\; \hat{P}^{sph.}_{bisp.} \otimes \hat{P} \; \right ) \; U^{-1}\; = \; \left [\; {1\over 2} \; ( z + {1\over z} )\; (- \gamma^{5} \gamma^{1}) \; + \; {1\over 2} \; ( z - {1\over z} ) \; (- \gamma^{1}) \;\right ] \otimes \hat{P} \; . \eqno(8.4b) $$ \noindent Introducing another complex variable $A$ instead of the~parameter $z :\; z = (\cos A + i \sin A) = e^{iA}$; so that the~operator from (8.4b) is rewritten in the~form $$ ( \cos A \; + \; i \sin A \; \gamma ^{5} ) \; (- \gamma ^{5} \; \gamma ^{1}) \otimes \hat{P} \; \equiv e^{+iA \gamma ^{5}} \; \hat{P}^{sph.}_{bisp.} \otimes \hat{P} \eqno(8.4c) $$ \noindent (8.3b) may be expressed as follows $$ \Phi'(x) \; = \; e^{+iA/2} \; exp(+i \gamma ^{5} {A\over 2}) \; \Phi(x) \eqno(8.4d) $$ \noindent Evidently, that translation of the basis of spherical tetrad into Cartesian tetrad's will preserve the~general structure of (8.4c): $ e^{+iA \gamma ^{5}} \;\hat{P}^{Cart.}_{bisp.} \otimes \hat{P}\; $, since the~gauge matrix $S(k(x),\bar{k}^{*}(x))$ and matrix $\gamma ^{5}$ are commutative with each other. In contrast to this, translation of the~isotopic Schwinger frame into the~Cartesian that does change the~form $\hat{N}_{A}$: the~initial one is $$ \hat{N}^{S.}_{A} \; = \; (e^{-i A \; \sigma^{3}} \hat{\pi }^{S.}) \otimes \hat{P}_{bisp.} \otimes \hat{P} \eqno(8.5a) $$ \noindent and the finishing form is $$ \hat{N}^{C.}_{A} \; =\; (-i) \; \exp \left [\; - i \; A \; \vec{\sigma }\; \vec{n}_{\theta ,\phi } \right ] \otimes \hat{P}_{bisp.} \otimes \hat{P} \; . \eqno(8.5b) $$ \noindent The appearance of this dependence on variables $\theta,\phi$ comes from noncommutation of the~gauge transformation $B(\theta ,\phi )$ and matrix $\sigma^{3}$ (the~latter plays the~role of $\gamma ^{5}$ in case of Abelian chiral symmetry (8.4d)). The transformation $U(A)$, after translating it to the~Cartesian basis (see (8.1c)), can be brought to the~form $$ U ^{C.}(A)\; = \;\left [\; {1 + e^{iA} \over 2} \; + \; {1 - e^{iA} \over 2} \;\vec{\sigma} \; \vec{n}_{\theta ,\phi } \;\right ] \; . 
$$ \noindent Separating out the factor $e^{iA/2}$ in the right-hand side of this formula, we can rewrite $U^{C.}$ in the~form $$ U^{C.} \; = \; e^{iA/2} \; \exp \left [ -\; i \; {A\over 2}\; \vec{\sigma }\; \vec{n}_{\theta ,\phi }\right ] \eqno(8.6) $$ \noindent where the second factor lies in the~(local) spinor representation of the~3-dimensional complex rotation group $SO(3.C)$. This matrix provides a~very special transformation upon the~isotopic fermion doublet and can be thought of as an~analogue of the~Abelian chiral symmetry transformation; it may also be termed the~transformation of {\em isotopic (complex) chiral} symmetry. This symmetry leads to the $A$-ambiguity (8.5) and permits one to choose an~arbitrary reflection operator from the~totality $\{ \hat{N}_{A} \}$. \subsection*{9. Complex values of $A$ and the interplay between the~quantum-mechanical superposition principle and the self-conjugacy requirement} In this section, let us look closely at some qualitative peculiarities of the~above-considered $A$-freedom, paying special attention to the~division of $A$-s into real and complex values. It is convenient to work on this matter in the Schwinger unitary basis. Recall that the $A$-freedom tells us that, simultaneously with $\hat{H}, \vec{j}^{2} , \hat{j}_{3}$, one more discrete operator $ \hat{N}_{A} $, which depends in general on a~complex number $A$, can be diagonalized on the wave functions. Correspondingly, the basis functions associated with the complete set $( \hat{H} ,\vec{j}^{2}, \hat{j}_{3},\hat{N}_{A})$, besides being determined functions of the~relevant quantum numbers $(\epsilon , j, m, \delta )$, are subject to an~$A$-dependence. In other words, all different values of this $A$ lead to different quantum-mechanical bases of the~system. There exists a~whole set of possibilities, but any two of them can be related by means of a~respective linear transformation. For example, the~states $\Psi ^{A}_{\epsilon jm\delta }(x)$ decompose into the following linear combinations of the~initial states $\Psi ^{A=0}_{\epsilon jm\delta }(x)$ (further, this $A=0$ index will be omitted): $$ \Psi ^{A}_{\epsilon jm\delta }(x) \; = \; \left [ \; {{1 + \delta e^{iA} } \over 2} \; \Psi _{\epsilon jm,+1} \; + \; {{1 - \delta e^{iA} } \over 2} \; \Psi _{\epsilon jm,-1}\; \right ] \; . \eqno(9.1) $$ \noindent One should take heed that, no matter what $A$ is (either real or complex), the~new states (9.1), being linear combinations of the initial states, are as permissible as the old ones. This aspect of allowing complex values for $A$ conforms to the quantum-mechanical superposition principle: the~latter presupposes that arbitrary complex coefficients $c_{i}$ in a~linear combination of basis states $\; \Sigma c_{i} \Psi _{i}\;$ are acceptable. However, an~essential and subtle distinction between real and complex $A$-s comes to light as soon as we turn to the~matter of normalization and orthogonality for $\Psi ^{A}_{\epsilon jm\delta }(x)$. An~elementary calculation gives $$ < \Psi^{A}_{\epsilon jm,\delta} \mid \Psi^{A}_{\epsilon jm,\delta} > = { {1 + e^{i(A-A^{*}) }} \over 2} \; ; \;\; < \Psi^{A}_{\epsilon jm,\delta} \mid \Psi^{A}_{\epsilon jm,-\delta} > = { {1 - e^{i(A-A^{*}) }} \over 2} \eqno(9.2) $$ \noindent i.e.
if $A \neq A^{*}$ then the~normalizing condition for $\Psi ^{A}_{\epsilon jm\delta }(x)$ does not coincide with that for $\Psi _{\epsilon jm\delta }(x)$, and, what is more, the~states $\Psi ^{A}_{\epsilon jm,-1}(x)$ and $\Psi ^{A}_{\epsilon jm,+1}(x)$ are not mutually orthogonal. The~latter means that we face here a~non-orthogonal basis in the Hilbert space, and the purely imaginary part of $A$ plays a~crucial role in the~description of this non-orthogonality. The {\em oblique} character of the basis $\Psi ^{A}_{\epsilon jm\delta }(x)$ (if $A \neq A^{*}$) exhibits a very essential qualitative distinction from the {\em perpendicular} one for $\Psi _{\epsilon jm\delta }(x)$. However, such specific bases, though not in very common use and having a~number of peculiar features, are allowed to be exploited in conventional quantum theory. Even more, in a~sense, the~very existence of non-orthogonal bases in the~Hilbert space is a~direct consequence of the~quantum-mechanical superposition principle\footnote{For this reason, a~prohibition against complex $A$-s could partly be a~prohibition against the conventional superposition principle too (narrowing it), since forbidding all complex values of $A$ would imply specific limitations on the two coefficients in (9.1); but such limitations are not presupposed by the~superposition principle itself.}. Up to this point, the~complex $A$-s seem to be as good as the~real ones. Now it is the~moment to point to some clouds hanging over this part of the~subject. Indeed, as is readily verified, the operator $\hat{N}_{A}$ is not a~self-conjugate (self-adjoint) one\footnote{The author is grateful to Dr. E.~A.~Tolkachev for pointing this out.}: $$ < \hat{N}_{A} \Phi (x) \mid \Psi (x) > \; = \; < \Phi (x) \mid \; e^{i(A-A^{*})\sigma_{3}} \; \hat{N}_{A} \; \Psi (x) > \; , $$ \noindent which follows from (8.5a) with $\hat{\pi }^{S.} = \sigma ^{1}$, since $\sigma ^{1}$ anticommutes with $\sigma ^{3}$ and therefore $\hat{N}^{+}_{A} = e^{i(A-A^{*})\sigma _{3}} \; \hat{N}_{A}$. It is understandable that this non-self-conjugacy property correlates with the~above-mentioned non-orthogonality conditions: as is well known, a~self-conjugate operator entails both the reality of its eigenvalues and the~orthogonality of its eigenfunctions. As already noted, the~eigenvalues of $\hat{N}_{A}$ are real, and this conforms to the~general statement that all inversion-like operators possess the~following property: if $\hat{G}^{2} = I$ and $\hat{G} \; \Phi _{\lambda } = \lambda \; \Phi _{\lambda }$, then $\lambda $ is a real number. So, we have reached a~point of choice: either one rejects all complex values of $A$ and thereby narrows (if not violates) a quantum-mechanical principle of major generality (that of superposition), or one accepts all complex $A$-s as well as real ones and thereby, in turn, stretches another quantum-mechanical regulation, concerning the~self-adjoint character of {\em physical} quantities. We have chosen to accept and look into the~second possibility. In the author's opinion, one should accord primacy to the~general superposition principle over the~self-adjointness requirement. In support of this point of view there exist clear-cut physical grounds. Indeed, recall the~quantum-mechanical status of all inversion-like quantities: they always serve to distinguish two quantum-mechanical states. Moreover, no classical variables correspond to those quantum variables; this correlates with the fact that no classical apparatus measuring those discrete variables exists whatsoever.
In contrast to this, one should recollect why the~self-adjointness requirement was imposed on physical quantum operators. The reason is that such operators have all their eigenvalues real. Besides, that limitation on physical quantum variables was introduced, in the first place, for quantum variables having classical counterparts (with a continuum of measured classical values); only after this, in the~second place, were the~discrete quantities, such as the $P$-inversion and the like, tacitly incorporated into the~set of self-adjoint mathematical operations, as a~{\em natural} extrapolation. But one should notice (and the~author is inclined to place special emphasis on this) the~fact that the~single relation $\hat{N}^{2}_{A} = I$ is completely sufficient for the~eigenvalues of $\hat{N}_{A}$ to be real. In the~light of this, the~above-mentioned automatic incorporation of those discrete operators into the~set of self-adjoint ones does not seem inevitable. But admitting this, there is a~problem to solve: what is the~meaning of complex expectation values of such non-self-adjoint discrete operators, since, evidently, the~conventional formula $<\Psi \mid \hat{N}_{A} \mid \Psi >$ provides us with complex values? Indeed, let $\Psi (x)$ be $\Psi (x) = [ m \Psi ^{A}_{+1}(x) + n \Psi ^{A}_{-1}(x)]$; then $$ < \Psi \mid \hat{N}_{A} \mid \Psi > \; = \; (-1)^{j+1} \; < m \; \Psi^{A}_{+1}(x) + n \; \Psi^{A}_{-1}(x) \mid m \; \Psi^{A}_{+1}(x) - n \; \Psi ^{A}_{-1}(x) > \; = \; $$ $$ (-1)^{j+1} \; \left [\; ( m^{*} m - n^{*} n )\; {{1+e^{i(A-A^{*})}} \over 2} \; + \; ( n^{*} m - n m^{*} )\; {{1-e^{i(A-A^{*})}} \over 2}\;\right ] \; . \eqno(9.3) $$ \noindent Must one be skeptical about those complex $ \bar{N}_{A}$, or treat them as physically acceptable quantities? Let us examine this problem in more detail. It is reasonable to begin with an~elementary consideration of the~measuring procedure for $\hat{N} = \hat{N}_{A=0}$. Let a~wave function $\Psi (x)$ decompose into the~combination $$ \Psi (x) \; = \;\left [ \; e^{i\alpha } \cos \Gamma \; \Psi _{+1}(x) \; + \; e^{i\beta } \sin \Gamma \; \Psi _{-1}(x) \; \right ] \eqno(9.4a) $$ \noindent where $\alpha $ and $\beta \in [ 0 , 2 \pi ]$, and $\Gamma \in [ 0 , \pi /2 ]$. For the $\hat{N}$ expectation value, one gets $$ \bar{N} = \; < \Psi \mid \hat{N} \mid \Psi > \; = (-1)^{j+1}\; (\cos^{2} \Gamma - \sin ^{2} \Gamma ) = (-1)^{j+1} \; \cos 2\Gamma \; . \eqno(9.4b) $$ \noindent From (9.4b) one can conclude that $\bar{N}$, after having been measured, provides us only with information about the~parameter $\Gamma $ in (9.4a), but does not furnish any information on the~phase factors $e^{i\alpha }$ and $e^{i\beta }$ (or their relative factor $e^{i(\alpha -\beta )}$). This interpretation of the measured $\bar{N}$ as a receptacle of quite definite information about the superposition coefficients in the~decomposition (9.4a) represents the one and only physical meaning of $\bar{N}$. Now, returning to the case of the $\hat{N}_{A}$ operation, one should put an~analogous question concerning $\bar{N}_{A}$. The~material question is: what kind of information about $\Psi (x)$ can be extracted from the~measured $\bar{N}_{A}$? It is convenient to rewrite the~above function $\Psi (x)$ as a~linear combination of the functions $\Psi ^{A}_{\epsilon jm,+1}$ and $\Psi ^{A}_{\epsilon jm,-1}$.
Thus, inverting the~relations (9.1), we get $$ \Psi _{\epsilon jm,+1} \; = \; \left [ \;{{1 + e^{-iA}} \over 2} \; \Psi ^{A}_{\epsilon jm,+1} \; + \; {{1 - e^{-iA}} \over 2} \; \Psi ^{A}_{\epsilon jm,-1} \; \right ] \; ; $$ $$ \Psi _{\epsilon jm,-1} \; = \; \left [ \; {{1 - e^{-iA}} \over 2} \; \Psi ^{A}_{\epsilon jm,+1} \; + \; {{1 + e^{-iA}} \over 2} \; \Psi ^{A}_{\epsilon jm,-1} \; \right ] $$ \noindent and then $\Psi (x)$ takes the~form (the~fixed quantum numbers $\epsilon ,j,m$ are omitted) $$ \Psi (x) = \left [\; \left (\; e^{i\alpha} \cos \Gamma \; {{1 + e^{-iA}} \over 2} \;+ \; e^{i\beta } \sin \Gamma \; {{1 - e^{-iA}} \over 2} \; \right ) \; \Psi ^{A}_{+1}(x) \; + \right. \eqno(9.5a) $$ $$ \left. \left (\; e^{i\alpha} \cos \Gamma \; {{1 - e^{-iA}} \over 2} \; + \; e^{i\beta } \sin \Gamma \; {{1 + e^{-iA}} \over 2} \;\right ) \; \Psi ^{A}_{-1}(x) \;\right ] \; = \; \left [\; m \; \Psi ^{A}_{+1}(x) \; + \; n \; \Psi ^{A}_{-1}(x) \;\right ] \; . $$ \noindent Although the~quantity $A$ enters the~expansion (9.5a), $\Psi (x)$ really contains only three arbitrary parameters: $\Gamma$, $e^{i\alpha }$, and $e^{i\beta }$. After a simple calculation one gets $$ \bar{N}_{A} = \; < \Psi \mid \hat{N}_{A} \mid \Psi >\; = (-1)^{j+1} \; ( \rho \cosh g + i \sigma \sinh g ) \; , \eqno(9.5b) $$ $$ \rho = \cos 2 \Gamma \cos f + \sin 2\Gamma \sin f \sin (\alpha -\beta ) \; , \;\; \sigma = - \cos 2 \Gamma \sin f + \sin 2\Gamma \cos f \sin (\alpha -\beta ) $$ \noindent where $f$ and $g$ are real parameters defined by $A = f + i \; g$. Examining this expression, one may single out four particular cases for separate consideration: $$ 1. \qquad g = 0 \; , f = 0 \; ,\qquad \bar{N}_{A} = (-1)^{j+1} \cos 2\Gamma \eqno(9.6a) $$ \noindent here $\bar{N}$ fixes only $\Gamma $, while $e^{i(\alpha -\beta )}$ remains indefinite. $$ 2. \qquad g = 0 , f \neq 0 ,\qquad \bar{N}_{A} = (-1)^{j+1} \left (\cos 2\Gamma \cos f + \sin 2\Gamma \sin f \sin (\alpha -\beta ) \right ) \eqno(9.6b) $$ \noindent here the measured $\bar{N}_{A}$ does not fix $\Gamma $ and $(\alpha - \beta )$ separately, but only imposes a~certain limitation on both these parameters. $$ 3. \qquad g \neq 0 , f = 0 ,\qquad \bar{N}_{A} = (-1)^{j+1} \left ( \cos 2 \Gamma \cosh g + i \sin 2 \Gamma \sin (\alpha -\beta) \sinh g \right ) \eqno(9.6c) $$ \noindent here $\bar{N}_{A}$ determines both $\Gamma $ and $(\alpha - \beta )$, and thereby this complex $\bar{N}_{A}$ is a~physical quantity with a quite definite interpretation. Finally, for the~fourth case $ (g \neq 0 , f \neq 0 ) $, it follows that $$ 4. \qquad \cos 2\Gamma = ( \rho \cos f - \sigma \sin f ) , \qquad \sin 2\Gamma \sin (\alpha -\beta ) = ( \rho \sin f + \sigma \cos f ) \eqno(9.6d) $$ \noindent i.e. the complex $\bar{N}_{A}$ also gives some information about $\Gamma $ and $(\alpha - \beta )$ and therefore has the character of a physically interpretable quantity. \subsection*{10. Why the $A$-freedom is not a gauge one? On a logical collision between the concepts of gauge and non-gauge symmetries} There exists one more cloud over the~subject under consideration\footnote{The author is grateful to E.~A.~Tolkachev, L.~M.~Tomil'chik, and Ya.~M.~Shnir for fruitful discussions on this matter}.
Indeed, if the~parameter $A$ is a~real number, then the~matrix $S(A)$ translating $\Psi _{\epsilon jm\delta }(x)$ into $\Psi ^{A}_{\epsilon jm\delta }(x)$ coincides (apart from a~phase factor $e^{iA/2}$) with a~matrix lying in the~group $SU(2)$: $$ \hat{F}(A) \equiv e^{-iA/2} S(A) \in SU(2)_{loc.}\; , \;\; \Psi'^{A}_{\epsilon jm\delta }(x) \equiv \hat{F}(A) \Psi _{\epsilon jm\delta }(x) = e^{-iA/2} \Psi ^{A}_{\epsilon jm\delta }(x) \; . \eqno(10.1) $$ \noindent However, the~group $SU(2)_{loc.}$ has the~status of a~gauge group for this system. Thus yet another point of view could be brought to light: one could claim that the~two functions $\Psi _{\epsilon jm\delta }(x)$ and $\Psi'^{A}_{\epsilon jm\delta }(x)$ (at $A^{*} = A$) are related by means of a~gauge transformation, and that therefore $\Psi'^{A}_{\epsilon jm\delta }(x)$ merely represents, in another form, the~same physical state as $\Psi _{\epsilon jm\delta }(x)$. As a~direct consequence, one could then insist on the~impossibility in principle of observing any physical distinction between the~wave functions $\Psi _{\epsilon jm\delta }(x)$ and $\Psi'^{A}_{\epsilon jm\delta }(x)$. If the~$\hat{F}(A)$ transformation is assessed in this way, then one ultimately concludes that the~above $N_{A}$-parity selection rules (which depend explicitly on $A$, real in this case) are only a~mathematical fiction, since the~transformation $\hat{F}(A)$ is not physically observable. At this point we run across a~problem of material physical significance, in which one may perceive the~tense interplay between the~quantum-mechanical superposition principle and the~subtle distinction between the~concepts of gauge and non-gauge symmetries. In examining this phenomenon, careful coordination with the~foregoing quantum-mechanical generalities should take precedence over all other considerations. So, the~question of principle is whether the~$\hat{F}(A)$ transformation is a~gauge transformation or not. The~same question can be reformulated as follows: is the~fact $\hat{F}(A) \in SU(2)_{loc.}$ sufficient to interpret $\hat{F}(A)$ exclusively as a~transformation with gauge status? For the~moment, let us suppose that $\hat{F}(A)$ is exclusively a~gauge transformation and nothing else. Then all functions $\Psi'^{ A}_{\epsilon jm\delta }(x) \equiv \hat{F}(A) \Psi _{\epsilon jm\delta }(x)$ represent the~same physically identified state, already described by the~initial function $\Psi _{\epsilon jm\delta }(x)$. In other words, the~function $\Psi _{\epsilon jm\delta }(x)$ and the~following one $$ \Psi'^{A}_{\epsilon jm\delta } = \left [\; {{e^{-iA/2} + \delta e^{iA/2} } \over 2} \; \Psi _{\epsilon jm,+1} \;+ \; {{e^{-iA/2} - \delta e^{iA/2} } \over 2}\; \Psi _{\epsilon jm,-1}\; \right ] \eqno(10.2) $$ \noindent would both be only different representatives of the~same physical state. However, such an~outlook is not acceptable on several physical grounds. To clear up this matter it is sufficient to have recourse again to the~quantum-mechanical superposition principle and its concomitant requirements. Indeed, the~possibility of {\em not} accompanying a~transition of the~form $\Psi _{\epsilon jm\delta }(x) \rightarrow \Psi'^{A}_{\epsilon jm\delta}(x)$ by the~similarity transformation of all physical operators (that is, $\hat{G} \rightarrow \hat{G}' = \hat{F}(A) \hat{G} \hat{F}^{-1}(A)$) is generally regarded as an~essential constituent part of the~conventional superposition principle.
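For concreteness, and only as an~illustrative check, it may be useful to record the~explicit matrix form of $\hat{F}(A)$ that follows from (10.1) and (10.2); referring its two components to the~basis $(\Psi _{\epsilon jm,+1}, \Psi _{\epsilon jm,-1})$ at fixed $\epsilon , j , m$ (a~convention introduced here only for this remark), one finds
$$
\hat{F}(A) \; = \; \left ( \begin{array}{cc}
\cos A/2 & - i \sin A/2 \\
- i \sin A/2 & \cos A/2
\end{array} \right ) \; = \; \exp \left ( - i \; {A \over 2}\; \sigma ^{1} \right ) \; ,
$$
\noindent $\sigma ^{1}$ being the~first Pauli matrix acting on the~index $\delta = \pm 1$. For real $A$ this matrix is indeed unitary and of unit determinant, in accordance with $\hat{F}(A) \in SU(2)_{loc.}$; so the~question raised above concerns not the~mathematical form of $\hat{F}(A)$ but its physical status.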
Evidently, the~essence of the~superposition principle in quantum mechanics consists precisely in this possibility (of changing the~wave function without simultaneously transforming the~operators), and not in a~mere statement of the~linearity of the~matter equation. In contrast to this, the~gauge-like interpretation of the~transformation $\hat{F}(A)$ would force us to accompany the~change $\Psi _{\epsilon jm\delta }(x) \rightarrow \Psi '^{A}_{\epsilon jm\delta }(x)$ by a~similarity transformation of $\hat{G} $. Thus a~general outlook prescribing that the~transformation $\hat{F}(A)$ be interpreted exclusively as a~gauge one contradicts the~requirements stemming from the~superposition principle. One could suggest that no contradiction arises here if all genuine observable operators are invariant under the~above similarity transformation, all operators not obeying it being unphysical fictions. In this connection, let us look more closely at the~character of the~limitations imposed on $\hat{G}$ by the~condition $\hat{G} = \hat{F}(A) \hat{G} \hat{F}^{-1}(A)$; an~elementary analysis shows that $\hat{G}$ must be of diagonal isotopic structure, i.e. $\hat{g}_{12}(x)$ and $\hat{g}_{21}(x)$ must be equated to zero. All other possibilities for $\hat{G}$ are associated with the~assertion that a~certain physical distinction between $\Psi _{\epsilon jm\delta }(x)$ and $\Psi'^{A}_{\epsilon jm\delta }(x)$ is observable. But there are no grounds for the~use of physical operators with this diagonal structure only, and all the more, for imposing the~limitation of the~form $A^{*} = A$. Besides, the~simultaneous acceptance of diagonal isotopic structure as the~only admissible form of physical operators, together with the~added limitation $A^{*} = A$, is indissolubly tied to a~very definite conception of the~particle doublet itself. Indeed, it amounts to ascribing an~exclusively additive character to the~particle doublet: one must, in the~first place, measure separately the~quantities associated with $\hat{g}_{11}(x)$ and $\hat{g}_{22}(x)$, and only afterwards, in the~second place, sum up both results. In the~author's opinion, once such an~attitude towards the~essence of the~particle doublet is adopted (which in turn presupposes a~quite definite set of allowed measurement procedures), it becomes a~mystical and fruitless hope to find any genuine quantum-mechanical correlations between the~components $T_{+1/2}\otimes \Phi ^{+}(x)$ and $T_{-1/2}\otimes \Phi ^{-}(x)$. If such a~point of view were recognized as the~truly physical one, then the~understanding of the~particle doublet as a~single entity would automatically be handed over to some mystical powers not controlled by the~quantum-mechanical mathematical formalism. Instead, in the~author's opinion, a~truly quantum-mechanical conception of the~particle doublet envisages that some physical operators of non-diagonal form must really exist. Besides, if one insists on the~diagonal form as the~only possible form of physical operators, one should face a~further dimension of the~problem under consideration: what, then, is the~meaning of the~$A$-freedom? Indeed, let us turn back to (10.1) and (10.2) and set $e^{iA} = -1$ (i.e. $A = \pi $); then these relations give, in particular (up to an~inessential phase factor), $$ \hat{F} (A = \pi ) \; \Psi _{\epsilon jm,-1}(x) \; \equiv \; \Psi _{\epsilon jm,+1}(x). $$ \noindent The latter shows that if one decides in favor of an~exclusively gauge character of the~$A$-freedom, then one faces a~very strange situation.
Namely, two solutions $\Psi _{\epsilon jm,-1}(x)$ and $\Psi _{\epsilon jm,+1}(x)$, hitherto consistently distinguished and linearly independent of each other, would turn out to be only different representatives of a~single invariant state. But then the~natural and legitimate question arises: what is the~meaning of such a~physical situation? Recall that, generally speaking, the~quantum doublet states $\Psi ^{A}_{\epsilon jm\delta \mu }(x)$ (the~number $\mu $ was often omitted above) carry five quantum numbers in place of the~four ones ($\epsilon ,j,m,\mu$ ) of the~Abelian case; this distinction (four versus five) between the~Abelian and non-Abelian situations seems quite understandable and natural, being the~result of adding a~new degree of freedom by hand when going over to the~non-Abelian case. In the~light of this, it is easy to realize that the~{\em physical identification} of the~functions $\Psi _{\epsilon jm,-1,\mu }(x)$ and $\Psi _{\epsilon jm,+1,\mu }(x)$, effectively generated (through the~transformation $\hat{F}(A)$) precisely by the~gauge understanding of the~$A$-freedom, represents a~return to the~Abelian scheme. But what would be the~meaning of such a~strange reversion? Thus, seemingly, the~interpretation of the~$A$-freedom as an~exclusively gauge one is not justified, since it leads to a~logical collision with the~quantum superposition principle and also entails a~return to the~Abelian scheme. Nevertheless, the~matrix $\hat{F}(A)$ does lie in $SU(2)^{gauge}_{loc.}$. In the~author's opinion, there exists just one, and very simple, way out of this situation; it consists in the~following. The~complete symmetry group of the~system under consideration is (apart from the~rotational symmetry related to $\vec{j}^{2}, j_{3}$, which has non-gauge character) of the~form $\hat{F}(A) \otimes SL(2,C)^{loc.}_{gauge} \otimes SU(2)^{loc.}_{gauge}$. This group, in particular, contains gauge and non-gauge symmetry operations which have the~same mathematical form but a~different physical status. Only such a~way of understanding allows us to avoid a~deadlock. \subsection*{11. Discussion and some generalities} In conclusion, some additional general remarks are in order. The~specific analysis implemented in the~above study may play a~part, serving as a~guideline, in considering analogous situations for more complicated gauge groups [1]. Indeed, the~same kind of freedom in choosing an~explicit form of certain discrete operators, rooted in the~fact that a~certain subgroup $G'$ of a~complete gauge group $G$ commutes with the~Hamiltonian, can appear there as well. Those symmetry operations of $G'$ will then generate linear transformations in a~set of basis functions which, in their mathematical form, coincide with matrices (independent of the~space coordinates) lying formally in the~gauge group $G$. It appears that an~analogous study of such Abelian monopole manifestations against the~background of other (larger) gauge groups $G$ is feasible and would require no large departures from the~present scheme. The~latter, seemingly, would be completely determined by the~inner structure of the~relevant Lie algebras, in particular by their respective Cartan subalgebras. The~number of elements in those subalgebras would coincide with the~number of (one-parametric) generalized chiral symmetry transformations. One more remark may be given. The~existence of the~above isotopic chiral symmetry does not depend on the~particle multiplet carrying the~isotopic spin $T=1/2$ and the~Lorentzian spin $S=1/2$.
The general structure of the~matter equation (see (3.1)) will remain the~same if one extends the~problem to any other values of $T$ and $S$ (to retain the~formal similarity, one ought to exploit the~first-order wave equation formalism)\footnote{For instance, the author has carried out all the~required calculations for the~$T=1$ case. Technically this is a~somewhat more laborious task, but the~conclusions are in part similar. The~main difference is that there now exist two independent one-parametric symmetries of the~triplet-monopole Hamiltonian (instead of the~single one discerned in the~present work for the~$T=1/2$ case); these symmetry operations differ substantially in their mathematical form and physical manifestations. An account of this turned out to be rather unwieldy, so it has not been included in the~present paper.}. Moreover, the~whole problem can easily be extended to an~arbitrary spherically symmetric curved space-time. \section*{Acknowledgments} I would like to express my gratitude to Prof. A.~A.~Bogush, who looked through the~manuscript and contributed many suggestions for corrections and additions. I am also grateful to Dr E.~A.~Tolkachev, whose strong criticism of an~initial variant of the~article made me revise some ideas before sending the~final version to the~journal, and to Prof. L.~M.~Tomil'chik for his comments and friendly advice. The~author is especially indebted to Dr V.~V.~Gilewsky for his wholehearted support and for agreeing to help in preparing the~\LaTeX\ file of the~article. \subsection*{Supplement A. Connection between electron-monopole functions in spherical and Cartesian bases} Let us consider the~relationships between fermion-monopole functions in the~spherical and Cartesian bases. First, we look at the~connection between the~$D$-functions used above and the~so-called spinor monopole harmonics. To this end, one ought to perform successively two translations: from the~spherical tetrad and the~Weyl 2-spinor frame in bispinor space into, respectively, the~Cartesian tetrad and the~so-called Pauli (bispinor) frame. It is convenient to accomplish these translations first for a~free electron function, so as then to follow the~same pattern in the~monopole case. So, subjecting the~free electron function to the~local bispinor gauge transformation (associated with the~change $sph.
\;\; \rightarrow \;\; Cart.$) $$ \Phi _{Cart.} = \left ( \begin{array}{cc} U^{-1} & 0 \\ 0 & U^{-1} \end{array} \right ) \; \Psi_{sph.} , \qquad U^{-1} = \left ( \begin{array}{lr} \cos \theta /2 \; e^{-i\phi /2} & - \sin \theta /2 \; e^{-i\phi /2} \\ \sin \theta /2 \; e^{+i\phi /2} & \cos \theta /2 \; e^{+i\phi /2} \end{array} \right ) $$ \noindent and further, taking the~bispinor frame from the~Weyl 2-spinor form into the~Pauli one, $$ \Phi ^{Pauli}_{Cart.} = \left ( \begin{array}{c} \varphi \\ \chi \end{array} \right ) , \qquad \Phi^{Weyl}_{Cart.} = \left ( \begin{array}{c} \xi \\ \eta \end{array} \right ) , \qquad \varphi = { \xi + \eta \over \sqrt{2}} ,\;\; \chi = { \xi - \eta \over \sqrt{2}} $$ \noindent we get $$ \varphi = \left [\; {f_{1} + f_{3} \over \sqrt{2} } \; \left ( \begin{array}{c} \cos \theta /2 \; e^{-i\phi /2} \\ \sin \theta /2 \; e^{+i\phi /2} \end{array} \right ) \; D_{-1/2} \; + \; { f_{2} + f_{4} \over \sqrt{2}} \left ( \begin{array}{c} -\sin \theta /2 \; e^{-i\phi /2} \\ \cos \theta /2 \; e^{+i\phi /2} \end{array} \right ) \; D_{+1/2} \; \right ] ; \eqno(A.1a) $$ $$ \chi =\left [\; {f_{1} - f_{3} \over \sqrt{2} } \; \left ( \begin{array}{c} \cos \theta /2 \; e^{-i\phi /2} \\ \sin \theta /2 \; e^{+i\phi /2} \end{array} \right ) \; D_{-1/2} \; + \; { f_{2} - f_{4} \over \sqrt{2}} \left ( \begin{array}{c} -\sin \theta /2 \; e^{-i\phi /2} \\ \cos \theta /2 \; e^{+i\phi /2} \end{array} \right ) \; D_{+1/2}\; \right ] ; \eqno(A.1b) $$ \noindent Further, for the~above solutions with fixed eigenvalues of the~$P$-operator, we produce $$ P=(-1)^{j+1} : \Phi ^{Pauli}_{Cart.} = {e^{-i\epsilon t} \over r \sqrt{2}} \; \left ( \begin{array}{c} (f_{1} + f_{2}) ( \chi _{+1/2} \; D_{-1/2} + \chi _{-1/2} \;D_{+1/2} ) \\ (f_{1} - f_{2}) ( \chi _{+1/2} \; D_{-1/2} - \chi _{-1/2} \; D_{+1/2}) \end{array} \right ) \eqno(A.2a) $$ $$ P = (-1)^{j} : \;\;\; \Phi ^{Pauli}_{Cart.} = { e^{-i\epsilon t} \over r \sqrt{2}} \; \left ( \begin{array}{c} (f_{1} - f_{2}) ( \chi _{+1/2} \; D_{-1/2} - \chi _{-1/2} \; D_{+1/2}) \\ (f_{1} + f_{2}) ( \chi _{+1/2} \; D_{-1/2} + \chi _{-1/2} \; D_{+1/2} ) \end{array} \right ) \eqno(A.2b) $$ \noindent where $\chi _{+1/2}$ and $\chi _{-1/2}$ designate the~columns of the~matrix $U^{-1}(\theta ,\phi )$ (in the literature they are termed helicity spinors) $$ \chi _{+1/2} = \left ( \begin{array}{c} \cos \theta /2 \; e^{-i\phi /2} \\ \sin \theta /2 \; e^{+i\phi /2} \end{array} \right ) , \qquad \chi _{-1/2} = \left ( \begin{array}{c} -\sin \theta /2 \; e^{-i\phi /2} \\ \cos \theta /2 \; e^{+i\phi /2} \end{array} \right ) .
\eqno(A.2c) $$ \noindent Now, using the~known expansions of the~spherical spinors $\Omega ^{j\pm 1/2}_{jm}(\theta ,\phi )$ in terms of $\chi _{\pm 1/2}$ and $D$-functions [66]: $$ \Omega ^{(+)}_{jm} = (-1)^{m+1/2} \sqrt{(2j+1)/8\pi}\; ( + \chi _{+1/2} \; D_{-1/2} \; + \; \chi _{-1/2} \; D_{+1/2}) , $$ $$ \Omega ^{(-)}_{jm} = (-1)^{m+1/2} \sqrt{(2j+1)/8\pi} \; ( - \chi _{+1/2} \; D_{-1/2} \; + \; \chi _{-1/2} \; D_{+1/2}) $$ \noindent we eventually arrive at the~common representation of the~spinor spherical solutions: $$ P = (-1)^{j+1} : \qquad \Phi ^{Pauli}_{Cart.}= {e^{-i\epsilon t} \over r} \; \left ( \begin{array}{r} + f(r) \; \Omega ^{(+)}_{jm} (\theta ,\phi ) \\ - i\;g(r)\; \Omega ^{(-)}_{jm}(\theta ,\phi ) \end{array} \right ) ; \eqno(A.3a) $$ $$ P = (-1)^{j} : \qquad \Phi ^{Pauli}_{Cart.} = {e^{-i\epsilon t} \over r} \; \left ( \begin{array}{r} -i\; g(r) \; \Omega ^{(-)}_{jm} (\theta ,\phi ) \\ f(r) \; \Omega ^{(+)}_{jm}(\theta ,\phi ) \end{array} \right ) . \eqno(A.3b) $$ The Abelian monopole situation can be considered in the same way. As a~result, we produce the~following representation of the~monopole-electron functions in terms of `new' angular harmonics ($k \equiv e g$) $$ M = (-1)^{j+1} : \qquad \Phi ^{Pauli}_{Cart.} = {e^{-i\epsilon t} \over r} \; \left ( \begin{array}{r} + f(r) \; \xi ^{(1)}_{jmk} (\theta ,\phi ) \\ -i \; g(r) \; \xi ^{(2)}_{jmk}(\theta ,\phi ) \end{array} \right ) ; \eqno(A.4a) $$ $$ M = (-1)^{j} : \qquad \Phi^{Pauli}_{(eg)Cart.} ={ e^{-i\epsilon t} \over r} \; \left ( \begin{array}{r} -i \; g(r)\; \xi ^{(1)}_{jmk}(\theta ,\phi ) \\ +f(r) \; \xi ^{(2)}_{jmk}(\theta ,\phi ) \end{array} \right ) . \eqno(A.4b) $$ \noindent Here, the two column functions $\xi ^{(1)}_{jmk} (\theta ,\phi )$ and $\xi ^{(2)}_{jmk} (\theta ,\phi )$ denote the~special combinations of $\chi _{\pm 1/2}(\theta ,\phi )$ and $D_{-m,eg/hc\pm 1/2}(\phi ,\theta ,0)$: $$ \xi ^{(1)}_{jmk} = ( + \chi _{-1/2}\; D_{k+1/2} + \chi _{+1/2} \;D_{k-1/2} ) \; , \qquad \xi ^{(2)}_{jmk} = ( + \chi _{-1/2} \; D_{k+1/2}- \chi _{+1/2} \; D_{k-1/2} ) \eqno(A.5) $$ \noindent which should be compared with the~analogous expansions of $\Omega ^{j\pm 1/2}_{jm}(\theta ,\phi )$ above. These 2-component, $(\theta ,\phi )$-dependent functions $\xi ^{(1)}_{jmk}(\theta ,\phi)$ and $\xi ^{(2)}_{jmk}(\theta ,\phi)$ are precisely what is called spinor monopole harmonics. It may be useful to write down the~detailed explicit form of these generalized harmonics. Given the~known expressions for the~$\chi $- and $D$-functions, the~formulas above yield $$ \xi ^{(1,2)}_{jmk}(\theta ,\phi ) = \left [ \; e^{im\phi } \; \left ( \begin{array}{r} -\sin \theta /2 \; e^{-i\phi /2} \\ \cos \theta /2 \; e^{+i\phi /2} \end{array} \right ) \; d^{j}_{-m,k+1/2} (\cos \theta ) \; \pm \right. $$ $$ \left. e^{im\phi } \; \left ( \begin{array}{c} \cos \theta /2 \; e^{-i\phi /2} \\ \sin \theta /2 \; e^{+i\phi /2} \end{array} \right ) \; d^{j}_{-m,k-1/2} (\cos \theta ) \; \right ] \eqno(A.6) $$ \noindent here the~signs $+$ (plus) and $-$ (minus) refer to $\xi ^{(1)}$ and $\xi ^{(2)}$, respectively. One can equally well work either in terms of the~monopole harmonics $\xi ^{(1,2)}(\theta ,\phi )$ or directly in terms of $D$-functions; the~latter alternative has an~advantage over the~former because of the~straightforward access to the~`unlimited' $D$-function apparatus, instead of proving and reproducing old results in a~disguised form.
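As an~elementary consistency check (recorded only for orientation, and using nothing beyond the~formulas already displayed), note that at $k = 0$ the~combinations (A.5) reduce, up to an~overall numerical factor, to the~ordinary spherical spinors written out above:
$$ \xi ^{(1)}_{jm,k=0} \; = \; (-1)^{m+1/2} \; \sqrt{ 8\pi /(2j+1)} \;\; \Omega ^{(+)}_{jm} \; , \qquad \xi ^{(2)}_{jm,k=0} \; = \; (-1)^{m+1/2} \; \sqrt{ 8\pi /(2j+1)} \;\; \Omega ^{(-)}_{jm} \; ,
$$
\noindent so the~generalized harmonics $\xi ^{(1,2)}_{jmk}(\theta ,\phi )$ may indeed be regarded as the~$k \neq 0$ counterparts of $\Omega ^{(\pm)}_{jm}(\theta ,\phi )$.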
Above, at translating the electron-monopole functions into the Cartesian tetrad and the Pauli spin frame, we have so far left aside the case of minimal $j$. Turning to it, a~straightforward calculation yields (for $k > 0$ and $k < 0$, respectively) $$ k > 0 \; : \qquad \Phi ^{(eg)Cart.}_{j_{min.}} \; = \; {e^{-i\epsilon t} \over \sqrt{2} r} \; \left ( \begin{array}{c} ( f_{1} + f_{3}) \\ ( f_{1} - f_{3}) \end{array} \right ) \; \chi _{+1/2}(\theta ,\phi)\; D^{\mid k \mid -1/2}_{-m,k-1/2} (\theta ,\phi ,0) ; \eqno(A.7a) $$ $$ k < 0 \; : \qquad \Phi ^{(eg)Cart.}_{j_{min.}} \; = \; {e^{-i\epsilon t} \over \sqrt{2} r} \; \left ( \begin{array}{c} ( f_{2} + f_{4}) \\ ( f_{2} - f_{4}) \end{array} \right ) \; \chi _{-1/2}(\theta ,\phi)\; D^{\mid k \mid -1/2} _{-m,k+1/2} (\theta ,\phi ,0) \; . \eqno(A.7b) $$ In addition, it is now a~convenient point to clarify how the~gauge transformation $U^{-1}(\theta, \phi)$ used above, not being a~single-valued matrix function of the~spatial points, affects the~continuity (or discontinuity) properties of the~wave functions under consideration. Returning to the~$\varphi(x)$ from (A.1a) at the~points $\theta = 0, \pi$ (the functions $\chi(x)$ from (A.1b) behave completely analogously), one gets $$ \varphi (\theta = 0) \; \sim \; \left [\; {f_{1} + f_{3} \over \sqrt{2} } \; \left ( \begin{array}{c} e^{-i\phi /2} \\ 0 \end{array} \right ) \; D^{j}_{-m,-1/2}(\phi, \theta = 0,0) \; + \right. $$ $$ \left. { f_{2} + f_{4} \over \sqrt{2}} \left ( \begin{array}{c} 0 \\ e^{+i\phi /2} \end{array} \right ) \; D^{j}_{-m, +1/2}(\phi, \theta = 0,0) \; \right ] \; ; \eqno(A.8a) $$ $$ \varphi(\theta = \pi ) \; \sim \; \left [ \; {f_{1} + f_{3} \over \sqrt{2} } \; \left ( \begin{array}{c} 0 \\ e^{+i\phi /2} \end{array} \right ) \; D^{j}_{-m,-1/2}(\phi, \theta = \pi, 0) \; + \right. $$ $$ \left. { f_{2} + f_{4} \over \sqrt{2}} \left ( \begin{array}{c} e^{-i\phi /2} \\ 0 \end{array} \right ) \; D^{j}_{-m, +1/2}(\phi, \theta = \pi, 0) \; \right ] \; . \eqno(A.8b) $$ \noindent Further, allowing for the~relevant relations from Tables $1a,b$, one finds that $\varphi(\theta = 0)$ and $\varphi(\theta = \pi)$ are single-valued functions. In turn, for the~monopole case, in place of (A.8a,b) one gets (for definiteness, let $eg = +1/2$) $$ \varphi^{(eg=+1/2)} (\theta = 0) \; \sim \; \left [\; {f_{1} + f_{3} \over \sqrt{2} } \; \left ( \begin{array}{c} e^{-i\phi /2} \\ 0 \end{array} \right ) \; D^{j}_{-m,0}(\phi, \theta = 0,0) \; + \right. $$ $$ \left. { f_{2} + f_{4} \over \sqrt{2}} \left ( \begin{array}{c} 0 \\ e^{+i\phi /2} \end{array} \right ) \; D^{j}_{-m, +1}(\phi, \theta = 0,0) \; \right ] \; ; \eqno(A.9a) $$ $$ \varphi^{(eg=+1/2)}(\theta = \pi ) \; \sim \; \left [ \; {f_{1} + f_{3} \over \sqrt{2} } \; \left ( \begin{array}{c} 0 \\ e^{+i\phi /2} \end{array} \right ) \; D^{j}_{-m,0}(\phi, \theta = \pi, 0) \; + \right. $$ $$ \left. { f_{2} + f_{4} \over \sqrt{2}} \left ( \begin{array}{c} e^{-i\phi /2} \\ 0 \end{array} \right ) \; D^{j}_{-m, +1}(\phi, \theta = \pi, 0) \; \right ] \; . \eqno(A.9b) $$ \noindent From this, allowing for the~relations from Tables $2a,b$, one finds that the~totality of all $\varphi^{(eg=+1/2)}(x)$ consists of both regular and non-regular (non-single-valued) functions at the~$x_{3}$ axis; the latter behave like $e^{-i\phi /2}$ and $e^{+i\phi /2}$ at the~half-axes $\theta = 0$ and $\theta = \pi$, respectively.
The case of minimal $j$ follows the~same behavior: for example, if $eg = \pm1/2$ then one gets $$ eg = +1/2 \;\; : \qquad \Phi ^{(eg=+1/2)Cart.}_{j_{min.}} \; = \; {e^{-i\epsilon t} \over \sqrt{2} r} \; \left ( \begin{array}{c} ( f_{1} + f_{3}) \\ ( f_{1} - f_{3}) \end{array} \right ) \; \chi _{+1/2}(\theta ,\phi)\; \; ; \eqno(A.10a) $$ $$ eg = -1/2 \;\; :\qquad \Phi ^{(eg=-1/2)Cart.}_{j_{min.}} = {e^{-i\epsilon t} \over \sqrt{2} r} \; \left ( \begin{array}{c} ( f_{2} + f_{4}) \; \\ ( f_{2} - f_{4}) \end{array} \right ) \; \chi _{-1/2}(\theta ,\phi) \; . \eqno(A.10b) $$ Now, let us turn to the non-Abelian doublet-fermion case. The task is to translate the~composite functions $\Psi ^{C. sph.(A)}_{\epsilon jm\delta}$ (see (6.9)) into the Cartesian tetrad basis: $$ \Psi^{C. sph.}_{\epsilon jm\delta} \qquad \rightarrow \qquad \Psi ^{C. Cart.}_{\epsilon jm\delta} = \left ( \begin{array}{c} \Sigma^{(+)}_{\epsilon jm\delta}(x) \\ \Sigma^{(-)}_{\epsilon jm\delta}(x) \end{array} \right ) $$ \noindent where the~2-component structure in Lorentzian space is explicitly detailed. For those composite two-column functions $\Sigma^{(\pm)}_{\epsilon jm\delta}(x)$ one gets $$ \Sigma^{(\pm)}_{\epsilon jm\delta}(x) \; = \; {e^{-i\epsilon t}\over r}\; \times \eqno(A.11) $$ $$ \left \{\; T_{+1/2} \otimes \; {\sqrt{j+m} \over 2j+1} {1 \over \sqrt{2}} \; \left [\; (\;K^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; L^{-A}_{\delta}\;) \; \chi_{+1/2} \; D^{j-1/2}_{-m+1/2,-1/2} \;\; + \right. \right. $$ $$ \left. (\; L^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; K^{-A}_{\delta}\;) \; \chi_{-1/2} \; D^{j-1/2}_{-m+1/2,+1/2} \; \right ] \;\; + $$ $$ T_{+1/2} \otimes \; {\sqrt{j-m+1} \over 2j+1} \;{1 \over \sqrt{2}} \; \left [ \; (\;M^{A}_{-\delta} \; \mp \; \delta\; e^{iA}\; N^{-A}_{-\delta}\;) \; \chi_{+1/2} \; D^{j+1/2}_{-m+1/2,-1/2} \;\; + \right. $$ $$ \left. (\; N^{A}_{-\delta} \; \mp \; \delta\; e^{iA}\; M^{-A}_{-\delta}\;) \; \chi_{-1/2} \; D^{j+1/2}_{-m+1/2,+1/2} \; \right ] \;\; + $$ $$ T_{-1/2} \otimes \; {\sqrt{j-m} \over 2j+1} {1 \over \sqrt{2}} \; \left[ \; (-\;K^{A}_{-\delta} \; \pm \; \delta\; e^{iA}\; L^{-A}_{-\delta}\;) \; \chi_{+1/2} \; D^{j-1/2}_{-m-1/2,-1/2} \;\; + \right. $$ $$ \left. (-\; L^{A}_{-\delta} \; \pm \; \delta\; e^{iA}\; K^{-A}_{-\delta}\;) \; \chi_{-1/2} \; D^{j-1/2}_{-m-1/2,+1/2} \; \right ] \;\; + $$ $$ T_{-1/2} \otimes \; {\sqrt{j+m+1} \over 2j+1} \;{1 \over \sqrt{2}} \; \left [\; (\;M^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; N^{-A}_{\delta}\;) \; \chi_{+1/2} \; D^{j+1/2}_{-m-1/2,-1/2} \;\; + \right. $$ $$ \left. \left. (\; N^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; M^{-A}_{\delta}\;) \; \chi_{-1/2} \; D^{j+1/2}_{-m-1/2,+1/2} \; \right ] \; \right \} \; . 
$$ \noindent With the~use of four formulas [16] $$ \chi_{\pm1/2} \; D^{j-1/2}_{-m+1/2,\mp1/2} \; = \; (\; \Omega ^{(+)}_{j-1/2,m-1/2} \; \mp \; \Omega^{(-)}_{j-1/2,m-1/2} \;) \; {\sqrt{4\pi} \over 2 \sqrt{j}(-1)^{m} } \; , $$ $$ \chi_{\pm1/2} \; D^{j+1/2}_{-m+1/2,\mp1/2} \; = \; (\; \Omega ^{(+)}_{j+1/2,m-1/2} \; \mp \;\Omega^{(-)}_{j+1/2,m-1/2} \;) \; {\sqrt{4\pi} \over 2 \sqrt{j+1}(-1)^{m} } \; , $$ $$ \chi_{\pm1/2} \; D^{j-1/2}_{-m-1/2,\mp1/2} \; = \; ( \; \Omega ^{(+)}_{j-1/2,m+1/2} \; \mp \;\Omega^{(-)}_{j-1/2,m+1/2} \;) \; {\sqrt{4\pi} \over 2 \sqrt{j}(-1)^{m+1} } \; , $$ $$ \chi_{\pm1/2} \; D^{j+1/2}_{-m-1/2,\mp1/2} \; = \; ( \; \Omega ^{(+)}_{j+1/2,m+1/2} \; \mp \;\Omega^{(-)}_{j+1/2,m+1/2} \;) \; {\sqrt{4\pi} \over 2 \sqrt{j+1} (-1)^{m+1} } \; $$ \noindent the~above expression (A.11) for $\Sigma^{(\pm)}_{\epsilon jm\delta}(x)$ can be rewritten in the~form $$ \Sigma^{(\pm)}_{\epsilon jm\delta}(x) \; = \; {e^{-i\epsilon t}\over r}\; \times \eqno(A.12) $$ $$ T_{+1/2} \otimes \; B \;\; \left \{ \;\; \left [ \; (\;K^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; K^{-A}_{\delta}\;) \; + \; (\;L^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; L^{-A}_{\delta}\;)\; \right ] \; \Omega^{(+)}_{j-1/2,m-1/2} \; + \right. $$ $$ \left. \left [ \; (\;-K^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; K^{-A}_{\delta}\;) \; + \; (\;L^{A}_{\delta} \; \mp \; \delta\; e^{iA}\; L^{-A}_{\delta}\;) \;\right ]\; \Omega^{(-)}_{j-1/2,m-1/2} \;\; \right \} \;\; + $$ $$ T_{+1/2} \otimes \; C \;\; \left \{ \;\; \left [ \;(\;M^{A}_{-\delta} \; \mp \; \delta\; e^{iA}\; M^{-A}_{-\delta}\;) \; + \; (\;N^{A}_{-\delta} \; \mp \; \delta\; e^{iA}\; N^{-A}_{-\delta} \;) \;\right ] \; \Omega^{(+)}_{j+1/2,m-1/2} \; + \right. $$ $$ \left. \left [\; (\;-M^{A}_{-\delta} \; \mp \; \delta\; e^{iA}\; M^{-A}_{-\delta}\;) \; + \; (\;N^{A}_{-\delta} \; \pm \; \delta\; e^{iA}\; N^{-A}_{-\delta}\;) \;\right ] \; \Omega^{(-)}_{j+1/2,m-1/2} \;\; \right \} \;\; + $$ $$ T_{-1/2} \otimes \; D \;\; \left \{ \;\; \left [\; (\;-K^{A}_{-\delta} \; \pm \; \delta\; e^{iA}\; K^{-A}_{-\delta}\;) \; + \; (\;-L^{A}_{-\delta} \; \pm \; \delta\; e^{iA}\; L^{-A}_{-\delta}\;)\; \right ] \; \Omega^{(+)}_{j-1/2,m+1/2} \; + \right. $$ $$ \left. \left [\; (\;K^{A}_{-\delta} \; \pm \; \delta\; e^{iA}\; K^{-A}_{-\delta}\;) \; + \; (\;-L^{A}_{-\delta} \; \mp \; \delta\; e^{iA}\; L^{-A}_{-\delta}\;) \;\right ] \; \Omega^{(-)}_{j-1/2,m+1/2} \;\; \right \} \;\; + $$ $$ T_{-1/2} \otimes \; E \;\; \left \{ \;\; \left [\; (\;M^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; M^{-A}_{\delta}\;) \; + \; (\;N^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; N^{-A}_{\delta}\;)\; \right ] \; \Omega^{(+)}_{j+1/2,m+1/2} \; + \right. $$ $$ \left. [\; (\;-M^{A}_{\delta} \; \pm \; \delta\; e^{iA}\; M^{-A}_{\delta}\;) \; + \; (\;N^{A}_{\delta} \; \mp \; \delta\; e^{iA}\; N^{-A}_{\delta}\;) \;] \; \Omega^{(-)}_{j+1/2,m+1/2} \;\; \right \} $$ \noindent where the~symbols $B, C, D, E $ denote respectively $$ B \; = \; {\sqrt{j+m} \over 2j+1} \; {1 \over \sqrt{2}} \; {\sqrt{4\pi} \over 2 \sqrt{j}(-1)^{m} } \; , \;\; C\; = \; {\sqrt{j-m+1} \over 2j+1} \;{1 \over \sqrt{2}} \; {\sqrt{4\pi} \over 2 \sqrt{j+1}(-1)^{m} } \; , $$ $$ D \; = \; {\sqrt{j-m} \over 2j+1} \; {1 \over \sqrt{2}} \; {\sqrt{4\pi} \over 2 \sqrt{j}(-1)^{m+1} } \; , \;\; E \; =\; {\sqrt{j+m+1} \over 2j+1} \; {1 \over \sqrt{2}} \; {\sqrt{4\pi} \over 2 \sqrt{j+1} (-1)^{m+1} } \;\; . 
$$ The~representation (A.12) simplifies significantly if $A=0$; one finds $$ A=0 \; : \qquad \Sigma^{(\pm)}_{\epsilon jm\delta}(x) \; = \; {e^{-i\epsilon t}\over r}\; \times \eqno(A.13) $$ $$ \left [ \; T_{+1/2} \otimes \; B \;\; (\;\pm \; \delta\; K_{\delta}\; + \; L_{\delta} \; ) \; \Omega^{(\pm \delta)}_{j-1/2,m-1/2} \; + \right. $$ $$ T_{+1/2} \otimes \; C \;\; ( \;\mp \delta \; M_{-\delta} \; + \; N_{-\delta} \;) \; \Omega^{(\mp \delta )}_{j+1/2,m-1/2} \; + $$ $$ T_{-1/2} \otimes \; D \;\; (\;\pm \delta \; K_{-\delta} \; - \;L_{-\delta} \; ) \; \Omega^{(\mp \delta)}_{j-1/2,m+1/2} \; + $$ $$ \left. T_{-1/2} \otimes \; E \;\; (\;\pm \delta M_{\delta} \; + \; N_{\delta} \;) \; \Omega^{(\pm \delta)}_{j+1/2,m+1/2} \; \right ] \; . $$ In particular, the~formula (A.13) explicitly exhibits the~Abelian fermion-like sub-structure that stems from the~Abelian-like $P$-inversion operation (6.4). \end{document}
\begin{document} \title{$p$-adic heights of generalized Heegner cycles} \date{\today} \author{Ariel Shnidman} \address{Department of Mathematics, Boston College, Chestnut Hill, MA 02467-3806} \email{[email protected]} \begin{abstract} We relate the $p$-adic heights of generalized Heegner cycles to the derivative of a $p$-adic $L$-function attached to a pair $(f, \chi)$, where $f$ is an ordinary weight $2r$ newform and $\chi$ is an unramified imaginary quadratic Hecke character of infinity type $(\ell,0)$, with $0 < \ell < 2r$. This generalizes the $p$-adic Gross-Zagier formula in the case $\ell = 0$ due to Perrin-Riou (in weight two) and Nekov\'a\v{r} (in higher weight). \end{abstract} \maketitle \tableofcontents \section{Introduction} Let $p$ be an odd prime, $N \geq 3$ an integer prime to $p$, and $f = \sum a_n q^n$ a newform of weight $2r> 2$ on $X_0(N)$ with $a_1 = 1$. Fix embeddings $\bar{\mathbb{Q}} \to \mathbb{C}$ and $\bar{\mathbb{Q}} \to \bar{\mathbb{Q}}_p$ once and for all, and suppose that $f$ is ordinary at $p$, i.e. the coefficient $a_p\in \bar{\mathbb{Q}}_p$ is a $p$-adic unit. Building on work of Perrin-Riou \cite{PR1}, Nekov\'a\v{r} \cite{Nek} proved a $p$-adic analogue of the Gross-Zagier formula \cite{GZ} for $f$ along with any character $\chi: {\rm Gal}(H/K) \to \bar{\mathbb{Q}}^\times$. Here, $K$ is an imaginary quadratic field of odd discriminant $D$ such that all primes dividing $pN$ split in $K$, and $H$ is the Hilbert class field of $K$. Nekov\'a\v{r}'s formula relates the $p$-adic height of a Heegner cycle to the derivative of a $p$-adic $L$-function attached to the pair $(f, \chi)$. Together with the Euler system constructed in \cite{NekEuler}, the formula implies a weak form of Perrin-Riou's conjecture \cite[Conj.\ 2.7]{colmez}, a $p$-adic analogue of the Bloch-Kato conjecture for the motive $f \otimes K$ \cite[Theorem B]{Nek}. The connection between special values of $L$-functions and algebraic cycles is part of a very general (conjectural) framework articulated in the works of Beilinson, Bloch, Kato, Perrin-Riou, and others. Despite the fact that these conjectures can be formulated for arbitrary motives, they have been verified only in very special cases. The goal of this paper is to extend the ideas and computations in \cite{Nek} to a larger class of motives. Specifically, we will consider motives of the form $f \otimes\, \Theta_\chi$, where \[\chi:\mathbb{A}_K^\times/K^\times \to \mathbb{C}^\times\] is an unramified Hecke character of infinity type $(\ell,0)$, with $0 < \ell = 2k < 2r$, and \[\Theta_\chi = \sum_{\mathfrak{a} \subset \mathcal{O}_K} \chi(\mathfrak{a})q^{\mathbf{N}\mathfrak{a}}\] is the associated theta series.
The conditions on $\ell$ guarantee that the Hecke character $\chi_0 := \chi^{-1}\mathbf{N}^{r+k}$ of infinity type $(r+k, r-k)$ is critical in the sense of \cite[\S4]{BDP1}. Note that $L(f,\chi_0^{-1}, 0) = L(f, \chi, r+k)$ is the central value of the Rankin-Selberg $L$-function attached to $f \otimes \Theta_\chi$. If we take $\ell = 0$, then $\chi$ comes from a character of ${\rm Gal}(H/K)$, so we are in the situation of \cite{Nek}. Our main result (Theorem \ref{main}) extends Nekov\'a\v{r}'s formula to the case $\ell > 0$ by relating $p$-adic heights of \textit{generalized} Heegner cycles to the derivative of a $p$-adic $L$-function attached to the pair $(f,\chi)$. We now describe both the algebraic cycles and the $p$-adic $L$-function needed to state the formula. \subsection{Generalized Heegner cycles} Let $Y(N)/\mathbb{Q}$ be the modular curve parametrizing elliptic curves with full level $N$ structure, and let $\mathcal{E} \to Y(N)$ be the universal elliptic curve with level $N$ structure. Denote by $W = W_{2r-2}$ the canonical non-singular compactification of the $(2r-2)$-fold fiber product of $\mathcal{E}$ with itself over $Y(N)$ \cite{Sch}. Finally, let $A/H$ be an elliptic curve with complex multiplication by $\mathcal{O}_K$ and good reduction at primes above $p$. We assume further that $A$ is isogenous (over $H$) to each of its ${\rm Gal}(H/K)$-conjugates $A^\sigma$ and that $A^\tau \cong A$, where $\tau$ is complex conjugation. Such an $A$ exists since $K$ has odd discriminant \cite[\S 11]{Gr}. Set $X = W_H \times_H A^\ell$, where $W_H$ is the base change to $H$. $X$ is fibered over the compactified modular curve $X(N)_H$, the typical geometric fiber being of the form $E^{2r-2} \times A^\ell$, for some elliptic curve $E$. The $(2r + 2k - 1)$-dimensional variety $X$ contains a rich supply of \textit{generalized} Heegner cycles supported in the fibers of $X$ above Heegner points on $X_0(N)$ (we view $X$ as fibered over $X_0(N)$ via $X(N) \to X_0(N)$). These cycles were first introduced by Bertolini, Darmon, and Prasanna in \cite{BDP1}. In Section \ref{cycles}, we define certain cycles $\epsilon_B\epsilon Y$ and $\epsilon_B\bar\epsilon Y$ in $\mathrm{CH}^{r+k}(X)_K$ which sit in the fiber above a Heegner point on $X_0(N)(H)$, and which are variants of the generalized Heegner cycles which appear in \cite{BDP3}. Here, $\mathrm{CH}^{r+k}(X)_K$ is the group of codimension $r+k$ cycles on $X$ with coefficients in $K$ modulo rational equivalence. In fact, for each ideal $\mathfrak{a}$ of $K$, we define cycles $\epsilon_B\epsilon Y^{\mathfrak{a}}$ and $\epsilon_B\bar\epsilon Y^{\mathfrak{a}}$ in $\mathrm{CH}^{r+k}(X)_K$, each one sitting in the fiber above a Heegner point. These cycles are replacements for the notion of ${\rm Gal}(H/K)$-conjugates of $\epsilon_B\epsilon Y$ and $\epsilon_B\bar\epsilon Y$. The latter do not exist as cycles on $X$, as $X$ is not (generally) defined over $K$. In particular, we have $\epsilon_B\epsilon Y^{\mathcal{O}_K} =\epsilon_B \epsilon Y$.
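A small bookkeeping remark, included only for the reader's convenience and using nothing beyond the definitions just given: since $W = W_{2r-2}$ fibers over a curve with $(2r-2)$-dimensional fibers, we have $\dim W = 2r-1$ and hence $$\dim X = \dim W_H + \ell = (2r-1) + 2k = 2r+2k-1,$$ so a cycle of codimension $r+k$ on $X$, such as $\epsilon_B\epsilon Y^{\mathfrak{a}}$, has dimension $r+k-1$, which is exactly half the dimension $2r-2+2k$ of the fiber $E^{2r-2}\times A^\ell$ in which it sits.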
The cycles $\epsilon_B\epsilon Y^{\mathfrak{a}}$ and $\epsilon_B\bar\epsilon Y^{\mathfrak{a}}$ are homologically trivial on $X$ (Corollary \ref{homtriv}), so they lie in the domain of the $p$-adic Abel-Jacobi map $$\Phi: \mathrm{CH}^{r+k}(X)_{0,K} \to H^1(H,V),$$ where $V$ is the ${\rm Gal}(\bar H/H)$-representation $H_{\rm et}^{2r+2k-1}(\bar X, \mathbb{Q}_p)(r+k)$. We will focus on a particular 4-dimensional $p$-adic representation $V_{f, A, \ell}$, which admits a map $$H_{\rm et}^{2r+2k-1}(\bar X, \mathbb{Q}_p)(r+k) \to V_{f,A,\ell}.$$ $V_{f,A,\ell}$ is a $\mathbb{Q}_p(f)$-vector space, where $\mathbb{Q}_p(f)$ is the field obtained by adjoining the coefficients of $f$. As a Galois representation, $V_{f,A,\ell}$ is ordinary (Theorem \ref{ord}) and is closely related to the $p$-adic realization of the motive $f \otimes \Theta_\chi$ (see Section \ref{cycles}). After projecting, one obtains a map $$\Phi_f: \mathrm{CH}^{r+k}(X)_{0,K} \to H^1(H,V_{f,A,\ell}),$$ which we again call the Abel-Jacobi map. For any ideal $\mathfrak{a}$ of $K$, define $z_f^{\mathfrak{a}} = \Phi_f(\epsilon_B\epsilon Y^{\mathfrak{a}})$ and $\bar z_f^{\mathfrak{a}} = \Phi_f(\epsilon_B\bar \epsilon Y^{\mathfrak{a}})$. One knows that the image of $\Phi_f$ lies in the Bloch-Kato subgroup \[H^1_f(H,V_{f,A,\ell}) \subset H^1(H,V_{f,A,\ell})\] (Theorem \ref{AJ}). If we fix a continuous homomorphism $\ell_K: \mathbb{A}^\times_K/K^\times \to \mathbb{Q}_p$, then \cite{Nekhts} provides a symmetric $\mathbb{Q}_p(f)$-linear height pairing $$\langle \, \,, \, \rangle_{\ell_K} : H^1_f(H, V_{f,A,\ell}) \times H^1_f(H,V_{f,A,\ell}) \to \mathbb{Q}_p(f).$$ We can extend this height pairing $\bar{\mathbb{Q}}_p$-linearly to $H^1_f(H, V_{f,A,\ell}) \otimes \bar{\mathbb{Q}}_p$. The cohomology classes $\chi(\mathfrak{a})^{-1}z_f^{\mathfrak{a}}$ and $\bar\chi(\mathfrak{a})^{-1}\bar z_f^{\mathfrak{a}}$ depend only on the class $\mathcal{A}$ of $\mathfrak{a}$ in the class group $\mathrm{Pic}(\mathcal{O}_K)$, of size $h = h_K$. We denote the former by $z_{f,\chi}^{\mathcal{A}}$ and the latter by $z_{f,\bar\chi}^{\mathcal{A}}$.
Finally, set \begin{equation*}z_{f,\chi} = \frac{1}{h}\sum_{\mathcal{A} \in \mathrm{Pic}(\mathcal{O}_K)} z_{f,\chi}^{\mathcal{A}} \hspace{5mm} \mbox{and} \hspace{5mm} z_{f,\bar \chi} = \frac{1}{h}\sum_{\mathcal{A} \in \mathrm{Pic}(\mathcal{O}_K)} z_{f,\bar\chi}^{\mathcal{A}},\end{equation*} both being elements of $H^1_f(H,V_{f,A,\ell}) \otimes \bar{\mathbb{Q}}_p.$ Our main theorem relates $\langle z_{f,\chi}, z_{f, \bar \chi}\rangle_{\ell_K}$ to the derivative of a $p$-adic $L$-function which we now describe. \subsection{The $p$-adic $L$-function} Recall, if $f =\sum a_n q^n \in M_j(\Gamma_0(M), \psi)$ and $g = \sum b_n q^n \in M_{j'}(\Gamma_0(M), \xi)$, then the Rankin-Selberg convolution is $$L(f,g,s) = L_M(2s+2 - j - j', \psi\xi)\sum_{n\geq 1} a_nb_nn^{-s},$$ where $$L_M(s, \psi\xi) = \prod_{p \nmid M} \left(1 - (\psi\xi)(p)p^{-s}\right)^{-1}.$$ Let $K_\infty / K$ be the $\mathbb{Z}_p^2$-extension of $K$ and let $K_p$ be the maximal abelian extension of $K$ unramified away from $p$. In Section 2, we define a $p$-adic $L$-function $L_p(f \otimes \chi)(\lambda)$, which is a $\bar{\mathbb{Q}}_p$-valued function of continuous characters $\lambda: {\rm Gal}(K_\infty/K) \to 1 + p\mathbb{Z}_p$.
The function $L_p(f \otimes \chi)$ is the restriction of an analytic function on ${\rm Hom}({\rm Gal}(K_p/K), \mathbb{C}_p^\times)$, which is characterized by the following interpolation property: if $\mathcal{W} : {\rm Gal}(K_p/K) \to \mathbb{C}_p^\times$ is a finite order character of conductor $\mathfrak{f}$, with $\mathbf{N}\mathfrak{f} = p^\beta$, then $$L_p(f \otimes\, \chi)(\mathcal{W}) = C_{f,k}\mathcal{W}(N)\overline{\chi\mathcal{W}}(\mathcal{D})\tau(\chi\mathcal{W})V_p(f,\chi,\mathcal{W})L(f,\Theta_{\overline{\chi \mathcal{W}}}, r + k)$$ with $$C_{f,k} = \frac{2(r-k-1)!(r+k-1)!}{(4\pi)^{2r}\alpha_p(f)^\beta\langle f, f\rangle_N},$$ and where $\alpha_p(f)$ is the unit root of $x^2 - a_p(f) x + p^{2r-1}$, $\langle f, f\rangle_N$ is the Petersson inner product, $\mathcal{D} = \left(\sqrt D\right)$ is the different of $K$, $\Theta_{\overline{\chi\mathcal{W}}}$ is the theta series $$\Theta_{\overline{\chi\mathcal{W}}}\, = \sum_{(\mathfrak{a},\mathfrak{f}) = 1}\overline{\chi\mathcal{W}}(\mathfrak{a})q^{\mathbf{N}\mathfrak{a}},$$ $\tau(\chi\mathcal{W})$ is the root number for $L(\Theta_{\chi\mathcal{W}}, s)$, and $$V_p(f,\chi,\mathcal{W}) = \prod_{\mathfrak{p} | p} \left(1 - \frac{(\bar\chi\bar{\mathcal{W}})(\mathfrak{p})}{\alpha_p(f)}\mathbf{N}(\mathfrak{p})^{r-k-1}\right) \left(1 - \frac{(\chi\mathcal{W})(\mathfrak{p})}{\alpha_p(f)}\mathbf{N}(\mathfrak{p})^{r-k-1}\right).$$ Recall we have fixed a continuous homomorphism $\ell_K: \mathbb{A}^\times_K/K^\times \to \mathbb{Q}_p$. Thinking of $\ell_K$ as a map ${\rm Gal}(K_\infty/K) \to \mathbb{Q}_p$, we may write $\ell_K = p^{-n}\log_p \circ \lambda$, for some continuous $\lambda: {\rm Gal}(K_\infty/K) \to 1+p\mathbb{Z}_p$ and some integer $n$. The derivative of $L_p$ at the trivial character in the direction of $\ell_K$ is by definition $$L_p'(f \otimes \chi, \ell_K,\mathbbm{1}) = p^{-n}\frac{d}{ds}L_p(f \otimes \chi)(\lambda^s)\bigg|_{s=0}.$$ With these definitions, we can finally state our main result.
\begin{theorem}\label{main} If $\chi$ is an unramified Hecke character of $K$ of infinity type $(\ell, 0)$ with $0 < \ell = 2k < 2r$, then $$L_p'(f \otimes \chi,\ell_K, \mathbbm{1}) =(-1)^k \prod_{\mathfrak{p} | p}\left(1 - \frac{\chi(\mathfrak{p})p^{r-k-1}}{\alpha_p(f)} \right)^2\frac{h\left\langle z_{f,\chi}, z_{f,\bar\chi}\right\rangle_{\ell_K}}{u^2\left(4|D|\right)^{r-k-1}},$$ where $h = h_K$ is the class number and $u = \frac{1}{2}\#\mathcal{O}_K^\times$. \end{theorem} \begin{remark}\label{realj} Our assumption that $A^\tau \cong A$ implies that the lattice corresponding to $A$ is 2-torsion in the class group. This is convenient for proving the vanishing of the $p$-adic height in the anti-cyclotomic direction, but not strictly necessary. One should be able to prove the theorem without this assumption by making use of the functoriality of the height pairing to relate heights on $X$ to heights on $X^\tau$, but we omit the details. \end{remark} \begin{remark} When $\ell = 0$ the cycles and the $p$-adic $L$-function simplify to those constructed in \cite{Nek}, and the main theorem becomes Nekov\'a\v{r}'s formula, at least up to a somewhat controversial sign. It appears that a sign was forgotten in \cite[II.6.2.3]{Nek}, causing the discrepancy with our formula and with Perrin-Riou's as well. Perrin-Riou's formula \cite{PR1} covers the case $\ell = 0$ and $r = 1$. \end{remark} \begin{remark} We have assumed $N \geq 3$ for the sake of exposition. For $N < 3$, the proof should be modified to account for the lack of a fine moduli space and extra automorphisms in the local intersection theory. These details are spelled out in \cite{Nek} and pose no new problems. \end{remark} \begin{remark} There should be an archimedean analogue of Theorem \ref{main}, generalizing Zhang's formula for Heegner cycles \cite{Zhang} to the `generalized' situation. The author plans to present such a result in the near future. \end{remark} \subsection{Applications}\label{apps} Theorem \ref{main} implies special cases of Perrin-Riou's $p$-adic Bloch-Kato conjecture. The assumption that $A$ is isogenous to all its ${\rm Gal}(H/K)$-conjugates implies that the Hecke character \[\psi_H: \mathbb{A}^\times_H \to \mathbb{C}^\times,\] which is attached to $A$ by the theory of complex multiplication, factors as $\psi_H = \psi \circ \mathrm{Nm}_{H/K}$, where $\psi$ is a $(1,0)$-Hecke character of $K$. Assume for simplicity that $\chi = \psi^\ell$, and set $\chi_H = \psi_H^\ell$ and $G_H := {\rm Gal}(\bar H/H)$. Then the $G_H$-representation $V_{f,A,\ell}$ is the $p$-adic realization of a Chow motive $M(f)_H \otimes M(\chi_H)$.
Here, $M(f)$ is the motive over $\mathbb{Q}$ attached to $f$ by Deligne, and $M(\chi_H)$ is a motive over $H$ (with coefficients in $K$) cutting out a two dimensional piece of the middle degree cohomology of $A^\ell$. In fact, the motive $M(\chi_H)$ descends to a motive $M(\chi)$ over $K$ with coefficients in $\mathbb{Q}(\chi)$ (see Remark \ref{descend}). We write $V_{f,\chi}$ for the $p$-adic realization of $M(f)_K \otimes M(\chi)$, so that $V_{f,\chi}$ is a $G_K$-representation whose restriction to $G_H$ is isomorphic to $V_{f,A,\ell}$. In fact, $V_{f,\chi} \cong \chi \oplus \bar\chi$, where we now think of $\chi$ as a $\mathbb{Q}(\chi) \otimes \mathbb{Q}_p$-valued character of $G_K$. It follows that \[L(V_{f,\chi}, s) = L(f,\chi,s)L(f,\bar\chi,s) = L(f,\chi,s)^2.\] The Bloch-Kato conjecture for the motive $M(f)_K \otimes M(\chi)$ over $K$ reads \[ \dim H^1_f(K, V_{f,\chi}) = 2\cdot {\rm ord}_{s = r + k} L(f,\chi,s).\] Similarly, Perrin-Riou's $p$-adic conjecture \cite[Conj.\ 2.7]{colmez} \cite[4.2.2]{PRbook} reads \begin{equation}\label{PRconj} \dim H^1_f(K, V_{f,\chi}) = 2\cdot {\rm ord}_{\lambda = \mathbbm{1}} L(f,\chi,\ell_K, \lambda), \end{equation} where $\ell_K$ is the cyclotomic logarithm and the derivatives are taken in the cyclotomic direction. In Section \ref{proof}, we deduce the ``analytic rank 1'' case of Perrin-Riou's conjecture by combining our main formula with the forthcoming results of Elias \cite{yara} on Euler systems for generalized Heegner cycles: \begin{theorem}\label{PRproof} If $L'_p(f \otimes \chi, \ell_K, \mathbbm{1}) \neq 0$, then $(\ref{PRconj})$ is true, i.e.\ Perrin-Riou's $p$-adic Bloch-Kato conjecture holds for the motive $M(f)_K \otimes M(\chi)$. \end{theorem} \begin{remark} Alternatively, we can think of $z_{f,\chi}$ (resp.\ $z_{f,\bar \chi}$) as giving a class in $H^1_f(K, V_f \otimes \chi)$ (resp.\ $H^1_f(K, V_f \otimes \bar \chi)$), and note that $L(V_f \otimes \chi, s) = L(f,\chi, s) = L(V_f \otimes \bar\chi,s )$. The Bloch-Kato conjecture for the motive $f \otimes \chi$ over $K$ then reads \[ \dim H^1_f(K, V_f \otimes \chi) = {\rm ord}_{s = r + k} L(f,\chi,s),\] and similarly for $\bar\chi$ and the $p$-adic $L$-functions. \end{remark} We anticipate that Theorem \ref{main} can also be used to study the variation of generalized Heegner cycles in $p$-adic families, in the spirit of \cite{francesc} and \cite{BHiwa}.
Theorem \ref{main} allows for variation not just in the weight of the modular form $f$, but in the weight of the Hecke character $\chi$ as well. \subsection{Related work} There has been much recent work on the connections between Heegner cycles and $p$-adic $L$-functions. Generalized Heegner cycles were first studied in \cite{BDP1}, where their Abel-Jacobi classes were related to the special {\it value} (not the derivative) of a different Rankin-Selberg $p$-adic $L$-function. Brooks extended these results to Shimura curves over $\mathbb{Q}$ \cite{hunter} and recently Liu, Zhang, and Zhang proved a general formula for arbitrary totally real fields \cite{lzz}. In \cite{disegni}, Disegni computes $p$-adic heights of Heegner points on Shimura curves, generalizing the weight 2 formula of Perrin-Riou for modular curves. Kobayashi \cite{kob} extended Perrin-Riou's height formula to the supersingular case. Our work is the first (as far as we know) to study $p$-adic heights of generalized Heegner cycles. \subsection{Proof outline} The proof of Theorem \ref{main} follows \cite{Nek} and \cite{PR1} rather closely. For this reason, we have chosen to retain much of Nekov\'a\v{r}'s notation and not to dwell long on computations easily adapted to our situation. We define the $p$-adic $L$-function $L_p(f \otimes \chi, \lambda)$ in Section \ref{pL} and show that it vanishes in the anticyclotomic direction. In Section \ref{integrate}, we integrate the $p$-adic logarithm against the $p$-adic Rankin-Selberg measure to compute what is essentially the derivative of $L_p(f\otimes \chi)$ at the trivial character in the cyclotomic direction. In Section \ref{cycles}, we define the generalized Heegner cycles and describe Hecke operators and $p$-adic Abel-Jacobi maps attached to the variety $X$. After proving some properties of generalized Heegner cycles, we show that the RHS of Theorem \ref{main} vanishes when $\ell_K$ is anticyclotomic. In Section \ref{localhts} we compute the local cyclotomic heights of $z_f$ at places $v$ which are prime to $p$. In Section \ref{ordsec}, we prove that $V_{f,A,\ell}$ is an ordinary representation. We complete the proof of the main theorem in Section \ref{proof}, modulo the results from the final section. In the final section, we fix the proof in \cite[II.5]{Nek}, to complete a proof of the vanishing of the contribution coming from local heights at primes above $p$. The key ingredient is the theory of relative Lubin-Tate groups and Theorem \ref{crysmixed}. The latter is a result in $p$-adic Hodge theory which relies on Faltings' proof of Fontaine's $C_{\rm cris}$ conjecture. This theorem (or rather, its proof) is quite general and should be useful for computing $p$-adic heights of algebraic cycles sitting on varieties fibered over curves. \subsection{Acknowledgments} I am grateful to Kartik Prasanna for suggesting this problem and for his patience and direction. Thanks go to Hunter Brooks for several productive conversations, and to Bhargav Bhatt, Daniel Disegni, Yara Elias, Olivier Fouquet, Adrian Iovita, Shinichi Kobayashi, Jan Nekov\'a\v{r}, and Martin Olsson for helpful correspondence. The author was partially supported by National Science Foundation RTG grant DMS-0943832.
\section{Constructing the $p$-adic $L$-functions}\label{pL} Recall $f \in S_{2r}(\Gamma_0(N))$ is an ordinary newform with trivial nebentypus. As in the introduction, $\chi: \mathbb{A}^\times_K/K^\times \to \mathbb{C}^\times$ is an unramified Hecke character of infinity type $(2k,0)$ with $0 < 2k < 2r$. For conventions regarding Hecke characters, see \cite[\S4.1]{BDP1}. All that follows will apply to $\chi$ of infinity type $(0,2k)$ with suitable modifications. In this section, we follow \cite{Nek} and define a $p$-adic $L$-function attached to the pair $(f, \chi)$ which interpolates special values of certain Rankin-Selberg convolutions. \subsection{$p$-adic measures} We use the notation of \cite{Nek} unless stated otherwise. We construct the $p$-adic $L$-function only in the setting needed for Theorem \ref{main}; in the notation of \cite{Nek}, this means that $\Omega = 1, N_1 = N_2 = c_1 = c_2 = c = 1, N_3 = N'_3 = N, \Delta = \Delta_1 = \Delta_2 = |D|, \Delta_3 = 1,$ and $\gamma = \gamma_3 = 0$. We begin by defining theta measures. Fix an integer $m \geq 1$ and let $\mathcal{O}_m$ be the order of conductor $m$ in $K$. Let $\mathfrak{a}$ be a proper $\mathcal{O}_m$-ideal whose class in $\mathrm{Pic}(\mathcal{O}_m)$ is denoted by $\mathcal{A}$. The quadratic form $$Q_{\mathfrak{a}}(x) = \mathbf{N}(x)/\mathbf{N}(\mathfrak{a})$$ takes integer values on $\mathfrak{a}$. Define the measure $\Theta_{\mathcal{A}}$ on $\mathbb{Z}^\times_p$ by \begin{equation} \Theta_{\mathcal{A}} (a \ ({\rm mod} \ p^\nu)) = \chi(\bar{\mathfrak{a}})^{-1}\sum_{\substack{x \in \mathfrak{a} \\ Q_{\mathfrak{a}}(x) \equiv a \, ({\rm mod} \ p^\nu) }} \bar x^\ell q^{Q_{\mathfrak{a}}(x)}. \end{equation} To keep things from getting unwieldy we have omitted $\chi$ from the notation of the measure.
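For orientation only, consider the simplest case $m = 1$ and $\mathfrak{a} = \mathcal{O}_K$ (so $\mathbf{N}(\mathfrak{a}) = 1$ and $\chi(\bar{\mathfrak{a}}) = 1$). Writing $\mathcal{O}_K = \mathbb{Z} + \mathbb{Z}\omega$ with $\omega = (1+\sqrt{D})/2$ (recall $D$ is odd, so $D \equiv 1 \ ({\rm mod} \ 4)$), one has $$Q_{\mathcal{O}_K}(x + y\omega) = x^2 + xy + \frac{1-D}{4}y^2,$$ an integral positive definite binary quadratic form of discriminant $D$. For $\ell = 0$ the sum $\sum_{x \in \mathcal{O}_K} q^{Q_{\mathcal{O}_K}(x)}$ is the classical theta series of this form, while for $\ell = 2k > 0$ the weights $\bar x^\ell$ turn it into a theta series with harmonic polynomial coefficients, of the weight $\ell + 1$ appearing in the proposition below. This elementary example is included only as an illustration and is not used in what follows.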
If $\phi$ is a function on $\mathbb{Z}/p^\nu\mathbb{Z}$ with values in a $p$-adic ring $A$, then
\begin{equation}
\Theta_{\mathcal{A}}(\phi) = \chi(\bar{\mathfrak{a}})^{-1}\sum_{x \in \mathfrak{a}} \phi(Q_{\mathfrak{a}}(x))\, \bar{x}^\ell q^{Q_{\mathfrak{a}}(x)} = \chi(\bar{\mathfrak{a}})^{-1} \sum_{n \geq 1} \phi(n)\, \rho_{\mathfrak{a}}(n, \ell)\, q^n,
\end{equation}
where $\rho_{\mathfrak{a}}(n, \ell)$ is the sum $\sum \bar{x}^\ell$ over all $x \in \mathfrak{a}$ with $Q_{\mathfrak{a}}(x) = n$. We have
$$\rho_{\mathfrak{a}(\gamma)}(n,\ell) = \bar{\gamma}^\ell \rho_{\mathfrak{a}}(n,\ell)$$
for all $\gamma\in K^\times$, so that $\Theta_{\mathcal{A}}$ is independent of the choice of representative $\mathfrak{a}$ for the class $\mathcal{A}$. For $\mathfrak{a} \in \mathcal{A}$,
\begin{equation}
\chi(\bar{\mathfrak{a}})^{-1}\sum_{x \in \mathfrak{a}} \bar{x}^\ell q^{Q_{\mathfrak{a}}(x)} = w_m\sum_{\substack{\mathfrak{a}' \in \mathcal{A} \\ \mathfrak{a}'\subset \mathcal{O}_m}} \chi(\mathfrak{a}')q^{\mathbf{N}(\mathfrak{a}')} = w_m \sum_{n \geq 1} r_{\mathcal{A},\chi}(n)\, q^n,
\end{equation}
since $\ell$ is a multiple of $w_m$. The coefficients $r_{\mathcal{A},\chi}(n)$ play the role of (and generalize) the numbers $r_{\mathcal{A}}(m)$ that appear in \cite{GZ} and \cite{Nek}.

\begin{proposition}
$\Theta_{\mathcal{A}}(\phi)$ is a cusp form in $M_{\ell+1}(\Gamma_1(M), A)$, with $M = \mathrm{lcm}(|D|m^2, p^{2\nu})$.
\end{proposition}

\begin{proof}
It is classical \cite{Ogg} that $\sum_{x \in \mathfrak{a}} \bar{x}^\ell q^{Q_{\mathfrak{a}}(x)}$ is a cusp form in $M_{\ell+1}(\Gamma_1(|D|m^2))$. It follows from \cite[Proposition 1.1]{Hida1} that weighting this form by $\phi$ gives a modular form of the desired level.
\end{proof}

For a fixed integer $C$, define the Eisenstein measures
$$E_1(\alpha\ (\mathrm{mod}\ p^\nu))(z) = E_1(z,\phi_{\alpha,p^\nu})$$
and
$$E_1^C(\alpha\ (\mathrm{mod}\ p^\nu))(z) = E_1(\alpha\ (\mathrm{mod}\ p^\nu))(z) - CE_1(C^{-1}\alpha\ (\mathrm{mod}\ p^\nu))(z),$$
as in \cite[I.3.6]{Nek}.
Similarly, we define the following convolution measure on $\mathbb{Z}_p^\times$:
\begin{align*}
&\Phi^C_{\mathcal{A}}(a\ (\mathrm{mod}\ p^\nu)) =\\
&H\left[ \sum_{\alpha \in (\mathbb{Z}/|D|p^\nu\mathbb{Z})^\times} \xi(\alpha)\,\Theta_{\mathcal{A}}(\alpha^2 a\ (\mathrm{mod}\ p^\nu))(z)\,\delta^{r-1-k}_1\!\left(E_1^C(\alpha\ (\mathrm{mod}\ |D|p^\nu))(Nz)\right)\right],
\end{align*}
which takes values in $\overline{M}_{2r}(\Gamma_0(N|D| p^\infty); \chi(\bar{\mathfrak{a}})^{-1}p^{-\delta}\mathbb{Z}_p)$, for some $\delta$ depending only on $r$ and $k$ \cite[Lem.\ 5.1]{Hida1}. Here, $H$ is holomorphic projection, $\delta_1^{r-1-k}$ is Shimura's differential operator, and $\xi = \left(\frac{D}{\cdot}\right)$. We are implicitly identifying $\mathbb{Z}_p$ with the ring of integers of $K_{\mathfrak{p}}$ for a prime $\mathfrak{p}$ above $p$ (which is split in $K$), so that $x^\ell \in \mathbb{Z}_p$ for all $x \in \mathfrak{a}$.

The measure $\Psi_{\mathcal{A}}^C$ is defined by
$$\Psi_{\mathcal{A}}^C = \frac{1}{2w_m}\Phi^C_{\mathcal{A}}\bigg|_{2r} \mathscr{T}(|D|)_{N|D|p^\infty/Np^\infty},$$
where
$$\mathscr{T}: M_{2r}\left(\Gamma_0\left(N|D|p^\infty\right), \cdot\right) \to M_{2r}\left(\Gamma_0\left(Np^\infty\right),\cdot\right)$$
is the trace map, i.e.\ the adjoint to the operator $g \mapsto |D|^{r-1}\, g\big|_{2r} \left(\begin{array}{cc} |D| & 0 \\ 0 & 1 \end{array}\right)$.

For ring class field characters $\rho: G(H_m/K) \to \overline{\mathbb{Q}}^\times$, define
$$\Phi_\rho^C = \sum_{[\mathcal{A}] \in \mathrm{Pic}(\mathcal{O}_m)} \rho([\mathcal{A}])^{-1}\Phi_{\mathcal{A}}^C,$$
and similarly for $\Psi^C_\rho$. We define $\Psi^C_{f,\rho} = L_{f_0}(\Psi_\rho^C)$, where $L_{f_0}$ is the Hida projector attached to the $p$-stabilization
$$f_0 = f(z) - \frac{p^{2r-1}}{\alpha_p(f)}f(pz)$$
of $f$ (see \cite[I.2]{Nek} for its definition and properties).
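For orientation, we record the standard property characterizing this $p$-stabilization; it is not needed beyond what is quoted from \cite[I.2]{Nek}. With the usual normalizations on $q$-expansions of level prime to $p$ (so that $T_p = U_p + p^{2r-1}V_p$, $U_pV_p = \mathrm{id}$, and $V_pf(z) = f(pz)$), and writing $a_p(f) = \alpha_p(f) + p^{2r-1}/\alpha_p(f)$ with $\alpha_p(f)$ the unit root,
\begin{align*}
U_p f_0 &= U_p f - \frac{p^{2r-1}}{\alpha_p(f)}U_pV_p f = a_p(f)\, f - p^{2r-1}V_p f - \frac{p^{2r-1}}{\alpha_p(f)}\, f \\
&= \alpha_p(f)\left(f - \frac{p^{2r-1}}{\alpha_p(f)}V_p f\right) = \alpha_p(f)\, f_0,
\end{align*}
so $f_0$ is the ordinary $U_p$-eigenvector attached to $f$.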
Explicitly, if $g \in M_j(\Gamma_0(Np^\mu); \bar{\mathbb{Q}})$ with $\mu \geq 1$, then
\begin{equation}
L_{f_0}(g) = \left(\frac{p^{j/2 -1}}{\alpha_p(f)}\right)^{\mu-1} \frac{\left\langle f_0^\tau \big|_j \left(\begin{array}{cc} 0 & -1 \\ N p^\mu & 0 \end{array}\right), g \right\rangle_{Np^\mu}}{\left\langle f_0^\tau \big|_j \left(\begin{array}{cc} 0 & -1 \\ N p & 0 \end{array}\right), f_0 \right\rangle_{Np}}.
\end{equation}
We also define a measure $\Psi_f^C$ on $\mathrm{Gal}(H_{p^\infty}/K) \times \mathrm{Gal}(K(\mu_{p^\infty})/K)$ by
$$\Psi^C_f(\sigma\ (\mathrm{mod}\ p^n), \tau\ (\mathrm{mod}\ p^m)) = L_{f_0}(\Psi^C_{\mathcal{A}}(a\ (\mathrm{mod}\ p^m))),$$
where $\sigma$ corresponds to $\mathcal{A}$ and $\tau$ corresponds to $a \in (\mathbb{Z}/p^m\mathbb{Z})^\times$ under the Artin map. Finally, as in \cite{Nek}, we define modified measures $\tilde{\Psi}_{\mathcal{A}}^C, \tilde{\Psi}^C_\rho$, etc., by replacing $\mathscr{T}(|D|)$ with $\mathscr{T}(1)$ in the definition of $\Psi_{\mathcal{A}}^C$.

\subsection{Integrating characters against the Rankin-Selberg measure}
In this subsection, we integrate finite order characters of the $\mathbb{Z}_p^2$-extension of $K$ against the measures constructed in the previous subsection and show that they recover special values of Rankin-Selberg $L$-functions. This allows us to prove a functional equation for the (soon to be defined) $p$-adic $L$-function. We follow the computations in \cite[I.5]{Nek} and \cite[\S4]{PR2}.

Let $\eta$ denote a character $(\mathbb{Z}/p^\nu\mathbb{Z})^\times \to \bar{\mathbb{Q}}^\times$. Exactly as in \cite[Lemma 7]{PR2}, we compute:
\begin{equation}
\int_{\mathbb{Z}^\times_p} \eta \,d\Phi_{\mathcal{A}}^C = \left(1 - C\xi(C)\bar{\eta}^2(C)\right)H\!\left[\Theta_{\mathcal{A}}(\eta)(z)\,\delta_1^{r-k-1}\!\left(E_1(Nz,\phi)\right)\right].
\end{equation}
Similarly, if $\rho$ is a ring class character with conductor a power of $p$, then
\begin{equation}
\int_{\mathbb{Z}^\times_p} \eta \,d\Phi_\rho^C = w_m\left(1 - C\xi(C)\bar{\eta}^2(C)\right)H\!\left[\Theta_{\chi}(\mathcal{W}'')(z)\,\delta_1^{r-k-1}\!\left(E_1(Nz,\phi)\right)\right],
\end{equation}
where $\mathcal{W}'' = \rho \cdot (\eta \circ \mathbf{N})$, the latter being thought of as a character modulo the ideal $\mathfrak{f} = \mathrm{lcm}(\mathrm{cond}\,\rho,\ \mathrm{cond}\,\eta,\ p)$. We denote by $\mathcal{W}$ the primitive character associated to $\mathcal{W}''$. By definition,
$$\Theta_{\chi}(\mathcal{W}'')(z) = \sum_{\substack{\mathfrak{a} \subset \mathcal{O} \\ (\mathfrak{a}, \mathfrak{f}) = 1}}\mathcal{W}''(\mathfrak{a})\chi(\mathfrak{a})\,q^{\mathbf{N}(\mathfrak{a})}.$$
This is a cusp form in $S_{\ell+1}\left(|D|\mathbf{N}^K_{\mathbb{Q}}\left(\mathfrak{f}_{\mathcal{W}''}\right), \left(\frac{D}{\cdot}\right)\eta^2\right)$, since $\chi$ is unramified (see \cite{Ogg} for a more general result). The computations of \cite[I.5.3-4]{Nek} carry over to our situation, except that the theta series transformation law now reads
\begin{equation}
\Theta_{\chi}(\mathcal{W}'')(z)\big|_{\ell+1} \mathscr{F} = \left(\frac{D}{w}\right)\bar{\eta}^2(w)\,\Theta_{\chi}(\mathcal{W}'')\big|_{\ell + 1}\left(\begin{array}{cc} 0 & -1 \\ |D| p^\mu & 0 \end{array}\right),
\end{equation}
where $\mathscr{F}$ is the involution
$$\left(\begin{array}{cc} 0 & -1 \\ N|D|p^\mu & 0 \end{array}\right) \left(\begin{array}{cc} Nx & y \\ N|D|p^\mu t & Nw \end{array}\right)$$
with $Nxw - |D|p^\mu ty = 1$.
We then obtain
\begin{align*}
&\int_{\mathbb{Z}_p^\times}\eta \, d\Psi_{f,\rho}^C = \\
&\left(1 - C\xi(C)\bar{\eta}^2(C)\right)\frac{\left(\left(\frac{D}{\cdot}\right)\eta^2\right)(N)\,\lambda_N(f)\,|D|^{r - 1/2}}{(4\pi i)\,\alpha_p(f)^{-1}p^{r-1}}\, \frac{\Lambda_\mu(\mathcal{W}'')}{\left\langle f_0^\tau \big|_{2r} \left(\begin{array}{cc} 0 & -1 \\ N p & 0 \end{array}\right), f_0\right\rangle_{Np}},
\end{align*}
where
\[\Lambda_\mu(\mathcal{W}'') = \frac{p^{\mu(r-1/2)}}{\alpha_p(f)^\mu}\left\langle f_0^\tau,\ \Theta_{\chi}\left(\mathcal{W}''\right)\big|_{\ell + 1}\left(\begin{array}{cc} 0 & -1 \\ |D| p^\mu & 0 \end{array}\right)\delta^{r-k-1}\!\left(E_1\left(z,\xi \bar{\eta}^2 \right)\right)\right\rangle_{N|D| p^\mu}.\]
We define $\tau(\chi\mathcal{W})$ by the relation
\begin{equation}
\Theta_{\chi}(\mathcal{W})\big|_{\ell + 1}\left(\begin{array}{cc} 0 & -1 \\ |D|p^\beta & 0 \end{array}\right) = (-1)^{k+1}i\, \tau(\chi\mathcal{W})\,\Theta_{\bar{\chi}}(\bar{\mathcal{W}}),
\end{equation}
with $|D|p^\beta$ being the level $\Delta(\mathcal{W})$ of $\Theta_{\chi}(\mathcal{W})$.
One knows (\cite[Thm.\ 4.3.12]{Miy}) that $\tau(\chi\mathcal{W}) \in \bar{\mathbb{Q}}^\times$, $|\tau(\chi\mathcal{W})| = 1$, and
$$\Lambda(\chi\mathcal{W},s) = \tau(\chi\mathcal{W})\Lambda(\bar{\chi}\bar{\mathcal{W}},\ell+1-s),$$
where
$$\Lambda(\chi\mathcal{W},s) = \left(|D|p^\beta\right)^{s/2}(2\pi)^{-s}\Gamma(s)L(\Theta_{\chi}(\mathcal{W}),s).$$
Modifying the computations in \cite[\S4]{PR2}, we find that
\begin{equation}
\Lambda_\mu(\mathcal{W}'') = (-1)^{k+1}i\,\tau(\chi\mathcal{W})\sum_{\substack{\mathfrak{a} \mid p \\ \mathbf{N}(\mathfrak{a}) = p^s}}\mu(\mathfrak{a})\chi(\mathfrak{a})\mathcal{W}(\mathfrak{a})\,\Lambda_{\mu,s},
\end{equation}
with
\begin{equation}
\Lambda_{\mu,s} = \frac{p^{\mu\left(r-\frac{1}{2}\right) - s\left(k+\frac{1}{2}\right)}}{\alpha_p(f)^\mu} \left\langle f_0^\tau,\ \Theta_{\bar{\chi}}(\bar{\mathcal{W}})\big|_{\ell + 1}\left(\begin{array}{cc} p^x & 0 \\ 0 & 1 \end{array}\right)\delta^{r-k-1}(E_1(z,\xi\bar{\eta}^2))\right\rangle_{N|D|p^\mu}
\end{equation}
and $x = \mu - \beta - s$.
Following \cite[\S4.4]{PR2}, we compute:
\begin{align*}
&\Lambda_\mu(\mathcal{W}'') =\\
&(-1)^r i\,\tau(\chi\mathcal{W})V_p(f,\chi,\mathcal{W})\left(\frac{p^{r-1/2}}{\alpha_p(f)}\right)^{\beta} \frac{2(r+k-1)!\,(r-k-1)!}{(4\pi)^{2r-1}}\, L(f,\Theta_{\bar{\chi}}(\bar{\mathcal{W}}),r+k),
\end{align*}
where
$$V_p(f,\chi,\mathcal{W}) = \prod_{\mathfrak{p} \mid p} \left(1 - \frac{(\bar{\chi}\bar{\mathcal{W}})(\mathfrak{p})}{\alpha_{\mathbf{N}(\mathfrak{p})}(f)}\mathbf{N}(\mathfrak{p})^{r-k-1}\right) \left(1 - \frac{(\chi\mathcal{W})(\mathfrak{p})}{\alpha_{\mathbf{N}(\mathfrak{p})}(f)}\mathbf{N}(\mathfrak{p})^{r-k-1}\right).$$
We have used the fact that
\begin{equation}\label{pair}
\left\langle f^\tau,\ g\,\delta^{r-k-1}_1(E_1(z,\phi))\right\rangle_M = \frac{(1-\epsilon(-1))(r+k-1)!\,(r-k-1)!}{(-1)^{r-k-1}(4\pi)^{2r-1}}L(f,g,r+k)
\end{equation}
for any $g \in S_{2k+1}(M', \epsilon)$, where $M = M'N$. Equation (\ref{pair}) follows from the usual unfolding trick and the fact \cite[I.1.5.3]{Nek} that
$$\delta_1^{r-k-1}(E_1(z,\phi)) = \frac{(r-k-1)!}{(-4\pi)^{r-k-1}}E_{r-k}(z,\phi).$$
We have also used the following generalization of \cite[Lemma 23]{PR2}.

\begin{lemma}
If $g$ is a modular form whose $L$-function admits an Euler product expansion $\prod_p G_p(p^{-s})$, then
$$L(f_0,g,r+k) = G_p\left(p^{r-k-1}\alpha_p(f)^{-1}\right)L(f,g,r+k).$$
\end{lemma}

Putting these calculations together, we obtain the following interpolation result.
\begin{theorem}\label{interp}
For finite order characters $\mathcal{W} = \rho\cdot(\eta \circ \mathbf{N})$ as above,
$$\left(1 - C\left(\frac{D}{C}\right)\bar{\mathcal{W}}(C)\right)^{-1}\int_{\mathbb{Z}_p^\times}\eta \, d\Psi^C_{f,\rho} = \frac{\mathcal{L}_p(f,\chi,\mathcal{W})\,V_p(f,\chi,\mathcal{W})\,\Delta(\mathcal{W})^{r-1/2}}{\alpha_p(f)^{\beta} H_p(f)},$$
where
$$\mathcal{L}_p(f,\chi,\mathcal{W}) = \left(\frac{D}{-N}\right)\mathcal{W}(N)\,\tau(\chi\mathcal{W})\,C(r,k)\,\frac{L(f,\Theta_{\bar{\chi}}(\bar{\mathcal{W}}),r+k)}{\left\langle f,f\right\rangle_N}.$$
Here,
$$C(r,k) = \frac{2(-1)^{r-1}(r-k-1)!\,(r+k-1)!}{(4\pi)^{2r}}$$
and
$$H_p(f) = \left(1 - \frac{p^{2r-2}}{\alpha_p(f)^2}\right)\left(1 - \frac{p^{2r-1}}{\alpha_p(f)^2}\right).$$
\end{theorem}

The modified measures $\tilde{\Psi}_{f,\rho}^C$ satisfy
$$\int_{\mathbb{Z}^\times_p}\eta \, d\tilde{\Psi}_{f,\rho}^C = |D|^{1-r}\,\overline{(\chi\mathcal{W})}(\mathcal{D})\int_{\mathbb{Z}^\times_p}\eta \,d\Psi_{f,\rho}^C,$$
where $\mathcal{D} = \left(\sqrt{D}\right)$ is the different of $K$.

We now define the $p$-adic $L$-function. Recall that we have fixed an integer $C$ prime to $N|D|p$.

\begin{definition}
For any continuous character $\phi: G(H_{p^\infty}(\mu_{p^\infty})/K) \to \bar{\mathbb{Q}}_p^\times$ with conductor of $p$-power norm, we define
\begin{align*}
L_p&(f \otimes \chi, \phi) =\\
&(-1)^{r-1}H_p(f)\left(\frac{D}{-N}\right)\left(1 - C\left(\frac{D}{C}\right)\phi(C)^{-1}\right)^{-1}\int_{G(H_{p^\infty}(\mu_{p^\infty})/K)} \phi \, d\tilde{\Psi}^C_f.
\end{align*}
\end{definition}

The $p$-adic $L$-function $L_p(f \otimes \chi)(\lambda) := L_p(f \otimes \chi, \lambda)$ is a function of characters
$$\lambda: G(H_{p^\infty}(\mu_{p^\infty})/K) \to (1+p\mathbb{Z}_p).$$
$L_p(f \otimes \chi)$ is an Iwasawa function with values in $c^{-1}\mathcal{O}_{\widehat{\mathbb{Q}(f,\chi)}}$, where $\widehat{\mathbb{Q}(f,\chi)}$ is the $p$-adic closure (using our fixed embedding $\bar{\mathbb{Q}} \hookrightarrow \bar{\mathbb{Q}}_p$) of the field generated by the coefficients of $f$ and the values of $\chi$, and $c \in \widehat{\mathbb{Q}(f,\chi)}$ is non-zero. We can construct analogous measures and an analogous $p$-adic $L$-function for $\bar{\chi}$, which is a Hecke character of infinity type $(0,\ell)$.

There is a functional equation relating $L_p(f \otimes \chi)$ to $L_p(f \otimes \bar{\chi})$, which we now describe. First define
$$\Lambda_p(f \otimes \chi)(\lambda) = \lambda(\mathcal{D}N^{-1})\lambda(N)^{1/2}L_p(f \otimes \chi)(\lambda).$$

\begin{proposition}
$\Lambda_p$ satisfies the functional equation
$$\Lambda_p\left(f \otimes \chi\right)\left(\lambda\right) = \left(\frac{D}{-N}\right)\Lambda_p\left(f\otimes \bar{\chi}\right)\left(\lambda^{-1}\right).$$
\end{proposition}

\begin{proof}
It suffices to prove this for all finite order characters $\mathcal{W}$.
For such $\mathcal{W}$, the functional equation for the Rankin-Selberg convolution reads
\begin{equation}\label{fxleq}
L(f,\Theta_{\bar{\chi}}(\bar{\mathcal{W}}), r+k) = \frac{\left(\frac{D}{-N}\right)\bar{\mathcal{W}}(N)}{\tau(\chi\mathcal{W})^2}L(f, \Theta_{\chi}(\mathcal{W}),r+k),
\end{equation}
so
$$\frac{\mathcal{L}_p(f,\chi,\mathcal{W})}{\mathcal{L}_p(f,\bar{\chi}, \bar{\mathcal{W}})} = \mathcal{W}(N)\left(\frac{D}{-N}\right).$$
We also have $V_p(f,\bar{\chi}, \bar{\mathcal{W}}) = V_p(f,\chi,\mathcal{W})$, so that
$$\frac{L_p(f \otimes \chi)(\mathcal{W})}{L_p(f \otimes \bar{\chi})(\bar{\mathcal{W}})} = \mathcal{W}(N)\left(\frac{D}{-N}\right)\bar{\mathcal{W}}(\mathcal{D})^2.$$
The proposition now follows from a simple computation.
\end{proof}

Recall the notation $\lambda^\tau(\mathfrak{a}) = \lambda(\mathfrak{a}^\tau)$.

\begin{corollary}\label{Lvanish}
Suppose $\left(\frac{D}{N}\right) = 1$ and $\lambda$ is anticyclotomic, i.e.\ $\lambda\lambda^\tau = 1$. Then $L_p(f \otimes \chi)(\lambda) = 0$.
\end{corollary}

\begin{proof}
From the functional equation and the fact that
$$\Lambda_p(f \otimes \chi)(\lambda) = \Lambda_p(f \otimes \bar{\chi})(\lambda^\tau),$$
we obtain
$$\Lambda_p(f \otimes \chi)(\lambda) = -\Lambda_p(f \otimes \chi)(\lambda^{-\tau}),$$
using that $\left(\frac{D}{-N}\right) = \left(\frac{D}{-1}\right) = -1$, since $D < 0$ and $\left(\frac{D}{N}\right) = 1$. Since $\lambda$ is anticyclotomic, this is equal to $-\Lambda_p(f \otimes \chi)(\lambda)$.
\end{proof}

\section{Fourier expansion of the $p$-adic $L$-function}\label{integrate}

This section is devoted to computing the Fourier coefficients of $\int_{\mathbb{Z}_p^\times}\lambda \, d\tilde{\Psi}_{\mathcal{A}}$, where $\lambda$ is a continuous function $\mathbb{Z}_p^\times \to \mathbb{Q}_p$. These computations allow us to relate $L'_p(f \otimes \chi, \mathbbm{1})$ to heights of generalized Heegner cycles. We follow the computations in \cite[I.6]{Nek}; however, the transformation laws for theta series attached to Hecke characters complicate things a bit.
We have
\begin{align*}
&\Phi^C_{\mathcal{A}}(a\ (\mathrm{mod}\ p^\nu)) = \\
&H\left[ \sum_{\alpha \in (\mathbb{Z}/|D|p^\nu\mathbb{Z})^\times} \xi(\alpha)\,\Theta_{\mathcal{A}}(\alpha^2 a\ (\mathrm{mod}\ p^\nu))(z)\,\delta^{r-1-k}_1\!\left(E_1^C(\alpha\ (\mathrm{mod}\ |D|p^\nu))(Nz)\right)\right].
\end{align*}
For each factorization $D = D_1D_2$ (with the signs normalized so that $D_1$ is a discriminant), we define
\[W_{D_1}^{(\nu)} = \left(\begin{array}{cc} |D_1|a & b \\ N|D|p^\nu c & |D_1|d \end{array}\right)\]
of determinant $|D_1|$.

\begin{lemma}\label{theta}
For $W_{D_1}^{(\nu)}$ as above and $\alpha \in \left(\mathbb{Z}/|D|p^\nu\mathbb{Z}\right)^\times$,
$$\Theta_{\mathcal{A}}\left(\alpha\ (\mathrm{mod}\ p^\nu)\right)(z)\big|_{\ell+1}W_{D_1}^{(\nu)} = \frac{|D_1|^k}{\chi(\mathcal{D}_1)}\,\gamma\,\Theta_{\mathcal{A}\mathcal{D}_1^{-1}}\left(|D_1|a^2\alpha\ (\mathrm{mod}\ p^\nu)\right)(z),$$
where
$$\gamma = \left(\frac{D_1}{cp^\nu N}\right)\left(\frac{D_2}{a\mathbf{N}(\mathfrak{a})}\right)\kappa(D_1)^{-1},$$
$\mathcal{D}_1$ is the ideal of norm $|D_1|$ in $\mathcal{O}_K$, and $\kappa(D_1) = 1$ if $D_1 > 0$, while $\kappa(D_1) = i$ otherwise.
\end{lemma}

\begin{remark}
Note that the factor $\frac{|D_1|^k}{\chi(\mathcal{D}_1)}$ is equal to $\pm 1$.
\end{remark}

\begin{proof}
The proof proceeds as in \cite[\S3.2]{PR1}, but requires some extra Fourier analysis. We sketch the argument for the convenience of the reader. Fixing an ideal $\mathfrak{a}$ in the class of $\mathcal{A}$, we set $L = p^\nu \mathfrak{a}$ and let $L^*$ be the dual lattice with respect to the quadratic form $Q_{\mathfrak{a}}$. Denote by $S = S_{\mathfrak{a}}$ the symmetric bilinear form corresponding to $Q_{\mathfrak{a}}$, so $S_{\mathfrak{a}}(\alpha, \beta) = \frac{1}{\mathbf{N}(\mathfrak{a})}\mathrm{Tr}(\alpha\bar{\beta})$.
For $u \in L^*$, define
$$\Theta_{\mathfrak{a},\chi}(u,L) = \chi(\bar{\mathfrak{a}})^{-1}\sum_{\substack{w - u \in L\\ w \in L^*}} \bar{w}^\ell q^{Q_{\mathfrak{a}}(w)}.$$
For any $c \in \mathbb{Z}$, one checks the following relations:
\begin{equation}
\Theta_{\mathfrak{a},\chi}(u,L) = \sum_{\substack{w - u \in L\\ w \in L^*/cL}} \Theta_{\mathfrak{a}, \chi}(w,cL),
\end{equation}
\begin{equation}
\Theta_{\mathfrak{a},\chi}(u,cL)(c^2z) = c^{-\ell}\Theta_{\mathfrak{a},\chi}(cu,c^2L)(z),
\end{equation}
and, for all $a \in \mathbb{Z}$ and $w \in L^*$,
\begin{equation}
\Theta_{\mathfrak{a},\chi}(w,cL)\left(z + \frac{a}{c}\right) = e\left(\frac{a}{c}Q_{\mathfrak{a}}(w)\right)\Theta_{\mathfrak{a},\chi}(w,cL).
\end{equation}
We also have
\begin{equation}
z^{-(\ell+1)}\Theta_{\mathfrak{a},\chi}(w,cL)\left(\frac{-1}{z}\right) = -ic^{-2}[L^*:L]^{-1/2} \sum_{y \in (cL)^*/cL} e\left(S_{\mathfrak{a}}(w,y)\right)\Theta_{\mathfrak{a},\chi}(y,cL).
\end{equation}
This follows from the identity
\begin{equation}
z^{\ell+1}\sum_{x \in L} P(x + u)\,e\left(Q_{\mathfrak{a}}(x+u)z\right) = i[L^*:L]^{-1/2}\sum_{y \in L^*}P(y)\,e\left(\frac{-Q_{\mathfrak{a}}(y)}{z}\right)e\left(S_{\mathfrak{a}}(y,u)\right),
\end{equation}
valid for any rank two integral quadratic space $(L, Q_{\mathfrak{a}}, S_{\mathfrak{a}})$ and any polynomial $P$ of degree $\ell$ which is spherical for $Q_{\mathfrak{a}}$. See \cite{Wall} for a proof of this version of Poisson summation.

Now write
$$W_{D_1}^{(\nu)} = H\left(\begin{array}{cc} |D_1| & 0 \\ 0 & 1 \end{array}\right)$$
with $H \in \mathrm{SL}_2(\mathbb{Z})$.
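For the reader's convenience, we note why such an $H$ exists: since $D_1 \mid D$, the first column of $W_{D_1}^{(\nu)}$ is divisible by $|D_1|$, and
$$H = W_{D_1}^{(\nu)}\left(\begin{array}{cc} |D_1| & 0 \\ 0 & 1 \end{array}\right)^{-1} = \left(\begin{array}{cc} a & b \\ N|D_2|p^\nu c & |D_1|d \end{array}\right)$$
has integer entries and determinant $\det W_{D_1}^{(\nu)}/|D_1| = 1$.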
Exactly as in \cite{PR1}, we use the relations above to compute
$$\Theta_{\mathfrak{a},\chi}(\alpha\ (\mathrm{mod}\ p^\nu))\big|_{\ell+1}H = \gamma|D_1|^{-1/2} \sum_{\substack{u \in \mathfrak{a}/L\\ Q_{\mathfrak{a}}(u) \equiv \alpha\ (\mathrm{mod}\ p^\nu)}}\ \sum_{\substack{w \in L^*/L\\ w + au \in \mathcal{D}_1^{-1}p^\nu \mathfrak{a}}} \Theta_{\mathfrak{a},\chi}(w,L),$$
so that
\begin{align*}
\Theta_{\mathfrak{a},\chi}(\alpha\ (\mathrm{mod}\ p^\nu))\big|_{\ell+1} W_{D_1}^{(\nu)} &= \gamma|D_1|^k\chi(\bar{\mathfrak{a}})^{-1}\sum_{\substack{w \in \mathcal{D}_1^{-1}\mathfrak{a}\\ Q_{\mathfrak{a}\mathcal{D}_1^{-1}}(w)\equiv |D_1|a^2\alpha\ (\mathrm{mod}\ p^\nu)}}\bar{w}^\ell q^{Q_{\mathfrak{a}\mathcal{D}_1^{-1}}(w)}\\
&= \frac{|D_1|^k}{\chi(\mathcal{D}_1)}\,\gamma\,\Theta_{\mathfrak{a}\mathcal{D}_1^{-1},\chi}\left(|D_1|a^2\alpha\ (\mathrm{mod}\ p^\nu)\right)(z),
\end{align*}
as desired.
\end{proof}

For any function $\lambda$ on $(\mathbb{Z}/p^\nu\mathbb{Z})^\times$, we define $h_{D_1}(\lambda)$ as in \cite[I.6.3]{Nek}, so that
$$\int_{\mathbb{Z}_p^\times} \lambda \, d\tilde{\Psi}_{\mathcal{A}} = \frac{1}{2w} \sum_{D = D_1\cdot D_2}\ \sum_{j \in \mathbb{Z}/|D_1|\mathbb{Z}} h_{D_1}(\lambda)\big|_{2r} \left(\begin{array}{cc} 1 & j \\ 0 & |D_1| \end{array}\right).$$
The Fourier coefficient computation in \cite[I.6.5]{Nek} remains valid, except that one needs to use the following proposition in place of \cite[I.1.9]{Nek}:

\begin{proposition}\label{jacobi}
Let $f = \sum_{n \geq 1} a(n)q^n$ be a cusp form of weight $\ell + 1 = 2k + 1$, and $g = \sum_{n \geq 0}b(n)q^n$ a holomorphic modular form of weight one, both on $\Gamma_0(N)$.
Then $H(f\, \delta_1^{r-k-1}(g)) = \sum_{n \geq 1} c(n)q^n$ with
\[c(n) = \frac{(-1)^{r-k-1}}{\binom{2r-2}{r-k-1}}n^{r-k-1}\sum_{i + j = n} a(i)b(j)H_{r-k-1,k}\left(\frac{i - j}{i + j}\right),\]
where
\[H_{m,k}(t) = \frac{1}{2^m (m + 2k)!} \left(\frac{d}{dt}\right)^{m + 2k}\left[(t^2 - 1)^m(t - 1)^{2k}\right].\]
\end{proposition}

\begin{proof}
From \cite[I.1.2.4, I.1.3.2]{Nek}, we have
\[c(n) = \frac{(r-k-1)!}{(-4\pi)^{r-k-1}}\cdot\frac{(4\pi n)^{2r-1}}{(2r-2)!} \sum_{i + j = n}a(i)b(j)\int_0^\infty p_{r-k-1}(4\pi j y)e^{-4\pi n y} y^{r + k - 1}\,dy,\]
where
\[p_m(x) = \sum_{a = 0}^m \binom{m}{a}\frac{(-x)^a}{a!}.\]
The integral is evaluated using the following lemma.

\begin{lemma}
Let $m, k \geq 0$. Then
\[\int_0^\infty p_m(4\pi j y)e^{-4\pi (i + j)y} y^{m + 2k}\,dy = \frac{(m + 2k)!}{(4\pi(i + j))^{m+2k+1}}H_{m,k}\left(\frac{i-j}{i+j}\right).\]
\end{lemma}

\begin{proof}
Evaluating the elementary integrals, we find that the left-hand side is equal to
\[\frac{m!}{(4\pi (i+ j))^{m + 2k + 1}}G_{m,k}\left(\frac{j}{i + j}\right),\]
where
\[G_{m,k}(t) = \sum_{a = 0}^m (-1)^a \frac{(m + 2k + a)!}{(a!)^2 (m - a)!} t^a.\]
It therefore suffices to prove the identity
\begin{equation}\label{combo}
G_{m,k}(t) = \frac{(m+2k)!}{m!}H_{m,k}(1 - 2t).
\end{equation}
This is proved by showing that both sides satisfy the same defining recurrence relation (and base cases). Indeed, one can check directly that for $m \geq 1$:
\begin{align}\label{recur}
(m + 1)^2(m + k)&G_{m+1,k}(t) =\\
(2m + 2k +1)&\left[m^2 + m + 2km + k - (m + k)(2m+ 2k + 2)t\right]G_{m,k}(t)\nonumber \\
&- (m +k +1)(m + 2k)^2G_{m - 1,k}(t).\nonumber
\end{align}
That the right-hand side of (\ref{combo}) satisfies the same recurrence relation amounts to the well-known recurrence relation for the Jacobi polynomials
\[P_n^{(\alpha,\beta)}(t) = \frac{(-1)^n}{2^n n!}(1 - t)^{-\alpha}(1 + t)^{-\beta} \frac{d^n}{dt^n}\left[ (1 - t)^{\alpha}(1 + t)^{\beta} (1 - t^2)^n\right].\]
Indeed, we have
\[H_{m,k}(t) = 2^{2k}\cdot P^{(0,-2k)}_{m + 2k}(t)\,(1+t)^{-2k},\]
and one checks that the recurrence relation
\begin{align*}
2(n+1)(n + \beta + 1)(2n + \beta)&P_{n+1}^{(0,\beta)}(t) =\\
(2n + &\beta +1)\left[(2n + \beta+2)(2n + \beta )t - \beta^2\right]P_n^{(0,\beta)}(t)\\
&- 2n(n + \beta)(2n + \beta + 2)P_{n-1}^{(0,\beta)}(t)
\end{align*}
translates (using $n = m + 2k$ and $\beta = -2k$) into the recurrence (\ref{recur}) for the polynomials $\frac{(m + 2k)!}{m!}H_{m,k}(1 - 2t)$.
\end{proof}

Finally, to prove the proposition, we simply take $m = r - k - 1$ in the previous lemma and simplify the above expression for $c(n)$.
\end{proof}

Recall that for any ideal class $\mathcal{A}$, we have defined
$$r_{\mathcal{A},\chi}(j) = \sum_{\substack{\mathfrak{a} \in \mathcal{A} \\ \mathfrak{a} \subset \mathcal{O} \\ \mathbf{N}(\mathfrak{a}) = j}} \chi(\mathfrak{a}).$$
Putting together Lemma \ref{theta}, Proposition \ref{jacobi}, and the manipulation of symbols in \cite[I.6.5]{Nek}, we obtain
\begin{align*}
a_m\left(\int_{\mathbb{Z}_p^\times} \lambda\, d\tilde{\Psi}_{\mathcal{A}} \right) & = \frac{(-1)^{r - k - 1}}{\binom{2r - 2}{r - k - 1}}m^{r-k-1}\left(\frac{D}{-N}\right)\sum_{D = D_1D_2} \left(\frac{D_2}{N\mathfrak{a}}\right)\chi(\mathcal{D}_1)^{-1}\\
&\times\sum_{\substack{j +nN = |D_1|m\\ (p,j) = 1}}\ \sum_{\substack{d \mid n\\ (p,d) =1}} r_{\mathcal{A}\mathcal{D}_1^{-1},\chi}(j) \left(\frac{D_2}{-dN}\right)\left(\frac{D_1}{|D_2|n/d}\right)\\
& \times \lambda\left(\frac{m|D_1| - nN}{|D_1|d^2}\right) \times H_{r - k -1,k}\left(1 - \frac{2nN}{m|D_1|}\right).
\end{align*}

\begin{lemma}
$$r_{\mathcal{A}\mathcal{D}_1^{-1},\chi}(j) = \chi(\mathcal{D}_2)^{-1}r_{\mathcal{A},\chi}(j|D_2|).$$
\end{lemma}

\begin{proof}
Since $\mathcal{D}_1$ is $2$-torsion in the class group, the left-hand side equals $r_{\mathcal{A}\mathcal{D}_1,\chi}(j)$.
The lemma now follows from the definitions once one notes that $\mathfrak{b} \mapsto \mathfrak{b}\mathcal{D}_2$ is a bijection from integral ideals of norm $j$ in $\mathcal{A}\mathcal{D}_1$ to integral ideals of norm $j|D_2|$ in $\mathcal{A}\mathcal{D}$.
\end{proof}

Using the lemma and also the change of variables employed in \cite{Nek}, we obtain our version of \cite[Proposition 6.6]{Nek}.

\begin{proposition}
If $p \mid m$, then
\begin{align*}
a_m\left(\int_{\mathbb{Z}_p^\times} \lambda\, d\tilde{\Psi}_{\mathcal{A}} \right) =& \frac{(-1)^{r - 1}}{\binom{2r - 2}{r -k - 1}}m^{r-k-1}\left(\frac{D}{-N}\right)|D|^{-k}\sum_{\substack{1 \leq n \leq \frac{m|D|}{N}\\ (p,n) = 1}}r_{\mathcal{A},\chi}(m|D| - nN) \\
&\times H_{r - k -1,k}\left(1 - \frac{2nN}{m|D|}\right)\sum_{d \mid n}\epsilon_{\mathcal{A}}(n,d)\, \lambda\left(\frac{m|D| - nN}{|D|}\cdot \frac{d^2}{n^2}\right).
\end{align*}
Here, $\epsilon_{\mathcal{A}}(n,d) = 0$ if $(d,n/d, |D|) > 1$, and otherwise
\[\epsilon_{\mathcal{A}}(n,d) = \left(\frac{D_1}{d}\right)\left(\frac{D_2}{-nN/d}\right)\left(\frac{D_2}{\mathbf{N}(\mathcal{A})}\right),\]
where $(d,|D|) = |D_2|$ and $D = D_1D_2$.
\end{proposition}

\begin{proof}
The proof is as in \cite{Nek}. We have also used the fact that $\chi(\mathcal{D}) = D^k$ to get the extra factor of $|D|^{-k}$ and the correct sign (recall that $D$ is negative!).
\end{proof}

\begin{corollary}\label{fourier}
If $\left(\frac{D}{N}\right) = 1$ and $p \mid m$, then
\begin{align*}
a_m&\left(\int_{\mathbb{Z}_p^\times} \log_p d\tilde{\Psi}_{\mathcal{A}} \right) = \\
&\frac{(-1)^r}{\binom{2r - 2}{r -k - 1}}m^{r-k-1}|D|^{-k}\sum_{\substack{1 \leq n \leq \frac{m|D|}{N}\\ (p,n) = 1}}r_{\mathcal{A},\chi}(m|D| - nN)\,\sigma_{\mathcal{A}}(n)\,H_{r - k -1,k}\left(1 - \frac{2nN}{m|D|}\right),
\end{align*}
with
$$\sigma_{\mathcal{A}}(n) = \sum_{d \mid n} \epsilon_{\mathcal{A}}(n,d) \log_p\left(\frac{n}{d^2}\right).$$
\end{corollary}

\begin{proof}
As in \cite{PR1}.
\end{proof}

\section{Generalized Heegner cycles}\label{cycles}

In the previous section we computed Fourier coefficients of $p$-adic modular forms closely related to the derivative of $L_p(f,\chi)$ at the trivial character in the cyclotomic direction. We expect similar-looking Fourier coefficients to appear as the sum of local heights of certain cycles, with the sum varying over the finite places of $H$ which are prime to $p$. These cycles should come from the motive attached to $f \otimes \Theta_{\chi}$. Since $\Theta_{\chi}$ has weight $2k + 1$, work of Deligne and Scholl provides a motive inside the cohomology of a Kuga-Sato variety which is the fiber product of $2k-1$ copies of the universal elliptic curve over $X_1(|D|)$. We work with a closely related motive, which we describe now.

We fix an elliptic curve $A/H$ with the following properties:
\begin{enumerate}
\item $\mathrm{End}_H(A) = \mathcal{O}_K$.
\item $A$ has good reduction at primes above $p$.
\item $A$ is isogenous to each of its $\mathrm{Gal}(H/K)$-conjugates.
\item $A^\tau \cong A$, where $\tau$ is complex conjugation.
\end{enumerate}

\begin{remark}
Since $D$ is odd, we may even choose such an $A$ with the added feature that $\psi_A^2$ is an unramified Hecke character of type $(2,0)$ (see \cite{Roh}). In that case, $\psi^{2k}_A$ differs from $\chi$ by a character of $\mathrm{Gal}(H/K)$, so this is a natural choice of $A$, given $\chi$. In general, $\psi_A^{2k}\chi^{-1}$ is a finite order Hecke character.
\end{remark}

We will use a two-dimensional submotive of $A^{2k}$ whose $\ell$-adic realizations are isomorphic to those of the Deligne-Scholl motive for $\Theta_{\psi_A^{2k}}$ (see \cite{BDP3}). From Property (3), $A$ is isogenous to $A^\sigma$ over $H$ for each $\sigma \in G := \mathrm{Gal}(H/K)$. If $\sigma$ corresponds to an ideal class $[\mathfrak{a}] \in \mathrm{Pic}(\mathcal{O}_K)$ via the Artin map, then one such isogeny $\phi_{\mathfrak{a}} : A \to A^\sigma$ is given by $A \to A/A[\mathfrak{a}]$, at least if $\mathfrak{a}$ is integral. A different choice of integral ideal $\mathfrak{a}' \in [\mathfrak{a}]$ gives an isomorphic elliptic curve over $H$, and the maps $\phi_{\mathfrak{a}}$ and $\phi_{\mathfrak{a}'}$ will differ by endomorphisms of $A$ and $A^\sigma$.

As in the introduction, let $Y(N)/\mathbb{Q}$ be the modular curve parametrizing elliptic curves with full level $N$ structure, and let $\mathcal{E} \to Y(N)$ be the universal elliptic curve with level $N$ structure. The canonical non-singular compactification of the $(2r-2)$-fold fiber product
$$\mathcal{E} \times_{Y(N)} \cdots \times_{Y(N)} \mathcal{E}$$
will be denoted by $W = W_{2r-2}$ \cite{Sch}; $W$ is a variety over $\mathbb{Q}$.
The map $W \to X(N)$ to the compactified modular curve has fibers (over non-cuspidal points) of the form $E^{2r-2}$, for some elliptic curve $E$. We set
$$X = X_{r,N,k} = W_H \times A^{2k},$$
where $W_H$ is the base change to $H$. Recall the curve $X_0(N)/\mathbb{Q}$, the coarse moduli space of generalized elliptic curves with a cyclic subgroup of order $N$. $X_0(N)$ is the quotient of $X(N)$ by the action of the standard Borel subgroup $B \subset \mathrm{GL}_2\left(\mathbb{Z}/N\mathbb{Z}\right)/\{\pm 1\}$.

The computations of the Fourier coefficients in the previous section suggest that we consider the following \textit{generalized Heegner cycle} on $X$. Fix a Heegner point $y \in Y_0(N)(H)$ represented by a cyclic $N$-isogeny $A \to A'$, for some elliptic curve $A'/H$ with CM by $\mathcal{O}_K$. Such an isogeny exists since each prime dividing $N$ splits in $K$. Also let $\tilde{y}$ be a point of $Y(N)_H$ over $y$. The fiber $E_{\tilde{y}}$ of the universal elliptic curve $\mathcal{E} \to Y(N)$ above the point $\tilde{y}$ is isomorphic to $A_F$, where $F \supset H$ is the residue field of $\tilde{y}$. Let
$$\Delta \subset E_{\tilde{y}} \times A_F \cong A_F \times A_F$$
be the diagonal, and we write $\Gamma_{\sqrt{D}} \subset E_{\tilde{y}} \times E_{\tilde{y}}$ for the graph of $\sqrt{D} \in \mathrm{End}(E_{\tilde{y}}) \cong \mathcal{O}_K$. We define
$$Y = \Gamma_{\sqrt{D}}^{r-1-k} \times \Delta^{2k} \subset X_{\tilde{y}} \cong A_F^{2r-2} \times A_F^{2k},$$
so that $Y \in \mathrm{CH}^{k+r}(X_F)$. Here $X_{\tilde{y}}$ is the fiber of the natural projection $X \to X(N)$ above the point $\tilde{y}$.

Since $X$ is not defined over $\mathbb{Q}$, we need to find cycles to play the role of $\mathrm{Gal}(H/K)$-conjugates of $Y$. For each $\sigma \in \mathrm{Gal}(H/K)$ we have a corresponding ideal class $\mathcal{A}$. For each integral ideal $\mathfrak{a} \in \mathcal{A}$, define the cycle $Y^{\mathfrak{a}}$ as follows:
\begin{equation*}
Y^{\mathfrak{a}} = \Gamma_{\sqrt{D}}^{r-k-1} \times \left(\Gamma^t_{\phi_{\mathfrak{a}}}\right)^{2k} \subset \left(A_F^{\mathfrak{a}} \times A_F^{\mathfrak{a}} \right)^{r-k-1} \times \left(A_F^{\mathfrak{a}} \times A_F\right)^{2k} = X_{\tilde{y}^\sigma} \subset X_F.
\end{equation*}
Here, $\Gamma^t_{\phi_{\mathfrak{a}}}$ is the transpose of $\Gamma_{\phi_{\mathfrak{a}}}$, the graph of $\phi_{\mathfrak{a}}: A \to A^{\mathfrak{a}}$.
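For orientation, we record the dimension count behind the assertion $Y \in \mathrm{CH}^{k+r}(X_F)$; it is immediate from the definitions above. The variety $W$ has dimension $2r-1$, so $\dim X = 2r-1+2k$, while
$$\dim Y = (r-1-k)\cdot\dim \Gamma_{\sqrt{D}} + 2k\cdot\dim\Delta = (r-1-k) + 2k = r+k-1,$$
so $Y$ (and likewise each $Y^{\mathfrak{a}}$) has codimension $(2r-1+2k)-(r+k-1) = r+k$ in $X$.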
The cycle $Y^{\mathfrak{a}} \in \mathrm{CH}^{k+r}(X_F)$ is \textit{not} independent of the class of $\mathfrak{a}$ in $\mathrm{Pic}(\mathcal{O}_K)$, but certain expressions involving $Y^{\mathfrak{a}}$ \textit{will} be independent of the class of $\mathfrak{a}$. Note that $Y = Y^{\mathcal{O}_K}$.

\subsection{Projectors}
Next we define a projector $\epsilon \in \mathrm{Corr}^0(X, X)_K$ so that $\epsilon Y^{\mathfrak{a}}$ lies in the group $\mathrm{CH}^{r+k}(X_F)_{0,K}$ of homologically trivial $(r+k)$-cycles with coefficients in $K$. Here, $\mathrm{Corr}^0(X,X)_K$ is the ring of degree $0$ correspondences with coefficients in $K$. For definitions and conventions concerning motives, correspondences, and projectors, see \cite[\S2]{BDP3}.

The projector is defined as $\epsilon = \epsilon_X = \epsilon_W\epsilon_\ell$. Here, $\epsilon_W$ is the pullback to $X$ of the Deligne-Scholl projector $\tilde{\epsilon}_W \in \mathbb{Q}[\mathrm{Aut}(W)]$ which projects onto the subspace of $H^{2r-1}(W)$ coming from modular forms of weight $2r$ (see e.g.\ \cite[\S2]{BDP1}). The second factor $\epsilon_\ell$ is the pullback to $X$ of the projector
$$\epsilon_\ell = \left(\frac{\sqrt{D} + [\sqrt{D}]}{2\sqrt{D}}\right)^{\otimes \ell} \circ \left(\frac{1 - [-1]}{2}\right)^{\otimes \ell} \in \mathrm{Corr}^0(A^\ell, A^\ell)_K,$$
denoted by the same symbol. On the $p$-adic realization of the motive $M_{A^\ell,K}$, $\epsilon_\ell$ projects onto the $1$-dimensional $\mathbb{Q}_p$-subspace $V_{\mathfrak{p}}^{\otimes 2k}A$ of
$$\mathrm{Sym}^{2k}H_{\mathrm{\acute{e}t}}^1(\bar{A},\mathbb{Q}_p)(k) \subset H_{\mathrm{\acute{e}t}}^{2k}(\bar{A}^{2k},\mathbb{Q}_p(k)).$$
Here, $\mathfrak{p}$ is the prime of $K$ above $p$ which is determined by our chosen embedding $K \hookrightarrow \bar{\mathbb{Q}}_p$, and $V_{\mathfrak{p}}A = \varprojlim_{n} A[\mathfrak{p}^n] \otimes \mathbb{Q}_p$ is the $\mathfrak{p}$-adic Tate module of $A$. See Section \ref{ordsec} and \cite[\S1.2]{BDP3} for more details.

We also make use of the projectors
$$\bar{\epsilon}_\ell = \left(\frac{\sqrt{D} - [\sqrt{D}]}{2\sqrt{D}}\right)^{\otimes \ell}\circ \left(\frac{1 - [-1]}{2}\right)^{\otimes \ell} \in \mathrm{Corr}^0(A^\ell, A^\ell)_K$$
and $\kappa_\ell = \epsilon_\ell + \bar{\epsilon}_\ell$. The first projects onto $V_{\bar{\mathfrak{p}}}A^{\otimes \ell}$ and the latter onto $V_{\mathfrak{p}} A^{\otimes \ell} \oplus V_{\bar{\mathfrak{p}}}A^{\otimes \ell}$. Set $\bar{\epsilon} = \epsilon_W \bar{\epsilon}_\ell$ and $\epsilon' = \epsilon_W \kappa_\ell$.
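For the reader's convenience, here is the standard reason these correspondences are idempotent on the relevant piece of cohomology: the factor $\tfrac{1-[-1]}{2}$ cuts out $H^1(\bar{A},\mathbb{Q}_p)$ inside $H^*(\bar{A},\mathbb{Q}_p)$, and on $H^1$ the endomorphism $T = [\sqrt{D}]$ satisfies $T^2 = D$, so
$$\left(\frac{\sqrt{D} + T}{2\sqrt{D}}\right)^2 = \frac{D + 2\sqrt{D}\,T + T^2}{4D} = \frac{\sqrt{D} + T}{2\sqrt{D}},$$
and similarly for $\frac{\sqrt{D}-T}{2\sqrt{D}}$; the two are orthogonal (their product is $\frac{D - T^2}{4D} = 0$) and sum to the identity on $H^1$, which is why $\kappa_\ell = \epsilon_\ell + \bar{\epsilon}_\ell$ is again a projector.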
\begin{remark}\label{descend}
For this remark, suppose that $\chi = \psi^\ell$, where $\psi$ is the $(1,0)$-Hecke character attached to $A$ by the theory of complex multiplication. This means that the $G_H$-action on $H^1(\bar{A}, \mathbb{Q}_p)(1)$ is given by the $(K \otimes \mathbb{Q}_p)^\times$-valued Galois character $\psi_H = \psi \circ \mathrm{Nm}_{H/K}$. If we write $\chi_H = \psi_H^\ell$, then the motive $M(\chi_H)$ over $H$ (with coefficients in $K$) from Section \ref{apps} is defined by the triple $(A^{2k}, \kappa_\ell, k)$. We explain how to descend this to a motive over $K$ with coefficients in $\mathbb{Q}(\chi)$ (this is a modification of a construction from an earlier draft of \cite{BDP3}).

Let $e_K$ and $\bar{e}_K$ be the idempotents in $K \otimes K$ corresponding to the first and second projections $K \otimes K \cong K \times K \to K$. For each $\sigma \in \mathrm{Gal}(H/K)$ choose an ideal $\mathfrak{a} \subset \mathcal{O}_K$ corresponding to $\sigma$ under the Artin map and define
\[\Gamma(\sigma) := e_K \cdot (\phi_{\mathfrak{a}} \times \cdots \times \phi_{\mathfrak{a}}) \otimes \chi(\mathfrak{a})^{-1} \in \mathrm{Hom}\left(A^\ell, (A^\ell)^\sigma\right) \otimes_{\mathbb{Q}} \mathbb{Q}(\chi),\]
\[\bar{\Gamma}(\sigma) := \bar{e}_K \cdot (\phi_{\mathfrak{a}} \times \cdots \times \phi_{\mathfrak{a}}) \otimes \bar{\chi}(\mathfrak{a})^{-1} \in \mathrm{Hom}\left(A^\ell, (A^\ell)^\sigma\right) \otimes_{\mathbb{Q}} \mathbb{Q}(\chi).\]
Since $\chi(\gamma \mathfrak{a}) = \gamma^\ell \chi(\mathfrak{a})$ and $\phi_{\gamma \mathfrak{a}} = \gamma \phi_{\mathfrak{a}}$, these definitions are independent of the choice of $\mathfrak{a}$. Moreover,
\[\Gamma(\sigma \tau) = \Gamma(\sigma)^\tau \circ \Gamma(\tau),\]
and similarly for $\bar{\Gamma}$.
We set
\[\Lambda(\sigma) = \kappa_\ell \circ (\Gamma(\sigma) + \bar \Gamma(\sigma)) \circ \kappa_\ell^\sigma \in \mathrm{Corr}^0(A^\ell, (A^\sigma)^\ell)_\mathbb{Q} \otimes_\mathbb{Q} \mathbb{Q}(\chi).\]
Then the collection $\{\Lambda(\sigma)\}_\sigma$ gives descent data for the motive $M(\chi_H) \otimes \mathbb{Q}(\chi)$, hence determines a motive $M(\chi)$ over $K$ with coefficients in $\mathbb{Q}(\chi)$. The $p$-adic realization of $M(\chi)$ is $\chi \oplus \bar\chi$, where $\chi$ is now thought of as a $\mathbb{Q}(\chi)\otimes \mathbb{Q}_p$-valued character of $G_K$.
\end{remark}

Define the following sheaf on $X(N)$:
$$\mathcal{L} = j_*{\rm Sym}^w(R^1f_*\mathbb{Q}_p) \otimes \kappa_\ell H^{2k}_{\rm et}(\bar A^{2k}, \mathbb{Q}_p(k)),$$
where $w = 2r- 2$, and $j: Y(N) \hookrightarrow X(N)$ and $f: \mathcal{E} \to Y(N)$ are the natural maps. From now on we drop the subscript `${\rm et}$' from all cohomology groups and set $\bar Z = Z \times_{{\rm Spec}\, k} {\rm Spec}\, \bar k$ for any variety defined over a field $k$. We also use the notation $V_K = V \otimes K$, for any abelian group $V$.

\begin{theorem}
There is a canonical isomorphism
$$H^1(\overline{X(N)}, \mathcal{L}) \xrightarrow{\sim} \epsilon' H^{2r+2k-1}(\bar X,\mathbb{Q}_p) = \epsilon' H^*(\bar X, \mathbb{Q}_p).$$
\end{theorem}
\begin{proof}
See \cite[II.2.4]{Nek} and \cite[Prop.\ 2.4]{BDP1}.
\end{proof}

\begin{corollary}\label{homtriv}
The cycles $\epsilon Y^\mathfrak{a}$ and $\bar\epsilon Y^\mathfrak{a}$ are homologically trivial on $X_F$, i.e.\ they lie in the domain of the $p$-adic Abel-Jacobi map
$$\Phi: \mathrm{CH}^{r+k}(X_F)_{0,K} \to H^1(F,H^{2r+2k-1}(\bar X, \mathbb{Q}_p(r+k))).$$
\end{corollary}
\begin{proof}
By the theorem, $\epsilon' Y^\mathfrak{a}$ is in the kernel of the map
$$\mathrm{CH}^{r+k}(X_F)_K \to H^{2r+2k}(\bar X_F, \mathbb{Q}_p(r+k)),$$
i.e.\ it is homologically trivial. Moreover, $\epsilon = \epsilon \epsilon'$ and $\bar\epsilon = \bar\epsilon \epsilon'$. Since Abel-Jacobi maps commute with algebraic correspondences, it follows that $\epsilon Y^\mathfrak{a}$ and $\bar\epsilon Y^\mathfrak{a}$ are homologically trivial as well.
\end{proof}

\subsection{Bloch-Kato Selmer groups}
Let $F$ be a finite extension of $\mathbb{Q}_\ell$ ($\ell$ a prime, possibly equal to $p$) and let $V$ be a continuous $p$-adic representation of $\mathrm{Gal}(\bar F/F)$. Then there is a Bloch-Kato subgroup $H^1_f(F, V) \subset H^1(F,V)$, defined for example in \cite{BK} or \cite[1.12 and 2.1.4]{Nekhts}. If $\ell \neq p$ (resp.\ $\ell = p$) and $V$ is unramified (resp.\ crystalline), then $H^1_f(F,V) = \mathrm{Ext}^1(\mathbb{Q}_p, V)$ in the category of unramified (resp.\ crystalline) representations of $\mathrm{Gal}(\bar F/F)$. If instead $F$ is a number field, then $H^1_f(F,V)$ is defined to be the set of classes in $H^1(F,V)$ which restrict to classes in $H^1_f(F_v,V)$ for all finite primes $v$ of $F$.

The Bloch-Kato Selmer group plays an important role in the general theory of $p$-adic heights of homologically trivial algebraic cycles on a smooth projective variety $X/F$ defined over a number field $F$. Indeed, Nekov\'{a}\v{r}'s $p$-adic height pairing is only defined on $H^1_f(F, V)$, and not on the Chow group $\mathrm{CH}^j(X)_0$ of homologically trivial cycles of codimension $j$. Here $V = H^{2j-1}(\bar X, \mathbb{Q}_p(j))$. This is compatible with the Bloch-Kato conjecture \cite{BK}, which asserts (among other, much deeper statements) that the image of the Abel-Jacobi map
\[\Phi: \mathrm{CH}^j(X)_0 \to H^1(F,V)\]
is contained in $H^1_f(F,V)$. The next couple of results follow \cite[II.2]{Nek} and verify this aspect of the Bloch-Kato conjecture in our situation, allowing us to consider $p$-adic heights of generalized Heegner cycles. We also give a more concrete description of the Abel-Jacobi images of generalized Heegner cycles in terms of local systems on the modular curve.

Denote by $b(Y^\mathfrak{a})$ the cohomology class of $\epsilon(\bar Y^\mathfrak{a})$ in the fiber $\bar X_{\tilde y}$, so that $b(Y^\mathfrak{a})$ lies in
$$\epsilon' H^{2r+2k -2}\left(\bar X_{\tilde y^\sigma}, \mathbb{Q}_p(r+k -1)\right)^{G(\bar F/F)} \xrightarrow{\sim} H^0\left(\overline{\tilde y^\sigma}, \mathscr{B}\right)^{G(\bar F/F)},$$
where
$$\mathscr{B} = {\rm Sym}^{2r-2}(R^1f_*\mathbb{Q}_p)(r-1) \otimes \kappa_\ell H^{2k}\left(\bar A^{2k}, \mathbb{Q}_p(k)\right)$$
is a sheaf on $Y(N)$. The isomorphism above follows from proper base change, Lemma 1.8 of \cite{BDP1}, and the K\"unneth formula. Similarly, let $\bar b(Y^\mathfrak{a})$ be the class of $\bar\epsilon \bar Y^\mathfrak{a}$. For the next proposition, let $j: Y(N) \to X(N)$ be the inclusion.

\begin{theorem}\label{AJ}
Set $V = H^{2r+2k-1}(\bar X, \mathbb{Q}_p(r+k))$.
\begin{enumerate}
\item $V$ is a crystalline representation of $\mathrm{Gal}(\bar H_v/H_v)$ for all $v | p$.
\item The Abel-Jacobi images $z^\mathfrak{a} = \Phi(\epsilon Y^\mathfrak{a}), \bar z^\mathfrak{a} = \Phi(\bar\epsilon Y^\mathfrak{a}) \in H^1(F, V)$ lie in the subspace $H^1_f\left(F, V\right)$.
\item The element $z^\mathfrak{a}$, thought of as an extension of $p$-adic Galois representations, can be obtained as the pull back of
$$ 0 \to H^1(\overline{X(N)}, j_*\mathscr{B})(1) \to H^1(\overline{X(N)} - \overline{\tilde y^\sigma}, j_*\mathscr{B})(1) \to H^0(\overline{\tilde y^\sigma}, \mathscr{B}) \to 0$$
by the map $\mathbb{Q}_p \to H^0\left(\overline{\tilde y^\sigma}, \mathscr{B}\right)$ sending $1$ to $b(Y^\mathfrak{a})$, and similarly for $\bar z^\mathfrak{a}$. In particular, $z^\mathfrak{a}$ and $\bar z^\mathfrak{a}$ only depend on $b(Y^\mathfrak{a})$ and $\bar b(Y^\mathfrak{a})$ respectively.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) follows from Faltings' theorem \cite{Falt} and the fact that $X$ has good reduction at primes above $p$. (2) is a general result due to Nekov\'{a}\v{r}, see \cite[Theorem 3.1]{Nek2}. To apply the result one needs to know the purity conjecture for the monodromy filtration for $X$. But this is known for $W$ and $A^\ell$, so it holds for $X$ as well \cite[3.2]{Nek2}. We note that (2) is ultimately a local statement at each place $v$ of $H$, and for $v | p$, the approach taken in the proof of Theorem \ref{crysmixed} below gives an alternate proof of this local statement. Statement (3) can be proved exactly as in \cite[II.2.4]{Nek}.
\end{proof}

\begin{definition}
If $F/H$ is a field extension, then a \textit{Tate vector} is an element in $H^0(\bar y_0, \mathscr{B})^{\mathrm{Gal}(\bar F/F)}$ for some $y_0 \in Y(N)(F)$. A \textit{Tate cycle} is a formal finite sum of Tate vectors over $F$. The group of Tate cycles is denoted $Z(Y(N), F)$.
\end{definition}

Let $\pi: X(N) \to X_0(N) = X(N)/B$ be the quotient map, and as in \cite{Nek}, define $\epsilon_B = (\#B)^{-1}\sum_{g \in B} g$, which acts on $X(N)$ and its cohomology. Set $\mathscr{A} = (\pi_* \mathscr{B})^B$, $a(Y^\mathfrak{a}) = \epsilon_B b(Y^\mathfrak{a})$, and $\bar a(Y^\mathfrak{a}) = \epsilon_B \bar b(Y^\mathfrak{a})$. We define the group $Z(Y_0(N), F)$ of Tate cycles on $Y_0(N)$ exactly as for $Y(N)$, but with $\mathscr{B}$ replaced by $\mathscr{A}$. Let $j_0: Y_0(N) \to X_0(N)$ be the inclusion. Note that $a(Y^\mathfrak{a})$ is an element of $Z(Y(N),H)$, not just $Z(Y(N),F)$.
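We record the standard observation (included here only for convenience) that $\epsilon_B$ is indeed a projector: left multiplication by any $h \in B$ permutes the elements of $B$, so
\[\epsilon_B^2 = (\#B)^{-2}\sum_{g,h \in B} hg = (\#B)^{-2}\cdot \#B \sum_{g' \in B} g' = \epsilon_B,\]
and on any module on which $B$ acts and in which $\#B$ is invertible (for instance cohomology with $\mathbb{Q}_p$-coefficients), $\epsilon_B$ projects onto the subspace of $B$-invariants.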
\begin{proposition}
The element $\Phi(\epsilon_B \epsilon Y^\mathfrak{a}) \in H^1\left(H, H^1\left(\overline{X_0(N)}, (j_0)_*\mathscr{A}\right)\left(1\right)\right)$, thought of as an extension of $p$-adic Galois representations, can be obtained as the pull back of
$$ 0 \to H^1\left(\overline{X_0(N)}, j_*\mathscr{A}\right)(1) \to H^1\left(\overline{X_0(N)} - \bar y^\sigma, j_*\mathscr{A}\right)(1) \to H^0(\bar y^\sigma, \mathscr{A}) \to 0$$
by the map $\mathbb{Q}_p \to H^0\left(\overline{y^\sigma}, \mathscr{A}\right)$ sending $1$ to $a(Y^\mathfrak{a})$. In particular, $\Phi(\epsilon_B \epsilon Y^\mathfrak{a})$ only depends on $a(Y^\mathfrak{a})$. Similarly, $\Phi(\epsilon_B \bar \epsilon Y^\mathfrak{a})$ depends only on $\bar a(Y^\mathfrak{a})$.
\end{proposition}

In fact, for any field $F/H$ one can define a map $\Phi_T: Z(Y_0(N), F) \to H^1(F, H^1(\bar X_0(N), j_{0*}\mathscr{A})(1))$ by pulling back the appropriate exact sequence as above. We then have $\Phi(\epsilon_B \epsilon Y^\mathfrak{a}) = \Phi_T(a(Y^\mathfrak{a}))$ and $\Phi(\epsilon_B \bar \epsilon Y^\mathfrak{a}) = \Phi_T(\bar a(Y^\mathfrak{a}))$. For more detail, see \cite[II.2.6]{Nek}.

\subsection{Hecke operators}
The Hecke operators on $W_{2r-2}$ from \cite{Nek} pull back to give Hecke operators $T_m$ on $X$. The $T_m$ are correspondences on $X$; they act on Chow groups and cohomology groups and commute with Abel-Jacobi maps. To describe the action of the Hecke algebra $\mathbb{T}$ on Tate vectors, we need to say what $T_m$ does to an element of $H^0(\bar y_0, \mathscr{A})^{G(\bar F/F)}$ for an arbitrary point $y_0 \in X_0(N)(F)$, $F$ an extension of $H$. Such an element is represented by a triple $(E,C,b)$ where $E$ is an elliptic curve, $C$ is a subgroup of order $N$, and
$$b \in {\rm Sym}^w(H^1(\bar E, \mathbb{Q}_p))(r-1) \otimes \kappa_\ell {\rm Sym}^{2k} (H^1(\bar A, \mathbb{Q}_p))(k).$$
As the Hecke operators are defined via base change from those on $W_{2r-2}$, we have
$$T_m(E,C,b) = \sum_{\substack{\lambda: E \to E' \\ \deg(\lambda) = m}} (E', \lambda(C), (\lambda^w \times {\rm id})_*(b)),$$
where we are using the map $\lambda^w \times {\rm id} : E^w \times A^\ell \to E'^w \times A^\ell$.

Now set $V_{r,A,\ell} = \epsilon_B \epsilon' V = H^1(\overline{X_0(N)}, (j_0)_*\mathscr{A})(1)$, a subrepresentation of $V$. Then $z^\mathfrak{a} := \Phi(\epsilon_B \epsilon Y^\mathfrak{a})$ lands in the Bloch-Kato subspace $H^1_f(H,V_{r,A,\ell}) \subset H^1(H,V_{r,A,\ell})$, by Theorem \ref{AJ}.
For any newform $f \in S_{2r}(\Gamma_0(N))$, we let $V_{f,A,\ell}$ be the $f$-isotypic component of $V_{r,A, \ell}$ with respect to the action of $\mathbb{T}$. Consider the $f$-isotypic Abel-Jacobi map
$$\Phi_f: \mathrm{CH}^{r+k}(X)_{0,K} \to H^1_f(H,V_{f,A,\ell}),$$
and set $z^\mathfrak{a}_f = \Phi_f(\epsilon_B \epsilon Y^\mathfrak{a})$ and $\bar z_f^\mathfrak{a} = \Phi_f(\epsilon_B \bar\epsilon Y^\mathfrak{a})$. As is shown in Section \ref{ordsec}, the $p$-adic representation $V_{f,A,\ell}$ is ordinary and satisfies $V_{f,A,\ell} \cong V_{f,A,\ell}^*(1)$. The results of \cite{Nekhts} therefore give a symmetric pairing
$$\langle \, , \rangle_{\ell_K}: H_f^1(H, V_{f,A,\ell}) \times H_f^1(H,V_{f,A,\ell}) \to \mathbb{Q}_p(f),$$
depending on a choice of logarithm $\ell_K : \mathbb{A}_K^\times /K^\times \to \mathbb{Q}_p$ and the canonical splitting of the local Hodge filtrations at places $v$ of $H$ above $p$. We will sometimes omit the dependence on $\ell_K$ in the notation for the heights if a choice has been fixed. If $a,b \in Z(Y_0(N),F)$ are two Tate cycles, then we will write $\left\langle a,b \right\rangle_{\ell_K}$ for $\left\langle \Phi_T(a), \Phi_T(b)\right\rangle_{\ell_K}$.

\subsection{Intersection theory}\label{intersect}
Here we collect some facts about generalized Heegner cycles and their corresponding cohomology classes. We first recall the intersection theory on products of elliptic curves; see \cite[II.3]{Nek} for proofs. Let $E, E', E''$ be elliptic curves over an algebraically closed field $k$ of characteristic not $p$, and set
$$H^i(Y) = H_{\rm et}^i(Y,\mathbb{Q}_p) = \left(\lim_n H^i_{\rm et} (Y, \mathbb{Z}/p^n\mathbb{Z})\right) \otimes \mathbb{Q}_p$$
for any variety $Y/k$. A pair $(\alpha, \beta)$ of isogenies $\alpha \in {\rm Hom}(E'', E)$ and $\beta \in {\rm Hom}(E'', E')$ determines a cycle
$$\Gamma_{\alpha,\beta} = (\alpha,\beta)_*(1) \in \mathrm{CH}^1(E \times E'),$$
where $(\alpha,\beta)_* : \mathrm{CH}^0(E'') \to \mathrm{CH}^1(E \times E')$ is the push forward. The image of $\Gamma_{\alpha,\beta}$ under the cycle class map $\mathrm{CH}^1(E \times E') \to H^2(E \times E')(1)$ will be denoted by $[\Gamma_{\alpha,\beta}]$. Also let $X_{\alpha,\beta}$ be the projection of $[\Gamma_{\alpha,\beta}]$ to $H^1(E) \otimes H^1(E')(1)$, i.e.
$$X_{\alpha,\beta} = [\Gamma_{\alpha,\beta}] - \deg(\alpha)h - \deg(\beta)v,$$
where $h$ is the horizontal class $[\Gamma_{1,0}]$ and $v$ is the vertical class $[\Gamma_{0,1}]$. If $\alpha \in {\rm Hom}(E, E')$, we write $\Gamma_\alpha$ and $X_{\alpha}$ for $\Gamma_{1,\alpha}$ and $X_{1,\alpha}$, respectively. If $\beta \in {\rm Hom}(E' ,E)$ we write $\Gamma_\beta^t$ and $X_\beta^t$ for $\Gamma_{\beta, 1}$ and $X_{\beta, 1}$, respectively. Finally, let
$$(\, ,\,): H^2(E \times E')(1) \times H^2(E \times E')(1) \to \mathbb{Q}_p$$
be the non-degenerate cup product pairing.

\begin{proposition}\label{bilin}
With notation as above,
\begin{enumerate}
\item The map
$${\rm Hom}(E'', E) \times {\rm Hom}(E'', E') \to H^1(E) \otimes H^1(E')(1)$$
given by $(\alpha,\beta) \mapsto X_{\alpha,\beta}$ is biadditive.
\item The map ${\rm Hom}(E, E') \to H^1(E) \otimes H^1(E')(1)$ given by $\alpha \mapsto X_{\alpha}$ is an injective group homomorphism.
\item If $E = E'$, then $X_{\alpha,\beta} = X_{\beta \hat \alpha}$ and $(X_\alpha, X_\beta) = -\mathrm{Tr}(\alpha \hat \beta)$ for all $\alpha, \beta \in \mathrm{End}(E)$.
\end{enumerate}
Here, $\mathrm{Tr} : \mathrm{End}(E) \to \mathbb{Z}$ is the map $\alpha \mapsto \alpha + \hat\alpha$.
\end{proposition}

It is convenient to think of $H^1(E)$ as $V_pE^* = {\rm Hom}(V_pE, \mathbb{Q}_p)$, where $V_pE = T_pE \otimes \mathbb{Q}_p$ is the $p$-adic Tate module. The Weil pairing
$$V_pE \times V_pE \to \mathbb{Q}_p(1)$$
gives identifications $V_pE^*(1) \cong V_pE$ and $\bigwedge^2 V_pE \cong \mathbb{Q}_p(1)$. We then have the following diagram of isomorphisms:
\[\begin{CD}
\left(V_pE \otimes V_pE\right)(-1) @>>> \left({\rm Sym}^2 V_pE \oplus \bigwedge^2 V_pE\right)(-1) @>>> {\rm Sym}^2 V_pE (-1)\oplus \mathbb{Q}_p\\
@VVV @. @V\delta VV \\
V_pE^* \otimes V_pE @>>> \mathrm{End}(V_pE) @>>> \mathrm{End}_0(V_pE) \oplus \mathbb{Q}_p
\end{CD}\]
One checks that $\delta$ identifies ${\rm Sym}^2 V_pE(-1)$ with the space $\mathrm{End}_0(V_pE)$ of traceless endomorphisms of $V_pE$. Now suppose that $E$ has complex multiplication by $\mathcal{O}_K$ and that $p = \mathfrak{p} \bar{\mathfrak{p}}$ splits in $K$.
Then
$$V_pE = V_{\mathfrak{p}} E \oplus V_{\bar{\mathfrak{p}}}E,$$
where $V_{\mathfrak{p}} = \varprojlim E[\mathfrak{p}^n] \otimes \mathbb{Q}_p$ and $V_{\bar{\mathfrak{p}}} = \varprojlim E[\bar{\mathfrak{p}}^n] \otimes \mathbb{Q}_p$. Let $x^*$ and $y^*$ be a basis for $V_{\mathfrak{p}} E$ and $V_{\bar{\mathfrak{p}}}E$ respectively, and let $x,y$ be the dual basis of $H^1(E)$ arising from the Weil pairing. Since the Weil pairing is non-degenerate, we may assume that $e(x^*,y^*) = 1 \in \mathbb{Q}_p$. If $\alpha \in \mathrm{End}(E)$, then the class $X_\alpha \in H^1(E) \otimes H^1(E)(1)$, when thought of as an element of $\mathrm{End}(V_pE)$ via the isomorphisms above, is simply the map $V\alpha : V_pE \to V_pE$ induced on Tate modules. Thus, $X_1 = \lambda(x \otimes y - y\otimes x)$ for some $\lambda \in \mathbb{Q}_p$. Recall that one can compute the intersection pairing on $H^1(E)^{\otimes 2}$ in terms of the cup product on $H^1(E)$:
$$(a \otimes b, c \otimes d) = -(a \cup c)(b \cup d).$$
Since $(X_1,X_1) = -2$, we conclude that $\lambda = 1$. Next we claim that
\begin{equation}\label{eigen}
X_{\sqrt{D}} = \pm \sqrt D(x \otimes y + y \otimes x).
\end{equation}
To prove this, it suffices to show that $V\sqrt D$ acts on $V_{\mathfrak{p}}$ by $\sqrt D$ and on $V_{\bar {\mathfrak{p}}}$ by $-\sqrt D$. Indeed, under the identifications
\[H^1(E) \otimes H^1(E)(1) \cong V_pE^* \otimes V_pE^*(1) \cong V_pE^* \otimes V_pE \cong \mathrm{End}(V_pE),\]
$x \otimes y$ corresponds to the element $f \in \mathrm{End}(V_pE)$ such that $f(ax^* + by^*) = ax^*$, whereas $y \otimes x$ corresponds to $g \in \mathrm{End}(V_pE)$ such that $g(ax^* + by^*) = -by^*$. To understand how $V\sqrt D$ acts on $V_{\mathfrak{p}}$, write $\mathfrak{p}^n = p^n\mathbb{Z} + \frac{b + \sqrt D}{2}\mathbb{Z}$ for some $b,c \in \mathbb{Z}$ such that $b^2 - 4p^nc = D$, which is possible because $p$ splits in $K$. For $P \in E[\mathfrak{p}^n]$, one has $(b + \sqrt D)(P)= 0$, so $\sqrt D(P) = -bP$. Since $b \equiv \pm \sqrt D$ (mod $\mathfrak{p}^n$), it follows upon taking a limit that $(V\sqrt D)(x^*) = \pm \sqrt D x^*$. Since we can write $\bar{\mathfrak{p}}^n = p^n\mathbb{Z} + \frac{b - \sqrt D}{2}\mathbb{Z}$, we also have $(V\sqrt D)(y^*) = \mp \sqrt D y^*$, and this proves the claim.
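To spell out how the next display follows (a routine verification, written with the $+$ sign in (\ref{eigen}), which is the normalization implicitly used below): write $\gamma = u + v\sqrt{D}$ with $u,v \in \mathbb{Q}$. The additivity of $\alpha \mapsto X_\alpha$ from Proposition \ref{bilin}, extended $\mathbb{Q}$-linearly, gives
\[X_\gamma = uX_1 + vX_{\sqrt{D}} = u(x \otimes y - y \otimes x) + v\sqrt{D}(x \otimes y + y \otimes x) = \gamma\,(x \otimes y) - \bar\gamma\,(y \otimes x).\]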
Hence
$$X_\gamma = \gamma(x \otimes y) - \bar\gamma (y \otimes x) \in H^1(E) \otimes H^1(E)(1),$$
for all $\gamma \in \mathcal{O}_K \hookrightarrow \mathrm{End}(E)$. Finally, note that the projector $\epsilon_1 \in \mathrm{Corr}^0(E,E)_K$ defined earlier acts on $H^1(E)$ as projection onto $V_{\mathfrak{p}}$.

\begin{proposition}
Let $\mathfrak{a} \subset \mathcal{O}_K$ be an ideal and $\mathcal{A} \in \mathrm{Pic}(\mathcal{O}_K)$ its ideal class. Then the elements
$$z_{f,\chi}^\mathcal{A} = \chi(\mathfrak{a})^{-1}z_f^\mathfrak{a} \hspace{5mm} \mbox{and}\hspace{5mm} z_{f,\bar\chi}^\mathcal{A} = \bar\chi(\mathfrak{a})^{-1}\bar z_f^\mathfrak{a}$$
in $H^1_f(H, V_{f,A,\ell})_{\bar{\mathbb{Q}}_p}$ depend only on $\mathcal{A} \in \mathrm{Pic}(\mathcal{O}_K)$.
\end{proposition}
\begin{proof}
To prove the proposition for $z_{f,\chi}^\mathcal{A}$, we wish to relate $z_f^\mathfrak{a}$ to $z_f^{\mathfrak{a}(\gamma)}$ for some $\gamma \in \mathcal{O}_K$ and some integral ideal $\mathfrak{a}$. The contribution to $z_f^\mathfrak{a}$ from one of the ``generalized'' components $\Gamma_{\phi_\mathfrak{a}}^t \subset A^\mathfrak{a} \times A$ is $\epsilon X_{\phi_\mathfrak{a}, 1}$, where $X_{\phi_\mathfrak{a},1} \in H^1(\bar A^\mathfrak{a}, \mathbb{Q}_p) \otimes H^1(\bar A, \mathbb{Q}_p)$ is the class of
$$\Gamma_{\phi_\mathfrak{a}}^t - \deg(\phi_\mathfrak{a})h - v \in \mathrm{CH}^1(A^\mathfrak{a} \times A),$$
as above. Let $x,y$ be a basis of $H^1(\bar A, \mathbb{Q}_p)$ such that
$$X_{\gamma,1} = \bar\gamma(x \otimes y) - \gamma(y \otimes x) \in H^1(\bar A, \mathbb{Q}_p) \otimes H^1(\bar A, \mathbb{Q}_p),$$
for all $\gamma \in \mathcal{O}_K$.
Let $x_\mathfrak{a}, y_\mathfrak{a}$ be the basis of $H^1(\bar A^\mathfrak{a}, \mathbb{Q}_p)$ corresponding to $x,y$ under the isomorphism $\phi_\mathfrak{a}^*: H^1(\bar A^\mathfrak{a}, \mathbb{Q}_p) \to H^1(\bar A, \mathbb{Q}_p)$. One checks that
$$(\phi_\mathfrak{a} \times {\rm id})^*(X_{\phi_\mathfrak{a}, 1}) = \deg(\phi_\mathfrak{a})X_{1,1}$$
and so
$$X_{\phi_\mathfrak{a}, 1} = \deg(\phi_\mathfrak{a})\left(x_\mathfrak{a} \otimes y - y_\mathfrak{a} \otimes x\right).$$
Similarly,
$$X_{\phi_{\mathfrak{a}(\gamma)},1} = X_{\gamma \phi_\mathfrak{a}, 1} = \deg(\phi_\mathfrak{a})\left(\bar\gamma (x_\mathfrak{a} \otimes y) - \gamma(y_\mathfrak{a} \otimes x)\right).$$
Since the projector $\epsilon$ kills $y$, we find that $\epsilon X_{\gamma \phi_\mathfrak{a},1} = \gamma \epsilon X_{\phi_\mathfrak{a},1}$. In the components which come purely from the Kuga-Sato variety $W_{2r-2}$, the two cycles $Y^\mathfrak{a}$ and $Y^{\mathfrak{a}(\gamma)}$ are identical -- they both have the form $\epsilon \Gamma_{\sqrt D}^{r-k-1}$. Taking the tensor product of the $\ell$ ``generalized'' components and the $r-k-1$ Kuga-Sato components, we conclude that
$$z_f^{\mathfrak{a}(\gamma)} = \gamma^\ell z_f^\mathfrak{a},$$
as desired. The proof for $z_{f,\bar\chi}^\mathcal{A}$ is similar: since $\bar z_f^\mathfrak{a}$ is defined using $\bar\epsilon$ instead of $\epsilon$, the extra factor of $\bar\gamma^\ell$ which pops out is accounted for by the factor $\bar\chi(\mathfrak{a})^{-1}$.
\end{proof}

\begin{lemma}\label{invariance}
For any ideal classes $\mathcal{A}, \mathcal{B}, \mathcal{C} \in \mathrm{Pic}(\mathcal{O}_K)$, we have
$$\left\langle z_{f,\chi}^\mathcal{A}, z_{f,\bar\chi}^\mathcal{B} \right\rangle = \left\langle z_{f,\chi}^{\mathcal{A}\mathcal{C}} , z_{f,\bar\chi}^{\mathcal{B}\mathcal{C}}\right\rangle.$$
\end{lemma}
\begin{proof}
It suffices to prove $\left\langle z^{\rm id}_{f,\chi}, z_{f,\bar\chi}^\mathcal{B} \right\rangle = \left\langle z_{f,\chi}^\mathcal{A}, z_{f,\bar\chi}^{\mathcal{B}\mathcal{A}}\right\rangle$ for all $\mathcal{A},\mathcal{B} \in \mathrm{Pic}(\mathcal{O}_K)$. Equivalently, we must show
\begin{equation}\label{htfunc}
\mathrm{Nm}(\mathfrak{a})^\ell \left\langle z^{\mathcal{O}_K}_f , \bar z_f^\mathfrak{b} \right\rangle = \left\langle z_f^\mathfrak{a}, \bar z_f^{\mathfrak{b}\mathfrak{a}}\right\rangle,
\end{equation}
for all integral ideals $\mathfrak{a}$ and $\mathfrak{b}$. Let $\sigma \in \mathrm{Gal}(\bar K/ K)$ restrict to an element of $\mathrm{Gal}(H/K)$ which corresponds to $\mathfrak{a}$ under the Artin map. Consider the morphisms of Chow groups
$$\sigma: \mathrm{CH}^*(\overline{W \times A^\ell})_K \to \mathrm{CH}^*(\overline{W \times (A^\sigma)^\ell})_K$$
and
$$\xi = ({\rm id} \times \phi_\mathfrak{a}^\ell)^*: \mathrm{CH}^*(\overline{W \times (A^\sigma)^\ell})_K \to \mathrm{CH}^*(\overline{W \times A^\ell})_K.$$
After identifying $A^\sigma$ with $A^\mathfrak{a}$, one checks that $(\xi \circ \sigma)(Y^\mathfrak{b}) = Y^{\mathfrak{a}\mathfrak{b}}$. Indeed, since $\mathfrak{a}$ and $\mathfrak{b}$ are integral, the graph of $\phi_\mathfrak{b}^\sigma : A^\sigma \to (A^\mathfrak{b})^\sigma$ can be identified with the graph of the projection map $\phi: A/A[\mathfrak{a}] \to A/A[\mathfrak{a}\mathfrak{b}]$ (first note that the two isogenies have the same kernel and then use the main theorem of complex multiplication). The latter is pulled back to $\Gamma_{\phi_{\mathfrak{a}\mathfrak{b}}}$ by $({\rm id} \times \phi_\mathfrak{a})^*$.
It follows that $(\xi \circ \sigma)(Y^\mathfrak{b}) = Y^{\mathfrak{a}\mathfrak{b}}$, and the identity therefore holds for the corresponding cohomology classes. On cohomology, $\sigma$ and $\xi$ are isomorphisms, so (\ref{htfunc}) follows from the functoriality of $p$-adic heights \cite[Theorem 4.11]{Nekhts}. We are using the fact that $\left(\hat\phi_\mathfrak{a}^\ell\right)^*$ is adjoint to $\left(\phi_\mathfrak{a}^\ell\right)^*$ under the pairing given by Poincar\'e duality, and that $\deg \phi_\mathfrak{a} = \mathrm{Nm}(\mathfrak{a})$.
\end{proof}

The goal now is to compute $\langle z_{f,\chi}, z_{f,\bar \chi}\rangle$, where
$$z_{f,\chi} = \frac{1}{h}\sum_{\mathcal{A} \in \mathrm{Pic}(\mathcal{O}_K)} z_{f,\chi}^\mathcal{A} \hspace{5mm} \mbox{and} \hspace{5mm}z_{f,\bar \chi} = \frac{1}{h}\sum_{\mathcal{A} \in \mathrm{Pic}(\mathcal{O}_K)} z_{f,\bar\chi}^\mathcal{A}.$$
Here, we have extended the $p$-adic height $\bar{\mathbb{Q}}_p$-linearly. Let $\tau \in \mathrm{Gal}(H/\mathbb{Q})$ be a lift of the generator of $\mathrm{Gal}(K/\mathbb{Q})$. As $A$ and $W$ are defined over $\mathbb{R}$, $\tau$ acts on $X = W \times A^\ell$ and its cohomology.

\begin{lemma}\label{atkin}
Let $\mathfrak{n} \subset \mathcal{O}_K$ be the ideal of norm $N$ corresponding to the Heegner point $y \in X_0(N)$, and let $(-1)^r\epsilon_f$ be the sign of the functional equation for $L(f,s)$. Then
$$\tau(z_{f,\chi}^\mathcal{A}) = (-1)^{r-k-1}\epsilon_f \chi(\mathfrak{n})N^{-k} z_{f,\bar\chi}^{\mathcal{A}^{-1}[\bar{\mathfrak{n}}]}$$
and
$$\tau(z_{f,\bar\chi}^\mathcal{A}) = (-1)^{r-k-1}\epsilon_f \bar\chi(\mathfrak{n})N^{-k} z_{f,\chi}^{\mathcal{A}^{-1}[\bar{\mathfrak{n}}]}.$$
\end{lemma}
\begin{proof}
Let $W^0_j(N)$ be the Kuga-Sato variety over $X_0(N)$, i.e.\ the quotient of $W_{j}$ by the action of the Borel subgroup $B$. Recall the map $W_N: W^0_j \to W^0_j$ which sends a point $P \in \bar E^j$ in the fiber above a diagram $\phi: E \to E/E[\mathfrak{n}]$ to the point $\phi^j(P)$ in the fiber above the diagram $\hat\phi: E/E[\mathfrak{n}] \to E/E[N]$.
Meanwhile, complex conjugation sends the Heegner point $A^\mathfrak{a} \to A^\mathfrak{a}/A^\mathfrak{a}[\mathfrak{n}]$ to the Heegner point $A^{\bar{\mathfrak{a}}} \to A^{\bar{\mathfrak{a}}}/A^{\bar{\mathfrak{a}}}[\bar {\mathfrak{n}}]$. Thus on a generalized component of our cycle, we have
$$(W_N \times {\rm id})^*(X_{\phi_{\bar{\mathfrak{a}}\bar{\mathfrak{n}}},1}) = NX_{\phi_{\bar{\mathfrak{a}}},1} = N\tau(X_{\phi_\mathfrak{a},1}),$$
where these objects are thought of as Chow cycles on $X$ which are supported on the fiber of $X$ above $(\tilde y)^{\sigma\tau}$. Since $\tau$ takes $V_{\mathfrak{p}} A$ to $V_{\bar {\mathfrak{p}}} A$, we even have
$$ (W_N \times {\rm id})^*(\bar \epsilon_1 X_{\phi_{\bar{\mathfrak{a}}\bar{\mathfrak{n}}},1}) = N\bar\epsilon_1 X_{\phi_{\bar{\mathfrak{a}}},1} = N\tau(\epsilon_1 X_{\phi_\mathfrak{a},1}).$$
On the purely Kuga-Sato components, one computes \cite[6.2]{NekEuler}
$$W_N ^*(X_{\sqrt D}) = NX_{\sqrt D} = -N\tau (X_{\sqrt D}),$$
where the $X_{\sqrt D}$ in the equation above are supported on $\tilde y^{\mathrm{Frob}(\bar {\mathfrak{a}} \bar {\mathfrak{n}})}$, $\tilde y^{\mathrm{Frob}(\bar {\mathfrak{a}})}$, and $\tilde y^{\mathrm{Frob}({\mathfrak{a}})}$ respectively. On the other hand, $(W_N \times {\rm id})^2 = [N] \times {\rm id}$, where $[N]: W_{2r-2}^0 \to W_{2r-2}^0$ is multiplication by $N$ in each fiber. On cycles and cohomology, $[N] \times {\rm id}$ acts as multiplication by $N^{2r-2}$. Since $W_N$ commutes with the Hecke operators, we see that $(W_N \times {\rm id})$ acts as multiplication by $\pm N^{r-1}$ on the $f$-isotypic part of cohomology, and this sign is well known to equal $\epsilon_f$. Putting things together, we obtain
$$\tau(z_f^\mathfrak{a}) = \frac{(-1)^{r-k-1}(W_N \times {\rm id})^*(\bar z_f^{\bar{\mathfrak{a}}\bar{\mathfrak{n}}})}{N^{2k+ r-k-1}} = \frac{(-1)^{r-k-1}\epsilon_f \bar z_f^{\bar{\mathfrak{a}}\bar{\mathfrak{n}}}}{N^{k}},$$
from which the first identity in the lemma follows. The proof of the second identity is entirely analogous.
\end{proof}

\begin{theorem}\label{Htvanish}
If $\ell_K : \mathbb{A}^\times_K/K^\times \to \mathbb{Q}_p$ is anticyclotomic, i.e.\ $\ell_K \circ \tau|_K = -\ell_K$, then
$$\langle z_{f,\chi}, z_{f,\bar \chi}\rangle_{\ell_K} = 0.$$
In particular, Theorem \ref{main} holds for such $\ell_K$.
\end{theorem}
\begin{proof}
From the previous lemma we have
$$\tau(z_{f,\chi}) = (-1)^{r-k-1}\epsilon_f \chi(\mathfrak{n})N^{-k}z_{f,\bar\chi}$$
and
$$\tau(z_{f,\bar\chi}) = (-1)^{r-k-1}\epsilon_f \bar\chi(\mathfrak{n})N^{-k}z_{f,\chi}.$$
Thus
$$\left\langle z_{f,\chi},z_{f,\bar\chi}\right\rangle_{\ell_K} = \left\langle \tau(z_{f,\chi}), \tau(z_{f,\bar\chi})\right\rangle_{\ell_K \circ \tau}= \left\langle z_{f,\bar\chi}, z_{f,\chi}\right\rangle_{-\ell_K} = -\left\langle z_{f,\chi}, z_{f,\bar\chi}\right\rangle_{\ell_K},$$
which proves the vanishing. Theorem \ref{main} now follows from Corollary \ref{Lvanish}.
\end{proof}

Since any logarithm $\ell_K$ can be decomposed into a sum of a cyclotomic and an anticyclotomic logarithm (explicitly, $\ell_K = \tfrac{1}{2}\left(\ell_K + \ell_K \circ \tau|_K\right) + \tfrac{1}{2}\left(\ell_K - \ell_K \circ \tau|_K\right)$), it now suffices to prove Theorem \ref{main} for cyclotomic $\ell_K$, i.e.\ we may assume $\ell_K = \ell_K \circ \tau|_K$. By Lemma \ref{invariance} we have
\begin{equation}\label{ortho}
\left\langle z_{f,\chi}, z_{f,\bar\chi}\right\rangle = \frac{1}{h}\left\langle z_{f,\chi}^{\mathcal{O}_K}, z_{f,\bar\chi}\right\rangle=\frac{1}{h}\sum_{\mathcal{A} \in \mathrm{Pic}(\mathcal{O}_K)} \left\langle z_f, z_{f,\bar\chi}^\mathcal{A}\right\rangle.
\end{equation}
The height $\langle \, , \rangle$ can be written as a sum of local heights:
$$\langle x , y\rangle = \sum_v \langle x, y\rangle_v,$$
where $v$ varies over the \textit{finite} places of $H$. These local heights are defined in general in \cite{Nekhts} and computed explicitly for cyclotomic $\ell_K$ in \cite[Proposition II.2.16]{Nek} in a situation similar to ours. In the next section we compute the local heights $\langle z_f, z_{f,\bar\chi}^\mathcal{A}\rangle_v$ for finite places $v$ of $H$ not dividing $p$. The contribution from local heights at places $v | p$ will be treated separately.

\section{Local $p$-adic heights at primes away from $p$}\label{localhts}
Our goal is to compute $\left\langle z_f, z^\mathcal{A}_{f,\bar\chi}\right\rangle_{\ell_K}$ when $\ell_K$ is cyclotomic. Since such a homomorphism is unique up to scaling, we may assume that $\ell_K = \log_p \circ \lambda$, where $\lambda: G(K_\infty/K) \to 1 + p\mathbb{Z}_p$ is the cyclotomic character and $\log_p$ is Iwasawa's $p$-adic logarithm.
We may write $\lambda = \tilde \lambda \circ \mathrm{N}$, where $\tilde \lambda : \mathbb{Z}_p^\times \to 1 + p\mathbb{Z}_p$ is given by $\tilde \lambda(x) = \langle x \rangle^{-1}$. Here, $\langle x \rangle = x \omega^{-1}(x)$, where $\omega$ is the Teichm\"uller character.

We maintain the following notations and assumptions for the rest of this section. Fix an ideal class $\mathcal{A}$ and an integer $m \geq 1$, and suppose that there are no integral ideals in $\mathcal{A}$ of norm $m$, i.e.\ $r_\mathcal{A}(m) = 0$. Choose an integral representative $\mathfrak{a} \in \mathcal{A}$ and let $\sigma \in \mathrm{Gal}(H/K)$ correspond to $\mathcal{A}$ under the Artin map. Write $x = b(Y)$ and $\bar x^\mathfrak{a} = \bar b (Y^\mathfrak{a})$ for the two Tate vectors supported at the points $y$ and $y^\sigma$ in $X_0(N)(H)$.

Let $v$ be a finite place of $H$ not dividing $p$ and set $F = H_v$. Write $\Lambda$ for the ring of integers in $F^{\rm ur}$, the maximal unramified extension of $F$, and let $\mathbb{F} = \bar {\mathbb{F}}_\ell$ be the residue field of $\Lambda$. Write $\underline X_0(N)\to {\rm Spec}\, \mathbb{Z}$ for the integral model of $X_0(N)$ constructed in \cite{KM}, and let $\underline X_0(N)_\Lambda$ be the base change to ${\rm Spec}\, \Lambda$. Finally, write $i: Y_0(N) \times_\mathbb{Q} F^{\rm ur} \hookrightarrow \underline X_0(N)_\Lambda$ for the inclusion.

Now suppose $a, b$ are elements of $Z(Y_0(N),F^{\rm ur})$ supported at points $y_a \neq y_b$ of $X_0(N)(F^{\rm ur})$ of good reduction. Let $\underline y_a$ and $\underline y_b$ be the Zariski closures of the points $y_a$ and $y_b$ in $\underline X_0(N)_\Lambda$ and let $\underline a$ and $\underline b$ be extensions of $a$ and $b$ to $H^0(\underline y_a, i_*\mathscr{A})$ and $H^0(\underline y_b, i_*\mathscr{A})$ respectively. If $\underline y_a$ and $\underline y_b$ have common special fiber $z$ (so $z$ corresponds to an elliptic curve $E/\bar {\mathbb{F}}$), then define
$$(a,b)_v = (\underline y_a \cdot \underline y_b)_z \cdot (\underline a_z, \underline b_z),$$
where $(\underline y_a \cdot \underline y_b)_z$ is the usual local intersection number on the arithmetic surface $\underline X_0(N)_\Lambda$ and $(\underline a_z, \underline b_z)$ is the intersection pairing on the cohomology of $E^{2r-2} \times A_{\mathbb{F}}^\ell$, where $A_{\mathbb{F}}$ is the reduction of $A_{\bar F}$.
\begin{remark}
Note that while $A$ may not have good reduction at $v$, it has potential good reduction.
We can therefore identify $H^i_{\rm et}(A_{\bar F}, \mathbb{Q}_p)$ and $H^i_{\rm et}(A_{\mathbb{F}}, \mathbb{Q}_p)$ as vector spaces, but not as $\mathrm{Gal}(\bar F/F)$-representations. Since the ensuing intersection theoretic computations can be performed over an algebraic closure, this is enough for our purposes.
\end{remark}

Our assumption that $r_\mathcal{A}(m) = 0$ implies that the Tate vectors $x$ and $T_m \bar x^\mathfrak{a}$ have disjoint support. By \cite{ST}, we may assume that they are supported at points of $\mathcal{X}_0(N)_\Lambda$ which are represented by elliptic curves with good reduction. The following proposition gives a way to compute the local heights purely in terms of Tate vectors. This technique of computing heights of cycles on higher-dimensional motives coming from local systems on curves is the key to the entire computation. The idea goes back to work of Deligne, Beilinson, Brylinski, and Scholl, among others.

\begin{proposition}
With notation and assumptions as above, we have
\begin{equation}\label{localht}
\left\langle x, T_m \bar x^\mathfrak{a} \right\rangle_v = -\left(x, T_m \bar x^\mathfrak{a}\right)_v\log_p(\mathrm{N} v).
\end{equation}
\end{proposition}
\begin{proof}
The proof is exactly as in \cite[II.2.16 and II.4.5]{Nek}. In our case, one uses that $H^2(\underline X_0(N),i_*\mathscr{A}(1) ) = 0$. This follows from the fact that if $\mathscr{A}' = \left(\pi_*{\rm Sym}^{2r-2}(R^1f_*\mathbb{Q}_p)(r-1)\right)^B$, then $\mathscr{A} = \mathscr{A}' \otimes W$, where $W$ is a trivial two-dimensional local system, and $H^2(\underline X_0(N), i_*\mathscr{A}') = 0$ \cite[14.5.5.1]{KM}.
\end{proof}

Recall that over $\Lambda$, the sections $\underline y$ and $\underline y^\sigma$ correspond to cyclic isogenies of degree $N$. We will confuse the two notions, so that the notation ${\rm Hom}_\Lambda(\underline y^\sigma, \underline y)$ makes sense. See \cite{Nek} and \cite{BC} for details.

\begin{proposition}\label{deform}
Suppose $v$ is a finite prime of $H$ not dividing $p$. If $m \geq 1$ is prime to $N$ and satisfies $r_\mathcal{A}(m) = 0$, then
$$(x,T_m \bar x^\mathfrak{a})_v = \frac{1}{2} m^{r-k - 1}\sum_{n \geq 1} \sum_g \left(\bar\epsilon\left(X_{g \sqrt D g^{-1}}^{\otimes r - k - 1} \otimes X_{\overline{g\phi_\mathfrak{a}}}^{\otimes \ell}\right), \epsilon\left(X_{\sqrt D }^{\otimes r - k - 1} \otimes X_1^{\otimes \ell}\right)\right),$$
where the sum is over $g \in {\rm Hom}_{\Lambda/\pi^n}(\underline y^\sigma, \underline y)$ of degree $m$.
The intersection pairing on the right takes place in the cohomology of $E^{2r-2} \times A_{\mathbb{F}}^\ell$, where $E \cong A_{\mathbb{F}}$ is the elliptic curve over $\mathbb{F}$ corresponding to the special fiber $\underline y_s$ of $\underline y$.
\end{proposition}
\begin{proof}
The proof builds on that of \cite[II.4.12]{Nek}, so we only mention what is new to our setting. We write $m$ as $m = m_0q^t$ where $q$ is the rational prime below $v$ (this is what Nekov\'a\v{r} calls $\ell$). In the notation of \cite{Nek}, we need to compute the special fiber of $\underline x_g^\mathfrak{a}(j)$, where $g \in {\rm Hom}_\Lambda(\underline y^\sigma, \underline y_g^\sigma)$ is an isogeny of degree $m_0$. There is no harm in assuming $r = k+1$, because the description of the purely Kuga-Sato components of $\underline x_g^\mathfrak{a}(j)$ (i.e.\ coming from factors of the cycle $Y^\mathfrak{a}$ of the form $\Gamma_{\sqrt D} \subset E^\mathfrak{a} \times E^\mathfrak{a}$) is handled in \cite{Nek}. Assume now that $q$ is inert in $K$ and $t$ is even. In this case the special fiber $(\underline y)_s$ is supersingular, and the special fiber $(\underline x_g^\mathfrak{a})_s$ of the Tate vector is represented by the pair
$$\left((\underline y_g^\sigma)_s, \bar\epsilon\left(X^{\otimes \ell}_{g\phi_\mathfrak{a},1}\right)\right).$$
This follows from the definition of the Hecke operators and the following fact: if $g: E \to E'$ is an isogeny and $\phi: A \to E$ is an isogeny, then
$$\left(g \times {\rm id}\right)_*(\Gamma^t_\phi) = \Gamma^t_{g\phi} \in \mathrm{CH}^1(E' \times A).$$
Since any isogeny $h \in {\rm Hom}_{\Lambda/\pi^n}(\underline y^\sigma_g, \underline y)$ of degree $q^t$ on the special fiber $\underline y_s \cong (\underline y_g^\sigma)_s$ is of the form $q^{t/2}h_0$, with $h_0$ of degree $1$, we find that, assuming $\underline y$ and $\underline y_g^\sigma(j)$ intersect, $(\underline x_g^\mathfrak{a}(j))_s$ is represented by
\begin{align*}
\left((\underline y_g^\sigma)_s, \bar\epsilon\left(X^{\otimes \ell}_{q^{t/2}g\phi_\mathfrak{a},1}\right)\right) &= \left(\underline y_s, \bar\epsilon\left(X^{\otimes \ell}_{h_0q^{t/2}g\phi_\mathfrak{a},1}\right)\right)\\
&= \left(\underline y_s, \bar\epsilon\left(X^{\otimes \ell}_{hg\phi_\mathfrak{a},1}\right)\right) \\
&= \left(\underline y_s, \bar\epsilon\left(X^{\otimes \ell}_{\overline{hg\phi_\mathfrak{a}}}\right)\right),
\end{align*}
as desired. The proof when $t$ is odd or when $q$ is ramified is similar.
If $q$ is split in $K$, then both sides of the equation are $0$, as is shown in \cite{GZ}.
\end{proof}

When $v$ lies over a non-split prime, $\mathrm{End}_{\Lambda/\pi}(\underline y) = \mathrm{End}(E)$ is an order $R$ in a quaternion algebra $B$ and we can make the double sum on the right hand side more explicit. To do this, we follow \cite{GZ} and identify ${\rm Hom}_{\Lambda/\pi} (\underline y^\sigma, \underline y)$ with $R\mathfrak{a}$ by sending a map $g$ to $b = g\phi_\mathfrak{a}$. The reduction of endomorphisms induces an embedding $K \hookrightarrow B$, which in turn determines a canonical decomposition $B = K \oplus Kj$. Thus every $b\in B$ can be written as $b = \alpha + \beta j$ with $\alpha, \beta \in K$. Recall also that the reduced norm on $B$ is additive with respect to this decomposition, i.e.\ $\mathrm{N}(b) = \mathrm{N}(\alpha) + \mathrm{N}(\beta j)$.

\begin{proposition}
If $g\phi_\mathfrak{a} = b = \alpha + \beta j \in \mathrm{End}(E)$, then
\begin{align*}
\left(\bar\epsilon (X_{g \sqrt D g^{-1}}^{\otimes r-k-1} \otimes X_{\bar b}^{\otimes \ell}), \right. & \left. \epsilon (X_{\sqrt D}^{\otimes r-k-1} \otimes X_1^{\otimes \ell})\right) =\\
& \frac{(4D)^{r-k-1}}{\binom{2r-2}{r-k-1}}\bar\alpha^{2k} H_{r-k-1,k}\left(1 - \frac{2\mathrm{N}(\beta j)}{\mathrm{N}(b)}\right),
\end{align*}
where
\[H_{m,k}(t) = \frac{1}{2^m\cdot (m + 2k)!} \left(\frac{d}{dt}\right)^{m + 2k}\left[(t^2 - 1)^m(t - 1)^{2k}\right].\]
\end{proposition}
\begin{proof}
Recall from Section \ref{intersect} that we have chosen a basis $x^*,y^*$ of $V_pE$, and a dual basis $x,y$ of $H^1(E)$ such that $x^* \in V_{\mathfrak{p}} E$, $y^* \in V_{\bar {\mathfrak{p}}} E$, and $(x^*,y^*) = 1$. We have already seen that $X_\alpha = \alpha\, x \otimes y - \bar\alpha\, y \otimes x$. Since $\gamma j = j \bar \gamma$ for all $\gamma \in K$, $Vj$ swaps $V_{\mathfrak{p}} E$ and $V_{\bar {\mathfrak{p}}}E$. So we can write
\[Vj = \left( \begin{array}{cc} 0 & u \\ v & 0 \end{array} \right) \]
for some $u,v \in \mathbb{Q}_p$ such that $uv = \mathrm{N}(j) = -j^2$. It follows that
\[X_b = \alpha\, x \otimes y - \bar \alpha\, y \otimes x + \beta u\, y \otimes y - \bar \beta v\, x \otimes x.\]
Next note that $g \sqrt D g^{-1} = b \sqrt D b^{-1}$.
We write $b \sqrt D b^{-1} = \gamma + \delta j$, so that $\gamma = \frac{\sqrt D}{\mathrm{N}(b)}(\mathrm{N}(\alpha) - \mathrm{N}(\beta j))$ and $\delta = \frac{-2\sqrt D}{\mathrm{N}(b)}\alpha\beta$. Thus $X_{g \sqrt D g^{-1}}$ already lies in ${\rm Sym}^2 H^1(E)$, and hence (working now in the symmetric algebra)
\[\bar \epsilon X_{g\sqrt D g^{-1}} = 2\gamma xy + \delta uy^2 - \bar \delta v x^2 = \frac{2\sqrt D}{\mathrm{N}(b)}(\bar \alpha x - \beta uy)(\alpha y + \bar \beta vx),\]
since $\bar \epsilon$ acts as Scholl's projector $\epsilon_W$ on the purely Kuga-Sato components. The cohomology classes $X_{\bar b}$ in the statement of the proposition are on `mixed' components, i.e.\ they live in $H^1(E) \otimes H^1(E')$, where $E$ comes from a Kuga-Sato component and $E'$ (which is abstractly isomorphic to $E$) comes from the factor $A^\ell$. Thus
\[X_{\bar b} = \bar \alpha\, x \otimes y' - \alpha\, y \otimes x' - \beta u\, y \otimes y' + \bar\beta\, x \otimes x',\]
and $\bar \epsilon X_{\bar b} = (\bar \alpha x - \beta u y)y'$, since $\bar \epsilon$ acts trivially on $H^1(E)$ and kills the basis vector $x'$ in $H^1(E')$. Using these observations together with the compatibility of the projectors with the multiplication in the appropriate symmetric algebras, we compute
\begin{align*}
\left(\bar\epsilon \right. & \left. (X_{g\sqrt D g^{-1}}^{\otimes r-k-1} \otimes X_{\bar b}^{\otimes \ell}), \epsilon (X_{\sqrt D}^{\otimes r-k-1} \otimes X_1^{\otimes \ell})\right) \\
&= \left((2\gamma xy + \delta u y^2 - \bar \delta v x^2)^{r-k-1}(\bar \alpha x - \beta u y)^{2k}\otimes y'^{2k}, (2\sqrt D xy)^{r-k-1}y^{2k}\otimes x'^{2k}\right)\\
&= \left(\frac{4D}{\mathrm{N}(b)}\right)^{r-k-1}(y'^{2k},x'^{2k})\left((\bar \alpha x - \beta uy)^{r+k-1}(\alpha y + \bar \beta vx)^{r-k-1}, x^{r-k-1}y^{r+k-1}\right)\\
&= \left(\frac{4D}{\mathrm{N}(b)}\right)^{r-k-1}(y'^{2k},x'^{2k})(y^{r-k-1}x^{r+k-1}, x^{r-k-1}y^{r+k-1})\cdot C\\
&= \frac{(4D)^{r-k-1}}{\mathrm{N}(b)^{r-k-1}\binom{2r-2}{r-k-1}}\cdot C,
\end{align*}
where $C$ is the coefficient of the monomial $y^{r-k-1}x^{r+k-1}$ in $(\bar \alpha x - \beta uy)^{r+k-1}(\alpha y + \bar \beta vx)^{r-k-1}$. The pairings in the second to last line are the natural ones on ${\rm Sym}^{2k}H^1(E')$ and ${\rm Sym}^{2r-2}H^1(E)$ induced from the pairings on the full tensor algebras.
For example, ${\rm Sym}^{2r-2}H^1(E)$ has a natural pairing coming from the cup product $(\,,\,)$ on $H^1(E)$:
$$(v_1 \otimes \cdots \otimes v_{2r-2}) \times (w_1 \otimes \cdots \otimes w_{2r-2}) \mapsto \frac{1}{(2r-2)!}\sum_{\sigma \in S_{2r-2}}\prod_{i=1}^{2r-2}(v_i, w_{\sigma(i)}).$$
In particular, $(x^ay^b, x^cy^d) = 0$ unless $a = d$ and $b = c$, and
\[(x^ay^b, y^ax^b) = \frac{a!\,b!}{(a+b)!} = \binom{a+b}{a}^{-1}.\]
We have also used that on ${\rm Sym}^{2r-2}H^1(E) \otimes {\rm Sym}^{2k}H^1(E')$ we have $\left(u \otimes v, w \otimes z\right) = (u,w)(v,z)$. To compute the value of $C$, note that in general the coefficient of $x^{m+2k}$ in
\[(ax + b)^{m+2k}(cx + d)^m\]
is equal to $a^{2k}(ad - bc)^m H_{m,k}\left(\frac{ad + bc}{ad - bc}\right)$. This is proved using the method of \cite[3.3.3]{Zhang}. (For instance, when $m = 1$ and $k = 0$ we have $H_{1,0}(t) = t$, and both sides equal $ad + bc$.) Applying this to the situation at hand, we find that
\[C = \bar\alpha^{2k}\,{\rm N}(b)^{r-k-1}\, H_{r-k-1,k}\left(1 - \frac{2{\rm N}(\beta j)}{{\rm N}(b)}\right).\]
Plugging this in, we obtain the desired expression for the pairing on the special fiber.
\end{proof}

For each prime $q$, define $\langle x, T_m\bar x^{\mathfrak{a}}\rangle_q = \sum_{v | q}\langle x, T_m\bar x^{\mathfrak{a}}\rangle_v$.

\begin{proposition}\label{htcoeff}
Assume that $(m,N) = 1$, $r_{\mathcal{A}}(m) = 0$ and that $N > 1$. Then
\begin{align*}
\chi(\bar{\mathfrak{a}})^{-1}&\sum_{q \neq p}\langle x, T_m\bar x^{\mathfrak{a}}\rangle_q =\\
&-u^2\,\frac{\left(4|D|m\right)^{r-k-1}}{D^k\cdot\binom{2r-2}{r-k-1}}\sum_{0 < n < \frac{m|D|}{N}}\sigma_{\mathcal{A}}(n)\, r_{\mathcal{A},\chi}\left(m|D| - nN\right)H_{r-k-1,k}\left(1 - \frac{2nN}{m|D|}\right),
\end{align*}
with $\sigma_{\mathcal{A}}(n)$ defined as in Corollary \ref{fourier}.
\end{proposition}

\begin{proof}
This type of sum arises from Proposition \ref{deform} exactly as in \cite[II.4.17]{Nek} and \cite{GZ}, so we omit the details. The main new feature here is that each $b = \alpha + \beta j \in R\mathfrak{a}$ of degree $m$ is weighted by $\bar\alpha^\ell$, by the previous proposition.
Thus the numbers $r_{\mathcal{A}}(j)$, with $j = m|D| - nN$, which in \cite[II.4.17]{Nek} simply count the number of such $b$, become non-trivial sums of the form
$$\sum_{\substack{\mathfrak{c} \subset \mathcal{O}_K \\ [\mathfrak{c}] = \mathcal{A}^{-1}\mathcal{D} \\ {\rm Nm}(\mathfrak{c}) = j}}\bar\alpha^\ell.$$
Here, $\alpha \in \mathfrak{d}^{-1}\mathfrak{a}$ and $\mathfrak{c} = (\alpha)\mathfrak{d}\mathfrak{a}^{-1}$ (see \cite[p.~265]{GZ}). Rewriting this sum, we obtain
\[\sum_{\substack{\mathfrak{c} \subset \mathcal{O}_K \\ [\mathfrak{c}] = \mathcal{A}^{-1}\mathcal{D} \\ {\rm Nm}(\mathfrak{c}) = j}}\bar\chi(\mathfrak{c}\mathfrak{a}\mathfrak{d}^{-1}) = \frac{\chi(\bar{\mathfrak{a}})}{\chi(\mathfrak{d})}\cdot\sum_{\substack{\mathfrak{c} \subset \mathcal{O}_K \\ [\mathfrak{c}] = \mathcal{A}^{-1}\mathcal{D} \\ {\rm Nm}(\mathfrak{c}) = j}}\chi(\bar{\mathfrak{c}}) = \frac{\chi(\bar{\mathfrak{a}})}{D^k}\cdot\sum_{\substack{\mathfrak{c} \subset \mathcal{O}_K \\ [\mathfrak{c}] = \mathcal{A} \\ {\rm Nm}(\mathfrak{c}) = j}}\chi(\mathfrak{c}) = \frac{\chi(\bar{\mathfrak{a}})}{D^k}\, r_{\mathcal{A},\chi}(j).\]
Multiplying by $\chi(\bar{\mathfrak{a}})^{-1}$, we get the desired result.
\end{proof}

We define
$$B_m^\sigma = m^{r-k-1}\sum_{\substack{n = 1 \\ (p,n) = 1}}^{\frac{m|D|}{N}} r_{\mathcal{A},\chi}(m|D| - nN)\,\sigma_{\mathcal{A}}(n)\, H_{r-k-1,k}\left(1 - \frac{2nN}{m|D|}\right),$$
$$C_m^\sigma = m^{r-k-1}\sum_{n=1}^{\frac{m|D|}{N}} r_{\mathcal{A},\chi}(m|D| - nN)\,\sigma_{\mathcal{A}}(n)\, H_{r-k-1,k}\left(1 - \frac{2nN}{m|D|}\right).$$
Up to a constant, the $B_m^\sigma$ appear as coefficients of the derivative of the $p$-adic $L$-function defined earlier, and the $C_m^\sigma$ contribute to the height of our generalized Heegner cycle. Just as in \cite[I.6.7]{Nek}, we wish to relate the $B_m^\sigma$ to the $C_m^\sigma$. Let $U_p$ be the operator defined by $C_m^\sigma \mapsto C_{mp}^\sigma$, and similarly for $B_m^\sigma$. For a prime $\mathfrak{p}$ of $K$ above $p$, we write $\sigma_{\mathfrak{p}}$ for ${\rm Frob}(\mathfrak{p}) \in {\rm Gal}(H/K)$.
We will also let $\sigma_{\mathfrak{p}}$ be the operator $C_m^\sigma \mapsto C_m^{\sigma\sigma_{\mathfrak{p}}}$.

\begin{proposition}\label{mainid}
Suppose $p > 2$ is a prime which splits in $K$ and that $\chi$ is an unramified Hecke character of $K$ of infinity type $(\ell, 0)$ with $\ell = 2k$. Then
$$\prod_{\mathfrak{p} | p}\left(U_p - p^{r-k-1}\chi(\bar{\mathfrak{p}})\sigma_{\mathfrak{p}}\right)^2 C_m^\sigma = \left(U_p^4 - p^{2r-2}U_p^2\right)B_m^\sigma.$$
\end{proposition}

\begin{proof}
The proof follows \cite[Proposition 3.20]{PR1}, which is the case $r = 1$ and $\ell = k = 0$. We first generalize \cite[Lemma 3.11]{PR1} and write down relations between the various $r_{\mathcal{A},\chi}(-)$.

\begin{lemma}
Set $r_{\mathcal{A},\chi}(t) = 0$ if $t \in \mathbb{Q} \setminus \mathbb{N}$. For all integers $m > 0$, we have
\begin{enumerate}
\item $r_{\mathcal{A},\chi}(mp) + p^\ell r_{\mathcal{A},\chi}(m/p) = \chi(\bar{\mathfrak{p}})r_{\mathcal{A}\mathfrak{p},\chi}(m) + \chi(\mathfrak{p})r_{\mathcal{A}\bar{\mathfrak{p}},\chi}(m)$.
\item $r_{\mathcal{A},\chi}(mp^2) + p^{2\ell}r_{\mathcal{A},\chi}(m/p^2) = \chi(\bar{\mathfrak{p}}^2)r_{\mathcal{A}\mathfrak{p}^2,\chi}(m) + \chi(\mathfrak{p}^2)r_{\mathcal{A}\bar{\mathfrak{p}}^2,\chi}(m)$ if $p \mid m$.
\item $r_{\mathcal{A},\chi}(mp^2) - p^\ell r_{\mathcal{A},\chi}(m) = \chi(\bar{\mathfrak{p}}^2)r_{\mathcal{A}\mathfrak{p}^2,\chi}(m) + \chi(\mathfrak{p}^2)r_{\mathcal{A}\bar{\mathfrak{p}}^2,\chi}(m)$ if $p \nmid m$.
\item If $n = n_0p^t$ with $p \nmid n_0$, then $\sigma_{\mathcal{A}}(n) = (t+1)\sigma_{\mathcal{A},t}(n_0)$, where $\sigma_{\mathcal{A},t} = \sigma_{\mathcal{A}\mathfrak{p}^t} = \sigma_{\mathcal{A}\bar{\mathfrak{p}}^t}$.
\item $\sigma_{\mathcal{A}\mathfrak{b}^2}(n) = \sigma_{\mathcal{A}}(n)$ for any ideal $\mathfrak{b}$.
\end{enumerate}
\end{lemma}

\begin{proof}
Note that every integral ideal $\mathfrak{a}$ in $\mathcal{A}$ of norm $mp$ is either of the form $\mathfrak{a}'\mathfrak{p}$ with $\mathfrak{a}' \in \mathcal{A}\bar{\mathfrak{p}}$ of norm $m$, or of the form $\mathfrak{a}'\bar{\mathfrak{p}}$ with $\mathfrak{a}' \in \mathcal{A}\mathfrak{p}$ of norm $m$. Moreover, an ideal of norm $mp$ which can be written as such a product in two ways is necessarily the product of an integral ideal in $\mathcal{A}$ of norm $m/p$ with $(p)$. The first claim now follows from the fact that
$$r_{\mathcal{A},\chi}(t) = \sum_{\substack{\mathfrak{a} \subset \mathcal{O} \\ \mathfrak{a} \in \mathcal{A} \\ {\rm N}(\mathfrak{a}) = t}}\chi(\mathfrak{a}),$$
and that $\chi((p)) = p^\ell$. Parts (2) and (3) follow formally from (1). Part (4) is proven in \cite{PR1}, and (5) is clear from the definition.
\end{proof}

Going back to the proof of Proposition \ref{mainid}, the LHS is equal to
\begin{align*}
C_{mp^4}^\sigma - 2p^{r-k-1}&\left(\chi(\bar{\mathfrak{p}})C_{mp^3}^{\sigma\sigma_{\mathfrak{p}}} + \chi(\mathfrak{p})C_{mp^3}^{\sigma\sigma_{\bar{\mathfrak{p}}}}\right)\\
&+ p^{2(r-k-1)}\left(\chi(\bar{\mathfrak{p}})^2C_{mp^2}^{\sigma\sigma_{\mathfrak{p}^2}} + 4p^\ell C_{mp^2}^\sigma + \chi(\mathfrak{p})^2C_{mp^2}^{\sigma\sigma_{\bar{\mathfrak{p}}^2}}\right)\\
&- 2p^{3(r-k-1)+\ell}\left(\chi(\bar{\mathfrak{p}})C_{mp}^{\sigma\sigma_{\mathfrak{p}}} + \chi(\mathfrak{p})C_{mp}^{\sigma\sigma_{\bar{\mathfrak{p}}}}\right) + p^{4(r-1)}C_m^\sigma.
\end{align*}
In the following we write $v(n)$ for the $p$-adic valuation of an integer $n$, and $n = n_0p^{v(n)}$. For the sake of brevity we also set $r_{\mathcal{A}}(u,v) = r_{\mathcal{A},\chi}(u|D| - vN)$ for integers $u$ and $v$, and $H(x) = H_{r-k-1,k}(x)$.
Then by the lemma, the LHS above is equal to
$$\sum_{n=1}^{m|D|/N}(v(n)+1)(mp^4)^{r-k-1}M(n),$$
where $M(n)$ equals
\begin{align*}
r_{\mathcal{A}}&(mp^4, n)\,\sigma_{\mathcal{A},v(n)}(n_0)\,H\left(1 - \frac{2nN}{mp^4|D|}\right)\\
&-2\left[r_{\mathcal{A}}(mp^4, pn) + p^\ell r_{\mathcal{A}}\left(mp^2, n/p\right)\right]\sigma_{\mathcal{A},v(n)+1}(n_0)\,H\left(1 - \frac{2nN}{mp^3|D|}\right)\\
&+\left[r_{\mathcal{A}}(mp^4, p^2n) + \begin{cases} p^{2\ell}r_{\mathcal{A}}\left(m, n/p^2\right) + 4p^\ell r_{\mathcal{A}}(mp^2, n) & \mbox{if } p \mid n \\ 3p^\ell r_{\mathcal{A}}(mp^2, n) & \mbox{if } p \nmid n \end{cases}\right]\\
&\hspace{25mm}\times\sigma_{\mathcal{A},v(n)}(n_0)\,H\left(1 - \frac{2nN}{mp^2|D|}\right)\\
&-2p^\ell\left[r_{\mathcal{A}}(mp^2, pn) + p^\ell r_{\mathcal{A}}(m, n/p)\right]\sigma_{\mathcal{A},v(n)+1}(n_0)\,H\left(1 - \frac{2nN}{mp|D|}\right)\\
&+p^{2\ell}r_{\mathcal{A}}(m, n)\,\sigma_{\mathcal{A},v(n)}(n_0)\,H\left(1 - \frac{2nN}{m|D|}\right).
\end{align*}
Grouping in terms of the $n_0$ which arise in this sum, we find that the LHS is equal to
$$\sum_{(n_0, p) = 1}\sum_t\sigma_{\mathcal{A},t}(n_0)A_t,$$
where $A_t$ equals
\begin{align*}
(mp^4)&^{r-k-1}r_{\mathcal{A}}(mp^4, p^tn_0)\left[t + 1 - 2t + \begin{cases} t - 1 & \mbox{if } t \geq 1 \\ 0 & \mbox{if } t = 0 \end{cases}\right]H\left(1 - \frac{2n_0p^tN}{mp^4|D|}\right)\\
&+ (mp^2)^{r-k-1}p^{2r-2}r_{\mathcal{A}}(mp^2, p^tn_0)\left[-2(t+2) + \begin{cases} 4(t+1) - 2t & \mbox{if } t \geq 1 \\ 3 & \mbox{if } t = 0 \end{cases}\right]\\
&\hspace{25mm}\times H\left(1 - \frac{2n_0p^tN}{mp^2|D|}\right)\\
&+ m^{r-k-1}p^{4r-4}r_{\mathcal{A}}(m, p^tn_0)\left[t + 3 - 2(t+2) + t + 1\right]H\left(1 - \frac{2n_0p^tN}{m|D|}\right).
\end{align*}
So $A_t = 0$ unless $t = 0$, and we conclude that the LHS is equal to $(U_p^4 - p^{2r-2}U_p^2)B_m^\sigma$, as desired.
\end{proof}

\section{Ordinary representations}\label{ordsec}

The contributions to the $p$-adic height $\langle z_f, z_{f,\bar\chi}^{\mathcal{A}}\rangle$ coming from places $v \mid p$ will eventually be shown to vanish. The proof is as in \cite{Nek} (though see Section \ref{nekfix}), where the key fact is that the local $p$-adic Galois representation $V_f$ attached to $f$ is ordinary.
We recall this notion and prove that the Galois representation $V_{f,A,\ell} = V_f \otimes \kappa_\ell H^\ell(\bar A^\ell, \mathbb{Q}_p)(k)$ is ordinary as well.

\begin{definition}
Let $F$ be a finite extension of $\mathbb{Q}_p$. A $p$-adic Galois representation $V$ of $G_F = {\rm Gal}(\bar F/F)$ is \textit{ordinary} if it admits a decreasing filtration by subrepresentations
$$\cdots \supset F^iV \supset F^{i+1}V \supset \cdots$$
such that $\bigcup F^iV = V$, $\bigcap F^iV = 0$, and for each $i$, $F^iV/F^{i+1}V = A_i(i)$, with $A_i$ unramified.
\end{definition}

Recall we have defined $\e' = \e_W\kappa_\ell$ with
$$\kappa_\ell = \left[\left(\frac{\sqrt{D} + [\sqrt{D}]}{2\sqrt{D}}\right)^{\otimes \ell} + \left(\frac{\sqrt{D} - [\sqrt{D}]}{2\sqrt{D}}\right)^{\otimes \ell}\right]\circ\left(\frac{1 - [-1]}{2}\right)^{\otimes \ell}.$$

\begin{theorem}\label{ord}
Let $f \in S_{2r}(\Gamma_0(N))$ be an ordinary newform and let $V_f$ be the $2$-dimensional $p$-adic Galois representation associated to $f$ by Deligne. Let $A/H$ be an elliptic curve with CM by $\mathcal{O}_K$, and assume that $p$ splits in $K$ and that $A$ has good reduction at primes above $p$. For any $\ell = 2k \geq 0$, set $W = \kappa_\ell H^\ell(\bar A^\ell, \mathbb{Q}_p)(k)$. Then for any place $v$ of $H$ above $p$, $V_{f,A,\ell} = V_f \otimes W$ is an ordinary $p$-adic Galois representation of ${\rm Gal}(\bar H_v/H_v)$.
\end{theorem}

\begin{proof}
First we recall that $V_f$ is ordinary. Indeed, Wiles \cite{Wi} proves that the action of the decomposition group $D_p$ on $V_f$ is given by
\[\left(\begin{array}{cc} \epsilon_1 & * \\ 0 & \epsilon_2 \end{array}\right)\]
with $\epsilon_2$ unramified. Since $\det V_f$ is $\chi_{\rm cyc}^{2r-1}$, we have $\epsilon_1 = \epsilon_2^{-1}\chi_{\rm cyc}^{2r-1}$. Thus the filtration
$$F^0V_f = V_f \supset F^1V_f = F^{2r-1}V_f = \epsilon_1 \supset F^{2r}V_f = 0$$
shows that $V_f$ is an ordinary ${\rm Gal}(\bar{\mathbb{Q}}_p/\mathbb{Q}_p)$-representation, and hence an ordinary ${\rm Gal}(\bar H_v/H_v)$-representation as well.

Next we describe the ordinary filtration on (a Tate twist of) $W$.

\begin{proposition}
Write $(p) = \mathfrak{p}\bar{\mathfrak{p}}$ as ideals in $K$.
Then the $p$-adic representation $M = \kappa_\ell H^\ell_{\rm et}(\bar A^\ell, \mathbb{Q}_p)(\ell)$ of ${\rm Gal}(\bar H_v/H_v)$ has an ordinary filtration
$$F^0M = M \supset F^1M = F^\ell M \supset F^{\ell+1}M = 0.$$
\end{proposition}

\begin{proof}
The theory of complex multiplication associates to $A$ an algebraic Hecke character $\psi: \mathbb{A}_H^\times \to K^\times$ of type ${\rm Nm}: H^\times \to K^\times$, such that for any uniformizer $\pi_v$ at a place $v$ not dividing $p$ or the conductor of $A$, $\psi(\pi_v) \in K \cong {\rm End}(A)$ is a lift of the Frobenius morphism of the reduction $A_v$ at $v$. The composition
$$t_p: \mathbb{A}_H^\times \stackrel{{\rm Nm}}{\longrightarrow} \mathbb{A}_K^\times \to (K \otimes \mathbb{Q}_p)^\times$$
agrees with $\psi$ on $H^\times$, giving a continuous map
$$\rho' = \psi t_p^{-1}: \mathbb{A}_H^\times/H^\times \to (K \otimes \mathbb{Q}_p)^\times.$$
Since the target is totally disconnected, this factors through a map
$$\rho: G_H^{\rm ab} \to (K \otimes \mathbb{Q}_p)^\times.$$
By construction of the Hecke character (and the Chebotarev density theorem), the action of ${\rm Gal}(\bar H/H)$ on the rank 1 $(K \otimes \mathbb{Q}_p)$-module $T_pA \otimes \mathbb{Q}_p$ is given by the character $\rho$. Since $p$ splits in $K$, we have
$$(K \otimes \mathbb{Q}_p)^\times \cong K_{\mathfrak{p}}^\times \oplus K_{\bar{\mathfrak{p}}}^\times = \mathbb{Q}_p^\times \oplus \mathbb{Q}_p^\times.$$
Now write $\rho = \rho_{\mathfrak{p}} \oplus \rho_{\bar{\mathfrak{p}}}$, where $\rho_{\mathfrak{p}}$ and $\rho_{\bar{\mathfrak{p}}}$ are the characters obtained by projecting $\rho$ onto $K_{\mathfrak{p}}^\times$ and $K_{\bar{\mathfrak{p}}}^\times$.

\begin{lemma}
Let $\chi_{\rm cyc}: {\rm Gal}(\bar H_v/H_v) \to \mathbb{Q}_p^\times$ denote the cyclotomic character, and consider $\rho_{\mathfrak{p}}$ and $\rho_{\bar{\mathfrak{p}}}$ as representations of ${\rm Gal}(\bar H_v/H_v)$. Then $\rho_{\mathfrak{p}}\rho_{\bar{\mathfrak{p}}} = \chi_{\rm cyc}$ and $\rho_{\bar{\mathfrak{p}}}$ is unramified.
\end{lemma}

\begin{proof}
The non-degeneracy of the Weil pairing shows that $\bigwedge^2 T_pA \cong \mathbb{Z}_p(1)$.
It then follows from the previous discussion that $\rho_{\mathfrak{p}}\rho_{\bar{\mathfrak{p}}} = \chi_{\rm cyc}$. That $\rho_{\bar{\mathfrak{p}}}$ is unramified follows from the fact that $t_{\bar{\mathfrak{p}}}(H_v) = 1$ and $v$ is prime to the conductor of $\psi$. Indeed, the conductor of $A$ is the square of the conductor of $\psi$ \cite{Gr}, and $A$ has good reduction at $p$.
\end{proof}

\begin{remark}
Let $\mathcal{A}/\mathcal{O}_H$ be the N\'eron model of $A/H$. Since $\mathcal{A}[\bar{\mathfrak{p}}^n]$ is \'etale, it follows that the $\bar{\mathfrak{p}}$-adic Tate module $V_{\bar{\mathfrak{p}}}A$ is unramified at $v$. We can therefore identify $\rho_{\mathfrak{p}} \cong V_{\mathfrak{p}}A$ and $\rho_{\bar{\mathfrak{p}}} \cong V_{\bar{\mathfrak{p}}}A$. One can also see this from the computation in equation (\ref{eigen}).
\end{remark}

\begin{lemma}
As ${\rm Gal}(\bar H_v/H_v)$-representations,
$$H^1_{\rm et}(\bar A, \mathbb{Q}_p)(1) \cong \rho_{\mathfrak{p}} \oplus \rho_{\bar{\mathfrak{p}}}$$
and
$$M = \kappa_\ell H^\ell_{\rm et}(\bar A^\ell, \mathbb{Q}_p)(\ell) \cong \rho_{\mathfrak{p}}^\ell \oplus \rho_{\bar{\mathfrak{p}}}^\ell.$$
\end{lemma}

\begin{proof}
The first claim follows from the fact that
\[T_pA \otimes \mathbb{Q}_p \cong H^1_{\rm et}(\bar A, \mathbb{Q}_p)(1).\]
Fix an embedding $\iota: {\rm End}(A) \hookrightarrow K$, which by our choices induces an embedding ${\rm End}(A) \hookrightarrow \mathbb{Q}_p$. By the definition of $\rho$, $\rho_{\mathfrak{p}}$ is the subspace of $H^1_{\rm et}(\bar A, \mathbb{Q}_p)(1)$ on which $\alpha \in {\rm End}(A)$ acts by $\iota(\alpha)$, whereas on $\rho_{\bar{\mathfrak{p}}}$, $\alpha$ acts as $\bar\iota(\alpha)$. The second statement now follows from the K\"unneth formula and the definition of $\kappa_\ell$.
\end{proof}

Now set $F^0M = M$, $F^1M = F^\ell M = \psi^\ell$, and $F^{\ell+1}M = 0$. By the lemmas above, this gives an ordinary filtration of $M$ and proves the proposition.
\end{proof}

Now to prove the theorem. We have specified ordinary filtrations $F^iV_f$ and $F^iM$ above. A simple check shows that
$$F^i(V_f \otimes M) = \sum_{p+q=i}F^pV_f \otimes F^qM$$
is an ordinary filtration on $V_f \otimes M$.
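Indeed, since the filtrations on $V_f$ and $M$ are by subrepresentations, the graded pieces of this tensor filtration are
\[{\rm gr}^i(V_f \otimes M) \cong \bigoplus_{p+q=i}{\rm gr}^pV_f \otimes {\rm gr}^qM \cong \bigoplus_{p+q=i}(A_p \otimes B_q)(i),\]
where we write ${\rm gr}^pV_f = A_p(p)$ and ${\rm gr}^qM = B_q(q)$ with $A_p$ and $B_q$ unramified; each summand is an unramified representation twisted by $\chi_{\rm cyc}^i$, which is exactly the condition in the definition above.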
Since $V_{f,A,\ell} = V_f \otimes W = (V_f \otimes M)(-k)$ and Tate twisting preserves ordinarity, this proves that $V_{f,A,\ell}$ is ordinary.
\end{proof}

\begin{remark}
Another way to obtain the ordinary filtration on $M$ is to use the fact that $M$ is isomorphic to the $p$-adic realization of the motive $M_{\theta_{\psi^\ell}}$ attached to the modular form $\theta_{\psi^\ell}$ of weight $\ell + 1$. Since $A$ has ordinary reduction at $p$, $\theta_\psi$ is an ordinary modular form, and it follows that $\theta_{\psi^\ell}$ is ordinary as well. We may therefore apply Wiles' theorem again to obtain an ordinary filtration on $W$.
\end{remark}

\begin{proposition}
The ${\rm Gal}(\bar H/H)$-representation $V_{f,A,\ell} = V_f \otimes W$ satisfies $V_{f,A,\ell}^*(1) \cong V_{f,A,\ell}$.
\end{proposition}

\begin{proof}
Recall that $V_f^*(1) \cong V_f$, so we need to show that $W^* \cong W$. This follows from the two lemmas above.
\end{proof}

\section{Proof of Theorem \ref{main}}\label{proof}

In what follows, normalized primitive forms $f_\beta \in S_{2r}(\Gamma_0(N))$ will be indexed by the corresponding $\mathbb{Q}$-algebra homomorphisms $\beta: \mathbb{T} \to \bar{\mathbb{Q}}$. We let $\beta_0$ be the homomorphism corresponding to our chosen newform $f$. If $\mathcal{A} \in {\rm Pic}(\mathcal{O}_K)$, then
$$F_{\mathcal{A}} := \sum_\beta\langle z_{\beta,\chi}, z_{\beta,\bar\chi}^{\mathcal{A}}\rangle f_\beta$$
is a cusp form in $S_{2r}(\Gamma_0(N); \mathbb{Q}_p(\chi))$. Indeed, for $(m,N) = 1$ we have
$$\chi(\bar{\mathfrak{a}})a_m(F_{\mathcal{A}}) = \sum_\beta\langle z_\beta, \bar z_\beta^{\mathfrak{a}}\rangle\beta(T_m) = \langle z, T_m\bar z^{\mathfrak{a}}\rangle = \langle x, T_m\bar x^{\mathfrak{a}}\rangle \in \mathbb{Q}_p,$$
because the Hecke operators are self-adjoint with respect to the height pairing. If $r_{\mathcal{A}}(m) = 0$, then we have the decomposition
$$a_m(F_{\mathcal{A}}) = c_m^\sigma + d_m^\sigma,$$
where
$$c_m^\sigma = \chi(\bar{\mathfrak{a}})^{-1}\sum_{v \nmid p}\langle x, T_m\bar x^{\mathfrak{a}}\rangle_v, \hspace{5mm} d_m^\sigma = \chi(\bar{\mathfrak{a}})^{-1}\sum_{v \mid p}\langle x, T_m\bar x^{\mathfrak{a}}\rangle_v,$$
and the sums are over \textit{finite} places of $H$.
Both sides of the equation in Theorem \ref{main} depend linearly on a choice of arithmetic logarithm $\ell_K: \mathbb{A}_K^\times/K^\times \to \mathbb{Q}_p$. By Theorem \ref{Htvanish}, it suffices to prove the main theorem for cyclotomic $\ell_K$, i.e.\ $\ell_K = \ell_K\circ\tau$. As cyclotomic logarithms are unique up to scalar, we only need to consider the case $\ell_K = \ell_{\mathbb{Q}}\circ{\rm Nm}$. Thus $\ell_K = \log_p\circ\lambda$, where $\lambda: G(K_\infty/K) \to 1 + p\mathbb{Z}_p$ is the cyclotomic character. As before, we write $\lambda = \tilde\lambda\circ{\rm Nm}$, where $\tilde\lambda: \mathbb{Z}_p^\times \to 1 + p\mathbb{Z}_p$ is given by $\tilde\lambda(x) = \langle x\rangle^{-1}$. By definition,
$$L_p'(f\otimes\chi, \mathbbm{1}) = \left.\frac{d}{ds}L_p(f\otimes\chi, \lambda^s)\right|_{s=0}.$$
Also by definition,
\begin{align*}
L_p(f\otimes\chi, \lambda^s) &= (-1)^{r-1}H_p(f)\left(\frac{D}{-N}\right)\left(1 - C\left(\frac{D}{C}\right)\lambda^s(C)^{-1}\right)^{-1}\\
&\hspace{20mm}\times\int_{G(H_{p^\infty}(\mu_{p^\infty})/K)}\lambda^s\, d\tilde\Psi_{f,1,1}^C\\
&= (-1)^rH_p(f)\left(1 - C\left(\frac{D}{C}\right)\tilde\lambda^{-2s}(C)\right)^{-1}\\
&\hspace{20mm}\times\int_{G(H_{p^\infty}(\mu_{p^\infty})/K)}\lambda^s\, d\tilde\Psi_{f,1,1}^C,
\end{align*}
where $C$ is an arbitrary integer prime to $N|D|p$. The measure $\tilde\Psi_{f,1,1}^C$ is given by
$$\tilde\Psi_{f,1,1}^C(\sigma\ ({\rm mod}\ p^n), \tau\ ({\rm mod}\ p^m)) = L_{f_0}(\tilde\Psi_{\mathcal{A},1}^C(a\ ({\rm mod}\ p^m))),$$
where $a$ corresponds to the restriction of $\tau$ under the Artin map and $\sigma$ corresponds to $[\mathcal{A}] \in {\rm Pic}(\mathcal{O}_{p^n})$. We have
\begin{align*}
L_p(f\otimes\chi)(\lambda^s) = (-1)^rH_p(f)\left(1 - C\left(\frac{D}{C}\right)\langle C\rangle^{2s}\right)^{-1}L_{f_0}\left[\sum_{\mathcal{A} \in {\rm Pic}(\mathcal{O}_K)}\int_{\mathbb{Z}_p^\times}\langle x\rangle^{-s}\, d\tilde\Psi_{\mathcal{A},1}^C\right].
\end{align*}
Using $\log\langle x\rangle = \log x$, we compute
\begin{align*}
\left.\frac{d}{ds}\right|_{s=0}\left(\left(1 - C\left(\frac{D}{C}\right)\langle C\rangle^{2s}\right)^{-1}\int_{\mathbb{Z}_p^\times}\langle x\rangle^{-s}\, d\tilde\Psi_{\mathcal{A}}^C\right) &= \left(1 - C\left(\frac{D}{C}\right)\right)^{-1}\int_{\mathbb{Z}_p^\times}\log x\, d\tilde\Psi_{\mathcal{A}}^C + (*)\int_{\mathbb{Z}_p^\times}d\tilde\Psi_{\mathcal{A}}^C\\
&= \left(1 - C\left(\frac{D}{C}\right)\right)^{-1}\int_{\mathbb{Z}_p^\times}\log x\, d\tilde\Psi_{\mathcal{A}}^C.
\end{align*}
The integral $\int_{\mathbb{Z}_p^\times}d\tilde\Psi_{\mathcal{A}}^C$ vanishes because, by Corollary \ref{Lvanish}, $L_p(f\otimes\chi)(\lambda) = 0$ for all anticyclotomic $\lambda$, in particular for $\lambda = 1$. If we set
$$G_\sigma = (-1)^r\int_{\mathbb{Z}_p^\times}\log_p\, d\tilde\Psi_{\mathcal{A}} \in \bar M_{2r}(\Gamma_0(Np^\infty); \mathbb{Q}_p(\chi)),$$
then using the identity
$$\int_{\mathbb{Z}_p^\times}\lambda(\beta)\, d\tilde\Psi_{\mathcal{A}}^C = \int_{\mathbb{Z}_p^\times}\lambda(\beta) - C\left(\frac{D}{C}\right)\lambda(C^{-2}\beta)\, d\tilde\Psi_{\mathcal{A}},$$
we obtain
$$L_p'(f\otimes\chi, \mathbbm{1}) = -H_p(f)\sum_{\sigma \in G(H/K)}L_{f_0}(G_\sigma).$$
Define the operator
\[\mathbb{F} = \prod_{\mathfrak{p} \mid p}\left(U_p - p^{r-k-1}\chi(\mathfrak{p})\sigma_{\bar{\mathfrak{p}}}\right)^2.\]
Putting together Corollary \ref{fourier} and Propositions \ref{htcoeff} and \ref{mainid}, we obtain

\begin{proposition}
If $p \mid m$, $(m,N) = 1$ and $r_{\mathcal{A}}(m) = 0$, then
\begin{align*}
\left.c_m^\sigma\right|\mathbb{F} = (-1)^{k+1}\left(4|D|\right)^{r-k-1}u^2\, a_m(G_\sigma)\bigg|\left(U_p^4 - p^{2r-2}U_p^2\right).
\end{align*}
\end{proposition}
We define the $p$-adic modular form
$$H_\sigma = F_{\mathcal{A}}\,|\,\mathbb{F} + (-1)^k\left(4|D|\right)^{r-k-1}u^2\, G_\sigma\bigg|\left(U_p^4 - p^{2r-2}U_p^2\right).$$
By construction, when $p \mid m$, $(m,N) = 1$ and $r_{\mathcal{A}}(m) = 0$, we have
$$a_m(H_\sigma) = d_m^\sigma\,|\,\mathbb{F} = \chi(\bar{\mathfrak{a}})^{-1}\sum_{v \mid p}\langle x, T_m\bar x^{\mathfrak{a}}\rangle_v\,\big|\,\mathbb{F}.$$

\begin{proposition}\label{localvanishing}
Define the operator
$$\mathbb{F}' = (U_p - \sigma_{\mathfrak{p}})(U_p\sigma_{\mathfrak{p}} - p^{2r-2})(U_p - \sigma_{\bar{\mathfrak{p}}})(U_p\sigma_{\bar{\mathfrak{p}}} - p^{2r-2}).$$
Then $L_{f_0}(H_\sigma\,|\,\mathbb{F}') = 0$.
\end{proposition}

\begin{proof}
The proof should be exactly as in \cite[II.5.10]{Nek}; however, the proof given there is not correct. In the next section we explain how to modify Nekov\'a\v{r}'s argument to prove the desired vanishing. For our purposes in this section, the important point is that this modified proof goes through if we replace the representation $V_{f,A,0} = V_f$ (i.e.\ the $\ell = 0$ case which Nekov\'a\v{r} considers) with our representation $V_{f,A,\ell} = V_f \otimes W$, where $W$ corresponds to a trivial local system. Indeed, the proof works ``on the curve'' and essentially ignores the local system. The only inputs specific to the local system are two representation-theoretic conditions: it suffices to know that the representation $V_{f,A,\ell}$ is ordinary and crystalline. These follow from Theorems \ref{ord} and \ref{AJ}, respectively.
\end{proof}

It follows that
$$L_{f_0}\left(F_{\mathcal{A}}\,|\,\mathbb{F}\mathbb{F}'\right) = (-1)^{k+1}\left(4|D|\right)^{r-k-1}u^2\, L_{f_0}\left(G_\sigma\bigg|\left(U_p^4 - p^{2r-2}U_p^2\right)\mathbb{F}'\right).$$
Since $L_{f_0}\circ U_p = \alpha_p(f)L_{f_0}$, we can remove $\mathbb{F}'$ from the equation above; we may divide out the extra factors that arise, as they are non-zero by the Weil conjectures. Summing this formula over $\sigma \in {\rm Gal}(H/K)$, we obtain
\begin{align*}
L_{f_0}(f)&\prod_{\mathfrak{p} \mid p}\left(1 - \frac{\chi(\mathfrak{p})p^{r-k-1}}{\alpha_p(f)}\right)^2\sum_{\sigma \in {\rm Gal}(H/K)}\langle z_f, z_{f,\bar\chi}^{\mathcal{A}}\rangle\\
&= (-1)^k\left(4|D|\right)^{r-k-1}u^2\, H_p(f)^{-1}\left(1 - \frac{p^{2r-2}}{\alpha_p(f)^2}\right)L_p'(f\otimes\chi, \mathbbm{1}).
\end{align*}
Note that the operators $\sigma_{\mathfrak{p}}$ and $\sigma_{\bar{\mathfrak{p}}}$ (in the definition of $\mathbb{F}$) permute the various $\langle z_f, z_{f,\bar\chi}^{\mathcal{A}}\rangle$ as $\mathcal{A}$ ranges through the class group. So after summing over ${\rm Gal}(H/K)$, these operators have no effect and therefore do not show up in the Euler product on the left hand side.\footnote{This is unlike what happens in \cite{Nek}. The difference stems from the fact that we inserted the Hecke character into the definition of the measures defining the $p$-adic $L$-function.} By Hida's computation \cite[I.2.4.2]{Nek},
$$\left(1 - \frac{p^{2r-2}}{\alpha_p(f)^2}\right) = H_p(f)L_{f_0}(f),$$
so we obtain
\begin{equation*}
L_p'(f\otimes\chi, \mathbbm{1}) = (-1)^k\prod_{\mathfrak{p} \mid p}\left(1 - \frac{\chi(\mathfrak{p})p^{r-k-1}}{\alpha_p(f)}\right)^2\frac{\sum_{\mathcal{A} \in {\rm Pic}(\mathcal{O}_K)}\langle z_f, z_{f,\bar\chi}^{\mathcal{A}}\rangle}{\left(4|D|\right)^{r-k-1}u^2}.
\end{equation*}
By equation (\ref{ortho}), this equals
\[(-1)^k\prod_{\mathfrak{p} \mid p}\left(1 - \frac{\chi(\mathfrak{p})p^{r-k-1}}{\alpha_p(f)}\right)^2\frac{h\,\langle z_{f,\chi}, z_{f,\bar\chi}^{\mathcal{A}}\rangle}{\left(4|D|\right)^{r-k-1}u^2},\]
which proves Theorem \ref{main}.

\begin{proof}[Proof of Theorem $\ref{PRproof}$]
We now assume $\chi = \psi^\ell$ as in Section \ref{apps}. Recall that the cohomology classes $z_f$ and $\bar z_f$ live in $H^1_f(H, V_{f,A,\ell})$. Recall also that $V_{f,A,\ell}$ is the 4-dimensional $p$-adic realization of the motive $M(f)_H \otimes M(\chi_H)$ over $H$ with coefficients in $\mathbb{Q}(f)$. Using Remark \ref{descend}, we have a motive $M(f)_K \otimes M(\chi)$ over $K$ with coefficients in $\mathbb{Q}(f,\chi)$ descending $M(f)_H \otimes M(\chi_H) \otimes \mathbb{Q}(\chi)$. The $p$-adic realization of this motive over $K$ is what we called $V_{f,\chi}$. Thus we may regard the classes $z_f$ and $\bar z_f$ as lying in $H^1_f(H, V_{f,A,\ell}) \cong H^1(H, V_{f,\chi})$. Define
\[z_f^K = {\rm cor}_{H/K}(z_f) \hspace{5mm}\mbox{and}\hspace{5mm} \bar z_f^K = {\rm cor}_{H/K}(\bar z_f)\]
in $H^1_f(K, V_{f,\chi})$.
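Here ${\rm cor}_{H/K}$ denotes the corestriction map in Galois cohomology; since $H/K$ is Galois, it satisfies
\[{\rm res}_{H/K}\circ{\rm cor}_{H/K} = \sum_{\sigma \in {\rm Gal}(H/K)}\sigma\]
on $H^1(H, V_{f,\chi})$, a standard fact which we use in the proof of the next lemma.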
\begin{lemma}\label{cores}
\[{\rm res}_{H/K}(z_f^K) = h\,z_{f,\chi} \hspace{5mm}\mbox{and}\hspace{5mm} {\rm res}_{H/K}(\bar z_f^K) = h\,z_{f,\bar\chi}.\]
\end{lemma}

\begin{proof}
Note that there is a natural action of ${\rm Gal}(H/K)$ on $H^1(H, V_{f,\chi})$, since $V_{f,\chi}$ is a $G_K$-representation. Since ${\rm res}\circ{\rm cor} = {\rm Nm}$, it suffices to show that for each $\sigma \in {\rm Gal}(H/K)$, $z_f^\sigma = z_{f,\chi}^{\mathcal{A}}$ and $\bar z_f^\sigma = z_{f,\bar\chi}^{\mathcal{A}}$, where $\mathcal{A}$ corresponds to $\sigma$ under the Artin map. Recall that
\[z_{f,\chi}^{\mathcal{A}} = \chi(\mathfrak{a})^{-1}\Phi_f\left(\e_B\e Y^{\mathfrak{a}}\right) \hspace{5mm}\mbox{and}\hspace{5mm} z_{f,\bar\chi}^{\mathcal{A}} = \chi(\bar{\mathfrak{a}})^{-1}\Phi_f\left(\e_B\bar\e Y^{\mathfrak{a}}\right),\]
for any ideal $\mathfrak{a}$ in the class of $\mathcal{A}$. To prove $z_f^\sigma = z_{f,\chi}^{\mathcal{A}}$, we first describe explicitly the action of ${\rm Gal}(\bar K/K)$ on the subspace $\e V_{f,A,\ell} \subset V_{f,A,\ell}$, after identifying the spaces $V_{f,A,\ell}$ and $V_{f,\chi}$. For each $\sigma \in {\rm Gal}(\bar K/K)$, we have maps
\[\e_\ell H^\ell(\bar A^\ell, \mathbb{Q}_p) \stackrel{\sigma^*}{\longrightarrow} \e_\ell^\sigma H^\ell(\overline{A^\sigma}^\ell, \mathbb{Q}_p) \xrightarrow{\chi(\mathfrak{a})^{-1}\phi_{\mathfrak{a}}^{\ell *}} \e_\ell H^\ell(\bar A^\ell, \mathbb{Q}_p),\]
which induce an action of $G_K$ on $\e V_{f,A,\ell} = V_f \otimes \e H^\ell(\bar A^\ell, \mathbb{Q}_p(k))$. By definition of $M(\chi)$, this agrees with the action of $G_K$ on $V_{f,\chi}$. Now the argument in the proof of Lemma \ref{invariance} shows that $z_f^\sigma = z_{f,\chi}^{\mathcal{A}}$. A similar argument works for $\bar z_f^\sigma$.
\end{proof}

By Lemma \ref{cores}, ${\rm res}_{H/K}(z_f^K) = h\,z_{f,\chi}$ and ${\rm res}_{H/K}(\bar z_f^K) = h\,z_{f,\bar\chi}$. It follows that
\begin{equation}\label{hteq}
\left\langle z_f^K, \bar z_f^K\right\rangle_K = h\left\langle z_{f,\chi}, z_{f,\bar\chi}\right\rangle_H.
\end{equation}
Now assume that $L_p'(f\otimes\chi, \ell_K, \mathbbm{1}) \neq 0$. By Theorem \ref{main} and (\ref{hteq}), the cohomology classes $z_f^K$ and $\bar z_f^K$ are non-zero, giving two independent elements of $H^1_f(K, V_{f,\chi})$. This proves one inequality in Perrin-Riou's conjecture (\ref{PRconj}). The other inequality follows from forthcoming work of Elias \cite{yara}, constructing an Euler system of generalized Heegner classes and extending the methods of Kolyvagin and Nekov\'a\v{r} in \cite{NekEuler} to our setting (see also \cite[Theorem B]{hsieh}).
\end{proof}

\section{Local $p$-adic heights at primes above $p$}\label{nekfix}

The purpose of this last section is to fix the proof of \cite[II.5.10]{Nek}, on which both Nekov\'a\v{r}'s Theorem A and our main theorem rely. In the first two subsections we gather some facts about relative Lubin-Tate groups and ring class field towers, and in \ref{fixes} we explain how to modify the proof in \cite{Nek}. We have isolated and fixed two arguments of \cite[II.5]{Nek}, instead of rewriting the entire argument of that section.

\subsection{Relative Lubin-Tate groups}

The reference for this material is \cite[\S 1]{dS}. Let $F/\mathbb{Q}_p$ be a finite extension and let $L$ be the unramified extension of $F$ of degree $\delta \geq 1$. Write $\mathfrak{m}_F$ and $\mathfrak{m}_L$ for the maximal ideals in $\mathcal{O}_F$ and $\mathcal{O}_L$, and write $q$ for the cardinality of $\mathcal{O}_F/\mathfrak{m}_F$. We let $\phi: L \to L$ be the Frobenius automorphism lifting $x \mapsto x^q$, and we normalize the valuation on $F$ so that a uniformizer has valuation 1. Let $\xi \in F$ be an element of valuation $\delta$ and let $f \in \mathcal{O}_L[[X]]$ be such that
\[f(X) = \varpi X + O(X^2) \hspace{4mm}\mbox{and}\hspace{4mm} f(X) \equiv X^q \mod \mathfrak{m}_L,\]
where $\varpi \in \mathcal{O}_L$ satisfies ${\rm Nm}_{L/F}(\varpi) = \xi$. Note that $\varpi$ exists and is a uniformizer, since ${\rm Nm}_{L/F}(L^\times)$ is the set of elements in $F^\times$ with valuation in $\delta\mathbb{Z}$.

\begin{theorem}
There is a unique one-dimensional formal group law $F_f \in \mathcal{O}_L[[X,Y]]$ for which $f$ is a lift of Frobenius, i.e.\ for which $f \in {\rm Hom}(F_f, F_f^\phi)$. $F_f$ comes equipped with an isomorphism $\mathcal{O}_F \cong {\rm End}(F_f)$, denoted $a \mapsto [a]_f$, and the isomorphism class of $F_f/\mathcal{O}_L$ depends only on $\xi$ and not on the choice of $f$.
\end{theorem}

Now let $M$ be the valuation ideal of $\mathbb{C}_p$ and let $M_f$ be the $M$-valued points of $F_f$. For each $n \geq 0$, the $\mathfrak{m}_F^n$-torsion points of $F_f$ are by definition
\[W_f^n = \{\omega \in M_f : [a]_f(\omega) = 0 \ \mbox{for all}\ a \in \mathfrak{m}_F^n\}.\]

\begin{proposition}\label{relLT}
For each $n \geq 1$, set $L_\xi^n = L(W_f^n)$.
Then
\begin{enumerate}
\item $L_\xi^n$ is a totally ramified extension of $L$ of degree $(q-1)q^{n-1}$ and is abelian over $F$.
\item There is a canonical isomorphism $(\mathcal{O}_F/\mathfrak{m}_F^n)^\times \cong {\rm Gal}(L_\xi^n/L)$ given by $u \mapsto \sigma_u$, where $\sigma_u(\omega) = [u^{-1}]_f(\omega)$ for $\omega \in W_f^n$.
\item Both the field $L_\xi^n$ and the isomorphism above are independent of the choice of $f$.
\item The map $u \mapsto \sigma_u$ is compatible with the local Artin map $r_F: F^\times \to {\rm Gal}(F^{\rm ab}/F)$.
\item The field $L_\xi^n$ corresponds to the subgroup $\xi^{\mathbb{Z}}\cdot\left(1 + \mathfrak{m}_F^n\right) \subset F^\times$ via local class field theory.
\end{enumerate}
\end{proposition}

Writing $L_\xi = \bigcup_n L_\xi^n$, we see that ${\rm Gal}(L_\xi/L) \cong \mathcal{O}_F^\times$ and that the group of universal norms in $F^\times$ coming from $L_\xi$ is $\xi^{\mathbb{Z}}$. Moreover, we have an isomorphism ${\rm Gal}(L_\xi/L) \to \mathcal{O}_F^\times$ whose inverse is $r_F|_{\mathcal{O}_F^\times}$ composed with the restriction ${\rm Gal}(F^{\rm ab}/F) \to {\rm Gal}(L_\xi/F)$.

\subsection{Relative Lubin-Tate groups and ring class field towers}

Now let $v$ be a place of $H$ above $p$, lying above the prime $\mathfrak{p}$ of $K$. For each $j \geq 1$, write $H_{j,w}$ for the completion of the ring class field $H_{p^j}$ of conductor $p^j$ at the unique place $w = w(j)$ above $v$. In particular, $H_{0,v} = H_v$. If $\delta$ is the order of $\mathfrak{p}$ in ${\rm Pic}(\mathcal{O}_K)$, then $H_v$ is the unramified extension of $K_{\mathfrak{p}} \cong \mathbb{Q}_p$ of degree $\delta$. Since $p$ splits in $K$, $H_{j,w}/H_v$ is totally ramified of degree $(p-1)p^{j-1}/u$, where recall $u = \#\mathcal{O}_K^\times/2$. Moreover, ${\rm Gal}(H_{j,w}/H_v)$ is cyclic and $H_{j,w}$ is abelian over $\mathbb{Q}_p$. We call $H_\infty = \bigcup_j H_{j,w}$ the local ring class field tower; it contains the anticyclotomic $\mathbb{Z}_p$-extension of $K_{\mathfrak{p}}$. To ease notation, and to recall the notation of the previous subsection, we write $L = H_v$.

\begin{proposition}\label{unif}
Write $\mathfrak{p}^\delta = (\pi)$ for some $\pi \in \mathcal{O}_K$. Then $H_\infty$ is contained in the field $L_\xi$ attached to the Lubin-Tate group relative to the extension $L/\mathbb{Q}_p$ with parameter $\xi = \pi/\bar\pi$ in $K_{\mathfrak{p}} \cong \mathbb{Q}_p$. If $\mathcal{O}_K^\times = \{\pm 1\}$, then $H_\infty = L_\xi$.
\end{proposition}

\begin{remark}
Note that there are other natural Lubin-Tate groups relative to $L/\mathbb{Q}_p$ coming from the class field theory of $K$, namely the formal groups of elliptic curves with complex multiplication by $\mathcal{O}_K$. These formal groups will have different parameters, however, as can be seen from the discussion in \cite[II.1.10]{dS}.
\end{remark}

\begin{proof}
By (5) of Proposition \ref{relLT}, it is enough to prove that $H_\infty$ is the subfield of $\mathbb{Q}_p^{\rm ab}$ corresponding to the subgroup $(\pi/\bar\pi)^{\mathbb{Z}}\cdot\mu_K^2$ under local class field theory. First we show that $\pi/\bar\pi$ is a norm from every $H_{j,w}$. Using the compatibility between the local and global reciprocity maps, this will follow if the idele (with non-trivial entry in the $\mathfrak{p}$ slot)
\[(\ldots, 1, 1, \pi/\bar\pi, 1, 1, \ldots) \in \mathbb{A}_K^\times\]
is in the kernel of the reciprocity map
\[r_j: \mathbb{A}_K^\times/K^\times \to {\rm Gal}(K^{\rm ab}/K) \to {\rm Gal}(H_{p^j}/K)\]
for each $j$. Since the kernel of $r_j$ is $K^\times\mathbb{A}_{K,\infty}^\times\hat{\mathcal{O}}_{p^j}^\times$, it is enough to show that
\[(\ldots, 1/\pi, 1/\pi, 1/\bar\pi, 1/\pi, 1/\pi, \ldots) \in \hat{\mathcal{O}}_{p^j}^\times.\]
This is clear at all primes away from $p$, since $\pi$ is a unit at those places. At $p$, it amounts to showing that $(1/\bar\pi, 1/\pi) \in K_{\mathfrak{p}} \times K_{\bar{\mathfrak{p}}}$ lands in the diagonal copy of $\mathbb{Z}_p$ under the identification $K_{\mathfrak{p}} \times K_{\bar{\mathfrak{p}}} \cong \mathbb{Q}_p \times \mathbb{Q}_p$, and this is also clear.

Since $L/\mathbb{Q}_p$ is unramified of degree $\delta$ and $\xi = \pi/\bar\pi$ has valuation $\delta$, it remains to prove that the only units in $\mathbb{Q}_p$ which are universal norms for the tower $H_\infty/\mathbb{Q}_p$ are those in $\mu_K^2$. But by the same argument as above, the only way $\alpha \in \mathbb{Z}_p^\times$ can be a norm from every $H_{j,w}$ is if $\alpha\zeta = \bar\zeta$ for some global unit $\zeta \in K$.
But then $\zeta$ is a root of unity and $\alpha = \zeta^{-1}\bar\zeta = \zeta^{-2}$, so $\alpha$ is in $\mu_K^2$. Conversely, it is clear that each $\zeta \in \mu_K^2$ is a universal norm.
\end{proof}

\begin{remark}
Since we are assuming that $K$ has odd discriminant, the equality $H_\infty = L_\xi$ holds unless $K = \mathbb{Q}(\mu_3)$. For ease of exposition we will assume $K \neq \mathbb{Q}(\mu_3)$ for the rest of this section; the modifications needed for the case $K = \mathbb{Q}(\mu_3)$ are easy enough.
\end{remark}

We will need one more technical fact about the relative Lubin-Tate group $F_f$ cutting out $H_\infty$. Let $\chi_\xi: {\rm Gal}(\bar L/L) \to \mathbb{Z}_p^\times$ be the character giving the Galois action on the torsion points of $F_f$. We let $\mathbb{Q}_p(\chi_\xi)$ denote the $1$-dimensional $\mathbb{Q}_p$-vector space endowed with the action of ${\rm Gal}(\bar L/L)$ determined by $\chi_\xi$, and we denote by $D_{\rm cris}(\mathbb{Q}_p(\chi_\xi))$ the usual filtered $\phi$-module contravariantly attached to the ${\rm Gal}(\bar L/L)$-representation $\mathbb{Q}_p(\chi_\xi)$ by Fontaine.

\begin{proposition}\label{LTcrys}
The representation $\mathbb{Q}_p(\chi_\xi)$ is crystalline, and the frobenius map on the $1$-dimensional $L$-vector space $D_{\rm cris}(\mathbb{Q}_p(\chi_\xi))$ is given by multiplication by $\xi$.
\end{proposition}

\begin{proof}
This is presumably well known, but lacking a reference we verify it using \cite[Prop.\ B.4]{conrad}. There it is shown that $\mathbb{Q}_p(\chi_\xi)$ is crystalline if and only if there exists a homomorphism of tori $\chi': L^\times \to \mathbb{Q}_p^\times$ which agrees with the restriction of $\chi_\xi \circ r_L$ to $\mathcal{O}_L^\times$. In that case, frobenius on $D_{\rm cris}(\mathbb{Q}_p(\chi_\xi))$ is given by multiplication by $\chi_\xi(r_L(\varpi))\cdot\chi'(\varpi)^{-1}$, where $\varpi$ is any uniformizer of $L$.\footnote{Note that we are using the contravariant $D_{\rm cris}$, whereas \cite{conrad} uses the covariant version.} Combining (2) and (4) of Proposition \ref{relLT} with the commutativity of the diagram
\[\begin{CD}
L^\times @>r_L>> {\rm Gal}(L^{\rm ab}/L) \\
@V{\rm Nm} VV @VV{\rm res} V \\
\mathbb{Q}_p^\times @>r_{\mathbb{Q}_p}>> {\rm Gal}(\mathbb{Q}_p^{\rm ab}/\mathbb{Q}_p),
\end{CD}\]
we see that $\chi' = {\rm Nm}^{-1}$ gives such a homomorphism, so the crystallinity follows. Note that by construction $\chi_\xi: {\rm Gal}(L^{\rm ab}/L) \to \mathbb{Z}_p^\times$ factors through a character $\tilde\chi_\xi: {\rm Gal}(\mathbb{Q}_p^{\rm ab}/L) \to \mathbb{Z}_p^\times$. So if we choose $\varpi$ such that ${\rm Nm}_{L/\mathbb{Q}_p}(\varpi) = \xi$, then
\begin{align*}
\chi_\xi(r_L(\varpi)) &= \tilde\chi_\xi(r_{\mathbb{Q}_p}({\rm Nm}(\varpi)))\\
&= \tilde\chi_\xi(r_{\mathbb{Q}_p}(\xi)) = 1.
\end{align*}
Thus the frobenius is given by multiplication by $\chi'(\varpi)^{-1} = {\rm Nm}_{L/\mathbb{Q}_p}(\varpi) = \xi$.
\end{proof}
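\begin{remark}
(A standard consistency check, added here; it is not used below.) In the classical case $L = \mathbb{Q}_p$, $\xi = p$, the character $\chi_\xi$ is the cyclotomic character, so $\mathbb{Q}_p(\chi_\xi) = \mathbb{Q}_p(1)$, and Proposition \ref{LTcrys} asserts that frobenius acts on the contravariant $D_{\rm cris}(\mathbb{Q}_p(1))$ as multiplication by $p$. This agrees with the usual computation: the covariant $D_{\rm cris}(\mathbb{Q}_p(1))$ has frobenius $p^{-1}$, and passing to the contravariant module inverts it.
\end{remark}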
\subsection{Local heights at $p$ in ring class field towers}\label{fixes}

The proofs of both \cite[II.5.6]{Nek} and \cite[II.5.10]{Nek} mistakenly assert that $H_{j,w}$ contains the $j$-th layer of the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$ (as opposed to the anticyclotomic $\mathbb{Z}_p$-extension). This issue first arises in the proofs of \cite[II.5.9]{Nek} and \cite[II.5.10]{Nek}. We explain now how to adjust the proof of \cite[II.5.10]{Nek}; similar adjustments may be used to fix the proof of \cite[II.5.9]{Nek}. Our approach is in the spirit of Nekov\'{a}\v{r}'s original argument, but uses extra results from $p$-adic Hodge theory to carry the argument through.

Recall the setting of \cite[II.5.10]{Nek}: $x$ is the Tate vector corresponding to our (generalized) Heegner cycle $\epsilon_B\epsilon Y$, and $V = H^1_{\rm et}(\bar X_0(N), j_{0*}\mathcal{A})(1)$. We have the Tate cycle
\[x_f = \sum_{m \in S} c_{f,m} T_m x \in Z(Y_0(N),H) \otimes_{\mathbb{Q}_p} L,\]
a certain linear combination (with coefficients $c_{f,m}$ living in a large enough field $L$) of the $T_m x$ such that
\[\Phi_T(x_f) = z_f \in H^1_f(H,V) \otimes_{\mathbb{Q}_p} L.\]
Moreover, each $m \in S$ satisfies $(m,pN) = 1$ and $r(m) = 0$, where $r(m)$ is the number of ideals in $K$ of norm $m$. To fix the proof of \cite[II.5.10]{Nek}, we prove the following vanishing result for local heights at primes $v$ of $H$ above $p$.

\begin{theorem}\label{vanishing}
For each $j \geq 1$, let $h^\sigma_j \in Z_f(Y_0(N), H_{j,w})$ be a Tate vector supported on a point $y_j \in Y_0(N)$ corresponding to an elliptic curve $E_j$ such that ${\rm End}(E_j)$ is the order of index $p^j$ in $\mathcal{O}_K$. Then
\[\lim_{j \to \infty} \langle x_f, N_{H_{j,w}/H_v}(h^\sigma_j) \rangle_v = 0.\]
\end{theorem}

\begin{proof}
Recall that $E_j$ is a quotient of an elliptic curve $E$ with CM by $\mathcal{O}_K$ by a (cyclic) subgroup of order $p^j$ which does not contain either the canonical subgroup $E[\mathfrak{p}]$ or its dual $E[\bar{\mathfrak{p}}]$.
By the compatibility of local heights with norms \cite[II.1.9.1]{Nek}, we have
\begin{equation}\label{pairing}
\left\langle x_f, N_{H_{j,w}/H_v}(h^\sigma_j)\right\rangle_{v,\ell_v} = \left\langle x_f, h^\sigma_j \right\rangle_{w,\ell_w},
\end{equation}
where $\ell_w = \ell_v \circ N_{H_{j,w}/H_v}$. Recall that we are now assuming that $\ell_K = \log_p \circ \lambda$, where $\lambda: {\rm Gal}(K_\infty/K) \to 1 + p\mathbb{Z}_p$ is the cyclotomic character. Thus the local component $\ell_v: H_v^\times \to \mathbb{Q}_p$ of $\ell_K$ is $\ell_v = \log_p \circ N_{H_v/\mathbb{Q}_p}$, and
\[\ell_w = \log_p \circ N_{H_{j,w}/\mathbb{Q}_p}.\]

We have seen that the ring class field tower $H_\infty$ is cut out by a relative Lubin-Tate group. In fact, it follows from the results in the previous sections that $H_{j,w} = L^j_\xi$, where $L = H_v$ and $\xi = \pi/\bar\pi$ as before. Let $E$ be the mixed extension used to compute the height pairing of $x_f$ and $h^\sigma_j$ (as in \cite[II.1.7]{Nek}), and let $E_w$ be its restriction to the decomposition group at $w$. Assume that
\[\mbox{$E_w$ is a crystalline representation of ${\rm Gal}(\bar H_{j,w}/H_{j,w})$}.\]
Then by definition of the local height, we have
\begin{align*}
\left\langle x_f, h^\sigma_j \right\rangle_{w,\ell_w} &= \ell_w(r_w([E_w]))\\
&= \log_p\left(N_{H_{j,w}/\mathbb{Q}_p}(r_w([E_w]))\right),
\end{align*}
where $r_w([E_w])$ is an element of $\widehat{\mathcal{O}_{H_{j,w}}^\times} \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$. In fact, the ordinarity of $f$ allows Nekov\'{a}\v{r} to ``bound denominators''; i.e.\ he shows
\[p^{-d_j}\left\langle x_f, h^\sigma_j \right\rangle_{w,\ell_w} \in \log_p\left(N_{H_{j,w}/\mathbb{Q}_p}\left(\widehat{\mathcal{O}_{H_{j,w}}^\times}\right)\right)\]
for some integer $d_j$. Indeed, this follows from our assumption that $E_w$ is crystalline and the proofs in \cite[II.1.10, II.5.10]{Nek}; note that $H^1_f(H_{j,w},\mathbb{Z}_p(1)) = \widehat{\mathcal{O}_{H_{j,w}}^\times}$. Moreover, the $d_j$ are uniformly bounded as $j$ varies. Nekov\'{a}\v{r}'s proof of this last fact does not quite work, but we fix this issue in Proposition \ref{hodgetate} below. Let us write $d = \sup_j d_j$. By Proposition \ref{relLT}, we have
\[p^{-d}\left\langle x_f, h^\sigma_j \right\rangle_{w,\ell_w} \in \log_p(1 + p^j\mathbb{Z}_p) \subset p^j\mathbb{Z}_p.\]
The theorem would then follow upon taking the limit as $j \to \infty$. It therefore remains to show that $E_w$ is crystalline. First we need a lemma.

\begin{lemma}\label{disjoint}
Let $m \in S$ and $j$ be as above.
Then the supports of $T_m x$ and $b^\sigma_{p^j}$ are disjoint on the generic and special fibers of the integral model $\mathcal{X}$ of $X_0(N)$.
\end{lemma}

\begin{proof}
Let $z \in Y_0(N)(\bar{\mathbb{Q}}_p)$ be in the support of $T_m x$ and let $y$ be the Heegner point supporting the Tate cycle $x$. Thinking of these points as elliptic curves via the moduli interpretation, there is an isogeny $\phi: y \to z$ of degree prime to $p$, since $(p,m) = 1$. Recall that $p$ splits in $K$, so that $y$ has ordinary reduction $y_s$ at $v$. Since ${\rm End}(y) \cong \mathcal{O}_K \cong {\rm End}(y_s)$, $y$ is a Serre-Tate canonical lift of $y_s$. As $\phi$ induces an isomorphism of $p$-divisible groups, $z$ is also a canonical lift of its reduction. On the other hand, the curve $E_j$ supporting $h^\sigma_j$ has CM by a non-maximal order of $p$-power index in $\mathcal{O}_K$ and is therefore not a canonical lift of its reduction. Indeed, the reduction of $E_j$ is an elliptic curve with CM by the full ring $\mathcal{O}_K$, as it is obtained by successive quotients of $y_s$ by either the kernel of Frobenius or of Verschiebung. This shows that $T_m x$ and $b^\sigma_{p^j}$ have disjoint support in the generic fiber.

By \cite[III.4.3]{GZ}, the divisors $T_{mn} y$ and $y^\tau$ are disjoint in the generic fiber, for any $\tau \in {\rm Gal}(H/K)$. Since all points in the support of these divisors are canonical lifts, the divisors do not intersect in the special fiber either. But we saw above that the special fiber of $E_j$ is a Galois conjugate of the reduction of $y$, so $E_j$ and $T_m y$ are disjoint on the special fiber as well.
\end{proof}

Next we note that $T_m x$ is a sum $\sum d_i$, where each $d_i$ is supported on a single closed point $S$ of $Y_0(N)/H_{j,w}$. Using norm compatibility once more and base changing to an extension $\mathbb{F}/H_{j,w}$ which splits $S$, we may assume that $S \in Y_0(N)(\mathbb{F})$. It then suffices to show that the mixed extension $E_w'$ corresponding to $d_i$ and $h^\sigma_j$ is crystalline. Recall from \cite[II.2.8]{Nek} that this mixed extension is a subquotient of
\[H^1(\bar X_0(N) - \bar S \ {\rm rel}\ \bar T, j_{0*}\mathcal{A})(1),\]
where $T = y_j$ is the point supporting $h^\sigma_j$. So it is enough to show that this cohomology group is itself crystalline. Finally, this follows from combining the previous lemma with the following result.
\end{proof}

\begin{theorem}\label{crysmixed}
Suppose $\mathbb{F}$ is a finite extension of $\mathbb{Q}_p$ and let $S,T \in Y_0(N)(\mathbb{F})$ be points with non-cuspidal reduction which do not intersect in the special fiber. Then $H^1(\bar X_0(N) - \bar S \ {\rm rel}\ \bar T, j_{0*}\mathcal{A})(1)$ is a crystalline representation of $G_{\mathbb{F}}$.
\end{theorem}

\begin{remark}
Suppose $F$ is a $p$-adic field and $X/{\rm Spec}\,\mathcal{O}_F$ is a smooth projective variety of relative dimension $2k-1$. If $Y, Z \subset X$ are two (smooth) subvarieties of codimension $k$ which do not intersect on the special fiber, then one expects that $H^{2k-1}(\bar X_F - \bar Y_F \ {\rm rel}\ \bar Z_F, \mathbb{Q}_p(k))$ is a crystalline representation of $G_F$. The theorem above proves this for cycles sitting in fibers of a map $X \to C$ to a curve. The general case should follow from the machinery developed in the recent preprint \cite{niziol}.
\end{remark}

\begin{proof}
Write $\mathbb{V} = H^1(\bar X_0(N) - \bar S \ {\rm rel}\ \bar T, j_{0*}\mathcal{A})(1)$. The sketch of the proof is as follows. Faltings' comparison isomorphism \cite{Falt} identifies $D_{\rm cris}(\mathbb{V})$ with the crystalline analogue of $\mathbb{V}$, which we will refer to (in this sketch) as $H^1_{\rm cris}(X - S \ {\rm rel}\ T, j_{0*}\mathcal{A})$. The dimension of $\mathbb{V}$ is determined by the standard exact sequences
\begin{equation}\label{gysin}
0 \to H^0(\bar T, j_{0*}\mathcal{A})(1) \to \mathbb{V} \to H^1(\bar X - \bar S, j_{0*}\mathcal{A})(1) \to 0,
\end{equation}
\[0 \to H^1(\bar X, j_{0*}\mathcal{A})(1) \to H^1(\bar X - \bar S, j_{0*}\mathcal{A})(1) \to H^0(\bar S, j_{0*}\mathcal{A}) \to 0.\]
Similar exact sequences should hold in the crystalline theory (i.e.\ with $H^1$ replaced by $H^1_{\rm cris}$ everywhere), since $S$ and $T$ reduce to distinct points on the special fiber. Using the known crystallinity of $H^1(\bar X, j_{0*}\mathcal{A})(1)$, $H^0(\bar T, j_{0*}\mathcal{A})(1)$, and $H^0(\bar S, j_{0*}\mathcal{A})$ (the latter two because the fibers of $X \to X(N)$ above $S$ and $T$ have good reduction), we conclude that
\[\dim_{\mathbb{Q}_p} \mathbb{V} = \dim_{F_0} H^1_{\rm cris}(X - S \ {\rm rel}\ T, j_{0*}\mathcal{A}),\]
i.e.\ that $\mathbb{V}$ is crystalline.

To turn this sketch into a proof, we need to say explicitly what $H^1_{\rm cris}(X - S \ {\rm rel}\ T, j_{0*}\mathcal{A})$ is. Note that the usual crystalline cohomology is not a good candidate, because it is not usually finite dimensional unless the variety is smooth and projective. Let us describe in more detail the comparison isomorphism which we invoked above.
The main result of \cite{Falt} concerns the cohomology of a smooth projective variety with trivial coefficients. In our setting, however, we deal with the cohomology of an affine curve with partial support along the boundary and with non-trivial coefficients. The proof of the comparison isomorphism in this more complicated situation is sketched briefly in \cite{Falt} as well, but we follow the exposition in \cite{ols}, where the modifications we need are explained explicitly and in detail.

Let $R$ be the ring of integers of $\mathbb{F}$ and set $V = {\rm Spec}(R)$. Let $X/V$ be a smooth projective curve and let $S,T \in X(V)$ be two rational sections, which we think of as divisors on $X$. We assume that $S$ and $T$ do not intersect, even on the closed fiber. Set $D = S \cup T$ and $X^o = X - D$. The divisor $D$ defines a log structure $M_X$ on $X$, and we let $(Y,M_Y)$ be the closed fiber of $(X,M_X)$. We use the log-convergent topos $((Y,M_Y)/V)_{\rm conv}$ to define the `crystalline' analogue of $\mathbb{V}$. There is an isocrystal $J_S$ on $((Y,M_Y)/V)_{\rm conv}$ which is \'etale locally defined by the ideal sheaf of $S$; see \cite[\S13]{ols} for its precise definition and for more regarding the convergent topos.

\begin{theorem}[Faltings, Olsson]
Let $L$ be a crystalline sheaf on $X^o_{\mathbb{F}}$ associated to a filtered isocrystal $(F, \varphi_F, {\rm Fil}_{\mathbb{F}}F)$. Then there is an isomorphism
\begin{equation}
B_{\rm cris}(\bar V) \otimes_{\mathbb{F}} H^1(((Y,M_Y)/V)_{\rm conv}, F \otimes J_S) \to B_{\rm cris}(\bar V) \otimes_{\mathbb{Q}_p} H^1(\bar X - \bar S \ {\rm rel}\ \bar T, L).
\end{equation}
\end{theorem}

As $L = j_{0*}\mathcal{A}$ is crystalline \cite[6.3]{Falt}, we may apply this theorem in our situation. Taking Galois invariants, we conclude that $D_{\rm cris}(\mathbb{V}) = H^1(((Y,M_Y)/V)_{\rm conv}, F \otimes J_S)$.

To complete the proof of Theorem \ref{crysmixed}, it would be enough to know that the convergent cohomology group $D_{\rm cris}(\mathbb{V})$ sits in exact sequences analogous to the standard Gysin sequences (\ref{gysin}). These sequences hold in any cohomology theory satisfying the Bloch-Ogus axioms, but unfortunately convergent cohomology is not known to satisfy these axioms. On the other hand, rigid cohomology does satisfy the Bloch-Ogus axioms \cite{petrequin}. So we apply Shiho's log convergent-rigid comparison isomorphism \cite[2.4.4]{shiho} to identify $D_{\rm cris}(\mathbb{V})$ with $H^1_{\rm rig}(Y - S_s \ {\rm rel}\ T_s, j^\dagger\mathcal{E})$, for a certain overconvergent isocrystal $j^\dagger\mathcal{E}$ which is the analogue of $j_{0*}\mathcal{A}$ on the special fiber. Here $S_s$ and $T_s$ are the points on the special fiber.
We have similar identifications with rigid cohomology for each term appearing in the sequences (\ref{gysin}), and the corresponding sequences of rigid cohomology groups are exact. The crystallinity of $\mathbb{V}$ now follows from a dimension count.
\end{proof}

\begin{remark}
Theorem \ref{vanishing} has two components: first one must bound denominators, and then one shows that the heights go to $0$ $p$-adically. In the argument above, the ordinarity of $f$ was the crucial input needed to bound denominators. We briefly explain the modifications needed to fix the proof of \cite[II.5.9]{Nek}, where one pairs Heegner cycles of $p$-power conductor with cycles in the kernel of the local Abel-Jacobi map (the higher weight analogue of principal divisors). The fact that these cycles are Abel-Jacobi trivial allows us to make a ``bounded denominators'' argument even without an ordinarity assumption; see \cite[II.1.9]{Nek}. To kill the $p$-adic height, we further note that the particular AJ-trivial cycles in the proof of II.5.9 are again linear combinations of various $T_n x$ with $r(n) = 0$. This lets us invoke Lemma \ref{disjoint} and Theorem \ref{crysmixed}, as before.
\end{remark}

As we alluded to in the proof of Theorem \ref{vanishing}, the proof of \cite[II.5.11]{Nek} again assumes that $H_\infty$ contains the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$. To fix the proof there, it is enough to prove the following proposition.

\begin{proposition}\label{hodgetate}
Let $V$ be the Galois representation $H^1_{\rm et}(\bar X_0(N), j_{0*}\mathcal{A})(1)$ attached to weight $2r$ cusp forms. If we set $H_\infty = \bigcup_j H_{j,w}$, then
\[H^0(H_\infty, V) = 0.\]
\end{proposition}

\begin{proof}
We follow Nekov\'{a}\v{r}'s approach, but instead of using the cyclotomic character we use the character $\chi_\xi$ coming from the relative Lubin-Tate group attached to $H_\infty$, defined above. By Proposition \ref{LTcrys}, the $G_{\mathbb{Q}_p}$-representation $\mathbb{Q}_p(\chi_\xi)$ is crystalline and the frobenius on $D_{\rm cris}(\mathbb{Q}_p(\chi_\xi))$ is given by multiplication by $\xi$, where $\xi$ is defined in Proposition \ref{unif}. Since $V$ is Hodge-Tate, there is an inclusion of ${\rm Gal}(H_\infty/H_v)$-representations
\[H^0(H_\infty, V) \subset \bigoplus_{j \in \mathbb{Z}} H^0(H_v, V(\chi_\xi^j))(\chi_\xi^{-j}).\]
Indeed, $H^0(H_\infty, V)$ carries an action of ${\rm Gal}(H_\infty/H_v)$, which we can break up into isotypic parts indexed by characters $\chi_\xi^s$ with $s \in \mathbb{Z}_p$. But of these characters, the only ones which are Hodge-Tate are those with $s \in \mathbb{Z}$, so we obtain the inclusion above. It therefore suffices to show that, for each $j$, $H^0(H_v, V(\chi_\xi^j))(\chi_\xi^{-j}) = 0$.
Tensoring the inclusion $\mathbb{Q}_p \to B_{\rm cris}^{f=1}$ by $V(\chi_\xi^j)$, taking invariants, and then twisting the resulting filtered frobenius modules by $\chi_\xi^{-j}$, we obtain
\[H^0(H_v, V(\chi_\xi^j))(\chi_\xi^{-j}) \subset D_{\rm cris}(V)^{f = \xi^{-j}}.\]
As an element of $\mathbb{C}$, $\xi$ has absolute value $1$. Since $V$ appears in the odd degree cohomology of the Kuga-Sato variety, \cite{KM} implies that $D_{\rm cris}(V)^{f = \xi^{-j}}$ vanishes, and the proposition follows.
\end{proof}

Finally, for completeness, we explain how Proposition \ref{hodgetate} is used in the proof of Proposition \ref{localvanishing}. Let $X$ be the (generalized) Kuga-Sato variety over $H_v$ and let $T$ be the image of the map
$$H^{2r+2k-1}(\bar X, \mathbb{Z}_p(r+k)) \to V = H^{2r+2k-1}(\bar X, \mathbb{Q}_p(r+k)).$$
Proposition \ref{hodgetate} is used to infer the following fact, whose proof was left to the reader in \cite{Nek}.

\begin{proposition}
The numbers $\#H^1(H_{j,w}, T)_{\rm tors}$ are bounded as $j \to \infty$.
\end{proposition}

\begin{proof}
From the short exact sequence
$$0 \to T \to V \to V/T \to 0,$$
we have
$$(V/T)^{G_j} \to H^1(G_j, T) \to H^1(G_j, V) \to 0,$$
where $G_j = {\rm Gal}(\bar H_{j,w}/H_{j,w})$. As $H^1(G_j, V)$ is torsion-free, we see that $(V/T)^{G_j}$ maps surjectively onto $H^1(G_j,T)_{\rm tors}$. An element of order $p^a$ in $(V/T)^{G_j}$ is of the form $p^{-a}t$ for some $t \in T$ not divisible by $p$ in $T$. We then have $\sigma t - t \in p^a T$ for all $\sigma \in G_j$. As $V/T \cong (\mathbb{Q}_p/\mathbb{Z}_p)^n$ for some integer $n$, it suffices to show that $a$ is bounded as we vary over all elements of $(V/T)^{G_j}$ and all $j$.

Suppose these $a$ are not bounded. Then we can find a sequence $t_i \in T$ such that $t_i \notin pT$ and such that $\sigma t_i - t_i \in p^{a(i)}T$ for all $\sigma \in G_\infty := {\rm Gal}(\bar H/H_\infty)$. Here $a(i)$ is a non-decreasing sequence going to infinity with $i$. Since $T$ is compact, we may replace $t_i$ with a convergent subsequence, and define $t = \lim t_i$. We claim that $t \in H^0(H_\infty, V)$. Indeed, for any $i$ we have
\[\sigma t - t = \sigma(t - t_i) - (t - t_i) + \sigma t_i - t_i.\]
For any $n > 0$, we can choose $i$ large enough so that $(t - t_i) \in p^nT$ and $\sigma t_i - t_i \in p^nT$, showing that $\sigma t = t$. By Proposition \ref{hodgetate}, $t = 0$, which contradicts the fact that $t = \lim t_i$ and $t_i \notin pT$.
\end{proof}

\begin{thebibliography}{12}
\bibitem[BDP1]{BDP1} M.\ Bertolini, H.\ Darmon, and K.\ Prasanna, Generalized Heegner cycles and $p$-adic Rankin $L$-series, {\it Duke Math.\ J.} {\bf 162} (2013), no.\ 6, 1033-1148.
\bibitem[BDP2]{BDP3} M.\ Bertolini, H.\ Darmon, and K.\ Prasanna, Chow-Heegner points on CM elliptic curves and values of $p$-adic $L$-functions, {\it Int.\ Math.\ Res.\ Not.\ IMRN} (2014), no.\ 3, 745-793.
\bibitem[BDP3]{BDP5} M.\ Bertolini, H.\ Darmon, and K.\ Prasanna, $p$-adic $L$-functions and the coniveau filtration on Chow groups, submitted.
\bibitem[Be]{bertrand} D.\ Bertrand, Valeurs de fonctions th\^eta et hauteurs $p$-adiques, {\it Prog.\ Math.} {\bf 22}, Birkh\"auser, Boston, 1982, pp.\ 1-11.
\bibitem[Br]{hunter} E.\ H.\ Brooks, Shimura curves and special values of $p$-adic $L$-functions, {\it Int.\ Math.\ Res.\ Not.\ IMRN}, doi:10.1093/imrn/rnu062.
\bibitem[BK]{BK} S.\ Bloch and K.\ Kato, $L$-functions and Tamagawa numbers of motives, in: The Grothendieck Festschrift, Vol.\ I, {\it Progress in Mathematics} {\bf 86}, Birkh\"{a}user, Boston, Basel, 1990, pp.\ 330-400.
\bibitem[Ca]{francesc} F.\ Castella, Heegner cycles and higher weight specializations of big Heegner points, {\it Math.\ Ann.} {\bf 356} (2013), no.\ 4, 1247-1282.
\bibitem[CH]{hsieh} F.\ Castella and M.\ Hsieh, Heegner cycles and $p$-adic $L$-functions, preprint.
\bibitem[Co]{colmez} P.\ Colmez, Fonctions $L$ $p$-adiques, S\'eminaire Bourbaki, Vol.\ 1998/99, {\it Ast\'erisque} {\bf 266} (2000), Exp.\ 851.
\bibitem[C1]{BC} B.\ Conrad, Gross-Zagier revisited, in: Heegner points and Rankin $L$-series, {\it Math.\ Sci.\ Res.\ Inst.\ Publ.} {\bf 49}, Cambridge Univ.\ Press, Cambridge, 2004, pp.\ 67-163.
\bibitem[C2]{conrad} B.\ Conrad, Lifting global representations with local properties, preprint.
\bibitem[DN]{niziol} F.\ D\'eglise and W.\ Niziol, On $p$-adic absolute Hodge cohomology and syntomic coefficients, I, preprint.
\bibitem[dS]{dS} E.\ de Shalit, Iwasawa theory of elliptic curves with complex multiplication, {\it Perspectives in Math.} {\bf 3}, Academic Press, Orlando, 1987.
\bibitem[D]{disegni} D.\ Disegni, $p$-adic heights of Heegner points on Shimura curves, preprint.
\bibitem[E]{yara} Y.\ Elias, On the Selmer group attached to a modular form and an algebraic Hecke character, preprint.
\bibitem[F]{Falt} G.\ Faltings, Crystalline cohomology and $p$-adic Galois representations, in: Algebraic Analysis, Geometry, and Number Theory, The Johns Hopkins University Press, 1989, pp.\ 25-80.
\bibitem[G]{Gr} B.\ Gross, Arithmetic on elliptic curves with complex multiplication, {\it Lecture Notes in Math.} {\bf 776}, Springer-Verlag, 1980.
\bibitem[GZ]{GZ} B.\ Gross and D.\ Zagier, Heegner points and derivatives of $L$-series, {\it Invent.\ Math.} {\bf 84} (1986), 225-320.
\bibitem[H]{Hida1} H.\ Hida, A $p$-adic measure attached to the zeta functions associated with two elliptic modular forms.\ I, {\it Invent.\ Math.} {\bf 79} (1985), 159-195.
\bibitem[Ho]{BHiwa} B.\ Howard, The Iwasawa theoretic Gross-Zagier theorem, {\it Compositio Math.} {\bf 141} (2005), no.\ 4, 811-846.
\bibitem[I]{Iwan} H.\ Iwaniec, Topics in classical automorphic forms, {\it Grad.\ Studies in Math.} {\bf 17}, 1997.
\bibitem[KM]{KM} N.\ Katz and B.\ Mazur, Arithmetic Moduli of Elliptic Curves, {\it Ann.\ of Math.\ Studies} {\bf 108}, Princeton University Press, 1985.
\bibitem[K]{kob} S.\ Kobayashi, The $p$-adic Gross-Zagier formula for elliptic curves at supersingular primes, {\it Invent.\ Math.} {\bf 191} (2013), no.\ 3, 527-629.
\bibitem[LZZ]{lzz} Y.\ Liu, S.\ Zhang, and W.\ Zhang, On $p$-adic Waldspurger formula, preprint.
\bibitem[M]{Miy} T.\ Miyake, Modular Forms, Springer-Verlag, 1989.
\bibitem[N1]{NekEuler} J.\ Nekov\'{a}\v{r}, Kolyvagin's method for Chow groups of Kuga-Sato varieties, {\it Invent.\ Math.} {\bf 107} (1992), 99-125.
\bibitem[N2]{Nekhts} J.\ Nekov\'{a}\v{r}, On $p$-adic height pairings, in: S\'eminaire de th\'eorie des nombres de Paris 1990/91 (S.\ David, ed.), {\it Progress in Math.} {\bf 108} (1993), 127-202.
\bibitem[N3]{Nek} J.\ Nekov\'{a}\v{r}, On the $p$-adic height of Heegner cycles, {\it Math.\ Ann.} {\bf 302} (1995), no.\ 4, 609-686.
\bibitem[N4]{Nek2} J.\ Nekov\'{a}\v{r}, $p$-adic Abel-Jacobi maps and $p$-adic heights, in: The arithmetic and geometry of algebraic cycles (Banff, AB, 1998), CRM Proc.\ Lecture Notes {\bf 24}, Amer.\ Math.\ Soc., Providence, RI, 2000, pp.\ 367-379.
\bibitem[Og]{Ogg} A.\ Ogg, Modular forms and Dirichlet series, Benjamin, 1969.
\bibitem[Ol]{ols} M.\ Olsson, On Faltings' method of almost \'etale extensions, in: {\it Algebraic Geometry --- Seattle 2005}, Part 2, Proc.\ Sympos.\ Pure Math.\ {\bf 80}, Amer.\ Math.\ Soc., Providence, 2009, pp.\ 811-936.
\bibitem[P]{petrequin} D.\ P\'etrequin, Classes de Chern et classes de cycles en cohomologie rigide, {\it Bull.\ Soc.\ Math.\ France} {\bf 131} (2003), no.\ 1, 59-121.
\bibitem[PR1]{PR1} B.\ Perrin-Riou, Points de Heegner et d\'{e}riv\'{e}es de fonctions $L$ $p$-adiques, {\it Invent.\ Math.} {\bf 89} (1987), 455-510.
\bibitem[PR2]{PR2} B.\ Perrin-Riou, Fonctions $L$ $p$-adiques associ\'{e}es \`{a} une forme modulaire et \`{a} un corps quadratique imaginaire, {\it J.\ London Math.\ Soc.} {\bf 38} (1988), 1-32.
\bibitem[PR3]{PRbook} B.\ Perrin-Riou, $p$-adic $L$-functions and $p$-adic representations, {\it SMF/AMS Texts Monogr.} {\bf 3}, Amer.\ Math.\ Soc., Providence, and Soc.\ Math.\ France, Paris, 2000.
\bibitem[R]{Roh} D.\ Rohrlich, Root numbers of Hecke $L$-functions of CM fields, {\it Amer.\ J.\ Math.} {\bf 104} (1982), 517-543.
\bibitem[Sc]{Sch} A.\ Scholl, Motives for modular forms, {\it Invent.\ Math.} {\bf 100} (1990), 419-430.
\bibitem[Sh]{shiho} A.\ Shiho, Crystalline fundamental groups.\ II.\ Log convergent cohomology and rigid cohomology, {\it J.\ Math.\ Sci.\ Univ.\ Tokyo} {\bf 9} (2002), 1-163.
\bibitem[ST]{ST} J.-P.\ Serre and J.\ Tate, Good reduction of abelian varieties, {\it Ann.\ of Math.} {\bf 88} (1968), 492-517.
\bibitem[Wa]{Wall} L.\ Walling, The Eichler commutation relation for theta series with spherical harmonics, {\it Acta Arith.} {\bf 63} (1993), no.\ 3, 233-254.
\bibitem[Wi]{Wi} A.\ Wiles, On ordinary $\lambda$-adic representations associated to modular forms, {\it Invent.\ Math.} {\bf 94} (1988), 529-573.
\bibitem[Z]{Zhang} S.\ W.\ Zhang, Heights of Heegner cycles and derivatives of $L$-series, {\it Invent.\ Math.} {\bf 130} (1997), 99-152.
\end{thebibliography}
\end{document}
\begin{document}
\title{Curvature Motion in a Minkowski Plane}
\author{Vitor Balestro}
\address[V.Balestro and R.Teixeira]{Instituto de Matem\'{a}tica -- UFF -- Niter\'{o}i -- Brazil}
\email{[email protected]\\ [email protected]}
\author{Marcos Craizer}
\address[M.Craizer]{Departamento de Matem\'{a}tica -- PUC-Rio -- Rio de Janeiro -- Brazil}
\email{[email protected]}
\author{Ralph C. Teixeira}
\begin{abstract}
In this paper we study the curvature flow of a curve in a plane endowed with a minkowskian norm whose unit ball is smooth. We show that many of the properties known in the euclidean case can be extended (with due adaptations) to this new situation. In particular, we show that simple, closed, strictly convex, smooth curves remain so until the area enclosed by them vanishes. Moreover, their isoperimetric ratios converge to the minimum possible value, attained only by the minkowskian circle; thus these curves converge to a minkowskian ``circular point'' as the enclosed area approaches zero.
\end{abstract}
\subjclass{52A10, 52A21, 52A40, 53A35, 53C44}
\keywords{curvature motion, minkowski plane, isoperimetric inequality}
\maketitle

\section{\label{SecIntro}Introduction}

\noindent\textit{Note: Since we wrote this paper, it came to our attention that its main results have already been published. See:}\\
\noindent [1] M.\ Gage, \textit{Evolving plane curves by curvature in relative geometries}, Duke Math.\ J.\ 72 (1993), 441-466.\\
\noindent [2] M.\ Gage and Y.\ Li, \textit{Evolving plane curves by curvature in relative geometries II}, Duke Math.\ J.\ 75 (1994), 79-98.\\

Possibly the most fascinating front deformation, the classical planar Curvature Motion is defined by
\begin{equation*}
\frac{\partial \gamma }{\partial t}\left( u,t\right) =k\left( u,t\right) N\left( u,t\right),
\end{equation*}
where $k$ and $N$ are the curvature and the inward unit normal vector to the closed curve $\gamma \left( \cdot ,t\right) $ at the point $\gamma \left( u,t\right) $. A series of papers (\cite{gage1}, \cite{gage2}, \cite{gage3} and \cite{grayson}) has shown that any embedded curve in the Euclidean plane remains embedded and converges to a ``circular point'' in finite time. Moreover, if $L\left( t\right) $ and $A\left( t\right) $ are the length of $\gamma $ and the area it encloses, some very simple formulae describe their evolution:
\begin{align*}
\frac{dL}{dt}& =-\int k^{2}ds, \\
\frac{dA}{dt}& =-2\pi, \\
\lim_{t\rightarrow t_{V}}\frac{L^{2}}{A}& =4\pi
\end{align*}
(where $t_{V}=\frac{A\left( 0\right) }{2\pi }$), the last one being one interpretation of ``converging to a circular shape''.

On the other hand, a Minkowski plane is a $2$-dimensional vector space with a norm, which can be defined by its unit ball $\mathcal{P}$ (a convex symmetric set). Of course, along with a different geometry come different notions of length, normal vector and curvature, which we very briefly review in the next section (see \cite{thompson} for details and \cite{martini1}, \cite{martini2} for a survey). So it is natural to ask: are the properties of Curvature Motion still valid in the Minkowski plane, with the due adaptations? The goal of this paper is to answer a resounding YES, at least when the boundary of $\mathcal{P}$ is smooth and the initial curve $\gamma \left( \cdot ,0\right) $ is smooth and strictly convex.
More specifically, following techniques similar to those of \cite{gage1}, \cite{gage2} and \cite{gage3}, we show that the flow is well defined up to the vanishing time $t_{V}=\frac{A\left( \gamma \left( 0\right) \right) }{2A\left( \mathcal{P}\right) }$, and that
\begin{eqnarray*}
\frac{dL_{\mathcal{Q}}}{dt} &=&-\int k^{2}ds \\
\frac{dA}{dt} &=&-2A\left( \mathcal{P}\right) \\
\lim_{t\rightarrow t_{V}}\frac{L_{\mathcal{Q}}^{2}}{A} &=&4A\left( \mathcal{P}\right)
\end{eqnarray*}
where $A\left( \mathcal{P}\right) $ is the area of the unit ball $\mathcal{P}$ and all lengths are taken with respect to the metric defined by the dual unit ball $\mathcal{Q}$.

The structure of this paper is as follows: Section \ref{SecMink} briefly reminds us of the basic ideas of Minkowski plane geometry, including some notation choices. Section \ref{SecIso} states several interesting and necessary minkowskian isoperimetric inequalities; it is divided into two subsections, the first devoted to the minkowskian version of Gage's inequality and the second to a lemma with a more technical proof. Section \ref{SecCurv} defines the minkowskian curvature flow and computes the evolution of curvature, length and area as long as the flow is well defined. Section \ref{SecConv} shows the convergence of the isoperimetric ratio to the ``circular'' value $4A\left( \mathcal{P}\right) $ if the enclosed area goes to $0$ and the curves remain simple and convex along the motion. Finally, the technical Section \ref{SecExist} shows the existence of such a flow all the way until the enclosed area converges to $0$, at least when the initial curve is strictly convex and smooth, rounding out the previous results.

\textbf{Acknowledgment.} The authors would like to thank CNPq for financial support during the preparation of this manuscript.

\section{\label{SecMink}Minkowski plane and its dual}

Let $\mathcal{P}$ be a strictly convex set, symmetric (which, throughout the paper, will mean ``symmetric with respect to the origin''), whose boundary is given by a $C^{\infty }$ curve $p$. We endow the plane $\mathbb{R}^{2}$ with the norm which makes $\mathcal{P}$ the unit ball. In other words, given $v\in \mathbb{R}^{2}$, write $v=tp$ for some $t\geq 0$ and some $p$ in the boundary of $\mathcal{P}$, and define $||v||_{\mathcal{P}}=t$.

Denoting $e_{r}=(\cos \theta ,\sin \theta )$ and $e_{\theta }=(-\sin \theta ,\cos \theta )$, we parameterize $p$ by $p(\theta )$, $0\leq \theta \leq 2\pi $, in such a way that $p^{\prime }(\theta )$ is a non-negative multiple of $e_{\theta }$, i.e., the angle between the $x$-axis and $p^{\prime }\left( \theta \right) $ is $\theta +\pi /2$. We can write
\begin{align}
p(\theta )& =a(\theta )e_{r}+a^{\prime }(\theta )e_{\theta } \label{eqn:Mink} \\
p^{\prime }\left( \theta \right) & =\left( a\left( \theta \right) +a^{\prime \prime }\left( \theta \right) \right) e_{\theta } \notag \\
\left[ p,p^{\prime }\right] & =a\left( \theta \right) \left( a\left( \theta \right) +a^{\prime \prime }\left( \theta \right) \right) \notag
\end{align}
where $a(\theta )$ is the support function of $\mathcal{P}$. Furthermore, we shall assume $a(\theta )+a^{\prime \prime }(\theta )>0$ for each $0\leq \theta \leq 2\pi $, which is equivalent to saying that the curvature of $p$ is strictly positive.

The dual unit ball $\mathcal{P}^{\ast }$ can be naturally identified with a convex set $\mathcal{Q}$ in the plane with $p^{\ast }(w)=[w,q]$ for any $w\in \mathbb{R}^{2}$. One can see that
\begin{eqnarray}
q(\theta ) &=&\frac{p^{\prime }(\theta )}{[p(\theta ),p^{\prime }(\theta )]}=\frac{1}{a\left( \theta \right) }e_{\theta } \label{eqn:2} \\
q^{\prime }\left( \theta \right) &=&-\frac{1}{a\left( \theta \right) }e_{r}-\frac{a^{\prime }\left( \theta \right) }{a^{2}\left( \theta \right) }e_{\theta } \notag \\
\left[ q,q^{\prime }\right] &=&a^{-2}\left( \theta \right) \notag
\end{eqnarray}
is a parameterization of the boundary of $\mathcal{Q}$. It is not difficult to see that $q$ is a convex symmetric curve with strictly positive curvature as well. It also holds that
\begin{equation}
p(\theta )=-\frac{q^{\prime }(\theta )}{[q(\theta ),q^{\prime }(\theta )]}=-a^{2}q^{\prime }\left( \theta \right). \label{eqn:3}
\end{equation}
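As a quick illustration (a special case we record here for orientation; it is not needed in what follows), consider the euclidean situation $a(\theta )\equiv 1$. Then (\ref{eqn:Mink}) and (\ref{eqn:2}) give
\begin{equation*}
p(\theta )=e_{r},\qquad \left[ p,p^{\prime }\right] =1,\qquad q(\theta )=e_{\theta },
\end{equation*}
so both $\mathcal{P}$ and $\mathcal{Q}$ are the euclidean unit disc, $p(\theta )$ is the usual outward unit normal along a convex curve whose tangent at the corresponding point is parallel to $e_{\theta }$, and the $\mathcal{Q}$-length and the curvature defined below reduce to the euclidean arclength and curvature.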
Given another closed, strictly convex curve $\gamma $, we can parameterize it by $\theta $ in such a way that $\gamma ^{\prime }(\theta )=\lambda (\theta )q(\theta )$ (in fact, whenever we use the notation $f^{\prime }$ we mean the derivative with respect to this parameter $\theta $). The Minkowski $\mathcal{Q}$-length $L_{\mathcal{Q}}$ of $\gamma $ is defined as
\begin{equation*}
L_{\mathcal{Q}}(\gamma )=\int_{0}^{2\pi }\lambda (\theta )\ d\theta,
\end{equation*}
which inspires another useful parameterization of $\gamma $, by its $\mathcal{Q}$-arclength parameter $s$:
\begin{equation*}
s\left( \theta \right) =\int_{0}^{\theta }\lambda \left( \sigma \right) d\sigma.
\end{equation*}
Sometimes we will need a third parameterization $\gamma \left( u\right) $ for such a curve. In that case, we define $v=\frac{ds}{du}$, so we can write
\begin{equation*}
ds=vdu=\lambda d\theta.
\end{equation*}

If a $\mathcal{P}$-circle is tangent to $\gamma $ at $\gamma \left( \theta \right) $, the line joining its center to $\gamma \left( \theta \right) $ must be parallel to $p\left( \theta \right) $. Thus, it is natural to define the minkowskian unit normal to the curve $\gamma $ at the point $\gamma (\theta )$ as $p(\theta )$. The inverse of the radius of a $\mathcal{P}$-circle which has a $3$-point contact with $\gamma $ at $\gamma \left( \theta \right) $ is the \emph{minkowskian curvature}
\begin{equation}
k(\theta )=\left[ p,p^{\prime }\right] \frac{d\theta }{ds}=\frac{[p,p^{\prime }]}{\lambda (\theta )}. \label{eqn:5}
\end{equation}
Other notions of minkowskian curvature are possible; in \cite{petty}, $k\left( \theta \right) $ is called ``circular curvature'' (see also \cite{craizer} and \cite{tabachnikov}).

Define the support function $f:[0,2\pi ]\rightarrow \mathbb{R}$ of $\gamma $ by $f(\theta )=[\gamma (\theta ),q(\theta )]$. Notice that we may also regard $f$ naturally as a function of the parameter $s$. We have

\begin{prop}\label{prop1} The following equalities hold:
\begin{description}
\item[(a)] $\displaystyle\int_0^{L_{\mathcal{Q}}}k(s)ds = 2A(\mathcal{P})$;
\item[(b)] $\displaystyle\int_{0}^{L_{\mathcal{Q}}}f(s)ds=2A$; and
\item[(c)] $\displaystyle\int_{0}^{L_{\mathcal{Q}}}f(s)k(s)ds=L_{\mathcal{Q}}$.
\end{description}
\end{prop}

\begin{proof}
Since $ds=\lambda d\theta $, equation (\ref{eqn:5}) yields
\begin{equation*}
\int_{0}^{L_{\mathcal{Q}}}k(s)ds=\int_{0}^{2\pi }k(\theta )\lambda (\theta )d\theta =\int_{0}^{2\pi }[p(\theta ),p^{\prime }(\theta )]d\theta =2A(\mathcal{P}),
\end{equation*}
which proves \textbf{(a)}. For \textbf{(b)} we calculate
\begin{equation*}
\int_{0}^{L_{\mathcal{Q}}}f(s)ds=\int_{0}^{2\pi }[\gamma (\theta ),q(\theta )]\lambda (\theta )d\theta =\int_{0}^{2\pi }[\gamma (\theta ),\gamma ^{\prime }(\theta )]d\theta =2A.
\end{equation*}
Now, for \textbf{(c)},
\begin{align*}
\int_{0}^{L_{\mathcal{Q}}}f(s)k(s)ds& =\int_{0}^{2\pi }[\gamma (\theta ),q(\theta )]k(\theta )\lambda (\theta )d\theta =\int_{0}^{2\pi }[\gamma (\theta ),q(\theta )][p(\theta ),p^{\prime }(\theta )]d\theta = \\
& =\int_{0}^{2\pi }[\gamma (\theta ),p^{\prime }(\theta )]d\theta =\int_{0}^{2\pi }[p(\theta ),\gamma ^{\prime }(\theta )]d\theta =\int_{0}^{2\pi }\lambda (\theta )d\theta =L_{\mathcal{Q}}.
\end{align*}
\end{proof}
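The identities in Proposition \ref{prop1} are easy to test numerically. The short script below is an illustrative sketch which we add here; the specific support functions $a$ and $h$ are arbitrary test choices, not taken from the text. It uses the elementary facts that if $\gamma (\theta )=h(\theta )e_{r}+h^{\prime }(\theta )e_{\theta }$ for a euclidean support function $h$ with $h+h^{\prime \prime }>0$, then $\gamma ^{\prime }=(h+h^{\prime \prime })e_{\theta }=a(h+h^{\prime \prime })q$, so that $\lambda =a(h+h^{\prime \prime })$, $f=[\gamma ,q]=h/a$ and $k=[p,p^{\prime }]/\lambda $.
\begin{verbatim}
import numpy as np

# Arbitrary test data: support function a of the unit ball P and
# euclidean support function h of the curve gamma; both choices
# satisfy the convexity conditions a + a'' > 0 and h + h'' > 0.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
a,  app      = 1.0 + 0.3*np.cos(2*t), -1.2*np.cos(2*t)           # a, a''
h, hp, hpp   = 2.0 + 0.5*np.cos(2*t), -np.sin(2*t), -2.0*np.cos(2*t)

pp  = a * (a + app)      # [p, p']
lam = a * (h + hpp)      # lambda, where gamma'(theta) = lambda * q(theta)
k   = pp / lam           # minkowskian curvature
f   = h / a              # support function f = [gamma, q]

AP = 0.5 * np.trapz(pp, t)                # A(P)
A  = 0.5 * np.trapz(h*h - hp*hp, t)       # area enclosed by gamma
LQ = np.trapz(lam, t)                     # Q-length of gamma

print(np.trapz(k * lam, t), 2 * AP)       # identity (a)
print(np.trapz(f * lam, t), 2 * A)        # identity (b)
print(np.trapz(f * k * lam, t), LQ)       # identity (c)
\end{verbatim}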
\section{\label{SecIso}Some Isoperimetric Inequalities}

Consider again a smooth, closed and convex curve $\gamma $ with $\mathcal{Q}$-length $L_{\mathcal{Q}}$ enclosing the (usual) area $A$. The following isoperimetric inequality generalizes the classical euclidean one (see Chapter 4 of \cite{thompson}):
\begin{equation}
\frac{L_{\mathcal{Q}}^{2}}{A}\geq 4A(\mathcal{P}), \label{eqn:isop}
\end{equation}
where $A(\mathcal{P})$ is the usual area of the unit $\mathcal{P}$-ball. As in the euclidean case, the equality holds if and only if the curve is the boundary of some $\mathcal{P}$-ball.

\subsection{The Minkowskian Gage Inequality}

We now turn our attention to proving a version of Gage's inequality (see \cite{gage1}) in the Minkowski plane. Let $\mathcal{C}$ be the space of smooth, simple, closed and strictly convex curves in the plane, endowed with the Hausdorff topology. We have:

\begin{teo}\label{teoGage}
There exists a non-negative, continuous, scale-invariant functional $F:\mathcal{C}\rightarrow \mathbb{R}$ such that
\begin{equation*}
\left( 1-F(\gamma )\right) \int_{0}^{L_{\mathcal{Q}}}k^{2}ds-A(\mathcal{P})\frac{L_{\mathcal{Q}}}{A}\geq 0,
\end{equation*}
where $A$, $L_{\mathcal{Q}}$ and $k$ are the area, $\mathcal{Q}$-length and curvature of $\gamma $. Moreover, $F\left( \gamma \right) =0$ if and only if $\gamma $ is a $\mathcal{P}$-circle.
\end{teo}

\begin{coro}\label{teo1} Given $\gamma \in \mathcal{C}$, we have
\begin{equation}
\int_{0}^{L_{\mathcal{Q}}}k^{2}ds-A(\mathcal{P})\frac{L_{\mathcal{Q}}}{A}\geq 0, \label{eqn:12}
\end{equation}
with equality if and only if $\gamma $ is a $\mathcal{P}$-circle.
\end{coro}

In order to prove this, we need several results. We start by recalling a useful Bonnesen inequality, whose proof can be found in Theorem 4.5.5 of \cite{thompson}:

\begin{teo}\label{lemma3} For $\gamma \in \mathcal{C}$, let $r_{\mathrm{in}}$ be the radius of the biggest inscribed $\mathcal{P}$-circle and $r_{\mathrm{out}}$ the radius of the smallest circumscribed $\mathcal{P}$-circle. Then
\begin{equation}
rL_{\mathcal{Q}}-A-A(\mathcal{P})r^{2}\geq 0 \label{eqn:14}
\end{equation}
whenever $r_{\mathrm{in}}\leq r\leq r_{\mathrm{out}}$.
\end{teo}

\begin{lemma}\label{teo01} The equality in (\ref{eqn:14}) holds for $r=r_{\mathrm{in}}$ if and only if $\gamma $ is homothetic to the $\mathcal{P}$-circle.
\end{lemma}

We have not seen a proof of this lemma in the literature, so we prove it in the next subsection. Now let us begin to build the functional $F\left( \gamma \right) $ of Gage's inequality:

\begin{prop}\label{teo2} Consider the space $\mathcal{C}_{s}$ consisting of curves in $\mathcal{C}$ which are symmetric.
Define the functional $E:\mathcal{C}_{s}\rightarrow \mathbb{R}$ by
\begin{equation*}
E(\gamma )=1+\frac{A(\mathcal{P})r_{\mathrm{in}}r_{\mathrm{out}}}{A}-\frac{2A(\mathcal{P})(r_{\mathrm{in}}+r_{\mathrm{out}})}{L_{\mathcal{Q}}}.
\end{equation*}
Then the following hold:\newline
\textbf{(1)} $L_{\mathcal{Q}}A\left( 1-E(\gamma )\right) \geq A(\mathcal{P})\displaystyle\int_{0}^{L_{\mathcal{Q}}}f^{2}ds$, for all $\gamma \in \mathcal{C}_{s}$;\newline
\textbf{(2)} $E(\gamma )\geq 0$, and equality holds if and only if $\gamma $ is a $\mathcal{P}$-circle; and\newline
\textbf{(3)} If $\gamma _{j}$ is a sequence in $\mathcal{C}_{s}$ such that $\displaystyle\lim_{j\rightarrow \infty }E(\gamma _{j})=0$ and if the sequence of normalized curves $\eta _{j}=\gamma _{j}\displaystyle\sqrt{\frac{A(\mathcal{P})}{A}}$ is contained in some bounded region of the plane, then the region $H_{j}$ enclosed by $\eta _{j}$ converges in the Hausdorff metric to $\mathcal{P}$ as $j\rightarrow \infty $.
\end{prop}

\begin{proof}
If $r_{\mathrm{in}}\leq r\leq r_{\mathrm{out}}$ then
\begin{equation*}
\left( r-r_{\mathrm{in}}\right) \left( r_{\mathrm{out}}-r\right) \geq 0\Rightarrow r\left( r_{\mathrm{in}}+r_{\mathrm{out}}\right) -r_{\mathrm{in}}r_{\mathrm{out}}\geq r^{2}.
\end{equation*}
For a curve $\gamma $ in $\mathcal{C}_{s}$, the support function $f$ satisfies $r_{\mathrm{in}}\leq f\leq r_{\mathrm{out}}$ for every value of the parameter, so we can take $r=f$ above. Integrating this inequality, we obtain
\begin{equation*}
\left( r_{\mathrm{out}}+r_{\mathrm{in}}\right) \int_{0}^{L_{\mathcal{Q}}}f\ ds-r_{\mathrm{in}}r_{\mathrm{out}}L_{\mathcal{Q}}\geq \int_{0}^{L_{\mathcal{Q}}}f^{2}ds.
\end{equation*}
Since $\displaystyle\int_{0}^{L_{\mathcal{Q}}}f\ ds=2A$, we have the inequality in \textbf{\textit{(1)}}.

For \textbf{\textit{(2)}}, let $g\left( r\right) =rL_{\mathcal{Q}}-A-A(\mathcal{P})r^{2}$. From Theorem \ref{lemma3} we know that $g\left( r_{\mathrm{in}}\right) ,g\left( r_{\mathrm{out}}\right) \geq 0$, so we may write, for $r\in \left[ r_{\mathrm{in}},r_{\mathrm{out}}\right] $,
\begin{equation*}
\frac{r-r_{\mathrm{in}}}{r_{\mathrm{out}}-r_{\mathrm{in}}}g\left( r_{\mathrm{out}}\right) +\frac{r_{\mathrm{out}}-r}{r_{\mathrm{out}}-r_{\mathrm{in}}}g\left( r_{\mathrm{in}}\right) \geq 0,
\end{equation*}
which can be rewritten as
\begin{equation*}
r\left( L_{\mathcal{Q}}-A\left( \mathcal{P}\right) \left( r_{\mathrm{out}}+r_{\mathrm{in}}\right) \right) -A+A\left( \mathcal{P}\right) r_{\mathrm{in}}r_{\mathrm{out}}\geq 0.
\end{equation*}
Taking $r=f$ and integrating with respect to $s$,
\begin{equation*}
2A\left( L_{\mathcal{Q}}-A\left( \mathcal{P}\right) \left( r_{\mathrm{out}}+r_{\mathrm{in}}\right) \right) -AL_{\mathcal{Q}}+A\left( \mathcal{P}\right) r_{\mathrm{in}}r_{\mathrm{out}}L_{\mathcal{Q}}\geq 0,
\end{equation*}
which shows that $E\left( \gamma \right) \geq 0$. Since $f$ ranges from $r_{\mathrm{in}}$ to $r_{\mathrm{out}}$, if $E(\gamma )=0$ then we must have $g\left( r_{\mathrm{in}}\right) =g\left( r_{\mathrm{out}}\right) =0$, and Lemma \ref{teo01} says that $\gamma $ is a $\mathcal{P}$-circle.

For \textit{\textbf{(3)}}, let $\gamma _{j}$ be a sequence in $\mathcal{C}_{s}$ such that $\displaystyle\lim_{j\rightarrow \infty }E(\gamma _{j})=0$ and assume that all the normalized curves $\eta _{j}$ lie in a common bounded region of the plane. Notice that $E(\eta _{j})=E(\gamma _{j})$ for every $j\in \mathbb{N}$, and then $\displaystyle\lim_{j\rightarrow \infty }E(\eta _{j})=0$. Denote by $H_{j}$ the region enclosed by $\eta _{j}$.
By Blaschke's Selection Theorem, there exists a subsequence $H_{j_{k}}$ which converges to a convex set $H$. Since $E$ is a continuous functional (considering the Hausdorff topology in $\mathcal{C}_{s}$), we have $E(H)=\displaystyle\lim_{k\rightarrow \infty }E(H_{j_{k}})=0$, and then $H$ must be the unit $\mathcal{P}$-circle. In fact, every convergent subsequence of $H_{j}$ converges to the unit $\mathcal{P}$-circle, and it follows immediately that $H_{j}$ itself converges to the unit $\mathcal{P}$-circle. This concludes the proof.
\end{proof}

Finally, we extend the functional $E$ appropriately to the desired functional $F$, as done in \cite{gage2}. Let $\gamma \in \mathcal{C}$, and consider all chords which divide the area inside $\gamma $ into two equal parts. Pick one (call it $S$) such that the tangent lines to $\gamma $ at its endpoints are parallel. Let $\gamma _{1}$ (with $\mathcal{Q}$-length $L_{1}$) and $\gamma _{2}$ (with $\mathcal{Q}$-length $L_{2}$) be the two portions of $\gamma $ determined by $S$. Placing the $x$-axis along $S$ and the origin at its midpoint, we can build two curves $\gamma _{1}^{\ast }$ and $\gamma _{2}^{\ast }$ which belong to $\mathcal{C}_{s}$, by reflecting $\gamma _{1}$ and $\gamma _{2}$ through the origin. Since the functional $E$ is well defined for these new curves, we could define $F(\gamma )$ by
\begin{equation*}
F(\gamma )=\frac{L_{1}}{L_{\mathcal{Q}}}E(\gamma _{1}^{\ast })+\frac{L_{2}}{L_{\mathcal{Q}}}E(\gamma _{2}^{\ast }),
\end{equation*}
but, although this definition looks natural (because it coincides with $E$ on $\mathcal{C}_{s}$), it is not always correct, since the choice of the chord $S$ is not necessarily unique. To overcome this difficulty, we define $F$ to be the supremum of the above expression over all possible choices of $S$. It is not difficult to prove that the functional $F$ also has properties \textbf{\textit{(1)}}, \textbf{\textit{(2)}} and \textbf{\textit{(3)}}, and we omit the details.

Now we can finish the proof of Theorem \ref{teoGage}. By the Schwarz inequality, we have
\begin{equation*}
L_{\mathcal{Q}}=\int_{0}^{L_{\mathcal{Q}}}fk\ ds\leq \left( \int_{0}^{L_{\mathcal{Q}}}f^{2}\ ds\right) ^{1/2}\left( \int_{0}^{L_{\mathcal{Q}}}k^{2}ds\right) ^{1/2}.
\end{equation*}
Squaring both sides and using inequality \textbf{\textit{(1)}} for the functional $F$ yields
\begin{equation*}
L_{\mathcal{Q}}^{2}\leq \left( \int_{0}^{L_{\mathcal{Q}}}f^{2}\ ds\right) \left( \int_{0}^{L_{\mathcal{Q}}}k^{2}ds\right) \leq \frac{L_{\mathcal{Q}}A}{A(\mathcal{P})}\left( 1-F(\gamma )\right) \int_{0}^{L_{\mathcal{Q}}}k^{2}ds,
\end{equation*}
and the desired result follows immediately.
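As a quick consistency check (added here; it is not needed for the argument), note that equality in Corollary \ref{teo1} does hold for $\mathcal{P}$-circles: if $\gamma =R\,p$ is the $\mathcal{P}$-circle of radius $R$, then $k\equiv 1/R$, $f\equiv R$, and Proposition \ref{prop1} gives $L_{\mathcal{Q}}=2RA(\mathcal{P})$ and $A=R^{2}A(\mathcal{P})$, so
\begin{equation*}
\int_{0}^{L_{\mathcal{Q}}}k^{2}ds=\frac{L_{\mathcal{Q}}}{R^{2}}=\frac{2A(\mathcal{P})}{R}=A(\mathcal{P})\frac{L_{\mathcal{Q}}}{A}.
\end{equation*}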
\subsection{Proof of Lemma \protect\ref{teo01}}

Assume that $\gamma \in {\mathcal{C}}$ is symmetric. We start with the following quite intuitive result:

\begin{lemma}\label{lemma:rinmu0} Denote by $\mu _{0}$ the minimum curvature radius of $\gamma $. Then $\mu _{0}\leq r_{\mathrm{in}}$, with equality only in case $\gamma $ is a $\mathcal{P}$-circle.
\end{lemma}

\begin{proof}
Assume that $\mu (0)=\mu _{0}$, where $\mu _{0}=\min \{\mu (\theta )\,|\,0\leq \theta \leq 2\pi \}$. We may also assume that $(-a,a)$, with $0\leq a<\frac{\pi }{2}$, is the maximal interval where $\mu (\theta )=\mu _{0}$; indeed, if $a=\frac{\pi }{2}$ then, by the symmetry of $\gamma $, $\gamma $ would necessarily be a $\mathcal{P}$-circle. For any $-\pi \leq \theta \leq \pi $, denote by $P_{0}=(x_{0}(\theta ),y_{0}(\theta ))$ the $\mathcal{P}$-circle of radius $\mu _{0}$ osculating $\gamma $ at $\gamma (0)$. In the euclidean case, $x_{0}=\mu _{0}\sin (\theta )$, $y_{0}=\mu _{0}-\mu _{0}\cos (\theta )$. Denote also $\gamma (\theta )=(x(\theta ),y(\theta ))$, $q(\theta )=(q_{1}(\theta ),q_{2}(\theta ))$ and $p(\theta )=(p_{1}(\theta ),p_{2}(\theta ))$. Since
\begin{equation*}
\gamma ^{\prime }(\theta )=\mu (\theta )[p,p^{\prime }](\theta )q(\theta ),
\end{equation*}
we can write
\begin{equation*}
y(\theta )=\int_{0}^{\theta }\mu \lbrack p,p^{\prime }]q_{2}d\theta \geq \mu _{0}\left( p_{2}(\theta )-p_{2}(0)\right) =y_{0}(\theta ),
\end{equation*}
with equality if and only if $-a\leq \theta \leq a$. Thus the osculating $\mathcal{P}$-circle $P_{0}$ is tangent to $\gamma $ only at $\gamma (\theta )$, $-a\leq \theta \leq a$, and so there exists $\epsilon >0$ such that $P_{0}+(0,\epsilon )$ is contained in the interior of $\gamma $, which proves the lemma.
\end{proof}

For $0\leq r\leq r_{\mathrm{in}}$, denote by $D_{r}$ the set of points inside $\gamma $ whose distance to $\gamma $ is $\geq r$, and let $C_{r}=\partial D_{r}$. Denote by $L_{\mathcal{Q}}(r)$ the $\mathcal{Q}$-length of $C_{r}$. The following proposition is easy to prove:

\begin{prop}\label{prop:ALr} If $r\leq \mu _{0}$, then
\begin{equation}
L_{\mathcal{Q}}(r)=L_{\mathcal{Q}}-2A(\mathcal{P})r. \label{eq:Lr}
\end{equation}
Moreover,
\begin{equation}
A=\int_{0}^{r_{\mathrm{in}}}L_{\mathcal{Q}}(r)dr. \label{eq:area}
\end{equation}
\end{prop}

\begin{proof}
Let
\begin{equation*}
\beta (\theta ,r)=\gamma (\theta )-rp(\theta )
\end{equation*}
be a parameterization of $C_{r}$, $\theta \in I(r)$. If $r<\mu _{0}$, then $I(r)=[0,2\pi ]$ and so
\begin{equation*}
L_{\mathcal{Q}}(r)=\int_{0}^{2\pi }[p,p^{\prime }](\mu (\theta )-r)d\theta =L_{\mathcal{Q}}(0)-2A(\mathcal{P})r,
\end{equation*}
which proves equation (\ref{eq:Lr}). To prove equation (\ref{eq:area}), observe that the region $D$ enclosed by $\gamma $ is the disjoint union of the $C_{r}$, $0\leq r\leq r_{\mathrm{in}}$. Since
\begin{equation*}
\left[ \frac{\partial \beta }{\partial \theta },\frac{\partial \beta }{\partial r}\right] =[p,p^{\prime }]\left( \mu (\theta )-r\right)
\end{equation*}
and $\mu (\theta )-r>0$ for $\theta \in I(r)$, we conclude that
\begin{equation*}
A=\int_{r=0}^{r_{\mathrm{in}}}\int_{I(r)}[p,p^{\prime }]\left( \mu (\theta )-r\right) d\theta dr=\int_{r=0}^{r_{\mathrm{in}}}L_{\mathcal{Q}}(r)dr.
\end{equation*}
\end{proof}

Now consider an arc of the unit $\mathcal{P}$-circle defined by $\theta _{1}\leq \theta \leq \theta _{2}$. Taking the tangents to $\mathcal{P}$ at $\theta =\theta _{1}$ and $\theta =\theta _{2}$, we obtain a polygonal line formed by a pair of segments (see Figure \ref{F1}). It is not difficult to verify that the $\mathcal{Q}$-lengths of the segments are
\begin{equation}
L_{1}=\frac{1-[p(\theta _{1}),q(\theta _{2})]}{[q(\theta _{1}),q(\theta _{2})]};\ \ L_{2}=\frac{1-[p(\theta _{2}),q(\theta _{1})]}{[q(\theta _{1}),q(\theta _{2})]}.
\end{equation}
The $\mathcal{Q}$-length of the arc is given by
\begin{equation}
L_{arc}=\int_{\theta _{1}}^{\theta _{2}}[p,p^{\prime }]d\theta ,
\end{equation}
and we define
\begin{equation}
\delta (\theta _{1},\theta _{2})=L_{1}+L_{2}-L_{arc}.
\end{equation}
Then $\delta =\delta (\theta _{1},\theta _{2})$ is strictly positive (see \cite{thompson}).
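In the euclidean case this quantity is explicit, and the computation may serve as a sanity check (an illustration we add here, not used in the sequel): for $\mathcal{P}$ the unit disc we have $p=e_{r}$ and $q=e_{\theta }$, so $[p(\theta _{1}),q(\theta _{2})]=\cos (\theta _{2}-\theta _{1})$ and $[q(\theta _{1}),q(\theta _{2})]=\sin (\theta _{2}-\theta _{1})$; hence, writing $\Delta =\theta _{2}-\theta _{1}\in (0,\pi )$,
\begin{equation*}
L_{1}=L_{2}=\frac{1-\cos \Delta }{\sin \Delta }=\tan \frac{\Delta }{2},\qquad L_{arc}=\Delta ,\qquad \delta =2\tan \frac{\Delta }{2}-\Delta >0.
\end{equation*}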
\begin{figure}
\caption{Tangent segments to $\mathcal{P}$.}
\label{F1}
\end{figure}

\begin{figure}
\caption{Arcs of $C$ between $\theta _{1}$ and $\theta _{2}$.}
\label{F2}
\end{figure}

For $r>\mu _{0}$, the curves $C_{r}$ necessarily have corners. Thus we must consider curves which are piecewise smooth with a finite number of vertices.

\begin{lemma}
\label{lemma:corner} Assume that $C$ is piecewise smooth and that at some corner $V$ the parameters of the tangent lines are $\theta _{1}$ and $\theta _{2}$. Consider an arc of $\mathcal{P}$-circle of radius $z$ inscribed in this corner. Denote by $l_{1}(z)$ and $l_{2}(z)$ the $\mathcal{Q}$-lengths of the arcs of $C$ between the vertex $V$ and the tangency points $P_{1}\left( z\right) $ and $P_{2}\left( z\right) $ (see Figure \ref{F2}). Then
\begin{equation*}
l_{1}(z)+l_{2}(z)=zL_{arc}+z\delta \left( \theta _{1},\theta _{2}\right) +O(z^{2}),
\end{equation*}
where $\lim_{z\rightarrow 0}\frac{O(z^{2})}{z}=0.$
\end{lemma}

\begin{proof}
We parameterize $C$ around $V$ by a $\mathcal{Q}$-arclength parameter $s$ using a function $g:\left( -\varepsilon ,\varepsilon \right) \rightarrow \mathbb{R}^{2}$ with $g\left( 0\right) =V$; thus $g$ is smooth everywhere except at $s=0$, where it has the lateral derivatives
\begin{equation*}
\frac{dg}{ds}\left( 0_{-}\right) =q\left( \theta _{1}\right) \ \mathrm{and}\ \frac{dg}{ds}\left( 0_{+}\right) =q\left( \theta _{2}\right) .
\end{equation*}
The definition of the tangency points means that $P_{1}\left( z\right) =g\left( -l_{1}\left( z\right) \right) $ and $P_{2}\left( z\right) =g\left( l_{2}\left( z\right) \right) $. Writing the position of the center $O\left( z\right) $ in two ways, we obtain $l_{1}$ and $l_{2}$ implicitly as functions of $z$:
\begin{equation*}
g\left( -l_{1}\left( z\right) \right) -zp\left( \theta _{1}\left( z\right) \right) =g\left( l_{2}\left( z\right) \right) -zp\left( \theta _{2}\left( z\right) \right) ,
\end{equation*}
where $\theta _{1}\left( z\right) $ and $\theta _{2}\left( z\right) $ are the $\theta $-parameters associated to $P_{1}$ and $P_{2}$. Taking derivatives with respect to $z$ and letting $z\rightarrow 0$ (so that $\theta _{1}\left( z\right) \rightarrow \theta _{1}$ and $\theta _{2}\left( z\right) \rightarrow \theta _{2}$) we arrive at
\begin{equation*}
-q\left( \theta _{1}\right) \frac{dl_{1}}{dz}\left( 0\right) -p\left( \theta _{1}\right) =q\left( \theta _{2}\right) \frac{dl_{2}}{dz}\left( 0\right) -p\left( \theta _{2}\right) .
\end{equation*}
This equation depends only on the angles $\theta _{1}$ and $\theta _{2}$; the exact shape of $C$ plays no role. So, up to first order, the lengths $l_{1}$ and $l_{2}$ depend on $z$ in the same way they would if the curve were already the polygonal line of Figure \ref{F1} (scaled by a factor of $z$), that is,
\begin{eqnarray*}
l_{1}\left( z\right) +l_{2}\left( z\right) &=&z\left( L_{1}+L_{2}\right) +O\left( z^{2}\right) = \\
&=&zL_{arc}+z\delta \left( \theta _{1},\theta _{2}\right) +O\left( z^{2}\right) .
\end{eqnarray*}
\end{proof}

\begin{prop}
\label{prop:corner} Consider a convex curve $C$ with at least one corner, and let $L_{\mathcal{Q}}(r)$ denote the $\mathcal{Q}$-length of its inner parallel curve at distance $r$. Then
\begin{equation*}
\frac{dL_{\mathcal{Q}}}{dr}(0)<-2A(\mathcal{P}).
\end{equation*}
\end{prop}

\begin{proof}
Denote by $\theta _{1}<\theta _{2}$ the angles of a corner $k$ and consider the arc of the circle $\mathcal{P}$ defined by the angles $\theta _{1}$ and $\theta _{2}$. Consider $z$ small and inscribe a circle $z\mathcal{P}$ at the corner $k$.
Denote by $\overline{C}_{z}$ the curve obtained from $C$ by replacing each corner with the corresponding arc of the circle $z\mathcal{P}$. By Lemma \ref{lemma:corner}, the length difference at the corner $k$ is $l_{k}(C)-L_{k}(\overline{C}_{z})=z\delta _{k}+O(z^{2})$. Since
\begin{equation*}
L_{\mathcal{Q}}(\overline{C}_{z})=L_{\mathcal{Q}}(z)+2A(\mathcal{P})z,
\end{equation*}
we conclude that
\begin{equation*}
L_{\mathcal{Q}}(0)=L_{\mathcal{Q}}(z)+\left( 2A(\mathcal{P})+\sum_{k}\delta _{k}\right) z+O(z^{2}),
\end{equation*}
thus proving the proposition.
\end{proof}

\begin{coro}
\label{cor:Lr} If $r>\mu _{0}$ then $L_{\mathcal{Q}}(r)<L_{\mathcal{Q}}-2A(\mathcal{P})r$.
\end{coro}

\begin{proof}
By Proposition \ref{prop:corner}, $\frac{dL_{\mathcal{Q}}}{dr}\leq -2A(\mathcal{P})$, with equality if and only if $r\leq \mu _{0}$; indeed, $C_{r}$ has a corner if and only if $r>\mu _{0}$. Integrating from $0$ to $r$ we obtain $L_{\mathcal{Q}}(r)<L_{\mathcal{Q}}-2A(\mathcal{P})r$.
\end{proof}

Now we can complete the proof of Lemma \ref{teo01}: if equality holds in (\ref{eqn:14}), then Proposition \ref{prop:ALr} and Corollary \ref{cor:Lr} imply that $r_{\mathrm{in}}\leq \mu _{0}$. But then Lemma \ref{lemma:rinmu0} implies that $r_{\mathrm{in}}=\mu _{0}$ and $\gamma $ is a $\mathcal{P}$-circle.

\textbf{Remark.} Lemma \ref{teo01} is not necessarily true if $\gamma $ and the $\mathcal{P}$-ball are not smooth. A counterexample: take $\mathcal{P}$ to be the square with vertices $(\pm 1,\pm 1)$ (so that $\mathcal{Q}$ is the square $\left\vert x\right\vert +\left\vert y\right\vert \leq 1$) and $\gamma $ the rectangle with vertices $(\pm 2,\pm 1)$.

\section{\label{SecCurv}The minkowskian curvature flow}

We define the minkowskian curvature flow to be a family of closed curves $F:S^{1}\times \lbrack 0,T)\rightarrow \mathbb{R}^{2}$ satisfying the following:
\begin{align}
\frac{\partial F}{\partial u}(u,t)& =v(u,t).q(\theta (u,t));\ \mathrm{and}  \label{eqn:7} \\
\frac{\partial F}{\partial t}(u,t)& =-k(u,t).p(\theta (u,t))  \notag \\
F\left( u,0\right) & =\gamma \left( u\right)  \notag
\end{align}
where $\gamma $ is a simple closed curve and, as usual, $\theta (u,t)$ is defined so that the angle between the $x$-axis and $\partial F/\partial u$ at the point $(u,t)$ is $\theta (u,t)+\pi /2$.

\begin{lemma}
\label{lemma1} The following hold at each point of the flow:

\begin{description}
\item[(a)] $\displaystyle\frac{\partial v}{\partial t}=-k^{2}v$; and

\item[(b)] $\displaystyle\frac{\partial \theta }{\partial t}=\frac{1}{v\left[ q,q^{\prime }\right] }\frac{\partial k}{\partial u}=\frac{1}{\left[ q,q^{\prime }\right] }\frac{\partial k}{\partial s}$
\end{description}
\end{lemma}

\begin{proof}
Notice first that
\begin{equation*}
\frac{\partial }{\partial t}\left( \frac{\partial F}{\partial u}\right) =\frac{\partial v}{\partial t}.q+v\frac{\partial \theta }{\partial t}.q^{\prime }=\frac{\partial v}{\partial t}.q-v\frac{\partial \theta }{\partial t}[q,q^{\prime }].p
\end{equation*}
Now,
\begin{equation*}
\frac{\partial }{\partial u}\left( \frac{\partial F}{\partial t}\right) =-\frac{\partial k}{\partial u}.p-k\frac{\partial \theta }{\partial u}.p^{\prime }=-\frac{\partial k}{\partial u}.p-k\frac{v}{\lambda }[p,p^{\prime }].q=-\frac{\partial k}{\partial u}.p-k^{2}v.q
\end{equation*}
Then the result follows since $p$ and $q$ are always linearly independent.
\end{proof}

\begin{lemma}
\label{lemma2} The evolutions of the $\mathcal{Q}$-length and of the enclosed area are given respectively by
\begin{equation*}
\frac{dL_{\mathcal{Q}}}{dt}=-\int_{0}^{L_{\mathcal{Q}}(t)}k^{2}(s,t)ds;\ \mathrm{and}
\end{equation*}
\begin{equation*}
\frac{dA}{dt}=-2A(\mathcal{P})
\end{equation*}
\end{lemma}

\begin{proof}
Since $L_{\mathcal{Q}}(t)=\displaystyle\int_{0}^{2\pi }v(u,t)du$ we have
\begin{equation*}
\frac{dL_{\mathcal{Q}}}{dt}=\int_{0}^{2\pi }\frac{\partial v}{\partial t}du=-\int_{0}^{2\pi }k^{2}v\ du=-\int_{0}^{L_{\mathcal{Q}}}k^{2}\ ds.
\end{equation*}
The area $A(t)$ enclosed by the curve at time $t$ is given by
\begin{equation*}
A(t)=\frac{1}{2}\int_{0}^{2\pi }\left[ F(u,t),\frac{\partial F}{\partial u}(u,t)\right] du.
\end{equation*}
Thus, integrating by parts,
\begin{align*}
\frac{dA}{dt}& =\frac{1}{2}\int_{0}^{2\pi }\left[ \frac{\partial F}{\partial t},\frac{\partial F}{\partial u}\right] du+\frac{1}{2}\int_{0}^{2\pi }\left[ F,\frac{\partial ^{2}F}{\partial t\partial u}\right] du= \\
& =\int_{0}^{2\pi }\left[ \frac{\partial F}{\partial t},\frac{\partial F}{\partial u}\right] du=\int_{0}^{2\pi }[-kp,vq]du= \\
& =\int_{0}^{2\pi }-kv\ du=-\int_{0}^{L_{\mathcal{Q}}}k\ ds=-2A(\mathcal{P}).
\end{align*}
\end{proof}

With these evolution formulae one can easily show that the evolution of the isoperimetric ratio is
\begin{equation}
\frac{d}{dt}\left( \frac{L_{\mathcal{Q}}^{2}}{A}\right) =-\frac{2L_{\mathcal{Q}}}{A}\left( \int_{0}^{L_{\mathcal{Q}}}k^{2}ds-A(\mathcal{P})\frac{L_{\mathcal{Q}}}{A}\right) .  \label{eqn:disodt}
\end{equation}
Indeed, by Lemma \ref{lemma2},
\begin{equation*}
\frac{d}{dt}\left( \frac{L_{\mathcal{Q}}^{2}}{A}\right) =\frac{2L_{\mathcal{Q}}}{A}\frac{dL_{\mathcal{Q}}}{dt}-\frac{L_{\mathcal{Q}}^{2}}{A^{2}}\frac{dA}{dt}=-\frac{2L_{\mathcal{Q}}}{A}\int_{0}^{L_{\mathcal{Q}}}k^{2}ds+2A(\mathcal{P})\frac{L_{\mathcal{Q}}^{2}}{A^{2}}.
\end{equation*}
Together with (\ref{eqn:12}), this shows that the isoperimetric ratio is nonincreasing along the motion. In the next section we will prove that, as in the Euclidean case, if the flow continues until the area converges to zero and the curves remain simple and convex along the motion, then the isoperimetric ratio converges to the optimal value $4A(\mathcal{P})$. But first, we establish the evolution of the curvature function.

\begin{lemma}
\label{lemmapde} The minkowskian curvature $k$ evolves according to the PDE
\begin{equation*}
\frac{\partial k}{\partial \tau }=\frac{a}{a+a^{\prime \prime }}k^{2}\frac{\partial ^{2}k}{\partial \theta ^{2}}+\frac{2a^{\prime }}{a+a^{\prime \prime }}k^{2}\frac{\partial k}{\partial \theta }+k^{3},
\end{equation*}
where $\tau $ is the time parameter which is independent of $\theta $.
\end{lemma}

\begin{proof}
Using Lemma \ref{lemma1}\textbf{(a)} and $ds=vdu$, we arrive at
\begin{equation*}
\frac{\partial }{\partial t}\frac{\partial }{\partial s}-\frac{\partial }{\partial s}\frac{\partial }{\partial t}=k^{2}\frac{\partial }{\partial s},
\end{equation*}
just as in the Euclidean case. We apply this to the function $\theta =\theta (s,t)$ and use $ds=\lambda d\theta $ and Lemma \ref{lemma1}\textbf{(b)} to obtain
\begin{equation*}
\frac{\partial }{\partial t}\left( \frac{k}{[p,p^{\prime }]}\right) -\frac{\partial }{\partial s}\left( \frac{1}{\left[ q,q^{\prime }\right] }\frac{\partial k}{\partial s}\right) =\frac{k^{3}}{[p,p^{\prime }]}.
\end{equation*}
Unfortunately $p$ and $q$ now depend on $t$ as well, so, using equations (\ref{eqn:Mink}) and (\ref{eqn:2}), we arrive at
\begin{equation*}
\frac{1}{a\left( a+a^{\prime \prime }\right) }\frac{\partial k}{\partial t}-\frac{k}{a^{2}\left( a+a^{\prime \prime }\right) ^{2}}\left[ a^{\prime }\left( a+a^{\prime \prime }\right) +a\left( a^{\prime }+a^{\prime \prime \prime }\right) \right] \frac{\partial \theta }{\partial t}-2aa^{\prime }\frac{\partial k}{\partial s}\frac{\partial \theta }{\partial s}-a^{2}\frac{\partial ^{2}k}{\partial s^{2}}=\frac{k^{3}}{a\left( a+a^{\prime \prime }\right) }.
\end{equation*}
Now we change all $s$-derivatives to $\theta $-derivatives using equation (\ref{eqn:5}), and use Lemma \ref{lemma1}\textbf{(b)} to eventually get to
\begin{equation*}
\frac{\partial k}{\partial t}-\frac{2a^{\prime }}{a+a^{\prime \prime }}k^{2}\frac{\partial k}{\partial \theta }-\frac{a}{a+a^{\prime \prime }}k\left( \frac{\partial k}{\partial \theta }\right) ^{2}-\frac{a}{a+a^{\prime \prime }}k^{2}\frac{\partial ^{2}k}{\partial \theta ^{2}}=k^{3}.
\end{equation*}
Now, writing $k=k(\theta ,\tau )$ yields
\begin{equation*}
\frac{\partial k}{\partial t}=\frac{\partial k}{\partial \theta }\frac{\partial \theta }{\partial t}+\frac{\partial k}{\partial \tau }.
\end{equation*}
Using this (and replacing once again $\displaystyle\frac{\partial \theta }{\partial t}$ using Lemma \ref{lemma1}\textbf{(b)}) we finish the proof.
\end{proof}

\section{\label{SecConv}Convergence of the isoperimetric ratio}

We now show that the flow rounds the curves out, provided they shrink to a point. In the following, $\gamma (u,t):S^{1}\times \lbrack 0,T)\rightarrow \mathbb{R}^{2}$ is a family (parameterized by $t$) of curves in $\mathcal{C}$ which solves the minkowskian curvature flow (in the next section we will show that $\gamma \left( \cdot ,0\right) \in \mathcal{C}\Rightarrow \gamma \left( \cdot ,t\right) \in \mathcal{C}$). The $\mathcal{Q}$-length and the area of the curve at time $t$ are denoted, as usual, by $L_{\mathcal{Q}}(t)$ and $A(t)$.

\begin{lemma}
\label{lemma6} If $\displaystyle\lim_{t\rightarrow T}A(t)=0$ then
\begin{equation*}
\liminf_{t\rightarrow T}L_{\mathcal{Q}}(t)\left( \int_{0}^{L_{\mathcal{Q}}(t)}k^{2}ds-A(\mathcal{P})\frac{L_{\mathcal{Q}}(t)}{A(t)}\right) =0
\end{equation*}
\end{lemma}

\begin{proof}
Suppose there exist $\epsilon >0$ and $t_{1}\in (0,T)$ such that
\begin{equation*}
L_{\mathcal{Q}}(t)\left( \int_{0}^{L_{\mathcal{Q}}(t)}k^{2}ds-A(\mathcal{P})\frac{L_{\mathcal{Q}}(t)}{A(t)}\right) >\epsilon
\end{equation*}
for every $t\in (t_{1},T)$. Put $g(t)=\log (A(t))$ for $t\in \lbrack 0,T)$. Using the evolution of the isoperimetric ratio (\ref{eqn:disodt}) we have
\begin{equation*}
\frac{d}{dt}\left( \frac{L_{\mathcal{Q}}^{2}}{A}\right) \leq -\frac{2}{A}\epsilon =\frac{\epsilon }{A(\mathcal{P})}\frac{dg}{dt}
\end{equation*}
Fix $t\in (t_{1},T)$. The isoperimetric inequality (\ref{eqn:isop}) and integration from $t_{1}$ to $t$ yield
\begin{equation*}
4A(\mathcal{P})\leq \frac{L_{\mathcal{Q}}^{2}(t)}{A(t)}\leq \frac{L_{\mathcal{Q}}^{2}(t_{1})}{A(t_{1})}-\frac{\epsilon }{A(\mathcal{P})}\log (A(t_{1}))+\frac{\epsilon }{A(\mathcal{P})}\log (A(t))
\end{equation*}
But the right-hand side goes to $-\infty $ as $t$ converges to $T$. This contradiction completes the proof.
\end{proof}

Now we are ready to prove the main theorem of this section.
\begin{teo}
\label{teo3} If $\displaystyle\lim_{t\rightarrow T}A(t)=0$ then $\displaystyle\lim_{t\rightarrow T}\frac{L_{\mathcal{Q}}^{2}(t)}{A(t)}=4A(\mathcal{P})$.
\end{teo}

\begin{proof}
First we rewrite the inequality in Theorem \ref{teoGage} as
\begin{equation*}
\int_{0}^{L_{\mathcal{Q}}}k^{2}ds-A(\mathcal{P})\frac{L_{\mathcal{Q}}}{A}\geq F(\gamma )\int_{0}^{L_{\mathcal{Q}}}k^{2}ds.
\end{equation*}
The Schwarz inequality yields
\begin{equation*}
L_{\mathcal{Q}}\int_{0}^{L_{\mathcal{Q}}}k^{2}ds\geq \left( \int_{0}^{L_{\mathcal{Q}}}k\ ds\right) ^{2}=4A(\mathcal{P})^{2}.
\end{equation*}
Combining both inequalities we have the following inequality for each curve $\gamma (\cdot ,t)$:
\begin{equation*}
L_{\mathcal{Q}}\left( \int_{0}^{L_{\mathcal{Q}}}k^{2}ds-A(\mathcal{P})\frac{L_{\mathcal{Q}}}{A}\right) \geq 4A(\mathcal{P})^{2}F(\gamma ).
\end{equation*}
The previous lemma guarantees that the left-hand side converges to $0$ for some subsequence $t_{j}\rightarrow T$. Since $F$ is a non-negative functional we also have $F(\gamma (\cdot ,t_{j}))\rightarrow 0$ as $t_{j}\rightarrow T$. Let $\eta _{j}$ be the normalized curve $\eta _{j}=\displaystyle\sqrt{\frac{A(\mathcal{P})}{A(t_{j})}}\gamma (\cdot ,t_{j})$. Using the same technique presented in \textbf{\cite{gage2}} one can show that the curves $\eta _{j}$ lie in a common bounded region of the plane. Then, since $F$ satisfies property \textbf{\textit{(3)}} of Theorem \ref{teo2}, the region $H_{j}$ enclosed by $\eta _{j}$ converges in the Hausdorff topology to the unit $\mathcal{P}$-disc. It follows that $\displaystyle\frac{L_{\mathcal{Q}}^{2}(t_{j})}{A(t_{j})}$ converges to $4A(\mathcal{P})$ as $t_{j}\rightarrow T$. Since $\displaystyle\frac{L_{\mathcal{Q}}^{2}(t)}{A(t)}$ is nonincreasing, the convergence holds, in fact, along the whole flow, and we have the desired result.
\end{proof}

\section{\label{SecExist}Existence of the minkowskian curvature flow}

The final step is to prove that the minkowskian curvature flow in fact exists and continues until the area enclosed by the curves converges to zero. We now establish:

\begin{lemma}
\label{lemma7} Let $k:[0,2\pi ]\rightarrow \mathbb{R}$ be a $C^{1}$ positive $2\pi $-periodic function. Then $k$ is the Minkowski curvature of a simple closed strictly convex $C^{2}$ plane curve if and only if
\begin{equation}
\int_{0}^{2\pi }\frac{a(\theta )+a^{\prime \prime }(\theta )}{k(\theta )}\sin \theta \ d\theta =\int_{0}^{2\pi }\frac{a(\theta )+a^{\prime \prime }(\theta )}{k(\theta )}\cos \theta \ d\theta =0  \label{eqn:16}
\end{equation}
\end{lemma}

\begin{proof}
Suppose first that $\gamma :[0,2\pi ]\rightarrow \mathbb{R}^{2}$ is a closed $C^{2}$ curve whose curvature is given by $k$. As $\displaystyle\int_{0}^{2\pi }\gamma ^{\prime }(\theta )d\theta =0$ we have
\begin{equation*}
0=\int_{0}^{2\pi }\lambda (\theta )q(\theta )d\theta =\int_{0}^{2\pi }\frac{[p(\theta ),p^{\prime }(\theta )]}{k(\theta )a(\theta )}(-\sin \theta ,\cos \theta )d\theta ,
\end{equation*}
and the desired equalities follow from equation (\ref{eqn:Mink}). On the other hand, if $k$ is a $C^{1}$ positive $2\pi $-periodic function such that (\ref{eqn:16}) holds, we can define
\begin{equation*}
\gamma (\theta )=\left( -\int_{0}^{\theta }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma )}\sin \sigma \ d\sigma ,\int_{0}^{\theta }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma )}\cos \sigma \ d\sigma \right) ,
\end{equation*}
which is clearly a closed $C^{2}$ curve.
Furthermore,
\begin{equation*}
\gamma ^{\prime }(\theta )=\frac{a(\theta )+a^{\prime \prime }(\theta )}{k(\theta )}(-\sin \theta ,\cos \theta )=\frac{a(a+a^{\prime \prime })(\theta )}{k(\theta )}q(\theta )=\frac{[p(\theta ),p^{\prime }(\theta )]}{k(\theta )}q(\theta ),
\end{equation*}
and hence the Minkowski curvature of $\gamma $ is precisely $k$. To complete the proof, notice that $\gamma $ is simple since its Gauss map is injective.
\end{proof}

Now, inspired by Lemma \ref{lemmapde}, we will see how the solution to the curvature motion emerges from the solution of a parabolic differential equation. From now on, we use $t$ for the time parameter, which is independent of $\theta $.

\begin{teo}
\label{teo4} Consider a function $k:S^{1}\times \lbrack 0,T)\rightarrow \mathbb{R}$, such that $k\in C^{2+\alpha ,1+\alpha }(S^{1}\times \lbrack 0,T-\epsilon ])$ for all $\epsilon >0$, satisfying the evolution equation:
\begin{equation}
\frac{\partial k}{\partial t}=\frac{a}{a+a^{\prime \prime }}k^{2}\frac{\partial ^{2}k}{\partial \theta ^{2}}+\frac{2a^{\prime }}{a+a^{\prime \prime }}k^{2}\frac{\partial k}{\partial \theta }+k^{3}  \label{eqn:17}
\end{equation}
with initial value $k(\theta ,0)=\varphi (\theta )$, where $\varphi $ is a strictly positive $C^{1+\alpha }$ function such that:
\begin{equation*}
\int_{0}^{2\pi }\frac{a(\theta )+a^{\prime \prime }(\theta )}{\varphi (\theta )}\sin \theta \ d\theta =\int_{0}^{2\pi }\frac{a(\theta )+a^{\prime \prime }(\theta )}{\varphi (\theta )}\cos \theta \ d\theta =0
\end{equation*}
Using this function (whose short-term existence and uniqueness are guaranteed by the standard theory of parabolic equations) one can build the following family of curves, parameterized by $t$:
\begin{align*}
F(\theta ,t)& =\left( -\int_{0}^{\theta }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\sin \sigma \ d\sigma -\int_{0}^{t}a(0)k(0,s)\ ds,\right. \\
& \left. \int_{0}^{\theta }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\cos \sigma \ d\sigma -\int_{0}^{t}a(0)\frac{\partial k}{\partial \sigma }(0,s)+a^{\prime }(0)k(0,s)\ ds\right)
\end{align*}
for which the following holds:

\begin{description}
\item[(a)] for each fixed $t$ the map $\theta \mapsto F(\theta,t)$ is a simple closed strictly convex curve parameterized as usual (the tangent vector at $\theta$ points in the $q(\theta)$ direction) whose Minkowski curvature is given by $\theta \mapsto k(\theta,t)$.

\item[(b)] $\displaystyle\frac{\partial F}{\partial t}(\theta ,t)=-k(\theta ,t)p(\theta )-a^{2}(\theta )\frac{\partial k}{\partial \theta }(\theta ,t)q(\theta )$
\end{description}
\end{teo}

\begin{proof}
For each fixed $t$ the curve $\theta \mapsto F(\theta ,t)$ is, up to a translation, built as in Lemma \ref{lemma7}, and is clearly parameterized as usual. We begin by proving that $k(\theta ,t)$ is a strictly positive function. Define
\begin{equation*}
k_{\mathrm{MIN}}(t)=\inf_{[0,2\pi ]}k(\theta ,t)
\end{equation*}
and notice that $k_{\mathrm{MIN}}$ is a continuous function which is positive when $t=0$ (by the initial value conditions and compactness). We claim that $k_{\mathrm{MIN}}$ is bounded from below by $k_{\mathrm{MIN}}(0)$. Indeed, suppose there exists $t\in (0,T)$ such that $0<k_{\mathrm{MIN}}(t)=\delta <k_{\mathrm{MIN}}(0)$ and take $t_{0}=\inf k_{\mathrm{MIN}}^{-1}(\delta )$. Since $k_{\mathrm{MIN}}^{-1}(\delta )$ is a closed set we have $t_{0}\in (0,T)$. By compactness the function $\theta \mapsto k(\theta ,t_{0})$ assumes the value $\delta $ for some $\theta _{0}\in \lbrack 0,2\pi ]$.
Then, \begin{equation} \frac{\partial k}{\partial t}(\theta _{0},t_{0})\leq 0;\ \ \frac{\partial k}{ \partial \theta }(\theta _{0},t_{0})=0;\ \ \mathrm{and}\ \frac{\partial ^{2}k }{\partial \theta ^{2}}(\theta _{0},t_{0})\geq 0 \label{eqn:18} \end{equation} For the first inequality observe that the function $t\mapsto k(\theta _{0},t) $ must be nonincreasing by the left near $t_{0}$, otherwise the definition of $t_{0}$ would be contradicted. The last two relations emerge from the fact that $\theta _{0}$ is a minimum of the function $\theta \mapsto k(\theta ,t_{0})$. Finally, (\ref{eqn:18}) and $k(\theta _{0},t_{0})=\delta >0$ contradict the assumption that $k$ satisfies (\ref{eqn:17}), as long as $\frac{a}{ a+a^{\prime \prime }}>0$. This proves the claim and as consequence we have that $k$ is strictly positive. Our next step is to prove that, for each $t$, we have \begin{equation*} \int_{0}^{2\pi }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)} \sin \sigma \ d\sigma =\int_{0}^{2\pi }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\cos \sigma \ d\sigma =0 \end{equation*} By the hypothesis this is true for $t=0$. So it's enough to prove that the derivatives of the functions $t\mapsto -\displaystyle\int_{0}^{2\pi }\frac{ a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\sin \sigma \ d\sigma $ and $t\mapsto \displaystyle\int_{0}^{2\pi }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\cos \sigma \ d\sigma $ vanish identically. Using ( \ref{eqn:17}) and integration by parts we calculate \begin{align*} \frac{d}{dt}\left( -\int_{0}^{2\pi }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\sin \sigma \ d\sigma \right) & =\int_{0}^{2\pi } \frac{1}{k^{2}}\frac{\partial k}{\partial t}(a+a^{\prime \prime })\sin \sigma \ d\sigma = \\ & =\int_{0}^{2\pi }\sin \sigma \frac{\partial }{\partial \sigma }\left( a \frac{\partial k}{\partial \sigma }\right) +\sin \sigma \frac{\partial }{ \partial \sigma }\left( a^{\prime }k\right) +ak\sin \sigma \ d\sigma = \\ & =a\frac{\partial k}{\partial \theta }\sin \theta \bigg|_{0}^{2\pi }+a^{\prime }k\sin \theta \bigg|_{0}^{2\pi }-ak\cos \theta \bigg|_{0}^{2\pi }=0 \end{align*} where the last equality comes from the fact that all the involved functions are $2\pi $-periodic. We do the same for the other function and then Lemma \ref{lemma7} yields \textbf{(a)}. \newline For \textbf{(b)} we calculate the time derivatives of each component using, again, integration by parts and (\ref{eqn:17}). 
For the first component we have: \begin{multline*} \frac{d}{dt}\left( -\int_{0}^{\theta }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\sin \sigma \ d\sigma -\int_{0}^{t}a(0)k(0,s)\ ds\right) = \\ =\int_{0}^{\theta }\frac{1}{k^{2}}\frac{\partial k}{\partial t}(a+a^{\prime \prime })\sin \sigma \ d\sigma -a(0)k(0,t)= \\ =\int_{0}^{\theta }\sin \sigma \frac{\partial }{\partial \sigma }\left( a \frac{\partial k}{\partial \sigma }\right) +\sin \sigma \frac{\partial }{ \partial \sigma }\left( a^{\prime }k\right) +ak\sin \sigma \ d\sigma -a(0)k(0,t)= \\ =a\frac{\partial k}{\partial \sigma }\sin \sigma \bigg|_{0}^{\theta }+a^{\prime }k\sin \sigma \bigg|_{0}^{\theta }-ak\cos \sigma \bigg| _{0}^{\theta }-a(0)k(0,t)= \\ =a(\theta )\frac{\partial k}{\partial \theta }(\theta ,t)\sin \theta +a^{\prime }(\theta )k(\theta ,t)\sin \theta -a(\theta )k(\theta ,t)\cos \theta \end{multline*} And for the second: \begin{multline*} \frac{d}{dt}\left( \int_{0}^{\theta }\frac{a(\sigma )+a^{\prime \prime }(\sigma )}{k(\sigma ,t)}\cos \sigma \ d\sigma -\int_{0}^{t}a(0)\frac{ \partial k}{\partial \sigma }(0,s)+a^{\prime }(0)k(0,s)\ ds\right) = \\ =-\int_{0}^{\theta }a\frac{\partial ^{2}k}{\partial \sigma ^{2}}\cos \sigma +2a^{\prime }\frac{\partial k}{\partial \theta }\cos \sigma +(a+a^{\prime \prime })k\cos \sigma \ d\sigma -a(0)\frac{\partial k}{\partial \theta } (0,t)-a^{\prime }(0)k(0,t)= \\ =-a\frac{\partial k}{\partial \sigma }\cos \sigma \bigg|_{0}^{\theta }-a^{\prime }k\cos \sigma \bigg|_{0}^{\theta }-ak\sin \sigma \bigg| _{0}^{\theta }-a(0)\frac{\partial k}{\partial \theta }(0,t)-a^{\prime }(0)k(0,t)= \\ =-a(\theta )\frac{\partial k}{\partial \theta }(\theta ,t)\cos \theta -a^{\prime }(\theta )k(\theta ,t)\cos \theta -a(\theta )k(\theta ,t)\sin \theta \end{multline*} Therefore, \begin{multline*} \frac{\partial F}{\partial t}(\theta ,t)=-k(\theta ,t).\left( a(\theta )\cos \theta -a^{\prime }(\theta )\sin \theta ,a(\theta )\sin \theta +a^{\prime }(\theta )\cos \theta \right) -a(\theta )\frac{\partial k}{\partial \theta } (\theta ,t).(-\sin \theta ,\cos \theta )= \\ =-k(\theta ,t)p(\theta )-a^{2}(\theta )\frac{\partial k}{\partial \theta } (\theta ,t)q(\theta ) \end{multline*} and this concludes the proof. \end{proof} By changing the space parameter one can make the tangential component vanish while keeping the shape of the curves. For this reason Theorem \ref{teo4} yields the desired Minkowski curvature flow stated in (\ref{eqn:7}). Notice that it follows also that the curves remain simple and strictly convex along the motion. To show that the solution continues until the area enclosed by the curves converges to zero we prove that the curvature and its derivatives remain bounded as long as the area is bounded away from zero. Let us begin with a Lemma that is independent of the flow. \begin{defi} \label{defi1} Consider a curve parameterized by the usual $\theta $ and with Minkowski curvature $k$. We define the minkowskian median curvature $k^{\ast }$ for the curve as the supremum of all values $x$ for which we have $ k(\theta )>x$ on some interval of length $\pi $. \end{defi} \begin{lemma} \label{lemma8} Let $\gamma :[0,2\pi ]\rightarrow \mathbb{R}^{2}$ be a curve in the Minkowski plane which is simple, closed and convex. Denote, as usual, the $\mathcal{Q}$-length and the enclosed area by $L_{\mathcal{Q}}$ and $A$ respectively. Then, \begin{equation*} k^{\ast }\leq C\frac{L_{\mathcal{Q}}}{A} \end{equation*} for some constant $C$ that doesn't depends on the curve. 
\end{lemma} \begin{proof} Writing as usual $\gamma ^{\prime }(\theta )=\lambda (\theta )q(\theta )$ we have that the $\mathcal{Q}$-length is given by \begin{equation*} s(\theta )=\int_{0}^{\theta }\lambda (\sigma )d\sigma \end{equation*} and the euclidean length is given by \begin{equation*} s_{E}(\theta )=\int_{0}^{\theta }\lambda (\sigma )|q(\sigma )|d\sigma \end{equation*} where $|\cdot |$ is the euclidean norm. Put $q_{0}=\max_{[0,2\pi ]}|q(\theta )|$. Denoting by $L$ the euclidean length of $\gamma $ is easy to see that $ L\leq q_{0}L_{\mathcal{Q}}$. Furthermore, denoting the euclidean curvature by $k_{E}$ we have $k_{E}(\theta )=k(\theta )\left( |q(\theta )|[p(\theta ),p^{\prime }(\theta )]\right) ^{-1}$. If $0<B<k^{\ast }$ we can take an interval $(\theta _{0},\theta _{0}+\pi )$ in which $k>B$. Moreover, we know that the area is bounded by any usual width times $L/2$. Then \begin{align*} A& \leq \frac{L}{2}\left\vert \int_{\theta _{0}}^{\theta _{0}+\pi }\frac{ \sin (\theta _{0}-\theta )}{k_{E}(\theta )}d\theta \right\vert =\frac{L}{2} \left\vert \int_{\theta _{0}}^{\theta _{0}+\pi }\frac{|q(\theta )|[p(\theta ),p^{\prime }(\theta )]\sin (\theta _{0}-\theta )}{k(\theta )}d\theta \right\vert \leq \\ & \leq \frac{q_{0}^{2}L_{\mathcal{Q}}}{2}\max_{\theta \in \lbrack 0,2\pi ]}[p(\theta ),p^{\prime }(\theta )]\int_{\theta _{0}}^{\theta _{0}+\pi }\left\vert \frac{\sin (\theta _{0}-\theta )}{k(\theta )}\right\vert d\theta \leq \frac{q_{0}^{2}L_{\mathcal{Q}}}{2}\max_{\theta \in \lbrack 0,2\pi ]}[p(\theta ),p^{\prime }(\theta )]\frac{2}{B}= \\ & =q_{0}^{2}\max_{\theta \in \lbrack 0,2\pi ]}[p(\theta ),p^{\prime }(\theta )]\frac{L_{\mathcal{Q}}}{B} \end{align*} Making $B\rightarrow k^{\ast }$ and taking $C=q_{0}^{2}\max_{\theta \in \lbrack 0,2\pi ]}[p(\theta ),p^{\prime }(\theta )]$ conclude the proof. Note carefully that $C$ only depends on the set $\mathcal{P}$ chosen as unit ball of our Minkowski plane. \end{proof} It is natural to denote by $k^{\ast }(t)$ the minkowskian median curvature of the flow curve $\theta \mapsto F(\theta ,t)$. Notice that if the areas enclosed by the curves are bounded from below on $[0,T)$ by some number $c>0$ then the median curvatures have an uniform upper bound on $[0,T)$. \begin{prop} \label{prop3} If $k^*(t)$ is bounded on $[0,T)$ then $\displaystyle \int_0^{2\pi}\left(a(\theta)+a^{\prime \prime }(\theta)\right)a(\theta)\log k(\theta,t)d\theta$ is also bounded on $[0,T)$. \end{prop} \begin{proof} First, adopting an easier notation we calculate \begin{align*} \frac{d}{dt}\left( \int_{0}^{2\pi }a(a+a^{\prime \prime })\log k\ d\theta \right) & =\int_{0}^{2\pi }\frac{a(a+a^{\prime \prime })}{k}\frac{\partial k }{\partial t}d\theta = \\ & =\int_{0}^{2\pi }a^{2}k\frac{\partial ^{2}k}{\partial \theta ^{2}} +2aa^{\prime }k\frac{\partial k}{\partial \theta }+a\left( a+a^{\prime \prime }\right) k^{2}d\theta = \\ & =\int_{0}^{2\pi }-\left( a\frac{\partial k}{\partial \theta }\right) ^{2}+(ak)^{2}+aa^{\prime \prime }k^{2}\ d\theta = \\ & =\int_{0}^{2\pi }(ak)^{2}-\left( \frac{\partial (ak)}{\partial \theta } \right) ^{2}d\theta \end{align*} here we used integration by parts and the evolution equation. A version of the Wirtinger's inequality states that if $f:[a,b]\rightarrow \mathbb{R}$ is a $C^{1}$ function such that $b-a\leq \pi $ and $f(a)=f(b)=0$ then \begin{equation*} \int_{a}^{b}f^{2}dx\leq \int_{a}^{b}\left( f^{\prime }\right) ^{2}dx \end{equation*} and we will use this result to estimate the above integral. 
Fix $t$ and consider the set $U\subseteq \lbrack 0,2\pi ]$ given by $U=\{\theta \in \lbrack 0,2\pi ]\mid k(\theta ,t)>k^{\ast }(t)\}$. By the definition of $k^{\ast }$ we note that $U$ is an at most countable union of disjoint intervals $I_{j}$ with $|I_{j}|\leq \pi $ for each $j$ and such that $k(\theta ,t)=k^{\ast }(t)$ at their endpoints. So, applying Wirtinger's inequality to the restriction of the function $\theta \mapsto a(\theta )k(\theta ,t)-a(\theta )k^{\ast }(t)$ to an interval $I_{j}$ yields
\begin{equation*}
\int_{I_{j}}(ak)^{2}-2a^{2}kk^{\ast }+(ak^{\ast })^{2}d\theta \leq \int_{I_{j}}\left( \frac{\partial (ak)}{\partial \theta }\right) ^{2}-2a^{\prime }k^{\ast }\frac{\partial (ak)}{\partial \theta }+(a^{\prime }k^{\ast })^{2}d\theta
\end{equation*}
Hence,
\begin{equation*}
\int_{I_{j}}(ak)^{2}-\left( \frac{\partial (ak)}{\partial \theta }\right) ^{2}d\theta \leq 2k^{\ast }(t)\int_{I_{j}}a\left( a+a^{\prime \prime }\right) kd\theta -k^{\ast }(t)^{2}\int_{I_{j}}\left( a^{\prime }\right) ^{2}+2aa^{\prime \prime }+a^{2}\ d\theta
\end{equation*}
Summing over $j$ yields the following estimate on $U$:
\begin{align*}
\int_{U}(ak)^{2}-\left( \frac{\partial (ak)}{\partial \theta }\right) ^{2}d\theta & \leq 2k^{\ast }(t)\int_{0}^{2\pi }a\left( a+a^{\prime \prime }\right) kd\theta -k^{\ast }(t)^{2}\int_{U}\left( a^{\prime }\right) ^{2}+2aa^{\prime \prime }+a^{2}\ d\theta = \\
& =-2k^{\ast }(t)\frac{dL_{\mathcal{Q}}}{dt}-k^{\ast }(t)^{2}\int_{U}\left( a^{\prime }\right) ^{2}+2aa^{\prime \prime }+a^{2}\ d\theta \leq \\
& \leq -2k^{\ast }(t)\frac{dL_{\mathcal{Q}}}{dt}+2\pi k^{\ast }(t)^{2}\max_{[0,2\pi ]}\left\vert \left( a^{\prime }\right) ^{2}+2aa^{\prime \prime }+a^{2}\right\vert
\end{align*}
On $[0,2\pi ]-U$ we have the estimate
\begin{equation*}
\int_{\lbrack 0,2\pi ]-U}(ak)^{2}-\left( \frac{\partial (ak)}{\partial \theta }\right) ^{2}d\theta \leq \int_{\lbrack 0,2\pi ]-U}(ak)^{2}d\theta \leq 2\pi k^{\ast }(t)^{2}\max_{[0,2\pi ]}a^{2}
\end{equation*}
Suppose that $M>0$ is an upper bound for $k^{\ast }(t)$ on $[0,T)$. Then the above estimates yield
\begin{equation*}
\frac{d}{dt}\left( \int_{0}^{2\pi }a(a+a^{\prime \prime })\log k\ d\theta \right) \leq -2M\frac{dL_{\mathcal{Q}}}{dt}+2\pi M^{2}C_{0}
\end{equation*}
for some constant $C_{0}>0$ that depends only on the chosen unit $\mathcal{P}$-ball. Let $C_{1}$ be the value of $\displaystyle\int_{0}^{2\pi }a(a+a^{\prime \prime })\log k\ d\theta $ at $t=0$. We write
\begin{align*}
\int_{0}^{2\pi }a(\theta )(a(\theta )+a^{\prime \prime }(\theta ))\log k(\theta ,t)\ d\theta & =C_{1}+\int_{0}^{t}\left( \frac{d}{dt}\left( \int_{0}^{2\pi }a(a+a^{\prime \prime })\log k\ d\theta \right) \right) dt\leq \\
& \leq C_{1}+\int_{0}^{t}-2M\frac{dL_{\mathcal{Q}}}{dt}+2\pi M^{2}C_{0}\ dt\leq \\
& \leq C_{1}-2M\left( L_{\mathcal{Q}}(t)-L_{\mathcal{Q}}(0)\right) +2\pi M^{2}C_{0}T\leq \\
& \leq C_{1}+2ML_{\mathcal{Q}}(0)+2\pi M^{2}C_{0}T
\end{align*}
and this completes the proof, since the right-hand side does not depend on $t$.
\end{proof}

\begin{lemma}
\label{lemma9} If $\displaystyle\int_0^{2\pi}a(\theta)(a(\theta)+a^{\prime \prime }(\theta))\log k(\theta,t) \ d\theta$ is bounded on $[0,T)$, then for any $\delta > 0$ there exists a constant $C$ such that if $k(\theta,t) > C$ on an interval $J$ (varying the parameter $\theta$) then we have necessarily $|J| \leq \delta$.
\end{lemma}

\begin{proof}
Fix $\delta >0$ and take $[b,c]\subseteq \lbrack 0,2\pi ]$ with length greater than $\delta $. Suppose that $k(\theta ,t)>C$ on $[b,c]$ for some $t$.
Remembering that $k_{\mathrm{MIN}}(0)$ is a lower bound for $k(\theta ,t)$ we have
\begin{multline*}
\int_{0}^{2\pi }a(a+a^{\prime \prime })\log k(\theta ,t)d\theta = \\
=\int_{0}^{b}a(a+a^{\prime \prime })\log k(\theta ,t)d\theta +\int_{b}^{c}a(a+a^{\prime \prime })\log k(\theta ,t)d\theta +\int_{c}^{2\pi }a(a+a^{\prime \prime })\log k(\theta ,t)d\theta \geq \\
\geq \log \left( k_{\mathrm{MIN}}(0)\right) \int_{0}^{b}a(a+a^{\prime \prime })\ d\theta +\delta \log C\int_{0}^{2\pi }a(a+a^{\prime \prime })d\theta +\log \left( k_{\mathrm{MIN}}(0)\right) \int_{c}^{2\pi }a(a+a^{\prime \prime })\ d\theta
\end{multline*}
If $k_{\mathrm{MIN}}(0)\leq 1$, then
\begin{multline*}
\int_{0}^{2\pi }a(a+a^{\prime \prime })\log k(\theta ,t)d\theta \geq \left[ \delta \log C+(2\pi +b-c)\log \left( k_{\mathrm{MIN}}(0)\right) \right] \max_{[0,2\pi ]}a(a+a^{\prime \prime })\geq \\
\geq \left[ \delta \log C+(2\pi -\delta )\log \left( k_{\mathrm{MIN}}(0)\right) \right] \max_{[0,2\pi ]}a(a+a^{\prime \prime })
\end{multline*}
Otherwise we have
\begin{multline*}
\int_{0}^{2\pi }a(a+a^{\prime \prime })\log k(\theta ,t)d\theta \geq \\
\geq \delta \log C\int_{0}^{2\pi }a(a+a^{\prime \prime })d\theta +(2\pi +b-c)\log \left( k_{\mathrm{MIN}}(0)\right) \min_{[0,2\pi ]}a(a+a^{\prime \prime })\geq \\
\geq \left( \delta \log C\right) \max_{[0,2\pi ]}a(a+a^{\prime \prime })
\end{multline*}
In both cases we reach a contradiction when $C$ is sufficiently large, since the left-hand side is bounded on $[0,T)$. This proves the result.
\end{proof}

\begin{lemma}
\label{lemma10} The function $t\mapsto \displaystyle\int_{0}^{2\pi }\left( a(\theta )k(\theta ,t)\right) ^{2}-\left( \frac{\partial }{\partial \theta }(a(\theta )k(\theta ,t))\right) ^{2}d\theta $ is nondecreasing. In particular, one can find a constant $N\geq 0$ such that the inequality
\begin{equation*}
\int_{0}^{2\pi }\left( \frac{\partial (ak)}{\partial \theta }\right) ^{2}d\theta \leq \int_{0}^{2\pi }(ak)^{2}d\theta +N
\end{equation*}
holds on $[0,T)$.
\end{lemma}

\begin{proof}
We compute
\begin{multline*}
\frac{d}{dt}\left( \int_{0}^{2\pi }\left( a(\theta )k(\theta ,t)\right) ^{2}-\left( \frac{\partial }{\partial \theta }(a(\theta )k(\theta ,t))\right) ^{2}d\theta \right) =\int_{0}^{2\pi }2a^{2}k\frac{\partial k}{\partial t}-2\frac{\partial (ak)}{\partial \theta }\frac{\partial ^{2}(ak)}{\partial \theta \partial t}\ d\theta = \\
=\int_{0}^{2\pi }2a^{2}k\frac{\partial k}{\partial t}+2a\frac{\partial ^{2}(ak)}{\partial \theta ^{2}}\frac{\partial k}{\partial t}d\theta =\int_{0}^{2\pi }2a\frac{\partial k}{\partial t}\left( ak+\frac{\partial ^{2}(ak)}{\partial \theta ^{2}}\right) d\theta = \\
=\int_{0}^{2\pi }2a\frac{\partial k}{\partial t}\left( ak+a^{\prime \prime }k+2a^{\prime }\frac{\partial k}{\partial \theta }+a\frac{\partial ^{2}k}{\partial \theta ^{2}}\right) \ d\theta =2\int_{0}^{2\pi }\frac{a(a+a^{\prime \prime })}{k^{2}}\left( \frac{\partial k}{\partial t}\right) ^{2}d\theta \geq 0,
\end{multline*}
where we used integration by parts and the evolution equation; this proves the first claim. To find $N\geq 0$ with the desired property it is enough to take any number greater than the absolute value of $\displaystyle\int_{0}^{2\pi }(ak)^{2}-\left( \frac{\partial (ak)}{\partial \theta }\right) ^{2}d\theta $ at $t=0$.
\end{proof}

\begin{prop}
\label{prop4} If $\displaystyle\int_0^{2\pi}a(\theta)(a(\theta)+a^{\prime \prime }(\theta))\log k(\theta,t) \ d\theta$ is bounded on $[0,T)$, then $k(\theta,t)$ has an upper bound on $S^1\times [0,T)$.
\end{prop}

\begin{proof}
We shall find an upper bound for the function $t\mapsto k_{\mathrm{MAX}}(t)$.
Fix $t\in \lbrack 0,T)$ and let $\theta _{0}\in \lbrack 0,2\pi ]$ be such that $k(\theta _{0},t)=k_{\mathrm{MAX}}(t)$. Denote $\min_{[0,2\pi ]}a=a_{0}$ and $\max_{[0,2\pi ]}a=a_{1}$, choose $0<\delta <\displaystyle\frac{a_{0}^{2}}{2\pi a_{1}^{2}}$ and let $C$ be as in Lemma \ref{lemma9}. Therefore, we can take $b\in \lbrack 0,2\pi ]$ such that $k(b,t)\leq C$ and $0<|b-\theta _{0}|\leq \delta $. Changing the parameter if necessary we can assume $b<\theta _{0}$. Moreover, let $N>0$ be as in Lemma \ref{lemma10}. Using H\"{o}lder's inequality we calculate
\begin{multline*}
k_{\mathrm{MAX}}(t)=\frac{1}{a(\theta _{0})}a(\theta _{0})k(\theta _{0},t)=\frac{1}{a(\theta _{0})}a(b)k(b,t)+\frac{1}{a(\theta _{0})}\int_{b}^{\theta _{0}}\frac{\partial (ak)}{\partial \theta }d\theta \leq \\
\leq \frac{Ca(b)}{a(\theta _{0})}+\frac{\sqrt{\delta }}{a(\theta _{0})}\left( \int_{b}^{\theta _{0}}\left( \frac{\partial (ak)}{\partial \theta }\right) ^{2}d\theta \right) ^{1/2}\leq \frac{Ca(b)}{a(\theta _{0})}+\frac{\sqrt{\delta }}{a(\theta _{0})}\left( \int_{0}^{2\pi }(ak)^{2}d\theta +N\right) ^{1/2}\leq \\
\leq \frac{Ca(b)}{a(\theta _{0})}+\frac{\sqrt{\delta }}{a(\theta _{0})}\sqrt{2\pi }a_{1}k_{\mathrm{MAX}}(t)+\frac{\sqrt{\delta N}}{a(\theta _{0})}\leq \frac{Ca_{1}}{a_{0}}+\frac{a_{1}\sqrt{2\pi \delta }}{a_{0}}k_{\mathrm{MAX}}(t)+\frac{\sqrt{\delta N}}{a_{0}}
\end{multline*}
Then we have
\begin{equation*}
k_{\mathrm{MAX}}(t)\left( 1-\frac{a_{1}}{a_{0}}\sqrt{2\pi \delta }\right) \leq \frac{Ca_{1}+\sqrt{\delta N}}{a_{0}}
\end{equation*}
and, finally, by the assumption on $\delta $,
\begin{equation*}
k_{\mathrm{MAX}}(t)\leq \frac{Ca_{1}+\sqrt{\delta N}}{a_{0}-a_{1}\sqrt{2\pi \delta }}.
\end{equation*}
Since the right-hand side does not depend on $t$, the proof is complete.
\end{proof}

Combining these lemmas and propositions yields immediately the following theorem:

\begin{teo}
\label{teo5} Let $A(t)$ denote the area enclosed by the curve $\theta \mapsto F(\theta ,t)$. If $A(t)$ admits a strictly positive lower bound on $[0,T)$, then $k(\theta ,t)$ is uniformly bounded on $S^{1}\times \lbrack 0,T)$.
\end{teo}

We now turn our attention to proving that the derivatives of $k$ remain bounded as long as $k$ is bounded.

\begin{prop}
\label{prop5} If $k$ is bounded on $S^1\times [0,T)$, then $\displaystyle\frac{\partial k}{\partial\theta}$ is also bounded on $S^1\times [0,T)$.
\end{prop}

\begin{proof}
Consider the function $f:S^{1}\times \lbrack 0,T)\rightarrow \mathbb{R}$ given by $f=\displaystyle a^{2}\left( \theta \right) \frac{\partial k}{\partial \theta }e^{ct}$, where $c$ is to be chosen later. After some calculations we see that $f$ is a solution of the second-order parabolic equation
\begin{equation*}
\frac{\partial f}{\partial t}=\left( c+3k^{2}\right) f-k^{2}\frac{2a^{\prime }}{a+a^{\prime \prime }}\frac{\partial f}{\partial \theta }+\frac{\partial }{\partial \theta }\left( k^{2}\frac{a}{a+a^{\prime \prime }}\frac{\partial f}{\partial \theta }\right)
\end{equation*}
Now, taking $c\leq -3\max_{S^{1}\times \lbrack 0,T)}k^{2}$ we can bound $f$ using the maximum principle. It follows that $\displaystyle\frac{\partial k}{\partial \theta }$ is also bounded for finite time.
\end{proof}

To prove that the second spatial derivative is bounded we follow, again, the method used in \textbf{\cite{gage3}}.
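It may be worth pointing out (this remark is ours and is not used in the estimates below) that in the Euclidean case $a\equiv 1$ the coefficients in (\ref{eqn:17}) reduce to $\frac{a}{a+a^{\prime \prime }}=1$ and $\frac{2a^{\prime }}{a+a^{\prime \prime }}=0$, so that the evolution equation becomes
\begin{equation*}
\frac{\partial k}{\partial t}=k^{2}\frac{\partial ^{2}k}{\partial \theta ^{2}}+k^{3},
\end{equation*}
the curvature equation of the classical Euclidean curve shortening flow. The bounds of this section can thus be read as minkowskian counterparts of the corresponding Euclidean estimates.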
\begin{lemma} \label{lemma11} Define the function $\xi :[0,T)\rightarrow \mathbb{R}$ by \begin{equation*} \xi (t)=\int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{\partial \theta ^{2}} \right) ^{4}d\theta \end{equation*} If $k$ is bounded in $S^{1}\times \lbrack 0,T)$ then the function $\xi $ is also bounded in $[0,T)$. \end{lemma} \begin{proof} Let us denote for simplicity $F=\displaystyle\frac{a}{a+a^{\prime \prime }}$ and $G=\displaystyle\frac{2a^{\prime }}{a+a^{\prime \prime }}$. Using integration by parts and the evolution equation we compute \begin{align*} \frac{d\xi }{dt}& =\frac{d}{dt}\left( \int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{4}d\theta \right) =4\int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{3}\frac{ \partial }{\partial t}\left( \frac{\partial ^{2}k}{\partial \theta ^{2}} \right) d\theta = \\ & =-12\int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{\partial \theta ^{2}} \right) ^{2}\frac{\partial ^{3}k}{\partial \theta ^{3}}\frac{\partial }{ \partial \theta }\left( k^{2}F\frac{\partial ^{2}k}{\partial \theta ^{2}} +k^{2}G\frac{\partial k}{\partial \theta }+k^{3}\right) d\theta = \\ & =12\int_{0}^{2\pi }k\frac{\partial k}{\partial \theta }\left( \frac{ \partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\frac{\partial ^{3}k}{ \partial \theta ^{3}}\left( -2F\frac{\partial ^{2}k}{\partial \theta ^{2}}-k \frac{\partial G}{\partial \theta }-3k\right) -\frac{\partial F}{\partial \theta }k^{2}\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{3} \frac{\partial ^{3}k}{\partial \theta ^{3}}- \\ & -2Gk\left( \frac{\partial k}{\partial \theta }\right) ^{2}\left( \frac{ \partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\frac{\partial ^{3}k}{ \partial \theta ^{3}}-Gk^{2}\left( \frac{\partial ^{2}k}{\partial \theta ^{2} }\right) ^{3}\frac{\partial ^{3}k}{\partial \theta ^{3}}-Fk^{2}\left( \frac{ \partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\left( \frac{\partial ^{3}k }{\partial \theta ^{3}}\right) ^{2}d\theta \end{align*} Now put $C_{1}=\min_{S^{1}}F$. We have \begin{equation*} -Fk^{2}\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\left( \frac{\partial ^{3}k}{\partial \theta ^{3}}\right) ^{2}\leq -C_{1}k^{2}\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\left( \frac{\partial ^{3}k}{\partial \theta ^{3}}\right) ^{2} \end{equation*} Notice that $C_{1}>0$ and remember that if $k$ is bounded then $\displaystyle \frac{\partial k}{\partial \theta }$ is also bounded. Choosing $\displaystyle \epsilon =\frac{16}{C_{1}}$ we can use the inequality $ab\leq \displaystyle \frac{4}{\epsilon }a^{2}+\epsilon b^{2}$ to estimate the other terms of the integral as follows: \noindent $\mathrm{1.}\ \displaystyle\left( k\frac{\partial ^{2}k}{\partial \theta ^{2}}\frac{\partial ^{3}k}{\partial \theta ^{3}}\right) \left[ \left( -2F\frac{\partial ^{2}k}{\partial \theta ^{2}}-k\frac{\partial G}{\partial \theta }-3k\right) \frac{\partial k}{\partial \theta }\frac{\partial ^{2}k}{ \partial \theta ^{2}}\right] \leq \frac{4}{\epsilon }\left( k\frac{\partial ^{2}k}{\partial \theta ^{2}}\frac{\partial ^{3}k}{\partial \theta ^{3}} \right) ^{2}+C_{2}\left( \left( \frac{\partial ^{2}k}{\partial \theta ^{2}} \right) ^{4}+\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\right) ,$ where $C_{2}=\epsilon \displaystyle\max_{S^{1}\times \lbrack 0,T)}\left( \frac{\partial k}{\partial \theta }\right) ^{2}\left( 12F^{2}+3k^{2}\left( \frac{\partial G}{\partial \theta }\right) ^{2}+27k^{2}\right) $. 
Here we also used the inequality\newline $\left( a+b+c\right) ^{2}\leq 3\left( a^{2}+b^{2}+c^{2}\right) $. \noindent $\mathrm{2.}\ \displaystyle\left( k\frac{\partial ^{2}k}{\partial \theta ^{2}}\frac{\partial ^{3}k}{\partial \theta ^{3}}\right) \left[ -\frac{ \partial F}{\partial \theta }k\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\right] \leq \frac{4}{\epsilon }\left( k\frac{\partial ^{2}k }{\partial \theta ^{2}}\frac{\partial ^{3}k}{\partial \theta ^{3}}\right) ^{2}+C_{3}\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{4},$ where $C_{3}=\epsilon \displaystyle\max_{S^{1}}\left( \frac{\partial F}{ \partial \theta }\right) ^{2}\max_{S^{1}\times \lbrack 0,T)}k^{2}.$ \noindent $\mathrm{3.}\ \displaystyle\left( k\frac{\partial ^{2}k}{\partial \theta ^{2}}\frac{\partial ^{3}k}{\partial \theta ^{3}}\right) \left[ -2G\left( \frac{\partial k}{\partial \theta }\right) ^{2}\frac{\partial ^{2}k }{\partial \theta ^{2}}\right] \leq \frac{4}{\epsilon }\left( k\frac{ \partial ^{2}k}{\partial \theta ^{2}}\frac{\partial ^{3}k}{\partial \theta ^{3}}\right) ^{2}+C_{4}\left( \frac{\partial ^{2}k}{\partial \theta ^{2}} \right) ^{2},$ where $C_{4}=4\epsilon \displaystyle\max_{S^{1}\times \lbrack 0,T)}\left( \frac{\partial k}{\partial \theta }\right) ^{4}\max_{S^{1}}G^{2}. $ \noindent $\mathrm{4.}\ \displaystyle\left( k\frac{\partial ^{2}k}{\partial \theta ^{2}}\frac{\partial ^{3}k}{\partial \theta ^{3}}\right) \left[ -Gk\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}\right] \leq \frac{4}{\epsilon }\left( k\frac{\partial ^{2}k}{\partial \theta ^{2}} \frac{\partial ^{3}k}{\partial \theta ^{3}}\right) ^{2}+C_{5}\left( \frac{ \partial ^{2}k}{\partial \theta ^{2}}\right) ^{4},$ where $C_{5}=\epsilon \displaystyle\max_{S^{1}}G^{2}\max_{S^{1}\times \lbrack 0,T)}k^{2}.$ Combining these estimates and writing $C_{6}=12(C_{2}+C_{3}+C_{5})$ and $ C_{7}=12(C_{2}+C_{4})$ we have \begin{equation*} \frac{d\xi }{dt}\leq C_{6}\int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{ \partial \theta ^{2}}\right) ^{4}d\theta +C_{7}\int_{0}^{2\pi }\left( \frac{ \partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}d\theta \end{equation*} Holder's inequality and the inequality $\sqrt{A}\leq \displaystyle\frac{A+1}{ 2}$ (if $A\geq 0$) yield \begin{align*} \frac{d\xi }{dt}& \leq C_{6}\int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{ \partial \theta ^{2}}\right) ^{4}d\theta +C_{7}\sqrt{2\pi }\left( \int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{4}d\theta \right) ^{1/2}= \\ & =C_{6}\xi +C_{7}\sqrt{2\pi }\sqrt{\xi }\leq C_{6}\xi +C_{7}\sqrt{2\pi } \left( \frac{\xi +1}{2}\right) =C_{8}\xi +C_{9}, \end{align*} with $C_{8}=C_{6}+\displaystyle\frac{C_{7}\sqrt{2\pi }}{2}$ and $C_{9}= \displaystyle\frac{C_{7}\sqrt{2\pi }}{2}$. By the Gronwall's inequality we have immediately that $\xi $ is bounded for finite time. This completes the proof. \end{proof} \begin{coro} \label{coro1} The function $t \mapsto \displaystyle\int_0^{2\pi}\left(\frac{ \partial^2k}{\partial\theta^2}\right)^2d\theta$ is bounded in $[0,T)$. \end{coro} \begin{proof} This is immediate by the Holder's inequality. \end{proof} \begin{lemma} \label{lemma12} The function $\beta :[0,T)\rightarrow \mathbb{R}$ given by \begin{equation*} \beta (t)=\int_{0}^{2\pi }\left( \frac{\partial ^{3}k}{\partial \theta ^{3}} \right) ^{2}d\theta \end{equation*} is bounded provided $k$ is bounded in $S^{1}\times \lbrack 0,T)$. 
\end{lemma} \begin{proof} Adopting the same notation as in Lemma \ref{lemma11} and using, again, integration by parts and the evolution equation we have the formula \begin{align*} & \frac{d\beta}{dt} = -2\int_{0}^{2\pi }\frac{\partial ^{4}k}{\partial \theta ^{4}}\left( \frac{ \partial ^{2}F}{\partial \theta ^{2}}k^{2}\frac{\partial ^{2}k}{\partial \theta ^{2}}+2\frac{\partial F}{\partial \theta }k\frac{\partial k}{\partial \theta }\frac{\partial ^{2}k}{\partial \theta ^{2}}+\frac{\partial F}{ \partial \theta }k^{2}\frac{\partial ^{3}k}{\partial \theta ^{3}}+2\frac{ \partial F}{\partial \theta }k\frac{\partial k}{\partial \theta }\frac{ \partial ^{2}k}{\partial \theta ^{2}}+\right. \\ & \left. +2F\left( \frac{\partial k}{\partial \theta }\right) ^{2}\frac{ \partial ^{2}k}{\partial \theta ^{2}}+2Fk\left( \frac{\partial ^{2}k}{ \partial \theta ^{2}}\right) ^{2}+2Fk\frac{\partial k}{\partial \theta } \frac{\partial ^{3}k}{\partial \theta ^{3}}+\frac{\partial F}{\partial \theta }k^{2}\frac{\partial ^{3}k}{\partial \theta ^{3}}+2Fk\frac{\partial k }{\partial \theta }\frac{\partial ^{3}k}{\partial \theta ^{3}}+\right. \\ & \left. +Fk^{2}\frac{\partial ^{4}k}{\partial \theta ^{4}}+\frac{\partial ^{2}G}{\partial \theta ^{2}}k^{2}\frac{\partial k}{\partial \theta }+2k\frac{ \partial G}{\partial \theta }\left( \frac{\partial k}{\partial \theta } \right) ^{2}+\frac{\partial G}{\partial \theta }k^{2}\frac{\partial ^{2}k}{ \partial \theta ^{2}}+2\frac{\partial G}{\partial \theta }k\left( \frac{ \partial k}{\partial \theta }\right) ^{2}+\right. \\ & \left. +2G\left( \frac{\partial k}{\partial \theta }\right) ^{3}+4Gk\frac{ \partial k}{\partial \theta }\frac{\partial ^{2}k}{\partial \theta ^{2}}+ \frac{\partial G}{\partial \theta }k^{2}\frac{\partial ^{2}k}{\partial \theta ^{2}}+2Gk\frac{\partial k}{\partial \theta }\frac{\partial ^{2}k}{ \partial \theta ^{2}}+Gk^{2}\frac{\partial ^{3}k}{\partial \theta ^{3}} +\right. \\ & \left. +6k\left( \frac{\partial k}{\partial \theta }\right) ^{2}+3k^{2} \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) d\theta \end{align*} By the same trick used in Lemma \ref{lemma11} we can transform away the fourth derivative. Using bounds for $k$, $\displaystyle\frac{\partial k}{ \partial \theta }$, $\displaystyle\int_{0}^{2\pi }\left( \frac{\partial ^{2}k }{\partial \theta ^{2}}\right) ^{4}d\theta $ and $\displaystyle \int_{0}^{2\pi }\left( \frac{\partial ^{2}k}{\partial \theta ^{2}}\right) ^{2}d\theta $ and the fact that $k$ is bounded away from zero in $ S^{1}\times \lbrack 0,T)$ by $k_{\mathrm{MIN}}(0)$ we have \begin{equation*} \frac{d\beta }{dt}\leq C_{1}\beta +C_{2} \end{equation*} where $C_{1}$ and $C_{2}$ are constants that don't depend on $t$. Now, Gronwall's inequality gives that $\beta $ is bounded in $[0,T)$. \end{proof} \begin{prop} \label{prop6} If $k$ is bounded in $S^1\times [0,T)$ then $k^{\prime \prime } $ is also bounded in $S^1\times [0,T)$. \end{prop} \begin{proof} We use the Poincar\'{e} inequality: if $u\in C^{1}([0,2\pi ])$ then \begin{equation*} \max_{\lbrack 0,2\pi ]}|u|\leq \frac{1}{2\pi }\int_{0}^{2\pi }u+\int_{0}^{2\pi }|u^{\prime }| \end{equation*} Fix $t\in \lbrack 0,T)$. 
Then
\begin{equation*}
\max_{\lbrack 0,2\pi ]}k^{\prime \prime 2}\leq \frac{1}{2\pi }\int_{0}^{2\pi }k^{\prime \prime 2}d\theta +2\int_{0}^{2\pi }|k^{\prime \prime }(\theta ,t)k^{\prime \prime \prime }(\theta ,t)|\ d\theta
\end{equation*}
Using Schwarz's inequality on the last integral and the previous lemmas we conclude that $\displaystyle\max_{[0,2\pi ]}k^{\prime \prime 2}$ is bounded by a constant that does not depend on $t$. This completes the proof.
\end{proof}

\begin{prop}
\label{prop7} If $k$ is bounded then all the spatial derivatives of $k$ are also bounded.
\end{prop}

\begin{proof}
We have already proved that the first two derivatives are bounded. Using this, we will prove that $\displaystyle\frac{\partial ^{3}k}{\partial \theta ^{3}}$ is bounded. Consider the function $u:S^{1}\times \lbrack 0,T)\rightarrow \mathbb{R}$ given by $u=e^{ct}\displaystyle\frac{\partial ^{3}k}{\partial \theta ^{3}}$. This function is a solution to a linear parabolic second-order equation of the type
\begin{equation*}
\frac{\partial u}{\partial t}=Fk^{2}\frac{\partial ^{2}u}{\partial \theta ^{2}}+P\frac{\partial u}{\partial \theta }+(c+Q)u+R,
\end{equation*}
where $P,Q$ and $R$ are polynomials whose variables are the functions $k$, $F$, $G$ and their derivatives. Since the derivatives of $k$ appear only up to second order in the terms $Q$ and $R$, one can use the maximum principle, for a suitable $c$, to show that $u$ is bounded. It follows that $\displaystyle\frac{\partial ^{3}k}{\partial \theta ^{3}}$ is bounded for finite time. For the higher derivatives the argument is essentially the same.
\end{proof}

\begin{coro}
\label{coro2} If $k$ is bounded then its time derivatives of all orders are also bounded.
\end{coro}

\begin{proof}
All the time derivatives depend polynomially on the spatial derivatives. Hence, uniform bounds on the spatial derivatives yield uniform bounds on the time derivatives.
\end{proof}

\begin{teo}
\label{teo6} The solution to the minkowskian curvature evolution PDE continues until the area converges to zero.
\end{teo}

\begin{proof}
We have just proved that if $\displaystyle\lim_{t\rightarrow T}A(t)>0$ then $k$ and all of its derivatives remain bounded. By the Arzel\`{a}--Ascoli theorem, $k$ has a limit as $t$ goes to $T$ which is $C^{\infty }$. This shows that as long as the area remains bounded away from zero we can extend the solution; hence the solution exists until the area goes to $0$.
\end{proof}

\end{document}
\begin{document}

\title{Ideals of equations for elements in a free group and Stallings folding}

\begin{abstract}
Let $F$ be a finitely generated free group and let $H\le F$ be a finitely generated subgroup. Given an element $g\in F$, we study the ideal $\mathfrak I_g$ of equations for $g$ with coefficients in $H$, i.e. the elements $w(x)\in H*\gen{x}$ such that $w(g)=1$ in $F$. The ideal $\mathfrak I_g$ is a normal subgroup of $H*\gen{x}$, and we provide an algorithm, based on Stallings folding operations, to compute a finite set of generators for $\mathfrak I_g$ as a normal subgroup. We provide an algorithm to find an equation in $\mathfrak I_g$ with minimum degree, i.e. an equation $w(x)$ such that its cyclic reduction contains the minimum possible number of occurrences of $x$ and $x^{-1}$; this answers a question of A. Rosenmann and E. Ventura. More generally, we provide an algorithm that, given $d\in\mathbb N$, determines whether $\mathfrak I_g$ contains equations of degree $d$ or not, and we give a characterization of the set of all the equations of that specific degree. We define the set $D_g$ of all integers $d$ such that $\mathfrak I_g$ contains equations of degree $d$; we show that $D_g$ coincides, up to a finite set, either with the set of non-negative even numbers or with the set of natural numbers. Finally, we provide examples to illustrate the techniques introduced in this paper. We discuss the case where $\rank{H}=1$. We prove that both kinds of sets $D_g$ can actually occur. The examples also show that the equations of minimum possible degree are not in general enough to generate the whole ideal $\mathfrak I_g$ as a normal subgroup.
\end{abstract}

\begin{center}
\small \textit{Keywords:} Equations over Groups, Free Groups\\
\small \textit{2010 Mathematics subject classification:} 20F70, 20E05 (20F65)
\end{center}

\section{Introduction}

Given an extension of fields $K\subseteq F$ and an element $\alpha\in F$, a first interesting question is to determine whether the element $\alpha$ is algebraic over $K$, i.e. whether it satisfies some non-trivial equation with coefficients in $K$. In other words, we want to determine whether there exists a non-trivial polynomial $p(x)\in K[x]$ such that $p(\alpha)=0$. If the answer is affirmative, one tries to study the ideal $I_\alpha\subseteq K[x]$ of equations for $\alpha$ over $K$: this turns out to be a principal ideal, and thus its structure is very simple. Completely analogous questions can be asked in the context of group theory, but the answers turn out to be more complicated.

Let $F_n$ be a free group generated by $n$ elements $a_1,...,a_n$. Let $H\le F_n$ be a finitely generated subgroup and consider an infinite cyclic group $\gen{x}\cong\mathbb Z$. An \textbf{equation} in $x$ with coefficients in $H$ is an element $w\in H*\gen{x}$ of the free product of $H$ and $\gen{x}$; $w$ has a unique expression as a reduced word in the alphabet $\{x,x^{-1}\}\cup H\setminus\{1\}$. For an equation $w\in H*\gen{x}$ we define the \textbf{degree} of $w$ as the number of occurrences of $x$ and $x^{-1}$ in the cyclic reduction of $w$. For an element $g\in F_n$, consider the map $\varphi_g:H*\gen{x}\rightarrow F_n$ that is the inclusion on $H$, and that sends $x$ to $g$; this is the ``evaluation at $g$'' map. We say that $g$ is a \textbf{solution} for the equation $w$ if $\varphi_g(w)=1$. We define the \textbf{ideal} $\mathfrak I_g$ to be the normal subgroup $\mathfrak I_g=\ker\varphi_g$ of $H*\gen{x}$.
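To fix the terminology, here is a toy example of ours (not taken from the literature). Let $F_2=\gen{a_1,a_2}$, let $H=\gen{a_1^2}$ and let $g=a_1^3$. The equation
\begin{equation*}
w(x)=x^{2}a_1^{-6}\in H*\gen{x}
\end{equation*}
satisfies $\varphi_g(w)=a_1^{6}a_1^{-6}=1$, so $w\in\mathfrak I_g$, and $w$ has degree $2$. On the other hand, no equation of degree $1$ lies in $\mathfrak I_g$: up to conjugation such an equation has the form $a_1^{2m}x^{\pm1}$ with $m\in\mathbb Z$, and $\varphi_g(a_1^{2m}x^{\pm1})=a_1^{2m\pm3}\neq1$. By contrast, if $H=\gen{a_1}$ and $g=a_2a_1a_2^{-1}$, then $\gen{H,g}$ is free of rank $2$, and it will follow from the results recalled below that $\mathfrak I_g$ is trivial.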
Fix a finitely generated subgroup $H\le F_n$ and an element $g\in F_n$. First of all, we would like to determine whether $\mathfrak I_g$ is trivial or not, i.e. whether $g$ satisfies some non-trivial equation over $H$ or not. This has been answered recently by A. Rosenmann and E. Ventura in \cite{Rosenmann1}, and in particular they obtained the following result: \begin{mydef} Let $H\le F_n$ be a finitely generated subgroup and let $g\in F_n$ be any element. We say that $g$ \textbf{depends} on $H$ if any of the following equivalent conditions hold: (i) The ideal $\mathfrak I_g$ is non-trivial. (ii) We have $\rank{\gen{H,g}}\le\rank{H}$. \end{mydef} \begin{mythm} Let $H\le F_n$ be a finitely generated subgroup. Then there is an algorithm that computes a finite set of elements $g_1,...,g_k\in F_n$ such that, for every $g\in F_n$, the following are equivalent: (i) The element $g$ depends on $H$. (ii) The element $g$ belongs to one of the double cosets $Hg_1H,...,Hg_kH$. \end{mythm} For the whole paper, when an algorithm takes in input a finitely generated subgroup $H\le F_n$, we mean that the subgroup is given by means of a finite set of generators, each provided as a word in the basis $a_1,...,a_n$ of $F_n$. \ In this paper, we study the structure of the ideal $\mathfrak I_g$ of the equations with coefficients in $H$ and with $g$ as a solution. Most of our results can be generalized to equations in more variables, but for simplicity of notation we deal only with the one-variable case; the statements of the results in more variables can be found in Section \ref{SectionMultivariate} at the end of the paper. \ For an arbitrary homomorphism from a finitely generated free group to a finitely presented group, the kernel is always finitely generated as a normal subgroup. If the target is free, then it follows from Grushko's Theorem that there is an algorithm to find a finite normal generating set. In Section \ref{SectionStallings} we describe an efficient algorithm, with focus on the case of the map $\varphi_g:H*\gen{x}\rightarrow F_n$ defined above, whose kernel is exactly the ideal $\ker\varphi_g=\mathfrak I_g$. The key idea, based on Stallings folding operations, is the following. In a chain of folding operations, the rank-preserving folding operations are homotopy equivalences, and thus isomorphisms at the level of fundamental group, while the non-rank-preserving folding operations give a non-injective map of fundamental groups (that means, in our case, adding generators to the kernel). The novel aspect of our algorithm, which is explained in section \ref{SectionStallings}, is the following: we show that the non-rank-preserving folding operations can be postponed until the end of the chain of folding operations (see Figure \ref{gmgl}): this gives a clean and efficient way to produce a set of generators for the kernel as a normal subgroup of $H*\gen{x}$. \ In \cite{Rosenmann1} Rosenmann and Ventura ask the following question: \begin{myquest} Is it possible to (algorithmically) find an equation of minimum degree for an element $g$ that depends on $H$? \end{myquest} In Section \ref{SectionMain} we give an affirmative answer to this question. \begin{introthm}[See Corollary \ref{algorithm}]\label{introthm1} There is an algorithm that, given $H\le F_n$ finitely generated and $g\in F_n$ that depends on $H$, produces a non-trivial equation $w\in\mathfrak I_g$ of minimum possible degree. \end{introthm} We give a brief outline of the proof of Theorem \ref{introthm1}. 
We take a non-trivial (cyclically reduced) equation $w\in\mathfrak I_g$ of minimum possible degree; we think of $w$ as a word in the letters $a_1,...,a_n,x$ and their inverses (where $a_1,...,a_n$ is a basis for $F_n$). We then prove that, for words of sufficient length, some parts of the word $w$ can be literally cut away, by means of a move that we call a ``parallel cancellation move'', introduced in Lemma \ref{parallelcancellation}; this produces another equation $w'\in\mathfrak I_g$, which is strictly shorter than $w$ and which has the same degree. By iterating this process, we prove that there is an equation in $\mathfrak I_g$ of minimum possible degree whose length is bounded (see Theorem \ref{main} for the precise bound). With this bound established, the algorithm now just takes all the (finitely many) elements of $H*\gen{x}$ which are short enough, and for each of them it checks whether it belongs to $\mathfrak I_g$, recording its degree. A completely analogous result holds for equations of any fixed degree $d$: if $\mathfrak I_g$ contains a non-trivial equation of degree $d$, then it contains one whose length is bounded (see Theorem \ref{main3} for the precise bound). In particular we prove the following theorem: \begin{introthm}[See Corollary \ref{algorithm3}]\label{introthm2} There is an algorithm that, given $H\le F_n$ and $g\in F_n$ and an integer $d\ge1$, tells us whether $\mathfrak I_g$ contains non-trivial equations of degree $d$, and, if so, produces an equation $w\in\mathfrak I_g$ of degree $d$. \end{introthm} One of the interesting features of the algorithm is that the ``parallel cancellation moves'' of Lemma \ref{parallelcancellation} have inverses, namely the ``parallel insertion moves'' which we introduce in Lemma \ref{parallelinsertion}. This means that an arbitrary (cyclically reduced) equation of degree $d$ can be obtained from a short equation of degree $d$ by means of a finite number of insertion moves; this gives a characterization of all the equations of degree $d$ in terms of a finite number of short equations (see Theorem \ref{main4} for the details). \ Next, we study the set $D_g=\{d\in\mathbb N : \mathfrak I_g$ contains a non-trivial equation of degree $d\}$. We prove that $D_g$ coincides with either $\mathbb N$ or $2\mathbb N$ (the set of non-negative even numbers), up to a finite set. We provide an algorithm that, given $H$ and $g$, computes the set $D_g$. \begin{introthm}[See Theorem \ref{degreeset}]\label{introdegreeset} Exactly one of the following possibilities takes place: (i) $D_g$ contains an odd number and $\mathbb N\setminus D_g$ is finite. (ii) $D_g$ contains only even numbers and $2\mathbb N\setminus D_g$ is finite. \end{introthm} \begin{introthm}[See Theorem \ref{algorithm4}] Given $H\le F_n$ finitely generated and $g\in F_n$ that depends on $H$, there is an algorithm that: (a) Determines whether we fall into case (i) or (ii) of Theorem \ref{introdegreeset}. (b) Computes the finite set $\mathbb N\setminus D_g$ or $2\mathbb N\setminus D_g$ respectively. \end{introthm} \ In Section \ref{SectionExamples}, we make use of the tools developed in the rest of the paper in order to work out explicit computations in some specific cases; in each case we compute the minimum degree $d_{min}$ for an equation in $\mathfrak I_g$ and the set $D_g$ of possible degrees. In Example \ref{examplecyclic} we deal with the case where $\rank{H}=1$, showing that in this case $d_{min}$ is either $1$ or $2$.
Examples \ref{example46} and \ref{example23} show that both cases of Theorem \ref{introdegreeset} can occur. One may be tempted to conjecture that the equations of $\mathfrak I_g$ of minimum possible degree $d_{min}$ are enough to generate the ideal $\mathfrak I_g$; we give counterexamples to this (see Examples \ref{example23} and \ref{example234}), showing that the ideal $\mathfrak I_g$ is not always generated by just the equations of degree $d_{min}$. \ In a subsequent paper we will further investigate the properties of the ideal $\mathfrak I_g$. \section{Preliminaries and notations}\label{Preliminaries} With the word \textbf{graph} we mean a $1$-dimensional CW complex. We allow for multiple edges between the same pair of vertices, and we allow for edges from a vertex to itself. For a graph $G$ we denote with $V=V(G)$ the $0$-skeleton of $G$, and each point of $V$ is called \textit{vertex}; each connected component of $G\setminus V$ is called \textit{open edge} and its closure is called an \textit{edge}. A \textbf{combinatorial map} $f:G\rightarrow G'$ between graphs is a continuous map which sends each vertex of $G$ to a vertex of $G'$, and each open edge of $G$ homeomorphically onto an open edge of $G'$. For $l\ge1$ we define $I_l$ to be the graph obtained from a subdivision of the unit interval $[0,1]$ into $l$ arcs. More precisely, $I_l$ has $l+1$ vertices at $\frac{i}{l}$ for $i=0,...,l$, and $l$ edges given by closed intervals. For $l\ge1$ we define $C_l$ to be the graph obtained from a subdivision of the unit circle $\{(x,y) : x^2+y^2=1\}\subseteq\mathbb R^2$ into $l$ arcs. More precisely $C_l$ has $l$ vertices at the points $(\cos(\frac{2\pi i}{l}),\sin(\frac{2\pi i}{l}))$ for $i=0,...,l-1$, and $l$ edges given by closed arcs on the unit circle. Let $G$ be a graph. A \textbf{combinatorial path} in $G$ is a combinatorial map $\sigma:I_l\rightarrow G$ for some $l\ge1$; a \textbf{combinatorial loop} in $G$ is a combinatorial map $\sigma:C_l\rightarrow G$ for some $l\ge1$. We say a combinatorial path (resp. loop) is \textbf{reduced} if it is locally injective. The local injectivity has to be checked only at the vertices of $I_l$ or $C_l$: in the interior of the edges, every combinatorial path/loop is locally injective by definition. We say that a combinatorial path $\sigma:I_l\rightarrow G$ with $\sigma(0)=\sigma(1)$ is \textbf{cyclically reduced} if it is reduced when seen as a combinatorial loop. \begin{figure} \caption{The graph $I_5$.} \label{clil} \end{figure} We are going to need the notion of core graph and of pointed core graph. Whenever we consider a graph $G$ with a basepoint, we always mean the basepoint to be a vertex. \begin{mydef} Let $G$ be a connected graph which is not a tree. Define its \textbf{core graph} $\core{G}$ as the subgraph given by the union of (the images of) all the reduced loops. \end{mydef} \begin{myobs} Notice that $\core{G}$ is connected and every vertex has valence at least $2$. \end{myobs} \begin{mydef} Let $(G,*)$ be a connected pointed graph which is not a tree. Define its \textbf{pointed core graph} $\bcore{G}$ as the subgraph given by the union of (the images of) all the reduced paths from the basepoint to itself. \end{mydef} \begin{myobs} For a pointed graph $(G,*)$, there is a unique shortest path $\sigma$ (either trivial or embedded) connecting the basepoint to $\core{G}$; the graph $\bcore{G}$ consists exactly of the union $\core{G}\cup\im{\sigma}$. \end{myobs} We shall need to work explicitly with the following well-known construction. 
Let $T$ be a maximal tree contained in $G$, and let $E$ be the set of edges which are not contained in $T$; suppose we are given an orientation on each edge $e\in E$. For $e\in E$, there is a unique reduced path $\sigma_e$ in $G$ that starts at the basepoint, moves along $T$ to the initial vertex of $e$, crosses $e$ according to the orientation, and moves along $T$ from the final vertex of $e$ to the basepoint. \begin{myprop}\label{fundamentalgroupgraph} The fundamental group $\pi_1(G)$ is a free group with a basis given by the homotopy classes of the paths $\sigma_e$ for $e\in E$. \end{myprop} \subsection{Reduction of paths} Consider the graph $I_l$ and for an edge $s$ of $I_l$ denote with $\omeno{s},\opiu{s}$ the vertices of $s$. Since $I_l$ is a subdivision of the unit interval, we adopt the convention that $\omeno{s}<\opiu{s}$ as points of the unit interval. Let $G$ be a graph and $\sigma:I_l\rightarrow G$ be a combinatorial path. If $\sigma$ is not reduced, then we can find two consecutive edges $s,t$ of $I_l$ such that $\sigma$ sends $s,t$ to the same edge $e$ of $G$, but crossed with opposite orientations. Let's say we have $\opiu{s}=\omeno{t}$: we consider the interval $s\cup t=[\omeno{s},\opiu{t}]$ and we collapse it to a point. We obtain a graph isomorphic to $I_{l-2}$, and we can define a map $\sigma':I_{l-2}\rightarrow G$ which is equal to $\sigma$, except on the collapsed interval, where we set it to be equal to $\sigma(\omeno{s})=\sigma(\opiu{t})$. The map $\sigma':I_{l-2}\rightarrow G$ is a combinatorial path, and it is homotopic to $\sigma$ (relative to the endpoints). If the path $\sigma'$ is not yet reduced, then we can iterate the same process. This motivates the following definition: \begin{mydef}\label{reductionprocess} Let $G$ be a graph and let $\sigma:I_l\rightarrow G$ be a combinatorial path. A \textbf{reduction process} for $\sigma$ is a sequence $(s_1,t_1),...,(s_m,t_m)$ with the following properties: (i) $s_1,t_1,...,s_m,t_m$ are pairwise distinct edges of $I_l$. (ii) For every $k=1,...,m$ we have $\omeno{s_k}<\omeno{t_k}$. (iii) For every $k=1,...,m$, if we collapse each of $s_1,t_1,...,s_{k-1},t_{k-1}$ to a point, in the quotient graph the edges $s_k,t_k$ are adjacent. (iv) For every $k=1,...,m$ the map $\sigma$ sends $s_k,t_k$ to the same edge of $G$ crossed with opposite orientations. \end{mydef} Think of $(s_k,t_k)$ as the $k$-th cancellation to be performed on the path $\sigma$. Condition (iii) says that, after performing the first $k-1$ cancellations, the edges $s_k,t_k$ are adjacent, ready to be canceled against each other. Condition (iv) ensures that $\sigma$ sends $s_k,t_k$ to the same edge of $G$ but with opposite orientations, so that the cancellation actually makes sense. Condition (ii) is just a useful convention, saying that the edges $s_k,t_k$ appear in this order on the unit interval $I_l$. \begin{mylemma}\label{lemmino1} Let $G$ be a graph and let $\sigma:I_l\rightarrow G$ be a combinatorial path, together with a reduction process $(s_1,t_1),...,(s_m,t_m)$. Then for every $1\le \alpha<\beta\le m$ the edges $s_\alpha,t_\alpha,s_\beta,t_\beta$ appear on the interval in one of these orders: $s_\alpha,t_\alpha,s_\beta,t_\beta$ or $s_\beta,t_\beta,s_\alpha,t_\alpha$ or $s_\beta,s_\alpha,t_\alpha,t_\beta$. \end{mylemma} \begin{proof} When we collapse each of $s_1,t_1,...,s_{\alpha-1},t_{\alpha-1}$ to a point, we have that $s_\alpha$ and $t_\alpha$ become adjacent; this means that every edge of $I_l$ lying strictly between $s_\alpha$ and $t_\alpha$ belongs to $\{s_1,t_1,...,s_{\alpha-1},t_{\alpha-1}\}$. Since $\beta>\alpha$, neither $s_\beta$ nor $t_\beta$ can occur between $s_\alpha$ and $t_\alpha$.
The conclusion follows. \end{proof} Let $\sigma:I_l\rightarrow G$ be a path and let $(s_1,t_1),...,(s_m,t_m)$ be a reduction process for $\sigma$. If $2m<l$, then we can collapse each of the edges $s_1,t_1,...,s_m,t_m$ to a point in order to get a graph isomorphic to $I_{l-2m}$. We can define a continuous map $\sigma':I_{l-2m}\rightarrow G$ which is equal to $\sigma$ on the edges which are not collapsed in the process. The map $\sigma':I_{l-2m}\rightarrow G$ is a combinatorial path which is homotopic to $\sigma$ (relative to the endpoints), and it is called the \textbf{residual path} of the reduction process. \begin{myprop}\label{extensionofprocess} Let $\sigma:I_l\rightarrow G$ be a combinatorial path and let $(s_1,t_1),...,(s_m,t_m)$ be a reduction process for $\sigma$. Then exactly one of the following holds: (i) We have $2m=l$ and $\sigma(0)=\sigma(1)$ and $\sigma$ is nullhomotopic (relative to the endpoints). (ii) We have $2m<l$ and the residual path $\sigma':I_{l-2m}\rightarrow G$ is reduced. (iii) There is a couple $(s_{m+1},t_{m+1})$ such that $(s_1,t_1),...,(s_m,t_m),(s_{m+1},t_{m+1})$ is a reduction process for $\sigma$. \end{myprop} \begin{proof} Suppose that $2m<l$ and that the residual path $\sigma':I_{l-2m}\rightarrow G$ is not reduced. Then there are two adjacent edges $s',t'$ in $I_{l-2m}$ such that $\sigma'$ sends $s',t'$ to the same edge of $G$, crossed with opposite orientations; let's also assume $\omeno{s'}<\omeno{t'}$. The domain $I_{l-2m}$ of $\sigma'$ is a quotient of the domain $I_l$ of $\sigma$; thus we find unique edges $s_{m+1},t_{m+1}$ of $I_l$ such that the quotient sends $s_{m+1},t_{m+1}$ to $s',t'$ respectively; notice that $\omeno{s_{m+1}}<\omeno{t_{m+1}}$ and that $s_{m+1},t_{m+1}$ are distinct, and they are also distinct from $s_1,t_1,...,s_m,t_m$. From the definition of $\sigma'$, it is immediate to see that $\sigma$ sends $s_{m+1},t_{m+1}$ to the same edge of $G$, crossed with opposite orientations. It follows that $(s_1,t_1),...,(s_m,t_m),(s_{m+1},t_{m+1})$ is a reduction process for $\sigma$, as desired. \end{proof} The above proposition essentially says that a reduction process can be inductively extended, until we get a path which is either trivial or reduced. A reduction process $(s_1,t_1),...,(s_m,t_m)$ is called \textbf{maximal} if it can't be extended by adding a couple of edges $(s_{m+1},t_{m+1})$, i.e. if it falls into case (i) or (ii) of Proposition \ref{extensionofprocess}. Of course every path admits at least one maximal reduction process. Despite the maximal reduction process not being unique in general, it turns out that the residual path is unique, as shown in the following proposition: \begin{myprop}\label{reducedpath} Let $[\sigma]$ be a non-trivial homotopy class of paths $\sigma:[0,1]\rightarrow G$ (relative to their endpoints). Then the homotopy class contains a unique reduced path $\ol{\sigma}:I_r\rightarrow G$. Moreover, for every combinatorial path $\sigma':I_l\rightarrow G$ in the homotopy class $[\sigma]$ and for every maximal reduction process $(s_1,t_1),...,(s_m,t_m)$ for $\sigma'$, we have $l-2m=r$ and the residual path coincides with $\ol\sigma$. \end{myprop} \begin{proof} Let $p:\ot G\rightarrow G$ be the universal cover and choose a lifting $\tau:I_l\rightarrow\ot G$ of the combinatorial path $\sigma'$: we have that $\tau$ is a combinatorial path, connecting two vertices $v_0=\tau(0)$ and $v_1=\tau(1)$ of $\ot G$, which are distinct because the homotopy class $[\sigma]$ is non-trivial.
Let $\ol\sigma:I_r\rightarrow G$ be any reduced path in the homotopy class of $\sigma$: then there is a unique lifting $\ol\tau:I_r\rightarrow\ot G$ with $\ol\tau(0)=v_0$ and $\ol\tau(1)=v_1$, and this is a reduced path. But since $\ot G$ is a tree, there is a unique reduced path connecting $v_0$ and $v_1$. This means that $\ol\tau$ is uniquely determined by the homotopy class, and thus $\ol\sigma=p\circ\ol\tau$ is uniquely determined too. The conclusion follows. \end{proof} The following graphical representation of a reduction process will be useful. Let $\sigma:I_l\rightarrow G$ be a combinatorial path and let $(s_1,t_1),...,(s_m,t_m)$ be a reduction process for $\sigma$. Consider $I_l$ as a subdivision of the unit interval $[0,1]\times\{0\}\subseteq\mathbb R^2$. For each couple $(s_i,t_i)$, take a smooth path $r_i$ in the upper half-plane connecting the midpoint of $s_i$ to the midpoint of $t_i$. The paths $r_1,...,r_m$ can be taken to be pairwise disjoint, as in figure \ref{diagramreductionprocess}. \begin{figure} \caption{An example of a possible diagram for a reduction process.} \label{diagramreductionprocess} \end{figure} \subsection{Labeled graphs} We consider the finitely generated free group $F_n$ of rank $n$, generated by $a_1,...,a_n$. We write $\ol{a_i}=a_i^{-1}$. We denote with $R_n$ the standard $n$-rose, i.e. the graph with one vertex $*$ and $n$ oriented edges labeled $a_1,...,a_n$. The fundamental group $\pi_1(R_n,*)$ will be identified with $F_n$: the path going along the edge labeled $a_i$ (with the right orientation) corresponds to the element $a_i\in F_n$. \begin{mydef}\label{grafo} An \textbf{$F_n\text{-labeled graph}$} is a graph $G$ together with a map $f:G\rightarrow R_n$ sending each vertex of $G$ to the unique vertex of $R_n$, and each open edge of $G$ homeomorphically to one edge of $R_n$. \end{mydef} This means that every edge of $G$ is equipped with a label in $\{a_1,...,a_n\}$ and an orientation, according to which edge of $R_n$ it is mapped to; the map $f:G\rightarrow R_n$ is called \textbf{labeling map} for $G$. \begin{mydef} Let $G_0,G_1$ be $F_n\text{-labeled graph}s$ with labeling maps $f_0,f_1$ respectively. A map $h:G_0\rightarrow G_1$ is called \textbf{label-preserving} if $f_1\circ h=f_0$. \end{mydef} This means that the map $h$ sends each vertex to a vertex, and each open edge homeomorphically onto an edge with the same label and orientation. In particular $h$ is a combinatorial map. \subsection{Core graph of a subgroup} From the theory of covering spaces, we know that pointed covering spaces of $R_n$ are in bijection with subgroups of the fundamental group $\pi_1(R_n,*)=F_n$. Given a pointed covering space $p:(P,*)\rightarrow(R_n,*)$, we have that the map $p_*:\pi_1(P,*)\rightarrow F_n$ is injective, and thus $\pi_1(P,*)$ can be identified with its image $p_*(\pi_1(P,*))=H$, determining a subgroup $H\le F_n$. Conversely, given a subgroup $H\le F_n$, there is a unique pointed covering space $p:(P,*)\rightarrow(R_n,*)$ such that $p_*(\pi_1(P,*))=H$: we define $(\cov{H},*)=(P,*)$ to be such covering space. \begin{myrmk} A covering space $p:(P,*)\rightarrow(R_n,*)$ is in particular an $F_n\text{-labeled graph}$. \end{myrmk} \begin{mydef} Define the core graph $\core{H}$ and the pointed core graph $\bcore{H}$ to be the core and the pointed core of $(\cov{H},*)$, respectively. 
\end{mydef} Of course $\cov{H},\core{H},\bcore{H}$ are $F_n\text{-labeled graph}$s, with labeling map given by the covering projection $p$, and by its restriction to the subgraphs $\core{H}$ and $\bcore{H}$ respectively. The labeling map $f:\bcore{H}\rightarrow R_n$ gives a map $f_*:\pi_1(\bcore{H},*)\rightarrow F_n$ which induces an isomorphism $f_*:\pi_1(\bcore{H},*)\rightarrow H$. We have that $H$ is finitely generated if and only if $\core{H}$ is finite (and if and only if $\bcore{H}$ is finite). In that case, $\core{H}$ and $\bcore{H}$ can be built algorithmically from a finite set of generators for $H$, see Algorithm 5.4 in \cite{Stallings}. Given two finitely generated subgroups $H_1,H_2$, it is possible to algorithmically build the core graph $\core{H_1\cap H_2}$ of their intersection, see Theorem 5.5 in \cite{Stallings}. This also allows one to prove Howson's theorem, stating that the intersection of two finitely generated subgroups of a free group is finitely generated. \section{Equations and Stallings folding}\label{SectionStallings} Fix $H\le F_n$ finitely generated and $g\in F_n$ that depends on $H$. Here we introduce an efficient way of computing a set of generators for the ideal $\mathfrak I_g\le H*\gen{x}$ as a normal subgroup. The technique is based on the classical Stallings folding operations; the novel aspect of what we do is that we focus on the non-rank-preserving folding operations, which are the ones responsible for the generators of the ideal, and we delay them until the end of the chain of folding operations. The same technique can be used more generally to compute a set of normal generators for the kernel of any homomorphism between free groups. \subsection{Stallings folding} We will assume that the reader has some familiarity with the classical Stallings folding operation, for which we refer to \cite{Stallings}. We briefly recall the main properties that we are going to use. Let $G$ be a finite connected $F_n\text{-labeled graph}$ and suppose there are two distinct edges $e_1,e_2$ with endpoints $v,v_1$ and $v,v_2$ respectively. Suppose that $e_1$ and $e_2$ have the same label and orientation. We can identify $v_1$ with $v_2$, and $e_1$ with $e_2$: we then get a label-preserving quotient map of graphs $q:G\rightarrow G'$. \begin{mydef} The quotient map $q:G\rightarrow G'$ is called a \textbf{Stallings folding}. \end{mydef} Given a finite connected $F_n\text{-labeled graph}$ $G$, we can successively apply folding operations to $G$ in order to get a sequence $G=G^0\rightarrow G^1\rightarrow...\rightarrow G^l$. Notice that the number of edges decreases by $1$ at each step, and thus the length of any such chain is bounded (by the number of the edges of $G$). The following proposition, although not explicitly stated in \cite{Stallings}, is a well-known consequence. \begin{myprop}\label{folding} Let $G$ be a finite connected $F_n\text{-labeled graph}$ and let $G=G^0\rightarrow G^1\rightarrow...\rightarrow G^m$ be a maximal sequence of folding operations. Also, fix a basepoint $*\in G$, inducing a basepoint $*\in G^i$ for $i=0,...,m$. Then we have the following: (i) Each such sequence has the same length $m$ and the same final graph $G^m$. (ii) Let $f^i:G^i\rightarrow R_n$ be the labeling map. Then the image of $f^i_*:\pi_1(G^i,*)\rightarrow\pi_1(R_n,*)$ is the same subgroup $H\le F_n$ for every $i=1,...,m$. (iii) For every $i=1,...,m$ there is a unique label-preserving map of pointed graphs $h^i:G^i\rightarrow\cov{H}$.
The image $\im{h^i}$ is the same subgraph of $\cov{H}$ for every $i=1,...,m$. (iv) The map $h^m$ is an embedding of $G^m$ as a subgraph of $\cov{H}$ and the subgraph $h^m(G^m)$ contains $\bcore{H}$. In particular $h^m_*:\pi_1(G^m,*)\rightarrow\pi_1(\cov{H},*)$ is an isomorphism and the map $f^m_*:\pi_1(G^m,*)\rightarrow F_n$ is injective. \end{myprop} \begin{mydef} Let $G$ be a finite connected $F_n\text{-labeled graph}$. Define its \textbf{folded graph} $\fold{G}$ to be the $F_n\text{-labeled graph}$ $G^m$ obtained from any maximal sequence of folding operations as in Proposition \ref{folding}. \end{mydef} \subsection{Rank-preserving and non-rank-preserving folding operations} Let $G$ be an $F_n\text{-labeled graph}$ and let $q:G\rightarrow G'$ be a folding operation. \begin{mydef} A Stallings folding $q:G\rightarrow G'$ is called \textbf{rank-preserving} if it is an homotopy equivalence. \end{mydef} In that case, for every basepoint $*\in G$, the map $q:(G,*)\rightarrow(G',q(*))$ is a pointed homotopy equivalence and $q_*:\pi_1(G,*)\rightarrow\pi_1(G',q(*))$ is an isomorphism. Being rank-preserving is equivalent to the requirement that the endpoints that we are identifying are distinct (see also figure \ref{rankpreserving}). \begin{figure} \caption{Examples of configurations where a folding operation is possible. The two examples on the left produce rank-preserving folding operations; the two examples on the right produce non-rank-preserving folding operations.} \label{rankpreserving} \end{figure} \begin{figure} \caption{Above we have two examples of graphs, and in each of them we want to perform two folding operations (one involving $a$-labeled edges, and the other involving $b$-labeled edges). In the graph on the left, we can perform the two operations in any order. In the graph on the right, we are forced to perform the operation on the $a$-labeled edges first.} \label{commutefolding} \end{figure} In a sequence of folding operations as in Proposition \ref{folding}, it is not always possible to change the order of the operations; see for example figure \ref{commutefolding}. Informally, we could say that certain folding operations are required before being able to perform other operations. The key observation is that the non-rank-preserving folding operations change the set of edges of $G$, but they do not change the set of vertices of $G$; as a consequence, they are not a requirement for any other operation. This can be made precise as follows. \begin{myprop}\label{folding2} Let $G$ be a finite connected $F_n\text{-labeled graph}$. Let $G=G^0\rightarrow G^1\rightarrow...\rightarrow G^k$ be a maximal sequence of rank-preserving folding operations. Let $G^k\rightarrow G^{k+1}\rightarrow...\rightarrow G^m$ be a maximal sequence of folding operations for $G^k$. Also, fix a basepoint $*\in G$, inducing a basepoint $*\in G^i$ for every $i=1,...,m$. Then we have the following: (i) Each map in the first sequence is a (pointed) homotopy equivalence; the map $G^0\rightarrow G^k$ is an homotopy equivalence. (ii) The second sequence only contains non-rank-preserving folding operations; the map $G^k\rightarrow G^m$ is an isomorphism on the set of vertices. (iii) The concatenation of the two sequences produces a folding sequence as in Proposition \ref{folding}. In particular $G^m=\fold{G}$. (iv) The numbers $k,m$ do not depend on the chosen sequences. 
\end{myprop} \begin{myrmk} This shows that the graph $G^k$ is essentially $\fold{G}$, but with some edges repeated two or more times (see figure \ref{gmgl}). The repeated edges (and their multiplicity) can depend on the chosen sequence of folding operations; the graph $G^k$ is not uniquely determined by $G$. \end{myrmk} \begin{proof} Part (i) is trivial. For (ii), suppose the sequence $G^k\rightarrow...\rightarrow G^m$ contains a rank-preserving folding operation, and let $j\ge k$ be the smallest integer such that $G^j\rightarrow G^{j+1}$ is rank-preserving; this means that there are two edges $e_1,e_2$ in $G^j$ with an endpoint $v$ in common, the other endpoints $v_1\not=v_2$ distinct, and the same label and orientation. Let $p:G^k\rightarrow G^j$ be the composition of the sequence of folding operations $G^k\rightarrow...\rightarrow G^j$: each of those operations is non-rank-preserving, and in particular it induces an isomorphism on the set of vertices. Thus we can take the vertices $p^{-1}(v)$ and $p^{-1}(v_1)\not=p^{-1}(v_2)$. Take any edge $e_3\in p^{-1}(e_1)$ and $e_4\in p^{-1}(e_2)$ and we have that in $G^k$ it is possible to fold $e_3$ and $e_4$, performing a rank-preserving folding operation. This gives a contradiction because the sequence of rank-preserving folding operations $G^0\rightarrow...\rightarrow G^k$ was maximal. Part (iii) is trivial. For part (iv), we observe the following: along the sequence $G^0\rightarrow...\rightarrow G^k$, at each step the number of vertices decreases by one, while along the sequence $G^k\rightarrow...\rightarrow G^m$ the number of vertices is preserved. Thus $k$ is equal to the number of vertices of $G$ minus the number of vertices of $\fold{G}$, regardless of the chosen sequence. By Proposition \ref{folding} the total number $m$ of folding operations doesn't depend on the chosen sequences either, and the conclusion follows. \end{proof} \begin{figure} \caption{An example of the result of the folding procedure described in Proposition \ref{folding2}.} \label{gmgl} \end{figure} \subsection{A set of normal generators for the set of equations}\label{SubsectionG} \begin{mythm}\label{idealfingen} Let $H\le F_n$ be a finitely generated subgroup and $g\in F_n$ be an element. Then we have the following: (i) The ideal $\mathfrak I_g\trianglelefteq H*\gen{x}$ is finitely generated as a normal subgroup. (ii) The set of generators for $\mathfrak I_g$ can be taken to be a subset of a basis for $H*\gen{x}$. (iii) There is an algorithm that, given $H$ and $g$, computes a finite set of normal generators for $\mathfrak I_g$ which is also a subset of a basis for $H*\gen{x}$. \end{mythm} \begin{proof} Let $\varphi_g:H*\gen{x}\rightarrow F_n$ be the corresponding evaluation map, so that $\mathfrak I_g=\ker{\varphi_g}$. Let $G=\bcore{H}\vee\bcore{\gen{g}}$ be the pointed $F_n\text{-labeled graph}$ obtained by identifying the basepoints of $\bcore{H}$ and $\bcore{\gen{g}}$; see figure \ref{graphG}. Let $f:G\rightarrow R_n$ be the labeling map, inducing a map $f_*:\pi_1(G,*)\rightarrow\pi_1(R_n,*)$ between the fundamental groups. Let $\theta:H*\gen{x}\rightarrow\pi_1(G,*)$ be the isomorphism sending each element of $H$ to the corresponding path in $\bcore{H}$, and the element $x$ to the path in $\bcore{\gen{g}}$ corresponding to the element $g$. It is immediate to see that $f_*\circ\theta=\varphi_g$ as maps from $H*\gen{x}$ to $\pi_1(R_n,*)=F_n$: in particular we have $\mathfrak I_g=\ker{\varphi_g}=\ker(f_*\circ\theta)=\theta^{-1}(\ker{f_*})$. \begin{figure} \caption{In the picture we can see the graph $G$.
Here $F_2=\gen{a,b}$.} \label{graphG} \end{figure} Let $G=G^0\rightarrow...\rightarrow G^k$ be a maximal sequence of rank-preserving folding operations and let $G^k\rightarrow...\rightarrow G^m$ be a maximal sequence of folding operations for $G^k$, as in Proposition \ref{folding2}. The basepoint $*\in G$ induces a basepoint $*\in G^i$ for every $i=0,...,m$. Let $p:G\rightarrow G^k$ and $q:G^k\rightarrow G^m$ be the quotient maps given by the sequences of foldings. Let $R_n$ be the $n$-rose with edges labeled $a_1,...,a_n$ and let $f:G\rightarrow R_n$ and $f^k:G^k\rightarrow R_n$ and $f^m:G^m\rightarrow R_n$ be the labeling maps. By Proposition \ref{folding2} the map $p:G\rightarrow G^k$ is a homotopy equivalence, so that $p_*:\pi_1(G,*)\rightarrow\pi_1(G^k,*)$ is an isomorphism; by Proposition \ref{folding} we have that $f^m_*:\pi_1(G^m,*)\rightarrow\pi_1(R_n,*)=F_n$ is injective. Since the diagram in figure \ref{maps} commutes, we have $\ker{\varphi_g}=\ker(f_*\circ\theta)=\ker(f^m_*\circ q_*\circ p_*\circ\theta)=\theta^{-1}(p_*^{-1}(\ker{q_*}))$. We now show that $\ker{q_*}$ is quite easy to compute, and the maps $p_*^{-1}$ and $\theta^{-1}$ can be made explicit too. \begin{figure} \caption{The diagram commutes.} \label{maps} \end{figure} Let $T$ be a maximal tree for $G^k$. Let $e_1,...,e_{r}$ be the list of edges in $G^k\setminus T$ (each with its orientation coming from the labeling): these give a basis $\sigma_1,...,\sigma_{r}$ for the fundamental group $\pi_1(G^k,*)$, as defined in Proposition \ref{fundamentalgroupgraph}. By Proposition \ref{folding2}, the map $q:G^k\rightarrow G^m$ is an isomorphism on the set of vertices (see figure \ref{gmgl}), and thus $\restr{q}{T}$ is a homeomorphism and $q(T)$ is a maximal tree for $G^m$. Let $d_1,...,d_s$ be the list of edges in $G^m\setminus q(T)$ (each with its orientation coming from the labeling map): these give a basis $\tau_1,...,\tau_s$ for the fundamental group $\pi_1(G^m,*)$, according to Proposition \ref{fundamentalgroupgraph}. The map $q_*:\pi_1(G^k,*)\rightarrow\pi_1(G^m,*)$ is now very easy to describe: we have $$q_*(\sigma_i)=\begin{cases} 1&\text{if }q(e_i)\in q(T)\\ \tau_j&\text{if }q(e_i)=d_j\in G^m\setminus q(T) \end{cases}$$ For each $d_j\in G^m\setminus q(T)$ fix an index $i(j)$ such that $q(e_{i(j)})=d_j$. Then we can define the set $$N=\{\sigma_i : q_*(\sigma_i)=1\}\cup\{\sigma_{i'}\sigma_{i(j)}^{-1} : q_*(\sigma_{i'})=\tau_j\text{ and }i'\not=i(j)\}$$ and we observe that $N$ is a set of normal generators for $\ker{q_*}$, and it also has the additional property of being a subset of a basis for $\pi_1(G^k,*)$. We observe that the inverse $p_*^{-1}$ can be made explicit as follows. Each folding operation in the chain $G=G^0\rightarrow...\rightarrow G^k$ is a pointed homotopy equivalence, and it is easy to produce homotopy inverses $\alpha^i:G^i\rightarrow G^{i-1}$ for $i=1,...,k$. We now take the composition $\alpha=\alpha^1\circ...\circ\alpha^k:G^k\rightarrow G$ and we observe that the map $\alpha_*:\pi_1(G^k,*)\rightarrow\pi_1(G,*)$ is exactly the desired inverse $\alpha_*=p_*^{-1}$. Finally, the map $\theta^{-1}$ works as follows: we take a path $\gamma:I_l\rightarrow G$ with $\gamma(0)=\gamma(1)=*$, we take its reduction $\ol\gamma$, and we write down the word that we read while going along $\ol\gamma$; whenever we cross $\bcore{\gen{g}}$ we write $x$ or $\ol{x}$ instead of the labels of the edges of $\bcore{\gen{g}}$. The result of this process is exactly the element $\theta^{-1}([\gamma])\in H*\gen{x}$.
\end{proof} \begin{myrmk} Notice that with the above argument, we are able to produce a basis $c_1,...,c_{r+1}$ for $H*\gen{x}$, where $c_i=\theta^{-1}(p_*^{-1}(\sigma_i))$, such that each of $h_1,...,h_r,x$ written as a reduced word in $c_1,...,c_{r+1}$ has at most the same length as $h_1,...,h_r,g$ written as a reduced word in $a_1,...,a_n$, respectively; moreover, the ideal $\mathfrak I_g$ is generated (as normal subgroup) by words in $c_1,...,c_{r+1}$ of length at most $2$. \end{myrmk} \begin{myrmk} Consider the subgroup $\gen{H,g}\le F_n$. We can compute a basis for $\gen{H,g}$, and we can then use $\gen{H,g}$ as ambient group instead of $F_n$ itself; this doesn't change the kernel $\ker{\varphi_g}$. In other words, we can assume that the graph $G^m$ that we obtain at the end of the folding process is exactly the rose $R_n$, and that the graph $G^k$ is a rose too, but with some label repeated more than once on the petals. This assumption makes the computations easier. \end{myrmk} \section{The minimum degree of an equation}\label{SectionMain} In this section, we work with a fixed finitely generated subgroup $H\le F_n$ and with a fixed element $g\in F_n$ such that $g$ depends on $H$. With the same notation as in the proof of Theorem \ref{idealfingen}, we consider the $F_n\text{-labeled graph}$ $G=\bcore{H}\vee\bcore{\gen{g}}$ with labeling map $f:G\rightarrow R_n$, inducing a map $f_*:\pi_1(G,*)\rightarrow F_n$ of fundamental groups. We consider the isomorphism $\theta:H*\gen{x}\rightarrow\pi_1(G,*)$ as defined in the proof of Theorem \ref{idealfingen}. \begin{mydef}\label{defcorrpath} Let $w\in H*\gen{x}$ be a non-trivial equation. Define the \textbf{corresponding path} $\sigma:I_l\rightarrow G$ as the unique reduced path in the homotopy class $\theta(w)$ (see Proposition \ref{reducedpath}). \end{mydef} \begin{mydef}\label{defcorrequation} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$. Define the \textbf{corresponding equation} $w\in H*\gen{x}$ as $w=\theta^{-1}([\sigma])$. \end{mydef} The two above definitions give a bijection between non-trivial equations $w\in H*\gen{x}$ and reduced paths $\sigma:I_l\rightarrow G$. Cyclically reduced paths correspond to \textit{cyclically reduced equations}, i.e. equations $w\in H*\gen{x}$ such that, when we write $w$ as a reduced word in the letters $a_1,...,a_n,x$, the word is also cyclically reduced. The aim of this section is to prove the following theorem: \begin{mythm}\label{main} Let $L$ be the number of edges of the graph $G$ and let $d_{min}$ be the minimum possible degree for a non-trivial equation in $\mathfrak I_g$. Then there is a non-trivial equation $w\in\mathfrak I_g$ of degree $d_{min}$ and such that the corresponding path $\sigma:I_l\rightarrow G$ has length $l\le16L^2d_{min}$. \end{mythm} \subsection{Innermost cancellations} The degree of a cyclically reduced equation can be computed by looking at how many times we cross the edges of $\core{\gen{g}}$. \begin{mylemma}\label{degree} Let $\sigma:I_l\rightarrow G$ be a cyclically reduced path. Let $e$ be any edge of $G$ that belongs to the subgraph $\core{\gen{g}}$. Then the degree of the equation $w$ corresponding to $\sigma$ coincides with the number of times $\sigma$ crosses the edge $e$ (in either direction). \end{mylemma} \begin{proof} Write the equation $w$ as a cyclically reduced word $c_1x^{\alpha_1}c_2x^{\alpha_2}...c_rx^{\alpha_r}c_{r+1}$ with $\alpha_1,...,\alpha_r\in\mathbb Z\setminus\{0\}$ and $c_1,...,c_{r+1}\in H$. 
Then in the graph $G$ we have that $\theta(w)=\theta(c_1)\cdot\theta(x^{\alpha_1})\cdot...\cdot\theta(x^{\alpha_r})\cdot\theta(c_{r+1})$, where the $\cdot$ symbol denotes the concatenation of paths (without any homotopy). It is immediate to see that $\theta(x^{\alpha_i})$ crosses each edge of $\core{\gen{g}}$ exactly $\abs{\alpha_i}$ times, for $i=1,...,r$, and that $\theta(c_i)$ is contained in $\bcore{H}$ and thus it doesn't cross any edge of $\core{\gen{g}}$. The conclusion follows. \end{proof} Non-trivial equations $w\in\mathfrak I_g$ with $g$ as a solution correspond to reduced paths $\sigma:I_l\rightarrow G$ such that $f\circ\sigma$ is homotopically trivial (relative to the endpoints). In this case we can take a maximal reduction process $(s_1,t_1),...,(s_{l/2},t_{l/2})$ for $f\circ\sigma$. The following two lemmas, which will be of fundamental importance in what follows, tell us that the degree of an equation is closely related to the number of innermost cancellations. \begin{mydef} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial (relative to the endpoints); let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. A couple $(s_i,t_i)$ is called an \textbf{innermost cancellation} if $s_i$ and $t_i$ are adjacent on the interval $I_l$. \end{mydef} \begin{mylemma}\label{innermostcouple} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial (relative to its endpoints); let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Let $(s_i,t_i)$ be an innermost cancellation. Then, of the two edges $\sigma(s_i)$ and $\sigma(t_i)$, one is an edge of $\bcore{H}$ and the other is an edge of $\bcore{\gen{g}}$ (and the vertex between them is the basepoint). \end{mylemma} \begin{proof} Since $(s_i,t_i)$ is a couple of a reduction process for $f\circ\sigma$, we have that $(f\circ\sigma)(s_i)$ and $(f\circ\sigma)(t_i)$ are the same edge of $\fold{G}$ but with opposite orientations. In particular $(f\circ\sigma)(s_i)$ and $(f\circ\sigma)(t_i)$ have the same label and opposite orientations, and, since the labeling map $f$ preserves labels and orientations, the two edges $\sigma(s_i)$ and $\sigma(t_i)$ have the same label and opposite orientations too. Observe that $\sigma(s_i)$ and $\sigma(t_i)$ are adjacent but distinct, since $\sigma$ is a reduced path. This means that $\sigma(s_i)$ and $\sigma(t_i)$ can't both belong to $\bcore{H}$ (because it is folded), and can't both belong to $\bcore{\gen{g}}$ (because it is folded too). Thus one of them has to belong to $\bcore{H}$ and the other to $\bcore{\gen{g}}$, and the conclusion follows. \end{proof} \begin{mylemma}\label{innermostbounded} Let $\sigma:I_l\rightarrow G$ be a cyclically reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial (relative to its endpoints); let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Suppose the equation $w\in\mathfrak I_g$ corresponding to $\sigma$ has degree $d$. Then the reduction process contains at most $2d$ innermost cancellations. \end{mylemma} \begin{proof} As in the proof of Lemma \ref{degree}, write the equation $w$ as a cyclically reduced word $$c_1x^{\alpha_1}c_2x^{\alpha_2}...c_rx^{\alpha_r}c_{r+1}$$ with $\alpha_1,...,\alpha_r\in\mathbb Z\setminus\{0\}$ and $c_1,...,c_{r+1}\in H$.
In the graph $G$ we have $\sigma=\theta(w)=\theta(c_1)\cdot\theta(x^{\alpha_1})\cdot...\cdot\theta(x^{\alpha_r})\cdot\theta(c_{r+1})$, where the $\cdot$ symbol denotes the concatenation of paths (without any homotopy). We see that $w$ has degree $d=\abs{\alpha_1}+...+\abs{\alpha_r}\ge r$ and, using Lemma \ref{innermostcouple}, that the path $\sigma$ contains at most $2r$ innermost cancellations. The conclusion follows. \end{proof} \subsection{Parallel cancellation} In this subsection we introduce the parallel cancellation moves, which allow us to produce a shorter equation from a longer one. We give a characterization of which parallel cancellation moves preserve the degree of the equation. Recall that $I_l$ is the unit interval $[0,1]$ subdivided into $l$ segments, and recall that for an edge $s$ of $I_l$, we denote with $\omeno{s},\opiu{s}$ the endpoints of $s$, ordered on the interval in such a way that $\omeno{s}<\opiu{s}$. \begin{mydef}\label{parallel} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. We say that two couples $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$ with $\alpha<\beta$ are \textbf{$\text{parallel}$} if they satisfy the following conditions: (i) The edges $s_\beta,s_\alpha,t_\alpha,t_\beta$ appear in this order on $I_l$. (ii) The map $\sigma$ sends $s_\alpha,s_\beta$ to the same edge of $G$ crossed with the same orientation. (iii) The map $\sigma$ sends $t_\alpha,t_\beta$ to the same edge of $G$ crossed with the same orientation. \end{mydef} The reason behind the definition of $\text{parallel}$ couples is that they allow us to perform a cancellation move, which I now describe, that will be of fundamental importance in the proof of the main theorem. Let $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$ be two parallel couples; we take the subgraph of $I_l$ given by the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ and collapse it to a point; we also take the subgraph of $I_l$ given by the interval $[\opiu{t_\alpha},\opiu{t_\beta}]$ and collapse it to a point (see figure \ref{cancellationmove}). We obtain a graph isomorphic to $I_{l'}$, and notice that $2\le l'\le l-2$ (because there are at least two edges that get collapsed, namely $s_\beta$ and $t_\beta$, and two that don't get collapsed, namely $s_\alpha$ and $t_\alpha$). We can define a map $\sigma':I_{l'}\rightarrow G$ which is equal to $\sigma$, except on the collapsed interval $[\omeno{s_\beta},\omeno{s_\alpha}]$, where we set it equal to $\sigma(\omeno{s_\beta})=\sigma(\omeno{s_\alpha})$, and except on the collapsed interval $[\opiu{t_\alpha},\opiu{t_\beta}]$, where we set it equal to $\sigma(\opiu{t_\alpha})=\sigma(\opiu{t_\beta})$. This gives a well-defined combinatorial path $\sigma':I_{l'}\rightarrow G$. \begin{mylemma}[Parallel cancellation]\label{parallelcancellation} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Suppose for some $1\le\alpha<\beta\le l/2$ the couples $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$ are $\text{parallel}$ and define the map $\sigma':I_{l'}\rightarrow G$ as above. Then $\sigma'$ is a reduced path with $\sigma'(0)=\sigma'(1)=*$ and $f\circ\sigma'$ is homotopically trivial. 
A maximal reduction process for $f\circ\sigma'$ can be obtained from $(s_1,t_1),...,(s_{l/2},t_{l/2})$ by removing the couples containing edges which get collapsed in the definition of $I_{l'}$. \end{mylemma} \begin{proof} We first want to show that the map $\sigma':I_{l'}\rightarrow G$ is reduced. At the vertices in the interval $[0,\omeno{s_\beta})$, the local injectivity of $\sigma'$ immediately follows from the local injectivity of $\sigma$; the same holds for the vertices in the intervals $[\opiu{s_\alpha},\omeno{t_\alpha}]$ and $(\opiu{t_\beta},1]$. For the vertex of $I_{l'}$ corresponding to the collapsed interval $[\omeno{s_\beta},\omeno{s_\alpha}]$, the local injectivity of $\sigma'$ follows from the local injectivity of $\sigma$ at $\omeno{s_\beta}$ (and here we use the hypothesis that $\sigma$ sends $s_\beta$ and $s_\alpha$ to the same edge of $G$, crossed with the same orientation). Similarly, for the vertex of $I_{l'}$ corresponding to the collapsed interval $[\opiu{t_\alpha},\opiu{t_\beta}]$, the local injectivity of $\sigma'$ follows from the local injectivity of $\sigma$ at $\opiu{t_\beta}$. This shows that $\sigma'$ is reduced. It is easy to see, using Lemma \ref{lemmino1}, that for every $1\le i\le l/2$ we have that either both or none of $s_i,t_i$ is collapsed to a point. Consider the sequence $(s_1,t_1),...,(s_{l/2},t_{l/2})$ and remove the couples of edges that get collapsed (preserving the order of the other couples): the remaining couples contain the edges of $I_{l'}$, each appearing exactly once. We thus get a sequence $(q_1,r_1),...,(q_{l'/2},r_{l'/2})$ of couples of edges of $I_{l'}$. We want to prove that this is a reduction process for $f\circ\sigma'$ (the statement of the lemma then follows immediately). We take a couple $(q_i,r_i)$ for some $1\le i\le l'/2$ and we want to prove that, if in $I_{l'}$ we collapse each of $q_1,r_1,...,q_{i-1},r_{i-1}$ to a point, the edges $q_i,r_i$ become adjacent. We have $(q_i,r_i)=(s_j,t_j)$ for some $1\le j\le l/2$. But in the sequence $(s_1,t_1),...,(s_{l/2},t_{l/2})$ we have that, collapsing each of $(s_1,t_1),...,(s_{j-1},t_{j-1})$, the edges $s_j,t_j$ become adjacent; of the couples $(s_1,t_1),...,(s_{j-1},t_{j-1})$, some get collapsed when passing from $I_l$ to $I_{l'}$, and the others are exactly the couples $(q_1,r_1),...,(q_{i-1},r_{i-1})$; thus, if we collapse $(q_1,r_1),...,(q_{i-1},r_{i-1})$ too, the two edges $q_i$ and $r_i$ become adjacent, as desired. Finally, take a couple $(q_i,r_i)$ for some $1\le i\le l'/2$, and we want to prove that $f\circ\sigma'$ sends $q_i$ and $r_i$ to the same edge of $\fold{G}$ crossed with opposite orientations. But $(q_i,r_i)=(s_j,t_j)$ for some $1\le j\le l/2$, and $f\circ\sigma$ sends $s_j$ and $t_j$ to the same edge of $\fold{G}$ crossed with opposite orientations. Since $\sigma'$ is defined to coincide with $\sigma$ on the edges $q_i=s_j$ and $r_i=t_j$, we have that $f\circ\sigma'$ sends $q_i$ and $r_i$ to the same edge of $\fold{G}$ crossed with opposite orientations. Thus $(q_1,r_1),...,(q_{l'/2},r_{l'/2})$ is a maximal reduction process for $f\circ\sigma'$, as desired. \end{proof} \begin{figure} \caption{An example of a diagram for a maximal reduction process. The cancellation move collapses two intervals, which are painted below the interval, i.e. $[\omeno{s_\beta},\omeno{s_\alpha}]$ and $[\opiu{t_\alpha},\opiu{t_\beta}]$.} \label{cancellationmove} \end{figure} The following two lemmas give us information about how the degree of an equation changes when we perform a parallel cancellation move on the corresponding path.
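Before stating them, we record, purely as an informal aside, how the parallel cancellation move acts on the underlying sequence of oriented edges. The sketch below is our own illustration (the encoding and the function name are not used elsewhere in the paper): a path is represented as a list of steps, the four indices are the positions of $s_\beta,s_\alpha,t_\alpha,t_\beta$ along $I_l$, and the two slices that are dropped correspond to the collapsed intervals $[\omeno{s_\beta},\omeno{s_\alpha}]$ and $[\opiu{t_\alpha},\opiu{t_\beta}]$.

\begin{verbatim}
# Illustration only: a path is a list of oriented edges (here, characters),
# and i_beta < i_alpha <= j_alpha < j_beta are the positions along I_l of
# s_beta, s_alpha, t_alpha, t_beta respectively.

def parallel_cancellation(path, i_beta, i_alpha, j_alpha, j_beta):
    # drop the edges in positions [i_beta, i_alpha) and (j_alpha, j_beta],
    # i.e. the two collapsed intervals of the parallel cancellation move
    return path[:i_beta] + path[i_alpha:j_alpha + 1] + path[j_beta + 1:]

# A path of length 9 whose parallel couples sit in positions 1, 3, 5, 7
# is shortened to a path of length 5:
print(parallel_cancellation(list("abcdefghi"), 1, 3, 5, 7))
# ['a', 'd', 'e', 'f', 'i']
\end{verbatim}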
\begin{mylemma}\label{degreedecreases} Let $\sigma:I_l\rightarrow G$ and $\sigma':I_{l'}\rightarrow G$ be cyclically reduced paths with $\sigma(0)=\sigma(1)=\sigma'(0)=\sigma'(1)=*$ and suppose $\sigma'$ is obtained from $\sigma$ by means of a cancellation move as described in Lemma \ref{parallelcancellation}. Then the degrees $d,d'$ of the corresponding equations $w,w'$ satisfy $d'\le d$. \end{mylemma} \begin{proof} Fix an edge $e$ of $G$ belonging to $\core{\gen{g}}$ and apply Lemma \ref{degree}: the domain of $\sigma'$ is the domain of $\sigma$ with some edges collapsed, and thus the number of times $\sigma'$ crosses the edge $e$ is lesser or equal than the number of times $\sigma$ does. \end{proof} \begin{mydef}\label{defdegpres} Let $\sigma:I_l\rightarrow G$ and $\sigma':I_{l'}\rightarrow G$ be reduced paths with $\sigma(0)=\sigma(1)=\sigma'(0)=\sigma'(1)=*$ and suppose $\sigma'$ is obtained from $\sigma$ by means of a cancellation move as described in Lemma \ref{parallelcancellation}. We say that the parallel cancellation move is \textbf{degree-preserving} if the two equations $w,w'$ corresponding to the paths $\sigma,\sigma'$ have the same degree $d=d'$. \end{mydef} \begin{mylemma}\label{degreepreserving} Let $\sigma:I_l\rightarrow G$ be a cyclically reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Suppose there are two parallel couples $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$ with $1\le\alpha<\beta\le l/2$, and let $\sigma':I_{l'}\rightarrow G$ be the reduced path obtained with the cancellation move described in Lemma \ref{parallelcancellation}. If the cancellation move is degree-preserving, then the images by $\sigma$ of the two intervals $[\omeno{s_\beta},\omeno{s_\alpha}]$ and $[\opiu{t_\alpha},\opiu{t_\beta}]$ are contained in $\core{H}$; moreover, the two paths $\restr{f\circ\sigma}{[\omeno{s_\beta},\omeno{s_\alpha}]}$ and $\restr{f\circ\sigma}{[\opiu{t_\alpha},\opiu{t_\beta}]}$ are the same, but walked in reverse direction. \end{mylemma} \begin{proof} By hypothesis, $\sigma(s_\beta)$ and $\sigma(s_\alpha)$ are the same edge $e$ of $G$ crossed with the same orientation. Suppose first that $e$ doesn't belong to $\core{H}$. Then any reduced path that starts and ends with $e$ has to cross all the edges of $\core{\gen{g}}$. Thus, by Lemma \ref{degree}, when we collapse the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ the degree strictly decreases. Suppose now that $e$ belongs to $\core{H}$. Suppose there is an edge $s$ in the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ such that $\sigma(s)$ doesn't belong to $\core{H}$. If $\sigma(s)$ belongs to $\core{\gen{g}}$, then, by Lemma \ref{degree}, when we collapse the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ the degree strictly decreases. If $\sigma(s)$ doesn't belong to $\core{H}$ nor to $\core{\gen{g}}$, then at least one of the paths $\restr{\sigma}{[\omeno{s_\beta},\omeno{s}]}$ and $\restr{\sigma}{[\opiu{s},\omeno{s_\alpha}]}$ crosses all the edges of $\bcore{\gen{g}}$; in particular, by Lemma \ref{degree}, when collapsing the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ the degree strictly decreases. Thus the only possibility is that the path $\restr{\sigma}{[\omeno{s_\beta},\omeno{s_\alpha}]}$ is contained in $\core{H}$. Similarly, we obtain that the path $\restr{\sigma}{[\opiu{t_\alpha},\opiu{t_\beta}]}$ is contained in $\core{H}$ too. This proves the first part of the lemma. 
For the second part, suppose the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ contains the two edges $s_i,t_i$ for some couple $(s_i,t_i)$ of our reduction process. Then the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ has to contain an innermost couple $(s_j,t_j)$ of our reduction process, i.e. a couple with $s_j,t_j$ adjacent on $I_l$. But by Lemma \ref{innermostcouple}, at least one of the edges $\sigma(s_j),\sigma(t_j)$ has to belong to $\bcore{\gen{g}}$, which contradicts the fact, established in the first part, that $\sigma$ maps the whole interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ into $\core{H}$. Thus the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ does not contain both $s_i,t_i$ for any couple $(s_i,t_i)$ of our reduction process. The same holds for the interval $[\opiu{t_\alpha},\opiu{t_\beta}]$. Now, using Lemma \ref{lemmino1}, it is easy to see that the reduction process has to pair up the edges of the interval $[\omeno{s_\beta},\omeno{s_\alpha}]$ with the edges of the interval $[\opiu{t_\alpha},\opiu{t_\beta}]$, and the pairing has to be done in decreasing order. It follows that $\restr{f\circ\sigma}{[\omeno{s_\beta},\omeno{s_\alpha}]}$ and $\restr{f\circ\sigma}{[\opiu{t_\alpha},\opiu{t_\beta}]}$ are the same path, walked in reverse directions. \end{proof} \subsection{The minimum possible degree for a non-trivial equation} Let $L$ be the number of edges of the graph $G$. \begin{myprop}\label{findingparallels} Let $w\in\mathfrak I_g$ be a cyclically reduced equation of degree $d$ and let $\sigma:I_l\rightarrow G$ be the corresponding path; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be any maximal reduction process for $f\circ\sigma$. Suppose $l>16L^2d$. Then the reduction process contains two $\text{parallel}$ couples $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$. \end{myprop} \begin{proof} To each couple $(s_i,t_i)$ we associate the quadruple $(\sigma(s_i),\epsilon,\sigma(t_i),\delta)$, where $\sigma(s_i),\sigma(t_i)$ are edges of $G$ and $\epsilon,\delta\in\{+1,-1\}$ record the orientations with which $\sigma$ crosses those two edges. There are $l/2$ couples and at most $4L^2$ possible quadruples; but by hypothesis we have $l/2>4L^2\cdot 2d$, so we can find at least $2d+1$ couples $(s_{i_1},t_{i_1}),...,(s_{i_{2d+1}},t_{i_{2d+1}})$ with $i_1<...<i_{2d+1}$ which are sent to the same quadruple; this means that $\sigma$ sends $s_{i_1},...,s_{i_{2d+1}}$ all to the same edge of $G$ crossed with the same orientation, and $t_{i_1},...,t_{i_{2d+1}}$ all to the same edge of $G$ crossed with the same orientation. Each of the couples $(s_{i_k},t_{i_k})$ has to contain an innermost cancellation, i.e. there is a cancellation $(q_k,r_k)$ with $\opiu{s_{i_k}}\le\opiu{q_k}=\omeno{r_k}\le\omeno{t_{i_k}}$. By Lemma \ref{innermostbounded} there are at most $2d$ innermost cancellations: since we have $2d+1$ couples $(s_{i_1},t_{i_1}),...,(s_{i_{2d+1}},t_{i_{2d+1}})$, two of them, let's say $(s_{i_j},t_{i_j})$ and $(s_{i_k},t_{i_k})$ with $j<k$, have to contain the same innermost cancellation $(q_j,r_j)=(q_k,r_k)$. But then the two couples overlap, and by Lemma \ref{lemmino1} this forces $s_{i_k},s_{i_j},t_{i_j},t_{i_k}$ to appear in this order on the interval $I_l$. Thus the two couples $(s_{i_j},t_{i_j}),(s_{i_k},t_{i_k})$ are parallel, as desired. \end{proof} We are now ready to prove Theorem \ref{main}. \begin{proof}[Proof of Theorem \ref{main}] Let $w\in\mathfrak I_g$ be a non-trivial equation of degree $d_{min}$, and let $\sigma:I_l\rightarrow G$ be the corresponding path.
Suppose also that, between the equations of degree $d_{min}$, the equation $w$ has the property that the length $l$ of the corresponding path is the minimum possible. This in particular implies that $w$ is cyclically reduced. Assume by contradiction that $l>16L^2d_{min}$. Then take any maximal reduction process $(s_1,t_1),...,(s_{l/2},t_{l/2})$ for $f\circ\sigma$, and by Proposition \ref{findingparallels} we can find two parallel couples $(s_i,t_i),(s_j,t_j)$. We perform the corresponding parallel cancellation move (according to Lemma \ref{parallelcancellation}) and we obtain a path $\sigma':I_{l'}\rightarrow G$, with corresponding equation $w'\in\mathfrak I_g$ with $w'\not=1$. Notice that $l'<l$; moreover, by Lemma \ref{degreedecreases}, the degree $d'$ of $w'$ satisfies $d'\le d_{min}$, but since $d_{min}$ is the minimum possible this implies $d'=d_{min}$. But then $l'<l$ contradicts the minimality of $l$. This proves the theorem. \end{proof} \begin{mycor}\label{algorithm} There is an algorithm that, given $H$ and $g$ such that $g$ depends on $H$, produces a non-trivial equation $w\in\mathfrak I_g$ of minimum possible degree. \end{mycor} \begin{proof}[Algorithm] We first produce an upper bound $D$ on the minimum degree of an equation in $\mathfrak I_g$; this is done for example by taking any non-trivial equation in $\mathfrak I_g$, and taking its degree $D$. Given this upper bound $D$, we take all the non-trivial reduced paths $\sigma:I_l\rightarrow G$ from the basepoint to itself and of length $l\le16L^2D$. For each such path $\sigma$, we check whether $f\circ\sigma$ is homotopically trivial (in linear time on a pushdown automaton, with a free reduction process), and we compute the degree of the corresponding equation $w$. We take the minimum of all the degrees of those equations: this is also the minimum possible degree for a non-trivial equation in $\mathfrak I_g$. \end{proof} \section{The set of minimum-degree equations} In this section we describe a parallel insertion move and we show that it is an inverse to the degree-preserving cancellation moves. We also provide a few lemmas that help us manipulate sequences of insertion moves. The aim of this section is to provide an explicit characterization of the set of all the equations of minimum possible degree (and more generally, of the set of all the equations of a certain fixed degree). \subsection{Parallel insertion} Observe that, for every vertex $v\in\core{H}$, the group $\pi_1(\core{H},v)$ can be seen as a subgroup of $F_n$, by means of the injective map $\pi_1(f):\pi_1(\core{H},v)\rightarrow\pi_1(R_n,*)$, where $f:\core{H}\rightarrow R_n$ is the labeling map. \begin{mydef}\label{insertionpath} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Fix a couple $(s_\alpha,t_\alpha)$ such that $\sigma(s_\alpha)$ and $\sigma(t_\alpha)$ belong to $\core{H}$; an element $u\in F_n$ is called \textbf{$\text{insertion word}$ for $\sigma$ at $(s_\alpha,t_\alpha)$} if it satisfies the following conditions: (i) $u$ belongs to the subgroup $\pi_1(\core{H},\sigma(\omeno{s_\alpha}))\cap\pi_1(\core{H},\sigma(\opiu{t_\alpha}))$ of $F_n$. 
(ii) $u$ begins with the label of $\sigma(s_\alpha)$, if $\sigma$ crosses $\sigma(s_\alpha)$ with the same orientation of the labeling, or with the inverse of that label, if $\sigma$ crosses $\sigma(s_\alpha)$ with opposite orientation to the labeling. (iii) $u$ is cyclically reduced. \end{mydef} Let $u$ be a $\text{insertion word}$ at $(s_\alpha,t_\alpha)$ for the path $\sigma$. Then there is a unique reduced path $\tau_1:I_r\rightarrow G$ representing $u\in\pi_1(\core{H},\sigma(\omeno{s_\alpha}))$; similarly, there is a unique reduced path $\tau_2:I_r\rightarrow G$ representing $\ol{u}\in\pi_1(\core{H},\sigma(\opiu{t_\alpha}))$. These two paths have the same length $r$, which is also the length of the word $u$. Now cut $I_l$ at the two points $\omeno{s_\alpha}$ and $\opiu{t_\alpha}$, and insert an interval of length $r$ at each of these two cuts, in order to obtain an interval $I_{l+2r}$; define the map $\sigma':I_{l+2r}\rightarrow G$ which is equal to $\sigma$ on the edges that belonged to $I_l$, and is equal to $\tau_1$ on the interval added at the cut at $\omeno{s_\alpha}$, and is equal to $\tau_2$ on the interval added at the cut at $\opiu{t_\alpha}$; see also figure \ref{insertionmove}. \begin{mylemma}[Parallel insertion]\label{parallelinsertion} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Let $(s_\alpha,t_\alpha)$ be a couple such that $\sigma(s_\alpha)$ and $\sigma(t_\alpha)$ belong to $\core{H}$ and let $u$ be an $\text{insertion word}$ for $\sigma$ at $(s_\alpha,t_\alpha)$. Let $\sigma':I_{l+2r}\rightarrow G$ be the path defined as above. Then $\sigma'$ is a reduced path with $\sigma'(0)=\sigma'(1)=*$ and $f\circ\sigma'$ is homotopically trivial. Moreover, there is a maximal reduction process for $f\circ\sigma'$ containing the couples $(s_1,t_1),...,(s_{l/2},t_{l/2})$. \end{mylemma} \begin{proof} Of course we have $\sigma'(0)=\sigma'(1)=*$. The fact that $\sigma'$ is reduced follows from the fact that $\sigma$ is reduced, and from the fact that $u$ is cyclically reduced (here it is important that $u$ is cyclically reduced and begins with the label of $\sigma(s_\alpha)$ or with its inverse: otherwise the local injectivity of $\sigma'$ may fail at $\omeno{s_\alpha}$). Let $e_r,...,e_1$ be the edges of the interval which is the domain of $\tau_1$ and let $e_1',...,e_r'$ be the edges of the interval which is the domain of $\tau_2$; we mean that $e_r,...,e_1$ and $e_1',...,e_r'$ appear in this order on $I_{l+2r}$. Then a maximal reduction process for $f\circ\sigma'$ is given by $(s_1,t_1),...,(s_\alpha,t_\alpha),(e_1,e_1'),...,(e_r,e_r'),(s_{\alpha+1},t_{\alpha+1 }),...,(s_{l/2},t_{l/2})$, and in particular $f\circ\sigma'$ is homotopically trivial, as desired. \end{proof} \begin{myrmk} Notice that these moves of parallel insertion depend on the existence of an element $u\in\pi_1(\core{H},\sigma(\omeno{s_\alpha}))\cap\pi_1(\core{H},\sigma(\opiu{t_\alpha}))$ with some specific properties. The two subgroups $\pi_1(\core{H},\sigma(\omeno{s_\alpha}))$ and $\pi_1(\core{H},\sigma(\opiu{t_\alpha}))$ are both conjugates of $H$, so there are cases where the possibilities for $u$ are very limited (for example if $H$ is malnormal in $F_n$, meaning that every two distinct conjugates of $H$ have trivial intersection). 
In any case, it may happen that $\sigma(\omeno{s_\alpha})=\sigma(\opiu{t_\alpha})$, in which case at least some insertion moves can be performed. \end{myrmk} \begin{figure} \caption{An example of an insertion move. In the image above, we can see a diagram for a maximal reduction process for $\sigma$; in the image in the middle, we see the two cuts at $\omeno{s_\alpha}$ and $\opiu{t_\alpha}$.}\label{insertionmove} \end{figure} The following two lemmas show that the parallel insertion moves of Lemma \ref{parallelinsertion} are essentially the inverses of the degree-preserving (in the sense of Definition \ref{defdegpres}) parallel cancellation moves of Lemma \ref{parallelcancellation}. \begin{mylemma}\label{parallelinverses} Let $\sigma:I_l\rightarrow G$ be a cyclically reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Suppose there are two parallel couples $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$ and let $\sigma':I_{l'}\rightarrow G$ be the path obtained with the cancellation move of Lemma \ref{parallelcancellation}. Suppose that the cancellation move is degree-preserving, and let $u$ be the word that we read when going along $\sigma([\omeno{s_\beta},\omeno{s_\alpha}])$. Then $\sigma$ can be obtained from $\sigma'$ with an insertion move as described in Lemma \ref{parallelinsertion}, using the $\text{insertion word}$ $u$ for $\sigma'$ at $(s_\alpha,t_\alpha)$. \end{mylemma} \begin{proof} Immediate from Lemma \ref{degreepreserving}. \end{proof} \begin{mylemma}\label{parallelinverses2} Let $\sigma':I_l\rightarrow G$ be a cyclically reduced path with $\sigma'(0)=\sigma'(1)=*$ and such that $f\circ\sigma'$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma'$. Let $(s_\alpha,t_\alpha)$ be a couple such that $\sigma'(s_\alpha)$ and $\sigma'(t_\alpha)$ belong to $\core{H}$ and let $u$ be an $\text{insertion word}$ for $\sigma'$ at $(s_\alpha,t_\alpha)$; let $\sigma$ be the path obtained from $\sigma'$ by means of the insertion move of Lemma \ref{parallelinsertion}. Then $\sigma'$ can be obtained from $\sigma$ by means of a cancellation move which collapses the intervals that we just added; moreover this cancellation move is degree-preserving. \end{mylemma} \begin{proof} Immediate from the definitions. \end{proof} We are now going to prove the technical Lemmas \ref{insertioncommutes}, \ref{insertionsame} and \ref{insertioninsertion}; these will allow us to manipulate a sequence of insertion moves. The following lemma says that, if we want to perform two parallel insertion moves on a path $\sigma$, then we can perform them in either order and obtain the same result. \begin{mylemma}\label{insertioncommutes} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Let $(s_\alpha,t_\alpha),(s_{\alpha'},t_{\alpha'})$ be distinct couples such that $\sigma(s_\alpha),\sigma(t_\alpha),\sigma(s_{\alpha'}),\sigma(t_{\alpha'})$ belong to $\core{H}$, and let $u,u'$ be $\text{insertion word}$s for $\sigma$ at $(s_\alpha,t_\alpha),(s_{\alpha'},t_{\alpha'})$ respectively.
Perform on $\sigma$ the insertion move relative to $u,(s_\alpha,t_\alpha)$ and then the insertion move relative to $u',(s_{\alpha'},t_{\alpha'})$ in order to obtain a path $\mu_1$. Perform on $\sigma$ the insertion move relative to $u',(s_{\alpha'},t_{\alpha'})$ and then the insertion move relative to $u,(s_\alpha,t_\alpha)$ in order to get a path $\mu_2$. Then $\mu_1$ and $\mu_2$ are the same path. \end{mylemma} \begin{proof} The domains of $\mu_1,\mu_2$ are both obtained starting from the same interval $I_l$ and adding edges as explained in Lemma \ref{parallelinsertion}. The edges added are the same, and the maps $\mu_1,\mu_2$ are defined in the same way on those edges. The only thing that changes is the order in which the edges are added, and thus the resulting paths $\mu_1$ and $\mu_2$ are the same. \end{proof} The following lemma says that, if we take a path and we perform two parallel insertion moves at the same couple of edges, then we can consolidate them into a single insertion move (at the same couple of edges). \begin{mylemma}\label{insertionsame} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Let $(s_\alpha,t_\alpha)$ be a couple in the reduction process such that $\sigma(s_\alpha),\sigma(t_\alpha)$ belong to $\core{H}$, and let $u,u'$ be $\text{insertion word}$s for $\sigma$ at $(s_\alpha,t_\alpha)$. Perform on $\sigma$ the insertion move relative to $u,(s_\alpha,t_\alpha)$ and then the insertion move relative to $u',(s_\alpha,t_\alpha)$ in order to obtain a path $\mu_1$. Perform on $\sigma$ the insertion move relative to $uu',(s_\alpha,t_\alpha)$ in order to obtain a path $\mu_2$. Then $\mu_1$ and $\mu_2$ are the same path. \end{mylemma} \begin{proof} Completely analogous to the proof of Lemma \ref{insertioncommutes}. \end{proof} The following Lemma \ref{insertioninsertion} says that, if we take a path and we perform a parallel insertion move at a couple of edges, and then another insertion move at a couple of edges that we just added, then we can again consolidate the two insertion moves into a single one. Notice that this is slightly different from the previous Lemma \ref{insertionsame}. \begin{mylemma}\label{insertioninsertion} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Let $(s_\alpha,t_\alpha)$ be a couple such that $\sigma(s_\alpha)$ and $\sigma(t_\alpha)$ belong to $\core{H}$, and let $u$ be an $\text{insertion word}$ for $\sigma$ at $(s_\alpha,t_\alpha)$; let $\sigma'$ be the reduced path obtained with the insertion move relative to $u,(s_\alpha,t_\alpha)$, and take a maximal reduction process for $f\circ\sigma'$ containing all the couples $(s_1,t_1),...,(s_{l/2},t_{l/2})$. Let $(s',t')$ be a couple in the reduction process for $f\circ\sigma'$ that does not belong to the reduction process for $f\circ\sigma$, and let $u'$ be an $\text{insertion word}$ for $\sigma'$ at $(s',t')$; let $\sigma''$ be the path obtained from $\sigma'$ after performing the insertion move relative to $u',(s',t')$. Then there is an $\text{insertion word}$ $\ol u$ for $\sigma$ at $(s_\alpha,t_\alpha)$ such that, if we perform on $\sigma$ the insertion move relative to $\ol{u},(s_\alpha,t_\alpha)$, we obtain $\sigma''$.
\end{mylemma} \begin{proof} Completely analogous to the proof of Lemma \ref{insertioncommutes}. \end{proof} \subsection{Characterization of all the minimum-degree equations} Recall that $L$ is the number of edges of $G$. Let $d_{min}$ be the minimum possible degree for a non-trivial equation $w\in\mathfrak I_g$. \begin{mythm}\label{main2} Let $w\in\mathfrak I_g$ be a cyclically reduced equation of degree $d_{min}$ and let $\sigma:I_l\rightarrow G$ be the corresponding reduced path. Then there is a cyclically reduced equation $w'\in\mathfrak I_g$ of degree $d_{min}$ with corresponding path $\sigma':I_{l'}\rightarrow G$, and a maximal reduction process for $\sigma'$, such that: (i) The path $\sigma'$ has length $l'\le 16L^2d_{min}$. (ii) The path $\sigma$ can be obtained from $\sigma'$ by means of at most $l'/2$ insertion moves (as in Lemma \ref{parallelinsertion}), each of them performed on a distinct couple of edges of $I_{l'}$. \end{mythm} \begin{proof} If the length of $\sigma$ is $l>16L^2d_{min}$, then by Proposition \ref{findingparallels} we can perform a cancellation move on $\sigma$ in order to get a shorter path. The degree can't strictly increase, by Lemma \ref{degreedecreases}, and can't strictly decrease, since $d_{min}$ was minimum. Thus we obtain a strictly shorter path, whose corresponding equation has the same degree $d_{min}$. We iterate the process, and after a finite number of parallel cancellation moves we obtain a path $\sigma':I_{l'}\rightarrow G$ with corresponding equation of degree $d_{min}$ and of length $l'\le16L^2d_{min}$. Since $\sigma'$ is obtained from $\sigma$ by means of a finite number of parallel cancellation moves, by Lemma \ref{parallelinverses} this means that $\sigma$ can be obtained from $\sigma'$ by means of a sequence of insertion moves of Lemma \ref{parallelinsertion}. Take a sequence of insertion moves $\iota_1,...,\iota_p$ that changes $\sigma'$ into $\sigma$, and has minimum length $p$ among all such sequences. Suppose there are two insertions $\iota_q,\iota_r$ with $q<r$ such that $\iota_r$ acts on a couple of edges that is added by $\iota_q$: then we take an innermost couple of insertions with that property, so that each transformation $\iota_j$ with $q<j<r$ acts on a couple of edges different from the one on which $\iota_r$ acts. In particular, by Lemma \ref{insertioncommutes}, we can change the order in our sequence in order to bring $\iota_r$ adjacent to $\iota_q$, and we can then apply Lemma \ref{insertioninsertion} in order to substitute $\iota_q,\iota_r$ with a single insertion move. This contradicts the minimality of the length $p$ of the sequence. Thus in our sequence $\iota_1,...,\iota_p$ we have that each insertion move acts on a couple of edges of the original interval of definition $I_{l'}$ of $\sigma'$. If two insertion moves $\iota_q,\iota_r$ with $q<r$ act on the same couple of edges of $I_{l'}$, then we reason as above, and by means of Lemmas \ref{insertioncommutes} and \ref{insertionsame} we can substitute them with a single insertion move, contradicting the minimality of $p$. It follows that the insertion moves of the sequence $\iota_1,...,\iota_p$ act on pairwise distinct couples of edges of the original interval of definition $I_{l'}$ of $\sigma'$, and in particular $p\le l'/2$. The conclusion follows. \end{proof} \subsection{Equations of an arbitrary fixed degree} Until now we have focused on the study of the equations of minimum possible degree, but the results can be generalized to equations of any fixed degree.
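Before passing to the general case, we note that the enumeration underlying Corollary \ref{algorithm} is elementary to implement. The following Python fragment is a minimal illustrative sketch (the adjacency-list encoding of the labeled graph and the toy input are assumptions of the sketch, not the construction used in this paper): it enumerates the reduced closed paths at the basepoint up to a prescribed length bound and keeps those whose label is trivial in $F_n$, i.e.\ those for which $f\circ\sigma$ is homotopically trivial.
\begin{verbatim}
# Minimal sketch: enumerate reduced closed paths at the basepoint of a
# labeled graph, up to a length bound, whose label is trivial in F_n.
# Letters 'a','b',... label edges; capitals denote inverse letters.

def freely_reduce(word):
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def trivial_closed_paths(edges, basepoint, max_len):
    # edges: list of (u, v, letter); traversing u -> v reads `letter`,
    # v -> u reads letter.swapcase().
    darts = []                      # darts 2i and 2i+1 are mutual reverses
    for (u, v, c) in edges:
        darts.append((u, v, c))
        darts.append((v, u, c.swapcase()))
    found = []

    def dfs(vertex, last_dart, word):
        if word and vertex == basepoint and freely_reduce(word) == '':
            found.append(word)
        if len(word) == max_len:
            return
        for d, (u, v, c) in enumerate(darts):
            if u != vertex:
                continue
            if last_dart is not None and d == (last_dart ^ 1):
                continue            # immediate backtracking is forbidden
            dfs(v, d, word + c)

    dfs(basepoint, None, '')
    return found

# Toy input: two loops at a single vertex, both labeled 'a'
# (one loop playing the role of the H-part, the other of the <g>-part).
print(trivial_closed_paths([(0, 0, 'a'), (0, 0, 'a')], 0, 4))
\end{verbatim}
Of course, the sketch only finds the closed paths; translating each path into the corresponding equation in $H*\gen{x}$, and computing its degree, requires keeping track of which wedge summand each edge belongs to, as in Lemma \ref{degree}.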
Let $d\ge1$ be an integer. Let $L$ be the number of edges of the graph $G$. The following proposition is similar to Proposition \ref{findingparallels}, the difference being that this time we are looking for a parallel cancellation move which is degree-preserving. \begin{myprop}\label{findingparallels2} Let $w\in\mathfrak I_g$ be a cyclically reduced equation of degree $d$ and let $\sigma:I_l\rightarrow G$ be the corresponding path; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be any maximal reduction process for $f\circ\sigma$. Suppose $l>32L^4d^2+16L^3d$. Then the reduction process contains two $\text{parallel}$ couples $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$ such that the corresponding parallel cancellation move is degree-preserving. \end{myprop} \begin{proof} Take the domain $I_l$ of $\sigma$ and remove all the edges $r$ with the following property: $r$ belongs to a couple $(s_i,t_i)$ such that at least one of $\sigma(s_i),\sigma(t_i)$ is an edge of $\bcore{\gen{g}}$. By Lemma \ref{degree}, for every edge $e$ of $\bcore{\gen{g}}$ there are exactly $d$ edges of $I_l$ that are sent to $e$; and $\bcore{\gen{g}}$ contains at most $L$ edges. This means that we removed from $I_l$ at most $2Ld$ edges, and thus there remain at most $2Ld+1$ connected components, which we call $C_1,...,C_a$ for $a\le2Ld+1$. Since $l\ge16L^3d(2Ld+1)+1$, there is at least one connected component $\ol{C}\in\{C_1,...,C_a\}$ of length at least $16L^3d+1$. We observe that there is no couple $(s_i,t_i)$ with both $s_i,t_i\in\ol{C}$: otherwise, the interval $[\omeno{s_i},\opiu{t_i}]\subseteq\ol{C}$ would contain an innermost cancellation, and thus by Lemma \ref{innermostcouple} we would find an edge of $\ol{C}$ which is sent to $\bcore{\gen{g}}$, a contradiction. Suppose the connected component $\ol{C}$ contains at least $8L^3d+1$ edges $s_i$ belonging to couples $(s_i,t_i)$ of the reduction process (otherwise $\ol{C}$ contains at least $8L^3d+1$ edges $t_i$ belonging to such couples, and the reasoning is analogous). To each such edge $s_i$, we associate the quintuple $(\sigma(s_i),\epsilon,\sigma(t_i),\delta,C_k)$ where $\sigma(s_i),\sigma(t_i)$ are edges of $G$, $\epsilon,\delta\in\{+1,-1\}$ record the orientations with which $\sigma$ crosses $\sigma(s_i)$ and $\sigma(t_i)$, and $C_k\in\{C_1,...,C_a\}\setminus\{\ol{C}\}$ is the connected component to which $t_i$ belongs. Since we have at least $8L^3d+1$ edges $s_i$ in $\ol{C}$ and at most $L\cdot 2\cdot L\cdot 2\cdot (2Ld)=8L^3d$ possible quintuples, there are at least two edges $s_{i_1},s_{i_2}$ with the same associated quintuple $(\sigma(s_{i_1}),\epsilon,\sigma(t_{i_1}),\delta,C_k)=(\sigma(s_{i_2}),\epsilon,\sigma(t_{i_2}),\delta,C_k)$. It immediately follows that $(s_{i_1},t_{i_1})$ and $(s_{i_2},t_{i_2})$ are parallel couples. Without loss of generality we can assume that $\omeno{s_{i_1}}<\omeno{s_{i_2}}$; we have that the interval $[\omeno{s_{i_1}},\omeno{s_{i_2}}]$ is contained in $\ol{C}$ and the interval $[\opiu{t_{i_2}},\opiu{t_{i_1}}]$ is contained in $C_k$. Thus, when we perform the cancellation move relative to the parallel couples $(s_{i_1},t_{i_1})$ and $(s_{i_2},t_{i_2})$, we only remove edges whose image is in $\bcore{H}$. We conclude from Lemma \ref{degree} that the cancellation move is degree-preserving, as desired. \end{proof} We are now ready to state and prove the analogues of Theorem \ref{main} and Theorem \ref{main2}.
\begin{mythm}\label{main3} Suppose $\mathfrak I_g$ contains a non-trivial equation of degree $d$. Then $\mathfrak I_g$ contains a non-trivial equation $w$ of degree $d$ such that the corresponding path $\sigma:I_l\rightarrow G$ has length $l\le32L^4d^2+16L^3d$. \end{mythm} \begin{proof} Take any non-trivial equation in $\mathfrak I_g$ of degree $d$ and such that the corresponding path $\sigma:I_l\rightarrow G$ has minimum length $l$; in particular this implies that the equation is cyclically reduced. If $l>32L^4d^2+16L^3d$ then by Proposition \ref{findingparallels2} we can perform a degree-preserving cancellation move on $\sigma$, and thus we can find a non-trivial equation in $\mathfrak I_g$ of degree $d$ whose corresponding path is strictly shorter, contradiction. Thus we must have $l\le32L^4d^2+16L^3d$, and the conclusion follows. \end{proof} \begin{mycor}\label{algorithm3} There is an algorithm that, given $H\le F_n$ finitely generated and $g\in F_n$ and an integer $d\ge1$, tells us whether $\mathfrak I_g$ contains non-trivial equations of degree $d$, and, if so, produces an equation $w\in\mathfrak I_g$ of degree $d$. \end{mycor} \begin{mythm}\label{main4} Let $w\in\mathfrak I_g$ be a cyclically reduced equation of degree $d$ and let $\sigma:I_l\rightarrow G$ be the corresponding reduced path. Then there is a cyclically reduced equation $w'\in\mathfrak I_g$ of degree $d$ with corresponding path $\sigma':I_{l'}\rightarrow G$, and a maximal reduction process for $\sigma'$, such that: (i) The path $\sigma'$ has length $l'\le32L^4d^2+16L^3d$. (ii) The path $\sigma$ can be obtained from $\sigma'$ by means of at most $l'/2$ insertion moves (as in Lemma \ref{parallelinsertion}), each of them performed on a distinct couple of edges of $I_{l'}$. \end{mythm} \begin{proof} Completely analogous to the proof of Theorem \ref{main2}. \end{proof} \subsection{The set of possible degrees}\label{SubsectionD} Let $H\le F_n$ be a finitely generated subgroup, let $g\in F_n$ be an element that depends on $H$, and let $\mathfrak I_g\trianglelefteq H*\gen{x}$ be the ideal of the equations for $g$ over $H$. \begin{mydef}\label{defDg} Define $D_g=\{d\in\mathbb N :\text{ there is a non-trivial equation }w\in\mathfrak I_g\text{ of degree }d\}$. \end{mydef} \begin{mylemma}\label{obtainingdegrees} If $d,d'\in D_g$ and $k\ge0$ then $d+d'+2k\in D_g$. \end{mylemma} \begin{proof} Let $w\in\mathfrak I_g$ be an equation of degree $d$: up to cyclic permutation, we can assume that $w$ is of the form $c_1x^{e_1}...c_\alpha x^{e_\alpha}$ with $c_1,...,c_\alpha\in H\setminus\{1\}$ and $e_1,...,e_\alpha\in\mathbb Z\setminus\{0\}$. Similarly, let $w'\in\mathfrak I_g$ be an equation of degree $d'$, and similarly we assume that $w'=c_1'x^{e_1'}...c_{\beta}'x^{e_\beta'}$ with $c_1',...,c_\beta'\in H\setminus\{1\}$ and $e_1',...,e_\beta'\in\mathbb Z\setminus\{0\}$. Without loss of generality, also assume that $e_\beta'>0$ and we take $h\in H\setminus\{1,c_1\}$. Then $w''=\ol{h}wh\ol{x}^kw'x^k$ belongs to $\mathfrak I_g$ and has degree $d+d'+2k$, for any $k\ge0$. The conclusion follows. \end{proof} Denote with $2\mathbb N$ the set of non-negative even numbers. \begin{mythm}\label{degreeset} Exactly one of the following possibilities takes place: (i) $D_g$ contains an odd number and $\mathbb N\setminus D_g$ is finite. (ii) $D_g$ contains only even numbers and $2\mathbb N\setminus D_g$ is finite. 
\end{mythm} \begin{proof} If $\mathfrak I_g$ contains only equations of even degree, then we take any equation of even degree $d$, and by Lemma \ref{obtainingdegrees} we are able to obtain equations of degree $d+d+2k$ for every $k\ge0$. Thus in this case we have that $2\mathbb N\setminus D_g$ is finite. Suppose now $\mathfrak I_g$ contains an equation of odd degree $d$. Then by Lemma \ref{obtainingdegrees} we are able to obtain equations of degree $d+d+2k$ for every $k\ge0$, and thus equations of every sufficiently large even degree. In particular we are able to obtain an equation of degree $2d$, and thus by Lemma \ref{obtainingdegrees} we are able to obtain equations of degree $2d+d+2k$ for every $k\ge0$, and thus equations of every sufficiently large odd degree. Thus in this case we have that $\mathbb N\setminus D_g$ is finite. \end{proof} In order to understand whether we fall into case (i) or (ii) of Theorem \ref{degreeset}, it is enough to look at a set of normal generators for $\mathfrak I_g$. \begin{mylemma}\label{evendegree} Let $H\le F_n$ be a finitely generated subgroup and let $g\in F_n$ be an element that depends on $H$. Suppose the set of equations $W\subseteq\mathfrak I_g$ generates $\mathfrak I_g$ as a normal subgroup of $H*\gen{x}$, and suppose every equation $w\in W$ has even degree. Then every equation in $\mathfrak I_g$ has even degree. \end{mylemma} \begin{proof} Consider the homomorphism $\phi:H*\gen{x}\rightarrow\mathbb Z/2\mathbb Z$ defined by $\phi(h)=0$ for every $h\in H$ and $\phi(x)=1$. Observe that $\phi$ sends equations of even degree to $0$ and equations of odd degree to $1$. Since every $w\in W$ has even degree, we have that $W$ is contained in $\ker\phi$. But then the normal subgroup $\mathfrak I_g$ generated by $W$ is contained in $\ker\phi$ too, and thus $\mathfrak I_g$ only contains equations of even degree. \end{proof} \begin{mythm}\label{algorithm4} Given $H\le F_n$ finitely generated and $g\in F_n$ that depends on $H$, there is an algorithm that: (a) Determines whether we fall into case (i) or (ii) of Theorem \ref{degreeset}. (b) Computes the finite set $\mathbb N\setminus D_g$ or $2\mathbb N\setminus D_g$ respectively. \end{mythm} \begin{proof} Let $w_1,...,w_k$ be a finite set of normal generators for $\mathfrak I_g$, so that $\mathfrak I_g=\ggen{w_1,...,w_k}$; such a set can be obtained with the algorithm of Theorem \ref{idealfingen}. According to Lemma \ref{evendegree}, if one of $w_1,...,w_k$ has odd degree then we fall into case (i) of Theorem \ref{degreeset}, otherwise we fall into case (ii) of Theorem \ref{degreeset}. If we fall into case (i), then we take a generator $w_i$ of odd degree $d_i$, and, arguing as in the proof of Theorem \ref{degreeset}, we have that $\mathbb N\setminus D_g\subseteq\{1,...,3d_i\}$. For each degree $d\in\{1,...,3d_i\}$ we use Corollary \ref{algorithm3} to determine whether $d$ belongs to $D_g$. If we fall into case (ii), we perform an analogous procedure. \end{proof} \section{Examples}\label{SectionExamples} We now provide a few examples to illustrate the techniques introduced in the present paper. In each example $D_g$ denotes the set introduced in Definition \ref{defDg} and $d_{min}$ denotes the minimum of $D_g$. \subsection{Cyclic subgroups}\label{examplecyclic} Let $F_n=\gen{a_1,...,a_n}$ and suppose that $H$ has rank $1$, say $H=\gen{h}$ for some $h\in F_n$ with $h\not=1$. In order for an element $g$ to depend on $H$, we must have that $g,h$ belong to the same cyclic subgroup of $F_n$; equivalently, since we are in a free group, $g$ and $h$ must commute (a quick computational check of this condition is sketched below).
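Recall that two non-trivial elements of a free group commute if and only if they are powers of a common element. The following Python check is a minimal illustrative sketch of this criterion (the encoding of free-group elements as strings over the generators, with capital letters denoting inverses, is an assumption of the sketch, not a convention of the paper).
\begin{verbatim}
# Minimal sketch: for non-trivial h, the element g depends on the
# cyclic subgroup H = <h> of F_n if and only if g and h commute,
# i.e. gh and hg have the same free reduction.

def freely_reduce(word):
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def depends_on_cyclic(g, h):
    return freely_reduce(g + h) == freely_reduce(h + g)

print(depends_on_cyclic('aa', 'aaaaa'))   # True:  g = a^2, h = a^5
print(depends_on_cyclic('ab', 'aaaaa'))   # False: g = ab, h = a^5
\end{verbatim}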
We can use $\gen{H,g}$ as ambient free group instead of $F_n$: without loss of generality, in the following we assume that $F_n=\gen{a}$ and that $H=\gen{h}$ where $h=a^m$ with $m\ge1$, and that $g=a^k$ with $k\ge0$ coprime to $m$. The graph $G=\bcore{H}\vee\bcore{\gen{g}}$ here has rank $2$ while $\bcore{\gen{H,g}}$ has rank $1$. This means that the algorithm of Theorem \ref{idealfingen} produces a single generator for the ideal $\mathfrak I_g\trianglelefteq H*\gen{x}$. One possible such generator $w_{m,k}$, for each pair of coprime integers $m\ge1$ and $k\ge0$, can be obtained by means of the following recursive formula: $$\begin{cases} w_{1,0}(h,x)=\ol{x}\\ w_{m,k}(h,x)=w_{m-k,k}(h\ol{x},x) \text{ for } m>k\\ w_{m,k}(h,x)=w_{m,k-m}(h,x\ol{h}) \text{ for } m\le k \end{cases}$$ Moreover, with this definition it is possible to prove by induction that $w_{m,k}(h,x)$ contains $k$ occurrences of $h$, no occurrence of $\ol{h}$, no occurrence of $x$, and $m$ occurrences of $\ol{x}$. In particular $w_{m,k}\in\mathfrak I_g$ is an equation of degree $m$. \begin{myrmk} In the case $h=a^5$ and $g=a^2$ we have the generator $w_{5,2}(h,x)=h\ol{x}^2h\ol{x}^3$ for the ideal $\mathfrak I_g$. We observe that the most immediate candidate $h^2\ol{x}^5$ does not work: it is contained in $\mathfrak I_g$, but it does not generate the whole ideal. \end{myrmk} \begin{myrmk} The following is a well-known property of one-relator groups due to Magnus: if two elements of a free group generate the same normal subgroup, then they coincide, up to conjugation and inversion. In particular, the generator $w_{m,k}$ defined above is essentially the unique generator for the ideal $\mathfrak I_g$. \end{myrmk} Let $w\in\gen{h,x}$ be a non-trivial cyclically reduced element, which up to conjugation can be written in the form $w=h^{e_1}x^{f_1}...h^{e_r}x^{f_r}$ with $r\ge1$ and $e_1,...,e_r,f_1,...,f_r\in\mathbb Z\setminus\{0\}$. The condition $w\in\mathfrak I_g$ is equivalent to $(e_1+...+e_r)m+(f_1+...+f_r)k=0$, and since $m,k$ are coprime this means that for some $p\in\mathbb Z$ we have $f_1+...+f_r=pm$ and $e_1+...+e_r=-pk$. The degree of the equation is $d=\abs{f_1}+...+\abs{f_r}$. Suppose $m=1$. Then we have $w_{1,k}=h^k\ol{x}$. In this case $d_{min}=1$ and $D_g=\mathbb N\setminus\{0\}$. Suppose $m\ge2$ is even. Then $d_{min}=2$ and $D_g=2\mathbb N\setminus\{0\}$. For the $\supseteq$ inclusion, we have the equation $[h,x^s]$ of degree $2s$ for each $s\ge1$. For the $\subseteq$ inclusion, notice that the unique generator $w_{m,k}$ has even degree, and thus by Lemma \ref{evendegree} each equation has even degree. Suppose $m\ge3$ is odd. Then $d_{min}=2$ and $D_g=\{d\text{ even} : d\ge2\}\cup\{d\text{ odd} : d\ge m\}$. For the $\supseteq$ inclusion, we have the equation $[h,x]$ of degree $2$ and the equation $w_{m,k}$ of degree $m$, and we can use Lemma \ref{obtainingdegrees}. For the $\subseteq$ inclusion, we notice that, if an equation is written in the form $w=h^{e_1}x^{f_1}...h^{e_r}x^{f_r}$ as above, then either $f_1+...+f_r=0$, in which case the degree $d=\abs{f_1}+...+\abs{f_r}$ is even (since $f_1+...+f_r$ and $\abs{f_1}+...+\abs{f_r}$ have the same parity), or $m\le\abs{f_1+...+f_r}\le\abs{f_1}+...+\abs{f_r}=d$. \subsection{An ideal with only even-degree equations}\label{example46} Let $F_2=\gen{a,b}$ and consider the subgroup $H=\gen{h_1,h_2}$ with $h_1=ba$ and $h_2=ab^2\ol{a}$ and the element $g=a$.
We can build the corresponding graph $G$, see figure \ref{example46G}, and we have that $\pi_1(G,*)$ is a free group with three generators $[\mu_{h_1}],[\mu_{h_2}],[\mu_g]$, which are the homotopy classes of the reduced paths $\mu_{h_1},\mu_{h_2},\mu_g$ corresponding to the elements $h_1,h_2\in H$ and $g$ respectively. We can perform a sequence of rank-preserving folding operations on $G$, see figure \ref{example46fold}, and we end up with a rose $R'$ with one $a$-labeled edge $e_1$ and two $b$-labeled edges $e_2,e_3$. Let $p:(G,*)\rightarrow(R',*)$ be the map given by the composition of the folding operations, and notice that by Proposition \ref{folding2} this is a pointed homotopy equivalence: a pointed homotopy inverse can be built following the chain of folding operations, and is given by $q:(R',*)\rightarrow(G,*)$ which sends the edge $e_1$ to the path $\mu_g$, the edge $e_2$ to the path $\mu_{h_1}\ol\mu_g$ and the edge $e_3$ to the path $\ol\mu_g\mu_{h_2}\mu_g\mu_g\ol\mu_{h_1}$. In order to obtain generators for the kernel $\mathfrak I_g\trianglelefteq H*\gen{x}$ we have to look at the image $q_*(e_3\ol{e}_2)$: we obtain that the kernel is generated (as a normal subgroup) by just one equation, namely $\mathfrak I_g=\ggen{\ol{x}h_2xx\ol{h}_1x\ol{h}_1}$. \begin{figure} \caption{The graph $G$ of Example \ref{example46}.}\label{example46G} \end{figure} \begin{figure} \caption{A maximal rank-preserving folding sequence for the graph $G$ of Example \ref{example46}.}\label{example46fold} \end{figure} We observe that this unique generator has even degree, and thus Lemma \ref{evendegree} tells us that every equation in $\mathfrak I_g$ has even degree. We shall explain why there is no equation of degree $2$, why there is exactly one equation of degree $4$ up to conjugation and inversion, and why there are equations of degree $6$. By Lemma \ref{obtainingdegrees} it follows that $d_{min}=4$ and $D_g=2\mathbb N\setminus\{0,2\}$. \begin{myrmk} In order to characterize all the equations of degree $2,4,6$, we could use Theorem \ref{main4}; this is too long to do by hand, but quite easy with the aid of a computer. It is also possible to prove these facts with some combinatorics of the cancellations between words; we do not provide a full proof here, but rather a sketch. Consider the map between free groups $\psi:\gen{h_1,h_2,x}\rightarrow\gen{h_1,x}$ with $\psi(h_1)=h_1,\psi(x)=x,\psi(h_2)=xh_1\ol{x}h_1\ol{x}\ol{x}$ and notice that $\mathfrak I_g=\ker\psi$. Up to conjugation, an equation $w\in\mathfrak I_g$ can be written as a reduced word $w(h_1,h_2,x)=u_1(h_1,h_2)x^{e_1}...u_r(h_1,h_2)x^{e_r}$. We now substitute each occurrence of $h_2$ with $xh_1\ol{x}h_1\ol{x}\ol{x}$, and each occurrence of $\ol{h_2}$ with $xx\ol{h}_1x\ol{h}_1\ol{x}$, and after this substitution we reduce the obtained word, until we get the trivial word. During the reduction process, each block $xh_1\ol{x}h_1\ol{x}\ol{x}$ and $xx\ol{h}_1x\ol{h}_1\ol{x}$, obtained from an occurrence of $h_2$ or $\ol{h}_2$, will completely cancel at some point: we take the occurrence of $h_2$ or $\ol{h}_2$ such that the corresponding block is the first to completely cancel during the reduction process. We now look at the word $w$ near that occurrence of $h_2$ or $\ol{h}_2$, and we obtain that $w$ contains at least one of $$h_2xx\ol{h}_1x,\ \ol{x}h_2xx,\ x\ol{h}_1\ol{x}h_2x,\ x\ol{h}_1x\ol{h}_1\ol{x}h_2,\ h_2xx\ol{h}_1\ol{x}\ol{h}_2,\ \ol{h}_2xx\ol{h}_1\ol{x}h_2$$ (or of their inverses) as a subword.
These can be substituted with (respectively) $$xh_1,\ h_1\ol{x}h_1,\ h_1\ol{x},\ \ol{x},\ xx\ol{h}_1\ol{x},\ xx\ol{h}_1\ol{x}$$ in order to get a shorter (possibly not reduced) equation. This immediately implies that $\mathfrak I_g$ contains no equation of degree $2$, and it also allows us to deduce that the only equations of degree $4$ are the conjugates of the generator. With some more work, it is also possible to give a characterization of all the degree $6$ equations. \end{myrmk} \subsection{An ideal with both even-degree and odd-degree equations}\label{example23} In $F_2=\gen{a,b}$, consider the subgroup $H=\gen{h_1,h_2}$ with $h_1=b$ and $h_2=ababa$ and the element $g=a$. We see the corresponding graph $G$ in figure \ref{example23G}. We can now proceed as in Example \ref{example46}: we choose a maximal sequence of rank-preserving folding operations for $G$, we build a homotopy inverse to the sequence of folding operations, and we obtain a generator for the normal subgroup $\mathfrak I_g\trianglelefteq H*\gen{x}$. Whatever sequence of folding operations is chosen, one always gets the same generator, up to inversion and cyclic permutation, namely $\mathfrak I_g=\ggen{\ol{h}_2xh_1xh_1x}$. \begin{figure} \caption{The graph $G$ of Example \ref{example23}.}\label{example23G} \end{figure} We have that $\mathfrak I_g$ contains no equation of degree $1$. It contains equations of degree $2$, which are exactly the ones of the form $[(h_2h_1)^i,xh_1]$ for $i\not=0$, up to conjugation and inversion. It also contains equations of degree $3$ (possibly essentially different from the generator). By Lemma \ref{obtainingdegrees} it follows that $d_{min}=2$ and $D_g=\{d : d\ge2\}$. We observe that the equations of minimum possible degree are not enough in this case to generate the whole ideal: in fact, according to Lemma \ref{evendegree}, equations of degree $2$ generate a normal subgroup containing only even-degree equations. \begin{myrmk} As in Example \ref{example46}, it is possible to characterize the equations of degree $2$ and $3$ with Theorem \ref{main4}, using a computer, or it is possible to do it by hand with some combinatorics of the cancellations inside the words; again, for this second method, we provide a sketch below. Consider the map between free groups $\psi:\gen{h_1,h_2,x}\rightarrow\gen{h_1,x}$ with $\psi(h_1)=h_1,\psi(x)=x,\psi(h_2)=xh_1xh_1x$ and notice that $\mathfrak I_g=\ker\psi$. Up to conjugation, an equation $w\in\mathfrak I_g$ can be written as a reduced word $w(h_1,h_2,x)=u_1(h_1,h_2)x^{e_1}...u_r(h_1,h_2)x^{e_r}$. We now substitute each occurrence of $h_2$ with $xh_1xh_1x$ and each occurrence of $\ol{h_2}$ with $\ol{x}\ol{h}_1\ol{x}\ol{h}_1\ol{x}$; as in Example \ref{example46} we take the occurrence of $h_2$ or $\ol{h}_2$ such that the corresponding $xh_1xh_1x$ or $\ol{x}\ol{h}_1\ol{x}\ol{h}_1\ol{x}$ is the first to completely cancel, and we look at the word $w$ near that occurrence of $h_2$ or $\ol{h}_2$. We obtain that $w$ contains at least one of $$\ol{h}_2xh_1x,\ x\ol{h}_2x,\ xh_1x\ol{h}_2,\ \ol{h}_2xh_1h_2,\ h_2h_1x\ol{h}_2$$ (or of their inverses) as a subword. These can be substituted with (respectively) $$\ol{x}\ol{h}_1,\ \ol{h}_1\ol{x}\ol{h}_1,\ \ol{h}_1\ol{x},\ h_1x,\ xh_1$$ in order to get a shorter (possibly not reduced) equation.
By dealing with a few cases it can be proved that the equations of degree $2$ are exactly the ones of the form $[(h_2h_1)^i,xh_1]$ for $i\not=0$, up to conjugation and inversion, and one can produce equations of degree $3$ which are essentially different from the generator. \end{myrmk} \subsection{An ideal with two generators}\label{example234} In $F_2=\gen{a,b}$, consider the subgroup $H=\gen{h_1,h_2,h_3}$ with $h_1=a^2\ol{b}\ol{a}$ and $h_2=a^3$ and $h_3=ba\ol{b}$ and the element $g=a^2\ol{b}$. We can see the corresponding graph $G$ in figure \ref{example234G} and a maximal sequence of rank-preserving folding operations in figure \ref{example234fold}. The group $\pi_1(G,*)$ is a free group with four generators $[\mu_{h_1}],[\mu_{h_2}],[\mu_{h_3}],[\mu_g]$, which are the homotopy classes of the reduced paths $\mu_{h_1},\mu_{h_2},\mu_{h_3},\mu_g$ corresponding to the elements $h_1,h_2,h_3\in H$ and $g$ respectively. At the end of the sequence of folding operations we obtain a rose $R'$ with one $b$-labeled edge $e_1$ and three $a$-labeled edges $e_2,e_3,e_4$. The map $p:(G,*)\rightarrow(R',*)$ given by the composition of the folding operations is a homotopy equivalence, according to Proposition \ref{folding2}, and a pointed homotopy inverse is $q:(R',*)\rightarrow(G,*)$ which sends the edge $e_1$ to the path $\mu_{h_3}\ol\mu_g\ol\mu_{h_1}\mu_g$, the edge $e_2$ to the path $\ol\mu_g\mu_{h_1}\mu_g\ol\mu_{h_3}\ol\mu_g\mu_{h_2}$, the edge $e_3$ to the path $\ol\mu_{h_1}\mu_g$ and the edge $e_4$ to the path $\ol\mu_g\mu_{h_1}\mu_g\mu_{h_3}\ol\mu_g\ol\mu_{h_1}\mu_g$. We look at the images $q_*(e_2\ol{e}_3)$ and $q_*(e_4\ol{e}_3)$ and we obtain that the kernel $\mathfrak I_g\trianglelefteq H*\gen{x}$ is generated (as a normal subgroup) by two equations, namely $\mathfrak I_g=\ggen{\ol{x}h_1x\ol{h}_3\ol{x}h_2\ol{x}h_1,\ol{x}h_1xh_3\ol{x}}$. \begin{figure} \caption{The graph $G$ of Example \ref{example234}.}\label{example234G} \end{figure} \begin{figure} \caption{A maximal rank-preserving folding sequence for the graph $G$ of Example \ref{example234}.}\label{example234fold} \end{figure} It is easy to show that $\mathfrak I_g$ contains equations of degree $2$, but no equations of degree $1$; moreover, the second generator above has degree $3$. It follows from Lemma \ref{obtainingdegrees} that $d_{min}=2$ and $D_g=\mathbb N\setminus\{0,1\}$. We observe that, despite $d_{min}=2$, equations of degree $2$ are not enough to generate the whole ideal $\mathfrak I_g$; in fact, the equations of degree $2$ generate a normal subgroup containing only even-degree equations (see Lemma \ref{evendegree}), while $\mathfrak I_g$ also contains equations of odd degree. \section{Equations in more variables}\label{SectionMultivariate} We point out that most of the results of this paper can be generalized to equations in more than one variable. Let $F_n$ be a free group generated by $n$ elements $a_1,...,a_n$. Let $H\le F_n$ be a finitely generated subgroup and let $\gen{x_1},\gen{x_2},...,\gen{x_m}$ be infinite cyclic groups. \begin{mydef} An \textbf{equation} with coefficients in $H$ is an element $w\in H*\gen{x_1}*...*\gen{x_m}$. \end{mydef} \begin{mydef} Define the \textbf{multi-degree} of an equation $w\in H*\gen{x_1}*...*\gen{x_m}$ as the $m$-tuple $(d_1,...,d_m)$ of non-negative integers, where $d_i$ is the number of occurrences of $x_i$ and of $\ol{x}_i$ in the cyclic reduction of $w$. \end{mydef} For $(g_1,...,g_m)\in(F_n)^m$ we define the map $\varphi_{g_1,...,g_m}:H*\gen{x_1}*...*\gen{x_m}\rightarrow F_n$ such that $\restr{\varphi_{g_1,...,g_m}}{H}$ is the inclusion and $\varphi_{g_1,...,g_m}(x_i)=g_i$ for $i=1,...,m$.
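The evaluation map $\varphi_{g_1,...,g_m}$ can be computed directly by substitution and free reduction; the following Python fragment is a minimal sketch of this computation (the string encoding of elements of $F_n$, the dictionary of coefficients and values, and the two-variable toy input are assumptions of the sketch). A word lies in $\ker\varphi_{g_1,...,g_m}$ exactly when it evaluates to the empty reduced word.
\begin{verbatim}
# Minimal sketch: evaluate a word of H * <x_1> * ... * <x_m> at the
# tuple (g_1,...,g_m) by substitution and free reduction; the word is
# an equation with solution (g_1,...,g_m) iff the result is empty.
# Elements of F_n are strings; capital letters denote inverses.

def freely_reduce(word):
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def inverse(word):
    return ''.join(c.swapcase() for c in reversed(word))

def evaluate(eq, table):
    # eq: list of (symbol, exponent) pairs, e.g. [('h1', 1), ('x2', -1)];
    # table: dict sending each symbol to a word of F_n.
    result = ''
    for sym, exp in eq:
        piece = table[sym] if exp > 0 else inverse(table[sym])
        result += piece * abs(exp)
    return freely_reduce(result)

# Toy two-variable instance: H = <h1> with h1 = ab, and (g1, g2) = (a, b).
table = {'h1': 'ab', 'x1': 'a', 'x2': 'b'}
eq = [('h1', 1), ('x2', -1), ('x1', -1)]      # the word h1 x2^{-1} x1^{-1}
print(evaluate(eq, table) == '')              # True: it lies in the ideal
\end{verbatim}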
\begin{mydef} We say that an $m$-tuple $(g_1,...,g_m)\in(F_n)^m$ is a \textbf{solution} to the equation $w\in H*\gen{x_1}*...*\gen{x_m}$ if $w\in\ker\varphi_{g_1,...,g_m}$. \end{mydef} \begin{mydef} For $(g_1,...,g_m)\in(F_n)^m$ we define the \textbf{ideal} $\mathfrak I_{g_1,...,g_m}$ to be the normal subgroup $\mathfrak I_{g_1,...,g_m}=\ker\varphi_{g_1,...,g_m}\trianglelefteq H*\gen{x_1}*...*\gen{x_m}$. \end{mydef} \begin{mydef} We say that $(g_1,...,g_m)\in(F_n)^m$ \textbf{depends on $H$} if $\mathfrak I_{g_1,...,g_m}$ is non-trivial. \end{mydef} Fix now an $m$-tuple $(g_1,...,g_m)\in(F_n)^m$. As we did in the one-variable case, we now want to see equations as paths in a suitable graph. Let $G=\bcore{H}\vee\bcore{\gen{g_1}}\vee...\vee\bcore{\gen{g_m}}$ be the $F_n\text{-labeled graph}$ given by the disjoint union of $\bcore{H},\bcore{\gen{g_1}},...,\bcore{\gen{g_m}}$, where we identify all the basepoints to a unique point. Let also $f:G\rightarrow R_n$ be the labeling map. With the same argument as in Theorem \ref{idealfingen} we can prove the following theorem: \begin{mythm} We have the following: (i) The ideal $\mathfrak I_{g_1,...,g_m}\trianglelefteq H*\gen{x_1}*...*\gen{x_m}$ is finitely generated as a normal subgroup. (ii) The set of generators for $\mathfrak I_{g_1,...,g_m}$ can be taken to be a subset of a basis for $H*\gen{x_1}*...*\gen{x_m}$. (iii) There is an algorithm that, given $H$ and $g_1,...,g_m$, computes a finite set of normal generators for $\mathfrak I_{g_1,...,g_m}$ which is also a subset of a basis for $H*\gen{x_1}*...*\gen{x_m}$. \end{mythm} We have an isomorphism $\theta:H*\gen{x_1}*...*\gen{x_m}\rightarrow\pi_1(G,*)$ so that we can define the same correspondence as in Definitions \ref{defcorrpath} and \ref{defcorrequation}. Non-trivial equations $w\in\mathfrak I_{g_1,...,g_m}$ correspond to reduced paths $\sigma:I_l\rightarrow G$ with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial (relative to its endpoints). The following three lemmas relate the degree of an equation to its corresponding path, and the proofs are exactly the same as for Lemmas \ref{degree}, \ref{innermostcouple} and \ref{innermostbounded}. \begin{mylemma} Let $\sigma:I_l\rightarrow G$ be a cyclically reduced path. For $i=1,...,m$ let $e_i$ be any edge of $G$ that belongs to the subgraph $\core{\gen{g_i}}$. Let also $(d_1,...,d_m)$ be the multi-degree of the equation $w$ corresponding to $\sigma$. Then the path $\sigma$ crosses the edge $e_i$ exactly $d_i$ times (in either direction). \end{mylemma} \begin{mylemma} Let $\sigma:I_l\rightarrow G$ be a reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial (relative to its endpoints); let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal reduction process for $f\circ\sigma$. Let $(s_i,t_i)$ be an innermost cancellation. Then, among $\sigma(s_i)$ and $\sigma(t_i)$, one is an edge of $\bcore{H}$ and the other is an edge of $\bcore{\gen{g_1}}\vee...\vee\bcore{\gen{g_m}}$. \end{mylemma} \begin{mylemma} Let $\sigma:I_l\rightarrow G$ be a cyclically reduced path with $\sigma(0)=\sigma(1)=*$ and such that $f\circ\sigma$ is homotopically trivial (relative to its endpoints); let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be a maximal cancellation process for $f\circ\sigma$. Let $(d_1,...,d_m)$ be the multi-degree of the equation $w\in\mathfrak I_{g_1,...,g_m}$ corresponding to $\sigma$. Then the reduction process contains at most $2(d_1+...+d_m)$ innermost cancellations. 
\end{mylemma} We can define a parallel cancellation move, in the exact same way as in Definition \ref{parallel} and Lemma \ref{parallelcancellation}. Lemma \ref{degreepreserving} about degree-preserving parallel cancellation moves remains true too, where degree-preserving means that the equations $w,w'$, before and after the parallel cancellation move respectively, have the same multi-degree $(d_1,...,d_m)=(d_1',...,d_m')$. It is also possible to define a parallel insertion move, exactly as in Definition \ref{insertionpath} and Lemma \ref{parallelinsertion}. Lemmas \ref{parallelinverses}, \ref{parallelinverses2}, \ref{insertioncommutes}, \ref{insertionsame}, \ref{insertioninsertion} remain true. In the exact same way as we proved Proposition \ref{findingparallels2} and Theorems \ref{main3} and \ref{main4}, we can prove the following. \begin{myprop} Let $w\in\mathfrak I_{g_1,...,g_m}$ be a cyclically reduced equation of multi-degree $(d_1,...,d_m)$ and let $\sigma:I_l\rightarrow G$ be the corresponding path; let $(s_1,t_1),...,(s_{l/2},t_{l/2})$ be any maximal reduction process for $f\circ\sigma$. Suppose $l>32L^4(d_1+...+d_m)^2+16L^3(d_1+...+d_m)$. Then the reduction process contains two $\text{parallel}$ couples $(s_\alpha,t_\alpha),(s_\beta,t_\beta)$ such that the corresponding parallel cancellation move is degree-preserving. \end{myprop} \begin{mythm} Suppose $\mathfrak I_{g_1,...,g_m}$ contains a non-trivial equation of degree $(d_1,...,d_m)$. Then $\mathfrak I_{g_1,...,g_m}$ contains non-trivial equation $w$ of degree $(d_1,...,d_m)$ such that the corresponding path $\sigma:I_l\rightarrow G$ has length $l\le 32L^4(d_1+...+d_m)^2+16L^3(d_1+...+d_m)$. \end{mythm} \begin{mycor} There is an algorithm that, given $H,g_1,...,g_m$ and an $m$-tuple $(d_1,...,d_m)$ of non-negative integers, tells us whether $\mathfrak I_{g_1,...,g_m}$ contains non-trivial equations of multi-degree $(d_1,...,d_m)$, and, if so, produces an equation $w\in\mathfrak I_{g_1,...,g_m}$ of multi-degree $(d_1,...,d_m)$. \end{mycor} \begin{mythm} Let $w\in\mathfrak I_{g_1,...,g_m}$ be a cyclically reduced equation of multi-degree $(d_1,...,d_m)$ and let $\sigma:I_l\rightarrow G$ be the corresponding path. Then there is a cyclically reduced equation $w'\in\mathfrak I_{g_1,...,g_m}$ of degree $(d_1,...,d_m)$ with corresponding path $\sigma':I_{l'}\rightarrow G$, and a maximal reduction process for $\sigma'$, such that: (i) The path $\sigma'$ has length $l'\le 32L^4(d_1+...+d_m)^2+16L^3(d_1+...+d_m)$. (ii) The path $\sigma$ can be obtained from $\sigma'$ by means of at most $l'/2$ insertion moves, each of them performed on a distinct couple of edges of $I_{l'}$. \end{mythm} \nocite{*} \end{document}
\begin{document} \title[Popoviciu's type inequalities]{Popoviciu's type inequalities for $h$-${\rm{MN}}$-convex functions} \author[M.W. Alomari]{M.W. Alomari} \address{Department of Mathematics, Faculty of Science and Information Technology, Irbid National University, 2600 Irbid 21110, Jordan.} \email{[email protected]} \date{\today} \subjclass[2000]{26D15} \keywords{Popoviciu's inequality, $h$-convex function, Means} \begin{abstract} In this work, Popoviciu type inequalities are proved for $h$-${\rm{MN}}$-convex functions, where ${\rm{M}}$ and ${\rm{N}}$ are given mathematical means. Some direct examples are pointed out. \end{abstract} \maketitle \section{Introduction} We recall that a function ${\rm{M}}:(0,\infty)\times(0,\infty) \to (0,\infty)$ is called a Mean function if \begin{enumerate} \item {\rm{Symmetry:}} ${\rm{M}}\left(x,y\right)={\rm{M}}\left(y,x\right)$. \item {\rm{Reflexivity:}} ${\rm{M}}\left(x,x\right)=x$. \item {\rm{Monotonicity:}} $\min\{x,y\} \le {\rm{M}}\left(x,y\right) \le \max\{x,y\}$. \item {\rm{Homogeneity:}} $ {\rm{M}}\left(\lambda x,\lambda y\right)=\lambda {\rm{M}}\left(x,y\right)$, for any positive scalar $\lambda$. \end{enumerate} The most famous and oldest known mathematical means are the following: \begin{enumerate} \item The arithmetic mean: $$A := A\left( {\alpha ,\beta } \right) = \frac{{\alpha + \beta }}{2},\,\,\,\,\,\alpha ,\beta \in \mathbb{R}_+.$$ \item The geometric mean: $$G: = G\left( {\alpha ,\beta } \right) = \sqrt {\alpha \beta },\,\,\,\,\,\alpha ,\beta \in \mathbb{R}_+.$$ \item The harmonic mean: $$H: = H\left( {\alpha ,\beta } \right) = \frac{2}{{\frac{1}{\alpha } + \frac{1}{\beta }}},\,\,\,\,\,\alpha ,\beta \in \mathbb{R}_+ - \left\{ 0 \right\}.$$ \end{enumerate} In particular, we have the famous inequality $ H \le G \le A$. In 2007, Anderson et al.\ \cite{AVV} developed a systematic study of the classical theory of continuous and midconvex functions, replacing the arithmetic mean by a general mean. \begin{definition} \label{def4}Let $f : I \to \left(0,\infty\right)$ be a continuous function where $I \subseteq (0,\infty)$. Let ${\rm{M}}$ and ${\rm{N}}$ be any two Mean functions. We say $f$ is ${\rm{MN}}$-convex (concave) if \begin{align} f \left({\rm{M}}\left( x, y\right)\right) \le (\ge) \, {\rm{N}}\left(f (x), f (y)\right), \label{eq1.5} \end{align} for all $x,y \in I$. \end{definition} In fact, the authors of \cite{AVV} discussed the midconvexity of positive continuous real functions with respect to various Means. Hence, the usual midconvexity is the special case in which both means are arithmetic means. Also, they studied the dependence of ${\rm{MN}}$-convexity on ${\rm{M}}$ and ${\rm{N}}$ and gave sufficient conditions for ${\rm{MN}}$-convexity of functions defined by Maclaurin series. For other works regarding ${\rm{MN}}$-convexity see \cite{N} and \cite{NP}. The class of $h$-convex functions, which generalizes convex functions, $s$-convex functions (denoted by $K_s^2$, \cite{B1}--\cite{B3}, \cite{HL}), Godunova-Levin functions (denoted by $Q(I)$, \cite{GL}) and $P$-functions (denoted by $P(I)$, \cite{PR}), was introduced by Varo\v{s}anec in \cite{V}.
Namely, an $h$-convex function is defined as a non-negative function $f : I \to \mathbb{R}$ which satisfies \begin{align*} f\left( {t\alpha +\left(1-t\right)\beta} \right)\le h\left(t\right) f\left( {\alpha} \right)+ h\left(1-t\right) f\left( {\beta} \right), \end{align*} where $h$ is a non-negative function, $t\in (0, 1)$ and $\alpha,\beta \in I$, where $I$ and $J$ are real intervals such that $(0,1) \subseteq J$. Accordingly, some properties of $h$-convex functions were discussed in the same work of Varo\v{s}anec.\\ Let $h: J\to \left(0,\infty\right)$ be a non-negative function. Define the function ${\rm{M}}:\left[0,1\right]\to \left[a,b\right]$ given by ${\rm{M}}\left(t\right)={\rm{M}}\left( {t;a,b} \right)$, where by ${\rm{M}}\left( {t;a,b} \right)$ we mean one of the following functions: \begin{enumerate} \item $A_h\left( {a,b} \right):=h\left( {1 - t} \right)a + h\left( {t} \right)b ;\qquad \text{The generalized Arithmetic Mean}$.\\ \item $G_h\left( {a,b} \right):=a^{h\left( {1-t} \right)} b^{h\left( {t} \right)};\qquad\qquad\,\,\,\,\,\,\, \text{The generalized Geometric Mean}$.\\ \item $H_h\left( {a,b} \right):=\frac{ab}{h\left( {t} \right) a + h\left( {1 - t} \right)b} = \frac{1}{A_h\left( {\frac{1}{a},\frac{1}{b}} \right)}; \qquad\,\,\,\,\, \text{The generalized Harmonic Mean}$.\\ \end{enumerate} Note that, provided $h(0)=0$ and $h(1)=1$, we have ${\rm{M}}\left( {h\left( {0} \right);a,b} \right)=a$ and ${\rm{M}}\left( {h\left( {1} \right);a,b} \right)=b$. Clearly, for $h(t)=t$ and $t=\frac{1}{2}$, the means $A_{\frac{1}{2}}$, $G_{\frac{1}{2}}$ and $H_{\frac{1}{2}}$ are the midpoint versions of $A_{t}$, $G_{t}$ and $H_{t}$, respectively, which were discussed in \cite{AVV} in view of Definition \ref{def4}. For $h(t)=t$, we note that the above means are related by the celebrated AM-GM-HM inequality \begin{align*} H_t\left( {a,b} \right)\le G_t\left( {a,b} \right) \le A_t\left( {a,b} \right),\qquad \forall \,\, t\in [0,1]. \end{align*} Indeed, one can easily prove a more general form of the above inequality; that is, if $h$ is positive and increasing on $[0,1]$ then the generalized AM-GM-HM inequality is given by \begin{align} \label{AM-GM-HM}H_h\left( {a,b} \right)\le G_h\left( {a,b} \right) \le A_h\left( {a,b} \right),\qquad \forall \,\, t\in [0,1]\,\,\,\,\text{and}\,\,\,\, a,b>0. \end{align} Definition \ref{def4} can be extended according to the mean ${\rm{M}}\left( {t;a,b} \right)$ defined above, as follows: let $f : I \to \left(0,\infty\right)$ be any function and let ${\rm{M}}$ and ${\rm{N}}$ be any two Mean functions. We say $f$ is ${\rm{MN}}$-convex (concave) if \begin{align*} f \left({\rm{M}}\left(t;x, y\right)\right) \le (\ge) \, {\rm{N}}\left(t;f (x), f (y)\right), \end{align*} for all $x,y \in I$ and $t\in [0,1]$. More generally, in \cite{A} the class of ${\rm{M_tN_h}}$-convex functions was introduced, generalizing the concept of ${\rm{M_tN_t}}$-convexity and combining it with $h$-convexity. \begin{definition}\cite{A} \label{def5} Let $h: J\to \left(0,\infty\right)$ be a non-negative function. Let $f : I\to \left(0,\infty\right)$ be any function. Let ${\rm{M}}:\left[0,1\right]\to \left[a,b\right]$ and ${\rm{N}}:\left(0,\infty\right)\to \left(0,\infty\right)$ be any two Mean functions.
We say $f$ is $h$-${\rm{{\rm{MN}}}}$-convex (-concave) or that $f$ belongs to the class $\overline{\mathcal{MN}}\left(h,I\right)$ ($\underline{\mathcal{MN}}\left(h,I\right)$) if \begin{align} f \left({\rm{M}}\left(t;x, y\right)\right) \le (\ge) \, {\rm{N}}\left(h(t);f (x), f (y)\right),\label{eq1.3} \end{align} for all $x,y \in I$ and $t\in [0,1]$. \end{definition} Clearly, if ${\rm{M}}\left(t;x, y\right)=A_t\left( {x,y} \right)={\rm{N}}\left(t;x, y\right)$, then Definition \ref{def5} reduces to the original concept of $h$-convexity. Also, if we assume $f$ is continuous, $h(t)=t$ and $t=\frac{1}{2}$ in \eqref{eq2.4}, then the Definition \ref{def5} reduces to the Definition \ref{def4}. The cases of $h$-${\rm{{\rm{MN}}}}$-convexity are given with respect to a certain mean, as follow: \begin{enumerate} \item $f$ is ${\rm{A_tG_h}}$-convex iff \begin{align} f\left( {t\alpha + \left( {1 - t} \right)\beta } \right) \le \left[ {f\left( \alpha \right)} \right]^{h\left( t \right)} \left[ {f\left( \beta \right)} \right]^{h\left(1- t \right)}, \qquad 0\le t\le 1,\label{eqhAG} \end{align} \item $f$ is ${\rm{A_tH_h}}$-convex iff \begin{align} f\left( {t\alpha + \left( {1 - t} \right)\beta } \right) \le\frac{{f\left( \alpha \right)f\left( \beta \right)}}{{h\left( 1-t \right)f\left( \alpha \right) + h\left( { t} \right)f\left( \beta \right)}}, \qquad 0\le t\le 1.\label{eqhAH} \end{align} \item $f$ is ${\rm{G_tA_h}}$-convex iff \begin{align} f\left( {\alpha ^t \beta ^{1 - t} } \right) \le h\left( {t} \right)f\left( \alpha \right) + h\left( {1 - t} \right)f\left( \beta \right), \qquad 0\le t\le 1.\label{eqhGA} \end{align} \item $f$ is ${\rm{G_tG_h}}$-convex iff \begin{align} f\left( {\alpha ^t \beta ^{1 - t} } \right) \le \left[ {f\left( \alpha \right)} \right]^{h\left( {t} \right)} \left[ {f\left( \beta \right)} \right]^{h\left( {1 - t} \right)}, \qquad 0\le t\le 1.\label{eqhGG} \end{align} \item $f$ is ${\rm{G_tH_h}}$-convex iff \begin{align} f\left( {\alpha ^t \beta ^{1 - t} } \right) \le \frac{{f\left( \alpha \right)f\left( \beta \right)}}{{h\left( {1- t} \right)f\left( \alpha \right) + h\left( { t} \right)f\left( \beta \right)}}, \qquad 0\le t\le 1.\label{eqhGH} \end{align} \item $f$ is ${\rm{H_tA_h}}$-convex iff \begin{align} f\left( {\frac{{\alpha \beta }}{{t\alpha + \left( {1 - t} \right)\beta }}} \right) \le h\left( {1-t} \right)f\left( \alpha \right) + h\left( { t} \right)f\left( \beta \right), \qquad 0\le t\le 1.\label{eqhHA} \end{align} \item $f$ is ${\rm{H_tG_h}}$-convex iff \begin{align} f\left( {\frac{{\alpha \beta }}{{t\alpha + \left( {1 - t} \right)\beta }}} \right) \le \left[ {f\left( \alpha \right)} \right]^{h\left( { 1-t} \right)} \left[ {f\left( \beta \right)} \right]^{h\left( { t} \right)}, \qquad 0\le t\le 1.\label{eqhHG} \end{align} \item $f$ is ${\rm{H_tH_h}}$-convex iff \begin{align} f\left( {\frac{{\alpha \beta }}{{t\alpha + \left( {1 - t} \right)\beta }}} \right) \le \frac{{f\left( \alpha \right)f\left( \beta \right)}}{{h\left( {t} \right)f\left( \alpha \right) + h\left( {1 - t} \right)f\left( \beta \right)}}, \qquad 0\le t\le 1.\label{eqhHH} \end{align} \end{enumerate} \begin{remark} In all previous cases, $h(t)$ and $h(1-t)$ are not equal to zero at the same time. Therefore, if $h(0)=0$ and $h(1)=1$, then the Mean function ${\rm{N}}$ satisfying the conditions ${\rm{N}}\left( {h\left( 0 \right),f\left( x \right),f\left( y \right)} \right) = f\left( x \right)$ and $ {\rm{N}}\left( {h\left( 1 \right),f\left( x \right),f\left( y \right)} \right) = f\left(y \right) $. 
\end{remark} \begin{remark} According to Definition \ref{def5}, we may extend the classes $Q(I), P(I)$ and $K_s^2$ by replacing the arithmetic mean by another given one. Let ${\rm{M}}:\left[0,1\right]\to \left[a,b\right]$ and ${\rm{N}}:\left(0,\infty\right)\to \left(0,\infty\right)$ be any two Mean functions. \begin{enumerate} \item Let $s\in (0,1]$. We say that a function $f : I\to \left(0,\infty\right)$ is ${\rm{M_tN_{t^s}}}$-convex, or that $f$ belongs to the class $K^2_s\left(I;{\rm{M_t}},{\rm{N_{t^s}}}\right)$, if for all $x,y \in I$ and $t\in [0,1]$ we have \begin{align} f \left({\rm{M}}\left(t;x, y\right)\right) \le {\rm{N}}\left(t^s;f (x), f (y)\right).\label{eq1.12} \end{align} \item We say that $f : I \to \left(0,\infty\right)$ is an extended Godunova-Levin function, or that $f$ belongs to the class $Q\left(I;{\rm{M_t}},{\rm{N_{1/t}}}\right)$, if for all $x,y \in I$ and $t\in (0,1)$ we have \begin{align} f \left({\rm{M}}\left(t;x, y\right)\right) \le {\rm{N}}\left(\frac{1}{t};f (x), f (y)\right).\label{eq1.13} \end{align} \item We say that $f : I\to \left(0,\infty\right)$ is a $P$-${\rm{M_tN_{t=1}}}$-function, or that $f$ belongs to the class $P\left(I;{\rm{M_t}},{\rm{N_1}}\right)$, if for all $x,y \in I$ and $t\in [0,1]$ we have \begin{align} f \left({\rm{M}}\left(t;x, y\right)\right) \le {\rm{N}}\left(1;f (x), f (y)\right).\label{eq1.14} \end{align} In \eqref{eq1.12}--\eqref{eq1.14}, setting ${\rm{M}}\left(t;x, y\right)= {\rm{A_t}}\left(x, y\right)={\rm{N}}\left(t;x, y\right)$, we recover the original definitions of these classes of convexity. \end{enumerate} \end{remark} \begin{remark} \label{remark2}Let $h$ be a non-negative function such that $h\left(t\right) \ge t$ for $t\in \left(0,1\right)$. For instance, the function $h_r\left(t\right) = t^r$ with $r\le 1$ has this property on $\left(0,1\right)$. Hence, if $f$ is a non-negative ${\rm{M_tN_t}}$-convex function on $I$, then for $x,y\in I$, $t \in (0,1)$ we have \begin{align*} f \left({\rm{M}}\left(t;x, y\right)\right) \le {\rm{N}}\left(t;f (x), f (y)\right) \le {\rm{N}}\left(t^r;f (x), f (y)\right)= {\rm{N}}\left(h\left(t\right);f (x), f (y)\right), \end{align*} for all $r\le 1$ and $t\in \left(0,1\right)$, so that $f$ is ${\rm{M_tN_h}}$-convex. Similarly, if $h$ satisfies $h\left(t\right) \le t$ for $t \in \left(0,1\right)$, then every non-negative ${\rm{M_tN_t}}$-concave function is ${\rm{M_tN_h}}$-concave. In particular, for $r\ge 1$, the function $h_r(t)$ has this property on $\left(0,1\right)$, so that if $f$ is a non-negative ${\rm{M_tN_t}}$-concave function on $I$, then for $x,y\in I$, $t \in (0,1)$ we have \begin{align*} f \left({\rm{M}}\left(t;x, y\right)\right) \ge {\rm{N}}\left(t;f (x), f (y)\right) \ge {\rm{N}}\left(t^r;f (x), f (y)\right)={\rm{N}}\left(h\left(t\right);f (x), f (y)\right), \end{align*} for all $r\ge 1$ and $t\in \left(0,1\right)$, which means that $f$ is ${\rm{M_tN_h}}$-concave. \end{remark} As is known, it is not easy to determine whether a given function is convex or not. For this reason, Jensen \cite{J} proved his famous characterization of convex functions: a continuous function $f$ defined on a real interval $I$ is convex if and only if \begin{align*} f\left( {\frac{{x + y}}{2}} \right) \le \frac{{f\left( x \right) + f\left( y \right)}}{2}, \end{align*} for all $x,y\in I$. In 1965, another characterization was presented by Popoviciu \cite{P}, who proved the following theorem. \begin{theorem} Let $f:I\to \mathbb{R}$ be continuous.
Then, $f$ is convex if and only if \begin{align} \frac{2}{3}\left[{ f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right)}\right] \le f\left( {\frac{{x + y + z}}{3}} \right) +\frac{f\left( x \right)+f\left( y \right)+f\left( z \right)}{3}, \label{eq1.15} \end{align} for all $x,y,z\in I$; equality holds for $f(x)=x$, $x\in I$. \end{theorem} The corresponding version of the Popoviciu inequality for ${\rm{G_tG_t}}$-convex (concave) functions was presented in \cite{N}, where it was proved that for all $x,y,z\in I$ the inequality \begin{align} f^2\left( {\sqrt {xz}} \right)f^2\left( {\sqrt {yz}} \right)f^2\left( {\sqrt {xy}} \right) \le (\ge) f^3\left( {\sqrt[3]{xyz} } \right) f\left( x \right)f\left( y \right)f\left( z \right), \end{align} holds. One of the most useful features of Popoviciu's inequality is that it allows one to maximize and/or minimize a given function (or certain real quantities) without using derivatives, so that inequalities of this type play an important role in optimization and approximation. Another important use is in generalizing classical inequalities; e.g., Popoviciu's inequality can be considered an elegant generalization of Hlawka's inequality, using convexity as a simple geometric tool. For any real numbers $x,y,z$, Hlawka's inequality reads: \begin{align} \label{Hlawkaineq}\left| x \right| + \left| y \right| + \left| z \right| + \left| {x + y + z} \right| \ge \left| {x + z} \right| + \left| {z + y} \right| + \left| {x + y} \right|. \end{align} D. Smiley \& M. Smiley \cite{W} (see also \cite{SS}, p.~756) interpreted Hlawka's inequality geometrically by saying that ``the total length over all sums of pairs from three vectors is not greater than the perimeter of the quadrilateral defined by the three vectors.'' For a recent comprehensive history of Hlawka's inequality see \cite{F}. It is worth noting that a normed linear space for which inequality \eqref{Hlawkaineq} holds for all $x$, $y$, $z$ is called a Hlawka space or quadrilateral space; see \cite{TTH} and \cite{TTMT} (also \cite{SS}). For instance, every inner product space is a Hlawka space \cite{MPF}. The extension of Popoviciu's inequality to several variables would not have been possible without Hlawka's inequality, which inspired the authors of \cite{BNP} to develop a higher-dimensional analogue of Popoviciu's inequality based on his characterization. Interesting generalizations and counterparts of the Popoviciu inequality, with various consequences, can be found in \cite{G} and \cite{VS}.\\ Therefore, since Popoviciu's inequality is one of the most popular generalizations of Hlawka's inequality, and in view of its usefulness, in this work we establish corresponding Popoviciu type inequalities in which a given mean is used instead of the arithmetic mean. Namely, several inequalities of Popoviciu type are proved for $h$-${\rm{AN}}$-convex functions. In this way, we extend Hlawka's inequality according to the geometric structure underlying $h$-${\rm{AN}}$-convex mappings. \section{Popoviciu type inequalities for $h$-${\rm{AN}}$-convex functions} After careful consideration we find that there are neither nonnegative $\frac{1}{t}$-${\rm{M_tA_t}}$-concave nor nonnegative $\frac{1}{t}$-${\rm{M_tH_t}}$-convex functions, where ${\rm{M_t}}=A_t,G_t,H_t$. The same observation holds for $h\left(t\right)=t^k$, $k\le -1$, $t\in (0,1)$.
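Before giving the argument, here is a quick numerical illustration of the first of these claims (an elementary Python sketch; the grid check below only makes the obstruction $t(1-t)<1$ visible and is not part of the proof).
\begin{verbatim}
# Quick numerical illustration: for the reverse inequality with
# h(t) = 1/t and N = A, setting x = y forces
#     f(x) >= f(x) / (t(1-t)),
# i.e. t(1-t) >= 1 at any point where f(x) > 0.  The grid below shows
# that t(1-t) never exceeds 1/4 on (0,1), so the requirement fails
# wherever f(x) > 0.

ts = [i / 1000 for i in range(1, 1000)]
print(max(t * (1 - t) for t in ts))          # about 0.25, far below 1
print(all(t * (1 - t) < 1 for t in ts))      # True
\end{verbatim}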
To see how this holds, suppose on the contrary that there is a nonnegative function $f$ which is ${\rm{M_tA_{1/t}}}$-concave on $I$. Thus, for the Means ${\rm{M_t}}$ and ${\rm{A_t}}$, the reverse inequality of \eqref{eq1.3} holds for all $x,y \in I$ and $t\in (0,1)$, that is, \begin{align*} f\left( {M \left( {t;x,y} \right)} \right) \ge \frac{1}{1-t} f\left( x \right) + \frac{1}{t} f\left( y \right). \end{align*} Since $ M_t \left( {x,x} \right)=x$, setting $x=y$ we obtain \begin{align*} f\left( {x} \right) \ge \frac{1}{1-t} f\left( x \right) + \frac{1}{t}f\left( x \right) = \frac{1}{t\left(1-t\right)} f\left( x \right), \end{align*} which is equivalent to $\left(t-t^2-1\right)f\left(x\right) \ge 0$ for all $t\in (0,1)$. But since $f$ is positive, we would need $t-t^2-1\ge0$ for $0<t<1$, which is impossible, and we reach a contradiction. Hence, we must have $f\left(x\right) \le 0$. In the case when $f$ is a nonnegative ${\rm{M_tH_{1/t}}}$-convex function, we have \begin{align*} f\left( {M \left( {t;x,y} \right)} \right) \le \frac{{t\left( 1-t \right)f\left( x \right)f\left(y \right)}}{{tf\left( x \right) +\left( 1-t \right)f\left( y \right)}}, \end{align*} and setting $x=y$ gives \begin{align*} f\left( {x} \right) \le t\left( 1-t \right)f\left( x \right), \end{align*} which is equivalent to $\left(t\left( 1-t \right) -1\right)f\left( x \right)\ge0$. Since $f$ is nonnegative, we would need $t\left( 1-t \right) -1\ge0$, which is impossible for $t\in (0,1)$; this contradicts the positivity assumption on $f$. Hence, $f\le0$. \begin{remark} \label{cor}There are no nonnegative ${\rm{M_tA_1}}$-concave or ${\rm{M_tH_{1}}}$-convex functions, where ${\rm{M_t}}={\rm{A_t}},{\rm{G_t}},{\rm{H_t}}$. The proof is simpler than the ones given above. \end{remark} According to the previous discussion, we need to extend the classes $Q\left(I;{\rm{M_t}},{\rm{A_{1/t}}}\right)$, $Q\left(I;{\rm{M_t}},{\rm{H_{1/t}}}\right)$, $P\left(I;{\rm{M_t}},{\rm{A_1}}\right)$, and $P\left(I;{\rm{M_t}},{\rm{H_1}}\right)$. Consequently, we say that a function $f:I\to \mathbb{R}$ \begin{enumerate} \item is ${\rm{M_tA_{1/t}}}$-concave, if $-f \in Q\left(I;{\rm{M_t}},{\rm{A_{1/t}}}\right)$, i.e., \begin{align*} f \left({\rm{M}}\left(t;x, y\right)\right)\ge \frac{1}{1-t} f\left( x \right) + \frac{1}{t} f\left( y \right), \end{align*} for all $x,y \in I$ and $t\in (0,1)$.\\ \item is ${\rm{M_tH_{1/t}}}$-convex, if $-f \in Q\left(I;{\rm{M_t}},{\rm{H_{1/t}}}\right)$, i.e., \begin{align*} f \left({\rm{M}}\left(t;x, y\right)\right)\ge \frac{{t\left( 1-t \right)f\left( x \right)f\left(y \right)}}{{tf\left( x \right) +\left( 1-t \right)f\left( y \right)}}, \end{align*} for all $x,y \in I$ and $t\in (0,1)$.\\ \item is ${\rm{M_tA_1}}$-concave, if $-f \in P\left(I;{\rm{M_t}},{\rm{A_1}}\right)$, i.e., \begin{align*} f \left({\rm{M}}\left(t;x, y\right)\right)\ge f\left( x \right) +f\left( y \right), \end{align*} for all $x,y \in I$ and $t\in (0,1)$.\\ \item is ${\rm{M_tH_1}}$-convex, if $-f \in P\left(I;{\rm{M_t}},{\rm{H_1}}\right)$, i.e., \begin{align*} f \left({\rm{M}}\left(t;x, y\right)\right)\ge\frac{{f\left( x \right)f\left( y \right)}}{{f\left( x \right) + f\left( y \right)}}, \end{align*} for all $x,y \in I$ and $t\in (0,1)$. \end{enumerate} In the same way, there is no ${\rm{M_tG_{1/t}}}$-concave function satisfying $f\left(x\right)> 1$. To support this assertion, assume that there exists an ${\rm{M_tG_{1/t}}}$-concave function with $f\left(x\right)>1$, so that for the Means ${\rm{M_t}}$ and ${\rm{G_t}}$ the reverse inequality of \eqref{eq1.3} holds for all $x,y \in I$ and $t\in (0,1)$, that is, 
\begin{align*} f\left( {M \left( {t;x,y} \right)} \right) \ge \left[f\left( x \right)\right]^{\frac{1}{1-t}}\left[ f\left( y \right)\right]^ {\frac{1}{t}}. \end{align*} Since $ M_t \left( {x,x} \right)=x$, setting $x=y$ we obtain \begin{align*} f\left( {x} \right) \ge \left[f\left( x \right)\right]^{\frac{1}{1-t}+ \frac{1}{t}}. \end{align*} Since $f\left(x\right)> 1$ and $t\in (0,1)$, we must have $\frac{1}{1-t}+ \frac{1}{t}\le1$, which is equivalent to $1\le t \left(1-t\right)$ for all $t\in (0,1)$; this is impossible, and we reach a contradiction. Hence, we must have $ 0\le f\left(x\right) \le 1$. \begin{remark} There is no $1$-${\rm{M_tG_t}}$-concave function satisfying $f\left(x\right)> 1$. The proof is simpler than the ones given above. \end{remark} A function $h: I\to \mathbb{R}$ is said to be \begin{enumerate} \item additive if $h\left( {s + t} \right) = h\left( s \right) + h\left( t \right)$, \item subadditive if $h\left( {s + t} \right) \le h\left( s \right) + h\left( t \right)$, \item superadditive if $h\left( {s + t} \right) \ge h\left( s \right) + h\left( t \right)$, \end{enumerate} for all $s,t \in I$. For example, let $h:I\to \left(0,\infty\right)$ be given by $h\left(x\right)=x^k$, $x>0$. Then $h$ is \begin{enumerate} \item additive if $k=1$. \item subadditive if $k \in \left(-\infty,1\right]$. \item superadditive if $k \in \left[1,\infty\right)$. \end{enumerate} We note that, in all subsequent results involving the classes of ${\rm{M_tA_{1/t}}}$-concave, ${\rm{M_tG_{1/t}}}$-concave, ${\rm{M_tH_{1/t}}}$-convex, ${\rm{M_tA_1}}$-concave, and ${\rm{M_tH_1}}$-convex functions, $f$ is understood to be a function $f : I \to \mathbb{R}$ with $I\subseteq (0,\infty)$. \subsection{The case when $f$ is $h$-${\rm{AA}}$-convex} Now, we are ready to state our first main result. \begin{theorem} \label{thm1} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is an ${\rm{A_tA_h}}$-convex (concave) function, then \begin{multline} f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right) \\ \le\,(\ge)\, h\left( {3/2} \right) f\left( {\frac{{x + y + z}}{3}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right], \label{eq2.1} \end{multline} for all $x,y,z\in I$. \end{theorem} \begin{proof} $f$ is ${\rm{A_tA_h}}$-convex iff the inequality \begin{align*} f\left( {t\alpha + \left( {1 - t} \right)\beta } \right) \le h\left( t \right) f\left( \alpha \right) + h\left(1- t \right)f\left( \beta \right), \qquad 0\le t\le 1, \end{align*} holds for all $\alpha,\beta\in I$. Assume that $x\le y \le z$; the case $y \ge \frac{x+y+z}{3}$ can be treated similarly, with the roles of $x$ and $z$ interchanged, so we may suppose that $y \le \frac{x+y+z}{3}$. Then \begin{align*} \frac{{x + y + z}}{3} \le \frac{{x + z}}{2} \le z \,\,\text{and}\,\, \frac{{x + y + z}}{3} \le \frac{{y + z}}{2} \le z, \end{align*} so that there exist two numbers $s,t \in \left[0,1\right]$ satisfying \begin{align*} \frac{{x + z}}{2} = s\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - s} \right)z, \end{align*} and \begin{align*} \frac{{y + z}}{2} = t\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - t} \right)z. \end{align*} Summing up, we get $\left(x+y-2z\right)\left(s+t-\frac{3}{2}\right)=0$. If $x+y-2z=0$, then $x = y = z$, and Popoviciu's inequality holds. 
If $s+t=\frac{3}{2}$, then since $f$ is ${\rm{A_tA_h}}$-convex, we have \begin{align*} f\left( {\frac{{x + z}}{2}} \right) = f\left[ {s\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - s} \right)z} \right] \le h\left( s \right) f\left( {\frac{{x + y + z}}{3}} \right)+h\left( {1 - s} \right) f\left( z \right), \end{align*} \begin{align*} f\left( {\frac{{y + z}}{2}} \right) = f\left[ {t\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - t} \right)z} \right] \le h\left( t \right) f\left( {\frac{{x + y + z}}{3}} \right)+h\left( {1 - t} \right) f\left( z \right), \end{align*} and \begin{align*} f\left( {\frac{{x + y}}{2}} \right) \le h\left( {1/2} \right)\left[ {f\left( x \right)+f\left( y \right)} \right]. \end{align*} Summing up these inequalities and taking into account that $h$ is superadditive, we get \begin{align*} &f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right) \\ &\le h\left( s \right)f\left( {\frac{{x + y + z}}{3}} \right)+h\left( {1 - s} \right)f\left( z \right) +h\left( t \right)f\left( {\frac{{x + y + z}}{3}} \right)+h\left( {1 - t} \right)f\left( z \right)\\ &\qquad+ h\left( {1/2}\right)\left[ {f\left( x \right)+f\left( y \right)} \right]\\ &= \left[ {h\left( s \right) + h\left( t \right)} \right]f\left( {\frac{{x + y + z}}{3}} \right)+ \left[ {h\left( {1 - s} \right) + h\left( {1 - t} \right)} \right]f\left( z \right)+ h\left( {1/2}\right)\left[ {f\left( x \right)+f\left( y \right)} \right] \\ &\le h\left( {s + t} \right) f\left( {\frac{{x + y + z}}{3}} \right) +h\left( {2 - s - t} \right) f\left( z \right)+h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)} \right] \\ &= h\left( {3/2} \right) f\left( {\frac{{x + y + z}}{3}} \right) +h\left( {1/2} \right) f\left( z \right)+h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)} \right] \\ &= h\left( {3/2} \right) f\left( {\frac{{x + y + z}}{3}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right], \end{align*} as desired. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq2.1}, we get \begin{align*} 2f\left( {\frac{{x + y}}{2}} \right)+f\left( {y} \right) \le \,(\ge)\, h\left( {3/2} \right) f\left( {\frac{{x + 2y}}{3}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+2f\left( y \right)} \right], \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor1} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is an ${\rm{A_tA_t}}$-convex (concave) function, then \begin{align*} \frac{2}{3}\left[{ f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right)}\right] \le\,(\ge)\, f\left( {\frac{{x + y + z}}{3}} \right) +\frac{f\left( x \right)+f\left( y \right)+f\left( z \right)}{3}, \end{align*} for all $x,y,z\in I$. The equality holds when $f$ is affine. \end{corollary} \begin{example} \begin{enumerate} \item Let $f\left(x\right)=x^p$, $p\ge1$. Then $f$ is ${\rm{A_tA_t}}$-convex for all $x>0$. 
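Indeed, as a quick check, which we add only for the reader's convenience, ${\rm{A_tA_t}}$-convexity is nothing but ordinary convexity, and \begin{align*} f''\left(x\right)=p\left(p-1\right)x^{p-2}\ge 0, \qquad x>0,\ p\ge1 . \end{align*}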
Applying Corollary \ref{cor1}, we get \begin{align*} \frac{2}{3}\left[{ \left( {\frac{{x + z}}{2}} \right)^p+\left( {\frac{{y + z}}{2}} \right)^p+ \left( {\frac{{x + y}}{2}} \right)^p}\right] \le \left( {\frac{{x + y + z}}{3}} \right)^p + \frac{x^p+y^p+z^p}{3}, \end{align*} for all $x,y,z>0$. \item Let $f\left(x\right)=-\log x$. Then $f$ is ${\rm{A_tA_t}}$-convex for all $0<x<1$. Applying Corollary \ref{cor1}, we get \begin{align*} \left( {x + z} \right)^2 \left( {y + z} \right)^2 \left( {x + y} \right)^2 \ge \frac{{64}}{{27}}\left( {x + y + z} \right)^3 \left( {xyz} \right) , \end{align*} for all $0<x,y,z<1$. \end{enumerate} \end{example} \begin{corollary} \label{cor2} If $f : I \to \mathbb{R}$ is an ${\rm{A_tA_{1/t}}}$-concave function, then \begin{align*} \frac{3}{2}\left[{ f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right)}\right] \ge f\left( {\frac{{x + y + z}}{3}} \right) +3\left[{f\left( x \right)+f\left( y \right)+f\left( z \right)}\right], \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)= \log x$. Then $f$ is ${\rm{A_tA_{1/t}}}$-concave for $0<x<1$. Applying Corollary \ref{cor2}, we get \begin{align*} \left( {x + z} \right)^3 \left( {y + z} \right)^3 \left( {x + y} \right)^3 \ge \frac{{512}}{9}\left( {x + y + z} \right)^2 \left( {xyz} \right)^6, \end{align*} for all $0< x,y,z<1$. \end{example} \begin{corollary} \label{cor3} If $f : I \to \mathbb{R}$ is an ${\rm{A_tA_1}}$-concave function, then \begin{align*} f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right) \ge f\left( {\frac{{x + y + z}}{3}} \right) + f\left( x \right)+f\left( y \right)+f\left( z \right), \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)= \log x$, which is ${\rm{A_tA_1}}$-concave for all $0<x<1$. Applying Corollary \ref{cor3}, we get \begin{align*} \left( {x + z} \right)\left( {y + z} \right)\left( {x + y} \right) \ge \frac{8}{3}\left( {x + y + z} \right)\left( {xyz} \right), \end{align*} for all $0< x,y,z< 1$. \end{example} \begin{corollary} \label{cor4}In Theorem \ref{thm1}: \begin{enumerate} \item If $h : J \to \left(0,\infty\right)$ is nonnegative and superadditive and $f : I \to \left(0,\infty\right)$ is ${\rm{A_tA_h}}$-convex and subadditive, then \begin{multline*} f\left( {x+y+z} \right)\le f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right) \\ \le h\left( {3/2} \right) f\left( {\frac{{x + y + z}}{3}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right] \\ \le h\left( {3/2} \right) \left[{f\left( {\frac{x}{3}} \right)+f\left( {\frac{y}{3}} \right)+f\left( {\frac{z}{3}} \right) }\right] +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right], \end{multline*} for all $x,y,z\in I$. If $h$ is nonnegative and subadditive on $J$ and $f$ is ${\rm{A_tA_h}}$-concave and superadditive, then the inequalities are reversed. 
\item If $h : J \to \left(0,\infty\right)$ is nonnegative and superadditive and $f : I \to \left(0,\infty\right)$ is ${\rm{A_tA_h}}$-convex and superadditive, then \begin{multline*} f\left( {\frac{{x + z}}{2}} \right)+f\left( {\frac{{y + z}}{2}} \right)+f\left( {\frac{{x + y}}{2}} \right) \\ \le h\left( {3/2} \right) f\left( {\frac{{x + y + z}}{3}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right] \\ \le h\left( {3/2} \right) f\left( {\frac{{x + y + z}}{3}} \right) +h\left(1/2 \right)f\left( x+y+z \right), \end{multline*} for all $x,y,z\in I$. If $h$ is nonnegative and subadditive and $f$ is ${\rm{A_tA_h}}$-concave and subadditive, then the inequalities are reversed. \end{enumerate} \end{corollary} \subsection{The case when $f$ is $h$-${\rm{A_tG_t}}$-convex} \begin{theorem} \label{thm3} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is an ${\rm{A_tG_h}}$-convex (concave) function, then \begin{align} f\left( {\frac{{x + z}}{2}} \right)f\left( {\frac{{y + z}}{2}} \right)f\left( {\frac{{x + y}}{2}} \right) \le \, (\ge) \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)},\label{eq2.2} \end{align} for all $x,y,z\in I$. \end{theorem} \begin{proof} $f$ is ${\rm{A_tG_h}}$-convex iff the inequality \begin{align*} f\left( {t\alpha + \left( {1 - t} \right)\beta } \right) \le \left[ {f\left( \alpha \right)} \right]^{h\left( t \right)} \left[ {f\left( \beta \right)} \right]^{h\left(1- t \right)}, \qquad 0\le t\le 1, \end{align*} holds for all $\alpha,\beta \in I$. As in the proof of Theorem \ref{thm1}, we have $\left(x+y-2z\right)\left(s+t-\frac{3}{2}\right)=0$. If $x+y-2z=0$, then $x = y = z$, and Popoviciu's inequality holds. 
If $s+t=\frac{3}{2}$, then since $f$ is ${\rm{A_tG_h}}$-convex, we have \begin{align*} f\left( {\frac{{x + z}}{2}} \right) = f\left[ {s\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - s} \right)z} \right] \le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( s \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - s} \right)}, \end{align*} \begin{align*} f\left( {\frac{{y + z}}{2}} \right) = f\left[ {t\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - t} \right)z} \right] \le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( t \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - t} \right)}, \end{align*} and \begin{align*} f\left( {\frac{{x + y}}{2}} \right) \le \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)}. \end{align*} Multiplying these inequalities, we get \begin{align*} &f\left( {\frac{{x + z}}{2}} \right)f\left( {\frac{{y + z}}{2}} \right)f\left( {\frac{{x + y}}{2}} \right) \\ &\le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( s \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - s} \right)} \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( t \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &= \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( s \right) + h\left( t \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - s} \right) + h\left( {1 - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &\le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {s + t} \right)} \left[ {f\left( z \right)} \right]^{h\left( {2 - s - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &= \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{align*} as desired. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq2.2}, we have \begin{align*} f^2\left( {\frac{{x + y}}{2}} \right)f\left( {y} \right) \le \, (\ge) \left[ {f\left( {\frac{{x + 2y }}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f^2\left( y \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor5} If $f : I \to \left(0,\infty\right)$ is an ${\rm{A_tG_t}}$-convex function, then \begin{align*} f^2\left( {\frac{{x + z}}{2}} \right)f^2\left( {\frac{{y + z}}{2}} \right)f^2\left( {\frac{{x + y}}{2}} \right) \le f^3\left( {\frac{{x + y + z}}{3}} \right) f\left( x \right)f\left( y \right)f\left( z \right), \end{align*} for all $x,y,z\in I$. Equality holds for $f\left(x\right)={\rm{e}}^x$, $x>0$. \end{corollary} \begin{example} The function $f\left(x\right)=\cosh \left(x\right)$, $x\in \mathbb{R}$, is ${\rm{A_tG_t}}$-convex. 
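As a brief check, added here only for convenience, note that ${\rm{A_tG_t}}$-convexity of a positive function $f$ is equivalent to the convexity of $\log f$, and for $f=\cosh$ we have \begin{align*} \frac{d^2}{dx^2}\log\cosh \left(x\right)=1-\tanh^2\left(x\right)>0 . \end{align*}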
Applying Corollary \ref{cor5}, we get \begin{align*} {\rm{cosh}}^{\rm{2}} \left( {\frac{{x + z}}{2}} \right){\rm{cosh}}^{\rm{2}} \left( {\frac{{y + z}}{2}} \right){\rm{cosh}}^{\rm{2}} \left( {\frac{{x + y}}{2}} \right) \le {\rm{cosh}}^{\rm{3}} \left( {\frac{{x + y + z}}{3}} \right)\cosh \left( x \right)\cosh \left( y \right)\cosh \left( z \right), \end{align*} for all $x,y,z\in \mathbb{R}$. \end{example} \begin{corollary} \label{cor6} If $f : I \to \left(0,\infty\right)$ is an ${\rm{A_tG_{1/t}}}$-concave function, then \begin{align*} f^3\left( {\frac{{x + z}}{2}} \right)f^3\left( {\frac{{y + z}}{2}} \right)f^3\left( {\frac{{x + y}}{2}} \right) \ge f^2\left( {\frac{{x + y + z}}{3}} \right) f^6\left( x \right)f^6\left( y \right)f^6\left( z \right), \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} The function $f\left(x\right)=\arcsin \left(x\right)$ is $\frac{1}{t}$-${\rm{A_tG_t}}$-concave for $x\in[0,1]$. Applying Corollary \ref{cor6}, we get \begin{multline*} \arcsin^3\left( {\frac{{x + z}}{2}} \right)\arcsin^3\left( {\frac{{y + z}}{2}} \right)\arcsin^3\left( {\frac{{x + y}}{2}} \right) \\ \ge \arcsin^2\left( {\frac{{x + y + z}}{3}} \right) \arcsin^6\left( x \right)\arcsin^6\left( y \right)\arcsin^6\left( z \right), \end{multline*} for all $0\le x,y,z\le1$. \end{example} \begin{corollary} \label{cor7} If $f : I \to \left(0,\infty\right)$ is a $1$-${\rm{A_tG_t}}$-concave function, then \begin{align*} f\left( {\frac{{x + z}}{2}} \right)f\left( {\frac{{y + z}}{2}} \right)f\left( {\frac{{x + y}}{2}} \right) \ge f\left( {\frac{{x + y + z}}{3}} \right) f\left( x \right)f\left( y \right)f\left( z \right), \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=\arcsin \left(x\right)$, which is ${\rm{A_tG_1}}$-concave for $x\in[0,1]$. Applying Corollary \ref{cor7}, we get \begin{align*} \arcsin\left( {\frac{{x + z}}{2}} \right)\arcsin\left( {\frac{{y + z}}{2}} \right)\arcsin\left( {\frac{{x + y}}{2}} \right) \ge \arcsin\left( {\frac{{x + y + z}}{3}} \right) \arcsin\left( x \right)\arcsin\left( y \right)\arcsin\left( z \right), \end{align*} for all $0\le x,y,z \le 1$. \end{example} \begin{corollary} \label{cor8}In Theorem \ref{thm3}: \begin{enumerate} \item If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tG_h}}$-convex and submultiplicative, then \begin{align*} f\left( {\frac{\left(x+z\right)\left(y+z\right)\left(x+y\right)}{8}} \right)&\le f\left( {\frac{{x + z}}{2}} \right)f\left( {\frac{{y + z}}{2}} \right)f\left( {\frac{{x + y}}{2}} \right) \\ &\le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y,z\in I$. If $f$ is ${\rm{A_tG_h}}$-concave and supermultiplicative, then the inequalities are reversed. \item If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tG_h}}$-convex and supermultiplicative, then \begin{align*} f\left( {\frac{{x + z}}{2}} \right)f\left( {\frac{{y + z}}{2}} \right)f\left( {\frac{{x + y}}{2}} \right) &\le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)} \\ &\le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( xyz \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y,z\in I$. If $f$ is ${\rm{A_tG_h}}$-concave and submultiplicative, then the inequalities are reversed. 
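As a small illustration, which we add only as a sanity check, one may take $f\left(x\right)={\rm{e}}^x$ on $[2,\infty)$: it is ${\rm{A_tG_h}}$-convex with $h\left(t\right)=t$, since $\log f$ is affine, and it is supermultiplicative there, because \begin{align*} {\rm{e}}^{xy}\ge {\rm{e}}^{x}{\rm{e}}^{y} \iff xy\ge x+y \iff \left(x-1\right)\left(y-1\right)\ge 1, \qquad x,y\ge 2 . \end{align*}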
\end{enumerate} \end{corollary} \begin{corollary} \label{cor9}In Theorem \ref{thm3}: \begin{enumerate} \item If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tG_h}}$-convex and superadditive, then \begin{multline*} \left[{f\left( {\frac{x }{2}} \right) +f\left( {\frac{z}{2}} \right)}\right]\left[{f\left( {\frac{y }{2}} \right) +f\left( {\frac{z}{2}} \right)}\right]\left[{f\left( {\frac{x }{2}} \right) +f\left( {\frac{y}{2}} \right)}\right] \\ \le f\left( {\frac{{x + z}}{2}} \right)f\left( {\frac{{y + z}}{2}} \right)f\left( {\frac{{x + y}}{2}} \right) \\ \le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{multline*} for all $x,y,z\in I$. If $f$ is ${\rm{A_tG_h}}$-concave and subadditive, then the inequalities are reversed. \item If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tG_h}}$-convex and subadditive, then \begin{multline*} f\left( {\frac{{x + z}}{2}} \right)f\left( {\frac{{y + z}}{2}} \right)f\left( {\frac{{x + y}}{2}} \right) \\ \le \left[ {f\left( {\frac{{x + y + z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)} \\ \le \left[ {f\left( {\frac{{x }}{3}} \right)+f\left( {\frac{{y }}{3}} \right)+f\left( {\frac{{ z}}{3}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{multline*} for all $x,y,z\in I$. If $f$ is ${\rm{A_tG_h}}$-concave and superadditive, then the inequalities are reversed. \end{enumerate} \end{corollary} \subsection{The case when $f$ is ${\rm{A_tH_h}}$-convex} \begin{theorem} \label{thm4} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tH_h}}$-concave (convex), then \begin{align} &\frac{1}{{f\left( {\frac{{x + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{y + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{x + y}}{2}} \right)}}\nonumber \\ &\le \,(\ge)\,h\left( {1/2} \right)\left[ {\frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)}}, \label{eq2.3} \end{align} for all $x,y,z\in I$. \end{theorem} \begin{proof} $f$ is ${\rm{A_tH_h}}$-convex iff the inequality \begin{align*} f\left( {t\alpha + \left( {1 - t} \right)\beta } \right) \le\frac{{f\left( \alpha \right)f\left( \beta \right)}}{{h\left( 1-t \right)f\left( \alpha \right) + h\left( t \right)f\left( \beta \right)}}, \qquad 0\le t\le 1, \end{align*} holds for all $\alpha,\beta \in I$, and ${\rm{A_tH_h}}$-concavity corresponds to the reverse inequality. As in the proof of Theorem \ref{thm1}, we have $\left(x+y-2z\right)\left(s+t-\frac{3}{2}\right)=0$. If $x+y-2z=0$, then $x = y = z$, and Popoviciu's inequality holds. 
If $s+t=\frac{3}{2}$, then since $f$ is ${\rm{A_tH_h}}$-convex, we have \begin{align*} f\left( {\frac{{x + z}}{2}} \right) = f\left[ {s\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - s} \right)z} \right] \ge \frac{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}}{{h\left( {1 - s} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( s \right)f\left( z \right)}}, \end{align*} and this equivalent to write \begin{align} \frac{1}{{f\left( {\frac{{x + z}}{2}} \right)}} \le \frac{{h\left( {1 - s} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( s \right)f\left( z \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}}, \label{eq2.4} \end{align} similarly, \begin{align*} f\left( {\frac{{y + z}}{2}} \right) = f\left[ {t\left( {\frac{{x + y + z}}{3}} \right) + \left( {1 - t} \right)z} \right] \ge \frac{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}}{{h\left( {1 - t} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( t \right)f\left( z \right)}}, \end{align*} which equivalent to write \begin{align} \frac{1}{{f\left( {\frac{{y + z}}{2}} \right)}} \le \frac{{h\left( {1 - t} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( t \right)f\left( z \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}}, \label{eq2.5} \end{align} and \begin{align} f\left( {\frac{{x + y}}{2}} \right) &\ge \frac{{f\left( x \right)f\left( y \right)}}{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}\nonumber \\ \Longleftrightarrow\frac{1}{{f\left( {\frac{{x + y}}{2}} \right)}} &\le \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}}, \label{eq2.6} \end{align} Summing the inequalities \eqref{eq2.4}--\eqref{eq2.6}, we get \begin{align*} &\frac{1}{{f\left( {\frac{{x + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{y + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{x + y}}{2}} \right)}} \\ &\le \frac{{h\left( {1 - s} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( s \right)f\left( z \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}} + \frac{{h\left( {1 - t} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( t \right)f\left( z \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}} \\&\qquad+ \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &= \frac{{\left[ {h\left( {1 - s} \right) + h\left( {1 - t} \right)} \right]f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + \left[ {h\left( s \right) + h\left( t \right)} \right]f\left( z \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}} \\&\qquad+ \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &\le \frac{{h\left( {2 - s - t} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( {s + t} \right)f\left( z \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}} + \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &= \frac{{h\left( {1/2} \right)f\left( {{\textstyle{{x + y + z} \over 3}}} \right) + h\left( {3/2} \right)f\left( z \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)f\left( z \right)}} + \frac{{h\left( {1/2} 
\right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &=h\left( {1/2} \right)\left[ {\frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)}}, \end{align*} which proves the inequality in \eqref{eq2.3}. \end{proof} \begin{remark} In \eqref{eq2.3}, setting $z=y$, we have \begin{align*} \frac{2}{{f\left( {\frac{{x + y}}{2}} \right)}} + \frac{1}{{f\left( {y} \right)}} \le \,(\ge)\,h\left( {1/2} \right)\left[ {\frac{2}{{f\left( y \right)}} + \frac{1}{{f\left( x \right)}} } \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {{\textstyle{{x + 2y} \over 3}}} \right)}}, \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor10} If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tH_t}}$-concave (convex), then \begin{align*} \frac{2}{3}\left[{\frac{1}{{f\left( {\frac{{x + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{y + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{x + y}}{2}} \right)}}}\right] \le \,(\ge)\,\frac{1}{3}\left[ {\frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{1}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)}}, \end{align*} for all $x,y,z\in I$. The equality holds with $f\left(x\right)=\frac{1}{x}$, $x>0$. \end{corollary} \begin{example} Let $f\left(x\right)=x^p$, $p\ge 1$. Then $f$ is ${\rm{A_tH_t}}$-concave for $x\ge 1$. Applying Corollary \ref{cor10}, we get \begin{align*} \frac{2}{3}\left[ {\left( {\frac{{x + z}}{2}} \right)^{ - p} + \left( {\frac{{y + z}}{2}} \right)^{ - p} + \left( {\frac{{x + y}}{2}} \right)^{ - p} } \right] \le \frac{{x^{ - p} + y^{ - p} + z^{ - p} }}{3} + \left( {\frac{{x + y + z}}{3}} \right)^{ - p}, \end{align*} for all $x,y,z \ge 1$. \end{example} \begin{corollary} \label{cor11} If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tH_{1/t}}}$-convex, then \begin{align*} \frac{3}{2}\left[{\frac{1}{{f\left( {\frac{{x + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{y + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{x + y}}{2}} \right)}}}\right] \ge 3\left[ {\frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{1}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)}}, \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)= - \log \left(x\right)$, $x> 1$. Then $f$ is ${\rm{A_tH_{1/t}}}$-convex for $x> 1$. Applying Corollary \ref{cor11}, we get \begin{align*} \frac{3}{2}\left[ {\frac{1}{{\log \left( {\frac{{x + z}}{2}} \right)}} + \frac{1}{{\log \left( {\frac{{y + z}}{2}} \right)}} + \frac{1}{{\log \left( {\frac{{x + y}}{2}} \right)}}} \right] \le 3\left( {\frac{1}{{\log x}} + \frac{1}{{\log y}} + \frac{1}{{\log z}}} \right) + \frac{1}{{\log \left( {\frac{{x + y + z}}{3}} \right)}}, \end{align*} for all $x,y,z> 1$. \end{example} \begin{corollary} \label{cor12} If $f : I \to \left(0,\infty\right)$ is ${\rm{A_tH_1}}$-convex, then \begin{align*} \frac{1}{{f\left( {\frac{{x + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{y + z}}{2}} \right)}} + \frac{1}{{f\left( {\frac{{x + y}}{2}} \right)}} \ge \left[ {\frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{1}}{{f\left( {{\textstyle{{x + y + z} \over 3}}} \right)}}, \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=-\log \left(x\right)$, $x> 1$. 
Then $f$ is ${\rm{A_tH_1}}$-convex for $x> 1$. Applying Corollary \ref{cor12}, we get \begin{align*} \frac{1}{{\log \left( {\frac{{x + z}}{2}} \right)}} + \frac{1}{{\log \left( {\frac{{y + z}}{2}} \right)}} + \frac{1}{{\log \left( {\frac{{x + y}}{2}} \right)}} \le \frac{1}{{\log x}} + \frac{1}{{\log y}} + \frac{1}{{\log z}} + \frac{1}{{\log \left( {\frac{{x + y + z}}{3}} \right)}}, \end{align*} for all $x,y,z> 1$. \end{example} \section{Popoviciu inequalities for $h$-${\rm{GN}}$-convex functions} \subsection{The case when $f$ is ${\rm{G_tA_h}}$-convex} \begin{theorem} \label{thm5} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is a ${\rm{G_tA_h}}$-convex (concave) function, then \begin{align} f\left( {\sqrt{xz}} \right)+f\left( {\sqrt{yz}} \right)+f\left( {\sqrt{xy}} \right) \le \,(\ge)\, h\left( {3/2} \right) f\left( {\sqrt[3]{xyz}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right], \label{eq3.1} \end{align} for all $x,y,z\in I$. \end{theorem} \begin{proof} $f$ is ${\rm{G_tA_h}}$-convex iff the inequality \begin{align*} f\left( {\alpha ^t \beta ^{1 - t} } \right) \le h\left( {t} \right)f\left( \alpha \right) + h\left( {1 - t} \right)f\left( \beta \right), \qquad 0\le t\le 1, \end{align*} holds for all $\alpha,\beta \in I$. Assume that $x\le y \le z$. If $y \le \left( {xyz} \right)^{1/3}$, then \begin{align*} \left( {xyz} \right)^{1/3} \le \left( {xz} \right)^{1/2} \le z \,\,\text{and}\,\, \left( {xyz} \right)^{1/3} \le \left( {yz} \right)^{1/2} \le z, \end{align*} so that there exist two numbers $s,t \in \left[0,1\right]$ satisfying \begin{align*} \left( {xz} \right)^{1/2} = \left( {xyz} \right)^{s/3} z^{1 - s} \end{align*} and \begin{align*} \left( {yz} \right)^{1/2} = \left( {xyz} \right)^{t/3} z^{1 - t}. \end{align*} Multiplying the above equations, we get $$\left( {xyz} \right)^{1/2} z^{1/2} = \left( {xyz} \right)^{\left( {s + t} \right)/3} z^{2 - \left( {s + t} \right)}, $$ or, equivalently, $$\left( {xyz} \right)^{\frac{{s + t}}{3} - \frac{1}{2}} z^{\frac{3}{2} - \left( {s + t} \right)} = \left( {\frac{{xy}}{{z^2 }}} \right)^{\frac{{s + t}}{3} - \frac{1}{2}} = 1.$$ If $xy=z^2$, then, since $x\le y\le z$, we have $x = y = z$, and Popoviciu's inequality holds. 
If $s+t=\frac{3}{2}$, then since $f$ is ${\rm{G_tA_h}}$-convex, we have \begin{align*} f\left( {\sqrt {xz} } \right) &= f\left[ {\left( {xyz} \right)^{s/3} z^{1 - s} } \right] \le h\left( s \right)\left[ {f\left( {\sqrt[3]{xyz} } \right)} \right] + h\left( {1 - s} \right)\left[ {f\left( z \right)} \right] \\ f\left( {\sqrt {yz} } \right) &= f\left[ {\left( {xyz} \right)^{t/3} z^{1 - t} } \right] \le h\left( t \right)\left[ {f\left( {\sqrt[3]{xyz}} \right)} \right] + h\left( {1 - t} \right)\left[ {f\left( z \right)} \right] \\ f\left( {\sqrt {xy} } \right) &\le h\left( {\frac{1}{2}} \right)\left[ {f\left( x \right) + f\left( y \right)} \right] \end{align*} Summing up these inequalities, we get \begin{align*} &f\left( {\sqrt{xz}} \right)+f\left( {\sqrt{yz}} \right)+f\left( {\sqrt{xy}} \right) \\ &\le h\left( s \right)f\left( {\sqrt[3]{xyz}} \right)+h\left( {1 - s} \right)f\left( z \right) +h\left( t \right)f\left( {\sqrt[3]{xyz}} \right)+h\left( {1 - t} \right)f\left( z \right)\\ &\qquad+ h\left( {1/2}\right)\left[ {f\left( x \right)+f\left( y \right)} \right]\\ &= \left[ {h\left( s \right) + h\left( t \right)} \right]f\left( {\sqrt[3]{xyz}} \right)+ \left[ {h\left( {1 - s} \right) + h\left( {1 - t} \right)} \right]f\left( z \right)+ h\left( {1/2}\right)\left[ {f\left( x \right)+f\left( y \right)} \right] \\ &\le h\left( {s + t} \right) f\left( {\sqrt[3]{xyz}} \right) +h\left( {2 - s - t} \right) f\left( z \right)+h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)} \right] \\ &= h\left( {3/2} \right) f\left( {\sqrt[3]{xyz}} \right) +h\left( {1/2} \right) f\left( z \right)+h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)} \right] \\ &= h\left( {3/2} \right) f\left( {\sqrt[3]{xyz}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right], \end{align*} which proves the inequality \eqref{eq3.1}. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq3.1}, we get \begin{align*} 2f\left( {\sqrt{xy}} \right)+f\left( {y} \right)\le \,(\ge)\, h\left( {3/2} \right) f\left( {\sqrt[3]{xy^2}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+2f\left( y \right)} \right], \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor13} If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tA_t}}$-convex function, then \begin{align*} \frac{2}{3}\left[ {f\left( {\sqrt {xz} } \right) + f\left( {\sqrt {yz} } \right) + f\left( {\sqrt {xy} } \right)} \right] \le f\left( {\sqrt[3]{{xyz}}} \right) + \frac{{f\left( x \right) + f\left( y \right) + f\left( z \right)}}{3}, \end{align*} for all $x,y,z\in I$. The equality holds with $f\left(x\right)=\log\left(x\right)$, $x>1$. \end{corollary} \begin{example} Let $f\left(x\right)=\cosh\left(x\right)$, $x>0$. Then, $f$ is ${\rm{G_tA_t}}$-convex on $(0,\infty)$. Applying Corollary \ref{cor13} we get \begin{align*} \frac{2}{3}\left[ {\cosh \left( {\sqrt {xz} } \right) + \cosh \left( {\sqrt {yz} } \right) + \cosh \left( {\sqrt {xy} } \right)} \right] \le \cosh \left( {\sqrt[3]{{xyz}}} \right) + \frac{{\cosh \left( x \right) + \cosh \left( y \right) + \cosh \left( z \right)}}{3}, \end{align*} for all $x,y,z > 0$. \end{example} \begin{corollary} \label{cor14}If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tA_{1/t}}}$-concave function, then \begin{align*} \frac{3}{2}\left[{f\left( {\sqrt{xz}} \right)+f\left( {\sqrt{yz}} \right)+f\left( {\sqrt{xy}} \right)}\right] \ge f\left( {\sqrt[3]{xyz}} \right) +3 \left(f\left( x \right)+f\left( y \right)+f\left( z \right)\right) \end{align*} for all $x,y,z\in I$. 
\end{corollary} \begin{example} Let $f\left(x\right)=-x^2$, $x>0$. Then, $f$ is ${\rm{G_tA_{1/t}}}$-concave on $(0,\infty)$. Applying Corollary \ref{cor14} we get \begin{align*} \frac{3}{2}\left({xz + yz+xy } \right) \le \left( {\sqrt[3]{{xyz}}} \right)^2 +3 \left(x^2+y^2+z^2\right) \end{align*} for all $x,y,z > 0$. \end{example} \begin{corollary} \label{cor15}If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tA_1}}$-concave function, then \begin{align*} f\left( {\sqrt{xz}} \right)+f\left( {\sqrt{yz}} \right)+f\left( {\sqrt{xy}} \right)\ge f\left( {\sqrt[3]{xyz}} \right) + f\left( x \right)+f\left( y \right)+f\left( z \right), \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=-x^2$, $x>0$. Then, $f$ is ${\rm{G_tA_1}}$-convex on $(0,\infty)$. Applying Corollary \ref{cor15} we get \begin{align*} xz + yz+xy \le \left( {\sqrt[3]{{xyz}}} \right)^2 + x^2+y^2+z^2 \end{align*} for all $x,y,z > 0$. \end{example} \begin{corollary} \label{cor16}In Theorem \ref{thm5}. \begin{enumerate} \item If $f : I \to \left(0,\infty\right)$ is an ${\rm{G_tA_h}}$-convex and superadditive, \begin{align*} f\left( {\sqrt{xz}} \right)+f\left( {\sqrt{yz}} \right)+f\left( {\sqrt{xy}} \right) &\le h\left( {3/2} \right) f\left( {\sqrt[3]{xyz}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right] \\ &\le h\left( {3/2} \right) f\left( {\sqrt[3]{xyz}} \right) +h\left(1/2 \right) f\left( x +y+z\right), \end{align*} for all $x,y,z\in I$. If $f$ is an ${\rm{G_tA_h}}$-concave and subadditive, then the inequality is reversed. \item If $f : I \to \left(0,\infty\right)$ is an ${\rm{G_tA_h}}$-convex and subadditive, then \begin{align*} f\left( {\sqrt{xz}+\sqrt{yz}+\sqrt{xy}} \right) &\le f\left( {\sqrt{xz}} \right)+f\left( {\sqrt{yz}} \right)+f\left( {\sqrt{xy}} \right) \\&\le h\left( {3/2} \right) f\left( {\sqrt[3]{xyz}} \right) +h\left(1/2 \right)\left[ {f\left( x \right)+f\left( y \right)+f\left( z \right)} \right], \end{align*} for all $x,y,z\in I$. If $f$ is an ${\rm{G_tA_h}}$-concave and superadditive, then the inequality is reversed. \end{enumerate} \end{corollary} \begin{example} Let $f\left(x\right)=\cosh\left(x\right)$, which is ${\rm{G_tA_t}}$-convex and superadditive on $(0,\infty)$. Applying Corollary \ref{cor16} we get \begin{align*} \frac{2}{3}\left[ {\cosh \left( {\sqrt {xz} } \right) + \cosh \left( {\sqrt {yz} } \right) + \cosh \left( {\sqrt {xy} } \right)} \right] &\le \cosh \left( {\sqrt[3]{{xyz}}} \right) + \frac{{\cosh \left( x \right) + \cosh \left( y \right) + \cosh \left( z \right)}}{3} \\ &\le \cosh\left( {\sqrt[3]{xyz}} \right) + \frac{1}{3}\cosh\left( x +y+z\right), \end{align*} for all $x,y,z > 0$. \end{example} \subsection{The case when $f$ is ${\rm{G_tG_h}}$-convex} \begin{theorem} \label{thm6}Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tG_h}}$-convex function, then \begin{align} f\left( {\sqrt {xz}} \right)f\left( {\sqrt {yz}} \right)f\left( {\sqrt {xy}} \right) \le \,(\ge)\,\left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)},\label{eq3.2} \end{align} for all $x,y,z\in I$. 
\end{theorem} \begin{proof} $f$ is ${\rm{G_tG_h}}$-convex iff the inequality \begin{align*} f\left( {\alpha ^t \beta ^{1 - t} } \right) \le \left[ {f\left( \alpha \right)} \right]^{h\left( {t} \right)} \left[ {f\left( \beta \right)} \right]^{h\left( {1 - t} \right)}, \qquad 0\le t\le 1, \end{align*} holds for all $\alpha, \beta \in I$. As in the proof of Theorem \ref{thm5}, if $xy=z^2$, then $x = y = z$, and Popoviciu's inequality holds. If $s+t=\frac{3}{2}$, then since $f$ is ${\rm{G_tG_h}}$-convex, we have \begin{align*} f\left( {\sqrt {xz} } \right) &= f\left[ {\left( {xyz} \right)^{s/3} z^{1 - s} } \right] \le \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( s \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - s} \right)}, \\ f\left( {\sqrt {yz} } \right) &= f\left[ {\left( {xyz} \right)^{t/3} z^{1 - t} } \right] \le \left[ {f\left( { \sqrt[3]{xyz}} \right)} \right]^{h\left( t \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - t} \right)}, \\ f\left( {\sqrt {xy} } \right) &\le \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)}. \end{align*} Multiplying these inequalities we get \begin{align*} &f\left( {\sqrt {xz}} \right)f\left( {\sqrt {yz}} \right)f\left({\sqrt {xy}} \right) \\ &\le \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( s \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - s} \right)}\left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( t \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &= \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( s \right)+h\left( t \right)} \left[ {f\left( z \right)} \right]^{h\left( {1 - s} \right)+h\left( {1 - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &\le \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( s+t \right)} \left[ {f\left( z \right)} \right]^{h\left( {2 - s - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &= \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( z \right)} \right]^{h\left( {1/2} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)}\\ &= \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{align*} which proves the inequality \eqref{eq3.2}. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq3.2}, we get \begin{align*} f^2\left( {\sqrt {xy}} \right)f\left( {y} \right) \le \,(\ge)\,\left[ {f\left( {\sqrt[3]{xy^2} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( x \right)f^2\left( y \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor17} If $f : I \to \left(0,\infty\right)$ is a ${\rm{G_tG_t}}$-convex (concave) function, then \begin{align*} f^2\left( {\sqrt {xz}} \right)f^2\left( {\sqrt {yz}} \right)f^2\left( {\sqrt {xy}} \right) \le (\ge) f^3\left( {\sqrt[3]{xyz} } \right) f\left( x \right)f\left( y \right)f\left( z \right), \end{align*} for all $x,y,z\in I$. The equality holds with $f\left(x\right)={\rm{e}}^x$, $x>0$. \end{corollary} \begin{example} Let $f\left(x\right)=\cosh\left(x\right)$, which is ${\rm{G_tG_t}}$-convex on $(0,\infty)$. 
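As a quick verification, included only for the reader's convenience, ${\rm{G_tG_t}}$-convexity of a positive function $f$ amounts to the convexity of $u\mapsto\log f\left({\rm{e}}^u\right)$, and for $f=\cosh$ we have \begin{align*} \frac{d^2}{du^2}\log\cosh \left({\rm{e}}^u\right)={\rm{e}}^u\tanh \left({\rm{e}}^u\right)+{\rm{e}}^{2u}\left(1-\tanh^2\left({\rm{e}}^u\right)\right)>0 . \end{align*}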
Applying Corollary \ref{cor17}, we get \begin{align*} \cosh^2\left( {\sqrt {xz}} \right)\cosh^2\left( {\sqrt {yz}} \right)\cosh^2\left( {\sqrt {xy}} \right) \le \cosh^3\left( {\sqrt[3]{xyz} } \right) \cosh\left( x \right)\cosh\left( y \right)\cosh\left( z \right), \end{align*} for all $x,y,z > 0$. \end{example} \begin{corollary} \label{cor18} If $f : I \to \left(0,\infty\right)$ is a ${\rm{G_tG_{1/t}}}$-concave function, then \begin{align*} f^3\left( {\sqrt {xz}} \right)f^3\left( {\sqrt {yz}} \right)f^3\left( {\sqrt {xy}} \right) \ge f^2\left( {\sqrt[3]{xyz} } \right) f^6\left( x \right)f^6\left( y \right)f^6\left( z \right), \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)= \exp\left(-x\right)$, which is $\frac{1}{t}$-${\rm{G_tG_t}}$-concave on $(0,\infty)$. Applying Corollary \ref{cor18}, we get \begin{align*} \sqrt {xz} + \sqrt {yz}+ \sqrt {xy} \le \frac{2}{3}\sqrt[3]{xyz}+2x+2y+2z, \end{align*} for all $x,y,z > 0$. \end{example} \begin{corollary} \label{cor19} If $f : I \to \left(0,\infty\right)$ is a ${\rm{G_tG_1}}$-concave function, then \begin{align*} f\left( {\sqrt {xz}} \right)f\left( {\sqrt {yz}} \right)f\left( {\sqrt {xy}} \right) \ge f\left( {\sqrt[3]{xyz} } \right)f\left( x \right)f\left( y \right)f\left( z \right), \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=\exp\left(-x\right)$, which is ${\rm{G_tG_1}}$-concave on $(0,\infty)$. Applying Corollary \ref{cor19}, we get \begin{align*} \sqrt {xz}+ \sqrt {yz}+\sqrt {xy} \le \sqrt[3]{xyz} + x+y+z, \end{align*} for all $x,y,z > 0$. \end{example} \begin{corollary} \label{cor20}In Theorem \ref{thm6}: \begin{enumerate} \item If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tG_h}}$-convex and supermultiplicative, then \begin{align*} f\left( {\sqrt {xz}} \right)f\left( {\sqrt {yz}} \right)f\left( {\sqrt {xy}} \right) &\le \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)} \\ &\le \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( xyz \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y,z\in I$. \item If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tG_h}}$-convex and submultiplicative, then \begin{align*} f\left( {xyz} \right) &\le f\left( {\sqrt {xz}} \right)f\left( {\sqrt {yz}} \right)f\left( {\sqrt {xy}} \right) \\ &\le \left[ {f\left( {\sqrt[3]{xyz} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)} \\ &\le \left[ {f\left( {\sqrt[3]{x} } \right)f\left( {\sqrt[3]{y} } \right)f\left( {\sqrt[3]{z} } \right)} \right]^{h\left( 3/2 \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y,z\in I$. \end{enumerate} \end{corollary} \begin{example} Let $f\left(x\right)=\cosh\left(x\right)$, which is ${\rm{G_tG_t}}$-convex and supermultiplicative on $[2,\infty)$. Applying Corollary \ref{cor20}, we get \begin{align*} \cosh^2\left( {\sqrt {xz}} \right)\cosh^2\left( {\sqrt {yz}} \right)\cosh^2\left( {\sqrt {xy}} \right) &\le \cosh^3\left( {\sqrt[3]{xyz} } \right) \cosh\left( x \right)\cosh\left( y \right)\cosh\left( z \right) \\ &\le \cosh^3\left( {\sqrt[3]{xyz} } \right) \cosh\left( xyz \right), \end{align*} for all $x,y,z \ge 2$. 
\end{example} \subsection{The case when $f$ is ${\rm{G_tH_h}}$-convex} \begin{theorem} \label{thm7} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is a ${\rm{G_tH_h}}$-concave (convex) function, then \begin{align} &\frac{1}{{f\left( {\sqrt {xz} } \right)}} + \frac{1}{{f\left( {\sqrt {yz} } \right)}} + \frac{1}{{f\left( {\sqrt {xy} } \right)}}\nonumber \\ &\le \,(\ge)\,h\left( {\frac{1}{2}} \right)\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)}}, \label{eq3.3} \end{align} for all $x,y,z\in I$. \end{theorem} \begin{proof} $f$ is ${\rm{G_tH_h}}$-concave iff the inequality \begin{align*} f\left( {\alpha ^t \beta ^{1 - t} } \right) \ge \frac{{f\left( \alpha \right)f\left( \beta \right)}}{{h\left( {1- t} \right)f\left( \alpha \right) + h\left( { t} \right)f\left( \beta \right)}}, \qquad 0\le t\le 1, \end{align*} holds for all $\alpha, \beta \in I$; we prove the concave case, the convex one being analogous. As in the proof of Theorem \ref{thm5}, if $xy=z^2$, then $x = y = z$, and Popoviciu's inequality holds. If $s+t=\frac{3}{2}$, then since $f$ is ${\rm{G_tH_h}}$-concave, we have \begin{align*} f\left( {\sqrt {xz} } \right) = f\left[ {\left( {xyz} \right)^{s/3} z^{1 - s} } \right] \ge \frac{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}}{{h\left( {1 - s} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( s \right)f\left( z \right)}}, \end{align*} which is equivalent to \begin{align} \frac{1}{{f\left( {\sqrt {xz} } \right)}} \le \frac{{h\left( {1 - s} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( s \right)f\left( z \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}},\label{eq3.4} \end{align} similarly, \begin{align*} f\left( {\sqrt {yz} } \right) = f\left[ {\left( {xyz} \right)^{t/3} z^{1 - t} } \right] \ge \frac{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}}{{h\left( {1 - t} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( t \right)f\left( z \right)}}, \end{align*} which is equivalent to \begin{align} \frac{1}{{f\left( {\sqrt {yz} } \right)}} \le \frac{{h\left( {1 - t} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( t \right)f\left( z \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}},\label{eq3.5} \end{align} and \begin{align} f\left( {\sqrt {xy} } \right) \ge \frac{{f\left( x \right)f\left( y \right)}}{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}\nonumber \\ \Longleftrightarrow \frac{1}{{f\left( {\sqrt {xy} } \right)}} \le \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}}.\label{eq3.6} \end{align} Summing the inequalities \eqref{eq3.4}--\eqref{eq3.6}, we get \begin{align*} &\frac{1}{{f\left( {\sqrt {xz} } \right)}} + \frac{1}{{f\left( {\sqrt {yz} } \right)}} + \frac{1}{{f\left( {\sqrt {xy} } \right)}} \\ &\le \frac{{h\left( {1 - s} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( s \right)f\left( z \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}} + \frac{{h\left( {1 - t} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( t \right)f\left( z \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}} \\&\qquad+ \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &= \frac{{\left[ {h\left( {1 - s} \right) + h\left( {1 - t} \right)} \right]f\left( {\sqrt[3]{{xyz}}} \right) + \left[ {h\left( s \right) + 
h\left( t \right)} \right]f\left( z \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}} \\&\qquad+ \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &\le \frac{{h\left( {2 - s - t} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( {s + t} \right)f\left( z \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}} + \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &\le \frac{{h\left( {1/2} \right)f\left( {\sqrt[3]{{xyz}}} \right) + h\left( {3/2} \right)f\left( z \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)f\left( z \right)}} + \frac{{h\left( {1/2} \right)\left( {f\left( x \right) + f\left( y \right)} \right)}}{{f\left( x \right)f\left( y \right)}} \\ &= h\left( {\frac{1}{2}} \right)\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {\sqrt[3]{{xyz}}} \right)}}, \end{align*} which proves the inequality in \eqref{eq3.3}. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq3.3}, then we get \begin{align*} \frac{2}{{f\left( {\sqrt {xy} } \right)}} + \frac{1}{{f\left( {y} \right)}} \le \,(\ge)\,h\left( {\frac{1}{2}} \right)\left[ {\frac{1}{{f\left( x \right)}} + \frac{2}{{f\left( y \right)}} } \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {\sqrt[3]{{xy^2}}} \right)}}, \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor21}If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tH_t}}$-concave (convex) function, then \begin{align*} \frac{2}{3}\left[{\frac{1}{{f\left( {\sqrt {xz} } \right)}} + \frac{1}{{f\left( {\sqrt {yz} } \right)}} + \frac{1}{{f\left( {\sqrt {xy} } \right)}}}\right]\le \,(\ge)\,\frac{1}{3}\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{1}}{{f\left( {\sqrt[3]{{xyz}}} \right)}}, \end{align*} for all $x,y,z\in I$. The equality holds with $f\left(x\right)=\frac{1}{\log\left(x\right)}$, $x \gvertneqq 1$. \end{corollary} \begin{example} Let $f\left(x\right)=\cosh \left(x\right)$, then $f$ is ${\rm{G_tH_t}}$-convex for all $x \ge1$. Applying Corollary \ref{cor21}, then we get \begin{multline*} \frac{2}{3}\left[{\frac{1}{{\cosh\left( {\sqrt {xz} } \right)}} + \frac{1}{{\cosh\left( {\sqrt {yz} } \right)}} + \frac{1}{{\cosh\left( {\sqrt {xy} } \right)}}}\right] \\ \ge \frac{1}{3}\left[ {\frac{1}{{\cosh\left( x \right)}} + \frac{1}{{\cosh\left( y \right)}} + \frac{1}{{\cosh\left( z \right)}}} \right] + \frac{{1}}{{\cosh\left( {\sqrt[3]{{xyz}}} \right)}}, \end{multline*} for all $x,y,z \ge1$. \end{example} \begin{corollary} \label{cor22} If $f : I \to \left(0,\infty\right)$ is ${\rm{G_tH_{1/t}}}$-convex function, then \begin{align*} \frac{3}{2}\left[{\frac{1}{{f\left( {\sqrt {xz} } \right)}} + \frac{1}{{f\left( {\sqrt {yz} } \right)}} + \frac{1}{{f\left( {\sqrt {xy} } \right)}}}\right]\ge 3\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{1}}{{f\left( {\sqrt[3]{{xyz}}} \right)}}, \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=-\log\left(x\right)$, then $f$ is ${\rm{G_tH_{1/t}}}$-convex for all $x>1$. 
Applying Corollary \ref{cor22}, then we get \begin{align*} \frac{3}{2}\left[{\frac{1}{{\log\left( {\sqrt {xz} } \right)}} + \frac{1}{{\log\left( {\sqrt {yz} } \right)}} + \frac{1}{{\log\left( {\sqrt {xy} } \right)}}}\right]\le 3\left[ {\frac{1}{{\log\left( x \right)}} + \frac{1}{{\log\left( y \right)}} + \frac{1}{{\log\left( z \right)}}} \right] + \frac{{1}}{{\log\left( {\sqrt[3]{{xyz}}} \right)}}, \end{align*} for all $x,y,z > 1$. \end{example} \begin{corollary} \label{cor23} If $f : I \to \left(0,\infty\right)$ is $1$-${\rm{G_tH_t}}$-convex function, then \begin{align*} \frac{1}{{f\left( {\sqrt {xz} } \right)}} + \frac{1}{{f\left( {\sqrt {yz} } \right)}} + \frac{1}{{f\left( {\sqrt {xy} } \right)}} \ge \left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{1}{{f\left( {\sqrt[3]{{xyz}}} \right)}}, \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=-\log\left(x\right)$, then $f$ is ${\rm{G_tH_1}}$-convex for all $x>1$. Applying Corollary \ref{cor23}, then we get \begin{align*} \frac{1}{{\log\left( {\sqrt {xz} } \right)}} + \frac{1}{{\log\left( {\sqrt {yz} } \right)}} + \frac{1}{{\log\left( {\sqrt {xy} } \right)}} \le \left[ {\frac{1}{{\log\left( x \right)}} + \frac{1}{{\log\left( y \right)}} + \frac{1}{{\log\left( z \right)}}} \right] + \frac{1}{{\log\left( {\sqrt[3]{{xyz}}} \right)}}, \end{align*} for all $x,y,z >1$. \end{example} \section{Popoviciu inequalities for $h$-${\rm{HN}}$-convex functions} \subsection{The case when $f$ is ${\rm{H_tA_h}}$-convex} \begin{theorem} \label{thm8} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive. If $f : I \to \left(0,\infty\right)$ is ${\rm{H_tA_h}}$-convex (concave) function, then \begin{align} &f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right)\nonumber \\ &\le \,(\ge)\,h\left( {3/2} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right) + f\left( z \right)} \right], \label{eq4.1} \end{align} for all $x,y,z\in I$. \end{theorem} \begin{proof} $f$ is ${\rm{H_tA_h}}$-convex iff the inequality \begin{align*} f\left( {\frac{{\alpha \beta }}{{t\alpha + \left( {1 - t} \right)\beta }}} \right) \le h\left( {1-t} \right)f\left( \alpha \right) + h\left( { t} \right)f\left( \beta \right), \qquad 0\le t\le 1, \end{align*} holds for all $\alpha,\beta \in I$. Assume that $x\le y \le z$. If $ y \le \frac{{3xyz}}{{xy + yz + xz}}$, then \begin{align*} \frac{{3xyz}}{{xy + yz + xz}} \le \frac{{2xz}}{{x + z}} \le z \,\,\text{and}\,\, \frac{{3xyz}}{{xy + yz + xz}} \le \frac{{2yz}}{{y + z}} \le z, \end{align*} so that there exist two numbers $s,t \in \left[0,1\right]$ satisfying \begin{align*} \frac{{2xz}}{{x + z}} = \frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{s{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - s} \right)z}}, \end{align*} and \begin{align*} \frac{{2yz}}{{y + z}} = \frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{t{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - t} \right)z}}. 
\end{align*} For simplicity, set $u=\frac{{xyz}}{{xy + yz + xz}}$, so that $\frac{{3xyz}}{{xy + yz + xz}}=3u$. Summing the reciprocals of the previous two equations, we obtain \begin{align*} \frac{{x + z}}{{2xz}} + \frac{{y + z}}{{2yz}} = \frac{{\left( {s + t} \right){\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {2 - s - t} \right)z}}{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}} = \frac{{3\left( {s + t} \right)u + \left( {2 - s - t} \right)z}}{{3u \cdot z}}. \end{align*} The left-hand side equals $\frac{1}{2x}+\frac{1}{2y}+\frac{1}{z}=\frac{1}{2u}+\frac{1}{2z}=\frac{u+z}{2uz}$; hence, taking reciprocals again and simplifying, we get \begin{align*} \frac{u}{{u + z}} = \frac{u}{{2\left( {s + t} \right)u + \frac{2}{3}\left( {2 - s - t} \right)z}}, \end{align*} which is equivalent to $\left(1-2\left(s+t\right)\right)\left(3u-z\right)=0$. If $3u=z$, that is, $\frac{3xyz}{xy+yz+xz}=z$, then $z=\frac{2xy}{x+y}\le y\le z$, hence $x=y=z$; in this case \eqref{eq4.1} reduces to $3\le h\left(3/2\right)+3h\left(1/2\right)$, which follows from the superadditivity of $h$ and the inequality $2h\left(1/2\right)\ge 1$ (obtained by taking $\alpha=\beta$ and $t=1/2$ in the definition of ${\rm{H_tA_h}}$-convexity). Otherwise $s+t=\frac{1}{2}$, and in this case, since $f$ is ${\rm{H_tA_h}}$-convex, we have \begin{align*} f\left( {\frac{{2xz}}{{x + z}}} \right) &= f\left( {\frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{s{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - s} \right)z}}} \right) \le h\left( s \right)f\left( z \right) + h\left( {1 - s} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right), \\ f\left( {\frac{{2yz}}{{y + z}}} \right) &= f\left( {\frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{t{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - t} \right)z}}} \right) \le h\left( t \right)f\left( z \right) + h\left( {1 - t} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right), \\ f\left( {\frac{{2xy}}{{x + y}}} \right) &\le h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y\right)} \right]. \end{align*} Summing up these inequalities we get \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right) \\ &\le \left[ {h\left( s \right) + h\left( t \right)} \right]f\left( z \right) + \left[ {h\left( {1 - s} \right) + h\left( {1 - t} \right)} \right]f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right)} \right] \\ &\le h\left( {s + t} \right)f\left( z \right) + h\left( {2 - s - t} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right)} \right] \\ &= h\left( {3/2} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right) + f\left( z \right)} \right], \end{align*} which proves the inequality in \eqref{eq4.1}. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq4.1}, we get \begin{align*} 2f\left( {\frac{{2xy}}{{x + y}}} \right) + f\left( {y} \right) \le \,(\ge)\,h\left( {3/2} \right)f\left( {\frac{{3xy}}{{2x + y}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + 2f\left( y \right)} \right], \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor24} If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tA_t}}$-convex (concave) function, then \begin{align*} \frac{2}{3}\left[{f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right)}\right] \le \,(\ge)\, f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + \frac{f\left( x \right) + f\left( y \right) + f\left( z \right)}{3}, \end{align*} for all $x,y,z\in I$. The equality holds with $f\left(x\right)=\frac{1}{x}$, $x> 0$. \end{corollary} \begin{example} Let $f\left(x\right)=\arctan\left(x\right)$; then $f$ is ${\rm{H_tA_t}}$-convex on $\left(0,\infty\right)$.
Applying Corollary \ref{cor24}, then we get \begin{multline*} \frac{2}{3}\left[{\arctan\left( {\frac{{2xz}}{{x + z}}} \right) + \arctan\left( {\frac{{2yz}}{{y + z}}} \right) + \arctan\left( {\frac{{2xy}}{{x + y}}} \right)}\right] \\ \le \arctan\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + \frac{\arctan\left( x \right) + \arctan\left( y \right) + \arctan\left( z \right)}{3}, \end{multline*} \end{example} \begin{corollary} \label{cor25}If $f : I \to \left(0,\infty\right)$ is ${\rm{H_tA_{1/t}}}$-concave function, then \begin{align*} \frac{3}{2}\left[{f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right)}\right] \ge f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + 3\left[ {f\left( x \right) + f\left( y \right) + f\left( z \right)} \right], \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=x^2$, therefore $f$ is ${\rm{H_tA_{1/t}}}$-concave on $x<0$. Applying Corollary \ref{cor25}, then we get \begin{align*} \left( {\frac{{xz}}{{x + z}}} \right)^2 + \left( {\frac{{yz}}{{y + z}}} \right)^2 +\left( {\frac{{xy}}{{x + y}}} \right)^2 \ge \frac{3}{2}\left( {\frac{{ xyz}}{{xy + yz + xz}}} \right)^2 + \frac{1}{18}\left( {x^2+y^2+z^2 } \right), \end{align*} for all $x,y,z<0$. \end{example} \begin{corollary} \label{cor26}If $f : I \to \left(0,\infty\right)$ is ${\rm{H_tA_1}}$-concave function, then \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right)\nonumber \\ &\ge f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + \left[ {f\left( x \right) + f\left( y \right) + f\left( z \right)} \right], \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=x^2$, therefore $f$ is ${\rm{H_tA_1}}$-concave on $\left(-\infty,0\right)$. Applying Corollary \ref{cor26}, then we get \begin{align*} \left( {\frac{{xz}}{{x + z}}} \right)^2 + \left( {\frac{{yz}}{{y + z}}} \right)^2 + \left( {\frac{{xy}}{{x + y}}} \right)^2 \ge \frac{9}{4}\left[{\frac{x^2+y^2+z^2}{9}+\left( {\frac{{xyz}}{{xy + yz + xz}}} \right)^2 } \right], \end{align*} for all $x,y,z<0$. \end{example} \begin{corollary} \label{cor27}In Theorem \ref{thm8}. \begin{enumerate} \item If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tA_h}}$-convex and superadditive, then \begin{align*} &2\left[{f\left( {\frac{{xz}}{{x + z}} } \right)+f\left( { \frac{{yz}}{{y + z}} } \right)+f\left( { \frac{{xy}}{{x + y}}} \right)}\right] \\ & \le f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right)\nonumber \\ &\le h\left( {3/2} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right) + f\left( z \right)} \right] \\ &\le h\left( {3/2} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)f\left( x+y+z \right), \end{align*} for all $x,y,z\in I$. If $f$ is an ${\rm{H_tA_h}}$-concave and subadditive, then the inequality is reversed. 
\item If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tA_h}}$-convex and subadditive, then \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}+\frac{{2yz}}{{y + z}}+\frac{{2xy}}{{x + y}}} \right) \\ &\le f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right) \\ &\le h\left( {3/2} \right)f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right) + f\left( z \right)} \right] \\ &\le 3h\left( {3/2} \right)f\left( {\frac{{xyz}}{{xy + yz + xz}}} \right) + h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right) + f\left( z \right)} \right], \end{align*} for all $x,y,z\in I$. If $f$ is an ${\rm{H_tA_h}}$-concave and superadditive, then the inequality is reversed. \end{enumerate} \end{corollary} \subsection{The case when $f$ is ${\rm{H_tG_h}}$-convex} \begin{theorem} \label{thm9} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive. If $f : I\to \left(0,\infty\right)$ is ${\rm{H_tG_h}}$-convex (concave) function, then \begin{align} &f\left( {\frac{{2xz}}{{x + z}}} \right) f\left( {\frac{{2yz}}{{y + z}}} \right) f\left( {\frac{{2xy}}{{x + y}}} \right)\nonumber \\ &\le \,(\ge)\,\left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \label{eq4.2} \end{align} for all $x,y,z\in I$. \end{theorem} \begin{proof} $f$ is ${\rm{H_tG_h}}$-convex iff the inequality \begin{align*} f\left( {\frac{{\alpha \beta }}{{t\alpha + \left( {1 - t} \right)\beta }}} \right) \le \left[ {f\left( \alpha \right)} \right]^{h\left( { 1-t} \right)} \left[ {f\left( \beta \right)} \right]^{h\left( { t} \right)}, \qquad 0\le t\le 1. \end{align*} holds for all $\alpha,\beta \in I$. As in the proof of Theorem \ref{thm8}, if $x=y=z$, then the inequality holds. If $s+t=\frac{1}{2}$ since $f$ is ${\rm{H_tG_h}}$-convex, we have \begin{align*} f\left( {\frac{{2xz}}{{x + z}}} \right) &= f\left( {\frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{s{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - s} \right)z}}} \right) \le \left[ {f\left( z \right)} \right]^{h\left( s \right)} \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {1 - s} \right)}, \\ f\left( {\frac{{2yz}}{{y + z}}} \right) &= f\left( {\frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{t{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - t} \right)z}}} \right) \le \left[ {f\left( z \right)} \right]^{h\left( t \right)} \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {1 - t} \right)}, \\ f\left( {\frac{{2xy}}{{x + y}}} \right) &\le \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)}. 
\end{align*} Multiplying these inequalities we get \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}} \right)f\left( {\frac{{2yz}}{{y + z}}} \right)f\left( {\frac{{2xy}}{{x + y}}} \right) \\ &\le \left[ {f\left( z \right)} \right]^{h\left( s \right)} \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {1 - s} \right)} \left[ {f\left( z \right)} \right]^{h\left( t \right)} \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {1 - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &= \left[ {f\left( z \right)} \right]^{h\left( s \right) + h\left( t \right)} \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {1 - s} \right) + h\left( {1 - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &\le \left[ {f\left( z \right)} \right]^{h\left( {s + t} \right)} \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {2 - s - t} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &= \left[ {f\left( z \right)} \right]^{h\left( {1/2} \right)} \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)} \right]^{h\left( {1/2} \right)} \\ &= \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{align*} which proves the inequality in \eqref{eq4.2}. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq4.2}, we get \begin{align*} f^2\left( {\frac{{2xy}}{{x + y}}} \right) f\left( {y} \right)\le \,(\ge)\,\left[ {f\left( {\frac{{3xy}}{{2x + y}}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f^2\left( y \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor28}If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tG_t}}$-convex (concave) function, then \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}} \right) f\left( {\frac{{2yz}}{{y + z}}} \right) f\left( {\frac{{2xy}}{{x + y}}} \right)\nonumber \\ &\le \,(\ge)\,\left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{3/2} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{1/2}, \end{align*} for all $x,y,z\in I$. The equality holds with $f\left(x\right)={\rm{e}}^{\frac{1}{x}}$, $x>0$. \end{corollary} \begin{example} Let $f\left(x\right)=\exp\left(x\right)$, $x>0$. Then $f$ is ${\rm{H_tG_t}}$-convex on $\left(0,\infty\right)$. Applying Corollary \ref{cor28} we get \begin{align*} \frac{{4xz}}{{x + z}}+\frac{{4yz}}{{y + z}}+\frac{{4xy}}{{x + y}}\le \frac{{9xyz}}{{xy + yz + xz}}+ x+y+z, \end{align*} for all $x,y,z>0$. \end{example} \begin{corollary} \label{cor29}If $f : I \to \left(0,\infty\right)$ is ${\rm{H_tG_{1/t}}}$-concave, then \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}} \right) f\left( {\frac{{2yz}}{{y + z}}} \right) f\left( {\frac{{2xy}}{{x + y}}} \right)\nonumber \\ &\ge\left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{2/3} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{2}, \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=\exp\left(-x\right)$, $x>0$. Then $f$ is ${\rm{H_tG_{1/t}}}$-concave on $\left(0,\infty\right)$.
Applying Corollary \ref{cor29} we get \begin{align*} \frac{{ xz}}{{x + z}}+\frac{{ yz}}{{y + z}}+ \frac{{ xy}}{{x + y}} \le \frac{{ xyz}}{{xy + yz + xz}}+ x+y+z, \end{align*} for all $x,y,z>0$. \end{example} \begin{corollary} \label{cor30} If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tG_1}}$-concave function, then \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}} \right) f\left( {\frac{{2yz}}{{y + z}}} \right) f\left( {\frac{{2xy}}{{x + y}}} \right)\nonumber \\ &\ge f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right) f\left( x \right)f\left( y \right)f\left( z \right) , \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=\exp\left(-x\right)$, $x>0$. Then $f$ is ${\rm{H_tG_1}}$-concave on $\left(0,\infty\right)$. Applying Corollary \ref{cor30} we get \begin{align*} \frac{{2xz}}{{x + z}}+\frac{{2yz}}{{y + z}}+\frac{{2xy}}{{x + y}} \le \frac{{3xyz}}{{xy + yz + xz}} + x+y+z, \end{align*} for all $x,y,z >0$. \end{example} \begin{corollary} \label{cor13}In the setting of Theorem \ref{thm9}: \begin{enumerate} \item If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tG_h}}$-convex and superadditive function, then \begin{align*} &2 \left[{f\left( {\frac{{xz}}{{x + z}} } \right)+f\left( { \frac{{yz}}{{y + z}} } \right)+f\left( { \frac{{xy}}{{x + y}}} \right)}\right] \\ &\le f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right) \\ &\le \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y,z\in I$. If $f$ is an ${\rm{H_tG_h}}$-concave and subadditive function, then the inequality is reversed. \item If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tG_h}}$-convex and subadditive function, then \begin{align*} &f\left( {\frac{{2xz}}{{x + z}}+\frac{{2yz}}{{y + z}}+\frac{{2xy}}{{x + y}}} \right) \\ &\le f\left( {\frac{{2xz}}{{x + z}}} \right) + f\left( {\frac{{2yz}}{{y + z}}} \right) + f\left( {\frac{{2xy}}{{x + y}}} \right) \\ &\le \left[ {f\left( {\frac{{3xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)} \\ &\le \left[ {3f\left( {\frac{{xyz}}{{xy + yz + xz}}} \right)} \right]^{h\left( {3/2} \right)} \left[ {f\left( x \right)f\left( y \right)f\left( z \right)} \right]^{h\left( {1/2} \right)}, \end{align*} for all $x,y,z\in I$. If $f$ is an ${\rm{H_tG_h}}$-concave and superadditive function, then the inequality is reversed. \end{enumerate} \end{corollary} \subsection{The case when $f$ is ${\rm{H_tH_h}}$-convex} \begin{theorem} Let $h: I\to \left(0,\infty\right)$ be a non-negative super(sub)additive function. If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tH_h}}$-concave (convex) function, then \begin{align} &\frac{1}{{f\left( {\frac{{2xz}}{{x + z}}} \right)}} + \frac{1}{{f\left( {\frac{{2yz}}{{y + z}}} \right)}} + \frac{1}{{f\left( {\frac{{2xy}}{{x + y}}} \right)}}\nonumber \\ &\le \,(\ge)\,h\left( {\frac{1}{2}} \right)\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}},\label{eq4.3} \end{align} for all $x,y,z\in I$.
\end{theorem} \begin{proof} $f$ is ${\rm{H_tH_h}}$-convex iff the inequality \begin{align*} f\left( {\frac{{\alpha \beta }}{{t\alpha + \left( {1 - t} \right)\beta }}} \right) \le \frac{{f\left( \alpha \right)f\left( \beta \right)}}{{h\left( {t} \right)f\left( \alpha \right) + h\left( {1 - t} \right)f\left( \beta \right)}}, \qquad 0\le t\le 1 \end{align*} holds for all $\alpha, \beta \in I$; for an ${\rm{H_tH_h}}$-concave function the inequality is reversed. As in the proof of Theorem \ref{thm8}, if $x=y=z$, then the inequality holds. If $s+t=\frac{1}{2}$, then, since $f$ is ${\rm{H_tH_h}}$-concave, we have \begin{align*} f\left( {\frac{{2xz}}{{x + z}}} \right) &= f\left( {\frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{s{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - s} \right)z}}} \right) \ge \frac{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) \cdot f\left( z \right)}}{{h\left( s \right)f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) + h\left( {1 - s} \right)f\left( z \right)}}, \\ f\left( {\frac{{2yz}}{{y + z}}} \right) &= f\left( {\frac{{{\textstyle{{3xyz} \over {xy + yz + xz}}} \cdot z}}{{t{\textstyle{{3xyz} \over {xy + yz + xz}}} + \left( {1 - t} \right)z}}} \right) \ge \frac{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) \cdot f\left( z \right)}}{{h\left( t \right)f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) + h\left( {1 - t} \right)f\left( z \right)}}, \\ f\left( {\frac{{2xy}}{{x + y}}} \right) &\ge \frac{{f\left( x \right)f\left( y \right)}}{{h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right)} \right]}}. \end{align*} Therefore, summing the reciprocals of the above inequalities, we get \begin{align*} &\frac{1}{{f\left( {{\textstyle{{2xz} \over {x + z}}}} \right)}} + \frac{1}{{f\left( {{\textstyle{{2yz} \over {y + z}}}} \right)}} + \frac{1}{{f\left( {{\textstyle{{2xy} \over {x + y}}}} \right)}} \\ &\le \frac{{h\left( s \right)f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) + h\left( {1 - s} \right)f\left( z \right) + h\left( t \right)f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) + h\left( {1 - t} \right)f\left( z \right)}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) \cdot f\left( z \right)}} \\ &\qquad+ \frac{{h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right)} \right]}}{{f\left( x \right)f\left( y \right)}} \\ &= \frac{{\left[ {h\left( s \right) + h\left( t \right)} \right]f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) + \left[ {h\left( {1 - s} \right) + h\left( {1 - t} \right)} \right]f\left( z \right)}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) \cdot f\left( z \right)}} + \frac{{h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right)} \right]}}{{f\left( x \right)f\left( y \right)}} \\ &\le \frac{{h\left( {s + t} \right)f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) + h\left( {2 - s - t} \right)f\left( z \right)}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) \cdot f\left( z \right)}} + \frac{{h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right)} \right]}}{{f\left( x \right)f\left( y \right)}} \\ &= \frac{{h\left( {1/2} \right)f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) + h\left( {3/2} \right)f\left( z \right)}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right) \cdot f\left( z \right)}} + \frac{{h\left( {1/2} \right)\left[ {f\left( x \right) + f\left( y \right)} \right]}}{{f\left( x \right)f\left( y \right)}} \\ &= h\left( {\frac{1}{2}} \right)\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}}, \end{align*} which proves the inequality in \eqref{eq4.3}. \end{proof} \begin{remark} Setting $z=y$ in \eqref{eq4.3}, we get \begin{align*} \frac{2}{{f\left( {\frac{{2xy}}{{x + y}}} \right)}} + \frac{1}{{f\left( {y} \right)}} \le \,(\ge)\,h\left( {\frac{1}{2}} \right)\left[ {\frac{1}{{f\left( x \right)}} + \frac{2}{{f\left( y \right)}} } \right] + \frac{{h\left( {3/2} \right)}}{{f\left( {{\textstyle{{3xy} \over {2x + y}}}} \right)}}, \end{align*} for all $x,y\in I$. \end{remark} \begin{corollary} \label{cor31}If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tH_t}}$-concave (convex) function, then \begin{align*} &\frac{2}{3}\left[ {\frac{1}{f\left( {\frac{{2xz}}{{x + z}}} \right)} + \frac{1}{f\left( {\frac{{2yz}}{{y + z}}} \right)} + \frac{1}{f\left( {\frac{{2xy}}{{x + y}}} \right)}}\right]\nonumber \\ &\le \,(\ge)\, \frac{1}{3}\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{1}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}}, \end{align*} for all $x,y,z\in I$. The equality holds with $f\left(x\right)=x$, $x>1$. \end{corollary} \begin{example} Let $f\left(x\right)=\arctan\left(x\right)$, $x>0$. Then $f$ is ${\rm{H_tH_t}}$-concave on $\left(0,\infty\right)$. Applying Corollary \ref{cor31}, we get \begin{align*} &\frac{2}{3}\left[ {\frac{1}{\arctan\left( {\frac{{2xz}}{{x + z}}} \right)} + \frac{1}{\arctan\left( {\frac{{2yz}}{{y + z}}} \right)} + \frac{1}{\arctan\left( {\frac{{2xy}}{{x + y}}} \right)}}\right] \\ &\le \frac{1}{3}\left[ {\frac{1}{{\arctan\left( x \right)}} + \frac{1}{{\arctan\left( y \right)}} + \frac{1}{{\arctan\left( z \right)}}} \right] + \frac{{1}}{{\arctan\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}}, \end{align*} for all $x,y,z>0$. \end{example} \begin{corollary} \label{cor32} If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tH_{1/t}}}$-convex function, then \begin{align*} &\frac{3}{2}\left[ {\frac{1}{f\left( {\frac{{2xz}}{{x + z}}} \right)} + \frac{1}{f\left( {\frac{{2yz}}{{y + z}}} \right)} + \frac{1}{f\left( {\frac{{2xy}}{{x + y}}} \right)}}\right]\nonumber \\ &\ge 3\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{{1}}{{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}}, \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=-\log\left(x\right)$, $x>1$. Then $f$ is ${\rm{H_tH_{1/t}}}$-convex for $x>1$. Applying Corollary \ref{cor32}, we get \begin{align*} &\frac{3}{2}\left[ {\frac{1}{\log\left( {\frac{{2xz}}{{x + z}}} \right)} + \frac{1}{\log\left( {\frac{{2yz}}{{y + z}}} \right)} + \frac{1}{\log\left( {\frac{{2xy}}{{x + y}}} \right)}}\right] \\ &\le 3\left[ {\frac{1}{{\log\left( x \right)}} + \frac{1}{{\log\left( y \right)}} + \frac{1}{{\log\left( z \right)}}} \right] + \frac{{1}}{{\log\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}}, \end{align*} for all $x,y,z>1$.
\end{example} \begin{corollary} \label{cor33}If $f : I \to \left(0,\infty\right)$ is an ${\rm{H_tH_1}}$-convex function, then \begin{align*} &\frac{1}{f\left( {\frac{{2xz}}{{x + z}}} \right)} + \frac{1}{f\left( {\frac{{2yz}}{{y + z}}} \right)} + \frac{1}{f\left( {\frac{{2xy}}{{x + y}}} \right)} \\ &\ge\left[ {\frac{1}{{f\left( x \right)}} + \frac{1}{{f\left( y \right)}} + \frac{1}{{f\left( z \right)}}} \right] + \frac{1}{f\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}, \end{align*} for all $x,y,z\in I$. \end{corollary} \begin{example} Let $f\left(x\right)=-\log\left(x\right)$, $x>1$. Then $f$ is ${\rm{H_tH_1}}$-convex for $x>1$. Applying Corollary \ref{cor33}, we get \begin{align*} &\frac{1}{\log\left( {\frac{{2xz}}{{x + z}}} \right)} + \frac{1}{\log\left( {\frac{{2yz}}{{y + z}}} \right)} + \frac{1}{\log\left( {\frac{{2xy}}{{x + y}}} \right)} \\ &\le\left[ {\frac{1}{{\log\left( x \right)}} + \frac{1}{{\log\left( y \right)}} + \frac{1}{{\log\left( z \right)}}} \right] + \frac{1}{\log\left( {{\textstyle{{3xyz} \over {xy + yz + xz}}}} \right)}, \end{align*} for all $x,y,z>1$. \end{example} \end{document}
\begin{document} \author{V.A.~Vassiliev} \address{National Research University Higher School of Economics \\ Steklov Mathematical Institute of Russian Academy of Sciences} \email{[email protected]} \thanks{Research supported by the Russian Science Foundation grant, project 16-11-10316} \title{Local Petrovskii lacunas at parabolic singular points of wavefronts of strictly hyperbolic PDE's} \date{} \begin{abstract} We enumerate the local Petrovskii lacunas (that is, the domains of local regularity of the principal fundamental solutions of strictly hyperbolic PDE's with constant coefficients in ${\mathbb R}^N$) at the {\em parabolic} singular points of their wavefronts (that is, at the points of types $P_8^1$, $P_8^2$, $\pm X_9$, $X_9^1$, $X_9^2$, $J_{10}^1$, $J_{10}^3$). These points form the next difficult family of classes in the natural classification of singular points after the so-called {\em simple} singularities $A_k, D_k, E_6, E_7$, $E_8$, studied previously. We also present a computer program counting topologically different morsifications of critical points of smooth functions, and hence also the local components of the complement of a generic wavefront at its singular points. Keywords: wavefront, lacuna, hyperbolic operator, sharpness, morsification, Petrovskii cycle, Petrovskii criterion. \end{abstract} \maketitle \section{Introduction} The {\em lacunas} of a hyperbolic PDE are the components of the complement of its wavefront such that the principal fundamental solution of this equation can be extended from any such component to a regular function in some neighborhood of this component. The theory of lacunas was created by I.G.~Petrovskii \cite{Petrovskii 45}. He related this regularity condition to the topology and geometry of algebraic manifolds, and gave a criterion for it in terms of certain homology classes of complex projective algebraic manifolds defined by the principal symbol of the hyperbolic operator. This theory was further developed in numerous works including \cite{Davydova 45}, \cite{Borovikov 59}, \cite{Borovikov 61}, \cite{Leray 62}, \cite{ABG 70}, \cite{ABG 73}, \cite{Gording 77}, \cite{Vassiliev 86}, \cite{Varchenko 87}, \cite{Vassiliev 92}, \cite{APLt}; for an important preceding work see \cite{Hadamard 32}. Most of these works also treat the local aspect of the problem, explicitly formulated in \cite{ABG 73} in terms of {\em local lacunas} and a local version of the Petrovskii topological condition. Any hyperbolic operator with constant coefficients in ${\mathbb R}^N$ admits a unique fundamental solution with support in a proper cone in the half-space ${\mathbb R}^N_+$ of the Cauchy problem. This fundamental solution is regular (that is, locally coincides with analytic functions) everywhere in ${\mathbb R}^N$ outside some conic semialgebraic hypersurface in ${\mathbb R}^N_+$, called the {\em wavefront} of our operator. We consider only {\em strictly} hyperbolic operators, which means that the cone $A(P) \subset \check {\mathbb R}^N$ of zeros of the principal symbol of our operator $P$ is non-singular outside the origin in $\check {\mathbb R}^N$. Here $\check {\mathbb R}^N$ is the dual space of {\em momenta} with coordinates $\eta_j \equiv \frac{1}{i}\frac{\partial}{\partial x_j},$ so that the operator $P$ is considered as a polynomial in these variables.
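For orientation, the simplest example (standard, and not among the operators with singular wavefronts studied in this paper) is the wave operator $P=\partial^2_{x_N}-\partial^2_{x_1}-\dots-\partial^2_{x_{N-1}}$: its principal symbol is $\pm(\eta_N^2-\eta_1^2-\dots-\eta_{N-1}^2)$, so that $$A(P)=\{\eta\in\check{\mathbb R}^N: \ \eta_N^2=\eta_1^2+\dots+\eta_{N-1}^2\}$$ is the light cone, which is non-singular outside the origin; thus $P$ is strictly hyperbolic, and its wavefront is the (projectively self-dual) light cone $\{x: \ x_N^2=x_1^2+\dots+x_{N-1}^2\}$ in the half-space $x_N\ge 0$, all of whose points except the origin are non-singular.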
In this case the wavefront $W(P) \subset {\mathbb R}^N_+$ is just the cone projectively dual to $A(P)$, that is, the union of those rays from the origin in ${\mathbb R}_+^N$ whose orthogonal hyperplanes in $\check {\mathbb R}^N$ are tangent to the cone $A(P)$. The singular points of the wavefront (besides the origin) correspond via the projective duality to the inflection points of $A(P)$, that is, to those points where the rank of the second fundamental form of this cone is smaller than $N-2$. A deep classification of these singular points was developed in the works by V.I.~Arnold, see e.g. \cite{AVG 82}. \begin{definition} \label{dll} A {\it local $C^\infty$-lacuna} $($respectively, {\it holomorphic local lacuna}$)$ at some point of the wavefront is any component of the complement of the wavefront in a neighborhood of this point, such that the restriction of the principal fundamental solution to this component can be extended to a $C^\infty$-smooth function on the closure of this component $($respectively, to an analytic function in an entire neighborhood of our point). \end{definition} One and the same (global) component of the complement of the wavefront can be a local lacuna at some points of its boundary and fail to be one at others. All local lacunas occurring in the neighborhoods of all singularities of wavefronts from an initial segment of the Arnold classification (the so-called {\em simple} singularities) were enumerated in \cite{Vassiliev 86} and \cite{Vassiliev 92}. In the present work we study and enumerate the holomorphic local lacunas neighboring the singularities whose classes form the next natural segment of this classification. \subsection{Previous results on local lacunas} The non-singular points of the wavefront of the operator $P$ correspond to the points of the cone $A(P)$ at which its second fundamental form is maximally non-degenerate. The existence and the number of local lacunas close to such points of the wavefront can be determined in terms of its differential geometry, see \cite{Davydova 45} and \cite{Borovikov 59}. Namely, a component of the complement of the wavefront at such a point is a local lacuna if and only if the positive inertia index of the second fundamental form of the wavefront (with the normal directed into this component) is even. A.M.~Davydova \cite{Davydova 45} has proved the ``only if'' part of this statement: if this signature condition is not satisfied, then already the leading term of the asymptotics of the fundamental solution behaves as a half-integer (but not integer) power of the distance from the wavefront. V.A.~Borovikov \cite{Borovikov 59}, using complicated analytic estimates, has proved that otherwise we have a local lacuna, that is, {\em all} terms of the asymptotic expansion of this solution in terms of this distance have integer powers, and the corresponding power series converges. His result was later explained in \cite{ABG 73} as a corollary of the removable singularity theorem by moving into the complex domain. All local lacunas neighboring the simplest singular points of the wavefronts, of types $A_2$ (cuspidal edges, see Fig.~\ref{a2}) and $A_3$ (swallowtails), were counted in \cite{Gording 77}. An interesting situation occurs close to a point of the cuspidal edge, if $N$ is odd and the inertia indices of the quadratic part of the {\em generating function} of our point (see \S \ref{gengen} below) are also odd.
For all other combinations of these numbers, if a component of the complement of the wavefront close to the cuspidal edge is not a local lacuna, then already the Davydova--Borovikov signature condition from the side of this component is not satisfied at some non-singular points of the wavefront arbitrarily close to the edge. However, in the case of odd $N$ and $i_{\pm}$ the Davydova--Borovikov condition from the side of the bigger component (see Fig.~\ref{a2}) is satisfied at all nearby non-singular points, nevertheless this component is not a local lacuna (and also is not for all other combinations of $N$ and $i_{\pm}$). \begin{figure} \caption{Cuspidal edge in the 3-dimensional space} \label{a2} \end{figure} \begin{table} \begin{center} \caption{Numbers of local lacunas at simple singularities of wavefronts} \label{t12} \begin{tabular}{|l|c|c|c|c|} \hline Singularity & $N$ even & $N$ even & $N$ odd & $N$ odd \cr class & $i_+$ even & $i_+$ odd & $i_+$ even & $i_+$ odd \cr \hline $A_1$ & 2 & 0 & 1 & 1 \cr $A_{2k}, \; k\ge 1$ & 0 & 0 & 1 & 0 \cr $\pm A_{2k+1}, \; k \ge 1$ & 0 & 1 & 1 & 1 \cr $D_4^-$ & 0 & 3 & 1 & 1 \cr $D_{2k}^+, \; k \ge 2$ & 0 & 0 & 1 & 1 \cr $D_{2k}^-, \; k \ge 3$ & 0 & 2 & 1 & 1 \cr $\pm D_{2k+1}, \; k \ge 2$ & 0 & 0 & 1 & 1 \cr $\pm E_6$ & 0 & 0 & 1 & 1 \cr $E_7$ & 0 & 0 & 1 & 1 \cr $E_8$ & 0 & 0 & 1 & 1 \cr \hline \end{tabular} \end{center} \end{table} All local lacunas for all {\em simple} singularities of wavefronts (that is, singularities of classes $A_k, D_k,$ $E_6,$ $E_7,$ $E_8$ in the Arnold's classification) were found in \cite{Vassiliev 86}, see Table \ref{t12} for the number of them. Atiyah, Bott and G\aa rding \cite{ABG 73} have introduced the local version of the homological Petrovskii criterion, and proved that it implies that the corresponding local component of the complement of the wavefront is a holomorphic local lacuna. In \cite{Vassiliev 86} the converse implication was proved for {\em finite type} points of wavefronts (that is, for points corresponding by the projective duality to only finitely many lines in the complexification of $A(P)$; this condition is satisfied for all singular points of wavefronts of {\em generic} operators). In \cite{Vassiliev 92}, an easy geometric criterion for a component to be a local lacuna of a simple singularity was proved. Namely, it follows from the above described facts that if a local component of the complement of the wavefront is a local lacuna, then the Davydova--Borovikov signature condition is satisfied at all non-singular points of its boundary, and in addition our component is the ``smaller'' component of the complement of the wavefront at all points of type $A_2$ (that is, cuspidal edges) of this boundary, see Fig.~\ref{a2}. In \cite{Vassiliev 92} it was proved that if the singularity is simple, and some technical condition (the {\em versality} of the generating family, which always holds for wavefronts of generic operators) is satisfied, then this necessary condition is also sufficient; moreover, in this case the notions of local $C^\infty$-lacunas and holomorphic local lacunas are equivalent. 
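A familiar special case illustrating the row $A_1$ of Table \ref{t12} (mentioned here only for orientation) is the wave operator: at the non-singular points of the light cone the quadratic part of the generating function is definite, so that $i_+$ is even. In ${\mathbb R}^{3+1}$ the fundamental solution is concentrated on the cone itself, and both local components of the complement of the front are local lacunas (the strong Huygens principle), while in ${\mathbb R}^{2+1}$ the fundamental solution is non-zero and unbounded inside the cone, so that only the exterior component is a local lacuna; this agrees with the values $2$ (for even $N$) and $1$ (for odd $N$) in this row.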
\begin{table} \begin{center} \caption{Numbers of local lacunas at parabolic singularities} \label{t13} \begin{tabular}{|l|c|c|c|c|} \hline Singularity & $N$ even & $N$ even & $N$ odd & $N$ odd \cr class & $i_+$ even & $i_+$ odd & $i_+$ even & $i_+$ odd \cr \hline $P_8^1$ & $0_c$ & $0_c$ & $\ge 2$ & $0$ \cr \hline $P_8^2$ & $0_c$ & $0_c$ & $\ge 2$ & 0 \cr \hline $\pm X_9$ & $1_c$ & 0 & $\ge 2$ & 0 \cr \hline $X_9^1$ & 0 & 0 & 0 & 0 \cr \hline $X_9^2$ & 0 & $\ge 4$ & 0 & 0 \cr \hline $J_{10}^3$ & $0_c$ & $\ge 1$ & 0 & 0 \cr \hline $J_{10}^1$ & $0_c$ & $0_c$ & 0 & 0 \cr \hline \end{tabular} \end{center} \end{table} The next important natural set of singularity classes of wavefronts is that of {\it parabolic} (or {\it simple-elliptic}) singularities, see \cite{AVG 82}. It consists of seven one-parameter families of singularities listed in the left-hand column of Table \ref{t13}. The numbers of local lacunas at these singularities are shown in the remaining columns of this table: this (together with an explicit description of these lacunas) is the main result of the present article, see Theorem \ref{thm1} below. However, we need some preliminaries to describe these singularities and formulate this result accurately. \subsection{Generating functions and generating families of wavefronts} \label{gengen} Given a point $x \in {\mathbb R}^N \setminus 0$ of the wavefront of a strictly hyperbolic operator $P$, the local geometry of this wavefront at this point (in particular the set of local components of its complement at this point) is determined by its {\em generating function}, which is just the function $f$ in the local equation $$\xi_{0} = f(\xi_1, \dots, \xi_{N-2}) $$ of the projectivization $A^*(P) \subset \check{{\mathbb {RP}}}^{N-1}$ of the set of zeros of the principal symbol of our operator. Here $\xi_0, \dots, \xi_{N-2}$ are affine local coordinates in $\check {\mathbb {RP}}^{N-1}$ with the origin at the tangency point of the hypersurface $A^*(P)$ and the hyperplane $L(x)$ orthogonal to the line containing the point $x$, such that this hyperplane $L(x)$ is distinguished by the equation $\xi_0=0$. In particular, $f$ has a critical point at the origin; the dual piece of the wavefront is smooth if this critical point is Morse. Denote by $n$ the number $N-2$ of variables of generating functions of wavefronts in ${\mathbb R}^N$. The {\em parabolic} singularity classes studied in this work have the generating functions which can be reduced by a local diffeomorphism in ${\mathbb R}^{n}$ to the following normal forms. The functions of class $P_8$ in appropriate (curvilinear) local coordinates have the formula $\varphi(x_1, x_2, x_3)+Q(x_4, \dots, x_n),$ where $\varphi$ is a non-degenerate homogeneous cubic polynomial, and $Q$ is a non-degenerate quadratic function in the remaining coordinates, e.g. $\pm x_4^2 \pm \dots \pm x_n^2$. The projectivization of the zero set of the polynomial $\varphi$ can consist of one or two curves, therefore we obtain two subclasses, called $P_8^1$ and $P_8^2$ respectively. 
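For instance (a standard illustration, not used in what follows), the Fermat cubic $\varphi=x_1^3+x_2^3+x_3^3$ defines a smooth projective curve whose set of real points is connected, and hence gives a singularity of class $P_8^1$, while the Weierstrass cubic $\varphi=x_2^2x_3-x_1(x_1-x_3)(x_1-2x_3)$ defines a smooth curve whose set of real points consists of two components (an oval and an unbounded branch in the affine chart $x_3=1$), and hence gives a singularity of class $P_8^2$.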
The remaining parabolic functions have the following normal forms (where $Q$ are non-degenerate quadratic functions in coordinates $x_3, \dots, x_n$): $$ \begin{array}{cll} \pm X_9 & \pm (x_1^4 + \alpha x_1^2 x_2^2 + x_2^4 + Q) & \alpha > -2 \\ X_9^1 & x_1x_2(x_1^2 + \alpha x_1x_2 + x_2^2) + Q & \alpha^2 < 4 \\ X_9^2 & x_1x_2(x_1 + x_2)(x_1 + \alpha x_2) + Q & \alpha \in (0,1) \\ J_{10}^3 & x_1(x_1 - x_2^2)(x_1 - \alpha x_2^2) + Q & \alpha \in (0,1) \\ J_{10}^1 & x_1(x_1^2 + \alpha x_1x_2^2 + x_2^4) + Q & \alpha^2 < 4 \end{array} $$ The index $i_+$ in Table \ref{t13} is the positive inertia index of the quadratic part $Q$ of the corresponding function. Another important notion, reducing the study of wavefronts to the context of critical points of functions, is that of {\em generating families}. In our case, this is the name of the family of functions \begin{equation} \label{genfam} f_\lambda \equiv f(\xi_1, \dots, \xi_{N-2}) -\lambda_0 - \lambda_1 \xi_1 - \dots - \lambda_{N-2}\xi_{N-2},\end{equation} depending on the parameter $\lambda =(\lambda_0, \dots, \lambda_{N-2}) \in {\mathbb R}^{N-1}$. It is natural to consider these parameters $\lambda_i$ as local affine coordinates in ${\mathbb {RP}}^{N-1}$ close to the point $\{x\}$. Indeed, any collection $\lambda$ of these numbers defines a hyperplane $L(\lambda) \subset \check {\mathbb {RP}}^{N-1}$ distinguished by the equation $$\xi_0=\lambda_0 + \lambda_1 \xi_1 + \dots + \lambda_{N-2}\xi_{N-2},$$ hence a line in ${\mathbb R}^N$ or a point in ${\mathbb {RP}}^{N-1}$. The projectivized wavefront close to our point in ${\mathbb {RP}}^{N-1}$ consists of all {\em discriminant} values of the parameters of the family (\ref{genfam}), that is, of those values of $\lambda$ for which the function (\ref{genfam}) has critical value 0 (which is equivalent to the tangency of hypersurfaces $A^*(P)$ and $L(\lambda)$). So, the role of the (projectivized) wavefronts in the language of critical points of functions is played by the {\em discriminant varieties} of {\em function deformations}. Recall that a {\em deformation} of the function $f: {\mathbb R}^n \to {\mathbb R}$ is a function $F: {\mathbb R}^n \times {\mathbb R}^l \to {\mathbb R}$ considered as a family of functions $f_\lambda \equiv F(\cdot ,\lambda): {\mathbb R}^n \to {\mathbb R}$ depending on the parameter $\lambda \in {\mathbb R}^l$, such that $f_0$ coincides with the deformed function $f$. The discriminant variety of such a deformation is the set of values of its parameter $\lambda$ such that the corresponding function $f_\lambda$ has a critical point with zero critical value. In particular, the generating family of a wavefront is a deformation of its generating function, and the wavefront itself is the discriminant set of this deformation. The notion of (holomorphic) local lacunas has sense for arbitrary deformations of critical points of real functions (not necessarily related with the wavefronts): a component of the complement of the discriminant set of such a deformation is a local lacuna if some homological condition (the triviality of the local Petrovskii homology class, described in the next section) concerning the level manifolds $f_\lambda^{-1}(0)$ is satisfied for values of $\lambda$ from this component. If our deformation is the generating family of the wavefront of a hyperbolic operator, then this notion turns out to be equivalent to the one described previously in the terms of fundamental solutions, see Proposition \ref{petcrit} of the next section. 
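The simplest illustration of these notions (standard, and given here only for orientation) is the generating function $f(\xi_1)=\xi_1^3$ of a cuspidal edge point for $N=3$. The corresponding generating family is $f_\lambda(\xi_1)=\xi_1^3-\lambda_0-\lambda_1\xi_1$, and $\lambda=(\lambda_0,\lambda_1)$ is a discriminant value exactly when $f_\lambda$ and $f_\lambda'$ have a common zero, that is, when $$27\lambda_0^2=4\lambda_1^3;$$ the cone over this semicubical parabola is the surface with a cuspidal edge shown in Fig.~\ref{a2}.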
Among the deformations $F(x,\lambda)$ of a function $f$ with an isolated critical point there is a distinguished class of {\em versal deformations}, that is, of sufficiently ample deformations such that all other deformations can be reduced to them in some precise sense, see \cite{AVG 82}, \cite{APLt}. The number of parameters of a versal deformation cannot be smaller than the {\em Milnor number} $\mu(f)$ of our critical point (which is the standard lower index of the notation of its class, like 8 for $P_8^1$); on the other hand, almost all deformations depending on $\ge \mu(f)$ parameters satisfy this condition. An important corollary of the notion of versality is as follows: if the parameter space of some versal deformation of the function $f$ contains no local lacunas, then the same is true for any other deformation of it. \begin{theorem}\label{thm1} The number of holomorphic local lacunas in the parameter space of a versal deformation of a parabolic critical point of a function $f_0:{\mathbb R}^{N-2} \to {\mathbb R}$ is equal to the number indicated in the corresponding cell of Table \ref{t13} or satisfies the inequality given in this table. $($The subscript $ _c$ in some cells means that in the corresponding case the upper bound on the number of local lacunas has only a computer proof$)$. In the case of non-versal deformations this statement remains true for all cells of Table \ref{t13} where we have $0$ or $0_c$. If our deformation contains all functions $f_0 + \mbox{const}$ $($which holds for all generating families of wavefronts$)$ then both signs $1_c$ and $\ge 2$ for the singularity $\pm X_9$ can be replaced by $\ge 1$. \end{theorem} \begin{remark} \rm Some of these results were obtained earlier, see e.g. \cite{AVGL 89}, \cite{APLt}. The new results are as follows: \begin{itemize} \item the singularity $P_8^2$ is investigated for the first time; \item for $\pm X_9$, $N$ even, $i_+$ even: the estimate $\ge 1$ is replaced by the exact value $1_c$; \item for $X_9^2$, $N$ even, $i_+$ odd: $\ge 2$ is replaced by $\ge 4$; \item for $J_{10}^3$, $N$ even, $i_+$ even: the absence of local lacunas is proved; \item for $J_{10}^1$, $N$ even, $i_+$ even or odd: the absence of local lacunas is proved in both cases. \end{itemize} \end{remark} All local lacunas mentioned in non-zero cells of Table \ref{t13} will be presented in \S \ref{realiz}. All zeros (but not the signs $0_c$) in this table follow from a topological obstruction described in \S \ref{bound}. All signs $0_c$ are proved by a combinatorial Fortran program which enumerates all possible topological types of morsifications of given critical points and checks the local Petrovskii condition for them. This program has also found for the first time the local lacunas for singularities $P_8^1$, $X_9^2$ and $J_{10}^3$ presented below, as well as one of the local lacunas for $\pm X_9$, see Proposition \ref{otherx9} on page \pageref{otherx9}. This program is described in \S \ref{progr}. The upper bound $1_c$ for the singularity $\pm X_9$ with even $N$ and $i_+$ will be proved in \S \ref{noextra}. \begin{conjecture} In all cells of Table \ref{t13} $($except maybe for the case $X_9^2)$ the inequalities can be replaced by equalities. \end{conjecture} Here is a piece of information supporting this conjecture. Given a Morse perturbation $f_\lambda$ of a function $f$ with a complicated critical point at the origin, denote by $\chi(\lambda)$ the number of its real critical points with negative critical values and even Morse index minus the number of its real critical points with negative critical values but odd Morse index.
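For example (a direct check of the definition), for the morsification $f_\lambda(x_1,x_2)=x_1^3-\varepsilon x_1+x_2^2$, $\varepsilon>0$, of the $A_2$ singularity $x_1^3+x_2^2$ the two real critical points are $(\pm\sqrt{\varepsilon/3},\,0)$; the first one has a negative critical value and Morse index $0$, the second one has a positive critical value and Morse index $1$, so that $\chi(\lambda)=1$.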
\begin{proposition} \label{chi} The perturbations of our singularity $f$, which belong to the local lacunas, cannot have values of $\chi(\lambda)$ different from those for perturbations presented in \S \ref{realiz}. \end{proposition} This fact is also proved by our program. $\Box$ \section{Local Petrovskii classes and their properties} Let $f: ({\mathbb C}^n, {\mathbb R}^n, 0) \to ({\mathbb C},{\mathbb R},0)$ be a holomorphic function with isolated critical point at $0$, $\mu(f)$ its {\em Milnor number} (see \cite{AVG 82}), $B_\varepsilon \subset {\mathbb C}^n$ a ball centered at $0$ with a small radius $\varepsilon$. Let $f_\lambda$ be a very small (with respect to $\varepsilon$) perturbation of $f$, such that $0$ is not a critical value of $f_\lambda$ in $B_\varepsilon$. Consider the corresponding {\it Milnor fiber} $V_\lambda \equiv f_\lambda^{-1}(0) \cap B_\varepsilon$. It is a smooth $(2n-2)$-dimensional manifold with boundary $\partial V_\lambda \equiv V_\lambda \cap \partial B_\varepsilon$. By the Milnor's theorem it is homotopy equivalent to the wedge of $\mu(f)$ spheres $S^{n-1}$, in particular $\tilde H_{n-1}(V_\lambda) \simeq {\mathbb Z}^{\mu(f)} \simeq \tilde H_{n-1}(V_\lambda, \partial V_\lambda)$; here $\tilde H_*(V_\lambda)$ denotes the homology group reduced modulo a point, and $\tilde H_*(V_\lambda, \partial V_\lambda)$ the relative homology group reduced additionally modulo the fundamental cycle. Also, $\mu(f)$ is equal to the number of critical points of $f_\lambda$ in $B_\varepsilon$, if the function $f_\lambda$ is {\em Morse} and is indeed sufficiently close to $f$. We will assume that an orientation of ${\mathbb R}^n$ is fixed, and the differential form $dx_1 \wedge \dots \wedge dx_n$ is positive with respect to this orientation. There are two important elements in the group $\tilde H_{n-1}(V_\lambda, \partial V_\lambda)$, the {\em even} and {\em odd Petrovskii classes}. The first of them, $P_{\mbox{ev}}(\lambda),$ is presented by the cycle of real points ${\mathbb R}^n \cap V_\lambda$ oriented by the differential form $(dx_1 \wedge \dots \wedge dx_n) /df_\lambda$. The definition of the second class, $P_{\mbox{odd}}$, is a bit more complicated. First we consider an $n$-dimensional cycle $\Pi(\lambda)$ in $B_\varepsilon \setminus V_\lambda$, presented by two copies of canonically oriented ${\mathbb R}^n$, slightly moved in a small neighborhood of the submanifold ${\mathbb R}^n \cap V_\lambda$ in $B_\varepsilon$ so that they streamline $V_\lambda$ from two different sides in the complex domain: for the case $n=1$ see the left-hand part of Fig. \ref{oddp} where the set $V_\lambda$ is marked by thick dots. \begin{figure} \caption{Odd Petrovskii cycle for $n=1$} \label{oddp} \end{figure} The {\em odd Petrovskii class} $P_{\mbox{odd}}$ is defined as the preimage of the homology class of this cycle under the {\it Leray tube operator} $$\tilde H_{n-1}(V_\lambda, \partial V_\lambda) \to \tilde H_n(B_\varepsilon \setminus V_\lambda, \partial B_\varepsilon),$$ which sends any relative cycle in the submanifold $V_\lambda$ to the union of boundaries of the fibers of the tubular neighborhood of this submanifold over the points of this cycle. This operator is conjugate via the Poincar\'e--Lefschetz isomorphisms to the boundary isomorphism $\tilde H_n(B_\varepsilon, V_\lambda) \to \tilde H_{n-1}(V_\lambda),$ and also is an isomorphism. 
Let $F:({\mathbb R}^n \times {\mathbb R}^l,0) \to ({\mathbb R},0)$ be a deformation of the function $f$, and $\Sigma(F) \subset {\mathbb R}^l$ be the set of discriminant values of the parameter $\lambda$. \begin{definition} \rm A local (close to the point $0 \in {\mathbb R}^l$) connected component of the set ${\mathbb R}^l \setminus \Sigma(F)$ is called an {\em even} (respectively, {\em odd}) {\em local lacuna} of the deformation $F$ if for any value of $\lambda$ from this component the element $P_{\mbox{ev}}(\lambda)$ (respectively, $P_{\mbox{odd}}(\lambda)$) of the group $\tilde H_{n-1}(V_\lambda, \partial V_\lambda)$, related to the corresponding perturbation $f_\lambda$ of $f$, is equal to 0. \end{definition} This definition is consistent with Definition \ref{dll} for the following reason. \begin{proposition} \label{petcrit} Suppose that $x \in {\mathbb R}^N \setminus 0$ is a point of the wavefront of a strictly hyperbolic operator, the critical point of its generating function $f$ is isolated, and $y$ is a point outside the wavefront but very close to $x$. Let $y^* \in {\mathbb {RP}}^{N-1}$ be the direction of the line containing $y$, $\lambda=(\lambda_0, \dots, \lambda_{N-2})$ the local coordinates of the point $y^*$ according to \S \ref{gengen}, and $f_\lambda$ the perturbation of $f$ given by the formula $($\ref{genfam}$)$ with these values of $\lambda_i$. Then the point $y$ belongs to a local $($close to $x)$ holomorphic lacuna of our hyperbolic operator if and only if the point $\lambda$ belongs to an even local lacuna of the corresponding generating family $($if $N$ is even$)$ or to an odd local lacuna $($if $N$ is odd$)$. \end{proposition} The part ``if'' of this proposition was essentially proved in \cite{ABG 73}, and ``only if'' was conjectured there and proved in the translator's note to the Russian translation of \cite{ABG 73}, see also \cite{Vassiliev 86}. This proposition reduces the study of local lacunas to a problem on deformations of real critical points of functions, namely to the calculation of local Petrovskii classes of perturbations of such functions, and to the hunt for those perturbations for which these classes vanish. \subsection{Important example} \label{examin} If the function $f$ has a minimum $($res\-pec\-ti\-vely, maximum$)$ point at $0$, then the function $f+\tau$ $($respectively, $f-\tau)$ with sufficiently small positive $\tau$ belongs to an even local lacuna. Indeed, in this case the set of real points of the corresponding Milnor fiber is empty. \begin{conjecture} If a function $f(x_1, x_2)$ has an isolated non-Morse critical point at 0, then its deformations have no even local lacunas unless $f$ has an extremum at the origin; in the latter case all critical values of real critical points of all its small perturbations which belong to such a lacuna are positive $($if $f$ has a minimum point at the origin$)$ or negative $($if $f$ has a maximum$)$. \end{conjecture} \subsection{Explicit calculation of local Petrovskii classes} \label{expl} The group $\tilde H_{n-1}(V_\lambda, \partial V_\lambda)$ is Poincar\'e dual to $\tilde H_{n-1}(V_\lambda)$, therefore any of its elements is completely characterized by its intersection indices with the basic elements of the latter group. For these basic elements we can take the {\em vanishing cycles} (see e.g. \cite{AVG 84}, \cite{Gusein-Zade 77}, \cite{APLt}) corresponding to the critical points of $f_\lambda$.
In \cite{Leray 62}, \cite{Vassiliev 86} explicit formulas for these intersection indices of both local Petrovskii classes with cycles vanishing in real critical points were calculated: these indices are expressed in the terms of Morse indices of these critical points and the intersection indices of vanishing cycles. We do not give here these large formulas and refer to section V.1.6 in \cite{APLt} or \S 5.1.4 in \cite{AVGL 89}. In particular, these formulas describe the Petrovskii classes completely if all $\mu(f)$ critical points of the perturbation $f_\lambda$ are real, and we know their Morse indices and the intersection indices of corresponding vanishing cycles. \begin{remark} \rm In this and other related calculations it is important to use the orientations of these vanishing cycles, compatible with the fixed orientation of ${\mathbb R}^n$. Fortunately, the methods of calculating the intersection indices developed in \cite{Gusein-Zade 74}, \cite{A'Campo 75}, \cite{Ga1} use exactly these orientations. These methods give us the desired data for appropriate perturbations of all parabolic singularities except for $P_8^2$, in the latter case we solve the similar problem in \S \ref{DDP82}. \end{remark} \subsection{Stabilization} \begin{definition}[see \cite{AVG 82}] \rm Two functions $f, \tilde f:({\mathbb R}^n,0) \to ({\mathbb R},0)$ with critical points at $0$ are {\em equivalent} if they can be taken one into the other by a germ of diffeomorphism $G:({\mathbb R}^n,0)\to ({\mathbb R}^n,0)$, that is, $f \equiv \tilde f\circ G.$ Two critical points of functions (maybe depending on a different number of variables) are {\em stably equivalent} if they become equivalent after the summation with non-degenerate quadratic forms depending on additional variables. \end{definition} For example, the functions $f(x)=x^k$, $f_1(x,y)=x^k +y^2$, $f_2(x,y,z)=x^k -yz$ and $f_3(x,y)=-y^2+(x-y)^k$ are stably equivalent to one another, but they are not stably equivalent to the function $f_4(x,y)=x^k$. If $F(x,\lambda)$ is a deformation of the function $f(x)$, $x=(x_1, \dots, x_n)$, then the family of functions $F(x_1, \dots, x_n,\lambda)\pm x_{n+1}^2 \pm \dots \pm x^2_{n+m}$ depending on $n+m$ variables is a deformation of the stabilization $f(x) \pm x^2_{n+1} \pm \dots \pm x^2_{n+m}$ of $f(x)$; the latter deformation is versal if and only if $F(x,\lambda)$ is. \begin{proposition}[see e.g. \cite{APLt}] \label{stabil} The following conditions are equivalent: 1$)$ the perturbation $f_\lambda$ of the function $f:({\mathbb C}^n,{\mathbb R}^n,0)\to({\mathbb C},{\mathbb R},0)$ belongs to an even $($respectively, odd$)$ local lacuna; 2$)$ the perturbation $f_\lambda+x_{n+1}^2 +x_{n+2}^2$ of the function $f+x_{n+1}^2+x_{n+2}^2:({\mathbb C}^{n+2},{\mathbb R}^{n+2},0)\to({\mathbb C},{\mathbb R},0)$ belongs to an even $($respectively, odd$)$ local lacuna; 3$)$ the perturbation $f_\lambda-x_{n+1}^2 -x_{n+2}^2$ of the function $f-x_{n+1}^2-x_{n+2}^2:({\mathbb C}^{n+2},{\mathbb R}^{n+2},0)\to({\mathbb C},{\mathbb R},0)$ belongs to an even $($respectively, odd$)$ local lacuna; 4$)$ the perturbation $f_\lambda+x_{n+1}^2 -x_{n+2}^2$ of the function $f+x_{n+1}^2-x_{n+2}^2:({\mathbb C}^{n+2},{\mathbb R}^{n+2},0)\to({\mathbb C},{\mathbb R},0)$ belongs to an \underline{odd} $($respectively, \underline{even}$)$ local lacuna. 
\end{proposition} So, the summation with a positive or negative definite quadratic form in an even number of additional variables moves even (respectively, odd) local lacunas to the lacunas of the same type; the summation with a quadratic function of signature $(1,1)$ in two additional variables moves even lacunas to odd ones and vice versa. Therefore any stable equivalence class of functions splits into four subclasses, depending on the parities of $n$ and of an arbitrary (say, positive) inertia index of the quadratic part of the Taylor expansion of these functions. The sets of local lacunas (of the same parity) of versal deformations of functions from any of these four subclasses are in a one to one correspondence with each other. \begin{corollary} \label{extra} If the function $f(x_1, \dots, x_n)$ has a minimum $($respectively, maximum$)$ point at $0$, then the function $f(x_1, \dots, x_n) -x_{n+1}^2 +\tau$ $($res\-pec\-ti\-vely, $f(x_1, \dots, x_n) + x_{n+1}^2 -\tau)$ with sufficiently small $\tau>0$ belongs to an {\em odd} local lacuna. \end{corollary} Indeed, say in the first case the function $f -x_{n+1}^2+x_{n+2}^2-x_{n+3}^2 +\tau$ belongs to an even local lacuna by item 3) of Proposition \ref{stabil} and by \S \ref{examin}; it remains to use item 4) of the same proposition. $\Box$ \subsection{Multiplication by $-1$} \label{pm} It follows immediately from the definitions of Petrovskii classes that the perturbation $-f_\lambda$ of the function $-f$ belongs to a local lacuna if and only if the perturbation $f_\lambda$ of $f$ does. \subsection{Another form of the odd Petrovskii cycle} \label{another} It is easy to see that the cycle of Fig.~\ref{oddp} (left) is homological in ${\mathbb C}^1 \setminus V_\lambda$ (modulo the complement of $B_\varepsilon$) to the sum of small circles going around all non-real points of the set $V_\lambda$, as shown in the right-hand part of this figure. So the pre-image of its homology class under the Leray tube operator, that is, the odd Petrovskii class, is represented by the sum of these points taken with appropriate signs. The same construction allows us to realize this class in the case of an arbitrary odd $n$. Namely, let us choose a point ${\bf x} \in {\mathbb R}^n \setminus V_\lambda$. Let $S({\bf x}) \sim S^{n-1}$ be the space of all oriented affine lines in ${\mathbb R}^n$ through ${\bf x}$, $\phi: E({\bf x}) \to S({\bf x})$ the tautological line bundle, and $\phi_{{\mathbb C}}: E_{{\mathbb C}}({\bf x}) \to S({\bf x})$ its complexification, so that $E_{\mathbb C}$ is the union of pairs $(l,x) \in S({\bf x}) \times {\mathbb C}^n$ such that $x$ belongs to the complexification of the line $\{l\} \subset {\mathbb R}^n$. The manifold $E$ is obviously orientable and is separated into two parts by the section of the line bundle consisting of all points $(l,{\bf x})$. The forgetful map $\Psi:(l,x) \mapsto x$ sends any of these parts diffeomorphically to ${\mathbb R}^n \setminus {\bf x}$. In the case of odd $n$ the orientations of these parts induced from the fixed orientation of ${\mathbb R}^n$ belong to one and the same orientation of entire $E$.\footnote{In the case $n=1$, responsible for Fig.~\ref{oddp}, the role of the orientation of the base is played by the choice of (different) signs of two points of the 0-dimensional sphere $S({\bf x})$. So the canonical orientation of the line over the negative point should be reversed.} Extend the map $\Psi$ to the similar forgetful map $\Psi_{\mathbb C} : E_{\mathbb C} \to {\mathbb C}^n$. 
For any oriented line $l \in S({\bf x})$ the set $\Psi_{{\mathbb C}}^{-1}(V_\lambda) \cap \{l_{\mathbb C}\}$ is a finite set symmetric with respect to the real line $\{l\} \subset \{l_{\mathbb C}\}$. Define the relative cycle $\tilde P({\bf x}) \subset E_{{\mathbb C}} \cap \Psi_{{\mathbb C}}^{-1}(B_\varepsilon \setminus V_\lambda) $ as the union (over all points $l \in S({\bf x})$) of the real lines $\{l\}$ slightly moved inside their complexifications $\{l_{\mathbb C}\}$ close to all points of $\Psi^{-1}(V_\lambda)$ in such a way that they bypass these points on the left with respect to the canonical orientation of $\{l\}$. The homology class of the cycle $\Pi(\lambda)$ from the construction of the odd Petrovskii class can be realized as the direct image of this cycle $\tilde P({\bf x})$ under the map $\Psi_{{\mathbb C}}$. (In fact, $\tilde P({\bf x})$ is a kind of blow-up of the cycle $\Pi(\lambda)$ at the point ${\bf x}$.) For any $l$ the obtained 1-dimensional cycle in $\{l_{\mathbb C}\} \setminus \Psi_{\mathbb C}^{-1}(V_\lambda)$ is homologous (within the upper half-plane of $\{l_{\mathbb C}\}$, that is, the half-plane lying to the left of the oriented real line $\{l\}$, and modulo the intersection with $\Psi_{\mathbb C}^{-1}({\mathbb C}^n \setminus B_\varepsilon)$) to the union of small circles around all imaginary points of $\Psi_{\mathbb C}^{-1}(V_\lambda \cap B_\varepsilon)$ lying in this half-plane. These homologies can be performed uniformly over all $l$ and sweep out a homology between the cycle $\tilde P({\bf x})$ and a cycle which is the Leray tube around the union (over all $l \in S({\bf x})$) of all such imaginary points with positive imaginary parts in the fibers $\{l_{\mathbb C}\}$. Thus the odd Petrovskii class can be realized in the case of odd $n$ as the direct image under the map $\Psi_{\mathbb C}$ of the cycle formed by this union. This is essentially the original definition of the odd Petrovskii cycle, see \cite{Petrovskii 45}. Unlike the even cycle, it depends on the choice of the point ${\bf x}$, but its homology class does not. \section{An obstruction to the existence of local lacunas} \label{bound} Either of the two local Petrovskii classes related to a non-discriminant point $\lambda \in {\mathbb R}^l$ is an element of the relative homology group $\tilde H_{n-1}(V_\lambda, \partial V_\lambda)$ of the corresponding Milnor fiber $V_\lambda = f_\lambda^{-1}(0) \cap B_\varepsilon$. The boundary operator of the exact sequence of the pair $(V_\lambda, \partial V_\lambda)$ sends this class to some element of the group $\tilde H_{n-2}(\partial V_\lambda)$. For any deformation $F(x,\lambda)$ of the function $f(x)$, the spaces $\partial V_\lambda$ form a locally trivial (and hence trivializable) fiber bundle over a neighborhood of the origin in the parameter space of our deformation (including the discriminant values of $\lambda$). Therefore the homology groups $\tilde H_{n-2}(\partial V_\lambda)$ over all values of $\lambda$ are naturally identified with one another. It follows easily from the construction of the Petrovskii classes that the boundaries of all cycles $P_{\mbox{ev}}(\lambda)$ (respectively, $P_{\mbox{odd}}(\lambda)$) over all non-discriminant values of $\lambda$ are mapped into one another by this identification. Therefore if for some value $\lambda \in {\mathbb R}^l$ this boundary is not homologous to zero, then the same is true for all other values of $\lambda$; in particular, the local Petrovskii classes of the same parity are non-trivial for all $\lambda$, and there are no local lacunas for the corresponding singularity.
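This obstruction is easy to evaluate in practice: as explained in Remark \ref{rem1} below, in Poincar\'e dual bases the map $j$ of the exact sequence (\ref{exact}) is given by the intersection matrix of the vanishing cycles, so that $\tilde H_{n-1}(\partial V_\lambda)$ and $\tilde H_{n-2}(\partial V_\lambda)$ are the kernel and the cokernel of this matrix. The following short SymPy sketch (ours, purely illustrative, and relying on SymPy's \verb"smith_normal_form" helper) computes these two groups from a given intersection matrix; as a sanity check it is applied to the stabilized $A_2$ singularity $x^3+y^2+z^2$ rather than to a parabolic one.

\begin{verbatim}
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

def boundary_homology(S):
    # S: intersection matrix of the vanishing cycles, i.e. the map j
    # of the exact sequence written in Poincare dual bases.
    # Returns the rank of ker j (the free rank of H_{n-1} of the
    # boundary) and the cokernel of j (H_{n-2} of the boundary),
    # described by its free rank and its torsion factors.
    S = sp.Matrix(S)
    ker_rank = S.cols - S.rank()
    coker_free = S.rows - S.rank()
    D = smith_normal_form(S, domain=sp.ZZ)
    torsion = [abs(D[i, i]) for i in range(min(D.shape)) if abs(D[i, i]) > 1]
    return ker_rank, (coker_free, torsion)

# Stabilized A_2 singularity x^3 + y^2 + z^2 (n = 3): the intersection
# matrix is [[-2, 1], [1, -2]] (the off-diagonal sign depends on the
# orientation conventions and does not affect the result).  Expected
# answer: trivial kernel and cokernel Z/3, i.e. H_2 = 0 and H_1 = Z/3
# for the lens-space link of A_2.
print(boundary_homology([[-2, 1], [1, -2]]))
\end{verbatim}

Applied to the matrix (\ref{matp8}) with the values of $X, Y, Z, W$ found in \S \ref{DDP82}, the same computation should reproduce the group ${\mathbb Z}^2 \oplus {\mathbb Z}_3$ appearing in the proof below.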
All zeros in the cells of Table \ref{t13} (but not the signs $0_c$), except for the one in the last column of the row of $P_8^2$, follow from this obstruction and from the explicit calculation of the Petrovskii classes mentioned in \S \ref{expl}. Conversely, in the case of $P_8^2$ the calculation of the intersection indices of vanishing cycles in \S \ref{DDP82} will be based on the computation of this boundary, which is carried out in \S \ref{obstrp8}. \begin{remark} \rm \label{rem1} The group $\tilde H_*(\partial V_\lambda)$ can be non-trivial only in dimensions $n-1$ and $n-2$, and its structure can be obtained from the intersection form in $\tilde H_{n-1}(V_\lambda)$. Indeed, by Milnor's theorem the only non-trivial segment of the exact sequence of the pair $(V_\lambda, \partial V_\lambda)$ is \begin{equation} \label{exact} 0 \to \tilde H_{n-1}(\partial V_\lambda) \to \tilde H_{n-1}(V_\lambda) \stackrel{j}{\to} \tilde H_{n-1}(V_\lambda, \partial V_\lambda) \to \tilde H_{n-2}(\partial V_\lambda) \to 0. \end{equation} If we fix Poincar\'e dual bases in the two central groups of this sequence (which are isomorphic to ${\mathbb Z}^{\mu(f)}$), then the homomorphism $j$ is given by the intersection matrix of the basis elements of $\tilde H_{n-1}(V_\lambda)$. This matrix completely determines both marginal groups $\tilde H_i(\partial V_\lambda)$. \end{remark} \subsection{Boundary of the even Petrovskii class for $P_8$ singularities} \label{obstrp8} \begin{proposition} For any singularity of the class $P_8^1$ or $P_8^2$, represented by a homogeneous function $f(x,y,z)$ of degree 3, and for any of its non-discriminant perturbations $f_\lambda$, the boundary of the {\em even} local Petrovskii class is non-trivial in $\tilde H_1(\partial V_\lambda)$. \end{proposition} {\it Proof.} The isomorphism class of the intersection form of the singularities $P_8$ is well known, see e.g. \cite{Ga1}. The group $\tilde H_1(\partial V_\lambda)$ for any non-discriminant (and hence for any) perturbation $f_\lambda$ of this function $f$ can be easily calculated from (\ref{exact}) and is equal to ${\mathbb Z}^2 \oplus {\mathbb Z}_3$. We can assume that the coordinates $x,y,z$ are chosen so that $f$ takes the Newton--Weierstrass normal form $x^3+axz^2 + bz^3 - y^2z$. The Hopf bundle projection $S^5 \to {\mathbb {CP}}^2$ maps $\partial V_0$ to the elliptic curve $\{f=0\}$. Considering the submanifolds $f^{-1}(\tau) \cap B_\varepsilon \cap {\mathbb R}^n$, $0< \tau \ll \varepsilon$, which realize the classes $P_{\mbox{ev}}(f-\tau)$, and letting $\tau$ tend to 0, we see that the class of $\partial P_{\mbox{ev}}(0)$ is mapped by this Hopf projection to twice the class of the real part of this elliptic curve, oriented as the boundary of the part of the affine chart $\{z=1\}$ in ${\mathbb {RP}}^2$ in which the function $f$ takes negative values. The latter class is never homologous to zero in the elliptic curve. (In the case of $P_8^2$, when this real part consists of two components, a certain sum of these components is homologous to zero, but the orientations of the components in that sum have to be matched in a different way.) $\Box$ \subsection{Boundary of $P_{\mbox{ev}}$ for critical points of functions of two variables} \label{obstr2dim} Denote by $D_\varepsilon$ the real part $B_\varepsilon \cap {\mathbb R}^n$ of the ball $B_\varepsilon$. The real zero set of a function $f(x_1,x_2)$ with a critical point at $0$ consists of several irreducible curves passing through $0$. Any such curve intersects the circle $\partial D_\varepsilon$ at two points.
The corresponding {\em chord diagram} is the graph consisting of the circle $\partial D_\varepsilon$ and all its chords connecting the two endpoints of each such curve. \begin{proposition} The boundary $\partial P_{\mbox{ev}} \in \tilde H_0(\partial V_\lambda)$ is trivial for some $($and then for all$)$ non-discriminant perturbations $f_\lambda$ of $f$ if and only if every chord of this chord diagram intersects an even number of other chords. \end{proposition} {\em Proof.} Let us take the function $f-\tau$, $0<\tau \ll \varepsilon$, for the perturbation $f_\lambda$, so that its Milnor fiber is the set $f^{-1}(\tau) \cap B_\varepsilon$. The geometric boundary of the set of real points of this fiber is in the obvious one-to-one correspondence with the set of endpoints of chords of the chord diagram. The points of this boundary belong to one and the same component of the manifold $\partial V_\lambda$ if and only if they correspond to the endpoints of one and the same chord. It is easy to calculate that two points corresponding to the endpoints of some chord enter the homological boundary of the even Petrovskii cycle with one and the same sign if and only if they are separated by an odd number of other endpoints in the circle $\partial D_\varepsilon$, that is, if and only if this chord intersects an odd number of other chords. $\Box$ \section{Invariants of components of the complement of the real discriminant} \label{maininv} Let us choose a number $\Delta>0$ small enough that all varieties $f^{-1}(t)$, $t \in [-\Delta, \Delta]$, are transversal to $\partial D_\varepsilon$. Let $\Lambda \subset {\mathbb R}^l$ be a very small neighborhood of the origin in the space of parameters $\lambda$ such that the same transversality condition is satisfied not only for $f$ but also for all functions $f_\lambda$, $\lambda \in \Lambda$, and, in addition, all real critical values attained by these functions $f_\lambda$ in $D_\varepsilon$ belong to the interval $(-\Delta, \Delta)$. Denote by $M_-(\lambda), M_0(\lambda)$ and $M_+(\lambda)$ the sublevel sets $f_\lambda^{-1}((-\infty,-\Delta]) \cap D_\varepsilon$, $f_\lambda^{-1}((-\infty,0]) \cap D_\varepsilon$, and $f_\lambda^{-1}((-\infty,\Delta]) \cap D_\varepsilon$, respectively. The diagrams of spaces \begin{equation} \label{diag1} \begin{array}{ccccc} M_-(\lambda) & \subset & M_+(\lambda) & \subset & D_\varepsilon \\ \cup & & \cup & & \cup \\ M_-(\lambda) \cap \partial D_\varepsilon& \subset & M_+(\lambda) \cap \partial D_\varepsilon & \subset & \partial D_\varepsilon \end{array} \end{equation} form a locally trivial (and hence trivializable) fiber bundle over the neighborhood $\Lambda$. Therefore we can fix a family (depending continuously on $\lambda$) of homeomorphisms of all of them onto one and the same diagram corresponding to some distinguished value $\lambda_0$ of $\lambda$, say $\lambda_0=0$. The spaces $M_-(\lambda_0)$ and $M_+(\lambda_0)$ for this distinguished value will be denoted simply by $M_-$ and $M_+$.
Given an arbitrary $\lambda$, composing the embedding $M_0(\lambda) \to D_\varepsilon$ with this unifying homeomorphism, we obtain the diagram of spaces \begin{equation} \label{diag12} \begin{array}{ccccccc} M_- & \subset & M_0(\lambda) & \subset & M_+ & \subset & D_\varepsilon \\ \cup & & \cup & & \cup & & \cup \\ M_- \cap \partial D_\varepsilon & \subset & M_0(\lambda) \cap \partial D_\varepsilon & \subset & M_+ \cap \partial D_\varepsilon & \subset & \partial D_\varepsilon \end{array} \end{equation} \begin{proposition} \label{homotinv} If two points $\lambda, \lambda' \in \Lambda$ belong to one and the same connected component of the set of non-discriminant perturbations of $f$, then the corresponding diagrams $($\ref{diag12}$)$ are isotopic to one another via an isotopy of the pair $(D_\varepsilon, \partial D_\varepsilon)$ constant on $M_-$ and on $D_\varepsilon \setminus M_+$. $\Box$ \end{proposition} In particular, all homological invariants of isotopy classes of such diagrams are also invariants of the components of the complement of the discriminant, and we obtain the following statement. \begin{proposition} \label{homolinv} The following objects are the same for all $\lambda$ from one and the same component of the complement of the discriminant: a$)$ the isomorphism classes of the groups $H_*(M_0(\lambda))$, $H_*(M_0(\lambda), \partial D_\varepsilon)$, $H_*(M_0(\lambda), M_-(\lambda))$, $H_*(M_0(\lambda),(M_- \cup \partial D_\varepsilon))$, $H_*(M_+,M_0(\lambda))$, and $H_*(M_+,(M_0(\lambda)\cup \partial D_\varepsilon))$; b$)$ the images of the boundary operators $\partial: H_*(M_0(\lambda),M_-) \to H_*(M_-)$ and $\partial: H_*(M_0(\lambda), M_- \cup \partial D_\varepsilon) \to H_*(M_- \cup \partial D_\varepsilon)$, c$)$ the kernels of the maps induced by inclusions, $H_*(M_-) \to H_*(M_0(\lambda))$, $H_*(M_- \cup \partial D_\varepsilon) \to H_*(M_0(\lambda) \cup \partial D_\varepsilon)$, $H_*(M_+, M_-) \to H_*(M_+, M_0(\lambda))$, etc. $\Box$ \end{proposition} The invariant $\chi(\lambda)$ used in Proposition \ref{chi} is just the Euler characteristic of the third group mentioned in item (a) of Proposition \ref{homolinv}. \section{Realization of local lacunas promised in Theorem \ref{thm1}} \label{realiz} \subsection{Classes $P_8^1$ and $P_8^2$} \label{realp8} \begin{proposition} If $f(x_1, x_2, x_3)$ is a non-degenerate homogeneous polynomial of degree 3 $($so that it belongs to one of the classes $P_8^1$ or $P_8^2)$, then the polynomials $f_{\pm \tau} \equiv f \pm (\tau (x_1^2+x_2^2+x_3^2) - \tau^3)$ with sufficiently small $\tau>0$ belong to odd local lacunas. Moreover, the perturbations $f_\tau$ and $f_{-\tau}$ belong to {\em different} odd local lacunas. \end{proposition} {\it Proof.} The odd Petrovskii cycle of $f_{\pm \tau}$, realized as in \S \ref{another} with the central point ${\bf x}$ at the origin, is empty. Indeed, any complex line through $0$ which is the complexification of a real line intersects $V_{\pm \tau}$ in at least two real points. The set of non-real intersection points is invariant under complex conjugation and consists of at most one point, since the degree of $f_{\pm \tau}$ is equal to 3; hence it is empty. The perturbations $f_\tau$ and $f_{-\tau}$ are separated by an invariant from Proposition \ref{homolinv}(a). Namely, the relative homology group $H_*(M_0(\tau), (M_- \cup \partial D_\varepsilon))$ is isomorphic to the homology group of a single point, and the group $H_*(M_0(-\tau), (M_- \cup \partial D_\varepsilon))$ is isomorphic to $H_*(S^2,\mbox{pt})$.
$\Box$ \subsection{The class $\pm X_9$} \label{lex} We will consider the singularity class $+X_9$ only, since the class $-X_9$ can be reduced to it, see \S \ref{pm}. The local lacuna for a singularity of the class $+X_9$ asserted in the second column of Table \ref{t13} is described in \S \ref{examin}. One of the two lacunas asserted in the fourth column is described in Corollary \ref{extra} on page \pageref{extra} and is represented by the function $\varphi(x_1, x_2) - x_3^2+\tau$, where $\tau>0$ is small enough and $\varphi$ has a minimum point at $0$. \begin{proposition} \label{otherx9} If the function $\varphi(x_1,x_2)$ of the class $+X_9$ is a non-negative homogeneous polynomial of degree 4, then the function $f_\tau \equiv \varphi(x_1,x_2)-\tau(x_1^2+x_2^2)-x_3^2+ \tau^3$ with sufficiently small $\tau>0$ belongs to an odd local lacuna of the function $\varphi(x_1,x_2)-x_3^2$. This lacuna is different from the one indicated in the previous paragraph. \end{proposition} {\it Proof.} By the Morse lemma, slightly changing the local coordinate $x_3$ (which certainly does not change the values of the Petrovskii classes), we can replace the function $-x_3^2$ in one variable by $-x_3^2+x_3^4$, and hence the function $f_\tau$ by $f_\tau+x_3^4$. The corresponding odd Petrovskii cycle, described in \S \ref{another} for ${\bf x}=0$, is empty, since the complexification of any real line through $0$ intersects the zero set of the function $f_\tau +x_3^4$ in four real points (counted with multiplicities), so that there are no imaginary intersection points. The last statement of the proposition follows immediately from Proposition \ref{homolinv}. $\Box$ \subsection{Remaining lacunas for corank 2 parabolic singularities} By Proposition \ref{stabil}, all remaining local lacunas asserted in the non-zero cells of Table \ref{t13} can be considered as odd local lacunas of some functions in two variables. We realize these lacunas in the following way. As in \cite{Gusein-Zade 74}, \cite{A'Campo 75}, we exhibit a perturbation $f_\lambda(x_1,x_2)$ of the corresponding function $f$, all of whose $\mu(f)$ critical points are real, whose critical values at the saddle points are equal to 0, and whose critical values at minima (respectively, maxima) are negative (respectively, positive). Using a further very small perturbation of $f_\lambda$, we can obtain a function $f_{\tilde \lambda}$ arbitrarily close to $f_\lambda$ but with the critical values at all saddle points moved from 0 to arbitrary prescribed sides; in particular, $f_{\tilde \lambda}$ is non-discriminant. In Figures \ref{X92}, \ref{j103} we draw the zero sets of the preliminary perturbations $f_\lambda$ and indicate by black (respectively, white) circles the saddle points whose critical values should be moved in the negative (respectively, positive) direction from 0.
\begin{figure} \caption{Lacunas for $X_9^2$} \label{X92} \end{figure} \begin{figure} \caption{Lacuna for $J_{10}^3$} \label{j103} \end{figure} \begin{proposition} If a function $f(x_1,x_2)$ of the class $X_9^2$ is represented by a homogeneous polynomial of degree 4 vanishing on four different real lines, then 1$)$ it has a perturbation $f_\lambda$ whose zero set is as shown in either part of Fig.~\ref{X92}; 2$)$ the further non-discriminant perturbations $f_{\tilde \lambda}(x_1,x_2)$ indicated in these pictures by black and white circles belong to odd local lacunas of $f$; 3$)$ these two local lacunas are different; 4$)$ rotating both pictures of Fig.~\ref{X92} by the angle $\pi/2$, we obtain the pictures of two other perturbations of $f$, which belong to two additional odd local lacunas different from the previous two. \end{proposition} {\it Proof.} Statement 1 is obvious, statement 2 follows from the calculation of odd Petrovskii cycles mentioned in \S \ref{expl}, and statements 3 and 4 follow from Proposition \ref{homolinv}: indeed, the images of the boundary operators $H_1(M_0(\lambda),M_-) \to H_0(M_-) \simeq {\mathbb Z}^4$ in these four cases are four different subgroups of the latter group. $\Box$ \begin{proposition} If the function $f(x_1,x_2)$ belongs to the class $J_{10}^3$, then 1$)$ it has a perturbation $f_\lambda$ whose zero set is homeomorphic to the one shown in Fig.~\ref{j103}; 2$)$ the further non-discriminant perturbation $f_{\tilde \lambda}(x_1,x_2)$ indicated in this picture by black and white circles belongs to an odd local lacuna of $f$. \end{proposition} {\it Proof.} This proposition follows immediately from the normal form of critical points of type $J_{10}^3$ and from the calculation of odd Petrovskii cycles discussed in \S \ref{expl}. $\Box$ \section{A program counting topologically different morsifications of critical points of real functions} \label{progr} This program has two versions: one for singularities of corank $\leq 2$ (see \\ \verb"https://www.hse.ru/mirror/pubs/share/185895886"; currently it contains the starting data for the singularity class $J_{10}^3$), and the other one for singularities of arbitrary corank (\verb"https://www.hse.ru/mirror/pubs/share/185895827", currently with the initial data of $P_8^1$). Further versions of the program will appear at the bottom of the page \verb"https://www.hse.ru/en/org/persons/1297545#sci". For a description of the program see \S V.8 of the book \cite{APLt}; note, however, that the web reference given there leads to an obsolete version of the program. The starting data for the program are the topological characteristics of some morsification $f_\lambda$ of $f$, all of whose critical points are real and whose critical values are pairwise distinct and different from $0$. Namely, these data include the Morse indices of all critical points, ordered by increasing critical values (in the program for corank $\leq 2$ singularities), or just the parities of these indices (in the program for the general case), and the intersection indices of the corresponding vanishing cycles in $\tilde H_{n-1}(V_\lambda)$, defined by a canonical system of paths and having canonical orientations compatible with the orientation of ${\mathbb R}^n$. One additional element of the data is the number of negative critical values of $f_\lambda$. This information is sufficient to calculate both Petrovskii classes of the morsification $f_\lambda$ and of all its stabilizations.
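To fix the format of these data, here is a minimal Python sketch (purely illustrative: the names and the layout are ours and do not reproduce the actual code of the program) of a record storing such starting data, together with the most obvious consistency checks.

\begin{verbatim}
from dataclasses import dataclass
from typing import List

@dataclass
class StartingData:
    # Hypothetical container for the starting data described above.
    n: int                         # number of variables
    morse_indices: List[int]       # Morse indices (or just their parities),
                                   # ordered by increasing critical values
    intersection: List[List[int]]  # intersection indices of the
                                   # corresponding vanishing cycles
    negative_values: int           # number of negative critical values

    def check(self) -> None:
        mu = len(self.morse_indices)
        S = self.intersection
        assert len(S) == mu and all(len(row) == mu for row in S)
        # the intersection form is symmetric for even n-1
        # and antisymmetric (with zero diagonal) for odd n-1
        sign = 1 if (self.n - 1) % 2 == 0 else -1
        assert all(S[i][j] == sign * S[j][i]
                   for i in range(mu) for j in range(mu))
        assert 0 <= self.negative_values <= mu
\end{verbatim}

The surgeries described in the next paragraph can then be viewed as operations transforming one such record into another.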
Our program models (at the level of such sets of topological data) all potentially possible topological surgeries of the initial morsifications, namely jumps of critical values through $0$, collisions of real critical values (which can either bypass one another or undergo a Morse surgery and move into the imaginary domain), the opposite operations (that is, collisions of two complex conjugate critical values at a real point), and also rotations of imaginary critical values around one another. Knowing our topological data before any of these surgeries is enough to predict the analogous data after it. In general, it is not guaranteed that every sequence of such operations on the sets of topological invariants can actually be realized by a path in the parameter space of the deformation, so we regard their results as {\em virtual morsifications}, that is, admissible collections of our topological data, including the Petrovskii classes. However, any actual morsification is represented by a virtual one, which will certainly be found by our algorithm (given enough memory and time). In particular, if the program has enumerated all possible virtual morsifications of a singularity class and found that their Petrovskii classes never vanish, then we can put the sign $0_c$ in the corresponding cell of Table \ref{t13}. On the other hand, the majority of the real local lacunas described in \S \ref{realiz} were discovered by this program. More precisely, it found candidate virtual morsifications with vanishing Petrovskii classes and printed out their topological data; after that, in all our cases it was easy to find by hand actual morsifications with these data. The numbers of topologically distinct virtual morsifications found by our program are equal to 6503 for $P_8^1$, 9174 for $P_8^2$, 16928 for $\pm X_9$, 96960 for $X_9^2$, 549797 for $J_{10}^1$, and 77380 for $J_{10}^3$. \section{Starting data for the singularity $P_8^2$} \label{DDP82} The initial data of our program (that is, convenient morsifications with only real critical points, and the intersection indices of their vanishing cycles) for all real parabolic singularities except $P_8^2$ can be easily calculated by the methods of \cite{Gusein-Zade 74}, \cite{A'Campo 75} (for critical points whose quadratic part has corank $\leq 2$) or \cite{Ga1} (for the class $P_8^1$, which has the convenient representative $x^3+y^3+z^3$). It is important for our algorithm that the orientations of these vanishing cycles, which define the signs of the intersection indices, be compatible with the fixed orientation of ${\mathbb R}^n$; fortunately, all these methods satisfy this condition. In this section we solve the analogous problem for the remaining case $P_8^2$. \begin{figure} \caption{Convenient morsification for $E_6$} \label{sabe6} \end{figure} We choose a function in this class whose Newton--Weierstrass normal form is $f = x^3-x z^2 + y^2z$. Consider its small perturbation $f_1=f+ \varepsilon z^2$, $\varepsilon>0$; by a dilation of the coordinates and of the function we can assume that $\varepsilon=1$. The function $f_1$ has a critical point of type $E_6$ at the origin. By a small local change of coordinates at the origin (with unit linear part) this function can be reduced to the form $\tilde x^3 - \tilde y^4 + \tilde z^2$. In addition, $f_1$ has two real Morse critical points $(1,0,-\sqrt{3})$ and $(1,0,\sqrt{3})$ with common critical value $1$; their Morse indices are equal to 2 and 1, respectively.
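These critical points and their Morse data are easy to verify directly; the following SymPy sketch (ours, included only as a check) finds all critical points of $f_1$ for the normalization $\varepsilon=1$, together with their critical values, the numbers of negative Hessian eigenvalues, and the degeneracy of the Hessian, which singles out the $E_6$ point at the origin.

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f1 = x**3 - x*z**2 + y**2*z + z**2        # f + z^2 with epsilon = 1

grad = [sp.diff(f1, v) for v in (x, y, z)]
H = sp.hessian(f1, (x, y, z))

for p in sp.solve(grad, [x, y, z], dict=True):
    Hp = H.subs(p)
    index = sum(m for ev, m in Hp.eigenvals().items() if ev.evalf() < 0)
    degenerate = sp.simplify(Hp.det()) == 0   # True only at the E_6 point
    print(p, sp.simplify(f1.subs(p)), index, degenerate)
\end{verbatim}

Its output should consist of the degenerate point at the origin and the two Morse points listed above, both with critical value $1$ and with Morse indices $2$ and $1$.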
As was shown in \cite{Gusein-Zade 74}, Example 3, we can slightly perturb our function $f_1$ so that its $E_6$-type critical point splits into six real Morse critical points, and the new function $f_2$ has the form $\varphi(\tilde x,\tilde y) + \tilde z^2$ in the corresponding neighborhood of the origin, where the zero level set of $\varphi$ in ${\mathbb R}^2$ looks as shown in Fig.~\ref{sabe6}. The crossing points of this set correspond to the Morse critical points of $f_2$ with critical value 0 and Morse index 1, and each of the three bounded domains contains a point with a slightly greater critical value and Morse index 2. Let $f_3$ be an additional very small perturbation of the Morse function $f_2$ making it strictly Morse, that is, separating all 8 critical values. We can do this in such a way that the order of these values in ${\mathbb R}^1$ is as indicated by the numbers in Fig.~\ref{sabe6}. Choose a real value $A$ greater than all six critical values obtained from the perturbation of the $E_6$-type critical point but smaller than the critical values at the two Morse critical points obtained from the two points with critical value 1. Define the basis of vanishing cycles in $H_2(f_3^{-1}(A))$ by the system of paths connecting $A$ to all eight critical values within the upper half-plane of ${\mathbb C}^1$. Number the first six basis cycles $\Delta_i$ in the order of the corresponding critical values, see Fig.~\ref{sabe6}; let the 7th and the 8th cycles be the ones arising from the critical points with value $\approx 1$ and Morse indices 2 and 1, respectively. Orient all these cycles in correspondence with the fixed orientation of ${\mathbb R}^n$ (see page 177 in \cite{APLt}, especially formula (V.6) there). Our purpose is to calculate the matrix of intersection indices of these eight cycles: this will give us a set of initial data for the perturbation $f_3-A$ of the initial function $f$. This matrix is symmetric since $n-1$ is even. \begin{equation} \left( \begin{array}{cccccccc} -2 \ & 0 & 0 & 1 & 0 & 1 & X & -\frac{Z+W}{2} \\ 0 & -2 \ & 0 & 0 & 1 & 1 & X & -\frac{Z+W}{2} \\ 0 & 0 & -2 \ & 0 & 0 & 1 & Y & -\frac{W}{2} \\ 1 & 0 & 0 & -2 \ & 0 & 0 & 0 & Z \\ 0 & 1 & 0 & 0 & -2 \ & 0 & 0 & Z \\ 1 & 1 & 1 & 0 & 0 & -2 \ & 0 & W \\ X & X & Y & 0 & 0 & 0 & -2 \ & 0 \\ -\frac{Z+W}{2} & -\frac{Z+W}{2} & -\frac{W}{2} & Z & Z & W & 0 & -2 \ \end{array} \right) \label{matp8} \end{equation} \begin{lemma} The desired intersection matrix has the form $($\ref{matp8}$)$ for some values of $X, Y, Z$ and $W$. \end{lemma} {\it Proof.} The intersection indices of the first six cycles can be calculated by the method of \cite{Gusein-Zade 74}, \cite{A'Campo 75} and are as shown in the upper left-hand $6 \times 6$ corner of the matrix (\ref{matp8}). The intersection index $\langle \Delta_7, \Delta_8\rangle$ is equal to 0 because these cycles arise from distant critical points with almost coinciding critical values. The cycles $\Delta_4, \Delta_5$ and $\Delta_6$ are invariant under the complex conjugation in $f_3^{-1}(A)$; therefore their intersection indices with $\Delta_7$ (which is anti-invariant) are equal to 0, as indicated in the 7th row of (\ref{matp8}). Also, the perturbation $f_1$ of the original function $f$ is invariant under the reflection in the hyperplane $y=0$, and its further perturbation $f_2$ can be chosen so as to keep this symmetry.
This reflection preserves the basis cycles in $f_2^{-1}(A)$ which are close to the cycles $\Delta_7$ and $\Delta_8$, only changing their canonical orientations; on the other hand, it permutes the cycles close to $\Delta_1$ and $-\Delta_2$, and also the cycles close to $\Delta_4$ and $-\Delta_5$. Therefore $\langle \Delta_1,\Delta_7\rangle =\langle-\Delta_2,-\Delta_7\rangle \equiv \langle\Delta_2,\Delta_7\rangle$ and $\langle \Delta_4,\Delta_8\rangle =\langle\Delta_5,\Delta_8\rangle$. This is why the corresponding cells of the matrix (\ref{matp8}) are filled with equal letters ($X$ in the first case and $Z$ in the second). For any $i=1,2,3$ denote by $\bar \Delta_i$ the vanishing cycle in $f_3^{-1}(A)$ obtained from the $i$th critical point by the path connecting the corresponding critical value with $A$ in the {\em lower} half-plane of ${\mathbb C}^1$. It is easy to see that the cycle $\Delta_i+ \bar \Delta_i$ is anti-invariant under the complex conjugation while $\Delta_8$ is invariant; therefore $\langle \Delta_i + \bar \Delta_i, \Delta_8\rangle =0$. But $\bar \Delta_i$ can be considered as the image of $\Delta_i$ under the Picard--Lefschetz monodromy operator along a loop $L$ starting and ending at the point $A$ and encircling all critical values lying between the $i$th one and the point $A$. This image is equal to $\Delta_i + \mbox{Var}_L(\Delta_i)$; therefore the previous equation gives us $-2\langle \Delta_i,\Delta_8\rangle = \langle \mbox{Var}_L(\Delta_i),\Delta_8 \rangle$. By the Picard--Lefschetz formula the last number is equal to $\sum_{j=4}^6 \langle \Delta_i,\Delta_j\rangle \langle \Delta_j, \Delta_8 \rangle$, which expresses the intersection indices $\langle \Delta_i,\Delta_8\rangle$ through the numbers $Z\equiv\langle\Delta_{4},\Delta_8\rangle = \langle\Delta_{5},\Delta_8\rangle$ and $W\equiv\langle\Delta_6,\Delta_8\rangle$: for instance, for $i=1$ this gives $-2\langle \Delta_1,\Delta_8\rangle = \langle \Delta_1,\Delta_4\rangle Z + \langle \Delta_1,\Delta_5\rangle Z + \langle \Delta_1,\Delta_6\rangle W = Z+W$, that is, $\langle \Delta_1,\Delta_8\rangle = -\frac{Z+W}{2}$; see the first three cells of the last row of (\ref{matp8}). $\Box$ It remains to calculate the numbers $X, Y, Z$ and $W$ in this matrix. We know that it is the matrix of the bilinear form of type $P_8$ in some basis of ${\mathbb Z}^8$. This form is well known, see e.g. \cite{Ga1}. In particular, it is easy to check that the image of this lattice in the dual lattice under the map defined by this bilinear form coincides with the image of an arbitrary $E_6$-sublattice. Therefore the last two rows of the desired matrix are integer linear combinations of the first six ones. Writing these rows in the form of such linear combinations with indeterminate coefficients, we get 16 equations in 16 unknowns $a_1, \dots, a_6, b_1, \dots, b_6, X, Y, Z, W$. Two of these equations are consequences of the others, but the Diophantine system of the remaining 14 equations is easily solvable and has exactly four different integer solutions, which yield four possible combinations of the coefficients of the matrix (\ref{matp8}): $X=0$, $Y=\pm 1$, $Z=0$, $W = \pm 2$. Let us select the correct version. Both local Petrovskii classes of a morsification, all of whose critical points are real, can be calculated explicitly from the Morse indices of these critical points and the intersection indices of all (properly oriented) vanishing cycles, see \S \ref{expl}. Substituting all four hypothetical combinations of intersection indices into these calculations, in three cases we obtain a contradiction with the previously established results on these classes for $P_8^2$ singularities, which say that a) in the case of odd $n$ and even index $i_+$ (e.g.
for $n=3$) these singularities have local lacunas, and therefore the homological boundary of the odd Petrovskii class is equal to 0 for all non-discriminant perturbations of $f$, see \S \ref{realp8}; b) the analogous homological boundary of the even Petrovskii class is not equal to 0 for the same values of $n$ and $i_+$, see \S \ref{obstrp8}. The unique remaining case gives us $Y=1$, $W=-2$, so the intersection matrix is completely determined. Plugging it into the initial data of our program, we obtain the answer that no local lacunas exist close to the $P_8^2$ singularities in the case of even $n$, for both even and odd $i_+$. This proves the first two zeros in Table \ref{t13} for $P_8^2$. \section{No extra lacunas for $\pm X_9$} \label{noextra} The sign $1_{c}$ in the second column of Table \ref{t13} for $\pm X_9$ (that is, the fact that this singularity has no local lacunas in addition to the one mentioned in \S \ref{lex}) is proved by our program with the help of the following fact. \begin{proposition} If the function $f$ has a minimum point at the origin, then all its sufficiently small perturbations $f_\lambda$, such that all real critical values of $f_\lambda$ are positive, belong to one and the same component of the complement of the discriminant of an arbitrary versal deformation of $f$. \end{proposition} {\it Proof.} The described property of functions is preserved under equivalences of deformations and under inducing one deformation from another; therefore it is enough to prove the proposition for a single versal deformation of the function $f$. In particular, we can assume that this deformation contains, together with any perturbation $f_\lambda$ of $f$, also all perturbations $f_\lambda + c$, where the constant $c$ runs over some interval containing 0. Choose a small value $c>0$ in this interval; then there is a number $\delta>0$ such that the $\delta$-neighborhood of the point $\{f+c\}$ in the space ${\mathbb R}^l$ of parameters of this deformation is separated from the discriminant. For any point $\lambda$ from the $\delta$-neighborhood of the origin in ${\mathbb R}^l$ such that $f_\lambda$ satisfies the condition of our proposition, the entire segment consisting of the functions $f_\lambda+\tau$, $\tau \in [0,c]$, belongs to the complement of the discriminant, and its last point $f_\lambda+c$ belongs to the $\delta$-neighborhood of the point $f+c$ mentioned above. $\Box$ Therefore we have asked our program to check that functions of type $+X_9$ have no virtual morsifications with trivial even Petrovskii class and at least one negative critical value. Its affirmative answer justifies the sign $1_c$ in the cell in question. \end{document}